\section{Measuring work}
\label{Measuring work}
Consider a closed system, described by the Hilbert space $\mathcal{H}_S$, that does not interact with its surroundings, so that no heat can be added to the system. In accordance with the first law, the work done on the system by a time-dependent Hamiltonian $H_S(t)$ between an initial time $t=t_i$ and final time $t=t_f$ is equal to the change in its internal energy
\begin{align}
W = E_n(t_f) - E_m(t_i), \label{work}
\end{align}
where $E_n(t)$ is an eigenvalue of the system Hamiltonian associated with the eigenvector $\ket{E_n(t)}$ at time $t$, that is, $H_S(t) \ket{E_n(t)} = E_n(t) \ket{E_n(t)}$. For simplicity, we have assumed that the spectrum of $H_S(t)$ is non-degenerate and discrete (labeled by the index $n$); however, the results that follow are expected to generalize straightforwardly.
\subsection{Two-point measurement scheme}
\label{twopoint}
One of the most common operational definitions of work is the so-called two-point measurement scheme~\cite{campisiColloquiumQuantumFluctuation2011,talknerFluctuationTheoremsWork2007}. Suppose the system is prepared in the state $\rho_S(t_i) \in \mathcal{S}\left(\mathcal{H}_S\right)$. A projective measurement of the system's energy is made at $t=t_i$ yielding the outcome $E_m(t_i)$. The system then evolves from $t_i$ to $t_f$ as described by the unitary $U_S(t_f)$ generated by $H_S(t)$. Then, the system energy is measured again yielding the outcome $E_n(t_f)$. From the outcomes of these two measurements the work performed in this particular realization of the protocol is given by Eq.~\eqref{work}. The outcomes of these energy measurements are probabilistic and thus so too is the amount of work $W$ done on the system. The probability associated with an amount of work $W$ is
\begin{align}
\mathcal{P}(W) = \sum_{m,n} \mathcal{P}_m \mathcal{P}_{m \to n} \delta \big(W - [E_n(t_f)- E_m(t_i)] \big),
\label{TMP}
\end{align}
where $\delta$ is the Dirac delta function, $\mathcal{P}_m \ce \braket{E_m(t_i) | \rho_S(t_i) | E_m(t_i)}$ is the probability of outcome $m$ in the first measurement, and $\mathcal{P}_{m \to n} \ce \abs{\braket{E_n(t_f) | U_S(t_f) | E_m(t_i)}}^2$ is the probability of outcome $n$ in the second measurement conditioned on outcome $m$ in the first measurement; see Ref.~\cite{dechiaraAncillaAssistedMeasurementQuantum2018} for a recent discussion.
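As a concrete illustration of Eq.~\eqref{TMP}, the following sketch computes the two-point measurement statistics for a qubit. The Hamiltonians, the unitary standing in for $U_S(t_f)$, and the initial state are arbitrary choices made for the example, not quantities taken from the text.

```python
import numpy as np

# Two-point measurement (TPM) statistics for a qubit (illustrative example).
Hi = np.diag([-1.0, 1.0])                                        # H_S(t_i)
Hf = np.array([[-2.0, 0.5], [0.5, 2.0]])                         # H_S(t_f)

Ei, Vi = np.linalg.eigh(Hi)   # eigenvalues E_m(t_i) and eigenvectors
Ef, Vf = np.linalg.eigh(Hf)   # eigenvalues E_n(t_f) and eigenvectors

# A unitary standing in for the time-ordered evolution U_S(t_f).
th = 0.3
U = np.array([[np.cos(th), -np.sin(th)],
              [np.sin(th),  np.cos(th)]])

rho = np.array([[0.7, 0.2], [0.2, 0.3]])                         # rho_S(t_i)

# P_m: first-measurement probabilities; P_{m->n}: conditional probabilities.
works, probs = [], []
for m in range(2):
    Pm = np.real(Vi[:, m].conj() @ rho @ Vi[:, m])
    for n in range(2):
        Pmn = np.abs(Vf[:, n].conj() @ U @ Vi[:, m]) ** 2
        works.append(Ef[n] - Ei[m])          # W = E_n(t_f) - E_m(t_i)
        probs.append(Pm * Pmn)

print(sum(probs))                                     # total probability
print(sum(p * w for p, w in zip(probs, works)))       # average TPM work
```

The four $(m,n)$ branches exhaust the outcome pairs, so the probabilities sum to one by construction.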
\subsection{Ancilla-assisted protocol}
Alternative to the two-point measurement scheme, one can consider an explicit measurement model that describes an apparatus which couples to the system at the times $t_i$ and $t_f$ in such a manner that a subsequent projective measurement of the apparatus yields the amount of work performed on the system between $t_i$ and $t_f$.
Let the measuring apparatus be modeled as a free particle on the real line, whose associated Hilbert space is $\mathcal{H}_A \simeq L^2(\mathbb{R})$ and whose free evolution is governed by the Hamiltonian $H_A = P^2/2m$, where $m$ is a mass parameter that governs the dispersion of the measuring apparatus in position space. Suppose that the system and apparatus are prepared at the time $t_p < t_i$ in the separable state $\rho_S(t_p) \otimes \rho_A(t_p)$, where $\rho_S(t_p) \in \mathcal{S}\left(\mathcal{H}_S\right)$ and $\rho_A(t_p) \in \mathcal{S}\left(\mathcal{H}_A\right)$. For simplicity we will suppose that the apparatus is initially a pure state $\rho_A(t_p) = \ket{\psi_A(t_p)}\! \bra{\psi_A(t_p)}$ localized in position space around $x = 0$,
\begin{align}
\ket{\psi_A(t_p)} = \frac{1}{\pi^{1/4}\sqrt{\sigma_{x}}} \int dx \, e^{-\frac{x^2}{2\sigma_{x}^2}} \ket{x},
\label{Astatepgaus}
\end{align}
where $\ket{x}$ is the generalized eigenvector of the position operator $X$, that is, $X \ket{x} = x \ket{x}$ for all $x \in \mathbb{R}$. The apparatus must interact with the system such that it keeps a coherent record of the energy of the system at the times $t_i$ and $t_f$. An interaction Hamiltonian that accomplishes this is
\begin{align}
H_{SA}(t) = \lambda f(t) H_S(t) \otimes P, \nn
\end{align}
where $\lambda \in \mathbb{R}$ is the strength of the interaction, and $P$ is the momentum operator acting on $\mathcal{H}_A$, $f(t) \ce g(t-t_f) - g(t-t_i)$, and $g(t)$ is a function with narrow support around $t=0$. Because the momentum operator $P$ generates a translation of the position operator $X$, the evolution generated by $H_{SA}(t)$ first translates the apparatus to the left by an amount conditioned on the internal energy of the system at time $t_i$ and then translates the apparatus to the right conditioned on the internal energy of the system at time $t_f$. The system and apparatus evolve from the time $t_p$ to $t_m>t_f$ according to the unitary operator
\begin{align}
U_{t_p \to t_m} = \mathcal{T} e^{-i \int_{t_p}^{t_m} \dif{t} \, H(t)}, \nn
\end{align}
where $\mathcal{T}$ denotes the time ordering operator and the total Hamiltonian $H(t)$ describing the system, apparatus, and their interaction is
\begin{align}
H(t) = H_S(t) + H_A + H_{SA}(t).
\label{totalH}
\end{align}
At the time $t_m$, a position measurement of the apparatus is made, the outcome of which corresponds to the measured work $W$ in this realization of the process governed by $H_S(t)$. Accordingly, the probability density of an amount of work $W$ being done on the system is given by
\begin{align}
\mathcal{P}(W) = \tr \left[ I_S \otimes \Pi_{x = W} U_{t_p \to t_m} \rho_S(t_p) \otimes \rho_A(t_p) U_{t_p \to t_m}^\dagger \right], \nn
\end{align}
where $\Pi_{x} \ce \ket{x}\!\bra{x} \in \mathcal{E} \left(\mathcal{H}_A\right)$ is the effect operator associated with outcome $x \in \mathbb{R}$. This protocol constitutes a measurement model and is depicted in Fig.~\ref{protocolFig} as a quantum circuit.
\begin{figure}[t]
\includegraphics[width= 245pt]{ProtocolFig.pdf}
\caption{The measurement model described in the text is depicted as a quantum circuit that implements a single measurement described by a POVM. The different evolution channels are: $U_{S}$ for the evolution of $\rho_{S}(t_{p})$ under $H_{S}$, and similarly for the measuring apparatus $A$; $U_{i}$ and $U_{f}$ represent the evolution of the joint state under the interaction Hamiltonian.
}
\label{protocolFig}
\end{figure}
The above measurement model induces a positive operator-valued measure (POVM) described by effect operators $ E(W) \in \mathcal{E}(\mathcal{H}_S)$ for all $W \in \mathbb{R}$ such that
\begin{align}
\mathcal{P}(W) &= \tr \big[E(W) \rho_S(t_p) \big] \nn \\
& = \tr \left[ I_S \otimes \Pi_{x = W} U_{t_p \to t_m} \rho_S(t_p) \otimes \rho_A(t_p) U_{t_p \to t_m}^\dagger \right]\!, \label{ProbabilityReproducability}
\end{align}
where the last equality defines $E(W)$ and is known as the probability reproducibility condition~\cite{heinosaariMathematicalLanguageQuantum2011}. Inverting Eq.~\eqref{ProbabilityReproducability} allows the POVM elements to be solved for explicitly
\begin{align}
E(W) = \bra{\psi_A(t_p)} U_{t_p \to t_m}^\dagger I_S \otimes \Pi_{x=W} U_{t_p \to t_m} \ket{\psi_A(t_p)}.
\label{Effect}
\end{align}
In the ideal limit where the initial state of the apparatus is completely localized in the position/measurement basis, $\sigma_x \to 0$, the measurement interaction happens infinitely fast, $g(t) \to \delta(t)$, and the system is prepared in the state $\ket{E_m(t_p)}$ with probability $\mathcal{P}_m \ce \braket{E_m(t_p) | \rho_S(t_p) | E_m(t_p)}$, the probability distribution in Eq.~\eqref{ProbabilityReproducability} is equivalent to the work distribution sampled in the two-point measurement scheme, given in Eq.~\eqref{TMP}. Henceforth, we will refer to this limit as the \textit{ideal measurement limit}.
\label{secideal}
\subsection{Thermodynamic considerations}
\label{thermocons}
Suppose the system of interest is subject to a time-dependent Hamiltonian $H_S(t)$ and evolves as $\rho_S(t)$. The first law of thermodynamics states that
\begin{align}
\Braket{\Delta U} &= \Braket{W} + \Braket{Q},
\label{firstlaw}
\end{align}
where $\braket{W} \ce \int_{t_{p}}^{t_{m}} \dif{t} \, \tr{\left[ \dot{H}_{S}(t)\rho_{S}(t)\right]}$ and $\braket{Q} \ce \int_{t_{p}}^{t_{m}} \dif{t} \, \tr{\left[H_{S} (t)\dot{\rho}_{S}(t)\right]}$; see for example~\cite{Vinjanampathy2016QuantumT}.
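The integral definitions above can be checked numerically. The sketch below evolves a qubit under an illustrative drive $H_S(t) = \sigma_z + t\,\sigma_x$ (a choice made for the example, not the paper's model) and verifies the first law $\braket{\Delta U} = \braket{W} + \braket{Q}$, with $\braket{Q} \approx 0$ as expected for closed unitary evolution.

```python
import numpy as np
from scipy.linalg import expm

# First-law bookkeeping <dU> = <W> + <Q> for a closed, driven qubit.
# The drive H_S(t) = sigma_z + t*sigma_x is an illustrative choice.
sz = np.diag([1.0, -1.0]).astype(complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
H = lambda t: sz + t * sx
dH = lambda t: sx                           # \dot{H}_S(t)

rho = np.array([[0.8, 0.1], [0.1, 0.2]], dtype=complex)
steps, dt = 1000, 1e-3
U0 = np.trace(H(0.0) @ rho).real            # initial internal energy
W = Q = 0.0
for k in range(steps):
    t = k * dt
    Ut = expm(-1j * H(t) * dt)              # short-time propagator
    rho_new = Ut @ rho @ Ut.conj().T
    W += np.trace(dH(t) @ rho).real * dt            # int tr[Hdot rho] dt
    Q += np.trace(H(t) @ (rho_new - rho)).real      # int tr[H rhodot] dt
    rho = rho_new
dU = np.trace(H(steps * dt) @ rho).real - U0
print(dU, W, Q)          # dU matches W + Q; Q vanishes for unitary evolution
```

Each heat increment $\tr[H(\rho' - \rho)]$ vanishes identically here because the short-time propagator commutes with the instantaneous Hamiltonian, so all the energy change is bookkept as work.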
The fact that the measurement apparatus has to interact with the system in order to sample the work distribution leads to the possibility of the apparatus performing work on the system and modifying the work distribution. Although this does not occur when using ideal von Neumann measurements in the two-point measurement scheme \cite{debarba2019work}, we expect a different outcome based on the finite resolution of our measurement model. To examine this further we have to specify additional layers of detail defining the measurement process. The first layer involves completely ignoring the effect of the measurement interaction between the system and apparatus on the evolution of the system, $\rho_S(t)$, and corresponds to the ideal measurement limit. An additional layer of detail takes into account the measurement interaction, which in turn modifies the evolution of the system state to $\tilde{\rho}_S(t) \neq \rho_S(t) $.
As a consequence, this results in different amounts of average work being performed on the system
\begin{align}
\braket{W}_{S} &\ce \int_{t_{i}}^{t_{f}} \dif{t} \tr{\left[ \dot{H}_{S}(t)\rho_{S}(t)\right]}, \\
\braket{\tilde{W}}_{S} &\ce \int_{t_{i}}^{t_{f}} \dif{t} \tr{\left[ \dot{H}_{S}(t)\tilde{\rho}_{S}(t)\right]}.
\label{Defworktilde}
\end{align}
The difference between these quantities,
\begin{align}
\Delta W_{\rm int} \ce \braket{\tilde{W}}_{S} - \braket{W}_{S}, \label{workint}
\end{align}
quantifies the additional work done on the system due to its interaction with the measuring apparatus. Using the first law in Eq.~\eqref{firstlaw}, we similarly define
\begin{align}
\braket{Q}_{S} &\ce \braket{\Delta U} - \braket{W}_{S}, \nn \\
\braket{\tilde{Q}}_{S} &\ce \braket{\tilde{\Delta U}} - \braket{\tilde{W}}_{S}, \nn
\end{align}
and their difference
\begin{align}
\Delta Q_{\rm int} \ce \braket{\tilde{Q}}_{S} - \braket{Q}_{S}. \label{heatint}
\end{align}
Both of the above average work quantities reference observables that are to be measured on the system itself, as opposed to an observable on the measuring apparatus. The average work computed from the work distribution is
\begin{equation}
\braket{W}_{\rm dist} \ce \int \dif{W} \, W \mathcal{P}(W,t_{m}). \nn
\end{equation}
Similarly, the difference in the average work arising from sampling this work distribution,
\begin{align}
\Delta W_{\rm POVM} \ce \braket{W}_{\rm dist} - \braket{W}_{S},
\nn
\end{align}
quantifies the effect of using the measured work distribution $\mathcal{P}(W,t_{m})$ and the additional work done on the system relative to the ideal measurement limit. Note that we do not define similar quantities for the heat since that would require a prescription for calculating $\braket{\Delta U}$ using the measuring apparatus.
In Sec.~\ref{Self-commuting system Hamiltonians}, we show that $\Delta W_{\rm POVM}$, which is non-zero in general, vanishes upon taking the ideal measurement limit. More surprisingly, we find that $\Delta W_{\rm int}$ vanishes when the system Hamiltonian commutes with itself at different times, which means that the second layer of detail in describing realistic work measurements does not suffice in finding the average work imparted by the apparatus in the sense defined above. Finally, in Sec.~\ref{not self-commuting}, which treats the example of a non-self-commuting Hamiltonian, we find that in general $\Delta W_{\rm POVM}$ and $\Delta W_{\rm int}$ are non-zero and differ from each other.
\begin{figure}[t]
\includegraphics[width= 245pt]{setupFig.pdf}
\caption{This figure depicts the three measurement model layers outlined in Sec.~\ref{thermocons}, illustrating qualitatively their associated averages, $\braket{W}_S$, $\braket{\tilde{W}}_S$, and $\braket{W}_{\rm dist}$. The latter two definitions reduce to the former upon taking the ideal measurement limit.}
\label{protocolFiglayers}
\end{figure}
\section{Self-commuting system Hamiltonians}
\label{Self-commuting system Hamiltonians}
In this section, analytic expressions of the work distribution $\mathcal{P}(W,t)$ are derived using the POVM construction above for the case of systems governed by a time-dependent Hamiltonian $H(t)$ that commutes with itself at different times, $[H(t),H(t')] = 0$. For such systems, it is shown that $\Delta W_{\rm int} =0 $ and $\Delta W_{\rm POVM}$ is a function of the measurement interaction that vanishes in the ideal measurement limit discussed above. The measured work distribution is modified on account of the system-apparatus interaction, which in turn leads to corrections to both the Crooks relation and the Jarzynski equality.
\subsection{Setup}
\label{subsectionselfcommutingtheory}
Consider a system described by the Hilbert space $\mathcal{H}_S$ and whose evolution is governed by the time-dependent Hamiltonian $H_S(t) \in \mathcal{B}(\mathcal{H}_S)$. Suppose that $H_S(t)$ commutes with itself at different times
\begin{align}
\left[ H_S(t) , H_S(t') \right] = 0, \quad \forall \ t,t' \in \mathbb{R}^{+}.
\nn
\end{align}
It follows that for such Hamiltonians, the energy eigenbasis does not change in time, and therefore by the spectral theorem\footnote{For simplicity, we consider here the case when the spectrum $\sigma_S$ is discrete; however, the results presented here naturally generalize to the case of continuous and degenerate spectrum Hamiltonians.}
\begin{align}
H_S(t) = \sum_{n} E_n(t) \ket{E_n}\! \bra{E_n}, \nn
\end{align}
where $\ket{E_{n}} \in \mathcal{H}_S$ is the energy eigenstate associated with the eigenvalue $E_{n}(t)\in \spec (H_S(t))$, that is, $H_S(t) \ket{E_{n}} = E_{n}(t) \ket{E_{n}}$. Thus, it is only the spectrum of the system Hamiltonian that changes in time, not its eigenbasis. An example of such a self-commuting system Hamiltonian is a two-level atom in the presence of a uniform magnetic field of varying strength.
As evaluated in Appendix~\ref{Self-Commuting work distribution}, the probability of a measurement of the apparatus at some time $t$ giving the outcome $W$ is
\begin{align}
\mathcal{P}(W,t)
&= \sum_n \dfrac{\rho_S^{(n,n)}(t)}{\sqrt{\pi} \Sigma(t)} e^{-\left( W - \int_{t_{p}}^{t_{m}} \dif{t} \, \lambda f(t) E_n(t) \right)^2/\Sigma^2(t) } , \label{WorkDistribution}
\end{align}
where
\begin{align}
\rho^{(n,n)}_S(t) &=\bra{E_n} \rho_S(t) \ket{E_n}= \rho_{S}^{(n,n)}(t_{i}), \nn \\
\Sigma(t) &\ce \left(\frac{1}{\sigma_{p}^{2}} + \dfrac{\sigma_{p}^{2}(t-t_{p})^{2}}{m^{2}}\right)^{\frac{1}{2}}.
\nn
\end{align}
Note that the diagonal elements of the state of the system do not evolve under $H_{S}(t)$. It is seen that the Gaussian factors appearing in Eq.~\eqref{WorkDistribution} disperse as $t$ increases in a nontrivial manner that depends on $m$ and $\sigma_p$. Moreover, $\sigma_{p}^{-1}$ quantifies how localized the initial apparatus state is in position space, so as $\sigma_p^{-1}$ decreases the measurement model approaches the ideal measurement limit.
Since the work distribution is a weighted sum of Gaussians, the average work done on the system is
\begin{align}
\braket{W}_{\rm dist} = \sum_n \rho_S^{(n,n)}(t_{i}) \int_{t_{p}}^{t_{m}} \dif{t} \lambda f(t) E_n(t) . \label{distaverage}
\end{align}
In the ideal measurement limit, $f(t) \to \delta(t-t_f) - \delta(t-t_i)$, this expression reduces to the average work obtained from the first law using the freely evolving system state, $\rho_{S}$, so that $\Delta W_{\rm POVM} =0$ for arbitrary $m$ and $\sigma_p$. However, away from this limit $\Delta W_{\rm POVM}$ is in general non-zero. Moreover, as evaluated in Appendix~\ref{Self-Commuting work distribution}, we find that the quantity $\Delta W_{\rm int} $ defined in Eq.~\eqref{workint} vanishes independent of the shape of $f(t)$. Coupled with the fact that the diagonal elements of $\tilde{\rho}_{S}(t)$ are the same as those of $\rho_{S}(t)$ and Eq.~\eqref{firstlaw}, it follows that on average no heat transfer between the system and apparatus occurs. In general, this need not be the case, in particular when $[H_{S}(t),H_{S}(t')] \neq 0$ since $\Delta W_{\rm int}$ is non-zero and the diagonal elements of $\tilde{\rho}_{S}(t)$ are modified non-trivially.
Finally, the work distribution is seen to depend only on the diagonal elements in the energy eigenbasis of the system density matrix, which is a consequence of tracing out the system degrees of freedom in obtaining the reduced state of the apparatus. In the case of self-commuting system Hamiltonians, we find that the diagonal elements of the system density matrix do not evolve under $H_{S}(t)$ since the $\ket{E_{n}}$ remain eigenvectors for all $t \in \mathbb{R}^{+}$. We will see in Sec.~\ref{not self-commuting} that this no longer holds for the non-self-commuting case.
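Equation~\eqref{WorkDistribution} can be evaluated directly. The sketch below does so for a two-level system with an illustrative linear spectrum $E_n(t) = \pm B(t)$, $B(t) = 1 + t/2$, a narrow Gaussian sampling window $g$, and invented populations and parameter values; it checks that the distribution is normalized and that its mean reproduces Eq.~\eqref{distaverage}.

```python
import numpy as np

# Evaluate the self-commuting work distribution (a weighted sum of Gaussians)
# for a two-level system. All parameter values below are illustrative.
lam, m, sigma_p = 1.0, 1000.0, 1.0
tp, ti, tf, tm = 0.0, 2.0, 6.0, 8.0
Delta = 0.2                                      # width of the window g(t)
g = lambda t: np.exp(-t**2 / Delta**2) / (np.sqrt(np.pi) * Delta)
f = lambda t: g(t - tf) - g(t - ti)
E = lambda n, t: (1 if n else -1) * (1.0 + 0.5 * t)   # E_1 = +B, E_0 = -B

ts = np.linspace(tp, tm, 20001)
dt = ts[1] - ts[0]
Wn = [lam * np.sum(f(ts) * E(n, ts)) * dt for n in (0, 1)]  # Gaussian centers
pn = [0.6, 0.4]                           # rho_S^{(n,n)}(t_i), chosen freely
Sigma = np.sqrt(1 / sigma_p**2 + sigma_p**2 * (tm - tp)**2 / m**2)

Ws = np.linspace(-8, 8, 40001)
dW = Ws[1] - Ws[0]
P = sum(p * np.exp(-(Ws - w)**2 / Sigma**2) / (np.sqrt(np.pi) * Sigma)
        for p, w in zip(pn, Wn))

norm = np.sum(P) * dW                     # should be ~1
mean = np.sum(Ws * P) * dW                # should match Eq. (distaverage)
print(norm, mean)
```

For a linear $B(t)$ and a symmetric window, the centers come out to $W_n \approx \pm\lambda\,[B(t_f) - B(t_i)] = \pm 2$, and the mean is simply the population-weighted sum of the centers.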
\subsection{Fluctuation relations}
Fluctuation relations are an important tool in statistical
mechanics because they relate equilibrium properties to measurable
non-equilibrium quantities. Generalizing classical fluctuation relations to the quantum regime has been the subject of much attention~\cite{holmesCoherentFluctuationRelations2019,campisiColloquiumQuantumFluctuation2011,perarnau-llobetQuantumSignaturesFluctuation2019,watanabeGeneralizedEnergyMeasurements2014}. Moreover, measurements of work fluctuations in quantum systems have been proposed and recently realized~\cite{wuExperimentallyReducingQuantum2019,perarnau-llobetNoGoTheoremCharacterization2017, cerisolaUsingQuantumWork2017,perarnau-llobetCollectiveOperationsCan2018,Pekola2015CircuitQT}.
Consider a system in contact with a heat bath of inverse temperature $\beta$, evolving under the system Hamiltonian $H_{S}(t)$.
The Crooks relation relates the work distributions associated with the forward and backward protocols for an initial equilibrium thermal state, where the former corresponds to $\mathcal{P}_{F}(W,t)$ and the latter to $\mathcal{P}_{B}(-W,t_{m}-t)$ for $t\in [t_{p},t_{m}]$~\cite{campisiColloquiumQuantumFluctuation2011,crooksEntropyProductionFluctuation1999}. It can be stated as
\begin{align}
\mathcal{P}_{F}(W) = \mathcal{P}_{B}(-W) e^{\beta(W-\Delta F)}, \nn
\end{align}
where
$\Delta F$ is the change in the equilibrium free energy of the system, defined by $\Delta F \ce -\frac{1}{\beta}
\ln{\tfrac{Z(t_{f})}{Z(t_{i})}}$, and $Z(t) \ce \tr e^{-\beta H_S(t)}$ is the partition function of the system at time $t$.
If we consider an initial thermal state, $\tilde{\rho}_{S}(t_{i})=\frac{e^{-\beta H_{S}(t_{i})}}{Z(t_{i})}$,
the work done on the system obeys $W= - \Delta F$ in the ideal measurement limit. Then, the Crooks relation simply states that $ \frac{\mathcal{P}_{F}(W)}{\mathcal{P}_{B}(-W)} = e^{2\beta W}$.
Using Eq.~\eqref{WorkDistribution}, we obtain the work distribution associated to this thermal state
\begin{align}
\mathcal{P}_{F}(W,t)&= \frac{1}{Z(t_{i})\sqrt{\pi} \Sigma(t)} \nn \\
&\times \sum_n e^ {- \beta E_n(t_i) } e^{ -\frac{\left(W - \int_{t_p}^{t_m} \dif{t} \lambda f(t) E_n (t) \right)^2 }{ \Sigma(t)^2}} \label{ThermalStateDistribution} .
\end{align}
Using this work distribution, which takes into account the act of measuring the system on which work is being performed, we arrive at a modified Crooks relation specific to our measurement model:
\begin{widetext}
\begin{align}
\dfrac{\mathcal{P}_{F}(W,t)}{\mathcal{P}_{B}(-W,t_m -t)}= \dfrac{\Sigma(t_{m}-t)Z(t_f)}{\Sigma(t)Z(t_{i})} \dfrac{ \sum_n e^ {-
\beta E_n(t_{i}) } e^{ -\left(W - \int_{t_p}^{t_m} \dif{t} \, \lambda f(t) E_n
(t) \right)^2 / \Sigma(t)^2}}{\sum_n e^
{- \beta E_n(t_{f}) } e^{ -\left(W + \int_{t_{m}}^{t_{p}} \dif{t} \,
\lambda f(t_{m}-t) E_n (t_{m}-t) \right)^2 / \Sigma(t_{m}-t)^2}},
\label{Crooksmodified}
\end{align}
\end{widetext}
where we have parameterized the forward protocol with $t$ for $t_{p}\leq t\leq t_{m}$ and the backward protocol with $t_{m}-t$. Equation~\eqref{Crooksmodified} constitutes a generalization of the standard Crooks relation when the ancilla-assisted measurement protocol is used to define work and finite measurement interaction times and dispersion effects in the measuring apparatus are taken into account. Note that by taking the ideal measurement limit discussed under Eq.~\eqref{Effect}, we reproduce the Crooks relation for equilibrium states.
The Jarzynski equality is another important fluctuation relation that governs systems away from equilibrium~\cite{campisiColloquiumQuantumFluctuation2011,JarzynskiEquality1996}, which can be derived straightforwardly from the Crooks relation
\begin{align}
\Braket{e^{-\beta W}} = \int \dif{W} \mathcal{P}_{F}(W)e^{-\beta W} = e^{-\beta \Delta F}= \dfrac{Z(t_{f})}{Z(t_{i})}. \label{jarz}
\end{align}
Moreover, using Jensen's inequality, $\Braket{e^{-\beta W}} \geq e^{-\beta \Braket{W}}$, the statement of the
second law of thermodynamics follows
\begin{align}
\Braket{W} \geq \Delta F.
\label{secondlaw}
\end{align}
By using the work distribution in Eq.~\eqref{ThermalStateDistribution}, we can calculate the
exponentiated average work at the time of measurement of the apparatus, $t_{m}$, and use that to arrive at a modified Jarzynski equality
\begin{align}
\Braket{e^{-\beta W}}_{\rm dist}
= \frac{e^{\frac{\beta^{2} \Sigma^{2}(t_{m})}{4}} }{Z(t_i)} \sum_n e^{ - \beta \left( E_n(t_i) + \int_{t_p}^{t_m} \dif{t} \lambda \, f(t) E_n (t) \right) }. \nn
\end{align}
Upon taking the ideal measurement limit, the modified
Jarzynski equality reduces to the standard Jarzynski equality in Eq.~\eqref{jarz}.
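The modified Jarzynski equality can be verified numerically for a two-level system: the sketch below integrates $e^{-\beta W}$ against the Gaussian-mixture work distribution of Eq.~\eqref{ThermalStateDistribution} and compares the result with the closed form above. The energies, Gaussian centers, and width $\Sigma$ are illustrative values.

```python
import numpy as np

# Numerical check of the modified Jarzynski equality for the thermal-state
# work distribution (a Gaussian mixture of width Sigma). Values illustrative.
beta, Sigma = 0.7, 0.5
E_i = np.array([-1.0, 1.0])            # E_n(t_i)
Wn = np.array([-2.0, 2.0])             # centers: int dt lambda f(t) E_n(t)
Z = np.sum(np.exp(-beta * E_i))
pn = np.exp(-beta * E_i) / Z           # thermal weights

Ws = np.linspace(-30, 30, 400001)
dW = Ws[1] - Ws[0]
P = sum(p * np.exp(-(Ws - w)**2 / Sigma**2) / (np.sqrt(np.pi) * Sigma)
        for p, w in zip(pn, Wn))
lhs = np.sum(np.exp(-beta * Ws) * P) * dW   # <e^{-beta W}>_dist, numerically

# Closed form: exp(beta^2 Sigma^2 / 4) / Z * sum_n exp(-beta (E_n(t_i) + W_n))
rhs = np.exp(beta**2 * Sigma**2 / 4) / Z * np.sum(np.exp(-beta * (E_i + Wn)))
print(lhs, rhs)
```

Each Gaussian of width $\Sigma$ contributes a factor $e^{\beta^2\Sigma^2/4}$ to the exponentiated average, which is exactly the correction that disappears in the ideal measurement limit.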
For an equilibrium state of the system at inverse temperature $\beta$, Eq.~\eqref{distaverage} simplifies to
\begin{align}
\Braket{W}_{\rm dist} = \frac{1}{Z(t_{i})} \sum_{n} e^{-\beta E_{n}(t_{i})} \int_{t_{p}}^{t_{m}} \dif{t} \lambda f(t) E_{n}(t). \nn
\end{align}
Using the same reasoning that led to Eq.~\eqref{secondlaw}, we arrive at a statement of the second law
of thermodynamics with respect to our measurement model
\begin{align}
\Braket{W}_{\rm dist} &\geq
-\frac{1}{\beta} \ln{\Bigg(\frac{1}{Z(t_i )}\sum_{n}e^{-\beta(E_{n}(t_{i})+ \int_{t_{p}}^{t_{m}} \dif{t} \lambda f(t) E_{n}(t))}\Bigg)} \nn \\ &\quad -\frac{\beta \Sigma^{2}(t_{m})}{4} . \nn
\end{align}
It is seen that the finite resolution of the measurement modifies the expression of the second law in a way that is dependent on the temperature of the system and the measurement model parameters. In the ideal measurement limit, the first term in the above expression reduces to $\Delta F$ while the second term goes to zero, thus reproducing the expression in Eq.~\eqref{secondlaw}. We find that there is a constant correction proportional to the product of $\beta$ and the square of the width of the work distribution at $t_m$. For sufficiently low temperatures, this constant correction may remain non-zero even in the ideal measurement limit.
\section{The work done on a two-level atom by a changing magnetic field}
\label{not self-commuting}
We now consider the case in which the system Hamiltonian $H_S(t)$ does not commute with itself at different times, $[H_{S}(t),H_{S}(t')] \neq 0$. As an example of such a scenario, we consider a two-level atom $\mathcal{H}_S \simeq \mathbb{C}^2$ in the presence of a magnetic field that changes in strength and direction between the times $t_p$ and $t_m$,
\begin{align}
H_S(t) = \mu \vec{B}(t) \cdot \vec{\sigma}, \nn
\end{align}
where $\mu$ is the magnetic moment of the atom, $\vec{B}(t) = B(t) \hat{n}(t)$ is the magnetic field vector and $\vec{\sigma} = (\sigma_x, \sigma_y, \sigma_z)$ is the Pauli vector. For simplicity, we suppose that the magnetic field is rotating around the $z$-axis at a polar angle $\theta$ so that in the basis furnished by the eigenstates of the $\sigma_z$ operator, the system Hamiltonian takes the form
\begin{align}
H_S(t) = \mu B(t) \left[ \cos \omega t \sin \theta \sigma_x + \sin \omega t \sin \theta \sigma_y + \cos \theta \sigma_z \right]. \nn
\end{align}
This Hamiltonian does not commute with itself at different times,
$[H_{S}(t),H_{S}(t')] \propto \sin{\theta}$, unless $\theta$ is an integer multiple of $\pi$ in which case the results developed in Sec.~\ref{subsectionselfcommutingtheory} apply. Thus, we will use the parameter $\theta$ as a measure of the non-self-commutativity of $H_{S}(t)$.
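The claimed proportionality can be checked directly. The snippet below evaluates the commutator of the rotating-field Hamiltonian at two times, with $\mu = B = \omega = 1$ and arbitrary sample times chosen only for the check.

```python
import numpy as np

# Verify that [H_S(t), H_S(t')] vanishes when sin(theta) = 0 for the
# rotating-field Hamiltonian (mu = B = omega = 1 for this check).
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1.0 + 0j, -1.0])

def H(t, theta, omega=1.0):
    return (np.cos(omega * t) * np.sin(theta) * sx
            + np.sin(omega * t) * np.sin(theta) * sy
            + np.cos(theta) * sz)

def comm_norm(theta, t=0.3, tp=1.1):
    # Frobenius norm of the commutator at two sample times.
    C = H(t, theta) @ H(tp, theta) - H(tp, theta) @ H(t, theta)
    return np.linalg.norm(C)

print(comm_norm(0.0))        # theta = 0: self-commuting
print(comm_norm(np.pi))      # theta = pi: self-commuting
print(comm_norm(np.pi / 3))  # generic theta: non-self-commuting
```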
Suppose that the system and measuring apparatus are prepared at time $t_p$ in a product state $\ket{\psi_A(t_p)} \ket{\psi_S(t_p)}$, where $\ket{\psi_A(t_p)}$ is given in Eq.~\eqref{Astatepgaus} and the initial state of the system at the time of the first sampling $t_i$ is
\begin{align}
\ket{\psi_S(t_i)} = \alpha \ket{0} + \beta \ket{1}, \nn
\end{align}
where $\alpha$ and $\beta$ are complex numbers such that $\abs{\alpha}^2 + \abs{\beta}^2 = 1$. To properly compare the effects of the parameters defining the measurement model ($\Delta$, $m$, $\sigma$, $\lambda$, $\omega$) with the ideal measurement as described in Sec.~\ref{secideal}, $\alpha$ and $\beta$ are chosen such that if the system were to evolve under $H_S(t)$ alone, then at $t_i$ the system would be in the state
\begin{align}
\mathcal{T} e^{-i \int_{t_p}^{t_i} \dif{t} \, H_S(t)} \ket{\psi_S(t_p)} = \alpha \ket{0} + \beta \ket{1}. \nn
\end{align}
Generally, we can expand the joint state of both the apparatus and the system at a time $t$ as
\begin{align}
\ket{\psi(t)}= \sum_{n \in \{0,1\}} \int \dif{p} \, c_{n}(t,p) \ket{n} \ket{p},
\label{jointstate}
\end{align}
where we have expanded the system's state in the $\sigma_{z}$-basis. The coefficient
functions $c_{n}(t,p)$ can be determined by substituting Eq.~\eqref{jointstate} in the Schr\"{o}dinger
equation of the Hamiltonian in Eq.~\eqref{totalH}. We arrive at two coupled
differential equations
\begin{align}
i\dot{c}_{j}(t,p) = \frac{p^{2}}{2 m} c_{j}(t,p) + \left[1 + \lambda f(t) p \right] \sum_{k} c_{k}(t,p)
H_{jk}(t),\label{maindifeq}
\end{align}
where $j,k \in \{0,1\}$ and we have defined $H_{jk}(t) = \braket{j|H_{S}(t)|k}$. The work distribution is obtained from the diagonal entries in the position basis of the reduced apparatus density matrix. In
this case, the work distribution is given by
\begin{align}
\mathcal{P}(W,t) &= \frac{1}{2 \pi } \sum_{n}\int \dif{p}\dif{p'} \, c_n(t,p) c_n^*(t,p') e^{iW(p-p')} \nn \\
&= \frac{1}{2 \pi } \sum_{n}\abs{\int \dif{p} \, c_n(t,p) e^{iWp}}^2. \nn
\end{align}
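A minimal sketch of this numerical procedure is given below: since Eq.~\eqref{maindifeq} is linear and diagonal in $p$, all momentum-grid points can be integrated in a single vectorized call, and the work distribution follows from a discretized Fourier transform of the coefficients. The grid sizes, the initial system state (taken to be $\ket{1}$), and all parameter values are illustrative choices, not those used for the figures.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Integrate Eq. (maindifeq) on a momentum grid and build P(W, t_m).
# Illustrative parameters: sigma_p = mu = gamma = omega = lambda = 1.
m, lam, theta, omega = 1000.0, 1.0, np.pi / 4, 1.0
ti, tf, tm, Delta = 2.0, 6.0, 8.0, 0.3
g = lambda t: np.exp(-t**2 / Delta**2) / (np.sqrt(np.pi) * Delta)
f = lambda t: g(t - tf) - g(t - ti)

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1.0 + 0j, -1.0])
HS = lambda t: t * (np.cos(omega * t) * np.sin(theta) * sx
                    + np.sin(omega * t) * np.sin(theta) * sy
                    + np.cos(theta) * sz)        # B(t) = gamma t, gamma = 1

ps = np.linspace(-6, 6, 129)                     # momentum grid
dp = ps[1] - ps[0]
phi = np.exp(-ps**2 / 2) / np.pi**0.25           # apparatus Gaussian, sigma_p=1
c0 = np.zeros((2, ps.size), dtype=complex)
c0[1] = phi                                      # system starts in |1>

def rhs(t, y):
    # Eq. (maindifeq), vectorized over the momentum grid.
    c = y.reshape(2, -1)
    dc = -1j * (ps**2 / (2 * m) * c + (1 + lam * f(t) * ps) * (HS(t) @ c))
    return dc.ravel()

sol = solve_ivp(rhs, (0.0, tm), c0.ravel(), rtol=1e-8, atol=1e-10)
C = sol.y[:, -1].reshape(2, -1)

# P(W, t_m) = (1/2 pi) sum_n |int dp c_n(t_m, p) e^{i W p}|^2
Ws = np.linspace(-15, 15, 1201)
dW = Ws[1] - Ws[0]
amps = np.exp(1j * np.outer(Ws, ps)) @ C.T * dp  # shape (N_W, 2)
P = np.sum(np.abs(amps)**2, axis=1) / (2 * np.pi)
print(np.sum(P) * dW)                            # total probability ~ 1
```

By Parseval's theorem the resulting distribution inherits its normalization from the momentum-space coefficients, which provides a simple consistency check on the integration.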
\begin{figure}[t]
\centering
\includegraphics[width=0.45\textwidth]{thetacomparison.pdf}
\includegraphics{thetacomparisonlegend.pdf}
\caption{For a system prepared in an equally weighted superposition of the two energy
eigenstates, which corresponds to $\alpha=\tfrac{1}{\sqrt{2}}$ at the time of the first measurement $t_{i}=2\pi \sigma_{p}$, we show
the work distribution at the time $t_{m}=6\pi \sigma_{p}$ for different values of $\theta$. The mass of the measuring apparatus and the width of the interaction are taken to be $m=1000\sigma_{p}^{3}$ and $\Delta = \sigma_{p}$. }
\label{thetacomparison}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[width=0.45\textwidth]{AvgWork.pdf}\\
\includegraphics[width=0.5\textwidth]{avgworklegend.pdf}
\caption{For a system prepared in the excited state, which corresponds to $\alpha=0$ at the time of the first measurement $t_{i}=2\pi \sigma_{p}$, we show the average work at $t_{m}=6\pi \sigma_{p}$ for $\theta \in [0,\pi]$ using the work distribution, reduced system state, and system state without interaction. In general, the differences $\Delta W_{\rm int}$ and $\Delta W_{\rm POVM}$ are non-zero. The mass of the measuring apparatus and the width of the interaction are taken to be $m=500\sigma_{p}^{3}$ and $\Delta = 0.8\sigma_{p}$. }
\label{avgworkplot}
\end{figure}
The coupled differential equations in Eq.~\eqref{maindifeq} can be solved
numerically and their solutions used to arrive at a work distribution
specific to the measurement model. To illustrate the effects of the measurement interaction on the ancilla-assisted protocol in a concrete model, we consider $B(t)= \gamma t$, where we take
$\gamma = B(1)/\sigma_p =1$ for convenience, and all parameters not explicitly stated are set to one for simplicity. Moreover, we will suppose $g(t) = \frac{1}{\sqrt{\pi \Delta^2 }}e^{-t^2/ \Delta^2}$, with the interpretation that the length of the interaction between the apparatus and system is on the order of $\frac{\Delta}{\sigma_{p}}$.
We note that the work distribution in Eq.~\eqref{WorkDistribution} is simply a weighted sum of Gaussian functions centered at two different work values. From Fig.~\ref{thetacomparison}, we see that this structure persists in the non-self-commuting case because the work distributions are in general bi-modal. Moreover, we find that as the level of non-self-commutativity increases as quantified by $\theta$, the distribution approaches uni-modality. Analytically, this would correspond to the vanishing of the coefficient corresponding to the mode that is suppressed.
We see in Fig.~\ref{avgworkplot} that the reduced system and work distribution average work both deviate from
the no-interaction average work. Widening the sampling window skews the averages calculated from both the distribution and the reduced state, so that they no longer approximate the no-interaction average work. A similar analysis of the average heat exchanged between the measuring apparatus and system leads to the conclusion that $\Delta Q_{\rm int}$ as defined in Sec.~\ref{thermocons} is generically non-zero when $\theta \neq 0$. This can be seen from the non-trivial modification to the reduced state of the system
\begin{align}
\tilde{\rho}_{S}(t) = \sum_{n,m} \int \dif{p} c_{n}(t,p) c^{*}_{m}(t,p) \ket{n} \bra{m}, \nn
\end{align}
constructed from the solutions to Eq.~\eqref{maindifeq}, on account of the system's interaction with the apparatus.
\section{Conclusion}
\label{Conclusion}
The ancilla-assisted protocol for measuring work distributions was generalized to account for dispersion and finite resolution effects of the measuring apparatus used to extract the work distribution. An explicit measurement model was considered that replicates the statistics of the probability distribution associated with the work done on the system. Two regimes were explored, one in which the system Hamiltonian commutes with itself at different times, and another in which it does not. The former admits an analytic expression for the work distribution, which we obtain, while the latter does not but was explored numerically via the example of a two-level atom in a time-dependent magnetic field. Corrections to the Crooks relation and Jarzynski equality were shown to arise on account of the finite resolution of the measuring apparatus, and are expected to manifest in any realistic measurement of work distributions.
\begin{acknowledgments}
We thank Nicolai Friis and Nicole Yunger Halpern for their expertise and careful revision of a draft of this paper. This work was supported in part by the Paul K. Richter and Evalyn E. Cook Richter Memorial Fund, a Kaminsky Undergraduate Research Award, the Dartmouth College Society of Fellows, and Saint Anselm College.
\end{acknowledgments}
\section{Introduction}
Image registration (IR) is required in a large variety of medical imaging studies involving tasks such as inter-subject comparison \cite{Ardekani2005}, patient follow-up \cite{Brock2006}, modality fusion \cite{Heinrich2013}, atlas generation \cite{Lorenzen2006}, or more recently cross-modality image synthesis with deep learning techniques \cite{Florkow2019}. Automated IR is however a difficult problem, as medical images can be acquired from modalities based on radically different physical principles and/or biological properties (e.g. anatomical imaging compared to functional imaging), using various image sampling strategies (e.g. slice thickness), showing inconsistent representation of underlying structures such as organs or tumours due to motion (e.g. respiratory movements \cite{Fayad2017}) or structural alterations (e.g. disease progression \cite{Zacharaki2009}). For these reasons, image registration is an active field of study followed by large communities of researchers and advanced users of dedicated software such as ANTs \cite{Avants2014} or \textit{elastix} \cite{Klein2010}. Over the recent years, the research in this field has mostly focused towards intensity-based ("voxel-based"), volumetric deformable image registration (DIR), a general alignment scenario accounting for nonlinear deformations between images and where no information other than voxel data is available \cite{Viergever2016}.
Many components in a DIR pipeline can have a strong influence on the accuracy of the results. The similarity metric (SM) is often regarded as the most critical \cite{Sotiras2013}, although a variety of other parameters, such as the number of iterations of the optimizer or the optimizer itself \cite{Klein2009}, the number of scales in multiscale methods, or the nature of the regularizer \cite{Vishnevskiy2016}, have also been shown to strongly influence performance.
Regarding the choice of the SM, normalized mutual information (NMI) \cite{Viola1997,Studholme1999} has remained highly popular for more than two decades \cite{Sotiras2013} due to its high success and its relative insensitivity to modality changes. Despite this, NMI also has some disadvantages. For example, it is known to exhibit many local minima \cite{Haber2006} and to be sensitive to inhomogeneities such as bias fields in magnetic resonance images (MRI) \cite{Heinrich2012}. It is generally admitted that the limitations of NMI are mainly due to the global nature of the intensity histograms, as spatial relationships between voxels are not considered \cite{Loeckx2009,Heinrich2012,Viergever2016}. As pointed out by the authors of \textit{elastix}, "\textit{it is unlikely that mutual information will be able to maintain its popularity, given the need for local measures of image similarity}" \cite{Viergever2016}. A number of studies have therefore been devoted to the development of more robust, local, and modality-independent alternatives to the NMI criterion \cite{Haber2006,Loeckx2009,Heinrich2012,Simonovsky2016}.
Among these efforts towards the development of new SM, Haber and Modersitzki proposed the normalized gradient field (NGF) \cite{Haber2006}. The rationale of NGF is to maximize the alignment between pairs of normalized image gradient vectors (instead of intensity images). The signs of the vectors are ignored to enforce robustness to contrast inversion between different modalities. The normalization step ensures that contrast variations across regions do not influence registration results. Similar concepts were introduced earlier in \cite{Pluim2000}, where gradient similarity was used in combination with mutual information-based intensity registration. In practice, NGF is implemented as a SM through a variational formulation, by either maximizing the square of the dot product between the two fields or, equivalently, minimizing the norm of the cross product. A limitation is that normalized gradients are only meaningful around edges, whereas in homogeneous regions their directions depend on local noise. To alleviate this issue, an empirical gradient threshold is defined based on the estimated noise level to suppress the influence of weak gradients attributable to noise. However, thresholding may not be optimal, as it discards weak but real edges and preserves high-amplitude noise. Another recently popular SM is the modality independent neighborhood descriptor (MIND) \cite{Heinrich2012}, which encourages local structural self-similarity. The local structure is itself described by a vector-valued representation of the local similarity of small image patches. Such descriptors can then be compared between two images using conventional monomodal SM such as the sum of squared differences (SSD). However, MIND does not encode the spatial orientations of image structures, contrary to NGF. Both NGF and MIND are being increasingly used in the community.
For instance, NGF is used in the current winning method of the EMPIRE10 lung computed tomography (CT) DIR challenge \cite{Murphy2011}, followed by a method based on MIND. A more recent research path is to learn DIR in an unsupervised fashion from training data using deep neural networks \cite{Simonovsky2016}. However, deep learning-based strategies for IR are still in their infancy \cite{Blendowski2020}, and it is not clear from recent results that they can outperform more conventional optimization approaches based on handcrafted similarity metrics and user software experience \cite{Devos2019}, as is the case, for example, in supervised image segmentation with deep strategies such as U-Nets \cite{Ronneberger2015}.
Another avenue of improvement, concurrent to the development of new SM, focuses on \textit{structural representations} (SR) \cite{Wachinger2012}, whereby the images themselves are transformed before being aligned in such a way that conventional monomodal SM, like the SSD, can be employed. The authors of \cite{Wachinger2012} introduce \textit{Entropy} and \textit{Laplacian} images, two SR that offer theoretical guarantees in terms of {locality preservation} and {structural equivalence} at the patch level. They demonstrate that these SR can achieve superior performance over the use of intensity images. The motivations for modality-independent SR are similar to the ones developed for the MIND approach, and identical concepts are now being adopted in deep learning-based DIR under the name of \textit{image transformer networks} \cite{Lee2019,Arar2020}, whereby a geometry-preserving image translation neural network is trained to learn a SR, which is then fed to a \textit{spatial transformer network} \cite{Jaderberg2015} in charge of the actual image warping. More generally, and relaxing theoretical requirements at the patch level, SR can be seen as a form of domain adaptation ensuring that heterogeneous data can be compared while suppressing modality-specific image characteristics. As pointed out by the authors of MIND \cite{Heinrich2012}, the SR in \cite{Wachinger2012} are scalar-valued. As such, they cannot convey directional information relative to the orientation of image structures. The MIND descriptor captures local self-similarity in a vector-valued SR, but it also does not carry information related to structure orientations. This property is an essential advantage of the NGF metric over these SR-based approaches.
In this work, we propose a new method for multimodal image registration based on the evaluation of the similarity between regularized, directional, \textit{vector-valued} fields derived from structural information, a technique we call Vector Field Similarity (VFS). These new representations are derived from edge-based fields (EBF) that are usually used for deformable model segmentation purposes to provide smooth vector fields oriented towards edges \cite{Xu1998,Li2007,Jaouen2014}. We show that EBF, by encoding regularized edge orientation in a vector-valued SR, can overcome the limitations of intensity-based registration. Similarly to existing SR-based registration methods, our approach can be combined in a straightforward and flexible fashion with any vector-valued registration framework, regardless of the SM chosen. As already noted in \cite{Heinrich2012}, this is in contrast to the NGF approach, which is formulated at the metric level. Amongst other advantages, SR enable a direct and fair comparison with existing registration pipelines, by using SR as substitutes for intensity images, all else being equal. In our experiments, we show that VFS compares favorably to conventional intensity-based methods on public datasets, using multiple image registration scenarios for a variety of imaging modalities and anatomical locations.
\section{Methods}
\begin{figure}[htbp]
\begin{center}
\subfigure[CT image]{
\includegraphics[width=.45\linewidth]{graphics/DIRLAB_I0}
}
\subfigure[NGF]{
\includegraphics[width=.45\linewidth]{graphics/DIRLAB_NGF.png}
}
\subfigure[EBF, $\gamma=5.0$]{
\includegraphics[width=.45\linewidth]{graphics/DIRLAB_VFS_kp50.png}
}
\subfigure[EBF, $\gamma=4.0$]{
\includegraphics[width=.45\linewidth]{graphics/DIRLAB_VFS_kp30.png}
}
\subfigure[EBF, $\gamma=3.0$]{
\includegraphics[width=.45\linewidth]{graphics/DIRLAB_VFS_kp20.png}
}
\subfigure[EBF, $\gamma=2.0$]{
\includegraphics[width=.45\linewidth]{graphics/DIRLAB_VFS_kp10.png}
}
\end{center}
\caption{\label{figVFC} Differences between normalized gradient fields (NGF) and edge-based fields (EBF), shown for a patient of the DIR-Lab dataset. The EBF is a vector field convolution field \cite{Li2007}. Results are shown using streamlines to ease visualization. (b) NGF field. (c-f) EBF fields with increasing levels of smoothing; smaller $\gamma$ indicates stronger smoothing. For high values of $\gamma$, NGF and EBF are similar. For stronger smoothing, EBF vectors are directed towards edges even in homogeneous regions, contrary to NGF. Near edges, EBF orientation is independent of contrast inversion and points towards edges.}
\end{figure}
\subsection{Vector Field Similarity}
We consider a fixed image $I$ and a moving image $J$ defined on the image grid $\Omega$. The aim of the proposed image registration method is to find a transformation $\hat T$ such that:
\begin{equation}
\hat T = \argmax_{T \in \mathcal{T}} ~ \mathcal S\left(\mathbf D^I,\mathbf D^J\left(T\right)\right),
\end{equation}
where $\mathcal S$ is a similarity metric such as NMI or SSD, $\mathcal{T}$ the space of transformations, and $\mathbf D^I$ and $\mathbf D^J$ are {vector-valued} SR of $I$ and $J$. This is different from the method proposed in \cite{Wachinger2012} where {scalar} SR are considered.
Several strategies can be considered to define $\mathbf D$, one of which is the MIND descriptor \cite{Heinrich2012}, which encodes local self-similarities. The NGF method is, on the other hand, expressed at the metric level $\mathcal S$ and describes the similarity between the normalized gradient fields of the fixed and the moving images. One of its formulations is:
\begin{equation}
\mathcal S_\text{NGF} := -\frac{1}{2}\int_{\mathbf x \in \Omega} \langle \mathbf n_\varepsilon\left(I\right), \mathbf n_{\varepsilon}\left(J\left(T\right)\right)\rangle ~\mathrm{d}{} \mathbf x,
\end{equation}
where $\mathbf n_\varepsilon(I)$ and $\mathbf n_\varepsilon(J)$ are the normalized gradient fields of the fixed and moving images \cite{Haber2006}:
\begin{equation}
\mathbf n_\varepsilon\left(I\right) :=\frac{\nabla I}{\sqrt{\nabla I^T \nabla I+\varepsilon^2}},
\end{equation}
with $\varepsilon$ an estimate of the noise level of $I$.
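As an illustration, the normalized gradient field and a discrete counterpart of $\mathcal S_\text{NGF}$ can be sketched in a few lines of NumPy. This is a minimal 2-D sketch, not the implementation used by any of the cited pipelines, and the function names are ours:

```python
import numpy as np

def ngf(img, eps):
    """Normalized gradient field n_eps(I) of a 2-D image."""
    # Central-difference gradient, one array per spatial dimension.
    g = np.stack(np.gradient(img.astype(float)), axis=-1)   # (H, W, 2)
    # Pointwise norm sqrt(|grad I|^2 + eps^2), as in the definition above.
    norm = np.sqrt(np.sum(g**2, axis=-1) + eps**2)
    return g / norm[..., None]

def ngf_similarity(fixed, moving, eps):
    """Discrete S_NGF: negated mean inner product of the two fields."""
    nf, nm = ngf(fixed, eps), ngf(moving, eps)
    return -0.5 * np.mean(np.sum(nf * nm, axis=-1))
```

For identical images the value reduces to $-\tfrac{1}{2}$ times the mean squared field magnitude, so it lies between $-0.5$ (strong gradients everywhere) and $0$.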
A main advantage of NGF over local descriptors such as MIND is that it encodes the orientations of image structures identified by the image gradient. In principle, normalized gradient vectors can be directly considered as a vector-valued structural representation rather than at the metric level.
Here, we propose to use edge-based vector fields (EBF) normally used for image segmentation for this purpose, a technique we call Vector Field Similarity (VFS).
EBF such as the popular gradient vector flow (GVF) \cite{Xu1998} are smooth vector fields derived from edge information and oriented towards edges. EBF extend the capture range of deformable models and reduce sensitivity to noise through regularization of the field orientations in homogeneous, edge-free regions (Fig. \ref{figVFC}c-f). They can also show some contour-completion abilities in the case of missing edges \cite{Jaouen2019,Jaouen2019a}. These properties are desirable not only for image segmentation with active contours but also for image registration, and may help overcome some of the limitations of previously proposed gradient-based alignment methods such as NGF. For example, EBF orientations in homogeneous regions contribute favorably to image alignment, contrary to NGF, where vectors point in random directions due to noise (Fig. \ref{figVFC}b).
\subsection{Vector Field Convolution fields}
\begin{figure}[h]
\centering
\subfigure[T1-weighted BrainWeb image.]{
\includegraphics[width=.45\linewidth]{graphics/T1}
}
\subfigure[T2-weighted BrainWeb image.]{
\includegraphics[width=.45\linewidth]{graphics/T2}
}
\subfigure[Noise level: $0\%$]{
\includegraphics[width=.9\linewidth]{graphics/T1vsT1_sig00mod}
}
\subfigure[Noise level: $9\%$]{
\includegraphics[width=.9\linewidth]{graphics/T1vsT1_sig09mod}
}
\caption{\label{figTranslationMono} Effect of noise on a self-translation study along the coronal axis using the T1-weighted BrainWeb image shown in (a), for both normalized gradient field (NGF) and vector field similarity (VFS). The value shown is the average dot product, negated by convention to display attraction basins. VFS results are shown for several values of the nonlinear smoothing VFC parameter $\gamma$.}
\end{figure}
\begin{figure}[h]
\centering
\subfigure[Noise level: $0\%$]{
\includegraphics[width=.90\linewidth]{graphics/T1vsT2_sig00mod}
}
\subfigure[Noise level: $9\%$]{
\includegraphics[width=.90\linewidth]{graphics/T1vsT2_sig09mod}
}
\caption{\label{figTranslationMulti} Multimodal (multiparametric) translation study between the T1 and T2 images of the BrainWeb database. Due to contrast inversions between the modalities, the NGF sign in (a) is reversed compared to Fig. \ref{figTranslationMono}c. To ease visual comparison, we negate the values for NGF in (b) and scale all profiles between 0 and 1.}
\end{figure}
In the present work, without loss of generality, we choose vector field convolution (VFC) \cite{Li2007} to generate robust and smooth EBF. VFC fields share several desirable properties with GVF fields such as large capture range while being more computationally efficient, requiring only $n$ convolutions with a kernel in $n$-dimensional images \cite{Xu2020}. This property is especially useful when considering large volumetric medical images. VFC fields also demonstrate superior robustness to impulse noise compared to GVF \cite{Li2007}. A VFC field $\mathcal F$ is expressed as:
\begin{equation}
\mathcal F = f * \mathcal K,
\end{equation}
where $f$ is an edge map typically derived from structural information such as the image gradient, and $*$ is the convolution operation. In our experiments, we chose ${f=\left\|\nabla I\right\|^2}$ for simplicity, but more elaborate approaches such as edge maps based on local structure tensors \cite{Jaouen2014} or image pre-processing \cite{Bazan2007} can naturally be considered. $\mathcal K$ is a \textit{vector field kernel}, a vector kernel whose orientations point towards its center with a magnitude $m$ decreasing with the distance $r$ to the center \cite{Li2007}:
\begin{equation}
m = \frac{1}{r^\gamma+\epsilon},
\end{equation}
where $\epsilon$ is a small positive value avoiding division by zero at the center, and $\gamma$ is a fixed parameter that controls the rate of decrease.
To illustrate the properties of VFC fields, Fig. \ref{figTranslationMono} shows a self-translation study using the BrainWeb \cite{Cocosco1997} T1-weighted MRI (Fig. \ref{figTranslationMono}a) for two distinct noise levels ($0\%$ and $9\%$, corresponding to the minimum and maximum noise levels of the BrainWeb database). Results for increasing values of the smoothing parameter $\gamma$ are shown for VFS. The degree of co-alignment between the image and its translated version is measured as the average dot product between the fixed and the moving fields, using either normalized VFC fields or NGF fields, i.e., perfect alignment has magnitude $1$. To respect registration conventions, the negative of the similarity value is shown. Even in the idealized case of self-registration without noise (Fig. \ref{figTranslationMono}c), the attraction basin of the NGF quickly becomes non-convex. This is due to the inversion of the gradient signs even at small translation shifts. This non-convexity increases with noise, narrowing the capture range of gradient similarity to a few voxels (Fig. \ref{figTranslationMono}d). The dependency of NGF on the sign of the vectors can be lifted by squaring the magnitude of the dot product, as proposed in previous related works \cite{Haber2006,Pluim2000}. However, squaring first-order gradient information leads to noise amplification. On the contrary, VFS alleviates this need while allowing tunable nonlinear smoothing to further reduce sensitivity to noise. More essentially, VFS also provides some form of feature selection by accumulating the weight of relevant edges while smearing out noise \cite{Li2007}. The effect of VFS on image registration is mainly determined by the degree of smoothness of the EBF field. In the case of VFC, this smoothness is controlled by a single parameter $\gamma$, which specifies the tradeoff between weak-edge preservation and noise reduction.
The higher $\gamma$, the less the field is smoothed, and VFC field similarity becomes similar to NGF field similarity. More elaborate EBF can also be considered depending on the context, such as generalized GVF fields \cite{Xu1998a} or fields dedicated to vector-valued images \cite{Jaouen2013,Jaouen2014}, which may use additional parameters to better tune edge preservation (through, e.g., gradient thresholding or nonlinear structure tensor smoothing). However, in the present case of VFC-based VFS registration, $\gamma$ is the single additional parameter that needs to be tuned for a specific registration application, acting as a tradeoff between robustness to noise and preservation of the local orientations of image structures.
Fig. \ref{figTranslationMulti} shows a translation study similar to Fig. \ref{figTranslationMono} in a multimodal context, where the translated image is replaced by the T2-weighted BrainWeb image (Fig. \ref{figTranslationMono}b). Due to global contrast inversion between T1 and T2 weightings, the sign of the NGF is flipped with respect to Fig. \ref{figTranslationMono} (Fig. \ref{figTranslationMulti}a). Also, the maximum alignment value no longer achieves a magnitude of $1$ as the images are not identical. The lack of convexity of the attraction basin of the NGF becomes more prominent with added noise (Fig. \ref{figTranslationMulti}b), while on the contrary VFS consistently provides a smoother energy landscape for various values of $\gamma$.
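The negated average dot product used in these translation studies can be reproduced in spirit with a short sketch. For simplicity, the sketch translates the field itself with periodic boundaries (`np.roll`) rather than recomputing the field on a shifted image, which is equivalent up to boundary effects; the function name is ours:

```python
import numpy as np

def mean_dot_profile(img, max_shift, eps):
    """Negated mean dot product of NGF-like fields vs. translation shift."""
    # Normalized gradient field of the image (2-D).
    g = np.stack(np.gradient(img.astype(float)), axis=-1)
    n = g / np.sqrt(np.sum(g**2, axis=-1, keepdims=True) + eps**2)
    profile = []
    for s in range(-max_shift, max_shift + 1):
        shifted = np.roll(n, s, axis=0)   # translate the field periodically
        profile.append(-np.mean(np.sum(n * shifted, axis=-1)))
    return np.array(profile)
```

By the Cauchy-Schwarz inequality the profile attains its minimum at zero shift, and plotting it for different noise levels reproduces the qualitative narrowing of the attraction basin discussed above.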
Contrary to the original NGF approach, which is formulated at the similarity metric level and where the alignment of gradient vectors is directly optimized through dedicated alignment metrics, we consider EBF to be vector-valued structural representations of the underlying images, similarly to the philosophy behind (scalar-valued) entropy or Laplacian structural representations \cite{Wachinger2012} or the (vector-valued) MIND approach \cite{Heinrich2012}. This point of view enables the use of any SM deemed relevant to the problem at hand to evaluate VFS, such as, e.g., SSD, NMI, or normalized cross-correlation (NCC).
\section{Evaluation process and datasets}
The difficulty of validating image registration methods has been largely discussed and argued by experts in the field over the last decade \cite{Murphy2011,Rohlfing2013,Viergever2016}. A consensus is that the preferred validation strategy should be to use dedicated datasets carefully annotated with dense point landmarks (of the order of several hundred per organ) and to rely on average distance-based metrics such as the mean target registration error (TRE). This type of ground truth is naturally extremely difficult to produce and is, as of today, only publicly available for lung CT studies \cite{Castillo2009,Castillo2013,Murphy2011}. Due to the lack of annotations, using surrogates for landmark-based validation is nevertheless often the only strategy available and should be done carefully. A study by Rohlfing showed that a great number of DIR studies published in top journals and international conferences did not respect validation standards, relying too heavily on validation surrogates such as unsupervised image similarity metrics or volumetric overlaps of anatomical structures that are not sufficiently small \cite{Rohlfing2013}. These observations remain valid for many more recent studies.
Another challenge with DIR validation is to demonstrate the superiority with respect to the state-of-the art. DIR pipelines contain a large quantity of components that can have a critical influence on the results \cite{Klein2010,Modersitzki2009,Tustison2014}, and success is most often achieved through intensive practice and extensive parameter exploration. Due to the size of the parameter space and to nonlinear interdependencies between these parameters, it is however extremely difficult to establish a fair ranking among different methods \cite{Murphy2011}. This task is nevertheless facilitated by active communities of software users such as \textit{elastix} \cite{Klein2010} or ANTs \cite{Tustison2014} and challenge organizers sharing best practice and pitfalls associated with various use cases for different anatomical locations, the most common of which being the brain and the lungs.
To partly address these issues with the validation of image registration algorithms, we evaluated the proposed VFS-based registration approach using existing DIR pipelines that were previously validated by research groups, when available on the same public image datasets used in our experiments. All else being equal, we evaluated the change in performance using VFS by substituting EBF-based SR for the intensity images. That is, instead of the image-based similarity metric $\mathcal S \left(I, J\right)$ used in a given pipeline, we averaged the same metric over the $n$ components of the EBF:
\begin{equation}
\mathbf S \left(\mathbf D^I, \mathbf D^J\right) = \frac{1}{n} \sum_{i=1}^n \mathcal S \left(D_i^I, D_i^J\right),
\end{equation}
where $D_i^I$ and $D_i^J$ are the $i^\text{th}$ components of $\mathbf D^I$ and $\mathbf D^J$ ($n=3$ for volumetric images).
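The component-wise averaging above is straightforward to express in code. The following is a minimal sketch, with SSD as one possible choice of the scalar metric $\mathcal S$ and function names of our own choosing:

```python
import numpy as np

def vfs_similarity(D_fixed, D_moving, metric):
    """Average a scalar similarity metric over the n field components.

    D_fixed / D_moving: sequences of the n component images of the EBF.
    metric: any scalar similarity S(a, b) between two component images.
    """
    assert len(D_fixed) == len(D_moving)
    return np.mean([metric(a, b) for a, b in zip(D_fixed, D_moving)])

def ssd(a, b):
    """Sum of squared differences, one possible choice for S."""
    return np.sum((a - b) ** 2)
```

Because the aggregation is agnostic to `metric`, the same wrapper accommodates SSD, NCC, or NMI implementations without modifying the rest of the pipeline, which is the point of formulating VFS at the representation rather than the metric level.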
In this context, rather than absolute performance, the objective is to demonstrate that, regardless of the pipeline considered, substituting the vector-valued structural representation for the intensity image leads to improved results in several registration scenarios. In addition, this constrained validation setup eases comparative evaluation by removing potential bias due to algorithm re-implementation. Such a choice is not optimal for VFS, which could likely benefit from further optimization.
In all our experiments, $\mathbf D^{I,J}$ are VFC fields computed using a discrete kernel support size of $100$ voxels. For practical reasons, we implemented our own version of VFC in C++ using ITK \cite{Avants2014}. A Matlab implementation provided by the authors of VFC is available online\footnote{http://viva-lab.ece.virginia.edu/pages/toolbelt.html}. For all datasets, the VFC exponent parameter $\gamma$ controlling the degree of smoothness of the EBF was empirically selected in the range $[2.5,4.5]$ so as to maximize the studied evaluation metric and is specified for each experiment in the next section.
We used several publicly available datasets covering diverse anatomical locations, including brain MRI and lung CT imaging, which are two common applications for image registration. We also propose the use of a publicly available abdominal MRI segmentation dataset, which is well-suited as a surrogate for the validation of multimodal image registration since it focuses on small anatomical structures (kidneys). The rationale behind the choice of the datasets was to cover a large variety of registration scenarios while encouraging reproducibility. More specifically, we used both the Hammers\footnote{https://brain-development.org/brain-atlases/adult-brain-atlases/} \cite{Hammers2003} and IBSR18\footnote{https://www.nitrc.org/projects/ibsr} brain MRI datasets, the DIR-Lab\footnote{https://www.DIR-Lab.com/} thoracic CT datasets \cite{Castillo2009}, as well as the CHAOS\footnote{https://chaos.grand-challenge.org/} abdominal multiparametric MRI segmentation dataset \cite{Chaos2020}, which we propose to use for the first time for image registration purposes.
\subsection{\label{sec3a} Landmark-based validation on lung 4DCT images}
The only public datasets providing dense landmark correspondences are the three DIR-Lab studies \cite{Castillo2009,Castillo2009a,Castillo2013} and the EMPIRE10 challenge dataset \cite{Murphy2011}, which all concern lung 4DCT studies. The validation of EMPIRE10 results is indirect, as it requires a written submission and participation through the challenge website. In our experiments, we focused on the three DIR-Lab datasets, which provide access to $300$ landmark correspondences between the end-inhale and end-exhale phases for $20$ patient cases. The DIR-Lab data are divided into three sub-datasets, hereafter referred to as studies DIR-Lab-4DCT-1 \cite{Castillo2009}, DIR-Lab-4DCT-2 \cite{Castillo2009a}, and DIR-Lab-COPD \cite{Castillo2013}, showing variations in displacement amplitudes between the two breathing phases.
We evaluated VFS in the context of thoracic imaging using two existing dedicated registration pipelines:
\begin{itemize}
\item a conventional \textit{elastix} DIR pipeline, hereafter referred to as ELX, constituted of three consecutive stages (one affine and two B-spline stages using the NCC metric) that was primarily designed for the EMPIRE10 lung registration challenge where it achieved the second best ranking at the time of submission \cite{Murphy2011}. An \textit{elastix} parameter file\footnote{http://elastix.bigr.nl/wiki/index.php/Par0011} was released by the authors allowing for an objective comparison with VFS. The second DIR stage (ELX-2) is used to refine results from the first DIR stage (ELX-1) using lung masking. In the following, the corresponding VFS-based results are referred to as VFS-1 and VFS-2 respectively.
\item the \textit{pTV} \footnote{https://github.com/visva89/pTVreg} pipeline based on isotropic total variation regularization \cite{Vishnevskiy2016}, which is currently the best performing method on the DIR-Lab datasets. The rationale behind pTV is to allow for sharp transitions in the deformation fields along sliding interfaces through total variation-based regularization of the control grid displacements of first order B-splines. Such transitions are common in the lung and impossible to render with $\ell_2$-based smoothness penalization. We found the mean TRE values reported in \cite{Vishnevskiy2016} to be slightly sub-optimal when running the algorithm and we therefore report two sets of measurements, referred to as pTV-1 for values reported in \cite{Vishnevskiy2016} and pTV-2 for the updated values.
\end{itemize}
In this experiment, the VFS parameter $\gamma$ was set to $\gamma=3$.
\subsection{\label{sec3b} Subcortical volume overlaps in MRI}
A major recommendation of Rohlfing's study \cite{Rohlfing2013} is that, in the absence of landmark-based correspondences, "\textit{only overlap of sufficiently local labeled ROIs could distinguish reasonable from poor registrations}", the author arguing that \textit{"smaller, more localized ROIs approximate point landmarks, and their overlap thus approximates point-based registration error"}. These principles have guided our validation strategy in the remainder of this section.
The Hammers dataset \cite{Hammers2003} is a brain T1-weighted MRI dataset consisting of $30$ subjects that were manually segmented into $83$ regions. Similarly to section \ref{sec3a}, we compared our approach to an existing study by Qiao et al. \cite{Qiao2015}, for which parameter files are also available on the \textit{elastix} online database\footnote{http://elastix.bigr.nl/wiki/index.php/Par0035}. We reproduced the results described in \cite{Qiao2015} by performing inter-subject registration for all patients, leading to $870$ fixed-moving image pairs. For each subject, we studied the average Dice similarity coefficient (DSC) in the sixteen labeled subcortical regions (left and right hippocampus, amygdala, cerebellum, caudate nucleus, nucleus accumbens, putamen, thalamus and pallidum) after image registration with the $29$ remaining images. We focused on subcortical structures in agreement with the recommendations of Rohlfing \cite{Rohlfing2013}. Although the cerebellum arguably does not meet these recommendations due to its large volume, it was kept in our experiments for completeness. Left and right segmentations were merged into bilateral segmentations for clarity, leading to $8$ labeled regions. A similar study was also conducted on the IBSR18 brain dataset, consisting of $18$ healthy brain MRI subjects. The VFS parameter $\gamma$ was set to $\gamma=4$.
\subsection{\label{sec3c}Renal volume overlaps in multiparametric abdominal MRI}
We are also interested in the validation of VFS in a multimodal context. As of today, the only public multimodal dataset dedicated to image registration is the RIRE database for rigid brain alignment\footnote{http://www.insight-journal.org/rire}. To diversify anatomical locations, we propose the use of the recent Combined Healthy Abdominal Organ Segmentation (CHAOS) database \cite{Chaos2020}, a multiparametric MRI dataset for abdominal organ segmentation. Abdominal MRI is a less studied, yet challenging, problem for image registration \cite{Carrillo2000}. The CHAOS database consists of a multiparametric MRI study showing the segmentation of four abdominal organs (liver, spleen, right and left kidneys) acquired with two different pulse sequences (T1-DUAL and T2-SPIR) for $20$ patients, neither co-registered nor resliced into the same image sampling space. In addition to covering a less explored anatomical location, several characteristics of the CHAOS dataset suggest its potential interest as a surrogate for image registration validation. A major advantage is that the manual ground-truth segmentation masks of the different organs were produced independently in the two modalities. The kidneys are also relatively small organs compared to the field of view (FoV), which follows the recommendations of \cite{Rohlfing2013}. Finally, the relative symmetry of the kidneys enables a bilateral evaluation of the registration performance across the FoV.
For these reasons, we have studied the average Dice overlap measurements for the left and right kidneys in the CHAOS dataset as a new surrogate measurement for multimodal image registration accuracy. As previously, we used an existing registration pipeline based on \textit{elastix}\footnote{http://elastix.bigr.nl/wiki/index.php/Par0057} (referred to as ELX) for multiparametric MRI to evaluate VFS objectively. This pipeline was originally proposed in \cite{Jansen2019} for the registration of diffusion weighted MRI to abdominal dynamic contrast enhanced MRI for liver segmentation and tumor detection. It is a two-stage registration procedure (rigid then B-spline) and uses NMI as a similarity metric. The VFS parameter $\gamma$ was set to $\gamma=2.5$.
\section{Results}
\subsection{\label{sec4a} Landmark-based validation on lung 4DCT images}
Tables \ref{tab1} and \ref{tab2} show results for the ELX comparative study on the three DIR-Lab datasets. Only the results for the final B-spline stage are shown in Table \ref{tab2} for clarity. A consistent improvement in TRE was observed for all patients with the simple substitution of directional VFC-based structural representations for intensity images. After the first stage, an average mean TRE of $3.34\pm1.77$ mm was achieved for ELX-1 against $2.23\pm1.06$ mm for VFS-1. Results were further improved with the second B-spline stage, with an average mean TRE of $2.17\pm1.04$ mm for ELX-2 (performing only slightly better than the first VFS stage) against $1.84\pm0.64$ mm after the second VFS stage.
Table \ref{tab3} shows mean TREs for the DIR-Lab-4DCT datasets using isotropic total variation regularization. Results were more mixed depending on the dataset considered, with significant improvements for VFS in $4$ out of $5$ cases of study \cite{Castillo2009}. The substitution of VFS sets two new record TRE scores on the DIR-Lab challenge leaderboard for the first two cases (mean TREs of $0.69$ mm and $0.67$ mm for cases \#1 and \#2, respectively). However, slightly lower performance than the original pTV was achieved on the second 4DCT dataset \cite{Castillo2009a}. The study \cite{Castillo2009} shows smaller average displacements prior to image registration than the other two \cite{Castillo2009a,Castillo2013}, which may explain the variations in performance. Nevertheless, we recall that these results were not optimized for VFS and that the only change made was to provide EBF components as substitutes for intensity images.
\begin{table}[htbp]
\centering
\caption{\label{tab1}DIR-Lab-4DCT datasets \cite{Castillo2009,Castillo2009a}. Comparison with two-step \textit{elastix} intensity registration. Mean TRE (in mm) for 300 landmarks.}
\begin{center}
\begin{tabular}{c|c c c c c }
\hline
\textbf{Subject} & {Orig.} & {ELX-1} & {ELX-2} & {VFS-1} & {VFS-2}\\
\hline
Study \cite{Castillo2009a}\\
\#1 & $3.89$ & $1.29$ & $1.07$ & $\mathbf{1.06}$ & $\mathbf{1.06}$\\
\#2 & $4.34$ & $1.55$ & $1.08$ & $\mathbf{1.02}$ & ${1.05}$\\
\#3 & $6.94$ & $2.37$ & $1.56$ & ${1.44}$ & $\mathbf{1.40}$\\
\#4 & $9.83$ & $2.80$ & $2.00$ & $\mathbf{1.86}$ & ${1.88}$\\
\#5 & $7.48$ & $3.25$ & $2.26$ & $\mathbf{2.07}$ & $\mathbf{1.98}$\\
\hline
Study \cite{Castillo2009}\\
\#6 & $10.89$ & $3.60$ & $2.45$ & $2.87$ & $\mathbf{2.19}$\\
\#7 & $11.03$ & $5.04$ & $2.59$ & $3.49$ & $\mathbf{2.00}$\\
\#8 & $14.99$ & $7.40$ & $4.73$ & $4.33$ & $\mathbf{3.29}$\\
\#9 & $7.92$ & $3.00$ & $1.93$ & $1.98$ & $\mathbf{1.67}$\\
\#10 & $7.30$ & $3.06$ & $2.05$ & $2.20$ & $\mathbf{1.86}$\\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{table}[htbp]
\centering
\caption{\label{tab2}DIR-Lab COPD dataset \cite{Castillo2013}. Comparison with two-step \textit{elastix} intensity registration. Mean TRE (in mm) for 300 landmarks.}
\begin{center}
\begin{tabular}{c|c c c}
\hline
\textbf{Subject} & {Orig.} & {ELX-2} & {VFS-2}\\
\hline
Study \cite{Castillo2013}\\
\#1 & $26.33$ & $13.59$ & $\mathbf{8.12}$ \\
\#2 & $21.79$ & $17.83$ & $\mathbf{6.74}$ \\
\#3 & $12.64$ & $5.40$ & $\mathbf{2.87}$ \\
\#4 & $29.58$ & $13.71$ & $\mathbf{11.03}$ \\
\#5 & $30.08$ & $14.20$ & $\mathbf{10.74}$ \\
\#6 & $28.46$ & $12.66$ & $\mathbf{7.07}$ \\
\#7 & $21.60$ & $7.26$ & $\mathbf{5.30}$ \\
\#8 & $26.46$ & $10.50$ & $\mathbf{8.02}$ \\
\#9 & $14.86$ & $6.43$ & $\mathbf{4.07}$ \\
\#10 & $21.81$ & $13.38$ & $\mathbf{10.75}$ \\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{table}[htbp]
\centering
\caption{\label{tab3}DIR-Lab 4DCT datasets. Comparison with pTV intensity registration. Mean TRE (in mm) for 300 landmarks.}\begin{center}
\begin{tabular}{c|c c c c}
\hline
\textbf{Subject} & {Orig.} & {pTV-1} & {pTV-2} & {VFS}\\
\hline
Study \cite{Castillo2009a}\\
\#1 & $3.89$ & $0.76$ & $0.77$ & $\mathbf{0.69}$ \\
\#2 & $4.34$& $0.77$ & $0.75$ & $\mathbf{0.67}$ \\
\#3 & $6.94$ & $0.90$ & $0.93$ & $\mathbf{0.87}$ \\
\#4 & $9.83$ & $1.24$ & $1.26$ & $\mathbf{1.22}$\\
\#5 & $7.48$ & $1.12$ & $\mathbf{1.07}$ & $1.11$\\
\hline
Study \cite{Castillo2009}\\
\#6 & $10.89$ & $0.85$& $\mathbf{0.83}$ &$0.95$\\
\#7 & $11.03$ & $\mathbf{0.80}$ & $\mathbf{0.80}$ & $0.87$\\
\#8 & $14.99$ & $1.34$ & $\mathbf{1.01}$ & $1.05$\\
\#9 & $7.92$ & $0.92$ & $\mathbf{0.91}$ & $0.98$\\
\#10 & $7.30$ & $\mathbf{0.82}$ & $0.84$ & $0.88$\\
\hline
\end{tabular}
\end{center}
\end{table}
\subsection{\label{sec4b} Subcortical volume overlaps in MRI}
\begin{figure}[htbp]
\subfigure[Hammers dataset]{\includegraphics[width=.95\linewidth]{graphics/boxPlot_Hammers.pdf}}
\subfigure[IBSR18 dataset]{\includegraphics[width=.95\linewidth]{graphics/boxPlot_IBSR.pdf}
}
\caption{\label{figBox} Box plots for tissue overlap scores as measured by the DSC in the (a) Hammers and (b) IBSR18 datasets in $8$ subcortical structures. The middle line represents the median. Results are shown as colored pairs in each structure for intensity-based (left box) and VFS-based (right box) registration. }
\end{figure}
\begin{table*}
\begin{center}
\caption{\label{tab4}Hammers dataset - DSC after registration (in \%) averaged over $8$ labeled subcortical regions for each subject. Standard deviations are given in parentheses.}
\begin{tabular}{l|c c c c c c c c c c}
\hline
\textbf{Method} &\#1&\#2&\#3&\#4&\#5&\#6&\#7&\#8&\#9&\#10 \\
\hline
\textbf{Qiao et al. \cite{Qiao2015} }& $76.8$ & $74.9$ & $76.9$ & $75.7$ & $77.2$ & $78.0$ & $74.4$ & $76.3$ & $71.6$ & $73.8$ \\
& ($9.5$) & ($13.8$) & ($11.8$) & ($12.8$) & ($10.3$) & ($10.3$) & ($14.9$) & ($12.6$) & ($15.2$) & ($13.2$) \\
\textbf{VFS} & $\mathbf{77.8}$ & $\mathbf{76.5}$ & $\mathbf{78.4}$ & $\mathbf{78.4}$ & $\mathbf{78.5}$ & $\mathbf{79.3}$ & $\mathbf{74.9}$ & $\mathbf{77.8}$ & $\mathbf{76.0}$ & $\mathbf{75.6}$ \\
& ($8.9$) & ($13.8$) & ($10.6$) & ($10.2$) & ($9.9$) & ($8.5$) & ($14.6$) & ($10.6$) & ($12.1$) & ($13.2$) \\
\hline
&\#11&\#12&\#13&\#14&\#15&\#16&\#17&\#18&\#19&\#20 \\
\hline
\textbf{Qiao et al. \cite{Qiao2015} }& $74.3$ & $76.5$ & $75.3$ & $77.2$ & $75.3$ & $75.2$ & $72.3$ & $76.1$ & $76.1$ & $76.6$ \\
& ($12.5$) & ($13.0$) & ($12.1$) & ($10.9$) & ($13.3$) & ($12.0$) & ($15.7$) & ($11.0$) & ($11.0$) & ($12.0$) \\
\textbf{VFS} & $\mathbf{78.6}$ & $\mathbf{77.7}$ & $\mathbf{78.7}$ & $\mathbf{77.4}$ & $\mathbf{76.6}$ & $\mathbf{78.1}$ & $\mathbf{76.1}$ & $\mathbf{76.7}$ & $\mathbf{79.2}$ & $\mathbf{77.6}$ \\
& ($9.0$) & ($13.8$) & ($9.1$) & ($11.4$) & ($13.5$) & ($10.3$) & ($11.7$) & ($10.0$) & ($8.8$) & ($12.6$) \\
\hline
&\#21&\#22&\#23&\#24&\#25&\#26&\#27&\#28&\#29&\#30 \\
\hline
\textbf{Qiao et al. \cite{Qiao2015} } & $81.0$ & $79.6$ & $78.5$ & $79.1$ & $79.2$ & $76.9$ & $78.9$ & $80.9$ & $75.4$ & $76.5$ \\
& ($8.8$) & ($8.9$) & ($8.5$) & ($10.5$) & ($9.6$) & ($13.4$) & ($11.3$) & ($10.0$) & ($11.2$) & ($12.2$) \\
\textbf{VFS} & $\mathbf{81.8}$ & $\mathbf{81.2}$ & $\mathbf{80.5}$ & $\mathbf{81.8}$ & $\mathbf{79.8}$ & $\mathbf{78.4}$ & $\mathbf{80.0}$ & $\mathbf{82.7}$ & $\mathbf{79.9}$ & $\mathbf{78.8}$ \\
& ($6.9$) & ($7.4$) & ($7.2$) & ($7.8$) & ($9.8$) & ($11.7$) & ($10.3$) & ($7.6$) & ($7.2$) & ($8.3$) \\
\hline
\end{tabular}
\end{center}
\end{table*}
\begin{table*}[htbp]
\begin{center}
\caption{\label{tab5}IBSR18 dataset - DSC after registration (in \%) averaged over $8$ labeled subcortical regions for each subject. Standard deviations are given in parentheses.}
\begin{tabular}{l|c c c c c c c c c}
\hline
\textbf{Method} & \#1 & \#2 &\#3&\#4&\#5&\#6&\#7&\#8&\#9 \\
\hline
\textbf{Qiao et al. \cite{Qiao2015} }& $75.0$ & $75.8$ & $69.9$ & $72.5$ & $74.0$ & $73.4$ & $75.7$ & $71.7$ & $75.6$ \\
& ($7.8$) & ($6.8$) & ($12.0$) & ($8.9$) & ($9.0$) & ($6.9$) & ($7.3$) & ($8.6$) & ($6.9$)\\
\textbf{VFS} &$\mathbf{76.8}$ & $\mathbf{77.6}$ & $\mathbf{73.8}$ & $\mathbf{75.6}$ & $\mathbf{75.2}$ & $\mathbf{76.2}$ & $\mathbf{76.4}$ & $\mathbf{75.7}$ & $\mathbf{77.4}$ \\
& ($9.6$) & ($8.2$) & ($11.6$) & ($10.4$) & ($10.1$) & ($7.7$) & ($9.8$) & ($9.2$) & ($9.3$) \\
\hline
&\#10&\#11&\#12&\#13&\#14&\#15&\#16&\#17&\#18\\
\hline
\textbf{Qiao et al. \cite{Qiao2015}} & $68.7$ & $71.8$ & $73.0$ & $63.5$ & $75.1$ & $67.7$ & $76.9$ & $75.6$ & $79.0$ \\
& ($11.7$) & ($8.8$) & ($10.1$) & ($18.2$) & ($7.6$) & ($12.5$) & ($9.3$) & ($8.5$) & ($7.7$) \\
\textbf{VFS} & $\mathbf{73.0}$ & $\mathbf{74.8}$ & $\mathbf{73.9}$ & $\mathbf{73.7}$ & $\mathbf{78.2}$ & $\mathbf{76.3}$ & $\mathbf{78.4}$ & $\mathbf{80.1}$ & $\mathbf{80.8}$\\
& ($14.0$) & ($11.1$) & ($10.6$) & ($12.0$) & ($8.6$) & ($9.6$) & ($9.2$) & ($7.1$) & ($5.9$) \\
\hline
\end{tabular}
\end{center}
\end{table*}
Table \ref{tab4} summarizes the scores obtained on the $30$ images, where a consistent improvement in volume overlap was achieved for all $30$ subjects. Fig. \ref{figBox} shows the average DSC achieved in each region for the Hammers and IBSR18 datasets. In the Hammers dataset, tissue overlap improved across all regions with VFS, with the largest relative increases in the amygdala ($+3.97\%$), hippocampus ($+3.46\%$), pallidum ($+1.83\%$) and thalamus ($+1.78\%$) (Fig. \ref{figBox}a). DSC improvements with VFS were more pronounced on the IBSR18 dataset, with $+8.44\%$ in the nucleus accumbens, $+6.07\%$ in the amygdala, $+4.16\%$ in the pallidum, $+3.61\%$ in the putamen, $+3.59\%$ in the hippocampus and $+3.13\%$ in the thalamus (Fig. \ref{figBox}b). A reduction in the variability of the results was also observed for all subcortical regions. A likely explanation for the higher gains of VFS on the IBSR18 dataset is that the compared method of Qiao et al. \cite{Qiao2015} was optimized on the Hammers dataset.
After averaging scores over all $870$ registrations and all labels, a global DSC of $78.5\%\pm10.0\%$ was achieved for VFS against $76.5\%\pm11.4\%$ for the baseline. The $p$-value of the corresponding Wilcoxon signed-rank test was $p=0.012$, indicating statistical significance.
Table \ref{tab5} shows similar improvement of volumetric overlap scores on the IBSR18 dataset.
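The DSC used in Tables \ref{tab4} and \ref{tab5} is the standard volume-overlap measure $2|A\cap B|/(|A|+|B|)$ between a warped and a reference label map. A minimal sketch on toy binary masks (the masks are illustrative, not taken from the datasets):

```python
def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks,
    given as flattened sequences of 0/1 voxels."""
    inter = sum(a and b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    return 2.0 * inter / total if total else 1.0

# Toy 1-D masks standing in for flattened segmentation volumes.
a = [1, 1, 1, 0, 0]
b = [0, 1, 1, 1, 0]
print(round(dice(a, b), 4))  # 0.6667
```

In the experiments above, the per-subject score is this coefficient averaged over the $8$ labeled subcortical regions.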
\begin{figure*}[htbp]
\includegraphics[width=\linewidth]{graphics/figChaos_mod}
\caption{\label{figChaos1} Renal volume overlaps using the pipeline proposed in \cite{Jansen2019} for the $20$ cases of the CHAOS dataset. Results marked with a red or yellow arrow are failure cases for ELX or VFS, respectively, defined by a decrease in DSC after registration or a DSC less than or equal to $80\%$.}
\end{figure*}
\subsection{\label{sec4c}Renal volume overlaps in multiparametric abdominal MRI}
Fig. \ref{figChaos1} shows renal volume overlaps for the $20$ images of the CHAOS dataset after registration with ELX or VFS. Despite the similarity in registration scenarios between \cite{Jansen2019} and the present study, ELX led to several failure cases, which we identified as a decrease in renal overlap after registration or a DSC less than or equal to $80\%$ ($10$ cases out of $20$). Using the same pipeline, VFS led to only $5$ failure cases. The average DSC was $68.1\%\pm18.8\%$ before image registration, versus $76.9\%\pm17.8\%$ for ELX and $81.1\%\pm15.1\%$ for VFS, again suggesting better registration accuracy from the substitution of EBF-based structural representations for intensity images.
\section{Discussion and Conclusion}
We have proposed new directional representations for image registration based on the similarity between regularized vector fields normally used in active contour segmentation, a technique we call vector field similarity. Results across a variety of registration scenarios (mono-modal inter-patient, multi-modal intra-patient) and anatomical locations show the potential advantage of such representations over intensity images, with consistent improvement achieved through this substitution. Since we adopt the point of view of structural representations, similarity can be measured with several distance metrics, both mono- and multimodal, and can readily be implemented in and adapted to existing registration pipelines. The main disadvantage of vector-valued directional representations over scalar $n$-dimensional intensity images is that they require $n$ times more memory, which translates into longer registration times. However, we observed that, among conventional metrics, VFS guided by the NCC similarity metric achieved results on average superior to, or on par with, NMI, even for pipelines optimized for NMI (not shown in the paper). The less memory-demanding NCC could therefore be substituted for NMI in VFS-based pipelines, partly compensating for this drawback.
VFS was evaluated within existing registration pipelines to provide a fair and objective comparison with the current state of the art. On the other hand, VFS would likely benefit from additional parameter tuning.
Regarding parameter optimization, the VFC-based similarity enabled us to extend the capture range while providing good localization of the global minimum for all values of the smoothing parameter $\gamma$.
Due to the simplicity of such a substitution, we hope this study will encourage the use of directional structural representations in cases where intensity-based registration does not seem to provide sufficient accuracy. Future investigations will be oriented towards the combination of directional structural representations with unsupervised deep learning-based registration.
\bibliographystyle{IEEEbib}
\section{Introduction}
\IEEEPARstart{I}{n} recent years, convolutional neural networks (CNNs) have achieved great success in image classification \cite{10.1145/3065386}, semantic segmentation \cite{7298965}, face recognition \cite{electronics9081188}, target tracking \cite{7780834} and target detection \cite{ZAFEIRIOU20151}. Algorithms based on convolutional neural networks have become one of the leading technologies in these fields. To further improve the performance of convolutional neural networks, researchers have worked on the network structure, from AlexNet \cite{10.1145/3065386} to VGGNet \cite{Simonyan2015VeryDC}, and then to the deeper ResNet \cite{7780459} and DenseNet \cite{8099726}. Many effective techniques have also been put forward in other areas, such as data augmentation, batch normalization \cite{10.5555/3045118.3045167} and various activation functions.
The loss function is an indispensable part of a CNN model; it drives the update of the model parameters during training. The traditional Softmax loss function is composed of softmax followed by the cross-entropy loss: the output of the neural network passes through the softmax layer to produce posterior probabilities. Because of its fast convergence and good performance, it is widely used in image classification. However, the Softmax loss adopts an inter-class competition mechanism: it only cares about the accuracy of the predicted probability of the correct label, ignores the differences among the incorrect labels, and cannot ensure intra-class compactness and inter-class separation. L-Softmax \cite{10.5555/3045390.3045445} adds an angular constraint on top of Softmax to make the boundaries between classes more pronounced. A-Softmax \cite{8100196} also improves on Softmax, proposing weight normalization and an angular margin to realize the criterion that the maximum intra-class distance be smaller than the minimum inter-class distance. AM-Softmax \cite{8331118} further improves A-Softmax: to speed up convergence, the Euclidean feature space is converted to a cosine feature space, and $\cos(m\theta)$ is changed to $\cos\theta - m$. Center Loss \cite{10.1007/978-3-319-46478-7_31} was the first to add constraints on the sample features before the classification layer on top of Softmax (the mean square error between the sample features and learned class centers restricts the intra-class distance), but the distance between similar classes still cannot be well separated.
The above improvements to the Softmax loss increase classification accuracy in face recognition (where inter-class distances are relatively small), but in general image classification (where inter-class distances are larger), the Softmax loss still performs best. The cross-entropy commonly used as the loss function mainly uses information from the correct label to maximize the posterior probability of the sample, largely ignoring the information of the remaining incorrect labels. Complement Objective Training (COT) \cite{2019arXiv190301182C} therefore proposed alternating training between correct-label and incorrect-label information, which not only improves model performance but is also more robust to single-step adversarial attacks; it does not, however, consider non-entropy-based complement objectives. Liang et al. \cite{9075079} proposed a new loss function, Near Classifier Hyper-Plane (N-CHP) Loss, under a new CNN training framework, so that the learned sample features have minimal intra-class distance and lie close to the classifier hyperplane; the learned knowledge is then transferred to the Softmax loss by Loss Transferring, which greatly improves classification performance. However, when the classification task has many classes, N-CHP Loss tends to fuse the sample features, resulting in poor performance.
From the perspective of maximizing the inter-class distance and minimizing the intra-class distance, PEDCC-Loss \cite{2019arXiv190200220Z}\cite{8933403} predefines evenly-distributed class centroids to replace the continuously updated class centers of Center Loss, thereby maximizing the inter-class distance, and uses AM-Softmax together with a mean square error (MSE) loss to enforce a compact intra-class feature distribution and well-separated inter-class feature distributions. PEDCC-Loss improves classification accuracy in both face recognition and general image classification, but still retains a loss constraint on the posterior probability, so the sample feature distribution is still not in an optimal state.
In pattern recognition applications, a convolutional neural network classifier generally consists of convolutional layers, which extract features from the input, and fully connected layers, which perform the classification. For classification, convolutional neural networks mostly use end-to-end learning: the output of the network is constrained by a loss function, with few or no constraints on the extracted features. This article proposes a Softmax-free loss function (POD Loss) based on a predefined optimal distribution of latent features for CNN classifiers. On the one hand, predefined evenly-distributed class centroids (PEDCC) replace the weights of the classification linear layer in the network (the weights are fixed during training to maximize the inter-class distance), so as to optimize the distribution of latent features. On the other hand, by introducing a decorrelation mechanism, the improved Cosine Loss restricts the cosine distance between the sample feature vector and the PEDCC, while the correlations between sample features are limited so that the extracted latent features are maximally effective. POD Loss discards the constraint on the posterior probability found in traditional loss functions and only restricts the extracted sample features, to achieve the optimal distribution of latent features. Finally, the cosine distance is used for classification, yielding high classification accuracy.
The location of POD Loss and the whole network structure are shown in Fig. 1. Section \uppercase\expandafter{\romannumeral3} gives the details of the method.
\begin{figure*}
\centering
\includegraphics[width=0.9\textwidth]{Network_Structure.eps}
\caption{POD Loss of CNN classifier. POD Loss is a combination of Cosine Loss and Self-correlation Constraint (SC) Loss, \textbf{x} is the normalized latent features of the sample, FC2 (PEDCC) is the linear classification layer with parameter solidification. POD Loss is completely a constraint on the optimal distribution of latent features of the samples.}
\end{figure*}
Our main contributions are as follows:
\begin{itemize}
\item The PEDCC we proposed previously is adopted in our classification model, and only the output of the feature extraction layers of the convolutional neural network is constrained, to achieve the optimal distribution of latent features. The Softmax loss function is discarded, the weights of the classification layer are fixed, and the cosine distance is used for classification.
\item The output of the feature extraction layers is constrained in two ways: an improved mean square error (MSE) loss constrains the cosine distance between the sample features and their PEDCC class centroid, and the correlation between latent feature dimensions is minimized to improve classification accuracy.
\item Experiments were conducted on multiple classification datasets. Compared with the traditional softmax plus cross-entropy loss and the typical Softmax-related loss functions AM-Softmax Loss, COT-Loss and PEDCC-Loss, the classification accuracy of POD Loss is clearly better, and the network training converges more easily.
\end{itemize}
\section{Related works}
\subsection{Loss function of classification}
In the field of classification, many different loss functions are used in convolutional neural networks for end-to-end learning. For multi-class problems, the Softmax loss function is usually chosen; it performs well and converges easily. The Softmax loss function is as follows:
\begin{normalsize}
\begin{equation}
L_{Softmax}=\frac{1}{N}\sum_{i} -log{\frac{e^{z_{y_i}}}{\sum_{j}e^{z_j}}}
\end{equation}
\end{normalsize}
where $N$ represents the number of samples, $z_{y_i}$ is the output value of the last fully connected layer for the correct class $y_i$, and $z_j$ is the output value of the last fully connected layer for the $j$-th class. Here $z_{y_i}=W^{T}_{y_i}x_i$, that is, $z_{y_i}=||W^{T}_{y_i}||\cdot||x_i||\cdot \cos(\theta_{y_i})$, where $W_{y_i}$ is the corresponding weight vector of the fully connected layer and $x_i$ is the input feature of the $i$-th sample. The Softmax loss function therefore becomes:
\begin{normalsize}
\begin{equation}
L_{Softmax}=\frac{1}{N}\sum_{i} -log{\frac{e^{||W^{T}_{y_i}||\cdot||x_i||\cdot cos(\theta_{y_i})}}{\sum_{j}e^{||W^{T}_j||\cdot||x_i||\cdot cos(\theta_j)}}}
\end{equation}
\end{normalsize}
In general image classification tasks, the performance of Softmax Loss is unquestioned, but for face recognition, where inter-class distances are small, its classification accuracy is not satisfactory.
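As a sanity check of Eq. (1), the per-sample Softmax loss can be computed directly from the logits. A minimal sketch with the usual log-sum-exp stabilization (the logit values are illustrative):

```python
import math

def softmax_loss(logits, label):
    """Cross-entropy of the softmax of one logit vector, computed
    with the standard max-shift for numerical stability."""
    m = max(logits)
    log_z = m + math.log(sum(math.exp(z - m) for z in logits))
    return log_z - logits[label]

# Uniform logits over 4 classes give a loss of log(4).
print(round(softmax_loss([0.0, 0.0, 0.0, 0.0], 0), 4))  # 1.3863
```

Averaging this quantity over a batch gives $L_{Softmax}$ as written above.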
L-Softmax introduces a coefficient $m$ (a positive integer) on the angle $\theta$; since the cosine function is monotonically decreasing on $[0, \pi]$, $\cos(m\theta) \leq \cos(\theta)$ for $\theta \in [0, \frac{\pi}{m}]$. In this way, the model learns a larger inter-class distance and a smaller intra-class distance; the larger the value of $m$, the harder the learning task.
A-Softmax is similar to L-Softmax in that it increases the angular margin, and additionally normalizes the weights $||W||$. Building on this, AM-Softmax proposes changing $\cos(m\theta)$ to $(\cos\theta - m)$, turning the multiplicative margin into an additive one, and also normalizes the input features $||x||$. Compared with A-Softmax, the formula is simpler in both form and computation. The final AM-Softmax is written as:
\begin{small}
\begin{equation}
L_{AM-Softmax}=-\frac{1}{N}\sum_{i} log{\frac{e^{{s\cdot}({\cos\theta_y}_i-m)}}{e^{{s\cdot}({\cos\theta_y}_i-m)}+\begin{matrix} \sum_{j=1,j\ne y_i}^c e^{{s\cdot}{\cos\theta_j}}\end{matrix}}}
\end{equation}
\end{small}
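The AM-Softmax loss above reduces to an ordinary cross-entropy on scaled cosines once the margin is subtracted from the target class. A per-sample sketch (the cosine values and the hyperparameters $s$ and $m$ are illustrative):

```python
import math

def am_softmax_loss(cosines, label, s=30.0, m=0.35):
    """AM-Softmax for one sample: the margin m is subtracted from the
    target cosine, all cosines are scaled by s, then cross-entropy."""
    logits = [s * (c - m) if j == label else s * c
              for j, c in enumerate(cosines)]
    mx = max(logits)
    log_z = mx + math.log(sum(math.exp(z - mx) for z in logits))
    return log_z - logits[label]

loss = am_softmax_loss([0.9, 0.1, -0.2], label=0)
print(loss > 0)  # True
```

Setting $m=0$ recovers the plain (normalized, scaled) Softmax loss; the margin makes the loss strictly larger for the same cosines, which is what forces tighter intra-class clustering.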
Center Loss is no longer limited to constraints on the network output. On top of Softmax, it adds a mean square error (MSE) term between each sample feature and the feature center of its class to reduce the intra-class distance. The class feature centers are continuously updated during network training.
\begin{normalsize}
\begin{equation}
L_{Center}=L_{Softmax}+\frac{\lambda}{2}\sum^M_{i=1}||\bm{x_i}-\bm{c_{y_i}}||^2
\end{equation}
\end{normalsize}
where $M$ represents the number of samples, $\bm{x_i}$ is the input feature of the $i$-th sample, $\bm{c_{y_i}}$ is the center feature of the class to which the $i$-th sample belongs (continuously updated after initialization during learning), and $\lambda$ is a weighting factor.
Cross-entropy mainly uses information from the correct label to maximize the likelihood of the data, largely ignoring the information of the remaining incorrect labels. COT argues that, in addition to the correct label, the incorrect labels should also be exploited during training, which can effectively improve model performance. The training strategy alternates between the correct label and the incorrect labels: the correct label is constrained by the usual cross-entropy, while the incorrect labels are constrained by the complement entropy, expressed as follows:
\begin{normalsize}
\begin{equation}
C(\hat{y}_{\bar{c}})=-\frac{1}{N}\sum^N_{i=1}\sum^K_{j=1,j \neq g}(\frac{\hat{y}_{ij}}{1-\hat{y}_{ig}})log(\frac{\hat{y}_{ij}}{1-\hat{y}_{ig}})
\end{equation}
\end{normalsize}
where $N$ is the number of samples, $K$ is the number of classes, $\hat{y}_{ij}$ is the predicted probability of the incorrect label $j$ for the $i$-th sample, and $\hat{y}_{ig}$ is the predicted probability of the correct label $g$ for the $i$-th sample. Since entropy is maximized when all probabilities are equal, this term pushes $\hat{y}_{ij}$ towards $\frac{1-\hat{y}_{ig}}{K-1}$, essentially flattening the predicted probabilities of the incorrect labels.
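For a single sample (dropping the $1/N$ average in Eq. (5)), the complement entropy is the Shannon entropy of the incorrect-class probabilities renormalized by $1-\hat{y}_{ig}$. A minimal sketch (the probability vector is illustrative):

```python
import math

def complement_entropy(probs, label):
    """Per-sample complement entropy: entropy of the incorrect-class
    probabilities, renormalized so they sum to 1."""
    rest = 1.0 - probs[label]
    h = 0.0
    for j, p in enumerate(probs):
        if j != label and p > 0:
            q = p / rest
            h -= q * math.log(q)
    return h

# Uniform incorrect-class probabilities give the maximum, log(K-1).
print(round(complement_entropy([0.7, 0.1, 0.1, 0.1], 0), 4))  # 1.0986
```

COT maximizes this quantity in its alternating phase, driving the incorrect-label probabilities toward the uniform value $\frac{1-\hat{y}_{ig}}{K-1}$.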
PEDCC-Loss \cite{2019arXiv190200220Z}\cite{8933403} proposes predefined evenly-distributed class centroids to replace the continuously updated class centers of Center Loss, and uses the fixed PEDCC weights in place of the weights of the classification linear layer to maximize the inter-class distance. At the same time, a constraint on the latent features is added (similar to Center Loss, the mean square error (MSE) between the sample features and their PEDCC center is computed), and AM-Softmax is also applied. This makes the intra-class feature distribution more compact and the inter-class feature distributions more distant. The overall system diagram is shown in Fig. 2. The PEDCC-Loss expression is as follows:
\begin{small}
\begin{equation}
L_{PEDCC-AM}=-\frac{1}{N}\sum_{i} log{\frac{e^{{s\cdot}({\cos\theta_y}_i-m)}}{e^{{s\cdot}({\cos\theta_y}_i-m)}+\begin{matrix} \sum_{j=1,j\ne y_i}^c e^{{s\cdot}{\cos\theta_j}}\end{matrix}}}
\end{equation}
\end{small}
\begin{normalsize}
\begin{equation}
L_{PEDCC-MSE}=\frac{1}{2N}\sum_{i=1}^N{\left \| \bm{x_i}-\bm{pedcc_{y_i}} \right \|}^2
\end{equation}
\end{normalsize}
\begin{normalsize}
\begin{equation}
L_{PEDCC-Loss}=L_{PEDCC-AM}+\lambda\sqrt[n]{L_{PEDCC-MSE}}
\end{equation}
\end{normalsize}
where $N$ is the number of samples, $\lambda$ is a weighting coefficient, and $n \geq 1$ is a constraint factor on $L_{PEDCC-MSE}$. $\bm{x_i}$ is the (normalized) input feature of the $i$-th sample, and $\bm{pedcc_{y_i}}$ is the (normalized) PEDCC center feature of the class to which the sample belongs.
\begin{figure*}
\centering
\includegraphics[width=1\textwidth]{PEDCC-Loss_Network_Structure.eps}
\caption{The PEDCC-Loss of CNN Classifier \cite{8933403}.}
\end{figure*}
\subsection{Loss function for latent features in self-supervised learning}
In self-supervised learning, Barlow Twins \cite{2021arXiv210303230Z} proposes an innovative loss function with a decorrelation mechanism that maximizes the variability of the learned representation. The Barlow Twins loss is written as:
\begin{normalsize}
\begin{equation}
L_{BT}=\sum_{i}(1-C_{ii})^2+\lambda\sum_{i}\sum_{j \neq i}C_{ij}^2
\end{equation}
\end{normalsize}
where $\lambda$ is a weighting coefficient balancing the two terms, and $C$ is the cross-correlation matrix between the output features of a batch of samples and of their augmented versions, computed across two identical networks. The redundancy-reduction term reduces redundancy between the output features, so that they carry non-redundant information about the sample and achieve a better feature representation.
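Eq. (9) can be sketched in plain Python on toy embeddings (a per-batch version with column standardization; the data and $\lambda$ are illustrative, and a real implementation would use matrix operations on GPU tensors):

```python
def barlow_twins_loss(za, zb, lam=5e-3):
    """Barlow Twins loss from two batches of embeddings (lists of rows).
    Columns are standardized, then C is the DxD cross-correlation matrix;
    diagonal entries are pushed to 1, off-diagonal entries to 0."""
    n, d = len(za), len(za[0])

    def standardize(z):
        out = []
        for col in zip(*z):              # iterate over feature dimensions
            mu = sum(col) / n
            sd = (sum((v - mu) ** 2 for v in col) / n) ** 0.5
            out.append([(v - mu) / sd for v in col])
        return out                       # d lists of length n

    a, b = standardize(za), standardize(zb)
    loss = 0.0
    for i in range(d):
        for j in range(d):
            c_ij = sum(a[i][k] * b[j][k] for k in range(n)) / n
            loss += (1.0 - c_ij) ** 2 if i == j else lam * c_ij ** 2
    return loss

# Identical views: the diagonal of C is exactly 1, so only the small
# off-diagonal (redundancy) term remains.
z = [[1.0, 2.0], [3.0, 1.0], [2.0, 4.0]]
print(round(barlow_twins_loss(z, z), 6))  # 0.001071
```

With identical inputs the invariance term vanishes; the residual loss comes entirely from the correlation between the two feature dimensions, which is what the redundancy-reduction term penalizes.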
Building on this, VICReg \cite{2021arXiv210504906B} combines a variance term with a decorrelation mechanism based on redundancy reduction and a covariance regularization term. The covariance criterion removes correlations between the different dimensions of the learned representation, spreading information across dimensions to avoid dimensional collapse; it mainly penalizes the off-diagonal terms of the covariance matrix, further improving the classification performance of the features.
In this article, PEDCC is likewise used to generate the predefined evenly-distributed class centroids, instead of the continuously updated class centers of Center Loss, and the fixed PEDCC weights replace the weights of the classification linear layer to maximize the inter-class distance. On the one hand, Cosine Loss constrains the distance between each sample feature and the central feature of its PEDCC class. On the other hand, similar to self-supervised learning, a decorrelation mechanism is introduced: the self-correlation matrix of the differences between the sample features and their PEDCC central features is computed in each batch, and the correlation between any pair of feature dimensions is restricted to improve classification accuracy.
For image classification, the method in this article no longer constrains the posterior probability output by the neural network, but instead constrains the latent features extracted from the samples, realizing an optimal distribution of latent features for classification.
\section{Method}
\subsection{Cosine Loss}
The mean square error (MSE) loss can be used to constrain the distance between a sample feature and the PEDCC feature center of its class. In this article, the PEDCC centers and the sample feature vectors are normalized before the computation, so the MSE loss can be rewritten as follows:
\begin{normalsize}
\begin{align}
L_{PEDCC-MSE}&=\frac{1}{2N}\sum_{i=1}^N{\left \| \bm{x_i}-\bm{pedcc_{y_i}} \right \|}^2 \nonumber\\
&=\frac{1}{2N}\sum_{i=1}^N\left(||\bm{x_i}||^2+||\bm{pedcc_{y_i}}||^2-2\,\bm{x_i} \cdot \bm{pedcc_{y_i}}\right) \nonumber\\
&=\frac{1}{N}\sum_{i=1}^N(1-\cos{\theta_{y_i}})
\end{align}
\end{normalsize}
where $N$ represents the number of samples. The loss is thus essentially a constraint on the cosine distance between the sample feature and its PEDCC center. Differentiating $(1-\cos{\theta_{y_i}})$ with respect to $\theta_{y_i}$ gives $\sin{\theta_{y_i}}$, as shown in Fig. 3: the derivative increases over the range $0^{\circ}$ to $90^{\circ}$, decreases over $90^{\circ}$ to $180^{\circ}$, and never exceeds $1$ over the whole range $0^{\circ}$ to $180^{\circ}$.
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{sin.eps}
\caption{The change of $sin{\theta_{y_i}}$ within the range of $0^{\circ}$ to $180^{\circ}.$}
\end{figure}
$\theta_{y_i}$ represents the angle between the sample feature and the predefined center feature of its class, which necessarily lies in the range $0^{\circ}$ to $180^{\circ}$. During network learning, we want $\theta_{y_i}$ to approach $0^{\circ}$ as quickly as possible; that is, the larger the derivative (the faster $\theta_{y_i}$ falls), the better, and $(1-\cos{\theta_{y_i}})$ does not fully match this requirement.
Based on above, Cosine Loss is proposed, whose expression is as follows:
\begin{normalsize}
\begin{equation}
L_{Cosine}=\frac{1}{N}\sum_{i=1}^N(1-cos{\theta_{y_i}})^2
\end{equation}
\end{normalsize}
where $N$ represents the number of samples, and $\cos{\theta_{y_i}}$ is the cosine of the angle between the sample feature and its predefined center feature. Differentiating $(1-\cos{\theta_{y_i}})^2$ with respect to $\theta_{y_i}$ gives $2\cdot(1-\cos{\theta_{y_i}})\cdot \sin{\theta_{y_i}}$, as shown in Fig. 4: the derivative increases over the range $0^{\circ}$ to $120^{\circ}$, exceeds $1$ around $65^{\circ}$, and peaks at about $2.6$ at $120^{\circ}$. In network training, Cosine Loss is therefore easy to optimize and converges faster. Comparison experiments between the Softmax-based AM-Softmax Loss, PEDCC-Loss (which contains the MSE term) and Cosine Loss are given in Section \uppercase\expandafter{\romannumeral4}.
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{Two_1-cos_sin.eps}
\caption{The change of $2\cdot(1-\cos{\theta_{y_i}})\cdot \sin{\theta_{y_i}}$ within the range of $0^{\circ}$ to $180^{\circ}$.}
\end{figure}
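As an illustration, the Cosine Loss above can be sketched in NumPy; the names \texttt{features} and \texttt{centers} are ours (row-wise sample features and their assigned PEDCC class centers), not from the paper's code:

```python
import numpy as np

def cosine_loss(features, centers):
    """L_Cosine = mean over samples of (1 - cos(theta))^2, where theta is
    the angle between each sample feature and its own predefined center."""
    # Normalize rows so the row-wise dot product equals cos(theta).
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    c = centers / np.linalg.norm(centers, axis=1, keepdims=True)
    cos_theta = np.sum(f * c, axis=1)
    return np.mean((1.0 - cos_theta) ** 2)
```

A feature perfectly aligned with its center contributes $0$ to the loss; an orthogonal one contributes $1$.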
\subsection{Selection of latent feature dimension}
For a given dataset, \cite{9444709} gives the following theorem: for any $k$ points $a_i$ $(i=1,2,...,k)$ evenly distributed on the unit hypersphere of $n$-dimensional Euclidean space with $k \leq n+1$, the pairwise inner products satisfy $\langle a_i, a_j \rangle=-\frac{1}{k-1}$ for $i \neq j$. Correspondingly, for a convolutional neural network, $k$ evenly-distributed PEDCC class centroids can be obtained whenever $k$ (the number of classes) $\leq n$ (the feature dimension) $+1$.
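The stated inner-product property can be checked numerically with the standard regular-simplex construction (a sketch for verification only; \cite{9444709} uses its own PEDCC generation algorithm):

```python
import numpy as np

def simplex_points(k):
    """k unit vectors in R^k (spanning a (k-1)-dimensional subspace) with
    pairwise inner product -1/(k-1): center the basis vectors e_1..e_k
    around their mean, then normalize each row to unit length."""
    a = np.eye(k) - 1.0 / k
    return a / np.linalg.norm(a, axis=1, keepdims=True)
```

For $k=4$, the Gram matrix has ones on the diagonal and $-\frac{1}{3}$ everywhere else, as the theorem predicts.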
Different feature representations can be regarded as different pieces of knowledge about the images. The more comprehensive the knowledge, the better the classification performance, and higher-dimensional features make it easier to find a hyperplane separating images of different classes. However, too many feature dimensions cause the classifier to overemphasize accuracy on the training set, and even to learn erroneous or abnormal data, resulting in over-fitting. Therefore, under the premise of $k \leq n+1$, selecting an appropriate feature dimension is also crucial to classification performance. Section \uppercase\expandafter{\romannumeral4} gives the combinations of class count and feature dimension selected in the experiments of this article.
\subsection{Decorrelation between latent features}
The features of the predefined evenly-distributed class centroids are uncorrelated, but the sample features learned during training only approximate the centroid features of their classes and cannot equal them exactly, so some correlation between the features always remains. Meanwhile, under the constraint that the number of center points $k$ (the number of classes) and the space dimension $n$ satisfy $k \leq n+1$, $n$ is generally taken greater than $k$; for example, when the number of classes $k$ is $10$, $n$ is $256$. When training ends, the effect of the loss function leaves the features of the training samples basically distributed in a $(k-1)$-dimensional subspace \cite{9444709}. From this point of view the remaining $n-k+1$ dimensions seem useless, yet if $n$ is set to $k-1$ from the start, the classification accuracy drops significantly, which indicates that these extra dimensions help optimize the classification features during training.
To further improve the utilization of all latent features, we therefore propose constraining the correlation between the dimensions. For example, in classifying images of cats and mice, if the correlation between dimensions is not restricted, one learned feature may represent body size and another the size of the facial contour. These two features are strongly correlated (a large body implies a large facial contour), so classification based on both is very similar, or even identical, to classification based on either one alone, and the resources occupied by one of the dimensions are wasted. With the correlation constraint added, the two learned features may instead be hair color and body size, which are almost uncorrelated; their combination is clearly better for classification than either feature alone, achieving the goal of making full use of the features.
Barlow Twins Loss adds a decorrelation mechanism that reduces redundancy between network output features, so that the outputs carry non-redundant information about the samples: it computes a cross-correlation matrix and penalizes its off-diagonal terms. Our method draws on this decorrelation mechanism, but instead of a cross-correlation matrix it uses the self-correlation matrix of the difference between the latent features of the samples and the predefined central features. Its off-diagonal terms are likewise penalized to constrain the correlation between the different dimensions of the latent features of samples. Accordingly, Self-correlation Constraint Loss is proposed:
\begin{normalsize}
\begin{equation}
L_{SC}=\sum_{i=1}^n{\sum_{j \neq i}^n{R_{ij}^2}}
\end{equation}
\end{normalsize}
\begin{normalsize}
\begin{equation}
R=\frac{1}{B-1}{(X-X_{pedcc})(X-X_{pedcc})^T}
\end{equation}
\end{normalsize}
where $n$ is the feature dimension and $R$ is the self-correlation matrix of the difference matrix formed by subtracting the predefined central features from the sample features; the element in the $i$-th row and $j$-th column of $R$ is the correlation between the $i$-th and $j$-th dimensions of the difference matrix. $B$ is the number of samples in each batch, $X$ is the normalized feature matrix of the $B$ samples in Fig. 1, with each column vector representing one sample, and $X_{pedcc}$ is the matrix of PEDCC centers corresponding to the sample features.
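The two equations above can be sketched in NumPy, following the paper's column-per-sample convention; the variable names are ours:

```python
import numpy as np

def sc_loss(X, X_pedcc):
    """L_SC: sum of squared off-diagonal entries of the self-correlation
    matrix R = (X - X_pedcc)(X - X_pedcc)^T / (B - 1).
    X, X_pedcc: arrays of shape (n, B) with one sample per column."""
    n, B = X.shape
    D = X - X_pedcc                # difference matrix, shape (n, B)
    R = (D @ D.T) / (B - 1)        # self-correlation matrix, shape (n, n)
    off = R - np.diag(np.diag(R))  # keep only the off-diagonal terms
    return np.sum(off ** 2)
```

When the rows of the difference matrix are orthogonal (i.e., the feature dimensions of the residual are uncorrelated), the loss is zero.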
\subsection{POD Loss}
On the one hand, Cosine Loss constrains the distance between each sample feature and its class's PEDCC feature center and, on the other hand, accelerates the convergence of network training. At the same time, Self-correlation Constraint (SC) Loss constrains the correlation between the different dimensions of the sample latent features, i.e., it is the decorrelation term, penalizing the off-diagonal terms of the self-correlation matrix of the difference between the sample features and the predefined central features. The final expression of POD Loss is:
\begin{normalsize}
\begin{equation}
L_{POD}=L_{Cosine}+\lambda{L_{SC}}
\end{equation}
\end{normalsize}
where $L_{Cosine}$ is the cosine loss function, $L_{SC}$ is the loss function that restricts the correlation between feature dimensions, and $\lambda$ is the weighting coefficient. For some experiments, the adjustment of $\lambda$ may improve the classification performance.
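Combining the two terms gives the following sketch; the default value of \texttt{lam} ($\lambda$) is an arbitrary placeholder, not a value from the paper:

```python
import numpy as np

def pod_loss(X, X_pedcc, lam=0.01):
    """L_POD = L_Cosine + lambda * L_SC.
    X, X_pedcc: arrays of shape (n, B), one sample feature / PEDCC
    center per column.  lam is the weighting coefficient."""
    n, B = X.shape
    # Cosine term: squared (1 - cos theta) between column pairs.
    f = X / np.linalg.norm(X, axis=0, keepdims=True)
    c = X_pedcc / np.linalg.norm(X_pedcc, axis=0, keepdims=True)
    l_cosine = np.mean((1.0 - np.sum(f * c, axis=0)) ** 2)
    # Self-correlation term: off-diagonal energy of R.
    D = X - X_pedcc
    R = (D @ D.T) / (B - 1)
    l_sc = np.sum((R - np.diag(np.diag(R))) ** 2)
    return l_cosine + lam * l_sc
```

Both terms vanish when every sample feature coincides with its PEDCC center, so the loss is minimized exactly at the predefined optimal distribution.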
\section{Experiments and results}
The experiments are implemented in PyTorch 1.0 \cite{pytorch}. The network structure used throughout is ResNet50 \cite{7780459}. The datasets used are CIFAR10 \cite{Krizhevsky2009LearningML}, CIFAR100 \cite{Krizhevsky2009LearningML}, Tiny ImageNet \cite{5206848}, FaceScrub \cite{7025068} and ImageNet \cite{5206848}. To make the network structure better fit the image sizes of the different datasets, some modifications are made to the original ResNet50 structure. For CIFAR10 (image size $32 \times 32$), CIFAR100 ($32 \times 32$) and Tiny ImageNet ($64 \times 64$), the kernel of the first convolutional layer is changed from the original $7 \times 7$ to $3 \times 3$ with stride $1$, and the max-pooling layer at the start of the second convolutional stage is removed (except for Tiny ImageNet). No such changes are made for ImageNet. In addition, the feature dimension fed to the PEDCC layer varies with the number of classes of these datasets. All experimental results are averages over three runs.
\subsection{Experimental datasets}
The CIFAR10 dataset contains $10$ classes of $32 \times 32$ RGB color images, with $50,000$ training images and $10,000$ test images. CIFAR100 contains $100$ classes of $32 \times 32$ images, again with $50,000$ training and $10,000$ test images. FaceScrub contains $100$ classes of $64 \times 64$ images, with $15,896$ training and $3,896$ test images. Tiny ImageNet contains $200$ classes of $64 \times 64$ images, with $100,000$ training and $10,000$ test images. For these datasets, standard data augmentation \cite{10.1145/3065386} is performed: the training images are padded with $4$ pixels, randomly cropped back to the original size, and horizontally flipped with probability $0.5$; the test images are left unprocessed.
The ImageNet dataset contains $1000$ classes of images of varying size, with height and width both greater than $224$. There are $1,282,166$ training images and $51,000$ test images. For this dataset, the training images are randomly cropped to different sizes and aspect ratios, scaled to $224 \times 224$, and flipped horizontally with probability $0.5$. The test images are scaled so that the shorter side is $256$ and then center-cropped to $224 \times 224$.
For the above datasets, training uses the SGD optimizer with weight decay $0.0005$ ($0.0001$ for ImageNet) and momentum $0.9$. The initial learning rate is $0.1$, and a total of $100$ epochs are trained; at the $30$th, $60$th, and $90$th epoch, the learning rate drops to one-tenth of its previous value. The batch size is set to $128$ ($96$ for ImageNet).
\subsection{Experimental results}
\subsubsection{Selection of feature dimensions}
According to the theoretical analysis of feature dimensions in Section \uppercase\expandafter{\romannumeral3}, comparison experiments on several feature dimensions are carried out for datasets with different numbers of classes. Table \uppercase\expandafter{\romannumeral1} shows the classification performance under different feature dimensions on the CIFAR10 dataset. Within a certain range, increasing the feature dimension improves classification performance, but once the number of features passes a certain scale, performance declines. The best-performing feature dimensions for the different datasets over multiple experiments are shown in Table \uppercase\expandafter{\romannumeral2}. For CIFAR10, $10$ evenly-distributed class centroids are predefined in a $256$-dimensional hyperspace. For CIFAR100, FaceScrub and Tiny ImageNet, evenly-distributed class centroids for the corresponding numbers of classes are predefined in a $512$-dimensional hyperspace. For ImageNet, the top $30$, top $100$, top $200$, top $500$, and all $1000$ classes are selected for experiments; the optimal feature dimensions are $256$, $512$, $512$, $1024$ and $2048$, respectively.
\begin{table}[h]
\caption{Classification accuracy (\%) under different feature dimensions on CIFAR10 dataset}
\centering
\setlength{\tabcolsep}{13mm}{
\begin{tabular}{cc}
\toprule
Feature Dimension & Accuracy(\%)
\\
\midrule
9 & 92.71
\\
32 & 93.51
\\
64 & 93.77
\\
128 & 93.98
\\
256 & \textbf{94.31}
\\
512 & 94.25
\\
1024 & 93.82
\\
\bottomrule
\end{tabular}}
\end{table}
\begin{table}[h]
\caption{The number of classes and the optimal feature dimensions of different datasets}
\centering
\setlength{\tabcolsep}{4.5mm}{
\begin{tabular}{ccc}
\toprule
Dataset & Number of Classes & Feature Dimension
\\
\midrule
CIFAR10 & 10 & 256
\\
CIFAR100 & 100 & 512
\\
FaceScrub & 100 & 512
\\
Tiny ImageNet & 200 & 512
\\
ImageNet(30) & 30 & 256
\\
ImageNet(100) & 100 & 512
\\
ImageNet(200) & 200 & 512
\\
ImageNet(500) & 500 & 1024
\\
ImageNet(1000) & 1000 & 2048
\\
\bottomrule
\end{tabular}}
\end{table}
\subsubsection{Role of Cosine Loss}
Comparative experiments on Softmax Loss, AM-Softmax Loss $(s=5, m=0.25)$, COT, PEDCC-Loss $(s=10, m=0.5)$ and Cosine Loss are conducted on the CIFAR100 and FaceScrub datasets. The results are shown in Table \uppercase\expandafter{\romannumeral3}: among the five loss functions, Cosine Loss achieves the highest classification accuracy. The convergence behavior of AM-Softmax Loss, PEDCC-Loss and Cosine Loss during training is shown in Fig. 5; as can be seen there, only Cosine Loss converges both quickly and stably.
\begin{table}[h]
\caption{Comparison of classification accuracy (\%) of multiple losses on CIFAR100 and FaceScrub dataset}
\centering
\setlength{\tabcolsep}{5.8mm}{
\begin{tabular}{lcc}
\toprule
\diagbox{Loss}{Dataset} & CIFAR100 & FaceScrub
\\
\midrule
Softmax Loss & 73.80 & 89.10
\\
\makecell[l]{AM-Softmax Loss \\ $(s=5, m=0.25)$}
& 73.03 & 91.67
\\
COT & 74.03 & 90.40
\\
\makecell[l]{PEDCC-Loss \\ $(s=10,m=0.5)$}
& 75.58 & 90.98
\\
Cosine Loss & \textbf{76.23} & \textbf{92.11}
\\
\bottomrule
\end{tabular}}
\end{table}
\begin{figure*}
\begin{subfigure}{.32\textwidth}
\centering
\includegraphics[width=6cm,height=5cm]{AM-Softmax_TrainingandValidationloss.eps}
\caption{AM-Softmax Loss}
\label{sf1}
\end{subfigure}
\begin{subfigure}{.32\textwidth}
\centering
\includegraphics[width=6cm,height=5cm]{PEDCC-Loss_TrainingandValidationloss.eps}
\caption{PEDCC-Loss}
\label{sf2}
\end{subfigure}
\begin{subfigure}{.32\textwidth}
\centering
\includegraphics[width=6cm,height=5cm]{Cosine_Loss_TrainingandValidationloss.eps}
\caption{Cosine Loss}
\label{sf3}
\end{subfigure}
\caption{Variation of classification accuracy of AM-Softmax Loss, PEDCC-Loss and Cosine Loss with epoch during training.}
\label{fig5}
\end{figure*}
\subsubsection{Role of SC Loss}
After training with POD Loss and with Cosine Loss, the variance of the $2048$-dimensional feature eigen-vectors (normalized) before the fully connected layer of the network is calculated, as shown in Table \uppercase\expandafter{\romannumeral4}. The variance after POD Loss training is smaller than after Cosine Loss training, indicating that SC Loss balances the energy of the features and brings some benefit to the subsequent classification.
\begin{table}[h]
\caption{Variance of feature eigen-vectors of Cosine Loss and POD Loss}
\centering
\setlength{\tabcolsep}{8mm}{
\begin{tabular}{cc}
\toprule
Loss Function & Variance of Feature Eigen-vectors
\\
\midrule
Cosine Loss & 1.82e-8
\\
POD Loss & \textbf{1.53e-8}
\\
\bottomrule
\end{tabular}}
\end{table}
Comparative experiments on Cosine Loss and POD Loss have also been carried out on several datasets; the results are shown in Table \uppercase\expandafter{\romannumeral5}. They show that POD Loss, which adds SC Loss, outperforms Cosine Loss alone in classification accuracy on multiple datasets.
\begin{table}[h]
\caption{Comparison of classification accuracy (\%) of Cosine Loss and POD Loss on different datasets}
\centering
\setlength{\tabcolsep}{5mm}{
\begin{tabular}{lcc}
\toprule
\diagbox{Dataset}{Loss\\ Function} & Cosine Loss & POD Loss
\\
\midrule
CIFAR10 & 93.83 & \textbf{94.31}
\\
CIFAR100 & 76.23 & \textbf{77.70}
\\
Tiny ImageNet & 62.01 & \textbf{62.16}
\\
FaceScrub & 92.11 & \textbf{93.01}
\\
\bottomrule
\end{tabular}}
\end{table}
\subsubsection{POD Loss}
Comparative experiments on Softmax Loss and POD Loss are conducted on the datasets above; the results are shown in Table \uppercase\expandafter{\romannumeral6}. The classification accuracy of POD Loss exceeds that of Softmax Loss on all of these datasets, and by a large margin on some of them. The ImageNet experiments show that as the number of classes grows, the accuracy of POD Loss gradually approaches that of Softmax Loss. The reason lies in the structure of the network itself: the maximum output feature dimension of ResNet50 is $2048$, so the ratio of feature dimension to class count is much smaller for many-class datasets than for few-class ones, leaving fewer (or no) extra dimensions to benefit classification. Hence, on datasets with many classes, the advantage of POD Loss over Softmax Loss is limited.
\begin{table}[h]
\caption{Comparison of classification accuracy (\%) of Softmax Loss and POD Loss on different datasets}
\centering
\setlength{\tabcolsep}{5mm}{
\begin{tabular}{lcc}
\toprule
\diagbox{Dataset}{Loss\\ Function} & Softmax Loss & POD Loss
\\
\midrule
CIFAR10 & 93.71 & \textbf{94.31}
\\
CIFAR100 & 75.56 & \textbf{77.70}
\\
Tiny ImageNet & 60.29 & \textbf{62.16}
\\
FaceScrub & 89.10 & \textbf{93.01}
\\
ImageNet(30) & 79.86 & \textbf{85.40}
\\
ImageNet(100) & 78.25 & \textbf{82.06}
\\
ImageNet(200) & 82.06 & \textbf{83.08}
\\
ImageNet(500) & 82.14 & \textbf{82.24}
\\
ImageNet(1000) & 75.65 & \textbf{75.71}
\\
\bottomrule
\end{tabular}}
\end{table}
On the other hand, the convergence behavior of POD Loss and Softmax Loss during training is shown in Fig. 6. As can be seen from Fig. 6, POD Loss training converges faster.
\begin{figure*}
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=8cm,height=6cm]{Softmax_Loss_TrainingandValidationloss.eps}
\caption{Softmax Loss}
\label{sf4}
\end{subfigure}
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=8cm,height=6cm]{POD_Loss_TrainingandValidationloss.eps}
\caption{POD Loss}
\label{sf5}
\end{subfigure}
\caption{Variation of classification accuracy of Softmax Loss and POD Loss with epoch during training.}
\label{fig6}
\end{figure*}
\subsubsection{Discussion on the last classification method}
The constraint on latent features only solves the problem of optimizing the latent feature distribution; a pattern classification method must still be applied afterwards. On the one hand, the POD Loss proposed in this article includes Cosine Loss, which constrains the cosine distance between the latent features and the PEDCC centroids. When POD Loss converges, the cosine distance between a latent feature vector and the PEDCC centroid of its correct label reaches the minimum, while the cosine distances to the PEDCC centroids of the incorrect labels reach the maximum in a uniform state. The classification rule is:
\begin{normalsize}
\begin{align}
I&=\mathop{argmax}\limits_{i=1,2,...,k}{(\bm{x}\cdot \bm{pedcc_i})} \nonumber \\
&=\mathop{argmax}\limits_{i=1,2,...,k}{(||\bm{x}|| \cdot ||\bm{pedcc_i}|| \cdot \cos\theta_{\bm{x},\bm{pedcc_i}})} \nonumber \\
&=\mathop{argmax}\limits_{i=1,2,...,k}{(\cos\theta_{\bm{x},\bm{pedcc_i}})}
\end{align}
\end{normalsize}
where $k$ is the number of classes, $\bm{x}$ is the sample feature, $\bm{pedcc_i}$ is the PEDCC center of the $i$-th class, and $\cos\theta_{\bm{x},\bm{pedcc_i}}$ is the cosine of the angle between $\bm{x}$ and $\bm{pedcc_i}$.
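The classification rule above amounts to a nearest-centroid rule under cosine similarity; a minimal sketch (names ours):

```python
import numpy as np

def classify(x, pedcc):
    """Assign x to the class whose PEDCC center has the largest cosine
    similarity with x.  pedcc: array of shape (k, n), one center per row."""
    cos = (pedcc @ x) / (np.linalg.norm(pedcc, axis=1) * np.linalg.norm(x))
    return int(np.argmax(cos))
```

Since the PEDCC centers are unit vectors, the norms cancel and the rule reduces to the inner-product argmax of the first line of the equation.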
On the other hand, under the premise that the sample features follow a Gaussian distribution, each PEDCC center has its own mean and covariance, so each class has a different class-conditional probability density, and the Gaussian Discriminant Analysis (GDA) method can also be used for classification.
Therefore, this article conducts a comparative experiment on the two classification methods of cosine distance and GDA based on POD Loss. The experimental results on multiple datasets are shown in Table \uppercase\expandafter{\romannumeral7}. It can be seen from Table \uppercase\expandafter{\romannumeral7} that the classification method of cosine distance is better than GDA on multiple datasets. Therefore, cosine distance is adopted as the final classification method in this article.
\begin{table}[h]
\caption{Comparison of classification accuracy (\%) of two classification methods}
\centering
\setlength{\tabcolsep}{2mm}{
\begin{tabular}{lcc}
\toprule
\diagbox{Dataset}{Method} & Cosine distance & Gaussian discriminant analysis
\\
\midrule
CIFAR10 & \textbf{94.31} & 94.23
\\
CIFAR100 & \textbf{77.70} & 73.37
\\
Tiny ImageNet & \textbf{62.16} & 50.09
\\
\bottomrule
\end{tabular}}
\end{table}
\section{Conclusion}
For convolutional neural network classifiers, this article proposes a Softmax-free loss function (POD Loss) based on the predefined optimal distribution of latent features. The loss function discards the constraint on the posterior probability used in traditional loss functions and constrains only the output of feature extraction, so as to realize the optimal distribution of latent features: it combines the cosine distance between each sample feature vector and the predefined evenly-distributed class centroids with a decorrelation mechanism between sample features, and performs the final classification through the fixed PEDCC layer. Experimental results show that, compared with the commonly used Softmax Loss and its improved variants, POD Loss achieves better performance on image classification tasks and is easier to train and converge. In the future, new loss functions based on latent feature distribution optimization will be studied from the perspective of distinguishing classification features from representation features, so as to further improve the recognition performance of the network.
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\bibliographystyle{IEEEtran}
\section{Introduction}
\label{Intro}
Drought is one of the most widespread and frequent natural disasters in the world, with profound economic, social, and environmental impacts~\citep{keyantash2002quantification}.
Unlike other natural hazards, droughts are a gradual process, often have a long duration, cumulative impacts, and widespread extent~\citep{below2007documenting}.
Climate change is expected to increase the area and population affected by soil moisture droughts and also the probability of extreme drought events comparable to the one of 2003 across Europe~\citep{IPCC2021, samaniego2018droughteurope}.
Therefore, it is a critical scientific task to understand better possible changes in drought frequency and intensity under varying climate scenarios~\citep{King2020}.
Drought is commonly classified into four categories: meteorological, agricultural, socioeconomic, and hydrological.
In this study, we focus on agricultural drought since it has a considerable impact on human population evolution~\citep{LloydHughes2013}.
Agricultural droughts can be quantified as a ``deficit of soil moisture relative to its seasonal climatology at a location''~\citep{sheffield2007characteristics}.
A low \gls{smi} in the root zone is a direct indicator of agricultural drought and inhibits vegetative growth, directly affecting crop yield and therefore food security~\citep{keyantash2002quantification}.
The physical processes involved in drought depend on complicated interactions among multiple variables and are spatiotemporally highly variable.
This behavior makes droughts hard to predict, classify, and understand~\citep{below2007documenting}.
However, recently, machine learning (ML)-based methods have demonstrated their ability to capture hydrological phenomena well, e.g., rainfall-runoff~\citep{kratzert2018rainfall} and flood~\citep{le2019application}.
ML has also been applied to drought detection but relied on relative indices as labels due to the lack of ground truth data~\citep{BELAYNEH201637, Shamshirband2020, FENG2019303}.
Using such statistically derived labels can lead to unreliable detection of droughts in climate model projections and, accordingly, an inaccurate estimation of the impacts of future climate change~\citep{VicenteSerrano2010}.
Therefore, we compare several ML algorithms in their ability to classify droughts based on agriculturally highly relevant soil moisture.
A future goal is to provide an ML-based drought classification for climate projections under various scenarios.
While we do not yet operate on climate model output from the Coupled Model Intercomparison Project (CMIP6)~\citep{Eyring2016}, we nevertheless showcase in this work that drought classification is possible with the variables available in the output of CMIP6 climate projections, making it promising to pursue this goal further.
\section{Data Preparation}
\begin{figure}
\centering
\includegraphics[
height=3.7cm, keepaspectratio
]{img/spearman_correlation}
\includegraphics[
height=4.5cm,
keepaspectratio
]{img/time_with_folds_v2.png}
\caption{
{\it Left:} Time-lagged Spearman correlation between the selected ERA5-Land input variables and the target variable SMI over 24 months. {\it Right:} Time series of \gls{smi} from 1981-2018 from the Helmholtz dataset.
The shaded area shows the standard deviation across different locations.
Red dotted lines show the split points for the time-based split into $k$ folds, and the green dashed line shows the binarization threshold for drought events. Shown above is the frequency of the positive class (drought events) per fold.
}
\label{fig:lagged-input-correlation_and_class-imbalance}
\end{figure}
Low soil moisture levels depend on various meteorological input variables and the soil type. Retrieving accurate SMI ground-truth data is therefore complicated:
Spatially continuous soil moisture data at a resolution finer than 0.25 degrees is only available from satellite observations or model simulations.
Satellite observations are only available for recent years, cover only the top few centimeters of the soil, and have data gaps due to unfavorable retrieval conditions such as snow or dense vegetation~\citep{Dorigo2017}.
Therefore, we select modeled SMI data as the ground-truth label.
Due to SMI data availability, the selected experiment region is Germany.
The data is limited to January 1981 to December 2018 by the availability of an overlapping period from both ERA5-Land and the SMI data.
All datasets used in this study are freely available.
The target variable is the \gls{smi} of the uppermost 25\,cm of soil from the German drought monitor~\citep{zink2016german}, which is generated by a hydrological model system driven by data from about 2,500 weather stations~\citep{Samaniego2010, Kumar2013}.
Figure~\ref{fig:lagged-input-correlation_and_class-imbalance} shows the SMI distribution over time and the chosen binarization threshold.
We use monthly time-series of 12 selected variables from the ERA5-Land reanalysis, e.g., pressure, precipitation, temperature (see Table~\ref{tab:era5-variables}).
We selected ERA5-Land due to its higher resolution compared to ERA5 (9 km vs. 31 km) and its consequently better suitability for land applications~\citep{MuozSabater2021}.
To isolate the causal effects on SMI and avoid short-cut learning, we do not include potential confounding factors such as evaporation, runoff, and skin temperature.
We also deliberately restrict the input variables to those commonly available in the latest generation of climate models to enable the transfer of the trained models to data directly obtained from climate model simulations.
Land use and vegetation type data based on the MODIS (MCD12Q1) Land Cover Data is used as an input feature, represented as soil type fractions~\citep{friedl__mark_mcd12q1_2019}.
\textbf{Interpolation and Label Derivation}
Drought is an extreme weather event, and extreme events occur at the tail of variable distributions.
Thus, we choose a classification setting with the tail of the SMI distribution as labels instead of a regression setting.
The input data is re-gridded to the ERA5-Land regular latitude-longitude grid ($0.1^\circ \times 0.1^\circ \approx (9km)^2$).
In this paper, we follow the drought classification from the German and U.S. drought monitors~\citep{svoboda2002drought, zink2016german} using an \gls{smi} threshold of $0.2$.
\textbf{Dataset Split}
As seen in Figure~\ref{fig:lagged-input-correlation_and_class-imbalance}, the \gls{smi} values at the same location exhibit a noticeable but declining correlation for lags of up to 6 months.
A simple random split over data points could therefore lead to data leakage, where memorizing \gls{smi} values from training and simple interpolation can lead to erroneously good results.
Thus, we opt for a modified $k$-fold time-series split.
First, we evenly determine $k-1$ split times to create $k$ time intervals (folds).
For the $i$th split, we train on folds $\{1, \ldots, i\}$, validate on fold $i+1$, and test on fold $i+2$.
This split enables us to better assess parameter stability over time, mimicking increasing climate projection length.
We decide to use $k=5$ as a good compromise between a sufficient number of folds for a robust performance estimate and large enough folds with multiple years of data to account for seasonal and interannual effects.
Figure~\ref{fig:lagged-input-correlation_and_class-imbalance} shows the resulting folds separated by red dotted lines.
Note that the availability of drought samples (the positive class) varies between folds from 12\% to 28\%.
\section{Methodology} \label{Methodology}
We frame drought classification as a binary classification problem given climate, land use as well as location data.
Since the \emph{memory effect}~\citep{Kingtse2011} is suspected of playing an essential role in the development of droughts, we frame the problem as
\emph{sequence classification}: the models use a window of the last $w$ months of climate input data at the current location to predict the drought label at the current time step.
In addition to the climate variables, we also provide a positional and seasonal encoding as input features:
For the positional encoding, we directly use the latitude \& longitude grid values.
Seasonality is captured by a 2D circular encoding based on the month of the year (\textit{month}):
$
s = \left[
\cos \left(2 \pi \cdot \frac{\textit{month}}{12}\right);
\sin \left(2 \pi \cdot \frac{\textit{month}}{12}\right)
\right]
$,
where $[\cdot;\cdot]$ denotes the concatenation.
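The circular month encoding can be written as a one-liner (a minimal sketch of the formula above):

```python
import numpy as np

def seasonal_encoding(month):
    """2D circular encoding of the month of the year (1..12):
    s = [cos(2*pi*month/12); sin(2*pi*month/12)]."""
    angle = 2.0 * np.pi * month / 12.0
    return np.array([np.cos(angle), np.sin(angle)])
```

Unlike the raw month number, this encoding places December and January next to each other on the unit circle, so adjacent months always have similar feature values.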
Besides using the location as an input feature, we do not explicitly include inductive biases for spatial correlation.
Due to missing values and a non-rectangular shape of the available data area, simple grid-based methods such as a 2D-CNN are not directly applicable.
The exploration of methods for irregular spatial data, such as those described in~\citep{DBLP:journals/spm/BronsteinBLSV17}, will be a focus of future work.
\textbf{Addressing Imbalanced Data}
In the entire dataset, examples for the drought class account for 18\% of the total samples.
We address this class imbalance by adding class weights proportional to the inverse class frequency during training and using an appropriate metric, PR-AUC, during evaluation.
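Class weights proportional to the inverse class frequency can be computed as below; this mirrors the common ``balanced'' heuristic, and the normalization (weights average to 1 over the classes) is our assumption, not a detail from the paper:

```python
import numpy as np

def inverse_frequency_weights(labels):
    """Per-class weights proportional to the inverse class frequency,
    normalized so the class-weighted sample count equals the total count."""
    classes, counts = np.unique(labels, return_counts=True)
    w = counts.sum() / (len(classes) * counts)
    return dict(zip(classes.tolist(), w.tolist()))
```

For an 18\% positive class, the drought class receives a weight of roughly $2.8$ versus $0.6$ for the majority class, so both classes contribute equally to the training loss.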
\textbf{Input Sequence Length}
A suitable sequence length is determined from the Spearman correlation between the climate variables and the target \gls{smi} variable as well as the lagged autocorrelation of the \gls{smi} variable, both shown in Figure~\ref{fig:lagged-input-correlation_and_class-imbalance}; both cyclical and non-cyclically decaying dependencies are indeed observed.
We therefore select a window size of six months for our models, which is in line with the period commonly used on monthly mean data by other drought indices such as the \gls{spi}~\citep{mckee1993relationship} and the \gls{spei}~\citep{VicenteSerrano2010}.
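A time-lagged Spearman correlation, as used here, can be computed as the Pearson correlation of rank-transformed series; this simplified sketch (names ours) omits tie handling, which suffices for continuous climate variables:

```python
import numpy as np

def lagged_spearman(x, y, lag):
    """Spearman correlation between x(t - lag) and y(t): align the
    series with the given lag, rank-transform both, and take the
    Pearson correlation of the ranks."""
    x_past, y_now = x[:len(x) - lag], y[lag:]
    rx = np.argsort(np.argsort(x_past))  # ranks of the lagged predictor
    ry = np.argsort(np.argsort(y_now))   # ranks of the target
    return np.corrcoef(rx, ry)[0, 1]
```

Any monotone lagged dependence (e.g., $y_t = f(x_{t-1})$ with $f$ increasing) yields a correlation of $1$, regardless of how nonlinear $f$ is.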
\textbf{Models} \label{Models}
We investigate support vector machines (SVMs) with linear kernels (\textbf{M1}) as well as an MLP model, denoted \texttt{dense} (\textbf{M2}), which receives the flattened window as a single large input vector.
To investigate whether an explicit inductive bias for sequential data is beneficial, we also include two main sequence encoders to obtain a representation of the input sequence for the sequence prediction.
The \texttt{cnn} model (\textbf{M3}) applies multiple 1D convolutional layers before aggregating the input sequence to a single vector representation by average pooling.
The \texttt{lstm} model (\textbf{M4}) uses multiple LSTM layers and the final hidden state as the sequence representation.
For both sequence encoders, the drought classification is obtained by a fully connected layer on top of this representation.
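To make the \texttt{cnn} encoder's structure concrete, here is a minimal pure-Python sketch of one convolutional layer with ReLU followed by average pooling over time. The actual models are TensorFlow implementations with tuned depths and widths; all weights below are illustrative:

```python
def conv1d_relu(seq, filters, bias):
    """Valid 1D convolution over a multivariate sequence, with ReLU.

    seq:     list of T feature vectors (one per month)
    filters: one entry per output channel; each is a list of k
             weight vectors, one per time tap
    """
    k = len(filters[0])
    out = []
    for t in range(len(seq) - k + 1):
        window = seq[t:t + k]
        row = []
        for f, kern in enumerate(filters):
            s = bias[f]
            for tap, w in zip(window, kern):
                s += sum(wi * xi for wi, xi in zip(w, tap))
            row.append(max(s, 0.0))  # ReLU
        out.append(row)
    return out

def cnn_encode(seq, filters, bias):
    """Sequence representation: convolution, then average pooling."""
    h = conv1d_relu(seq, filters, bias)
    return [sum(row[f] for row in h) / len(h) for f in range(len(h[0]))]

# Six months of two illustrative features, one filter of width 2
seq = [[1, 0], [0, 1], [1, 0], [0, 1], [1, 0], [0, 1]]
rep = cnn_encode(seq, filters=[[[1, 1], [1, 1]]], bias=[0.0])
```

The classification head then corresponds to a fully connected layer applied to `rep`.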
\textbf{Experimental Setup}
We use \texttt{sklearn}~\citep{pedregosa2011scikit} for the
SVM, and implement the other models in \texttt{tensorflow}~\citep{https://doi.org/10.5281/zenodo.4758419}.
To reflect the considerable class imbalance, we choose the
area under the precision-recall curve (PR-AUC) as evaluation measure, which does not neglect the model's performance for the minority/positive class, i.e., droughts.
For hyperparameter optimization we use \texttt{ray tune}~\citep{liaw2018tune} with random search instead of grid search due to its higher efficiency~\citep{bergstra12}.
The best hyperparameters are selected per fold according to validation PR-AUC on the fold's validation data, and we report test results of the corresponding model trained across five different random seeds.
The climate variables of the dataset are normalized to $[0, 1]$.
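PR-AUC can be estimated with the standard average-precision estimator, sketched here in pure Python (for untied scores this matches the step-wise estimator used by \texttt{sklearn}'s \texttt{average\_precision\_score}):

```python
def average_precision(y_true, scores):
    """Average precision: for each positive, take the precision of the
    ranked list cut at that positive, and average over all positives.
    Assumes untied scores."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    tp = fp = 0
    ap = 0.0
    for i in order:
        if y_true[i]:
            tp += 1
            ap += tp / (tp + fp)
        else:
            fp += 1
    return ap / tp

# A classifier that ranks one negative above the second positive
ap = average_precision([1, 0, 1, 0], [0.9, 0.8, 0.7, 0.1])
```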
\section{Results and Discussion}
\begin{figure}
\centering
\includegraphics[height=3.4cm,keepaspectratio]{img/results_pr-auc.pdf}
\includegraphics[height=3.4cm, keepaspectratio]{img/ablation_resolution_pr-auc.pdf}
\caption{\textit{Left:} PR-AUC of the different models on the test dataset across five different random seeds for drought classification using a window of six months. \textit{Right:} Ablation study: inference with models trained on high-resolution input, given inputs of decreasing resolution; evaluated across five different random seeds using a window of six months.}
\label{fig:results2}
\end{figure}
\textbf{Model Comparison}
The architectures were selected based on the validation PR-AUC on the second fold, which covers a large variety of drought causes in the training data. The resulting hyperparameters are listed in Table~\ref{hyperparams}.
The results are shown in Figure~\ref{fig:results2}, and Table~\ref{tab:results}.
We observe that the PR-AUC is larger than the class frequency of the positive class, indicating that the models indeed learned a non-trivial relation between the input variables and the target. The corresponding F1 scores, all larger than 0.5, can be found in Figure~\ref{fig:results1}.
Moreover, the performance varies for different folds, highlighting the challenging setting of a time-based split, where distributions can differ between different folds.
There is no clear winner among the architectures: all models except the linear SVM perform comparably across folds.
In particular, we do not observe a significant advantage for models with an explicit inductive bias for sequential data.
Since the utilized SMI data describes only the uppermost 25cm of the soil, the suspected memory effect might be more prominent in deeper soil layers.
Our initial data analysis supports this, with the correlation of the input variables with the target being strongest close in time, cf. Figure~\ref{fig:lagged-input-correlation_and_class-imbalance}.
\textbf{Ablation: Coarsening the Data Resolution}
As an important future application of our models is on simulated climate data from climate models, we investigate further how the performance is affected by changing the resolution from the original $0.1^\circ$ to a coarser spatial resolution.
The horizontal resolution of CMIP6 models varies from around 0.1$^{\circ}$ to 2$^{\circ}$ in the atmosphere~\citep{Cannon2020}.
Given the regional restriction of our input data, we restrict the ablation study to a range of 0.1$^{\circ}$-1.0$^{\circ}$ with 0.1$^{\circ}$ steps.
The architecture performing best on 0.1$^{\circ}$ is used in inference to calculate the results on the coarser resolutions without re-training.
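The coarser inputs can be produced, for instance, by block-averaging the 0.1$^{\circ}$ grid; the exact regridding method is not specified here, so the following pure-Python sketch is only one plausible choice (missing cells are represented as `None`):

```python
def coarsen(grid, factor):
    """Block-average a 2D field by an integer factor, skipping
    missing cells (None); a block of only missing cells stays None."""
    H, W = len(grid), len(grid[0])
    out = []
    for i in range(0, H - H % factor, factor):
        row = []
        for j in range(0, W - W % factor, factor):
            vals = [grid[i + a][j + b]
                    for a in range(factor) for b in range(factor)
                    if grid[i + a][j + b] is not None]
            row.append(sum(vals) / len(vals) if vals else None)
        out.append(row)
    return out

# 0.1-degree toy field coarsened by a factor of 2 (to 0.2 degrees)
fine = [[1.0, 2.0], [3.0, None]]
coarse = coarsen(fine, 2)
```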
On the right-hand side of Figure~\ref{fig:results2} we visualize the results of the resolution ablation.
In general, we observe that performance degrades as the input resolution becomes coarser.
The LSTM architecture is most affected by this but also generally shows the noisiest results overall.
Overall, the models trained on 0.1$^{\circ}$ input data show satisfactory performance when applied to coarser input data without dedicated training.
This promising result indicates that it is possible to predict drought events under varying future climate scenarios with models trained on fine-grained drought labels.
\section{Summary and Outlook}
We summarize our contributions as follows: (1) We are the first to compare several ML models in their capability of classifying agricultural drought in a changing climate based on soil moisture index (SMI).
We use ground truth data from a hydrological model and intentionally restrict the climate input variables to those available in the newest generation of CMIP6 climate models.
We also include land use information.
(2) We provide an ablation study regarding a transfer to coarser input data resolution, demonstrating that the model capabilities are transferable to lower resolution when trained in higher resolution.
In future work, we plan to use climate model output as input data for our algorithm to produce drought estimates under varying future scenarios.
This will facilitate the transfer from learning on real input data to input data obtained from simulations.
Apart from feeding the location information encoded as an additional input feature to the model, we plan to add location-aware models motivated by the strong regional correlation of the input variable as seen in Figure~\ref{fig:spatial-corr}.
Additionally, we plan to investigate other ground truth labels, e.g., SMAP~\citep{smapL3} and expand the study region globally.
Overall, we consider our study as an important step towards machine learning-based agricultural drought detection.
With our intentional restriction to variables available in climate models, we pave the way towards application on simulated data, thus facilitating the investigation of agricultural droughts in a changing climate.
\begin{ack}
The work for this study was funded by the European Research Council (ERC) Synergy Grant “Understanding and Modelling the Earth System with Machine Learning (USMILE)” under Grant Agreement No 855187. This manuscript contains modified Copernicus Climate Change Service Information (2021) with the following dataset being retrieved from the Climate Data Store: ERA5-Land (neither the European Commission nor ECMWF is responsible for any use that may be made of the Copernicus Information or Data it contains). Ulrich Weber from the Max Planck Institute for Biogeochemistry contributed pre-formatted MCD12Q1 MODIS data. SMI data were provided by the UFZ-Dürremonitor from the Helmholtz-Zentrum für Umweltforschung. The computational resources provided by the Deutsches Klimarechenzentrum (DKRZ, Germany) were essential for performing this analysis and are kindly acknowledged.
\end{ack}
\section{Introduction}
\file{elsarticle.cls} is a thoroughly re-written document class
for formatting \LaTeX{} submissions to Elsevier journals.
The class uses the environments and commands defined in \LaTeX{} kernel
without any change in the signature so that clashes with other
contributed \LaTeX{} packages such as \file{hyperref.sty},
\file{preview-latex.sty}, etc., will be minimal.
\file{elsarticle.cls} is primarily built upon the default
\file{article.cls}. This class depends on the following packages
for its proper functioning:
\begin{enumerate}
\item \file{natbib.sty} for citation processing;
\item \file{geometry.sty} for margin settings;
\item \file{fleqn.clo} for left aligned equations;
\item \file{graphicx.sty} for graphics inclusion;
\item \file{txfonts.sty} optional font package, if the document is to
be formatted with Times and compatible math fonts;
\item \file{hyperref.sty} optional packages if hyperlinking is
required in the document;
\item \file{endfloat.sty} optional packages if floats to be placed at
end of the PDF.
\end{enumerate}
All the above packages (except some optional packages) are part of any
standard \LaTeX{} installation. Therefore, the users need not be
bothered about downloading any extra packages. Furthermore, users are
free to make use of \textsc{ams} math packages such as
\file{amsmath.sty}, \file{amsthm.sty}, \file{amssymb.sty},
\file{amsfonts.sty}, etc., if they want to. All these packages work in
tandem with \file{elsarticle.cls} without any problems.
\section{Major Differences}
Following are the major differences between \file{elsarticle.cls}
and its predecessor package, \file{elsart.cls}:
\begin{enumerate}[\textbullet]
\item \file{elsarticle.cls} is built upon \file{article.cls}
while \file{elsart.cls} is not. \file{elsart.cls} redefines
many of the commands in the \LaTeX{} classes/kernel, which can
possibly cause surprising clashes with other contributed
\LaTeX{} packages;
\item provides preprint document formatting by default, and
optionally formats the document as per the final
style of models $1+$, $3+$ and $5+$ of Elsevier journals;
\item some easier ways for formatting \verb+list+ and
\verb+theorem+ environments are provided while people can still
use \file{amsthm.sty} package;
\item \file{natbib.sty} is the main citation processing package
which can comprehensively handle all kinds of citations and
works perfectly with \file{hyperref.sty} in combination with
\file{hypernat.sty};
\item long title pages are processed correctly in preprint and
final formats.
\end{enumerate}
\section{Installation}
The package is available at author resources page at Elsevier
(\url{http://www.elsevier.com/locate/latex}).
It can also be found in any of the nodes of the Comprehensive
\TeX{} Archive Network (\textsc{ctan}), one of the primary nodes
being
\url{http://tug.ctan.org/tex-archive/macros/latex/contrib/elsarticle/}.
Please download the \file{elsarticle.dtx} which is a composite
class with documentation and \file{elsarticle.ins} which is the
\LaTeX{} installer file. When we compile the
\file{elsarticle.ins} with \LaTeX{} it provides the class file,
\file{elsarticle.cls} by
stripping off all the documentation from the \verb+*.dtx+ file.
The class may be moved or copied to a place, usually,
\verb+$TEXMF/tex/latex/elsevier/+,
or a folder which will be read
by \LaTeX{} during document compilation. The \TeX{} file
database needs to be updated after moving/copying the class file. Usually,
we use commands like \verb+mktexlsr+ or \verb+texhash+ depending
upon the distribution and operating system.
\section{Usage}\label{sec:usage}
The class should be loaded with the command:
\begin{vquote}
\documentclass[<options>]{elsarticle}
\end{vquote}
\noindent where the \verb+options+ can be the following:
\begin{description}
\item [{\tt\color{verbcolor} preprint}] the default option, which formats
the document for submission to Elsevier journals.
\item [{\tt\color{verbcolor} review}] similar to the \verb+preprint+
option, but increases the baselineskip to facilitate easier review
process.
\item [{\tt\color{verbcolor} 1p}] formats the article to the look and
feel of the final format of model 1+ journals. This is always single
column style.
\item [{\tt\color{verbcolor} 3p}] formats the article to the look and
feel of the final format of model 3+ journals. If the journal is a two
column model, use \verb+twocolumn+ option in combination.
\item [{\tt\color{verbcolor} 5p}] formats for model 5+ journals. This
is always of two column style.
\item [{\tt\color{verbcolor} authoryear}] author-year citation style of
\file{natbib.sty}. If you want to add extra options of
\file{natbib.sty}, you may use the options as comma delimited strings
as arguments to \verb+\biboptions+ command. An example would be:
\end{description}
\begin{vquote}
\biboptions{longnamesfirst,angle,semicolon}
\end{vquote}
\begin{description}
\item [{\tt\color{verbcolor} number}] numbered citation style. Extra options
can be loaded with\linebreak \verb+\biboptions+ command.
\item [{\tt\color{verbcolor} sort\&compress}] sorts and compresses the
numbered citations. For example, citation [1,2,3] will become [1--3].
\item [{\tt\color{verbcolor} longtitle}] if front matter is unusually long, use
this option to split the title page across pages with the correct
placement of title and author footnotes in the first page.
\item [{\tt\color{verbcolor} times}] loads \file{txfonts.sty}, if
available in the system to use Times and compatible math fonts.
\item [{\tt\color{verbcolor} reversenotenum}] Use alphabets as
author--affiliation linking labels and use numbers for author
footnotes. By default, numbers will be used as author--affiliation
linking labels and alphabets for author footnotes.
\item [{\tt\color{verbcolor} lefttitle}] To move title and
author/affiliation block to flushleft. \verb+centertitle+ is the
default option which produces center alignment.
\item [{\tt\color{verbcolor} endfloat}] To place all floats at the end
of the document.
\item [{\tt\color{verbcolor} nonatbib}] To unload natbib.sty.
\item [{\tt\color{verbcolor} doubleblind}] To hide author name,
affiliation, email address etc. for double blind refereeing purpose.
\item[] All options of \file{article.cls} can be used with this
document class.
\item[] The default options loaded are \verb+a4paper+, \verb+10pt+,
\verb+oneside+, \verb+onecolumn+ and \verb+preprint+.
\end{description}
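For example, to format a submission in the final double column style of a
model $3+$ journal with author-year citations, the class can be loaded as:
\begin{vquote}
\documentclass[3p,twocolumn,authoryear]{elsarticle}
\end{vquote}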
\section{Frontmatter}
There are two types of frontmatter coding:
\begin{enumerate}[(1)]
\item each author is
connected to an affiliation with a footnote marker; hence all
authors are grouped together and affiliations follow;
\pagebreak
\item authors of same affiliations are grouped together and the
relevant affiliation follows this group.
\end{enumerate}
An example of coding the first type is provided below.
\begin{vquote}
\title{This is a specimen title\tnoteref{t1,t2}}
\tnotetext[t1]{This document is the results of the research
project funded by the National Science Foundation.}
\tnotetext[t2]{The second title footnote which is a longer
text matter to fill through the whole text width and
overflow into another line in the footnotes area of the
first page.}
\end{vquote}
\begin{vquote}
\author[1]{Jos Migchielsen\corref{cor1}%
\fnref{fn1}}
\ead{J.Migchielsen@elsevier.com}
\author[2]{CV Radhakrishnan\fnref{fn2}}
\ead{cvr@sayahna.org}
\author[3]{CV Rajagopal\fnref{fn1,fn3}}
\ead[url]{www.stmdocs.in}
\end{vquote}
\begin{vquote}
\cortext[cor1]{Corresponding author}
\fntext[fn1]{This is the first author footnote.}
\fntext[fn2]{Another author footnote, this is a very long
footnote and it should be a really long footnote. But this
footnote is not yet sufficiently long enough to make two
lines of footnote text.}
\fntext[fn3]{Yet another author footnote.}
\address[1]{Elsevier B.V., Radarweg 29, 1043 NX Amsterdam,
The Netherlands}
\address[2]{Sayahna Foundations, JWRA 34, Jagathy,
Trivandrum 695014, India}
\address[3]{STM Document Engineering Pvt Ltd., Mepukada,
Malayinkil, Trivandrum 695571, India}
\end{vquote}
The output of the above \TeX{} source is given in Clips~\ref{clip1} and
\ref{clip2}. The header portion or title area is given in
Clip~\ref{clip1} and the footer area is given in Clip~\ref{clip2}.
\deforange{blue!70}
\src{Header of the title page.}
\includeclip{1}{130 612 477 707}{1psingleauthorgroup.pdf}
\deforange{orange}
\deforange{blue!70}
\src{Footer of the title page.}
\includeclip{1}{93 135 499 255}{1pseperateaug.pdf}
\deforange{orange}
Most of the commands such as \verb+\title+, \verb+\author+,
\verb+\address+ are self explanatory. Various components are
linked to each other by a label--reference mechanism; for
instance, title footnote is linked to the title with a footnote
mark generated by referring to the \verb+\label+ string of
the \verb=\tnotetext=. We have used similar commands
such as \verb=\tnoteref= (to link title note to title);
\verb=\corref= (to link corresponding author text to
corresponding author); \verb=\fnref= (to link footnote text to
the relevant author names). \TeX{} needs two compilations to
resolve the footnote marks in the preamble part.
Given below are the syntax of various note marks and note texts.
\begin{vquote}
\tnoteref{<label(s)>}
\corref{<label(s)>}
\fnref{<label(s)>}
\tnotetext[<label>]{<title note text>}
\cortext[<label>]{<corresponding author note text>}
\fntext[<label>]{<author footnote text>}
\end{vquote}
\noindent where \verb=<label(s)>= can be either one or more comma
delimited label strings. The optional arguments to the
\verb=\author= command holds the ref label(s) of the address(es)
to which the author is affiliated while each \verb=\address=
command can have an optional argument of a label. In the same
manner, \verb=\tnotetext=, \verb=\fntext=, \verb=\cortext= will
have optional arguments as their respective labels and note text
as their mandatory argument.
The following example code provides the markup of the second type
of author-affiliation.
\begin{vquote}
\author{Jos Migchielsen\corref{cor1}%
\fnref{fn1}}
\ead{J.Migchielsen@elsevier.com}
\address{Elsevier B.V., Radarweg 29, 1043 NX Amsterdam,
The Netherlands}
\author{CV Radhakrishnan\fnref{fn2}}
\ead{cvr@sayahna.org}
\address{Sayahna Foundations, JWRA 34, Jagathy,
Trivandrum 695014, India}
\author{CV Rajagopal\fnref{fn1,fn3}}
\ead[url]{www.stmdocs.in}
\address{STM Document Engineering Pvt Ltd., Mepukada,
Malayinkil, Trivandrum 695571, India}
\end{vquote}
\vspace*{-.5pc}
\begin{vquote}
\cortext[cor1]{Corresponding author}
\fntext[fn1]{This is the first author footnote.}
\fntext[fn2]{Another author footnote, this is a very long
footnote and it should be a really long footnote. But this
footnote is not yet sufficiently long enough to make two lines
of footnote text.}
\end{vquote}
The output of the above \TeX{} source is given in Clip~\ref{clip3}.
\deforange{blue!70}
\src{Header of the title page.}
\includeclip{1}{119 563 468 709}{1pseperateaug.pdf}
\deforange{orange}
\pagebreak
Clip~\ref{clip4} shows the output after giving \verb+doubleblind+ class option.
\deforange{blue!70}
\src{Double blind article}
\includeclip{1}{124 567 477 670}{elstest-1pdoubleblind.pdf}
\deforange{orange}
\vspace*{-.5pc}
The frontmatter part has further environments such as abstracts and
keywords. These can be marked up in the following manner:
\begin{vquote}
\begin{abstract}
In this work we demonstrate the formation of a new type of
polariton on the interface between a ....
\end{abstract}
\end{vquote}
\vspace*{-.5pc}
\begin{vquote}
\begin{keyword}
quadruple exiton \sep polariton \sep WGM
\end{keyword}
\end{vquote}
\noindent Each keyword shall be separated by a \verb+\sep+ command.
\textsc{msc} classifications shall be provided in
the keyword environment with the commands
\verb+\MSC+. \verb+\MSC+ accepts an optional
argument to accommodate future revisions.
eg., \verb=\MSC[2008]=. The default is 2000.\looseness=-1
\subsection{New page}
Sometimes you may need to give a page-break and start a new page after
title, author or abstract. Following commands can be used for this
purpose.
\begin{vquote}
\newpageafter{title}
\newpageafter{author}
\newpageafter{abstract}
\end{vquote}
\begin{itemize}
\leftskip-2pc
\item [] {\tt\color{verbcolor} \verb+\newpageafter{title}+} typeset the title alone on one page.
\item [] {\tt\color{verbcolor} \verb+\newpageafter{author}+} typeset the title
and author details on one page.
\item [] {\tt\color{verbcolor} \verb+\newpageafter{abstract}+}
typeset the title,
author details, and abstract \& keywords on one page.
\end{itemize}
\section{Floats}
{Figures} may be included using the command, \verb+\includegraphics+ in
combination with or without its several options to further control
the graphic. \verb+\includegraphics+ is provided by \file{graphic[s,x].sty}
which is part of any standard \LaTeX{} distribution.
\file{graphicx.sty} is loaded by default. \LaTeX{} accepts figures in
the postscript format while pdf\LaTeX{} accepts \file{*.pdf},
\file{*.mps} (metapost), \file{*.jpg} and \file{*.png} formats.
pdf\LaTeX{} does not accept graphic files in the postscript format.
The \verb+table+ environment is handy for marking up tabular
material. If users want to use \file{multirow.sty},
\file{array.sty}, etc., to fine control/enhance the tables, they
are welcome to load any package of their choice and
\file{elsarticle.cls} will work in combination with all loaded
packages.
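For instance, a simple table can be coded in the standard \LaTeX{}
manner (the values below are placeholders):
\begin{vquote}
\begin{table}
\caption{A sample table.}
\begin{tabular}{lcr}
\hline
Item & Unit & Value \\
\hline
Alpha & kg & 1.2 \\
Beta & m & 3.4 \\
\hline
\end{tabular}
\end{table}
\end{vquote}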
\section[Theorem and ...]{Theorem and theorem like environments}
\file{elsarticle.cls} provides a few shortcuts to format theorems and
theorem-like environments with ease. In all commands the options that
are used with the \verb+\newtheorem+ command will work exactly in the same
manner. \file{elsarticle.cls} provides three commands to format theorem or
theorem-like environments:
\begin{vquote}
\newtheorem{thm}{Theorem}
\newtheorem{lem}[thm]{Lemma}
\newdefinition{rmk}{Remark}
\newproof{pf}{Proof}
\newproof{pot}{Proof of Theorem \ref{thm2}}
\end{vquote}
The \verb+\newtheorem+ command formats a
theorem in \LaTeX's default style with italicized font, bold font
for theorem heading and theorem number at the right hand side of the
theorem heading. It also optionally accepts an argument which
will be printed as an extra heading in parentheses.
\begin{vquote}
\begin{thm}
For system (8), consensus can be achieved with
$\|T_{\omega z}$
...
\begin{eqnarray}\label{10}
....
\end{eqnarray}
\end{thm}
\end{vquote}
Clip~\ref{clip5} will show you how some text enclosed between the
above code\goodbreak \noindent looks like:
\vspace*{6pt}
\deforange{blue!70}
\src{{\ttfamily\color{verbcolor}\expandafter\@gobble\string\\ newtheorem}}
\includeclip{2}{1 1 453 120}{jfigs.pdf}
\deforange{orange}
The \verb+\newdefinition+ command is the same in
all respects as its\linebreak \verb+\newtheorem+ counterpart except that
the font shape is roman instead of italic. Both
\verb+\newdefinition+ and \verb+\newtheorem+ commands
automatically define counters for the environments defined.
\vspace*{6pt}
\deforange{blue!70}
\src{{\ttfamily\color{verbcolor}\expandafter\@gobble\string\\ newdefinition}}
\includeclip{1}{1 1 453 105}{jfigs.pdf}
\deforange{orange}
The \verb+\newproof+ command defines proof environments with
upright font shape. No counters are defined.
\vspace*{6pt}
\deforange{blue!70}
\src{{\ttfamily\color{verbcolor}\expandafter\@gobble\string\\ newproof}}
\includeclip{3}{1 1 453 65}{jfigs.pdf}
\deforange{orange}
Users can also make use of \verb+amsthm.sty+ which will override
all the default definitions described above.
\section[Enumerated ...]{Enumerated and Itemized Lists}
\file{elsarticle.cls} provides an extended list processing macros
which makes the usage a bit more user friendly than the default
\LaTeX{} list macros. With an optional argument to the
\verb+\begin{enumerate}+ command, you can change the list counter
type and its attributes.
\begin{vquote}
\begin{enumerate}[1.]
\item The enumerate environment starts with an optional
argument `1.', so that the item counter will be suffixed
by a period.
\item You can use `a)' for an alphabetical counter and `(i)' for a
roman counter.
\begin{enumerate}[a)]
\item Another level of list with alphabetical counter.
\item One more item before we start another.
\end{vquote}
\deforange{blue!70}
\src{List -- Enumerate}
\includeclip{4}{1 1 453 185}{jfigs.pdf}
\deforange{orange}
Further, the enhanced list environment allows one to prefix a
string like `step' to all the item numbers.
\begin{vquote}
\begin{enumerate}[Step 1.]
\item This is the first step of the example list.
\item Obviously this is the second step.
\item The final step to wind up this example.
\end{enumerate}
\end{vquote}
\deforange{blue!70}
\src{List -- enhanced}
\includeclip{5}{1 1 313 83}{jfigs.pdf}
\deforange{orange}
\section{Cross-references}
In electronic publications, articles may be internally
hyperlinked. Hyperlinks are generated from proper
cross-references in the article. For example, the words
\textcolor{black!80}{Fig.~1} will never be more than simple text,
whereas the proper cross-reference \verb+\ref{tiger}+ may be
turned into a hyperlink to the figure itself:
\textcolor{blue}{Fig.~1}. In the same way,
the words \textcolor{blue}{Ref.~[1]} will fail to turn into a
hyperlink; the proper cross-reference is \verb+\cite{Knuth96}+.
Cross-referencing is possible in \LaTeX{} for sections,
subsections, formulae, figures, tables, and literature
references.
\section[Mathematical ...]{Mathematical symbols and formulae}
Many physical/mathematical sciences authors require more
mathematical symbols than the few that are provided in standard
\LaTeX. A useful package for additional symbols is the
\file{amssymb} package, developed by the American Mathematical
Society. This package includes such oft-used symbols as
$\lesssim$ (\verb+\lesssim+), $\gtrsim$ (\verb+\gtrsim+) or
$\hbar$ (\verb+\hbar+). Note that your \TeX{}
system should have the \file{msam} and \file{msbm} fonts installed. If
you need only a few symbols, such as $\Box$ (\verb+\Box+), you might try the
package \file{latexsym}.
Another point which would require authors' attention is the
breaking up of long equations. When you use
\file{elsarticle.cls} for formatting your submissions in the
\verb+preprint+ mode, the document is formatted in single column
style with a text width of 384pt or 5.3in. When this document is
formatted for final print and if the journal happens to be a double column
journal, the text width will be reduced to 224pt for 3+
double column and 5+ journals. All the nifty
fine-tuning in equation breaking done by the author goes to waste in
such cases. Therefore, authors are requested to check this
problem by typesetting their submissions in final format as well
just to see if their equations are broken at appropriate places,
by changing appropriate options in the document class loading
command, which is explained in section~\ref{sec:usage},
\nameref{sec:usage}. This allows authors to fix any equation breaking
problem before submission for publication.
\file{elsarticle.cls} supports formatting the author submission
in different types of final format. This is further discussed in
section \ref{sec:final}, \nameref{sec:final}.
\subsection*{Displayed equations and double column journals}
Many Elsevier journals print their text in two columns. Since
the preprint layout uses a larger line width than such columns,
the formulae are too wide for the line width in print. Here is an
example of an equation (see equation 6) which is perfect in a
single column preprint format:
\bigskip
\setlength\Sep{6pt}
\src{See equation (6)}
\deforange{blue!70}
\includeclip{4}{105 500 500 700}{1psingleauthorgroup.pdf}
\deforange{orange}
\noindent When this document is typeset for publication in a
model 3+ journal with double columns, the equation will overlap
the second column text matter if the equation is not broken at
the appropriate location.
\vspace*{6pt}
\deforange{blue!70}
\src{See equation (6) overprints into second column}
\includeclip{3}{59 421 532 635}{elstest-3pd.pdf}
\deforange{orange}
\vspace*{6pt}
\noindent The typesetter will try to break the equation which
need not necessarily be to the liking of the author or as it
happens, typesetter's break point may be semantically incorrect.
Therefore, authors may check their submissions for the incidence
of such long equations and break the equations at the correct
places so that the final typeset copy will be as they wish.
\section{Bibliography}
Three bibliographic style files (\verb+*.bst+) are provided ---
\file{elsarticle-num.bst}, \file{elsarticle-num-names.bst} and
\file{elsarticle-harv.bst} --- the first one can be used for the
numbered scheme, the second one for the numbered scheme with the new
options of \file{natbib.sty}, and the third one for the author-year
scheme.
In \LaTeX{} literature, references are listed in the
\verb+thebibliography+ environment. Each reference is a
\verb+\bibitem+ and each \verb+\bibitem+ is identified by a label,
by which it can be cited in the text:
\verb+\bibitem[Elson et al.(1996)]{ESG96}+ is cited as
\verb+\citet{ESG96}+.
\noindent In connection with cross-referencing and
possible future hyperlinking it is not a good idea to collect
more than one literature item in one \verb+\bibitem+. The
so-called Harvard or author-year style of referencing is enabled
by the \LaTeX{} package \file{natbib}. With this package the
literature can be cited as follows:
\begin{enumerate}[\textbullet]
\item Parenthetical: \verb+\citep{WB96}+ produces (Wettig \& Brown, 1996).
\item Textual: \verb+\citet{ESG96}+ produces Elson et al. (1996).
\item An affix and part of a reference:
\verb+\citep[e.g.][Ch. 2]{Gea97}+ produces (e.g. Governato et
al., 1997, Ch. 2).
\end{enumerate}
In the numbered scheme of citation, \verb+\cite{<label>}+ is used,
since \verb+\citep+ or \verb+\citet+ has no relevance in the numbered
scheme. \file{natbib} package is loaded by \file{elsarticle} with
\verb+numbers+ as default option. You can change this to author-year
or harvard scheme by adding option \verb+authoryear+ in the class
loading command. If you want to use more options of the \file{natbib}
package, you can do so with the \verb+\biboptions+ command, which is
described in the section \ref{sec:usage}, \nameref{sec:usage}. For
details of various options of the \file{natbib} package, please take a
look at the \file{natbib} documentation, which is part of any standard
\LaTeX{} installation.
In addition to the above standard \verb+.bst+ files, there are 10
journal-specific \verb+.bst+ files also available.
Instruction for using these \verb+.bst+ files can be found at
\href{http://support.stmdocs.in/wiki/index.php?title=Model-wise_bibliographic_style_files}
{http://support.stmdocs.in}
\section{Graphical abstract and highlights}
Templates for adding a graphical abstract and highlights are now
available. These will appear as the first two pages of the PDF before the
article content begins.
\pagebreak
Please refer below to see how to code them.
\section{Introduction}
Given an increased demand for renewable energy, accurate predictive models are essential to justify, manage, and monitor wind turbine power generation. In particular, accurate predictions of the \emph{power output} (under uncertainty) enable reliable forecasting of the expected income for a complete wind farm -- as well as individual turbines -- to support the expansion of wind-based energy \cite{TASLIMIRENANI2016544}. %
Robust models of the power output have potential applications in {performance monitoring} and operator control, to ensure optimal use \textit{in situ} \cite{papatheou2017performance,YANG2013365}.
\emph{Power curves} capture the relationship between wind speed and turbine output power \cite{papatheou2017performance} -- the associated function can be used as a key indicator of performance \cite{ROGERS20201124}. %
A regression can be inferred to approximate the relationship given operational measurements (training data) -- typically recorded using Supervisory Control and Data Acquisition (SCADA) systems \cite{YANG2013365}. %
An example of SCADA data is presented in \Cref{fig:ideal_data}; a regression of these data should generalise to future measurements given \emph{optimal} operation of the wind turbine. %
\begin{figure}[pt]
\centering
\includegraphics[width=\linewidth]{figures/ideal_curve.png}
\caption{Data that represent an ideal power curve. Measurements from three turbines over a period of three weeks.}\label{fig:ideal_data}
\end{figure}
Various techniques have been proposed to model training data that correspond to \emph{ideal} operation~\cite{thapar2011critical,carrillo2013review,lydia2014comprehensive}. %
In practice, however, only a subset of measurements will typically represent this relationship. %
In particular, power \textit{curtailments} will appear as additional functional components; %
these usually correspond to the output power being controlled (or limited) by the operator. Reasons to limit power include: requirements of the electrical grid \cite{waite2016modeling,hur2014curtailment}, the mitigation of loading/wake effects~\cite{bontekoning2017analysis}, and restrictions enforced by planning regulations. %
\begin{figure}[pt]
\centering
\includegraphics[width=\linewidth]{figures/curtailed_curve.png}
\caption{Data including power curtailments -- corresponding to (a) the ideal power curve (b) $\approx$50\%-limited output, and (c) zero-limited output. Measurements from seven turbines over nine weeks.}\label{fig:curtailed_data}
\end{figure}
An example of operational data including curtailments is shown in~\Cref{fig:curtailed_data}. %
The emergent space is multivalued, differing significantly from the archetypal curve in~\Cref{fig:ideal_data} (additionally, it cannot be modelled by conventional regression). %
Typically, the curtailment data are removed during pre-processing via engineering judgement \cite{papatheou2017performance}, alongside filtering \cite{MANOBEL20181015, marvuglia2012monitoring} and outlier analysis \cite{MARCIUKAITIS2017732,Papatheou2015} (see~\Cref{s:curts} for details). %
Disregarding curtailment data is logical when modelling the \emph{ideal} curve, corresponding to optimal operation \cite{ROGERS20201124}; despite this fact, curtailed observations are expected in practice.
Therefore, a representation of \emph{in situ} measurements should model these data, rather than filtering them out, particularly in monitoring or forecasting applications (outlined in~\Cref{s:curts}). %
The current work suggests an overlapping mixture of probabilistic regression models \cite{lazaro2012overlapping} (i.e.\ Gaussian processes \cite{rasmussenGP}) to infer multivalued power curves -- such as those in Figure~\ref{fig:curtailed_data}. %
The statistical method can represent operational power data, including curtailments, while negating requirements for user annotation of the observed data -- i.e.\ categorisation of curtailments is unsupervised. %
As a result, the model can represent observations that might be recorded from \textit{in-situ} turbines in operation (rather than the ideal case only), without the need for extensive outlier analysis, filtering, or pre-processing. %
\section{Related Work}
This work relates to existing literature (e.g.\ \cite{papatheou2017performance,Papatheou2015,YANG2013365}) concerning performance monitoring and prediction via wind turbine power curves. %
As aforementioned, numerous data-based models have been investigated, many of which have been summarised in review papers \cite{thapar2011critical,carrillo2013review,lydia2014comprehensive}. %
A brief summary is provided.
\emph{Parametric} methods fit parametrised functions to power curve data; some examples include polynomials and sigmoid (tanh/logistic) functions \cite{lydia2014comprehensive,MARCIUKAITIS2017732,TASLIMIRENANI2016544}. Parametric models are desirable -- sigmoid-type functions in particular -- as properties that appear inherent to power curves can be included; for example: the cut-in/cut-out wind speeds, bounded power above and below these values, as well as \emph{near-linear} behaviour within the bounds. %
Unfortunately, over-simplified functions can prove restrictive when approximating the wind-power relationship, while overly complex models (e.g.\ high-order polynomials) are susceptible to overtraining, and require validation procedures to ensure good generalisation to new data \cite{ROGERS20201124}. %
Alternative methods consider the data alone, and, in general, do not incorporate prior engineering knowledge. %
Some examples include multilayer perceptrons \cite{MANOBEL20181015}, random forests \cite{pandit2019comparison}, and support vector machines \cite{ouyang2017modeling}. %
While these tools have proved effective in various machine learning tasks, many require stringent validation procedures, as the flexibility of the algorithms can easily lead to over-parametrised models in wind turbine applications -- as discussed in \cite{ROGERS20201124}. %
To combat the issues of overtraining, one option considers Gaussian Process (GP) regression \cite{rasmussenGP,Papatheou2015,papatheou2017performance,ROGERS20201124}. %
GPs relieve the need for validation as they are naturally self-regularising through the Bayesian Occam's razor \cite{rasmussen2001occam}; that is, training/optimisation will find the minimally-complex model given the observations in the training set. %
While GPs are typically referred to as nonparametric, a parametrised mean function (e.g.\ a sigmoid function) can be defined in the \textit{prior} of the model. In general terms, the Bayesian formulation allows for the natural inclusion of engineering knowledge of the expected functions, without the need to specify the function directly \cite{rasmussenGP}. %
As a result, GP regression can be viewed as a middle ground between purely data-based methods, and those that are based on engineering knowledge.
\subsection{Power-curtailments}\label{s:curts}
There is an established literature concerning power curtailments in wind energy; for example,~\cite{hur2014curtailment,waite2016modeling,bontekoning2017analysis,fan2015analysis,luo2016wind}. %
Generally, the literature considers physics-based simulation techniques for prediction, or control procedures to \textit{enforce} curtailment -- as opposed to data-driven models of wind-power measurements. %
For example, \citet{hur2014curtailment} present a wind farm controller to adjust the power generated by turbines while considering the requirements of the grid (for a simulated wind farm). %
It is shown that, by considering the entire wind farm in a control system, the output-power can be curtailed more effectively, such that turbines with high wind-speeds compensate for those with lower wind-speeds. %
\citet{bontekoning2017analysis} present an algorithm to determine the available power of a wind farm during curtailment, when considering the \textit{reduced wake-effect}.
This phenomenon occurs when a turbine is curtailed, leading to a reduced wake for downstream turbines; in turn, this leads to an apparent increase in the available power. %
A physics-based model is used to adjust calculations of the available power during curtailment interactions. %
A number of papers (e.g. \citet{fan2015analysis}, \citet{luo2016wind}) have analysed the history of power-curtailments for wind energy in China, to establish potential solutions and improve the utilisation of the available resources. %
A range of technical, planning, and policy-making strategies are proposed, highlighting the importance of understanding the expected curtailments when planning wind farm projects. %
For data-driven power curve models, the curtailment data are typically considered as outliers, and removed during pre-processing; %
this is because the typical concern is to characterise \textit{ideal} operation. %
While the removal of these data makes a regression model simpler, the outlier analysis is non-trivial; %
for example, \citet{MANOBEL20181015} flag and remove outliers using a threshold based on a Gaussian Process regression, while %
\citet{marvuglia2012monitoring} de-noise the data using kernel principal component analysis. %
Alternatively, \citet{MARCIUKAITIS2017732} use the quartile/interquartile range over windowed inputs to detect and remove outliers, while %
\citet{Papatheou2015} use \textit{labels} for weekly subsets of data, provided by an expert, to remove measurements that do not correspond to ideal operation. %
\subsection{Why model power-curtailments?}\label{s:why}
While it is logical to remove curtailment data when modelling an ideal wind-power relationship, %
it is desirable to consider these `outliers' in critical applications -- namely, \textit{monitoring}, and \textit{forecasting}. %
In data-driven monitoring \cite{farrar2012structural}, the model should approximate all the variations of the permitted \textit{normal} condition to inform reliable novelty detection. %
If the model represents ideal operation only, measurements corresponding to acceptable curtailments (via control interactions) will be flagged as abnormal. %
Such a monitoring regime would lead to a large number of false positives; a recognised issue in the turbine monitoring literature~\cite{YANG2013365}. %
On the other hand, accurate curtailment modelling should prove useful within reliable forecasting frameworks. That is, if a model considers all of the expected measurements \textit{in-situ}, it should be more informative than a model of ideal-operation only; i.e. power predictions prove more conservative if curtailment data are considered. It should be noted, however, that the proposed model can only approximate curtailments that have been previously observed.
Finally, if curtailment data are modelled rather than removed, they can be naturally separated using the model itself, instead of a separate outlier analysis procedure. %
As discussed, the process of outlier removal proves far from trivial~\cite{MANOBEL20181015,marvuglia2012monitoring,MARCIUKAITIS2017732,Papatheou2015}. %
\section{Contribution}
A novel algorithm is proposed to \textit{model} curtailments in wind turbine power curves. %
The method offers an alternative to the conventional approach, which filters out the associated data. %
The algorithm expands on previous work concerning Gaussian processes (GP) \cite{ROGERS20201124,Papatheou2015,papatheou2017performance} by inferring an overlapping mixture-model of GP components -- introduced by \citet{lazaro2012overlapping}. %
An alternative (parametrised) mean function is suggested (for the GPs) that is scalable, and therefore suited to represent the expected functions for curtailed data. %
This choice of mean function allows for the inclusion of prior engineering knowledge and leads to interpretable hyperparameters. %
For each component (i.e.\ power curve) in the mixture of regression models, input-dependent (heteroscedastic) noise is approximated (according to \citet{kersting2007most}), an important consideration for probabilistic models of wind turbine power data \cite{ROGERS20201124}.
\subsection{Layout}
\Cref{s:data} introduces the SCADA dataset and the issues associated with modelling operational (curtailed) measurements. %
\Cref{s:GP_PC} summarises conventional Gaussian process regression for power curve modelling and introduces a novel parametrised mean function, as well as methods to approximate input-dependent noise for curtailment data. %
\Cref{sec:OMGP} describes the Overlapping Mixture of Gaussian Processes (OMGP) for power curve modelling, combined with ideas from \Cref{s:GP_PC}.
\Cref{s:results} applies the model to \textit{in situ} operational SCADA data, and proposes methods for population-based monitoring with the OMGP. %
\Cref{s:conc} offers concluding remarks. %
\section{Operational Wind Farm Data: Population-based Monitoring}\label{s:data}
This work considers a SCADA dataset, recorded from an operational wind farm owned by Vattenfall, originally presented in \cite{papatheou2017performance}. For confidentiality reasons, information regarding the specific type, location, and number of turbines cannot be disclosed. The data were recorded from a farm containing the same model of turbine, over a period of 125 weeks~\cite{Papatheou2015,papatheou2017performance}. %
Observations consist of the mean \textit{power} produced and the measured \textit{wind speed} over ten-minute intervals. %
Sub-samples of this dataset are shown in \Cref{fig:ideal_data,fig:curtailed_data}. %
Primarily, the suggested method considers a population-based approach to performance monitoring -- associated with population-based structural health monitoring (PBSHM) \cite{PBSHMMSSP1,PBSHMMSSP2,PBSHMMSSP3}. %
That is, data from a population (the wind farm) are considered to infer a model (the power curve) that is representative of the group -- this general model is referred to as the \textit{form} in PBSHM \cite{PBSHMMSSP1}. %
To reiterate: robust and accurate models of \textit{in situ} population data are required to monitor the wind farm.
\subsection{Dataset details}
For the SCADA data analysed in this work, the observations are \textit{unlabelled}; %
i.e.\ records of the operational, environmental, or damage condition are not available. %
Considering \Cref{fig:curtailed_data}, this fact implies that there is no ground truth to indicate which underlying function generated each sample: (i) normal operation, (ii) $\approx$ 50\% curtailment, or (iii) zero-power\footnote{While the zero-power trend is not a typical curtailment, it is considered here as a function whose data are typically filtered out before modelling. Additionally, the data are interesting to consider, as they differ functionally from other trends in the measurements.}. %
As such, when modelling the curtailments, labels to associate data with wind-power relationships (i - iii) are unobserved and must be represented as latent variables. %
It is important to note: if labels were available (in a control log, for example) they should be \textit{observed} variables in the model. %
In the absence of labelling for functions (i-iii), the model must allocate observations in an \textit{unsupervised} manner, which proves non-trivial (consider outlier analysis procedures from previous work \cite{MANOBEL20181015,marvuglia2012monitoring,MARCIUKAITIS2017732,Papatheou2015}). %
To clarify, weekly subsets of data are presented in Figure~\ref{fig:weekly}; notice that each set can be associated with more than one operational condition~(i-iii). %
While separate trends are visually clear, manually labelling each point with the \textit{ground-truth} is infeasible. %
For example, it is clear that data represent normal (i) and 50\% curtailment (ii) in the left of Figure~\ref{fig:weekly}; however, it becomes difficult to assign measurements to functions as they overlap. %
Likewise, while certain data clearly correspond to zero-power (iii) in the right of Figure~\ref{fig:weekly}, it is unclear if the remaining data correspond to 50\% curtailment (ii) or normal operation~(i). %
Conveniently, the labels can be modelled as a latent random variable. %
In turn, a predictive distribution can associate `soft-labels' with the data, such that a (non-zero) likelihood associates measurements with each of the underlying functions (i-iii). %
\subsection{Data selection}\label{s:selection}
While this work aims to represent more realistic measurements from an operational wind farm, it should be clarified that preprocessing steps are still required. %
The study here primarily considers data from a subset of seven turbines over a period of nine weeks (as well as four alternative turbines over seven weeks, for validation). %
Very sparse outliers are removed via a standard K-nearest-neighbour approach \cite{murphy2012machine}. %
The subsets of data were selected as they contain three trends of data (i-iii). %
It is acknowledged, however, that alternative curtailments can occur, relating to different levels of limited power. %
Some examples of alternative functions from different turbines are demonstrated in~\Cref{s:more}. %
\begin{figure}[pt]
\centering\includegraphics[width=\linewidth]{figures/weely_data.png}
\caption{Examples of weekly data subsets, measured from individual turbines.}\label{fig:weekly}
\end{figure}
\section{Gaussian Processes to Model Curtailed Power Curves}\label{s:GP_PC}
Before introducing the overlapping mixture model (as well as heteroscedastic updates) it is useful to summarise conventional GP regression.
In this application, wind speed measurements correspond to the inputs $x_i$, while power measurements correspond to the outputs $y_i$. %
Given a set of $N$ training data, $\mathcal{D} = \left\{x_i,y_i\right\}_{i=1}^N = \left\{\bm{x},\bm{y}\right\}$, the predictive distribution of the power output $y_*$ for a new measurement of wind speed $x_*$ is inferred. %
Following a probabilistic approach, the power curve is modelled by some noiseless latent function $f(x_i)$, plus an independent noise term $\epsilon_i$,
\begin{align}
y_i = f(x_i) + \epsilon_i \label{eq:prob_reg}
\end{align}
\noindent Rather than inferring the parameters of a function $f$ (as with conventional parametric regression) a GP prior is placed over the functions directly. %
A Gaussian prior is also assumed for the noise term $\epsilon_i$ (the other latent variable). %
Using a Bayesian framework, a posterior distribution over the expected functions can be obtained, once training data $\mathcal{D}$ have been observed. %
The GP prior is defined by its mean $m(x_i)$ and covariance function $k(x_i,x_j)$; while the Gaussian prior is parametrised by $\sigma$,
\begin{align}
f(x_i) &\sim GP\left(m\left(x_i\right),k\left( x_i, x_j\right)\right) \\
\epsilon_i &\sim \mathcal{N}(0, \sigma^2)
\end{align}
\noindent Over a finite and arbitrary set of inputs $\bm{x} = \left\{x_1, \ldots,x_N \right\}$, the GP is a (joint) multivariate Gaussian \cite{murphy2012machine},
\begin{align}
p(\bm{f}\mid \bm{x}) = \mathcal{N}(\bm{m},\bm{K_{xx}})
\end{align}
\noindent where $\bm{m} = \{m(x_1),\ldots,m(x_N)\}$ and $
\bm{f} = \{f(x_1),\ldots,f(x_N)\}$, while $\bm{K_{xx}}$ is the covariance matrix, such that $\bm{K_{xx}}[i,j] = k(x_i,x_j) \;\;\forall i,j \in \{1,\ldots,N\}$. %
Note: square brackets are used to index matrices and vectors when subscripts become cluttered. %
Importantly, via the mean $m(x_i)$ and covariance $k(x_i,x_j)$, the GP prior can be used to encode knowledge of the expected functions given engineering judgement (before data are observed). %
The covariance function determines the correlation between outputs $y_i$ and $y_j$ -- it determines properties such as the process variance, and smoothness \cite{lazaro2012overlapping}. %
A popular (and relatively interpretable) choice of $k(\cdot)$ is the squared-exponential function (which is used here),
\begin{align}
k(x_i, x_j) = \sigma_f^2\exp{\left\{-\frac{1}{2l^2}(x_i - x_j)^2\right\}} \label{eq:sq_exp}
\end{align}
\noindent where $\sigma_f^2$ is the process variance, defining the variance of the expected functions about the mean, and $l$ is the length scale, which determines the rate at which the correlation between outputs decays across the input space (smoothness). %
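For concreteness, the squared-exponential covariance can be evaluated over all input pairs to build the covariance matrix; the sketch below is a minimal NumPy illustration (the function name and default values are illustrative only):

```python
import numpy as np

def sq_exp_kernel(x_i, x_j, sigma_f=1.0, l=1.0):
    # k(x_i, x_j) = sigma_f^2 * exp(-(x_i - x_j)^2 / (2 l^2)),
    # evaluated for all pairs of the 1-D input arrays x_i, x_j
    d = x_i[:, None] - x_j[None, :]
    return sigma_f**2 * np.exp(-0.5 * (d / l)**2)
```

The diagonal of the resulting matrix equals the process variance $\sigma_f^2$, while the off-diagonal entries decay with the input separation at a rate set by $l$.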
Since the GP is flexible enough to model \textit{arbitrary} trends \cite{murphy2012machine}, a zero-mean function is typically assumed \cite{rasmussenGP,lazaro2012overlapping} such that $m(x_i) = 0$; %
this is usually (somewhat) justified by standardising the outputs $\bm{y}$, i.e.\ subtracting the sample mean and dividing by the sample standard deviation. %
However, if knowledge of the \textit{expected} functions can be encoded via an explicit/parametrised mean (even approximately) this should be included \cite{PBSHMMSSP1}\footnote{It is acknowledged, however, that a poor choice of prior can lead to inferior predictions.}. %
With an explicit mean, the resulting algorithm can be considered \textit{semi-parametric} \cite{murphy2012machine}, such that the GP models the residuals between the data and some parametrised function $m(x_i)$ (i.e.\ the prior mean). %
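To illustrate the role of an explicit mean, functions can be drawn from the prior $\mathcal{N}(\bm{m}, \bm{K_{xx}})$ over a finite input grid. The sketch below is illustrative only (the jitter term and the names are implementation choices, not taken from the text); any mean and covariance callables can be supplied:

```python
import numpy as np

def sample_gp_prior(x, mean_fn, kernel_fn, n_samples=3, seed=0):
    # Draw n_samples functions from the GP prior N(m, K_xx);
    # a small jitter keeps the Cholesky factorisation stable
    rng = np.random.default_rng(seed)
    m = mean_fn(x)
    K = kernel_fn(x, x) + 1e-8 * np.eye(len(x))
    L = np.linalg.cholesky(K)
    return m[None, :] + rng.standard_normal((n_samples, len(x))) @ L.T
```

Prior draws scatter around the parametrised mean with a spread set by the process variance, which is exactly the sense in which the GP models residuals from $m(x_i)$.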
\subsection{Prior knowledge of the expected functions}
As aforementioned, sigmoid functions can be used to \textit{approximate} the expected power curve relationship: they exhibit a near-linear relationship within bounds (the cut-in/cut-out wind speeds) and horizontal asymptotes (corresponding to the minimum/maximum power) for high and low inputs ($x_i \rightarrow \pm \infty$). %
Sigmoids have been applied to power curves in the past -- for parametric regression, e.g.\ \cite{TASLIMIRENANI2016544,lydia2014comprehensive,MARCIUKAITIS2017732}, as well as within GPs \cite{ROGERS20201124}. %
A scaled version of the soft-clip (SC) function, presented by \citet{klimek2018neural}, is suggested as an alternative for this application, %
\begin{align}
m(x_i\;;\;\beta, \bm{\alpha}) &= \frac{\alpha_1}{\beta}\log\left\{\frac{1+e^{\beta v}}{1+e^{\beta(v-1)}}\right\} \label{eq:softclip} \\[1em]
v &\triangleq \alpha_2 x_i + \alpha_3 \nonumber\\
\bm{\alpha} &\triangleq \left\{ \alpha_1, \alpha_2, \alpha_3 \right\}
\end{align}
Relating to power curves, the hyperparameters $\{\beta, \bm{\alpha}\}$ are interpretable. %
$\alpha_1$ determines the value of the horizontal (non-zero) asymptote, which corresponds to the maximum (or limited) power. %
$\beta$ controls the \textit{rate} at which the near-linear section tends to the asymptotic values (around the cut-in/cut-out wind speed).
Finally, $\alpha_2$ \textit{scales} and $\alpha_3$ \textit{translates} the function with respect to the $x_i$ axis. %
\begin{figure}[pt]
\centering
\includegraphics[width=\textwidth]{figures/soft_clip.png}
\caption{Effects of the hyperparameters on the mean function of the prior $m(x_i; \beta, \bm{\alpha})$.}\label{fig:softclip}
\end{figure}
\Cref{fig:softclip} illustrates the effects of $\{\beta, \bm{\alpha}\}$.
Importantly, control of the convergence rate via $\beta$ is particularly useful for curtailed data. %
Consider the $\approx $50\% limited trend in \Cref{fig:curtailed_data}: a sigmoid approximation would need to be scaled, such that $\alpha_1 \approx 0.5$, while $\beta$ must also increase to define \textit{sharper} asymptotic behaviour. %
It is acknowledged that the zero-power trend (visible in \Cref{fig:curtailed_data}) does not resemble a soft-clip function. %
In fact, a linear regression would approximate these data -- a suitable component is introduced in \Cref{sec:OMGP}.
\subsection{Prediction and optimisation}
The collected hyperparameters of the model (associated with the mean and kernel functions) are $\bm{\theta} = \left\{\beta, \bm{\alpha}, \sigma_f, l, \sigma \right\}$. Keeping these values fixed, the joint distribution between the training data $\mathcal{D} = \left\{\bm{x},\bm{y}\right\} = \left\{x_i,y_i\right\}_{i=1}^{N}$ and some previously unseen observations $\left\{\bm{x}_*, \bm{y}_*\right\} = \left\{\bm{x}_*[i], \bm{y}_*[i] \right\}_{i=1}^{M}$ (with additive noise) is multivariate Gaussian,
\begin{align}
\begin{bmatrix}
\bm{y}\\ \bm{y}_*
\end{bmatrix} &\sim \mathcal{N}\begin{pmatrix}
\begin{bmatrix}
\bm{m}\\ \bm{m}_*
\end{bmatrix},
\begin{bmatrix}
\; \bm{K_{xx}} + \bm{R} & \bm{K_{xx_*}}\\
\bm{K_{x_*x}} & \bm{K_{x_*x_*}} + \bm{R}_*\;
\end{bmatrix}
\end{pmatrix}.
\label{eq:joint}
\end{align}
\begin{align}
\bm{R} \triangleq \sigma^2\bm{I}_N \nonumber\\
\bm{R}_* \triangleq \sigma^2\bm{I}_M \label{eq:noise_kernel}
\end{align}
\noindent where $\{\bm{R},\bm{R}_*\}$ define the \textit{noise kernels}, such that $\bm{I}_N$ denotes an $N \times N$ identity matrix, and $\bm{I}_M$ denotes an $M \times M$ identity matrix. Continuing similar notation, $\bm{m}_* = \left\{m\left(\bm{x}_*[i]\right)\right\}_{i=1}^{M}$ denotes the mean vector for the new observations. %
According to the standard identity for conditioning a joint Gaussian distribution \cite{murphy2012machine,rasmussenGP}, the predictive equations can be defined,
\begin{align}
p\left(\bm{y}_* \mid \bm{x}_*, \mathcal{D}\right) &= \mathcal{N}\left(\bm{\mu}_*, \bm{\Sigma}_*\right) \label{eq:predict}\\
\bm{\mu}_* &\triangleq \bm{m}_* + \bm{K_{x_*x}}\left(\bm{K_{xx}} + \bm{R}\right)^{-1} \left( \bm{y} - \bm{m}\right) \nonumber\\
\bm{\Sigma}_* &\triangleq \bm{K_{x_*x_*}} - \bm{K_{x_*x}}\left(\bm{K_{xx}} + \bm{R}\right)^{-1} \bm{K_{xx_*}} + \bm{R}_* \nonumber
\end{align}
\noindent i.e.\ the mean of the posterior predictive function is $\mathbb{E} \left[ \bm{y}_* \right] = \bm{\mu}_*$, and the variance about that mean is $\mathbb{V}\left[\bm{y}_*\right] = \textrm{diag}(\bm{\Sigma}_*)$ (ignoring cross-terms).
Until this point, the hyperparameters $\bm{\theta} = \left\{\beta, \bm{\alpha}, \sigma_f, l, \sigma \right\}$ have been fixed. %
In practice, these are (typically) optimised through empirical Bayes \cite{murphy2012machine}, i.e.\ a type-II maximum likelihood \cite{rasmussenGP}, see~\ref{a:typeII} for details.
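The predictive equations amount to conditioning a joint Gaussian. A minimal sketch follows, assuming a constant noise level $\sigma$ and user-supplied mean/kernel callables (names are illustrative; a linear solve replaces the explicit inverse for numerical stability):

```python
import numpy as np

def gp_predict(x, y, x_star, mean_fn, kernel_fn, sigma):
    # mu_*    = m_* + K_{x*x} (K_xx + R)^{-1} (y - m)
    # Sigma_* = K_{x*x*} + R_* - K_{x*x} (K_xx + R)^{-1} K_{xx*}
    m, m_s = mean_fn(x), mean_fn(x_star)
    A = kernel_fn(x, x) + sigma**2 * np.eye(len(x))          # K_xx + R
    K_s = kernel_fn(x_star, x)                               # K_{x*x}
    K_ss = kernel_fn(x_star, x_star) + sigma**2 * np.eye(len(x_star))
    mu_star = m_s + K_s @ np.linalg.solve(A, y - m)
    Sigma_star = K_ss - K_s @ np.linalg.solve(A, K_s.T)
    return mu_star, Sigma_star
```

With a small noise level, the predictive mean interpolates the training data; the diagonal of $\bm{\Sigma}_*$ gives the point-wise predictive variance.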
\subsection{Heteroscedastic updates: Estimating input-dependent noise}\label{s:hetGP}
Currently, the noise term $\epsilon_i$ in \cref{eq:prob_reg} has been governed by a single hyperparameter $\sigma$. When squared, $\sigma$ defines the noise variance; in turn, this defines the noise kernel $\bm{R}$ (\cref{eq:noise_kernel}). This setup enforces the assumption that the noise variance is \textit{constant} across the input domain, leading to a \textit{homoscedastic} GP -- that is, the \textit{noise variance} does not change over $x_i$. %
To demonstrate, \Cref{fig:homoGP} depicts the homoscedastic GP learnt from the ideal data (in \Cref{fig:ideal_data}). %
The model behaves as expected: %
the mean function of the prior approximates the relationship as far as possible, while the GP models the residual between this prior and the data. To highlight this effect, the residual modelled by the GP can be visualised in the zero-mean transformed space, that is $[y_i - m(x_i)]$; see \Cref{fig:homoGP}. %
\begin{figure}
\includegraphics[width=\textwidth]{figures/homo_GP.png}
\caption{Homoscedastic GP regression of the ideal power curve. The model in the data space (top) and the zero-mean transformed space (bottom). The black line shows the prior mean $\bm{m}$, the red line shows the predictive mean $\mathbb{E}[\bm{y}_*]$, and the shaded region shows three-sigma of the predictive variance $\mathbb{V}[\bm{y}_*]$.}\label{fig:homoGP}
\end{figure}
While the expected function $\mathbb{E}[\bm{y}_*]$ is representative of the general trend, the noise variance is poorly approximated when $\sigma$ is constant. %
This is particularly apparent in the transformed space $\left(y_i - m\left(x_i\right)\right)$, where the noise (represented by the shaded area) is significantly overestimated at high/low wind speeds (towards the asymptotes) and underestimated in the near-linear (central) regions. %
In consequence, as proposed in \cite{ROGERS20201124}, it is necessary to model power curve data with input-dependent noise, via \textit{heteroscedastic} regression \cite{goldberg1998regression}. %
Specifically, the variance of the noise terms is now some function of the inputs $x_i$, such that,
\begin{align}
\epsilon_i &\sim \mathcal{N}(0,\sigma^2_i)\\
\sigma^2_i &= r(x_i)
\end{align}
\noindent The GP equations remain the same, other than (\ref{eq:noise_kernel}), which defined a homoscedastic noise kernel. %
For a heteroscedastic process, the diagonal of the noise kernel is now defined by $r(x_i)$, rather than a constant, such that, %
\begin{align}
\bm{R} \triangleq \diag\left(\left\{r(x_1),\ldots,r(x_N)\right\}\right) \nonumber\\
\bm{R}_* \triangleq \diag\left(\left\{r(\bm{x}_*[1]),\ldots,r(\bm{x}_*[M])\right\}\right) \label{eq:het_noise_kernel}
\end{align}
\noindent where the off-diagonal elements are zero, $\bm{R}$ is an $N \times N$ matrix, and $\bm{R}_*$ is an $M \times M$ matrix. %
Rather than specifying a functional form for the noise variance, an additional independent GP is used to infer the function $r(x_i)$. %
As the noise variance $r(x_i)$ must be strictly positive, the GP models the log-noise levels, denoted $g(x_i)$, such that,
\begin{align}
\log(r(x_i)) = g(x_i) \sim GP(\mu_g, k_g(x_i, x_j) )
\end{align}
\noindent i.e.\ a GP prior with constant mean $\mu_g$ and a squared-exponential kernel. The kernel has the same form as \cref{eq:sq_exp}, with a distinct length scale and process variance, such that the hyperparameters of the noise-process are $\bm{\zeta} = \{\mu_g, \sigma_g, l_g \}$. %
The training points for the $g$-process can have arbitrary locations; in this case, it is convenient that they coincide with the $f$-process, such that ${\bm{g} = \{g(x_1), \ldots, g(x_N)\}}$. %
Since the noise process has been introduced as additional (conditionally-independent) latent variables $\bm{g}$, the predictive distribution for $\bm{y}_*$ (previously \cref{eq:predict}) is extended to \cite{kersting2007most},
\begin{align}
p\left(\bm{y}_* \mid \bm{x}_*, \mathcal{D}\right) = \int \int p\left(\bm{y}_* \mid \bm{x}_*, \bm{g}, \bm{g}_*, \mathcal{D}\right)p(\bm{g}, \bm{g}_*\mid \bm{x}_*, \mathcal{D}) d\bm{g}d\bm{g}_*
\end{align}
\noindent Fixing $\{\bm{g}, \bm{g}_*\}$, the predictive distribution $p\left(\bm{y}_* \mid \bm{x}_*, \bm{g}, \bm{g}_*, \mathcal{D}\right)$ is the same as before -- with \cref{eq:het_noise_kernel} defining the noise kernel $\bm{R}$. %
Unfortunately, the term $p(\bm{g}, \bm{g}_*\mid \bm{x}_*, \mathcal{D})$ is problematic, as the integral is intractable. %
Various approximations of the integral can be implemented; including Monte-Carlo approximations, as well as variational inference \cite{lazaro2011variational,ROGERS20201124,blei2017variational}. %
A simple (and computationally-inexpensive) point-wise approximation of $\bm{g}$ is utilised here. %
This approach is convenient, since input-dependent noise can be implemented as an \textit{update} step following inference of the OMGP (outlined in \Cref{sec:OMGP}). %
In this case, the approximation was found to be representative of input-dependent noise for the power curve data. %
Specifically, according to \citet{kersting2007most}, the \textit{most likely} estimate of the target noise levels is assumed for the $g$-process, such that,
\begin{align}
p\left(\bm{y}_* \mid \bm{x}_*, \mathcal{D}\right) &\approx p\left(\bm{y}_* \mid \bm{x}_*, \bm{\hat{g}}, \bm{\hat{g}}_*, \mathcal{D}\right) \\[1em]
\{\bm{\hat{g}}, \bm{\hat{g}}_*\} &\triangleq \argmax_{\{\bm{g}, \bm{g}_*\}}\left\{p(\bm{g}, \bm{g}_*\mid \bm{x}_*, \mathcal{D})\right\}
\end{align}
\noindent i.e.\ \textit{most} (all) of the density of $p(\bm{g}, \bm{g}_*\mid \bm{x}_*, \mathcal{D})$ is assumed to be concentrated around the mode $\{\bm{\hat{g}}, \bm{\hat{g}}_*\}$ \cite{kersting2007most}.
\subsubsection{Optimisation of the noise process}
To obtain point-wise estimates of $\bm{g}$, a homoscedastic process is initially learnt by type-II ML -- denoted $G_1$ -- with hyperparameters $\bm{\theta}$. %
($\bm{R}$ is a constant noise kernel, as in \cref{eq:noise_kernel}.) %
Given $G_1$, an empirical estimate of the most likely noise variance can be calculated for each training observation $\left\{x_i,y_i\right\} \in \mathcal{D}$, by %
considering a sample $\tilde{y}^{(j)}_i$ from the predictive distribution of $G_1$. %
If $y_i$ and $\tilde{y}_i^{(j)}$ are viewed as two independent observations from the same underlying distribution, their empirical variance $0.5\left(y_i - \tilde{y}_i^{(j)}\right)^2$ can be shown to be a valid approximation of the noise variance at $x_i$ \cite{kersting2007most}. %
This can be improved by taking an expectation w.r.t.\ the predictive distribution, such that \cite{kersting2007most},
\begin{align}
\log\left\{\mathbb{V}\left[y_i, G_1(x_i, \mathcal{D})\right] \right\} &\approx g^\prime_i\\
&= \log\left\{\frac{1}{s}\sum_{j=1}^{s}{0.5\left(y_i - \tilde{y}_i^{(j)}\right)^2}\right\} \label{eq:g_est}
\end{align}
\noindent where $s$ is the number of samples drawn from the predictive distribution of $G_1$. %
A suitably large value of $s$ should lead to reasonable estimates: \citet{kersting2007most} recommend $s \geq 100$, thus, in this case, $s=100$. %
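As a minimal illustration, the estimator in \cref{eq:g_est} can be sketched in a few lines of numpy. Here, a hypothetical Gaussian with known variance $v$ stands in for the predictive distribution of $G_1$, so the recovered noise level can be checked:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for G1: at every training input the predictive
# distribution is N(0, v); the targets y are drawn from the same density.
N, s, v = 2000, 200, 0.3**2
y = rng.normal(0.0, np.sqrt(v), size=N)

# Most-likely log-noise estimate, Eq. (g_est): average 0.5*(y - y_tilde)^2
# over s draws from the predictive distribution, then take the logarithm.
sq = np.zeros(N)
for _ in range(s):
    y_tilde = rng.normal(0.0, np.sqrt(v), size=N)
    sq += 0.5 * (y - y_tilde) ** 2
g_prime = np.log(sq / s)
```

Averaged over the inputs, $\exp(g^\prime_i)$ recovers the assumed variance $v$, since $\mathbb{E}\left[0.5(y-\tilde{y})^2\right] = v$ when both observations share the density $\mathcal{N}(0, v)$.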
Having calculated ${\bm{g}^\prime = \{g^\prime_1,\ldots,g^\prime_N\}}$, the noise process can be learnt -- denoted $G_2$ -- by type-II ML (given $\left\{\bm{g}^\prime, \bm{x}\right\}$) with distinct hyperparameters $\bm{\zeta}$. Then, by conditioning a joint multivariate Gaussian (as before), the distribution $p(\bm{g}_* \mid \bm{x}_*, \bm{x}, \bm{g}^\prime)$ can be used to predict the (logarithmic) noise variance across the input space; in turn, defining $r(x_i)$. %
The \textit{heteroscedastic} GP -- denoted $G_3$ -- combines $G_1$ and $G_2$; %
i.e. $G_2$ models the input-dependent noise kernel according to \cref{eq:het_noise_kernel} for the $G_1$ process. %
At this point, $G_1$ is set to $G_3$ ($G_1 \leftarrow G_3$) and each step is repeated until convergence in the marginal likelihood (of the heteroscedastic process $G_3$). %
The optimisation procedure is summarised in~\ref{a:optim-HetGP}. %
Learning $\bm{g}$ in this way effectively minimises the \textit{average} distance between the target output $y_i$ and the predictive distribution of the (heteroscedastic) process $G_3$ at the training inputs~\cite{kersting2007most}. %
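The procedure can be sketched end-to-end with a toy one-dimensional example. The following is a minimal numpy sketch under simplifying assumptions: squared-exponential kernels for both $G_1$ and $G_2$, and fixed hyperparameters rather than type-II ML at each step:

```python
import numpy as np

rng = np.random.default_rng(1)

def rbf(xa, xb, sf, ls):
    # Squared-exponential kernel (assumed for both G1 and G2)
    d = xa[:, None] - xb[None, :]
    return sf**2 * np.exp(-0.5 * (d / ls) ** 2)

def gp_posterior(x, y, sf, ls, r_diag):
    # Condition the joint Gaussian at the training inputs, with per-point
    # noise variances r_diag on the diagonal of the noise kernel
    Kf = rbf(x, x, sf, ls)
    Kxx = Kf + np.diag(r_diag)
    mu = Kf @ np.linalg.solve(Kxx, y)
    var = np.diag(Kf - Kf @ np.linalg.solve(Kxx, Kf))
    return mu, var

# Toy heteroscedastic data: noise standard deviation grows with x
N, s = 80, 100
x = np.linspace(0.0, 1.0, N)
true_sd = 0.05 + 0.3 * x
y = np.sin(2 * np.pi * x) + rng.normal(0.0, true_sd)

r_diag = np.full(N, y.var())                     # G1: homoscedastic start
for _ in range(5):
    mu, var = gp_posterior(x, y, 1.0, 0.2, r_diag)
    # empirical most-likely log-noise g' (Eq. g_est), via s predictive samples
    draws = mu + np.sqrt(var + r_diag) * rng.standard_normal((s, N))
    g_prime = np.log(np.mean(0.5 * (y - draws) ** 2, axis=0))
    # G2: smooth g' with a second GP over the inputs (fixed hyperparameters)
    g_bar = g_prime.mean()                       # constant prior mean for g
    g_hat, _ = gp_posterior(x, g_prime - g_bar, 1.0, 0.3, np.full(N, 0.5))
    r_diag = np.exp(g_bar + g_hat)               # G3: noise for the next pass
```

After a few passes, the estimated noise variances `r_diag` reproduce the increasing trend in the simulated noise, larger towards the right of the input space.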
\subsection{Heteroscedastic regression of the ideal power curve}
The optimised heteroscedastic process for the ideal data is shown in \Cref{fig:hetGP}.
Unlike the homoscedastic example (\Cref{fig:homoGP}), the model is representative of input-dependent noise; %
to highlight this, the lower sub-plots illustrate the changing variance (shaded regions) and associated noise-levels over the inputs (blue line). %
As expected, a lower variance is associated with the tails of the sigmoid and a larger variance at the centre. %
To quantify improvements, the joint log-likelihood of the training and test data under the model can be monitored -- this increases from $ 2.31\times 10^3$ to $3.31\times 10^3$, highlighting that input-dependent noise better approximates the variance in the data. %
\begin{figure}[pt]
\includegraphics[width=\textwidth]{figures/het_GP.png}
\caption{Heteroscedastic GP regression of the ideal power curve. The model in the original space (top), the zero-mean transformed space (middle), and the expectation of the noise function $\sigma_i = \mathbb{E}\left[\sqrt{r(x_i)}\right]$ (bottom). The black line shows the prior mean $\bm{m}$, the red line shows the predictive mean $\mathbb{E}[\bm{y}_*] = \bm{\mu}_*$, and the shaded region shows three-sigma of the predictive variance $\mathbb{V}[\bm{y}_*] = \diag(\bm{\Sigma}_*)$.}\label{fig:hetGP}
\end{figure}
The results so far, however, have shown a regression of the ideal observations only, similar to \cite{ROGERS20201124}. %
The OMGP is now introduced to model curtailed data, such as those in \Cref{fig:curtailed_data,fig:weekly}.
\section{An Overlapping Mixture of Gaussian Processes}\label{sec:OMGP}
The overlapping mixture of Gaussian processes (OMGP) model \cite{lazaro2012overlapping,tay2007smooth} is introduced to infer regression functions given curtailed power curve data. %
Here, the notation follows that of~\citet{lazaro2012overlapping}. %
Rather than a single GP, the OMGP assumes multiple latent functions to describe the data, such that,
\begin{align}
y_i^{(k)} = \left\{f^{(k)}(x_i)+ \epsilon^{(k)}_i\right\}_{k=1}^K
\end{align}
\noindent i.e.\ each observation is found by evaluating one of $K$ latent functions, with additive noise: %
for now, each process is homoscedastic.
As discussed, labels to assign observations to functions are unknown. %
In consequence, a latent variable is introduced to the model, $\bm{Z}$ -- this is a binary indicator matrix, such that $\bm{Z}[i,k] \neq 0$ indicates that observation $i$ was generated by function $k$. %
There is only one non-zero entry per row in $\bm{Z}$ (each observation is found by evaluating one function only). %
The likelihood of the OMGP is, therefore \cite{lazaro2012overlapping},
\begin{align}
p\left(\bm{y}\mid\left\{\bm{f}^{(k)}\right\}_{k=1}^{K},\bm{Z},\bm{x}\right) = \prod^{N,K}_{i,k=1} p\left(y_i \mid f^{(k)}(x_i)\right)^{\bm{Z}[i,k]}
\end{align}
\noindent As with the conventional GP, prior distributions are placed over the latent functions and variables,
\begin{align}
P(\bm{Z}) &= \prod_{i,k=1}^{N,K} \bm{\Pi}[i,k]^{\bm{Z}[i,k]} \label{eq:Zprior}\\
f^{(k)}(x_i) &\sim GP\left(m^{(k)}\left(x_i\right),k^{(k)}\left( x_i, x_j\right)\right) \label{eq:fk_prior} \\
\epsilon^{(k)}_i &\sim \mathcal{N}(0, \sigma^2)
\end{align}
\noindent where \cref{eq:Zprior} is the prior over the indicator matrix, such that $\bm{\Pi}[i,:]$ is a histogram over the $K$ components for the $i^{th}$ observation, and ${\sum_{k=1}^{K}{\bm{\Pi}[i,k]} = 1}$. %
(Note, colon notation is used to index all columns or rows in a matrix.) %
The terms in \cref{eq:fk_prior} are independent GP priors over each latent function $f^{(k)}$ with distinct mean/kernel functions $\left(m^{(k)}\left(x_i\right),k^{(k)}\left( x_i, x_j\right)\right)$. To reduce the number of latent variables, the prior over the noise variances is defined by a \textit{shared} hyperparameter $\sigma$ (this is modified later, in the heteroscedastic updates).
The collected hyperparameters for the model are $\left\{\left\{\bm{\theta}_k\right\}_{k=1}^K,\bm{\Pi}\right\}$. %
The notation $\bm{\theta}_k$ denotes a distinct set of mean/kernel function hyperparameters for the $k^{th}$ component (including the noise kernel). %
Referring back to the curtailed data in \Cref{fig:curtailed_data}, it is now possible to encode prior engineering knowledge of the expected functions through the covariance, mean, and hyperparameters. %
Here, it is argued that the following are known, given prior knowledge of wind turbine power curves:
\begin{itemize}
\item given the training data (and possibly prior knowledge of the operational conditions) it should be clear that three latent functions will be representative of the data, such that\footnote{While it is assumed $K=3$, the point-wise classification of each datum remains unknown.} $K=3$;
\item for the zero-power relationship, a linear regression (with a constant kernel) should be representative;
\item for the remaining functions (ideal and curtailed data) the soft-clip \cref{eq:softclip} appropriately describes the expected relationships.
\end{itemize}
\noindent In this setting, while $K=3$, the prior includes two independent GPs with a soft-clip mean (\cref{eq:softclip}) and squared-exponential kernel (\cref{eq:sq_exp}) function. %
These priors correspond to the ideal and curtailed curves. %
For the final component, a constant kernel is selected $k^{(3)}(x_i,x_j) = c$; this reduces the latent function to a (zero-gradient) linear regression, to approximate the zero-power data. %
To summarise, the hyperparameters (of the prior) of the model are: $\bm{\theta}_{k} = \{\beta_k, \bm{\alpha}_k, \sigma_f^{(k)}, l_k, \sigma\}_{k=1}^2$ and $\bm{\theta}_{3} = \{c,\sigma\}$. %
\subsection{A note on model assumptions}
It is important to clarify the modelling assumptions. %
While the OMGP infers labels $\mathbf{Z}$, to associate measurements to functions in an unsupervised manner, the number of functional components ($K$) and their priors must be defined in advance. %
(This concept is somewhat analogous to unsupervised learning with Gaussian Mixture models \cite{murphy2012machine}.) %
As such, while an engineer is not required to label the data, they are required to predefine an appropriate number of functions. %
Here, it is assumed that this can be determined by inspecting the static training-set (i.e.\ \Cref{fig:curtailed_data}) in an offline sense. %
In certain scenarios, however, predefining $K$ and the prior distributions is less trivial; incremental/online learning, for example.
There are several options in this setting. %
One can select an appropriate number of components via cross-validation, considering quantities such as the Bayesian Information Criterion (BIC) or Bayes factors~\cite{murphy2012machine} -- an example of cross-validation is provided in \Cref{s:more} and \ref{a:cross-val}. %
Alternatively, $K$ could be considered as an additional latent variable, and its estimation could be included in the inference. %
Unsurprisingly, inferring $K$ in this way is more involved, as presented by \citet{ross2013nonparametric}. %
It is reiterated that the training-data consider a subset of possible curtailments (described in \Cref{s:selection}). %
This consideration should not be an issue in practice, as the model is flexible in the power curves it can represent. %
To demonstrate, the OMGP is learnt for another set of curtailments, measured from four \textit{alternative} turbines in the wind farm -- the results are presented in \Cref{s:results}. %
It should be acknowledged, of course, that inference will slow down as more data (or components) are included -- a typical caveat when learning from data. %
In the context of Gaussian processes, there are a number of options; for example, sparse approximations could be explored~\cite{snelson2005sparse}. %
\subsection{Variational approximation}
Due to the latent variables $\left\{\bm{f}^{(k)}\right\}$ and $\bm{Z}$, computation of the posterior $p\left(\left\{\bm{f}^{(k)}\right\}_{k=1}^K, \bm{Z} \mid \bm{x}, \bm{y}\right)$ is now intractable; %
thus, variational inference (VI) \cite{blei2017variational} is implemented. %
Specifically, VI involves specifying a family $\mathcal{Q}$ of approximate densities $q(\bm{a})$ over the latent variables, to approximate the target conditional $p(\bm{a}\mid\bm{b})$.
The best candidate $\hat{q}(\bm{a})$ can be viewed as $q(\bm{a}) \in \mathcal{Q}$ that is \textit{closest} to the (unknown) target $p(\bm{a}\mid\bm{b})$ in terms of the KL-divergence,
\begin{align}
\hat{q}(\bm{a}) = \argmin_{q(\bm{a}) \in \mathcal{Q}} KL\left(q(\bm{a})\mid\mid p(\bm{a}\mid\bm{b})\right) \label{eq:VIKL}
\end{align}
\noindent Once found, $\hat{q}(\bm{a})$ is the best approximation of $p(\bm{a}\mid\bm{b})$ within the family $\mathcal{Q}$~\cite{blei2017variational}. %
(In this case, $\bm{a} \triangleq \left\{\left\{\bm{f}^{(k)}\right\}, \bm{Z}\right\}$ and $\bm{b} \triangleq \{\bm{y}\}$.) %
The KL divergence for \cref{eq:VIKL} is defined,
\begin{align}
KL\left(q(\bm{a})\mid\mid p(\bm{a}\mid\bm{b})\right) &= \mathbb{E}_{q(\bm{a})}[\log{q(\bm{a})}] - \mathbb{E}_{q(\bm{a})}[\log{p(\bm{a}\mid\bm{b})}] \label{eq:KL1}\\
&= \mathbb{E}_{q(\bm{a})}[\log{q(\bm{a})}] - \mathbb{E}_{q(\bm{a})}[\log{p(\bm{a},\bm{b})}] + \log{p(\bm{b})} \label{eq:KL2}
\end{align}
\noindent (Steps (\ref{eq:KL1}) to (\ref{eq:KL2}) use log rules while expanding the conditional.) %
\cref{eq:KL2} reveals the dependence on $p(\bm{b})$, which is intractable, and why VI is needed in the first place \cite{blei2017variational}. Therefore, rather than the KL divergence (\ref{eq:KL2}), an alternative objective is optimised that is equivalent to the (negative) KL divergence up to the term $\log{p(\bm{b})}$, which is a constant with respect to $q(\bm{a})$; that is,
\begin{align}
\mathcal{L}_{b}(\bm{a})&= \mathbb{E}_{q(\bm{a})}[\log{p(\bm{a},\bm{b})}] -\mathbb{E}_{q(\bm{a})}[\log{q(\bm{a})}] \label{eq:elbo1}\\
&= \int q(\bm{a}) \log{\frac{p(\bm{a},\bm{b})}{q(\bm{a})}} \d\bm{a} \label{eq:elbo}
\end{align}
\noindent This quantity is referred to as the \textit{evidence lower bound} (elbo). From (\ref{eq:KL2}), it can be seen that \textit{maximising} this objective will \textit{minimise} the KL divergence between $q(\bm{a})$ and $p(\bm{a}\mid\bm{b})$. %
Conveniently, \cref{eq:elbo} can be used to construct a lower bound on the marginal likelihood $p(\bm{b})$: %
i.e.\ rearranging \cref{eq:KL2} and substituting in (\ref{eq:elbo1}) leads to,
\begin{align}
\log{p(\bm{b})} &= KL\left(q\left(\bm{a}\right) \;\Vert\; p\left(\bm{a}\mid\bm{b}\right)\right) + \mathcal{L}_{b} \label{eq:KL_LB}
\end{align}
\noindent Since $KL(\cdot)\geq0$ \cite{mackay2003information}, it follows that the evidence is lower-bounded by the elbo, in other words \ $\log{p(\bm{b})}\geq \mathcal{L}_{b}$. %
This inequality is useful, as $\mathcal{L}_{b}$ can be used to monitor the marginal likelihood during inference/optimisation (as with the conventional GP, \cref{eq:lml}). %
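The decomposition in \cref{eq:KL_LB} is straightforward to verify numerically; the sketch below uses a toy discrete latent variable, so every term can be computed exactly:

```python
import numpy as np

# Toy model: discrete latent a in {0, 1, 2}; b is a fixed observed outcome.
p_ab = np.array([0.10, 0.30, 0.20])          # joint p(a, b) at the observed b
p_b = p_ab.sum()                             # evidence p(b)
p_a_given_b = p_ab / p_b                     # exact posterior p(a | b)

q = np.array([0.50, 0.25, 0.25])             # an arbitrary variational q(a)

elbo = np.sum(q * np.log(p_ab / q))          # Eq. (elbo)
kl = np.sum(q * np.log(q / p_a_given_b))     # KL(q || p(a | b))
```

For any normalised $q(\bm{a})$, `np.log(p_b)` equals `kl + elbo` exactly, confirming $\log p(\bm{b}) = KL + \mathcal{L}_b$ and hence $\log p(\bm{b}) \geq \mathcal{L}_b$.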
Substituting notation $\bm{a} \triangleq \left\{\left\{\bm{f}^{(k)}\right\}, \bm{Z}\right\}$ and $\bm{b} \triangleq \{\bm{y}\}$ in (\ref{eq:elbo}), leads to (\ref{eq:L_b}),
\begin{small}
\begin{align}
&\log{p(\bm{y}\mid\bm{x})} = \log \int \int p\left(\bm{y}\mid\left\{\bm{f}^{(k)}\right\},\bm{Z},\bm{x}\right) p(\bm{Z}) \prod_{k=1}^K p\left(\bm{f}^{(k)}\mid \bm{x}\right) \d\left\{\bm{f}^{(k)}\right\} \d\bm{Z} \label{eq:tru_lml}\\
&\geq \mathcal{L}_{b} = \int \int q\left(\left\{\bm{f}^{(k)}\right\}, \bm{Z}\right) \log{\frac{p\left(\left\{\bm{f}^{(k)}\right\}, \bm{Z}, \bm{y}, \bm{x}\right)}{q\left(\left\{\bm{f}^{(k)}\right\}, \bm{Z}\right) }} \d\left\{\bm{f}^{(k)}\right\} \d\bm{Z} \nonumber\\
&= \int\int q\left(\left\{\bm{f}^{(k)}\right\}, \bm{Z}\right)
\log{\frac{p\left(\bm{y}\mid\left\{\bm{f}^{(k)}\right\},\bm{Z},\bm{x}\right)p(\bm{Z}) \prod_{k=1}^K p\left(\bm{f}^{(k)}\mid \bm{x}\right)}{q\left(\left\{\bm{f}^{(k)}\right\}, \bm{Z}\right)}}\;\d\left\{\bm{f}^{(k)}\right\}\d\bm{Z} \label{eq:L_b}
\end{align}
\end{small}
A family of variational distributions $q \in \mathcal{Q}$ is now chosen to approximate $p\left(\left\{\bm{f}^{(k)}\right\}, \bm{Z} \mid \bm{x}, \bm{y}\right)$ such that a mean field assumption is implemented: i.e.\ $q$ factorises, $q\left(\left\{\bm{f}^{(k)}\right\}, \bm{Z}\right) = q\left(\left\{\bm{f}^{(k)}\right\}\right) q\left(\bm{Z}\right)$. %
In consequence, due to conjugacy, it is possible to analytically update each latent variable in turn (while keeping the others fixed) such that the bound $\mathcal{L}_b$ is maximised (with respect to that variable). %
Updates for each factor are iterated until convergence in the lower bound $\mathcal{L}_b$.\footnote{At this stage in the inference, the hyperparameters of the model $\left\{\left\{\bm{\theta}_k\right\}_{k=1}^K,\bm{\Pi}\right\}$ are fixed -- they will be optimised later.}
\subsubsection{Mean-field updates}
Firstly, assuming $q\left(\left\{\bm{f}^{(k)}\right\}\right)$ is known -- and therefore the marginals for each component $q\left(\bm{f}^{(k)}\right)= \mathcal{N}\left(\bm{\mu}^{(k)}, \bm{\Sigma}^{(k)}\right)$ -- it is possible to analytically maximise $\mathcal{L}_{b}$ with respect to $q(\bm{Z})$, by setting the derivative of the bound to zero, and constraining $q$ to be a probability density~\cite{lazaro2012overlapping},
\begin{align}
q(\bm{Z}) &= \prod_{i=1,k=1}^{N,K} \bm{\hat{\Pi}}[i,k]^{\bm{Z}[i,k]}, \hspace{1em} \textrm{s.t.} \hspace{1em}
\bm{\hat{\Pi}}[i,k] \propto \bm{\Pi}[i,k] \exp\left(a_{ik}\right) \label{eq:VIup1}\\
a_{ik} &\triangleq - \frac{1}{2\sigma^2} \left(\left( y_i - \bm{\mu}^{(k)}_i \right)^2 + \bm{\Sigma}^{(k)}[i,i] \right) - \frac{1}{2} \log\left(2\pi\sigma^2\right) \nonumber
\end{align}
\noindent where \cref{eq:VIup1} implies the approximated distribution $q(\bm{Z})$ is factorised for each sample~\cite{lazaro2012overlapping}. %
Conversely, assuming $q(\bm{Z})$ is known, $\mathcal{L}_{b}$ can be maximised with respect to each factor $q\left(\bm{f}^{(k)}\right)$,
\begin{align}
q\left(\bm{f}^{(k)}\right) &= \mathcal{N}\left(\bm{f}^{(k)}\mid\bm{\mu}^{(k)}, \bm{\Sigma}^{(k)}\right) \label{eq:VIup2} \\[1ex]
\bm{\Sigma}^{(k)} &\triangleq \left(\bm{K}^{(k)-1}_{\bm{x}\bm{x}} + \bm{B}^{(k)} \right)^{-1} \nonumber\\
\bm{\mu}^{(k)} &\triangleq \bm{m}^{(k)}+ \bm{\Sigma}^{(k)} \bm{B}^{(k)}\left(\bm{y} - \bm{m}^{(k)}\right) \nonumber
\end{align}
\noindent where $\bm{B}^{(k)}$ is a $N\times N$ diagonal matrix (off-diagonals are zero) with elements,
\begin{align}
\bm{B}^{(k)} = \diag\left(\left\{\frac{\bm{\hat{\Pi}}[1,k]}{\sigma^{2}}\,,\; \ldots\;,\, \frac{\bm{\hat{\Pi}}[N,k]}{\sigma^{2}} \right\}\right) \label{eq:omgpB}
\end{align}
\noindent To find a candidate $\hat{q}$ that is closest to the true posterior, $q(\bm{Z})$ and $q\left(\left\{\bm{f}^{(k)}\right\}\right)$ are initialised from their priors, and they are iteratively updated by alternating \cref{eq:VIup1,eq:VIup2}. %
Both updates are optimal with respect to the distribution that they compute; therefore, they are guaranteed to increase the lower bound on the log-marginal-likelihood~\cite{lazaro2012overlapping}, \cref{eq:L_b}.
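A compact numpy sketch of the alternating updates is given below, for toy data generated from two overlapping functions. The hyperparameters are fixed (rather than optimised), and a random initialisation of $\bm{\hat{\Pi}}$ is assumed to break the symmetry between components:

```python
import numpy as np

rng = np.random.default_rng(2)

def rbf(xa, xb, sf=1.0, ls=0.2):
    # Squared-exponential kernel, shared by both components for simplicity
    d = xa[:, None] - xb[None, :]
    return sf**2 * np.exp(-0.5 * (d / ls) ** 2)

# Toy data: each observation comes from one of two latent functions
N, K, sigma2 = 60, 2, 0.05**2
x = np.linspace(0.0, 1.0, N)
z = rng.integers(0, K, N)
y = np.where(z == 0, np.sin(2 * np.pi * x), 1.0 - x) \
    + rng.normal(0.0, np.sqrt(sigma2), N)

Kxx = [rbf(x, x) + 1e-6 * np.eye(N) for _ in range(K)]
m = [np.zeros(N) for _ in range(K)]
Pi = np.full((N, K), 1.0 / K)              # prior over Z
Pi_hat = rng.dirichlet(np.ones(K), N)      # random init of q(Z)

for _ in range(25):
    # Eq. (VIup2): update each q(f^(k)) given q(Z)
    mu, Sig = [], []
    for k in range(K):
        B = np.diag(Pi_hat[:, k] / sigma2)
        S = np.linalg.inv(np.linalg.inv(Kxx[k]) + B)
        mu.append(m[k] + S @ B @ (y - m[k]))
        Sig.append(S)
    # Eq. (VIup1): update q(Z) given the marginals of q({f^(k)})
    a = np.stack([-0.5 / sigma2 * ((y - mu[k])**2 + np.diag(Sig[k]))
                  - 0.5 * np.log(2 * np.pi * sigma2) for k in range(K)], axis=1)
    un = Pi * np.exp(a - a.max(axis=1, keepdims=True))
    Pi_hat = un / un.sum(axis=1, keepdims=True)
```

Each pass leaves $\bm{\hat{\Pi}}$ a valid set of responsibilities (rows sum to one), and the two weighted GP fits specialise to the data they are responsible for.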
\subsubsection{Monitoring convergence: An improved lower bound}
As in \cite{lazaro2011variational}, an improved bound is used to \textit{monitor} convergence, introduced by \citet{king2006fast}. %
This object, denoted $\mathcal{L}_{bc}$, also lower-bounds the marginal likelihood, while being an upper-bound on the standard variational bound $\mathcal{L}_b$ (\cref{eq:L_b}). %
(That is, if $\mathcal{L}_b$ is subtracted from the improved bound, the result is a KL divergence -- %
as $KL(\cdot) \geq 0$, this implies that $\mathcal{L}_{bc}$ upper-bounds $\mathcal{L}_b$.) %
The bound can be defined when the term $\log \int p\left(\bm{y}\mid\left\{\bm{f}^{(k)}\right\}, \bm{Z}, \bm{x}\right) p\left(\bm{Z}\right) \d\bm{Z}$ -- in the true marginal likelihood, \cref{eq:tru_lml} -- is replaced with $ \int q(\bm{Z}) \log \frac{p\left(\bm{y}\mid\left\{\bm{f}^{(k)}\right\}, \bm{Z}, \bm{x}\right) p\left(\bm{Z}\right)}{q(\bm{Z})} \d\bm{Z}$. %
Following substitution, it is possible to integrate over $\left\{\bm{f}^{(k)}\right\}$ analytically.
Alternatively, \citet{lazaro2011variational} show that it is possible to obtain the corrected bound by optimally removing the factor $q\left(\left\{\bm{f}^{(k)}\right\}\right)$ from the standard bound.
The (implementation friendly) expression for $\mathcal{L}_{bc}$ is as follows \cite{lazaro2011variational},
\begin{align}
&\log{p(\bm{y}\mid\bm{x})} \geq \mathcal{L}_{bc} \nonumber\\
& = \sum_{k=1}^{K}\left(-\frac{1}{2} \big\lVert \bm{R}^{(k)\top} \backslash \left(\bm{B}^{(k)\frac{1}{2}} \left( \bm{y} - \bm{m}^{(k)} \right) \right) \big\rVert^2 - \sum_{i=1}^{N} \log\bm{R}^{(k)}[i,i] \right) \ldots \nonumber\\[1ex]
& \quad\qquad\qquad - \textrm{KL}\left(q\left(\bm{Z}\right) \,\big\Vert\, p\left(\bm{Z}\right) \right) - \frac{1}{2} \sum^{N,K}_{i=1,k=1} \bm{\hat{\Pi}}[i,k] \log\left(2\pi\sigma^{2}\right) \label{eq:Lvb}\\[2em]
&\bm{R}^{(k)} \triangleq \textrm{chol}\left(\bm{I} + \bm{B}^{(k)\frac{1}{2}} \,\bm{K}^{{(k)}}_{\bm{x}\bm{x}}\, \bm{B}^{(k)\frac{1}{2}}\right) \nonumber \\[1ex]
&\textrm{KL}\left(q\left(\bm{Z}\right) \,\big\Vert\, p\left(\bm{Z}\right) \right) \triangleq \sum_{i=1,k=1}^{N,K} \bm{\hat{\Pi}}[i,k] \log\frac{\bm{\hat{\Pi}}[i,k]}{\bm{{\Pi}}[i,k]} \nonumber
\end{align}
\noindent where $\textrm{chol}(\cdot)$ is the Cholesky decomposition and the backslash operator $\bm{A}\backslash \bm{B}$ solves the system of linear equations $\bm{A}\bm{c}=\bm{B}$ for $\bm{c}$. The improved, tighter bound is independent of $q\left(\left\{\bm{f}^{(k)}\right\}\right)$ -- hence it can be referred to as the \textit{marginalised variational bound} \cite{lazaro2011variational}. %
In words, this implies that $\mathcal{L}_b$ is the same as $\mathcal{L}_{bc}$ -- for a given $q(\bm{Z})$ -- when an optimal choice for $q\left(\left\{\bm{f}^{(k)}\right\}\right)$ is made \cite{lazaro2012overlapping}. %
In consequence, the bound is more stable over different hyperparameter values \cite{king2006fast}, and it is more efficient when optimising $\left\{\left\{\bm{\theta}_k\right\}_{k=1}^K,\bm{\Pi}\right\}$ through type-II ML (the optimisation scheme is outlined below).
\subsubsection{Optimisation of hyperparameters}
A variational inference and Expectation Maximisation (EM) scheme is implemented \cite{blei2017variational}. %
The strategy iteratively updates the \textit{approximate} (factorised) posterior and then optimises the hyperparameters of the model, while the (improved) lower bound $\mathcal{L}_{bc}$ on the marginal likelihood is maximised. The EM steps are repeated until convergence:
\begin{enumerate}
\item E-step: mean field updates -- iterate \cref{eq:VIup1,eq:VIup2} until convergence in $\mathcal{L}_{bc}$ (or $\mathcal{L}_{b}$), hyperparameters are fixed.
\item M-step: optimise the lower bound $\mathcal{L}_{bc}$ w.r.t.\ all hyperparameters until convergence,
$$\left\{\left\{\hat{\bm{\theta}}_k\right\}_{k=1}^K,\hat{\bm{\Pi}}\right\}= \underset{\left\{\left\{\bm{\theta}_k\right\}_{k=1}^K,\bm{\Pi}\right\}}{\argmax}\bigg\{\mathcal{L}_{bc}\bigg\}$$
the distribution $q(\bm{Z})$ is kept fixed. %
\end{enumerate}
\noindent Having initialised each component from the prior, steps 1 and 2 are iterated until convergence in $\mathcal{L}_{bc}$ (of the M-step).
\subsubsection{Predictive equations}
Having learnt the OMGP, it can be used to estimate the latent variables and functions. %
These predictions are critical in the context of performance monitoring: i.e.\ for a given measurement of wind speed $x_i$, the OMGP can predict the power output $y_i$, and classify the trend (or curtailment) $k \in \{1,\ldots,K\}$.
The posterior predictive likelihood given the unseen inputs $\bm{x}_*$ is,
\begin{align}
p(\bm{y}_*\mid\bm{x}_*,\mathcal{D}) &\approx \sum^K_{k=1} \bm{\Pi}[*,k] \int p\left(\bm{y}_*\mid \bm{f}^{(k)},\bm{x}_*, \mathcal{D}\right)q\left(\bm{f}^{(k)} \mid \mathcal{D}\right) \d \bm{f}^{(k)}\\
&=\sum_{k=1}^K \bm{{\Pi}}[*,k] \; \mathcal{N}\left(\bm{y}_*\mid\bm{\mu}^{(k)}_*,\bm{\Sigma}_{*}^{(k)}\right) \label{eq:PPL}\\[1em]
\bm{\mu}^{(k)}_* &\triangleq \bm{m}_*^{(k)} + \bm{K}_{\bm{x}_*\bm{x}}^{(k)}\left(\bm{K}_{\bm{x}\bm{x}}^{(k)} + \bm{B}^{(k)-1} \right)^{-1} \left(\bm{y} - \bm{m}^{(k)}\right) \nonumber\\
\bm{\Sigma}_{*}^{(k)} &\triangleq \bm{K}_{\bm{x}_*\bm{x}_*}^{(k)} - \bm{K}_{\bm{x}_*\bm{x}}^{(k)}\left(\bm{K}_{\bm{x}\bm{x}}^{(k)} + \bm{B}^{(k)-1} \right)^{-1} \bm{K}_{\bm{x}\bm{x}_*}^{(k)} + \bm{R}_*^{(k)} \nonumber \\
\bm{R}_*^{(k)} &\triangleq \sigma^2\,\bm{I}_M \label{eq:omgpR}
\end{align}
\noindent The prior mixing proportion for new observations $\bm{{\Pi}}[*,k]$ is a fixed hyperparameter, weighting each component equally \textit{a priori}, such that $\bm{{\Pi}}[*,k] = 1/K$. %
Interestingly, the predictive equations for the OMGP are similar to the conventional GP (\cref{eq:predict}), however, the noise component for the training data ($\bm{B}^{(k)-1}$) is scaled according to $\bm{\hat{\Pi}}[i,k]^{-1}$ \cite{lazaro2012overlapping}.
Thus, the noise component effectively \textit{weights} the contribution of each observation in $\mathcal{D}$ to its posterior predictive component in the mixture.
Another useful prediction categorises observations according to the most likely component $k$. For the training data $\mathcal{D}$, this is simply the \textit{maximum a posteriori} (MAP) estimate, given the approximated posterior (\ref{eq:VIup1}),
\begin{equation}
\hat{k}_i = \underset{k}{\textrm{argmax}}\left\{\bm{\hat{\Pi}}[i,k]\right\}
\end{equation}
\noindent For the test-data (i.e.\ weekly wind-power data $\{\bm{x}_*,\bm{y}_*\}$) the posterior predictive class component ${k}_*$ is,
\begin{align}
p(k_*\mid\bm{x}_*,\bm{y}_*,\mathcal{D}) &= \frac{p(\bm{y}_*\mid\bm{x}_*,k,\mathcal{D})\,\bm{{\Pi}}[*,k]}{p(\bm{y}_*\mid\bm{x}_*, \mathcal{D})} \label{eq:bayes_rules_classify}
\end{align}
\noindent where the denominator (evidence) was defined in (\ref{eq:PPL}), and the numerator is,
\begin{align}
p(\bm{y}_*\mid\bm{x}_*,k_*,\mathcal{D})\,p(k_*) &\triangleq \mathcal{N}\left(\bm{y}_*\mid\bm{\mu}^{(k)}_*,\bm{\Sigma}_{*}^{(k)}\right) \bm{{\Pi}}[*,k]
\end{align}
\noindent the MAP class component $\hat{k}_*$ can then be defined,
\begin{align}
\hat{k}_* &= \underset{k_*}{\textrm{argmax}}\left\{p(k_*\mid\bm{x}_*,\bm{y}_*,\mathcal{D})\right\} \label{eq:classify}
\end{align}
\noindent Note, classifying new data according to $\hat{k}_*$ is only possible when \textit{both} $\bm{x}_*$ and $\bm{y}_*$ have been observed. %
This implies that \cref{eq:classify} is restricted to monitoring applications in which the power output has been measured (as demonstrated in the results). %
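In code, the classification rule of \cref{eq:bayes_rules_classify,eq:classify} amounts to weighting per-component Gaussian likelihoods by the prior proportions and normalising. A minimal sketch follows, with hypothetical predictive moments and (for simplicity) diagonal predictive covariances:

```python
import numpy as np

def classify(y_star, mu_star, var_star, Pi_star):
    # y_star: (M,) targets; mu_star, var_star: (M, K) per-component predictive
    # means/variances; Pi_star: (K,) prior mixing proportions (1/K each here)
    log_lik = (-0.5 * np.log(2 * np.pi * var_star)
               - 0.5 * (y_star[:, None] - mu_star) ** 2 / var_star)
    log_post = log_lik + np.log(Pi_star)             # numerator of Eq. (bayes_rules_classify)
    log_post -= log_post.max(axis=1, keepdims=True)  # stabilise before normalising
    post = np.exp(log_post)
    post /= post.sum(axis=1, keepdims=True)          # divide by the evidence
    return post, post.argmax(axis=1)                 # p(k*|...) and MAP label, Eq. (classify)

# Two hypothetical, well-separated components
mu_star = np.array([[0.0, 5.0], [0.0, 5.0]])
var_star = np.ones((2, 2))
post, k_hat = classify(np.array([0.2, 4.8]), mu_star, var_star,
                       np.array([0.5, 0.5]))
```

Here, the first observation lies near the first component's mean and the second near the other, so the MAP labels follow accordingly.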
\subsubsection{Input dependent noise approximations for the OMGP}
At this stage, it is possible to apply heteroscedastic updates to the OMGP, according to the method in \Cref{s:hetGP}. %
In this case, for each $k^{th}$ component, the noise variance is now considered a function of the inputs,
\begin{align}
\left(\sigma_i^{(k)}\right)^{2} &= r^{(k)}(x_i) \\[1em]
\log r^{(k)}(x_i) = g^{(k)}(x_i) &\sim GP(\mu_g^{(k)},k_g^{(k)}(x_i,x_j))
\end{align}
\noindent i.e.\ there are $K$ GPs (with hyperparameters $\bm{\zeta}_k = \{\mu^{(k)}_g, \sigma^{(k)}_g, l^{(k)}_g \}$) to describe input-dependent noise for each function in the mixture -- rather than a single, shared hyperparameter $\sigma$.
Again, the predictive \cref{eq:PPL} remains similar, where the noise kernels are updated. %
In this case, $\bm{B}^{(k)}$ (from \cref{eq:omgpB}) becomes,
\begin{align}
\bm{B}^{(k)} = \diag \left(\left\{\frac{\bm{\hat{\Pi}}[\mathcal{I}^{(k)}_1,k]}{r^{(k)}(x_{\mathcal{I}^{(k)}_1})}\,,\; \ldots\;,\, \frac{\bm{\hat{\Pi}}[\mathcal{I}^{(k)}_N,k]}{r^{(k)}(x_{\mathcal{I}^{(k)}_N})} \right\}\right)
\end{align}
\noindent where the indices $\bm{\mathcal{I}}_k=\{\mathcal{I}^{(k)}_1,\ldots, \mathcal{I}^{(k)}_N\}$ correspond to observations in $\mathcal{D}$ whose MAP label is $k$. %
Formally, $\{x_i, y_i\}_{i\in\bm{\mathcal{I}}_k} \in \mathcal{D}$, where $\hat{k}_{i\in \bm{\mathcal{I}}_k} = k$.
Additionally, $\bm{R}_*^{(k)}$ from \cref{eq:omgpR} is updated,
\begin{align}
\bm{R}_*^{(k)} &\triangleq \diag\left(\left\{r^{(k)}(x_{*1}),\ldots,r^{(k)}(x_{*M})\right\}\right)
\end{align}
In summary, to approximate the noise-process for each component, the training data are split into $K$ subsets, according to the MAP classification (\ref{eq:classify}) given the homoscedastic OMGP and the training data. %
That is, the noise-processes are approximated for each component, given the training data that are associated with that component (according to $\hat{k}_i$) and the framework outlined in \Cref{s:hetGP}.
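In code, this split-and-update reduces to indexing the training data by MAP label and rebuilding the diagonal of $\bm{B}^{(k)}$ with the per-point noise variances. A small sketch (with hypothetical labels and values):

```python
import numpy as np

def component_indices(k_hat, K):
    # I_k: indices of training observations whose MAP label is k
    return [np.flatnonzero(k_hat == k) for k in range(K)]

def het_B(Pi_hat_col, r_vals):
    # Heteroscedastic B^(k): responsibilities divided by the
    # input-dependent noise variances r^(k)(x_i), replacing sigma^2
    return np.diag(Pi_hat_col / r_vals)

k_hat = np.array([0, 1, 0, 2, 1])              # hypothetical MAP labels
idx = component_indices(k_hat, 3)
B0 = het_B(np.array([0.9, 0.8]), np.array([0.1, 0.2]))   # component k = 0
```

Each noise process is then learnt on its own subset, exactly as in the single-GP case of \Cref{s:hetGP}.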
\section{Results}\label{s:results}
In total, 8900 observations were sampled from the wind farm data, corresponding to a (selected) subset of seven operational turbines over nine weeks. %
As aforementioned, three trends are present in these data; additional curtailments may be observed in practical data, and can be included in the OMGP if necessary -- an alternative example is provided in \Cref{s:more}. %
The data are shown in \Cref{fig:curtailed_data,fig:weekly}. %
Approximately 1/3 ($N=2980$ observations) were used for training here, and the remaining data ($M=5920$ observations) were used as an independent test-set. %
OMGP regression of the curtailed data is shown in \Cref{fig:hetOMGP}. %
Given the training observations (larger {$\bullet$} markers), the model has inferred the multivalued behaviour in an unsupervised manner, including the ideal curve (orange), $\approx$ 50\% curtailment (green), and the zero-power behaviour (purple). %
\begin{figure}[pt]
\includegraphics[width=\textwidth]{figures/het_OMGP.png}
\caption{Heteroscedastic OMGP regression of curtailed power curve data. The mixture model in the original space (top), and each component in the zero-mean transformed space, i.e.\ $\bar{y}_i = y_i - m^{(k)}(x_i)$ (bottom three plots). Black lines show the mean functions of the prior $\bm{m}^{(k)}$. The green, orange, and purple lines show the predictive mean $\bm{\mu}^{(k)}_*$, and shaded regions show three-sigma of the predictive variance $\diag(\bm{\Sigma}^{(k)}_*)$. Small {$\bm{\cdot}$} markers show the test set, and larger $\bullet$ markers show the training set. %
For each component, the data correspond to their MAP function, according to $\hat{k}_i$ and $\hat{k}_*$.}\label{fig:hetOMGP}
\end{figure}
Visually, the model is representative of the underlying functions, and it appears to generalise to the test data (smaller {$\bm{\cdot}$} markers). %
Importantly, the GP successfully models the residual between prior engineering knowledge (encoded in the parametrised mean, shown by the black lines in \Cref{fig:hetOMGP}) and the data. %
Generally, the heteroscedastic updates are representative. The noise levels are (perhaps) overestimated towards the asymptotes of the power curves (high and low wind speeds). %
Additionally, the noise for the zero-power trend (purple) is overestimated, as it captures some of the data associated with the ideal/curtailed data -- around negative $0.5$ normalised wind speed. %
Smaller length scales $l_g^{(k)}$ in the noise-processes $g^{(k)}$ might prove appropriate, as there is no guarantee that the parameter set $\left\{\left\{\bm{\theta}_k, \bm{\zeta}_k \right\}_{k=1}^K,\bm{\Pi}\right\}$ represents the \textit{global} maximum of the log-marginal-likelihood. %
However, following several initialisations, this realisation was the most representative (and repeatable). %
To quantify performance, the normalised mean squared-error (NMSE) and Mahalanobis squared-distance (MSD) are provided. %
As the OMGP is a mixture, each test observation is assessed against its most likely component $\hat{k}_*$. In other words, NMSE and (normalised) MSD are assessed for each function with respect to their most likely data -- the corresponding subsets are shown for each component in the (lower) plots of \Cref{fig:hetOMGP}.
\begin{align}
\hat{NMSE} = \frac{100}{M \sigma_{\bm{y}_*}^2}\left(\bm{\mu}^{(\hat{k}_*)}_*-\bm{y}_*\right)^{\top}\left(\bm{\mu}^{(\hat{k}_*)}_*-\bm{y}_*\right) \label{eq:nmse}
\end{align}
\noindent Similarly, the MSD is,
\begin{align}
\hat{MSD} = \frac{1}{M} \sum^{M}_{i=1} \frac{\left(\bm{\mu}^{(\hat{k}_*)}_*[i] -\bm{y}_*[i]\right)^2}{\bm{\Sigma}^{(\hat{k}_*)}[i,i]} \label{eq:msd}
\end{align}
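Both metrics are simple to compute; the sketch below assumes, for illustration, that the predictive moments of the most likely component have already been gathered per test point:

```python
import numpy as np

def nmse(mu, y):
    # Eq. (nmse): percentage-normalised mean squared-error
    return 100.0 * np.sum((mu - y) ** 2) / (len(y) * np.var(y))

def msd(mu, y, var):
    # Eq. (msd): mean Mahalanobis squared-distance (diagonal covariance only)
    return np.mean((mu - y) ** 2 / var)

# Hypothetical targets and predictions offset by one predictive "sigma"
y = np.array([0.0, 1.0, 2.0, 3.0])
res_sd = np.array([0.1, 0.2, 0.1, 0.2])
mu = y + res_sd
```

A perfect prediction gives an NMSE of zero, while residuals of exactly one predictive standard deviation give an MSD of one, which is why an MSD near unity indicates a well-calibrated predictive variance.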
\begin{table}[]
\centering
\begin{tabular}{l|cc|cc}
\hspace{1em} & \multicolumn{2}{c}{Conventional regression} & \multicolumn{2}{c}{Mixture of regressions} \\
\hspace{1em} & RVM & GP & OMGP & Het-OMGP \\
\hline
\hline
NMSE & 47.13 & 46.97 & 0.26 & 0.26 \\
MSD & 1.01 & 1.02 & 1.00 & 0.73
\end{tabular}\caption{Model performance metrics for the curtailed power curve data.}\label{t:metrics}
\end{table}
\Cref{t:metrics} quantifies significant improvements in representing the curtailed power data with a heteroscedastic OMGP. %
For reference, an alternative probabilistic regression is included, previously applied in the literature~\cite{jing2020wind}, the Relevance Vector Machine (RVM); implementation details are provided in~\ref{a:benchmarks}. %
It is reiterated, however: the focus is to show improvements of a mixture of regressions, rather than improvements between conventional regression models. %
The NMSE shows a marked advantage in representing the data with multiple latent functions. %
Nonetheless, the NMSE does not highlight advantages of heteroscedastic updates, since the metric (\ref{eq:nmse}) does not consider the predictive variance $\bm{\Sigma}^{(k)}_*$. %
Therefore, the (normalised) MSD in \Cref{t:metrics} highlights
improvements when modelling input-dependent noise for the mixture%
\footnote{It is acknowledged that the MSD is less useful when assessing the \textit{fit} of the OMGP, as the \textit{error} is scaled by the predictive variance $\bm{\Sigma}^{(k)}_*$; %
thus, the MSD is used only to assess the predictive variance $\bm{\Sigma}^{(k)}_*$.}. %
As discussed, certain hyperparameters can be interpreted. %
$\alpha_1^{(k)}$ corresponds to the maximum (normalised) power in the prior, and $\beta_k$ determines the rate of convergence (of the asymptote) for priors with sigmoidal mean functions. %
As expected, for the ideal curve ($k=1$) the mean of the prior tends to $\alpha_1^{(1)} = 1.021$.
For $k=2$ the asymptote tends to $\alpha_1^{(2)} = 0.4623$; %
this provides a more accurate estimate of the maximum curtailed output (46.23\% rather than $\approx 50\%$). %
As expected, the rate of convergence is greater for the curtailment data ($\beta_2 = 28.8$) and lower for the ideal data ($\beta_1 = 11.4$); this can be seen in the prior mean functions (black lines) in \Cref{fig:hetOMGP}.
\subsection{Validation: more turbines and curtailments}\label{s:more}
To demonstrate the flexibility of the model it is used to infer $K > 3$ latent functions, associated with a \textit{separate} group of turbines in the wind farm. %
As before, the turbines exhibit normal, 50\% curtailment, and zero-power relationships; however, an 80\% curtailment is also observed. %
The priors of the OMGP are defined as in~\Cref{sec:OMGP}, with an additional soft-clip mean function component, such that $K=4$. The number of components is verified via cross-validation in \ref{a:cross-val}. %
A total of 9973 observations are sampled from the data, corresponding to a (selected) subset of four turbines over seven weeks. %
Approximately $1/3$ of the data are used for training and $2/3$ for testing. %
A representative model is learnt for the alternative latent functions, visualised in \Cref{fig:80curt}. %
The same metrics are presented in~\Cref{t:metrics80} to highlight improvements. %
Again, the hyperparameters of the OMGP are interpretable: in particular, for the new curtailment $\alpha_1^{(k)} = 0.81$, corresponding approximately to 80\%.
\begin{figure}[pt]
\centering
\includegraphics[width=\textwidth]{figures/80pluscurt.png}
\caption{Heteroscedastic OMGP of curtailed data from an alternative group of turbines, also exhibiting 80\% curtailment. Black lines show the mean functions of the prior $\bm{m}^{(k)}$. The green, orange, purple, and pink lines show the predictive mean $\bm{\mu}^{(k)}_*$, and shaded regions show three-sigma of the predictive variance $\diag(\bm{\Sigma}^{(k)}_*)$. Small {$\bm{\cdot}$} markers show the test set, and larger $\bullet$ markers show the training set.}\label{fig:80curt}
\end{figure}
\begin{table}[]
\centering
\begin{tabular}{l|cc|cc}
\hspace{1em} & \multicolumn{2}{c}{Conventional regression} & \multicolumn{2}{c}{Mixture of regressions} \\
\hspace{1em} & RVM & GP & OMGP & Het-OMGP \\
\hline
\hline
NMSE & 10.46 & 10.19 & 0.15 & 0.15 \\
MSD & 0.98 & 0.98 & 0.70 & 0.66
\end{tabular}\caption{Model performance metrics when $K=4$, including 80\% curtailed data.}\label{t:metrics80}
\end{table}
The validation experiments with four components ($K=4$) highlight that the OMGP can be used to represent a variety of curtailment relationships, supporting modelling and monitoring procedures for a wide range of data that should be expected in practice. %
\subsection{Towards population-based monitoring: entropy measures}
Considering applications of monitoring \textit{in situ}, the OMGP can be used to inform novelty detection and classification across the wind farm. %
Novel observations of wind speed and power (from the full 125 week monitoring period) can be compared to the OMGP. %
This approach to performance monitoring is an example of population-based SHM, whereby a general model, referred to as the \textit{form} \cite{PBSHMMSSP1}, is used to represent the behaviour of members within a population. %
In this case, the form is the OMGP and the population is the wind farm.
When monitoring via the power curve, the error given the predicted output (e.g.\ \cref{eq:nmse,eq:msd}) can be used for novelty detection, as in \cite{PBSHMMSSP1,Papatheou2015,papatheou2017performance}. %
Alternatively, with the OMGP, given wind speed and power observations, measurements can be classified using $\hat{k}_*$ (\ref{eq:classify}). %
Additionally, the distribution $P(k_*\mid\bm{x}_*,\bm{y}_*,\mathcal{D})$ (\ref{eq:bayes_rules_classify}) is informative from a monitoring perspective; %
this is the probability that $\{\bm{x}_*,\bm{y}_*\}$ were generated by component $f^{(k)}$ in the mixture. %
In other words, the probability that new data correspond to:
\begin{itemize}
\item the normal curve $(k_*=1)$,
\item 50\% curtailment $(k_*=2)$,
\item or zero-power $(k_*=3)$. %
\end{itemize}
\noindent As $k_* \in \{1,2,3\}$, and $\sum_{k_*=1}^{K} P(k_*\mid\bm{x}_*,\bm{y}_*,\mathcal{D}) = 1 \quad \forall \{\bm{x}_*,\bm{y}_*\}$, it is possible to view power curve data as points on a \textit{simplex} in three dimensions, associated with the multinomial distribution $P(k_*\mid\bm{x}_*,\bm{y}_*,\mathcal{D})$. %
The grey triangle in \Cref{fig:simplex} visualises the simplex where points are observations from the test set (concerning the 50\% curtailment data). %
Although initially abstract, the plot is insightful from a monitoring perspective. %
It indicates that classes one and two (ideal and curtailed trends) are regularly confused, while class three (zero power) is confused equally with both. %
This makes sense when inspecting \Cref{fig:hetOMGP}: the ideal and curtailed trends are similar up to a normalised wind speed of zero, while the zero-power trend is indistinguishable from $k=1$ and $k=2$ at low wind speeds. %
\begin{figure}[pt]
\centering
\includegraphics[width=.6\textwidth]{figures/simplex.png}
\caption{The simplex (grey triangle) associated with the distribution $P(k_*\mid\bm{x}_*,\bm{y}_*,\mathcal{D})$. Points on the simplex represent observations of wind speed and power. Blue $\circ$ markers highlight low-entropy points, red $\circ$ markers highlight high-entropy points.}\label{fig:simplex}
\end{figure}
Given this distribution, the Shannon entropy can be used as a measure of \textit{uncertainty}, indicating whether it is likely that new data were generated by the latent functions within the OMGP,
\begin{align}
H(k_*) = - \sum^K_{j=1} P(k_*=j\mid\bm{x}_*,\bm{y}_*,\mathcal{D}) \log P(k_*=j\mid\bm{x}_*,\bm{y}_*,\mathcal{D})
\end{align}
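A minimal sketch of this measure, with the distribution represented as a list of component probabilities:

```python
import math

def shannon_entropy(p):
    # Entropy of the discrete distribution P(k_* | x_*, y_*, D);
    # zero-probability components contribute nothing (0 log 0 := 0).
    return -sum(pk * math.log(pk) for pk in p if pk > 0.0)
```

A point classified with certainty, e.g.\ \texttt{[1, 0, 0]}, has zero entropy, while the uniform distribution \texttt{[1/3, 1/3, 1/3]} attains the maximum, $\log K$.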
\noindent With regard to the simplex in \Cref{fig:simplex}, each corner of the triangle relates to \textit{low} entropy, corresponding to data that are classified with \textit{certainty} (as $k_*=1$, $k_*=2$, or $k_*=3$). On the other hand, the centre corresponds to \textit{high} entropy, i.e.\ observations for which each component is equally likely (or none at all). %
During monitoring, high entropy data can be investigated, as it is unclear which component generated them. %
Examples of high and low entropy data given the test set are shown by red and blue markers respectively in \Cref{fig:simplex}. %
Following investigation, if it appears that new data correspond to an additional latent function (not yet included in the \textit{form} of the wind farm) the mixture can be updated accordingly by adding a component, such that $K \leftarrow K+1$. %
Ideas behind modelling and updating the \textit{form} for a wind farm population (and subsequent monitoring) are the focus of future work.
\section{Conclusions}\label{s:conc}
A novel data-driven model for wind turbine power data has been proposed. %
Critically, the method is capable of representing wind/power measurements including both \textit{curtailed} and ideal operation. %
This is an alternative to the conventional approach, which filters out (and removes) the curtailed (SCADA) data. %
Consequently, the model should be representative of \textit{in situ} behaviour, rather than ideal operation only. %
A mixture of Gaussian processes infers \textit{multivalued} wind-power relationships without labels to associate data to functions. %
Each function corresponds to a different operational condition (power curve) for a wind farm population. %
The algorithm is unsupervised, as labels to define which trend (ideal, curtailed, etc.) generated each of the measurements are not required; this information was not available in the experiments here. %
For each function in the mixture, input-dependent noise is modelled, which is critical when representing power curve data. %
The model is applied to measurements from an operational wind farm, and it is shown to generalise well, representing future measurements from the population for various sets of turbines and curtailments. %
Finally, ideas for population-based power curve monitoring procedures (considering entropy measures) are introduced and discussed.
\section*{Acknowledgements}
The authors gratefully acknowledge the support of the UK Engineering and Physical Sciences Research Council (EPSRC) through Grant references EP/R003645/1, EP/R004900/1, EP/R006768/1.
\bibliographystyle{unsrtnatemph}
\section{Background and Considerations}
\label{sec:background}
In this section, we review the main considerations made in this paper and the concepts behind each one.
\subsection{Video Analytics Systems}
Video analytics applications can be divided into \textit{online} and \textit{offline} video analytics, depending on \textit{when} the information extracted from the frames is needed. On the one hand, online video analytics requires images to be processed in real-time, as actions are triggered by the analysis of what is in front of the camera when images are captured. An ALPR (Automatic License Plate Recognition) system controlling an entry barrier to a given facility would fall under this category, as would an autonomous car detecting obstacles to avoid them. At the same time, past information is no longer useful for online systems. Consequently, such systems are subject to strict latency constraints and are typically evaluated by the single-inference latency. Therefore, they are unable to fully exploit parallelism due to the \textit{hazards} of request consolidation under strict deadlines.
On the other hand, offline video analytics processes images captured in the past. For example, a system to query the contents of recorded footage to obtain the fragments or timestamps that contain the queried objects~\cite{kang2017noscope} falls under this category. These systems analyze images in bulk and are commonly evaluated based on the turnaround latency of the queries (i.e., total system throughput). Therefore, they can prioritize resource utilization over single inference latency.
Online and offline video analytics are two categories with very distinct requirements. In this paper, we focus on online video analytics for two reasons. First, online systems show subpar scalability, as the set of optimizations available when scheduling requests or orchestrating resources is limited. Second, offline video analytics can also benefit from an online analysis to pre-process images and reduce the search space of queries~\cite{hsieh2018focus}.
\subsection{Region of Interest Selection}
Object detection is, in fact, a combination of region proposal and image classification problems. Region proposal techniques suggest a set of regions in the input image that might contain objects of interest (i.e., objects the neural network was trained to detect). These regions are then processed as independent images and are classified based on their contents. The region is considered a positive example if the highest-scoring classification is above a user-defined threshold. Intuitively, the quality of the region proposal mechanisms has a high impact on the quality of the detection results. There are several methods available, such as Selective Search or Feature Pyramid Networks (FPN)~\cite{lin2017feature}.
Neural networks that differentiate region proposal and classification in two stages are called \textit{Two-Shot} detection models. On the contrary, \textit{Single-Shot} detection models skip the region proposal stage and yield localization and class predictions all at once. Regardless of the number of shots, all object detection models predict bounding boxes and classes using a single input layer of a fixed size decided during training. Larger input layers potentially yield more accurate predictions, as more pixels (i.e., information) are taken into account. This is especially true for smaller objects that are more difficult to detect correctly. However, any increment in the input layer's size is followed by an increment in the compute requirements that is not always matched with an equal increment in accuracy.
\subsection{Moving/Foreground/Salient Object Detection}
FoMO relies on background subtraction (BGS) methods that can detect and extract objects of interest. Such methods allow us to differentiate between \textit{background} and \textit{foreground} objects. However, which objects compose the background and foreground is not always clear and is subject to interpretation. For example, in most scenarios, a car would be considered part of the foreground, as cars come and go from the field of view of the camera and are often an object of interest to identify or detect (e.g., autonomous driving or smart cities). However, a broken car in a scrapyard will probably remain where it is for as long as other objects that are typically considered background objects (e.g., traffic lights or buildings). Once that car is correctly identified, there is no need to process it on successive frames repeatedly. Therefore, we make the following assumption: \textit{foreground} objects move, \textit{background} objects are stationary.
As for the set of techniques that can leverage the extraction of foreground objects, we have mainly considered traditional computer vision techniques that deliver \textit{relatively} good results at a small compute overhead. Newer and more accurate methods make use of neural networks to provide high-quality results~\cite{zeng2019combining}, but their compute requirements are higher than those of many detection networks, which significantly reduces the room for improving system performance. Moreover, it is important to note that, while an accurate algorithm to detect salient objects is desirable, background subtraction methods that offer more modest results \textit{usually} increase the number of false positives instead of false negatives. That is, more or larger regions are flagged as foreground/moving than those that actually changed, which can be caused by camera jitter, changes in the lighting conditions, or other phenomena causing unwanted frame-to-frame variations. However, false positives increase the amount of work but do not necessarily translate into false positives being predicted by the detection model. Ultimately, the selection of a more or less accurate BGS method will depend on the compute capabilities available at the edge locations (cameras or compute next to the cameras).
\section{Conclusions}
\label{sec:conclusions}
In this paper, we have presented FoMO, a method that distributes the load of video analytics and, at the same time, reduces its compute requirements and improves the accuracy of the neural network models used for object detection. Results have shown how FoMO is able to scale the number of cameras being processed with a sub-linear increment in the amount of resources required while still improving accuracy with respect to the baseline: the number of inferences can be reduced by a factor of 8 while still achieving between 10\% and 40\% higher mean average precision with the same model.
Moreover, the methodology used shows how video analytics can be effectively deployed in resource-constrained edge locations by tackling its optimization from a different perspective that opens a new line of research.
\section{Introduction}
\label{sed:introduction}
Rapid advances in the field of deep learning have led to an equally rapid and unprecedented increase in interest in video analytics applications. Smart cities or autonomous driving are just two examples of use cases that require an automatic analysis of video feeds to trigger different actions. However, video feeds are commonly processed individually, and, at the same time, the execution of neural networks is a computationally expensive task. This leads to the cost of video analytics systems growing linearly with the number of cameras being deployed. We believe there is a window of opportunity to optimize video analytics systems by combining mechanisms that reduce the search space within an image, thanks to our knowledge of how neural networks look for objects within the input image.
Neural networks can learn to find objects anywhere on an image, even when these objects represent a small fraction of the input and the rest of the image is of no interest. To make this possible, neural networks classify many regions (in the order of thousands) proposed at an earlier stage of the network. This implies that neural networks end up looking at regions that are empty of objects of interest. We could argue that this is what makes neural networks so powerful (especially convolutional neural networks for image analysis), as it means they can focus attention without developers or users needing to hint at where to look in the image. However, it also implies that most of the work will yield no meaningful results, with an increased chance of wrong results (any positive detection on an \textit{empty} region is a false positive).
Moreover, neural networks have a \textit{hard time} detecting small objects~\cite{cai2018cascade}. Yet many of these small objects are captured at a resolution even higher than that used by the network's input layer; once the captured image is resized to feed the neural network, these objects become just a few indistinguishable pixels. For example, an 8MP camera (4K UHD resolution, or 3840x2160 pixels) captures an object at 20 meters with a pixel density of over 100 px/m, enough to get a sharp image of a license plate. Nonetheless, if the image gets resized to standard definition (640x480 pixels), density drops below 20 px/m (calculated for a camera with a 3mm focal length placed at a height of 4m), which is unsuited for most detection tasks. For reference, 320x320 pixels is a common input size for \textit{edge} detection models.
We make the following key observation: scenes captured by static cameras do not move; objects of interest do. We do not need to check the entire scene, just whatever is new in it. By focusing on footage from static cameras, we can extract the regions of the scene that contain \textit{objects of interest} (i.e., objects not yet classified) and filter the rest of the image out. For this task, we have multiple techniques at our disposal that we can apply to identify and extract such regions of interest, like motion detection or background subtraction. This step alone lets the object detection model analyze objects at a higher resolution and, potentially, increase its accuracy. At the same time, we can optimize the analysis of multiple scenes by merging regions from different cameras into a composite frame that can be passed to the model as a single input. Therefore, we can effectively exploit the intrinsic parallelism of neural networks and work on multiple video streams simultaneously, reducing the total number of inferences required by the system.
In this paper, we present FoMO (\textit{Focus on Moving Objects}), a method to optimize video analytics by removing the analysis of \textit{uninteresting} parts of the scene, distributing the work, and consolidating information from multiple cameras to reduce the compute requirements of the analysis. Moreover, as a by-product, neural networks can analyze objects from the scenes at a higher resolution, which results have shown to improve the accuracy of the models tested severalfold.
The paper is organized as follows: in Section~\ref{sec:background}, we introduce some of the concepts of computer vision and video analytics on which we base our work. Then, we present and describe FoMO in Section~\ref{sec:method}. In Section~\ref{sec:setup}, we detail the experimental setup and the methodology followed during the evaluation. In Section~\ref{sec:results}, we present the results of FoMO's evaluation. The literature review and previous work are presented in Section~\ref{sec:related}. Finally, Section~\ref{sec:conclusions} concludes the paper.
\section*{Acknowledgments}
The authors would like to thank Dr. Josep Ll. Berral for his valuable feedback.
\bibliographystyle{splncs04}
\section{Focus on Moving Objects}
\label{sec:method}
In this paper, we present FoMO (Focus on Moving Objects), a novel method to optimize and distribute video analytics workloads in Edge locations. FoMO has been conceived with static cameras in mind, and its goal is to maximize the \textit{pixel-to-object ratio} that is processed at each inference, i.e., it aims to maximize the number of pixels processed by the neural network that belong to actual objects of interest instead of the background. Towards this goal, each frame is preprocessed to extract the regions of interest that have a higher chance of containing objects of interest. Working with static cameras, we can safely assume that such regions of interest intersect with regions whose content has changed over time.
Figure \ref{fig:key_steps} depicts the main steps involved in FoMO. First, a set of static cameras (either in a single location or geographically distributed) periodically capture images from the scenes. Then, each scene is processed individually, and background subtraction is computed to extract moving objects (i.e., regions of the image where movement has been detected). Depending on the compute and network bandwidth available at the Edge, video scenes can be processed locally or sent to a central location. In both cases, a single entity, known as \textit{composer}, is in charge of receiving the objects from all the scenes and consolidating objects into a single RGB matrix known as \textit{composite frame}. This step is called \textit{frame composition}.
During frame composition, each object is treated as an independent unit of work that can be allocated separately or jointly from other objects from the same or other scenes. Objects are selected based on a pre-defined composition policy that determines the order and rate at which objects are composed and processed. The selected objects are placed together at each composition interval, forming a mosaic of objects (i.e., the composite frame). A few-pixels wide border is added to mark frontiers between objects. Next, the composite frame is used as input for the object detection model. Finally, the predicted coordinates of the detected objects are translated back to the original coordinates of the corresponding scene.
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{images/key_components.png}
\caption{Main steps involved in the process of FoMO.}
\label{fig:key_steps}
\end{figure*}
\subsection{Extraction of Objects of Interest}
Frame decoding results in a 3-D matrix containing the RGB color of every pixel in the scene captured by the camera. When cameras capturing the scene are static, consecutive RGB matrices will often have little to no pixel-to-pixel variation. At a higher level of abstraction, this correlation among frames means that nothing is moving in the scene (or the camera lens could not capture it). Consequently, whenever an object is moving in the scene, only some subregions are affected. This is the basis for \textit{motion detection} algorithms, which are extensively used to trigger the analysis of a given frame to save computation on frames that do not contain anything new. In this paper, however, we extend this idea and consider that moving objects, not whole frames, are of interest. The rationale is that for an object to trigger an action, it has to either enter, leave or interact with the scene (i.e., move in the scene). Thus, we can focus on the regions that contain change and discard the rest. As these subregions are usually a tiny fraction of the whole frame, we can combine several subregions from different cameras into a composite frame. This frame can be processed by a neural network the same way it would process an entire frame.
Motion detection involves many challenges (e.g., camera jitter or variations in lighting, among others), and there is no single nor standard method that can handle all of them robustly. Moreover, it usually relies on background initialization algorithms to model the background without foreground objects. Among all available methods, we prioritized simplicity (i.e., speed) over accuracy for two reasons. First, the method must run in resource-constrained (Edge) nodes and do it faster than it would take to process the whole frame by the neural network. Second, the accuracy of these methods is usually evaluated by error metrics that compare the modeled background to the ground-truth background at the pixel level. However, artifacts and other inconsistencies in the generated background do not necessarily translate into a lower recall of objects of interest or a lower accuracy of the object detection model.
We have evaluated FoMO using three different BGS methods to extract objects of interest from a scene. These methods differ, first, by the way they model the background and, second, by how they identify foreground objects. The three BGS methods are:
\begin{itemize}
\item \textbf{PtP Mean} (Moving Average Pixel-to-Pixel): The background corresponds to the moving average of \textit{n} frames (not necessarily consecutive) at the pixel level. Objects are extracted by first applying a Gaussian blur to the background model and the current frame to reduce noise. Then, the absolute difference between both frames is computed before a binary threshold decides which level of difference constitutes change and which does not.
\item \textbf{MOG2} (Mixture of Gaussians): Each pixel is modeled as a distribution over several Gaussians, instead of being a single RGB color. This method is better than the moving average at preserving the edges and is already implemented in OpenCV.
\item \textbf{Hybrid}: The background is modeled as a Mixture of Gaussians (MOG2), but objects are extracted applying the same operations as in the PtP Mean to detect differences between the background and the current frame.
\end{itemize}
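The core of the PtP Mean method can be sketched in a few lines of numpy; the Gaussian blur is omitted for brevity, and the threshold value is illustrative.

```python
import numpy as np

def ptp_mean_mask(history, current, thresh=25):
    # PtP Mean sketch: the background is the pixel-wise mean of the last n
    # frames; foreground pixels are those whose absolute difference from the
    # background exceeds a binary threshold. The Gaussian blur used in the
    # full method to suppress noise is omitted here; `thresh` is illustrative.
    background = np.mean(np.stack(history), axis=0)
    diff = np.abs(current.astype(float) - background)
    return np.where(diff.max(axis=-1) > thresh, 255, 0).astype(np.uint8)
```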
As output, all three methods provide a black-and-white mask of the same dimensions as the frames. From the mask, the bounding boxes of the moving objects are easily extracted by grouping adjacent foreground pixels. Bounding boxes with an area smaller than a pre-defined minimum are discarded. This threshold helps to filter minor variations out. However, it also defines the method's sensitivity, and the optimal value will vary from scene to scene depending on the minimum size of the objects (in pixels) to detect.
All three methods can run in the order of milliseconds in resource-constrained nodes. Performance and accuracy of these methods are analyzed in Section~\ref{sec:results}, where we explore their impact on the final results. In short, a more sensitive motion detector will generate false positives (i.e., non-interesting objects being extracted, like a tree with moving leaves) and, therefore, introduce more or larger regions than necessary to be composed in the following step.
Effectively, the breakdown of the scene into objects opens the door to two major optimizations. On the one hand, we can decide which regions should be processed and which can be omitted. Potentially, this reduces the amount of data moved around and processed by the detection model. Consequently, it increases the pixel-to-object ratio, as we achieve the same results while processing only those pixels that we have considered of interest. On the other hand, video analytics can now be distributed while consuming less network bandwidth, as, potentially, only a fraction of the scene must be sent over the network.
\subsection{Composition Policies}
After moving objects have been extracted from the frames, the composer has a pool of them in the form of cropped images, each with a different size and aspect ratio. At each composition interval, the composer decides what objects are to be allocated in the resulting composite frame. Currently, objects are selected in a purely first-come, first-served manner. However, not all objects can be allocated during the subsequent allocation cycle, as there is a limit to the number of objects composed in one frame. The threshold is not a fixed value and is set upon one of two pre-defined composition policies.
To the composer, objects become the minimum unit of work to be allocated into composite frames. Therefore, we could argue that the composer treats composite frames as a type of resource, a resource that can be shared among requests (i.e., objects). At the same time, the composite frame can be considered elastic, as it can be overprovisioned by simply adding more objects to it. Nonetheless, as in any other type of shared resource, there is a point at which a higher degree of overprovisioning degrades the overall performance. Object detection models can locate and classify multiple objects within an input image and do it with a single forward pass over the network. However, there is a minimum number of pixels required for the model to successfully detect an object (closely related to the focal view and input size of the network). Similarly, the larger an object is, the more accurately a model can detect it.
During composition, the composer and the composition policy must consider the trade-off between the resource savings from consolidating more objects into a single composite frame and the accuracy drop involved. After a frame is composed, the resulting frame is resized to match the neural network's input size regardless of its original size. At the same time, adding one more object to the composition could potentially result in a larger frame, depending on whether the new object can be allocated without increasing the dimensions of the composite frame with all previous objects. The larger the resulting composite frame, the smaller its objects become after the frame is resized to feed the neural network. Consequently, the more objects allocated in a composite frame, the higher the amount of computation saved (i.e., fewer inferences per camera), but also the higher the impact on the accuracy of the model. Unfortunately, there is no known mechanism to determine beforehand where the limit is or even where the sweet spot is.
FoMO implements two policies that limit the number of objects allocated in a single composite frame, albeit the exact number differs from composition to composition. The first policy, namely the \textit{downscale limit policy}, limits the downscaling factor to which objects will be subjected after the composite frame is resized to feed the neural network's input. This is equivalent to setting an upper limit on the dimensions of the resulting composite frame. This policy prioritizes the quality of the detection results over the system's performance. The second policy, namely the \textit{elastic policy}, does not set a hard limit but a limit on the number of camera frames to consider on each composition. That is, the dimensions of the composite frame can be arbitrarily large, but the objects allocated belong to, at most, \textit{n} camera frames, where \textit{n} is user-defined. Effectively, this policy prioritizes the system's performance over the quality of the detection results.
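The two policies can be contrasted with a short sketch; the parameter names, default values, and the naive single-row layout used to estimate the composite size are illustrative assumptions, not details from the paper.

```python
def select_for_composition(pool, policy, net_input=320,
                           max_downscale=3.0, max_frames=2):
    # Hedged sketch of the two composition policies.
    # pool: first-come, first-served list of (frame_id, width, height) crops,
    # laid out here in a single naive row to estimate the composite size.
    selected, used_w, used_h, frames = [], 0, 0, set()
    for frame_id, w, h in pool:
        new_w, new_h = used_w + w, max(used_h, h)
        if policy == "downscale" and max(new_w, new_h) / net_input > max_downscale:
            break  # objects would shrink too much once resized to the input
        if policy == "elastic" and len(frames | {frame_id}) > max_frames:
            break  # cap the number of camera frames, not the dimensions
        selected.append((frame_id, w, h))
        used_w, used_h = new_w, new_h
        frames.add(frame_id)
    return selected
```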
\subsection{Allocation Heuristic}
Composing objects into a composite frame can be seen as allocating 2D images into a larger 2D canvas. For the sake of simplicity, we consider objects to be 2-D during allocation, as the third dimension is always of size 3 in RGB images. The allocation can be mapped to the \textit{bin packing problem}. Bin packing is an optimization problem that tries to pack (allocate) items (objects) of different volumes into bins (composite frames) of a fixed volume while minimizing the number of bins used; here, this is equivalent to minimizing the blank space in the composite frame. This problem is, unfortunately, known to be NP-hard. Therefore, FoMO approximates a solution using a \textit{first-fit} heuristic, where objects are first sorted in decreasing order of width (an arbitrary choice). For this task, a sub-optimal solution results in a composite frame with more blank spaces than needed. Blank spaces reduce the density of meaningful pixels (i.e., pixels that are part of an object of interest) and, ultimately, reduce the number of objects that a single composite frame can fit.
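A shelf-based layout is one simple instantiation of the first-fit heuristic described above; the paper does not fix the exact variant, so the following sketch should be read as illustrative.

```python
def first_fit_compose(objects, frame_width):
    # First-fit sketch: sort crops by decreasing width, then place each on
    # the first shelf (row) with enough remaining width and height, opening
    # a new shelf below otherwise. Objects taller than a shelf open a new
    # shelf rather than growing an existing one, keeping rows disjoint.
    shelves = []     # each shelf: [x_cursor, y_top, shelf_height]
    placements = []  # (width, height, x, y), in sorted order
    for w, h in sorted(objects, key=lambda o: -o[0]):
        for shelf in shelves:
            if shelf[0] + w <= frame_width and h <= shelf[2]:
                placements.append((w, h, shelf[0], shelf[1]))
                shelf[0] += w
                break
        else:  # no shelf fits: open a new one below the last
            y = shelves[-1][1] + shelves[-1][2] if shelves else 0
            shelves.append([w, y, h])
            placements.append((w, h, 0, y))
    height = shelves[-1][1] + shelves[-1][2] if shelves else 0
    return placements, height
```

A tighter packing (fewer blank pixels) lowers the returned \texttt{height}, and with it the downscaling suffered when the composite is resized to the network input.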
\section{Related Work}
\label{sec:related}
Given the amount of data and compute required to train neural networks, the training step has garnered most of the attention from both academia and industry. However, as neural networks start being widely deployed for production use cases, optimizing inference is gaining interest from the industry. Most previous work has focused on optimizing either the neural network architecture itself (using techniques to prune layers \cite{kang2017noscope} or reduce weight precision through quantization) or the application's pipeline (using cascade classifiers \cite{kang2017noscope,hsieh2018focus} or reducing the number of frames to process~\cite{canel2019scaling}). In the context of video analytics systems, there are two main directions: reducing the amount of work \cite{canel2019scaling} and optimizing the inference pipeline \cite{kang2017noscope}.
Regarding what impacts the quality of the predictions of a neural network, the work in~\cite{cai2018cascade} argues that, in general, a detector can only provide high quality predictions if presented with high quality [region] proposals. Similarly, authors in~\cite{eggert2017closer} demonstrate how deep neural networks have difficulties to accurately detect smaller objects by design.
Regarding the usage of traditional computer vision techniques, the authors in~\cite{bouwmans2017scene} provide a taxonomy and evaluation (in terms of accuracy and performance) of scene background initialization algorithms, which helped us decide which methods were the most sensible to implement in FoMO.
Previous methods have focused on optimizing the different transformations applied to the input image. However, all these methods still devote a significant amount of computation to decide whether a certain region contains true positives or not (region proposal) when, in practice, most will not. Our method aims to reduce the amount of work wasted on this purpose. Moreover, this allows us to stack regions of different cameras to reduce the total computation of the system and increase the overall system throughput. At the same time, our results have shown that by increasing the quality of proposals, accuracy increases considerably as a \textit{side effect}.
To the best of our knowledge, no previous work proposes and evaluates a method combining traditional computer vision techniques to quantize and distribute portions of scene, consolidate multiple scenes into one, and process them with a single inference.
\section{Results}
\label{sec:results}
The following experiments evaluate the potential of our method by analyzing upper- and lower-bound scenarios. To that end, unless otherwise stated, we replicate the same stream whenever more than one stream is used. By replicating the stream, we avoid the hard-to-quantify variability introduced by different load distributions across streams. Consequently, if an object is captured at the \textit{i-th} frame of a stream, the same object will be captured in the same \textit{i-th} frame of all other streams. However, each frame's decoding and preprocessing (i.e., background subtraction, motion detection, and cropping) is computed for each stream individually to obtain an accurate and realistic performance evaluation.
\subsection{Accuracy Boost vs Resource Savings}
\label{sec:results-map}
\begin{figure}[h!]
\centering
\includegraphics[width=\linewidth]{images/ap_wrt_frames.png}
\caption{Mean Average Precision (mAP) by scene for three different DNNs with an increasing number of frames composed into a single inference. The reduction in the number of inferences is equivalent to the number of frames composed.}
\label{fig:overview_ap}
\end{figure}
Figure~\ref{fig:overview_ap} shows the mAP averaged over all scenes of the dataset for the three pre-trained models. Each model is evaluated using the full frame as input compared to using a composed frame with objects from 1, 2, 4, and 8 parallel streams. The objects in each frame are extracted based on the ground truth annotations from the curated dataset, i.e., without background subtraction. Therefore, these results represent an upper-bound precision for FoMO. It is important to understand why the mean average precision appears especially low for the baseline method (\textit{Full Frame}) with the two smaller models. As mentioned in Section~\ref{sec:setup} (and, more specifically, as shown in Figure~\ref{fig:area_objects}), moving objects occupy, on average, a tiny fraction of the frame's area. Hence, once the frame is downscaled to the neural network's input size, objects become too small to be detected accurately. This is why the bigger model, whose input is twice as large in each dimension (640x640 pixels compared to 320x320 pixels), achieves around a 50\% improvement in accuracy over the other two.
From the mAP results of the models using FoMO, we extract two key observations. On the one hand, results show that a larger model does not necessarily translate into a higher mAP when inferencing over composite frames, as happens with the baseline. The larger model is up to 10\% below the smaller models when the composite frame contains objects from only one camera frame. When the number of objects to compose is small (as is expected when only a single camera frame is considered), the composition may result in a composite frame smaller than the model's input size. Consequently, the frame must be enlarged to the dimensions expected by the neural network. These results thus indicate that accuracy is also impacted negatively when objects are zoomed in. On the other hand, as already discussed, accuracy takes a hit when more frames (and hence more objects) are considered during the composition, as objects become smaller. However, results show that a larger input size seems to mitigate this impact. The largest model still achieves 10\% higher accuracy composing frames from eight cameras compared to the full-frame inference using the same model.
\subsection{Impact of BGS Techniques}
\label{sec:bgs_impact}
There are multiple available techniques to compute background subtraction to detect and extract moving objects in a scene. Each technique provides a different trade-off in terms of latency (or required resources) and quality of the solution. The quality of these techniques, however, can be measured through different metrics. We are interested in detecting moving objects but not \textit{everything} that is moving (e.g., leaves or camera jitter). Some methods are more sensitive to minor variations than others, and the level of sensitivity will impact what is considered actual movement and what is just background noise. Therefore, a higher sensitivity potentially results in a higher rate of false positives (i.e., lower precision) and a lower rate of false negatives (i.e., higher recall), as more objects will get extracted.
Figure~\ref{fig:bgs_map} shows the recall, precision, and average precision of the three object detection models using the four BGS methods considered plus the baseline (\textit{Full Frame}, i.e., no object extraction). On the one hand, results show how FoMO performs compared to the baseline method. The upper bound for FoMO can be defined by the ground-truth annotations (\textit{GT Objects}), as no object is missed during the extraction. GT Objects consistently outperforms the baseline by a considerable margin on every metric and model, except in one case. For example, GT Objects achieves twice the AP of the baseline for the two smaller models, although this margin shrinks to 30\% when the larger model processes the objects. However, the baseline surpasses FoMO's upper-bound recall by 10\% when using the largest model. This seems to be related to what the results in Section~\ref{sec:results-map} showed when the composition considers objects from only one frame: the resulting composite frame is smaller than the neural network's input, and objects must be enlarged. Nonetheless, the baseline's precision, half of FoMO's, offsets this advantage.
On the other hand, results show what can be expected from using a more or less accurate BGS method. MOG and Hybrid are close on all metrics. Again, precision is where FoMO seems to provide the largest benefits, especially for small input sizes. Both MOG and Hybrid outperform the baseline's precision by a factor of 2 using the smaller models and by close to 50\% for the larger model. However, all BGS methods seem to miss objects during extraction, and recall takes a hit. Nevertheless, the AP of both methods still outperforms the baseline by 60\% and 30\% when using the two smaller models, while the baseline outperforms the other two by 12\% in AP using the larger model. Finally, PtP Mean lags far behind in recall, which hurts its average precision. Nevertheless, results show that FoMO consistently increases precision regardless of the BGS method, while the accuracy of the selected method mainly impacts its recall.
\begin{figure}[h!]
\centering
\includegraphics[width=\linewidth]{images/bgs_map.png}
\caption{Recall, Precision, and Average Precision of three SSD MobileNet V2 pre-trained models after using the different methods to create the input for the model. \textit{Full Frame} uses the frame as captured by the camera; GT Objects: objects are cropped from the ground truth annotations; MOG: Mixture of Gaussians; PtP Mean: Pixel-to-Pixel Mean; Hybrid: MOG for background modeling and then pixel-to-pixel difference with respect the current frame.}
\label{fig:bgs_map}
\end{figure}
\subsection{Performance and Resource Usage}
FoMO's main goal is to improve the overall system performance. This is achieved by reducing the amount of computation required to process a single frame. After decoding, each frame undergoes three consecutive steps: 1. object extraction, 2. frame composition, 3. inference. We now evaluate the first two, as the third is determined by the neural network used.
Table~\ref{tab:bgs_latency} shows the average latency to extract objects of interest for each of the three methods considered in the experiments. These results were obtained using a single core on an Intel Xeon 4114. Pixel-to-Pixel mean is, on average, eight times slower than MOG2 and almost six times slower than the Hybrid method. The table does not show timings for the executions that use ground truth annotations as those are synthetic executions that do not require any background subtraction to get the objects' bounding boxes.
\begin{table}[h]
\centering
\begin{tabular}{cc}
\textbf{Method} & \textbf{Latency (ms)} \\ \cline{1-2}
PtP Mean & 22.0 \\
MOG2 & 2.7 \\
Hybrid & 3.8
\end{tabular}
\caption{Average latency (in milliseconds) to extract objects using the three methods considered in the experiments.}
\label{tab:bgs_latency}
\end{table}
The next step is the composition of objects into the composite frame. Potentially, the composition is performed once for multiple scenes. Figure~\ref{fig:composition_latency} shows the time required to compose a frame with respect to the number of objects to compose and the width of the resulting composite frames (and height, as these are always 1:1). Results show how the latency increases with the number of objects to compose. The number of camera frames considered within a composition interval indirectly impacts the latency: the more frames are considered, the more objects we can expect to be available for composition (although this is not always the case). The increasing dimensions of the resulting composite frames highlight this relationship, as the frame is enlarged to try to fit a larger number of objects. Nonetheless, we can expect the detection models to yield worse accuracy when processing the larger composite frames, as objects will appear smaller once shrunk to the model's input size. Therefore, data points with composition latencies in the hundreds of milliseconds are too large to be used reliably unless the detection model has an appropriately sized input layer. In that case, the inference cost will also be higher, and, therefore, the cost of composing many objects can be hidden.
\begin{figure}[h!]
\centering
\includegraphics[width=\linewidth]{images/composition_latency.png}
\caption{Time to compose a frame, i.e., latency, in milliseconds (Y-axis) with respect to the number of objects to compose (X-axis). The color palette shows the width of the resulting composite frame. Each dot is a composition with objects from up to eight parallel streams.}
\label{fig:composition_latency}
\end{figure}
\section{Experimental Setup}
\label{sec:setup}
In this section, we provide the specific experimental setup used throughout the evaluation presented in Section~\ref{sec:results}.
\subsection{Dataset}
\label{sec:dataset}
For the evaluation, we have used the VIRAT dataset~\cite{oh2011large}. VIRAT contains footage captured with static cameras from 11 different outdoor scenes.
\subsubsection{Dataset Curation}
VIRAT contains frame-by-frame annotations for all objects in the scene, with a bounding box and label (classes person, car, vehicle, bike, and object). However, annotations include both static and moving objects. We evaluate FoMO by its capability to detect moving objects. Therefore, we have curated the dataset to remove all annotations of static objects and avoid these objects artificially lowering the final accuracy. We consider an object static if its coordinates in a given frame do not change with respect to 10 frames prior (10 has been arbitrarily chosen and corresponds to the frame skipping we use during the evaluation). Nonetheless, cameras often suffer from slight jitter, mainly due to wind. Whenever that happens, the bounding boxes of a given object in consecutive frames may not match perfectly, even if the object has not moved. To avoid such false positives, we still consider an object static when its coordinates remained unchanged for at least 90\% of the frames.
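The curation rule above can be sketched as follows; the representation of an object track as a mapping from frame index to bounding box is our assumption for illustration:

```python
def is_static(track, skip=10, threshold=0.9):
    """Sketch of the dataset curation rule: an object is considered static
    if its bounding box equals the one `skip` frames earlier in at least
    `threshold` of the comparable frames (tolerating occasional jitter).

    track: dict mapping frame index -> (x, y, w, h) bounding box.
    """
    comparisons = [
        track[i] == track[i - skip]
        for i in track
        if i - skip in track
    ]
    if not comparisons:
        return False
    return sum(comparisons) / len(comparisons) >= threshold
```

Annotations of tracks flagged static by such a rule would then be dropped before computing accuracy.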
\begin{figure}[h!]
\centering
\includegraphics[width=\linewidth]{images/objects_area_per_scene.png}
\caption{Percentage of the scene area occupied by objects in each scene, according to ground-truth annotations.}
\label{fig:area_objects}
\end{figure}
Figure~\ref{fig:area_objects} shows the percentage of area occupied by objects in each scene of the VIRAT dataset. Results show that scenes are mostly \textit{empty} and objects represent as little as 2\% of the scene and no more than a quarter of the scene. Moreover, some scenes appear to be \textit{quiet}, i.e., only a tiny fraction of objects in the scene are moving. When considering moving objects only, the average area taken by these objects fluctuates between 1\% and 8\% of the scenes. These results highlight the potential benefits that can be achieved by removing all the empty regions.
Moreover, we have made the following considerations during the evaluation:
\begin{itemize}
\item Only sequences with at least 1000 frames have been considered.
\item The first 250 frames (approx. 10 seconds) are discarded to give the background modeling of some methods a time window to initialize.
\item A frame skipping of 10 frames is applied.
\end{itemize}
\subsection{Object Detection Models}
All experiments have been carried out using pre-trained object detection models that are publicly and freely available in the TensorFlow Model Zoo~\cite{tfmodelzoo}. We chose pre-trained models instead of training or fine-tuning on our data to remove training, albeit an important factor, as a variable in the evaluation.
The following are the three DNN models used throughout the evaluation:
\begin{itemize}
\item SSD + MobileNet V2 with an input layer of 320x320x3 pixels.
\item Same as previous with an additional FPN (Feature Pyramid Network) that improves the quality of predictions~\cite{lin2017feature}.
\item Same as previous with an input layer of 640x640x3 pixels to improve the quality of predictions.
\end{itemize}
\subsection{Background Subtraction Mechanisms}
Background subtraction (BGS) algorithms, albeit not directly part of our contributions, are key to extracting the set of objects that will constitute the system's workload. At the same time, they have an undeniable impact on the quality of the region proposal. An accurate BGS algorithm will extract all or most objects of interest and nothing else (i.e., discard all regions that do not contain objects of interest). However, to have a complete overview of how FoMO performs, we must break the accuracy of these methods down into \textit{precision} and \textit{recall}, as each one will have a different impact on the overall performance.
\textit{Precision} is defined as the ratio of \textit{True Positives} (TP) with respect to the sum of TP and \textit{False Positives} (FP), i.e., what proportion of predictions are correct. In the context of \textit{object extraction}, we can see precision at the pixel level and define it as the proportion of extracted area (i.e., number of pixels) that does belong to objects of interest with respect to the total extracted area (including that without objects of interest).
\textit{Recall} is defined as the ratio of TP with respect to the total number of positive examples (TP + FN), i.e., what proportion of positive examples are correctly detected. In the context of \textit{object extraction}, we define recall as the proportion of extracted area that belongs to objects of interest with respect to the total area of the objects of interest in the scene (including the area of the objects not extracted).
Thus, low precision BGS mechanisms will cause \textit{uninteresting} regions to occupy space in the composed frame, potentially shrinking other regions or leaving them out of the composition (hence, delaying their processing). Low recall mechanisms will cause FoMO to directly miss predictions as entire objects were not extracted and will, therefore, not make it into the neural network's input. On the contrary, a high precision and high recall BGS mechanism will maximize FoMO's efficiency, while accuracy will be made to only depend on the neural network model chosen.
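The pixel-level definitions above can be sketched directly over boolean masks (plain Python over nested lists; names are illustrative):

```python
def pixel_precision_recall(extracted, ground_truth):
    """Pixel-level precision and recall as defined in the text.

    extracted: mask of pixels the BGS method extracted (nested lists of 0/1).
    ground_truth: mask of pixels belonging to objects of interest.
    precision = object pixels among extracted / all extracted pixels
    recall    = object pixels among extracted / all object pixels
    """
    tp = fp = fn = 0
    for row_e, row_g in zip(extracted, ground_truth):
        for e, g in zip(row_e, row_g):
            if e and g:
                tp += 1
            elif e and not g:
                fp += 1
            elif g and not e:
                fn += 1
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall
```

A low-precision mask wastes canvas space on background; a low-recall mask drops object pixels that can never be detected downstream.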
The experiments aim to quantify the impact that different methods for BGS, with more or less accuracy on the extraction of objects, have on the system's resource usage and the quality of the detections.
The configuration of the three BGS methods is, for all experiments, as follows:
\begin{itemize}
\item PtP Mean: The moving average is computed over the last 20 frames with a frame skipping of 10 (i.e., spanning over 200 consecutive frames in total).
\item MOG: Computed every frame. Implementation from OpenCV.
\item Hybrid: MOG background model is updated once every 50 frames (2 seconds). The difference with the current frame to extract objects is computed with respect to the latest background model available.
\end{itemize}
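As a concrete illustration of the simplest of the three, the PtP Mean step can be sketched as follows; the grayscale nested-list frame representation and the threshold value are our assumptions, as the text does not specify them:

```python
def ptp_mean_mask(history, current, threshold=30):
    """Sketch of Pixel-to-Pixel Mean background subtraction: the background
    is the per-pixel mean of the retained history frames, and pixels whose
    absolute difference from that mean exceeds `threshold` are foreground.

    history: list of grayscale frames (nested lists of intensities).
    current: the grayscale frame to segment.
    """
    n = len(history)
    mask = []
    for i, row in enumerate(current):
        mask_row = []
        for j, pix in enumerate(row):
            background = sum(frame[i][j] for frame in history) / n
            mask_row.append(abs(pix - background) > threshold)
        mask.append(mask_row)
    return mask
```

Its per-pixel sum over the whole history is what makes PtP Mean markedly slower than the incremental MOG-based alternatives reported in Table~\ref{tab:bgs_latency}.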
\subsection{Metrics of Interest}
In video analytics, the two main metrics of interest are inference accuracy and cost. For the cost, we use latency as a proxy. For the inference accuracy, we use the average precision as used in the PASCAL Visual Object Classes (VOC) Challenge~\cite{everingham2010pascal}. To consider a detection as either positive or negative, we use a threshold of 0.3 for the Intersection over Union (IoU) between the predicted and the ground-truth bounding boxes. The IoU is a metric that measures the overlap between two bounding boxes.
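The IoU criterion can be computed as in the following sketch; representing boxes as \texttt{(x, y, w, h)} tuples is our assumption:

```python
def iou(box_a, box_b):
    """Intersection over Union between two axis-aligned boxes (x, y, w, h).
    A detection counts as positive when IoU with a ground-truth box
    exceeds the chosen threshold (0.3 in the text)."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    # Overlap along each axis (zero if the boxes are disjoint).
    ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union else 0.0
```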
\section{Introduction}
\noindent As a decentralized and distributed public ledger, blockchain technology~\cite{2017The} has enjoyed great success in various fields, \emph{e.g.}, finance, technology, and culture~\cite{2017The}. Cryptocurrency~\cite{wood2014ethereum,yuan2018blockchain} is undoubtedly one of the most profound applications of blockchain. As the largest blockchain platform supporting smart contracts, Ethereum now holds cryptocurrencies worth more than \$39.3 billion. Unfortunately, the decentralization of blockchain also breeds numerous financial scams~\cite{vasek2015there, chen2018detecting,vasek2018analyzing,huang2020understanding}. Chainalysis\footnote{A provider of investigation and risk management software for virtual currencies} has reported that phishing scams, which accounted for 38.7\% of all Ethereum scams~\cite{EtherScamDB}, stole \$34 million from the Ethereum platform in 2018. Phishing refers to impersonating the website of an honest firm to obtain users' sensitive information and money through the fraudulent site. Phishing scams are now reported every year and are becoming ever more sophisticated.
As a result, detecting phishing addresses on the blockchain has attracted widespread attention. Fundamentally, phishing detection aims to learn a mapping function from an address's historical transaction behavior to a binary output $y$, where $y=0$ denotes that the target address is normal and $y=1$ that it is abnormal. A family of works~\cite{li2020identifying,lin2019evaluation} utilized manually extracted features to capture the target user's transaction behaviors. Unfortunately, these models require deep domain knowledge of the data and often yield unsatisfactory results.
Recent approaches explored different sorts of graph embedding algorithms to address this issue. \emph{Walking-based detectors}~\cite{perozzi2014deepwalk,grover2016node2vec,wu2020phishers,lin2020t,yuan2020phishing} adopt random walks to characterize the temporal evolution between transactions. \emph{Subgraph-based detectors}~\cite{shen2021identity,wang2021tsgn,yuan2020phishing,zhang2021mcgc} usually describe the target address's transaction pattern through a static subgraph. Specifically, they first construct a static subgraph from the transactions of the target address and its neighbors across all periods, then build upon the success of graph neural networks to learn the spatial graph structure from these subgraphs. All in all, this task presents several challenges.
\textbf{Balance between structural and temporal information.} Earlier works tend to focus on either structural or temporal information, which leads to considerable information loss. This motivates us to consider whether we can combine and balance the two to reach better detection performance. Due to the incompleteness of the structural information caused by the limited sampling sequence length, it is difficult for the walking strategy to achieve such a balance. We speculate that one viable approach is to construct multiple transaction subgraphs for a target address, where each subgraph characterizes the transaction topology within a temporal period. We term these transaction subgraphs \emph{dynamic subgraphs}. Taking Ethereum~\cite{wood2014ethereum} users as an example, we apply the static subgraph construction method of MCGC~\cite{zhang2021mcgc} and extend it to dynamic subgraph construction.
\textbf{Robustness against hidden phishing addresses.} Research~\cite{zugner2018adversarial,dai2018adversarial,chang2020restricted} on the vulnerability of graph analysis methods also reveals potential security issues in blockchain phishing detection. Intuitively, phishers may bypass detection by transacting with specific addresses. To verify the robustness of existing phishing detectors, we randomly add transactions between the first- and second-order neighbor addresses of 200 verified Ethereum phishing addresses.
To address these challenges, our approach converts phishing detection into a dynamic graph classification problem. We construct a series of transaction evolution graphs (TEGs) over multiple time slices, which have the key advantage of retaining both spatial structural and temporal information. To effectively utilize the abundant information contained in TEGs, we also propose TEGDetector, which serves to capture target addresses' behavior features. TEGDetector is composed of graph convolutional layers and a GRU, which respectively capture the topological structure and the dynamic evolution characteristics of the network. Specifically, we introduce adaptive time coefficients to comprehensively balance the user's behavior features across all periods, rather than using only those of the most recent period. This helps explore the crucial factors of phishing detection and helps TEGDetector identify possible malicious deception. Our contributions may be summarized as follows:
\begin{itemize}
\item To the best of our knowledge, this is the first work that defines the phishing detection task as a dynamic graph classification problem. The proposed method balances the structural and temporal information through the constructed TEGs, and provides TEGDetector to map these information into user behavior features.
\item A fast non-parametric phishing detector (FD) is presented, which can quickly narrow down the search space of suspicious addresses and improve the detection performance and efficiency of the phishing detector.
\item Experiments conducted on the Ethereum dataset demonstrate that TEGDetector can achieve state-of-the-art detection performance. Interestingly, phishing deception experiments caused the existing methods to undergo an accuracy decline of 25-50\%, while TEGDetector achieves more robust phishing detection with only a decline of 13\%.
\end{itemize}
The rest of the paper is organized as follows. Related works are introduced in Section \uppercase\expandafter{\romannumeral2}, while the proposed method is detailed in Section \uppercase\expandafter{\romannumeral3}. Experiment results and discussion are shown in Section \uppercase\expandafter{\romannumeral4}. Finally, we conclude our work.
\section{Related Work}
In this section, we briefly review the existing works on phishing detection and graph classification.
\subsection{Phishing Detector}
To provide early warnings to potential victims, various phishing detectors are proposed to identify phishing addresses.
\textbf{Feature engineering based phishing detectors}~\cite{lin2019evaluation, li2020identifying} usually manually extract basic and additional transaction statistical features from preprocessed transactions, then use them to train a classifier. To realize automatic extraction of phishing behavior features, \textbf{walking based phishing detectors} learn the user's transaction behavior features in an unsupervised manner. Wu et al.~\cite{wu2020phishers} performed a biased walk guided by transaction amount and timestamp, then obtained the address sequence to extract the user's behavior features. Lin et al.~\cite{lin2020t} further defined the temporal weighted multidigraph (TWMDG), which ensures the walking sequences capture the actual currency flow. The \textbf{subgraph based phishing detectors} pay more attention to spatial structure information. Yuan et al.~\cite{yuan2020phishing} designed second-order subgraphs to represent the target address, modeling the phishing detection task as a graph classification problem. Wang et al.~\cite{wang2021tsgn} mapped the original transaction subgraphs to more complex edge subgraphs. Shen et al.~\cite{shen2021identity} and Zhang et al.~\cite{zhang2021mcgc} introduced graph neural networks to realize blockchain phishing detection in an end-to-end manner.
In general, the existing blockchain phishing detectors will sacrifice some structural or temporal information when capturing users' behavior features. Moreover, the robustness of phishing detectors lacks research.
\subsection{Graph Classification}
For a blockchain transaction platform, the transactions of the target address and its neighbors are usually sufficient to reflect its transaction pattern. Intuitively, it is possible to convert the phishing detection task into a graph classification problem.
There are two general approaches to graph classification. The first~\cite{borgwardt2005shortest,shervashidze2011weisfeiler, vishwanathan2010graph,yanardag2015deep} assumes that molecules with similar structures share similar functions, and converts the core problem of graph learning to measure the similarity of different graphs. The second~\cite{DBLP:conf/nips/DefferrardBV16,DBLP:conf/aaai/ZhangCNC18,ying2018hierarchical,lee2019self} introduces various pooling operations to aggregate the node level representations into the graph level, which performs better on complex graphs.
It is worth noting that although numerous dynamic graph mining methods~\cite{DBLP:journals/corr/ChungGCB14, goyal2018graph,li2018deep,chen2019lstm,chen2019generative} have been studied, most graph classifiers are designed for static graphs. Due to the dynamic evolving pattern of user behaviors, a dynamic graph classifier will be beneficial to phishing detection.
\begin{figure*}[htb]\setlength{\abovecaptionskip}{-0.3cm}
\centering
\includegraphics[width=0.85\linewidth]{flow-1-xhy.pdf}\\
\caption{A high-level overview of our pipeline. (a) Preprocessing data from Ethereum. (b) Construction of TEGs for each address. (c) Phishing detection via TEGDetector.}
\label{fig.flow}
\end{figure*}
\section{Methodology}
TEGDetector seeks to identify phishing addresses by extracting their evolving behavior cues from the transaction graphs. An overview of the proposed method is outlined in Figure~\ref{fig.flow}. In the following,
we present the details of each component.
\subsection{Data Preprocessing }\label{sec3.1}
The phishing detection problem on the blockchain is a typical supervised learning problem, which requires labeled user addresses to train TEGDetector. Here, we obtained an Ethereum address list from the blockchain academic research data platform Xblock\footnote{http://xblock.pro/}.
We extract transaction sending/receiving addresses, transaction amount, timestamps, and address labels as the crucial information for constructing TEGs. The sending addresses and the receiving ones correspond to the nodes on graph, and the transaction amount and timestamps represent the edge weight and temporal information between the node pairs, respectively. Moreover, we construct the address label-based attribute $X\in \mathbb R^{N\times 2}$ for $N$ addresses. $X_{i,1}=1$ if the address $v_i$ is a phishing address, and $X_{i,0}=1$ otherwise.
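The label-based attribute matrix $X$ described above can be constructed as in this minimal sketch (plain Python over nested lists; names are ours):

```python
def build_attributes(addresses, phishing_set):
    """Sketch of the N x 2 label-based attribute matrix X from the text:
    row i is [0, 1] if address i is a known phishing address
    (X_{i,1} = 1), and [1, 0] otherwise (X_{i,0} = 1)."""
    X = []
    for addr in addresses:
        if addr in phishing_set:
            X.append([0, 1])
        else:
            X.append([1, 0])
    return X
```

This matrix then serves as the initial node features $h_0 = X$ fed to the first EF-Extractor layer.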
\subsection{TEGDetector}
\begin{figure*}[htbp]\setlength{\belowcaptionskip}{-0.5cm}\vspace{-2em}\setlength{\abovecaptionskip}{0.1cm}
\centering
\includegraphics[width=0.8\linewidth]{3-1.pdf}\\
\caption{ The framework of TEGDetector. (a) EF-Extractor learns the evolution features of different TEG slices. (b) The evolution graphs pooling alternates with EF-Extractor to gradually aggregate the node-level evolution features into the graph-level. (c) The Read-out operation and Multilayer Perceptron (MLP) output the detection results. }
\label{fig4}
\end{figure*}
In this section, we design a phishing detector, termed TEGDetector, for fully extracting the structural and temporal information from TEGs. As shown in Figure~\ref{fig4}, TEGDetector is designed in an end-to-end manner, including evolution feature extraction (EF-Extractor), evolution graphs pooling, and behavior recognition.
The EF-Extractor integrates structural and temporal information to extract the addresses' evolution features.
Through the alternation with EF-Extractor, the evolution graph pooling aggregates the evolution features of similar addresses until obtaining the TEG's graph-level features. The behavior recognition assigns time coefficients to these graph-level features, and comprehensively considers the target address's transaction behaviors in different time slices, which also enhances the robustness of TEGDetector.
\textbf{EF-Extractor.} We introduce EF-Extractor to learn user addresses' transaction evolution features at different time slices. Since graph convolutional layers have proven their powerful ability to capture the structural features of graphs~\cite{DBLP:conf/iclr/KipfW17,20K-Core}, EF-Extractor employs graph convolutional layers to learn the structural features of the current TEG slice. Meanwhile, we borrow the idea of the GRU~\cite{DBLP:conf/emnlp/ChoMGBBSB14} to capture the temporal information of the TEGs. Another reason for choosing the GRU is that it has fewer model parameters and runs faster than long short-term memory~\cite{97LSTM}.
Specifically, EF-Extractor utilizes a two-layer GCN~\cite{DBLP:conf/iclr/KipfW17} module to map the structural information to a $d$-dimensional node representation $Z$. As the structural features of the $t$-th slice of TEGs, $Z_t$ can be defined as:
\setlength{\parskip}{-0.4\baselineskip}
\begin{equation}
\setlength{\abovedisplayskip}{0.2pt}
\label{eq2}
Z_t = GCN(h_{t-1},A_t)=f(\hat A_t \sigma(\hat A_t h_{t-1} W_0)W_1)
\end{equation}
where $\hat A_t = \tilde D_t ^{-\frac{1}{2}} \tilde A_t \tilde D_t ^{-\frac{1}{2}}$, $ A_t \in \mathbb R^{N \times N}$ is the adjacency matrix of the $t$-th slice of TEGs, and $\tilde{A_t}=A_t+I_{N(t)}$ is the adjacency matrix with self-connections. $\tilde{D}_{t(ii)}=\sum_{j} \tilde{A}_{t(ij)}$ denotes the degree matrix of $\tilde{A_t}$. $h_{t-1}$ denotes the evolution features of the $(t-1)$-th slice, which will be described in detail later. $W_0 \in \mathbb R^{N \times H}$ and $W_1 \in \mathbb R^{H \times d}$ denote the weight matrices of the hidden layer and the output layer, respectively. $\sigma$ is the ReLU activation function, and the input $h_0= X$.
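To make the shapes concrete, the two-layer propagation of Eq.~\ref{eq2} can be sketched in NumPy. This is a minimal illustration, not the released implementation: we take the outer map $f$ to be the identity, and we let $W_0$ map the input feature dimension (rather than $N$) to the hidden size, which we assume is the intended reading of the shapes.

```python
import numpy as np

def normalize_adj(A):
    # \hat A = D^{-1/2} (A + I) D^{-1/2}, with self-connections added
    A_tilde = A + np.eye(A.shape[0])
    d = A_tilde.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_tilde @ D_inv_sqrt

def gcn_slice(h_prev, A, W0, W1):
    # Two-layer GCN: Z_t = \hat A_t ReLU(\hat A_t h_{t-1} W0) W1
    A_hat = normalize_adj(A)
    hidden = np.maximum(A_hat @ h_prev @ W0, 0.0)  # ReLU hidden layer
    return A_hat @ hidden @ W1

# Toy slice: 3 addresses, 2 input features, hidden size 4, output dim d = 2
rng = np.random.default_rng(0)
A = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
X = rng.normal(size=(3, 2))           # h_0 = X
W0, W1 = rng.normal(size=(2, 4)), rng.normal(size=(4, 2))
Z1 = gcn_slice(X, A, W0, W1)          # structural features of the first slice
print(Z1.shape)  # (3, 2)
```

Applying this slice by slice, with the previous evolution features $h_{t-1}$ in place of $X$, yields the structural features $Z_t$ consumed by the gates.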
\setlength{\parskip}{0\baselineskip}
For the evolution process of the structural features, EF-Extractor first calculates the update gate $z_t$ and the reset gate $r_t$ according to the current structural features $Z_t$ and the previous evolution features $h_{t-1}$, which can be expressed as:
\begin{equation}
\label{eq3}
z_t = \sigma(Z_tW_z+h_{t-1}U_z)
\end{equation}
\setlength{\parskip}{-0.6\baselineskip}
\begin{equation}
\label{eq4}
r_t = \sigma(Z_tW_r+h_{t-1}U_r)
\end{equation}
where $W_z,W_r \! \in \! \mathbb R^{N \times d}$ and $U_z,U_r \!\in\! \mathbb R^{d \times d}$ are the weight matrices of the update and reset gates, respectively. The update gate decides how much of $h_{t-1}$ is passed to the future, and the reset gate determines how much of $h_{t-1}$ needs to be forgotten.
\setlength{\parskip}{0\baselineskip}
The next step is to calculate the candidate hidden state $\tilde h_t$ via the reset gate. Here, the EF-Extractor stores the historical evolution features $h_{t-1}$ and memorizes the current state:
\begin{equation}
\label{eq5}
\tilde h_t = \tanh(Z_t W+(r_t \odot h_{t-1})U)
\end{equation}
where $W\in \mathbb R^{N \times d}$ and $U\in \mathbb R^{d \times d}$ are the weight matrices used to calculate $\tilde h_t$, and $\odot$ denotes the Hadamard product.
\setlength{\parskip}{0\baselineskip}
Finally, EF-Extractor updates the current evolution features $h_t$ according to $h_{t-1}$ and $\tilde h_t$:
\begin{equation}
\label{eq6}
h_t = (1-z_t) \odot h_{t-1} + z_t \odot \tilde h_t
\end{equation}
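The gate computations of Eqs.~\ref{eq3}--\ref{eq6} amount to a standard GRU cell applied to the structural features. A minimal NumPy sketch follows; for the matrix products to be well defined we assume all gate weights are $d\times d$, which is an assumption of this illustration rather than the paper's stated shapes.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def ef_extractor_step(Z_t, h_prev, Wz, Uz, Wr, Ur, W, U):
    # Eqs. (3)-(4): update gate (keep history) and reset gate (forget history)
    z_t = sigmoid(Z_t @ Wz + h_prev @ Uz)
    r_t = sigmoid(Z_t @ Wr + h_prev @ Ur)
    # Eq. (5): candidate state mixing current features with reset history
    h_tilde = np.tanh(Z_t @ W + (r_t * h_prev) @ U)
    # Eq. (6): interpolate between previous features and the candidate
    return (1.0 - z_t) * h_prev + z_t * h_tilde

rng = np.random.default_rng(1)
N, d = 3, 2
Z_t = rng.normal(size=(N, d))       # structural features from the GCN
h_prev = rng.normal(size=(N, d))    # evolution features of the previous slice
mats = [rng.normal(size=(d, d)) for _ in range(6)]
h_t = ef_extractor_step(Z_t, h_prev, *mats)
print(h_t.shape)  # (3, 2)
```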
\setlength{\parskip}{0\baselineskip}
\textbf{Evolution graphs pooling.} Intuitively, addresses with similar evolution features can be grouped into the same address cluster. This motivates us to merge similar addresses until all addresses' evolution features are aggregated into the TEG's graph-level behavior features. The key idea of the evolution graphs pooling is to learn a cluster assignment matrix with GNNs and assign similar addresses to new address clusters. For the evolution features $h_t$ in the $t$-th slice ($t\in \{1,...,T\}$), the evolution graphs pooling first calculates the current cluster assignment matrix $C_t$:
\begin{equation}
\label{eq7}
C_t=softmax \left ( GNN_{pool}(A_t,h_t) \right )
\end{equation}
where $GNN_{pool}$ can be any GNN; here we choose a GCN module with the same structure as the EF-Extractor. $C_t\in \mathbb R^{N\times \tilde N}$ means that $N$ addresses are assigned to $\tilde N$ new address clusters, where $\tilde N = N\cdot r$ and $r$ is the assignment ratio.
According to the adjacency matrices $\{A_1,...,A_T\}$ of the current TEG, the address evolution features $\{h_1,...,h_T\}$, and the assignment matrices $\{C_1,\cdots,C_T\}$, the process of the evolution graphs pooling for the $t$-th slice can be formulated as:
\setlength{\parskip}{-0.8\baselineskip}
\setlength{\abovedisplayskip}{-1.5pt}
\begin{equation}
\label{eq8}
h_{t}^{pool}=C_t^T h_t \in \mathbb{R}^{\tilde N \times d}
\end{equation}
\setlength{\parskip}{-1\baselineskip}
\begin{equation}
\label{eq9}
A_{t}^{pool}=C_t^T A_t C_t \in \mathbb{R}^{\tilde N \times \tilde N}
\end{equation}
where $d$ is the dimension of the node representation $Z$. Eq.~\ref{eq8} and Eq.~\ref{eq9} generate the evolution features $\{h_1^{pool},...,h_T^{pool}\}$ and the adjacency matrices $\{A_1^{pool},...,A_T^{pool}\}$ for the $\tilde N$ address clusters, respectively. They are then input to the next EF-Extractor to capture the evolution features of the new address clusters.
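Eqs.~\ref{eq7}--\ref{eq9} together form one pooling step. The following NumPy sketch illustrates it, with random assignment scores standing in for the output of $GNN_{pool}$; the sizes and assignment ratio are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def evolution_pool(A_t, h_t, scores):
    # 'scores' plays the role of GNN_pool(A_t, h_t); a row-wise softmax
    # gives the soft cluster assignment C_t of Eq. (7)
    C_t = softmax(scores, axis=1)
    h_pool = C_t.T @ h_t          # Eq. (8): pooled cluster features
    A_pool = C_t.T @ A_t @ C_t    # Eq. (9): pooled cluster adjacency
    return C_t, h_pool, A_pool

rng = np.random.default_rng(2)
N, d, N_new = 6, 2, 3             # assignment ratio r = 0.5
A_t = rng.integers(0, 2, size=(N, N)).astype(float)
A_t = np.triu(A_t, 1); A_t = A_t + A_t.T   # symmetric toy adjacency
h_t = rng.normal(size=(N, d))
scores = rng.normal(size=(N, N_new))
C_t, h_pool, A_pool = evolution_pool(A_t, h_t, scores)
print(h_pool.shape, A_pool.shape)  # (3, 2) (3, 3)
```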
\setlength{\parskip}{0\baselineskip}
\textbf{Behavior recognition.} In some cases, phishers initiating malicious transactions in a specific evolution period may seriously affect the subsequent transaction evolution features. Therefore, we comprehensively consider the transaction behavior features in all slices rather than using only the most recent time slice, which alleviates the negative impact of malicious transactions.
After extracting the evolution features $\{h_1^{pool},...,h_T^{pool}\}$ for the $T$ time slices ($\tilde N =1$, $h_t^{pool} \in \mathbb R^{1 \times d}$, $t\in \{1,...,T\}$), the \emph{Read-out} operation assigns time coefficients $\alpha=[\alpha_1,...,\alpha_T]$ to the different evolution features, aggregating the $T$ evolution features into the unique evolution feature $h_i$ of the target address $v_i$:
\setlength{\parskip}{-0.8\baselineskip}\vspace{-0.2em}
\setlength{\abovedisplayskip}{-2pt}
\begin{equation}
\label{eq10}
h_i = \sum_{t=1}^T \alpha_t h_t^{pool}
\end{equation}
\setlength{\parskip}{-0.5\baselineskip}
\noindent where $h_t^{pool}$ denotes $v_i$'s evolution features of $t$-th slice.
\setlength{\parskip}{0\baselineskip}
Finally, we take $h_i$ as the input of the \emph{MLP} layer with a softmax classifier. Moreover, we use the cross-entropy function $\mathcal{L} $ to train TEGDetector, which is given by:
\setlength{\parskip}{-0.4\baselineskip}
\begin{equation}
\label{eq11}
\hat{Y}= softmax(MLP(h_i))
\end{equation}
\setlength{\parskip}{-1.1\baselineskip}
\begin{equation}\label{eq12}
\mathcal{L} = -\sum_{\mathcal G_i \in \mathbb G_{set}} \sum_{j=1}^{|Y|}Q_{ij}\ln{\hat{Y}_{ij}\left(A^i,X^i\right)}
\end{equation}
where $\mathcal G_i \in \mathbb G_{set}$ denotes the target address $v_i$'s TEG in the training set $\mathbb G_{set}$. $Y=\{y_1,...,y_n\}$ is the category set of the TEGs. $Q_{ij}=1$ if $\mathcal G_i$ belongs to category $y_j$ and $Q_{ij}=0$ otherwise. $\hat{Y}_{ij}$ denotes the predicted probability of $\mathcal G_i$, which is calculated by Eq.~\ref{eq11} and can be considered as a function of $A^i$ and $X^i$; thus we denote it as $\hat{Y}_{ij}(A^i,X^i)$.
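The read-out and classification stage of Eqs.~\ref{eq10}--\ref{eq12} can be sketched as follows; the time coefficients, MLP weights, and two-class setting (phishing vs.\ normal) are illustrative assumptions.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def read_out(h_pools, alpha):
    # Eq. (10): time-weighted aggregation over the T slice-level features
    return sum(a * h for a, h in zip(alpha, h_pools))

def cross_entropy(y_hat, q):
    # Eq. (12) for a single TEG: q is the one-hot category indicator
    return -np.sum(q * np.log(y_hat))

rng = np.random.default_rng(3)
T, d = 10, 4
h_pools = [rng.normal(size=d) for _ in range(T)]   # \tilde N = 1 per slice
alpha = softmax(rng.normal(size=T))                # time coefficients
h_i = read_out(h_pools, alpha)

W_mlp = rng.normal(size=(d, 2))                    # toy single-layer "MLP"
y_hat = softmax(h_i @ W_mlp)                       # Eq. (11)
loss = cross_entropy(y_hat, np.array([1.0, 0.0]))
print(y_hat.sum(), loss > 0)
```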
\setlength{\parskip}{0\baselineskip}
\section{Experiments}
In this section, we comprehensively evaluate the proposed TEGDetector, including its phishing detection performance, detection efficiency, and robustness.
\begin{table}[htbp]\setlength{\abovecaptionskip}{0.05cm}\setlength{\belowcaptionskip}{-0.2cm}\setlength{\abovecaptionskip}{0.1cm}\vspace{0.2cm}
\centering
\caption{Dataset statistics}
\resizebox{80mm}{9.6mm}{
\begin{tabular}{c|ccc}
\hline \hline
TEG properties & \# Addresses & \# Transactions & Average degree \\ \hline
Sum &790,849 &3,383,022 &- \\
Average &395.42 &1,691.51 &4.86 \\
Maximum &4,934 &110,060 &7.09 \\
Minimum &2 &1 &1.00 \\ \hline \hline
\end{tabular} \label{data} }
\end{table}
\vspace{-0.4cm}
\begin{table*}[htbp]\setlength{\abovecaptionskip}{0.1cm}\setlength{\belowcaptionskip}{0.3cm}
\centering
\caption{The detection performance of different phishing detectors. We use bold to highlight wins.}
{
\begin{tabular}{cccccccccc}
\hline \hline
\multirow{3}{*}{Compared methods} & \multicolumn{9}{c}{Training ratio} \\ \cline{2-10}
& \multicolumn{3}{c|}{60\%} & \multicolumn{3}{c|}{70\%} & \multicolumn{3}{c}{80\%} \\ \cline{2-10}
& Precision & Recall & \multicolumn{1}{c|}{F-score} & Precision & Recall & \multicolumn{1}{c|}{F-score} & Precision & Recall & F-score \\ \hline
Density detector &50.41 &\textbf{99.30 } & \multicolumn{1}{c|}{66.87} &50.41 &\textbf{99.30 } & \multicolumn{1}{c|}{66.87} &50.41 &\textbf{99.30 } & 66.87 \\
Repeat detector &46.94 &51.78 & \multicolumn{1}{c|}{49.25} &48.16 &62.35 & \multicolumn{1}{c|}{54.35} & 48.53 &69.62 &57.19 \\
Deepwalk & 76.85 & 68.85 & \multicolumn{1}{c|}{72.63} & 77.90 & 71.20 & \multicolumn{1}{c|}{74.40} & 78.15 & 72.70 & 75.33 \\
Node2vec & 81.65 & 70.35 & \multicolumn{1}{c|}{75.58} & 82.30 & 72.20 & \multicolumn{1}{c|}{76.92} & 82.65 & 74.85 & 78.56 \\
Trans2vec & 78.65 & 77.90 & \multicolumn{1}{c|}{78.27} & 88.65 & 86.55 & \multicolumn{1}{c|}{87.59} & 91.45 & 87.65 & 89.51 \\
T-EDGE & 79.05 & 76.20 & \multicolumn{1}{c|}{77.60} & 87.45 & 76.65 & \multicolumn{1}{c|}{81.69} & 88.75 & 78.55 & 83.34 \\
I$^2$BGNN & 88.65 & 91.45 & \multicolumn{1}{c|}{90.03} & 89.20 & 91.55 & \multicolumn{1}{c|}{90.36} & 89.20 & 92.05 & 90.60 \\
MCGC & 90.50 & 91.55 & \multicolumn{1}{c|}{91.02} & 90.55 & 92.10 & \multicolumn{1}{c|}{91.32} & 90.75 & 92.85 & 91.79 \\
TEGDetector (Ours) & \textbf{95.90} & 95.60 & \multicolumn{1}{c|}{\textbf{95.75}} & \textbf{96.55} & 96.75 & \multicolumn{1}{c|}{\textbf{96.65}} & \textbf{96.30} & 96.25 & \textbf{96.28} \\ \hline \hline
\end{tabular}\label{tab:1}}
\end{table*}
\subsection{Datasets}\label{sec4.2}
We evaluate TEGDetector on the real-world Ethereum transaction dataset released on the Xblock platform. Xblock provides 1,660 reported phishing addresses and 1,700 randomly selected normal ones, together with the records of their second-order transactions. Specifically, we randomly selected 1,000 phishing addresses and the same number of normal addresses and constructed TEGs with 10 time slices ($T=10$) for them. To make a comprehensive evaluation, we divide the TEGs into two parts: \{60\%, 70\%, 80\%\} as the training set and the remaining \{40\%, 30\%, 20\%\} as the test set. The basic statistics are summarized in Table~\ref{data}. In the experiments, we repeated the above steps five times and report the average phishing detection performance.
\subsection{Compared Methods}
To better evaluate the detection performance of TEGDetector, we choose several phishing detectors as the compared methods.
For all compared methods, we select the same target addresses as TEGDetector, and conduct experiments based on the source code released by the authors and their suggested parameter settings. The compared methods are briefly described as follows:
\textbf{Density detector} calculates the density ratio of TEG slices containing transactions to all slices. When the density ratio is greater than $0.5$, the target address will be classified as a phishing address.
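As a baseline, the density detector reduces to a one-line thresholding rule. A hypothetical sketch, with each slice represented as a list of transactions, is:

```python
def density_detector(teg_slices, threshold=0.5):
    # Density ratio: fraction of slices containing at least one transaction;
    # above the threshold, the target is classified as phishing
    non_empty = sum(1 for s in teg_slices if len(s) > 0)
    return (non_empty / len(teg_slices)) > threshold

# Slices given as lists of (sender, receiver) transactions
slices = [[("a", "b")], [], [("a", "c")], [("b", "a")], []]
print(density_detector(slices))  # True: 3/5 = 0.6 > 0.5
```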
\textbf{Repeat detector} calculates the repetition ratio of test transactions with the same direction as the training ones to all test transactions. According to the conclusion of Lin et al.~\cite{lin2021evolution}, we classify addresses with a repetition ratio greater than $0.1$ as phishing addresses. Note that in this case, we divide the slices in each TEG into training slices and test ones according to different division ratios.
\textbf{Deepwalk}~\cite{perozzi2014deepwalk} and \textbf{Node2vec}~\cite{grover2016node2vec} learn node representations through random walking and can be used in blockchain transaction networks.
\textbf{Trans2vec}~\cite{wu2020phishers} and \textbf{T-EDGE}~\cite{lin2020t} consider the transaction amount and timestamps of blockchain transactions on the basis of random walking, thus achieving better phishing detection performance.
\textbf{I$^2$BGNN}~\cite{shen2021identity} and \textbf{MCGC}~\cite{zhang2021mcgc} are graph classifiers designed for phishing detection on the blockchain. They achieve satisfactory detection performance and are easy to implement for phishing detection on new addresses.
\setlength{\parskip}{0\baselineskip}
\subsection{Performance of TEGDetector }
In this section, we discuss the phishing detection performance of TEGDetector, and analyze its detection efficiency.
\textbf{Phishing detection performance.}
Compared with the other detectors in Table~\ref{tab:1}, TEGDetector achieves state-of-the-art (SOTA) performance at different training set ratios. Specifically, although the density detector achieves a recall of 99.30\%, its precision of 50.41\% indicates that detecting phishing addresses based on the density ratio alone is ineffective. Compared with the density detector and the repeat detector, the other compared methods achieve better detection performance through automated feature learning. Unfortunately, the lack of structural or temporal information restricts them from achieving more accurate phishing detection.
In contrast to Trans2vec and T-EDGE, which focus on temporal information, and I$^2$BGNN and MCGC, which focus on structural information, TEGDetector achieves the best detection performance. This suggests that balancing structural and temporal information can more accurately capture the target address's transaction behaviors.
\textbf{Detection efficiency.}
We further study the detection efficiency of TEGDetector. Since TEGs are essentially a series of dynamic subgraphs, we compare TEGDetector with the GNN-based phishing detectors that take static subgraphs as input.
Figure~\ref{fig6}(a) and (b) show the training time and detection time of the different detectors. We select all training addresses for model training and record the detection time of 100 randomly selected addresses in the detection phase. We observe that as $max\_links$ increases, the training time of TEGDetector becomes longer than that of I$^2$BGNN. Moreover, TEGDetector's detection time still reaches 6.5 times that of I$^2$BGNN, although the gap is greatly reduced. Considering TEGDetector's excellent detection performance, we believe that such a price is acceptable.
\begin{figure}[htb]\setlength{\belowcaptionskip}{-0.8cm} \setlength{\abovecaptionskip}{0.1cm}\vspace{-0.1cm}
\centering
\includegraphics[width=1\linewidth]{5-1.pdf}\\
\caption{Detection efficiency of different detectors. }\label{fig6}
\end{figure}
\subsection{A Fast and Non-parametric Phishing Detection}\label{sec4.6}
The previous section confirms the SOTA phishing detection performance of TEGDetector. However, its high time complexity remains a challenge. To address this problem, we propose to quickly filter out obviously normal addresses while ensuring that real phishing ones are not missed, which narrows down the search space of suspicious addresses.
Phishers usually send phishing messages to massive numbers of users, allowing them to have more potential transaction partners. We believe that they may have more intensive large-amount transactions than normal addresses. To verify this conjecture, we define the central transaction ratio (CTR), the ratio of the central address's transactions to all transactions in a TEG (or a static subgraph).
Inspired by this observation, we propose a fast and non-parametric detector (FD). Specifically, we classify a target address whose CTR is greater than a threshold as a phishing address; otherwise, it is regarded as a normal address. In Figure~\ref{fig8}(a), when the CTR threshold is set to 0.6, FD almost reaches 100\% recall, and its precision almost reaches its highest value of 60\%. This indicates that FD can filter out normal addresses almost without missing any phishing addresses, and can thus serve as a pre-detection step before using TEGDetector for precise phishing detection. Meanwhile, Figure~\ref{fig8}(b) shows that TEGDetector's precision improves more than its recall, since FD pre-filters some normal addresses that TEGDetector might misclassify. Additionally, we observe that FD also reduces the detection time of TEGDetector by approximately 15\%.
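A minimal sketch of the CTR computation and the FD rule, using a hypothetical list of (sender, receiver) pairs with central address \texttt{p}, is:

```python
def central_transaction_ratio(transactions, center):
    # CTR: fraction of all transactions in the TEG that involve
    # the central (target) address
    central = sum(1 for s, r in transactions if center in (s, r))
    return central / len(transactions)

def fast_detector(transactions, center, threshold=0.6):
    # FD: flag the target as phishing when its CTR exceeds the threshold
    return central_transaction_ratio(transactions, center) > threshold

txs = [("p", "a"), ("p", "b"), ("c", "p"), ("p", "d"), ("a", "b")]
print(central_transaction_ratio(txs, "p"))  # 0.8
print(fast_detector(txs, "p"))              # True
```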
Consequently, FD is a lightweight solution for phishing detection when both performance and efficiency are considered.
\begin{figure}[htb]\setlength{\belowcaptionskip}{-0.2cm} \setlength{\abovecaptionskip}{0.1cm}\vspace{0cm}
\centering
\includegraphics[width=1\linewidth]{8.pdf}\\
\caption{ (a) FD's detection performance under different CTR thresholds. (b) FD can improve the detection performance of TEGDetector and reduce its detection time. }\label{fig8}
\end{figure}
\subsection{Ablation Study of TEGDetector }
To further explore the effectiveness of TEGDetector, we conduct ablation experiments on the pooling layer and the time coefficients. For the pooling layer, we apply average pooling and maximum pooling to the feature matrix of the graph, yielding the variants TEGD-ave and TEGD-max, respectively. For the time coefficients, we replace the time-coefficient weighting in TEGDetector with a summation operation to obtain the variant TEGDetector\_S.
\begin{table}[!htb]
\caption{Ablation study of TEGDetector.}
\label{tab:abl}
\setlength{\tabcolsep}{2mm}
\centering
\resizebox{0.5\textwidth}{!}{
\begin{tabular}{ccccc}
\hline \hline
Ablation Module &Method & Precision(\%) & Recall(\%) & F-score(\%) \\ \hline
\multirow{2}{*}{Pool-method}
&TEGD-ave &95.48 &95.53 &95.50 \\
&TEGD-max &95.24 &95.25 &95.25 \\ \hline
Time coefficient &TEGDetector\_S &94.51 &94.56 &94.50 \\ \hline
Proposed &TEGDetector &96.55 &96.75 &96.65 \\ \hline \hline
\end{tabular}}
\end{table}
As illustrated in Table \ref{tab:abl}, TEGDetector outperforms the two variants with average and maximum pooling; e.g., the precision of TEGDetector is 96.55\%, while that of TEGD-max is 95.24\%. This indicates that pooling with the cluster assignment matrix extracts graph-level features more effectively than average and maximum pooling. We also observe that TEGDetector\_S is almost 2\% lower than TEGDetector in precision, recall, and F-score, which demonstrates that the time coefficients improve model performance by weighting different moments differently.
\subsection{Robustness of TEGDetector }
Now we evaluate the robustness of the detectors when phishers maliciously conceal their phishing behaviors. From the perspectives of network topology properties and detector gradients, we design two methods to add disturbances to the transaction networks, i.e., the CTR-based and gradient-based methods.
\begin{figure}[htbp]\setlength{\belowcaptionskip}{-0.5cm}\setlength{\abovecaptionskip}{0.4cm}
\centering
\includegraphics[width=1\linewidth]{9.pdf}\\
\caption{ (a) An example of phishing deception. (b) TEGDetector is more robust than other methods when facing possible phishing deception. }\label{fig9}
\end{figure}
According to our discussion in the previous section, the phishing addresses' CTRs are more likely to be larger than 0.9. Therefore, we design a phishing deception experiment in which we change the phishing address's CTR. As shown in Figure~\ref{fig9}(a), we randomly add transactions to non-central address pairs in TEGs until their CTRs fall below the set value. Specifically, we randomly select $T/2$ time slices in the TEGs and add malicious transactions to them. The amount of each added transaction is set to a random value smaller than the maximum transaction amount in the original TEGs.
In the phishing deception experiment, the smaller the target CTR, the more malicious transactions need to be added. In Figure~\ref{fig9}(b), the detection accuracy of Trans2vec and T-EDGE is more stable than that of the other existing methods. We speculate that although the randomness of the walking strategies leads to a loss of structural information, it also reduces the impact of malicious transactions. In contrast, since the static subgraphs constructed by I$^2$BGNN and MCGC retain all the malicious information, their detection accuracy drops drastically. This further demonstrates that, in addition to limiting phishing detection performance, the robustness of static subgraphs lacking temporal information is also worrisome. Reassuringly, TEGDetector still achieves an accuracy of 83\% even in the worst case (when the target CTR is set to $0.2$), indicating that it is necessary to expand the static subgraphs into dynamic ones. Compared with existing methods, TEGDetector can fully balance the target address's transaction behaviors over all periods, which enables it to capture more comprehensive behavior features. It is worth noting that TEGDetector is more robust than TEGDetector$\_$S: the latter undergoes an accuracy decline of 21.25\%, while the former declines by only 13.5\%. This testifies that TEGDetector is significantly more robust when phishers cannot add malicious transactions in every period.
\begin{figure}[htb]\setlength{\belowcaptionskip}{-0.2cm} \setlength{\abovecaptionskip}{0.1cm}\vspace{0cm}
\centering
\includegraphics[width=1\linewidth]{attack_robu.pdf}\\
\caption{(a), (b), (c) and (d) respectively represent the performance of TEGDetector, TEGDetector\_S, MCGC, I$^2$BGNN on the perturbed transaction networks. }\label{attack_robu}
\end{figure}
In addition to the malicious transactions generated from the network topology properties, we also design malicious transactions generated from the feedback of the detectors' gradients. Since T-EDGE and Trans2vec are random-walk-based methods, we do not perform gradient attacks on them and compare only the remaining detectors. As shown in Figure~\ref{attack_robu}, we rank the entries of the transaction network's adjacency matrix by the gradient of the objective loss function in descending order, and add transactions at the highest-gradient positions where no transaction exists. The number of added malicious transactions equals the link modification rate multiplied by the maximum number of nodes in the transaction network.
As shown in Figure~\ref{attack_robu}(a), TEGDetector exceeds 85\% in precision, recall, and F1-score even when the link modification rate is 0.5, which is better than the other three detection methods; e.g., the precision of TEGDetector\_S, the recall of MCGC, and the F1-score of I$^2$BGNN are only 86.09\%, 79.33\%, and 72.50\%, respectively, in the worst case. In terms of the overall decline in precision, the maximum rate of decrease for TEGDetector is only 9.02\% and for TEGDetector\_S is 8.91\%, while those of MCGC and I$^2$BGNN are 10.08\% and 11.41\%, respectively. This indicates that the TEGs can disperse disturbances across multiple timestamps, mitigating the impact of malicious transactions and thereby enhancing the robustness of the detector.
\section{Conclusions}
In this paper, we first defined the transaction evolution graphs (TEGs), which can frame both structural and temporal behavior cues. Then, we proposed TEGDetector, a dynamic graph classifier suitable for identifying the target address's transaction behavior from TEGs. Experimental results demonstrate the SOTA detection performance of TEGDetector. Moreover, we gained the insight that, in the TEGs, large-amount transactions tend to be more concentrated on phishing addresses. Inspired by this, a fast phishing detector (FD) is designed, which can quickly narrow down the search space of suspicious addresses and improve the detection efficiency of the phishing detector. In the phishing deception experiments, TEGDetector shows significantly higher robustness than the other phishing detectors.
However, TEGDetector's time complexity is much higher than that of other phishing detectors, since it considers both structural and temporal information. For future work, we plan to explore a lower-complexity phishing detector. In addition, improving the robustness of phishing detectors against more targeted phishing deception methods deserves further research.
\bibliographystyle{IEEEtran}
\section{Introduction}
Over the last few decades, control of the discretisation error generated by the numerical approximation of partial differential equations (PDEs) has witnessed significant advances due to contributions in \textit{a posteriori} error analysis and the use of adaptive mesh refinement techniques. Such algorithms aim to save computational resources by refining only a certain subset of elements, making up part of the underlying mesh, that contribute most to the error in some sense. In particular, we refer to the early works \cite{AINSWORTH19971, ivo2001finite, bangerth2003adaptive}, and the references cited therein.
Typically, in applications we are not concerned with pointwise accuracy of the numerical solution of PDEs themselves, but rather quantities involving the solution (which we will refer to as being goal quantities, or quantities of interest); in this setting goal-oriented techniques are employed to bound the error in the given quantity of interest. Work in this area was first pioneered by \cite{Becker96afeed-back, becker_rannacher_2001} and \cite{giles_suli_2002}, which established the general framework \cite{ODEN2001735, PRUDHOMME1999313} of the dual, or adjoint, weighted-residual method (DWR). When the quantity of interest is represented by a nonlinear functional, a linearisation about the numerical solution is employed in order for the problem to become tractable and computable; hence, the nonlinear functional must be differentiated. Solving a discrete version of this linearised adjoint problem allows for an estimate of the discretisation error induced by the quantity of interest, which may be localised further to drive adaptive refinement algorithms. Unweighted, residual-based estimates can be derived based on employing certain stability estimates \cite{eriksson_estep_hansbo_johnson_1995}, but this results in meshes independent of the choice of quantity of interest. The DWR approach has been applied to a vast number of different applications including the Poisson problem \cite{Becker96afeed-back}, nonlinear hyperbolic conservation laws \cite{hartmann_houston}, fluid-structure interaction problems \cite{VANDERZEE20112738}, application to Boltzmann-type equations \cite{hoitinga_van_brum}, as well as criticality problems in neutron transport applications \cite{NeutronTransport}.
In this paper, our motivation is in the post-closure safety assessment of facilities intended for use as deep geological storage of high-level radioactive waste \cite{cliffe_collis_houston, NDA, NIREX, MCKEOWN1999231}. Here, we are solely interested in the time-of-flight for a non-sorbing solute (which has leaked from the repository) to make its way to the surface, or boundary, of the domain; this time is represented by the (nonlinear) travel time functional. Previously, work undertaken in \cite{cliffe_collis_houston} employed goal-oriented \textit{a posteriori} error estimation for this functional, relying on a finite-difference approximation of its Gâteaux derivative.
The work presented in this article derives an exact expression for the Gâteaux derivative of the travel time functional, based on employing a backwards-in-time initial value problem (IVP) considered adjoint to the trajectory of the leaked solute. Such a linearisation allows for an easy implementation of the adjoint problem required for the goal-oriented error estimation of the travel time functional. In comparison with the previous approximate linearisation, in the case of a lowest-order approximation of the driving velocity field, there is now no need for time-stepping techniques to evaluate the derivative of the travel time functional, which are often slow and computationally expensive.
Before we proceed, we first introduce the travel time functional for generic velocity fields; in addition a preliminary version of the main result of this work is presented: the Gâteaux derivative of the travel time functional for continuous velocity fields. Next we briefly discuss some of the literature relating to Darcy's equations as a model for groundwater flow, other potential models that could be used for more realistic simulations, and the \textit{a posteriori} error analysis that has been developed within these areas. Finally, we outline the contents of the rest of this article.
\subsection{The Travel Time Functional}\label{prelims}
Within this section, we define the travel time functional for generic velocity fields and address briefly the difficulties involved with its linearisation. To this end, consider an open and bounded Lipschitz domain $\Omega\subset\mathbb{R}^d$, $d = 2,3$, with polygonal boundary $\partial\Omega = \Gamma$, and the semi-infinite time interval $\mathcal{I} = [0, \infty)$. Let us suppose we have a generic velocity field $\mathbf{u} = \mathbf{u}(\mathbf{x}, t) : \overline{\Omega}\times \mathcal{I}\rightarrow\mathbb{R}^d$. For a user-defined initial position $\mathbf{x}_0\in\Omega$, the particle trajectory $\mathbf{X} \equiv \mathbf{X_u}$, due to $\mathbf{u}$, is given by the solution of the following IVP:
\begin{align*}
\displaystyle\frac{d\mathbf{X}}{dt}(t) & = \mathbf{u}(\mathbf{X}(t), t) \;\;\;\; \forall t\in \mathcal{I},\\
\mathbf{X}(0) & = \mathbf{x}_0.
\end{align*}
The so-called travel time of the velocity field $\mathbf{u}$ is defined to be the time-of-flight of the particle trajectory $\mathbf{X_u}$ from its initial position $\mathbf{x}_0$ to, if ever, its first exit point out of the domain $\Omega$. Thereby, the functional $T(\mathbf{u}; \mathbf{x}_0)$ is defined by
\begin{equation}\label{TT}
T(\mathbf{u}; \mathbf{x}_0) = \inf\{t\in \mathcal{I} : \mathbf{X_u}(t)\not\in\Omega\}.
\end{equation}
Alternatively, we can write this in the equivalent form:
\begin{equation*}
T(\mathbf{u}; \mathbf{x}_0) = \int_{P(\mathbf{u}; \mathbf{x}_0)}\frac{dt}{\Vert\mathbf{u}\Vert_2},
\end{equation*}
where $\Vert\cdot\Vert_2$ denotes the standard Euclidean $2-$norm and $P(\mathbf{u}; \mathbf{x}_0)$ is the curve traced by the particle trajectory from its initial position to the first boundary contact:
\begin{equation*}
P(\mathbf{u}; \mathbf{x}_0) = \{\mathbf{X_u}(t)\in\overline{\Omega} : t\in [0, T(\mathbf{u}; \mathbf{x}_0)]\}.
\end{equation*}
The integral version of the functional clearly highlights the difficulty of demonstrating its differentiability. Indeed, the nonlinearity occurs within the integrand, and the curve over which the integral is taken itself depends on the velocity field. The travel time functional is clearly not globally continuous and therefore not globally Fréchet differentiable. We shall see, however, that it is possible to evaluate its Gâteaux derivative (Theorem \ref{mainresult}). The regularity of the functional itself will not be addressed within this work.
Additionally, evaluating the travel time functional itself involves the computation of the velocity streamlines, or particle trajectories $\mathbf{X_u}(t)$. Within this work, we follow the techniques outlined in \cite{KAASSCHIETER1995277} for streamline computation; furthermore, a streamfunction
approach can indeed be employed when the considered fluid flow approximations are divergence-free
\cite{MATRINGE2006992}, and it is even possible for high-order velocity approximations, when also divergence-free, to
have accurate streamline tracing \cite{juanes_matringe}.
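As a crude stand-in for the streamline-tracing techniques of \cite{KAASSCHIETER1995277}, the travel time can be approximated by explicit particle tracking. The following sketch uses forward Euler on the unit square with a uniform rightward flow; the domain, step size, and velocity field are illustrative assumptions.

```python
import numpy as np

def travel_time(u, x0, dt=1e-4, t_max=100.0):
    # Forward-Euler streamline tracking until the trajectory first
    # leaves the unit square (or the safety cap t_max is hit)
    inside = lambda x: bool(np.all((x > 0.0) & (x < 1.0)))
    x, t = np.array(x0, float), 0.0
    while inside(x) and t < t_max:
        x = x + dt * u(x, t)
        t += dt
    return t

# Uniform rightward flow: the exact exit time from (0.25, 0.5) is 0.75
u = lambda x, t: np.array([1.0, 0.0])
T = travel_time(u, (0.25, 0.5))
print(round(T, 3))  # ~0.75
```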
\subsection{Linearisation in the Continuous Case}
A preliminary result for the linearisation of the travel time functional involves assuming that the velocity field $\mathbf{u}$ satisfying the underlying flow problem is continuous on $\Omega$. When this is the case, then the Gâteaux derivative of the travel time functional may be evaluated and computed as an integral, in time, weighted by a variable $\mathbf{Z}$ which may be considered as being \textit{adjoint} to the particle trajectory $\mathbf{X_u}$. The below theorem presents such a preliminary version of the main result of this paper. Here, for a sufficiently smooth functional ${\cal Q}:V\rightarrow {\mathbb R}$, we use the notation ${\cal Q}'[w](\cdot)$ to denote the G\^{a}teaux derivative of ${\cal Q}(\cdot)$ evaluated at some $w$ in $V$, where $V$ is some suitably chosen function space.
\begin{theorem}\label{preMainResult}
Suppose that the velocity field $\mathbf{u}(\mathbf{x}, t)$ is continuous on $\Omega$. Let $\mathbf{n} = \mathbf{n}(\mathbf{x})$ be the unit outward normal vector to $\Gamma$. Assume $\Gamma$ is locally flat at the exit point $\mathbf{X}_{\mathbf{u}}(T(\mathbf{u}; \mathbf{x}_0))$, and that the particle trajectory does not exit the domain parallel to the boundary. Let $\mathbf{Z}$ solve the IVP:
\begin{align*}
-\frac{d\mathbf{Z}}{dt}(t) - [\nabla\mathbf{u}(\mathbf{X}(t), t)]^\top\mathbf{Z}(t) & = \mathbf{0}\;\;\;\;\forall t\in[0, T(\mathbf{u}; \mathbf{x}_0)),\\
\mathbf{Z}(T(\mathbf{u}; \mathbf{x}_0)) & = -\frac{\mathbf{n}}{\mathbf{u}(\mathbf{X}(T(\mathbf{u}; \mathbf{x}_0)), T(\mathbf{u}; \mathbf{x}_0))\cdot\mathbf{n}}.
\end{align*}
Then, the G\^{a}teaux derivative of the travel time functional may be evaluated as
\begin{equation*}
T'[\mathbf{u}](\mathbf{v}) = \int_0^{T(\mathbf{u}; \mathbf{x}_0)}\mathbf{Z}(t)\cdot\mathbf{v}(\mathbf{X}(t), t)\,dt.
\end{equation*}
\end{theorem}
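To make Theorem \ref{preMainResult} concrete, the following Python sketch (our own illustration, using a hypothetical smooth velocity field and perturbation that do not appear elsewhere in the paper) traces a pathline with the forward Euler method, integrates the adjoint IVP backward in time along it, and compares the resulting value of $T'[\mathbf{u}](\mathbf{v})$ against a finite difference of the travel time:

```python
import numpy as np

# Hypothetical smooth velocity field on the unit square (our own illustrative
# choice): u(x, y) = (1 + 0.5 y, 0), with Jacobian grad(u) = [[0, 0.5], [0, 0]].
def u(x):      return np.array([1.0 + 0.5 * x[1], 0.0])
def grad_u(x): return np.array([[0.0, 0.5], [0.0, 0.0]])
def v(x):      return np.array([0.0, 1.0])   # perturbation direction

def trace(vel, x0, dt=1e-4):
    """Forward-Euler pathline from x0 until exit through the face x = 1;
    the exit time is linearly interpolated for accuracy."""
    xs, x, t = [np.array(x0, float)], np.array(x0, float), 0.0
    while True:
        xn = x + dt * vel(x)
        if xn[0] >= 1.0:
            return np.array(xs), t + dt * (1.0 - x[0]) / (xn[0] - x[0])
        xs.append(xn.copy()); x, t = xn, t + dt

dt = 1e-4
xs, T = trace(u, (0.0, 0.5), dt)       # exact travel time here is 0.8

# Adjoint ODE -Z' - (grad u)^T Z = 0, integrated backward from the terminal
# condition Z(T) = -n / (u(X(T)) . n), with n = (1, 0) on the outflow face.
n = np.array([1.0, 0.0])
Z = -n / (u(xs[-1]) @ n)
dT = 0.0                                # accumulates T'[u](v) = int_0^T Z . v(X) dt
for k in range(len(xs) - 1, 0, -1):
    dT += dt * (Z @ v(xs[k]))
    Z = Z + dt * (grad_u(xs[k]).T @ Z)  # backward-Euler step of the adjoint ODE

# Finite-difference check: perturb the field by eps*v and re-trace.
eps = 1e-4
T_eps = trace(lambda x: u(x) + eps * v(x), (0.0, 0.5), dt)[1]
fd = (T_eps - T) / eps
```

For the field chosen here the exact values are $T = 0.8$ and $T'[\mathbf{u}](\mathbf{v}) = -0.128$; both the adjoint-based value \texttt{dT} and the finite difference \texttt{fd} agree with these up to the discretisation error.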
The above result can be used to evaluate the derivative required for the implementation of DWR {\em a posteriori} error estimators, where the velocity field $\mathbf{u}$ is replaced with its discrete approximation $\mathbf{u}_h$. However, such approximations are usually obtained via finite element methods, and the continuity of $\mathbf{u}_h$ at element interfaces is not always guaranteed. In this case, Theorem \ref{preMainResult} must be generalised to allow for such discontinuity; this is addressed in Section \ref{MainResultSec}, where Theorem \ref{mainresult} is derived without such a continuity assumption. Moreover, Theorem \ref{mainresult} presents a more general result from which Theorem \ref{preMainResult} may be easily recovered by setting the resulting jump terms to zero.
\subsection{Related Literature}
Groundwater flow, governed by Darcy's equations, represents a viable simplified model for the fluid flow \cite{MCKEOWN1999231, cliffe_collis_houston} and will be exploited within this paper. It is assumed that, whilst the surrounding rocks may not be saturated while the repository is being built, they will eventually become saturated during its operational lifetime; thus, in a post-closure assessment it is sufficient to consider saturated conditions, and therefore to use the time-independent Darcy's equations as our model, rather than the usual Richards equations for capillary flow \cite[p. 3]{collis}. Of course, within this context and in many others, there are more sophisticated models, cf. \cite{Pal, ZHANG2016396, Popov, Martin_Jaffre_Roberts, BOON_SIAM, BOON_springer, FUMAGALLI2021110205, poly_methods, Milne, Berre} and the references cited therein, where large-scale structures and complex topographical features, such as fracture networks or vugs and caves, are considered as parts of the domain. Solution-based \textit{a posteriori} error estimation for these more sophisticated models may be found in, for example, the articles \cite{CHEN2014502, chen_sal, chen_sun, WILLIAMSON2019266, Hecht, Varela, MGHAZLI2019163} and the references cited therein.
An energy-norm-based approach can also be found in \cite{CAO1999681}, where adaptive mesh refinement is employed to accurately compute streamlines via a streamfunction approach. More generally, goal-oriented error estimation for linear functionals of Darcy's equations can be found in \cite{MOZOLEVSKI2015127}, which employs equilibrated-flux techniques in order to achieve a guaranteed bound. Furthermore, \cite{MALLIK2020112367} extends this work by bounding higher-order terms to demonstrate that the \textit{a posteriori} bounds are asymptotically exact, as well as by taking into account the error induced by inexact solvers.
For a set of slightly different homogenised problems, \cite{Carraro} presents goal-oriented error estimation for general quantities of interest. We also point out the existing literature on goal-adaptivity in the context of contaminant transport, presented in the articles \cite{Bengzon, LARSONgoal}, which differs slightly from the work presented here. For the numerical experiments presented in Section \ref{numexp}, for example, following \cite{MFEM}, we employ a mixed finite element method using the Brezzi--Douglas--Marini (BDM) elements. These elements, introduced originally in \cite{Brezzi}, ensure $H$(div)-conformity in order to retain physical results in the streamline computation: that is, they ensure the continuity of the normal traces of velocity fields across element interfaces.
The original solution-based \textit{a posteriori} error analysis for Darcy's equations, employing Raviart--Thomas elements, was undertaken by Braess and Verfürth in \cite{Braess}; we also refer to \cite{Barrios,BARRIOS2015909}, which consider augmented, stabilised versions of Darcy's equations, whose original $L^2$-bound analysis was given in \cite{Larson}. Moreover, there is a vast literature on the \textit{a posteriori} error analysis for Darcy's equations in a variety of contexts. For example, \cite{Bernadi_Orfi, ORFI20192833} present the analysis for time-dependent Darcy flow; \cite{de_pietro} uses the finite volume method for two-phase Darcy flow; and \cite{BARRIOS_Bustinza} uses an augmented discontinuous Galerkin method. For the (residual) norm-based \textit{a posteriori} error analysis for Darcy's equations, and mixed finite element methods in general, we refer to the articles \cite{Voh1,Voh2} by Vohralík, and the references cited therein. In \cite{Verfurth}, similarly to \cite{Carst}, residual-based \textit{a posteriori} error bounds are derived by considering a Helmholtz decomposition in order to overcome the need for the saturation assumption previously made in \cite{Braess}. Moreover, in \cite{AMANBEK2020112884} an enhanced velocity mixed finite element method is used instead.
Lastly, problems modelled by Darcy's equations often lend themselves to investigation in the realm of uncertainty quantification; more specifically, in practice there is uncertainty regarding the properties of the sub-surface rock making up the domain. While not the focus of this work, we refer to \cite{collis}, and the references cited therein, where substantial work has been undertaken in a random setting.
\subsection{Outline of the Paper}
In Section \ref{modelsec} we introduce Darcy's equations for a simple model of saturated groundwater flow and their classical mixed formulation. Section \ref{MFEMsec} presents the numerical approximation of Darcy's equations via the mixed finite element method. The DWR method is presented in Section \ref{DWRsec}; here, an \textit{a posteriori} error estimate is stated and localised into element-wise indicators. Section~\ref{MainResultSec} contains the main contribution of this paper, presented for piecewise discontinuous velocity fields; the majority of that section is devoted to proving the main linearisation result for the travel time functional, given by Theorem \ref{mainresult}. The application of the linearisation result to groundwater flow and Darcy's equations is addressed in Section \ref{DarcyContextSec}, and Section \ref{ImplementationSec} provides some brief implementation details for the case in which the velocity field under consideration is piecewise linear. Three numerical experiments are conducted in Section \ref{numexp}: two simple, academic-style examples aim to build confidence in the proposed \textit{a posteriori} error estimate, while the third adaptively simulates the leakage of radioactive waste within a domain inspired by the (albeit greatly simplified) Sellafield site, located in Cumbria, UK. This final, physically motivated example matches the experiment conducted in \cite{cliffe_collis_houston}, but employs the new linearisation result instead. Lastly, some concluding remarks are given in Section \ref{ConclusionsSec}.
\section{Darcy Flow, FE Approximation, and \textit{A Posteriori} Error Estimation}
\subsection{The Model for Groundwater Flow}\label{modelsec}
For illustrative purposes, a Darcy flow model is adopted in this paper in order to demonstrate the main Gâteaux derivative result (Theorem \ref{mainresult}) in the context of goal-oriented adaptivity.
To this end, Darcy's equations are given by the following system of first-order PDEs, whereby we seek the \textit{Darcy velocity} $\mathbf{u}$ and \textit{hydraulic head (or pressure)} $p$ such that:
\begin{alignat}{2}
\label{DL}\mathbf{K}^{-1}\mathbf{u} + \nabla p & = \mathbf{0}\;\;\;\; && \forall\mathbf{x}\in\Omega,\\
\label{CM}\nabla\cdot\mathbf{u} & = f\;\;\;\; && \forall\mathbf{x}\in\Omega,\\
\label{Dbc} p & = g_D\;\;\;\; && \forall\mathbf{x}\in\partial\Omega_D,\\
\label{Nbc}\mathbf{u}\cdot\mathbf{n} & = 0\;\;\;\; && \forall\mathbf{x}\in\partial\Omega_N.
\end{alignat}
Here, $\Omega\subset\mathbb{R}^d$, $d=2,3$, is an open and bounded domain with polygonal boundary $\partial\Omega$, partitioned into so-called Dirichlet and Neumann parts ${\partial\Omega} = \overline{\partial\Omega}_D\cup\overline{\partial\Omega}_N$; the unit outward normal vector to the boundary is denoted by $\mathbf{n}$. Furthermore, $f\in L^2(\Omega)$ is a source/sink term and $g_D\in H^{\frac{1}{2}}(\partial\Omega_D)$ is Dirichlet boundary data for the pressure. Such regularity assumptions allow for the existence of a unique weak solution to Darcy's equations, discussed very briefly in Section \ref{formulationsec}. Lastly, the matrix $\mathbf{K}(\mathbf{x})\in\mathbb{R}^{d\times d}$ represents the hydraulic conductivity of the surrounding rock in the groundwater model; it is given by
$
\mathbf{K} := \nicefrac{\rho g}{\mu}\mathbf{k},
$
where $\rho$ is the density of water, $g$ is the acceleration due to gravity, $\mu$ is the viscosity of water, and $\mathbf{k}$ is the permeability of the surrounding rock. It is assumed that the eigenvalues $\lambda_{\pm}$ of $\mathbf{K}$, with $0 < \lambda_- \leq \lambda_+$, satisfy
\begin{equation}\label{conductivityassumption}
\lambda_-\vert\mathbf{y}\vert^2 \leq \mathbf{y}^\top\mathbf{K}\mathbf{y} \leq \lambda_+\vert\mathbf{y}\vert^2 \;\;\;\; \forall\mathbf{x}\in\Omega\;\;\;\;\forall\mathbf{y}\in\mathbb{R}^d.
\end{equation}
In particular, the condition (\ref{conductivityassumption}) implies that $\mathbf{K}$ is invertible.
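As a quick numerical illustration of the assumption (\ref{conductivityassumption}), the following sketch checks the bound for a hypothetical anisotropic conductivity tensor (illustrative values of our own choosing, not taken from the groundwater model):

```python
import numpy as np

# Hypothetical anisotropic conductivity tensor (illustrative values only).
K = np.array([[2.0, 0.5],
              [0.5, 1.0]])

lam_minus, lam_plus = np.linalg.eigvalsh(K)   # eigenvalues, sorted ascending

# Spot-check  lam_- |y|^2 <= y^T K y <= lam_+ |y|^2  on random directions.
rng = np.random.default_rng(0)
for y in rng.standard_normal((100, 2)):
    q = y @ K @ y
    assert lam_minus * (y @ y) - 1e-12 <= q <= lam_plus * (y @ y) + 1e-12

# 0 < lam_- means K is symmetric positive definite, and hence invertible.
```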
\subsubsection{Weak Formulation}\label{formulationsec}
Firstly, we introduce the following function spaces:
\begin{align*}
H(\text{div}, \Omega) & := \{\mathbf{v}\in [L^2(\Omega)]^d : \nabla\cdot\mathbf{v}\in L^2(\Omega)\},\\
H_{0, D}^1(\Omega) & := \{\psi\in H^1(\Omega) : \psi\vert_{\partial\Omega_D} = 0\},\\
H_{0, N}(\text{div}, \Omega) & := \{\mathbf{v}\in H(\text{div}, \Omega) : \langle\mathbf{v}\cdot\mathbf{n}, \psi\rangle_{\partial\Omega} = 0\;\;\forall\psi\in H_{0, D}^1(\Omega)\}.
\end{align*}
The space $H_{0, N}(\text{div}, \Omega)$ is a subspace of $H(\text{div}, \Omega)$ with vanishing normal trace on the Neumann part of the boundary $\partial\Omega_N$. The duality pairing between $H^{-\frac{1}{2}}(\partial\Omega)$ and $H^{\frac{1}{2}}(\partial\Omega)$ is denoted by $\langle\cdot,\cdot\rangle_{\partial\Omega}$ and satisfies the following Green's formula.
\begin{proposition}\label{IBP}
For $\mathbf{v}\in H(\textnormal{div}, \Omega)$,
\begin{equation*}
\langle\mathbf{v}\cdot\mathbf{n}, \psi\rangle_{\partial\Omega} = \int_\Omega \mathbf{v}\cdot\nabla\psi + \int_\Omega \psi\nabla\cdot\mathbf{v}\;\;\;\;\forall\psi\in H^1(\Omega).
\end{equation*}
\end{proposition}
By multiplying (\ref{DL}) by a test function $\mathbf{v}\in H_{0, N}(\text{div}, \Omega)$ and (\ref{CM}) by a test function $q\in L^2(\Omega)$, and applying Proposition \ref{IBP} to the former, we arrive at the saddle-point problem: find $(\mathbf{u}, p)\in \mathbf{H} := H_{0, N}(\text{div}, \Omega)\times L^2(\Omega)$ such that
\begin{alignat}{2}
\label{SP1}& a(\mathbf{u}, \mathbf{v}) + b(\mathbf{v}, p) && = G(\mathbf{v})\;\;\;\;\forall\mathbf{v}\in H_{0, N}(\text{div}, \Omega),\\
\label{SP2}& b(\mathbf{u}, q) && = F(q)\;\;\;\;\forall q\in L^2(\Omega).
\end{alignat}
The bilinear forms are given by
$a(\mathbf{u}, \mathbf{v}) := \int_\Omega \mathbf{K}^{-1}\mathbf{u}\cdot\mathbf{v}$,
$b(\mathbf{v}, p) := -\int_\Omega p\nabla\cdot\mathbf{v}$,
and the linear functionals are defined as
$G(\mathbf{v}) := -\langle\mathbf{v}\cdot\mathbf{n}, g_D\rangle_{\partial\Omega}$,
$F(q) := -\int_\Omega f q$.
For simplicity of presentation, we rewrite (\ref{SP1})--(\ref{SP2}) in the following compact manner:
find $(\mathbf{u}, p)\in \mathbf{H}$ such that
\begin{equation}
\label{SP3}\mathscr{A}((\mathbf{u}, p), (\mathbf{v}, q)) = \mathscr{L}((\mathbf{v}, q))
~~~\forall (\mathbf{v}, q)\in \mathbf{H},
\end{equation}
where
\begin{align}
\label{bilin}\mathscr{A}((\mathbf{u}, p), (\mathbf{v}, q)) & := a(\mathbf{u}, \mathbf{v}) + b(\mathbf{u}, q) + b(\mathbf{v}, p),\\
\label{lin}\mathscr{L}((\mathbf{v}, q)) & := G(\mathbf{v}) + F(q).
\end{align}
Such a weak formulation admits a unique solution $(\mathbf{u}, p)\in \mathbf{H}$ according to standard theory (see, for example, \cite{MFEM}): the functionals $G$ and $F$ are clearly continuous; the pair of solution spaces satisfies the well-known \textit{BNB, or inf--sup, compatibility condition}
\begin{equation*}
0 < \beta := \inf_{0\neq\varphi\in L^2(\Omega)}\sup_{\mathbf{0}\neq\mathbf{v}\in H_{0, N}(\text{div}, \Omega)}\frac{b(\mathbf{v}, \varphi)}{\Vert\mathbf{v}\Vert_{H(\text{div}, \Omega)}\Vert\varphi\Vert_{L^2(\Omega)}},
\end{equation*}
(a consequence of the divergence operator $\mathfrak{B} : H_{0, N}(\text{div}, \Omega)\rightarrow L^2(\Omega)$, $\mathbf{w}\mapsto\nabla\cdot\mathbf{w}$, being surjective); and the bilinear form $a(\cdot, \cdot)$ is coercive on the kernel of the divergence operator $\mathfrak{B}$.
Indeed, the surjectivity of $\mathfrak{B}$ follows immediately from the application of the \textit{Lax--Milgram Lemma} to a standard Poisson problem, giving the unique existence of a $\varphi\in H^1(\Omega)$ such that
\begin{alignat*}{2}
-\Delta\varphi & = q && \;\;\;\; \forall\mathbf{x}\in\Omega,\\
\varphi & = 0 && \;\;\;\; \forall\mathbf{x}\in\partial\Omega_D,\\
\nabla\varphi\cdot\mathbf{n} & = 0 && \;\;\;\; \forall\mathbf{x}\in\partial\Omega_N,
\end{alignat*}
for any $q\in L^2(\Omega)$; then $\mathbf{w} = -\nabla\varphi\in H_{0, N}(\textnormal{div}, \Omega)$ satisfies $\nabla\cdot\mathbf{w} = q$.
\subsection{Mixed Finite Element Approximation}\label{MFEMsec}
The numerical approximation of Darcy's equations employed in this paper will be based on a mixed finite element method. To this end, let $\mathscr{T}_h$ be a shape-regular simplicial partition of $\overline\Omega$ with $h$ the mesh-size parameter. We use the terminology \emph{face} to refer to a $(d-1)$-dimensional simplicial facet which forms part of the boundary of an element $\kappa\in\mathscr{T}_h$.
Consider the finite-dimensional subspaces $\mathbf{V}_h\subset H_{0, N}(\text{div}, \Omega)$ and $\Pi_h\subset L^2(\Omega)$. Achieving $H(\text{div}, \Omega)$-conformity is paramount: such approximations have continuous normal traces across element faces (see, for example, \cite{MFEM}), allowing for the computation of physical streamlines, which is vital in real-life applications. Conversely, nodal-based elements should be avoided since they often result in unphysical streamlines, as well as a lack of mass conservation at the element level \cite{Cordes}. Typically, such conformity is achieved by utilising the well-known \textit{Raviart--Thomas} (RT) or \textit{Brezzi--Douglas--Marini} (BDM) finite elements. For the pressure space $\Pi_h$ we employ discontinuous piecewise-polynomial functions. We stress, however, that any approximation spaces can be used provided they are $H(\text{div}, \Omega)$- and $L^2(\Omega)$-conforming, respectively, and form a stable pair in the \textit{inf--sup} sense.
Hence, the discrete problem is: find $(\mathbf{u}_h, p_h)\in \mathbf{H}_h:=\mathbf{V}_h\times \Pi_h$ such that
\begin{equation}
\label{DiscDarcy}\mathscr{A}((\mathbf{u}_h, p_h), (\mathbf{v}_h, q_h)) = \mathscr{L}((\mathbf{v}_h, q_h))
~~~\forall (\mathbf{v}_h, q_h)\in \mathbf{H}_h.
\end{equation}
\subsection{Goal-Oriented Error Estimation}\label{DWRsec}
In this section we briefly present the general DWR theory for the \textit{a posteriori} error estimation of a general nonlinear functional ${\mathcal Q}:{\mathbf H} \rightarrow {\mathbb R}$ of the flow problem \eqref{SP3}. For simplicity of presentation, the underlying PDE problem here is linear, though we stress that the ensuing analysis naturally generalises to the nonlinear setting.
To this end, given \eqref{SP3} and its corresponding finite element approximation defined by \eqref{DiscDarcy},
we define the error in the quantity of interest ${\mathcal Q}(\mathbf{u},p)$ by
\begin{equation}\label{goalerror}
{\mathcal E}^{\mathcal Q}_h := {\mathcal Q}(\mathbf{u},p) - {\mathcal Q}(\mathbf{u}_h,p_h).
\end{equation}
To estimate this quantity we introduce the following sequence of \textit{adjoint or dual} problems, relative to the variational problem (\ref{SP3}), with respect to the functional ${\mathcal Q}$:
\textbf{Adjoint problem I:} find $(\mathbf{z}, r)\in\mathbf{H}$ such that
\begin{equation}\label{DWRformal}
\mathscr{A}((\mathbf{v}, q), (\mathbf{z}, r)) = \overline{{\mathcal Q}}((\mathbf{u}, p), (\mathbf{u}_h, p_h); (\mathbf{v}, q))\;\;\;\;\forall(\mathbf{v}, q)\in\mathbf{H},
\end{equation}
where the mean-value linearisation of ${\mathcal Q}(\cdot)$, evaluated at $\zeta\in\mathbf{H}$, is defined as
\begin{equation}\label{mvlDef}
\overline{{\mathcal Q}}((\mathbf{u}, p), (\mathbf{u}_h, p_h); \zeta) := \int_0^1 {\mathcal Q}'[\vartheta(\mathbf{u}, p) + (1 - \vartheta)(\mathbf{u}_h, p_h)](\zeta)\,d\vartheta.
\end{equation}
\textbf{Adjoint problem II:} find $(\mathbf{z}_\star, r_\star)\in\mathbf{H}$ such that
\begin{equation}\label{DWRlin}
\mathscr{A}((\mathbf{v}, q), (\mathbf{z}_\star, r_\star)) = {\mathcal Q}'[(\mathbf{u}_h, p_h)]((\mathbf{v}, q))\;\;\;\;\forall(\mathbf{v}, q)\in\mathbf{H}.
\end{equation}
\textbf{Discrete adjoint problem II:} find $(\mathbf{z}_h, r_h)\in\mathscr{W}_h$ such that
\begin{equation}\label{DWRdisclin}
\mathscr{A}((\mathbf{v}_h, q_h), (\mathbf{z}_h, r_h)) = {\mathcal Q}'[(\mathbf{u}_h, p_h)]((\mathbf{v}_h, q_h))\;\;\;\;\forall(\mathbf{v}_h, q_h)\in\mathscr{W}_h.
\end{equation}
Here, the finite-dimensional space $\mathscr{W}_h$ can be any space such that $\mathscr{W}_h\subset\mathbf{H}$ but $\mathscr{W}_h\not\subset\mathbf{H}_h$, for reasons relating to Galerkin orthogonality that we shall see later. If hierarchical bases are used within the finite element method, then a popular choice is to define $\mathscr{W}_h$ on the same mesh $\mathscr{T}_h$ as $\mathbf{H}_h$, but to employ higher-order polynomials. We also see here the need to be able to evaluate the Gâteaux derivative of the nonlinear functional representing the quantity of interest, since it appears in both of the adjoint problems (\ref{DWRlin}) and (\ref{DWRdisclin}).
Defining the residual by
\begin{equation}\label{residual}
\mathscr{R}_h(\mathbf{v}, q) := \mathscr{L}((\mathbf{v}, q)) - \mathscr{A}((\mathbf{u}_h, p_h), (\mathbf{v}, q)),
\end{equation}
we have, by employing standard arguments, the following error representation formula.
\begin{proposition}[Error Representation]
Let $(\mathbf{u}, p)$ denote the solution of the primal problem (\ref{SP3}), $(\mathbf{u}_h, p_h)$ solve the discrete, primal problem (\ref{DiscDarcy}) and $(\mathbf{z}, r)$ be the solution of the adjoint problem (\ref{DWRformal}). Then, the following equality holds
\begin{equation}
\mathcal{E}^{\mathcal Q}_h = \mathscr{R}_h(\mathbf{z} - \mathbf{z}_I, r - r_I) \label{ERimpoved}
\end{equation}
for all $(\mathbf{z}_I, r_I)\in\mathbf{H}_h$.
\end{proposition}
In particular, (\ref{ERimpoved}) is relevant for localising an estimate of the error representation, in order to potentially drive mesh adaptivity. Of course, (\ref{ERimpoved}) is not computable since the formal adjoint solutions $(\mathbf{z}, r)$ are not, in general, computable themselves. We must instead use the approximate linearised adjoint problem, and its discretisation, in order to approximate the error (\ref{goalerror}).
To this end, it is easy to see that, for all $(\mathbf{z}_I, r_I)\in\mathbf{H}_h$, the error representation may be decomposed into the three parts
\begin{align*}
\mathcal{E}^{\mathcal Q}_h
& = \mathscr{R}_h(\mathbf{z} - \mathbf{z}_\star, r - r_\star) + \mathscr{R}_h(\mathbf{z}_\star - \mathbf{z}_h, r_\star - r_h) + \mathscr{R}_h(\mathbf{z}_h - \mathbf{z}_I, r_h - r_I).
\end{align*}
The first term $\mathscr{R}_h(\mathbf{z} - \mathbf{z}_\star, r - r_\star)$ represents the error induced by the approximate linearisation of the formal adjoint problem; the second term $\mathscr{R}_h(\mathbf{z}_\star - \mathbf{z}_h, r_\star - r_h)$ represents the error induced by discretising the approximate linearised adjoint problem. The last term, $\mathscr{R}_h(\mathbf{z}_h - \mathbf{z}_I, r_h - r_I)$, is the most useful since it is \textit{computable}. If we assume that the other, non-computable, residuals converge to zero at an asymptotic rate \textit{faster} than this latter term, we may estimate the error in the quantity of interest directly by the computable part:
\begin{equation}\label{ErrorEstimate}
\mathcal{E}_h^{\mathcal Q} \approx \mathscr{R}_h(\mathbf{z}_h - \mathbf{z}_I, r_h - r_I).
\end{equation}
Typically, the functions $\mathbf{z}_I$ and $r_I$ are chosen to be interpolants, projecting the discrete linearised adjoint solutions $\mathbf{z}_h$ and $r_h$ from $\mathscr{W}_h$ into $\mathbf{H}_h$. We stress that the presence of these interpolants is essential to ensure that the \textit{double} rate of convergence expected in optimal goal-oriented adaptive regimes is retained when local elementwise error indicators are defined based on \eqref{ErrorEstimate}, cf. below.
Under mesh refinement, whether it be uniform or adaptive, the estimate (\ref{ErrorEstimate}) converges to the true error if the \textit{effectivity index}
$\theta_h := \mathcal{E}_h^{\mathcal Q} /\mathscr{R}_h(\mathbf{z}_h - \mathbf{z}_I, r_h - r_I) \rightarrow 1$
as the mesh is refined. Section \ref{numexp} showcases numerical evidence of this behaviour for both simple and more complex examples, under uniform and adaptive refinement.
\subsubsection{Estimate Localisation for Darcy's Equations}
In this section we localise the error estimate (\ref{ErrorEstimate}) into element-based indicators on the mesh $\mathscr{T}_h$, based on the usual, integration-by-parts approach.
To this end, writing the right-hand side of (\ref{ErrorEstimate}) as a sum over the mesh $\mathscr{T}_h$, we get
\begin{align}
\mathcal{E}_h^{\mathcal Q} \approx \sum_{\kappa\in\mathscr{T}_h}\Big( & -\langle(\mathbf{z}_h - \mathbf{z}_I)\cdot\mathbf{n}_\kappa, g_D\rangle_{\partial\kappa\cap\partial\Omega_D} - \int_\kappa (r_h - r_I)f\nonumber\\
&-\int_\kappa\mathbf{K}^{-1}\mathbf{u}_h\cdot(\mathbf{z}_h - \mathbf{z}_I) + \int_\kappa p_h\nabla\cdot(\mathbf{z}_h - \mathbf{z}_I) + \int_\kappa (r_h - r_I)\nabla\cdot\mathbf{u}_h\Big), \label{sumovereles2}
\end{align}
where $\mathbf{n}_\kappa$ denotes the unit outward normal vector to element $\kappa\in\mathscr{T}_h$.
Employing the Green's formula stated in Proposition \ref{IBP}, we see that in particular
\begin{equation*}
\int_\kappa p_h\nabla\cdot(\mathbf{z}_h - \mathbf{z}_I) = -\int_\kappa(\mathbf{z}_h - \mathbf{z}_I)\cdot\nabla p_h + \langle(\mathbf{z}_h - \mathbf{z}_I)\cdot\mathbf{n}_\kappa, p_h\rangle_{\partial\kappa}.
\end{equation*}
Therefore, summing over the elements in the mesh, gives
\begin{align}
\nonumber \sum_{\kappa\in\mathscr{T}_h}\int_\kappa p_h\nabla\cdot(\mathbf{z}_h - \mathbf{z}_I) = \sum_{\kappa\in\mathscr{T}_h}\Big(& -\int_\kappa(\mathbf{z}_h - \mathbf{z}_I)\cdot\nabla p_h + \frac{1}{2}\langle(\mathbf{z}_h - \mathbf{z}_I)\cdot\mathbf{n}_\kappa, \llbracket p_h\rrbracket\rangle_{\partial\kappa\setminus\partial\Omega}\\
& \label{SlotTerm2}+ \langle(\mathbf{z}_h - \mathbf{z}_I)\cdot\mathbf{n}_\kappa, p_h\rangle_{\partial\kappa\cap\partial\Omega_D}\Big),
\end{align}
where $\llbracket \cdot \rrbracket$ denotes the jump operator across an element face.
Inserting (\ref{SlotTerm2}) into (\ref{sumovereles2}) gives the following result.
\begin{theorem}\label{DarcyLocal}
Under the foregoing notation, we have the (approximate) a posteriori error estimate
\begin{equation*}
\vert \mathcal{E}_h^{\mathcal Q}\vert \approx \bigg\vert\sum_{\kappa\in\mathscr{T}_h}\eta_\kappa\bigg\vert \leq \sum_{\kappa\in\mathscr{T}_h}\vert\eta_\kappa\vert
\end{equation*}
where the element indicator $\eta_\kappa$ is split into the four contributions
\begin{equation*}
\eta_\kappa \equiv \eta_\kappa^{BC} + \eta_\kappa^{DL} + \eta_\kappa^{CM} + \eta_\kappa^{PR},
\end{equation*}
each given by:
\begin{align}
\label{I1} \eta_\kappa^{BC} & = \langle(\mathbf{z}_h - \mathbf{z}_I)\cdot\mathbf{n}_\kappa, p_h - g_D\rangle_{\partial\kappa\cap\partial\Omega_D},\\
\label{I2} \eta_\kappa^{DL} & = -\int_\kappa(\mathbf{K}^{-1}\mathbf{u}_h + \nabla p_h)\cdot(\mathbf{z}_h - \mathbf{z}_I),\\
\label{I3} \eta_\kappa^{CM} & = \int_\kappa(r_h - r_I)(\nabla\cdot\mathbf{u}_h - f),\\
\label{I4} \eta_\kappa^{PR} & = \frac{1}{2}\langle(\mathbf{z}_h - \mathbf{z}_I)\cdot\mathbf{n}_\kappa, \llbracket p_h\rrbracket\rangle_{\partial\kappa\setminus\partial\Omega}.
\end{align}
\end{theorem}
Each of the indicator contributions (\ref{I1})--(\ref{I4}) is \textit{adjoint-weighted} and may be interpreted as follows: $\eta_\kappa^{BC}$ measures how well the boundary condition (\ref{Dbc}) is satisfied; $\eta_\kappa^{DL}$ measures how well Darcy's Law (\ref{DL}) is satisfied; $\eta_\kappa^{CM}$ measures how well the conservation of mass equation (\ref{CM}) is satisfied; and finally, $\eta_\kappa^{PR}$ is a measure of the interior pressure residual across element interfaces.
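Once the indicators $\vert\eta_\kappa\vert$ of Theorem \ref{DarcyLocal} have been computed, they may be used to drive adaptive refinement. As a minimal sketch, the following implements a standard fixed-fraction marking strategy (an illustrative choice of ours; it is not claimed to be the strategy employed later in the paper): refine the smallest set of elements accounting for a given fraction of the total estimated error.

```python
import numpy as np

def fixed_fraction_mark(eta, fraction=0.2):
    """Return indices of elements to refine: the smallest set whose summed
    |eta_k| accounts for at least `fraction` of the total estimated error."""
    abs_eta = np.abs(np.asarray(eta, dtype=float))
    order = np.argsort(abs_eta)[::-1]             # largest indicators first
    cumulative = np.cumsum(abs_eta[order])
    count = int(np.searchsorted(cumulative, fraction * cumulative[-1])) + 1
    return order[:count]

# Example with made-up indicator values: two elements dominate the error.
eta = [1e-3, 5e-2, 2e-4, 8e-2, 1e-2, 3e-2]
marked = fixed_fraction_mark(eta, fraction=0.5)   # indices of elements to refine
```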
\section{Linearising the Travel Time Functional}\label{MainResultSec}
Recalling the discussion presented in Section \ref{prelims}, we emphasise that the main result (i.e., evaluating the Gâteaux derivative of the travel time functional) is independent of where the velocity field $\mathbf{u}$ has come from; for now we are concerned only with the continuity of $\mathbf{u}$. Indeed, computing an approximation to the travel time functional via an approximation of the velocity field $\mathbf{u}$ may or may not lead to a continuous velocity field; this depends on the fluid model and the type of approximation that is employed.
More explicitly: suppose our problem were posed not in groundwater flow and the disposal of radioactive waste, but instead we were interested in $T(\mathbf{u};\mathbf{x}_0)$ where $\mathbf{u}$ is a flow governed by the Stokes equations. In this situation, vector-valued $H^1$-conforming elements are typically employed (cf. \cite{BS}) on some mesh $\mathscr{T}_h$ to obtain an approximation $\mathbf{u}_h$ that is (at least in two spatial dimensions) continuous across element interfaces. Here, Theorem \ref{preMainResult} can be applied to evaluate the derivative $T'[\mathbf{u}_h](\cdot)$ (to, for example, drive an adaptive mesh refinement algorithm). However, in the context of this work an $H(\text{div})$-conforming approximation of a flow governed by Darcy's equations is used, and such conformity does not guarantee continuity of the velocity field across element interfaces. Therefore, in the following discussion we derive the more general result stated in Theorem \ref{mainresult}.
\subsection{Linearisation in the Discontinuous Case}
Given the domain $\Omega\subset\mathbb{R}^d$, $d=2,3$, denote by $\mathcal{I}$ the semi-infinite time interval $[0, \infty)$. Furthermore, suppose we have the possibly time-dependent velocity field
$
\mathbf{v} : (\mathbb{R}^d\times\mathcal{I})\rightarrow\mathbb{R}^d.
$
The particle trajectory of the velocity field, $\mathbf{X}_\mathbf{v} : \mathcal{I}\rightarrow\mathbb{R}^d$, satisfies the IVP:
\begin{equation}\label{traj}
\begin{cases}\frac{d\mathbf{X}_{\mathbf{v}}}{dt} = \mathbf{v}(\mathbf{X}_\mathbf{v}, t) & \;\;\;\;\forall t\in\mathcal{I},\\
\mathbf{X}_\mathbf{v}(0) = \mathbf{x}_0,\end{cases}
\end{equation}
where the initial position $\mathbf{x}_0\in\Omega$.
The main result is stated below in Theorem \ref{mainresult}, which provides the evaluation of the Gâteaux derivative $T'[\mathbf{v}](\cdot)$, of the travel time functional $T(\cdot)$.
\begin{theorem}\label{mainresult}
Let $\mathbf{n} = \mathbf{n}(\mathbf{x})$ be the unit outward normal vector to the boundary $\partial\Omega$. Assume firstly that $\partial\Omega$ is locally flat at the exit point $\mathbf{X}(T_\mathbf{v})$, so that the unit outward normal vector $\mathbf{n} = \mathbf{n}(\mathbf{X}(T_\mathbf{v}))$ is unique. Assume also that the particle trajectory does not exit $\Omega$ parallel to the boundary, so that $\mathbf{v}(\mathbf{X}(T_\mathbf{v}), T_\mathbf{v})\cdot\mathbf{n}(\mathbf{X}(T_\mathbf{v})) \neq 0$. Suppose that $\mathscr{T}_h$ is a simplicial partition of $\Omega$ and that $\mathbf{v}$ is discontinuous across the faces $\{\mathcal{F}_i\}$ that intersect the path $t\mapsto\mathbf{X}(t)$, defined by (\ref{traj}) at the times $\{t_i = t_{i,\mathbf{v}}\}$. Lastly assume that the particle trajectory does not exit any element in $\mathscr{T}_h$ parallel to its boundary, or through the boundary of one of the element faces, except possibly at the end, given the previous assumption about local flatness. With the above notation described, let $\mathbf{Z} : [0, T_\mathbf{v}]\rightarrow\mathbb{R}^d$ be the solution to the adjoint, or dual (linearised-adjoint, backward-in-time) IVP:
\begin{equation}\label{adjointIVP}
\begin{cases}\mathcal{L}_\mathbf{v}^*(\mathbf{Z}(t)) \equiv -\frac{d\mathbf{Z}}{dt} - [\nabla\mathbf{v}(\mathbf{X}(t), t)]^\top\mathbf{Z} = \mathbf{0} & \;\;\;\;\forall t\in[0, T_\mathbf{v})\setminus\{t_{i,\mathbf{v}}\},\\ \\
\mathbf{Z}(T_\mathbf{v}) = -\frac{\mathbf{n}(\mathbf{X}(T_\mathbf{v}))}{\mathbf{v}(\mathbf{X}(T_\mathbf{v}), T_\mathbf{v})\cdot\mathbf{n}(\mathbf{X}(T_\mathbf{v}))},\\ \\
\llbracket\mathbf{Z}(t_{i,\mathbf{v}})\rrbracket = -\frac{\mathbf{Z}(t_{i,\mathbf{v}}^+)\cdot\llbracket\mathbf{v}(t_{i,\mathbf{v}})\rrbracket\mathbf{n}_i^-}{\mathbf{v}(\mathbf{X}(t_{i,\mathbf{v}}^-), t_{i,\mathbf{v}}^-)\cdot\mathbf{n}_i^-} & \;\;\;\;\forall i,\end{cases}
\end{equation}
where $\mathbf{n}_i^-$ is the unit outward normal vector to the faces $\{\mathcal{F}_i\}$, pointing in the same direction as the particle trajectory $\mathbf{X_v}(t)$ at the time of intersection $t = t_i$, and where $\llbracket\mathbf{Z}(t_{i,\mathbf{v}})\rrbracket = \mathbf{Z}(t_{i,\mathbf{v}}^+) - \mathbf{Z}(t_{i,\mathbf{v}}^-)$ and $\llbracket\mathbf{v}(t_{i,\mathbf{v}})\rrbracket = \mathbf{v}(\mathbf{X}(t_{i, \mathbf{v}}^+), t_{i, \mathbf{v}}^+) - \mathbf{v}(\mathbf{X}(t_{i, \mathbf{v}}^-), t_{i, \mathbf{v}}^-)$ denote jump operators. Then, the Gâteaux derivative of $T(\cdot)$, evaluated at $\mathbf{v}$, is given by
\begin{equation*}
T'[\mathbf{v}](\mathbf{w}) = \int_0^{T_\mathbf{v}}\mathbf{Z}(t)\cdot\mathbf{w}(\mathbf{X}(t), t)\,dt.
\end{equation*}
\end{theorem}
The plus/minus notation refers to the times after/before, respectively, the trajectory $\mathbf{X}_\mathbf{v}$ intersects the element interface, forwards in time.
We may also index $\mathbf{Z_v} \equiv \mathbf{Z}$ to indicate that $\mathbf{Z_v}$ solves the IVP (\ref{adjointIVP}) induced by the velocity field $\mathbf{v}$. Also, we note that if the velocity field driving the trajectory is in fact continuous across the element interfaces, then the jump terms vanish and Theorem~\ref{preMainResult} is recovered.
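The effect of the jump condition in (\ref{adjointIVP}) can be illustrated with a small, hypothetical example of our own: a piecewise-constant field on the unit square, $\mathbf{v} = (1, 0)$ for $x < 0.5$ and $\mathbf{v} = (2, 0)$ for $x \geq 0.5$, for which $\nabla\mathbf{v} = \mathbf{0}$ on each element, so that $\mathbf{Z}$ is constant between its jumps. The sketch evaluates $T'[\mathbf{v}](\mathbf{w})$ for $\mathbf{w} = (1, 0)$ and checks it against a finite difference of the exact travel time:

```python
import numpy as np

s_minus, s_plus = 1.0, 2.0                  # speeds either side of the face x = 0.5
u_minus = np.array([s_minus, 0.0])
u_plus  = np.array([s_plus, 0.0])

t_i = 0.5 / s_minus                          # interface crossing time
T   = t_i + 0.5 / s_plus                     # travel time: 0.75

n_exit = np.array([1.0, 0.0])                # outward normal at the exit face x = 1
n_face = np.array([1.0, 0.0])                # n_i^-: face normal along the trajectory

# Terminal and jump conditions of the adjoint IVP; grad(v) = 0 on each element,
# so Z is piecewise constant in time.
Z_plus  = -n_exit / (u_plus @ n_exit)                        # Z on (t_i, T]
jump_v  = u_plus - u_minus
jump_Z  = -(Z_plus @ jump_v) / (u_minus @ n_face) * n_face   # [[Z]] at t = t_i
Z_minus = Z_plus - jump_Z                                    # Z on [0, t_i)

# Gateaux derivative for the perturbation w = (1, 0):
w  = np.array([1.0, 0.0])
dT = (Z_minus @ w) * t_i + (Z_plus @ w) * (T - t_i)

# Finite-difference check using the exact travel time of the perturbed field:
eps = 1e-6
fd = (0.5 / (s_minus + eps) + 0.5 / (s_plus + eps) - T) / eps
```

Both the adjoint-based value and the finite difference give $-0.625$; dropping the jump condition (i.e. setting $\llbracket\mathbf{Z}\rrbracket = \mathbf{0}$) would instead give $-0.375$, illustrating that the jump terms are essential for discontinuous fields.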
We now proceed to prove Theorem \ref{mainresult}. To this end, we require two lemmas which are given below. Firstly, consider the so-called trajectory derivative, corresponding to the change in the particle path as a result of a change in velocity:
\begin{equation*}
\mathbf{X}'\equiv\partial_\mathbf{v}\mathbf{X}_\mathbf{v}[\mathbf{w}] := \lim_{\varepsilon\rightarrow 0^+}\frac{\mathbf{X}_{\mathbf{v} + \varepsilon\mathbf{w}} - \mathbf{X}_\mathbf{v}}{\varepsilon},
\end{equation*}
recalling the notation that $\mathbf{X}_\mathbf{v}$ is the trajectory induced by the velocity field $\mathbf{v}$.
\begin{lemma}\label{lemma1}
Let $\mathbf{v}$ be as before, discontinuous across the faces $\{\mathcal{F}_i\}$ intersecting the path $t\mapsto\mathbf{X}_\mathbf{v}(t)$ at the times $\{t_i = t_{i, \mathbf{v}}\}$. Then, the trajectory derivative $\mathbf{X}' : \mathcal{I}\rightarrow \mathbb{R}^d$ satisfies the IVP:
\begin{equation}\label{derivjump}
\begin{cases}\mathcal{L}_\mathbf{v}(\mathbf{X}'(t))\equiv \frac{d\mathbf{X}'}{dt} - \nabla\mathbf{v}(\mathbf{X}_\mathbf{v}(t), t)\mathbf{X}' = \mathbf{w}(\mathbf{X}_\mathbf{v}(t), t) & \;\;\;\;\forall t\in\mathcal{I}\setminus\{t_i\},\\
\mathbf{X}'(0) = \mathbf{0},\\
\llbracket\mathbf{X}'(t_i)\rrbracket = -\llbracket\mathbf{v}(t_i)\rrbracket t_i' & \;\;\;\;\forall i, \end{cases}
\end{equation}
where
\begin{equation}\label{ti_label}
t_i' = -\frac{\mathbf{X}'(t_i^-)\cdot\mathbf{n}_i^-}{\mathbf{v}(\mathbf{X}_\mathbf{v}(t_i^-), t_i^-)\cdot\mathbf{n}_i^-}.
\end{equation}
\end{lemma}
\begin{proof}
The time derivative of $\mathbf{X}'$ is given by
\begin{equation*}
\frac{d\mathbf{X}'}{dt} = \frac{d}{dt}\lim_{\varepsilon\rightarrow 0^+}\frac{\mathbf{X}_{\mathbf{v} + \varepsilon\mathbf{w}} - \mathbf{X}_\mathbf{v}}{\varepsilon} = \lim_{\varepsilon\rightarrow 0^+}\frac{(\mathbf{v} + \varepsilon\mathbf{w})(\mathbf{X}_{\mathbf{v} + \varepsilon\mathbf{w}}, t) - \mathbf{v}(\mathbf{X}_\mathbf{v}(t), t)}{\varepsilon},
\end{equation*}
where we have used the fact that the trajectories satisfy the pathline equations. Thus,
\begin{align*}
\frac{d\mathbf{X}'}{dt} & = \lim_{\varepsilon\rightarrow 0^+}\frac{(\mathbf{v} + \varepsilon\mathbf{w})(\mathbf{X}_{\mathbf{v} + \varepsilon\mathbf{w}}, t) - \mathbf{v}(\mathbf{X}_\mathbf{v}(t), t)}{\varepsilon}\\
& = \lim_{\varepsilon\rightarrow 0^+}\frac{\mathbf{v}(\mathbf{X}_{\mathbf{v} + \varepsilon\mathbf{w}}(t), t) - \mathbf{v}(\mathbf{X}_\mathbf{v}(t), t)}{\varepsilon} + \mathbf{w}(\mathbf{X}_\mathbf{v}(t), t)\\
& = \lim_{\varepsilon\rightarrow 0^+}\frac{\mathbf{v}(\mathbf{X}_\mathbf{v}(t) + \varepsilon\mathbf{X}'(t) + o(\varepsilon), t) - \mathbf{v}(\mathbf{X}_\mathbf{v}(t), t)}{\varepsilon} + \mathbf{w}(\mathbf{X}_\mathbf{v}(t), t)\\
& = \lim_{\varepsilon\rightarrow 0^+}\frac{[\nabla\mathbf{v}(\mathbf{X}_\mathbf{v}(t), t)]\varepsilon\mathbf{X}'(t) + o(\varepsilon)}{\varepsilon} + \mathbf{w}(\mathbf{X}_\mathbf{v}(t), t)\\
& = [\nabla\mathbf{v}(\mathbf{X}_\mathbf{v}(t), t)]\mathbf{X}'(t) + \mathbf{w}(\mathbf{X}_\mathbf{v}(t), t),
\end{align*}
i.e., for all $t\in\mathcal{I}\setminus\{t_i\}$ (so that $\nabla\mathbf{v}(\mathbf{X}_\mathbf{v}(t), t)$ exists away from the discontinuities),
\begin{equation*}
\frac{d\mathbf{X}'}{dt} - [\nabla\mathbf{v}(\mathbf{X}_\mathbf{v}(t), t)]\mathbf{X}'(t) = \mathbf{w}(\mathbf{X}_\mathbf{v}(t), t).
\end{equation*}
The initial condition follows easily as
\begin{equation*}
\mathbf{X}'(0) = \lim_{\varepsilon\rightarrow 0^+}\frac{\mathbf{X}_{\mathbf{v} + \varepsilon\mathbf{w}}(0) - \mathbf{X}_\mathbf{v}(0)}{\varepsilon} = \lim_{\varepsilon\rightarrow 0^+}\frac{\mathbf{x}_0 - \mathbf{x}_0}{\varepsilon} = \mathbf{0}.
\end{equation*}
Although the velocity $\mathbf{v}$ has discontinuities, we still require that the trajectory $\mathbf{X}_\mathbf{v}$ is continuous. Hence, we have the coupling conditions between the two maps:
\begin{equation*}
(\mathbf{v}\mapsto\mathbf{X}_\mathbf{v}(t_i^+)) = (\mathbf{v}\mapsto\mathbf{X}_\mathbf{v}(t_i^-))\;\;\;\;\forall i.
\end{equation*}
Taking the Gâteaux derivative of each side (i.e., $(d/d\varepsilon)(\cdot)(\mathbf{v} + \varepsilon\mathbf{w})$, as $\varepsilon\rightarrow 0$) gives
\begin{equation*}
\mathbf{X}'(t_i^+) + \frac{d\mathbf{X}(t_i^+)}{dt}t_i' = \mathbf{X}'(t_i^-) + \frac{d\mathbf{X}(t_i^-)}{dt}t_i'\;\;\;\;\forall i.
\end{equation*}
Thus,
\begin{equation*}
\mathbf{X}'(t_i^+) + \mathbf{v}(\mathbf{X}_\mathbf{v}(t_i^+), t_i^+)t_i' = \mathbf{X}'(t_i^-) + \mathbf{v}(\mathbf{X}_\mathbf{v}(t_i^-), t_i^-)t_i'\;\;\;\;\forall i;
\end{equation*}
rearranging gives
\begin{equation*}
\llbracket\mathbf{X}'(t_i)\rrbracket = -\llbracket\mathbf{v}(t_i)\rrbracket t_i'.
\end{equation*}
The expression for $t_i' \equiv \partial_\mathbf{v}t_{i,\mathbf{v}}(\mathbf{w})$, given by (\ref{ti_label}), follows in the same manner as the proof of Lemma \ref{final} below.
\end{proof}
We note as well that a variational approach can be used instead to prove Lemma \ref{lemma1}.
For use in Lemma \ref{final}, consider the change in exit-time, or time-of-flight, due to a change in the velocity, given by
\begin{equation*}
T'\equiv \partial_\mathbf{v}T_\mathbf{v}(\mathbf{w}) := \lim_{\varepsilon\rightarrow 0^+}\frac{T_{\mathbf{v} + \varepsilon\mathbf{w}} - T_\mathbf{v}}{\varepsilon}.
\end{equation*}
\begin{lemma}\label{final}
The derivative $\mathbf{X}'(T_\mathbf{v})$ satisfies
\begin{equation*}
\mathbf{X}'(T_\mathbf{v})\cdot\mathbf{n} = -T'\mathbf{v}(\mathbf{X}_\mathbf{v}(T_\mathbf{v}), T_\mathbf{v})\cdot\mathbf{n},
\end{equation*}
with $\mathbf{n} \equiv \mathbf{n}(\mathbf{X}_\mathbf{v}(T_\mathbf{v}))$.
\end{lemma}
\begin{proof}
If we assume that the boundary $\partial\Omega$ is locally flat at the exit-point $\mathbf{X}(T_\mathbf{v})$, then this means that for sufficiently small $\varepsilon$ we have
\begin{equation*}
(\mathbf{X}_{\mathbf{v} + \varepsilon\mathbf{w}}(T_\mathbf{v}) - \mathbf{X}_{\mathbf{v}}(T_\mathbf{v}))\cdot\mathbf{n} = (\mathbf{X}_{\mathbf{v} + \varepsilon\mathbf{w}}(T_{\mathbf{v}}) - \mathbf{X}_{\mathbf{v} + \varepsilon\mathbf{w}}(T_{\mathbf{v} + \varepsilon\mathbf{w}}))\cdot\mathbf{n}.
\end{equation*}
Hence,
\begin{align*}
\mathbf{X}'(T_\mathbf{v})\cdot\mathbf{n} & = \lim_{\varepsilon\rightarrow 0^+}\frac{\mathbf{X}_{\mathbf{v} + \varepsilon\mathbf{w}}(T_\mathbf{v}) - \mathbf{X}_\mathbf{v}(T_\mathbf{v})}{\varepsilon}\cdot\mathbf{n}\\
& = \lim_{\varepsilon\rightarrow 0^+}\frac{\mathbf{X}_{\mathbf{v} + \varepsilon\mathbf{w}}(T_{\mathbf{v}}) - \mathbf{X}_{\mathbf{v} + \varepsilon\mathbf{w}}(T_{\mathbf{v} + \varepsilon\mathbf{w}})}{\varepsilon}\cdot\mathbf{n}\\
& = \lim_{\varepsilon\rightarrow 0^+}\frac{\mathbf{X}_{\mathbf{v} + \varepsilon\mathbf{w}}(T_{\mathbf{v}}) - \mathbf{X}_{\mathbf{v} + \varepsilon\mathbf{w}}(T_\mathbf{v} + \varepsilon T' + o(\varepsilon))}{\varepsilon}\cdot\mathbf{n}\\
& = \lim_{\varepsilon\rightarrow 0^+}\frac{-\frac{d\mathbf{X}_{\mathbf{v} + \varepsilon\mathbf{w}}}{dt}(T_\mathbf{v})(\varepsilon T' + o(\varepsilon))}{\varepsilon}\cdot\mathbf{n}\\
& = \lim_{\varepsilon\rightarrow 0^+}\frac{-(\mathbf{v} + \varepsilon\mathbf{w})(\mathbf{X}_{\mathbf{v} + \varepsilon\mathbf{w}}(T_\mathbf{v}), T_\mathbf{v})(\varepsilon T' + o(\varepsilon))}{\varepsilon}\cdot\mathbf{n}\\
& = -T'\mathbf{v}(\mathbf{X}_\mathbf{v}(T_\mathbf{v}), T_\mathbf{v})\cdot\mathbf{n}.
\end{align*}
\end{proof}
Thus, we are now able to prove the main result. Firstly, note that
\begin{equation*}
T'[\mathbf{v}](\mathbf{w}) = \lim_{\varepsilon\rightarrow 0}\frac{T_{\mathbf{v} + \varepsilon\mathbf{w}} - T_\mathbf{v}}{\varepsilon} = T'.
\end{equation*}
\subsubsection{Proof of Theorem \ref{mainresult}}
\begin{proof}
From Lemma \ref{final} and (\ref{adjointIVP}) we have
\begin{equation*}
T' = -\frac{\mathbf{X}'(T_\mathbf{v})\cdot\mathbf{n}}{\mathbf{v}(\mathbf{X}_\mathbf{v}(T_\mathbf{v}), T_\mathbf{v})\cdot\mathbf{n}} = \mathbf{X}'(T_\mathbf{v})\cdot\mathbf{Z}(T_\mathbf{v}).
\end{equation*}
Since from (\ref{adjointIVP}) we know that $\mathcal{L}_\mathbf{v}^*(\mathbf{Z}(t)) = 0$ away from the jump times $\{t_i\}$, we have
\begin{equation*}
T' \equiv \mathbf{X}'(T_\mathbf{v})\cdot\mathbf{Z}(T_\mathbf{v}) = \mathbf{X}'(T_\mathbf{v})\cdot\mathbf{Z}(T_\mathbf{v}) + \sum_i\int_{t_{i-1}}^{t_i}\mathcal{L}_\mathbf{v}^*\mathbf{Z}(t)\cdot\mathbf{X}'(t)\,dt.
\end{equation*}
Integrating by parts reveals that
\begin{align*}
T' & \equiv \sum_i\int_{t_{i-1}}^{t_i}\mathbf{Z}(t)\cdot\mathcal{L}_\mathbf{v}(\mathbf{X}'(t))\,dt + \sum_i(\mathbf{Z}(t_i^+)\cdot\mathbf{X}'(t_i^+) - \mathbf{Z}(t_i^-)\cdot\mathbf{X}'(t_i^-)) + \mathbf{Z}(0)\cdot\mathbf{X}'(0)\\
& = \sum_i\int_{t_{i-1}}^{t_i}\mathbf{Z}(t)\cdot\mathbf{w}(\mathbf{X}_\mathbf{v}(t), t)\,dt + \sum_i(\mathbf{Z}(t_i^+)\cdot\mathbf{X}'(t_i^+) - \mathbf{Z}(t_i^-)\cdot\mathbf{X}'(t_i^-)),
\end{align*}
since from (\ref{derivjump}) in Lemma \ref{lemma1} we have that $\mathcal{L}_\mathbf{v}(\mathbf{X}'(t)) = \mathbf{w}(\mathbf{X}_\mathbf{v}(t), t)$ and $\mathbf{X}'(0) = \mathbf{0}$. The jump condition in (\ref{derivjump}) for $\mathbf{X}'$ can be rearranged to obtain the expression
\begin{equation*}
\mathbf{X}'(t_i^+) = \mathbf{X}'(t_i^-) + \llbracket\mathbf{v}(t_i)\rrbracket\frac{\mathbf{X}'(t_i^-)\cdot\mathbf{n}_i^-}{\mathbf{v}(\mathbf{X}_\mathbf{v}(t_i^-), t_i^-)\cdot\mathbf{n}_i^-}.
\end{equation*}
Thereby,
\begin{align*}
T' \equiv & \sum_i\int_{t_{i-1}}^{t_i}\mathbf{Z}(t)\cdot\mathbf{w}(\mathbf{X}_\mathbf{v}(t), t)\,dt\\ & + \sum_i\left(\mathbf{Z}(t_i^+)\cdot\left(\mathbf{X}'(t_i^-) + \llbracket\mathbf{v}(t_i)\rrbracket\frac{\mathbf{X}'(t_i^-)\cdot\mathbf{n}_i^-}{\mathbf{v}(\mathbf{X}_\mathbf{v}(t_i^-), t_i^-)\cdot\mathbf{n}_i^-}\right) - \mathbf{Z}(t_i^-)\cdot\mathbf{X}'(t_i^-)\right).
\end{align*}
Notice that
\begin{align*}
& \mathbf{Z}(t_i^+)\cdot\left(\mathbf{X}'(t_i^-) + \llbracket\mathbf{v}(t_i)\rrbracket\frac{\mathbf{X}'(t_i^-)\cdot\mathbf{n}_i^-}{\mathbf{v}(\mathbf{X}_\mathbf{v}(t_i^-), t_i^-)\cdot\mathbf{n}_i^-}\right) - \mathbf{Z}(t_i^-)\cdot\mathbf{X}'(t_i^-)\\ & = \left(\mathbf{Z}(t_i^+) - \mathbf{Z}(t_i^-) + \frac{(\mathbf{Z}(t_i^+)\cdot\llbracket\mathbf{v}(t_i)\rrbracket)\,\mathbf{n}_i^-}{\mathbf{v}(\mathbf{X}_\mathbf{v}(t_i^-), t_i^-)\cdot\mathbf{n}_i^-}\right)\cdot\mathbf{X}'(t_i^-)\\
& = (\llbracket\mathbf{Z}(t_i)\rrbracket - \llbracket\mathbf{Z}(t_i)\rrbracket)\cdot\mathbf{X}'(t_i^-) = 0.
\end{align*}
This implies that
\begin{equation*}
T' \equiv \sum_i\int_{t_{i-1}}^{t_i}\mathbf{Z}(t)\cdot\mathbf{w}(\mathbf{X}_\mathbf{v}(t), t)\,dt = \int_0^{T_\mathbf{v}}\mathbf{Z}(t)\cdot\mathbf{w}(\mathbf{X}_\mathbf{v}(t), t)\,dt,
\end{equation*}
thus completing the proof.
\end{proof}
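As an informal numerical check of Theorem \ref{mainresult} (illustrative only, and not part of the analysis), consider the continuous field $\mathbf{v} = (0, 1)^\top$ on the unit square with $\mathbf{x}_0 = (0.5, 0.25)$: here $\nabla\mathbf{v} = \mathbf{0}$, so $\mathbf{Z}(t) \equiv -\mathbf{n}/(\mathbf{v}\cdot\mathbf{n}) = (0, -1)^\top$, and the theorem predicts $T'[\mathbf{v}](\mathbf{w}) = -T_\mathbf{v}\,c$ for the constant perturbation $\mathbf{w} = (0, c)^\top$. The sketch below compares this against a finite difference; the helper \texttt{exit\_time} is hypothetical and uses SciPy event detection.

```python
import numpy as np
from scipy.integrate import solve_ivp

def exit_time(vel, x0, t_max=10.0):
    """Exit time of dX/dt = vel(X) from the unit square (exit through y = 1 here)."""
    def hit_top(t, x):
        return x[1] - 1.0          # trajectory reaches the top boundary
    hit_top.terminal = True
    sol = solve_ivp(lambda t, x: vel(x), (0.0, t_max), x0,
                    events=hit_top, rtol=1e-12, atol=1e-12)
    return sol.t_events[0][0]

x0 = np.array([0.5, 0.25])
c = 0.3
v = lambda x: np.array([0.0, 1.0])     # base velocity (continuous, gradient zero)
w = lambda x: np.array([0.0, c])       # perturbation direction

T = exit_time(v, x0)                   # exact exit time is 0.75
predicted = -T * c                     # adjoint formula with Z = (0, -1)
eps = 1e-6
fd = (exit_time(lambda x: v(x) + eps * w(x), x0) - T) / eps
print(T, predicted, fd)
```

The finite difference agrees with the adjoint prediction to within the perturbation and solver tolerances.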
\subsection{Application to Darcy Flow}\label{DarcyContextSec}
For a groundwater flow model governed by Darcy's equations (\ref{DL})--(\ref{Nbc}), physical (non-sorbing, non-dispersive, purely advective) particle trajectories are driven by the so-called transport velocity, which is related to the Darcy velocity $\mathbf{u}$ and the porosity $\phi$ of the surrounding rock via
$
\mathbf{u}_T = \nicefrac{\mathbf{u}}{\phi}.
$
Indeed, it is the travel time along particle trajectories driven by this velocity field that should be considered in the travel time functional (\ref{TT}). With $\mathbf{x}_0$ the initial burial point, our quantity of interest can be expressed either by the functional $\mathfrak{T}(\cdot\,; \mathbf{x}_0)$ or by $T(\cdot\,; \mathbf{x}_0)$, where, in particular, the former is given by
\begin{equation}\label{traveltime}
\mathfrak{T}(\mathbf{u}; \mathbf{x}_0) = T(\mathbf{u}_T; \mathbf{x}_0) = \inf\{t > 0 : \mathbf{X}_{\mathbf{u}_T}(t)\not\in\Omega\},
\end{equation}
and it is indeed the trajectory $\mathbf{X}_{\mathbf{u}_T}$ that should be considered $(\mathbf{v} \leftrightarrow \mathbf{u}_T)$ in Theorem~\ref{mainresult}, and the functional $T(\mathbf{u}_T; \mathbf{x}_0)$ should be considered in the context of the \textit{a posteriori} error estimation presented in Section \ref{DWRsec}.
Furthermore, a simple application of a generalised chain rule allows us to deduce an expression for the Gâteaux derivative of the functional $\mathfrak{T}(\cdot\,;\mathbf{x}_0)$, given by
\begin{equation}\label{chain}
\mathfrak{T}'[\mathbf{v}](\mathbf{w}) = T'[\mathbf{v}_T](\mathbf{w}_T).
\end{equation}
\subsection{Implementation Details}\label{ImplementationSec}
In this section, let $\mathbf{u}_h\in\mathbf{V}_h$ and $\mathbf{v}\in\mathbf{V}$ be generic velocity fields. For example, $\mathbf{u}_h$ could be the solution of the discrete problem (\ref{DiscDarcy}), while $\mathbf{v}$ could be a basis function of $\mathbf{W}_h\subset\mathbf{V}$, $\mathbf{W}_h\not\subset\mathbf{V}_h$, so that the derivative
\begin{equation}\label{derivative}
T'[\mathbf{u}_h](\mathbf{v}) = \int_0^{T(\mathbf{u}_h)}\mathbf{Z}(t)\cdot\mathbf{v}(\mathbf{X}_{\mathbf{u}_h}(t))\,dt
\end{equation}
is required for computing the numerical solution to the approximate linearised adjoint problem (\ref{DWRlin}). Of course, if $\mathbf{u}_h$ is the discrete Darcy velocity satisfying (\ref{DiscDarcy}) then the derivative $\mathfrak{T}'[\mathbf{u}_h](\mathbf{v})$ can be evaluated combining this section with (\ref{chain}).
For simplicity of presentation, we restrict this discussion to $d=2$, but we stress that the generalisation to $d=3$ follows directly. In this setting, we recall that $\mathscr{T}_h$ is a shape-regular triangulation of $\overline{\Omega}$, and that $\mathbf{u}_h$ is discontinuous across the element interfaces intersected by the particle trajectory $\mathbf{X}_{\mathbf{u}_h}(t)$ at the times $\{t_i\}_{i=1}^N$; we proceed under the assumptions stated in Theorem \ref{mainresult}.
Denote by $\mathbb{T}_h = \{\kappa_i\}_{i=1}^{N}\subset\mathscr{T}_h$ the ordered list of elements intersected by the particle trajectory. Here, we allow for repetitions: if the trajectory re-enters an element, that element appears multiple times in $\mathbb{T}_h$ with different labels. In order to obtain the adjoint variable $\mathbf{Z}_{\mathbf{u}_h} \equiv \mathbf{Z}$, we can solve the IVP (\ref{adjointIVP}) in an element-by-element manner. That is, starting from the point where $\mathbf{X}_{\mathbf{u}_h}(t)$ intersects the boundary, we trace the particle trajectory backwards through its intersected elements, and solve for $\mathbf{Z}$ on each time interval during which the trajectory resides in the corresponding element. More precisely, consider the final element $\kappa_N$. The trajectory $\mathbf{X}_{\mathbf{u}_h}(t)$ occupies this element for $t\in(t_{N-1}, t_N)$, where $t_N \equiv T(\mathbf{u}_h; \mathbf{x}_0)$ is the travel time. Restricting to this time interval, the adjoint variable $\mathbf{Z}(t)$ solves the IVP
\begin{equation*}
-\frac{d\mathbf{Z}(t)}{dt} - [\nabla\mathbf{u}_h(\mathbf{X}_{\mathbf{u}_h}(t))]^\top\mathbf{Z}(t) = \mathbf{0}.
\end{equation*}
For times $t\in(t_{N-1}, t_N)$, we have $\mathbf{X}_{\mathbf{u}_h}(t)\in\kappa_N$ and within this element $\mathbf{u}_h$ is a polynomial function. This means that together with the given final--time condition
\begin{equation*}
\mathbf{Z}(t_N) = -\frac{\mathbf{n}}{\mathbf{u}_h(\mathbf{X}(t_N))\cdot\mathbf{n}},
\end{equation*}
we can solve for $\mathbf{Z}$ within this time interval, via an exact method or using some approximate time-stepping technique for ODEs. For example, if $\mathbf{u}_h$ is a piecewise linear function on the triangulation $\mathscr{T}_h$ (e.g. a lowest order RT or BDM function) then we may solve for $\mathbf{Z}$ directly via matrix exponentials. Indeed, the gradient of such a function will be piecewise constant on the same triangulation.
In such a case, denote by $\mathbf{a} = (\alpha_x, \alpha_y)^\top$, $\mathbf{b} = (\beta_x, \beta_y)^\top$ and $\mathbf{c} = (\gamma_x, \gamma_y)^\top$ the real coefficients such that on $\kappa_i\in\mathbb{T}_h$
\begin{equation*}
\mathbf{u}_h\vert_{\kappa_i} \equiv \begin{bmatrix}\alpha_x + \beta_x x + \gamma_x y\\ \alpha_y + \beta_y x + \gamma_y y\end{bmatrix}.
\end{equation*}
Then,
$\mathbf{a} = \mathbf{u}_h\vert_{\kappa_i}(0, 0)$,
$\mathbf{b} = \mathbf{u}_h\vert_{\kappa_i}(1, 0) - \mathbf{a}$,
$\mathbf{c} = \mathbf{u}_h\vert_{\kappa_i}(0, 1) - \mathbf{a}$,
and the gradient of $\mathbf{u}_h$ restricted to $\kappa_i$ is given by
\begin{equation*}
\nabla\mathbf{u}_h\vert_{\kappa_i} = \begin{bmatrix} \mathbf{b} & \mathbf{c}\end{bmatrix} = \begin{bmatrix}\beta_x & \gamma_x\\ \beta_y & \gamma_y\end{bmatrix}.
\end{equation*}
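To make this coefficient extraction concrete, the following sketch (with a fabricated linear field standing in for $\mathbf{u}_h\vert_{\kappa_i}$) evaluates $\mathbf{a}$, $\mathbf{b}$ and $\mathbf{c}$ at the three sample points and assembles the gradient matrix:

```python
import numpy as np

# Hypothetical piecewise-linear velocity on one element, written as a
# callable; on a real mesh this would be u_h restricted to kappa_i.
def u_elem(x, y):
    return np.array([1.0 + 2.0 * x + 3.0 * y,
                     4.0 + 5.0 * x + 6.0 * y])

# Recover the coefficients exactly as in the text.
a = u_elem(0.0, 0.0)          # (alpha_x, alpha_y)
b = u_elem(1.0, 0.0) - a      # (beta_x,  beta_y)
c = u_elem(0.0, 1.0) - a      # (gamma_x, gamma_y)

grad_u = np.column_stack([b, c])   # [[beta_x, gamma_x], [beta_y, gamma_y]]
Upsilon = grad_u.T                 # transposed gradient used in the adjoint ODE
print(grad_u)
```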
Denoting by $\Upsilon_i = [\nabla\mathbf{u}_h(\mathbf{X}_{\mathbf{u}_h}(t))]^\top\vert_{\kappa_i}$ the gradient transposed for each $i$, we then have
\begin{equation}\label{AdjointSolutionFinal}
\mathbf{Z}(t) = \exp(\Upsilon_N(t_N - t))\mathbf{Z}(t_N)\;\;\;\;\forall t\in(t_{N-1}, t_N].
\end{equation}
Setting $t = t_{N-1}$ in (\ref{AdjointSolutionFinal}), we can evaluate $\mathbf{Z}(t_{N-1}^+)$. The jump condition in (\ref{adjointIVP}) can then be rearranged to give the value of $\mathbf{Z}$ just before the particle trajectory $\mathbf{X}_{\mathbf{u}_h}(t)$ crosses, forwards in time, into the element $\kappa_N$:
\begin{equation}\label{rearrange}
\mathbf{Z}(t_{N-1}^-) = \mathbf{Z}(t_{N-1}^+) + \frac{\mathbf{Z}(t_{N-1}^+)\cdot\llbracket\mathbf{u}_h(t_{N-1})\rrbracket\mathbf{n}_{N-1}}{\mathbf{u}_h(\mathbf{X}(t_{N-1}))\cdot\mathbf{n}_{N-1}}.
\end{equation}
We see that all of the terms on the right-hand side of (\ref{rearrange}) are known (moreover, the orientation of the normal vector $\mathbf{n}_{N-1}$ to the element interface does not matter, since it appears in both the numerator and the denominator). On the next (or, from the perspective of the particle trajectory, previous) element, $\kappa_{N-1}$, we restrict to the time interval $(t_{N-2}, t_{N-1})$ and solve similarly, now using $\mathbf{Z}(t_{N-1}^-)$ as the final--time condition to obtain
\begin{equation*}
\mathbf{Z}(t) = \exp(\Upsilon_{N-1}(t_{N-1} - t))\mathbf{Z}(t_{N-1}^-)\;\;\;\;\forall t\in(t_{N-2}, t_{N-1}).
\end{equation*}
One then follows this procedure for all time intervals up to and including $(0, t_1)$. In general, for a piecewise linear velocity field $\mathbf{u}_h$, we may hence write
\begin{equation}\label{AdjointSolutionLinear}
\mathbf{Z}(t) = \exp(\Upsilon_i(t_i - t))\mathbf{Z}(t_i^-)\;\;\;\;\forall t\in(t_{i-1}, t_i).
\end{equation}
When $\mathbf{u}_h$ is, for example, piecewise polynomial with a higher degree, or some other general function, then (\ref{AdjointSolutionLinear}) does not apply since the matrices $\Upsilon_i$ will not be constant. Instead, one could employ a time--stepping technique within each time interval to solve for the adjoint solution $\mathbf{Z}(t)$; time--stepping from $\mathbf{Z}(t_i^-)$ until $\mathbf{Z}(t_{i-1}^+)$, using this to generate the next starting position $\mathbf{Z}(t_{i-1}^-)$, and so forth.
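The backward, element-by-element sweep described above can be sketched as follows for the piecewise-linear case. All data (crossing times, per-element matrices $\Upsilon_i$, face normals, velocity jumps) is fabricated purely for illustration; the jump update is the rearrangement (\ref{rearrange}) applied at each interior face.

```python
import numpy as np
from scipy.linalg import expm

# Fabricated data: three intersected elements with crossing times
# 0 = t_0 < t_1 < t_2 < t_3 = T, and a constant Upsilon_i per element.
times = [0.0, 0.4, 0.7, 1.0]
Upsilon = [np.diag([0.5, -0.5])] * 3     # transposed gradients per element
jump_u = [np.array([0.0, 0.3])] * 2      # [[u_h(t_i)]] at the interior faces
normals = [np.array([0.0, 1.0])] * 2     # face normals (orientation immaterial)
u_face = [np.array([0.2, 1.0])] * 2      # u_h(X(t_i)) at the faces
n_exit = np.array([0.0, 1.0])            # outward normal at the exit point
u_exit = np.array([0.0, 2.0])            # u_h at the exit point

Z = -n_exit / (u_exit @ n_exit)          # final-time condition Z(T)
for i in reversed(range(len(Upsilon))):  # elements kappa_N, ..., kappa_1
    # propagate backwards across element i via the matrix exponential
    Z = expm(Upsilon[i] * (times[i + 1] - times[i])) @ Z
    if i > 0:
        # jump condition at the interior face: Z(t^+) -> Z(t^-)
        n, ju, uf = normals[i - 1], jump_u[i - 1], u_face[i - 1]
        Z = Z + (Z @ ju) * n / (uf @ n)
print(Z)                                 # Z(0)
```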
We note as well that the integral (\ref{derivative}) reduces to a sum of integrals over those time intervals during which the trajectory intersects the support of the function $\mathbf{v}$. This is especially useful when $\mathbf{v}$ is, for example, a finite element basis function, which has support on only a few elements, of which either all or just one might intersect the trajectory. Because of this, and the need to compute $\mathbf{Z}(t)$ in the fashion stated above, the right-hand side vector in (\ref{DWRdisclin}) can easily be assembled by looping over these intersected elements in the same backwards fashion as described here.
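As a sketch of how (\ref{derivative}) reduces to per-interval contributions, the snippet below applies the composite midpoint rule on each interval between crossing times, where \texttt{Z\_fn}, \texttt{X\_fn} and \texttt{v\_fn} are hypothetical callables standing in for the adjoint solution, the trajectory and a basis function ($\mathbf{Z}$ may jump at the crossing times but is smooth in between):

```python
import numpy as np

# Illustrative data: 0 = t_0 < ... < t_N = T, and smooth stand-in functions;
# here the integrand works out to -exp(-t), for easy verification.
times = np.array([0.0, 0.4, 0.7, 1.0])
Z_fn = lambda t: np.array([0.0, -np.exp(-t)])   # adjoint solution along the path
X_fn = lambda t: np.array([0.5, t])             # particle trajectory
v_fn = lambda x: np.array([0.0, 1.0])           # "basis function" (global support)

def derivative(times, Z_fn, X_fn, v_fn, n_quad=200):
    """Approximate T'[u_h](v) = int_0^T Z(t) . v(X(t)) dt, interval by interval."""
    total = 0.0
    for a, b in zip(times[:-1], times[1:]):
        # midpoint rule on (a, b); Z is smooth in the open interval
        t = np.linspace(a, b, n_quad + 1)
        mid = 0.5 * (t[:-1] + t[1:])
        h = (b - a) / n_quad
        total += h * sum(Z_fn(tm) @ v_fn(X_fn(tm)) for tm in mid)
    return total

print(derivative(times, Z_fn, X_fn, v_fn))
```

Here the result should approximate $\int_0^1 -e^{-t}\,dt = e^{-1} - 1$.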
\section{Numerical Examples}\label{numexp}
The purpose of this section is to utilise the linearisation result stated in Theorem \ref{mainresult} within the context of goal-oriented adaptivity. Here, Darcy's equations (\ref{DL})--(\ref{Nbc}) model the flow of groundwater through a saturated porous medium; we are interested (cf. Sections \ref{prelims}, \ref{DWRsec} and \ref{DarcyContextSec}) in the accurate estimation of the discretisation error induced by numerically approximating the travel time $\mathfrak{T}(\mathbf{u}; \mathbf{x}_0)$ for a given burial point $\mathbf{x}_0\in\Omega$. For simplicity, we assume throughout this section that $d=2$.
\subsection{Approximation Spaces and Mesh Adaptivity}
Adaptive mesh refinement, and goal--oriented error estimation, will be performed for the accurate computation of the travel time functional (\ref{traveltime}) when the primal solution $(\mathbf{u}, p)\in\mathbf{H}$ to (\ref{SP3}) is approximated by the solution $(\mathbf{u}_h, p_h)\in\mathbf{H}_h$ to (\ref{DiscDarcy}). We wish to measure
\begin{equation}\label{ExpEst}
{\mathcal E}_h^\mathfrak{T} = \mathfrak{T}(\mathbf{u}; \mathbf{x}_0) - \mathfrak{T}(\mathbf{u}_h; \mathbf{x}_0) \approx \sum_{\kappa\in\mathscr{T}_h}\eta_\kappa
\end{equation}
on each of the computational meshes employed, where the indicators are those defined in Theorem \ref{DarcyLocal}. For mesh adaptivity we utilise a fixed--fraction marking strategy, with a refinement selection of $\texttt{REF} = 10\%$, together with the standard red--green, regular, refinement strategy for triangular elements.
We begin by stating the definition of the approximation space $\mathbf{H}_h$. Here, we employ the Brezzi--Douglas--Marini elements for the approximation of the Darcy velocity, and discontinuous piecewise polynomials for the approximation of the pressure (cf. Section \ref{MFEMsec}). To this end, we define the following spaces, where $\mathscr{T}_h$ is the usual shape--regular triangulation of the domain $\Omega\subset\mathbb{R}^2$:
\begin{align*}
BDM_k(\kappa) & := [\mathbb{P}_k(\kappa)]^2,\\
BDM_k(\Omega, \mathscr{T}_h) & := \{\mathbf{v}\in H(\text{div}, \Omega) : \mathbf{v}\vert_\kappa\in BDM_k(\kappa)\;\;\forall\kappa\in\mathscr{T}_h\}.
\end{align*}
Then, the approximation space $\mathbf{H}_{h, k} \equiv \mathbf{V}_{h, k}\times\Pi_{h, k}$ is defined via
\begin{align*}
\mathbf{V}_{h, k} & := \{\mathbf{v}\in BDM_{k + 1}(\Omega, \mathscr{T}_h) : (\mathbf{v}\cdot\mathbf{n})\vert_{\partial\Omega_N} = 0\},\\
\Pi_{h, k} & := \{\varphi\in L^2(\Omega) : \varphi\vert_\kappa\in\mathbb{P}_k(\kappa)\,\,\forall\kappa\in\mathscr{T}_h\}.
\end{align*}
The stability of these pairs of spaces, in the inf--sup sense, is discussed, for example, in \cite{MFEM} for any choice of $k \geq 0$.
In our examples we consider the primal and adjoint approximations $(\mathbf{u}_h, p_h)\in\mathbf{H}_{h, 0}$ and
$(\mathbf{z}_h, r_h)\in\mathbf{H}_{h, 1}$, where $(\mathbf{z}_h, r_h)$ solves the discrete linearised adjoint problem (\ref{DWRdisclin}) with functional $\mathfrak{T}(\,\cdot\,; \mathbf{x}_0)$, approximating the solutions $(\mathbf{z}, r)\in\mathbf{H}$ to the problem (\ref{DWRformal}).
We recall (cf. Section \ref{DWRsec}) the effectivity index
\begin{equation*}
\theta_h := \frac{\mathfrak{T}(\mathbf{u}; \mathbf{x}_0) - \mathfrak{T}(\mathbf{u}_h; \mathbf{x}_0)}{\sum_{\kappa\in\mathscr{T}_h}\eta_\kappa},
\end{equation*}
which measures how well the error estimate approximates the exact travel time error.
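As a concrete instance of this ratio (using the first row of Table \ref{tab:example_I_table} below, for Example I):

```python
# Effectivity index theta_h = (true error) / (estimated error),
# using the coarsest-mesh row of the Example I results.
error = -8.274e-3      # exact travel time error
estimate = -8.476e-3   # computed error estimate
theta = error / estimate
print(round(theta, 3))
```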
\begin{figure}[t!]
\centering
\includegraphics[width=0.35\columnwidth]{Plots/Simple/SimplePath.png}
\caption{Example I: Approximate particle trajectory on the final mesh.}\label{SimplePath}
\end{figure}
\subsection{Example I: A Simple Test Case}
This first example considers a very simple problem for which we know the value of the exact travel time $\mathfrak{T}(\mathbf{u}; \mathbf{x}_0)$. The travel time is approximated on a series of uniformly refined triangulations, in order to validate the proposed error estimate (\ref{ExpEst}).
To this end, let $\Omega = (0, 1)^2$; we impose appropriate boundary conditions so that the exact Darcy velocity is given by $\mathbf{u} = [\sin(x),\ \cos(y)]^\top$. The porosity is set to be $\phi = 1$ everywhere, so that the Darcy and transport velocities coincide. Furthermore, the de-coupling of the IVP for the particle trajectory $\mathbf{X}_\mathbf{u}(t)$ means that we can evaluate the travel time exactly for a given choice of $\mathbf{x}_0\in\Omega$. Selecting $\mathbf{x}_0 = (0.1,\,\,0.3)$ gives
\begin{equation*}
\mathfrak{T}(\mathbf{u}; \mathbf{x}_0) = \log\left(\frac{\tan(1) + \sec(1)}{\tan(0.3) + \sec(0.3)}\right)\approx 0.9216\dots,
\end{equation*}
cf. Figure~\ref{SimplePath} which depicts the particle trajectory.
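This closed-form value can be checked numerically, independently of the finite element computation (a sketch under the stated set-up): the $y$-dynamics decouple and, for this starting point, the trajectory exits through $y = 1$ before the $x$-component reaches $1$, so it suffices to integrate $\dot{y} = \cos(y)$, $y(0) = 0.3$, up to the boundary.

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y):
    return np.cos(y)              # decoupled y-dynamics

def exit_event(t, y):
    return y[0] - 1.0             # exit through the top boundary y = 1
exit_event.terminal = True

sol = solve_ivp(rhs, (0.0, 10.0), [0.3], events=exit_event,
                rtol=1e-12, atol=1e-12)
T_numeric = sol.t_events[0][0]

# closed form from the text (antiderivative of sec along the y-dynamics)
T_exact = np.log((np.tan(1.0) + 1.0 / np.cos(1.0))
                 / (np.tan(0.3) + 1.0 / np.cos(0.3)))
print(T_numeric, T_exact)
```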
\begin{figure}[t!]
\centering
\includegraphics[width=0.4\columnwidth]{Plots/Simple/SimplePressure.png}
\includegraphics[width=0.4\columnwidth]{Plots/Simple/SimpleVelocity.png}\\
\includegraphics[width=0.4\columnwidth]{Plots/Simple/SimpleAdjointPressure.png}
\includegraphics[width=0.4\columnwidth]{Plots/Simple/SimpleAdjointVelocity.png}
\caption{Example I: Primal (top) and adjoint (bottom) pressure and velocity approximations on the final mesh.}\label{SimplePlots}
\end{figure}
\begin{table}[t!]
{\footnotesize
\caption{Example I: Results employing the $BDM_1$ finite element space.}\label{tab:example_I_table}
\begin{center}
\begin{tabular}{|c|c|c|c|} \hline
\bf Number of DOFs & \bf Error & \bf Est. Error & \bf $\theta_h$ \\ \hline
$20$ & $-8.274\times 10^{-3}$ & $-8.476\times 10^{-3} $ & $0.976$ \\
$72$ & $1.358\times 10^{-3}$ & $1.360\times 10^{-3}$ & $0.998$ \\
$272$ & $-3.155\times 10^{-5} $ & $-2.818\times 10^{-5} $ & $1.120$ \\
$1056$ & $-1.894\times 10^{-5} $ & $-1.899\times 10^{-5} $ & $0.997$ \\
$4160$ & $-2.085\times 10^{-6} $ & $-2.084\times 10^{-6} $ & $1.001$ \\
$16512$ & $-9.310\times 10^{-7} $ & $-9.308\times 10^{-7} $ & $1.000$ \\ \hline
\end{tabular}
\end{center}
}
\end{table}
The results featured in Table \ref{tab:example_I_table} show the exact travel time error, the error estimate, and the resulting effectivity index on each of the uniform meshes employed for this example. Indeed, here we observe that the effectivity indices are extremely close to unity on each of the meshes, thereby demonstrating that the error estimate accurately predicts the travel time error in this simple example, even on particularly coarse meshes with less than $50$ degrees of freedom. The primal and adjoint pressure and velocity approximations on the final mesh are depicted in Figure~\ref{SimplePlots}.
\subsection{Example II: A Two--Layered Geometry}
Similar to Example I, this numerical experiment considers a simple geometry and problem set-up in order to further validate the proposed error estimate (\ref{ExpEst}) under uniform refinement. Here, the domain $\Omega$, pictured in Figure \ref{TwoLayerPath}, is defined by
$
\Omega = \{(x, y)\in(0,\,\, 1)\times (0,\,\, 1) : y + \nicefrac{x}{10} < 1\}.
$
Along the line $y = \nicefrac{1}{2}$ the domain is partitioned into the two sub-domains $\Omega_i$, $i = 1,2$, representing different types of rock: the top layer consists solely of Calder Sandstone, while the bottom contains St. Bees Sandstone. To each of the sub-domains we assign a fixed, constant, permeability and porosity (cf. Example III), given by the dataset used in \cite{cliffe_collis_houston}.
This example can be considered a simpler version of Example III, in which we apply the same boundary conditions: along the top of the domain we impose atmospheric pressure, and a no--flow condition on the rest of the boundary. The burial point is chosen to be $\mathbf{x}_0 = (0.1,\,\, 0.1)$ and we set $f = 0$ in Darcy's equations (\ref{DL})--(\ref{Nbc}). Unlike the previous example, the exact travel time $\mathfrak{T}(\mathbf{u}; \mathbf{x}_0)$ is not known in this case; instead, we use an approximation on the final mesh.
\begin{figure}[t!]
\centering
\includegraphics[width=0.35\columnwidth]{Plots/TwoLayer/TwoLayerPath.png}
\caption{Example II: Approximate particle trajectory on the final mesh.}\label{TwoLayerPath}
\end{figure}
\begin{figure}[t!]
\centering
\includegraphics[width=0.4\columnwidth]{Plots/TwoLayer/TwoLayerPressure.png}
\includegraphics[width=0.4\columnwidth]{Plots/TwoLayer/TwoLayerVelocity.png}\\
\includegraphics[width=0.4\columnwidth]{Plots/TwoLayer/TwoLayerAdjointPressure.png}
\includegraphics[width=0.4\columnwidth]{Plots/TwoLayer/TwoLayerAdjointVelocity.png}
\caption{Example II: Primal (top) and adjoint (bottom) pressure and velocity approximations on the final mesh.}\label{TwoLayerPlots}
\end{figure}
\begin{table}[t!]
{\footnotesize
\caption{Example II: Results employing the $BDM_1$ finite element space.}\label{tab:example_II_table}
\begin{center}
\begin{tabular}{|c|c|c|c|} \hline
\bf Number of DOFs & \bf Error & \bf Est. Error & \bf $\theta_h$ \\ \hline
$198$ & $1.188\times 10^{-3}$ & $1.719\times 10^{-3} $ & $0.691$ \\
$764$ & $4.773\times 10^{-4}$ & $4.534\times 10^{-4}$ & $1.053$ \\
$3000$ & $7.891\times 10^{-5} $ & $8.178\times 10^{-5} $ & $0.965$ \\
$11888$ & $1.255\times 10^{-5} $ & $1.294\times 10^{-5} $ & $0.970$ \\
$47328$ & $4.261\times 10^{-6} $ & $4.460\times 10^{-6} $ & $0.955$ \\
$188864$ & $-2.694\times 10^{-7} $ & $-2.694\times 10^{-7} $ & $1.000$ \\ \hline
\end{tabular}
\end{center}
}
\end{table}
The results presented in Table \ref{tab:example_II_table} again show that the proposed error estimate reliably predicts the size of the error, with effectivity indices close to unity on each of the meshes employed. Although it looks as if the trajectory is exiting the domain parallel to the boundary (cf. Figure \ref{TwoLayerPath}), the performance of the error estimator does not deteriorate in this setting.
The adjoint solution approximations, pictured in Figure \ref{TwoLayerPlots}, are discontinuous along the particle trajectory. Indeed, close to $\mathbf{x}_0$ there is a sink--like feature, with the adjoint velocity travelling backwards along the path towards the initial point, while moving in the same direction as the path elsewhere. These may be interpreted as generalised Green's functions along the particle trajectory; in particular, the adjoint pressure appears to be bounded, while the adjoint velocity more closely resembles a Dirac--type measure.
\subsection{Example III: Inspired by the Sellafield Site}
In this example, the domain $\Omega$ is defined as being the union of six sub--domains $\Omega_i$, $i = 1, 2, \dots, 6$, each representing a different type of rock. Each of these layers are assumed to have a given fixed, constant, porosity $\phi$ and permeability $\mathbf{k}$ related to the hydraulic conductivity $\mathbf{K}$ (cf. Sections \ref{DarcyContextSec} and \ref{modelsec}, respectively) by
$
\mathbf{K} = \nicefrac{\rho g}{\mu}\mathbf{k},
$
where $\rho$, $g$, and $\mu$ are the density of water, the acceleration due to gravity, and the dynamic viscosity of water, respectively; the data for each of these is taken from \cite{cliffe_collis_houston}.
We briefly mention that the domain $\Omega$ is merely inspired by the geological units found at the Sellafield site and in no way is physically representative of it; therefore, we draw no conclusions of real-life consequence within this numerical example in the context of the post-closure safety assessments of potential radioactive waste burial sites. Furthermore, this experiment merely aims to reproduce similar results previously obtained in \cite{cliffe_collis_houston} in order to verify the main linearisation result presented in Theorem \ref{mainresult}. More details concerning this problem, as well as a more complex version of this test case, can be found in \cite{cliffe_collis_houston} where the permeability per layer was considered variable, but still constant per element.
\begin{figure}[t!]
\centering
\includegraphics[width=0.65\columnwidth]{Plots/Sellafield/SellafieldDomain.png}
\caption{Example III: The domain $\Omega$, inspired by Sellafield; see \cite{cliffe_collis_houston}.}\label{Sellafield}
\end{figure}
\begin{figure}[t!]
\centering
\includegraphics[width=0.9\columnwidth]{Plots/Sellafield/SellafieldPath.png}
\caption{Example III: Particle trajectory approximation on the initial mesh.}\label{SellafieldPath}
\end{figure}
\begin{table}[t!]
{\footnotesize
\caption{Example III: Results employing the $BDM_1$ finite element space.}\label{tab:BDM_table}
\begin{center}
\begin{tabular}{|c|c|c|c|} \hline
\bf Number of DOFs & \bf Error & \bf Est. Error & \bf $\theta_h$ \\ \hline
$22871$ & $-8.905\times 10^{-5} $ & $-5.970\times 10^{-5} $ & $1.492$ \\
$32624$ & $-5.455\times 10^{-6}$ & $-4.421\times 10^{-6}$ & $1.234$ \\
$47053$ & $4.065\times 10^{-6} $ & $4.382\times 10^{-6} $ & $0.928$ \\
$69887$ & $-2.140\times 10^{-7} $ & $-2.206\times 10^{-7} $ & $0.970$ \\
$1.0755\times 10^{5}$ & $-4.216\times 10^{-8} $ & $-4.326\times 10^{-8} $ & $0.974$ \\
$1.6796\times 10^{5}$ & $-1.330\times 10^{-8} $ & $-1.468\times 10^{-8} $ & $0.906$ \\
$2.6631\times 10^{5}$ & $-8.280\times 10^{-9} $ & $-8.280\times 10^{-9} $ & $1.000$ \\ \hline
\end{tabular}
\end{center}
}
\end{table}
Here, we let $\partial\Omega_D$ be the top of the domain, representing the surface of the site, and let $\partial\Omega_N$ be the remainder of the boundary, as pictured in Figure \ref{Sellafield}. We make the same assumptions as \cite{cliffe_collis_houston}:
the rock below the stratum of the Borrowdale Volcanic Group is of much lower permeability than all of the other layers; there is a flow divide on the left and right edges of the domain; the pressure at the top of the domain is prescribed via $g_D = p_{\text{atm}}/\rho g + y$, where $p_{\text{atm}} \approx 1.013\times 10^5\,\text{Pa}$ is atmospheric pressure; the source term $f$ is set equal to zero. The travel time path computed on the initial mesh is depicted in Figure~\ref{SellafieldPath}.
\begin{figure}[t!]
\centering
\includegraphics[width=1\columnwidth]{Plots/Sellafield/SellafieldPressure.png}
\caption{Example III: Pressure approximation on the initial mesh.}\label{SellafieldPressure}
\end{figure}
\begin{figure}[t!]
\centering
\includegraphics[width=1\columnwidth]{Plots/Sellafield/SellafieldVelocity.png}
\caption{Example III: Velocity approximation on the initial mesh.}\label{SellafieldVelocity}
\end{figure}
\begin{figure}[t!]
\centering
\includegraphics[width=1\columnwidth]{Plots/Sellafield/SellafieldAdjointPressure.png}
\caption{Example III: Adjoint pressure approximation on the initial mesh.}\label{SellafieldAdjointPressure}
\end{figure}
\begin{figure}[t!]
\centering
\includegraphics[width=1\columnwidth]{Plots/Sellafield/SellafieldAdjointVelocity.png}
\caption{Example III: Adjoint velocity approximation on the initial mesh.}\label{SellafieldAdjointVelocity}
\end{figure}
\begin{figure}[t!]
\centering
\includegraphics[width=0.9\columnwidth]{Plots/Sellafield/Meshes/SellafieldMesh1.png}\\
\includegraphics[width=0.9\columnwidth]{Plots/Sellafield/Meshes/SellafieldMesh7.png}
\caption{Example III: Initial and final adaptively refined meshes.}\label{SellafieldMeshes}
\end{figure}
In Table \ref{tab:BDM_table} we present the performance of the adaptive routine when approximating the travel time functional. The reference value of the travel time $\mathfrak{T}(\mathbf{u}; \mathbf{x}_0)$ is taken to be the approximation computed on the final mesh, corrected by the computed error estimator; on this basis the exact travel time is approximately $0.49\times 10^{5}$ years. The effectivity indices computed on all meshes are close to unity, indicating that the approximate error estimate (\ref{ExpEst}) leads to reliable error estimation, in line with the work previously undertaken in \cite{cliffe_collis_houston}. For this physically motivated example we are therefore able to estimate the error in the travel time functional very closely.
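As a sanity check on the reported values, the effectivity indices can be reproduced directly from the table; the short Python sketch below (value pairs transcribed from four rows of Table \ref{tab:BDM_table}) uses the convention, apparent from the table, of dividing the true error by the estimated error.

```python
# Reproduce the effectivity indices of Table "tab:BDM_table" from its
# (error, estimated error) columns; pairs transcribed from four rows.
rows = [
    (-8.905e-5, -5.970e-5),  # 22871 DOFs
    (-5.455e-6, -4.421e-6),  # 32624 DOFs
    (4.065e-6, 4.382e-6),    # 47053 DOFs
    (-8.280e-9, -8.280e-9),  # 266310 DOFs (final mesh)
]
eff = [err / est for err, est in rows]
print([round(e, 3) for e in eff])  # indices close to unity => reliable estimator
```

Values close to unity on every mesh are exactly what is meant by the estimator being reliable.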
Figures \ref{SellafieldPressure} and \ref{SellafieldVelocity} show the computed approximations $(\mathbf{u}_h, p_h)\in\mathbf{H}_{h, 0}$ on the initial mesh. Again, we observe discontinuities in the Darcy velocity across the rock layer interfaces, with the velocities differing by orders of magnitude between the strata. We also see a local stationary point in the pressure near the centre of the domain which accounts for the change in direction of the groundwater flow; indeed, in this region the flow moves upwards and could thus transport the buried nuclear waste back up to the surface of the site.
Figures \ref{SellafieldAdjointPressure} and \ref{SellafieldAdjointVelocity} plot the computed adjoint approximations $(\mathbf{z}_h, r_h)\in\mathbf{H}_{h, 1}$. In agreement with \cite{cliffe_collis_houston}, we see a strong discontinuity along the direction of the trajectory $\mathbf{X}_{\mathbf{u}_{h}}$, with both the adjoint velocity and pressure approximations vanishing away from the path. Close to the initial release point $\mathbf{x}_0$ we see what appears to be a sink--like feature in the adjoint velocity approximation; again in agreement with \cite{cliffe_collis_houston}, this velocity points in the same direction as the primal Darcy velocity approximation outside of the path, but in the opposite direction along $\mathbf{X}_{\mathbf{u}_{h}}$.
Finally, in Figure \ref{SellafieldMeshes} we show the initial mesh and the final, adaptively refined, mesh. As expected, we observe mesh refinement taking place around the initial point $\mathbf{x}_0$, at the exit point, and along the trajectory itself. There is more significant refinement (compared with the rest of the path) where the trajectory changes direction; in these regions there are sharp discontinuities in the Darcy velocity approximation, which may lead to a large discretisation error in the primal Darcy problem. Such errors contribute greatly to the error induced in the travel time functional, and these regions are therefore targeted more heavily for refinement than those containing long horizontal stretches of the trajectory, where the velocity (especially when confined to a single rock layer) is typically quite smooth.
\section{Conclusions}\label{ConclusionsSec}
This work has been concerned with the numerical approximation of the travel time functional in porous media flows and the post--closure safety assessment of radioactive waste storage facilities. An expression for the Gâteaux derivative of the travel time functional has been derived, for both continuous and piecewise--continuous velocity fields, and utilised via the dual--weighted--residual method for goal--oriented error estimation and mesh adaptivity. Numerical experiments covering both simple and complicated problem set--ups were undertaken, validating the proposed error estimate, which performed extremely well: the computed effectivity indices were very close to unity on all meshes employed. The contributions of this research build upon those in \cite{cliffe_collis_houston}, where previously such an expression for the Gâteaux derivative was unavailable.
Extensions of this work may, for example, involve considering more realistic conditions in order to test the proposed error estimate. Treating more demanding domains, such as fractured porous media or domains with inclusions such as vugs or caves, is vital in order to extend the results from these simple academic test cases to real--life applications. Furthermore, a closer look into the regularity of the adjoint solutions would be extremely beneficial in understanding how to improve the error estimate to derive a guaranteed bound, and to better understand the expected rates of convergence in the error of the computed travel time functional. Indeed, the well--posedness of the adjoint problem remains an open question.
\bibliographystyle{siamplain}
\section{Introduction}
Modern explorations of possible mission scenarios to the Ice Giants Uranus and Neptune \cite{Fletcher2020, Hofstadter2019, Simon2020, Masters2014} have motivated a revision of what we know about these planets and their planetary systems. Recent reviews explore their atmospheric dynamics \cite{Hueso2019}, mean circulation patterns \cite{Fletcher2020b}, and vertical structure \cite{Hueso2019, Atreya2020}. These atmospheric themes connect with their internal structure \cite{Helled2020} and formation mechanisms \cite{Guillot2005}. Knowledge of the properties of the Ice Giants can guide us to understand the atmospheres of the growing population of exoplanets with Neptune-size masses.
Additional insights to understand Uranus and Neptune come from the recent exploration of Jupiter and Saturn by the Juno and Cassini missions \cite{Guillot_Fletcher2020}. Both planets have zonal winds that extend a few thousand kilometers into the atmosphere \cite{Kaspi2018, Galanti2019}. For Jupiter, new questions arise on how meteorological processes distribute condensables well below the upper clouds and through the planet \cite{Li2017}, and in both planets it is unknown how deep the latitudinal variations in their banded patterns extend. These new results and questions from the detailed exploration of the Gas Giants challenge our expectations on the properties of the Ice Giants Uranus and Neptune, in particular the distribution of condensables, the characteristics of moist convection, and how these factors manifest in observable fields such as the dynamics of the cloud fields.
In hydrogen-helium atmospheres, condensables are heavier than dry air, imposing a fundamentally different dynamical regime from that of non-hydrogen-based atmospheres \cite{Guillot1995}. In the Gas and Ice Giants, above a critical abundance of the condensing species, moist convection is {\em inhibited} by the weight of the condensables rather than favored by latent heat release \cite{Guillot1995, Leconte2017, Friedson+Gonzales2017,Guillot2019a}. This inhibition requires a sufficiently high abundance of condensables, which might be close to the water abundance in Jupiter \cite{Li2020} and might be reached in the deep water cloud of Saturn, thus providing a way to explain the observations of episodic storms on this planet \cite{Li2015}. In the case of Uranus and Neptune, methane, which condenses in the directly observable part of the atmosphere, is abundant enough to lie in that regime. Uranus and Neptune are therefore extremely interesting laboratories to understand convection in hydrogen atmospheres and its overall effect on the global atmospheric dynamics.
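The inhibition mechanism can be made quantitative with a back-of-the-envelope estimate. The Python sketch below evaluates one common form of the critical condensable mass mixing ratio, $q_{cr} \simeq [(1 - M_d/M_v)\,\mathrm{d}\ln p_{sat}/\mathrm{d}\ln T]^{-1}$, in the spirit of \cite{Guillot1995, Leconte2017}; all numerical values (latent heat, temperature, molar masses) are representative assumptions rather than values taken from this paper.

```python
# Rough estimate of the critical methane mass mixing ratio above which
# moist convection is inhibited in an H2-He atmosphere (criterion in the
# spirit of Guillot 1995 / Leconte et al. 2017; all values assumed).
R = 8.314      # gas constant, J mol^-1 K^-1
M_v = 16.0e-3  # molar mass of CH4, kg/mol
M_d = 2.3e-3   # mean molar mass of dry H2-He air, kg/mol
L = 9.2e3      # approximate CH4 latent heat near condensation, J/mol
T = 80.0       # K, near the CH4 condensation level of Uranus/Neptune

clausius_slope = L / (R * T)  # d ln p_sat / d ln T, roughly 14
q_cr = 1.0 / ((1.0 - M_d / M_v) * clausius_slope)
# q_cr comes out of order ten percent: a deep methane abundance of
# ~80x solar comfortably exceeds this threshold.
```

The point of the estimate is only the order of magnitude: the observed deep methane abundances place both Ice Giants above the critical value.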
Recent observations of Jupiter's deep weather layer by Juno \cite{Li2017} and the Very Large Array (VLA) \cite{dePater2016, dePater2019}, and results from Cassini following the 2010-2011 Great Storm in Saturn \cite{SanchezLavega2019, Jansen2016, Sromovsky2016}, suggest that moist convection acts in both planets to desiccate the upper tropospheres of NH$_3$ below the ammonia condensation level, due to details of the thermodynamics of the water--ammonia system \cite{Guillot2020a, Guillot2020b}. But do other volatile transport mechanisms operate in the cloud layers of Uranus and Neptune? And how do observations of their cloud activity constrain the properties of moist convection in these colder atmospheres with lower internal heat sources?
Uranus and Neptune have time variable cloud systems where storms are believed to be fueled by methane condensation, but the frequency, spatial distribution and strength of these storms, including their size and temporal duration, are not well known, and most cloud activity observed so far is probably not convective. The methane abundance is constrained from remote sensing to values in the range of 80$\pm$20 times solar \cite{Guillot_Gautier2015,Karkoschka2011a,Sromovsky2011} (the solar abundance is the proportion of a given element to hydrogen in the Sun's photosphere; the standard values of solar abundances are given in \cite{Asplund2009}), which translates into cloud bases close to 1 bar, consistent with observations of cloud systems in both planets. Models of the formation of Uranus and Neptune \cite{Helled2020} imply similarly high abundances of other condensables, which would form a multi-layered cloud structure with deep water clouds at pressures ranging from 500 to 3,000 bar for water abundances in the range of 20-80 times solar \cite{Hueso2019, Atreya2020}, i.e. well above the limits required to inhibit convection \cite{Guillot1995, Leconte2017, Friedson+Gonzales2017}. Both planets receive little energy from the Sun. The highest mean daily solar irradiation is 3.7 W\,m$^{-2}$, received in Uranus' polar regions during summer, and 0.7 W\,m$^{-2}$ in Neptune, also near polar latitudes \cite{Hueso2019}. The annual average solar insolation also varies strongly as a function of latitude, with Uranus receiving the highest annual average insolation in its polar regions and Neptune at the equator \cite{Moses2018}. In addition, Neptune possesses an internal energy source (it radiates $2.61\pm 0.28$ times as much energy as it absorbs from the Sun), whereas the internal flux measured at Uranus appears to be at least one order of magnitude smaller \cite{Pearl1991}.
In terms of flux density, the internal heat source in Uranus is negligible, and in Neptune it is about 20 and 7 times smaller than those of Jupiter and Saturn, respectively, resulting in a much lower capacity to power internal convection.
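The quoted irradiation levels follow from the inverse-square law; the quick Python check below uses the standard solar constant and mean orbital distances (values not taken from this paper).

```python
# Top-of-atmosphere solar flux at Uranus and Neptune from the
# inverse-square law (solar constant and distances are standard values).
S0 = 1361.0                               # W m^-2 at 1 au
d_au = {"Uranus": 19.2, "Neptune": 30.1}  # mean orbital distances, au
flux = {p: S0 / d**2 for p, d in d_au.items()}
# Uranus: ~3.7 W m^-2. Because of Uranus' ~98 deg obliquity, the summer
# pole faces the Sun almost continuously, so the mean daily polar
# insolation approaches this full value.
# Neptune: ~1.5 W m^-2 at the top of the atmosphere; day-night and
# obliquity averaging reduce the mean daily value further.
```

The Uranus figure reproduces the 3.7 W\,m$^{-2}$ quoted above; for Neptune, geometric averaging over rotation and obliquity brings the top-of-atmosphere value down toward the quoted daily mean.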
The goals of this paper are to review the observational evidence in favor of moist convection in Uranus and Neptune, explore how moist convection may operate in the Ice Giants from a comparison with moist convection in the Gas Giants, and inspect the effects that storms may produce on the vertical properties of the atmosphere. We also aim to address which measurements a mission to these planets should perform, in addition to the fundamental determination of a vertical profile of temperature and condensable abundances that could be obtained by an atmospheric probe \cite{Mousis2018}.
\section{Observations of convective activity in Uranus and Neptune}
Here we review the observational evidence in favor of moist convective storms in Uranus and Neptune (i.e. clouds formed by ascending vertical motions powered by buoyancy differences and transporting heat vertically). This evidence comes from two sources: (a) observations of the cloud activity; and (b) the inferred abundance of atmospheric methane above the tropopause for Neptune \cite{Baines1994}. Both are incomplete sources of information and show remarkable differences from what we know about convective storms in Jupiter and Saturn.
The observations of the possible cloud activity in Uranus and Neptune lack either the spatial resolution, the temporal sequences required to study their dynamics, or the spectral coverage to properly model the vertical cloud structure of candidate storms.
The high methane abundance above the tropopause was historically the main argument in favor of moist convection in Neptune. This argument does not apply to Uranus (because of its low methane abundance above the tropopause) or to Jupiter and Saturn (although they are convective, they do not show high ammonia abundances above the tropopause). In addition, this high abundance in Neptune's lower stratosphere can be explained by other mechanisms (leakage of methane into the lower stratosphere from warm polar regions) without requiring moist convection \cite{Orton2007, Lellouch2015}. Thus, in this section we discuss candidates for moist convective storms rather than cloud systems proven to have a convective origin.
Observations of moist convective storms in the Gas Giants show divergent clouds on short time-scales of hours to a few days, clouds elevated above their environment and close to the tropopause, and in some cases large disturbances evolving over months. Clouds with strong divergences, as updrafts expand in the stable part of the troposphere, are a key signature of convective storms, but the observational record of such activity in Uranus and Neptune is almost nonexistent because it requires frequent observations at very high spatial resolution. In Jupiter and Saturn, features with irregular cloud morphologies that show rapid changes tend to be convective, as opposed to compact ovals (closed vortices), regularly spaced spots, and long filamentary wavy structures (waves). Intense convective storms large enough to be observed from Earth are frequent in Jupiter \cite{Vasavada2005} and rare in Saturn \cite{SanchezLavega2019, SanchezLavega2020}. In both planets radio emissions from lightning have been observed by different spacecraft \cite{Aplin2017}, and lightning flashes have been observed in some of the most intense storms \cite{Little1999, Dyudina2013}, including locations on Jupiter with no evident storms in the observed cloud field \cite{Kolmasova2018, Becker2019}; in Uranus and Neptune, however, lightning has only been inferred from radio signals \cite{Zarka1986, Gurnett1990, Aplin2020}.
Juno observations of Jupiter show small-scale storms at many latitudes. Storms big enough to be observed from Earth are rarer \cite{Inurrigarro2020}, and strong convective eruptions able to change the visual aspect of entire bands (as in the North Temperate Belt or the South Equatorial Belt) \cite{SanchezLavega1996, SanchezLavega2008, SanchezLavega2017} only happen every few years \cite{Fletcher2017a}. Saturn storms also occur on different scales, with small puffy clouds in the polar regions, seasonal storms at mid-latitudes, and the much rarer Great White Spots \cite{SanchezLavega2020}. In both planets the mechanisms underlying the spatial distribution of storms, their cyclic nature in Jupiter, and their seasonal distribution in Saturn, are not known.
Due to the difficulties of studying cloud motions in images of Uranus and Neptune, the fundamental criterion for selecting candidate convective storms is their morphology as irregular features or elongated plume-shaped clouds, with rapid temporal evolution in their size and shape, and their high brightness in images at methane absorption bands, indicative of high cloud tops. Besides convection, bright clouds are also observed accompanying dark anticyclones in Neptune \cite{Smith1989}. These ``companion clouds'' are suggested to be formed by processes similar to those forming orographic clouds on Earth. Numerical simulations indicate that methane humid air forced by the vortex circulation can rise a few kilometers and condense close to the tropopause as the vortex interacts with the environment flow \cite{Stratman2001}. These bright clouds are produced through a dynamic interaction that does not require buoyancy produced by condensation, as in convective storms, and are therefore distinguished from the moist convective storm candidates we discuss here.
\subsection{Convective activity in Uranus}
The first spatially resolved images of Uranus were obtained by Voyager 2 in 1986 and showed a planet with a homogeneous cloud cover with no bright features \cite{Smith1986}. Voyager observations of the atmospheric lapse rate and the ortho-to-para hydrogen ratio were used to suggest thin-layered convection with sub-saturated methane \cite{Gierasch1987}.
Those observations inspired the idea of a planet with low cloud activity and slow variability. Figure \ref{fig_uranus} shows cloud features in Uranus that reveal that this early view was incorrect. There are several cases of discrete bright clouds that could be caused by moist convective storms. Some of them were short-lived and others were active for years.
Voyager 2 only found a plume-like feature at 35$^{\circ}$S whose morphology and brightness in methane band filters were suggestive of a convective origin \cite{Smith1986} (Fig. \ref{fig_uranus}a). Later observations with ground-based telescopes and the Hubble Space Telescope (HST) captured this feature between 1994 and 2009 \cite{Hammel2005, Sromovsky2009, dePater2011} (Fig. \ref{fig_uranus}b). Nicknamed the ``Berg'', the feature showed oscillations in longitude and a latitudinal migration that eventually caused its disintegration when it reached 5$^{\circ}$S. The Berg extended $\sim 5,000-10,000$ km and contained smaller clouds whose brightness changes suggested convective sources. Although most of the clouds in the Berg could be ``companion clouds'', like those accompanying Neptune's dark vortices (i.e., generated by an unobserved deep anticyclone), intense brightening events in 2004 and 2007 suggest convective storms \cite{dePater2011}.
As the northern hemisphere turned into view in the mid-90s, a large number of discrete features were captured in a latitude band centered at 30$^{\circ}$N \cite{Karkoschka1998}. Because that latitude received the first rays of the Sun after a long winter, this activity was proposed to be seasonal convection triggered by the increasing insolation. However, observations made to date suggest that the band between 28$^{\circ}$N and 42$^{\circ}$N is an intrinsically active region in generating discrete bright spots, with typical sizes of 2,000-4,000 km and sometimes developing in series \cite{Sromovsky2007a} (Fig. \ref{fig_uranus}c). The cloud tops of the brightest features (Fig. \ref{fig_uranus}d) were estimated to be at 300-500 mbar. New outbreaks of activity occurred in 2004-2006 at this latitude and were named the ``Bright Complex'' \cite{Sromovsky2007b}. This high level of activity lasted until Uranus' equinox in December 2007 \cite{Sromovsky2009}.
Another bright feature was observed in 2011 and was active for several months \cite{Sromovsky2012}. The brightest spot in Uranus so far recorded was observed in 2014 close to the equator at 15$^{\circ}$N \cite{dePater2015} (Fig. \ref{fig_uranus}e). Its reflectivity in the deep methane band K' (2.2 $\mu$m) reached 30\% of Uranus' disk brightness, implying that the cloud tops were above the 330 mbar altitude level. The feature extended $\sim$ 17,000 km in longitude and 4,300 km in latitude, and its texture was complex, formed by smaller spots of $\sim$ 2,000 km (Fig. \ref{fig_uranus}e, upper inset). A second cloud system at 32$^{\circ}$N, less bright and deeper in the atmosphere at approximately 2 bar (Fig. \ref{fig_uranus}e, lower inset), developed over months forming an elongated tail-like structure similar to large-scale convective storms in Jupiter and Saturn \cite{SanchezLavega2008, SanchezLavega2019}. Because of these characteristics, these systems are probably the best candidates for moist convective storms in Uranus. However, detailed radiative transfer analyses suggest that although the clouds reached levels close to the tropopause, their optical depth (around 1 at 0.467 $\mu$m) was not as large as would be expected for convective features, where near-infinite optical depths are expected at all wavelengths. The elongated tail-like structure was found to lie at around 1-2 bar, not reaching the top altitudes of the main bright feature \cite{Irwin2017}.
Finally, a different type of convective activity could occur at Uranus' North Pole, above latitude 60$^{\circ}$N, where a cluster of bright spots was observed \cite{Sromovsky2015} (Fig. \ref{fig_uranus}f). These spots have sizes of $\sim$ 500 km and are mutually separated by $\sim$ 1,000-3,000 km. The pattern is strongly reminiscent of that seen at Saturn's North Pole \cite{Antunano2018}, so the dynamical mechanisms responsible for their formation could be similar.
From the point of view of seasonal variations, there is a global indication of enhanced cloud activity and convective candidates as the planet approached its equinox in 2007 and afterwards, peaking in 2014. Since 2014 (corresponding to 1/12th of the Uranus year after equinox) there has not been any strong indication of convective activity, although since 2015 an occasional bright cloud system appears at the boundary of the North polar hood.
\begin{figure}[!h]
\centering\includegraphics[width=4.5in]{Figure_01_Uranus.v3.jpg}
\caption{Candidate features for moist convection in Uranus:
(a) A plume-like feature observed by Voyager 2 in 1986 in Uranus' southern hemisphere \cite{Smith1986, Karkoschka2015}.
The arrow signals the direction of rotation and the inset shows details from \cite{Smith1986};
(b) The Berg feature observed in 2007 (arrow) \cite{Sromovsky2009} with details in the inset;
(c) The active northern band with multiple spots observed in 2004 \cite{Sromovsky2007a, Sromovsky2007b};
(d) Bright spot observed in 2005 \cite{Sromovsky2007a};
(e) The brightest spot in Uranus as observed in 2014 \cite{dePater2015} with the elongated cloud system with insets showing details;
(f) Cloud cluster in the North Pole observed in 2012 with details shown in the inset \cite{Sromovsky2015}.
North is up in all the images. Panels (a-d) show color composite images based on visible to near infrared wavelengths below 1 $\mu$m, with bright features in wide methane absorption bands. Panel (e) is from images in band H (1.6 $\mu$m), with insets using a combination of near infrared images in bands H (blue), a CH4S filter (1.53-1.66 $\mu$m, green) and K' (2.2 $\mu$m, red), the latter being the most sensitive to high clouds. Panel (f) shows observations in band H.}
\label{fig_uranus}
\end{figure}
\subsection{Convective activity in Neptune}
\begin{figure}[!h]
\centering\includegraphics[width=4.5in]{Figure_02_Neptune_V2.jpg}
\caption{Candidate features for moist convection in Neptune:
(a-e) Variable activity at the center of the anticyclone DS2 in 1989 from Voyager 2 \cite{Sromovsky1993};
(f) Details at the center of DS2 \cite{Smith1989, Sromovsky1993};
(g) The ``scooter'' imaged by Voyager 2 \cite{Smith1989};
(h) Details of the SPF observed by Voyager 2 \cite{Karkoschka2011b};
(i) Bright equatorial cloud complex in 2017 at near infrared wavelengths \cite{Molter2019}.
Panels (a-g) in Voyager 2 clear filters (a-b), orange (c), violet (d), green (e), blue (f) and green (g).
Panel (h) is a false color composite with red, green and blue from observations in yellow, blue and ultraviolet light respectively.
Panel (i) shows images at moderate to strong methane absorption wavelengths in band H (1.63 $\mu$m), CH4 (1.59 $\mu$m) and K' (2.12 $\mu$m).}
\label{fig_Neptune}
\end{figure}
Neptune is much more active than Uranus in producing new cloud systems and cloud variations, with large-scale changes occurring regularly \cite{Hueso2017}.
The stratosphere of Neptune has a higher abundance of CH$_4$ than the saturated abundance at the cold tropopause \cite{Baines1994}, and the stratospheric abundance of CH$_4$ in Neptune is nearly 3 orders of magnitude larger than in Uranus. Early attempts to understand these high stratospheric abundances in Neptune relied on moist convection as a mechanism able to transport methane to the stratosphere \cite{Lunine1989, Stoker1989}. Another possibility is that methane could leak out to the stratosphere through warm regions like the hot south polar region \cite{Orton2007} without the need for strong convection. However, this scenario is in conflict with circulation patterns that could explain the thermal structure observed in the planet's stratosphere \cite{Fletcher2014, Lellouch2015, Fletcher2020b}. Here we review observations of Neptune that suggest vigorous methane moist convection in discrete cloud systems.
Since the Voyager 2 flyby in 1989, Neptune has exhibited a rich variety of bright and dark spots with their related ``orographic'' cloud companions \cite{Smith1989, Sromovsky1993}. Clouds with some characteristics of convective storms were observed by Voyager 2 in at least two features. One was in the interior of the so-called DS2 anticyclone at 55$^{\circ}$S, where bright, high-altitude clouds developed, displaying rapidly changing morphology (Fig. \ref{fig_Neptune}a-f). The relation of this activity to the vortex dynamics is a challenging issue. This vortex was an anticyclone, but convective storms in Jupiter and Saturn that resemble the possible convection inside DS2 develop in cyclones \cite{Fletcher2017b,Inurrigarro2020, SanchezLavega2020}. Furthermore, two transient bright spots, suggestive of convective activity, were also observed in 2007 at Neptune's South Pole \cite{Luszcz-Cook2016}, where the wind profile shows cyclonic vorticity.
Voyager 2 images also showed a complex system of narrow streaks stacked in latitude forming the so-called ``scooter'' at 40$^{\circ}$S, extending about $\sim$ 3,000 km (Fig. \ref{fig_Neptune}g). Its clouds were deeper than their surroundings, but their plume-like structure was interpreted as being formed by updrafts in a region with vertical wind shear \cite{Sromovsky1993}. In fact, the latitude band $\sim$37-47$^{\circ}$S has also been very active in producing bright spots during the last 20 years, as recorded by a variety of telescopes \cite{Sromovsky2001}, including amateur observers \cite{Hueso2017}. These spots, with sizes in the range 4,000-8,000 km, stand out in red and near infrared wavelengths, where methane absorption occurs, implying cloud tops high in the atmosphere, but their convective origin cannot be proven with the existing data.
Bright clouds are also found in the South Polar Feature (SPF) (Fig. \ref{fig_Neptune}h) and South Polar Waves (SPW), apparently permanent cloud systems since the Voyager 2 flyby and located at sub-polar latitudes (60-70$^{\circ}$S) \cite{Karkoschka2011b}. Both are formed by elongated clouds that display a very stable motion, which has been suggested to represent Neptune's rotation period. The clouds have been proposed to be related to a Rossby wave within which localized convection could occur regularly, forming the small bright spots embedded in it (Fig. \ref{fig_Neptune}h).
The brightest spot observed in Neptune was a large bright feature centered at 2$^{\circ}$N that lasted 7 months in 2017 \cite{Molter2019} (Fig. \ref{fig_Neptune}i). It developed two equatorial bright cloud systems, with the larger one showing a variable zonal extent of $\sim$ 8,500-12,000 km, a meridional size of $\sim$ 6,000-9,000 km, and cloud tops reaching the 300-600 mbar altitude level. This cloud system was proposed as a good candidate for methane moist convection. No dark vortices that could explain the formation of these cloud systems were expected or found at the equatorial latitude of the bright spots.
We stress that, while most of the features shown in Figures \ref{fig_uranus} and \ref{fig_Neptune} are very good convective storm candidates, we cannot make a definite statement on their convective nature. According to what we know about Jupiter and Saturn, convective storms can reach horizontal sizes $L$ in the range $L/R\sim 0.01-0.1$, where $R$ is the planetary radius. These are sizes comparable to most cloud systems described above. Such convective storms should be formed by clusters of updrafts with horizontal sizes comparable to the scale height $H\sim 30-50$ km (depending on the temperature). Resolving these elements is currently impossible with ground-based telescopes, HST or the future JWST, but will be at the limit of the ELT. Observations that could determine the convective nature of a particular cloud system require high spatial resolution at multiple wavelengths, from the continuum to the near IR methane absorption bands, together with the time evolution of the global cloud fields. The apparent low frequency of convective candidates makes it unlikely that such observations will be obtained from a combination of ground-based AO observations and HST or JWST observations, and the short flybys by Voyager 2 did not produce a dataset large enough to characterize convective candidates. Only observations from an orbiter mission seem able to provide such data \cite{Fletcher2020}.
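The quoted updraft scale can be checked against the pressure scale height $H = RT/(Mg)$; the minimal Python sketch below uses representative tropopause temperatures and gravities (assumed values, not taken from this paper).

```python
# Pressure scale height H = R*T/(M*g) for H2-He atmospheres,
# with representative (assumed) tropopause values.
R = 8.314    # gas constant, J mol^-1 K^-1
M = 2.3e-3   # kg/mol, mean molar mass of an H2-He mixture
g = {"Uranus": 8.7, "Neptune": 11.2}   # m s^-2, approximate
T = {"Uranus": 76.0, "Neptune": 72.0}  # K, near the tropopause

H_km = {p: R * T[p] / (M * g[p]) / 1e3 for p in g}
# Roughly 32 km for Uranus and 23 km for Neptune; warmer levels deeper
# in the weather layer push H toward the 30-50 km range quoted above.
```

Individual updrafts of this size are far below current ground-based and space-telescope resolution at 19-30 au, which is the core of the argument for an orbiter.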
\section{Vertical structure of the atmosphere}
The cloud systems described in the previous section are observed at levels compatible with methane condensation. In Jupiter and Saturn, observations in the thermal infrared (typically at 5 $\mu$m), sensitive to the 2-4 bar region, and at radio wavelengths allow us to glimpse the dynamics of the clouds below. In Uranus and Neptune, the colder temperatures of the atmosphere imply that observations at microwave to millimeter wavelengths are needed to peer below the upper clouds. Such observations using radio interferometers reveal a banded structure of the planets down to 50-80 bar \cite{Tollefson2019, Molter2019b}. This deep structure is interpreted as a signature of the latitudinal distribution of condensables such as H$_2$S, and is attributed to a deep global circulation \cite{Fletcher2020b}. Here we describe the expected multi-cloud layer structure of Uranus and Neptune and show caveats in the classical description of the vertical cloud structure.
The volatiles in Uranus and Neptune are methane (CH$_4$), ammonia (NH$_3$), hydrogen sulfide (H$_2$S), water (H$_2$O) and a mixed cloud of ammonium hydrosulfide (NH$_4$SH). The formation of ammonium hydrosulfide requires one molecule of NH$_3$ and one of H$_2$S and desiccates the atmosphere of the less abundant volatile. NH$_3$ is highly depleted in both planets at pressures lower than 40 bar. This is known from observations at cm wavelengths, where both planets are too bright to allow for high abundances of ammonia \cite{Gulkis1978, dePater1991}. In addition, H$_2$S has been detected spectroscopically in both planets \cite{Irwin2018, Irwin2019}. This implies that the S/N (sulfur-to-nitrogen) ratio in their tropospheres must be higher than 1, at least down to the NH$_4$SH cloud base, which is close to the transition from the water ice to the liquid water level. Given sulfur's lower abundance in a solar-composition mixture, this means that sulfur should be enriched at least 5 times more than nitrogen compared to solar abundance. It also implies the absence of an NH$_3$ cloud. Potential atmospheric sinks for NH$_3$ and also H$_2$S complicate the interpretations that can be extracted from the absence of NH$_3$ in the upper troposphere. These sinks are a suspected liquid water ocean at pressures of 10 kilobar or higher and an ionic/superionic ocean at pressures of hundreds of kilobars \cite{Atreya2020}.
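The factor-of-five statement can be checked against the solar photospheric abundances of \cite{Asplund2009}; a short Python check (abundance values transcribed from that standard compilation):

```python
# Solar photospheric abundances on the log epsilon scale,
# log eps(X) = log10(n_X/n_H) + 12 (Asplund et al. 2009 compilation).
log_eps_N = 7.83
log_eps_S = 7.12
n_N = 10 ** (log_eps_N - 12)  # N/H number ratio, ~6.8e-5
n_S = 10 ** (log_eps_S - 12)  # S/H number ratio, ~1.3e-5
solar_N_over_S = n_N / n_S
# Roughly 5: a tropospheric S/N ratio above unity therefore requires
# sulfur to be enriched ~5 times more than nitrogen relative to solar.
```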
Most models of the vertical structure of Uranus and Neptune assume thermochemical equilibrium to describe the vertical location and extent of the cloud layers \cite{Weidenschilling+Lewis1973,Atreya2005}. Although the complexity of cloud systems on Earth, Jupiter and Saturn \cite{Pruppacher+Klett1997, Sugiyama+2014, Li+Chen2019} shows that the thermochemical-equilibrium assumption is not realistic, in the current absence of in situ or remote-sensing data below the methane cloud, these models are generally used to anticipate what to expect at deeper levels.
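A minimal sketch of how such an equilibrium cloud level is located: descend along an adiabat and find the deepest level where the partial pressure of the condensable reaches its saturation vapor pressure. For simplicity we use a dry adiabat and an integrated Clausius-Clapeyron law; every numerical constant below (reference temperature, mole fraction, latent heat) is an assumed illustrative value, not taken from the models cited above.

```python
import math

def T_adiabat(P_bar, T0=76.0, P0=1.0, kappa=2.0 / 7.0):
    """Dry-adiabat temperature (K); T0, P0, kappa are assumed values."""
    return T0 * (P_bar / P0) ** kappa

def p_sat_ch4(T):
    """Saturation vapor pressure of CH4 in bar (integrated Clausius-Clapeyron,
    approximate constants referenced to the CH4 triple point)."""
    L, M, R = 5.1e5, 16.04e-3, 8.314       # J/kg, kg/mol, J/(mol K)
    T_ref, p_ref = 90.7, 0.117             # K, bar (approximate triple point)
    return p_ref * math.exp(-(L * M / R) * (1.0 / T - 1.0 / T_ref))

x_ch4 = 0.03       # assumed deep CH4 mole fraction
P = 3.0            # start deep, where the parcel is subsaturated
while x_ch4 * P < p_sat_ch4(T_adiabat(P)):
    P -= 0.01      # move upward until saturation is first reached
print(f"CH4 cloud base near {P:.2f} bar")
```

With these assumed inputs the crossing falls between 1 and 2 bar, in line with the methane cloud-base pressures discussed later in the text.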
Figures \ref{fig_thermal} and \ref{fig_thermochemical} show the thermal, density and volatile vertical structure of Uranus and Neptune.
Thermal profiles obtained from the Voyager 2 radio occultation experiments \cite{Lindal1987, Lindal1992} ended at pressures of 2.3 bar in Uranus and 6.3 bar in Neptune (highlighted with stars in figure \ref{fig_thermal}a). These levels correspond to the regions where CH$_4$ and H$_2$S condense, and future atmospheric occultation measurements from an orbiter could probe the thermal structure of the atmosphere at different latitudes, sampling both cloud-condensing regions. Below those pressures we examine the planets' deep tropospheres, considering abundances of 20 and 80 times solar for all volatiles except ammonia, which is assumed to be less abundant than H$_2$S to be consistent with the spectroscopic detections of H$_2$S vapor in the tropospheres of both planets.
\begin{figure}[!h]
\centering\includegraphics[width=4.5in]{Figure_Thermal_structure.jpg}
\caption{Temperature-pressure profiles measured by radio occultation in Uranus (green) and Neptune (blue), extended into the deep atmosphere following wet adiabats with different abundances of condensables. Stars in panel (a) show the deepest pressure level sensed by the radio occultation experiments performed by Voyager 2 in Uranus (green) and Neptune (blue). (a and b): Dashed horizontal lines represent the cloud-base level for different condensates and abundances. Red profiles indicate possible thermal profiles in the case that moist convection is fully inhibited, producing sharp discontinuities in temperature. These discontinuities would be found at critical abundances of methane and water, but for simplicity only the one from a high water abundance is shown, based on \cite{Leconte2017}. (c): Density profiles as a function of altitude above the 1 bar level, except for the convectively inhibited profile. The kink in the 80 times solar abundance profile at 550 km is caused by the critical point of water at 647 K. Pink dotted lines in (b) and (c) show the extrapolation of Neptune's thermal profile following an adiabat without latent heat release but considering volatiles with a deep abundance of 80 times solar and relative humidities of 99\%.}
\label{fig_thermal}
\end{figure}
Other physical processes, including the inhibition of moist convection at high abundances of methane \cite{Guillot1995, Leconte2017, Friedson+Gonzales2017} and water, or layered convection \cite{Gierasch1987}, can affect the vertical structure, producing small and large discontinuities in the vertical profiles of temperature and mean molecular weight. Figure \ref{fig_thermal}b sketches possible effects caused by the vertical distribution of water and is based on \cite{Leconte2017}. It must be noted that these large thermal discontinuities can occur in combination with a discontinuity in the mean molecular weight of the atmosphere and in the volatile concentrations without necessarily resulting in a large discontinuity in density. Such discontinuities would be readily found by a descending atmospheric probe, but could differ from one location on the planet to another, requiring a characterization of the descent region from remote-sensing observations by an orbiter. Figure \ref{fig_thermal}c examines the atmospheric density profile for a range of possible deep abundances of condensables (dry atmosphere, 20 and 80 times solar). The density profile depends on the thermal structure, the local gravity and the local winds (here we considered the equator and Voyager 2 winds constant with depth). Because of water's potentially high abundance in the deep tropospheres of the Ice Giants, it probably dominates the overall mean molecular weight and controls the density profile.
\begin{figure}[!h]
\centering\includegraphics[width=4.5in]{Figure_Thermochemical_models.jpg}
\caption{Thermochemical models of the atmospheres of Uranus and Neptune for 20 times solar abundances of condensables (a) and 80 times solar abundances (b). Volatile mixing ratios (colored lines, lower axis) and maximum cloud density (black lines with shaded regions, upper axis). Horizontal dashed black lines represent the transition between the water-ice and the liquid-water-plus-ammonia cloud. The dark red line shows the vertical temperature profile of the atmosphere (bottom axis). Maximum cloud densities are calculated following \cite{Atreya2005} and are given in g/l (upper axis), assuming an updraft length scale equal to the atmospheric scale height, and following \cite{Wong2015} (colored dotted lines) in g/l per meter (top blue axis) without assuming an explicit updraft length scale. Panels (c) and (d) show parameters related to the stability of the atmosphere for 20 and 80 times solar abundances, respectively. Green lines show the vertical distribution of molecular weight (lower axis). Grey lines show the static stability of the atmosphere (upper axis). Dark red lines show Brunt-V{\"a}is{\"a}l{\"a} frequencies including the effects of the vertical gradient of molecular weight (bottom axis). Red lines represent the atmospheric density profile (top axis). Note the sharp discontinuity for the model with 80 times solar abundances at the critical point of water.}
\label{fig_thermochemical}
\end{figure}
Figure \ref{fig_thermochemical} shows the vertical distribution of volatiles and clouds compatible with the thermal structure described above. This figure does not take into account superadiabatic effects or thermal discontinuities associated with the inhibition of moist convection and should be considered highly idealized. The densities associated with the cloud content should be considered illustrative, as the real properties of a cloud depend on microphysical processes, including the density of condensation nuclei and precipitation, which are neglected in these simple models. The properties of real clouds also depend on the history of cloud formation and on the vertical motions through the clouds. Densities in Figure \ref{fig_thermal}c do not include contributions from the clouds.
In both Uranus and Neptune the base of the water cloud lies at pressures of several hundred to a few thousand bars, and its study would require the use of non-ideal gas laws. Most of the water cloud particles would be in the liquid phase, with the potential to dissolve ammonia and deplete the upper atmosphere of this volatile. For the models with the higher enhancement of volatiles (80 times solar), the water cloud base deepens until reaching temperatures close to the critical point of water at 647 K, at pressures of $\sim3,000$ bar and a depth of about 550 km below the 1 bar level. The right panels of this figure show different variables related to the stability of the atmosphere. The most relevant of these is the vertical gradient of the molecular weight of the atmosphere. As a result of the increase in methane abundance, the mean molecular weight in the upper troposphere varies from 2.3 g mol$^{-1}$ at the tropopause to 2.5-2.9 g mol$^{-1}$ at the methane cloud base for 20-80 times solar abundances, respectively. The mean molecular weight continues to increase in the lower atmosphere and can reach 2.8-4.2 g mol$^{-1}$ at the water condensation level for the range of abundances explored here. For 80 times solar abundances the water cloud base occurs at the water critical level, an effect expected in many exoplanets with Neptune-like masses \cite{Mousis2020}.
High values of the deep water abundance result in a combination of a high molecular weight (Figure \ref{fig_thermochemical}b-d) and lower deep temperatures for profiles where moist convection is operating (Figure \ref{fig_thermal}b). The high molecular weight implies a high density at depth, suggesting that gravity measurements could be used to uncover the deep abundance of volatiles.
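Mean-molecular-weight values of the kind quoted above follow from a simple mole-fraction average, $\mu = \sum_i x_i M_i$. The mole fractions below are assumed illustrative values for an H$_2$-He mixture enriched in CH$_4$, not the model abundances of the figure.

```python
# Molar masses in g/mol
M = {"H2": 2.016, "He": 4.003, "CH4": 16.04}

def mean_molecular_weight(x):
    """Mean molecular weight (g/mol) from a dict of mole fractions summing to 1."""
    assert abs(sum(x.values()) - 1.0) < 1e-9
    return sum(x[s] * M[s] for s in x)

# Dry H2/He mix near the tropopause (assumed ~15% He by mole):
print(mean_molecular_weight({"H2": 0.85, "He": 0.15}))                  # ~2.31 g/mol
# With a few percent CH4 near its cloud base (assumed mole fractions):
print(mean_molecular_weight({"H2": 0.82, "He": 0.145, "CH4": 0.035}))   # ~2.8 g/mol
```

Even a few percent of CH$_4$ by mole raises $\mu$ from about 2.3 to nearly 2.8 g mol$^{-1}$, illustrating why condensable-rich layers dominate the density gradient.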
\section{Moist Convection in the Ice Giants}
Because of the higher stratospheric methane abundance on Neptune than on Uranus \cite{Baines1994}, moist-convective models have traditionally been developed for Neptune, without additional work to address Uranus' convective systems. Different authors have used one-dimensional models of methane moist convection \cite{Lunine1989, Stoker1989}, or simple estimations of the maximum Convective Available Potential Energy (CAPE) \cite{Molter2019}, predicting powerful updrafts capable of reaching vertical velocities of $\sim 200-250$ ms$^{-1}$. These updrafts would be able to ascend to the tropopause and transport methane to the lower stratosphere. However, these storms need strong perturbations of the environment to be initiated (initial vertical velocities $> 40$ ms$^{-1}$) to counteract the static stability of the atmosphere, very favourable environments with high humidity to counteract the weight of methane gas in the saturated updraft, and very efficient rain-out. These pre-Voyager predictions of such powerful methane moist-convective storms have not been verified observationally.
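The quoted updraft speeds can be checked at the order-of-magnitude level with simple parcel theory, which converts all of the CAPE into kinetic energy, $w_{\max} = \sqrt{2\,\mathrm{CAPE}}$. The CAPE value below is an assumed illustration, not a value from the cited works.

```python
import math

cape = 2.5e4                     # J/kg, assumed CAPE for a deep CH4 storm
w_max = math.sqrt(2.0 * cape)    # parcel-theory upper bound on updraft speed
print(f"w_max ~ {w_max:.0f} m/s")   # falls in the ~200-250 m/s range quoted
```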
An additional complication is that the methane abundances of Uranus and Neptune imply an inhibition of moist convection at pressure levels just below those directly observed \cite{Guillot1995}. The moist-convection inhibition criterion also applies in the presence of double-diffusive convection \cite{Leconte2017, Friedson+Gonzales2017}, and it is yet unclear how heat is transported in such a situation.
To gain a simplified grasp of the inhibition of convection, we calculate a few quantities that identify the regions of the atmosphere where convective inhibition plays a role, and we extend this discussion to the deep clouds that determine the vertical structure of the atmosphere. We closely follow \cite{Guillot2019a} in this discussion and ignore the negative buoyancy caused by the weight of condensates, assuming precipitation is efficient. Several quantities are important for examining the potential to drive moist convection. One of them is the capacity to produce increases in temperature through the release of latent heat. The temperature increase due to latent heat from a condensable $i$ reaching its vapor saturation, $\Delta T_{Li}$, can be calculated as:
\begin{equation}
\Delta T_{Li}=\frac{q_{vi} L_{vi}}{C_p},
\end{equation}
where $q_{vi}$ is the maximum mass mixing ratio of the vapor of condensable $i$, $L_{vi}$ is its latent heat per unit mass
and $C_p$ is the atmospheric heat capacity per unit mass.
A second important quantity is the temperature increase required to compensate for the weight of the volatiles in a wet parcel, $\Delta T_{\mu i}$. This quantity can be calculated as:
\begin{equation}
\Delta T_{\mu i}=- \ln \left(1- \varpi q_{vi} \right) T,
\end{equation}
where $T$ is temperature and $\varpi=(M_{vi}-M_a)/M_{vi}$ is the reduced mean molar mass difference calculated from the molar masses of the vapor $i$, $M_{vi}$, and the dry air, $M_a$.
The third quantity is the moist convection inhibition factor $\xi_i$ \cite{Guillot1995, Leconte2017, Guillot2019a}, which predicts that moist convection for condensable $i$ is not possible when $\xi_i>1$. This inhibition factor is defined as:
\begin{equation}
\xi_i=\frac{\varpi M_{vi} L_{vi} }{ RT}q_{vi},
\end{equation}
where $R$ is the ideal gas constant.
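As a concrete illustration, the three diagnostics can be evaluated numerically for a methane cloud base in an H$_2$-He atmosphere. Every input below (mixing ratio, latent heat, heat capacity, molar masses, temperature) is an assumed illustrative value, not taken from the models of Table \ref{table_convection}.

```python
import math

R = 8.314  # J mol^-1 K^-1, ideal gas constant

def delta_T_latent(q_v, L_v, c_p):
    """Temperature increase from latent-heat release, dT_L = q_v L_v / C_p."""
    return q_v * L_v / c_p

def delta_T_mu(q_v, M_v, M_a, T):
    """Temperature increase needed to offset the vapor's weight,
    dT_mu = -ln(1 - varpi q_v) T, with varpi = (M_v - M_a)/M_v."""
    varpi = (M_v - M_a) / M_v
    return -math.log(1.0 - varpi * q_v) * T

def xi(q_v, M_v, M_a, L_v, T):
    """Moist-convection inhibition factor; convection is inhibited when xi > 1."""
    varpi = (M_v - M_a) / M_v
    return varpi * M_v * L_v * q_v / (R * T)

# Illustrative CH4 cloud-base conditions (all assumed):
q_v = 0.10         # saturation mass mixing ratio of CH4
L_v = 5.1e5        # J/kg, latent heat of CH4
c_p = 1.2e4        # J kg^-1 K^-1, H2-He heat capacity
M_v = 16.04e-3     # kg/mol, CH4
M_a = 2.3e-3       # kg/mol, dry H2-He air
T = 80.0           # K, near the CH4 condensation level

print(delta_T_latent(q_v, L_v, c_p))      # ~4 K of latent-heat warming
print(delta_T_mu(q_v, M_v, M_a, T))       # ~7 K needed to offset vapor weight
print(xi(q_v, M_v, M_a, L_v, T))          # ~1.05: marginally above the threshold
```

Note that $\Delta T_{\mu}$ exceeds $\Delta T_{L}$ and $\xi$ sits just above 1 for these inputs, consistent with the marginal inhibition of methane convection discussed below.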
For the models shown in Figure \ref{fig_thermochemical} these quantities are given in Table \ref{table_convection}. For ammonia we assume a slightly different atmosphere with equal ammonia and hydrogen sulfide abundances, resulting in an ammonia cloud with no hydrogen sulfide cloud. For those condensables with $\xi_i>1$ at the cloud base there is always a level above where $\xi_i=1$. This level constitutes the deepest level where moist convection can be powered for each condensable and is also given in Table \ref{table_convection} as $P_{c}$. It can be compared with the cloud base $P_b$, the deepest layer where saturation is reached. So, although moist convection is inhibited at the base of the different clouds, convection is possible in the upper atmosphere over a range of altitudes, possibly resulting in a complex thermal and compositional structure of the atmosphere. Adding the mass loading of precipitation would further complicate the problem, and numerical models will have to be designed to answer these questions. In addition, dry convection without latent heat release could be activated by differences in condensable abundances below their saturation values, with moist air sinking and dry air ascending, an effect that could also be observable in the methane condensation layer at low optical depth.
\begin{table}[]
\caption{Convective parameters for different condensables for models of Uranus and Neptune for 20 times solar abundances (1; left columns) and 80 times solar abundances (2; right columns). (*) Ammonia clouds are explored considering a 20 times solar abundance of all condensates (1) or 80 times solar abundance (2).}
\label{table_convection}
\begin{tabular}{l | c c c c c | c c c c c}
\hline
Vapor & $\Delta T_{L1}$ & $\Delta T_{\mu1}$ & $\xi_1$ & $P_{c1}$ & $P_{b1}$ & $\Delta T_{L2}$ & $\Delta T_ {\mu2}$ & $\xi_2$ & $P_{c2}$ & $P_{b2}$ \\
& (K) & (K) & & (bar) & (bar) & (K) & (K) & & (bar) & (bar)\\
\hline
CH$_4$ & 4.0 & 5.8 & 1.05 & 1.1 & 1.1 & 18.8 & 33.8 & 3.1 & 1.5 & 1.8 \\
H$_2$S & 0.3 & 0.6 & 0.08 & 7.0 & 7.0 & 1.0 & 25 & 0.2 & 9.4 & 9.4 \\
NH$_3$ (*) & 3.0 & 3.0 & 0.3 & 20 & 20 & 14.5 & 13.9 & 1.1 & 34 & 37 \\
H$_2$O & 34 & 73 & 1.5 & 334 & 490 & 112 & 454 & 2.7 & 415 & 3000 \\
\hline
\end{tabular}
\end{table}
\section{A complex weather layer}
Given the current unknowns in the abundances of volatiles in Uranus and Neptune, their impact on the vertical structure of the atmosphere, and the complexities of moist convection examined above, a large range of possible structures could form below the upper methane clouds of Uranus and Neptune. Which measurements would best address the open questions? We first turn to Jupiter and its better-known atmosphere, and later examine the measurements needed for Uranus and Neptune.
\subsection{Lessons from the Galileo Probe and Juno}
In 1995 the Galileo probe entered Jupiter's atmosphere, obtaining measurements of its composition and vertical structure down to 22 bar \cite{Young2003}. The measured ammonia and water abundances increased with depth but departed from saturated profiles. While a deep value of ammonia was found, water was still highly subsolar and still rising at the 22 bar level \cite{Wong2004}. Explaining these measurements required the probe to have descended in a dry location of the planet produced by local meteorology \cite{Showman1998}. The Galileo probe thus did not determine the deep abundance of oxygen, which, given the paramount importance of water for planet formation, also motivated the Juno mission.
Recent measurements of the latitudinal and vertical distribution of ammonia obtained by the Juno MWR instrument resulted in another puzzle. Ammonia is not uniformly distributed in Jupiter's atmosphere below the clouds, but is depleted down to around 30 bar at all latitudes except the equator \cite{Li2017}. A viable model to explain this global desiccation is that water-powered moist-convective storms trap ammonia in hailstones that precipitate and deplete the upper atmosphere of ammonia \cite{Guillot2020a, Guillot2020b}, making water-powered moist-convective storms an essential piece of jovian meteorology. This mechanism is in agreement with the depletion of ammonia and lower-cloud aerosols observed on Saturn after the development of the Great White Storm of 2010-2011 \cite{SanchezLavega2019, Jansen2016, Sromovsky2016} and could be a general drying mechanism that also affects the atmospheres of Uranus and Neptune, if moist convection from a lower cloud can reach the condensation region of a volatile in the upper atmosphere. The recent determination of the water abundance in Jupiter ($2.7^{+2.4}_{-1.7}$ times solar, representative of the equatorial region and possibly of the deep atmosphere) \cite{Li2020} seems to exclude inhibition of water moist convection in Jupiter, which would require a water abundance of 9.9 times solar \cite{Leconte2017}.
The atmospheres of Uranus and Neptune have the potential to be more complex than those of Jupiter and Saturn due to their larger condensable abundances, higher number of cloud layers, and the potential transition of liquid water to vapor at the water critical point for water abundances close to 80 times solar. The higher abundance of H$_2$S than of NH$_3$ in the upper troposphere requires ammonia-depletion mechanisms that could be similar to or different from those proposed for Jupiter, and could include the dissolution of ammonia in the liquid water cloud layer or in the interior below the hydrogen-helium atmosphere.
An exploration of the atmospheres of Uranus or Neptune by an atmospheric probe \cite{Mousis2018, Atreya2020} would be limited to the first few tens of bars. Radio interferometric measurements performed by ALMA show the structure of Neptune down to 80 bar at low spatial resolution, and we have seen that the deep abundance of water plays a critical role in the overall density profile, which could be investigated with a gravity experiment. Measurements from an in situ probe would require a global characterization by an orbiter \cite{Fletcher2020} sensitive to the deep structure of the atmosphere through a combination of gravity measurements and microwave radiation. Such an orbiter could also obtain detailed observations of the methane clouds and convective storms and could determine, through occultation experiments, thermal profiles at several locations of the planet down to depths of a few bars. In Jupiter, the combination of the results from the Galileo probe, the Juno mission and detailed observations of jovian meteorology at the upper ammonia clouds shows that such a combination of global and detailed local measurements is needed to understand its atmosphere. The alternative of sending a secondary probe or multiple probes to other locations on the planet would also greatly enhance the capability to study these questions \cite{Sayanagi2020}, but would likewise benefit from a characterization from orbit, including the study of changes in the upper atmosphere.
\subsection{Non-homogeneous weather layers}
The standard picture of Uranus and Neptune used for decades assumes well-defined cloud decks and a temperature profile that follows a moist adiabat accounting for the condensation of the different species. Meridional circulation may affect the abundances of condensables and the thermal properties, producing contrasts in the band pattern of the planet at a variety of depths \cite{Fletcher2020b, Tollefson2019, Molter2019b}. An alternative and more complex picture can be built by assuming non-homogeneous weather layers, where cold dry air could coexist with warm humid air at the same level and where convective storms could shape the characteristics of the atmosphere \cite{Guillot2019a}. The vertical structure of the atmosphere could combine the following characteristics. For each one we briefly suggest measurements that could help make progress.
\begin{itemize}
\item \textbf{Variable temperature profile from moist adiabatic to superadiabatic.} An orbiter capable of obtaining occultation measurements could investigate the thermal profile at different locations, at least down to the methane cloud base.
\item \textbf{No ubiquitous methane cloud deck but regular formation of methane clouds including possible convective storms.} These storms could combine strong updrafts powered by latent heat release and downdrafts powered by the weight of the condensates and evaporative cooling. Observations obtained from an orbiter could determine which of the active cloud systems are actually convective storms and the characteristics of such storms. The risk here is that we do not know the frequency and time scales of convective events in Uranus or Neptune.
\item \textbf{Deep H$_2$S stable clouds with little impact on the thermal structure.} Depending on the abundances of CH$_4$ and H$_2$S, thermal profiles from occultation measurements could address this question. Observations of the thermal-infrared emission from the lower atmosphere above the NH$_4$SH cloud level could also reveal the dynamics of this cloud layer.
\item \textbf{Complex dynamics at the NH$_4$SH cloud layer.} This cloud layer could be complex due to the combination of a large heat release (comparable to the latent heat release of water) and the high molecular weight of its components. This cloud region can be investigated through thermal emission in the millimeter and microwave regimes, and a Juno-like MWR instrument could investigate the physical processes in this cloud layer across the planet. Interferometers such as the VLA and ALMA are currently able to latitudinally map Uranus \cite{Molter2019b} and Neptune \cite{Tollefson2019} down to 50-80 bar.
\item \textbf{Strong vertical transport at the water condensation layers.} The dynamics here could be an extreme version of those produced in the methane condensation region, but these deep layers are hidden from remote-sensing observations and from measurements by an atmospheric probe. Gravity measurements by an orbiter could investigate the deep abundance of water, and a detailed model-observation comparison may be required to interpret the gravity data.
\item \textbf{Vertical motions driven by compositional differences rather than latent heat release.} In atmospheres with large horizontal compositional gradients, vertical motions could be driven by compositional differences rather than latent heat release. Updrafts of dry air in a wetter environment could be possible at any of the condensation regions. Detailed observations of these processes at the upper CH$_4$ layer could be obtained in the visible and near infrared.
\item \textbf{Ammonia-powered storms in exo-Neptunes.} Cold exo-Neptunes are likely very common planets \cite{Suzuki2016}, and they may be very diverse in terms of volatile abundances. In those planets with higher tropospheric abundances of NH$_3$ than of H$_2$S, ammonia moist-convective storms could have a strength similar to that of methane storms in Uranus and Neptune. The lack of observed NH$_3$ in the upper tropospheres of Uranus and Neptune implies that the best candidates to study ammonia-driven moist-convective storms are Jupiter and Saturn, where ammonia storms nevertheless seem to play a minor role compared with water-powered storms.
\end{itemize}
\vspace*{-5pt}
\section{Conclusions}
The mechanisms responsible for the structure, dynamics and energy balance of the atmospheres of Gas and Ice Giants are complex and poorly known. In particular, the role of moist convection and precipitation, its importance in determining the vertical structure of temperature, condensables and density, and the interplay of moist convection with the large-scale circulation are yet to be understood. Adding to this complexity, multiple cloud layers and sources of storms are present in these planets, most of them too deep to be probed directly. This is, for example, the case for water, thought to fuel most of Jupiter's and Saturn's storms, but whose abundance in both planets is poorly constrained because it condenses at relatively high pressures and optical depths. Owing to their larger distance from the Sun, Uranus and Neptune possess colder atmospheres with abundant methane cloud activity that could be interpreted as convective, but the existing data do not allow us to determine which of the possible storm candidates observed are actually moist-convective events. This methane condensation region lies at a relatively low optical depth and can be probed relatively easily. We thus believe that Uranus and Neptune are laboratories for understanding the underlying physics of atmospheric dynamics in hydrogen atmospheres. The hidden cloud layers below the upper methane clouds are of key importance for the global vertical structure of the planets. Understanding these cloud layers has applications for understanding and constraining the atmospheres, interior structure and evolution not only of Uranus and Neptune, but of Ice Giants and Gas Giants in general.
Uranus and Neptune remain the only planets in the Solar System that have not been visited by an orbiting spacecraft. A mission to at least one of them would bring an essential piece of the puzzle to complete the inventory of our Solar System and to understand the structure and atmospheric dynamics of Ice Giants and Gas Giants in the Universe. Such a mission should include an orbiter to fully map and characterize the planet's atmosphere and its time variability \cite{Fletcher2020}, and an atmospheric probe \cite{Mousis2018}, or multiprobe system \cite{Sayanagi2020}, to measure with high precision the thermal structure and volatile abundance profiles at a well-defined location, or set of locations. An instrumentation suite similar to the one carried by Juno would be the most efficient way to provide spatial context to the measurements obtained by a descending probe. The combination of this information would provide unique data for understanding planetary atmospheres.
\vskip6pt
\enlargethispage{20pt}
\textbf{Acknowledgements}. RH and ASL were supported by the Spanish MINECO project AYA2015-65041-P and PID2019-109467GB-I00 (MINECO/FEDER, UE) and Grupos Gobierno Vasco IT1366-19. We are thankful to the Royal Astronomical Society for hosting the Ice Giant Systems workshop in January 2020 and to Leigh N. Fletcher for organizing this event.
\section{Introduction}
Rational extended thermodynamics (RET) is a theory applicable to nonequilibrium phenomena out of local equilibrium. It is expressed by a hyperbolic system of field equations with local constitutive equations and is strictly related to the kinetic theory through the closure of the hierarchies of moment equations in both the classical and relativistic frameworks \cite{RET,RS}.
The first relativistic version of the modern RET was given by Liu, M\"uller and Ruggeri (LMR) \cite{LMR} considering the Boltzmann-Chernikov relativistic equation
\cite{BGK,Synge,KC}:
\begin{equation}\label{BoltzR}
p^\alpha \partial_\alpha f = Q,
\end{equation}
in which the distribution function $f$ depends on $(x^\alpha, p^\beta)$, where $x^\alpha$ are the space-time coordinates, $p^\alpha$ is the four-momentum, $\partial_{\alpha} = \partial/\partial x^\alpha$, $c$ denotes the speed of light, $m$ is the particle mass in the rest frame, $Q$ is the collision term and $\alpha, \beta =0,1,2,3$.
For monatomic gases, the relativistic moment equations associated with \eqref{BoltzR}, truncated at tensorial index $N+1$, are\footnote{When $n=0$, the tensor reduces to $A^\alpha$. Moreover, the production tensor on the right-hand side of \eqref{RelmomentMono} vanishes for $n=0,1$, because the first $5$ equations represent the conservation laws of the particle number and of the energy-momentum, respectively.}:
\begin{equation}\label{Relmomentseq}
\partial_\alpha A^{\alpha \alpha_1 \cdots \alpha_n } = I^{ \alpha_1 \cdots \alpha_n }
\quad \mbox{with} \quad n=0 \, , \,\cdots \, , \, N
\end{equation}
with
\begin{align}\label{RelmomentMono}
A^{\alpha \alpha_1 \cdots \alpha_n } = \frac{c}{m^{n-1}} \int_{\varmathbb{R}^{3}}
f \, p^\alpha p^{\alpha_1} \cdots p^{\alpha_n} \, \, d \boldsymbol{P}, \quad I^{\alpha_1 \cdots \alpha_n } = \frac{c}{m^{n-1}} \int_{\varmathbb{R}^{3}}
Q \, p^{\alpha_1} \cdots p^{\alpha_n} \, \, d \boldsymbol{P} ,
\end{align}
and
\begin{equation*}
d \boldsymbol{P} = \frac{dp^1 \, dp^2 \,
dp^3}{p^0} .
\end{equation*}
When $N=1$, we have the relativistic Euler system
\begin{equation}
\partial_\alpha A^{\alpha } = 0, \quad \partial_\alpha A^{\alpha \beta} =0, \label{Euleros}
\end{equation}
where, also in the following, $A^\alpha \equiv V^\alpha$ and $A^{\alpha\beta} \equiv T^{\alpha\beta}$ have the physical meaning, respectively, of the particle number vector and the energy-momentum tensor.
Instead, when $N=2$, we have the LMR theory of a relativistic gas with $14$ fields:
\begin{equation}
\partial_\alpha A^{\alpha } = 0, \quad \partial_\alpha A^{\alpha \beta} =0, \quad \partial_\alpha A^{\alpha \beta \gamma} = I^{ \beta \gamma }, \qquad \Big(\gamma=0,1,2,3; \,\, I^\alpha_{\,\,\alpha} =0 \Big). \label{Annals}
\end{equation}
Recently, Pennisi and Ruggeri first constructed a relativistic ET theory for polyatomic gases with \eqref{Relmomentseq} in the case of $N=2$ \cite{Annals} (see also \cite{Car1,Car2}) whose moments are given by
\begin{align}
\begin{split}
& A^\alpha = m c \int_{\R^3} \int_0^\infty f p^\alpha \phi(\mathcal{I}) \, d \mathcal{I} \, d \boldsymbol{P} \, , \\
& A^{\alpha \beta} = \frac{1}{mc} \int_{\R^3} \int_0^\infty f p^\alpha p^\beta (mc^2 + \mathcal{I}) \, \phi(\mathcal{I}) \, d \mathcal{I} \, d \boldsymbol{P} \, , \\
& A^{\alpha \beta \gamma } = \frac{1}{m^2 c} \int_{\varmathbb{R}^{3}}
\int_0^{+\infty} f \, p^{\alpha} p^\beta p^{\gamma} \, \Big( mc^2 + 2\mathcal{I} \Big) \,
\phi(\mathcal{I}) \, d \mathcal{I} \, d \boldsymbol{P} \, ,
\end{split}
\label{PS3}
\end{align}
where the distribution function $f(x^\alpha, p^\beta,\mathcal{I})$ depends on the extra energy variable $\mathcal{I}$, similar to the classical case (see \cite{RS} and references therein), and $\phi(\mathcal{I})$ is the state density of the internal modes.
In \cite{Annals}, by taking the traceless part of the third order tensor, i.e., $A^{\alpha \langle \beta \gamma \rangle}$, as a field instead of $A^{\alpha\beta\gamma}$ in \eqref{Annals}$_3$, the relativistic theory with 14 fields (ET$_{14}$) was proposed. It was also shown that its classical limit coincides with the classical ET$_{14}$ based on the binary hierarchy \cite{Arima-2011,Pavic-2013,RS}. The beauty of the relativistic counterpart is that there exists a single hierarchy of moments, but, as was noticed by the authors, to obtain the classical theory of ET$_{14}$, it was necessary to put the factor 2 in front of $\mathcal{I}$ in the last equation of \eqref{PS3}!
This became even more evident in the theory with an arbitrary number of moments, where Pennisi and Ruggeri generalized \eqref{PS3} by considering the following moments \cite{PRS}:
\begin{align} \label{relRETold}
\begin{split}
& A^{\alpha \alpha_1 \cdots \alpha_n } = \frac{1}{m^n c} \int_{\varmathbb{R}^{3}}
\int_0^{+\infty} f \, p^\alpha p^{\alpha_1} \cdots p^{\alpha_n} \, \Big(mc^2 + n \mathcal{I} \Big)\,
\phi(\mathcal{I}) \, d \mathcal{I} \, d \boldsymbol{P} \, , \\
& I^{\alpha_1 \cdots \alpha_n } = \frac{1}{m^{n}c} \int_{\varmathbb{R}^{3}}
\int_0^{+\infty} Q \, p^{\alpha_1} \cdots p^{\alpha_n} \, \Big( mc^2 + n\mathcal{I} \Big)\,
\phi(\mathcal{I}) \, d \mathcal{I} \, d \boldsymbol{P}. \\
\end{split}
\end{align}
In this case, we need a factor $n \mathcal{I}$ in \eqref{relRETold} to obtain, in the classical limit, the binary hierarchy.
To avoid this unphysical situation, Pennisi first noticed that the factors $(mc^2 +n \mathcal{I})$ appearing in \eqref{relRETold} are the first two terms of the Newton binomial expansion of $(mc^2 +\mathcal{I})^n/ (mc^2)^{n-1}$.
Therefore, he proposed in \cite{Pennisi_2021} to modify, in the relativistic case, the definition of the moments by replacing:
\begin{equation*}
(mc^2)^{n-1}\Big( mc^2 + n \mathcal{I} \Big) \qquad \text{with} \qquad \Big( mc^2 + \mathcal{I} \Big)^n,
\end{equation*}
i.e., instead of \eqref{relRETold}, the following moments are proposed:
\begin{align} \label{relRET}
\begin{split}
& A^{\alpha \alpha_1 \cdots \alpha_n } = \Big(\frac{1}{mc}\Big)^{2n-1} \int_{\varmathbb{R}^{3}}
\int_0^{+\infty} f \, p^\alpha p^{\alpha_1} \cdots p^{\alpha_n} \, \Big( mc^2 + \mathcal{I} \Big)^n\,
\phi(\mathcal{I}) \, d \mathcal{I} \, d \boldsymbol{P} \, , \\
& I^{\alpha_1 \cdots \alpha_n } = \Big(\frac{1}{mc}\Big)^{2n-1} \int_{\varmathbb{R}^{3}}
\int_0^{+\infty} Q \, p^{\alpha_1} \cdots p^{\alpha_n} \, \Big( mc^2 + \mathcal{I} \Big)^n\,
\phi(\mathcal{I}) \, d \mathcal{I} \, d \boldsymbol{P}. \\
\end{split}
\end{align}
Such definitions are more physical because now the full energy (the sum of the rest energy and the energy of the internal modes), $mc^2 + \mathcal{I}$, appears in the moments.
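This binomial observation can be checked numerically. The following is a minimal Python sketch (illustrative only: the value of $mc^2$ and the loop ranges are arbitrary stand-ins, not part of the theory), confirming that the old factor agrees with $(mc^2+\mathcal{I})^n$ through first order in $\mathcal{I}$:

```python
import math

# Compare (m c^2)^(n-1) * (m c^2 + n*I) with (m c^2 + I)^n for small I.
# Their difference is O(I^2), confirming that the old factor collects the
# first two terms of the binomial expansion. Units here are arbitrary.
mc2 = 1.0
for n in range(1, 6):
    for I in (1e-4, 1e-5):
        exact = (mc2 + I) ** n
        truncated = mc2 ** (n - 1) * (mc2 + n * I)
        # binomial remainder is bounded by ~ C(n,2) * I^2 for small I
        assert abs(exact - truncated) <= 2.0 * math.comb(n, 2) * I ** 2 + 1e-15
```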
The aim of this paper is to consider the system \eqref{Annals} with the moments given by \eqref{relRET}. In this way, for the case $N=2$, by also retaining the trace part of $A^{\alpha\beta\gamma}$ as a field, we obtain $15$ field equations, and to close the system we adopt the molecular procedure of RET based on the Maximum Entropy Principle.
The paper is organized as follows. In Section~\ref{sec:momeq}, the generic moments are evaluated in an equilibrium state in the general case. In Section~\ref{sec:15}, the ET theory with $15$ fields (ET$_{15}$) is proposed, and the constitutive quantities are closed near the equilibrium state. By adopting a variant of the BGK model appropriate for polyatomic gases proposed by Pennisi and Ruggeri \cite{Car1}, the production tensor is derived. In Section~\ref{entro}, the four-dimensional entropy flux and the entropy production are deduced up to second order with respect to the nonequilibrium variables. Then, we establish the convexity of the entropy density and the positivity of the entropy production, which ensure the well-posedness of the Cauchy problem and, as a result, the entropy principle. In Section~\ref{diatomico}, we discuss the case of diatomic gases, for which all coefficients are expressed in closed form in terms of the ratio of two Bessel functions, as in the case of monatomic gases. In Section~\ref{sec:ultra}, we study the ultra-relativistic limit. In Section~\ref{sec:sub}, the principal subsystems of ET$_{15}$ are studied. First, we obtain ET$_{14}$, in which all field variables have physical meaning. Then, at the same level as ET$_{14}$ in the sense of principal subsystems, there also exists a subsystem with $6$ fields in which the dissipation is due only to the dynamical pressure. This system is important when the bulk viscosity is dominant compared with the shear viscosity and the heat conductivity, and may be particularly interesting in cosmological problems. The simplest subsystem is the non-dissipative Euler case with $5$ fields. In Section~\ref{sec:max}, we apply the Maxwellian iteration and, as a result, determine the phenomenological coefficients of the Eckart theory, that is, the heat conductivity, the shear viscosity and the bulk viscosity, for the present model.
Finally, in Section~\ref{sec:classical}, we show that the classical limit of the present model coincides with the classical ET$_{15}$ studied in \cite{x}.
\section{Distribution function and moments at equilibrium} \label{sec:momeq}
The equilibrium distribution function $f_E$ of a polyatomic gas, which generalizes the J\"uttner distribution of a monatomic gas,
was evaluated in \cite{Annals} with the variational procedure of the Maximum Entropy Principle (MEP) \cite{Janes,ET,RET}. Considering the first $5$ balance equations of \eqref{Annals} in an equilibrium state:
\begin{equation*}
A^\alpha_E \equiv V_E^\alpha = m \, n U^\alpha, \quad \quad
A^{\alpha\beta}_E \equiv T_E^{\alpha \beta} = p h^{\alpha \beta} + \frac{e}{c^2} \, U^{\alpha } U^{\beta}.
\end{equation*}
MEP requires that the appropriate distribution function $f\equiv f(x^\alpha, p^\alpha,
\mathcal{I})$ is the one which maximizes the entropy density
\begin{equation*}
\rho S = h_E= h^\alpha_E U_\alpha = - k_B \, c \, U_\alpha \int_{\varmathbb{R}^3}
\int_0^{+\infty} f
\ln f p^\alpha \phi(\mathcal{I}) \, d \mathcal{I} \, d \boldsymbol{P} \, ,
\end{equation*}
under the constraints that the temporal parts $V^\alpha U_\alpha$ and $T^{\alpha\beta}U_\beta$ are prescribed.
Here, $k_B, n, \rho(=nm), U^\alpha, h^{\alpha\beta},p,e,S$ are, respectively, the Boltzmann constant, the particle number density, the mass density, the four-velocity ($U^\alpha U_\alpha= c^2$), the projector tensor $(h^{\alpha\beta}= U^\alpha U^\beta/c^2 - g^{\alpha\beta})$, the pressure, the energy and the entropy density, and $g^{\alpha \beta}= \text{diag}(1 \, , \, -1 \, , \, -1 \,, \, -1)$ is the metric tensor.
The equilibrium distribution function for a rarefied polyatomic gas that maximizes the entropy has the following expression \cite{Annals}:
\begin{equation}\label{5.2n}
{f_E= \frac{n }{\bar{A}(\gamma)} \frac{1}{4 \pi m^3
c^3} e^{- \frac{1}{k_B T} \left[ \Big( 1 + \frac{\mathcal{I}}{m c^2} \Big) U_\beta
p^\beta \right]}}, \qquad \bar{A}(\gamma) = \int_0^{+\infty} J_{2,1}^*\, \phi(\mathcal{I}) \, d \, \mathcal{I}
\end{equation}
with $T$ being the absolute temperature,
\begin{equation*}
\begin{split}
& J_{m,n}^* = J_{m,n} (\gamma^*), \qquad \gamma^* = \gamma \, \Big( 1
+ \frac{\mathcal{I}}{m \, c^2} \Big), \qquad \gamma = \frac{m \, c^2}{k_B T},
\end{split}
\end{equation*}
and
\begin{equation*}
J_{m,n}(\gamma)= \int_0^{+\infty} e^{-\gamma \cosh s} \sinh^m s \cosh^n s \, d \, s \, ,
\end{equation*}
which satisfy the following recurrence relations \cite{LMR,Annals}:
\begin{equation}\label{R}
\begin{split}
J_{m+2,n}(\gamma)= J_{m,n+2}(\gamma) - J_{m,n} (\gamma) \, ,
\end{split}
\end{equation}
\begin{equation}\label{Rbis}
\begin{split}
-\gamma J_{m+2,n}(\gamma)= n J_{m,n-1}(\gamma) - (n+m+1) J_{m,n+1} (\gamma) \, .
\end{split}
\end{equation}
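The recurrence relations \eqref{R} and \eqref{Rbis} can be verified by direct quadrature. Below is a minimal Python sketch (the function name `J`, the choice $\gamma = 2$ and the truncation parameters are illustrative assumptions, not part of the theory):

```python
import math

def J(m, n, gamma, s_max=25.0, steps=50_000):
    """J_{m,n}(gamma) = int_0^inf exp(-gamma*cosh s) * sinh^m s * cosh^n s ds,
    by the trapezoidal rule; the exponentially small tail is truncated."""
    h = s_max / steps
    total = 0.0
    for i in range(steps + 1):
        s = i * h
        c = math.cosh(s)
        if gamma * c > 700.0:   # exp underflow region; the remainder is negligible
            break
        w = 0.5 if i in (0, steps) else 1.0
        total += w * math.exp(-gamma * c) * math.sinh(s) ** m * c ** n
    return total * h

gamma = 2.0
# (R) with m=2, n=1:   J_{4,1} = J_{2,3} - J_{2,1}   (from sinh^2 = cosh^2 - 1)
lhs, rhs = J(4, 1, gamma), J(2, 3, gamma) - J(2, 1, gamma)
assert abs(lhs - rhs) < 1e-8 * abs(lhs)
# (Rbis) with m=2, n=1:  -gamma*J_{4,1} = 1*J_{2,0} - (1+2+1)*J_{2,2}
lhs, rhs = -gamma * J(4, 1, gamma), J(2, 0, gamma) - 4.0 * J(2, 2, gamma)
assert abs(lhs - rhs) < 1e-7 * abs(lhs)
```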
The pressure and the energy
compatible with the equilibrium distribution function \eqref{5.2n} are \cite{Annals}:
\begin{align}\label{10}
\begin{split}
& p =\frac{ k_B}{m} \, \rho T \, , \qquad \qquad e=
\rho c^2 \omega(\gamma), \\
& \text{with } \quad \omega(\gamma)= \frac{\int_0^{+\infty} J_{2,2}^* \, \Big( 1 + \frac{\mathcal{I}}{m c^2} \Big) \, \phi(\mathcal{I}) \, d \, \mathcal{I}}{\int_0^{+\infty} J_{2,1}^* \, \phi(\mathcal{I}) \, d \, \mathcal{I}} .
\end{split}
\end{align}
Taking into account that
$
e = \rho c^2 + \rho \varepsilon,
$
where $\varepsilon$ is the internal energy, we deduce from \eqref{10}:
\begin{equation}\label{interenergy2}
\varepsilon = c^2(\omega -1).
\end{equation}
Therefore the internal energy is a function only of $\gamma$ or, equivalently, of $T$, as in the classical case of rarefied gases.
The moments in equilibrium state $A^{\alpha \alpha_1 \cdots \alpha_j }_E$ for $j \geq 2$ were deduced in \cite{Pennisi_2021}:
\begin{align}\label{11}
A^{\alpha_1 \cdots \alpha_{j+1 } }_E= \sum_{k=0}^{\left[ \frac{j+1}{2} \right]} \rho c^{2k} \theta_{k,j} \, h^{( \alpha_1 \alpha_2 } \cdots h^{ \alpha_{2k-1} \alpha_{2k} } U^{\alpha_{2k+1} } \cdots U^{\alpha_{j+1} ) } \, ,
\end{align}
where
\begin{align}\label{11b}
\theta_{k,j} = \frac{1}{2k+1} \begin{pmatrix}
j+1 \\ 2k
\end{pmatrix} \frac{\int_0^{+\infty} J_{2k+2,j+1-2k}^* \, \Big( 1 + \frac{\mathcal{I}}{m c^2} \Big)^j \, \phi(\mathcal{I}) \, d \, \mathcal{I}}{\int_0^{+\infty} J_{2,1}^* \, \phi(\mathcal{I}) \, d \, \mathcal{I}} \,
\end{align}
are dimensionless functions depending only on $\gamma$.
Taking into account \eqref{10} and \eqref{11b}, we obtain $\theta_{0,0} =1$ and $\theta_{0,1} =\omega(\gamma)$;
moreover, using the recurrence formulas \eqref{R} and \eqref{Rbis}, the following recurrence relations were proved in \cite{Pennisi_2021}:
\begin{align}\label{12b}
\begin{split}
& \theta_{0,0}=1 \, ,\\
& \theta_{0,j+1} = \omega(\gamma) \, \theta_{0,j} \, - \, \theta^{\, \prime}_{0,j} \hspace{4.6 cm} \mbox{with } \quad ^\prime = \frac{d}{d \gamma}, \\
& \\
& \theta_{h,j+1} = \frac{j+2}{\gamma} \Big( \theta_{h,j} + \frac{j+3-2h}{2h} \theta_{h-1,j} \Big) \hspace{1.9 cm} \mbox{for } h=1, \, \cdots , \,\left[ \frac{j+1}{2} \right] \, , \\
& \\
& \theta_{\frac{j+2}{2},j+1} = \frac{1}{\gamma} \theta_{\frac{j}{2}, \, j} \hspace{6.3 cm} \mbox{for $j$ even} \, .
\end{split}
\end{align}
It is interesting to see that all the scalar coefficients can be expressed in terms of the function $\omega(\gamma)$ and of its derivatives with respect to $\gamma$ (or, equivalently, with respect to the temperature $T$), and $\omega$ is strictly related to the internal energy $\varepsilon$ by \eqref{interenergy2}. A similar situation was studied in \cite{x} for the non-relativistic case.\\
The values of $\theta_{h,j}$ can be determined, by using the recurrence formula \eqref{12b}, according to the following diagram:
\begin{align*}
\begin{matrix} \theta_{0,0} & \Rightarrow & \theta_{0,1} & \Rightarrow & \theta_{0,2} & \Rightarrow & \theta_{0,3} & \cdots \\
&\searrow &&& \\
~ & & \theta_{1,1} & \rightarrow & \theta_{1,2} & \rightarrow & \theta_{1,3} & \cdots & \\
& & & & & \searrow \\
~ & & ~ & & ~ & & \theta_{2,3} & \rightarrow & \theta_{2,4} &\cdots \\
&
\end{matrix}
\end{align*}
We see that all the $\theta_{0,j}$ can be obtained from $\theta_{0,0}$ by using eq. $\eqref{12b}_2$, and the other $\theta_{h,j}$ with $j\geq h$ can be obtained from eqs. $\eqref{12b}_{3,4}$. In particular, we can evaluate the following ones, which are needed for the $15$-field model in the subsequent sections:
\begin{align}\label{thetas}
\begin{split}
& \theta_{0,0}=1, \qquad \theta_{0,1}=\omega, \qquad \theta_{0,2}=\omega^2-\omega', \\
& \theta_{0,3}=\omega^3+\omega''-3\omega\omega', \qquad
\theta_{0,4}=\omega^4-\omega'''+4\omega\omega''+3\omega'^2-6\omega^2\omega', \\
& \theta_{1,1}=\frac{1}{\gamma}, \qquad
\theta_{1,2}=\frac{3}{\gamma^2}\,(\gamma\omega+1), \qquad
\theta_{1,3}=\frac{6}{\gamma^3}\left[\gamma^2(\omega^2-\omega')+2(\gamma\omega+1)\right], \\
& \theta_{1,4}=\frac{10}{\gamma^4}
\left\{3\gamma\left[\omega(\gamma\omega+2)-\gamma\omega'\right]+6
+\gamma^3\left(\omega^3+\omega''-3\omega\omega'\right)\right\}, \\
& \theta_{2,3}=\frac{3}{\gamma^3}\,(\gamma\omega+1), \qquad
\theta_{2,4}=\frac{15}{\gamma^4}\left[\gamma^2(\omega^2-\omega')+3(\gamma\omega+1)\right].
\end{split}
\end{align}
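One can check that these closed forms are consistent with the recurrences \eqref{12b}. The following Python sketch does so for a few entries, using an arbitrary smooth test function in place of the physical $\omega(\gamma)$ (an illustrative assumption; the identities are algebraic and hold for any $\omega$):

```python
# Check a few entries of (thetas) against the recurrences (12b), using the
# arbitrary test function omega(gamma) = 1/gamma + gamma (NOT the physical one).
g = 2.0
w, w1, w2 = 1.0 / g + g, -1.0 / g ** 2 + 1.0, 2.0 / g ** 3  # omega, omega', omega''

th02 = w ** 2 - w1
th03 = w ** 3 + w2 - 3.0 * w * w1
th12 = 3.0 / g ** 2 * (g * w + 1.0)
th13 = 6.0 / g ** 3 * (g ** 2 * (w ** 2 - w1) + 2.0 * (g * w + 1.0))
th23 = 3.0 / g ** 3 * (g * w + 1.0)

# (12b)_2 with j=2: theta_{0,3} = omega*theta_{0,2} - (theta_{0,2})'
assert abs(th03 - (w * th02 - (2.0 * w * w1 - w2))) < 1e-12
# (12b)_3 with h=1, j=2: theta_{1,3} = (4/gamma)*(theta_{1,2} + (3/2)*theta_{0,2})
assert abs(th13 - 4.0 / g * (th12 + 1.5 * th02)) < 1e-12
# (12b)_4 with j=2 (even): theta_{2,3} = theta_{1,2}/gamma
assert abs(th23 - th12 / g) < 1e-12
```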
\section{The closure for the 15 moments model}\label{sec:15}
In this section, we consider the simplest physically meaningful case, that is, the system \eqref{Relmomentseq} for $n=0,1,2$ with the moments given by \eqref{relRET}:
\begin{equation}\label{Annalis}
\partial_\alpha V^{\alpha } = 0, \quad \partial_\alpha T^{\alpha \beta} =0, \quad \partial_\alpha A^{\alpha \beta \gamma} = I^{ \beta \gamma }, \qquad \left(\beta,\gamma=0,1,2,3\right).
\end{equation}
with
\begin{align} \label{relRETpol}
\begin{split}
& V^{\alpha } = {mc} \int_{\varmathbb{R}^{3}}
\int_0^{+\infty} f \, p^\alpha \,
\phi(\mathcal{I}) \, d \mathcal{I} \, d \boldsymbol{P} \, , \qquad
T^{\alpha \beta} = c\int_{\varmathbb{R}^{3}}
\int_0^{+\infty} f \, p^\alpha p^{\beta} \, \Big( 1 + \frac{\mathcal{I} }{m c^2}\Big)\,
\phi(\mathcal{I}) \, d \mathcal{I} \, d \boldsymbol{P} \, , \\
& A^{\alpha \beta\gamma } = \frac{c}{m} \int_{\varmathbb{R}^{3}}
\int_0^{+\infty} f \, p^\alpha p^{\beta} p^{\gamma} \, \Big( 1 + \frac{\mathcal{I}}{mc^2} \Big)^2\,
\phi(\mathcal{I}) \, d \mathcal{I} \, d \boldsymbol{P} \, , \\
& I^{ \beta \gamma } = \frac{c}{m}\int_{\varmathbb{R}^{3}}
\int_0^{+\infty} Q \, p^{\beta} p^{\gamma}\, \left(1 + \frac{\mathcal{I}}{mc^2} \right)^2\,
\phi(\mathcal{I}) \, d \mathcal{I} \, d \boldsymbol{P}. \\
\end{split}
\end{align}
To close the system \eqref{Annalis}, we adopt the MEP, which requires finding the distribution function that maximizes the nonequilibrium entropy density:
\begin{equation}\label{entropy}
h= h^\alpha U_\alpha = - k_B \, c \, U_\alpha \int_{\varmathbb{R}^3}
\int_0^{+\infty} f
\ln f p^\alpha \phi(\mathcal{I}) \, d\mathcal{I} \, d\boldsymbol{P} , \quad \rightarrow \quad \max
\end{equation}
under the constraints that the temporal parts $V^\alpha U_\alpha$, $T^{\alpha\beta}U_\alpha$ and $A^{\alpha\beta\gamma}U_\alpha$ are prescribed. Proceeding in the usual way, as indicated in previous papers of RET (see \cite{RS,Annals}), we obtain:
\begin{equation}\label{f15}
f_{15}= e^{ -1 - \frac{\chi}{k_B}} \, , \quad \mbox{with} \quad
\chi = m \, \lambda \, + \, \lambda_{\mu} \, p^{\mu} \, \left( 1 + \, \frac{\mathcal{I}}{m \, c^2} \right) \, + \, \frac{1}{m} \, \lambda_{\mu \nu} \, p^{\mu} p^{\nu} \, \left( 1 + \, \frac{\mathcal{I}}{m \, c^2} \right)^2 \, ,
\end{equation}
where $\lambda, \lambda_{\mu}, \lambda_{\mu \nu}$ are the Lagrange multipliers.
Hereafter, recalling the following decomposition of the particle number vector and the energy-momentum tensor
\begin{align}\label{19}
V^\alpha =\rho U^\alpha \, , \quad T^{\alpha \beta} = \frac{e}{c^2} \, U^{\alpha } U^\beta + \, \left(p \, + \, \Pi\right)
h^{\alpha \beta} + \frac{1}{c^2} ( U^\alpha q^\beta +U^\beta q^\alpha)+ t^{<\alpha \beta>_3} \, ,
\end{align}
we can choose as fields, as usual, the $14$ physical variables $\rho$, $T$, $U^\alpha$, $\Pi$, $q^\alpha$, $t^{<\alpha \beta>_3}$, where $\Pi$ is the dynamical pressure, $q^\alpha= -h^\alpha_\mu
U_\nu T^{\mu \nu}$ is the heat flux, and $t^{<\alpha \beta>_3} = T^{\mu\nu} \left(h^\alpha_\mu h^\beta_\nu - \frac{1}{3}h^{\alpha\beta}h_{\mu\nu}\right)$ is the deviatoric shear viscous stress tensor. We also recall the constraints:
\begin{equation*}
U^\alpha U_\alpha = c^2, \quad q^\alpha U_\alpha = 0, \quad t^{<\alpha \beta>_3} U_\alpha = 0, \quad t^{<\alpha}_{\,\,\,\,\,\ \alpha >_3} =0,
\end{equation*}
and we choose as the $15$th variable:
\begin{align}\label{deltina}
\Delta = \frac{4}{c^2} \, U_\alpha U_\beta U_\gamma \, \left( A^{\alpha \beta \gamma} \, - \, A^{\alpha \beta \gamma}_E\right).
\end{align}
The pressure $p$ and the energy $e$ as functions of $(\rho,T)$ are given in \eqref{10}.
\smallskip
{\bf Remark 1.}
For any symmetric tensor $M^{\alpha \beta}$, we can define its traceless part $M^{< \alpha \beta >}$ and its 3-dimensional traceless part $M^{< \alpha \beta >_3}$, that is, the traceless part of its projection onto the 3-dimensional space orthogonal to $U^\alpha$, as follows:
\begin{align*}
\begin{split}
& M^{< \alpha \beta >} = \left( g_\mu^{\alpha} \, g_\nu^{ \beta} - \, \frac{1}{4} \, g^{\alpha \beta} g_{\mu \nu} \right) \, M^{\mu \nu} = M^{\alpha \beta} - \, \frac{1}{4} \, g_{\mu \nu} \, M^{\mu \nu} g^{\alpha \beta} \, , \\
& M^{< \alpha \beta >_3} = \left( h_\mu^{\alpha} \, h_\nu^{ \beta} - \, \frac{1}{3} \, h^{\alpha \beta} h_{\mu \nu} \right) \, M^{\mu \nu} \, ,
\end{split}
\end{align*}
which are in general different; they coincide in the case in which
$M^{\mu \nu} U_\mu=0$ and $M^{\mu \nu} g_{\mu \nu}=0 $. In fact, under these conditions,
\begin{align*}
M^{< \alpha \beta >}= M^{< \alpha \beta >_3} \, .
\end{align*}
Moreover, in the following, round parentheses around indices denote the symmetric part.
\subsection{The linear deviation from equilibrium}
According to M\"uller and Ruggeri \cite{RET}, equilibrium is thermodynamically defined as the state in which the entropy production
vanishes and hence attains its minimum
value. Using this definition, it was proved \cite{BoillatRuggeri-1998,BoillatRuggeriARMA} that the Lagrange multipliers associated with the balance laws of the nonequilibrium variables vanish, and only
the five Lagrange multipliers corresponding to the equilibrium conservation laws (the Euler system) remain. In the present case, we have:
\begin{align}\label{mainE}
\begin{split}
& {\lambda}_E = - \frac{1}{T}\left(g+ c^2\right), \quad {\lambda}_{\mu_E}= \frac{U_\mu}{T}, \quad \lambda_{\mu \nu_E} =0,
\end{split}
\end{align}
where $g =\varepsilon +p/\rho -T S $ is the equilibrium chemical potential. We remark that ${\lambda}_E , {\lambda}_{\mu_E}$ are the components of the \emph{main field} that symmetrize the relativistic Euler system as was first proved by Ruggeri and Strumia (see \cite{RugStr}).
In the molecular ET approach, we consider, as usual, the processes near equilibrium. For this reason, we expand \eqref{f15} around an equilibrium state as follows:
\begin{align}\label{fgenE}
\begin{split}
&f_{15} \simeq f_E\Big(1-\frac{1}{k_B}\tilde{\chi}\Big), \\
&\tilde{\chi} = m \, (\lambda - \lambda_E) \, + \, (\lambda_{\mu}-\lambda_{\mu_E}) \, p^{\mu} \, \Big( 1 + \, \frac{\mathcal{I}}{m \, c^2} \Big) \, + \, \frac{1}{m} \, \lambda_{\mu \nu} \, p^{\mu} p^{\nu} \, \Big( 1 + \, \frac{\mathcal{I}}{m \, c^2} \Big)^2 \, .
\end{split}
\end{align}
Inserting the distribution function \eqref{fgenE} into the moments \eqref{relRETpol}, we obtain the following system:
\begin{align}\label{18}
\begin{split}
& 0= V^\alpha - V^\alpha_E = - \frac{m}{k_B}\left[ V^\alpha_E (\lambda - \lambda_E) + T^{\alpha \mu}_E \Big(
\lambda_\mu - \lambda_{\mu_E} \Big) + A^{\alpha \mu \nu}_E \lambda_{\mu \nu} \right] \, , \\
& t^{<\alpha \beta>_3} +\Pi
h^{\alpha \beta} + \frac{2}{c^2} \, U^{(\alpha } q^{\beta)} = - \frac{m}{k_B} \left[ T^{\alpha \beta}_E (\lambda - \lambda_E) + A^{\alpha \beta \mu}_{E} \Big(
\lambda_\mu - \lambda_{\mu_E} \Big) + A^{\alpha \beta \mu \nu}_{E} \lambda_{\mu \nu} \right] \, , \\
& A^{\alpha \beta \gamma} - A^{\alpha \beta \gamma}_E = - \frac{m}{k_B} \left[ A^{\alpha \beta \gamma}_E (\lambda - \lambda_E) + A^{\alpha \beta \gamma \mu}_{E}
\Big( \lambda_\mu - \lambda_{\mu_E} \Big) + A^{\alpha \beta \gamma \mu \nu}_{E}
\lambda_{\mu \nu} \right] \, ,
\end{split}
\end{align}
where the equilibrium values of the tensors $A^{\alpha \beta \mu }_E, A^{\alpha \beta \mu \nu}_E$ and $A^{\alpha \beta \mu \nu \gamma}_E$ can be obtained from \eqref{11} by taking $j=2,3,4$:
\begin{align}\label{A1w}
\begin{split}
& A^{\alpha \beta \gamma}_E= \rho \, \theta_{0,2} \, U^{\alpha} U^{\beta} U^{\gamma} \, + \,
\rho c^2 \, \theta_{1,2} \, h^{( \alpha \beta } U^{\gamma ) } , \\
& A^{\alpha \beta \mu \nu}_E= \rho \, \theta_{0,3} \, U^\alpha U^\beta U^\mu U^\nu \, +
\, \rho \, c^2 \, \theta_{1,3}\, h^{( \alpha \beta } U^\mu U^{\nu ) } \, + \, \rho \,
c^4 \, \theta_{2,3} \, h^{( \alpha \beta } h^{\mu \nu ) } , \\
&A^{\alpha \beta \gamma \mu \nu}_E= \rho \, \theta_{0,4} \, U^\alpha U^\beta U^\gamma
U^\mu U^\nu \, + \, \rho \, c^2 \theta_{1,4} \, h^{( \alpha \beta } U^\gamma U^\mu
U^{\nu ) } \, + \, \rho \, c^4 \theta_{2,4} \, h^{( \alpha \beta } h^{\gamma \mu}
U^{\nu )}\, ,
\end{split}
\end{align}
with the $\theta$'s given in \eqref{thetas}.
The system \eqref{18} allows us to deduce the $15$ Lagrange multipliers in terms of the $15$ field variables, including $\Delta$ given in \eqref{deltina}, and then to obtain the remaining part of the tensor $A^{\alpha \beta \gamma}$.
To solve this system, we consider first eq. \eqref{18}$_{1}$ contracted with $U_\alpha$, eq. \eqref{18}$_{2}$ contracted with $U_\alpha \, U_\beta$, eq. \eqref{18}$_{3}$ contracted with $U_\alpha U_\beta U_\gamma / c^3$, eq. \eqref{18}$_{2}$ contracted with
$h_{\alpha \beta}/3$ and \eqref{18}$_{3}$ contracted with $U_\alpha h_{\beta \gamma} /(3\, c^ 2)$, obtaining the system
\begin{align}\label{19b}
\begin{split}
& \theta_{0,0} \, (\lambda - \lambda_E) + \theta_{0,1} \, U^\mu \Big(
\lambda_\mu - \frac{U_\mu}{T} \Big) + \theta_{0,2} \, U^\mu U^\nu \lambda_{\mu \nu} \, + \frac{c^2}{3} \theta_{1,2} \, h^{\mu \nu} \lambda_{\mu \nu} = 0 \, , \\
& \theta_{0,1} \, (\lambda - \lambda_E) + \theta_{0,2} \, U^\mu \Big(
\lambda_\mu - \frac{U_\mu}{T} \Big) + \, \theta_{0,3} \, U^\mu U^\nu \lambda_{\mu \nu} \, + \frac{c^2}{6} \, \theta_{1,3} \, h^{\mu \nu} \lambda_{\mu \nu}= 0 \, , \\
& \theta_{0,2} \, (\lambda - \lambda_E) + \, \theta_{0,3} \, U^\mu \Big( \lambda_\mu - \frac{U_\mu}{T}
\Big) + \, \theta_{0,4} \, U^\mu U^\nu \lambda_{\mu \nu} \, + \frac{c^2}{10} \, \theta_{1,4} \, h^{\mu \nu} \lambda_{\mu \nu}= - \, \frac{k_B}{4 \, m^2 \, n \, c^4} \, \Delta \, , \\
& \theta_{1,1}\, (\lambda - \lambda_E) + \frac{1}{3} \theta_{1,2} \, U^\mu \Big( \lambda_\mu - \frac{U_\mu}{T}
\Big) + \, \frac{1}{6} \theta_{1,3} \, U^\mu U^\nu \lambda_{\mu \nu} + \, \frac{5}{9} c^2 \theta_{2,3} \, h^{\mu \nu} \lambda_{\mu \nu}= - \, \frac{k_B}{m^2 n c^2} \, \Pi \, , \\
& \frac{1}{3} \theta_{1,2} \, (\lambda - \lambda_E) + \, \frac{1}{6} \, \theta_{1,3} \, U^\mu \Big( \lambda_\mu - \frac{U_\mu}{T}
\Big) + \, \frac{1}{10} \, \theta_{1,4} \, U^\mu U^\nu \lambda_{\mu \nu} \, + \frac{c^2}{9} \, \theta_{2,4} \, h^{\mu \nu} \lambda_{\mu \nu}= \\
& \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad = - \, \frac{k_B}{3m^2 c^4 n} \, \Big( A^{\alpha \beta \gamma} - A^{\alpha \beta \gamma}_E \Big) U_\alpha h_{\beta \gamma} \, .
\end{split}
\end{align}
This is a system of 5 equations in the 4 unknowns $\lambda - \lambda_E$, $ U^\mu \Big(
\lambda_\mu - \frac{U_\mu}{T} \Big) $, $U^\mu U^\nu \lambda_{\mu \nu}$, $h^{\mu \nu} \lambda_{\mu \nu}$; for solutions to exist, the determinant of the augmented matrix must vanish, i.e.,
\begin{align}\label{determ}
0 = \left|
\begin{matrix} \theta_{0,0} & \theta_{0,1} & \theta_{0,2} & \frac{1}{3}\theta_{1,2} & 0 \\
&&&& \\
\theta_{0,1} & \theta_{0,2} & \theta_{0,3} & \frac{1}{6} \, \theta_{1,3} & 0 \\
&&&& \\
\theta_{0,2} & \theta_{0,3} & \theta_{0,4} & \frac{1}{10} \, \theta_{1,4} & - \, \frac{k_B}{4m c^4} \, \Delta \\
&&&& \\
\theta_{1,1} & \frac{1}{3} \, \theta_{1,2} & \frac{1}{6} \, \theta_{1,3} & \frac{5}{9} \, \theta_{2,3} & - \, \frac{k_B}{m c^2} \, \Pi \\
&&&& \\
\frac{1}{3} \, \theta_{1,2} & \frac{1}{6} \, \theta_{1,3} & \frac{1}{10} \, \theta_{1,4} & \frac{1}{9} \, \theta_{2,4} & - \, \frac{k_B}{3m \, c^4} \, \Big( A^{\alpha \beta \gamma} - A^{\alpha \beta \gamma}_E \Big) U_\alpha h_{\beta \gamma}
\end{matrix}
\right| \, .
\end{align}
By defining
\begin{align*}
& D_4 = \left|
\begin{matrix} \theta_{0,0} & \theta_{0,1} & \theta_{0,2} & \frac{1}{3}\theta_{1,2} \\
&&&& \\
\theta_{0,1} & \theta_{0,2} & \theta_{0,3} & \frac{1}{6} \, \theta_{1,3} \\
&&&& \\
\theta_{0,2} & \theta_{0,3} & \theta_{0,4} & \frac{1}{10} \, \theta_{1,4} \\
&&&& \\
\theta_{1,1} & \frac{1}{3} \, \theta_{1,2} & \frac{1}{6} \, \theta_{1,3} & \frac{5}{9} \, \theta_{2,3}
\end{matrix}
\right| \, , \nonumber \\
\nonumber \\
& N^\Pi = - \, \left|
\begin{matrix} \theta_{0,0} & \theta_{0,1} & \theta_{0,2} & \frac{1}{3}\theta_{1,2} \\
&&&& \\
\theta_{0,1} & \theta_{0,2} & \theta_{0,3} & \frac{1}{6} \, \theta_{1,3} \\
&&&& \\
\theta_{0,2} & \theta_{0,3} & \theta_{0,4} & \frac{1}{10} \, \theta_{1,4} \\
&&&& \\
\frac{1}{3} \, \theta_{1,2} & \frac{1}{6} \, \theta_{1,3} & \frac{1}{10} \, \theta_{1,4} & \frac{1}{9} \, \theta_{2,4}
\end{matrix}
\right| \, ,
\quad
N^\Delta = \left|
\begin{matrix} \theta_{0,0} & \theta_{0,1} & \theta_{0,2} & \frac{1}{3}\theta_{1,2} \\
&&&& \\
\theta_{0,1} & \theta_{0,2} & \theta_{0,3} & \frac{1}{6} \, \theta_{1,3} \\
&&&& \\
\theta_{1,1} & \frac{1}{3} \, \theta_{1,2} & \frac{1}{6} \, \theta_{1,3} & \frac{5}{9} \, \theta_{2,3} \\
&&&& \\
\frac{1}{3} \, \theta_{1,2} & \frac{1}{6} \, \theta_{1,3} & \frac{1}{10} \, \theta_{1,4} & \frac{1}{9} \, \theta_{2,4}
\end{matrix}
\right| \, ,
\end{align*}
eq. \eqref{determ} gives:
\begin{align}\label{21}
\frac{1}{3 \, c^2} \, \left( A^{\alpha \beta \gamma} - A^{\alpha \beta \gamma}_E \right) U_\alpha h_{\beta \gamma} = - \, \frac{N^\Pi}{D_4} \, \Pi \, - \, \frac{N^\Delta}{D_4} \frac{1}{4c^2} \, \Delta \, .
\end{align}
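Relation \eqref{21} is precisely the expansion of the determinant \eqref{determ} along its last column. The following Python sketch illustrates this with random numeric stand-ins for the $\theta$ coefficient block (the values carry no physical meaning):

```python
import random

def det(M):
    """Determinant by Laplace expansion along the first row (fine for n <= 5)."""
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(n))

random.seed(0)
# Generic 5x4 stand-in for the theta coefficient block of (determ); the diagonal
# boost only keeps the minors well away from zero, it has no physical meaning.
R = [[random.uniform(0.5, 2.0) + (8.0 if i == j else 0.0) for j in range(4)]
     for i in range(5)]

def del_row(i):
    return [R[k] for k in range(5) if k != i]

D4      = det(del_row(4))    # delete 5th row: the matrix D_4 of the text
N_Pi    = -det(del_row(3))   # delete 4th row, with the sign chosen in the text
N_Delta = det(del_row(2))    # delete 3rd row

# Last column (0, 0, r3, r4, r5): the vanishing of the 5x5 determinant forces
# r5 = -(r3*N_Delta + r4*N_Pi)/D4, which is the content of relation (21).
r3, r4 = 0.7, -1.3
r5 = -(r3 * N_Delta + r4 * N_Pi) / D4
M = [row + [col] for row, col in zip(R, [0.0, 0.0, r3, r4, r5])]
assert abs(det(M)) < 1e-8
```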
We contract now eq. \eqref{18}$_{1}$ with $h_\alpha^\delta$, eq. \eqref{18}$_{2}$ with $U_\alpha \, h_\beta^\delta$, eq. \eqref{18}$_{3}$ with $U_\alpha U_\beta h_\gamma^\delta/c^3$ and \eqref{18}$_{3}$ with $h_\alpha^\delta h_{\beta \gamma}/(3\, c^ 2)$, obtaining the system
\begin{align}\label{21bis}
\begin{split}
& c^2 \theta_{1,1} \, h^{\delta \mu}(\lambda_\mu - \lambda_{\mu_E}) +\frac{2}{3} c^2 \theta_{1,2} \,
\, U^\mu h^{\delta \nu} \lambda_{\mu \nu} = 0 \, , \\
& c^2 \theta_{1,2} \, h^{\delta \mu}(\lambda_\mu - \lambda_{\mu_E}) +\, c^2 \theta_{1,3} \,
\, U^\mu h^{\delta \nu} \lambda_{\mu \nu} = - \frac{3 \, k_B}{m^2 c^2 n} \, q^\delta \, , \\
& c^2 \theta_{1,3} \, h^{\delta \mu}(\lambda_\mu - \lambda_{\mu_E}) + \frac{18}{15} \, c^2 \theta_{1,4} \,
\, U^\mu h^{\delta \nu} \lambda_{\mu \nu} = \frac{6 \, k_B}{m^2 c^4 n} \, \left( A^{\alpha \beta \gamma} - A^{\alpha \beta \gamma}_E \right) U_\alpha U_\beta h_\gamma^\delta \, , \\
& \frac{5}{3} \, c^4 \theta_{2,3} \, h^{\delta \mu}(\lambda_\mu - \lambda_{\mu_E}) + \frac{2}{3} c^4 \theta_{2,4}\,
\, U^\mu h^{\delta \nu} \lambda_{\mu \nu} = \frac{k_B}{m^2 n} \, \left( A^{\alpha \beta \gamma} - A^{\alpha \beta \gamma}_E \right) h_{\alpha \beta} h_\gamma^\delta\, .
\end{split}
\end{align}
By eliminating the parameters $h^{\delta \mu}(\lambda_\mu - \lambda_{\mu_E})$ and $U^\mu h^{\delta \nu} \lambda_{\mu \nu}$ from these equations, we obtain
\begin{align}\label{22}
\begin{split}
& \left( A^{\alpha \beta \gamma} - A^{\alpha \beta \gamma}_E \right) U_\alpha U_\beta h_\gamma^\delta = - \, c^2
\frac{N_3}{D_3} \, q^\delta \, , \\
& \left( A^{\alpha \beta \gamma} - A^{\alpha \beta \gamma}_E \right) h_{\alpha \beta} h_\gamma^\delta = - \,
\frac{N_{31}}{D_3} \, q^\delta \, ,
\end{split}
\end{align}
with
\begin{align*}
D_3 = \left|
\begin{matrix} \theta_{1,1} & \theta_{1,2} \\
& \\
\theta_{1,2} & \frac{3}{2} \, \theta_{1,3} \,
\end{matrix}
\right| \, , \quad N_3 = \frac{1}{2} \, \left|
\begin{matrix} \theta_{1,1} & \theta_{1,2} \\
& \\
\theta_{1,3} & \frac{9}{5} \, \theta_{1,4} \,
\end{matrix}
\right| \, , \quad N_{31} = \left|
\begin{matrix} \theta_{1,1} & \theta_{1,2} \\
& \\
5 \, \theta_{2,3} & 3 \, \theta_{2,4} \,
\end{matrix}
\right| \, .
\end{align*}
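The elimination leading to \eqref{22} can be checked with generic numbers. In the Python sketch below, the constants $c$, $k_B/(m^2 n)$ and the magnitude of $q^\delta$ are normalized to $1$, and the $\theta$'s are random stand-ins (illustrative assumptions only):

```python
import random

random.seed(1)
# Random numeric stand-ins for theta_{1,1..4}, theta_{2,3}, theta_{2,4};
# units with c = 1, common constants and q^delta set to 1.
t11, t12, t13, t14, t23, t24 = (random.uniform(0.5, 2.0) for _ in range(6))

# Solve (21bis)_{1,2} for x = h(lambda - lambda_E) and y = U h lambda:
#   t11*x + (2/3)*t12*y = 0,    t12*x + t13*y = -1
den = t11 * t13 - (2.0 / 3.0) * t12 ** 2
y = -t11 / den
x = (2.0 / 3.0) * t12 / den

D3  = 1.5 * t11 * t13 - t12 ** 2          # the 2x2 determinants defined after (22)
N3  = 0.5 * (1.8 * t11 * t14 - t12 * t13)
N31 = 3.0 * t11 * t24 - 5.0 * t12 * t23

# (21bis)_3 and (21bis)_4 then give the two contractions of A - A_E:
X1 = 0.5 * (t13 * x + 1.2 * t14 * y)                        # should equal -N3/D3
X2 = 3.0 * ((5.0 / 3.0) * t23 * x + (2.0 / 3.0) * t24 * y)  # should equal -N31/D3
assert abs(X1 + N3 / D3) < 1e-12
assert abs(X2 + N31 / D3) < 1e-12
```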
We contract now eq. \eqref{18}$_{2}$ with $h_\alpha^{< \delta } \, h_\beta^{\theta >_3 }$ and \eqref{18}$_{3}$ with $h_\alpha^{< \delta } \, h_\beta^{\theta >_3 } U_\gamma$, obtaining
\begin{align}\label{22b}
\begin{split}
& - \, \frac{k_B}{m} \, t^{<\delta \theta >_3} = \frac{2}{3} \, mn c^4 \theta_{2,3} \, h^{\mu < \delta } h^{\theta >_3 \nu} \lambda_{\mu \nu} \, , \\
& \left( A^{\alpha \beta \gamma} - A^{\alpha \beta \gamma}_E \right) h_\alpha^{< \delta } \, h_\beta^{\theta >_3} U_\gamma = - \, \frac{2}{15} \, \frac{m}{k_B} \, mn \, c^6 \, \theta_{2,4} \, h^{\mu < \delta } h^{\theta >_3 \nu} \lambda_{\mu \nu} \, ,
\end{split}
\end{align}
from which it follows
\begin{align}\label{23}
\left( A^{\alpha \beta \gamma} - A^{\alpha \beta \gamma}_E \right) h_\alpha^{< \delta } \, h_\beta^{\theta >_3 } U_\gamma = C_5 \, c^2 \, t^{<\delta \theta >_3} \, \qquad \mbox{with} \qquad C_5 = \frac{1}{5} \, \frac{\theta_{2,4}}{\theta_{2,3}} \, .
\end{align}
Finally, \eqref{18}$_{3}$ contracted with $h_\alpha^{< \delta} \, h_\beta^\theta \, h_\gamma^{\psi>_3 }$ gives
\begin{align*}
\Big( A^{\alpha \beta \gamma} - A^{\alpha \beta \gamma}_E \Big)h_\alpha^{< \delta} \, h_\beta^\theta \, h_\gamma^{\psi>_3 } = 0 \, .
\end{align*}
This result, jointly with \eqref{21}, \eqref{22}, \eqref{23}, gives the decomposition of the triple tensor $A^{\alpha \beta \gamma}$:
\begin{align*}
\begin{split}
A^{\alpha \beta \gamma} - A_E^{\alpha \beta \gamma} & = \frac{1}{4c^4} \, \Delta \, U^\alpha U^\beta U^\gamma - \, \frac{3}{4c^2}\, \frac{N^\Delta}{D_4} \, \Delta \, h^{(\alpha \beta} U^{\gamma)} - 3 \, \frac{N^\Pi}{D_4} \, \Pi h^{(\alpha \beta} U^{\gamma)} \\
& + \frac{3}{c^2}
\frac{N_3}{D_3} \, q^{(\alpha} U^\beta U^{\gamma)} + \frac{3}{5} \frac{N_{31}}{D_3} h^{(\alpha
\beta} q^{\gamma)} + 3 C_5 t^{(<\alpha \beta >_3} U^{\gamma)}
\, .
\end{split}
\end{align*}
Thanks to eq. \eqref{A1w}$_1$, we have the closure of the triple tensor in terms of the physical variables:
\begin{align}\label{24}
\begin{split}
A^{\alpha \beta \gamma} &= \left( \rho\, \theta_{0,2} + \, \frac{1}{4c^4} \, \Delta \right) U^{\alpha } U^{\beta} U^\gamma + \left( \rho \, c^2\,\theta_{1,2} - \, \frac{3}{4c^2}\, \frac{N^\Delta}{D_4} \, \Delta \, - 3 \, \frac{N^\Pi}{D_4} \, \Pi \right)
\, h^{(\alpha\beta} U^{\gamma)} \\
& + \frac{3}{c^2} \frac{N_3}{D_3} \, q^{(\alpha} U^\beta U^{\gamma)} + \frac{3}{5} \frac{N_{31}}{D_3} h^{(\alpha
\beta} q^{\gamma)} + 3 C_5 t^{(<\alpha \beta >_3} U^{\gamma)}
\, .
\end{split}
\end{align}
\subsection{Inversion of the Lagrange Multipliers}
In this section, we present the explicit expressions of the Lagrange multipliers in terms of the $15$ physical independent variables.
From the representation theorems, they are expressed as follows:
\begin{align}\label{RT}
\begin{split}
\lambda - \lambda_E &= a_1 \Pi + a_2 \Delta, \\
\lambda_\mu - \lambda_{\mu_E} &= \left(b_1 \Pi + b_2 \Delta\right)U_\mu + b_3 q_\mu,\\
\lambda_{\mu\nu} &= \left( \alpha_1 \Pi + \beta_1 \Delta\right) U_\mu U_\nu + \left(\alpha_2 \Pi + \beta_2 \Delta\right) h_{\mu\nu} + \alpha_3 \left(q_{\mu} U_{\nu} +q_{\nu} U_{\mu}\right) + \alpha_4 t_{<\mu\nu >_3},
\end{split}
\end{align}
where $\lambda_E$ and $\lambda_{\mu_E}$ can be found in eq. \eqref{mainE}, and the coefficients $a_{1,2}, b_{1,2,3}$, $\alpha_{1,2,3,4}$ and $\beta_{1,2}$ are functions of $\rho$ and $\gamma$. By using eqs. \eqref{19b}, \eqref{21bis} and \eqref{22b}, it is possible to obtain the explicit expressions of these coefficients.
For convenience, let us denote by $D_4^{ij}$ the minor determinant obtained from $D_4$ by deleting its $i$th row and $j$th column. From system \eqref{19b}, we obtain
\begin{align}\label{EP2}
\begin{split}
\lambda - \lambda_E &= -\frac{k_B}{m c^4 \rho\, D_4} \left(- \Pi \, c^2 D_4^{41}
+ \frac{\Delta}{4} D_4^{31} \right) , \\
&\\
U^\mu(\lambda_\mu-\lambda_{\mu_E}) &= -\frac{k_B}{m c^4 \rho\, D_4} \left( \Pi \, c^2 D_4^{42}
- \frac{\Delta}{4} D_4^{32} \right), \\
&\\
U^\beta U^{\gamma } \lambda_{\beta\gamma} &= -\frac{k_B}{m c^4 \rho\, D_4} \left(- \Pi \, c^2 D_4^{43}
+ \frac{\Delta}{4} D_4^{33} \right), \\
&\\
h^{\beta\gamma}\lambda_{\beta\gamma} &= -\frac{k_B}{m c^4 \rho\, D_4} \left(\Pi D_4^{44} - \frac{\Delta}{4c^2} D_4^{34}
\right).
\end{split}
\end{align}
From system \eqref{21bis} we obtain
\begin{align}\label{EP3}
h^{\delta \mu}\left(\lambda_\mu - \lambda_{\mu_E}\right) = \frac{3\, k_B \theta_{1,2}}{m c^4 \rho\, D_3} q^\delta \qquad\qquad \text{and} \qquad \qquad U^\beta h^{\gamma\delta}\lambda_{\beta\gamma} = -\frac{9 k_B \theta_{1,1} }{2 m c^4 \rho\, D_3} q^\delta.
\end{align}
Finally, from eq. \eqref{22b} we have
\begin{align*}
h^{\beta <\delta} h^{\theta >_3 \gamma}\lambda_{\beta\gamma} = -\frac{3 k_B}{2m c^4 \rho\theta_{2,3}} t^{<\delta\theta >_3},
\end{align*}
that, multiplied by $t_{<\delta\theta >_3}$, gives
\begin{align}\label{EP4}
t^{<\beta \gamma>_3}\lambda_{\beta\gamma} = -\frac{3 k_B}{2m c^4 \rho\theta_{2,3}} t^{<\beta \gamma>_3} t_{<\beta \gamma>_3}.
\end{align}
By comparing eqs. \eqref{RT}$_1$ with \eqref{EP2}$_1$ we have
\begin{align}\label{coef1}
a_1= \frac{k_B}{m c^2 \rho\, D_4} D_4^{41}, \qquad \qquad
a_2= -\frac{k_B}{4\,m c^4 \rho\, D_4} D_4^{31} .
\end{align}
By multiplying eq. \eqref{RT}$_2$ by $U^\mu$ and $h^{\mu\delta}$, respectively, and using eqs. \eqref{EP2}$_2$ and \eqref{EP3}$_1$, we have
\begin{align}\label{coef2}
b_1= - \frac{k_B}{m c^4 \rho\, D_4} D_4^{42}, \quad \quad
b_2= \frac{k_B}{4\,m c^6 \rho\, D_4} D_4^{32},
\quad \quad
b_3 = \frac{3\, k_B \theta_{1,2}}{m c^4 \rho D_3}.
\end{align}
Finally, by multiplying eq. \eqref{RT}$_3$ by $U^\mu \, U^\nu$, $h^{\mu\nu}$, $U^\nu h^{\mu\delta}$, $h^{\mu <\delta}h^{\theta>_3 \nu}$, respectively, and using eqs. \eqref{EP2}-\eqref{EP4}, we obtain
\begin{align}\label{coef3}
\begin{split}
\alpha_1 &= \frac{k_B}{m c^6 \rho\, D_4} D_4^{43}, \qquad \qquad
\alpha_2 = -\frac{k_B}{3 m c^4 \rho\, D_4} D_4^{44}, \\
\alpha_3 &= \frac{9 k_B \theta_{1,1} }{2 m c^6 \rho\, D_3}, \qquad \qquad \quad
\alpha_4 = -\frac{3 k_B}{2m c^4 \rho\theta_{2,3}},\\
\beta_1 &= -\frac{k_B}{4 m c^8\rho\, D_4} D_4^{33}, \qquad \quad
\beta_2 = \frac{k_B}{12 m c^6 \rho\, D_4} D_4^{34}.
\end{split}
\end{align}
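The minors $D_4^{ij}$ enter \eqref{EP2} exactly as cofactors do in Cramer's rule. The following generic Python sketch (random matrix and right-hand side, no physical content) illustrates the cofactor structure being used:

```python
import random

def det(M):
    """Determinant by Laplace expansion along the first row."""
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([r[:j] + r[j + 1:] for r in M[1:]])
               for j in range(n))

def minor(M, i, j):
    """Delete row i and column j."""
    return [r[:j] + r[j + 1:] for k, r in enumerate(M) if k != i]

random.seed(2)
# Random well-conditioned 4x4 system M z = b (the diagonal shift only keeps
# det(M) away from zero; none of these numbers is physical).
M = [[random.uniform(0.5, 2.0) + (6.0 if i == j else 0.0) for j in range(4)]
     for i in range(4)]
b = [random.uniform(-1.0, 1.0) for _ in range(4)]
D = det(M)

# Cramer's rule written with cofactor minors, mirroring the structure of (EP2):
#   z_j = (1/D) * sum_i (-1)^(i+j) * b_i * det(minor(M, i, j))
z = [sum((-1) ** (i + j) * b[i] * det(minor(M, i, j)) for i in range(4)) / D
     for j in range(4)]
for i in range(4):
    assert abs(sum(M[i][j] * z[j] for j in range(4)) - b[i]) < 1e-10
```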
\subsection{Production term with a variant BGK model}
To complete the closure of the system \eqref{Annalis}, we need the expression of the production tensor $I^{\beta \gamma}$. It depends on the collisional term $Q$ (see \eqref{relRETpol}$_4$), and obtaining the expression of $Q$ is a hard task in relativity. For a monatomic gas, one usually adopts the relativistic generalization of the BGK approximation, first made by Marle \cite{Mar2,Mar3} and subsequently by Anderson and Witting \cite{AW}. The Marle model is an extension of the classical BGK model in the Eckart frame \cite{KC,E}, while the Anderson-Witting model obtains such an extension in the Landau-Lifshitz frame \cite{KC,LL}. The Marle model has some weak points, and the Anderson-Witting model uses the Landau-Lifshitz four-velocity.
Starting from these considerations, Pennisi and Ruggeri proposed a variant of the Anderson-Witting model in the Eckart frame for both monatomic and polyatomic gases, and proved that the conservation laws of particle number and energy-momentum are satisfied and that the H-theorem holds \cite{Car1} (see also \cite{RS}).
In the polyatomic case, the following collision term has been proposed:
\begin{align}\label{P1}
Q=\frac{U^\alpha p_\alpha}{c^2 \tau}\left(f_E-f-f_Ep^\mu q_\mu \frac{1+\frac{\mathcal{I}}{m c^2}}{bmc^2}\right),
\end{align}
where $3b$ is the coefficient of $h^{(\alpha \beta} U^{\gamma )}$ in eq. \eqref{A1w}$_1$, i.e.,
$3 b=\rho c^2\theta_{1,2}$.\\
The most general expression of a nonequilibrium double tensor as a linear function of $\Delta$, $\Pi$, $t^{<\mu\nu>_3}$ and $q^{\mu}$ is the following:
\begin{align*}
I^{\beta \gamma}= (B_1^\Delta \, \Delta + \, B_1^\Pi \, \Pi ) \, U^\beta U^\gamma \, + ( B_2^\Delta \, \Delta \, + \, B_2^\Pi \, \Pi ) h^{\beta \gamma} + \, B^q \, U^{(\beta } \, q^{\gamma )} \, + \, B^t \, t^{< \beta \gamma >_3}.
\end{align*}
In order to determine the coefficients in $I^{\beta\gamma}$, we substitute eq. \eqref{P1} into eq. \eqref{relRETpol}$_4$, obtaining
\begin{align*}
I^{\beta\gamma} &= \frac{c}{m} \int_{\varmathbb{R}^{3}}
\int_0^{+\infty} \frac{U^\alpha p_\alpha}{c^2 \tau}\Big(f_E-f-f_Ep^\mu q_\mu \frac{1+\frac{\mathcal{I}}{m c^2}}{bmc^2}\Big) \, p^{\beta} p^{\gamma} \, \Big( 1 + \, \frac{\mathcal{I}}{m \, c^2} \Big)^2 \,
\phi(\mathcal{I}) \, \, d \, \mathcal{I} d\boldsymbol{P}= \nonumber \\
&= \frac{U_\alpha}{c^2 \tau} (A_E^{\alpha\beta\gamma}- A^{\alpha\beta\gamma}) - 3 \frac{U_\alpha q_\mu}{ \theta_{1,2} m^2 n c^6 \tau} A_E^{\alpha\beta\gamma\mu},
\end{align*}
then we have
\begin{align}\label{P3}
\begin{split}
B_1^\Delta &= - \frac{1}{4c^4\tau}, \qquad B_1^\Pi =0, \qquad B_2^\Delta = \frac{1}{4c^2\tau} \, \frac{N^\Delta}{D_4}, \qquad B_2^\Pi = \, \frac{1}{\tau} \, \frac{N^\Pi}{D_4} \\
B^q &= \frac{1}{c^2 \tau} \, \Big( \frac{\theta_{1,3}}{\theta_{1,2}} \, - 2 \frac{N_3}{D_3} \Big) \, , \quad B^t = - \, \frac{1}{\tau} \, C_5 \, .
\end{split}
\end{align}
Therefore the final expression of the production term $I^{\beta\gamma}$ is
\begin{align}\label{P22}
I^{\beta\gamma} =\frac{1}{\tau}
\left\{
- \frac{1}{4c^4 } \Delta \, U^\beta U^{\gamma } + \Big( \frac{1}{4c^2} \frac{N^\Delta}{D_4}\Delta +\frac{N^\Pi}{D_4} \Pi \Big) h^{\beta\gamma} + \Big( -\frac{2}{c^2 } \frac{N_3}{D_3} +\frac{\theta_{1,3}}{\theta_{1,2}}\frac{1}{c^2 } \Big)q^{(\beta} U^{ \gamma )} - C_5 t^{<\beta\gamma>_3}
\right\}.
\end{align}
We summarize the results of this section as follows:
\begin{statement}
The closed system \eqref{Annalis} obtained via MEP is the one for which $V^\alpha, T^{\alpha \beta}, A^{\alpha\beta \gamma}, I^{\beta \gamma}$ are given explicitly in terms of the $15$ fields $(\rho,\gamma,\Pi, \Delta, U^\alpha, q^\alpha, t^{<\alpha \beta>_3})$ by the expressions \eqref{19}, \eqref{24} and \eqref{P22}. All coefficients are completely determined in terms of a single function $\omega(\gamma)$, given by eq. \eqref{10}$_3$, and its derivatives up to order $3$. Observe, taking into account \eqref{interenergy2}, that the coefficients $\theta$'s given in \eqref{thetas} can be formally written in terms of the internal energy $\varepsilon$ and its derivatives.
\end{statement}
\subsection{Closed system of the field equations and material derivative}
We can now write the differential system for the field variables explicitly using the material derivative.
The relativistic material derivative of a function $f$ is defined as the derivative with respect to the proper time $\bar{\tau}$ along the path of the particle:
\begin{align}\label{matde}
\dot{f} = \frac{d f}{d \bar{\tau}} = \frac{d f}{dt} \frac{dt}{d\bar{\tau}} = \Gamma (\partial_t f + v^j \partial_j f) = U^\alpha \partial_\alpha f,
\end{align}
where $\Gamma$ is the Lorentz factor and we take into account that
\[
U^\alpha = \frac{dx^\alpha}{d\bar{\tau}} \equiv (\Gamma c, \Gamma v^j),
\]
where $v^j$ is the velocity.
Now we observe that for any balance law we have the following identity:
\begin{align*}
I^{\alpha_1 \cdots \alpha_n} = \partial_\alpha \, A^{\alpha \alpha_1 \cdots \alpha_n} = g^\beta_\alpha \, \partial_\beta \, A^{\alpha \alpha_1 \cdots \alpha_n} = \Big( -h^\beta_\alpha + \frac{U^\beta U_\alpha}{c^2} \Big) \, \partial_\beta \, A^{\alpha \alpha_1 \cdots \alpha_n} = \\
= \frac{U_\alpha}{c^2} \, \dot{A}^{\alpha \alpha_1 \cdots \alpha_n} \, - \, h^\beta_\alpha \, \partial_\beta \, A^{\alpha \alpha_1 \cdots \alpha_n} \, .
\end{align*}
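The decomposition above uses the normalization $U_\alpha U^\alpha = c^2$ and the fact that $h^\beta_\alpha = U^\beta U_\alpha/c^2 - \delta^\beta_\alpha$, read off from $g^\beta_\alpha = -h^\beta_\alpha + U^\beta U_\alpha/c^2$, annihilates $U^\alpha$ and satisfies $h^\beta_\alpha \, h^\alpha_\gamma = -h^\beta_\gamma$, so that $-h^\beta_\alpha$ is the idempotent spatial projector. A minimal numerical sketch of these properties, assuming natural units $c=1$, signature $(+,-,-,-)$ and an illustrative velocity:

```python
import math

c = 1.0  # natural units; c and the velocity below are illustrative choices
g = [[1, 0, 0, 0], [0, -1, 0, 0], [0, 0, -1, 0], [0, 0, 0, -1]]  # signature (+,-,-,-)
v = (0.4, -0.1, 0.2)                                   # 3-velocity v^j
Gamma = 1.0 / math.sqrt(1.0 - sum(x * x for x in v) / c**2)
U_up = [Gamma * c] + [Gamma * x for x in v]            # U^alpha = dx^alpha / d tau
U_dn = [sum(g[a][b] * U_up[b] for b in range(4)) for a in range(4)]  # U_alpha

# four-velocity normalization U_alpha U^alpha = c^2
assert abs(sum(U_dn[a] * U_up[a] for a in range(4)) - c**2) < 1e-12

# mixed projector h^beta_alpha = U^beta U_alpha / c^2 - delta^beta_alpha
h = [[U_up[b] * U_dn[a] / c**2 - (1.0 if a == b else 0.0)
      for a in range(4)] for b in range(4)]

for b in range(4):
    # h annihilates the four-velocity ...
    assert abs(sum(h[b][a] * U_up[a] for a in range(4))) < 1e-12
    for gam in range(4):
        # ... and h.h = -h, i.e. -h (mixed indices) is the spatial projector
        hh = sum(h[b][a] * h[a][gam] for a in range(4))
        assert abs(hh + h[b][gam]) < 1e-12
```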
In our case with $n=0,1,2$, these equations are written as follows:
\begin{align*}
\begin{split}
& \partial_\alpha \left( \rho U^\alpha \right) =0, \quad \, h_{\delta \beta} \, \left( \frac{U_\alpha}{c^2} \, \dot{T}^{\alpha \beta} \, - \, h^\mu_\alpha \, \partial_\mu \, T^{\alpha \beta} \right)=0, \quad \, U_{\beta} \, \left( \frac{U_\alpha}{c^2} \, \dot{T}^{\alpha \beta} \, - \, h^\mu_\alpha \, \partial_\mu \, T^{\alpha \beta} \right)=0 \, , \\
& h_{\delta \beta} \, h_{\theta \gamma} \, \left( \frac{U_\alpha}{c^2} \, \dot{A}^{\alpha \beta \gamma} \, - \, h^\mu_\alpha \, \partial_\mu \, A^{\alpha \beta \gamma} \, - \, I^{\beta \gamma} \right)=0 \, , \\
& h_{\delta \beta} \, U_{\gamma} \, \left( \frac{U_\alpha}{c^2} \, \dot{A}^{\alpha \beta \gamma} \, - \, h^\mu_\alpha \, \partial_\mu \, A^{\alpha \beta \gamma} \, - \, I^{\beta \gamma} \right)=0, \quad
U_{\beta} \, U_{\gamma} \, \left( \frac{U_\alpha}{c^2} \, \dot{A}^{\alpha \beta \gamma} \, - \, h^\mu_\alpha \, \partial_\mu \, A^{\alpha \beta \gamma} \, - \, I^{\beta \gamma} \right)=0 \, .
\end{split}
\end{align*}
By using the expressions \eqref{19}, \eqref{24} and \eqref{P22} for $V^\alpha, T^{\alpha \beta}$, $A^{\alpha \beta \gamma}$ and $I^{\beta \gamma}$, respectively, we see that these become
{\small
\begin{align}\label{derivmat}
& \dot{\rho} + \rho \, \partial_\alpha \, U^\alpha=0 \, , \nonumber \\
& -\frac{e+p+ \Pi}{c^2}\, \dot{U}^\delta \, + \, \frac{1}{c^2}\, h^{\delta}_{\beta} \, \dot{q}^\beta \, + \, \frac{1}{c^2}\, t^{< \alpha \delta >_3} \, \dot{U}_\alpha \, - \, h^{\delta \mu} \, \partial_\mu (p+ \Pi) \, - \, \frac{1}{c^2}\, q^\mu \, \partial_\mu U^\delta \, -\frac{1}{c^2}\, q^\delta \, \partial_\alpha U^\alpha \, - \, h^\delta_\beta \, h_\alpha^\mu \, \partial_\mu \, t^{< \alpha \beta >_3} =0 \, , \nonumber \\
& \dot{e} \, + \, 2 \, \frac{U_\alpha}{c^2} \, \dot{q}^\alpha \, + \, (e+p+ \Pi)\, \partial_\alpha U^\alpha \, - \, h_\alpha^\mu \, \partial_\mu q^\alpha \, - t^{< \alpha \beta >_3} \, \partial_\alpha U_\beta =0 \, , \nonumber \\
& h_{\delta \beta} \,\Big( \frac{1}{3} \rho c^2 \theta_{1,2} \, - \, \frac{1}{4 \, c^2} \, \frac{N^\Delta}{D_4} \, \Delta \, - \, \frac{N^\Pi}{D_4} \, \Pi \Big)^\bullet
+ \, C_5 \, h_{\delta \gamma} \, h_{\theta \beta} \, \dot{t}~^{< \theta \gamma>_3} \, + \, t_{< \delta \beta >_3} \, \dot{C}_5 -
\frac{2}{c^2} \, \Big( \frac{N_3}{D_3} \, + \, \frac{1}{5} \, \frac{N_{31}}{D_3} \Big) q_{( \delta} \, h_{\beta ) \gamma} \, \dot{U}^\gamma \, - \nonumber \\
& \hspace{2cm} \frac{1}{5 \, c^2} \, \frac{N_{31}}{D_3} \, h_{\beta \delta} \, q^\alpha \, \dot{U}_\alpha +
\, \Big(- \, \frac{1}{3} \rho c^2 \theta_{1,2} \, + \, \frac{1}{4 \, c^2} \, \frac{N^\Delta}{D_4} \, \Delta \, + \,
\frac{N^\Pi}{D_4} \, \Pi \Big) \,
\left[ - h_{\delta \beta} \partial_\alpha \, U^\alpha \, + \, 2 \,
h_{\theta ( \delta } \, h_{\beta )}^\mu \, \partial_\mu \, U^\theta \right] \, + \nonumber \\
& \hspace{2cm} \frac{1}{5} \, \Big( q^\mu h_{\delta \beta} + 2 \, q_{( \delta} h_{\beta )}^\mu \Big) \, \partial_\mu \, \Big( \frac{N_{31}}{D_3} \Big)
- \, \frac{1}{5} \, \frac{N_{31}}{D_3} \,
\left[ h_{\delta \beta} \, h^\mu_\alpha \, \partial_\mu \, q^\alpha + 2 \,
h_{\theta ( \delta} h^\mu_{\beta )} \, \partial_\mu \, q^\theta \right] + \nonumber \\
& \hspace{2cm} C_5 \,\left[ \, t_{< \delta \beta >_3} \, \partial_\alpha
\, U^\alpha \,+ \, 2 \, t^{< \mu \gamma >_3} h_{\gamma ( \beta} \,
h_{\delta ) \theta} \, \partial_\mu \, U^\theta \right] = \nonumber \\
& \hspace{2cm} \frac{1}{\tau} \, \Big( \frac{1}{4c^2} \, \frac{N^\Delta}{D_4} \,
\Delta \, + \, \frac{N^\Pi}{D_4} \, \Pi \Big) h_{\delta \beta} \, - \, \frac{1}{\tau}
\, C_5 \, t_{< \delta \beta >_3}\, , \\
& h_{\beta \delta} \, \dot{U}^\beta \Big( \rho \theta_{0,2} c^2 + \frac{2}{3} \rho c^2 \theta_{1,2} + \frac{1}{4 \, c^2} \,
\Delta - \, \frac{1}{2 \, c^2} \, \frac{N^\Delta}{D_4} \, \Delta
- \, 2 \, \frac{N^\Pi}{D_4} \, \Pi \Big) +
h_{\beta \delta} \, \frac{N_3}{D_3} \, \dot{q}^\beta - \, q_\delta \, \Big( \frac{N_3}{D_3} \Big)^\bullet \, + \, \Big( 2 \, C_5 \, - 1 \Big) \, t^{< \delta \gamma >_3} \, \dot{U}_\gamma \, - \nonumber \\
& \hspace{2cm} h_\delta^\mu \, \partial_\mu \, \Big( \frac{1}{3} \rho c^4 \theta_{1,2} - \frac{1}{4} \,
\frac{N^\Delta}{D_4} \, \Delta
- \, \frac{N^\Pi}{D_4} \, c^2 \, \Pi \Big) -
\Big( \frac{N_3}{D_3} \, + \, \frac{1}{5} \, \frac{N_{31}}{D_3} \Big) \, \Big( q^\mu \, \partial_\mu \, U_\delta \,+ q_\delta \, \partial_\alpha \, U^\alpha \Big) + \nonumber \\
& \hspace{2cm} \frac{1}{5} \, \frac{N_{31}}{D_3} \, h^\mu_\delta \, q^\gamma \, \partial_\mu \, U_\gamma \,+ \, h_\alpha^\mu \, \partial_\mu \, \Big( C_5 \, c^2 \, t^{< \alpha}_{~~~~\delta >_3} \Big) =
\frac{1}{\tau} \, \Big( \frac{N_3}{D_3} \, - \, \frac{\theta_{1,3}}{2 \, \theta_{1,2}} \Big) \, q_\delta \, , \nonumber \\
& \Big( \rho \theta_{0,2} c^4 + \frac{1}{4} \, \Delta \Big)^\bullet - \, 3 \, \frac{N_3}{D_3} \, q^\alpha \, \dot{U}_\alpha \, +
\partial_\alpha \, U^\alpha \cdot \Big( \rho \theta_{0,2} c^4 + \frac{2}{3} \rho c^4 \theta_{1,2} + \frac{1}{4}
\, \Delta - \, \frac{1}{2} \, \frac{N^\Delta}{D_4} \, \Delta - \, 2 \, \frac{N^\Pi}{D_4} \, \Pi \, c^2 \Big) \, - \nonumber \\
& \hspace{2cm} h_\alpha^\mu \, \partial_\mu \, \Big( \frac{N_3}{D_3} \, c^2 \, q^\alpha \Big) \, - \, 2 \, C_5 c^2 \, t^{< \mu \gamma >_3} \, \partial_\mu U_\gamma = - \, \frac{1}{4 \, \tau} \, \Delta \, . \nonumber
\end{align}
}
\normalsize
It may be useful to decompose \eqref{derivmat}$_{4}$ into the trace and spatial traceless parts. The trace part is given by
\begin{align}\label{tracciap}
\begin{split}
& \,\Big( \rho c^2 \theta_{1,2} \, - \, \frac{3}{4 \, c^2} \, \frac{N^\Delta}{D_4} \, \Delta \, - \, 3 \frac{N^\Pi}{D_4} \, \Pi \Big)^\bullet
+ C_5 h_{\theta\gamma} \dot{t}~^{<\theta\gamma >_3} +\frac{1}{c^2} \, \Big( 2\frac{N_3}{D_3} \, - \frac{1}{5} \frac{N_{31}}{D_3} \Big) q_{\gamma} \, \dot{U}^\gamma \, - \\
& \hspace{2cm}
\, \Big(- \, \frac{1}{3} \rho c^2 \theta_{1,2} \, + \, \frac{1}{4 \, c^2} \, \frac{N^\Delta}{D_4} \, \Delta \, + \,
\frac{N^\Pi}{D_4} \, \Pi \Big) \,
\partial_\alpha \, U^\alpha \, + q^\mu \partial_\mu \, \Big( \frac{N_{31}}{D_3} \Big)\, \, -\\
& \hspace{2cm}
\, \frac{N_{31}}{D_3} \, h^\mu_\alpha \, \partial_\mu \, q^\alpha - 2 C_5 \, t^{< \mu }_{\,\,\ \, \,\gamma >_3} \partial_\mu \, U^\gamma = \frac{3}{\tau} \, \Big( \frac{1}{4c^2} \, \frac{N^\Delta}{D_4} \, \Delta \, + \, \frac{N^\Pi}{D_4} \, \Pi \Big),
\end{split}
\end{align}
and the spatial traceless part is:
\begin{align}\label{devia}
\begin{split}
& C_5 \, h_{\gamma < \delta} \, h_{\beta >_3 \theta} \dot{t}^{< \gamma \theta >_3}\, + \, t_{< \delta \beta >_3}\, \dot{C}_5 \, + \,
\frac{2}{c^2} \, \left( \frac{N_3}{D_3} \, + \, \frac{1}{5} \, \frac{N_{31}}{D_3} \right) q_{< \delta} \, \dot{U}_{\beta >_3} \, + \\
& + \, 2 \, \left(- \, \frac{1}{3} \, \rho \, c^2 \theta_{1,2} \, + \, \frac{1}{4 \, c^2} \, \frac{N^\Delta}{D_4} \, \Delta \, + \,
\frac{N^\Pi}{D_4} \, \Pi \right) \,
h_{\gamma < \delta } \, h^\mu_{\,\, \beta >_3} \, \partial_\mu \, U^\gamma \, + \\
& + \frac{2}{5} \, \left( q_{< \delta} h^\mu_{\,\, \beta >_3} \right) \, \partial_\mu \, \left( \frac{N_{31}}{D_3} \right)
- \, \frac{2}{5} \, \frac{N_{31}}{D_3} \,
\left(
h_{\gamma < \delta} h^\mu_{\,\, \beta >_3} \, \partial_\mu \, q^\gamma \right) + \\
& + C_5 \,\left[ t_{< \delta \beta >_3} \, \partial_\alpha
\, U^\alpha \,+ \, 2 \, t^{< \mu \gamma >_3} h_{\gamma < \beta} \,
h_{\delta >_3 \nu} \, \partial_\mu \, U^\nu \right] = - \, \frac{1}{\tau}
\, C_5 \, t_{< \delta \beta >_3} \, .
\end{split}
\end{align}
The system formed by the $15$ equations \eqref{derivmat}$_{1,2,3}$, \eqref{tracciap}, \eqref{devia} and \eqref{derivmat}$_{5,6}$ is a closed system for the $15$ unknowns $(\rho, U_\delta, T, \Pi, t_{<\alpha \beta>_3}, q_\delta, \Delta)$.
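As a simple bookkeeping check of this count, the sketch below tallies the independent components of the field variables, using the constraints $U_\alpha U^\alpha = c^2$ and $q_\alpha U^\alpha = 0$ and the symmetric, traceless, spatial character of $t_{<\alpha\beta>_3}$ (the grouping in the dictionary is our own illustration):

```python
# Independent components of the 15 field variables
# (rho, U_delta, T, Pi, t_{<alpha beta>_3}, q_delta, Delta):
counts = {
    "rho": 1,
    "T": 1,
    "Pi": 1,
    "Delta": 1,
    "U": 4 - 1,  # four-velocity constrained by U_alpha U^alpha = c^2
    "q": 4 - 1,  # heat flux constrained by q_alpha U^alpha = 0
    "t": 6 - 1,  # symmetric spatial 3x3 part (6 entries) minus its trace
}
assert sum(counts.values()) == 15
```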
\section{Entropy density, Convexity, Entropy Principle and well-posedness of Cauchy problem}\label{entro}
In this section we evaluate the entropy law and prove that all solutions are entropic, with an entropy density that is a convex function.
\subsection{Entropy density}
By substituting the distribution function \eqref{fgenE} with \eqref{RT} into \eqref{entropy}, we can evaluate the four-dimensional entropy flux. In this procedure, care is needed concerning the order of the nonequilibrium variables: the present linear constitutive equations determine the entropy up to second order in the nonequilibrium variables. Taking into account terms up to second order in the expansion of the distribution function and of the constitutive equations, we may write
\begin{align}\label{halpha}
h^\alpha
= h^\alpha_E \, + \, h^\alpha_{(1)} \, + \, h^\alpha_{(2)} \, ,
\end{align}
where $h^\alpha_{(1)}$ and $h^\alpha_{(2)}$ are, respectively, the contribution of the first and second order terms of the nonequilibrium variables, which can be derived as follows (see Appendix~\ref{app:entropy} for details):
\begin{align}\label{entropyd1}
\begin{split}
h^\alpha_{(1)} &= -\frac{c}{k_B} \int_{\varmathbb{R}^3} \int_{0}^{+ \infty} p^\alpha f_E \, \chi_E \, \tilde{\chi}_{(1)} \, \varphi ( \mathcal{I} ) \, d \mathcal{I} \, d \boldsymbol{P}, \\
h^\alpha_{(2)} &= - \frac{c}{2 \, k_B} \int_{\varmathbb{R}^3} \int_{0}^{+ \infty} p^\alpha f_E \, {\tilde{\chi}_{(1)}}{}^2 \, \varphi ( \mathcal{I} ) \, d \mathcal{I} \, d \boldsymbol{P} \, ,
\end{split}
\end{align}
where ${\tilde{\chi}_{(1)}}$ is $\tilde{\chi}$ defined in \eqref{fgenE} evaluated with the linear constitutive equations studied in the previous subsection. After cumbersome calculations, we obtain their explicit expressions as follows:
\begin{align} \label{entropyd2}
h^\alpha_{(1)} & = \lambda_E \left(V^\alpha - V^\alpha_E\right) \, + \, \frac{U_\mu}{T} \, \left(T^{\alpha \mu} - T^{\alpha \mu}_E\right)
= \frac{q^\alpha}{T}, \nonumber \\
h^\alpha_{(2)} &= - \frac{m}{2 \, k_B} \left\{ \left( \lambda-\lambda^E\right)^2 V^\alpha_E \, + \, \left(\lambda_{\mu}-\lambda_{\mu}^E\right) \left(\lambda_{\nu}-\lambda_{\nu}^E\right) \, A^{\alpha \mu \nu}_E \, + \, \left(\lambda_{\mu \nu }\right) \left(\lambda_{\psi \theta }\right) \, A^{\alpha \mu \nu \psi \theta}_E \, + \right. \nonumber \\
&\qquad + \left. 2 \left( \lambda-\lambda^E\right) \left(\lambda_{\mu}-\lambda_{\mu}^E\right) \, T^{\alpha \mu}_E \, + \, 2 \left( \lambda-\lambda^E\right) \left(\lambda_{\mu \nu}\right) \, A^{\alpha \mu \nu}_E \, + \, 2 \left( \lambda_\theta-\lambda_\theta^E\right) \left(\lambda_{\mu \nu}\right) \, A^{\alpha \theta \mu \nu}_E \right\} \, \\
&= - \frac{1}{c^2}U^\alpha \left\{- \frac{c^2 \alpha_4 C_5}{2} t^{< \mu\nu >_3}t_{< \mu\nu>_3} - \left(c^2 \alpha_3 \frac{N_3}{D_3} + \frac{b_3}{2}\right)q_\mu q^\mu
+L_1\Pi^2 +L_2 \Delta^2 +2 L_3 \Pi \Delta \right\}\nonumber \\
&\qquad + \frac{1}{2}\left(b_1-b_3 + c^2 \frac{N_3}{D_3}\alpha_1 +
\frac{N_{31}}{D_3}\alpha_2 +2\alpha_3 c^2 \frac{N^\Pi}{D_4}\right)\Pi q^\alpha + \frac{1}{2}\left(b_2 + c^2 \frac{N_3}{D_3}\beta_1 + \frac{N_{31}}{D_3}\beta_2 +\frac{1}{2}\alpha_3 \frac{N^\Delta}{D_4}
\right)\Delta q^\alpha \nonumber \\
&\qquad + \frac{1}{2}\left(b_3 + 2c^2 \alpha_3 C_5 - \frac{2}{5}\alpha_4 \frac{N_{31}}{D_3}\right)t^{< \alpha\mu >_3}q_\mu, \nonumber
\end{align}
where
\begin{align*}
&L_1 = \frac{3c^2}{2}\alpha_2 \frac{N^\Pi}{D_4},\qquad
L_2 = \frac{1}{8}\left(3\beta_2\frac{N^\Delta}{D_4} - c^2 \beta_1\right), \qquad
L_3 =\frac{1}{4} \left(\frac{3\alpha_2}{4}\frac{N^\Delta}{D_4}+3 c^2{\beta_2}\frac{N^\Pi}{D_4}- \frac{c^2\alpha_1}{4}
\right).
\end{align*}
In particular, for the entropy density $h=h^\alpha U_\alpha$, we have
\begin{align}\label{conv}
h= h_E
+ \frac{c^2 \alpha_4 C_5}{2} t^{< \mu\nu >_3}t_{< \mu\nu>_3} + \left(c^2 \alpha_3 \frac{N_3}{D_3} + \frac{b_3}{2}\right)q_\mu q^\mu
-
\begin{pmatrix}
\Pi & \Delta
\end{pmatrix}
\begin{pmatrix}
L_1 & L_3\\
L_3 & L_2
\end{pmatrix}
\begin{pmatrix}
\Pi \\ \Delta
\end{pmatrix}.
\end{align}
We emphasize that the convexity of the entropy density is satisfied because, from \eqref{entropyd2}$_1$, we have $h^\alpha_{(1)}U_\alpha =0$ and, from \eqref{entropyd1}, $h^\alpha_{(2)}U_\alpha <0$ everywhere, vanishing only at equilibrium. Therefore the following inequalities are automatically satisfied:
\begin{itemize}
\item $\alpha_4 C_5 <0$,
\item $\displaystyle 2c^2 \alpha_3 \frac{N_3}{D_3} + {b_3}>0$ because $q_\alpha q^\alpha <0$,
\item $L_1>0$,
\item $L_1 \, L_2 - \left(L_3\right)^2>0$.
\end{itemize}
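The four conditions above amount to sign requirements plus Sylvester's criterion for the $2\times 2$ block with entries $L_1, L_2, L_3$. The following is a purely illustrative sketch of this check, in which the function name and all numerical values are our own and are not derived from the model coefficients:

```python
def convex_entropy_conditions(a4C5, qcoef, L1, L2, L3):
    # a4C5  stands for alpha_4 * C_5           (must be negative),
    # qcoef stands for 2 c^2 alpha_3 N_3/D_3 + b_3  (must be positive),
    # (L1, L3; L3, L2) is checked for positive definiteness via
    # Sylvester's criterion: L1 > 0 and det = L1*L2 - L3^2 > 0.
    return (a4C5 < 0) and (qcoef > 0) and (L1 > 0) and (L1 * L2 - L3**2 > 0)

# illustrative numbers only:
assert convex_entropy_conditions(a4C5=-0.3, qcoef=1.2, L1=2.0, L2=1.0, L3=0.5)
# fails when the 2x2 determinant condition is violated:
assert not convex_entropy_conditions(a4C5=-0.3, qcoef=1.2, L1=2.0, L2=0.1, L3=0.5)
```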
\subsection{Entropy production}
According to the theorem proved by Boillat and Ruggeri \cite{BoillatRuggeri-1998} (see also \cite{RET,RS}), the MEP procedure at the molecular level is equivalent to the closure obtained via the entropy principle, and the Lagrange multipliers coincide with the {\emph{main field}}
for which the original system becomes symmetric hyperbolic \cite{RS}. Therefore the closed system satisfies the entropy balance law
\begin{equation}\label{ep}
\partial_\alpha h^\alpha = \Sigma,
\end{equation}
where the entropy four-vector is given by \eqref{halpha}, \eqref{entropyd2}. Concerning
the entropy production $\Sigma$, according to the result of Ruggeri and Strumia \cite{RS}, it is given by the scalar product between the main field components and the production terms \cite{RugStr}. In the present case, we have
\begin{align}\label{EP}
\Sigma = I^{\beta\gamma}\, \lambda_{\beta\gamma}.
\end{align}
By using eq. \eqref{P22} we have
\begin{align}\label{EP1}
\Sigma =\frac{1}{\tau}
\left\{
- \frac{1}{4c^4 } \Delta \, U^\beta U^{\gamma } \lambda_{\beta\gamma} + \Big( \frac{1}{4c^2} \frac{N^\Delta}{D_4}\Delta +\frac{N^\Pi}{D_4} \Pi \Big) h^{\beta\gamma}\lambda_{\beta\gamma} + \Big( -\frac{2}{c^2 } \frac{N_3}{D_3} +\frac{\theta_{1,3}}{\theta_{1,2}}\frac{1}{c^2 } \Big)q^{(\beta} U^{ \gamma )}\lambda_{\beta\gamma} - C_5 t^{<\beta\gamma>_3}\lambda_{\beta\gamma}
\right\}.\nonumber\\
\end{align}
By substituting eqs. \eqref{EP2}-\eqref{EP4} into eq. \eqref{EP1} and recalling that $q^\beta U^\gamma \lambda_{\beta\gamma}= - q_\alpha h^{\alpha \beta} U^\gamma \lambda_{\beta\gamma}$, we obtain $\Sigma$ as a quadratic form:
\begin{align}\label{EP6}
\Sigma = \frac{3 k_B \, C_5}{2 \tau m c^4 \rho\theta_{2,3}} t^{<\beta \gamma>_3} t_{<\beta \gamma>_3} + \frac{9 k_B \theta_{1,1}}{2 \tau m^2\, n\, c^6 D_3}\Big( -2 \frac{N_3}{D_3} +\frac{\theta_{1,3}}{\theta_{1,2}} \Big) q^\alpha q_\alpha
+ \begin{pmatrix}
\Delta & \Pi
\end{pmatrix}
\left(
\begin{matrix}
M_1 & M_2 \\
M_2 & M_3
\end{matrix} \right)
\begin{pmatrix}
\Delta \\ \Pi
\end{pmatrix},
\end{align}
where
\begin{align*}
\begin{split}
&M_1 = \frac{k_B}{16\, c^8 \tau m^2 \,n\, D_4}\left(D_4^{33} + \frac{N^\Delta}{D_4}D_4^{34}\right), \qquad
M_2 = - \frac{k_B}{4\, c^6 \tau m^2 \,n\, D_4} \left(D_4^{43}+\frac{N^\Delta}{D_4}D_4^{44} - \frac{N^\Pi}{D_4} D_4^{34}\right),\\
&M_3 = - \frac{k_B}{ c^4 \tau m^2 \,n\, D_4} \frac{N^\Pi}{D_4} D_4^{44}.
\end{split}
\end{align*}
Sylvester's criterion allows us to state that the quadratic form is positive definite if and only if all the following conditions hold:
\begin{enumerate}
\item $\frac{3 k_B \, C_5}{2 \tau m c^4 \rho\theta_{2,3}}>0$.
\item $\left( -2 \frac{N_3}{D_3} +\frac{\theta_{1,3}}{\theta_{1,2}} \right)\frac{9 k_B \theta_{1,1}}{2 \tau m^2\, n\, c^6 D_3} <0$, because $q^\alpha q_\alpha <0$.
\item $M_1 >0$.
\item $M_1 \, M_3 - (M_2)^2 >0$.
\end{enumerate}
The first condition is automatically satisfied because of the definition of the functions involved.\\
In order to prove the second condition, we consider a spacelike vector $X^\beta$ and the following function, which is positive by construction for every value of $X^\beta$:
\begin{align*}
g(X^\beta) = \frac{U_\alpha}{c \,\tau \, k_B} \int_{\varmathbb{R}^3} \int_0^{+\infty} f_E p^\alpha \left[ X_\beta p^\beta \left( \frac{\theta_{1,3}}{\theta_{1,2}} \left( 1+ \frac{\mathcal{I}}{mc^2}\right) - \frac{2}{mc^2} \left( 1+ \frac{\mathcal{I}}{mc^2}\right)^2 U_\nu p^\nu\right)\right]^2 \phi(\mathcal{I}) d \mathcal{I} d\boldsymbol{P} \, .
\end{align*}
By carrying out the calculation of the above integral and using eqs. \eqref{A1w}, we have
\begin{align*}
g(X^\beta) = \frac{m^2 \, n \, c^2}{\tau \, k_B} \left[ \frac{1}{3}\frac{\left(\theta_{1,3}\right)^2}{\theta_{1,2}} - \frac{2}{5} \theta_{1,4}\right] X^\beta \, X_\beta \, .
\end{align*}
If we choose, as a particular value,
\begin{align*}
X^\beta = - \frac{1}{D_3} \frac{9 \, k_B}{2 \, m^2 \, n \, c^4} \theta_{1,1}\, q^\beta ,
\end{align*}
we obtain
\begin{align*}
g(X^\beta) = \frac{9 k_B \theta_{1,1}}{2 \tau m^2\, n\, c^6 D_3}\Big( -2 \frac{N_3}{D_3} +\frac{\theta_{1,3}}{\theta_{1,2}} \Big) q^\alpha q_\alpha > 0.
\end{align*}
This proves that the second condition is also satisfied.\\
Conditions 3 and 4 can be proved by showing that they are the coefficients of a positive definite quadratic form. In order to obtain the entropy production up to second order, we substitute eq. \eqref{relRETpol}$_4$ into \eqref{EP} and take the collisional term \eqref{P1} up to first order. Then,
\begin{align*}
\Sigma^{(2)} = \frac{c}{m}\int_{\varmathbb{R}^{3}}
\int_0^{+\infty} Q^{(1)} \, p^{\beta} p^{\gamma}\,\lambda_{\beta\gamma} \, \Big(1 + \frac{\mathcal{I}}{mc^2} \Big)^2\,
\phi(\mathcal{I}) \, d \mathcal{I} \, d \boldsymbol{P},
\end{align*}
with \begin{align*}
Q^{(1)} = \frac{f_E}{c^2 \, \tau \, k_B} U_\alpha p^\alpha \left[ \tilde{\chi} - \frac{k_B}{bmc^2} p^\mu q_\mu \left(1+\frac{\mathcal{I}}{mc^2}\right)\right].
\end{align*}
If we substitute for $\lambda_{\beta \gamma}$ its expression obtained from eq. \eqref{fgenE}$_2$, we obtain
\begin{align*}
\Sigma^{(2)} = {c}\int_{\varmathbb{R}^{3}}
\int_0^{+\infty} Q^{(1)} \, \tilde{\chi} \, \phi(\mathcal{I}) \, d \mathcal{I} \, d \boldsymbol{P}.
\end{align*}
In the state where $q^\beta =0$ and $t^{<\alpha \beta >_3}=0$, the Lagrange multipliers and the entropy production assume particular values that we denote with a $*$; in particular,
\begin{align*}
\Sigma^{(2*)} = \frac{c}{m}\int_{\varmathbb{R}^{3}}
\int_0^{+\infty} Q^{(1*)} \, \tilde{\chi}^* \, \phi(\mathcal{I}) \, d \mathcal{I} \, d \boldsymbol{P} = \frac{c}{m}\int_{\varmathbb{R}^{3}}
\int_0^{+\infty}\frac{f_E}{c^2 \, \tau \, k_B} U_\alpha p^\alpha \, \left[\tilde{\chi}^*\right]^2 \phi(\mathcal{I}) \, d \mathcal{I} \, d \boldsymbol{P},
\end{align*}
which is clearly a positive quantity. Moreover, we have
\begin{align*}
\Sigma^{(2*)} = I^{\beta\gamma *}\lambda_{\beta \gamma *}
\end{align*}
which corresponds to the quadratic form
\begin{align*}
\begin{pmatrix}
\Delta & \Pi
\end{pmatrix}
\left(
\begin{matrix}
M_1 & M_2 \\
M_2 & M_3
\end{matrix} \right)
\begin{pmatrix}
\Delta \\ \Pi
\end{pmatrix},
\end{align*}
which, therefore, turns out to be positive definite.
We have therefore proved the following:
\begin{statement}
The entropy density \eqref{conv} is a convex function and attains its maximum at equilibrium. The solutions satisfy the entropy principle \eqref{ep} with an entropy production \eqref{EP6} that is always nonnegative.
According to the general theory of symmetrization, given first in covariant formulation in \cite{RugStr}, and the equivalence between Lagrange multipliers and main field \cite{BoillatRuggeri-1998}, the closed system is symmetric hyperbolic in a neighborhood of equilibrium if we choose as variables the main field variables \eqref{RT}, with coefficients given in \eqref{coef1}, \eqref{coef2}, \eqref{coef3}, and the Cauchy problem is well posed locally in time.
\end{statement}
\section{Diatomic gases}\label{diatomico}
The system \eqref{derivmat} is very complex, in particular because it is not simple to evaluate the function $\omega(\gamma)$: it involves the two integrals in \eqref{10}$_3$, which have no analytical expression for a generic polyatomic gas.
Taking into account the relations \cite{Annals}
\begin{equation*}
J_{2,1} (\gamma) = \frac{1}{\gamma}K_2(\gamma), \qquad \qquad J_{2,2} (\gamma) = \frac{1}{\gamma}\Big(K_3(\gamma)-\frac{1}{\gamma}K_2(\gamma)\Big),
\end{equation*}
where $K_n$ denotes the modified Bessel function of the second kind, we can rewrite $\omega$ given in \eqref{10}$_3$ in terms of the modified Bessel functions \cite{Annals}:
\begin{equation*}
\omega(\gamma) = \frac{1}{\gamma}\left(
\frac{\int_0^{+\infty} K_3(\gamma^*)\, \phi(\mathcal{I}) \, d \, \mathcal{I}}
{\int_0^{+\infty}\frac{K_2(\gamma^*)}{\gamma^*}\phi(\mathcal{I}) \, d \, \mathcal{I}}-1\right).
\end{equation*}
Moreover, to calculate the integrals, we need to prescribe the measure $\phi(\mathcal{I})$.
In \cite{Annals}, the measure $\phi(\mathcal{I})$ was assumed to be
\begin{equation*}
\phi(\mathcal{I}) = \mathcal{I}^a, \qquad a = \frac{D-5}{2},
\end{equation*}
because it is the one for which the macroscopic internal energy, in the classical limit $\gamma \rightarrow \infty$, converges to that of a classical polyatomic gas; here $D$ indicates the degrees of freedom of a molecule.
As observed by Ruggeri, Xiao and Zhao \cite{Xiao}, in the case $a=0$, i.e., $D=5$, corresponding to a diatomic gas, the energy $e$ has an explicit expression similar to that of a monatomic gas:
\begin{equation*}
e = p \left(\frac{\gamma K_0(\gamma)}{K_1(\gamma)}+3\right).
\end{equation*}
Therefore, from \eqref{10}, we have
\begin{equation*}\label{omegadiat}
\omega_{\text{diat}}(\gamma)= \frac{K_0(\gamma)}{K_1(\gamma)} +\frac{3}{\gamma}.
\end{equation*}
Using the following recurrence formula for the modified Bessel functions
\begin{align}\label{recur}
K_n(\gamma) = \frac{\gamma}{2n} \Big(K_{n+1}(\gamma) - K_{n-1}(\gamma) \Big) \, ,
\end{align}
we can express $\omega$ in terms of
\begin{equation*}
G(\gamma) = \frac{K_3(\gamma)}{K_2(\gamma)}.
\end{equation*}
In fact, we immediately obtain the following expression:
\begin{align}\label{omegaD}
\omega_{\text{diat}}(\gamma)= \frac{1}{\gamma } + \frac{ \gamma}{\gamma G -4},
\end{align}
which is a simple function, similar to that of a monatomic gas, for which we have \cite{LMR}:
\begin{align*}
\omega_{\text{mono}}(\gamma) = -1 + \, \gamma \, G.
\end{align*}
Taking into account that the derivatives of the Bessel functions are known, all coefficients appearing in the differential system \eqref{derivmat} can be written explicitly in terms of $G(\gamma)$ by using \eqref{omegaD} and the recurrence formula \eqref{recur}. This is simple to carry out with a symbolic computation system such as Mathematica\textregistered.
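As a numerical illustration of \eqref{omegaD} and of the recurrence \eqref{recur}, the sketch below evaluates $K_n$ through its standard integral representation $K_n(x)=\int_0^\infty e^{-x\cosh t}\cosh(nt)\,dt$ (the quadrature parameters are our own choice, not part of the model) and verifies that $K_0/K_1+3/\gamma$ coincides with $1/\gamma+\gamma/(\gamma G-4)$:

```python
import math

def K(n, x, tmax=20.0, steps=40000):
    # Modified Bessel function of the second kind via the integral
    # representation K_n(x) = \int_0^inf exp(-x cosh t) cosh(n t) dt
    # (trapezoidal rule; adequate accuracy for moderate x).
    h = tmax / steps
    total = 0.5 * (math.exp(-x) +
                   math.exp(-x * math.cosh(tmax)) * math.cosh(n * tmax))
    for i in range(1, steps):
        t = i * h
        total += math.exp(-x * math.cosh(t)) * math.cosh(n * t)
    return total * h

gamma = 2.0
G = K(3, gamma) / K(2, gamma)
omega_direct = K(0, gamma) / K(1, gamma) + 3.0 / gamma   # omega_diat, first form
omega_via_G = 1.0 / gamma + gamma / (gamma * G - 4.0)    # omega_diat via G(gamma)
assert abs(omega_direct - omega_via_G) < 1e-4

# the recurrence K_n = (gamma / 2n)(K_{n+1} - K_{n-1}) used in the reduction:
assert abs(K(2, gamma) - (gamma / 4.0) * (K(3, gamma) - K(1, gamma))) < 1e-4
```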
\section{Ultra-relativistic limit}\label{sec:ultra}
In the ultra-relativistic limit $\gamma \rightarrow 0$, it was proved in \cite{5} and \cite{6} that the energy converges to
\begin{align}\label{25}
e = (\alpha +1) \frac{n \, m \, c^2}{\gamma} \, , \quad \mbox{with} \quad \alpha= \left\{
\begin{matrix}
2 & \mbox{if} \quad a \leq 2 \\
a & \, \, \, \mbox{if} \quad a \geq 2 \, .
\end{matrix} \right.
\end{align}
This implies
\begin{align}\label{omefaultre}
\omega_{\text{ultra}} = \frac{(\alpha +1) }{\gamma} \, , \quad \mbox{with} \quad \alpha= \left\{
\begin{matrix}
2 & \mbox{if} \quad a \leq 2 \\
a & \, \, \, \mbox{if} \quad a \geq 2 \, .
\end{matrix} \right.
\end{align}
By means of this expression, we can evaluate the coefficients $\theta_{h,j}$ in \eqref{thetas} that become:
\begin{align*}
\begin{split}
& \left\{\theta_{0,0},\theta_{0,1},\theta_{0,2},\theta_{0,3},\theta_{0,4}\right\} = \\
& \hspace{1cm} \left\{1,\frac{\alpha+1}{\gamma},\frac{(\alpha+1)(\alpha+2)}{\gamma^2},\frac{(\alpha+1)(\alpha+2)(\alpha+3)}{\gamma^3},\frac{(\alpha+1)(\alpha+2)(\alpha+3)(\alpha+4)}{\gamma^4}\right\}, \\
& \left\{\theta_{1,1},\theta_{1,2},\theta_{1,3},\theta_{1,4}\right\} =
\left\{\frac{1}{\gamma},\frac{3(\alpha+2)}{\gamma^2},\frac{6(\alpha+2)(\alpha+3)}{\gamma^3},\frac{10(\alpha+2)(\alpha+3)(\alpha+4)}{\gamma^4}\right\}, \\
& \left\{\theta_{2,3},\theta_{2,4}\right\} =
\left\{\frac{3(\alpha+2)}{\gamma^3},\frac{15(\alpha+2)(\alpha+4)}{\gamma^4}\right\}.
\end{split}
\end{align*}
It follows that, in the ultra-relativistic limit, we have
\begin{align*}
\frac{N_3}{D_3} = \frac{2(\alpha +3)}{\gamma} \, , \quad \frac{N_{31}}{D_3} = \frac{10}{\gamma} \, , \quad C_5 = \frac{\alpha +4}{\gamma},
\end{align*}
and
\begin{align}\label{26}
\frac{N^\Pi}{D_4} = - \, \frac{\alpha +4}{\gamma} \, , \quad \frac{N^\Delta}{D_4} = - \, \frac{1}{\alpha +1} \, ,
\end{align}
where the last two equations hold for $\alpha \neq 2$ (i.e., $a \neq 2$). For $a = 2$, the ultra-relativistic limit of $\frac{N^\Pi}{D_4}$ and $\frac{N^\Delta}{D_4}$ gives the indeterminate form $\left[\frac{0}{0}\right]$. We show (see Appendix~\ref{app:cont} for details) that it can be resolved by considering higher-order terms for the energy $e$, allowing us to prove that eqs. \eqref{26} are valid also for $a=2$ and, hence, that the closure of the present model is continuous with respect to the parameter $\alpha$ in the ultra-relativistic limit.
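As a cross-check of the limiting coefficients, the combination $\frac{1}{3}\theta_{1,3}^2/\theta_{1,2} - \frac{2}{5}\theta_{1,4}$, which must be negative for the positivity of $g(X^\beta)$ in Section~\ref{entro}, can be evaluated exactly with the limiting $\theta_{1,j}$ above; one finds $-4(\alpha+2)(\alpha+3)/\gamma^4 < 0$. The sketch below verifies this with exact rational arithmetic (the sample values of $\alpha$ and $\gamma$ are our own choice; the sign is independent of them):

```python
from fractions import Fraction

def theta1_ultra(alpha, gamma):
    # limiting theta_{1,2}, theta_{1,3}, theta_{1,4} from the table above
    t12 = Fraction(3 * (alpha + 2), gamma**2)
    t13 = Fraction(6 * (alpha + 2) * (alpha + 3), gamma**3)
    t14 = Fraction(10 * (alpha + 2) * (alpha + 3) * (alpha + 4), gamma**4)
    return t12, t13, t14

for alpha in (2, 3, 5, 7):        # recall alpha = max(a, 2)
    for gamma in (1, 2, 5):       # integer values keep the arithmetic exact
        t12, t13, t14 = theta1_ultra(alpha, gamma)
        bracket = Fraction(1, 3) * t13**2 / t12 - Fraction(2, 5) * t14
        # closed form of the combination, negative as required for g(X) > 0
        assert bracket == Fraction(-4 * (alpha + 2) * (alpha + 3), gamma**4)
        assert bracket < 0
```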
\section{Principal subsystems of ET$_{15}$} \label{sec:sub}
For a general hyperbolic system of balance laws, a system with a smaller set of field equations can be deduced as a \emph{principal subsystem}, which retains the convexity of the entropy and the positivity of the entropy production \cite{BoillatRuggeriARMA}. A principal subsystem is obtained by setting some components of the main field equal to constants and deleting the corresponding balance laws.
Let us recall the system \eqref{Annalis}. The balance law of $A^{\alpha \beta \gamma}$ is divided into the trace part $A^{\alpha \beta}_\beta$ and the traceless part $A^{\alpha < \beta\gamma>}$. As we study below, by deleting the trace part and setting the corresponding component of the main field to zero, we obtain the theory with $14$ fields (ET$_{14}$). On the other hand, by the same procedure on the traceless part, we obtain the theory with $6$ fields (ET$_{6}$). It is remarkable that ET$_{14}$ and ET$_{6}$ are of the same order in the sense of principal subsystems, differently from the classical case, in which the classical ET$_{6}$ is a principal subsystem of the classical ET$_{14}$. Moreover, the relativistic Euler theory is deduced as a principal subsystem by deleting the balance laws of $A^{\alpha\beta\gamma}$ and setting the corresponding components of the main field to zero.
\subsection{ET$_{14}$: 14 fields theory}
ET$_{14}$ is obtained as a principal subsystem of ET$_{15}$ under the condition $\lambda^\alpha_\alpha = 0$. From \eqref{RT}$_3$, this condition expresses $\Delta$ in terms of $\Pi$ as follows:
\begin{align}
\Delta^{(14)} &= - \frac{c^2\alpha_1 -{3}\alpha_2}{c^2 \beta_1 - {3}\beta_2}\Pi
= 4 \frac{N_a}{D_a}c^2 \Pi, \label{subd}
\end{align}
where $N_a = D_4^{44}+D_4^{43}$ and $D_a = D_4^{34}+D_4^{33}$. Then, the independent fields are the following $14$ fields: $(\rho, \gamma, \Pi, U^\alpha, q^\alpha, t^{< \alpha\beta >_3})$.
By deleting the balance equation corresponding to $\lambda^\alpha_\alpha$, that is, the one for $A^{\alpha \beta}_\beta$, the present system of balance equations is the following:
\begin{align}\label{Sub14}
\partial_\alpha V^\alpha =0 \, , \quad \partial_\alpha T^{\alpha \beta} =0 \, , \quad \partial_\alpha A^{ \alpha < \beta \gamma>} = I^{<\beta \gamma>} .
\end{align}
With \eqref{subd}, the constitutive equations are modified in this subsystem.
For comparison with the ET$_{14}$ theory studied in \cite{Annals}, let us denote
\begin{align*}
\frac{N_1^\pi}{D_1^\pi} = -\frac{1}{3}\frac{N_a}{D_a} , \quad
\frac{N_{11}^\pi}{D_1^\pi} = \frac{1}{D_4}\left(\frac{N_a}{D_a}N^\Delta + N^\Pi \right).
\end{align*}
We can prove the following identity:
\begin{align*}
\frac{N_b}{D_a} = - \frac{1}{D_4}\left(\frac{N_a}{D_a}N^\Delta + N^\Pi \right), \quad \text{with} \quad N_b = N^{\Delta 34}+N^{\Delta 33},
\end{align*}
where $N^{\Delta 33}$ and $N^{\Delta 34}$ are the minor determinants of $N^\Delta$ obtained by deleting the third row and third column, and the third row and fourth column, respectively.
Then, as a result, instead of \eqref{24}, the closure for $A^{\alpha \beta \gamma}$ in the present principal subsystem is given by
\begin{align}\label{Sub2}
A^{\alpha \beta \gamma} = \left( \rho \, \theta_{0,2} - \frac{3}{c^2} \, \frac{N^\pi_1}{D_1^\pi} \, \Pi \right) U^\alpha U^\beta U^\gamma \, + \, \left( \rho \, c^2 \theta_{1,2} - 3 \, \frac{N_{11}^\pi}{D_1^\pi} \, \Pi \right) U^{( \alpha} h^{\beta \gamma )} \, + \\
+ \, \frac{3}{c^2} \, \frac{N_3}{D_3} \, q^{( \alpha} U^\beta U^{\gamma )} \, + \, \frac{3}{5} \, \frac{N_{31}}{D_3} \, q^{( \alpha} h^{\beta \gamma )} \, + \, 3 \, C_5 \, t^{( < \alpha \beta >_3} U^{\gamma )} \, . \nonumber
\end{align}
This result is formally the same as the result of \cite{Annals} (eq. (56) of that paper). However, the coefficients differ due to the presence of $\left( mc^2 + \mathcal{I} \right)^n$ instead of $mc^2 + \, n \, \mathcal{I}$ in the integrals.
Similarly, we obtain the production term in this principal subsystem as follows:
\begin{align}\label{Sub3}
I^{< \beta \gamma >} = - \, \frac{1}{c^2 \tau} \,\frac{3N_1^\pi+N_{11}^\pi}{D_1^\pi} \, \Pi \, U^{< \beta} U^{\gamma >} \, + \, \frac{1}{c^2 \tau} \left( \frac{\theta_{1,3}}{\theta_{1,2}} \, - \,
2 \, \frac{N_3}{D_3} \right) \, q^{( \beta} U^{\gamma )} \, - \, \frac{1}{\tau} \, C_5 \, t^{< \beta \gamma >_3} \, .
\end{align}
This expression \eqref{Sub3} is formally the same as the result of \cite{Car1} (eq. (16) of that paper), except that now we have $\frac{\theta_{1,3}}{\theta_{1,2}}$ instead of $\frac{B_2}{B_4}$ defined in \cite{Car1}, and the difference in the integrals appearing in the coefficients is similar to the case of $A^{\alpha\beta\gamma}$.
\emph{The system} \eqref{Sub14} \emph{is symmetric hyperbolic in the main field} $(\lambda,\lambda_{\alpha}, \lambda_{< \mu \nu >})$ \emph{given respectively by} \eqref{RT} \emph{with} $\Delta = \Delta^{(14)}$ \emph{given by} \eqref{subd}.
\subsection{ET$_6$: 6 fields theory}
We consider the principal subsystem with $\lambda_{< \mu \nu >} = \lambda_{\mu \nu} - \frac{1}{4}\lambda_\alpha^\alpha g_{\mu \nu} = 0$, then we have
\begin{align} \label{lag6}
\lambda_{\mu \nu} = \frac{1}{4}\lambda^\alpha_\alpha g_{\mu \nu}.
\end{align}
By comparing it with \eqref{RT}, we have
\begin{align*}
\left(\alpha_1 + \frac{\alpha_2}{c^2}\right)\Pi + \left(\beta_1 + \frac{\beta_2}{c^2}\right)\Delta =0, \qquad q_\mu = 0, \qquad t_{< \mu\nu >_3} =0.
\end{align*}
The first equation indicates that, in this principal subsystem, $\Delta$ is expressed in terms of $\Pi$ as follows:
\begin{align}\label{subd6}
\Delta^{(6)} = - \frac{c^2 \alpha_1 + {\alpha_2}}{c^2 \beta_1 + {\beta_2}}\Pi = w \, \Pi,
\end{align}
where
\begin{equation*}
w = 4 c^2 \frac{D^{44}_4 - 3D^{43}_4 }{D^{34}_4 - 3D^{33} _4}.
\end{equation*}
It should be mentioned that the relation between $\Delta$ and $\Pi$ is different from the case of ET$_{14}$.
The independent fields are now the $6$ fields: $(n, \gamma, U^\alpha, \Pi)$, and the balance equations are the following:
\begin{align}\label{Sub61}
\partial_\alpha V^\alpha =0 \, , \quad \partial_\alpha T^{\alpha \beta} =0 \, , \quad \partial_\alpha A^{\alpha \beta}_{\,\,\, \, \,\beta} = I^\beta_{\,\,\beta} \, ,
\end{align}
where the energy-momentum tensor is now given, instead of \eqref{19}, by
\begin{align}\label{T6}
T^{\alpha \beta} = \frac{e}{c^2} \, U^{\alpha } U^\beta + \, \Big(p \, + \, \Pi\Big)
h^{\alpha \beta},
\end{align}
and, from \eqref{24},
\begin{align}\label{Aa6}
A^{\alpha \beta}_{\,\,\, \, \,\beta} = \left\{\rho c^2 (\theta_{0,2} - \theta_{1,2}) + A_1\right\}\Pi U^\alpha,
\end{align}
where
\begin{align*}
A_1 = - \frac{1}{4c^2}\left\{\left(1+3\frac{N^\Delta}{D_4}\right)\frac{c^2 \alpha_1 + {\alpha_2}}{c^2 \beta_1 + {\beta_2}} - 12c^2 \frac{N^\Pi}{D_4}\right\} = \frac{D_4^{44}-3D_4^{43} + 3 N^{\Delta 34} - 9N^{\Delta 33}}{D_4^{34} - 3 D_4^{33}}.
\end{align*}
Similarly, from \eqref{P22}, we obtain
\begin{align}
I^\beta_{\,\,\beta} = - \frac{A_1}{\tau}\Pi.
\end{align}
The Lagrange multiplier corresponding to $A^{\alpha \beta}_{\,\,\, \, \,\beta}$ is $\mu = \frac{1}{4}\lambda^\alpha_\alpha$, which is obtained from \eqref{lag6} as follows:
\begin{align}\label{mumu}
\mu = c^2 \frac{\alpha_1\beta_2 - \alpha_2 \beta_1}{c^2 \beta_1 + \beta_2}\Pi.
\end{align}
\emph{The system} \eqref{Sub61} \emph{with} \eqref{T6} \emph{and} \eqref{Aa6} \emph{is symmetric hyperbolic in the main field} $(\lambda,\lambda_{\alpha}, \mu)$ \emph{given respectively by (see} \eqref{RT}$_{1,2}$ \emph{):}
\begin{equation}
\lambda = -\frac{g+c^2}{T} +(a_1+ a_2 w)\,\Pi, \qquad \lambda_\alpha = \frac{1}{T}\left(1+(b_1+b_2 w) \Pi
\right) U_\alpha,
\end{equation}
\emph{and $\mu$ given by }\eqref{mumu}.
The closed field equations with the material derivative are obtained as follows:
\begin{align}\label{field6}
\begin{split}
& \dot{\rho} + \rho \, \partial_\alpha \, U_\alpha=0 \, , \, \\
&\frac{e+p+ \Pi}{c^2} \, \dot{U}_\delta \, + \, h_\delta^\mu \, \partial_\mu (p+ \Pi) =0 \, , \, \\
&\dot{e} \, + \, (e+p+ \Pi)\, \partial_\alpha U^\alpha =0 \, , \\
& \dot{\Pi} \, + \, \frac{\rho c^2 \left({\theta}_{0,2}' - {\theta}_{1,2}' \right)}{A_1}\dot{\gamma} \, + \, \frac{\dot{A}_1}{A_1} \Pi \, + \, \Pi \partial_\alpha U^\alpha = - \, \frac{\Pi}{\tau} .
\end{split}
\end{align}
Taking into account
\begin{equation}
h_\delta^\mu \, \partial_\mu (p+ \Pi) = U_\delta \frac{\dot{p}+\dot{\Pi}}{c^2} - \partial_\delta (p+\Pi),
\end{equation}
and from \eqref{10}:
\begin{equation}
\dot{e} = c^2( \dot{\rho} \omega + \rho \omega^\prime \dot{\gamma}), \qquad \dot{p}=\frac{c^2}{\gamma^2}(\gamma \dot{\rho} - \rho \dot{\gamma}),
\end{equation}
the system \eqref{field6} can be put in the normal form:
\begin{align}\label{field6bis}
& \dot{\rho} + \rho \, \partial_\alpha \, U_\alpha=0 \, , \, \nonumber\\
&\left(\rho +\frac{\rho \varepsilon+p+ \Pi}{c^2}\right) \, \dot{U}_\delta \, - \partial_\delta (p+ \Pi)
- \frac{(p+\Pi)}{c^2}\left[1- \frac{1}{A_1 \omega^\prime}\left(
A_1^\prime \frac{\Pi}{\rho c^2} + \frac{A_1}{\gamma^2} +\theta_{0,2}^\prime-\theta_{1,2}^\prime
\right)
\right] U_\delta \, \partial_\alpha U^\alpha
= \frac{\Pi}{\tau c^2 }U_\delta
\, , \, \nonumber\\
&\rho c^2 \omega^\prime\, \dot{\gamma} \, + \, (p+ \Pi)\, \partial_\alpha U^\alpha =0 \, , \\
& \dot{\Pi} \, + \left\{\Pi -\frac{p+\Pi}{\rho c^2 A_1 \omega^\prime}\left[A_1^\prime \Pi +\rho c^2\left({\theta}_{0,2}' - {\theta}_{1,2}' \right)\right]\right\}\partial_\alpha U^\alpha
= - \, \frac{\Pi}{\tau} .\nonumber
\end{align}
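The time derivatives $\dot{p}$ and $\dot{e}$ used in passing to the normal form are direct consequences of \eqref{10}. The following SymPy sketch (variable names are ours; the functional form of $\omega(\gamma)$ is left unspecified) verifies both relations:

```python
import sympy as sp

t, c = sp.symbols('t c', positive=True)
rho = sp.Function('rho')(t)    # rest-mass density along the flow
gam = sp.Function('gamma')(t)  # gamma = m c^2 / (k_B T) along the flow
w = sp.Function('omega')       # omega(gamma), see eq. (10)

p = rho * c**2 / gam           # pressure, eq. (10)
e = rho * c**2 * w(gam)        # energy density, eq. (10)

# dot p = (c^2/gamma^2)(gamma dot rho - rho dot gamma)
p_claim = (c**2 / gam**2) * (gam * rho.diff(t) - rho * gam.diff(t))
assert sp.simplify(p.diff(t) - p_claim) == 0

# dot e = c^2 (dot rho * omega + rho * omega' * dot gamma)
e_claim = c**2 * (rho.diff(t) * w(gam) + rho * w(gam).diff(t))
assert sp.simplify(e.diff(t) - e_claim) == 0
```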
It is extremely interesting that in the relativistic theory the acceleration is influenced by the relaxation time through the right-hand side of \eqref{field6bis}$_2$; this may be important for applications to problems in cosmology.
\subsection{ET$_5$: Euler 5 fields theory}
Let us consider the principal subsystem with $\lambda_{\mu \nu}=0$. This indicates that all nonequilibrium variables are set to zero, i.e.,
\begin{align}
\Pi = \Delta=0, \qquad t_{< \mu\nu>_3} =0, \qquad q_\alpha=0.
\end{align}
The independent fields are the $5$ fields $(n, U^\alpha, \gamma)$, and the balance equations are
\begin{align}\label{Eulers}
\partial_\alpha V^\alpha =0 \, , \qquad \partial_\alpha T^{\alpha \beta} =0,
\end{align}
with
\begin{align}
T^{\alpha \beta} = \frac{e}{c^2} \, U^{\alpha } U^\beta + \, p h^{\alpha \beta}.
\end{align}
\emph{The deduced system is the one of the relativistic Euler theory, and the system \eqref{Eulers} becomes symmetric in the main field $(\lambda = -(g+c^2)/T,\lambda_\alpha = U_\alpha/T)$ obtained first by Ruggeri and Strumia in \cite{RugStr}.}
\section{Maxwellian Iteration and phenomenological coefficients}\label{sec:max}
In order to find the parabolic limit of system \eqref{derivmat} and to obtain the corresponding Eckart equations, we adopt the Maxwellian iteration \cite{Ikenberry} on \eqref{derivmat}, in which only the first-order terms with respect to the relaxation time are retained. The phenomenological coefficients, that is, the heat conductivity $\chi$, the shear viscosity $\mu$ and the bulk viscosity $\nu$, are then identified in terms of the relaxation time.
The Maxwellian iteration amounts to setting the nonequilibrium variables on the left-hand side of equations \eqref{derivmat} to zero:
\begin{align}\label{Pen3}
\begin{split}
& \dot{\rho} - \rho \, h^{\beta \alpha} \, \partial_\beta \, U_\alpha=0 \, , \\
& \frac{e+p}{c^2}\, h_{\delta \beta} \, \dot{U}^\beta \, - \, h_\delta^\mu \, \partial_\mu p =0 \, , \\
& \dot{e} \, - \, (e+p)\, h_\alpha^\mu \, \partial_\mu U^\alpha =0 \, , \\
& \frac{c^2}{3} h_{\delta \beta} \, \Big(\dot{\rho} \theta_{1,2}+\rho \theta'_{1,2}\dot{\gamma}\Big)
- \, \frac{1}{3} \rho c^2 \theta_{1,2} \,
\left[ h_{\delta \beta} \, h_\alpha^\mu \, \partial_\mu \, U^\alpha \, + \, 2
\, h_{\theta ( \delta } \, h_{\beta )}^\mu \, \partial_\mu \, U^\theta \right] = \\
& \quad = \frac{1}{\tau} \, \Big( \frac{1}{4} \, \frac{N^\Delta}{D_4} \,
\frac{\Delta}{c^2} \, + \, \frac{N^\Pi}{D_4} \, \Pi \Big) h_{\delta \beta} \, - \, \frac{1}{\tau}
\, C_5 \, t_{< \delta \beta >_3}\, , \\
& h_{\beta \delta} \, \dot{U}^\beta \Big( \rho \theta_{0,2} c^2 + 2 \frac{1}{3} \rho c^2 \theta_{1,2} \Big) \, - \,
h_\delta^\mu \, c^2 \, \partial_\mu \, \frac{1}{3} \rho c^2 \theta_{1,2} =
\frac{1}{\tau} \, \Big( \frac{N_3}{D_3} \, - \, \frac{\theta_{1,3}}{2 \, \theta_{1,2}} \Big) \, q_\delta \, , \\
&c^4 \Big(\dot{\rho} \theta_{0,2} + \rho \theta'_{0,2}\dot{\gamma} \Big) \, - \,
\, \rho c^4 \Big( \theta_{0,2} + \frac{2}{3} \theta_{1,2} \Big) h_\alpha^\mu \, \partial_\mu \, U^\alpha \, = - \, \frac{1}{4 \, \tau} \, \Delta \, .
\end{split}
\end{align}
From the first three equations of \eqref{Pen3} and taking into account $p=\rho c^2/\gamma, e = \rho c^2 \omega(\gamma) $ (see \eqref{10}), we can deduce
\begin{align}\label{3con}
\dot{\rho} = \rho \, h^{\mu \alpha} \partial_\mu \, U_\alpha , \quad
h_\delta^\mu \, \partial_\mu \, \rho = \rho \frac{\omega \gamma +1}{ c^2} \, h_{\delta \beta} U^\mu \, \partial_\mu \, U^\beta \, + \, \frac{\rho}{\gamma} \,
h_\delta^\mu \, \partial_\mu \, \gamma, \quad \,
\dot{\gamma} = \frac{1}{\gamma \omega'} \, h^{\mu \alpha} \partial_\mu \, U_\alpha \, .
\end{align}
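The last relation in \eqref{3con} follows by combining \eqref{Pen3}$_1$ and \eqref{Pen3}$_3$ with \eqref{10}. A small SymPy sketch (ours; $\Theta$ stands for $h^{\mu\alpha}\partial_\mu U_\alpha$, and a common factor $c^2$ is dropped) confirms it:

```python
import sympy as sp

rho, gam, Theta = sp.symbols('rho gamma Theta', positive=True)
gdot = sp.Symbol('gdot')       # the unknown material derivative of gamma
w = sp.Function('omega')

# (3con)_1 : dot rho = rho * Theta, with Theta = h^{mu alpha} d_mu U_alpha
rhodot = rho * Theta

e = rho * w(gam)               # energy density / c^2, eq. (10)
p = rho / gam                  # pressure / c^2, eq. (10)

# (Pen3)_3 : dot e = (e + p) * Theta,
# with dot e = dot rho * omega + rho * omega' * dot gamma
energy_eq = sp.Eq(rhodot * w(gam) + rho * sp.Derivative(w(gam), gam) * gdot,
                  (e + p) * Theta)
sol = sp.solve(energy_eq, gdot)[0]

# recovers (3con)_3 : dot gamma = Theta / (gamma * omega')
assert sp.simplify(sol - Theta / (gam * sp.Derivative(w(gam), gam))) == 0
```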
Substituting \eqref{3con} into the remaining equations \eqref{Pen3}$_{4,5,6}$, we obtain the solution
\begin{align}\label{I8b}
\begin{split}
& q_{\beta}=-\chi \, h_\beta ^{\alpha}\left [ \partial_{\alpha}T- \frac{T}{c^2}U^{\mu}\partial_{\mu}U^{\alpha}\right ], \\
& \Pi= -\nu \, \partial_{\alpha} U^{\alpha}, \\
& t_{<\beta \delta>_3} =2 \mu \, h^{\alpha}_{\beta}\, h^{\mu}_{\delta}\partial_{<\alpha} U_{\mu>}, \\
& \Delta = \sigma \, \partial_{\alpha} U^{\alpha},
\end{split}
\end{align}
with
\begin{align}\label{I8n}
\begin{split}
& \chi = - \frac{2\rho c^2}{3 B^q T } \left[3 \theta_{0,2} + \theta_{1,2} (1-\omega \,\gamma)\right],\\
& \nu= -\frac{\rho c^2}{3B_2^\Pi} \left\{
\frac{2}{3} \theta_{1,2} - \frac{\theta'_{1,2}}{\gamma \omega' }+ 3 \frac{N^\Delta}{D_4} \Big(\frac{2}{3} \theta_{1,2} - \frac{\theta'_{0,2}}{\gamma \omega'}
\Big)
\right\}, \\
& \mu = -\frac{\rho c^2}{3B^t} \theta_{1,2}\, ,
\end{split}
\end{align}
and
\[
\sigma = \frac{\rho }{B_1^\Delta} \Big( \frac{2}{3} \theta_{1,2} - \frac{\theta'_{0,2}}{\gamma \omega'} \Big),
\]
where $B_2^\Pi, \, B^q,\, B^t$ are explicitly given by \eqref{P3} with the relaxation time $\tau$.
As the first three equations in \eqref{I8b} are the Eckart equations, we deduce that $\chi, \nu, \mu$ are respectively the heat conductivity, the bulk viscosity and the shear viscosity. In addition, we have a new phenomenological coefficient $\sigma$; but since $\Delta$ appears neither in $V^\alpha$ nor in $T^{\alpha\beta}$ (see equation \eqref{19} or the first three equations in \eqref{derivmat}), we conclude that the present theory converges to the Eckart one formed by the first three block equations of \eqref{derivmat} with the constitutive equations \eqref{I8b}, in which the heat conductivity, bulk viscosity and shear viscosity are explicitly given by \eqref{I8n}$_{1,2,3}$.
We introduce, as in \cite{Car2}, the following dimensionless variables:
\begin{align}\label{I8nn}
\begin{split}
&\bar{\chi} = \frac{\rho T \chi}{p^2 \tau} = -\frac{2}{3} \gamma^2 \frac{3 \theta_{0,2} + \theta_{1,2} (1-\omega \,\gamma)}{\frac{\theta_{1,3}}{\theta_{1,2}} \, - 2 \frac{N_3}{D_3} },\\
& \bar{\nu} = \frac{\nu}{p \tau}= -\frac{1}{3} \frac{\gamma}{\frac{N^\Pi}{D_4}} \left\{
\frac{2}{3} \theta_{1,2} - \frac{\theta'_{1,2}}{\gamma \omega' }+ 3 \frac{N^\Delta}{D_4} \Big(\frac{2}{3} \theta_{1,2} - \frac{\theta'_{0,2}}{\gamma \omega'}
\Big)
\right\}, \\
& \bar{\mu} = \frac{\mu}{p \tau}= \frac{\gamma}{3C_5} \theta_{1,2}\, ,
\end{split}
\end{align}
that are functions only of $\gamma$.
\subsection{Ultra-relativistic and classical limit of the phenomenological coefficients}
Taking into account eqs. \eqref{omefaultre} - \eqref{26}, it is simple to obtain the limit of \eqref{I8nn} when $\gamma \rightarrow 0$:
\begin{align*}
\begin{split}
{\bar{\chi}}_\text{ultra} = 0,\qquad
{\bar{\nu}} _\text{ultra} = \frac{2}{3} \frac{\alpha^2 -4}{(1 + \alpha) (4 + \alpha)}, \qquad
{\bar{\mu} }_\text{ultra} =\frac{2 + \alpha}{4 + \alpha}.
\end{split}
\end{align*}
In particular, in the most significant case in which $a\leq 2$, for which $\alpha =2$, we have
\begin{align}\label{I8ultran}
\begin{split}
{\bar{\chi}}_\text{ultra} = 0,\qquad
{\bar{\nu}} _\text{ultra} = 0, \qquad
{\bar{\mu} }_\text{ultra} =\frac{2}{3}.
\end{split}
\end{align}
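The values in \eqref{I8ultran} follow by direct substitution of $\alpha = 2$ into the general ultra-relativistic expressions above; a trivial check with exact rational arithmetic:

```python
from fractions import Fraction

def nu_ultra(alpha):
    # ultra-relativistic limit of the dimensionless bulk viscosity
    return Fraction(2, 3) * (alpha**2 - 4) / ((1 + alpha) * (4 + alpha))

def mu_ultra(alpha):
    # ultra-relativistic limit of the dimensionless shear viscosity
    return Fraction(2 + alpha, 4 + alpha)

# the most significant case alpha = 2 reproduces eq. (I8ultran)
assert nu_ultra(2) == 0
assert mu_ultra(2) == Fraction(2, 3)
```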
Instead, in the classical limit $\gamma \rightarrow \infty$, it was proved in \cite{Annals} that the internal energy $\varepsilon$ converges to the classical internal energy of a polytropic gas: $\varepsilon = (D/2)(k_B/m)T$. Therefore, from \eqref{interenergy2}, $\omega$ converges to
\begin{equation}\label{wclas}
\omega_{\text{class}} = 1+ \frac{D}{2 \gamma}.
\end{equation}
In the present case, using \eqref{wclas}, it is not difficult to evaluate the $\theta_{h,j}$ deduced in \eqref{thetas} in the limit $\gamma \rightarrow \infty$:
\begin{align} \label{thetaclass}
\begin{split}
& \left\{\theta_{0,0},\theta_{0,1},\theta_{0,2},\theta_{0,3},\theta_{0,4}\right\} =
\left\{
1, 1 + \frac{D}{2 \gamma }, 1 + \frac{D}{\gamma}, 1 + \frac{3 D}{2\gamma},
1 + \frac{2 D}{\gamma}
\right\}, \\
& \left\{\theta_{1,1},\theta_{1,2},\theta_{1,3},\theta_{1,4}\right\} =
\left\{
\frac{1}{\gamma}, \frac{3}{\gamma}, \frac{6}{\gamma}, \frac{10}{\gamma}
\right\}, \\
& \left\{\theta_{2,3},\theta_{2,4}\right\} =
\left\{
\frac{3}{\gamma ^2}, \frac{15}{\gamma ^2}
\right\}.
\end{split}
\end{align}
Therefore, in the classical limit, we have
\begin{align}\label{27c}
\frac{N_3}{D_3} = 2, \qquad \frac{N_{31}}{D_3}=\frac{10}{2+D}, \qquad C_5 = 1,
\qquad \frac{N^\Pi}{D_4} = - 1, \qquad \frac{N^\Delta}{D_4} = - \frac{2}{D},
\end{align}
and we find from \eqref{I8nn}
\begin{align}\label{classi}
\begin{split}
{\bar{\chi}}_\text{class} = \frac{D+2}{2},\qquad
{\bar{\nu}} _\text{class} = \frac{2(D-3)}{3D}
, \qquad
{\bar{\mu} }_\text{class} =1,
\end{split}
\end{align}
which are in perfect agreement with the phenomenological coefficients of the classical RET theory \cite{RS}.
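As a consistency check, one can substitute the classical-limit values \eqref{thetaclass}, \eqref{wclas} and \eqref{27c} directly into \eqref{I8nn}$_{2,3}$ and recover \eqref{classi} for $\bar{\nu}$ and $\bar{\mu}$. A SymPy sketch (variable names are ours):

```python
import sympy as sp

g, D = sp.symbols('gamma D', positive=True)

# classical-limit values, eqs. (thetaclass), (wclas), (27c)
theta12 = 3 / g
theta02 = 1 + D / g
omega = 1 + D / (2 * g)
wp = sp.diff(omega, g)           # omega'
t12p = sp.diff(theta12, g)       # theta'_{1,2}
t02p = sp.diff(theta02, g)       # theta'_{0,2}
NPi_D4 = -1                      # N^Pi / D_4
NDel_D4 = -2 / D                 # N^Delta / D_4
C5 = 1

# dimensionless bulk and shear viscosities, eq. (I8nn)_{2,3}
nu_bar = -sp.Rational(1, 3) * g / NPi_D4 * (
    sp.Rational(2, 3) * theta12 - t12p / (g * wp)
    + 3 * NDel_D4 * (sp.Rational(2, 3) * theta12 - t02p / (g * wp)))
mu_bar = g / (3 * C5) * theta12

# recover eq. (classi)
assert sp.simplify(nu_bar - 2 * (D - 3) / (3 * D)) == 0
assert sp.simplify(mu_bar - 1) == 0
```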
\subsection{Phenomenological coefficients in ET$_{14}$ and ET$_{6}$}
By performing the Maxwellian iteration on ET$_{14}$ as a principal subsystem of ET$_{15}$, we may expect a different bulk viscosity to appear. This is because $\Delta$ is related to $\Pi$ by \eqref{subd}, and this relation affects the balance law corresponding to $\Pi$ in ET$_{14}$. In fact, from \eqref{Sub2} and \eqref{Sub3}, we can obtain the closed field equations for $\Pi$, and then, through the Maxwellian iteration as done in \cite{Car2}, we obtain the bulk viscosity for ET$_{14}$ as follows:
\begin{align}\label{nu14}
\bar{\nu}_{14} = \frac{\frac{1}{\omega'}\left(\theta_{0,2}' + \frac{1}{3}\theta_{1,2}'\right) - \frac{8}{9}\gamma \theta_{1,2}}{\left(-1 + \frac{N^\Delta}{D_4}\right)\frac{N_a}{D_a} + \frac{N^\Pi}{D_4}}.
\end{align}
We remark that the heat conductivity and the shear viscosity are the same in ET$_{15}$ and ET$_{14}$.
Similarly, from \eqref{field6bis}$_4$, we obtain the bulk viscosity estimated by ET$_6$ as follows:
\begin{align} \label{nu6}
& \bar{\nu}_6 = - \frac{\theta_{0,2}' - \theta_{1,2}'}{\omega' A_1} \, .
\end{align}
It should be noted that, in the classical case studied in \cite{x}, the bulk viscosities of ET$_{15}$, ET$_{14}$ and ET$_{6}$ are the same. In fact, in the classical limit, $\bar{\nu}_{14}$ and $\bar{\nu}_6$ coincide with $\bar{\nu}_\text{class}$. However, due to the mathematical structure of the relativistic theory, i.e., the fact that the scalar fields $\Pi$ and $\Delta$ appear together in the triple tensor, the principal-subsystem procedure leads to differences among the subsystems.
\subsection{Heat conductivity, Bulk viscosity and Shear viscosity in diatomic gases}
Inserting \eqref{omegaD}, after cumbersome calculations (straightforward with Mathematica), we obtain the phenomenological coefficients in the diatomic case:
{\small
\begin{align*}
&\bar{\chi}=-\frac{\gamma \Big(\gamma^2+2 \gamma G-8\Big)
\left\{\gamma ^4
\Big(G^2-1\Big)+2 \gamma
^2 \Big(G^2+2\Big)-5
\gamma ^3 G-16 \gamma
G+32\right\}^2}
{(\gamma G-4)^3 \left\{\gamma \left[-\gamma ^5+5 \gamma
^3+48 \gamma +\Big(\gamma
^4-6 \gamma ^2-12\Big)
\gamma G^2+\Big(-5 \gamma
^4+12 \gamma ^2+96\Big)
G\right]-192\right\}}, \\
& \bar{\mu} = \frac{\Big(\gamma ^2+2 \gamma G-8\Big)^2}{(\gamma G-4)
\left\{4 \Big(\gamma
^2-8\Big)+\gamma
\Big(\gamma ^2+8\Big)
G\right\}},\\
&\bar{\nu} = \frac{g_1}{3 (\gamma G-4) g_2},
\end{align*}
with
\begin{align*}
&g_1= 4
\gamma ^{15} G
\Big(G^2-1\Big)^2+81920
\gamma ^3 G \Big(7
G^2+20\Big)-196608 \gamma^2
\Big(7 G^2+4\Big)+1024
\gamma ^5 G \Big(21 G^4+660
G^2-392\Big) -\\
& \hspace{1cm} 4096 \gamma
^4 \Big(35 G^4+348
G^2-56\Big)+4 \gamma ^{14}
\Big(G^6-17 G^4+21
G^2-5\Big)+\gamma ^{13} G
\Big(7 G^6-86 G^4+435
G^2-256\Big)+\\
& \hspace{1cm}
4 \gamma
^{12} \Big(-40 G^6+193
G^4-331 G^2+48\Big)+4
\gamma ^{11} G \Big(-14
G^6+422 G^4-943
G^2+500\Big)+16 \gamma ^{10}
\Big(77 G^6-660 G^4+677
G^2-84\Big)+\\
& \hspace{1cm}
16 \gamma ^9 G
\Big(7 G^6-714 G^4+2560
G^2-1108\Big)-64 \gamma ^8
\Big(45 G^6-910 G^4+1472
G^2-204\Big)+ \\
& \hspace{1cm} 64 \gamma ^7
G \Big(G^6+492 G^4-2800
G^2+1760\Big)-256 \gamma
^6 \Big(7 G^6+740 G^4-1344
G^2+192\Big)+1835008 \gamma G - 1048576, \\
& g_2 = \Big(\gamma ^4 \Big(G^2-1\Big)+\gamma ^2 \Big(G^2+4\Big)-5 \gamma ^3 G-8 \gamma G+16\Big) \Big(\gamma \Big(2 \gamma ^9 G^2 \Big(G^2-1\Big)+5 \gamma ^8 G \Big(1-3 G^2\Big)+ \gamma ^7 \Big(19 G^4-17 G^2+28\Big)+\\
&\hspace{1cm} 40 \gamma ^6 G \Big(6-5 G^2\Big)-4 \gamma ^5 \Big(13 G^4-198 G^2+60\Big)+64 \gamma ^4 G \Big(11 G^2-25\Big)-32 \gamma ^3 \Big(G^4+108 G^2-52\Big)+\\
&\hspace{1cm} 512 \gamma ^2 G \Big(G^2+14\Big)-1024 \gamma \Big(3 G^2+5\Big)+8192 G\Big)-8192\Big).
\end{align*}
}
\normalsize
Let us compare the phenomenological coefficients with those obtained for the monatomic case in \cite{Car2}. In Fig.~\ref{fig:chidiamono}, we plot the dependence of the dimensionless heat conductivity and shear viscosity on $\gamma$ for both diatomic and monatomic cases. Concerning $\nu$, we also plot in Fig.~\ref{fig:nubarr} the dimensionless bulk viscosity of ET$_{14}$ derived in \eqref{nu14}. We observe that, in the ultra-relativistic and classical limits, the figures are in perfect agreement with the limits \eqref{I8ultran} and \eqref{classi} (for $D=3,5$). We remark, as is evident in Fig.~\ref{fig:nubarr}, how small the bulk viscosity of a monatomic gas is with respect to that of a diatomic gas.
It is also remarkable that the value of the bulk viscosity of ET$_6$ given by \eqref{nu6} is quite close to that of ET$_{15}$. For this reason, we omit the plot of $\bar{\nu}_6$ in the figure. This indicates that ET$_6$ captures the effect of the dynamic pressure consistently with ET$_{15}$.
\begin{figure}[h]
\includegraphics[width=0.48\linewidth]{gam-chi} \hspace{0.5cm} \includegraphics[width=0.48\linewidth]{gam-mu}
\caption{Dependence of $\bar{\chi}$ (left) and $\bar{\mu}$ (right) on $\gamma$ for diatomic (red solid line) and monatomic (black dashed line) gases. The dotted line indicates the corresponding value in the classical limit. In the ultra-relativistic limit ($\gamma \to 0$), $\bar{\chi}_\text{ultra}=0$ and $\bar{\mu}_\text{ultra}=2/3$ both for monatomic and diatomic gases. In the classical limit ($\gamma \to \infty$), $\bar{\chi}_\text{class}=2.5$ and $\bar{\mu}_\text{class}=1$ for a monatomic gas, and $\bar{\chi}_\text{class}=3.5$ and $\bar{\mu}_\text{class}=1$ for a diatomic gas.}
\label{fig:chidiamono}
\end{figure}
\begin{figure}[h]
\includegraphics[width=0.48\linewidth]{gam-nu}
\caption{Dependence of $\bar{\nu}$ on $\gamma$ for diatomic (red solid line) and monatomic (black dashed line) gases. The prediction by ET$_{14}$ as a principal subsystem of ET$_{15}$ is also shown with the dotted line. In the ultra-relativistic limit ($\gamma \to 0$), $\bar{\nu}_\text{ultra}=0$ both for monatomic and diatomic gases. In the classical limit ($\gamma \to \infty$), $\bar{\nu}_\text{class}=0$ for a monatomic gas, and $\bar{\nu}_\text{class}=4/15$ for a diatomic gas.}
\label{fig:nubarr}
\end{figure}
\section{Classic limit of the relativistic theory}\label{sec:classical}
We now perform the classical limit $\gamma \rightarrow \infty $ of the closed relativistic system \eqref{derivmat}. For this purpose we recall the limits of the coefficients given in \eqref{thetaclass} and \eqref{27c}. Moreover,
taking into account the decomposition $U^\alpha \, \equiv \, \Big(\Gamma \, c \, , \, v^i\Big)$, where $\Gamma$ is the Lorentz factor, we have $\partial_\alpha U^\alpha = \, \frac{1}{c} \, \partial_t \Big(\Gamma \, c\Big) + \, \partial_k \, \Big(\Gamma \, v^k\Big)$, whose limit is $\partial_i v^i$ because $\partial_t \, \Gamma = - \, \Gamma^3 \frac{v_i}{c^2} \, \partial_t v^i$ has zero limit, and a similar evaluation applies to $\partial_k \, \Gamma$. Then,
\begin{align*}
\frac{1}{c^2} \, U^\mu \partial_\mu U^0 = \frac{1}{c^2} \, \Gamma \, c \, \frac{1}{c} \, \partial_t \, \Big( \Gamma \, c \Big) \, + \,
\frac{1}{c^2} \, \Gamma \, v^k \, \partial_k \, \Big( \Gamma \, c \Big) \quad \mbox{has 0 limit} \, , \\
\frac{1}{c^2} \, U^\mu \partial_\mu U^i = \frac{1}{c^2} \, \Gamma \, c \, \frac{1}{c} \, \partial_t \, \Big( \Gamma \, v^i \Big) \, + \,
\frac{1}{c^2} \, \Gamma \, v^k \, \partial_k \, \Big( \Gamma \, v^i \Big) \quad \mbox{has 0 limit} \, .
\end{align*}
Concerning the projection operator in the limit, it is necessary to recall that, with our choice of the metric, $v_j= - \, v^j$; then
\begin{align*}
h^{\beta \alpha} = &- \, g^{\beta \alpha} + \frac{U^\beta U_\alpha}{c^2} \quad \rightarrow \quad h^{i j} = - \, g^{i j} + \Gamma^2 \, \frac{v^i v^j}{c^2} \quad \rightarrow \quad \lim_{c \, \rightarrow \, + \infty} \, h^{i j} = - \, g^{i j} = \delta^{i j} , \\
& \lim_{c \, \rightarrow \, + \infty} \, h^{i}_j = - \, g^{i}_j = - \, \mbox{diag} \, (1, \, 1, \, 1) \, . \nonumber
\end{align*}
Moreover, from
\begin{align*}
\begin{split}
& 0 = U_\alpha h^{i \alpha} = \Gamma \, c \, h^{i 0} + \Gamma \, v_k \, h^{i k} \quad \rightarrow \quad h^{i 0} = - \frac{v_k}{c} \, h^{i k} \, , \\
& \, 0 = U_\alpha h^{0 \alpha} = \Gamma \, c \, h^{0 0} + \Gamma \, v_k \, h^{0 k} \quad
\rightarrow \quad h^{0 0} = - \frac{v_k}{c} \, h^{0 k} = - \frac{v_a v_b}{c^2} \, h^{a b}\, .
\end{split}
\end{align*}
The last two relations hold also without taking the non-relativistic limit. As a consequence, we have $ \lim_{c \, \rightarrow \, + \infty} \, h^{i 0} = 0$ and $\lim_{c \, \rightarrow \, + \infty} \, h^{00} = 0$.
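The projector limits quoted above are easy to check symbolically. The following sketch (restricted, for brevity, to two velocity components and to the representative entries $h^{11}$, $h^{12}$) confirms that the spatial part tends to $\delta^{ij}$:

```python
import sympy as sp

c = sp.Symbol('c', positive=True)
v1, v2 = sp.symbols('v1 v2', real=True)   # two spatial velocity components

Gamma = 1 / sp.sqrt(1 - (v1**2 + v2**2) / c**2)   # Lorentz factor

# h^{ij} = -g^{ij} + Gamma^2 v^i v^j / c^2, with -g^{ij} = delta^{ij}
h11 = 1 + Gamma**2 * v1**2 / c**2
h12 = Gamma**2 * v1 * v2 / c**2

assert sp.limit(h11, c, sp.oo) == 1   # -> delta^{11}
assert sp.limit(h12, c, sp.oo) == 0   # off-diagonal terms vanish
```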
The relativistic material derivative \eqref{matde} of a function $f$ converges to the classical material derivative, which we continue to indicate with a dot. Then, in the classical limit, the system \eqref{derivmat} becomes:
\begin{align}
\begin{split}
&\dot{\rho} + \rho \frac{\partial v_l}{\partial x_l} = 0,\\
&\rho \dot{v}_i + \frac{\partial p}{\partial x_i} + \frac{\partial \Pi}{\partial x_i} - \frac{\partial \sigma_{\langle ik\rangle}}{\partial x_k} =0,\\
&\dot{T} + \frac{2T}{Dp} \left\{(p+\Pi)\frac{\partial v_l}{\partial x_l} - \sigma_{\langle ik\rangle}\frac{\partial v_k}{\partial x_i}+\frac{\partial q_l}{\partial x_l}\right\}
=0,\\
&\dot{\Pi} + \frac{2}{3}\frac{D -3}{D}p \frac{\partial v_l}{\partial x_l} + \frac{5D-6}{3D}\Pi \frac{\partial v_l}{\partial x_l} - \frac{2}{3}\frac{D-3}{D}\sigma_{\langle lk\rangle}\frac{\partial v_l}{\partial x_k} + \frac{4(D-3)}{3D(D+2)}\frac{\partial q_l}{\partial x_l} = - \frac{1}{\tau }\, \Pi,\\
&\dot{\sigma}_{\langle ij\rangle} + \sigma_{\langle ij\rangle}\frac{\partial v_l}{\partial x_l} + 2\sigma_{\langle l\langle i\rangle}\frac{\partial v_{j\rangle}}{\partial x_l} - 2 (p+\Pi)\frac{\partial v_{\langle j}}{\partial x_{i\rangle}} - \frac{4}{D+2} \frac{\partial q_{\langle i}}{\partial x_{j\rangle}} = - \frac{1}{\tau } \sigma_{\langle ij\rangle},\\
& \dot{q}_i + \frac{D+4}{D+2}q_i \frac{\partial v_l}{\partial x_l} + \frac{D+4}{D+2} q_l \frac{\partial v_i}{\partial x_l} + \frac{2}{D+2}q_l \frac{\partial v_l}{\partial x_i} \\
& \qquad + \frac{D+2}{2}\frac{p}{\rho T}\left\{\left(p+\Pi \right)\delta_{il} - \sigma_{\langle il\rangle}\right\}\frac{\partial T}{\partial x_l} - \frac{p}{\rho^2} \left(\Pi \delta_{il} - \sigma_{\langle il\rangle}\right)\frac{\partial \rho}{\partial x_l} \\
&\qquad + \frac{1}{\rho}\left\{(p-\Pi)\delta_{il}+ \sigma_{\langle il\rangle} \right\}\left(\frac{\partial \Pi}{\partial x_l} - \frac{\partial \sigma_{\langle rl\rangle}}{\partial x_r}\right) +\frac{1}{2D}\frac{\partial \Delta}{\partial x_i} = - \frac{1}{\tau }\, q_i,\\
& \dot{\Delta} + \left(\frac{D+4}{D}\Delta + 8\frac{p}{\rho}\Pi \right)\frac{\partial v_l}{\partial x_l} - 8 \frac{p}{\rho}\sigma_{\langle ik\rangle}\frac{\partial v_i}{\partial x_k} - \frac{8}{\rho}q_i \frac{\partial p}{\partial x_i} \\
&\qquad + 4(D+4)\frac{p}{\rho T}q_l \frac{\partial T}{\partial x_l}
+ \frac{8p}{\rho} \frac{\partial q_l}{\partial x_l} - \frac{8}{\rho}q_i \frac{\partial \Pi}{\partial x_i} + \frac{8}{\rho} q_i \frac{\partial \sigma_{\langle il\rangle}}{\partial x_l} = - \frac{1}{\tau }\Delta,
\end{split}
\label{fieldeqsMpoly}
\end{align}
where $\sigma_{\langle ij\rangle} = -t_{\langle ij\rangle}$. The system \eqref{fieldeqsMpoly} coincides perfectly with the one obtained recently in \cite{x}.
We remark that, as studied in \cite{x}, for classical polytropic gases, ET$_{14}$ is derived as a principal subsystem of ET$_{15}$ by setting $\Delta=0$. Moreover, ET$_6$ is derived as a principal subsystem of ET$_{14}$ by setting $\sigma_{\langle ij\rangle}=0$ and $q_i=0$. This corresponds to the fact that, in the classical limit, both $\Delta^{(14)}$ defined in \eqref{subd} and $\Delta^{(6)}$ defined in \eqref{subd6} vanish.
\bigskip
\noindent
\acknowledgments{
The work has been partially supported by JSPS KAKENHI Grant Numbers JP18K13471 (TA), by the Italian MIUR through the PRIN2017
project ``Multiscale phenomena in Continuum Mechanics:
singular limits, off-equilibrium and transitions'', Project Number:
2017YBKNCE (SP) and GNFM/INdAM (MCC, SP and TR).
}
\conflictsofinterest{The authors declare no conflict of interest.}
\section{Introduction}
Since the advent of scientific communities, there has been high demand for collaboration recommendation. Due to the complex nature of the interconnections between researchers, this domain resisted successful automation for a long time.
Given the underlying graph structure of collaboration networks, we propose to use recently emerged graph neural networks (GNNs) to efficiently predict research cooperation between scientists. This branch of machine learning has already demonstrated strong performance in a wide range of areas related to recommender systems \cite{graph_recommender}. Algorithms such as Node2Vec \cite{node2vec}, Attri2Vec \cite{a2v}, and GraphSAGE \cite{GraphSAGE} can be trained to capture the structural features of a co-authorship network. Embeddings produced by these methods can be effectively applied to various forecasting tasks, including the prediction of network connections \cite{liu2010link}.
Graph neural networks allow us not only to boost performance on the straightforward link prediction task on the co-authorship network, but also to improve the quality of such forecasts by aggregating additional information from the citation graph. Further development of the discussed pipeline can simplify the collaboration assessment process for R$\&$D team management.
\section{Related work}
Recommender systems for scientific communities have a long history of development.
Early approaches in this area \cite{sie2012whom, recommender2017} were based on deterministic network information, which lacked the ability to represent complex features of graph data.
Learning algorithms for link prediction resolved a variety of issues related to capturing graph intricacies. Explicit examples of such models are the local random walk \cite{liu2010link} and local naive Bayes \cite{liu2011link}.
Various metrics, such as the content similarity LDAcosin \cite{chuan2018link}, were also applied in this field. Another popular method \cite{makarov_recsys} leverages linear regression on node feature vectors and a set of graph measures \cite{makarov2017scientific}. However, implementing a learned feature extractor instead of deterministic metrics can significantly boost the performance of such pipelines. Applied to collaboration network structures, the usage of graph neural networks becomes natural \cite{zhang2018link}.
Despite the promising results achieved by graph neural networks, different techniques can be implemented to further increase their efficiency. These methods include the alteration of graph topology \cite{singh2021edge} and the usage of task-independent techniques like Node2Vec to create initial node representations that serve as better inputs for GNNs \cite{gupta2021integrating}.
\section{Data}
We use the classic HEP-TH dataset \cite{hep-th} as the subject for further development and extension. It consists of citation and co-authorship graphs obtained from the arXiv papers published between January 1993 and April 2003 in the area of high energy physics theory. Unfortunately, there is no connection between these two parts of the dataset (authors' IDs were not provided in the citation network), which makes the original co-authorship part unusable for our purposes.
To the best of our knowledge, the potential of the HEP-TH citation network has never been explicitly revealed. Due to the presence of highly useful paper metadata (such as author lists or abstracts), the range of dataset usage can be significantly extended. To complete our current research, we process the citation graph metadata in order to restore the corresponding co-authorship network. To preserve the homogeneous nature of the reconstructed graph, we discard all anonymous papers from the citation graph.
Along with the information about authors and abstracts, HEP-TH provides a journal reference field with the publisher output information. The presence of such data gives us an opportunity to extract the ISSNs of indexed publications and further parse scientific metrics (quartile, h-index, and impact factor) from the ``SCImago Journal \& Country Rank'' website.
\section{Methods}
\sloppy
The explored architecture consists of two subsequently applied graph neural networks. The first model generates vector representations of the publications according to their abstracts and the structure of the citation network. As its input we use the directed citation graph $G(V, A, X)$, where $V = \{ v_1, v_2, \dots, v_n\}$ is the set of graph nodes (articles), $A: n \times n \rightarrow \{0, 1\}$ is the adjacency matrix (each edge encodes a citation between two papers), and $X: n \times d \rightarrow \mathbf{R}$ denotes the matrix of node features (abstracts vectorized via pre-trained FastText \cite{fasttext}).
We perform a set of computational experiments using the previously discussed unsupervised methods GraphSAGE, Node2Vec, and Attri2Vec. In the following, we briefly describe each of them to clarify their usage as feature extractors.
\textbf{GraphSAGE}. This method aggregates information about the set of neighbor nodes $N(v)$ in order to produce the embedding of node $v$:
\begin{equation}
h^{l+1}_v = \sigma(W^l \cdot \operatorname{CONCAT}(h^l_v, h^{l+1}_{N(v)})),
\end{equation}
where $l + 1$ is the index of the current convolutional layer, $h^0_v = x_v$, $W^l$ is the matrix of learnable parameters, and $h^{l+1}_{N(v)}$ can be obtained with different aggregation functions such as the max-pooling or mean aggregator.
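As an illustration, the mean-aggregator variant of this update can be sketched in plain NumPy; the function name \texttt{graphsage\_layer} and the choice of ReLU for $\sigma$ are our own assumptions, not part of the original implementation:

```python
import numpy as np

def graphsage_layer(H, adj, W):
    """One GraphSAGE convolution with a mean aggregator.

    H   : (n, d) node embeddings h^l from the previous layer.
    adj : (n, n) binary adjacency matrix A.
    W   : (d_out, 2*d) learnable weight matrix W^l.
    Returns the (n, d_out) embeddings h^{l+1}.
    """
    deg = adj.sum(axis=1, keepdims=True).clip(min=1)
    h_neigh = (adj @ H) / deg                  # mean over the neighborhood N(v)
    z = np.concatenate([H, h_neigh], axis=1)   # CONCAT(h^l_v, h^{l+1}_{N(v)})
    return np.maximum(z @ W.T, 0.0)            # sigma taken here as ReLU
```

Stacking several such layers lets each node aggregate information from progressively larger neighborhoods.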
\textbf{Node2Vec}. This algorithm generates sequences of nodes via second-order random walks and uses them as the input data for a skip-gram model. The skip-gram model forms pairs of input and context nodes and feeds them to a feedforward neural network, whose weights serve as the desired node embeddings after optimizing the following objective:
\begin{equation}
\max _{f} \sum_{u \in V}\left[-\log Z_{u}+\sum_{n_{i} \in N(u)} f\left(n_{i}\right) \cdot f(u)\right],
\end{equation}
where $Z_{u}=\sum_{v \in V} \exp (f(u) \cdot f(v))$ is the per-node partition function and $f(\cdot)$ is the mapping from nodes to embeddings.
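For a fixed embedding matrix, the objective above can be evaluated exactly (i.e. without the negative sampling used in practice); in the following sketch, each (center, context) pair from the walks contributes one skip-gram term, an accounting choice of ours:

```python
import numpy as np

def skipgram_log_likelihood(emb, walks, window=2):
    """Exact (unsampled) skip-gram objective for fixed node embeddings.

    emb   : (n, d) embedding matrix, row u = f(u).
    walks : iterable of node-index sequences from the random walks.
    Returns the sum over (center, context) pairs of f(c).f(u) - log Z_u.
    """
    scores = emb @ emb.T                        # f(u).f(v) for all pairs
    logZ = np.log(np.exp(scores).sum(axis=1))   # per-node partition function Z_u
    total = 0.0
    for walk in walks:
        for i, u in enumerate(walk):
            lo, hi = max(0, i - window), min(len(walk), i + window + 1)
            for j in range(lo, hi):
                if j != i:                      # context nodes within the window
                    total += scores[u, walk[j]] - logZ[u]
    return total
```

Since each term is a log-softmax, the objective is non-positive and is maximized when co-occurring nodes have aligned embeddings.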
\textbf{Attri2Vec}.
The last considered model uses the image $f(x_i)$ of node $v_i$ with feature vector $x_i$ in a new attribute subspace to predict its context nodes.
In this method, the task is to solve the joint optimization problem
\begin{equation}
\min _{W^{\text{in}}, W^{\text{out}}} -\sum_{i=1}^{|V|} \sum_{j=1}^{|V|} n\left(v_{i}, v_{j}\right) \log \frac{\exp \left(f\left(v_{i}\right) \cdot w_{j}^{\text{out}}\right)}{\sum_{k=1}^{|V|} \exp \left(f\left(v_{i}\right) \cdot w_{k}^{\text{out}}\right)},
\end{equation}
where $n(v_i, v_j)$ is the number of times that $v_j$ occurs in the context of $v_i$ within a window of size $t$ in the generated set of random walks, $W^{\text{in}}$ is the weight matrix from the input layer to the hidden layer, and $W^{\text{out}}$ is the weight matrix from the hidden layer to the output layer.
The link prediction step follows the generation of the publication embeddings. For this task, we consider the co-authorship graph $\hat{G}(\hat{V}, \hat{A}, \hat{X})$, where $\hat{V} = \{\hat{v}_1, \hat{v}_2, \dots, \hat{v}_m\}$ is the set of graph vertices (authors), $\hat{A} \in \{0, 1\}^{m \times m}$ is the adjacency matrix (each edge encodes a collaboration between two authors), and $\hat{X} \in \mathbf{R}^{m \times k}$ denotes the matrix of node features (one-hot encoded research interests of the authors). In order to supply the predictive model with additional data about the publications of the authors, we extend each element $\hat{x}_i$ of the matrix $\hat{X}$ to $\hat{x}'_i$ as follows:
\begin{equation}
\hat{x}'_i = \operatorname{CONCAT} \Big(\hat{x}_i, \sum_{e\in e_i} e\Big),
\end{equation}
where $e_i$ denotes the set of publication embeddings of the $i$-th author.
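A minimal sketch of this feature extension, assuming each author's papers are given as lists of row indices into the article-embedding matrix (the names are ours):

```python
import numpy as np

def extend_author_features(X_hat, paper_embs, author_papers):
    """Concatenate author interest vectors with summed publication embeddings.

    X_hat        : (m, k) one-hot research-interest matrix.
    paper_embs   : (n, d) article embeddings from the first GNN.
    author_papers: length-m list of paper-index lists, one per author.
    Returns the (m, k + d) extended feature matrix.
    """
    d = paper_embs.shape[1]
    sums = np.stack([paper_embs[idx].sum(axis=0) if idx else np.zeros(d)
                     for idx in author_papers])
    return np.concatenate([X_hat, sums], axis=1)  # x'_i = CONCAT(x_i, sum_e e)
```

Summing (rather than averaging) the paper embeddings keeps the number of publications implicitly encoded in the feature magnitude.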
After the concatenation, the extended graph is passed as input to a two-layer GraphSAGE with a link classifier. It constructs the embedding of a potential link by applying a binary operator to the pair of node embeddings (we consider the L1, L2, Hadamard, average, and inner-product operators \cite{link_embs_operators}). Finally, these link embeddings are passed through a dense classification layer to obtain the probabilities of link existence in the network. The whole pipeline is illustrated in Figure~\ref{pipeline}.
\begin{figure}[t]
\includegraphics[width=\textwidth]{images/pipeline.pdf}
\caption{Example configuration of the described architecture based on GraphSAGE convolutions.}
\label{pipeline}
\end{figure}
The last model is trained by minimizing the binary cross-entropy loss
\begin{equation}
L=-\frac{1}{N} \sum_{i=1}^{N}\left[y_{i} \cdot \log \hat{y}_{i}+\left(1-y_{i}\right) \cdot \log \left(1-\hat{y}_{i}\right)\right],
\end{equation}
where $N$ is the output size (the number of evaluated links), $y_i$ are the true link labels, and $\hat{y}_i$ are the predicted link existence probabilities.
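Numerically, this loss can be computed as below; the clipping constant is our own safeguard against $\log 0$:

```python
import numpy as np

def binary_cross_entropy(y_true, y_prob, eps=1e-12):
    """Mean binary cross-entropy over the predicted link probabilities."""
    p = np.clip(y_prob, eps, 1.0 - eps)  # guard against log(0)
    return -np.mean(y_true * np.log(p) + (1.0 - y_true) * np.log(1.0 - p))
```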
\section{Results}
We performed training and evaluation on a machine with a single Tesla V100 GPU and 96~GB of RAM. The weights of the neural networks were updated with the Adam optimizer. We divided the graph edges into training, validation, and test samples in a 3:1:2 ratio.
We evaluated a set of models with various link embedding operators, representation learning models, and aggregation functions. As the main quality metrics, we chose binary accuracy, AUC-ROC, and F1-score. The embeddings received from the models were tested in a supervised GraphSAGE setup applied to the link prediction task.
\begin{table}[t]
\begin{center}
\fontfamily{phv}\selectfont
\begin{tabular}{lccccc}
\rowcolor{LightCyan}
\rule{0pt}{4ex}
Article embedding & Author embedding & LP op. & Accuracy & AUC-ROC & F1-score \hspace{2ex} \\[2ex]
\rule{0pt}{4ex}
-- & GraphSAGE (Mean) & L2 & 0.8793 & 0.9442 & 0.8817 \\
\rule{0pt}{4ex}
FastText & GraphSAGE (Mean) & Had & 0.8844 & 0.9486 & 0.8828 \\
\rule{0pt}{4ex}
GraphSAGE (Mean) & GraphSAGE (Mean) & Had & 0.8895 & 0.9568 & \textbf{0.8911} \\
\rule{0pt}{4ex}
GraphSAGE (Mean) & GraphSAGE (Mean) & L2 & \textbf{0.8928} & 0.9531 & 0.8885 \\
\rule{0pt}{4ex}
GraphSAGE (Mean) & GraphSAGE (MaxPool) & L1 & 0.8638 & \textbf{0.9617} & 0.8489 \\
\end{tabular}
\end{center}
\caption{Results of different models on the test sample}
\label{results_table}
\end{table}
We conducted experiments with more than 80 different configurations and report the key results in Table \ref{results_table}. The first two baselines were selected as the best among the approaches using either the vectors of author interests without citation network information or just the embeddings of the abstracts. The comparison of these simpler architectures with the full models, which include two graph neural networks, can be interpreted as an ablation study.
As shown in the table, the proposed aggregation algorithm positively influences the quality of scientific collaboration forecasting. The model without any citation data suffers significantly from the lack of expressive input features, as does the pipeline that includes only the abstracts of the papers. The obtained results allow us to conclude that the proposed aggregation strategy efficiently utilizes citation graph properties.
\section{Conclusion and Outlook}
In the present paper, we introduced a two-stage pipeline for the collaboration prediction task. The performed computational experiments reveal the promise of utilizing citation data to extend a co-authorship network. The embeddings generated by GNNs effectively capture the network properties, including its topology and the vectorized abstracts represented as features of the corresponding graph nodes.
Our main contributions in the present work are the following:
\begin{enumerate}
\item We perform extraction of the co-authorship graph from the corresponding HEP-TH citation network.
\item The presence of useful metadata allows us to parse scientific significance measures of the publications (e.g., impact factor).
\item We aggregate structural data from the citation graph and apply it to the co-authorship network in order to evaluate its influence on the link prediction quality.
\end{enumerate}
Along with future improvements of link prediction methods in the area of scientific collaboration, we intend to explore qualitative and quantitative approaches to assessing the predicted links. As a probabilistic estimate of a collaboration is not sufficient on its own, it is important to extend it with less abstract metrics. In future work, we are going to leverage the average impact factor and the total number of publications for this task.
\section*{Acknowledgements}
We acknowledge fruitful discussions with Natalia Semenova.
\bibliographystyle{splncs04}
\section{Introduction}
The cars people drive today are complex cyber-physical systems involving tight interaction between rapidly evolving car technologies and their human users, the drivers.
To meet the needs and preferences of (at least) drivers, the infotainment system is increasingly integrated with the setups for physical preferences, such as seating configuration, driving style and air conditioning, as well as for non-physical preferences, such as music to play, preferred numbers to call and on-line payment details.
A plethora of data thus originates, whose processing enhances the driving experience and extends towards increased support for autonomous driving, a goal of great interest at present.
Modern cars also come with Internet connectivity ensuring, at least, that car software always gets over-the-air updates from the manufacturer. Cars expose services remotely via dedicated apps that the driver installs on their smartphone to remotely operate functions such as electric doors, air conditioning, headlights, horn and even start/stop the engine. Therefore, car and driver's smartphone apps form a combined system that exposes innovative services, including locating the car remotely via GPS or even geo-fencing it, so that the app user would be notified if their car ever exceeds a predefined geographical area~\cite{connectedcarfeatures}.
Because cars progressively resemble computers, offering services while treating personal data, they also attract various malicious actors.
The field of car cyber-security shows that software vulnerabilities can be exploited on a Jeep~\cite{remoteattackJeep}, on a General Motors~\cite{GM2015} as well as on a Tesla Model S~\cite{Tesla2016}. Such vulnerabilities may, in particular, impact data protection, and
the sequel of this manuscript will discuss the variety of personal data treated through cars, thus calling for compliance, at least in the EU, with the General Data Protection Regulation (GDPR)~\cite{gdpr}. Car cyber-security (``security'', in brief) is certainly more recent than car safety, hence our overarching research goal is to understand whether the former is understood as well as the latter is.
We formulate the hypothesis that privacy concerns decrease when trust perceptions of the underlying security and data protection measures are correspondingly high. In other words, if a driver feels that their personal data is protected, then that is because the driver trusts that the car is secure.
To assess such hypothesis, this paper does not take a common attack-then-fix approach but, rather, addresses the following research questions pivoted on drivers' perceptions.
\begin{description}
\item[RQ1.] Are drivers adequately concerned about the privacy risks associated with how their car and its manufacturer treat their personal data?
\item[RQ2.] Do drivers adequately perceive the trustworthiness of their car, in terms of security especially?
\end{description}
We are aware that these research questions are not conclusive, and we have gathered data to specialise the answers by categories of drivers, e.g. by age or education.
To the best of our knowledge, this is the first large-scale study targeting and relating the privacy concerns and trust perceptions of car drivers. We took the approach of questionnaire development and survey execution through a crowdsourcing platform. Our goal was to get at least 1037 sets of responses in order for the findings to be statistically representative, as explained below. We first piloted the questionnaire with 88 friends and colleagues with the aim of getting feedback, but no significant tuning was required. After crowdsourcing, a total of 1101 worldwide participants was reached.
We analysed the results obtained from the questionnaire through standard statistical analysis by Pearson's correlation coefficient, Spearman's rank correlation coefficient and Coefficient Phi.
In a nutshell, most drivers believe that it is unnecessary for their car to collect their personal data in order to function fully; this indicates that privacy concerns are low, which in turn may be due to wrong preconceptions, given that cars do collect personal data.
Also, it appears that most drivers do not fully agree that their data is protected using appropriate security measures; this may be interpreted as somewhat low trust on security. To our surprise, pairing these two abstracted findings clearly disproves our hypothesis.
Section~\ref{sec:related-work} comments on the related work, Section~\ref{sec:method} outlines our research method, particularly the questionnaire design, the crowdsourcing task and the statistical approach, Section~\ref{sec:results} discusses our results and Section~\ref{sec:conclusion} concludes.
\section{Related Work}\label{sec:related-work}
In 2014, Schoettle and Sivak~\cite{schoettle2014} surveyed public opinions in Australia, the United States and the United Kingdom regarding connected vehicles.
Their research noted that people (drivers as well as non-drivers) expressed a high level of concern about the safety of connected cars, which does not seem surprising on the basis of the novelty of the concept at the time.
However, participants demonstrated an overall positive attitude towards connected car technology, with particular interest in device integration and in-vehicle Internet connectivity.
In 2016, Derikx et al.~\cite{derikx2016} investigated whether drivers' privacy concerns can be compensated by offering monetary benefits.
They analysed the case of usage-based auto insurance services where the rate is tailored to driving behaviour and measured mileage and found out that drivers were willing to give up their privacy when offered a small financial compensation.
Therefore, what appears to be missing is a study on drivers' understanding on the amount and type of personal data that modern cars process, which is the core of this paper.
There are relevant publications on drivers' trust on car safety but are limited to self-driving cars. Notably, Du et al.~\cite{du2019} conducted an experiment to better understand whether explaining the actions of automated vehicles promote general acceptance by the drivers. They found out that the specific point in time when explanations were given was crucial for their effectiveness --- explanations provided before the vehicle started were associated with higher trust by the subjects.
Similar results were obtained by Petersen et al.~\cite{petersen2019} in another study in 2019. They manipulated drivers' situational awareness by providing them with different types and details of information. Their analysis showed that situational awareness influenced the level of trust in automated driving systems, allowing drivers to immerse themselves in non-driving activities. Clearly, the more people are aware of something, the more trust they manage to place in it.
It is clear that modern car technologies are not limited to self-driving features. Modern cars include innumerable digital components, often integrated in the infotainment system, which interact with drivers and collect their data. It follows that modern cars process personal data to some extent, as detailed in the next Section, hence car manufacturers must meet specific sets of requirements to comply with the relevant regulations. Therefore, it becomes important to assess drivers' concerns about their privacy through their use of a car and drivers' trust on the security (also in relation to their trust on the safety) of the car.
\section{Research method}\label{sec:method}
We took the approach of questionnaire development and survey execution to assess car drivers' privacy concerns and trust perceptions. Specifically, we built a questionnaire with 10 questions, administered it through a crowdsourcing platform and carried out a statistical analysis of the answers. Opinions were measured using a standard 7-point Likert scale. With a very low margin of error of just 4\% and a very high confidence level of 99\%, the necessary sample size to represent the worldwide population is 1037. Our total number of respondents was 1101, including the 88 in the pilot, so our findings are statistically representative of the worldwide population --- a limitation is that, while Prolific ensures that respondents are somehow geographically dispersed, it cannot guarantee that they are truly randomly sampled from the entire world.
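The stated sample size follows from Cochran's formula for an (effectively infinite) population with the most conservative proportion $p = 0.5$; a short sketch reproducing the figure, with names of our own choosing:

```python
from math import ceil
from statistics import NormalDist

def sample_size(margin=0.04, confidence=0.99, p=0.5):
    """Cochran's sample-size formula: n = z^2 * p(1-p) / e^2, rounded up."""
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)  # two-sided critical value
    return ceil(z**2 * p * (1 - p) / margin**2)

print(sample_size())  # -> 1037
```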
\section{Results}\label{sec:results}
The answers are catalogued and statistically studied by analysing indexes of central tendency and correlation coefficients. The indexes of central tendency (mean and median) summarise the values assumed by the data with a single numerical value. The mean value is coupled with the standard deviation in order to measure the amount of variation of the values. There is no room to present the demographics and their correlations with other answers here; we recall that driving at least 3 hours a week was a prerequisite to enter the study, along with being over 18.
To simplify the analysis of the answers to the core questions, we follow the standard practice of grouping the 7 levels of agreement into three categories. Specifically, if the participants reply with ``Strongly agree'', ``Agree'' or ``Somewhat agree'', then we consider their value as ``Agreeing''; if instead they select ``Neither agree nor disagree'', then we consider them in the category ``Undecided''; and finally, if the participants select ``Somewhat disagree'', ``Disagree'' or ``Strongly disagree'', then we consider those answers as ``Disagreeing''.
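The grouping can be expressed as a simple mapping, assuming the usual coding where 1 stands for ``Strongly disagree'' and 7 for ``Strongly agree'':

```python
def group_likert(score):
    """Collapse a 7-point Likert response into the three analysis categories.

    Assumes the coding 1 = Strongly disagree ... 7 = Strongly agree.
    """
    if score <= 3:          # Somewhat disagree, Disagree, Strongly disagree
        return "Disagreeing"
    if score == 4:          # Neither agree nor disagree
        return "Undecided"
    return "Agreeing"       # Somewhat agree, Agree, Strongly agree
```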
\subsubsection{Knowledge on modern cars}
Question Q1 evaluates the driver's knowledge of modern cars. Considering the values of the mean and the median, shown in Table~\ref{tab:q1}, it can be stated that the interviewed sample considers itself knowledgeable about modern cars. The data show that 55\% of participants are quite confident about their knowledge, while a minority (about 29\%) think they are not. Finally, 16\% of participants think they have average knowledge about modern cars. Thus, considering the answers to the preliminary question, there does not seem to be a substantial difference between those who drive a few hours a week and those who drive more with regard to the level of knowledge they claim to have about modern cars.
Then, question Q2 asks respondents whether or not they agree that modern cars are similar to modern computers. This question also receives a high rate of agreement: we note that 72\% of participants agree that a modern car is similar to a modern computer, while 14\% of them are undecided and 14\% disagree with the statement. The mean and the median are shown in Table~\ref{tab:q2}.
\begin{table}[ht]
\centering
\caption{Q1, Q2 answers and their statistics}
\label{tab:q1}
\begin{tabular}{lc}
\toprule
\textbf{Knowledge level} & \textbf{[\%]} \\
\midrule
Knowledgeable about modern cars & 55 \\
Average knowledge & 16 \\
Not knowledgeable about modern cars & 29 \\
\midrule
\midrule
Mean & 4.37 \\
Median & 5 \\
Standard Deviation & 1.55 \\
\bottomrule
\end{tabular}
\quad
\label{tab:q2}
\begin{tabular}{lc}
\toprule
\textbf{Agreement level} & \textbf{[\%]} \\
\midrule
Agreeing & 72 \\
Disagreeing & 14 \\
Undecided & 14 \\
\midrule
\midrule
Mean & 5 \\
Median & 5 \\
Standard Deviation & 1.35 \\
\bottomrule
\end{tabular}
\end{table}
\subsubsection{Concerns on data privacy}
The first of these questions (Q3) asks participants to select all the categories of data they think a car collects. It must be remarked that this question allows for multiple choices, so a respondent can select several categories of data. Table~\ref{tab:q3} shows the answers selected by the respondents. The predominant categories according to the interviewed sample are: ``personal data about the driver'' (selected by 56\% of the sample); ``public data about the driver'' (selected by 54\%); ``public data not about the driver'' (selected by 47\%).
A few participants think that their vehicle collects more sensitive data belonging to the special categories of personal data (13\%) or financial data (11\%). Finally, we note that just 8\% of the participants think that modern cars do not collect any data at all.
Overall, these findings confirm a modest level of awareness in terms of what data a car collects. In particular, while it is positive that the majority (56\%) understands that driver's personal data are involved, it is concerning that a similar subset (54\%) deem such data about the driver to have been made public. It would be surprising if any car manufacturer's privacy policy stated that the driver's collected data would be made public (and such policies are well worthy of a dedicated comparative study). This potential confusion calls for awareness campaigns, more readability of official documents and innovative technologies to ensure policies are understood. By contrast, a positive sign that a small kernel of participants is highly informed is the appreciable understanding that special categories of personal data (13\%) or financial data (11\%) may be gathered.
\begin{table}[ht]
\centering
\caption{Q3 answers}\label{tab:q3}
\scalebox{0.95}{
\begin{tabular}{lc}
\toprule
\textbf{Collected data} & \textbf{[\%]} \\
\midrule
Personal data about the driver & 56 \\
Public data about the driver & 54 \\
Public data not about the driver & 47 \\
Special categories of personal data about the driver & 13 \\
Financial data about the driver & 11 \\
No data at all & 8 \\
\bottomrule
\end{tabular}}
\end{table}
Question Q4 asks participants whether they think it is necessary to collect personal data to achieve full vehicle functionality. The indexes and a summary of the answers are shown in Table~\ref{tab:q4}. It shows that 27\% of the participants agree with the statement above, while 19\% of them are undecided and 54\% disagree. Thus, we could argue that the participants overall disagree with the statement proposed in the question.
This finding can be interpreted in various ways. On one hand, it denounces a false preconception, because the customised, driver-tailored experience that is becoming more and more common at present is certain to stand on a trail of data collected about the driver.
It clearly also signifies that drivers are neither adequately informed on what data is being collected and for what purposes, contradicting art. 5 of GDPR, nor have they been able to grant an informed consent, contradicting art. 7 of GDPR.
\begin{table}[ht]
\centering
\caption{Q4, Q5 answers and their statistics}
\label{tab:q4}
\begin{tabular}{lc}
\toprule
\textbf{Agreement level} & \textbf{[\%]} \\
\midrule
Agreeing & 27 \\
Disagreeing & 54 \\
Undecided & 19 \\
\midrule
\midrule
Mean & 3.35 \\
Median & 3 \\
Standard Deviation & 1.58 \\
\bottomrule
\end{tabular}
\quad
\label{tab:q5}
\begin{tabular}{lc}
\toprule
\textbf{Agreement level} & \textbf{[\%]} \\
\midrule
Agreeing & 21 \\
Disagreeing & 65 \\
Undecided & 14 \\
\midrule
\midrule
Mean & 2.97 \\
Median & 3 \\
Standard Deviation & 1.67 \\
\bottomrule
\end{tabular}
\end{table}
Moving on to the answers to question Q5, it can be noticed that just 21\% of the sample agrees to the transmission of data over the Internet, while 14\% of participants are undecided and 65\% disagree with the statement. This means that the sample is reluctant to have personal data sent over the Internet. Table~\ref{tab:q5} shows the agreement levels and indexes of Q5's answers.
This may again be interpreted as a wrong preconception because it is clear that remote services, including eCall, location-tailored weather forecasts, music streaming and many others, must generate Internet traffic.
\subsubsection{Perceptions of trust on safety}
Question Q6 asks whether participants agree that a modern vehicle safeguards the life of its driver. The agreement levels and the indexes of central tendency are shown in Table~\ref{tab:q6}. It turns out that 77\% of participants agree with the statement above, while just 8\% disagree and 15\% are undecided.
Question Q7 asks participants whether a modern car protects its driver's personal data better than its driver's life. It appears that a part of the sample is undecided about this statement (26\%), just 18\% of participants agree with it and 56\% disagree. Table~\ref{tab:q7} also shows that the indexes of central tendency are not as high when compared to the previous question.
There is considerable uncertainty in front of this question, beside the majority's expression of disagreement (56\%). It signifies that trust on security still has a long way to grow in comparison to trust on safety, perhaps due to the much longer establishment of the latter. It is well known that trust may take a long time to root, and car security is a somewhat recent problem.
\begin{table}[ht]
\centering
\caption{Q6, Q7 answers and their statistics}
\label{tab:q6}
\begin{tabular}{lc}
\toprule
\textbf{Agreement level} & \textbf{[\%]} \\
\midrule
Agreeing & 77 \\
Disagreeing & 8 \\
Undecided & 15 \\
\midrule
\midrule
Mean & 5.26 \\
Median & 5 \\
Standard Deviation & 1.20 \\
\bottomrule
\end{tabular}
\quad
\label{tab:q7}
\begin{tabular}{lc}
\toprule
\textbf{Agreement level} & \textbf{[\%]} \\
\midrule
Agreeing & 18 \\
Disagreeing & 56 \\
Undecided & 26 \\
\midrule
\midrule
Mean & 3.26 \\
Median & 4 \\
Standard Deviation & 1.46 \\
\bottomrule
\end{tabular}
\end{table}
\subsubsection{Perceptions of trust on security}
Question Q8 asks whether the data collected from the vehicle is legitimately processed according to the relevant regulations. Table~\ref{tab:q8} shows that 44\% of the participants agree with this statement, while 25\% disagree and the rest (31\%) are undecided.
Trust on the legitimacy of the data processing is not higher than 44\%. Conversely, this means that the majority, 56\%, are not sure about the legitimacy of the processing of their personal data. This indicates, once more, that car drivers need to be better informed, first of all. Being informed correctly is essential for raising awareness, which in turn is essential for trust building.
Question Q9 asks if participants believe that the personal data collected is systematically analysed and evaluated using automated processes (including profiling). From Table~\ref{tab:q9}, around 42\% of participants agree with this statement, while 32\% of them disagree and 26\% are undecided.
This question is designed to be self-contained and understandable by everyone.
A notable 42\% show concern that profiling takes place, which may be taken to signify a correspondingly low trust on the security of the treatment. There is no official public information on whether car manufacturers really carry out profiling but, if this were the case, then a Data Protection Impact Assessment, pursuant art. 35 of GDPR, would have been necessary.
The last question (Q10) asks whether the participants feel that the data transmitted over the Internet is protected by adequate technologies. Table~\ref{tab:q10} shows that the share of the sample that agrees with the statement (46\%) is considerable.
The fact that those who agree do not exceed the majority of the sample clearly indicates, also in this case, room for improving drivers' trust on security.
\begin{table}[ht]
\centering
\caption{Q8, Q9, Q10 answers and their statistics}
\label{tab:q8}
\begin{tabular}{lc}
\toprule
\textbf{Agreement level} & \textbf{[\%]} \\
\midrule
Agreeing & 44 \\
Disagreeing & 25 \\
Undecided & 31 \\
\midrule
\midrule
Mean & 4.28 \\
Median & 4 \\
Standard Deviation & 1.31 \\
\bottomrule
\end{tabular}
\quad
\label{tab:q9}
\begin{tabular}{lc}
\toprule
\textbf{Agreement level} & \textbf{[\%]} \\
\midrule
Agreeing & 42 \\
Disagreeing & 32 \\
Undecided & 26 \\
\midrule
\midrule
Mean & 4.07 \\
Median & 4 \\
Standard Deviation & 1.43 \\
\bottomrule
\end{tabular}
\quad
\label{tab:q10}
\begin{tabular}{lc}
\toprule
\textbf{Agreement level} & \textbf{[\%]} \\
\midrule
Agreeing & 46 \\
Disagreeing & 32 \\
Undecided & 22 \\
\midrule
\midrule
Mean & 4.19 \\
Median & 4 \\
Standard Deviation & 1.49 \\
\bottomrule
\end{tabular}
\end{table}
\subsection{Correlations}
There are statistically significant correlations that arise by analysing the relevant coefficients over the data obtained from the sample. In brief,
Pearson's linear correlation coefficient, denoted by the letter $r$, allows us to evaluate a possible linearity relationship between two sets of data.
Spearman's rank correlation coefficient, denoted by the Greek letter \( \rho \), measures the correlation between two numerical variables; these must be sortable because Spearman's correlation coefficient is defined as Pearson's correlation coefficient applied to the ranks.
The Phi coefficient (or mean square contingency coefficient), denoted with the Greek letter \( \phi \), is a measure of association for two binary variables and is calculated from the frequency distributions of the pairs.
Correlation coefficient values are accompanied by a significance level (the \textit{p-value}) to establish the reliability of the calculated value. The p-value is a number between 0 and 1 representing the probability of obtaining a result at least as extreme as the observed one if the data were not correlated. If the p-value is less than 0.01, then the relationship found is considered statistically significant.
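The three coefficients can be computed directly from their definitions; this illustrative sketch assumes no tied values for the Spearman ranking (a real analysis should use a tie-aware implementation such as the one in SciPy):

```python
import numpy as np

def pearson(x, y):
    """Pearson's linear correlation coefficient r."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return (xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc))

def spearman(x, y):
    """Spearman's rho: Pearson's r applied to the ranks (assumes no ties)."""
    rank = lambda a: np.argsort(np.argsort(a)).astype(float)
    return pearson(rank(np.asarray(x)), rank(np.asarray(y)))

def phi(a, b):
    """Phi coefficient for two binary (0/1) variables, from the 2x2 table."""
    a, b = np.asarray(a), np.asarray(b)
    n11 = np.sum((a == 1) & (b == 1)); n00 = np.sum((a == 0) & (b == 0))
    n10 = np.sum((a == 1) & (b == 0)); n01 = np.sum((a == 0) & (b == 1))
    denom = np.sqrt((n11 + n10) * (n01 + n00) * (n11 + n01) * (n10 + n00))
    return (n11 * n00 - n10 * n01) / denom
```

For instance, a monotone but non-linear relationship yields a Spearman coefficient of 1 while the Pearson coefficient stays below 1.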
The analysis looks at the core questions, those from Q1 to Q10, to focus on general correlations between knowledge on the subject matter, privacy concerns, trust perceptions of safety and on security.
\subsubsection{Core question analysis}
We noted a significant correlation between questions Q1 and Q2 (\(\rho = 0.48\), \(p < 0.001\)). Therefore, it seems that participants who are knowledgeable about modern cars also think that modern cars are similar to modern computers, reinforcing the earlier conclusion. Moreover, thanks to the correlation between questions Q1 and Q4 (\(\rho = 0.35\), \(p < 0.001\)), we can state that those who consider themselves informed about modern cars also believe that the data collected by the car is necessary for its full functioning. This aligns with our own, specialist view. There is also a significant correlation between questions Q1 and Q6 (\(\rho = 0.40\), \(p < 0.01\)), that is, those who are knowledgeable about modern cars think that a modern car safeguards its driver's life. Somewhat surprisingly, it appears that Q1 does not significantly correlate with the later questions on trust on car security, signifying that trust on security must grow even for those who are knowledgeable about the field.
We calculated the Phi coefficients between the answers to question Q3 to determine whether there are any associations, i.e. whether there are pairs of categories of personal data that appear together in the answers. The coefficient values are shown in Table~\ref{tab:corQ3}, and it becomes apparent that only two values may establish a possible association. The Phi coefficient obtained for the pair ``Special categories of personal data'' and ``Financial data about the driver'' is 0.3255, which means that those who think that financial data are collected by modern cars also think that special categories of personal data are collected as well. This exhibits a correct preconception, because financial data are routinely grouped with special categories of data. Also, given the value 0.3363, we notice that drivers who think ``Special categories of personal data'' are collected by the car also think that ``Personal data about the driver'' are collected, emphasising a correct understanding that personal data also include the special categories (of personal data).
\begin{table}[ht]
\centering
\caption{Phi Coefficients of question 3}\label{tab:corQ3}
\scalebox{0.84}{
\begin{tabular}{c|cccccc}
\( \phi \) & No Data & Fin & Spec & Pub & Pub\(_{driver}\) & Pers \\
\hline
No Data & 1 & & & & & \\
Fin & -0.0920 & 1 & & & & \\
Spec & -0.0999 & \textbf{0.3255} & 1 & & & \\
Pub & -0.2371 & 0.0004 & -0.0624 & 1 & & \\
Pub\(_{driver}\) & -0.2973 & 0.1468 & 0.0759 & 0.0255 & 1 & \\
Pers & -0.3136 & 0.2332 & \textbf{0.3363} & -0.2099 & 0.1743 & 1 \\
\end{tabular}}
\end{table}
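For reproducibility, the following is a minimal sketch (our own illustration, not the study's actual analysis code, and the sample vectors are invented) of how a Phi coefficient such as those in Table~\ref{tab:corQ3} can be computed from binary answer vectors:

```python
# Hypothetical sketch: Phi coefficient between two binary (0/1) answer
# vectors, e.g. whether each respondent ticked a given category in Q3.
# The sample data below are illustrative, not the survey's real answers.
from math import sqrt

def phi_coefficient(x, y):
    """Phi coefficient of association for two equal-length 0/1 vectors."""
    n11 = sum(1 for a, b in zip(x, y) if a == 1 and b == 1)
    n10 = sum(1 for a, b in zip(x, y) if a == 1 and b == 0)
    n01 = sum(1 for a, b in zip(x, y) if a == 0 and b == 1)
    n00 = sum(1 for a, b in zip(x, y) if a == 0 and b == 0)
    denom = sqrt((n11 + n10) * (n01 + n00) * (n11 + n01) * (n10 + n00))
    return (n11 * n00 - n10 * n01) / denom if denom else 0.0

special = [1, 1, 0, 0, 1, 0, 1, 0]    # ticked "Special categories"
financial = [1, 0, 0, 0, 1, 0, 1, 1]  # ticked "Financial data"
print(phi_coefficient(special, financial))  # prints 0.5
```

Computing this for every pair of Q3 categories yields a matrix like Table~\ref{tab:corQ3}; values near $\pm 1$ indicate that two categories are (almost) always ticked together or apart.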
Moving on to question Q4, there is a strong, statistically significant correlation between question Q4 and question Q5: both the Pearson and the Spearman coefficients are high (\(r = 0.55\), \(\rho = 0.68\)), both with \(p < 0.001\). Consequently, we can affirm that those who think that it is necessary to collect personal data for the full functioning of their vehicle also think that this data should be transmitted over the Internet.
There is also another statistically significant correlation between question Q4 and question Q8 (\(r = 0.39\), \(\rho = 0.50\), \(p < 0.01\) for both), showing that those who agree to the collection of personal data also think that the data are processed legitimately, in a manner consistent with the relevant regulations. Both findings can be taken as indications that those with modest privacy concerns show some trust in security, but we are mindful of the generally low agreement with Q4 and Q5 and the only fair agreement with Q8 noted above.
Question Q5 correlates only moderately with question Q8 (\(\rho = 0.48\), \(p < 0.01\)) but more strongly with question Q10 (\(r = 0.36\), \(\rho = 0.52\), \(p < 0.01\)). It follows that those who think that data should be transmitted over the Internet also think that this data will be adequately protected during transmission. This shows that trust in security, where present, is broad.
Spearman's correlation coefficient detects a moderately significant correlation between question Q6 and question Q8 (\(\rho = 0.45\), \(p < 0.01\)): it seems that those who think that a modern car safeguards its driver's life also think that the personal data collected are processed legitimately according to the relevant regulations in force. This seems a positive outcome in terms of a spillover from trust in safety to trust in security. It is unfortunate that this correlation is not very strong, and we deem it highly desirable to develop socio-technical security and privacy measures to reinforce it in the future.
There is a statistically significant correlation between question Q7 and question Q10 (\(r = 0.38\), \(\rho = 0.52\), \(p < 0.01\)).
In fact, those who think that a modern car protects its driver's personal data better than it safeguards its driver's life also think that the personal data are protected by adequate technology when the vehicle transmits them over the Internet. Spearman's correlation coefficient also shows a significant correlation between question Q7 and question Q8 (\(\rho = 0.47\), \(p < 0.01\)), that is, those who think that a modern car protects its driver's data better than it safeguards its driver's life also think that the personal data collected are processed legitimately according to the relevant regulations. These findings confirm that trust in security is somewhat ``logical'', in the sense that it covers all relevant elements.
There is also a significant correlation between question Q8 and question Q9 (\(\rho = 0.41\), \(p < 0.01\)): it appears that those who think that modern cars carry out a systematic and extensive evaluation of personal data also think that their data are processed in a legitimate way according to the relevant regulations. This correlation suggests that drivers who consent to the evaluation of their personal data also consent to profiling --- perhaps too lightheartedly, raising the concern that the potentially negative consequences of profiling may not be fully understood at present. It may be inferred that drivers are not fully aware that it would be their right to object to profiling, as prescribed by Art.~22 of the GDPR.
We also noted a moderate correlation between question Q9 and question Q4 (\(\rho = 0.45\), \(p < 0.01\)). Those who think that in order to use the full functionality of the car it is necessary to provide personal data also think that this data is analysed through automated processes to evaluate personal aspects of drivers. This reconfirms that profiling is somewhat ill-understood. There is also a statistically significant correlation between question Q9 and question Q5 (\(\rho = 0.46\), \(p < 0.01\)), indicating that those who think that their data are analysed by automatic evaluation processes also think that they are transmitted over the Internet. This outcome correctly indicates that potential profiling does not take place aboard the car.
There is a statistically significant correlation between question Q10 and question Q4 (\(r = 0.37\), \(\rho = 0.49\), \(p < 0.01\)), that is, those who think that the personal data collected by the vehicle are necessary for the full functioning of the car also think that their data are adequately protected when transmitted over the Internet. Once more, modest privacy concerns lead to some trust in security. Finally, there is a statistically significant correlation between question Q10 and question Q8 (\(r = 0.51\), \(\rho = 0.64\), \(p < 0.01\)), so we can argue that those who think that their personal data are processed lawfully also think that the data are adequately protected over the Internet. Here is yet another confirmation that trust in security, where present, covers all relevant aspects.
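The Pearson and Spearman coefficients quoted throughout this section can be reproduced along the following lines; this is a hypothetical pure-Python sketch (the answer vectors are invented, and the study's actual computations may well have used standard statistical software):

```python
# Hypothetical sketch: Pearson's r and Spearman's rho for Likert-style
# answers. Spearman's rho is Pearson's r applied to ranks, with ties
# receiving average ranks. Sample data are illustrative only.
def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def ranks(x):
    """1-based ranks; tied values receive their average rank."""
    order = sorted(range(len(x)), key=lambda i: x[i])
    r = [0.0] * len(x)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and x[order[j + 1]] == x[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of positions i+1 .. j+1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    return pearson(ranks(x), ranks(y))

q4 = [1, 2, 2, 3, 4, 5, 3, 2]  # invented answers to Q4 (1-5 scale)
q5 = [2, 2, 3, 3, 5, 5, 4, 1]  # invented answers to Q5 (1-5 scale)
print(round(pearson(q4, q5), 2), round(spearman(q4, q5), 2))
```

Significance values ($p$-values) would additionally require the null distribution of the coefficients, which statistical packages provide out of the box.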
\section{Conclusions}\label{sec:conclusion}
Our study was designed with care to carve out drivers' privacy concerns and trust perceptions, with the ultimate aim of assessing our research hypothesis that low privacy concerns imply high trust perceptions. Crowdsourcing was leveraged to collect a representative sample of participants. Answers were then analysed in isolation as well as statistically correlated, producing numerous insights.
There would be little use in developing advanced technical security and privacy measures for preserving drivers' privacy and the security of their cars if drivers were neither adequately concerned about the privacy issues bound to their driving nor trusted the security of their cars at an appropriate level. Precisely this case is confirmed by the results of our study, thus contradicting our research hypothesis. We consider this outcome worthy of further attention.
Specifically, we believe that the privacy concerns that emerged are insufficient for the present technological setting. We would have found it more positive had drivers exhibited higher awareness of the personal data involved in their driving, of how processing such data is fundamental for delivering driver-tailored services, and of the fact that such service quality often demands data transmission over the Internet. Unfortunately, the opposite scenario holds.
A logical explanation of low privacy concerns could be high trust in security, but we were surprised to find that trust in security was also somewhat low. Therefore, the only way to read the general outcome is that privacy is generally ill-understood by drivers; hence, more information must be delivered to them in order to raise awareness and thereby form correct privacy concerns and correspondingly adequate trust perceptions.
We strongly argue that this must be the ultimate aim of developing ever more advanced technical security and privacy measures.
The correlations among answers could be seen as somewhat logical. For example, knowledge of the field correlates with adequate privacy concerns and related trust perceptions. It is noteworthy that the potentially negative implications of profiling on the freedoms of natural persons are far from being well understood at the moment.
Trust in security is much less represented than trust in safety, arguably because the former derives from a less rooted perception in our society, owing to the relatively young age of the technologies that should support it.
Moreover, trust in cyber-security is normally broad, that is, if it is present to some extent, then it covers all relevant aspects.
We ultimately maintain that the correlations, too, justify the need for more awareness and trust-building campaigns.
The value of our results is multifaceted. They can be read in support of the ISO/SAE DIS 21434 standard, which is yet to be finalised. They also offer a solid baseline to conduct a cyber-security and privacy risk assessment on cars following standard methodologies such as ISO/IEC 27005.
Future work includes tailoring the effort presented in this paper to specific car brands in support of a contrastive analysis among brands. It is clear that the user-level studies in the automotive field that this paper initiates have great potential for growth.
\paragraph{Acknowledgments}
This research was funded by COSCA (COnceptualising Secure Cars)~\cite{COSCA}, a project supported by the European Union's Horizon 2020 research and innovation programme under the NGI TRUST grant agreement no 825618.
\bibliographystyle{splncs04}
\section{Introduction}
In this paper, we consider the three-dimensional incompressible Navier-Stokes equations
\begin{equation}\label{NSintro}
\partial_{t}v-\Delta v+v\cdot\nabla v+\nabla p=0,\,\,\,\nabla\cdot v=0,\,\,\,v(\cdot,0)=v_{0}(x)\,\,\,\textrm{in}\,\,\,\mathbb{R}^3\times (0,T).
\end{equation}
Here, $T\in (0,\infty]$.
The rigorous existence theory for the Navier-Stokes equations was pioneered by Leray in \cite{Le}. In particular, Leray showed that for any square-integrable solenoidal initial data $v_{0}(x)$ there exists at least one associated global-in-time \textit{weak Leray-Hopf solution} $v$. The question of global-in-time regularity of weak Leray-Hopf solutions remains open in three dimensions.
For weak Leray-Hopf solutions \textit{conditional} regularity results (in terms of the velocity field) were also given in Leray's paper \cite{Le}. In particular, \cite{Le} implies that for any $q\in (3,\infty]$ there exists a $C_{q}>0$ such that if $v$ is a weak Leray-Hopf solution (assumed to be smooth on $\mathbb{R}^3\times (0,T)$) then
\begin{equation}\label{Lerayssmoothness}
\|v(\cdot,t)\|_{L^{q}(\mathbb{R}^3)}\leq \frac{C_{q}}{(T-t)^{\frac{1}{2}(1-\frac{3}{q})}}\,\,\,\textrm{for}\,\,\,\textrm{some}\,\,\,\,t\in (0,T)\Rightarrow\quad v\,\,\textrm{can}\,\,\textrm{be}\,\,\textrm{smoothly}\,\,\textrm{extended}\,\,\textrm{past}\,\,T.
\end{equation}
Almost 70 years later, Escauriaza, Seregin and \v{S}ver\'{a}k's celebrated paper \cite{ESS} treated the case $q=3$. In particular, it was shown in \cite{ESS} that
\begin{equation}\label{ESSsmoothness}
\|v\|_{L^{\infty}_{t}(0,T; L^{3}(\mathbb{R}^3))}<\infty\Rightarrow\quad v\,\,\textrm{can}\,\,\textrm{be}\,\,\textrm{smoothly}\,\,\textrm{extended}\,\,\textrm{past}\,\,T.
\end{equation}
There have been many extensions of \eqref{ESSsmoothness}. See, for example, \cite{sereginL3limit}, \cite{albritton} and
\cite{albrittonbarker}.
Notice that the assumptions \eqref{Lerayssmoothness} and \eqref{ESSsmoothness} are invariant with respect to the scaling symmetry of the Navier-Stokes equations
\begin{equation}\label{eqrescaling}
(v_{\lambda}(x,t), p_{\lambda}(x,t), v_{0\lambda}(x))=(\lambda v(\lambda x,\lambda^2 t),\lambda^2 p(\lambda x,\lambda^2 t), \lambda v_{0}(\lambda x) )\,\,\textrm{with}\,\,\,\lambda\in (0,\infty).
\end{equation}
This paper is concerned with conditional results for the three-dimensional Navier-Stokes equations \textit{in terms of the pressure}, which are less well understood compared to regularity results formulated in terms of the velocity.
To the best of our knowledge, regularity criteria involving the pressure were first considered in Kaniel's paper \cite{kaniel}. Kaniel considered a weak solution $v:\Omega\times (0,\infty)\rightarrow \mathbb{R}^3$ ($\Omega$ is a smooth bounded domain and $v$ satisfies the Dirichlet boundary condition on $\partial\Omega$) with smooth initial conditions. Furthermore, Kaniel showed that if the associated pressure $p$ satisfies
\begin{equation}\label{kanielcondition}
\|p\|_{L^{\infty}_{t}(0,T; L^{q}(\Omega))}<\infty\quad\textrm{with}\,\,q>\frac{12}{5}
\end{equation}
then $v$ can be smoothly extended past $T$. Note that the assumed bound on the pressure in \eqref{kanielcondition} is not invariant with respect to the rescaling \eqref{eqrescaling} and is \textit{subcritical}\footnote{We say a quantity $F(v,p)\in [0,\infty)$ is \textit{subcritical} if, for the rescaling \eqref{eqrescaling}, there exists $\alpha>0$ such that $F(v_{\lambda},p_{\lambda})=\lambda^{\alpha} F(v,p)$ for all $\lambda>0$.}. Kaniel's result paved the way for other regularity criteria for the Navier-Stokes equations in terms of subcritical norms of the pressure. See, for example, \cite{berselli}, \cite{bdv} and \cite{chaelee}.
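To make the dimensional bookkeeping explicit, let us record (in a computation of our own, for the reader's convenience) how the mixed norms of the pressure transform under the rescaling \eqref{eqrescaling}. A change of variables gives
\begin{align*}
\|p_{\lambda}(\cdot,t)\|_{L^{s}(\mathbb{R}^3)}&=\lambda^{2-\frac{3}{s}}\|p(\cdot,\lambda^{2}t)\|_{L^{s}(\mathbb{R}^3)},\\
\|p_{\lambda}\|_{L^{r}_{t}(0,\frac{T}{\lambda^2}; L^{s}(\mathbb{R}^3))}&=\lambda^{2-\frac{2}{r}-\frac{3}{s}}\|p\|_{L^{r}_{t}(0,T; L^{s}(\mathbb{R}^3))}.
\end{align*}
Hence the mixed norm is scale-invariant precisely when $\frac{2}{r}+\frac{3}{s}=2$, whereas for Kaniel's condition \eqref{kanielcondition} (where $r=\infty$ and $s=q>\frac{12}{5}$) the exponent $2-\frac{3}{q}$ is positive, in agreement with the definition of a subcritical quantity given in the footnote.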
Conditional regularity results in terms of \textit{scale-invariant}\footnote{We say a quantity $F(v,p)\in [0,\infty)$ is \textit{scale-invariant}, with respect to the rescaling \eqref{eqrescaling}, if $F(v_\lambda, p_{\lambda})=F(v,p)$ for all $\lambda>0$.} norms of the pressure were pioneered by Berselli and Galdi's seminal paper \cite{berselligaldi}. In \cite{berselligaldi}, Berselli and Galdi considered a weak Leray-Hopf solution $v:\mathbb{R}^n\times (0,\infty)\rightarrow\mathbb{R}^n$, with sufficiently smooth initial data. They showed that if the associated pressure $p$ satisfies the scale-invariant bounds
\begin{align}\label{pressurescaleinvariant}
\begin{split}
&\|p\|_{L^r_{t}(0,T; L^{s}(\mathbb{R}^n))}<\infty\,\,\,\textrm{with}\,\,\,\frac{2}{r}+\frac{n}{s}=2\,\,\,\textrm{and}\,\,\,s>\frac{n}{2},\\
\textrm{or}\,\,\textrm{if}\,\,&\|p\|_{L^{\infty}_{t}(0,T; L^{\frac{n}{2}}(\mathbb{R}^n))}\,\,\textrm{is}\,\,\textrm{small}\,\,\textrm{enough},
\end{split}
\end{align}
then $v\in C^{\infty}((0,T]\times \mathbb{R}^n).$ Berselli and Galdi's influential work paved the way for many regularity criteria in terms of scale-invariant norms involving the pressure. Whilst it is not possible to list such works exhaustively, we refer the reader to \cite{zhou06proc}, \cite{zhou06math}, \cite{kanglee}, \cite{struwe}, \cite{caifangzhai}, \cite{suzucki1}, \cite{suzucki2} and \cite{ji}, for example. Let us mention that there are many interesting regularity criteria involving the pressure that are of a different nature to the aforementioned literature. We point out \cite{neustupanecas}, \cite{sereginsverakpressure}, \cite{caotiti}, \cite{constantin} and \cite{tranyu}.
Despite a substantial number of results concerning regularity criteria in terms of the pressure, it remains unclear whether regularity of the Navier-Stokes equations holds when assuming the pressure analogue of Escauriaza, Seregin and \v{S}ver\'{a}k's condition \eqref{ESSsmoothness} or other endpoint cases. In particular, the following is open.
\begin{itemize}
\item[]\textbf{Open Problem.} Suppose $v:\mathbb{R}^3\times (0,\infty)\rightarrow\mathbb{R}^3$ is a weak Leray-Hopf solution, with sufficiently smooth initial data. Fix $s\in [\frac{3}{2},\infty)$ and $r\in (1,\infty]$ such that $\frac{2}{r}+\tfrac{3}{s}=2$.
$$\textrm{Does}\,\,\,\|p\|_{L^{r,\infty}_{t}(0,T; L^{s}(\mathbb{R}^3))}<\infty \Rightarrow v\in C^{\infty}((0,T]\times \mathbb{R}^3)? $$
\end{itemize}
Here, $L^{r,\infty}(\mathbb{R})$ is the Lorentz space, which is slightly larger\footnote{For example, $|t|^{-\frac{1}{r}}\in L^{r,\infty}(\mathbb{R})\setminus L^{r}(\mathbb{R})$. Throughout this paper, we use the convention that $L^{\infty}=L^{\infty,\infty}$. } than $L^{r}(\mathbb{R})$ and is defined in `2.Preliminaries'. This paper is motivated by these open problems.
\subsection{Main Results}
In this paper, we are unable to resolve the open problem stated above. Instead, when $\|p\|_{L^{\infty}_{t}(0,T^*; L^{\frac{3}{2}}(\mathbb{R}^3))}<\infty$, we show an improved estimate on the Hausdorff dimension of the singular set at a first potential blow-up time $T^*>0$. Let us now state our first main result.
\begin{theorem}\label{hausdorffdimreduce}
There exists a universal constant $C^{(0)}_{univ}\in (0,\infty)$ such that the following holds.
Let $v$ be a weak Leray-Hopf solution to the Navier-Stokes equations on $\mathbb{R}^3\times (0,\infty)$. Assume that $v$ first blows-up at $T^*>0$, namely
$$v\in L^{\infty}_{loc}([0,T^*); L^{\infty}(\mathbb{R}^3))\,\,\,\textrm{and}\,\,\,\lim_{t\uparrow T^*}\|v(\cdot,t)\|_{L^{\infty}(\mathbb{R}^3)}=\infty. $$
Assume that the pressure $p$ associated to $v$ satisfies the scale-invariant bound
\begin{equation}\label{typeIpres}
\|p\|_{L^{\infty}(0,T^*; L^{\frac{3}{2}}(\mathbb{R}^3))}\leq M^2.
\end{equation}
Let\footnote{Note that $(x,T^*)$ is a singular point of $v$ if $v\notin L^{\infty}(B(x,R)\times (T^*-R^2,T^*))$ for all sufficiently small $R<T^*$.}
\begin{equation}\label{sigmadefhausdorff}
\sigma:=\{x: (x,T^*)\,\,\,\textrm{is}\,\,\,\textrm{a}\,\,\,\textrm{singular}\,\,\,\textrm{point}\,\,\,\textrm{of}\,\,\,v\}.
\end{equation}
Then the above assumptions imply that
$$\mathcal{H}^{1-\frac{C^{(0)}_{univ}}{M}}(\sigma)=0.$$
Here, $\mathcal{H}^{1-\frac{C^{(0)}_{univ}}{M}}$ denotes the Hausdorff measure of dimension $1-\frac{C^{(0)}_{univ}}{M}.$
\end{theorem}
Let us state our second main result, which treats other endpoint cases for the pressure.
\begin{theorem}\label{hausdorffdimreducegen}
Fix $s\in (\frac{3}{2},\infty)$ and $r\in (1,\infty]$ such that $\frac{2}{r}+\tfrac{3}{s}=2$.
There exists a positive constant $C^{(0)}_{s}$ (depending only on $s$) such that the following holds.
Let $v$ be a weak Leray-Hopf solution to the Navier-Stokes equations on $\mathbb{R}^3\times (0,\infty)$. Assume that $v$ first blows-up at $T^*>0$, namely
$$v\in L^{\infty}_{loc}([0,T^*); L^{\infty}(\mathbb{R}^3))\,\,\,\textrm{and}\,\,\,\lim_{t\uparrow T^*}\|v(\cdot,t)\|_{L^{\infty}(\mathbb{R}^3)}=\infty. $$
Assume that the pressure $p$ associated to $v$ satisfies the scale-invariant bound
\begin{equation}\label{typeIpresgenmaintheo}
\|p\|_{L^{r,\infty}(0,T^*; L^{s}(\mathbb{R}^3))}\leq M^2.
\end{equation}
Let
\begin{equation}\label{sigmadefhausdorffgen}
\sigma:=\{x: (x,T^*)\,\,\,\textrm{is}\,\,\,\textrm{a}\,\,\,\textrm{singular}\,\,\,\textrm{point}\,\,\,\textrm{of}\,\,\,v\}.
\end{equation}
Then the above assumptions imply that
$$\mathcal{H}^{1-\frac{C^{(0)}_{s}}{M^{2r}}}(\sigma)=0.$$
Here, $\mathcal{H}^{1-\frac{C^{(0)}_{s}}{M^{2r}}}$ denotes the Hausdorff measure of dimension $1-\frac{C^{(0)}_{s}}{M^{2r}}.$
\end{theorem}
Note that if $v:\mathbb{R}^3\times (0,\infty)\rightarrow\mathbb{R}^3$ is a weak Leray-Hopf solution, which first loses smoothness at $T^*>0$, then Caffarelli, Kohn and Nirenberg's seminal paper \cite{CKN} implies that
$$\mathcal{H}^{1}(\sigma)=0. $$
Without extra assumptions, it is unknown whether the Hausdorff dimension can be reduced below 1 beyond logarithmic factors (see \cite{choelewis}). Theorems \ref{hausdorffdimreduce}-\ref{hausdorffdimreducegen} show that the additional pressure assumptions \eqref{typeIpres} and \eqref{typeIpresgenmaintheo} give an improvement in the known (unconditional) dimension of the singular set.
In order to prove Theorem \ref{hausdorffdimreduce}, we utilize a higher integrability statement from the author's paper \cite{barkerhigherinteg} (specifically Theorem 1 in \cite{barkerhigherinteg}). In particular, \cite{barkerhigherinteg} shows that, under the assumptions of Theorem \ref{hausdorffdimreduce}, there exists a positive universal constant $C_{univ}^{(0)}$ such that for all $t_1\in (0,T^*)$:
\begin{equation}\label{higherintegfirstblowup}
\||v|^{\frac{q}{2}}\|_{L^{\infty}_{t}(t_1,T^*; L^{2}(\mathbb{R}^3))}^2+\int\limits_{t_1}^{T^*}\int\limits_{\mathbb{R}^3} |\nabla v|^2|v|^{q-2} dxds+\int\limits_{t_1}^{T^*}\int\limits_{\mathbb{R}^3} |\nabla |v|^{\frac{q}{2}}|^2 dxds<\infty\quad\textrm{with}\,\,q=2+\frac{C_{univ}^{(0)}}{M}.
\end{equation}
In order to prove Theorem \ref{hausdorffdimreducegen}, we need to prove a generalization of the higher integrability result in \cite{barkerhigherinteg}, by means of the following theorem.
\begin{theorem}\label{higherinteggen}
Fix $s\in (\frac{3}{2},\infty)$ and $r\in (1,\infty]$ such that $\frac{2}{r}+\tfrac{3}{s}=2$.
There exists a positive constant $C^{(0)}_{s}$ (depending only on $s$) such that the following holds.
Let $v$ be a weak Leray-Hopf solution to the Navier-Stokes equations on $\mathbb{R}^3\times (0,\infty)$. Assume that $v$ first blows-up at $T^*>0$, namely
\begin{equation}\label{vbounded}
v\in L^{\infty}_{loc}([0,T^*); L^{\infty}(\mathbb{R}^3))\,\,\,\textrm{and}\,\,\,\lim_{t\uparrow T^*}\|v(\cdot,t)\|_{L^{\infty}(\mathbb{R}^3)}=\infty.
\end{equation}
Assume that the pressure $p$ associated to $v$ satisfies the scale-invariant bound
\begin{equation}\label{typeIpresgen}
\|p\|_{L^{r,\infty}(0,T^*; L^{s}(\mathbb{R}^3))}\leq M^2.
\end{equation}
Then the above assumptions imply that we have higher integrability \textbf{up to $T^*$}. Namely, for all $t_{1}\in (0,T^*)$ we have
\begin{equation}\label{higherintegrable}
\||v|^{\frac{q}{2}}\|_{L^{\infty}((t_1,T^*); L^{2}(\mathbb{R}^3))}+\int\limits_{t_1}^{T^*}\int\limits_{\mathbb{R}^3} |\nabla v|^2|v|^{q-2} dxds+\int\limits_{t_1}^{T^*}\int\limits_{\mathbb{R}^3} |\nabla (|v|^{\frac{q}{2}})|^2 dxds<\infty\,\,\,\textrm{with}\,\,\,q:=2+\frac{C^{(0)}_s}{M^{2r}}.
\end{equation}
\end{theorem}
\subsection{Strategy For the Proof of Higher Integrability (Theorem \ref{higherinteggen})}
The main idea in proving Theorem \ref{higherinteggen} is the same as the proof of Theorem 1 in \cite{barkerhigherinteg}. Namely, we test the Navier-Stokes equations with $v|v|^{q-2}$, which produces a right hand side multiplied by a ``small exponent'' $q-2$. Then $q\sim 2$ is chosen to offset $M$ being large in \eqref{typeIpresgen}. When implementing this strategy, we encounter additional difficulties not present in the proof of Theorem 1 in \cite{barkerhigherinteg}, which we outline.
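For the reader's convenience, let us record the formal computation behind this strategy (stated without the regularization needed to justify it rigorously). Testing the Navier-Stokes equations with $v|v|^{q-2}$ and using $\nabla\cdot v=0$ gives
\begin{equation*}
\frac{1}{q}\frac{d}{dt}\int\limits_{\mathbb{R}^3}|v|^{q}dx+\int\limits_{\mathbb{R}^3}|\nabla v|^{2}|v|^{q-2}dx+(q-2)\int\limits_{\mathbb{R}^3}|v|^{q-2}\big|\nabla|v|\big|^{2}dx=(q-2)\int\limits_{\mathbb{R}^3}p\,|v|^{q-3}\,v\cdot\nabla|v|\,dx.
\end{equation*}
Indeed, the convection term vanishes because $(v\cdot\nabla v)\cdot v|v|^{q-2}=\frac{1}{q}\,v\cdot\nabla(|v|^{q})$ integrates to zero, and the pressure term is integrated by parts using $\nabla\cdot v=0$; the factor $q-2$ on the right-hand side is the ``small exponent'' mentioned above.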
Unlike the strategy in \cite{barkerhigherinteg}, where the pressure $p$ belongs to $L^{\infty}_{t}L^{\frac{3}{2},\infty}_{x}$, it is not possible to directly obtain a weighted energy balance involving the condition \eqref{typeIpresgen}.
To get around this, we utilize an idea first introduced in \cite{robinson} and subsequently used in \cite{ji}. We interpolate
\begin{equation}\label{presinterpolate}
\|p(\cdot,t)\|_{L^{s_{\kappa}}(\mathbb{R}^3)}^{r_{\kappa}}\,\textrm{with}\quad \|p(\cdot,t)\|_{L^{s}(\mathbb{R}^3)}^{r(1-\kappa)}\,\,\,\textrm{and}\,\,\,\|p(\cdot,t)\|_{L^{\frac{q}{2}}(\mathbb{R}^3)}^{\mu \kappa}.
\end{equation}
Here, $\kappa\in (0,1)$ is an \textit{interpolation parameter}, $\mu(s)\geq 2$ and $\tfrac{2}{r_{\kappa}}+\tfrac{3}{s_{\kappa}}=2$. We then aim to conclude as in \cite{robinson} and \cite{ji}, by applying a generalization of a Gronwall type lemma in \cite{robinson} (Lemma 3.1 in \cite{robinson}), which involves a differential inequality with a small coefficient.
The above goal and the interpolation \eqref{presinterpolate} introduce a further issue not present in \cite{robinson} and \cite{ji}. Namely, we must estimate the Calder\'{o}n-Zygmund operator $$p=\mathcal{R}_{i}\mathcal{R}_{j}(v_iv_j)$$ in spaces $L^{\frac{q}{2}}$ close to $L^{1}$. It is well known\footnote{See pg. 110 of \cite{javier} and references therein.} that such an operator has bound $\sim \frac{q^2}{q-2}$, which deteriorates for $q$ close to 2. This threatens to destroy the small exponent $q-2$ coming from the $L^{q}$ energy inequality for $v$. We mitigate this by taking the interpolation parameter $\kappa$ in \eqref{presinterpolate} to be small enough, which reduces the influence of the deteriorating operator bound and preserves the ``small exponent''.
\subsection{Strategy For the Reduction in the Hausdorff Dimension of the Singular Set}
Recall that in Caffarelli, Kohn and Nirenberg's paper \cite{CKN} $\varepsilon$-regularity criteria, formulated in terms of the quantities
$$r^{-1}\int\limits_{t-\frac{7}{8}r^2}^{t+\frac{1}{8}r^2}\int\limits_{B(x_0,r)} |\nabla v|^2 dxds $$
(see Proposition 2 in \cite{CKN}), are used in conjunction with a covering argument to obtain that the one-dimensional parabolic Hausdorff dimension of the space-time singular set is zero.
Notice that from the dimensional analysis perspective of \cite{CKN}, the space-time integral of $|\nabla v|^2$ has dimension 1. Since $|\nabla v|^2$ is controlled for certain classes of solutions of the Navier-Stokes equations, this explains why it is heuristically reasonable that the one-dimensional parabolic Hausdorff dimension of the space-time singular set is zero.
Adopting the same dimensional analysis perspective of \cite{CKN}, we see that for $q\in (2,3)$
$|\nabla v|^2|v|^{q-2}$ has dimension $3-q$. Due to \eqref{higherintegfirstblowup} and Theorem \ref{higherinteggen}, one should heuristically expect that, under the assumptions of Theorems \ref{hausdorffdimreduce}-\ref{hausdorffdimreducegen}, the dimension of the singular set is reduced.
In order to make these heuristics rigorous, it is necessary to formulate an $\varepsilon$-regularity criterion involving the space-time integral of $$|\nabla v|^2|v|^{q-2}. $$ We accomplish this by means of the proposition below.
\begin{pro}\label{Bqsmallreg}
For every $q\in (2,3)$, there exists $\varepsilon_{q}\in (0,1)$ such that the following holds true.
Suppose that $(v,p)$ is a suitable weak solution\footnote{See `2.3 Solution Classes of the Navier-Stokes Equations' for a definition of `suitable weak solutions'.} to the Navier-Stokes equations in $Q(0,1):=B(0,1)\times (-1,0)$.
Furthermore, suppose that
\begin{equation}\label{Bqsmall}
\sup_{0<r<1}\Big(\frac{1}{r^{3-q}}\int\limits_{-r^2}^{0}\int\limits_{B(0,r)} |\nabla v|^2|v|^{q-2} dxds\Big)<\varepsilon_{q}.
\end{equation}
Then the above assumptions imply that $(x,t)=(0,0)$ is a regular point of $v$. In particular, there exists an $R\in (0,1)$ such that
$v\in L^{\infty}(B(0,R)\times (-R^2,0)).$
\end{pro}
\subsection{Further Extensions}
The higher integrability results in \cite{barkerhigherinteg}, which are used to prove Theorem \ref{hausdorffdimreduce}, actually apply when the pressure satisfies the weaker assumption
\begin{equation}\label{pressurelorentznotLebesgue}
\|p\|_{L^{\infty}(0,T^*; L^{\frac{3}{2},\infty}(\mathbb{R}^3))}\leq M^2.
\end{equation}
Hence, the proof of Theorem \ref{hausdorffdimreduce} will also apply to the assumption \eqref{pressurelorentznotLebesgue}. It is also possible to prove Theorem \ref{hausdorffdimreducegen} under the weaker pressure assumption that
$$\|p\|_{L^{r,\infty}(0,T^*; L^{s,\infty}(\mathbb{R}^3))}\leq M^2\quad\textrm{with}\quad\tfrac{2}{r}+\tfrac{3}{s}=2\,\,\textrm{and}\,\,r\in (1,\infty). $$
Doing so does not require different ideas, but requires further technical function space preliminaries. For the sake of conciseness, we leave such extensions to the interested reader.
\begin{section}{Preliminaries}
\subsection{General Notation}
Throughout this paper we adopt the Einstein summation convention. For arbitrary vectors $a=(a_{i}),\,b=(b_{i})$ in $\mathbb{R}^{n}$ and for arbitrary matrices $F=(F_{ij}),\,G=(G_{ij})$ in $\mathbb{M}^{n}$ we put
$$a\cdot b=a_{i}b_{i},\,|a|=\sqrt{a\cdot a},$$
$$a\otimes b=(a_{i}b_{j})\in \mathbb{M}^{n},$$
$$FG=(F_{ik}G_{kj})\in \mathbb{M}^{n}\!,\,\,F^{T}=(F_{ji})\in \mathbb{M}^{n}\!,$$
$$F:G=
F_{ij}G_{ij}\,\,\,\textrm{and}
\,\,\,|F|=\sqrt{F:F}.$$
For $x_0\in\mathbb{R}^n$ and $R>0$, we define the ball
\begin{equation}\label{balldef}
B(x_0,R):=\{x: |x-x_0|<R\}.
\end{equation}
For $z_0=(x_0,t_0)\in \mathbb{R}^n\times\mathbb{R}$ and $R>0$, we denote the parabolic cylinder by
\begin{equation}\label{paraboliccylinderdef}
Q(z_0,R):=\{(x,t): |x-x_0|< R,\,t\in (t_0-R^2,t_0)\}.
\end{equation}
For $S\subset \mathbb{R}^n$, $\delta>0$ and $\lambda\in (0,\infty)$ we define
$$\mathcal{H}^{\lambda, \delta}(S):=\inf\Big\{\sum_{i=1}^{\infty} (\textrm{diam}\,U_{i})^{\lambda}: S\subset \cup_{i=1}^{\infty} U_{i}\,\,\,\textrm{with}\,\,\,\textrm{diam}\,U_{i}\leq \delta\Big\}. $$
We then define the $\lambda$-dimensional Hausdorff measure of $S$ as
$$\mathcal{H}^{\lambda}(S):=\lim_{\delta\rightarrow 0^+}\mathcal{H}^{\lambda, \delta}(S). $$
If $X$ is a Banach space with norm $\|\cdot\|_{X}$, then $L^{s}(a,b;X)$, with $a<b$ and $s\in[1,\infty)$, will denote the usual Banach space of strongly measurable $X$-valued functions $f(t)$ on $(a,b)$ such that
$$\|f\|_{L^{s}(a,b;X)}:=\left(\int\limits_{a}^{b}\|f(t)\|_{X}^{s}dt\right)^{\frac{1}{s}}<+\infty.$$
The usual modification is made if $s=\infty$.
Sometimes we will denote $L^{p}(0,T; L^{q})$ by $L^{p}_{T}L^{q}$, $L^{p}(0,T; L^{q}_{x})$ or $L^{p}_{t}(0,T; L^q)$.
Let $C([a,b]; X)$ denote the space of continuous $X$ valued functions on $[a,b]$ with the usual norm. In addition, let $C_{w}([a,b]; X)$ denote the space of $X$ valued functions, which are continuous from $[a,b]$ to the weak topology of $X$.
\subsection{Lorentz Spaces}
Given a measurable subset $\Omega\subseteq\mathbb{R}^{n}$, let us define the Lorentz spaces.
For a measurable function $f:\Omega\rightarrow\mathbb{R}$ define:
\begin{equation}\label{defdistchapter2}
d_{f,\Omega}(\alpha):=\mu(\{x\in \Omega : |f(x)|>\alpha\}),
\end{equation}
where $\mu$ denotes the Lebesgue measure on $\mathbb{R}^n$.
The Lorentz space $L^{p,q}(\Omega)$, with $p\in [1,\infty)$, $q\in [1,\infty]$, is the set of all measurable functions $g$ on $\Omega$ such that the quasinorm $\|g\|_{L^{p,q}(\Omega)}$ is finite. Here:
\begin{equation}\label{Lorentznormchapter2}
\|g\|_{L^{p,q}(\Omega)}:= \Big(p\int\limits_{0}^{\infty}\alpha^{q}d_{g,\Omega}(\alpha)^{\frac{q}{p}}\frac{d\alpha}{\alpha}\Big)^{\frac{1}{q}},
\end{equation}
\begin{equation}\label{Lorentznorminftychapter2}
\|g\|_{L^{p,\infty}(\Omega)}:= \sup_{\alpha>0}\alpha d_{g,\Omega}(\alpha)^{\frac{1}{p}}.
\end{equation}
Notice that if $p\in(1,\infty)$ and $q\in [1,\infty]$, there exists a norm, which is equivalent to the quasinorm defined above, for which $L^{p,q}(\Omega)$ is a Banach space.
For $p\in [1,\infty)$ and $1\leq q_{1}< q_{2}\leq \infty$, we have the following continuous embeddings
\begin{equation}\label{Lorentzcontinuousembeddingchapter2}
L^{p,q_1}(\Omega) \hookrightarrow L^{p,q_2}(\Omega)
\end{equation}
and the inclusion is known to be strict.
\begin{subsection}{Solution Classes of the Navier-Stokes Equations}
We say $v$ is a \emph{finite-energy solution} or a \emph{weak Leray-Hopf solution} to the Navier-Stokes equations on $(0,T)$ if $v\in C_{w}([0,T]; L^{2}_{\sigma}(\mathbb{R}^3))\cap L^{2}(0,T; \dot{H}^{1}(\mathbb{R}^3))$ and if $v$ satisfies the global energy inequality
$$\|v(\cdot,t)\|_{L^{2}(\mathbb{R}^3)}^2+2\int\limits_{0}^{t}\int\limits_{\mathbb{R}^3}|\nabla v|^2dxdt'\leq \|v(\cdot,0)\|_{L^{2}(\mathbb{R}^3)}^2$$
for all $t\in [0,T)$.
Let $\Omega\subseteq\mathbb{R}^3$. We say that $(v,p)$ is a \textit{suitable weak solution} to the Navier-Stokes equations
in $\Omega\times (T_{1},T)$ if it fulfills the properties described in \cite{gregory2014lecture} (Definition 6.1 p.133 in \cite{gregory2014lecture}).
\end{subsection}
\begin{subsection}{Generalized Gronwall and Algebraic Inequalities}
We will need the following generalized Gronwall lemma, which is contained in \cite{robinson} for $\gamma=2$.
\begin{lemma}\label{gronwall}
Let $\varphi\in C^{1}([0,S))$ be a strictly positive function on $[0,S)$. Suppose that there exists $\gamma\geq 1$, $\varepsilon_0>0$, $\nu>0$ and a positive function $\lambda\in C([0,S))\cap L^{1,\infty}(0,S)$ such that the following holds true. For all $0<\varepsilon<\varepsilon_0$ and all $t\in [0,S)$, $\varphi$ satisfies the inequality
\begin{equation}\label{varphinequality}
\frac{d}{dt}\varphi(t)\leq \nu (\lambda(t))^{1-\varepsilon}(\varphi(t))^{1+\gamma\varepsilon}\qquad\textrm{with}\,\, \nu\|\lambda\|_{L^{1,\infty}(0,S)}<\frac{1}{\gamma}.
\end{equation}
Then the above assumptions imply that $\varphi\in L^{\infty}(0,S).$
\end{lemma}
\begin{proof}
The proof for the specific case $\gamma=2$ is contained in \cite{robinson} (Lemma 3.1 there). The proof of the above generalized version uses the same arguments as in \cite{robinson}. We include it in order to make the paper self-contained. Throughout, we take $\varepsilon\in (0,\min(\varepsilon_0,1))$.
Note that if $\nu\|\lambda\|_{L^{1,\infty}(0,S)}<\frac{1}{\gamma}$ then
\begin{equation}\label{limitepsilontozero}
\nu \limsup_{\varepsilon\rightarrow 0^+} \varepsilon \int\limits_{0}^{S} (\lambda(s))^{1-\varepsilon} ds<\frac{1}{\gamma}.
\end{equation}
Let $\mu$ denote the Lebesgue measure on the real line.
By the layer cake representation (applied to $\lambda^{1-\varepsilon}$, after the change of variables $u=s^{1-\varepsilon}$), we have
\begin{align*}
\begin{split}
\varepsilon\int\limits_{0}^{S} (\lambda(s))^{1-\varepsilon} ds&=\varepsilon(1-\varepsilon)\int\limits_{0}^{\infty} \frac{1}{s^\varepsilon}\mu\{\tau\in [0,S]: \lambda(\tau)>s\} ds\\
&=\varepsilon(1-\varepsilon)\int\limits_{0}^{1} \frac{1}{s^\varepsilon}\mu\{\tau\in [0,S]: \lambda(\tau)>s\} ds+\varepsilon(1-\varepsilon)\int\limits_{1}^{\infty} \frac{1}{s^{1+\varepsilon}}s\mu\{\tau\in [0,S]: \lambda(\tau)>s\} ds\\
&\leq S\varepsilon(1-\varepsilon)\int\limits_{0}^{1} \frac{1}{s^\varepsilon} ds+\varepsilon(1-\varepsilon)\|\lambda\|_{L^{1,\infty}(0,S)}\int\limits_{1}^{\infty} \frac{1}{s^{1+\varepsilon}} ds=S\varepsilon+(1-\varepsilon)\|\lambda\|_{L^{1,\infty}(0,S)}.
\end{split}
\end{align*}
This together with the assumption that $\nu\|\lambda\|_{L^{1,\infty}(0,S)}<\frac{1}{\gamma}$ readily gives \eqref{limitepsilontozero}.
Next, we use that $\varphi$ is strictly positive to integrate \eqref{varphinequality}, which gives for any $t\in [0,S)$:
\begin{equation}\label{varphiintegrated}
-(\varphi(t))^{-\gamma\varepsilon}+(\varphi(0))^{-\gamma\varepsilon}\leq \gamma\varepsilon\nu\int\limits_{0}^{t} (\lambda(s))^{1-\varepsilon} ds.
\end{equation}
Using \eqref{limitepsilontozero}, we see that there exists $\delta\in (0,\tfrac{1}{3})$ such that
\begin{equation}\label{limitlambdadelta}
\gamma\nu \limsup_{\varepsilon\rightarrow 0^+} \varepsilon \int\limits_{0}^{S} (\lambda(s))^{1-\varepsilon} ds<1-3\delta.
\end{equation}
Using this and that $\lim_{\varepsilon\rightarrow 0^+} \varphi^{-\gamma\varepsilon}(0)=1$, we see that we can choose $\varepsilon\in (0,\min(\varepsilon_0,1))$ small enough such that
\begin{equation}\label{upperlowerdelta}
\gamma \nu\varepsilon\int\limits_{0}^{S}(\lambda(s))^{1-\varepsilon} ds<1-2\delta\quad\textrm{and}\quad (\varphi(0))^{-\gamma\varepsilon}>1-\delta.
\end{equation}
This combined with \eqref{varphiintegrated} yields that $(\varphi(t))^{-\gamma\varepsilon}> (1-\delta)-(1-2\delta)=\delta$, and hence
$$\varphi(t)<\delta^{-\frac{1}{\gamma\varepsilon}}\quad\forall t\in [0,S). $$ This gives the desired conclusion.
\end{proof}
\begin{remark}\label{optimality}
For the specific case $\gamma=2$, the authors in \cite{robinson} use a counterexample to show that the condition $\nu\|\lambda\|_{L^{1,\infty}(0,S)}<\frac{1}{2}$ is optimal for Lemma \ref{gronwall} to hold. We remark here that the same idea as in \cite{robinson} gives the sharpness of Lemma \ref{gronwall} in this more general case.
For $t\in [0,1)$ let $$\lambda(t):=\alpha(1-t)^{-1}.$$
It is clear that $\|\lambda\|_{L^{1,\infty}(0,1)}=\alpha.$
Consider $\gamma\geq 1$, $\varepsilon>0$ and for $t\in [0,1)$ the ordinary differential equation
\begin{equation}\label{ODEcounterexample}
\frac{d}{dt}\varphi(t)=(\lambda(t))^{1-\varepsilon}(\varphi(t))^{1+\gamma\varepsilon}\quad\textrm{with}\quad\varphi(0)=1.
\end{equation}
On its maximal interval of existence (intersected with $[0,1)$) this has a unique solution
$$\varphi(t)=(1-\gamma \alpha^{1-\varepsilon}+\gamma \alpha^{1-\varepsilon}(1-t)^\varepsilon)^{-\frac{1}{\gamma\varepsilon}}. $$
If $\gamma\geq 1$ and $1\geq \alpha\geq \frac{1}{\gamma}$, it is easy to see by the intermediate value theorem that this solution must blow up at some $t_{*}\in (0,1]$.
\end{remark}
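For the reader's convenience, we verify the formula for $\varphi$. Separating variables in \eqref{ODEcounterexample} gives
$$-\frac{1}{\gamma\varepsilon}\frac{d}{dt}\Big((\varphi(t))^{-\gamma\varepsilon}\Big)=(\lambda(t))^{1-\varepsilon}=\alpha^{1-\varepsilon}(1-t)^{\varepsilon-1}.$$
Integrating from $0$ to $t$ and using $\varphi(0)=1$ yields
$$(\varphi(t))^{-\gamma\varepsilon}=1-\gamma\varepsilon\alpha^{1-\varepsilon}\int\limits_{0}^{t}(1-s)^{\varepsilon-1}ds=1-\gamma \alpha^{1-\varepsilon}+\gamma \alpha^{1-\varepsilon}(1-t)^{\varepsilon},$$
which is the stated formula.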
We will also need the following elementary algebraic lemma, which is essentially taken from \cite{ji} (Lemma 2.2 there), with a mistake corrected.
\begin{lemma}\label{indicesalgebra}
Assume that $(p,q)$ satisfy $\tfrac{2}{p}+\tfrac{3}{q}=2$ with $q\geq 1$ and $p>0$. Given $b,c_0\geq 1$, for every $\kappa\in [0,1]$ there exist $p_{\kappa}>0$ and $q_{\kappa}$ with
\begin{equation}\label{qkappabounds}
\min(q,\tfrac{3}{2}+\tfrac{b}{c_0})\leq q_{\kappa}\leq \max(q, \tfrac{3}{2}+\tfrac{b}{c_0})
\end{equation}
that satisfy the following. Namely,
\begin{equation}\label{pkappaqkappaserrin}
\frac{2}{p_{\kappa}}+\frac{3}{q_{\kappa}}=2
\end{equation}
and
\begin{equation}\label{pkappaqkappainterpolation}
\frac{p_{\kappa}}{q_{\kappa}}=\frac{p(1-\kappa)}{q}+\frac{c_0\kappa}{b}.
\end{equation}
\end{lemma}
\begin{proof}
The proof is essentially the same as that of Lemma 2.2 in \cite{ji}, except that in \cite{ji} it is incorrectly claimed that $q_{\kappa}$ can always be taken to satisfy
$$\min(q,b)\leq q_{\kappa}\leq \max(q,b). $$
Since this assertion would negatively impact our results, we prove the corrected version following the arguments in \cite{ji}.
Clearly
\begin{equation}\label{pqratio}
\frac{p}{q}=\frac{2}{3}(p-1).
\end{equation}
Thus, we require $p_{\kappa}$ and $q_{\kappa}$ satisfying
\begin{equation}\label{pkappaqkapparatio}
\frac{p_{\kappa}}{q_{\kappa}}=\frac{2}{3}(p-1)(1-\kappa)+\frac{c_0\kappa}{b}\quad\textrm{and}\quad \frac{p_{\kappa}}{q_{\kappa}}=\frac{2}{3}(p_{\kappa}-1).
\end{equation}
Solving this readily gives
\begin{equation}\label{pkappasol}
p_{\kappa}=p(1-\kappa)+\kappa(\frac{3c_0}{2b}+1)
\end{equation}
and
\begin{equation}\label{qkappasol}
q_{\kappa}=\frac{3p_{\kappa}}{2(p_{\kappa}-1)}=\frac{3\big(p(1-\kappa)+\kappa(\frac{3c_0}{2b}+1)\big)}{2\big((p-1)(1-\kappa)+\kappa\frac{3c_0}{2b}\big)}.
\end{equation}
Clearly from \eqref{pkappasol}, we get
\begin{equation}\label{pkappasolbounds}
p_{\kappa}\in \Big[\min(p,\frac{3c_0}{2b}+1), \max(p,\frac{3c_0}{2b}+1)\Big].
\end{equation}
Using this and \eqref{qkappasol}, we readily get the desired bounds for $q_{\kappa}$ in \eqref{qkappabounds}.
\end{proof}
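To illustrate why the bound claimed in \cite{ji} cannot hold in general, consider the instance $p=4$, $q=2$ and $b=c_0=1$, so that $\tfrac{3}{2}+\tfrac{b}{c_0}=\tfrac{5}{2}$. Then \eqref{pkappasol}-\eqref{qkappasol} give
$$p_{\kappa}=4-\frac{3\kappa}{2}\quad\textrm{and}\quad q_{\kappa}=\frac{8-3\kappa}{2(2-\kappa)},$$
so $q_{\kappa}$ increases from $q_{0}=2$ to $q_{1}=\tfrac{5}{2}$. In particular, $q_{\kappa}>\max(q,b)=2$ for every $\kappa\in (0,1]$, whereas the bounds \eqref{qkappabounds} are satisfied.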
\end{subsection}
\end{section}
\section{Proof of higher integrability (Theorem \ref{higherinteggen})}
First, we state and prove the following lemma, which generalizes similar arguments in \cite{ji} for the case $q=4$.
\begin{lemma}\label{energyestprodiserrin}
Let $q\in (2,3)$ be fixed. Fix $s\in (\frac{3}{2},\infty)$ and $r\in (1,\infty)$ such that $\frac{2}{r}+\tfrac{3}{s}=2$.
Let $s':=\tfrac{s}{s-1}$ and $r':=\tfrac{r}{r-1} $. Suppose that $v:\mathbb{R}^3\times (0,\infty)\rightarrow\mathbb{R}^3$ is a weak Leray-Hopf solution, which is smooth on $\mathbb{R}^3\times (0,T^*)$.
Then the above assumptions imply that there exists a positive universal constant $C^{(1)}_{univ}$ such that the following holds true for all $t\in (0,T^*)$:
\begin{align}\label{prodiserrinestpressure}
\begin{split}
&\frac{d}{dt}\Big(\||v|^{\frac{q}{2}}(\cdot,t)\|_{L^{2}(\mathbb{R}^3)}^2\Big)+\int\limits_{\mathbb{R}^3} |\nabla v(x,t)|^2 |v(x,t)|^{q-2} dx+\frac{1}{4}\int\limits_{\mathbb{R}^3} |\nabla (|v|^{\frac{q}{2}})(x,t)|^2 dx\leq\\
& \frac{(C^{(1)}_{univ})^r (s')^{2r}(q-2)^2}{r(s'-1)^r}\Big(\frac{r'}{12}\Big)^{-\frac{r}{r'}}\|p(\cdot,t)\|_{L^{s}(\mathbb{R}^3)}^r \||v|^{\frac{q}{2}}(\cdot,t)\|_{L^{2}(\mathbb{R}^3)}^2.
\end{split}
\end{align}
\end{lemma}
\begin{proof}
We test the Navier-Stokes system with $v|v|^{q-2}$ and then integrate over $\mathbb{R}^3$. Performing identical arguments as in \cite{barkerhigherinteg} gives that for any $t\in (0,T^*)$:
\begin{align}\label{Lqenergyest1}
\begin{split}
&\frac{d}{dt}\Big(\||v|^{\frac{q}{2}}(\cdot,t)\|_{L^{2}(\mathbb{R}^3)}^2\Big)+\int\limits_{\mathbb{R}^3} |\nabla v(x,t)|^2 |v(x,t)|^{q-2} dx+\frac{2}{3}\int\limits_{\mathbb{R}^3}|\nabla(|v|^{\frac q2})(x,t)|^2\, dx\leq\\
&\frac{d}{dt}\Big(\||v|^{\frac{q}{2}}(\cdot,t)\|_{L^{2}(\mathbb{R}^3)}^2\Big)+\frac{q}{2}\int\limits_{\mathbb{R}^3} |\nabla v(x,t)|^2 |v(x,t)|^{q-2} dx+\frac{2}{q}\int\limits_{\mathbb{R}^3}|\nabla(|v|^{\frac q2})(x,t)|^2\, dx\\
&\leq{2(q-2)}\int\limits_{\mathbb{R}^3}|p(x,t)||v(x,t)|^{\frac{q-2}{2}}|\nabla(|v|^{\frac{q}{2}})(x,t)|\, dx.
\end{split}
\end{align}
Performing H\"{o}lder's inequality and then Young's algebraic inequality to the term on the right hand side gives that for any $t\in (0,T^*)$:
\begin{align}\label{Lqenergybalmain}
\begin{split}
&\frac{d}{dt}\Big(\||v|^{\frac{q}{2}}(\cdot,t)\|_{L^{2}(\mathbb{R}^3)}^2\Big)+\int\limits_{\mathbb{R}^3} |\nabla v(x,t)|^2 |v(x,t)|^{q-2} dx+\frac{1}{3}\int\limits_{\mathbb{R}^3}|\nabla(|v|^{\frac q2})(x,t)|^2\, dx\leq 3(q-2)^2 I,\quad\\
&\textrm{with}\quad I:=\int\limits_{\mathbb{R}^3} |p(x,t)|^{2}|v(x,t)|^{q-2} dx.
\end{split}
\end{align}
Performing H\"{o}lder's inequality on $I$ allows us to infer that
\begin{equation}\label{Ifirstest}
I\leq \|p(\cdot,t)\|_{L^{s}(\mathbb{R}^3)}\|p(\cdot,t)\|_{L^{\frac{s'q}{2}}(\mathbb{R}^3)}\||v|^{q-2}(\cdot,t)\|_{L^{\frac{s'q}{q-2}}(\mathbb{R}^3)}.
\end{equation}
Since the pressure $p$ is a composition of Riesz transforms acting on $v\otimes v$, we see that
\begin{equation}\label{pressurefirstest}
\|p(\cdot,t)\|_{L^{\frac{s'q}{2}}(\mathbb{R}^3)}\leq \frac{C_{univ,(0)}(\tfrac{s'q}{2})^2}{(\tfrac{s'q}{2})-1}\||v|^2(\cdot,t)\|_{L^{\frac{s'q}{2}}(\mathbb{R}^3)}\leq \frac{C_{univ,(1)}(s')^2}{s'-1}\||v|^2(\cdot,t)\|_{L^{\frac{s'q}{2}}(\mathbb{R}^3)}.
\end{equation}
Here, we used the explicit operator bound for Calder\'{o}n-Zygmund operators on non-endpoint Lebesgue spaces (see p.\ 110 of \cite{javier} and references therein), along with the fact that $\tfrac{s'q}{2}\in (s', \tfrac{3s'}{2}).$ Next, substituting \eqref{pressurefirstest} into \eqref{Ifirstest} gives that
\begin{equation}\label{Isecondest}
I\leq \frac{C_{univ,(1)}(s')^2}{s'-1}\|p(\cdot,t)\|_{L^{s}(\mathbb{R}^3)}\||v|^{\frac{q}{2}}(\cdot,t)\|_{L^{2s'}(\mathbb{R}^3)}^2.
\end{equation}
Now,
$$\frac{1}{2s'}=\frac{\theta}{2}+\frac{1-\theta}{6}\quad\textrm{with}\,\,\theta=\frac{1}{r}. $$
So by Lebesgue interpolation and the Sobolev inequality, we get
\begin{equation}\label{Ithirdest}
I\leq \frac{C_{univ,(2)}(s')^2}{s'-1}\|p(\cdot,t)\|_{L^{s}(\mathbb{R}^3)}\Big(\||v|^{\frac{q}{2}}(\cdot,t)\|_{L^{2}(\mathbb{R}^3)}^2\Big)^{\frac{1}{r}}\Big(\|\nabla(|v|^{\frac{q}{2}})(\cdot,t)\|_{L^{2}(\mathbb{R}^3)}^2\Big)^{\frac{3}{2s}}.
\end{equation}
By Young's algebraic inequality, together with the facts that $q-2\in (0,1)$ and $r>1$, we see that
$$3(q-2)^2 I\leq \frac{(C^{(1)}_{univ})^r (s')^{2r}(q-2)^2}{r(s'-1)^r}\Big(\frac{r'}{12}\Big)^{-\frac{r}{r'}}\|p(\cdot,t)\|_{L^{s}(\mathbb{R}^3)}^r \||v|^{\frac{q}{2}}(\cdot,t)\|_{L^{2}(\mathbb{R}^3)}^2+\frac{1}{12}\int\limits_{\mathbb{R}^3}|\nabla(|v|^{\frac q2})(x,t)|^2\, dx. $$
Inserting this into \eqref{Lqenergybalmain} gives the desired conclusion.
\end{proof}
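For the reader's convenience, we record the final Young step in detail. Set $X(t):=\||v|^{\frac{q}{2}}(\cdot,t)\|_{L^{2}(\mathbb{R}^3)}^2$ and $Y(t):=\|\nabla(|v|^{\frac{q}{2}})(\cdot,t)\|_{L^{2}(\mathbb{R}^3)}^2$, and note that $\tfrac{3}{2s}=1-\tfrac{1}{r}=\tfrac{1}{r'}$. Then \eqref{Ithirdest} reads
$$3(q-2)^2 I\leq a\, Y^{\frac{1}{r'}}\quad\textrm{with}\quad a:=\frac{3C_{univ,(2)}(s')^2(q-2)^2}{s'-1}\|p(\cdot,t)\|_{L^{s}(\mathbb{R}^3)}X^{\frac{1}{r}}.$$
Applying Young's inequality $a\beta\leq \frac{a^r}{r\eta^{r}}+\frac{\eta^{r'}\beta^{r'}}{r'}$ with $\beta:=Y^{\frac{1}{r'}}$ and $\eta^{r'}:=\frac{r'}{12}$ produces the term $\frac{1}{12}Y$, while the remaining term is $\frac{a^r}{r}\big(\frac{r'}{12}\big)^{-\frac{r}{r'}}$. Since $q-2\in (0,1)$ and $r>1$, we have $(q-2)^{2r}\leq (q-2)^2$, which yields the stated coefficient with $C^{(1)}_{univ}:=3C_{univ,(2)}$.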
\textbf{Proof of Theorem \ref{higherinteggen}}\\
We fix a constant $q\in (2,3)$ to be determined.
Since $v$ is a weak Leray-Hopf solution satisfying \eqref{vbounded}, $v$ and its associated pressure $p$ are sufficiently smooth on $\mathbb{R}^3\times (0,T^*)$. Moreover, for each $t\in (0,T^*)$, $v(\cdot,t)$ belongs to every $L^{\alpha}(\mathbb{R}^3)$ with $\alpha \in [2,\infty]$, and $p(\cdot,t)$ belongs to every $L^{\beta}(\mathbb{R}^3)$ with $\beta \in (1,\infty)$.
Hence all calculations performed below can be rigorously justified.
Recall our assumption that
\begin{equation}\label{presassumptionrecall}
\|p\|_{L^{r,\infty}(0,T^*; L^{s}(\mathbb{R}^3))}\leq M^2\quad\textrm{with}\,\,\tfrac{2}{r}+\tfrac{3}{s}=2\,\,\textrm{and}\,\,s>\tfrac{3}{2}.
\end{equation}
We let $\mu(s)=\mu>2$ be fixed and chosen such that
\begin{equation}\label{parameterchoice}
\frac{3}{2}+\frac{3}{2\mu}<s.
\end{equation}
Since $\mu>2$ and $q\in (2,3)$, we also have
\begin{equation}\label{muqlower}
\frac{2\mu}{q}\geq \frac{2\mu}{3}\geq \frac{4}{3}.
\end{equation}
We now apply Lemma \ref{indicesalgebra} with $c_0:=\mu>2$ and $b:=\frac{q}{2}>1$. This gives that for all $\kappa\in (0,1]$ there exist $s_{\kappa}$ and $r_{\kappa}$ such that
\begin{align}\label{indiceproperties}
\begin{split}
&\frac{2}{r_{\kappa}}+\frac{3}{s_{\kappa}}=2,\\
&\frac{q}{2}<\frac{3}{2}+\frac{1}{\mu}<\frac{3}{2}+\frac{q}{2\mu}\leq s_{\kappa}\leq s\,\,\textrm{and}\\
&\frac{r_{\kappa}}{s_{\kappa}}=\frac{r(1-\kappa)}{s}+\frac{2\mu\kappa}{q}.
\end{split}
\end{align}
Given any $t\in (0,T^*)$, we can apply \eqref{indiceproperties} and Lebesgue interpolation to get
\begin{equation}\label{pressureinterpolation}
\|p(\cdot,t)\|_{L^{s_{\kappa}}(\mathbb{R}^3)}^{r_{\kappa}}\leq \|p(\cdot,t)\|_{L^{s}(\mathbb{R}^3)}^{r(1-\kappa)}\|p(\cdot,t)\|_{L^{\frac{q}{2}}(\mathbb{R}^3)}^{\mu\kappa}.
\end{equation}
Since the pressure $p$ is a composition of Riesz transforms acting on $v\otimes v$, we see that there exist constants $C_{univ, (3)},\,C_{univ,(4)}\geq 1$ such that
\begin{equation}\label{pressuresecondest}
\|p(\cdot,t)\|_{L^{\frac{q}{2}}(\mathbb{R}^3)}\leq \frac{C_{univ,(3)}(\tfrac{q}{2})^2}{(\tfrac{q}{2})-1}\||v|^2(\cdot,t)\|_{L^{\frac{q}{2}}(\mathbb{R}^3)}\leq \frac{C_{univ,(4)}}{q-2}\||v|^{\frac{q}{2}}(\cdot,t)\|_{L^{2}(\mathbb{R}^3)}^{\frac{4}{q}}.
\end{equation}
Combining this with \eqref{pressureinterpolation} yields
\begin{equation}\label{pressureinterpolation2}
\|p(\cdot,t)\|_{L^{s_{\kappa}}(\mathbb{R}^3)}^{r_{\kappa}}\leq \frac{(C_{univ,(4)})^{\mu}}{(q-2)^{\mu\kappa}}\|p(\cdot,t)\|_{L^{s}(\mathbb{R}^3)}^{r(1-\kappa)}(\||v|^{\frac{q}{2}}(\cdot,t)\|_{L^{2}(\mathbb{R}^3)}^{2})^{\frac{2\mu\kappa}{q}}.
\end{equation}
Since $\frac{2}{r_{\kappa}}+\tfrac{3}{s_{\kappa}}=2$, we can apply Lemma \ref{energyestprodiserrin}. Note that the related coefficient from Lemma \ref{energyestprodiserrin} is
$$ \frac{(C^{(1)}_{univ})^{r_\kappa} (s'_{\kappa})^{2r_{\kappa}}(q-2)^2}{r_{\kappa}(s'_{\kappa}-1)^{r_{\kappa}}}\Big(\frac{r'_{\kappa}}{12}\Big)^{-\frac{r_{\kappa}}{r'_{\kappa}}},\qquad\textrm{with}\,\,\,s'_{\kappa}:=\frac{s_{\kappa}}{s_{\kappa}-1}\quad\textrm{and}\quad r'_{\kappa}:=\frac{r_{\kappa}}{r_{\kappa}-1}.$$
Notice that due to \eqref{indiceproperties}, this coefficient can be bounded independently of $\kappa$. In particular, there exist constants $C_{s,(0)}$ and $C_{s,(1)}$ (depending only on $s$) such that for any $\kappa:=\varepsilon\in (0,1]$ and $t\in (0,T^*)$:
\begin{align}\label{weightedestimates1}
\begin{split}
&\frac{d}{dt}\Big(\||v|^{\frac{q}{2}}(\cdot,t)\|_{L^{2}(\mathbb{R}^3)}^2\Big)+\int\limits_{\mathbb{R}^3} |\nabla v(x,t)|^2 |v(x,t)|^{q-2} dx+\frac{1}{4}\int\limits_{\mathbb{R}^3} |\nabla (|v|^{\frac{q}{2}})(x,t)|^2 dx\leq\\
& C_{s,(0)}(q-2)^2\|p(\cdot,t)\|_{L^{s_{\kappa}}(\mathbb{R}^3)}^{r_{\kappa}} \||v|^{\frac{q}{2}}(\cdot,t)\|_{L^{2}(\mathbb{R}^3)}^2\leq C_{s,(1)}(q-2)^{2-\mu\varepsilon}\|p(\cdot,t)\|_{L^{s}(\mathbb{R}^3)}^{r(1-\varepsilon)}(\||v|^{\frac{q}{2}}(\cdot,t)\|_{L^{2}(\mathbb{R}^3)}^2)^{1+\frac{2\mu\varepsilon}{q}}.
\end{split}
\end{align}
Letting $\varepsilon_0:=\frac{1}{\mu}$, we see that for any $\varepsilon\in (0,\varepsilon_0]$ and $t\in (0,T^*)$ we have
\begin{align}\label{weightedestimates2}
\begin{split}
&\frac{d}{dt}\Big(\||v|^{\frac{q}{2}}(\cdot,t)\|_{L^{2}(\mathbb{R}^3)}^2\Big)+\int\limits_{\mathbb{R}^3} |\nabla v(x,t)|^2 |v(x,t)|^{q-2} dx+\frac{1}{4}\int\limits_{\mathbb{R}^3} |\nabla (|v|^{\frac{q}{2}})(x,t)|^2 dx\leq\\
& C_{s,(1)}(q-2)\|p(\cdot,t)\|_{L^{s}(\mathbb{R}^3)}^{r(1-\varepsilon)}(\||v|^{\frac{q}{2}}(\cdot,t)\|_{L^{2}(\mathbb{R}^3)}^2)^{1+\frac{2\mu\varepsilon}{q}}.
\end{split}
\end{align}
Let $\lambda(t):=\|p(\cdot,t)\|_{L^{s}(\mathbb{R}^3)}^r$. Then by the standing assumption \eqref{presassumptionrecall}, we see that
\begin{equation}\label{lambdaweaknorm}
\|\lambda\|_{L^{1,\infty}(0,T^*)}\leq M^{2r}.
\end{equation}
For $M$ sufficiently large, we define
\begin{equation}\label{qfixed}
q:=2+\frac{1}{\mu C_{s,(1)}M^{2r}}\in (2,3).
\end{equation}
With this choice of $q$, we get
\begin{equation}\label{smallexponent}
C_{s,(1)}(q-2)\|\lambda\|_{L^{1,\infty}(0,T^*)}\leq \frac{1}{\mu}< \frac{q}{2\mu}.
\end{equation}
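For clarity, we record how Lemma \ref{gronwall} is applied below: in the notation of that lemma, \eqref{weightedestimates2} corresponds to taking
$$\varphi(t):=\||v|^{\frac{q}{2}}(\cdot,t)\|_{L^{2}(\mathbb{R}^3)}^2,\quad \nu:=C_{s,(1)}(q-2),\quad \gamma:=\frac{2\mu}{q}\quad\textrm{and}\quad\varepsilon_{0}:=\frac{1}{\mu},$$
on intervals of the form $(t_1,T^*)$ with $t_1\in (0,T^*)$. By \eqref{muqlower} we have $\gamma\geq\frac{4}{3}>1$, while \eqref{smallexponent} gives $\nu\|\lambda\|_{L^{1,\infty}(0,T^*)}\leq\frac{1}{\mu}<\frac{q}{2\mu}=\frac{1}{\gamma}$, since $q>2$.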
Now, \eqref{weightedestimates2} and \eqref{smallexponent} allow us to apply Lemma \ref{gronwall}. This allows us to deduce that for any $t_{1}\in (0,T^*)$: $$\||v|^{\frac{q}{2}}\|_{L^{\infty}_{t}(t_1,T^*; L^{2}(\mathbb{R}^3))}<\infty .$$
Using this and integrating \eqref{weightedestimates2} over $(t_1,T^*)$ yields the conclusion \eqref{higherintegrable}.
\section{Proof of the $\varepsilon$-regularity criterion (Proposition \ref{Bqsmallreg})}
In this section, it is necessary to introduce the following notation for $z_0=(x_0,t_0)$, $v:Q(z_0,R)\rightarrow\mathbb{R}^3$ and $p:Q(z_0,R)\rightarrow \mathbb{R}$. We define (for $r\in (0,R)$):
\begin{equation}\label{Adef}
A(v,z_0,r):=\esssup_{t_0-r^2\leq t\leq t_{0}}\frac{1}{r}\int\limits_{B(x_0,r)} |v(x,t)|^2 dx ,
\end{equation}
\begin{equation}\label{Edef}
E(v,z_0,r):= \frac{1}{r}\int\limits_{Q(z_0,r)} |\nabla v|^2 dxds
,
\end{equation}
\begin{equation}\label{Cdef}
C(v,z_0,r):=\frac{1}{r^2}\int\limits_{Q(z_0,r)} |v|^3 dxds
,
\end{equation}
\begin{equation}\label{Sdef}
S(v,z_0,r):=\frac{1}{r^2}\int\limits_{Q(z_0,r)} ||v|^2-[|v|^2]_{B(x_0,r)}||v|dxds
,
\end{equation}
\begin{equation}\label{Ddef}
D(p, z_0,r):=\frac{1}{r^2}\int\limits_{Q(z_0,r)} |p-[p]_{B(x_0,r)}|^{\frac{3}{2}} dxds.
\end{equation}
Note that in \eqref{Sdef}-\eqref{Ddef}, we use the notation
$$[f]_{B(x_0,r)}=\frac{1}{\mu(B(x_0,r))}\int\limits_{B(x_0,r)} f dx, $$
where $\mu(B(x_0,r))$ denotes the Lebesgue measure of $B(x_0,r)$.
For $q\in (2,3)$ we also introduce the notation
\begin{equation}\label{Bqdef}
B_{q}(v,z_0,r):=\frac{1}{r^{3-q}}\int\limits_{Q(z_0,r)} |\nabla v|^2|v|^{q-2} dxds.
\end{equation}
In the above, when $z_0=(0,0)$ we will write $A(v,r)$, $B_{q}(v,r)$, etc., instead of $A(v,0,r)$, $B_{q}(v,0,r)$, and so on.
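We remark that all of the above quantities are dimensionless with respect to the Navier-Stokes scaling $v_{\lambda}(x,t):=\lambda v(\lambda x,\lambda^2 t)$, $p_{\lambda}(x,t):=\lambda^2 p(\lambda x,\lambda^2 t)$. For example, for $B_{q}$ a change of variables $(y,\tau)=(\lambda x,\lambda^2 s)$ gives
$$B_{q}(v_{\lambda},0,r)=\frac{\lambda^{q+2}}{r^{3-q}}\int\limits_{Q(0,r)}\big(|\nabla v|^2|v|^{q-2}\big)(\lambda x,\lambda^2 s)\, dxds=\frac{1}{(\lambda r)^{3-q}}\int\limits_{Q(0,\lambda r)}|\nabla v|^2|v|^{q-2}\, dyd\tau=B_{q}(v,0,\lambda r).$$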
The goal of this section is to prove Proposition \ref{Bqsmallreg} (stated in the Introduction).
In proving Proposition \ref{Bqsmallreg}, we will utilize the following lemma, which is stated and proven in \cite{sereginsverakpressure} (specifically Lemma 3.3 in \cite{sereginsverakpressure}).
\begin{lemma}\label{smallkineticimpliesreg}
There exists a positive universal constant $\varepsilon_{*}$ such that the following holds true.
Suppose that $(v,p)$ is a suitable weak solution to the Navier-Stokes equations in $Q(0,1)$.
Furthermore, suppose that there exists an $R_{*}\in(0,1)$ such that
\begin{equation}\label{Asmall}
\sup_{0<R<R_{*}} A(v,R)<\varepsilon_{*}.
\end{equation}
Then the above assumptions imply that $(x,t)=(0,0)$ is a regular point of $v$.
In particular, there exists an $r\in (0,1)$ such that
$v\in L^{\infty}(Q(0,r)).$
\end{lemma}
In order to prove Proposition \ref{Bqsmallreg}, we must first prove three preliminary estimates, which we state as separate lemmas.
\begin{lemma}\label{Sestlem}
For every $q\in (2,3)$, there exists $c^{(0)}_{q}>0$ such that the following holds true.
Suppose that $(v,p)$ is a suitable weak solution to the Navier-Stokes equations in $Q(0,1)$.
Furthermore, suppose that
\begin{equation}\label{Bqfinite2}
B_{q}(v,1)<\infty.
\end{equation}
Then for every $0<r\leq 1$, we have the estimate
\begin{equation}\label{Sest}
S(v,r)\leq c^{(0)}_{q} (A(v,r))^{\frac{4-q}{4}} (C(v,r))^{\frac{1}{3}}(B_{q}(v,r))^{\frac{1}{2}}.
\end{equation}
\end{lemma}
\begin{proof}
By H\"{o}lder's inequality in space, we have
$$S(v,r)\leq \frac{1}{r^2}\int\limits^{0}_{-r^2} ds\Big(\int\limits_{B(0,r)}||v|^2-[|v|^2]_{B(0,r)}|^{\frac{3}{2}} dx\Big)^{\frac{2}{3}}\Big(\int\limits_{B(0,r)}|v|^3 dx\Big)^{\frac{1}{3}}. $$
Next, using the Poincar\'{e}-Sobolev inequality, we infer that
\begin{equation}\label{Sfirstest}
S(v,r)\leq \frac{C_{univ,(5)}}{r^2}\int\limits^{0}_{-r^2} ds\Big(\int\limits_{B(0,r)}|\nabla v||v| dx\Big)\Big(\int\limits_{B(0,r)}|v|^3 dx\Big)^{\frac{1}{3}}.
\end{equation}
Next, note that for almost every $s\in(-r^2,0)$ we can write
$$\int\limits_{B(0,r)} |\nabla v(x,s)||v(x,s)| dx=\int\limits_{B(0,r)} |\nabla v(x,s)||v(x,s)|^{\frac{q-2}{2}}|v(x,s)|^{\frac{4-q}{2}} dx. $$
Recalling that $q\in (2,3)$ and applying H\"{o}lder's inequality twice thus gives
\begin{equation}\label{vnablavkeyest}
\int\limits_{B(0,r)} |\nabla v(x,s)||v(x,s)| dx\leq C_{q,(1)}r^{\frac{3(q-2)}{4}}\Big(\int\limits_{B(0,r)} |v(x,s)|^2 dx\Big)^{\frac{4-q}{4}}\Big(\int\limits_{B(0,r)}|\nabla v(x,s)|^2 |v(x,s)|^{q-2} dx\Big)^{\frac{1}{2}}.
\end{equation}
Inserting this into \eqref{Sfirstest} gives
\begin{equation}\label{Ssecondest}
S(v,r)\leq C_{q,(2)}r^{\frac{q-5}{2}}(A(v,r))^{\frac{4-q}{4}}\int\limits^{0}_{-r^2} ds\Big(\int\limits_{B(0,r)}|\nabla v|^2|v|^{q-2} dx\Big)^{\frac{1}{2}}\Big(\int\limits_{B(0,r)}|v|^3 dx\Big)^{\frac{1}{3}}.
\end{equation}
Applying H\"{o}lder's inequality to \eqref{Ssecondest} (twice in the time variable) readily gives the desired conclusion.
\end{proof}
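For the reader's convenience, we record the final step of the previous proof in more detail. Writing $F(s):=\int_{B(0,r)}|\nabla v(x,s)|^2|v(x,s)|^{q-2}dx$ and $G(s):=\int_{B(0,r)}|v(x,s)|^3 dx$, H\"{o}lder's inequality in time with exponents $(2,3,6)$ gives
$$\int\limits^{0}_{-r^2}F^{\frac{1}{2}}G^{\frac{1}{3}}\, ds\leq \Big(\int\limits^{0}_{-r^2}F\, ds\Big)^{\frac{1}{2}}\Big(\int\limits^{0}_{-r^2}G\, ds\Big)^{\frac{1}{3}}(r^2)^{\frac{1}{6}}=r^{\frac{3-q}{2}+\frac{2}{3}+\frac{1}{3}}(B_{q}(v,r))^{\frac{1}{2}}(C(v,r))^{\frac{1}{3}},$$
so that the total power of $r$ in \eqref{Ssecondest} becomes $\frac{q-5}{2}+\frac{3-q}{2}+1=0$, as required by \eqref{Sest}.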
\begin{lemma}\label{lemCest}
For every $q\in (2,3)$, there exists $c^{(1)}_{q}>0$ such that the following holds true.
Suppose that $(v,p)$ is a suitable weak solution to the Navier-Stokes equations in $Q(0,1)$.
Furthermore, suppose that
\begin{equation}\label{Bqfinite1}
B_{q}(v,1)<\infty.
\end{equation}
Then for every $0<r\leq\rho<1$, we have the estimate
\begin{equation}\label{Cest}
C(v,r)\leq c^{(1)}_{q}\Big\{\Big(\frac{r}{\rho}\Big)^3(A(v,\rho))^{\frac{3}{2}}+(A(v,\rho))^{\frac{3(4-q)}{8}}\Big(\frac{\rho}{r}\Big)^{3}(B_{q}(v,\rho))^{\frac{3}{4}}\Big\}.
\end{equation}
\end{lemma}
\begin{proof}
Let $0<r\leq\rho<1$. From p.\ 800 of \cite{CKN}, we see that for almost every $s\in (-r^2,0)$ we have
$$\int\limits_{B(0,r)} |v(x,s)|^2 dx\leq C_{univ,(6)}\rho\int\limits_{B(0,\rho)} |\nabla v(x,s)||v(x,s)| dx+C_{univ,(6)}\rho\Big(\frac{r}{\rho}\Big)^3 A(v,\rho). $$
Applying \eqref{vnablavkeyest} (with $\rho$ replacing $r$) gives that for almost every $s\in (-r^2,0)$:
\begin{equation}\label{vL2est}
\int\limits_{B(0,r)} |v(x,s)|^2 dx\leq C_{q,(1)}\rho^{\frac{q+1}{2}} A(v,\rho)^{\frac{4-q}{4}}\Big(\int\limits_{B(0,\rho)} |\nabla v(x,s)|^2 |v(x,s)|^{q-2} dx\Big)^{\frac{1}{2}}+C_{univ,(6)}\rho\Big(\frac{r}{\rho}\Big)^3 A(v,\rho).
\end{equation}
Next, we write
$$C(v,r)=\frac{1}{r^2}\int\limits^{0}_{-r^2}\int\limits_{B(0,r)} (|v(x,s)|^2-[|v|^2]_{B(0,r)})|v(x,s)| dxds+\frac{1}{r^2}\int\limits^{0}_{-r^2}\int\limits_{B(0,r)} [|v|^2]_{B(0,r)}|v(x,s)| dxds. $$
Hence, using H\"{o}lder's inequality we have
\begin{equation}\label{CinequalityS}
C(v,r)\leq S(v,r)+\frac{C_{univ,(7)}}{r^2}\int\limits_{-r^2}^{0}\frac{1}{r^{\frac{3}{2}}}\Big(\int\limits_{B(0,r)} |v(x,s)|^2 dx\Big)^{\frac{3}{2}} ds.
\end{equation}
Now, using Lemma \ref{Sestlem} we have
\begin{equation}\label{Cfirstinequality}
C(v,r)\leq c^{(0)}_{q} (A(v,r))^{\frac{4-q}{4}} (C(v,r))^{\frac{1}{3}}(B_{q}(v,r))^{\frac{1}{2}}+\frac{C_{univ,(7)}}{r^2}\int\limits_{-r^2}^{0}\frac{1}{r^{\frac{3}{2}}}\Big(\int\limits_{B(0,r)} |v(x,s)|^2 dx\Big)^{\frac{3}{2}} ds.
\end{equation}
Using Young's inequality, we then infer that
\begin{equation}\label{Cyoungsinequality}
C(v,r)\leq C_{q,(3)}(A(v,r))^{\frac{3(4-q)}{8}} (B_{q}(v,r))^{\frac{3}{4}}+\frac{C_{univ,(8)}}{r^2}\int\limits_{-r^2}^{0}\frac{1}{r^{\frac{3}{2}}}\Big(\int\limits_{B(0,r)} |v(x,s)|^2 dx\Big)^{\frac{3}{2}} ds.
\end{equation}
Using this, together with \eqref{vL2est} and H\"{o}lder's inequality in the time variable, one deduces that
\begin{align}\label{Cinequalityrrho}
\begin{split}
&C(v,r)\leq C_{q,(3)}(A(v,r))^{\frac{3(4-q)}{8}} (B_{q}(v,r))^{\frac{3}{4}}+C_{univ,(9)}\Big(\frac{r}{\rho}\Big)^{3} (A(v,\rho))^{\frac{3}{2}}
\\&+C_{univ,(9)}(A(v,\rho))^{\frac{3(4-q)}{8}}\Big(\frac{\rho}{r}\Big)^{3}(B_{q}(v,\rho))^{\frac{3}{4}}
\\
&\leq \Big(C_{q,(3)}\Big(\frac{\rho}{r}\Big)^{\frac{3(4-q)}{8}+\frac{3(3-q)}{4}}+C_{univ,(9)}\Big(\frac{\rho}{r}\Big)^{3}\Big)(A(v,\rho))^{\frac{3(4-q)}{8}}(B_{q}(v,\rho))^{\frac{3}{4}}+C_{univ,(9)}\Big(\frac{r}{\rho}\Big)^{3} (A(v,\rho))^{\frac{3}{2}}.
\end{split}
\end{align}
Using this, along with the fact that $q\in (2,3)$, the desired conclusion readily follows.
\end{proof}
\begin{lemma}\label{Destlem}
For every $q\in (2,3)$, there exists a positive constant $c^{(2)}_{q}$ (depending only on $q$) such that the following holds true.
Suppose that $(v,p)$ is a suitable weak solution to the Navier-Stokes equations in $Q(0,1)$.
Furthermore, suppose that
\begin{equation}\label{Bqfinite3}
B_{q}(v,1)<\infty.
\end{equation}
Then for every $0<r\leq\rho<1$, we have the estimate
\begin{equation}\label{Dest}
D(p,r)\leq c^{(2)}_{q}\Big\{\Big(\frac{\rho}{r}\Big)^2(A(v,\rho))^{\frac{3(4-q)}{8}}(B_{q}(v,\rho))^{\frac{3}{4}}+\Big(\frac{r}{\rho}\Big)^{\frac{5}{2}} D(p,\rho)\Big\}.
\end{equation}
\end{lemma}
\begin{proof}
Let $\chi_{B(0,\rho)}:\mathbb{R}^3\rightarrow \{0,1\}$ denote the indicator function, that is equal to 1 on $B(0,\rho)$ and is zero elsewhere.
For almost every $s\in (-\rho^2,0)$, we write
\begin{equation}\label{pressuredecomp}
p(x,s)=p_{1}(x,s)+p_{2}(x,s)\,\,\,\textrm{with}\,\,\,p_{1}:= \mathcal{R}_{i}\mathcal{R}_{j}(\chi_{B(0,\rho)}(v_{i}v_{j}-[v_{i}v_{j}]_{B(0,\rho)})).
\end{equation}
In \eqref{pressuredecomp}, $\mathcal{R}=(\mathcal{R}_{\alpha})_{\alpha=1,\ldots, 3}$ denotes the Riesz transform and we utilize the Einstein summation convention.
It is not difficult to show that for almost every $s\in (-\rho^2,0)$, we must have
\begin{equation}\label{q2harmonic}
\Delta p_{2}(x,s)=0\,\,\,\textrm{in}\,\,\,B(0,\rho).
\end{equation}
Using properties of harmonic functions and the same arguments as in \cite{gregory2014lecture} (specifically p.\ 145 of \cite{gregory2014lecture}) yields the following estimate:
\begin{equation}\label{pressureestharmonicity}
D(p,r)\leq\frac{C_{univ,(10)}}{r^2}\int\limits_{Q(0,r)} |p_{1}(x,s)|^{\frac{3}{2}} dxds+C_{univ,(10)}\Big(\frac{r}{\rho}\Big)^{\frac{5}{2}}\Big(D(p,\rho)+\frac{1}{{\rho}^2}\int\limits_{Q(0,\rho)} |p_{1}(x,s)|^{\frac{3}{2}} dxds\Big).
\end{equation}
It remains to estimate $$\frac{1}{{\rho}^2}\int\limits_{Q(0,\rho)} |p_{1}(x,s)|^{\frac{3}{2}} dxds.$$
Recalling \eqref{pressuredecomp} and applying Calder\'{o}n-Zygmund estimates gives that for almost every $s\in (-\rho^2,0)$
$$\int\limits_{B(0,\rho)} |p_{1}(x,s)|^{\frac{3}{2}} dx\leq C_{univ,(11)}\sum_{i,j=1}^{3}\int\limits_{B(0,\rho)} |v_{i}v_{j}(x,s)-[v_{i}v_{j}(\cdot,s)]_{B(0,\rho)}|^{\frac{3}{2}} dx. $$
Next, the Poincar\'{e}-Sobolev inequality gives that for almost every $s\in (-\rho^2,0)$:
\begin{equation}\label{p1est1}
\int\limits_{B(0,\rho)} |p_{1}(x,s)|^{\frac{3}{2}} dx\leq C_{univ,(12)}\Big(\int\limits_{B(0,\rho)} |v(x,s)||\nabla v(x,s)| dx\Big)^{\frac{3}{2}}.
\end{equation}
Next, utilizing \eqref{vnablavkeyest} (with $\rho$ instead of $r$) yields
\begin{equation}\label{p1est2}
\int\limits_{B(0,\rho)} |p_{1}(x,s)|^{\frac{3}{2}} dx\leq C_{q,(4)}\rho^{\frac{3(q-1)}{4}}(A(v,\rho))^{\frac{3(4-q)}{8}}\Big(\int\limits_{B(0,\rho)}|\nabla v(x,s)|^2|v(x,s)|^{q-2} dx\Big)^{\frac{3}{4}}.
\end{equation}
Integrating this in time and applying H\"{o}lder's inequality in time yields
\begin{equation}\label{p1keyest}
\frac{1}{\rho^2}\int\limits_{Q(0,\rho)} |p_{1}|^{\frac{3}{2}} dxds\leq C_{univ,(13)}(A(v,\rho))^{\frac{3(4-q)}{8}}(B_{q}(v,\rho))^{\frac{3}{4}}.
\end{equation}
Combining this with \eqref{pressureestharmonicity} then gives the desired conclusion.
\end{proof}
\subsection{Proof of Proposition \ref{Bqsmallreg}}
\begin{proof}
First, recall that the local energy inequality implies that for all positive functions $\varphi\in C^{\infty}_{0}(B(0,1)\times (-1,\infty))$ the following inequality holds for almost every $t\in (-1,0)$:
\begin{align}\label{localnenergyequality}
\begin{split}
&\int\limits_{B(0,1)} |v(x,t)|^2 \varphi(x,t) dx+2\int\limits_{-1}^{t}\int\limits_{B(0,1)} |\nabla v(x,s)|^2\varphi(x,s) dxds\leq \int\limits_{-1}^{t}\int\limits_{B(0,1)} | v(x,s)|^2\Delta\varphi(x,s) dxds\\&
+\int\limits_{-1}^{t}\int\limits_{B(0,1)} (| v(x,s)|^2-[|v(\cdot,s)|^2]_{B(0,2r)})v(x,s)\cdot\nabla\varphi+ 2(p(x,s)-[p(\cdot,s)]_{B(0,2r)})v(x,s)\cdot\nabla\varphi dxds.
\end{split}
\end{align}
Here, we have taken $0<r\leq \frac{1}{2}.$
By choosing an appropriate test function $\varphi$ and making use of H\"{o}lder's inequality, one can show that \eqref{localnenergyequality} implies the following. Namely, for any $0<r\leq \frac{1}{2}$:
\begin{equation}\label{localenergybasicest}
A(v,r)+E(v,r)\leq C_{univ,(14)}\{(C(v,2r))^{\frac{2}{3}}+S(v,2r)+(D(p,2r))^{\frac{2}{3}}(C(v,2r))^{\frac{1}{3}}\}.
\end{equation}
Recall that in Proposition \ref{Bqsmallreg}, we assume that
\begin{equation}\label{Bqsmallrecall}
\sup_{0<\rho<1} B_{q}(v,\rho)<\varepsilon,
\end{equation}
where $\varepsilon=\varepsilon_{q}\in (0,1)$ is to be determined.
Inserting \eqref{Bqsmallrecall} into Lemmas \ref{Sestlem}-\ref{Destlem} allows us to infer that the following hold true. In particular, for all $0<2r\leq\rho<1$ we have
\begin{equation}\label{energyestBqsmall}
A(v,r)+E(v,r)\leq C_{q,(5)}\{(C(v,2r))^{\frac{2}{3}}+(D(p,2r))^{\frac{2}{3}}(C(v,2r))^{\frac{1}{3}}+(A(v,2r))^{\frac{4-q}{4}}(C(v,2r))^{\frac{1}{3}}\varepsilon^{\frac{1}{2}}\},
\end{equation}
\begin{equation}\label{CestBqsmall}
C(v,2r)\leq C_{q,(6)}\Big\{\Big(\frac{r}{\rho}\Big)^3 (A(v,\rho))^{\frac{3}{2}}+(A(v,\rho))^{\frac{3(4-q)}{8}}\Big(\frac{\rho}{r}\Big)^{3}\varepsilon^{\frac{3}{4}}\Big\}\,\,\,\textrm{and}
\end{equation}
\begin{equation}\label{DestBqsmall}
D(p,2r)\leq C_{univ,(15)}\Big\{\Big(\frac{\rho}{r}\Big)^2(A(v,\rho))^{\frac{3(4-q)}{8}}\varepsilon^{\frac{3}{4}}+\Big(\frac{r}{\rho}\Big)^{\frac{5}{2}} D(p,\rho)\Big\}.
\end{equation}
Let us define
\begin{equation}\label{Ebigdef}
\mathcal{E}(v,p,r):=(A(v,r))^{\frac{3}{2}}+(D(p,r))^{2}.
\end{equation}
From \eqref{energyestBqsmall} and Young's inequality, we see that
\begin{equation}\label{Ebigfirstest}
\mathcal{E}(v,p,r)\leq C_{q,(7)}\{C(v,2r)+(D(p,2r))^2+(A(v,2r))^{\frac{3(4-q)}{4}}\varepsilon^{\frac{3}{2}}\}.
\end{equation}
Substituting \eqref{CestBqsmall}-\eqref{DestBqsmall} into \eqref{Ebigfirstest} (and recalling $2r\leq\rho$) gives
\begin{equation}\label{Ebigsecondest}
\mathcal{E}(v,p,r)\leq C_{q,(8)}\Big\{\Big(\frac{r}{\rho}\Big)^{3}\mathcal{E}(v,p,\rho)+(A(v,\rho))^{\frac{3(4-q)}{8}}\Big(\frac{\rho}{r}\Big)^{3}\varepsilon^{\frac{3}{4}}+\Big(\frac{\rho}{r}\Big)^4(A(v,\rho))^{\frac{3(4-q)}{4}}\varepsilon^{\frac{3}{2}}+(A(v,2r))^{\frac{3(4-q)}{4}}\varepsilon^{\frac{3}{2}}\Big\}.
\end{equation}
Using $A(v,2r)\leq \frac{\rho}{2r} A(v,\rho)$, $q\in (2,3)$, $\varepsilon\in (0,1)$ and Young's inequality, we can now deduce the following. Namely,
\begin{equation}\label{Ebigthirdest}
\mathcal{E}(v,p,r)\leq C_{q,(9)}\Big\{\Big(\frac{r}{\rho}\Big)^{3}\mathcal{E}(v,p,\rho)+\Big(\frac{\rho}{r}\Big)^6(A(v,\rho))^{\frac{3(4-q)}{4}}\varepsilon^{\frac{1}{2}}+\varepsilon\Big\}.
\end{equation}
Noting that $q\in (2,3)$ implies $\frac{2}{4-q}\in (1,2)$, we can apply Young's inequality once more.
Thus for fixed positive $\delta$ and for all $0<2r\leq\rho\leq 1$, we get
\begin{equation}\label{Ebigmainestgeneral}
\mathcal{E}(v,p,r)\leq C_{q,(10)}\Big\{\Big(\Big(\frac{r}{\rho}\Big)^{3}+\delta\Big)\mathcal{E}(v,p,\rho)+\varepsilon+\delta^{-\frac{4-q}{q-2}}\varepsilon^{\frac{1}{q-2}}\Big(\frac{\rho}{r}\Big)^{\frac{12}{q-2}}\Big\}.
\end{equation}
Therefore, for any $0<\theta\leq\tfrac{1}{2}$ and $0<\rho\leq 1$ we have
\begin{equation}\label{Ebigthetarhofirstest}
\mathcal{E}(v,p,\theta\rho)\leq C_{q,(11)}\{(\theta^3+\delta)\mathcal{E}(v,p,\rho)+\varepsilon+\delta^{-\frac{4-q}{q-2}}\varepsilon^{\frac{1}{q-2}}\theta^{-\frac{12}{q-2}}\}.
\end{equation}
Next we fix $\theta=\theta_{q}\in (0,\frac{1}{2}]$ and $\delta=\delta_{q}>0$ (independent of $\varepsilon$) such that
\begin{equation}\label{thetadeltafix}
2C_{q,(11)}\theta\leq 1\,\,\,\textrm{and}\,\,\,C_{q,(11)}\delta<\frac{\theta^2}{2}.
\end{equation}
This gives
\begin{equation}\label{Ebigthetarhosecondest}
\mathcal{E}(v,p,\theta\rho)\leq \theta^2 \mathcal{E}(v,p,\rho)+ G_{\delta,\theta}(\varepsilon)\,\,\,\textrm{with}\,\,\, G_{\delta,\theta}(\varepsilon):=C_{q,(11)}(\varepsilon+\delta^{-\frac{4-q}{q-2}}\varepsilon^{\frac{1}{q-2}}\theta^{-\frac{12}{q-2}}).
\end{equation}
Setting $\rho=1$ and iterating \eqref{Ebigthetarhosecondest} gives that for every $k\in\mathbb{N}$:
\begin{equation}\label{Ebigiterated}
\mathcal{E}(v,p,\theta^k)\leq \theta^{2k}\mathcal{E}(v,p,1)+\frac{G_{\delta,\theta}(\varepsilon)}{1-\theta^2}.
\end{equation}
Thus, for all $r\in (0,1]$ we obtain
\begin{equation}\label{Ebigrdecay}
\mathcal{E}(v,p,r)\leq \frac{r^2}{\theta^6} \mathcal{E}(v,p,1)+\frac{G_{\delta,\theta}(\varepsilon)}{\theta^4(1-\theta^2)}.
\end{equation}
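The passage from \eqref{Ebigthetarhosecondest} to \eqref{Ebigiterated} is a standard geometric-series iteration. As a sanity check, a minimal numerical sketch (with illustrative values of $\theta$, $G$ and $\mathcal{E}(v,p,1)$, not the constants of the proof):

```python
# Numerical sketch of the iteration E_{k+1} <= theta^2 * E_k + G,
# which closes to the bound E_k <= theta^(2k) * E_0 + G / (1 - theta^2).
# theta, G, E0 are illustrative values, not the constants of the proof.

def iterate(E0, theta, G, k):
    E = E0
    for _ in range(k):
        E = theta**2 * E + G
    return E

theta, G, E0 = 0.5, 1e-3, 10.0
for k in range(1, 30):
    bound = theta**(2 * k) * E0 + G / (1 - theta**2)
    assert iterate(E0, theta, G, k) <= bound + 1e-12
```

In fact the iteration sums exactly to $\theta^{2k}E_0 + G(1-\theta^{2k})/(1-\theta^2)$, which is dominated by the stated closed form.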
From \eqref{Ebigthetarhosecondest} and since $\delta$ and $\theta$ are fixed independently of $\varepsilon$, we see that $\lim_{\varepsilon\rightarrow 0^+} G_{\delta,\theta}(\varepsilon)=0.$ So, there exists $\varepsilon=\varepsilon_{q}$ such that
$$\frac{G_{\delta,\theta}(\varepsilon)}{\theta^4(1-\theta^2)}<\Big(\frac{\varepsilon_{*}}{2}\Big)^{\frac{3}{2}}. $$
Here, $\varepsilon_{*}$ is as in Lemma \ref{smallkineticimpliesreg}.
Hence, one concludes
\begin{equation}\label{Alimsupsmall}
\limsup_{R_{*}\downarrow 0} \sup_{0<R<R_{*}} A(v,R)<\varepsilon_{*}.
\end{equation}
Consequently, by Lemma \ref{smallkineticimpliesreg}, we see that $(x,t)=(0,0)$ is a regular point of $v$ as required.
\end{proof}
\section{Proof of the reduction in the Hausdorff dimension of the singular set (Theorems \ref{hausdorffdimreduce}-\ref{hausdorffdimreducegen})}
Having established the $\varepsilon$-regularity criterion (Proposition \ref{Bqsmallreg}), in this section we prove Theorems \ref{hausdorffdimreduce}-\ref{hausdorffdimreducegen} by means of Theorem \ref{higherinteggen}, the higher integrability statement from the author's paper \cite{barkerhigherinteg}, and a covering argument from Caffarelli, Kohn and Nirenberg's paper \cite{CKN}. Since this procedure is identical for both theorems, we only write out the proof of Theorem \ref{hausdorffdimreduce}.
\begin{proof}
Recall that the assumptions in Theorem \ref{hausdorffdimreduce} allow us to apply Theorem 1 in \cite{barkerhigherinteg}. In particular, for $q=2+\frac{C^{(0)}_{univ}}{M}$ and any $t_{1}\in (0,T^*)$ we have:
\begin{equation}\label{highintegrecall}
\int\limits_{t_{1}}^{T^*}\int\limits_{\mathbb{R}^3} |v|^{q-2}|\nabla v|^2 dxds<\infty.
\end{equation}
Let $(x,T^*)$ be any member of the singular set $\sigma$, where $\sigma$ is defined in \eqref{sigmadefhausdorff}. From Proposition \ref{Bqsmallreg}, we see that for all $\delta\in (0, \sqrt{T^*-t_{1}})$ there exists $r_{x,\delta}<\frac{\delta}{5}$ such that
\begin{equation}\label{Bqlowerboundsing}
\int\limits_{T^*-r_{x,\delta}^2}^{T^*}\int\limits_{B(x,r_{x,\delta})} |v|^{q-2}|\nabla v|^2 dxds\geq \frac{\varepsilon_{q}}{2} r_{x,\delta}^{3-q}=\frac{\varepsilon_{q}}{2} r_{x,\delta}^{1-\frac{C^{(0)}_{univ}}{M}}.
\end{equation}
By Vitali's covering lemma, there exists a countable subset $\{x_{i}\}_{i\in\mathbb{N}}\subset\sigma$ such that
\begin{equation}\label{disjoint}
\bar{B}(x_{i}, r_{x_{i},\delta})\cap \bar{B}(x_{j}, r_{x_{j},\delta})=\emptyset\,\,\,\textrm{for}\,\,\,i\neq j\,\,\,\textrm{and}
\end{equation}
\begin{equation}\label{countablecovering}
\sigma\subseteq \cup_{x\in\sigma} \bar{B}(x,r_{x,\delta})\subseteq \cup_{i=1}^{\infty} \bar{B}(x_{i}, 5r_{x_{i},\delta}).
\end{equation}
From \eqref{Bqlowerboundsing}, we see that for every $i\in\mathbb{N}$
\begin{equation}\label{Bqlowerboundcountable}
\frac{\varepsilon_{q}}{2} (5r_{x_{i},\delta})^{1-\frac{C^{(0)}_{univ}}{M}}\leq 5^{1-\frac{C^{(0)}_{univ}}{M}} \int\limits_{T^*-\delta^2}^{T^*}\int\limits_{B(x_{i},r_{x_{i},\delta})} |v|^{q-2}|\nabla v|^2 dxds.
\end{equation}
Using \eqref{disjoint}, we see that
\begin{align}\label{sumdisjoint}
\begin{split}
\sum_{i=1}^{\infty}\frac{\varepsilon_{q}}{2}(5r_{x_{i},\delta})^{1-\frac{C^{(0)}_{univ}}{M}}&\leq 5^{1-\frac{C^{(0)}_{univ}}{M}}\sum_{i=1}^{\infty}\int\limits_{T^*-\delta^2}^{T^*}\int\limits_{B(x_{i},r_{x_{i},\delta})} |v|^{q-2}|\nabla v|^2 dxds\\
&\leq 5^{1-\frac{C^{(0)}_{univ}}{M}}\int\limits_{T^*-\delta^2}^{T^*}\int\limits_{\mathbb{R}^3} |v|^{q-2}|\nabla v|^2 dxds.
\end{split}
\end{align}
Using this, \eqref{countablecovering}, the fact that $5r_{x_{i},\delta}<\delta$ and the definition of Hausdorff measure, we infer that
$$\mathcal{H}^{1-\frac{C^{(0)}_{univ}}{M},\delta}(\sigma)\leq \frac{ 5^{1-\frac{C^{(0)}_{univ}}{M}}2}{\varepsilon_{q}}\int\limits_{T^*-\delta^2}^{T^*}\int\limits_{\mathbb{R}^3} |v|^{q-2}|\nabla v|^2 dxds. $$
Sending $\delta\downarrow 0$ and applying the dominated convergence theorem gives us the desired conclusion.
\end{proof}
\section{introduction}
The fractional quantum Hall effect~\cite{Tsui} (FQHE) occurs in
two-dimensional electron
systems in a strong perpendicular field and is characterized notably by a gap for all
charged excitations at some fractional filling $\nu$ of the Landau levels.
Such gaps are due to electron-electron Coulomb interactions. In this regime
there are excitations with fractional charge and also fractional statistics, Abelian or even
non-Abelian in some cases. There are possible opportunities for fabrication of new
electronic devices
using the non-Abelian statistics for quantum computation~\cite{Nayak} or the
interlayer phase coherence
that occurs in bilayer systems for dissipationless devices~\cite{Eisenstein}.
For many years after its discovery the FQHE was studied almost exclusively in the
two-dimensional electron gases formed in GaAs/GaAlAs heterojunctions.
In this set-up the electron Land\'e $g$-factor and the dielectric constant
of the host material conspire to reduce the Zeeman energy with respect to the Coulomb
energy scale. As a consequence the spin degree of freedom in the lowest Landau level
cannot be considered as frozen by the external magnetic field and FQHE ground states
as well as their excited states are not always fully spin-polarized. This leads notably
to quasiparticles that have nontrivial spin textures called
skyrmions~\cite{Sondhi,Moon}.
The study of electron gases in materials such as AlAs~\cite{shayegan} added an extra degeneracy
due to the relevance of several valleys for the electronic states.
The discovery of monolayer graphene has opened an even richer
arena~\cite{Du,Bolotin,Ghahari,Dean-MLG1,Young-MLG2,Feldman-MLG3,Feldman-MLG4,Hunt,Young-MLG4,Amet-MLG5,Zibrov-MLG6,Polshyn-MLG7,Chen-MLG8,Zeng-MLG9,Zhou-MLG10,Yang-MLG11,Zhou-MLG12}
for the FQHE
since there is also an additional two-fold valley degeneracy that comes to play.
The central $N=0$ Landau level of monolayer graphene is approximately fourfold
degenerate because of spin and valley degrees of freedom
and is partially filled for a range of filling factor $-2\leq\nu_G\leq+2$.
Integer quantum Hall states~\cite{dassarmayang} also appear at fillings
$\nu_G=0,\pm 1$ and this is an instance
of quantum Hall ferromagnetism~: Coulomb interactions lead to gap opening also
at these integer filling factors.
At neutrality $\nu_G=0$ theory predicts~\cite{JJung,Kharitonov} competing phases
with various patterns of
spin and valley ordering. One can have ferromagnetic, antiferromagnetic, Kekul\'e
or charge-density wave states. While these states are degenerate if one makes
the approximation of full $SU(4)$ symmetry in spin/valley space, anisotropies will
select one of these states in real samples. Changing parameters like the ratio
of Zeeman
energy to Coulomb energy can induce transitions between such ground states.
An appealing scenario has been presented to explain a transition from an insulator
to a quantum spin Hall state as the transition between the canted antiferromagnetic
state and the ferromagnetic state.
When the central Landau level is partially filled we expect formation of FQHE states
and they will have also some pattern of spin and valley ordering.
The fully $SU(4)$ symmetric case has been explored in many works~\cite{Apalkov,Toke2006,Toke2007,Shibata1,Shibata2,Toke2,Papic,Balram2015,Balram2015b,Balram2015c}.
Inclusion of anisotropies has been studied by Abanin et al.~\cite{Abanin}
at the level of wavefunctions
and an extension of Hartree-Fock theory has been also proposed by Sodemann and
MacDonald~\cite{Inti}. While the results of mean-field theory have been confirmed
by exact
diagonalizations~\cite{Wu2014} for $\nu_G=0$ this has not been checked for the
fractional quantum Hall states.
In this paper we introduce a set of variational wavefunctions constructed out of
exact eigenstates of the $SU(4)$ symmetric limit. The wavefunctions are dressed by
a spin and flavor configuration that is determined by minimizing the energy.
The anisotropy energy can be expressed solely in terms of the pair correlation function
of the parent symmetric state. This approach generalizes the mean-field treatment of
quantum Hall ferromagnetism. Within this framework we obtain the phase diagrams
for the fractions $\nu_G=-2+n/3$ ($n=2,4,5$). We also perform extensive exact
diagonalizations in the spherical~\cite{Haldane,Fano} and torus geometry to check the
validity of the
variational approach.
For fractions $\nu_G=-2+n/3$ ($n=2,4,5$) there are two competing ground states
in the $SU(4)$ limit. At $\nu_G=-2+2/3$ we have the fully polarized particle-hole
partner of the $\nu_G=-2+1/3$ state, as well as a singlet state with two-component
structure, well known from previous studies~\cite{Ashoori}.
At $\nu_G=-2+4/3$ the competing states are the two-component particle-hole
partner of the singlet with spin-valley content $(2/3,2/3,0,0)$
and a state with a filled shell by addition to the $\nu=1/3$ Coulomb ground state
with spin-valley content $(1,1/3,0,0)$.
At $\nu_G=-2+5/3$ we have a two-component state $(1,2/3,0,0)$
where the $2/3$ state is the fully polarized state
and there is also a three-component state $(1,1/3,1/3,0)$ that is
built from a filled Landau level plus a spin-singlet $2/3$ state,
as pointed out by Sodemann and MacDonald~\cite{Inti}.
The phase diagrams from the variational method are in agreement
with the diagonalization results except for the fraction $\nu_G=-2+5/3$
where a large portion of the phase space is occupied by a correlated singlet state
that cannot be reproduced by our set of wavefunctions. To clarify its structure
we compute the pair correlation function for all combinations of spin/valley indices
and find that there is an enhanced probability for pair formation of opposite spin.
We discuss possible ways of favoring this new phase in real samples of graphene.
While the regions of stability of the different patterns in spin and valley
configurations are correctly predicted variationally, the quantum numbers
are different in several important cases involving antiferromagnetism.
Also we observe that fully polarized phases for the states $(1,1/3,0,0)$ and
$(1,2/3,0,0)$ still form $SU(2)$ valley multiplets even though the Hamiltonian
does not have such a symmetry. This is due to the special point-contact form
of the anisotropies.
For all the fractions we have studied there are in general spin transitions
when one varies the magnetic field. Multicomponent states are preferred
at small Zeeman energies.
In section II we describe the Landau levels of monolayer graphene. In section III
we introduce the Coulomb interactions and anisotropies that govern the spin/valley
ordering of quantum Hall states. Section IV contains a discussion of the $SU(4)$
representations that we use in our exact diagonalization studies as well as the
formulation of trial wavefunctions for fractional states with various spin/valley
configurations. Section V discusses the integer quantum Hall states at $\nu_G=0,\pm 1$.
Section VI gives results for fractions $\nu_G <-1$. In section VII we give evidence
for similar physics at $\nu_G=-2+4/3$ and $\nu_G=0$. The phases for the
three-component state
at $\nu_G=-2+5/3$ are discussed in section VIII. We give a simplified treatment
of spin transitions in section IX and section X contains our conclusions.
\section{graphene Landau levels}
Monolayer graphene has a simple hexagonal lattice structure that leads to a
massless Dirac fermion
spectrum close to the neutrality point. When applying a perpendicular magnetic field
there is formation of relativistic Landau levels with energies~:
\begin{equation}
E_N=\mathrm{sign}(N)\frac{\hbar v_F }{\ell}\sqrt{2|N|} +\frac{1}{2}g\mu_B B \sigma,\quad N=0,\pm1, \pm 2, \dots
\label{GLL}
\end{equation}
where $\ell=\sqrt{\hbar/eB}$ is the magnetic length, $v_F$ is the velocity of
the relativistic Dirac fermions, $g$ is the Land\'e factor which is equal to $2$
because spin-orbit coupling is negligible in graphene, $\mu_B$ is the Bohr magneton
and $\sigma=\pm 1$ the spin projection onto the magnetic field direction.
All these Landau levels are twice degenerate due to the presence of two valleys $K$ and $K^\prime$ in
the graphene Brillouin zone.
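As a rough numerical illustration of the scales in Eq.(\ref{GLL}) (a sketch using SI constants; $v_F\approx 10^6\,$m/s and $B=10\,$T are assumed representative values, not taken from a specific experiment), the orbital Landau gap dwarfs the Zeeman splitting:

```python
import math

# Relativistic Landau level energies of the form in Eq. (GLL), in eV.
# Assumed values: v_F ~ 1e6 m/s, B = 10 T; hbar, e, mu_B are SI constants.
hbar, e, mu_B = 1.0546e-34, 1.6022e-19, 9.274e-24
v_F, B, g = 1.0e6, 10.0, 2.0

ell = math.sqrt(hbar / (e * B))  # magnetic length, ~8 nm at 10 T

def E_N(N, sigma=+1):
    orbital = math.copysign(1, N) * (hbar * v_F / ell) * math.sqrt(2 * abs(N)) if N else 0.0
    return (orbital + 0.5 * g * mu_B * B * sigma) / e  # convert J -> eV

gap_01 = E_N(1) - E_N(0)   # orbital gap between N=0 and N=1, ~0.1 eV
zeeman = g * mu_B * B / e  # full Zeeman splitting, ~1 meV
```

This order-of-magnitude separation is what justifies projecting onto the $N=0$ level while keeping spin and valley as quasi-degenerate internal degrees of freedom.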
In this paper we concentrate on the physics that takes place in the zero-energy $N=0$ central
Landau level for fractional filling factors.
The $N=0$ Landau level wavefunctions have the same spatial dependence as the
non-relativistic two-dimensional electrons. For the $N=0$ manifold of Landau states the
valley index
translates exactly into
a sublattice index i.e. a valley $K$ Landau state has nonzero amplitude only on one
sublattice
$A$ while the other valley $K^\prime$ is entirely concentrated on the other
sublattice $B$.
An important consequence is that a sublattice potential difference $\Delta_{AB}$ is
equivalent
to a Zeeman field acting on the valley degree of freedom, i.e. it lifts the valley
degeneracy. Such a potential is not expected for suspended graphene samples
but is
induced notably by a hexagonal boron nitride (hBN)
substrate when it is geometrically aligned with the graphene layer.
The energy levels from Eq.(\ref{GLL}) lead to integer quantum Hall effects at
graphene filling factors $\nu_G=\pm 2, \pm 6, \pm 10,\dots$. However even in
the integer
quantum Hall regime it was discovered that there are also states for $\nu_G=0,\pm 1$
within the $N=0$ Landau level. These are instances of the general phenomenon of
quantum Hall ferromagnetism~\cite{dassarmayang}. Indeed, the general arguments
for the appearance
of quantum Hall ferromagnetism carry over to the case of graphene. We discuss the
case of fillings $\nu_G=0,\pm1$ below.
It is convenient to redefine the filling factor by writing $\nu=2+\nu_G$
so that the $N=0$ LL now spans the range $0\leq\nu\leq+4$.
The global particle-hole symmetry maps the filling factor $4-\nu$ onto $\nu$ so that
it is enough to restrict
our study to the range $0\leq\nu\leq+2$.
\section{interactions and anisotropies in monolayer graphene}
We now discuss the effective electron-electron interactions within
the $N=0$ Landau level of monolayer graphene. The Coulomb interaction
is $SU(4)$ symmetric to an excellent approximation. It leads to an
energy scale which is constructed out of the magnetic length
$E_C=e^2/(\epsilon\ell)$ where $\ell=\sqrt{\hbar/(eB)}$ and $\epsilon$
the dielectric permittivity of the system. The unitary $SU(4)$ symmetry mixing
spin and valley degrees of freedom is however reduced by several phenomena.
It has been proposed to encapsulate these splittings into the following
Hamiltonian~\cite{Alicea,Kharitonov} acting only onto the valley degrees of freedom~:
\begin{equation}
\mathcal{H}_{aniso}=\sum_{i<j}
\left[g_\perp (\tau^x_i \tau^x_j+\tau^y_i \tau^y_j) + g_z \tau^z_i \tau^z_j \right]
\delta^{(2)}(\mathbf{r}_i-\mathbf{r}_j),
\label{aniso}
\end{equation}
where the $\tau^\alpha_i$ Pauli matrices operate only in valley space.
We note that this form of the interaction is local in real space so it is not felt by
quantum states
that vanish when two particles coincide in space.
This is the case notably for the $\nu=1/3$ Laughlin wavefunction.
This simple model reminiscent of the $XXZ$ model for magnetic systems
has several desirable features, notably it describes the metal-insulator transition
observed at $\nu=0$ when tilting the magnetic field away from the direction perpendicular
to the sample as we discuss below~\cite{Young-MLG4}.
It is convenient to parameterize the two coefficients $g_{\perp,z}$ with an
angular variable $\theta$~:
\begin{equation}
g_\perp = g\cos\theta , \quad g_z=g\sin\theta ,
\end{equation}
in addition to an overall strength $g$ which we take as positive.
It is convenient to convert the parameters $g_{\perp,z}$ into two separate
energy scales~:
\begin{equation}
u_\perp = \frac{g_\perp}{2\pi \ell^2}, \quad u_z = \frac{g_z}{2\pi \ell^2}.
\end{equation}
One can define dimensionless strengths of anisotropies by factoring out the Coulomb
energy scale~:
\begin{equation}
\tilde g = (g/\ell^2)/(e^2/(\epsilon\ell)).
\end{equation}
One-body energy level splitting occurs through the Zeeman effect that acts
on the spins and sublattice splitting onto the valley indices~:
\begin{equation}
\mathcal{H}_{1body}=-\epsilon_Z \sum_i S^z_i + \Delta_{AB}\sum_i T^z_i .
\end{equation}
The Zeeman energy $\epsilon_Z$ is $g\mu_B B_{T}$ where $B_T$ is the total field and the
Land\'e factor $g=2$ since spin-orbit coupling is negligible in graphene. We note that the
direction of the field is arbitrary due to spin
rotation invariance. On the contrary the sublattice symmetry breaking takes
place between the two valleys. In the case of the commonly used hBN substrate
typical values of the splitting $\Delta_{AB}$ are of the order of 10 meV
and are magnetic field independent.
The special Hamiltonian Eq.(\ref{aniso}) has some symmetry properties that are independent
of the filling factor.
There are symmetries like those of the XXZ Heisenberg Hamiltonian well known in the field
of quantum magnetism. Notably one can always perform rotations in valley space around
the $z$ axis.
These form a $U(1)$ symmetry group leading to the conservation of $T_z$, the projection of
the isospin onto the $z$ axis.
When $u_z=u_\perp$ ($\theta=\pi/4,\pi+\pi/4$) there is invariance under full rotation in
valley space with a group
$SU(2)$. Beyond these obvious symmetries, we note that for $u_\perp=0$
($\theta=\pi/2,3\pi/2$) one can make spin rotations
independently in each valley so there is a symmetry $SU(2)_K\times SU(2)_{K^\prime}$.
There is also an additional $SO(5)$ symmetry when $u_z+u_\perp=0$
($\theta=3\pi/4,7\pi/4$) which is not obvious in
the present formulation~\cite{Wu2015}.
It is important to keep in mind that the anisotropic Hamiltonian Eq.(\ref{aniso})
has no
fundamental significance. It should be viewed as the most relevant perturbation beyond
the $SU(4)$ symmetric Coulomb interaction. In the real world there will be weaker
anisotropic interactions between electrons with non-zero relative angular momentum.
We also note that the magnitude of the couplings $g_z,g_\perp$ is still uncertain.
Constraints on their values come from the metal-insulator transition in tilted field
observed at neutrality~\cite{Young-MLG2}.
\section{integer fillings $\nu_G =0,\pm 1$}
If we consider the integer quantum Hall state at $\nu=1$ then there is a set of
exact eigenstates
of the Coulomb problem with a closed form given by a single Slater determinant~:
\begin{equation}
|\Psi_{\nu=1}\rangle=\prod_m c^\dag_{m\alpha} |0\rangle,
\label{nu1wave}
\end{equation}
where $|\alpha\rangle$ is an arbitrary four component vector in spin-valley space.
The integer $m$ is the index of the orbital Landau level and the product runs over
all values of $m$, corresponding to complete filling of the level.
This state is an exact eigenstate provided one neglects Landau level mixing. This
fact is
aptly called quantum Hall ferromagnetism~\cite{dassarmayang}.
The arbitrariness in the direction of $|\alpha\rangle$ is presumably fixed by
lattice effects
beyond the simple continuum models we consider~\cite{Alicea}. Notably such
a wavefunction
Eq.(\ref{nu1wave})
vanishes when electrons coincide in real space and so it is insensitive to
perturbations like the anisotropies of Eq.(\ref{aniso}).
\begin{figure}[H]
\centering
\includegraphics[width=0.3\columnwidth]{phasediagnu0.pdf}
\caption{Phase diagram at neutrality, $\nu=2$ ($\nu_G=0$). The various ground states are displayed as a function
of the anisotropy angle $\theta$, defined through $g_z=g\sin\theta$ and $g_\perp=g\cos\theta$.
For small enough anisotropy energy, the precise value of the overall
energy scale $g$ is irrelevant to the phase competition.
There are four phases and four first-order phase transitions between them, occurring at high-symmetry points.}
\label{nu0phase}
\end{figure}
We now discuss the special case of graphene neutrality $\nu=2$. At this filling factor
there are exactly two filled
Landau levels. In the fully symmetric $SU(4)$ limit one notes that a Slater
determinant is
an exact eigenstate~:
\begin{equation}
|\Psi_{\nu=0}\rangle=\prod_m c^\dag_{m\alpha} c^\dag_{m\beta}|0\rangle,
\label{nu0wave}
\end{equation}
where the orbital index $m$ (whose precise definition is geometry dependent) takes all allowed values
corresponding to complete integer filling, and $\alpha,\beta$ are two orthogonal vectors
in four-dimensional spin-valley space spanned by
$\{|K\uparrow\rangle,|K\downarrow\rangle,
|K^\prime\uparrow\rangle,|K^\prime\downarrow\rangle\}$.
Due to the $SU(4)$ symmetry these two vectors are arbitrary.
The anisotropies will induce energy differences among all possibilities and determine
the ground state
spin-valley ordering. The mean-field treatment is performed by taking the
expectation value
of the
anisotropy Hamiltonian Eq.(\ref{aniso}) in the Slater determinant of
Eq.(\ref{nu0wave}).
The result is given by a functional of the two vectors $\alpha$ and $\beta$~:
\begin{equation}
\epsilon_a=\frac{1}{N_{\phi}}
\langle\Psi_{\nu=0}| \mathcal{H}_{aniso}|\Psi_{\nu=0}\rangle
=\sum_{i=x,y,z} u_i [ tr(P_\alpha \tau_i)tr(P_\beta \tau_i)
-tr(P_\alpha \tau_i P_\beta \tau_i)]
\label{trialenergy}
\end{equation}
where we have defined
the anisotropy energy per flux quantum $\epsilon_a$, the
projectors onto the trial vectors~:
\begin{equation}
P_\alpha=|\alpha\rangle\langle\alpha|,P_\beta=|\beta\rangle\langle\beta|,
\end{equation}
and the couplings are given by $u_x=u_y=u_\perp$ together with $u_z$.
By minimization of this functional
one finds four phases in the absence of Zeeman energy that are displayed in
Fig.(\ref{nu0phase}).
There is a ferromagnetic phase (F) with $\alpha=|K\uparrow\rangle,\beta=|K^\prime\uparrow\rangle$
which is stabilized in the range $-\pi/4<\theta<+\pi/2$. One finds also
an antiferromagnetic phase (AF) with $\alpha=|K\uparrow\rangle,\beta=|K^\prime\downarrow\rangle$
preferred in the range $\pi/2 < \theta < 3\pi/4$. Note that this state is an
antiferromagnet both in spin space and valley space.
The Kekul\'e state (KD) has $\alpha=|\mathbf{n}\uparrow\rangle,\beta=|\mathbf{n}\downarrow\rangle$
where $\mathbf{n}$ is a vector lying in the XY plane of the Bloch sphere for valley degrees of freedom~:
$\mathbf{n}=(|K\rangle+\mathrm{e}^{i\phi}|K^\prime\rangle)/\sqrt{2}$ where $\phi$ is an arbitrary angle in the valley XY plane.
This state is a spin singlet but a XY valley ferromagnet.
It is preferred in the range $3\pi/4<\theta<5\pi/4$. Finally, there is
the charge density wave state (CDW), defined by
$\alpha=|K\uparrow\rangle,\beta=|K\downarrow\rangle$.
This state is a spin singlet but a valley ferromagnet. Since the valley index
coincides with the sublattice index
in the central Landau level, this state has all the charge density on one
sublattice and would be favored by a substrate
breaking the sublattice symmetry explicitly, like hexagonal boron nitride. It
is preferred in the range
$5\pi/4<\theta<7\pi/4$, as it is stabilized by negative valley interactions along
the $z$ direction.
All transitions between these various states are first-order within mean-field theory
and this is confirmed by exact diagonalizations~\cite{Wu2014}. Notably all transitions
take place on the points with extra symmetries beyond the $U(1)$ valley conservation.
The spin quantum number of the ground state for a finite system as studied by exact
diagonalization is given by $S=0$ i.e. spin-singlet states for AF, KD and CDW phases
while the F phase has maximal spin $S=N_e/2$. Concerning valley quantum numbers
there are three phases with $T^z=0$~: F, AF, and KD while CDW has maximal valley
polarization $T^z=N_e/2$.
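The mean-field functional Eq.(\ref{trialenergy}) can be minimized over the four candidate states by direct computation. The following sketch (our own illustrative code; the spin$\,\otimes\,$valley basis ordering and sign conventions are assumptions consistent with the text, not taken from the paper) reproduces the phase windows of Fig.(\ref{nu0phase}):

```python
import numpy as np

# Mean-field anisotropy energy of Eq. (trialenergy) for the four candidate
# nu=2 states. Assumed basis: spin (up/down) tensor valley (K/K').
tx = np.array([[0, 1], [1, 0]], complex)
ty = np.array([[0, -1j], [1j, 0]], complex)
tz = np.array([[1, 0], [0, -1]], complex)
taus = [np.kron(np.eye(2), t) for t in (tx, ty, tz)]  # act on valley only

def eps_a(alpha, beta, u_perp, u_z):
    Pa, Pb = np.outer(alpha, alpha.conj()), np.outer(beta, beta.conj())
    u = [u_perp, u_perp, u_z]
    return sum(ui * (np.trace(Pa @ t).real * np.trace(Pb @ t).real
                     - np.trace(Pa @ t @ Pb @ t).real)
               for ui, t in zip(u, taus))

# Basis vectors: up=(1,0), down=(0,1) in spin; K=(1,0), K'=(0,1) in valley.
up, dn, K, Kp = map(np.array, ([1, 0], [0, 1], [1, 0], [0, 1]))
n = (K + Kp) / np.sqrt(2)  # valley XY vector (phi = 0)
states = {
    "F":   (np.kron(up, K), np.kron(up, Kp)),
    "AF":  (np.kron(up, K), np.kron(dn, Kp)),
    "KD":  (np.kron(up, n), np.kron(dn, n)),
    "CDW": (np.kron(up, K), np.kron(dn, K)),
}

def ground_state(theta):
    u_perp, u_z = np.cos(theta), np.sin(theta)
    return min(states, key=lambda s: eps_a(*states[s], u_perp, u_z))
```

One finds $\epsilon_a(\mathrm{F})=-u_z-2u_\perp$, $\epsilon_a(\mathrm{AF})=-u_z$, $\epsilon_a(\mathrm{KD})=u_\perp$ and $\epsilon_a(\mathrm{CDW})=u_z$, so the transitions indeed occur at $\theta=\pi/2,\,3\pi/4,\,5\pi/4,\,7\pi/4$, the high-symmetry points.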
With nonzero Zeeman energy the most notable change is that the AF phase becomes
canted (CAF).
The spins prefer to have antiparallel orientation in the plane perpendicular to the
field direction and a small projection onto this direction. Increasing the Zeeman energy
leads to greater ferromagnetic character and ultimately a phase transition from CAF to
the F
phase. The F phase has the very special characteristic of having conducting edge modes
contrary to the CAF phase~\cite{ALL}.
The tilted-field experiments of ref.(\onlinecite{Young-MLG2}) enable the variation
of the ratio of Zeeman vs. Coulomb energy scales. Since there is a metal-insulator
transition in this set-up, a natural explanation is that they observe the CAF-F
transition.
To be consistent with this scenario one deduces bounds on anisotropies~:
\begin{equation}
u_\perp\simeq -10\epsilon_Z, \quad u_z+u_\perp > 0.
\label{anisobounds}
\end{equation}
\section{SU(4) representations and model wavefunctions}
\label{irreps}
In this section we describe how we use the theory of irreducible representations
of the $SU(4)$
group in our exact diagonalization studies.
Energy levels of a $SU(4)$-invariant Hamiltonian are in general degenerate
and form irreducible
representations (irreps) of the symmetry group. These irreps are
in one-to-one correspondence with Young tableaux that consist of
three rows of boxes with $L_1$ boxes on the first row, $L_2$ boxes on the
second row and $L_3$
on the third row with $L_1\geq L_2\geq L_3$. The dimension of such an irrep is given by~:
\begin{equation}
\mathcal{D}(p_1,p_2,p_3)=\frac{1}{12}(p_1+1)(p_2+1)(p_3+1)
(p_1+p_2+2)(p_2+p_3+2)(p_1+p_2+p_3+3),
\end{equation}
where we have used the positive integers $p_1=L_1-L_2$, $p_2=L_2-L_3$,
$p_3=L_3$. It is also convenient to view such an irrep as a collection
of irreps of $SU(2)$ subgroups that operate only on subsets of the basis states.
We will use
repeatedly the pure spin subgroup generated by
$S^\alpha=\sigma^\alpha \otimes 1$ and the pure isospin subgroup generated
by $T^\alpha= 1\otimes \tau^\alpha$ acting on the valley degrees of freedom. They can be
conveniently complemented by a third $SU(2)$ subgroup
with generators $N^\alpha=\sigma^\alpha\otimes \tau^z$ that
is inspired by N\'eel antiferromagnetic order.
Although there are six independent such $SU(2)$ subgroups only these three are convenient
because we will consider the additional symmetry-breaking perturbation given by
Eq.(\ref{aniso}). A state belonging to a given
$SU(4)$ irrep
can be taken as an eigenstate of $(S^z,T^z,N^z)$ simultaneously
because these generators commute. Contrary to the simpler case
of the $SU(2)$ group there is in general more than one state characterized
by the three values $(S^z,T^z,N^z)$.
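The dimension formula above can be checked against familiar irreps. A quick sketch (the labels for the fundamental, antisymmetric and adjoint irreps are standard $SU(4)$ facts, not taken from the text):

```python
# Dimension of an SU(4) irrep from its labels (p1, p2, p3) = (L1-L2, L2-L3, L3),
# following the formula above.
def dim_su4(p1, p2, p3):
    return ((p1 + 1) * (p2 + 1) * (p3 + 1)
            * (p1 + p2 + 2) * (p2 + p3 + 2)
            * (p1 + p2 + p3 + 3)) // 12

# Familiar checks: fundamental 4, antisymmetric tensor 6, adjoint 15.
assert dim_su4(1, 0, 0) == 4
assert dim_su4(0, 1, 0) == 6
assert dim_su4(1, 0, 1) == 15
```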
The $SU(4)$ symmetry implies that the second-quantized Hamiltonian of the Coulomb interaction
preserves the number of particles of each species $n_i$ with definite spin and
valley so one may
impose separately these numbers $(n_1,n_2,n_3,n_4)$
and perform the diagonalization in the sector defined by these fixed numbers.
The identification of an irrep is made by finding a highest weight state.
This concept is
familiar in the $SU(2)$ case.
Indeed if one finds an energy value by imposing a given $S^z=M$
one does not know the total spin by this sole observation~:
we only know that the total spin of this level is larger than
$M$. One has to find eigenenergies in sectors with $S^z=M+1$
then $S^z=M+2$ and so on up to the point of disappearance of the energy level.
The value of highest $S^z$ containing a given energy is then equal to its
\textit{total} spin. A generalized reasoning applies in the $SU(4)$ case.
If an energy eigenstate is found in some $(n_1,n_2,n_3,n_4)$
sector one has to track it among states obtained by acting with raising operators
till one finds a highest-weight state
annihilated by all such operations. This is more computationally demanding than in
the $SU(2)$ case. For a value of the highest weight $(S^z,T^z,N^z)=(m_1,m_2,m_3)$
the $SU(4)$ irrep is characterized by
integers $p_1=m_2-m_3$, $p_2=m_1-m_2$, $p_3=m_2+m_3$ and the corresponding values of the
lengths of Young tableau rows $L_1,L_2,L_3$.
The picture that underlies our investigations is that eigenstates are essentially
$SU(4)$ multiplets whose
degeneracy
is lifted only perturbatively by the anisotropies in Eq.(\ref{aniso}).
This is sensible as
long as the anisotropy energies are not
too large with respect to the Coulomb energy scale. Present experiments seem
to be compatible
with this point of view. It is important to note that the anisotropy energies are likely
larger than the Zeeman energy in current experiments, cf. Eq.(\ref{anisobounds}).
All these symmetry considerations above are valid independently of the geometry we use.
We make use of the spherical geometry~\cite{Haldane} as well as of the torus geometry~\cite{HaldaneTorus}. The ground state of a FQHE system is expected
to be uniform in space so on a sphere it should have zero total angular momentum
and on the torus geometry where one can define magnetic many-body translations
it should have also zero many-body momentum $K_x=K_y=0$.
While the pure Coulomb problem has the complete $SU(4)$ symmetry, the anisotropic case
admits only spin $S^z$ and valley $T^z$ conservation, and these are the quantum numbers
we implement in practice.
We now discuss trial wavefunctions that can be used to describe the fractional
quantum Hall
states with spin and valley degrees of freedom. We first make several remarks that
apply to
the $SU(4)$ symmetric limit. Note that
any $SU(2)$ Coulomb eigenstate is also an $SU(4)$ eigenstate. Only the degeneracy will
change. If we have some $SU(2)$ Coulomb eigenstate constructed with two
spin-valley vectors $\alpha$ and
$\beta$ and filling factor $\nu_{\alpha\beta}$ then we can glue a filled $\nu=1$ complete
Landau level for any vector $\gamma$ and obtain another exact eigenstate of
the Coulomb interaction
with now filling factor $\nu=1+\nu_{\alpha\beta}$~:
\begin{equation}
\Psi= \{\prod_m c^\dag_{m\gamma}\} \hat{\Psi}^\dag_{\alpha\beta} |0\rangle ,
\label{product}
\end{equation}
where the operator $\hat{\Psi}^\dag_{\alpha\beta}$ creates the $SU(2)$ state with
two components.
The energy of this new state is now given by~:
\begin{equation}
E_{1+\nu_{\alpha\beta}}=E_1 + E_{\nu_{\alpha\beta}}, \quad E_1=-\sqrt{\pi/8}E_C,
\end{equation}
where $E_1$ is the energy of the completely filled level.
In first-quantized language this gluing operation is the multiplication
by the Vandermonde determinant of the $\nu=1$ state.
Note that this gluing operation also works if we use a single-component state
$\hat{\Psi}_{\alpha}$ instead of $\hat{\Psi}_{\alpha\beta}$.
In addition to the full particle-hole symmetry mapping $\nu$ to $4-\nu$
one can also perform a particle-hole symmetry on only two flavours
mapping $(\nu_1,\nu_2)$ to $(1-\nu_1,1-\nu_2)$ and the total filling
is then transformed from $\nu$ to $2-\nu$. Under this operation
the energy becomes~:
\begin{equation}
E_{2-\nu}=E_{\nu}+2(1-\nu)E_1 .
\label{pph}
\end{equation}
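As a consistency check, the two-flavour particle-hole map of Eq.(\ref{pph}) is an involution: applying $\nu\mapsto 2-\nu$ twice returns the original filling and energy. A symbolic sketch (the numerical values of $\nu$, $E_\nu$ and $E_1$ below are illustrative only):

```python
from fractions import Fraction

# Two-flavour particle-hole map: nu -> 2 - nu with E_{2-nu} = E_nu + 2(1-nu)E_1.
# Applying it twice must be the identity, since 2(1-nu) + 2(nu-1) = 0.
def ph_map(nu, E_nu, E_1):
    return 2 - nu, E_nu + 2 * (1 - nu) * E_1

nu, E_nu, E_1 = Fraction(1, 3), Fraction(-1, 10), Fraction(-1, 2)  # illustrative
nu2, E2 = ph_map(nu, E_nu, E_1)
nu3, E3 = ph_map(nu2, E2, E_1)
assert (nu3, E3) == (nu, E_nu)  # involution
```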
These mappings Eqs.(\ref{product},\ref{pph}) allow us to identify at least some of the
eigenstates found by exact diagonalization. Of course there are also
multicomponent states
that cannot be generated from the 2-component case. Some of them have been found
at filling factor $\nu=2/3$ in a previous study~\cite{Wu2015}.
It is possible also to use these mappings to construct trial wavefunctions once we have
a one or two-component state as obtained for example by the composite
fermion construction~\cite{Jain,Jainbook}. For the fractions we study, which are
most prominent
in experiments, we cannot use known multicomponent generalizations~\cite{Halperin,Allan}
of the Laughlin wavefunction because they lead to more complicated fractional fillings.
We adhere to the view that anisotropies are not strong enough to destroy the
Coulomb correlations
in a given trial state. This means that the small symmetry-breaking perturbations
Eq.(\ref{aniso}) will choose the orientation of the free vectors $\alpha,\beta,\gamma$
in a trial state like those of Eq.(\ref{product}). Sodemann and MacDonald~\cite{Inti} have
proposed an approximate scheme based on an extension of Hartree-Fock theory to estimate
the anisotropy energy as a function of the free vectors in a trial state like
Eq.(\ref{product}).
We note that it is in fact feasible to compute directly the expectation value
of the anisotropic Hamiltonian in the trial states, bypassing any Hartree-Fock-like approximation.
Since the anisotropic interactions are purely point-like, the expectation value
of the Hamiltonian Eq.(\ref{aniso}) can be expressed in terms of the pair correlation
function at the origin $g_{\alpha\beta}(0)$ generalizing the formula for $\nu=2$~:
\begin{equation}
\epsilon_a=\sum_{i=x,y,z} u_i \sum_{\alpha\beta}\, g_{\alpha\beta}(0)\, [\, \mathrm{tr}(P_\alpha \tau_i)\,\mathrm{tr}(P_\beta \tau_i)
-\mathrm{tr}(P_\alpha \tau_i P_\beta \tau_i)\,],
\label{anisoenergypair}
\end{equation}
where the pair correlation function is that of the trial wavefunction.
We define a more compact notation~:
\begin{equation}
\epsilon_a=\sum_{\alpha\beta}\, g_{\alpha\beta}(0)\,
\mathcal{F}_{\alpha\beta},
\label{anisopair}
\end{equation}
where now the sum over $\alpha,\beta$ runs over all flavors involved in
the trial wavefunction; only distinct pairs contribute, due to the Pauli principle
($g_{\alpha\alpha}(0)=0$). If we consider trial states obtained by gluing a filled
shell like in Eq.(\ref{product}) then there are no non-trivial correlations between the
completely filled shell and the other electrons~: $g_{1\alpha}(0)=\nu_\alpha$.
The case with two filled Landau levels $\nu_\alpha=\nu_\beta=1$ gives the previous
formula for the anisotropy energy Eq.(\ref{trialenergy}).
If we take a single-component state
with filling $\nu_2$ and glue a $\nu=1$ shell then we obtain an energy which is
simply the $\nu=2$ formula multiplied by a factor $\nu_2$. Hence,
without any further calculation, we can be sure that the phase diagram is
identical to that of the $\nu=2$ case and is given in Fig.(\ref{nu0phase}),
with the same set of spin-valley vectors described above for the $\nu=2$ case.
However since the number of electrons is different in the two components,
the quantum numbers of the ground state are different from those of the $\nu=2$ case.
We now discuss the case with three occupied flavors with content $(1,\nu,\nu)$.
The trial state is now a two-component state with flavor content $(\nu,\nu)$
with a filled shell $\nu=1$ glued onto it.
The anisotropy energy is now given by~:
\begin{equation}
\epsilon_a^{(1,\nu,\nu)}=\nu [\mathcal{F}_{\alpha\beta}+\mathcal{F}_{\alpha\gamma}]
+g_{\beta\gamma}(0)\mathcal{F}_{\beta\gamma}\, .
\end{equation}
For the trial states we consider, the partially filled states will involve
the two-component $\nu=2/3$ and $\nu=2/5$ spin-singlet states that have a very small
pair correlation between different spin values $g_{\alpha\beta}(0)\approx 10^{-3}$.
This very small number only slightly changes the phase boundaries and can be
safely neglected.
For matrix elements between two states $|\mathbf{t}_1\mathbf{s}_1 \rangle$,
$|\mathbf{t}_2\mathbf{s}_2 \rangle$ (so without spin-valley entanglement) the
matrix element $\mathcal{F}_{12}$ is given by~:
\begin{equation}
\mathcal{F}_{12} =\frac{1}{2}(1-\mathbf{s}_1\cdot\mathbf{s}_2)[\sum_i u_i t_{1i}t_{2i}]
-\frac{1}{2}(1+\mathbf{s}_1\cdot\mathbf{s}_2)
\frac{1}{2}(1-\mathbf{t}_1\cdot\mathbf{t}_2)[\sum_i u_i ],
\end{equation}
which allows us to obtain the variational energies for all cases of interest.
When discussing trial states we omit the unoccupied spin-valley states when displaying
the component structure of the wavefunction, e.g. $(1,1/3,0,0)$ is written
simply as $(1,1/3)$. However, when describing an irrep we use all four components,
e.g. $(7,4,0,0)$ stands for an $SU(4)$ irrep defined by its highest weight.
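The matrix element $\mathcal{F}_{12}$ above is simple enough to evaluate numerically. Below is a minimal Python sketch (the function name and tuple conventions are ours; the spin and valley Bloch vectors $\mathbf{s}_a$, $\mathbf{t}_a$ are assumed to be unit vectors):

```python
def f12(t1, s1, t2, s2, u):
    """Anisotropy matrix element F_12 between two spin-valley product
    states |t1 s1> and |t2 s2> (no spin-valley entanglement):
    F_12 = (1/2)(1 - s1.s2) sum_i u_i t1_i t2_i
         - (1/4)(1 + s1.s2)(1 - t1.t2) sum_i u_i."""
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    term1 = 0.5 * (1.0 - dot(s1, s2)) * sum(ui * a * b for ui, a, b in zip(u, t1, t2))
    term2 = 0.25 * (1.0 + dot(s1, s2)) * (1.0 - dot(t1, t2)) * sum(u)
    return term1 - term2

# Parallel spins in opposite valleys (an F-like pair): only the second
# term survives, giving F_12 = -(u_x + u_y + u_z).
print(f12((0, 0, 1), (0, 0, 1), (0, 0, -1), (0, 0, 1), (1.0, 2.0, 3.0)))  # -> -6.0
```

For antiparallel spins in the same $z$-valley only the first term survives, giving $u_z$; these two limits bracket the competition between the phases discussed below.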
\section{Fractions for $\nu <1$}
\subsection{The $\nu=1/3$ state}
\label{onethird}
At filling factor $\nu=1/3$ the one-component Coulomb ground state
is an exact eigenstate of the $SU(4)$ symmetric case and it is a member
of the irrep with highest weight $(N_e,0,0,0)$. This means that the spatial part
of the wave function is
multiplied by a fully symmetrized wavefunction with all electrons in the same
spin/valley state. The Zeeman field will orient the spin component and there will
be a residual $SU(2)$ valley symmetry.
Introduction of the anisotropies of Eq.(\ref{aniso}) will have no effect on this eigenstate
since the wavefunction exactly vanishes when two electrons coincide, because of the
Pauli principle, and the model anisotropies involve only contact interactions. In
the real world there will also be further anisotropies, involving relative
angular momentum one and higher, that will act upon this eigenstate. These
effects can be estimated to be $O(a/\ell)$ smaller than the contact anisotropies.
It is worth mentioning, however, that theoretical estimates of the anisotropies are
much smaller than the values required to
explain the tilted-field transition between the F and CAF states at $\nu_G=0$. This
means that presumably the anisotropies involving relative angular
momentum one and higher are also not well known and may be larger than naive estimates,
so they could possibly be relevant even in the range of filling factors $\nu<1$.
\subsection{The $\nu=2/3$ state}
\label{twothirds}
The situation is richer for $\nu=2/3$.
Several competing states are known to exist at this filling factor. First of all,
there is
the particle-hole conjugate of the $\nu=1/3$ state, which is again a
one-component state.
In the composite fermion description
this fully polarized state has negative effective flux and composite fermions
occupy two effective $\Lambda$-levels~\cite{Jainbook}.
With two components one can also
construct a singlet state where now only one CF $\Lambda$ level is occupied
by singlet pairs. For pure Coulomb interactions the singlet state
has lower energy than the polarized state by approximately $0.009E_C$ in the
thermodynamic limit. These two states compete directly on the torus geometry
while on the spherical geometry they have a different shift~: the polarized state
is realized for $N_\phi=(3/2)N_e$ and the singlet state for $N_\phi=(3/2)N_e-1$.
While these two states are
well-established in two-component FQHE systems, we note that there is
evidence~\cite{Wu2015} for the
formation of three-component and four-component states that are slightly
lower in energy
by $\approx 0.002E_C$. These enigmatic states are not easily explained by
composite-fermion theory, and finite-size limitations lead to a large uncertainty in
the estimate of the energy difference. Such states are formed for $N_\phi=(3/2)N_e-2$
on the sphere and they also appear on the torus geometry.
All these pure Coulomb eigenstates can be embedded in the four-component case
giving rise to degenerate $SU(4)$ multiplets. Of course the $SU(4)$
singlet~\cite{Wu2015}
observed for $N_e=8$ is unaffected by anisotropies apart from a change in energy.
The polarized $2/3$ state does not feel the delta function interactions of
Eq.(\ref{aniso}) but the singlet state has a nonvanishing probability of having
two electrons at the same location provided they have different flavors~:
$g_{\alpha\neq\beta}(0)\neq 0$. This probability is small and is known to be
of the order of $10^{-3}$
from exact diagonalization or CF wavefunctions. As a consequence the splitting
induced by anisotropies is of order $g\times g_{\alpha\neq\beta}(0)$.
If we use the variational approach of section (\ref{irreps}) we obtain an energy
functional
which has exactly the same expression as in the $\nu=2$ case apart from the
overall scale.
This means that the phase diagram is the one given in Fig.(\ref{nu0phase}).
We have checked by exact diagonalization that this phase diagram is correct beyond perturbation theory. Notably, all characteristics of the phase transitions are unchanged between $\nu=0$ and $\nu=2/3$. The ground-state quantum numbers of the finite system
with $N=6$ and $N_\phi=8$ are consistent with the pattern of spin and valley ordering
for $\nu=2$.
\section{The fraction $\nu=4/3$}
We now turn to the richer situation with fractions appearing for filling
factors greater than one (and also less than two because of particle-hole symmetry).
At the fraction 4/3, by the remarks of section (\ref{irreps}) we know that
there are exact
eigenstates in the $SU(4)$ symmetric limit obtained by adding a $\nu=1$ shell
to the $\nu=1/3$ polarized eigenstate of the Coulomb problem. The flavor content
of such a state is thus $(1,1/3)$. There is also an eigenstate obtained
by taking a $\nu=2/3$ singlet involving only two flavors and making a particle-hole
transformation on both flavors so that its final
flavor content is $(2/3,2/3)$.
From the known properties of the particle-hole transformation, Eq.(\ref{pph}),
we already
know that the $(2/3,2/3)$ state has lower Coulomb energy than $(1,1/3)$ from the
energies of the parent states in the thermodynamic limit. This remark was made in
ref.(\onlinecite{Inti}). Beyond these two exact eigenstates, it is not guaranteed that
there are no intruder states involving more components.
\subsection{The $(1,1/3)$ state}
In the torus geometry there is no shift, so these two states compete directly
when we fix the flux and the number of particles; they differ only by the flavor
partitioning. In this geometry the state $(1,1/3)$ is an excited state,
and it is thus computationally demanding to study.
In the sphere geometry
for the first candidate $(1,1/3)$ the total number of electrons is
partitioned
into two flavors $N_e=N_1+N_2$ and $N_1$ electrons fully fill
a spherical shell with flux $N_\phi$ while $N_2$ electrons form the usual
$\nu=1/3$ state~:
\begin{equation}
N_\phi = N_1 -1, \quad N_\phi= 3(N_2-1),
\end{equation}
so that the flux-number of particles relationship is given by~:
\begin{equation}
N_e=\frac{4}{3}N_\phi +2 .
\end{equation}
Hence we can use the sphere geometry to study separately the two states
$(1,1/3)$ and $(2/3,2/3)$.
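The counting above is easy to check for the system sizes used below. A short Python sketch (the helper name is ours, purely illustrative):

```python
def glue_counting_43(n_phi):
    """Electron numbers for the (1,1/3) state on the sphere:
    a filled shell with N_phi = N_1 - 1 glued to a nu=1/3 state
    with N_phi = 3(N_2 - 1)."""
    assert n_phi % 3 == 0, "N_phi must be a multiple of 3"
    n1 = n_phi + 1
    n2 = n_phi // 3 + 1
    n_e = n1 + n2
    assert 3 * n_e == 4 * n_phi + 6   # N_e = (4/3) N_phi + 2
    return n1, n2, n_e

# N_phi = 6 gives N_1 = 7, N_2 = 3, N_e = 10, matching the system
# studied below and the highest weight (7,3,0,0).
print(glue_counting_43(6))  # -> (7, 3, 10)
```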
We have performed exact diagonalizations of the system with $N_e=10$ and $N_\phi=6$.
While there is definitely a low-lying state with zero angular momentum $L=0$
and irrep $(7,3,0,0)$ as expected, it is not the ground state. Indeed the true ground state
spans the irrep $(4,4,1,1)$ with $L=0$ and the irrep $(7,3,0,0)$ is only the
fourth excited state at this system size. At these rather small system sizes, we consider that the states lying below $(7,3,0,0)$
are likely quasiparticle states with flavor-changing excitations.
The irrep $(7,3,0,0)$ is split by the anisotropies as shown in Fig.(\ref{sphere43_2C}).
We have used a small value ${\tilde g}=10^{-4}$ so that the states are not mixed with
the nearby irreps.
The low-lying states centered onto the parent symmetric state $(7,3,0,0)$
are displayed in Fig.(\ref{sphere43_2C}). There are four distinct phases
separated by first-order transitions. The range of existence of these four phases
is similar to the case for $\nu=2$. However the quantum numbers we find
are not always those predicted by the variational approach~:
\begin{itemize}
\item
For $-\pi/4<\theta<+\pi/2$ there is a phase with $S=5$ and $T^z=2$
as expected for a ferromagnetic state. Since the two valleys should be populated
by respectively seven and three electrons, one can have indeed the maximum
possible spin $S=5$ while $T^z$ is given by the difference in valley occupation.
However, this is not the whole story: we observe that states with $T^z=0,1,2$
are exactly degenerate, while the full Hamiltonian does not have $SU(2)$ symmetry
in the whole phase but only at the special point $\theta=\pi/4$.
This is not predicted by the wavefunction in Eq.(\ref{nu0wave}).
\item
For $+\pi/2<\theta<+3\pi/4$ we find a phase with $S=0$ and $T^z=0$,
which is natural to call antiferromagnetic. However it is definitely not
in agreement with the variational quantum numbers ($S=2$ and $T^z=2$).
\item
For $+3\pi/4<\theta<+5\pi/4$ the ground state has $S=2$ and $T^z=0$.
The value of $T^z$ points to XY valley order, and these values are
those predicted variationally. We therefore call this phase a Kekul\'e phase.
\item
Finally for $+5\pi/4<\theta<+7\pi/4$ we find $S=2$ and $T^z=5$.
The maximal value of $T^z$ means that all electrons reside in a single valley
i.e. a given sublattice
as in a charge-density wave state and the total spin is correctly predicted
by the variational wavefunction.
\end{itemize}
The most intriguing result is the appearance of exact valley multiplets
when the state is completely polarized. This can be explained by the following
line of reasoning~: when we have full spin polarization the electrons
occupy the two valleys and in the case of the $(1,1/3)$ state one of these valleys
is completely filled. If we perform a two-component particle-hole transformation
on the populated valleys we obtain a state $(0,2/3)$ which is fully polarized
since only one valley is occupied by holes. As a consequence, there is no effect of
the anisotropies, since they involve only a contact interaction requiring spatial
coincidence of the electrons (as is the case for the $\nu=1/3$ state discussed
in section (\ref{onethird})). This does not imply that such states have energies
independent of $g_z,g_\perp$ parameters because they appear in the one-body
terms in the particle-hole transformation.
So there is a subset of states that do not feel degeneracy-lifting anisotropies when
they are simply
given by contact interactions. Note that this argument is also valid for excited
states, as long as they are also fully spin polarized.
The total spin and valley polarizations certainly lead us
to name these phases as F,AF,KD,CDW as in the $\nu=2$ case.
So the symmetry-breaking pattern is the same as in the neutrality case.
However one of these four phases (AF) cannot be described by
trial wavefunctions
in Eq.(\ref{product}) if we use for the spinors $\alpha$ and $\beta$ the spinors
that describe the $\nu=2$ phase diagram.
Another difference with respect to the neutrality case is that the AF and KD phases do not
have the same quantum numbers, so the first-order transition
between them involves a ground-state level crossing, unlike the $\nu=2$
case~\cite{Wu2014}.
\begin{figure}[h]
\centering
\includegraphics[width=0.8\columnwidth]{S43_2C.pdf}
\caption{Energy levels as a function of anisotropy for $\nu=4/3$
on the sphere geometry with $N_e=10$ and $N_\phi=6$.
The shift of the sphere geometry selects the
$(1,1/3)$ flavor content for the two occupied components of
the highest-weight state.
All these levels fan out from
the parent unperturbed
$SU(4)$ irrep which is $(7,3,0,0)$. In the $SU(4)$ symmetric limit this irrep is
not the ground state but it is the fourth excited state (fifth-lowest lying
eigenstate). However it is the lowest-lying state with the expected
quantum numbers for the $(1,1/3)$ state.
The vertical lines mark the special symmetry points
of the anisotropic interaction model we use. We identify four different phases.
They can be called F, AF, KD and CDW as in the neutral case, with the caveat
that the quantum numbers do not always match those predicted by simple trial wavefunctions.
Notably, the AF phase is valley unpolarized and spin singlet.
Also, the ground state of the F phase is fully spin polarized as expected,
but forms $SU(2)$ valley multiplets with $S=5$ and $T=2$.
Unlike the $\nu=2$ case, the AF and KD phases no longer
have the same quantum numbers, and the phase transition between them now involves
a true level crossing at the $SO(5)$ point.}
\label{sphere43_2C}
\end{figure}
\subsection{The $(2/3,2/3)$ state}
The well-known $SU(2)$ singlet state for $\nu=2/3$ is obtained with a unit shift on the
sphere geometry
$N_\phi=(3/2)N_e-1$ with $N_e=N_\uparrow+N_\downarrow$. If we make
the particle-hole transformation
$N_{\uparrow,\downarrow}\rightarrow N_\phi+1 -N_{\uparrow,\downarrow}$
we obtain the relation $N_\phi=(3/4)N_e-1$. When embedded in the 4-component space
we expect to find an irrep with highest weight $(N_e/2,N_e/2,0,0)$ as the ground state.
The degeneracy is then lifted by the anisotropy~: in Fig.(\ref{Sphere_43}) we present results
of exact diagonalizations in the sphere geometry for $N_e=8$ electrons and $N_\phi=5$.
The important conclusion from this calculation is that the ground state quantum numbers
are now exactly the same as in the $\nu=2$ case.
So the phase diagram is the same as in the $\nu=2$ case~: see Fig.(\ref{nu0phase})
with the same behavior at the phase transition points. Notably the AF/KD phase transition
has no ground state level crossing.
This is exactly what we find with the variational approach. Indeed, for the
particle-hole conjugate of the singlet state we now have $g_{\alpha\beta}(0)$
of order unity and an energy functional Eq.(\ref{anisoenergypair}) equal to
that of $\nu=2$ except for an overall factor.
\begin{figure}[h]
\centering
\includegraphics[width=0.8\columnwidth]{S43_singlet.pdf}
\caption{Energy levels as a function of anisotropy for $\nu=4/3$
on the sphere geometry with $N_e=8$ and $N_\phi=5$.
The shift of the sphere geometry selects the
$SU(2)$ two-component singlet $(2/3,2/3)$ as the two occupied components of
the highest-weight state.
The parent unperturbed
$SU(4)$ irrep is $(4,4,0,0)$ which is the ground state in the symmetric limit.
The vertical lines mark the special symmetry points
of the anisotropic interaction model we use. We identify four different phases.
The quantum numbers are exactly the same as in the neutrality case $\nu=2$.
We thus observe that the transition between AF and KD phases does not involve
a ground state level crossing but presumably happens through the collapse of
a tower of states.}
\label{Sphere_43}
\end{figure}
While the $(2/3,2/3)$ state lies below the $(1,1/3)$ state at zero Zeeman energy,
there may be a transition between these states that is sensitive to the precise
phase which is realized. This is discussed in section (\ref{spintransition}).
\section{The fraction $\nu=5/3$}
Two possible candidates at this fraction are now $(1,2/3)$, which is a two-component
state, and $(1,1/3,1/3)$, which is a genuine three-component state.
\subsection{The two-component state $(1,2/3)$}
The first state
is obtained by adding the polarized
i.e. one-flavor $\nu=2/3$ state to a filled level. This polarized state with $\nu=2/3$
is the one-component particle-hole transform of the polarized Coulomb eigenstate
at $\nu=1/3$. The flux--particle-number relation is thus given by~:
\begin{eqnarray*}
N_\phi = N_1 -1, \quad & &N_\phi= 3/2 \times N_2,\quad N_e=N_1+N_2,\\
N_\phi&=&\frac{3}{5}(N_e-1).
\end{eqnarray*}
We have performed sphere exact diagonalizations for $N_e=11$ and $N_\phi=6$.
In the $SU(4)$ limit there is a lowest-lying state with $(7,4,0,0)$ irrep and $L=0$
as expected for the state obtained from the gluing procedure of Eq.(\ref{product})
but it is not the ground state. This is the same phenomenon that we observe at
$\nu=4/3$.
We posit that these extra states are flavor-changing quasiparticle excitations
and focus only onto the fate of the $(7,4,0,0)$ multiplet. By using a small value of
the anisotropy ${\tilde g}=10^{-4}$, the irrep is split into many levels, but they do not
mix with
other multiplets. The result of this calculation is displayed in
Fig.(\ref{Fivethird2C}).
\begin{figure}[h]
\centering
\includegraphics[width=0.8\columnwidth]{S53_2C.pdf}
\caption{Energy levels versus anisotropy angle $\theta$ for filling factor $\nu=5/3$
on the sphere geometry with $N_e=11$ and $N_\phi=6$ selecting the $(1,2/3)$ state.
The parent $SU(4)$ irrep is $(7,4,0,0)$. This state is not the absolute ground state in the
$SU(4)$ limit: it is the sixth excited state. One finds four phases
consistent with the $\nu=2$ phase diagram but with distinct quantum numbers.
There is a ferromagnetic phase for $-\pi/4<\theta<+\pi/2$ with $S=11/2$ and $T^z=1/2,3/2$ so there is an emergent $SU(2)$ valley symmetry,
an antiferromagnetic phase for $+\pi/2<\theta<+3\pi/4$ with $S=T^z=3/2$,
a Kekul\'e phase for $+3\pi/4<\theta<+5\pi/4$ with $S=3/2$ and $T^z=1/2$
and a charge-density wave phase for $+5\pi/4<\theta<+7\pi/4$ with $S=3/2$ and $T^z=11/2$.
All transitions are first-order with level crossings.}
\label{Fivethird2C}
\end{figure}
As in the case of the $(1,1/3)$ state the quantum numbers are exactly those
expected from using the spinors $\alpha$ and $\beta$ describing the various orderings
of $\nu=2$ F,AF,KD,CDW and using them with Coulomb eigenstates in Eq.(\ref{product}).
Also since AF and KD
do not have the same quantum numbers there is a level-crossing phase transition at
the $SO(5)$ point between AF and KD phases.
In the fully polarized sector we observe also the appearance of degeneracies
due to the $SU(2)$ valley symmetry not only at the special point $\theta=\pi/4$
but in the whole F phase. The manifold of $S=11/2$ states indeed involves
both $T^z=1/2$ and $T^z=3/2$,
while the variational prediction is that we should observe only $T^z=3/2$.
This is due to the same phenomenon we found for the
ferromagnetic phase of the $(1,1/3)$ state. The two-component
particle-hole conjugate
of the fully polarized state has valley content $(0,1/3)$, so the holes, being
polarized, do not feel the point-contact anisotropies.
\subsection{The three-component state}
For the three-component state one now replaces the $\nu=2/3$ polarized state
by the two-component singlet state at the same filling factor leading to
a flavor content $(1,1/3,1/3)$. We call $N_1$ the number of electrons in the fully
filled LL
and $N_2,N_3$ the electron numbers in the two flavors forming the singlet state.
We thus obtain a shift on the sphere which is different from that of the previous state~:
\begin{eqnarray*}
N_\phi = N_1 -1, \quad & &N_\phi= 3/2 \times (N_2+N_3)-1,\quad N_e=N_1+N_2+N_3,\\
N_\phi&=&\frac{3}{5}N_e-1.
\end{eqnarray*}
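The two flux--particle relations for $\nu=5/3$ can be checked against the sizes used in our diagonalizations. A short Python sketch with exact rational arithmetic (the function names are ours, purely illustrative):

```python
from fractions import Fraction

def nphi_12_23(n_e):
    """Flux for the (1,2/3) state: N_phi = (3/5)(N_e - 1)."""
    return Fraction(3, 5) * (n_e - 1)

def nphi_1_13_13(n_e):
    """Flux for the (1,1/3,1/3) state: N_phi = (3/5) N_e - 1."""
    return Fraction(3, 5) * n_e - 1

# Sizes used in the text: the two shifts select different finite systems.
assert nphi_12_23(11) == 6     # (1,2/3): N_e = 11 electrons, N_phi = 6
assert nphi_1_13_13(10) == 5   # (1,1/3,1/3): N_e = 10 electrons, N_phi = 5
```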
We have studied in detail the case $N_e=10$, $N_\phi=5$.
In the $SU(4)$ limit we are already certain that
this state is lower in energy than $(1,2/3)$ since the $\nu=2/3$ singlet is lower
in energy than the polarized state at the same filling factor. We are also certain that
there is such an eigenstate of the $SU(4)$ symmetric Coulomb problem.
The only
remaining question is whether there are some states even lower in energy. Indeed, it may very well
be that by spreading the electrons over more flavors one can reduce the energy cost of
the Coulomb repulsion. If we consider the possible existence of an $SU(3)$ singlet state
at filling $2/3$ with shift 2 on the sphere~\cite{Wu2015} then it implies the existence
of a state with flavor content $(1,2/9,2/9,2/9)$ and a flux given by
$N_\phi=(3/5)N_e -(7/3) $. We observe on the sphere geometry that the ground state
for $N_e=14$ and $N_\phi=7$ is spanned by the irrep $(8,2,2,2)$.
Since we already know~\cite{Wu2015} that there is a $SU(3)$ singlet $(2,2,2)$ for
$N_e=6$ and $N_\phi=8$
this is in fact only a consistency check. Due to the severe size limitations of exact
diagonalizations we cannot shed further light on this issue and limit ourselves
to the states $(1,2/3)$ and $(1,1/3,1/3)$. If there are states like $(1,2/9,2/9,2/9)$
they are relevant only at small Zeeman energy.
\begin{figure}[h]
\centering
\includegraphics[width=0.8\columnwidth]{S53_singlet.pdf}
\caption{Energy levels versus anisotropy angle $\theta$ for filling factor $\nu=5/3$
on the sphere geometry with $N_e=10$ and $N_\phi=5$. This special shift
favors the singlet state for the two partially occupied Landau levels
so that the flavor content is $(1,1/3,1/3)$.
The parent $SU(4)$ irrep is $(6,2,2,0)$ and it is the ground state.
The magnitude of the anisotropy
is ${\tilde g}=10^{-2}$. We observe five phases whose quantum numbers are displayed
in Fig.(\ref{Fivethirdphase}). The five quantum phase transitions are first-order
with true level crossings.}
\label{Sphere_10_5}
\end{figure}
Our exact diagonalization results on the sphere geometry are presented in
Fig.(\ref{Sphere_10_5}).
We find now a phase diagram \textit{different} from the neutrality case.
There are five phase transitions and all of them involve level crossings in
the finite systems.
The five phases we observe have quantum numbers displayed in
Fig.(\ref{Fivethirdphase}).
Some of them may be captured by the variational approach but not all.
Since we are dealing with a three-component state it is no longer possible
to have full spin or valley polarization. The maximal value of the spin is observed
in the range $-\pi/4<\theta < +3\pi/4$. In this regime we observe two phases $A$
and $B$
that differ by the valley $T^z$ value.
\begin{itemize}
\item
The $A$ phase has no valley XY order and can be described by
$\{\alpha,\beta,\gamma\}=\{|K \uparrow\rangle,|K^\prime \uparrow\rangle,
|K^\prime \downarrow\rangle\}$
with $T^z=1$ and $S=3$.
\item
The $B$ phase, for $-\pi/4<\theta < +\pi/4$,
has $T^z=0$ and is plausibly described by a state with
$\{\alpha,\beta,\gamma\}=\{|\mathbf{t}_\perp \uparrow\rangle,
|-\mathbf{t}_\perp \uparrow\rangle,
|-\mathbf{t}_\perp \downarrow\rangle\}$.
Due to the three-component nature of the state
it is not possible to obtain $T^z=0$ by using states with definite projections
onto $|K\rangle$,
$|K^\prime\rangle$ but one has to use XY valley ordered states.
\end{itemize}
The transition between the $A$ and $B$ phases is thus associated with the
change of valley order from Ising-type in $A$ to XY-type in $B$.
The region $3\pi/4<\theta < 3\pi/2$ has now only partial spin polarization
and is divided into two phases that differ by a change in the value of the
valley polarization.
\begin{itemize}
\item
We find a phase $E_1$ for $3\pi/4<\theta < 5\pi/4$
with the maximal value of $T^z$, described by
$\{\alpha,\beta,\gamma\}=\{|K \uparrow\rangle,|K \downarrow\rangle , |K^\prime \downarrow\rangle\}$.
\item
In the lower quadrant $5\pi/4<\theta < 3\pi/2$ we have a phase $E_2$
with $T^z=0$ indicative of XY valley ordering whose candidate ordering pattern
is given by
$\{\alpha,\beta,\gamma\}=\{|\mathbf{t}_\perp \uparrow\rangle,|-\mathbf{t}_\perp \downarrow\rangle,
|\mathbf{t}_\perp \downarrow\rangle\}$.
The transition between $E_1$ and $E_2$ corresponds to a change of the valley order
from the Ising-like $z$-axis to the valley XY plane.
A variational treatment does not distinguish between Ising and XY character
in this range of anisotropies~: all such states are degenerate, with trial energy
given by Eq.(\ref{anisoenergypair}).
\item
Finally there is a fifth phase that we call $C$ for $3\pi/2<\theta < 7\pi/4$
which is a spin singlet $S=0$ as well as valley unpolarized $T^z=0$.
It is not possible to capture such a phase with the class of
variational states discussed above. If we look at the full set of degenerate states
in the $SU(4)$ limit, one notes that the irrep $(6,2,2,0)$ that we study
contains, notably, weight states such as $(3,2,2,3)$ from which one can
construct states with zero spin and zero isospin. It is an open question
to obtain explicitly a wavefunction with the correct quantum numbers for the $C$ phase.
\end{itemize}
\begin{figure}[h]
\centering
\includegraphics[width=0.3\columnwidth]{phasediag2-cropped.pdf}
\caption{Phase diagram for the $\nu=5/3$ state
with $N_e=10$ and $N_\phi=5$.
The parent $SU(4)$ irrep is $(6,2,2,0)$. This is valid on the sphere geometry.
On the torus geometry the quantum numbers are slightly different
since the number of states per Landau level differ by one unit.
While the $A$ and $B$ phases can plausibly be captured by the variational method, this is not the case for the singlet phase $C$. The $E_{1,2}$ phases are found to be degenerate variationally, while
our results show that they differ by the type of valley ordering.}
\label{Fivethirdphase}
\end{figure}
To shed some more light onto the nature of the $C$ phase we have computed the pair
correlation function $g_{\alpha\beta}(r)$ of the exact ground state in the sphere
geometry. With a ground state having zero spin and $T^z=0$, there are only four
independent spin-valley combinations, which are plotted in Fig.(\ref{paircorrelation}).
At short distance the leading correlation is
$g_{K\uparrow K\downarrow}(0)=g_{K^\prime\uparrow K^\prime\downarrow}(0)$.
This function has a maximum at the origin while all other cases have a deep
minimum as expected from Coulomb repulsion.
This may point to the formation of spin $S=0$ singlet pairs in each valley.
On the contrary the antiferromagnetic-like repulsion between
$K\uparrow$ and $K^\prime\downarrow$ is maximal at a finite distance
$\approx 2.5\ell$.
\begin{figure}[h]
\centering
\includegraphics[width=0.5\columnwidth]{paircorrelation.pdf}
\caption{The various pair correlation functions $g_{\alpha\beta}(r)$
calculated in the middle of the singlet $C$ phase for $\theta=3\pi/2+\pi/8$.
The sphere geometry is used with $N_e=10,N_\phi=5$. The chord distance $r$ varies from zero
up to $\sqrt{10}\ell$.
Since we have $S^z=0$ and $T^z=0$ in this phase there are only four distinct
correlations.}
\label{paircorrelation}
\end{figure}
\section{Spin transitions}
\label{spintransition}
In the case of filling factor $\nu=2/3$ it is well known~\cite{Ashoori} that
one can induce a
spin transition between the singlet state and the fully polarized state.
Indeed,
while the polarized state has a higher Coulomb energy, increasing the Zeeman energy
will eventually lower it below the singlet state. The crossing happens when the Zeeman energy
equals the energy difference between the two states~:
\begin{equation}
\Delta E =\epsilon_Z B_{crit}.
\end{equation}
This prototypical transition is simple because of the well-defined magnetization of
the two competing states. Here in monolayer graphene the situation is richer
since the competing states that we have studied above can have various magnetizations
according to the value of the anisotropy parameters. The Coulomb energy
scales as $\propto \sqrt{B}$ while anisotropy energies and the Zeeman energy
scale linearly with $B$. The energy per particle of a state $i$ has thus three
contributions~:
\begin{equation}
\epsilon_i = E_{Ci}\sqrt{B} + a_i B + \epsilon_Z m,
\end{equation}
where we have defined the magnetization per particle $m$
and the Coulomb energy coefficient $E_{Ci}$; the Zeeman energy $\epsilon_Z$ is itself linear in $B$.
A spin transition between two states $0$ and $1$ arises when the Coulomb energy difference is equal
to the contribution from anisotropies and Zeeman energy~:
\begin{equation}
\Delta\epsilon_{01}(B_{crit}) = (a_0-a_1+z)B_{crit} .
\label{criticalfield}
\end{equation}
The Zeeman factor $z$ depends upon the magnetizations of the two competing states
and, in the case of monolayer graphene, it takes different values in different parts of
the anisotropy phase diagram.
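With the scalings stated above (Coulomb difference $\propto\sqrt{B}$, anisotropy and Zeeman contributions linear in $B$), Eq.(\ref{criticalfield}) has the closed-form solution $B_{crit}=[\Delta E_C/(a_0-a_1+z)]^2$, where $\Delta E_C$ is the coefficient of the $\sqrt{B}$ Coulomb difference. A minimal Python sketch with purely illustrative parameter values (none taken from experiment or from our diagonalizations):

```python
def critical_field(delta_ec, a0, a1, z):
    """Solve delta_ec * sqrt(B) = (a0 - a1 + z) * B for B.
    delta_ec : coefficient of the sqrt(B) Coulomb energy difference,
    a0, a1   : anisotropy slopes of the two competing states,
    z        : Zeeman factor.  Units are arbitrary but consistent."""
    slope = a0 - a1 + z
    if slope <= 0:
        return None   # no crossing at any positive field
    return (delta_ec / slope) ** 2

# Illustrative numbers chosen for exact float arithmetic:
print(critical_field(0.5, 0.0, 0.0, 0.25))  # -> 4.0
```

The sketch also makes explicit that no transition occurs when the anisotropy-plus-Zeeman slope does not favor the higher-Coulomb-energy state.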
We now give a simplified description of spin transitions for the fractions we have
studied. We limit ourselves to situations with only a perpendicular field
and also we ignore the possibility of spin canting. Indeed the effect of the canting
is restricted to low enough fields. At large enough values of the field one obtains
a fully polarized state or a collinear antiferromagnet depending on the filling
fraction. Certainly spin canting may drive interesting transitions but detailed
predictions are hampered by our lack of knowledge about the values of anisotropies.
\subsection{$\nu=2/3$}
At filling factor $2/3$ the polarized state is insensitive to anisotropies
while the singlet state can be in any of four phases F,AF,CDW,KD
as shown in section (\ref{twothirds}).
In KD or CDW the state is a spin singlet so it is insensitive to the Zeeman coupling.
We thus expect a spin transition towards the fully polarized state
at some field value given by Eq.(\ref{criticalfield}). The AF state turns into a canted
antiferromagnetic state, which becomes fully polarized beyond some field value.
Once fully polarized, as in the F phase, the state lowers
its energy
at the same rate as the polarized $\nu=2/3$ state, so there will be no crossing and
hence no spin transition in the AF or F phases.
\subsection{$\nu=4/3$}
The phase diagrams for the states $(1,1/3)$ and $(2/3,2/3)$ are similar:
there is the same number of phases, with domains of stability spanning
the same ranges of the anisotropy angle $\theta$;
the difference now lies in their total magnetizations.
In the F phase the two states have the same total spin value so they never
cross under Zeeman coupling. In the KD and CDW phases the situation is different.
In the state $(2/3,2/3)$ there is zero net magnetization in both cases KD and CDW
while in the state
$(1,1/3)$ the KD phase has now a net magnetization equal to 1/3 of the saturation value
and the CDW phase has an even smaller magnetization (but nonzero).
So we deduce that there can be a spin transition in both phases KD and CDW.
\subsection{$\nu=5/3$}
The situation is more complex since now the phase diagrams
of the two competing states $(1,2/3)$ and $(1,1/3,1/3)$ do not overlap exactly
as a function of anisotropy.
We describe the situation by using as a reference the phases of the $(1,2/3)$ state.
In the F phase the state $(1,2/3)$ is fully polarized with magnetization
$M=M_{sat}$ while $A$ and $B$ phases of $(1,1/3,1/3)$ have only $M=M_{sat}/\nu$
so we expect a spin transition.
In KD and CDW phases the higher-lying state $(1,2/3)$ has $M=M_{sat}/3$
while the competing phases in $(1,1/3,1/3)$ are the $C$ and $E_2$ phases
(see Fig.(\ref{Fivethirdphase})).
In the singlet $C$ phase we have $M=0$ and in the $E_1$ and $E_2$ phases $M=M_{sat}/5$
so there will be a spin transition. Its location $B_{crit}$
will be phase dependent because there are (yet unknown) contributions from
anisotropies in the value of the critical field in Eq.(\ref{criticalfield}).
The general picture is that states in which the electrons are maximally spread
over the various spin-valley components are favored only at small Zeeman energies.
Notably, graphene experiments with large Zeeman splittings and large sublattice
effects, such as the measurements of Ref.~\onlinecite{Zeng-MLG9}, will involve only states like $(1,1/3)$
and $(1,2/3)$, as well as their generalizations to other fractions.
Genuine multicomponent states require the one-body fields to have minimal effect.
\section{Conclusion}
We have studied the impact of anisotropies relevant to the description
of monolayer graphene in the regime of the fractional quantum Hall effect.
At neutrality the phase diagram involves four phases F, AF, KD and CDW.
Simple generalizations of this diagram apply for $\nu=(1/3,1/3)$,
$\nu=(2/3,2/3)$
and $\nu=(1,2/3)$ states. This is found from exact diagonalizations on the
sphere geometry and is also confirmed by a variational approach involving parent
Coulomb eigenstates. The two spin-valley vectors $\alpha,\beta$ that characterize
the spin-valley order in the variational approach are identical to the neutral case.
Since the occupations of the two filled states are in general not equal,
the quantum numbers of the ground state differ from those of the neutral
case. As a consequence, all first-order transitions involve
level crossings; indeed, there is no evidence for exotic phase transitions\cite{Subir}.
In the case of $\nu=(1,1/3)$ there are also four phases whose range of stability
is the same as the neutral case but the quantum numbers are not all predicted
by the variational method. While CDW and KD-like phases have spin and valley
quantum numbers correctly predicted, we find that the antiferromagnetic phase
is a spin singlet with no net valley polarization. Interestingly we
find that the fully polarized states partly escape effects of anisotropy and still
form $SU(2)$ valley multiplets even though this is not a symmetry of the
Hamiltonian (their energies still depend upon $g_z$ and $g_\perp$).
This emergent symmetry also appears in the polarized eigenstates of the $(1,2/3)$
state.
The case $\nu=(1,1/3,1/3)$ is different. We observe five phases. Two of them
can be described variationally. There are two distinct phases in our diagonalizations
that differ by Ising valley order versus XY valley order while they are degenerate
variationally. There is also a phase which a spin singlet with presumably XY valley
order that happens for negative Ising-like anisotropy. It is an open question
how to write a wavefunction to describe this phase. The pair correlation function
$g_{\alpha\beta}(0)$
shows an enhanced probability for electrons for opposite spins but in the same
valley to be at the same location. This interesting situation requires
however low Zeeman energy to be realized experimentally.
In present experiments it is likely that one observes the two-component states
$(1,1/3)$ and $(1,2/3)$. If they are fully polarized (the F phase) then we predict
that they should escape degeneracy-lifting
anisotropies of the form given in Eq.(\ref{aniso})
and thus feature an emergent valley $SU(2)$ symmetry. A breakdown of this valley
symmetry would invalidate the simplified model of Eq.~(\ref{aniso}),
which is a crucial piece of our present understanding of the IQHE and FQHE in graphene
systems.
Recent experiments using scanning tunneling microscopy~\cite{Yazdani,Sacepe}
have given evidence for a more complex picture at neutrality $\nu=0$ than previously
thought. Notably there is evidence for phases beyond the four states of
Fig.(\ref{nu0phase}).
There are at least two possible explanations. It may very well be that
the anisotropies are not small in comparison to the Coulomb interaction and that the simple
model of Eq.~(\ref{aniso}) is not adequate. It may also mean that the Landau level
mixing
is strong enough to change the phase structure. Such a possibility would invalidate
standard theoretical treatments that focus on a fixed Landau level from the start.
Also some of the experiments~\cite{Yazdani} favor a Kekul\'e state, which is
at odds with the explanation of the metal-insulator transition~\cite{Young-MLG2}.
This may mean that the anisotropy parameters are not in the range of
Eq.~(\ref{anisobounds}), in which case the explanation for the tilted-field transition
becomes more elusive. It may be that the edge of the sample
lies in a different phase from the bulk as observed in Hartree-Fock
studies~\cite{AK}.
\begin{acknowledgments}
We acknowledge discussions with A. Assouline and P. Roulleau.
We thank C. Repellin for useful correspondence.
We thank DRF and GENCI-CCRT for
computer time allocation on the Cobalt and Topaze clusters.
\end{acknowledgments}
\section{Introduction}
A first principles understanding of bound states of a heavy quark and anti-quark pair (quarkonia) holds the key to probing the existence and properties of the quark-gluon plasma (QGP) in heavy-ion collisions (HIC). Lattice QCD has made vital contributions to estimating the properties of quarkonium at zero temperature and to the thermodynamic properties of the QGP. Static quarks at finite temperature can be studied via the Wilson loop or Wilson line correlator in Coulomb gauge, which itself can be related to the evolution of quarkonia via effective field theories.
The separation of scales $M \gg Mv \gg Mv^2, M \gg \Lambda_{QCD}$ (where $M$ is the heavy quark mass and $v$ is the relative velocity in the bound state) tells us that if we consider processes at a scale below $M$ a non-relativistic description (NRQCD) of quarks is possible. If we set the energy cutoff to $Mv$ and integrate out degrees of freedom that cannot be excited above $Mv^2$, a different effective field theory called potential NRQCD (pNRQCD) ensues. The term potential in pNRQCD refers to the non-local Wilson coefficients in the Lagrangian, the lowest order of which can be related to the Wilson loop and its spectrum.
At zero temperature the evolution of static quark anti-quark pairs after time coarse-graining can be described by a Schr\"odinger-like equation for the Wilson loop with a non-relativistic potential $V(r)$ \cite{Bali:2000gf}
\begin{equation}
\label{potential}
i \partial_t W_\Box (t,r) = \Phi(t,r)W_\Box(t,r), \quad
V(r) = \lim_{t \rightarrow \infty} \Phi(t,r).
\end{equation}
Our long-term goal is to ascertain whether the potential picture also holds at $T>0$ and, if it does, how the form of the potential changes from the zero-temperature case.
The real-time evolution of the Wilson loop referred to above can be studied on the lattice via its spectral function $\rho_\Box$, which acts as a link between the real-time Wilson loop $W_{\Box}(r,t)$ and the imaginary-time Wilson loop $W_{\Box}(r,\tau)$ \cite{Rothkopf:2011db}
\begin{equation}
\label{spectral_link}
W_\Box (r,t) = \int d \omega e^{-i \omega t}\rho_\Box (r,\omega) \leftrightarrow
\int d \omega e^{-\omega \tau} \rho_\Box (r,\omega) = W_\Box (r,\tau).
\end{equation}
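As a numerical illustration of this relation, the sketch below builds a Gaussian toy spectral function (the peak position and width are made-up values, not lattice results) and evaluates its Laplace transform, which for a Gaussian peak has a known closed form:

```python
import numpy as np

# Sketch of Eq. (spectral_link): the spectral function rho(omega) generates the
# Euclidean correlator W(tau) via a Laplace transform. A Gaussian toy spectral
# function with made-up peak position Omega and width Gamma_G stands in for rho.
Omega, Gamma_G = 1.0, 0.2          # illustrative values only
omega = np.linspace(-20.0, 20.0, 40001)
rho = np.exp(-(omega - Omega) ** 2 / (2 * Gamma_G ** 2)) / (Gamma_G * np.sqrt(2 * np.pi))

def W_euclidean(tau):
    """W(tau) = int domega exp(-omega tau) rho(omega), by trapezoidal quadrature."""
    integrand = np.exp(-omega * tau) * rho
    return float(np.sum((integrand[:-1] + integrand[1:]) * np.diff(omega)) / 2)

# For a Gaussian peak the transform is known in closed form:
tau = 0.5
analytic = float(np.exp(-Omega * tau + Gamma_G ** 2 * tau ** 2 / 2))
```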
In order to extract the spectral function from the Wilson loop $W_{\Box}(t,r)$ we must invert \cref{spectral_link}. However, in practice the presence of noisy data and the availability of only a few data points along imaginary time make the inversion an ill-posed problem. Hard thermal loop (HTL) perturbative computations show that there exists a dominant peak structure in the spectral function, related to a complex potential with a screened real part \cite{Laine:2006ns} and a finite imaginary part.
A first step toward establishing whether \cref{potential} holds requires us to investigate the spectral structure of the Wilson loop, in particular we are interested in the position $\Omega$ and width $\Gamma$ of the lowest lying spectral structure, which has been related to its late time evolution in \cite{Burnier:2012az}.
\section{Lattice Setup}
\label{setup}
We performed calculations on Wilson loop and Wilson line correlators from (2+1)-flavour QCD configurations generated by HotQCD and TUMQCD collaborations \cite{Bazavov:2018wmo,Bazavov:2017dsy,HotQCD:2014kol,Bazavov:2019qoo}.
The Highly Improved Staggered Quark (HISQ) action was used to generate ensembles providing us with $2-6 \times 10^4$ gauge configurations. $N_\sigma^3 \times N_\tau$ lattices were used with $N_\tau = 10,12,16$ to control lattice spacing effects, and the aspect ratio $N_\sigma/N_\tau$ was set to 4 to control finite volume effects. The Wilson line correlators were calculated in Coulomb gauge. The fixed-box approach was used to scan the temperature range from 140 MeV to 2 GeV, with $T=0$ simulations available for scale setting. The ratio $m_l/m_s$ was set to 1/20 for low temperatures ($T<300$ MeV) and to 1/5 for some ensembles at higher
temperatures ($T>300$ MeV).
\section{Cumulants and HTL comparison}
To analyse the properties of the lattice data we define cumulants of the correlation function:
\begin{eqnarray}
m_1(r,\tau,T)=-\partial_{\tau} \ln W(r,\tau,T),\quad m_n=\partial_{\tau} m_{n-1}(r,\tau,T), n>1,
\end{eqnarray}
where the first cumulant $m_1$ is nothing but the effective mass. We find that our data only allows us to extract information up to the first three
cumulants, due to loss of signal for higher orders.
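A minimal sketch of how such cumulants can be computed from correlator data by finite differences, using synthetic data of an assumed Gaussian-peak form (all parameter values are illustrative, not lattice results):

```python
import numpy as np

# Sketch of the cumulant definitions: m1 = -d/dtau ln W and m_n = d/dtau m_{n-1},
# approximated by finite differences (np.gradient) on a tau grid. Synthetic data
# with a made-up Gaussian-peak form stands in for the lattice correlator.
Omega, Gamma_G = 1.0, 0.2
tau = np.linspace(0.05, 0.95, 19)
W = np.exp(-Omega * tau + Gamma_G**2 * tau**2 / 2)

def cumulants(W, tau, n_max=3):
    """Return [m1, ..., m_{n_max}] on the same tau grid (edge points are less accurate)."""
    dtau = tau[1] - tau[0]
    ms = []
    m = -np.gradient(np.log(W), dtau)    # m1 = -d ln W / d tau
    ms.append(m)
    for _ in range(n_max - 1):
        m = np.gradient(m, dtau)         # m_n = d m_{n-1} / d tau
        ms.append(m)
    return ms

m1, m2, m3 = cumulants(W, tau)
# For this synthetic form, m1 = Omega - Gamma_G^2 tau and m2 = -Gamma_G^2.
```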
For the analysis we subtract the UV part of the correlator $W$ using T=0 data as described in \cite{Larsen:2019zqv}, since the UV part is temperature independent and assumed to be well separated from the low-lying structures of interest to our study. A comparison of the first cumulant obtained on the lattice with HTL is shown in Fig.\ref{cumulant1}, where we plot the difference of $m_1$ with the color singlet free energies. In HTL this difference is antisymmetric around $\tau=\beta/2$.
\begin{figure}[h]
\includegraphics[scale=0.6]{m1_r14_T667.eps}
\includegraphics[scale=0.6]{m1_r14_T1938.eps}
\caption{Comparison of $m_1-F_S$ calculated on the lattice with HTL results at $T=667$ MeV and $T=1938$ MeV for $rT=1/4$.
The HTL result for $\mu=2 \pi T$ are shown as solid lines. The dashed lines
correspond to variation of the scale $\mu$ by a factor of two.}
\label{cumulant1}
\end{figure}
We see that the first cumulant does not agree with the HTL results at $T=667$ MeV; however, better agreement is observed with increasing temperature, e.g. at $T=1938$ MeV. A comparison of the second cumulant with HTL at $T=667$ MeV is shown in Fig.~\ref{cumulant2}.
\begin{figure}[h]
\centering
\includegraphics[scale=.7]{m2_r14_T667.eps}
\caption{The second cumulant of the subtracted Wilson line correlators as function of $\tau$ at $T=667$ MeV calculated
on $N_{\tau}=12$ lattices and in HTL perturbation theory (lines) for
$rT=1/4$. Results for the renormalization scale $\mu=2 \pi T$ are shown as solid lines. The dashed lines
correspond to variation of the scale $\mu$ by a factor of two.}
\label{cumulant2}
\end{figure}
There we again find that the symmetry present in the HTL result is not reproduced by the data. Quantitative agreement however is observed around $\tau = \beta/2$.
\section{Spectral function using model fits}
As seen in the last section, we can only extract the first two cumulants of the lattice data. While the first
cumulant
carries information on the dominant peak position $\Omega$, the second
cumulant
is related to the width $\Gamma$ of that spectral structure. When we plot the first cumulant based on the subtracted data we see a linear falloff at small $\tau$. This is compatible with a Gaussian peak. Therefore as a first step we model the correlator as
\begin{equation}
\label{gaussian_fit}
\begin{split}
W(r,\tau,T) = A_P \exp\big[-\Omega(r,T)\tau+\Gamma_G(r,T)^2 \tau ^2/2\big]
+ A_{cut}(r,T) \exp
\big[-\omega_{cut}(r,T)\tau\big]
\end{split}
\end{equation}
Fitting the lattice correlator data with Eq.~\eqref{gaussian_fit} we can extract $\Omega$ and
an effective width $\Gamma=\sqrt{2\ln2}\,\Gamma_G$
as shown in Fig.\ref{Fig:fit_plot}.
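Such a fit can be sketched with `scipy.optimize.curve_fit` on synthetic, noiseless data generated from the model itself (all parameter values below are made up; real lattice data would additionally require error weighting and covariances):

```python
import numpy as np
from scipy.optimize import curve_fit

# Sketch of the model fit of Eq. (gaussian_fit) at fixed r and T. The synthetic
# "lattice" data is generated from the model with made-up parameters, so the
# fit should recover them.
def model(tau, A_P, Omega, Gamma_G, A_cut, omega_cut):
    return (A_P * np.exp(-Omega * tau + Gamma_G**2 * tau**2 / 2)
            + A_cut * np.exp(-omega_cut * tau))

tau = np.linspace(1 / 12, 1.0, 12)              # e.g. an N_tau = 12 lattice
true_params = [1.0, 1.0, 0.3, 0.5, 4.0]          # A_P, Omega, Gamma_G, A_cut, omega_cut
W_data = model(tau, *true_params)

popt, _ = curve_fit(model, tau, W_data, p0=[0.9, 1.1, 0.25, 0.4, 3.5])
Omega_fit, Gamma_G_fit = popt[1], abs(popt[2])   # Gamma_G enters only squared
Gamma_eff = np.sqrt(2 * np.log(2)) * Gamma_G_fit # effective width
```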
\begin{figure}
\centering
\includegraphics[scale=.5]{pot_Nt12.eps}
\includegraphics[scale=.5]{ImV_Nt12_T.eps}
\caption{The peak position of the spectral function (left figure) and the width (right figure) as function of the separation $r$ obtained from Gaussian fits of the $N_{\tau}=12$ data.}
\label{Fig:fit_plot}
\end{figure}
We find no significant dependence of $\Omega$ on temperature and the results are similar to the $T=0$ results. $\Gamma/T$
scales as a function of $rT$
within the uncertainties.
\section{Spectral Function Extraction Using Pad\'e}
\label{pade}
The second approach we deploy is the Pad\'e approximation, which on the one hand is model independent but on the other hand is very sensitive to noise in the input data. We start by Fourier transforming the Euclidean correlator into Matsubara frequency space:
\begin{align}
\displaystyle
W(r,\tilde \omega_n,T)=\sum_{j=0}^{N_\tau-1}e^{ia \tilde \omega_n j } W(r,j a,T),~\tilde \omega_n=2 \pi n/aN_{\tau}. \label{eq:specdec}
\end{align}
We then project the data onto a basis of rational functions and carry out an interpolation. This step is carried out according to the Schlessinger prescription \cite{PhysRev.167.1411}. Interpolating the data instead of fitting avoids a costly minimisation. We analytically continue the rational function by replacing Matsubara frequencies with real-time frequencies. Note that instead of the naive Fourier frequencies $\tilde{\omega}_n$ we use corrected frequencies based on the lattice dispersion relation
\begin{align}
\tilde \omega_n \rightarrow \omega_n=2 {\rm sin}\big(\frac{\pi n}{ N_\tau}\big)/a.
\end{align}
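The discrete transform of Eq.~\eqref{eq:specdec} and the corrected frequencies can be sketched as follows (the grid size and the toy correlator are illustrative; the function names are our own):

```python
import numpy as np

# Sketch: discrete transform of Eq. (eq:specdec) plus the lattice-corrected
# frequencies. N_tau, the lattice spacing a, and the correlator are illustrative.
a, N_tau = 1.0, 12

def matsubara_transform(W_tau):
    """W(omega_n) = sum_j exp(i a omega_n j) W(j a), with omega_n = 2 pi n / (a N_tau)."""
    j = np.arange(N_tau)
    n = np.arange(N_tau)
    phase = np.exp(1j * 2 * np.pi * np.outer(n, j) / N_tau)
    return phase @ W_tau

def corrected_frequency(n):
    """Replace the naive omega_n by the lattice dispersion 2 sin(pi n / N_tau) / a."""
    return 2.0 * np.sin(np.pi * n / N_tau) / a

W_tau = np.exp(-0.8 * a * np.arange(N_tau))   # toy Euclidean correlator
W_n = matsubara_transform(W_tau)
```

For small $n$ the corrected frequencies reduce to the naive ones, as they should.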
When we have a rational interpolation function in the Matsubara frequency space we can directly obtain the pole structure by finding roots of the denominator. The pole structure is directly related to the peak position $\Omega$ and width $\Gamma$ of the spectral function. We select the pole that is closest to the real axis to get the dominant peak structure. We identify $\Omega$ with the real part of the dominant pole and $\Gamma$ with the imaginary part.
Pad\'e methods of rational approximation require highly precise data. To test the reliability of this approach we use noisy Hard Thermal Loop (HTL) correlation functions. We compute HTL correlation functions for $T=667$ MeV discretised to 12 lattice points. We then add random noise such that the relative errors are $\Delta D/D=10^{-2}$ and $10^{-3}$, generating 1000 samples each. We then carry out the Pad\'e interpolation and pole analysis, with results as shown in Fig.~\ref{Fig:HTL_PotPade}.
\begin{figure}[h]
\includegraphics[scale=0.5]{./PaperFig_ReV_HTL.pdf}
\includegraphics[scale=0.5]{./PaperFig_ImV_HTL.pdf}
\caption{Extraction of spectral position $\Omega$ and width $\Gamma$ of the dominant peak, based on Hard Thermal Loop mock data with $dD/D = 10^{-2}$ and $dD/D = 10^{-3}$ for $T=667$ MeV using the Pad\'e method. The error bars are obtained from Jackknife resampling.}
\label{Fig:HTL_PotPade}
\end{figure}
From the pole analysis of HTL mock data we see that we are able to recover the peak position $\Omega$ within uncertainties for relatively large errors $\frac{\Delta D}{D}=10^{-2}$. The results for $\Delta D/D=10^{-3}$ are excellent. When we try to estimate the width of the peak the results are less encouraging with the true values being consistently underestimated for both $\Delta D/D = 10^{-2}$ and $\Delta D/D = 10^{-3}$.
Given that the Pad\'e is able to recover the peak position of the HTL mock data very well for error levels present in the actual data, we proceed to carry out the same analysis on the lattice correlators. Since the Pad\'e underestimates the width, we show our analysis only for the peak position in Fig.~\ref{Fig:Lattice_Pade}.
\begin{figure}
\centering
\includegraphics[scale=.5]{PaperFig_ReV_lattice2.pdf}
\caption{$\Omega$ as a function of separation distance for different temperatures obtained from a Pad\'e pole analysis on $N_\tau = 12$. }
\label{Fig:Lattice_Pade}
\end{figure}
Again the results obtained for $\Omega$ are very similar to the $T=0$ results and show no significant change with temperature, as was previously seen with the Gaussian fits.
\section{Spectral function extraction using Bayesian Method}
The Bayesian approach makes use of Bayes' theorem:
\begin{equation}
P[\rho|D,I]\propto P[D|\rho,I]P[\rho|I] = {\rm exp}[-L+\alpha S_{\rm BR}],
\end{equation}
Here $P[\rho|D,I]$ is the posterior probability, i.e. the probability for $\rho$ to be the correct spectrum given the data $D$ and prior information $I$. The likelihood $L$ is the usual quadratic distance used in $\chi^2$ fitting. The prior probability acts as a regulator,
\begin{equation}
P[\rho|I] = \exp ( \alpha S_{\rm BR}).
\end{equation}
Generally the choice of regulator depends on the particular Bayesian method; for this study we use the BR prior~\cite{Burnier:2013nla}:
\begin{equation}
S_{\rm BR}=\int d\omega \big( 1- \frac{\rho(\omega)}{m(\omega)} + {\rm log}\big[ \frac{\rho(\omega)}{m(\omega)} \big]\big),
\end{equation}
where $m(\omega)$ is the default model. We generally use the most uninformative default model, $m(\omega)={\rm const}$.
Unlike previous Bayesian approaches where the hyperparameter $\alpha$ is integrated out, here, in the hope of removing ringing artifacts, we draw inspiration from the Morozov criterion and tune $\alpha$ such that $L = N_\tau/2$. We then look for the most probable spectral function by locating the extremum of the posterior.
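A minimal sketch of the MAP step for a fixed hyperparameter $\alpha$ (the tuning of $\alpha$ is not implemented here; the grid sizes, the mock data, and the flat default model are all illustrative choices of ours):

```python
import numpy as np
from scipy.optimize import minimize

# Sketch of the MAP step of the BR method at fixed alpha: minimize L - alpha * S_BR
# over positive spectral functions. Everything below is illustrative mock data.
omega = np.linspace(0.0, 5.0, 40)
domega = omega[1] - omega[0]
tau = np.linspace(0.1, 1.0, 10)
K = np.exp(-np.outer(tau, omega)) * domega        # discretised Laplace kernel
rho_true = np.exp(-(omega - 2.0) ** 2 / 0.5)
D = K @ rho_true                                   # noiseless mock correlator
sigma = 1e-3 * np.abs(D)
m0 = np.full_like(omega, 0.1)                      # flat default model

def neg_log_posterior(b, alpha=1e-4):
    rho = np.exp(np.clip(b, -30.0, 30.0))          # positivity built in
    L = 0.5 * np.sum(((K @ rho - D) / sigma) ** 2)
    S = np.sum(1.0 - rho / m0 + np.log(rho / m0)) * domega
    return L - alpha * S

res = minimize(neg_log_posterior, x0=np.log(m0), method="L-BFGS-B")
rho_map = np.exp(np.clip(res.x, -30.0, 30.0))      # MAP spectral function
```

The exponential parametrization enforces the positivity that the BR prior assumes.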
As with the Pad\'e, we test the reliability of the Bayesian reconstruction with mock HTL Euclidean data, with errors as described in Sec.~\ref{pade}.
\begin{figure}
\includegraphics[scale=0.5]{PaperFig_ReV_HTL_BR.pdf}
\includegraphics[scale=0.5]{PaperFig_ImV_HTL_BR2.pdf}
\caption{Extraction of $\Omega$ and width $\Gamma$ for Hard Thermal Loop ideal data for $dD/D = 10^{-2}$ and $dD/D = 10^{-3}$ for $T=667$MeV using the BR method. The error bars are obtained from Jackknife resampling.}
\label{br_pot}
\end{figure}
Results for $\Omega$ and $\Gamma$ using the Bayesian approach are shown in Fig.~\ref{br_pot}. We see that the location of the peak is estimated very well within uncertainties for both $\Delta D/D = 10^{-2}$ and $\Delta D/D = 10^{-3}$. The benchmark results show that the BR method is able to reproduce the peak position better than the Pad\'e for $\Delta D/D = 10^{-2}$. However, the width $\Gamma$ is still not reliable even for $\Delta D/D = 10^{-3}$, although it is closer to the analytical result than the Pad\'e for $\Delta D/D = 10^{-3}$.
Even though we have successfully carried out the analysis on mock HTL data, doing the same on our lattice data proves challenging.
Only at low temperatures (low $\beta$) does the reconstruction converge. At high temperatures (high $\beta$) we observe that the effective mass plots show non-monotonic behavior at small distances. Fine lattices with improved gauge actions exhibit such effects at small times and distances \cite{Bazavov:2019qoo}.
This is a manifestation of positivity violation in the spectral function. Since the BR method assumes a positive-definite spectral function, it cannot be applied for spectral reconstruction at high temperatures on our data. However, we can extract the peak position of the spectral function at low temperature ($T=151$ MeV), as shown in Fig.~\ref{br_lattice}.
\begin{figure}
\centering
\includegraphics[scale=.5]{ReV_comp6740.pdf}
\caption{Comparison of $\Omega$ using the Pad\'e and BR method at $\beta = 6.740$ with $N_\tau =12$ ($T=151$ MeV). The $T=0$ potential for the same $\beta$ is given as green data points.}
\label{br_lattice}
\end{figure}
\section{Conclusion}
Here we presented a first study of the Wilson loop/line spectral structure on high precision HISQ ensembles. The position $\Omega$ of the dominant spectral peak obtained from Gaussian fits and the Pad\'e approximation differs from some previous studies \cite{Burnier:2015tda,Burnier:2014ssa}, and our results do not show any significant modification with increasing temperature.
They are, however, similar to an earlier analysis \cite{Petreczky:2017aiz} on a subset of the same data.
$\Gamma/T$ shows a linear increase with temperature in the Gaussian fits; we have not shown $\Gamma$ for the Pad\'e since it is systematically underestimated in our benchmarks. In the preprint \cite{Bala:2021fkm} and proceedings \cite{D.Bala:2021} we have also presented a different approach inspired by hard thermal loop perturbation theory \cite{Bala:2019cqu}.
\section*{Acknowledgements}
This material is based upon work supported by the U.S. Department of Energy, Office of Science, Office of Nuclear Physics through the (i) Contract No. DE-SC0012704, and (ii) Scientific Discovery through Advance Computing (SciDAC) award Computing the Properties of Matter with Leadership Computing Resources. (iii) R.L., G.P. and A.R. acknowledge funding by the Research Council of Norway under the FRIPRO Young Research Talent grant 286883. (iv) J.H.W.’s research was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - Projektnummer 417533893/GRK2575 ``Rethinking Quantum Field Theory''. (v) D.B. and O.K. acknowledge support by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) through the CRC-TR 211 'Strong-interaction matter under extreme conditions'– project number 315477589 – TRR 211.
This research used awards of computer time provided by: (i) The INCITE and ALCC programs at Oak Ridge Leadership Computing Facility, a DOE Office of Science User Facility operated under Contract No. DE-AC05- 00OR22725. (ii) The National Energy Research Scientific Computing Center (NERSC), a U.S. Department of Energy Office of Science User Facility located at Lawrence Berkeley National Laboratory, operated under Contract No. DE-AC02- 05CH11231. (iii) The PRACE award on JUWELS at GCS@FZJ, Germany. (iv) The facilities of the USQCD Collaboration, which are funded by the Office of Science of the U.S. Department of Energy. (v) The UNINETT Sigma2 - the National Infrastructure for High Performance Computing and Data Storage in Norway under project NN9578K-QCDrtX "Real-time dynamics of nuclear matter under extreme conditions".
\section{Preliminaries}
\label{sec:preliminaries}
In the remainder of the paper, we will assume that $M$ is complete and that the eigenvalues of the Ricci tensor are constants $(\lambda, \lambda, 0)$ with $\lambda = \pm 1$ unless otherwise stated.
Since the eigenvalues of the Ricci tensor determine the curvature tensor $R$ at any point up to an orthogonal transformation, we know that $R$ must be pointwise the curvature tensor of $\Sigma \times \R$ for $\Sigma$ either the round sphere or the hyperbolic plane.
Observe that this implies that at a point $p \in M$, there is a unit eigenvector $T$ of the zero eigenvalue of the Ricci tensor such that $\sec(T, X) = 0$ for all $X$ and $\sec(X,Y) = \lambda$ when $\{X,Y\}$ form a basis of $T^\perp \subset T_p M$.
Defining $\ker R_p := \{X \in T_p M | R(X, \cdot) \cdot = 0\}$ we get that $T$ spans $\ker R_p$.
We may define $T$ globally on $M$ by passing to a double cover if necessary.
It is well known that for any complete manifold, the distribution $\ker R$ has complete, totally geodesic leaves on the open subset where $\dim \ker R$ is minimal, see \cite{maltz}.
Hence the integral curves of $T$ are complete geodesics.
We call these the \emph{nullity geodesics} of $M$.
Thus we have that $T^\perp$ is a parallel distribution along each nullity geodesic.
Define $C: T^\perp \rightarrow T^\perp$ to be the splitting tensor of $T$, i.e. $C(X) = - \nabla_X T$.
Note that if $C$ vanishes in an open set, then by the de Rham splitting theorem, that set is locally isometric to a product $V \times \R$ with $V$ a surface of constant curvature $\lambda$.
Along a nullity geodesic $\gamma(t)$, we can choose a parallel basis $\curly{e_1, e_2}$ of $\ker R^\perp$.
Then $C$ written in this basis is a matrix $C(t)$ along $\gamma(t)$ satisfying
\begin{equation}
\label{eqn:ricatti}
C'(t) = C^2
\end{equation}
since
\begin{align*}
0 &= R(X,T)T
= \nabla_T (C(X)) + C(\nabla_X T)
= (\nabla_T C)(X) - C(C(X)).
\end{align*}
Note that \eqref{eqn:ricatti} has solutions $C(t) = C_0(I - tC_0)^{-1}$ with $C_0 = C(0)$.
Hence any real eigenvalue of $C$ must be zero: a non-zero real eigenvalue $\mu$ of $C_0$ would make $C(t)$ blow up at $t = 1/\mu$, contradicting completeness.
Since $C$ is a $2 \times 2$ matrix, it is either nilpotent or has two non-zero complex eigenvalues.
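The solution of the Riccati-type equation can be verified symbolically; the sketch below checks the generic $2 \times 2$ case and also that a nilpotent $C_0$ gives a constant solution (so $C' = 0$ along the flow), which is used later:

```python
import sympy as sp

# Symbolic check that C(t) = C0 (I - t C0)^(-1) solves C' = C^2 (Eq. eqn:ricatti)
# for a generic 2x2 matrix C0, and that a nilpotent C0 yields a constant solution.
t = sp.symbols('t')
c11, c12, c21, c22 = sp.symbols('c11 c12 c21 c22')
C0 = sp.Matrix([[c11, c12], [c21, c22]])
I2 = sp.eye(2)
C = C0 * (I2 - t * C0).inv()
residual = (C.diff(t) - C * C).applyfunc(sp.cancel)   # should vanish identically

N0 = sp.Matrix([[0, 1], [0, 0]])                      # nilpotent example
C_nil = (N0 * (I2 - t * N0).inv()).applyfunc(sp.expand)
```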
Along a nullity geodesic,
\begin{equation*}
\Scal' = - 2 \trace C \cdot \Scal.
\end{equation*}
Indeed, from the second Bianchi identity
and the fact that in our case $\Scal = 2\chevron{R(X, Y) Y, X}$ for some orthonormal basis $\{X,Y\}$ of $T^\perp$,
\begin{align*}
\Scal'
&= 2\chevron{(\nabla_T R)(X, Y) Y, X}
= 2\chevron{ R(Y, \nabla_{X} T) Y, X}
+ 2\chevron{R (\nabla_{Y} T, X) Y, X} \\
&= - 2\Scal \cdot \chevron{C(X), X} - 2\Scal \cdot \chevron{C(Y), Y}
= - 2 \trace C \cdot \Scal.
\end{align*}
Since $M$ has constant scalar curvature, it follows that $\trace C$ is zero.
Note that \eqref{eqn:ricatti} also implies that $(\trace C)' = \trace (C^2) = (\trace C)^2 - 2 \det C$ along a nullity geodesic.
Hence $\det C = 0$ as well.
Since the only real eigenvalues of $C$ are zero, it follows that $C$ is nilpotent.
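The $2 \times 2$ trace identity used here, $\trace(C^2) = (\trace C)^2 - 2\det C$, is an instance of the Cayley--Hamilton theorem; a quick symbolic check:

```python
import sympy as sp

# Verify the 2x2 identity trace(C^2) = (trace C)^2 - 2 det C for a generic matrix,
# a consequence of the Cayley-Hamilton theorem.
a_, b_, c_, d_ = sp.symbols('a b c d')
C = sp.Matrix([[a_, b_], [c_, d_]])
identity = sp.expand((C * C).trace() - (C.trace() ** 2 - 2 * C.det()))
```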
Define $M_C$ to be the subset of $M$ on which $C$ is non-zero.
Note that $M_{irred}$ is the closure of $M_C$ and that $M_{split}$ is the complement of $M_{irred}$, i.e. $M_{split}$ is the set of points $p \in M$ where $C = 0$ in a neighborhood of $p$.
Note that \eqref{eqn:ricatti} implies that if a nullity geodesic intersects $M_{split}$ ($M_C$ respectively) then it is contained in $M_{split}$ ($M_C$ respectively).
By Proposition 2.1 of \cite{graph_manifolds}, the universal cover of any connected component of $M_{split}$ is an isometric product $\Sigma \times \R$ where $\Sigma$ is a surface of constant curvature $\lambda$.
By going to a cover if necessary, we can define an orthonormal basis $e_1, e_2, T$ at any point in $M_C$ by $T \in \ker R$ and $e_1 \in \ker C$.
Since $C$ is nilpotent, \eqref{eqn:ricatti} gives $C' = C^2 = 0$, so $e_1$ and $e_2$ are parallel along nullity geodesics.
There exists a smooth function $a \not= 0$ on $M_C$ so that $C(e_2) = a e_1$.
Hence
\begin{equation}
\label{eqn:start_cov_derivs}
\nabla_T e_1 = \nabla_T e_2 = \nabla_T T = 0, \quad \nabla_{e_1} T = 0, \quad \nabla_{e_2} T = - a e_1
\end{equation}
\begin{equation*}
\nabla_{e_1} e_1 = \alpha e_2, \quad \nabla_{e_2} e_2 = \beta e_1, \quad \nabla_{e_1} e_2 = - \alpha e_1, \quad \nabla_{e_2} e_1 = a T - \beta e_2
\end{equation*}
for some smooth functions $\alpha, \beta$ on $M_C$.
Thus for the curvature tensor we have
\begin{align*}
R(e_2, e_1) e_1 &= (e_1(\beta) + e_2(\alpha) - \alpha^2 - \beta^2) e_2
+ (a \beta - e_1(a)) T \\
R(e_1, e_2) e_2 &= (e_1(\beta) + e_2(\alpha) - \alpha^2 - \beta^2) e_1
+ \alpha a T.
\end{align*}
Since $T \in \ker R$, this implies that
\begin{equation}
\label{eqn:end_cov_derivs}
\alpha = 0, \quad \Scal_M = e_1(\beta) - \beta^2, \quad \mbox{ and } \quad e_1(a) = a \beta.
\end{equation}
Again, since $T \in \ker R$, we have that $T(a) = T(\beta) = 0$, i.e. $a$ and $\beta$ are constant along nullity geodesics.
We let $D$ be the distribution on $M_C$ with $D_p$ spanned by $e_1, T \in T_p M$.
Note that \eqref{eqn:start_cov_derivs} implies that this distribution is completely integrable with totally geodesic leaves, which are flat since $T \in D_p$.
We denote by $\mathcal{F}_p$ the leaf of $D$ containing the point $p$.
\begin{lemma}
\label{lemma:c_stays_nilpotent}
Let $M^3$ be complete with constant Ricci eigenvalues $(\lambda, \lambda, 0)$ with $\lambda \not= 0$ and $M$ not everywhere locally reducible.
Then
\begin{enumerate}[(a)]
\item up to scaling, $\lambda = -1$,
\item integral curves of $e_1$ and $T$ starting at points in $M_C$ are complete geodesics contained in $M_C$; hence the leaves $\mathcal{F}_p$ of the distribution $D$ are complete and contained in $M_C$,
\item and on $M_C$, we have $\abs{\beta} \leq 1$.
\end{enumerate}
\end{lemma}
\begin{proof}
Take $p \in M_C$.
Then $e_1$ is well-defined in a neighborhood of $p$ in $M_C$.
Since $\nabla_{e_1} e_1 = 0$, the integral curve of $e_1$ is a geodesic $\eta$ which is defined as long as $C \not= 0$.
We first show that the complete geodesic $\eta$ lies in $M_C$.
Writing a dot to indicate $e_1$ derivatives, we get
\begin{equation*}
\paren{\frac 1 a}^{\cdot \cdot} = - \paren{\frac {\dot a}{a^2} }^\cdot = - \paren{ \frac \beta a}^{\cdot} = - \frac{ (\Scal + \beta^2)} a + \frac {\beta^2} a.
\end{equation*}
Hence
\begin{equation*}
\paren{\frac 1 a}^{\cdot \cdot} + \frac 1 a \Scal = 0
\end{equation*}
and so $\tfrac 1 a$ satisfies the Jacobi equation.
This equation holds only in $M_C$.
We must then show that $a$ cannot go to zero along $\eta$.
We can scale the metric so that $\lambda = +1$ or $\lambda = -1$.
If $\lambda = +1$, then $\Scal$ is positive and the solutions have the form
$\frac 1 a = A_0 \cos t + A_1 \sin t$. Since this is bounded, $a$ stays bounded away from zero,
and therefore $\eta$ remains in $M_C$.
But then $\frac 1 a$ has a zero in finite time, which implies that $a$ diverges.
This is a contradiction since $C$ is well-defined on all of $M$.
Hence we may assume that $\lambda = -1$.
Thus the solutions of the Jacobi equation are of the form $\tfrac 1 a = A_0 \cosh (t) + A_1 \sinh(t)$.
Hence $a \rightarrow 0$ only as $t \rightarrow \pm \infty$ and therefore $C$ remains non-zero along $\eta$ for all time.
Since \eqref{eqn:ricatti} implies that $C$ is constant along nullity geodesics as well, $C$ cannot go to zero on any leaf of the span of $\{e_1, T\}$ and hence the leaf is complete.
Since $\beta = e_1(a)/a = -a e_1(1/a)$, we have that
\begin{equation} \label{eqn:beta} \beta(t) = - \frac{A_0 \sinh(t) + A_1 \cosh(t)}{A_0 \cosh(t) + A_1 \sinh(t)} = - \frac{ \tanh(t) - \beta(0)}{ 1 - \beta(0) \tanh(t)}. \end{equation}
This implies that $\abs{\beta} \leq 1$ since otherwise $\beta$ has a singularity in finite time along the complete geodesic $\eta$.
\end{proof}
Notice that \eqref{eqn:beta} implies that if $\beta = \pm 1$ at any point, then it is $\pm 1$ along the entire nullity geodesic through that point.
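Equation \eqref{eqn:beta} can be checked symbolically; the sketch below compares $\beta = -a\, e_1(1/a)$ computed from $\tfrac1a = A_0\cosh t + A_1\sinh t$ against the tanh form, with $\beta(0) = -A_1/A_0$:

```python
import sympy as sp

# Symbolic check of Eq. (eqn:beta): with 1/a = A0 cosh(t) + A1 sinh(t) and
# beta = -a * d(1/a)/dt, the direct expression agrees with the tanh form.
t, A0, A1 = sp.symbols('t A0 A1')
inv_a = A0 * sp.cosh(t) + A1 * sp.sinh(t)
beta_direct = -inv_a.diff(t) / inv_a
beta0 = -A1 / A0
beta_formula = -(sp.tanh(t) - beta0) / (1 - beta0 * sp.tanh(t))
check_t0 = sp.simplify(beta_direct.subs(t, 0) - beta0)   # beta(0) = -A1/A0
```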
Furthermore, if $M$ is simply connected, then the leaves of $D$ are isometric to $\R^2$ since $\exp$ is a diffeomorphism.
Finally, we observe that $a$ is a smooth function with $a = 0$ whenever $C = 0$.
To see this, let $\Theta$ be the rotation of $TM$ by $\pi/2$ about $T$ which takes $e_1$ to $e_2$ at points where $C \not= 0$.
This is smoothly defined (at least locally) on $M$ since $T$ is smooth.
Then $\Theta C$ has eigenvalues $a$ and $0$ since $\Theta C(e_2) = \Theta (a e_1) = a e_2$.
Hence the characteristic polynomial of $\Theta C$ is $t^2 - at$, and so $a$ is smooth on all of $M$.
\section{Description of the Metric on \texorpdfstring{$M_C$}{MC}}
\label{sec:metric_on_Mc}
The form of metrics with Ricci eigenvalues $(-1,-1,0)$ is well-known locally at points where $C \not= 0$ \cite{szabo2, KTV_curv_homog, KTV_new_examples}.
This form is a special case of the metric due to Sekigawa \cite{sekigawa} given by
\begin{equation}
\label{eqn:sekigawa_form}
g = p(x,u)^2 dx^2 + (du - v \; dx)^2 + (dv + u \; dx)^2.
\end{equation}
Specifically, if $M$ has Ricci eigenvalues $(-1,-1,0)$, then $p(x,u) = f_1(x) \cosh(u) + f_2(x) \sinh(u)$.
We will work with a different parametrization (in the $x$ coordinate) which then enables us to include points where $C = 0$.
The metrics will be of the form of \eqref{eqn:g_curv_homog}, see Proposition~\ref{prop:g_curv_homog}.
Moreover, we will show that this form holds in a ``global'' sense: that such a coordinate chart covers an entire connected component of $M_C$ when $M$ is simply connected and complete.
Choosing $f(x) = 0$ and $h(x) = 0$ makes the $u,v$ coordinates a standard parametrization of the product metric on $\mathbb{H}^2 \times \R$.
Moreover, if $f(x) = 0$ with any function $h$, the metric is locally isometric to $\mathbb{H}^2 \times \R$, and, as follows from the lemma below, is complete if $\abs{h(x)} \leq 1$ for all $x$.
We begin with two technical lemmas.
The first considers the basic properties, particularly completeness, of metrics which have the form of \eqref{eqn:g_curv_homog}.
\begin{lemma}
\label{lemma:g_complete}
Suppose $g$ is a metric on $V = (a_1,a_2) \times \R^2$, with coordinates $x \in (a_1,a_2)$ and $u,v \in \R^2$, of the form \eqref{eqn:g_curv_homog} with $f, h:(a_1, a_2) \rightarrow \R$ smooth.
Then
\begin{enumerate}[(a)]
\item $g$ has Ricci eigenvalues $(-1,-1,0)$,
\item $T = \pder{}{v}$ and $e_1 = \pder{}{u}$ and each leaf $\mathcal{F}_p$ is given by a plane with $x$ constant,
\item $a(x,u,v) = f(x) (\cosh u - h(x) \sinh u)^{-1}$,
\item if $f(x) \not= 0$ then $\beta = (h(x) \cosh u - \sinh u)(\cosh u - h(x) \sinh u)^{-1}$, and $e_2 = (\cosh u - h(x) \sinh u)^{-1} \paren{\pder{}{x} + v f(x) \pder{}{u} - u f(x) \pder{}{v}}$,
\item $g$ is locally irreducible if and only if $f^{-1}(0)$ contains no open subsets,
\item $V$ is complete if and only if $(a_1,a_2) = (-\infty, \infty)$ and $\abs{h(x)} \leq 1$ for all $x$, and
\item $h$ is the geodesic curvature of the path $u = v =0$.
\end{enumerate}
\end{lemma}
\begin{proof}
Parts (a-d) and (g) follow by direct computation.
Note that (c) implies that if $a \not= 0$ at some point of an $x = const.$ plane then it is non-zero at every point on that plane.
Thus (e) follows since $M$ is locally reducible at a point if and only if $C = 0$ on an open neighborhood.
Now consider part (f).
From (d) and Lemma~\ref{lemma:c_stays_nilpotent}, it follows that $\abs{h(x)} \leq 1$ is necessary for completeness.
Define $A(x) = \int_0^x f(X) dX$ (see also Lemma~\ref{lemma:def_A} for a related function).
We make a change of coordinates by
\begin{equation*}
(x,y,z) = (x, u \cos A(x) - v \sin A(x), u \sin A(x) + v \cos A(x)).
\end{equation*}
This performs a rotation in each $u$-$v$ plane by an amount that depends on $x$.
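Explicitly, since $A'(x) = f(x)$, the rotated coordinates satisfy
\begin{align*}
dy &= \cos A \, du - \sin A \, dv - f(x) (u \sin A + v \cos A) \, dx = \cos A \, (du - v f \, dx) - \sin A \, (dv + u f \, dx), \\
dz &= \sin A \, du + \cos A \, dv + f(x) (u \cos A - v \sin A) \, dx = \sin A \, (du - v f \, dx) + \cos A \, (dv + u f \, dx),
\end{align*}
so that $dy^2 + dz^2 = (du - v f \, dx)^2 + (dv + u f \, dx)^2$.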
In these coordinates, $g$ has the form
\begin{equation*}
g = p(x,y,z)^2 dx^2 + dy^2 + dz^2
\end{equation*}
where $p(x,y,z) = \cosh(u) - h(x) \sinh(u)$ with $u(x,y,z) = y \cos A(x) + z \sin A(x)$.
This is an explicit form of the metric described in Theorem 2.5 of \cite{szabo2}.
We now prove that $g$ is not complete if and only if the interval $(a_1,a_2)$ has $a_1$ or $a_2$ finite.
Suppose that $g$ is not complete.
Then there is a path $\gamma$ of finite length which has no limit in $V$.
Let $\gamma(t) = (x(t), y(t), z(t))$.
Then $\int \abs{y'(t)} \, dt$ and $\int \abs{z'(t)} \, dt$ are both lower bounds for the length of $\gamma$.
Hence $y(t)$ and $z(t)$ are bounded.
In particular, this shows that $\abs{u} \leq R$ for some $R \in \R$.
Since $\abs{h(x)} \leq 1$,
\begin{equation*}
\abs{ \cosh(u) - h(x) \sinh(u) } \geq e^{-\abs{u}} \geq e^{-R}
\end{equation*}
and $p(x,y,z) \geq e^{-R}$ at any point of $\gamma$.
Since $\int \abs{p(x,y,z)} \abs{x'} dt$ is also a lower bound for the length of $\gamma$, $\int \abs{x'} dt$ is also finite.
So either $a_1$ or $a_2$ must be finite.
For the other direction, either $a_1$ or $a_2$ is finite.
Without loss of generality, we will assume that $a_2 \geq 0$ is finite and $a_1 \leq 0$.
Then consider the path $\gamma(x) = (x,0,0)$ for $x \in [0,a_2)$.
This path has length $\int_0^{a_2} p(x,0,0) \, dx = a_2$ since $p(x,0,0) = 1$; this length is finite, but $\gamma$ has no limit in $V$.
Hence $V$ is not complete.
\end{proof}
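As a computational sanity check of part (a) in the locally split case $f = 0$, the following sketch computes the Ricci tensor of \eqref{eqn:g_curv_homog} directly; it assumes SymPy, and the constant value $h = 1/2$ is an arbitrary illustrative choice:

```python
import sympy as sp

x, u, v = sp.symbols('x u v', real=True)
coords = [x, u, v]
h = sp.Rational(1, 2)            # hypothetical constant h with |h| <= 1
p = sp.cosh(u) - h*sp.sinh(u)    # with f = 0 the metric is p^2 dx^2 + du^2 + dv^2

G = sp.diag(p**2, 1, 1)
Ginv = G.inv()
n = 3

# Christoffel symbols Gamma^k_{ij} of the Levi-Civita connection
Gamma = [[[sum(Ginv[k, l]*(sp.diff(G[j, l], coords[i])
                           + sp.diff(G[i, l], coords[j])
                           - sp.diff(G[i, j], coords[l]))/2
               for l in range(n))
           for j in range(n)] for i in range(n)] for k in range(n)]

# Ricci tensor R_ij = d_k G^k_ij - d_j G^k_ik + G^k_kl G^l_ij - G^k_jl G^l_ik
def ricci(i, j):
    r = 0
    for k in range(n):
        r += sp.diff(Gamma[k][i][j], coords[k]) - sp.diff(Gamma[k][i][k], coords[j])
        for l in range(n):
            r += Gamma[k][k][l]*Gamma[l][i][j] - Gamma[k][j][l]*Gamma[l][i][k]
    return sp.simplify(r)

Ric = sp.Matrix(n, n, ricci)
# Eigenvalues of Ricci relative to g, i.e. of the mixed tensor g^{-1} Ric
eig = (Ginv*Ric).applyfunc(sp.simplify).eigenvals()
assert eig == {-1: 2, 0: 1}
```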
Observe that in the metric \eqref{eqn:g_curv_homog}, if $f(x) = 0$ on an interval $I$, then $I \times \R^2 \subset M_{split}$.
Thus this description of the metric allows us to glue a split region to a non-split part of the metric.
This is the first example with this property.
Next we describe the metric on $M_C$.
\begin{prop}
\label{prop:g_curv_homog}
Suppose that $M^3$ is a complete, simply connected Riemannian manifold with Ricci eigenvalues $(-1, -1, 0)$.
Then any connected component of $M_C$ has smooth coordinates $(x,u,v) \in (a_1, a_2) \times \R^2$ (with $a_i$ possibly $\pm \infty$) with metric of the form \eqref{eqn:g_curv_homog}
for some smooth functions $f, h: (a_1, a_2) \rightarrow \R$ with $f(x) \not= 0$ and $\abs{h(x)} \leq 1$.
The boundaries of this component are complete, flat, totally geodesic planes, one for each $a_i$ that is finite.
\end{prop}
\begin{proof}
Recall that \eqref{eqn:g_curv_homog} is
\begin{equation*}
g = (\cosh(u) - h(x) \sinh(u))^2 dx^2 + (du - v f(x) \; dx)^2 + (dv + u f(x) \; dx)^2.
\end{equation*}
Let $V$ be a connected component of $M_C$.
Fix a maximal integral curve $\gamma: (a_1, a_2) \rightarrow M$ of the vector field $e_2$ on $V$.
By maximality, $a(\gamma(t)) \not= 0$ for $t \in (a_1, a_2)$ and $\lim_{t \rightarrow a_i} a(\gamma(t)) = 0$ if $a_i$ is finite.
Let $N$ be the manifold defined by one coordinate chart with $(x,u,v) \in (a_1, a_2) \times \R^2$
and metric of the form in \eqref{eqn:g_curv_homog}, where $f(x) = a(\gamma(x))$ and $h(x) = \beta(\gamma(x))$.
By Lemma~\ref{lemma:c_stays_nilpotent} this implies that $\abs{h} \leq 1$.
The manifold $N$ is simply connected (but may not be complete).
Define $\phi: N \rightarrow M$ by
\begin{equation*}
\phi(x,u,v) = \exp_{\gamma(x)} (u e_1 + v T).
\end{equation*}
Notice that by Lemma~\ref{lemma:c_stays_nilpotent}, $\phi(N) \subset M_C$ and we will show that $\phi$ is in fact an isometry onto $V$.
We first show that $\phi$ is a local isometry.
Note that $\pder{\phi}{u} = e_1$ and $\pder{\phi}{v} = T$.
We next compute $\pder{\phi}{x}$.
Fix $(x_0, u_0, v_0)$.
Consider the family of geodesics $\alpha_s(t) = \phi(x_0 + s, t u_0, t v_0)$.
Define $J(t)$ to be the Jacobi field along $\alpha_0$ corresponding to the variation $\alpha$.
Then
\begin{equation}
J(0) = \gamma'(x_0) = e_2, \qquad J'(0) = \frac{D}{ds} \pder{\alpha_s}{t}\big|_{s=0,t=0} = \nabla_{e_2} (u_0 e_1 + v_0 T) = u_0 (a T - \beta e_2) - v_0 a e_1.
\end{equation}
It follows from \eqref{eqn:start_cov_derivs}--\eqref{eqn:end_cov_derivs} that the Jacobi field with these initial conditions is given by
\begin{equation*}
J(t) =
- v_0 a(\gamma(x_0)) t e_1
+ \brak{ \cosh(u_0 t) - \beta(\gamma(x_0)) \sinh(u_0 t) } e_2
+ u_0 a(\gamma(x_0)) t T.
\end{equation*}
Since $f(x) = a(\gamma(x))$ and $h(x) = \beta(\gamma(x))$, we see that
\begin{equation*}
\left.\pder{\phi}{x}\right|_{(x_0,u_0,v_0)} = J(1) = \paren{\cosh u_0 - h(x_0) \sinh u_0} e_2 - v_0 f(x_0) e_1 + u_0 f(x_0) T.
\end{equation*}
Now it is easy to compute that $\phi$ is a local isometry.
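Indeed, expanding \eqref{eqn:g_curv_homog} shows that the inner products of the coordinate fields are
\begin{gather*}
g\paren{\pder{}{x}, \pder{}{x}} = (\cosh u - h(x) \sinh u)^2 + f(x)^2 (u^2 + v^2), \\
g\paren{\pder{}{x}, \pder{}{u}} = -v f(x), \qquad g\paren{\pder{}{x}, \pder{}{v}} = u f(x), \\
g\paren{\pder{}{u}, \pder{}{u}} = g\paren{\pder{}{v}, \pder{}{v}} = 1, \qquad g\paren{\pder{}{u}, \pder{}{v}} = 0,
\end{gather*}
and these agree with the inner products of $J(1) = \pder{\phi}{x}$, $e_1 = \pder{\phi}{u}$, and $T = \pder{\phi}{v}$ computed from the orthonormality of $\{e_1, e_2, T\}$.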
We now prove that $\phi$ is a covering map $N \rightarrow \phi(N)$.
It suffices to show that $\phi$ has the path lifting property.
Let $\mu: [0,1] \rightarrow \phi(N) \subset V$ be a path.
Then there exists $\delta >0$ such that $\abs{a(\mu(t))} \geq \delta$ for all $t$.
If $\tilde \mu: [0, t_0) \rightarrow N$ is a lift of $\mu$, then $\abs{a(\tilde \mu|_{[0,t_0)})} \geq \delta$ as well since $a$ is, up to sign, an isometry invariant.
Thus $x(\tilde \mu|_{[0, t_0)}) \in [b_1, b_2] \subset (a_1, a_2)$ for some $[b_1, b_2]$.
Since $\mbox{length}(\tilde \mu|_{[0,t]}) = \mbox{length}(\mu|_{[0,t]})$ for all $t < t_0$, the $u, v$ coordinates along $\tilde \mu$ are bounded as well, and hence $\tilde \mu(t)$ remains in a compact set as $t \rightarrow t_0$, which implies that $\mu$ can be lifted past $t = t_0$.
The same argument implies that $\phi(N) = V$ since for any point $p \in V$, we can choose a path from $p$ to $p^* \in \phi(N)$, which by the above argument has a lift.
In order to show that $\phi$ is injective, let $\tilde \gamma$ be the integral curve of $e_2$ through a point $\tilde p \in N$.
Since $\phi$ takes each leaf of $\mathcal{F}$ in $N$ isometrically to a leaf of $\mathcal{F}$ in $M$, it suffices to show that $\phi \circ \tilde \gamma$ intersects each leaf of $\mathcal{F}$ at most once.
Let $\mathcal{F}_0$ be the leaf through $p = \phi(\tilde p)$.
Since $\sec \leq 0$ and $M$ is simply connected, we have a globally defined signed distance function $t: M \rightarrow \R$ to the leaf $\mathcal{F}_0$.
The integral curves of $\grad t$ are the geodesics orthogonal to $\mathcal{F}_0$.
Hence $\frac{d}{ds}\paren{ t \circ \phi \circ \tilde\gamma(s)} = \chevron{\grad t, (\phi \circ \tilde\gamma)'} \not= 0$, since otherwise $\grad t \in T \mathcal{F}_{\phi \circ \tilde\gamma(s)}$ and so $\mathcal{F}_{\phi \circ \tilde\gamma(s)}$ would intersect $\mathcal{F}_0$.
Thus $t$ is monotonic along $\phi \circ \tilde\gamma$, which implies that $\phi$ is injective and hence an isometry onto $V$.
\end{proof}
\section{Foliation by Flat Planes}
\label{sec:foliation}
We now discuss the properties of the foliation $\mathcal{F}$ on $M_C$.
We will see that it extends to a Lipschitz foliation on the closure $M_{irred}$ of $M_C$.
Furthermore, there are $C^{1,1}$ curves everywhere orthogonal to the foliation, and the connected components of $M_{irred}$ are plane bundles over these curves.
We assume until Section~\ref{sec:topology} that $M$ is simply connected.
\begin{lemma}
\label{lemma:cont_foliation}
Let $M^3$ be a complete, simply connected Riemannian manifold with Ricci eigenvalues $(-1, -1, 0)$.
Then $\mathcal{F}$ extends to a continuous foliation on $M_{irred}$ whose leaves are complete, flat, totally geodesic planes.
\end{lemma}
\begin{proof}
By Lemma~\ref{lemma:c_stays_nilpotent}, through every point $p \in M_C$ there is a complete, flat, totally geodesic leaf $\mathcal{F}_p$, given by $\exp_{p}(u e_1 + v T)$ for $u,v \in \R$.
Consider a sequence of points $p_k \rightarrow p$ with $p_k \in M_C$ and $p \not\in M_C$.
Since $T$ is smooth on all of $M$, $T_{p_k} \rightarrow T_{p}$.
Next, suppose that $(e_1)_{p_k}$ does not converge to a unit vector at $p$.
Then there must be two subsequences $q_k \rightarrow p$ and $r_k \rightarrow p$ with $(e_1)_{q_k} \rightarrow X$ and $(e_1)_{r_k} \rightarrow Y$ with $X \not= \pm Y$.
Since $e_1$ is always perpendicular to $T$, then $X, Y$ are orthogonal to $T$.
Defining $Q = \curly{\exp_{p}(uX + vT)| u,v \in \R}$ and $R = \curly{\exp_{p}(uY + vT)| u,v \in \R}$, then $\mathcal{F}_{q_k} \rightarrow Q$ and $\mathcal{F}_{r_k} \rightarrow R$.
So $Q$ and $R$ intersect at $p$, and each separates $M$ into two halves.
It follows that for large $k$ the leaves $\mathcal{F}_{q_k}$ and $\mathcal{F}_{r_k}$ intersect and hence are equal.
This contradicts $X \not= \pm Y$.
Therefore $e_1, e_2$ and $\mathcal{F}$ extend to $M_{irred}$.
\end{proof}
In \cite{zeghib}, it is shown that every codimension one geodesic foliation of a smooth (not necessarily complete) manifold is locally Lipschitz.
Due to this, the Picard-Lindel\"of existence and uniqueness theorem for ODEs implies that there exists a unique $C^{1,1}$ curve orthogonal to the foliation through each point.
We apply this to our foliation $\mathcal{F}$ of $M_{irred}$.
\begin{prop}
\label{prop:plane_bundle}
Suppose that $M$ is a complete, simply connected $3$-manifold with Ricci eigenvalues $(-1, -1, 0)$.
For any point $p \in M_{irred}$, there exists a unique, maximal (in $M_{irred}$) $C^{1,1}$ integral curve $\gamma$ of $e_2$ which is orthogonal to $\mathcal{F}$ at every point.
Furthermore, $\gamma$ intersects each leaf of $\mathcal{F}$ in the connected component of $M_{irred}$ containing $p$ exactly once.
\end{prop}
\begin{proof}
Consider the maximal integral curve $\gamma$ of $e_2$ at some point $p \in M_{irred}$.
We can assume that $\gamma$ is maximal in the connected component $V$ of $M_{irred}$ that contains $p$.
Since $\gamma$ has unit speed, the domain of $\gamma: I \rightarrow V$ is a closed interval $I$ (possibly infinite or half-infinite).
Define $\exp^\perp_{\gamma}$ to be $\exp$ restricted to the set of pairs $(q, w) \in TM$ such that $q = \gamma(t)$ and $w$ is perpendicular to $\gamma'(t)$ for some $t$.
We claim that $\exp^\perp_{\gamma}$ is onto $V$, i.e. that $\gamma$ intersects each leaf of $\mathcal{F}$ in $V$ once.
We first prove that $\im(\exp^\perp_{\gamma})$ is closed.
If not, then there exists a point $q \in V \setminus \im(\exp^\perp_\gamma)$ and a sequence of points $q_k \in \im(\exp^\perp_\gamma)$ with $q_k \rightarrow q$.
Then $\mathcal{F}_{q_k} \rightarrow \mathcal{F}_q$.
For each $q_k$, let $\gamma(t_k)$ be a point on $\gamma$ through $\mathcal{F}_{q_k}$.
Since $I$ is closed, if the $t_k$ are bounded, then they have a limit point $t_*$ in $I$, so that $\mathcal{F}_{\gamma(t_*)} = \mathcal{F}_q$, a contradiction.
So we may assume that $t_k \rightarrow \infty$.
Then $\mathcal{F}_{\gamma(t)} \rightarrow \mathcal{F}_q$ as $t \rightarrow \infty$.
Let $\eta_t$ be the shortest path from $\gamma(t)$ to $\mathcal{F}_q$ and $y(t)$ be the length of $\eta_t$.
Note that since $\mathcal{F}$ is Lipschitz, we have that $\chevron{e_2, \eta_t'} \geq 1 - c y(t)$ for some constant $c$, for any $\gamma(t)$ sufficiently close to $\mathcal{F}_q$.
Considering the variation of geodesics $\eta_t$, the first variation of arc length formula shows that
\begin{equation*}
\frac{d}{dt} y = -\chevron{\gamma', \eta_t'} = - \chevron{e_2, \eta_t'} \leq -1 + c y.
\end{equation*}
When $0 < y < 1/(2c)$, it follows that $\frac{d}{dt} y < - 1/2$, and hence $y(t) \rightarrow 0$ in finite time.
Since $I$ is closed, $y$ reaches zero at a time in $I$, so that $\mathcal{F}_q \subset \im(\exp^\perp_\gamma)$, again a contradiction.
Hence $\im(\exp^\perp_\gamma)$ is closed.
Now we want to show that $\im(\exp^\perp_\gamma)$ is all of $V$.
First we argue that $V$ is convex.
Suppose that there exists a geodesic $\mu:[a,b] \rightarrow M$ with endpoints in $V$ but not entirely contained in $V$.
Then there exists a $t_0$ such that $\mu(t_0) \not \in V$.
Since $V$ is closed, there exists a leaf $P \in \mathcal{F}$ which is the last leaf of $\mathcal{F}$ before $\mu(t_0)$.
Define $U$ to be a subset of the unit vectors at $\mu(t_0)$ by
\begin{equation*}
U := \{X \in T_{\mu(t_0)}^1 M | \exp_{\mu(t_0)}( t X) \in P \mbox{ for some } t > 0 \}.
\end{equation*}
Note that $U$ is connected, and non-empty.
It is open since $\exp_{\mu(t_0)}(t X)$ for $X \in U$ must be transverse to $P$ since otherwise the fact that $P$ is totally geodesic would imply that $\mu(t_0) \in P$.
Let $U_M = \curly{\exp(s v) | v \in U, s > 0 }$, which is open.
Then its boundary
\[ \partial U_M = \curly{\exp(s v) | v \in \partial U, s \geq 0}. \]
Suppose that $\partial U_M$ is not disjoint from $V$.
Then there is a $Q \in \mathcal{F}$ intersecting $\partial U_M$.
Since $Q$ is totally geodesic and does not contain $\mu(t_0)$, $Q$ must be transverse to $\partial U_M$.
So $Q$ also intersects $U_M$.
Then there is a geodesic from $\mu(t_0)$ to a point on $P$ that intersects $Q$ transversely.
Since a geodesic in a $\sec \leq 0$ space can only cross a transverse geodesic hyperplane once, $Q$ must separate $\mu(t_0)$ from a point on $P$.
Therefore it separates $\mu(t_0)$ from all of $P$ since $Q$ and $P$ are disjoint.
This contradicts the fact that no leaf of $\mathcal{F}$ lies on $\mu$ between $P$ and $\mu(t_0)$.
Therefore $U_M$ is an open subset of $M$ whose boundary does not intersect $V$.
Then $V \cap U_M$ and $V \cap (M \setminus U_M)$ are two disjoint open sets covering $V$ which is a contradiction with $V$ being connected.
Therefore $V$ must be convex.
Now we show that $\im(\exp^\perp_\gamma)$ is all of $V$.
Suppose there is $x \in V$ and $x \not\in \im(\exp^\perp_\gamma)$.
Then the geodesic from $\gamma(0)$ to $x$ stays in $V$; let $L$ be the last leaf of $\mathcal{F}$ in $\im(\exp^\perp_\gamma)$ that it passes through.
Then $L = \mathcal{F}_{\gamma(t_0)}$ for some $t_0$.
This contradicts maximality of $\gamma$ since $L$ (and hence $\gamma(t_0)$) must be in the interior of $V$.
We will now see that $\gamma$ intersects each leaf of the foliation at most once.
Take $L_0$ to be the leaf of $\mathcal{F}$ through $\gamma(t_0)$, for some time $t_0$.
Suppose that $\gamma$ intersects $L_0$ at some time $t_1 > t_0$.
Let $t_*$ be the time in $[t_0, t_1]$ where $\gamma$ is maximally far from $L_0$.
Then $\gamma$ must be orthogonal to the shortest geodesic $\eta$ from $L_0$ to $\gamma(t_*)$.
Since $\gamma$ is orthogonal to the leaves of $\mathcal{F}$, the totally geodesic leaf $L_*$ through $\gamma(t_*)$ must contain the geodesic $\eta$.
Note that $L_* \not= L_0$ since $\gamma(t_*)$ is maximally far from $L_0$ and $\gamma$ is orthogonal to $L_0$ at $t_0$.
But $L_*$ intersects $L_0$, a contradiction.
Hence $\gamma$ intersects each leaf of $\mathcal{F}$ exactly once.
\end{proof}
\section{Geodesic Foliations of \texorpdfstring{$\mathbb{H}^2$}{H2}}
\label{sec:geodesic_foliations_H2}
We now consider a lower-dimensional analog of $\mathcal{F}$, namely foliations of $\mathbb{H}^2$ by complete geodesics.
This will be used in Theorem~\ref{thm:irreducible_examples}.
Again, these foliations are Lipschitz by \cite{zeghib} and have $C^{1,1}$ curves orthogonal to them.
We study some basic properties of these curves.
Given a unit-speed curve $\gamma$, let $H$ be its turning angle, i.e.\ the angle between $\gamma'(x)$ and $V$, where $V$ is the parallel translation of $\gamma'(0)$ along $\gamma$.
The following lemma about turning angles is no doubt well-known, but we provide a proof for completeness since we were unable to find a reference for it.
\begin{lemma}
Suppose that $H: \R \rightarrow \R$ is locally Lipschitz and $\Sigma$ is a complete surface.
For any starting point $p_0$ and initial unit vector $v_0$, there exists a unique arc-length parametrized $C^{1,1}$ curve $\gamma: \R \rightarrow \Sigma$ whose turning angle at $\gamma(t)$ is $H(t)$.
\label{lemma:turning_angle_curve}
\end{lemma}
\begin{proof}
Choose local coordinates $\vec{x}$ of a neighborhood $U \subset \Sigma$ of $p_0$.
We proceed by modifying the standard argument for the existence of the geodesic flow on $T U$.
Choose coordinates $(\vec{x}, \vec{y})$ on $T U$ such that $\vec{y} = \sum_{i=1}^2 y_i \pder{}{x_i}$.
For two vectors $\vec{y}, \vec{z}$ in $T_{\vec{x}} U$, define $\vec{v}_{\vec{x}}(\vec{y},\vec{z})= - \sum_{i,j,k} \Gamma^{k}_{ij} y_i z_j \pder{}{y_k}$ where $\Gamma^{k}_{ij}$ are the Christoffel symbols at $\vec{x}$.
Define $\Theta_{\vec{x}, r}: T_{\vec{x}}U \rightarrow T_{\vec{x}}U$ to be the rotation of $T_{\vec{x}} U$ by angle $r$, and let $W$ be the time-dependent vector field on $T U$ given by
\begin{equation*}
W(\vec{x}, \vec{y}, t) = \paren{\Theta_{\vec{x}, H(t)}(\vec{y}), \; \vec{v}_{\vec{x}}\paren{\vec{y}, \Theta_{\vec{x}, H(t)}(\vec{y})}}.
\end{equation*}
Then the ODE defined by $\frac{d}{dt} (\vec{x}, \vec{y}) = W(\vec{x}, \vec{y}, t)$ is continuous in $t$ and smooth in $(\vec{x}, \vec{y})$.
By the standard Picard-Lindel\"of theorem, there exists a unique solution $(\gamma(t), Y(t))$ where $\gamma$ is a $C^{1,1}$ curve and $Y$ a vector field along that curve.
We choose initial conditions so that $\gamma(0) = p_0$ and $Y(0) = \gamma'(0) = v_0$.
Now we compute $\nabla_{\gamma'} Y$.
Writing $Y = (y_1, y_2)$ and $\gamma' = (z_1, z_2)$ we get that
\begin{align*}
\nabla_{\gamma'} Y
&= \sum_{k} \paren{\sum_{ij} \Gamma^k_{ij} y_i z_j + \gamma'(y_k)} \pder{}{x_k} \\
&= -\vec{v}_{\vec{x}}(Y, \gamma'(t)) + \sum_{k} \frac{d}{dt}(y_k) \pder{}{x_k}\\
&= -\vec{v}_{\vec{x}}(\gamma'(t), Y) + \vec{v}_{\vec{x}}(\gamma'(t), Y) = 0
\end{align*}
so $Y$ is parallel along $\gamma(t)$ and $\norm{\gamma'} = 1$.
By the ODE, $\gamma' = \Theta_{H(t)}(Y)$, and $\gamma$ has turning angle $H(t)$.
Lastly, note that Picard-Lindel\"of gives existence of $\gamma$ and $Y$ for at least time $1/C$ where $C$ is the maximum derivative of components of $W$ on $U$.
For $R > 0$, there exists such a $C$ on the ball $B_{2R}$ centered at $\gamma(0)$.
Therefore we may repeatedly extend the existence of $\gamma$ until either it exists for at least time $R$ or it has left the ball $B_R$.
Since $\gamma$ has unit speed, it cannot leave $B_R$ before time $R$, so in either case $\gamma$ exists until at least time $R$.
Taking $R \rightarrow \infty$, $\gamma$ exists for all time.
\end{proof}
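On the flat plane the construction above is explicit: the Christoffel symbols vanish, so $Y$ is constant and $\gamma'(t)$ is simply $v_0$ rotated by $H(t)$. The following numerical sketch (assuming NumPy; the choice $H(t) = t$ is a hypothetical example) integrates this ODE and recovers a unit circle:

```python
import numpy as np

# On R^2 the parallel field Y is constant, so gamma'(t) = R(H(t)) v0.
H = lambda t: t                           # hypothetical turning angle H(t) = t
ts = np.linspace(0.0, 2*np.pi, 200001)
v0 = np.array([1.0, 0.0])

# gamma'(t): rotation of v0 by the angle H(t)
dirs = np.stack([np.cos(H(ts))*v0[0] - np.sin(H(ts))*v0[1],
                 np.sin(H(ts))*v0[0] + np.cos(H(ts))*v0[1]], axis=1)

# Integrate gamma' by the trapezoid rule, with gamma(0) = 0
steps = (dirs[:-1] + dirs[1:]) / 2 * np.diff(ts)[:, None]
gamma = np.vstack([np.zeros((1, 2)), np.cumsum(steps, axis=0)])

# Constant geodesic curvature 1 gives a unit circle, closing up at time 2*pi
assert np.linalg.norm(gamma[-1] - gamma[0]) < 1e-6
```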
Suppose that $\gamma: \R \rightarrow \mathbb{H}^2$ is a $C^{1,1}$ curve that is arc-length parameterized.
Let $X$ be a unit vector field along $\gamma$ that is perpendicular to $\gamma'$ everywhere.
Define $\exp^\perp: \R^2 \rightarrow \mathbb{H}^2$ by
\begin{equation*}
\exp^\perp(s,t) = \exp_{\gamma(s)}(t X).
\end{equation*}
\begin{prop}
\label{prop:foliation_H_lipschitz}
Fix a point $p \in \mathbb{H}^2$ and a vector $V \in T_p \mathbb{H}^2$.
There is a bijection between Lipschitz functions $H: \R \rightarrow \R$ with Lipschitz constant $1$ and arc-length parameterized curves $\gamma$ such that
\begin{enumerate}[(a)]
\item $\gamma(0) = p$,
\item $\gamma'(0) = V$, and
\item the curves $\eta_s: t \mapsto \exp^\perp(s,t)$ form a foliation of $\mathbb{H}^2$.
\end{enumerate}
In particular, $H$ is the turning angle of the corresponding $\gamma$.
\end{prop}
\begin{proof}
Lemma~\ref{lemma:turning_angle_curve} shows that $\gamma$ is determined uniquely by its starting conditions and turning angle $H$.
Next, we assume that $\gamma$ satisfies (a) and (b) and has turning angle $H$.
To see that (c) holds, it suffices to show that each point $w \in \mathbb{H}^2$ has a unique closest point on $\gamma$.
Then $w$ lies on the orthogonal geodesic through that closest point, and uniqueness implies that the orthogonal geodesics are pairwise disjoint and hence foliate $\mathbb{H}^2$.
Fix such a $w$ and let $\delta(q) = d(w,q)$ be the distance function to $w$.
By standard hyperbolic trigonometry,
\[ \nabla_{X} \grad \delta = \coth(\delta) \chevron{X, \grad \delta^\perp} \grad \delta^\perp \]
where $\grad \delta^\perp$ is a unit vector orthogonal to $\grad \delta$.
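This formula can be verified in geodesic polar coordinates $(\delta, \theta)$, in which the hyperbolic metric is $d\delta^2 + \sinh^2(\delta) \, d\theta^2$ and the only nonzero Christoffel symbols are
\begin{equation*}
\Gamma^\delta_{\theta\theta} = -\sinh(\delta) \cosh(\delta), \qquad \Gamma^\theta_{\delta\theta} = \coth(\delta),
\end{equation*}
so that $\nabla_{\pder{}{\theta}} \grad \delta = \coth(\delta) \pder{}{\theta}$ and $\nabla_{\grad \delta} \grad \delta = 0$, in agreement with the formula above since $\grad \delta^\perp = \sinh(\delta)^{-1} \pder{}{\theta}$.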
Note that almost everywhere $\nabla_{\gamma'} \gamma' = h(t) (\gamma')^\perp$ where $h = H'$ and $(\gamma')^\perp$ is a unit vector orthogonal to $\gamma'$.
The case where $\gamma$ is smooth is shown in \cite{ferus_geod_foliations}, and we follow the same strategy.
Define $L(q) = \cosh(\delta(q))$.
Since $H$ is Lipschitz, $\gamma$ is in fact $C^{1,1}$ and is twice differentiable almost everywhere.
So $L \circ \gamma$ is twice differentiable a.e. and, therefore
\begin{align*}
(L \circ \gamma)' &= \chevron{\grad L, \gamma'} = \sinh(\delta) \chevron{ \grad \delta, \gamma'},\\
(L \circ \gamma)''
&= \cosh(\delta) \chevron{\grad \delta, \gamma'}^2
+ \sinh(\delta) \brak{
\coth(\delta) \chevron{\grad \delta^\perp, \gamma'}^2
+ \chevron{\grad \delta, h(t) (\gamma')^{\perp}}
} \\
&= \cosh(\delta) + h(t) \sinh(\delta) \chevron{\grad \delta, (\gamma')^{\perp}},
\end{align*}
where the last equality uses $\chevron{\grad \delta, \gamma'}^2 + \chevron{\grad \delta^\perp, \gamma'}^2 = 1$.
Since the Lipschitz constant of $H$ is 1, $\abs{h} \leq 1$, and therefore $(L \circ \gamma)'' \geq \cosh(\delta) - \sinh(\delta) = e^{-\delta} > 0$ a.e.
Therefore $L \circ \gamma$ is strictly convex and has a unique minimum.
Since $L$ is monotone in $\delta$, $\delta$ also has a unique minimum along $\gamma$, and (c) holds.
For the converse, we assume that $\gamma$ has turning angle $H$ that does not have Lipschitz constant 1.
In \cite{zeghib} it is shown that any co-dimension one geodesic foliation of a smooth manifold is locally Lipschitz.
So $H$ is differentiable a.e.
If $H$ does not have Lipschitz constant 1, then there is a point $x$ where $\abs{h(x)} > 1$, where $h = H'$.
Then it is possible to choose $\epsilon$ such that $\cosh \epsilon + h \sinh \epsilon = 0$.
Note that the Jacobi field of geodesics orthogonal to $\gamma$ is $(\cosh t + h(x) \sinh(t)) X$ (where $X$ is the parallel transport of $\gamma'$ along the orthogonal geodesic).
Then $\gamma$ has a focal point at distance $\epsilon$ from $\gamma(x)$ and so the orthogonal geodesics do not foliate $\mathbb{H}^2$.
\end{proof}
Recall that the geodesic curvature of a curve is the derivative of its turning angle $H(t)$.
\begin{corollary}
If the orthogonal geodesics of $\gamma$ foliate $\mathbb{H}^2$, then $\gamma$ is $C^{1,1}$ and its geodesic curvature $h$ satisfies $\abs{h} \leq 1$ almost everywhere.
\label{cor:foliating_curves}
\end{corollary}
\section{Locally Irreducible Metrics}
\label{sec:locally_irreducible}
In this section, we consider the case where $M$ has Ricci eigenvalues $(-1, -1, 0)$ and is simply connected and locally irreducible, i.e. $M = M_{irred}$.
We first show that the metric has the form as desired in Theorem~\ref{thm:continuous_coords}.
\begin{proof}[Proof of Theorem~\ref{thm:continuous_coords}]
By Proposition~\ref{prop:plane_bundle}, there exists a maximal, unit speed $C^{1,1}$ curve $\gamma: \R \rightarrow M$ everywhere orthogonal to $\mathcal{F}$ such that each point of $M$ lies on $\mathcal{F}_{\gamma(t)}$ for some $t$.
Define $f(x) := a(\gamma(x))$ and $h(x) := \beta(\gamma(x))$, where $h(x)$ is defined only on the set $\{x \in \R: f(x) \not= 0\}$.
Define $x(p)$ such that $\mathcal{F}_{\gamma(x)} = \mathcal{F}_p$, and $u(p),v(p)$ such that $p = \exp_{\gamma(x)} (u e_1 + v T)$.
Then $(x,u,v)$ are smooth coordinates on each connected component of $M_C$ and they are Lipschitz since $\mathcal{F}$ is Lipschitz.
By Proposition~\ref{prop:g_curv_homog}, the metric on $M_C$ has the desired form and since $\overline{M_C} = M$, the theorem follows.
\end{proof}
\begin{corollary}
\label{cor:smooth_foliation}
If $\mathcal{F}$ is a smooth foliation, then $f, h$ are also smooth.
\end{corollary}
An immediate corollary of these results is Corollary~\ref{cor:analytic}, which gives a classification of the case where $M$ is irreducible and analytic.
Since $M$ is analytic, $C$ is analytic and hence $\mathcal{F}$ is analytic on $M$.
Hence $T$, $e_1$ and $e_2$ are analytic and so too are $f, h$.
We will next present some examples of Theorem~\ref{thm:irreducible_examples} and then give its proof.
\begin{example}
\label{example:irreducible_1}
Suppose $H$ is as in Theorem~\ref{thm:irreducible_examples} and is smooth.
Then the metric $g$ is just given by \eqref{eqn:g_curv_homog}.
Let $\gamma$ be the path in $\mathbb{H}^2$ with turning angle $H$ from Proposition~\ref{prop:foliation_H_lipschitz}.
Then the orthogonal geodesics of $\gamma$ foliate $\mathbb{H}^2$, and we can put $g$ on $\mathbb{H}^2 \times \R$ using the coordinates in which $(x,u,v)$ corresponds to the point $(\exp_{\gamma(x)} (u (\gamma')^\perp), v)$.
In the case where $f = 0$, this $g$ becomes the standard metric on $\mathbb{H}^2 \times \R$.
This gives the strategy for the proof of the theorem even when $H$ is not smooth.
\end{example}
\begin{example}
\label{example:irreducible_2}
Each $H$ corresponds, by Proposition \ref{prop:foliation_H_lipschitz}, to a $C^{1,1}$ curve $\gamma$ in $\mathbb{H}^2$ and $h$ is the geodesic curvature of $\gamma$.
In Figure~\ref{fig:irreducible_examples}, we see three examples of curves $\gamma$ with their corresponding orthogonal geodesics, in the Poincar\'e disk model of $\mathbb{H}^2$.
The first two are smooth curves, with $H(x) = 0$ and $H(x) = x$, so they have $h(x) = 0$ and $h(x) = 1$, respectively.
The third curve is only $C^{1,1}$ and has one non-smooth point $\gamma(0)$.
On the left half, it has $h(x) = 1$ and on the right $h(x) = -1$.
Any choice of smooth $f(x)$ works for the first two examples.
For the last example, any smooth $f(x)$ works as long as $f^{(k)}(0) = 0$ for all $k$.
This demonstrates that $h$ need not even be continuous.
\end{example}
\begin{example}
\label{example:irreducible_4}
One may also choose $H$ such that $h$ is non-smooth on a Cantor set.
Consider the ternary expansion of numbers in $[0,1]$.
Define $h(t)$ to be $0$ if $1$ never occurs in the expansion, and $(-1)^n$ if the first $1$ occurs in the $n$th digit.
Defining $h(t) = 0$ outside of $[0,1]$, $h(t)$ has discontinuities at a Cantor set.
As $h(t)$ is the difference of two indicator functions, it is Lebesgue integrable.
Let $H(t) = \int_0^t h(s) ds$.
Then take any choice of $f$ which goes to zero to infinite order on the Cantor set and is nonzero elsewhere.
See Figure~\ref{fig:cantor_set_example} for the corresponding foliation in $\mathbb{H}^2$.
\end{example}
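The digit rule in Example~\ref{example:irreducible_4} can be made concrete; the following sketch evaluates $h$ on exact rationals using Python's `fractions` module (the cutoff at 63 ternary digits is an arbitrary illustrative choice):

```python
from fractions import Fraction

def h(t):
    """h(t) = 0 if no digit 1 occurs in the ternary expansion of t in [0,1],
    and (-1)**n if the first 1 occurs in the n-th digit (and 0 outside [0,1])."""
    if not 0 <= t <= 1:
        return 0
    t = Fraction(t)
    for n in range(1, 64):    # inspect the first 63 ternary digits
        t *= 3
        d = int(t)            # n-th ternary digit of the original t
        t -= d
        if d == 1:
            return (-1)**n
    return 0

# Points of the Cantor set (digits 0 and 2 only) give h = 0
assert h(Fraction(1, 4)) == 0     # 1/4 = 0.020202..._3
assert h(Fraction(1, 2)) == -1    # 1/2 = 0.111..._3, first 1 in digit 1
assert h(Fraction(7, 9)) == 1     # 7/9 = 0.21_3, first 1 in digit 2
```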
\begin{proof}[Proof of Theorem~\ref{thm:irreducible_examples}]
First, we apply Lemma~\ref{lemma:turning_angle_curve} to get a $C^{1,1}$ curve $\gamma$ in $\mathbb{H}^2$ with turning angle $H$.
We proceed by defining $g_f$ a smooth symmetric tensor on $M = \mathbb{H}^2 \times \R$ such that $g = g_{\mathbb{H}^2 \times \R} + g_f$ is the desired metric, where $g_{\mathbb{H}^2 \times \R}$ is the product metric.
Note that we can embed $\gamma$ into $M$ by $(\gamma(x),0)$, and we call this embedding $\gamma$ as well for simplicity.
Let $e_1$ be a unit vector field along $\gamma$ which is orthogonal to $\gamma'$ in $\mathbb{H}^2$, and let $e_3$ be a unit vector field in the $\R$ factor of $M$.
There are $C^0$ coordinates $(x,u,v)$ of $M$ such that $p \in M$ has coordinates $(x,u,v)$ if $p = \exp_{\gamma(x)}( u e_1 + v e_3)$.
Define $e_2$ to be a unit vector field orthogonal to $\{e_1, e_3\}$.
Let $S \subset \R$ be the set of $x$ values such that $\gamma$ is locally smooth at $\gamma(x)$.
Then there is a subset $S_M \subset M$ of points $p$ such that $x(p) \in S$.
$S_M$ is the set of points where the $(x,u,v)$ coordinates are locally smooth.
Note that $(M,g_{\mathbb{H}^2 \times \R})$ has Ricci eigenvalues $(-1,-1,0)$ with $C = 0$, and on $S_M$ it has a smooth foliation by complete, totally geodesic planes; hence the metric has the form \eqref{eqn:g_curv_homog} on $S_M$ by Proposition~\ref{prop:g_curv_homog}.
Therefore, the vector $e_3 = T$ and $\{e_1,e_2,T\}$ satisfy all the equations of \eqref{eqn:start_cov_derivs}-\eqref{eqn:end_cov_derivs} when taking covariant derivatives with the Levi-Civita connection of $g_{\mathbb{H}^2 \times \R}$ where $f(x) = 0$ in those equations and $h(x) := \chevron{\nabla_{\gamma'} \gamma', e_1}$.
Moreover, the contents of Lemma~\ref{lemma:g_complete} also apply in $S_M$.
(Our goal is to modify $g_{\mathbb{H}^2 \times \R}$ to make our choice of $f(x)$ the one that occurs in these covariant derivatives.)
\begin{figure}[h]
\centering
\includegraphics[scale=0.5]{orthogonal_foliations_all.pdf}
\caption{Three examples of possible choices of $H$ for Example~\ref{example:irreducible_2}, given by their corresponding paths in the Poincar\'e disk model of $\mathbb{H}^2$ along with their orthogonal, geodesic foliation.
Note that the third example demonstrates that $h$ may be non-continuous.}
\label{fig:irreducible_examples}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[scale=0.3]{poincare_disk.pdf}
\caption{Geodesic foliations of $\mathbb{H}^2$ where the set of discontinuities of $h$ forms a Cantor set.}
\label{fig:cantor_set_example}
\end{figure}
Define the symmetric 2-tensor $g_f$ point-wise on $M$ at points $p \in S_M$ by
\begin{equation*}
g_f = -2 f(x) v (dx \; du + du \; dx) + 2 f(x) u (dx \; dv + dv \; dx) + f(x)^2 (u^2 + v^2) dx^2
\end{equation*}
and by $g_f = 0$ for $p \not\in S_M$.
Using that $e_1 = \pder{}{u}$, $e_2 = \pder{}{x} (\cosh u - h(x) \sinh u)^{-1}$, and $e_3 = \pder{}{v}$, we get that
\begin{align}
\label{eqn:g_f}
g_f(X_1,X_2) &= -2 f(x) (\cosh u - h(x) \sinh u)^{-1} v \paren{\chevron{X_1, e_1} \chevron{X_2, e_2}
+ \chevron{X_1, e_2} \chevron{X_2, e_1}} \\
&\quad + 2 f(x)(\cosh u - h(x) \sinh u)^{-1} u \paren{\chevron{X_1, e_3} \chevron{X_2, e_2} + \chevron{X_1, e_2} \chevron{X_2, e_3}} \nonumber \\
&\quad + f(x)^2(\cosh u - h(x) \sinh u)^{-2} (u^2 + v^2) \chevron{X_1, e_2} \chevron{X_2, e_2} \nonumber
\end{align}
where $\chevron{\cdot, \cdot}$ is the inner product with respect to $g_{\mathbb{H}^2 \times \R}$.
Fix any smooth vector fields $X_1, X_2$, and define $F := g_f(X_1, X_2)$, a function on $M$.
By the above expression for $F$, we can observe that it has the following properties on $S_M$.
\begin{enumerate}[(a)]
\item $F$ is a rational function of functions of the following forms: $u$, $v$, $\cosh u$, $\sinh u$, $f^{(i)}(x)$, $h^{(i)}(x)$ (for $i = 0,1,2, \ldots$), or $\chevron{X, e_j}$ (for $j = 1,2,3$) where $X$ is a smooth vector field,
\item the denominator of this rational function is bounded away from $0$ on any set where $\abs{u}$ is bounded,
\item each term in the numerator of this rational function has a positive power of some $f^{(i)}(x)$.
\end{enumerate}
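For instance, taking $X_1 = X_2 = e_2$ in \eqref{eqn:g_f} and using that $\{e_1, e_2, e_3\}$ is orthonormal, only the last term survives:
\begin{equation*}
g_f(e_2, e_2) = f(x)^2 (\cosh u - h(x) \sinh u)^{-2} (u^2 + v^2),
\end{equation*}
whose numerator carries the factor $f(x)^2$, as in property (c), and whose denominator is a power of $\cosh u - h(x) \sinh u$, as in property (b).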
We can see that $F$ satisfies part $(b)$ since, by Proposition~\ref{prop:foliation_H_lipschitz}, $\abs{h(x)} \leq 1$ and hence
\[ \cosh u - h(x) \sinh u \geq \cosh u - \abs{\sinh u} = e^{-\abs{u}} > 0. \]
Furthermore, the derivatives $e_{j_1} \cdots e_{j_k}(F)$ on $S_M$ also satisfy these three properties.
This follows from directly computing the derivatives of functions of this form by using equations \eqref{eqn:start_cov_derivs}-\eqref{eqn:end_cov_derivs} and using that
\begin{align*}
a(x,u,v) &= 0 \\
\beta(x,u,v) &= (h(x) \cosh u - \sinh u)/(\cosh u - h(x) \sinh u).
\end{align*}
Hence all derivatives $Y_1 \cdots Y_k(F)$, for any smooth vector fields $Y_1, \ldots, Y_k$, satisfy (a)-(c) on $S_M$.
We next claim that for any function $G$ that satisfies these three properties (a)-(c), $G$ extends continuously to all of $M$ with $G = 0$ on $M \setminus S_M$.
Take a sequence of points $(x_k, u_k, v_k)$ in $S_M$ that converge to a point $(x_*,u_*,v_*)$ in $M \setminus S_M$.
By property (c) and the assumption \eqref{eqn:f_h_property} on $f$ and $h$, every term of the numerator must go to zero as $k \rightarrow \infty$: all factors other than the $f^{(i)}(x)$ and $h^{(i)}(x)$ are bounded, and the products of $f^{(i)}(x)$ and $h^{(j)}(x)$ factors in each term tend to zero by assumption.
By property (b), the denominator stays bounded away from 0 as $k \rightarrow \infty$.
Hence $G(x_k, u_k, v_k) \rightarrow 0$ as $k \rightarrow \infty$.
Since $F$ and all partial derivatives of $F$ on $S_M$ satisfy (a)-(c), $F$ extends smoothly to all of $M$ with $F = 0$ and $Y_1 \ldots Y_k(F) = 0$ on $M \setminus S_M$.
Hence $g_f(X_1, X_2)$ is a smooth function for any fixed smooth vector fields $X_1, X_2$.
Since $g_f(X_1, X_2)$ is bilinear in $X_1, X_2$, $g_f$ is a smooth tensor on $M$.
Therefore $g = g_{\mathbb{H}^2 \times \R} + g_f$ is smooth and is of the form \eqref{eqn:g_curv_homog} on $\tilde S$.
On $M \setminus \tilde S$, $g = g_{\mathbb{H}^2 \times \R}$ and hence $M$ has Ricci eigenvalues (-1,-1,0) everywhere.
\end{proof}
\begin{remark}
Since $h$ is bounded ($H$ is Lipschitz), we only need to check the condition in \eqref{eqn:f_h_property} when all $\ell_i \geq 1$.
\end{remark}
\begin{remark}
If instead of a smooth metric, we wanted $g$ to be $C^K$, then the condition on $f$ and $h$ in equation \eqref{eqn:f_h_property} is needed only when $k + \sum_{i=1}^m \ell_i \leq K$.
In particular, for a $C^2$ metric, we need that $f, f', f'', fh', f(h')^2, fh'',$ and $f'h'$ go to zero at $x \not\in S$.
\end{remark}
\begin{remark}
By Proposition~\ref{prop:plane_bundle}, for any complete, simply connected $M$ with Ricci eigenvalues $(-1,-1,0)$ that is locally irreducible everywhere, there exists a path $\gamma: \R \rightarrow M$ orthogonal to $\mathcal{F}$; set $f(x) := a(\gamma(x))$ and let $H$ be the turning angle of $\gamma$.
This gives a candidate for a converse to Theorem~\ref{thm:irreducible_examples}.
However, it is not clear that such an $f$ and $h$ must satisfy the assumptions in equation~\eqref{eqn:f_h_property}.
\end{remark}
\section{Manifolds with Locally Reducible Points}
\label{sec:locally_reducible}
We now describe the structure of complete, simply connected manifolds $M$ which have Ricci eigenvalues $(-1, -1, 0)$ that may not be locally irreducible everywhere.
\begin{prop}
\label{prop:decomposition$V_i$}
Suppose that a complete, simply connected manifold $M$ has Ricci eigenvalues $(-1, -1, 0)$.
Then $M$ is decomposed as a union of disjoint regions $\{U_i\}$ such that each $U_i$ is either an open connected component of $M_{split}$ or a closed connected component of $M_{irred}$.
These satisfy:
\begin{enumerate}[(A)]
\item in the first case, we call $U_i$ a \emph{split region}, and $U_i$ is isometric to $\Sigma \times \R$ for $\Sigma \subset \mathbb{H}^2$ a connected subset of the hyperbolic plane whose boundary components are complete geodesics,
\item in the second case, we call $U_i$ a \emph{non-split region}. In $U_i$, every point is locally irreducible and $C \not= 0$ on a dense, open subset. Furthermore, $U_i$ admits a Lipschitz foliation by the leaves of $\mathcal{F}$ and a path $\gamma_i$ orthogonal to $\mathcal{F}$ which intersects every leaf exactly once.
\end{enumerate}
\end{prop}
\begin{proof}
For each connected component of $M_{split}$ and each connected component of $M_{irred}$, we have a set $U_i$.
Since $M \setminus M_{irred}$ is $M_{split}$, $M$ is the union of these disjoint sets.
If $U$ is a non-split region, then its structure is given by Proposition~\ref{prop:plane_bundle}.
It remains to be shown that a split region $U$ is isometric to $\Sigma \times \R$ for some simply connected $\Sigma \subset \mathbb{H}^2$.
By the de Rham-type splitting result of \cite{graph_manifolds, twisted_products}, we know that $U$ is isometrically the product of $\Sigma \times \R$ for some surface $\Sigma$ with Gaussian curvature $-1$.
Each boundary component of $U$ is also a boundary component of a non-split region.
Since non-split regions have complete, flat, totally geodesic boundary components, so too must $U$.
Since $M$ is simply connected, we have that $U$ is simply connected, since otherwise there would be a non-trivial covering of $M$ (obtained by gluing copies of $M \setminus U$ to the non-trivial cover of $U$).
Hence $\Sigma$ is simply connected.
To see that $\Sigma \subset \mathbb{H}^2$, we can consider its double $\Sigma \cup \Sigma$ glued along the geodesic boundary components.
This is a complete surface with $K = -1$ and hence its universal cover is $\mathbb{H}^2$.
Since $\Sigma$ is simply connected, its inclusion into the double then lifts to an inclusion in $\mathbb{H}^2$, as desired.
\end{proof}
\begin{figure}
\centering
\includegraphics[scale=0.8]{curvature_homogeneous_examples.pdf}
\caption{Schematic examples of manifolds with Ricci eigenvalues $(-1, -1, 0)$ built off of trees.}
\label{fig:curv_homog_examples}
\end{figure}
\begin{example}
Figure~\ref{fig:curv_homog_examples} shows four possibilities for $M$, modelled after trees.
Each $M$ is drawn schematically in the Poincar\'e disk model of $\mathbb{H}^2$ where the split regions are white and the non-split regions are shaded.
Each non-split region may have any number of boundary components, including infinitely many.
Note that each split region has at most two boundary components (and possibly only one).
We can construct these examples by taking non-split regions of the form \eqref{eqn:g_curv_homog} with $f(x) = 0$ outside of some interval.
Then these metrics are split outside of a strip and hence can be glued along their split regions.
\end{example}
\section{Topology}
\label{sec:topology}
In this section, we consider $M$ with Ricci eigenvalues $(-1, -1, 0)$ that may not be simply connected.
Since $\Sec \leq 0$, the universal cover $\widetilde M$ is always diffeomorphic to $\R^3$ and the topology of $M$ is determined by the fundamental group alone.
Our main result in this section is Theorem~\ref{thm:pi1}, which states that any manifold with Ricci eigenvalues $(-1, -1, 0)$ and finitely generated fundamental group has free fundamental group (unless $\widetilde M$ is split, i.e.\ isometric to $\mathbb{H}^2 \times \R$).
\begin{lemma}
\label{lemma:def_A}
Define $A(p,q)$ for $p,q \in \widetilde M$ by
\[A(p,q) = \int \norm{C(\gamma')} dt = \int \abs{a(\gamma(t)) \chevron{\gamma', e_2}} dt \]
integrating over the unique geodesic segment $\gamma$ from $p$ to $q$.
Then $A(p,q)$ depends only upon the leaves $\mathcal{F}_p, \mathcal{F}_q$ and not on the choice of points on those leaves.
Moreover, $A$ is an isometry invariant.
\end{lemma}
\begin{proof}
First observe that the integrand is well-defined: $e_2$ is defined wherever $a \not= 0$, and the integrand vanishes where $a = 0$; the integral exists because $\abs{\chevron{\gamma', e_2}}$ is bounded by $\norm{\gamma'}$.
Moreover, the definition is independent of the choice of parametrization of $\gamma$.
Since $\abs{a}$ is an isometry invariant, $A(p,q) = A(gp, gq)$ for any isometry $g$.
Take points $p_2 \in \mathcal{F}_p$ and $q_2 \in \mathcal{F}_q$ and let $\gamma_2$ be the geodesic between the two points.
Observe that every leaf of $\mathcal{F}$ that intersects $\gamma$ must also intersect $\gamma_2$, since each such leaf must have $\mathcal{F}_{p}$ and $\mathcal{F}_q$ on opposite sides of it.
So the intervals where $a \not= 0$ on $\gamma$ are in bijection with those on $\gamma_2$.
To compute the integral in $A(p,q)$ we may restrict to the sum of the integrals over each interval where $\abs{a(\gamma(t))}$ is positive, and these intervals are in bijection between $\gamma$ and $\gamma_2$.
To show that $A(p,q) = A(p_2, q_2)$ it therefore suffices to show that the integrals over corresponding intervals of $\gamma$ and $\gamma_2$ are equal.
On a connected component with $a \not= 0$, there are coordinates $(x,u,v) \in (x_0, x_1) \times \R^2$ such that
\begin{equation}
\label{eqn:coordinates_a_nonzero}
g = a^{-2}\; dx^2 + (du - v\; dx)^2 + (dv + u \; dx)^2
\end{equation}
where $a = C(x) (\cosh u - h(x) \sinh u)$ for some $C(x) \not= 0$, $\abs{h(x)} \leq 1$.
In these coordinates, $T = \pder{}{v}, e_1 = \pder{}{u}$, the vector $e_2 = \abs{a} \paren{\pder{}{x} + v \pder{}{u} - u \pder{}{v}}$, and the leaves of $\mathcal{F}$ are the sets where $x$ is constant.
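One can check directly that this $e_2$ is unit with respect to \eqref{eqn:coordinates_a_nonzero}: setting $X := \pder{}{x} + v \pder{}{u} - u \pder{}{v}$, we have $dx(X) = 1$, $(du - v\, dx)(X) = v - v = 0$, and $(dv + u\, dx)(X) = -u + u = 0$, so
\begin{equation*}
g(X, X) = a^{-2} \cdot 1^2 + 0^2 + 0^2 = a^{-2},
\end{equation*}
and hence $e_2 = \abs{a} X$ has unit length.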
We claim that for paths in this $a \not= 0$ region, $\int \abs{a(\gamma(t)) \chevron{\gamma', e_2}} dt$ is independent of the path taken between its starting and ending leaves, so long as it is increasing in $x$.
Assume that the domain of $\gamma$, restricted to one such region, is $[0,1]$, then
\begin{align*}
\int_0^1 \abs{a \chevron{\gamma'(t), e_2}} dt
&= \int_0^1 \abs{a^2 \chevron{\gamma'(t), \pder{}{x} + v \pder{}{u} - u \pder{}{v}}} dt \\
&= \int_0^1 \abs{a^2 a^{-2} dx(\gamma'(t))} dt \\
&= x(\gamma(1)) - x(\gamma(0))
\end{align*}
where the second equality follows by applying~\eqref{eqn:coordinates_a_nonzero}.
Since the $x$ coordinate is constant on the leaves of $\mathcal{F}$, this integral depends only upon the leaves of the end points and hence $A(p,q) = A(p_2, q_2)$.
\end{proof}
\begin{remark}
\label{remark:A}
We think of $A$ as measuring a distance between any two leaves of $\mathcal{F}$.
This distance measures the amount of rotation of the $T$ vector field between the two leaves.
However, $A$ is only a pseudometric on $\mathcal{F}$.
If $a$ is nonzero everywhere, then $A$ gives essentially the $x$-coordinate in the coordinates of the form in \eqref{eqn:coordinates_a_nonzero}.
This allows $A$ to act as an extension of the $x$ coordinate to any non-split region, even those with $a = 0$ at some leaves.
Moreover, the path integral in $A$ does not need to be a geodesic and a similar strategy shows that any path between $p,q$ will give the same value for $\int \abs{a \chevron{\gamma', e_2}} dt$, so long as there is no back-tracking, i.e. it never intersects the same leaf twice.
\end{remark}
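The pseudometric degeneracy can be seen concretely: if $L \not= L'$ are two leaves bounding a split region, then $a = 0$ along any geodesic $\gamma$ between them, and hence
\begin{equation*}
A(L, L') = \int \abs{a(\gamma(t)) \chevron{\gamma', e_2}} \, dt = 0
\end{equation*}
even though the leaves are distinct.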
We now work towards the proof of Theorem~\ref{thm:pi1} with lemmas that restrict the isometries that stabilize leaves of $\mathcal{F}$, non-split regions, and split regions of $M$, as well as a lemma for the case where $M$ is locally irreducible everywhere.
Let $\widetilde M$ be the universal cover of $M$.
Recall that since $\sec \leq 0$, if $G$ acts on $\widetilde M$ fixed point freely, then $G$ cannot have torsion.
If $g \in G$ has an invariant plane, i.e. $g(L) \subset L$ for some leaf $L$ in $\mathcal{F}$, then we will show that $g$ is trivial.
To do so, we may assume that $g$ acts by translations on the leaf.
Certainly $g$ acts by isometries on $L$ and has no fixed point and hence is either a translation or a glide reflection.
In the latter case, $g^2$ is a translation, so we may pass to $g^2$ instead of $g$ if necessary: if $g^2$ is trivial then so is $g$.
Similarly, if $g$ fixes a finite number of leaves, we will assume it acts by translations on all of them at once.
We make use of these assumptions in the proof of the following lemmas.
Suppose that $G$ is any group of isometries acting fixed point freely on $\widetilde M$.
\begin{lemma}
\label{lemma:two_fixed_planes}
If $G$ fixes two distinct leaves $L_0, L_1 \in \mathcal{F}$, i.e. $G(L_i) \subset L_i$,
then either $G$ is trivial or $L_0, L_1$ are boundary leaves of a split region.
\end{lemma}
\begin{proof}
Assume there is some $g \not= e$ in $G$.
First note that since the restriction $g|_{L_i}$ is an isometry of the flat leaf $L_i \simeq \R^2$, we may assume that $g$ acts by translation on each $L_i$, passing to $g^2$ if not.
Pick a point $p_0 \in L_0$ and let $\gamma_0$ be the geodesic along which $g$ translates $L_0$.
Recall $L_1$ is a totally geodesic plane, and hence is a convex subset of $\widetilde M$.
Since $\sec \leq 0$, we have that $d(\cdot, L_1)$ is a convex function along any geodesic and in particular along $\gamma_0$.
Moreover, $d(g^k(p_0), L_1) = d(p_0, L_1)$ and hence $d(\cdot, L_1)$ is constant along $\gamma_0$.
Let $p_1$ be the unique point on $L_1$ closest to $p_0$ and $\gamma_1$ the geodesic in $L_1$ along which $g$ translates the point $p_1$.
Then $g^k(p_1)$ is the unique point on $L_1$ closest to $g^k(p_0)$, and so $\gamma_0$ and $\gamma_1$ are parallel in the sense of having bounded (in fact constant) distance.
Since $\sec \leq 0$, the union of all geodesics parallel to a given geodesic is a convex subset isometric to $N \times \R$ for some closed convex subset $N$ of $\widetilde M$; see Lemma 2.4 in \cite{ballman}.
So the geodesics $\gamma_0$ and $\gamma_1$ bound a flat strip, i.e. a totally geodesic submanifold isometric to $[0,\ell] \times \R$.
Since this strip is flat, it must contain the nullity geodesics through each point of the strip.
Since the nullity geodesics are complete, they are parallel in the strip, so $C(X) = 0$ for any vector $X$ in the strip.
Note that the strip must be transverse to $e_1$ since if $e_1$ was in its tangent plane at one point, the strip would be contained in a single leaf of $\mathcal{F}$.
Hence $C = 0$ on the strip.
Since this holds for any $p_0 \in L_0$, we then have $C = 0$ on any point on a geodesic from $L_0$ to $L_1$, and so $L_0$ and $L_1$ must bound a split region.
\end{proof}
\begin{lemma}
\label{lemma:no_fixed_nonsplit_regions}
Suppose $V$ is any subset of $\widetilde M_{irred}$, with $V$ a strict subset of $\widetilde M$.
If $G(V) \subset V$, then $G$ is trivial.
\end{lemma}
\begin{proof}
Note that $G$ fixing $V$ implies that $G$ also fixes the entire non-split region (i.e., connected component of $\widetilde M_{irred}$) containing $V$, so we may assume that $V$ is a non-split region rather than a subset of one.
First, we consider the case where $V$ has two distinct boundary components, $L_0$ and $L_1$ which are leaves of $\mathcal{F}$.
Pick $g \in G$.
Either $g$ stabilizes each boundary leaf or $g^2$ does.
Then Lemma~\ref{lemma:two_fixed_planes} shows that $g$ is trivial.
Instead, suppose $V$ has exactly one boundary component $L_0$.
(We allow the possibility that $V$ has no interior, so that $V = L_0$.)
Then $G(L_0) \subset L_0$.
Since $V$ is a non-split region and so lies in $\widetilde M_{irred}$, we must have a sequence of points with $a \not= 0$ converging to $L_0$.
In particular, infinitely many of those points lie on one side of $L_0$ and we will consider the leaves of $\mathcal{F}$ near $L_0$ on that side.
Using $A$ defined in Lemma~\ref{lemma:def_A}, we consider $A(L_0, \mathcal{F}_p)$.
Since $A(\cdot,\cdot)$ and $L_0$ are invariant under $G$, $A(L_0, \cdot)$ must be as well.
Take $g \in G$ and $p_0 \in L_0$.
Then consider the geodesics $\gamma, \mu$ starting at $p_0$ and $gp_0$, respectively, and orthogonal to $L_0$.
There are infinitely many leaves of $\mathcal{F}$ intersecting $\gamma$, which limit to $p_0$.
Since these leaves do not intersect $L_0$, they must limit towards being parallel to $L_0$.
Therefore, there is an $\epsilon > 0$ so that all leaves within $\epsilon$ of $p_0$ along $\gamma$ also intersect $\mu$ near $gp_0$.
The same is true for $\mu$, so that there is an $\epsilon > 0$ where the leaves along $\gamma$ all intersect $\mu$ and the leaves along $\mu$ all intersect $\gamma$.
Taking $A(L_0, \cdot)$ along $\gamma$ and $\mu$ we then have that, for $s,t < \epsilon$, if $A(L_0, \mu(t)) = A(L_0, \gamma(s))$ then the leaves through $\mu(t)$ and $\gamma(s)$ must be the same.
Since $A$ is an isometry invariant, $g$ must map such leaves to themselves.
So $g$ fixes two leaves, and by Lemma~\ref{lemma:two_fixed_planes}, $g$ is trivial.
\end{proof}
\begin{lemma}
\label{lemma:split_region_free}
Suppose that $U$ is a split region of $\widetilde{M}$ with at least one boundary component.
If $G(U) \subset U$, then $G$ is a free group.
\end{lemma}
\begin{proof}
Since $C = 0$ on $U$, $U$ is isometric to $\Sigma \times \R$ with $\Sigma$ a subset of $\mathbb{H}^2$ with complete geodesics for its boundary components.
Suppose for contradiction that there is a non-trivial $g \in G$ such that $g$ fixes a point $p \in \Sigma$.
Then let $r = \inf_{\gamma_j} d(p, \gamma_j)$ where $\curly{\gamma_j}$ is the set of boundary components of $\Sigma$.
There must be at least one boundary component that realizes the infimum and, moreover, only finitely many do, since the boundary components are complete geodesics in $\mathbb{H}^2$.
Then $g$ must act on the set $\curly{\gamma_j | d(p, \gamma_j) = r}$ of those boundary components.
Since the set is finite, there is some $k > 0$ such that some $\gamma_j$ is invariant under $g^k$.
But then the boundary $\gamma_j \times \R$ of $U$ is invariant under $g^k$.
By the previous lemma, we know the only such isometries are trivial.
Then $g$ has order at most $k$, but $G$ cannot have torsion.
So $G$ must act fixed-point freely on $\Sigma$.
Similarly, we can see that $G$ acts properly discontinuously on $\Sigma$.
Suppose that $p_0 \in U$ and there is a sequence of distinct points $p_i = g_i(p_0)$ with $g_i \in G$, $g_i \not= g_j$ for $i \not= j$, such that $p_i \rightarrow p_* \in U$.
Then let $\gamma_*$ be a boundary geodesic of minimal distance to $p_*$ and let $D = d(\gamma_*, p_*)$.
For $\epsilon > 0$, choose $N$ so that $d(p_i, p_*) < \epsilon$ for $i > N$.
For each $g_i$, $g_i^{-1}(\gamma_*)$ is a boundary geodesic, and since $d(p_i, p_*) < \epsilon$ for $i > N$, we have $d(g_i^{-1}(\gamma_*), p_0) = d(\gamma_*, p_i) < D + \epsilon$.
The set of boundary geodesics that are distance at most $D + \epsilon$ from $p_0$ is finite.
Hence there exists $j > k > N$ such that there is a geodesic $\gamma_0$ of distance at most $D + \epsilon$ from $p_0$ so that $g_j(\gamma_0)$ and $g_k(\gamma_0)$ are both $\gamma_*$.
Then $g_j^{-1} g_k$ must fix $\gamma_0$.
This is a contradiction with the previous lemma.
So $G$ must act properly discontinuously as well as fixed-point freely.
Now $\Sigma$ is an open surface that is contractible (since it is a convex subset of $\mathbb{H}^2$) and $G$ acts on it fixed-point freely and properly discontinuously.
So $G$ is the fundamental group of $\Sigma / G$, a non-compact surface.
Hence $G$ is a free group, by the well-known fact that the fundamental group of any non-compact surface is free; see Section 4.2.2 of \cite{stillwell} for reference.
\end{proof}
\begin{lemma}
\label{lemma:pi1_irreducible}
Suppose that a complete manifold $M$ has constant Ricci eigenvalues $(-1, -1, 0)$ and is everywhere locally irreducible.
Then $\pi_1(M)$ is either trivial or $\Z$.
\end{lemma}
\begin{proof}
We again use Lemma~\ref{lemma:def_A}.
Specifically, pick a leaf $L_0$ of $\mathcal{F}$ of $\widetilde M$.
Then define $A(L_p)$ for the leaf $L_p := \mathcal{F}_p$ by $\pm A(L_0, L_p)$, choosing the positive sign on one side of $L_0$ and the negative sign on the other.
That $M$ is locally irreducible implies that $A$ is injective, i.e. no two distinct leaves map to the same value.
Next, since $A$ does not depend upon the path used to compute it, $A(p,q)$ has the following property:
for any $p_0, p_0', q_1, q_2 \in \widetilde M$ such that $\mathcal{F}_{p_0}$ has all of $p_0', q_1$, and $q_2$ on one side of it,
\begin{align*}
A(p_0, q_1) - A(p_0, q_2)
&= (A(p_0, p_0') + A(p_0', q_1)) - (A(p_0, p_0') + A(p_0', q_2))\\
&= A(p_0', q_1) - A(p_0', q_2).
\end{align*}
This equality then extends to all $p_0, p_0'$ by applying it twice, first to the pair $p_0, p_0''$ and then to $p_0'', p_0'$, for a suitable $p_0''$.
This implies that $A(L_p)$ is equivariant under isometries:
\begin{align*}
A(g(L_p)) - A(g(L_q))
&= A(L_0, g(L_p)) - A(L_0, g(L_q)) \\
&= A(g(L_0), g(L_p)) - A(g(L_0), g(L_q)) \\
&= A(L_0, L_p) - A(L_0, L_q) \\
&= A(L_p) - A(L_q).
\end{align*}
Then $\pi_1(M)$ acts fixed-point freely on the image of $A$ since otherwise a non-trivial $g \in \pi_1(M)$ would fix $L_0$ which contradicts Lemma~\ref{lemma:no_fixed_nonsplit_regions}.
Therefore if $\pi_1(M)$ is non-trivial, $A$ must be surjective on $\R$.
Next, we want to show that $\pi_1(M)$ acts properly discontinuously on $\R$ and hence is either trivial or $\Z$.
Suppose not.
Then the orbit of any $x \in \R$ under $\pi_1(M)$ is dense in $\R$.
Hence, if some leaf $P$ has $a = 0$ on it, then $a = 0$ on all leaves, by continuity of $a$, and so $a = 0$ on all of $M$.
This contradicts the assumption that $M$ is locally irreducible.
So $a \not= 0$ on $M$.
We may now assume without loss of generality that $a > 0$ on $M$.
Hence we have a smooth foliation with coordinates $(x,u,v)$ as in Proposition~\ref{prop:g_curv_homog}.
Fix some $p_0 \in \widetilde M$ with $p_0 \in L_0$ and call $p_0 = (0,0,0)$.
By assumption, there are $g_k \in \pi_1(M)$ such that $g_k(p_0)$ are in leaves $L_k$ with $A(L_k) \rightarrow 0$ as $k \rightarrow \infty$.
Since $\pi_1(M)$ acts properly discontinuously on $\widetilde M$, we must have that only finitely many of $p_k := g_k(p_0)$ are in any compact neighborhood of $p_0$.
For each $k$, let $q_k$ be the point of the leaf $L_k$ lying on the curve $(x,0,0)$; then $q_k \rightarrow p_0$ as $k \rightarrow \infty$.
Letting $(x_k, u_k, v_k) = p_k$, if $u_k \rightarrow \pm \infty$ as $k \rightarrow \infty$ (on any subsequence), then $g_k^{-1}(q_k)$ must have $u$-coordinate $-u_k$ and lies in $L_0$.
But since we know the form of $a(x,u,v) = f(x)/(\cosh u - h(x) \sinh u)$, either $a \rightarrow 0$ or $a \rightarrow \infty$ for $u \rightarrow \infty$ and $x$ fixed.
This is a contradiction since $a(0,-u_k,-v_k)$ must be the same as $a(q_k)$ by the isometry $g_k$, and $a(q_k) \rightarrow a(p_0)$, which is non-zero and finite.
Hence $u_k$ must be bounded.
Then $v_k$ must diverge instead.
As in \cite{graph_manifolds}, we note that
\begin{equation*}
T(e_2(a)) = e_2(T(a)) + [T,e_2](a) = (\nabla_T e_2 - \nabla_{e_2} T)(a) = a e_1(a)
\end{equation*}
and that $T(e_1(a)) = e_1(T(a)) + [T,e_1](a) = 0$.
Hence $e_2(a) = a e_1(a) v + d$ for some $d$ with $d, a,$ and $e_1(a)$ all independent of $v$.
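In more detail: since $T(a) = T(e_1(a)) = 0$, the functions $a$ and $e_1(a)$ are independent of $v$, and in these coordinates $T$ is the coordinate field $\pder{}{v}$; integrating the identity $T(e_2(a)) = a e_1(a)$ in $v$ therefore gives
\begin{equation*}
e_2(a)(x,u,v) = e_2(a)(x,u,0) + \int_0^v a \, e_1(a) \, ds = d(x,u) + a \, e_1(a) \, v
\end{equation*}
with $d(x,u) := e_2(a)(x,u,0)$.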
Therefore $e_2(a)$ at $g_k^{-1}(q_k)$ must diverge as $k \rightarrow \infty$, since $v_k$ diverges while $u_k$ is bounded and $x = 0$.
But $e_2(a)$ is an isometry invariant up to sign, so $\abs{e_2(a)}$ at $q_k$ diverges as well.
This is a contradiction, since $q_k \rightarrow p_0$ and $e_2(a)$ must be finite at any point.
Hence $\pi_1(M)$ must actually act properly discontinuously on $\R$, and hence is trivial or $\Z$.
\end{proof}
We now return to Theorem~\ref{thm:pi1}.
Our strategy is to build an integer-valued Lyndon length function $N: \pi_1(M) \rightarrow \mathbb{N}$ \cite{lyndon} which can be thought of as an integer-valued version of $g \mapsto A(p, g(p))$.
Lemma~\ref{lemma:pi1_irreducible} covers the case where $M$ is locally irreducible everywhere, so we assume for the remainder of this section that $M$ is locally reducible at some points.
Moreover, we assume that $\widetilde M$ is irreducible, so $a \not= 0$ at some point.
\begin{definition}
Fix a point $p_0 \in \widetilde M$ such that $a \not= 0$.
Let $\mathcal{V}$ be a non-empty finite collection of connected components of $M_C$.
We require that $\mathcal{V}$ must include the connected component that contains $p_0$.
Let $\mathcal{L}_0$ be the set of boundary leaves of all $V \in \mathcal{V}$ and let $\mathcal{L} := \curly{g(L) |g \in \pi_1(M), \quad L \in \mathcal{L}_0}$ be the images of those leaves under the action of $\pi_1(M)$.
\end{definition}
\begin{definition}
For a choice of $\mathcal{V}$, define $N: \pi_1(M) \rightarrow \N$ so that $N(g)$ is the number of leaves of $\mathcal{L}$ crossed by the geodesic from $p_0$ to $g(p_0)$.
\end{definition}
We will vary the choice of $\mathcal{V}$ and write $N_\mathcal{V}$ to clarify when necessary.
\begin{lemma}
\label{lemma:def_N}
$N(g)$ is finite.
\end{lemma}
\begin{proof}
Suppose $g$ is non-trivial.
For each $V \in \mathcal{V}$, let $\epsilon_V = \int \abs{a \chevron{\gamma', e_2}} dt > 0$ along any geodesic $\gamma$ from one boundary component of $V$ to the other.
Then define $\epsilon = \min_{V \in \mathcal{V}} \epsilon_V$ so $\epsilon > 0$.
Then we claim that
\[ \frac{\epsilon}{2} \paren{N(g) - 2} \leq A(p_0, g(p_0)), \]
and hence $N$ is finite.
Let $\gamma$ be the geodesic from $p_0$ to $g(p_0)$.
Consider the open regions $g'(V)$ for $g' \in \pi_1(M)$ and $V \in \mathcal{V}$; every leaf of $\mathcal{L}$ is a boundary leaf of one of these regions.
The geodesic $\gamma$ intersects some number of these regions.
If $\gamma$ intersects $g'(V)$, it must cross from one boundary leaf to the other, unless $g'(V)$ contains $p_0$ or $g(p_0)$.
So $\gamma$ makes at least $(N(g) - 2)/2$ full crossings of such regions, with each crossing contributing at least $\epsilon$ to $A(p_0, g(p_0))$.
Hence the inequality holds and $N(g)$ is finite.
\end{proof}
\begin{lemma}
\label{lemma:Lyndon_length}
Define the overlap function $s(g,h) = \frac 1 2 \brak{N(g) + N(h) - N(gh^{-1})}$.
The function $N: \pi_1(M) \rightarrow \N$ is a Lyndon length function (see \cite{lyndon}) in the sense that it satisfies the following:
\begin{enumerate}[(I)]
\item $N(g) = 0$ iff $g$ is trivial,
\item $N(g^{-1}) = N(g)$,
\item $s(g,h) \geq 0$,
\item $s(g,h) < s(g,\ell)$ implies that $s(h, \ell) = s(g,h)$, and
\item $s(g,h) + s(g^{-1}, h^{-1}) > N(g) = N(h)$ implies that $g = h$.
\end{enumerate}
\end{lemma}
\begin{proof}
Let $G := \pi_1(M)$.
For (I), note that if $N(g) = 0$, then the geodesic from $p_0$ to $g(p_0)$ must never reach a point with $a = 0$.
Then the subgroup $\chevron{g} \subset G$ generated by $g$ must leave $V$ invariant, where $V$ is the non-split region containing $p_0$.
So Lemma~\ref{lemma:no_fixed_nonsplit_regions} implies that $g$ is trivial.
For (II), if $\gamma$ is the geodesic from $p_0$ to $g(p_0)$, then $g^{-1}(\gamma)$ is the geodesic from $g^{-1}(p_0)$ to $p_0$ and $\mathcal{L}$ is isometry invariant.
For (III) and (IV), we give $s(g,h)$ a geometric interpretation.
Observe that $s(g,h)$ counts the number of leaves of $\mathcal{L}$ that lie on both the geodesic from $p_0$ to $g(p_0)$ and the one from $p_0$ to $h(p_0)$.
To see this, first observe that $N(gh^{-1})$ is equal to the number of leaves of $\mathcal{L}$ crossed by the geodesic from $h(p_0)$ to $g(p_0)$, since $h$ acts by an isometry.
Assume that $g$ and $h$ are non-trivial and distinct.
Consider all the leaves crossed by any of the three geodesics between $p_0, g(p_0)$ and $h(p_0)$.
Each of these leaves divides $\widetilde M$ into two connected components, with two of $\{p_0, g(p_0), h(p_0)\}$ on one side and one of the other.
None of the leaves can have all three points on one side, since each such leaf crosses the geodesic between two of the points.
Any leaf with $p_0$ and $g(p_0)$ on the same side crosses the geodesics from $p_0$ to $h(p_0)$ and from $g(p_0)$ to $h(p_0)$, so it contributes $+1$ to the $N(h)$ term and $+1$ to the $N(gh^{-1})$ term, and hence contributes $0$ to $s(g,h)$.
Similarly for leaves with $p_0$ and $h(p_0)$ on the same side.
For leaves with $g(p_0)$ and $h(p_0)$ on the same side, these contribute $+1$ to $N(g)$ and $+1$ to $N(h)$, so account for $+1$ to $s(g,h)$.
Then $s(g,h)$ is the number of leaves with $g(p_0)$ and $h(p_0)$ on one side and $p_0$ on the other, which equals the number of leaves that intersect both geodesics from $p_0$ to $g(p_0)$ and $h(p_0)$.
Lastly note that if $g$ or $h$ are trivial, then $s(g,h) = 0$, and if $g = h$ then $s(g,h) = N(g) = N(h)$.
Hence $s$ is non-negative and (III) holds.
For (IV), let $\gamma_g$, $\gamma_h$, and $\gamma_\ell$ be the geodesics from $p_0$ to $g(p_0), h(p_0)$ and $\ell(p_0)$, respectively.
See Figure~\ref{fig:Lyndon_length}.
We consider the leaves intersecting $\gamma_g$ as ordered from closest to $p_0$ to furthest.
Note that if one leaf $L \in \mathcal{L}$ intersects both $\gamma_g$ and $\gamma_h$, then all earlier leaves intersecting $\gamma_g$ must also intersect $\gamma_h$.
This follows since $L$ has $g(p_0)$ and $h(p_0)$ on one side and the earlier leaves on the other, so all earlier leaves also have $g(p_0)$ and $h(p_0)$ on the same side.
Hence the first $s(g,h)$ leaves on $\gamma_g$ must also intersect $\gamma_h$ and the first $s(g,\ell)$ intersect $\gamma_\ell$.
Since $s(g,h) < s(g,\ell)$, all leaves intersecting both $\gamma_g$ and $\gamma_h$ also intersect $\gamma_\ell$ and so $s(g,h) \leq s(h, \ell)$.
Since $s(g,h) \not= s(g,\ell)$, the $s(g,h)+\nth{1}$ leaf intersecting $\gamma_g$ must intersect $\gamma_\ell$ but not $\gamma_h$.
Hence the $s(g,h)+\nth{1}$ leaf on $\gamma_\ell$ intersects $\gamma_g$ but not $\gamma_h$ and so $s(h,\ell) < s(g,h) + 1$ which gives (IV).
\begin{figure}
\centering
\includegraphics[scale=0.32]{Lyndon_length.pdf}
\caption{Left, diagram of property (IV) showing possible leaves of $\mathcal{L}$ (thick lines) and the geodesics from the point $p_0$. Right, diagram of property (V) showing the leaf $L$ (dashed line) intersecting all three geodesics.}
\label{fig:Lyndon_length}
\end{figure}
For property (V), assume that $g,h \in \pi_1(M)$ satisfy $s(g, h) + s(g^{-1}, h^{-1}) > N(g) = N(h)$.
Note that $s(g,h)$ counts the number of leaves of $\mathcal{L}$ that intersect both the geodesics $\gamma_g$ and $\gamma_h$ from $p_0$ to $g(p_0)$ and $h(p_0)$.
By applying the isometry $h$ first, we see that $s(g^{-1}, h^{-1})$ counts the number of leaves intersecting both the geodesic from $h(p_0)$ to $p_0$ and the geodesic from $h(p_0)$ to $h g^{-1}(p_0)$.
So these both count leaves intersecting the geodesic $\gamma_h$.
The total number of leaves of $\mathcal{L}$ intersecting $\gamma_h$ is $N(h) < s(g,h) + s(g^{-1}, h^{-1})$.
Let $\gamma_{hg^{-1},h}$ be the geodesic from $hg^{-1}(p_0)$ to $h(p_0)$.
By the pigeonhole principle, at least one leaf $L$ intersects all three of $\gamma_g$, $\gamma_h$, and $\gamma_{hg^{-1}, h}$.
In particular, $L$ can be taken to be the $s(g,h)$th leaf along $\gamma_g$ and $\gamma_h$.
Moreover, it is the $N(h) - s(g,h)+1$st leaf going backwards along $\gamma_h$ and therefore also the $N(h) - s(g,h)+1$st leaf going backward along $\gamma_{hg^{-1}, h}$.
Since $N(g) = N(h)$, it is also the $N(h) - s(g,h)+1$st leaf going backwards along $\gamma_g$.
Since the isometry $h g^{-1}$ takes $\gamma_g$ to $\gamma_{hg^{-1}, h}$, it takes $L$ to $L$.
Then Lemma~\ref{lemma:no_fixed_nonsplit_regions} implies that $hg^{-1}$ is trivial, i.e.\ $g = h$.
\end{proof}
If $x \in \pi_1(M)$ is such that $N(x^2) \leq N(x)$, then we call $x$ \emph{non-Archimedean}.
If $x$ is non-Archimedean, then we define
\[ \mathcal{N}_x := \curly{y \in \pi_1(M): N(x y^{-1}) \leq N(x) = N(y)} \cup \{e\} \]
where $e$ is the identity element.
By~\cite{lyndon}, $\mathcal{N}_x$ satisfies the following properties:
\begin{enumerate}
\item $\mathcal{N}_x$ is a group all of whose elements are non-Archimedean,
\item either $\mathcal{N}_x = \mathcal{N}_y$ or $\mathcal{N}_x \cap \mathcal{N}_y = \{e\}$, and
\item $y \in \mathcal{N}_x$ and $x \not= y$ implies that $s(x,y) = \frac 1 2 N(x) = \frac 1 2 N(y)$.
\end{enumerate}
The main result, Theorem 7.1, of~\cite{lyndon} states that for any group $G$ with a Lyndon length function, there exists a decomposition of $G$ as the free product of subgroups that are either
\begin{enumerate}
\item $\mathcal{N}_x$ for a non-Archimedean $x \in G$, or
\item an infinite cyclic subgroup generated by an Archimedean $x \in G$.
\end{enumerate}
Considering $N$ as a discrete approximation of $g \mapsto A(p_0, gp_0)$, the next lemma classifies analogs of the subgroups $\mathcal{N}_x$ using $A$ instead of $N$.
We will then refine $N$ repeatedly to approximate $A$ sufficiently.
\begin{lemma}
\label{lemma:nonarchimedean_free}
Suppose that $G \subset \pi_1(M)$ is such that for every non-trivial $g,h \in G$,
\[ A(p_0, gp_0) = A(p_0, hp_0). \]
Then $G$ is a free group.
\end{lemma}
\begin{proof}
Note that for any $g,h \in G$, $A(p_0, gp_0) = A(p_0, hp_0)$ implies that the leaf $\mathcal{F}_{gp_0}$ must have $p_0$ and $hp_0$ on the same side.
Therefore there must be a split region $U \subset \widetilde M_{split}$ such that $\widetilde M \setminus U$ has $p_0$, $gp_0$, and $h p_0$ in three separate connected components.
To see this, take $U$ to be the split region with the following three leaves of $\mathcal{F}$ in its boundary:
\begin{itemize}
\item $L_{p_0}$, the last leaf along the geodesic from $p_0$ to $gp_0$ which has both $gp_0$ and $hp_0$ on the same side,
(which is therefore also the last leaf from $p_0$ to $hp_0$ with this property),
\item $L_{gp_0}$, the last leaf from $gp_0$ to $p_0$ which has both $p_0$ and $hp_0$ on one side (which is therefore also the last leaf from $gp_0$ to $h p_0$ with this property), and
\item $L_{hp_0}$, the last leaf from $hp_0$ to $p_0$ which has both $p_0$ and $gp_0$ on one side (which is therefore also the last leaf from $hp_0$ to $gp_0$ with this property).
\end{itemize}
Then each of $p_0$, $gp_0$ or $hp_0$ is separated from $U$ by the corresponding boundary leaf.
Moreover, this is the unique $U$ to have this property, since any other split region is contained entirely in one component of $\widetilde M \setminus U$ and therefore has two of $p_0, gp_0,$ and $hp_0$ on one side.
Let $x = A(p_0, L_{p_0})$, $y = A(gp_0, L_{gp_0})$, and $z = A(hp_0, L_{hp_0})$.
By construction of the leaves, note that $A(p_0, gp_0) = x+y$, $A(p_0, hp_0) = x + z$ and $A(gp_0, hp_0) = y+z$.
By the assumption on $G$, and the fact that $A$ is invariant under $g$, we have $x+y = x+z = y+z$.
Therefore $x=y=z = \frac 1 2 A(p_0,gp_0)$.
So $L_{p_0}$ is independent of $g$ and $h$, and therefore $U$ is independent of $g$ and $h$, depending only upon $p_0$.
Replacing $p_0$ with $gp_0$, we see that $U$ is determined only by the orbit $Gp_0$.
Therefore the action of $G$ on $\widetilde M$ must stabilize $U$.
By Lemma~\ref{lemma:split_region_free}, $G$ must then be a free group.
\end{proof}
\begin{figure}
\includegraphics[scale=0.3]{nonArchimedean.pdf}
\caption{Examples of non-Archimedean isometries. Consider the upper half-plane model of $\mathbb{H}^2 \times \R$, modified to have non-split regions in gray. The isometry $g$ acts by translation. Using $\mathcal{V}$ containing just the shaded gray regions, we can see that $N_{\mathcal{V}}(g) = N_{\mathcal{V}}(g^2) = 2$ by tracing the geodesics from $p_0$ (solid lines). In this case the split region $U$ is fixed by $g$. If instead there were also non-split regions added at the dashed lines, then there would be no such fixed region, but $A(p_0, gp_0)$ no longer equals $A(p_0, g^2p_0)$. In this case, we then include these non-split regions in $\mathcal{V}$ so that $g$ is no longer non-Archimedean.}
\label{fig:nonArchimedean}
\end{figure}
\begin{proof}[Proof of Theorem~\ref{thm:pi1}]
The strategy is to apply the result of \cite{lyndon}, Theorem 7.1, described above, to $N_{\mathcal{V}}$, for an appropriate choice of $\mathcal{V}$.
We start with the smallest possible $\mathcal{V}$ and then add to it to further refine the free decomposition of $\pi_1(M)$, until all subgroups in the free decomposition are themselves free.
As motivation of the following strategy, see Figure~\ref{fig:nonArchimedean}, which gives examples of non-Archimedean isometries.
Let $\mathcal{V}$ be $\curly{gV: g \in \pi_1(M)}$ where $V$ is the component of $M_C$ that contains $p_0$.
Applying the theorem of Lyndon gives a decomposition of $G = \pi_1(M)$ as a free product of groups $G_1, \ldots, G_k$.
If $G_i$ is a subgroup which is the cyclic group generated by an Archimedean element, then $G_i$ is free since $\pi_1(M)$ is torsion-free.
If not, then $G_i$ is $\mathcal{N}_x$ for some non-Archimedean $x \in G$.
If such a $G_i$ satisfies $A(p_0, gp_0) = A(p_0, hp_0)$ for all non-trivial $g,h \in G_i$, then by Lemma~\ref{lemma:nonarchimedean_free} it too is free.
If not, then take $g, h \in G_i$ with $A(p_0, gp_0) \not= A(p_0, hp_0)$.
For each connected component $W$ of $M_C$, let $m$ be the number of its images under $\pi_1(M)$ which intersect the geodesic from $p_0$ to $gp_0$ and $n$ be the number that intersect the geodesic from $p_0$ to $hp_0$.
Since $A(p_0, gp_0) \not= A(p_0, hp_0)$, there must be at least one $W$ with $n \not= m$.
We then take $\mathcal{V}' = \mathcal{V} \cup \curly{gW: g \in \pi_1(M)}$.
Then $N_{\mathcal{V}'}(g) \not= N_{\mathcal{V}'}(h)$.
Therefore $G_i$ is not a non-Archimedean subgroup for $N_{\mathcal{V}'}$.
We can then apply the result of \cite{lyndon} again to $G_i$ using $N_{\mathcal{V}'}$ to get that either $G_i$ is free or a free product of at least two non-trivial subgroups.
We can proceed recursively on these subgroups if necessary.
This recursion must stop after finitely many steps, since $\pi_1(M)$ is finitely generated and is therefore the free product of at most finitely many subgroups, by the Grushko theorem.
Therefore $\pi_1(M)$ is a free product of free groups and so itself is free.
Conversely, any countable free group can be realized as the fundamental group of some $M$ with Ricci eigenvalues $(-1,-1,0)$.
First, recall that every countably generated free group is a subgroup of $F_2$, the free group with two generators.
Then it suffices to show that $\pi_1(M) = F_2$ is possible.
We can construct such a manifold by taking a subset $U$ of $\mathbb{H}^2 \times \R$ with four boundary components $P_1, \ldots, P_4$ that are totally geodesic planes.
Then take any two non-split regions $V_1, V_2$ with two boundary planes each and $a \rightarrow 0$ to infinite order on these boundary planes (and $h = 0$) constructed by Theorem~\ref{thm:irreducible_examples}.
Glue the two boundaries of $V_1$ to $P_1$ and $P_2$ and the two boundaries of $V_2$ to $P_3$ and $P_4$.
Then $M$ deformation retracts onto a wedge of two circles and $\pi_1(M) = F_2$.
\end{proof}
\begin{example}
There is a $\Z$ action on any metric of the form \eqref{eqn:g_curv_homog} if $f,h$ are periodic of the same period.
Then the $\Z$ action is just by translation in $x$ by the period of $f$ and $h$, and $M$ is locally irreducible everywhere if $f$ is never zero in a neighborhood.
\end{example}
\begin{example}
Note that the assumption in Theorem~\ref{thm:pi1} that $\widetilde M$ is irreducible is necessary.
For example, $\Z \times \Z$ acts on the product metric $\mathbb{H}^2 \times \R$ with one $\Z$ acting on each factor, or a surface group can act on the $\mathbb{H}^2$ factor.
\end{example}
\printbibliography
\end{document}
\section{Answers to Referee \#1}
\begin{enumerate}
\item `` {\it The authors recognized the importance of inverse problems in nano science, and with this work they certainly contribute to expanding knowledge about linking structure and property, in this case resolving the role of the concentration of impurities in the conductance fluctuations in the spectra, or in more general terms, transforming a noisy transmission response function into a mathematical representation.}''
{\bf Our answer:} We thank the referee for recognising the value of our contribution. It is worth reiterating that the goal of our manuscript was to demonstrate that the methodology in question is not exclusive to graphene but is useful for a wider range of applications and materials, which certainly adds value to this inversion tool.\\
\item ``{\it The approach to solving an inverse problem depends on scale. Namely, in this specific case the authors tackle a mathematically well-defined inverse problem, but they are dealing with a very limited, I would dare to say rather idealized case of a physical system. Of course I do not mean to undermine this work. However, in nanoscience one is dealing with large systems, containing $>=10^3$ atoms and often much more, thus different scale of the problem (e.g., Ann. Phys. 527, 187). Thus, in those cases this methodology is not applicable and other approaches were put forward. For example, extracting structural information from excitonic spectra of self-assembled semiconductor quantum dots (e.g., PRB 79, 075443, PRB 80, 035328, PRB 84, 235317, etc.).}''
{\bf Our answer:} The referee is correct in pointing out that the systems we presented contain $\leq 10^3$ atoms. However, the methodology can be scaled up to larger system sizes. The methodology is robust and works for a wide range of sizes, up to the order of the localization length. In fact, the method performs well in the diffusive, ballistic, and localized transport regimes. Figure~\ref{errorl} below shows the error ($\alpha$) as a function of device size ($L$) for an armchair edge graphene nanoribbon doped with an impurity concentration of $n=4\%$. For instance, for the case of $L=140$, our system contains $1960$ atoms, and for each $L$ size our calculations took roughly 30 minutes to 1 hour (depending on ensemble size) to complete. Nevertheless, we thank the referee for raising this issue, which is recurrent in every transport and electronic-structure computational problem at the nanoscale. Certainly, for more complex Hamiltonians, calculations can take longer and the use of parallel computing schemes may be required. For our particular inverse problem study, we suggest that our methodology could be paired with scalable transport calculations as proposed by Fan et al., Computer Physics Communications 185, 28 (2014). The latter presents a way of implementing and optimizing the linear-scaling Kubo-Greenwood formula for electronic quantum transport, allowing the authors to perform calculations on systems of $10^6$ atoms. Our misfit function methodology could be applied to a large ensemble of large-scale disordered systems treated as proposed by Fan et al. We included a brief discussion of this point in the new version of the manuscript.
\begin{figure}[!h]
\centering
\includegraphics{error_L.pdf}
\caption{Relative error of the inversion procedure ($\alpha$) as a function of device size $L$ (in unit cells) calculated for an armchair edge graphene nanoribbon with 4\% of impurities. For reference in terms of number of atoms, the system for the case of $L=140$ contains $1960$ atoms.}
\label{errorl}
\end{figure}
\\
A brief discussion about the feasibility of our methodology to treat large-scale nanoscale systems is included in the new version of the manuscript on page 12, last paragraph of section 3. The new piece of text appears as:\\\\
``It is worth mentioning that our methodology is not immune to the typical limitations regarding the investigation of large-scale systems containing over $10^3$ atoms. Yet, we tested our methodology with relatively large systems, e.g. nanoribbon systems with $\sim 2000$ atoms along their length. A single point calculation to determine the accuracy of the method took approximately 30-60 minutes to complete depending on the ensemble size. Depending on the computational resources available, this computing time can be significantly reduced. Certainly systems treated with more complex Hamiltonians, beyond the nearest-neighbour single-orbital tight-binding approximation, can take longer computational times. Nonetheless, our methodology can be paired with other scalable electronic transport approaches, e.g. as the one proposed by Fan et al. [..] in which an optimization for the linear-scaling Kubo-Greenwood formula is presented, allowing large-scale calculations in systems of $\sim 10^6$ atoms.''\\\\
The following citations are also included in our manuscript:\\
- V. Mlinar, ``Utilization of inverse approach in the design of materials over nano- to macro-scale,'' Ann. Phys. (Berlin) {\bf 527}, 187 (2015); \\
- Z. Fan, A. Uppstu, T. Siro, and A. Harju, ``Efficient linear-scaling quantum transport calculations on graphics processing units and applications on electron transport in graphene,'' Computer Physics Communications {\bf 185}, 28 (2014).\\
\item ``{\it In "Inverse Problems" journal, there have been many case-studies of inverse problems with, from a purely mathematical viewpoint, similar approach presented here (e.g., Inverse Probl. 29, 015006), however, this work by the level of physics insights deserves to be considered for publication in JPCM. I do recommend the authors to expand on the discussion regarding the application of inverse problems and put this work in a proper context.}''
{\bf Our answer:} We thank the referee for such a positive comment, which aligns with one particular aspect of our methodology: the possibility of decoding response functions of distinct physical natures. The work by Bao et al., Inverse Probl. 29, 015006 (2013), ``inverse-investigates'' mechanical properties of nanomaterials, which can certainly inspire our methodology to also decode mechanical responses of low-dimensional nanostructures. Just recently, part of our team demonstrated that the methodology presented in this manuscript can be adapted to decode the optical conductivity of disordered MoS$_2$ films, as presented in New J. Phys. 23, 073035 (2021). We acknowledge that the consolidation of a methodology can only be achieved by continuous testing of its capabilities; this manuscript serves as an important probing milestone for our methodology.\\
As recommended by the reviewer, we included the following paragraph in the new version of the manuscript (page 5):\\
``In this work, we put emphasis on extracting materials' composition from electronic transport analysis following the misfit function minimization scheme as illustrated in the case studies above. It is important to emphasize a general aspect of our methodology which is the possibility of decoding not only electronic conductance curves but also response functions of other physical natures, e.g. mechanical, optical, magnetic, etc. For instance, this methodology was recently adapted to decode the optical conductivity of disordered MoS$_2$ films as presented in Duarte et al. [..]. Another promising applicability of the methodology is its use in the context of mechanical response, in line with the approach proposed by Bao et al. [..].''\\\\
The following citations are also included in our manuscript:\\
- F. R. Duarte, S. Mukim, A. Molina-S\'anchez, T. G. Rappoport, and M. S. Ferreira, ``Decoding the DC and optical conductivities of disordered MoS$_2$ films: an inverse problem,'' New J. Phys. {\bf 23}, 073035 (2021);\\
- G. Bao, and X. Xu, ``An inverse random source problem in quantifying the elastic modulus of nanomaterials,'' Inverse Problems {\bf 29}, 015006 (2013).\\
\item ``{\it ...it would be useful to discuss in more detail, how realistic these predictions are and if one would be able link them to real-world counterparts. A brief discussion on this would be beneficial for the readers.}''
{\bf Our answer:} We thank the referee for the suggestion. We added to the new version of the manuscript a discussion highlighting the capability of our methodology to treat any target response function, including ones measured in the laboratory (a real-world situation). It is worth mentioning that we are currently involved in a study in which conductance measurements are being generated specifically for the purposes of inversion, which will enable us to carry out a detailed quantitative analysis of our methodology with experimental data as well. The illustrative analysis performed in this manuscript, with emphasis on numerical target conductance curves, is intended to further our understanding of how to control the main methodology parameters to achieve the desired accuracy and performance. The discussion added on page 5 reads:\\
``Note also that our misfit function methodology is not limited to conductance curves derived from numerical methods; our method is applicable to any energy-dependent response function that can serve as a ``target'' function for the misfit function minimization scheme. Typical experimental data that could be used with this methodology could come from Hall-bar experiments, in our case, conducted in 2D nanostructures. As long as the main methodology parameters (i.e., bandwidth, size of the ensemble, transport regime, and Hamiltonian flavor) are set to achieve the desired accuracy, a convergence to a minimum in the misfit function can be expected for any fluctuating energy-dependent response function obtained from experiments or numerical means.''\\\\
\end{enumerate}
\section{Answers to Referee \#2}
\begin{enumerate}
\item ``{\it The authors study how the degree of disorder in a nanostructure can be determined from the energy resolved transmission. The manuscript on its own, is interesting, well written and should be suitable for publication. However, the manuscript is largely identical to the paper published by the authors in Phys. Rev. B 102, 075409 (2020). From my point of view, the authors just applied their method on some other systems, CNTs and hBN nanoribbons, without obtaining any additional insight that would justify another paper.}''
{\bf Our answer:} We thank the referee for acknowledging that the manuscript is interesting, well written and worthy of publication. Regarding the referee's view that this is not too dissimilar to our original publication (Phys. Rev. B 102, 075409 (2020)), we must emphasise that the objective of the paper is to demonstrate that the methodology works for materials besides graphene. Despite our claims of generality, the method had previously been criticised for being suitable for graphene only, given the linear dispersion relation that is characteristic of that material. With that in mind, we have presented in this manuscript a few other cases, ranging from simple square barriers to h-BN, which possesses a more elaborate electronic structure. To consolidate a new methodology in a way that it is adopted by the wider community, it is essential to put all its capabilities to the test and to demonstrate where it is applicable. The wider its applicability, the more interest it is likely to attract, and that is precisely the purpose of the current manuscript, {\it i.e.}, to demonstrate the versatility, robustness and generality of our method.\\
\item ``{\it The authors use the transmission T and conductance gamma. However, both quantities are essentially identically (apart form a factor $gamma0= e^2/h$). Only one of these quantities should be used and defined in the manuscript to improve its readability.}''
{\bf Our answer:} We thank the referee for the remark. To improve readability of our manuscript as suggested by the referee, we opted to present our results in terms of $T$ for electronic transmission.\\
\item ``{\it The left hand side of Equation (4) depends on the disorder concentration n. The dependences of the right hand side on n should be indicated explicitly in the equation and not only in the text below.}''
{\bf Our answer:} We fixed equation (4) in the new version of the manuscript to make the dependence on $n$ explicit; it now appears as:
\begin{equation}
\chi(n)={1 \over {\cal E}_+ - {\cal E}_-}\,\int_{{\cal E}_-}^{{\cal E}_+} dE \, (T(E) - \langle T(E, n) \rangle)^2\,\,,
\label{misfit}
\end{equation}
\vspace{0.5cm}
\item ``{\it The comment on page 7, line 10: "Nonetheless, the integration in equation (4) may not be necessary..." is somehow trivial as integrations is essentially summation.}''
{\bf Our answer:} We rearranged this sentence in the new version of the manuscript and it now reads:
``The integration in equation (4) can hence be replaced with a discrete sum without major impact on the accuracy of the inversion procedure.''
\end{enumerate}
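To make the discrete-sum evaluation of the misfit function concrete, a minimal Python sketch is given below. The ensemble-averaged curve $\langle T(E,n)\rangle = 1/(1+nE)$ used here is a purely hypothetical toy family chosen for illustration, not data from the manuscript; the point is only to show the mechanics of minimizing $\chi(n)$ over a grid of candidate concentrations.

```python
import numpy as np

def misfit(T_target, T_mean):
    """Discrete-sum version of equation (4): energy-averaged squared deviation
    between the target transmission curve and the ensemble-averaged one."""
    return np.mean((np.asarray(T_target) - np.asarray(T_mean)) ** 2)

energies = np.linspace(0.1, 2.0, 100)

def mean_T(n):
    """Hypothetical ensemble-averaged transmission curve (toy model only)."""
    return 1.0 / (1.0 + n * energies)

n_true = 0.3
T_target = mean_T(n_true)            # noiseless target generated at n_true

n_grid = np.array([0.1, 0.2, 0.3, 0.4, 0.5])
chi = np.array([misfit(T_target, mean_T(n)) for n in n_grid])
n_star = n_grid[np.argmin(chi)]      # recovered concentration: 0.3
```

With a noiseless target the misfit vanishes exactly at the true concentration; with fluctuating input curves the minimum remains well defined but finite.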
\section{List of changes:}
\begin{itemize}
\item New citations included in the new version of the manuscript are:\\
- V. Mlinar, ``Utilization of inverse approach in the design of materials over nano- to macro-scale,'' Ann. Phys. (Berlin) {\bf 527}, 187 (2015);\\
- Z. Fan, A. Uppstu, T. Siro, and A. Harju, ``Efficient linear-scaling quantum transport calculations on graphics processing units and applications on electron transport in graphene,'' Computer Physics Communications {\bf 185}, 28 (2014);\\
- F. R. Duarte, S. Mukim, A. Molina-S\'anchez, T. G. Rappoport, and M. S. Ferreira, ``Decoding the DC and optical conductivities of disordered MoS$_2$ films: an inverse problem,'' New J. Phys. {\bf 23}, 073035 (2021);\\
- G. Bao, and X. Xu, ``An inverse random source problem in quantifying the elastic modulus of nanomaterials,'' Inverse Problems {\bf 29}, 015006 (2013).\\
\item A brief discussion on the generality and context of the methodology was added on page 5:\\
``In this work, we put emphasis on extracting materials' composition from electronic transport analysis following the misfit function minimization scheme as illustrated in the case studies above. It is important to emphasize a general aspect of our methodology which is the possibility of decoding not only electronic conductance curves but also response functions of other physical natures, e.g. mechanical, optical, magnetic, etc. For instance, this methodology was recently adapted to decode the optical conductivity of disordered MoS$_2$ films as presented in Duarte et al. [..]. Another promising applicability of the methodology is its use in the context of mechanical response, in line with the approach proposed by Bao et al. [..].''\\
\item A discussion on the applicability of our methodology beyond numerical target functions, e.g. treatment of experimental data, is also added on page 5:\\
``Note also that our misfit function methodology is not limited to conductance curves derived from numerical methods; our method is applicable to any energy-dependent response function that can serve as a ``target'' function for the misfit function minimization scheme. Typical experimental data that could be used with this methodology could come from Hall-bar experiments, in our case, conducted in 2D nanostructures. As long as the main methodology parameters (i.e., bandwidth, size of the ensemble, transport regime, and Hamiltonian flavor) are set to achieve the desired accuracy, a convergence to a minimum in the misfit function can be expected for any fluctuating energy-dependent response function obtained from experiments or numerical means.''\\
\item Aligned with the new citations above, a brief discussion in terms of feasibility of the methodology was added on page 12, last paragraph of section 3, which says:\\
``It is worth mentioning that our methodology is not immune to the typical limitations regarding the investigation of large-scale systems containing over $10^3$ atoms. Yet, we tested our methodology with relatively large systems, e.g. nanoribbon systems with $\sim2000$ atoms along their length. A single point calculation to determine the accuracy of the method took approximately 30-60 minutes to complete depending on the ensemble size. Depending on the computational resources available, this computing time can be significantly reduced. Certainly systems treated with more complex Hamiltonians, beyond the nearest-neighbour single-orbital tight-binding approximation, can take longer computational times. Nonetheless, our methodology can be paired with other scalable electronic transport approaches, e.g. as the one proposed by Fan et al. [..] in which an optimization for the linear-scaling Kubo-Greenwood formula is presented, allowing large-scale calculations in systems of $\sim 10^6$ atoms.''\\
\item Electronic transmission results appearing in the figures are expressed using the notation $T$. Figure captions 1 and 2 were altered in accordance to the new notation. Equation (6) was also altered in accordance to the new notation.\\
\item Equation (4) in the manuscript was corrected to include the explicit dependence on the concentration of impurities $n$.\\
\item Sentence on page 7, line 10, was altered to: ``The integration in equation (4) can hence be replaced with a discrete sum without major impact on the accuracy of the inversion procedure.''
\end{itemize}
\end{document}
\section{Introduction}
It is a simple quantum-mechanical problem to find the electronic conductance of a nanoscale device by directly solving the Schr\"odinger equation that governs the wave function of the quantum system under study. Obviously, this is the case if and only if the underlying Hamiltonian is known, i.e., if all scattering sources and interactions are fully specified. Performing the same task in reverse is significantly more challenging. For example, assuming that the conductance of the system is known, how can one infer the Hamiltonian components from that information alone? To make matters worse, what if the system is made of a heavily disordered material? Questions of this type are generally labeled Inverse Problems (IP) and are classified as those which attempt to obtain, from a set of observations, the causal factors that generated them in the first place. IP are integral parts of classical visualization tools \cite{medical, fwi, tromp2008spectral, sonar} but are far less common in the quantum realm. The literature on the field of Quantum IP (QIP) is primarily focused on the fundamentals of inversion processes, e.g., whether a problem is ill-posed \cite{Lassas2008}, and whether solutions are unique and how stable they are \cite{PhysRevLett.111.090403}. Following a different scope, studies of nanomaterials and their physical properties could benefit from applications of QIP, since they involve investigating structures for which the underlying Hamiltonians are not always known \cite{PhysRevX.8.031029,inv-CMP,tsymbal,jasper,gianluca,PhysRevLett.97.046401,Franceschetti1999,liping,plasma}.
Identifying the precise Hamiltonian that generates a specific observable is a difficult task. In general, it consists of solving the Schr\"odinger equation with a Hamiltonian containing one (or more) parameter(s) that must be varied until the solution closely matches the original observation. Because the parameter phase space is so large, finding the optimum parameter set can be computationally demanding. With the advance of high-performance computing and the various optimization algorithms available, distinct methods for probing the parameter phase space in electronic structure problems can be used. Neural-network-based search engines \cite{jensen,yazyev}, genetic algorithms \cite{Zhang2013, Luo, Vmlinar}, and, more recently, machine-learning strategies \cite{Vargas2019, kyriienko, anatole, fazli, burak, collins, Xia2018, dral, melko} have been proposed and, with different degrees of success, can speed up the search for the ``inverted'' solution. As a result, simulations are starting to have an impact in reducing the time and cost associated with materials design, especially in high-throughput studies of material groups \cite{Ziletti2018, rajan, suram, Koinuma2004, choi, nardelli, PhysRevLett.108.068701, Yan2015,Fischer2006, Gautier2015}. These are large-scale simulations that generate volumes of data with the intention of identifying optimal combinations that can subsequently be used as candidates in an exploratory search for new materials. These methods certainly serve their purpose of determining the optimum region of parameter space that fits a given observable. Nonetheless, they can be less intuitive to implement due to their data-science scope and the somewhat ``black-box'' approach embedded in their algorithms \cite{Schmidt2019}.
A significant portion of the electronic structure community studying low-dimensional materials is certainly familiar not only with the Hamiltonian construction in the Schr\"odinger equation but also with its reciprocal form, involving equations of motion for the inverse of the Hamiltonian operator, which can be identified as the Green's function. Green's function-based methods \cite{abcd} have played a central role as theoretical tools for studying the electronic and transport properties of nanoscale systems. These are versatile methods that can be used to obtain the conductance of complex low-dimensional materials, including disordered ones.
A mathematically transparent inversion technique capable of extracting structural and compositional information from a disordered quantum device by looking at its conductance signatures has recently been proposed \cite{shardul}. Using the energy-dependent two-terminal conductance as the input function, the reported inversion method identifies the concentration of impurities randomly distributed within a graphene nanoribbon (GNR) host. Despite claims that the method is general, robust, and stable, evidence has so far been limited to nitrogen-doped GNRs \cite{shardul}. In this manuscript, we put the generality of this inversion methodology to the test by implementing it for a variety of systems, some described by extremely simple electronic structure models, others by more realistic ones. Furthermore, the inversion accuracy is also tested and shown to correctly identify the impurity concentration in the systems studied in this work. This suggests that the method can indeed be employed as a useful characterisation tool for systems composed of different materials and of different dimensionalities. The method relies on working with an objective function, which we call the `misfit' function, that can be built directly from conductance calculations. A remarkable advantage of the method is its systematic treatment of the misfit function, without the need for extra adjustments or parameter tuning to perform the optimization search for the concentration and characteristics of the impurities. This article starts by illustrating, in the next section, the inversion methodology applied to a very simple one-dimensional atomic chain.
In subsequent sections, we demonstrate how this inversion tool can be implemented to extract valuable information about nanoscale systems acting as hosts of chemical impurities; hosts include carbon nanotubes (CNTs), GNRs, and hexagonal boron nitride (hBN) nanoribbons, on which foreign atoms are randomly distributed over their atomic structure.
\section{Methodology}
To demonstrate the versatility of this inversion methodology, we separate the cases we study into two groups: one with systems possessing very simple electronic structures and another in which the materials are described by more realistic band structures. The purpose of separating the case studies is twofold: (i) it is much easier to introduce the inversion procedure with simple systems, since it helps prevent artifacts arising from complex electronic structure models and keeps the focus entirely on the inversion method itself; (ii) showing that the inversion method works for systems regardless of their electronic-structure details and of their dimensionality is strong evidence of the robustness of the methodology. With that in mind, we will present the inversion method with the first group, consisting of three different one-dimensional systems, all possessing rather simple electronic structures. The first system consists of an electron gas in the presence of short-range scatterers represented by randomly distributed delta functions. In this case, the delta functions act as impurities in an otherwise homogeneous potential produced by an infinitely long host material. Mathematically, the single-particle Hamiltonian is defined by the following potential
\begin{equation}
V_1(x) = \sum_{j=1}^N \lambda \, \delta(x-x_j) \,\,,
\label{delta-potential}
\end{equation}
where $j$ is an integer running from $1$ up to the total number of scatterers $N$, $x$ is the spatial variable, $x_j$ gives the randomly selected position of the $j^{th}$ scatterer, and $\lambda$ is a positive constant that reflects the scattering strength of the impurities, assumed to be identical for simplicity. The $N$ random values of $\{x_j\}$ are constrained to lie within a region of length $L$ and fully determine the Hamiltonian as well as the corresponding transmission coefficient $T(E)$, $E$ being the energy. Note that the integer number of impurities $N$ and the non-integer concentration $n=N/L$ are considered equivalent control parameters and will be used interchangeably throughout this article.
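To make the setup concrete, the transmission through the potential of equation (\ref{delta-potential}) can be computed with standard $2\times 2$ transfer matrices. The sketch below is ours, not the implementation of Ref.~\cite{shardul}; the function name and the choices $\hbar^2/2m=1$ (so that $k=\sqrt{E}$) and $\lambda=0.5$ are illustrative. One unimodular matrix is accumulated per scatterer in spatial order, and $T(E)=1/|M_{22}|^2$.

```python
import numpy as np

def delta_chain_transmission(E, positions, lam=0.5):
    """Transmission through randomly placed delta scatterers.

    Units with hbar^2/2m = 1, so the wavevector is k = sqrt(E).
    Each scatterer contributes a unimodular 2x2 transfer matrix; for the
    ordered product M, the transmission is T = 1/|M_22|^2.
    """
    k = np.sqrt(E)
    u = lam / (2j * k)                        # dimensionless scattering strength
    M = np.eye(2, dtype=complex)
    for x0 in np.sort(np.asarray(positions)):  # multiply in spatial order
        Mj = np.array([[1 + u, u * np.exp(-2j * k * x0)],
                       [-u * np.exp(2j * k * x0), 1 - u]])
        M = Mj @ M
    return float(1.0 / abs(M[1, 1]) ** 2)

# One disorder realisation: N = 5 scatterers inside a region of length L = 100
rng = np.random.default_rng(0)
xs = rng.uniform(0.0, 100.0, 5)
T = delta_chain_transmission(1.0, xs)          # lies in (0, 1]
```

For a single delta scatterer this reduces to the textbook result $T = [1 + (\lambda/2k)^2]^{-1}$, independent of the scatterer's position, which is a convenient sanity check.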
In the second case study considered here, we relax the short-range character of our scatterers by replacing the delta-function potentials with identical square barriers of width $d$ and height $V_0$. In other words, the new Hamiltonian now has a potential described by
\begin{equation}
V_2(x) = \sum_{j=1}^N V_0 \, [\Theta(x-x_j) - \Theta(x-x_j-d)] \,\,,
\label{barrier-potential}
\end{equation}
where $\Theta$ is the Heaviside step function. Once again, the barrier positions are determined by randomly selected values of $\{x_j\}$, with the additional constraint that barriers must not overlap, {\it i.e.}, no two values $x_j$ and $x_{j^\prime}$ may be less than a distance $d$ apart.
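For the square barriers of equation (\ref{barrier-potential}), the same transfer-matrix bookkeeping applies, with one step matrix per barrier edge. The sketch below is our illustrative code (again with $\hbar^2/2m=1$); a complex wavevector inside the barrier handles both tunnelling ($E<V_0$) and over-barrier ($E>V_0$) energies.

```python
import numpy as np

def _interface(k_in, k_out, x):
    """2x2 transfer matrix across a potential step at position x."""
    r = k_in / k_out
    return 0.5 * np.array(
        [[(1 + r) * np.exp(1j * (k_in - k_out) * x),
          (1 - r) * np.exp(-1j * (k_in + k_out) * x)],
         [(1 - r) * np.exp(1j * (k_in + k_out) * x),
          (1 + r) * np.exp(-1j * (k_in - k_out) * x)]])

def barrier_chain_transmission(E, positions, d=1.0, V0=1.0):
    """T(E) for non-overlapping square barriers, hbar^2/2m = 1."""
    k = np.sqrt(E + 0j)
    q = np.sqrt(E - V0 + 0j)       # imaginary inside the barrier when E < V0
    M = np.eye(2, dtype=complex)
    for x0 in np.sort(np.asarray(positions)):
        # entry step at x0, exit step at x0 + d
        M = _interface(q, k, x0 + d) @ _interface(k, q, x0) @ M
    return float(1.0 / abs(M[1, 1]) ** 2)
```

For a single barrier with $E<V_0$ this reproduces the textbook tunnelling formula $T=[1+V_0^2\sinh^2(\kappa d)/4E(V_0-E)]^{-1}$ with $\kappa=\sqrt{V_0-E}$.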
The third and final case departs from a free-electron-gas description of the electronic structure of the host material. Still keeping a simplified picture of the electronic structure, we move on to a lattice representation of the Hamiltonian in one dimension, captured by the tight-binding picture of an infinitely long chain of atoms. With no loss of generality, we adopt the following Hamiltonian
\begin{equation}
{\hat H}_3 = \sum_j ( \, \vert j \rangle t \langle j+1 \vert + \vert j \rangle t \langle j-1 \vert \, ) + \sum_{j^\prime} \vert j^\prime \rangle \lambda_{j^\prime} \langle j^\prime \vert \,\,,
\label{tb}
\end{equation}
which contains an off-diagonal hopping term and a diagonal term describing the potential on all sites of the system. In equation (\ref{tb}), $j$ and $j^\prime$ are integers that label the atomic sites, $\vert j \rangle$ represents an electronic orbital centred at site $j$ and $t$ is the electronic hopping between nearest-neighbour sites only. $\lambda_{j^\prime}$ represents the on-site potential of substitutional impurities that are once again randomly distributed within a region of size $L$. It is worth mentioning that impurities in the form of adatoms (adsorption) can also be considered in the Hamiltonian in equation (\ref{tb}) without fundamental changes to our approach.
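A minimal numerical route to $T(E)$ for the chain of equation (\ref{tb}) attaches the exact self-energies of pristine semi-infinite leads to the disordered segment and applies the Landauer formula $T=\Gamma_L\Gamma_R|G_{1L}|^2$. The sketch below is our illustrative implementation (single orbital, nearest-neighbour hopping only, valid inside the band $|E|<2|t|$); it is not the production code of Ref.~\cite{shardul}.

```python
import numpy as np

def tb_transmission(E, onsite, t=1.0):
    """Landauer transmission through a disordered tight-binding segment.

    onsite: on-site energies lambda_j inside the scattering region; the
    semi-infinite leads are pristine (on-site energy 0, hopping t).
    Valid for band energies |E| < 2|t|.
    """
    onsite = np.asarray(onsite, dtype=complex)
    L = len(onsite)
    # Retarded surface Green's function of a semi-infinite pristine chain
    g = (E - 1j * np.sqrt(4 * t**2 - E**2 + 0j)) / (2 * t**2)
    sigma = t**2 * g                                # lead self-energy
    H = np.diag(onsite) + np.diag(t * np.ones(L - 1), 1) \
                        + np.diag(t * np.ones(L - 1), -1)
    H[0, 0] += sigma                                # left lead
    H[-1, -1] += sigma                              # right lead
    G = np.linalg.inv(E * np.eye(L) - H)
    gamma = -2.0 * np.imag(sigma)                   # Gamma_L = Gamma_R here
    return float(gamma**2 * abs(G[0, -1])**2)

# Impurities with lambda = 0.5 t on 8 of L = 100 randomly chosen sites
rng = np.random.default_rng(0)
onsite = np.zeros(100)
onsite[rng.choice(100, 8, replace=False)] = 0.5
T = tb_transmission(0.3, onsite)
```

A pristine segment (all $\lambda_j=0$) yields $T=1$ at every band energy, since the device is then indistinguishable from more lead.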
All three cases introduced above can be characterised by an infinitely long host containing a number $N$ of randomly distributed impurities confined to within a finite-sized region and described by the respective equations (\ref{delta-potential}), (\ref{barrier-potential}), and (\ref{tb}). With this general setup, one can obtain the transmission coefficient across all three systems following a method of choice, e.g., Green functions \cite{abcd,rocha1,rocha2,lawlor1} or scattering matrices \cite{datta_1995}. The top three panels of Fig.~\ref{fig-simple} depict the electronic transmission $T(E)$ for a specific configuration of each one of these simple cases. It is worth noting that, in each of the three cases, the exact number of impurities and their respective locations defining the parent Hamiltonian that generated the $T(E)$ curve of Fig.~\ref{fig-simple} are set aside and are not used in any part of the subsequent inversion calculation. In other words, $T(E)$ serves as the only input function from which the inversion will take place. The task at hand is a QIP that aims to find structural and composition information about the scattering impurities using the seemingly noisy curves shown in the top panels of Fig.~\ref{fig-simple} as the only starting point. The transmission coefficient is a rather representative quantity to use as an input function because it has several parallels in real materials that go beyond this simple illustrative model. For example, the conductance of a metal is commonly expressed in terms of $T(E)$, following the Landauer-B\"uttiker formalism \cite{datta_1995}. Likewise, other quantities such as the thermal and optical conductivities can also be expressed in a spectral representation that resembles the transmission coefficient \cite{tuovinen}. In this work, we put the emphasis on extracting a material's composition from electronic transport analysis following the misfit function minimization scheme as illustrated in the case studies above.
It is important to emphasize a general aspect of our methodology, namely the possibility of decoding not only electronic conductance curves but also response functions of other physical natures, e.g., mechanical, optical, or magnetic. For instance, this methodology was recently adapted to decode the optical conductivity of disordered MoS$_2$ films, as presented by Duarte et al. \cite{duarte2021}. Another promising application of the methodology is in the context of mechanical response, in line with the approach proposed by Bao et al. \cite{bao2013}. Note also that our misfit function methodology is not limited to conductance curves derived from numerical methods; it is applicable to any energy-dependent response function that can serve as a ``target'' function for the misfit function minimization scheme. Typical experimental data that could be used with this methodology could come from Hall-bar experiments, in our case conducted in 2D nanostructures. As long as the main methodology parameters (i.e., bandwidth, size of the ensemble, transport regime, and Hamiltonian flavour) are set to achieve the desired accuracy, convergence to a minimum of the misfit function can be expected for any fluctuating energy-dependent response function obtained from experiments or numerical means.
As previously alluded to, a naive approach would be to compare the input function with the transmission coefficient of every single possible disorder configuration, but the prohibitively large number of combinations renders this strategy impracticable.
\begin{figure}
\centering
\includegraphics[height=0.57\columnwidth,right]{final_fig1_up.png}
\caption{(first row) Transmission coefficient ($T$), (second row) misfit function ($\chi$), and (third row) error ($\alpha$) as a function of representative physical quantities in disordered systems: $E$ is the energy expressed in units of the energy scale $t$, $N$ is the number of scatterers, $n$ is the impurity concentration expressed in $\%$, B.W. is the normalized bandwidth used for integration in the inversion procedure, and $M$ is the total number of disordered configurations in the studied ensemble. These results were obtained for three case-study systems: (first column) electron gas with delta functions representing scatterers, (second column) electron gas with identical square barriers of width $d=1$ and height $V_0=1$ in arbitrary units, (third column) one-dimensional (infinite) atomic chain. In all cases, $t=1$. For the third case (atomic chain), the transmission is scaled in terms of the conductance quantum, $\Gamma_0 = 2e^2/h$, with $e$ being the elementary charge and $h$ the Planck constant. Panel g) also contains the transmission as a function of energy for the pristine atomic chain (red curve). Impurities in the doped case were modelled with an on-site energy of $\lambda = 0.5t$ and were spread within a length of $L=100$ atomic layers. Transmission results in the first row are for the target systems in which the misfit function locates a minimum: (a,b) electron gas with $N_{min}=5$ delta functions, (d,e) electron gas with $n_{min}=4\%$ square-barrier scatterers, and (g,h) 1D atomic chain with $n_{min}=8\%$ impurities.}
\label{fig-simple}
\end{figure}
Fortunately, the inversion procedure introduced in \cite{shardul} provides a far more efficient technique, which combines configurationally-averaged (CA) values of the transmission coefficient with the ergodic principle. More specifically, the ergodic hypothesis assumes that a running average over a continuous parameter upon which the transmission coefficient depends is equivalent to sampling different impurity configurations; this equivalence can be used to assist the inversion procedure. In mathematical terms, this is done through the so-called misfit function defined as \cite{shardul}
\begin{equation}
\chi(n)={1 \over {\cal E}_+ - {\cal E}_-}\,\int_{{\cal E}_-}^{{\cal E}_+} dE \, (T(E) - \langle T(E, n) \rangle)^2\,\,,
\label{misfit}
\end{equation}
where the integration limits ${\cal E}_-$ and ${\cal E}_+$ are arbitrary energy values. $\chi$ can be interpreted as a functional that measures the deviation between the input transmission of the parent configuration, $T(E)$, and its CA counterpart $\langle T(E,n) \rangle$, defined as
\begin{equation}
\langle T(E,n) \rangle = {1 \over M} \sum_{m=1}^M T_m(E) \,\,,
\label{CA}
\end{equation}
where $M$ is the total number of different disordered configurations taken into account and $T_m(E)$ is the corresponding transmission coefficient for each of these configurations, labelled by the integer $m$. While both $T(E)$ and $\langle T(E,n) \rangle$ in the integrand above are functions of energy, the latter is also a function of the impurity concentration $n$. When plotted as a function of $n$, the misfit function displays a very distinctive minimum at a value that corresponds to the real impurity concentration. This can be seen in the middle panels of Fig.~\ref{fig-simple}. The concentration $n_{min}$ that minimises the misfit function coincides with the real concentration $n_i$ in all three cases, indicating that the inversion is successful. Also included in the figure (bottom-row panels) is the relative error $\alpha = \vert n_{min} - n_i\vert/n_i$. The error $\alpha$ is calculated over several different parent configurations in order to achieve statistical significance.
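A compact sketch of the pipeline formed by equations (\ref{misfit}) and (\ref{CA}), using the delta-scatterer model as the parent system, is given below. All names and parameter values ($\lambda=0.5$, $L=100$, $M=100$ samples, 40 energy points) are illustrative choices of ours; note that the parent positions are used only to generate the input $T(E)$ and are then discarded, exactly as described above.

```python
import numpy as np

rng = np.random.default_rng(1)

def transmission(E, positions, lam=0.5):
    """T(E) for delta scatterers via 2x2 transfer matrices (hbar^2/2m = 1)."""
    k = np.sqrt(E)
    u = lam / (2j * k)
    M = np.eye(2, dtype=complex)
    for x0 in np.sort(np.asarray(positions)):
        M = np.array([[1 + u, u * np.exp(-2j * k * x0)],
                      [-u * np.exp(2j * k * x0), 1 - u]]) @ M
    return 1.0 / abs(M[1, 1]) ** 2

def spectrum(positions, energies):
    return np.array([transmission(E, positions) for E in energies])

def misfit(T_target, energies, N, L=100.0, M_samples=100):
    """Eq. (misfit) with the energy integral taken as a discrete sum over the
    grid, and the configurational average of Eq. (CA) over M_samples draws."""
    T_avg = np.mean([spectrum(rng.uniform(0.0, L, N), energies)
                     for _ in range(M_samples)], axis=0)
    return np.mean((T_target - T_avg) ** 2)

# Parent configuration: its positions serve only to generate the input T(E)
energies = np.linspace(0.2, 2.0, 40)
parent = rng.uniform(0.0, 100.0, 5)            # hidden N_true = 5
T_target = spectrum(parent, energies)

chi = {N: misfit(T_target, energies, N) for N in range(1, 10)}
N_min = min(chi, key=chi.get)                  # candidate impurity number
```

In practice, $M$ and the number of energy points control the statistical quality of $\langle T(E,n)\rangle$ and hence how sharply $\chi$ dips at the true concentration.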
One question that naturally arises is how relevant the energy integral of equation (\ref{misfit}) is. The energy integration plays a significant part in identifying the real impurity concentration because the fluctuations of $T(E)$ with energy for a single sample are equivalent to sample-to-sample fluctuations at a fixed energy. Therefore, the energy integration is similar to vastly augmenting the number of disordered configurations taken into account. In fact, if we were to use equation (\ref{misfit}) with ${\cal E}_-={\cal E}_+$, the misfit function would not have any distinctive feature and would not lead to a successful inversion \cite{shardul}. However, even if the integration limits span only a small fraction of the relevant energy range, the misfit function acquires a very distinctive shape with minima located at the correct concentrations.
Some further insights can be drawn from Fig.~\ref{fig-simple}. Fig.~\ref{fig-simple}(c) plots the inversion error $\alpha$ as a function of the normalized bandwidth, defined as B.W.$=({\cal E}_+ - {\cal E}_-)/Z$, with $Z$ being the total bandwidth of the electron gas system. Note that $\alpha$ is large for very narrow energy windows. When the energy window for integration is broadened beyond $20\%$ of the bandwidth, $\alpha< 0.1$ in the case of $N=5$ delta-function potentials. This indicates that larger energy windows allow the misfit function to capture more of the transmission features, thereby reducing the error of the inversion procedure. The integration in equation (\ref{misfit}) can hence be replaced with a discrete sum without major impact on the accuracy of the inversion procedure. By reducing the number of energy points ${\cal N}_\epsilon$ involved in generating the misfit function, there is a point below which the inversion accuracy drops quite significantly, but this threshold is normally at very low levels (${\cal N}_\epsilon < 50$), which enables calculations with systems for which the numerical complexity is significantly higher \cite{shardul}, as we will demonstrate in the next section.
Regarding the error study in Fig.~\ref{fig-simple}(f) and the number of configurations $M$ needed in the CA values, the backbone of our inversion procedure is that fluctuations in the conductance ($\Gamma$) contain very little system-specific information, whereas the average conductance depends smoothly on the variable of interest. In this sense, the higher the number of disorder realizations $M$, the smaller the fluctuations in $\langle \Gamma(E,n) \rangle$. Universal conductance fluctuations (UCF) can be used to estimate the values of $M$ required for sufficiently accurate results in the inversion procedure. Both for diffusive \cite{lee1985universal,lee1987universal} and chaotic ballistic systems \cite{Baranger1994}, ${\rm var}(\Gamma) \approx 1$ is the main fingerprint of UCF. Hence, the CA relative statistical error is expected to scale as $[\sqrt{M}\, \langle \Gamma(E,n) \rangle]^{-1}$. To bring the degree of fluctuations to an acceptable level that yields statistically significant results, we estimate that $M$ should be of the order of $10^3$ \cite{Alhassid2000,lopez2014modeling}. As seen in Fig.~\ref{fig-simple}(f), the error is only guaranteed to stabilize at $M\sim 1000$. Depending on the complexity of the system, higher values of $M$ may be required to achieve statistical significance.
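The $[\sqrt{M}\,\langle\Gamma\rangle]^{-1}$ scaling can be checked with a few lines of statistics. The snippet below is a toy model of ours, taking ${\rm var}(\Gamma)\approx 1$ as given and an arbitrary mock mean conductance; it only demonstrates that a $100\times$ larger ensemble shrinks the CA statistical error roughly tenfold.

```python
import numpy as np

rng = np.random.default_rng(0)

# UCF fingerprint: var(Gamma) ~ 1 regardless of sample details. The relative
# statistical error of the configurational average then decays as
# [sqrt(M) * <Gamma>]^-1, so 100x more configurations buys a ~10x smaller error.
MEAN_G = 4.0                        # mock average conductance (illustrative)

def rel_error(M, trials=1000):
    """Std of the M-sample ensemble mean, relative to <Gamma>."""
    samples = MEAN_G + rng.normal(0.0, 1.0, size=(trials, M))
    return samples.mean(axis=1).std() / MEAN_G

e_100, e_10000 = rel_error(100), rel_error(10000)
ratio = e_100 / e_10000             # close to sqrt(10000/100) = 10
```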
Finally, Fig.~\ref{fig-simple}(i) shows the inversion error $\alpha$ for an infinite 1D atomic chain as a function of impurity concentration $n$. Substitutional impurities were spread randomly over a length of $L=100$ unit cells. One can see that $\alpha$ is relatively small for concentrations below 10\%, clearly indicating that our inversion method is very reliable in dilute regimes. To test the inversion procedure, we defined Hamiltonians with full knowledge of the concentration as well as the impurity positions. By increasing the concentration of scatterers spread over a finite length, one starts forming pairs or atomic clusters of impurities, which the method is not equipped to distinguish from single scatterers. For this reason, we see a reduction in the accuracy of our method ($\alpha$ increases) as the impurity concentration increases.
\section{Results and discussion}
\begin{figure}
\centering
\includegraphics[height=0.57\columnwidth,right]{figure2_new_up.pdf}
\caption{
(first row) Electronic transmission ($T$), (second row) misfit function ($\chi$), and (third row) error ($\alpha$) as a function of representative physical quantities in disordered systems: $E$ is the energy expressed in units of hopping $t$ or directly in eV, $n$ is the impurity concentration expressed in $\%$, $M$ is the total number of disordered configurations in the studied ensemble, and B.W. is the normalized bandwidth used for integration in the inversion procedure. These results were obtained for three quasi-1D systems: (first column) carbon nanotube (CNT), (second column) graphene nanoribbon (GNR), and (third column) hexagonal boron nitride (hBN) nanoribbon. The transmission results on the top panels are scaled in terms of the conductance quantum, $\Gamma_0 = 2e^2/h$, with $e$ being the elementary charge and $h$ the Planck constant. Top panels (a,d,g) contain the transmission as a function of energy for the respective pristine systems (red curves) and for the doped target systems (blue curves) in which the misfit function locates a minimum: (a,b) CNT at $n_{min}=2.5\%$, (d,e) GNR at $n_{min}=3\%$, and (g,h) hBN at $n_{min}=2\%$ impurities. The CNT is an armchair (5,5) tube hosting impurities modelled with an on-site energy of $\lambda = 0.5\, t$ with $t=1$, spread over $L=50$ unit cells in length. The GNR is an armchair-edge nanoribbon with 7 atoms along its width, hosting impurities with an on-site energy of $\lambda = 0.5\, t$ with $t=1$, spread over $L=100$ unit cells in length. The hBN host is an armchair-edge ribbon with 8 atoms along its width and infinite in length. Carbon atoms act as impurities and are spread over a section $L=100$ layers long. The on-site energy values used to model B, N, and C atoms are: $\lambda_B = -6.64$ eV, $\lambda_N = -11.47$ eV, and $\lambda_C = -8.97$ eV \cite{dibenthesis}. See main text for details on the hopping characteristics used to model the hBN.}
\label{fig:gnr}
\end{figure}
We now introduce three new case studies in the second group of systems, which are less simplistic and more realistic. The first system in this group consists of carbon nanotubes (CNTs); the second is made of graphene nanoribbons (GNRs); the third and final case is made of hexagonal boron nitride (hBN) nanoribbons. Each of these materials contains a finite concentration of substitutional impurities that differ from the pristine atomic composition. One obvious difference from the previous cases is that these are no longer strictly one-dimensional systems, and their electronic structures are considerably more involved than those of the cases shown in Fig.~\ref{fig-simple}.
Even though these new cases possess band structures with finer details than those in the first group, it is still convenient to maintain the tight-binding description of the electronic structure. Successful inversions have already been reported with Density Functional Theory (DFT) calculations used to describe the multi-orbital electronic structure of doped graphene nanoribbons \cite{shardul}. Despite their success, {\it ab-initio} calculations are far more time-consuming than semi-empirical approaches such as tight-binding, which justifies our choice of electronic structure model for this manuscript. Therefore, the Hamiltonian appearing in equation (\ref{tb}) is still applicable to the cases in this group, the main difference being that the integer $j$ now runs over all sites of these quasi-one-dimensional structures. In the hBN case in particular, the Hamiltonian carries three on-site contributions to account for the nitrogen ($\lambda_N$), boron ($\lambda_B$), and impurity atoms, as we will detail later on.
Results for these new cases are shown in Fig.~\ref{fig:gnr}, with the CNT on the left panels, the GNR on the middle panels, and hBN on the right panels. Following the same pattern as Fig.~\ref{fig-simple}, the top-row panels display the electronic transmission of the parent configuration that serves as the input function in the inversion procedure. Note that in this case these functions can be interpreted as the Fermi-energy-dependent conductance. Shown on the same plots, for comparison, are the impurity-free conductance curves. The middle-row panels depict the corresponding misfit function $\chi(n)$, always plotted as a function of the concentration of impurities $n$. Once again for the sake of comparison, the same plots contain a vertical dashed line indicating the real impurity concentration of the parent configuration. Finally, the bottom-row panels quantify the accuracy of the inversion procedure by plotting the error $\alpha$ as a function of control parameters that will be discussed later.
The CNT results on the left-column panels of Fig.~\ref{fig:gnr} are for tubes containing impurities represented by on-site potentials of $\lambda= 0.5$ in units of hopping (see equation (\ref{tb})). While this value corresponds to an arbitrary choice that does not represent any specific atomic species, the inversion method correctly identifies the impurity concentration for any choice of $\lambda$. Such good agreement between the inverted result and the real concentration is seen in the misfit function plot for the nanotube: the concentration that minimizes $\chi$, $n_{min}$, coincides with the $n_i=2.5\%$ chosen for the parent configuration. Having performed the inversion procedure with numerous different parent configurations, we were able to assess its success rate in the case of nanotubes serving as hosts. Fig.~\ref{fig:gnr}(c) depicts the error $\alpha$ plotted as a function of the bandwidth, revealing that the accuracy of the method improves as the integration window of the misfit function increases. The central-column panels display equally good results for the GNR with the same choice of $\lambda = 0.5$ in units of hopping to mimic substitutional impurity atoms. Fig.~\ref{fig:gnr}(d) shows the conductance of a pristine armchair-edged GNR, 7 atoms wide, plotted as a function of energy. Note that only the conduction band ($E>0$) is shown to simplify visualization. On the same panel, the conductance of the parent configuration containing 3\% of impurities is also shown. Fig.~\ref{fig:gnr}(e) displays the $n$-dependent misfit function with a distinctive minimum that agrees with the real concentration, $n_i=3\%$. The accuracy of the nanoribbon inversions is tested by plotting $\alpha$ as a function of concentration in Fig.~\ref{fig:gnr}(f). Once more, we observe that the inversion method performs better in dilute doping regimes, with the error $\alpha$ increasing relatively slowly with the concentration of impurities.
The right-column panels of Fig.~\ref{fig:gnr} show results for an armchair-edge hBN nanoribbon with 8 atoms along its width. The ribbon is infinite in length but, in its doped form, impurities are spread over a section $L=100$ layers long. The tight-binding Hamiltonian for the hBN host uses three distinct on-site energy values to model the B and N atoms of the host and the C atoms acting as impurities: $\lambda_B = -6.64$ eV, $\lambda_N = -11.47$ eV, and $\lambda_C = -8.97$ eV \cite{dibenthesis}. Hopping terms were parameterized as $t =-6.17/a^2$ \cite{dibenthesis, harrison}, with $a$ being the pair-wise bond length. For a boron-nitrogen bond, $a=1.43$ \AA\, and, for the sake of simplicity, we consider that boron-carbon and nitrogen-carbon bond lengths do not change significantly from that value. The noisy conductance curve in Fig.~\ref{fig:gnr}(g) corresponds to the parent system, a hBN ribbon doped with $n_i=2\%$ of substitutional impurities, while the red curve is the conductance of the pristine hBN nanoribbon. The misfit function shown in Fig.~\ref{fig:gnr}(h) acquires a minimum at $n_{min}=2\%$, evidencing that the method correctly predicted the concentration of the parent system. Finally, Fig.~\ref{fig:gnr}(i) confirms that the accuracy of the method improves significantly as the number of disordered configurations in the ensemble, $M$, increases.
\begin{figure}[!tbp]
\centering
\begin{minipage}[b]{0.45\textwidth}
\includegraphics[width=\textwidth,right]{hbn_sur.pdf}
\end{minipage}
\begin{minipage}[b]{0.45\textwidth}
\includegraphics[width=\textwidth,left]{fig3_b.pdf}
\end{minipage}
\caption{(Left panel) Misfit function ($\chi$) surface plot for an armchair-edge hBN nanoribbon with 7 atoms along its width. The ribbon is infinite, but carbon impurities were spread randomly over $L=100$ unit cells along its length. The color bar expresses the values of $\chi$. The control parameters along the x- and y-axes are $N_N$ and $N_B$, respectively; these account for the number of carbon impurities replacing nitrogen ($N_N$) or boron ($N_B$) atoms. The dashed lines intersect at the characteristics of the parent system, with $N_{N} = 50$ and $N_B=20$. The minimum of $\chi$ correctly predicts the characteristics of the parent system, as it lies within the region where $N_{N} = 50$ and $N_B=20$. (Right panel) Curve of $\chi$ versus $N_{N}$ obtained by slicing the misfit surface plot of the left panel horizontally at $N_{B} = 20$ (red horizontal dashed line). A distinctive minimum at $N_{N}=50$ (red vertical dashed line) correctly identifies the occupation of carbon impurities on nitrogen sites of the parent system. The bandwidth considered for the inversion is 70\% of the entire spectrum.}
\label{Hbn_N}
\end{figure}
Another way to prove the generality of our methodology is to extend its use to a multi-dimensional phase space. Up to this point, the inversion procedure has looked at a single degree of freedom, the impurity concentration. However, it is possible to extend this analysis to a two-dimensional phase space, as we now illustrate. This analysis becomes particularly interesting when applied to the hBN case because impurities can replace two types of atoms on the host (N or B). This means we can decompose the concentration information into the two distinct sublattices. Considering that a total of $N$ impurities substitutionally dope a segment of an hBN nanoribbon, we can write $N=N_B+N_N$, where $N_B$ ($N_N$) is the number of boron (nitrogen) atoms replaced by an impurity. Therefore, we can write the misfit function in terms of these two degrees of freedom to probe the occupation of impurities on both boron and nitrogen sites as
\begin{equation}
\chi(N_{B},N_{N}) = \int_{{\cal E}_-}^{{\cal E}_+} dE \, \left[\, T(E) - \langle T(E,N_{B},N_{N})\rangle \, \right]^2 \,\,.
\label{chi2d}
\end{equation}
A certain number of C atoms are then spread over the hBN host in such a way that they either replace equal portions of B and N atoms or cause a sublattice imbalance. In other words, we can define a parameter $\delta = N_N - N_B$ that serves as a metric for this imbalance: if $\delta>0$, C atoms predominantly occupy the nitrogen sublattice; if $\delta<0$, C atoms predominantly occupy the boron sublattice; and if $\delta=0$, C atoms occupy both sublattices equally.
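The two-species bookkeeping behind equation (\ref{chi2d}) can be sketched with the same kind of grid scan on a toy model. The code below is ours and is not an hBN calculation: it uses a 1D tight-binding chain with two illustrative impurity species (on-site values $0.5t$ and $-0.4t$) solely to show how $\chi(N_A,N_B)$ is tabulated and minimised over a 2D grid.

```python
import numpy as np

rng = np.random.default_rng(2)

def tb_transmission(E, onsite, t=1.0):
    """1D tight-binding Landauer transmission with exact lead self-energies."""
    onsite = np.asarray(onsite, dtype=complex)
    L = len(onsite)
    g = (E - 1j * np.sqrt(4 * t**2 - E**2 + 0j)) / (2 * t**2)
    sigma = t**2 * g
    H = np.diag(onsite) + np.diag(t * np.ones(L - 1), 1) \
                        + np.diag(t * np.ones(L - 1), -1)
    H[0, 0] += sigma
    H[-1, -1] += sigma
    G = np.linalg.inv(E * np.eye(L) - H)
    gamma = -2.0 * np.imag(sigma)
    return float(gamma**2 * abs(G[0, -1])**2)

L_SITES, LAM_A, LAM_B = 40, 0.5, -0.4     # two illustrative impurity species

def random_onsite(n_a, n_b):
    """Scatter n_a species-A and n_b species-B impurities on distinct sites."""
    onsite = np.zeros(L_SITES)
    sites = rng.choice(L_SITES, n_a + n_b, replace=False)
    onsite[sites[:n_a]] = LAM_A
    onsite[sites[n_a:]] = LAM_B
    return onsite

def avg_spectrum(energies, n_a, n_b, M=40):
    return np.mean([[tb_transmission(E, random_onsite(n_a, n_b))
                     for E in energies] for _ in range(M)], axis=0)

energies = np.linspace(-1.5, 1.5, 25)
target = np.array([tb_transmission(E, random_onsite(4, 8)) for E in energies])

grid = list(range(2, 11, 2))              # candidate N_A and N_B values
chi = np.array([[np.mean((target - avg_spectrum(energies, na, nb)) ** 2)
                 for nb in grid] for na in grid])
ia, ib = np.unravel_index(chi.argmin(), chi.shape)
best = (grid[ia], grid[ib])               # candidate (N_A, N_B) pair
```

The argmin of the tabulated surface plays the role of the dashed-line intersection in Fig.~\ref{Hbn_N}; finer grids and larger $M$ refine the estimate.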
A 2D contour plot of $\chi$ as defined in equation (\ref{chi2d}) is presented in the left panel of Fig.~\ref{Hbn_N}. The global minimum correctly coincides with the occupation of impurities placed on the boron ($N_B=20$) and nitrogen ($N_N=50$) sublattices of the parent system, highlighted by the intersection of the vertical and horizontal dashed red lines. The right panel in Fig.~\ref{Hbn_N} corresponds to a horizontal slice taken from the surface plot at fixed $N_B=20$, which visualizes the dependence of $\chi$ on $N_N$ and highlights the minimum of $\chi$ at $N_N=50$. The fundamental difference between boron and nitrogen sites allows one to identify the occupation of carbon impurities on the corresponding sublattices. However, identifying the exact positions of the impurities remains an elusive task. This example shows that extending the inversion methodology to inspect more than one degree of freedom does not affect its generality and robustness. Looking at more than one degree of freedom in the misfit function certainly increases the computational cost and may generate multiple minima. Nonetheless, the method can serve as a first approach to determine the features of a parent system described in a multi-dimensional phase space. In other words, results such as the one depicted in Fig.~\ref{Hbn_N} narrow down considerably the search range for the optimum $N_B$ and $N_N$ values. A refinement can be achieved by re-applying the inversion procedure within the reduced parameter range, with an increased number of CA samples $M$ and/or an increased bandwidth for the inversion procedure.
It is worth mentioning that our methodology is not immune to the typical limitations associated with the investigation of large-scale systems containing over $10^3$ atoms. Even so, we tested our methodology with relatively large systems, e.g., nanoribbons with $\sim 2000$ atoms along their length. A single-point calculation to determine the accuracy of the method took approximately 30--60 minutes to complete, depending on the ensemble size. Depending on the computational resources available, this computing time can be significantly reduced. Systems treated with more complex Hamiltonians, beyond the nearest-neighbour single-orbital tight-binding approximation, will certainly demand longer computational times. Nonetheless, our methodology can be paired with scalable electronic transport approaches, such as the one proposed by Fan et al. \cite{FAN201428}, in which an optimization of the linear-scaling Kubo-Greenwood formula allows large-scale calculations in systems of $\sim 10^6$ atoms.
\section{Conclusions}
In this manuscript, we illustrated the use of an inversion methodology capable of extracting information from disordered systems, more specifically host media perturbed by local potentials representing impurities or dopants. For situations in which the initial conditions of the system under study are known, the method may sound unimpressive. However, when one does not have access to those initial conditions, one needs a way of retrieving the system's initial setup by inspecting the response functions of the system subjected to perturbations. In this work, we studied a series of solid-state systems, ranging from an electron gas and a 1D atomic chain up to carbon nanotubes, graphene nanoribbons, and hBN nanoribbons, whose conductance or electronic transmission serves as the response function. Each of these (host) systems was perturbed by a certain concentration of impurities, whose presence induces fluctuations in the transmission response functions, turning them into rather noisy spectra. It is not straightforward to deduce the concentration of impurities doping the hosts by simply looking at the noisy profile of the transmission curve. We demonstrated the generality of the inversion methodology proposed by Mukim et al. \cite{shardul}, in which unknown quantities such as the impurity concentration in doped nanoscale systems can be uncovered by computing the so-called ``misfit function'', an objective function written in terms of the configurationally averaged conductance and the target conductance. The misfit function reveals minima at locations in the parameter phase space that correspond to the unknown initial conditions of the system, in this case uniquely characterized by the concentration of impurities doping a host material. The method can be assessed through numerous control parameters that inform on its accuracy and performance, such as the relative error to the target, the influence of the bandwidth window, and the number of configurationally averaged samples in the ensembles, to name but a few.
We observed that the accuracy of the method improves considerably when: (i) sufficiently large bandwidths are selected for the integration of the misfit function, (ii) the sampling of the ensemble is increased, and (iii) target systems are in a dilute regime of sufficiently low dopant concentration. In summary, the method proved capable of ``decoding'' noisy transmission response functions into a more meaningful mathematical representation, the misfit function, which exhibits minima at the characteristics defining the a priori unknown initial conditions of the studied systems.
\section*{Acknowledgements}
This publication has emanated from research supported in part by a research grant from Science Foundation Ireland (SFI) under Grant Number SFI/12/RC/2278-P2. This work was also supported by U of C start-up funding and partially supported by the Natural Sciences and Engineering Research Council of Canada (NSERC) [Discovery Grant funding reference number xxxxxx]. We also acknowledge WestGrid (www.westgrid.ca), Compute Canada Calcul Canada (www.computecanada.ca), and CMC Microsystems (www.cmc.ca) for computational resources.
\section{Introduction}
It is a simple Quantum Mechanics problem to find the electronic conductance of a nanoscale device by directly solving the Schr\"odinger equation that governs the wave function of the quantum system under study. Obviously, this is the case if and only if the underlying Hamiltonian is known, i.e., if all scattering sources and interactions are fully specified. Performing the same task in reverse is significantly more challenging. For example, assuming that the conductance of the system is known, how can one infer the Hamiltonian components from that information alone? To make matters worse, what if the system is made of a heavily disordered material? Questions of this type are generally labeled Inverse Problems (IP), classified as those which attempt to obtain, from a set of observations, the causal factors that generated them in the first place. IP are integral parts of classical visualization tools \cite{medical, fwi, tromp2008spectral, sonar} but are far less common in the quantum realm. The literature on Quantum IP (QIP) is primarily focused on the fundamentals of inversion processes, e.g., whether a problem is ill-posed \cite{Lassas2008} and whether solutions are unique and stable \cite{PhysRevLett.111.090403}. From a different perspective, studies of nanomaterials and their physical properties could benefit from applications of QIP, since they involve investigating structures for which the underlying Hamiltonians are not always known \cite{PhysRevX.8.031029,inv-CMP,tsymbal,jasper,gianluca,PhysRevLett.97.046401,Franceschetti1999,liping,plasma}.
Identifying the precise Hamiltonian that generates a specific observable is a difficult task. In general, it consists of solving the Schr\"odinger equation with a Hamiltonian containing one (or more) parameter(s) that must be varied until the solution closely matches the original observation. Because the parameter phase space is so large, finding the optimum parameter set can be computationally demanding. With the advance of high-performance computing and the various optimization algorithms available, distinct methods for probing the parameter phase space in electronic structure problems can be used. Neural-network-based search engines \cite{jensen,yazyev}, genetic algorithms \cite{Zhang2013, Luo, Vmlinar}, and, more recently, machine-learning strategies \cite{Vargas2019, kyriienko, anatole, fazli, burak, collins, Xia2018, dral, melko} have been proposed and, with different degrees of success, can speed up the search for the ``inverted'' solution. As a result, simulations are starting to have an impact in reducing the time and cost associated with materials design, especially those involving high-throughput studies of material groups \cite{Ziletti2018, rajan, suram, Koinuma2004, choi, nardelli, PhysRevLett.108.068701, Yan2015,Fischer2006, Gautier2015}. These are large-scale simulations that generate volumes of data with the intention of identifying optimal combinations that can subsequently be used as candidates in an exploratory search for new materials. These methods certainly serve their purpose of determining the optimum parameter set that fits a given observable. Nonetheless, they can be less intuitive to implement due to their data-science scope and the somewhat ``black-box'' approach embedded in their algorithms \cite{Schmidt2019}.
A significant portion of the electronic structure community studying low-dimensional materials is familiar not only with the Hamiltonian construction in the Schr\"odinger equation but also with its reciprocal form, involving equations of motion for the inverse of the Hamiltonian operator, which can be identified as the Green's function. Green's function-based methods \cite{abcd} have played a central role as a theoretical tool to study electronic and transport properties of nanoscale systems. These are versatile methods that can be used to obtain the conductance of complex low-dimensional materials, including disordered ones.
A mathematically transparent inversion technique capable of extracting structural and composition information from a disordered quantum device by looking at its conductance signatures has recently been proposed \cite{shardul}. Using energy-dependent two-terminal conductances as input functions, the reported inversion method identifies the concentration of impurities randomly distributed within a graphene nanoribbon (GNR) host. Despite claims that the method is general, robust, and stable, evidence has so far been limited to nitrogen-doped GNRs \cite{shardul}. In this manuscript, we put the generality of this inversion methodology to the test by implementing it for a variety of systems, some described by extremely simple electronic structure models, others by more realistic ones. Furthermore, the inversion accuracy is also tested and shown to correctly identify the impurity concentration in the systems studied in this work. This suggests that the method can indeed be employed as a useful characterisation tool for systems composed of different materials and dimensions. The method relies on an objective function, which we call the `misfit' function, that can be built directly from conductance calculations. A remarkable advantage of the method is its systematic treatment of the misfit function, without the need for extra adjustments or parameter tuning to perform the optimization search over concentration and impurity characteristics. This article starts by illustrating, in the next section, the inversion methodology applied to a very simple one-dimensional atomic chain system.
In subsequent sections, we demonstrate how this inversion tool can be implemented to extract valuable information about nanoscale systems acting as hosts of chemical impurities; hosts include carbon nanotubes (CNTs), GNRs, and hexagonal boron nitride (hBN) nanoribbons, in which foreign atoms are randomly distributed over the atomic structure.
\section{Methodology}
To demonstrate the versatility of this inversion methodology, we separate the cases we study into two groups: one with systems possessing very simple electronic structures and another in which the materials are described by more realistic band structures. The purpose of separating the case studies is twofold: (i) it is much easier to introduce the inversion procedure with simple systems, since this prevents artifacts coming from complex electronic structure models and keeps the focus entirely on the inversion method itself; (ii) showing that the inversion method works for systems regardless of their electronic-structure details and of their dimensionality is strong evidence of the robustness of the methodology. With that in mind, we present the inversion method with a first group consisting of three different one-dimensional systems, all possessing rather simple electronic structures. The first system consists of an electron gas in the presence of short-range scatterers represented by randomly distributed delta functions. In this case, the delta functions act as impurities on top of an otherwise homogeneous potential produced by an infinitely long host material. Mathematically, the single-particle Hamiltonian is defined by the following potential
\begin{equation}
V_1(x) = \sum_{j=1}^N \lambda \, \delta(x-x_j) \,\,,
\label{delta-potential}
\end{equation}
where $j$ is an integer running from $1$ up to the total number of scatterers $N$, $x$ is the spatial variable, $x_j$ gives the randomly selected position of the $j^{th}$ scatterer, and $\lambda$ is a positive constant that reflects the scattering strength of the impurities, assumed to be all identical for simplicity. The $N$ random values of $\{x_j\}$ are constrained to lie within a region of length $L$ and fully determine the Hamiltonian as well as the corresponding transmission coefficient $T(E)$, $E$ being the energy. Note that the integer number of impurities $N$ and the non-integer concentration $n=N/L$ are considered equivalent control parameters and will be used interchangeably throughout this article.
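To make this first case concrete, the transmission through the potential of equation (\ref{delta-potential}) can be evaluated with a standard $2\times 2$ transfer-matrix product. The sketch below is purely illustrative (the function name is ours, and units are chosen so that $\hbar^2/2m = 1$, hence $E = k^2$); it composes one matrix per delta scatterer and reads off $T = 1/|M_{22}|^2$, since each matrix has unit determinant.

```python
import numpy as np

def transmission_deltas(E, positions, lam=1.0):
    """Transmission through randomly placed delta scatterers of strength lam.

    Units: hbar^2/(2m) = 1, so E = k^2.  Illustrative sketch of the
    potential in Eq. (delta-potential); names are our own.
    """
    k = np.sqrt(E)
    beta = lam / (2.0 * k)
    M = np.eye(2, dtype=complex)
    for xj in sorted(positions):
        phase = np.exp(2j * k * xj)   # position-dependent plane-wave phase
        Mj = np.array([[1.0 - 1j * beta, -1j * beta / phase],
                       [1j * beta * phase, 1.0 + 1j * beta]])
        M = Mj @ M                    # compose scatterers from left to right
    # Incident from the left: t = det(M)/M22 with det(M) = 1
    return 1.0 / abs(M[1, 1]) ** 2
```

For a single scatterer this reproduces the textbook result $T = [1 + \lambda^2/4E]^{-1}$.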
In the second case study considered here, we relax the short-range character of the scatterers by replacing the delta-function potentials with identical square barriers of width $d$ and height $V_0$. In other words, the new Hamiltonian now has a potential described by
\begin{equation}
V_2(x) = \sum_{j=1}^N V_0 \, [\Theta(x-x_j) - \Theta(x-x_j-d)] \,\,,
\label{barrier-potential}
\end{equation}
where $\Theta$ is the Heaviside step function. Once again, the barrier positions are determined by randomly selected values of $\{x_j\}$, with the additional constraint that barriers must not overlap, {\it i.e.}, no two values $x_j$ and $x_{j^\prime}$ may be less than a distance $d$ apart.
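The non-overlap constraint can be enforced by simple rejection sampling when drawing the barrier positions. The helper below is our own illustrative sketch (not part of Ref.~\cite{shardul}) and is adequate in the dilute regimes considered here.

```python
import numpy as np

def sample_barrier_positions(N, L, d, rng=None):
    """Draw N barrier left edges in [0, L-d] such that no two barriers
    of width d overlap.  Rejection sampling; fine for dilute systems."""
    rng = np.random.default_rng(rng)
    xs = []
    while len(xs) < N:
        x = rng.uniform(0.0, L - d)
        # keep the draw only if it is at least d away from every barrier
        if all(abs(x - y) >= d for y in xs):
            xs.append(x)
    return np.array(sorted(xs))
```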
The third and final case departs from a free-electron-gas description of the electronic structure of the host material. Still keeping a simplified picture of the electronic structure, we move on to a lattice representation of the Hamiltonian, in one dimension, captured by the tight-binding picture of an infinitely long chain of atoms. With no loss of generality, we adopt the following Hamiltonian
\begin{equation}
{\hat H}_3 = \sum_j ( \, \vert j \rangle t \langle j+1 \vert + \vert j \rangle t \langle j-1 \vert \, ) + \sum_{j^\prime} \vert j^\prime \rangle \lambda_{j^\prime} \langle j^\prime \vert \,\,,
\label{tb}
\end{equation}
which contains an off-diagonal hopping term and a diagonal term describing the potential on all sites of the system. In equation (\ref{tb}), $j$ and $j^\prime$ are integers that label the atomic sites, $\vert j \rangle$ represents an electronic orbital centred at site $j$ and $t$ is the electronic hopping between nearest-neighbour sites only. $\lambda_{j^\prime}$ represents the on-site potential of substitutional impurities that are once again randomly distributed within a region of size $L$. It is worth mentioning that impurities in the form of adatoms (adsorption) can also be considered in the Hamiltonian in equation (\ref{tb}) without fundamental changes to our approach.
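For this lattice case, the transmission can be obtained, for instance, from the Green's function of the disordered segment coupled to pristine semi-infinite leads. The following minimal sketch is our own (using the standard analytic surface self-energy of a 1D chain, valid inside the band $|E| < 2|t|$) and is not necessarily the implementation used in Ref.~\cite{shardul}.

```python
import numpy as np

def transmission_chain(E, onsite, t=1.0):
    """NEGF transmission for an infinite chain with a disordered segment.

    `onsite` lists the on-site energies lambda_j of the scattering region
    in Eq. (tb); the leads are pristine (on-site 0, hopping t).
    Valid inside the band |E| < 2|t|.  Sketch with our own conventions."""
    L = len(onsite)
    s = np.sqrt(4.0 * t ** 2 - E ** 2)   # band factor, -2 Im(sigma)
    sigma = (E - 1j * s) / 2.0           # retarded lead surface self-energy
    H = np.diag(np.asarray(onsite, dtype=complex))
    if L > 1:
        H += np.diag(np.full(L - 1, t, dtype=complex), 1)
        H += np.diag(np.full(L - 1, t, dtype=complex), -1)
    W = E * np.eye(L, dtype=complex) - H
    W[0, 0] -= sigma                     # attach left lead
    W[-1, -1] -= sigma                   # attach right lead
    G = np.linalg.inv(W)                 # retarded Green's function
    gamma = s                            # contact broadening
    return gamma ** 2 * abs(G[0, -1]) ** 2
```

A pristine segment returns $T=1$ for any in-band energy, while finite on-site disorder suppresses the transmission.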
All three cases introduced above can be characterised by an infinitely long host containing a number $N$ of randomly distributed impurities confined to a finite-sized region and described by the respective equations (\ref{delta-potential}), (\ref{barrier-potential}), and (\ref{tb}). With this general setup, one can obtain the transmission coefficient across all three systems following a method of choice, e.g., Green's functions \cite{abcd,rocha1,rocha2,lawlor1} or scattering matrices \cite{datta_1995}. The top three panels of Fig.~\ref{fig-simple} depict the electronic transmission $T(E)$ for a specific configuration of each of these simple cases. It is worth noting that, in each of the three cases, the exact number of impurities and their respective locations defining the parent Hamiltonian that generated the $T(E)$ curve of Fig.~\ref{fig-simple} are set aside and will not be used in any part of the subsequent inversion calculation. In other words, $T(E)$ serves as the only input function from which the inversion will take place. The task at hand is a QIP that aims to find structural and composition information about the randomly placed impurities using the seemingly noisy curves shown on the top panels of Fig.~\ref{fig-simple} as the only starting point. The transmission coefficient is a representative quantity to use as input function because it has several parallels in real materials that go beyond this simple illustrative model. For example, the conductance of a metal is commonly expressed in terms of $T(E)$, following the Landauer-B\"uttiker formalism \cite{datta_1995}. Likewise, other quantities such as thermal and optical conductivities can also be expressed in a spectral representation that resembles the transmission coefficient \cite{tuovinen}. In this work, we put emphasis on extracting materials' composition from electronic transport analysis following the misfit-function minimization scheme illustrated in the case studies above.
It is important to emphasize a general aspect of our methodology, namely the possibility of decoding not only electronic conductance curves but also response functions of other physical natures, e.g., mechanical, optical, magnetic, etc. For instance, this methodology was recently adapted to decode the optical conductivity of disordered MoS$_2$ films, as presented by Duarte et al. \cite{duarte2021}. Another promising application of the methodology is in the context of mechanical response, in line with the approach proposed by Bao et al. \cite{bao2013}. Note also that our misfit-function methodology is not limited to conductance curves derived from numerical methods; it is applicable to any energy-dependent response function that can serve as a ``target'' for the misfit-function minimization scheme. Typical experimental data that could be used with this methodology could come from Hall-bar experiments, in our case conducted on 2D nanostructures. As long as the main methodology parameters (i.e., bandwidth, size of the ensemble, transport regime, and Hamiltonian flavor) are set to achieve the desired accuracy, convergence to a minimum of the misfit function can be expected for any fluctuating energy-dependent response function obtained from experiments or by numerical means.
As previously alluded to, a naive approach would be to compare the input function with the transmission coefficient of every single possible disorder configuration, but the prohibitively large number of combinations renders this strategy impracticable.
\begin{figure}
\centering
\includegraphics[height=0.57\columnwidth,right]{final_fig1_up.png}
\caption{(first row) Transmission coefficient ($T$), (second row) misfit function ($\chi$), and (third row) error ($\alpha$) as a function of representative physical quantities in disordered systems: $E$ is the energy expressed in units of the energy scale $t$, $N$ is the number of scatterers, $n$ is the impurity concentration expressed in $\%$, B.W. is the normalized bandwidth used for integration in the inversion procedure, and $M$ is the total number of disordered configurations in the studied ensemble. These results were obtained for three case-study systems: (first column) electron gas with delta functions representing scatterers, (second column) electron gas with identical square barriers of width $d=1$ and height $V_0=1$ in arbitrary units, (third column) one-dimensional (infinite) atomic chain. In all cases, $t=1$. For the third case (atomic chain), the transmission is scaled in terms of the conductance quantum, $\Gamma_0 = 2e^2/h$, with $e$ being the elementary charge and $h$ the Planck constant. Panel g) also contains the transmission as a function of energy for the pristine atomic chain (red curve). Impurities in the doped case were modelled with an on-site energy of $\lambda = 0.5t$ and were spread within a length of $L=100$ atomic layers. Transmission results in the first row are for the target systems in which the misfit function locates a minimum: (a,b) electron gas with $N_{min}=5$ delta functions, (d,e) electron gas with $n_{min}=4\%$ square-barrier scatterers, and (g,h) 1D atomic chain with $n_{min}=8\%$ impurities. }
\label{fig-simple}
\end{figure}
Fortunately, the inversion procedure introduced in Ref.~\cite{shardul} provides a far more efficient technique, which combines configurationally averaged (CA) values of the transmission coefficient with the ergodic principle. More specifically, the ergodic hypothesis assumes that a running average over a continuous parameter upon which the transmission coefficient depends is equivalent to sampling different impurity configurations, and this can be used to assist the inversion procedure. In mathematical terms, this is done through the so-called misfit function, defined as \cite{shardul}
\begin{equation}
\chi(n)={1 \over {\cal E}_+ - {\cal E}_-}\,\int_{{\cal E}_-}^{{\cal E}_+} dE \, (T(E) - \langle T(E, n) \rangle)^2\,\,,
\label{misfit}
\end{equation}
where the integration limits are arbitrary energy values. $\chi$ can be interpreted as a functional that measures the deviation between the input transmission of the parent configuration, $T(E)$, and its CA counterpart $\langle T(E,n) \rangle$, defined as
\begin{equation}
\langle T(E,n) \rangle = {1 \over M} \sum_{m=1}^M T_m(E),
\label{CA}
\end{equation}
where $M$ is the total number of different disordered configurations taken into account and $T_m(E)$ is the corresponding transmission coefficient for each of these configurations, which are themselves labelled by the integer $m$. While both $T(E)$ and $\langle T(E,n) \rangle$ in the integrand above are functions of energy, the latter is also a function of the impurity concentration $n$. When plotted as a function of $n$, the misfit function displays a very distinctive minimum at a value that corresponds to the real impurity concentration. This can be seen in the middle panels of Fig.~\ref{fig-simple}. The concentration $n_{min}$ that minimises the misfit function coincides with the real concentration $n_i$ in all three cases, indicating that the inversion is successful. The bottom-row panels of the Figure show the relative error $\alpha = \vert n_{min} - n_i\vert/n_i$, calculated over several different parent configurations in order to achieve statistical significance.
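On a discrete energy grid, equations (\ref{misfit}) and (\ref{CA}) reduce to a few lines of code. The sketch below uses hypothetical names of our own, and assumes the ensemble of transmission curves for each trial concentration has been precomputed by whatever transport method is preferred; it returns the concentration minimizing $\chi$.

```python
import numpy as np

def misfit(T_target, ensemble_T):
    """Discrete misfit: mean-square deviation between the parent
    transmission and the configurational average of the ensemble."""
    T_avg = np.asarray(ensemble_T).mean(axis=0)   # <T(E, n)> over M rows
    return np.mean((np.asarray(T_target) - T_avg) ** 2)

def invert_concentration(T_target, ensemble_by_n):
    """Scan trial concentrations and return the one minimizing the misfit.

    ensemble_by_n maps a trial concentration n to an (M, n_E) array of
    transmission curves, one row per disorder realization."""
    chi = {n: misfit(T_target, ens) for n, ens in ensemble_by_n.items()}
    return min(chi, key=chi.get), chi
```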
One question that naturally arises is how relevant the energy integral of equation (\ref{misfit}) is. The energy integration plays a significant part in identifying the real impurity concentration because the $T(E)$ fluctuations of a single sample as a function of energy are equivalent to sample-to-sample fluctuations at a fixed energy. Therefore, the energy integration is similar to vastly augmenting the number of disordered configurations taken into account. In fact, if we were to use equation (\ref{misfit}) with ${\cal E}_-={\cal E}_+$, the misfit function would not have any distinctive feature and would not lead to a successful inversion \cite{shardul}. However, even if the integration limits span only a small fraction of the relevant energy range, the misfit function acquires a very distinctive shape with minima located at the correct concentrations.
Some further insights can be drawn from Fig.~\ref{fig-simple}. Fig.~\ref{fig-simple}(c) plots the inversion error $\alpha$ as a function of the normalized bandwidth, defined as B.W.$=({\cal E}_+ - {\cal E}_-)/Z$, with $Z$ being the total bandwidth of the electron gas system. Note that $\alpha$ is large for very narrow energy windows. When the energy window for integration is broadened beyond $20\%$ of the bandwidth, $\alpha< 0.1$ in the case of $N=5$ delta-function potentials. This indicates that larger energy windows accounted for in the misfit function capture more of the transmission features, thereby minimizing the error of the inversion procedure. The integration in equation (\ref{misfit}) can hence be replaced with a discrete sum without major impact on the accuracy of the inversion procedure. By reducing the number of energy points ${\cal N}_\epsilon$ involved in generating the misfit function, there is a point below which the inversion accuracy drops quite significantly, but this threshold is normally at very low levels (${\cal N}_\epsilon < 50$), which enables calculations with systems for which the numerical complexity is significantly higher \cite{shardul}, as we will demonstrate in the next section.
Regarding the error study of Fig.~\ref{fig-simple}(f) and the number of configurations $M$ needed in the CA values, the backbone of our inversion procedure is that fluctuations in the conductance ($\Gamma$) contain very little system-specific information, whereas the average conductance depends smoothly on the variable of interest. In this sense, the higher the number of disorder realizations $M$, the smaller the fluctuations in $\langle \Gamma(E,n) \rangle$. Universal conductance fluctuations (UCF) can be used to estimate the values of $M$ required for sufficiently accurate results in the inversion procedure. Both for diffusive \cite{lee1985universal,lee1987universal} and chaotic ballistic systems \cite{Baranger1994}, var($\Gamma) \approx 1$ is the main fingerprint of UCF. Hence, the CA relative statistical error is expected to scale as $[\sqrt{M}\times \langle \Gamma(E,n) \rangle]^{-1}$. To bring the degree of fluctuations to an acceptable level that yields statistically significant results, we estimate that $M$ should be of the order of $10^3$ \cite{Alhassid2000,lopez2014modeling}. As seen in Fig.~\ref{fig-simple}(f), the error is only guaranteed to stabilize at $M\sim 1000$. Depending on the complexity of the system, higher values of $M$ can be required to achieve statistical significance.
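The $1/\sqrt{M}$ scaling of the CA statistical error can be checked with a quick synthetic experiment, treating the conductance samples as fluctuating with unit variance as suggested by UCF. This is a toy statistical model of our own, not a transport calculation:

```python
import numpy as np

def ca_error(M, var_gamma=1.0, trials=2000, seed=0):
    """Standard deviation of a configurational average built from M samples
    whose individual variance is var_gamma (UCF suggests var_gamma ~ 1)."""
    rng = np.random.default_rng(seed)
    samples = rng.normal(0.0, np.sqrt(var_gamma), size=(trials, M))
    # one configurational average per trial; spread across trials is the error
    return samples.mean(axis=1).std()
```

Quadrupling $M$ halves the statistical error, consistent with the $1/\sqrt{M}$ scaling quoted above.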
Finally, Fig.~\ref{fig-simple}(i) shows the inversion error $\alpha$ for an infinite 1D atomic chain as a function of impurity concentration $n$. Substitutional impurities were spread randomly over a length of $L=100$ unit cells. To test the inversion procedure, we defined parent Hamiltonians with known impurity concentrations and positions. One can see that $\alpha$ is relatively small for concentrations below 10\%, clearly indicating that our inversion method is very reliable in dilute regimes. By increasing the concentration of scatterers spread over a finite length, impurities begin to form pairs or atomic clusters, which the method is not equipped to distinguish from single scatterers. For this reason, we see a reduction in the accuracy of our method ($\alpha$ increases) as the impurity concentration increases.
\section{Results and discussion}
\begin{figure}
\centering
\includegraphics[height=0.57\columnwidth,right]{figure2_new_up.pdf}
\caption{
(first row) Electronic transmission ($T$), (second row) misfit function ($\chi$), and (third row) error ($\alpha$) as a function of representative physical quantities in disordered systems: $E$ is the energy expressed in units of hopping $t$ or directly in eV, $n$ is the impurity concentration expressed in $\%$, $M$ is the total number of disordered configurations in the studied ensemble, and B.W. is the normalized bandwidth used for integration in the inversion procedure. These results were obtained for three quasi-1D systems: (first column) carbon nanotube (CNT), (second column) graphene nanoribbon (GNR), and (third column) hexagonal boron nitride (hBN) nanoribbon. The transmission results on the top panels are scaled in terms of the conductance quantum, $\Gamma_0 = 2e^2/h$, with $e$ being the elementary charge and $h$ the Planck constant. Top panels (a,d,g) contain the transmission as a function of energy for the respective pristine systems (red curves) and for the doped target systems (blue curves) in which the misfit function locates a minimum: (a,b) CNT at $n_{min}=2.5\%$, (d,e) GNR at $n_{min}=3\%$, and (g,h) hBN at $n_{min}=2\%$ impurities. The CNT is an armchair (5,5) tube hosting impurities modelled with an on-site energy of $\lambda = 0.5\, t$ with $t=1$, spread over $L=50$ unit cells in length. The GNR is an armchair-edge nanoribbon with 7 atoms along its width, hosting impurities with an on-site energy of $\lambda = 0.5\, t$ with $t=1$, spread over $L=100$ unit cells in length. The hBN host is an armchair-edge ribbon with 8 atoms along its width and infinite in length. Carbon atoms act as impurities and are spread over a section $L=100$ layers long. The on-site energy values used to model B, N, and C atoms are $\lambda_B = -6.64$ eV, $\lambda_N = -11.47$ eV, and $\lambda_C = -8.97$ eV \cite{dibenthesis}. See main text for details on the hopping characteristics used to model the hBN.}
\label{fig:gnr}
\end{figure}
We now introduce three new case studies in the second group of systems, which are less simplistic and more realistic. The first system in this group consists of carbon nanotubes (CNTs); the second is made of graphene nanoribbons (GNRs); the third and final case is made of hexagonal boron nitride (hBN) nanoribbons. Each of these materials will contain a finite concentration of substitutional impurities that differ from the pristine atomic composition. One obvious difference from the previous cases is that these are no longer strictly one-dimensional systems and their electronic structures are considerably more involved than those of the cases shown in Fig.~\ref{fig-simple}.
Even though these new cases possess band structures with finer details than those in the first group, it is still convenient to maintain the tight-binding description of the electronic structure. Successful inversions have already been reported with Density Functional Theory (DFT) calculations used to describe the multi-orbital electronic structure of doped graphene nanoribbons \cite{shardul}. Despite that success, {\it ab-initio} calculations are far more time-consuming than semi-empirical approaches such as tight-binding, thus justifying our choice of electronic structure model for this manuscript. Therefore, the Hamiltonian in equation (\ref{tb}) is still applicable to the cases in this group, the main difference being that the integer $j$ now runs over all sites of these quasi-1D structures. Particularly for the hBN case, the Hamiltonian will carry three on-site contributions to account for the nitrogen ($\lambda_N$), boron ($\lambda_B$), and impurity atoms, as we will detail later on.
Results found for these new cases are shown in Fig.~\ref{fig:gnr} with CNT on the left panels, GNR on the middle panels and hBN on the right panels. Following the same pattern of Fig.~\ref{fig-simple}, the top-row panels display the electronic transmission of the parent configuration that serves as the input function in the inversion procedure. Note that in this case, these functions can be interpreted as the Fermi-energy-dependent conductance. Also on the same plots are the impurity-free conductance plots shown for comparison purposes. The middle-row panels depict the corresponding misfit function $\chi(n)$, always plotted as a function of the concentration of impurities $n$. Once again for the sake of comparison, the same plot contains a vertical dashed line that indicates the real impurity concentration contained in the parent configuration. Finally, the bottom-row panels indicate the accuracy of the inversion procedure by plotting the error $\alpha$ as a function of certain control parameters that will be discussed later.
The CNT results on the left-column panels of Fig.~\ref{fig:gnr} correspond to impurities represented by on-site potentials of $\lambda= 0.5$ in units of hopping (see equation (\ref{tb})). While this value corresponds to an arbitrary choice that does not represent any specific atomic species, the inversion method correctly identifies the impurity concentration for any choice of $\lambda$. Such good agreement between the inverted result and the real concentration is seen in the misfit function plot for the nanotube. The concentration $n_{min}$ that minimizes $\chi$ coincides with the $n_i=2.5\%$ chosen for the parent configuration. Having performed the inversion procedure with numerous different parent configurations, we were able to assess its success rate in the case of nanotube hosts. Fig.~\ref{fig:gnr}(c) depicts the error $\alpha$ plotted as a function of the bandwidth, revealing that the accuracy of the method improves as the integration window of the misfit function increases. The central-column panels display equally good results for the GNR, with the same choice of $\lambda = 0.5$ in units of hopping to mimic substitutional impurity atoms. Fig.~\ref{fig:gnr}(d) shows the conductance of a pristine armchair-edged GNR, 7 atoms wide, plotted as a function of energy. Note that only the conduction band ($E>0$) is shown to simplify visualization. On the same panel, the conductance of the parent configuration containing 3\% of impurities is also shown. Fig.~\ref{fig:gnr}(e) displays the $n$-dependent misfit function with a distinctive minimum that agrees with the real concentration, $n_i=3\%$. The accuracy of the nanoribbon inversions is tested by plotting $\alpha$ as a function of concentration in Fig.~\ref{fig:gnr}(f). Once more, we observe that the inversion method performs better in dilute doping regimes, with the error $\alpha$ increasing relatively slowly with the concentration of impurities.
The right-column panels of Fig.~\ref{fig:gnr} show results for an armchair-edge hBN nanoribbon with 8 atoms along its width. The ribbon is infinite in length but, in its doped form, impurities are spread over a section $L=100$ layers long. The tight-binding Hamiltonian for the hBN host uses three distinct on-site energy values to model the B and N atoms of the host and the C atoms acting as impurities: $\lambda_B = -6.64$ eV, $\lambda_N = -11.47$ eV, and $\lambda_C = -8.97$ eV \cite{dibenthesis}. Hopping terms were parameterized as $t =-6.17/a^2$ \cite{dibenthesis, harrison}, with $a$ being the pair-wise bond length. For a boron-nitrogen bond, $a=1.43$ \AA\, and, for the sake of simplicity, we consider that boron-carbon and nitrogen-carbon bond lengths do not change significantly from that value. The noisy conductance curve in Fig.~\ref{fig:gnr}(g) corresponds to the parent system of an hBN ribbon doped with $n_i=2\%$ of substitutional impurities, and the red curve is the conductance of the pristine hBN nanoribbon. The misfit function shown in Fig.~\ref{fig:gnr}(h) acquires a minimum at $n_{min}=2\%$, showing that the method correctly predicted the concentration of the parent system. Finally, Fig.~\ref{fig:gnr}(i) confirms that the accuracy of the method improves significantly as the number of disordered configurations in the ensemble, $M$, increases.
\begin{figure}[!tbp]
\centering
\begin{minipage}[b]{0.45\textwidth}
\includegraphics[width=\textwidth,right]{hbn_sur.pdf}
\end{minipage}
\begin{minipage}[b]{0.45\textwidth}
\includegraphics[width=\textwidth,left]{fig3_b.pdf}
\end{minipage}
\caption{(Left panel) Misfit function ($\chi$) surface plot for an armchair-edge hBN nanoribbon with 7 atoms along its width. The ribbon is infinite, but carbon impurities were spread randomly over $L=100$ unit cells along its length. The color bar expresses values of $\chi$. The control parameters along the $x$- and $y$-axes are $N_N$ and $N_B$, respectively, which account for the number of carbon impurities replacing nitrogen ($N_N$) or boron ($N_B$) atoms. Dashed lines intersect at the characteristics of the parent system, with $N_{N} = 50$ and $N_B=20$. The minimum of $\chi$ correctly predicts the characteristics of the parent system, as it lies within the region where $N_{N} = 50$ and $N_B=20$. (Right panel) Curve of $\chi$ versus $N_{N}$ obtained from the left panel by slicing the misfit surface plot horizontally at $N_{B} = 20$ (red horizontal dashed line). A distinctive minimum at $N_{N}=50$ (red vertical dashed line) correctly identifies the occupation of carbon impurities on nitrogen sites of the parent system. The bandwidth considered for the inversion is 70\% of the entire spectrum. }
\label{Hbn_N}
\end{figure}
Another way to prove the generality of our methodology is to extend its use to a multi-dimensional phase space. Up to this point, the inversion procedure has looked at a single degree of freedom, the impurity concentration. However, it is possible to extend this analysis to a two-dimensional phase space, as we now illustrate. This analysis becomes particularly interesting when applied to the hBN case because impurities can replace two types of atoms on the host (N or B). This means we can decompose the concentration information into the two distinct sublattices. Considering that a total of $N$ impurities substitutionally dope a segment of an hBN nanoribbon, we can write $N=N_B+N_N$, where $N_B$ ($N_N$) is the number of boron (nitrogen) atoms replaced by an impurity. Therefore, we can write the misfit function in terms of these two degrees of freedom to probe the occupation of impurities on both boron and nitrogen sites as
\begin{equation}
\chi(N_{B},N_{N}) = \int_{{\cal E}_-}^{{\cal E}_+} dE \, \left[\, T(E) - \langle T(E,N_{B},N_{N})\rangle \, \right]^2 \,\, .
\label{chi2d}
\end{equation}
A certain number of C atoms are then spread over the hBN host in such a way that they can replace equal portions of B and N atoms or cause a sublattice imbalance. In other words, we can define a parameter $\delta = N_N - N_B$ that serves as a metric for this imbalance: if $\delta>0$, C atoms predominantly occupy the nitrogen sublattice; if $\delta<0$, C atoms predominantly occupy the boron sublattice; and if $\delta=0$, C atoms occupy both sublattices equally.
A 2D contour plot of $\chi$ as defined in equation (\ref{chi2d}) is presented in the left panel of Fig.~\ref{Hbn_N}. The global minimum correctly coincides with the occupation of impurities placed on the boron ($N_B=20$) and nitrogen ($N_N=50$) sublattices of the parent system, highlighted by the intersection of the vertical and horizontal dashed red lines. The right panel of Fig.~\ref{Hbn_N} corresponds to a horizontal slice taken from the surface plot at fixed $N_B=20$ to visualize the dependence of $\chi$ on $N_N$ and to highlight the minimum of $\chi$ at $N_N=50$. The fundamental difference between boron and nitrogen sites allows one to identify the occupation of carbon impurities on the corresponding sublattices. However, identifying the exact positions of the impurities remains an elusive task. This example shows that extending the inversion methodology to inspect more than one degree of freedom does not affect its generality and robustness. Looking at more than one degree of freedom in the misfit function certainly increases the computational cost and may generate multiple minima. Nonetheless, the method can serve as a first approach to determine the features of a parent system described in a multi-dimensional phase space. In other words, results such as the one depicted in Fig.~\ref{Hbn_N} narrow down considerably the search range for the optimum $N_B$ and $N_N$ values. A refinement can be achieved by re-applying the inversion procedure within the reduced parameter range, with an increased number of CA samples $M$ and/or an increased bandwidth for the inversion procedure.
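The two-parameter scan of equation (\ref{chi2d}) follows the same pattern as the one-dimensional inversion: a grid of trial $(N_B, N_N)$ pairs, an ensemble of disorder realizations for each pair, and a search for the global minimum. A minimal sketch with hypothetical names of our own, the ensembles again assumed precomputed:

```python
import numpy as np

def invert_2d(T_target, ensembles):
    """Grid search of the two-parameter misfit function.

    ensembles maps a trial pair (N_B, N_N) to an (M, n_E) array of
    transmission curves; returns the minimizing pair and the chi map."""
    chi = {}
    for pair, ens in ensembles.items():
        T_avg = np.asarray(ens).mean(axis=0)   # <T(E, N_B, N_N)>
        chi[pair] = np.mean((np.asarray(T_target) - T_avg) ** 2)
    return min(chi, key=chi.get), chi
```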
It is worth mentioning that our methodology is not immune to the typical limitations regarding the investigation of large-scale systems containing over $10^3$ atoms. Yet, we tested our methodology with relatively large systems, e.g. nanoribbon systems with $\sim 2000$ atoms along their length. A single-point calculation to determine the accuracy of the method took approximately 30--60 minutes to complete, depending on the ensemble size. Depending on the computational resources available, this computing time can be significantly reduced. Systems treated with more complex Hamiltonians, beyond the nearest-neighbour single-orbital tight-binding approximation, will certainly require longer computation times. Nonetheless, our methodology can be paired with other scalable electronic transport approaches, such as the one proposed by Fan et al. \cite{FAN201428}, in which an optimization of the linear-scaling Kubo--Greenwood formula is presented, allowing large-scale calculations in systems of $\sim 10^6$ atoms.
\section{Conclusions}
In this manuscript, we illustrated the use of an inversion methodology capable of extracting information from disordered systems, more specifically host media perturbed by local potentials representing impurities or dopants. In situations where the initial conditions of the system under study are known, the method may sound unimpressive. When we do not have access to those initial conditions, however, one needs a way of retrieving the system's initial setup by inspecting response functions of the system subjected to perturbations. In this work, we studied a series of solid state systems, ranging from an electron gas and a 1D atomic chain to carbon nanotubes, graphene and hBN nanoribbons, whose response function is the conductance or electronic transmission. Each of these (host) solid state systems was perturbed by a certain concentration of impurities, whose presence induces fluctuations in the transmission response functions, turning them into rather noisy spectra. It is not straightforward to deduce the concentration of impurities doping the hosts by only looking at the noisy profile of the transmission curve. We demonstrated the generality of the inversion methodology proposed by Mukim et al. \cite{shardul}, in which unknown quantities, such as the impurity concentration in doped nanoscale systems, can be uncovered by computing the so-called ``misfit function'', an objective function written in terms of the configurationally averaged conductance and the target conductance. The misfit function reveals minima at locations in the parameter space that correspond to the unknown initial conditions of the system, in this case uniquely characterized by the concentration of impurities doping the host material. The method relies on numerous control parameters that determine its accuracy and performance, such as the relative error to the target, the bandwidth window, and the number of samples in the configurational-average ensemble, to name but a few.
We observed that the accuracy of the method improves considerably when: (i) sufficiently large bandwidths are selected for the integration of the misfit function, (ii) the sampling of the ensemble is increased, and (iii) the target systems are in a dilute regime with a sufficiently low concentration of dopants. In summary, the method proved capable of ``decoding'' noisy transmission response functions into a more meaningful mathematical representation, the misfit function, which exhibits minima at the a priori unknown initial conditions of the studied systems.
\section*{Acknowledgements}
This publication has emanated from research supported in part by a research grant from Science Foundation Ireland (SFI) under Grant Number SFI/12/RC/2278-P2. This work was also supported by U of C start-up funding and partially supported by the Natural Sciences and Engineering Research Council of Canada (NSERC), [Discovery Grant funding reference number xxxxxx]. We also acknowledge WestGrid (www.westgrid.ca), Compute Canada Calcul Canada (www.computecanada.ca), and CMC Microsystems (www.cmc.ca) for computational resources.
\
\
\section{Answers to Referee \#1}
\begin{enumerate}
\item `` {\it The authors recognized the importance of inverse problems in nano science, and with this work they certainly contribute to expanding knowledge about linking structure and property, in this case resolving the role of the concentration of impurities in the conductance fluctuations in the spectra, or in more general terms, transforming a noisy transmission response function into a mathematical representation.}''
{\bf Our answer:} We thank the referee for recognising the value of our contribution. It is worth reiterating that the goal of our manuscript was to demonstrate that the methodology in question is not exclusive to graphene but is useful for a wider range of applications and materials, which certainly adds value to this inversion tool.\\
\item ``{\it The approach to solving an inverse problem depends on scale. Namely, in this specific case the authors tackle a mathematically well-defined inverse problem, but they are dealing with a very limited, I would dare to say rather idealized case of a physical system. Of course I do not mean to undermine this work. However, in nanoscience one is dealing with large systems, containing $>=10^3$ atoms and often much more, thus different scale of the problem (e.g., Ann. Phys. 527, 187). Thus, in those cases this methodology is not applicable and other approaches were put forward. For example, extracting structural information from excitonic spectra of self-assembled semiconductor quantum dots (e.g., PRB 79, 075443, PRB 80, 035328, PRB 84, 235317, etc.).}''
{\bf Our answer:} The referee is correct in pointing out that the systems we presented contain $\leq 10^3$ atoms. However, the methodology can be scaled up to larger system sizes. The methodology is robust and works for a wide range of sizes, up to the order of the localization length. In fact, the method performs well in the ballistic, diffusive, and localized transport regimes. Figure~\ref{errorl} below shows the error ($\alpha$) as a function of device size ($L$) for an armchair edge graphene nanoribbon doped with an impurity concentration of $n=4\%$. For instance, for the case of $L=140$, our system contains $1960$ atoms, and for each $L$ size our calculations took roughly 30 minutes to 1 hour (depending on ensemble size) to complete. Nevertheless, we thank the referee for raising this issue, which is recurrent in every computational transport and electronic structure problem at the nanoscale. Certainly, for more complex Hamiltonians, calculations can take longer and the use of computing parallelization schemes may be required. For our particular inverse problem study, we suggest that our methodology could be paired with scalable transport calculations as proposed by Fan et al., Computer Physics Communications 185, 28 (2014). The latter presents a way of implementing and optimizing the linear-scaling Kubo--Greenwood formula for electronic quantum transport, allowing the authors to perform calculations in systems of $10^6$ atoms. Our misfit function methodology could be applied to a large ensemble of large-scale disordered systems treated as proposed by Fan et al. We have included a brief discussion of this point in the new version of the manuscript.
\begin{figure}[!h]
\centering
\includegraphics{error_L.pdf}
\caption{Relative error of the inversion procedure ($\alpha$) as a function of device size $L$ (in unit cells) calculated for an armchair edge graphene nanoribbon with 4\% of impurities. For reference in terms of number of atoms, the system for the case of $L=140$ contains $1960$ atoms.}
\label{errorl}
\end{figure}
\\
A brief discussion about the feasibility of our methodology for treating large-scale systems at the nanoscale is included in the new version of the manuscript on page 12, last paragraph of section 3. The new piece of text appears as:\\\\
``It is worth mentioning that our methodology is not immune to the typical limitations regarding the investigation of large-scale systems containing over $10^3$ atoms. Yet, we tested our methodology with relatively large systems, e.g. nanoribbon systems with $\sim 2000$ atoms along their length. A single-point calculation to determine the accuracy of the method took approximately 30--60 minutes to complete, depending on the ensemble size. Depending on the computational resources available, this computing time can be significantly reduced. Systems treated with more complex Hamiltonians, beyond the nearest-neighbour single-orbital tight-binding approximation, will certainly require longer computation times. Nonetheless, our methodology can be paired with other scalable electronic transport approaches, such as the one proposed by Fan et al. [..], in which an optimization of the linear-scaling Kubo--Greenwood formula is presented, allowing large-scale calculations in systems of $\sim 10^6$ atoms.''\\\\
The following citations are also included in our manuscript:\\
- V. Mlinar, ``Utilization of inverse approach in the design of materials over nano- to macro-scale,'' Ann. Phys. (Berlin) {\bf 527}, 187 (2015); \\
- Z. Fan, A. Uppstu, T. Siro, and A. Harju, ``Efficient linear-scaling quantum transport calculations on graphics processing units and applications on electron transport in graphene,'' Computer Physics Communications {\bf 185}, 28 (2014).\\
\item ``{\it In "Inverse Problems" journal, there have been many case-studies of inverse problems with, from a purely mathematical viewpoint, similar approach presented here (e.g., Inverse Probl. 29, 015006), however, this work by the level of physics insights deserves to be considered for publication in JPCM. I do recommend the authors to expand on the discussion regarding the application of inverse problems and put this work in a proper context.}''
{\bf Our answer:} We thank the referee for such a positive comment, which aligns with one particular aspect of our methodology, namely the possibility of decoding response functions of distinct physical natures. The work of Bao et al., Inverse Probl. 29, 015006 (2013), ``inverse-investigates'' mechanical properties of nanomaterials, which can certainly inspire our methodology to also decode mechanical responses of low dimensional nanostructures. Just recently, part of our team demonstrated that the methodology presented in this manuscript can be adapted to decode the optical conductivity of disordered MoS$_2$ films, as presented in New J. Phys. 23, 073035 (2021). We acknowledge that the consolidation of a methodology can only be achieved by continuous testing of its capabilities; this manuscript serves as an important probing milestone for our methodology.\\
As recommended by the reviewer, we included the following paragraph in the new version of the manuscript (page 5):\\
``In this work, we put emphasis on extracting materials' composition from electronic transport analysis following the misfit function minimization scheme illustrated in the case studies above. It is important to emphasize a general aspect of our methodology, namely the possibility of decoding not only electronic conductance curves but also response functions of other physical natures, e.g. mechanical, optical, magnetic, etc. For instance, this methodology was recently adapted to decode the optical conductivity of disordered MoS$_2$ films, as presented in Duarte et al. [..]. Another promising application of the methodology is its use in the context of mechanical response, in line with the approach proposed by Bao et al. [..].''\\\\
The following citations are also included in our manuscript:\\
- F. R. Duarte, S. Mukim, A. Molina-S\'anchez, T. G. Rappoport, and M. S. Ferreira, ``Decoding the DC and optical conductivities of disordered MoS$_2$ films: an inverse problem,'' New J. Phys. {\bf 23}, 073035 (2021);\\
- G. Bao, and X. Xu, ``An inverse random source problem in quantifying the elastic modulus of nanomaterials,'' Inverse Problems {\bf 29}, 015006 (2013).\\
\item ``{\it ...it would be useful to discuss in more detail, how realistic these predictions are and if one would be able link them to real-world counterparts. A brief discussion on this would be beneficial for the readers.}''
{\bf Our answer:} We thank the referee for the suggestion. We added to the new version of the manuscript a discussion highlighting the capability of our methodology to treat any target response function, including ones measured in the laboratory (a real-world situation). It is worth mentioning that we are currently involved in a study in which conductance measurements are being generated specifically for the purposes of inversion, which will enable us to carry out a detailed quantitative analysis of our methodology with experimental data as well. The illustrative analysis performed in this manuscript, with emphasis on numerical target conductance curves, is meant to further our understanding of how to control the main methodology parameters to achieve the desired accuracy and performance. The discussion added on page 5 reads:\\
``Note also that our misfit function methodology is not limited to conductance curves derived from numerical methods; our method is applicable to any energy-dependent response function that can serve as a ``target'' function for the misfit function minimization scheme. Typical experimental data for this methodology could come from Hall-bar experiments, in our case conducted in 2D nanostructures. As long as the main methodology parameters (i.e., bandwidth, size of the ensemble, transport regime, and Hamiltonian flavor) are set to achieve the desired accuracy, convergence to a minimum of the misfit function can be expected for any fluctuating energy-dependent response function obtained from experiments or by numerical means.''\\\\
\end{enumerate}
\section{Answers to Referee \#2}
\begin{enumerate}
\item ``{\it The authors study how the degree of disorder in a nanostructure can be determined from the energy resolved transmission. The manuscript on its own, is interesting, well written and should be suitable for publication. However, the manuscript is largely identical to the paper published by the authors in Phys. Rev. B 102, 075409 (2020). From my point of view, the authors just applied their method on some other systems, CNTs and hBN nanoribbons, without obtaining any additional insight that would justify another paper.}''
{\bf Our answer:} We thank the referee for acknowledging that the manuscript is interesting, well written and worthy of publication. Regarding the referee's view that this work is not too dissimilar from our original publication (Phys. Rev. B 102, 075409 (2020)), we must emphasise that the objective of the paper is to demonstrate that the methodology works for materials besides graphene. Despite our claims of generality, the method had previously been criticised as being suitable only for graphene, given the linear dispersion relation characteristic of that material. With that in mind, we have presented in this manuscript a few other cases, ranging from simple square barriers to h-BN, which possesses a more elaborate electronic structure. To consolidate a new methodology in a way that it is adopted by the wider community, it is essential to put all its capabilities to the test and to demonstrate where it is applicable. The wider its applicability, the more interest it is likely to attract, and that is precisely the purpose of the current manuscript, {\it i.e.}, to demonstrate the versatility, robustness and generality of our method.\\
\item ``{\it The authors use the transmission T and conductance gamma. However, both quantities are essentially identically (apart form a factor $gamma0= e^2/h$). Only one of these quantities should be used and defined in the manuscript to improve its readability.}''
{\bf Our answer:} We thank the referee for the remark. To improve readability of our manuscript as suggested by the referee, we opted to present our results in terms of $T$ for electronic transmission.\\
\item ``{\it The left hand side of Equation (4) depends on the disorder concentration n. The dependences of the right hand side on n should be indicated explicitly in the equation and not only in the text below.}''
{\bf Our answer:} We fixed equation (4) in the new version of the manuscript to accommodate the explicit dependence on $n$, which now appears as:
\begin{equation}
\chi(n)={1 \over {\cal E}_+ - {\cal E}_-}\,\int_{{\cal E}_-}^{{\cal E}_+} dE \, (T(E) - \langle T(E, n) \rangle)^2\,\,,
\label{misfit}
\end{equation}
\vspace{0.5cm}
\item ``{\it The comment on page 7, line 10: "Nonetheless, the integration in equation (4) may not be necessary..." is somehow trivial as integrations is essentially summation.}''
{\bf Our answer:} We rearranged this sentence in the new version of the manuscript and it now reads:
``The integration in equation (4) can hence be replaced with a discrete sum without major impact on the accuracy of the inversion procedure.''
\end{enumerate}
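The point addressed in the last answer, namely that the bandwidth integral in equation (4) can be replaced by a discrete sum, is easy to check numerically. In the sketch below the smooth integrand is synthetic and merely stands in for the squared deviation $(T(E) - \langle T(E,n) \rangle)^2$:

```python
import math

# Sample a smooth synthetic "squared deviation" on the bandwidth [Emin, Emax].
Emin, Emax, nE = -1.0, 1.0, 401
dE = (Emax - Emin) / (nE - 1)
energies = [Emin + k * dE for k in range(nE)]
integrand = [(0.3 * math.sin(3.0 * E)) ** 2 for E in energies]

# Trapezoidal approximation of the normalized bandwidth integral ...
trap = (sum(integrand) - 0.5 * (integrand[0] + integrand[-1])) * dE / (Emax - Emin)
# ... versus the plain discrete average over the sampled energies.
disc = sum(integrand) / nE

print(abs(trap - disc))  # small: the two estimates agree up to boundary terms of order dE
```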
\section{List of changes:}
\begin{itemize}
\item New citations included in the new version of the manuscript are:\\
- V. Mlinar, ``Utilization of inverse approach in the design of materials over nano- to macro-scale,'' Ann. Phys. (Berlin) {\bf 527}, 187 (2015);\\
- Z. Fan, A. Uppstu, T. Siro, and A. Harju, ``Efficient linear-scaling quantum transport calculations on graphics processing units and applications on electron transport in graphene,'' Computer Physics Communications {\bf 185}, 28 (2014);\\
- F. R. Duarte, S. Mukim, A. Molina-S\'anchez, T. G. Rappoport, and M. S. Ferreira, ``Decoding the DC and optical conductivities of disordered MoS$_2$ films: an inverse problem,'' New J. Phys. {\bf 23}, 073035 (2021);\\
- G. Bao, and X. Xu, ``An inverse random source problem in quantifying the elastic modulus of nanomaterials,'' Inverse Problems {\bf 29}, 015006 (2013).\\
\item A brief discussion on the generality and context of the methodology was added on page 5:\\
``In this work, we put emphasis on extracting materials' composition from electronic transport analysis following the misfit function minimization scheme illustrated in the case studies above. It is important to emphasize a general aspect of our methodology, namely the possibility of decoding not only electronic conductance curves but also response functions of other physical natures, e.g. mechanical, optical, magnetic, etc. For instance, this methodology was recently adapted to decode the optical conductivity of disordered MoS$_2$ films, as presented in Duarte et al. [..]. Another promising application of the methodology is its use in the context of mechanical response, in line with the approach proposed by Bao et al. [..].''\\
\item A discussion on the applicability of our methodology beyond numerical target functions, e.g. treatment of experimental data, is also added on page 5:\\
``Note also that our misfit function methodology is not limited to conductance curves derived from numerical methods; our method is applicable to any energy-dependent response function that can serve as a ``target'' function for the misfit function minimization scheme. Typical experimental data for this methodology could come from Hall-bar experiments, in our case conducted in 2D nanostructures. As long as the main methodology parameters (i.e., bandwidth, size of the ensemble, transport regime, and Hamiltonian flavor) are set to achieve the desired accuracy, convergence to a minimum of the misfit function can be expected for any fluctuating energy-dependent response function obtained from experiments or by numerical means.''\\
\item Aligned with the new citations above, a brief discussion in terms of feasibility of the methodology was added on page 12, last paragraph of section 3, which says:\\
``It is worth mentioning that our methodology is not immune to the typical limitations regarding the investigation of large-scale systems containing over $10^3$ atoms. Yet, we tested our methodology with relatively large systems, e.g. nanoribbon systems with $\sim 2000$ atoms along their length. A single-point calculation to determine the accuracy of the method took approximately 30--60 minutes to complete, depending on the ensemble size. Depending on the computational resources available, this computing time can be significantly reduced. Systems treated with more complex Hamiltonians, beyond the nearest-neighbour single-orbital tight-binding approximation, will certainly require longer computation times. Nonetheless, our methodology can be paired with other scalable electronic transport approaches, such as the one proposed by Fan et al. [..], in which an optimization of the linear-scaling Kubo--Greenwood formula is presented, allowing large-scale calculations in systems of $\sim 10^6$ atoms.''\\
\item Electronic transmission results appearing in the figures are expressed using the notation $T$. Figure captions 1 and 2 were altered in accordance with the new notation, as was equation (6).\\
\item Equation (4) in the manuscript was corrected to include the proper dependence on the concentration of impurities $n$.\\
\item The sentence on page 7, line 10, was altered to: ``The integration in equation (4) can hence be replaced with a discrete sum without major impact on the accuracy of the inversion procedure.''
\end{itemize}
\end{document}
\section{Introduction}
In this paper we propose and analyse a novel numerical approximation of the
following moving boundary problem. Let $\Omega\subset{\mathbb R}^d$, $d\geq2$,
be a domain with a Lipschitz boundary $\partial\Omega$ and
outer unit normal $\vec\nu_{\Omega}$.
Given the hypersurface $\Gamma(0) \subset \Omega$,
find $u:\Omega\times [0,T]\to{\mathbb R}$ and the evolving hypersurface
$(\Gamma(t))_{t\in[0,T]}$ such that for all $t\in(0,T]$ the following
conditions hold:
\begin{subequations} \label{eq:MS}
\begin{alignat}{2}
-\Delta u &=0 \quad &&\text{in } \Omega \setminus \Gamma(t),\label{eq:MSa}\\
u &= \varkappa \quad &&\text{on } \Gamma(t),\label{eq:MSb}\\
\left[\frac{\partial u}{\partial{\vec\nu}}\right]_{\Gamma(t)}
&= -\mathcal{V}\quad && \text{on } \Gamma(t),\label{eq:MSc}\\
\frac{\partial u}{\partial {\vec\nu_\Omega}} &= 0 \quad && \text{on }
\partial\Omega,\label{eq:MSd}
\end{alignat}
\end{subequations}
where $\vec\nu$ is the outer unit normal of $\Gamma(t)$,
$\varkappa$ is its mean curvature,
$[\cdot]_{\Gamma(t)}$ denotes the jump of a quantity across the interface
$\Gamma(t)$ and $\mathcal{V}$ is the normal velocity of
$(\Gamma(t))_{t\in[0,T]}$. Here our sign convention is such that the unit
sphere has mean curvature $\varkappa = 1 - d < 0$.
The problem \eqref{eq:MS} is usually called the Mullins--Sekerka problem,
or the two-sided Mullins--Sekerka flow,
and geometrically it can be viewed as a prototype for a
curvature driven interface evolution that involves
quantities defined in the bulk regions surrounding the interface.
Alternative names for \eqref{eq:MS} in the literature are Hele--Shaw flow
with surface tension, or quasi-static Stefan problem. For theoretical
results on the existence of strong and weak solutions to \eqref{eq:MS}
we refer to \cite{Chen93,ChenHY96,EscherS97} and the references therein.
Physically, the system \eqref{eq:MS} was derived as a model
for solidification and liquefaction of materials of negligible specific heat,
\cite{MullinsS63}. In addition, the Mullins--Sekerka problem arises as the
sharp interface limit of the non-degenerate Cahn--Hilliard equation,
as was proved in \cite{AlikakosBC94}. Here we recall that the Cahn--Hilliard
equation models the process of phase separation and coarsening in melted
alloys, \cite{CahnH58}.
As regards the numerical approximation of the Mullins--Sekerka problem
\eqref{eq:MS}, several different approaches are available from the literature.
Approximations based on a boundary integral formulation can be found in e.g.\
\cite{BatesCD95,ZhuCH96,Mayer00}, while a front-tracking method based on
parametric finite elements has been proposed in \cite{dendritic}. For a
finite difference approximation of a
level-set formulation we refer to \cite{ChenMOS97}, while finite element
approximations of phase-field models have been considered in
\cite{FengP04,FengP04a,eck,vch}.
In this paper we will consider a front-tracking method, where the numerical
approximation of the interface $\Gamma(t)$ is completely independent of the
finite element mesh for the bulk equation \eqref{eq:MSa}.
In fact, we will propose an improvement for the unfitted finite element
approximation that was introduced by the author together with John W.\ Barrett
and Harald Garcke in \cite{dendritic}. Here we will put particular emphasis on
the conservation of physically relevant properties on the discrete level.
By way of motivation, we observe that it is not difficult to prove that a
solution to the Mullins--Sekerka problem \eqref{eq:MS} reduces the surface
area $|\Gamma(t)|$, while it maintains the volume of the enclosed domain
$\Omega_-(t)$. In particular, it holds that
\begin{equation} \label{eq:dte}
\dd{t} |\Gamma(t)|
= \dd{t} \int_{\Gamma(t)} 1 \dH{d-1}
= - \int_{\Gamma(t)} \varkappa\mathcal{V} \dH{d-1}
= -\int_{\Omega} |\nabla u|^2 \dL{d} \leq 0
\end{equation}
and
\begin{equation} \label{eq:dtv}
\dd{t} \operatorname{vol}(\Omega_-(t)) = \int_{\Gamma(t)} \mathcal{V} \dH{d-1}
= 0,
\end{equation}
see e.g.\ Remark~105 in \cite{bgnreview}. We remark that these properties
motivate the interpretation of \eqref{eq:MS} as a volume
preserving gradient flow of the surface area.
Examples for volume preserving gradient flows of the surface area that only
depend on geometric properties of the interface are the conserved mean
curvature flow and surface diffusion, \cite{CahnT94,TaylorC94}.
In contrast, the flow \eqref{eq:MS} also depends on the field $u$ that is
defined in the bulk.
A detailed description of the gradient flow structure for \eqref{eq:MS}
can be found in \cite[Appendix~A]{dendritic}.
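For completeness, we spell out the middle step in \eqref{eq:dte}: testing the weak form of \eqref{eq:MSa}, \eqref{eq:MSc}, \eqref{eq:MSd}, i.e.\ $\int_{\Gamma(t)} \phi \mathcal{V} \dH{d-1} = \int_\Omega \nabla u \cdot \nabla \phi \dL{d}$ for all $\phi \in H^1(\Omega)$ (see Section~\ref{sec:weak}), with $\phi = u$ and recalling $u = \varkappa$ on $\Gamma(t)$ from \eqref{eq:MSb} yields
\begin{equation*}
-\int_{\Gamma(t)} \varkappa \mathcal{V} \dH{d-1}
= -\int_{\Gamma(t)} u \mathcal{V} \dH{d-1}
= -\int_\Omega |\nabla u|^2 \dL{d},
\end{equation*}
while the choice $\phi = 1$ immediately gives the volume conservation \eqref{eq:dtv}.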
Clearly, for a numerical approximation of \eqref{eq:MS} it would be highly
desirable to have a discrete analogue of the energy dissipation law
\eqref{eq:dte} and the volume conservation property \eqref{eq:dtv}.
The fully discrete method from \cite{dendritic} satisfies a discrete analogue
of \eqref{eq:dte}, in particular it is unconditionally stable. But a discrete
version of \eqref{eq:dtv} does not hold. That means that for large time steps,
and in certain situations, a significant loss of mass can be observed in
computations. On utilizing very recent ideas from \cite{JiangL21,BaoZ21},
we will appropriately adapt the fully discrete scheme from \cite{dendritic}
to obtain a new method for \eqref{eq:MS} that satisfies discrete analogues of
both \eqref{eq:dte} and \eqref{eq:dtv}. We believe this is the first such
fully discrete approximation of \eqref{eq:MS} in the literature.
In many physical applications, e.g.\ when considering the solidification
or liquefaction of materials, the density of the interfacial energy is
directionally dependent. A typical example for such an anisotropic surface
energy is
\begin{equation} \label{eq:Egamma}
|\Gamma(t)|_\gamma = \int_{\Gamma(t)} \gamma(\vec\nu) \dH{d-1},
\end{equation}
where $\gamma$ is a given anisotropy function.
We refer to \cite{TaylorCH92,BellettiniNP99,DeckelnickDE05,Giga06}
for more details on
anisotropic surface energies. On defining the anisotropic mean curvature
$\varkappa_\gamma$ as the first variation of \eqref{eq:Egamma}, so that e.g.\
$\dd{t} |\Gamma(t)|_\gamma
= - \int_{\Gamma(t)} \varkappa_\gamma\mathcal{V} \dH{d-1}$, we can introduce
the anisotropic Mullins--Sekerka problem by replacing $\varkappa$ with
$\varkappa_\gamma$ in \eqref{eq:MS}. Then the energy dissipation \eqref{eq:dte}
and volume conservation \eqref{eq:dtv} hold as before, where of course in the
former we need to replace $|\Gamma(t)|$ with $|\Gamma(t)|_\gamma$ and
$\varkappa$ with $\varkappa_\gamma$. The numerical method we discuss in this
paper, by virtue of being derived from the scheme in \cite{dendritic},
can deal with the anisotropic Mullins--Sekerka problem as well. In addition,
for a class of anisotropies that was first proposed in \cite{triplejANI,ani3d},
the anisotropic scheme will still be structure preserving, in the sense that
discrete analogues of the anisotropic \eqref{eq:dte} and \eqref{eq:dtv}
will hold.
In summary, the novel fully practical and fully discrete numerical method
proposed in this paper has the following properties:
\begin{itemize}[itemsep=0.5pt,topsep=-3pt]
\item
The method is unconditionally stable, i.e.\ it mimics \eqref{eq:dte} on the
discrete level.
\item
The volume of the two phases, i.e.\ the interior and the exterior of the
interface, is conserved exactly, as a fully discrete analogue to
\eqref{eq:dtv}.
\item
The polyhedral interface approximation maintains a nice mesh property,
leading to asymptotically equidistributed polygonal curves in the case $d=2$
for an isotropic surface energy.
\item
The method is unfitted, meaning that deformations of the bulk mesh are
avoided and no remeshing of the bulk triangulation is necessary.
\item
The method can take an anisotropic surface energy into account, meaning that
a discrete analogue of the anisotropic generalization of \eqref{eq:dte}
still holds on the fully discrete level.
\end{itemize}
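To illustrate the parametric point of view behind these properties, the sketch below computes the discrete curvature of a closed polygonal curve for $d=2$ from the variational representation of curvature with mass lumping, in the spirit of parametric front-tracking methods. It is not the scheme proposed in this paper, and all names are ours; note that, consistent with the sign convention above, a circle of radius $R$ comes out with $\varkappa = -1/R$:

```python
import math

def polygonal_curvature(pts):
    """Signed discrete curvature at each vertex of a closed,
    counterclockwise polygon: curvature vector = (difference of
    adjacent unit edge tangents) / lumped vertex mass, projected
    onto the outward vertex normal."""
    n = len(pts)
    kappas = []
    for i in range(n):
        xm, x0, xp = pts[i - 1], pts[i], pts[(i + 1) % n]
        hm, hp = math.dist(xm, x0), math.dist(x0, xp)
        tm = ((x0[0] - xm[0]) / hm, (x0[1] - xm[1]) / hm)  # incoming tangent
        tp = ((xp[0] - x0[0]) / hp, (xp[1] - x0[1]) / hp)  # outgoing tangent
        q = 0.5 * (hm + hp)                                # lumped mass
        kv = ((tp[0] - tm[0]) / q, (tp[1] - tm[1]) / q)    # curvature vector
        # outward vertex normal: rotate the averaged tangent by -90 degrees
        ax, ay = tm[0] + tp[0], tm[1] + tp[1]
        s = math.hypot(ax, ay)
        nu = (ay / s, -ax / s)
        kappas.append(kv[0] * nu[0] + kv[1] * nu[1])
    return kappas

# Circle of radius R, oriented counterclockwise: expect kappa = -1/R.
R, N = 2.0, 100
circle = [(R * math.cos(2 * math.pi * k / N), R * math.sin(2 * math.pi * k / N))
          for k in range(N)]
print(max(abs(k + 1.0 / R) for k in polygonal_curvature(circle)))  # tiny deviation
```

For the uniformly sampled circle the mass-lumped formula reproduces $-1/R$ up to round-off, which hints at why (asymptotically) equidistributed polygonal meshes are desirable.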
The remainder of the paper is organized as follows. In Section~\ref{sec:weak}
we introduce a weak formulation for the Mullins--Sekerka problem \eqref{eq:MS}
on which our finite element method is going be based. We also state a
semidiscrete continuous-in-time approximation and briefly analyse its
properties. Our novel fully discrete finite element approximation is presented
and analysed in Section~\ref{sec:fd}, where in order to focus on the structure
preserving aspect of the method, we at first concentrate on the isotropic case.
Subsequently, in Section~\ref{sec:ani}, we discuss
the extension of the weak formulation and the finite element scheme to the
anisotropic case. Finally, in Section~\ref{sec:nr} we consider several
numerical simulations for the introduced numerical method, including some
convergence experiments.
\setcounter{equation}{0}
\section{Weak formulation and semidiscrete approximation} \label{sec:weak}
Our parametric finite element method will be based on a suitable weak
formulation of \eqref{eq:MS}, which we introduce in this section. Here we
follow the notation and presentation from the recent review article
\cite{bgnreview}.
Let
\[\GT = \bigcup_{t\in[0,T]} (\Gamma(t)\times\{t\})\]
be a smooth evolving hypersurface, such that for
every $t \in [0,T]$ the closed hypersurface $\Gamma(t) \subset \Omega$
partitions the domain $\Omega$
into two phases: the interior $\Omega_-(t)$ and the
exterior $\Omega_+(t) = \Omega \setminus \overline{\Omega_-(t)}$, so that
$\partial\Omega_-(t) = \Gamma(t)$. In what follows, we will often not
distinguish between $\Gamma(t) \times \{t\}$ and $\Gamma(t)$. Moreover, as we
are interested in a parametric formulation of the evolving interface, we assume
that $\vec x : \Upsilon\times [0,T] \to {\mathbb R}^d$ is a global parameterization
of $(\Gamma(t))_{t\in[0,T]}$, where $\Upsilon \subset {\mathbb R}^d$ is a smooth
reference manifold. We recall that the induced full velocity of
$\Gamma(t)$ is defined by
\begin{equation*}
\vec{\mathcal{V}}(\vec x(\vec q,t),t) = (\partial_t\vec x) (\vec q,t)
\quad\forall\ (\vec q,t) \in \Upsilon \times [0,T],
\end{equation*}
and satisfies $\vec{\mathcal{V}} \cdot \vec\nu = \mathcal{V}$.
Multiplying \eqref{eq:MSa} with a test
function $\phi \in H^1(\Omega)$, integrating over $\Omega$
and performing integration by parts yields
\[
0 = \int_{\Omega_-(t)\cup\Omega_+(t)} \phi \Delta u \dL{d}
= \int_{\partial\Omega} \phi\frac{\partial u}{\partial \vec\nu_\Omega} \dH{d-1}
-\int_{\Gamma(t)}\phi
\left[\frac{\partial u}{\partial \vec\nu}\right]_{\Gamma(t)}\dH{d-1}
-\int_\Omega \nabla u \cdot \nabla\phi \dL{d} ,
\]
which in view of the conditions \eqref{eq:MSc} and \eqref{eq:MSd} reduces to
\begin{equation*}
0 = \int_{\Gamma(t)}\phi \mathcal{V}\dH{d-1}
-\int_\Omega \nabla u \cdot \nabla\phi \dL{d} .
\end{equation*}
The only other ingredient needed for the weak formulation is the well-known
variational formulation of mean curvature, given by
\begin{equation} \label{eq:varkappa}
\int_{\Gamma(t)} \varkappa\vec\eta\cdot\vec\nu + \nabla_{\!s}\vec{\rm id}: \nabla_{\!s}\vec\eta
\dH{d-1}
= 0 \qquad \forall\ \vec\eta \in [H^1(\Gamma(t))]^d,
\end{equation}
where $\vec{\rm id}$ denotes the identity function in ${\mathbb R}^d$ and $\nabla_{\!s}$ is
the surface gradient on $\Gamma(t)$,
see e.g.\ Remark~22 in \cite{bgnreview}. Hence, on denoting the
$L^2$--inner products over $\Omega$ and $\Gamma(t)$ by
$(\cdot,\cdot)$ and $\langle\cdot,\cdot\rangle_{\Gamma(t)}$, respectively,
we can state the weak formulation as follows.
Given a closed hypersurface $\Gamma(0) \subset \Omega$,
we seek an evolving hypersurface $(\Gamma(t))_{t\in[0,T]}$
that separates $\Omega$ into $\Omega_-(t)$ and $\Omega_+(t)$,
with a global parameterization and induced velocity field $\vec{\mathcal{V}}$,
and $\varkappa \in L^2(\GT)$ as well as
$u : \Omega \times [0,T] \to {\mathbb R}$, such that for almost all $t \in (0,T)$
it holds for $(u(\cdot,t),\vec {\mathcal V}(\cdot,t),\varkappa(\cdot,t))\in
H^1(\Omega) \times [L^2(\Gamma(t))]^d \times L^2(\Gamma(t))$ that
\begin{subequations} \label{eq:WMS}
\begin{align} \label{eq:WMS1}
\left(\nabla u,\nabla\phi\right) -
\left\langle \vec{\mathcal{V}},\phi\vec\nu \right\rangle_{\Gamma(t)} & = 0
\qquad\forall\ \phi\in H^1(\Omega),\\
\left\langle u - \varkappa,\chi \right\rangle_{\Gamma(t)}
& = 0 \qquad\forall\ \chi\in L^2(\Gamma(t))\label{eq:WMS2},\\
\left\langle \varkappa\vec\nu,\vec\eta \right\rangle_{\Gamma(t)}
+ \left\langle \nabla_{\!s}\vec{\rm id} , \nabla_{\!s}\vec\eta
\right\rangle_{\Gamma(t)}& =0 \qquad
\forall\ \vec\eta\in [H^1(\Gamma(t))]^d. \label{eq:WMS3}
\end{align}
\end{subequations}
Clearly, choosing $\phi = u$ in \eqref{eq:WMS1} and
$\chi = \mathcal{V} = \vec{\mathcal{V}} \cdot \vec\nu$ in \eqref{eq:WMS2}
yields the energy dissipation law
\eqref{eq:dte}, while choosing $\phi = 1$ in \eqref{eq:WMS1} leads to the
volume conservation property \eqref{eq:dtv}.
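In detail, on recalling the transport identity
$\dd{t}\left|\Gamma(t)\right|
= \left\langle \nabla_{\!s}\vec{\rm id}, \nabla_{\!s}\vec{\mathcal{V}}
\right\rangle_{\Gamma(t)}$, see e.g.\ \cite{bgnreview}, the formal computation
behind the dissipation law reads
\begin{align*}
\dd{t}\left|\Gamma(t)\right|
= \left\langle \nabla_{\!s}\vec{\rm id}, \nabla_{\!s}\vec{\mathcal{V}}
\right\rangle_{\Gamma(t)}
= - \left\langle \varkappa\vec\nu, \vec{\mathcal{V}} \right\rangle_{\Gamma(t)}
= - \left\langle \varkappa, \mathcal{V} \right\rangle_{\Gamma(t)}
= - \left\langle u, \mathcal{V} \right\rangle_{\Gamma(t)}
= - \left(\nabla u, \nabla u\right) \leq 0,
\end{align*}
where we have used \eqref{eq:WMS3} with $\vec\eta = \vec{\mathcal{V}}$,
\eqref{eq:WMS2} with $\chi = \mathcal{V}$, and \eqref{eq:WMS1} with $\phi = u$.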
Mimicking these testing procedures on the discrete level will be crucial to
prove the structure preserving aspect of our finite element approximations.
For the numerical approximation of \eqref{eq:WMS} we first introduce the
necessary finite element space in the bulk. To this end, we assume that
$\Omega$ is a polyhedral domain. Then let $\mathcal{T}^h$ be a
regular partitioning of $\Omega$ into disjoint open simplices, so that
$\overline{\Omega}=\cup_{o\in\mathcal{T}^h}\overline{o}$,
see \cite{Ciarlet78}.
Associated with $\mathcal{T}^h$ is the finite element space
\begin{equation} \label{eq:Sh}
S^h = \left\{\chi \in C^0(\overline{\Omega}) : \chi_{\mid_{o}}
\text{ is affine } \forall\ o \in \mathcal{T}^h\right\}
\subset H^1(\Omega).
\end{equation}
In addition we need appropriate parametric finite element spaces.
Let a polyhedral hypersurface $\Gamma^h \subset {\mathbb R}^d$ be given by
\begin{equation} \label{eq:Gammah}
\Gamma^h = \bigcup_{j=1}^J \overline{\sigma_j},
\end{equation}
where $\{\sigma_j\}_{j=1}^J$ is a family of disjoint,
(relatively) open $(d-1)$-simplices, such that
$\overline{\sigma_i}\cap\overline{\sigma_j}$ for $i\not=j$ is either empty or
a common $k$-simplex of $\overline{\sigma_i}$ and $\overline{\sigma_j}$,
$0 \leq k \leq d-2$. We denote the vertices of $\Gamma^h$
by $\{\vec q_k\}_{k=1}^K$, and assume that the vertices of $\sigma_j$
are given by $\{\vec q_{j,k}\}_{k=1}^{d}$, $j=1,\ldots,J$.
Here the numbering of the local vertices is assumed to be such that
\begin{equation} \label{eq:nuh}
\vec\nu^h =
\frac{(\vec q_{j,2}-\vec q_{j,1}) \land \cdots \land
(\vec q_{j,d}-\vec q_{j,1})}{|(\vec q_{j,2}-\vec q_{j,1}) \land
\cdots \land (\vec q_{j,d}-\vec q_{j,1})|}
\qquad\text{on } \sigma_j, \quad j = 1,\ldots,J ,
\end{equation}
defines the outer normal $\vec\nu^h \in [L^\infty(\Gamma^h)]^d$ to the interior
$\Omega^h_-$ of $\Gamma^h = \partial\Omega^h_-$.
Here we recall the definition of the wedge product from
\cite[Definition~45]{bgnreview}, i.e.\ for
$\vec v_1, \ldots, \vec v_{d-1} \in {\mathbb R}^d$, the wedge product
is the unique vector such that
$\vec b \cdot (\vec v_1\land\cdots\land\vec v_{d-1})
= \det(\vec v_1,\ldots,\vec v_{d-1}, \vec b)$ for all $\vec b\in{\mathbb R}^d$.
It follows that it is the usual cross product
of two vectors in ${\mathbb R}^3$, and the anti-clockwise rotation
through $\frac\pi2$ of a vector in ${\mathbb R}^2$. We note also that
\begin{equation} \label{eq:detsigma}
|\sigma_j| =
\frac{1}{(d-1)!}|(\vec q_{j,2}-\vec q_{j,1}) \land \cdots \land
(\vec q_{j,d}-\vec q_{j,1})|.
\end{equation}
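Although not needed for the analysis, the two definitions above are easily
checked numerically. The following short Python sketch (using NumPy; the
function names are ours and purely illustrative) implements the determinant
characterization of the wedge product and the simplex measure with the
normalization $\frac{1}{(d-1)!}$:

```python
import numpy as np
from math import factorial

def wedge(*vs):
    """Wedge product of d-1 vectors in R^d, defined component-wise by
    w_i = det(v_1, ..., v_{d-1}, e_i), so that b . w = det(v_1, ..., v_{d-1}, b)."""
    d = vs[0].shape[0]
    w = np.empty(d)
    for i in range(d):
        e = np.zeros(d)
        e[i] = 1.0
        # determinant of the matrix with columns v_1, ..., v_{d-1}, e_i
        w[i] = np.linalg.det(np.column_stack(vs + (e,)))
    return w

def simplex_measure(*qs):
    """(d-1)-dimensional measure of a simplex with d vertices in R^d:
    |sigma| = |(q_2 - q_1) ^ ... ^ (q_d - q_1)| / (d-1)!."""
    d = qs[0].shape[0]
    edges = tuple(q - qs[0] for q in qs[1:])
    return np.linalg.norm(wedge(*edges)) / factorial(d - 1)
```

In particular, for $d=3$ the function `wedge` reproduces the cross product,
and for $d=2$ the anti-clockwise rotation through $\frac\pi2$.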
We define the finite element spaces of continuous
piecewise linear functions on $\Gamma^h$ via
\[
V(\Gamma^h) = \{\chi \in C^0(\Gamma^h) : \chi_{\mid_{\sigma_j}}
\text{ is affine for $j=1, \ldots, J$} \}\quad\text{and}\quad \underline{V}(\Gamma^h) = [V(\Gamma^h)]^d.
\]
We let $\{\phi^{\Gamma^h}_k\}_{k=1}^K$ denote
the standard basis of $V(\Gamma^h)$, i.e.\
\[
\phi^{\Gamma^h}_i(\vec q_j) = \delta_{ij}, \qquad i,j = 1,\ldots,K.
\]
Moreover, we let $\pi_{\Gamma^h}:C^0(\Gamma^h) \to V(\Gamma^h)$ be the standard
interpolation operator, and let
$\left\langle \cdot, \cdot \right\rangle_{\Gamma^h}$
denote the $L^2$--inner product on $\Gamma^h$.
For two piecewise continuous functions
$u,v\in L^\infty(\Gamma^h)$, with possible jumps
across the edges of $\{\sigma_j\}_{j=1}^J$,
we introduce the mass lumped inner product
$\left\langle\cdot,\cdot\right\rangle^h_{\Gamma^h}$ as
\begin{equation}
\left\langle u, v \right\rangle^h_{\Gamma^h} =
\frac1{d}\sum_{j=1}^J |\sigma_j|
\sum_{k=1}^{d} (uv)((\vec q_{j,k})^-),
\label{eq:ip0}
\end{equation}
where $u((\vec q)^-) =
\underset{\sigma_j\ni \vec p\to \vec q}{\lim} u(\vec p)$.
The definition \eqref{eq:ip0}
is naturally extended to vector- and tensor-valued functions.
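To illustrate the mass lumped inner product \eqref{eq:ip0} in the curve case
$d=2$, consider the following Python sketch (NumPy; the function name is ours
and purely illustrative), in which each element contributes half its length
times the endpoint values of $uv$:

```python
import numpy as np

def lumped_ip(verts, edges, u, v):
    """Mass-lumped inner product <u, v>^h on a polygonal curve in R^2:
    for d = 2, each edge sigma_j contributes (|sigma_j| / 2) times the sum
    of the products of the vertex values of u and v at its two endpoints."""
    total = 0.0
    for (k1, k2) in edges:
        h = np.linalg.norm(verts[k2] - verts[k1])   # |sigma_j|
        total += 0.5 * h * (u[k1] * v[k1] + u[k2] * v[k2])
    return total
```

For instance, on the unit square with $u = v = 1$ this returns the total
length $|\Gamma^h| = 4$, since mass lumping is exact for affine integrands.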
On recalling \eqref{eq:nuh}, we define the vertex normal vector
$\vec\omega^h \in \underline{V}(\Gamma^h)$ to be the mass-lumped $L^2$--projection
of $\vec\nu^h$ onto $\underline{V}(\Gamma^h)$, i.e.\
\begin{equation} \label{eq:omegah}
\left\langle \vec\omega^h, \vec\varphi \right\rangle_{\Gamma^h}^h
= \left\langle \vec\nu^h, \vec\varphi \right\rangle_{\Gamma^h}
\quad\forall\ \vec\varphi\in\underline{V}(\Gamma^h).
\end{equation}
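We observe that, thanks to the mass lumping, \eqref{eq:omegah} decouples into
one small system per vertex, so that $\vec\omega^h(\vec q_k)$ is simply the
$|\sigma_j|$-weighted average of the normals of the elements meeting at
$\vec q_k$. For $d=2$ this can be sketched in Python as follows (NumPy; the
function name is ours and purely illustrative; the vertices are assumed to be
ordered such that the rotated edge vectors point into the exterior):

```python
import numpy as np

def vertex_normals(verts):
    """Vertex normals omega^h on a closed polygon, obtained from the
    mass-lumped L2-projection: at each vertex, the |sigma_j|-weighted
    average of the two neighbouring edge normals."""
    K = len(verts)
    perp = lambda v: np.array([-v[1], v[0]])   # anti-clockwise rotation by pi/2
    omega = np.zeros((K, 2))
    wsum = np.zeros(K)
    for j in range(K):
        e = verts[(j + 1) % K] - verts[j]
        h = np.linalg.norm(e)                  # |sigma_j|
        nu = perp(e) / h                       # facet normal nu^h on sigma_j
        for k in (j, (j + 1) % K):             # the two endpoints of sigma_j
            omega[k] += h * nu
            wsum[k] += h
    return omega / wsum[:, None]
```

For a unit square the corner normals point into the diagonal directions with
components $\pm\frac12$, so that in general $\vec\omega^h$ does not have unit
length.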
{From} now on, we let \[{\mathcal{G}^h_T} = \bigcup_{t\in[0,T]} (\Gamma^h(t)\times\{t\})\]
be an evolving polyhedral hypersurface, so that
$\Gamma^h(t)$, for each $t\in[0,T]$, is a polyhedral surface of the form
\eqref{eq:Gammah} for fixed $J$ and $K$.
That is, $\Gamma^h(t)$ is defined through its elements
$\{\sigma^h_j(t)\}_{j=1}^J$ and its vertices $\{\vec q^h_k(t)\}_{k=1}^K$.
We will often not distinguish between ${\mathcal{G}^h_T}$ and $(\Gamma^h(t))_{t\in[0,T]}$.
Then the full velocity of $\Gamma^h(t)$ is defined by
\begin{equation} \label{eq:Vh}
\vec{\mathcal{V}}^h(\vec z, t) = \sum_{k=1}^{K}
\left[\dd{t}\vec q^h_k(t)\right] \phi^{\Gamma^h(t)}_k(\vec z)
\quad\forall\ (\vec z,t) \in {\mathcal{G}^h_T}.
\end{equation}
We also define the finite element spaces
\[
V(\GhT) = \{ \chi \in C^0({\mathcal{G}^h_T}) :
\chi(\cdot, t) \in V(\Gamma^h(t)) \quad\forall\ t \in [0,T] \}
\quad\text{and}\quad \underline{V}(\GhT) = [V(\GhT)]^d.
\]
Our unfitted semidiscrete finite element approximation of \eqref{eq:WMS} can
then be formulated as follows.
Given the closed polyhedral hypersurface $\Gamma^h(0)$,
find an evolving polyhedral hypersurface $(\Gamma^h(t))_{t\in[0,T]}$,
that separates $\Omega$ into $\Omega^h_-(t)$ and $\Omega^h_+(t)$,
with induced velocity $\vec{\mathcal{V}}^h \in \underline{V}(\GhT)$,
and $\kappa^h \in V(\GhT)$ as well as $u^h : \overline{\Omega}\times(0,T] \to {\mathbb R}$,
such that for all $t \in (0,T]$ it holds for
$(u^h(\cdot,t), \vec{\mathcal{V}}^h(\cdot, t), \kappa^h(\cdot,t))\in S^h \times
\underline{V}(\Gamma^h(t)) \times V(\Gamma^h(t))$ that
\begin{subequations} \label{eq:sdMS}
\begin{align} \label{eq:sdMS1}
\left(\nabla u^h,\nabla\phi\right) -
\left\langle \pi_{\Gamma^h(t)}
\left[\vec{\mathcal{V}}^h \cdot \vec\omega^h\right],
\phi\right\rangle_{\Gamma^h(t)}^{(h)} & = 0
\qquad\forall\ \phi\in S^h,\\
\left\langle u^h,\chi \right\rangle_{\Gamma^h(t)}^{(h)}
- \left\langle\kappa^h,\chi \right\rangle_{\Gamma^h(t)}^h
& = 0 \qquad\forall\ \chi\in V(\Gamma^h(t)),\label{eq:sdMS2}\\
\left\langle \kappa^h\vec\omega^h,\vec\eta \right\rangle_{\Gamma^h(t)}^h
+ \left\langle \nabla_{\!s}\vec{\rm id} , \nabla_{\!s}\vec\eta
\right\rangle_{\Gamma^h(t)}& =0 \qquad
\forall\ \vec\eta\in \underline{V}(\Gamma^h(t)). \label{eq:sdMS3}
\end{align}
\end{subequations}
Here and throughout, the notation $\cdot^{(h)}$ means an expression with or
without the superscript $h$. That is, the scheme \eqref{eq:sdMS}$^h$ employs
numerical integration in the two relevant terms in \eqref{eq:sdMS1} and
\eqref{eq:sdMS2}, while the scheme \eqref{eq:sdMS} uses true integration in
these two terms.
We also remark that thanks to \eqref{eq:omegah} and the piecewise constant
nature of $\vec\nu^h$, the first term in \eqref{eq:sdMS3} is equivalent
to $\langle \kappa^h\vec\nu^h,\vec\eta \rangle_{\Gamma^h(t)}^h$. We prefer
to write it in terms of $\vec\omega^h$ to make the testing procedure in the
analysis easier to follow. Before we present a proof for the structure
preserving properties of \eqref{eq:sdMS}$^{(h)}$,
we recall the following fundamental results from \cite{bgnreview}.
\begin{lemma}
Let $(\Gamma^h(t))_{t\in[0,T]}$ be an evolving polyhedral hypersurface. Then it
holds that
\begin{equation} \label{eq:dteh}
\dd{t}\left|\Gamma^h(t)\right|
= \left\langle \nabla_{\!s}\vec{\rm id}, \nabla_{\!s}\vec{\mathcal{V}}^h
\right\rangle_{\Gamma^h(t)}
\end{equation}
and
\begin{equation} \label{eq:dtvh}
\dd{t}\operatorname{vol}(\Omega^h_-(t))
= \left\langle \vec{\mathcal{V}}^h,\vec\nu^h\right\rangle_{\Gamma^h(t)}.
\end{equation}
\end{lemma}
\begin{proof}
The result \eqref{eq:dteh} directly follows from Theorem~70 and Lemma~9 in
\cite{bgnreview}, while a proof for \eqref{eq:dtvh} is given in
Theorem~71 in \cite{bgnreview}.
\end{proof}
We are now in a position to prove energy decay, volume conservation and
good mesh quality properties for a solution of \eqref{eq:sdMS}$^{(h)}$.
\begin{theorem} \label{thm:sd}
Let $(u^h, {\mathcal{G}^h_T}, \kappa^h)$ be a solution of \eqref{eq:sdMS}$^{(h)}$.
Then it holds that
\begin{equation} \label{eq:MSdteh}
\dd{t}\left|\Gamma^h(t)\right| + \left(\nabla u^h, \nabla u^h\right) = 0.
\end{equation}
Moreover we have that
\begin{equation} \label{eq:MSdtvh}
\dd{t} \operatorname{vol}(\Omega^h_-(t)) = 0.
\end{equation}
Finally, for any $t \in (0,T]$, it holds that
$\Gamma^h(t)$ is a conformal polyhedral surface.
In particular, for $d=2$, any two neighbouring elements of the curve
$\Gamma^h(t)$ either have equal length, or they are parallel.
\end{theorem}
\begin{proof}
Choosing $\phi = u^h(\cdot,t) \in S^h$ in \eqref{eq:sdMS1},
$\chi = \pi_{\Gamma^h(t)}[\vec{\mathcal{V}}^h \cdot \vec\omega^h]
\in V(\Gamma^h(t))$ in \eqref{eq:sdMS2} and
$\vec\eta = \vec{\mathcal{V}}^h(\cdot, t) \in \underline{V}(\Gamma^h(t))$ in \eqref{eq:sdMS3}
gives, on recalling \eqref{eq:dteh}, that
\begin{align*}
\dd{t}\left|\Gamma^h(t)\right| &
= \left\langle \nabla_{\!s}\vec{\rm id}, \nabla_{\!s}\vec{\mathcal{V}}^h
\right\rangle_{\Gamma^h(t)}
= - \left\langle \kappa^h\vec\omega^h,\vec{\mathcal{V}}^h
\right\rangle_{\Gamma^h(t)}^h
= - \left\langle \kappa^h , \pi_{\Gamma^h(t)}
\left[\vec{\mathcal{V}}^h \cdot \vec\omega^h\right]
\right\rangle_{\Gamma^h(t)}^h \\ &
= - \left\langle \pi_{\Gamma^h(t)}
\left[\vec{\mathcal{V}}^h \cdot \vec\omega^h\right],
u^h\right\rangle_{\Gamma^h(t)}^{(h)}
= - \left(\nabla u^h,\nabla u^h\right) ,
\end{align*}
which implies \eqref{eq:MSdteh}.
Moreover, choosing $\phi=1$ in \eqref{eq:sdMS1}
and noting \eqref{eq:omegah}, on recalling \eqref{eq:dtvh}, yields that
\begin{equation*}
\dd{t}\operatorname{vol}(\Omega^h_-(t))
= \left\langle \vec{\mathcal{V}}^h,\vec\nu^h\right\rangle_{\Gamma^h(t)}
= \left\langle \vec{\mathcal{V}}^h,\vec\omega^h\right\rangle_{\Gamma^h(t)}^h
= \left\langle \pi_{\Gamma^h(t)}
\left[\vec{\mathcal{V}}^h \cdot \vec\omega^h\right],
1\right\rangle_{\Gamma^h(t)}^{(h)}
= \left(\nabla u^h, \nabla 1\right)
= 0,
\end{equation*}
which is \eqref{eq:MSdtvh}.
Finally, the mesh properties for $\Gamma^h(t)$ follow directly from the side
condition \eqref{eq:sdMS3}, thanks to
Definition~60 and Theorem~72 in \cite{bgnreview}.
\end{proof}
\setcounter{equation}{0}
\section{Fully discrete approximation} \label{sec:fd}
The aim of this section is to introduce a fully practical, fully discrete
approximation of \eqref{eq:sdMS}$^{(h)}$ that maintains the structure preserving
properties from Theorem~\ref{thm:sd}.
Let $0 = t_0 < t_1<\ldots<t_M=T$ form a partition of the time interval $[0,T]$
with time steps $\Delta t_m= t_{m+1}-t_m$, $m=0,\ldots,M-1$.
The main idea going back to the seminal paper \cite{Dziuk91} is now to
construct polyhedral hypersurfaces $\Gamma^m$, which approximate the
true continuous solutions $\Gamma(t_m)$, in such a way that for $m\geq 0$
we obtain $\Gamma^{m+1} = \vec X^{m+1}(\Gamma^m)$ for a
parameterization $\vec X^{m+1} \in \underline{V}(\Gamma^m)$. In addition we consider a sequence of
bulk triangulations $\mathcal{T}^m$ with associated finite element spaces
$S^m$, $m=0,\ldots,M-1$, similarly to \eqref{eq:Sh}.
For motivational purposes, we first recall the linear fully discrete
approximation of \eqref{eq:sdMS} from \cite{bgnreview}.
Let the closed polyhedral hypersurface $\Gamma^0$ be an approximation of
$\Gamma(0)$.
Then, for $m=0,\ldots,M-1$, find
$(U^{m+1},\vec X^{m+1},\kappa^{m+1}) \in S^m \times \underline{V}(\Gamma^m) \times V(\Gamma^m)$ such that
\begin{subequations} \label{eq:DMS}
\begin{align}\label{eq:DMS1}
\left(\nabla U^{m+1}, \nabla\varphi\right)
- \left\langle \pi_{\Gamma^m}\left[
\frac{\vec X^{m+1}-\vec{\rm id}}{\Delta t_m} \cdot\vec\omega^m \right], \varphi
\right\rangle_{\Gamma^m}^{(h)} & = 0 \qquad\forall\ \varphi \in S^m,\\
\left\langle U^{m+1},\chi\right\rangle_{\Gamma^m}^{(h)}
-\left\langle\kappa^{m+1},\chi\right\rangle^h_{\Gamma^m}
&= 0 \qquad\forall\ \chi \in V(\Gamma^m) ,\label{eq:DMS2}\\
\left\langle \kappa^{m+1}\vec\omega^m,\vec\eta\right\rangle^h_{\Gamma^m} +
\left\langle\nabla_{\!s}\vec X^{m+1},\nabla_{\!s}\vec\eta\right\rangle_{\Gamma^m}
&= 0
\qquad\forall\ \vec\eta \in \underline{V}(\Gamma^m) \label{eq:DMS3}
\end{align}
\end{subequations}
and set $\Gamma^{m+1} = \vec X^{m+1}(\Gamma^m)$. We observe that
\eqref{eq:DMS} corresponds to \cite[(119)]{bgnreview}, which was first
introduced in \cite[(3.5)]{dendritic}. Under mild conditions on $\Gamma^m$,
existence and uniqueness for the linear system \eqref{eq:DMS}$^{(h)}$
can be shown.
Moreover, solutions to \eqref{eq:DMS}$^{(h)}$ are unconditionally stable, see
\cite[Theorem~109]{bgnreview}. However, in general the volume of the interiors
$\Omega^{m+1}_-$ and $\Omega^{m}_-$ of $\Gamma^{m+1}$ and $\Gamma^m$,
respectively, will
differ, meaning that the fully discrete scheme \eqref{eq:DMS}$^{(h)}$
is not volume preserving.
The reason for this behaviour is the explicit approximation
of $\vec\omega^h$ from \eqref{eq:sdMS1} in \eqref{eq:DMS1}. Following the
recent ideas in \cite{BaoZ21}, we now investigate a semi-implicit approximation
of $\vec\omega^h$ which will lead to a volume preserving approximation.
Given a sequence of polyhedral surfaces $(\Gamma^m)_{m=0}^M$, where each
$\Gamma^m$ is defined through its
vertices $\{\vec q^m_k\}_{k=1}^K$ and elements
$\{\sigma^m_j\}_{j=1}^J$, we define the piecewise-linear-in-time family of
polyhedral surfaces $(\widehat\Gamma^h(t))_{t \in[0,T]}$ via
\begin{equation*}
\widehat\Gamma^h(t) = \frac{t_{m+1}-t}{\Delta t_m} \Gamma^m +
\frac{t - t_m}{\Delta t_m} \Gamma^{m+1}, \qquad t \in [t_m, t_{m+1}],
\end{equation*}
which means that the polyhedral surface $\widehat\Gamma^h(t)$
is induced by the vertices
\begin{equation*}
\widehat q^h_k(t) = \frac{t_{m+1}-t}{\Delta t_m} \vec q^m_k +
\frac{t - t_m}{\Delta t_m} \vec q^{m+1}_k, \qquad t \in [t_m, t_{m+1}],
\end{equation*}
for $k=1,\ldots,K$. We note that $\widehat\Gamma^h(t_m) = \Gamma^m$,
$m=0,\ldots,M$.
Then it immediately follows from \eqref{eq:Vh} that
\begin{equation*}
\vec{\mathcal{V}}^h(\cdot, t) = \frac1{\Delta t_m} \sum_{k=1}^{K}
\left[\vec q_k^{m+1} - \vec q_k^m \right]\phi^{\widehat\Gamma^h(t)}_k
\quad\text{on } \widehat\Gamma^h(t),\qquad t \in (t_m, t_{m+1}).
\end{equation*}
On denoting the interior of $\widehat\Gamma^h(t)$ by $\widehat\Omega_-^h(t)$,
with outer unit normal $\widehat\nu^h(t)$,
the fundamental theorem of calculus, together with \eqref{eq:dtvh}, yields
that
\begin{align} \label{eq:volvol}
\operatorname{vol}(\Omega_-^{m+1}) - \operatorname{vol}(\Omega_-^{m}) & =
\operatorname{vol}(\widehat\Omega_-^h(t_{m+1})) - \operatorname{vol}(\widehat\Omega_-^h(t_{m})) =
\int_{t_m}^{t_{m+1}} \dd{t} \operatorname{vol}(\widehat\Omega_-^h(t)) \;{\rm d}t \nonumber \\ &
=\int_{t_m}^{t_{m+1}}
\left\langle \vec{\mathcal{V}}^h,\widehat\nu^h\right\rangle_{\widehat\Gamma^h(t)} \;{\rm d}t
\nonumber \\ &
=\int_{t_m}^{t_{m+1}} \left\langle
\frac1{\Delta t_m} \sum_{k=1}^{K}
\left[\vec q_k^{m+1} - \vec q_k^m \right]\phi^{\widehat\Gamma^h(t)}_k
,\widehat\nu^h\right\rangle_{\widehat\Gamma^h(t)}\;{\rm d}t
\nonumber \\ &
= \frac1{\Delta t_m} \sum_{k=1}^{K}
\left[\vec q_k^{m+1} - \vec q_k^m \right] \cdot
\int_{t_m}^{t_{m+1}} \int_{\widehat\Gamma^h(t)}
\phi^{\widehat\Gamma^h(t)}_k \widehat\nu^h \dH{d-1} \;{\rm d}t
\nonumber \\ &
= \frac1{\Delta t_m} \sum_{k=1}^{K}
\left[\vec q_k^{m+1} - \vec q_k^m \right] \cdot
\int_{t_m}^{t_{m+1}} \sum_{j=1}^J \int_{\widehat\sigma^h_j(t)}
\phi^{\widehat\Gamma^h(t)}_k \widehat\nu^h \dH{d-1} \;{\rm d}t
\nonumber \\ &
= \frac1{\Delta t_m} \sum_{k=1}^{K}
\left[\vec q_k^{m+1} - \vec q_k^m \right] \cdot
\int_{t_m}^{t_{m+1}} \sum_{j=1}^J \int_{\sigma^m_j}
\phi^{\Gamma^m}_k \dH{d-1} \widehat\nu^h\!\mid_{\widehat\sigma^h_j(t)}
\frac{|\widehat\sigma^h_j(t)|}{|\sigma^m_j|} \;{\rm d}t \nonumber \\ &
= \frac1{\Delta t_m} \int_{t_m}^{t_{m+1}} \sum_{j=1}^J \int_{\sigma^m_j}
\vec X^{m+1} - \vec{\rm id} \dH{d-1} \cdot
\widehat\nu^h\!\mid_{\widehat\sigma^h_j(t)}
\frac{|\widehat\sigma^h_j(t)|}{|\sigma^m_j|} \;{\rm d}t\nonumber \\ &
= \sum_{j=1}^J \int_{\sigma^m_j}
\vec X^{m+1} - \vec{\rm id} \dH{d-1} \cdot
\frac1{\Delta t_m |\sigma^m_j|} \int_{t_m}^{t_{m+1}}
\widehat\nu^h\!\mid_{\widehat\sigma^h_j(t)} |\widehat\sigma^h_j(t)| \;{\rm d}t,
\end{align}
where we have used the previously introduced notation
$\vec X^{m+1} = \sum_{k=1}^K \phi^{\Gamma^m}_k \vec q^{m+1}_k \in \underline{V}(\Gamma^m)$.
The calculation in \eqref{eq:volvol} suggests the definition of the piecewise
constant vector $\vec\nu^{m+\frac12} \in [L^\infty(\Gamma^m)]^d$ by setting
\begin{equation} \label{eq:nuhalf}
\vec\nu^{m+\frac12} =
\frac1{\Delta t_m |\sigma^m_j|} \int_{t_m}^{t_{m+1}}
\widehat\nu^h\!\mid_{\widehat\sigma^h_j(t)} |\widehat\sigma^h_j(t)| \;{\rm d}t
\qquad\text{on } \sigma_j^m, \quad j = 1,\ldots,J .
\end{equation}
We note that $\vec\nu^{m+\frac12}$ can be interpreted as an averaged normal
vector for the linearly interpolated surfaces between $\Gamma^m$ and
$\Gamma^{m+1}$. Note also that in general $\vec\nu^{m+\frac12}$ will not have
unit length. Overall we have proven the following result, which generalizes
the corresponding results from Theorems~2.1 and 3.1 in
\cite{BaoZ21} to the case $d\geq2$.
\begin{lemma} \label{lem:volvol}
It holds that
\[
\operatorname{vol}(\Omega_-^{m+1}) - \operatorname{vol}(\Omega_-^{m})
= \left\langle \vec X^{m+1} - \vec{\rm id} , \vec\nu^{m+\frac12}
\right\rangle_{\Gamma^m}.
\]
\end{lemma}
\begin{proof}
The desired result follows immediately from \eqref{eq:volvol} and the
definition \eqref{eq:nuhalf}.
\end{proof}
\begin{remark} \label{rem:nuhalf}
In practice, given $\Gamma^m$ and $\Gamma^{m+1}$, the vector
$\vec\nu^{m+\frac12}$ is remarkably easy to compute, since the integrand in
\eqref{eq:nuhalf} is a polynomial of degree $d-1$. In particular, it holds that
\[
\vec\nu^{m+\frac12}\!\mid_{\sigma^m_j} =
\frac1{\Delta t_m}
\int_{t_m}^{t_{m+1}}
\frac{(\widehat q^h_{j,2}(t)-\widehat q^h_{j,1}(t)) \land \cdots \land
(\widehat q^h_{j,d}(t)-\widehat q^h_{j,1}(t))}{
|(\vec q^m_{j,2}-\vec q^m_{j,1}) \land
\cdots \land (\vec q^m_{j,d}-\vec q^m_{j,1})|}
\;{\rm d}t ,
\]
where we have recalled \eqref{eq:nuh} and \eqref{eq:detsigma}. Using suitable
quadrature rules then yields in the case $d=2$ that
\begin{equation} \label{eq:nuhalf2}
\vec\nu^{m+\frac12}\!\mid_{\sigma^m_j}
= \frac12 \frac{(\vec q^m_{j,2}-\vec q^m_{j,1}
+ \vec q^{m+1}_{j,2}-\vec q^{m+1}_{j,1})^\perp}{
|\vec q^m_{j,2}-\vec q^m_{j,1}|},
\end{equation}
while for $d=3$ we obtain
\begin{align} \label{eq:nuhalf3}
\vec\nu^{m+\frac12}\!\mid_{\sigma^m_j}
& = \frac16 \frac{(\vec q^m_{j,2}-\vec q^m_{j,1}) \times
(\vec q^m_{j,3}-\vec q^m_{j,1})+(\vec q^{m+1}_{j,2}-\vec q^{m+1}_{j,1}) \times
(\vec q^{m+1}_{j,3}-\vec q^{m+1}_{j,1})}{
|(\vec q^m_{j,2}-\vec q^m_{j,1}) \times
(\vec q^m_{j,3}-\vec q^m_{j,1})|} \nonumber \\ & \quad
+ \frac16 \frac{(\vec q^m_{j,2} - \vec q^m_{j,1}
+ \vec q^{m+1}_{j,2} - \vec q^{m+1}_{j,1}) \times
(\vec q^m_{j,3} - \vec q^m_{j,1} + \vec q^{m+1}_{j,3} - \vec q^{m+1}_{j,1})}{
|(\vec q^m_{j,2}-\vec q^m_{j,1}) \times
(\vec q^m_{j,3}-\vec q^m_{j,1})|}
.
\end{align}
\end{remark}
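For $d=2$, the statement of Lemma~\ref{lem:volvol}, with $\vec\nu^{m+\frac12}$
computed via \eqref{eq:nuhalf2}, can be verified directly in a few lines of
Python (NumPy; the function names are ours and purely illustrative; the
vertices are assumed to be ordered such that the rotated edge vectors point
into the exterior):

```python
import numpy as np

def polygon_area(verts):
    """Unsigned area enclosed by a closed polygon (shoelace formula)."""
    v = np.asarray(verts)
    x, y = v[:, 0], v[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(np.roll(x, -1), y))

def volume_change(old, new):
    """Right-hand side of the volume identity for d = 2: the sum over the
    edges of |sigma_j^m| times the average displacement on sigma_j^m dotted
    with the averaged normal nu^{m+1/2} from the exact quadrature."""
    perp = lambda v: np.array([-v[1], v[0]])   # anti-clockwise rotation by pi/2
    K = len(old)
    s = 0.0
    for j in range(K):
        j2 = (j + 1) % K
        e_old = old[j2] - old[j]
        e_new = new[j2] - new[j]
        h = np.linalg.norm(e_old)              # |sigma_j^m|
        nu_half = 0.5 * perp(e_old + e_new) / h
        disp = 0.5 * ((new[j] - old[j]) + (new[j2] - old[j2]))
        s += h * np.dot(disp, nu_half)
    return s
```

For example, mapping a unit square to a scaled and translated copy, the value
of `volume_change` agrees with the difference of the enclosed areas to
rounding accuracy.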
Before we can apply the result from Lemma~\ref{lem:volvol} to the approximation
\eqref{eq:DMS}$^{(h)}$,
we need to introduce a vertex based normal corresponding to
$\vec\nu^{m+\frac12}$. Analogously to \eqref{eq:omegah} we therefore define
$\vec\omega^{m+\frac12} \in \underline{V}(\Gamma^m)$ such that
\begin{equation} \label{eq:omegahalf}
\left\langle \vec\omega^{m+\frac12}, \vec\varphi \right\rangle_{\Gamma^m}^h
= \left\langle \vec\nu^{m+\frac12}, \vec\varphi \right\rangle_{\Gamma^m}
\quad\forall\ \vec\varphi\in\underline{V}(\Gamma^m).
\end{equation}
Now our novel fully discrete approximation of \eqref{eq:sdMS}$^{(h)}$
is given as follows.
Let the closed polyhedral hypersurface $\Gamma^0$ be an approximation of
$\Gamma(0)$.
Then, for $m=0,\ldots,M-1$, find
$(U^{m+1},\vec X^{m+1},\kappa^{m+1}) \in S^m \times \underline{V}(\Gamma^m) \times V(\Gamma^m)$
and $\Gamma^{m+1} = \vec X^{m+1}(\Gamma^m)$ such that
\begin{subequations} \label{eq:fdMS}
\begin{align}\label{eq:fdMS1}
\left(\nabla U^{m+1}, \nabla\varphi\right)
- \left\langle \pi_{\Gamma^m}\left[
\frac{\vec X^{m+1}-\vec{\rm id}}{\Delta t_m} \cdot\vec\omega^{m+\frac12}\right],\varphi
\right\rangle_{\Gamma^m}^{(h)} & = 0 \qquad\forall\ \varphi \in S^m,\\
\left\langle U^{m+1},\chi\right\rangle_{\Gamma^m}^{(h)}
-\left\langle\kappa^{m+1},\chi\right\rangle^h_{\Gamma^m}
&= 0 \qquad\forall\ \chi \in V(\Gamma^m) ,\label{eq:fdMS2}\\
\left\langle \kappa^{m+1}\vec\omega^{m+\frac12},
\vec\eta\right\rangle^h_{\Gamma^m} +
\left\langle\nabla_{\!s}\vec X^{m+1},\nabla_{\!s}\vec\eta\right\rangle_{\Gamma^m}
&= 0
\qquad\forall\ \vec\eta \in \underline{V}(\Gamma^m). \label{eq:fdMS3}
\end{align}
\end{subequations}
We note that in contrast to
\eqref{eq:DMS}$^{(h)}$, the scheme \eqref{eq:fdMS}$^{(h)}$
leads to a system of nonlinear
equations at each time level, because $\vec\omega^{m+\frac12}$ depends on
$\vec X^{m+1}$.
The next theorem proves the structure preserving properties of the fully
discrete approximation \eqref{eq:fdMS}$^{(h)}$.
\begin{theorem} \label{thm:HDMS}
Let $(U^{m+1}, \vec X^{m+1},\kappa^{m+1})\in
S^m\times \underline{V}(\Gamma^m)\times V(\Gamma^m)$ be a solution to \eqref{eq:fdMS}$^{(h)}$.
Then the enclosed volume is preserved, i.e.\
\begin{equation} \label{eq:thmvol}
\operatorname{vol}(\Omega_-^{m+1}) = \operatorname{vol}(\Omega_-^{m}) .
\end{equation}
In addition, if $d=2$ or $d=3$, then the solution satisfies
the stability estimate
\begin{equation} \label{eq:thmstab}
|\Gamma^{m+1}| + \Delta t_m \left(\nabla U^{m+1}, \nabla U^{m+1} \right)
\leq |\Gamma^m|.
\end{equation}
\end{theorem}
\begin{proof}
On choosing $\varphi=1$ in \eqref{eq:fdMS1}, it follows from
\eqref{eq:omegahalf} and Lemma~\ref{lem:volvol} that
\begin{align*}
0 & = \left\langle \pi_{\Gamma^m}\left[
\frac{\vec X^{m+1}-\vec{\rm id}}{\Delta t_m} \cdot\vec\omega^{m+\frac12}\right],1
\right\rangle_{\Gamma^m}^{(h)}
= \left\langle \frac{\vec X^{m+1}-\vec{\rm id}}{\Delta t_m}, \vec\omega^{m+\frac12}
\right\rangle_{\Gamma^m}^h
=\left\langle \frac{\vec X^{m+1}-\vec{\rm id}}{\Delta t_m}, \vec\nu^{m+\frac12}
\right\rangle_{\Gamma^m} \\ &
= \frac1{\Delta t_m} \left(\operatorname{vol}(\Omega_-^{m+1}) - \operatorname{vol}(\Omega_-^{m})\right) .
\end{align*}
This proves \eqref{eq:thmvol}.
It remains to prove the stability bound. Here we choose
$\varphi= U^{m+1}$ in \eqref{eq:fdMS1},
$\chi = \pi_{\Gamma^m}[(\vec X^{m+1}-\vec{\rm id})\cdot\vec\omega^{m+\frac12}]$
in \eqref{eq:fdMS2} and
$\vec\eta = \vec X^{m+1}-\vec{\rm id}_{\mid_{\Gamma^m}}$ in \eqref{eq:fdMS3}
in order to obtain
\begin{equation} \label{eq:step1}
\Delta t_m\left(\nabla U^{m+1}, \nabla U^{m+1} \right) +
\left\langle\nabla_{\!s}\vec X^{m+1},\nabla_{\!s}(\vec X^{m+1}-\vec{\rm id})
\right\rangle_{\Gamma^m} =0.
\end{equation}
Now we recall from Lemma~57 in \cite{bgnreview} the well-known bound
\begin{equation} \label{eq:stab2d3d}
\left\langle\nabla_{\!s}\vec X^{m+1},\nabla_{\!s}(\vec X^{m+1}-\vec{\rm id})
\right\rangle_{\Gamma^m} \geq |\Gamma^{m+1}| - |\Gamma^m|
\end{equation}
for the cases $d=2$ and $d=3$. Combining \eqref{eq:step1} and
\eqref{eq:stab2d3d} yields the desired result \eqref{eq:thmstab}.
\end{proof}
In practice the system of nonlinear equations \eqref{eq:fdMS}$^{(h)}$
can be solved
with a simple lagged iteration. Given $\Gamma^m$, let $\Gamma^{m+1,0} =
\Gamma^m$. Then for $i \geq 0$ define $\vec\omega^{m+\frac12,i} \in \underline{V}(\Gamma^m)$
through \eqref{eq:omegahalf} and \eqref{eq:nuhalf}, but with $\Gamma^{m+1}$
replaced by $\Gamma^{m+1,i}$, and find
$(U^{m+1,i+1},\vec X^{m+1,i+1},\kappa^{m+1,i+1})
\in S^m \times \underline{V}(\Gamma^m) \times V(\Gamma^m)$
such that
\begin{subequations} \label{eq:itMS}
\begin{align}\label{eq:itMS1}
\left(\nabla U^{m+1,i+1}, \nabla\varphi\right)
- \left\langle \pi_{\Gamma^m}\left[
\frac{\vec X^{m+1,i+1}-\vec{\rm id}}{\Delta t_m} \cdot\vec\omega^{m+\frac12,i}\right],
\varphi \right\rangle_{\Gamma^m}^{(h)} & = 0 \qquad\forall\ \varphi \in S^m,\\
\left\langle U^{m+1,i+1},\chi\right\rangle_{\Gamma^m}^{(h)}
-\left\langle\kappa^{m+1,i+1},\chi\right\rangle^h_{\Gamma^m}
&= 0 \qquad\forall\ \chi \in V(\Gamma^m) ,\label{eq:itMS2}\\
\left\langle \kappa^{m+1,i+1}\vec\omega^{m+\frac12,i},
\vec\eta\right\rangle^h_{\Gamma^m} +
\left\langle\nabla_{\!s}\vec X^{m+1,i+1},\nabla_{\!s}\vec\eta\right\rangle_{\Gamma^m}
&= 0
\qquad\forall\ \vec\eta \in \underline{V}(\Gamma^m) \label{eq:itMS3}
\end{align}
\end{subequations}
and set $\Gamma^{m+1,i+1} = \vec X^{m+1,i+1}(\Gamma^m)$. The iteration can be
repeated until the stopping criterion
\begin{equation} \label{eq:stop}
\| \vec X^{m+1,i+1} - \vec X^{m+1,i} \|_\infty \leq \text{tol}
\end{equation}
is satisfied. Note that the existence of a unique solution to the linear
system of equations \eqref{eq:itMS}$^{(h)}$, which is of the same form as
\eqref{eq:DMS}$^{(h)}$,
can be shown under mild assumptions on $\Gamma^m$, recall Theorem~109
in \cite{bgnreview}.
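Schematically, the lagged iteration takes the following form (a Python sketch;
here `solve_linear` stands for an abstract solver of the linear system with
the lagged normal, returning the new vertex positions, and is an assumption of
this illustration rather than part of the scheme):

```python
import numpy as np

def lagged_iteration(X0, solve_linear, tol=1e-10, max_iter=100):
    """Lagged (fixed-point) iteration: freeze the averaged normal at the
    previous iterate, solve the resulting linear system for the new vertex
    positions, and stop once the maximum vertex displacement between
    consecutive iterates is below tol."""
    X = X0
    for _ in range(max_iter):
        X_new = solve_linear(X)                 # linear solve with lagged normal
        if np.max(np.abs(X_new - X)) <= tol:    # stopping criterion
            return X_new
        X = X_new
    raise RuntimeError("lagged iteration did not converge")
```

In practice only a few iterations are needed per time step when the iteration
behaves like a contraction.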
\setcounter{equation}{0}
\section{Generalization to anisotropic surface energies} \label{sec:ani}
In this section we briefly discuss the extension of the finite element
approximation \eqref{eq:fdMS}$^{(h)}$ to the case of an anisotropic surface energy of
the form \eqref{eq:Egamma}, i.e.\
\[
|\Gamma(t)|_\gamma = \int_{\Gamma(t)} \gamma(\vec\nu) \dH{d-1}.
\]
On defining the anisotropic curvature through
\[
\varkappa_\gamma = - \nabla_{\!s} \cdot \gamma'(\vec\nu) \quad \text{on } \Gamma(t),
\]
where $\gamma'$ denotes the spatial gradient of $\gamma : {\mathbb R}^d \to {\mathbb R}$,
which itself is defined as the one-homogeneous extension of the originally given
density on the unit sphere, we introduce the anisotropic analogue of
\eqref{eq:MS} via
\begin{equation} \label{eq:aniMS}
-\Delta u =0 \quad\!\! \text{in } \Omega \setminus \Gamma(t),\quad
u = \varkappa_\gamma \quad\!\! \text{on } \Gamma(t),\quad
\left[\frac{\partial u}{\partial{\vec\nu}}\right]_{\Gamma(t)}
= -\mathcal{V}\quad\!\! \text{on } \Gamma(t),\quad
\frac{\partial u}{\partial {\vec\nu_\Omega}} = 0 \quad\!\! \text{on }
\partial\Omega.
\end{equation}
{From} now on we are going to restrict ourselves to a class of anisotropies
first proposed in \cite{triplejANI,ani3d}. That is, we assume that the
anisotropy can be written as
\begin{equation} \label{eq:g}
\gamma(\vec p) = \left(
\sum_{\ell=1}^L [G_{\ell}\vec p \cdot \vec p]^{\frac r2}\right)^\frac1r,
\end{equation}
where $r\in[1,\infty)$ and $G_{\ell} \in {\mathbb R}^{d\times d}$,
$\ell=1,\ldots, L$, are symmetric and positive definite.
We also define ${\widetilde{G}}_\ell = [\det G_\ell]^{\frac{1}{d-1}} G_\ell^{-1}$ for
$\ell=1,\ldots, L$.
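Densities of the class \eqref{eq:g} are easily evaluated in practice. The
following Python sketch (NumPy; the function name is ours and purely
illustrative) implements $\gamma$ and can be used to check, for instance, the
one-homogeneity $\gamma(\lambda\vec p) = \lambda\,\gamma(\vec p)$ for
$\lambda > 0$:

```python
import numpy as np

def gamma(p, Gs, r=2.0):
    """Anisotropy density of the assumed form:
    gamma(p) = ( sum_l [G_l p . p]^(r/2) )^(1/r),
    with symmetric positive definite matrices G_l and r >= 1."""
    return sum(np.dot(G @ p, p) ** (r / 2.0) for G in Gs) ** (1.0 / r)
```

Note that the isotropic case is recovered for $L=1$, $G_1$ the identity and
$r=2$, in which case $\gamma(\vec p) = |\vec p|$.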
Using a suitable differential calculus, the authors in \cite{ani3d} then
derived the following anisotropic analogue of \eqref{eq:varkappa}
\begin{equation*}
\left\langle \varkappa_\gamma \vec\nu,\vec\eta \right\rangle_{\Gamma(t)}
+ \left\langle
\nabla_{\!s}^{{\widetilde{G}}}\vec{\rm id}, \nabla_{\!s}^{{\widetilde{G}}}\vec\eta \right\rangle_{\Gamma(t),\gamma}
=0 \quad\forall\ \vec\eta \in [H^1(\Gamma(t))]^d,
\end{equation*}
see \cite{ani3d} and also \cite[(110)]{bgnreview} for the precise definitions.
Hence the natural anisotropic analogue of \eqref{eq:WMS} is given by
\begin{subequations} \label{eq:aniWMS}
\begin{align} \label{eq:aniWMS1}
\left(\nabla u,\nabla\phi\right) -
\left\langle \vec{\mathcal{V}},\phi\vec\nu \right\rangle_{\Gamma(t)} & = 0
\qquad\forall\ \phi\in H^1(\Omega),\\
\left\langle u - \varkappa_\gamma,\chi \right\rangle_{\Gamma(t)}
& = 0 \qquad\forall\ \chi\in L^2(\Gamma(t))\label{eq:aniWMS2},\\
\left\langle \varkappa_\gamma \vec\nu,\vec\eta \right\rangle_{\Gamma(t)}
+ \left\langle
\nabla_{\!s}^{{\widetilde{G}}}\vec{\rm id}, \nabla_{\!s}^{{\widetilde{G}}}\vec\eta \right\rangle_{\Gamma(t),\gamma}
& =0 \qquad
\forall\ \vec\eta\in [H^1(\Gamma(t))]^d. \label{eq:aniWMS3}
\end{align}
\end{subequations}
The same testing procedure as in the isotropic setting shows that solutions to
\eqref{eq:aniWMS} satisfy
\begin{equation} \label{eq:anistruct}
\dd{t} |\Gamma(t)|_\gamma
= - \left\langle \varkappa_\gamma, \mathcal{V} \right\rangle_{\Gamma(t)}
= -( \nabla u, \nabla u)\leq 0
\quad \text{and} \quad
\dd{t} \operatorname{vol}(\Omega_-(t)) = 0,
\end{equation}
where in the first equation we have noted Lemma~97 from \cite{bgnreview}.
For the adaptation of \eqref{eq:fdMS}$^{(h)}$ to the anisotropic setting
we make use of
the stable discretization of \eqref{eq:aniWMS3} introduced in \cite{ani3d}. To
this end, we define
\begin{equation} \label{eq:ipGG}
\left\langle \nabla_{\!s}^{{\widetilde{G}}}\vec X^{m+1},
\nabla_{\!s}^{{\widetilde{G}}}\vec\eta \right\rangle_{\Gamma^m,\gamma}
=\sum_{\ell=1}^L \int_{\Gamma^m} \left[
\frac{\gamma_\ell(\vec\nu^{m+1} \circ \vec X^{m+1})}
{\gamma(\vec\nu^{m+1}\circ \vec X^{m+1})} \right]^{r-1}
(\nabla_{\!s}^{{\widetilde{G}}_{\ell}}\vec X^{m+1},\nabla_{\!s}^{{\widetilde{G}}_{\ell}}\vec\eta)_{{\widetilde{G}}_\ell}
\gamma_{\ell}(\vec\nu^m) \dH{d-1}
\end{equation}
for $\Gamma^{m+1} = \vec X^{m+1}(\Gamma^m)$ with normal $\vec\nu^{m+1}$ and
$\vec X^{m+1}, \vec\eta \in \underline{V}(\Gamma^m)$. Here $\nabla_{\!s}^{{\widetilde{G}}_{\ell}}$ is a surface
differential operator weighted by ${\widetilde{G}}_\ell$, while $(\cdot,\cdot)_{{\widetilde{G}}_\ell}$
denotes the inner product in ${\mathbb R}^d$ induced by the symmetric positive definite
matrix ${\widetilde{G}}_\ell$, see (108) and (111) in \cite{bgnreview} for details.
We note that \eqref{eq:ipGG} depends linearly on $\vec X^{m+1}$ if $r=1$.
Then our fully discrete approximation of \eqref{eq:aniMS} is given as follows.
Let the closed polyhedral hypersurface $\Gamma^0$ be an approximation of
$\Gamma(0)$. Then, for $m=0,\ldots,M-1$, find
$(U^{m+1},\vec X^{m+1},\kappa^{m+1}_\gamma) \in S^m \times \underline{V}(\Gamma^m) \times V(\Gamma^m)$
and $\Gamma^{m+1} = \vec X^{m+1}(\Gamma^m)$ such that
\begin{subequations} \label{eq:fdaniMS}
\begin{align}\label{eq:fdaniMS1}
\left(\nabla U^{m+1}, \nabla\varphi\right)
- \left\langle \pi_{\Gamma^m}\left[
\frac{\vec X^{m+1}-\vec{\rm id}}{\Delta t_m} \cdot\vec\omega^{m+\frac12}\right],\varphi
\right\rangle_{\Gamma^m}^{(h)} & = 0 \qquad\forall\ \varphi \in S^m,\\
\left\langle U^{m+1},\chi\right\rangle_{\Gamma^m}^{(h)}
-\left\langle\kappa^{m+1}_\gamma,\chi\right\rangle^h_{\Gamma^m}
&= 0 \qquad\forall\ \chi \in V(\Gamma^m) ,\label{eq:fdaniMS2}\\
\left\langle \kappa^{m+1}_\gamma\vec\omega^{m+\frac12},
\vec\eta\right\rangle^h_{\Gamma^m}
+ \left\langle \nabla_{\!s}^{{\widetilde{G}}_{\ell}}\vec X^{m+1},
\nabla_{\!s}^{{\widetilde{G}}_{\ell}}\vec\eta \right\rangle_{\Gamma^m,\gamma}
&= 0
\qquad\forall\ \vec\eta \in \underline{V}(\Gamma^m). \label{eq:fdaniMS3}
\end{align}
\end{subequations}
Once again, \eqref{eq:fdaniMS}$^{(h)}$ is a structure-preserving
approximation, in that
its solutions satisfy discrete analogues of \eqref{eq:anistruct}.
\begin{theorem}
Let $(U^{m+1}, \vec X^{m+1},\kappa^{m+1}_\gamma)\in
S^m\times \underline{V}(\Gamma^m)\times V(\Gamma^m)$ be a solution to \eqref{eq:fdaniMS}$^{(h)}$.
Then the enclosed volume is preserved, i.e.\
$\operatorname{vol}(\Omega_-^{m+1}) = \operatorname{vol}(\Omega_-^{m})$.
In addition, if $d=2$ or $d=3$, then the solution satisfies
the stability estimate
\begin{equation} \label{eq:thmanistab}
|\Gamma^{m+1}|_\gamma + \Delta t_m \left(\nabla U^{m+1}, \nabla U^{m+1} \right)
\leq |\Gamma^m|_\gamma.
\end{equation}
\end{theorem}
\begin{proof}
The volume preservation property follows as in the proof of
Theorem~\ref{thm:HDMS}, on choosing $\varphi=1$ in \eqref{eq:fdaniMS1}.
Similarly, for the discrete stability bound we choose
$\varphi= U^{m+1}$ in \eqref{eq:fdaniMS1},
$\chi = \pi_{\Gamma^m}[(\vec X^{m+1}-\vec{\rm id})\cdot\vec\omega^{m+\frac12}]$
in \eqref{eq:fdaniMS2} and
$\vec\eta = \vec X^{m+1}-\vec{\rm id}_{\mid_{\Gamma^m}}$ in \eqref{eq:fdaniMS3}
in order to obtain
\begin{equation} \label{eq:anistep1}
\Delta t_m\left(\nabla U^{m+1}, \nabla U^{m+1} \right) +
\left\langle\nabla_{\!s}^{{\widetilde{G}}_\ell}\vec X^{m+1},\nabla_{\!s}^{{\widetilde{G}}_\ell}(\vec X^{m+1}-\vec{\rm id})
\right\rangle_{\Gamma^m,\gamma} =0.
\end{equation}
Now we recall from Lemma~102 in \cite{bgnreview} the result
\begin{equation} \label{eq:anistab2d3d}
\left\langle\nabla_{\!s}^{{\widetilde{G}}_\ell}\vec X^{m+1},\nabla_{\!s}^{{\widetilde{G}}_\ell}(\vec X^{m+1}-\vec{\rm id})
\right\rangle_{\Gamma^m,\gamma} \geq |\Gamma^{m+1}|_\gamma - |\Gamma^m|_\gamma
\end{equation}
for the cases $d=2$ and $d=3$. Combining \eqref{eq:anistep1} and
\eqref{eq:anistab2d3d} yields the desired result \eqref{eq:thmanistab}.
\end{proof}
The adaptation of the iterative solution method \eqref{eq:itMS},
\eqref{eq:stop} to the anisotropic setting is straightforward for $r=1$.
For $r>1$ we combine the lagging of the nonlinear term $\vec\omega^{m+\frac12}$
in \eqref{eq:fdaniMS1} and \eqref{eq:fdaniMS3}
with the lagging of $\vec\nu^{m+1}$ in the second term of \eqref{eq:fdaniMS3},
compare with \eqref{eq:ipGG}. Overall, we use the following iteration in order
to find a solution to \eqref{eq:fdaniMS}$^{(h)}$.
For $i \geq 0$ find $(U^{m+1,i+1},\vec X^{m+1,i+1},\kappa^{m+1,i+1}_\gamma)
\in S^m \times \underline{V}(\Gamma^m) \times V(\Gamma^m)$
such that
\begin{subequations} \label{eq:itaniMS}
\begin{align}\label{eq:itaniMS1}
&
\left(\nabla U^{m+1,i+1}, \nabla\varphi\right)
- \left\langle \pi_{\Gamma^m}\left[
\frac{\vec X^{m+1,i+1}-\vec{\rm id}}{\Delta t_m} \cdot\vec\omega^{m+\frac12,i}\right],
\varphi \right\rangle_{\Gamma^m}^{(h)} = 0 \qquad\forall\ \varphi \in S^m,\\
&\left\langle U^{m+1,i+1},\chi\right\rangle_{\Gamma^m}^{(h)}
-\left\langle\kappa^{m+1,i+1}_\gamma,\chi\right\rangle^h_{\Gamma^m}
= 0 \qquad\forall\ \chi \in V(\Gamma^m) ,\label{eq:itaniMS2}\\
&\left\langle \kappa^{m+1,i+1}_\gamma\vec\omega^{m+\frac12,i},
\vec\eta\right\rangle^h_{\Gamma^m} \nonumber \\ & \
+ \sum_{\ell=1}^L \int_{\Gamma^m} \left[
\frac{\gamma_\ell(\vec\nu^{m+1,i} \circ \vec X^{m+1,i})}
{\gamma(\vec\nu^{m+1,i}\circ \vec X^{m+1,i})} \right]^{r-1}
(\nabla_{\!s}^{{\widetilde{G}}_{\ell}}\vec X^{m+1},\nabla_{\!s}^{{\widetilde{G}}_{\ell}}\vec\eta)_{{\widetilde{G}}_\ell}
\gamma_{\ell}(\vec\nu^m) \dH{d-1}
= 0
\qquad\forall\ \vec\eta \in \underline{V}(\Gamma^m) \label{eq:itaniMS3}
\end{align}
\end{subequations}
and set $\Gamma^{m+1,i+1} = \vec X^{m+1,i+1}(\Gamma^m)$. The iteration is
stopped when the criterion \eqref{eq:stop} is satisfied. We note that the
second term in \eqref{eq:itaniMS3} is a linearization of \eqref{eq:ipGG}.
The term will be independent of $\vec X^{m+1,i}$ in the case $r=1$.
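To illustrate the lagging strategy in isolation: a nonlinear solve is replaced by a sequence of linear solves with coefficients frozen at the previous iterate, stopped by a max-norm increment criterion. The following minimal sketch applies this idea to a small model system $A(x)\,x=b$ of our own choosing; it is not the scheme \eqref{eq:itaniMS} itself, and the model matrix and tolerance are illustrative assumptions.

```python
import numpy as np

def lagged_solve(A_of_x, b, x0, tol=1e-10, maxit=100):
    """Solve the nonlinear system A(x) x = b by lagging: each sweep
    freezes the coefficients at the previous iterate and performs one
    linear solve, stopping once the max-norm increment drops below tol."""
    x = np.asarray(x0, dtype=float)
    for _ in range(maxit):
        x_new = np.linalg.solve(A_of_x(x), b)
        if np.max(np.abs(x_new - x)) < tol:
            return x_new
        x = x_new
    raise RuntimeError("lagged iteration did not converge")

# model problem (illustrative): a mild nonlinear perturbation of the identity
def A_of_x(x):
    return np.eye(2) + 0.1 * np.diag(np.abs(x))

b = np.array([1.0, 2.0])
x = lagged_solve(A_of_x, b, np.zeros(2))
print(x)
```

As in the scheme above, each iteration is a linear problem, and for a mild nonlinearity the fixed-point map is contractive, so the iteration converges rapidly.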
\setcounter{equation}{0}
\section{Numerical results} \label{sec:nr}
We implemented the fully discrete finite element approximations
\eqref{eq:DMS}$^{(h)}$, \eqref{eq:fdMS}$^{(h)}$ and
\eqref{eq:fdaniMS}$^{(h)}$ within the
finite element toolbox ALBERTA, see \cite{Alberta}. The systems of
linear equations arising from \eqref{eq:DMS}$^{(h)}$, \eqref{eq:itMS}$^{(h)}$
and \eqref{eq:itaniMS}$^{(h)}$, in the case $d=2$,
are solved with the help of the sparse
factorization package UMFPACK, see \cite{Davis04}. For the simulations in 3d,
on the other hand,
we employ the Schur complement solver described in \cite[(4.9)]{dendritic}.
For the stopping criterion in \eqref{eq:stop} we use the value
$\text{tol}=10^{-10}$.
For the triangulation $\mathcal{T}^m$ of the bulk domain $\Omega$, that is used
for the bulk finite element space $S^m$, we use an adaptive mesh that uses
fine elements close to the interface $\Gamma^m$ and coarser elements away
from it. The precise strategy is as described in \cite[\S5.1]{dendritic}
and for a domain $\Omega=(-H,H)^d$ and two integer parameters
$N_c < N_f$ results in elements with maximal diameter approximately equal to
$h_f = \frac{2H}{N_f}$ close to $\Gamma^m$ and elements with maximal diameter
approximately equal to $h_c = \frac{2H}{N_c}$ far away from it.
For all our computations we use $H=4$.
An example adaptive mesh is shown in Figure~\ref{fig:2dmesh}, below.
We stress that due to the unfitted nature of our finite element approximations,
special quadrature rules need to be employed in order to assemble terms that
feature both bulk and surface finite element functions.
An example is the first term in \eqref{eq:fdMS2}.
For the schemes using numerical integration, e.g.\ \eqref{eq:fdMS}$^h$, this
task boils down to finding for each vertex of $\Gamma^m$ the bulk element
$o^m \in\mathcal{T}^m$ it resides in, together with its barycentric
coordinates with respect to that bulk element.
For the remaining schemes the task is more involved:
the most challenging aspect of assembling the contributions for, e.g.,
the first term in \eqref{eq:fdMS2} for the scheme \eqref{eq:fdMS}
is to compute intersections
$\sigma^m \cap o^m$ between an arbitrary surface element
$\sigma^m \subset \Gamma^m$ and an element $o^m\in\mathcal{T}^m$
of the bulk mesh.
An algorithm that describes how these intersections can be calculated is given
in \cite[p.\ 6284]{dendritic}, see also Figure~4 in
\cite{dendritic} for a visualization of possible intersections of the form
$\sigma^m \cap o^m$ in ${\mathbb R}^3$.
Throughout this section we use (almost) uniform time steps, in that
$\Delta t_m=\Delta t$ for $m=0,\ldots, M-2$ and
$\Delta t_{M-1} = T - t_{M-1} \leq \Delta t$. For many of the presented simulations we
will put particular emphasis on the volume preserving aspect, and so we recall
that given a polyhedral surface $\Gamma^m$, the enclosed volume can be computed
by
\begin{equation} \label{eq:fdvol}
\operatorname{vol}(\Omega^m_-) =
\frac1d \int_{\Gamma^m} \vec{\rm id} \cdot \vec\nu^m \dH{d-1} ,
\end{equation}
where we have used the divergence theorem. We note that the integrand in
\eqref{eq:fdvol} is piecewise constant on $\Gamma^m$. For later use we also
define the relative volume loss at time $t=t_m$ as
\[
v_\Delta^m = \frac{\operatorname{vol}(\Omega^0_-) - \operatorname{vol}(\Omega^m_-)}{\operatorname{vol}(\Omega^0_-)}.
\]
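For $d=2$ and a polygonal curve with counterclockwise-ordered vertices, the integrand in \eqref{eq:fdvol} is linear on each segment and $\vec\nu^m$ is constant there, so a midpoint rule evaluates the integral exactly and reduces it to the shoelace formula. A minimal sketch (the vertex-ordering convention is our assumption):

```python
import numpy as np

def enclosed_volume_2d(verts):
    """Evaluate (1/d) * integral of id . nu over Gamma^m for d = 2:
    on each edge the integrand is linear and nu is constant, so the
    midpoint rule is exact and the sum reduces to the shoelace formula."""
    p = np.asarray(verts, dtype=float)
    q = np.roll(p, -1, axis=0)                      # edge endpoints p -> q
    mid = 0.5 * (p + q)                             # edge midpoints
    # unnormalized outward normal (dy, -dx) for counterclockwise vertices;
    # its length equals the edge length, so it absorbs the measure dH^1
    n = np.stack([q[:, 1] - p[:, 1], p[:, 0] - q[:, 0]], axis=1)
    return 0.5 * np.sum(np.einsum('ij,ij->i', mid, n))

def relative_volume_loss(vol0, volm):
    return (vol0 - volm) / vol0

square = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]  # CCW unit square
print(enclosed_volume_2d(square))   # -> 1.0
```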
\subsection{Convergence experiment}
We begin with a convergence experiment for the scheme \eqref{eq:fdMS}
for the cases $d=2$ and $d=3$. To this
end, we recall from \cite[\S6.6]{dendritic} the following exact solution to
\eqref{eq:MS} consisting of two concentric spheres.
Let $(\Gamma(t))_{t\in[0,T]}$ be a solution of \eqref{eq:MS}, where
$\Gamma(t) = \partial\Omega_-(t)$ with
$\Omega_-(t) = \{ \vec z \in {\mathbb R}^d : r_1(t) < |\vec z| < r_2(t) \}$.
Then the two radii $r_1 < r_2$ satisfy the following system of nonlinear ODEs:
In the case $d=2$ we have
\begin{subequations} \label{eq:ODE}
\begin{equation}
[r_1]_t = - \frac1{r_1}\frac{\frac1{r_1} + \frac1{r_2}}{\ln\frac{r_2}{r_1}}
\quad\mbox{ and }\quad
[r_2]_t
= \frac{r_1}{r_2}[r_1]_t \qquad \forall\ t \in[0,T_0), \label{eq:ODEa}
\end{equation}
while for $d=3$ it holds that
\begin{equation}
[r_1]_t = - \frac2{r_1^2}\frac{r_1 + r_2}{r_2-r_1} \quad\mbox{ and }\quad
[r_2]_t
= \frac{r_1^2}{r_2^2}[r_1]_t \qquad \forall\ t \in[0,T_0), \label{eq:ODEb}
\end{equation}
\end{subequations}
where $T_0$ is the extinction time of the smaller sphere,
i.e.\ $\lim_{t\to T_0} r_1(t) = 0$.
The corresponding solution $u$ satisfying \eqref{eq:MS} is given
by the radially symmetric function
\begin{equation}
u(\vec{z},t) = \begin{cases}
-\frac{d-1}{r_2(t)} & |\vec{z}| \geq r_2(t), \\
\begin{cases}
\frac1{r_1(t)} - \ln\frac{|\vec{z}|}{r_1(t)}
\dfrac{\frac1{r_1(t)} + \frac1{r_2(t)}}{\ln\frac{r_2(t)}{r_1(t)}} & d = 2 \\
-\frac4{r_2(t) - r_1(t)} +
\frac2{|\vec{z}|}\frac{{r_1(t)} + {r_2(t)}}{r_2(t)-r_1(t)} & d = 3
\end{cases}
& r_1(t) \leq |\vec{z}| \leq r_2(t), \\
\frac{d-1}{r_1(t)} & |\vec{z}| \leq r_1(t).
\end{cases}
\label{eq:ODE_u}
\end{equation}
The volume preserving property of the flow implies that
$v(t) = r_2^d(t) - r_1^d(t)$ is an invariant, so that
$r_2(t) = (v(0) + r_1^d(t))^\frac1d$. Hence $r_1$ satisfies
\begin{equation}
[r_1]_t = \begin{cases}
- \dfrac1{r_1}\dfrac{\frac1{r_1} + (v(0) + r_1^2)^{-\frac12}}
{\ln\frac{(v(0) + r_1^2)^{\frac12}}{r_1}} & d = 2, \\[5mm]
- \dfrac2{r_1^2}\dfrac{r_1 + (v(0) + r_1^3)^{\frac13}}
{(v(0) + r_1^3)^{\frac13}-r_1} & d = 3 ,
\end{cases}
\qquad \forall\ t \in[0,T_0).
\label{eq:ODE1d}
\end{equation}
To obtain a more accurate reference solution
for our numerical convergence experiments,
rather than integrating \eqref{eq:ODE1d} directly,
we use a root-finding algorithm for the equation
\begin{equation*}
0 = t + \begin{cases} \displaystyle
\int_{r_1(0)}^{r_1(t)} r\frac{\ln\frac{(v(0) + r^2)^{\frac12}}{r}}
{\frac1{r} + (v(0) + r^2)^{-\frac12}} \;{\rm d}r & d = 2 ,\\[5mm]
\displaystyle
\int_{r_1(0)}^{r_1(t)} \frac{r^2}2\frac{(v(0) + r^3)^{\frac13}-r}
{r + (v(0) + r^3)^{\frac13}} \;{\rm d}r & d = 3 ,
\end{cases}
\qquad \forall\ t \in[0,T_0)
\end{equation*}
in order to find $r_1(t)$.
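As a numpy-only sketch of this procedure for $d=3$, with the data $r_1(0)=2.5$, $r_2(0)=3$ used in the convergence experiments: the integral is evaluated by a composite trapezoidal rule and the root is bracketed by bisection (both quadrature and root finder are our own illustrative choices, not necessarily the ones used for the reference solution).

```python
import numpy as np

r1_0, r2_0 = 2.5, 3.0
v0 = r2_0**3 - r1_0**3                      # invariant v(0) = r2^3 - r1^3

def integrand(r):
    # d = 3 integrand; r2 is recovered from the volume invariant
    r2 = (v0 + r**3) ** (1.0 / 3.0)
    return 0.5 * r**2 * (r2 - r) / (r + r2)

def F(r1_t, t, n=2000):
    # F(r1(t), t) = t + integral from r1(0) to r1(t); composite trapezoid
    rs = np.linspace(r1_0, r1_t, n + 1)
    f = integrand(rs)
    return t + np.sum(0.5 * (f[:-1] + f[1:]) * np.diff(rs))

def r1_of_t(t, tol=1e-10):
    # F is increasing in its first argument with F(r1_0, t) = t > 0,
    # so the root is bracketed in (0, r1(0)) and found by bisection
    a, b = 1e-6, r1_0
    while b - a > tol:
        m = 0.5 * (a + b)
        if F(m, t) > 0.0:
            b = m
        else:
            a = m
    return 0.5 * (a + b)

r1 = r1_of_t(0.1)
r2 = (v0 + r1**3) ** (1.0 / 3.0)
print(r1, r2)   # approximately 2.15 and 2.77, the values quoted below for T = 0.1
```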
For the initial radii $r_1(0) = 2.5$, $r_2(0) = 3$ and
the time interval $[0,T]$ with $T=\frac12$,
so that $r_1(T) \approx 1.66$ and $r_2(T) \approx 2.35$,
we perform a convergence
experiment for the true solution \eqref{eq:ODE}, at first for $d=2$.
To this end, for $i=0\to 4$, we set
$N_f = \frac12 K = 2^{7+i}$, $N_c = 4^{i}$
and $\Delta t = 4^{3-i}\times10^{-3}$.
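These parameter choices determine the mesh sizes directly: with $H=4$, the fine mesh size $h_f = \frac{2H}{N_f}$ reproduces the first column of Tables~\ref{tab:fdMS2dT05}--\ref{tab:numintDMS2dT05}. A quick check:

```python
H = 4.0
for i in range(5):
    Nf = 2 ** (7 + i)               # N_f = 2^{7+i}
    Nc = 4 ** i                     # N_c = 4^i
    hf = 2 * H / Nf                 # fine mesh size near Gamma^m
    hc = 2 * H / Nc                 # coarse mesh size away from it
    dt = 4 ** (3 - i) * 1e-3        # time step size
    print(i, hf, hc, dt)
```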
We visualize the evolution with the help of the discrete solutions computed
with the scheme \eqref{eq:fdMS} for the run $i=1$ in Figure~\ref{fig:2dmesh},
where we also present a plot of the final bulk mesh $\mathcal{T}^M$ in order
to show the effect of the adaptive mesh refinement strategy.
\begin{figure}
\center
\includegraphics[angle=-90,width=0.3\textwidth]{figures/2dmesht0}
\includegraphics[angle=-90,width=0.3\textwidth]{figures/2dmesht05}\quad
\includegraphics[angle=-90,width=0.3\textwidth]{figures/2dmesh_mesh}
\caption{The solution \eqref{eq:ODE} at times $t=0$ and $t=\frac12$,
as well as the adaptive bulk mesh $\mathcal{T}^M$.}
\label{fig:2dmesh}
\end{figure}%
In Table~\ref{tab:fdMS2dT05} we display the errors
\[\|\Gamma^h - \Gamma\|_{L^\infty} = \max_{m=1,\ldots, M}
\max_{k=1,\ldots, K} \operatorname{dist}(\vec{q}^m_k, \Gamma(t_m))
\]
and
\[
\|U^h - u\|_{L^\infty} = \max_{m=1,\ldots, M}\|U^m - I^mu(\cdot,t_m)\|_{L^\infty(\Omega)},
\]
where $I^m : C^0(\overline\Omega) \to S^m$ denotes the standard
interpolation operator. We also let $K_\Omega^m$ denote the number of
degrees of freedom of $S^m$, and define $h^m_\Gamma = \max_{j = 1,\ldots,J}
\operatorname{diam}(\sigma^m_j)$.
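For the concentric-circle solution above, the vertex distances entering $\|\Gamma^h - \Gamma\|_{L^\infty}$ reduce to radial deviations, since the distance of a point to $\Gamma(t)$ is the smaller of its deviations from the two radii. A small sketch of this distance computation (the sample vertices are illustrative):

```python
import numpy as np

def interface_error(verts, r1, r2):
    """max_k dist(q_k, Gamma(t)) when Gamma(t) consists of two concentric
    circles/spheres of radii r1 < r2: the distance of a vertex to Gamma(t)
    is the smaller of its two radial deviations."""
    rad = np.linalg.norm(np.asarray(verts, dtype=float), axis=1)
    return np.max(np.minimum(np.abs(rad - r1), np.abs(rad - r2)))

# illustrative vertices near circles of radii 2.5 and 3
verts = [(2.51, 0.0), (0.0, 2.48), (3.05, 0.0)]
print(interface_error(verts, 2.5, 3.0))   # approximately 0.05 (from the third vertex)
```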
As a comparison, we show the same error computations for the linear scheme
\eqref{eq:DMS} in Table~\ref{tab:DMS2dT05}.
As expected, we observe true volume preservation for the scheme \eqref{eq:fdMS}
in Table~\ref{tab:fdMS2dT05}, up to solver tolerance, while the relative volume
loss in Table~\ref{tab:DMS2dT05} decreases as $\Delta t$ becomes smaller.
Surprisingly, the two error quantities $\|\Gamma^h - \Gamma\|_{L^\infty}$ and $\|U^h - u\|_{L^\infty}$ are generally
lower in Table~\ref{tab:DMS2dT05} compared to Table~\ref{tab:fdMS2dT05},
although the difference becomes smaller with smaller discretization parameters.
For completeness, we also present the errors for the same convergence
experiment for the two schemes \eqref{eq:fdMS}$^h$ and \eqref{eq:DMS}$^h$ with
numerical integration, see Tables~\ref{tab:numintfdMS2dT05} and
\ref{tab:numintDMS2dT05}.
\begin{table}
\center
\begin{tabular}{c|c|c|c|c|c|c}
$h_{f}$ & $h^M_\Gamma$ & $\|U^h - u\|_{L^\infty}$ & $\|\Gamma^h - \Gamma\|_{L^\infty}$ & $K^M_\Omega$ & $K$
& $|v_\Delta^M|$ \\ \hline
6.2500e-02 & 1.1400e-01 & 1.5609e-01 & 3.4036e-02 & 2925 & 256 & $<10^{-10}$\\
3.1250e-02 & 5.7282e-02 & 4.5306e-02 & 1.7416e-02 & 5101 & 512 & $<10^{-10}$\\
1.5625e-02 & 2.8714e-02 & 1.4406e-02 & 8.9079e-03 & 9785 & 1024& $<10^{-10}$\\
7.8125e-03 & 1.4375e-02 & 5.0773e-03 & 4.6020e-03 & 21557 & 2048& $<10^{-10}$\\
3.9062e-03 & 7.1929e-03 & 2.8734e-03 & 2.1860e-03 & 96781 & 4096& $<10^{-10}$\\
\end{tabular}
\caption{Convergence test for \eqref{eq:ODE} over the time interval
$[0,\frac12]$ for the scheme \eqref{eq:fdMS}.}
\label{tab:fdMS2dT05}
\end{table}%
\begin{table}
\center
\begin{tabular}{c|c|c|c|c|c|c}
$h_{f}$ & $h^M_\Gamma$ & $\|U^h - u\|_{L^\infty}$ & $\|\Gamma^h - \Gamma\|_{L^\infty}$ & $K^M_\Omega$ & $K$
& $|v_\Delta^M|$ \\ \hline
6.2500e-02 & 1.1497e-01 & 1.4990e-01 & 5.1377e-03 & 2869 & 256 & 1.2e-02\\
3.1250e-02 & 5.7408e-02 & 4.3367e-02 & 7.7591e-03 & 5097 & 512 & 3.2e-03\\
1.5625e-02 & 2.8730e-02 & 1.3917e-02 & 6.4656e-03 & 9857 & 1024 & 8.3e-04\\
7.8125e-03 & 1.4377e-02 & 4.9546e-03 & 3.9948e-03 & 21593 & 2048 & 2.1e-04\\
3.9062e-03 & 7.1932e-03 & 2.7345e-03 & 2.0351e-03 & 96969 & 4096 & 5.1e-05\\
\end{tabular}
\caption{Convergence test for \eqref{eq:ODE} over the time interval
$[0,\frac12]$ for the scheme \eqref{eq:DMS}.}
\label{tab:DMS2dT05}
\end{table}%
\begin{table}
\center
\begin{tabular}{c|c|c|c|c|c|c}
$h_{f}$ & $h^M_\Gamma$ & $\|U^h - u\|_{L^\infty}$ & $\|\Gamma^h - \Gamma\|_{L^\infty}$ & $K^M_\Omega$ & $K$
& $|v_\Delta^M|$ \\ \hline
6.2500e-02 & 1.1433e-01 & 1.6079e-01 & 2.4789e-02 & 2941 & 256 & $<10^{-10}$\\
3.1250e-02 & 5.7357e-02 & 4.9133e-02 & 1.3107e-02 & 5077 & 512 & $<10^{-10}$\\
1.5625e-02 & 2.8733e-02 & 1.6422e-02 & 6.8358e-03 & 9865 & 1024& $<10^{-10}$\\
7.8125e-03 & 1.4380e-02 & 6.1040e-03 & 3.5755e-03 & 21605 & 2048& $<10^{-10}$\\
3.9062e-03 & 7.1941e-03 & 2.6860e-03 & 1.6743e-03 & 96893 & 4096& $<10^{-10}$\\
\end{tabular}
\caption{Convergence test for \eqref{eq:ODE} over the time interval
$[0,\frac12]$ for the scheme \eqref{eq:fdMS}$^h$.}
\label{tab:numintfdMS2dT05}
\end{table}%
\begin{table}
\center
\begin{tabular}{c|c|c|c|c|c|c}
$h_{f}$ & $h^M_\Gamma$ & $\|U^h - u\|_{L^\infty}$ & $\|\Gamma^h - \Gamma\|_{L^\infty}$ & $K^M_\Omega$ & $K$
& $|v_\Delta^M|$ \\ \hline
6.2500e-02 & 1.1530e-01 & 1.6291e-01 & 1.3785e-02 & 2881 & 256 & 1.2e-02\\
3.1250e-02 & 5.7482e-02 & 4.7307e-02 & 4.5358e-03 & 5185 & 512 & 3.2e-03\\
1.5625e-02 & 2.8749e-02 & 1.5926e-02 & 4.4224e-03 & 9757 & 1024 & 8.2e-04\\
7.8125e-03 & 1.4382e-02 & 5.9809e-03 & 2.9693e-03 & 21501 & 2048 & 2.1e-04\\
3.9062e-03 & 7.1943e-03 & 2.5431e-03 & 1.5237e-03 & 96997 & 4096 & 5.1e-05\\
\end{tabular}
\caption{Convergence test for \eqref{eq:ODE} over the time interval
$[0,\frac12]$ for the scheme \eqref{eq:DMS}$^h$.}
\label{tab:numintDMS2dT05}
\end{table}%
We also perform a convergence experiment for the true solution
\eqref{eq:ODE} for $d=3$. To this end, we choose
the initial radii $r_1(0) = 2.5$, $r_2(0) = 3$ and
the time interval $[0,T]$ with $T=0.1$,
so that $r_1(T) \approx 2.15$ and $r_2(T) \approx 2.77$.
Moreover, for $i=0\to 2$, we set $N_f = 2^{5+i}$, $N_c = 4^{i}$,
$\frac12 K=\widehat K(i)$, where $(\widehat K(0), \widehat K(1), \widehat K(2)) =
(770, 3074, 12290)$, and $\Delta t = 4^{3-i}\times10^{-3}$.
The errors $\|U^h - u\|_{L^\infty}$ and $\|\Gamma^h - \Gamma\|_{L^\infty}$ for the four schemes
\eqref{eq:fdMS}, \eqref{eq:fdMS}$^h$,
\eqref{eq:DMS} and \eqref{eq:DMS}$^h$
on the interval $[0,T]$ with $T=0.1$ are displayed in
Tables~\ref{tab:fdMS3d}, \ref{tab:numintfdMS3d}, \ref{tab:DMS3d} and
\ref{tab:numintDMS3d}.
\begin{table}
\center
\begin{tabular}{c|c|c|c|c|c|c}
$h_{f}$ & $h^M_\Gamma$ & $\|U^h - u\|_{L^\infty}$ & $\|\Gamma^h - \Gamma\|_{L^\infty}$ & $K^M_\Omega$ & $K$
& $|v_\Delta^M|$ \\ \hline
2.5000e-01 & 5.6320e-01 & 7.3514e-01 & 1.3667e-01 & 10831 & 1540& $<10^{-10}$\\
1.2500e-01 & 2.8759e-01 & 2.5135e-01 & 4.6999e-02 & 46311 & 6148& $<10^{-10}$\\
6.2500e-02 & 1.4473e-01 & 9.1052e-02 & 1.9356e-02 & 188389 &24580&$<10^{-10}$\\
\end{tabular}
\caption{Convergence test for \eqref{eq:ODE} over the time interval $[0,0.1]$
for the scheme \eqref{eq:fdMS}.}
\label{tab:fdMS3d}
\end{table}%
\begin{table}
\center
\begin{tabular}{c|c|c|c|c|c|c}
$h_{f}$ & $h^M_\Gamma$ & $\|U^h - u\|_{L^\infty}$ & $\|\Gamma^h - \Gamma\|_{L^\infty}$ & $K^M_\Omega$ & $K$
& $|v_\Delta^M|$ \\ \hline
2.5000e-01 & 5.6594e-01 & 8.9355e-01 & 1.3062e-01 & 10879 & 1540& $<10^{-10}$\\
1.2500e-01 & 2.8815e-01 & 3.1381e-01 & 4.3354e-02 & 46335 & 6148& $<10^{-10}$\\
6.2500e-02 & 1.4484e-01 & 1.2228e-01 & 1.7321e-02 & 188725 &24580&$<10^{-10}$\\
\end{tabular}
\caption{Convergence test for \eqref{eq:ODE} over the time interval $[0,0.1]$
for the scheme \eqref{eq:fdMS}$^h$.}
\label{tab:numintfdMS3d}
\end{table}%
\begin{table}
\center
\begin{tabular}{c|c|c|c|c|c|c}
$h_{f}$ & $h^M_\Gamma$ & $\|U^h - u\|_{L^\infty}$ & $\|\Gamma^h - \Gamma\|_{L^\infty}$ & $K^M_\Omega$ & $K$
& $|v_\Delta^M|$ \\ \hline
2.5000e-01 & 5.7042e-01 & 6.5892e-01 & 6.2158e-02 & 10879 & 1540 & 2.3e-02\\
1.2500e-01 & 2.8847e-01 & 2.3273e-01 & 3.0705e-02 & 46375 & 6148 & 6.2e-03\\
6.2500e-02 & 1.4485e-01 & 8.6575e-02 & 1.5551e-02 & 188725 & 24580 & 1.5e-03\\
\end{tabular}
\caption{Convergence test for \eqref{eq:ODE} over the time interval $[0,0.1]$
for the scheme \eqref{eq:DMS}.}
\label{tab:DMS3d}
\end{table}%
\begin{table}
\center
\begin{tabular}{c|c|c|c|c|c|c}
$h_{f}$ & $h^M_\Gamma$ & $\|U^h - u\|_{L^\infty}$ & $\|\Gamma^h - \Gamma\|_{L^\infty}$ & $K^M_\Omega$ & $K$
& $|v_\Delta^M|$ \\ \hline
2.5000e-01 & 5.7401e-01 & 7.8420e-01 & 6.5619e-02 & 10879 & 1540 & 2.2e-02\\
1.2500e-01 & 2.8908e-01 & 2.9887e-01 & 2.7440e-02 & 46423 & 6148 & 6.0e-03\\
6.2500e-02 & 1.4497e-01 & 1.1943e-01 & 1.3544e-02 & 188965 & 24580 & 1.5e-03\\
\end{tabular}
\caption{Convergence test for \eqref{eq:ODE} over the time interval $[0,0.1]$
for the scheme \eqref{eq:DMS}$^h$.}
\label{tab:numintDMS3d}
\end{table}%
Similarly to the convergence experiments in 2d, we note that for the schemes
\eqref{eq:DMS}$^{(h)}$ the relative volume loss converges to zero as the
discretization parameters get smaller, while the
schemes \eqref{eq:fdMS}$^{(h)}$ preserve the volume exactly in every case.
The error quantities $\|U^h - u\|_{L^\infty}$ and $\|\Gamma^h - \Gamma\|_{L^\infty}$ behave very similarly for all
four schemes.
\subsection{Simulations in 2d}
In this subsection we consider some numerical experiments for the case $d=2$.
In the first computation, we numerically confirm the well-known result shown in
\cite{Mayer98}, which says that the Mullins--Sekerka flow \eqref{eq:MS}
does not preserve convexity. To this end, we choose for $\Gamma(0)$ an
elongated cigar shape of total dimension $7\times1$. The discretization
parameters for the computation are $N_f=128$, $N_c=16$, $\Delta t=10^{-3}$,
$T=2$ and $K=256$, and the results are shown in
Figure~\ref{fig:nonconvex2d}.
We observe that during the evolution the interface becomes nonconvex, before
reaching a circular steady state. As expected, the enclosed volume is preserved
during the evolution. This is not the case when using the scheme
\eqref{eq:DMS}, as can be seen from Figure~\ref{fig:oldnonconvex2d}, where for
completeness we show the same simulation for this alternative finite element
approximation.
\begin{figure}
\center
\includegraphics[angle=-90,width=0.3\textwidth]{figures/2dmullins_nc}
\includegraphics[angle=-90,width=0.3\textwidth]{figures/2dmullins_nc_e}\quad
\includegraphics[angle=-90,width=0.3\textwidth]{figures/2dmullins_nc_v}
\caption{$\Gamma^m$ at times $t=0,0.1,\ldots,1,T=2$
for the scheme \eqref{eq:fdMS}. We also show a plot of
the discrete energy $|\Gamma^m|$ and of the relative volume loss
$v_\Delta^m$ over time.}
\label{fig:nonconvex2d}
\end{figure}%
\begin{figure}
\center
\includegraphics[angle=-90,width=0.3\textwidth]{figures/2dmullins_old_nc}
\includegraphics[angle=-90,width=0.3\textwidth]{figures/2dmullins_old_nc_e}\quad
\includegraphics[angle=-90,width=0.3\textwidth]{figures/2dmullins_old_nc_v}
\caption{$\Gamma^m$ at times $t=0,0.1,\ldots,1,T=2$
for the scheme \eqref{eq:DMS}. We also show a plot of
the discrete energy $|\Gamma^m|$ and of the relative volume loss
$v_\Delta^m$ over time.}
\label{fig:oldnonconvex2d}
\end{figure}%
Our second simulation is for an anisotropic surface energy. Here we make use of
the fact that anisotropies of the form \eqref{eq:g} can be used to approximate
crystalline surface energies, where the isoperimetric minimizers
(the so-called Wulff shapes) exhibit flat
parts and sharp corners. In particular, we choose the density
\begin{equation} \label{eq:gammabgnL}
\gamma_0(p)
= \tfrac14 \sum_{\ell=1}^4 \sqrt{ [(R(\tfrac\pi{4}))^\ell]^T D(\delta)
(R(\tfrac\pi{4}))^\ell p \cdot p}, \quad \delta = 10^{-4},
\end{equation}
where
$R(\theta)=\binom{\phantom{-}\cos\theta\ \sin\theta}{-\sin\theta\ \cos\theta}$
and $D(\delta) = \operatorname{diag}(1,\delta^2)$.
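As a sanity check, \eqref{eq:gammabgnL} can be evaluated directly: since $R(\frac\pi4)^4 = -\operatorname{Id}$ leaves each quadratic form unchanged, $\gamma_0$ is invariant under rotations by $\frac\pi4$, consistent with an octagonal Wulff shape. A minimal sketch:

```python
import numpy as np

delta = 1e-4
D = np.diag([1.0, delta**2])

def R(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, s], [-s, c]])

def gamma0(p):
    # gamma_0(p) = (1/4) sum_{l=1..4} sqrt( p^T (R^l)^T D R^l p )
    total = 0.0
    for l in range(1, 5):
        q = np.linalg.matrix_power(R(np.pi / 4), l) @ p
        total += np.sqrt(q @ D @ q)     # p^T (R^l)^T D R^l p = (R^l p)^T D (R^l p)
    return 0.25 * total

p = np.array([0.3, 0.7])
print(gamma0(p), gamma0(R(np.pi / 4) @ p))   # the two values agree
```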
Then, inspired by the initial curve from \cite[Fig.~0]{AlmgrenT95},
see also \cite[Fig.~7]{finsler},
we perform a computation for our scheme \eqref{eq:fdaniMS}. We observe that
all the facets of the initial data are aligned with the Wulff shape of
\eqref{eq:gammabgnL} with $\delta=0$, i.e.\ a regular octagon.
For the computations shown in Figure~\ref{fig:AlmgrenT95} we employed the
discretization parameters $N_f = 256$, $N_c = 32$, $K=512$ and
$\Delta t = 10^{-3}$.
We note that during the evolution all the facets remain aligned with the facets
of the Wulff shape. Some facets grow at the expense of others, leading to some
facets vanishing completely. Eventually a scaled Wulff shape is approached as a
steady state of the flow.
As a comparison, we also show the evolution for the isotropic case for the same
initial data, in Figure~\ref{fig:isoAlmgrenT95}. Here the nonconvex initial
data soon evolves to a convex curve, which then converges towards a circle.
\begin{figure}
\center
\includegraphics[angle=-90,width=0.3\textwidth]{figures/AlmgrenT95}
\includegraphics[angle=-90,width=0.3\textwidth]{figures/AlmgrenT95_T}
\includegraphics[angle=-90,width=0.3\textwidth]{figures/AlmgrenT95_e}
\caption{$\Gamma^m$ at times $t=0,0.1,\ldots,1$, and at time $t=T=3$,
for the scheme \eqref{eq:fdaniMS}.
We also show a plot of the discrete energy $|\Gamma^m|_\gamma$
over time.}
\label{fig:AlmgrenT95}
\end{figure}%
\begin{figure}
\center
\includegraphics[angle=-90,width=0.3\textwidth]{figures/isoAlmgrenT95}
\includegraphics[angle=-90,width=0.3\textwidth]{figures/isoAlmgrenT95_T}
\includegraphics[angle=-90,width=0.3\textwidth]{figures/isoAlmgrenT95_e}
\caption{$\Gamma^m$ at times $t=0,0.1,\ldots,1$, and at time $t=T=3$,
for the scheme \eqref{eq:fdMS}.
We also show a plot of the discrete energy $|\Gamma^m|$
over time.}
\label{fig:isoAlmgrenT95}
\end{figure}%
\subsection{Simulations in 3d}
We end this section with some numerical simulations for the case $d=3$. All
initial data are chosen symmetric with respect to the origin.
First we look at the 3d analogue of the experiment in
Figure~\ref{fig:nonconvex2d}, that is we start with an initial interface in the
shape of a rounded cylinder with total dimensions $7\times1\times1$.
The discretization parameters for this computation are $N_f=128$, $N_c=16$,
$\Delta t=10^{-3}$, $T=2$ and $K=1154$.
\begin{figure}
\center
\includegraphics[angle=-0,width=0.18\textwidth]{figures/3dmullins_ncf_T0}
\includegraphics[angle=-0,width=0.18\textwidth]{figures/3dmullins_ncf_T01}
\includegraphics[angle=-0,width=0.18\textwidth]{figures/3dmullins_ncf_T02}
\includegraphics[angle=-0,width=0.18\textwidth]{figures/3dmullins_ncf_T05}
\includegraphics[angle=-0,width=0.18\textwidth]{figures/3dmullins_ncf_T2}
\\
\includegraphics[angle=-90,width=0.3\textwidth]{figures/3dmullins_ncf_e}\quad
\includegraphics[angle=-90,width=0.3\textwidth]{figures/3dmullins_ncf_v}
\caption{$\Gamma^m$ at times $t=0,0.1,0.2,0.5,2$. Below we show a plot of
the discrete energy $|\Gamma^m|$ and of the relative volume loss
$v_\Delta^m$ over time.}
\label{fig:nonconvex3dfine}
\end{figure}%
We observe that the initially convex interface loses its convexity
during the evolution, which numerically confirms that such evolutions also
exist in the case $d=3$. Recall that the corresponding result for $d=2$
has been shown in \cite{Mayer98}.
For the numerical simulation in Figure~\ref{fig:nonconvex3dfine}
we also note that
the discrete energy is monotonically decreasing, while the enclosed volume is
maintained up to the chosen solver tolerance.
In a second experiment where an initially convex interface loses its convexity,
we start the evolution with a rounded cylinder of total dimension
$6\times6\times1$. We see from the evolution in
Figure~\ref{fig:cigar661_K770} that the moving interface becomes nonconvex,
before it approaches the shape of a sphere.
The discretization parameters for this computation are $N_f=64$, $N_c=8$,
$\Delta t=10^{-3}$, $T=2$ and $K=770$.
\begin{figure}
\center
\includegraphics[angle=-0,width=0.18\textwidth]{figures/3dcigar661_K770_T0}
\includegraphics[angle=-0,width=0.18\textwidth]{figures/3dcigar661_K770_T01}
\includegraphics[angle=-0,width=0.18\textwidth]{figures/3dcigar661_K770_T02}
\includegraphics[angle=-0,width=0.18\textwidth]{figures/3dcigar661_K770_T05}
\includegraphics[angle=-0,width=0.18\textwidth]{figures/3dcigar661_K770_T2}
\\
\includegraphics[angle=-90,width=0.3\textwidth]{figures/3dcigar661_K770_e}\quad
\includegraphics[angle=-90,width=0.3\textwidth]{figures/3dcigar661_K770_v}
\caption{$\Gamma^m$ at times $t=0,0.1,0.2,0.5,2$. Below we show a plot of
the discrete energy $|\Gamma^m|$ and of the relative volume loss
$v_\Delta^m$ over time.}
\label{fig:cigar661_K770}
\end{figure}%
We also present two simulations for an anisotropic surface energy.
In the first one, we repeat the simulation in Figure~\ref{fig:nonconvex3dfine}
for the anisotropy
\begin{equation*}
\gamma(\vec{p}) = \sum_{i=1}^3 \left[ \delta^2|\vec{p}|^2 +
p_i^2(1-\delta^2)\right]^\frac12,\quad \delta = 0.1,
\end{equation*}
which approximates the $\ell^1$--norm of $\vec p$.
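The regularization can be checked numerically: as $\delta\to0$ each summand tends to $|p_i|$, so $\gamma(\vec p)\to\|\vec p\|_1$, and for $\delta>0$ the value lies above the $\ell^1$-norm. A minimal sketch:

```python
import numpy as np

def gamma_l1(p, delta=0.1):
    # gamma(p) = sum_i sqrt(delta^2 |p|^2 + p_i^2 (1 - delta^2))
    p = np.asarray(p, dtype=float)
    return float(np.sum(np.sqrt(delta**2 * np.dot(p, p) + p**2 * (1.0 - delta**2))))

p = np.array([1.0, -2.0, 0.5])
for d in (0.1, 0.01, 0.001):
    print(d, gamma_l1(p, d))   # decreases towards |p|_1 = 3.5 as delta -> 0
```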
For the computation in Figure~\ref{fig:aniL3nonconvex}
we use the discretization parameters $N_f=64$, $N_c=8$,
$\Delta t=10^{-3}$, $T=2$ and $K=578$. It can be observed that, as in the isotropic
case, the interface loses its convexity. Eventually it settles down to an
approximation of the Wulff shape, which here is a smoothed cube.
\begin{figure}
\center
\includegraphics[angle=-0,width=0.18\textwidth]{figures/3dmullins_anc_T0}
\includegraphics[angle=-0,width=0.18\textwidth]{figures/3dmullins_anc_T005}
\includegraphics[angle=-0,width=0.18\textwidth]{figures/3dmullins_anc_T01}
\includegraphics[angle=-0,width=0.18\textwidth]{figures/3dmullins_anc_T02}
\includegraphics[angle=-0,width=0.18\textwidth]{figures/3dmullins_anc_T2}
\\
\includegraphics[angle=-90,width=0.3\textwidth]{figures/3dmullins_anc_e}\quad
\includegraphics[angle=-90,width=0.3\textwidth]{figures/3dmullins_anc_v}
\caption{$\Gamma^m$ at times $t=0,0.05,0.1,0.2,2$. Below we show a plot of
the discrete energy $|\Gamma^m|_\gamma$ and of the relative volume loss
$v_\Delta^m$ over time.}
\label{fig:aniL3nonconvex}
\end{figure}%
In the final simulation we use an anisotropic energy of the form
\eqref{eq:g} with $r>1$, so that the iteration \eqref{eq:itaniMS} also has to
account for the nonlinearity in the approximation of the anisotropy in
\eqref{eq:fdaniMS}. In particular, we choose
\begin{equation*}
\gamma(\vec{p}) = \left(\sum_{i=1}^3 \left[ \delta^2|\vec{p}|^2 +
p_i^2(1-\delta^2)\right]^\frac r2 \right)^\frac1r,\quad \delta = \tfrac12,\
r = 9,
\end{equation*}
in order to model an anisotropy with an octahedral Wulff shape, see e.g.\
\cite[Figs.\ 4, 15]{ani3d}. For the experiment in Figure~\ref{fig:anisphere}
we start from an initial sphere of radius $3$, and use as discretization
parameters $N_f=64$, $N_c=8$, $\Delta t=10^{-3}$, $T=0.4$ and $K=386$. During the
evolution the moving interface approaches the Wulff shape, and decreases its
anisotropic surface energy as it does so. As expected, the numerical
approximation conserves the enclosed volume exactly.
\begin{figure}
\center
\includegraphics[angle=-0,width=0.18\textwidth]{figures/sphere3_ani_T0}
\includegraphics[angle=-0,width=0.18\textwidth]{figures/sphere3_ani_T005}
\includegraphics[angle=-0,width=0.18\textwidth]{figures/sphere3_ani_T01}
\includegraphics[angle=-0,width=0.18\textwidth]{figures/sphere3_ani_T02}
\includegraphics[angle=-0,width=0.18\textwidth]{figures/sphere3_ani_T04}
\\
\includegraphics[angle=-90,width=0.3\textwidth]{figures/sphere3_ani_e}\quad
\includegraphics[angle=-90,width=0.3\textwidth]{figures/sphere3_ani_v}
\caption{$\Gamma^m$ at times $t=0,0.05,0.1,0.2,0.4$. Below we show a plot of
the discrete energy $|\Gamma^m|_\gamma$ and of the relative volume loss
$v_\Delta^m$ over time.}
\label{fig:anisphere}
\end{figure}%
\def\soft#1{\leavevmode\setbox0=\hbox{h}\dimen7=\ht0\advance \dimen7
by-1ex\relax\if t#1\relax\rlap{\raise.6\dimen7
\hbox{\kern.3ex\char'47}}#1\relax\else\if T#1\relax
\rlap{\raise.5\dimen7\hbox{\kern1.3ex\char'47}}#1\relax \else\if
d#1\relax\rlap{\raise.5\dimen7\hbox{\kern.9ex \char'47}}#1\relax\else\if
D#1\relax\rlap{\raise.5\dimen7 \hbox{\kern1.4ex\char'47}}#1\relax\else\if
l#1\relax \rlap{\raise.5\dimen7\hbox{\kern.4ex\char'47}}#1\relax \else\if
L#1\relax\rlap{\raise.5\dimen7\hbox{\kern.7ex
\char'47}}#1\relax\else\message{accent \string\soft \space #1 not
defined!}#1\relax\fi\fi\fi\fi\fi\fi}
\section{Introduction}
The Su-Schrieffer-Heeger (SSH) model~\cite{PhysRevLett.42.1698, PhysRevB.22.2099} is a one-dimensional (1D) tight-binding model of a dimerized fermion chain, i.e., a chain with alternating hopping amplitudes. It was initially applied to describe topological excitations, namely moving solitons, in {\it trans}-polyacetylene molecules, which have a doubly degenerate ground state.
Later on, the dimerized fermion model was reexamined in condensed matter physics in the contexts of topological insulators, fractionalization of the quasiparticle charge, and adiabatic spin pumping~\cite{RevModPhys.83.1057,Qi:2008aa,PhysRevLett.99.196805,PhysRevB.74.195312}. The SSH model connects the geometric Zak phase with band topology in the 1D case, where non-trivial edge modes can form. Experimental realizations of topological phases in the SSH model have become feasible on platforms such as trapped ultracold atomic gases~\cite{Atala:2013aa,Leder:2016aa,Lohse:2016aa,xie2019topological,doi:10.1126/science.aat3406} and superconducting qubits~\cite{besedin2021}.
Generalizations of the SSH model have attracted significant interest. They include extensions to two-chain ladders~\cite{PhysRevB.102.045108}, the 2D lattice~\cite{PhysRevB.100.075437}, and long-range hopping~\cite{PhysRevB.102.205425}. The model also has a deep connection to driven-dissipative systems described by non-Hermitian Hamiltonians~\cite{PhysRevLett.102.065703,PhysRevB.97.045106, PhysRevX.8.031079}.
As systematically studied in Refs.~\cite{PhysRevLett.112.206602,PhysRevB.91.085429,PhysRevLett.113.046802,PhysRevB.89.085111}, disordered versions of SSH chains reveal transitions into a topological Anderson insulator phase. An experimental simulation of this phenomenon with ultracold atoms was reported in Ref.~\cite{doi:10.1126/science.aat3406}.
In this work, we provide an analytical calculation of the $\mathbb{Z}_2$ topological index $\nu$ averaged over disorder realizations via the central limit theorem. This solution yields a relation for the critical phase boundary. For a finite-size system, we provide a formula for the fluctuations of the index and study numerically how the edge modes evolve as the disorder increases.
The paper is organized as follows. We introduce the model in Sec.~\ref{model}. In Sec.~\ref{methods} we define the methods for calculating $\nu$ in a clean system (\ref{index-cl}) and in a disordered one (\ref{index-dis}). In Sec.~\ref{results} we present our results: in part~\ref{averaging} the analytic formula for $\langle \nu\rangle$ is obtained, and in~\ref{critical-ph-b} the critical phase boundary and the fluctuations are calculated. Results of numerical simulations for finite-size systems are presented in Sec.~\ref{numerical}: the phase diagram is analyzed in~\ref{ph-diagram}, the edge-mode wavefunctions in~\ref{edgemodes}, and the gap suppression in~\ref{gap}. We conclude in Sec.~\ref{summary}. The averaged Green function is found in Appendix~\ref{BA}. In Appendix~\ref{band-touching} we derive the band-touching condition from the spectral density of states.
\section{Model}
\label{model}
\label{Hamiltonian}
The SSH Hamiltonian for a dimerized chain,
\begin{equation}
H=\sum\limits_{i=1}^N u_i \big( a^\dagger_i b_i+ b^\dagger_i a_i \big)+w\sum\limits_{i=1}^{N-1} \big( a^\dagger_{i+1} b_i+ b^\dagger_i a_{i+1} \big) \ , \label{H}
\end{equation}
consists of two sublattices of fermion orbitals, with the hopping amplitudes chosen real. The respective annihilation (creation) operators are $a_i (a^\dagger_i)$ and $b_i (b^\dagger_i)$, where $i\in \{1, \ ... , \ N\}$ and $N$ is the total number of dimers. Inter-cell hopping amplitudes (at even bonds) are constant and equal to $w$. Intra-cell amplitudes at odd bonds, $u_i$, are
random with the average value $\langle u_i\rangle=u$. Random deviations $\delta u_i=u_i-u$ are uncorrelated at different sites, i.e., $\langle\delta u_i \delta u_{i'}\rangle = \delta_{i,i'} \gamma^2$. Here, $\gamma$ is the disorder strength.
This disorder preserves the chiral symmetry of (\ref{H}), i.e., $S H S=-H$ where $S=\prod_{i=1}^N\big( a^\dagger_i a_i - b^\dagger_i b_i \big)$. This symmetry indicates that zero energy modes can exist and a quantum phase transition into a topological Anderson insulator state is possible~\cite{Ryu_2010}.
The SSH model is known to have two distinct topological phases. They are distinguished by the presence or absence of the midgap edge modes localized at the chain ends. (The energies of these states are exponentially close to $E=0$ in the thermodynamic limit, $N\to \infty$.) The phases have two distinct topologies of energy bands characterized by $\mathbb{Z}_2$-topological invariant $\nu$ that takes two possible values, $\nu=0$ and $\nu=1$.
There is a spectral gap, $E_G=2|u-w|$, in both of these phases. This means that the topological phase transition, which occurs at the critical point $|w|= |u|$, is accompanied by the band-touching phenomenon.
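As a quick illustration of the spectrum of Eq.~(\ref{H}), the following sketch (our own illustration, not part of the paper; the parameter values are arbitrary) builds the single-particle SSH matrix for uniform $u_i=u$ and checks that the gap of a periodic chain equals $E_G=2|u-w|$.

```python
# Illustrative sketch (not from the paper): single-particle matrix of
# Eq. (H) in the basis (a_1, b_1, ..., a_N, b_N) for uniform u_i = u.
import numpy as np

def ssh_matrix(u, w, N, pbc=False):
    H = np.zeros((2 * N, 2 * N))
    for i in range(N):
        a, b = 2 * i, 2 * i + 1
        H[a, b] = H[b, a] = u              # intra-cell bond u_i
        if i < N - 1:
            H[a + 2, b] = H[b, a + 2] = w  # inter-cell bond w
    if pbc:
        H[0, 2 * N - 1] = H[2 * N - 1, 0] = w  # periodic closure
    return H

u, w, N = 1.0, 0.6, 200
E = np.linalg.eigvalsh(ssh_matrix(u, w, N, pbc=True))
gap = 2 * np.min(np.abs(E))
print(round(gap, 6))  # -> 0.8, i.e. E_G = 2|u - w|
```

For an even number of cells the momentum $\mathbf{k}=\pi$ is an allowed Bloch momentum, so the gap is reproduced to machine precision.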
\section{Methods}
\label{methods}
\subsection{Topological invariant in the clean limit}
\label{index-cl}
In the limit of infinite $N$ and $\gamma=0$, the binary index $\nu$ can be formulated in terms of a geometric phase. To this end, we rewrite the translationally invariant $H$ as an integral over the Brillouin zone with momentum ${\mathbf{k}}\in[-\pi,\ \pi]$,
\begin{equation}H=\int\limits_{-\pi} ^\pi\frac{d{\mathbf{k}}}{2\pi}\begin{bmatrix}a_{\mathbf{k}}^\dagger & b_{\mathbf{k}}^\dagger\end{bmatrix} \begin{bmatrix} 0 & u+we^{-i{\mathbf{k}}}\\ u+we^{i{\mathbf{k}}} & 0 \end{bmatrix} \begin{bmatrix} a_{\mathbf{k}}\\ b_{\mathbf{k}}\end{bmatrix} \ . \label{H(k)}
\end{equation}
The Fourier transform to momentum space reads $a_n=\int\frac{d{\mathbf{k}}}{2\pi} a_{\mathbf{k}}e^{i{\mathbf{k}}n}$ (and similarly for $b_n$).
The topological index is given by $\nu=\frac{1}{\pi}\varphi_{\rm Zak}$, where $\varphi_{\rm Zak}$ is the geometric Zak phase. It is given by the integral of the Berry connection over the Brillouin zone:
$
\varphi_{\rm Zak} =(-i) \int\limits_{-\pi}^\pi \langle\psi_{\mathbf{k}}|\partial_{\mathbf{k}}|\psi_{\mathbf{k}}\rangle d\mathbf{k}
$.
The eigenfunctions $|\psi_{\mathbf{k}}\rangle $ correspond to the lower band with the dispersion $\varepsilon_{\mathbf{k}}(u,w)=-\sqrt{u^2+w^2+2uw\cos {\mathbf{k}} }$. They read \begin{equation}
|\psi_{\mathbf{k}}\rangle = \frac{1}{\sqrt2}\begin{bmatrix} -e^{-i \arctan \frac{w\sin \mathbf{k}}{u+w\cos \mathbf{k}}} \\ 1 \end{bmatrix} \ .
\end{equation}
Consequently, the Berry connection is a $2\pi$-periodic function of $\mathbf{k}$ given by
$
\langle\psi_{\mathbf{k}}|\partial_{\mathbf{k}}|\psi_{\mathbf{k}}\rangle = i\frac{w(w+u \cos \mathbf{k}) }{2 \varepsilon_{\mathbf{k}}^2(u,w)}
$.
The integral in $\varphi_{\rm Zak}$ can be performed after the change of variables $e^{i{\mathbf{k}}}=z$ and integration along the contour $|z|=1$. Two poles enclosed by the contour contribute their residues to $\varphi_{\rm Zak}$: the first pole is located at $z=0$ and the second one at $z = - u/w$ if $|u/w|<1$ (or at $z = - w/u$ if $|u/w|>1$). After some algebra one finds:
\begin{equation}
\nu=\left \{ \begin{matrix} 0, \ |u|>|w|\ ; \\ 1, \ |u|<|w| \ .\end{matrix} \right.
\end{equation}
Here it is assumed that the first and last bonds of the chain carry the hopping amplitude $u$, so that $\nu=0$ for $|u|> |w|$ and $\nu=1$ for $|u|< |w|$.
The value $\nu=0$ corresponds to the trivial phase, where zero modes are absent. Conversely, $\nu=1$ corresponds to the topological phase, which hosts non-trivial zero modes.
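The quantization of the Zak phase is easy to verify numerically via a gauge-invariant Wilson-loop product of Bloch eigenvectors. The sketch below is our own illustration (grid size and parameters are arbitrary choices, not from the paper).

```python
# Illustrative sketch: Z2 index from the discretized Zak phase of the
# lower Bloch band, computed as a gauge-invariant Wilson-loop product.
import numpy as np

def zak_nu(u, w, nk=400):
    ks = np.linspace(-np.pi, np.pi, nk, endpoint=False)
    vecs = []
    for k in ks:
        hk = np.array([[0.0, u + w * np.exp(-1j * k)],
                       [u + w * np.exp(1j * k), 0.0]])
        _, v = np.linalg.eigh(hk)
        vecs.append(v[:, 0])      # lower-band eigenvector
    loop = 1.0 + 0.0j
    for i in range(nk):
        loop *= np.vdot(vecs[i], vecs[(i + 1) % nk])  # overlap <psi_k|psi_k'>
    zak = -np.angle(loop)         # Zak phase, defined mod 2*pi
    return int(round(abs(zak) / np.pi)) % 2

print(zak_nu(1.0, 0.6))  # -> 0 (trivial, |u| > |w|)
print(zak_nu(0.6, 1.0))  # -> 1 (topological, |u| < |w|)
```

The random gauge of each `eigh` eigenvector cancels in the closed product, so no gauge fixing is needed.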
This can be illustrated using the secular equation ${\rm det}\, H =0$ at zero energy. It has two complex solutions for the momenta ${\mathbf{q}}_{a,b}=\pm i \ln |w/u|+\pi$, which correspond to the $a$ and $b$ sublattices, respectively. We note that in the trivial phase, $|w/u|<1$, zero modes do not exist because the wavefunctions would grow exponentially. Conversely, in the topological phase with $|w/u|>1$ these solutions decay and, consequently, determine the wavefunctions of the edge states. One of them belongs to the $a$-sublattice; it is localized at the left ($n=0$) edge and has the exponential envelope $|\phi^{(a)}_n\rangle \sim (-1)^{n-1}e^{-(n-1)/\xi}$. The other edge mode is hosted by the $b$-sublattice and is located at the opposite edge, $|\phi^{(b)}_n\rangle \sim (-1)^{N-n}e^{-(N-n)/\xi }$. The localization length is $\xi=(|{\rm Im}\, {\mathbf{q}}_{a,b}|)^{-1}=\frac{1}{\ln |w/u|}$. Of course, in a finite-size system these solutions are not exact. As follows from the symmetries of the Hamiltonian, equally weighted linear combinations of these exponential solutions approximate the exact edge-mode wavefunctions. The overlap integral of the above solutions determines the exponentially small gap between their eigenvalues.
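These localized solutions can be seen directly by diagonalizing an open chain; the following sketch (our own illustration, with arbitrarily chosen parameters) confirms that in the topological phase two near-zero modes carry almost all of their weight at the chain ends.

```python
# Illustrative sketch: midgap edge modes of an open SSH chain in the
# topological phase |w| > |u|.
import numpy as np

def ssh_obc(u, w, N):
    H = np.zeros((2 * N, 2 * N))
    for i in range(N):
        H[2 * i, 2 * i + 1] = H[2 * i + 1, 2 * i] = u
        if i < N - 1:
            H[2 * i + 2, 2 * i + 1] = H[2 * i + 1, 2 * i + 2] = w
    return H

u, w, N = 0.6, 1.0, 60
E, V = np.linalg.eigh(ssh_obc(u, w, N))
idx = np.argsort(np.abs(E))[:2]            # the two states closest to E = 0
splitting = np.max(np.abs(E[idx]))         # exponentially small in N
psi = np.abs(V[:, idx[0]]) ** 2 + np.abs(V[:, idx[1]]) ** 2
edge_fraction = (psi[:8].sum() + psi[-8:].sum()) / psi.sum()
print(splitting < 1e-6, edge_fraction > 0.9)  # -> True True
```

The splitting decays as $e^{-N/\xi}$ with $\xi=1/\ln|w/u|$, so for $N=60$ and $u/w=0.6$ it is far below numerical noise.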
\subsection{Topological invariant in disordered system}
\label{index-dis}
In a disordered system, $\gamma\neq 0$, there is no translation invariance and the above method cannot be applied. An alternative definition of $\nu$ is based on an auxiliary Aharonov-Bohm phase $\Phi$ introduced for an SSH chain closed into a loop~\cite{PhysRevX.8.031079}. A periodic boundary condition (PBC) is implied in this construction; it provides a gauge-invariant $\Phi$. By analogy with the previous case of $\gamma=0$, where we dealt with a $2\pi$-periodic Berry connection, here the Hamiltonian becomes $2\pi$-periodic in $\Phi$.
This method is based on an analysis of the complex phase of the non-Hermitian part $h$ of the total Hamiltonian, represented as $H=h+h^\dagger$.
Here, the non-Hermitian operator $h$ annihilates $b$-states and creates $a$-states only: $h=\sum\limits_{i=1}^N u_i a^\dagger_i b_i+w\sum\limits_{i=1}^{N-1} a^\dagger_{i+1} b_i $. In order to impose PBC, one adds the term $w a^\dagger_1 b_N$ to $h$ (and $w b^\dagger_N a_1 $ to $h^\dagger$). The new Hamiltonian with PBC is $h_{\rm PBC}= h+w a^\dagger_1 b_N$.
The phase can be gauged into the hopping matrix elements, which is equivalent to a phase drop along the chain. Without loss of generality, we assume that $\Phi$ drops at the $N$-th bond. This transformation changes the complex phase of the matrix element, i.e., one replaces $w\to w e^{i\Phi}$ in $h$. Finally, the non-Hermitian matrix is introduced,
\begin{equation}\mathbf{h}_{i,j}(\Phi)=\delta_{i,j} u_i+w (\delta_{i,j+1}+e^{i\Phi}\delta_{i,j-N+1}) \ . \label{h}
\end{equation}
It parametrizes the phase-dependent Hamiltonian as $h_{\rm PBC}(\Phi)=\sum\limits_{i,j}^N a^\dagger_i \mathbf{h}_{i,j} (\Phi)b_j$. The matrix (\ref{h}) possesses the desired $2\pi$-periodicity, which provides the alternative definition of the topological invariant:
\begin{equation}\nu=\frac{1}{2\pi i }\int\limits_0^{2\pi} d\Phi\frac{\partial}{\partial\Phi}\ln ({\rm det}\mathbf{h}_{i,j}(\Phi) )\ .\label{nu}
\end{equation}
It can be shown that in the clean limit this definition is equivalent to the one formulated via the Berry connection.
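Equation~(\ref{nu}) can be evaluated numerically by tracking the winding of the phase of ${\rm det}\,\mathbf{h}(\Phi)$ over a discrete grid of $\Phi$. The sketch below is our own illustration of this procedure (grid size and parameters are arbitrary); it accepts random $u_i$ as well.

```python
# Illustrative sketch: Z2 index as the winding number of det h(Phi),
# Eq. (nu), accumulated over a discrete grid of the flux Phi.
import numpy as np

def winding_nu(u_i, w, n_phi=201):
    N = len(u_i)
    phases = []
    for phi in np.linspace(0.0, 2.0 * np.pi, n_phi):
        h = np.diag(np.asarray(u_i, dtype=complex))
        for i in range(1, N):
            h[i, i - 1] = w                    # delta_{i,j+1} term
        h[0, N - 1] = w * np.exp(1j * phi)     # flux on the closing bond
        sign, _ = np.linalg.slogdet(h)         # phase of det, overflow-safe
        phases.append(np.angle(sign))
    total = np.unwrap(phases)
    return int(round((total[-1] - total[0]) / (2.0 * np.pi)))

N = 50
print(winding_nu(np.full(N, 1.0), 0.6))  # -> 0 (trivial)
print(winding_nu(np.full(N, 0.6), 1.0))  # -> 1 (topological)
```

Using `slogdet` rather than `det` avoids overflow of $\prod_i u_i$ for long chains; only the phase of the determinant is needed.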
\section{Results}
\label{results}
\subsection{Averaging via central limit theorem}
\label{averaging}
The behavior of the invariant as a function of $w/u$ and $\gamma/u$ depends on the particular disorder realization. In what follows, we study the behavior of its average over realizations, denoted $\langle \nu\rangle$.
According to (\ref{h}) and (\ref{nu}), we find that
\begin{equation}\langle \nu\rangle=\langle\theta(1-\xi) \rangle \ , \label{nu-def}
\end{equation}
where we introduce the random variable \begin{equation}\xi=\left(\frac{u}{w}\right)^N\prod\limits_{i=1}^N\left|1+\frac{\delta u_i}{u}\right| \ . \label{xi}
\end{equation}
We begin by noting that the following identity holds, \begin{equation}\langle\theta(1-\xi) \rangle=\langle\theta(-\ln \xi) \rangle \ .
\label{theta-log}
\end{equation}
Let us treat $\ln \xi$ as a new random variable. According to (\ref{xi}), $\ln \xi= N\ln\frac{u}{w}+\eta$, where $\eta=\sum\limits_{i=1}^N\ln|1+\delta u_i/u|$. The central limit theorem can be applied to $\eta$ at this step: a sum of $N$ independent and identically distributed random variables is asymptotically normal, so that $\ln\xi$ is distributed as \begin{equation}P(\ln\xi)=\frac{1}{\sqrt{2\pi N z_2}}\exp\left(-\frac{1}{2 N z_2}\left[\ln\xi-N\left(\ln\frac{u}{w}+ z_1\right)\right]^2\right) \ .
\end{equation}
Here, $z_1$ and $z_2$ are the first and second cumulants (the mean and the variance) of a single term $\ln|1+\delta u_i/u|$. For an arbitrary disorder distribution $p_\gamma(\epsilon)$, they read
\begin{equation}z_1 =\int\ln \left|1+\epsilon/u\right| p_\gamma(\epsilon)d\epsilon \label{z1}
\end{equation}
and
\begin{equation}
z_2 =\int\left[\ln \left|1+\epsilon/u\right|\right]^2p_\gamma(\epsilon)d\epsilon - z_1^2 \ .
\end{equation}
Having applied the central limit theorem, the average (\ref{theta-log}) reduces to the integral $\langle\theta(-\ln\xi) \rangle=\int \theta(-x)P(x) d x$.
The integration is straightforward and one arrives at one of the central results of this work:
\begin{equation}
\langle\nu\rangle=\frac{1}{2}\left(1-{\rm erf }
\left[\sqrt{N}\frac{\ln (u/w)+z_1( \gamma/u)}{\sqrt{2z_2(\gamma/u)}} \right]
\right) \ . \label{nu-0}
\end{equation}
This analytical formula for $\langle\nu\rangle$ describes the critical phase boundary and the finite-size fluctuations of the invariant near the transition. At finite $N$ the transition between the topological phases, associated with $\langle\nu\rangle=0$ and $\langle\nu\rangle=1$, is smooth due to the averaging. In the thermodynamic limit, $N\to \infty$, the finite-size fluctuations vanish and the transition becomes sharp.
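Equation~(\ref{nu-0}) is straightforward to check against direct Monte Carlo sampling of $\xi$, Eq.~(\ref{xi}). The sketch below is our own illustration; a flat (uniform) disorder distribution of standard deviation $\gamma$ and all parameter values are chosen for convenience.

```python
# Illustrative sketch: check of Eq. (nu-0) by direct Monte Carlo sampling
# of xi, Eq. (xi), for a flat disorder distribution of width 2*sqrt(3)*gamma.
import numpy as np
from math import erf, log, sqrt

rng = np.random.default_rng(1)
u, w, N, gamma = 1.0, 0.95, 100, 0.3
a = sqrt(3) * gamma

# z1, z2: mean and variance of ln|1 + eps/u| (midpoint rule on a dense grid)
M = 200001
eps = -a + (np.arange(M) + 0.5) * (2 * a / M)
lv = np.log(np.abs(1.0 + eps / u))
z1 = lv.mean()
z2 = (lv ** 2).mean() - z1 ** 2

nu_theory = 0.5 * (1.0 - erf(sqrt(N) * (log(u / w) + z1) / sqrt(2.0 * z2)))

# Monte Carlo: nu = theta(-ln xi) for each disorder realization
du = rng.uniform(-a, a, size=(20000, N))
ln_xi = N * log(u / w) + np.log(np.abs(1.0 + du / u)).sum(axis=1)
nu_mc = np.mean(ln_xi < 0.0)
print(abs(nu_theory - nu_mc) < 0.02)  # -> True
```

For $N=100$ the residual deviation between the two estimates is dominated by the central-limit approximation itself, not by sampling noise.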
\subsection{Critical phase boundary. Fluctuations of $\mathbb{Z}_2$ invariant}
\label{critical-ph-b}
The result (\ref{nu-0}) allows us to obtain the following properties of the phase transition.
The first is the critical phase boundary. It follows from (\ref{nu-0}) under the condition $\langle\nu\rangle=1/2$. Resolving this condition, one finds the critical $w_0$ at given $\gamma$ and $u$:
\begin{equation}
w_0=u e^{z_1(\gamma/u)
} \ . \label{gamma-0-eq}
\end{equation}
Resolving this condition with respect to $\gamma$ instead is complicated because one has to solve a transcendental equation. However, this can be done in the limit of weak disorder, $\gamma/u\ll 1$, and weak dimerization, $u-w\ll u$, where $u>w>0$. In this limit one finds $z_1\approx -\frac{\gamma^2}{2 u^2}$. Substituting this approximate expression into Eq.~(\ref{gamma-0-eq}), one arrives at the critical disorder strength,
\begin{equation}\gamma_0=
\sqrt{2 u(u-w)} \ . \label{BA-gamma-0}
\end{equation}
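As a consistency check (our own numerical illustration, with arbitrarily chosen parameters), the critical $\gamma_0$ obtained by solving $w_0(\gamma)=w$ with the exact flat-distribution $z_1$ of Eq.~(\ref{z1-flat}) stays close to the weak-disorder estimate for small $u-w$:

```python
# Illustrative sketch: critical disorder strength from the exact boundary
# w0 = u*exp(z1), Eq. (gamma-0-eq), with the flat-distribution z1 of
# Eq. (z1-flat), compared to the weak-disorder estimate sqrt(2u(u-w)).
from math import log, sqrt

def z1_flat(u, gamma):
    a = sqrt(3) * gamma
    return (-1.0 + (u / (2 * a)) * log((u + a) / abs(u - a))
            + 0.5 * log(abs(1.0 - 3.0 * gamma ** 2 / u ** 2)))

u, w = 1.0, 0.98
lo, hi = 1e-4, 0.5                   # z1_flat decreases with gamma here
for _ in range(100):                 # bisection for z1_flat(u, g) = ln(w/u)
    mid = 0.5 * (lo + hi)
    if z1_flat(u, mid) > log(w / u):
        lo = mid
    else:
        hi = mid
gamma0_exact = 0.5 * (lo + hi)
gamma0_est = sqrt(2.0 * u * (u - w))     # weak-disorder estimate
print(abs(gamma0_exact - gamma0_est) / gamma0_est < 0.05)  # -> True
```

For $u-w=0.02$ the two values differ by about one percent, consistent with the neglected $O(\gamma^4)$ terms in $z_1$.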
Similarly to the clean limit, the disorder-driven topological transition, which occurs at $\gamma=\gamma_0$, is accompanied by a gap closing as well. The gap closing can be shown analytically in the limit $u-w\ll u$ via a calculation of the density of states within the first Born approximation (see Appendix~\ref{appendix}).
Second, for the binary quantity $\nu$ we immediately find that the finite-size fluctuations $\Delta\nu=\nu-\langle\nu\rangle$ obey
\begin{equation}
\langle\Delta\nu^2\rangle= \langle\nu\rangle(1- \langle\nu\rangle) \ . \label{nu-fluc}
\end{equation}
In the limits of weak disorder and weak dimerization mentioned above, one finds $z_2\approx \frac{\gamma^2}{u^2}$. In this case, the fluctuations read
\begin{equation}
\langle\Delta\nu^2\rangle= \frac{1}{4}\left(1-{\rm erf}^2\left[\sqrt{\frac{N}{2}}\left(\frac{u-w}{\gamma} - \frac{\gamma}{2u}\right)\right] \right) \ . \label{nu-fluc-2}
\end{equation}
The width $\Delta \gamma$ of the fluctuational region near the critical value $\gamma_0$, with the other parameters held constant, is estimated as
\begin{equation}
\Delta \gamma\sim \frac{u}{\sqrt N} \ . \label{delta-gamma} \end{equation}
We note that only the system size appears in this estimate, while the dimerization parameter does not. This means that the finite-size fluctuations near the critical surface are usually not small.
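The scaling (\ref{delta-gamma}) can be extracted directly from the erf argument in Eq.~(\ref{nu-fluc-2}): the sketch below (our own illustration) measures the $\gamma$-interval over which the argument runs from $+1$ to $-1$ and checks that it halves when $N$ is quadrupled.

```python
# Illustrative sketch: width of the fluctuation region from the erf
# argument of Eq. (nu-fluc-2); it scales as u/sqrt(N), Eq. (delta-gamma).
import numpy as np

U, W = 1.0, 0.9

def arg(gamma, N):
    return np.sqrt(N / 2.0) * ((U - W) / gamma - gamma / (2.0 * U))

def solve(target, N, lo=1e-3, hi=2.0):
    # arg(gamma, N) decreases monotonically in gamma on (0, inf)
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if arg(mid, N) > target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def width(N):
    # gamma-interval over which the erf argument runs from +1 to -1
    return solve(-1.0, N) - solve(1.0, N)

w100, w400 = width(100), width(400)
print(round(w100 / w400, 2))  # -> 2.0: quadrupling N halves the width
```

For this quadratic condition the width is exactly $2\sqrt{2}\,u/\sqrt{N}$, independent of $w$, which matches the estimate (\ref{delta-gamma}) up to the numerical prefactor.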
\subsection{Numerical simulations}
\label{numerical}
\subsubsection{Phase diagram}
\label{ph-diagram}
Formula (\ref{nu-0}) demonstrates good agreement with the numerically averaged data.
Hereafter, we assume a flat distribution, $u_i\in [u-\sqrt{3}\gamma; \ u+\sqrt{3} \gamma ]$, with
\begin{equation}
p_\gamma(\epsilon)
=\frac{1}{2\sqrt{3} \gamma} \theta(\sqrt{3}\gamma - |\epsilon|) \ . \label{rho}
\end{equation}
As demonstrated in Fig.~\ref{index}, the theoretical dependence (\ref{nu-0}) (blue curve) matches the simulation data (red dots) for a chain of $N=100$ dimers. The agreement extends to the domain of strong disorder, $\gamma\gtrsim u$, as well.
\begin{figure}[h!]
\center{\includegraphics[width=0.8\linewidth]{index-0.pdf}}
\caption{ Topological invariant averaged over disorder realizations, $\langle\nu\rangle$ (red dots), as a function of the disorder strength $\gamma$. The simulation is performed for a chain of $N=100$ dimers, $w/u=0.95$, and $15\times 10^3$ random realizations of $u_i$. Blue line: theoretical dependence given by Eq.~(\ref{nu-0}). The vertical dashed line marks a characteristic value of $\gamma$ that separates the domains of weak and strong disorder.
}
\label{index}
\end{figure}
If $w$ and $\gamma$ are varied at constant $u$, one arrives at the phase diagram of the disorder-driven transition.
It is shown in Fig.~\ref{diagram}, where the gap value is plotted for a system of $N=300$ dimers with PBC imposed. The data shown correspond to a particular disorder realization. Bright red areas correspond to a finite gap and blue ones to a suppressed gap. Joined black dots represent the boundary between the trivial ($\nu=0$) and topological ($\nu=1$) phases for this particular realization of $H$. The red curve is the critical phase boundary $w_0$ as a function of $\gamma$, see Eq.~(\ref{gamma-0-eq}), where the exponent has the following explicit form:
\begin{equation}
z_1
=- 1+\frac{u}{2\sqrt{3} \gamma} \ln \frac{u+\sqrt{3}\gamma}{|u - \sqrt{3}\gamma|} + \frac{1}{2}\ln \left|1-\frac{3\gamma^2}{u^2}\right| \ . \label{z1-flat}
\end{equation}
This function is found by performing the integration in (\ref{z1}) with the flat distribution (\ref{rho}).
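The closed form (\ref{z1-flat}) can be cross-checked against direct numerical integration of (\ref{z1}); the sketch below is our own quick verification with arbitrary parameter values.

```python
# Illustrative sketch: Eq. (z1-flat) versus the numerical integral (z1)
# for the flat distribution (rho).
import numpy as np
from math import log, sqrt

def z1_closed(u, gamma):
    a = sqrt(3) * gamma
    return (-1.0 + (u / (2 * a)) * log((u + a) / abs(u - a))
            + 0.5 * log(abs(1.0 - 3.0 * gamma ** 2 / u ** 2)))

def z1_numeric(u, gamma, M=200001):
    a = sqrt(3) * gamma
    eps = -a + (np.arange(M) + 0.5) * (2 * a / M)   # midpoint grid
    return np.log(np.abs(1.0 + eps / u)).mean()     # flat density 1/(2a)

ok = all(abs(z1_closed(1.0, g) - z1_numeric(1.0, g)) < 1e-6
         for g in (0.2, 0.4, 0.55))
print(ok)  # -> True
```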
\begin{figure}[h]
\center{\includegraphics[width=0.9\linewidth]{diagram-0.pdf}}
\caption{ Phase diagram of the topological transition for a particular disorder realization. Density plot: the gap value on a logarithmic scale, $\ln(E_G/2u)$, as a function of $\gamma$ and $w$ in a system of $N=300$ dimers with PBC. Black joined dots: the boundary between the trivial and topological phases. Red curve: critical phase boundary given by Eqs.~(\ref{gamma-0-eq}) and (\ref{z1-flat}). Blue dashed line: critical $w=u\sqrt{1-\frac{\gamma^2}{2u^2}}$ found in the weak-dimerization limit (see Eq.~(\ref{BA-gamma-0})), where the Born approximation is justified.}
\label{diagram}
\end{figure}
\subsubsection{Edge modes}
\label{edgemodes}
Here, we analyze the evolution of the edge-mode wavefunctions of the chain Hamiltonian (\ref{H}) as $\gamma$ increases. In the numerical simulations, we consider the eigenstates $\psi_\sigma(q E_{\rm min})$ from the upper and lower bands that have energies closest to $E=0$. Here, $\sigma=a,b$ stands for the sublattice index, the energies closest to zero are $q E_{\rm min}$, where $E_{\rm min}={\rm min}_j|E_j|$ and $q=\pm 1$ due to the particle-hole symmetry of $H$. For a particular realization of the disorder we calculate the wavefunction
\begin{equation}
|\psi|^2 =\sum\limits_{ \sigma=a,b ; q =\pm 1} |\psi_\sigma(q E_{\rm min})|^2 \label{wf-2}
\end{equation}
where the trace over the $q$- and $\sigma$-indices is taken. At the next step, the average of (\ref{wf-2}) over disorder realizations is performed.
In Fig.~\ref{wf} we present the data for $\langle|\psi|^2\rangle$ averaged over 100 disorder realizations for $N=100$ dimers and $w/u=0.95$.
One can see that the edge modes appear at $\gamma$ above $\gamma_0$, where $\langle\nu\rangle$ (red dots) saturates to unity. A further increase to $\gamma/u\gtrsim 1$ reveals a disruption of the localized modes and a reentrance into the non-topological phase, with the averaged invariant saturating to $\langle\nu\rangle=0$. The smooth decrease of $\langle\nu\rangle$ from unity to zero means that the topological invariant fluctuates strongly and is sensitive to the particular random realization.
\begin{figure}[h!]
\center{\includegraphics[width=0.8\linewidth]{wf-index-0.pdf}}
\caption{ The wavefunction given by Eq.~(\ref{wf-2}) averaged over 100 realizations. The simulation is performed for the chain Hamiltonian (\ref{H}) with $N=100$ dimers and $w/u=0.95$. The density plot of $\langle |\psi|^2\rangle$ shows the formation of edge states (red spots near $n=0$ and $n=N$) when the averaged topological invariant $\langle\nu\rangle$ (red dots) saturates to unity. The decrease of $\langle\nu\rangle$ at $\gamma/u\gtrsim 1$ corresponds to a disruption of the edge modes and the onset of a randomly distributed wavefunction.}
\label{wf}
\end{figure}
\subsubsection{Energy gap suppression}
\label{gap}
Similarly to the clean limit, the disorder-driven topological transition is accompanied by gap closing. This effect is described analytically in the limit $u-w\ll u$ via a calculation of the density of states within the first Born approximation (see Appendix~\ref{appendix}). In simulations of a finite-size system (see Fig.~\ref{gap-nu}) we observe a strong suppression of the gap, $E_G=2E_{\rm min}$, which decreases by five orders of magnitude (blue dots). The data are shown for a chain of $N=300$ dimers with averaging over 100 realizations.
This suppression corresponds to the band-touching phenomenon in the thermodynamic limit. At the same time, the averaged $\langle\nu\rangle$ changes smoothly from 0 to 1 (red dots) near the critical point $\gamma=\gamma_0$. The relative width of this transition, of the order of 10$\%$, is in agreement with the estimate (\ref{delta-gamma}), which predicts a weak power-law decay of the width with $N$.
\begin{figure}[h!]
\center{\includegraphics[width=0.7\linewidth]{gap-nu-0.pdf}}
\caption{
Topological phase transition driven by the disorder and suppression of the gap. Numerical data for $\langle E_G\rangle$ (blue dots, left axis) and $\langle \nu\rangle$ (red dots, right axis), averaged over 100 disorder realizations, as a function of the disorder strength $\gamma$. The simulated system involves $N=300$ dimers with PBC and $w/u=0.8$. The critical value $\gamma_0$ corresponds to the transition between the trivial ($\nu=0$) and topological ($\nu=1$) phases and to the minimum of the gap. }
\label{gap-nu}
\end{figure}
\section{Discussion and outlook}
\label{summary}
To conclude, we have theoretically studied topological transitions in the finite-size disordered SSH model. Our findings are motivated by state-of-the-art experiments in which topologically ordered phases were observed in artificial dimerized chains.
In this work, we derived an analytic formula for the $\mathbb{Z}_2$ topological invariant $\nu$ and its fluctuations averaged over an ensemble. The approach is based on the central limit theorem and a non-Hermitian Hamiltonian. In particular, this method gives an exact result for the critical surface at an arbitrary disorder strength.
Our work is complementary to previous studies of topological phases in disordered chains~\cite{PhysRevLett.112.206602,PhysRevB.91.085429,PhysRevLett.113.046802,PhysRevB.89.085111}.
A particular case of random intra-cell tunneling rates was considered here; however, the results can be extended to more general forms of disorder that preserve the chiral symmetry.
We also provided a detailed comparison of our findings with numerical simulations.
\begin{acknowledgments}
The reported study was supported by Russian Foundation for Basic Research (RFBR) according to the research project N\textsuperscript{\underline{o}}~20-37-70028.
\end{acknowledgments}
\renewcommand\theequation{A\arabic{equation}}
\setcounter{equation}{0}
\makeatletter
\let\@AAC@list\@empty
\makeatother
\setcounter{secnumdepth}{2}
\section{Appendix}
\label{appendix}
\subsection{Averaged Green function within Born approximation}
\label{BA}
Consider the retarded propagator, which has a $2N\times2N$ matrix structure in the coordinate ($n,n'$) and sublattice ($\sigma$) spaces,
\begin{multline}[G_{n,n'}(t-t')]_{\sigma,\sigma'}= \\
[i \delta_{n,n'}\delta_{\sigma,\sigma'}\delta(t-t')\partial_{t'} -\delta(t-t')[\mathcal{H}_{n,n'}]_{\sigma,\sigma'}]^{-1} \label{G-0}
\end{multline} for the system with the Hamiltonian (\ref{H}). The matrix $[\mathcal{H}_{n,n'}]_{\sigma,\sigma'}=\langle n,\sigma | H |n',\sigma'\rangle$ is the projection of (\ref{H}) onto the single-particle basis $|n,\sigma\rangle$, where the states are defined on the $n$-th site with sublattice index $\sigma=a,b$.
Since the system is stationary, we use a Fourier transform in time, i.e., $\partial_t \to - i \omega$.
Let us represent the Hamiltonian matrix as $\mathcal{H}=\mathcal{H}^{(0)}+\mathcal{V}$, i.e., as a sum of the translationally invariant part \begin{equation}\mathcal{H}^{(0)}_{n,n'}=u\delta_{n,n'} \sigma_x+w(\delta_{n,n'+1}\sigma_+ + \delta_{n+1,n'}\sigma_-) \ ,
\end{equation} and the disorder part, $\mathcal{V}_{n,n'}=\epsilon_n\delta_{n,n'} \sigma_x$, which is treated as a perturbation. The sublattice indices are encoded by the Pauli matrices $\sigma_x$, $\sigma_+=\frac{1}{2}(\sigma_x+i\sigma_y)$, and $\sigma_-=\frac{1}{2}(\sigma_x-i\sigma_y)$.
The Fourier-transformed propagator (\ref{G-0}) is expanded in a series in $\mathcal{V}$:
\begin{equation}
G_{n,n'}(\omega)=\mathcal{G}_{n,n'}(\omega)+\mathcal{G}_{n,k}(\omega) \sum\limits_{q=1}^{\infty} [(\mathcal{V} \mathcal{G}(\omega))^q]_{k,n'} \ . \label{G-1}
\end{equation}
Here, the matrix $\mathcal{G}_{n,n'}(\omega)$ is the retarded propagator of the clean system: $\mathcal{G}_{n,n'}(\omega)=\big[\delta_{n,n'} (\omega+i\alpha) \sigma_0 -\mathcal{H}^{(0)}_{n,n'}\big]^{-1}$, where $\alpha$ is a positive infinitesimal. Performing the discrete Fourier transform and using the representation of the Hamiltonian from (\ref{H(k)}), one finds $\sum\limits_{n,n'} e^{-i{\mathbf{k}} n+i{\mathbf{k}}'n'}\mathcal{G}_{n,n'}(\omega)=2\pi \delta({\mathbf{k}}-{\mathbf{k}}') \mathcal{G}_{\mathbf{k}}(\omega )$. Here, the bare Green function is defined for a momentum ${\mathbf{k}}$ in the Brillouin zone. It has a two-dimensional structure in $\sigma$-space:
\begin{multline}
\mathcal{G}_{\mathbf{k}}(\omega; u,w )
= \\ =\frac{(\omega+i\alpha) \sigma_0 + (u+w\cos \mathbf{k}) \sigma_x+ w \sin \mathbf{k} \sigma_y }{(\omega+i\alpha)^2 - \epsilon_{\mathbf{k}}^2(u,w)} \ . \label{bare-GF}
\end{multline}
Let us return to the Green function in the coordinate space (\ref{G-1}). Averaging $G_{n,n'}$ over disorder realizations makes it translationally invariant. Below we perform this calculation with the first-order self-energy (Born approximation), where only the Fock-type diagram is taken into account~\cite{altland2010condensed}.
The averaged Green function does not include crossed-line and ``rainbow'' diagrams; it is represented as follows:
\begin{equation}
\langle G_{n,n'}\rangle =\mathcal{G}_{n,n'} +\mathcal{G}_{n,k} \sum\limits_{q=1}^\infty [\Sigma_{k,m} \mathcal{G}_{m,n'}]^q \ . \label{G-Sigma}
\end{equation}
A straightforward resummation of (\ref{G-Sigma}) yields the Green function in the first Born approximation:
\begin{equation} \langle G_{\mathbf{k}} (\omega)\rangle = \left[ \mathcal{G}^{-1}_{\mathbf{k}}(\omega)-\Sigma^{\rm (BA)}_{\mathbf{k}}(\omega)\right]^{-1} \ .
\end{equation}
Here, the self-energy is given by $ \Sigma_{n,n'}^{\rm (BA)}=\langle \mathcal{V}_{n,k} \mathcal{G}_{k,m} \mathcal{V}_{m,n'}\rangle$.
After the averaging one finds that the self-energy is local in real space and reads:
\begin{equation}
\Sigma_{n,n'}^{\rm (BA)}(\omega; u,w)
=\gamma^2 \delta_{n,n'} \sigma_x \mathcal{G}_{n,n}(\omega; u,w) \sigma_x \ .
\end{equation}
Here, $\mathcal{G}_{n,n}=\int\limits_{-\pi}^\pi\mathcal{G}_{\mathbf{k}}\frac{d{\mathbf{k}}}{2\pi}$. Terms odd in ${\mathbf{k}}$ cancel under the integration, and the self-energy has the following structure in $\sigma$-space:
\begin{equation}
\Sigma^{\rm (BA)}_{\mathbf{k}}(\omega;u,w)= \gamma^2\Big[ f(\omega;u,w)\sigma_0 +g(\omega;u,w)\sigma_x \Big]\ . \label{Sigma-BA}
\end{equation}
Here,
\begin{equation}
f(\omega;u,w)= \int\limits_{-\pi}^\pi \frac{\omega+i\alpha }{(\omega+i\alpha)^2 - \epsilon_{\mathbf{k}}^2(u,w)} \frac{d{\mathbf{k}}}{2\pi} \ .
\label{int-f}
\end{equation}
and
\begin{equation}
g(\omega;u,w)= \int\limits_{-\pi}^\pi \frac{ u+w\cos \mathbf{k} }{(\omega+i\alpha)^2 - \epsilon_{\mathbf{k}}^2(u,w)} \frac{d{\mathbf{k}}}{2\pi} \ . \label{int-g}
\end{equation}
We observe that functions $f$ and $g$ renormalize the frequency and hopping element $u$ as follows: \begin{equation}
\omega\to \omega-\gamma^2f(\omega;u,w)
\end{equation}
and
\begin{equation}
u\to u+\gamma^2g(\omega;u,w) \ .
\end{equation} Finally,
we find that the averaged Green function is represented via the bare Green function (\ref{bare-GF}) as follows:
\begin{multline}
\langle G_{\mathbf{k}} (\omega;u,w,\gamma)\rangle
= \\ =\mathcal{G}_{\mathbf{k}}\left(\omega-\gamma^2f(\omega;u,w); u+\gamma^2g(\omega;u,w),w \right) \ . \label{G-BA}
\end{multline}
The renormalization in (\ref{G-BA}) allows one to construct a self-consistent Born approximation. We leave this beyond the scope of the present consideration.
\subsection{Band-touching condition}
\label{band-touching}
We address the limit of weak dimerization, $|u-w|\ll u$, when the gap in the clean limit is small compared to the bandwidth. The low-energy modes reside close to the momenta $\mathbf{k}=\pm \pi$. Near this point the spectrum is approximated as $\epsilon_{\mathbf{q}}(\Delta,u)=\sqrt{\Delta^2+u^2\mathbf{q}^2}$; here we introduced the momentum counted from the edge of the Brillouin zone, $\mathbf{q}=\mathbf{k}-\pi$, and the dimerization parameter $\Delta=u-w$, which is small compared to $u$. We note that an exact calculation of $g$ via the contour integrals with $z=e^{i{\mathbf{k}}}$ gives $g=0$ at $\omega=0$ and $\Delta<0$. Consequently, the band touching is only possible for $\Delta>0$, which is assumed hereafter.
Within the indicated approximation, the calculation of the integrals (\ref{int-f}) and (\ref{int-g}) reduces to the integration of a very narrow Lorentzian peak near $\mathbf{q}=0$. We find that for $\omega=0$, which corresponds to the local Green function at the midgap, the integrals are
\begin{multline}
f(\Delta, u )\approx - \int\limits_{-\infty}^\infty \frac{i\alpha }{{\mathbf{q}}^2 + \frac{\Delta^2+\alpha^2}{u^2} } \frac{d{\mathbf{q}}}{2\pi u^2} = \\ =\frac{-i\alpha}{2 u \sqrt{\Delta^2+\alpha^2}} \label{f-1}
\end{multline}
and
\begin{multline}
g(\Delta, u )\approx - \int\limits_{-\infty}^\infty \frac{ \Delta }{{\mathbf{q}}^2 + \frac{\Delta^2+\alpha^2}{u^2} } \frac{d{\mathbf{q}}}{2\pi u^2} = \\ = \frac{-\Delta}{2 u \sqrt{\Delta^2+\alpha^2}} \ . \label{g-1}
\end{multline}
The diagonal component of the Green function at coincident coordinates near $\omega=0$ reads:
\begin{equation}\mathcal{G}^{\rm (diag)}= f(\omega-\gamma^2f(\Delta,u);u+\gamma^2g(\Delta,u),w) \ .
\end{equation}
It has the following form
\begin{multline}
\mathcal{G}^{\rm (diag)}(\omega)\approx\\
\approx - \int\limits_{-\infty}^\infty \frac{\omega-\gamma^2f(\Delta,u)+i\alpha }{{\mathbf{q}}^2 + \frac{(\Delta+\gamma^2 g(\Delta,u))^2-(\omega-\gamma^2f(\Delta,u)+i\alpha)^2}{u^2} } \frac{d{\mathbf{q}}}{2\pi u^2} = \\
= -\frac{\omega-\gamma^2f(\Delta,u)+i\alpha}{2 u \sqrt{(\Delta+\gamma^2 g(\Delta,u))^2-(\omega-\gamma^2f(\Delta,u)+i\alpha )^2}} \ .
\end{multline}
The imaginary part of the Green function allows one to obtain the spectral density of states as
\begin{equation}
\rho(\omega) =\frac{-1}{\pi}{\rm Im} \mathcal{G}^{\rm (diag)}(\omega) \ .
\end{equation}
For the midgap energy, i.e., $\omega=0$, one finds:
\begin{multline}
\rho(0) = \frac{\alpha(1+\frac{\gamma^2}{2u\Delta})}{2\pi u \sqrt{(\Delta-\frac{\gamma^2 \theta(\Delta)}{2u} )^2+\alpha^2(1+\frac{\gamma^2}{2u\Delta})^2}} \ . \label{DOS}
\end{multline}
Here, we used the approximate forms of $f$ and $g$ from (\ref{f-1}) and (\ref{g-1}).
The midgap density of states given by (\ref{DOS}) has a singularity when the band-touching condition holds:
\begin{equation}
\Delta-\frac{\gamma^2 }{2u}=0 \ . \label{cond-gap}
\end{equation}
This equation is resolved as $
\gamma_0^{\rm (BA)} = \sqrt{2u(u-w)}$, where $ u>w>0$.
In other words, the critical disorder strength found in the first Born approximation reproduces the result (\ref{BA-gamma-0}) derived via the central limit theorem.
\section{Introduction}
In real-world settings, it is crucial to robustly perform classification and out-of-distribution (OoD) detection with high levels of confidence.
The problem of detecting whether a sample is in-distribution, i.e., from the training distribution, or OoD is also critical in the presence of adversarial attacks.
This is crucial nowadays in many applications in safety, security, and defence.
However, deep neural networks produce overconfident predictions and do not distinguish in-distribution from OoD data.
Adversarial examples, i.e., small modifications of the input, can change the classifier's decision.
An important property of a classifier is to address such limitations with a high level of confidence and to provide robustness guarantees for neural networks.
In parallel, OoD detection is a challenging aim since classifiers set high confidence to OoD samples away from the training data.
The state-of-art models are overconfident in their predictions, and do not distinguish in- and OoD.
The setting that our proposed Few-shot ROBust (FROB) model addresses is robust few-shot Out-of-Distribution (OoD) detection combined with few-shot Outlier Exposure (OE).
To address the rarity of outliers and the limited samples in the few-shot setting, we aim to reduce the number of few-shot OoD samples required, while maintaining accurate and robust performance.
Diverse data are available today in large quantities.
Deep learning magnifies the difficulty of distinguishing OoD from in-distribution.
It is possible to use such data to improve OoD detection by training detectors with auxiliary outlier sets \citep{3}.
OE enables detectors to generalize to detect unseen OoD samples with improved robustness and performance.
Models trained on diverse outliers learn cues for whether inputs are unmodelled, and can thus detect unseen data and improve OoD detection.
By exposing models to different OoD, the complement of the support of the normal class distribution is modelled and the detection of new types of anomalies is enabled.
OE improves the calibration of deep neural network classifiers in the setting where a fraction of the data is OoD, addressing the problem of classifiers being overconfident when applied to OoD \citep{1}.
Aiming to solve the few-shot robustness problem for classification and OoD detection, the contribution of our FROB methodology is an integrated robust framework that combines self-supervised few-shot negative data augmentation on the distribution confidence boundary with few-shot OE for improved OoD detection.
The combination of the boundary, generated in a self-supervised learning manner, with the imposition of low confidence at this learned boundary is the main contribution of FROB, and it decisively improves robustness for few-shot OoD detection.
To address the rarity of relevant outliers during training, we propose to use even a few shots of OoD samples to improve the OoD detection performance.
FROB achieves significantly better robustness and resilience in few-shot OoD detection, while maintaining competitive in-distribution accuracy.
FROB generalizes to unseen anomalies, with applicability to new, in-the-wild test sets that are uncorrelated with the training sets.
\iffalse
Our contributions are:
\begin{itemize}
\item Develop an integrated robust framework for self-supervised few-shot multi-class classification and OoD detection.
\item Integrate few-shot capability: Perform robust few-shot OoD detection and classification for adversarial attacks.
\item Devise few-shot outliers: Generate few-shot samples on the boundary of the distribution and use few-shot OE.
\item Propose a novel loss function for self-supervised OoD sample generation on the distribution boundary and negative training.
\item Provide significantly better resilience to OoD detection, while maintaining competitive in-distribution accuracy.
\item Improve robustness by not overfitting to adversarial examples generated by applying a specific algorithm.
\item Confidence prediction by addressing perturbations to the data and measuring robustness in $l_{\infty}$-norm spaces.
\end{itemize}
We develop a self-supervised learning algorithm that performs negative sample generation on the distribution boundary to improve robustness for OoD detection.
We propose a novel loss function, and we integrate few-shot capability.
We perform
cross-dataset classification evaluation and OoD detection evaluation.
\fi
FROB's evaluation on different sets, CIFAR-10, SVHN, CIFAR-100, and low-frequency noise,
using cross-dataset and One-Class Classification (OCC) evaluations, shows that our self-supervised model with few-shot OE on the confidence boundary and few-shot adaptation improves the few-shot OoD detection performance and outperforms benchmarks.
The robustness performance analysis of FROB to the number of few-shots and to outlier variation shows that it is robust to few-shots and outperforms baselines.
\section{Our Proposed Few-shot ROBustness (FROB) Methodology}
We propose FROB for few-shot OoD detection and classification using discriminative and generative models.
We devise a methodology for improved robustness and reliable confidence prediction, forcing low confidence both close to and far away from the data.
To improve robustness, FROB generates strong adversarial samples on the boundary close to the normal class.
It finds the boundary of the normal class, and it combines the self-supervised learning few-shot boundary with our robustness loss.
\textbf{Flowchart of FROB.}
Fig.~\ref{fig:asdfasdasdfasdfdfdfd} shows the flowchart of FROB which uses a discriminative model for classification and OoD detection.
FROB also uses a generator for the OoD samples and the learned boundary.
It generates low-confidence samples and performs active negative training with the generated OoD samples on the boundary.
It performs self-supervised negative sampling of confidence-boundary samples by generating strong, specifically adversarial OoD samples.
It trains classifiers and generators so that samples on and beyond the boundary are robustly classified with low confidence.
\textbf{Our proposed loss.}
We denote the normal class data by $\textbf{x}$ where $\textbf{x}_i$ are the labeled data with class labels $y_i$.
Our proposed loss of the discriminative model which is minimized during training is
\begin{equation}
\text{arg min}_{f} \ - \dfrac{1}{N} \sum_{i=1}^N \log \dfrac{\exp(f_{y_i}(\textbf{x}_i))}{\sum_{k=1}^K \exp(f_k(\textbf{x}_i))} - \lambda \dfrac{1}{M} \sum_{m=1}^{M} \log \left( 1 - \dfrac{\exp(f(\textbf{Z}_m))}{\sum_{k=1}^K \exp(f_k(\textbf{Z}_m))} \right)
\label{eq:equatioononn1}
\end{equation}
where $f(\cdot)$ is the Convolutional Neural Network (CNN) discriminative model for multi-class classification with $K$ classes.
Our loss has two terms and a hyper-parameter; the two terms operate on different samples, for positive and negative training, respectively.
The first term is the cross-entropy between $y_i$ and the predictions, $\text{softmax} (f(\textbf{x}_i))$; the CNN is followed by the normalized exponential to obtain the probability over the classes.
The second term, our robustness loss, forces $f(\cdot)$ to accurately detect outliers, in addition to performing classification.
It operates on the few-shot OE samples, $\textbf{Z}$, and is weighted by the hyper-parameter $\lambda$.
Here, $k$ is a class index; for the in-distribution data, $N$ is the batch size and $i$ is the batch sampling index, while for the OoD data, $M$ is the batch size and $m$ is the batch sampling index.
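As an illustration, the loss of Eq. (\ref{eq:equatioononn1}) can be sketched in a few lines of NumPy. Since $\exp(f(\textbf{Z}_m))$ in the second term carries no class subscript, we read it here, by analogy with the anomaly score of Eq. (\ref{eq:equatioononn4}), as the maximum-confidence class; this reading, like all array shapes and values below, is an illustrative assumption, not the paper's implementation.

```python
import numpy as np

def softmax(logits):
    z = logits - logits.max(axis=1, keepdims=True)  # numerically stabilized softmax
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def frob_loss(logits_x, y, logits_Z, lam=0.5):
    """Sketch of Eq. (1): cross-entropy on the in-distribution batch plus a
    negative-training term on the few-shot OE samples Z. The unsubscripted
    exp(f(Z_m)) is read as the maximum-confidence class (an assumption)."""
    p_x = softmax(logits_x)
    ce = -np.mean(np.log(p_x[np.arange(len(y)), y]))   # positive training term
    p_Z = softmax(logits_Z).max(axis=1)                # max softmax confidence on outliers
    neg = -np.mean(np.log(1.0 - p_Z))                  # push outlier confidence down
    return ce + lam * neg

rng = np.random.default_rng(0)
logits_x = rng.normal(size=(8, 10)); y = rng.integers(0, 10, size=8)
logits_Z = rng.normal(size=(4, 10))
print(frob_loss(logits_x, y, logits_Z))
```

Setting $\lambda=0$ recovers the plain cross-entropy, so the negative term only adds a penalty that shrinks outlier confidence.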
FROB then trains a generator to generate low-confidence samples on the normal class boundary.
Our algorithm includes these learned low-confidence samples in the training to improve the performance in the few-shot setting.
Instead of using a large OE set, which constitutes an ad hoc choice of outliers to model the complement of the support of the normal class distribution, FROB performs learned negative data augmentation and self-supervised learning, to model the boundary of the support of the normal class distribution.
We train a deep CNN generator, denoted by $O(\textbf{z})$, where $O$ refers to OoD samples and $\textbf{z}$ are latent-space samples from a standard Gaussian distribution.
Our proposed optimization of maximizing dispersion subject to being on the boundary is given by
\begin{align}
\begin{split}
\text{arg min}_{O} \, \, & \, \frac{1}{N-1} \sum_{j=1, \, \textbf{z}_j \neq \textbf{z}}^N \dfrac{ || \textbf{z} - \textbf{z}_j||_2 }{ || O(\textbf{z}) - O(\textbf{z}_j) ||_2 } \label{eq:q1werqwrq}\\
& + \, \, \mu \, \max_{l=1,2,\dots,K} \, \dfrac{\exp(f_l(O(\textbf{z})) - f_l(\textbf{x}))}{\sum_{k=1}^K \exp(f_k(O(\textbf{z})) - f_k(\textbf{x}))} \, + \, \, \nu \, \text{min}_{ j = 1,2,\dots,Q} \, || O(\textbf{z}) - \textbf{x}_j ||_2
\end{split}
\end{align}
where, using (\ref{eq:q1werqwrq}), we penalize the probability that $O(\textbf{z})$ has higher confidence than the normal class.
We hence make $O(\textbf{z})$ have lower probability than $\textbf{x}$ \citep{32, 33}.
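A minimal NumPy sketch of the three terms in (\ref{eq:q1werqwrq}), evaluated for the first latent sample of a toy batch, may read as follows; the batching convention and all array shapes are illustrative assumptions rather than the actual training code.

```python
import numpy as np

def boundary_objective(Z, OZ, logits_OZ, logits_x, X, mu=1.0, nu=1.0):
    """Sketch of the generator objective in Eq. (2) for one latent sample
    (index 0 of the batch). Z: latent batch, OZ = O(Z): generated batch,
    logits_*: classifier outputs f(.), X: normal-class samples."""
    z0, o0 = Z[0], OZ[0]
    # Dispersion term: ratio of latent- to data-space distances; minimizing
    # it encourages spread (scattering) of the generated samples.
    d_lat = np.linalg.norm(z0 - Z[1:], axis=1)
    d_dat = np.linalg.norm(o0 - OZ[1:], axis=1)
    disp = np.mean(d_lat / d_dat)
    # Confidence term: max softmax of the logit differences f(O(z)) - f(x),
    # penalizing O(z) being more confident than the normal sample.
    diff = logits_OZ[0] - logits_x[0]
    conf = np.max(np.exp(diff) / np.exp(diff).sum())
    # Proximity term: distance to the nearest normal-class sample keeps
    # O(z) on the boundary of the normal class.
    prox = np.min(np.linalg.norm(o0 - X, axis=1))
    return disp + mu * conf + nu * prox

rng = np.random.default_rng(1)
Z = rng.normal(size=(5, 2)); OZ = rng.normal(size=(5, 3))
logits_OZ = rng.normal(size=(5, 4)); logits_x = rng.normal(size=(5, 4))
X = rng.normal(size=(16, 3))
val = boundary_objective(Z, OZ, logits_OZ, logits_x, X)
print(val)
```

In training, this scalar would be minimized over the generator parameters; here it is only evaluated on random arrays to make the three competing terms concrete.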
\begin{figure*}[!tbp]
\centering \includegraphics[width=0.815\textwidth]{FROB_Flowchart_29_August_2021.pdf}
\caption{FROB training with learned negative sampling, FS-$O(\textbf{z})$, and few-shot outliers, FS-OE.}
\label{fig:asdfasdasdfasdfdfdfd}
\end{figure*}
FROB includes the learned low-confidence samples in the training by performing (\ref{eq:equatioononn1}) with the self-generated few-shot boundary, $O(\textbf{z})$, in addition to $\textbf{Z}$.
Our self-supervised learning mechanism to calibrate confidence in unforeseen scenarios is (\ref{eq:q1werqwrq}) followed by (\ref{eq:equatioononn1}).
FROB performs boundary data augmentation in a learnable self-supervised learning manner.
It introduces self-generated boundary samples, and sets them as OoD to better perform few-shot OoD detection.
This learned boundary has strong and adversarial anomalies close to the distribution support and near high probability normal class samples.
FROB introduces optimal, relevant, and useful anomalies to more accurately detect few-shots of OoD \citep{25, 26}.
It detects OoD robustly, by generating strong adversarial OoD samples and helpful task-specific anomalies.
A property of our nested optimization, where the inner optimization is $O(\textbf{z})$ in (\ref{eq:q1werqwrq}) and the outer one is cross-entropy with negative training in (\ref{eq:equatioononn1}), is that if an optimum is reached for the inner one, an optimum will also be reached for the outer.
FROB addresses the few-shots problem by performing negative data augmentation in a well-sampled manner on the support boundary of the normal class.
It performs OoD sample description and characterization, leaving no gap between the normal class and our self-generated anomalies.
FROB addresses the question of what OoD samples to introduce to our model for negative training, to robustly detect few-shots of data.
FROB introduces self-supervised learning and learned data augmentation using the Deep Tightest-Possible Data Description algorithm of (\ref{eq:q1werqwrq}) followed by (\ref{eq:equatioononn1}), and our self-generated confidence boundary in (\ref{eq:q1werqwrq}) is robust to mode collapse \citep{7, 10}.
By performing scattering, FROB achieves diversity using the ratio of distances in the latent and data spaces rather than maximum entropy \citep{20, 21}.
Our framework uses data space point-set distances \citep{7, 10, 22, 23}.
\textbf{Inference.}
The Anomaly Score (AS) of FROB for \textit{any} queried test sample, $\tilde{\textbf{x}}$, during inference is
\begin{equation}
\text{AS}(f, \tilde{\textbf{x}}) \, = \, \max_{l=1,2,\dots,K} \, \dfrac{\exp(f_l(\tilde{\textbf{x}}))}{\sum_{k=1}^K \exp(f_k(\tilde{\textbf{x}}))}
\label{eq:equatioononn4}
\end{equation}
where if the AS is smaller than a threshold $\tau$, i.e. $\text{AS} < \tau$, $\tilde{\textbf{x}}$ is OoD. Otherwise, $\tilde{\textbf{x}}$ is in-distribution.
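The inference rule of Eq. (\ref{eq:equatioononn4}) can be sketched as follows; the threshold value $\tau=0.5$ below is an illustrative assumption, since in practice $\tau$ would be tuned on held-out data.

```python
import numpy as np

def anomaly_score(logits):
    """Eq. (4): maximum softmax probability of the classifier f for a
    queried test sample; smaller values indicate OoD."""
    z = logits - logits.max()            # stabilized softmax
    p = np.exp(z) / np.exp(z).sum()
    return p.max()

def is_ood(logits, tau=0.5):
    # A sample is flagged OoD when AS < tau (tau is illustrative here).
    return anomaly_score(logits) < tau

confident = np.array([8.0, 0.0, 0.0, 0.0])   # near one-hot -> in-distribution
uncertain = np.array([0.1, 0.0, 0.1, 0.05])  # flat logits -> OoD
print(is_ood(confident), is_ood(uncertain))  # prints: False True
```

This makes explicit that the detector is a simple threshold on the classifier's own confidence, so inference costs a single forward pass.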
\section{Related Work on Classification with OoD Detection}
\textbf{Outlier Exposure.}
The OE method trains detectors with outliers to improve the OoD performance to detect unseen anomalies \citep{3}.
Using auxiliary sets, disjoint from train and test data, models learn better representations for OoD detection.
Confidence Enhancing Data Augmentation (CEDA), Adversarial Confidence Enhancing Training (ACET), and Guaranteed OoD Detection (GOOD) tackle the problem of classifiers being overconfident at OoD samples \citep{1, 2}.
Their aim is to force low confidence in an $l_{\infty}$-norm ball around each OoD sample, where the prediction confidence is $\text{max}_{k=1,2,\dots,K} \ p_k(\textbf{x})$ for the output $K$-class softmax \citep{50, 4, 5}.
CEDA employs point-wise robustness \citep{27, 54}.
GOOD finds worst-case OoD detection guarantees.
The models are trained on OE sets, using the 80 Million Tiny Images dataset with the normal class removed.
Disjoint distributions are used for positive and negative training, but the OoD samples for OE are chosen in an ad hoc way.
In contrast, FROB performs learned negative data augmentation on the boundary of the normal class to streamline and redesign few-shot OE (and zero-shot OE).
\textbf{Human prior.}
GOOD defines the normal class, then filters it out from the 80 Million Tiny Images.
This filtering-out process of normality from the OE set is human-dependent.
This modified dataset is set as anomalies.
Next, GOOD learns the normal class and sets low confidence to these OoD.
This process is data-dependent, not automatic, and feature-dependent \citep{13, 38}.
In contrast, FROB eliminates the need for hand-crafted feature extraction and human intervention, which do not scale; automating them is the aim of Deep Learning.
This filtering-out process is not practical and cannot be used in real-world scenarios as anomalies are not confined in finite closed sets \citep{51}.
FROB avoids feature-, application-, and dataset-dependent processes.
Our self-supervised boundary data augmentation obviates memorization, scalability, and data diversity problems arising from memory replay and prioritized experience replay \citep{28, 37}.
\textbf{Learned OoD samples.}
The Confidence-Calibrated Classifier (CCC) uses a GAN to create samples out of, but close to the normal class \citep{12}.
FROB substantially differs from CCC, as CCC finds a threshold and not the boundary.
CCC uses the OE set, $U(\textbf{y})$, where the labels follow a Uniform distribution, to compute this threshold.
This is limiting as the threshold depends on $U(\textbf{y})$, which is an ad hoc choice of outliers.
In contrast, FROB finds the confidence boundary and does not use $U(\textbf{y})$ to find this boundary.
FROB streamlines OE and few-shot outliers.
Our boundary is not a function of $U(\textbf{y})$, as $U(\textbf{y})$ is not necessary \citep{38}.
For negative training, CCC defines a closeness metric (KL divergence), and then penalizes this metric \citep{28, 56, 13}.
CCC suffers from mode collapse as it does not perform scattering for diversity.
The models in \citet{12, 14, 15} and \citet{36} perform confidence-aware classification.
Self-Supervised outlier Detection (SSD) creates OoD samples in the Mahalanobis metric \citep{11}. It is not a classifier, as it performs OoD detection with OE.
FROB achieves fast inference with (\ref{eq:equatioononn4}), in contrast to \citet{52} which is slow during inference \citep{53}.
\citet{52} does not address issues arising from detecting with nearest neighbors while using a different composite loss for training.
\section{Evaluation and Results}
We evaluate FROB trained on different sets, CIFAR-10, SVHN, CIFAR-100, 80 Million Tiny Images, Uniform noise, and low-frequency noise, and we report the Area Under the Receiver Operating Characteristic Curve (AUROC), the Adversarial AUROC (AAUROC), and the Guaranteed AUROC (GAUROC) which uses $l_{\infty}$-norm perturbations for the OoD \citep{1, 30}.
For the evaluation of FROB, we test different combinations of normal class sets, OE datasets, few-shot outliers (FS-OE), the generated boundary (FS-$O(\textbf{z})$), and test sets, in an alternating manner.
We examine the generalization performance of FROB to few-shots of unseen new OoD samples at the dataset level, Out-of-Dataset anomalies.
To examine the robustness to the number of few-shot samples, we successively halve the number of few-shots.
We perform uniform sampling for choosing the few-shots, and we examine the variation of the dependent variable, AUROC, to changes of the independent variable, the provided number of few-shots of OoD (FS-OE).
In this way, we evaluate the robustness of FROB to the number of few-shots.
We also examine the Failure (Break) Point of our proposed FROB algorithm and of the benchmarks; we define it as the number of few-shots below which the AUROC performance decreases and eventually falls to $0.5$.
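Since AUROC is the central metric of this evaluation, a small self-contained sketch (with illustrative scores only) shows what a Break Point AUROC of $0.5$ means: the anomaly scores of Eq. (\ref{eq:equatioononn4}) no longer separate in-distribution from OoD samples. The rank-statistic formulation below is equivalent to the area under the ROC curve.

```python
import numpy as np

def auroc(scores_in, scores_ood):
    """AUROC for a detector whose anomaly score (Eq. (4)) is LOW for OoD:
    the probability that a random in-distribution sample scores higher
    than a random OoD sample, counting ties as one half."""
    s_in = np.asarray(scores_in)[:, None]
    s_ood = np.asarray(scores_ood)[None, :]
    return float(np.mean((s_in > s_ood) + 0.5 * (s_in == s_ood)))

# Illustrative scores: a working detector vs. one at the 0.5 Break Point.
good_in, good_ood = [0.99, 0.95, 0.90], [0.30, 0.40, 0.55]
print(auroc(good_in, good_ood))      # exactly 1.0: perfect separation
rng = np.random.default_rng(2)
r_in, r_ood = rng.random(1000), rng.random(1000)
print(auroc(r_in, r_ood))            # near 0.5: no discrimination
```

The second case mimics the Failure Point: when few-shots become too scarce, the score distributions overlap and AUROC collapses toward chance level.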
\textbf{Datasets.}
For normal class, we use CIFAR-10 and SVHN.
For OE, we use 80 Million Tiny Images, SVHN, and CIFAR-100.
For few-shot, we use CIFAR-10, CIFAR-100, SVHN, and Low-Frequency Noise (LFN).
We evaluate FROB on CIFAR-100, SVHN, CIFAR-10, LFN, and Uniform noise.
\textbf{Benchmarks.}
We compare FROB to benchmarks.
Having access to large OE sets is not representative of the few-shot OoD detection setting.
We compare FROB to GOOD, CEDA, ACET, and OE \citep{1, 2, 3}.
We also compare FROB to GEOM, GOAD, DROCC, Hierarchical Transformation-Discriminating Generator (HTD), Support Vector Data Description (SVDD), and Patch SVDD (PaSVDD) in the few-shot setting using One-Class Classification (OCC) \citep{31}.
GOOD and \citet{3} use the 80 Million Tiny Images for OE.
FROB outperforms baselines in the few-shot OoD detection setting.
\textbf{Ablation Study.}
We test FROB for: (i) with OE, and without (w/o) FS-OE and FS-$O(\textbf{z})$,
(ii) with (w/) FS-OE and w/o FS-$O(\textbf{z})$,
(iii) w/ FS-OE and FS-$O(\textbf{z})$, and
(iv) w/ FS-OE, FS-$O(\textbf{z})$, and OE.
\subsection{FROB Performance Analysis Compared to Benchmarks}
\textbf{Overview.}
We evaluate the benchmarks using OE and compare them to FROB and its OoD performance.
We analyse the performance of baselines, CEDA, OE model, ACET, and GOOD, using 80 Million Tiny Images for OE, as well as the performance of CCC using OE SVHN or CIFAR-10.
We compare them to FROB, using the 80 Million Tiny Images for OE.
FROB without using our self-supervised generated distribution boundary shows similar behavior to the benchmarks, and outperforms them in all the examined AUC-type metrics in Table~\ref{tab:adfsadfsasdfssfgdsadsfgdfadsfgasasdfdf1}.
Table~\ref{tab:adfsadfsasdfssfgdsadsfgdfadsfgasasdfdf1} shows the results of FROB without few-shot boundary samples, when the normal class is SVHN and CIFAR-10, using the OE set 80 Million Tiny Images, evaluated on different test sets in AUROC.
FROB without $O(\textbf{z})$ outperforms benchmarks.
Taking this FROB model behavior into account, we examine the performance of FROB without the boundary $O(\textbf{z})$ with a variable number of few-shot outliers in Figs.~1 and 2.
Without the generated $O(\textbf{z})$, the AUROC performance decreases as the number of few-shots decreases, and the model is neither robust nor suitable for few-shot OoD detection when fewer than approximately $800$ few-shots are available.
Then, we examine the performance of FROB with the self-learned $O(\textbf{z})$, in Figs.~4 and 5.
Sec.~\ref{sec:adfaasdsadffsaasdfdfs} shows that $O(\textbf{z})$ is effective and that FROB is robust, even to a very small number of few-shots.
\subsubsection{FROB Performance Analysis Using OE}
First, we examine the performance of benchmarks, CEDA, OE, ACET, and GOOD, when setting (a) SVHN and (b) CIFAR-10 as the normal class, with OE 80 Million Tiny Images, tested on different sets, CIFAR-100, CIFAR-10, SVHN, and Uniform noise, in AUROC, AAUROC, and GAUROC.
We examine the performance of CCC using CIFAR-10 for OE, tested on Uniform noise.
We compare the performance of benchmarks to that of FROB without $O(\textbf{z})$.
Our algorithm, when setting SVHN as the normal class, on average outperforms the baselines CCC, CEDA, OE, and ACET, in AAUROC and GAUROC, and yields competitive results in AUROC. FROB shows comparable performance to GOOD in Table~\ref{tab:adfsadfsasdfssfgdsadsfgdfadsfgasasdfdf1}. This is also shown in Table~\ref{tab:adfsadfsasdfsaaasasdfdf1}, in the Appendix.
When setting CIFAR-10 as the normal class, FROB on average outperforms the benchmarks CEDA, OE, ACET, and GOOD in AUC-type metrics, according to Table~\ref{tab:adfsadfsasdfssfgdsadsfgdfadsfgasasdfdf1} and also to Table~\ref{tab:adfssfgsfdasfgsdfsasdfsaaasasdfdf1} in the Appendix.
Table~\ref{tab:adfsadfsasdfssfgdsadsfgdfadsfgasasdfdf1}, as well as Tables~\ref{tab:adfsadfsasdfsaaasasdfdf1} and \ref{tab:adfssfgsfdasfgsdfsasdfsaaasasdfdf1} in the Appendix, present the performance of FROB without using our self-generated boundary samples, $O(\textbf{z})$. They, together with Table~\ref{tab:sadfadssadffasdf2}, show that FROB outperforms benchmarks.
\begin{table*}[!tbp]
\caption{ \centering
Performance of benchmarks with 80 Million Tiny Images in AUROC, AAUROC, and GAUROC \citep{1}. Comparison to FROBInit without $O(\textbf{z})$. \textit{\small FROBInit refers to FROB w/o $O(\textbf{z})$, C10 to CIFAR-10, C100 to CIFAR-100, 80M to 80 Million Tiny Images, and UN to Uniform noise.}}
\begin{center}
\begin{scriptsize}
\begin{sc}
\begin{tabular}
{p{1.2cm} p{2.1cm} p{1.0cm} p{2.65cm} p{1.1cm} p{1.15cm} p{1.2cm}}
\toprule{ {\footnotesize NORMAL}}
& \normalsize {\footnotesize MODEL}
& \normalsize {\footnotesize Outlier}
& \normalsize {\footnotesize Test Data}
& \normalsize {\footnotesize AUROC}
& \normalsize {\footnotesize AAUROC}
& \normalsize {\footnotesize GAUROC}
\\
\midrule
\midrule
\iffalse
\normalsize SVHN
& \normalsize {\text{FROB w/ $O(\textbf{z})$}}
& \normalsize {NONE}
& \normalsize {\text {C10, C100, UN}}
& \normalsize {\text{0.961}}
& \normalsize {\text{0.961}}
& \normalsize {\text{0.000}}
\\
\midrule
\fi
\normalsize SVHN
& \normalsize {\text{FROBInit}}
& \normalsize {80M}
& \normalsize {\text {C10, C100, UN}}
& \normalsize {\text{0.995}}
& \normalsize {\text{0.995}}
& \normalsize {\text{0.979}}
\\
\midrule
\normalsize SVHN
& \normalsize {CCC}
& \normalsize {C10}
& \normalsize {\text{C10, UN}}
& \normalsize {\textbf{1.000}}
& \normalsize {\text{0.000}}
& \normalsize {\text{0.000}}
\\
\midrule
\normalsize SVHN
& \normalsize {CEDA}
& \normalsize {80M}
& \normalsize {\text{C10, C100, UN}}
& \normalsize {\text{0.999}}
& \normalsize {\text{0.773}}
& \normalsize {\text{0.000}}
\\
\midrule
\normalsize SVHN
& \normalsize {OE}
& \normalsize {80M}
& \normalsize {\text{C10, C100, UN}}
& \normalsize {\textbf{1.000}}
& \normalsize {\text{0.736}}
& \normalsize {\text{0.000}}
\\
\midrule
\normalsize SVHN
& \normalsize {ACET}
& \normalsize {80M}
& \normalsize {\text{C10, C100, UN}}
& \normalsize {\text{0.999}}
& \normalsize {\text{0.984}}
& \normalsize {\text{0.000}}
\\
\midrule
\normalsize SVHN
& \normalsize {GOOD}
& \normalsize {80M}
& \normalsize {\text{C10, C100, UN}}
& \normalsize {\text{0.998}}
& \normalsize {\textbf{0.987}}
& \normalsize {\textbf{0.984}}
\\
\midrule
\midrule
\iffalse
\normalsize C10
& \normalsize {\text{FROB w/ $O(\textbf{z})$}}
& \normalsize {NONE}
& \normalsize {\text{SVHN, C100, UN}}
& \normalsize {\text{0.882}}
& \normalsize {\textbf{0.882}}
& \normalsize {\text{0.009}}
\\
\midrule
\fi
\normalsize C10
& \normalsize {\text{FROBInit}}
& \normalsize {80M}
& \normalsize {\text{SVHN, C100, UN}}
& \normalsize {\text{0.860}}
& \normalsize {\text{0.860}}
& \normalsize {\textbf{0.718}}
\\
\midrule
\normalsize C10
& \normalsize {CCC}
& \normalsize {SVHN}
& \normalsize {\text{SVHN, UN}}
& \normalsize {\text{0.570}}
& \normalsize {\text{0.000}}
& \normalsize {\text{0.000}}
\\
\midrule
\normalsize C10
& \normalsize {CEDA}
& \normalsize {80M}
& \normalsize {\text{SVHN, C100, UN}}
& \normalsize {\text{0.957}}
& \normalsize {\text{0.427}}
& \normalsize {\text{0.000}}
\\
\midrule
\normalsize C10
& \normalsize {OE}
& \normalsize {80M}
& \normalsize {\text{SVHN, C100, UN}}
& \normalsize {\textbf{0.962}}
& \normalsize {\text{0.522}}
& \normalsize {\text{0.000}}
\\
\midrule
\normalsize C10
& \normalsize {ACET}
& \normalsize {80M}
& \normalsize {\text{SVHN, C100, UN}}
& \normalsize {\text{0.957}}
& \normalsize {\text{0.871}}
& \normalsize {\text{0.000}}
\\
\midrule
\normalsize C10
& \normalsize {GOOD}
& \normalsize {80M}
& \normalsize {\text{SVHN, C100, UN}}
& \normalsize {\text{0.817}}
& \normalsize {\text{0.709}}
& \normalsize {\text{0.700}}
\\
\midrule
\midrule
\end{tabular}
\end{sc}
\end{scriptsize}
\end{center}
\label{tab:adfsadfsasdfssfgdsadsfgdfadsfgasasdfdf1}
\end{table*}
\iffalse
\begin{table*}[!tbp]
\caption{ \centering
Mean performance of benchmarks using OE 80 Million Tiny Images in AUROC, AAUROC, and GAUROC \citep{1}. Comparison with FROB without (w/o) $O(\textbf{z})$. \textit{\small C10 refers to CIFAR-10, C100 to CIFAR-100, 80M to 80 Million Tiny Images, and UN to Uniform noise.}}
\begin{center}
\begin{scriptsize}
\begin{sc}
\begin{tabular}
{p{1.2cm} p{1.05cm} p{1.2cm} p{0.8cm} p{2.65cm} p{1.1cm} p{1.15cm} p{1.2cm}}
\toprule{ {\footnotesize NORMAL}}
& \normalsize {\footnotesize MODEL}
& \normalsize {\footnotesize FS-$O(\textbf{z})$}
& \normalsize {\footnotesize OE}
& \normalsize {\footnotesize Test Data}
& \normalsize {\footnotesize AUROC}
& \normalsize {\footnotesize AAUROC}
& \normalsize {\footnotesize GAUROC}
\\
\midrule
\midrule
\normalsize SVHN
& \normalsize {\text{FROB}}
& \normalsize { w/o }
& \normalsize {80M}
& \normalsize {\text {C10, C100, UN}}
& \normalsize {\text{0.995}}
& \normalsize {\textbf{0.995}}
& \normalsize {\text{0.979}}
\\
\midrule
\normalsize SVHN
& \normalsize {CCC}
& \normalsize { w/o }
& \normalsize {C10}
& \normalsize {\text{C10, UN}}
& \normalsize {\textbf{1.000}}
& \normalsize {\text{0.000}}
& \normalsize {\text{0.000}}
\\
\midrule
\normalsize SVHN
& \normalsize {CEDA}
& \normalsize { w/o }
& \normalsize {80M}
& \normalsize {\text{C10, C100, UN}}
& \normalsize {\text{0.999}}
& \normalsize {\text{0.773}}
& \normalsize {\text{0.000}}
\\
\midrule
\normalsize SVHN
& \normalsize {OE}
& \normalsize { w/o }
& \normalsize {80M}
& \normalsize {\text{C10, C100, UN}}
& \normalsize {\textbf{1.000}}
& \normalsize {\text{0.736}}
& \normalsize {\text{0.000}}
\\
\midrule
\normalsize SVHN
& \normalsize {ACET}
& \normalsize { w/o }
& \normalsize {80M}
& \normalsize {\text{C10, C100, UN}}
& \normalsize {\text{0.999}}
& \normalsize {\text{0.984}}
& \normalsize {\text{0.000}}
\\
\midrule
\normalsize SVHN
& \normalsize {GOOD}
& \normalsize { w/o }
& \normalsize {80M}
& \normalsize {\text{C10, C100, UN}}
& \normalsize {\text{0.998}}
& \normalsize {\text{0.987}}
& \normalsize {\textbf{0.984}}
\\
\midrule
\midrule
\normalsize C10
& \normalsize {\text{FROB}}
& \normalsize { w/o }
& \normalsize {80M}
& \normalsize {\text{SVHN, C100, UN}}
& \normalsize {\textbf{0.979}}
& \normalsize {\textbf{0.861}}
& \normalsize {\textbf{0.718}}
\\
\midrule
\normalsize C10
& \normalsize {CCC}
& \normalsize { w/o }
& \normalsize {SVHN}
& \normalsize {\text{SVHN, UN}}
& \normalsize {\text{0.570}}
& \normalsize {\text{0.000}}
& \normalsize {\text{0.000}}
\\
\midrule
\normalsize C10
& \normalsize {CEDA}
& \normalsize { w/o }
& \normalsize {80M}
& \normalsize {\text{SVHN, C100, UN}}
& \normalsize {\text{0.957}}
& \normalsize {\text{0.427}}
& \normalsize {\text{0.000}}
\\
\midrule
\normalsize C10
& \normalsize {OE}
& \normalsize { w/o }
& \normalsize {80M}
& \normalsize {\text{SVHN, C100, UN}}
& \normalsize {\text{0.962}}
& \normalsize {\text{0.522}}
& \normalsize {\text{0.000}}
\\
\midrule
\normalsize C10
& \normalsize {ACET}
& \normalsize { w/o }
& \normalsize {80M}
& \normalsize {\text{SVHN, C100, UN}}
& \normalsize {\text{0.957}}
& \normalsize {\text{0.871}}
& \normalsize {\text{0.000}}
\\
\midrule
\normalsize C10
& \normalsize {GOOD}
& \normalsize { w/o }
& \normalsize {80M}
& \normalsize {\text{SVHN, C100, UN}}
& \normalsize {\text{0.817}}
& \normalsize {\text{0.709}}
& \normalsize {\text{0.700}}
\\
\midrule
\midrule
\end{tabular}
\end{sc}
\end{scriptsize}
\end{center}
\label{tab:adfsadfsasdfssfgdsadsfgdfadsfgasasdfdf1}
\end{table*}
\fi
\subsubsection{Comparison of FROB with Benchmarks}
\textbf{Analysis of FROB without $O(\textbf{z})$ by reducing the few-shot outliers.}
Fig.~\ref{fig:dfasdsadfsafsadffsadf1} shows the performance of FROBInit for normal CIFAR-10 without $O(\textbf{z})$, using variable number of Outlier Set SVHN few-shots, tested on different sets.
The FROBInit performance decreases with a decreasing number of SVHN few-shots.
A low AUROC of $0.5$ is reached at approximately $800$ few-shots for CIFAR-100, as shown in Table~\ref{tab:asdfaasdfsasassadfdf7} in the Appendix.
Fig.~\ref{fig:dfasdsadfsafsadffsadf1} shows that the benchmarks in Table~\ref{tab:adfsadfsasdfssfgdsadsfgdfadsfgasasdfdf1}, as well as in Tables~\ref{tab:adfsadfsasdfsaaasasdfdf1} and \ref{tab:adfssfgsfdasfgsdfsasdfsaaasasdfdf1} in the Appendix, do not achieve robust performance for decreasing few-shots: their performance drops with a decreasing number of few-shots, declines steeply below $1830$ samples, and reaches a Failure Break Point at approximately $800$ few-shots for the test set CIFAR-100 (AUROC $0.51$).
The performance of FROBInit, without $O(\textbf{z})$, decreases fast and relatively sharply below $1830$ samples when testing on low-frequency noise in Fig.~\ref{fig:dfasdsadfsafsadffsadf1}; for fewer than $1800$ few-shots, the error of modeling the full complement of the support of the normal class is high.
\begin{figure}[t]
\begin{minipage}[t]{0.483\columnwidth}%
\centering \includegraphics[width=1.0\columnwidth]{Fig2_20November2021.pdf}
\caption{OoD performance of FROBInit in AUROC, for normal class CIFAR-10, w/o $O(\textbf{z})$ and w/ few-shots of variable number from SVHN.}
\label{fig:dfasdsadfsafsadffsadf1}
\label{fig:asdfdsfszsz4}
\end{minipage}\hfill{}%
\begin{minipage}[t]{0.483\columnwidth}%
\centering \includegraphics[width=1.0\columnwidth]{Fig3_20November2021.pdf}
\caption{Performance of FROBInit in GAUROC for normal CIFAR-10 w/o $O(\textbf{z})$ and w/ variable number of FS samples from SVHN.}
\label{fig:szdkfasfkasdfsz4}
\end{minipage}
\end{figure}
Fig.~\ref{fig:szdkfasfkasdfsz4} and Table~\ref{tab:asdfaasdfsasassadfdf7} show the performance of FROBInit in GAUROC for normal CIFAR-10 without $O(\textbf{z})$, with variable number of FS SVHN.
The GAUROC performance of FROB shows that when the few-shots originate from the test set, SVHN, a less steep decrease is observed than when the few-shots and the test samples come from different sets, e.g., low-frequency noise.
The rate of decrease of GAUROC below $1800$ samples is small when the OE and the test samples are from the same set.
The GAUROC performance of FROB without $O(\textbf{z})$, for the test set CIFAR-100, is low.
FROBInit achieves better performance than the benchmarks in the FS OoD detection setting (Table~\ref{tab:adfsadfsasdfssfgdsadsfgdfadsfgasasdfdf1}).
For FROBInit with only few-shots, the OoD performance rapidly decreases below $1800$ FS samples, tending to a Break Point of approximately $800$ shots for normal CIFAR-10.
It shows more robust behavior when reducing the number of FS if the Outlier Set and the test set coincide.
\textbf{Using reduced number of 80 Million.}
Table~\ref{tab:sadfadssadffasdf2} in the Appendix shows the performance of FROBInit using the 80 Million Tiny Images for Outlier Dataset with all available data and for reduced number of samples, few-shots of it (FS).
We evaluate FROBInit without the boundary, $O(\textbf{z})$, on different test sets, CIFAR-10, CIFAR-100, SVHN, low-frequency noise, uniform noise, and 80 Million Tiny Images.
We set SVHN and CIFAR-10 as the normal classes.
The performance of FROBInit slightly decreases, in AUC-type metrics, when $73257$ samples from 80 Million Tiny Images are used as the chosen Outlier Calibration Set, instead of the $50$ million total training samples, during training.
\textbf{Using various Outlier Sets.}
Table~\ref{tab:asfgasadfssadsdfdf3} in the Appendix shows the performance, in AUC-type metrics, of FROBInit trained on normal CIFAR-10 without our $O(\textbf{z})$, using all the data of the Outlier Datasets SVHN, CIFAR-100, and 80 Million Tiny Images ($73257$ samples), tested on different sets.
FROB achieves higher AUROC using SVHN as the Outlier Dataset than using the OE sets 80 Million Tiny Images and CIFAR-100.
FROBInit achieves higher GAUROC with the Outlier Dataset SVHN, i.e. $0.69$ on average, and 80 Million Tiny Images, $0.66$ on average, than with CIFAR-100.
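As an illustration of why an aggregate AUC-type metric can sit well below every strong per-set score, the sketch below contrasts arithmetic and geometric averaging of per-test-set AUROCs. These are generic aggregations shown for intuition only, not necessarily the paper's exact definitions of AAUROC and GAUROC; the geometric mean penalizes a single weak test set far more heavily.

```python
import math

# Generic aggregations over per-test-set AUROCs (illustrative, not
# necessarily the paper's AAUROC/GAUROC definitions).
def arithmetic_mean(aurocs):
    return sum(aurocs) / len(aurocs)

def geometric_mean(aurocs):
    return math.exp(sum(math.log(a) for a in aurocs) / len(aurocs))

per_set = [0.99, 0.99, 0.10]  # strong on two test sets, weak on one
print(round(arithmetic_mean(per_set), 3))  # -> 0.693
print(round(geometric_mean(per_set), 3))   # -> 0.461
```

One weak test set drags the geometric aggregate below $0.5$ even though two of the three per-set scores are near-perfect.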
\begin{figure}[t]
\begin{minipage}[t]{0.483\columnwidth}%
\centering \includegraphics[width=1.0\columnwidth]{Fig4_20November2021.pdf}
\caption{FROB for normal C10 with $O(\textbf{z})$\\ and FS SVHN, in AUROC and GAUROC.}
\label{fig:asdfasadfasadfsaszsz3}
\label{fig:asdfsadsadfasf6}
\end{minipage}\hfill{}%
\begin{minipage}[t]{0.483\columnwidth}%
\centering \includegraphics[width=1.0\columnwidth]{Fig5_b20November2021.pdf}
\caption{FROB performance in AUROC for\\ normal class SVHN w/ $O(\textbf{z})$ and FS C10.}
\label{fig:afzxadfasfsdsdfasfsdfasadfasadfsaszsz3}
\label{fig:asfzxadfasfsdsdfasdfdfsadsadfasf6}
\end{minipage}
\end{figure}
\begin{table*}[!tbp]
\caption{ \centering Mean OCC performance of FROB w/ $O(\textbf{z})$, w/ $80$ FS OCC C10 \citep{31}.}
\begin{center}
\begin{scriptsize}
\begin{sc}
\begin{tabular}
{p{1.5cm} p{4.6cm} p{1.3cm}
p{2.0cm} p{1.3cm}}
\toprule{\small NORMAL}
& \normalsize {\small MODEL}
& \normalsize {\small $O(\textbf{z})$}
& \normalsize {\small TEST DATA}
& \normalsize {\small AUROC}
\\
\midrule
\midrule
\normalsize C10:~OCC
& \normalsize {FROB}
& \normalsize {w/}
& \normalsize {\text{C10:~OCC}}
& \normalsize {\textbf{0.784}}
\\
\midrule
\normalsize C10:~OCC
& \normalsize {FROB w/ Outlier SVHN}
& \normalsize {w/}
& \normalsize {\text{C10:~OCC}}
& \normalsize {\textbf{0.802}}
\\
\midrule
\normalsize C10:~OCC
& \normalsize {HTD~\citep{31}}
& \normalsize {w/o}
& \normalsize {\text{C10:~OCC}}
& \normalsize {\text{0.756}}
\\
\iffalse
\midrule
\normalsize C10:~OCC
& \normalsize {GOAD}
& \normalsize {w/o}
& \normalsize {\text{C10:~OCC}}
& \normalsize {\text{0.562}}
\\
\fi
\midrule
\normalsize C10:~OCC
& \normalsize {GEOM ($>$ GOAD)}
& \normalsize {w/o}
& \normalsize {\text{C10:~OCC}}
& \normalsize {\text{0.735}}
\\
\iffalse
\midrule
\normalsize C10:~OCC
& \normalsize {DROCC}
& \normalsize {w/o}
& \normalsize {\text{C10:~OCC}}
& \normalsize {\text{0.585}}
\\
\fi
\midrule
\normalsize C10:~OCC
& \normalsize {SVDD ($>$ PaSVDD, DROCC)}
& \normalsize {w/o}
& \normalsize {\text{C10:~OCC}}
& \normalsize {\text{0.608}}
\\
\iffalse
\midrule
\normalsize C10:~OCC
& \normalsize {PaSVDD}
& \normalsize {w/o}
& \normalsize {\text{C10:~OCC}}
& \normalsize {\text{0.510}}
\\
\fi
\midrule
\midrule
\end{tabular}
\end{sc}
\end{scriptsize}
\end{center}
\label{tab:asdfdfgdssdfvxbacvbdfdfsdfadfaasdfs}
\end{table*}
\subsection{Effectiveness of our Proposed Confidence Boundary} \label{sec:adfaasdsadffsaasdfdfs}
\textbf{Efficacy of the learned boundary.}
We evaluate FROB with the generated boundary, $O(\textbf{z})$, and few-shot outliers.
Fig.~\ref{fig:asdfasadfasadfsaszsz3}, and Table~\ref{tab:asdfasfasdfdxfdfsdfs8} in the Appendix, show the performance of FROB trained on normal class CIFAR-10 with $O(\textbf{z})$ and a decreasing number of few-shots from SVHN.
We evaluate the performance of FROB on the test sets CIFAR-100, SVHN, and low-frequency noise.
Compared to Fig.~\ref{fig:dfasdsadfsafsadffsadf1}, when we use $O(\textbf{z})$, the performance increases, showing robustness even for a very small number of few-shots, down to zero-shots.
Fig.~4 experimentally demonstrates the effectiveness of $O(\textbf{z})$: the AUROC improves when $O(\textbf{z})$ is used compared to when it is not.
FROB using few-shot data outperforms benchmarks and improves robustness to the number of few-shots, pushing down the phase transition point (Fig.~4).
\textbf{FROB robustness to the number of few-shots.}
With a decreasing number of few-shot samples, the AUROC of FROB is robust and approximately independent of the number of FS samples, down to approximately zero-shots, for the test sets CIFAR-100, SVHN, and low-frequency noise.
The component of FROB with the highest benefit is $O(\textbf{z})$: as the number of few-shots decreases, the performance of FROB does not decrease, and this robustness to the number of few-shot samples is the main contribution of this paper.
When the few-shots are from the test set, Fig.~\ref{fig:asdfasadfasadfsaszsz3} (and Table~\ref{tab:asdfasfasdfdxfdfsdfs8} in the Appendix) shows that using the learned boundary, $O(\textbf{z})$, is effective and robust in the few-shot OoD detection setting.
FROB with the self-generated boundary samples, $O(\textbf{z})$, achieves better performance than the benchmarks in this setting.
\textbf{Existing methodologies are sensitive to the number of few-shots.}
Current methodologies are not robust to a small number of few-shots: they perform negative training with OoD samples placed randomly in the data space, leaving a lot of unfilled space between the OoD samples and the normal class \citep{1, 3}.
They need $50$ million outliers to model the complement of the support of the normal class distribution, and they use irrelevant, conservative OoD samples that do not model the support boundary of the normal class distribution.
Instead, FROB learns OoD samples generated on the boundary, without requiring $50$ million outliers.
FROB redesigns and streamlines OE to work even for zero-shots.
Our negative data augmentation creates the tightest possible OoD samples, as close as possible to the support of the normal class distribution.
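The idea of boundary-hugging negatives can be made concrete with a toy, hypothetical sketch (not FROB's actual generator): each normal point is pushed slightly past the empirical support along its direction from the class centroid, so the synthetic outliers hug the boundary instead of lying far away in the data space.

```python
import math
import random

# Toy sketch of boundary negatives (hypothetical, not FROB's method):
# project each 2-D normal point just beyond the empirical support
# radius, measured from the class centroid.
def boundary_outliers(normal, margin=0.1):
    cx = sum(p[0] for p in normal) / len(normal)
    cy = sum(p[1] for p in normal) / len(normal)
    radius = max(math.hypot(p[0] - cx, p[1] - cy) for p in normal)
    out = []
    for x, y in normal:
        d = math.hypot(x - cx, y - cy) or 1.0  # avoid division by zero
        scale = (radius * (1.0 + margin)) / d
        out.append((cx + (x - cx) * scale, cy + (y - cy) * scale))
    return out

random.seed(0)
normal = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(100)]
ood = boundary_outliers(normal)
# every synthetic outlier now lies just outside the empirical support
```

The `margin` parameter is an assumption of this sketch and controls how tightly the negatives wrap the support.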
\subsubsection{OoD Detection Performance of FROB when FS Data Are from the Test Set}
FROB improves both the AUROC and the GAUROC when the few-shots and the OoD test samples originate from the same set, as shown in Figs.~\ref{fig:asdfsadsadfasf6} and \ref{fig:asdfdsfszsz4}, and it also improves the AUROC when the few-shots and the OoD test samples originate from different sets.
These results for FROB are also presented in Tables~\ref{tab:asdfasfasdfdxfdfsdfs8} and \ref{tab:asdfaasdfsasassadfdf7} in the Appendix.
Tables~\ref{tab:ajsdhfjasdfsadasdff} and \ref{tab:asddfasfasdfasdf} in the Appendix show that FROB (i) improves both the AUROC and the GAUROC when the few-shots and the OoD test samples originate from the same set, and (ii) enhances the AUROC when they originate from different sets, i.e. CIFAR-100 and low-frequency noise.
When using the learned boundary $O(\textbf{z})$ and FS, as well as the 80 Million Tiny Images, the GAUROC of FROB improves, according to Tables~7-10 in the Appendix, for FS of $1830$, $915$, $100$, and $80$ samples, respectively.
When using the same Outlier Dataset and test set, SVHN, FROB without the 80 Million Tiny Images set achieves a high GAUROC of $0.98$ (Fig.~4), comparable to the $0.97$ obtained with 80 Million Tiny Images.
\textbf{FROB outperforming baselines.}
FROB with $O(\textbf{z})$ achieves an AUROC of $0.92$ for normal CIFAR-10, with $1830$ SVHN few-shots, tested on Uniform noise (Table~\ref{tab:asdfasfasdfdxfdfsdfs8} in the Appendix).
It is effective and outperforms benchmarks.
FROB outperforms \citet{12}: for normal CIFAR-10 with SVHN as the Outlier Dataset, CCC yields an AUROC of only $0.14$ on Uniform noise.
CCC does not have few-shot (FS) functionality, and it is not tested on CIFAR-100, which has a small domain gap to CIFAR-10.
Fig.~4 shows the performance of FROB in GAUROC for normal CIFAR-10 with $O(\textbf{z})$, using a variable number of FS SVHN samples, tested on SVHN.
We evaluate FROB with $O(\textbf{z})$ by reducing the number of few-shot outliers.
The GAUROC performance shows that when the few-shots originate from the test set, SVHN, the decrease is less steep than when the few-shots and the test samples are from different sets, i.e. SVHN and low-frequency noise.
The rate of decrease of the GAUROC of FROB below $1830$ samples is smaller when the few-shots are from the test dataset.
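The Break Point discussed above can be illustrated with a small sketch that scans a sweep of few-shot counts and reports the largest count at which the metric first falls outside a tolerance band around its full-data value; the `(fs_count, metric)` pairs below are made-up values for illustration, not measured FROB numbers.

```python
# Hypothetical Break Point locator over a few-shot sweep.
def break_point(sweep, drop=0.05):
    """sweep: list of (fs_count, metric) pairs, fs_count descending.

    Returns the largest fs_count whose metric has dropped more than
    `drop` below the value at the largest fs_count, or None if the
    metric stayed robust down to the smallest count.
    """
    reference = sweep[0][1]
    for fs, m in sweep:
        if m < reference - drop:
            return fs
    return None

# Toy sweep (not FROB measurements): robust until 457 few-shots.
sweep = [(1830, 0.97), (915, 0.96), (732, 0.95), (457, 0.80), (100, 0.40)]
print(break_point(sweep))  # -> 457
```

The tolerance `drop` is an assumed parameter; any monotone criterion on the sweep would serve the same illustrative purpose.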
Figure~\ref{fig:afzxadfasfsdsdfasfsdfasadfasadfsaszsz3}, together with Table~\ref{tab:asadfasdsadfsadffsadfsadfsaaasdfsadfssadfasdfdxfkdkfzsdfzs16} and Figure~\ref{fig:asfzxddfasfsdfasdsadfasdsadfsadfsafsadffsadfszaz5} in the Appendix, shows the performance of FROB using $O(\textbf{z})$, with normal class SVHN and a variable number of FS CIFAR-10 samples.
Figures~5 and 7, compared to Figure~4, show that FROB achieves better performance for normal class SVHN than for normal CIFAR-10 in all AUC-type metrics on the unseen test dataset CIFAR-100.
\begin{figure}[t]
\begin{minipage}[t]{0.485\columnwidth}%
\centering \includegraphics[width=1.0\columnwidth]{Fig6_20November2021.pdf}
\caption{FROB performance in GAUROC for normal CIFAR-10 w/ $O(\textbf{z})$ and variable number of OE SVHN, tested on the unseen sets CIFAR-100 and low frequency noise, using OE 80M.}
\label{fig:asdfasdsadfasdsadfsadfsafsadffsadfszaz5}
\end{minipage}\hfill{}%
\begin{minipage}[t]{0.485\columnwidth}%
\centering \includegraphics[width=1.0\columnwidth]{Fig7_20November2021.pdf}
\caption{OoD performance of FROB for normal SVHN in GAUROC using the boundary, $O(\textbf{z})$, and variable number of FS-OE CIFAR-10, tested on CIFAR-100, CIFAR-10, and Uniform noise.}
\label{fig:asfzxddfasfsdfasdsadfasdsadfsadfsafsadffsadfszaz5}
\end{minipage}
\end{figure}
\subsection{Performance of FROB on Unseen, in-the-Wild Datasets}
We evaluate FROB using OoD test samples from unseen, in-the-wild sets.
Importantly, we evaluate FROB on test samples that are neither from the normal class nor from the few-shot data (or the Outlier Dataset).
In Figs.~2 and 4, we show the performance of FROB in the few-shot setting, for normal CIFAR-10 with few-shot outliers from SVHN, tested on the unknown sets CIFAR-100 and low-frequency noise.
Fig.~4 shows that the OoD detection performance of FROB in the few-shot setting is robust, tested on the new sets CIFAR-100 and low-frequency noise.
FROB has a robust AUROC performance with a reduced number of SVHN few-shots, down to approximately zero-shots.
According to Table~\ref{tab:asdfasfasdfdxfdfsdfs8} in the Appendix, the GAUROC performance of FROB depends on the unknown test set and obtains low values for $80$ SVHN few-shots: approximately $0.07$ on CIFAR-100, $0.25$ on low-frequency noise, and $0.02$ on Uniform noise.
We use an effective Outlier Set, 80 Million Tiny Images, to improve the GAUROC of FROB in the few-shot setting, obtaining Table~9: for $80$ SVHN few-shots, $0.43$ on CIFAR-100, $0.95$ on low-frequency noise, and $0.79$ on Uniform noise.
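One plausible way to combine a handful of few-shot outliers with a large auxiliary OE pool such as 80 Million Tiny Images is to mix both negative sources in every training batch. The sketch below is an assumed setup for illustration only, not the paper's training code; `oe_frac` and the batch sizes are hypothetical parameters.

```python
import random

# Assumed batch construction: negatives drawn from both the few-shot
# outliers and a large auxiliary OE pool, so a handful of few-shots is
# backed by broad OE coverage. Illustrative only, not FROB's code.
def make_batch(normal, few_shot, oe_pool,
               n_norm=8, n_neg=8, oe_frac=0.5, rng=random):
    n_oe = int(n_neg * oe_frac)
    negatives = (rng.sample(oe_pool, n_oe)
                 + rng.choices(few_shot, k=n_neg - n_oe))
    return rng.sample(normal, n_norm), negatives

rng = random.Random(0)
normal = list(range(1000))           # stand-in normal-class items
few_shot = list(range(1000, 1080))   # e.g. 80 few-shot outliers
oe_pool = list(range(2000, 75257))   # stand-in large auxiliary pool
pos, neg = make_batch(normal, few_shot, oe_pool, rng=rng)
assert len(neg) == 8
```

Few-shots are sampled with replacement (`choices`) since the pool is tiny, while the large OE pool is sampled without replacement.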
\textbf{Effect of the domain, normal class.}
The AUROC performance of FROB with the boundary depends on the normal class.
The performance of FROB with $O(\textbf{z})$ for normal SVHN, in Table~\ref{tab:asadfasdsadfsadffsadfsadfsaaasdfsadfssadfasdfdxfkdkfzsdfzs16} and Fig.~\ref{fig:asfzxadfasfsdsdfasdfdfsadsadfasf6}, is higher than for normal CIFAR-10, presented in Fig.~\ref{fig:asdfsadsadfasf6} and Table~\ref{tab:asdfasfasdfdxfdfsdfs8}.
For zero-shots, on the CIFAR-100 test set, the AUROC is $0.87$ when the normal class is CIFAR-10, compared to $0.95$ when the normal class is SVHN.
Figs.~5~and~7 show that FROB is robust and effective for normal SVHN with Outlier Set CIFAR-10, on both seen and unseen sets.
In Figs.~4-7, FROB with $O(\textbf{z})$ is not sensitive to the number of few-shots in the FS OoD detection setting, when we have OoD sample complexity constraints.
\iffalse
\begin{figure*}[!tbp]
\centering \includegraphics[width=0.82\textwidth]{2 Sept 2021_withOz_GAUROC_Fig3b_bbb.pdf}
\caption{OoD performance of FROB in GAUROC for normal CIFAR-10, with $O(\textbf{z})$ and with variable number of FS SVHN.}
\label{fig:asdfasdsadfasdsadfsadfsafsadffsadfszaz5}
\end{figure*}
\begin{table*}[!tb]
\caption{ \centering
OoD performance of FROB using an Outlier Dataset set, using few-shots and our boundary, $O$, tested on different sets in AUROC, AAUROC, and GAUROC, where “w/ $\, O$” means with our learned self-generated OoD samples, $O(\textbf{z})$.}
\begin{flushleft}
\begin{scriptsize}
\begin{sc}
\begin{tabular}
{p{1.8cm} p{1.cm} p{0.6cm} p{5.4cm} p{2.6cm} p{0.9cm} p{0.9cm} p{0.9cm}}
\toprule{NORMAL CLASS }
& \normalsize {\small MODEL }
& \normalsize {\small FS-$O(\textbf{z})$}
& \normalsize {\small
FS-OE}
& \normalsize {\small TEST DATA}
& \normalsize {\small AU ROC}
& \normalsize {\small AAU ROC }
& \normalsize {\small GAU ROC }
\\
\midrule
\midrule
\normalsize CIFAR-10
& \normalsize {FROB}
& \normalsize {w/}
& \normalsize {80M: 73257, \ SVHN: 1830}
& \normalsize {\text{CIFAR-100 }}
& \normalsize {\text{0.850}}
& \normalsize {\text{0.850}}
& \normalsize {\text{0.585}}
\\
\midrule
\normalsize CIFAR-10
& \normalsize {FROB}
& \normalsize {w/}
& \normalsize {80M: 73257, \ SVHN: 1830}
& \normalsize {\text{SVHN}}
& \normalsize {\text{0.994}}
& \normalsize {\text{0.994}}
& \normalsize {\text{0.972}}
\\
\midrule
\normalsize CIFAR-10
& \normalsize {FROB}
& \normalsize {w/}
& \normalsize {80M: 73257, \ SVHN: 1830}
& \normalsize {\text{LFN}}
& \normalsize {\text{0.997}}
& \normalsize {\text{0.997}}
& \normalsize {\text{0.987}}
\\
\midrule
\normalsize CIFAR-10
& \normalsize {FROB}
& \normalsize {w/}
& \normalsize {80M: 73257, \ SVHN: 1830}
& \normalsize {\text{UN}}
& \normalsize {\text{0.967}}
& \normalsize {\text{0.967}}
& \normalsize {\text{0.865}}
\\
\midrule
\midrule
\normalsize CIFAR-10
& \normalsize {FROB}
& \normalsize {w/}
& \normalsize {80M: 73257, \ SVHN: 915}
& \normalsize {\text{CIFAR-100}}
& \normalsize {\text{0.759}}
& \normalsize {\text{0.759}}
& \normalsize {\text{0.090}}
\\
\midrule
\normalsize CIFAR-10
& \normalsize {FROB}
& \normalsize {w/}
& \normalsize {80M: 73257, \ SVHN: 915}
& \normalsize {\text{SVHN}}
& \normalsize {\text{0.993}}
& \normalsize {\text{0.993}}
& \normalsize {\text{0.333}}
\\
\midrule
\normalsize CIFAR-10
& \normalsize {FROB}
& \normalsize {w/}
& \normalsize {80M: 73257, \ SVHN: 915}
& \normalsize {\text{LFN}}
& \normalsize {\text{0.993}}
& \normalsize {\text{0.993}}
& \normalsize {\text{0.040}}
\\
\midrule
\normalsize CIFAR-10
& \normalsize {FROB}
& \normalsize {w/}
& \normalsize {80M: 73257, \ SVHN: 915}
& \normalsize {\text{UN}}
& \normalsize {\text{0.434}}
& \normalsize {\text{0.434}}
& \normalsize {\text{0.010}}
\\
\midrule
\midrule
\normalsize CIFAR-10
& \normalsize {FROB}
& \normalsize {w/}
& \normalsize {80M: 73257, \ SVHN: 732}
& \normalsize {\text{CIFAR-100}}
& \normalsize {\text{0.715}}
& \normalsize {\text{0.715}}
& \normalsize {\text{0.010}}
\\
\midrule
\normalsize CIFAR-10
& \normalsize {FROB}
& \normalsize {w/}
& \normalsize {80M: 73257, \ SVHN: 732}
& \normalsize {\text{SVHN}}
& \normalsize {\text{0.990}}
& \normalsize {\text{0.990}}
& \normalsize {\text{0.010}}
\\
\midrule
\normalsize CIFAR-10
& \normalsize {FROB}
& \normalsize {w/}
& \normalsize {80M: 73257, \ SVHN: 732}
& \normalsize {\text{LFN}}
& \normalsize {\text{0.869}}
& \normalsize {\text{0.869}}
& \normalsize {\text{0.010}}
\\
\midrule
\normalsize CIFAR-10
& \normalsize {FROB}
& \normalsize {w/}
& \normalsize {80M: 73257, \ SVHN: 732}
& \normalsize {\text{UN}}
& \normalsize {\text{0.988}}
& \normalsize {\text{0.988}}
& \normalsize {\text{0.010}}
\\
\midrule
\midrule
\normalsize CIFAR-10
& \normalsize {FROB}
& \normalsize {w/}
& \normalsize {80M: 73257, \
SVHN: 457}
& \normalsize {\text{CIFAR-100}}
& \normalsize {\text{0.827}}
& \normalsize {\text{0.827}}
& \normalsize {\text{0.397}}
\\
\midrule
\normalsize CIFAR-10
& \normalsize {FROB}
& \normalsize {w/}
& \normalsize {80M: 73257, \
SVHN: 457}
& \normalsize {\text{SVHN}}
& \normalsize {\text{0.997}}
& \normalsize {\text{0.997}}
& \normalsize {\text{0.807}}
\\
\midrule
\normalsize CIFAR-10
& \normalsize {FROB}
& \normalsize {w/}
& \normalsize {80M: 73257, \
SVHN: 457}
& \normalsize {\text{LFN}}
& \normalsize {\text{0.997}}
& \normalsize {\text{0.997}}
& \normalsize {\text{0.841}}
\\
\midrule
\normalsize CIFAR-10
& \normalsize {FROB}
& \normalsize {w/}
& \normalsize {80M: 73257, \
SVHN: 457}
& \normalsize {\text{UN}}
& \normalsize {\text{0.914}}
& \normalsize {\text{0.914}}
& \normalsize {\text{0.842}}
\\
\midrule
\midrule
\normalsize CIFAR-10
& \normalsize {FROB}
& \normalsize {w/}
& \normalsize {80M: 73257, \ SVHN: 100}
& \normalsize {\text{CIFAR-100}}
& \normalsize {\text{0.744}}
& \normalsize {\text{0.744}}
& \normalsize {\text{0.427}}
\\
\midrule
\normalsize CIFAR-10
& \normalsize {FROB}
& \normalsize {w/}
& \normalsize {80M: 73257, \ SVHN: 100}
& \normalsize {\text{SVHN}}
& \normalsize {\textbf{0.992}}
& \normalsize {\textbf{0.992}}
& \normalsize {\textbf{0.896}}
\\
\midrule
\normalsize CIFAR-10
& \normalsize {FROB}
& \normalsize {w/}
& \normalsize {80M: 73257, \ SVHN: 100}
& \normalsize {\text{LFN}}
& \normalsize {\text{0.985}}
& \normalsize {\text{0.985}}
& \normalsize {\text{0.912}}
\\
\midrule
\normalsize CIFAR-10
& \normalsize {FROB}
& \normalsize {w/}
& \normalsize {80M: 73257, \ SVHN: 100}
& \normalsize {\text{UN}}
& \normalsize {\text{0.934}}
& \normalsize {\text{0.934}}
& \normalsize {\text{0.911}}
\\
\midrule
\midrule
\normalsize CIFAR-10
& \normalsize {FROB}
& \normalsize {w/}
& \normalsize {80M: 73257, \ SVHN: 80}
& \normalsize {\text{CIFAR-100}}
& \normalsize {\text{0.772}}
& \normalsize {\text{0.772}}
& \normalsize {\text{0.425}}
\\
\midrule
\normalsize CIFAR-10
& \normalsize {FROB}
& \normalsize {w/}
& \normalsize {80M: 73257, \ SVHN: 80}
& \normalsize {\text{SVHN}}
& \normalsize {\textbf{0.981}}
& \normalsize {\textbf{0.981}}
& \normalsize {\textbf{0.922}}
\\
\midrule
\normalsize CIFAR-10
& \normalsize {FROB}
& \normalsize {w/}
& \normalsize {80M: 73257, \ SVHN: 80}
& \normalsize {\text{LFN}}
& \normalsize {\text{0.990}}
& \normalsize {\text{0.990}}
& \normalsize {\text{0.951}}
\\
\midrule
\normalsize CIFAR-10
& \normalsize {FROB}
& \normalsize {w/}
& \normalsize {80M: 73257, \ SVHN: 80}
& \normalsize {\text{UN}}
& \normalsize {\text{0.901}}
& \normalsize {\text{0.901}}
& \normalsize {\text{0.788}}
\\
\midrule
\midrule
\normalsize CIFAR-10
& \normalsize {FROB}
& \normalsize {w/}
& \normalsize {80M: 73257, \ SVHN: 0}
& \normalsize {\text{CIFAR-100}}
& \normalsize {\text{0.864}}
& \normalsize {\text{0.864}}
& \normalsize {\text{0.312}}
\\
\midrule
\normalsize CIFAR-10
& \normalsize {FROB}
& \normalsize {w/}
& \normalsize {80M: 73257, \ SVHN: 0}
& \normalsize {\text{SVHN}}
& \normalsize {\textbf{0.927}}
& \normalsize {\textbf{0.927}}
& \normalsize {\textbf{0.601}}
\\
\midrule
\normalsize CIFAR-10
& \normalsize {FROB}
& \normalsize {w/}
& \normalsize {80M: 73257, \ SVHN: 0}
& \normalsize {\text{LFN}}
& \normalsize {\text{0.891}}
& \normalsize {\text{0.891}}
& \normalsize {\text{0.301}}
\\
\midrule
\normalsize CIFAR-10
& \normalsize {FROB}
& \normalsize {w/}
& \normalsize {80M: 73257, \ SVHN: 0}
& \normalsize {\text{UN}}
& \normalsize {\text{0.865}}
& \normalsize {\text{0.865}}
& \normalsize {\text{0.212}}
\\
\midrule
\midrule
\end{tabular}
\end{sc}
\end{scriptsize}
\end{flushleft}
\label{tab:asdfasadfassasdf15}
\end{table*}
\fi
\iffalse
\begin{figure*}[!tbp]
\centering \includegraphics[width=0.725\textwidth]{Figure 6_GAUROC_5 Sept 2021.pdf}
\caption{Performance of FROB in GAUROC for normal CIFAR-10 w/ $O(\textbf{z})$ and variable number of FS-OE SVHN, tested on unseen in the wild sets CIFAR-100 and LFN, using the 80 Million Tiny Images dataset for OE.}
\label{fig:asdfasdsadfasdsadfsadfsafsadffsadfszaz5}
\end{figure*}
\fi
\iffalse
\begin{figure}[t]
\begin{minipage}[t]{1.0\columnwidth}%
\centering \includegraphics[width=1.0\columnwidth]{sfgfsgsdfgsdfgds AD performance of FROB for normal SVHN, with O(z), and with FS-OE variable CIFAR-10 samples .pdf}
\caption{OoD detection performance of FROB for normal SVHN in GAUROC with $O(\textbf{z})$ and variable number of FS-OE CIFAR-10 samples, tested on CIFAR-100, CIFAR-10, and Uniform noise.}
\label{fig:asfzxddfasfsdfasdsadfasdsadfsadfsafsadffsadfszaz5}
\label{fig:EDfzxdfasfsdCs-1-2-1-2-1-1-1-2-1-1-1-2-1-1-1-1-1-1-1-1-2-5-001-1-3-1-1-1-1-1-1}
\label{fig:EfzxDdfasfsdCs-1-2-1-2-1-1-1-2-1-1-1-2-1-1-1-1-1-1-1-1-2-5-001-1-3-1-1-1-1-1-1-1}
\label{fig:EfzxdfzxasfsdDCs-1-2-1-2-1-1-1-2-1-1-1-2-1-1-1-1-1-1-1-1-2-5-001-1-3-1-1-1-1-1-1-1-2}
\label{fig:EDfzxdfasfsdCs-1-2-1-2-1-1-1-2-1-1-1-2-1-1-1-1-1-1-1-1-2-5-001-1-3-1-1-1-1-1-1-1-1}
\label{fig:EdfafzxsfsdDCs-1-2-1-2-1-1-1-2-1-1-1-2-1-1-1-1-1-1-1-1-2-5-001-1-3-1-1-1-1-1-1-1-1-1}
\label{fig:EdfasffzxsdDCs-1-2-1-2-1-1-1-2-1-1-1-2-1-1-1-1-1-1-1-1-2-5-001-1-3-1-1-1-1-1-1-1-1-1-1}
\end{minipage}
\end{figure}
\fi
\subsubsection{FROB Performance with $O(\textbf{z})$ in GAUROC Using 80 Million Tiny Images}
To improve the GAUROC performance of FROB with the learned boundary, we use the 80 Million Tiny Images dataset for OE.
Fig.~6 and Table~9 show that the GAUROC of FROB improves for normal CIFAR-10 with $O(\textbf{z})$ and a variable number of OE SVHN samples, tested on the unseen sets CIFAR-100 and low-frequency noise.
In the few-shot OoD detection setting, as the number of few-shot samples decreases to approximately $80$, FROB maintains a mean GAUROC of approximately $0.88$ when tested on low-frequency noise.
FROB with the 80 Million Tiny Images OE set and the learned $O(\textbf{z})$ reduces the threshold linked to the model's few-shot robustness.
Tables~10--13 in the Appendix report the GAUROC (and AUROC) of FROB with $O(\textbf{z})$ when a reduced subset of the 80 Million Tiny Images ($73257$ samples) is used for OE and the number of few-shot samples ranges from $1830$ down to $80$.
The GAUROC of FROB is higher with the 80 Million Tiny Images OE set than without it.
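As a concrete reference for the AUC-type metrics above, the following is a minimal, hypothetical sketch (the function name and inputs are ours, not part of FROB's implementation) of plain AUROC for OoD detection: the probability that a randomly chosen normal sample receives a higher confidence score than a randomly chosen OoD sample, with $0.5$ corresponding to chance. GAUROC is a certified (guaranteed) variant of this quantity and is not modeled here.

```python
# Hypothetical sketch: AUROC for OoD detection from per-sample confidence
# scores. The model should assign higher confidence to in-distribution
# (normal) samples; AUROC equals the Mann-Whitney U statistic, i.e. the
# probability that a random normal sample outscores a random OoD sample.

def auroc(normal_scores, ood_scores):
    """AUROC of normal-vs-OoD separation from confidence scores."""
    wins = 0.0
    for s_in in normal_scores:
        for s_out in ood_scores:
            if s_in > s_out:
                wins += 1.0
            elif s_in == s_out:
                wins += 0.5  # ties count as half a win
    return wins / (len(normal_scores) * len(ood_scores))
```

Perfect separation gives $1.0$, fully overlapping score distributions give $0.5$; library implementations (e.g. rank-based ones) compute the same quantity more efficiently.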
\iffalse
We examine the OoD detection performance of FROB trained on the normal class using the OE set of 80 Million Tiny Images, using our confidence boundary, FS-$O(\textbf{z})$, and few-shots FS-OE of $100$ SVHN samples in Table~8. We test it on different sets, CIFAR-100, SVHN, low-frequency noise, and uniform noise, in AUROC, AAUROC, and GAUROC.
In Table~\ref{tab:asdfasfasdfdxfdfsdfs8}, the last group of results show the performance of FROB for zero-shots.
We obtain high GAUROC values when using the same FS-OE and test set of SVHN.
Also, Table~8 shows that training our FROB model on the OE 80 Million Tiny Images, we obtain higher GAUROC values compared to when we do not train on it.
\subsubsection{Performance of FROB without FS-$O(\textbf{z})$ Using OE}
Table~\ref{tab:sadfadssadffasdf2} in the Appendix shows the performance of FROB using the 80 Million Tiny Images set for OE with all available data and for reduced number of samples, i.e. few-shots of it (FS-OE).
We evaluate FROB without the distribution boundary, FS-$O(\textbf{z})$, on different test sets, CIFAR-10, CIFAR-100, SVHN, low-frequency noise, uniform noise, and 80 Million Tiny Images.
We use the 80 Million Tiny Images set for Calibration and OE.
We set SVHN and CIFAR-10 as the normal class sets.
The performance of our FROB model slightly decreases in AUC-type metrics when $73257$ samples from the 80 Million Tiny Images dataset are used for OE, instead of the $50$ million total samples, during training.
Table~\ref{tab:sadfadssadffasdf2} shows that the AUROC performance does not decrease when a fraction of the 80 Million Tiny Images is used for OE, $73$ thousand samples instead of $50$ million.
In contrast, the GAUROC performance is dataset-dependent as when CIFAR-10 is the normal class, the GAUROC decreases when a fraction of the 80 Million Tiny Images set is used.
When SVHN is the normal class, the GAUROC does not decrease when a fraction of the 80 Million Tiny Images is used.
\iffalse
We examine the performance of FROB without $O(\textbf{z})$ trained on the normal class CIFAR-10 in Table~\ref{tab:sadfadssadffasdf2}.
We test this model on different sets, CIFAR-100, SVHN, low-frequency noise, and uniform noise, in AUROC, AAUROC, and GAUROC.
Setting CIFAR-100 as anomalies to CIFAR-10 leads to stronger and more adversarial OoD samples than setting CIFAR-100 as anomalies to SVHN.
An example of this can be found in Table 1 in [1] where GOOD with quantile $1.0$ yields a GAUROC of $54.2$ compared to $97.3$.
GAUROC highly depends on the anomalous point and works only for weak anomalies.
GAUROC is not applicable for strong adversarial anomalies as small perturbations of such OoD samples lead to high probability normal class data.
The center of the $l_{\infty}$-norm ball is the adversarial OoD sample.
The radius of the $l_{\infty}$-norm ball, $\epsilon$, is empirically chosen and dataset-dependent.
The value of $\epsilon$ is ad hoc and arbitrarily selected, and it is possible to have very high AUROC and very low GAUROC [1].
\fi
FROB when using SVHN as the normal class, as well as the 80 Million Tiny Images set for OE, without our self-supervised boundary, FS-$O(\textbf{z})$, tested on unseen in the wild sets, CIFAR-10, CIFAR-100, low frequency noise, and Uniform noise, achieves on average high performance, AUROC $0.988$, AAUROC $0.988$ and GAUROC $0.942$.
Table~\ref{tab:sadfadssadffasdf2} shows that training on CIFAR-10 and testing on SVHN is more challenging than the vice versa [12].
When the normal class is SVHN, FROB achieves higher performance compared to when the normal class is CIFAR-10.
SVHN is a more robust normal set compared to CIFAR-10.
The 80 Million Tiny Images is a robust OE set as FROB achieves high performance when using it.
\fi
\iffalse
\subsection{Performance of FROB without FS-$O(\textbf{z})$ Using OE}
Table~\ref{tab:asfgasadfssadsdfdf3} in the Appendix
shows the OoD detection performance of FROB trained on the normal class CIFAR-10 using the OE sets -all data- of SVHN, CIFAR-100, and 80 Million Tiny Images 73257 samples, without using our confidence boundary, FS-$O(\textbf{z})$, tested on different sets in AUC-type metrics.
FROB using SVHN for OE
achieves higher AUROC and AAUROC compared to using the OE sets CIFAR-100 and 80 Million Tiny Images.
FROB achieves higher GAUROC using the OE set SVHN, i.e. average $0.69$, and 80 Million Tiny Images, i.e. average $0.66$.
\fi
\subsection{Evaluation of FROB Using OCC Compared to Benchmarks}
We evaluate FROB in the One-Class Classification (OCC) setting, treating each CIFAR-10 class in turn as the normal class.
We report the mean performance of FROB and compare it to the benchmarks HTD, GEOM, GOAD, DROCC, SVDD, and PaSVDD in the few-shot OE setting of $80$ samples \citep{31}. FROB outperforms these benchmarks in Table~\ref{tab:asdfdfgdssdfvxbacvbdfdfsdfadfaasdfs}, and in Table~\ref{tab:asdfssdfvxbacvbdfdfsdfadfaasdfs} in the Appendix.
Tables~\ref{tab:asdfdfgdssdfvxbacvbdfdfsdfadfaasdfs}~and~\ref{tab:asdfssdfvxbacvbdfdfsdfadfaasdfs} show the performance of FROB in the OCC setting, when normality is defined by a single CIFAR-10 class.
With the self-learned boundary, FROB outperforms the baselines in the few-shot OoD detection task, which is relevant under budget constraints and OoD sampling complexity limitations.
Using SVHN for OE improves the mean AUROC of FROB by $2.3\%$ compared to not using it, in Table~\ref{tab:asdfdfgdssdfvxbacvbdfdfsdfadfaasdfs}, and in Table~\ref{tab:asdfssdfvxbacvbdfdfsdfadfaasdfs} in the Appendix.
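The OCC protocol above can be sketched as follows; this is a minimal illustration under our own naming (`mean_occ_auroc`, `confidences`, `auroc_fn` are hypothetical, not FROB's code). Each class is treated in turn as the normal class, all remaining classes are pooled as anomalies, and the per-class AUROCs are averaged into the reported mean.

```python
# Hypothetical sketch of the one-vs-rest OCC evaluation protocol:
# `confidences` maps class label -> list of model confidence scores for
# that class's test samples; the scoring model itself is assumed, not shown.

def mean_occ_auroc(confidences, auroc_fn):
    """Mean one-class AUROC over all classes (one-vs-rest)."""
    per_class = []
    for normal_cls, normal_scores in confidences.items():
        # pool the scores of every other class as anomalies
        anomaly_scores = [s for cls, scores in confidences.items()
                          if cls != normal_cls for s in scores]
        per_class.append(auroc_fn(normal_scores, anomaly_scores))
    return sum(per_class) / len(per_class)
```

For CIFAR-10 this averages ten per-class AUROCs, matching how the mean OCC numbers in the tables are aggregated.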
\iffalse
\begin{table*}[!tbp]
\caption{ \centering
Average performance of FROB with few-shot boundary samples, FS-$O(\textbf{z})$, and FS-OE of $80$ CIFAR-10 OCC anomalies, using One-Class Classification (OCC). Compared to benchmarks without using FS-$O(\textbf{z})$, with using FS-OE of $80$ CIFAR-10 OCC anomalies,
in AUROC.}
\begin{center}
\begin{scriptsize}
\begin{sc}
\begin{tabular}
{p{3.5cm} p{2.cm} p{1.5cm}
p{3cm} p{1.6cm} }
\toprule{\small NORMAL CLASS }
& \normalsize {\small MODEL }
& \normalsize {\small FS-$O(\textbf{z})$}
& \normalsize {\small TEST DATA}
& \normalsize {\small AUROC}
\\
\midrule
\midrule
\normalsize CIFAR-10: OCC
& \normalsize {HTD \citep{31}}
& \normalsize {w/o}
& \normalsize {\text{CIFAR-10: OCC}}
& \normalsize {\text{0.756}}
\\
\midrule
\normalsize CIFAR-10: OCC
& \normalsize {GOAD}
& \normalsize {w/o}
& \normalsize {\text{CIFAR-10: OCC}}
& \normalsize {\text{0.562}}
\\
\midrule
\normalsize CIFAR-10: OCC
& \normalsize {GEOM}
& \normalsize {w/o}
& \normalsize {\text{CIFAR-10: OCC}}
& \normalsize {\text{0.735}}
\\
\midrule
\normalsize CIFAR-10: OCC
& \normalsize {DROCC}
& \normalsize {w/o}
& \normalsize {\text{CIFAR-10: OCC}}
& \normalsize {\text{0.585}}
\\
\midrule
\normalsize CIFAR-10: OCC
& \normalsize {SVDD}
& \normalsize {w/o}
& \normalsize {\text{CIFAR-10: OCC}}
& \normalsize {\text{0.608}}
\\
\midrule
\normalsize CIFAR-10: OCC
& \normalsize {PaSVDD}
& \normalsize {w/o}
& \normalsize {\text{CIFAR-10: OCC}}
& \normalsize {\text{0.510}}
\\
\midrule
\normalsize CIFAR-10: OCC
& \normalsize {FROB}
& \normalsize {w/}
& \normalsize {\text{CIFAR-10: OCC}}
& \normalsize {\textbf{0.784}}
\\
\midrule
\midrule
\end{tabular}
\end{sc}
\end{scriptsize}
\end{center}
\label{tab:asdfdfgdssdfvxbacvbdfdfsdfadfaasdfs}
\end{table*}
\fi
\iffalse
\subsection{Sensitivity Analysis of FROB}
\textbf{Robustness sensitivity analysis of FROB to variable number of OE samples, when setting CIFAR-10 as the Normal Class.}
Table~\ref{tab:asddfsadasdfsf} shows the outputs of the FROB model trained on the normal class CIFAR-10.
We use SVHN as the OE dataset using a variable number of samples.
We compare the results of this trained FROB model on different test sets.
The examined test datasets are SVHN, CIFAR-100, and low-frequency noise.
With a decreasing number of OE samples, we observe that the AUC performance decreases per testing dataset.
At about $800$ image samples from SVHN, we observe that the AUC falls to zero.
We also observe that using OE and testing samples from the same dataset, as well as testing samples from the low-frequency noise (image from the normal plus noise), the AUC performance decreases at a smaller rate.
When the few-shot outliers and the test samples originate from the same dataset, then we observe that higher AUROC scores are achieved compared to when the few-shot outliers and the test samples originate from different datasets.
In the fourth group of results of Table~\ref{tab:asddfsadasdfsf}, we observe that for a specific decreased number of few-shot outliers, different AUROC metrics are achieved by FROB when using different test datasets.
In this last group of results of Table~\ref{tab:asddfsadasdfsf}, we note that the AUROC performance on the test dataset CIFAR-100 is low, i.e. AUROC of $0.512$.
This indicates randomness and chance in the predictions made by the model, which is not desirable.
This result of $0.512$ AUROC on the CIFAR-100 test dataset is when no OE dataset is used, i.e. the 80 Million Tiny Images dataset is not used for OE.
The deep neural network classifier becomes extremely overconfident and predicts that the queried test sample is always in-distribution [2, 1].
This leads to misses of anomalies, Type II errors, and false negatives.
At this point, we note that AUROC scores below $0.5$ are indicators of poor performance and an AUROC of $0.5$ indicates randomness, chance, and random results, as explained in [2], in its Section 5 and its Appendix D.
In the third group of results in Table~\ref{tab:asddfsadasdfsf}, we observe that for the CIFAR-100 test set, the AUROC of $0.651$ is low and might not be acceptable for many applications.
For this AUROC of $0.651$ and for all our evaluation results, we have used a fixed number of abnormal samples, i.e. $1000$, and a fixed number of normal samples, i.e. $1000$.
We have used a non-variable ratio of abnormal to normal samples, i.e. $100 \%$, in contrast to other models that set the percentage of outliers at test time to $50 \%$ [28, 29].
Our training and test datasets are in the range of thousands, while the examined few-shot outliers are both dozens of samples, i.e. few-shots, and in the range of hundreds.
\fi
\iffalse
\subsection{FROB Model Performance Analysis}
Table~\ref{tab:ajsdhfjasdfsadasdff} presents the FROB model performance trained on CIFAR-10 as the normal class dataset, with or without generated distribution boundary samples, and few-shot samples of SVHN data in decreasing number.
We evaluate the performance of our proposed FROB model using various testing datasets, CIFAR-100, SVHN, low-frequency noise, and uniform noise.
We observe that using distribution boundary samples, the performance increases.
We also observe that with decreasing few-shot samples, the performance decreases.
FROB achieves a smaller rate decrease as the few-shot samples are decreased in number.
Our proposed FROB model achieves robustness and resilience using the distribution boundary because the performance decreases at a smaller rate with decreasing few-shot samples, specifically when the testing data is low-frequency noise.
Tables~\ref{tab:asddfsadasdfsf} and \ref{tab:ajsdhfjasdfsadasdff} show that using the distribution boundary, i.e. the self-generated $O(\textbf{z})$ samples, is effective, beneficial, and advantageous in the few-shot OoD detection setting.
This is observed specifically when comparing the fourth group of results of Table~\ref{tab:asddfsadasdfsf} to the fourth group of results of Table~\ref{tab:ajsdhfjasdfsadasdff}.
Our proposed FROB algorithm achieves robustness to the number of few-shots and works in the few-shot anomaly detection setting, according to the AUROC in the fourth group of results in Table~\ref{tab:asddfsadasdfsf} and to the AUROC in the fourth group of results in Table~\ref{tab:ajsdhfjasdfsadasdff}.
Our proposed FROB model with the learned distribution boundary outperforms others in the few-shot OoD detection setting.
The evaluation results in Table~\ref{tab:ajsdhfjasdfsadasdff} show that the model without the self-supervised $O(\textbf{z})$ samples fails, according to the OoD detection performance in terms of AUROC.
Comparing FROB with the learned distribution boundary in Table~\ref{tab:ajsdhfjasdfsadasdff} to the fourth group of results in Table~\ref{tab:asddfsadasdfsf}, we observe that including the self-generated $O(\textbf{z})$ is beneficial.
This is based on the head-to-head comparison of the AUROC in the fourth group of results in Table~\ref{tab:asddfsadasdfsf} and the fourth group of results in Table~\ref{tab:ajsdhfjasdfsadasdff}.
We also note that our proposed FROB model achieves comparable performance to benchmarks that use a lot of samples for OE, according to Table~\ref{tab:ajsdhfjasdfsadasdff}.
FROB achieves comparable results to when not using few-shot outlier samples.
Having complete access to a large dataset for OE violates the few-shot OoD detection setting.
Table~\ref{tab:ajsdhfjasdfsadasdff} also shows that our FROB model achieves better performance when the OE dataset is SVHN compared to when it is the 80 Million Tiny Images dataset.
In addition, we also observe that FROB achieves better AUROC performance when the OE dataset is 80 Million Tiny Images compared to when it is the CIFAR-100 dataset.
Comparing the fourth group of results in Table~\ref{tab:asddfsadasdfsf} with the third group of results in Table~\ref{tab:ajsdhfjasdfsadasdff}, we observe that when the 80 Million Tiny Images dataset, which has 50 Million training images, is used for OE, then the AUROC increases from $0.512$ to $0.722$.
However, in this case, we note that using 50 Million samples for OE during training does not conform to and violates the few-shot OoD detection setting.
FROB, according to the fourth group of results in Table~\ref{tab:ajsdhfjasdfsadasdff}, achieves an AUROC of $0.744$ when both our learned boundary and the 80 Million Tiny Images OE set are used.
\fi
\iffalse
\subsection{FROB Model Performance Analysis using variable OE datasets}
Table~4 presents the OoD detection performance of FROB trained on the normal class using an OE set (all data), and without using boundary samples FS-$O(\textbf{z})$, tested on different datasets in AUROC, AAUROC, and GAUROC.
FROB with SVHN as the OE Calibration set achieves better performance in terms of AUROC than with 80 Million Tiny Images as the OE dataset, i.e. 0.83 compared to 0.78 for the test set CIFAR-100.
In addition, FROB with the 80 Million Tiny Images set for OE achieves better performance in terms of GAUROC than with SVHN for OE, i.e. 0.64 compared to 0.15 for the test set CIFAR-100.
In Table~4, we observe that training FROB model on 80 Million Tiny Images with reduced number of samples gives more robust results in GAUROC, and competitive results in AUROC and AAUROC, compared to training the FROB model in other datasets, i.e. SVHN and CIFAR-100.
Hence, the 80 Million Tiny Images dataset is a robust and reliable dataset for few-shot anomaly OE training.
\fi
\iffalse
\subsection{Benchmark Models}
Table~1 presents the CEDA and ACET performance for anomaly and OoD detection [2].
Table~1 shows that both CEDA and ACET with low-frequency noise as the OE dataset yield decreased performance compared to using the 80 Million Tiny Images dataset for OE.
\fi
\iffalse
\subsection{Our Proposed FROB Model}
In Table~\ref{tab:asdfsadfsadfaasdfs}, our proposed FROB model with the distribution boundary achieves better performance than FROB without the distribution boundary when the OE data and the few-shot outliers are $915$ samples from SVHN.
Importantly, when the test set is CIFAR-100, then FROB with the distribution boundary achieves an AUROC of $0.83$ while FROB without the distribution boundary yields an AUROC of $0.51$.
In addition, we also observe that our proposed FROB model with the distribution boundary and with the 80 Million Tiny Images set for OE achieves better performance in terms of GAUROC than FROB with the distribution boundary and without the 80 Million Tiny Images dataset for OE, i.e. $0.41$ compared to $0.12$.
Comparing the first two groups of results, we observe that our proposed FROB model with the self-generated distribution boundary outperforms FROB without the learned distribution boundary in terms of AUROC.
Next, comparing the third and fourth groups of results, we observe that FROB with the distribution boundary and with the 80 Million Tiny Images set for OE outperforms FROB without the learned distribution boundary and with the 80 Million Tiny Images OE dataset in terms of AUROC.
Then, comparing the first and third groups of results, we observe that our proposed FROB model with our distribution boundary achieves comparable performance to FROB with the distribution boundary and with the 80 Million Tiny Images OE set in terms of AUROC.
This shows that including not useful anomalies from the 80 Million Tiny Images dataset in our training process is not beneficial in terms of AUROC.
The comparison of the AUROC in the first and third groups of results indicates that 80 Million Tiny Images introduces irrelevant and not meaningful OoD samples.
Next, comparing the second and fourth groups of results, and relating this comparison to the one between the first and third groups of results, we observe that including the distribution boundary in our training is advantageous and beneficial.
Only when we include the learned distribution boundary in our training process, then samples from the 80 Million Tiny Images set are not particularly useful and meaningful for anomaly detection.
Only when distribution boundary samples are introduced, then the OoD samples from the 80 Million Tiny Images set are not particularly relevant, and this is one of the key takeaways of this paper.
Our claim is that conservative anomalies are introduced by the 80 Million Tiny Images set for OE when self-supervised learning based boundary samples are used.
Relevant examples of anomalies are introduced by using the 80 Million Tiny Images OE set only when the generated distribution boundary is not used.
We show and present the ablation study of our proposed FROB model with respect to the OoD distribution boundary samples.
We present the effect of the distribution boundary on the OoD detection performance.
The first, third, and fourth columns of the table are for training.
FROB uses the normal class data in order to learn them using the cross-entropy loss, the generated distribution boundary which are few-shot OoD samples and are denoted by $O(\textbf{z})$, the few-shot outliers, and the OE dataset.
FROB sets low confidence to the OoD data, specifically to the generated distribution boundary, the few-shot outliers, and the OE dataset.
Robustness models, as GOOD and OE, first define the normal class dataset and then filter out the normal class dataset from the 80 Million Tiny Images dataset. This modified dataset is set as anomalies. Next, they learn the normal class dataset and set zero to these anomalies. This process is highly feature- and human-dependent.
Instead, FROB first trains
a multi-class classification Robustness model to assign high confidence to normal distributions.
FROB then trains our proposed distribution boundary model by using the output of the Robustness model for the first loss term.
Next, FROB negatively trains the model by setting the output of our distribution boundary model as anomalies,
rather than the 80 Million Tiny Images dataset.
Also, FROB negatively trains the model using the few-shot outliers and the OE as anomalies.
Compared to [31], our proposed FROB model trained on CIFAR-10 as the normal class
\fi
\iffalse
\begin{figure*}[!tbp]
\centering \includegraphics[width=0.72\textwidth]{GAUROC_wo Oz_1 Sept 2021.pdf}
\caption{The OoD performance of FROB in GAUROC for normal CIFAR-10 setting, without O(z) and with FS-OE variable number of SVHN samples.}
\label{fig:sdfasfasdfsz5}
\end{figure*}
\fi
\section{Conclusion}
We have proposed FROB, which uses the generated support boundary of the normal data distribution for few-shot OoD detection.
FROB tackles the few-shot problem by combining classification with OoD detection.
Its contribution is the self-supervised generation of the distribution boundary combined with the imposition of low confidence at this learned boundary.
To improve robustness, FROB generates strong adversarial samples on the boundary and forces both boundary and OoD samples to have low confidence.
By including the self-produced boundary, FROB reduces the threshold linked to the model's few-shot robustness.
FROB streamlines OE so that it works even in the zero-shot setting, and performs classification and few-shot OoD detection reliably on unseen, in-the-wild data.
FROB keeps OoD detection performance approximately constant, independent of the number of few-shot samples: with the self-supervised boundary, performance remains approximately stable as the few-shot outliers decrease in number, whereas without $O(\textbf{z})$ it degrades.
The evaluation of FROB on many datasets shows that it is effective, achieves competitive state-of-the-art performance, and outperforms benchmarks in the few-shot OoD detection setting in AUC-type metrics.
In the future, in addition to the confidence and the class, FROB will also output salient regions and bounding boxes around abnormal objects.
\vfill
\clearpage
\iffalse
\null
\vfil
\clearpage
\centerline{\bf Sets and Graphs}
\bgroup
\def1.5{1.5}
\begin{tabular}{p{1.25in}p{3.25in}}
$\displaystyle {\mathbb{A}}$ & A set\\
$\displaystyle \mathbb{R}$ & The set of real numbers \\
$\displaystyle \{0, 1\}$ & The set containing 0 and 1 \\
$\displaystyle \{0, 1, \dots, n \}$ & The set of all integers between $0$ and $n$\\
$\displaystyle [a, b]$ & The real interval including $a$ and $b$\\
$\displaystyle (a, b]$ & The real interval excluding $a$ but including $b$\\
$\displaystyle {\mathbb{A}} \backslash {\mathbb{B}}$ & Set subtraction, i.e., the set containing the elements of ${\mathbb{A}}$ that are not in ${\mathbb{B}}$\\
$\displaystyle {\mathcal{G}}$ & A graph\\
$\displaystyle \parents_{\mathcal{G}}({\textnormal{x}}_i)$ & The parents of ${\textnormal{x}}_i$ in ${\mathcal{G}}$
\end{tabular}
\egroup
\vspace{0.25cm}
\centerline{\bf Indexing}
\bgroup
\def\arraystretch{1.5}
\begin{tabular}{p{1.25in}p{3.25in}}
$\displaystyle {a}_i$ & Element $i$ of vector ${\bm{a}}$, with indexing starting at 1 \\
$\displaystyle {a}_{-i}$ & All elements of vector ${\bm{a}}$ except for element $i$ \\
$\displaystyle {A}_{i,j}$ & Element $i, j$ of matrix ${\bm{A}}$ \\
$\displaystyle {\bm{A}}_{i, :}$ & Row $i$ of matrix ${\bm{A}}$ \\
$\displaystyle {\bm{A}}_{:, i}$ & Column $i$ of matrix ${\bm{A}}$ \\
$\displaystyle {\etens{A}}_{i, j, k}$ & Element $(i, j, k)$ of a 3-D tensor ${\tens{A}}$\\
$\displaystyle {\tens{A}}_{:, :, i}$ & 2-D slice of a 3-D tensor\\
$\displaystyle {\textnormal{a}}_i$ & Element $i$ of the random vector ${\mathbf{a}}$ \\
\end{tabular}
\egroup
\vspace{0.25cm}
\centerline{\bf Calculus}
\bgroup
\def\arraystretch{1.5}
\begin{tabular}{p{1.25in}p{3.25in}}
$\displaystyle\frac{d y} {d x}$ & Derivative of $y$ with respect to $x$\\ [2ex]
$\displaystyle \frac{\partial y} {\partial x} $ & Partial derivative of $y$ with respect to $x$ \\
$\displaystyle \nabla_{\bm{x}} y $ & Gradient of $y$ with respect to ${\bm{x}}$ \\
$\displaystyle \nabla_{\bm{X}} y $ & Matrix derivatives of $y$ with respect to ${\bm{X}}$ \\
$\displaystyle \nabla_{\tens{X}} y $ & Tensor containing derivatives of $y$ with respect to ${\tens{X}}$ \\
$\displaystyle \frac{\partial f}{\partial {\bm{x}}} $ & Jacobian matrix ${\bm{J}} \in \mathbb{R}^{m\times n}$ of $f: \mathbb{R}^n \rightarrow \mathbb{R}^m$\\
$\displaystyle \nabla_{\bm{x}}^2 f({\bm{x}})\text{ or }{\bm{H}}( f)({\bm{x}})$ & The Hessian matrix of $f$ at input point ${\bm{x}}$\\
$\displaystyle \int f({\bm{x}}) d{\bm{x}} $ & Definite integral over the entire domain of ${\bm{x}}$ \\
$\displaystyle \int_{\mathbb{S}} f({\bm{x}}) d{\bm{x}}$ & Definite integral with respect to ${\bm{x}}$ over the set ${\mathbb{S}}$ \\
\end{tabular}
\egroup
\vspace{0.25cm}
\centerline{\bf Probability and Information Theory}
\bgroup
\def\arraystretch{1.5}
\begin{tabular}{p{1.25in}p{3.25in}}
$\displaystyle P({\textnormal{a}})$ & A probability distribution over a discrete variable\\
$\displaystyle p({\textnormal{a}})$ & A probability distribution over a continuous variable, or over
a variable whose type has not been specified\\
$\displaystyle {\textnormal{a}} \sim P$ & Random variable ${\textnormal{a}}$ has distribution $P$\\% so thing on left of \sim should always be a random variable, with name beginning with \r
$\displaystyle \mathbb{E}_{{\textnormal{x}}\sim P} [ f(x) ]\text{ or } \mathbb{E} f(x)$ & Expectation of $f(x)$ with respect to $P({\textnormal{x}})$ \\
$\displaystyle \mathrm{Var}(f(x)) $ & Variance of $f(x)$ under $P({\textnormal{x}})$ \\
$\displaystyle \mathrm{Cov}(f(x),g(x)) $ & Covariance of $f(x)$ and $g(x)$ under $P({\textnormal{x}})$\\
$\displaystyle H({\textnormal{x}}) $ & Shannon entropy of the random variable ${\textnormal{x}}$\\
$\displaystyle D_{\mathrm{KL}} ( P \Vert Q ) $ & Kullback-Leibler divergence of P and Q \\
$\displaystyle \mathcal{N} ( {\bm{x}} ; {\bm{\mu}} , {\bm{\Sigma}})$ & Gaussian distribution %
over ${\bm{x}}$ with mean ${\bm{\mu}}$ and covariance ${\bm{\Sigma}}$ \\
\end{tabular}
\egroup
\vspace{0.25cm}
\centerline{\bf Functions}
\bgroup
\def\arraystretch{1.5}
\begin{tabular}{p{1.25in}p{3.25in}}
$\displaystyle f: {\mathbb{A}} \rightarrow {\mathbb{B}}$ & The function $f$ with domain ${\mathbb{A}}$ and range ${\mathbb{B}}$\\
$\displaystyle f \circ g $ & Composition of the functions $f$ and $g$ \\
$\displaystyle f({\bm{x}} ; {\bm{\theta}}) $ & A function of ${\bm{x}}$ parametrized by ${\bm{\theta}}$.
(Sometimes we write $f({\bm{x}})$ and omit the argument ${\bm{\theta}}$ to lighten notation) \\
$\displaystyle \log x$ & Natural logarithm of $x$ \\
$\displaystyle \sigma(x)$ & Logistic sigmoid, $\displaystyle \frac{1} {1 + \exp(-x)}$ \\
$\displaystyle \zeta(x)$ & Softplus, $\log(1 + \exp(x))$ \\
$\displaystyle || {\bm{x}} ||_p $ & $L^p$ norm of ${\bm{x}}$ \\
$\displaystyle || {\bm{x}} || $ & $L^2$ norm of ${\bm{x}}$ \\
$\displaystyle x^+$ & Positive part of $x$, i.e., $\max(0,x)$\\
$\displaystyle \bm{1}_\mathrm{condition}$ & is 1 if the condition is true, 0 otherwise\\
\end{tabular}
\egroup
\vspace{0.25cm}
\section{Final instructions}
Do not change any aspects of the formatting parameters in the style files.
In particular, do not modify the width or length of the rectangle the text
should fit into, and do not change font sizes (except perhaps in the
\textsc{References} section; see below). Please note that pages should be
numbered.
\section{Preparing PostScript or PDF files}
Please prepare PostScript or PDF files with paper size ``US Letter'', and
not, for example, ``A4''. The -t
letter option on dvips will produce US Letter files.
Consider directly generating PDF files using \verb+pdflatex+
(especially if you are a MiKTeX user).
PDF figures must be substituted for EPS figures, however.
Otherwise, please generate your PostScript and PDF files with the following commands:
\begin{verbatim}
dvips mypaper.dvi -t letter -Ppdf -G0 -o mypaper.ps
ps2pdf mypaper.ps mypaper.pdf
\end{verbatim}
\subsection{Margins in LaTeX}
Most of the margin problems come from figures positioned by hand using
\verb+\special+ or other commands. We suggest using the command
\verb+\includegraphics+
from the graphicx package. Always specify the figure width as a multiple of
the line width as in the example below using .eps graphics
\begin{verbatim}
\usepackage[dvips]{graphicx} ...
\includegraphics[width=0.8\linewidth]{myfile.eps}
\end{verbatim}
or
\begin{verbatim}
\usepackage[pdftex]{graphicx} ...
\includegraphics[width=0.8\linewidth]{myfile.pdf}
\end{verbatim}
for .pdf graphics.
See section~4.4 in the graphics bundle documentation (\url{http://www.ctan.org/tex-archive/macros/latex/required/graphics/grfguide.ps}).
A number of width problems arise when LaTeX cannot properly hyphenate a
line. Please give LaTeX hyphenation hints using the \verb+\-+ command.
\subsubsection*{Author Contributions}
If you'd like to, you may include a section for author contributions as is done
in many journals. This is optional and at the discretion of the authors.
\subsubsection*{Acknowledgments}
This work was supported by the Engineering and Physical Sciences Research Council of the UK (EPSRC) Grant number EP/S000631/1 and the UK MOD University Defence Research Collaboration (UDRC) in Signal Processing.
\fi
\nocite{*}
\section{Introduction}
The use of data analytics
is growing, as it plays a crucial role
in making decisions and identifying social and economical strategies. Not all data, however, are equally useful, and
the availability of accurate data is crucial for obtaining high-quality analytics.
In line with this trend, data are considered an asset and commercialized, and data markets, such as Datacoup\cite{Datacoup84:online} and Liveen\cite{LiveenBl14:online},
are on the rise.
Unlike traditional data brokers, data markets provide a direct data trading service between data providers and data consumers. Through data markets, data providers can be informed of the value of their private data, and data consumers can collect and process personal data directly at reduced costs, as intermediate entities are not needed in this model.
Two important issues that need to be addressed for the success of such data markets are (a) the prevention of privacy violation, and (b) an appropriate pricing mechanism for personal data. Data owners are increasingly aware of the privacy risks, and are less and less inclined to expose their sensitive data without proper guarantees. If the data market cannot be trusted concerning the protection of the sensitive information, the data providers will not be willing to trade their data.
For example, Cambridge Analytica collected millions of Facebook users' profiles under the pretext of using them for academic purposes, while in reality they used this information to influence the 2016 US presidential election \cite{hinds2020wouldn}. When media outlets broke news of Cambridge Analytica's business practices, many Facebook users felt upset about the misuse of their data and left Facebook.
Differential privacy \cite{dwork2014algorithmic} can prevent exposure of personal information while preserving statistical utility, hence it is a good candidate to protect privacy in the data market. Another benefit of differential privacy is that it provides a metric, i.e., the parameter
$\epsilon$, which represents the amount of obfuscation, and therefore the level of privacy and utility of the sanitized data. Hence $\epsilon$ can be used directly to establish the price of personal data as a function of the level of privacy protection desired by an individual.
We envision a data trading framework in which groups of data providers ally to form federations in order
to increase their bargaining power, following the traditional model of trade unions.
At the same time, federations guarantee that the members respect their engagement concerning the trade.
Another important aspect of the federation is that the value of the collection of all data is usually different from the sum of the values of all members' data. It could be larger, for instance because the accuracy of the statistical analyses increases with the size of the dataset, or could be smaller, for instance because of some discount offered by the federation.
Data consumers are supposed to make a collective deal with a federation rather than with the individual data providers, and, from their perspective,
this approach can be more reliable and efficient than dealing with individuals. Thus, data trading through federations can benefit both parties.
Given such a scenario, two questions are in order:
\begin{enumerate}
\item{ How is the price of data determined in a federation environment?}
\item { How does the federation fairly distribute the earnings to its members?}
\end{enumerate}
In this paper, we consider these issues, and we provide the following contributions:
\begin{enumerate}
\item{ We propose a method to determine the price of collective data based on the differential privacy
metric.}
\item {We propose a distribution model based on game theory. More precisely, we borrow the
notion of Shapley value~\cite{winter2002shapley, roth1988shapley} from the theory of cooperative games. This is a method to determine the contribution of each participant to the payoff, and we will use it
to ensure that each member of the federation receives a compensation according to his contribution.}
\end{enumerate}
The paper is organized as follows: Section 2
recalls some basic notions about differential privacy and the Shapley value. Section 3 summarizes related work. Section 4 describes federation-based data trading and our proposal for the distribution of the earnings. Section 5 validates the proposed technique through experiments. Section 6 concludes and discusses potential directions for future work.
\section{Preliminaries}
In this section, we recall the basics about differential privacy and Shapley values.
\subsection{Differential privacy}
Differential privacy (DP) is a method to ensure the privacy
of datasets, based on obfuscating the answers to queries. It is parametrized by $\epsilon\in \mathbb{R}^+$, which represents the level of privacy.
We recall that two datasets $D_1$ and $D_2$ are
neighboring if they differ by only one record.
\begin{definition}[Differential privacy\text{\cite{dwork2014algorithmic}}]
\label{dp}
A randomized function $\mathcal{R}$ provides \emph{$\epsilon$-differential privacy} if for all neighboring datasets $D_1$ and $D_2$ and all $ S \subseteq$ Range($\mathcal{R}$), we have
\begin{equation*}
\mathbb{P}[\mathcal{R}(D_1) \in S] \leq e^{\epsilon} \times \mathbb{P}[\mathcal{R}(D_2) \in S]
\end{equation*}
\end{definition}
For example, if $\mathbb{D}$ is the space of all datasets and $m\in \mathbb{N}$, the randomized function $\mathcal{R}:\mathbb{D}\mapsto\mathbb{R}^m$ could be such that $\mathcal{R}(D) = \mathcal{Q}(D)+X$, where $\mathcal{Q}$ is a statistical query function executed on $D$, such as a counting or histogram query, and $X$ is noise added to the true query response. For the sensitivity $\Delta_{\mathcal{Q}}=\max\limits_{D,D'}|\mathcal{Q}(D)-\mathcal{Q}(D')|$, where the maximum ranges over neighboring datasets $D,D'\in\mathbb{D}$, if $X\sim\text{Lap}(0,\frac{\Delta_{\mathcal{Q}}}{\epsilon})$, then $\mathcal{R}$ guarantees $\epsilon$-DP.
DP is typically implemented by adding controlled random noise to the true answer to the query before reporting the result. $\epsilon$ is a positive real number parameter, and the value of $\epsilon$ affects the amount of privacy, which decreases as $\epsilon$ increases. For simplicity of discussion, we focus on the non-interactive and pure $\epsilon$-differential privacy.
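As an illustration (our sketch, not code from the works cited), a counting query can be privatized with the Laplace mechanism as follows:

```python
import math
import random

def laplace_mechanism(true_answer: float, sensitivity: float, epsilon: float) -> float:
    """Return an epsilon-DP answer by adding Lap(0, sensitivity/epsilon) noise."""
    scale = sensitivity / epsilon
    u = random.random() - 0.5                  # U ~ Uniform(-0.5, 0.5)
    # Inverse-CDF sampling of the Laplace distribution.
    noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_answer + noise

# A counting query has sensitivity 1: adding or removing one record
# changes the count by at most 1.
private_count = laplace_mechanism(true_answer=42.0, sensitivity=1.0, epsilon=0.5)
```

A smaller $\epsilon$ yields a larger noise scale, hence more privacy and less utility.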
Recently, a local variant of differential privacy (LDP), in which the data owners directly obfuscate their own data, has been proposed~\cite{erlingsson2014rappor}. This variant considers individual data points (or records), rather than queries on datasets.
Its definition is as follows:
\begin{definition}[Local differential privacy\cite{erlingsson2014rappor}]
A randomized function $\mathcal{R}$ satisfies $\epsilon$-\emph{local differential privacy} if, for all pairs of individual data $x$ and $x'$, and for any subset $S \subseteq \text{Range}(\mathcal{R})$, we have
\begin{equation*}
\mathbb{P}[\mathcal{R}(x) \in S] \leq e^{\epsilon} \cdot \mathbb{P}[\mathcal{R}(x') \in S].
\end{equation*}
\label{ldp}
\end{definition}
When the domain of data points is finite, one of the simplest and most used mechanisms for LDP is $k$RR\cite{kairouz2016discrete}.
In this paper, we assume that all data providers use this mechanism to obfuscate their data.
\begin{definition}[$k$RR Mechanism \cite{kairouz2016discrete}]
Let $\mathcal{X}$ be an alphabet of size $k < \infty$. For a given privacy parameter $\epsilon$, and given
an input $x\in \mathcal{X}$, the $k$RR mechanism
returns $y\in \mathcal{X}$ with probability:
\[\mathbb{P}(y|x)\;\; = \;\;\frac{1}{k-1+e^{\epsilon}} \begin{cases}
e^{\epsilon}, & \mbox{if } y=x \\
1, & \mbox{if } y \neq x
\end{cases}\]
\label{krr}
\end{definition}
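A minimal sketch of this mechanism over a finite alphabet (our illustration):

```python
import math
import random

def krr(x, alphabet, epsilon):
    """k-ary randomized response: keep x with probability
    e^epsilon / (k - 1 + e^epsilon), otherwise report a uniformly
    chosen different symbol."""
    k = len(alphabet)
    p_keep = math.exp(epsilon) / (k - 1 + math.exp(epsilon))
    if random.random() < p_keep:
        return x
    # Uniform choice over the k - 1 remaining symbols.
    return random.choice([y for y in alphabet if y != x])

reported = krr("blue", ["red", "green", "blue"], epsilon=1.0)
```

Each provider runs this locally on her own record, so no trusted curator is needed.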
\subsection{Shapley value}
When participating in data trading through a federation, \emph{Pareto efficiency} and \emph{symmetry} are important properties for the intra-federation earning distribution. Pareto efficiency means that the whole revenue is distributed, so that no participant's share can be increased without decreasing another's. Symmetry means that all players who make the same contribution must receive the same share. Obviously, the share should vary according to each member's contribution to the collective data.
The Shapley value~\cite{winter2002shapley, roth1988shapley} is a concept from cooperative game theory, named after Lloyd Shapley, who introduced it and was awarded the Nobel Memorial Prize in Economic Sciences in 2012.
The Shapley value is a method to distribute the total gain of a cooperative game that satisfies Pareto efficiency, symmetry, and distribution according to each player's contribution.
Thus, all participants have the advantage of being fairly incentivized.
Moreover, the distribution based on the Shapley value is unique. Due to these properties, the Shapley value is regarded as an excellent approach to designing a distribution method.
Let $N=\{1,\ldots,n\}$ be the set of players involved in a cooperative game and $M\in\mathbb{R}^+$ be the financial revenue from the data consumer. Let $v: 2^N\mapsto\mathbb{R}^+$ be the characteristic function, mapping each subset $S\subseteq N$ to the total expected sum of payoffs the members of $S$ can obtain by cooperation, i.e., $v(S)$ is the total collective payoff of the players in $S$. According to the Shapley value, the benefit received by player $i$ in the cooperative game is given as follows:
\[ \psi_i(v, M) \;\; =\; \sum_{S \subseteq N\setminus\{i\}} \frac{|S|!\times(n-|S|-1)!}{n!}(v(S \cup \{i\} )-v(S)) \]
We observe that $v(A) \geq v(B)$ for any subsets $B \subset A \subseteq N$, and hence $v(S \cup \{i\} )-v(S)$ is nonnegative. We call this quantity the \emph{marginal contribution} of player $i$ with respect to a given subset $S$. Note that $\psi_i(v, M)$ is the weighted average marginal contribution of player $i$ over all subsets $S \subseteq N\setminus\{i\}$.
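For small $n$, the formula above can be evaluated directly by enumerating all subsets. The following brute-force sketch (ours, exponential in $n$ and illustrative only) computes $\psi_i$ for every player:

```python
from itertools import combinations
from math import factorial

def shapley_values(players, v):
    """Brute-force Shapley values; v maps a frozenset of players to a payoff."""
    n = len(players)
    result = {}
    for i in players:
        others = [p for p in players if p != i]
        total = 0.0
        for r in range(n):
            for S in combinations(others, r):
                S = frozenset(S)
                weight = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                total += weight * (v(S | {i}) - v(S))
        result[i] = total
    return result

# Example: a 3-player majority game where any coalition of >= 2 players wins.
majority = lambda S: 1.0 if len(S) >= 2 else 0.0
shares = shapley_values(["a", "b", "c"], majority)  # each player gets 1/3
```

By symmetry, the three players of the majority game receive equal shares, and the shares sum to $v(N)$ (Pareto efficiency).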
In this paper, we use the Shapley value to distribute the earnings according to the contributions of the data providers in the federations.
\section{Related works}
Data markets, such as Datacoup\cite{Datacoup84:online} and Liveen\cite{LiveenBl14:online}, need to provide privacy protection in order to encourage the data owners to participate.
One of the key questions is how to appropriately price data obfuscated by
a privacy-protection mechanism.
When we use differential privacy, the accuracy of the data depends on the value of the noise parameter $\epsilon$, which determines the privacy-utility trade-off. Thus, this question is linked to the problem of how to establish the value of $\epsilon$. Researchers have debated how to choose this value since the introduction of differential privacy, and there have been several proposals \cite{tang2017privacy, lee2011much,domingo2015t,holohan2017k}. In particular, \cite{lee2011much} showed that the privacy protection level provided by an arbitrary $\epsilon$ can be infringed by inference attacks, and it proposed a method for setting $\epsilon$ based on the posterior belief. \cite{domingo2015t} considered the relation between differential privacy and $t$-closeness, a notion of group privacy which prescribes that the earth mover's distance between the distribution in any group $E$ and the distribution in the whole dataset does not exceed the threshold $t$, and showed that both $\epsilon$-differential privacy and $t$-closeness are satisfied when $t=\max_E\frac{|E|}{N}\left(1+\frac{N-|E|-1}{|E|}e^{\epsilon}\right)$, where $N$ is the number of records of the database.
Several other works have studied how to price the data according to the value of $\epsilon$ \cite{hsu2014differential, ghosh2011selling, roth2012buying, fleischer2012approximately, nget2017balance,li2014theory,zhang2021differential,jung2019privacy}. The purpose of these studies is to determine the price and value of the $\epsilon$ according to the data consumer’s budget, accuracy requirement of information, the privacy preference of the data provider, and the relevance of the data. In particular, the study in \cite{zhang2021differential} assumed a dynamic data market and proposed an incentive mechanism for data owners to truthfully report their privacy preferences. In \cite{nget2017balance}, the authors proposed a framework to find the balance between financial incentive and privacy in personal data markets where data owners sell their own data, and suggested the main principles to achieve reasonable data trading. Ghosh and Roth \cite{ghosh2011selling} proposed a pricing mechanism based on auctions that maximizes the data accuracy under the budget constraint or minimizes the budget for the fixed data accuracy requirement, where data is privatized with differential privacy.
Our study differs from previous work in that, unlike the existing approaches assuming a one-to-one data trading between data consumers and providers, we consider trades between a data consumer and a federation of data providers. In such a federated environment, the questions are (a) how to determine the price of the collective data according to the privacy preferences of each member, and (b) how to determine the individuals' contribution
to the overall data value, in order to receive a share of the earnings accordingly.
In this paper, we estimate the value of $\epsilon$ for the $k$RR mechanism \cite{kairouz2016discrete}, and we fairly distribute the earnings to the members of the federations using the Shapley value. We propose a valuation function that fits the characteristics of differential privacy; for example, an increasing value of $\epsilon$ does not indefinitely increase the price (we elaborate on this in Section 4). Furthermore, we characterize the conditions required for setting up the earning distribution schemes.
\section{Differentially Private Data Trading Mechanism}
\subsection{Mechanism outline}
\paragraph{Overview: }We focus on an environment with multiple federations of data providers and one data consumer who interacts with the federations in order to obtain information (data obfuscated using $k$RR mechanism with varying values of $\epsilon$) in exchange of financial revenues. We assume that federations and consumer are aware that the data providers use $k$RR mechanism, independently and with their desired privacy level (which can differ from provider to provider). Our method provides a sensible way of splitting the earnings using the Shapley value. In addition, it also motivates an individual to cooperate with the federation she is a part of, and penalises intentional and recurring non-cooperation.
\paragraph{Notations and set-up: } Let $\mathcal{F}=\{F_1,\ldots, F_k\}$ be a set of $k$ federations of data providers, where each federation $F_i$ has $n_{F_i}$ members for each $i \in \{1,\ldots,k\}$. For a federation $F\in\mathcal{F}$, let its members be denoted by $F=\{p^F_1,\ldots,p^F_{n_F}\}$. And finally, for every federation $F$, let $p^F_* \in F$ be an elected representative of $F$ interacting with the data consumer. This approach to communication benefits both the data consumer and the data providers because (a) the data consumer minimizes her communication cost by interacting
with just one representative of the federation, and (b) the reduced communication induces an additional layer of privacy.
We assume that each member $p$
of a federation $F$ has a maximum privacy threshold $\epsilon^T_p$ with which she, independently, obfuscates her data using the $k$RR mechanism. We also assume that $p$ has $d_p$ data points to potentially report.
We know from Equation (18) of \cite{Ehab2021privacy} that if there are $m$ data providers reporting $d_1,\ldots,d_m$ data points, independently privatizing them using $k$RR mechanism with the privacy parameters $\epsilon_1,\ldots, \epsilon_m$, the federated data of all the $m$ providers also follow a $k$RR mechanism with the privacy parameter being:
\[
\epsilon=\ln{\left(\frac{\sum_{i=1}^m d_i}{\sum_{i=1}^m \;\frac{d_i}{k-1 + e^{\epsilon_i}}}\; +1-k\right)}.
\]
We call the quantity $d_p\epsilon^T_p$ the \emph{information limit} of data provider $p\in F$, and
\begin{equation}\label{eq:maxinfothreshold}
\epsilon^T_F=\ln{\left(\frac{\sum_{p\in F} d_p}{\sum_{p\in F} \;\frac{d_p}{k-1 + e^{\epsilon^T_p}}}\; +1-k\right)}
\end{equation}
the \emph{maximum information threshold of the federation} $F$.
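As a sketch (our code, with illustrative variable names), the combined privacy parameter of a federation can be computed directly from this formula:

```python
import math

def combined_epsilon(d, eps, k):
    """Overall kRR privacy parameter when provider i reports d[i] points
    privatized with parameter eps[i] over a k-ary alphabet."""
    total = sum(d)
    denom = sum(di / (k - 1 + math.exp(ei)) for di, ei in zip(d, eps))
    return math.log(total / denom + 1 - k)

# Sanity check: if every provider uses the same epsilon, the federation's
# combined parameter is that same epsilon.
assert abs(combined_epsilon([10, 20], [1.0, 1.0], k=5) - 1.0) < 1e-9
```

Providers who report more data points or use weaker privacy (larger $\epsilon_i$) pull the combined parameter upward.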
We now introduce the concept of
\emph{valuation function}
$f(.)$, which maps financial revenues to information, representing the amount of information to be obtained for a given price. It is reasonable to require that $f(.)$ is strictly monotonically increasing and continuous.
In this work we focus on the effect on the privacy parameter, hence we
regard the collection of data points as a constant, and assume that only $\epsilon$ can vary. We will call $f(.)$ the \emph{privacy valuation function}.
\begin{definition}[Privacy valuation function]
A function $f: \mathbb{R^+}\mapsto\mathbb{R^+}$ is a \emph{privacy valuation function} if $f(.)$ is strictly monotonically increasing and continuous.
\label{privvalfn}
\end{definition}
As $f(.)$ is strictly monotonically increasing and continuous, it is invertible. We denote its inverse by $f^{-1}: \mathbb{R^+}\mapsto\mathbb{R^+}$, which maps a privacy parameter $\epsilon$ to the financial revenue associated with selling data privatized using the $k$RR mechanism with $\epsilon$ as the privacy parameter.
As $f(.)$ essentially determines the privacy parameter of a differentially private mechanism ($k$RR, in this case), it is reasonable to assume that $f(.)$ should be not only increasing, but increasing exponentially for a linear increase of money. In fact, when $\epsilon$ is high, it hardly makes any difference to further increase its value. For example, when $\epsilon$ increases from 200 to 250, it makes practically no difference, as the data were already practically not private. On the other hand, increasing $\epsilon$ from 0 to 50 creates a huge difference, conveying much more information. Therefore, it makes sense to set $f(.)$ to increase exponentially with a linear increase of the financial revenue.
An example of a privacy valuation function that we consider in this paper is $f(M)=K_1(e^{K_2M}-1)$, taking the financial revenue $M\in\mathbb{R}^+$ as its argument, satisfying the reasonable assumptions of evaluating the differential privacy parameter that should be used to privatize the data in exchange of the financial revenue of $M$. Here the parameters $K_1\in \mathbb{R}^+$ and $K_2\in \mathbb{R}^+$ are decided by the data consumer according to her requirements.
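A minimal sketch of this valuation function and its inverse (the parameter values in the usage lines are arbitrary illustrations):

```python
import math

def f(M, K1, K2):
    """Privacy valuation: the epsilon granted for a financial revenue M."""
    return K1 * (math.exp(K2 * M) - 1)

def f_inv(eps, K1, K2):
    """Inverse valuation: the price of data privatized with parameter eps."""
    return math.log(eps / K1 + 1) / K2

# epsilon grows exponentially with money, so the price grows only
# logarithmically with epsilon.
eps = f(10.0, K1=2.0, K2=0.3)
price = f_inv(eps, K1=2.0, K2=0.3)   # recovers 10.0
```

Note that $f(0)=0$: no revenue buys no information.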
\begin{figure}[htbp]
\centerline{\includegraphics[width=0.7\textwidth]{Figure1.png}}
\caption{Some examples of the privacy valuation function $f(.)$ illustrated with different values of $K_1$ and $K_2$. The data consumer decides the values of the parameters $K_1$ and $K_2$ according to her requirement, and broadcasts the determined function to the federations.}
\label{fig1}
\end{figure}
\paragraph{Finalizing and achieving the deal: }Before the private-data trading commences, the data consumer, $D$, truthfully broadcasts her financial budget, $\$B$, and a privacy-valuation function, $f(.)$, chosen by her to all the federations. At this stage, each federation computes their maximum privacy threshold. In particular, for a federation $F$ with members $F=\{p_1,\ldots,p_n\}$, and a representative $p_*$, $p_i$ reports $d_{p_i}$ and $\epsilon^T_{p_i}$ to $p_*$ for all $i \in \{1,\ldots, n\}$. $p_*$ computes the maximum information threshold of federation $F$, $\epsilon^T_F$, as given by \eqref{eq:maxinfothreshold}.
At this point, $p_*$ places a bid to $D$ to obtain \$$M$, which maximises the earning for $F$ under the constraint of their maximum privacy threshold and the maximum budget available from $D$, i.e., $p_*$ wishes to maximize $M$ within the limits $M\leq B$ and $f(M) \leq \epsilon^T_F$. Thus, $p_*$ bids for sending data privatized using the $k$RR mechanism with $\epsilon^T_F$ in exchange of $f^{-1}(\epsilon^T_F)$.
At the end of this bidding process by all the federations, $D$ ends up with $\epsilon=\{\epsilon^T_{F_1},\ldots,\epsilon^T_{F_k}\}$, the maximum privacy thresholds of all the federations. At this stage $D$ must ensure that $\sum_{i=1}^k f^{-1}(\epsilon^T_{F_i}) \leq B$, adhering to her financial budget. In a realistic setup, however, $\sum_{i=1}^k f^{-1}(\epsilon^T_{F_i})$ is likely to exceed $B$. Here, $D$ needs a way to ``seal the deal'' with the federations staying within her financial budget, while maximizing her information gain, i.e., maximizing $\sum_{i=1}^kd_{F_i}\epsilon_{F_i}$, where $d_{F_i}$ is the total number of data points obtained from the $i^{th}$ federation $F_i$, and $\epsilon_{F_i}$ is the overall privacy parameter of the $k$RR differential privacy with the combined data of all the members of $F_i$.
A way $D$ could finalize the deal with the federations is by proposing to receive information obfuscated with $w^*\epsilon^T_{F_i}$ using $k$RR mechanism to $F_i$ $\forall i \in \{1,\ldots,k\}$, where \[w^*=\max\left\{w: \sum_{i\in\{1,\ldots,k\}}f^{-1}(w\epsilon^T_{F_i}) \leq B, w\in [0,1]\right\},\]
i.e., proportional to every federation's maximum privacy threshold ensuring that the price to be paid to the federations is within $D$'s budget. Note that $w\in [0,1]$ guarantees that $w\epsilon^T_{F} \leq \epsilon^T_F$ for every federation $F$, making the proposed privacy parameter possible to achieve by every federation, as it's within their respective maximum privacy thresholds. Let the combined privacy parameter for federation $F_i$ proposed by $D$ to successfully complete the deal be denoted by $\epsilon^P_{F_i}=w^*\epsilon^T_{F_i}$ $\forall i \in \{1,\ldots,k\}$ which is the privacy level \emph{promised} to be achieved by each federation to participate in the data market.
The above method to scale down the maximum privacy parameters to propose a deal, maximizing $D$'s information gain, is just one of the possible approaches. In theory, any method that ensures the total price to be paid to all the federations, in exchange of their data, is within $D$'s budget, and the privacy parameters proposed are within the corresponding privacy budgets of the federations, could be implemented to propose a revised set of privacy parameters and, in turn, the price associated with them.
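Since $f^{-1}$ is increasing and continuous, $w^*$ can be approximated by bisection; the following sketch (ours, using an illustrative valuation function) is one possible implementation:

```python
import math

def find_w_star(eps_thresholds, budget, f_inv, tol=1e-9):
    """Largest w in [0, 1] with sum_i f_inv(w * eps_i) <= budget.
    Assumes f_inv is increasing, continuous, and f_inv(0) = 0."""
    cost = lambda w: sum(f_inv(w * e) for e in eps_thresholds)
    if cost(1.0) <= budget:
        return 1.0
    lo, hi = 0.0, 1.0                  # invariant: cost(lo) <= budget < cost(hi)
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if cost(mid) <= budget:
            lo = mid
        else:
            hi = mid
    return lo

# Illustrative inverse valuation: f(M) = K1*(e^(K2*M) - 1) with K1=2, K2=0.3.
f_inv = lambda eps: math.log(eps / 2.0 + 1) / 0.3
w = find_w_star([5.0, 8.0, 3.0], budget=10.0, f_inv=f_inv)
```

At the returned $w$, the total price paid to the federations is (up to the tolerance) exactly the consumer's budget $B$.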
\begin{definition}[Seal the deal]
When all the federations are informed about the revised privacy parameters desired of them, and they agree to proceed with the private-data trading with the data consumer by achieving the revised privacy parameter by combining the data of their members, we say \emph{the deal has been sealed} between the federations and the data consumer.
\end{definition}
Once the deal is sealed between the federations and the data consumer, $F_i$ is expected to provide data gathered from its members with an overall obfuscation with the privacy parameter $\epsilon^P_{F_i}$ using the $k$RR mechanism, in exchange of a price $M^i=f^{-1}(\epsilon^P_{F_i})$ for every $i \in \{1,\ldots,k\}$. Failing to achieve this parameter of privacy for any federation results in a failure to uphold the conditions of the ``deal'' and makes the deal void for that federation, with no price received.
A rational assumption made here is that if a certain federation $F$ fails to gather data from its members such that the overall $k$RR privacy parameter of $F$ is less than $\epsilon^P_F$, then $F$ doesn't receive any partial compensation for its contribution, as it would incur an increase in communication cost and time for the data consumer in proceeding to this stage and ``seal a new deal'' with $F$, instead of investing the revenue to a more responsible federation.
The rest of the process consists in collecting the data, and it takes place within
every federation $F$ which has sealed the deal. At the $t^{th}$ round, for $t\in \{1,2,\ldots\}$, any member $p$ of $F$ has the freedom of contributing $d^{t}_p \leq d_p - \sum_{i=1}^{t-1}d^{i}_p$ data points privatized using the $k$RR mechanism with any parameter $\epsilon^t_p$. The process continues until the information collected so far achieves a privacy level of at least $\epsilon^P_F$. Let $\cal T$ denote the number of rounds needed by $F$ to achieve the required privacy level. As per the deal sealed between $F$ and $D$, $F$ needs to submit $D_F=\sum_{p \in F}\sum_{i=1}^{\cal T}d^i_p$ data points to $D$ such that the overall $k$RR privacy level of the collated data, $\epsilon_F$, is at least $\epsilon^P_F$, and in return $F$ receives a financial revenue of $\$M$ from $D$.
\subsection{Earnings splitting}
We use the Shapley value to estimate the contribution of each data provider of the federation, in order to split the whole earning $M$, which $F$ would receive from $D$ at the end of the trade. Let $\psi:\mathbb{R}^+\times\mathbb{R}^+ \mapsto \mathbb{R}^+$ be the valuation function used for evaluating the Shapley values of the members after each contribution. If a certain member, $p$, of $F$ reports $d$ differentially private data points with privacy parameter $\epsilon$, $\psi_i(v)$ should give the share of ``contribution'' made by $p$ over the total budget, $M$, of $F$, to be split across all its members. It is assumed that each member, $p$, of $F$ computes her Shapley value, knows what share of revenue she would receive by contributing her data privatized with a chosen privacy parameter, and uses this knowledge to decide on $\epsilon^t_p$ at every round $t$, depending on her financial desire. In our model, characteristic function $v(S)$ is as follows:
\[ v(S)=
\begin{cases}
M, & \mbox{if } \epsilon_F \geq \epsilon_{F}^P \\
0, & \mbox{if } \epsilon_F < \epsilon_{F}^P
\end{cases} \]
where $\epsilon_F$ denotes the overall $k$RR privacy level achieved by the data providers in the coalition $S$.
\begin{example}
As an example, assume that there are three providers $p_1$, $p_2$, $p_3$, whose contributions $\sum_{t=1}^{\cal T}d^t_p\frac{e^{\epsilon^t_{p}}}{k-1 + e^{\epsilon^t_{p}}}$ are $1.0$, $0.5$, and $0.3$, respectively. We further assume that $\epsilon_F^P$ is $1.4$ and the financial revenue $M$ is $60$. The calculation of each provider's revenue using the Shapley value is then as follows:\\
\\
Case 1) Only one data provider participates:
\begin{center}
$p_1: v(p_1)=0$\\
$p_2: v(p_2)=0$\\
$p_3: v(p_3)=0$
\\
\end{center}
Case 2) Two providers participate:
\begin{center}
$p_1: v(p_1+p_2)-v(p_2)=M, v(p_1+p_3)-v(p_3)=0$ \\
$p_2: v(p_1+p_2)-v(p_1)=M, v(p_2+p_3)-v(p_3)=0$\\
$p_3: v(p_1+p_3)-v(p_1)=0, v(p_2+p_3)-v(p_2)=0$
\\
\end{center}
Case 3) All providers participate:
\begin{center}
$p_1: v(p_1+p_2+p_3)-v(p_2+p_3)=M$ \\
$p_2: v(p_1+p_2+p_3)-v(p_1+p_3)=M$ \\
$p_3: v(p_1+p_2+p_3)-v(p_1+p_2)=0$ \\
\end{center}
According to the above results, the share of each user, according to their Shapley values, is as follows:
\begin{center}
$\psi_1(v)=\frac{0!2!}{3!}0+\frac{1!1!}{3!}M+\frac{1!1!}{3!}0+\frac{2!0!}{3!}M=\frac{3M}{6}=30$\\
$\psi_2(v)=\frac{0!2!}{3!}0+\frac{1!1!}{3!}M+\frac{1!1!}{3!}0+\frac{2!0!}{3!}M=\frac{3M}{6}=30$\\
$\psi_3(v)=\frac{0!2!}{3!}0+\frac{1!1!}{3!}0+\frac{1!1!}{3!}0+\frac{2!0!}{3!}0=\frac{0M}{6}=0$\\
\end{center}
In this example, $p_3$ has no effect on achieving $\epsilon_F^P$ and is therefore excluded from the revenue distribution. If the revenue were distributed proportionally to the contributions, without considering the Shapley values, $p_1$ would receive 33, $p_2$ would receive 17, and $p_3$ would receive 10. In that case $p_3$ would be rewarded even though her contribution is irrelevant to achieving $\epsilon_F^P$, while $p_2$ would receive a lower revenue despite her participation being just as pivotal as that of $p_1$. The Shapley value distributes the revenue only among those who have contributed to achieving the goal.
\end{example}
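The Shapley values in the example above can be verified by a direct brute-force computation under the threshold characteristic function $v(S)$ defined earlier; the function below is an illustrative sketch, not part of the mechanism itself.

```python
from itertools import combinations
from math import factorial

def shapley(contributions, threshold, M):
    """Exact Shapley values for v(S) = M if the total contribution of
    coalition S reaches the threshold, and 0 otherwise."""
    n = len(contributions)

    def v(S):
        return M if sum(contributions[i] for i in S) >= threshold else 0

    phi = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        total = 0
        for r in range(n):
            for S in combinations(others, r):
                # weight |S|! (n - |S| - 1)! / n!, kept as integers until the end
                total += factorial(r) * factorial(n - r - 1) * (v(S + (i,)) - v(S))
        phi.append(total / factorial(n))
    return phi

print(shapley([1.0, 0.5, 0.3], 1.4, 60))  # [30.0, 30.0, 0.0]
```

Note that Pareto efficiency holds: the computed shares always sum to $M$ whenever the grand coalition achieves the threshold.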
One of the problems of computing the Shapley values is the high computational complexity involved. If there is a large number of players, i.e., the size of a federation is large, the total number of subsets to be considered grows exponentially, limiting real-world applicability. To overcome this, we use a \emph{pruning technique} to reduce the computational complexity of the mechanism. A given federation $F$ receives revenue $M$ only when
$\epsilon_F \geq \epsilon_F^P$, as per the deal sealed with the data consumer. Therefore, it is not necessary to evaluate the marginal contribution of a member to a coalition unless the coalition achieves $\epsilon^P_F$ only after that member joins; all other coalitions contribute nothing towards the Shapley values of the members of $F$.
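The pruning can be sketched as follows, assuming the threshold characteristic function of the previous subsection: only ``pivotal'' coalitions, where $v$ jumps from $0$ to $M$ when the member joins, are accumulated, and all other coalitions are skipped without evaluating $v$. This is an illustrative sketch, not the exact implementation.

```python
from itertools import combinations
from math import factorial

def shapley_pruned(contributions, threshold, M):
    """Shapley values under the threshold game, accumulating only the
    coalitions that can yield a nonzero marginal contribution."""
    n = len(contributions)
    phi = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        total = 0
        for r in range(n):
            for S in combinations(others, r):
                s = sum(contributions[j] for j in S)
                if s >= threshold:                     # v(S) = v(S ∪ {i}) = M: marginal 0
                    continue
                if s + contributions[i] >= threshold:  # pivotal: v jumps from 0 to M
                    total += factorial(r) * factorial(n - r - 1) * M
        phi.append(total / factorial(n))
    return phi

print(shapley_pruned([1.0, 0.5, 0.3], 1.4, 60))  # [30.0, 30.0, 0.0]
```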
It is reasonable to assume that this differentially private data trading between the data consumer and the federations would continue periodically for a length of time. For example, Acxiom, a data broker company, periodically collects and manages personal data related to daily life, such as consumption patterns and occupations. Periodic data collection has higher value than one-time data collection because it can
track temporal trends. For simplicity of explanation, let us assume that the trading occurs every year. Hence, we consider a yearly period to illustrate the final two steps of our proposed mechanism: ``swift data collection'' and the ``penalty scheme''. The former ensures that the data collection process is as quick as possible for every federation in every year. The latter motivates the members to cooperate and act in the best interests of their respective federations by not unnecessarily withholding their privacy contributions, which would hinder achieving the privacy goals of their group as per the deal finalized with $D$.
Let $R\in \mathbb{N}$ be the ``tolerance period''. For a member $p \in F$, we denote $d(m)^i_p$ to be the number of data points reported by $p$ in the $i^{th}$ round of data collection of year $m$ and we denote $\epsilon(m)^i_p$ to be the privacy parameter used by $p$ to obfuscate the data points in the $i^{th}$ round of data collection of year $m$. Let $T_m$ be the number of rounds of data collection needed in year $m$ by federation $F$ to achieve their privacy goal. We denote the total number of data points reported by $p$ in the year $m$ by $d(m)_p$, and observe that $d(m)_p=\sum_{i=1}^{T_m}d(m)^i_p$. Let $\epsilon(m)^P$ denote the value of the privacy parameter of the combined $k$RR mechanism of the collated data that $F$ needs, in order to successfully uphold the condition of the deal sealed with $D$.
\begin{definition}[Contributed privacy level]
For a given member $p\in F$, we define the \emph{contributed privacy level} of $p$ in year $m$ as
\[\epsilon(m)_p=\sum_{i=1}^{T_m}\epsilon(m)^i_p.\]
\end{definition}
\begin{definition}[Privacy saving] For a given member $p\in F$, we define the \emph{privacy saving} of $p$ over a tolerance period $R$ (given by a set of some previous years), decided by the federation $F$, as
\[\Delta_p = \sum_{m\in R}\left(d(m)_p\epsilon^T_p-d(m)_p\epsilon(m)_p\right). \]
\end{definition}
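Under the two definitions above, the privacy saving of a member can be computed directly from her per-round reports. The data layout below (one list of $(d,\epsilon)$ pairs per year in $R$) is our assumption for illustration.

```python
def privacy_saving(yearly_reports, eps_max):
    """Δ_p = Σ_{m∈R} ( d(m)_p · ε^T_p − d(m)_p · ε(m)_p ), where
    ε(m)_p = Σ_i ε(m)^i_p is the contributed privacy level of year m.

    yearly_reports: one entry per year m ∈ R, each a list of (d, eps)
    pairs for the rounds of that year; eps_max is ε^T_p.
    """
    delta = 0.0
    for rounds in yearly_reports:
        d_year = sum(d for d, _ in rounds)        # d(m)_p
        eps_year = sum(eps for _, eps in rounds)  # ε(m)_p
        delta += d_year * eps_max - d_year * eps_year
    return delta
```

For example, a member with $\epsilon^T_p=5$ who reports $(10, 2.0)$ and $(5, 1.0)$ in a single year has $d(m)_p=15$, $\epsilon(m)_p=3.0$, and hence $\Delta_p = 15\cdot 5 - 15\cdot 3 = 30$.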
\paragraph{Swift data collection:}
It is in the best interest of $F$ and all its members to reduce the communication cost, time, and resources spent over the data collection rounds, and to achieve the goal of $\epsilon^P$ as soon as possible, in order to catalyze the trade with $D$ and receive the financial revenue. We aim to capture this through our mechanism, encouraging the members not to ``hold back'' their data well below their capacity.
To do this, in our model we design the Shapley valuation function $\psi(\cdot)$ such that, for $p\in F$ in year $m$, $\psi(N_p\epsilon(m)^{t+1}_p,d(m)_p,M)=\psi(\epsilon(m)^{t}_p,d(m)_p,M)$, where $N_p \in \mathbb{Z}^+$ is the \emph{catalyzing parameter} of the data collection, decided by the federation and directly proportional to $\Delta_p$. In particular, for $p\in F$ and a tolerance period $R$ decided in advance by $F$, it is reasonable to make $N_p\propto\Delta_p$: any member $p\in F$ reporting $d(m)_p$ data points would then need to use an $N_p$ times higher value of $\epsilon$ in the $(t+1)^{st}$ round of data collection in year $m$ than in the $t^{th}$ round, for the same number of reported data points, in order to get the same share of the federation's overall revenue, where $N_p$ is determined by how much privacy saving $p$ has accumulated over the fixed period $R$.
This ensures that if a member of a federation has been holding back her information by using low values of the privacy parameter $\epsilon$ (i.e., high levels of privacy) over a period of time, she needs to compensate in the following year by helping to speed up the data collection process of her federation. This should motivate the members of $F$ to report their data with a high value of the privacy parameter in earlier rounds rather than later, staying within their privacy budgets, so that the number of rounds needed to achieve $\epsilon(m)^P$ is reduced.
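A minimal sketch of the catalyzed update: $N_p$ is proportional to $\Delta_p$ (here we use the concrete choice $N_p = \Delta_p/(d(m)_p\,\epsilon^T_p)$ adopted later in the experiments), and the privacy parameter of the next round is scaled by $N_p$.

```python
def catalyzing_parameter(delta_p, d_year, eps_max):
    # N_p ∝ Δ_p; concrete choice N_p = Δ_p / (d(m)_p · ε^T_p)
    # used in the experiments section
    return delta_p / (d_year * eps_max)

def next_round_epsilon(prev_eps, N_p):
    # ε(m)^t_p = N_p · ε(m)^{t-1}_p, as in the swift data collection algorithm
    return N_p * prev_eps
```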
\paragraph{Penalty scheme:} It is also desirable that every member of any given federation cooperates with the other members of the same federation and facilitates the trading process in the best interest of the federation, to the best of her ability. That is why our mechanism incorporates a ``penalty scheme'' for the members of a federation who behave selfishly by keeping a substantial gap between their maximum privacy threshold and their contributed privacy level, wishing to enjoy the benefits of the revenue at the unfair cost of other members, who provide information privatized with almost their maximum privacy threshold. The penalty scheme prevents such non-cooperation and attempted ``free riding''.
\begin{definition}[Free rider]
We call a certain member $p\in F$ to be a \emph{free rider} if $\Delta_p \geq \delta_F$, for some $\delta_F \in \mathbb{R}^+$. Here, $\delta_F$ is a threshold decided by the federation $F$ beforehand and informed to all the members of $F$.
\end{definition}
Thus, in the ideal case, every member of $F$ would have a privacy saving of 0, meaning everyone contributed information to the best of their abilities, i.e., provided data obfuscated with their maximum privacy parameter. As a federation, however, a threshold amount of privacy saving is tolerated for every member. Under the ``penalty scheme'', if a certain member $p \in F$ qualifies as a free rider, she is excluded from the federation and is given a demerit point, which can be recorded by a central system keeping track of every member of every federation, preventing $p$ from getting admitted to any other federation due to her tendency to free ride. This means $p$ then has the responsibility of trading with the data consumer by herself. We can define the Shapley valuation function used to determine the share of $p$'s contribution such that $f^{-1}(\epsilon^T_p)< \psi(v,M)$, implying that the revenue received by $p$ dealing directly with $D$, providing one data point obfuscated with her maximum privacy threshold with respect to the privacy valuation function $f(\cdot)$, would be much lower than what $p$ would receive as a member of federation $F$.\footnote{Here, $v(\cdot)$ is the characteristic function of $\psi(\cdot)$, depending on $\epsilon_p^T$.}
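The exclusion rule of the penalty scheme is then a simple threshold test on the privacy savings; a sketch, with a dictionary of savings as a hypothetical input format:

```python
def apply_penalty(savings, delta_F):
    """Split a federation's members into those who stay and the free riders
    (members p with Δ_p ≥ δ_F), who are excluded from the federation.

    savings: dict mapping member name to privacy saving Δ_p.
    """
    stay = {p for p, d in savings.items() if d < delta_F}
    free_riders = {p for p, d in savings.items() if d >= delta_F}
    return stay, free_riders
```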
\begin{restatable}{theorem}{penaltyscheme}\label{th:1}
If the privacy valuation function used by the data consumer, $D$, is $f(m)=K_1(e^{K_2m}-1)$, then, in order to impose the \emph{penalty scheme} on any member $p\in F$ of a federation $F$, the Shapley valuation function $\psi(.)$ chosen by $F$ must satisfy $\frac{\ln(\frac{\epsilon^T_p}{K_1}+1)}{K_2} < \psi\left(\epsilon^T_p, \frac{\ln(\frac{w^*\epsilon^T_p}{K_1}+K)}{K_2}\right)$, where $K=\frac{\sum_{p'\neq p \in F}d_{p'}\epsilon^T_{p'}}{K_1}+1$, $d_{\pi}$ is the number of data points reported by any $\pi\in F$, and $w^*$ is the suggested scaling parameter computed by $D$ to propose a realistic deal, as described in Section 4.1.
\end{restatable}
\begin{proof}
See Appendix~\ref{app:a}.
\end{proof}
Imposing the ``penalty scheme'' is expected to drive every member of a given federation to cooperate with the interests of the federation and all fellow members to the best of her ability, deterring potential free riders.
We show the pseudocode for the entire process in Algorithm \ref{alg:entireAlg}, and describe the swift data collection and the penalty scheme in Algorithms \ref{alg:swiftAlg} and \ref{alg:penaltyAlg}, respectively.
\begin{algorithm}
\SetAlFnt{\small}
\textbf{Input:} Federation $F$, Data consumer $D$\;
\textbf{Output:} $\epsilon^P_F$ and $M$\;
$D$ broadcasts total budget $B$ and $f(.)$\;
Federation $F$ computes the $\epsilon^T_F=\sum_{i=1}^nd_{p_i}\epsilon^T_{p_i}$\;
$p_*$ places a bid to $D$ to obtain revenue $M$\;
$F$ and $D$ ``seal the deal'' to determine the $\epsilon^P_F$ and $M$\;
\While{$\epsilon_F < \epsilon^P_F$ and $t \leq T$}{
\Call{SwiftDataCollection}{$F$, $\epsilon^P_F$}\;
$p_*$ computes the overall privacy $\epsilon_F$
}
\eIf{$\epsilon_F \geq \epsilon^P_{F}$}
{
F receives the revenue $M$\;
$p_*$ computes the Shapley value $\psi_i(v, M)$\;
Each $p_i$ gets her share of the revenue $M$
}
{
deal fails
}
\caption{Federation based data trading algorithm}\label{alg:entireAlg}
\end{algorithm}
\begin{algorithm}
\SetAlFnt{\small}
\textbf{Input:} $F=\{p_1,\ldots,p_{n_F}\}$,$\epsilon^P_F$\;
\textbf{Output:} $\epsilon(m)^t_p$\;
\SetKwFunction{Fmain}{SwiftDataCollection}
\SetKwProg{Fn}{Function}{:}{}
\Fn{\Fmain{$F$,$\epsilon^P_F$}}{
\For{$i \leftarrow 1$ \KwTo $n_F$}
{
Compute $\Delta_{p_i}$\;
Compute the catalyzing parameter $N_{p_i}$\;
Determine $\epsilon(m)_{p_i}^{t}=N_{p_i}\epsilon(m)_{p_i}^{t-1}$
}
}
\caption{Swift data collection algorithm}\label{alg:swiftAlg}
\end{algorithm}
\begin{algorithm}
\SetAlFnt{\small}
\textbf{Input:} $F=\{p_1,\ldots,p_{n_F}\}$,$\Delta_{F}=\{\Delta_{p_1},\ldots ,\Delta_{p_{n_F}}\}$, $\delta_F$\;
\textbf{Output:} Updated $F$\;
\For{$i \leftarrow 1$ \KwTo $n_F$}
{
\If{$\Delta_{p_i} \geq \delta_F$}
{
$F \leftarrow F\setminus \{p_i\}$
}
}
\caption{Penalty scheme}\label{alg:penaltyAlg}
\end{algorithm}
\section{Experimental results}
\subsection{Experimental environments}
In this section, we present experiments supporting the claim that the proposed method succeeds in obtaining the promised $\epsilon$ and reduces the computation time for the Shapley value evaluation. The number of data providers constituting the federation is set to 25, 50, 75, and 100, respectively. The value of $\epsilon^T_p$ is drawn, independently for every participant $p$ in the federation, from a normal distribution with mean 5 and standard deviation 1, truncated to the interval $[1,10]$. The experimental environment is an Intel(R) i5-9400H CPU and 16 GB of memory.
\subsection{Number of rounds needed for data collection}
\begin{figure}[htbp]
\centerline{\includegraphics[width=1\textwidth]{Figure2.png}}
\caption{Experimental results for the combined $\epsilon$. The combined $\epsilon$ refers to the amount of information provided by the data providers.}
\label{fig2}
\end{figure}
Achieving $\epsilon_F^P$ is the key to the participation of $F$ in the data trading. If $\epsilon_F^P$ is not achieved as the collated information level of the federation, there is no revenue from the data consumer. Thus, it is important to encourage the data providers to report sufficient data in order to reach the goal of the deal sealed with the data consumer. The swift data collection is a way to catalyze the process of obtaining data from the members of every federation $F$, minimizing the number of data collection rounds needed to achieve $\epsilon_F^P$. Furthermore, we set $N_p=\frac{\Delta_p}{d(m)_p\epsilon^T_p}$ for every member $p$ of federation $F$, to motivate the data providers who have larger privacy savings to provide more information per round.
In the experiment, $\epsilon_F^P$ is set to 125, 250, 375, and 500, respectively. Every data provider $p$ in the federation determines $\epsilon(m)^t_p$ randomly in the first round, and then computes $\epsilon(m)^t_p$ according to $N_p$ in the subsequent rounds. We compare two cases: the catalyzing method and the non-catalyzing method.
As illustrated in Figure~\ref{fig2}, the experimental results show that both the catalyzing data collection and its non-catalyzing counterpart achieve the promised epsilon values within 5 rounds, but the catalyzing method achieves $\epsilon_F^P$ in an earlier round, because the data providers take their privacy savings into account when deciding the privacy level used to obfuscate their data, resulting in a \emph{swift data collection}.
\subsection{Number of free riders by penalty scheme}
\begin{figure}[htbp]
\centerline{\includegraphics[width=1\textwidth]{figure3.png}}
\caption{Experimental results for number of free riders. We compared the number of free riders incurred by the penalty scheme in catalyzing and non-catalyzing methods for cases where the number of data providers is 50 and 100. }
\label{fig3}
\end{figure}
The penalty scheme that prevents free riders is based on the premise that trading data by participating in a federation is more beneficial than trading data directly with the data consumer (Theorem \ref{th:1}). In the experiment, we evaluated the number of free riders in the catalyzing and non-catalyzing methods as the threshold $\delta_F$ increases.
As shown in Figure~\ref{fig3}, the number of free riders increases in both techniques as the threshold value $\delta_F$ is increased from 1 to 3. However, the non-catalyzing method produces more free riders than the catalyzing method, which adjusts the amount of provided information according to the privacy saving $\Delta_p$. In other words, the catalyzing method and the penalty scheme help to keep members in the federation by inducing them to reach the target epsilon earlier.
\subsection{Reduced Shapley value computation time }
As mentioned in Section 4.2, one of the limitations of the Shapley value evaluation is that it requires computing over all subsets. Through this experiment, we demonstrate that the proposed pruning technique reduces the computation time for calculating the Shapley values. We compared the computation time of the proposed method with that of the brute-force method, which evaluates all the cases, increasing the number of data providers in the federation from 15 to 27 in steps of 3.
\begin{table}
\centering
\caption{Computation time of brute force and proposed pruning method}\label{tab1}
\begin{tabular}{|c|c|c|}
\hline
\bfseries \# of data providers& \bfseries brute force (sec) & \bfseries pruning method (sec)\\
\hline
15 & 0.003 & 0.0007\\
18& 0.02 & 0.001\\
21 & 0.257 & 0.0049\\
24 & 2.313 & 0.009\\
27 & 19.706 & 0.019\\
\hline
\end{tabular}
\end{table}
As shown in the table, the computation time of the brute-force Shapley value evaluation increases exponentially, because the total number of subsets to be considered does the same. The proposed method calculates the Shapley values in less time by removing the unnecessary computations.
\section{Conclusion}
With the spread of data-driven decision-making practices, the interest in personal data is increasing. The data market gives a new opportunity to trade personal data, but a lot of research is still needed to solve the privacy and pricing issues.
In this paper, we have considered a data market environment in which data providers form federations and protect their data with the locally differentially private $k$RR mechanism, and we have proposed a pricing and earnings-distribution method. Our method integrates the different data providers' values of the privacy parameter $\epsilon$ and combines them to obtain the privacy parameter of the federation. The received earnings are distributed using the Shapley values of the members, which guarantees \emph{Pareto efficiency} and \emph{symmetry}. In addition, we have proposed a swift data collection mechanism and a penalty scheme to catalyze the process of achieving the target amount of information quickly, by penalizing the free riders who do not cooperate with their federation's best interest.
Our study has also disclosed new problems that need further investigation. Firstly, we assume that the data providers keep the promise of the sealed deal, but, in reality, they can always add more noise than what they promised. We plan to study how to ensure that data providers uphold their data trading contracts. Another direction for future work is to consider differential privacy mechanisms other than $k$RR.
\bibliographystyle{splncs04}
\section{Introduction}
\label{intro}
Swarm intelligence has shed light on the collective behavior of
autonomous entities following simple rules,
such as ants, boids, and particles.
The notion has been applied to collections of robots, and
swarm robotics has attracted much attention in the past two decades.
Each autonomous element of the system is called
a robot, module, agent, process, or sensor,
and a variety of swarm robot systems have been investigated,
such as the \emph{autonomous mobile robot system}~\cite{SY99},
the \emph{population protocol model}~\cite{AADFP04},
the \emph{programmable particles}~\cite{DDGRSS14},
\emph{Kilobot}~\cite{RCN14},
and \emph{3D Catoms}~\cite{TPB19}.
Dumitrescu et al. considered the \emph{metamorphic robotic system} (MRS),
which consists of a collection of \emph{modules} in the infinite 2D square grid~\cite{DSY04a,DSY04b}.
The modules are \emph{anonymous}, i.e., they are indistinguishable.
They are \emph{autonomous} and \emph{uniform}, i.e.,
each module autonomously decides its movement by a common algorithm.
They are \emph{oblivious}, i.e., each module has no memory of the past.
Thus, each module decides its behavior by
observing other modules in nearby cells.
Each module can perform a \emph{sliding} to a side-adjacent cell
and a \emph{rotation} by $90$ degrees around a cell.
The modules must keep \emph{connectivity}, which is defined by
side-adjacency of the cells occupied by the modules.
The authors considered \emph{reconfiguration} of the MRS,
which requires the MRS to change its initial shape
to a specified final shape~\cite{DSY04a}.
They showed that any horizontally convex connected initial shape of an MRS
can be transformed into any convex final shape via a straight chain shape.
Later, Dumitrescu and Pach showed that any connected initial shape
can be transformed into any connected final shape via a
straight chain shape~\cite{DP06}.
In other words, the MRS has the ability of ``universal reconfiguration.''
Reconfiguration can generate dynamic behavior of the MRS.
Dumitrescu et al. demonstrated that the MRS can move forward
by repeating a reconfiguration~\cite{DSY04b}.
They showed a reconfiguration that realizes the fastest \emph{locomotion}.
Doi et al. pointed out that the oblivious modules can use the
shape of the MRS as global memory and that
the MRS can solve more complicated problems
as the number of modules increases.
They investigated \emph{search} in a finite 2D square grid,
that requires the MRS to find a target cell in a finite
rectangle \emph{field}~\cite{DYKY18}.
Each module does not know the position of the target cell or
the initial configuration
of the MRS.
They showed that the MRS can find the target cell
by sweeping each row of the field without such a priori knowledge.
Specifically, if the modules agree on the cardinal directions
(i.e., north, south, east and west),
three modules are necessary and sufficient,
otherwise five modules are necessary and sufficient.
Nakamura et al. considered \emph{evacuation} of the MRS from a
finite rectangular field in the 2D square grid~\cite{NSY20}.
There is a hole (i.e., two side-adjacent cells)
on the wall of the field,
and the MRS is required to exit from it from an arbitrary
initial position and arbitrary initial shape.
They showed that two modules are necessary and sufficient
when the modules agree on the cardinal directions,
otherwise four modules are necessary and sufficient.
In this paper, we investigate the effect of common knowledge
on search by the MRS in the 3D cubic grid.
We consider the following three cases:
$(i)$ modules equipped with a common \emph{compass}
(i.e., they agree on the directions and orientations of the $x$, $y$, and $z$ axes),
$(ii)$ modules equipped with a common vertical axis
(i.e., they agree on the direction and orientation of the $z$ axis),
and
$(iii)$ modules not equipped with a common compass
(i.e., they have no agreement on directions and orientations).
We demonstrate that
three modules are necessary and sufficient when
the modules are equipped with a common compass
and five modules are necessary and sufficient
when the modules are not equipped with a common compass.
The numbers of sufficient modules in the 3D cubic grid
are the same as those in the 2D square grid~\cite{DYKY18},
because the MRS has more states in the 3D cubic grid
than in the 2D square grid.
For the intermediate case with a common vertical axis,
we demonstrate that four modules are necessary and sufficient.
Thus, our results in the 3D cubic grid
show a smooth trade-off between
the computational power of the MRS and
the common knowledge among modules, which the previous results
in the 2D square grid could not reveal.
We present search algorithms for these three settings
and
show the necessity by examining the
state transition graph of the MRS.
\noindent{\bf Related work.}~Reconfiguration of swarm robot systems
has been discussed for the MRS~\cite{DP06,DSY04a},
autonomous mobile robots~\cite{FYOKY15,SY99}, and
the programmable particles~\cite{DGRSS15,DFSVY20}.
Michail et al. considered the \emph{programmable matter system},
which is similar to the MRS,
and investigated reconfiguration by rotations only and
by rotations and slidings~\cite{MSS19}.
They showed that the combination of rotations and slidings
guarantees universal reconfiguration,
while rotations alone cannot.
They also presented an $\mathrm{O}(n^2)$-time reconfiguration algorithm
by rotations and slidings.
Almethen et al. considered reconfiguration by \emph{line-pushing},
where each module is equipped with the ability to push a line of
modules~\cite{AMP20a}.
They presented an $\mathrm{O}(n \log n)$-time universal reconfiguration algorithm
that does not guarantee connectivity of intermediate shapes,
and an $\mathrm{O}(n \sqrt{n})$-time reconfiguration algorithm
that transforms a diagonal line into a straight chain while
preserving connectivity.
The same authors later showed that their programmable matter system
has the ability of universal reconfiguration,
giving an $\mathrm{O}(n \sqrt{n})$-time reconfiguration algorithm
together with an $\Omega(n \log n)$ lower bound~\cite{AMP20b}.
\section{Preliminary}
\label{junbi}
We consider a \emph{metamorphic robotic system} (MRS) in a finite 3D cubic grid.
A metamorphic robotic system consists of
a collection of anonymous (i.e., indistinguishable) \emph{modules}.
A module can observe the positions of other modules in nearby cells,
computes its next movement with a common algorithm,
and performs the movement.
Each cell of the 3D cubic grid can contain at most one module at a time.
Cell $(x,y,z)$ is the cell surrounded by grid points
$(x, y, z)$, $(x+1, y, z)$, $(x, y+1, z)$, $(x, y, z+1)$,
$(x+1, y+1, z)$, $(x+1, y, z+1)$, $(x, y+1, z+1)$, and $(x+1, y+1, z+1)$.
Cells $(x+1, y, z)$, $(x, y+1, z)$, $(x, y, z+1)$,
$(x-1, y, z)$, $(x, y-1, z)$, $(x, y, z-1)$ are \emph{side-adjacent} to
cell $(x, y, z)$.
We consider the positive $x$ direction as East,
the positive $y$ direction as North,
and the positive $z$ direction as Up.
The MRS moves in a finite \emph{field},
which is a cuboid of width $w$, depth $d$ and height $h$
with its two diagonal cells being $(0,0,0)$ and $(w-1, d-1, h-1)$.
We consider two types of \emph{planes};
the first type is a set of cells forming a plane
perpendicular to one of the $x$, $y$, and $z$ axes.
The second type is a set of cells parallel to one of the
$x$, $y$, and $z$ axes and diagonal to the remaining two axes.
For example, $\{(x,y,z) | y = s\} $ for some $s \in \mathbb{Z}$ is a
plane of the first type
and $\{(x, y, z) | x+y = s'\}$ for some $s' \in \mathbb{Z}$ is
a plane of the second type.
A \emph{line} of cells is a set of cells forming a horizontal or vertical line
on a plane.
For example, $\{(x,y,z) | y = u, z = v\} $
for some $u,v \in \mathbb{Z}$ is a line
and $\{(x,y,z) | x+y = u', z = v'\}$
for some $u', v' \in \mathbb{Z}$ is a line.
The field is surrounded by six planes, which we call \emph{walls}.
More precisely, the walls are
$\{(x, y, z) | x = -1\}$ (the West wall),
$\{(x, y, z) | y = -1\}$ (the South wall),
$\{(x, y, z) | z = -1\}$ (the Bottom wall),
$\{(x, y, z) | x = w\}$ (the East wall),
$\{(x, y, z) | y = d\}$ (the North wall),
and $\{(x, y, z) | z = h\}$ (the Top wall).
All modules synchronously perform observation, computation, and movement at
each discrete time step $t=0, 1, 2, \ldots$.
A \emph{configuration} of the MRS is the set of cells occupied by the modules.
We say two modules are \emph{side-adjacent} if they are in the two side-adjacent cells.
We also say that a module $m$ is side-adjacent to cell $c$ if the cell occupied by $m$ is
side-adjacent to $c$.
Given a configuration of the MRS, consider a graph where
each vertex corresponds to a module and there is an edge between two vertices
if the corresponding modules are side-adjacent.
If this graph is connected, we say the MRS is \emph{connected}.
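The connectivity of a configuration can be checked by a breadth-first search over the occupied cells; the following Python sketch (our illustration, not part of the model) returns whether the adjacency graph defined above is connected:

```python
from collections import deque

def is_connected(configuration):
    """BFS over side-adjacent occupied cells; True iff the MRS is connected."""
    cells = set(configuration)
    if not cells:
        return True
    start = next(iter(cells))
    seen = {start}
    queue = deque([start])
    while queue:
        x, y, z = queue.popleft()
        for n in ((x+1, y, z), (x-1, y, z), (x, y+1, z),
                  (x, y-1, z), (x, y, z+1), (x, y, z-1)):
            if n in cells and n not in seen:
                seen.add(n)
                queue.append(n)
    return len(seen) == len(cells)
```

Note that diagonal contact does not count: two modules touching only along an edge or a corner are disconnected.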
A module can perform two types of movements, \emph{sliding} and \emph{rotation}.
\begin{enumerate}
\item Sliding:
When two modules $m_i$ and $m_j$ are side-adjacent,
another module $m_k$ can move from a cell side-adjacent to $m_i$
to an empty cell side-adjacent to $m_j$ by sliding along $m_i$ and $m_j$.
During the movement, $m_i$ and $m_j$ cannot perform any movement.
See Figure~\ref{Module-Move} as an example.
\item Rotation:
When two modules $m_i$ and $m_j$ are side-adjacent,
$m_i$ can move to a cell side-adjacent to $m_j$ by rotating by $\pi/2$
around $m_j$ in some direction.
Of the six cells side-adjacent to $m_j$, $m_i$ can reach four of them
by rotation.
During the movement, $m_j$ cannot move and the cells that $m_i$ passes must be empty.
See Figure~\ref{Module-Move} as an example.
\end{enumerate}
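The four cells reachable by a rotation are exactly the side-neighbors of $m_j$ orthogonal to the $m_i$-$m_j$ axis, as the following Python sketch makes explicit (helper names are ours):

```python
def rotation_targets(mi, mj):
    """Cells that m_i may reach by a pi/2 rotation around side-adjacent m_j:
    the four side-neighbors of m_j orthogonal to the m_i - m_j axis."""
    d = tuple(a - b for a, b in zip(mi, mj))
    assert sum(abs(c) for c in d) == 1  # m_i must be side-adjacent to m_j
    targets = []
    for n in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
              (0, -1, 0), (0, 0, 1), (0, 0, -1)):
        # exclude m_i's own cell and the cell opposite it (a pi rotation)
        if n != d and n != tuple(-c for c in d):
            targets.append(tuple(a + b for a, b in zip(mj, n)))
    return targets
```

The two excluded neighbors are $m_i$'s current cell and the antipodal cell, which a single $\pi/2$ rotation cannot reach.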
Note that several modules can move at the same time as long as their moving tracks do not overlap.
The modules must keep two types of connectivity at each time step.
\begin{enumerate}
\item At the beginning of each time step,
the modules must be connected.
\item At each time step, the modules that do not move must be connected.
\end{enumerate}
\begin{figure}
\centering
\includegraphics[keepaspectratio, scale=0.25]
{module-move2.png}
\caption{Sliding and rotation. The red modules perform movement.}
\label{Module-Move}
\end{figure}
We assume that each module obtains the result of an observation and moves to
the next cell in its \emph{local $x$-$y$-$z$ coordinate system}.
We assume that the origin of the local coordinate system of a module
is its current cell and all local coordinate systems are right-handed.
In this paper, we consider three types of MRSs with different degrees of agreement
on the coordinate system.
When all modules agree on the directions and orientations of $x$, $y$, and $z$ axes,
we say the MRS is equipped with a \emph{common compass}.
When the modules do not agree on the directions or orientations of the $x$, $y$, and $z$ axes,
we say the MRS is not equipped with a common compass;
in this case, the local coordinate systems are not consistent among the modules.
As an intermediate model, we consider modules that
agree on the direction and orientation of the vertical axis.
In this case, we say the MRS is equipped with a \emph{common vertical axis}.
The \emph{state} of the MRS is its local shape.
If the modules are equipped with a common compass or a common vertical axis,
the state of the MRS contains common directions and orientations.
Otherwise, the state of the MRS does not contain any directions or orientations.
The \emph{search problem} requires the MRS to find a \emph{target} placed at one cell
in the field from a given initial configuration.
We call the cell containing the target the \emph{target cell}.
The MRS \emph{finds} a target when one of its modules enters the target cell.
\begin{figure}
\centering
\includegraphics[keepaspectratio, scale=0.35]
{modulenotation.png}
\caption{Example of an observation at the red module with the local coordinate system. }
\label{Modules}
\end{figure}
When a module executes the common algorithm,
the input is the \emph{observation} of cells in a cube of size
$(2k+1) \times (2k+1) \times (2k+1)$ centered at the module
(i.e., its \emph{$k$-neighborhood}).
It detects whether each cell in its $k$-neighborhood
is a wall cell or not and
whether it is occupied by a module or not.
Let $C_m$ be the set of cells occupied by some modules,
$C_w$ be the set of wall cells,
and $C_e$ be the set of the remaining (i.e., empty) cells
of an observation.
More precisely, each set of cells is a set of coordinates of the
corresponding cells observed in the local coordinate system of the module.
For example, in Figure~\ref{Modules},
the result of an observation at the red module is
$C_m = \{(0, -1, 0), (1, -1, 0)\}$, $C_w = \emptyset$, and
$C_e $ is the remaining cells.
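The classification of a $k$-neighborhood into $C_m$, $C_w$, and $C_e$ can be sketched as follows (a Python illustration; we assume the observing module's own cell is excluded from the observation, and \texttt{modules} and \texttt{walls} are sets of global coordinates):

```python
def observe(k, module, modules, walls):
    """Classify each cell of the (2k+1)^3 neighborhood of `module` into
    occupied (C_m), wall (C_w), or empty (C_e), in local coordinates."""
    C_m, C_w, C_e = set(), set(), set()
    mx, my, mz = module
    for dx in range(-k, k + 1):
        for dy in range(-k, k + 1):
            for dz in range(-k, k + 1):
                if (dx, dy, dz) == (0, 0, 0):
                    continue  # the observing module's own cell (assumption)
                cell = (mx + dx, my + dy, mz + dz)
                if cell in modules:
                    C_m.add((dx, dy, dz))
                elif cell in walls:
                    C_w.add((dx, dy, dz))
                else:
                    C_e.add((dx, dy, dz))
    return C_m, C_w, C_e
```

With the configuration of Figure~\ref{Modules}, two modules at local positions $(0,-1,0)$ and $(1,-1,0)$ and no walls in range, this yields exactly the sets stated above.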
When a common algorithm outputs coordinate $(a,b,c)$ at a module,
the module moves to $(a, b, c)$ in its local coordinate system.
When we describe an algorithm,
the elements of $C_m$, $C_w$, and $C_e$ are specified in a ``canonical coordinate system,'' i.e.,
the global coordinate system.
When the modules are equipped with a common compass, without loss of generality,
we assume that the common compass is identical to the global coordinate system.
Thus, each module obtains its movement by checking
$C_m$, $C_w$, and $C_e$.
When the modules agree on a common vertical axis,
without loss of generality, we assume that
the vertical axis is identical to the $z$ axis of the global coordinate system.
Each module computes its movement by
rotating the current observation by $\pi/2$, $\pi$, $3\pi/2$, and $2\pi$ around
the common vertical axis and comparing the results with the conditions on
$C_m$, $C_w$, and $C_e$.
It selects an output that specifies a movement;
if there are multiple such outputs,
it nondeterministically selects one of them.
When the modules are not equipped with a common compass,
a module checks the $24$ rotations of the current observation and
selects an output in the same way as in the above case.
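The matching under a common vertical axis can be sketched in Python as follows. This is our interpretation of the rule format used later in the tables: a rule's $C_m$ must match the rotated observation exactly, its listed $C_w$ and $C_e$ cells must be contained in the rotated observation, and the output is rotated back into the module's frame (all names are illustrative):

```python
def rot_z(cells, q):
    """Rotate local coordinates by q quarter turns (pi/2 each) about the z axis."""
    out = set()
    for x, y, z in cells:
        for _ in range(q % 4):
            x, y = -y, x  # one counterclockwise quarter turn
        out.add((x, y, z))
    return out

def matches(rule, obs):
    """Return the movement in the module's local frame if `rule` fires on
    `obs` = (C_m, C_w, C_e) under some rotation about the vertical axis."""
    C_m, C_w, C_e = obs
    for q in range(4):
        if (rule['C_m'] == rot_z(C_m, q)
                and rule['C_w'] <= rot_z(C_w, q)
                and rule['C_e'] <= rot_z(C_e, q)):
            # undo the rotation to express the output in the module's frame
            return next(iter(rot_z({rule['out']}, (4 - q) % 4)))
    return None
```

For example, a rule written for modules above to the east fires on an observation in which they lie above to the north, and the returned movement points north rather than east.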
\section{Search algorithms for MRSs in a finite 3D cubic grid}
In this section, we present search algorithms that use a small number of modules.
Our proposed algorithms are based on a common strategy.
Since the MRS does not know the position of the target cell,
we make the MRS visit all cells of the field.
The proposed algorithms slice the field into planes
and the MRS visits the cells of each plane
by sweeping each row or column of the plane.
Thus, the proposed algorithms are extensions of the search algorithms
by Doi et al. in a finite 2D cubic grid~\cite{DYKY18}.
\subsection{Search with a common compass}
\label{ExpWithCompass}
We prove the following theorem by presenting a search algorithm for the MRS of
three modules equipped with a common compass.
\begin{theorem}
\label{theorem: 3 modules can Exprole 3D space with compass}
The MRS of three modules equipped with a common compass can solve the search problem
in a finite 3D cubic grid from any initial configuration.
\end{theorem}
The proposed algorithm considers each plane
$\{(x,y,z) | x+y=s\}$ for $s = 1, 2, \ldots$ and
the MRS moves along each line $\{(x,y,z) | x+y=s, z=t\} $
for $t= 0, 1, 2, \ldots$.
Figure~\ref{Explode-Move3} shows a moving track of the proposed algorithm.
The MRS searches the planes one by one until it reaches the northeasternmost plane.
Then, it moves along the edges of the field so that it returns to the
southwesternmost plane.
There, it starts searching the planes again so that it visits all cells of the field.
The MRS moves forward or turns by repeating a sequence of movements, which we call a \emph{move sequence}.
The proposed algorithm consists of the following move sequences.
\begin{itemize}
\item Move sequence $M_{NW}$ (Figure~\ref{3MoveToNorthwest}).
The blue module is in cell $(x,y,z)$ at the start.
By this move sequence,
the green module reaches cell $(x-1,y+1,z)$.
By repeating $M_{NW}$ $n$ times, one of the modules visits each cell $(x-k,y+k,z)$ for $0 \leq k \leq n$.
That is, the MRS visits all the cells of the horizontal line $\{(x,y,z)|x+y=s,z=t\}$.
\item Move sequence $M_{TurnNW}$ (Figure~\ref{3TurnOnTheNorthWestWall}).
By this move sequence,
the MRS changes its move sequence from $M_{NW}$ to $M_{SE}$.
\item Move sequence $M_{SE}$ (Figure~\ref{3MoveToSoutheast}).
The blue module is in cell $(x,y,z)$ at the start.
By this move sequence,
the green module reaches cell $(x+1,y-1,z)$.
By repeating $M_{SE}$ $n$ times, one of the modules visits each cell $(x+k,y-k,z)$ for $0 \leq k \leq n$.
That is, the MRS visits all the cells of the horizontal line $\{(x,y,z)|x+y=s,z=t\}$.
\item Move sequence $M_{TurnSE}$ (Figure~\ref{3TurnOnTheSouthEastWall}).
By this move sequence,
the MRS changes its move sequence from $M_{SE}$ to $M_{NW}$.
\item Move sequence $M_{T}$ (Figure~\ref{3MoveOnTheTopOfWall}).
By this move sequence,
the MRS changes its move sequence from $M_{SE}$ to $M_{D}$.
\item Move sequence $M_{D}$ (Figure~\ref{3DecentOnTheEastWall}).
The blue module is in cell $(x,y,z)$ at the start.
By this move sequence,
the green module reaches cell $(x,y,z-1)$.
By repeating $M_{D}$ $n$ times, one of the modules visits each cell $(x,y,z-k)$ for $0 \leq k \leq n$.
That is, the MRS visits all the cells of the vertical line $\{(x,y,z)|x=s,y=t\}$.
\item Move sequence $M_{B}$ (Figure~\ref{3LeaveTheBottomOfTheEastWall}).
By this move sequence,
the MRS changes its move sequence from $M_{D}$ to $M_{NW}$.
\item Move sequence $M_{NECorner}$ (Figure~\ref{3MoveOnTheNortheastCorner}).
By this move sequence,
the MRS changes its move sequence from $M_{D}$ to $M_{WallBottom}$.
\item Move sequence $M_{WallBottom}$ (Figure~\ref{3MoveAlongTheBottomOfTheWall}).
The blue module is in cell $(x,y,0)$ at the start.
By this move sequence,
the green module reaches cell $(x,y-1,0)$ along the edge.
By repeating $M_{WallBottom}$ $n$ times, one of the modules visits each cell $(x,y-k,0)$ for $0 \leq k \leq n$.
That is, the MRS visits all the cells of the line $\{(x,y,z)|x=s,z=0\}$.
\item Move sequence $M_{SWCorner}$ (Figure~\ref{3MoveOnTheSouthwestCorner}).
By this move sequence,
the MRS changes its move sequence from $M_{WallBottom}$ to $M_{Up}$.
\item Move sequence $M_{Up}$ (Figure~\ref{3RiseOnTheSouthwestCorner}).
The blue module is in cell $(0,0,z)$ at the start.
By this move sequence,
the green module reaches cell $(0,0,z+1)$.
By repeating $M_{Up}$ $n$ times, one of the modules visits each cell $(0,0,k)$ for $0 \leq k \leq n$.
That is, the MRS visits all the cells of the vertical line $\{(x,y,z)|x=0,y=0\}$.
\end{itemize}
\begin{figure}[ht]
\centering
\includegraphics[keepaspectratio, scale=0.3]
{explodeMove3.png}
\caption{Search by three modules equipped with a common compass}
\label{Explode-Move3}
\end{figure}
\begin{figure}[p]
\centering
\includegraphics[keepaspectratio, height=2.3cm]
{3MoveToNorthwest.png}
\caption{Move to northwest. In each figure, the red module moves. When the blue module is in cell $(x,y,z)$ at the start, after this move sequence, the green module reaches cell $(x-1, y+1, z)$. }
\label{3MoveToNorthwest}
\end{figure}
\begin{figure}[p]
\centering
\includegraphics[keepaspectratio, height=2.5cm]
{3TurnOnTheNorthWall.png}
\caption{Turn on the north or west wall. In the first figure, the red module moves. }
\label{3TurnOnTheNorthWestWall}
\end{figure}
\begin{figure}[p]
\centering
\includegraphics[keepaspectratio, height=2.3cm]
{3MoveToSoutheast.png}
\caption{Move to southeast. In each figure, the red module moves. When the blue module is in cell $(x,y,z)$ at the start, after this move sequence, the green module reaches cell
$(x+1,y-1,z)$. }
\label{3MoveToSoutheast}
\end{figure}
\begin{figure}[p]
\centering
\includegraphics[keepaspectratio, height=2.5cm]
{3TurnOnTheEastWall.png}
\caption{Turn on the south or east wall. In each figure, the red module moves.}
\label{3TurnOnTheSouthEastWall}
\end{figure}
\begin{figure}[p]
\centering
\includegraphics[keepaspectratio, height=2.5cm]
{3MoveOnTheTopOfWall.png}
\caption{Move around the top of the north or east wall. In each figure, the red module moves.}
\label{3MoveOnTheTopOfWall}
\end{figure}
\begin{figure}[p]
\centering
\includegraphics[keepaspectratio, height=2.5cm]
{3DecentOnTheEastWall.png}
\caption{Move down on the north or east wall.
In each figure, the red module moves. When the blue module is in cell $(x,y,z)$ at the start, after this move sequence, the green module reaches cell $(x, y, z-1)$.}
\label{3DecentOnTheEastWall}
\end{figure}
\begin{figure}[p]
\centering
\includegraphics[keepaspectratio, height=2.5cm]
{3LeaveTheBottomOfTheEastWall.png}
\caption{Leaving the bottom of the north or east wall. In each figure, the red module moves.}
\label{3LeaveTheBottomOfTheEastWall}
\end{figure}
\begin{figure}[p]
\centering
\includegraphics[keepaspectratio, height=2.5cm]
{3MoveOnTheNortheastCorner.png}
\caption{Move on the northeast corner. In the first figure, the red module moves.}
\label{3MoveOnTheNortheastCorner}
\end{figure}
\begin{figure}[hp]
\centering
\includegraphics[keepaspectratio, height=2.5cm]
{3MoveAlongTheBottomOfTheWall.png}
\caption{Move along the bottom of the wall.
In each figure, the red module moves. When the blue module is in cell $(x,y,0)$ at the start, after this move sequence, the green module reaches cell $(x, y-1, 0)$.}
\label{3MoveAlongTheBottomOfTheWall}
\end{figure}
\begin{figure}[hp]
\centering
\includegraphics[keepaspectratio, height=2.5cm]
{3MoveOnTheSouthWestCorner.png}
\caption{Move on the southwest corner. In the first figure, the red module moves.}
\label{3MoveOnTheSouthwestCorner}
\end{figure}
\begin{figure}[hp]
\centering
\includegraphics[keepaspectratio, height=2.5cm]
{3RiseOnTheSouthWestCorner.png}
\caption{Move up on the southwest corner. In each figure, the red module moves. When the blue module is in cell $(0,0,z)$ at the start, after this move sequence, the green module reaches cell $(0, 0, z+1)$.}
\label{3RiseOnTheSouthwestCorner}
\end{figure}
The proposed algorithm consists of the following six steps.
\begin{description}
\item[Step 1] The MRS repeats $M_{NW}$,
which makes it move in the northwest direction
along a horizontal line on a plane $\{(x,y,z) | x + y = s\}$ for some $s$.
\item[Step 2] When the MRS reaches the north or west wall,
it changes the moving direction to southeast
by $M_{TurnNW}$.
\item[Step 3] The MRS repeats $M_{SE}$,
which makes it move in the southeast direction
along a horizontal line on $\{(x,y,z) | x + y = s\}$.
This movement makes the MRS move along the same horizontal line as in Step $1$.
\item[Step 4] If the MRS is adjacent to the top wall,
it moves to the plane $\{(x,y,z) | x + y = s+1\}$
by $M_{T}$.
Then, it repeats $M_{D}$ shown in Figure~\ref{3DecentOnTheEastWall},
which makes it move down along the south or east wall
until it reaches the bottom wall.
Then, it leaves the wall by $M_{B}$.
It starts searching the new plane by repeating Steps $1$, $2$, $3$, and $4$.
Otherwise, it proceeds to Step $5$.
\item[Step 5] When the MRS reaches the south or east wall,
it moves to the row above by $M_{TurnSE}$.
Then, it repeats Steps $1$, $2$, and $3$ so that it visits all cells on the new horizontal line.
\item[Step 6] When the MRS reaches the northeast corner of the top wall,
the algorithm sends the MRS back to the southwest corner,
where the MRS restarts the search by repeating Steps $1$ to $5$.
The MRS moves along the northeast edge by repeating $M_D$
until it reaches the northeast corner of the bottom wall.
Then, it moves along the east edge of the bottom wall
by $M_{NECorner}$ and repeated applications of $M_{WallBottom}$
until it reaches the southeast corner of the bottom wall.
It moves along the south edge of the bottom wall
by repeating $M_{WallBottom}$
until it reaches the southwest corner of the bottom wall.
Finally, it moves along the southwest edge
by $M_{SWCorner}$ and repeated applications of $M_{Up}$
until it reaches the southwest corner of the top wall.
Then, the MRS returns to Step 4.
\end{description}
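As a sanity check of the coverage argument, the cell-visiting order induced by Steps 1 to 5 can be sketched in Python. The sketch ignores the MRS's own shape, the retracing of each line in both directions, and the return trip of Step 6; the function name \texttt{sweep\_order} is ours:

```python
def sweep_order(w, d, h):
    """Cell-visiting order of the diagonal sweep: for each plane x + y = s
    (southwest to northeast) and each height z = t (bottom to top), visit the
    cells of the horizontal line {(x, y, z) | x + y = s, z = t}."""
    visited = []
    for s in range(w + d - 1):
        for t in range(h):
            for x in range(w):
                y = s - x
                if 0 <= y < d:
                    visited.append((x, y, t))
    return visited
```

Enumerating the planes $x + y = s$ line by line in this way touches every cell of a $w \times d \times h$ field exactly once.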
Table~\ref{tab_3module} shows the inputs and outputs of the proposed algorithm.
Each entry specifies a part of the input (in particular, of $C_w$ and $C_e$),
and the MRS does not care whether cells other than those specified are walls or not.
When the MRS is on a plane $\{(x, y, z) | x+y=s\}$ for some $s$,
it visits all cells in the horizontal line $\{(x,y,z)|x+y=s,z=t\}$ for some $t$
by repeating Steps $1$, $2$, and $3$.
Then, it proceeds to the horizontal line $\{(x,y,z)|x+y=s,z=t+1\}$ by Step $5$.
By repeating Steps $1$, $2$, $3$, and $5$,
it eventually reaches the top wall.
Then, it starts searching the cells in $\{(x,y,z)|x+y=s+1\}$ by Steps $4$ and $6$.
Repeating the above movement, the MRS eventually reaches the northeast corner of the top wall.
At this point, it may not yet have visited the cells near the southwest corner.
Step $6$ enables the MRS to visit these cells
by moving it to the southwest corner of the top wall and
starting Step $1$ again.
\begin{figure}
\centering
\includegraphics[keepaspectratio, scale=0.35]
{3ModulesShapes.png}
\caption{States of the MRS of three modules equipped with a common compass}
\label{3ModulesShapes}
\end{figure}
There exist initial configurations that satisfy no condition of Table~\ref{tab_3module}.
We add exceptional transformation rules for such initial configurations.
Figure~\ref{3ModulesShapes} shows all states of three modules equipped with a common compass.
Observe that any configuration can be transformed into any other in one time step.
(Note that more than one module can move in one time step.)
Hence, even if the initial state of the MRS does not match any entry of Table~\ref{tab_3module},
it can be transformed into one that does,
and the MRS can start the search from any initial configuration.
\begin{longtable}{|l|c|c|c|c|}
\caption{Search algorithm for the MRS of three modules equipped with a common compass}
\label{tab_3module}
\endhead
\endfoot
\cline{1-5} & $C_m$ & $C_w$ & $C_e$ &Output \\
\cline{1-5} $M_{SE}$ & $(0,0,1)$,$(1,0,1)$ & & $(2,0,0)$ & $(1,0,0)$ \\
\cline{2-5} & $(1,0,0)$,$(1,0,-1)$ & & & $(1,-1,0)$ \\
\cline{2-5} & $(0,0,1)$,$(0,-1,1)$ & & $(0,-2,0)$ & $(0,-1,0)$ \\
\cline{2-5} & $(0,-1,0)$,$(0,-1,-1)$ & & & $(1,-1,0)$ \\
\cline{1-5} $M_{TurnSE}$ & $(0,0,1)$,$(1,0,1)$ & $(2,0,0)$ & $(0,-1,0)$ & $(-1,0,1)$ \\
\cline{2-5} & $(1,0,0)$,$(2,0,0)$ & & $(0,0,-1)$, & $(1,0,1)$ \\
& & & $(0,0,1)$ &\\
\cline{2-5} & $(0,0,1)$,$(0,-1,1)$ & $(0,-2,0)$ & & $(0,1,1)$ \\
\cline{2-5} & $(0,-1,0)$,$(0,-2,0)$ & $(0,0,-1)$ & & $(0,-1,1)$ \\
\cline{1-5} $M_{NW}$ & $(-1,0,0)$,$(-1,0,1)$ & & $(0,-1,0)$ & $(-1,1,0)$ \\
\cline{2-5} & $(0,0,-1)$,$(0,1,-1)$ & & $(0,2,0)$ & $(0,1,0)$ \\
\cline{2-5} & $(0,1,0)$,$(0,1,1)$ & & $(1,0,0)$ & $(-1,1,0)$ \\
\cline{2-5} & $(0,0,-1)$,$(-1,0,-1)$ & & $(-2,0,0)$ & $(-1,0,0)$ \\
\cline{1-5} $M_{TurnNW}$ & $(0,-1,0)$,$(0,-1,1)$ & $(0,1,0)$ & & $(0,0,1)$ \\
\cline{2-5} & $(1,0,0)$,$(1,0,1)$ & $(-1,0,0)$ & & $(0,0,1)$ \\
\cline{1-5} $M_{T}$ & $(0,0,1)$,$(1,0,1)$ & $(2,0,0)$,$(0,0,2)$ & & $(1,0,0)$ \\
\cline{2-5} & $(1,0,0)$,$(1,0,-1)$ & $(2,0,0)$,$(0,0,1)$& & $(1,1,0)$ \\
\cline{2-5} & $(0,0,1)$,$(0,1,1)$ & $(1,0,0)$,$(0,0,2)$ & & $(0,1,0)$ \\
\cline{2-5} & $(0,0,1)$,$(0,-1,1)$ & $(0,0,2)$,$(0,-2,0)$ & & $(0,-1,0)$ \\
\cline{2-5} & $(0,-1,0)$,$(0,-1,-1)$ & $(0,0,1)$,$(0,-2,0)$ & & $(1,-1,0)$ \\
\cline{2-5} & $(0,0,1)$,$(1,0,1)$ & $(0,0,1)$,$(0,-1,0)$ & & $(1,0,0)$ \\
\cline{1-5} $M_{D}$ & $(0,1,0)$,$(0,1,-1)$ & $(1,0,0)$ & & $(0,0,-1)$ \\
\cline{2-5} & $(0,1,0)$,$(0,1,1)$ & $(1,0,0)$ & $(0,0,-1)$ & $(0,1,-1)$ \\
\cline{2-5} & $(0,0,-1)$,$(0,0,-2)$ & $(1,0,0)$ & & $(0,-1,-1)$ \\
\cline{2-5} & $(1,0,0)$,$(1,0,-1)$ & $(0,-1,0)$ & & $(0,0,-1)$ \\
\cline{2-5} & $(1,0,0)$,$(1,0,1)$ & $(0,-1,0)$ & $(0,0,-1)$ & $(1,0,-1)$ \\
\cline{2-5} & $(0,0,-1)$,$(0,0,-2)$ & $(0,-1,0)$ & & $(-1,0,-1)$ \\
\cline{1-5} $M_{B}$ & $(0,1,0)$,$(0,1,1)$ & $(1,0,0)$,$(0,0,-1)$ & $(0,-1,0)$, & $(-1,1,0)$ \\
& & & $(0,2,0)$ & \\
\cline{2-5} & $(0,0,-1)$,$(-1,0,-1)$ & $(1,0,0)$,$(0,0,-2)$ & $(0,-1,0)$ & $(-1,0,0)$ \\
\cline{2-5} & $(1,0,0)$,$(1,0,1)$ & $(0,-1,0)$,$(0,0,-1)$ & & $(1,1,0)$ \\
\cline{2-5} & $(0,0,-1)$,$(0,1,-1)$ & $(0,-1,0)$,$(0,0,-2)$ & & $(0,1,0)$ \\
\cline{1-5} $M_{NECorner}$ & $(0,0,-1)$,$(0,-1,-1)$ & $(0,1,0)$,$(1,0,0)$,& & $(-1,0,-1)$ \\
& & $(0,0,-2)$ & & \\
\cline{1-5} $M_{WallBottom}$ & $(1,0,0)$,$(1,-1,0)$ & $(0,0,-1)$ & & $(0,-1,0)$ \\
\cline{2-5} & $(1,0,0)$,$(1,1,0)$ & $(0,0,-1)$ & $(0,-1,0)$ & $(1,-1,0)$ \\
\cline{2-5} & $(0,-1,0)$,$(0,-2,0)$ & $(0,0,-1)$ & & $(-1,-1,0)$ \\
\cline{2-5} & $(0,-1,0)$,$(-1,-1,0)$ & $(0,0,-1)$,$(0,-2,0)$ & & $(-1,0,0)$ \\
\cline{2-5} & $(0,-1,0)$,$(1,-1,0)$ & $(0,0,-1)$ & $(-1,0,0)$ & $(-1,-1,0)$ \\
\cline{2-5} & $(-1,0,0)$,$(-2,0,0)$ & $(0,0,-1)$ & & $(-1,1,0)$ \\
\cline{1-5} $M_{SWCorner}$ & $(-1,0,0)$,$(-1,1,0)$ & $(0,0,-1)$,$(-2,0,0)$,& & $(-1,0,1)$ \\
& & $(0,-1,0)$& & \\
\cline{1-5} $M_{Up}$ & $(0,-1,0)$,$(0,-1,1)$ & $(-1,0,0)$,$(0,-2,0)$ & & $(0,0,1)$ \\
\cline{2-5} & $(0,-1,0)$,$(0,-1,-1)$ & $(-1,0,0)$,$(0,-2,0)$ & $(0,0,1)$ & $(0,-1,1)$ \\
\cline{2-5} & $(0,0,1)$,$(0,0,2)$ & $(-1,0,0)$,$(0,-1,0)$ & & $(0,1,1)$ \\
\cline{1-5} & \multicolumn{1}{c|}{Otherwise} & \multicolumn{1}{c|}{Otherwise} & \multicolumn{1}{c|}{Otherwise} & $(0,0,0)$ \\
\cline{1-5}
\end{longtable}
\subsection{Search with a common vertical axis}
We prove the following theorem by presenting a search algorithm for the MRS of
four modules equipped with a common vertical axis.
\begin{theorem}
\label{theorem: 4 modules can Exprole 3D space without horizontal compass}
The MRS of four modules equipped with a common vertical axis can solve the search problem
in a finite 3D cubic grid
if no pair of modules has an identical observation in the initial configuration.
\end{theorem}
We prove Theorem~\ref{theorem: 4 modules can Exprole 3D space without horizontal compass}
by a search algorithm.
The proposed algorithm considers each plane
$\{(x,y,z) | x=s\}$ for $s = 0, 1, 2, \ldots$ and
$\{(x,y,z) | y=s'\}$ for $s' = 0, 1, 2, \ldots$.
The MRS moves along each vertical line $\{(x,y,z) | x=s, y=t\}$ for $t=0,1,2,\ldots$
when it is on the plane $x=s$,
and along $\{(x,y,z) | x=t', y=s'\}$ for $t'=0,1,2,\ldots$ when it is on the plane $y=s'$.
Figure~\ref{Explode-Move4} shows an execution of the proposed algorithm.
The MRS continues to search each plane perpendicular to the $x$-axis
until it reaches the east wall.
Then, it starts to search each plane perpendicular to the $y$-axis.
The algorithm description is given in Table~\ref{tab_4module}.
\begin{figure}[htbp]
\centering
\includegraphics[keepaspectratio, width=\linewidth]
{explodeMove4.png}
\caption{Example of a search by four modules}
\label{Explode-Move4}
\end{figure}
The proposed algorithm consists of the following move sequences.
\begin{itemize}
\item Move sequence $M_{Down}$ (Figure~\ref{4MoveToDown}).
The blue module is in cell $(x,y,z)$ at the start.
By this move sequence,
the green module reaches cell $(x,y,z-1)$.
By repeating $M_{Down}$ $n$ times, one of the modules visits each cell $(x,y,z-k)$ for $0 \leq k \leq n$.
That is, the MRS visits all the cells of the vertical line $\{(x,y,z)|x=s,y=t\}$.
\item Move sequence $M_{Up}$ (Figure~\ref{4MoveToUp}).
The blue module is in cell $(x,y,z)$ at the start.
By this move sequence,
the green module reaches cell $(x,y,z+1)$.
By repeating $M_{Up}$ $n$ times, one of the modules visits each cell $(x,y,z+k)$ for $0 \leq k \leq n$.
That is, the MRS visits all the cells of the vertical line $\{(x,y,z)|x=s,y=t\}$.
\item Move sequence $M_{TurnUp}$ (Figure~\ref{4TurnOnTheUpWall}).
By this move sequence,
the MRS changes its move sequence from $M_{Up}$ to $M_{Down}$.
\item Move sequence $M_{TurnD}$ (Figure~\ref{4TurnOnTheDownWall}).
By this move sequence,
the MRS changes its move sequence from $M_{Down}$ to $M_{Up}$.
\item Move sequence $M_{B1}$ (Figure~\ref{4MoveOnTheBottomOfTheWall}).
By this move sequence,
the MRS changes its move sequence from $M_{Down}$ to $M_{B2}$.
\item Move sequence $M_{B2}$ (Figure~\ref{4MoveAlongTheDownWall}).
The blue module is in cell $(x,y,0)$ at the start.
By this move sequence,
the green module reaches cell $(x+1,y,0)$.
By repeating $M_{B2}$ $n$ times, one of the modules visits each cell $(x+k,y,0)$ for $0 \leq k \leq n$.
That is, the MRS visits all the cells of the horizontal line $\{(x,y,z)|y=s,z=0\}$.
\item Move sequence $M_{B3}$ (Figure~\ref{4MoveOnTheBottomOfTheWallToRise}).
By this move sequence,
the MRS changes its move sequence from $M_{B2}$ to $M_{Up}$.
\item Move sequence $M_{Corner}$ (Figure~\ref{4MoveOnTheBottomOfTheCorner}).
By this move sequence,
the MRS changes its move sequence from $M_{Down}$ to $M_{B1}$.
\end{itemize}
The proposed algorithm consists of the following six steps.
We use north, south, east, and west in the explanation;
however, the modules do not need to know these directions.
\begin{description}
\item[Step 1] The MRS repeats $M_{Down}$,
which makes it move downward
along a vertical line on a plane $\{(x,y,z) | y = s\}$ for some $s$.
\item[Step 2] When the MRS reaches the bottom wall,
it changes the moving direction to up
by $M_{TurnD}$.
\item[Step 3] The MRS repeats $M_{Up}$,
which makes it move upward.
This movement makes the MRS move along the same vertical line as in Step $1$.
\item[Step 4] If the MRS is adjacent to the bottom of the west wall,
it moves to the next plane $\{(x,y,z) | y = s-1\}$
by $M_{B1}$.
Then it moves along the bottom wall in the east direction
by repeating $M_{B2}$.
When the MRS reaches the east wall,
it performs $M_{B3}$.
Then it starts searching the next plane by Step $1$.
Otherwise, it proceeds to Step $5$.
\item [Step 5] When the MRS reaches the top wall,
it moves west by one row
by $M_{TurnUp}$.
Then it returns to Step $1$.
\item[Step 6] When the MRS reaches the southwest corner,
it performs $M_{Corner}$
and then performs Step $4$,
which makes it start searching the plane obtained by rotating the current
search plane by $90$ degrees.
Thus, the search planes are first perpendicular to the $x$ axis and
the MRS moves east.
Then, the search planes are perpendicular to the $y$ axis and the MRS moves north.
Third, the search planes are perpendicular to the $x$ axis and the MRS moves west.
Finally, the search planes are perpendicular to the $y$ axis and the MRS moves south.
\end{description}
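As a sanity check of Steps 1 to 5, the order in which the MRS covers one plane $\{(x,y,z) | y = s\}$ can be sketched in Python. This is our simplification: the upward pass of Step 3 retraces the line of the downward pass, so only the downward passes are listed, and we assume the sweep starts at the east wall (the name \texttt{sweep\_plane} is illustrative):

```python
def sweep_plane(w, h):
    """Downward passes of the vertical-line sweep of one plane y = s:
    lines x = w-1, ..., 0 (from the east wall westward), each line
    traversed from the top wall down to the bottom wall."""
    visited = []
    for x in reversed(range(w)):
        for z in reversed(range(h)):
            visited.append((x, z))
    return visited
```

Every $(x, z)$ cell of the plane appears exactly once, which is the coverage property used in the argument below.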
Depending on its initial state,
the MRS may start from the middle of the above track.
When the MRS is on a plane $\{(x, y, z) | y=s\}$ for some $s$,
it visits all cells on a vertical line $\{(x,y,z)|x=t,y=s\}$ by Steps $1$, $2$, and $3$.
Then, it proceeds to a vertical line $\{(x,y,z)|x=t-1,y=s\}$ by Step $5$.
By repeating Steps $1$, $2$, $3$, and $5$, it visits all cells on the plane
$\{(x,y,z) | y=s\}$.
By Step $4$, it moves to the bottom of the east wall in the next plane $y=s-1$,
and then it starts searching that plane.
By repeating Steps $1$ to $5$, it eventually reaches the southwest corner of the bottom wall.
It starts searching the vertical line $\{(x,y,z)|x=1,y=d-2\}$ by Step $6$.
\begin{figure}[htbp]
\centering
\includegraphics[keepaspectratio, height=2.5cm]
{4MoveToDown.png}
\caption{Move down.
In each figure, the red module moves. When the blue module is in cell $(x,y,z)$ at the start, after this move sequence, the green module reaches cell
$(x,y,z-1)$.}
\label{4MoveToDown}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[keepaspectratio, height=2.0cm]
{4MoveToUp.png}
\caption{Move up.
In each figure, the red module moves. When the blue module is in cell $(x,y,z)$ at the start, after this move sequence, the green module reaches cell
$(x,y,z+1)$. }
\label{4MoveToUp}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[keepaspectratio, height=2.5cm]
{4TurnOnTheUpWall.png}
\caption{Turn on the top wall. In each figure, the red modules move.}
\label{4TurnOnTheUpWall}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[keepaspectratio, height=2.5cm]
{4TurnOnTheDownWall.png}
\caption{Turn on the bottom wall. In the first figure, the red module moves.}
\label{4TurnOnTheDownWall}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[keepaspectratio, height=2.0cm]
{4MoveOnTheBottomOfTheWall.png}
\caption{Move on the bottom of the wall. In each figure, the red modules move.}
\label{4MoveOnTheBottomOfTheWall}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[keepaspectratio, height=2.5cm]
{4MoveAlongTheDownWall.png}
\caption{Move along the bottom wall.
In each figure, the red module moves. When the blue module is in cell $(x,y,0)$ at the start, after this move sequence, the green module reaches cell
$(x+1,y,0)$.}
\label{4MoveAlongTheDownWall}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[keepaspectratio, height=2.5cm]
{4MoveOnTheBottomOfTheWallToRise.png}
\caption{Move on the bottom of the wall to rise. In each figure, the red modules move.}
\label{4MoveOnTheBottomOfTheWallToRise}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[keepaspectratio, height=2.5cm]
{4MoveOnTheBottomOfTheCorner.png}
\caption{Move on the bottom corner. In each figure, the red module moves.}
\label{4MoveOnTheBottomOfTheCorner}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[keepaspectratio, scale=0.5]
{4ModuleShapes.png}
\caption{Configurations of the MRS of four modules equipped with a common vertical axis}
\label{4ModuleShapes}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[keepaspectratio, scale=0.5]
{4ModulesTransformation.png}
\caption{Transition graph of the MRS of four modules equipped with a common vertical axis}
\label{4ModuleTransitionGraph}
\end{figure}
\clearpage
\begin{figure}[htbp]
\centering
\includegraphics[keepaspectratio, height=2cm]
{ShapeLtransform.png}
\caption{Transformation of $S^4_{L3}$}
\label{Ltransform}
\end{figure}
Depending on the initial configuration, the search may not be possible,
and even when it is possible, the MRS cannot always start the search directly,
because some initial configurations satisfy no condition in Table~\ref{tab_4module}.
Figure~\ref{4ModuleShapes} shows all possible configurations of four modules equipped with a common vertical axis.
The configurations containing colored modules are symmetric, i.e.,
in each figure, the modules painted with the same color may
have identical observations and move symmetrically.
In such configurations, the MRS cannot start the search because these modules cannot move away from their positions.
Therefore, if some pair of modules has the same observation in the initial configuration,
the search is not possible.
The proposed algorithm guarantees that the MRS finds the target
when all modules have different observations in the initial configuration.
Figure~\ref{4ModuleTransitionGraph} shows that these initial configurations can be transformed into $S^4_{L3}$ by a sequence of transformations.
Even if a wall prevents some of these transformations,
the MRS can leave the wall because its state is asymmetric.
If the MRS satisfies the condition of the fourth movement of Table~\ref{tab_4module},
it can start the search directly.
Otherwise, the MRS cannot move because it touches a wall.
In this case, as shown in Figure~\ref{Ltransform}, the MRS in $S^4_{L3}$ on a wall
can change its direction by a single rotation,
and in the new configuration it can perform one of these movements.
By adding the above rules to the algorithm, the MRS can start the search.
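The admissibility condition of Theorem~\ref{theorem: 4 modules can Exprole 3D space without horizontal compass} can be sketched as a symmetry test (our illustration; walls are ignored): two modules may have identical observations precisely when some rotation about the common vertical axis maps one module's relative view of the others onto the other module's view.

```python
def rot_z(cells, q):
    """Rotate relative coordinates by q quarter turns about the vertical axis."""
    out = set()
    for x, y, z in cells:
        for _ in range(q % 4):
            x, y = -y, x
        out.add((x, y, z))
    return out

def has_symmetric_pair(modules):
    """True if two modules could have identical observations: some z-rotation
    maps one module's relative view of the others onto another module's view."""
    mods = list(modules)
    views = []
    for m in mods:
        rel = {tuple(a - b for a, b in zip(o, m)) for o in mods if o != m}
        views.append(rel)
    for i in range(len(views)):
        for j in range(i + 1, len(views)):
            if any(rot_z(views[i], q) == views[j] for q in range(4)):
                return True
    return False
```

A $2 \times 2$ square in one horizontal plane is symmetric under this test, while an L-shaped configuration of four modules is not, matching the distinction drawn above.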
\begin{longtable}{|l|c|c|c|c|}
\caption{Search algorithm for the MRS of four modules equipped with a common vertical axis}
\endhead
\endfoot
\cline{1-5} & $C_m$& $C_w$ & $C_e$ & Output \\
\cline{1-5} $M_{Down}$ & $(0,1,0)$,$(0,1,1)$, & & $(0,0,-2)$ & $(0,0,-1)$ \\
& $(0,1,-1)$ & & & \\
\cline{2-5} & $(0,1,0)$,$(0,1,1)$, & & $(0,0,-1)$ & $(0,1,-1)$ \\
& $(0,1,2)$ & & & \\
\cline{2-5} & $(0,0,-1)$,$(0,0,-2)$, & & $(0,0,-3)$ & $(0,-1,-1)$ \\
& $(0,-1,-2)$ & & & \\
\cline{2-5} & $(0,1,0)$,$(0,1,-1)$, & & $(1,0,0)$ & $(0,0,-1)$ \\
& $(0,1,-2)$ & & & \\
\cline{1-5} $M_{TurnD}$ & $(0,0,-1)$,$(-1,0,-1)$, & $(0,0,-3)$ & & $(0,-1,-1)$ \\
& $(0,0,-2)$ & $(0,0,-3)$ & & \\
\cline{1-5} $M_{Up}$ & $(-1,0,1)$,$(0,-1,1)$, & & & $(1,0,1)$ \\
& $(0,0,1)$ & & & \\
\cline{2-5} & $(-1,0,0)$,$(-2,0,0)$, & & $(0,0,2)$,& $(-1,0,1)$ \\
& $(-1,-1,0)$ & & $(0,0,-1)$ & \\
\cline{2-5} & $(1,0,0)$,$(1,-1,0)$, & & $(0,0,-1)$ & $(0,0,1)$ \\
& $(1,0,1)$ & & & \\
\cline{2-5} & $(0,1,0)$,$(0,1,1)$, & & $(0,0,-1)$ & $(0,0,1)$ \\
& $(-1,1,1)$ & & & \\
\cline{1-5} $M_{TurnU}$ & $(0,1,0)$,$(1,1,0)$, & $(0,0,2)$ & & $(1,0,0)$ \\
& $(-1,1,0)$ & & & \\
\cline{2-5} & $(1,0,0)$,$(2,0,0)$, & $(0,0,2)$ & & $(1,0,1)$ \\
& $(1,-1,0)$ & $(0,0,2)$ & & \\
\cline{2-5} & $(0,0,-1)$,$(1,0,-1)$, & $(0,0,1)$ & & $(1,0,0)$ \\
& $(1,-1,-1)$ & & & \\
\cline{2-5} & $(0,1,0)$,$(-1,1,0)$, & $(0,0,2)$ & & $(0,1,-1)$ \\
& $(-1,1,1)$ & & & \\
\cline{1-5} $M_{B1}$ & $(0,1,0)$,$(0,1,1)$, & $(0,2,0)$,$(0,0,-2)$ & & $(0,0,-1)$ \\
& $(0,1,-1)$ & & & \\
\cline{2-5} & $(0,0,-1)$,$(0,0,-2)$, & $(0,1,0)$,$(0,0,-3)$ & $(-1,0,0)$ & $(-1,0,-1)$ \\
& $(0,-1,-2)$ & & & \\
\cline{2-5} & $(1,0,0)$,$(1,0,-1)$, & $(0,1,0)$,$(0,0,-2)$ & & $(0,0,-1)$ \\
& $(1,-1,-1)$ & & & \\
\cline{2-5} & $(0,0,-1)$,$(-1,0,-1)$, & $(0,1,0)$,$(0,0,-2)$ & $(1,0,0)$ & $(-1,0,0)$ \\
& $(0,-1,-1)$ & & & \\
\cline{2-5} & $(0,1,0)$,$(-1,1,0)$, & $(0,2,0)$,$(0,0,-1)$ & $(1,0,0)$ & $(-1,0,0)$ \\
& $(0,1,1)$ & & & \\
\cline{2-5} & $(0,0,-1)$,$(1,0,-1)$, & $(0,1,0)$ & & $(0,-1,0)$ \\
& $(0,-1,-1)$ & & & \\
\cline{2-5} & $(0,0,-1)$,$(0,1,-1)$, & $(0,1,0)$,$(0,0,-2)$ & & $(0,-1,-1)$ \\
& $(1,1,-1)$ & & & \\
\cline{1-5} $M_{B2}$ & $(0,1,0)$,$(-1,1,0)$, & $(0,0,-1)$ & & $(-1,0,0)$ \\
& $(-2,1,0)$ & & & \\
\cline{2-5} & $(0,1,0)$,$(1,1,0)$, & $(0,0,-1)$ & $(-2,0,0)$ & $(-1,0,0)$ \\
& $(-1,1,0)$ & & & \\
\cline{2-5} & $(0,1,0)$,$(1,1,0)$, & $(0,0,-1)$ & & $(-1,1,0)$ \\
& $(2,1,0)$ & & & \\
\cline{2-5} & $(-1,0,0)$,$(-2,0,0)$, & $(0,0,-1)$ & & $(-1,-1,0)$ \\
& $(-2,-1,0)$ & & & \\
\cline{1-5} $M_{B3}$ & $(0,1,0)$,$(1,1,0)$,& $(0,0,-1)$,$(-2,0,0)$ & & $(0,1,1)$ \\
& $(-1,1,0)$ & & & \\
\cline{2-5} & $(1,0,0)$,$(2,0,0)$, & $(0,0,-1)$,$(-1,0,0)$ & & $(0,0,1)$ \\
& $(1,0,1)$ & & & \\
\cline{2-5} & $(-1,0,0)$,$(-2,0,0)$, & $(0,0,-1)$,$(-3,0,0)$ & & $(0,0,1)$ \\
& $(-1,0,1)$ & & & \\
\cline{2-5} & $(0,0,1)$,$(1,0,1)$, & $(0,0,-1)$,$(-2,0,0)$ & & $(0,-1,1)$ \\
& $(-1,0,1)$ & & & \\
\cline{1-5} $M_{Corner}$ & $(0,0,-1)$,$(0,0,-2)$, & $(0,1,0)$,$(1,0,0)$,& & $(-1,0,-1)$ \\
& $(-1,0,-2)$ &$(0,0,-3)$ & & \\
\cline{2-5} & $(1,0,0)$,$(0,0,-1)$, & $(0,1,0)$,$(2,0,0)$,& & $(0,-1,-1)$ \\
& $(1,0,-1)$ & $(0,0,-2)$ & & \\
\cline{2-5} & $(0,0,-1)$,$(-1,0,-1)$,& $(1,0,0)$,$(0,1,0)$,& $(0,0,-2)$ & $(-1,0,0)$ \\
& $(-1,-1,-1)$ & $(0,0,-2)$ & & \\
\cline{1-5} & \multicolumn{1}{c|}{Otherwise} & \multicolumn{1}{c|}{Otherwise} & \multicolumn{1}{c|}{Otherwise} & $(0,0,0)$ \\
\cline{1-5}
\end{longtable}
\label{tab_4module}%
\subsection{Search without a common compass}
We show the following theorem by a search algorithm for the MRS of
five modules not equipped with a common compass.
\begin{theorem}
\label{theorem: 5 modules can Explore 3D space without compass}
The MRS of five modules not equipped with a common compass can solve a search problem
in a finite 3D grid
if no pair of modules have an identical observation in an initial configuration.
\end{theorem}
We prove Theorem~\ref{theorem: 5 modules can Explore 3D space without compass}
by a search algorithm for the MRS of five modules not equipped with a common compass.
The proposed algorithm considers the planes
perpendicular to one of the $x$, $y$, and $z$ axes.
The choice of the axis depends on the initial configuration of the MRS, and
the modules do not need to know the global coordinate system.
In the following, without loss of generality, we assume that the MRS considers
the planes perpendicular to the $x$ axis, i.e.,
the planes $x=s$ for $s = 0, 1, 2, \ldots$.
On each plane, the MRS moves along the vertical lines $\{(x,y,z) \mid x=s, y=t\}$
and the horizontal lines $\{(x,y,z) \mid x=s, z=t\}$
for $t = 0, 1, 2, \ldots$.
Figure~\ref{Explode-Move5} shows an execution of the algorithm.
The MRS continues to search each plane perpendicular to the $x$-axis
until it reaches the east wall.
Then, it changes the search direction from the $x^+$ direction to
the $x^-$ direction and
it starts to search each plane perpendicular to the $x$-axis
until it reaches the west wall.
The algorithm description is given in Table~\ref{tab_5modules}.
\begin{figure}[htbp]
\centering
\includegraphics[keepaspectratio,width=\linewidth]
{explodeMove5.png}
\caption{Example of a search with five modular robots}
\label{Explode-Move5}
\end{figure}
The proposed algorithm consists of the following move sequences.
\begin{itemize}
\item Move sequence $M_{Forward}$ (Figure~\ref{5MoveToForward}).
The blue module is in cell $(x,y,z)$ at the start.
By this move sequence,
the green module reaches cell $(x,y,z-1)$.
By repeating $M_{Forward}$ $n$ times, the modules visit the cells $(x,y,z-k)$ for $0 \leq k \leq n$.
That is, the MRS visits all the cells of the vertical line $\{(x,y,z) \mid x=s, y=t\}$.
\item Move sequence $M_{Back}$ (Figure~\ref{5MoveToBack}).
The blue module is in cell $(x,y,z)$ at the start.
By this move sequence,
the green module reaches cell $(x,y,z+1)$.
By repeating $M_{Back}$ $n$ times, the modules visit the cells $(x,y,z+k)$ for $0 \leq k \leq n$.
That is, the MRS visits all the cells of the vertical line $\{(x,y,z) \mid x=s, y=t\}$.
\item Move sequence $M_{TurnB}$ (Figure~\ref{5TurnToBack}).
By this move sequence,
the MRS changes its move sequence from $M_{Forward}$ to $M_{Back}$.
\item Move sequence $M_{TurnF}$ (Figure~\ref{5TurnToForward}).
By this move sequence,
the MRS changes its move sequence from $M_{Back}$ to $M_{Forward}$.
\item Move sequence $M_{Edge}$ (Figure~\ref{5MoveOnTheEdge}).
By this move sequence,
the MRS moves to the southern row when it reaches the top wall or the bottom wall.
\item Move sequence $M_{Corner}$ (Figure~\ref{5MoveOnTheCorner}).
By this move sequence,
the MRS changes its search direction from the positive $x$ direction to the negative $x$ direction.
\end{itemize}
The proposed algorithm consists of the following six steps.
We use the down direction for explanation, but the modules do not need to know the down direction.
\begin{description}
\item[Step 1] The MRS repeats $M_{Forward}$,
which makes it move in the down direction
along a vertical line on a plane $\{(x,y,z) \mid x = s\}$ for some $s$.
\item[Step 2] When the MRS reaches the bottom wall,
it changes its direction to up by $M_{TurnB}$.
\item[Step 3] The MRS repeats $M_{Back}$,
which makes it move in the up direction along the vertical line followed in Step $1$.
\item[Step 4] If the MRS is adjacent to the bottom of the west wall,
it moves to the plane $\{(x,y,z) \mid x = s+1\}$
by $M_{TurnF}$,
and starts searching the new plane by repeating Steps $1$, $2$, and $3$.
\item[Step 5] When the MRS reaches the top wall or the bottom wall,
it moves to the southern row
by $M_{Edge}$.
Then it repeats Steps $1$, $2$, and $3$ again.
\item[Step 6] When the MRS reaches a corner of the field,
it performs $M_{Corner}$,
which changes the search direction from the positive $x$ direction to
the negative $x$ direction.
\end{description}
When the MRS is on a plane $\{(x, y, z) \mid x=s\}$ for some $s$,
it visits all the cells on the line $\{(x,y,z) \mid x=s, y=t\}$ by Steps $1$, $2$, and $3$.
Then, it proceeds to a new line $\{(x,y,z) \mid x=s, y=t-1\}$ by Step $5$.
By repeating Steps $1$, $2$, $3$, and $5$, it visits all cells on the plane $x=s$.
By Step $4$, it starts searching a new plane $x=s+1$.
By repeating Steps $1$ to $5$, it eventually reaches the corner adjacent to the south wall
and the east wall.
By Step $6$, it starts searching in the negative $x$ direction.
By repeating Steps $1$ to $6$ twice, the MRS visits all cells of the field.
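Abstracting away the module-level move sequences, the traversal of Steps $1$ to $6$ is a plane-by-plane boustrophedon sweep of the grid. The following sketch (a simplification that ignores the module mechanics and performs a single sweep in the $x$ direction) checks that such a sweep visits every cell of a finite grid exactly once while only ever moving between adjacent cells:

```python
def snake3d(w, d, h):
    """Boustrophedon traversal of a w x d x h grid: sweep the planes
    x = 0..w-1, zig-zagging over y within each plane and over z within
    each vertical line, so consecutive cells are always grid-adjacent."""
    order = []
    z_flip = False
    for x in range(w):
        ys = range(d) if x % 2 == 0 else range(d - 1, -1, -1)
        for y in ys:
            zs = range(h - 1, -1, -1) if z_flip else range(h)
            order.extend((x, y, z) for z in zs)
            z_flip = not z_flip  # reverse z direction for the next line
    return order

cells = snake3d(4, 3, 5)  # visits all 60 cells of a 4 x 3 x 5 grid
```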
\begin{figure}[htbp]
\centering
\includegraphics[keepaspectratio, height=2cm]
{5MoveToForward.png}
\caption{Move to forward.
In each figure, the red modules move. When the blue module is in cell $(x,y,z)$ at the start, after this move sequence, the green module reaches
$(x,y,z-1)$.}
\label{5MoveToForward}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[keepaspectratio, height=2.5cm]
{5TurnToBack.png}
\caption{Turn to back. In each figure, the red modules move.}
\label{5TurnToBack}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[keepaspectratio, height=2cm]
{5MoveToBack.png}
\caption{Move to back. In each figure, the red modules move. When the blue module is in cell $(x,y,z)$ at the start, after this move sequence, the green module reaches
$(x,y,z+1)$.}
\label{5MoveToBack}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[keepaspectratio, height=5cm]
{5TurnToForward.png}
\caption{Turn to forward. In each figure, the red modules move.}
\label{5TurnToForward}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[keepaspectratio, height=5cm]
{5MoveOnTheEdge.png}
\caption{Move on the edge. In each figure, the red modules move.}
\label{5MoveOnTheEdge}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[keepaspectratio, height=5cm]
{5MoveOnTheCorner.png}
\caption{Move on the corner. In each figure, the red modules move.}
\label{5MoveOnTheCorner}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[keepaspectratio, width=\linewidth]
{5ModuleShapes.png}
\caption{Configurations of the MRS of five modules not equipped with a common compass}
\label{5ModuleShapes}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[keepaspectratio, scale=0.5]
{5ModulesTransformation.png}
\caption{Transition graph of the MRS of five modules not equipped with a common compass}
\label{5ModuleTransitionGraph}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[keepaspectratio, height=2cm]
{ShapeQtranform.png}
\caption{Transformation of $S^5_Q$}
\label{Qtransform}
\end{figure}
Depending on the initial configuration, the search may not be possible,
and even if it is possible, the MRS cannot start the search directly because some initial configurations do not satisfy the
conditions in Table~\ref{tab_5modules}.
Figure~\ref{5ModuleShapes} shows all possible configurations for five modules not equipped with a common compass.
The configurations containing the coloured modules are symmetric, i.e.,
in each figure the modules painted in the same colour may
have an identical observation and move symmetrically.
The MRS cannot start the search because these modules cannot move from their positions.
Therefore, if there are pairs of modules with the same observation in an initial configuration,
the search is not possible.
We show that the search is possible when all modules have different observations in the initial configuration.
Figure~\ref{5ModuleTransitionGraph} shows that the configurations $S^5_L, S^5_S, S^5_U$ can transform into $S^5_P$, $S^5_W, S^5_X, S^5_Z$ can transform into $S^5_F$,
$S^5_M$ can transform into $S^5_B$, and the other configurations can transform into $S^5_Q$.
Even if some walls prevent these transformations,
the MRS can leave the wall because its state is asymmetric.
Then, any configuration can be transformed to $S^5_Q$.
If the MRS satisfies the condition of the fourth or fifth movement of Table~\ref{tab_5modules},
the MRS can start the search directly.
Otherwise, the MRS cannot move because it touches a wall.
In this case, as shown in Figure~\ref{Qtransform}, the MRS in $S^5_Q$ on a wall
can change its direction by a single sliding
and in this new configuration it can perform one of these movements.
By adding the above rules to the algorithm, the MRS can start the search.
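The reachability argument above can be checked mechanically. The sketch below encodes the transformations named in the text as a small graph and verifies by breadth-first search that every listed configuration reaches $S^5_Q$. The edges from $S^5_P$, $S^5_F$, and $S^5_B$ to $S^5_Q$ are assumptions reflecting the claim that any configuration can be transformed to $S^5_Q$; the actual graph is given in Figure~\ref{5ModuleTransitionGraph}.

```python
from collections import deque

# Transformations stated in the text; the P->Q, F->Q, B->Q edges are
# assumptions, and the remaining configurations (which transform directly
# into Q) are omitted for brevity.
edges = {
    "L": ["P"], "S": ["P"], "U": ["P"],
    "W": ["F"], "X": ["F"], "Z": ["F"],
    "M": ["B"],
    "P": ["Q"], "F": ["Q"], "B": ["Q"],
    "Q": [],
}

def reaches(start, target, edges):
    """Breadth-first search over the configuration transition graph."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        if node == target:
            return True
        for nxt in edges.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

assert all(reaches(s, "Q", edges) for s in edges)  # every configuration reaches Q
```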
\begin{center}
\begin{longtable}{|l|c|c|c|c|}
\caption{Search algorithm for the MRS of five modules not equipped with a common compass}
\endhead
\endfoot
\cline{1-5} & \multicolumn{1}{c|}{$C_{m}$} & \multicolumn{1}{c|}{$C_{w}$} & \multicolumn{1}{c|}{$C_e$} & \multicolumn{1}{c|}{Output} \\
\cline{1-5} $M_{Forward}$ & $(0,0,1)$,$(0,-1,1)$,& & $(0,0,4)$ & $(-1,0,1)$ \\
& $(0,0,2)$,$(0,0,3)$& & & \\
\cline{2-5} & $(0,1,0)$,$(0,1,-1)$,& & $(0,0,3)$ & $(0,0,1)$ \\
& $(0,1,1)$,$(0,1,2)$ & & & \\
\cline{2-5} & $(0,0,-1)$,$(0,0,-2)$,& & $(0,0,1)$ & $(-1,0,-1)$ \\
& $(0,-1,-2)$,$(0,0,-3)$ & & & \\
\cline{2-5} & $(0,1,0)$,$(-1,1,0)$,& & $(0,0,1)$ & $(0,1,1)$ \\
& $(0,1,-1)$,$(-1,1,-1)$ & & & \\
\cline{2-5} & $(1,0,0)$,$(1,-1,0)$,& & $(0,0,2)$ & $(0,-1,1)$ \\
& $(0,0,-1)$,$(1,0,-1)$ & & & \\
\cline{2-5} & $(0,1,0)$,$(1,1,0)$,& & $(0,0,2)$ & $(0,1,1)$ \\
& $(1,1,1)$,$(1,1,-1)$ & & & \\
\cline{2-5} & $(1,0,0)$,$(0,0,-1)$,& & $(0,0,1)$ & $(1,0,1)$ \\
& $(1,0,-1)$,$(1,0,-2)$ & & & \\
\cline{2-5} & $(1,0,0)$,$(0,0,1)$ & & $(0,0,2)$ & $(1,-1,0)$ \\
& $(1,0,1)$,$(1,0,-1)$,& & & \\
\cline{1-5} $M_{TurnB}$ & $(0,1,0)$,$(0,1,1)$,& $(0,0,3)$ & $(1,0,0)$ & $(-1,1,0)$ \\
& $(0,1,-1)$,$(0,1,2)$ & & & \\
\cline{2-5} & $(0,0,-1)$,$(0,0,-2)$ & $(0,0,1)$ & $(1,0,0)$ & $(0,-1,-1)$ \\
& $(0,-1,-2)$,$(0,0,-3)$ & & & \\
\cline{1-5} $M_{Back}$ & $(0,0,-1)$,$(-1,0,-1)$,& & $(0,0,1)$,& $(0,-1,-1)$ \\
& $(0,0,-2)$,$(1,0,-2)$ & & $(0,0,-3)$ & \\
\cline{2-5} & $(1,0,0)$,$(1,0,1)$,& & $(0,0,2)$ & $(0,0,-1)$ \\
& $(1,0,-1)$,$(2,0,-1)$& & & \\
\cline{2-5} & $(-1,0,0)$,$(-1,0,1)$,& & $(0,0,3)$,& $(-1,0,-1)$ \\
& $(-2,0,1)$,$(-1,0,2)$& & $(0,0,-1)$ & \\
\cline{2-5} & $(0,1,0)$,$(0,1,-1)$,& & $(0,0,-3)$ & $(0,0,-1)$ \\
& $(-1,1,-1)$,$(0,1,-2)$ & & & \\
\cline{2-5} & $(0,0,1)$,$(-1,0,1)$,& & $(0,0,-1)$ & $(-1,0,0)$ \\
& $(0,0,2)$,$(0,-1,2)$ & & & \\
\cline{2-5} & $(0,0,1)$,$(1,0,1)$,& & $(0,0,-1)$,& $(1,0,0)$ \\
& $(1,-1,1)$,$(1,0,2)$ & & $(2,0,0)$ & \\
\cline{2-5} & $(0,0,-1)$,$(-1,0,-1)$,& & & $(-1,0,0)$ \\
& $(0,-1,-1)$,$(-1,0,-2)$ & & & \\
\cline{2-5} & $(0,0,-1)$,$(1,0,-1)$,& & & $(1,0,0)$ \\
& $(1,-1,-1)$,$(1,0,-2)$ & & & \\
\cline{2-5} & $(0,1,0)$,$(-1,1,0)$,& & $(0,0,-2)$ & $(0,0,-1)$ \\
& $(-1,1,1)$,$(0,1,-1)$ & & & \\
\cline{2-5} & $(0,1,0)$,$(0,1,1)$,& & $(0,0,-1)$,& $(1,1,0)$ \\
& $(-1,1,1)$,$(0,1,2)$ & & $(0,0,3)$ & \\
\cline{1-5} $M_{TurnF}$ & $(0,1,0)$,$(0,1,-1)$,& $(0,0,-3)$ & & $(-1,1,0)$ \\
& $(-1,1,-1)$,$(0,1,-2)$& & & \\
\cline{2-5} & $(1,0,0)$,$(1,0,1)$,& $(0,0,-2)$ & & $(0,0,-1)$ \\
& $(1,-1,1)$,$(1,0,-1)$ & & & \\
\cline{2-5} & $(1,0,0)$,$(1,0,1)$,& $(0,0,-1)$ & & $(1,-1,0)$ \\
& $(1,0,2)$,$(0,0,2)$ & & & \\
\cline{2-5} & $(0,1,0)$,$(0,1,1)$,& $(0,0,-1)$ & & $(1,1,0)$ \\
& $(0,1,2)$,$(-1,1,2)$ & & & \\
\cline{2-5} & $(-1,0,0)$,$(-1,0,1)$,& $(0,0,-1)$ & & $(0,0,1)$ \\
& $(-1,0,2)$,$(-2,0,2)$ & & & \\
\cline{2-5} & $(0,0,1)$,$(1,0,1)$,& $(0,0,-1)$ & & $(1,0,0)$ \\
& $(0,0,2)$,$(-1,0,2)$,& & & \\
\cline{2-5} & $(1,0,0)$,$(1,0,-1)$ & $(0,0,-3)$ & & $(1,-1,0)$ \\
& $(2,0,-1)$,$(1,0,-2)$ & & & \\
\cline{2-5} & $(0,1,0)$,$(0,1,-1)$,& $(0,0,-3)$ & & $(1,1,0)$ \\
& $(1,1,-1)$,$(1,1,-2)$ & & & \\
\cline{1-5} $M_{Edge}$ & $(0,0,1)$,$(0,-1,1)$,& $(0,0,4)$,$(1,0,0)$ & & $(-1,0,1)$ \\
& $(0,0,2)$,$(0,0,3)$ & & & \\
\cline{2-5} & $(0,1,0)$,$(0,1,1)$,& $(0,0,3)$,$(1,0,0)$ & & $(0,0,1)$ \\
& $(0,1,2)$,$(0,1,-1)$ & & & \\
\cline{2-5} & $(0,0,-1)$,$(0,0,-2)$,& $(0,0,1)$,$(1,0,0)$ & & $(-1,0,-1)$ \\
& $(0,-1,-2)$,$(0,0,-3)$ & & & \\
\cline{2-5} & $(0,1,0)$,$(1,1,0)$,& $(2,0,0)$,$(0,0,2)$ & $(0,2,0)$& $(0,1,1)$ \\
& $(1,1,1)$,$(1,1,-1)$& & & \\
\cline{2-5} & $(1,0,0)$,$(0,0,-1)$,& $(0,0,1)$,$(2,0,0)$ & & $(0,1,-1)$ \\
& $(1,0,-1)$,$(1,0,-2)$ & & & \\
\cline{2-5} & $(0,0,1)$,$(-1,0,1)$,& $(0,0,3)$,$(1,0,0)$ & & $(0,1,1)$ \\
& $(0,0,2)$,$(-1,0,2)$& & & \\
\cline{2-5} & $(0,0,-1)$,$(0,1,-1)$,& $(0,0,1)$,$(1,0,0)$ & & $(0,1,0)$ \\
& $(-1,0,-1)$,$(-1,1,-1)$ & & & \\
\cline{2-5} & $(1,0,0)$,$(0,1,0)$,& $(0,0,2)$,$(2,0,0)$ & & $(-1,1,0)$ \\
& $(1,1,0)$,$(1,0,1)$ & & & \\
\cline{2-5} & $(0,1,0)$,$(-1,1,0)$,& $(0,0,2)$,$(1,0,0)$ & & $(-1,0,0)$ \\
& $(-2,1,0)$,$(0,1,1)$ & & & \\
\cline{2-5} & $(0,1,0)$,$(1,1,0)$,& $(0,0,2)$,$(2,0,0)$ & & $(-1,0,0)$ \\
& $(-1,1,0)$,$(1,1,1)$ & & & \\
\cline{2-5} & $(0,0,-1)$,$(-1,0,-1)$,& $(0,0,1)$,$(1,0,0)$ & & $(-1,0,0)$ \\
& $(-2,0,-1)$,$(-1,-1,-1)$ & & & \\
\cline{2-5} & $(0,0,-1)$,$(1,0,-1)$,& $(0,0,1)$,$(2,0,0)$ & & $(-1,0,0)$ \\
& $(-1,0,-1)$,$(-1,-1,-1)$ & & & \\
\cline{2-5} & $(0,0,-1)$,$(1,0,-1)$,& $(0,0,1)$,$(3,0,0)$ & & $(-1,0,-1)$ \\
& $(2,0,-1)$,$(0,-1,-1)$ & & & \\
\cline{1-5} $M_{Corner}$ & $(0,0,-1)$,$(-1,0,-1)$,& $(0,0,1)$,$(1,0,0)$,& & $(-1,0,0)$ \\
& $(-1,-1,-1)$,$(0,0,-2)$ & $(0,1,0)$ & & \\
\cline{2-5} & $(0,0,1)$,$(-1,0,1)$,& $(0,0,3)$,$(1,0,0)$,& & $(-1,0,0)$ \\
& $(-1,-1,1)$,$(0,0,2)$ & $(0,1,0)$ & & \\
\cline{2-5} & $(0,0,1)$,$(1,0,1)$,& $(0,0,3)$,$(2,0,0)$,& & $(0,-1,0)$ \\
& $(0,-1,1)$,$(0,0,2)$& $(0,1,0)$ & & \\
\cline{2-5} & $(-1,0,0)$,$(-1,-1,0)$,& $(0,0,2)$,$(1,0,0)$,& & $(0,-1,0)$ \\
& $(-1,0,-1)$,$(-1,0,1)$ & $(0,1,0)$ & & \\
\cline{2-5} & $(0,0,-1)$,$(1,0,-1)$,& $(0,0,1)$,$(2,0,0)$,& & $(0,-1,0)$ \\
& $(0,-1,-1)$,$(0,0,-2)$ & $(0,1,0)$ & & \\
\cline{2-5} & $(-1,0,0)$,$(-1,-1,0)$,& $(0,0,2)$,$(1,0,0)$,& & $(0,0,1)$ \\
& $(-1,0,-1)$,$(-1,0,1)$ & $(0,-2,0)$ & & \\
\cline{2-5} & $(-1,0,0)$,$(-1,1,0)$,& $(0,0,2)$,$(1,0,0)$,& & $(0,0,1)$ \\
& $(-1,0,1)$,$(-1,0,-1)$ & $(0,2,0)$ & & \\
\cline{1-5} & \multicolumn{1}{c|}{Otherwise} & \multicolumn{1}{c|}{Otherwise} & \multicolumn{1}{c|}{Otherwise} & $(0,0,0)$ \\
\cline{1-5}
\end{longtable}%
\end{center}
\label{tab_5modules}%
\section{Necessary number of modules}
We show that the three algorithms presented in
Section~\ref{ExpWithCompass} use the
minimum number of modules for each setting.
\begin{theorem}
\label{theorem: 2 modules cant Exprole 3D space with compass}
The MRS of less than three modules equipped with a common compass cannot solve the search problem
in a finite 3D cubic grid.
\end{theorem}
\begin{figure}[htbp]
\centering
\includegraphics[keepaspectratio,scale=0.35]
{DiagonalMoveExample.png}
\caption{Example of diagonal move}
\label{DiagonalMoveExample}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[keepaspectratio, scale=0.25]
{2ModulesShapes.png}
\caption{Configurations of two modules with a common compass}
\label{2ModulesShapes}
\end{figure}
\begin{proof}
In the case of one module, the MRS cannot move because there is no other static module during sliding or rotation.
In the case of two modules, we show that
there is a cell which cannot be visited by the MRS.
Doi et al. showed that in the 2D square grid two modules equipped with a common compass
can move straight to one of the eight directions
(north, south, east, west, northeast, northwest, southeast, and southwest)
when they observe no wall~\cite{DYKY18}.
By the same discussion, the MRS of two modules equipped with a common compass
in the 3D cubic grid can move straight in one direction, and
the possible moving directions are the eight diagonal directions
in addition to those eight directions on the planes perpendicular
to the $x$, $y$, or $z$ axis
when the modules observe no wall.
Figure~\ref{DiagonalMoveExample} shows an example of a 3D diagonal move.
Without loss of generality, we assume that the MRS continues to move in the positive $x$ direction
in the area where the two modules observe no wall.
If it enters this area from a wall other than the west wall,
it cannot visit the cells with smaller $x$ coordinates on the straight moving track.
Hence, the MRS must start the straight move when some modules can observe the west wall.
Assume that the MRS can visit all cells of the field.
Then there exists a subsequence of configurations where the two modules observe only the west wall.
We focus on the sequence of states of the MRS.
Let $T_W = S_1, S_2, \ldots, S_{\ell}$ be a sequence of states of the MRS
in the area where the MRS observes only the west wall.
$T_W$ consists of different states, otherwise the MRS cannot leave the west wall.
Additionally, in the transformation from $S_{\ell}$ to the next state, say $S_{\ell+1}$,
the MRS enters the area where it observes no wall.
As shown in Figure~\ref{2ModulesShapes}, when we fix the bottom westmost module,
there are three states of the MRS.
The number of states where the modules observe only the west wall is at most $3k$.
Thus, the length of $T_W$ is at most $3k$.
In addition, the maximum distance that the MRS can move by $T_W$ in the $y$ direction
or the $z$ direction is $3k$,
since it can travel at most one cell in each direction by one transition.
Since the MRS cannot move in the negative $x$ direction from the area
where the modules cannot observe the west wall,
the only way for the MRS to enter the area where the modules can observe the west wall
is to move from the area where the modules observe two or more walls.
Thus, the transformation sequence $T_W$ always starts around the edge of the field that touches the west wall.
However, since the maximum distance that the MRS can travel by $T_W$ is $3k$,
it cannot reach the coordinates $(a,b,c)$ with $0 \leq a \leq k$, $3k+k < b < d-(3k+k)$, and $3k+k < c < h-(3k+k)$.
Hence, it is impossible for the two modules to visit all cells.
\end{proof}
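The final counting step of the proof can be illustrated numerically. The sketch below (an illustration only; $k$, $d$, and $h$ are the quantities used in the proof, and the sample values are hypothetical) checks whether the bound $3k+k < b < d-(3k+k)$, $3k+k < c < h-(3k+k)$ leaves a nonempty unreachable region:

```python
def unreachable_region_nonempty(k, d, h):
    """The proof bounds the distance the two-module MRS can travel along the
    west wall by 3k, starting from an entry band of width k; cells with
    3k+k < b < d-(3k+k) and 3k+k < c < h-(3k+k) are then out of reach."""
    band = 3 * k + k  # travel bound plus entry band
    # an integer b with band < b < d - band exists iff d - band > band + 1
    return d - band > band + 1 and h - band > band + 1

# for any fixed k, a large enough field contains unreachable cells
assert unreachable_region_nonempty(k=2, d=20, h=20)
assert not unreachable_region_nonempty(k=2, d=10, h=10)
```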
We next show the necessary number of modules equipped with a common vertical axis.
\begin{theorem}
\label{theorem: 3 modules cant Exprole 3D space with horizontal compass}
The MRS of less than four modules equipped with a common vertical axis
cannot solve the search problem in a finite 3D cubic grid.
\end{theorem}
\begin{proof}
\begin{figure}[htbp]
\centering
\includegraphics[keepaspectratio, scale=0.23]
{3moduleCantMoveHorizon.png}
\caption{State transition graph for the MRS of three modules equipped with a
common vertical axis}
\label{3moduleCantMoveHorizon}
\end{figure}
In the case of one module, the MRS cannot move because it cannot perform any sliding or rotation.
In the case of two modules, there are two possible states of the MRS.
Let $S_A$ be the state in which the two modules form a vertical line,
and $S_B$ be the state in which the two modules form a horizontal line.
There exists only one horizontal line state because
the modules do not agree on the $x$ axis or the $y$ axis.
In $S_A$, one of the two modules can perform a rotation
because they agree on a common vertical axis.
Any rotation in $S_A$ results in $S_B$.
In $S_B$, both modules obtain the same observation
if their local coordinate systems are symmetric against their midpoint,
and if one of them moves then the other also moves.
Thus, the two modules cannot perform any movement in $S_B$.
Consequently, the two modules cannot move in any direction.
In the case of three modules, we check possible movements of the MRS
by the state transition graph shown in Figure~\ref{3moduleCantMoveHorizon}.
State $S^3_a$ cannot be transformed into any other configuration
because both endpoint modules obtain the same observation.
Therefore, the MRS can move only among $S^3_b$, $S^3_c$, $S^3_d$, and $S^3_e$.
However, no matter which transformations among $S^3_b$, $S^3_c$, $S^3_d$, and $S^3_e$ are performed,
the MRS cannot move in the east, west, north, or south direction.
Therefore, when there is no wall in the vicinity, the MRS cannot move,
and it is impossible to search the whole field.
\end{proof}
We finally show the necessary number of modules not equipped with a common compass.
\begin{theorem}
\label{theorem: 4 modules cant Exprole 3D space without compass }
The MRS of less than five modules not equipped with a common compass
cannot solve the search problem in a finite 3D cubic grid.
\end{theorem}
\begin{proof}
\begin{figure}[htbp]
\centering
\includegraphics[keepaspectratio,scale=0.25]
{4moduleCantMoveNoCompass.png}
\caption{State transition graph for the MRS of 4 modules not equipped with a common compass}
\label{4moduleCantMoveNoCompass}
\end{figure}
In the case of one module, the MRS cannot move because it cannot perform any sliding or rotation.
In the case of two modules, the MRS can take only one state,
in which the two modules obtain the same observation
if their local coordinate systems are symmetric against their midpoint.
Thus, if one of them moves then the other also moves, and
the two modules cannot perform any movement.
In the case of three modules, there are two possible states of the MRS, i.e.,
the L-shape shown in the middle of Figure~\ref{3moduleCantMoveHorizon},
and the I-shape, i.e., a line.
In the L-shape, the two endpoint modules obtain the same observation
if their local coordinate systems are symmetric against the center module.
The center module cannot perform any movement due to connectivity.
The two endpoint modules cannot perform a sliding because they would move into the same cell
and there is no static module during a sliding.
If the two modules perform a rotation, the resulting state is the L-shape
or the I-shape.
In the I-shape, the two endpoint modules obtain the same observation in the worst case.
They can only perform a rotation, and the resulting configuration
is the L-shape or the I-shape.
In any movement in the L-shape and the I-shape,
the center module does not move, and the MRS does not move in any direction.
Consequently, the MRS cannot move in any direction.
In the case of four modules, the state transition graph is shown in Figure~\ref{4moduleCantMoveNoCompass}.
In each state, the modules that may have the same observation are painted in red,
and each arrow shows that the state at its starting point can be transformed into
the state at its end point in one time step.
The MRS cannot change its state in $S_O^4$ and $S_K^4$.
In $S_O^4$, the four modules have the same observation, and
if one of them moves, the others also move.
Thus, they cannot keep a static module during the movement.
Also in $S_K^4$, the three red modules have the same observation, and
if one of them moves, the others also move.
This causes a conflict of their movement paths or a transformation into $S_K^4$.
Thus, the MRS can transform only into $S_K^4$.
Next, we find that $S_S^4$, $S_Z^4$, and $S_I^4$ form a loop.
However, the MRS cannot move in any direction by repeating these transitions.
The MRS enters this loop from $S_N^4$, thus
it cannot move in any direction from $S_N^4$.
We also find that $S_T^4$ and $S_L^4$ form a loop and that
$S_L^4$ has arrows to $S_O^4$, $S_K^4$, $S_I^4$, and $S_N^4$.
Thus, the MRS cannot move in any direction from $S_T^4$ and $S_L^4$.
Therefore, there is no way for the MRS to move.
\end{proof}
\section{Conclusion and future work}
\label{conc}
In this paper, we considered search by the single MRS
in the finite 3D cubic grid.
We demonstrated a trade-off between the common knowledge and
the necessary and sufficient number of modules for search.
When the modules are equipped with a common compass,
three modules are necessary and sufficient.
When the modules are not equipped with a common compass,
five modules are necessary and sufficient.
As an intermediate case, when the modules are equipped with
a common vertical axis, four modules are necessary and sufficient.
We finally note that the proposed algorithms depend on
parallel movements, i.e., they are not designed for
a centralized scheduler.
Our future goal is a distributed coordination theory for the MRS.
First, reconfiguration and locomotion
of a single MRS in the 3D cubic grid have not been discussed yet.
Second, it is important to consider interaction among multiple MRSs
such as rendezvous, collision avoidance, and collective search.
Finally, the MRS is expected to solve more complicated tasks
by interaction with the environment.
\newpage
\section{Introduction}
\label{sec:introduction}
Time of Flight (ToF) cameras are devices that are able to capture depth information of a scene by measuring the time the light emitted by the device needs to travel back once intersecting with an object.
In practice, timing the reception of a light impulse requires precise and costly hardware and, as a result, consumer-level ToF cameras perform indirect measurements.
The most common type is the so-called Amplitude-Modulated Continuous-Wave (AMCW) ToF camera, as used, for example, in the Kinect.
The working principle of these AMCW cameras is to emit a periodically modulated light signal and retrieve the phase shift of the received signal, through which the travel time of the light and, consequently, the distance of the object to the camera is given~\cite{hansard2012tof_principles}.
However, the continuous illumination of the scene inevitably leads to Multi-Path Interference (MPI), as not only the direct reflection is received but also indirect illumination, which significantly impairs the depth estimation.
Furthermore, these ToF cameras suffer from low Signal to Noise Ratios (SNR) on dark surfaces and the mixed pixel effect along sharp object edges~\cite{hansard2012tof_principles}.
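The phase-to-depth relation behind AMCW cameras can be sketched as follows, assuming the common four-bucket sampling scheme with correlation samples $q_i = A\cos(\varphi - i\pi/2) + B$; the exact bucket convention varies between sensors, so this is an illustration rather than a description of any particular device:

```python
import math

C = 299_792_458.0  # speed of light in m/s

def amcw_depth(q0, q1, q2, q3, f_mod):
    """Recover depth from four phase-stepped correlation samples,
    assuming q_i = A*cos(phi - i*pi/2) + B."""
    phi = math.atan2(q1 - q3, q0 - q2) % (2 * math.pi)  # phase shift in [0, 2*pi)
    return C * phi / (4 * math.pi * f_mod)              # d = c*phi / (4*pi*f)

# simulate a single direct reflection at 2 m with a 20 MHz modulation
f_mod, d_true = 20e6, 2.0
phi_true = 4 * math.pi * f_mod * d_true / C
q = [0.8 * math.cos(phi_true - i * math.pi / 2) + 1.0 for i in range(4)]
print(round(amcw_depth(*q, f_mod), 6))  # -> 2.0
```

Under MPI, the received signal is a superposition of several such phase-shifted sinusoids, so the recovered $\varphi$ no longer corresponds to the direct path alone; this is the error the denoising methods discussed below aim to remove.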
\begin{figure}
\centering
\includegraphics[width=\linewidth]{figures/Teaser.pdf}
\caption{Given an initial ToF depth reconstruction our method projects the problem into a latent 3D space. The 3D point positions are updated iteratively along the camera rays using RADU point convolutions, in order to optimize the final 2D depth prediction.}
\label{fig:teaser}
\end{figure}
Since deep learning approaches have shown great success in visual computing problems, many works have investigated the capabilities of 2D neural networks to denoise ToF depth images~\cite{agresti2018mpiremoval, agresti2019unsupervised, buratto2021deep, su2018end2end, Guo_2018_ECCV, gutierrez2021itof2dtof}.
However, existing methods treat the task of ToF denoising as a 2D image problem and do not take into account the explicit 3D information in their computations.
In these works, the depth information is usually used as an input to standard Convolutional Neural Networks (CNN) for images~\cite{marco2017deeptof, agresti2018mpiremoval, agresti2019unsupervised, son2016robotictof, song2018depth}, while the underlying 3D structure is not analyzed.
In this work instead, we propose a new neural network architecture that projects the problem into the 3D domain and makes use of point convolutional neural networks~\cite{hermosilla2018monte} to analyze the noisy reconstruction and adjust the point positions along the view direction, see \cref{fig:teaser}.
This iterative process makes small changes to the point positions to reduce the noise level in between the convolutional layers and improve the final depth reconstruction.
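The core geometric operation, moving points along their camera rays, can be sketched as follows. This is a minimal illustration: the per-point offsets here are random placeholders, whereas in the proposed architecture they are predicted by the point convolutional layers:

```python
import numpy as np

def update_along_rays(depths, ray_dirs, offsets):
    """Shift each 3D point along its (unit-norm) camera ray by a predicted
    depth offset; the points stay on their rays by construction."""
    new_depths = depths + offsets
    points = new_depths[:, None] * ray_dirs  # back-project to 3D positions
    return new_depths, points

rng = np.random.default_rng(0)
dirs = rng.normal(size=(5, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)  # unit view directions
depths = rng.uniform(1.0, 4.0, size=5)               # noisy initial depths
offsets = rng.uniform(-0.1, 0.1, size=5)             # placeholder predictions
new_depths, points = update_along_rays(depths, dirs, offsets)
```

Restricting the update to the ray direction keeps the latent 3D point cloud consistent with the 2D pixel grid, so the final result can be read back as a depth image.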
Further, we propose a novel fine-tuning procedure for Unsupervised Domain Adaptation (U-DA) based on self-training methods, to transfer the knowledge acquired by our network from synthetic to real world ToF data.
The effectiveness of this approach is evaluated on both synthetic and real world datasets, where it outperforms existing methods.
Moreover, we introduce a large-scale high-resolution dataset consisting of challenging scenes, containing high MPI levels, materials which produce low SNR captures and objects with high frequency details.
In summary, our contributions are:
\begin{itemize}
\item A novel architecture for denoising of ToF images in a latent 3D space.
\item A two-staged training procedure with a cyclic self-training approach designed to bridge the gap between synthetic and real world ToF images.
\item A large-scale high-resolution synthetic ToF dataset containing measurements for scenes with high MPI.
\end{itemize}
Our synthetic dataset, code, and trained networks will be made publicly available in the future.
\section{Related Work}\label{sec:rel_work}
In this section we will briefly review existing work related to our approach in different fields.
\noindent \textbf{Learned ToF Denoising.}
Beginning with Marco \etal~\cite{marco2017deeptof}, several works on denoising ToF depth images using deep learning have been proposed.
While the former uses an initial low-frequency ToF-depth prediction as input, subsequent work by Agresti \etal~\cite{agresti2018mpiremoval, agresti2019unsupervised} greatly improved the reconstruction by considering additional multi-frequency features derived from raw camera measurements to further reduce the noise.
The same year, Su \etal~\cite{su2018end2end} proposed a generative approach that predicts depth directly from raw camera measurements.
Instead of predicting a denoised depth directly, Guo \etal~\cite{Guo_2018_ECCV} followed an inverse approach and used a 2D CNN to denoise the raw camera measurements prior to the LF2~\cite{lingzhu2016LF2} depth reconstruction algorithm of the Kinect2.
Several further works improve various aspects and applications using machine learning, \eg online calibration using RGB information~\cite{qiu2019deep}, frame rate optimization~\cite{chen2019learning}, power efficiency~\cite{chen2020very}, robotic arm setups~\cite{son2016robotictof} or translucent materials~\cite{song2018depth}, to name a few.
The aforementioned approaches all use standard 2D CNNs and thus consider the denoising problem as an image task.
Recent work aims to estimate the real depth by reconstructing the transient image, \textit{i.e.}~the impulse response of the scene, through learning methods.
Buratto \etal~\cite{buratto2021deep} use the assumption that the direct reflection reaches the camera sensor first and predict the intensity and arrival time of the first two peaks of the impulse response.
The iToF2dToF method~\cite{gutierrez2021itof2dtof} first predicts ToF depths at various frequencies which are used to estimate the two leading coefficients of the Fourier-Transform of the impulse response.
While transient reconstruction is a promising direction, these methods are, as of now, still very memory-demanding, can thus only query a few pixels in parallel, and do not reach the denoising capabilities of learned denoising approaches.
In contrast to previous work on ToF-denoising, we propose to lift the problem into 3D and use 3D point CNNs.
With this approach we follow the findings of recent research, that neural networks can benefit from latent 3D representations in various tasks, such as object detection~\cite{wang2019pseudo} using 3D voxel CNNs on lidar data, semantic segmentation~\cite{xing20192_5Dconv} using 2.5D convolutions on RGB-D data or even image generation~\cite{nguyen2019hologan}.
\noindent \textbf{Domain Adaptation for ToF Images.}
Real ToF data with ground truth labels is only sparsely available, as its collection is rather complex.
Thus, various synthetic datasets have been introduced~\cite{agresti2018mpiremoval, marco2017deeptof, Guo_2018_ECCV} to provide the amount of data needed to train deep neural networks.
However, using data from a different source, in this case a synthetic simulation, comes at the cost of introducing a domain gap, \textit{i.e.}~real and synthetic images differ in their statistical properties.
Consequently, one major challenge when learning the task of denoising ToF images is bridging the domain gap between synthetic data sets and real world data.
To improve performance on real data, Marco \etal~\cite{marco2017deeptof} pre-train an auto-encoder network on unlabeled real world ToF-data and transfer the encoder layers to an encoder-decoder network for denoising.
Other works~\cite{su2018end2end, agresti2019unsupervised} have suggested to use adversarial losses, where a second network acts as an adversarial agent which is trained to distinguish depths generated from real and synthetic data.
During training the generator is optimized to deceive the discriminator.
Recently, self-training methods, which originated from a student-teacher approach by Hinton \etal~\cite{hinton2015distilling}, have shown great success in various variants of domain-adaptation~\cite{lee2013pseudo, mey2016soft, kumar2020understanding, xie2020innout, zou2019confidence, prabhu2021sentry, zou2018unsupervised, liu2021cycle}.
Building upon these works, we employ a cyclic self-training procedure to adapt our network to real world statistics by generating pseudo-labels for unlabeled real data, after pre-training the network on synthetic data.
\section{Problem Statement}\label{sec:tof_principle}
To estimate the distance $d$ of an object, an AMCW ToF camera emits a light signal $s_e$, which is typically modulated by a sinusoidal periodic function, in the form of
\begin{align}
s_e(t) &= s_0 \cdot \big(1 + m \cdot \sin(2\pi f t)\big),
\end{align}
where $s_0$ is the average intensity, $m$ is the modulation coefficient, $f$ is the modulation frequency, and $t$ is the time.
For compactness and \textit{w.l.o.g.}~we neglect $s_0$ and $m$ in the following and assume them to be equal to one.
In the optimal case of only direct reflection the received light $s_r$ is a scaled and phase shifted version of the emitted signal
\begin{align}
s_r(t) = r\cdot s_e(t - \Delta t) = r \cdot \big(1 + \sin(2\pi f t - \Delta\varphi)\big),\label{eq:signal_received}
\end{align}
where $r$ is the ratio of the light backscattered to the sensor from the surface, and $\Delta\varphi$ is the phase delay after the signal has traveled the distance $2d$, \textit{i.e.}~$\Delta\varphi = 4\pi f d / c$, where $c$ is the speed of light.
The received signal is correlated at the sensor with a phase shifted version of the emitted signal and averaged over the exposure time $\delta t$, resulting in the measurement
\begin{align}
m_\theta &= \frac{1}{\delta t}\int_{\delta t} s_r(t) \cdot s_e\left(t + \frac{\theta}{2\pi f}\right)\,dt.
\end{align}
Under the assumption $\delta t \gg 1/f$ the measurement $m_\theta$ can be approximated as~\cite{frank2009theoretical}
\begin{align}
m_\theta &= I + A \cdot \cos(\Delta\varphi + \theta),
\end{align}
where $I$ is called the intensity and $A$ the amplitude.
By measuring $m_\theta$ for multiple phase offsets $\theta$ the phase shift $\Delta\varphi$ and the distance $d$ can be reconstructed as~\cite{hansard2012tof_principles}
\begin{align}
\Delta\varphi &= \arctan \left(\frac{\sum_\theta {-\sin(\theta)\cdot m_\theta}}
{\sum_\theta \cos(\theta)\cdot m_\theta}\right),\label{eq:tof_phase}\\
d &= \frac{c\cdot \Delta\varphi}{4\pi f}.\label{eq:tof_depth}
\end{align}
Due to the periodic nature of the signal, the reconstructed distance $d$ using Eq.~\eqref{eq:tof_depth} is ambiguous for distances larger than $d_{max} = c / (2f)$.
To resolve this so-called phase wrapping, the common solution is to acquire measurements at different modulation frequencies $f_k$, which thus also have different maximal distances $d_{max}$.
In practice the received signal $s_r$ is not only the direct reflection after the time $\Delta t$, as in Eq.~\eqref{eq:signal_received}, but a composition of light scattered along various paths $P$ in the scene
\begin{align}
s_r(t) = \int_{P} r(p) \cdot \big(1 + \sin(2\pi f t - \Delta\varphi(p))\big)\,dp.
\end{align}
While the intensity $r(p)$ can be expected to be low after multiple reflections on an isolated path $p$, the accumulation over all possible paths $P$ leads to the aforementioned notable MPI distortion in the distance recovery, Eq.~\eqref{eq:tof_depth}.
Additionally, ToF-sensors suffer from the common camera noise sources in the form of photon shot noise, thermal shot noise and read noise, which are typically modeled jointly as an additive Gaussian noise on the measurement $m_\theta$.
Although $d$ from Eq.~\eqref{eq:tof_depth} denotes a distance, it is commonly referred to as ToF depth, as we will do in this work.
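The reconstruction in Eqs.~\eqref{eq:tof_phase} and \eqref{eq:tof_depth} can be illustrated with a short NumPy sketch. The snippet below is a hedged illustration rather than a camera pipeline: it simulates measurements $m_\theta = I + A\cos(\Delta\varphi + \theta)$ for a known distance at the four standard phase offsets and recovers the distance; `arctan2` is used so the phase is obtained on the full $[0, 2\pi)$ range.

```python
import numpy as np

C = 299_792_458.0  # speed of light in m/s

def tof_depth(m, thetas, f):
    """Recover phase shift and distance from correlation measurements
    m_theta via the arctangent formula; arctan2 resolves the phase
    on the full [0, 2*pi) range."""
    thetas = np.asarray(thetas, dtype=float)
    num = np.sum(-np.sin(thetas) * m)
    den = np.sum(np.cos(thetas) * m)
    dphi = np.arctan2(num, den) % (2 * np.pi)
    d = C * dphi / (4 * np.pi * f)
    return dphi, d

# Simulate m_theta = I + A*cos(dphi + theta) for a known distance.
f = 20e6                                # 20 MHz modulation
d_true = 2.5                            # below d_max = c/(2f) ~ 7.5 m
dphi_true = 4 * np.pi * f * d_true / C
thetas = [0.0, np.pi / 2, np.pi, 3 * np.pi / 2]
m = np.array([1.0 + 0.8 * np.cos(dphi_true + t) for t in thetas])
dphi_rec, d_rec = tof_depth(m, thetas, f)
```

For the four offsets $0, \pi/2, \pi, 3\pi/2$ the sums reduce to the familiar form $\Delta\varphi = \operatorname{atan2}(m_{3\pi/2} - m_{\pi/2},\, m_0 - m_\pi)$.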
\section{Proposed Method}
\label{sec:method}
\begin{figure}
\centering
\includegraphics[width=0.9\linewidth]{figures/camera_projection.pdf}
\caption{The non-parallelism of camera rays (left) leads to a distortion in the resulting depth image, as the camera measures the distance along the rays (middle).
By applying the inverse camera projection $\mathcal{P}_{C\rightarrow G}$ the depth map is projected into 3D space, and, in the ideal case, aligns with the scene geometry (right).}
\label{fig:global_space}
\end{figure}
In this work we propose to incorporate 3D learning techniques to exploit the spatial structure of the scene geometry.
In contrast to 2D convolutions, where spatial relations are encoded implicitly in the pixel grid location, a 3D convolution relies on explicitly given 3D coordinates~\cite{hermosilla2018monte}.
In order to represent the 2.5D data in 3D space, we use the inverse $\mathcal{P}_{C\rightarrow G}$ of the camera projection $\mathcal{P}_{G\rightarrow C}$ to create an initial spatial position for each pixel, as illustrated in \cref{fig:global_space}.
This allows us to denoise the depth in global coordinates, enabling the 3D layers to consider the 3D neighborhoods of the individual pixels and to learn on the actual scene geometry, undistorted by the non-linear camera transformation.
In the case of ToF-data we derive an initial 3D coordinate from the ToF depth given by Eq.~\eqref{eq:tof_depth}.
Of course, this initial depth is inherently noisy, and we expect a 3D network to benefit from denoised 3D positions that lie closer to the scene geometry.
Thus, we propose to iteratively update the 3D positions of the points, enabling the network to optimize the latent point cloud in between its 3D layers.
To keep the point positions aligned with the underlying structure given by the pixel grid, we restrict each point's movement to its respective camera ray.
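As a minimal illustration of the projection into 3D, the following NumPy sketch lifts a per-pixel distance map into camera coordinates by placing each pixel at its measured distance along its unit-length camera ray; the pinhole intrinsics `fx, fy, cx, cy` are assumed parameters for illustration, not values from our setup. Storing the unit rays alongside the points is what later allows point updates to be restricted to the ray directions.

```python
import numpy as np

def unproject(depth, fx, fy, cx, cy):
    """Lift a per-pixel distance map into 3D camera coordinates by
    placing each pixel at its measured distance along its camera ray
    (assumed pinhole intrinsics fx, fy, cx, cy).

    depth : (H, W) array of distances along the rays
    returns points (H, W, 3) and unit ray directions (H, W, 3)
    """
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))  # pixel grid
    rays = np.stack([(u - cx) / fx, (v - cy) / fy, np.ones((H, W))],
                    axis=-1)
    rays /= np.linalg.norm(rays, axis=-1, keepdims=True)  # unit rays
    return depth[..., None] * rays, rays
```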
However, compared to 2D convolutions, 3D convolutions are more demanding in compute power and memory consumption.
To reduce this load, we embed our proposed 3D layers in a 2D network and introduce 2.5D pooling layers to reduce the spatial resolution in the 3D convolutions.
This is necessary because, for example, a $320\times240$ image would result in $76.8k$ points; for comparison, common networks for 3D object recognition typically use $1k$ points per model.
In the following, we first define the 2.5D pooling and RADU convolution layers, before describing our network architecture, the loss function, and the cyclic self-training procedure for domain adaptation.
\begin{figure*}
\centering
\includegraphics[width=\linewidth]{figures/AvgPool_architecture_rot.pdf}
\caption{Proposed network architecture. We embed the 3D RADU convolutions in two 2D blocks and use an initial ToF depth given by Eq.~\eqref{eq:tof_depth} to compute initial spatial positions for the 2D features. After the 3D block the point coordinates updated by the RADU layers are projected back to a coarse depth image, which is used as an additional input feature to the following 2D block.}
\label{fig:net_architecture}
\end{figure*}
\begin{figure}[b]
\centering
\includegraphics[width=\linewidth]{figures/RADU.pdf}
\caption{Realization of our RADU convolutional layer based on a 3D Monte-Carlo Convolution. In contrast to standard 3D point convolutions the spatial positions are not only used to predict the kernel weights but are also updated after each convolution.}
\label{fig:conv}
\end{figure}
\subsection{2.5D Pooling}\label{ssec:2.5D_pool}
Several methods have been proposed for pooling operations on 3D point clouds, \eg Poisson disk sampling~\cite{hermosilla2018monte} and cell average sampling~\cite{thomas2019KPConvsphere}.
However, these methods suffer from drawbacks when applied to 2.5D data.
Cell average sampling creates a new point as the mean point in a pre-defined 3D grid, which in general does not lie on the camera ray of a pixel.
Poisson disk sampling chooses a subset of the given points based on their distances in 3D space, so the selected points still align with the original camera rays. However, this method optimizes the distribution of points in 3D space, which can result in non-uniform distributions when projected back into 2D space.
Instead, we propose a 2.5D pooling operation which pools the points in 3D according to the structure induced by the pixel grid, considering the points' 2D neighborhoods given implicitly through the pixel grid and the camera ray direction associated with the corresponding pixel.
For this, we pool only the depth values of the 3D points of a $k\times k$ 2D patch using an order-invariant operation, \eg maximum or average pooling, and place a point with the pooled depth on the respective camera ray at the desired reduced resolution $N/k\times N/k$.
The feature of the resulting point is computed by performing a second pooling operation on the features inside the point's 2D neighborhood.
This way, each resulting point represents a 2D neighborhood of fixed size and aligns with the camera rays.
Note that this is different from (a) pooling the three point coordinates directly, where the result is not necessarily aligned with the camera rays, and (b) 2D pooling on the distance image, as the projections $\mathcal{P}_{G\rightarrow C}$ and $\mathcal{P}_{C\rightarrow G}$ are non-linear, see~\cref{fig:global_space}.
Furthermore, this allows us to exploit existing 2D Pooling operations, dropping the need for costly 3D neighborhood and sampling computations for pooling.
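The 2.5D pooling described above can be sketched as follows; this is a simplified illustration with average pooling over non-overlapping $k\times k$ patches, where `rays_low`, the unit camera rays at the reduced resolution, is an assumed precomputed input. Only the depth is pooled within each patch, and the resulting point is re-placed on the coarse pixel's ray, so the pooled point cloud stays aligned with the camera rays.

```python
import numpy as np

def pool_25d(depth, feats, rays_low, k=2):
    """2.5D pooling sketch: average-pool depths and features over
    k x k pixel patches, then place each pooled depth back on the
    camera ray of the corresponding coarse pixel.

    depth    : (H, W) per-pixel depths along the rays
    feats    : (H, W, C) per-pixel features
    rays_low : (H//k, W//k, 3) unit rays at the reduced resolution
    """
    H, W = depth.shape
    d_low = depth.reshape(H // k, k, W // k, k).mean(axis=(1, 3))
    f_low = feats.reshape(H // k, k, W // k, k, -1).mean(axis=(1, 3))
    points_low = d_low[..., None] * rays_low  # back onto the rays
    return points_low, f_low
```

Because the pooling happens on the regular pixel grid, no 3D neighborhood search or sampling is required.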
\subsection{Ray Aligned Depth-Update}\label{ssec:RADU}
In order to optimize the point position during the network execution, we propose to extend a given 3D convolution to predict an additional feature channel per layer.
This value is used to shift the point along the respective camera ray; we call the resulting operation a Ray Aligned Depth Update (RADU) convolution.
By updating along the camera ray, the point position stays consistent to the 2.5D nature of the point cloud, see~\cref{fig:teaser}.
In order to stabilize the depth updates, we bound the update $p^u$ to the range $(-\alpha, \alpha)$ using a scaled tanh function.
In our experiments we set $\alpha = 0.1$, and provide an ablation study on different values for this hyperparameter in the supplementary material.
Formally, given a 3D convolution operator $\mathrm{conv}$, a point cloud $\{p_i^{in}\}\subset\mathbb{R}^3$ with associated camera rays $\{r_i\}\subset\mathbb{R}^3$, and input features $\{f_i^{in}\}\subset\mathbb{R}^{C_{in}}$, the RADU convolution on point $p_j^{in}$ with neighborhood $\mathcal{N}_j\subset\mathbb{N}$ is given as
\begin{align}
(f_j^{out}, p_j^u) &= \mathrm{conv}\left(\{f_i^{in}\}_{i\in\mathcal{N}_j}, \{p_i^{in}\}_{i\in\mathcal{N}_j}\right), \\
p_j^{out} &= p_j^{in} + \alpha \cdot \tanh(p_j^u) \cdot r_j,
\end{align}
where $f_j^{out}$ is the output feature of the point $p_j^{out}$.
This extension is independent of the type of 3D convolution used; in our experiments, we implement it based on the Monte-Carlo convolution~\cite{hermosilla2018monte}, as illustrated in \cref{fig:conv}.
This type of 3D convolution has been shown to work well with non-uniformly sampled point clouds, as in our case, where the density varies with the distance of the points to the camera; we investigate this further in \cref{ssec:ablation_conv_type}.
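The ray-aligned update itself reduces to one line per layer. The sketch below assumes the convolution has already produced the extra update channel $p^u$ and applies the bounded shift along each point's camera ray:

```python
import numpy as np

def radu_update(points, rays, updates, alpha=0.1):
    """Ray Aligned Depth Update: squash the predicted update channel
    with a scaled tanh and move each point along its own unit camera
    ray, bounding the shift to (-alpha, alpha) per layer.

    points  : (N, 3) current point positions
    rays    : (N, 3) unit camera-ray directions
    updates : (N,)   raw update channel p^u from the convolution
    """
    return points + alpha * np.tanh(updates)[:, None] * rays
```

Because the shift is a scalar multiple of the stored ray, the updated point cloud remains consistent with the 2.5D pixel structure.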
\subsection{Network Architecture}\label{ssec:net_arch}
Our network architecture is illustrated in \cref{fig:net_architecture}.
It consists of an initial stack of three 2D convolutions with kernel size $3\times3$, which is followed by a 2.5D pooling layer, using average pooling on both depth and features, with a stride of $8\times8$.
This 2D block is followed by a stack of three 3D RADU convolutions with increasing receptive fields of $0.1\,m, 0.2\,m,$ and $0.4\,m$.
After the 3D block we use bilinear upsampling to increase the spatial resolution of the intermediate features and the updated depth values.
The coarse depth prediction of the 3D block is projected back into 2D using the inverse camera transform.
Both the upscaled depth and features are processed by a second block of three 2D convolutions with kernel size $3\times3$.
We further introduce a skip connection between the 2D blocks.
In between convolutions we use leaky ReLU as a non-linearity.
As input to our method we use the same multi-frequency ToF features used in the works of Agresti \etal~\cite{agresti2018mpiremoval, agresti2019unsupervised}.
That is, given measurements at three different frequencies $f_k$, $k=1,2,3$, we use five input features, $d_1, d_2 - d_1, d_3 - d_1, A_2 / A_1 - 1,$ and $A_3 / A_1 -1$, where $d_k$ is the ToF depth and $A_k$ is the amplitude at frequency $f_k$.
We follow the argumentation of Agresti \etal~\cite{agresti2018mpiremoval} that the differences $d_k - d_1, k=2,3$ encode the frequency-dependent influence of MPI on the depth recovery, and the relative amplitudes $A_k / A_1, k=2,3$ provide information about the strength of the MPI for a given pixel.
The depth $d_1$ further serves as initial ToF depth for the projection $\mathcal{P}_{C\rightarrow G}$ into 3D space.
Further, we augment the input data using rotation, mirroring, and noise; a more detailed description of the data augmentation can be found in the supplementary material.
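The five input features can be assembled directly from the per-frequency depths and amplitudes. The NumPy sketch below follows the feature definition above, with frequencies indexed $0..2$ instead of $1..3$:

```python
import numpy as np

def input_features(d, A):
    """Assemble the five per-pixel input features following
    Agresti et al.: lowest-frequency depth, two depth differences,
    and two relative amplitudes.

    d : (3, H, W) ToF depths d_1, d_2, d_3 at frequencies f_1 < f_2 < f_3
    A : (3, H, W) amplitudes A_1, A_2, A_3
    """
    return np.stack([
        d[0],             # d_1, also used to unproject into 3D
        d[1] - d[0],      # d_2 - d_1: frequency-dependent MPI cue
        d[2] - d[0],      # d_3 - d_1
        A[1] / A[0] - 1,  # A_2 / A_1 - 1: MPI-strength cue
        A[2] / A[0] - 1,  # A_3 / A_1 - 1
    ])
```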
\subsection{Coarse-Fine Loss}\label{ssec:loss}
The 3D block of our network architecture produces an intermediate coarse depth estimate which is fed as an additional input to the subsequent 2D layers.
To guide the network to predict an adequate representation of the 3D geometry we optimize both the final output $\hat{d}_{out}$ and the coarse 3D representation of the 3D blocks $\hat{d}_{3D}$:
\begin{align}\label{eq:loss}
\mathcal{L} = \|d_{gt} - \hat{d}_{out}\|_1 + \|d_{gt} - \hat{d}_{3D}\|_1.
\end{align}
The coarse depth $\hat{d}_{3D}$ is not predicted by a final layer of the 3D network but constructed iteratively as
\begin{align}
(\hat{d}_{3D})_j = \mathcal{P}_{G\rightarrow C}\left(p^{init}_j + \sum_{l=1}^3 \alpha \cdot \tanh(p_{j,l}^u) \cdot r_j \right),
\end{align}
where $p^{init}_j$ is the initial 3D position of point $j$.
Thus, each RADU layer receives gradients directly from the loss function, preventing vanishing gradients, comparable to the effect of skip connections on the gradient flow. However, as each layer is optimized to produce correctly denoised depths, this also increases the risk of overfitting.
The influence of the choice of $\alpha$ is further discussed in the supplementary material.
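The loss of Eq.~\eqref{eq:loss} amounts to two L1 terms; a minimal sketch, using the per-pixel mean (\textit{i.e.}~the $\|\cdot\|_1$ norm up to normalization):

```python
import numpy as np

def coarse_fine_loss(d_gt, d_out, d_3d):
    """Coarse-fine L1 loss: penalize both the final 2D prediction and
    the coarse depth projected from the 3D block, so every RADU layer
    receives a direct gradient signal."""
    return np.abs(d_gt - d_out).mean() + np.abs(d_gt - d_3d).mean()
```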
\subsection{Unsupervised Domain Adaptation}\label{sec:DA}
\begin{algorithm}
\caption{Adapted Cycled Self-Training Procedure}\label{alg:cst}
\begin{algorithmic}
\Require $n_{cycle}\in\mathbb{N}, p\in[0, 1], D_{real}, D_{syn},\text{ network }N$
\For{$epoch$}
\If{$epoch\mod n_{cycle}\equiv0$}
\State $F_{in} \gets D_{real}$ \Comment{Get real world data}
\State $\hat{d}_{out} = N(F_{in})$ \Comment{predict, unaugmented}
\State $S_2 \gets \{d_{gt}:\hat{d}_{out}\}$ \Comment{Save pseudo labels}
\EndIf
\For{$training\ step$}
\If{$rand.unif([0,1])<p$}
\State $(F_{in}, d_{gt}) \gets D_{real}$\Comment{Real with pseudo label}
\Else
\State $(F_{in}, d_{gt}) \gets D_{syn}$\Comment{Synthetic with label}
\EndIf
\State $(F_{in}, d_{gt}) = augment(F_{in}, d_{gt})$
\State $(\hat{d}_{out}, \hat{d}_{3D}) = N(F_{in})$
\State $minimize\ \mathcal{L},\text{~Eq.\eqref{eq:loss}, on }(\hat{d}_{out}, \hat{d}_{3D})$
\EndFor
\EndFor
\end{algorithmic}
\end{algorithm}
To improve the performance of our method on real data we investigate a cyclic self-training procedure, derived from existing self-training methods for other tasks~\cite{liu2021cycle, lee2013pseudo, kumar2020understanding}.
We evaluate a network, which is pre-trained on synthetic data, on unlabeled real data and use the predictions as pseudo labels in the following training phase.
During training we choose randomly between synthetic data with labels and real data with pseudo-labels, to prevent the network from overfitting to the pseudo-labels.
We repeat this process multiple times by updating the pseudo-labels every $n_{cycle}$ epochs.
To avoid providing the exact same input during pseudo label generation and training we create pseudo-labels on unaugmented data and use augmented input during training.
The procedure is summarized in \cref{alg:cst}.
We refrain from training a teacher network on labeled real world data for pseudo-label generation to keep the assumption of U-DA and to make our approach applicable to settings where no labeled real data is available.
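\cref{alg:cst} can be condensed into the following Python sketch; the `net` interface (`predict`, `train_step`) and the `augment` hook are hypothetical placeholders, and the defaults for `n_cycle` and `p` are illustrative rather than the values used in our experiments:

```python
import random

def cyclic_self_training(net, augment, syn, real, epochs,
                         n_cycle=2, p=0.5):
    """Cyclic self-training sketch: every n_cycle epochs, pseudo-labels
    for the unlabeled real data are regenerated from the current
    network on unaugmented inputs; each training step then draws
    either real data with pseudo-labels (probability p) or labeled
    synthetic data, and always trains on augmented samples."""
    pseudo = []
    for epoch in range(epochs):
        if epoch % n_cycle == 0:
            # refresh pseudo-labels on unaugmented real inputs
            pseudo = [(x, net.predict(x)) for x in real]
        for _ in range(len(syn)):
            if random.random() < p:
                x, y = random.choice(pseudo)  # real, pseudo-label
            else:
                x, y = random.choice(syn)     # synthetic, label
            net.train_step(*augment(x, y))
    return net
```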
\begin{figure}[b]
\centering
\includegraphics[width=0.9\linewidth]{figures/dataset_vis.pdf}
\caption{Example scene from our Cornell-Box dataset. The top row shows the four measurements $m_\theta$ at 20MHz, bottom row shows, from left to right, ground truth depth and ToF depths using Eq.~\eqref{eq:tof_depth} at 20MHz, 50MHz, and 70MHz, with phase wrapping.}
\label{fig:dataset}
\end{figure}
\begin{figure*}
\centering
\includegraphics[width=\linewidth]{figures/coolwarm_real_results.pdf}
\caption{Depth error maps with MAE on the real datasets S4 (left) and S5 (right) of our method using both RADU convolutions and cyclic self-training for unsupervised domain adaptation.
Areas where no ground truth depth is available are masked in black.
The zoom in on the left displays both depth and error, with enhanced color coding on the error.}
\label{fig:real_results}
\end{figure*}
\section{Cornell Box ToF Dataset}\label{sec:dataset}
To allow a broader evaluation of our method, we generate a new large-scale synthetic dataset.
As recent advances in ToF-camera hardware~\cite{miller2020large} could allow for higher resolution ToF sensors in the future, we render our dataset at a comparatively high resolution of $600\times600$ pixels.
Inspired by Miller \etal~\cite{miller2020large}, we simulate the properties of a Raspberry Pi 3 camera equipped with an EAM for modulation.
\begin{table}[b]
\centering
\begin{tabular}{lccccl}
\toprule
Dataset & Type & GT & $m_\theta$ & Size & Resolution\\
\midrule
S1~\cite{agresti2018mpiremoval} & Syn. & Yes & No & 54 & 320$\times$240 \\
S2~\cite{agresti2019unsupervised} & Real & No & No & 96 & 320$\times$239 \\
S3~\cite{agresti2019unsupervised} & Real & Yes & No & 8 & 320$\times$239 \\
S4~\cite{agresti2019unsupervised} & Real & Yes & No & 8 & 320$\times$239 \\
S5~\cite{agresti2018mpiremoval} & Real & Yes & No & 8 & 320$\times$239 \\
\midrule
FLAT~\cite{Guo_2018_ECCV} & Syn. & Yes & Yes & 1.2k & 424$\times$512 \\
\midrule
Cornell-Box & Syn. & Yes & Yes & 21.3k & 600$\times$600\\
\bottomrule
\end{tabular}
\caption{Properties of the datasets considered in our experiments.}
\label{tab:datasets}
\end{table}
To create a challenging setup for ToF-denoising we generate 142 scenes inspired by the Cornell box layout~\cite{goral1984modeling}, to ensure high levels of MPI.
We render each scene from 50 viewpoints with 3 different material properties, including dark materials for low SNR values, which results in 21.3k different renderings.
The data is split into 116 training scenes, 13 validation scenes, and 13 test scenes.
Each rendering is processed to simulate raw measurements for sinusoidal modulations at 20MHz, 50MHz, and 70MHz with phase offsets $0, \pi/2, \pi,$ and $3\pi/2$, resulting in 12 correlation measurements per capture.
To our knowledge, our dataset exceeds existing ToF datasets in both size and resolution, see Table~\ref{tab:datasets}.
We refer the reader to the supplementary for further details about the dataset.
\section{Experiments}\label{sec:experiments}
First, we test our method on the established datasets of Agresti \etal~\cite{agresti2018mpiremoval, agresti2019unsupervised}, who provide a synthetic and several real datasets for the SoftKinetic camera, containing both MPI and shot noise, to evaluate our method including U-DA.
We refer to these datasets using the same notation as previous authors~\cite{agresti2019unsupervised, buratto2021deep}, see \cref{tab:datasets}.
As the real world datasets are rather small, we further evaluate our method on two larger synthetic datasets: our dataset described in \cref{sec:dataset}, where we additionally consider phase wrapping in the input features, and the FLAT dataset~\cite{Guo_2018_ECCV}, which simulates a Kinect2 sensor and lets us investigate the influence of non-sinusoidal modulations of $s_e(t)$, which violate the assumptions from \cref{sec:tof_principle}.
Further, we perform an ablation experiment comparing RADU layers to other convolution layers.
For additional information about hyperparameter settings and other aspects of the experiments, we refer to the supplementary material.
\begin{table}[t]
\centering
\begin{tabular}{lcccc}
\toprule
& \multicolumn{2}{c}{S4} &\multicolumn{2}{c}{S5}\\
Method & MAE & Relative & MAE & Relative\\
& [cm] & Error & [cm] & Error \\
\midrule
ToF (low freq.) & 7.28 & - & 5.06 & - \\
ToF (high freq.) & 5.43 & - & 3.62 & - \\
SRA~\cite{freedman2014sra}$^\dag$ & 5.11 & 94.1\% & 3.37 & 93.1\% \\
DeepToF~\cite{marco2017deeptof}$^\dag$ & 5.13 & 70.5\%$^\ast$ & 6.68 & 132\%$^\ast$ \\
+calibration~\cite{agresti2018mpiremoval}$^\dag$ & 5.46 & 75.0\%$^\ast$ & 3.36 & 66.4\% \\
TIR~\cite{buratto2021deep}$^\dag$ & 2.60 & 47.9\% & 1.88 & 52.0\% \\
CFN~\cite{agresti2018mpiremoval}$^\dag$ & 3.19 & 58.7\% & 2.22 & 60.5\% \\
+ U-DA~\cite{agresti2019unsupervised}$^\dag$ & 2.36 & 43.5\% & 1.66 & 46.1\% \\
\hline
RADU & \textbf{1.83} & \textbf{33.7}\% & 2.59 & 71.5\% \\
RADU + U-DA & 2.11 & 38.8\% & \textbf{1.63} & \textbf{45.0}\% \\
\hhline{=====}
RADU + S-DA & 1.89 & 34.8\% & 1.53 & 42.3\% \\
\bottomrule
\end{tabular}
\caption{Results of various methods on the real world datasets S4 and S5. Each row reports the MAE and the relative error with respect to the phase unwrapped high frequency ToF depth. ($\ast$: relative to low frequency, $\dag$: numbers taken from Buratto \etal~\cite{buratto2021deep})}
\label{tab:results_ToF_denoising}
\end{table}
\subsection{Experiment 1: Real World Data}\label{ssec:real_world_experiment}
The datasets S1-S5 contain intensities, amplitudes, and phase unwrapped ToF depths.
Similar to Agresti \etal~\cite{agresti2019unsupervised} we train our network on the synthetic dataset S1 and use the real dataset S3 for validation.
In the second stage we use the unlabeled real world dataset S2 in our cycled self-training procedure for U-DA.
We evaluate our method with and without U-DA on the two real world datasets S4, with lower MPI levels but more detailed objects, and S5, a `box' dataset with higher MPI levels but fewer details.
We compare to the Coarse-Fine-Network (CFN), with~\cite{agresti2019unsupervised} and without U-DA~\cite{agresti2018mpiremoval}, the DeepToF network~\cite{marco2017deeptof}, the Transient Image Reconstruction network (TIR)~\cite{buratto2021deep}, and the non-learned Sparse Reconstruction Analysis (SRA)~\cite{freedman2014sra}. The Mean Absolute Error (MAE) and the remaining relative error are reported in \cref{tab:results_ToF_denoising}.
The combination of our RADU network with cycled self-training for U-DA outperforms existing approaches on both datasets and successfully removes noise and MPI from the unwrapped ToF depth images, as can be seen in \cref{fig:real_results}.
To additionally evaluate the stability of the cyclic self-training, we repeat the fine-tuning of our method 10 times and measure a standard deviation of the MAE of 0.072cm on S4 and 0.021cm on S5, indicating that the proposed U-DA approach is stable.
Interestingly, the performance of our RADU network drops after the domain adaptation on the dataset S4, which we attribute to the fact that the unlabeled dataset S2 contains `box' scenes with few details, similar to those in S5, whereas S4 contains more complex objects.
For comparison, we also report the results of our method after a Supervised Domain Adaptation (S-DA) using the small labeled real world dataset S3 for fine-tuning.
\begin{table}
\centering
\begin{tabular}{lcccc}
\toprule
& \multicolumn{2}{c}{Cornell-Box} & \multicolumn{2}{c}{FLAT} \\
Method & MAE & Relative & MAE & Relative\\
& [cm] & Error & [cm] & Error\\
\midrule
ToF (low freq.) & 29.0 & - & 59.34 & -\\
ToF (high freq.) & 11.14 & - & - & -\\
DeepToF~\cite{marco2017deeptof} & 10.17 & 35.1\%$^\ast$ & 23.0 & 38.8\%$^\ast$\\
CFN~\cite{agresti2018mpiremoval,agresti2019unsupervised} & 3.99 & 35.8\%$^\dag$ & 6.29 & 10.6\%$^\ast$\\
End2End~\cite{su2018end2end} & 5.99 & 53.8\%$^\dag$ & 6.20 & 10.5\%$^\ast$\\
\hline
RADU & \textbf{3.64} & \textbf{32.7}\%$^\dag$ & \textbf{3.31} & \textbf{5.58}\%$^\ast$\\
\bottomrule
\end{tabular}
\caption{Results of various methods on unseen data from our synthetic Cornell-Box dataset and the FLAT dataset. Each row reports the depth MAE and the relative error with respect to a phase unwrapped ToF depth. ($\ast$: relative to low frequency, $\dag$: relative to high frequency)}
\label{tab:results_on_synthetic}
\end{table}
\subsection{Experiment 2: Cornell-Box Dataset}\label{ssec:our_data_experiment}
We train instances of the previously mentioned CFN, DeepToF and our RADU network on our Cornell-Box dataset described in \cref{sec:dataset}.
Since our dataset contains raw measurements $m_\theta$, we further compare to the End2End network~\cite{su2018end2end}.
For a fairer comparison, we perform a hyperparameter tuning for each method; we refer to the supplementary material for details.
As our dataset is purely synthetic we do not use domain adaptation strategies.
We evaluate on the test images, where our method achieves the lowest MAE of all compared methods, as can be seen in \cref{tab:results_on_synthetic}.
The remaining relative error is comparable to the previous experiment, showing that our network, as well as CFN, are able to handle phase wrapping in the input features.
The iterative denoising in the latent 3D space in between the three RADU convolutions is shown in \cref{fig:radu_step}.
However, while our method yields better results than the others, objects with low SNR and scenes with high MPI can still lead to failures in the reconstruction, as shown in \cref{fig:ourdata_results}.
\begin{figure}[b]
\centering
\includegraphics[width=\linewidth]{figures/coolwarm_OurData_results.pdf}
\caption{Visual results for two challenging scenes from our Cornell-Box dataset.
The top scene contains an object with low SNR at the bottom, where all methods fail to retrieve the correct depth.
The bottom scene exhibits high MPI, and shows a failure case of our method, where the object boundaries are blurred.}
\label{fig:ourdata_results}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{figures/RADU_Stepv2.pdf}
\caption{Visualization of the latent point clouds on a corner scene. The depth reconstruction improves after every RADU layer.}
\label{fig:radu_step}
\end{figure}
\subsection{Experiment 3: FLAT Dataset}\label{ssec:FLAT_experiment}
In a third experiment, we train our method on the FLAT dataset, which contains nine raw correlations for three frequencies, simulating the Kinect2 ToF sensor. Unlike in the previous cases, the low frequency signal is non-sinusoidal~\cite{Guo_2018_ECCV}, which introduces additional distortions in the depth estimation using Eq.~\eqref{eq:tof_depth}.
A further challenge is the domain gap between the training data, which consists largely of images of isolated floating objects and thus exhibits low MPI levels, while the test data contains complete scenes.
We compare our method to the same learned methods as in the previous experiment, again performing a hyperparameter search for each method.
We evaluate on the 120 test images, where our method achieves the lowest MAE compared to the other learned methods, see \cref{tab:results_on_synthetic}.
In \cref{fig:flat_results} we show error images for the different methods.
We observe that End2End can produce competitive results in some cases but tends to create artifacts, which we assume stem from the domain gap between training and test data.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{figures/coolwarm_FLAT_results.pdf}
\caption{Depth maps on the FLAT dataset.
The top row shows an example where both RADU and End2End produce an almost noise free prediction.
The bottom row shows a failure case of End2End.}
\label{fig:flat_results}
\end{figure}
\subsection{Ablation: Latent 3D Representation}\label{ssec:ablation_conv_type}
To validate the benefits of our 3D RADU convolutions we evaluate the performance when replacing the RADU convolutions with 2D, 2.5D or 3D point convolutions.
In detail, we compare to 2.5D convolutions~\cite{xing20192_5Dconv}, which perform three 2D convolutions on separate depth ranges; KPConv~\cite{thomas2019KPConvsphere}, which uses spherical kernel points to represent the 3D kernel function; MCConv~\cite{hermosilla2018monte}, which uses kernel MLPs and a density estimation; and PointConv~\cite{wu2019pointconv}, which uses a kernel MLP with a learned density estimation.
We conduct a smaller experiment without domain adaptation, using S1 for training and S3 for validation.
For a fair comparison we conduct a hyperparameter search for each method, and report results of the validation MAE in \cref{tab:ablation_convolution_type}.
The results show that latent 3D information can help to improve the performance, but the choice of the convolution type is critical.
Both the 2.5D convolution and MCConv yield better results than the 2D convolution, on both training and validation data, outperformed only by our proposed RADU extension of the MCConv layer.
\begin{table}[]
\centering
\begin{tabular}{lcc}
\toprule
Method & Training (S1) & Validation (S3) \\
& MAE [cm] & MAE [cm] \\
\midrule
2D Conv & 9.68 & 2.72\\
2.5D Conv~\cite{xing20192_5Dconv} & 8.79 & 2.61 \\
3D KPConv~\cite{thomas2019KPConvsphere} & 10.86 & 4.00\\
3D PointConv~\cite{wu2019pointconv} & 10.49 & 3.42\\
3D MCConv~\cite{hermosilla2018monte} & 8.38 & 2.51\\
3D MCConv + RADU & 7.87 & 2.28 \\
\bottomrule
\end{tabular}
\caption{Results of our network architecture with different layer types in the latent block. We report MAE on training (synthetic) and validation (real) data after hyperparameter optimization.}
\label{tab:ablation_convolution_type}
\end{table}
\section{Limitations}\label{sec:limitations}
While we covered several error sources present in ToF data in our experiments, we did not investigate motion artifacts that occur in dynamic scenes~\cite{hansard2012tof_principles}, which would require additional considerations in the network architecture as shown in previous works~\cite{Guo_2018_ECCV, qiu2019deep}.
However, we believe that the latent denoised 3D point cloud representation of our network can potentially be used to improve image alignment.
Further, the results on our Cornell-Box dataset indicate that high MPI, low SNR and fine details can have a drastic impact on the depth estimation, not only for our method but for all evaluated methods.
Finally, learned methods, including ours, are trained for specific sensor data and are thus unable to generalize to, for example, different modulation frequencies. We believe that learned methods able to treat sensor properties as input parameters, as in traditional approaches~\cite{freedman2014sra}, are a promising line of research.
\section{Conclusion}\label{sec:conclusion}
In this paper we presented an extension of unstructured 3D convolutions for ToF denoising, which exploits structured information from depth images to iteratively denoise the point cloud in 3D space.
The experiments indicate that latent 3D representations improve the denoising capabilities of neural networks for various error sources present in ToF data.
Further, we demonstrated that cyclic self-training using pseudo-labels on real data can effectively be used for unsupervised domain adaptation on ToF data and, applied to our network, outperforms existing methods on two real world data sets.
While we demonstrated our RADU layers in the context of ToF denoising, they can in principle be applied to any task where 2.5D information is present, in order to benefit from both a latent 3D representation and an iterative denoising of the former.
\section{Acknowledgements}
This project is financed by the Baden-Württemberg Stiftung gGmbH.
We thank Julio Marco for his information on adapting the transient renderer, and Markus Miller and Rainer Michalzik for their insights on the ToF working principle and noise simulation.
\section{Further Insights on ToF Working Principle}
A convenient formulation for the sinusoidal part $v$ of the measurements $m_\theta$ is to represent it in the complex plane $\mathbb{C}$ via
\begin{align}
v &= A\cdot e^{i\Delta\varphi}, \\
Re(v) &= \sum_\theta -\sin(\theta)\cdot m_\theta, \\
Im(v) &= \sum_\theta \cos(\theta)\cdot m_\theta.
\end{align}
This allows us to express the measurements $m_\theta$ as~\cite{frank2009theoretical}
\begin{align}
m_\theta &= I + A \cos(\Delta\varphi + \theta), \label{eq:app_tof_measurement}\\
&= I + Re\left(v\cdot e^{i\theta}\right).
\end{align}
Using this notation, the amplitude $A$, the intensity $I$ and the phase delay $\Delta\varphi$ can be recovered as
\begin{align}
A &= \|v\|_2,\label{eq:app_amplitude}\\
I &= \frac{1}{N}\sum_\theta m_\theta, \label{eq:app_intensity}\\
\Delta\varphi &= \arg(v)\\
&= \arctan \left(\frac{Re(v)}{Im(v)}\right),\label{eq:app_tof_phase}
\end{align}
where $N$ is the number of phase shifted measurements $m_\theta$.
Note that this is not an exact solution but a least-squares optimal solution \cite{frank2009theoretical}.
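For four equally spaced phase offsets, the recovery above can be checked numerically. The ground-truth values below are hypothetical, and a constant $2/N$ normalization is applied to $\|v\|_2$ so the amplitude comes out in the original units (a constant scale the text absorbs):

```python
import numpy as np

# Hypothetical ground-truth values for illustration.
I_true, A_true, dphi_true = 5.0, 2.0, 0.7

thetas = np.array([0.0, np.pi / 2, np.pi, 3 * np.pi / 2])
m = I_true + A_true * np.cos(dphi_true + thetas)   # N = 4 phase measurements

# Components of v following the text's convention.
re_v = np.sum(-np.sin(thetas) * m)                 # proportional to sin(dphi)
im_v = np.sum(np.cos(thetas) * m)                  # proportional to cos(dphi)

N = len(thetas)
I_est = m.mean()                                   # intensity
A_est = (2.0 / N) * np.hypot(re_v, im_v)           # amplitude, 2/N normalization
dphi_est = np.arctan2(re_v, im_v)                  # phase delay, arctan(Re/Im)
```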
Given measurements at two different frequencies $f_1, f_2$, a computationally cheap approach~\cite{hansard2012tof_principles} to unwrap the distances is to minimize
\begin{align}
\underset{m, n}{\min} \big| d(f_1) + m \cdot d_{max}(f_1) - d(f_2) - n\cdot d_{max}(f_2) \big|. \label{eq_tof_unwrap}
\end{align}
We use this approach to compute the phase unwrapped high frequency ToF depth in the experiment on our Cornell-Box dataset.
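A brute-force implementation of this minimization can be sketched as follows; the function name, the search bound `max_wraps`, and the unambiguous ranges $d_{max}(f) = c/(2f)$ are assumptions of this sketch:

```python
import numpy as np

C = 299_792_458.0  # speed of light [m/s]

def unwrap_two_freq(d1, d2, f1, f2, max_wraps=5):
    """Two-frequency phase unwrapping by exhaustive search over (m, n)."""
    dmax1, dmax2 = C / (2 * f1), C / (2 * f2)
    best_err, best_d = np.inf, d1
    for m in range(max_wraps):
        for n in range(max_wraps):
            # Candidate unwrapped distances for each frequency.
            err = abs((d1 + m * dmax1) - (d2 + n * dmax2))
            if err < best_err:
                best_err, best_d = err, d1 + m * dmax1
    return best_d
```

With `max_wraps` large enough to cover the scene depth, the candidate pair minimizing the residual yields the unwrapped distance.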
\section{Extension of MC-Convolutions with RADU}
\noindent\textbf{Monte-Carlo Convolution} The Monte-Carlo point convolution of Hermosilla \etal~\cite{hermosilla2018monte} approximates the convolution of two continuous functions, the features $f$ and the kernel function $g$, via Monte-Carlo numerical integration, where $f$ is known only at the discrete positions $\{p_i\}$ of the point cloud.
Formally, let a point cloud $\{p_i^{in}\}\subset\mathbb{R}^3$, input features $\{f_i^{in}\}\subset\mathbb{R}^{C_{in}}$ and a convolution radius $r\in\mathbb{R}$ be given. Further let $\mathcal{N}_j$ denote the neighborhood of $p_j^{in}$, given as
\begin{align}
\mathcal{N}_j = \left\{i\ :\ \|p_j-p_i\|_2 \leq r\right\}\subset\mathbb{N}.
\end{align}
Then the Monte-Carlo convolution on point $p_j$ with radius $r$ is defined as
\begin{align}
(f\ast g) (p_j) &:= |\mathcal{N}_j|^{-1}
\sum_{i\in\mathcal{N}_j} \frac{f_i \cdot g\left(\frac{p_i - p_j}{r}\right)}{pde(p_i\ |\ p_j)},\label{eq:app_mcconv}
\end{align}
where $pde(p_i\ |\ p_j)$ denotes a point-density estimation of $p_i$ inside the receptive field of $p_j$, and the kernel function $g:\mathbb{R}^3 \rightarrow \mathbb{R}^{C_{in}\times C_{out}}$ is represented implicitly using one or multiple MLPs.
In analogy to 2D convolutions, the evaluation of $g$ on the relative position $(p_i - p_j)/r$ yields the convolution weight matrix $W_{ij}$.
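A minimal NumPy sketch of this convolution may look as follows; the uniform stand-in for the density estimate and the caller-supplied kernel function are simplifications, not the layer's learned MLPs:

```python
import numpy as np

def mc_conv(points, feats, kernel_fn, radius):
    """Monte-Carlo point convolution with a crude uniform density estimate."""
    out = []
    for j in range(len(points)):
        dists = np.linalg.norm(points - points[j], axis=1)
        nbrs = np.where(dists <= radius)[0]          # neighborhood N_j
        pde = len(nbrs) / len(points)                # uniform density stand-in
        acc = sum(feats[i] @ kernel_fn((points[i] - points[j]) / radius) / pde
                  for i in nbrs)
        out.append(acc / len(nbrs))                  # 1/|N_j| normalization
    return np.stack(out)
```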
\noindent\textbf{RADU} To predict the additional point update $p_j^{u}\in\mathbb{R}$ we extend $g$ to predict an additional output channel, \textit{i.e.}~
\begin{align}
g:\mathbb{R}^3 &\rightarrow \mathbb{R}^{C_{in}\times C_{out}} \times \mathbb{R}^{C_{in}\times 1} \notag\\
&\quad \simeq \mathbb{R}^{C_{in}\times (C_{out} + 1)}, \\
\frac{p_i - p_j}{r}&\mapsto\left(W_{ij}, W^u_{ij}\right).
\end{align}
Using this kernel function in Eq.~\eqref{eq:app_mcconv}, the dimensionality of the output of the convolution is increased by one, \textit{i.e.}~$(f\ast g) (p_j) \in \mathbb{R}^{C_{out} + 1}$.
We split the output into features $f_j^{out}\in\mathbb{R}^{C_{out}}$, and the point update $p_j^u$.
To summarize, the point update is computed as
\begin{align}
p_j^u &:= |\mathcal{N}_j|^{-1}
\sum_{i\in\mathcal{N}_j} \frac{f_i \cdot W^u_{ij}}{pde(p_i\ |\ p_j)}.
\end{align}
The final update along the associated camera ray is performed as described in the main paper
\begin{align}
p_j^{out} &= p_j^{in} + \alpha \cdot \tanh(p_j^u) \cdot r_j.
\end{align}
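The bounded ray update can be sketched directly from this equation, where `ray_dirs` holds the unit camera-ray direction $r_j$ per point (an illustrative sketch, not the TensorFlow implementation):

```python
import numpy as np

def radu_update(p_in, p_update, ray_dirs, alpha=0.1):
    """Move each point along its (unit) camera ray; tanh bounds the step by alpha."""
    return p_in + alpha * np.tanh(p_update)[:, None] * ray_dirs
```

Since $|\tanh| \le 1$, each point moves by at most $\alpha$ along its ray, which regularizes the update.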
\section{Network Architecture - Hyperparameters}
We provide additional parameters of our network architecture for a full description.
Our network consists of three initial 2D convolutions, followed by a 2.5D pooling layer, three 3D RADU convolutions, a 3D-2D projection with up-scaling and a final stack of three 2D convolutions.
The first three 2D convolutions have feature channels of sizes [64, 64, 128].
The 2.5D pooling layer uses a stride of $8\times 8$ and applies average pooling on both the point depths and the features.
The 3D RADU convolutions use single MLPs with 16 hidden neurons as kernel functions, and a regularization of the point updates with $\alpha=0.1$ m.
The receptive fields are [0.1 m, 0.2 m, 0.4 m] and the feature channels are of sizes [128, 256, 128].
The up-scaling layer uses bilinear interpolation.
The extracted depth of the points is projected back into a 2D depth image and is used as additional input to the following 2D layers.
The final 2D convolutions have feature channels of sizes [64, 64, 1].
In between all convolutional layers, 2D and 3D, we use leaky ReLU with $\alpha=0.1$ as non-linear activation.
All 2D convolutions further have a kernel size of $3\times 3$ and use \emph{'same'} padding.
Finally we use a skip connection between the two 2D blocks with feature concatenation.
\section{Cornell-Box - Dataset Generation}
We generate our dataset using the transient renderer of Jarabo \etal~\cite{Jarabo14transient}, which has been deployed for ToF data generation in previous works \cite{marco2017deeptof, Guo_2018_ECCV}.
\noindent\textbf{Camera Properties}
Inspired by the work of Miller \etal~\cite{miller2020large}, who propose an approach to modify standard cameras for ToF captures, we simulate the properties of a Raspberry Pi 3 camera equipped with an Electro-Absorption Modulator (EAM), which modulates the received signal $s_r$ in front of the camera sensor.
\noindent\textbf{Scene Generation}
To ensure challenging scenes with high MPI levels we design our scenes inspired by the Cornell-Box~\cite{goral1984modeling} layout, and place a random number of objects, between 1 and 10, in the scene.
The object meshes are taken from a subset of the Thingi10k dataset~\cite{zhou2016thingi10k}, containing 3D models under CC license.
For each 3D object a material property is randomly sampled; assigning dark materials allows us to simulate regions with lower SNR values.
Further, the material of the surrounding box allows, to some degree, control over the level of MPI in the scene.
An example of a scene with different materials is shown in~\cref{fig:materials}.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{supp_figures/intensities_materials.pdf}
\caption{An example scene with three different material properties. The images show the intensity $I$ at a frequency of 20 MHz as given by Eq.~\eqref{eq:app_intensity}. In the left scene the box material is highly reflective, the right scene contains an object with a dark material.}
\label{fig:materials}
\end{figure}
\noindent\textbf{ToF Simulation}
Using the transient renderer we compute the impulse response $h(t)$ of the scene, which contains the received signal $s_r$ in a time resolved format after illuminating the scene with a light impulse.
Given the impulse response, the measurements $m_\theta$ can be simulated for different frequencies $f$ and phase offsets $\theta$ by substituting
\begin{align}
s_r(t) &= h(t) \ast s_e(t),
\end{align}
into the formula from the main paper
\begin{align}
m_\theta &= \frac{1}{\delta t}\int_{\delta t} s_r(t) \cdot s_e\left(t + \frac{\theta}{2\pi f}\right)\,dt.
\end{align}
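Combining the two equations, a single measurement can be simulated by discretizing the correlation integral. The sinusoidal emitted signal, the single-bounce impulse response at delay $\tau$, and all numeric values below are assumptions for this sketch:

```python
import numpy as np

f = 20e6                                   # modulation frequency [Hz]
t = np.linspace(0.0, 20 / f, 20001)        # integrate over 20 periods
dt = t[1] - t[0]

s_e = np.cos(2 * np.pi * f * t)            # emitted signal (assumed sinusoid)

tau = 10e-9                                # single reflection at 10 ns delay
h = np.zeros_like(t)
h[int(round(tau / dt))] = 1.0 / dt         # discrete Dirac impulse response

s_r = np.convolve(h, s_e)[: len(t)] * dt   # received signal s_r = h * s_e

def measure(theta):
    """Correlation measurement m_theta via trapezoidal integration."""
    return np.trapz(s_r * np.cos(2 * np.pi * f * t + theta), t) / (t[-1] - t[0])
```

For this ideal delta response the measurement reduces to approximately $\tfrac{1}{2}\cos(2\pi f \tau + \theta)$, so the phase delay, and hence the depth, can be recovered as described above.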
The resulting measurements for an example scene are shown in~\cref{fig:dataset_example}.
Further a scene table can be found at the end of this document in~\cref{fig:scene_table1}, \ref{fig:scene_table2} and \ref{fig:scene_table3}.
\noindent\textbf{Additional Noise}
We simulate the combination of shot noise, thermal noise and read noise as Additive White Gaussian Noise (AWGN) using the specifications of the Raspberry Pi 3 camera by Pagnutti \etal~\cite{pagnutti2017RaspPi}, who use the linear noise model of the EMVA Standard 1288~\cite{european2010standard}.
We use their ISO 100 measurements as reference; they measured a gain $K$ of $0.33$ and a Y-intercept $b$ of $-18.4$ on the mean-variance curve
\begin{align}
\sigma^2 = K\cdot m + b, \label{eq:app_noise_char}
\end{align}
where $m$ is the mean value across multiple measurements and $\sigma^2$ is the variance of the measurements.
From Eq.~\eqref{eq:app_noise_char} we infer the pixel-wise variance $\sigma_{x,y}^2$ dependent on the measurement $m_\theta$ on pixel $(x,y)$ for the AWGN $\mathcal{N}(0, \sigma_{x,y}^2)$.
In our dataset, we provide measurements $m_\theta$ without the AWGN to allow the online generation of AWGN during training for data augmentation, and to allow future researchers to simulate other noise models.
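Generating this signal-dependent AWGN online, e.g.\ during training, can be sketched as follows; clipping negative variances for very dark pixels is our assumption:

```python
import numpy as np

K_GAIN, B = 0.33, -18.4   # ISO 100 gain and Y-intercept from the text

def add_awgn(m, rng=None):
    """Add AWGN with pixel-wise variance sigma^2 = K*m + b, clipped at zero."""
    rng = np.random.default_rng(0) if rng is None else rng
    var = np.clip(K_GAIN * np.asarray(m) + B, 0.0, None)
    return m + rng.normal(0.0, np.sqrt(var), size=np.shape(m))
```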
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{supp_figures/Dataset.pdf}
\caption{Example scene from our Cornell-Box dataset. Simulated measurements $m_\theta$ for different values of $\theta\in\{0, \pi/2, \pi, 3\pi/2\}$ and frequencies $f\in\{20, 50, 70\}$ are shown in the top rows.
The reconstructed ToF depths, including phase wrapping, are shown in the bottom row.}
\label{fig:dataset_example}
\end{figure}
\section{Experiments}
We briefly describe additional details about the experiments, including data augmentation and hyperparameter settings.
The ADAM optimizer~\cite{kingma2014adam} is used during backpropagation in all trainings described below.
\subsection{Data Augmentation}
To increase the variety in the datasets we use the following data augmentation strategies on the input features.\\
\emph{Mirroring} Random mirroring along image axes.\\
\emph{Image Rotation} Random rotation by $0^\circ, 90^\circ, 180^\circ, 270^\circ$.\\
\emph{Small Rotation} Additional rotation with a random angle in the range $[-5^\circ, 5^\circ]$. Values outside the image boundaries are interpolated with a \emph{nearest} strategy, as implemented in the preprocessing pipeline of \verb|tensorflow.keras|.\\
\emph{Noise} Additive Gaussian noise with a relative standard deviation of $0.02$.\\
\emph{Random Cropping} In the case of training on image patches we crop random regions of the images every epoch.
We further experimented with the MPI augmentation of Agresti \etal~\cite{agresti2019unsupervised}, but found it did not improve the performance in our experiments.
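The mirroring, rotation and noise strategies above can be sketched as a single augmentation function; the reference value for the relative standard deviation, and the omission of the small rotations and cropping, are simplifications of this sketch:

```python
import numpy as np

def augment(x, rng):
    """Random mirroring, 90-degree rotation and relative Gaussian noise."""
    if rng.random() < 0.5:
        x = x[:, ::-1]                          # mirror along an image axis
    x = np.rot90(x, k=int(rng.integers(0, 4)))  # rotate by 0/90/180/270 degrees
    sigma = 0.02 * np.abs(x).mean()             # relative std (assumed reference)
    return x + rng.normal(0.0, sigma, size=x.shape)
```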
\subsection{Soft-Kinect - Datasets S1-S5}
Since no raw measurements are available in the dataset, we do not compare to the End2End method in this setting.
While in theory a reconstruction of $m_\theta$ could be done using Eq.~\eqref{eq:app_tof_measurement}, the error accumulation results in data unfit for training.
The input resolution of the Soft-Kinect is $320\times240$, which results in a latent point cloud of $1.2k$ points after the 2.5D pooling in our proposed network architecture.
On the real datasets of size $320\times239$ we pad the input using reflective padding to the full resolution $320\times240$.
We use the same input features as CFN, by setting $f_1=70\text{MHz}, f_2=20\text{MHz}, f_3=40\text{MHz}$.
Note that the provided ToF depth at 70MHz is phase unwrapped in this experiment.
\noindent\textbf{Pre-Training}
Our network is pre-trained on the synthetic dataset S1 using a learning rate of 1e-3 and an exponential learning rate decay of 0.1 every 100 epochs and a batch size of 8. The network converged after 300 epochs.
\noindent\textbf{Cyclic Self-Training}
After pre-training our network on synthetic data, we train the network on the unlabeled real dataset S2 using pseudo-labels as described in the main paper.
In each training step, we choose real examples with a probability of $p=0.5$ and update the pseudo-labels every $n_{cycle}=20$ epochs. We train with a small learning rate of 5e-5 and a batch size of 4 for 100 epochs.
\noindent\textbf{Supervised Domain Adaptation} For comparison we fine-tune our network, after pre-training on synthetic data, using the labeled real dataset S3. We train with a small learning rate of 1e-5 and a batch size of 4 for 100 epochs.
\subsection{RaspBerry-Pi 3 - Cornell-Box Dataset}
We use a resolution of $512\times512$ during training which results in 4096 points in the latent point clouds.
The input features for our network are computed by using $f_1 = 20\text{MHz}, f_2=50\text{MHz}, f_3=70\text{MHz}$.
The network is trained with an initial learning rate of 1e-3 and an exponential learning rate decay of 0.1 every 100 epochs, and a batch size of 4. The network converged after 300 epochs.
\noindent\textbf{DeepToF} includes an Auto-Encoder (AE) training stage on real data for domain adaptation.
As there is no real data in this experiment, we train models both with AE pre-training, as described in the original paper, and with combined training of the entire network without pre-training.
We also tune the learning rate with initial learning rates from \{1e-3, 1e-4\} and decay steps from \{50, 75, 100\}. The learning rate in the AE stage is set constant at 1e-4 for 15 epochs, as suggested in the original paper~\cite{marco2017deeptof}.
The learning rate decay is not fully specified in the original paper; we assume a decay of 0.1 every 75 epochs, which matches the authors' description.
We use an L2-Loss, a batch size of 16, and the low frequency 20MHz ToF-depth as input as in the original paper.
\noindent\textbf{CFN} was investigated in two papers; we use the more recent version~\cite{agresti2019unsupervised} as reference for our experiments, which predicts the depth directly and does not use additional filtering algorithms.
As no real data is available we drop the unsupervised adversarial part of the training, as with our network architecture.
The original paper~\cite{agresti2019unsupervised} used a fixed learning rate of 5e-6.
We investigate the static learning rates \{1e-4, 1e-5, 5e-6\} and also in combination with a learning rate decay after \{100, 150\} epochs.
We use a coarse-fine L1-loss and a batch size of 4 as in the original paper~\cite{agresti2019unsupervised}.
The input features for CFN are the same as for our network.
\noindent\textbf{End2End} predicts depths directly from raw correlations, using a generative approach, which we also incorporate into our training.
The original network uses a static learning rate of 5e-4 for 50 epochs before decaying the learning rate linearly to zero for 100 epochs~\cite{su2018end2end}.
We additionally investigate using exponential learning rate decays with initial learning rates from \{5e-4, 5e-3\} at decay steps from \{50, 100\}.
We further use the combination of adversarial, total-variation and L1-loss of the original paper.
The first two raw measurements at phase offsets $0, \pi/2$ of the two higher frequencies 60MHz, 70MHz are used as input.
We also investigate the influence of training on image patches of resolution $128\times128$, as used by CFN and End2End originally.
During training we ensure that the cropped images inside a batch are from different scenes.
We compare the resulting MAE on the validation set after tuning and using the original hyperparameters (vanilla):
\begin{center}
\begin{tabular}{c|ccc}
& DeepToF & CFN & End2End \\
\midrule
vanilla & 11.97 & 4.72 & 9.14\\
tuned & 10.10 & 3.83 & 8.19\\
\end{tabular}
\end{center}
The hyperparameters after tuning are:\\
DeepToF: LR 5e-4, decay 0.1 every 100 epochs, AE pre-training, full resolution. \\
CFN: LR 1e-4, decay 0.1 every 100 epochs, on patches.\\
End2End: LR 5e-4, decay 0.1 every 100 epochs, full resolution.
Error distributions and statistical values are compared in~\cref{fig:stats_ourdata}.
\begin{figure}
\centering
\includegraphics[width=0.49\linewidth]{supp_figures/OurData_error.pdf}
\includegraphics[width=0.49\linewidth]{supp_figures/OurData_absolute_error.pdf}
{\small
\\ \ \\
\begin{tabular}{lcc}
\toprule
Method & Median & $\sigma$\\
\midrule
DeepToF & 2.27 & 15.61 \\
CFN & 0.42 & 8.80 \\
End2End & 1.72 & 14.97 \\
RADU & \textbf{-0.28} & \textbf{7.64} \\
\bottomrule
\end{tabular}
}
\caption{Error distributions with median and standard deviation $\sigma$ of the examined networks on our Cornell-Box dataset. An optimal distribution would be a single peak at 0. Both CFN and RADU have a similar distribution, where the median and standard deviation of CFN are slightly worse.}
\label{fig:stats_ourdata}
\end{figure}
\subsection{Kinect2 - FLAT Dataset}
The FLAT dataset contains raw measurements simulating a Kinect2 camera for sinusoidal modulations at frequencies of 40MHz and $\thicksim$58.8MHz, and for a non-sinusoidal modulation at $\thicksim$30.3MHz.
For each modulation three measurements were performed for phase offsets with a spacing of approximately $2\pi/3$.
The resolution of the Kinect2 is $424\times 512$ which results in 3392 points in the latent point clouds of our network.
We train our network with an initial learning rate of 1e-3 and an exponential learning rate decay of 0.3 every 100 epochs, and a batch size of 2. The network converged after 600 epochs.
For DeepToF we use the low frequency ToF-depth as input.
For CFN and our network we compute the input features using $f_1=30.3\text{MHz}, f_2=40\text{MHz}, f_3=58.8\text{MHz}$.
For End2End we use the first two raw measurements of the two higher frequencies 40MHz, 58.8MHz as input.
We perform the same hyperparameter tuning as in the previous section and achieve the following MAE on the validation set:
\begin{center}
\begin{tabular}{c|ccc}
& DeepToF & CFN & End2End \\
\midrule
vanilla & 11.52 & 4.30 & 6.21\\
tuned & 8.66 & 3.57 & 5.90\\
\end{tabular}
\end{center}
The hyperparameters after tuning are:\\
DeepToF: LR: 1e-3, decay: 0.1 every 50 epochs, combined training, on patches.\\
CFN: static LR: 1e-4, on patches.\\
End2End: LR: 5e-4, decay: 0.1 every 100 epochs, full resolution.
Error distributions and statistical values are compared in~\cref{fig:stats_FLAT}.
\begin{figure}
\centering
\includegraphics[width=0.49\linewidth]{supp_figures/FLAT_error.pdf}
\includegraphics[width=0.49\linewidth]{supp_figures/FLAT_absolute_error.pdf}
{\small
\\ \ \\
\begin{tabular}{lcc}
\toprule
Method & Median & $\sigma$\\
\midrule
DeepToF & 12.19& 33.08 \\
CFN & 3.22 & 19.08 \\
End2End & 1.07 & 25.55 \\
RADU & \textbf{-0.08} & \textbf{7.83} \\
\bottomrule
\end{tabular}
}
\caption{Error distributions with median and standard deviation $\sigma$ of the examined networks on the FLAT dataset. An optimal distribution would be a single peak at 0.}
\label{fig:stats_FLAT}
\end{figure}
\noindent\textbf{Code optimizations for FLAT dataset.}
The synthetic data used for training in the FLAT dataset contains a high ratio of background pixels, which we masked in the loss functions during training.
When training on patches we ensure that the batches contain non-empty images.
As the background pixels are associated with depth 0, they are all projected to the same point $\vec{0}\in\mathbb{R}^3$.
This incurs heavy memory usage in the 3D convolutions, as these identical points are all considered neighbors of each other.
To reduce the memory impact we apply a filter during training, which drops all masked points in the 3D projection $\mathcal{P}_{C\rightarrow G}$, which results in a varying number of points per image.
After the 3D block of our network the points are projected back into 2D by ordering the points in a grid using a sparse data format with pixel ids inferred from the masking.
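This drop-and-restore step can be sketched with flat pixel ids; the 3D block is a placeholder function here, and operating on depths instead of full 3D points is a simplification:

```python
import numpy as np

def filter_and_backproject(depth_img, block_3d):
    """Drop zero-depth pixels before the 3D block; scatter results back by id."""
    H, W = depth_img.shape
    pix_ids = np.flatnonzero(depth_img.ravel() > 0)   # foreground pixel ids
    processed = block_3d(depth_img.ravel()[pix_ids])  # varying number of points
    out = np.zeros(H * W)
    out[pix_ids] = processed                          # restore the 2D grid
    return out.reshape(H, W)
```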
\section{Ablations}
We provide additional information about the ablation from the main paper and an additional ablation on the hyperparameter $\alpha$ in our RADU convolutions.
\subsection{Ablation 1: Latent 3D Representation}
For the 2D and 3D variants of our network architecture we use the same number of features in the 3D bottleneck block, namely $[128, 256, 128]$.
For the 2.5D convolutions, which perform three convolutions for foreground, neighborhood and background, we change the feature dimensions to $[129, 258, 129] = [3\cdot 43, 3\cdot 86, 3\cdot43]$, in order to make them divisible by 3.
The neighborhood radii are equal for all 3D convolutions at $[0.1m, 0.2m, 0.4m]$, for the 2D convolutions we use pixel neighborhoods of $[3, 5, 9]$, which is equal to doubling the pixel L0-distances $[1, 2, 4]$.
For the 2.5D convolutions we use a fixed neighborhood size of $3$ pixels, as larger kernels were too demanding in memory consumption.
We use the following additional hyperparameters for the 3D convolutions:\\
KPConv: 15 kernel points.\\
PointConv: 16 hidden units in the kernel MLP.\\
MCConv: 1 kernel MLP with 16 hidden units.
We train all variants with different hyperparameters and choose the run which achieved the best validation loss.
The initial learning rate is chosen from \{1e-2, 1e-3, 1e-4\} and decayed with an exponential learning rate decay with rates \{0.1, 0.3\} every \{50, 100, 150\} epochs.
The following settings achieved the best validation MAE on S3:\\
\begin{center}
\begin{tabular}{l|ccc}
type & init LR & decay rate & decay steps\\
\midrule
2D & 1e-2 & 0.1 & 100\\
2.5D & 1e-3 & 0.1 & 100\\
KPConv & 1e-3 & 0.3 & 150\\
PointConv & 1e-3 & 0.1 & 150\\
MCConv & 1e-3 & 0.3 & 100\\
\end{tabular}
\end{center}
\subsection{Ablation 2: Hyperparameter $\alpha$}\label{ssec:ablation_update_scale}
As discussed in the main paper the RADU layers receive direct gradients from the coarse loss, which can increase the risk of overfitting.
We investigate the influence of the regularization hyperparameter $\alpha$ of the RADU layer and train instances of our network, again using S1 and S3, for different values of $\alpha$, including a dynamic choice that uses the convolution radius $\alpha_l=r_l$, in our case 0.1\,m, 0.2\,m and 0.4\,m.
Results are reported in \cref{tab:ablation_update_scale}.
While the results indicate that a poor choice of $\alpha$ carries the risk of overfitting on the training data, we found that $\alpha=0.1$\,m leads to good performance on the three datasets of the main paper.
\begin{table}
\centering
\begin{tabular}{ccc}
\toprule
$\alpha$ & Training (S1) & Validation (S3) \\
$ $[m] & MAE [cm] & MAE [cm] \\
\midrule
0.0 & 8.38 & 2.51 \\
0.1 & 7.87 & 2.28 \\
0.2 & 7.51 & 2.81 \\
1.0 & 6.93 & 3.42 \\
$r$ & 7.48 & 2.45 \\
\bottomrule
\end{tabular}
\caption{Influence of the hyperparameter $\alpha$ in the RADU convolutions. We report MAE on training (synthetic) and validation (real) data.
The case $\alpha = r$ uses the receptive field $r$ as scale. The value $\alpha=0$ corresponds to a standard MCConv layer.}
\label{tab:ablation_update_scale}
\end{table}
\section{Implementation}
All network implementations were done in \verb|TensorFlow 2.3.0-gpu| and \verb|Python 3.6|.
The dataset generation was done using \verb|Python 3.6| and the transient renderer of Jarabo \etal~\cite{Jarabo14transient} in version \verb|26 February 2019 - Release v1.2|.
\section{Qualitative Results}
\subsection{Cornell-Box Dataset}
We show predictions for one view point per scene in~\cref{fig:ourdata1}, \ref{fig:ourdata2}, \ref{fig:ourdata3} and \ref{fig:ourdata4}.
We further show a larger version of Figure 8 of the main paper in~\cref{fig:RADU_step}, which shows the iterative denoising on the latent point clouds.
\begin{figure*}
\centering
\includegraphics[width=\linewidth]{supp_figures/RADU_Step.pdf}
\caption{Larger version of Figure 8 in the main paper.
The top row shows depth and error maps, the bottom row shows the point clouds in 3D space.
The initial ToF depth reconstruction (red) is far from the ground truth depth (blue).
After each RADU convolution the latent point clouds (orange to yellow) move closer to the correct depth.
The final latent point cloud (yellow) already yields a good coarse reconstruction of the scene, which is further refined in the 2D block of the network (green).}
\label{fig:RADU_step}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=0.9\linewidth]{supp_figures/results_ourdata/000.png}
\includegraphics[width=0.0165\linewidth]{supp_figures/figurebar60.pdf}
\caption{Results on the Cornell-Box Dataset. First row shows depths, second row shows error maps.}
\label{fig:ourdata1}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=0.9\linewidth]{supp_figures/results_ourdata/001.png}
\includegraphics[width=0.0165\linewidth]{supp_figures/figurebar60.pdf}
\includegraphics[width=0.9\linewidth]{supp_figures/results_ourdata/002.png}
\includegraphics[width=0.0165\linewidth]{supp_figures/figurebar60.pdf}
\includegraphics[width=0.9\linewidth]{supp_figures/results_ourdata/003.png}
\includegraphics[width=0.0165\linewidth]{supp_figures/figurebar60.pdf}
\includegraphics[width=0.9\linewidth]{supp_figures/results_ourdata/004.png}
\includegraphics[width=0.0165\linewidth]{supp_figures/figurebar60.pdf}
\caption{Results on the Cornell-Box Dataset. First rows show depths, second rows show error maps.}
\label{fig:ourdata2}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=0.9\linewidth]{supp_figures/results_ourdata/005.png}
\includegraphics[width=0.0165\linewidth]{supp_figures/figurebar60.pdf}
\includegraphics[width=0.9\linewidth]{supp_figures/results_ourdata/006.png}
\includegraphics[width=0.0165\linewidth]{supp_figures/figurebar60.pdf}
\includegraphics[width=0.9\linewidth]{supp_figures/results_ourdata/007.png}
\includegraphics[width=0.0165\linewidth]{supp_figures/figurebar60.pdf}
\includegraphics[width=0.9\linewidth]{supp_figures/results_ourdata/008.png}
\includegraphics[width=0.0165\linewidth]{supp_figures/figurebar60.pdf}
\caption{Results on the Cornell-Box Dataset. First rows show depths, second rows show error maps.}
\label{fig:ourdata3}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=0.9\linewidth]{supp_figures/results_ourdata/009.png}
\includegraphics[width=0.0165\linewidth]{supp_figures/figurebar60.pdf}
\includegraphics[width=0.9\linewidth]{supp_figures/results_ourdata/010.png}
\includegraphics[width=0.0165\linewidth]{supp_figures/figurebar60.pdf}
\includegraphics[width=0.9\linewidth]{supp_figures/results_ourdata/011.png}
\includegraphics[width=0.0165\linewidth]{supp_figures/figurebar60.pdf}
\includegraphics[width=0.9\linewidth]{supp_figures/results_ourdata/012.png}
\includegraphics[width=0.0165\linewidth]{supp_figures/figurebar60.pdf}
\caption{Results on the Cornell-Box Dataset. First rows show depths, second rows show error maps.}
\label{fig:ourdata4}
\end{figure*}
\subsection{FLAT Dataset}
We show predictions for a subset of the images in the dataset in~\cref{fig:FLAT1} and \ref{fig:FLAT2}.
\begin{figure*}
\centering
\includegraphics[width=0.9\linewidth]{supp_figures/results_FLAT/000.png}
\includegraphics[width=0.014\linewidth]{supp_figures/figurebar40.pdf}
\includegraphics[width=0.9\linewidth]{supp_figures/results_FLAT/003.png}
\includegraphics[width=0.014\linewidth]{supp_figures/figurebar40.pdf}
\includegraphics[width=0.9\linewidth]{supp_figures/results_FLAT/004.png}
\includegraphics[width=0.014\linewidth]{supp_figures/figurebar40.pdf}
\includegraphics[width=0.9\linewidth]{supp_figures/results_FLAT/008.png}
\includegraphics[width=0.014\linewidth]{supp_figures/figurebar40.pdf}
\includegraphics[width=0.9\linewidth]{supp_figures/results_FLAT/039.png}
\includegraphics[width=0.014\linewidth]{supp_figures/figurebar40.pdf}
\caption{Results on the FLAT Dataset. First rows show depths, second rows show error maps.}
\label{fig:FLAT1}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=0.9\linewidth]{supp_figures/results_FLAT/057.png}
\includegraphics[width=0.014\linewidth]{supp_figures/figurebar40.pdf}
\includegraphics[width=0.9\linewidth]{supp_figures/results_FLAT/059.png}
\includegraphics[width=0.014\linewidth]{supp_figures/figurebar40.pdf}
\includegraphics[width=0.9\linewidth]{supp_figures/results_FLAT/068.png}
\includegraphics[width=0.014\linewidth]{supp_figures/figurebar40.pdf}
\includegraphics[width=0.9\linewidth]{supp_figures/results_FLAT/071.png}
\includegraphics[width=0.014\linewidth]{supp_figures/figurebar40.pdf}
\includegraphics[width=0.9\linewidth]{supp_figures/results_FLAT/111.png}
\includegraphics[width=0.014\linewidth]{supp_figures/figurebar40.pdf}
\caption{Results on the FLAT Dataset. First rows show depths, second rows show error maps.}
\label{fig:FLAT2}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=0.9\linewidth]{supp_figures/scene_table/1_20MHz_0.png}
\caption{Example images showing one of the 50 view points with one of the three material configurations for each of the scenes in our Cornell-Box dataset. We show the intensity $I$ at a frequency of 20 MHz as given by Eq.~\eqref{eq:app_intensity}.}
\label{fig:scene_table1}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=0.9\linewidth]{supp_figures/scene_table/1_20MHz_1.png}
\caption{Example images showing one of the 50 view points with one of the three material configurations for each of the scenes in our Cornell-Box dataset. We show the intensity $I$ at a frequency of 20 MHz as given by Eq.~\eqref{eq:app_intensity}.}
\label{fig:scene_table2}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=0.9\linewidth]{supp_figures/scene_table/1_20MHz_2.png}
\caption{Example images showing one of the 50 view points with one of the three material configurations for each of the scenes in our Cornell-Box dataset. We show the intensity $I$ at a frequency of 20 MHz as given by Eq.~\eqref{eq:app_intensity}.}
\label{fig:scene_table3}
\end{figure*}
\section{Introduction and history}
This manuscript was written during the summer of 1997 while the author worked as a research assistant in Prof. Juhani
Karhum\"aki's project. The task for the summer was to read and verify in detail the proof of the undecidability of the equivalence problem for finite substitutions on regular languages, proved by Prof. Leonid P. Lisovik from Kiev, Ukraine. As a result, the author wrote the present manuscript based on the articles~\cite{Lis:83, Lis:91,Lis:97}. In the original articles many details were left to the reader.
The main motivation for the manuscript was that Lisovik in~\cite{Lis:97} was able to prove that the equivalence problem for finite substitutions is undecidable already for the quite simple regular language $b\{01,1\}^* c$, see Section~\ref{sec:eq}. Lisovik's proof for this language was simplified by Halava and Harju \cite{HaHa:99} using the undecidability of the universe problem for integer weighted finite automata instead of Lisovik's undecidability track from the inclusion problem of finite transducers (Section~\ref{trans}) through undecidability in the so-called defense systems defined by Lisovik himself (Section~\ref{defe}). Note that the regular language with an undecidable equivalence problem for finite substitutions was later improved by Karhum\"aki and Lisovik\footnote{It needs to be mentioned that Lisovik was a frequent visitor of Karhum\"aki's group in Turku around that time. Many stories of his peculiar but extremely friendly behaviour are still told in Turku. The author remembers particularly well the party after the defence of his PhD thesis in April 2002, in which Lisovik participated, not with any official role in the defence, but as a guest, as he happened to be visiting Turku at that time: Lisovik gave altogether almost ten speeches during the dinner, and the topics of these speeches varied somewhere between math, life and basketball. For the sake of honesty it must be told that after the first five speeches, Lisovik was encouraged by the author's official supervisor Prof. Tero Harju to give more speeches. Naturally, the author is grateful for both, especially because, according to the official protocol of the party, the PhD candidate has to reply to every speech given with a new speech.} \cite{KaLi1} in 2002 (alternatively, see~\cite{KaLi2}) to the language $ab^*c$, and, further, by Kunc~\cite{Kunc} in 2007 to the language $a^*b$.
As mentioned above, the root of the undecidability in Lisovik's proof is the undecidability of the inclusion of two
rational relations (recognized by finite transducers), a result originally proved by Ibarra~\cite{Iba:78}. Lisovik gave a new proof of this result in 1983 (see~\cite{Lis:83}) with a clever reduction from the Post Correspondence Problem. Indeed, the main motivation for publishing this manuscript now, 24 years later, lies in this proof, as it has not been published in this form before. Recently, in \cite{handbook} Harju and Karhum\"aki presented a version of this proof with a citation to this manuscript.
\input{liso}
\paragraph{Acknowledgement.} I am grateful to Juhani Karhum\"aki for guiding me to this topic originally, and especially for his support in the beginning of my career. I also sincerely thank Tero Harju for his support and especially for his comments -- comments on this manuscript, comments on my work in general, and all the comments on and off the research topics we have had together.
\section{Finite transducers}\label{trans}
Let $\Sigma$ be an alphabet and denote by $\epsilon$ the empty word.
As usual, $\Sigma^*$ denotes the set of all words over $\Sigma$,
and $\Sigma^+=\Sigma^*\setminus \{\epsilon\}$.
We begin with the definition of a {\it finite transducer}, FT for short,
which is a 6-tuple $(Q,\Sigma,\Delta,E,q_0,F)$, where
\begin{itemize}
\item $Q$ is a finite set of states,
\item $\Sigma$ and $\Delta$ are input and output alphabets,
\item $E\subseteq Q\times \Sigma^*\times \Delta^*\times Q$ is a finite
set of transitions,
\item $q_0\in Q$ is the initial state and $F\subseteq Q$ is the set of
final states.
\end{itemize}
An FT is a finite automaton with output. If the underlying
automaton is nondeterministic, then the FT is called a {\it generalized
sequential machine}, GSM for short, or a {\it sequential transducer}.
Let $T$ be a finite transducer. Define the set
\begin{align*}
O(T)=&\{(w,y)\mid w=a_0\dots a_n, \quad y=b_0\dots b_n, \quad n\in\mathbb{N}, \quad
a_i\in\Sigma^*,\\
&b_i\in \Delta^*,\quad 0\le i\le n,
\text{ and there exist states }q_i\in Q,\text{ such that }\\
&(q_i,a_i,b_i,q_{i+1})\in E \text{ and }q_{n+1}\in F\}.
\end{align*}
If $(w,y)\in O(T)$, then we say that $(w,y)\in \Sigma^*\times \Delta^*$ is
recognized by $T$.
Let
$$
L(T)=\{w\mid (w,y)\in O(T) \text{ for some }y \}
$$
be the language accepted by the finite transducer $T$.
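For readers who prefer executable definitions, the notions of an FT, $O(T)$ and $L(T)$ can be sketched in a few lines of Python; the class name, the tuple representation of transitions, and the bounded enumeration below are illustrative choices, not part of the manuscript.

```python
# A finite transducer (FT) as a 6-tuple (Q, Sigma, Delta, E, q0, F);
# a transition (q, u, v, p) in E reads the word u and outputs the word v.
class FT:
    def __init__(self, Q, Sigma, Delta, E, q0, F):
        self.Q, self.Sigma, self.Delta = Q, Sigma, Delta
        self.E, self.q0, self.F = E, q0, F

    def relation(self, max_steps):
        """Enumerate the pairs (w, y) of O(T) recognized by paths of at
        most max_steps transitions; the full relation may be infinite."""
        pairs = set()
        frontier = [(self.q0, "", "")]
        for _ in range(max_steps):
            step = []
            for q, w, y in frontier:
                for p, u, v, r in self.E:
                    if p == q:
                        step.append((r, w + u, y + v))
            pairs |= {(w, y) for q, w, y in step if q in self.F}
            frontier = step
        return pairs

# Toy transducer recognizing the rational relation {(0^n 1, c^n cc) | n >= 0}:
T = FT({"s", "f"}, {"0", "1"}, {"c"},
       {("s", "0", "c", "s"), ("s", "1", "cc", "f")}, "s", {"f"})
```

Since $O(T)$ is in general infinite, the sketch only enumerates the pairs recognized by paths of bounded length.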
A subset $O(T)$ of $\Sigma^*\times \Delta^*$, which is recognized by
an FT $T$, is called a {\it rational relation}. We denote the family of rational
relations of $\Sigma^*\times \Delta^*$ by $\rat(\Sigma^*\times\Delta^*)$.
It is clear that if $A,B\in \rat (\Sigma^*\times \Delta^*)$, then
\begin{align*}
A&\cup B\quad\text{ and }\\ A\cdot B=AB&=\{(w_1w_2,y_1y_2)\mid
(w_1,y_1)\in O(A), (w_2,y_2)\in O(B)\}
\end{align*}
are in $\rat (\Sigma^*\times\Delta^*)$. The union is clear, since
we may connect the FTs recognizing $A$ and $B$ by merging their
initial states. The product
$AB$ is recognized by an FT in which every final state of
the FT recognizing $A$ is made an initial state of the FT recognizing $B$.
The star operation for subset $U$ of $\Sigma^*\times \Delta^*$ is
defined naturally by
$$
U^*=\bigcup_{i\ge 0} U^i,
$$
where $U^i$ is the $i$'th power of $U$, defined inductively via the product by $U^0=\{\epsilon\}\times \{\epsilon\}$,
$U^1=U$, and $U^{i+1}=UU^i$ for all $i\ge 1$.
\vspace{6pt}
We shall next prove that the equivalence and the inclusion of two
rational relations are undecidable problems in the case where
$\Delta$ is unary. This result has many proofs; see for example
\cite{Iba:78}, \cite{Lis:83}. We shall here present the construction
from \cite{Lis:83}.
Before the theorem, recall the {\it Post Correspondence Problem}, PCP
for short, which asks, for given pairs of non-empty words over an
alphabet $\Gamma$,
$(u_1,v_1),(u_2,v_2),\dots,(u_n,v_n)$, whether there exists a sequence
$$
1\le \alpha_1,\alpha_2,\dots, \alpha_s\le n
$$ such that
$$
u_{\alpha_1}u_{\alpha_2}\dots u_{\alpha_s}=v_{\alpha_1}v_{\alpha_2}\dots
v_{\alpha_s}.
$$
The PCP is known to be undecidable. For more details about the PCP, cf. \cite{Post:46}, \cite{morphisms}.
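Although the PCP is undecidable, it is semi-decidable: solutions can be searched for by enumerating index sequences. A minimal Python sketch, with an illustrative instance that is not from the text:

```python
from itertools import product

def pcp_search(pairs, max_len):
    """Brute-force search for a PCP solution: a nonempty index sequence
    alpha_1, ..., alpha_s (1-based, s <= max_len) with
    u_{a1}...u_{as} == v_{a1}...v_{as}. Returns the sequence or None;
    absence of a solution up to max_len proves nothing, since the PCP
    is only semi-decidable."""
    n = len(pairs)
    for s in range(1, max_len + 1):
        for seq in product(range(n), repeat=s):
            u = "".join(pairs[i][0] for i in seq)
            v = "".join(pairs[i][1] for i in seq)
            if u == v:
                return [i + 1 for i in seq]
    return None
```

For the classic instance $(a,baa),(ab,aa),(bba,bb)$ the search finds the solution $3,2,3,1$, since $bba\cdot ab\cdot bba\cdot a = bb\cdot aa\cdot bb\cdot baa$.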
\begin{thm}\label{lause}
Let $A$ and $B$ be two rational relations from $\rat (\Sigma^*\times c^*)$.
Then it is undecidable whether
\begin{align*}
1) \quad A&\subseteq B,
\\ 2) \quad A&=B.
\end{align*}
\end{thm}
\begin{proof}
Assume that $(u_1,v_1),\dots,(u_n,v_n)$ is a sequence of pairs of non-empty
words over
$\{a,b\}$. Define the alphabet $\Sigma=\{a,b,i_1,\dots,i_n\}$, and set
$k_\alpha=|u_\alpha|$ for all $\alpha=1,2,\dots,n$.
Next we define the needed subsets of $\Sigma^+\times c^+$:
\begin{align*}
L_1&=\{(i_\alpha,c^{k_\alpha+1})\mid 1\le \alpha \le n\}^*,\\
L_2&=\bigcup_{\beta=1}^n\bigcup_{j=1}^{k_\beta} L_{\beta j}, \\
\intertext{where
$L_{\beta j}=L_1\cdot (i_\beta,c^j)\{(i_\alpha,c)\mid 1\le \alpha\le n\}^*$,}
L_3&=L_2\{(a,c),(b,c)\}^*,\\
L_4&=L_1\{(a,c),(b,c)\}^*
\{(a,c^2),(b,c^2)\}^+.
\end{align*}
Finally, for $\beta\in\{1,\dots,n\}$, let
\begin{align*}
S_\beta&=\{\mu\mid \mu\in \{a,b\}^*,|\mu|=|u_\beta|, \mu\ne u_\beta\},\\
\intertext{and set}
L_5&=\bigcup_{\beta=1}^n\bigcup_{\mu \in S_\beta} M_{\beta \mu},\\
\intertext{where }
M_{\beta \mu}&=L_1(i_\beta,c)\{(i_\alpha,c)\mid 1\le \alpha\le n\}^*\{(a,c),
(b,c)\}^* (\mu,c^{2k_\beta})\{(a,c^2),(b,c^2)\}^*.\\
\end{align*}
Now we define
$$
L_u=L_3\cup L_4\cup L_5.
$$
Similarly, let $L_v$ be defined using the second components of the pairs $(u_\alpha, v_\alpha)$ of the sequence. Note that $L_u$ and $L_v$ are in $\rat(\Sigma^*\times c^*)$, since we can define nondeterministic
FTs that recognize $L_1$, the $L_{\beta j}$'s and the $M_{\beta \mu}$'s, and
therefore $L_2$, $L_3$, $L_4$ and $L_5$ are also rational
relations.
Next define
$L_0=\{(i_\alpha,c)\mid 1\le \alpha\le n\}^+\{(a,c^2),(b,c^2)\}^+$.
It is easy to construct an FT that recognizes $L_0$.
\vspace{6pt}
{\it Claim.} $L_0\subseteq L_u\cup L_v$ if and only if
there does not exist a sequence $1\le \alpha_1,\dots,\alpha_s\le n$
such that
$u_{\alpha_1}\dots u_{\alpha_s}=v_{\alpha_1}\dots v_{\alpha_s}$.
\vspace{6pt}
{\it Proof of the Claim.} Assume first that there exists a sequence $\alpha_1,
\dots,\alpha_s$ such that the PCP instance has a solution, and let
$$
w=(x,y)=(i_{\alpha_1}\dots i_{\alpha_s}u_{\alpha_1}\dots
u_{\alpha_s},c^{s+2(k_{\alpha_1}+
\dots+k_{\alpha_s})})\in L_0.
$$
(i) If $w\in L_3$, then for some
$
w_1=(i_{\alpha_1}\dots i_{\alpha_s},c^m)\in L_2$,
$$
w=w_1(u_{\alpha_1}\dots u_{\alpha_s},c^{k_{\alpha_1}+
\dots+k_{\alpha_s}}).
$$
Therefore $w_1\in L_{\beta j}$ for some $\beta\in
\{\alpha_1,\dots,\alpha_s\}$ and $1\le j\le k_{\beta}$, and so in the
path recognizing $w_1$, $i_{\beta}$ has output $c^{j}$ with
$j<k_\beta+1$, so $m<k_{\alpha_1}+\dots+k_{\alpha_s}+s$. Therefore
$w\notin L_3$.
(ii) Let $\beta_1,\dots,\beta_r$ be a sequence such that
$1\le \beta_1,\dots ,\beta_r\le n$, and let
$$
w_1=(i_{\alpha_1}\dots i_{\alpha_s}u_{\beta_1}\dots
u_{\beta_r},c^m)\in L_4.
$$
In the recognizing paths of $w_1$, for each $i_{\alpha_j}$ the
output is $c^{k_{\alpha_j}+1}$, and for $u_j$,
$j\in\{\beta_1,\dots,\beta_r\}$, the output is $c^{\ell_j}$, where
$\ell_j\ge k_j$ and $\ell_j >k_j$ for at least one $j$, because of
the factor $\{(a,c^2),(b,c^2)\}^+$. So we have
$m>k_{\alpha_1}+\dots+k_{\alpha_s}+s+k_{\beta_1}+\dots+k_{\beta_r}$,
and therefore $w\notin L_4$.
(iii) Assume that $w\in L_5$. Then there exist an integer $\beta$,
$1\le \beta \le s$, and $\gamma_1,\mu,\gamma_2\in \{a,b\}^*$ such that
$w\in M_{\alpha_\beta \mu}$, $u_{\alpha_1}\dots u_{\alpha_s}=\gamma_1\mu\gamma_2$,
$|\mu|=|u_{\alpha_\beta}|$ and $\mu\ne u_{\alpha_\beta}$. If
$$
(i_{\alpha_1}\dots i_{\alpha_\beta}\dots
i_{\alpha_s}\gamma_1\mu\gamma_2,c^m)\in M_{\alpha_\beta \mu},
$$
then
\begin{align*}
m&=(k_{\alpha_1}+1)+\dots
+(k_{\alpha_{\beta-1}}+1)+(s-\beta+1)+|\gamma_1|+2|\mu|+
2|\gamma_2|\\ &=k_{\alpha_1}+\dots+k_{\alpha_{\beta
-1}}+s+|\gamma_1|+2|\mu|+2|\gamma_2|.
\end{align*}
Now since $w\in L_0$ gives $m=s+2(k_{\alpha_1}+\dots+k_{\alpha_s})$ and $|\gamma_1\mu\gamma_2|=k_{\alpha_1}+\dots+
k_{\alpha_s}$, we get that
$$
|\mu|+|\gamma_2|=k_{\alpha_\beta}+\dots+k_{\alpha_s},
$$
and since $|\mu|=k_{\alpha_\beta}$, finally
$$
|\gamma_2|=k_{\alpha_{\beta+1}}+\dots+k_{\alpha_s}\text{ and }
|\gamma_1|=k_{\alpha_1}+\dots+k_{\alpha_{\beta-1}}.
$$
It follows that $\mu=u_{\alpha_\beta}$, a contradiction.
Therefore $w\notin L_5$.
So $w\notin L_u$, and similarly it can be shown that $w\notin L_v$;
this proves one direction of the claim.
\vspace{6pt}
Assume now that there is no sequence $1\le
\alpha_1,\dots,\alpha_s\le n$ giving a solution of the PCP instance.
Let $w_1\in \{a,b\}^+$ and $w=(i_{\alpha_1}\dots
i_{\alpha_s}w_1,c^{s+2|w_1|})\in L_0$.
By assumption, $w_1\ne u_{\alpha_1}\dots u_{\alpha_s}$ or
$w_1\ne v_{\alpha_1}\dots v_{\alpha_s}$.
We shall
show that if $w_1\ne u_{\alpha_1}\dots u_{\alpha_s}$, then
$w\in L_u$. Of course then similarly, if
$w_1\ne v_{\alpha_1}\dots v_{\alpha_s}$, then $w\in L_v$.
(i) If $|w_1|>|u_{\alpha_1}\dots u_{\alpha_s}|$, i.e.
$|w_1|>k_{\alpha_1}+\dots+k_{\alpha_s}$, then $w_1=xy$ for some
$x,y\in \{a,b\}^+$ with $|x|=k_{\alpha_1}+\dots+k_{\alpha_s}$, and
$$
w=(i_{\alpha_1}\dots
i_{\alpha_s},c^{k_{\alpha_1}+\dots+k_{\alpha_s}+s})
(x,c^{k_{\alpha_1}+\dots+k_{\alpha_s}})(y,c^{2|y|})\in L_4.
$$
(ii) If $|w_1|<|u_{\alpha_1}\dots u_{\alpha_s}|$, i.e.
$|w_1|<k_{\alpha_1}+\dots+k_{\alpha_s}$, then there exist
$\beta\in\{1,\dots,s\}$ and $j\in \{1,\dots,k_{\alpha_\beta}\}$
such that
$$
|w_1|=k_{\alpha_1}+\dots+k_{\alpha_{\beta-1}}+j-1,
$$
and so
\begin{align*}
w=&(i_{\alpha_1}\dots
i_{\alpha_{\beta-1}},c^{(k_{\alpha_1}+1)+\dots+(k_{\alpha_{\beta-1}}+1)})
(i_{\alpha_\beta},c^j)\\ &\cdot(i_{\alpha_{\beta+1}}\dots
i_{\alpha_s},c^{s-\beta})(w_1,c^{k_{\alpha_1}+\dots+k_{\alpha_{\beta-1}}+j-1})\in
L_3.
\end{align*}
(iii) If $|w_1|= k_{\alpha_1}+\dots+k_{\alpha_s}$, then since
$w_1\ne u_{\alpha_1}\dots u_{\alpha_s}$, there exist
$\beta\in\{1,\dots,s\}$ and $\mu,\gamma\in \{a,b\}^*$ such that
$$
w_1=u_{\alpha_1}\dots u_{\alpha_{\beta-1}}\mu\gamma \text{ and }
|\mu|=|u_{\alpha_\beta}| \text{ but } \mu \ne u_{\alpha_\beta},
$$
and so
\begin{align*}
w=&(i_{\alpha_1}\dots
i_{\alpha_{\beta-1}},c^{(k_{\alpha_1}+1)+\dots+(k_{\alpha_{\beta-1}}+1)})
(i_{\alpha_\beta},c)\\ &\cdot(i_{\alpha_{\beta+1}}\dots
i_{\alpha_s},c^{s-\beta}) (u_{\alpha_1}\dots
u_{\alpha_{\beta-1}},c^{k_{\alpha_1}+\dots+k_{\alpha_{\beta-1}}})
(\mu,c^{2|u_{\alpha_\beta}|})(\gamma,c^{2|\gamma|})\in L_5.
\end{align*}
So $w\in L_u$ if $w_1\ne u_{\alpha_1}\dots u_{\alpha_s}$, and
the claim is proved.
\vspace{6pt}
Now, by the undecidability of the PCP, it is undecidable whether
$L_0\subseteq L_u\cup L_v$, and whether $L_0\cup L_u\cup L_v =
L_u\cup L_v$. This proves the theorem.
\end{proof}
\begin{cor}\label{cor:lis}
It is undecidable for two rational relations $A$ and $B$ from $\rat
(\{0,1\}^*\times c^*)$, whether
\begin{align*}
1)\quad A&\subseteq B,\\ 2) \quad A&=B.
\end{align*}
\end{cor}
\begin{proof}
The claim follows straightforwardly from Theorem \ref{lause}, since we
can encode the alphabet $\Sigma$ into $\{0,1\}^*$ and the result
remains valid.
\end{proof}
We shall next define a special type of finite transducer, the so-called
{\it $Z$-transducer}. An FT $T$ is called a $Z$-transducer if it is of the
form
$$
(Q,\{0,1\},\{c\},E,q_0,q_f),
$$
i.e. it has input alphabet $\{0,1\}$,
output alphabet $\{c\}$, only one final state $q_f$, and the set of
transitions $E\subseteq (Q\setminus \{q_f\})\times \{0,1\}\times\{c,cc\}\times
Q$. We shall write a $Z$-transducer as a quadruple $(Q,E,q_0,q_f)$ from now on,
since the input and output alphabets are fixed.
Notice that a $Z$-transducer reads one symbol at a time and always outputs
one or two $c$'s. Notice also that there are no transitions from the final
state $q_f$ in a $Z$-transducer.
A $Z$-transducer is called {\it deterministic} if the underlying
automaton is deterministic, i.e. if for any $a\in \{0,1\}$ and $q\in
Q\setminus \{q_f\}$ there exists a unique transition $(q,a,b,p)$,
where $b\in \{c,cc\}$ and $p\in Q$. A $Z$-transducer is called {\it
complete} if for any $a\in\{0,1\}$ and $q\in Q\setminus\{q_f\}$
there exists at least one transition of the form $(q,a,b,p)$. Note
that here determinism implies completeness. Note also that every
$Z$-transducer can be made complete by adding a {\it garbage state}
$f$ to $Q$: if there does not exist any transition
$(q,a,b,p)$ for some $q$ and $a$, then we add the transition
$(q,a,c,f)$ to $E$, and further we add the transitions $(f,a,c,f)$ to $E$
for $a\in\{0,1\}$.
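For illustration, the completion by a garbage state can be sketched in a few lines of Python; the tuple representation of transitions and the state name \texttt{garbage} are assumptions of the sketch, not part of the manuscript.

```python
def complete(Q, E, q0, qf):
    """Complete a Z-transducer (Q, E, q0, qf): for every non-final state q
    and input a in {0,1} lacking a transition, add a transition to a fresh
    garbage state f with output 'c', and loop on f, as described above."""
    f = "garbage"
    Q, E = set(Q), set(E)
    need_f = False
    for q in Q - {qf}:
        for a in "01":
            if not any(t[0] == q and t[1] == a for t in E):
                E.add((q, a, "c", f))
                need_f = True
    if need_f:
        Q.add(f)
        for a in "01":
            E.add((f, a, "c", f))  # garbage state loops on every input
    return Q, E
```

Running it on an incomplete toy transducer adds exactly the missing transitions, so every non-final state then has a move on both input symbols.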
Let $T$ be a $Z$-transducer. As for FT's, we define the set
\begin{align*}
O(T)=&\{(w,y)\mid w=a_0\dots a_n, \quad y=b_0\dots b_n, \quad n\in\mathbb{N}, \quad
a_i\in\{0,1\},\\
&b_i\in \{c,cc\},\quad 0\le i\le n,
\text{ and there exists states }q_i\in Q,\text{ such that }\\
&(q_i,a_i,b_i,q_{i+1})\in E \text{ and }q_{n+1}=q_f\}.
\end{align*}
Note that
in a deterministic $Z$-transducer $T$, for every $w\in\{0,1\}^*$
there exists either a unique path reading the word $w$, or a proper prefix
$u$ of $w$ such that $u\in L(T)$. Since there are no transitions from the
final state, we see that if $w=uv$ with $v$ a nonempty word, then
$u\in L(T)$ implies $w\notin L(T)$.
\begin{cor}\label{lemma:1}
Let $C$ and $D$ be two $Z$-transducers, with $C$ deterministic and
$D$ nondeterministic and complete. It is undecidable whether
$O(C)\subseteq O(D)$.
\end{cor}
\begin{proof}
In the proof of Corollary \ref{cor:lis} we mentioned the coding of
the alphabet $\Sigma$ in Theorem \ref{lause} to binary alphabet.
Let $(u_1,v_1),\dots,(u_n,v_n)$ be the instance of PCP used in the
proof of Theorem \ref{lause}. We can for example use coding
$\delta$, where $k=1+\max_{1\le i\le n}\{|u_i|,|v_i|\}$ and
alphabet $\Sigma=\{a,b,i_1,\dots,i_n\}$ is encoded to set
$\{10^{i}1 \mid k\le i \le k+n+1\}$.
We now code with $\chi$ each element $w=(v,c^m)\in\Sigma^+
\times c^+$ used in the proof of Theorem \ref{lause} in such a way
that $\chi(w)=(\delta(v)0,c^{m+|\delta(v)0|})$. Denote by
$\chi(L_i)$ the coded set $L_i$, $i=1,2,3,4,5,u,v,0$.
Clearly $\chi(L_1)$ can be recognized by a non-deterministic
$Z$-transducer: when reading $\delta(i_\alpha)$ the transducer
outputs $cc$ for the $k_\alpha+1$ first input symbols and $c$ for the
others. When reading the last 0 of the input, the $Z$-transducer
outputs one $c$ and moves to the final state.
Using the same idea, the other $\chi(L_i)$'s can also be recognized by
non-deterministic $Z$-transducers. Actually
$$
\chi(L_0)=\{(\delta(i_\alpha),c^{|\delta(i_\alpha)|+1})\mid 1\le \alpha\le n\}^+
\{(\delta(a),c^{|\delta(a)|+2}),(\delta(b),c^{|\delta(b)|+2})\}^+(0,c)
$$
can be recognized by a deterministic $Z$-transducer.
Now since $\chi(L_0)\subseteq \chi(L_u)\cup \chi(L_v)$ if and only
if $L_0\subseteq L_u\cup L_v$, the claim follows by the proof of
Theorem \ref{lause}.
\end{proof}
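The coding $\delta$ used in the proof above can be made concrete; the following Python sketch is an illustrative construction (the letter ordering and function name are assumptions), producing exactly the codewords $10^i1$ for $k\le i\le k+n+1$.

```python
def make_delta(u, v):
    """Binary coding of Sigma = {a, b, i_1, ..., i_n} into {0,1}*:
    with k = 1 + max word length of the PCP instance (u, v), the m-th
    letter (a, b, i_1, ..., i_n in order) is coded as 1 0^(k+m) 1."""
    k = 1 + max(len(x) for x in u + v)
    letters = ["a", "b"] + [f"i{t}" for t in range(1, len(u) + 1)]
    return {s: "1" + "0" * (k + m) + "1" for m, s in enumerate(letters)}
```

For the toy instance $u=(ab,a)$, $v=(b,ba)$ we get $k=3$, so the codewords run from $10^31$ to $10^61$, and the coding is clearly injective.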
We shall use the result of the above corollary in the next section.
\section{Defense Systems}\label{defe}
In this section we shall consider so-called {\it defense systems}, DS for short.
The result in this section is from \cite{Lis:91}.
A DS is intended to defend some elements of the set of integers $\mathbb{Z}$. The
elements of $\mathbb{Z}$ are also called defense {\it nodes}.
A DS is a triple $V=(K,H,\Gamma)$, where $K$ is the set of {\it lines},
$$
K=\{i\mid 1\le i \le s,\quad i,s\in \mathbb{Z}\},
$$
$H$ is the set of instructions and $\Gamma$ is the set of attacking symbols.
Each node can be defended by lines from $K$; in other words, each node
can be defended by $s$ different lines. The initial situation in our case
is that
only node 0 is defended, by line 1,
and the other nodes have no defense at all.
The attacking system is supposed to `send' symbols from the set $\Gamma$ to
the defending system. This means that attacks can be thought of as words from
$\Gamma^*$.
Each rule of the set $H$ is of the form $(k,a,j,z,p)$, where $1\le k,j\le s$,
$a\in \Gamma$, $z\in \{-1,0,1\}$ and $p$ is a real number with $0\le p\le 1$.
Such a rule means that when the attacking symbol $a$ is sent, the defense of node $i$
by line $k$ is transferred with
probability $p$ to the defense of node $i+z$ by line $j$. We shall also denote this
probability by $p_{a,k,j}^z$. Naturally, for all $a\in \Gamma$ and $1\le k\le s$,
$$
\sum_{j=1}^s\sum_{z=-1}^1 p_{a,k,j}^z=1,
$$
i.e. on each attacking symbol something necessarily happens.
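This normalization condition can be checked mechanically; a minimal Python sketch, assuming rules are stored as tuples $(k,a,j,z,p)$ as defined above (the function name is an illustrative choice):

```python
def is_valid_nds(H, s, Gamma="01"):
    """Check the normalization condition of a defense system: for every
    attacking symbol a and every line k, the probabilities p of the rules
    (k, a, j, z, p) must sum to 1."""
    for a in Gamma:
        for k in range(1, s + 1):
            total = sum(p for (k2, a2, j, z, p) in H if k2 == k and a2 == a)
            if abs(total - 1.0) > 1e-9:
                return False
    return True
```

A rule set in which some pair $(a,k)$ loses probability mass is rejected, matching the requirement that on each attacking symbol something necessarily happens.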
Note that the underlying system of a defense system is nondeterministic, and
therefore the model of defense systems we defined is sometimes called a
{\it nondeterministic DS}, NDS for short.
\begin{figure}[ht]
\begin{center}
\unitlength1mm
\begin{picture}(100,50)
\multiput(30,40)(0,-5){3}{\line(1,0){60}}
\put(30,15){\line(1,0){60}}
\multiput(26,40)(0,-5){3}{\line(1,0){2}}
\multiput(23,40)(0,-5){3}{\line(1,0){1}}
\multiput(21,40)(0,-5){3}{\line(1,0){.5}}
\multiput(92,40)(0,-5){3}{\line(1,0){2}}
\multiput(96,40)(0,-5){3}{\line(1,0){1}}
\multiput(98.5,40)(0,-5){3}{\line(1,0){.5}}
\put(26,15){\line(1,0){2}}
\put(23,15){\line(1,0){1}}
\put(21,15){\line(1,0){.5}}
\put(92,15){\line(1,0){2}}
\put(96,15){\line(1,0){1}}
\put(98.5,15){\line(1,0){.5}}
\multiput(40,40)(10,0){5}{\makebox(0,0)[c]{$\bullet$}}
\multiput(40,35)(10,0){5}{\makebox(0,0)[c]{$\bullet$}}
\multiput(40,30)(10,0){5}{\makebox(0,0)[c]{$\bullet$}}
\multiput(40,15)(10,0){5}{\makebox(0,0)[c]{$\bullet$}}
\put(18,40){\makebox(0,0)[c]{1}}
\put(18,35){\makebox(0,0)[c]{2}}
\put(18,30){\makebox(0,0)[c]{3}}
\put(18,15){\makebox(0,0)[c]{$s$}}
\put(40,45){\makebox(0,0)[c]{$-2$}}
\put(50,45){\makebox(0,0)[c]{$-1$}}
\put(60,45){\makebox(0,0)[c]{$0$}}
\put(70,45){\makebox(0,0)[c]{$1$}}
\put(80,45){\makebox(0,0)[c]{$2$}}
\put(60,40){\circle{3}}
\put(60,26){\makebox(0,0)[c]{$\cdot$}}
\put(60,23){\makebox(0,0)[c]{$\cdot$}}
\put(60,20){\makebox(0,0)[c]{$\cdot$}}
\end{picture}
\caption{\label{inids} A picture illustrating a defense system in the initial configuration
defending the node 0 by the line 1.}
\end{center}
\end{figure}
We fix the attacking symbol set $\Gamma=\{0,1\}$ in this paper.
\vspace{6pt}
An NDS can also be viewed as a countable Markov system. To simplify notation,
we denote each configuration of an NDS by an integer: if node $i$ is defended
by line $j$, we denote this configuration by the integer $i\cdot s+(j-1)$.
Recall that the initial configuration, node 0 defended by line
1, is represented by the integer 0.
Let $w\in\{0,1\}^*$. We shall denote by $p_w(k)$ the probability that the NDS is in the
configuration $k\in \mathbb{Z}$ in response to a finite sequence of attacking signals
$w$.
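With this Markov-system view, $p_w(k)$ can be computed by propagating a sparse distribution over configurations; a minimal Python sketch, assuming rules are stored as tuples $(k,a,j,z,p)$ as above:

```python
from collections import defaultdict

def p_w(H, s, w):
    """Distribution over configurations i*s + (j-1) of an NDS after the
    attacking sequence w, starting from node 0 defended by line 1
    (configuration 0). A rule (k, a, j, z, p) moves the defense from
    line k to line j, shifting the node by z, with probability p."""
    dist = {0: 1.0}
    for a in w:
        new = defaultdict(float)
        for conf, prob in dist.items():
            # Python's floor division recovers node i and 0-based line
            # from conf = i*s + (j-1), also for negative nodes.
            i, line = divmod(conf, s)
            for (k, a2, j, z, p) in H:
                if k == line + 1 and a2 == a:
                    new[(i + z) * s + (j - 1)] += prob * p
        dist = dict(new)
    return dist
```

With $s=1$ and the toy rules in the test below, the attack $w=0$ is already critical: after it, no configuration $0\le j\le s-1$ carries probability, so node 0 is undefended.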
\vspace{6pt}
Let $D=(K,H,\Gamma)$ be a defense system. $D$ is called {\it unreliable} if,
for some $w\in\Gamma^*$,
after the attacking sequence $w$ the probability that node 0 is defended
by some line is 0, i.e. $p_w(j)=0$ for all $0\le j \le s-1$. The word $w$ is
then called {\it critical}. If there is no
critical word $w\in \Gamma^*$ for $D$,
then $D$ is called {\it reliable}.
\begin{thm}\cite{Lis:91}\label{thm:1}
The unreliability of an NDS is undecidable, i.e. it is undecidable, for a given
NDS $B=(K,H,\{0,1\})$, whether there exists $w\in\{0,1\}^*$ such that
$p_w(j)=0$ for all $0\le j\le s-1$.
\end{thm}
\begin{proof}
In this proof we shall use the undecidability result of Corollary
\ref{lemma:1}.
Let $C$ be a deterministic $Z$-transducer and $D$ be a nondeterministic and
complete $Z$-transducer, $C=(K_1,H_1,q_0,q_f)$ and $D=(K_2,H_2,g_0,g_f)$.
Define a nondeterministic complete $Z$-transducer $D'=(K_3,H_3,g_0,g_f)$,
where
\begin{align*}
K_3&=K_1\cup K_2,\\
H_3&=H_1\cup H_2\cup \{(g_0,a,b,q)\mid (q_0,a,b,q)\in H_1\}.
\end{align*}
The $Z$-transducer $D'$ satisfies $O(D')=O(D)$, but it also contains the
paths of $C$, although they are not accepting paths.
Let $s$ be the number of elements of the set
$$
K=K_1\times K_3,
$$
and write $K=\{(q,g)_j\mid 1\le j\le s\}$ with $(q,g)_1=(q_0,g_0)$.
Let
Let
$$
H\subseteq K\times\{0,1\}\times \{-1,0,1\}\times K
$$
so that $((q_k,g_\ell)_i,a,z,(q_r,g_t)_j)\in H$, if
$$
(q_k,a,b_1,q_r)\in H_1 \text{ and } (g_\ell,a,b_2,g_t)\in H_3,
$$
and $z$ follows by the rules $(b_1,b_2\in \{c,cc\})$
\begin{equation}\label{eq:z}
z=\begin{cases} -1 &\text{if }b_1=b_2c,\\
0 &\text{if }b_1=b_2,\\
1 &\text{if }b_2=b_1c.
\end{cases}
\end{equation}
Moreover, $H$ contains the elements
\begin{equation}\label{eq:1}
((q_f,g_f),a,0,(q_f,g_f)),
\end{equation}
\begin{equation}\label{eq:2}
((q,g),a,1,(q_f,q_f)), \text{ where }\{q,g\}\cap \{q_f,g_f\}\ne \emptyset,\quad
(q,g)\ne(q_f,g_f),\quad a=0,1.
\end{equation}
We shall refer to the elements of $H$ as rules.
We shall now associate an NDS $B$ with the construction above. Let
$$
M_{a,k}^z=\{j\mid ((q,g)_k,a,z,(q,g)_j)\in H\}
$$
and let $m(a,k,z)=\vert M_{a,k}^z\vert$ and
$$
m(a,k)=\sum_{z=-1}^1 m(a,k,z).
$$
Let $B=(K',H',\{0,1\})$ be the defense system with $K'=\{1,\dots,s\}$,
such that if $((q,g)_k,a,z,(q,g)_j)\in H$, then $(k,a,j,z,p_{a,k,j}^z)\in H'$ with
$p_{a,k,j}^z=1/m(a,k)$; by construction these probabilities sum to 1 for each $a$ and $k$.
\vspace{12pt}
{\it Claim.} There exists a finite sequence $w\in\{0,1\}^*$ such
that the NDS $B$ has $p_w(j)=0$ for all $0\le j\le s-1$ if and only if
$O(C)\not\subseteq O(D)$.
\vspace{6pt}
Before the proof, we note a few facts about the construction.
Our defense system $B$ simulates the computations of the $Z$-transducers
$C$ and $D'$ at the same time in its lines, which can be thought of as the
elements of $K=K_1\times K_3$.
By \eqref{eq:z}, $z$ gives the difference of the lengths
of the outputs in $C$ and $D'$. It follows that if the defended node is 0,
the outputs of $C$ and $D'$ are equal. If the node is negative, the length of
the output of $C$ is larger than the length of the output of $D'$
by the absolute value of the node; if it is positive,
then vice versa.
Now we are ready to prove the equivalence mentioned above.
\vspace{12pt}
{\it Proof of the Claim.}
Assume first that $O(C)\not\subseteq O(D)$. This means that there
exists a word $w\in \{0,1\}^*$ such that for the unique $y\in c^*$ with
$(w,y)\in O(C)$, we have $(w,y)\notin O(D)$. We have two cases:
\vspace{6pt}
i) If $w\in L(D)$, then for all $(w,y')\in O(D)$, $y'\ne y$. There exist
four kinds of paths in our NDS $B$ that have positive probability on the attacking
sequence $w$; we separate them in terms of the computations of $C$ and $D'$:
\vspace{6pt}
1) The simulation of $D'$ is identical to the simulation of $C$. Then
we are defending node 0 the whole time and end up in the state $(q_f,q_f)$.
Now, for a word $wa$, $a\in\{0,1\}$, we use rule \eqref{eq:2} and the defense
shifts to node 1, since $z=1$. Note that we can add several symbols to
$w$, and the defended node moves one larger with every symbol. The simulation
of $C$ does not change from the beginning, since no
extension of an accepted word can be accepted by a deterministic $Z$-transducer.
2) The simulation of $D'$ reaches the final state $g_f$ before the
simulation of $C$ does. After that, the rule used is \eqref{eq:2}.
Every step of this rule moves the defense to the node
one larger. After that we may add symbols from $\{0,1\}$ to the end of
$w$ to get the defense to a positive node.
3) The simulation of $D'$ is not in the final state when the simulation
of $C$ ends. Again, after that we may add symbols from $\{0,1\}$ to the end
of the word $w$ to get the defense to a positive node.
4) The simulations of $D'$ and $C$ reach their final states at the same time, i.e.
at the end of $w$. Of course, the node defended at that time cannot be
0, since then the outputs would be equal in $C$ and $D$, which is
impossible by the fact that $(w,y)\notin O(D)$.
We can again add symbols to the end of $w$; the rule used is \eqref{eq:1},
which does not change the defense anywhere.
\vspace{6pt}
By cases 1)--4) we see that there exists a word $wv$, $v\in\{0,1\}^*$, such that
$p_{wv}(j)=0$ for all $0\le j\le s-1$. This follows since there is a bound on
the number of symbols that have to be added to drive all these possible defense
paths away from node 0.
\vspace{6pt}
ii) If $w\notin L(D)$, then only cases 1)--3) above are possible, and again
there exists $wv$, $v\in\{0,1\}^*$, such that the NDS $B$ is unreliable.
So we have proved that if $O(C)\not\subseteq O(D)$, then the NDS
$B$ is unreliable.
\vspace{12pt}
Assume next that the NDS $B$ is unreliable. This means that
there exists a sequence $w\in\{0,1\}^*$ such that
$p_w(j)=0$ for all $0\le j\le s-1$. Since $C$ is deterministic
and therefore complete, some prefix of $w$ must be in
$L(C)$: otherwise there is a path in $C$ for the input word $w$,
and therefore in $B$ node 0 has a positive defense probability on
some line, namely one related to an element $(q,q)\in K$, $q\in K_1$.
Now let $v$ be the prefix of $w$ such that $v\in L(C)$, and
let $y$ be the unique element of $c^*$ such that $(v,y)\in O(C)$.
Now $(v,y)\notin O(D)$, since otherwise there would be a possible defense
of node 0 after the attacking sequence $v$, and after $v$ the instruction used
would be the one corresponding to rule \eqref{eq:1}, which does not move the
defense anywhere. Therefore, for the attacking sequence $w$ there would be
a defense of node 0 with positive probability,
which is impossible by the assumption.
Now we have finally proved the Claim.
\vspace{12pt}
By Corollary \ref{lemma:1} it is undecidable whether
$O(C)\not\subseteq O(D)$, and therefore the unreliability of an NDS is
also undecidable.
\end{proof}
Note that since unreliability is the complement of reliability, this also
means that reliability is undecidable.
\section{Finite substitutions}\label{sec:eq}
Let $\Sigma$ and $\Delta$ be two alphabets. For a set $S$ denote by $2^{S}$ the power set
of $S$, i.e. the collection of all subsets of $S$.
A mapping $\phi:\Sigma^* \to 2^{\Delta^*}$ is called a {\it substitution} if
\vspace{6pt}
1) $\phi(\epsilon)=\{\epsilon\}$ and
2) $\phi(xy)=\phi(x)\phi(y)$.
\vspace{6pt}
Because of condition 2, a substitution is usually defined by giving
the images of all the letters of $\Sigma$.
Let $\phi$ be as above and let $L$ be a language over $\Sigma$, i.e. $L\subseteq \Sigma^*$. We denote
$$
\phi(L)=\bigcup_{w\in L} \phi(w).
$$
Two substitutions $\phi,\xi:\Sigma^*\to 2^{\Delta^*}$ are equivalent on language
$L$ if
$$
\phi(L)=\xi(L).
$$
A substitution $\phi$ is called $\epsilon${\it -free} if $\epsilon\notin\phi(a)$
for all $a\in \Sigma$,
and it is called a {\it finite substitution} if, for all $a\in\Sigma$,
the set $\phi(a)$ is finite.
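Conditions 1) and 2) make the image of a word under a finite substitution computable letter by letter; a minimal Python sketch, with $\phi$ given as a dictionary of letter images (an illustrative representation):

```python
def apply_substitution(phi, word):
    """Image phi(word) of a word under a finite substitution given by the
    images of the letters; condition 2) extends it multiplicatively."""
    result = {""}  # condition 1): phi(epsilon) = {epsilon}
    for letter in word:
        result = {x + y for x in result for y in phi[letter]}
    return result

def apply_to_language(phi, L):
    """phi(L) as the union of the images of the words of L."""
    out = set()
    for w in L:
        out |= apply_substitution(phi, w)
    return out
```

For example, with $\phi(a)=\{0,01\}$ and $\phi(b)=\{1\}$, the word $ab$ has the image $\{01,011\}$.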
\vspace{12pt}
A language $L$ is called {\it regular} if it is accepted by a
finite automaton. It is known that regular languages are closed under finite
substitutions, which means that if $L$ is regular, so is $\phi(L)$ for a finite
substitution $\phi$.
The next theorem states an undecidability result concerning finite substitutions and
regular languages. It is from \cite{Lis:97}.
\begin{thm}
The equivalence problem for $\epsilon$-free finite substitutions on regular
language $b\{0,1\}^*c$ is undecidable.
\end{thm}
\begin{proof}
We shall use Theorem \ref{thm:1}. Let $V=(K,H,\{0,1\})$ be an NDS as defined
in the previous section, with $K=\{1,\dots,s\}$, $H$ the set of instructions, and
attacking symbol set $\{0,1\}$.
We shall define two finite substitutions
$\phi,\xi:\{b,0,1,c\}^*\to 2^{\{0,1\}^*}$ such that $\phi$ and $\xi$ are equivalent
on the language $b\{0,1\}^*c$ if and only if the NDS $V$ is reliable.
First we define the following sets and words:
\begin{align*}
D_a&=\{(k,z,j)\mid (k,a,j, z,p)\in H \text{ for some }p>0\}, \quad a\in\{0,1\},\\
D&=D_0\cup D_1,\\
w&=010010001\dots10^{s+1}1,\quad w^0=\epsilon, \quad w^1=w,\quad w^2=ww,\\
\alpha_k&=01001\dots10^k1,\quad \beta_k=0^{k+1}1\dots10^{s+1}1, \text{ for } 1\le k\le s,\\
w&=\alpha_k\beta_k, \quad F(k,z,j)=\beta_k w^{z+1}\alpha_j ,\quad
F(k,z)=F(k,z,j)\beta_j=\beta_kw^{z+2},\\ T_a&=\bigcup_{(k,z,j)\in
D_a} \{F(k,z,j)\},\quad C_a=\bigcup_{(k,z,j)\in D_a} \{F(k,z)\},
\quad a\in\{0,1\} \\
C&=C_0\cup C_1,\: M=\{w\},\: B=\{ww\},\: N=\{\beta_k\mid 1\le
k\le s\},\text{ and } S=\{\alpha_1\}.
\end{align*}
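The word identities above ($w=\alpha_k\beta_k$ and $F(k,z,j)\beta_j=\beta_kw^{z+2}$) can be checked mechanically. The following Python sketch is our own illustration, with `alpha`, `beta` and `F` mirroring $\alpha_k$, $\beta_k$ and $F(k,z,j)$ for a small state count $s$:

```python
# Illustrative check (ours, not from the paper) of the word
# identities used in the proof, for a small state count s.
s = 3
w = "".join("0" * i + "1" for i in range(1, s + 2))   # 01 001 ... 0^{s+1}1

def alpha(k):   # prefix 01 001 ... 0^k 1
    return "".join("0" * i + "1" for i in range(1, k + 1))

def beta(k):    # suffix 0^{k+1}1 ... 0^{s+1}1
    return "".join("0" * i + "1" for i in range(k + 1, s + 2))

def F(k, z, j):  # F(k,z,j) = beta_k w^{z+1} alpha_j
    return beta(k) + w * (z + 1) + alpha(j)

for k in range(1, s + 1):
    assert alpha(k) + beta(k) == w                       # w = alpha_k beta_k
for k in range(1, s + 1):
    for j in range(1, s + 1):
        assert F(k, 0, j) + beta(j) == beta(k) + w * 2   # F(k,z,j)beta_j = beta_k w^{z+2}
```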
Now we can define the finite substitutions $\phi,\xi:\{b,0,1,c\}^*\to 2^{\{0,1\}^*}$ by giving the images of the letters:
\begin{align*}
\xi(b)&=S\cup MN=\{\alpha_1\}\cup \{w\beta_k \mid 1\le k\le s\},\\
\phi(b)&=\xi(b)\cup M=\{\alpha_1\}\cup \{w\beta_k \mid 1\le k\le s\}\cup \{w\},\\
\xi(c)&=\phi(c)=M\cup NM=\{w\}\cup \{\beta_kw \mid 1\le k\le s\},\\
\xi(a)&=\phi(a)=B\cup T_a\cup NT_a\cup C_aN\cup NC_aN\\
&=\{ww\}\cup \{\beta_kw^{z+1}\alpha_j\mid (k,z,j)\in D_a\}\\
&\cup \{\beta_\ell\beta_k w^{z+1}\alpha_j\mid 1\le \ell\le s,(k,z,j)\in D_a\}\\
&\cup \{\beta_kw^{z+2}\beta_\ell\mid 1\le \ell\le s,(k,z,j)\in D_a\}\\
&\cup \{\beta_{\ell_1}\beta_kw^{z+2}\beta_{\ell_2}\mid 1\le \ell_1,\ell_2\le s,
(k,z,j)\in D_a\},
\end{align*}
for $a=0,1$.
Let $L$ be the language $b\{0,1\}^*c$. Now clearly $\xi(x)\subseteq \phi(x)$ for
all $x\in L$, since $\xi(a)\subseteq \phi(a)$ for all letters
$a\in \{b,0,1,c\}$. Therefore, to prove that $\xi(L)=\phi(L)$ if and only
if $V$ is reliable, we have to show
that $\phi(L)\subseteq\xi(L)$ if and only if $V$ is reliable.
\vspace{6pt}
Suppose first that $V$ is reliable. Let $x=x_0\dots x_{n+1}\in L$,
$u=u_0\dots u_{n+1}$, where $x_i\in \{b,0,1,c\}$ and $u_i\in \phi(x_i)$ for
all integers $0\le i\le n+1$. Note that $x_0=b$ and $x_{n+1}=c$. We have to
show that there exists $v_i\in \xi(x_i)$ for all $0\le i\le n+1$ such that
$v=v_0\dots v_{n+1}=u$.
First we note that the only difference between the images under $\xi$ and $\phi$ is
in the image of $b$, and $\phi(b)\setminus \xi(b)=M$. Therefore, if $u_0\ne w$, we have the trivial solution $v_i=u_i$ for all $0\le i\le n+1$.
So we assume that $u_0=w$.
We shall use parentheses to illustrate the
factorizations of the $u_i$'s and $v_i$'s under $\phi$ and $\xi$. Now we divide
into three cases:
\vspace{6pt}
(i) If $n=0$, then $x=bc$ and we have two cases:
1) If $u_1=w\in \phi(c)$, then $u_0u_1=(w)(w)=(\alpha_1)(\beta_1w)\in\xi(x)$.
2) If $u_1\in NM\subseteq\phi(c)$, i.e. $u_1=\beta_kw$ for some $1\le k\le s$, then
$u_0u_1=(w)(\beta_kw)=(w\beta_k)(w)\in\xi(x)$.
\vspace{6pt}
(ii) If $n\ge 1$ and $u_1\notin B$, we shall show that there is a
factorization such that $v_i=u_i$ for $2\le i\le n+1$ and $v_0v_1=u_0u_1$.
Here we have four cases:
1) If $u_1\in T_{x_1}$, then, for $(k,z,j)\in D_{x_1}$,
$$
u_0u_1=(w)(\beta_kw^{z+1}\alpha_j)=(\alpha_1)(\beta_1\beta_kw^{z+1}\alpha_j)
=v_0v_1, \quad v_0\in S,v_1\in NT_{x_1}.
$$
2) If $u_1\in NT_{x_1}$, then, for $(k,z,j)\in D_{x_1}$ and $1\le \ell\le s$,
$$
u_0u_1=(w)(\beta_\ell\beta_kw^{z+1}\alpha_j)=(w\beta_\ell)(\beta_kw^{z+1}\alpha_j)
=v_0v_1, \quad v_0\in MN,v_1\in T_{x_1}.
$$
3) If $u_1\in C_{x_1}N$, then, for $(k,z,j)\in D_{x_1}$ and $1\le \ell \le s$,
$$
u_0u_1=(w)(\beta_kw^{z+2}\beta_\ell)=(\alpha_1)(\beta_1\beta_kw^{z+2}\beta_\ell)
=v_0v_1, \quad v_0\in S,v_1\in NC_{x_1}N.
$$
4) If $u_1\in NC_{x_1}N$, then, for $(k,z,j)\in D_{x_1}$ and $1\le \ell,t \le s$,
$$
u_0u_1=(w)(\beta_\ell\beta_kw^{z+2}\beta_t)=(w\beta_\ell)(\beta_kw^{z+2}\beta_t)
=v_0v_1, \quad v_0\in MN,v_1\in C_{x_1}N.
$$
\vspace{6pt}
(iii) If $n\ge 1$ and $u_1\in B$, then we need the reliability of $V$. Let
$t=\min\{i\mid i\ge1, u_i\notin B\}$. So the word
$u_0u_1\dots u_{t-1}=w(ww)\dots (ww) = w^{2t-1}$.
Since $V$ is reliable, for the attacking sequence
$x'=x_1\dots x_{t-1}\in \{0,1\}^*$ there exists a sequence
$$
(j_0=1,x_1,j_1,z_1,p_1)(j_1,x_2,j_2,z_2,p_2)\dots
(j_{t-2},x_{t-1},j_{t-1},z_{t-1},p_{t-1})
$$
of elements of $H$ such that $p_i>0$ for all $1\le i\le t-1$ and
\begin{equation}\label{eq:3}
\sum_{i=1}^{t-1} z_i=0.
\end{equation}
Therefore there exists a sequence
$$
(j_0=1,z_1,j_1)(j_1,z_2,j_2)\dots
(j_{t-2},z_{t-1},j_{t-1}),
$$
where $(j_{i-1},z_i,j_i)\in D_{x_i}$. Now define $v_0=\alpha_1$ and,
for $1\le i\le t-1$,
$$
v'_i=\beta_{j_{i-1}}w^{z_i+1}\alpha_{j_i}\in T_{x_i},
$$
so that
\begin{align*}
v_0v'_1\dots v'_{t-1}&=\alpha_1\beta_1w^{z_1+1}\alpha_{j_1}
\beta_{j_1}w^{z_2+1}\alpha_{j_2}\dots \beta_{j_{t-2}}w^{z_{t-1}+1}\alpha_{j_{t-1}}\\
&=w\,w^{z_1+1}\,w\,w^{z_2+1}\dots w\,w^{z_{t-1}+1}\alpha_{j_{t-1}}.
\end{align*}
Now by \eqref{eq:3} we get that
$$
v_0v'_1\dots v'_{t-1}=w^{2t-2}\alpha_{j_{t-1}}.
$$
So we have that $u_0u_1\dots u_{t-1}=v_0v'_1\dots v'_{t-1}\beta_{j_{t-1}}$.
We may already set $v_i=v'_i$ for $1\le i\le t-2$.
Now we have two cases depending on $t$.
First, if $t=n+1$, then
we have two cases:
1) If $u_{n+1}\in M$, then we set $v_{t-1}=v'_{t-1}$
and $v_{n+1}=\beta_{j_{t-1}}w\in NM$, and
so $u=v$.
2) If $u_{n+1}\in NM$, $u_{n+1}=\beta_kw$, then we set
$$
v_{t-1}=v_n=\beta_{j_{t-2}}w^{z_{t-1}+2}\beta_k\in C_{x_n}N
\quad \text{ and } v_{n+1}=w\in M.
$$
Again $u=v$.
The second case is that $t\le n$. Then we set $v_i=u_i$ for $t+1\le i\le n+1$,
and so we have four cases for $v_t$ and $v_{t-1}$:
1) If $u_{t}\in T_{x_t}$, for some $(k,z,j)\in D_{x_t}$
$u_t=\beta_kw^{z+1}\alpha_j$, then we set
$$
v_{t-1}=v'_{t-1} \text{ and }
v_t=\beta_{j_{t-1}}\beta_kw^{z+1}\alpha_j\in NT_{x_t},
$$
to get $u=v$.
2) If $u_{t}\in NT_{x_t}$, for some $(k,z,j)\in D_{x_t}$, $1\le \ell\le s$,
$u_t=\beta_\ell\beta_kw^{z+1}\alpha_j$, then we set
\begin{align*}
v_{t-1}&=\beta_{j_{t-2}}w^{z_{t-1}+1}\alpha_{j_{t-1}}\beta_{j_{t-1}}\beta_\ell
=\beta_{j_{t-2}}w^{z_{t-1}+2}\beta_\ell\in C_{x_{t-1}}N\\
\intertext{and}
v_t&=\beta_kw^{z+1}\alpha_j\in T_{x_t},
\end{align*}
to get $u=v$.
3) If $u_{t}\in C_{x_t}N$, for some $(k,z,j)\in D_{x_t}$, $1\le \ell\le s$,
$u_t=\beta_kw^{z+2}\beta_\ell$, then we set
$$
v_{t-1}=v'_{t-1} \text{ and }
v_t=\beta_{j_{t-1}}\beta_kw^{z+2}\beta_\ell \in NC_{x_t}N,
$$
to get $u=v$.
4) If $u_{t}\in NC_{x_t}N$, for some $(k,z,j)\in D_{x_t}$ and $1\le \ell_1,\ell_2\le s$,
$u_t=\beta_{\ell_1}\beta_kw^{z+2}\beta_{\ell_2}$, then we set
\begin{align*}
v_{t-1}&=\beta_{j_{t-2}}w^{z_{t-1}+1}\alpha_{j_{t-1}}\beta_{j_{t-1}}\beta_{\ell_1}
=\beta_{j_{t-2}}w^{z_{t-1}+2}\beta_{\ell_1} \in C_{x_{t-1}}N \\
\intertext{and}
v_t&=\beta_kw^{z+2}\beta_{\ell_2} \in C_{x_t}N,
\end{align*}
to get $u=v$.
Now we have proved that if $V$ is reliable then $\phi(L)\subseteq\xi(L)$.
\vspace{6pt}
Assume now that $V$ is unreliable, i.e. there is a word $x'=x_1\dots x_n$
such that $p_{x'}(j)=0$ for all $1\le j\le s$. Let
$x=bx'c=x_0x_1\dots x_n x_{n+1}\in L$. We shall first prove the following claim.
\vspace{6pt}
{\it Claim.} There are no elements $v'_i\in T_{x_i}$, $1\le i\le n$, such that
$w^{2n+1}=\alpha_1v'_1v'_2\dots v'_n\beta_j$ for some $1\le j\le s$.
\vspace{6pt}
{\it Proof of the Claim.} Assume the contrary. This means that there exists a word
$$
y=\beta_1w^{z_1+1}\alpha_{j_1}\beta_{j_1}w^{z_2+1}\alpha_{j_2}\cdots
\beta_{j_{n-1}}w^{z_n+1}\alpha_{j_n}
$$
such that
$$
(1,x_1,j_{1},z_1,p_1)(j_1,x_2,j_2,z_2,p_2)\cdots (j_{n-1},x_n,j_{n},z_n,p_n)
$$
is a sequence in $H$, $p_i>0$ for all $i$, and
$$
\alpha_1 y\beta_{j_n}=w^{2n+1}.
$$
Now to get the number of $w$ correct on the left hand side, we must have
$$
1+(z_1+1)+1+(z_2+1)+\dots+1+(z_n+1)+1=\sum_{i=1}^n z_i +2n+1=2n+1,
$$
so $\sum_{i=1}^n z_i=0$, but this contradicts the fact that $x'$ is
a critical word. This ends the proof of the Claim.
\vspace{6pt}
Clearly $w^{2n+2}\in \phi(x)$, and we shall next show that
$w^{2n+2}\notin \xi(x)$. Assume on the contrary that $w^{2n+2}\in \xi(x)$;
then for all $0\le i\le n+1$ there exists $v_i\in \xi(x_i)$ such that
$w^{2n+2}=v_0\dots v_{n+1}$. Clearly only the case $v_0=\alpha_1\in S$ is
possible, since $v_0=w\beta_j\in MN$ leads to a contradiction.
Assume that $x_1=a\in \{0,1\}$ and let
$$
P=\{u\mid u\text{ is a prefix of }w^k \text{ for some integer }k\}.
$$
We divide the proof into five cases according to $v_1$:
\vspace{6pt}
1) If $v_1\in B$, i.e. $v_1=ww$, then $v_0v_1=\alpha_1ww\notin P$.
2) If $v_1\in NT_a$, i.e. for some $1\le \ell\le s$ and $(k,z,j)\in D_a$
$v_1=\beta_\ell\beta_kw^{z+1}\alpha_j$, then $v_0v_1\notin P$.
3) If $v_1\in C_aN$, i.e. for some $1\le \ell\le s$ and $(k,z,j)\in D_a$
$v_1=\beta_kw^{z+2}\beta_\ell$, then $v_0v_1\notin P$.
4) If $v_1\in NC_aN$, i.e. for some $1\le \ell,t\le s$ and $(k,z,j)\in D_a$
$v_1=\beta_\ell\beta_kw^{z+2}\beta_t$, then $v_0v_1\notin P$.
5) If $v_1\in T_a$, then let $t=\min\{i\mid v_i\notin T_{x_i},\ 1\le i\le n\}$,
if such an index exists.
Note that if $v_0v_1\dots v_{t-1}\in P$, then $v_0v_1\dots v_{t-1}=w^r\alpha_j$ for some integers $r$ and $j$,
where $1\le j\le s$.
Assume first that no such $t$ exists, i.e. $v_i\in T_{x_i}$ for all $1\le i\le n$. If now $v_{n+1}=w$, then $v_0v_1\dots v_nv_{n+1}\notin P$,
and if $v_{n+1}=\beta_j w\in NM$, then by the Claim above $v_0v_1\dots v_nv_{n+1}\ne
w^{2n+2}$; so necessarily such $t$ exists and $t\le n$.
Now we have four cases according to whether $v_t\in B$, $v_t\in NT_{x_t}$,
$v_t\in C_{x_t}N$ or $v_t\in NC_{x_t}N$; but as in cases 1--4 above, each of these
leads to a contradiction, since $v_0v_1\dots v_t\notin P$.
So we have proved that $w^{2n+2}\notin \xi(x)$, and therefore the proof
of the theorem is complete.
\end{proof}
\section{Introduction}\label{sec:intro}
The IoT loosely refers to any interconnected devices used to monitor and control information in order to deliver services to remote users. Examples of such devices are countless, including for example smart plugs and lights, coffee makers and heating systems. The list is growing steadily, and we feel that the IoT era is only just dawning \cite{introIoT}.
Wherever useful services are available, malicious attacks arise.
One of our favourite examples is the Samsung refrigerator leveraged to violate a set of Gmail credentials \cite{smartfridge}. Another example we contributed is about printers, whose unprotected 9100 ports allow an attacker to mount a paper DoS as well as to eavesdrop on the contents of the printouts \cite{overtrustprinter}.
VoIP devices and related protocols, such as the Session Initiation Protocol (SIP) \cite{SIP} and the Real-time Transport Protocol (RTP) \cite{RTP}, revolutionised traditional voice calling technology. They brought up to the level of computer networks a service that traditionally ran at another, separate level. Resting on a long-prototyped technology, the popularity of VoIP dates back to approximately the mid 1990s \cite{historyvoip}, hence VoIP devices can be considered among the earliest IoT members. Moreover, their integration with the current IoT is gaining significant momentum as we write, also due to the quest for controlling devices and services remotely by voice \cite{azienda1,azienda2,azienda3}.
The weaknesses of VoIP in front of malicious attackers are known at least since the ``Information Security Reading Room'' of the SANS Institute published an eminent report in 2002 \cite{sans}. The report provided a proof-of-concept of how VoIP calls could be overheard by using \textit{commercial tools}. Our research aims at verifying whether and how that work has been universally received today --- after nearly two decades --- namely whether VoIP in use has been hardened at all. Our methodology is empirical and leverages \textit{freeware} to conduct a Vulnerability Assessment and Penetration Testing session on the VoIP devices currently in use in our Department. The outcome is that those devices are variously exploitable from inside the Departmental network, although it is clear that the network has protection measures from the outside. We are aware that the very same devices are adopted in a number of other Institutions under similar configurations, so the same outcome could be expected elsewhere too.
VoIP hardening measures exist today, such as TLS-based solutions SIPS \cite{SIPS} and SRTP \cite{SRTP}. However, our findings demonstrate that \textit{there are devices still in use at present} that are as weak as two decades ago. This may be interpreted as yet another paradox of security economics \cite{ross}, but particularly surprises us for at least two reasons.
One is that the use of VoIP technology is widespread and, as noted above, additionally empowered lately.
The other one is that protection measures at the network level have known limitations because abuse at node level is still possible. Broader evaluations will come at the end (\S\ref{sec:concl}).
\subsection{Contributions}\label{sec:contrib}
This paper explores whether and how VoIP devices can be exploited today using freeware, namely non-commercial tools that even teenagers may try out at school.
We address this question in a specific though common scenario: phones do not run TLS-based solutions; an insider attacker connects her attacking laptop to an Ethernet cable unplugged from a VoIP phone. The findings are that, based upon simple tools such as nmap, Ettercap, Wireshark and Python programming, the attacker can seriously compromise the VoIP service.
More precisely, we define a family of three attacks and, following the same style used against printers before \cite{overtrustprinter}, we term it the \textit{Phonejack family of attacks against VoIP}:
\begin{itemize}
\item \textbf{Phonejack 1 attack: Zombies for DDoS.} Every model of a VoIP device may suffer zero-day or documented vulnerabilities. These could be exploited to carry out large-scale DDoS attacks against specific Internet targets. We conjecture but do not attempt such an attack (\S\ref{sec:zombie}).
\item \textbf{Phonejack 2 attack: phone DoS.} By continuously sending packets to a target VoIP device, this can be exploited to ring indefinitely and crash eventually. We leverage Python multi-thread programming to overwhelm a test network of four VoIP devices and publish a video clip to demonstrate the audio experience of the attack (\S\ref{sec:voipdos}). To the best of our knowledge, Phonejack 2 is entirely innovative.
\item \textbf{Phonejack 3 attack: audio call eavesdropping.} Because packets are sent in the clear, we eavesdrop successfully and dump them to an audio file (\S\ref{sec:privacyattack}). This attack is based on the mentioned 2002 SANS observations \cite{sans} but it is intriguing that it solely relies on freeware.
\end{itemize}
We also prototype effective countermeasures, at least against Phonejack 2 and 3. Taking advantage of inexpensive Raspberry Pi devices, every VoIP phone can be shielded to only accept each distinct traffic packet once, hence countering Phonejack 2. Raspberry Pis can also implement encryption so that packets are only transferred enciphered over the network hence cannot be understood by a man in the middle (MITM). Each Pi then relays cleartext packets to its phone. While this prototype somewhat optimistically trusts the network between a Pi and its phone, it demonstrates once more how a security measure could be incorporated and deployed in a VoIP phone.
Our experiments were conducted over an air-gapped testbed detailed below.
Taking inspiration from them, we carried out the mere information gathering step at institutional level, finding out that the phone network and the computer network were not adequately separated. We reported this to our IT team, and corrective measures were implemented immediately.
\subsection{Testbed}
Our testbed is an air-gapped network featuring an Asterisk server version 16, an attacking laptop running the offensive freeware, one VoIP phone model Cisco SPA 921 and three phones model Cisco SPA 922. All are connected through a Netgear gs105se switch. Alternatively, because the Cisco SPA 922 features two Ethernet sockets, one for incoming and one for outgoing traffic, the attacking laptop could be connected to one such phone.
The additional devices used to demonstrate the use of encryption to protect the calls were a Raspberry Pi 3 B+ and a Pi 4 B. These were connected through a different network setup: each Pi to a phone via Ethernet, then the Pis to each other via their Wi-Fi interface through an SSID exposed by a Vodafone Station Revolution.
\subsection{Paper Structure}
This manuscript continues by describing our Phonejack family of attacks (\S\ref{sec:zombie}, \S\ref{sec:voipdos}, \S\ref{sec:privacyattack}) and some possible countermeasures (\S\ref{sec:countermeasures}). It outlines some related work (\S\ref{sec:related}) and concludes with some broader evaluations of the findings (\S\ref{sec:concl}). The basics of the underlying VoIP protocols SIP and RTP are deferred to the Appendix due to space constraints.
\section{Phonejack 1 attack: Zombies for DDoS} \label{sec:zombie}
Denial of Service (DoS) is one of the most dangerous attacks. A simple example would be a large infrastructure that no longer responds to millions of users for hours and hours. The distributed version of denial of service (DDoS) floods the victim with traffic generated from different sources. To perform a DDoS, an attacker builds a botnet, a network of infected machines called zombies. The prolonged duration of a DDoS may have various logistic and monetary consequences, such as loss of customer trust towards the service and monetary damage to the business of up to \texteuro 35.000 per hour over an average duration of 15 hours \cite{DDoS1}. In consequence, when new vulnerabilities are discovered on remotely controlled IoT devices, there is also a danger of exploiting these devices as zombies. So, even if the computational power of a single device is not significantly high, the huge number of available devices together forms a valuable computing asset. For example, a botnet of 1.5 million cameras was built in 2016 and generated 660 Gbps of network traffic \cite{camerasvice}.
Therefore, we set out to assess which vulnerabilities of VoIP systems are known. Querying the Common Vulnerabilities and Exposures (CVE) database of MITRE \cite{homemitre} with the keyword ``voip'' returns 107 CVE entries, some of which can be practically exploited on devices \cite{cvevoip}. The query can then be restricted to the devices in our setup (\S\ref{sec:contrib}), yielding two CVEs, namely CVE-2014-3312, a Remote Command Execution (RCE) \cite{CVE-2014-3312}, and CVE-2014-3313, a Cross-Site Scripting (XSS) \cite{CVE-2014-3313}.
The impact of exploiting such vulnerabilities could be evaluated by considering the DDoS implications mentioned above. Querying Shodan \cite{shodan} may, in turn, help us assign a likelihood to such vulnerabilities, by informing us of how common the affected devices are. A ``cisco spa'' query returns 455 entries, which is a rather low outcome. A possible explanation is that VoIP devices are not left publicly visible, which is a commendable protection measure. However, a more general query, say ``asterisk'', returns 59.341 results, and would give a motivated attacker thousands of potential targets worthy of further vulnerability assessment to seek VoIP exploitation. Of course, the 2014 vulnerabilities have arguably been fixed since, but then we question whether updating phones falls within widespread security maintenance routines. We refrain from actively engaging in exploiting such vulnerabilities because this lies outside our research aims.
Finally, it must be recalled here that VoIP phones may also suffer undocumented, zero-day attacks.
\section{Phonejack 2 attack: phone DoS} \label{sec:voipdos}
While the exploitation of VoIP phones in a botnet might be considered somewhat ``traditional'', we also wonder to what extent phones themselves can become victims of DoS activities. To assess such a vulnerability, we explore how to bombard a phone with tailored SIP packets, and observe that this can be successful.
As preliminary operations, we configure the Asterisk server and the four Cisco VoIP phones so that they authenticate to the server. A phone ID, phone number, password and gateway must be manually entered on each phone. We assume that the phone number is public. An attacker may build a function to scan the local network and obtain the IPs and MAC addresses of the connected devices. Table \ref{tab:netscanscript} shows a Python implementation of such a function. The \texttt{network} parameter represents the network to be scanned (e.g. 192.168.1.0/24).
\begin{table}[h]
\caption{Network scanning in Python}\label{tab:netscanscript}
\begin{lstlisting}
def scanNetwork(network):
    hosts = []
    nm = nmap.PortScanner()
    out = nm.scan(hosts=network, arguments='-sP')
    for k, v in out['scan'].items():
        if str(v['status']['state']) == 'up':
            hosts.append([str(v['addresses']['ipv4']),
                          str(v['addresses']['mac'])])
    return hosts
\end{lstlisting}
\end{table}
We now make a call between two phones and record the network traffic through Wireshark \cite{wireshark}, as if run by an attacker. We apply a filter to extract the SIP packet that causes a ring, and save this packet in a file called \texttt{sipInvite.pcap}. This file contains information such as the number and IP address of the recipient phone. We note that a phone does not check the SIP timestamp of a received packet but only that the recipient phone number of the packet corresponds to itself; in other words, a receiving phone only checks whether a packet is intended for itself. We also observe that flooding a phone with requests causes it to ring continuously, then crash and reboot. These observations guide our attack. We write a function (again in Python) to carve a SIP packet as we want. Our \texttt{flood\_DoS} function in Table \ref{tab:floodDoS} takes an id, an IP address and a MAC address and calls the \texttt{tcprewrite} and \texttt{tcpreplay} commands. More specifically, \texttt{tcprewrite} takes the \texttt{sipInvite.pcap} file as input and modifies the fields containing the IP and MAC address of the packet. Finally, \texttt{tcprewrite} saves the newly forged phone-ringing packet in a file called \texttt{newSipInvite.pcap}. After that, the \texttt{tcpreplay} command takes the \texttt{newSipInvite.pcap} file, sends it in a loop to the phone and achieves the expected outcome: the phone rings for a few seconds, then crashes and reboots.
\begin{table}[h]
\caption{Phonejack 2 attack in Python}\label{tab:floodDoS}
\begin{lstlisting}
def flood_DoS(id, IP, MAC):
    subprocess.call(['tcprewrite',
        '--dstipmap=192.168.1.18:'+IP,
        '--enet-dmac='+MAC, '--dlt=enet', '--fixcsum',
        '--infile=sipInvite.pcap',
        '--outfile=newSipInvite'+str(id)+'.pcap'])
    subprocess.Popen(['tcpreplay', '--intf1=eth0',
        '--loop=5', 'newSipInvite'+str(id)+'.pcap'])
    return
\end{lstlisting}
\end{table}
We then carry out this attack in parallel on all four of our devices. Table \ref{tab:mainpy} shows a Python script that uses a thread for each phone. More precisely, the script scans the network using the \texttt{scanNetwork} function, builds a thread, gives it a job by means of the \texttt{flood\_DoS} function and starts it.
\begin{table}[h]
\caption{Parallelising the Phonejack 2 attack in Python}\label{tab:mainpy}
\begin{lstlisting}
if __name__ == "__main__":
    hosts = scanNetwork(sys.argv[1])
    jobs = []
    for i in range(0, len(hosts)):
        IP = hosts[i][0]
        MAC = hosts[i][1]
        thr = threading.Thread(target=flood_DoS,
                               args=(i, IP, MAC))
        jobs.append(thr)
    for j in jobs:
        j.start()
    for j in jobs:
        j.join()
\end{lstlisting}
\end{table}
Figure \ref{fig:telphonejack2} shows a laptop executing the Phonejack 2 attack against our four VoIP phones. Remarkably, all phones are ringing, as demonstrated by the red light on each of them. To better explain our results, we built a video clip \cite{videophonejack}.
It can be imagined that mounting this attack on a departmental or institutional scale would have dramatic consequences. Not only would the calling capability be dwarfed and ultimately zeroed, but the work environment would realistically become unbearable. We have not, however, scaled up our experiments.
\begin{figure}[h]
\begin{center}
\includegraphics[scale=0.15]{testbed.png}
\end{center}
\caption{Consequences of Phonejack 2}\label{fig:telphonejack2}
\end{figure}
\section{Phonejack 3 attack: audio call eavesdropping} \label{sec:privacyattack}
Let us assume that Alice and Bob want to get in touch and that the attacker Eve is present in the same network. Since calls are made in the clear, Eve could attempt to sniff a call between Alice and Bob, clearly infringing their privacy.
This conjecture can be demonstrated by taking the following steps. First, use Ettercap \cite{ettercap} to perform a Man-in-the-Middle attack. Then, use a feature of Wireshark to listen to the audio flow of the communication between two devices. Figure \ref{fig:phonejack3} shows the RTP traffic in the clear, as sniffed through Wireshark and played. To do this, we select an RTP packet, use the \texttt{Telephony} option and select the \texttt{VoIP Calls} feature. After that, we select one of the two streams and press \texttt{Play Stream}. Moreover, at the end of the call, we can export the audio track of the call as shown in our video clip \cite{videophonejack}.
\begin{figure}[h]
\begin{center}
\includegraphics[scale=0.27]{sniffAudio.png}
\end{center}
\caption{Consequences of Phonejack 3}\label{fig:phonejack3}
\end{figure}
Clearly, this attack could be leveraged to exfiltrate data also at industrial espionage level. As noted above, it is not conceptually innovative but it is remarkable that we succeeded in carrying it out by using only freeware.
\section{Countermeasures}\label{sec:countermeasures}
We also design and develop countermeasures for the Phonejack 2 and Phonejack 3 attacks. This means that the countermeasures should thwart the malicious crashing of the phones as well as call sniffing.
We opt for adopting an inexpensive device that could be easily programmed to prototype our solutions, and decide to leverage a recent model of Raspberry Pi. Two VoIP phones can be connected through two Pis and a Wi-Fi bridge \cite{piwifibridge} as shown in Figure \ref{fig:netArch}. Precisely, each phone is connected to a Raspberry Pi through a wired Ethernet connection. Each Raspberry Pi will (have to) communicate via Wi-Fi with the other Raspberry Pi (because a Pi only has one Ethernet card).
\begin{figure}[h]
\begin{center}
\includegraphics[scale=0.3]{netArch.png}
\end{center}
\caption{An inexpensive network upgrade in support of our countermeasures}\label{fig:netArch}
\end{figure}
For simplicity, we set static IP addresses by means of the \textit{dhcpcd} and \textit{dnsmasq} tools. We initially connect the Raspberry Pi to the Wi-Fi router, then we modify the \textit{``/etc/dhcpcd.conf"} file by setting the interface, static IP address and the subnet. Then, we modify the \textit{``/etc/dnsmasq.conf"} file to tell dnsmasq how it should handle traffic. After that, we activate the forwarding mode of the network card and configure Iptables as shown in Table \ref{tab:iptablesrulenet} to effectively bridge Ethernet and Wi-Fi. Finally, we update the routing tables of each Pi.
\begin{table}[h]
\caption{Bridging Ethernet and Wi-Fi through Iptables}\label{tab:iptablesrulenet}
\begin{lstlisting}
iptables -t nat -A POSTROUTING -o wlan0
-j MASQUERADE
iptables -A FORWARD -i wlan0 -o eth0 -m state
--state RELATED,ESTABLISHED -j ACCEPT
iptables -A FORWARD -i eth0 -o wlan0 -j ACCEPT
\end{lstlisting}
\end{table}
Having upgraded the network, we can turn our attention to the actual attack countermeasures. Since we need to modify the VoIP flow, we use Iptables to redirect traffic and build three queues, with which three scripts will be associated (Table \ref{tab:queues}). Specifically, queue 1 will be assigned a dedicated anti-DoS script, and queues 2 and 3 respectively scripts to encrypt and decrypt audio traffic.
\begin{table}[h]
\caption{Enqueuing SIP and RTP traffic through Iptables}\label{tab:queues}
\begin{lstlisting}
1) iptables -A FORWARD -p UDP -d PhoneAddress
   --dport 5060 -j NFQUEUE --queue-num 1
2) iptables -A FORWARD -p UDP -s IPPhoneAddress
   --sport rangeRTPport -j NFQUEUE --queue-num 2
3) iptables -A FORWARD -p UDP -d PhoneAddress
   --dport rangeRTPport -j NFQUEUE --queue-num 3
\end{lstlisting}
\end{table}
\subsection{Countering Phonejack 2}
Table \ref{tab:scriptantidos} shows our script to counter Phonejack 2. It analyses each packet received via the \texttt{get\_payload} function and checks it against the file called \texttt{blacklist.txt}: if the analysed packet has been previously received, then it is discarded, otherwise it is accepted and recorded in the file. The penultimate statement builds a queue via the \texttt{NetfilterQueue} library, while the last instruction connects the ID and the anti-DoS script to queue 1.
\begin{table}[h]
\caption{A Phonejack 2 countermeasure in Python}\label{tab:scriptantidos}
\begin{lstlisting}
def antiDos(packet):
    payload = str(packet.get_payload())
    seen = False
    with open('blacklist.txt') as f:
        if payload in f.read():
            seen = True
    if seen:
        packet.drop()
    else:
        packet.accept()
        f = open("blacklist.txt", "a+")
        f.write(payload)
        f.close()

nfqueue = NetfilterQueue()
nfqueue.bind(1, antiDos)
\end{lstlisting}
\end{table}
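The same accept-once logic can also be kept in memory rather than in a file. The following self-contained sketch is our own illustration, independent of NetfilterQueue, using a set of payload digests:

```python
import hashlib

# Illustrative stand-in (ours) for the duplicate-packet filter:
# accept a payload the first time it is seen, drop any replay.
seen_digests = set()

def accept_once(payload: bytes) -> bool:
    """Return True (accept) for a new payload, False (drop) for a replay."""
    digest = hashlib.sha256(payload).hexdigest()
    if digest in seen_digests:
        return False
    seen_digests.add(digest)
    return True

assert accept_once(b"SIP INVITE ...") is True   # first copy passes
assert accept_once(b"SIP INVITE ...") is False  # replayed copy is dropped
```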
At this point, as shown in our video clip \cite{videophonejack}, each Raspberry Pi acts as a shield for a phone by filtering old packets from new ones while preserving voice communication.
\subsection{Countering Phonejack 3}
In this section we implement our solution to encrypt and decrypt the audio stream without adding any significant overhead or latency to the call. Table \ref{tab:scriptenc} shows the encryption script. It runs a few preliminary cryptographic operations and then executes the encryption function on the packet payload. Subsequently, it sends the encrypted packet via a socket and dequeues the packet. As with the anti-DoS script, this script must also be associated with a queue, in this case queue 2.
\begin{table}[h]
\caption{A Phonejack 3 countermeasure in Python}\label{tab:scriptenc}
\begin{lstlisting}
def encrypt(packet):
    cipher_suite = Fernet(key)
    enc_vc = cipher_suite.encrypt(packet.get_payload())
    pkt = IP(packet.get_payload())
    MESSAGE = enc_vc
    sk = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sk.sendto(MESSAGE, (pkt[IP].dst, pkt[UDP].dport))
    packet.drop()

nfqueue = NetfilterQueue()
nfqueue.bind(2, encrypt)

def decrypt(packet):
    cipher_suite = Fernet(key)
    dec_vc = cipher_suite.decrypt(packet.get_payload())
    pkt = IP(packet.get_payload())
    MESSAGE = dec_vc
    sk = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sk.sendto(MESSAGE, (pkt[IP].dst, pkt[UDP].dport))
    packet.drop()

nfqueue = NetfilterQueue()
nfqueue.bind(3, decrypt)
\end{lstlisting}
\end{table}
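The scripts above rely on the third-party \texttt{cryptography} package (Fernet) and NetfilterQueue. To illustrate the encrypt-on-one-Pi, decrypt-on-the-other relay round trip alone, here is a self-contained sketch of ours using only the standard library; the XOR keystream construction is a didactic stand-in for Fernet, not a secure cipher, and all names in it are our own:

```python
import hashlib

# Didactic stand-in (ours): XOR keystream derived from SHA-256.
# NOT cryptographically secure; it only illustrates the round trip
# that the two Pis perform on each RTP payload.
def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor_cipher(key: bytes, nonce: bytes, data: bytes) -> bytes:
    # XOR is its own inverse, so the same call encrypts and decrypts.
    return bytes(a ^ b for a, b in zip(data, keystream(key, nonce, len(data))))

key, nonce = b"shared-pi-key", b"call-0001"
rtp_payload = b"voice sample bytes"
ciphertext = xor_cipher(key, nonce, rtp_payload)
assert ciphertext != rtp_payload                          # sniffed traffic is opaque
assert xor_cipher(key, nonce, ciphertext) == rtp_payload  # the peer Pi recovers it
```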
We can now launch a sniffing job through Wireshark. This simulates a MITM who actively attempts to sniff the call between the two phones. What the attacker would intercept is nothing but encrypted RTP traffic, as shown in Figure \ref{fig:sniffenc}.
\begin{figure}[h]
\begin{center}
\includegraphics[scale=0.4]{rtpcifrato.png}
\end{center}
\caption{A call sniffing attempt under our Phonejack 3 countermeasure}\label{fig:sniffenc}
\end{figure}
The decryption script is similar to the encryption script but with two differences. One is the method invoked on the payload, which, in this case, is \textit{decrypt}. The other is the Iptables queue that is accessed, in this case queue number 3.
Figure \ref{fig:trafficoconPI} shows how a Raspberry Pi manages traffic under our Phonejack 3 countermeasure. The upper part of the Figure shows a terminal running the encryption routine, and hence displays an encrypted RTP stream; conversely, the lower part shows a terminal running the decryption routine, and hence displays unencrypted traffic.
\begin{figure}[h]
\begin{center}
\includegraphics[scale=0.25]{trafencpi1.png}\\
\vspace{0.25cm}
\includegraphics[scale=0.25]{trafdecpi2.png}
\end{center}
\caption{Execution of our Phonejack 3 countermeasure on a Raspberry Pi}\label{fig:trafficoconPI}
\end{figure}
\section{Related Work}\label{sec:related}
There is only room for a very brief treatment here.
McGann and Sicker looked at security threats and tools for SIP-based VoIP technologies in 2005 \cite{Mcgann2005AnAO}. Their main conclusion was that testing tools do not always provide the coverage declared by the developers and may be difficult to install and configure properly. We find that work highly motivational but, regretfully, not followed up by much research.
The most notable work is from 2010 by Keromytis \cite{angelostat}. He drew VoIP security statistics showing that 58\% of VoIP attacks concern Denial of Service, while 20\% concern eavesdropping and hijacking. This reconfirms the importance of VoIP security measures and, in this vein, our work shows what can be done using modern freeware for both attack and defence purposes.
Beyond the SIP-based VoIP technologies tackled in the present paper, TLS-based solutions such as SIPS \cite{SIPS} and SRTP \cite{SRTP} must also be mentioned, with the extra ``S'' in their acronyms denoting ``secure''. In particular, SRTP encrypts media with symmetric ciphers (typically AES) to provide authentication, integrity and confidentiality. It is clear that moving to these technologies requires significant upgrades at both server and client level. It is a matter of separate, future research to investigate how common such technologies are today and whether they may suffer, in turn, from exploitable weaknesses, perhaps under a socio-technical lens.
\section{Evaluations and conclusions}\label{sec:concl}
We have contended that VoIP and IoT are tightly intertwined. VoIP phones can be seen as early representatives as well as present enhancers of the IoT. Secure versions of VoIP protocols exist but are often neglected in favour of more traditional, unsecured technologies. This paper targets traditional VoIP, focusing on attacks and corresponding defence measures. The findings are demonstrated on common CISCO devices, for example costing in the region of \texteuro40 on Amazon Italy \cite{amazonit} and Amazon UK \cite{amazonuk} at the time of this writing.
Phonejack 1 conjectures that such devices may suffer vulnerabilities, documented or not, that could be exploited. Phonejack 2 demonstrates, also in a video clip \cite{videophonejack}, that an entire network of phones can be overwhelmed with ringing, cutting a vital institutional service and virtually making the target institution worthy of evacuation due to noise levels. Phonejack 3 intercepts calls. It is reassuring that inexpensive Raspberry Pis can be programmed to counter the last two attacks, inviting a \textit{by design and by default} integration of such technology with the actual phones.
Our offensive and defensive experiments were conducted in an isolated environment but also inspired some information gathering at institutional level; the latter highlighted a weakness in the network separation in our Department. The finding was promptly reported to our IT team, ultimately resulting in the configuration of a stronger separation between the phone network and the computer network.
Our findings are significant. Although the worldwide diffusion of the devices in our testbed seems limited to a few hundred units based on what can be discovered publicly, we argue that many more are in use, presumably protected by traditional network security measures such as VLANs and air gapping. Even so, from an outsider attacker's standpoint, if a network attack point exists, then the outsider could leverage the Phonejack attacks. From an insider attacker's standpoint, bypassing network security measures could be as easy as replacing a phone with an attacking laptop.
A fundamental question we raised with printers \cite{overtrustprinter} firmly arises also in this case. Why are phones configured without any security measure at all, when we are used to protecting our institutional laptops with a number of such measures, authentication to begin with? Given the now-consolidated idea that our laptops host personal or sensitive data and hence must be correspondingly protected \textit{even if} network security measures are in place, it is hard to justify why the same care is not devoted in practice to the verbal transmission of such data through voice calls. We cannot help but advocate that adequate security measures be wisely applied to \textit{every} computing node of the IoT.
\bibliographystyle{abbrv}
\section{Introduction}
\label{sec:intro}
Semantic segmentation in urban scenes is an important technology for many vision-based applications. Current studies~\cite{fcn,unet,seg1,seg2,seg3,seg4} on semantic segmentation mainly focus on designing complex segmentation networks with higher segmentation capacity on in-distribution samples, while paying less attention to out-of-distribution (OOD) samples, \emph{a.k.a.}~anomalous objects, whose categories are unseen during training. A commonly-noticed shortcoming of current segmentation networks is that they are incapable of identifying anomalous objects; instead, they can only predict an anomalous object as one of the seen categories. This issue greatly impedes their use in safety-critical applications such as autonomous driving in urban scenes. For example, the anomalous objects (marked by yellow boxes) in Fig.~\ref{fig:anomaly_scenarios} are predicted as road by a segmentation network, which may lead to accidents. To address this issue, anomaly segmentation, a task to detect and segment out unseen anomalous objects, is attracting more and more attention.
\begin{figure}[t]
\centering
\includegraphics[width=0.95\linewidth]{figures/avg9.pdf}
\caption{Statistical analysis of MulMem anomaly scores and AuxCon anomaly scores on Fishyscapes Lost \& Found. ``ID'' and ``OOD'' denote in-distribution samples and out-of-distribution samples, respectively. Normal in-distribution samples and hard in-distribution samples are distinguished by thresholding with the MulMem anomaly score at 95\% TPR. We can observe that the ability of MulMem to differentiate hard in-distribution samples from OOD samples is hardly guaranteed since the averaged MulMem anomaly score over hard in-distribution samples is even higher than that over OOD samples, while AuxCon shows a favorable ability to address this issue. }
\vspace{-5mm}
\label{fig:avg_ano}
\end{figure}
Previous approaches attempted to address this task by revealing anomalies according to prediction incorrectness of the segmentation networks, \emph{e.g.}, uncertainties over categories~\cite{MSP,SML} and re-synthesis errors caused by segmentation failures~\cite{imgr,SynthCP,noti}. However, they lack a mechanism to distinguish hard in-distribution samples from anomalies, and thus suffer from high false positive rates.
Differentiating between hard in-distribution samples and anomalies is the core challenge of anomaly segmentation. In this paper, we address this challenge, inspired by how humans deal with normal and hard examples: for a normal example, a human can confidently confirm that his/her opinion about it is correct according to his/her memory, \emph{e.g.}, experience or knowledge; for a hard example, the human might be uncertain, but he/she can check with others and conclude that his/her opinion is probably correct if others hold consistent opinions. This matches the general psychology finding that groups perform better than individuals on memory tasks~\cite{cac}.
Based on the above intuition, we introduce a novel and simple approach named \textbf{Co}nsensus \textbf{S}ynergizes with \textbf{Me}mory (CosMe) for anomaly segmentation. First, we present a strong memory-based baseline, named Multi-layer Memory (MulMem), which leverages a feature bank consisting of seen prototypes extracted from multiple layers of the pre-trained segmentation model to memorize seen objects with different scales. Then, we present a consensus-based module, named auxiliary consensus (AuxCon), in which an auxiliary network is trained to keep reaching a consensus with the pre-trained segmentation model in a self-supervised manner. This is achieved by explicitly maintaining hierarchical consistency between the auxiliary network and the pre-trained segmentation model on the feature representations of samples from seen categories.
Intuitively, whether an in-distribution sample is normal or hard can be determined by its distance to the prototypes in the MulMem feature bank. By this means, we divide the in-distribution samples into normal and hard sets. Then we compute the averaged anomaly scores for normal in-distribution samples, hard in-distribution samples and OOD samples according to MulMem (the minimum distance to prototypes) and AuxCon (the feature inconsistency), respectively. As illustrated in~\cref{fig:avg_ano}, the memory-based module can easily differentiate normal in-distribution samples from OOD samples, but its ability to distinguish hard in-distribution samples is hardly guaranteed. In contrast, the consensus-based module shows a clearly better ability to differentiate hard in-distribution samples from OOD samples than the memory-based module, while its overall discrimination between in-distribution samples and OOD samples is relatively weaker. These observations suggest good complementarity between MulMem and AuxCon. Thus, a simple combination of the two, \emph{i.e.}, CosMe, is a strong anomaly segmentation method that can, in particular, favorably distinguish hard in-distribution samples from anomalies.
Experimental results show that CosMe achieves consistent and substantial improvements over the state-of-the-art anomaly segmentation approaches on several urban scene datasets, and its performance is even on par with those methods using extra OOD samples for re-training.
In summary, our contributions are as follows:
\begin{itemize}
\item
We are the first to explicitly design a mechanism to tackle the challenge of hard in-distribution samples in anomaly segmentation.
\item We propose CosMe, a novel approach which enjoys the benefits from the complementarity between memory-based prototype-level distance and feature-level inconsistency, with a good ability in differentiating between hard in-distribution samples and OOD samples.
\item CosMe significantly outperforms the current state-of-the-art anomaly segmentation approaches and is even comparable with methods using extra OOD samples for re-training.
\end{itemize}
\section{Related Work}
The problem of detecting and segmenting unseen anomalous objects has been attracting increasing attention. It is also related to out-of-distribution (OOD) detection. In this section, we first give a brief review of OOD detection, then describe previous approaches for anomaly segmentation in urban scenes.
\subsection{Out of Distribution (OOD) detection}
OOD detection is a broad concept, which is critical to ensuring the reliability of machine learning systems. There are many sub-tasks under this umbrella, such as open set recognition~\cite{openset} and novelty detection~\cite{nd1,nd2}. Since Hendrycks and Gimpel~\cite{MSP} proposed the first OOD detection baseline in 2017, a plethora of approaches have been developed, which can be categorized into clustering-based~\cite{clus,clus2}, uncertainty-based~\cite{streethazards,SML,odin,deepmetric}, reconstruction-based~\cite{memAE,imgr,SynthCP,synboost}, density-based~\cite{density1,density2}, \emph{etc}. As clustering-based methods are among the most straightforward OOD detection approaches, we revisit them for anomaly segmentation.
\subsection{Anomaly Segmentation}
\subsubsection{Uncertainty-based}
Detecting anomalies based on model uncertainties is intuitive, since anomalies are unseen during model training. Hendrycks and Gimpel~\cite{MSP} proposed a baseline for OOD detection named ``maximum softmax probability'' (MSP), which measures anomaly scores by the maximum softmax probability outputted by the softmax classifier. They then proposed ``maxlogit''~\cite{streethazards}, in which the logits, \emph{i.e.}, the inputs of the softmax classifier, are used as anomaly scores instead. Jung \emph{et al.}~\cite{SML} proposed ``standardized maximum logit'' (SML), an improvement of ``maxlogit''. They used the statistics of the training set to standardize the ``maxlogit'' scores for each seen category, leading to a large improvement in anomaly segmentation results. However, these approaches lack a mechanism to distinguish hard in-distribution samples from anomalies. To address this challenge, some other approaches tried to first make the segmentation networks more sensitive to anomalous samples by either re-training them with a new loss function~\cite{deepmetric} or re-designing the network architecture~\cite{Dense}, then applied the uncertainty measures. However, the segmentation networks modified by these strategies sacrifice their performance on seen categories. Chan \emph{et al.}~\cite{EntMax} utilized samples from the COCO dataset~\cite{coco} as an OOD proxy for urban scenes and introduced an extra training objective to maximize the uncertainty on these samples. However, in practice, the categories of anomalous samples are unknown and inaccessible.
\subsubsection{Synthesis-based}
Recently, thanks to the rapid development of generative adversarial networks (GANs)~\cite{GAN,GauGAN}, one can reconstruct complex urban scenes from segmentation results. Some recent studies~\cite{imgr,SynthCP,noti} re-synthesized an image from a segmentation result and then compared it to the original input image to localize the anomalous instances. Synthesis-based approaches are unable to differentiate between hard in-distribution samples and OOD samples, since segmentation results of hard samples are usually incorrect. In addition, their anomaly segmentation performance is heavily dependent on the generation quality of GANs and may suffer from degradation due to artifacts and style shifts in the synthesized images. Last, the serialized processing, \emph{i.e.}, segment-synthesize-compare, makes this kind of approach difficult to apply in practical real-time applications.
\subsubsection{Hybrid}
To achieve better anomaly segmentation results, Di Biase \emph{et al.}~\cite{synboost} proposed a hybrid approach, which combines uncertainty-based and synthesis-based approaches. Nevertheless, their hybrid approach still suffers from the individual issues of each component mentioned above. Besides, they implicitly made use of OOD samples to train a discriminator, which limits the generalization ability of their approach.
\section{Consensus Synergizes with Memory}
In this section, we first formulate the problem of anomaly segmentation, and then sketch out the overall framework of our method CosMe. Next, we describe the details of the two key components of CosMe, including the memory-based baseline \textbf{Multi-layer Memory (MulMem)} (\cref{sec:pm}) and the consensus-based module \textbf{Auxiliary Consensus (AuxCon)} (\cref{sec:cm}).
\begin{figure*}
\centering
\includegraphics[width=0.98\linewidth]{figures/midv100.pdf}
\caption{The overall framework of CosMe. It consists of two main modules: \textbf{Multi-layer Memory} (MulMem) and \textbf{Auxiliary Consensus} (AuxCon). Given a fixed pre-trained segmentation model, MulMem stores prototypes extracted from multiple layers ($\texttt{C2}-\texttt{C5}$ are Conv layers of the backbone, $\texttt{LH}$ and $\texttt{O}$ are the last hidden layer and the last $1\times1$ Conv layer to compute the outputted logits over categories of the segmentation model, respectively) of the pre-trained segmentation model and AuxCon is an auxiliary model which is trained to maintain consistency with the segmentation model on in-distribution data. For the former, the distance to prototypes in MulMem is used as the MulMem anomaly score; For the latter, the inconsistency between the pre-trained model and the auxiliary model is used as the AuxCon anomaly score. These two kinds of anomaly scores are combined to give the final anomaly prediction.
}
\label{fig:framework}
\end{figure*}
\subsection{Problem Setup}
\label{sec:setting}
We first set up the problem of anomaly segmentation.
Given a training set $\mathcal{T}=\{(\mathbf{x}^{(n)},\mathbf{y}^{(n)})\}_{n=1}^N$ for semantic segmentation with a category set $\mathcal{C}$, where $(\mathbf{x}^{(n)},\mathbf{y}^{(n)})$ denotes a pair of training image and its corresponding segmentation ground-truth and $y_{i,j}^{(n)}\in\mathcal{C}$ denotes the category label for the pixel at location $(i,j)$, a segmentation model $\mathbb{M}$, parameterized by $\bm{\Theta}$, is pre-trained on $\mathcal{T}$. Given a testing image $\mathbf{x}$ containing categories from an unseen set $\mathcal{U}$, where $\mathcal{U}\cap\mathcal{C}=\emptyset$, the goal of anomaly segmentation is to segment out pixels that belong to unseen categories, based on $\mathbb{M}$ and $\mathcal{T}$. This can be achieved by assigning an anomaly score $\Upsilon_{i,j}(\mathbf{x})$ to each pixel $(i,j)$, so that
\begin{align*}
&\min~\Upsilon_{i,j}(\mathbf{x}), &&\text{if}~y_{i,j} \in \mathcal{C};\\
&\max~\Upsilon_{i,j}(\mathbf{x}), &&\text{if}~y_{i,j} \in \mathcal{U}.
\end{align*}
Some state-of-the-art anomaly segmentation approaches either retrain the pre-trained segmentation model or even make use of OOD data. However, we argue that these approaches have limitations, since 1) retraining may lead to negative effects on in-distribution segmentation performance; 2) the categories of OOD data are unknown and inaccessible in real-world applications.
\subsection{Overall Framework}
\label{sec:framework}
The overall framework of CosMe is shown in \cref{fig:framework}.
Except for the pre-trained segmentation model, there are mainly two parts in CosMe: one is the Multi-layer Memory (MulMem), the other is the Auxiliary Consensus (AuxCon). These two modules are combined to tackle the problem of hard in-distribution samples while fully utilizing the memories embedded in the segmentation model. We give a brief introduction to them as follows:
\begin{itemize}
\item The \textbf{Mul}ti-layer \textbf{Mem}ory (MulMem) is a feature bank consisting of several feature sub-branches, each of which stores representative features, \emph{i.e.}, prototypes, outputted from a specific layer of the segmentation network. Whether a sample is anomalous is determined by the feature distance of the sample to the prototypes in sub-branches.
\item The \textbf{Aux}iliary \textbf{Con}sensus (AuxCon) is an auxiliary model sharing information from the pre-trained model. Its task is to mimic the behavior of the pre-trained model on in-distribution data. When encountering anomalies, the auxiliary model will show a relatively large inconsistency with the pre-trained model, so the mimicking error of the auxiliary model naturally serves as an anomaly score.
\end{itemize}
CosMe is built on the observation that hard in-distribution samples often lead to segmentation failures and are easily confused with anomalies. We are inspired by how humans distinguish uncertain samples: humans tend to query others for suggestions when they cannot draw clear conclusions from their own memory. Thus, MulMem in CosMe imitates human memory, while AuxCon imitates someone else with similar experiences. For hard in-distribution samples, AuxCon shows relatively higher consistency than for totally unseen anomalies.
\subsection{Multi-layer Memory}
\label{sec:pm}
Let $\mathbf{f}^{(l)}(\mathbf{x};\bm{\Theta})$ be the feature map for an input image $\mathbf{x}$, outputted from layer $l$ of the pre-trained segmentation model $\mathbb{M}$, and let $\mathbf{f}^{(l)}_{i,j}(\mathbf{x};\bm{\Theta})$ denote the feature vector at the location $(i,j)$ of this feature map. Our goal is to build a memory bank $\mathcal{M}=\{\mathcal{S}^{(l)}|l\in\mathcal{L}\}$ to store prototype features of seen samples from the training set $\mathcal{T}$, where $\mathcal{S}^{(l)}$ is a sub-branch of the bank storing prototype features outputted from layer $l$, and $\mathcal{L}$ is the set of layers of interest.
To memorize seen samples, a straightforward way is to perform clustering on the training set to generate prototypes. Since the segmentation network processes data samples batch-wise, we propose a batch-based clustering algorithm to generate the prototypes. Without loss of generality, we describe our batch-based clustering algorithm by taking the prototype generation for one feature sub-branch $\mathcal{S}^{(l)}$ as an example. We first set the feature sub-branch to the empty set, $\mathcal{S}^{(l)}=\emptyset$, then initialize it by iteratively adding elements to form the prototypes. The elements are features from randomly selected training images. Given a new element, \emph{e.g.}, $\mathbf{f}^{(l)}_{i,j}(\mathbf{x};\bm{\Theta})$, we first compute the cosine similarity $\phi(\cdot,\cdot)$ between this element and each prototype $\mathbf{p} \in \mathcal{S}^{(l)}$ in the current feature sub-branch. If the element is not similar to any prototype (as determined by a similarity threshold $\tau$), we add it to the feature sub-branch as a new prototype, \emph{i.e.}, $\mathcal{S}^{(l)} \leftarrow \mathcal{S}^{(l)} \cup \{\mathbf{f}^{(l)}_{i,j}(\mathbf{x};\bm{\Theta})\}$; otherwise, the element is discarded. This element-adding process continues until the size of the feature sub-branch reaches a pre-set number $K$. The prototype initialization algorithm for sub-branch $\mathcal{S}^{(l)}$ is shown in \cref{alg:PI}.
\begin{algorithm}[t!]
\caption{Memory Initialization for Sub-branch $\mathcal{S}^{(l)}$}
\label{alg:PI}
\begin{algorithmic}[1]
\REQUIRE Training batch $\mathcal{B}$, threshold $\tau$, sub-branch size $K$
\ENSURE Initialized $K$ prototypes $\mathcal{S}^{(l)}\leftarrow\{\mathbf{p}_k\}_{k=1}^K$
\STATE $\mathcal{S}^{(l)} \leftarrow \emptyset$
\WHILE {$|\mathcal{S}^{(l)}|<K$}
\STATE Randomly select an image $\mathbf{x}$ from $\mathcal{B}$
\STATE Randomly select one $\mathbf{f}^{(l)}_{i,j}(\mathbf{x};\bm{\Theta})$ from its features
\IF{$\max\{\phi\big(\mathbf{p},\mathbf{f}^{(l)}_{i,j}(\mathbf{x};\bm{\Theta})\big)|\forall\mathbf{p}\in\mathcal{S}^{(l)}\}<\tau$}
\STATE $\mathcal{S}^{(l)} \leftarrow \mathcal{S}^{(l)} \cup \{\mathbf{f}^{(l)}_{i,j}(\mathbf{x};\bm{\Theta})\}$
\ENDIF
\ENDWHILE
\end{algorithmic}
\end{algorithm}
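The initialization step of \cref{alg:PI} can be sketched in plain Python. This is an illustrative simplification, not the paper's implementation: it operates on a flat list of feature vectors and assumes the pool contains at least $K$ mutually dissimilar features, so the loop terminates.

```python
import math
import random

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def init_prototypes(features, K, tau, seed=0):
    """Randomly sample features; keep one as a new prototype only if its
    max similarity to every stored prototype is below tau; stop at K."""
    rng = random.Random(seed)
    prototypes = []
    while len(prototypes) < K:
        f = rng.choice(features)
        if not prototypes or max(cosine(p, f) for p in prototypes) < tau:
            prototypes.append(list(f))
    return prototypes
```

By construction, every pair of stored prototypes has cosine similarity below $\tau$, which keeps the memory diverse rather than letting it collapse onto frequent feature modes.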
After prototype initialization, we learn the prototypes in $\mathcal{S}^{(l)}$ by a momentum update, given each training batch $\mathcal{B}$. Specifically, for each prototype $\mathbf{p}$ in $\mathcal{S}^{(l)}$, we update $\mathbf{p}$ by the features that are closest to $\mathbf{p}$, \emph{i.e.}, the highest cosine similarity. This can be achieved by maintaining a set $\mathcal{S}^{(l)}_p$ to store such features for prototype $\mathbf{p}$:
\begin{equation}
\label{eq:update1}
\mathcal{S}^{(l)}_p \leftarrow \{ \mathbf{f}\in\mathcal{F}_{\mathcal{B}} \,|\, \mathbf{p}=\arg\max_{\mathbf{p}'\in\mathcal{S}^{(l)}}\phi(\mathbf{p}',\mathbf{f})\},
\end{equation}
where $\mathcal{F}_{\mathcal{B}} \leftarrow \{\mathbf{f}^{(l)}_{i,j}(\mathbf{x};\bm{\Theta})| \mathbf{x} \in \mathcal{B} \}$ is the set containing all the features of the images in batch $\mathcal{B}$ and $\mathcal{Q}_{\mathcal{S}^{(l)}, \mathcal{F}_{\mathcal{B}}}\leftarrow \{\phi(\mathbf{p},\mathbf{f})|\forall\mathbf{p}\in\mathcal{S}^{(l)},\forall\mathbf{f}\in\mathcal{F}_{\mathcal{B}}\}$ is the set containing all cosine similarities between each prototype $\mathbf{p}\in\mathcal{S}^{(l)}$ and each feature $\mathbf{f}\in\mathcal{F}_{\mathcal{B}}$. Finally, $\mathbf{p}$ is computed by a momentum update:
\begin{equation}
\label{eq:update2}
\mathbf{p} \leftarrow m \cdot \mathbf{p} + (1-m) \cdot \frac{1}{|\mathcal{S}^{(l)}_{p}|}\sum_{\mathbf{f} \in \mathcal{S}^{(l)}_{p}} \mathbf{f},
\end{equation}
where $m$ is a pre-defined momentum coefficient. The algorithm of prototype learning by the momentum update for sub-branch $\mathcal{S}^{(l)}$ is given in \cref{alg:MU}.
\begin{algorithm}[ht!]
\caption{Memory Learning for Sub-branch $\mathcal{S}^{(l)}$}
\label{alg:MU}
\begin{algorithmic}[1]
\REQUIRE Training batch $\mathcal{B}$, coefficient $m$, initialized $\mathcal{S}^{(l)}$
\ENSURE Updated $K$ prototypes $\mathcal{S}^{(l)} =\{\mathbf{p}_k\}_{k=1}^K$
\STATE $\mathcal{F}_{\mathcal{B}} \leftarrow \{\mathbf{f}^{(l)}_{i,j}(\mathbf{x};\bm{\Theta})| \mathbf{x} \in \mathcal{B} \}$
\STATE $\mathcal{Q}_{\mathcal{S}^{(l)}, \mathcal{F}_{\mathcal{B}}}\leftarrow \{\phi(\mathbf{p},\mathbf{f})|\forall\mathbf{p}\in\mathcal{S}^{(l)},\forall\mathbf{f}\in\mathcal{F}_{\mathcal{B}}\}$
\FOR {$\mathbf{p} \in \mathcal{S}^{(l)}$}
\STATE $\mathcal{S}^{(l)}_p \leftarrow \{ \mathbf{f}\in\mathcal{F}_{\mathcal{B}} \,|\,\mathbf{p}=\arg\max_{\mathbf{p}'\in\mathcal{S}^{(l)}}\phi(\mathbf{p}',\mathbf{f})\}$
\STATE $\mathbf{p} \leftarrow m \cdot \mathbf{p} + (1-m) \cdot\frac{1}{|\mathcal{S}^{(l)}_{p}|} \sum_{\mathbf{f} \in \mathcal{S}^{(l)}_{p}} \mathbf{f}$
\ENDFOR
\end{algorithmic}
\end{algorithm}
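One memory-learning step (\cref{alg:MU}) can be sketched as follows. This is a hypothetical, list-based stand-in for the actual batched tensor implementation, with the same assignment and momentum logic as Eqs.~\eqref{eq:update1}--\eqref{eq:update2}.

```python
import math

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def momentum_update(prototypes, batch_features, m=0.999):
    """Assign each batch feature to its nearest prototype (highest cosine
    similarity), then move each prototype toward the mean of its assigned
    features with momentum m."""
    assigned = [[] for _ in prototypes]
    for f in batch_features:
        sims = [cosine(p, f) for p in prototypes]
        assigned[sims.index(max(sims))].append(f)
    updated = []
    for p, group in zip(prototypes, assigned):
        if group:  # prototypes with no assigned features stay unchanged
            mean = [sum(vals) / len(group) for vals in zip(*group)]
            p = [m * a + (1 - m) * b for a, b in zip(p, mean)]
        updated.append(list(p))
    return updated
```

With a momentum coefficient close to one, prototypes drift slowly, so the memory stays stable across batches while still tracking the feature distribution.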
With the learned feature sub-branch $\mathcal{S}^{(l)}$, given an input image $\mathbf{x}$, an anomaly score map $\bm{\gamma}^{(l)}(\mathbf{x})$ for this input image is computed by
\begin{equation}
\label{eq:as}
\gamma_{i,j}^{(l)}(\mathbf{x}) = 1 - \max\{\phi\big(\mathbf{p},\mathbf{f}^{(l)}_{i,j}(\mathbf{x};\bm{\Theta})\big)|\forall\mathbf{p}\in\mathcal{S}^{(l)} \},
\end{equation}
where $\gamma^{(l)}_{i,j}(\mathbf{x})$ denotes the anomaly score at the location $(i,j)$ of the anomaly score map $\bm{\gamma}^{(l)}(\mathbf{x})$.
Since several sub-branches form the feature bank $\mathcal{M}=\{\mathcal{S}^{(l)}|l\in\mathcal{L}\}$ of MulMem, we compute the anomaly score map ${\Gamma}(\mathbf{x})$ given by the feature bank $\mathcal{M}$ by a simple combination of the anomaly score maps given by each sub-branch:
\begin{equation}
\Gamma_{i,j}(\mathbf{x}) = \prod_{l\in \mathcal{L}}{\gamma}_{i,j}^{(l)}(\mathbf{x}),
\end{equation}
where $\Gamma_{i,j}(\mathbf{x})$ is the MulMem anomaly score at the location $(i,j)$ of the anomaly score map ${\bm{\Gamma}}(\mathbf{x})$. We additionally adopt the standardization strategy in~\cite{SML} to normalize the MulMem anomaly scores in ${\bm{\Gamma}}(\mathbf{x})$.
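Putting Eqs.~\eqref{eq:as} and the product combination together, the MulMem score for a single pixel can be sketched as below. The dictionary-based layout is a hypothetical simplification, and the standardization step from~\cite{SML} is omitted.

```python
import math

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def mulmem_score(pixel_features, memory):
    """Per-branch score is one minus the max similarity to any prototype
    in that branch; the MulMem score is the product over branches.
    pixel_features: {layer_name: feature vector at this pixel}
    memory:         {layer_name: list of prototype vectors}"""
    score = 1.0
    for layer, feature in pixel_features.items():
        gamma = 1.0 - max(cosine(p, feature) for p in memory[layer])
        score *= gamma
    return score
```

A pixel whose features closely match a stored prototype in any branch drives the product toward zero, so only pixels far from the memory in every branch receive a high score.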
\subsection{Auxiliary Consensus}
\label{sec:cm}
As shown in \cref{fig:framework}, the auxiliary consensus module explicitly ensures feature consistency between the pre-trained segmentation model and an auxiliary model. This is achieved by self-supervised learning without using any segmentation annotations of the training set $\mathcal{T}$. Given the pre-trained segmentation model $\mathbb{M}$, we build an auxiliary model $\mathbb{M}^{\prime}$ parameterized by $\bm{\Theta}^{\prime}$ which has the same down-sampling schedule as $\mathbb{M}$, so that for a layer $l$ in $\mathbb{M}$ we can find its corresponding layer $l^{\prime}$ in $\mathbb{M}^{\prime}$. For example, ResNet50~\cite{resnet} can be the backbone of an auxiliary model for a segmentation model with ResNet101 as the backbone.
Let $\mathcal{L}_s$ be a set of layers of $\mathbb{M}$ (\emph{e.g.}, for ResNet, $\mathcal{L}_s$ can be the last Conv layers of the five Conv blocks, $\mathcal{L}_s=\{\texttt{C1},\texttt{C2},\texttt{C3},\texttt{C4},\texttt{C5}\}$), which are used to supervise the corresponding layers of $\mathbb{M}^{\prime}$. For each layer $l\in\mathcal{L}_s$, let $s^{(l)}$ be the size of the feature map $\mathbf{f}^{(l)}(\mathbf{x};\bm{\Theta})$ and $l^{\prime}$ be its corresponding layer in $\mathbb{M}^{\prime}$; our goal is to enforce the feature map $\mathbf{g}^{(l^{\prime})}(\mathbf{x};\bm{\Theta}^{\prime})$ outputted by layer $l^{\prime}$ of $\mathbb{M}^{\prime}$ to approach $\mathbf{f}^{(l)}(\mathbf{x};\bm{\Theta})$.
Towards this end, we fix $\bm{\Theta}$, and minimize the following loss function on the training set $\mathcal{T}$:
\begin{equation}
\label{eq:sup}
L=\sum_{\mathbf{x}\in\mathcal{T}}\sum_{l\in \mathcal{L}_s} \frac{1}{s^{(l)}}|| \mathbf{f}^{(l)}(\mathbf{x};\bm{\Theta}) - \mathbf{g}^{(l^{\prime})}(\mathbf{x};\bm{\Theta}^{\prime}) ||_F^2,
\end{equation}
where $||\cdot||_F$ denotes the Frobenius norm of a matrix. The overall training algorithm is shown in \cref{alg:AML}.
During inference, to compute anomaly scores, we select a subset $\mathcal{L}_e$ from $\mathcal{L}_s$ as an evaluation set. Given a testing image $\mathbf{x}$, the anomaly score $\psi_{i,j}^{(l)}$ at each location $(i,j)$ is computed by:
\begin{equation}
\psi_{i,j}^{(l)}(\mathbf{x}) = \frac{1}{C^{(l)}}||\mathbf{f}^{(l)}_{i,j}(\mathbf{x};\bm{\Theta}) - \mathbf{g}^{(l^\prime)}_{i,j}(\mathbf{x};\bm{\Theta}^\prime)||_2^2,
\end{equation}
where $||\cdot||_2$ denotes the $\ell_2$ norm and $C^{(l)}$ denotes the number of feature channels. The AuxCon anomaly score at each location $(i,j)$ is then:
\begin{equation}
\Psi_{i,j}(\mathbf{x}) = \prod_{l \in \mathcal{L}_e} \psi_{i,j}^{(l)}(\mathbf{x}).
\end{equation}
Finally, the CosMe anomaly score $\Upsilon_{i,j}(\mathbf{x})$ is calculated by:
\begin{equation}
\Upsilon_{i,j}(\mathbf{x}) = \Psi_{i,j}(\mathbf{x}) \cdot \Gamma_{i,j}(\mathbf{x}).
\end{equation}
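The inference-time score combination can be sketched per pixel as follows. This is a minimal illustration with hypothetical inputs: in practice $\mathbf{f}$ and $\mathbf{g}$ are feature vectors extracted from the frozen and auxiliary networks, and the MulMem score comes from the memory bank.

```python
def auxcon_pixel_score(f, g):
    """AuxCon score at one pixel for one layer: mean squared difference
    between the frozen model's feature f and the auxiliary model's g,
    averaged over channels."""
    return sum((a - b) ** 2 for a, b in zip(f, g)) / len(f)

def cosme_score(psi_per_layer, mulmem_score):
    """Final CosMe score: product of AuxCon scores over the evaluation
    layers, multiplied by the MulMem score."""
    psi = 1.0
    for s in psi_per_layer:
        psi *= s
    return psi * mulmem_score
```

Because the final score is a product, a pixel is flagged as anomalous only when both modules agree: the memory finds no matching prototype and the auxiliary model disagrees with the frozen model.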
\begin{algorithm}[t!]
\caption{Auxiliary Model Learning}
\label{alg:AML}
\begin{algorithmic}[1]
\REQUIRE Training set $\mathcal{T}$, layer set $\mathcal{L}_s$, learning rate $\eta$
\ENSURE The auxiliary model parameterized $\bm{\Theta}'$
\STATE Initialize $\bm{\Theta}^\prime$ randomly
\FOR {each image batch $\mathcal{B}\subset\mathcal{T}$}
\STATE \mbox{$L \leftarrow \sum_{\mathbf{x}\in\mathcal{B}}\sum_{l\in \mathcal{L}_s} \frac{1}{s^{(l)}}|| \mathbf{f}^{(l)}(\mathbf{x}, \bm{\Theta}) - \mathbf{g}^{(l^\prime)}(\mathbf{x}, \bm{\Theta^\prime}) ||_F^2$}
\STATE $\bm{\Theta}^\prime\leftarrow \bm{\Theta}^\prime - \eta \cdot \frac{\partial L}{\partial \bm{\Theta}^\prime}$
\ENDFOR
\end{algorithmic}
\end{algorithm}
\section{Experiments}
In this section, we describe the datasets used for our experiments, implementation details, evaluation metrics and experimental results.
\subsection{Datasets}
We conduct our experiments on four widely-used anomaly segmentation datasets: Fishyscapes Lost \& Found~\cite{fishyscapes}, Fishyscapes Static~\cite{fishyscapes}, Road Anomaly~\cite{imgr} and Streethazards~\cite{streethazards}.
\label{sec:lf}
\noindent\textbf{Fishyscapes Lost \& Found.} Fishyscapes (FS) Lost \& Found~\cite{fishyscapes} is a high-quality image dataset containing real obstacles on roads. Built on top of the original Lost \& Found~\cite{l_f} dataset, FS Lost \& Found follows the same setup as Cityscapes~\cite{cityscapes}, a widely used dataset for urban-scene segmentation. It contains real urban images with 37 types of unexpected road obstacles and 13 different street scenarios (\emph{e.g.}, different road surface appearances, strong illumination changes, etc.). FS Lost \& Found includes a public validation set of 100 images and a hidden test set of 275 images for benchmarking.
\label{sec:static}
\noindent\textbf{Fishyscapes Static.} Fishyscapes (FS) Static~\cite{fishyscapes} is built on the validation set of Cityscapes~\cite{cityscapes}. Anomalous objects collected from PASCAL VOC~\cite{voc} are seamlessly superimposed on Cityscapes images to match their style. This dataset contains a publicly available validation set of 30 images and a hidden test set of 1,000 images for benchmarking.
\label{sec:ra}
\noindent\textbf{Road Anomaly.} Road Anomaly~\cite{imgr} captures dangerous scenes that vehicles may encounter on roads. It consists of 60 images collected from the Internet, featuring unusual objects on roads (\emph{e.g.}, animals, rocks, etc.), with a resolution of $1280\times720$. Since this dataset was not collected under conditions similar to Cityscapes, there is a large domain gap between them. It can therefore be used to verify the generalization ability of an anomaly segmentation approach.
\label{sec:streethazards}
\noindent\textbf{Streethazards.} Streethazards~\cite{streethazards} is an anomaly segmentation dataset created with the Unreal Engine and the CARLA simulation environment. It contains 5,125 image and semantic segmentation ground-truth pairs for training, 1,031 pairs without anomalies for validation, and 1,500 test pairs with anomalies. There are 250 unique anomaly models of diverse types in total, and 12 classes of objects are used for training.
\subsection{Implementation Details}
\label{sec:impd}
\noindent\textbf{Pre-trained model.} For a fair comparison, we follow~\cite{streethazards,deepmetric,SynthCP} in adopting PSPNet~\cite{PSP} with a ResNet101~\cite{resnet} backbone as the segmentation model on Streethazards, and follow~\cite{synboost,SML} in adopting DeepLabV3+~\cite{deeplabv3plus} with ResNet101 on the other three datasets.
\noindent\textbf{Multi-layer Memory.} We maintain three memory sub-branches, in which prototypes are extracted from the outputs of the $\texttt{C4}$ and $\texttt{C5}$ layers of the ResNet101 backbone as well as the last hidden layer ($\texttt{LH}$) of the segmentation model. The similarity threshold is set to $\tau = 0.85$.
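For intuition, the role of the similarity threshold $\tau = 0.85$ in a memory sub-branch can be sketched as follows. This is our own simplification, not the exact update rule of MulMem: a feature is merged into its closest prototype when cosine similarity reaches $\tau$, and otherwise stored as a new prototype; at test time, the distance to the closest prototype serves as the anomaly score.

```python
import numpy as np

TAU = 0.85  # similarity threshold used in the paper


def _unit(v):
    return v / np.linalg.norm(v)


def update_memory(feature, memory, tau=TAU):
    # Training sketch: merge the feature into the closest prototype when it
    # is similar enough; otherwise store it as a new prototype.
    f = _unit(feature)
    if memory:
        sims = [f @ p for p in memory]
        best = int(np.argmax(sims))
        if sims[best] >= tau:
            memory[best] = _unit(memory[best] + f)  # simplified running merge
            return
    memory.append(f)


def anomaly_score(feature, memory):
    # Test-time sketch: distance to the closest prototype of the sub-branch.
    f = _unit(feature)
    return 1.0 - max(f @ p for p in memory)
```

Similar features collapse into one prototype, while dissimilar ones grow the memory, which keeps the number of stored prototypes per sub-branch small.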
\noindent\textbf{Auxiliary Consensus.} We use ResNet50 as the backbone of the auxiliary model. The supervision layer set is $\mathcal{L}_s = \{\texttt{C2},\texttt{C3},\texttt{C4},\texttt{C5},\texttt{LH},\texttt{O}\}$, where $\texttt{O}$ is the output layer of the segmentation model, \emph{i.e.}, the last $1\times 1$ Conv layer that computes the output logits over categories. The evaluation layer set for computing the AuxCon anomaly score is $\mathcal{L}_e = \{\texttt{C5}\}$.
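The scoring step of the consensus idea can be sketched per pixel: the auxiliary model is trained to agree with the frozen segmentation model on in-distribution data, so large feature disagreement at the evaluation layers indicates an anomaly. The per-pixel cosine disagreement below is our own assumption for illustration; the paper's exact formulation is given in the method section.

```python
import numpy as np


def consensus_score(main_feats, aux_feats, eval_layers=("C5",)):
    # main_feats / aux_feats: dicts mapping layer name -> (C, H, W) array.
    # Larger disagreement between the two models -> more anomalous pixel.
    score = 0.0
    for layer in eval_layers:
        f, g = main_feats[layer], aux_feats[layer]
        f = f / (np.linalg.norm(f, axis=0, keepdims=True) + 1e-8)
        g = g / (np.linalg.norm(g, axis=0, keepdims=True) + 1e-8)
        score += 1.0 - (f * g).sum(axis=0)  # 1 - cosine similarity per pixel
    return score / len(eval_layers)
```

With $\mathcal{L}_e = \{\texttt{C5}\}$ this reduces to a single feature comparison per pixel, which is cheap and fully parallel with the segmentation forward pass.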
\subsection{Evaluation Metrics}
Following~\cite{streethazards, SML, SynthCP}, three metrics are used for evaluation: area under the receiver operating characteristic curve (\textbf{AUROC}), false positive rate at 95\% true positive rate (\textbf{FPR95}), and average precision (\textbf{AP}). Since anomalous samples are far fewer than in-distribution samples, this data imbalance makes FPR95 and AP the major evaluation metrics.
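All three metrics are computed from per-pixel anomaly scores and binary labels. The NumPy sketch below gives minimal reference implementations of the definitions (our own code, not the evaluation code used by the benchmarks):

```python
import numpy as np


def auroc(scores, labels):
    # Probability that a random anomalous sample scores higher than a random
    # in-distribution sample (Mann-Whitney statistic; ties count half).
    pos, neg = scores[labels == 1], scores[labels == 0]
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))


def fpr_at_95_tpr(scores, labels):
    # Smallest false positive rate among thresholds reaching TPR >= 0.95.
    y = labels[np.argsort(-scores)]
    tpr = np.cumsum(y == 1) / (y == 1).sum()
    fpr = np.cumsum(y == 0) / (y == 0).sum()
    return fpr[tpr >= 0.95].min()


def average_precision(scores, labels):
    # Precision evaluated at each anomalous sample in score order, averaged.
    y = labels[np.argsort(-scores)]
    precision = np.cumsum(y == 1) / np.arange(1, len(y) + 1)
    return precision[y == 1].mean()
```

Because AP and FPR95 weight the (rare) anomalous class directly, they are more informative than AUROC under heavy class imbalance, which motivates treating them as the major metrics.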
\subsection{Comparison with Previous Approaches}
\begin{table*}[ht!]
\centering
\begin{tabular}{l|c|c|cc|cc}
\toprule
\multirow{2}*{Method} & \multirow{2}{*}{\shortstack{Utilizing\\OOD Data}}& \multirow{2}{*}{\shortstack{Requiring\\re-training}} & \multicolumn{2}{c}{FS Lost \& Found} & \multicolumn{2}{|c}{FS Static} \\
\cline{4-7} & & & FPR95 $\downarrow$ & AP $\uparrow$ & FPR95 $\downarrow$ & AP $\uparrow$\\
\midrule
\midrule
{MSP}~\cite{MSP}& \xmark &\xmark & 44.85 & 1.77 & 39.83 & 12.88 \\
{Entropy}~\cite{MSP}& \xmark &\xmark & 44.83 & 2.93 & 39.75 & 15.41 \\
{Density - Single-layer NLL}~\cite{fishyscapes}& \xmark &\xmark & 32.90 & 3.01 & 21.29 & 40.86 \\
{kNN Embedding - density}~\cite{fishyscapes} & \xmark &\xmark & 30.02 & 3.55 & 20.25 & 44.03 \\
{Density - Minimum NLL}~\cite{fishyscapes}& \xmark &\xmark & 47.15 & 4.25 & 17.43 & 62.14 \\
{Image Resynthesis}~\cite{imgr}& \xmark &\xmark & 48.05 & 5.70 & 27.13 & 29.60 \\
{SML}~\cite{SML} & \xmark &\xmark & 21.52 & 31.05 & 19.64 & 53.11 \\
\midrule
{Synboost}~\cite{synboost} & \cmark &\xmark & 15.79 & \textbf{43.22} & 18.75 & 72.59 \\
{Density - Logistic Regression}~\cite{fishyscapes} & \cmark &\cmark & 24.36 & 4.65 & 13.39 & 57.16 \\
{Bayesian Deeplab}~\cite{bd} & \xmark &\cmark & 38.46 & 9.81 & 15.50 & 48.70 \\
{OoD Training - Void Class}~\cite{fishyscapes}& \cmark &\cmark & 22.11 & 10.29 & 19.40 & 45.00 \\
{Discriminative Outlier Detection Head}~\cite{Dense} & \cmark &\cmark & 19.02 & 31.31 & \textbf{0.29} & \textbf{96.76} \\
{Dirichlet Deeplab}~\cite{diric}& \cmark &\cmark & 47.43 & 34.28 & 84.60 & 31.30 \\
\midrule
{CosMe (Ours)} & \xmark &\xmark & \textbf{13.32} & \textbf{41.95} & \textbf{5.74} & \textbf{69.72}\\
\bottomrule
\end{tabular}
\caption{Comparison with previous approaches reported in Fishyscapes Leaderboard \protect\footnotemark. The top part of the table shows the approaches with the same setting as our CosMe, \emph{i.e.}, no re-training and no extra OOD data. The bottom part shows the approaches which require retraining or extra OOD data. Our method outperforms the approaches with the same setting and even most of the approaches with re-training or extra OOD data, by large margins.}
\label{tab:FS_leaderboard}
\vspace{-2mm}
\end{table*}
\footnotetext{\url{https://fishyscapes.com/results}}
\noindent\textbf{Fishyscapes test sets.}
We first compare CosMe with other anomaly segmentation approaches on the Fishyscapes (FS Lost \& Found and FS Static) test sets. Note that the Fishyscapes test sets are private; we obtain the results from the Fishyscapes Leaderboard. According to \cref{tab:FS_leaderboard}, we achieve new SOTA performance among approaches that require neither re-training nor extra OOD data, outperforming them by large margins. Moreover, our performance is even better than that of most approaches with re-training or extra OOD data. CosMe is on par with Synboost, the re-training SOTA on FS Lost \& Found. Unlike Synboost, which needs to re-synthesize images from the segmentation label and then compare them with the original input images, the computation in our approach can be parallelized: once training is finished, there is no dependency between the pre-trained model and the auxiliary model. This merit makes our model more suitable for practical applications.
\begin{table*}[ht!]
\centering
\setlength{\tabcolsep}{1.85mm}{
\begin{tabular}{l|ccc|ccc|ccc}
\toprule
\multirow{2}*{Method} & \multicolumn{3}{c}{FS Lost \& Found} & \multicolumn{3}{|c}{FS Static}& \multicolumn{3}{|c}{Road Anomaly}\\
\cline{2-10} & FPR95 $\downarrow$ & AUROC $\uparrow$ & AP $\uparrow$ & FPR95 $\downarrow$ & AUROC $\uparrow$ & AP $\uparrow$& FPR95 $\downarrow$ & AUROC $\uparrow$ & AP $\uparrow$ \\
\midrule
\midrule
{MSP}~\cite{MSP} & 45.63 & 86.99 & 6.02 & 34.10 & 88.94 & 14.24 & 68.44 & 73.76 & 20.59\\
{MaxLogit}~\cite{streethazards} & 38.13 & 92.00 & 18.77 & 28.50 & 92.80 & 27.99 & 64.85 & 77.97 & 24.44\\
{SynthCP}~\cite{SynthCP} & 45.95 & 88.34 & 6.54 & 34.02 & 89.90 & 23.22 & 64.69 & 76.08 & 24.87 \\
{SML}~\cite{SML} & 14.53 & 96.88 & 36.55 & 16.75 & 96.69 & 48.67 & \textbf{49.74} & 81.96 & 25.82 \\
\midrule
{$\rm LDN\_BIN^{{\color{cyan}{\P}},{\color[RGB]{246,124,124}{\sharp}}}$}~\cite{Dense} & 23.97 & 95.59 & 45.71 & - & - & - & - & - & - \\
\midrule
{MulMem} (Ours) &14.47 & 97.39 & 41.73 & 5.07 & 98.87 & 65.61 & 63.38 & 80.02 & 29.49 \\
AuxCon (Ours) & 18.68 & 95.79 & 19.52 & 5.45 & 98.76 & 62.01 & 51.33 & 84.61 & 38.40 \\
{CosMe (Ours)} & \textbf{11.65} & \textbf{98.11} & \textbf{50.22} & \textbf{1.47} & \textbf{99.58} & \textbf{79.25} & 51.04 & \textbf{85.92} & \textbf{41.11} \\
\bottomrule
\end{tabular}
}
\caption{Comparison on Fishyscapes validation sets and Road Anomaly. $\color{cyan}{\P}$ and $\color[RGB]{246,124,124}{\sharp}$ indicate re-training and extra OOD data, respectively.}
\label{tab:fishyscapes_val}
\vspace{-2mm}
\end{table*}
\noindent\textbf{Fishyscapes validation sets.}
\label{sec:fvs}
\cref{tab:fishyscapes_val} shows our comparison results on the Fishyscapes (FS Lost \& Found and FS Static) validation sets. The results demonstrate that both MulMem and CosMe outperform the previous approaches without re-training or extra OOD data by large margins. In particular, CosMe even outperforms LDN\_BIN, which utilizes OOD data to retrain its segmentation model. Compared with SML~\cite{SML}, the previous SOTA approach without re-training or extra OOD data, \textbf{the major improvement of CosMe on FS Lost \& Found comes from AuxCon}, while \textbf{the improvement on FS Static is mainly thanks to MulMem}. This phenomenon further confirms that the improvements of CosMe mainly come from tackling hard in-distribution samples, since there are more hard in-distribution samples in FS Lost \& Found than in FS Static: as introduced in \cref{sec:lf}, the images in FS Lost \& Found come from real driving scenes, while the anomalous samples in FS Static are cut and pasted from other datasets, such as PASCAL VOC. The domain gap between the normal background (Cityscapes) and the anomalous foreground (PASCAL VOC) in FS Static benefits memory-based anomaly segmentation, which relatively reduces the difficulty of the task.
\begin{table}[ht!]
\centering
\setlength{\tabcolsep}{1.6mm}{
\begin{tabular}{l|ccc}
\toprule
Method & FPR95 $\downarrow$ & AUROC $\uparrow$ & AP $\uparrow$ \\
\midrule
\midrule
{MSP}~\cite{MSP} & 33.7 & 87.7 & 6.6\\
{SynthCP}~\cite{SynthCP} & 28.4 & 88.5 & 9.3\\
{MaxLogit}~\cite{streethazards} & 26.5 & 89.3 & 10.6\\
\midrule
{$\rm Dropout^{\color{cyan}{\P}}$}~\cite{dropout} & 79.4 & 69.9 & 7.5 \\
{$\rm DML^{\color{cyan}{\P}}$ }~\cite{deepmetric} & 17.3 & 93.7 & 14.7\\
{$\rm LDN\_BIN^{{\color{cyan}{\P}},{\color[RGB]{246,124,124}{\sharp}}}$}~\cite{Dense} & 30.9 & 89.7 & 18.8\\
\midrule
CosMe (PSPnet) & 23.2 & 91.3 & 16.8\\
{CosMe (DeepLabV3+)} & \textbf{15.5} & \textbf{94.6} & \textbf{19.7} \\
\bottomrule
\end{tabular}
}
\caption{Comparison results on Streethazards. $\color{cyan}{\P}$ and $\color[RGB]{246,124,124}{\sharp}$ indicate re-training and extra OOD data, respectively.}
\label{tab:streethazards}
\vspace{-2mm}
\end{table}
\noindent\textbf{Road Anomaly.}
\label{sec:RoadAnomaly}
As shown in \cref{tab:fishyscapes_val}, our results on Road Anomaly are significantly better than those of the other approaches, and on par with SML~\cite{SML} in FPR95. Since there is a large domain gap between Road Anomaly and Cityscapes, Road Anomaly contains a massive number of hard in-distribution samples. In this case, AuxCon improves performance much more than MulMem, which further evidences that AuxCon is adept at detecting hard in-distribution samples.
\noindent\textbf{Streethazards.}
\cref{tab:streethazards} shows the results on the Streethazards test set. CosMe outperforms the previous approaches without re-training or extra OOD data by large margins and is on par with DML and LDN\_BIN, which require re-training. Note that when we replace PSPnet with DeepLabV3+ as the pre-trained segmentation model, the performance is further improved and surpasses LDN\_BIN. This result implies that the memories embedded in more powerful models may be stronger.
\subsection{Ablation Study}
We have shown ablation results w.r.t. the two sub-modules of CosMe in \cref{tab:fishyscapes_val}. We now conduct ablation studies w.r.t. the design choices within each sub-module on the FS Lost \& Found validation set.
\noindent \textbf{Ablation on layer set $\mathcal{L}$ for sub-branches.} $\mathcal{L}$, introduced in~\cref{sec:pm}, contains the layers used for prototype learning in the memory bank. The ablation results are shown in the upper part of~\cref{tab:Li}. Note that this ablation is done solely on MulMem, without the help of AuxCon. It shows that MulMem achieves its best performance when $\mathcal{L} = \{\texttt{C4},\texttt{C5},\texttt{LH}\}$. This evidences that rich memories are embedded in the segmentation network which were not fully exploited by previous approaches.
\begin{table}[t!]
\centering
\setlength{\tabcolsep}{1.6mm}{
\begin{tabular}{c|c|ccc}
\toprule
& Layers in Set & FPR95 $\downarrow$ & AUROC $\uparrow$ & AP $\uparrow$ \\
\midrule
\midrule
\multirow{5}*{$\mathcal{L}$} & $\{\texttt{C4}\}$ & 34.55 & 93.17 & 9.81\\
&$\{\texttt{C5}\}$ & 36.49 & 92.97 & 22.11\\
&$\{\texttt{LH}\}$ & 20.84 & 95.74 & 12.32\\
&$\{\texttt{C4},\texttt{C5}\}$ & 23.94 & 95.34 & 27.38\\
\rowcolor{gray!15}&$\{\texttt{C4},\texttt{C5},\texttt{LH}\}$ & \textbf{14.47} & \textbf{97.39} & \textbf{41.73}\\
\midrule
\multirow{7}*{$\mathcal{L}_e$} & $\{\texttt{C4}\}$ & 10.58 & 98.13 & 44.99\\
\rowcolor{gray!15}&$\{\texttt{C5}\}$ & 11.65 & 98.11 & \textbf{50.22}\\
&$\{\texttt{LH}\}$ & 15.45 & 97.04 & 35.91\\
&$\{\texttt{O}\}$ & 16.90 & 96.50 & 27.95\\
&$\{\texttt{C4},\texttt{C5}\}$ & \textbf{10.39} & \textbf{98.24} & 49.96\\
&$\{\texttt{C4},\texttt{C5},\texttt{LH}\}$ & 13.09 & 97.67 & 42.74\\
&$\{\texttt{C4},\texttt{C5},\texttt{LH},\texttt{O}\}$ & 15.07 & 97.00 & 32.28\\
\bottomrule
\end{tabular}
}
\caption{Ablation results for the selection of $\mathcal{L}$ (upper part) and $\mathcal{L}_e$ (lower part).}
\label{tab:Li}
\vspace{-4mm}
\end{table}
\noindent \textbf{Ablation on evaluation layer set $\mathcal{L}_e$.} By fixing $\mathcal{L} = \{\texttt{C4},\texttt{C5},\texttt{LH}\}$, we then conduct ablation on evaluation layer set $\mathcal{L}_e$. As shown in the lower part of \cref{tab:Li}, when $\mathcal{L}_e = \{\texttt{C5}\}$, CosMe reaches its best performance.
\section{Discussion}
Compared with the previous classic OOD detection approaches, MaxLogit~\cite{streethazards} and MSP~\cite{MSP}, we measure anomalies in a white-box manner, which requires access to the internal structure of the pre-trained segmentation model. Here we give an intuitive explanation for the necessity of this access. According to Information Bottleneck (IB) theory~\cite{infobott}, the computation of a neural network can be seen as a filtering process: redundant information is filtered out by the multi-layer architecture of the deep network. Since the optimization goal of training a segmentation model is to maximize the model's prediction accuracy on the ground-truth in-distribution categories, information useful for OOD detection is filtered out to a certain extent. In summary, information from the final prediction alone is not sufficient for anomaly segmentation.
\section{Conclusion}
In this paper, we pointed out that the core challenge of anomaly segmentation is the existence of hard in-distribution samples. Based on psychological findings on consensus processes in group recognition memory, we proposed ``Consensus Synergizes with Memory'' (CosMe), which uses the inconsistency with an auxiliary model to complement the memory-based prototype-level distance for anomaly segmentation.
Our approach was verified on various datasets and achieved superior results on all of them.
Note that our approach places no constraint on the segmentation network and can be parallelized. This merit shows its potential for practical applications.
{\small
\bibliographystyle{ieee_fullname}
\section{Introduction}\label{sec:introduction}}
\IEEEPARstart{G}{raph} representation learning aims to pursue a meaningful vector representation of each node so as to facilitate downstream applications such as node classification, link prediction, \textit{etc}. Traditional methods are developed based on graph statistics \cite{perozzi2014deepwalk} or hand-crafted features~\cite{bhagat2011node,liben2007link}. Recently, a great amount of attention has been paid to graph neural networks (GNNs), such as graph convolutional networks (GCNs)~\cite{kipf2016semi}, GraphSAGE~\cite{hamilton2017inductive}, Graph Attention Networks (GATs)~\cite{velivckovic2017graph}, and their extensions~\cite{xu2018powerful,chen2018fastgcn,zou2019layer, li2019deepgcns,chen2020simple,wang2020large}, because they can jointly consider the feature and topological information of each node. Most of these approaches, however, focus on static graphs and cannot generalize to the case when new categories of nodes are emerging.
In many real world applications, different categories of nodes and their associated edges (in the form of subgraphs) are often continuously emerging in existing graphs. For instance, in a citation network \cite{sen2008collective,wang2020microsoft,mikolov2013distributed}, papers describing new research areas will gradually appear in the citation graph; in a co-purchasing network such as Amazon \cite{Bhatia16}, new types of products will continuously be added to the graph. Given these facts, how to incorporate the feature and topological information of new nodes in a continuous and effective manner such that performance over existing nodes is uninterrupted is a critical problem to investigate, especially when graphs are relatively large and retraining a new model over an entire graph is computationally expensive.
\textcolor{black}{To address this issue,
a few attempts have been made to tailor existing continual learning techniques to graph data.
For instance, adopting memory-replay based approaches, Zhou et al.~\cite{zhou2021overcoming} proposed to store a set of representative experience nodes in a buffer and replay them along with new tasks (categories) to prevent forgetting existing tasks (categories). The buffer, however, only stores node features and ignores the topological information of graphs. Inspired by regularization-based methods, Liu et al.~\cite{liu2020overcoming} developed topology-aware weight preserving~(TWP), which can preserve the topological information of existing graphs. However, its design hinders the capability of learning topology on new tasks (categories). A more detailed introduction to these related works can be found in Section \ref{sec:related_work_continual_learning}.}
\textcolor{black}{A desired learning system for continual graph representation learning is expected to continuously grasp knowledge from new categories of emerging nodes and capture their topological structures without interfering with the learned knowledge over existing graphs.
Inspired by the fact that humans learn to recognize objects by forming prototypes in the brain \cite{bowman2020tracking,dandan2013brain,zeithamova2008dissociable} and can achieve extraordinary continual learning capability, we present a completely novel framework, \textit{i.e.}, Hierarchical Prototype Networks (HPNs), to continuously extract different levels of abstract knowledge (in the form of prototypes) from graph data such that new knowledge will be accommodated while earlier experience can still be well retained.
Within this framework, representation learning is simultaneously conducted to avoid catastrophic forgetting, instead of considering these two objectives separately.
Although learning independent prototypes for different tasks is appealing for preventing forgetting, it may incur unbounded memory consumption with an unlimited number of new tasks. Therefore, we propose to denote each node by a composition of basic prototypes, so that nodes from different tasks and their associated relational structures are represented with compositions of a limited set of basic prototypes. Taking a social network as an example, if a node (person) falls into a certain category, it can be decomposed into basic atomic characteristics belonging to a set of attributes~(\textit{e.g.}, gender, nationality, hobby, \textit{etc.}), and the relationship between a pair of nodes can also be categorized into different basic types~(\textit{e.g.}, friends, coworkers, \textit{etc.}).}
\textcolor{black}{Inspired by these facts, we first develop the Atomic Feature Extractors (AFEs) to decompose each node into two sets of atomic embeddings, \textit{i.e.}, atomic node embeddings which encode the node feature information and atomic structure embeddings which encode its relations to neighboring nodes within multi-hop. Next, we present Hierarchical Prototype Networks to adaptively select, compose, and store representative embeddings with three levels of prototypes, \textit{i.e.}, atomic-level, node-level, and class-level. Given a new node, only the relevant AFEs and prototypes in each level will be activated and refined, while others are uninterrupted.}
\textcolor{black}{The memory efficiency and the continual learning capability of the proposed HPNs are theoretically validated. First, adopting concepts from geometry and coding theory, we show the equivalence between the prototypes of HPNs and the spherical codes, thereby deriving the upper bound of the number of prototypes. This indicates that our memory consumption is bounded regardless of the number of tasks encountered. Second, since the forgetting problem can be formalized as the prediction drift of previous data after learning new tasks, we derive the conditions under which the learning on new tasks will not alter the predictions of previous data, \textit{i.e.} forgetting is eliminated.}
\textcolor{black}{The theoretical merits of the proposed HPNs are justified by experiments on five public datasets. First, the performance of HPNs not only achieves the state-of-the-art, but is also comparable to or better than the jointly trained model (set as the upper bound of continual learning models). Second, the HPNs are memory efficient. For instance, on OGB-Products dataset containing more than 2 million nodes and 47 categories of nodes, HPNs achieve around 80$\%$ accuracy with only thousands of parameters.
}
\iffalse
To summarize, the main contributions of our work include:
\begin{itemize}
\item We present a novel framework, \textit{i.e.}, Hierarchical Prototype Networks (HPNs), to continuously extract different levels of abstract knowledge (in the form of prototypes) from the graph data such that new knowledge will be accommodated while earlier experience can be well retained
\item \textcolor{black}{Our theoretical analysis demonstrates that HPNs can avoid forgetting with an upper bounded memory consumption.}
\item Our experiment results on five different public datasets demonstrate that the proposed HPNs not only achieve state-of-the-art performance, exhibiting good continual learning capability, but also use less parameters (more \textcolor{black}{memory} efficient). For instance, on OGB-Products dataset that contains more than 2 million nodes and 47 categories of nodes, HPNs achieves around 80$\%$ accuracy with only thousands of parameters.
\end{itemize}
\fi
\section{Related Works}\label{sec:related_works}
The proposed Hierarchical Prototype Networks (HPNs) are closely related to continual learning and graph representation learning. In this section, we provide more detailed discussions by comparing HPNs with related works, especially on those directly applying existing continual learning techniques on graph data.
\subsection{Continual learning}\label{sec:related_work_continual_learning}
Continual learning aims to overcome the well-known catastrophic forgetting problem that a model's performance on previous tasks decreases significantly after being trained on new tasks. Existing works for continual learning can be categorized as
regularization-based methods, memory-replay based methods, and parametric isolation based methods.
\textcolor{black}{In the following, we give detailed introductions on these approaches, as well as their applicability to graph data.}
Regularization-based methods penalize the model objectives to maintain satisfactory performance on previous tasks \cite{jung2016less,li2017learning,kirkpatrick2017overcoming,farajtabar2020orthogonal,saha2021gradient}.
For instance, Li and Hoiem~\cite{li2017learning} introduced Learning without Forgetting (LwF), which uses knowledge distillation to constrain the shift of parameters for old tasks; Kirkpatrick et al.~\cite{kirkpatrick2017overcoming} proposed elastic weight consolidation (EWC), which adds a quadratic penalty to prevent the model weights from shifting too much. Recent works \cite{farajtabar2020orthogonal,saha2021gradient} seek to constrain the gradients for new tasks in a subspace orthogonal to the updating directions that are important for previous tasks. \textcolor{black}{Although regularization-based methods can alleviate forgetting on previous tasks, the constraints on the model weights reduce a model's plasticity for new tasks, resulting in inferior performance compared to the other two approaches. Regularization-based methods can be directly applied to graph neural networks and are included as baselines in our experiments.}
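As a concrete example of the regularization family, EWC's quadratic penalty can be sketched in a few lines. The diagonal Fisher information is assumed precomputed on the old task; this is an illustrative NumPy sketch, not the original implementation:

```python
import numpy as np


def ewc_loss(new_task_loss, theta, theta_star, fisher, lam=1.0):
    # EWC (sketch): L = L_new + (lam / 2) * sum_i F_i * (theta_i - theta*_i)^2
    # theta_star: parameters learned on the old task; fisher: diagonal Fisher
    # information, measuring how important each parameter was for that task.
    penalty = 0.5 * lam * np.sum(fisher * (theta - theta_star) ** 2)
    return new_task_loss + penalty
```

Parameters with large Fisher values are anchored near their old-task values, while unimportant parameters remain free to adapt, which is exactly the plasticity trade-off discussed above.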
Memory-replay based methods constantly feed a model with representative data of previous tasks to prevent forgetting \cite{lopez2017gradient,shin2017continual,aljundi2019gradient,caccia2020online,chrysakis2020online}.
One example is Gradient Episodic Memory (GEM) \cite{lopez2017gradient}, which stores representative data in an episodic memory and adds a constraint that prevents the loss on the episodic memory from increasing, only allowing it to decrease. Instead of storing data, Shin et al.~\cite{shin2017continual} added a generative model that generates pseudo-data of previous tasks to be interleaved with new task data for rehearsal. Recent works also look for better designs of memory replay to facilitate continual learning agents \cite{caccia2020online,chrysakis2020online}. Memory-replay based methods are currently among the most effective approaches for alleviating catastrophic forgetting, but they are not suitable for graph data, as they cannot store the topological information that is crucial for representing graphs. In contrast, our proposed HPNs explicitly design Atomic Feature Extractors (AFEs) to encode both node and topological information. Moreover, memory-replay based methods require rehearsal of old data each time a new task is learned, which introduces an extra computation burden.
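GEM's constraint can be illustrated with its single-constraint simplification (as used in A-GEM): when the new-task gradient would increase the loss on the episodic memory, it is projected onto the closest non-interfering direction. The sketch below shows only this projection step; GEM itself solves a quadratic program over one constraint per past task.

```python
import numpy as np


def project_gradient(grad, grad_mem):
    # If <grad, grad_mem> < 0, the update would raise the memory loss;
    # project grad onto the half-space where the inner product is >= 0.
    dot = grad @ grad_mem
    if dot >= 0:
        return grad
    return grad - (dot / (grad_mem @ grad_mem)) * grad_mem
```

The projected gradient still descends on the new task but, to first order, no longer increases the loss on the stored memory examples.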
Parametric isolation based methods adaptively introduce new parameters for new tasks to avoid the parameters of previous tasks being drastically changed \cite{rusu2016progressive,yoon2017lifelong,yoon2019scalable,wortsman2020supermasks,wu2019large}.
For instance, the progressive network~\cite{rusu2016progressive} allocates a new sub-network for each new task and blocks any modification of the previously learned networks. Yoon et al.~\cite{yoon2017lifelong} proposed a more flexible model (DEN) that dynamically adds new neurons to accommodate new tasks.
Recently, various innovative approaches to allocate separated parameters for different tasks have been developed \cite{wu2019large,yoon2019scalable,wortsman2020supermasks}.
Besides concrete models, Knoblauch et al.~\cite{knoblauch2020optimal} analyzed the required capability of an optimal continual learning agent. Although parametric isolation based methods are effective for alleviating the catastrophic forgetting problem, they also consistently increase the model complexity and memory consumption, which can be problematic. Recently developed methods use certain mechanisms to control memory consumption, such as merging parameters for similar tasks \cite{yoon2019scalable}. However, since task-specific parameters are fitted to each individual task, these parameters are seldom reused. In the proposed HPNs, we consider the properties of graph data and decompose each node into a combination of several prototypes. As these prototypes are largely shared by all tasks, the level of parameter reuse is greatly enhanced and only limited memory needs to be consumed. Moreover, we also derive a theoretical upper bound on the memory of HPNs.
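The core mechanics of parametric isolation can be reduced to a toy sketch: one column of parameters per task, with earlier columns frozen. This is our own minimal interface for illustration; the lateral connections of progressive networks and the parameter-merging mechanisms discussed above are deliberately omitted.

```python
class ProgressiveColumns:
    """Parametric-isolation sketch in the spirit of progressive networks:
    every new task gets a fresh trainable column, and all earlier columns
    are frozen so old-task parameters can never be overwritten."""

    def __init__(self):
        self.columns = []                       # one entry per task

    def add_task(self, params):
        for col in self.columns:
            col["trainable"] = False            # freeze previous tasks
        self.columns.append({"params": params, "trainable": True})

    def params_for(self, task_id):
        return self.columns[task_id]["params"]
```

The sketch makes the memory issue visible: `columns` grows linearly with the number of tasks, which is precisely the unbounded consumption that prototype sharing in HPNs is designed to avoid.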
Overall, although existing continual learning methods can perform well on Euclidean data, they are not suitable to be directly applied to graph data\textcolor{black}{, which is also shown in our experiments}. Our proposed HPNs are specially designed to overcome their aforementioned limitations.
\subsection{Graph representation learning}
Graph representation learning aims to encode both the feature information of nodes and the topological structure of the incoming graph. Traditional methods relied on graph statistics or hand-crafted features~\cite{bhagat2011node,liben2007link}. Recently, a great amount of attention has been paid to graph neural networks (GNNs), such as graph convolutional networks (GCNs)~\cite{kipf2016semi}, GraphSAGE~\cite{hamilton2017inductive}, Graph Attention Networks (GATs)~\cite{velivckovic2017graph}, and their extensions~\cite{xu2018powerful,chen2018fastgcn,bahonar2019graph,zhang2019graph,zhang2020context,lei2020spherical,gao2021topology}. Instead of focusing on shallow networks, there are also works on building deep GCNs to further increase the capacity of GNNs~\cite{li2019deepgcns,chen2020simple,rong2019dropedge,zhang2020dropping,wang2020large}. \textcolor{black}{These models are not designed for the continual learning setting and experience the catastrophic forgetting problem when learning a sequence of tasks, as shown in our experiments.}
Currently, only limited efforts have been made to pursue continual graph representation learning. Zhou et al.~\cite{zhou2021overcoming} integrated memory replay into GNNs by storing experience nodes from previous tasks; however, the topological structure of graphs is ignored. Liu et al.~\cite{liu2020overcoming} developed topology-aware weight preserving (TWP), which can preserve the topological information of previous graphs; however, preserving the topology of previous graphs hampers its capability of learning topology on new graphs. Galke et al.~\cite{galke2020incremental} consider a scenario where new classes of nodes may appear, which is similar to our setting, but they focus on adapting the model to new patterns and do not explicitly maintain the performance on previous tasks. Streaming GNN~\cite{wang2020streaming} and Feature Graph Network (FGN)~\cite{wang2020lifelong} are also related to GNNs and continual learning. However, the setting of Streaming GNN is time-step incremental, while our setting is task incremental. \textcolor{black}{FGN transforms node classification into graph classification by constructing feature graphs, so as to apply existing continual learning techniques. However, the topological information is not fully utilized, since the information aggregation for each node does not include the neighboring nodes.}
Note that continual graph representation learning is essentially different from dynamic graph representation learning \cite{yu2018netwalk,nguyen2018continuous,zhou2018dynamic,ma2020streaming} and few-shot graph representation learning \cite{zhou2019meta,guo2021few,yao2020graph}. Dynamic graph representation learning focuses on learning the evolving representations of nodes over time, while in a continual learning setting we constantly deal with different graphs (different tasks) and the model is not allowed to access the previously observed graphs. Unlike continual learning, which aims to overcome the catastrophic forgetting problem, few-shot graph representation learning targets fast adaptation to new tasks and follows a completely different setting. Specifically, for few-shot learning, a model is first trained on several meta-training tasks and then evaluated on the meta-testing tasks. During the evaluation, a few-shot learning model is independently trained and evaluated on each task, while a continual learning model learns sequentially on testing tasks and is eventually evaluated over all previous tasks.
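The evaluation difference can be made concrete: a continual model is trained once over the task sequence, and after each task it is judged on all tasks seen so far, which also yields the common average-accuracy and forgetting measures. The `train_task`/`accuracy` interface below is an assumption introduced for illustration only.

```python
def continual_evaluate(model, tasks):
    # Sequentially train; after each task, evaluate on all tasks seen so far.
    acc = []
    for t, task in enumerate(tasks):
        model.train_task(task)
        acc.append([model.accuracy(prev) for prev in tasks[: t + 1]])
    avg_acc = sum(acc[-1]) / len(tasks)
    # Forgetting per old task: best accuracy it ever had minus its final one.
    forgetting = [max(acc[i][j] for i in range(j, len(tasks))) - acc[-1][j]
                  for j in range(len(tasks) - 1)]
    return avg_acc, forgetting
```

In contrast, a few-shot protocol would reset the model between meta-testing tasks, so no quantity like `forgetting` is defined there.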
\section{Hierarchical Prototype Networks}
In this section, we first state the problem we aim to study and the notations. Then we present Hierarchical Prototype Networks (HPNs), which consist of two core modules, \textit{i.e.}, Atomic Feature Extractors (AFEs) and Hierarchical \textcolor{black}{Prototypes (HPs)}, as shown in Figure \ref{fig:pipeline}. The AFEs serve to extract a set of atomic features from the given graph, and the \textcolor{black}{HPs} select, compose, and store the representative features in the form of different levels of prototypes. During the training stage, each node only refines the relevant AFEs and prototypes of the model without interfering with the irrelevant parts (\textit{i.e.}, to avoid catastrophic forgetting). In the test stage, the model activates the relevant AFEs and prototypes to perform inference.
\subsection{Problem Statement and Notations}
We study continual learning on graphs that have new categories of nodes and associated edges (in the form of subgraphs) emerging in a continuous manner. In the context of continual learning, assume we have a sequence of $p$ tasks $\{\mathcal{T}^i | i=1,..., p \}$, in which each task $\mathcal{T}^i$ aims to learn a satisfactory representation for a new subgraph $\mathcal{G}_i$ consisting of nodes belonging to some new categories. A desired model should maintain its performance on all previous tasks after being successively trained on the sequence of $p$ tasks from $\mathcal{T}^1$ to $\mathcal{T}^p$.
For simplicity, we omit the subscripts in this section. Full notations will be used in the theoretical analysis. Each graph $\mathcal{G}$ consists of a node set $\mathbb{V}=\{v_i | i=1,...,N\}$ with $N$ nodes and an edge set $\mathbb{E}=\{(v_i,v_j) \}$ denoting the connections of nodes in $\mathbb{V}$. Each node $v_i$ can be represented as a feature vector $\mathbf{x}(v_i)\in\mathbb{R}^{d_v}$ that encodes node attributes, \textit{e.g.}, gender, nationality, hobby, \textit{etc}.
The set of $l$-hop neighboring nodes of $v_i$ is defined as $\mathcal{N}^l (v_i)$, with $\mathcal{N}^0 (v_i)=\{v_i\}$.
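A minimal sketch of this neighborhood notation on an adjacency-list graph, reading $\mathcal{N}^l(v)$ as the nodes at shortest-path distance exactly $l$ from $v$ (an assumption consistent with $\mathcal{N}^0(v)=\{v\}$):

```python
def l_hop_neighbors(adj, v, l):
    # N^l(v): nodes at shortest-path distance exactly l from v; N^0(v) = {v}.
    # adj maps each node to the set of its direct neighbors.
    frontier, seen = {v}, {v}
    for _ in range(l):
        frontier = {u for w in frontier for u in adj[w]} - seen
        seen |= frontier
    return frontier
```

Subtracting `seen` at every step keeps each node in exactly one hop ring, so the rings for $l = 0, 1, 2, \dots$ partition the nodes reachable from $v$.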
\begin{figure*}
\centering
\includegraphics[height=8.5cm]{pipeline_new01.jpg}
\captionsetup{font=small}
\caption{The framework of HPNs. On the left, subgraphs from different tasks come in sequentially. Given a node $v$, $u_k^j$ denotes the $j$-th sampled node from $k$-hop neighbors. In the middle, node $v$ and the sampled neighbors ($\mathcal{N}_{sub}(v)$) are fed into the selected AFEs ($\mathrm{AFE}_{\textrm{node}}^{\textrm{select}}$ or $\mathrm{AFE}_{\textrm{struct}}^{\textrm{select}}$) to get atomic embeddings ($\mathbb{E}_A^{\mathrm{select}}(v)$), which are either matched to existing A-prototypes or used as new A-prototypes. The selected A-prototypes are further matched to a N- and a C-prototype for the hierarchical representation, which is finally fed into the classifier to perform node classification.}
\label{fig:pipeline}
\end{figure*}
\subsection{Atomic Feature Extractors}\label{sec:AFEs}
\textcolor{black}{As mentioned in the Introduction, to handle an unlimited number of potential new categories of nodes with limited memory consumption, we represent each node as a composition of basic prototypes selected from a limited set. To construct or refine these basic prototypes, we first need to extract basic features from the input graph with feature extractors.}
Specifically, we develop Atomic Feature Extractors (AFEs) to consider two different sets of atomic embeddings, \textit{i.e.}, atomic node embeddings which encode the target node features and atomic structure embeddings that encode its relations to \textcolor{black}{multi-hop neighbors}. \textcolor{black}{To ensure these generated basic prototypes are capable of encoding low-level features that can be shared across different tasks, we avoid deep structures in the AFEs and formulate them} as learnable linear transformations\textcolor{black}{:} $\mathrm{AFE}_{\textrm{node}} = \{ \mathbf{A}_i \in \mathbb{R}^{d_v \times d_{a}} | i\in\{1,...,l_a\} \}$ and $\mathrm{AFE}_{\textrm{struct}} = \{ \mathbf{R}_j \in \mathbb{R}^{d_v \times d_{r}} | j\in\{1,...,l_r\} \}$ where $\mathbf{A}_i $ and $\mathbf{R}_j$ are real matrices to encode atomic node and structure information, respectively. \textcolor{black}{Each matrix $\mathbf{A}_i $ or $\mathbf{R}_j$ corresponds to a certain type of features, and $l_a$, $l_r$ denote the number of matrices in} $\mathrm{AFE}_{\textrm{node}}$ and $\mathrm{AFE}_{\textrm{struct}}$, respectively. Given a node $v$, a set of atomic node embeddings is obtained by applying $\mathrm{AFE}_{\textrm{node}}$ to the feature vector $\mathbf{x}(v)$:
\vspace{-0cm}
\begin{align}
\mathbb{E}^{\textrm{node}}_A(v) = \{ \mathbf{x}^T(v)\mathbf{A}_i | \mathbf{A}_i\in\mathrm{AFE}_{\textrm{node}}\}.\label{eqn1}
\end{align}
To obtain atomic structure embeddings, the multi-hop neighboring nodes \textcolor{black}{are considered to encode the relationship between node $v$ and neighboring nodes within a fixed number of hops}. \textcolor{black}{Since the number of neighboring nodes varies from node to node, for computational efficiency in the following procedure, we uniformly sample a fixed number of vertices from the 1-hop up to the $h$-hop neighborhood. The sampled neighborhood set is denoted as}
$\mathcal{N}_{sub}(v) \subseteq \bigcup\limits_{l\in\{1,...,h\}} \mathcal{N}^l (v)$.
Then these selected \textcolor{black}{neighboring} nodes are embedded via matrices in $\mathrm{AFE}_{\textrm{struct}}$ to encode different types of interactions \textcolor{black}{between the target node $v$ and its neighbors}:
\begin{align}
\mathbb{E}^{\textrm{struct}}_A(v) = \{\mathbf{x}^T(u)\mathbf{R}_i | \mathbf{R}_i\in\mathrm{AFE}_{\textrm{struct}}, u\in \mathcal{N}_{sub}(v)\}.\label{eqn2}
\end{align}
\textcolor{black}{Above all, with $\mathrm{AFE}_{\textrm{node}}$ and $\mathrm{AFE}_{\textrm{struct}}$, the attribute and structure information of each node $v$ is encoded in the two sets $\mathbb{E}^{\textrm{node}}_A(v)$ and $\mathbb{E}^{\textrm{struct}}_A(v)$. For notational convenience, we denote the complete atomic embedding set of a node $v$ as the union of the atomic node embedding set $\mathbb{E}^{\textrm{node}}_A(v)$ and the atomic structure embedding set $\mathbb{E}^{\textrm{struct}}_A(v)$, \textit{i.e.}},
\begin{align}
\mathbb{E}_A(v) = \mathbb{E}^{\textrm{node}}_A(v) \cup \mathbb{E}^{\textrm{struct}}_A(v).
\end{align}
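To make the computation concrete, the atomic embedding sets above reduce to a handful of matrix-vector products. A minimal NumPy sketch, with toy dimensions and random placeholder matrices in place of the learned AFEs (all names and sizes are illustrative, not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
d_v, d_a, d_r = 8, 4, 4      # toy feature / embedding dimensions
l_a, l_r = 3, 2              # numbers of node / structure AFEs

# learnable linear maps (random placeholders here)
AFE_node   = [rng.normal(size=(d_v, d_a)) for _ in range(l_a)]
AFE_struct = [rng.normal(size=(d_v, d_r)) for _ in range(l_r)]

x_v = rng.normal(size=d_v)                            # target node features
neighbors = [rng.normal(size=d_v) for _ in range(3)]  # sampled N_sub(v)

# Eq. (1): atomic node embeddings of v
E_node = [x_v @ A for A in AFE_node]
# Eq. (2): atomic structure embeddings from the sampled neighbors
E_struct = [u @ R for R in AFE_struct for u in neighbors]
# Eq. (3): complete atomic embedding set
E_A = E_node + E_struct

print(len(E_A))  # l_a + l_r * |N_sub(v)| = 3 + 2*3 = 9
```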
Note that \textcolor{black}{different} $\mathbf{A}_i$ and $\mathbf{R}_i$ are designed to generate different types of atomic features. \textcolor{black}{Therefore}, we impose a divergence loss on the AFEs to ensure that they are uncorrelated with each other and thus map features to different subspaces:
\begin{align}
\mathcal{L}_{div} = \sum_{i=1}^{l_a}\sum_{\substack{j=1\\j\neq i}}^{l_a} \mathbf{A}_i^T\mathbf{A}_j + \sum_{i=1}^{l_r}\sum_{\substack{j=1\\j\neq i}}^{l_r} \mathbf{R}_i^T\mathbf{R}_j.
\end{align}
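As written, each term $\mathbf{A}_i^T\mathbf{A}_j$ is a matrix, so in an implementation the pairwise products must be reduced to scalars. A sketch of one plausible realization, using the Frobenius norm of the cross-products (the choice of reduction is our assumption, not specified above):

```python
import numpy as np

rng = np.random.default_rng(1)
AFE_node   = [rng.normal(size=(8, 4)) for _ in range(3)]  # placeholder AFEs
AFE_struct = [rng.normal(size=(8, 4)) for _ in range(2)]

def divergence_loss(mats):
    # penalize correlation between every pair of distinct AFEs; the
    # Frobenius norm turns each matrix product A_i^T A_j into a scalar
    return sum(np.linalg.norm(mats[i].T @ mats[j])
               for i in range(len(mats)) for j in range(len(mats)) if i != j)

L_div = divergence_loss(AFE_node) + divergence_loss(AFE_struct)
assert L_div >= 0.0
```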
\subsection{Hierarchical Prototype Networks}\label{sec:HPs}
\textcolor{black}{Based on the atomic embeddings of the incoming subgraph, HPNs distill and store the representative features in the form of different levels of prototypes to memorize the current task}, as shown in Figure \ref{fig:pipeline}. \textcolor{black}{This is achieved by refining existing prototypes if the input subgraph only contains features observed before, and by creating new prototypes if new features are encountered}. Specifically, HPNs produce three levels of prototypes. \textcolor{black}{First, the atomic-level prototypes (A-prototypes) are directly refined or created from the atomic embedding set, and they denote the prototypical basic features of nodes (\textit{e.g.}, gender, nationality, and social relations of people in a social network). Since the A-prototypes only describe low-level features of a node, similar to the output of the initial layers of a deep neural network, we further develop node-level (N-prototypes) and class-level prototypes (C-prototypes) to capture higher-level information about each node. The N-prototypes are generated by associating the A-prototypes with each node to describe the node as a whole. The C-prototypes are generated from the N-prototypes to describe common features shared by a group of similar nodes.} From the atomic level to the class level, the prototypes denote abstract knowledge of the subgraph at different scales, analogous to the feature maps of convolutional neural networks at different layers.
\textcolor{black}{Next, we introduce when and how HPNs refine existing prototypes or establish new ones with the atomic embeddings. As mentioned earlier, for each task that contains certain categories of nodes, only the most relevant AFEs and prototypes are activated while the others remain uninterrupted to avoid forgetting. In other words, for each node, we choose a subset of $\mathbb{E}_A(v)$ to refine the prototypes.} Specifically, as shown in Figure \ref{fig:pipeline}, given \textcolor{black}{a node from }an incoming subgraph, \textcolor{black}{the AFEs whose atomic embeddings are close enough to existing A-prototypes are chosen as the relevant ones.} Formally, we first obtain $\mathbb{E}^{\textrm{node}}_A(v)$ and $\mathbb{E}^{\textrm{struct}}_A(v)$ via Eq. (\ref{eqn1}) and Eq. (\ref{eqn2}), respectively.
Then, we calculate the maximum cosine similarity between atomic embeddings of each AFE ($\mathbf{e}_i$) and the A-prototypes as:
\begin{align}\label{eq:simmax}
\mathrm{SimMAX}^{\textrm{id}}_{i} = \max_{\mathbf{p}}(\frac{\mathbf{e}_i^T\mathbf{p}}{\norm{\mathbf{e}_i}_2 \norm{\mathbf{p}}_2}) , \mathbf{e}_i\in \mathbb{E}^{\textrm{id}}_A(v), \mathbf{p} \in \mathbb{P}_A,
\end{align}
where $\textrm{id} \in \{\textrm{node}, \textrm{struct}\}$, $i$ ranges from $1$ to $l_a$ (or $l_r$), \textcolor{black}{and $\mathbb{P}_A$ is the atomic prototype set containing all A-prototypes}. After that, we sort the AFEs in descending order of $\mathrm{SimMAX}^{\mathrm{id}}_{i}$ as $\mathrm{AFE}_{\textrm{node}}^{\textrm{sort}} = \{ \mathbf{A}_{i'} \in \mathbb{R}^{d_v \times d_{a}} | i'\in\{1,...,l_a\} \}$ and $\mathrm{AFE}_{\textrm{struct}}^{\textrm{sort}} = \{ \mathbf{R}_{j'} \in \mathbb{R}^{d_v \times d_{r}} | j'\in\{1,...,l_r\} \}$. We then select the top $l_a'$ and top $l_r'$ ranked AFEs from these two sets as $\mathrm{AFE}_{\textrm{node}}^{\textrm{select}}$ and $\mathrm{AFE}_{\textrm{struct}}^{\textrm{select}}$, respectively, where $l_a'$ and $l_r'$ are fixed hyperparameters with $l_a'\leq l_a$ and $l_r'\leq l_r$. \textcolor{black}{Finally, the atomic embeddings generated by these selected AFEs are denoted as $\mathbb{E}^{\textrm{select}}_A(v)$, and we have $\mathbb{E}^{\textrm{select}}_A(v) \subset \mathbb{E}_A(v)$.}
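The selection step can be sketched as follows; for brevity, each AFE is represented here by a single atomic embedding, and all names and dimensions are illustrative:

```python
import numpy as np

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def select_afes(embeddings_per_afe, prototypes, top_k):
    """Rank AFEs by the best match of their embedding against the
    A-prototypes (the SimMAX score) and keep the top_k of them."""
    scores = [max(cos(e, p) for p in prototypes) for e in embeddings_per_afe]
    order = np.argsort(scores)[::-1]          # descending similarity
    return [int(i) for i in order[:top_k]]

rng = np.random.default_rng(2)
P_A = [rng.normal(size=4) for _ in range(5)]   # existing A-prototypes
E = [rng.normal(size=4) for _ in range(3)]     # one embedding per AFE
selected = select_afes(E, P_A, top_k=2)
assert len(selected) == 2
```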
\textcolor{black}{$\mathbb{E}^{\textrm{select}}_A(v)$ is then used for refining existing prototypes or establishing new ones. To determine which embeddings contain observed features and which contain new ones, a} matching process is first conducted between $\mathbb{E}^{\textrm{select}}_A(v)$ and $\mathbb{P}_A$. Formally, we measure the cosine similarity between elements of $\mathbb{E}^{\textrm{select}}_A(v)$ and elements of $\mathbb{P}_A$ as
\begin{align}\label{eq:ea_sim}
\mathrm{Sim}_{{E\rightarrow A}}(v) = \{\frac{\mathbf{e}_i^T\mathbf{p}}{\norm{\mathbf{e}_i}_2 \norm{\mathbf{p}}_2} | \mathbf{e}_i\in \mathbb{E}^{\textrm{select}}_A(v), \mathbf{p} \in \mathbb{P}_A \}.
\end{align}
\textcolor{black}{Given $\mathrm{Sim}_{{E\rightarrow A}}(v)$, the atomic embeddings whose cosine similarity to at least one existing A-prototype is not less than a threshold $t_A$ are regarded as containing previously observed features, since they are close to existing prototypes}, \textit{i.e.}
\begin{align}
\mathbb{E}_{old}(v) = \{\mathbf{e}_i |\quad \exists \mathbf{p} \in \mathbb{P}_A \quad s.t.\quad \frac{\mathbf{e}_i^T\mathbf{p}}{\norm{\mathbf{e}_i}_2\norm{\mathbf{p}}_2} \geqslant t_A \}.
\end{align}
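A sketch of this threshold-based split between previously observed and new atomic features (toy vectors, illustrative names):

```python
import numpy as np

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def split_by_threshold(embeddings, prototypes, t_A):
    """E_old: embeddings within cosine similarity t_A of some existing
    A-prototype; E_new: all remaining embeddings."""
    old, new = [], []
    for e in embeddings:
        if any(cos(e, p) >= t_A for p in prototypes):
            old.append(e)
        else:
            new.append(e)
    return old, new

P_A = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]   # existing A-prototypes
E = [np.array([0.9, 0.1]), np.array([-1.0, -1.0])]   # selected embeddings
E_old, E_new = split_by_threshold(E, P_A, t_A=0.5)
assert len(E_old) == 1 and len(E_new) == 1
```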
\textcolor{black}{$\mathbb{E}_{old}(v)$ is then used to refine $\mathbb{P}_A$}. To this end, \textcolor{black}{a distance loss} $\mathcal{L}_{dis}$ is computed to enhance the cosine similarity between each $\mathbf{e}_i \in \mathbb{E}_{old}(v)$ and its corresponding A-prototype $\mathbf{p}_i \in \mathbb{P}_A$, \textit{i.e.}
\begin{align}
\mathcal{L}_{dis} = -\sum_{\mathbf{e}_i \in \mathbb{E}_{old}(v)} \frac{\mathbf{e}_i^T\mathbf{p}_i}{\norm{\mathbf{e}_i}_2\norm{\mathbf{p}_i}_2}.
\end{align}
By minimizing $\mathcal{L}_{dis}$, not only will the existing A-prototypes in $\mathbb{P}_A$ be refined, but the atomic embeddings will also be drawn closer to the `standard' A-prototypes.
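A minimal sketch of $\mathcal{L}_{dis}$, assuming each old embedding has already been paired with its closest A-prototype:

```python
import numpy as np

def distance_loss(E_old, matched_prototypes):
    """Negative summed cosine similarity between each old embedding and
    its matched A-prototype; minimizing this pulls the pairs together."""
    loss = 0.0
    for e, p in zip(E_old, matched_prototypes):
        loss -= e @ p / (np.linalg.norm(e) * np.linalg.norm(p))
    return loss

e = np.array([1.0, 0.0])
p = np.array([1.0, 0.0])
assert abs(distance_loss([e], [p]) - (-1.0)) < 1e-9  # perfectly aligned pair
```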
Next, we discuss how to deal with the atomic embeddings that are not close to any existing prototypes, \textit{i.e.}, $\mathbb{E}_{new}(v) = \mathbb{E}^{\textrm{select}}_A(v) \backslash \mathbb{E}_{old}(v)$ or $\mathbb{E}_{new}(v) = \{\mathbf{e}_i | \quad \forall \mathbf{p} \in \mathbb{P}_A, \quad \frac{\mathbf{e}_i^T\mathbf{p}}{\norm{\mathbf{e}_i}_2\norm{\mathbf{p}}_2} < t_A \}$.
In contrast to $\mathbb{E}_{old}(v)$, atomic embeddings in $\mathbb{E}_{new}(v)$ are regarded as new atomic features of the corresponding AFEs \textcolor{black}{and should be accommodated with new prototypes.} \textcolor{black}{Considering that very similar embeddings may exist in $\mathbb{E}_{new}(v)$ and cause HPNs to create redundant prototypes, we first filter $\mathbb{E}_{new}(v)$ to obtain $\mathbb{E}'_{new}(v)$, which only contains the distinctive ones, such that}
\begin{align}
\forall \mathbf{e}_i, \mathbf{e}_j \in \mathbb{E}'_{new}(v), \frac{\mathbf{e}_i^T\mathbf{e}_j}{\norm{\mathbf{e}_i}_2\norm{\mathbf{e}_j}_2} < t_A.
\end{align}
In this way, \textcolor{black}{since the embeddings in $\mathbb{E}_{new}'(v)$ contain only new features, they are included in $\mathbb{P}_A$ as new A-prototypes. Formally, the updated $\mathbb{P}_A$ is the union of the previous A-prototypes $\mathbb{P}_A$ and the new A-prototypes $\mathbb{E}_{new}'(v)$:}
\begin{align}\label{eq:pa_update}
\mathbb{P}_A = \mathbb{P}_A \cup \mathbb{E}'_{new}(v).
\end{align}
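The filtering and update steps can be sketched as follows; the greedy scan is one plausible way to enforce the pairwise-dissimilarity constraint (an assumption, since the filtering order is not specified above), and the embeddings in `E_new` are assumed to already be dissimilar to every existing prototype:

```python
import numpy as np

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def add_new_prototypes(P_A, E_new, t_A):
    """Greedily keep only mutually dissimilar new embeddings (pairwise
    cosine similarity < t_A) and append them to the prototype set."""
    distinct = []
    for e in E_new:
        if all(cos(e, q) < t_A for q in distinct):
            distinct.append(e)
    return P_A + distinct

P_A = [np.array([1.0, 0.0])]
E_new = [np.array([0.0, 1.0]), np.array([0.0, 2.0])]  # near-duplicate pair
P_A = add_new_prototypes(P_A, E_new, t_A=0.9)
assert len(P_A) == 2  # the duplicate direction is filtered out
```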
After \textcolor{black}{updating $\mathbb{P}_A$, $\mathrm{Sim}_{{E\rightarrow A}}(v)$ is also updated according to Eq. (\ref{eq:ea_sim}). Then every element in $\mathrm{Sim}_{{E\rightarrow A}}(v)$ is not less than $t_A$, \textit{i.e.}, each element in $\mathbb{E}^{\textrm{select}}_A(v)$ is matched to an A-prototype with cosine similarity no less than the threshold $t_A$. These matched A-prototypes are denoted as $\mathbb{A}(v)$, which represents the basic features of node $v$, as shown in Figure \ref{fig:pipeline}.}
\textcolor{black}{As mentioned earlier, besides $\mathbb{A}(v)$, higher-level representations are also beneficial for comprehensively describing a node at multiple granularities. Therefore, we further map $\mathbb{A}(v)$ to higher-level prototypes so as to obtain hierarchical prototype representations.} Firstly, $\mathbb{A}(v)$ is mapped to an N-prototype \textcolor{black}{to describe $v$ as a whole. Specifically, the vectors in $\mathbb{A}(v)$ are concatenated as $\mathbf{a}_{1}\oplus\cdots\oplus\mathbf{a}_{l_a'+l_r'}$ ($\mathbf{a}_i \in \mathbb{A}(v)$) and fed into a fully connected layer $\textrm{FC}_{A\rightarrow N}(\cdot)$ to generate the node-level embedding set $\mathbb{E}_N(v) = \{\textrm{FC}_{A\rightarrow N}(\mathbf{a}_{1}\oplus\cdots\oplus\mathbf{a}_{l_a'+l_r'}) \}$. Here, for notational consistency with the aforementioned atomic embedding set, we also formulate the node-level embedding as a set $\mathbb{E}_N(v)$, although it contains only one element. With $\mathbb{E}_N(v)$, the next step is to form an N-prototype representation $\mathbb{N}(v)$ for node $v$, which follows the same process used to form $\mathbb{A}(v)$ at the atomic level, except that the threshold is $t_N$ instead of $t_A$. Finally, to enrich the node representation with the common features shared with similar nodes, we develop the C-prototypes. The generation of C-prototypes is similar to that of the N-prototypes. We first obtain a class-level embedding set $\mathbb{E}_C(v)$ by applying another fully connected layer $\textrm{FC}_{N\rightarrow C}$ to the element of $\mathbb{N}(v)$. Then, we match $\mathbb{E}_C(v)$ to existing C-prototypes or establish new ones. Different from the N-prototypes, the threshold used here is $t_C$, which is typically larger than $t_A$ and $t_N$. In this way, each C-prototype is matched to embeddings within a large space and therefore represents the common features shared by a wide range of similar nodes.
Again, for notational convenience, we define $\mathbb{P}_H(v)$ as the hierarchical prototype representation of the target node $v$ (as shown in Figure \ref{fig:pipeline}), which is the union of $\mathbb{A}(v)$, $\mathbb{N}(v)$, and $\mathbb{C}(v)$: }
\begin{align}\label{eq:ph}
\mathbb{P}_H(v) = \mathbb{A}(v) \cup \mathbb{N}(v) \cup \mathbb{C}(v).
\end{align}
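A sketch of the atomic-to-node-to-class mapping, with random placeholder weights standing in for $\textrm{FC}_{A\rightarrow N}$ and $\textrm{FC}_{N\rightarrow C}$ and with the prototype-matching steps (thresholds $t_N$, $t_C$) omitted for brevity:

```python
import numpy as np

rng = np.random.default_rng(3)
d_a, d_n, d_c = 4, 6, 6
n_matched = 3                       # |A(v)| = l_a' + l_r'

# placeholder weights standing in for FC_{A->N} and FC_{N->C}
W_an = rng.normal(size=(n_matched * d_a, d_n))
W_nc = rng.normal(size=(d_n, d_c))

A_v = [rng.normal(size=d_a) for _ in range(n_matched)]  # matched A-prototypes

e_N = np.concatenate(A_v) @ W_an    # node-level embedding, |E_N(v)| = 1
e_C = e_N @ W_nc                    # class-level embedding, |E_C(v)| = 1

# after matching e_N and e_C against the N-/C-prototype sets (omitted),
# the hierarchical representation is the union A(v) ∪ N(v) ∪ C(v)
P_H = A_v + [e_N, e_C]
assert len(P_H) == n_matched + 2
```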
\begin{algorithm}[t]
\small
\captionsetup{font=small}
\caption{Learning Procedure for HPNs.}
\label{tqnn_qnn_classification_algorithm}
\SetKwInOut{Input}{Input}
\SetKwInOut{Output}{Output}
\Input{Task sequence: $\{\mathcal{T}_1,...,\mathcal{T}_p\}$, HPNs}
\For{$\mathcal{T}\gets1$ \KwTo $p$}{
Get the data of the current task: $\mathbb{V}$, $\mathbb{E}$, $\mathbf{X}(\mathbb{V})=\{\mathbf{x}(v)|v \in \mathbb{V}\}$.
Select $\mathrm{AFE}_{\textrm{node}}^{\textrm{select}}$ and $\mathrm{AFE}_{\textrm{struct}}^{\textrm{select}}$.
Compute $\mathcal{L}$ = HPNs($\mathbb{V}$,$\mathbf{X}(\mathbb{V})$,$\mathbb{E}$).
Optimize $\mathcal{L}$.
}
\Output{updated HPNs}
\end{algorithm}
\subsection{Learning Objective}
The obtained hierarchical prototypes of each node are first concatenated into a unified vector and then passed through a fully connected layer $\mathrm{FC}$ to obtain a $c$-dimensional feature vector ($c$ is the number of classes), \textit{i.e.}, $\textrm{FC}(\mathbf{h}_{1}\oplus\cdots\oplus\mathbf{h}_{l_a'+l_r'+2})$, where $\mathbf{h}_i\in\mathbb{P}_H(v)$. In this paper, we aim to perform node classification. Therefore, based on the $c$-dimensional feature vector and the softmax function $\sigma(\cdot)$, we estimate the label with $\hat{y}_i=\sigma(\textrm{FC}(\mathbf{h}_{1}\oplus\cdots\oplus\mathbf{h}_{l_a'+l_r'+2}))_i$, where $i$ is the class index. With the output predictions $\hat{y}_i$ and the target label $y_i\in \{1,2,...,c\}$, the corresponding classification loss is given by
\begin{align}
\mathcal{L}_{cls} =\sum_{i=1}^c -y_i\log(\hat{y_i}),
\end{align}
which is essentially the cross entropy loss function. Note that besides node classification, $\mathbb{P}_H(v)$ may also be used for other tasks based on different objective functions. In this paper, we focus on node classification and the overall loss of HPNs is:
\begin{align}
\label{eq:loss}
\mathcal{L} =\mathcal{L}_{cls} + \alpha \cdot \mathcal{L}_{div} + \beta \cdot \mathcal{L}_{dis},
\end{align}
where $\alpha\geq 0$ and $\beta\geq 0$ are hyperparameters to balance the two auxiliary losses.
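A sketch of the overall objective in Eq. (\ref{eq:loss}), with the auxiliary losses passed in as precomputed scalars (all values illustrative):

```python
import numpy as np

def softmax(z):
    z = z - z.max()           # numerical stability
    ez = np.exp(z)
    return ez / ez.sum()

def total_loss(logits, y_onehot, L_div, L_dis, alpha=0.1, beta=0.1):
    """Overall objective: cross entropy plus weighted auxiliary losses."""
    y_hat = softmax(logits)
    L_cls = -float(np.sum(y_onehot * np.log(y_hat + 1e-12)))
    return L_cls + alpha * L_div + beta * L_dis

logits = np.array([2.0, 0.5, -1.0])
y = np.array([1.0, 0.0, 0.0])    # true class is index 0
L = total_loss(logits, y, L_div=0.0, L_dis=0.0)
assert L > 0.0
```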
\subsection{Theoretical Analysis}\label{sec:theoretical_analysis}
In this subsection, we provide a theoretical upper bound on the memory consumption and analyze how the model configuration affects HPNs' capacity to deal with different tasks. Both theoretical results are justified and analyzed in the experiments. Only the main results are provided here; the detailed proofs and analysis are given in the supplementary materials.
We first formulate the upper bound on the numbers of prototypes.
Due to the mechanism of creating new prototypes for newly emerging knowledge extracted from the graph, the memory consumption will gradually increase. However, the normalization applied to the prototypes constrains the prototype space and thereby imposes an upper bound on the memory consumption. This can be intuitively understood as follows: only a limited number of points can be scattered on an $n$-dimensional hypersphere if we force the distance between each pair of points to be larger than a threshold. To formalize this, we first borrow the definition of a spherical code from geometry and coding theory.
\begin{definition}[Spherical code]\label{def:spherical code}
A spherical code $S(n,N,t)$ with parameters $(n,N,t)$ is defined as a set of $N$ points on the unit hypersphere in an $n$-dimensional space for which the dot product of the unit vectors from the origin to any two points is less than or equal to $t$.
\end{definition}
In HPNs, the prototypes at different levels are normalized into unit vectors and can be viewed as spherical codes in their own spaces. Taking the A-prototypes as an example, given the dimension $d_a$ and the threshold $t_A$, $\mathbb{P}_A$ is a spherical code $\mathbb{P}_A = S_A(d_a, N_A, 1 - t_A)$, where $N_A$ is the cardinality of $\mathbb{P}_A$. The upper bound on the number of A-prototypes given $d_a$ and $t_A$ is then equal to the maximal cardinality of $S(n,N,t)$ given $n$ and $t$. Since the surface area of the $n$-dimensional unit sphere is limited, there exists a maximal $N$ for given $n$ and $t$, denoted as $ \max_{N} S(d_a, N, 1-t_A)$. However, finding $ \max_{N} S(d_a, N, 1-t_A)$ is a complex sphere-packing problem, and there is not yet a general formulation of the maximal $N$ for an arbitrary $n$. Therefore, given the numbers of the two types of AFEs as $l_a$ and $l_r$, we can formulate the upper bounds on the numbers of different prototypes as follows:
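Although no closed form is known for general $n$, the two-dimensional case illustrates why such a maximum exists: unit vectors with pairwise cosine similarity below a threshold must be angularly separated, so only finitely many fit on the circle. A small numerical sketch (an intuition aid, not part of the formal analysis):

```python
import math

def max_points_on_circle(t_A):
    """2-D analogue of the spherical-code bound: unit vectors whose
    pairwise cosine similarity stays below t_A are separated by at least
    arccos(t_A) radians, so at most floor(2*pi / arccos(t_A)) of them
    fit on the unit circle (a small epsilon guards float roundoff)."""
    return math.floor(2 * math.pi / math.acos(t_A) + 1e-9)

# orthogonal or farther apart: the 4 axis directions fit
assert max_points_on_circle(0.0) == 4
# 60-degree separation: the 6 hexagon directions fit
assert max_points_on_circle(0.5) == 6
```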
\begin{thm}[Upper bounds on numbers of prototypes]\label{thm:upper_bound}
Given the notations defined in HPNs, the upper bound on the number of A-prototypes $n_A$ is given by
\begin{align}
n_A \leqslant (l_a + l_r) \max_{N} S(d_a, N, 1-t_A),
\end{align}
and the upper bounds on the number of N-prototypes and the C-prototypes are:
\begin{align}
n_N \leqslant \max_{N} S(d_n, N, 1-t_N), \\ n_C \leqslant \max_{N} S(d_c, N, 1-t_C),
\end{align}
where $S(n,N,t)$ is the spherical code defined on an $n$-dimensional hypersphere.
\end{thm}
Theorem \ref{thm:upper_bound} provides an upper bound on the memory consumption of HPNs. In our experiments, we show that the number of parameters of most baseline methods is even higher than this upper bound.
Besides memory consumption, a more important concern for a continual learning model is its capability to maintain memory of previously learned tasks. Based on our model design, we formulate this as: whether learning new tasks affects the representations the model generates for old task data. We give explicit definitions of tasks and task distances based on set theory (in the supplementary materials), and then construct a bound indicating what configuration the model needs to ensure this capability.
\begin{thm}[Task distance preserving]\label{thm:task distance preserve}
For $\mathrm{HPNs}$ trained on consecutive tasks $\mathcal{T}^p$ and $\mathcal{T}^{p+1}$, if $l_ad_a+l_rd_r \geqslant (l_r+1)d_v $ and $\mathbf{W}$ has full column rank, then as long as
$ t_{A} < \lambda_{\min}(l_r+1) \mathbf{dist}(\mathbb{V}_p, \mathbb{V}_{p+1}) $, learning on $\mathcal{T}^{p+1}$ will not modify the representations $\mathrm{HPNs}$ generate for data from $\mathcal{T}^p$, \textit{i.e.}, catastrophic forgetting is avoided.
\end{thm}
The proof of Theorem \ref{thm:task distance preserve} requires a number of definitions, lemmas, and corollaries; the details are therefore provided in the supplementary materials. The theoretical results also provide insights into model implementation. In Theorem \ref{thm:task distance preserve}, $\lambda_{\min}$ is the smallest eigenvalue of $\mathbf{W}^T \mathbf{W}$, where $\mathbf{W}$ denotes the matrix constructed from the AFEs (details in the supplementary materials), and $d_v$, $d_a$, and $d_r$ are the dimensions of the data and of the two kinds of atomic embeddings. The bound in this theorem is not tight, since a tight bound would depend on the specific dataset properties, but it tells us that either the number of AFEs or the dimension of the prototypes has to be large enough to ensure that data from two tasks can be well separated in the representation space.
Then, according to Theorem \ref{thm:upper_bound}, the upper bound on the memory consumption depends on $S(d_a,N, 1-t_A)$, $S(d_n,N, 1-t_N)$, and $S(d_c,N, 1-t_C)$. As $S(n,N,t)$ grows fast with $n$, we prefer a larger number of $\mathrm{AFE}$s with smaller prototype dimensions. We also demonstrate this empirically in Section \ref{sec:sensitivity}. Besides, the upper bound proposed in Theorem \ref{thm:upper_bound} is explicitly computed and compared to the experimental results.
\section{Experiments}
In the experiments, we answer the following six questions: (1) Can HPNs outperform state-of-the-art approaches? (2) How does each component of HPNs contribute to its performance? (3) Can HPNs still memorize previous tasks after learning each new task? (4) Are HPNs sensitive to the hyperparameters? (5) Can the theoretical results be empirically verified? (6) Can the learned prototypes be interpreted via visualization?
All experiments are conducted on 5 different datasets. Due to space limitations, only the main results are presented in this paper; further details for some datasets are included in the supplementary materials.
\begin{table}[h]
\scriptsize
\centering
\caption{The detailed statistics of 5 datasets used in our experiments.}
\begin{tabular}{lccccc}\toprule
\textbf{Dataset} & Cora & Citeseer & Actor & OGB-Arxiv & OGB-Products\\\midrule
\# nodes & 2,708 & 3,327 & 7,600 & 169,343 & 2,449,029 \\\midrule
\# edges & 5,429 & 4,732 & 33,544 &1,166,243 &61,859,140 \\ \midrule
\# features & 1,433 & 3,703 & 931 & 128 & 100 \\\midrule
\# classes & 7 & 6 & 4 & 40 & 47 \\\midrule
\# tasks & 3 & 3 & 2 & 20 & 23 \\\bottomrule
\end{tabular}
\label{tab:data_statistics}
\end{table}
\subsection{Datasets}
To assess the effectiveness of the proposed HPNs, we consider 5 public datasets, which include 3 citation networks (Cora \cite{sen2008collective}, Citeseer \cite{sen2008collective}, and OGB-Arxiv \cite{wang2020microsoft,mikolov2013distributed}), 1 actor co-occurrence network (Actor) \cite{pei2020geom}, and 1 product co-purchasing network (OGB-Products \cite{Bhatia16}). The statistics of the 5 datasets are summarized in Table \ref{tab:data_statistics}. Key properties of the datasets are presented here, and more details are included in the supplementary materials.
\subsubsection{Citation networks}
Cora contains 2,708 documents and 5,429 links denoting the citations among the documents. For training, 140 documents are selected, with 20 examples per class. The validation set contains 500 documents and the test set contains 1,000 examples. In our continual learning setting, the first 6 classes are selected and grouped into 3 tasks (2 classes per task) in the original order.
Citeseer contains 3,312 documents and 4,732 links. 20 documents per class are selected for training, 500 documents for validation, and 1,000 documents as the test set. For the continual learning setting, documents from 6 classes are grouped into 3 tasks with 2 classes per task in the original order.
The Cora and Citeseer datasets can be downloaded via \href{https://github.com/tkipf/gcn/tree/master/gcn/data}{Cora$\&$Citeseer}.
The OGB-Arxiv dataset is collected in the Open Graph Benchmark \href{https://ogb.stanford.edu/docs/nodeprop/#ogbn-arxiv}{OGB}.
It is a directed citation network between all Computer Science (CS) arXiv papers indexed by MAG \cite{wang2020microsoft}.
In total, it contains 169,343 nodes and 1,166,243 edges, covering 40 classes. As the dataset is imbalanced and the numbers of examples in different classes differ significantly, directly grouping the classes into 2-class groups as for Cora and Citeseer would cause the tasks to be imbalanced. Therefore, we reorder the classes in descending order of the number of examples contained in each class, and then group the classes according to the new order. In this way, the numbers of examples contained in the different classes of each task are as balanced as possible. Specifically, the class indices of each task are: \{(35, 12), (15, 21), (28, 30), (16, 24), (10, 34), (8, 4), (5, 2), (27, 26), (36, 19), (23, 31), (9, 37), (13, 3), (20, 39), (22, 6), (38, 33), (25, 11), (18, 1), (14, 7), (0, 17), (29, 32)\}, where each tuple denotes a task consisting of two classes.
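One reading of this balancing procedure, sketched with toy class sizes (the grouping of consecutive classes in the sorted order is inferred from the task list above):

```python
def balanced_task_split(class_sizes, per_task=2):
    """Sort class indices by example count (descending) and group
    consecutive classes into fixed-size tasks."""
    order = sorted(range(len(class_sizes)), key=lambda c: -class_sizes[c])
    return [tuple(order[i:i + per_task])
            for i in range(0, len(order), per_task)]

sizes = [10, 400, 30, 200]                 # toy per-class example counts
assert balanced_task_split(sizes) == [(1, 3), (2, 0)]
```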
\subsubsection{Actor co-occurrence network}
The actor co-occurrence network is a subgraph of the film-director-actor-writer network \cite{tang2009social}.
Each node in this dataset corresponds to an actor, and the edges between nodes denote co-occurrence on the same Wikipedia pages.
The whole dataset contains 7,600 nodes and 33,544 edges. Each node is accompanied by a 931-dimensional feature vector. For this dataset, we also constructed 2 tasks with 2 classes per task.
The link to this dataset is \href{https://github.com/graphdml-uiuc-jlu/geom-gcn/tree/master/new_data/film}{Actor}.
The balanced splitting of the classes is \{(0, 1), (2, 3)\}.
\begin{table*}[]
\centering
\captionsetup{font=small}
\caption{Performance comparisons between HPNs and baselines on 5 different datasets.}
\scriptsize
\begin{tabular}{c|c||cc|cc|cc|cc|cc}
\toprule
\multirow{2}{2.5em}{\textbf{C.L.T.}} & \multirow{2}{2em}{\textbf{Base}}& \multicolumn{2}{c|}{Cora} & \multicolumn{2}{c|}{Citeseer} & \multicolumn{2}{c|}{Actor} & \multicolumn{2}{c|}{OGB-Arxiv} & \multicolumn{2}{c}{OGB-Products} \\ \cline{3-12}
& & AM/\% & FM/\% & AM/\% & FM/\% & AM/\% & FM /\% & AM/\% & FM /\% & AM/\% & FM /\% \\ \bottomrule\toprule
\multirow{3}{2.5em}{None} & GCN &63.5$\pm$1.9 &-42.3$\pm$0.4 &64.5$\pm$3.9 & -7.7$\pm$1.6 & 43.6$\pm$3.6 & -9.1$\pm$2.9 & 56.8$\pm$4.3 & -19.8$\pm$3.2 & 45.2$\pm$5.6 & -27.8$\pm$7.1 \\
& GAT &71.9$\pm$3.8 &-33.1$\pm$2.3 &66.8$\pm$0.9 &-19.6$\pm$0.3 & 53.1$\pm$2.7 & -4.3$\pm$1.6 & 54.3$\pm$3.5 & -21.7$\pm$4.6& 44.9$\pm$6.9 & -30.3$\pm$5.2 \\
& GIN &68.3$\pm$2.3 &-35.4$\pm$3.4 &57.7$\pm$2.3 &-36.4$\pm$0.3 & 45.5$\pm$2.3 & -8.9$\pm$2.8 & 53.2$\pm$6.5 & -23.5$\pm$8.1 & 43.1$\pm$7.4 & -31.4$\pm$8.8 \\ \midrule
\multirow{3}{2.5em}{EWC \cite{kirkpatrick2017overcoming}} & GCN &63.1$\pm$1.2 &-42.7$\pm$1.6 &54.4$\pm$4.2 &-30.3$\pm$0.9 & 44.3$\pm$1.1 & -7.1$\pm$1.4 & 72.1$\pm$2.4 & -9.1$\pm$1.9 & 66.7$\pm$0.5 & -8.4$\pm$0.4\\
& GAT &72.2$\pm$1.5 &-32.2$\pm$1.6 &65.7$\pm$2.5 &-19.7$\pm$2.3 & 54.2$\pm$2.5 &-2.5$\pm$1.5 &73.2$\pm$1.1 &-10.8$\pm$2.1 & 67.9$\pm$1.0 & -9.6$\pm$1.3 \\
& GIN &69.6$\pm$2.6 &-28.5$\pm$2.8 &57.9$\pm$3.4 &-36.3$\pm$2.4 & 47.6$\pm$2.1 &-7.2$\pm$1.6 &74.1$\pm$1.7 &-8.3$\pm$2.0 & 67.3$\pm$2.3 & -13.6$\pm$1.5 \\ \midrule
\multirow{3}{2.5em}{LwF \cite{li2017learning}} & GCN &76.1$\pm$1.4 &-21.3$\pm$2.4 &67.0$\pm$0.2 &-8.3$\pm$2.7 & 49.7$\pm$2.5 &-3.6$\pm$1.4 &69.9$\pm$3.9& -12.1$\pm$2.8 & 66.3$\pm$2.5 & -11.8$\pm$3.4 \\
& GAT &70.8$\pm$2.8 &-34.6$\pm$4.1 &66.1$\pm$4.1 &-18.9$\pm$1.5 & 52.8$\pm$2.7 &-6.2$\pm$2.2 & 68.9$\pm$4.4 & -13.6$\pm$3.3 & 65.1$\pm$4.1 & -13.2$\pm$2.9 \\
& GIN &74.1$\pm$2.7 &-23.3$\pm$0.8 &63.1$\pm$1.9 &-16.5$\pm$2.2 &49.7$\pm$2.6 &-4.1$\pm$1.5 &71.4$\pm$4.8 & -15.9$\pm$5.6 & 65.9$\pm$4.0 & -10.7$\pm$3.1 \\ \midrule
\multirow{3}{2.5em}{GEM \cite{lopez2017gradient}} & GCN &75.7$\pm$3.0 &-6.5$\pm$4.4 &41.8$\pm$2.6 &-31.9$\pm$1.4 &52.7$\pm$3.1 & +3.9$\pm$2.9 & 75.4$\pm$1.7 & -13.6$\pm$0.5 & 71.3$\pm$1.7 & -10.5$\pm$0.9 \\
& GAT &69.8$\pm$3.0 &-26.1$\pm$2.6 &71.3$\pm$2.2 &+9.0$\pm$1.5 &54.3$\pm$3.6 &-2.0$\pm$0.9 &76.6$\pm$0.7 & -11.3$\pm$0.4 & 70.4$\pm$0.8 & -10.9$\pm$1.6 \\
& GIN &80.2$\pm$3.3 &-2.0$\pm$4.2 &49.7$\pm$0.5 &-24.5$\pm$0.9 &45.2$\pm$2.8&-11.1$\pm$1.5 &77.3 $\pm$2.1 & -11.2$\pm$1.6 & 76.5$\pm$3.3 & -7.2$\pm$2.5 \\ \midrule
\multirow{3}{2.5em}{MAS \cite{aljundi2018memory}} & GCN &65.5$\pm$1.9 &-21.4$\pm$3.7 &59.5$\pm$3.1 &-0.1$\pm$2.4 &50.7$\pm$2.4&-1.5$\pm$0.8 &69.8$\pm$0.4 & -18.8$\pm$0.9 & 62.0$\pm$1.1 & -17.9$\pm$1.9\\
& GAT &84.7$\pm$0.7 &-5.6$\pm$2.0 &69.1$\pm$1.1 &-4.8$\pm$3.3 &53.7$\pm$2.6&-1.6$\pm$0.8 &70.6$\pm$1.3 &-16.7$\pm$1.6 & 64.4$\pm$2.3 & -14.5$\pm$3.2\\
& GIN &76.7$\pm$2.6 &-4.0$\pm$3.6 &65.2$\pm$3.9 &+0.0$\pm$2.0 &51.6$\pm$2.6&-0.6$\pm$1.3 &65.3 $\pm$2.9 & -17.0$\pm$2.3 & 61.4$\pm$3.8 & -20.9$\pm$2.9\\ \midrule
\multirow{3}{2.5em}{ERGN. \cite{zhou2020continual}} & GCN &63.5$\pm$2.4 &-42.3$\pm$0.7 &54.2$\pm$3.9 &-30.3$\pm$1.9 &52.4$\pm$3.3&+0.6$\pm$1.4 & 63.3$\pm$1.7 & -18.1$\pm$0.9 & 60.7$\pm$2.8 & -26.6$\pm$3.3 \\
& GAT &71.1$\pm$2.5 &-34.3$\pm$1.0 &65.5$\pm$0.3 &-20.4$\pm$3.9 &51.4$\pm$2.2&-7.2$\pm$3.2 & 63.5$\pm$2.4 & -19.5$\pm$1.9 & 61.3$\pm$1.7 & -25.1$\pm$0.8 \\
& GIN &68.3$\pm$0.4 &-35.4$\pm$0.4 &57.7$\pm$3.1 &-36.4$\pm$1.3 &42.7$\pm$3.9&-13.0$\pm$2.1& 69.2$\pm$1.8& -11.8$\pm$1.4 & 61.8$\pm$4.7 & -23.4$\pm$7.9\\ \midrule
\multirow{3}{2.5em}{TWP \cite{liu2020overcoming}} & GCN &68.9$\pm$0.9 &-5.7$\pm$1.5 &60.5$\pm$3.8 &-0.3$\pm$4.4 &50.6$\pm$2.0&-4.8$\pm$1.1 & 75.6$\pm$0.3 & -10.4$\pm$0.5 & 69.9$\pm$0.4 & -9.0$\pm$1.1\\
& GAT &81.3$\pm$3.2 &-14.4$\pm$1.5 &69.8$\pm$1.5 &-8.9$\pm$2.6 &54.0$\pm$1.8&-2.1$\pm$1.9 & 75.8$\pm$0.5 & -5.9$\pm$0.3 & 69.3$\pm$2.3 & -8.9$\pm$1.5 \\
& GIN &73.7$\pm$3.2 &-3.9 $\pm$2.6 &68.9$\pm$0.7 &-2.4$\pm$1.9 &49.9$\pm$1.9&-3.6$\pm$2.0 & 76.6$\pm$1.8 & -11.3$\pm$1.1 & 69.9$\pm$1.4 & -10.3$\pm$2.7\\ \midrule
\multirow{3}{2.5em}{Join.} & GCN &93.7$\pm$ 0.5 & - & 78.9$\pm$0.4 & - &57.0$\pm$0.9 & - &77.2$\pm$0.8 & - &72.9$\pm$1.2 &- \\
& GAT &93.9$\pm$ 0.9 & - & 79.3$\pm$0.8 & - &57.1$\pm$0.9 & - & 81.8$\pm$0.3 & -& 73.7$\pm$2.4 & -\\
& GIN &93.2$\pm$ 1.2 & - & 78.7$\pm$0.9& - &56.9$\pm$0.6 & - &82.3$\pm$1.9 &- &77.9$\pm$2.1 &- \\
\bottomrule\toprule
\multicolumn{2}{c||}{\textbf{HPNs}}&\textbf{93.7$\pm$1.5} &\textbf{+0.6$\pm$1.0} &\textbf{79.0$\pm$0.9} &\textbf{-0.6$\pm$0.7} &\textbf{56.8$\pm$1.4} & \textbf{ -0.9$\pm$0.9} & \textbf{85.8$\pm$ 0.7} & \textbf{+0.6$\pm$0.9}& \textbf{80.1$\pm$0.8} &\textbf{+2.9$\pm$1.0} \\ \bottomrule
\end{tabular}
\label{tab:comparison}
\end{table*}
\subsubsection{Product co-purchasing network}
OGB-Products is collected in the Open Graph Benchmark \href{https://ogb.stanford.edu/docs/nodeprop/#ogbn-products}{OGB} and represents an Amazon product co-purchasing network \href{http://manikvarma.org/downloads/XC/XMLRepository.html}{link}. It contains 2,449,029 nodes and 61,859,140 edges. Nodes represent products sold on Amazon, and edges indicate that the connected products are purchased together. In our experiments, we select 46 classes and omit the last class, which contains only 1 example. Similar to OGB-Arxiv, we reorder the classes, and the class indices of each task are: \{(4, 7), (6, 3), (12, 2), (0, 8), (1, 13), (16, 21), (9, 10), (18, 24), (17, 5), (11, 42), (15, 20), (19, 23), (14, 25), (28, 29), (43, 22), (36, 44), (26, 37), (32, 31), (30, 27), (34, 38), (41, 35), (39, 33), (45, 40)\}.
\subsection{Experimental Setup and Evaluation Metrics}
To perform continual graph representation learning with new categories of nodes continuously emerging, we follow traditional continual learning works and split each dataset into multiple tasks on which the model is trained sequentially. Each new task brings a subgraph with new categories of nodes and the associated edges, \textit{e.g.}, task 1 contains classes 1 and 2, task 2 contains new classes 3 and 4, \textit{etc.} Each model is trained on a sequence of tasks, and its performance is evaluated on all previous tasks. Specifically, we adopt accuracy mean (AM) and forgetting mean (FM) as evaluation metrics. After learning all tasks, AM and FM are computed as the average accuracy and the average accuracy decrease on all previous tasks. Negative FM indicates the existence of forgetting, zero FM denotes no forgetting, and positive FM denotes positive knowledge transfer between tasks.
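As a concrete illustration, AM and FM can be computed from the matrix of per-task accuracies recorded after each training stage. The sketch below uses hypothetical accuracy values and our own naming (`acc[i][j]` is the accuracy on task `j` after training on tasks up to `i`); it is not the authors' implementation.

```python
import numpy as np

def am_fm(acc):
    """Compute accuracy mean (AM) and forgetting mean (FM).

    acc: lower-triangular list of lists; acc[i][j] is the accuracy on
    task j after the model has been trained on tasks 0..i (j <= i).
    """
    T = len(acc)
    final = np.array(acc[T - 1])  # accuracies on all tasks after the last task
    am = float(final.mean())
    # Forgetting on task j: final accuracy minus accuracy right after learning j.
    fm = float(np.mean([acc[T - 1][j] - acc[j][j] for j in range(T - 1)]))
    return am, fm

# Hypothetical 3-task run with some forgetting.
acc = [[0.90],
       [0.80, 0.85],
       [0.70, 0.75, 0.95]]
am, fm = am_fm(acc)  # AM = 0.8, FM = -0.15 (negative FM: forgetting occurred)
```

A negative FM here matches the convention in the text: the model lost accuracy on earlier tasks after training on later ones.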
For HPNs, we set $d_a = d_n=d_c=16$, $l_a=l_r=22$, and $h=2$. The thresholds $t_A$, $t_N$, and $t_C$ are selected by cross validation on $\{0.01, 0.05,0.1,0.15,0.2,0.25,0.3,0.35, 0.4\}$. Experiments on the important hyperparameters are provided in Section~\ref{sec:sensitivity}. \textcolor{black}{All experiments are repeated 5 times on an Nvidia Titan Xp GPU. The average performance and standard deviations are reported. More details and code are provided in the supplementary materials.}
\begin{figure*}
\centering
\includegraphics[height = 4.0cm]{AM_FM_ta_AM_FM_mat.jpg}
\captionsetup{font=small}
\caption{(a) and (b) show the AM and FM of HPNs with different numbers of $\mathrm{AFE}$s and prototype dimensions on OGB-Arxiv. (c) and (d) show how the AM and FM change when $t_A$ varies on Cora.}
\label{fig:AM_mat}
\end{figure*}
\begin{table}[]
\scriptsize
\centering
\begin{tabular}{c|ccc||ccc}\toprule
& \multicolumn{3}{c||}{\textbf{GAT}}&\multicolumn{3}{c}{\textbf{HPNs}} \\\midrule
\backslashbox{Te.}{Tr.}&$\mathcal{T}_1$ & $\mathcal{T}_{1,2}$&$\mathcal{T}_{1,2,3}$&$\mathcal{T}_1$&$\mathcal{T}_{1,2}$&$\mathcal{T}_{1,2,3}$ \\\midrule
$\mathcal{T}_1$ &94.12\% &47.96\% &71.49\% &95.02\% &95.93\% &94.57\% \\\midrule
$\mathcal{T}_2$ & &93.30\% &49.68\% &&92.66\% &92.22\% \\\midrule
$\mathcal{T}_3$ & & &94.44\% &&&96.83\% \\\bottomrule
\end{tabular}
\caption{Accuracy (\%) changes of GAT and HPNs.}
\label{tab:individual_tasks_GAT_DHPN}
\end{table}
\begin{table}[]
\scriptsize
\centering
\begin{tabular}{c|ccc||ccc}\toprule
& \multicolumn{3}{c||}{\textbf{GCN+GEM}}&\multicolumn{3}{c}{\textbf{GAT+MAS}} \\\midrule
\backslashbox{Te.}{Tr.}&$\mathcal{T}_1$ & $\mathcal{T}_{1,2}$&$\mathcal{T}_{1,2,3}$&$\mathcal{T}_1$&$\mathcal{T}_{1,2}$&$\mathcal{T}_{1,2,3}$ \\\midrule
$\mathcal{T}_1$ &93.67\% &87.33\% &76.92\% &94.12\% &90.50\% &90.05\% \\\midrule
$\mathcal{T}_2$ & &65.66\% &69.33\% &&77.97\% &70.84\% \\\midrule
$\mathcal{T}_3$ & & &80.95\% &&&93.25\% \\\bottomrule
\end{tabular}
\caption{Accuracy (\%) changes of GCN+GEM and GAT+MAS.}
\label{tab:individual_tasks_GCNGEM_GATMAS}
\end{table}
\subsection{Comparisons with Baseline Methods}\label{sec:comparison}
We compare HPNs with various baseline methods. Experience Replay based GNN~(ERGNN) \cite{zhou2021overcoming} and Topology-aware Weight Preserving~(TWP) \cite{liu2020overcoming} are developed for continual graph representation learning. The other approaches, including Elastic Weight Consolidation (EWC) \cite{kirkpatrick2017overcoming}, Learning without Forgetting (LwF) \cite{li2017learning}, Gradient Episodic Memory (GEM) \cite{lopez2017gradient}, and Memory Aware Synapses (MAS) \cite{aljundi2018memory}, are popular continual learning methods for Euclidean data. All baselines are implemented on three popular backbone models, \textit{i.e.}, Graph Convolutional Networks (GCN) \cite{kipf2016semi}, Graph Attention Networks (GAT) \cite{velivckovic2017graph}, and Graph Isomorphism Network (GIN) \cite{xu2018powerful}.
Note that Joint training (Join.) in Table~\ref{tab:comparison} does not represent continual learning. It allows a model to access data of all tasks at any time and thus is often used as an upper bound for continual learning \cite{van2019three}. Therefore, FM is not applicable to the joint training approach.
In Table \ref{tab:comparison}, we observe that regularization based approaches, \textit{e.g.}, EWC and TWP, generally obtain lower forgetting, but their accuracy (AM) is limited by the constraints. Moreover, the forgetting problem of regularization based methods becomes increasingly severe when the number of tasks is relatively large, as shown in Section~\ref{sec:learning_dynamics}.
Memory replay based methods such as GEM achieve better performance without using any constraint. However, the memory consumption is higher (Section~\ref{sec:memory_consumption}).
HPNs significantly outperform all baselines without inheriting their limitations. Compared to regularization based methods, HPNs do not impose constraints that limit the model's expressiveness, and therefore perform much better. Compared to memory replay based methods, HPNs not only perform better but are also memory efficient, as shown in Section \ref{sec:memory_consumption}.
Joint training (Join.) achieves performance comparable to HPNs on small datasets but is significantly worse on the large OGB datasets. This is because joint training is a multi-task setting in which inter-task interference may cause negative transfer; this is not obvious on small datasets with only a few tasks but becomes prominent on large datasets with tens of tasks. In HPNs, different tasks can choose different combinations of the parameters, so task interference is dramatically alleviated.
\begin{table}[]
\scriptsize
\centering
\captionsetup{font=small}
\caption{Ablation study on prototypes of different levels over Cora.}
\begin{tabular}{c|c|c|c|c|c}
\toprule
Conf. &A-p. & N-p. & C-p. & AM\% & FM\% \\ \midrule
1 &\checkmark & & & 89.2$\pm$1.3 & -0.1$\pm$0.5 \\ \midrule
2 &\checkmark &\checkmark & & 91.7$\pm$1.1 & -0.2$\pm$0.8 \\ \midrule
3 &\checkmark &\checkmark &\checkmark & 93.7$\pm$1.5 & +0.6$\pm$1.0 \\ \bottomrule
\end{tabular}
\label{tab:ablation_proto}
\end{table}
\begin{table}[]
\scriptsize
\centering
\captionsetup{font=small}
\caption{Ablation study on different loss terms over Cora.}
\begin{tabular}{c|c|c|c|c|c}
\toprule
Conf. &$\mathcal{L}_{cls}$ & $\mathcal{L}_{div}$ & $\mathcal{L}_{dis}$ & AM\% & FM\% \\ \midrule
1 &\checkmark & & &92.4$\pm$1.3 &+0.8$\pm$0.7 \\ \midrule
2 &\checkmark & \checkmark & &92.9$\pm$1.1 &+0.3$\pm$1.0 \\ \midrule
3 &\checkmark & &\checkmark &92.8$\pm$0.9 &+0.0$\pm$1.2 \\ \midrule
4 &\checkmark &\checkmark &\checkmark &93.7$\pm$1.5 &+0.6$\pm$1.0 \\ \bottomrule
\end{tabular}
\label{tab:ablation_loss}
\label{tab:ablation}
\end{table}
Among all the models in Table \ref{tab:comparison}, we observe several kinds of forgetting behavior. First, some baselines exhibit low AM and severe forgetting (negative FM with large $|\textrm{FM}|$), like the pure GNNs in the first three rows of Table \ref{tab:comparison}. These models may perform well on individual tasks, but their AM is brought down by catastrophic forgetting. To explain this in detail, we expand the results of GAT without any continual learning techniques in Table \ref{tab:individual_tasks_GAT_DHPN}, in which Tr. denotes the tasks on which the model has been trained and Te. denotes the task on which the model is tested.
The first row of Table \ref{tab:individual_tasks_GAT_DHPN} shows the model's performance on task 1 after it has been trained on task 1, on the first two tasks, and on the first three tasks.
We see that GAT performs well on each task it has just learned. But after being trained on more tasks, its performance on previous tasks drops dramatically, making the AM relatively low.
In contrast, Table \ref{tab:individual_tasks_GAT_DHPN} also shows the performance change of HPNs on previous tasks: HPNs maintain the performance on previous tasks well throughout training on all tasks.
Second, some models show no severe forgetting but still low AM. Typically, these are models that preserve the performance on previous tasks with certain constraints. Such constraints can indeed alleviate forgetting, but they also limit the model's flexibility in learning new tasks, so the performance on new tasks degrades. For example, comparing Table \ref{tab:individual_tasks_GAT_DHPN} with Table \ref{tab:individual_tasks_GCNGEM_GATMAS}, GAT achieves accuracy higher than 93$\%$ on individual tasks, but for GAT+MAS, although the forgetting on task 1 is alleviated, the performance on task 2 drops significantly. GCN+GEM suffers from a similar problem.
\begin{figure*}[t]
\centering
\begin{minipage}{0.33\textwidth}
\centering
\includegraphics[width=0.9\textwidth]{ARS.jpg}
\end{minipage}\hfill
\begin{minipage}{0.33\textwidth}
\centering
\includegraphics[width=0.9\textwidth]{n_proto_dyna_small.png}
\end{minipage}
\begin{minipage}{0.33\textwidth}
\centering
\includegraphics[width=0.9\textwidth]{upper_bound_compare.jpg}
\end{minipage}
\captionsetup{font=small}
\caption{Left: dynamics of ARS for continual learning tasks on OGB-Arxiv. Middle: impact of $t_A$ on the number of prototypes in HPNs over Cora. Right: dynamics of memory consumption of HPNs on OGB-Products.}
\label{fig:ARS_para_proto}
\end{figure*}
\begin{table*}[!t]
\scriptsize
\centering
\captionsetup{font=small}
\caption{ Final parameter amount for models trained on OGB-Products}
\begin{tabular}{c|c|c|c|c|c|c|c|c||c}
\toprule
& None & EWC & LwF & GEM & MAS & ERGNN & TWP & Joint & \textbf{HPNs} \\ \midrule
GCN & 2,336 & 46,720 & 4,672 &2,202,336 &2,336 &6,738 &9,344 & 2,336 & \\
GAT & 20,032 & 400,640 & 40,064 &2,220,032 &20,032 &24,432 &80,128 & 20,032 &4,908\\
GIN & 2,352 & 47,040 & 4,704 &2,202,352 & 2,352 &6,752 &9,408 & 2,352 & \\ \bottomrule
\end{tabular}
\label{tab:memory_consumption}
\end{table*}
\subsection{Ablation Study}
We conduct ablation studies on different levels of prototypes and different combinations of three loss terms. In Table \ref{tab:ablation_proto}, we show the performance of HPNs when A-, N-, and C-Prototypes are gradually added (Cora dataset).
We notice that both AM and FM of HPNs increase when higher level prototypes are considered. This suggests that high level prototypes can enhance the model's performance and robustness against forgetting.
The effect of different combinations of loss terms is shown in Table \ref{tab:ablation_loss}. The first three rows show that adding $\mathcal{L}_{div}$ or $\mathcal{L}_{dis}$ to $\mathcal{L}_{cls}$ may slightly improve the performance. By jointly considering all three terms, the performance (AM) can be further improved. This is because $\mathcal{L}_{div}$ pushes different $\mathrm{AFE}$s away from each other and $\mathcal{L}_{dis}$ pulls the prototypes of each $\mathrm{AFE}$ closer to its output. Jointly considering $\mathcal{L}_{div}$ and $\mathcal{L}_{dis}$ with $\mathcal{L}_{cls}$ makes the prototype space better separated, as shown in Section~\ref{sec:visualization}.
\subsection{Learning Dynamics}\label{sec:learning_dynamics}
For continual learning, it is important to retain performance on previous tasks after learning each new task. To measure this, instead of directly reporting the average accuracy on previous tasks, which may mix up the accuracy change caused by forgetting with that caused by task differences, we develop a new metric, \textit{i.e.}, the average retaining score (ARS), to address this problem. Specifically, after learning a task $\mathcal{T}^i$, the retaining ratio of a previous task $\mathcal{T}^{i-m}$ is the ratio between the model's current accuracy on $\mathcal{T}^{i-m}$ and its accuracy on $\mathcal{T}^{i-m}$ right after $\mathcal{T}^{i-m}$ was learned. The ARS is then the average retaining ratio over all previous tasks after learning a new task.
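The retaining-ratio computation above can be sketched as follows; the accuracy matrix convention and the function name are ours, and the numbers are hypothetical.

```python
import numpy as np

def ars(acc, i):
    """Average retaining score (ARS) after learning task i (0-indexed).

    acc[i][j]: accuracy on task j after training on tasks 0..i.
    The retaining ratio of a previous task j is acc[i][j] / acc[j][j],
    i.e., current accuracy over the accuracy right after learning task j.
    """
    return float(np.mean([acc[i][j] / acc[j][j] for j in range(i)]))

# Hypothetical run: after task 2, earlier tasks retain ~83% of their accuracy.
acc = [[0.90],
       [0.80, 0.85],
       [0.70, 0.75, 0.95]]
score = ars(acc, 2)
```

Unlike raw average accuracy, ARS normalizes by each task's own post-training accuracy, so intrinsically harder tasks do not distort the forgetting measurement.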
\begin{figure*}[]
\centering
\begin{minipage}{0.245\textwidth}
\centering
\includegraphics[width=1.\textwidth]{loss_coef_afixed_AM.jpg}
\end{minipage
\begin{minipage}{0.245\textwidth}
\centering
\includegraphics[width=1.\textwidth]{loss_coef_afixed_FM.jpg}
\end{minipage}
\begin{minipage}{0.245\textwidth}
\centering
\includegraphics[width=1.\textwidth]{loss_coef_bfixed_AM.jpg}
\end{minipage}
\begin{minipage}{0.245\textwidth}
\centering
\includegraphics[width=1.\textwidth]{loss_coef_bfixed_FM.jpg}
\end{minipage}
\captionsetup{font=small}
\caption{The left two figures show the results of fixing $\alpha=1.0$ and tuning $\beta$ from 0.0001 to 1000. The right two figures show the results of fixing $\beta=1.0$ and tuning $\alpha$. Logarithmic horizontal axis is adopted, and the results are obtained on both Cora and Citeseer datasets.}
\label{fig:loss_coef}
\end{figure*}
Figure \ref{fig:ARS_para_proto}(left) shows how the ARS of HPNs and two baselines changes. GAT represents models without continual learning techniques, and TWP+GAT is the best baseline in terms of forgetting. GAT forgets quickly, while TWP significantly alleviates the forgetting problem for GAT. But as more tasks come in, the forgetting of TWP+GAT increases. Since different tasks require different parameters, TWP+GAT (regularization based) seeks a trade-off between old and new tasks; with more new tasks, it gradually adapts to the new tasks and forgets the old ones. On the contrary, HPNs maintain the ARS very well. This is because HPNs learn prototypes that denote common basic features, and learning new tasks does not disturb the parameters for old tasks. New tasks can be handled with new combinations of the existing basic prototypes, and, if necessary, new prototypes can be established for more expressiveness.
\subsection{Parameter Sensitivity}\label{sec:sensitivity}
As discussed in Section \ref{sec:theoretical_analysis}, the number of $\mathrm{AFE}$s and the prototype dimensions are key factors determining the continual learning capability and memory consumption. Here, we conduct experiments with different numbers of $\mathrm{AFE}$s and prototype dimensions to justify the theoretical results. For simplicity, we keep the dimensions of the different prototypes equal and the numbers of the two types of $\mathrm{AFE}$s equal.
As shown in Figure \ref{fig:AM_mat}(a) and (b), larger prototype dimensions and more $\mathrm{AFE}$s yield better AM and FM, which is consistent with Theorem \ref{thm:task distance preserve}. Besides, AM is mostly determined by the number of $\mathrm{AFE}$s, since HPNs compose prototypes from different $\mathrm{AFE}$s to represent each target node, and the number of possible combinations determines the expressiveness.
Considering the above results and the bound (Theorem \ref{thm:upper_bound}) on the number of prototypes, using a large number of $\mathrm{AFE}$s with small dimensions ensures both high performance and low memory usage, as verified in Section \ref{sec:memory_consumption}.
We also evaluate the effectiveness of HPNs when the prototype thresholds vary from 0.01 to 0.4. In Figure \ref{fig:AM_mat}(c) and (d), we observe that the performance (AM and FM) of HPNs is generally stable when $t_A$ varies, and slightly better when $t_A$ is between 0.2 and 0.3. This is because when $t_A$ is too small or too large, we obtain too many or too few prototypes (consistent with Theorem \ref{thm:upper_bound}), as shown in Figure \ref{fig:ARS_para_proto}(middle), which may cause overfitting or underfitting.
\iffalse
Finally, we explore the influence of the coefficients balancing the auxiliary losses ($\alpha$ and $\beta$ in Equation \ref{eq:loss}) on the performance. As shown in Table \ref{tab:coef_loss_cora} and \ref{tab:coef_loss_citeseer}, we iteratively fix $\alpha$ or $\beta$, and tune the other one to check the parameter sensitivity.
\begin{table*}[]
\centering
\scriptsize
\begin{tabular}{c|ccccccccc|ccccccccc}
\toprule
$\alpha$&\multicolumn{9}{c|}{1.0} & 0.0001 & 0.001 & 0.01 & 0.1 & 1.0 & 10 & 100 & 1000 & 10000 \\ \midrule
$\beta$ & 0.0001 & 0.001 & 0.01 & 0.1 & 1.0 & 10 & 100 & 1000 & 10000 & \multicolumn{9}{c}{1.0}\\\midrule
AM/\% & 92.4 & 92.6&92.5& 93.0 &93.4 & 93.3& 93.0& 93.4&83.3 & 92.3 & 92.6 & 92.3 & 93.1 & 93.8 & 93.5 & 93.1 & 93.2 & 81.5\\
FM/\% &+1.4&+1.5&+1.6&+1.4 &+1.4 &+0.9& +1.4 &+1.4 &+6.9 & +1.1 & +1.5 & +1.1 & +1.2 & +1.0 & +1.3 & +1.1 & +1.4 & +6.7\\ \bottomrule
\end{tabular}
\caption{Influence of balance coefficients $\alpha$ and $\beta$ on Cora dataset}
\label{tab:coef_loss_cora}
\end{table*}
\begin{table*}[]
\centering
\scriptsize
\begin{tabular}{c|ccccccccc|ccccccccc}
\toprule
$\alpha$&\multicolumn{9}{c|}{1.0} & 0.0001 & 0.001 & 0.01 & 0.1 & 1.0 & 10 & 100 & 1000 & 10000 \\ \midrule
$\beta$ & 0.0001 & 0.001 & 0.01 & 0.1 & 1.0 & 10 & 100 & 1000 & 10000 & \multicolumn{9}{c}{1.0}\\\midrule
AM/\% & 79.0&80.0&79.8& 79.4 &80.6 &79.1 & 79.2& 80.0&75.4& 79.0&79.7&80.0& 80.0 &80.2 &79.9 & 79.8& 80.4&75.5\\
FM/\% &+1.1&+1.1&+0.6&+1.2 &+0.7 &+1.1& +0.3& +0.8&+1.7 & +1.1&+0.5&+0.9&+0.7 &+0.8 &+0.6& +0.6& +1.0&+1.9\\ \bottomrule
\end{tabular}
\caption{Influence of balance coefficients $\alpha$ and $\beta$ on Citeseer dataset}
\label{tab:coef_loss_citeseer}
\end{table*}
From Table \ref{tab:coef_loss_cora} and \ref{tab:coef_loss_citeseer}, we can observe that the performance is relatively robust when $\alpha$ and $\beta$ vary from 0.0001 to 1000.
\fi
\begin{figure*}[h]
\centering
\begin{minipage}{0.31\textwidth}
\centering
\includegraphics[width=1.\textwidth]{hierarchical_proto_visual01.png}
\end{minipage}
\begin{minipage}{0.31\textwidth}
\centering
\includegraphics[width=1.\textwidth]{hierarchical_proto_visual0123.png}
\end{minipage}
\begin{minipage}{0.31\textwidth}
\centering
\includegraphics[width=1.\textwidth]{hierarchical_proto_visual012345.png}
\end{minipage}
\captionsetup{font=small}
\caption{Visualization of hierarchical prototype representations of nodes in the test set of Cora.}
\label{fig:tsne}
\end{figure*}
Finally, we explore the influence of the hyperparameters of the auxiliary losses ($\alpha$ and $\beta$ in Equation \ref{eq:loss}) on the performance. As shown in Figure \ref{fig:loss_coef}, we fix $\alpha=1.0$ or $\beta=1.0$ in turn, and tune the other one to check the parameter sensitivity.
From Figure \ref{fig:loss_coef}, we can observe that the performance is relatively robust when $\alpha$ and $\beta$ vary from 0.0001 to 1000.
Monitoring the training process suggests that $\mathcal{L}_{div}$ and $\mathcal{L}_{dis}$ decrease faster than $\mathcal{L}_{cls}$, so the total loss is not very sensitive to their coefficients. Based on this observation, we simply set $\alpha=\beta=1$ in our implementation.
In this subsection, we systematically studied the dependency of the model on its hyperparameters. According to the results, HPNs are robust to most of them. The number of $\mathrm{AFE}$s causes significant differences, but its relation to the performance is simple: for a given dataset, the performance first increases with the number of $\mathrm{AFE}$s and then becomes stable once the number of $\mathrm{AFE}$s is large enough.
\subsection{Memory Consumption}\label{sec:memory_consumption}
We compare the memory consumption of HPNs, together with its explicit theoretical upper bound, against the baselines on OGB-Products (the largest dataset). We also show the actual memory consumption of HPNs during continual learning.
As shown in Table \ref{tab:memory_consumption}, even on this dataset with millions of nodes and 23 tasks, HPNs accommodate all tasks with a small number of parameters. Besides, the dynamic change of the parameter amount is shown in Figure \ref{fig:ARS_para_proto}(right). The red dashed line denotes the theoretical upper bound (6,163); the computation details are included in the supplemental materials.
In Figure \ref{fig:ARS_para_proto}(right), we notice that the actual memory usage of HPNs is much lower than the upper bound. Moreover, even the upper bound is among the lowest memory consumptions compared to the baselines. The model used here is the same as the one in Section \ref{sec:comparison}.
\subsection{Visualization}\label{sec:visualization}
To concretely show the prototype representations generated by HPNs, we apply t-SNE \cite{van2008visualizing} to visualize the node representations of the Cora dataset (test set) after learning each task, as shown in Figure \ref{fig:tsne}. The three figures display the representations generated after each of the three tasks arrives sequentially. Each task contains two classes denoted with different colors, \textit{i.e.} (red, blue), (green, salmon), and (purple, orange). In Figure \ref{fig:tsne}, with new tasks coming in sequentially from left to right, the representations are consistently well separated, which is beneficial for downstream tasks.
\section{Conclusion}
In this paper, we proposed Hierarchical Prototype Networks (HPNs), to continuously extract different levels of abstract knowledge (in the form of prototypes) from streams of tasks on graph representation learning. The continual learning performance of HPNs is both theoretically analyzed and experimentally justified from different perspectives on multiple public datasets with up to millions of nodes and tens of millions of edges. Besides, the memory consumption of HPNs is also theoretically analyzed and experimentally verified.
HPNs provide a general framework for continual graph representation learning and can be further extended to tackle more challenging tasks. In the future, we will extend HPNs to heterogeneous graphs by adapting the AFEs and prototypes to capture multiple types of nodes and edges. Moreover, we will also equip the prototypes with connections to denote the relationship among the abstract knowledge and investigate more challenging graph reasoning tasks.
\section{\textcolor{black}{Proof Details}}\label{sec:theoretical_analysis}
\subsection{Overview}
The main theoretical results are briefly introduced in the paper. In this section, we provide detailed explanations and proofs for the theoretical results.
\subsection{Memory consumption upper bound}
The proof and detailed analysis of the memory consumption upper bound have already been given in the paper. In this subsection, we compute a specific case of the general theoretical result.
Although a closed-form expression of the upper bound is not available in general, we can compute $\max_{N} S(d_a, N, 1-t_A)$ explicitly for certain dimensions and verify it with experiments. For example, when $d_a=2$, the problem becomes distributing points on a circle with unit radius. Then $\max_{N} S(d_a, N, 1-t_A)$ is obtained by evenly distributing the points on the circle with an angular interval of $\arccos(1-t_A)$. The explicit value of $\max_{N} S(d_a, N, 1-t_A)$ can then be formulated as:
\begin{align}
\max_{N} S(d_a, N, 1-t_A) = \frac{2\pi}{\arccos(1-t_A)},
\end{align}
then we have:
\begin{align}
n_A \leqslant (l_a + l_r) \frac{2\pi}{\arccos(1-t_A)}.
\end{align}
The upper bounds on the numbers of N- and C-prototypes can be formulated similarly. The above results are used in Section 4.7 in the paper.
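As a numerical sanity check of the $d_a=2$ case, the bound on the number of A-prototypes can be evaluated directly; this is a sketch with our own function name, using the quantities $l_a$, $l_r$, and $t_A$ defined in the paper.

```python
import math

def a_prototype_bound(l_a, l_r, t_A):
    """Upper bound on n_A when d_a = 2: prototypes lie on the unit
    circle and must be separated by an angle of arccos(1 - t_A),
    so each AFE can hold at most 2*pi / arccos(1 - t_A) of them."""
    return (l_a + l_r) * 2.0 * math.pi / math.acos(1.0 - t_A)

# Example: with t_A = 1.0 the minimum separating angle is pi/2, so each
# AFE holds at most 4 prototypes; with l_a = l_r = 1 the bound is 8.
bound = a_prototype_bound(1, 1, 1.0)
```

Smaller $t_A$ shrinks the separating angle and thus inflates the bound, matching the overfitting discussion in Section~\ref{sec:sensitivity}.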
\subsection{Task distance preserving}
In this subsection, we give proofs and a detailed analysis of Theorem 2 in the paper.
Below, Lemma \ref{quadrtic_bound}, Lemma \ref{real_symmetric}, Lemma \ref{rank}, and Corollary \ref{cor:1} are standard results from geometry and linear algebra; the other parts are our own contributions.
In continual learning, the key challenge is to overcome catastrophic forgetting, i.e., the performance degradation on previous tasks after training the model on new tasks. Based on our model design, we formulate this as the question of whether learning new tasks affects the representations the model generates for old task data. First, we give definitions of the tasks and task distances:
\begin{definition}[Task set]\label{def:task_set}
The $p$-th task in a sequence is denoted as $\mathcal{T}^p$ and contains a subgraph $\mathcal{G}_p$ consisting of nodes belonging to some new categories. We denote the associated node set and adjacency matrix as $\mathbb{V}_p$ and $\mathrm{A}_p$.
Each $v_p^i \in \mathbb{V}_p$ has a feature vector $\mathbf{x}(v_p^i)$ and a label $y(v_p^i)$.
\end{definition}
The root cause of catastrophic forgetting is that different tasks in a sequence are drawn from heterogeneous distributions, so a model trained sequentially on different tasks is unable to maintain satisfactory performance on previous tasks. Given the definition of the tasks (Definition \ref{def:task_set}), we therefore give a formal definition to quantify the difference between two tasks.
\begin{definition}[Task distance]\label{def:task_distance}
We define the distance between two tasks as the set distance between the node sets of these two tasks, \textit{i.e.},
$\mathbf{dist}(\mathbb{V}_p, \mathbb{V}_q) = \inf\limits_{v_p^i \in \mathbb{V}_p,\, v_q^j \in \mathbb{V}_q}\norm{\mathbf{x}(v_p^i) - \mathbf{x}(v_q^j)}$.
\end{definition}
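Under this definition, the task distance reduces to the minimum pairwise Euclidean distance between the two tasks' node features; a minimal sketch (function name ours, toy data hypothetical):

```python
import numpy as np

def task_distance(X_p, X_q):
    """Set distance between two tasks: the infimum (here a minimum,
    since node sets are finite) of ||x(v_p) - x(v_q)|| over all pairs.

    X_p: (n_p, d_v) node-feature matrix of task p; X_q: (n_q, d_v) of task q.
    """
    diffs = X_p[:, None, :] - X_q[None, :, :]        # (n_p, n_q, d_v)
    return float(np.linalg.norm(diffs, axis=-1).min())

# Toy example: the closest pair is (1, 0) vs (3, 0), at distance 2.
X_p = np.array([[0.0, 0.0], [1.0, 0.0]])
X_q = np.array([[3.0, 0.0]])
d = task_distance(X_p, X_q)
```

The broadcast over all pairs makes the infimum explicit; for large tasks a nearest-neighbor structure would be used instead.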
\begin{lemma}
The distance between any two tasks is non-negative, i.e. $\forall p,q\in \{1,...,\mathrm{M}^T\}$, $\mathbf{dist}(\mathbb{V}_p, \mathbb{V}_q) \geqslant 0$, where $\mathrm{M}^T$ is the number of tasks in the sequence.
\end{lemma}
Real-world data can be complex and may even contain noise that is impossible for any model to learn, which requires extra consideration when justifying the effectiveness of the model. Formally, we give the definition of contradictory data.
\begin{definition}[Contradictory data]
$\forall v_p^i \in \mathbb{V}_p, p = 1,...,\mathrm{M}^T$, if $\exists v_q^j \in \mathbb{V}_q, q = 1,...,\mathrm{M}^T$, $\mathrm{s.t.}\ \forall l\in \mathbb{N^*}, \forall u \in \mathcal{N}^l(v_p^i) $ and $\forall v \in \mathcal{N}^l(v_q^j)$, $\mathbf{x}(u)=\mathbf{x}(v)$ but $y(v_p^i) \neq y(v_q^j)$, then we say $(v_p^i, y(v_p^i))$ and $(v_q^j, y(v_q^j))$ are contradictory data, as it is contradictory for any model to give different predictions for one node based on the same node features and graph structures. ($\mathbb{N}^*$ denotes the set of non-negative integers)
\end{definition}
\begin{rmrk}
Contradictory data is ignored or simply regarded as outliers in previous works; in this work, we explicitly analyze its effect for the comprehensiveness of our theory.
Contradictory data can arise in different situations.
If $v_p^i$ and $v_q^j$ are from different tasks, then $y(v_p^i) \neq y(v_q^j)$ is plausible, because they may describe the same thing from different aspects. For example, an article in the citation network may be categorized both as 'physics related' and as 'computer science related'. In this situation, it is easy to add a task indicator to the node features; the features of $v_p^i$ and $v_q^j$ are then no longer equal, and the pair is no longer contradictory.
Within one task, however, contradictory data are most likely wrongly labeled, \textit{e.g.}, it does not make sense for an article to be both 'related to physics' and 'not related to physics'.
\end{rmrk}
Besides the distance between tasks, the distance between the embeddings obtained by the AFEs will also be a crucial concept in the proof.
\begin{definition}[Embedding distance]
Each input node $v_p^i$ is given a set of atomic embeddings $\mathbb{E}_A(v_p^i) = \mathbb{E}^{\mathrm{node}}_A(v_p^i) \cup \mathbb{E}^{\mathrm{struct}}_A(v_p^i)$, where $\mathbb{E}^{\mathrm{node}}_A(v_p^i) = \{\mathbf{a}_i^j | j\in\{1,...,l_a\}\}_p $ contains the atomic node embeddings of $v_p^i$ and $\mathbb{E}^{\mathrm{struct}}_A(v_p^i) = \{\mathbf{r}_i^k | k \in \{ 1,...,l_r\} \}_p$ contains the atomic structure embeddings, with $\mathbf{a}_i^j\in \mathbb{R}^{d_a}$ and $\mathbf{r}_i^k \in \mathbb{R}^{d_r}$. To define the distance between the representations of two nodes, we concatenate the atomic embeddings of each node into a single vector in a higher dimensional space, i.e. each node $v_p^i$ corresponds to a latent vector $\mathbf{z}_p^i=[\mathbf{a}_i^1;...;\mathbf{a}_i^{l_a};\mathbf{r}_i^1;...;\mathbf{r}_i^{l_r}]\in\mathbb{R}^{l_a \times d_a + l_r \times d_r}$. We then define the distance between the representations of two nodes $v_p^i$ and $v_q^j$ as the Euclidean distance between their corresponding latent vectors $\mathbf{z}_p^i$ and $\mathbf{z}_q^j$, i.e. $\mathrm{dist}(\mathbf{z}_p^i,\mathbf{z}_q^j) = \norm{\mathbf{z}_p^i-\mathbf{z}_q^j}_2$.
\end{definition}
We next recall some standard results from linear algebra.
\begin{lemma}[Bounds for real quadratic forms]\label{quadrtic_bound}
Given a real symmetric matrix $\mathbf{A}$, and an arbitrary real vector variable $\mathbf{x}$, we have
\begin{align}
\lambda_{\min} \leqslant \frac{\mathbf{x}^T \mathbf{A} \mathbf{x}}{\mathbf{x}^T \mathbf{x}} \leqslant \lambda_{\max},
\end{align}
where $\lambda_{\min}$ and $\lambda_{\max}$ are the minimum and maximum eigenvalues of matrix $\mathbf{A}$.
\end{lemma}
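Lemma \ref{quadrtic_bound} can be checked numerically; the sketch below (all names ours) verifies the Rayleigh-quotient bounds for a random symmetric matrix of the form $\mathbf{B}^T\mathbf{B}$, the shape that appears later in the proof.

```python
import numpy as np

def rayleigh_within_bounds(A, x, tol=1e-9):
    """Check lambda_min <= x^T A x / x^T x <= lambda_max for symmetric A."""
    w = np.linalg.eigvalsh(A)            # eigenvalues in ascending order
    q = float(x @ A @ x) / float(x @ x)  # Rayleigh quotient
    return w[0] - tol <= q <= w[-1] + tol

rng = np.random.default_rng(0)
B = rng.standard_normal((5, 3))
A = B.T @ B                              # real symmetric (in fact PSD)
x = rng.standard_normal(3)
ok = rayleigh_within_bounds(A, x)        # holds for any nonzero x
```

The bound is tight: choosing $x$ as the eigenvector of the smallest (largest) eigenvalue attains the lower (upper) end.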
\begin{lemma}[Real symmetric matrix]\label{real_symmetric}
For a matrix $\mathbf{A} \in \mathbb{R}^{m\times n}$, $\mathbf{A}^T\mathbf{A} \in \mathbb{R}^{n\times n}$ is a real symmetric matrix, $rank(\mathbf{A}^T\mathbf{A}) = rank(\mathbf{A})$, and the non-zero eigenvalues of $\mathbf{A}^T\mathbf{A}$ are squares of the non-zero singular values of $\mathbf{A}$.
\end{lemma}
\begin{lemma}[Rank and number of non-zero singular values]\label{rank}
For a matrix $\mathbf{A} \in \mathbb{R}^{m\times n}$, the number of non-zero singular values equals the rank of $\mathbf{A}$, i.e. $rank(\mathbf{A})$.
\end{lemma}
\begin{cor}\label{cor:1}
For a matrix $\mathbf{A} \in \mathbb{R}^{m\times n}$, assume without loss of generality that $n \leqslant m$. If $\mathbf{A}$ is column full rank, i.e. $rank(\mathbf{A})=n$, then $\mathbf{A}$ has $n$ non-zero singular values. Besides, $rank(\mathbf{A}^T\mathbf{A}) = n$, and $\mathbf{A}^T\mathbf{A}$ has $n$ positive eigenvalues.
\end{cor}
Given the explanations above, we then derive the bound for the change of the distance among data, which will be further used for analyzing the separation of data from different tasks.
\begin{lemma}[Embedding distance bound]\label{dist_bound}
Given two nodes $v_p^i \in \mathbb{V}_p$ and $v_q^j \in \mathbb{V}_q$ with node features $\mathbf{x}(v_p^i), \mathbf{x}(v_q^j) \in \mathbb{R}^{{d_v}}$, their multi-hop neighboring node sets are denoted as $\bigcup\limits_{l\in \mathbb{N}^*} \mathcal{N}^l (v_p^i)$ and $\bigcup\limits_{l\in \mathbb{N}^*} \mathcal{N}^l (v_q^j)$. The AFEs for generating atomic embeddings are $\mathrm{AFE}_{\mathrm{node}} = \{ \mathbf{A}_i \in \mathbb{R}^{d_{a} \times d_v } | i\in\{1,...,l_a\}\}$ and $\mathrm{AFE}_{\mathrm{struct}} = \{ \mathbf{R}_j \in \mathbb{R}^{d_r \times d_v } | j\in\{1,...,l_r\}\}$, corresponding to the matrices for atomic node embeddings and atomic structure embeddings, respectively. Then, the square distance
$\mathrm{dist}^2(\mathbf{z}_p^i, \mathbf{z}_q^j) = \norm{\mathbf{z}_p^i - \mathbf{z}_q^j}_2^2 \geqslant \lambda_{\min}(\norm{\mathbf{x}(v_p^i)-\mathbf{x}(v_q^j)}_2^2 + \sum_{k=1}^{l_r}\norm{\mathbf{x}(u_k)-\mathbf{x}(\nu_k)}_2^2)$,
if $l_a \times d_a + l_r \times d_r \geqslant (l_r+1) d_v $, where $u_k$ are nodes sampled from $\bigcup\limits_{l\in \mathbb{N}^*} \mathcal{N}^l (v_p^i)$, $\nu_k$ are nodes sampled from $\bigcup\limits_{l\in \mathbb{N}^*} \mathcal{N}^l (v_q^j)$, $\lambda_{\min}$ is the minimum eigenvalue of $\mathbf{W}^T \mathbf{W}$, and $\mathbf{W} \in \mathbb{R}^{(l_a d_a + l_r d_r) \times (l_r+1)d_v}$ is constructed from the matrices in $\mathrm{AFE}_{\mathrm{node}}$ and $\mathrm{AFE}_{\mathrm{struct}}$. Specifically, $\mathbf{W}$ is a block matrix constructed as follows:
1. $\mathbf{W}_{1:l_ad_a,1:d_v}$ are filled by the concatenation of $\{\mathbf{A}_i | i=1,...,l_a \}$, i.e. $[\mathbf{A}_1;...;\mathbf{A}_{l_a}] \in \mathbb{R}^{l_a d_a \times d_v}$.
2. For $\mathbf{W}_{l_ad_a+1:l_ad_a+l_rd_r, 1:(l_r+1)d_v}$, the construction is first filling $\mathbf{W}_{l_ad_a+(k-1)d_r:l_ad_a+kd_r, kd_v:(k+1)d_v}$ with $\mathbf{R}_k$, $k=1,...,l_r$.
3. For other parts, fill with zeros.
\end{lemma
\begin{proof}
Given vertex $v_p^i$, we concatenate its feature vector with the $l_r$ neighbors sampled from $\bigcup\limits_{l\in \mathbb{N}^*} \mathcal{N}^l (v_p^i)$, i.e. $\mathbf{x}_{p,i}' = [\mathbf{x}(v_p^i); \mathbf{x}(u_1); ...; \mathbf{x}(u_{l_r})]\in \mathbb{R}^{(l_r+1)d \times 1}, u_j \in \bigcup\limits_{l\in \mathbb{N}^*} \mathcal{N}^l (v_p^i)$. Then with the constructed block matrix $\mathbf{W}$, we could formulate the generation of $\mathbf{z}_p^i$ as: $\mathbf{z}_p^i = \mathbf{W} \mathbf{x}_{p,i}'$.
Similarly, we can formulate $\mathbf{z}_q^j$ for another vertex $v_q^j$.
And their distance can be formulated as:
\begin{align*}
\scriptsize
\mathrm{dist}(\mathbf{z}_p^i,\mathbf{z}_q^j) = \norm{\mathbf{z}_p^i-\mathbf{z}_q^j}_2 = \sqrt{(\mathbf{z}_p^i-\mathbf{z}_q^j)^T(\mathbf{z}_p^i-\mathbf{z}_q^j)},
\end{align*}
where $\mathbf{z}_p^i-\mathbf{z}_q^j)^T(\mathbf{z}_p^i-\mathbf{z}_q^j)$ can be further expanded as:
\begin{flalign*}
(\mathbf{z}_p^i-\mathbf{z}_q^j)^T(\mathbf{z}_p^i-\mathbf{z}_q^j) &
= (\mathbf{W} \mathbf{x}_{p,i}'-\mathbf{W} \mathbf{x}_{q,j}')^T (\mathbf{W} \mathbf{x}_{p,i}'-\mathbf{W} \mathbf{x}_{q,j}') &&\\
&=\big(\mathbf{W} (\mathbf{x}_{p,i}'- \mathbf{x}_{q,j}')\big)^T \big(\mathbf{W} (\mathbf{x}_{p,i}'- \mathbf{x}_{q,j}')\big) &&\\
&= (\mathbf{x}_{p,i}'- \mathbf{x}_{q,j}')^T\mathbf{W}^T \mathbf{W} (\mathbf{x}_{p,i}'- \mathbf{x}_{q,j}')&&
\end{flalign*}
According to lemma \ref{real_symmetric}, $\mathbf{W}^T \mathbf{W}$ is a real symmetric matrix, with lemma \ref{quadrtic_bound}, we have
\begin{align*}
\frac{(\mathbf{x}_{p,i}'- \mathbf{x}_{q,j}')^T\mathbf{W}^T \mathbf{W} (\mathbf{x}_{p,i}'- \mathbf{x}_{q,j}')}{(\mathbf{x}_{p,i}'- \mathbf{x}_{q,j}')^T (\mathbf{x}_{p,i}'- \mathbf{x}_{q,j}')} \geqslant \lambda_{min}
\end{align*}\vspace{-2mm}
According to lemma \ref{real_symmetric}, with $l_ad_a+l_rd_r \geqslant (l_r+1)d_v $ and the constraint of column full rank on $\mathbf{W}$, $\mathbf{W}^T \mathbf{W} \in \mathbb{R}^{(l_r+1) \times (l_r+1)}$ has $l_r+1$ positive eigenvalues, thus $\lambda_{min} > 0$.
Then we decompose $(\mathbf{x}_{p,i}'- \mathbf{x}_{q,j}')^T (\mathbf{x}_{p,i}'- \mathbf{x}_{q,j}')$ as:
\begin{flalign*}
&\quad (\mathbf{x}_{p,i}'- \mathbf{x}_{q,j}')^T (\mathbf{x}_{p,i}'- \mathbf{x}_{q,j}')&& \\
&= \sum_{k=0}^{(l_r+1)d-1} \big((\mathbf{x}_{p,i}')_k - (\mathbf{x}_{q,j}')\big)^2&&\\
&= \sum_{k=1}^{(l_r+1)d_v}\big((\mathbf{x}_{p,i}')_k - (\mathbf{x}_{q,j}')\big)^2 &&\\
&= \sum_{k=1}^{d_v}\big((\mathbf{x}_{p,i}')_k - (\mathbf{x}_{q,j}')_k\big)^2 + \sum_{k=1}^{l_r}\sum_{m=kd_v+1}^{(k+1)d_v}\big((\mathbf{x}_{p,i}')_k - (\mathbf{x}_{q,j}')_k\big)^2 &&\\
&= \norm{\mathbf{x}(v_p^i)-\mathbf{x}(v_q^j)}_2^2+ \sum_{m=1}^{l_r}\norm{\mathbf{x}(u_m)-\mathbf{x}(\nu_k)}_2^2&&
\end{flalign*}
\noindent$\therefore \mathrm{dist}^2(\mathbf{z}_p^i - \mathbf{z}_q^j) = \norm{\mathbf{z}_p^i - \mathbf{z}_q^j}_2^2 \geqslant \lambda_{\min}(\norm{\mathbf{x}(v_p^i)-\mathbf{x}(v_q^j)}_2^2 + \sum_{k=1}^{l_r}\norm{\mathbf{x}(u_k)-\mathbf{x}(\nu_k)}_2^2)$ \vspace{-1mm}
\end{proof}\vspace{-2mm}
The key point in these theories is that for any task sequence with certain distance among the tasks, there exists a configuration that ensures HPNs to be capable of preserving the task distance after projecting the data into the hidden space, so that only the prototypes associated with the current task are refined and the prototypes corresponding to the other tasks are preserved. Specifically, theorem on zero-forgetting can be formulated as follows:
\begin{thm}[Task distance preserving]\label{zero-forget}
For $\mathrm{HPNs}$ trained on consecutive tasks $\mathcal{T}^p$ and $\mathcal{T}^{p+1}$. If $l_ad_a+l_rd_r \geqslant (l_r+1)d_v $ and $\mathbf{W}$ is column full rank, then as long as
$ t_{A} < \lambda_{\min}(l_r+1) \mathbf{dist}(\mathbb{V}_p, \mathbb{V}_{p+1}) $, learning on $\mathcal{T}^{p+1}$ will not modify representations $\mathrm{HPNs}$ generate for data from $\mathcal{T}^p$, i.e. catastrophic forgetting is avoided.
\end{thm}
In Theorem \ref{zero-forget}, $\lambda_i$ is eigenvalues of the $\mathbf{W}^T \mathbf{W}$, where $\mathbf{W}$ is the matrix mentioned before constructed via AFEs. $d_v$, $d_a$ and $d_r$ are dimensions of data, atomic node embeddings, and atomic structure embeddings.
\begin{proof}
Following the proofs above, suppose two nodes $v_p^i$ and $v_q^j$ are embedded into $\mathbf{z}_p^i$ and $\mathbf{z}_q^j$ with the embedding module. Then the distance between $\mathbf{z}_p^i$ and $\mathbf{z}_q^j$ could be formulated as:
$\mathrm{dist}(\mathbf{z}_p^i,\mathbf{z}_q^j) = ||\mathbf{z}_p^i-\mathbf{z}_q^j||_2 = \sqrt{(\mathbf{z}_p^i-\mathbf{z}_q^j)^T(\mathbf{z}_p^i-\mathbf{z}_q^j)}$
According to lemma \ref{dist_bound}, we have $\mathrm{dist}^2(\mathbf{z}_p^i, \mathbf{z}_q^j) = ||\mathbf{z}_p^i - \mathbf{z}_q^j||_2^2 \geqslant \lambda_{\min}(||\mathbf{x}(v_p^i)-\mathbf{x}(v_q^j)||_2^2 + \sum_{k=1}^{l_r}||\mathbf{x}(u_k)-\mathbf{x}(\nu_k)||_2^2)$.
\noindent$\because v_p^i \in \mathbb{V}_p, v_q^j \in \mathbb{V}_q$,
\noindent$\therefore ||\mathbf{x}(v_p^i)-\mathbf{x}(v_q^j)||_2^2 \geqslant \mathrm{dist}^2(\mathbb{V}_p, \mathbb{V}_q)$.
Similarly, $||\mathbf{x}(u_k)-\mathbf{x}(\nu_k)||_2^2 \geqslant \mathrm{dist}^2(\mathbb{V}_p, \mathbb{V}_q)$, for $\forall k$.
\noindent$\therefore ||\mathbf{z}_p^i, \mathbf{z}_q^j||_2^2 \geqslant \lambda_{\min}(l_r+1) \mathrm{dist}^2(\mathbb{V}_p, \mathbb{V}_q)$
\noindent$\therefore \mathrm{dist}(\mathbf{z}_p^i, \mathbf{z}_q^j) =||\mathbf{z}_p^i - \mathbf{z}_q^j||_2 \geqslant \sqrt{\lambda_{\min}(l_r+1) \mathrm{dist}^2(\mathbb{V}_p, \mathbb{V}_q)} $
\noindent$\therefore$ If $t_A < \sqrt{\lambda_{\min}(l_r+1) \mathrm{dist}^2(\mathbb{V}_p, \mathbb{V}_q)}$, the embeddings of two nodes from two different tasks will not be assigned to same A-prototypes.
Above all, if the conditions in Theorem \ref{zero-forget} are satisfied, learning on new tasks will not modify the prototypes for previous tasks.
Besides, the data from previous tasks will be exactly matched to the correct prototypes after training the model on new tasks. In practice, the conditions may not be easy to be satisfied all the time. However, as mentioned in the paper, the bound given in Theorem \ref{zero-forget} is not tight, thus fully satisfying the conditions may not be necessary. Therefore, in the experimental section in the paper, we practically show how the important factors included in these conditions influence the performance (Section 3.6 in the paper). The results demonstrates that the more we satisfy the conditions, the better performance we will obtain, and certain factors (number of AFEs) influence more than the others.
\end{proof}\vspace{-2mm}
\begin{rmrk}
When $\mathrm{dist}(\mathbb{V}_p, \mathbb{V}_q) = 0$, i.e. there exists a non-empty set $\mathbb{V}_{\cap} = \mathbb{V}_p \cap \mathbb{V}_q$, $\mathrm{st.}$ $\mathrm{dist}(\mathbb{V}_p \setminus \mathbb{V}_{\cap}, \mathbb{V}_q \setminus \mathbb{V}_{\cap}) > 0$, then \textrm{Theorem} \ref{zero-forget} holds. As for the $\mathbb{V}_{\cap}$ containing examples exactly same in $\mathbb{V}_p$ and $\mathbb{V}_q$, there are two situations:
1. $\forall v \in \mathbb{V}_{\cap}$, $y_p(v) = y_q(v)$, where $y_p(\cdot)$ and $y_q(\cdot)$ denote the associated labels in task $p$ and $q$
2. $\exists v \in \mathbb{V}_{\cap}$, $y_p(v) \neq y_q(v)$
For situation 1, $\mathbb{V}_{\cap}$ will not cause forgetting on the previous task, as these shared data are exactly same and will optimize the model to same direction.
For situation 2, if no task indicator is provided, then these data are contradictory data, if task indicator is provided, then the indicator could be merged into the feature vector of the node, i.e. $\mathbf{x}(v_p)$, then $v_p$ will not belong to $\mathbb{V}_{\cap}$.\vspace{-2mm}
\end{rmrk}
\iffalse
\ifCLASSOPTIONcompsoc
\section*{Acknowledgments}
\else
\section*{Acknowledgment}
\fi
The authors would like to thank...
\fi
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\bibliographystyle{IEEEtran}
\section{Details of Theoretical Analysis}\label{sec:theoretical_analysis}
\subsection{Overview}
The main theoretical results are briefly introduced in the paper. In this section, we provide detailed explanations and proofs for the theoretical results.
\subsection{Memory consumption upper bound}
The proof and detailed analysis of the memory consumption upper bound have already been given in the paper. In this subsection, we compute a specific case of the general theoretical result.
Although a closed-form expression of the upper bound is not available in general, we can explicitly compute $\max_{N} S(d_a, N, 1-t_A)$ for certain values of $d_a$, and verify it with experiments. For example, when $d_a=2$, the problem becomes that of distributing points on a circle with unit radius. Then, $\max_{N} S(d_a, N, 1-t_A)$ is obtained by evenly distributing the points on the circle with an angular interval of $\arccos(1-t_A)$. Finally, the explicit value of $\max_{N} S(d_a, N, 1-t_A)$ can be formulated as:
\begin{align}
\max_{N} S(d_a, N, 1-t_A) = \frac{2\pi}{\arccos(1-t_A)},
\end{align}
then we have:
\begin{align}
n_A \leqslant (l_a + l_r) \frac{2\pi}{\arccos(1-t_A)}.
\end{align}
The upper bounds on the numbers of N- and C-prototypes can be formulated similarly. The above results are used in Section 4.7 in the paper.
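For concreteness, the $d_a = 2$ case above can be evaluated numerically. The following is a small sketch (the values of $l_a$, $l_r$, and $t_A$ are illustrative, not taken from the experiments):

```python
import math

def max_prototypes_2d(l_a, l_r, t_A):
    """Upper bound on the number of A-prototypes when atomic embeddings
    lie on the unit circle (d_a = 2): (l_a + l_r) * 2*pi / arccos(1 - t_A)."""
    return (l_a + l_r) * 2 * math.pi / math.acos(1 - t_A)

# Illustrative values: a larger threshold t_A permits fewer prototypes.
print(max_prototypes_2d(l_a=2, l_r=2, t_A=0.5))   # 4 * 2*pi / (pi/3) = 24.0
print(max_prototypes_2d(l_a=2, l_r=2, t_A=1.0))   # 4 * 2*pi / (pi/2) = 16.0
```

As expected, the bound grows with the number of AFEs $(l_a + l_r)$ and shrinks as the matching threshold $t_A$ increases.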
\subsection{Task distance preserving}
In this subsection, we give proofs and a detailed analysis of Theorem 2 in the paper.
Below, Lemma \ref{quadrtic_bound}, Lemma \ref{real_symmetric}, Lemma \ref{rank}, and Corollary \ref{cor:1} are standard results ranging from geometry to linear algebra; the other parts are our own contributions.
In continual learning, the key challenge is to overcome catastrophic forgetting, which refers to the performance degradation on previous tasks after training the model on new tasks. Based on our model design, we formulate this as: whether learning new tasks affects the representations the model generates for old task data. First, we give definitions of the tasks and task distances:
\begin{definition}[Task set]\label{def:task_set}
The $p$-th task in a sequence is denoted as $\mathcal{T}^p$ and contains a subgraph $\mathcal{G}_p$ consisting of nodes belonging to some new categories. We denote the associated node set and adjacency matrix as $\mathbb{V}_p$ and $\mathrm{A}_p$.
Each $v_p^i \in \mathbb{V}_p$ has a feature vector $\mathbf{x}(v_p^i)$ and a label $y(v_p^i)$.
\end{definition}
Catastrophic forgetting arises because different tasks in a sequence are drawn from heterogeneous distributions, so a model trained sequentially on different tasks is
unable to maintain satisfactory performance on previous tasks. Therefore, given the definition of the tasks (Definition \ref{def:task_set}), we give a formal definition to quantify the difference between two tasks.
\begin{definition}[Task distance]\label{def:task_distance}
We define the distance between two tasks as the set distance between the node sets of these two tasks, \textit{i.e.}
$\mathrm{dist}(\mathbb{V}_p, \mathbb{V}_q) = \inf\limits_{v_p^i \in \mathbb{V}_p,\, v_q^j \in \mathbb{V}_q} \norm{\mathbf{x}(v_p^i) - \mathbf{x}(v_q^j)}_2$.
\end{definition}
\begin{lemma}
The distance between any two tasks is non-negative, i.e. $\forall p,q\in \{1,...,\mathrm{M}^T\}$, $\mathrm{dist}(\mathbb{V}_p, \mathbb{V}_q) \geqslant 0$, where $\mathrm{M}^T$ is the number of tasks contained in the sequence.
\end{lemma}
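The task distance in Definition \ref{def:task_distance} is an infimum over all cross-task node pairs; for finite node sets it is attained as a minimum and can be computed directly. A minimal sketch with synthetic feature vectors (the values are illustrative):

```python
import numpy as np

def task_distance(X_p, X_q):
    """Set distance between two tasks: the minimum Euclidean distance
    between any node feature in X_p and any node feature in X_q."""
    diffs = X_p[:, None, :] - X_q[None, :, :]       # all pairwise differences
    return np.sqrt((diffs ** 2).sum(-1)).min()

rng = np.random.default_rng(0)
X_p = rng.normal(loc=0.0, size=(5, 4))   # synthetic task-p node features
X_q = rng.normal(loc=5.0, size=(6, 4))   # synthetic task-q features, shifted away
print(task_distance(X_p, X_q) > 0)       # well-separated clusters => positive distance
print(task_distance(X_p, X_p))           # the distance of a set to itself is 0.0
```

The non-negativity asserted by the lemma is immediate, since the distance is a minimum of norms.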
Real-world data can be complex and may even contain noise that is impossible for any model to learn, which requires extra consideration when justifying the effectiveness of the model. Formally, we give the definition of contradictory data.
\begin{definition}[Contradictory data]
$\forall v_p^i \in \mathbb{V}_p$, $p \in \{1,...,\mathrm{M}^T\}$, if $\exists v_q^j \in \mathbb{V}_q$, $q \in \{1,...,\mathrm{M}^T\}$, s.t. $\forall l\in \mathbb{N^*}$, $\forall u \in \mathcal{N}^l(v_p^i) $ and $\forall v \in \mathcal{N}^l(v_q^j)$, $\mathbf{x}(u)=\mathbf{x}(v)$ but $y(v_p^i) \neq y(v_q^j)$, then we say $(v_p^i, y(v_p^i))$ and $(v_q^j, y(v_q^j))$ are contradictory data, as it is contradictory for any model to give different predictions for one node based on the same node features and graph structures. ($\mathbb{N}^*$ denotes the set of non-negative integers.)
\end{definition}
\begin{rmrk}
Contradictory data is ignored or simply regarded as outliers in previous works; in this work, we explicitly analyze its effect for the comprehensiveness of our theory.
Contradictory data arises in different situations.
If $v_p^i$ and $v_q^j$ are from different tasks, then $y(v_p^i) \neq y(v_q^j)$ is plausible, because the two tasks may describe the same thing from different aspects. For example, an article from a citation network may be categorized both as 'physics related' and as 'computer science related'. In this situation, a task indicator can be added to the feature of the node, so that the features of $v_p^i$ and $v_q^j$ are no longer equal and they are no longer contradictory data.
Within one task, however, contradictory data are most likely to be wrongly labeled, \textit{e.g.} it does not make sense for an article to be both 'related to physics' and 'not related to physics'.
\end{rmrk}
Besides the distance between tasks, the distance between the embeddings obtained by the AFEs will also be a crucial concept in the proof.
\begin{definition}[Embedding distance]
Each input node $v_p^i$ is given a set of atomic embeddings $\mathbb{E}_A(v_p^i) = \mathbb{E}^{\mathrm{node}}_A(v_p^i) \cup \mathbb{E}^{\mathrm{struct}}_A(v_p^i)$, where $\mathbb{E}^{\mathrm{node}}_A(v_p^i) = \{\mathbf{a}_i^j \mid j\in\{1,...,l_a\}\}_p $ contains the atomic node embeddings of $v_p^i$ and $\mathbb{E}^{\mathrm{struct}}_A(v_p^i) = \{\mathbf{r}_i^k \mid k \in \{ 1,...,l_r\} \}_p$ contains the atomic structure embeddings, with $\mathbf{a}_i^j\in \mathbb{R}^{d_a}$ and $\mathbf{r}_i^k \in \mathbb{R}^{d_r}$. To define the distance between representations of two nodes, we concatenate the atomic embeddings of each node into a single vector in a higher dimensional space, i.e. each node $v_p^i$ corresponds to a latent vector $\mathbf{z}_p^i=[\mathbf{a}_i^1;...;\mathbf{a}_i^{l_a};\mathbf{r}_i^1;...;\mathbf{r}_i^{l_r}]\in\mathbb{R}^{l_a d_a + l_r d_r}$. Then we define the distance between the representations of two nodes $v_p^i$ and $v_q^j$ as the Euclidean distance between their corresponding latent vectors $\mathbf{z}_p^i$ and $\mathbf{z}_q^j$, i.e. $\mathrm{dist}(\mathbf{z}_p^i,\mathbf{z}_q^j) = \norm{\mathbf{z}_p^i-\mathbf{z}_q^j}_2$.
\end{definition}
Next, we state several linear-algebra results used in the proofs.
\begin{lemma}[Bounds for real quadratic forms]\label{quadrtic_bound}
Given a real symmetric matrix $\mathbf{A}$, and an arbitrary real vector variable $\mathbf{x}$, we have
\begin{align}
\lambda_{\min} \leqslant \frac{\mathbf{x}^T \mathbf{A} \mathbf{x}}{\mathbf{x}^T \mathbf{x}} \leqslant \lambda_{\max},
\end{align}
where $\lambda_{\min}$ and $\lambda_{\max}$ are the minimum and maximum eigenvalues of matrix $\mathbf{A}$.
\end{lemma}
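Lemma \ref{quadrtic_bound} is the standard Rayleigh-quotient bound, and it can be sanity-checked numerically for a random symmetric matrix (a sketch, not part of the proof):

```python
import numpy as np

rng = np.random.default_rng(1)
B = rng.normal(size=(6, 6))
A = (B + B.T) / 2                        # random real symmetric matrix
lam_min, lam_max = np.linalg.eigvalsh(A)[[0, -1]]   # eigvalsh returns ascending order

for _ in range(1000):                    # random nonzero test vectors
    x = rng.normal(size=6)
    q = x @ A @ x / (x @ x)              # Rayleigh quotient
    assert lam_min - 1e-9 <= q <= lam_max + 1e-9
print("Rayleigh quotient stays within [lambda_min, lambda_max]")
```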
\begin{lemma}[Real symmetric matrix]\label{real_symmetric}
For a matrix $\mathbf{A} \in \mathbb{R}^{m\times n}$, $\mathbf{A}^T\mathbf{A} \in \mathbb{R}^{n\times n}$ is a real symmetric matrix, $rank(\mathbf{A}^T\mathbf{A}) = rank(\mathbf{A})$, and the non-zero eigenvalues of $\mathbf{A}^T\mathbf{A}$ are squares of the non-zero singular values of $\mathbf{A}$.
\end{lemma}
\begin{lemma}[Rank and number of non-zero singular values]\label{rank}
For a matrix $\mathbf{A} \in \mathbb{R}^{m\times n}$, the number of non-zero singular values equals the rank of $\mathbf{A}$, i.e. $rank(\mathbf{A})$.
\end{lemma}
\begin{cor}\label{cor:1}
For a matrix $\mathbf{A} \in \mathbb{R}^{m\times n}$. Without loss of generality, we assume $n \leqslant m$. If $\mathbf{A}$ is column full rank, i.e. $rank(\mathbf{A})=n$, then $\mathbf{A}$ has $n$ non-zero singular values. Besides, $rank(\mathbf{A}^T\mathbf{A}) = n$, and $\mathbf{A}^T\mathbf{A}$ has $n$ non-zero singular values.
\end{cor}
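Lemma \ref{real_symmetric}, Lemma \ref{rank}, and Corollary \ref{cor:1} can likewise be checked numerically. A sketch (a random Gaussian matrix is generically of full column rank):

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 8, 5                                  # m >= n, as in Corollary 1
A = rng.normal(size=(m, n))                  # generically full column rank
G = A.T @ A                                  # Gram matrix A^T A

assert np.allclose(G, G.T)                   # real symmetric
assert np.linalg.matrix_rank(G) == np.linalg.matrix_rank(A) == n

sv = np.linalg.svd(A, compute_uv=False)      # singular values, descending
eig = np.linalg.eigvalsh(G)[::-1]            # eigenvalues of A^T A, descending
assert np.allclose(eig, sv ** 2)             # eigenvalues of A^T A = squared singular values
print("A^T A is symmetric and full rank, with eigenvalues equal to sv(A)^2")
```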
Given the explanations above, we then derive the bound for the change of the distance among data, which will be further used for analyzing the separation of data from different tasks.
\begin{lemma}[Embedding distance bound]\label{dist_bound}
Given two nodes $v_p^i \in \mathbb{V}_p$ and $v_q^j \in \mathbb{V}_q$ with vertex features $\mathbf{x}(v_p^i), \mathbf{x}(v_q^j) \in \mathbb{R}^{{d_v}}$, their multi-hop neighboring node sets are denoted as $\bigcup\limits_{l\in \mathbb{N}^*} \mathcal{N}^l (v_p^i)$ and $\bigcup\limits_{l\in \mathbb{N}^*} \mathcal{N}^l (v_q^j)$. The AFEs for generating atomic embeddings are $\mathrm{AFE}_{\mathrm{node}} = \{ \mathbf{A}_i \in \mathbb{R}^{d_{a} \times d_v } \mid i\in\{1,...,l_a\} \}$ and $\mathrm{AFE}_{\mathrm{struct}} = \{ \mathbf{R}_j \in \mathbb{R}^{d_r \times d_v } \mid j\in\{1,...,l_r\} \}$, corresponding to the matrices for atomic node embeddings and atomic structure embeddings, respectively. Then, the squared distance satisfies
$\mathrm{dist}^2(\mathbf{z}_p^i, \mathbf{z}_q^j) = \norm{\mathbf{z}_p^i - \mathbf{z}_q^j}_2^2 \geqslant \lambda_{\min}\big(\norm{\mathbf{x}(v_p^i)-\mathbf{x}(v_q^j)}_2^2 + \sum_{k=1}^{l_r}\norm{\mathbf{x}(u_k)-\mathbf{x}(\nu_k)}_2^2\big)$,
if $l_a d_a + l_r d_r \geqslant (l_r+1) d_v $, where $u_k$ are nodes sampled from $\bigcup\limits_{l\in \mathbb{N}^*} \mathcal{N}^l (v_p^i)$, $\nu_k$ are nodes sampled from $\bigcup\limits_{l\in \mathbb{N}^*} \mathcal{N}^l (v_q^j)$, $\lambda_{\min}$ is the minimum eigenvalue of $\mathbf{W}^T \mathbf{W}$, and $\mathbf{W} \in \mathbb{R}^{(l_a d_a + l_r d_r) \times (l_r+1)d_v}$ is constructed from the matrices in $\mathrm{AFE}_{\mathrm{node}}$ and $\mathrm{AFE}_{\mathrm{struct}}$. Specifically, $\mathbf{W}$ is a block matrix constructed as follows:
1. $\mathbf{W}_{1:l_ad_a,\,1:d_v}$ is filled with the concatenation of $\{\mathbf{A}_i \mid i=1,...,l_a \}$, i.e. $[\mathbf{A}_1;...;\mathbf{A}_{l_a}] \in \mathbb{R}^{l_a d_a \times d_v}$.
2. For the remaining rows $\mathbf{W}_{l_ad_a+1:l_ad_a+l_rd_r,\,1:(l_r+1)d_v}$, the block $\mathbf{W}_{l_ad_a+(k-1)d_r+1:l_ad_a+kd_r,\,kd_v+1:(k+1)d_v}$ is filled with $\mathbf{R}_k$, for $k=1,...,l_r$.
3. All other entries are zeros.
\end{lemma}
\begin{proof}
Given vertex $v_p^i$, we concatenate its feature vector with those of the $l_r$ neighbors sampled from $\bigcup\limits_{l\in \mathbb{N}^*} \mathcal{N}^l (v_p^i)$, i.e. $\mathbf{x}_{p,i}' = [\mathbf{x}(v_p^i); \mathbf{x}(u_1); ...; \mathbf{x}(u_{l_r})]\in \mathbb{R}^{(l_r+1)d_v}$, $u_j \in \bigcup\limits_{l\in \mathbb{N}^*} \mathcal{N}^l (v_p^i)$. Then, with the constructed block matrix $\mathbf{W}$, we can write the generation of $\mathbf{z}_p^i$ as $\mathbf{z}_p^i = \mathbf{W} \mathbf{x}_{p,i}'$.
Similarly, we can formulate $\mathbf{z}_q^j$ for another vertex $v_q^j$.
Their distance can then be formulated as:
\begin{align*}
\scriptsize
\mathrm{dist}(\mathbf{z}_p^i,\mathbf{z}_q^j) = \norm{\mathbf{z}_p^i-\mathbf{z}_q^j}_2 = \sqrt{(\mathbf{z}_p^i-\mathbf{z}_q^j)^T(\mathbf{z}_p^i-\mathbf{z}_q^j)},
\end{align*}
where $(\mathbf{z}_p^i-\mathbf{z}_q^j)^T(\mathbf{z}_p^i-\mathbf{z}_q^j)$ can be further expanded as:
\begin{flalign*}
(\mathbf{z}_p^i-\mathbf{z}_q^j)^T(\mathbf{z}_p^i-\mathbf{z}_q^j) &
= (\mathbf{W} \mathbf{x}_{p,i}'-\mathbf{W} \mathbf{x}_{q,j}')^T (\mathbf{W} \mathbf{x}_{p,i}'-\mathbf{W} \mathbf{x}_{q,j}') &&\\
&=\big(\mathbf{W} (\mathbf{x}_{p,i}'- \mathbf{x}_{q,j}')\big)^T \big(\mathbf{W} (\mathbf{x}_{p,i}'- \mathbf{x}_{q,j}')\big) &&\\
&= (\mathbf{x}_{p,i}'- \mathbf{x}_{q,j}')^T\mathbf{W}^T \mathbf{W} (\mathbf{x}_{p,i}'- \mathbf{x}_{q,j}').&&
\end{flalign*}
By Lemma \ref{real_symmetric}, $\mathbf{W}^T \mathbf{W}$ is a real symmetric matrix; hence, by Lemma \ref{quadrtic_bound}, we have
\begin{align*}
\frac{(\mathbf{x}_{p,i}'- \mathbf{x}_{q,j}')^T\mathbf{W}^T \mathbf{W} (\mathbf{x}_{p,i}'- \mathbf{x}_{q,j}')}{(\mathbf{x}_{p,i}'- \mathbf{x}_{q,j}')^T (\mathbf{x}_{p,i}'- \mathbf{x}_{q,j}')} \geqslant \lambda_{\min}.
\end{align*}
Moreover, by Lemma \ref{real_symmetric} and Corollary \ref{cor:1}, with $l_ad_a+l_rd_r \geqslant (l_r+1)d_v $ and the constraint that $\mathbf{W}$ has full column rank, $\mathbf{W}^T \mathbf{W} \in \mathbb{R}^{(l_r+1)d_v \times (l_r+1)d_v}$ has $(l_r+1)d_v$ positive eigenvalues, thus $\lambda_{\min} > 0$.
Then we decompose $(\mathbf{x}_{p,i}'- \mathbf{x}_{q,j}')^T (\mathbf{x}_{p,i}'- \mathbf{x}_{q,j}')$ as:
\begin{flalign*}
&\quad (\mathbf{x}_{p,i}'- \mathbf{x}_{q,j}')^T (\mathbf{x}_{p,i}'- \mathbf{x}_{q,j}')&& \\
&= \sum_{k=1}^{(l_r+1)d_v}\big((\mathbf{x}_{p,i}')_k - (\mathbf{x}_{q,j}')_k\big)^2 &&\\
&= \sum_{k=1}^{d_v}\big((\mathbf{x}_{p,i}')_k - (\mathbf{x}_{q,j}')_k\big)^2 + \sum_{m=1}^{l_r}\sum_{k=md_v+1}^{(m+1)d_v}\big((\mathbf{x}_{p,i}')_k - (\mathbf{x}_{q,j}')_k\big)^2 &&\\
&= \norm{\mathbf{x}(v_p^i)-\mathbf{x}(v_q^j)}_2^2+ \sum_{m=1}^{l_r}\norm{\mathbf{x}(u_m)-\mathbf{x}(\nu_m)}_2^2.&&
\end{flalign*}
\noindent$\therefore \mathrm{dist}^2(\mathbf{z}_p^i, \mathbf{z}_q^j) = \norm{\mathbf{z}_p^i - \mathbf{z}_q^j}_2^2 \geqslant \lambda_{\min}\big(\norm{\mathbf{x}(v_p^i)-\mathbf{x}(v_q^j)}_2^2 + \sum_{k=1}^{l_r}\norm{\mathbf{x}(u_k)-\mathbf{x}(\nu_k)}_2^2\big).$
\end{proof}
The key point of these results is that, for any task sequence with a certain distance among the tasks, there exists a configuration that ensures HPNs preserve the task distance after projecting the data into the hidden space, so that only the prototypes associated with the current task are refined while the prototypes corresponding to the other tasks are preserved. Specifically, the theorem on zero forgetting can be formulated as follows:
\begin{thm}[Task distance preserving]\label{zero-forget}
Consider $\mathrm{HPNs}$ trained on consecutive tasks $\mathcal{T}^p$ and $\mathcal{T}^{p+1}$. If $l_ad_a+l_rd_r \geqslant (l_r+1)d_v $ and $\mathbf{W}$ has full column rank, then as long as
$ t_{A} < \sqrt{\lambda_{\min}(l_r+1)}\, \mathrm{dist}(\mathbb{V}_p, \mathbb{V}_{p+1}) $, learning on $\mathcal{T}^{p+1}$ will not modify the representations $\mathrm{HPNs}$ generate for data from $\mathcal{T}^p$, i.e. catastrophic forgetting is avoided.
\end{thm}
In Theorem \ref{zero-forget}, $\lambda_{\min}$ is the minimum eigenvalue of $\mathbf{W}^T \mathbf{W}$, where $\mathbf{W}$ is the matrix constructed from the AFEs as described above. $d_v$, $d_a$, and $d_r$ are the dimensions of the node features, atomic node embeddings, and atomic structure embeddings, respectively.
\begin{proof}
Following the proofs above, suppose two nodes $v_p^i$ and $v_q^j$ are embedded into $\mathbf{z}_p^i$ and $\mathbf{z}_q^j$ with the embedding module. Then the distance between $\mathbf{z}_p^i$ and $\mathbf{z}_q^j$ could be formulated as:
$\mathrm{dist}(\mathbf{z}_p^i,\mathbf{z}_q^j) = ||\mathbf{z}_p^i-\mathbf{z}_q^j||_2 = \sqrt{(\mathbf{z}_p^i-\mathbf{z}_q^j)^T(\mathbf{z}_p^i-\mathbf{z}_q^j)}.$
According to lemma \ref{dist_bound}, we have $\mathrm{dist}^2(\mathbf{z}_p^i, \mathbf{z}_q^j) = ||\mathbf{z}_p^i - \mathbf{z}_q^j||_2^2 \geqslant \lambda_{\min}(||\mathbf{x}(v_p^i)-\mathbf{x}(v_q^j)||_2^2 + \sum_{k=1}^{l_r}||\mathbf{x}(u_k)-\mathbf{x}(\nu_k)||_2^2)$.
\noindent$\because v_p^i \in \mathbb{V}_p, v_q^j \in \mathbb{V}_q$,
\noindent$\therefore ||\mathbf{x}(v_p^i)-\mathbf{x}(v_q^j)||_2^2 \geqslant \mathrm{dist}^2(\mathbb{V}_p, \mathbb{V}_q)$.
Similarly, $||\mathbf{x}(u_k)-\mathbf{x}(\nu_k)||_2^2 \geqslant \mathrm{dist}^2(\mathbb{V}_p, \mathbb{V}_q)$ for all $k$.
\noindent$\therefore ||\mathbf{z}_p^i - \mathbf{z}_q^j||_2^2 \geqslant \lambda_{\min}(l_r+1) \mathrm{dist}^2(\mathbb{V}_p, \mathbb{V}_q)$
\noindent$\therefore \mathrm{dist}(\mathbf{z}_p^i, \mathbf{z}_q^j) =||\mathbf{z}_p^i - \mathbf{z}_q^j||_2 \geqslant \sqrt{\lambda_{\min}(l_r+1) \mathrm{dist}^2(\mathbb{V}_p, \mathbb{V}_q)} $
\noindent$\therefore$ If $t_A < \sqrt{\lambda_{\min}(l_r+1) \mathrm{dist}^2(\mathbb{V}_p, \mathbb{V}_q)}$, the embeddings of two nodes from two different tasks will not be assigned to the same A-prototypes.
In summary, if the conditions in Theorem \ref{zero-forget} are satisfied, learning on new tasks will not modify the prototypes for previous tasks.
Moreover, data from previous tasks will still be matched exactly to the correct prototypes after training the model on new tasks. In practice, the conditions may not always be easy to satisfy. However, as mentioned in the paper, the bound given in Theorem \ref{zero-forget} is not tight, so fully satisfying the conditions may not be necessary. Therefore, in the experimental section of the paper, we show empirically how the important factors included in these conditions influence the performance (Section 3.6 in the paper). The results demonstrate that the better the conditions are satisfied, the better the performance, and that certain factors (e.g., the number of AFEs) are more influential than others.
\end{proof}
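As a concrete illustration (not part of the proof), the admissible threshold in Theorem \ref{zero-forget} can be computed from $\lambda_{\min}$, $l_r$, and the task distance; the numbers below are purely illustrative:

```python
import math

def t_A_upper_bound(lam_min, l_r, task_dist):
    """Threshold below which embeddings from two different tasks cannot be
    assigned to the same A-prototype: t_A < sqrt(lam_min * (l_r + 1)) * dist."""
    return math.sqrt(lam_min * (l_r + 1)) * task_dist

# Illustrative values for lambda_min, l_r, and the task distance:
print(t_A_upper_bound(lam_min=0.25, l_r=3, task_dist=2.0))   # sqrt(0.25 * 4) * 2 = 2.0
```

Any $t_A$ below this value preserves the prototypes of the previous task.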
\begin{rmrk}
When $\mathrm{dist}(\mathbb{V}_p, \mathbb{V}_q) = 0$ because there exists a non-empty intersection $\mathbb{V}_{\cap} = \mathbb{V}_p \cap \mathbb{V}_q$ such that $\mathrm{dist}(\mathbb{V}_p \setminus \mathbb{V}_{\cap}, \mathbb{V}_q \setminus \mathbb{V}_{\cap}) > 0$, Theorem \ref{zero-forget} still holds for the nodes outside $\mathbb{V}_{\cap}$. As for $\mathbb{V}_{\cap}$, which contains examples appearing identically in both $\mathbb{V}_p$ and $\mathbb{V}_q$, there are two situations:
1. $\forall v \in \mathbb{V}_{\cap}$, $y_p(v) = y_q(v)$, where $y_p(\cdot)$ and $y_q(\cdot)$ denote the labels associated with tasks $p$ and $q$;
2. $\exists v \in \mathbb{V}_{\cap}$, $y_p(v) \neq y_q(v)$.
In situation 1, $\mathbb{V}_{\cap}$ will not cause the model to forget the previous task, as these shared data are exactly the same and will optimize the model in the same direction.
In situation 2, if no task indicator is provided, these data are contradictory data; if a task indicator is provided, the indicator can be merged into the feature vector of the node, i.e. $\mathbf{x}(v_p)$, so that $v_p$ no longer belongs to $\mathbb{V}_{\cap}$.
\end{rmrk}
\iffalse
\begin{rmrk}
Theorem \ref{zero-forget} implies that our model is capable of handling any arbitrarily collected tasks. However, our model actually does not depend on the splitting of the tasks. Instead, it will automatically discover different tasks and generate similar representations for similar tasks, vice versa. This is explained in the proof of Theorem \ref{zero-forget}.
\end{rmrk}
The constraints on the AFEs and the range for $t_A$ are negatively correlated. In Theorem \ref{zero-forget}, the constraints on AFEs are tight thus the range for $t_A$ is wide. In the following, we give another version of Theorem \ref{zero-forget} in which the constraints on AFEs is relaxed but the range for $t_A$ shrinks. The two versions of theorems may help understanding our model. Also, the different versions of constraints provide more flexible instructions on configuring the implementations.
\begin{thm}[Task distance preserving v2]\label{zero-forget2}
For $\mathrm{HPNs}$ trained on consecutive tasks $\mathcal{T}^p$ and $\mathcal{T}^{p+1}$. If $l_ad_a \geqslant d_v$ and $\mathbf{W}$ is column full rank, then as long as
$ t_{A} < \lambda_{\min}\mathrm{dist}(\mathbb{V}_p, \mathbb{V}_{p+1}) $, learning on $\mathcal{T}^{p+1}$ will not modify representations $\mathrm{HPNs}$ generate for data from $\mathcal{T}^p$, i.e. catastrophic forgetting is avoided.
\end{thm}
The proof of Theorem \ref{zero-forget2} is similar to the proof of Theorem \ref{zero-forget}.
\fi
\fi
\section{Details of Implementation}\label{sec:implementation_details}
\subsection{Datasets and task splitting}
In this subsection, we introduce the datasets we used and the details of how each dataset is split into different tasks.
We use 5 public datasets, which include 3 citation networks (Cora \cite{sen2008collective}, Citeseer \cite{sen2008collective}, and OGB-Arxiv \cite{wang2020microsoft,mikolov2013distributed}), 1 actor co-occurrence network (Actor) \cite{pei2020geom}, and 1 product co-purchasing network (OGB-Products \cite{Bhatia16}).
\iffalse
\begin{table*}[]
\scriptsize
\centering
\begin{tabular}{lcccccccc}\toprule
\textbf{Dataset} & Cornell & Texas & Wisconsin & Cora & Citeseer & Actor & OGB-Arxiv & OGB-Products\\\midrule
\# nodes & 183 & 183 & 251 & 2,708 & 3,327 & 7,600 & 169,343 & 2,449,029 \\\midrule
\# edges & 295 & 309 & 499 & 5,429 & 4,732 & 33,544 &1,166,243 &61,859,140 \\ \midrule
\# features & 1,703 & 1,703 & 1,703 & 1,433 & 3,703 & 931 & 128 & 100 \\\midrule
\# classes & 5 & 5 & 5 & 7 & 6 & 4 & 40 & 47 \\\midrule
\# tasks &2 &2 &2 & 3 & 3 & 2 & 20 & 23 \\\bottomrule
\end{tabular}
\caption{The detailed statistics of 8 datasets used in our experiments.}
\label{tab:data_statistics}
\end{table*}
\fi
\subsubsection{Citation networks}
The original Cora \cite{mccallum2000automating} and Citeseer \cite{giles1998citeseer} are pre-processed by Sen et al. \cite{sen2008collective} with stemming and removal of stop words as well as words with a document frequency of less than 10. After pre-processing, Cora contains 2708 documents and 5429 links denoting citations among the documents, and each document is represented by 1433 distinct words. Cora contains 7 classes. For training, 140 documents are selected, with 20 examples per class. The validation set contains 500 documents and the test set contains 1000 documents. In our continual learning setting, the first 6 classes are selected and grouped into 3 tasks (2 classes per task) in the original order.
After pre-processing, Citeseer contains 3312 documents, each represented by 3703 distinct words, and 4732 links. Citeseer contains 6 classes. 20 documents per class are selected for training, 500 documents for validation, and 1000 documents for testing. For the continual learning setting, the documents from the 6 classes are grouped into 3 tasks with 2 classes per task in the original order.
The Cora and Citeseer datasets can be downloaded via \href{https://github.com/tkipf/gcn/tree/master/gcn/data}{Cora$\&$Citeseer}.
The OGB-Arxiv dataset is collected in the Open Graph Benchmark \href{https://ogb.stanford.edu/docs/nodeprop/#ogbn-arxiv}{OGB}. It is a directed citation network between all Computer Science (CS) arXiv papers indexed by MAG \cite{wang2020microsoft}. In total, it contains 169,343 nodes and 1,166,243 edges. Each node is a paper and each directed edge indicates that one paper cites another. Each paper comes with a 128-dimensional feature vector. The dataset contains 40 classes. As the dataset is imbalanced and the numbers of examples in different classes differ significantly, directly grouping the classes into 2-class groups as for Cora and Citeseer would cause certain tasks to be imbalanced. Therefore, we reorder the classes in descending order of the number of examples per class, and then group the classes according to the new order. In this way, the numbers of examples in the different classes of each task are as balanced as possible. Specifically, the class indices of each task are: \{(35, 12),(15, 21),(28, 30), (16, 24), (10, 34), (8, 4), (5, 2), (27, 26), (36, 19), (23, 31), (9, 37), (13, 3), (20, 39), (22, 6), (38, 33), (25, 11), (18, 1), (14, 7), (0, 17), (29, 32)\}.
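The reordering-and-grouping procedure described above (sort classes by example count in descending order, then group adjacent classes) can be reproduced with a short routine. A sketch with synthetic label counts (not the actual OGB-Arxiv statistics):

```python
from collections import Counter

def balanced_task_split(labels, classes_per_task=2):
    """Sort classes by example count (descending) and group adjacent classes,
    so the class sizes within each task are as close as possible."""
    counts = Counter(labels)
    ordered = [c for c, _ in counts.most_common()]      # classes, descending by frequency
    return [tuple(ordered[i:i + classes_per_task])
            for i in range(0, len(ordered), classes_per_task)]

# Synthetic labels: class 3 is largest, then 1, then 0, then 2.
labels = [3] * 40 + [1] * 30 + [0] * 20 + [2] * 10
print(balanced_task_split(labels))   # [(3, 1), (0, 2)]
```

Applied to the real label array, the same routine yields 2-class tasks whose sizes decrease smoothly across the sequence.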
\iffalse
\subsubsection{Web page networks}
WebKB dataset is collected from different universities by Carnegie Mellon University. The nodes in the datasets are web pages with bag-of-words representation, and edges are hyperlinks between the pages. The web pages are manually classified into 5 classes including student, project, course, staff, and faculty. Following setting in \cite{pei2020geom}, we use three subsets including Wisconsin with 251 web pages, Cornell with 183 web pages, and Texas with 183 web pages. For all these datasets, 60\% nodes are used for training, 20\% for validation, and 20\% for testing. For each of the web page networks, we constructed 2 tasks with 2 classes per task.
The three web page network datasets can be accessed via \href{https://github.com/graphdml-uiuc-jlu/geom-gcn/tree/master/new_data}{Web Pages}. The balanced splitting of the classes for the three web page networks is \{(2, 3), (0, 4)\}.
\fi
\subsubsection{Actor co-occurrence network}
The actor co-occurrence network is a subgraph of the film-director-actor-writer network \cite{tang2009social}. Each node in this dataset corresponds to an actor, and the edges between nodes denote co-occurrence on the same Wikipedia page. The whole dataset contains 7,600 nodes and 33,544 edges. Each node is accompanied by a 931-dimensional feature vector. The nodes are classified into 4 classes according to the average monthly traffic of the corresponding web page. For this dataset, we also constructed 2 tasks with 2 classes per task. The link to this dataset is \href{https://github.com/graphdml-uiuc-jlu/geom-gcn/tree/master/new_data/film}{Actor}. The balanced splitting of the classes is \{(0, 1), (2, 3)\}.
\subsubsection{Product co-purchasing network}
OGB-Products is also collected in the Open Graph Benchmark \href{https://ogb.stanford.edu/docs/nodeprop/#ogbn-arxiv}{OGB}, and is an undirected and unweighted graph representing an Amazon product co-purchasing network \href{http://manikvarma.org/downloads/XC/XMLRepository.html}{link}. In total, it contains 2,449,029 nodes and 61,859,140 edges. Nodes represent products sold on Amazon, and an edge between two products indicates that the products are purchased together. Node features are generated by extracting bag-of-words features from the product descriptions, followed by Principal Component Analysis to reduce the dimension to 100. The 47 top-level categories are used as target labels; in our experiments, we select 46 classes and omit the final class, which contains only 1 example. Similarly to OGB-Arxiv, we reorder the classes in descending order of the number of examples contained in each class, and then group the classes according to the new order. The class indices of each task are: \{(4, 7), (6, 3), (12, 2), (0, 8), (1, 13), (16, 21), (9, 10), (18, 24), (17, 5), (11, 42), (15, 20), (19, 23), (14, 25), (28, 29), (43, 22), (36, 44), (26, 37), (32, 31), (30, 27), (34, 38), (41, 35), (39, 33), (45, 40)\}.
\begin{figure*}
\centering
\includegraphics[height=8cm]{module_structure.jpg}
\caption{Details of modules in HPNs.}
\label{fig:module_structure}
\end{figure*}
\subsection{Experiment Setup}
All models are implemented in PyTorch with the SGD optimizer, and each run is repeated 5 times on an NVIDIA Titan Xp GPU. The average performance and standard deviations are reported for comparison. The network architecture of HPNs is detailed in Figure \ref{fig:module_structure}, and the specific values of the hyperparameters are given below. The hyperparameters provided here correspond to the models used in the comparisons with the baselines; in the other experiments, the hyperparameters are themselves the objects of study and are varied.
As the sizes of the datasets we use differ greatly, we adopt different hyperparameters for small and large datasets.
The small datasets include Cora, Citeseer, Actor, Wisconsin, Cornell, and Texas. The large datasets include OGB-Arxiv and OGB-Products.
For the small datasets, we set $l_a'=1$, $l_r'=1$, and $h=2$. We randomly sample 5 one-hop neighbors and 7 two-hop neighbors. The learning rates are managed separately for the different modules of the model. For the AFEs, the learning rate is set to 0.1 at the beginning and decays to 0.001 at epoch 35. The learning rate for the prototypes is initialized to 0.1 at epoch 35 and decays to 0.01 at epoch 85. The learning rates for the other trainable parameters follow those of the AFEs. During training, the AFEs change rapidly at first and slow down after several epochs. Therefore, at the start of training, the same node would not be stably matched to the same set of prototypes due to the rapidly changing AFEs. To prevent this from creating too many redundant prototypes, we start to establish prototypes only after training the AFEs for 35 epochs. The input data has a dimension of 1433, and we set the dimensions of the A-, N-, and C-prototypes to 16. The number of training epochs is 90. Although the training procedure is designed delicately, the model is actually rather robust and can perform well without these delicate procedures. For example, on the largest dataset, OGB-Products, we train the model for only 10 epochs without decaying the learning rate, establish the prototypes from the beginning, and the model still obtains good results, as shown in the paper (e.g., the results in Section 3.3). For the large datasets, for higher efficiency, we set $h=1$ and uniformly sample only one neighbor. On OGB-Products, we shrink the dimensions of the A-, N-, and C-prototypes to 2 in order to control the number of prototypes.
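The module-wise learning-rate schedule for the small datasets can be summarized as a simple piecewise function of the epoch. This is a schematic with the decay points taken from the text; step-wise (rather than gradual) decay is our assumption:

```python
# Schematic of the module-wise schedule described above: the AFEs start at
# lr 0.1 and decay to 0.001 at epoch 35; the prototypes are only established
# and trained from epoch 35 (lr 0.1) and decay to 0.01 at epoch 85.
def afe_lr(epoch):
    return 0.1 if epoch < 35 else 0.001

def prototype_lr(epoch):
    if epoch < 35:
        return 0.0          # prototypes are not established yet
    return 0.1 if epoch < 85 else 0.01
```

In a PyTorch implementation this would typically be realized with per-module parameter groups and a step scheduler, but the plain functions above capture the schedule itself.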
For both HPNs, the thresholds $t_A$, $t_N$, and $t_C$ are selected by cross-validation on $\{0.01, 0.05, 0.1, 0.15, 0.2, 0.25, 0.3, 0.35, 0.4\}$. According to the experimental results, the thresholds can be chosen from a wide range. Finally, we choose $t_A=t_N=0.3$ and $t_C=0.4$.
Figure \ref{fig:module_structure} describes the shapes of the modules in HPNs. The input data are $\mathrm{d_{input}}$-dimensional node feature vectors. With the $\mathrm{AFEs}$, the data are transformed into atomic embeddings, which are $\mathrm{d_a}$-dimensional vectors. They are then mapped to A-prototypes of the same dimension. After that, the matched A-prototypes are further mapped to the higher-level N- and C-prototypes with the corresponding Fc layers. The dimensions of the N- and C-prototypes are $\mathrm{d_n}$ and $\mathrm{d_c}$. Finally, all the prototypes of the different levels are concatenated into a single vector of length $\mathrm{(l_a'+l_r')d_a+d_n+d_c}$ and fed into the classifier (Fc layer) for classification. The number of logits output by the classifier is $\mathrm{num\_{class}}$, the number of classes in each task. Some existing continual learning works with the task-incremental setting expand the number of logits output by the classifier to $\mathrm{num\_{class}}\cdot \mathrm{num\_{task}}$, where $\mathrm{num\_task}$ is the total number of tasks the model is going to encounter. However, we argue that this is a very impractical scenario for the following reasons: 1. A continual learning model should not know the number of tasks to learn in advance; therefore $\mathrm{num\_{task}}$ is unknown. 2. Setting the number of logits to $\mathrm{num\_{class}}\cdot \mathrm{num\_{task}}$ causes the memory consumption of the model to grow linearly with the number of tasks to learn, which is highly undesirable for continual learning models. Therefore, we set the number of output logits to $\mathrm{num\_{class}}$ and force different tasks to share the output head, which makes learning harder but is much more practical.
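The size bookkeeping for the shared-head classifier can be made concrete with a small helper. This is a sketch; the function and variable names are ours, not from the released code:

```python
# The concatenated prototype vector fed to the classifier has length
# (l_a' + l_r') * d_a + d_n + d_c, and the classifier outputs num_class
# logits regardless of the number of tasks (no per-task head expansion).
def classifier_input_dim(l_a, l_r, d_a, d_n, d_c):
    return (l_a + l_r) * d_a + d_n + d_c

# Values used for the small datasets in the text: l_a'=1, l_r'=1,
# and 16-dimensional A-, N-, and C-prototypes.
dim = classifier_input_dim(1, 1, 16, 16, 16)   # -> 64
```

For the small-dataset configuration the classifier therefore maps a 64-dimensional vector to $\mathrm{num\_{class}}$ logits, and this size is constant in the number of tasks.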
The baselines have different settings. For the baselines with a GCN backbone, 16 hidden units is approximately the best choice. For the GAT-based baselines, we set the number of heads and the number of hidden units to 8. For GIN, the number of hidden units is 32. These settings are applied to most datasets. For the few datasets on which the baselines cannot perform well with them, we further tune the models carefully to obtain better results.
\section{Additional Experimental Results and Detailed Analysis}\label{sec:experiments}
To further validate our proposed model, in this section we report additional experimental results, extending the experiments in the paper to more datasets. We also give a more detailed analysis of the results, which was omitted from the paper due to space limitations.
\begin{figure}[h]
\centering
\centering
\includegraphics[height=4cm]{ARS_ogbn_products.jpg}
\caption{Dynamics of ARS for continual learning tasks on OGB-Products dataset.}
\label{fig:ARS_Memory}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[height=4cm]{upper_bound_compare_2OGB_datasets.jpg}
\caption{Dynamics of memory consumption of HPNs on both OGB-Arxiv and OGB-Products.}
\label{fig:Param_amount}
\end{figure}
\iffalse
\subsection{Comparisons with Baseline Methods on Additional Datasets}
In this subsection, we include the comparison results with baseline methods on the other four datasets including Wisconsin, Cornell, Texas, and Actor, the results are shown in Table \ref{tab:comparison}.
\begin{table*}[]
\centering
\caption{Performance comparisons between HPNs and baseline models on four other datasets.}
\begin{tabular}{c|c||cc|cc|cc|cc}
\toprule
\multirow{2}{2.5em}{\textbf{C.L.T.}} & \multirow{2}{2em}{\textbf{Base}} & \multicolumn{2}{c|}{Actor}& \multicolumn{2}{c|}{Wisc.}& \multicolumn{2}{c|}{Corn.}& \multicolumn{2}{c}{Texas}\\ \cline{3-10}
& & AM & FM & AM & FM & AM & FM & AM & FM \\ \bottomrule\toprule
\multirow{3}{2.5em}{None} & GCN & 43.63\% & -9.11\% & 74.71\% & -9.52\% & 34.92\% & -68.00\% & 80.15\% & -12.00\%\\
& GAT & 53.10\% & -4.33\% & 78.82\% & -4.76\% & 46.77\% & -56.00\% & 74.62\% & -16.00\%\\
& GIN & 45.51\% & -8.88\% & 76.44\% & -4.76\% & 34.92\% & -64.00\% & 78.31\% & -12.00\%\\ \midrule
\multirow{3}{2.5em}{EWC} & GCN & 44.29\% & -7.06\% & 74.71\% & -9.52\% & 38.92\% & -60.00\% & 82.15\% & -8.00\%\\
& GAT & 54.23\%&-2.51\% &78.82\% &-4.76\%&48.92\%&-44.00\%&78.62\%&-8.00\%\\
& GIN & 47.61\%&-7.29\%&77.09\%&0.00\%&33.23\%&-52.00\%&78.31\% & -12.00\%\\ \midrule
\multirow{3}{2.5em}{LwF} & GCN & 49.77\%&-3.65\%&84.65\%&-9.52\%&62.77\%&-20.00\% &56.31\%&-52.00\%\\
& GAT & 52.82\%&-6.15\%&81.20\%&0.00\%&46.77\%&-52.00\%&78.46\%&-20.00\%\\
& GIN &49.70\%&-4.10\%&74.71\% &0.00\%&34.92\%&-64.00\%&34.92\%&-52.00\%\\ \midrule
\multirow{3}{2.5em}{GEM} & GCN &52.66\%&+3.91\%&88.71\%&6.25\%&65.08\% &+0.00\%&80.46\%&+4.00\%\\
& GAT &54.31\% &-2.05\% &77.09\% &-9.52\% &65.08\% &-4.00\% &76.77\% & -4.00\% \\
& GIN &45.23\%&-11.16\%&72.78\%&-6.25\%&76.62\%&+4.00\%&72.77\%&+8.00\%\\ \midrule
\multirow{3}{2.5em}{MAS} & GCN &50.73\%&-1.59\%&77.75\%& -9.52\%&61.23\%&+0.00\%&78.46\%&+0.00\%\\
& GAT &53.67\%&-1.60\%&76.01\%&-6.25\%&62.62\%&-32.00\%&84.46\%&+0.00\%\\
& GIN &51.69\%&-0.69\%&77.75\%&-4.76\%&63.08\%&+0.00\%&82.86\%&+0.00\%\\ \midrule
\multirow{3}{2.5em}{ERGN.} & GCN &52.44\%&+0.69\%&74.71\%&-9.52\%&34.92\%&-68.00\%&80.15\%&-12.00\%\\
& GAT &51.40\%&-7.29\%&78.16\%&-9.52\%&48.77\%&-52.00\%&80.46\%&-12.00\%\\
& GIN &42.72\%&-12.98\%&76.44\% & -4.76\%&34.92\%&-64.00\%&78.31\%&-12.00\%\\ \midrule
\multirow{3}{2.5em}{TWP} & GCN &50.59\%&-4.79\%&66.09\%&-14.28\%&56.77\%&-32.00\%&82.31\% &-4.00\%\\
& GAT &54.01\%&-2.05\%&80.54\%&-9.52\%&46.92\%&-48.00\%&78.62\%&-8.00\%\\
& GIN &49.91\%&-3.64\%&71.17\%&-6.25\%&51.08\%&-24.00\%&74.62\%&-4.00\%\\ \midrule
\multirow{3}{2.5em}{Join.} & GCN &57.01\% & +0.00\% & 96.72\% & +0.00\% &88.09\% & +0.00\% &86.43\% &+0.00\% \\
& GAT &57.15\% & +0.00\% & 95.94\% & +0.00\% & 89.46\% & +0.00\% & 86.30\% & +0.00\%\\
& GIN &56.97\% & +0.00\% & 96.88\% & +0.00\% & 88.82\% &+0.00\% &86.95\% &+0.00\% \\
\bottomrule\toprule
\multicolumn{2}{c||}{\textbf{HPNs}} &\textbf{56.80\%} & \textbf{ -0.92\%} &\textbf{96.55\%} &\textbf{+0.00\% } &\textbf{88.23\%} &\textbf{+0.00\% } &\textbf{86.31\%} &\textbf{+2.77\%}\\ \bottomrule
\end{tabular}
\label{tab:comparison}
\end{table*}
\iffalse
\begin{figure}
\centering
\begin{minipage}{0.24\textwidth}
\centering
\includegraphics[width=1.\textwidth]{n_proto_small.png}
\end{minipage}\hfill
\begin{minipage}{0.24\textwidth}
\centering
\includegraphics[width=1.\textwidth]{n_proto_dyna_small.png}
\end{minipage}
\caption{Study on impact of $t_A$ on number of different prototypes of DHPNs-C ($left$) and DHPNs ($right$) over Cora dataset.}
\label{fig:n_proto}
\end{figure}
\begin{figure*}[]
\centering
\begin{minipage}{0.32\textwidth}
\centering
\includegraphics[width=1.\textwidth]{hierarchical_proto_visual01.png}
\end{minipage}\hfill
\begin{minipage}{0.32\textwidth}
\centering
\includegraphics[width=1.\textwidth]{hierarchical_proto_visual0123.png}
\end{minipage}\hfill
\begin{minipage}{0.32\textwidth}
\centering
\includegraphics[width=1.\textwidth]{hierarchical_proto_visual012345.png}
\end{minipage}\hfill
\caption{Visualization of hierarchical prototype representations of test data of different tasks from Cora via TSNE.}
\label{fig:tsne}
\end{figure*}
\fi
\fi
\begin{figure*}[h]
\centering
\begin{minipage}{0.33\textwidth}
\centering
\includegraphics[width=1.\textwidth]{AM_ta_citeseer.png}
\end{minipage}\hfill
\begin{minipage}{0.33\textwidth}
\centering
\includegraphics[width=1.\textwidth]{FM_ta_citeseer.png}
\end{minipage}\hfill
\begin{minipage}{0.33\textwidth}
\centering
\includegraphics[width=1.\textwidth]{n_proto_citeseer.png}
\end{minipage}
\caption{Left and Middle: AM and FM change when $t_A$ varies on Citeseer. Right: impact of $t_A$ on the number of prototypes over Citeseer.}
\label{fig:parameter_sensitivity}
\end{figure*}
\begin{figure*}[h]
\centering
\begin{minipage}{0.29\textwidth}
\centering
\includegraphics[width=1.\textwidth]{tsne_citeseer_cls_01.jpg}
\end{minipage}\hfill
\begin{minipage}{0.29\textwidth}
\centering
\includegraphics[width=1.\textwidth]{tsne_citeseer_cls_0123.jpg}
\end{minipage}\hfill
\begin{minipage}{0.4\textwidth}
\centering
\includegraphics[width=1.\textwidth]{tsne_citeseer_cls_012345.jpg}
\end{minipage}
\caption{Visualization of hierarchical prototype representations of test data of different tasks from Citeseer via t-SNE.}
\label{fig:tsne_citeseer}
\end{figure*}
\subsection{Additional Results on Ablation Study}
In this subsection, we provide the ablation study results on another large dataset, OGB-Arxiv; the results are shown in Tables \ref{tab:ablation_proto} and \ref{tab:ablation_loss}.
\begin{table}[]
\centering
\caption{Ablation study on prototypes of different levels of prototypes over OGB-Arxiv.}
\begin{tabular}{c|c|c|c|c|c}
\toprule
Conf. &A-p. & N-p. & C-p. & AM\% & FM\% \\ \midrule
1 &\checkmark & & & 82.1$\pm$0.9 & +0.0$\pm$1.1 \\ \midrule
2 &\checkmark &\checkmark & & 83.6$\pm$1.2 & +0.2$\pm$0.9 \\ \midrule
3 &\checkmark &\checkmark &\checkmark & 85.8$\pm$0.7 & +0.6$\pm$0.9 \\ \bottomrule
\end{tabular}
\label{tab:ablation_proto}
\end{table}
\begin{table}[]
\centering
\caption{Ablation study on different loss terms over OGB-Arxiv.}
\begin{tabular}{c|c|c|c|c|c}
\toprule
Conf. &$\mathcal{L}_{cls}$ & $\mathcal{L}_{div}$ & $\mathcal{L}_{dis}$ & AM\% & FM\% \\ \midrule
1 &\checkmark & & &79.6$\pm$1.5 &-0.3$\pm$1.3 \\ \midrule
2 &\checkmark & \checkmark & &82.3$\pm$1.0 &+0.4$\pm$0.9 \\ \midrule
3 &\checkmark & &\checkmark &80.7$\pm$1.2 &+0.0$\pm$1.4 \\ \midrule
4 &\checkmark &\checkmark &\checkmark & 85.8$\pm$0.7 & +0.6$\pm$0.9 \\ \bottomrule
\end{tabular}
\label{tab:ablation_loss}
\end{table}
From Table \ref{tab:ablation_proto}, we observe that on the large dataset, the improvements brought by the high-level prototypes are more significant than on the small datasets (reported in Table 2 in the paper). Similarly, in Table \ref{tab:ablation_loss}, the influence of the different loss terms is also more prominent than in the results reported in Table 3 in the paper. These results imply that the proposed hierarchical prototypes and loss terms are effective, and that their effectiveness becomes increasingly significant on larger datasets with richer information.
\subsection{Additional Results on Learning Dynamics}
Besides the learning dynamics on OGB-Arxiv provided in Section 3.5 of the paper, we further provide the results on OGB-Products, as shown in Figure \ref{fig:ARS_Memory}.
The learning dynamics shown in Figure \ref{fig:ARS_Memory} are similar to those on OGB-Arxiv shown in the paper. The only difference is that OGB-Products contains more tasks and the ARS of the baselines decreases more than on OGB-Arxiv.
\subsection{Additional Results on Parameter Sensitivity}
In Figure \ref{fig:parameter_sensitivity}, we further provide the parameter sensitivity results on the Citeseer dataset. The results show similar patterns to the results provided in the paper.
\subsection{Additional Results on Memory Consumption}
In Figure \ref{fig:Param_amount}, we show how the memory consumption changes with the number of tasks on both OGB-Arxiv and OGB-Products. We use the same model configuration for both datasets; thus the upper bounds of the memory consumption are the same. From Figure \ref{fig:Param_amount}, we can see that the memory consumption of HPNs on both datasets increases slowly and stays far below the upper bound. Although OGB-Products is more than ten times larger than OGB-Arxiv, the memory used on OGB-Products is only slightly more than on OGB-Arxiv, demonstrating the memory efficiency of HPNs.
\subsection{Additional Results on Visualization}
In Figure \ref{fig:tsne_citeseer}, we visualize the hierarchical prototype representations of the test nodes on Citeseer by t-SNE~\cite{van2008visualizing}. Similarly to the visualization results shown in the paper, Figure \ref{fig:tsne_citeseer} sequentially shows the classes of task 1 (left), tasks 1--2 (middle), and tasks 1--3 (right). The examples belonging to different classes are denoted with different shapes and colors, as shown in the legend on the right.
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\bibliographystyle{IEEEtran}
\section{Introduction}
\label{sec:intro}
One of the fundamental problems in machine learning is to learn a proper low-dimensional representation efficiently that captures the intrinsic structures of data and facilitates downstream tasks~\cite{bengio2013representation, devlin2018bert, korbar2018cooperative, icml2020simclr}.
Data mixing, as a means of generating symmetric mixed data and labels, largely improves the quality of the discriminative representations learned by deep neural networks (DNNs) in various scenarios~\cite{zhang2017mixup, kim2020puzzle, dabouei2021supermix, 2021iclrimix}.
Despite its general application, the policy of the generation process in data mixing requires an \textit{explicit} design.
For example, in computer vision, mixed samples are constructed by linear convex interpolation or random local patch replacement of sample pairs~\cite{zhang2017mixup, yun2019cutmix}.
\begin{figure}[t]
\centering
\includegraphics[width=0.95\linewidth]{Figs/automix_v2-intro.pdf}
\vspace{-6pt}
\caption{Mixed samples with green boxes rely on labels, while those with blue boxes do not. A solid line indicates that the local relationship directly influences the mixed sample, and a dashed line indicates that samples of other classes act as a global constraint on the current mixed sample. Compared to handcrafted mixup policies that focus only on the local sample pair, SAMix can generate semantic mixed samples by exploiting both local and global information and has no dependency on labels.}
\label{fig:intro}
\vspace{-15pt}
\end{figure}
In addition, classification labels can be used to generate task-relevant mixed samples that match the labels, such as by offline maximization of the saliency information (e.g., from gradCAM~\cite{selvaraju2017gradcam}) of the related samples~\cite{kim2020puzzle, kim2021comixup, uddin2020saliencymix}, or by sample interpolation via adversarial training~\cite{eccv2020automix}.
These handcrafted mixing policies, shown in the red box of Figure~\ref{fig:intro}, however, fix the objective of the mixed data generation task to the sample pairs used to generate the mixed data, and the label-dependent approaches are limited to supervised learning (SL) scenarios; labels may not be available in other scenarios such as self-supervised learning (SSL)~\cite{nips2020byol, he2020momentum}.
Two open problems remain: \textbf{how to design a learnable, scenario-agnostic mixup policy, and how to choose a proper mixup generation objective that preserves the task-relevant semantic correspondence.}
Most current works address simplified versions of these questions by directly transferring linear mixup methods into contrastive learning~\cite{2021iclrimix, nips2020mochi, 2021cvprunmix}. Although simple, these approaches do not exploit the underlying structure of the data manifold.
In this paper, we propose SAMix (Figure \ref{fig:intro}), which stands for \textbf{S}cenario-\textbf{A}gnostic \textbf{Mix}up, a framework that employs a \textit{Mixer} to generate mixed samples adaptively, either at the \textit{instance level} or the \textit{cluster level}. To guarantee that the task-relevant information is captured by the Mixer, we propose the \textit{$\eta$-balanced mixup loss}, which treats mixup generation and classification differently from the local and global perspectives.
Furthermore, for SSL scenarios, we propose a simple and effective \textit{cross-view pipeline}, which significantly improves the performance of mixup methods without changing the original algorithms. Extensive experiments on both SL and SSL demonstrate the effectiveness and generalizability of SAMix.
Our contributions are as follows:
\begin{itemize}
\item We design a learnable mixed-sample generator, Mixer, which adopts mixing attention and non-linear content modeling to capture task-relevant information.
\item We summarize the mixup generation objective as optimizing the local smoothness subject to global discrimination and propose $\eta$-balanced mixup loss.
\item We analyze the properties of mixup classification and propose an efficient cross-view pipeline for SSL.
\item Combining the above, we propose a scenario-agnostic mixup training framework for supervised and self-supervised learning and conduct comprehensive experiments demonstrating its state-of-the-art performance.
\end{itemize}
\section{Related Work}
\textbf{Mixup of class level} \quad
There are three types of class-level mixup: linear mixup of input space~\cite{zhang2017mixup, yun2019cutmix, hendrycks2019augmix, harris2020fmix, qin2020resizemix} and latent space~\cite{verma2019manifold, faramarzi2020patchup}, saliency-based~\cite{uddin2020saliencymix, kim2020puzzle, kim2021comixup}, and learning mixup generation and classification end-to-end~\cite{liu2021automix, dabouei2021supermix}. SAMix belongs to the third type and learns both class- and instance-level mixup relationships. See \ref{app_sec:relatedwork} for details.
\textbf{Mixup of instance level} \quad
A complementary route to better instance-level representation learning is to apply mixup in SSL scenarios. Most approaches are limited to linear mixup methods, such as using MixUp and CutMix in the input space or latent-space mixup~\cite{2021cvprunmix} for SSL without ground-truth labels. MoChi~\cite{nips2020mochi} proposes mixing negative samples in the embedding space to increase the number of hard negatives and improve CL. i-Mix~\cite{2021iclrimix} and BSIM~\cite{2020bsim} demonstrate how to regularize CL by mixing instances in the input or latent space. We introduce SAMix for SSL, which adaptively learns the mixup policy online.
\section{Problem Definition}
\label{sec:problem}
Given a finite set of i.i.d.\ samples, $X=[x_i]_{i=1}^{n} \in \mathbb{R}^{D\times n}$, each sample $x_{i}\in \mathbb{R}^{D}$ is drawn from a mixture of, say, $C$ distributions $\mathcal{D}=\{ \mathcal{D}_{c}\}_{c=1}^C$. Our basic assumption for discriminative representations is that each component distribution $\mathcal{D}_{c}$ has a relatively low-dimensional intrinsic structure, \textit{i.e.,} the distribution $\mathcal{D}_{c}$ is supported on a sub-manifold $\mathcal{M}_c$ of dimension $d_c\ll D$. The distribution $\mathcal{D}$ of $X$ is thus supported on the union of these sub-manifolds, $\mathcal{M} = \cup_{c=1}^C\mathcal{M}_c$. In a discriminative problem, we seek a low-dimensional representation $z_i\in \mathcal{M}$ of $x_i$ by learning a continuous mapping modeled by a network encoder, $f_{\theta}(x):x\longmapsto z$ with parameter $\theta\in \Theta$, which captures the intrinsic structure of $\mathcal{M}$ and facilitates the discriminative tasks.
\subsection{Discriminative Representation Learning}
\label{subsec:discriminative}
\textbf{Parametric training with class supervision.}\quad
\textit{Some supervised class information} is available to ease the discriminative task in practical scenarios, such as the class labels in supervised learning (SL) or the cluster number $C$ in clustering (C). Here, we assume that a one-hot label $y_i\in \mathbb{R}^C$ for each sample $x_i$ can somehow be obtained, $Y=[y_1, y_2, ..., y_n]\in \mathbb{R}^{C\times n}$. We refer to labels generated or adjusted during training as pseudo labels (PL), and to the fixed ones as ground-truth labels (L). Notice that each component $\mathcal{D}_{c}$ is considered \textit{separated} according to $Y$ in this scenario.
Then, a \textit{parametric} classifier, $g_{\omega}(z):z\longmapsto p$ with parameter $\omega\in \Omega$, can be learned to map the representation $z_i$ of each sample to its class label $y_i$ by predicting the probability $p_{c|i}$ of $z_i$ being assigned to the $c$-th class using the softmax criterion, $p_{c|i} = \frac{\exp(w_{c}^T z_{i})}{\sum_{j=1}^{C}\exp(w_{j}^T z_{i})}$, where $w_c$ is the weight vector for class $c$, and $w_c^Tz_i$ measures the similarity between $z_i$ and class $c$. The learning objective is to minimize the cross-entropy (CE) loss between $y_{i}$ and $p_{i}$,
\begin{equation}
\vspace{-1pt}
\ell^{\,CE}(y_i, p_i) = - y_{i} \log p_{i}.
\vspace{-1pt}
\label{eq:param}
\end{equation}
As in the information bottleneck (IB)~\cite{Tishby2000TheIB}, optimizing Eq.~\ref{eq:param} is equivalent to maximizing the mutual information $I(z, y)$ between $z$ and $y$ (the task-relevant information) while minimizing $I(x, z)$ between $x$ and $z$ (the task-irrelevant information): $\mathop{\max}\limits_{\theta, \omega} I(z, y) - \beta I(x, z)$, where $\beta > 0$.
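For concreteness, the softmax criterion and the cross-entropy loss of Eq.~\ref{eq:param} can be sketched numerically. This is a minimal illustration, not the paper's implementation:

```python
import math

# Minimal sketch of the parametric classifier objective: p_{c|i} is the
# softmax over the logits w_c^T z_i, and the loss is -sum_c y_c log p_c.
def softmax_ce(logits, onehot):
    m = max(logits)                                 # subtract max for stability
    exps = [math.exp(l - m) for l in logits]
    z = sum(exps)
    probs = [e / z for e in exps]
    return -sum(y * math.log(p) for y, p in zip(onehot, probs))

# A confident correct prediction has a lower loss than a uniform one.
loss_good = softmax_ce([5.0, 0.0, 0.0], [1, 0, 0])
loss_unif = softmax_ce([0.0, 0.0, 0.0], [1, 0, 0])   # equals log(3)
```

Minimizing this loss over $(\theta, \omega)$ pushes the logit of the correct class above the others, which is exactly the separation the IB view formalizes.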
\begin{figure}[t]
\centering
\includegraphics[width=0.95\linewidth]{Figs/automix_v2-problem.pdf}
\vspace{-6pt}
\caption{
The class $a$ and $b$ are constrained on class sub-manifolds $\mathcal{M}_a$ and $\mathcal{M}_{b}$, while a neighborhood system $\mathcal{S}_i$ is defined by augmented views of $x_i$. We hope that inter-class mixed samples can prompt more discriminative representations.
}
\label{fig:problem}
\vspace{-12pt}
\end{figure}
\begin{figure*}[t]
\centering
\includegraphics[width=1.0\linewidth]{Figs/automix_v2-info.pdf}
\vspace{-18pt}
\caption{(a) Graphical models and information diagrams of the \textit{same-view} and \textit{cross-view} training pipelines for instance-level mixup. Taking the \textit{cross-view} pipeline as an example, $x_{m} = h(x_{i}^{\tau_2}, x_{j}^{\tau_2}, \lambda)$, the $\lambda$ region denotes the information partition corresponding to $\lambda I(z_{m}^{\tau_q}, z_{i}^{\tau_k})$ and the $1-\lambda$ region that corresponding to $(1-\lambda) I(z_{m}^{\tau_q}, z_{j}^{\tau_k})$. (b) Linear evaluation (Tiny top-1 accuracy) with and without the cross-view pipeline, and with and without combining the original and mixup infoNCE losses. (c) A heat map of linear evaluation (Tiny top-1 accuracy) showing the effects of using MixUp and CutMix as the inter-class (y-axis) and intra-class (x-axis) mixup with various $\alpha$.}
\vspace{-12pt}
\label{fig:pipeline}
\end{figure*}
\textbf{Non-parametric training as instance discrimination.}\quad
Complementary to the above parametric settings, \textit{non-parametric} approaches are usually adopted in unsupervised (label-free) scenarios. Due to the \textit{lack of class information}, an instance discrimination task can be designed based on an assumption of local compactness: the low-dimensional neighborhood system $\mathcal{S}_i\in \mathbb{R}^{d_i}$ of $x_i$ is invariant to a set of predefined augmentations $\mathcal{T}$, i.e., $x_i\in \mathcal{S}_i$ iff $\tau(x_i)\in \mathcal{S}_i$ for all $\tau\in \mathcal{T}$.
We mainly discuss contrastive learning (CL) and take MoCo~\cite{he2020momentum} as an example. Consider a pair of augmented images $(x_{i}^{\tau_{q}}, x_{i}^{\tau_k})$ from the same instance $x_{i}\in \mathbb{R}^{C\times H\times W}$: local compactness is introduced by aligning the encoded representation pair $(z_{i}^{\tau_q},z_{i}^{\tau_k})$ from $f_{\theta,q}$ and the momentum encoder $f_{\theta,k}$, and global uniformity is enforced by contrasting $z_{i}^{\tau_q}$ against a momentum dictionary of encoded keys from other images, $\{z_{j}^{\tau_k}\}_{j=1}^{K}$, where $K$ denotes the length of the dictionary. This is achieved by the popular non-parametric CL loss, infoNCE~\cite{oord2019CPC}:
\begin{equation}
\vspace{-2pt}
\ell^{\,NCE}(z_{i}^{\tau_q}, z_{i}^{\tau_k}) = -\log\frac{\exp(z_{i}^{\tau_q} z_{i}^{\tau_k}/t)}{\sum^K_{j=1}\exp(z_{i}^{\tau_q} z_{j}^{\tau_k}/t)},
\vspace{-2pt}
\label{eq:infonce}
\end{equation}
where $t$ is a temperature hyper-parameter. Notice that an MLP projection neck, $g_{\omega}(z):z\longmapsto p$, has been commonly adopted in CL to calculate Eq.~\ref{eq:infonce} since~\cite{icml2020simclr}, \textit{i.e.,} $\ell^{NCE}(p_i^{\tau_q}, p_{i}^{\tau_k})$ with $p = g(z)/\Vert g(z)\Vert$. We still use the notation $z$ for the SSL representation for simplicity. Compared to Eq.~\ref{eq:param}, minimizing Eq.~\ref{eq:infonce} equivalently maximizes a lower bound on $I(z_i^{\tau_q}, z_i^{\tau_k})$, as discussed in~\cite{oord2019CPC, eccv2020CMC}, \textit{i.e.,} $I(z_i^{\tau_q}, z_i^{\tau_k}) \ge \log(K) - \ell^{NCE}(z_i^{\tau_q}, z_i^{\tau_k})$.
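A toy numeric version of the infoNCE loss in Eq.~\ref{eq:infonce} illustrates how alignment with the positive key lowers the loss. This is a sketch only; the vectors are assumed to be already $\ell_2$-normalized:

```python
import math

# Toy infoNCE: the query q is contrasted against its positive key and K
# negative keys with temperature t; the loss is -log softmax of the positive.
def info_nce(q, pos, negs, t=0.1):
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    logits = [dot(q, pos) / t] + [dot(q, n) / t for n in negs]
    m = max(logits)                                  # for numerical stability
    z = sum(math.exp(l - m) for l in logits)
    return -(logits[0] - m - math.log(z))

q = [1.0, 0.0]
# Positive aligned with the query -> near-zero loss.
loss_aligned  = info_nce(q, pos=[1.0, 0.0], negs=[[0.0, 1.0]] * 4)
# Positive orthogonal, negatives aligned -> large loss.
loss_shuffled = info_nce(q, pos=[0.0, 1.0], negs=[[1.0, 0.0]] * 4)
```

The lower bound quoted above then reads $I \ge \log(K) - \ell^{NCE}$: a small loss against many negatives certifies high mutual information between the two views.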
\subsection{Mixup for Discriminative Representation}
\label{subsec:mixup_problem}
Recall the two sub-tasks in mixup training: (a) \textit{mixed data generation} and (b) \textit{mixup classification}. For sub-task (a), two mixup functions are defined, $h(\cdot)$ and $v(\cdot)$, which generate the mixed samples and the corresponding mixed labels with a mixing ratio $\lambda \sim Beta(\alpha, \alpha)$. Given the mixed data, (b) defines a mixup training objective to optimize the inter-class discriminative relationships.
\textbf{Mixup classification as the main task.}\quad
We first define two types of mixup classification objective $\mathcal{L}_{\theta, \omega}$ for the parametric and non-parametric training scenarios: \textit{class-level} and \textit{instance-level} mixup. For parametric training, given two randomly selected data pairs, $(x_i,y_i)$ and $(x_j,y_j)$, the mixed data is generated as $x_{m} = h(x_i, x_j, \lambda)$ and $y_{m} = v(y_i, y_j, \lambda)$. The class-level mixup objective is:
\begin{equation}
\vspace{-2pt}
\ell^{\,CE}(p_{m}) = \lambda \ell^{\,CE}(y_{i}, p_{m}) + (1-\lambda) \ell^{\,CE}(y_{j}, p_{m}).
\label{eq:mixup_cls}
\vspace{-2pt}
\end{equation}
Notice that we fix $v(\cdot)$ to be linear interpolation in our discussion, \textit{i.e.}, $v(y_i, y_j, \lambda) \triangleq \lambda y_i + (1-\lambda)y_j$. Correspondingly, we take $h(\cdot)$ to be a pixel-wise mixing policy with the element-wise product $\odot$, as in most input mixup methods~\cite{zhang2017mixup, yun2019cutmix, kim2020puzzle}, \textit{i.e.,} $x_{m} = s_{i}\odot x_{i} + s_{j}\odot x_{j}$, where $s_{i} \in \mathbb{R}^{H\times W}$ is a pixel-wise mask with each coordinate $s_{w,h}\in [0,1]$ and $s_{j} = 1-s_{i}$. Similarly to Eq.~\ref{eq:mixup_cls}, we can generate $x_{m}$ from a pair of randomly selected samples $(x_{i}, x_{j})$ and formulate the mixup infoNCE loss for instance-level mixup:
\begin{equation}
\vspace{-2pt}
\ell^{NCE}(z_{m}) = \lambda\ell^{NCE}(z_{m}, z_{i}) + (1-\lambda)\ell^{NCE}(z_{m}, z_{j}),
\vspace{-2pt}
\label{eq:mixnce}
\end{equation}
where $z_{m}$, $z_{i}$, and $z_{j}$ denote the representations of $x_{m}$ and the corresponding source instances.
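The pixel-wise mixing policy $x_{m} = s_{i}\odot x_{i} + s_{j}\odot x_{j}$ above admits two familiar special cases, MixUp (a constant mask $s\equiv\lambda$) and CutMix (a binary box mask). The following is a schematic on toy grayscale grids, not an implementation of the learned Mixer:

```python
# Pixel-wise mixing with a mask s_i and its complement s_j = 1 - s_i.
def mix(xi, xj, mask):
    H, W = len(xi), len(xi[0])
    return [[mask[h][w] * xi[h][w] + (1 - mask[h][w]) * xj[h][w]
             for w in range(W)] for h in range(H)]

def mixup_mask(H, W, lam):
    # MixUp: every pixel is the same convex weight lambda.
    return [[lam] * W for _ in range(H)]

def cutmix_mask(H, W, box):
    # CutMix: mask is 0 inside the box (those pixels come from x_j), 1 outside.
    h0, h1, w0, w1 = box
    return [[0.0 if h0 <= h < h1 and w0 <= w < w1 else 1.0
             for w in range(W)] for h in range(H)]

xi = [[1.0] * 4 for _ in range(4)]                     # all-ones toy image
xj = [[0.0] * 4 for _ in range(4)]                     # all-zeros toy image
xm = mix(xi, xj, mixup_mask(4, 4, 0.3))                # every pixel is 0.3
xc = mix(xi, xj, cutmix_mask(4, 4, (0, 2, 0, 2)))      # top-left 2x2 from x_j
```

The learned Mixer replaces these fixed masks with a content-dependent $s_i$ predicted from $(x_i, x_j, \lambda)$.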
\textbf{Mixup generation as the auxiliary task.}\quad
Different from the learning object on the \textit{fixed} data $X$ in Sec.~\ref{subsec:discriminative}, the performance of mixup classification is depending on the sub-task (a) because the mixup policies $h(\cdot)$ and $v(\cdot)$ reflect a certain relationship between the two classes (sub-manifolds). Therefore, we regard (b) as an auxiliary task to (a) and model $h(\cdot)$ as a sub-network $\mathcal{M}_{\phi}$ with the parameter $\phi\in \Phi$, called Mixer (see Sec.~\ref{subsec:mixblock}). to generate a pixel-wise mask $s\in \mathbb{R}^{H\times W}$ for sample mixup. Intuitively, the mixup mask $s_{i}$ is directly related to $\lambda$ and the contents of $(x_i,x_j)$, $\mathcal{M}_{\phi}$ also takes $l$-th layer feature maps $z^{l}\in \mathbb{R}^{C_{l}\times H_{l}\times W_{l}}$ as the input,
$\mathcal{M}_{\phi}: x_{i},x_{j},z_{i}^{l},z_{j}^{l},\lambda \longmapsto x_{m}$. The generation process of $\mathcal{M}_{\phi}$ can be supervised by a mixup classification loss (see Sec.~\ref{subsec:loss}), denoted $\mathcal{L}_{\phi}^{cls}$, and a mask loss designed for the generated mask $s_{i}$ (see Sec.~\ref{subsec:mixblock}), denoted $\mathcal{L}_{\phi}^{mask}$. Formally, we have the mixup generation loss $\mathcal{L}_{\phi} = \mathcal{L}_{\phi}^{cls} + \mathcal{L}_{\phi}^{mask}$, and the final learning objective for SAMix is,
\begin{equation}
\vspace{-2pt}
\mathop{\min}\limits_{\theta, \omega, \phi} \mathcal{L}_{\theta, \omega} + \mathcal{L}_{\phi}.
\label{eq:total}
\vspace{-2pt}
\end{equation}
Both $\mathcal{L}_{\theta, \omega}$ and $\mathcal{L}_{\phi}$ can be optimized alternately in a unified framework using a momentum pipeline with the stop-gradient operation~\cite{nips2020byol, liu2021automix}, as shown in Figure~\ref{fig:mixer} (left).
\section{SAMix for Discriminative Representations}
\label{sec:method}
\subsection{Instance-level Mixup Classification}
\label{subsec:properties}
\textbf{Cross-view training pipeline.}\quad
We begin by analyzing the learning objective of the mixup classification sub-task. According to IB, the objective of class-level mixup classification is consistent with Eq.~\ref{eq:param}, \textit{i.e.,} $\mathop{\max}\limits_{\theta, \omega} I(z_{m}, y_{m})$. However, there are two possible objectives for instance-level mixup in Eq.~\ref{eq:mixnce}: the \textit{same-view} objective, $\mathop{\max}\limits_{\theta, \omega} \lambda I(z_{m}^{\tau_q}, z_{i}^{\tau_q}) + (1-\lambda) I(z_{m}^{\tau_q}, z_{j}^{\tau_q})$, and the \textit{cross-view} objective, $\mathop{\max}\limits_{\theta, \omega} \lambda I(z_{m}^{\tau_q}, z_{i}^{\tau_k}) + (1-\lambda) I(z_{m}^{\tau_q}, z_{j}^{\tau_k})$. As shown in Figure~\ref{fig:pipeline} (a), we hypothesize that the cross-view objective yields better CL performance than the same-view one, because the mutual information between two augmented views should be reduced while keeping task-relevant information~\cite{nips2020infomin, iclr2021ssl_multiview}. To verify this hypothesis, we design an experiment with various mixup methods using $\alpha=1$ on STL-10 with ResNet-18 (detailed in Sec.~\ref{exp:ssl} and \ref{app_sec:exp_settings}), as shown in Figure~\ref{fig:pipeline} (b), and conclude: (i) Degenerate solutions occur when using the same-view pipeline, while the cross-view pipeline outperforms the CL baseline; this is mainly caused by degenerate mixed samples that contain parts of the same view of the two source images. Therefore, we propose the cross-view pipeline for instance-level mixup, where $z_{i}$ and $z_{j}$ in Eq.~\ref{eq:mixnce} are representations of $x_{i}^{\tau_{k}}$ and $x_{j}^{\tau_{k}}$. (ii) Combining the original and mixup infoNCE losses, $\ell^{NCE}(z_{i}^{\tau_q}, z_{i}^{\tau_k}) + \ell^{NCE}(z_{m})$, surpasses using either alone, which indicates that mixup enables $f_{\theta}$ to learn relationships among local neighborhood systems.
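The cross-view mixup infoNCE above can be sketched in a few lines of numpy; the function names and the explicit negative set are our own simplifications, the embeddings are assumed L2-normalized, and the usual temperature factor is omitted:

```python
import numpy as np

def info_nce(q, k_pos, k_negs):
    """InfoNCE for one query: -log softmax score of the positive key.
    q, k_pos: (D,) L2-normalized embeddings; k_negs: (K, D) negatives."""
    logits = np.concatenate(([q @ k_pos], k_negs @ q))
    logits = logits - logits.max()  # numerical stability
    return -np.log(np.exp(logits[0]) / np.exp(logits).sum())

def mixup_info_nce_cross_view(z_m, z_i_k, z_j_k, k_negs, lam):
    """Mixup infoNCE with the cross-view pipeline: the mixed query z_m
    (from view tau_q) is matched against keys z_i, z_j from the other
    view tau_k, weighted by the mixing ratio lambda."""
    return (lam * info_nce(z_m, z_i_k, k_negs)
            + (1.0 - lam) * info_nce(z_m, z_j_k, k_negs))
```

The same-view variant would simply pass keys from view $\tau_q$ instead, which is the configuration that degenerates in the experiment above.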
\textbf{Properties of mixup classification.}\quad
We then discuss the properties of instance-level mixup classification from two aspects: \textit{inter-class} and \textit{intra-class} mixup. In the case of class-level mixup, compared to the CE loss in Eq.~\ref{eq:param}, the mixup CE enhances the inter-class relationship as a soft-label classification task while keeping the intra-class relationship compact. In instance-level mixup, we hypothesize that class information is still decisive for treating the inter- and intra-class relationships differently: \textit{strong} inter-class mixup for discrimination and \textit{weak} intra-class mixup for compactness. To verify this hypothesis, we conduct a CL experiment on Tiny-ImageNet (Tiny) with different intensities of inter- and intra-class mixup (detailed in \ref{app_sec:exp_settings}). Notice that mixed samples from CutMix~\cite{yun2019cutmix} are more discriminative than those from MixUp~\cite{zhang2017mixup}, and $\alpha \in \{0.2,1,2,4\}$ represents the mixup intensity from weak to strong. As shown in Figure~\ref{fig:pipeline} (c), the top performance is achieved by using MixUp with large $\alpha$ for intra-class mixup and CutMix with small $\alpha$ for inter-class mixup, which supports our hypothesis.
\subsection{Learning Objective for Mixup Generation}
\label{subsec:loss}
\textbf{Properties of mixup generation.}\quad
Unlike mixup classification, we decompose the objective of mixup generation into \textit{local} and \textit{global} terms, as shown in Table~\ref{tab:analysis_mixup}. We argue that mixup generation aims to \textit{optimize the local term subject to the global term}. In the case of class-level mixup, for example, we optimize $(1-\lambda) I(x_{m}, y_i) = \lambda I(x_{m}, y_j)$ (local) while $I(x_{m}, y_{c})$ is minimized for all classes $c$ that belong to neither $y_i$ nor $y_j$ (global). Formally, assuming $y_i$ and $y_j$ belong to the class $a$ and the class $b$, we refer to the local term in Eq.~\ref{eq:mixup_cls} as the parametric binary cross-entropy mixup loss (pBCE) for SL:
\begin{equation}
\vspace{-1pt}
\ell_{+}^{\,CE}(p_{m}) = -\lambda y_{i,a}\log p_{m,a} - (1-\lambda) y_{j,b}\log p_{m,b},
\label{ep:pBCE}
\vspace{-1pt}
\end{equation}
where $y_{i,a}=1$ and $y_{j,b}=1$ denote the one-hot labels for the classes $a$ and $b$. Notice that we use $\ell_{+}$ and $\ell_{-}$ to represent the local and global parts.
Symmetrically, we have non-parametric binary cross-entropy mixup loss (BCE) for CL:
\begin{equation}
\vspace{-1pt}
\ell_{+}^{NCE}(z_{m}) = -\lambda \log\frac{\exp(z_{m} z_{i})}{\exp(z_{m}z_i) + \exp(z_{m}z_j)} - (1-\lambda) \log\frac{\exp(z_{m}z_j)}{\exp(z_{m}z_i) + \exp(z_{m}z_j)}.
\label{ep:BCE}
\vspace{-1pt}
\end{equation}
According to IB, the global term serves as a constraint to compress the task-irrelevant information in $x_{m}$. We verify the \textit{necessity} of this global constraint by visualizing the top-1 and top-2 accuracy on mixed data with $\lambda \in [0,1]$ in Figure~\ref{fig:mix_acc}. Notice that the generation objective in PuzzleMix~\cite{kim2020puzzle} is to maximize the saliency information of $x_{i}$ and $x_{j}$ in $x_{m}$, which is similar to the pBCE loss. Obviously, using the mixup CE loss as the objective for Mixer (see Sec.~\ref{subsec:mixblock}) yields better global discrimination than using the pBCE loss.
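To make the local/global split concrete, the numpy sketch below contrasts the full mixup CE, whose softmax normalizes over all classes (local plus global term), with a local-only variant normalized over the two source classes; this is our own illustration of the idea, not the paper's implementation:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def mixup_ce(logits, a, b, lam):
    """Full mixup CE: the softmax runs over ALL classes, so the global
    term discriminates x_m against every class outside {a, b}."""
    p = softmax(logits)
    return -lam * np.log(p[a]) - (1.0 - lam) * np.log(p[b])

def pbce(logits, a, b, lam):
    """Local-only variant: normalization restricted to the two source
    classes {a, b}, i.e., the global constraint is dropped."""
    p = softmax(logits[np.array([a, b])])
    return -lam * np.log(p[0]) - (1.0 - lam) * np.log(p[1])
```

Restricting the normalization can only raise the probabilities of $a$ and $b$, so the local-only loss lower-bounds the full CE: nothing pushes probability mass away from the other classes, which is exactly the missing global discrimination.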
\input{Tabs/analysis_mixup}
\begin{figure}[b]
\centering
\vspace{-5pt}
\includegraphics[width=0.95\linewidth]{Figs/automix_v2-mix_acc.pdf}
\caption{Left: Top-1 accuracy of mixed data on CIFAR-100. Prediction is counted as correct if the top-1 prediction belongs to $\{y_i, y_j\}$. Right: Top-2 accuracy of mixed data. Prediction is counted as correct if the top-2 predictions are equal to $\{y_i, y_j\}$.}
\label{fig:mix_acc}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{Figs/automix_v2_loss_ablation_tiny.pdf}
\vspace{-14pt}
\caption{Analysis of the learning objective of Mixer on Tiny ImageNet with ResNet-18. The left figure shows results of various losses on the SL task (\textcolor{RoyalBlue}{left y axis}) and the CL task (\textcolor{YellowOrange}{right y axis}). The right one shows the effect of using various negative weights $\eta$.}
\vspace{-12pt}
\label{fig:loss_analysis}
\end{figure}
\textbf{Objective for mixup generation.}\quad
Since both the local and global terms contribute to mixup generation, we discuss the importance of each term in the SL and SSL tasks to design a balanced learning objective. We first analyze the properties of both terms with two hypotheses: (i) the local term $\ell_{+}$ \textit{determines} the generation performance; (ii) the global term $\ell_{-}$ improves global discrimination but is sensitive to class information. To verify these properties, we design an empirical experiment based on the proposed Mixer on Tiny (detailed in \ref{app_sec:exp_settings}). Notice that the main difference between the mixup CE (Eq.~\ref{eq:mixup_cls}) and infoNCE (Eq.~\ref{eq:mixnce}) is whether parametric class centroids are adopted. Therefore, we compare the intensity of class information among unlabeled (UL), pseudo labels (PL), and ground-truth labels (L). Notice that PL is generated by ODC~\cite{cvpr2020odc} with the cluster number $C$, and class supervision can be imported into the mixup infoNCE loss by filtering out negative samples with PL or L as in~\cite{nips2020SupCon}, denoted infoNCE (PL) and infoNCE (L). As shown in the left of Figure~\ref{fig:loss_analysis}, our hypotheses are verified in the SL task (the performance decreases from CE(L) to pBCE(L) and CE(PL) losses), but the opposite result appears in the CL task: the performance increases from InfoNCE(UL) to InfoNCE(L) as false negative samples are removed~\cite{2021iclrHCL, nips2020SupCon}, while trivial solutions occur using BCE(UL) (as shown in Figure~\ref{fig:vis_main}). We argue that the local objective of instance-level mixup relies heavily on its global constraint, while the global constraint depends on class information.
Thus, we propose to explicitly import class information as PL for instance-level mixup to generate ``strong'' inter-class mixed samples while preserving intra-class compactness. Practically, we provide two versions of the learning objective: the mixup CE loss in Eq.~\ref{eq:mixup_cls} with PL as the clustering version (SAMix-C), and the mixup infoNCE loss in Eq.~\ref{eq:mixnce} as the infoNCE version (SAMix-I). We estimate the MI between $x_{m}$ and $x_{i}$ v.s. the given $\lambda$ for instance-level mixup in Figure~\ref{fig:scatter} (d), which shows that mixed samples from SAMix-C contain more task-relevant information than those from SAMix-I.
\textbf{\textit{$\eta$-balanced mixup loss}.}\quad
Then, we hypothesize that the best performing mixed samples will be close to the sweet spot: achieving $\lambda$ smoothness locally between two classes or neighborhood systems while globally discriminating from other classes or instances. We propose an \textit{$\eta$-balanced mixup loss} as the objective of mixup generation,
\begin{equation}
\vspace{-1pt}
\ell_{\eta} = \ell_{+} + \eta \ell_{-},\ \eta \in [0,1].
\label{eq:eta_loss}
\vspace{-1pt}
\end{equation}
We analyze the performance of using various $\eta$ in Eq.~\ref{eq:eta_loss} on Tiny, as shown in the right of Figure~\ref{fig:loss_analysis}, and find that using $\eta=0.5$ performs best on both the SL and CL tasks. In the end, we provide the learning objective,
$\mathcal{L}_{\phi}^{cls} \triangleq \ell^{\,CE}_{+} + \eta \ell^{\,CE}_{-}$, with L for class-level mixup and with PL for SAMix-C, $\mathcal{L}_{\phi}^{cls} \triangleq \ell^{NCE}_{+} + \eta \ell^{NCE}_{-}$ for SAMix-I (detailed in~\ref{app_sec:implementation}).
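The $\eta$-balanced combination itself is a one-liner; a sketch with hypothetical scalar loss inputs:

```python
def eta_balanced_loss(l_local, l_global, eta=0.5):
    """eta-balanced mixup loss: l_eta = l_+ + eta * l_-, with eta in
    [0, 1] trading off lambda-smoothness between the two sources
    (local term) against discrimination from the rest (global term)."""
    if not 0.0 <= eta <= 1.0:
        raise ValueError("eta must lie in [0, 1]")
    return l_local + eta * l_global
```

Setting $\eta=0$ recovers the purely local (pBCE/BCE-style) objective, while larger $\eta$ weights the global constraint more heavily.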
\begin{figure}[b]
\vspace{-8pt}
\centering
\includegraphics[width=\linewidth]{Figs/automix_v2-framework.pdf}
\vspace{-14pt}
\caption{Overall training framework of SAMix (left) and network architecture of Mixer $\mathcal{M}_{\phi}$ (right). $X$ and $Z$ denote the input sample and corresponding feature maps from the Momentum Encoder. The blue modules on the left are not updated by the back-propagation in the dashed box. That means the two parallel pipelines optimize Mixer and Encoder alternately and update each other's frozen modules by weight sharing and moving average (see~\ref{app_subsec:SL}).}
\label{fig:mixer}
\end{figure}
\subsection{Mixer for Mixup Generation}
\label{subsec:mixblock}
Inspired by the self-attention mechanism~\cite{nonlocal2018cvpr}, we design Mixer $\mathcal{M}_{\phi}$ to solve three sub-problems: (a) how to encode the mixing ratio $\lambda$, (b) how to model the mixup relationship between two samples, and (c) how to encode the prior knowledge of mixup.
\textbf{Adaptive $\lambda$ encoding and mixing attention.}\quad
Since mixup generation is directly guided by the randomly sampled mixing ratio $\lambda$, the predicted mask should be proportional to $\lambda$. Here, we regard $\lambda$ as prior knowledge and propose an \textit{adaptive $\lambda$ encoding}, $z^{l}_{i, \lambda} = (1+\gamma \lambda)z_i^{l}$, where $\gamma$ is a learnable scalar constrained to $[0,1]$. Symmetrically, we have $z^{l}_{j, 1-\lambda} = (1+\gamma(1 - \lambda))z_j^{l}$. Notice that $\gamma$ is initialized to $0$ at the beginning of training.
Then, Mixer models the mixing relationship between $z^{l}_{i,\lambda}$ and $z^{l}_{j,1-\lambda}$ using self-attention and predicts $s_{i}$ by three steps:
(1) it models the content of $z_i$ by a sub-module, $C_{i} = \mathcal{C}(z_{i})$, where $C_{i}\in \mathbb{R}^{H_l\times W_l}$ is a 2D tensor like the final mask $s_{i}$.
(2) it computes the mixing relationship between two samples using a new \textit{mixing attention}: we concatenate $(z^l_{i,\lambda},z^l_{j,1-\lambda})$ as the input, $\tilde z^{l} = {\rm{concat}}(z^l_{i,\lambda}, z^l_{j,1-\lambda})$, and compute the attention weight as, $P_{i,j} = {\rm{softmax}}(\frac{(W_{P}\tilde z^{l})^T \otimes W_{P} \tilde z^{l}}{\mathcal{N}(\tilde z^{l})})$, where $\mathcal{N}(\tilde z^{l})$ denotes a normalization factor and $\otimes$ is matrix multiplication. Notice that the mixing attention provides both the cross-attention between $z^l_{i,\lambda}$ and $z^l_{j,1-\lambda}$ and the self-attention of each feature itself.
(3) it predicts the probability of each coordinate belonging to $x_i$ as, $s_{i} = U(\sigma(P_{i,j} \otimes C_{i}))$, where $U(\cdot)$ is an upsampling and $\sigma(\cdot)$ denotes sigmoid activation.
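The three steps above can be sketched loosely in numpy under simplifying shape assumptions: feature maps are flattened to $(N, C)$ with $N = H_l W_l$, the content module here sees the concatenated features rather than $z^l_{i,\lambda}$ alone, the normalization factor is taken as $\sqrt{D}$, the upsampling $U(\cdot)$ is omitted, and $W_p$, $W_1$, $W_2$ are stand-ins for the learned projections:

```python
import numpy as np

def softmax_rows(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def mixer_mask(z_i, z_j, lam, W_p, W_1, W_2, gamma=0.5):
    """Loose sketch of Mixer: lambda encoding, mixing attention,
    non-linear content modeling, then a per-position mask in (0, 1)."""
    N = z_i.shape[0]
    # (1) adaptive lambda encoding of both feature maps
    zi = (1.0 + gamma * lam) * z_i
    zj = (1.0 + gamma * (1.0 - lam)) * z_j
    zt = np.concatenate([zi, zj], axis=0)                # (2N, C)
    # (2) mixing attention: self- and cross-attention over all 2N positions
    q = zt @ W_p                                         # (2N, D)
    att = softmax_rows(q @ q.T / np.sqrt(q.shape[1]))    # (2N, 2N)
    # (3) non-linear content scores (two-layer MLP ~ C_NC), attention-
    # weighted and squashed to (0, 1); keep positions of z_i's grid
    c = np.maximum(zt @ W_1, 0.0) @ W_2                  # (2N, 1)
    return sigmoid((att @ c)[:N, 0])                     # (N,) mask values
```

This is only meant to make the tensor shapes and data flow concrete; the actual Mixer operates on convolutional feature maps with $1\times 1$ convolutions, batch normalization, and dropout as described below.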
\begin{figure*}[t]
\centering
\includegraphics[width=1.0\linewidth]{Figs/automix_v2-vis_main.pdf}
\vspace{-14pt}
\caption{Visualization and comparison of mixed samples from Mixer in various learning scenarios on IN-1k and iNat2017. Note that $\lambda=0.5$ and $\eta=0.5$ if the balance coefficient $\eta$ is included. CL(C) and CL(I) denote using SAMix-C and SAMix-I separately.}
\vspace{-12pt}
\label{fig:vis_main}
\end{figure*}
\textbf{Non-linear content modeling.}\quad
In common cases, the content sub-module $\mathcal{C}$ is a linear projection in self-attention, $C_{i} = W_{z} \tilde z^{l}$, where $W_{z}$ denotes a $1\times 1$ convolution. However, we find that the training process of Mixer is unstable with the linear $\mathcal{C}$ in the early period and is sometimes trapped in trivial solutions (especially in SSL tasks), e.g., all coordinates of $s_{i}$ predicted as a constant. As shown in~\ref{app_sec:exp_settings}, we visualize $C_{i}$ and $P_{i,j}$ to compare trivial results with non-trivial ones, and find that a constant $s_{i}$ is usually caused by a constant $C_{i}$. We hypothesize that trivial solutions happen earlier in the linear $\mathcal{C}$ than in $P_{i,j}$ because it linearly projects the high-dimensional feature to a single dimension. Hence, we design a \textit{non-linear content modeling} sub-module $\mathcal{C}_{NC}$ that contains two $1\times 1$ convolution layers with a batch normalization layer and a ReLU layer in between, as shown in Figure~\ref{fig:mixer} (right). To increase the robustness and randomness of mixup training, we add a Dropout layer with a dropout ratio of $0.1$ in $\mathcal{C}_{NC}$. Formally, Mixer $\mathcal{M}_{\phi}$ can be written as,
\begin{equation}
\vspace{-2pt}
s_{i} = U\bigg (
\sigma \Big (
{\rm{softmax}}\big (\frac{(W_{P}\tilde z^{l})^T \otimes W_{P}\tilde z^{l}}{\mathcal{N}(\tilde z^{l})}\big )
\otimes \mathcal{C}_{NC}(z^l_{i,\lambda})\Big )\bigg ).
\label{eq:mixer}
\vspace{-2pt}
\end{equation}
\textbf{Prior knowledge of mixup.}\quad
Moreover, we summarize some prior knowledge commonly adopted in input space mixup as two aspects: (a) adjusting the mean of $s_{i}$ correlated with $\lambda$, and (b) balancing the smoothness of local image patches while maintaining discrimination of $x_{m}$.
As for the first aspect, a mask loss is introduced to align the mean of $s_{i}$ with $\lambda$, $\ell_{mean} = \max(| \lambda - \mu_{i} | - \epsilon, 0)$, where $\mu_{i} = \frac{1}{HW}\sum_{h,w} s_{i,h,w}$ is the mask mean and $\epsilon=0.1$ is a margin. Meanwhile, we propose a test-time \textit{$\lambda$ adjusting} method: assuming $\mu_{i}<\lambda$, we rescale each coordinate of $s_{i}$ as $\hat s_{i} = \frac{\lambda}{\mu_{i}} s_{i}$, and $\hat s_{j} = 1 - \hat s_{i}$.
As for the second aspect, we adopt bilinear upsampling as $U(\cdot)$ for smoother masks and propose a variance loss to encourage the sparsity of the learned masks, $\ell_{var} = \frac{1}{HW} \sum_{w,h}(\mu_{i} - s_{w,h})^2$. Finally, we summarize the mask loss as $\mathcal{L}_{\phi}^{mask} = \beta(\ell_{mean} + \ell_{var})$, where $\beta$ is a balancing weight. We initialize $\beta$ to $0.1$ and linearly decrease it to $0$ during training.
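The mask-loss terms and the test-time adjustment can be sketched as follows; \texttt{adjust\_lambda} encodes our reading that the mask is rescaled so its mean moves toward $\lambda$, with clipping to keep every coordinate in $[0,1]$:

```python
import numpy as np

def mask_losses(s_i, lam, eps=0.1):
    """The two mask-loss terms: a hinge aligning the mask mean with
    lambda (within margin eps), and the variance of the mask values."""
    mu = s_i.mean()
    l_mean = max(abs(lam - mu) - eps, 0.0)
    l_var = ((s_i - mu) ** 2).mean()
    return l_mean, l_var

def adjust_lambda(s_i, lam, eps=1e-8):
    """Test-time lambda adjusting: rescale s_i so its mean moves to
    lambda; s_j follows as 1 - s_i so the pair still sums to one."""
    mu = s_i.mean()
    s_hat = np.clip(s_i * (lam / max(mu, eps)), 0.0, 1.0)
    return s_hat, 1.0 - s_hat
```

In training, these terms are combined with the weight $\beta$ as described above, and the adjustment is applied only at test time.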
\subsection{Discussion and Visualization of SAMix}
\label{subsec:discussion}
To show the influence of local and global constraints on mixup generation, we visualize mixed samples that are generated from Mixer on various scenarios in Figure~\ref{fig:vis_main}.
\textbf{Class-level.}\quad In the supervised classification task, the global constraint localizes key features by discriminating against other classes, while the local term tends to preserve more information related to the current two samples and classes. For example, comparing the mixed results with and without the $\eta$-balanced mixup loss, we find that Mixer focuses on pixels of the foreground target. When the global constraint is balanced ($\eta=0.5$), the foreground target is retained more completely. Importantly, our designed Mixer remains invariant to the background for the more challenging fine-grained classification and preserves discriminative features.
\textbf{Instance-level.}\quad Since no label supervision is available for SSL, the global and local terms are transformed from the class to the instance level. Similar results are shown in the top row; the only difference is that SAMix-C has a more precise target correspondence than SAMix-I by introducing class information via PL, which further indicates the importance of class information. If we only focus on local relationships, Mixer can only generate mixed samples with fixed patterns (the last two results in the top row). These failure cases imply the effect of global constraints.
\section{Experiments}
\label{sec:expt}
We first evaluate SAMix for supervised learning (SL) in Sec.~\ref{exp:sl_cls} and self-supervised learning (SSL) in Sec.~\ref{exp:ssl}, and then perform ablation studies in Sec.~\ref{exp:ablation}. Seven benchmarks are used for evaluation: CIFAR-100~\cite{krizhevsky2009learning}, Tiny-ImageNet (Tiny)~\cite{2017tinyimagenet}, ImageNet-1k (IN-1k)~\cite{russakovsky2015imagenet}, STL-10~\cite{coates2011analysis}, CUB-200~\cite{wah2011caltech}, FGVC-Aircraft (Aircraft)~\cite{maji2013fine}, and iNaturalist2017 (iNat2017)~\cite{cvpr2018inaturalist}. All experiments are conducted with PyTorch, and we report the \textit{mean of 3 trials}. SAMix uses $\alpha=2$ and the feature layer $l=3$ for all datasets.
\subsection{Evaluation on Supervised Image Classification}
\label{exp:sl_cls}
This subsection evaluates the performance gain of SAMix for fine-grained, small- and large-scale image classification tasks.
We adopt ResNet~\cite{cvpr2016resnet} (R) and ResNeXt(32x4d) (RX)~\cite{xie2017aggregated} as backbone networks. We use the SGD optimizer with a cosine scheduler~\cite{loshchilov2016sgdr} for all SL experiments. For a fair comparison, a grid search is performed over the hyper-parameter $\alpha\in \{0.2, 0.5, 1.0, 2.0, 4.0\}$ for all mixup methods. We use $\alpha=1$ and follow the other hyper-parameters of the original papers by default. Notice that $*$ denotes unpublished work on arXiv. Following~\cite{liu2021automix}, the momentum training coefficient is gradually increased from $0.999$ to $1$ following a cosine curve by default. The \textit{median} of validation results over the last 10 training epochs is recorded for each trial.
\textbf{Setups}\quad
For a fair comparison, we adopt the following basic training settings identically for all methods in SL tasks. We adopt \textit{RandomFlip} and \textit{RandomCrop} with 4 pixels reflect padding as basic data augmentations for CIFAR-100 while \textit{RandomFlip} and \textit{RandomResizedCrop} for other datasets.
For CIFAR-100, the SGD weight decay is $0.0001$, momentum is 0.9, initial learning rate $lr=0.1$, and train 800 epochs with the batch size of 100. For Tiny, we train 400 epochs with the initial learning rate $lr=0.2$ and the batch size of 100. For IN-1k, the total epoch is 300 with the initial learning rate $lr=0.1$ and the batch size of 256.
For small-scale fine-grained classification tasks on CUB-200 and Aircraft, we use pre-trained models on IN-1k provided by PyTorch~\cite{nips2019pytorch} as initialization, and train 200 epochs with the initial learning rate $lr=0.001$, the weight decay $0.0005$, and the batch size of 16. For the large scale fine-grained recognition task on iNat2017, we train 100 epochs with $lr=0.1$.
\textbf{Comparison and discussion}\quad
On the small-scale and fine-grained classification tasks, as shown in Table~\ref{tab:cls_s}, SAMix consistently improves classification performance over the previous best algorithm AutoMix on CIFAR-100, CUB-200, and Aircraft by improving the design of Mixer. Notice that SAMix significantly improves performance on CUB-200 and Aircraft by 1.24\% and 0.78\% based on ResNet-18, and continues to expand its dominance on Tiny by bringing 1.23\% and 1.40\% improvements on ResNet-18 and ResNeXt-50. As for the large-scale classification task, SAMix also outperforms all existing methods on IN-1k. Specifically, SAMix improves over the second best method by 0.25\% with ResNet-50.
\subsection{Evaluation on Self-supervised Learning}
\label{exp:ssl}
In this subsection, we evaluate SAMix on SSL tasks, pre-training on STL-10, Tiny, and IN-1k. All compared methods are based on MoCo.V2 except SwAV~\cite{nips2020swav}. We adopt all hyper-parameter configurations from MoCo.V2 for pre-training on these datasets unless otherwise stated.
We compare SAMix along two dimensions in CL: (i) comparison with other mixup variants based on our proposed cross-view pipeline, with the predefined cluster information given (denoted by C) or not, as shown in Table~\ref{tab:stl_ssl};
(ii) longitudinal comparison with CL methods that utilize \textit{input space} (\textit{i.e.,} Mixup and CutMix) and \textit{input+latent} space mixup strategies, including MoCHi~\cite{nips2020mochi}, i-Mix~\cite{2021iclrimix}, Un-Mix~\cite{2021cvprunmix}, and WBSIM~\cite{2020bsim}, as shown in Table~\ref{tab:imagenet_ssl}. Notice that $\star$ denotes our modified methods (PuzzleMix$^*$ uses PL and Inter-Intra$^{*}$ combines inter-class CutMix with intra-class MixUp in Sec.~\ref{sec:method}), $\dagger$ denotes results reproduced with the official source code, $\ddagger$ denotes originally reported results, and other results are reproduced by us (see \ref{app_sec:implementation}).
\input{Tabs/sl_cifar_finegrained}
\textbf{Linear Classification}\quad
Following the linear classification protocol proposed in MoCo, we train a linear classifier on top of the frozen backbone features with the supervised training set. We train for 100 epochs using SGD with a batch size of 256. The initial learning rate is set to $0.1$ for Tiny and STL-10 and $30$ for IN-1k, decayed by $0.1$ at epochs 30 and 60.
As shown in Table~\ref{tab:stl_ssl}, SAMix-I outperforms all the linear mixup methods by a large margin while SAMix-C surpasses the saliency-based PuzzleMix when PL is available.
Meanwhile, Table~\ref{tab:imagenet_ssl} demonstrates that both SAMix-I and SAMix-C surpass other CL methods combined with predefined mixup. Overall, SAMix-C yields the best performance in CL tasks, which indicates that it provides more task-relevant information with the help of the class information in PL.
\textbf{Downstream Tasks}\quad
\label{exp:ssl_detection}
Following the transfer learning protocol in MoCo, we evaluate the transferability of the representations learned by the compared methods on object detection tasks on PASCAL VOC~\cite{2010pascalvoc} and COCO~\cite{eccv2014MSCOCO} using Detectron2~\cite{wu2019detectron2}. In Table~\ref{tab:detection}, we fine-tune Faster R-CNN~\cite{ren2015faster} with the R50-C4 backbone from pre-trained models on VOC~\textit{trainval07+12} and evaluate on the VOC~\textit{test2007} set. Similarly, Mask R-CNN~\cite{2017iccvmaskrcnn} is fine-tuned (2$\times$ schedule) on COCO~\textit{train2017} and evaluated on the COCO~\textit{val2017} set.
\input{Tabs/ssl_det_seg}
\vspace{-10pt}
\subsection{Ablation Study}
\label{exp:ablation}
We conduct ablation studies in five aspects:
(i) \textbf{Mixer}: Table~\ref{tab:ablation_mixblock} verifies the effectiveness of each proposed module in both SL and CL tasks on Tiny. The first three modules enable Mixer to model the non-linear mixup relationship, while the next two modules enhance Mixer especially in CL tasks.
(ii) \textbf{Learning objectives}: We analyze the effectiveness of proposed $\ell_{\eta}$ with other losses, as shown in Table~\ref{tab:ablation_loss}. Using $\ell_{\eta}$ for the mixup CE and infoNCE consistently improves the performance both for the CL task on STL-10 and Tiny.
(iii) \textbf{Time complexity analysis}: Figure~\ref{fig:scatter} (c) shows a computational analysis conducted on the SL task on IN-1k using the 100-epoch protocol~\cite{wong2020fast}. It shows that SAMix achieves a superior accuracy v.s. training-time trade-off compared to other methods.
(iv) \textbf{Hyper-parameter}: Figure~\ref{fig:scatter} (a) and (b) show ablation results of the hyper-parameter $\alpha$ and the clustering number $C$ for SAMix-C. We empirically choose $\alpha$=2.0 and $C=200$ as default.
(v) \textbf{MI of mixup}: Figure~\ref{fig:scatter} (d) shows estimated $I(x_{m}, x_{i})$ on Tiny by MINE~\cite{icml2018mine} (detailed in \ref{app_sec:exp_settings}), which indicates SAMix-C provides more task-relevant information than SAMix-I and other methods.
\begin{figure}[h]
\vspace{-10pt}
\begin{minipage}{0.49\linewidth}
\centering
\input{Tabs/ablation_loss}
\end{minipage}
~\begin{minipage}{0.49\linewidth}
\centering
\input{Tabs/ablation_mixblock}
\end{minipage}
\vspace{-12pt}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=0.95\linewidth]{Figs/automix_v2-scatter.pdf}
\vspace{-10pt}
\caption{Ablation of SAMix. (a) hyper-parameter $\alpha$ for mixup (b) the cluster number $C$ for SAMix-C in CL tasks on Tiny, (c) top-1 accuracy v.s. training time on IN-1k based on ResNet-50 with 100 epochs, and (d) estimated MI v.s. mixing ratio $\lambda$ on Tiny.}
\label{fig:scatter}
\vspace{-10pt}
\end{figure}
\section{Limitations and Discussion}
In this work, with the motivation of designing a scenario-agnostic mixup framework, we study the objective of mixup generation as a local-emphasized and global-constrained sub-task for learning adaptive mixup policies at both the class and instance levels. SAMix provides a unified framework for improving discriminative representation learning based on our proposed learnable Mixer and cross-view pipeline. As a limitation, Mixer only takes two samples as input, and conflicts arise when their task-relevant information overlaps. A Mixer that handles more than two samples or is conflict-aware is a promising avenue for future research.
\section{Appendix}
We first provide implementation details for supervised (SL) and self-supervised learning (SSL) tasks in~\ref{app_sec:implementation}, and detailed experiment settings for Sec.~\ref{sec:method} in~\ref{app_sec:exp_settings}. Then, we visualize mixed samples in~\ref{app_sec:visualize}. Moreover, we provide more experiment results in \ref{app_sec:results} and detailed related work in \ref{app_sec:relatedwork}.
\subsection{Implementation Details}
\label{app_sec:implementation}
\subsubsection{Basic Settings}
\label{app_subsec:basic}
\paragraph{Reproduction details.}
We use MMclassification\footnote{https://github.com/open-mmlab/MMclassification} and OpenSelfSup\footnote{https://github.com/open-mmlab/OpenSelfSup} in PyTorch~\cite{nips2019pytorch} as our code-base for both supervised image classification and contrastive learning (CL) tasks. Except results marked by $\dag$ and $\ddagger$, we reproduce most experiment results of compared methods, including Mixup~\cite{zhang2017mixup}, CutMix~\cite{yun2019cutmix}, ManifoldMix~\cite{verma2019manifold}, SaliencyMix~\cite{uddin2020saliencymix}, FMix~\cite{harris2020fmix}, and ResizeMix~\cite{qin2020resizemix}.
\vspace{-8pt}
\paragraph{Dataset information.}
We briefly introduce image datasets used in Sec.~\ref{sec:expt}:
(1) CIFAR-100~\cite{krizhevsky2009learning} contains 50k training images and 10k test images of 100 classes. (2) ImageNet-1k (IN-1k)~\cite{krizhevsky2012imagenet} contains 1.28 million training images and 50k validation images of 1000 classes. (3) Tiny-ImageNet (Tiny)~\cite{2017tinyimagenet} is a rescaled version of ImageNet-1k, which has 100k training images and 10k validation images of 200 classes.
(4) The STL-10~\cite{coates2011analysis} benchmark is designed for semi- or unsupervised learning and consists of 5k labeled training images for 10 classes, 100k unlabeled training images, and a test set of 8k images.
(5) CUB-200-2011 (CUB)~\cite{wah2011caltech} contains over 11.8k images from 200 wild bird species for fine-grained classification. (6) FGVC-Aircraft (Aircraft)~\cite{maji2013fine} contains 10k images of 100 classes of aircrafts.
(7) iNaturalist2017 (iNat2017)~\cite{cvpr2018inaturalist} is a large-scale fine-grained classification benchmark consisting of 579.2k images for training and 96k images for validation from over 5k different wild species.
(8) PASCAL VOC~\cite{2010pascalvoc} is a classical object detection and segmentation dataset containing 16.5k images for 20 classes. (9) COCO~\cite{eccv2014MSCOCO} is an object detection and segmentation benchmark containing 118k scenic images with many objects for 80 classes.
\vspace{-5pt}
\subsubsection{Supervised Image Classification}
\label{app_subsec:SL}
\paragraph{Implementation of SAMix.}
We provide detailed implementations of SAMix in SL tasks. As shown in Figure~\ref{fig:mixer} (left), we adopt the momentum pipeline~\cite{nips2020byol, liu2021automix} to optimize $\mathcal{L}_{\theta,\omega}$ for mixup classification and $\mathcal{L}_{\phi}$ for mixup generation in Eq.~\ref{eq:total} in an end-to-end manner:
\begin{align}
\vspace{-2pt}
\theta_{q}^{t},\omega_{q}^{t} &\leftarrow \mathop{{\rm argmin}}\limits_{\theta,\omega} \mathcal{L}_{\theta_{q}^{t-1}, \omega_{q}^{t-1}}, \label{eq:qem} \\
\phi^{t} &\leftarrow \mathop{{\rm argmin}}\limits_{\phi} \mathcal{L}_{\theta_{k}^{t}, \omega_{k}^{t}} + \mathcal{L}_{\phi^{t-1}}, \label{eq:kem}
\vspace{-3pt}
\end{align}
where $t$ is the iteration step, $\theta_{q}, \omega_{q}$ and $\theta_{k}, \omega_{k}$ denote the parameters of online and momentum networks, respectively. The parameters in the momentum networks are an exponential moving average of the online networks with a momentum decay coefficient $m$, taking $\theta_{k}$ as an example,
\begin{equation}
\vspace{-2pt}
\theta_{k}^{t}\leftarrow m\theta_{k}^{t-1} + (1-m)\theta_{q}^{t}.
\label{eq:momentum}
\vspace{-2pt}
\end{equation}
The SAMix training process is summarized in four steps: (1) using the momentum encoder to generate the feature maps $Z^{l}$ for Mixer $\mathcal{M}_{\phi}$; (2) generating $X_{mix}^{q}$ and $X_{mix}^{k}$ by Mixer for training the online networks and the Mixer, respectively; (3) training the online networks by Eq.~\ref{eq:qem} and the Mixer by Eq.~\ref{eq:kem} separately; (4) updating the momentum networks by Eq.~\ref{eq:momentum}.
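The momentum update of Eq.~\ref{eq:momentum} amounts to an elementwise exponential moving average over parameter tensors; a minimal sketch:

```python
import numpy as np

def ema_update(theta_k, theta_q, m=0.999):
    """Momentum (EMA) update: each momentum parameter tensor is an
    exponential moving average of its online counterpart, with decay m.
    theta_k, theta_q: lists of parameter arrays of matching shapes."""
    return [m * k + (1.0 - m) * q for k, q in zip(theta_k, theta_q)]
```

With $m$ close to $1$, the momentum networks evolve slowly and provide stable targets for both the mixup classification and generation losses.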
\vspace{-8pt}
\paragraph{Hyper-parameter settings.}
As for the hyper-parameters of SAMix, we follow the basic settings of AutoMix for both SL and SSL tasks: SAMix adopts $\alpha=2$, the feature layer $l=3$, bilinear upsampling, and the weight $\beta=0.1$, which linearly decays to $0$. We use $\eta=0.5$ for small-scale datasets (CIFAR-100, Tiny, CUB and Aircraft) and $\eta=0.1$ for large-scale datasets (IN-1k and iNat2017).
As for other methods, PuzzleMix~\cite{kim2020puzzle}, Co-Mixup~\cite{kim2021comixup}, and AugMix~\cite{hendrycks2019augmix} are reproduced with their official implementations, using $\alpha=1, 2, 1$, respectively, for all datasets.
We provide dataset-specific hyper-parameter settings for our reproduced mixup methods. For CIFAR-100, Mixup and ResizeMix use $\alpha=1$, CutMix, FMix, and SaliencyMix use $\alpha=0.2$, and ManifoldMix uses $\alpha=2$. For Tiny, IN-1k, and iNat2017, ManifoldMix uses $\alpha=0.2$, and the remaining methods adopt $\alpha=1$ for medium and large backbones (e.g., ResNet-50); note that all these methods use $\alpha=0.2$ only for ResNet-18. For small-scale fine-grained datasets (CUB-200 and Aircraft), SaliencyMix and FMix use $\alpha=0.2$, ManifoldMix uses $\alpha=0.5$, while the rest use $\alpha=1$.
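For context, all of these $\alpha$ values parameterize the Beta distribution from which the mixing ratio is drawn, $\lambda \sim \mathrm{Beta}(\alpha, \alpha)$, as in vanilla Mixup; a minimal sketch (function names are ours, and flattened lists stand in for image tensors):

```python
import random

def sample_lambda(alpha):
    """Draw a mixing ratio lambda ~ Beta(alpha, alpha).
    Small alpha (e.g. 0.2) concentrates lambda near 0 or 1 (weak mixing);
    large alpha (e.g. 2) concentrates lambda near 0.5 (strong mixing)."""
    return random.betavariate(alpha, alpha)

def mixup_pixels(x_i, x_j, lam):
    """Convex interpolation of two flattened samples (vanilla Mixup)."""
    return [lam * a + (1.0 - lam) * b for a, b in zip(x_i, x_j)]

lam = sample_lambda(2.0)
x_mix = mixup_pixels([1.0, 0.0], [0.0, 1.0], lam)  # == [lam, 1 - lam]
```

The same $\lambda$ is used to mix the one-hot labels, so $\alpha$ directly controls how aggressively labels are interpolated.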
\vspace{-6pt}
\subsubsection{Contrastive Learning}
\label{app_subsec:CL}
\paragraph{Implementation of SAMix-C and SAMix-I.}
As for SSL tasks, we adopt the cross-view objective, $\ell^{NCE}(z_{i}^{\tau_q}, z_{i}^{\tau_k}) + \ell^{NCE}(z_{m})$, where $z_{i} = z_{i}^{\tau_k}$ and $z_j = z_{j}^{\tau_k}$, for instance-level mixup classification in all methods (except for methods marked with $\dag$ and $\ddagger$).
We provide two variants, SAMix-C and SAMix-I, which use different learning objectives of mixup classification. Based on network structures (an encoder $f_{\theta}$ and a projector $g_{\omega}$) in MoCo.V2~\cite{2020mocov2}, SAMix-C is similar to SAMix in SL tasks, using a parametric cluster classification head $g_{\psi}^{C}$ for online clustering~\cite{eccv2018deepcluster, cvpr2020odc}. Notice that $g_{\psi}^{C}$ is used for $\mathcal{L}_{\phi}^{cls}$. It takes feature vectors from the momentum encoder as the input (optimized by Eq.~\ref{eq:kem}) and does not affect the mixup classification objective for online networks. Meanwhile, SAMix-I uses the instance-level classification loss for both $\mathcal{L}_{\theta,\omega}$ and $\mathcal{L}_{\phi}^{cls}$. Notice that we use $\eta$-balanced mixup loss $\mathcal{L}_{\phi}^{cls}$ for both SAMix-C and SAMix-I with $\eta=0.5$ and the objective $\mathcal{L}_{\phi}$ for Mixer is the same as in SL tasks.
\vspace{-8pt}
\paragraph{Hyper-parameter settings.}
Except for SwAV~\cite{nips2020swav}, all CL-based methods use the MoCo.V2 pre-training settings: ResNet~\cite{he2016deep} as the encoder $f_{\theta}$ with a two-layer MLP projector $g_{\omega}$, optimized by an SGD optimizer with a cosine scheduler, an initial learning rate of $0.03$, and a batch size of $256$. The length of the momentum dictionary is 65536 for IN-1k and 16384 for STL-10 and Tiny. The data augmentation strategy follows MoCo.V2 on IN-1k: geometric augmentation is \texttt{RandomResizedCrop} with the scale in $[0.2,1.0]$ and \texttt{RandomHorizontalFlip}; color augmentation is \texttt{ColorJitter} with \{brightness, contrast, saturation, hue\} strengths of $\{0.4, 0.4, 0.4, 0.1\}$ applied with a probability of $0.8$, and \texttt{RandomGrayscale} with a probability of $0.2$; blurring augmentation uses a square Gaussian kernel of size $23\times 23$ with a standard deviation uniformly sampled in $[0.1, 2.0]$. We use 224$\times$224 resolution for IN-1k and 96$\times$96 resolution for STL-10 and Tiny.
\vspace{-8pt}
\paragraph{Evaluation protocols.}
We evaluate the SSL representations with the linear classification protocol proposed in MoCo~\cite{he2020momentum}, which trains a linear classifier on top of the frozen representation on the training set. The linear classifier is trained for 100 epochs by an SGD optimizer with momentum $0.9$ and weight decay $0$. We set the initial learning rate to $30$ for IN-1k, as in MoCo, and to $0.1$ for STL-10 and Tiny; the learning rate decays by $0.1$ at epochs 60 and 80. Moreover, we use object detection tasks to evaluate transfer learning abilities following MoCo, using the $4$-th layer feature maps of ResNet (ResNet-C4) to train Faster R-CNN~\cite{ren2015faster} for 24k iterations on the \textit{trainval07+12} set (16.5k images) and Mask R-CNN~\cite{2017iccvmaskrcnn} with the 2$\times$ schedule on the \textit{train2017} set (118k images).
\subsection{Empirical Experimental Settings}
\label{app_sec:exp_settings}
\subsubsection{Analysis of Instance-level Mixup}
\label{app_subsec:instance_mix}
In Sec.~\ref{subsec:properties}, we propose the cross-view training pipeline for mixup classification (shown in Figure~\ref{fig:pipeline} (a)) and discuss the inter- and intra-class properties of instance-level mixup. We verify them with two experiments:
Firstly, as shown in Figure~\ref{fig:pipeline} (b), we compare using the same-view or cross-view pipelines combined with using $\ell^{NCE}(z_{i}^{\tau_q}, z_{i}^{\tau_k}) + \ell^{NCE}(z_{m})$ or only using $\ell^{NCE}(z_{m})$ with ResNet-18 pre-training 400-epoch on Tiny.
Then, as shown in Figure~\ref{fig:pipeline} (c), we adopt inter-cluster and intra-cluster mixup from \{Mixup, CutMix\} with $\alpha \in \{0.2, 1, 2, 4\}$ to verify that instance-level mixup should treat inter- and intra-class mixup differently. Empirically, mixed samples produced by Mixup preserve the global information of both source samples (smoother), while samples generated by CutMix preserve local patches (more discriminative). We introduce pseudo labels (PL) to indicate different clusters, obtained with the clustering method ODC~\cite{cvpr2020odc} with the class (cluster) number $C$. Based on the experimental results, we conclude that inter-class mixup requires \textit{discriminative} mixed samples with \textit{strong} intensity, while intra-class mixup needs \textit{smooth} samples with \textit{low} intensity.
Moreover, we provide two cluster-based instance-level mixup methods in Tables~\ref{tab:stl_ssl} and \ref{tab:imagenet_ssl} (denoted by $*$): (a) Inter-Intra$^*$: we use CutMix with $\alpha \ge 2$ as inter-cluster mixup and Mixup with $\alpha=0.2$ as intra-cluster mixup. (b) PuzzleMix$^*$: we adapt saliency-based mixup methods to SSL tasks by introducing PL and a parametric cluster classifier $g_{\psi}^{C}$ after the encoder. This classifier $g_{\psi}^{C}$ and the encoder $f_{\theta}$ are optimized alternately, as for SAMix in Sec.~\ref{app_subsec:SL}. Based on Grad-CAM~\cite{selvaraju2017gradcam} maps calculated from this classifier, PuzzleMix can be applied to SSL tasks.
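The Inter-Intra$^*$ baseline above reduces to a simple per-pair policy on pseudo labels; a sketch (the helper name is ours, and the actual Mixup/CutMix operations are omitted):

```python
def select_policy(pl_i, pl_j):
    """Pick a mixup variant from the pseudo labels (PL) of a sample pair:
    intra-cluster pairs get smooth, low-intensity mixing (Mixup, alpha=0.2);
    inter-cluster pairs get discriminative, strong mixing (CutMix, alpha>=2)."""
    if pl_i == pl_j:
        return ("mixup", 0.2)
    return ("cutmix", 2.0)

policy, alpha = select_policy(pl_i=3, pl_j=7)  # an inter-cluster pair
```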
\vspace{-5pt}
\subsubsection{Analysis of Mixup Generation Objectives}
\label{app_subsec:gen_loss}
In Sec.~\ref{subsec:loss}, we design experiments to analyze various losses for mixup generation in Figure~\ref{fig:loss_analysis} (left) and the proposed $\eta$-balanced loss in Figure~\ref{fig:loss_analysis} (right) for both SL and SSL tasks with ResNet-18 on STL-10 and Tiny.
Basically, we assume that both the STL-10 and Tiny datasets have 200 classes over their 100k images. Since STL-10 does not provide ground-truth labels (L) for its 100k unlabeled images, we introduce PL generated by a supervised classifier pre-trained on Tiny as the ``ground truth'' for its 100k training set. Notice that L denotes ground-truth labels and PL denotes pseudo labels generated by ODC~\cite{cvpr2020odc} with $C=200$.
As for the SL task, we use the labeled training set for mixup classification (100k on Tiny v.s. 5k on STL-10). Notice that the SL results are worse than those with SSL settings on STL-10, since the SL task only trains a randomly initialized classifier on 5k labeled samples. Because the infoNCE and BCE losses require cross-view augmentation (otherwise they produce trivial solutions), we adopt the MoCo.V2 augmentation settings for these two losses in the SL task. Compared to CE (L), we corrupt the global term in CE as CE (PL) or remove it entirely as pBCE (L) to show that the local term (pBCE) is vital to optimizing mixed samples. Similarly, we show that the global term serves as a global constraint by comparing BCE (UL) with infoNCE (UL), infoNCE (PL) and infoNCE (L).
As for the SSL task, we use the same training setting as \ref{app_subsec:CL}, and verify the conclusions drawn from the SL task. We can conclude that (a) the local term optimizes mixup generation directly, corresponding to the smoothness property, (b) the global term serves as the global constraint corresponding to the discriminative property.
Moreover, we verify that using the $\eta$-balanced loss as $\mathcal{L}_{\phi}^{cls}$ yields the best performance on both SL and SSL tasks. Notice that we use $\eta=0.5$ on small-scale datasets and $\eta=0.1$ on large-scale datasets for SL tasks, and $\eta=0.5$ for all SSL tasks.
\vspace{-5pt}
\subsubsection{Analysis of Mutual Information for Mixup}
\label{app_subsec:MI}
Since mutual information (MI) is usually adopted to analyze contrastive-based augmentations~\cite{eccv2020CMC, nips2020infomin}, we estimate the MI between $x_{m}$ of various methods and $x_{i}$ by MINE~\cite{icml2018mine} with 100k images at 64$\times$64 resolution on Tiny. We sample $\lambda$ from 0 to 1 with a step of $0.125$ and plot the results in Figure~\ref{fig:scatter} (d). Here we see that SAMix-C and SAMix-I, which retain more MI when $\lambda \approx 0.5$, perform better.
\subsection{Visualization of SAMix}
\label{app_sec:visualize}
\subsubsection{Mixing Attention and Content in Mixer}
\label{app_subsec:mixer}
In Sec.~\ref{subsec:mixblock}, we discuss the trivial solutions of Mixer, which usually occur in SSL tasks. Given the sample pair $(x_i,x_j)$ and $\lambda=0.5$, we visualize the content $C_{i}$ and the attention $P_{i,j}$ to compare trivial and non-trivial results in the SSL task on STL-10, as shown in Figure~\ref{fig:app_trivial}. As we can see, both $C_{i}$ and $P_{i,j}$ from the trivial solution have extremely large- or small-scale values, while the $C_{i}$ generated by the non-linear content sub-module $\mathcal{C}_{NC}$ contains more balanced values. Since the attention weight $P_{i,j}$ is normalized by softmax, we hypothesize that $C_i$ is the more likely cause of trivial solutions. To verify this hypothesis, we freeze $W_{P}$ in the original MB and compare the original linear content projection $W_{z}$ with the non-linear content modeling. The results confirm that the non-linear module prevents large-scale values in $C_{i}$ and eliminates the trivial solutions.
\begin{figure}[t]
\centering
\includegraphics[width=0.95\linewidth]{Figs/automix_v2-trivial.pdf}
\vspace{-4pt}
\caption{Visualization of the trivial solution of Mixer in SAMix-I on STL-10. The upper row shows non-trivial results produced by Mixer with the non-linear content sub-module $\mathcal{C}_{NC}$ while the lower shows the trivial solution (such as constant $s_{i}$) using the linear $\mathcal{C}$.}
\label{fig:app_trivial}
\vspace{-10pt}
\end{figure}
\begin{figure}[b]
\centering
\includegraphics[width=1.0\linewidth]{Figs/automix_v2-loss.pdf}
\vspace{-12pt}
\caption{Visualization of loss effect. Both infoNCE and BCE loss have different emphases: infoNCE shows the similar effect of supervised fine-grained classification, focusing on fragmented and essential features, while BCE focuses on object completeness.}
\label{fig:app_loss}
\vspace{-10pt}
\end{figure}
\vspace{-8pt}
\subsubsection{Effects of Mixup Generation Loss}
In addition to Sec.~\ref{subsec:discussion}, we provide visualizations of mixed samples using the infoNCE (Eq.~\ref{eq:mixnce}), BCE (Eq.~\ref{ep:BCE}), and $\eta$-balanced infoNCE (Eq.~\ref{eq:eta_loss}) losses for Mixer. As shown in Figure~\ref{fig:app_loss}, we find that mixed samples trained with the infoNCE mixup loss prefer instance-specific and fine-grained features. On the contrary, mixed samples of the BCE loss seem only to consider discrimination between the two corresponding neighborhood systems; relative to infoNCE, it is more inclined to maintain the continuity of the whole object. Combining both characteristics, the $\eta$-balanced infoNCE loss yields mixed samples that retain both instance-specific features and global discrimination.
\vspace{-6pt}
\subsubsection{Visualization of Mixed Samples in SAMix}
\label{app_subsec:mix_sample}
\paragraph{SAMix in various scenarios.}
In addition to Sec.~\ref{subsec:discussion}, we visualize the mixed samples of SAMix in various scenarios to show the relationship between mixed samples and class (cluster) information. Since IN-1k contains some samples from CUB and Aircraft, we choose the overlapping samples to visualize SAMix trained for the fine-grained SL tasks (CUB and Aircraft) and the SSL tasks (SAMix-I and SAMix-C). As shown in Figure~\ref{fig:app_scenarios}, mixed samples reflect the granularity of the class information adopted in mixup training. Specifically, we find that mixed samples generated with the infoNCE mixup loss (Eq.~\ref{eq:mixnce}) are closest to those from the fine-grained SL task, because both involve many fine-grained centroids.
\begin{figure}[t]
\vspace{-5pt}
\centering
\includegraphics[width=1.0\linewidth]{Figs/automix_v2-app_old.pdf}
\vspace{-12pt}
\caption{Visualization of SAMix in various scenarios on CUB and Aircraft. Given images A and B, the middle three mixed samples are generated by SAMix with $\lambda=0.5$ trained in the fine-grained SL task and the SSL tasks (SAMix-C and SAMix-I).}
\label{fig:app_scenarios}
\vspace{-10pt}
\end{figure}
\vspace{-8pt}
\paragraph{Comparison with PuzzleMix in SL tasks.}
To highlight the accurate mixup relationship modeling of SAMix compared to PuzzleMix (standing for saliency-based methods), we visualize mixed samples from these two methods in the supervised case in Figure~\ref{fig:SL_AutoMix_SAMix}. There are three main differences: (a) the bilinear upsampling strategy in SAMix makes the mixed samples smoother in local patches; (b) the adaptive $\lambda$ encoding and mixing attention enhance the correspondence between mixed samples and the $\lambda$ value; (c) the $\eta$-balanced mixup loss enables SAMix to balance globally discriminative and fine-grained features.
\vspace{-8pt}
\paragraph{Comparison of SAMix-I and SAMix-C in SSL tasks.}
As shown in Figure~\ref{fig:SSL_SAMIX_I_C}, we provide more mixed samples of SAMix-I and SAMix-C in the SSL tasks to show that introducing class information by PL can help Mixer generate mixed samples that retain both the fine-grained features (instance discrimination) and whole targets.
\subsection{More Experiments}
\label{app_sec:results}
We provide more results for SL tasks on CIFAR-100 and IN-1k. Firstly, we train SAMix and the compared methods for 400, 800, and 1200 epochs on CIFAR-100 with ResNet-18 (R-18) and ResNeXt-50 (32x4d) (RX-50), as shown in Table~\ref{tab:app_sl_cifar}. SAMix steadily outperforms previous methods regardless of the training duration. Notice that $\dagger$ denotes results reproduced with the official source code, other results are reproduced by us, and $*$ denotes unpublished work on arXiv. Then we provide comparison results under the 100-epoch training protocol on IN-1k with various network architectures, as shown in Table~\ref{tab:app_sl_imagenet}. SAMix outperforms previous methods and improves over the vanilla baseline by 0.85\%, 1.11\%, 1.18\%, 1.69\%, and 1.73\% based on R-18, R-34, R-50, R-101, and RX-101 on IN-1k.
\input{Tabs/app_sl_cifar}
\input{Tabs/app_sl_imagenet}
\subsection{Detailed related work}
\label{app_sec:relatedwork}
\paragraph{Contrastive Learning.} CL amplifies the potential of SSL by achieving significant improvements on classification~\cite{icml2020simclr, he2020momentum, nips2020swav, chen2020improved}; it maximizes similarities of positive pairs while minimizing similarities of negative pairs. To provide a global view of CL, MoCo~\cite{he2020momentum} proposes a memory-based framework with a large number of negative samples and model differentiation using an exponential moving average. SimCLR~\cite{icml2020simclr} demonstrates a simple memory-free approach with a large batch size and strong data augmentations that is competitive with memory-based methods. Unlike other CL approaches, BYOL~\cite{nips2020byol} requires neither negative pairs nor a large batch size for its pretext task, which tries to estimate latent representations of the same instance.
\vspace{-8pt}
\paragraph{Mixup.}
MixUp~\cite{zhang2017mixup}, a convex interpolation of any two samples and their one-hot labels, was presented as the first mixing-based data augmentation approach for regularizing the training of networks. ManifoldMix~\cite{verma2019manifold} and PatchUp~\cite{faramarzi2020patchup} extend it to the hidden space. CutMix~\cite{yun2019cutmix} suggests a patch-based mixing strategy, \textit{i.e.}, randomly replacing a local rectangular region in images. Based on CutMix, ResizeMix~\cite{qin2020resizemix} inserts a whole image, scaled down, into a local rectangular area of another image. FMix~\cite{harris2020fmix} converts the image to Fourier space (spectral domain) to create binary masks.
To generate more semantic virtual samples, offline optimization algorithms are introduced for saliency regions. SaliencyMix~\cite{uddin2020saliencymix} obtains saliency using a universal saliency detector. Using optimal transport, PuzzleMix~\cite{kim2020puzzle} and Co-Mixup~\cite{kim2021comixup} present more precise methods for finding appropriate mixup masks based on saliency statistics. SuperMix~\cite{dabouei2021supermix} combines mixup with knowledge distillation, learning a pixel-wise sample mixing policy via a teacher-student framework to distill class knowledge. Differing from previous methods, AutoMix~\cite{liu2021automix} learns mixup generation end-to-end with a sub-network that generates mixed samples from feature maps and the mixing ratio.
\vspace{-8pt}
\paragraph{Mixup for contrastive learning.}
A complementary approach to better instance-level representation learning is to apply mixup to CL~\cite{nips2020mochi, 2021cvprunmix}. When used in collaboration with the CE loss, Mixup and its variants provide highly efficient data augmentation for SL by establishing relationships between samples. Without ground-truth labels, most approaches are limited to linear mixup methods. For example, Un-mix~\cite{2021cvprunmix} applies MixUp in the input space for self-supervised learning, whereas MoChi~\cite{nips2020mochi} mixes negative samples in the embedding space to increase the number of hard negatives, but at the expense of classification accuracy. i-Mix~\cite{2021iclrimix} and BSIM~\cite{2020bsim} demonstrate how to regularize contrastive learning by mixing instances in the input or latent spaces. We introduce automatic mixup for SSL tasks, which adaptively learns instance relationships based on inter- and intra-cluster properties online.
\begin{figure*}[t]
\centering
\includegraphics[width=0.85\linewidth]{Figs/automix_v2-compare_sl.pdf}
\vspace{-5pt}
\caption{Visualization of PuzzleMix v.s. SAMix for SL tasks on IN-1k. In each four rows, the upper and lower two rows represent mixed samples generated by PuzzleMix and SAMix, respectively. $\lambda$ value changes from left ($\lambda=0$) to right ($\lambda=1$) by an equal step.}
\label{fig:SL_AutoMix_SAMix}
\end{figure*}
\begin{figure*}[t]
\centering
\includegraphics[width=0.85\linewidth]{Figs/automix_v2-compare_ssl.pdf}
\vspace{-5pt}
\caption{Visualization of SAMix-I v.s. SAMix-C for SSL tasks on IN-1k. In each four rows, the upper and lower two rows represent mixed samples generated by SAMix-I and SAMix-C, respectively. $\lambda$ value changes from left ($\lambda=0$) to right ($\lambda=1$) by an equal step.}
\label{fig:SSL_SAMIX_I_C}
\end{figure*}
\section{introduction}
The experimental system consists of a layer of liquid helium with a 2D sheet of electrons above its surface. The surface electrons (SEs) and liquid helium are sandwiched between electrodes forming a parallel-plate capacitor (see Fig.~1 of the main paper). An electric field $E_{\perp}$ is created perpendicular to the surface by applying a positive ``pressing voltage'' to the bottom electrode. The SEs occupy quantized energy levels called subbands~\cite{monarkha2019magneto}, which are determined by competing forces in the system: an electric field force from $E_\perp$, an attractive image force from within the body of liquid helium, and a repulsive surface barrier caused by the helium atoms~\cite{monarkha2013two}.
In the experiment, transitions between the first and second subbands are excited by resonant microwave radiation with angular frequency $\omega$ such that $\hbar\omega \approx \epsilon_2-\epsilon_1$, by tuning $\omega$ with $E_{\perp}$ through the linear Stark shift. A strong magnetic field $B$ is applied perpendicular to the motion of the electrons to fulfill the relation $\omega/\omega_c \,=\,l+1/4$ in order to reach the zero-resistance regime.
The aim of the experiment is to understand the spontaneous oscillatory behaviour of the SEs and, in particular, how it changes as a function of electron density and pressing voltage. Nonlinear dynamics methods are used to explore the characteristic features of this system. The time evolution of electric currents induced in five electrodes above the liquid surface was measured. They are labelled C, E1, E2, E3 and E4 to represent the central electrode and the four edge electrodes, as shown in Fig.~\ref{fig:1}. Both the top and bottom electrodes are surrounded by guard rings that are negatively biased ($V_{\si{TG}}$ and $V_{\si{BG}}$\,=\,$-$\,0.5~V, respectively) to confine the electrons within their pool and form a sharp edge in the electron-density profile. The distance between the top and bottom electrodes is $D$\,=\,2.6~mm. The depth of the liquid ${}^{4}$He, $d$\,=\,1.3~mm, and the magnetic field, $B$\,=\,0.81~T, are kept constant throughout the experiment. The pressing voltage $V$ is varied from 4.16 to 4.22~V, and the electron density within the range $n_e\,=\,1.4$\,to\,$2.2 \times 10^6$~\si{cm^{-2}}.
\begin{figure*}[h!]
\includegraphics[width=5cm]{TE.PNG}
\caption{\label{fig:1} Schematic view of the five top electrodes with their guard ring. The five electric current signals, $I_{c}$, $I_{1}$, $I_{2}$, $I_{3}$ and $I_{4}$, are measured simultaneously from the central electrode and the four segmented electrodes E1, E2, E3, E4, respectively. The gaps between the electrodes are 0.2~mm. The microwaves propagate parallel to the helium surface with a frequency $f$\,=$\,139/2\pi$~GHz, and 0~dBm power.}
\end{figure*}
For all of the measurements the MW circular frequency is 139~GHz. The parameters of the current preamplifiers and the oscilloscope are $10^{-9}$~A/V gain, 10~kHz preamplifier bandwidth and 100~kHz sampling frequency. Fig.~\ref{fig:1} shows the electrode arrangement used to measure the five current signals ($I_{c}$, $I_{1}$, $I_{2}$, $I_{3}$ and $I_{4}$), which were recorded for 60~s from each of the five electrodes C, E1, E2, E3 and E4. Fig.~\ref{fig:t}(a) shows an example of a recorded time trace to indicate the length of the data. Measurements were performed at $T$\,=\,0.3~K in a magnetic field $B$\,=\,0.81~T. Application of the MW resulted in an increase in the amplitude of the measured signals, as seen in Fig.~\ref{fig:t}. Fig.~\ref{fig:t}(b) shows a 1.4-second section of the signal in Fig.~\ref{fig:t}(a) that we used to study the characteristics of the oscillations; the inset shows an even shorter segment to illustrate the pattern of the signal.
\begin{figure*}[h!]
\includegraphics[width=17cm]{Time2.PNG}
\caption{\label{fig:t} (a) Example of the whole length of time series recorded for 60 seconds at a sampling frequency of 100~kHz resampled to 10~kHz. The MW (139~GHz) is switched On and Off where the On state is inside the green dashed lines. (b) The section of the signal in red, recorded for 1.4 seconds, is investigated to elucidate the characteristics of the oscillations. Inset shows current oscillations on a shorter time interval.}
\end{figure*}
Such systems have traditionally been treated in terms of quantum mechanics~\cite{kawakami2019image, monarkha2019magneto, zadorozhko2021motional}. They can, however, be entirely deterministic, and we are currently analyzing this system within a classical framework. We apply time-resolved, nonautonomous, nonlinear dynamics methods to extract a wealth of information from the experimental data. This approach provides temporal resolution and yields information about localized waves. In using it to reconstruct the dynamics of the SEs, we observe:
\begin{itemize}
\item A variability in the frequency of the spontaneous kHz oscillations. The main reason is that the electrons move in three dimensions (3D) relative to the electrodes: gravity waves move the liquid surface vertically, so the surface is not flat, as conventionally assumed in quantum theoretical descriptions of the system.
\item Motion of the electrons around the cell. The pattern and velocity of movement change with the pressing potential and electron density.
\item Resonance conditions in terms of the phase coherence, which becomes constant between the five signals.
\end{itemize}
\subsection{Power Spectral Density}
Systems that change their behaviour in a predictable way with time are usually described as deterministic. In the opposite limit, physical systems that seem at first sight to behave randomly are often treated as being stochastic, though this is not always appropriate: it may lead to a loss of potentially valuable information that can be recovered by use of time-resolved nonlinear dynamical methods \cite{clemson2016reconstructing,clemson2014discerning,iatsenko2015linear,Newman:2018}. The commonest method of visualizing the dynamical properties of a signal is the Fourier transform, which presents time traces in the frequency domain. The time series are characterised by the total length of the signal, $T\,=\,N\Delta$, where $\Delta$ is the time interval between samplings and $N$ is the number of data points. The other important parameters are the sampling frequency $f_s$\,=\,1/$\Delta$ and the maximum observable frequency, equal to half the sampling frequency ($f_s$/2). The latter is called the Nyquist critical value, and the extracted frequency modes are limited to $f_k$\,=\,$\frac{k}{N\Delta}$, where $k$\,=\,0, 1, \ldots, $\frac{N}{2}$. The discrete Fourier transform (DFT) \cite{press2007numerical} is given by
\begin{equation}
\label{eqn:1}
G_{k} = \sum_{j=0}^{N-1}x_{j}e^{2\pi ijk/N}.
\end{equation}
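Eq.~(\ref{eqn:1}) can be transcribed directly as follows (an $O(N^2)$ illustration; in practice an FFT such as \texttt{numpy.fft.fft} is used, whose exponent carries the opposite sign, which does not affect the power spectrum):

```python
import cmath

def dft(x):
    """Discrete Fourier transform: G_k = sum_j x_j * exp(2*pi*i*j*k/N)."""
    N = len(x)
    return [sum(x[j] * cmath.exp(2j * cmath.pi * j * k / N) for j in range(N))
            for k in range(N)]

# A constant signal puts all of its power into the k = 0 bin.
G = dft([1.0, 1.0, 1.0, 1.0])  # G[0] == 4, all other bins ~ 0
```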
The power spectral density (PSD) represents the power distribution over the frequency modes within the time series. The power content of the signal \cite{press2007numerical} is given by
\begin{align}
\label{eqn:eqlabel}
\begin{split}
P(f_{0}) = \frac{1}{N^2} |G_0|^{2},
\\
P(f_k) = \frac{1}{N^2} \biggl[ |G_k|^{2} + |G_{N-k}|^2 \biggr],
\\
P(f_{N/2}) = \frac{1}{N^2} |G_{N/2}|^{2}.
\end{split}
\end{align}
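Eq.~(\ref{eqn:eqlabel}) folds the coefficients at $f_k$ and $f_{N-k}$ into a one-sided spectrum. A sketch for even $N$, taking any list of DFT coefficients $G_k$ as input (only $|G_k|^2$ enters, so the sign convention of the transform is irrelevant):

```python
def psd(G):
    """One-sided power spectral density from DFT coefficients (even N):
    P(f_0)     = |G_0|^2 / N^2,
    P(f_k)     = (|G_k|^2 + |G_{N-k}|^2) / N^2   for 0 < k < N/2,
    P(f_{N/2}) = |G_{N/2}|^2 / N^2."""
    N = len(G)
    P = [abs(G[0]) ** 2 / N ** 2]
    P += [(abs(G[k]) ** 2 + abs(G[N - k]) ** 2) / N ** 2
          for k in range(1, N // 2)]
    P.append(abs(G[N // 2]) ** 2 / N ** 2)
    return P
```

By Parseval's theorem the one-sided PSD bins sum to the mean squared value of the signal, which provides a convenient consistency check.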
The power spectral density of a stochastic process has the form
\begin{equation}
\label{eqn:e}
P(f) = \frac{\rm constant}{f^\alpha},
\end{equation}
where $\alpha$ $\geq$ 0 and determines the functional dependence of the spectrum. Taking the logarithm of each side of Eq.(3), we obtain
\begin{equation}
\label{eqn:4}
\log(P) = \log({\rm constant}/{f^\alpha}) = - \alpha{}\, \log(f) + \log({\rm constant}),
\end{equation}
\begin{figure*}[h!]
\includegraphics[width=17.2cm]{Psd2.png}
\caption{\label{fig:psd} Power spectral density (PSD) of the time series presented in Fig.~\ref{fig:t}(b). The sampling frequency is 10~kHz. The signal contains sharp peaks at frequencies 0.17, 0.95, 1.94 and 2.9~kHz.
}
\end{figure*}
where $\alpha$ is a parameter computed from the gradient of the straight-line graph. Fig.~\ref{fig:psd} shows the power spectral density of the time trace shown in Fig.~\ref{fig:t}(b), computed using Eqs.~(\ref{eqn:1}), (\ref{eqn:eqlabel}) and (\ref{eqn:4}). Our computed $\alpha$ values lie in the range $-1 \le \alpha \le 0.9$, leading to a spectral density commonly referred to as ``$1/f$'' or ``white'' noise \cite{Ward:2007}.
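The exponent $\alpha$ can be estimated from Eq.~(\ref{eqn:4}) by an ordinary least-squares fit of $\log P$ against $\log f$; a minimal sketch (the function name is illustrative):

```python
import math

def estimate_alpha(freqs, power):
    """Least-squares slope of log(P) versus log(f); for a spectrum
    P(f) ~ const / f^alpha the fitted slope equals -alpha."""
    xs = [math.log(f) for f in freqs]
    ys = [math.log(p) for p in power]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return -slope

# An exact 1/f spectrum recovers alpha = 1.
alpha = estimate_alpha([1.0, 2.0, 4.0, 8.0], [1.0, 0.5, 0.25, 0.125])
```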
\section{Signal analysis}
Physical systems whose parameters vary in time are called ``dynamical systems''.
Electrons above liquid helium, in the presence of a normally directed magnetic field and exposed to microwave radiation, are a good example of such a system. There exists an external mechanism (such as the microwave-electron interaction) that allows the system to exchange energy with its surroundings, resulting in inherent time-variability and high nonlinearity.
We need suitable analysis methods to acquire insight into the behavior of the system generating the data. Thus, we aim to use nonlinear dynamical methods to extract information about time-varying oscillatory modes and their mutual interactions.
Figure~\ref{fig:2} provides an overview of the methods that were used to study the oscillatory dynamics of the recorded signals.
\begin{figure*}[h!]
\includegraphics[width=17.2cm]{Methodes2.PNG}
\caption{\label{fig:2} A workflow chart of the methods discussed in this paper.}
\end{figure*}
\subsection{Pre-processing}
Pre-processing of the time series is required to provide optimal conditions for analysis.
\paragraph{\textbf {Down-sampling}}
\noindent Down-sampling is a process of reducing the initial sampling rate of the data. The sampling frequency is 100~kHz, which gives a higher resolution than is needed, so all the signals are down-sampled to 10~kHz.
\paragraph {\textbf{De-trending}}
\noindent Prior to the analysis of the signals, all data were de-trended by using a moving average (default in Matlab commands). Non-oscillatory trends appear as low-frequency variations of the signals, and the purpose of detrending is to remove their possible interference with low-frequency oscillatory modes within the range of interest.
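These two pre-processing steps can be sketched in Python (an illustration only; the analysis itself used Matlab defaults, and a production decimator would low-pass filter before discarding samples to avoid aliasing):

```python
def downsample(x, factor):
    """Keep every `factor`-th sample (100 kHz -> 10 kHz uses factor=10).
    NB: without a preceding anti-alias low-pass filter this can alias."""
    return x[::factor]

def detrend(x, window):
    """Subtract a centred moving average to remove slow, non-oscillatory
    trends while keeping oscillations faster than ~1/window."""
    half = window // 2
    out = []
    for i, xi in enumerate(x):
        seg = x[max(0, i - half):i + half + 1]
        out.append(xi - sum(seg) / len(seg))
    return out
```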
\subsection{Time-frequency representations}
A nonautonomous system is affected by external perturbations, producing explicit time-dependences in the system. Methods of analysis that assume a stationary frequency distribution (such as the Fourier transform) cannot extract all relevant information from the data.
Instead, windowing methods such as the windowed Fourier transform (WFT) and the wavelet transform (WT) may be used to extract time-resolved spectral information.
The windowed Fourier transform was developed to overcome the limitations of the Fourier transform when analyzing nonstationary signals, and to create time-frequency representations of signals. The WFT works on a time series $x(u)$ of length $T$ by sliding a rectangular function of length $\tau < T$ (the window) over the signal. Within the window, the signal can be treated as stationary and the Fourier transform (FT) can usefully be applied. However, the WFT method still has limitations because of a trade-off between time localisation and frequency resolution, due to the uncertainty principle, which states that we cannot determine the precise frequency and the exact time simultaneously. Therefore, the size of the window is adjusted depending on whether the user needs good frequency resolution or good time localisation. A large window gives good frequency resolution but poor time localisation, since the frequency resolution is proportional to the length of the window; conversely, low-frequency information cannot be extracted with a narrow window.
The wavelet transform (WT) was developed to overcome these problems. It makes use of an adaptive window that yields good resolution in both time and frequency, with logarithmic frequency resolution. It is calculated by moving a wavelet function along the signal and it can be tuned according to the frequency ranges that we want to investigate \cite{clemson2014discerning}. The wavelet transform is obtained from \cite{clemson2016reconstructing},
\begin{equation}
W_T(s,t)=\frac{1}{\sqrt s} \int_{-\frac{L}{2}}^{\frac{L}{2}}\varPsi(s, u-t) f(u) du,
\label{eq: wavelet CWT}
\end{equation}
where $\varPsi(s,t)$ is the mother wavelet (providing the basis of the continuous WT method), $s$ is a scaling parameter used to change the wavelet frequency distribution and shift its time as needed. The frequency scale in the WT is continuous so that, for any arbitrary frequency, the wavelet components can be calculated. The mother wavelet that we choose is the Morlet wavelet, given by
\begin{equation}
\varPsi(s,t)= \frac{1}{\sqrt[4]{\pi}} \left(e^{\frac{2 \pi if_rt}{s}}-e^{-\frac{(2 \pi {f_r})^2}{2}}\right) e^{-\frac{t^2}{2s^2}}.
\label{eq:morlet wavelet }
\end{equation}
The parameter $f_r$ is the frequency resolution, which determines the trade-off between time localisation and frequency resolution in the wavelet coefficients calculated from the signal under study. Higher values of $f_r$ give lower time resolution but good frequency resolution. However, at very small frequencies the wavelet becomes meaningless.
Coefficients obtained by performing a WT with the Morlet wavelet are complex-valued. This property can be used to determine the instantaneous amplitude and phase for each frequency and time \cite{Kaiser:94}.
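A minimal numerical sketch of the transform and the Morlet wavelet defined above is given below (assuming numpy; the scale-to-frequency mapping $s = f_r/f$, which makes the wavelet oscillate at frequency $f$, is our choice of convention, and conventions for this mapping vary between implementations):

```python
import numpy as np

def morlet(t, s, f_r=1.0):
    """Morlet mother wavelet at scale s; f_r controls the trade-off between
    time localisation and frequency resolution. The small constant term
    makes the wavelet zero-mean (admissibility)."""
    return (np.pi ** -0.25) * (
        np.exp(2j * np.pi * f_r * t / s) - np.exp(-0.5 * (2 * np.pi * f_r) ** 2)
    ) * np.exp(-(t ** 2) / (2 * s ** 2))

def wavelet_transform(x, fs, freqs, f_r=1.0):
    """Continuous WT by direct correlation of the signal with the scaled,
    time-shifted wavelet, one row per requested frequency."""
    t = (np.arange(len(x)) - len(x) // 2) / fs   # wavelet support centred on zero
    W = np.empty((len(freqs), len(x)), dtype=complex)
    for i, f in enumerate(freqs):
        s = f_r / f                               # scale for this frequency
        kernel = morlet(t, s, f_r) / np.sqrt(s)   # 1/sqrt(s) prefactor
        # sliding the wavelet along the signal = correlation
        W[i] = np.convolve(x, np.conj(kernel)[::-1], mode="same") / fs
    return W
```

The complex-valued rows of `W` carry the instantaneous amplitude (`np.abs`) and phase (`np.angle`) at each frequency and time, as used in the following subsections.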
\subsection{Wavelet power}
The wavelet power spectrum can be obtained by calculating the integral of the square of the modulus of the wavelet transform over frequency \cite{clemson2014discerning,clemson2016reconstructing}:
\begin{equation}
P_W(w,t)= \int_{w-\frac{dw}{2}}^{w+ \frac{dw}{2}}|W_T(w,t)|^2 dw.
\label{eq:wavelet power}
\end{equation}
It gives a vector representing the power content of the whole data set. It can be plotted as a function of frequency to show the power distribution between the different spectral components, and to identify the frequency range of the main oscillatory components in the signal under investigation.
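This step can be sketched as follows (a minimal numpy illustration, assuming a wavelet-coefficient array `W` with one row per frequency, such as that returned by any CWT routine; the function name is ours):

```python
import numpy as np

def time_averaged_wavelet_power(W, freqs):
    """Time-averaged wavelet power: |W_T|^2 weighted by the local frequency
    bin width dw (possibly non-uniform on a logarithmic grid) and averaged
    over time. Rows of W are frequencies, columns are times."""
    dw = np.gradient(freqs)                      # local bin width dw
    return np.mean(np.abs(W) ** 2, axis=1) * dw
```

Plotting the returned vector against `freqs` gives the time-averaged power plots used throughout the figures below.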
\subsection{Harmonic detection}\label{sec:H}
\noindent We applied the wavelet transform (WT) to the data in order to trace the time variability of the fundamental oscillation frequencies, and to obtain time-dependent phase information. However, we may also observe other frequency modes in the WT that are integer multiples of the fundamental frequency. Because we expect high-frequency harmonics in the data, due to the nonlinearity of the measured signal, we need a method to detect relationships between the oscillations and to identify the harmonics of any component in the signal. The method provided by Sheppard et al.\ \cite{Sheppard:11} can determine whether an oscillation is a harmonic of the main frequency of the signal or an independent oscillation of higher frequency. This method of finding harmonics is based on mutual information, the Shannon entropy, and surrogate testing. The mutual information ($M$) provides a measure of the entropy removed from a distribution by conditioning on another variable, while the Shannon entropy provides a measure of unpredictability: the entropy is high if the distribution is uniform, and low if the distribution is sharply peaked.
\textbf{Procedure for identifying harmonics}:
First, we extract the phases from the wavelet transform at each point in time, for each frequency in the signal. The phases are then discretised into 24 bins and used to calculate the mutual information, which is computed for every possible pair of phases in the signal.
\begin{figure}[h!]
\includegraphics[width=17.2cm]{H2a.png}
\caption{\label{fig:HAR} (a-b) Current signals from electrodes (2,3) (4~s sections), recorded for 60 seconds in total at a sampling frequency of 100~kHz resampled to 10~kHz, with pressing voltage 4.20~V, B\,=\,0.81~T and $n_e\,=\,1.4 \times 10 ^{6}$~\si{cm^{-2}}. (c-d) Wavelet transforms with frequency resolution 4~Hz and (e-f) time-averaged wavelet power of the signals in (a-b), showing the fundamental frequency and higher harmonic oscillations. The $z$-axes in panels (c-d) represent the wavelet amplitude; the $y$-axis of (b) and the $x$-axes of (e-f) are on logarithmic scales. (g-h) The mutual information $M$. (i-j) The actual mutual information values relative to the surrogate distribution, which we refer to as an M-plot. The threshold, a set number of standard deviations above the mean of the distribution of 100 surrogates, is marked in blue. High harmonics appear in the representation due to nonlinearity. The central frequency used in the calculations is 1~Hz.}
\end{figure}
The discrete entropy of the 24 phases is given by
\begin{equation}
H(\phi_1) = - \sum_{\phi_1=1}^{24} p(\phi_1){}\log_2 p(\phi_1).
\label{eq: entropy}
\end{equation}
The mean entropy of the conditional distribution is calculated using the equation
\begin{equation}
H(\phi_1|\phi_2) = - \sum_{\phi_2=1}^{24}p(\phi_2) \sum_{\phi_1=1}^{24} p(\phi_1|\phi_2) {}\log_2 p(\phi_1|\phi_2).
\label{eq: conditional entropy}
\end{equation}
The mutual information of the data is calculated as the difference between equations (\ref{eq: entropy}) and (\ref{eq: conditional entropy}),
\begin{equation}
M(\phi_1, \phi_2)= H(\phi_1)- H(\phi_1|\phi_2).
\label{eq: mutual information}
\end{equation}
The mutual information between $\phi(\omega_1,t)$ and $\phi(\omega_2,t)$ approaches its maximum as the phase dynamics at the two frequencies $\omega_1$ and $\omega_2$ become fully dependent, as they are for harmonically related modes. Since the mutual information is biased by the correlation of the phase signal caused by binning, it is necessary to perform surrogate testing \cite{Lancaster:18a}.
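The binning, entropy, and mutual-information steps above can be sketched as follows (a minimal numpy illustration; the function name and the histogram implementation are ours, with the 24 bins taken from the text):

```python
import numpy as np

def phase_mutual_information(phi1, phi2, nbins=24):
    """Mutual information M(phi1, phi2) = H(phi1) - H(phi1 | phi2) for two
    instantaneous-phase series, each discretised into `nbins` bins."""
    edges = np.linspace(-np.pi, np.pi, nbins + 1)
    b1 = np.clip(np.digitize(phi1, edges) - 1, 0, nbins - 1)
    b2 = np.clip(np.digitize(phi2, edges) - 1, 0, nbins - 1)
    # joint probability over the 24 x 24 phase bins
    joint = np.histogram2d(b1, b2, bins=[np.arange(nbins + 1)] * 2)[0] / len(phi1)
    p1 = joint.sum(axis=1)
    p2 = joint.sum(axis=0)
    # H(phi1), the discrete Shannon entropy of the 24 phase bins
    H1 = -np.sum(p1[p1 > 0] * np.log2(p1[p1 > 0]))
    # H(phi1 | phi2), the mean entropy of the conditional distribution
    H1g2 = 0.0
    for j in range(nbins):
        if p2[j] > 0:
            cond = joint[:, j] / p2[j]
            H1g2 -= p2[j] * np.sum(cond[cond > 0] * np.log2(cond[cond > 0]))
    return H1 - H1g2
```

A harmonic pair (the phase of one mode determines the phase of the other) yields a much larger value than an independent pair, which is the quantity that is then compared against the surrogate distribution.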
Surrogates are signals that are designed to preserve all of the properties of the original signals, except the property relating to the hypothesis that is being tested.
Commonly used methods for generating surrogates are the amplitude-adjusted Fourier transform (AAFT) and the iterative amplitude-adjusted Fourier transform (IAAFT). These methods conserve the amplitude distribution of the original signals in real space and reproduce their power spectrum (PS). The basic assumption of AAFT and IAAFT is that higher-order correlations can be destroyed by randomisation of the Fourier phases in time, while the linear correlations are preserved \cite{raeth2009surrogates}. A local maximum in the mutual information calculated for a pair of phases from the original data is deemed to indicate a harmonic if it lies a chosen number of standard deviations above the mean value of the surrogates' mutual information.
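The IAAFT scheme can be sketched as follows (a minimal numpy implementation under the usual formulation of the algorithm; the function name, iteration count, and seeding are our choices):

```python
import numpy as np

def iaaft_surrogate(x, n_iter=100, seed=0):
    """IAAFT surrogate: randomises the Fourier phases of x while iteratively
    restoring its power spectrum and its amplitude distribution."""
    rng = np.random.default_rng(seed)
    amp = np.abs(np.fft.rfft(x))           # target power spectrum (amplitudes)
    sorted_x = np.sort(x)                  # target amplitude distribution
    s = rng.permutation(x)                 # random shuffle destroys correlations
    for _ in range(n_iter):
        # impose the target spectrum, keeping the current (randomised) phases
        S = np.fft.rfft(s)
        s = np.fft.irfft(amp * np.exp(1j * np.angle(S)), n=len(x))
        # restore the original amplitude distribution by rank ordering
        ranks = np.argsort(np.argsort(s))
        s = sorted_x[ranks]
    return s
```

The returned series has exactly the original amplitude distribution and (approximately) the original power spectrum, but scrambled phase relationships, which is the null hypothesis being tested.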
Fig.~\ref{fig:HAR}(a) and (b) show 4-second sections of the current signals from the 60 second measurements by electrodes E2 and E3. The measurements are taken simultaneously with a sampling frequency of 100~kHz, resampled to 10~kHz, and with a pressing voltage of 4.20~V, $B\,=\,0.81$~T, and with $n_e$\,=\,1.4$\times10^6$~\si{cm^{-2}}.
The time-frequency representation (WT) and the time-averaged wavelet power are presented in Fig.~\ref{fig:HAR}(c,d) and Fig.~\ref{fig:HAR}(e,f) for electrodes E2 and E3, respectively, with a frequency resolution of 3~Hz. The E2 and E3 signals oscillate at the same frequency: the mean frequency of the main mode of each signal is around 951~Hz, while the second and third frequencies are 1884 and 2886~Hz, respectively, as shown in Fig.~\ref{fig:HAR}(c).
The identical frequencies suggest that E2 and E3 are coupled. We have calculated the coherence of the two signals, and found that they are coherent in the frequency range of the oscillatory modes; see also below.
To determine whether the second and third harmonics of the signals from E2 and E3 are independent modes, or higher harmonics of the fundamental mode, the harmonic-finder method is used (see above). Figures~\ref{fig:HAR}(g,i) for E2 and (h,j) for E3 show the mutual information plotted relative to the mean and standard deviation of the surrogates. The threshold, a set number of standard deviations above the mean of the distribution of 100 surrogates, is marked in blue. High harmonics appear in the representation due to nonlinearity in both data sets. It was found that the second and third frequencies for each electrode (E2, E3) are higher harmonics of the main mode, which result from a strong nonlinearity generated by the coupling between the surface of the liquid helium and the motion of the electrons.
\subsection{Ridge Curve Extraction}
\begin{figure*}[h!]
\includegraphics[width=17.2cm]{ridgg.PNG}
\caption{\label{fig:ridg} Time-frequency representation (TFR) and average amplitude plot of the signal presented in Fig.~\ref{fig:t}. Oscillatory components are shown as amplitude peaks in the TFR and the average amplitude plot. The frequency band chosen for ridge extraction includes the entirety of the oscillatory components (limits shown by the grey dashed lines).}
\end{figure*}
Ridge curve extraction is a method for extracting the oscillatory modes from a signal \cite{iatsenko2016extraction}. The time-frequency representations (TFRs) of the signals are computed by use of the WT. The oscillatory components appear as amplitude peaks in the TFRs and are called ``ridge curves''. We can determine the oscillatory components by using both the time-frequency representation and the time-averaged plot. Ridge extraction is set up by selecting a frequency band that includes the entirety of the oscillatory component within the signal. The frequency band should cover the whole width of the oscillatory component, including the peak and the surrounding blur due to the uncertainty principle, as shown in Fig.~\ref{fig:ridg}. Only one oscillatory component should appear within the chosen frequency band.
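The core of this procedure can be sketched as follows (a minimal numpy illustration: the max-amplitude rule below is only the simplest part of the scheme of Iatsenko et al., which additionally penalises frequency jumps; the function name and band limits are ours):

```python
import numpy as np

def extract_ridge(W, freqs, f_lo, f_hi):
    """Max-amplitude ridge curve: inside the chosen band [f_lo, f_hi],
    pick the frequency of largest |W_T| at each time step.
    Rows of W are frequencies, columns are times."""
    band = (freqs >= f_lo) & (freqs <= f_hi)
    idx = np.argmax(np.abs(W[band, :]), axis=0)  # per-column amplitude peak
    return freqs[band][idx]
```

Restricting the search to a band containing a single component, as described above, is what prevents the ridge from jumping between unrelated modes.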
\subsection{Phase coherence and phase difference}\label{sec:1}
\begin{figure*}[h!]
\includegraphics[width=17cm]{coh18.png}
\caption{\label{fig:cE}(a-b) Wavelet phase coherence and phase difference of the signals from two of the electrodes (E1, E2). The main frequency component is identified by its significant phase coherence, and the corresponding phase difference is shown by the black dashed line. Coherence below the 95th percentile of the surrogates (yellow curve) is not considered significant.}
\end{figure*}
When we observe oscillations at the same frequency in two different time series, and find that the difference between their instantaneous phases $\phi_{1k,n}$ and $\phi_{2k,n}$ is constant, we can say that the oscillations are coherent at that frequency. The wavelet phase coherence (WPC) between two signals $x_1(t)$ and $x_2(t)$ is obtained from their respective wavelet transforms $W_{x_i}(s,t)$, as defined by equation (\ref{eq: wavelet CWT}). The WPC of the two signals \cite{Bandrivskyy:04a} is given by
\begin{equation}
WPC_{x_1;x_2} (f)=\left|\frac{1}{L}\int_{0}^{L} e^{i\arg[W_{x_1}(s,t)W^*_{x_2}(s,t)]}\,dt\right|\label{eq: phas},
\end{equation}
where the scale $s$ is related to the frequency $f$ as $s\,=\,1/(2\pi f)$. The phase coherence function $C_{\theta}(f_k)$ is obtained by computing, and averaging over time, the sine and cosine components of the phase differences for the whole signal, effectively defining the time-averaged WPC as
\begin{equation}
C_\theta(f_k)=\sqrt{\langle \cos\Delta \theta_{k,n} \rangle^2 + \langle \sin\Delta \theta_{k,n}\rangle^2}.
\label{ref:eqn7}
\end{equation}
The phase coherence function $C_{\theta}(f_k)$, as defined in equation (\ref{ref:eqn7}), is exactly the discrete version of the phase coherence formula in equation (\ref{eq: phas}). In addition, we can also calculate the phase difference $\Delta\theta_{k,n}$ between the two signals according to
\begin{equation}
\Delta\theta_{k,n}=\theta_{2k,n}-\theta_{1k,n}.
\end{equation}
The value of $\Delta\theta_{k,n}$ lies within $\pm 180^{\circ}$ and gives information about the phase of one oscillator relative to the other.
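The time-averaged coherence formula above can be sketched as follows (a minimal numpy illustration, assuming two wavelet-coefficient arrays of the same shape with rows indexed by frequency; the function name is ours):

```python
import numpy as np

def wavelet_phase_coherence(W1, W2):
    """Time-averaged wavelet phase coherence at each frequency:
    C(f) = sqrt(<cos dtheta>^2 + <sin dtheta>^2), with dtheta the
    instantaneous phase difference arg(W1) - arg(W2), averaged over time."""
    dtheta = np.angle(W1) - np.angle(W2)
    return np.sqrt(np.mean(np.cos(dtheta), axis=1) ** 2 +
                   np.mean(np.sin(dtheta), axis=1) ** 2)
```

A constant phase difference gives a value near 1, while statistically independent phases average towards 0, which is why the surrogate threshold described below is needed to judge significance.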
Connections may be found between two signals or components because of interactions between them, or because of common external influences affecting them both.
The calculated coherence between processes is not reliable unless we can repeat the calculation while the interaction or common influence is absent. One way to test for the significance of coherence uses surrogate signals \cite{kugiumtzis2002surrogate}. The type of surrogate that we use with our signals is the iterative amplitude-adjusted Fourier transform (IAAFT), introduced in Sec.~\ref{sec:H}. The null hypothesis here is that the phases of the signals are independent at all frequencies, meaning that surrogates can be produced by randomisation of the time-phase information \cite{clemson2016reconstructing}.
In our study, the wavelet phase coherence and the corresponding phase difference were calculated between pairs of signals, for example the electrode 1 (E1) and electrode 2 (E2) signals shown in Fig.~\ref{fig:cE}. The phase coherence at each frequency was considered significant if that of the original signals exceeded the 95th percentile of 100 surrogates in the frequency range of interest. Fig.~\ref{fig:cE} shows that phase coherence exists in the range 826\,-\,111~Hz, where the two peaks above this range are considered to be high harmonics. The (constant) phase difference was taken into consideration only at the frequencies where significant coherence was observed.
\newpage
\section{Results and Discussion}
\subsection{Absence and presence of electrons}
Before introducing the main results, we present a preliminary survey. We consider measurements made both in the absence and in the presence of surface electrons, for different values of pressing voltage ($V$), magnetic field ($B$), and microwave (MW) radiation.
\begin{figure*}[h!]
\includegraphics[width=17.2cm]{fig1.PNG}
\caption{\label{fig:d} Time-averaged power as a function of frequency, calculated from the wavelet transforms of signals measured for various experimental parameters at electron densities $n_e\,=\,0$~$\si{cm}^{-2}$, $n_e$\,=\,1.4 $\times 10^{6}$~$\si{cm}^{-2}$, $n_e$\,=\,1.8 $\times 10^{6}$~$\si{cm}^{-2}$ and $n_e$\,=\,2.2 $\times 10^{6}$~$\si{cm}^{-2}$. The WTs were obtained with a frequency resolution of 3~Hz in the frequency range 0.2\,-\,4~kHz, with frequency plotted on a logarithmic scale. For each set of experimental values, all electrode measurements are plotted to allow comparison. (i) For $n_e\,=\,0~\si{cm}^{-2}$, keeping the pressing voltage $V\,=\,0$~V: at $B\,=\,0$~T in the presence of MW, and at $B\,=\,0.81$~T with and without MW. (ii) For $n_e$\,=\,1.8 $\times 10^{6}$~$\si{cm}^{-2}$ at a fixed pressing voltage $V\,=\,4.18$~V, $B\,=\,0.85$~T, with/without MW. (iii) For $n_e$\,=\,2.2 $\times 10^{6}$~$\si{cm}^{-2}$ at a fixed pressing voltage $V\,=\,4.20$~V: the first set with no magnetic field ($B\,=\,0$~T) with/without MW, the second set with $B\,=\,0.77$~T with MW. (iv) For $n_e$\,=\,1.4 $\times 10^{6}$~$\si{cm}^{-2}$ at a fixed pressing voltage $V\,=\,4.16$~V and $B\,=\,0.81$~T with/without MW. (v) For $n_e$\,=\,2.2 $\times 10^{6}$~$\si{cm}^{-2}$ at a fixed pressing voltage $V\,=\,4.20$~V and $B\,=\,0.81$~T with/without MW.}
\end{figure*}
Figure \ref{fig:d}, showing the time-averaged power as a function of frequency, summarizes the results of the experiment performed with a cell containing liquid helium, both in the absence and in the presence of electrons. In cases (i), (ii), (iii) and (v) (when there is no MW applied), signals with such high and ill-defined frequencies (notice the smearing of the power over a large frequency range) are to be expected as a result of noise inherent in the system. In these cases there is therefore no evidence for coherent oscillatory behaviour in the frequency range 0.2\,-\,4~kHz, and we classify the signals producing these measurements as noise inherent to the system.
The small power peaks measured at frequencies around 0.1~kHz also exhibit the feature of existing for a range of different experimental parameters. Such an oscillation is certainly not due to electrons responding to the applied fields/MW, and must be an artefact, exclusive to electrode 2 (mains power).
In case (iv), at low electron density, the frequencies of the oscillations are low, meaning that most electrons are at the centre of the chamber, especially below the resonance values (shown later).
The electrons are concentrated at the edges of the cell when the frequencies are higher (as we show below). Indeed, the oscillations occur when the MW are switched on and a magnetic field of 0.81~T is applied. In case (v), when MW are applied at high electron density, the electrons appear to be at the edge of the cell, based on the readout from the five-electrode system.
Cases (iv) and (v) show that the electron density and the pressing voltage can change the frequency of the oscillations. This indicates that the pressing voltage influences how the electrons orbit the cell.
\subsection{Time-frequency analysis}
We show the time-frequency analysis for the whole signal presented in Fig.~\ref{fig:t}(a), where the microwaves (MW) were switched Off\,-\,On\,-\,Off. Fig.~\ref{fig:Wo}(a) indicates the evolution of a couple of frequency components, or modes, after MW irradiation. Note that the peak manifested in Fig.~\ref{fig:Wo}(b) at a frequency of 117~Hz is the second harmonic of the main oscillation.
\begin{figure*}[h!]
\includegraphics[width=17.2cm]{W_ow.png}
\caption{\label{fig:Wo}(a) Time-frequency representation (TFR) and (b) average amplitude plot of the whole signal presented in Fig.~\ref{fig:t}.
}
\end{figure*}
The sets of measurements provided below are for two different electron densities $n_e$:
\begin{itemize}
\item [A.] For $n_e$\,=\,1.4\,$\times 10^{6}$\,$\si{cm}^{-2}$ in Figures~\ref{fig:onesix}--\ref{fig:22} (lower electron density)
\item [B.] For $n_e$\,=\,2.2\,$\times 10^{6}$\,$\si{cm}^{-2}$ in Figures~\ref{fig:216}--\ref{fig:222} (higher electron density)
\end{itemize}
\noindent In each case, results are given for pressing voltages in the range $4.16 \leq V \leq 4.22$~V, in increments of 0.01~V. For all measurements, the magnetic field and the depth of the liquid were fixed at {$B\,=\,0.81$~T} and {$d\,=\,1.3$~mm}, respectively. For each set of conditions, we present current time series similar to those shown in Fig.~\ref{fig:t}, together with the corresponding WT and the time-averaged power in the middle 1.4~s of the signal. The group of figures presented for each set of conditions relates to the current recorded from each of the individual electrodes C, E1, E2, E3, and E4. The WTs are calculated with a frequency resolution of 3~Hz in the frequency range 0.011\,-\,4~kHz for the 30\,-\,31.4~s part of the time series. Where any of these parameters differed when calculating the WT, this is specified.
\clearpage
\centerline{\bf A. LOW ELECTRON DENSITY}
\vspace{0.3cm}
\underbar{\bf Pressing voltage $V\,=\,4.16$~V}
\begin{figure}[b!]
\centering
\includegraphics[scale=0.86]{16c.PNG}
\includegraphics[scale=0.86]{161.PNG}
\caption{\label{fig:onesix} Signals from electrodes C and E1, and their spectral analyses. (a) Signals were recorded for 60 seconds at a pressing voltage of $V\,=\,4.16$~V and electron density $n_e$\,=\,1.4$\times {10^{6}}$~\si{cm^{-2}}. The MW (139~GHz) is switched On/Off where indicated. (b) Enlarged reddened 1.4~s part of (a) used for analysis, where the inset shows a further zoom. (c) Wavelet transform with frequency resolution~3~Hz. (d) Time-averaged wavelet power of signals in (b) showing the fundamental frequency and higher harmonics. The $y$-axis of (c) and the $x$-axes of (d) are logarithmic.}
\end{figure}
\addtocounter{figure}{-1}
\clearpage
\begin{figure}[b!]
\centering
\includegraphics[scale=0.9]{162.PNG}
\includegraphics[scale=0.9]{163.PNG}
\caption{(continued) Signals from electrodes E2 and E3, and their spectral analyses. (a) Signals were recorded for 60 seconds at a pressing voltage of $V\,=\,4.16$~V and electron density $n_e$\,=\,1.4$\times {10^{6}}$~\si{cm^{-2}}. The MW (139~GHz) is switched On/Off where indicated. (b) Enlarged reddened 1.4\,s part of (a) used for analysis, where the inset shows a further zoom. (c) Wavelet transform with frequency resolution~3~Hz. (d) Time-averaged wavelet power of signals in (b) showing the fundamental frequency and higher harmonics. The $y$-axis of (c) and the $x$-axes of (d) are logarithmic.}
\end{figure}
\addtocounter{figure}{-1}
\clearpage
\begin{figure*}
\includegraphics[scale=0.9]{164.PNG}
\caption{(continued) Signal from electrode E4, and its spectral analysis. (a) The signal was recorded for 60 seconds at a pressing voltage of $V\,=\,4.16$~V and electron density $n_e$\,=\,1.4$\times {10^{6}}$~\si{cm^{-2}}. The MW (139~GHz) is switched On/Off where indicated. (b) Enlarged reddened 1.4~s part of (a) used for analysis, where the inset shows a further zoom. (c) Wavelet transform with frequency resolution~3~Hz. (d) Time-averaged wavelet power of signals in (b) showing the fundamental frequency and higher harmonics. The $y$-axis of (c) and the $x$-axes of (d) are logarithmic.}
\label{fig:16}
\end{figure*}
\clearpage
\underbar{\bf Pressing voltage $V\,=\,4.17$~V}
\begin{figure}[b!]
\centering
\includegraphics[scale=0.87]{17c.PNG}
\includegraphics[scale=0.87]{171.PNG}
\caption{ Signals from electrodes C and E1, and their spectral analyses. (a) Signals were recorded for 60 seconds at a pressing voltage of $V\,=\,4.17$~V and electron density $n_e$\,=\,1.4$\times {10^{6}}$~\si{cm^{-2}}. The MW (139~GHz) is switched On/Off where indicated. (b) Enlarged reddened 1.4~s part of (a) used for analysis, where the inset shows a further zoom. (c) Wavelet transform with frequency resolution~3~Hz. (d) Time-averaged wavelet power of signals in (b) showing the fundamental frequency and higher harmonics. The $y$-axis of (c) and the $x$-axes of (d) are logarithmic.}
\end{figure}
\addtocounter{figure}{-1}
\clearpage
\begin{figure}[b!]
\centering
\includegraphics[scale=0.9]{172.PNG}
\includegraphics[scale=0.9]{173.PNG}
\caption{(continued) Signals from electrodes E2 and E3, and their spectral analyses. (a) Signals were recorded for 60 seconds at a pressing voltage of $V\,=\,4.17$~V and electron density $n_e$\,=\,1.4$\times {10^{6}}$~\si{cm^{-2}}. The MW (139~GHz) is switched On/Off where indicated. (b) Enlarged reddened 1.4~s part of (a) used for analysis, where the inset shows a further zoom. (c) Wavelet transform with frequency resolution~3~Hz. (d) Time-averaged wavelet power of signals in (b) showing the fundamental frequency and higher harmonics. The $y$-axis of (c) and the $x$-axes of (d) are logarithmic.}
\end{figure}
\addtocounter{figure}{-1}
\clearpage
\begin{figure*}
\includegraphics[scale=0.9]{174.PNG}
\caption{(continued) Signal from electrode E4, and its spectral analysis. (a) The signal was recorded for 60 seconds at a pressing voltage of $V\,=\,4.17$~V and electron density $n_e$\,=\,1.4$\times {10^{6}}$~\si{cm^{-2}}. The MW (139~GHz) is switched On/Off where indicated. (b) Enlarged reddened 1.4~s part of (a) used for analysis, where the inset shows a further zoom. (c) Wavelet transform with frequency resolution~3~Hz. (d) Time-averaged wavelet power of signals in (b) showing the fundamental frequency and higher harmonics. The $y$-axis of (c) and the $x$-axes of (d) are logarithmic.}
\label{fig:17}
\end{figure*}
\clearpage
\underbar{\bf Pressing voltage $V\,=\,4.18$~V}
\begin{figure}[b!]
\centering
\includegraphics[scale=0.87]{18c.PNG}
\includegraphics[scale=0.87]{181.PNG}
\caption{ Signals from electrodes C and E1, and their spectral analyses. (a) Signals were recorded for 60 seconds at a pressing voltage of $V\,=\,4.18$~V and electron density $n_e$\,=\,1.4$\times {10^{6}}$~\si{cm^{-2}}. The MW (139~GHz) is switched On/Off where indicated. (b) Enlarged reddened 1.4~s part of (a) used for analysis, where the inset shows a further zoom. (c) Wavelet transform with frequency resolution~3~Hz. (d) Time-averaged wavelet power of signals in (b) showing the fundamental frequency and higher harmonics. The $y$-axis of (c) and the $x$-axes of (d) are logarithmic.}
\end{figure}
\addtocounter{figure}{-1}
\clearpage
\begin{figure}[b!]
\centering
\includegraphics[scale=0.9]{182.PNG}
\includegraphics[scale=0.9]{183.PNG}
\caption{(continued) Signals from electrodes E2 and E3, and their spectral analyses. (a) Signals were recorded for 60 seconds at a pressing voltage of $V\,=\,4.18$~V and electron density $n_e$\,=\,1.4$\times {10^{6}}$~\si{cm^{-2}}. The MW (139~GHz) is switched On/Off where indicated. (b) Enlarged reddened 1.4~s part of (a) used for analysis, where the inset shows a further zoom. (c) Wavelet transform with frequency resolution~3~Hz. (d) Time-averaged wavelet power of signals in (b) showing the fundamental frequency and higher harmonics. The $y$-axis of (c) and the $x$-axes of (d) are logarithmic.}
\end{figure}
\addtocounter{figure}{-1}
\clearpage
\begin{figure*}
\includegraphics[scale=0.9]{184.PNG}
\caption{(continued) Signal from electrode E4, and its spectral analysis. (a) The signal was recorded for 60 seconds at a pressing voltage of $V\,=\,4.18$~V and electron density $n_e$\,=\,1.4$\times {10^{6}}$~\si{cm^{-2}}. The MW (139~GHz) is switched On/Off where indicated. (b) Enlarged reddened 1.4~s part of (a) used for analysis, where the inset shows a further zoom. (c) Wavelet transform with frequency resolution~3~Hz. (d) Time-averaged wavelet power of signals in (b) showing the fundamental frequency and higher harmonics. The $y$-axis of (c) and the $x$-axes of (d) are logarithmic.}
\label{fig:18}
\end{figure*}
\clearpage
\underbar{\bf Pressing voltage $V\,=\,4.19$~V}
\begin{figure}[b!]
\centering
\includegraphics[scale=0.87]{19c.PNG}
\includegraphics[scale=0.87]{191.PNG}
\caption{ Signals from electrodes C and E1, and their spectral analyses. (a) Signals were recorded for 60 seconds at a pressing voltage of $V\,=\,4.19$~V and electron density $n_e$\,=\,1.4$\times {10^{6}}$~\si{cm^{-2}}. The MW (139~GHz) is switched On/Off where indicated. (b) Enlarged reddened 1.4~s part of (a) used for analysis, where the inset shows a further zoom. (c) Wavelet transform with frequency resolution~3~Hz. (d) Time-averaged wavelet power of signals in (b) showing the fundamental frequency and higher harmonics. The $y$-axis of (c) and the $x$-axes of (d) are logarithmic.}
\end{figure}
\addtocounter{figure}{-1}
\clearpage
\begin{figure}[b!]
\centering
\includegraphics[scale=0.9]{192.PNG}
\includegraphics[scale=0.9]{193.PNG}
\caption{(continued) Signals from electrodes E2 and E3, and their spectral analyses. (a) Signals were recorded for 60 seconds at a pressing voltage of $V\,=\,4.19$~V and electron density $n_e$\,=\,1.4$\times {10^{6}}$~\si{cm^{-2}}. The MW (139~GHz) is switched On/Off where indicated. (b) Enlarged reddened 1.4~s part of (a) used for analysis, where the inset shows a further zoom. (c) Wavelet transform with frequency resolution~3~Hz. (d) Time-averaged wavelet power of signals in (b) showing the fundamental frequency and higher harmonics. The $y$-axis of (c) and the $x$-axes of (d) are logarithmic.}
\end{figure}
\addtocounter{figure}{-1}
\clearpage
\begin{figure*}
\includegraphics[scale=0.9]{194.PNG}
\caption{(continued) Signal from electrode E4, and its spectral analysis. (a) The signal was recorded for 60 seconds at a pressing voltage of $V\,=\,4.19$~V and electron density $n_e$\,=\,1.4$\times {10^{6}}$~\si{cm^{-2}}. The MW (139~GHz) is switched On/Off where indicated. (b) Enlarged reddened 1.4~s part of (a) used for analysis, where the inset shows a further zoom. (c) Wavelet transform with frequency resolution~3~Hz. (d) Time-averaged wavelet power of signals in (b) showing the fundamental frequency and higher harmonics. The $y$-axis of (c) and the $x$-axes of (d) are logarithmic.}
\label{fig:19}
\end{figure*}
\clearpage
\underbar{\bf Pressing voltage $V\,=\,4.20$~V}
\begin{figure}[b!]
\centering
\includegraphics[scale=0.87]{20c.PNG}
\includegraphics[scale=0.87]{201.PNG}
\caption{ Signals from electrodes C and E1, and their spectral analyses. (a) Signals were recorded for 60 seconds at a pressing voltage of $V\,=\,4.20$~V and electron density $n_e$\,=\,1.4$\times {10^{6}}$~\si{cm^{-2}}. The MW (139~GHz) is switched On/Off where indicated. (b) Enlarged reddened 1.4~s part of (a) used for analysis, where the inset shows a further zoom. (c) Wavelet transform with frequency resolution~3~Hz. (d) Time-averaged wavelet power of signals in (b) showing the fundamental frequency and higher harmonics. The $y$-axis of (c) and the $x$-axes of (d) are logarithmic.}
\end{figure}
\addtocounter{figure}{-1}
\clearpage
\begin{figure}[b!]
\centering
\includegraphics[scale=0.9]{202.PNG}
\includegraphics[scale=0.9]{203.PNG}
\caption{(continued) Signals from electrodes E2 and E3, and their spectral analyses. (a) Signals were recorded for 60 seconds at a pressing voltage of $V\,=\,4.20$~V and electron density $n_e$\,=\,1.4$\times {10^{6}}$~\si{cm^{-2}}. The MW (139~GHz) is switched On/Off where indicated. (b) Enlarged reddened 1.4~s part of (a) used for analysis, where the inset shows a further zoom. (c) Wavelet transform with frequency resolution~3~Hz. (d) Time-averaged wavelet power of signals in (b) showing the fundamental frequency and higher harmonics. The $y$-axis of (c) and the $x$-axes of (d) are logarithmic.}
\end{figure}
\addtocounter{figure}{-1}
\clearpage
\begin{figure*}
\includegraphics[scale=0.9]{204.PNG}
\caption{(continued) Signal from electrode E4, and its spectral analysis. (a) The signal was recorded for 60 seconds at a pressing voltage of $V\,=\, 4.20$~V and electron density $n_e$\,=\,1.4$\times {10^{6}}$~\si{cm^{-2}}. The MW (139~GHz) is switched On/Off where indicated. (b) Enlarged reddened 1.4~s part of (a) used for analysis, where the inset shows a further zoom. (c) Wavelet transform with frequency resolution~3~Hz. (d) Time-averaged wavelet power of signals in (b) showing the fundamental frequency and higher harmonics. The $y$-axis of (c) and the $x$-axes of (d) are logarithmic.}
\label{fig:20}
\end{figure*}
\clearpage
\underbar{\bf Pressing voltage $V\,=\,4.21$~V}
\begin{figure}[b!]
\centering
\includegraphics[scale=0.87]{21c.PNG}
\includegraphics[scale=0.87]{211.PNG}
\caption{ Signals from electrodes C and E1, and their spectral analyses. (a) Signals were recorded for 60 seconds at a pressing voltage of $V\,=\,4.21$~V and electron density $n_e$\,=\,1.4$\times {10^{6}}$~\si{cm^{-2}}. The MW (139~GHz) is switched On/Off where indicated. (b) Enlarged reddened 1.4~s part of (a) used for analysis, where the inset shows a further zoom. (c) Wavelet transform with frequency resolution~3~Hz. (d) Time-averaged wavelet power of signals in (b) showing the fundamental frequency and higher harmonics. The $y$-axis of (c) and the $x$-axes of (d) are logarithmic.}
\end{figure}
\addtocounter{figure}{-1}
\clearpage
\begin{figure}[b!]
\centering
\includegraphics[scale=0.9]{212.PNG}
\includegraphics[scale=0.9]{213.PNG}
\caption{(continued) Signals from electrodes E2 and E3, and their spectral analyses. (a) Signals were recorded for 60 seconds at a pressing voltage of $V\,=\,4.21$~V and electron density $n_e$\,=\,1.4$\times {10^{6}}$~\si{cm^{-2}}. The MW (139~GHz) is switched On/Off where indicated. (b) Enlarged reddened 1.4~s part of (a) used for analysis, where the inset shows a further zoom. (c) Wavelet transform with frequency resolution~3~Hz. (d) Time-averaged wavelet power of signals in (b) showing the fundamental frequency and higher harmonics. The $y$-axis of (c) and the $x$-axes of (d) are logarithmic.}
\end{figure}
\addtocounter{figure}{-1}
\clearpage
\begin{figure*}
\includegraphics[scale=0.9]{214.PNG}
\caption{(continued) Signal from electrode E4, and its spectral analysis. (a) The signal was recorded for 60 seconds at a pressing voltage of $V\,=\,4.21$~V and electron density $n_e$\,=\,1.4$\times {10^{6}}$~\si{cm^{-2}}. The MW (139~GHz) is switched On/Off where indicated. (b) Enlarged reddened 1.4\,s part of (a) used for analysis, where the inset shows a further zoom. (c) Wavelet transform with frequency resolution~3~Hz. (d) Time-averaged wavelet power of signals in (b) showing the fundamental frequency and higher harmonics. The $y$-axis of (c) and the $x$-axes of (d) are logarithmic.}
\label{fig:21}
\end{figure*}
\clearpage
\underbar{\bf Pressing voltage $V\,=\,4.22$~V}
\begin{figure}[b!]
\centering
\includegraphics[scale=0.87]{22c.PNG}
\includegraphics[scale=0.87]{221.PNG}
\caption{ Signals from electrodes C and E1, and their spectral analyses. (a) Signals were recorded for 60 seconds at a pressing voltage of $V\,=\,4.22$~V and electron density $n_e$\,=\,1.4$\times {10^{6}}$~\si{cm^{-2}}. The MW (139~GHz) is switched On/Off where indicated. (b) Enlarged reddened 1.4~s part of (a) used for analysis, where the inset shows a further zoom. (c) Wavelet transform with frequency resolution~3~Hz. (d) Time-averaged wavelet power of signals in (b) showing the fundamental frequency and higher harmonics. The $y$-axis of (c) and the $x$-axes of (d) are logarithmic.}
\end{figure}
\addtocounter{figure}{-1}
\clearpage
\begin{figure}[b!]
\centering
\includegraphics[scale=0.9]{222.PNG}
\includegraphics[scale=0.9]{223.PNG}
\caption{(continued) Signals from electrodes E2 and E3, and their spectral analyses. (a) Signals were recorded for 60 seconds at a pressing voltage of $V\,=\,4.22$~V and electron density $n_e$\,=\,1.4$\times {10^{6}}$~\si{cm^{-2}}. The MW (139~GHz) is switched On/Off where indicated. (b) Enlarged reddened 1.4~s part of (a) used for analysis, where the inset shows a further zoom. (c) Wavelet transform with frequency resolution~3~Hz. (d) Time-averaged wavelet power of signals in (b) showing the fundamental frequency and higher harmonics. The $y$-axis of (c) and the $x$-axes of (d) are logarithmic.}
\end{figure}
\addtocounter{figure}{-1}
\clearpage
\begin{figure*}
\centering
\includegraphics[scale=0.9]{224.PNG}
\caption{(continued) Signal from electrode E4, and its spectral analysis. (a) The signal was recorded for 60 seconds at a pressing voltage of $V\,=\,4.22$~V and electron density $n_e$\,=\,1.4$\times {10^{6}}$~\si{cm^{-2}}. The MW (139~GHz) is switched On/Off where indicated. (b) Enlarged reddened 1.4~s part of (a) used for analysis, where the inset shows a further zoom. (c) Wavelet transform with frequency resolution~3~Hz. (d) Time-averaged wavelet power of signals in (b) showing the fundamental frequency and higher harmonics. The $y$-axis of (c) and the $x$-axes of (d) are logarithmic.}
\label{fig:22}
\end{figure*}
\clearpage
\centerline{\bf B. HIGH ELECTRON DENSITY}
\vspace{0.3cm}
\underbar{\bf Pressing voltage $V\,=\,4.16$~V}
\begin{figure}[b!]
\centering
\includegraphics[scale=0.80]{216c.PNG}
\includegraphics[scale=0.80]{2161.PNG}
\caption{ Signals from electrodes C and E1, and their spectral analyses. (a) Signals were recorded for 60 seconds at a pressing voltage of $V\,=\,4.16$\,V and electron density $n_e=2.2\times {10^{6}}$~\si{cm^{-2}}. The MW (139~GHz) is switched On/Off where indicated. (b) Enlarged reddened 1.4~s part of (a) used for analysis, where the inset shows a further zoom. (c) Wavelet transform with frequency resolution~3~Hz. (d) Time-averaged wavelet power of signals in (b) showing the fundamental frequency and higher harmonics. The $y$-axis of (c) and the $x$-axes of (d) are logarithmic.}
\label{fig:216}
\end{figure}
\clearpage
\begin{figure}[b!]
\includegraphics[scale=0.84]{2162.PNG}
\includegraphics[scale=0.85]{2163.PNG}
\caption{(continued) Signals from electrodes E2 and E3, and their spectral analyses. (a) Signals were recorded for 60 seconds at a pressing voltage of $V$\,=\,4.16~V and electron density $n_e$\,=\,2.2$\times {10^{6}}$~$\si{cm^{-2}}$. The MW (139~GHz) is switched On/Off where indicated. (b) Enlarged reddened 1.4~s part of (a) used for analysis, where the inset shows a further zoom. (c) Wavelet transform with frequency resolution~3~Hz. (d) Time-averaged wavelet power of signals in (b) showing the fundamental frequency and higher harmonics. The $y$-axis of (c) and the $x$-axes of (d) are logarithmic.}
\end{figure}
\clearpage
\begin{figure*}
\centering
\includegraphics[scale=0.87]{2164.PNG}
\caption{(continued) Signal from electrode E4, and its spectral analysis. (a) The signal was recorded for 60 seconds at a pressing voltage of $V\,=\,4.16$~V and electron density $n_e\,=\,2.2\times {10^{6}}$~\si{cm^{-2}}. The MW (139~GHz) is switched On/Off where indicated. (b) Enlarged reddened 1.4~s part of (a) used for analysis, where the inset shows a further zoom. (c) Wavelet transform with frequency resolution~3~Hz. (d) Time-averaged wavelet power of signals in (b) showing the fundamental frequency and higher harmonics. The $y$-axis of (c) and the $x$-axes of (d) are logarithmic.}
\end{figure*}
\clearpage
\underbar{\bf Pressing voltage $V\,=\,4.17$~V}
\begin{figure}[b!]
\centering
\includegraphics[scale=0.85]{217c.PNG}
\includegraphics[scale=0.85]{2171.PNG}
\caption{ Signals from electrodes C and E1, and their spectral analyses. (a) Signals were recorded for 60 seconds at a pressing voltage of $V\,=\,4.17$~V and electron density $n_e\,=\,2.2\times {10^{6}}$~\si{cm^{-2}}. The MW (139~GHz) is switched On/Off where indicated. (b) Enlarged reddened 1.4~s part of (a) used for analysis, where the inset shows a further zoom. (c) Wavelet transform with frequency resolution~3~Hz. (d) Time-averaged wavelet power of signals in (b) showing the fundamental frequency and higher harmonics. The $y$-axis of (c) and the $x$-axes of (d) are logarithmic.}
\end{figure}
\addtocounter{figure}{-1}
\clearpage
\begin{figure}[b!]
\centering
\includegraphics[scale=0.87]{2172.PNG}
\includegraphics[scale=0.87]{2173.PNG}
\caption{(continued) Signals from electrodes E2 and E3, and their spectral analyses. (a) Signals were recorded for 60 seconds at a pressing voltage of $V = 4.17$\,V and electron density $n_e=2.2\times {10^{6}}$\,\si{cm^{-2}}. The MW (139\,GHz) is switched On/Off where indicated. (b) Enlarged reddened 1.4\,s part of (a) used for analysis, where the inset shows a further zoom. (c) Wavelet transform with frequency resolution $f_{0}$\,=\,3\,Hz. (d) Time-averaged wavelet power of signals in (b) showing the fundamental frequency and higher harmonics. The $y$-axis of (c) and the $x$-axes of (d) are logarithmic.}
\end{figure}
\addtocounter{figure}{-1}
\clearpage
\begin{figure*}
\centering
\includegraphics[scale=0.9]{2174.PNG}
\caption{(continued) Signal from electrode E4, and its spectral analysis. (a) The signal was recorded for 60 seconds at a pressing voltage of $V\,=\,4.17$~V and electron density $n_e\,=\,2.2\times {10^{6}}$~\si{cm^{-2}}. The MW (139~GHz) is switched On/Off where indicated. (b) Enlarged reddened 1.4~s part of (a) used for analysis, where the inset shows a further zoom. (c) Wavelet transform with frequency resolution~3~Hz. (d) Time-averaged wavelet power of signals in (b) showing the fundamental frequency and higher harmonics. The $y$-axis of (c) and the $x$-axes of (d) are logarithmic.}
\end{figure*}
\clearpage
\underbar{\bf Pressing voltage $V\,=\,4.18$~V}
\begin{figure}[b!]
\centering
\includegraphics[scale=0.80]{218c.PNG}
\includegraphics[scale=0.80]{2181.PNG}
\caption{ Signals from electrodes C and E1, and their spectral analyses. (a) Signals were recorded for 60 seconds at a pressing voltage of $V\,=\,4.18$~V and electron density $n_e\,=\,2.2\times {10^{6}}$~\si{cm^{-2}}. The MW (139~GHz) is switched On/Off where indicated. (b) Enlarged reddened 1.4~s part of (a) used for analysis, where the inset shows a further zoom. (c) Wavelet transform with frequency resolution~3~Hz. (d) Time-averaged wavelet power of signals in (b) showing the fundamental frequency and higher harmonics. The $y$-axis of (c) and the $x$-axes of (d) are logarithmic.}
\end{figure}
\addtocounter{figure}{-1}
\clearpage
\begin{figure}[b!]
\centering
\includegraphics[scale=0.85]{2182.PNG}
\includegraphics[scale=0.85]{2183.PNG}
\caption{(continued) Signals from electrodes E2 and E3, and their spectral analyses. (a) Signals were recorded for 60 seconds at a pressing voltage of $V\,=\,4.18$~V and electron density $n_e=2.2\times {10^{6}}$~\si{cm^{-2}}. The MW (139~GHz) is switched On/Off where indicated. (b) Enlarged reddened 1.4~s part of (a) used for analysis, where the inset shows a further zoom. (c) Wavelet transform with frequency resolution~3~Hz. (d) Time-averaged wavelet power of signals in (b) showing the fundamental frequency and higher harmonics. The $y$-axis of (c) and the $x$-axes of (d) are logarithmic.}
\end{figure}
\addtocounter{figure}{-1}
\clearpage
\begin{figure*}
\centering
\includegraphics[scale=0.9]{2184.PNG}
\caption{(continued) Signal from electrode E4, and its spectral analysis. (a) The signal was recorded for 60 seconds at a pressing voltage of $V\,=\,4.18$~V and electron density $n_e\,=\,2.2\times {10^{6}}$~\si{cm^{-2}}. The MW (139~GHz) is switched On/Off where indicated. (b) Enlarged reddened 1.4~s part of (a) used for analysis, where the inset shows a further zoom. (c) Wavelet transform with frequency resolution~3~Hz. (d) Time-averaged wavelet power of signals in (b) showing the fundamental frequency and higher harmonics. The $y$-axis of (c) and the $x$-axes of (d) are logarithmic.}
\end{figure*}
\clearpage
\underbar{\bf Pressing voltage $V\,=\,4.19$~V}
\begin{figure}[b!]
\centering
\includegraphics[scale=0.85]{219c.PNG}
\includegraphics[scale=0.85]{2191.PNG}
\caption{ Signals from electrodes C and E1, and their spectral analyses. (a) Signals were recorded for 60 seconds at a pressing voltage of $V\,=\,4.19$~V and electron density $n_e\,=\,2.2\times {10^{6}}$~\si{cm^{-2}}. The MW (139\,GHz) is switched On/Off where indicated. (b) Enlarged reddened 1.4~s part of (a) used for analysis, where the inset shows a further zoom. (c) Wavelet transform with frequency resolution~3~Hz. (d) Time-averaged wavelet power of signals in (b) showing the fundamental frequency and higher harmonics. The $y$-axis of (c) and the $x$-axes of (d) are logarithmic.}
\end{figure}
\addtocounter{figure}{-1}
\clearpage
\begin{figure}[b!]
\centering
\includegraphics[scale=0.85]{2192.PNG}
\includegraphics[scale=0.85]{2193.PNG}
\caption{(continued) Signals from electrodes E2 and E3, and their spectral analyses. (a) Signals were recorded for 60 seconds at a pressing voltage of $V\,=\,4.19$~V and electron density $n_e\,=\,2.2\times {10^{6}}$~\si{cm^{-2}}. The MW (139\,GHz) is switched On/Off where indicated. (b) Enlarged reddened 1.4~s part of (a) used for analysis, where the inset shows a further zoom. (c) Wavelet transform with frequency resolution~3~Hz. (d) Time-averaged wavelet power of signals in (b) showing the fundamental frequency and higher harmonics. The $y$-axis of (c) and the $x$-axes of (d) are logarithmic.}
\end{figure}
\addtocounter{figure}{-1}
\clearpage
\begin{figure*}
\centering
\includegraphics[scale=0.9]{2194.PNG}
\caption{(continued) Signal from electrode E4, and its spectral analysis. (a) The signal was recorded for 60 seconds at a pressing voltage of $V\,=\,4.19$~V and electron density $n_e\,=\,2.2\times {10^{6}}$~\si{cm^{-2}}. The MW (139~GHz) is switched On/Off where indicated. (b) Enlarged reddened 1.4~s part of (a) used for analysis, where the inset shows a further zoom. (c) Wavelet transform with frequency resolution~3~Hz. (d) Time-averaged wavelet power of signals in (b) showing the fundamental frequency and higher harmonics. The $y$-axis of (c) and the $x$-axes of (d) are logarithmic.}
\end{figure*}
\clearpage
\underbar{\bf Pressing voltage $V\,=\,4.20$~V}
\begin{figure}[b!]
\centering
\includegraphics[scale=0.70]{220cd.PNG}
\includegraphics[scale=0.70]{2201d.PNG}
\caption{ Signals from electrodes C and E1, and their spectral analyses. (a) Signals were recorded for 60 seconds at a pressing voltage of $V\,=\,4.20$~V and electron density $n_e\,=\,2.2\times {10^{6}}$~\si{cm^{-2}}. The MW (139~GHz) is switched On/Off where indicated. (b) Enlarged reddened 1.4~s part of (a) used for analysis, where the inset shows a further zoom. (c) Wavelet transform with frequency resolution~3~Hz. (d) Time-averaged wavelet power of signals in (b) showing the fundamental frequency and higher harmonics. The $y$-axis of (c) and the $x$-axes of (d) are logarithmic.}
\label{fig:220}
\end{figure}
\addtocounter{figure}{-1}
\clearpage
\begin{figure}[b!]
\centering
\includegraphics[scale=0.7]{2202d.PNG}
\includegraphics[scale=0.7]{2203d.PNG}
\caption{(continued) Signals from electrodes E2 and E3, and their spectral analyses. (a) Signals were recorded for 60 seconds at a pressing voltage of $V\,=\,4.20$~V and electron density $n_e\,=\,2.2\times {10^{6}}$~\si{cm^{-2}}. The MW (139~GHz) is switched On/Off where indicated. (b) Enlarged reddened 1.4~s part of (a) used for analysis, where the inset shows a further zoom. (c) Wavelet transform with frequency resolution~3~Hz. (d) Time-averaged wavelet power of signals in (b) showing the fundamental frequency and higher harmonics. The $y$-axis of (c) and the $x$-axes of (d) are logarithmic.}
\end{figure}
\addtocounter{figure}{-1}
\clearpage
\begin{figure*}
\centering
\includegraphics[scale=0.7]{2204d.PNG}
\caption{(continued) Signal from electrode E4, and its spectral analysis. (a) The signal was recorded for 60 seconds at a pressing voltage of $V\,=\,4.20$~V and electron density $n_e\,=\,2.2\times {10^{6}}$~\si{cm^{-2}}. The MW (139~GHz) is switched On/Off where indicated. (b) Enlarged reddened 1.4~s part of (a) used for analysis, where the inset shows a further zoom. (c) Wavelet transform with frequency resolution~3~Hz. (d) Time-averaged wavelet power of signals in (b) showing the fundamental frequency and higher harmonics. The $y$-axis of (c) and the $x$-axes of (d) are logarithmic.}
\end{figure*}
\clearpage
\underbar{\bf Pressing voltage $V\,=\,4.21$~V}
\begin{figure}[b!]
\centering
\includegraphics[scale=0.80]{221c.PNG}
\includegraphics[scale=0.80]{2211.PNG}
\caption{ Signals from electrodes C and E1, and their spectral analyses. (a) Signals were recorded for 60 seconds at a pressing voltage of $V\,=\,4.21$~V and electron density $n_e\,=\,2.2\times {10^{6}}$~\si{cm^{-2}}. The MW (139~GHz) is switched On/Off where indicated. (b) Enlarged reddened 1.4~s part of (a) used for analysis, where the inset shows a further zoom. (c) Wavelet transform with frequency resolution~3~Hz. (d) Time-averaged wavelet power of signals in (b) showing the fundamental frequency and higher harmonics. The $y$-axis of (c) and the $x$-axes of (d) are logarithmic.}
\end{figure}
\addtocounter{figure}{-1}
\clearpage
\begin{figure}[b!]
\centering
\includegraphics[scale=0.85]{2212.PNG}
\includegraphics[scale=0.85]{2213.PNG}
\caption{(continued) Signals from electrodes E2 and E3, and their spectral analyses. (a) Signals were recorded for 60 seconds at a pressing voltage of $V\,=\,4.21$~V and electron density $n_e\,=\,2.2\times {10^{6}}$~\si{cm^{-2}}. The MW (139\,GHz) is switched On/Off where indicated. (b) Enlarged reddened 1.4~s part of (a) used for analysis, where the inset shows a further zoom. (c) Wavelet transform with frequency resolution~3~Hz. (d) Time-averaged wavelet power of signals in (b) showing the fundamental frequency and higher harmonics. The $y$-axis of (c) and the $x$-axes of (d) are logarithmic.}
\end{figure}
\addtocounter{figure}{-1}
\clearpage
\begin{figure*}
\centering
\includegraphics[scale=0.9]{2214.PNG}
\caption{(continued) Signal from electrode E4, and its spectral analysis. (a) The signal was recorded for 60 seconds at a pressing voltage of $V\,=\,4.21$~V and electron density $n_e\,=\,2.2\times {10^{6}}$~\si{cm^{-2}}. The MW (139~GHz) is switched On/Off where indicated. (b) Enlarged reddened 1.4~s part of (a) used for analysis, where the inset shows a further zoom. (c) Wavelet transform with frequency resolution~3~Hz. (d) Time-averaged wavelet power of signals in (b) showing the fundamental frequency and higher harmonics. The $y$-axis of (c) and the $x$-axes of (d) are logarithmic.}
\end{figure*}
\clearpage
\underbar{\bf Pressing voltage $V\,=\,4.22$~V}
\begin{figure}[b!]
\centering
\includegraphics[scale=0.85]{222c.PNG}
\includegraphics[scale=0.85]{2221.PNG}
\caption{ Signals from electrodes C and E1, and their spectral analyses. (a) Signals were recorded for 60 seconds at a pressing voltage of $V\,=\,4.22$~V and electron density $n_e\,=\,2.2\times {10^{6}}$~\si{cm^{-2}}. The MW (139\,GHz) is switched On/Off where indicated. (b) Enlarged reddened 1.4~s part of (a) used for analysis, where the inset shows a further zoom. (c) Wavelet transform with frequency resolution $f_{0}$\,=\,3~Hz. (d) Time-averaged wavelet power of signals in (b) showing the fundamental frequency and higher harmonics. The $y$-axis of (c) and the $x$-axes of (d) are logarithmic.}
\end{figure}
\addtocounter{figure}{-1}
\clearpage
\begin{figure}[b!]
\centering
\includegraphics[scale=0.9]{2222.PNG}
\includegraphics[scale=0.9]{2223.PNG}
\caption{(continued) Signals from electrodes E2 and E3, and their spectral analyses. (a) Signals were recorded for 60 seconds at a pressing voltage of $V\,=\,4.22$\,V and electron density $n_e\,=\,2.2\times {10^{6}}$~\si{cm^{-2}}. The MW (139~GHz) is switched On/Off where indicated. (b) Enlarged reddened 1.4~s part of (a) used for analysis, where the inset shows a further zoom. (c) Wavelet transform with frequency resolution~3~Hz. (d) Time-averaged wavelet power of signals in (b) showing the fundamental frequency and higher harmonics. The $y$-axis of (c) and the $x$-axes of (d) are logarithmic.}
\end{figure}
\addtocounter{figure}{-1}
\clearpage
\begin{figure*}
\centering
\includegraphics[scale=0.9]{2224.PNG}
\caption{(continued) Signal from electrode E4, and its spectral analysis. (a) The signal was recorded for 60 seconds at a pressing voltage of $V = 4.22$\,V and electron density $n_e\,=\,2.2\times {10^{6}}$~\si{cm^{-2}}. The MW (139\,GHz) is switched On/Off where indicated. (b) Enlarged reddened 1.4~s part of (a) used for analysis, where the inset shows a further zoom. (c) Wavelet transform with frequency resolution~3~Hz. (d) Time-averaged wavelet power of signals in (b) showing the fundamental frequency and higher harmonics. The $y$-axis of (c) and the $x$-axes of (d) are logarithmic.}
\label{fig:222}
\end{figure*}
\clearpage
By applying the wavelet transform to all of the signals, for both low and high electron density, as shown in figures~\ref{fig:onesix}--\ref{fig:22} and \ref{fig:216}--\ref{fig:222}, respectively, while varying the pressing voltage from 4.16~V to 4.22~V, we observe that the frequency varies with pressing voltage. For both electron densities, we found that the oscillations for all five electrodes (C, E1, E2, E3 and E4) occurred at the same frequency. The time-frequency representation shows more than one frequency component for each electrode. We have therefore checked in each case whether the second frequency is an independent mode, or a higher harmonic of the first mode, by using the harmonic finder method (see section~\ref{sec:H}). This tool extracts the phases from the TFR of the time series, divides them into bins, and then computes the mutual information between the two sets of phases. High mutual information implies that the phases are mutually dependent. In each case it was found that the second frequency is a higher harmonic of the main mode, resulting from nonlinearity in the interaction between the electrons and the surface of the liquid helium. At low electron density, frequency-modulated oscillations are clearly seen at pressing voltages from 4.16 to 4.19~V, and the frequencies are most well-defined for $V\,=\,4.20$~V. We therefore conclude that a small electron density together with a pressing voltage of 4.20~V represents the resonance condition for this system. The data for high electron density appear to follow the trend that a larger current amplitude corresponds to a higher oscillation frequency, which means physically that the oscillations are more pronounced relative to the background. Some small peaks are clearly evident in the E3 signal at 1~kHz in figures~\ref{fig:220} and \ref{fig:222}. They may result from intermittency (modulation), which implies that the direction of electron motion changes.
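The harmonic-finder check described above can be sketched as follows. This is a minimal numpy illustration of the binned-phase mutual-information test; the 460~Hz fundamental, the 1.4~s window and the 16-bin discretisation are illustrative assumptions, not the parameters of the actual analysis.

```python
import numpy as np

def phase_mutual_information(phi1, phi2, n_bins=16):
    """Mutual information (in bits) between two phase series, each
    wrapped to [0, 2*pi) and divided into n_bins equal bins.  High
    values indicate that the phases are mutually dependent, as expected
    when the second spectral component is a harmonic of the first."""
    edges = np.linspace(0.0, 2.0 * np.pi, n_bins + 1)
    joint, _, _ = np.histogram2d(phi1 % (2 * np.pi), phi2 % (2 * np.pi),
                                 bins=[edges, edges])
    p_xy = joint / joint.sum()
    p_x = p_xy.sum(axis=1, keepdims=True)   # marginal of phi1
    p_y = p_xy.sum(axis=0, keepdims=True)   # marginal of phi2
    nz = p_xy > 0
    return float(np.sum(p_xy[nz] * np.log2(p_xy[nz] / (p_x * p_y)[nz])))

# A harmonic's phase is locked to twice the fundamental's phase, so the
# mutual information is large; an unrelated phase gives a value near zero.
t = np.linspace(0.0, 1.4, 4000)              # 1.4 s analysis window
phi_fund = 2 * np.pi * 460.0 * t             # illustrative ~460 Hz fundamental
mi_harmonic = phase_mutual_information(phi_fund, 2.0 * phi_fund)
rng = np.random.default_rng(0)
mi_unrelated = phase_mutual_information(phi_fund,
                                        rng.uniform(0, 2 * np.pi, t.size))
```

In practice the two phase series would come from the TFR of the measured current rather than from analytic expressions, but the decision rule is the same: a mutual information far above the value for unrelated phases marks the second component as a harmonic.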
\clearpage
\newpage
\subsection{Time-averaged wavelet power analysis}
We now present the time-averaged wavelet power for both electron densities and for the current from each of the five electrodes (C, E1, E2, E3 and E4), with pressing voltages in the range $4.16 \leq V \leq 4.22$~V. Fig.~\ref{fig:TP}(a) shows the time-averaged wavelet power for low electron density. It is immediately evident that significant changes occur in the spectra for all electrodes as the pressing voltage changes. The most intense oscillation peak is at the centre electrode for pressing voltages $V$ in the range 4.16 to 4.19~V; this peak starts to decrease as $V$ is increased from 4.20 to 4.22~V, whereas, for E1, E2, E3 and E4, the most intense oscillation peak occurs at a pressing voltage of 4.20~V. This indicates that most of the electrons are located under electrode C, with some under electrodes E3 and E4, for $V \leq 4.20$~V. For higher $V$, the electrons tend to concentrate near the edge of the cell: see fig.~\ref{fig:TP}(b).
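The time-averaged wavelet power plotted below is obtained by averaging $|W(f,t)|^2$ over time. A minimal FFT-based Morlet sketch follows, using a conventional $\omega_0 = 6$ wavelet and invented signal parameters for illustration; it makes no attempt to reproduce the 3~Hz frequency resolution quoted in the captions.

```python
import numpy as np

def morlet_power(x, fs, freqs, omega0=6.0):
    """FFT-based continuous Morlet wavelet transform, returning the
    power |W(f, t)|**2 on the requested frequency grid.  omega0 = 6 is
    a conventional choice of time-frequency trade-off."""
    n = x.size
    X = np.fft.fft(x - x.mean())
    w = 2.0 * np.pi * np.fft.fftfreq(n, d=1.0 / fs)   # angular frequencies
    scales = omega0 / (2.0 * np.pi * freqs)           # approximate relation
    P = np.empty((freqs.size, n))
    for i, s in enumerate(scales):
        psi_hat = np.where(w > 0, np.exp(-0.5 * (s * w - omega0) ** 2), 0.0)
        P[i] = np.abs(np.fft.ifft(X * psi_hat)) ** 2
    return P

# Time-averaging the power collapses the TFR onto a spectrum whose peaks
# mark the fundamental and its harmonics (values here are illustrative).
fs = 20000.0
t = np.arange(int(0.5 * fs)) / fs
x = np.sin(2 * np.pi * 970.0 * t)                 # ~970 Hz oscillation
freqs = np.logspace(2, np.log10(2000.0), 200)     # logarithmic frequency axis
avg_power = morlet_power(x, fs, freqs).mean(axis=1)
f_peak = freqs[np.argmax(avg_power)]
```

Evaluating the filter bank in the frequency domain keeps the cost at one FFT of the signal plus one inverse FFT per frequency bin, which is what makes sixty-second records tractable.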
\begin{figure*}[h!]
\centering
\includegraphics[width=17.5cm]{3d1.PNG} %
\caption{(a) 3D time-averaged wavelet power of signals with MW On for each of the electrodes, at the lower electron density, $n_e$\,=\,1.4$\times {10^{6}}$~\si{cm^{-2}}, for different pressing voltages. The frequency axes are logarithmic. }%
\label{fig:L}%
\end{figure*}
\addtocounter{figure}{-1}
\begin{figure*}[h!]
\centering
{{\includegraphics[width=17.5cm]{p22.PNG} }}%
\caption{(b) 3D time-averaged wavelet power of signals with MW On at the higher electron density, $n_e$\,=\,2.2$\times {10^{6}}$~\si{cm^{-2}}, for the same pressing voltages as in (a). The frequency axes are logarithmic. }%
\label{fig:TP}%
\end{figure*}
\clearpage
\newpage
\subsection{Ridge curve extraction}
The oscillatory components are extracted by ridge extraction from the TFR. The results give the frequency and amplitude of the oscillatory components over time. Figures~\ref{fig:lo} and \ref{fig:l1o} show the ridge extractions (black lines), obtained from the time-frequency representations for low and high electron densities, respectively, computed for each of the five electrodes (C, E1, E2, E3 and E4). They show the instantaneous frequency of oscillation of the main component at different pressing voltages.
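Ridge extraction can be sketched as follows. This is a minimal greedy local-maximum follower, not the penalised optimal-path scheme a full analysis toolbox would use, and the synthetic TFR parameters are illustrative.

```python
import numpy as np

def extract_ridge(power, freqs, max_jump=3):
    """Greedy ridge extraction: start from the global maximum of the TFR
    and, moving forwards and backwards in time, follow the local power
    maximum within +/- max_jump frequency bins of the previous ridge
    point.  The continuity constraint keeps the ridge on one component."""
    n_f, n_t = power.shape
    ridge = np.empty(n_t, dtype=int)
    t0 = int(np.argmax(power) % n_t)            # time index of global maximum
    ridge[t0] = int(np.argmax(power[:, t0]))
    for step, t_range in ((1, range(t0 + 1, n_t)), (-1, range(t0 - 1, -1, -1))):
        for t in t_range:
            prev = ridge[t - step]
            lo = max(prev - max_jump, 0)
            hi = min(prev + max_jump + 1, n_f)
            ridge[t] = lo + int(np.argmax(power[lo:hi, t]))
    return freqs[ridge]                          # instantaneous frequency (Hz)

# Synthetic TFR with a slowly frequency-modulated component: the
# recovered ridge tracks the true instantaneous frequency.
freqs = np.arange(100.0, 2001.0, 10.0)
n_t = 200
true_f = 500.0 + 100.0 * np.sin(2 * np.pi * np.arange(n_t) / n_t)
power = np.exp(-((freqs[:, None] - true_f[None, :]) / 30.0) ** 2)
ridge_f = extract_ridge(power, freqs)
```

The `max_jump` window trades robustness against noise for the ability to follow fast frequency modulation; it must exceed the largest per-sample frequency change of the component being tracked.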
Tables~\ref{table:T1} and~\ref{table:T2} give the mean, median, frequency modulation and standard deviation of the instantaneous oscillation frequency at different pressing voltages, calculated for each of the five electrodes at low and high electron density, respectively. Both tables also give the maximum and minimum frequencies of the main oscillation mode. The median and the mean of the main frequency component increase with increasing pressing voltage. The modulation frequency (the number of modulation cycles completed per second) is independent of pressing voltage and is the same for both electron densities. However, in table~\ref{table:T1}, the standard deviation of the instantaneous frequency of the main component for all five electrodes decreases with rising pressing voltage up to 4.20~V, and then starts to increase. Table~\ref{table:T2} shows that, for all five electrodes, the standard deviation of the main component decreases with rising pressing voltage. Figure~\ref{fig:me} shows how the mean oscillation frequencies for the five electrodes change with increasing pressing voltage, at the two electron densities.
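The tabulated quantities can be computed from an instantaneous-frequency trace as sketched below. Estimating the modulation frequency $f_m$ from the dominant spectral peak of the trace itself is one plausible definition, assumed here for illustration; the example trace (5.25~Hz modulation about 975~Hz, chosen to resemble the 4.20~V rows) is synthetic.

```python
import numpy as np

def ridge_statistics(inst_freq, fs):
    """Summary statistics of an instantaneous-frequency trace: maximum,
    minimum, median, mean, sample standard deviation, and the modulation
    frequency f_m, estimated from the dominant spectral peak of the
    (mean-removed) trace."""
    spec = np.abs(np.fft.rfft(inst_freq - inst_freq.mean())) ** 2
    f = np.fft.rfftfreq(inst_freq.size, d=1.0 / fs)
    return {"f_max": float(inst_freq.max()),
            "f_min": float(inst_freq.min()),
            "median": float(np.median(inst_freq)),
            "mean": float(inst_freq.mean()),
            "std": float(inst_freq.std(ddof=1)),
            "f_m": float(f[1:][np.argmax(spec[1:])])}   # skip the DC bin

# Synthetic trace: 100 Hz-deep modulation at 5.25 Hz about a 975 Hz mean.
fs = 1000.0
t = np.arange(int(4 * fs)) / fs
stats = ridge_statistics(975.0 + 100.0 * np.sin(2 * np.pi * 5.25 * t), fs)
```
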
We note that, when the electron density is $n_e$\,=\,1.4 $\times 10^{6}$~$\si{cm}^{-2}$, the mean frequency for all electrodes starts to increase at a pressing voltage of 4.19~V, with small differences between the maxima and minima of the oscillations (red shading). This may be due to changes in the electron distribution within the cell as the electrons start to occupy the area under the edge electrodes.
In contrast, the mean frequency at the higher electron density ($n_e$\,=\,2.2 $\times 10^{6}$~$\si{cm}^{-2}$) is higher than at the lower density and hardly depends at all on the applied pressing voltage, while the range of oscillation frequencies is wide. Furthermore, the change in the distribution of the electrons in the cell is weak, possibly because electrons are less likely to move at higher electron density. This indicates that the interaction of the electrons with the helium surface is stronger at the lower electron density than at the higher one.
\newpage
\begin{figure*}[h!]
\includegraphics[width=17.2cm]{1allridg1a.png}%
\caption{\label{fig:lo} Ridge extractions (black lines), obtained from the time-frequency representations for each of the five electrodes (C, E1, E2, E3 and E4), showing the instantaneous frequency of the main oscillatory component at different pressing voltages. The periodicity is lost at pressing voltages higher than 4.19~V. Colour bars show the intensity of the oscillations.}
\end{figure*}
\newpage
\begin {table}[h!]
{\scriptsize
\begin{center}
\begin{tabular}{|p{1.4cm}|p{1.3cm}|p{1.3cm}|p{1.2cm}|p{2.8cm}|p{2cm}|p{2.4cm}|}
\hline
\multicolumn{7}{|c|}{C} \\
\hline
Pressing Voltage\,(V)& $f_{max}$~(Hz) & $f_{min}$~(Hz) & Median $f$~(Hz) & Frequency modulation $f_m$~(Hz) & Mean Frequency $\langle f \rangle$~(Hz) & Standard deviation $f$~(Hz)\\
\hline
4.16 & 683 & 285 & 430 & 5.25 & 463 &116 \\
\hline
4.17 & 597 & 319 & 454 & 5.10 & 454 & 78 \\
\hline
4.18 & 643 & 331 & 453& 5.14 & 488 & 81 \\
\hline
4.19 & 656 & 474& 549& 5.22 & 553 & 47 \\
\hline
4.20 & 1102 & 875 & 965 & 5.25 & 975 & 44 \\
\hline
4.21 & 1466 & 1149& 1297& 5.25 & 1300 & 68 \\
\hline
4.22 & 1763 & 1358 & 1539& 5.33 & 1555 & 97 \\
\hline
\end {tabular}
\end{center}
\vspace*{-1pt}}
\end{table}
\begin {table}[h!]
{\scriptsize
\begin{center}
\begin{tabular}{|p{1.4cm}|p{1.3cm}|p{1.3cm}|p{1.2cm}|p{2.8cm}|p{2cm}|p{2.4cm}|}
\hline
\multicolumn{7}{|c|}{E1} \\
\hline
Pressing Voltage\,(V)& $f_{max}$~(Hz) & $f_{min}$~(Hz) & Median $f$~(Hz) & Frequency modulation $f_m$~(Hz) & Mean Frequency $\langle f \rangle$~(Hz) & Standard deviation $f$~(Hz)\\
\hline
4.16 & 701 & 261 & 349 & 5.88 & 472 &117 \\
\hline
4.17 & 602 & 290 & 413 & 5.00 & 479 & 80 \\
\hline
4.18 & 631 & 343 & 450 & 5.14 & 486 & 81 \\
\hline
4.19 & 650 & 468& 547& 5.18 & 555 & 46 \\
\hline
4.20 & 1101 & 833 & 968 & 5.25 & 971 & 44 \\
\hline
4.21 & 1468 & 1205& 1298& 5.25 & 1322 & 67 \\
\hline
4.22 & 1780 & 1351 & 1532& 5.14 & 1563 & 93 \\
\hline
\end {tabular}
\end{center}
\vspace*{-1pt}}
\end{table}
\begin {table}[h!]
{\scriptsize
\begin{center}
\begin{tabular}{|p{1.4cm}|p{1.3cm}|p{1.3cm}|p{1.2cm}|p{2.8cm}|p{2cm}|p{2.4cm}|}
\hline
\multicolumn{7}{|c|}{E2} \\
\hline
Pressing Voltage\,(V)& $f_{max}$~(Hz) & $f_{min}$~(Hz) & Median $f$~(Hz) & Frequency modulation $f_m$~(Hz) & Mean Frequency $\langle f \rangle$~(Hz) & Standard deviation $f$~(Hz)\\
\hline
4.16 & 684 & 294 & 445 & 5.10 & 470 &117 \\
\hline
4.17 & 627 & 306 & 480 & 5.07 & 462 & 74 \\
\hline
4.18 & 638 & 347 & 456 & 5.14 & 487 & 82 \\
\hline
4.19 & 653 & 466& 546& 5.18 & 555 & 45 \\
\hline
4.20 & 1111 & 826 & 967 & 5.25 & 968 & 44 \\
\hline
4.21 & 1479 & 1207& 1298& 5.25 & 1322 & 66 \\
\hline
4.22 & 1771 & 1358 & 1531& 5.25 & 1568 & 92 \\
\hline
\end {tabular}
\end{center}
\vspace*{-1pt}}
\end{table}
\begin {table}[h!]
{\scriptsize
\begin{center}
\begin{tabular}{|p{1.4cm}|p{1.3cm}|p{1.3cm}|p{1.2cm}|p{2.8cm}|p{2cm}|p{2.4cm}|}
\hline
\multicolumn{7}{|c|}{E3} \\
\hline
Pressing Voltage\,(V)& $f_{max}$~(Hz) & $f_{min}$~(Hz) & Median $f$~(Hz) & Frequency modulation $f_m$~(Hz) & Mean Frequency $\langle f \rangle$~(Hz) & Standard deviation $f$~(Hz)\\
\hline
4.16 & 685 & 285 & 442 & 5.00 & 477 &117 \\
\hline
4.17 & 627 & 290 & 455 & 5.18 & 457 & 79 \\
\hline
4.18 & 634 & 336 & 452 & 5.18 & 484 & 81 \\
\hline
4.19 & 656 & 456& 551& 5.18 & 557 & 45 \\
\hline
4.20 & 1135 & 810 & 965 & 5.25 & 971 & 45 \\
\hline
4.21 & 1469 & 1200& 1296& 5.25 & 1322 & 65 \\
\hline
4.22 & 1788 & 1347 & 1534& 5.30 & 1568 & 96 \\
\hline
\end {tabular}
\end{center}
\vspace*{-1pt}}
\end{table}
\begin {table}[h!]
{\scriptsize
\begin{center}
\begin{tabular}{|p{1.4cm}|p{1.3cm}|p{1.3cm}|p{1.2cm}|p{2.8cm}|p{2cm}|p{2.4cm}|}
\hline
\multicolumn{7}{|c|}{E4} \\
\hline
Pressing Voltage\,(V)& $f_{max}$~(Hz) & $f_{min}$~(Hz) & Median $f$~(Hz) & Frequency modulation $f_m$~(Hz) & Mean Frequency $\langle f \rangle$~(Hz) & Standard deviation $f$~(Hz)\\
\hline
4.16 & 684 & 289 & 433 & 5.01 & 477 &117 \\
\hline
4.17 & 620 & 295 & 457 & 5.18 & 459 & 78 \\
\hline
4.18 & 643 & 338 & 453 & 5.14 & 488 & 81 \\
\hline
4.19 & 656 & 462& 547& 5.18 & 557 & 47 \\
\hline
4.20 & 1131 & 815 & 966 & 5.25 & 969 & 43 \\
\hline
4.21 & 1471 & 1207& 1299& 5.25 & 1321 & 66 \\
\hline
4.22 & 1786 & 1347 & 1534& 5.25 & 1567 & 96 \\
\hline
\end {tabular}
\caption {Maximum, minimum and median frequencies, frequency modulation, mean frequency, and standard deviation of the instantaneous frequency of the main oscillatory component, at different pressing voltages, calculated for each of the five electrodes at low electron density.}
\label{table:T1}
\end{center}
\vspace*{-1pt}}
\end{table}
\begin{figure*}[h!]
\includegraphics[width=17.2cm]{all2ridg2a.png}%
\caption{\label{fig:l1o}Ridge extractions (black lines), obtained from the time-frequency representations for each of the five electrodes (C, E1, E2, E3 and E4), showing the instantaneous frequency of the main oscillatory component at various pressing voltages. Colour bars show the intensity of the oscillations.}
\end{figure*}
\newpage
\clearpage
\begin {table}[h!]
{\scriptsize
\begin{center}
\begin{tabular}{|p{1.4cm}|p{1.3cm}|p{1.3cm}|p{1.2cm}|p{2.8cm}|p{2cm}|p{2.4cm}|}
\hline
\multicolumn{7}{|c|}{C}\\
\hline
Pressing Voltage\,(V)& $f_{max}$~(Hz) & $f_{min}$~(Hz) & Median $f$~(Hz) & Frequency modulation $f_m$~(Hz) & Mean Frequency $\langle f \rangle$~(Hz) & Standard deviation $f$~(Hz)\\
\hline
4.16 & 2203 & 1583 & 1888 & 5.00 & 1888 &143 \\
\hline
4.17 & 2122 & 1106 & 1563 & 5.00 & 1567 & 162 \\
\hline
4.18 & 2248 & 1265 & 1669& 5.80 & 1704 & 180 \\
\hline
4.19 & 2184 & 1246& 1669& 5.00 & 1675 & 143 \\
\hline
4.20 & 2184 & 1447 &1845 & 5.00 & 1842 & 121 \\
\hline
4.21 & 2247 & 1572& 1932& 5.10 & 1939 & 108 \\
\hline
4.22 & 2380 & 1750 & 2116& 5.50 & 2105 & 68 \\
\hline
\end {tabular}
\end{center}
\vspace*{-1pt}}
\end{table}
\begin {table}[h!]
{\scriptsize
\begin{center}
\begin{tabular}{|p{1.4cm}|p{1.3cm}|p{1.3cm}|p{1.2cm}|p{2.8cm}|p{2cm}|p{2.4cm}|}
\hline
\multicolumn{7}{|c|}{E1} \\
\hline
Pressing Voltage\,(V)& $f_{max}$~(Hz) & $f_{min}$~(Hz) & Median $f$~(Hz) & Frequency modulation $f_m$~(Hz) & Mean Frequency $\langle f \rangle$~(Hz) & Standard deviation $f$~(Hz)\\
\hline
4.16 & 2380 & 1392 & 1759 & 4.60 & 1770 &162 \\
\hline
4.17 & 2117 & 1331& 1610 & 5.20 & 1623 & 129 \\
\hline
4.18 & 2247& 1363 & 1726& 5.00 & 1735 & 162 \\
\hline
4.19 & 2190& 1260& 1687& 5.00 & 1706 & 161 \\
\hline
4.20 & 2168 & 1448 & 1866 & 5.00 & 1858 & 111 \\
\hline
4.21 & 2200 & 1583& 1938 & 5.10 & 1935 & 90 \\
\hline
4.22 & 2484 & 1775 & 2105& 5.70 & 2114 & 123 \\
\hline
\end {tabular}
\end{center}
\vspace*{-1pt}}
\end{table}
\begin {table}[h!]
{\scriptsize
\begin{center}
\begin{tabular}{|p{1.4cm}|p{1.3cm}|p{1.3cm}|p{1.2cm}|p{2.8cm}|p{2cm}|p{2.4cm}|}
\hline
\multicolumn{7}{|c|}{E2} \\
\hline
Pressing Voltage\,(V)& $f_{max}$~(Hz) & $f_{min}$~(Hz) & Median $f$~(Hz) & Frequency modulation $f_m$~(Hz) & Mean Frequency $\langle f \rangle$~(Hz) & Standard deviation $f$~(Hz)\\
\hline
4.16 & 2329& 1251 & 1740 & 5.70 & 1758 &171 \\
\hline
4.17 & 2180 & 1382& 1669 & 5.00 & 1712 & 155 \\
\hline
4.18 & 2204& 1363 & 1701& 5.30 & 1712 & 149 \\
\hline
4.19 & 2273& 1251& 1674& 4.80 & 1690 & 159 \\
\hline
4.20 & 2184 & 1481 & 1864 & 4.80 & 1856 & 111 \\
\hline
4.21 & 2231 & 1561& 1937 & 5.10 & 1934 & 89\\
\hline
4.22 & 2373 & 1779 & 2114& 5.70 & 2100& 57 \\
\hline
\end {tabular}
\end{center}
\vspace*{-1pt}}
\end{table}
\begin {table}[h!]
{\scriptsize
\begin{center}
\begin{tabular}{|p{1.4cm}|p{1.3cm}|p{1.3cm}|p{1.2cm}|p{2.8cm}|p{2cm}|p{2.4cm}|}
\hline
\multicolumn{7}{|c|}{E3} \\
\hline
Pressing Voltage\,(V)& $f_{max}$~(Hz) & $f_{min}$~(Hz) & Median $f$~(Hz) & Frequency modulation $f_m$~(Hz) & Mean Frequency $\langle f \rangle$~(Hz) & Standard deviation $f$~(Hz)\\
\hline
4.16 & 2247 & 1437& 1742& 4.60 & 1754 &122 \\
\hline
4.17 & 2274& 1443& 1845 & 5.30 & 1838 & 146 \\
\hline
4.18 & 2005& 1295 & 1668& 5.20 & 1680 & 139 \\
\hline
4.19 & 2272& 1295& 1703& 5.00 & 1715 & 156 \\
\hline
4.20 & 2210 & 1445 & 1868& 5.00 & 1860 & 111 \\
\hline
4.21 & 2184 & 1611& 1938 & 5.10 & 1937 & 90 \\
\hline
4.22 & 2380 & 1763 & 2115& 5.70 & 2106 & 74\\
\hline
\end {tabular}
\end{center}
\vspace*{-1pt}}
\end{table}
\begin {table}[h!]
{\scriptsize
\begin{center}
\begin{tabular}{|p{1.4cm}|p{1.3cm}|p{1.3cm}|p{1.2cm}|p{2.8cm}|p{2cm}|p{2.4cm}|}
\hline
\multicolumn{7}{|c|}{E4} \\
\hline
Pressing Voltage\,(V)& $f_{max}$~(Hz) & $f_{min}$~(Hz) & Median $f$~(Hz) & Frequency modulation $f_m$~(Hz) & Mean Frequency $\langle f \rangle$~(Hz) & Standard deviation $f$~(Hz)\\
\hline
4.16 & 2359 & 1436& 1792& 5.70 & 1811 &162 \\
\hline
4.17 & 2189& 1402& 1666 & 4.70 & 1700 & 142 \\
\hline
4.18 & 2234& 1400 & 1703& 5.10 & 1715 & 163 \\
\hline
4.19 & 2280& 1324& 1703& 4.80 & 1713 & 162 \\
\hline
4.20 & 2168 & 1413 & 1865& 5.00 & 1858 & 113 \\
\hline
4.21 & 2231 & 1539& 1939 & 5.10 & 1935 & 88 \\
\hline
4.22 & 2449 & 1788 & 2115& 5.70 & 2105 & 69\\
\hline
\end {tabular}
\caption {Maximum, minimum, and median frequency, frequency modulation, mean frequency, and standard deviation of the instantaneous frequency oscillations for the main component, with varying pressing voltage, calculated for each of the five electrodes at high electron density.}
\label{table:T2}
\end{center}
\vspace*{-1pt}}
\end{table}
\newpage
\begin{figure}[H]
\centering
\includegraphics[width=8cm]{mean.PNG}%
\caption{\label{fig:me}The mean frequencies of oscillations at different pressing voltages for low electron density $n_e$\,=\,1.4 $\times10^{6}$~$\si{cm}^{-2}$ (red squares) and high electron density $n_e$\,=\,2.2 $\times 10^{6}$~$\si{cm}^{-2}$ (blue squares). The shaded regions show the full range of oscillation frequencies across all electrodes. Increasing the electron density considerably decreases the dependence of frequency upon pressing voltage.
}
\end{figure}
\newpage
\subsection {Phase coherence, phase difference, and motion of the electrons}
We now present the results of the phase coherence analysis and the phase differences for both the lower and higher electron densities. We also present the motion of the electrons inside the cell, as inferred from the phase difference. The phase coherence and phase difference were calculated as described in section~\ref{sec:1}. We then computed the values of the maximum coherence and the corresponding frequencies as functions of the pressing voltage; these are listed in the tables below for low and high electron density.
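Extracting the tabulated peak values from each coherence spectrum is a simple reduction; as a minimal illustrative sketch (the function name and array layout are ours, not from the analysis code):

```python
import numpy as np

def peak_coherence(freqs, coherence):
    """Return (max coherence, frequency of max) from a coherence spectrum.

    freqs, coherence: 1-D arrays of equal length, e.g. the significant
    wavelet phase coherence of one electrode pair at one pressing voltage.
    """
    k = int(np.argmax(coherence))
    return float(coherence[k]), float(freqs[k])
```

Applying this to every electrode pair and pressing voltage yields entries of the kind listed in the tables below.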
\begin{itemize}
\item [a)] low electron density
\end{itemize}
\begin{figure}[h!]%
\includegraphics[width=17cm]{Coh11a.png} %
\caption{ Significant wavelet phase coherence of pairs of electrodes for an electron density of $n_e$\,=\,1.4 $\times 10^{6}$~\si{cm^{-2}} at different pressing voltages. The lines are color-coded to indicate the particular electrode pairs.}%
\label{fig:coh}%
\end{figure}
\begin{figure}[h!]%
\includegraphics[width=17.2cm]{phas11a.png} %
\caption{ The phase difference of pairs of electrodes for electron density of $n_e$\,=\,1.4 $\times 10^{6}$ ~\si{cm^{-2}} at different pressing voltages. The lines are color-coded to indicate particular electrode pairs.}%
\label{fig:phas}%
\end{figure}
\newpage
\begin{figure}[h!]%
\includegraphics[width=12cm]{Mov1.PNG} %
\caption{ The circular schematic shows the movement of the electrons below the electrodes for different pressing voltages with electron density $n_e$\,=\,1.4 $\times 10^{6}$~\si{cm^{-2}}. It summarises the coherences and phase shifts between the currents. The thickness of the arrows indicates the magnitude of the coherence (Fig.~\ref{fig:coh}) and the white/black shading the size of the phase shift (Fig.~\ref{fig:phas}).}%
\label{fig:mov}%
\end{figure}
\clearpage
\newpage
\begin {table}[!h]
{\scriptsize
\vspace*{25pt}
\begin{center}
\begin{tabular}{|p{1.1cm}|p{1.3cm}|p{1.3cm}|p{1.3cm}|p{1.3cm}|p{1.3cm}|p{1.3cm}|p{1.3cm}|p{1.3cm}|p{1.3cm}|p{1.3cm}|}
\hline
\multicolumn {11} {|c|} {Max Coherence}\\
\hline
Pressing Voltage\,(V) & CE1 & CE2 & CE3 & CE4 & E12 & E13 & E23 & E24 & E34 & E14\\
\hline
4.16 & 0.17 & 0.25& 0.27& 0.3 & 0.11 & 0.25 & 0.33 &0.24 & 0.35 & 0.23\\
\hline
4.17 & 0.27 & 0.37& 0.40& 0.44 & 0.18 & 0.26& 0.36 & 0.40 &0.40& 0.31\\
\hline
4.18 & 0.32 & 0.36& 0.42& 0.45 & 0.24 & 0.31 & 0.41 &0.38 &0.46&0.35\\
\hline
4.19 & 0.54& 0.58& 0.60& 0.63 & 0.48 & 0.54 & 0.61 &0.60 &0.65&0.59\\
\hline
4.20 & 0.66 & 0.66 & 0.66 & 0.67 & 0.69 & 0.71 & 0.70 & 0.70 & 0.70 & 0.73\\
\hline
4.21 & 0.64 & 0.64& 0.65& 0.67& 0.72 & 0.71 & 0.72 &0.72 & 0.71 &0.73\\
\hline
4.22 & 0.53&0.54& 0.53& 0.56& 0.63 &0.60 & 0.64 &0.65&0.64&0.63\\
\hline
\end {tabular}
\end{center}
\vspace*{1pt}}
\end{table}
\begin {table}[h!]
{\scriptsize
\vspace*{25pt}
\begin{center}
\begin{tabular}{|p{1.1cm}|p{1.3cm}|p{1.3cm}|p{1.3cm}|p{1.3cm}|p{1.3cm}|p{1.3cm}|p{1.3cm}|p{1.3cm}|p{1.3cm}|p{1.3cm}|}
\hline
\multicolumn {11} {|c|} {Frequency of Max coherence}\\
\hline
Pressing Voltage\,(V) & $f$CE1\,(Hz) & $f$CE2\,(Hz) & $f$CE3\,(Hz) & $f$CE4\,(Hz) & $f$E12\,(Hz) & $f$E13\,(Hz) & $f$E23\,(Hz) & $f$E24\,(Hz) & $f$E34\,(Hz) & $f$E14\,(Hz)\\
\hline
4.16 & 490 & 523& 523& 512 & 501 & 558 & 558 &501 & 546 & 570\\
\hline
4.17 & 512 & 490& 490& 501 & 490 & 523 & 546 & 490 &512& 501\\
\hline
4.18 & 523 & 523& 423& 523 & 512 & 534 & 546 &512 &534&523\\
\hline
4.19 & 570 & 570& 570& 570 & 570 & 583 & 570 &570 &570&570\\
\hline
4.20 & 959 &959& 959& 959 & 959 & 959 & 959 &959 &959&959\\
\hline
4.21 & 1300 & 1300& 1300& 1300 & 1300 & 1300 & 1300 &1300 & 1300 &1300\\
\hline
4.22 & 1512&1512& 1512& 1512 & 1512 & 1512 & 1512 &1512 &1512&1512\\
\hline
\end {tabular}
\caption {The maximum coherence and the frequency of the maximum coherence for the lower electron density, calculated from the phase coherence shown in figure~\ref{fig:coh}.}
\label{table:m1}
\end{center}
\vspace*{1pt}}
\end{table}
\clearpage
\newpage
\begin{itemize}
\item [b)] High electron density
\end{itemize}
\begin{figure}[h!]%
\includegraphics[width=17.2cm]{coh2aa.png} %
\caption{ Significant wavelet phase coherence of pairs of electrodes for electron density $n_e$\,=\,2.2 $\times 10^{6}$~\si{cm^{-2}} at different pressing voltages. The lines are color-coded to indicate particular electrode pairs.}%
\label{fig:coh2}%
\end{figure}
\begin{figure}[h!]%
\includegraphics[width=17.2cm]{phas2aa.png} %
\caption{ The phase difference of pairs of electrodes for electron density $n_e$\,=\,2.2 $\times 10^{6}$ ~\si{cm^{-2}} at different pressing voltages. The lines are color-coded to indicate particular electrode pairs. }%
\label{fig:phas2}%
\end{figure}
\begin{figure}[h!]%
\includegraphics[width=12cm]{Mov2.PNG} %
\caption{ The circular schematics of the motion of the electrons below the electrodes at different pressing voltages and at electron density $n_e$\,=\,2.2 $\times 10^{6}$~\si{cm^{-2}}. They summarise the coherences and phase shifts between the currents. The thickness of the arrows indicates the magnitude of the coherence (Fig.~\ref{fig:coh2}) and the white/black shading the size of the phase shift (Fig.~\ref{fig:phas2}).}%
\label{fig:mov2}%
\end{figure}
\clearpage
\begin {table}[h!]
{\scriptsize
\vspace*{25pt}
\begin{center}
\begin{tabular}{|p{1.1cm}|p{1.3cm}|p{1.3cm}|p{1.3cm}|p{1.3cm}|p{1.3cm}|p{1.3cm}|p{1.3cm}|p{1.3cm}|p{1.3cm}|p{1.3cm}|}
\hline
\multicolumn {11} {|c|} { Max Coherence}\\
\hline
Pressing Voltage\,(V) & CE1 & CE2 & CE3 & CE4 & E12 & E13 & E23 & E24 & E34 & E14\\
\hline
4.16 & 0.076 & 0.077& 0.065 & 0.057 & 0.34 & 0.22 & 0.25 & 0.41 & 0.24 & 0.36\\
\hline
4.17 & 0.11 & 0.12& 0.08 & 0.14 & 0.43 & 0.33& 0.37 & 0.5 &0.38& 0.46\\
\hline
4.18 & 0.20 & 0.22 & 0.20& 0.28 & 0.44 & 0.44 & 0.41 &0.52 & 0.31 & 0.42\\
\hline
4.19 & 0.33& 0.33& 0.34& 0.41 & 0.64 & 0.60 & 0.59 & 0.63 &0.63 & 0.65\\
\hline
4.20 & 0.46 & 0.40 & 0.48 & 0.53 & 0.84 & 0.81 & 0.81 &0.82 &0.83 & 0.84\\
\hline
4.21 & 0.41 & 0.34& 0.43& 0.5& 0.90 & 0.88 & 0.87 &0.86 & 0.89 &0.89\\
\hline
4.22 & 0.33 &0.28& 0.36& 0.41& 0.88 &0.85 & 0.85 &0.85&0.87&0.87\\
\hline
\end {tabular}
\end{center}
\vspace*{1pt}}
\end{table}
\begin {table}[h!]
{\scriptsize
\vspace*{25pt}
\begin{center}
\begin{tabular}{|p{1.1cm}|p{1.3cm}|p{1.3cm}|p{1.3cm}|p{1.3cm}|p{1.3cm}|p{1.3cm}|p{1.3cm}|p{1.3cm}|p{1.3cm}|p{1.3cm}|}
\hline
\multicolumn {11} {|c|} {Frequency of the Max coherence}\\
\hline
Pressing Voltage (V) & $f$CE1\,(Hz) & $f$CE2\,(Hz) & $f$CE3\,(Hz) & $f$CE4\,(Hz) & $f$E12\,(Hz) & $f$E13\,(Hz) & $f$E23\,(Hz) & $f$E24\,(Hz) & $f$E34\,(Hz) & $f$E14\,(Hz)\\
\hline
4.16 & 1878 & 1798& 1838& 1798& 1722 & 1760 & 1685 &1722 & 1708 & 1722\\
\hline
4.17 & 1685 & 1685& 1685& 1685 & 1685 & 1649 & 1649 &1685 &1614& 1685\\
\hline
4.18 & 1614 & 1649& 1649& 1714 & 1722 & 1685 & 1649 & 1685 & 1798 & 1722\\
\hline
4.19 & 1685 & 1649 & 1685& 1685 & 1685 & 1685 & 1685 & 1685 & 1684 &1685\\
\hline
4.20 & 1878 & 1878 & 1878 & 1878 & 1878& 1919 & 1878 & 1919 & 1878 & 1778\\
\hline
4.21 & 1919 & 1919& 1919& 1961 & 1919 & 1919 & 1919 & 1919 & 1919 &1919\\
\hline
4.22 & 2093 & 2093 & 2093 & 2093 & 2048 & 2048 & 2048 &2048 & 2048 & 2048\\
\hline
\end {tabular}
\caption {The maximum coherence and the frequency of the maximum coherence for the high electron density, calculated from the phase coherence shown in figure~\ref{fig:coh2}.}
\label{table:m2}
\end{center}
\vspace*{1pt}}
\end{table}
Significant coherence was found between the signals from paired electrodes recorded at different pressing voltages for both low and high electron density, as shown in Figs.~\ref{fig:coh} and~\ref{fig:coh2}. At low electron density (Fig.~\ref{fig:coh}), the coherence peaks were well-defined and mostly constant for all electrode pairs at pressing voltages of 4.19 and 4.20~V. We can therefore say that the resonance condition is satisfied at a pressing voltage of 4.20~V, which means that the oscillations are uniform over the measured electrodes. At high electron density (Fig.~\ref{fig:coh2}), the peaks in phase coherence are less dependent on the pressing voltage. The phase coherence is lower when the central electrode is paired with one of the four edge electrodes, whilst it is higher for pairings among the four edge electrodes.
Regarding the phase difference, Fig.~\ref{fig:phas} shows that, at low electron density, it is constant at pressing voltages of 4.19 and 4.20~V. At high electron density (Fig.~\ref{fig:phas2}), however, the phase difference exhibits more slipping, with intervals that complete a full 360 degrees, in contrast to Fig.~\ref{fig:phas}. These slips occur when the coherence is quite low (although still nonzero), and may result from phase transitions due to external perturbation.
Studying the phase difference helps us to reveal the direction of electron motion inside the cell. The circular schematics in Figs.~\ref{fig:mov} and~\ref{fig:mov2} show the movement of the electrons below the electrodes inside the cell at different pressing voltages for the two values of electron density. It is clear that, by changing the electron density and pressing voltage, it is possible to change the direction of electron motion below all five electrodes.
Tables~\ref{table:m1} and~\ref{table:m2} list the values of the maximum coherence and the corresponding frequencies for different pressing voltages and for both electron densities. All of these values are calculated from the phase coherences presented in Figs.~\ref{fig:coh} and~\ref{fig:coh2} for the low and high electron density, respectively. The maximum of the coherence changes with pressing voltage for all pairs of electrodes at both electron densities. The frequency at which the maximum coherence is observed increases with increasing pressing voltage at low electron density, but hardly increases when the electron density is high: see Tables~\ref{table:m1} and~\ref{table:m2}.
\subsection{Summary}
We have presented the significant results and information that we obtained by applying nonlinear dynamics
methods. We have explored the characteristic features of the current oscillations induced in the five electrodes for different electron densities and pressing voltages, but with fixed magnetic field, depth of liquid helium and temperature.
We have revealed oscillatory electron motion with varying frequency but with a constant modulation frequency, found by extracting the instantaneous frequencies from ridges in time-frequency representations. We find that a high electron density significantly decreases the dependence of frequency on pressing voltage. The constant modulation frequency may result from the interaction between the electrons and gravity waves on the surface of the liquid helium.
The time-averaged wavelet power provides additional information about the distribution of electrons inside the cell, and how it changes under different conditions of electron density and pressing voltage. At low electron density, the electrons appear to be mostly at the centre for low pressing voltages of 4.16--4.19\,V, but with some at the edge electrodes E3 and E4. Increasing the pressing voltage displaces the electrons from the centre towards the edge. In contrast, at high electron density and low pressing voltage, the electrons are unaffected. With increasing pressing voltage, the electrons seem to crawl along the edge of the liquid surface.
There is a significant phase coherence between the oscillations of current in the different electrodes for both electron densities. We have shown the motion of electrons inside the cell between all the electrodes by using the information of the phase difference.
\FloatBarrier
\section{Introduction}
Radio frequency (RF) spike noise in magnetic
resonance imaging (MRI) typically manifests as bursts of
high-amplitude corruptions in the Fourier domain ({$k$-space}) that lead to
Moir\'e patterns in the reconstructed image. Spikes originate from
brief disruptions in the electromagnetic field near the receive coil
during periods when the receiver channel is open during an exam.
Common sources are static discharge in clothes, mechanical stress and
vibration in the scanner, receive hardware failures, and leaks in the
RF shield that permit external RF interference.
Traditional spike detection techniques rely on thresholding based on
the RF receive signal amplitude \cite{Staemmler1986,Foo1994},
statistical analyses of data \cite{Zhang2001,Chavez2009}, or window
filters \cite{Kao2000}. With the exception of \cite{Kao2000}, all
detection has been performed entirely in $k$-space. Detected spikes
are then typically replaced by zeros or by local interpolation of
{$k$-space}~neighbors \cite{Staemmler1986,Foo1994,Chavez2009}. In one notable
case \cite{Kao2000}, an analytic solution to the missing data was
used.
The recent application of compressed sensing (CS)
\cite{Candes2006,Donoho2006a} to MRI has allowed the acquisition of MRI data
with a significant relaxation of sampling requirements
\cite{Lustig2008}. In many cases, CS techniques can allow for
reconstruction of full MRI data sets from just 25\% or less of the
full data set, as would historically be defined by the Nyquist-Shannon
sampling theorem. The ephemeral nature of RF spikes, which leads to local corruption of the {$k$-space}~data, suggests that a CS
reconstruction of the corrupted data should be extremely accurate.
We were initially motivated to find an improved spike detection and
removal algorithm because we experienced severely corrupted data in a
longitudinal fat-water imaging study. The data were taken with a
faulty RF room shield, and the built-in commercial software was unable
to correct the severely corrupted data on a majority of slices. In
this work, we present our resulting improved and robust methods for
both spike detection and removal that require no user intervention or
prior information about the acquisition of the data. This method
can also be run retrospectively when post processing already acquired
data.
In Section 2, we describe a new method for spike detection and
removal. In Section 3, we show the results of this method on both
in vivo brain data with synthesized spikes and
severely corrupted actual whole-body gradient echo data, acquired with a
damaged RF room shield beneath the patient support and containing an
unknown number of spikes arising from RF leakage secondary to loss of RF shield integrity. In Section 4, we discuss the practical uses
for our methods and possible improvements.
\section{Materials and Methods}\label{sec:methods}
\subsection{Spike Detection Method}
During collection of the data at an MRI scanner, the imaging volume is
encoded by variable spatial frequencies that are then interpreted as
the coefficients of the discrete Fourier transform (DFT) of the
target. Perturbing a single Fourier coefficient in {$k$-space}~will lead to
global sinusoidal intensity variations in the image. In one sense,
the coefficients of the DFT are the coefficients which minimize the
approximation error of the imaging target. Since anatomical
structures are typically smooth objects on a flat (signal free)
background, the coefficients that cause the most ``rippling'' in the
image should be the least accurate. Furthermore, since all coefficients
are corrupted at some level by noise, we would like to distinguish a
point of diminishing returns, beyond which we are replacing noisy, but
probably valid, data. Optimal image improvement should result when
all data corrupted by RF spikes are replaced and all valid points are
retained. In practice, however, this may not be possible, so we must strike a balance between missing spikes (and thus leaving some image artifacts uncorrected) and discarding valid data (and thus losing the information it contained).
We define the collected data, or $k$-space, of a complex-valued image
$\boldsymbol{x}$ as $\boldsymbol{d} = \mathcal{F}\boldsymbol{x}$,
where $\mathcal{F}$ is the DFT operator. To begin spike detection, each element
of $\boldsymbol{d}$ was zeroed in turn, and a corresponding
modified magnitude image $\boldsymbol{x}^k$ was created with an inverse DFT: \begin{equation} \boldsymbol{x}^k =|
\mathcal{F}^{-1} \boldsymbol{I}^k \boldsymbol{d}|,\end{equation} where
$\boldsymbol{I}^k$ is the identity matrix with the $k$-th element
along the diagonal set to zero. In theory, if the datum $d_k$ deleted was valid, the image quality of $\boldsymbol{x}^k$ should decrease; if $d_k$ had been corrupted, the image quality should improve. Since the total variation (TV) is very sensitive to alterations in the Fourier domain, we hypothesized that the TV should separate the corrupted data from the valid data.
A vector of aggregated TV values $\boldsymbol{t}$ was then constructed, where the $k$-th element of $\boldsymbol{t}$ was computed as the TV of the derived image $\boldsymbol{x}^k$: \begin{equation} t_k= \sum_{s=1}^N
\left\|\nabla_s \boldsymbol{x}^k\right\|_1,\end{equation} where $N$ is the total number of data points, $\nabla_s$ is
the forward difference along the dimension $s$ (e.g. $s=\{1,2,3\}$ for
a 3-D protocol) and $\|\cdot\|_1$ is the $\ell_1$-norm, defined for a
vector $\boldsymbol{v}$ as $ \|\boldsymbol{v}\|_1 = \sum_i
|v_i|.$
Next, the upper half of $\boldsymbol{t}$ was assumed to be valid and was discarded (these points had the least effect on image total variation). The remaining $\boldsymbol{t}$ values were retained and normalized to lie between 0 and 1. A threshold $\theta$ was then selected using Otsu's method \cite{Smith1979} (see, e.g., Fig.~\ref{tv}). Since the penalty in terms of image quality is more severe when corrupted data are missed than when clean data are mistaken for corrupted, we raised the cutoff between corrupted and clean data to $\sqrt{\theta}$ (recall that $\boldsymbol{t}$ has been normalized to lie between zero and one). This provides a more aggressive selection of spiked data and is shown below to be the most robustly accurate choice.
All $\boldsymbol{t} < \sqrt{\theta}$ were assumed to be spikes.
The suspected spikes were
then discarded to create a data constraint mask $\boldsymbol{M}$ for
the reconstruction: $ \boldsymbol{M} = \prod_{\hat{k}}
\boldsymbol{I}^{\hat{k}},$ where $\hat{k} \in
\{k : t_k < \sqrt{\theta}\}$. The spike detection algorithm is listed in
Algorithm 1.\begin{algorithm}
\caption{:: Spike Detection}
\begin{algorithmic}
\REQUIRE input data $\boldsymbol{d}$
\FORALL{data points $k$}
\STATE $\boldsymbol{g} \leftarrow \boldsymbol{d}$
\STATE ${g}_k \leftarrow 0$
\STATE ${t}_k \leftarrow \mathrm{TV}(\mathcal{F}^{-1} \boldsymbol{g})$
\ENDFOR
\STATE Discard upper half of $\boldsymbol{t}$.
\STATE Normalize remaining $\boldsymbol{t}$ to lie on $[0,1]$ interval.
\STATE Calculate Otsu threshold $\theta$ for $\boldsymbol{t}$.
\STATE All $\boldsymbol{t} < \sqrt{\theta}$ are labeled as assumed spikes.
\RETURN
\end{algorithmic}
\end{algorithm}
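As an illustration, Algorithm 1 can be sketched in a few dozen lines of NumPy. This is our own simplified stand-in (including a minimal histogram-based Otsu threshold), not the implementation used for the results:

```python
import numpy as np
from numpy.fft import fft2, ifft2

def total_variation(img):
    # Anisotropic TV: l1-norm of forward differences along each axis.
    return np.abs(np.diff(img, axis=0)).sum() + np.abs(np.diff(img, axis=1)).sum()

def otsu_threshold(values, nbins=64):
    # Minimal histogram Otsu: pick the cut maximizing between-class variance.
    hist, edges = np.histogram(values, bins=nbins, range=(0.0, 1.0))
    centers = 0.5 * (edges[:-1] + edges[1:])
    w = hist / hist.sum()
    best_t, best_var = centers[0], -1.0
    for i in range(1, nbins):
        w0, w1 = w[:i].sum(), w[i:].sum()
        if w0 == 0.0 or w1 == 0.0:
            continue
        mu0 = (w[:i] * centers[:i]).sum() / w0
        mu1 = (w[i:] * centers[i:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2
        if var > best_var:
            best_var, best_t = var, centers[i]
    return best_t

def detect_spikes(d, p=2):
    # d: complex 2-D k-space array. Returns flat indices of suspected spikes.
    t = np.empty(d.size)
    flat = d.ravel()
    for k in range(d.size):
        g = flat.copy()
        g[k] = 0.0                     # zero one coefficient in turn
        t[k] = total_variation(np.abs(ifft2(g.reshape(d.shape))))
    order = np.argsort(t)
    keep = order[: d.size // 2]        # discard the upper half of t
    tn = t[keep]
    tn = (tn - tn.min()) / (tn.max() - tn.min())  # normalize to [0, 1]
    cut = otsu_threshold(tn) ** (1.0 / p)         # raised Otsu cutoff
    return keep[tn < cut]              # low TV after deletion => spike
```

For realistic matrix sizes the $N$ inverse DFTs dominate the cost, which is why GPU acceleration pays off in practice.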
\subsection{Collection of Real Data Free of Spikes}
First, a gradient echo data set that was free of spikes was used as a baseline for adding synthetic spikes. This gave us a benchmark for accuracy testing of the algorithm. The data were acquired on a Philips 7T scanner with an axial field of view (FOV) of 256 mm $\times$ 256 mm and an in-plane voxel size of 1 mm $\times$ 1 mm. The slice thickness was 2.5 mm. The flip angle was 35 deg, and no partial Fourier or parallel imaging was used. The TE1/$\Delta$TE/TR was 1.74/2.3/200 ms. Total scan time for the acquisition of 32 echoes was 205 s.
A reconstructed matrix size of 256$^2$ was produced by gridding 512 radial profiles. This gridded Cartesian data was inverse transformed to produce a 256 $\times$ 256 Cartesian image. This final, complex image was our baseline ``clean'' image that synthetic spikes were then added to for the synthetic spike corruption experiment.
\subsection{Collection of Real Data Corrupted by Spikes}
As a real-world application, we applied the algorithm to gradient
echo data from an \emph{in vivo} fat-water MRI protocol. For this
acquisition, the subject entered a
dual-transmit Philips 3.0 T Achieva TX scanner (Philips Healthcare,
Best, The Netherlands) feet-first in a supine position with arms
extended above the head. The integrated quadrature body coil was used
for both transmit and receive. A multi-station protocol with 20 table
positions was used to acquire whole-body data. Each of the 20 stacks
consisted of a multi-slice, multiple gradient echo acquisition with 12
contiguous 8 mm slices (240 slices total). Other acquisition details
include: TR/TE1/TE2/TE3 [ms] = 75/1.34/2.87/4.40; flip angle
20$^\circ$; water fat shift = 0.325 pixels (BW = 1335.5 Hz/pixel); axial FOV
= 500 mm $\times$ 390 mm, acquired matrix size = 252 $\times$ 195;
acquired voxel size = 2 mm $\times$ 2 mm $\times$ 8 mm. First-order
shimming was performed for each slice stack. The total duration of
data acquisition was 4 minutes and 16 seconds. Approximately 5 minutes
of additional time was needed for table movement, preparation phases
at each table position, and for breath holding pauses. Breath holding
was performed for table positions covering the waist to the shoulders.
Reconstruction of water and fat images from the acquired multi-echo
data was performed using a generalized three-point Dixon approach
\cite{Berglund2010} in which two solutions were found analytically in
each voxel. Fat and water signal components were found by least
squares fitting after the true solution was identified by imposing
spatial smoothness in a 3D multi-seeded region growing scheme with a
dynamic path that allowed low confidence regions to be solved after
high confidence regions were solved.
\subsection{Reconstruction of Corrected Images}
Images cleaned of spikes were then reconstructed using an
unconstrained TV-regularized compressed sensing reconstruction that
iteratively solved the following optimization problem: \begin{equation}
\boldsymbol{x}_\mathrm{clean} = \operatornamewithlimits{arg\ min~}_{\boldsymbol{x}}
\mathrm{TV}(\boldsymbol{x}) + \frac{\lambda}{2}\|\boldsymbol{M}
\mathcal{F} \boldsymbol{x} - \boldsymbol{M} \boldsymbol{d}\|_2^2,\label{reconeq}\end{equation}
where $\lambda$ is a scalar weighting factor that controls the balance
between promoting sparsity of the gradient image and fidelity to the
measured non-spiked data. (The total variation is the $\ell_1$-norm of the gradient image.) For the results here, $\lambda$ was chosen to be high enough that excessive smoothing of the images was avoided; e.g., $\lambda = 50$ for unit-normed data. Since we only tested this on Cartesian
data, we were able to use a fast split Bregman solver
\cite{Goldstein2009} to solve the reconstruction in Eq. \ref{reconeq}. The reconstruction code
was adapted to run on a compute server with graphics processing unit
(GPU) acceleration using Jacket 2.2 (AccelerEyes, Atlanta, GA) and
MATLAB R2012a (MathWorks, Natick, MA).
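For illustration, a small instance of Eq. \ref{reconeq} can also be solved by plain gradient descent on a Charbonnier-smoothed TV surrogate. The sketch below is our own stand-in for the split Bregman solver, intended only to make the objective concrete (real-valued image, unitary FFT):

```python
import numpy as np
from numpy.fft import fft2, ifft2

def smoothed_tv(x, eps=1e-2):
    # Charbonnier-smoothed anisotropic TV (differentiable surrogate).
    dx = np.diff(x, axis=0, append=x[-1:, :])
    dy = np.diff(x, axis=1, append=x[:, -1:])
    return np.sum(np.sqrt(dx ** 2 + eps)) + np.sum(np.sqrt(dy ** 2 + eps))

def objective(x, d, mask, lam, eps=1e-2):
    r = mask * (fft2(x, norm="ortho") - d)
    return smoothed_tv(x, eps) + 0.5 * lam * np.sum(np.abs(r) ** 2)

def reconstruct(d, mask, lam=1.0, step=0.01, iters=200, eps=1e-2):
    # Plain gradient descent on TV(x) + (lam/2)||M F x - M d||^2.
    x = np.real(ifft2(mask * d, norm="ortho"))   # zero-filled start
    for _ in range(iters):
        dx = np.diff(x, axis=0, append=x[-1:, :])
        dy = np.diff(x, axis=1, append=x[:, -1:])
        nx = dx / np.sqrt(dx ** 2 + eps)
        ny = dy / np.sqrt(dy ** 2 + eps)
        # Gradient of smoothed TV = negative divergence of normalized gradients.
        g_tv = -np.diff(nx, axis=0, prepend=0.0) - np.diff(ny, axis=1, prepend=0.0)
        r = mask * (fft2(x, norm="ortho") - d)
        g_fid = lam * np.real(ifft2(r, norm="ortho"))
        x = x - step * (g_tv + g_fid)
    return x
```

With a small enough step size this descent monotonically decreases the (convex) smoothed objective; the split Bregman scheme reaches comparable solutions far faster, which matters at clinical matrix sizes.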
\section{Results}\label{sec:results}
\subsection{Application to Data Synthetically Corrupted}
We first applied the spike detection algorithm to a clean gradient echo data set
of a healthy brain. This data set was originally spike free to the best of our knowledge. To simulate the effect of spiking on the data set in a situation in which the positions of the spikes were known, we added varying numbers of synthetically generated spikes to this data.
Random locations in the $k$-space data were replaced with spikes with magnitudes equal to that of the DC coefficient and phases randomly chosen between 0 and $2\pi$.
Real spikes can, and probably will, have varying amplitudes, but choosing to fix the magnitude allowed us to concentrate on the detection accuracy in controlled conditions. For example, spikes of smaller magnitude would have been harder to detect, but they also would have a smaller influence on image quality, so the penalty for not detecting them would be smaller. Rather than explore the full problem space of possible spike forms, we chose a simple case for this test. The detection and correction required 7.4 s per case on a GPU-accelerated workstation.
The spike locations, corrupted images, and cleaned images are shown in Fig. \ref{montage} for two cases: (1) the case with the largest number of spikes that still achieved perfect recovery, and (2) the case with the worst corruption in which a low-frequency datum was corrupted and our algorithm failed to recover it correctly.
While Case 2 appears to be a failure of the algorithm, it was notably a case with 404 spikes. Case 1 was corrupted by 243 spikes and still achieved perfect recovery. One can see by comparing the middle images of each row in Fig. \ref{montage} to the rightmost images how stark the difference is between the corrupted and recovered images and how dramatically the algorithm cleans up the corrupted images.
\begin{figure*}[h]
\centering
\includegraphics[width=\textwidth]{fig_synthetic_montage_243_404.png}
\caption{Two examples of 7T brain gradient echo images with synthetic spikes added. The first (top row) is the case with the largest number of spikes (243) that still achieved a perfect reconstruction. The second case (bottom row) was the most severely corrupted case, with 404 spikes added. In this case, the algorithm labeled several points at low spatial frequencies as spikes (arrow), and the severe image artifact can be seen to be a very low spatial frequency feature. This suggests that some of the points deleted and replaced were in fact valid and that they were incorrectly replaced.}
\label{montage}
\end{figure*}
A demonstration of the way in which the algorithm separates spikes from genuine data is shown in Fig. \ref{tv}. Here the lower half of the TV has been normalized and plotted in a histogram. The vertical dashed line is placed at the cutoff $\sqrt{\theta}$. Data that, when zeroed out, produce TV values less than the cutoff---to the left of the dashed line---are suspected spikes. Points to the right of the line are assumed to be correct. The histogram shows a bimodal distribution of TV values: a broad distribution at low values and a very narrow, highly peaked distribution near unity. Otsu's method chooses a TV cutoff value that minimizes the variance between these two distributions, but the histogram suggests that both distributions have overlapping tails, and hence the detection algorithm will, in cases with many spikes, incorrectly label some valid points as spikes and some spikes as valid. The goal is to minimize the number of such misclassifications, and Otsu's method is well suited to that goal.
\begin{figure}[h]
\centering
\includegraphics[width=3in]{fig_tv_hist}
\caption{Total variation histogram produced when synthetic spikes were added to 7T gradient echo brain data. Only the lower half of TV values are retained, and then Otsu's method is performed to determine a cutoff below which points are considered to be spike corrupted. }
\label{tv}
\end{figure}
To quantitatively evaluate the performance of our spike detection algorithm, we measured the sensitivity, specificity, and Matthews correlation coefficient of the algorithm as the number of synthetically generated spikes increased using four different exponents for the Otsu's method cutoff: $1/p$, where $p=\{1,2,3,4\}$. The results are shown in Fig. \ref{analysis}. The sensitivity and specificity can be written as
$\mathrm{TP}/(\mathrm{TP}+\mathrm{FN})$
and
$\mathrm{TN}/(\mathrm{TN}+\fp),$ respectively,
where $\mathrm{TP}$ is the number of true positives, $\mathrm{TN}$ is the number of true negatives, $\fp$ is the number of false positives, and $\mathrm{FN}$ is the number of false negatives.
\begin{figure}[h]
\centering
\includegraphics[width=2.8in]{fig_accuracy_analysis}
\caption{Numerical experiment showing the robustness of the spike detection accuracy to the number of spikes added in the synthetically spiked brain example and to adjustments to the threshold given by Otsu's method. Each curve with points corresponds to raising the Otsu's threshold to the specified fractional power ($1/p$, where $p \in \{1,2,3,4\}$), and the solid black line is perfect agreement with the number of spikes added (i.e. 100\% detection accuracy). Taking the square root of the Otsu threshold ($p=2$) seems to give the most robust accuracy as the number of spikes increases. }
\label{analysis}
\end{figure}
Since spike corruption tends to affect only a small fraction of the data set, the usual measure of detection accuracy, defined as $(\mathrm{TP}+\mathrm{TN})/\mathrm{total\ points}$, is unreliable. Instead we choose to present the Matthews correlation coefficient (MCC) of the classifications produced by the algorithm (Fig. \ref{analysis}c). The MCC can be thought of as a correlation coefficient between the measured and predicted classification of each point (corrupted or ok). The MCC can be calculated as $$\mathrm{MCC} = \frac{\mathrm{TP} \times \mathrm{TN} - \fp \times \mathrm{FN}}{\sqrt{(\mathrm{TP}+\fp)(\mathrm{TP}+\mathrm{FN})(\mathrm{TN}+\fp)(\mathrm{TN}+\mathrm{FN})}}.$$
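All three measures follow directly from the confusion counts; a minimal helper (ours, purely illustrative) is:

```python
import numpy as np

def detection_metrics(true_spikes, flagged, n_total):
    # true_spikes, flagged: sets of k-space indices; n_total: number of points.
    tp = len(true_spikes & flagged)       # spikes correctly flagged
    fp = len(flagged - true_spikes)       # valid points wrongly flagged
    fn = len(true_spikes - flagged)       # spikes missed
    tn = n_total - tp - fp - fn           # valid points left alone
    sens = tp / (tp + fn) if (tp + fn) else 0.0
    spec = tn / (tn + fp) if (tn + fp) else 0.0
    denom = np.sqrt(float(tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / denom if denom else 0.0
    return sens, spec, mcc
```

Because $\mathrm{TN}$ dwarfs the other counts in spike detection, the MCC is far more informative than raw accuracy here.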
We found that all three measures decline as more spikes were added, presumably because the number of real points that randomly had low TV values increased. Sensitivities were very uniformly high ($>$ 0.95) for $p > 1$. The worst sensitivity was 0.85 for $p=1$ at 404 spikes added.
The specificity of the technique is uniformly very high ($>$ 0.999) no matter how many spikes were added or the exact value of $p$. This is because the vast majority of data points were never selected, and even a few false positives are insignificant compared to the total number of data points.
Finally, the MCC was above 0.9 for all $p$ up to the maximum number of spikes, but $p=2$ was the highest at moderate to high spike corruption levels. For very small numbers of spikes, $p=1$ produced a higher MCC, but all $p$ produced high coefficients ($>0.98$) at low corruption levels. Since correcting data sets with higher levels of spike corruption is a more difficult problem, we believe that $p=2$ is the best choice. This motivates the choice of $\theta^{1/2}$ for the cutoff, which we adopt throughout.
\subsection{Application to Data Corrupted by Real Spikes}
Next, to test Algorithm 1 under real-world conditions with spikes at
uncertain locations, we applied it to
\emph{in vivo} whole-body fat-water imaging gradient echo data
acquired with a damaged RF room shield. RF interference had
severely corrupted the data on almost every slice. These data were
corrected using Algorithm 1 and then passed to a fat-water separation
algorithm \cite{Berglund2010} to compare the effects of the data scrubbing on
derived quantitative parameters. Data from each echo time were processed separately. The total processing time necessary to
correct all slices and echoes of this data set was approximately 30 min
on a GPU-accelerated workstation using MATLAB R2012a
(MathWorks, Natick, MA).
While the provenance of individual $k$-space points cannot be fully
known in this case, visually the data appeared to contain hundreds of
spikes spread across 240 slices. An example slice containing spikes
is shown in Fig. \ref{realdata}. Algorithm 1 automatically detected
scores of inconsistent points and eliminated them. The resulting reconstructed images were dramatically improved and, as a side effect, many regions of the derived fat-water images in which the water and fat fractions had been swapped were corrected. This suggests that spike corruption can have a significant effect on derived image quantities. The histogram of TV values produced is shown in Fig. \ref{realtv}. With far fewer spikes in this case, the form of the distribution is less clear, but the cutoff produced by the algorithm separates the relatively few corrupt points from the much more numerous valid points. The resulting images are shown in Fig. \ref{real}. Not only
were corrupted points correctly located, but they were replaced with
enough accuracy to improve the separation of water and fat in the
imaging data.
We measured the artifact level reduction as the mean image magnitude in the region outside of the body. When the spikes were replaced with zeros, we found a 42\% decrease in the artifact level outside the body. When a CS reconstruction was used to replace the points, we found a 47\% decrease in artifact level.
\begin{figure}[h]
\centering
\includegraphics[width=3.5in]{fig3a_fwi_data}
\caption{Example of spike corruption in $k$-space from the fat-water imaging data set acquired in the \emph{in vivo} experiment. The bright points that don't follow the general pattern of data are almost certainly corrupted. Low intensity features have been scaled up to aid visualization by taking the square root of the magnitude of the coefficients. This is the same data used for the uncorrected images in Fig. \ref{real}.}
\label{realdata}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=3in]{fig_tv_hist_fwi}
\caption{Histogram of normalized total variation for the corrupted fat-water imaging data. The number of spikes is much smaller in this real-world case, but the general characteristics of the histogram are the same. The cutoff obtained using Otsu's method shows again a good division between the few spikes and the much more numerous non-spike-corrupted data.}
\label{realtv}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=\textwidth]{fig3_fwi_example}
\caption{Algorithm performance with an unknown number of spikes in
real data acquired with a faulty RF room shield during a whole-body
gradient echo scan for fat-water imaging. The upper row (a--c)
shows the water images; the lower row (d--f) shows fat images. The
original, corrupted data is shown in the center column (b,e). The
degradation in image quality is clear, and an area of fat
misclassification is visible in the upper left (patient right
anterior) of the images. After running this algorithm on the data,
the images are significantly cleaned, whether the corrupted data was
replaced with zeros (images a, d) or using a compressed sensing
reconstruction (c,f). Replacing the spikes with zeros resulted in a 42\% artifact level reduction, and the CS reconstruction provided an additional 5\% reduction.}
\label{real}
\end{figure}
\section{Conclusion}
We have shown a method for detecting and eliminating radio frequency
spikes in MRI data that is automated, robust, and extremely
accurate. We demonstrated the success of the technique on both
synthetic corrupted data with known spikes and on
whole-body gradient echo data corrupted by real RF spike noise. Our method
achieved markedly improved images in both cases, and in the \emph{in
vivo} experiment also improved the derived fat and water images.
The most significant contribution of Algorithm 1 is the ability to
detect corrupted $k$-space locations without human intervention. Whatever the source of error, $k$-space data
that is inconsistent should produce a large effect on the image total
variation, a feature that can be exploited to clean images by filtering out corrupted $k$-space data. Humans are excellent pattern recognizers, but
requiring human intervention is slow and expensive. While
Fourier coefficient thresholding certainly works in many cases, it is easy to construct an example in which the spikes do not cross the threshold, or in which the assumptions about the falloff of complex coefficients at high spatial frequencies are incorrect. In fact, the real spike experiment
shown here was made possible only because the spiking in this case was
missed by the MRI scanner's commercial built-in algorithm that uses Fourier amplitude thresholding.
The second step of the algorithm, reconstructing the corrupted data with compressed sensing, offers only a small (11\% relative) improvement over replacing the spiked data with zeros, but it demonstrates the ability of CS to accurately estimate missing data. The use of CS is well matched to this problem: the deleted points were selected for their effect on the TV, and CS replaces them so that the TV effect is minimized. Given the small effect of CS, though, we stress that detection and replacement of spikes, even with zeros, is more important than reconstructing the missing data with CS. We show the CS results here for completeness.
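The detection principle can be illustrated with a toy 1-D analogue (our own construction, not Algorithm 1 itself): a single corrupted Fourier coefficient produces a global ripple that inflates the total variation of the reconstructed signal, so the corrupted coefficient is the one whose removal lowers the TV the most.

```python
import cmath

def idft(coeffs):
    """Naive inverse DFT (stdlib only; fine for tiny examples)."""
    n = len(coeffs)
    return [sum(c * cmath.exp(2j * cmath.pi * k * t / n)
                for k, c in enumerate(coeffs)) / n
            for t in range(n)]

def total_variation(signal):
    return sum(abs(signal[i + 1] - signal[i]) for i in range(len(signal) - 1))

n = 64
kspace = [cmath.exp(-0.5 * k) if k < 4 else 0 for k in range(n)]  # smooth signal
kspace[37] += 50.0          # one corrupted k-space point (an RF spike)

baseline = total_variation(idft(kspace))
# Zeroing the corrupted coefficient yields the largest drop in total variation.
drops = []
for k in range(n):
    trial = list(kspace)
    trial[k] = 0
    drops.append(baseline - total_variation(idft(trial)))
detected = max(range(n), key=drops.__getitem__)
print(detected)   # 37
```

In the actual algorithm the analogous TV score is computed per $k$-space point in 2-D, with Otsu's method separating the scores rather than a simple argmax.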
Our algorithm does not blur or excessively smooth the image, as can
occur with TV-based denoising. Since we are only undersampling the
image by a very small amount in $k$-space, the data constraint is
enforced in the Fourier domain, and we are not affecting the image as much
as image-domain techniques like TV denoising. In fact, the amount of
data altered here is so slight that only the sinusoidal ripples were
removed, and even the background Rician noise in the magnitude image
was left intact.
We note that the artifacts due to spiking are coherent and structured, yet CS still performs well. The compressed sensing MRI literature frequently refers to the requirement that image artifacts due to the random undersampling be ``incoherent,'' but this is somewhat imprecise. CS theory states that the measurement basis and the sparse basis should be incoherent, in this case the Fourier measurement and the image gradient. This allows the sparse basis to maximally constrain the data in the measurement domain, effectively turning a global effect into information about single $k$-space points. In fact,
in the case of RF spike noise with one spike, the image artifact will
be extremely coherent, consisting of just one frequency, but TV
minimization works extraordinarily well to eliminate it because a
single errant point in $k$-space produces a global effect on the gradient
image.
While Algorithm 1 appears to be robust and flexible, many improvements
could be added to increase its accuracy and sensitivity. For example,
inconsistencies across multiple receive coil channels or dynamic
acquisitions could supplement detection criteria to increase accuracy,
perhaps by comparing the variance of $k$-space points across time or
coil. Most functional MRI and diffusion tensor imaging studies acquire
dozens or hundreds of sequential dynamic images with echo-planar
readouts that are demanding of gradient hardware and that can
frequently cause spiking. The advantage of the large number of
dynamics acquired in these studies is that each $k$-space location is
acquired many times, allowing temporal correlations and statistical
methods to be added to the basic TV method presented here. Presumably, the spike distribution would be broader with a higher mean in the histogram of temporal variances for each point. Otsu's method could determine a separation threshold based on this criteria instead of the TV value alone, or perhaps both could be included in the categorization criteria.
Also, contiguous data samples instead of single $k$-space points could be deleted. Because
spike occurrence is a temporal phenomenon and $k$-space is traversed
through time, portions of $k$-space larger than a single point may be
corrupted by a single RF event. The data in Fig. \ref{realdata} show an example of this: the bright spikes are clustered and trail along the vertical (readout) axis. In the future, this could possibly be exploited to improve detection accuracy.
If $k$-space locations were
considered in pairs or larger groups, as well as in isolation, the
detection might be improved. The computational penalty for this would
be large, however, since each comparison requires a full 2D or 3D fast
Fourier transform. Note that examples shown here are 2D only, and
moving to a full 3D treatment would be significantly more
computationally intensive, but not impossible.
With different data corruption models, this approach could even be
extended to motion compensation, Nyquist ghosting, pulsatile
artifacts, etc., in which entire readout lines are corrupted or have a
systematic error. In these cases, the image TV could constrain the
replacement of entire readout lines or help constrain empirical fits
in parametric models, such as in the case of gradient delay
estimation.
However, it is important to note that the concept of identifying inconsistent or corrupted lines of $k$-space caused by motion or other sources of data perturbation is not novel. Many methods are described in the literature to detect and compensate for artifacts caused by such $k$-space corruption, including (but not limited to) dedicated navigator echoes \cite{Ehman1989,Brau2006}, self-navigation \cite{Pipe1999b,Welch2004a}, iterative autofocusing \cite{Atkinson1997a,Manduca2000}, and methods based on the data redundancy afforded by multi-channel receive coils \cite{Bydder2002,Larkman2004,Atkinson2006}.
The ultimate goal of this type of approach is to create an automated
data post-processing step that detects and eliminates RF spikes and
creates a ``clean'' $k$-space for the reconstruction pipeline. The
method presented here is compatible with a reconstruction
post-processing pipeline that works with little or no user
intervention to clean up data. Additionally, Algorithm 1 could be
modified to aid quality control checks of clinical systems and monitor
hardware function. Finally, we have provided evidence here that with
an effective spike detection and correction method, discarding a data
set due to spiking artifacts may rarely be necessary.
\section*{Acknowledgments}
Financial support from NIBIB T32 EB001628, NCI R25CA092043, and NCATS UL1 TR000445.
\bibliographystyle{unsrt}
\section{Introduction}
Automatic differentiation (AD) refers to a family of techniques for mechanically computing derivatives of user-defined functions. When applied to straight-line programs built from differentiable primitives (without \texttt{if} statements, loops, and other control flow), AD has a straightforward justification using the chain rule. But in the presence of more expressive programming constructs, AD is harder to reason about: some programs encode partial or non-differentiable functions, and even when programs \textit{are} differentiable, AD can fail to compute their true derivatives at some inputs.
The aim of this work is to provide a useful mathematical model of (1) the class of partial functions that recursive, higher-order programs built from ``AD-friendly'' primitives can express, and (2) the guarantees AD makes when applied to such programs. Our model helps answer questions like:
\begin{itemize}[leftmargin=*]
\item \textbf{If AD is applied to a recursive program, does the derivative halt on the same inputs as the original program?} We show that the answer is yes, and that restricted to this common domain of definition, AD computes an \textit{intensional derivative} of the input program~\citep{lee2020correctness}.
\item \textbf{Is it sound to use AD for ``Jacobian determinant'' corrections in probabilistic programming languages (PPLs)?} Many probabilistic programming systems use AD to compute densities of transformed random variables~\citep{radul2021base} and reversible-jump MCMC acceptance probabilities~\citep{cusumano2020automating}. AD produces correct derivatives for almost all inputs~\citep{mazza2021automatic}, but PPLs evaluate derivatives at inputs sampled by probabilistic programs, \textit{which may have support entirely on Lebesgue-measure-zero manifolds of $\mathbb{R}^n$}. We may then wonder: could PPLs, for certain input programs, produce wrong answers \textit{with probability 1}? We show that fortunately, even when AD gives wrong answers, it does so \textit{in a way} that does not compromise the correctness of the Jacobian determinants that probabilistic programming systems compute.
%
\end{itemize}
Our approach is inspired by \citet{lee2020correctness}, who provided a similar characterization of AD for a first-order language with branching (but not recursion). Section~2 reviews their development, and gives intuition for why recursion, higher-order functions, and partiality present additional challenges. In Section~3, we give our solution to these challenges, culminating in a similar result to that of \citet{lee2020correctness} but for a more expressive language. We recover as a straightforward corollary the result of \citet{mazza2021automatic} that when applied to recursive, higher-order programs, AD fails at a measure-zero set of inputs. In Section~4, we briefly discuss the implications of our characterization for PPLs. Indeed, reasoning about AD in probabilistic programs is a key motivation for our work: perhaps more so than typical differentiable programs, probabilistic programs often employ higher-order functions and recursion as modeling tools. For example, early Church programs used recursive, higher-order functions to express non-parametric probabilistic grammars~\citep{goodman2012church}, and modern PPLs such as Gen~\citep{cusumano2019gen} and Pyro~\citep{bingham2019pyro} use higher-order primitives with custom derivatives or other specialized inference logic to scale to larger datasets.%
\footnote{Higher-order recursive combinators like \texttt{map} and \texttt{unfold} enforce conditional independence patterns that systems can exploit for subsampling-based gradient estimates (in Pyro) or incremental computation (in Gen).} Furthermore, as mentioned above, probabilistic programs may have support entirely on Lebesgue-measure-zero manifolds, so the intuition that AD is correct ``almost everywhere'' becomes less useful as a reasoning aid\textemdash motivating the need for more precise models of AD's behavior.
{\bf Related work.} The growing importance of AD for learning and inference has inspired a torrent of work on the semantics of differentiable languages, summarized in Table~\ref{table:comparison}. We build on existing denotational approaches, particularly those of \citet{huot2020correctness} and \citet{vakar2020denotational}, but incorporate ideas from \citet{lee2020correctness} to handle piecewise functions, and from \citet{vakar2019domain} to model probabilistic programs. \citet{mazza2021automatic} consider a language as expressive as our deterministic fragment; we give a novel and complementary denotational account (their approach is operational). \citet{mak2021densities} do not consider AD, but do give an operational semantics for a Turing-complete PPL, and tools for reasoning about differentiability of density functions.
\begin{table}[]
\centering
\footnotesize{
\begin{tabular}{|r|c|c|c|c|c|c|}
\hline
{\bf Semantic Framework} & {\bf Piecewise} & {\bf Recursion} & {\bf Higher-Order} & {\bf Approach} & {\bf AD} & {\bf PPL} \\\hline
\citet{huot2020correctness} & & & \ding{52} & Denotational & \ding{52} & \\
\citet{vakar2020denotational} & & \ding{52} & \ding{52} & Denotational & \ding{52} & \\
\citet{lee2020correctness} & \ding{52} & & & Denotational & \ding{52} & \\
\citet{abadi-plotkin2020} & & \ding{52} & & Both & \ding{52} & \\
\citet{mazza2021automatic} & \ding{52} & \ding{52} & \ding{52} & Operational & \ding{52} & \\
\citet{mak2021densities} & \ding{52} & \ding{52} & \ding{52} & Operational & & \ding{52} \\
Ours & \ding{52} & \ding{52} & \ding{52} & Denotational & \ding{52} & \ding{52}\\\hline
\end{tabular}
}
\vspace{1mm}
\caption{Approaches to reasoning about differentiable programming languages and AD. ``Piecewise'': the semantics accounts for total but discontinuous functions such as $<$ and $==$ for reals. ``PPL'': the same semantic framework can be used to reason about probabilistic programs as well as the differentiability properties of deterministic ones. ``AD'': the framework supports reasoning about soundness of AD (NB: we handle only forward-mode, whereas others handle reverse-mode or both).}
\vspace{-8mm}
\label{table:comparison}
\end{table}
\vspace{-3mm}
\section{Background: PAP functions and intensional derivatives}
\vspace{-2mm}
In this section, we recall \citet{lee2020correctness}'s approach to understanding a simple differentiable programming language, and describe the key challenges for extending their approach to a more complex language, with partiality, recursion, and higher-order functions.
\vspace{-3mm}
\subsection{A First-Order Differentiable Programming Language}
\vspace{-2mm}
\citet{lee2020correctness} consider a first-order language with real number constants $c$, primitive real-valued functions $f : \mathbb{R}^{N_f} \rightarrow \mathbb{R}$, as well as an \texttt{if} construct for branching: $e ::= c \mid x_i \mid f(e_1,...,e_n) \mid \texttt{if} \, (e_1 > 0) \, e_2 \, e_3$. Expressions $e$ in the language denote functions $\llbracket e \rrbracket : \mathbb{R}^n \rightarrow \mathbb{R}$ of an \textit{input vector} $\mathbf{x} \in \mathbb{R}^n$ (for some $n$). We have $\llbracket c\rrbracket \mathbf{x} = c$, $\llbracket x_i \rrbracket \mathbf{x} = \mathbf{x}[i]$, $\llbracket f(e_1, \dots, e_n) \rrbracket \mathbf{x} = f(\llbracket e_1\rrbracket \mathbf{x}, \dots, \llbracket e_n\rrbracket\mathbf{x})$, and $\llbracket \texttt{if }(e_1 > 0)\, e_2\, e_3\rrbracket \mathbf{x} = [ \llbracket e_1\rrbracket\mathbf{x} > 0] \cdot (\llbracket e_2\rrbracket \mathbf{x}) + [\llbracket e_1\rrbracket\mathbf{x} \leq 0] \cdot (\llbracket e_3\rrbracket\mathbf{x})$.
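This denotational semantics transcribes directly into a small evaluator (our illustrative sketch; the encoding of expressions as tagged tuples is ours, not from the paper):

```python
import math

def denote(e, x):
    """Evaluate [[e]]x.  Expressions are tagged tuples:
    ('const', c) | ('var', i) | ('app', f, e1, ..., en) | ('if', e1, e2, e3)."""
    tag = e[0]
    if tag == 'const':
        return e[1]
    if tag == 'var':
        return x[e[1]]
    if tag == 'app':                     # apply a primitive f to evaluated arguments
        return e[1](*(denote(arg, x) for arg in e[2:]))
    if tag == 'if':                      # the guard tests e1 > 0
        return denote(e[2], x) if denote(e[1], x) > 0 else denote(e[3], x)
    raise ValueError(tag)

# if (x0 > 0) then x0 * x0 else sin(x0) -- a PAP function of one real input
mul = lambda a, b: a * b
prog = ('if', ('var', 0),
        ('app', mul, ('var', 0), ('var', 0)),
        ('app', math.sin, ('var', 0)))
print(denote(prog, [2.0]))    # 4.0
print(denote(prog, [-1.0]))   # sin(-1.0)
```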
A key insight of \citet{lee2020correctness} is that if the primitive functions $f$ are \textit{piecewise analytic under analytic partition}, or \textit{PAP}, then so is any program written in the language. They define PAP functions in stages, starting with the concept of a \textit{piecewise representation} of a function $f$:
\begin{wrapfigure}{r}{0.55\textwidth}
\centering
\includegraphics[width=0.5\textwidth]{correctness_ad.pdf}
\caption{\citet{lee2020correctness}'s denotational characterization of AD, which we extend. For a simple differentiable programming language, they define a class of functions (\textit{PAP functions}) expressive enough to interpret all programs in the language. Not all PAP functions are differentiable, but \citet{lee2020correctness} propose a relaxed notion of derivatives, called \textit{intensional derivatives}, that always exist for PAP functions, and show that AD yields intensional derivatives. We extend their argument to handle recursion and higher-order functions, by defining a \textit{generalized} class of \textit{$\omega$PAP functions} and a corresponding generalization of the notion of intensional derivative.}
\vspace{-10mm}
\label{fig:my_label}
\end{wrapfigure}
\textbf{Definition.} Let $U \subseteq \mathbb{R}^n$, $V \subseteq \mathbb{R}^m$, and $f : U \rightarrow V$. A \textit{piecewise representation} of $f$ is a countable family $\{(A_i, f_i)\}_{i \in I}$ such that: (1) the sets $A_i$ form a partition of $U$; (2) each ${f_i : U_i \rightarrow \mathbb{R}^m}$ is defined on an open domain $U_i\supseteq A_i$; and (3) when $x \in A_i$, $f_i(x) = f(x)$.
The PAP functions are those with \textit{analytic} piecewise representations:
\textbf{Definition.} If the Taylor series of a smooth function $f$ converges pointwise to $f$ in a neighborhood around $x$, we say $f$ is \textit{analytic} at $x$. An \textit{analytic function} is analytic everywhere in its domain.
We call a set $A \subseteq \mathbb{R}^n$ an \textit{analytic set} iff there exist finite collections $\{g^+_i\}_{i \in I}$ and $\{g^-_j\}_{j \in J}$ of analytic functions into $\mathbb{R}$, with open domains $X^+_i$ and $X^-_j$, such that $A = \{x \in (\bigcap_{i} X^+_i) \cap (\bigcap_{j} X^-_j) \mid \forall i.\, g^+_i(x) > 0 \wedge \forall j.\, g^-_j(x) \leq 0\}$ (i.e., analytic sets are subsets of open sets carved out using a finite number of analytic inequalities).
\textbf{Definition.} We say $f$ is \textit{piecewise analytic under analytic partition (PAP)} if there exists a piecewise representation $\{(A_i, f_i)\}_{i \in I}$ of $f$ such that the $A_i$ are analytic sets and the $f_i$ are analytic functions. We call such a representation a \textit{PAP representation}.
\textbf{Proposition (Lee et al. 2020).} Constant functions and projection functions are PAP. Supposing $f : \mathbb{R}^n \rightarrow \mathbb{R}$ is PAP, and $\llbracket e_1\rrbracket, \dots, \llbracket e_n\rrbracket$ are all PAP, $\llbracket f(e_1, \dots, e_n)\rrbracket$ and $\llbracket \texttt{if } (e_1 > 0) \, e_2 \, e_3\rrbracket$ are also PAP. Therefore, by induction, all expressions $e$ denote PAP functions $\llbracket e\rrbracket$.
PAP functions are not necessarily differentiable. But \citet{lee2020correctness} define \textit{intensional derivatives}, which do always exist for PAP functions (though unlike traditional derivatives, they are not unique):
\textbf{Definition.} A function $g$ is \textit{an intensional derivative} of a PAP function $f$ if there exists a PAP representation $\{(A_i, f_i)\}_{i \in I}$ of $f$ such that when $x \in A_i$, $g(x) = f_i'(x)$.
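For intuition (our own sketch, not the algorithm of \citet{lee2020correctness}): a forward-mode AD pass with dual numbers, applied to a branching program, returns the derivative of whichever piece the input falls into, and hence an intensional derivative even at points where the true derivative does not exist.

```python
class Dual:
    """Forward-mode AD value: a primal number paired with a tangent."""
    def __init__(self, primal, tangent):
        self.primal, self.tangent = primal, tangent
    def __mul__(self, other):
        return Dual(self.primal * other.primal,
                    self.primal * other.tangent + self.tangent * other.primal)
    def __neg__(self):
        return Dual(-self.primal, -self.tangent)
    def __gt__(self, other):       # branching inspects only the primal value
        return self.primal > other

def piecewise(x):
    # if (x > 0) then x*x else -x : PAP, but not differentiable at 0
    return x * x if x > 0 else -x

def intensional_deriv(f, x0):
    return f(Dual(x0, 1.0)).tangent

print(intensional_deriv(piecewise, 3.0))   # 6.0 (the true derivative)
print(intensional_deriv(piecewise, 0.0))   # -1.0 (derivative of the x <= 0 piece)
```

At $x = 0$ this function has left derivative $-1$ and right derivative $0$, so it is not differentiable there; AD returns $-1$, the derivative of the $x \leq 0$ piece of the PAP representation $\{(x \leq 0, -x),\, (x > 0, x^2)\}$, which is a valid intensional derivative.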
\citet{lee2020correctness} then give a standard AD algorithm for their language and show that, when applied to an expression $e$, it is guaranteed to yield \textit{some} intensional derivative of $\llbracket e\rrbracket$ as long as each primitive $f$ comes equipped with an intensional derivative. Essentially, this proof is based on an analogue to the chain rule for intensional derivatives. The result is depicted schematically in Figure~\ref{fig:my_label}.
\vspace{-2mm}
\subsection{Challenges: Partiality, Higher-Order Functions, and Recursion}
\begin{wrapfigure}{l}{0.40\textwidth}
\vspace{-3mm}
\centering
\includegraphics[width=0.35\textwidth]{cantor_func.pdf}
\caption{This program can be seen as a piecewise function, constantly $\bot$ on the $\frac{1}{3}$-Cantor set and $x^2$ elsewhere in $(0, 1)$. But the $\frac{1}{3}$-Cantor set is not expressible as a countable union of analytic sets, and so this representation is not PAP.}
\vspace{-14mm}
\label{fig:cantor_func}
\end{wrapfigure}
Can a similar analysis be carried out for a more complex language, with higher-order and recursive functions? One challenge is that as defined above, only total, first-order functions can be PAP: unless we can generalize the definition to cover partial and higher-order functions, we cannot reproduce the inductive proof that PAP functions are closed under the programming language's constructs. This is a roadblock even if we care only about differentiating first-order, total programs.
To see why, recall that we required primitives $f$ to be PAP in the section above.
What alternative requirements should we place on partial or higher-order primitives, to ensure that first-order programs built from them will be PAP? How should these primitives' built-in intensional derivatives behave?
These are non-trivial challenges, and it is possible to formulate reasonable-sounding solutions that turn out not to work. For example, we might hypothesize that \textit{partial} functions $f : U \rightharpoonup V$ definable using recursion are \textit{almost} PAP: perhaps there still exists an analytic partition $\{A_i\}_{i \in I}$ of $U$, and analytic functions $\{f_j\}_{j \in J}$ for some \textit{subset} of indices $J \subseteq I$, such that $f$ is defined exactly on $\bigcup_{j \in J} A_j$ and $f(x) = f_j(x)$ whenever $x \in A_j$. But consider the program $\texttt{cantor}$ in Figure 2. It denotes a partial function that is undefined outside of $[0, 1]$, and also on the $\frac{1}{3}$-Cantor set. This region \textit{cannot} be expressed as a countable union of analytic sets $\cup_{i \in I \setminus J} A_i$. Therefore, this candidate notion of PAP partial function is too restrictive: some recursive programs do not satisfy it.
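The program in Figure~\ref{fig:cantor_func} is not reproduced in the text; the following is a hypothetical reconstruction (ours) of a recursive program with the stated denotation, diverging exactly on inputs in the Cantor set, i.e., those whose ternary expansion avoids the digit 1:

```python
def cantor(x, y=None):
    """Returns x**2 for x in (0, 1) outside the 1/3-Cantor set; otherwise recurses
    forever (conceptually bottom -- in practice Python's recursion limit is hit)."""
    if y is None:
        if not (0 < x < 1):
            return cantor(x, None)        # diverge outside (0, 1)
        y = x
    if 1/3 < y < 2/3:                     # a ternary digit of x equals 1:
        return x * x                      # x has escaped the Cantor set
    return cantor(x, 3 * y if y <= 1/3 else 3 * y - 2)

print(cantor(0.5))    # 0.25
print(cantor(0.2))    # 0.04000... (0.2 -> 0.6 lands in the middle third)
# cantor(0.25) never returns: 1/4 has ternary expansion 0.0202..., a Cantor point
```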
\section{$\omega$PAP Semantics}
\vspace{-3mm}
In this section, we present an expressive differentiable programming language, with higher-order functions, branching, and general recursion. We then generalize the definitions of PAP functions and intensional derivatives to include higher-order and partial functions, and show that: (1) all programs in our language denote (this generalized variant of) PAP functions, and (2) a standard forward-mode AD algorithm computes valid intensional derivatives.
\vspace{-2mm}
\subsection{A Higher-Order, Recursive Differentiable Language}
\textbf{Syntax.} Consider a language with types $\tau ::= \mathbb{R}^k \mid \tau_1 \times \tau_2 \mid \tau_1 \rightarrow \tau_2$, and terms $e ::= x \mid c \mid e_1\, e_2 \mid \texttt{if } (e_1 > 0) \, e_2\, e_3 \mid \lambda x : \tau. e \mid \mu f : \tau_1 \rightarrow \tau_2. e$. Here, $c$ ranges over constants \textit{of any type}, including constant numeric literals such as $\texttt{3}$ and $\pi$, as well as primitive functions such as $\texttt{+}$ and $\texttt{sin}$. We write $(e_1, e_2)$ as sugar for $\texttt{pair}_{\tau_1, \tau_2}\, e_1\, e_2$, where $\texttt{pair}_{\tau_1, \tau_2} : \tau_1 \rightarrow \tau_2 \rightarrow \tau_1 \times \tau_2$ is a constant for constructing tuples. The $\mu$ expression creates a recursive function of type $\tau_1 \rightarrow \tau_2$, binding the name $f$ for the recursive call. For example, a version of the factorial function that works on natural number inputs is $\mu f : \mathbb{R} \rightarrow \mathbb{R}. (\lambda x : \mathbb{R}. \texttt{if } (x > 0) \,\, (x * f (x-1)) \,\, 1)$.
\textbf{Semantics of types.} For each type $\tau$ we choose a set $\llbracket \tau\rrbracket$ of values. We have $\llbracket\mathbb{R}^k\rrbracket = \mathbb{R}^k$ and $\llbracket\tau_1 \times \tau_2\rrbracket = \llbracket\tau_1\rrbracket \times \llbracket\tau_2\rrbracket$. Function types are slightly more complicated because we wish to represent \textit{partial} functions. Given a set $X$, we define $X_\bot = \{\uparrow x \mid x \in X\} \cup \{\bot\}$. (The tag $\uparrow$ is useful to avoid ambiguity when $\bot$ is already a member of $X$: then $X_\bot$ contains as distinct elements the newly adjoined $\bot$ and the representation $\uparrow \bot$ of the original $\bot$ from $X$.) Using this construction, we define $\llbracket \tau_1 \rightarrow \tau_2 \rrbracket = \llbracket\tau_1\rrbracket \rightarrow \llbracket\tau_2\rrbracket_\bot$: we represent partial functions returning $\tau_2$ as total functions into $\llbracket\tau_2\rrbracket_\bot$.
\textbf{Semantics of terms.} We interpret expressions of type $\tau$ as functions from \textit{environments} $\gamma$ (mapping variables $x$ to their values $\gamma[x]$) to $\llbracket\tau\rrbracket_\bot$. If $a \in \llbracket\tau\rrbracket_\bot$ and $b \in \llbracket\tau \rightarrow \sigma\rrbracket$, we write (as a mathematical expression, not an object language expression) $a \texttt{>>=} b$ to mean $\bot$ if $a = \bot$, and $b(x)$ if $a = \, \uparrow x$. Using this notation, we can define the interpretation of each construct in our language. We define: $\llbracket c\rrbracket\gamma = \uparrow c$, $\llbracket x\rrbracket\gamma = \uparrow(\gamma[x])$, $\llbracket e_1\, e_2\rrbracket\gamma = (\llbracket e_1\rrbracket\gamma) \texttt{>>=} (\lambda f. \llbracket e_2\rrbracket\gamma \texttt{>>=} f)$, $\llbracket \lambda x. e\rrbracket\gamma = \uparrow \lambda v. \llbracket e\rrbracket(\gamma[x \mapsto v])$, and $\llbracket \texttt{if } (e_1 > 0) \, e_2 \, e_3\rrbracket\gamma = \llbracket e_1\rrbracket\gamma \texttt{>>=} \lambda x. [x > 0 \mapsto \llbracket e_2\rrbracket\gamma, x \leq 0 \mapsto \llbracket e_3\rrbracket\gamma]$. To interpret recursion, we use the standard domain-theoretic approach. We first define partial orders $\preceq_\tau$ inductively for each type $\tau$: we define $x \preceq_{\mathbb{R}^k} y \iff x = y$, $(x_1, x_2) \preceq_{\tau_1 \times \tau_2} (y_1, y_2) \iff x_1 \preceq_{\tau_1} y_1 \wedge x_2 \preceq_{\tau_2} y_2$, and $f \preceq_{\tau \rightarrow \sigma} g \iff \forall x \in \llbracket\tau\rrbracket, f(x) \preceq^\bot_\sigma g(x)$. (The relation $x \preceq^\bot_\tau y$ holds when $x = \bot$ or when $x = \uparrow a$, $y = \uparrow b$, and $a \preceq_\tau b$ for some $a, b \in \llbracket\tau\rrbracket$.) Intuitively, $x \preceq y$ if $y$ is ``at least as defined'' as $x$. 
For the types $\tau$ in our language, given an infinite non-decreasing sequence $x_1 \preceq_\tau x_2 \preceq_\tau \dots$ of values, there exists a least upper bound $\vee_{i\in\mathbb{N}} x_i$. If the primitives in our language are \textit{Scott-continuous} (monotone with respect to $\preceq$, with the property that $\bigvee_i f(x_i) = f(\bigvee_i x_i)$), we can interpret recursion: $\llbracket \mu f : \tau \rightarrow \sigma. \lambda x. e\rrbracket\gamma = \uparrow \bigvee_{i \in \mathbb{N}} f_i$, where $f_0 = \lambda x. \bot$ and $f_i = \lambda v. \llbracket e\rrbracket(\gamma[x \mapsto v, f \mapsto f_{i-1}])$.
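The least-upper-bound construction can be made concrete (a sketch of ours, modeling $\bot$ as \texttt{None}): the factorial program above denotes the limit of the chain $f_0 \preceq f_1 \preceq \dots$ of finite unrollings.

```python
def kleene_chain(step, n):
    """f_0 = (lambda x: None), f_i = step(f_{i-1}); None models bottom."""
    chain = [lambda x: None]
    for _ in range(n):
        chain.append(step(chain[-1]))
    return chain

def fact_step(rec):
    # body of  mu f. lambda x. if (x > 0) (x * f(x - 1)) 1 , propagating bottom
    def f(x):
        if x > 0:
            r = rec(x - 1)
            return None if r is None else x * r
        return 1
    return f

chain = kleene_chain(fact_step, 5)
print(chain[1](3))   # None: one unrolling cannot reach the base case from 3
print(chain[4](3))   # 6: f_4 (and every later f_i) is defined at 3
```

Each $f_i$ agrees with $f_{i-1}$ wherever the latter is defined and is possibly defined at more inputs, so the chain is non-decreasing in $\preceq$ and its least upper bound is the factorial function on the naturals.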
\subsection{$\omega$PAP functions}
We have given a standard denotational semantics to our language, interpreting terms as partial functions. We now generalize the notion of a PAP function to that of an \textit{$\omega$PAP partial function}, and show that if all the primitives are $\omega$PAP, so is any program. The definition relies on the choice, for each type $\tau$, of a set of well-behaved or ``PAP-like'' functions from Euclidean space \textit{into} $\llbracket\tau\rrbracket$, called the \textit{$\omega$PAP diffeology} of $\tau$, by analogy with diffeological spaces~\citep{iglesias2013diffeology}.
\textbf{Definition.} A set $U \subseteq \mathbb{R}^n$ is \textit{c-analytic} if it is a countable union of analytic sets.
\textbf{Definition.} Let $\tau$ be a type. An \textit{$\omega$PAP diffeology} $\mathcal{P}_\tau$ for $\tau$ assigns to each c-analytic set $U$ a set $\mathcal{P}^U_\tau$ of \textit{PAP plots in $\tau$}, functions from $U$ into $\llbracket\tau\rrbracket$ satisfying the following closure properties:
\vspace{-2mm}
\begin{itemize}
\item \textbf{(Constants.)} All constant functions are plots in $\tau$.
\item \textbf{(Closure under PAP precomposition.)} If $V \subseteq \mathbb{R}^m$ is a c-analytic set, and $f : V \rightarrow U$ is PAP, then $\phi \circ f$ is a plot in $\tau$ if $\phi : U \rightarrow \llbracket\tau\rrbracket$ is.
\item \textbf{(Closure under piecewise gluing.)} If $\phi : U \rightarrow \llbracket\tau\rrbracket$ is such that the restriction $\phi|_{A_i} : A_i \rightarrow \llbracket\tau\rrbracket$ is a plot in $\tau$ for each $A_i$ in some c-analytic partition of $U$, then $\phi$ is a plot in $\tau$.
\item \textbf{(Closure under least upper bounds.)} Suppose $\phi_1, \phi_2, \dots$ is a sequence of plots in $\tau$ such that for all $x \in U$, $\phi_i(x) \preceq_\tau \phi_{i+1}(x)$. Then $\bigvee_{i \in \mathbb{N}} \phi_i = \lambda x. \bigvee_{i \in \mathbb{N}} \phi_i(x)$ is a plot.
\end{itemize}
\vspace{-3mm}
\textbf{Choosing $\omega$PAP diffeologies.} We set $\mathcal{P}^U_{\mathbb{R}^k}$ to be all PAP functions from $U$ to $\mathbb{R}^k$. (These trivially satisfy the least-upper-bound condition above, since $\preceq_{\mathbb{R}^k}$ only relates equal vectors.) For $\tau_1 \times \tau_2$, we include $f \in \mathcal{P}^U_{\tau_1 \times \tau_2}$ iff $\pi_1 \circ f \in \mathcal{P}^U_{\tau_1}$ and $\pi_2 \circ f \in \mathcal{P}^U_{\tau_2}$. Function types are more interesting. A function $f : U \rightarrow \llbracket \tau_1 \rightarrow \tau_2\rrbracket$ is a plot if, for all PAP functions $\phi_1 : V \rightarrow U$ and plots $\phi_2 \in \mathcal{P}^V_{\tau_1}$, the function $\lambda v. f(\phi_1(v))(\phi_2(v))$ is defined (i.e., not $\bot$) on a c-analytic subset of $V$, restricted to which it is a plot in $\tau_2$.
\textbf{Definition.} Let $\tau_1$ and $\tau_2$ be types. Then a \textit{partial $\omega$PAP function} $f : \tau_1 \rightarrow \tau_2$ is a Scott-continuous function from $\llbracket\tau_1\rrbracket$ to $\llbracket\tau_2\rrbracket_\bot$, such that if $\phi \in \mathcal{P}^U_{\tau_1}$, $f \circ \phi$ is defined on a c-analytic subset of $U$, restricted to which it is a plot in $\tau_2$.%
\footnote{For readers familiar with category theory, the $\omega$PAP spaces ($\omega$cpos $(X, \preceq_X)$ equipped with $\omega$PAP diffeologies $\mathcal{P}_X$) and the total $\omega$PAP functions form a CCC, enriched over $\omega$Cpo. In fact, $\omega$PAP is equivalent to a category of models of an essentially algebraic theory, which means it also has all small limits and colimits. It can also be seen as a category of concrete sheaves valued in $\omega$Cpo, making it a Grothendieck quasi-topos.}
We revise our earlier interpretation of $\llbracket\tau_1 \rightarrow\tau_2\rrbracket$ to include only the partial $\omega$PAP functions.
We note that under this definition, a total function $f : \mathbb{R}^n \rightarrow \mathbb{R}^m$ is $\omega$PAP if and only if it is PAP. The generalization becomes apparent only when working with function types and partiality.
\textbf{Proposition.} If every primitive function is $\omega$PAP, then every expression $e$ of type $\tau$ with free variables of type $\tau_1, \dots, \tau_n$ denotes a partial $\omega$PAP function $f : \tau_1 \times \dots \times \tau_n \rightarrow \tau$. In particular, programs that denote total functions from $\mathbb{R}^n$ to $\mathbb{R}^m$, even if they use recursion and higher-order functions internally, always denote (ordinary) PAP functions.
\vspace{-3mm}
\subsection{Automatic Differentiation}
\vspace{-3mm}
\textbf{Implementation of AD.} We now describe a standard forward-mode AD macro, adapted from \citet{huot2020correctness} and \citet{vakar2020denotational}. For each type $\tau$, we define a ``dual number type'' $\mathcal{D}\llbracket\tau\rrbracket$: $\mathcal{D}\llbracket\mathbb{R}^k\rrbracket = \mathbb{R}^k \times \mathbb{R}^k$, $\mathcal{D}\llbracket\tau_1 \times \tau_2\rrbracket = \mathcal{D}\llbracket\tau_1\rrbracket \times \mathcal{D}\llbracket\tau_2\rrbracket$, and $\mathcal{D}\llbracket\tau_1 \rightarrow \tau_2\rrbracket = \mathcal{D}\llbracket\tau_1\rrbracket \rightarrow \mathcal{D}\llbracket\tau_2\rrbracket$.
The AD macro translates terms of type $\tau$ into terms of type $\mathcal{D}\llbracket\tau\rrbracket$: $\mathcal{D}\llbracket x\rrbracket = x$, $\mathcal{D}\llbracket e_1\, e_2\rrbracket = \mathcal{D}\llbracket e_1\rrbracket\, \, \mathcal{D}\llbracket e_2\rrbracket$, $\mathcal{D}\llbracket \texttt{if } (e_1 > 0) \, e_2\, e_3\rrbracket = \texttt{if } (\pi_1(\mathcal{D}\llbracket e_1\rrbracket) > 0)\,\,\mathcal{D}\llbracket e_2\rrbracket \,\, \mathcal{D}\llbracket e_3\rrbracket$, $\mathcal{D}\llbracket \lambda x: \tau. e\rrbracket = \lambda x: \mathcal{D}\llbracket\tau\rrbracket. \mathcal{D}\llbracket e\rrbracket$, and $\mathcal{D}\llbracket \mu f : \tau \rightarrow \sigma. e\rrbracket = \mu f : \mathcal{D} \llbracket\tau\rrbracket \rightarrow \mathcal{D}\llbracket\sigma\rrbracket. \mathcal{D}\llbracket e\rrbracket$. Constants $c$ come equipped with their own translations $c_\mathcal{D}$: $\mathcal{D}\llbracket c\rrbracket = c_\mathcal{D}$.
For constants of type $\mathbb{R}^k$, $c_\mathcal{D} = (c, \mathbf{0})$, but for functions, $c_\mathcal{D}$ encodes a primitive's intensional derivative. For example, when $c : \mathbb{R} \rightarrow \mathbb{R}$, we require that $c_\mathcal{D} : \mathbb{R} \times \mathbb{R} \rightarrow \mathbb{R} \times \mathbb{R}$ be such that, for any PAP function $f : \mathbb{R} \rightarrow \mathbb{R}$ with intensional derivative $g$, there is an intensional derivative $h$ of $c \circ f$ such that $c_\mathcal{D} \circ \langle f, g\rangle = \lambda x. ((c \circ f)(x), h(x))$. The constant \texttt{log}, for example, could have $\texttt{log}_\mathcal{D}((x, v)) = (\log x, \frac{v}{x})$.
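As a hedged illustration (function names are ours, not drawn from any particular AD library), the dual-number translations of a few primitives can be sketched in Python; note that, matching the macro's rule for conditionals, branching tests only the primal component:

```python
import math

# A dual number is a pair (primal, tangent).

def log_D(xv):
    x, v = xv
    return (math.log(x), v / x)      # d/dx log x = 1/x

def mul_D(ab, cd):
    a, b = ab
    c, d = cd
    return (a * c, a * d + b * c)    # product rule

def relu_D(xv):
    # relu is PAP: two analytic pieces glued along {x = 0};
    # as in the macro's rule for `if`, we branch on the primal part only.
    x, v = xv
    return (x, v) if x > 0 else (0.0, 0.0)
```

At $x = 0$, `relu_D` outputs tangent $0$, one of the valid intensional derivatives of `relu`.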
\textbf{Behavior / Correctness of AD.} For each type $\tau$, we define a family of relations $S^U_\tau \subseteq \mathcal{P}^U_\tau \times \mathcal{P}^U_{\mathcal{D}\llbracket\tau\rrbracket}$, indexed by the c-analytic sets $U$. The basic idea is that if $(f, g) \in S^U_\tau$, then $g$ is a correct ``intensional dual number representation'' of $f$. Since $S^U_\tau$ is a relation, there may be multiple such $g$'s, just as in \citet{lee2020correctness}'s definition a single PAP function may have multiple valid intensional derivatives.
For $\tau = \mathbb{R}^k$, we build directly on \citet{lee2020correctness}'s notion of intensional derivative: $f$ and $g$ are related in $S^U_{\mathbb{R}^k}$ if and only if, for all PAP functions $h : V \rightarrow U$ and intensional derivatives $h'$ of $h$, there is some intensional derivative $G$ of $f \circ h$ such that for all $v \in V$, $g((h(v), h'(v))) = ((f \circ h)(v), G(v))$. For other types, we use the Artin gluing approach of \citet{vakar2020denotational} to derive canonical definitions of $S_\tau$ for products and function types that make the following result possible:
\textbf{Proposition.} Suppose $(\phi, \phi') \in S^U_\tau$, and that $e : \sigma$ has a single free variable $x : \tau$. Then $(\llbracket e\rrbracket \circ \phi, \llbracket\mathcal{D}\llbracket e\rrbracket\rrbracket \circ \phi')$ is defined on the same c-analytic subset $V \subseteq U$, and restricted to $V$, is in $S^V_\sigma$.
Specializing to the case where $\tau$ and $\sigma$ are $\mathbb{R}^n$ and $\mathbb{R}^m$, this implies that AD gives sound intensional derivatives even when programs use recursion and higher-order functions. Intensional derivatives agree with derivatives almost everywhere, so this implies the result of~\citet{mazza2021automatic}.
\vspace{-4mm}
\section{Applications to Probabilistic Programming}
\label{sec:probabilistic}
\vspace{-3mm}
In this section, we briefly describe some applications of the $\omega$PAP semantics to \textit{probabilistic} languages. First, we give an example of applying our framework to reason about soundness for AD-powered PPL features. Second, we recover a recent result of~\citet{mak2021densities} via a novel denotational argument.
\textbf{AD for sound change-of-variables corrections.} Consider a PPL that represents primitive distributions by a pair of a \textit{sampler} $\mu$ and a \textit{density} $\rho$. Some systems support automatically generating a new sampler and density, $f_*\mu$ and $\rho_f$, for the \textit{pushforward} of $\mu$ by a user-specified deterministic bijection $f$. Such systems compute the density $\rho_f$ using a change-of-variables formula, which relies on $f$'s Jacobian~\citep{radul2021base}. We show in Appendix~\ref{sec:cov} that such algorithms are sound even when: (1) $f$ is not differentiable, but rather PAP; and (2) we use not $f$'s Jacobian but \textit{any} intensional Jacobian of $f$. This may be surprising, because intensional Jacobians can disagree with true Jacobians on Lebesgue-measure-zero sets of inputs, and the support of $\mu$ may lie entirely within such a set. Indeed, there are samplers $\mu$ and programs $f$ for which AD's Jacobians are wrong everywhere within $\mu$'s support. Our result shows that this does not matter: intensional Jacobians are ``wrong in the right ways,'' yielding correct densities $\rho_f$ even when the derivatives themselves are incorrect.
\textbf{Trace densities of probabilistic programs are $\omega$PAP.} Now consider extending our language with constructs for probabilistic programming: a type $\mathcal{M}\,\tau$ of $\tau$-valued probabilistic programs, and constructs $\mathbf{return}_\tau : \tau \rightarrow \mathcal{M}\,\tau$, $\mathbf{sample} : \mathcal{M} \mathbb{R}$, $\mathbf{score} : [0, \infty) \rightarrow \mathcal{M}\,\mathbf{1}$, and $\mathbf{do} \, \{x \gets t; s\} : \mathcal{M}\, \sigma$ (where $t : \mathcal{M}\,\tau$ and $s : \mathcal{M}\,\sigma$ in environments with $x : \tau$). Some PPLs use probabilistic programs only to specify density functions, for downstream use by inference algorithms like Hamiltonian Monte Carlo. To reason about such languages, we can interpret $\mathcal{M}\,\tau$ as comprising \textit{deterministic} functions computing values and densities of traces. More precisely, let $\llbracket\mathcal{M}\, \tau\rrbracket = \llbracket \sqcup_{i \in \mathbb{N}} \mathbb{R}^i \rightarrow \textbf{Maybe }\tau \times \mathbb{R} \times \sqcup_{i \in \mathbb{N}} \mathbb{R}^i \rrbracket$:%
\footnote{This definition includes two new types: $\textbf{Maybe }\,\tau$ and $\sqcup_{i \in \mathbb{N}}\mathbb{R}^i$. The $\textbf{Maybe }\,\tau$ type has as elements $\textbf{Just}\, x$, where $x \in \llbracket\tau\rrbracket$, and $\textbf{Nothing}$. We take $\textbf{Just}\,x \preceq \textbf{Just}\,y$ if $x \preceq y$, but $\textbf{Nothing}$ and $\textbf{Just}$ values are not comparable. A function $\phi$ is a plot in $\textbf{Maybe }\tau$ if $\phi^{-1}(\{\textbf{Just} \, x \mid x \in \llbracket\tau\rrbracket\})$ and $\phi^{-1}(\{\textbf{Nothing}\})$ are both c-analytic sets, restricted to each of which $\phi$ is a plot. Similarly, for the list type, a function $\phi$ is a plot if, for each length $i \in \mathbb{N}$, the preimage of lists of length $i$ is c-analytic in $U$, and the restriction of $\phi$ to each preimage is a plot in $\mathbb{R}^i$.}
the meaning of a probabilistic program is a function mapping lists of real-valued random samples, called traces, to: (1) the output values they possibly induce in $\llbracket \tau \rrbracket$, (2) a \textit{density} in $[0, \infty)$, and (3) a remainder of the trace, containing any samples not yet used. We can then define $\llbracket\textbf{return}_\tau\rrbracket = \llbracket\lambda x. \lambda\textit{trace}. (\textbf{Just }x, 1.0, \textit{trace})\rrbracket$, $\llbracket \textbf{sample}\rrbracket = \llbracket\lambda \textit{trace}. (\texttt{head}\, \textit{trace}, \texttt{if } (\texttt{length }\textit{trace} > 0)\, (1.0) \, (0.0), \texttt{tail}\, \textit{trace})\rrbracket$, $\llbracket \textbf{score}\rrbracket = \llbracket\lambda w. \lambda \textit{trace}. (\textbf{Just }\, \langle\rangle, w, \textit{trace})\rrbracket$, and $\llbracket\textbf{do }\{x \gets t; s\}\rrbracket = \llbracket\lambda \textit{trace}. \textbf{let }(x_?, w, r) = t\,\textit{trace}\textbf{ in } \texttt{case}_\textbf{Maybe}\, x_?\, (\textbf{Nothing}, 0.0, \textit{trace})\, (\lambda x. \textbf{ let } (y_?, v, u) = (s)\, r\, \textbf{ in } (y_?, w * v, u))\rrbracket$.%
\footnote{Depending on whether its first argument is $\textbf{Nothing}$, $\texttt{case}_\textbf{Maybe}$ either returns the default value passed as the second argument, or calls the third argument on the value wrapped inside the $\textbf{Just}$.}
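To make the interpretation concrete, here is a small hypothetical Python sketch (all names ours) of this trace semantics: a probabilistic program is a function from a trace (a list of reals) to a triple of an optional value, a density weight, and the unused remainder of the trace:

```python
def ret(x):
    return lambda trace: (("Just", x), 1.0, trace)

def sample(trace):
    # consume one random number; weight 0 if the trace is exhausted
    if len(trace) > 0:
        return (("Just", trace[0]), 1.0, trace[1:])
    return (("Nothing",), 0.0, trace)

def score(w):
    return lambda trace: (("Just", ()), w, trace)

def bind(t, f):
    # do { x <- t; f x }: thread the trace, multiply the weights
    def run(trace):
        x_opt, w, rest = t(trace)
        if x_opt[0] == "Nothing":
            return (("Nothing",), 0.0, trace)
        y_opt, v, rest2 = f(x_opt[1])(rest)
        return (y_opt, w * v, rest2)
    return run

def density(prog, trace):
    # density of a complete trace: zero if any samples are left over
    _, w, rest = prog(trace)
    return w if len(rest) == 0 else 0.0

# example: draw u, then score with 2u (the density of Beta(2,1) on [0,1])
prog = bind(sample, lambda u: score(2 * u))
```

The `density` helper implements the density function described below: traces with leftover randomness receive weight zero.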
Let $e : \mathcal{M}\, \tau$ be a closed probabilistic program. Suppose that on all but a Lebesgue-measure-zero set of traces, $\llbracket e\rrbracket(\textit{trace})$ is defined (i.e., not $\bot$\textemdash although the first component of the tuple it returns may be \textbf{Nothing}, e.g. if the input trace does not provide enough randomness to simulate the entire program).\footnote{This condition is implied by almost-sure termination, but is weaker in general. For example, there are probabilistic context-free grammars with infinite expected output lengths, i.e., without almost-sure termination. But considered as deterministic functions of traces (as we do in this section), these grammars halt on all inputs.} On traces where $\llbracket e\rrbracket$ is defined, the following \textit{density function} is also defined: $\llbracket \lambda \textit{trace}. \textbf{let }\, (x_?, w, r) = e\, \textit{trace} \textbf{ in if } (\texttt{length }\, r > 0)\,\,(0.0)\,\,(w)\rrbracket$. Furthermore, as a function in the language, this density function is $\omega$PAP. Therefore, excepting the measure-zero set on which it is undefined, for each trace length, the density function is PAP in the ordinary sense\textemdash and therefore almost-everywhere differentiable.
This result was recently proved using an operational semantics argument by \citet{mak2021densities}. The PAP perspective helps reason denotationally about such questions, and validates that AD on trace density functions in PPLs produces a.e.-correct derivatives.
\textbf{Future work.} Besides the ``trace-based'' approach described above, we are working on an extensional monad of measures similar to that of \citet{vakar2019domain}, but in the $\omega$PAP category. This could yield a semantics in which results from measure theory and the differential calculus can be combined, to establish the correctness of AD-powered PPL features like automated involutive MCMC~\citep{cusumano2020automating}. However, more work may be needed to account for variational inference, which uses gradients of expectations. In our current formulation of the measure monad, the real expectation operator $\mathbb{E}_\mathbb{R} : \mathcal{M\,\tau} \rightarrow (\tau \rightarrow \mathbb{R}) \rightarrow \mathbb{R}$ is not $\omega$PAP, and so we cannot reason in general about when the expectation of an $\omega$PAP function under a probabilistic program will be differentiable.
\bibliographystyle{plainnat}
\section{Introduction and the main theorem}
\noindent
Let $\mathcal{G}$ be the Lie group $SU(n,\mathbb{C})$ (the group of unitary matrices of determinant 1) and $g$ its Lie algebra $su(n,\mathbb{C})$ (the algebra of trace-free skew hermitian matrices) with Lie bracket $[X,Y] = XY-YX$ (the matrix commutator).
For given $A_{\alpha}: \mathbb{R}^{1+3} \rightarrow g $ we define the curvature $F=F[A]$ by
\begin{equation}
\label{curv}
F_{\mu \nu} = \partial_{\mu} A_{\nu} - \partial_{\nu} A_{\mu} + [A_{\mu},A_{\nu}] \, ,
\end{equation}
where $\mu,\nu \in \{0,1,2,3\}$ and $D_{\mu} = \partial_{\mu} + [A_{\mu}, \cdot \,]$ .
Then the Yang-Mills system is given by
\begin{equation}
\label{0}
D^{\mu} F_{\mu \nu} = 0
\end{equation}
in Minkowski space $\mathbb{R}^{1+3} = \mathbb{R}_t \times \mathbb{R}^3_x$ , with metric $diag(-1,1,1,1)$. Greek indices run over $\{0,1,2,3\}$, Latin indices over $\{1,2,3\}$, and the usual summation convention is used.
We use the notation $\partial_{\mu} = \frac{\partial}{\partial x_{\mu}}$, where we write $(x^0,x^1,x^2,x^3)=(t,x^1,x^2,x^3)$ and also $\partial_0 = \partial_t$.
This system is coupled with a Dirac spinor field $\psi: \R^{1+3} \to \mathbb{C}^4$ . Let $\{T_a\}$ be a set of generators
of $su(n,\mathbb{C})$ and $A_{\mu} = A^a_{\mu} T_a$ , $F_{\mu \nu} = F^a_{\mu \nu} T_a$ , $[T^\lambda,T^b]_a =: f^{ab \lambda}$ .
For the following considerations and also for the physical background we refer to the monograph by Matthew D. Schwartz \cite{Sz} . We also refer to the pioneering work for the Yang-Mills, Higgs and spinor field equations by Y. Choquet-Bruhat and D. Christodoulou \cite{CC} , and G. Schwarz and J. Sniatycki \cite{SS}.
The kinetic Lagrangian with $N$ Dirac fermions and the Yang-Mills Lagrangian are given by
$\mathcal{L} = \sum_{j=1}^N \bar{\psi}_j (i \gamma^{\mu} \partial_{\mu}-m)\psi_j $ and $\mathcal{L}_{YM} = - \frac{1}{4} (F^a_{\mu \nu})^2$ , respectively. Here $\bar{\psi} = \psi^{\dagger} \gamma^0$ , where $\psi^{\dagger}$ is the complex conjugate transpose of $\psi$ .
Here $\gamma^{\mu}$ are the $4 \times 4$ Dirac matrices given by
$ \, \, \gamma^0 = \left( \begin{array}{cc}
I & 0 \\
0 & -I \end{array} \right)\, \,$
, $\, \, \gamma^j = \left( \begin{array}{cc}
0 & \sigma^j \\
-\sigma^j & 0 \end{array} \right) \, \, $ , where $\, \, \sigma^1 = \left( \begin{array}{cc}
0 & 1 \\
1 & 0 \end{array} \right)$ ,
$ \sigma^2 = \left( \begin{array}{cc}
0 & -i \\
i & 0 \end{array} \right)$ ,
$ \sigma^3 = \left( \begin{array}{cc}
1 & 0 \\
0 & -1 \end{array} \right)$ .
Then we consider the following Lagrangian for the (minimally) coupled system
\begin{align*}
\mathcal{L}&=- \frac{1}{4} (F^a_{\mu \nu})^2 + \sum_{i,j=1}^N \bar{\psi}_i (\delta_{ij} i \gamma^{\mu} \partial_{\mu}+\gamma^{\mu}A^a_{\mu} T^a_{ij} -m \delta_{ij})\psi_j \\
& =-\frac{1}{4}(\partial_{\mu} A^a_{\nu}-\partial_{\nu} A^a_{\mu} + f^{abc} A^b_{\mu} A^c_{\nu})^2 +\sum_{i,j=1}^N \bar{\psi}_i (\delta_{ij} i \gamma^{\mu} \partial_{\mu}+\gamma^{\mu}A^a_{\mu} T^a_{ij} -m \delta_{ij})\psi_j \, .
\end{align*}
Here $T^a_{ij} \in \mathbb{C}$ are the entries of the matrix $T^a$ .
The corresponding equations of motion are given by the following coupled Yang-Mills-Dirac system (YMD)
\begin{align*}
\partial^{\mu} F^a_{\mu \nu} + f^{abc} A^b_{\mu} F^c_{\mu \nu} &= - \langle \psi_i,\gamma_0 \gamma_{\nu} T^a_{ij} \psi_j \rangle \\
(i \gamma^{\mu} \partial_{\mu} -m) \psi_i & = - A^a_{\mu} \gamma^{\mu} T^a_{ij} \psi_j \, .
\end{align*}
Using $D^{\mu} F_{\mu \nu} = \partial^{\mu} F_{\mu \nu} + [A^{\mu},F_{\mu \nu}]$ and
$$ [A^{\mu},F_{\mu \nu}]_a = [A^{\lambda}_{\mu} T_{\lambda},F^b_{\mu \nu} T_b]_a = A^{\lambda}_{\mu} F^b_{\mu \nu} [T_{\lambda},T_b]_a = A^{\lambda}_{\mu} F^b_{\mu \nu} f_{ab\lambda} $$
we obtain the following system which we intend to treat:
\begin{align}
\label{0.1}
D^{\mu} F_{\mu \nu} & = - \langle \psi^i,\alpha_{\nu} T^a_{ij} \psi^j \rangle T_a \, , \\
\label{0.2}
i \alpha^{\mu} \partial_{\mu} \psi_i & = -A^a_{\mu} \alpha^{\mu} T^a_{ij} \psi_j \, ,
\end{align}
if we choose $m=0$ just for simplicity and define the matrices $\alpha^{\mu} = \gamma^0 \gamma^{\mu}$ , so that $\, \,\alpha^0 = I_{4 \times 4}$ and $ \alpha^j = \left( \begin{array}{cc}
0 & \sigma^j \\
\sigma^j & 0 \end{array} \right)$.
$\alpha^{\mu}$ are Hermitian matrices with $(\alpha^{\mu})^2 = I_{4 \times 4}$ and $\alpha^j \alpha^k + \alpha^k \alpha^j = 0$ for $j \neq k$ .
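These algebraic relations are easy to check numerically; the following NumPy verification (ours, purely as a sanity check) confirms that the $\alpha^j$ are Hermitian, square to the identity, and pairwise anticommute:

```python
import numpy as np

# Pauli matrices
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
Z = np.zeros((2, 2), dtype=complex)

# alpha^j has sigma^j on the off-diagonal blocks
alphas = [np.block([[Z, s], [s, Z]]) for s in (s1, s2, s3)]
I4 = np.eye(4, dtype=complex)

hermitian = all(np.allclose(a, a.conj().T) for a in alphas)
squares_to_id = all(np.allclose(a @ a, I4) for a in alphas)
anticommute = all(
    np.allclose(alphas[j] @ alphas[k] + alphas[k] @ alphas[j], 0)
    for j in range(3) for k in range(3) if j != k
)
```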
Setting $\nu =0$ in (\ref{0.1}) we obtain the Gauss-law constraint
\begin{equation}
\nonumber
\partial^j F_{j 0} = -[A^j,F_{j0}] + \langle \psi^i,T^a_{ij} \psi^j \rangle T_a \,.
\end{equation}
The system is gauge invariant. Given a sufficiently smooth function $U: {\mathbb R}^{1+3} \rightarrow \mathcal{G}$ we define the gauge transformation $T$ by $T A_0 = A_0'$ ,
$T(A_1,A_2,A_3) = (A_1',A_2',A_3')$ , $T\psi = \psi'$ , where
\begin{align*}
A_{\alpha} & \longmapsto A_{\alpha}' = U A_{\alpha} U^{-1} - (\partial_{\alpha} U) U^{-1} \\
\psi & \longmapsto \psi' = U \psi \, .
\end{align*}
Following \cite{AFS1} and \cite{HO} in order to rewrite the Dirac equation we define the projections
$$\Pi(\xi) := \half(I_{4 \times 4} + \frac{\xi_j \alpha^j}{|\xi|})$$ and $\Pi_{\pm}(\xi):= \Pi(\pm \xi)$ , so that $\Pi_{\pm}(\xi)^2 = \Pi_{\pm}(\xi)$ , $\Pi_+(\xi) \Pi_-(\xi) =0 $ , $\Pi_+ (\xi) + \Pi_-(\xi) = I_{4 \times 4}$ , $\Pi_{\pm}(\xi) = \Pi_{\mp}(-\xi) $ .
We obtain
\begin{equation}
\label{2.6'}
\alpha^j \Pi(\xi) = \Pi(- \xi) \alpha^j + \frac{\xi_j}{|\xi|} I_{4 \times 4} \, .
\end{equation}
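For completeness, this identity follows directly from the algebraic relations above: since $\alpha^j \alpha^k + \alpha^k \alpha^j = 2\delta^{jk} I_{4 \times 4}$, writing out the projections gives

```latex
\alpha^j \Pi(\xi) - \Pi(-\xi)\, \alpha^j
= \frac{\xi_k}{2|\xi|} \left( \alpha^j \alpha^k + \alpha^k \alpha^j \right)
= \frac{\xi_k}{2|\xi|} \, 2 \delta^{jk} I_{4 \times 4}
= \frac{\xi_j}{|\xi|} \, I_{4 \times 4} \, .
```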
Using the notation $\Pi_{\pm} = \Pi_{\pm}(\frac{\nabla}{i})$ we obtain
\begin{equation}
\label{2.8}
-i\alpha^j \partial_j = |\nabla|\Pi_+ - |\nabla|\Pi_- \, ,
\end{equation}
where $|\nabla|$ has symbol $|\xi| $ . Moreover defining the modified Riesz transform by $R^j_{\pm} = \mp(\frac{\partial_j}{i|\nabla|}) $ with symbol $ \mp \frac{\xi_j}{|\xi|}$ and $R^0_{\pm} = -1$ the identity (\ref{2.6'}) implies
\begin{equation}
\label{2.7}
\alpha^j \Pi_{\pm} = (\alpha^j \Pi_{\pm})\Pi_{\pm} = \Pi_{\mp} \alpha^j \Pi_{\pm}- R^j_{\pm} \Pi_{\pm} \, , \, \alpha^0 \Pi_{\pm}= \Pi_{\pm} = \Pi_{\mp} \alpha^0 \Pi_{\pm} - R^0_{\pm} \Pi_{\pm} \, .
\end{equation}
If we define $\psi_{i,\pm} = \Pi_{\pm} \psi_i$ we obtain by applying the projection $\Pi_{\pm}$ and (\ref{2.8}) the Dirac type equation in the form
\begin{equation}
\label{0.3}
(i \partial_t \pm |\nabla|)\psi_{i,\pm} = \Pi_{\pm}(A^a_{\mu} \alpha^{\mu} T^a_{ij} \psi^j) =: H_{i,\pm}(A,\psi) \, .
\end{equation}
The Yang-Mills equation (\ref{0.1}) may be written as
$$\square A_{\nu} = \partial_{\nu} \partial^{\mu} A_{\mu}-[\partial^{\mu} A_{\mu},A_{\nu}] - [A_{\mu},\partial^{\mu} A_{\nu}] - [A^{\mu},F_{\mu \nu}] - \langle \psi_i,\alpha_{\nu} T^a_{ij} \psi^j \rangle T_a \, . $$
From now on we impose the temporal gauge
$$ A_0 = 0 \, . $$
This implies the wave equation
\begin{align}
\nonumber
\square A_j &= \partial_j div\, A - [div\, A,A_j] - [A_i,\partial^i A_j] - [A^i,F_{ij}]- \langle \psi^i,\alpha_j T^a_{ik} \psi^k \rangle T_a \\
\nonumber
&= \partial_j div\, A - [div\, A,A_j] - 2[A_i,\partial^i A_j] + [A^i,\partial_j A_i] - [A^i,[A_i,A_j]]\\
& \quad - \langle \psi^i,\alpha_j T^a_{ik} \psi^k \rangle T_a
\label{0.4}
\end{align}
and
$$
0 = \partial_t div \, A - [A_i,\partial_t A^i] - \langle \psi^i, T^a_{ik} \psi^k \rangle T_a \, . $$
Now we use the Hodge decomposition of $A=(A_1,A_2,A_3)$ into its divergence-free and curl-free parts:
$$A= A^{df} + A ^{cf} \, , $$
where
$$PA := A^{df} = |\nabla|^{-2} \nabla \times(\nabla \times A) \,
\Leftrightarrow \, A^{df}_j = R^k(R_j A_k-R_k A^j) $$
and
$$A^{cf} = -|\nabla|^{-2} \nabla(div \, A) \quad \Leftrightarrow \quad A ^{cf}_j =-R_j R_k A^k \, . $$
Here $R_j = \frac{\partial_j}{|\nabla|}$ is the Riesz transform. \\
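On the Fourier side, since $R_j$ has symbol $\frac{i \xi_j}{|\xi|}$, the two parts have symbols

```latex
\widehat{A^{cf}_j}(\xi) = \frac{\xi_j \xi_k}{|\xi|^2} \, \widehat{A_k}(\xi) \, , \qquad
\widehat{A^{df}_j}(\xi) = \widehat{A_j}(\xi) - \frac{\xi_j \xi_k}{|\xi|^2} \, \widehat{A_k}(\xi) \, ,
```

so that indeed $A^{df} + A^{cf} = A$ , contracting with $\xi^j$ gives $div \, A^{df} = 0$ , and $A^{cf}$ is a gradient.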
Then we obtain the following system:
\begin{align}
\label{6}
\partial_t A^{cf} & = |\nabla|^{-2} \nabla [A_i,\partial_t A^i] + |\nabla|^{-2} \nabla \langle \psi^i, T^a_{ik} \psi^k \rangle T_a \, , \\
\nonumber
\square A_j^{df}
&= -P [div\, A^{cf},A_j] - 2P[A_i,\partial^i A_j] + P[A^i,\partial_j A_i] - P[A^i,[A_i,A_j]] \\
\label{7}
& \quad - P \langle \psi^i,\alpha_j T^a_{ik} \psi^k \rangle T_a \\
\label{8}
(i \partial_t \pm |\nabla|)\psi_{i,\pm} & = \Pi_{\pm} (A^a_k \alpha^k T^a_{ij} \psi^j) \, .
\end{align}
We want to solve the system (\ref{6}),(\ref{7}),(\ref{8}) simultaneously for $A^{cf}$ , $A^{df}$ and $\psi_{\pm}$ .
So to pose the Cauchy problem for this system, we consider initial data for $(A^{df},A^{cf},\psi)$ at $t=0$:
\begin{equation}\label{Data}
\begin{split}
&A^{df}(0) = a_0^{df}, \, (\partial_t A^{df})(0) = a_1^{df},
\, A^{cf}(0) = a_0^{cf}\\
& \, \psi_{i,\pm}(0) = \psi_{i,\pm}^0 = \Pi_{\pm} \psi_0^i .
\end{split}
\end{equation}
Let us make some historical remarks. As is well-known, we may impose a gauge condition. Convenient gauges are the Coulomb gauge $\partial^j A_j=0$ , the Lorenz gauge $\partial^{\alpha}A_{\alpha} =0$ and the temporal gauge $A_0 =0$. For the low regularity well-posedness problem for the Yang-Mills equation it is well-known that a null structure of some of the nonlinear terms plays a crucial role. This was first detected by Klainerman and Machedon \cite{KM}, who proved global well-posedness in energy space in three space dimensions in temporal and in Coulomb gauge. The corresponding result in Lorenz gauge, where the Yang-Mills equation can be formulated as a system of nonlinear wave equations, was shown by Selberg and Tesfahun \cite{ST}, who discovered that also in this case some of the nonlinearities have a null structure. Tesfahun \cite{Te} improved the local well-posedness result to data without finite energy, namely $(A(0),(\partial_t A)(0)) \in H^s \times H^{s-1}$ and $(F(0),(\partial_t F)(0)) \in H^r \times H^{r-1}$ with $s > \frac{6}{7}$ and $r > -\frac{1}{14}$, by discovering an additional partial null structure.
Local well-posedness in energy space was also shown by Oh \cite{O} using a new gauge, namely the Yang-Mills heat flow. He was also able to show that this solution can be globally extended \cite{O1}. Tao \cite{T1} showed local well-posedness for small data in $H^s \times H^{s-1}$ for $ s > \frac{3}{4}$ in temporal gauge.
The coupled Yang-Mills-Dirac system in Lorenz gauge was considered from the physical point of view by M. D. Schwartz \cite{Sz}. Local existence for smooth initial data, uniqueness in suitable gauges under appropriate conditions on the data, and global existence for small and smooth data, i.e. $(A(0),(\partial_t A)(0),F(0),(\partial_t F)(0),\psi(0)) \in H^s \times H^{s-1} \times H^{s-1} \times H^{s-2} \times H^s$ with $s \ge 2$, were proven by Y. Choquet-Bruhat and D. Christodoulou \cite{CC}, and G. Schwarz and J. Sniatycki \cite{SS}.
In \cite{P1} the author considered this problem in Lorenz gauge and obtained local well-posedness for $s > \frac{3}{4}$ , $r >-\frac{1}{8}$ and $l > \frac{3}{8}$ , where existence holds in $ A \in C^0([0,T],H^s) \cap C^1([0,T],H^{s-1}) \, , \, F \in C^0([0,T],H^r) \cap C^1([0,T],H^{r-1}) $, $\psi \in C^0([0,T],H^l)$ and (existence and) uniqueness in a certain subspace of Bourgain-Klainerman-Machedon type $X^{s,b}$ . We relied on Selberg-Tesfahun \cite{ST} and Tesfahun's result \cite{Te}, who detected the null structure in most - unfortunately not all - critical nonlinear terms. We also made use of the methods used by Huh and Oh \cite{HO} for the Chern-Simons-Dirac equation.
We now study the Yang-Mills-Dirac system in temporal gauge for low regularity data which fulfill the following smallness assumption:
$$ \|A(0)\|_{H^s} + \|(\partial_t A)(0)\|_{H^{s-1}} + \|\psi(0)\|_{H^l} < \epsilon $$
with a sufficiently small $\epsilon > 0$ , under the assumption $s> \frac{3}{4}$ and $l > \frac{1}{4}$ . We obtain a solution which satisfies $A \in C^0([0,1],H^s) \cap C^1([0,1],H^{s-1})$, $\psi \in C^0([0,1],H^l)$.
Uniqueness holds in spaces of Bourgain-Klainerman-Machedon type.
Thus the condition on the parameter $l$ can be weakened compared to the result in Lorenz gauge, at the expense of a smallness assumption on the data.
The basis for our results is the paper of Tao \cite{T1}, who considered the corresponding problem for the Yang-Mills equation. We carry over his result to the more general Yang-Mills-Dirac system. The result relies on the null structure of all the critical bilinear terms. We review this null structure, which was partly detected already by Klainerman-Machedon \cite{KM1} in the situation of the Lorenz gauge. The necessary estimates in spaces of $X^{s,b}$-type for those nonlinear terms which contain no terms depending on $\psi$ then reduce essentially to Tao's result \cite{T1}. One of these estimates is responsible for the small data assumption. Because these local well-posedness results can initially only be shown under the condition that the curl-free part $A^{cf}$ of $A$ (as defined above) vanishes for $t=0$, we have to show that this assumption can be removed by a suitable gauge transformation (Lemma \ref{Lemma}) which preserves the regularity of the solution. This uses an idea of Keel and Tao \cite{T1}. A proof for the Yang-Mills and Yang-Mills-Higgs case was given in \cite{P1}.\\[0.5em]
Our main theorem reads as follows:
\begin{theorem}
\label{Theorem1.1}
Let $s > \frac{3}{4}$ , $l > \frac{1}{4}$ , $s \ge l \ge s-1$ , $2s-l > 1$ and $l-s \ge -\half$. Let $a_0 \in H^s({\mathbb R}^3)$ , $a_1 \in H^{s-1}({\mathbb R}^3)$ , $\psi_0 \in H^l({\mathbb R}^3)$ be given satisfying the Gauss law constraint $\partial^j a^1_j = -[a_0^j,a^1_j] - \langle \psi^i_0,T^a_{ij} \psi^j_0 \rangle T_a$. Assume
$$ \|a_0\|_{H^s} + \|a_1\|_{H^{s-1}} + \|\psi_0\|_{H^l} \le \epsilon \, , $$
where $\epsilon > 0$ is sufficiently small. Then the Yang-Mills-Dirac equations (\ref{0.1}) , (\ref{0.2}) in temporal gauge $A_0=0$ with initial conditions
$$ A(0)=a_0 \, , \, (\partial_t A)(0) = a_1 \, , \, \psi(0)=\psi_0 \,,$$
where $A=(A_1,A_2,A_3)$,
has a unique local solution $A= A_+^{df} + A_-^{df} +A^{cf}$ and $\psi = \psi_+ + \psi_-$ , where
$$ A^{df}_{\pm} \in X^{s,\frac{3}{4}+}_{\pm}[0,1] , A^{cf} \in X^{s+\frac{1}{4},\frac{1}{2}+}_{\tau=0}[0,1] , \partial_t A^{cf} \in C^0([0,1],H^{s-1}) ,
\psi_{\pm} \in X^{l,\half+}_{\pm}[0,1] \, , $$
where these spaces are defined below. This solution fulfills
$$ A \in C^0([0,1],H^s({\mathbb R}^3)) \cap C^1([0,1],H^{s-1}({\mathbb R}^3)) \, , \, \psi \in C^0([0,1],H^l({\mathbb R}^3)) \, .$$
\end{theorem}
\begin{Def}
\label{Def.1.2}
The standard spaces $X^{s,b}_{\pm}$ of Bourgain-Klainerman-Machedon type belonging to the half waves are the completion of the Schwartz space $\mathcal{S}({\mathbb R}^4)$ with respect to the norm
$$ \|u\|_{X^{s,b}_{\pm}} = \| \langle \xi \rangle^s \langle \tau \mp |\xi| \rangle^b \widehat{u}(\tau,\xi) \|_{L^2_{\tau \xi}} \, . $$
Similarly we define the wave-Sobolev spaces $X^{s,b}_{|\tau|=|\xi|}$ with norm
$$ \|u\|_{X^{s,b}_{|\tau|=|\xi|}} = \| \langle \xi \rangle^s \langle |\tau| - |\xi| \rangle^b \widehat{u}(\tau,\xi) \|_{L^2_{\tau \xi}} $$ and also $X^{s,b}_{\tau =0}$ with norm
$$\|u\|_{X^{s,b}_{\tau=0}} = \| \langle \xi \rangle^s \langle \tau \rangle^b \widehat{u}(\tau,\xi) \|_{L^2_{\tau \xi}} \, .$$
We also define $X^{s,b}_{\pm}[0,T]$ as the space of the restrictions of functions in $X^{s,b}_{\pm}$ to $[0,T] \times \mathbb{R}^3$ and similarly $X^{s,b}_{|\tau| = |\xi|}[0,T]$ and $X^{s,b}_{\tau =0}[0,T]$. We frequently use the estimates $\|u\|_{X^{s,b}_{\pm}} \le \|u\|_{X^{s,b}_{|\tau|=|\xi|}}$ for $b \le 0$ and the reverse estimate for $b \ge 0$.
\end{Def}
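The norm comparisons at the end of the definition follow from the pointwise symbol inequality

```latex
\big| |\tau| - |\xi| \big| \le \big| \tau \mp |\xi| \big|
\quad \Longrightarrow \quad
\langle |\tau| - |\xi| \rangle \le \langle \tau \mp |\xi| \rangle \, ,
```

so that the weight $\langle \tau \mp |\xi| \rangle^b$ dominates $\langle |\tau| - |\xi| \rangle^b$ for $b \ge 0$, and conversely for $b \le 0$.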
We recall the fact that
$$
X^{s,b}_\pm [0,T] \hookrightarrow C^0([0,T];H^s) \quad \text{for} \ b > \half.
$$
We use the following notation:
let $\langle \nabla \rangle^{\alpha}$ , $D^{\alpha} = |\nabla|^{\alpha}$ and $D_{-}^{\alpha}$ be the multipliers with symbols $\langle\xi \rangle^\alpha$ , $\abs{\xi}^\alpha$ and $||\tau|-|\xi||^\alpha$ , respectively, where $\langle \cdot \rangle = (1+|\cdot|^2)^{\half}$ . Finally, $a\pm$ and $a\pm\pm$ are short for $a\pm\epsilon$ and $a\pm 2\epsilon$ , respectively, for a sufficiently small $\epsilon >0$ .
\section{Preliminaries}
The following product estimates for wave-Sobolev spaces were proven in \cite{AFS}.
\begin{prop}
\label{Prop.1.2'}
For $s_0,s_1,s_2,b_0,b_1,b_2 \in {\mathbb R}$ and $u,v \in {\mathcal S} ({\mathbb R}^{3+1})$ the estimate
$$\|uv\|_{H^{-s_0,-b_0}} \lesssim \|u\|_{H^{s_1,b_1}} \|v\|_{H^{s_2,b_2}} $$
holds, provided the following conditions are satisfied:
\begin{align*}
& b_0 + b_1 + b_2 > \frac{1}{2} \, , \qquad
b_0 + b_1 \ge 0 \, , \qquad
b_0 + b_2 \ge 0 \, , \qquad
b_1 + b_2 \ge 0
\end{align*}
\begin{align*}
\nonumber
&s_0+s_1+s_2 > 2 -(b_0+b_1+b_2) \\
\nonumber
&s_0+s_1+s_2 > \frac{3}{2} -\min(b_0+b_1,b_0+b_2,b_1+b_2) \\
\nonumber
&s_0+s_1+s_2 > 1 - \min(b_0,b_1,b_2) \\
\nonumber
&s_0+s_1+s_2 > 1 \\
&(s_0 + b_0) +2s_1 + 2s_2 > \frac{3}{2} \\
\nonumber
&2s_0+(s_1+b_1)+2s_2 > \frac{3}{2} \\
\nonumber
&2s_0+2s_1+(s_2+b_2) > \frac{3}{2}
\end{align*}
\begin{align*}
\nonumber
&s_1 + s_2 \ge \max(0,-b_0) \, ,\quad
\nonumber
s_0 + s_2 \ge \max(0,-b_1) \, ,\quad
\nonumber
s_0 + s_1 \ge \max(0,-b_2) \, .
\end{align*}
\end{prop}
\begin{prop}[Null form estimates, \cite{ST} ]
\label{Prop.1.2}
Let $\sigma_0,\sigma_1,\sigma_2,\beta_0,\beta_1,\beta_2 \in \R$. Assume that
\begin{equation*}
\left\{
\begin{aligned}
& 0 \le \beta_0 < \frac12 < \beta_1,\beta_2 < 1,
\\
& \sum \sigma_i + \beta_0 > \frac32 - (\beta_0 + \sigma_1 + \sigma_2),
\\
& \sum \sigma_i > \frac32 - (\sigma_0 + \beta_1 + \sigma_2),
\\
& \sum \sigma_i > \frac32 - (\sigma_0 + \sigma_1 + \beta_2),
\\
&\sum \sigma_i + \beta_0 \ge 1,
\\
& \min(\sigma_0 + \sigma_1, \sigma_0 + \sigma_2, \beta_0 + \sigma_1 + \sigma_2) \ge 0,
\end{aligned}
\right.
\end{equation*}
and that the last two inequalities are not both equalities. Let
\begin{align}
\nonumber
&{\mathcal F}(B_{\pm_1,\pm_2} (\psi_{1_{\pm_1}}, \psi_{2_{\pm_2}}))(\tau_0,\xi_0) \\
\label{2}
& := \int_{\tau_1+\tau_2= \tau_0 \, , \, \xi_1+\xi_2=\xi_0} |\angle(\pm_1 \xi_1,\pm_2 \xi_2)| \widehat{\psi_{1_{\pm_1}}}(\tau_1,\xi_1) \widehat{\psi_{2_{\pm_2}}}(\tau_2,\xi_2) d\tau_1 d\xi_1 \, .
\end{align}
Then we have the null form estimate
$$
\norm{B_{\pm_1,\pm_2}(u,v)}_{H^{-\sigma_0,-\beta_0}}
\lesssim
\norm{u}_{X^{\sigma_1,\beta_1}_{\pm_1}} \norm{v}_{X^{\sigma_2,\beta_2}_{\pm_2}}\, .
$$
\end{prop}
The following multiplication law is well-known:
\begin{prop} {\bf (Sobolev multiplication law)}
\label{SML}
Let $s_0,s_1,s_2 \in \R$ . Assume
$s_0+s_1+s_2 > \frac{3}{2}$ , $s_0+s_1 \ge 0$ , $s_0+s_2 \ge 0$ , $s_1+s_2 \ge 0$. Then the following product estimate holds:
$$ \|uv\|_{H^{-s_0}} \lesssim \|u\|_{H^{s_1}} \|v\|_{H^{s_2}} \, .$$
\end{prop}
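For orientation, a frequently used special case is $s_0=0$ , $s_1=s_2=s$ with $s > \frac{3}{4}$ : then $s_0+s_1+s_2 = 2s > \frac{3}{2}$ and all pairwise sums $s_i+s_j$ are nonnegative, so that
$$ \|uv\|_{L^2} \lesssim \|u\|_{H^s} \|v\|_{H^s} \, . $$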
\begin{prop}
\label{Prop.2}
\begin{enumerate}
\item For $2 < q \le \infty $ , $ 2 \le r < \infty$ , $ \frac{1}{q} = \frac{1}{2}-\frac{1}{r}$ , $ \mu = 3(\frac{1}{2}-\frac{1}{r})-\frac{1}{q}$ the following estimate holds:
\begin{equation}
\label{15}
\|u\|_{L^q_t L^r_x} \lesssim \|u\|_{X^{\mu,\frac{1}{2}+}_{|\tau|=|\xi|}} \, .
\end{equation}
\item For $k \ge 0$ , $ p < \infty$ and $ \frac{1}{4} \ge \frac{1}{p} \ge \frac{1}{4} - \frac{k}{3}$ the following estimate holds:
\begin{equation}
\label{Tao}
\|u\|_{ L^p_x L^2_t} \lesssim \|u\|_{X^{k+\frac{1}{4},\frac{1}{2}+}_{|\tau|=|\xi|}} \, .
\end{equation}
\end{enumerate}
\end{prop}
\begin{proof}
(\ref{15}) is a Strichartz type estimate, which can be found, e.g., in \cite{GV}, Prop. 2.1, combined with the transfer principle (cf. e.g. \cite{KS}, Prop. 3.5).
Concerning (\ref{Tao}) we refer to \cite{T}, Prop. 4.1, or use \cite{KMBT}, Thm. B.2:
$$ \|\mathcal{F}_t u \|_{L^2_{\tau} L_x^4} \lesssim \|u_0\|_{\dot{H}^{\frac{1}{4}}} \, , $$
if $u=e^{it |\nabla|} u_0$ and $\mathcal{F}_t$ denotes the Fourier transform with respect to time. This immediately implies by Plancherel, Minkowski's inequality and Sobolev's embedding theorem
$$\|u\|_{L^p_x L^2_t} = \|\mathcal{F}_t u \|_{L^p_x L^2_\tau} \le \|\mathcal{F}_t u \|_{L^2_{\tau} L^p_x} \lesssim \|\mathcal{F}_t u \|_{L^2_{\tau} H^{k,4}_x} \lesssim \|u_0\|_{H^{k+\frac{1}{4}}} \, . $$
The transfer principle implies (\ref{Tao}).
\end{proof}
\section{Preliminary local well-posedness}
Defining
\begin{align*}
A^{df}_{\pm} = \frac{1}{2}(A^{df} \mp i \langle \nabla \rangle^{-1} \partial_t A^{df}) & \Longleftrightarrow A^{df} = A^{df}_+ + A_-^{df} \, , \, \partial_t A^{df} = i \langle \nabla \rangle(A^{df}_+ - A^{df}_-)
\end{align*}
we can rewrite (\ref{7}) as
\begin{align}
\label{7'}
(i \partial_t \pm \langle \nabla \rangle)A_{j,\pm}^{df} & = \mp 2^{-1} \langle \nabla \rangle^{-1} ( \text{R.H.S. of } (\ref{7}) - A^{df}_j)
\end{align}
with initial data
\begin{align}
\label{1.15*'}
A^{df}_\pm(0) & = \frac{1}{2}(A^{df}(0) \mp i \langle \nabla \rangle^{-1} (\partial_t A^{df})(0)) \, .
\end{align}
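Note that the relations defining $A^{df}_\pm$ immediately give back the original Cauchy data: evaluating the equivalences above at $t=0$ yields
$$ A^{df}_+(0) + A^{df}_-(0) = A^{df}(0) \, , \qquad i \langle \nabla \rangle \big( A^{df}_+(0) - A^{df}_-(0) \big) = (\partial_t A^{df})(0) \, . $$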
We now state and prove a preliminary local well-posedness result for (\ref{6}),(\ref{7}), (\ref{8}), for which it is essential to have data for $A$ with vanishing curl-free part.
\begin{prop}
\label{Prop}
Assume $s>\frac{3}{4}$ , $l > \frac{1}{4}$ , $s \ge l \ge s-1$ , $2s-l > 1$ and $l-s \ge -\half$.
Let
$a_0^{df} \in H^s$ , $a_1^{df} \in H^{s-1}$ , $\psi_0 \in H^l$ be given satisfying the Gauss law constraint $\partial^j a_j^1 = -\partial^j a_j^1+ \langle \psi^i_0,T^a_{ij} \psi^j_0 \rangle T_a$ (necessary by (\ref{0.1}) with $\nu=0$) with
$$ \|a_0^{df}\|_{H^s} + \|a_1^{df}\|_{H^{s-1}} + \|\psi_0 \|_{H^l} \le \epsilon_0$$
where $\epsilon_0 >0$ is sufficiently small. Then the system (\ref{6}), (\ref{7}), (\ref{8}) with initial conditions
$$ A^{df}(0)=a_0^{df} \, , \, (\partial_t A^{df})(0) = {a_1}^{df} \, , \, A^{cf}(0) = 0 \, , \, \psi(0)= \psi_0$$
has a unique local solution
$$ A= A^{df}_+ + A^{df}_- + A^{cf} \, , \, \psi = \psi_+ + \psi_- \, , $$
where
$$ A^{df}_{\pm} \in X^{s,\frac{3}{4}+}_{\pm}[0,1] \, , \, A^{cf} \in X^{s+\frac{1}{4},\frac{1}{2}+}_{\tau=0}[0,1] \, , \, \partial_t A^{cf} \in C^0([0,1],H^{s-1}) \, , \, \psi_{\pm} \in X^{l,\half+}_{\pm}[0,1] .$$
Uniqueness holds (of course) for not necessarily vanishing initial data $A^{cf}(0) = a^{cf}$. The solution satisfies
$$ A \in C^0([0,1],H^s) \cap C^1([0,1],H^{s-1}) \, , \, \psi \in C^0([0,1],H^l) \, .$$
\end{prop}
We want to use a contraction argument for $A_{\pm}^{df} \in X_{\pm}^{s,\frac{3}{4}+\epsilon}[0,1]$ , $A^{cf} \in X^{s+\frac{1}{4},\frac{1}{2}+\epsilon}_{\tau=0}[0,1]$ , $\partial_t A^{cf} \in C^0([0,1],H^{s-1})$ and $\psi_{\pm} \in X_{\pm}^{l,\half+ \epsilon}[0,1]$ .
Provided that our small data assumption holds this can be reduced by well-known arguments to suitable multilinear estimates of the right hand sides of these equations.
For (\ref{7'}), e.g., we make use of the following well-known estimate:
$$
\|A^{df}_{\pm}\|_{X^{s,b}_{\pm}[0,1]} \lesssim \|A^{df}_{\pm}(0)\|_{H^s} + \| \text{R.H.S. of } (\ref{7'}) \|_{X^{s,b-1}_{\pm}[0,1]} \, , $$
which holds for $s\in{\mathbb R}$ , $\frac{1}{2} < b \le 1$ .
For (\ref{6}) we make use of the estimate:
$$
\|A^{cf}\|_{X^{s+\frac{1}{4},b}_{\tau=0}[0,1]} \lesssim \| \text{R.H.S. of } (\ref{6}) \|_{X^{s+\frac{1}{4},b-1}_{\tau=0}[0,1]} \, . $$
Here it is essential that no term containing $A^{cf}(0)$ appears on the right-hand side, because we do not want to assume $A^{cf}(0) \in H^{s+\frac{1}{4}}$ . Therefore in a first step we assume $A^{cf}(0)=0$ , an assumption which we later remove by an application of a suitable gauge transform. We are forced to admit the same parameter $b$ on both sides of the latter inequality at one point, which prevents a large data result.
We now show that all the critical terms in (\ref{6}), (\ref{7}) and (\ref{8}), namely the quadratic terms which contain only $A^{df}$ or $\psi_{\pm}$ have null structure. Those quadratic terms which contain $A^{cf}$ are less critical, because $A^{cf}$ is shown to be more regular than $A^{df}$, and the cubic terms are also less critical, because they contain no derivatives.
What we have to prove are estimates for the right hand sides of (\ref{6}), (\ref{7'}) and (\ref{8}).
First we consider the terms which do not contain $A^{cf}$ .
For the right hand side of (\ref{7'}) we have to prove the following estimates:
\begin{align}
\label{7.1}
\|P[A_{i,\pm_1}^{df},\partial^i A^{df}_{j,\pm_2}]\|_{H^{s-1,-\frac{1}{4}++}} & \lesssim \|A_{i,\pm_1}^{df}\|_{X^{s,\frac{3}{4}+}_{\pm_1}} \|A_{j,\pm_2}^{df}\|_{X^{s-1,\frac{3}{4}+}_{\pm_2}} \\
\label{7.2}
\|P[A_{\pm_1}^{df,i},\partial^j A^{df}_{i,\pm_2}]\|_{H^{s-1,-\frac{1}{4}++}} & \lesssim \|A_{i,\pm_1}^{df}\|_{X^{s,\frac{3}{4}+}_{\pm_1}} \|A_{i,\pm_2}^{df}\|_{X^{s-1,\frac{3}{4}+}_{\pm_2}} \\
\label{7.3}
\|P \langle \psi_{1,\pm_1},\alpha_j \psi_{2,\pm_2} \rangle \|_{X^{s-1,-\frac{1}{4}++}_{\pm_0}} & \lesssim \|\psi_{1,\pm_1}\|_{X^{l,\half+}_{\pm_1}}\|\psi_{2,\pm_2}\|_{X^{l,\half+}_{\pm_2}}
\end{align}
Concerning the right-hand side of (\ref{8}), ignoring the irrelevant term $T^a_{ij}$ , we have to prove
\begin{equation}
\label{8.1}
\|\Pi_{\pm_0} (A_{k,df}^{\pm} \alpha^k \psi) \|_{X^{l,-\half++}_{\pm_1}} \lesssim \|\psi\|_{X^{l,\half+}_{\pm_1}} \|A_{k,df}^{\pm} \|_{X^{s,\frac{3}{4}+}_{\pm}} \, .
\end{equation}
In order to control $A^{cf}$ in (\ref{6}) we need
\begin{align}
\label{16}
\| |\nabla|^{-1} (\phi_1 \partial_t \phi_2)\|_{X^{s+\frac{1}{4},-\frac{1}{2}+\epsilon+}_{\tau=0}} &\lesssim \|\phi_1\|_{X^{s,\frac{3}{4}+\epsilon}_{|\tau|=|\xi|}} \|\phi_2\|_{X^{s,\frac{3}{4}+\epsilon}_{|\tau|=|\xi|}} \\
\label{17}
\| |\nabla|^{-1} (\phi_1 \partial_t \phi_2)\|_{X^{s+\frac{1}{4},-\frac{1}{2}+2\epsilon-}_{\tau=0}} &\lesssim \|\phi_1\|_{X^{s+\frac{1}{4},\frac{1}{2}+\epsilon}_{\tau=0}} \|\phi_2\|_{X^{s+\frac{1}{4},\frac{1}{2}+\epsilon}_{\tau=0}} \\
\label{18}
\| |\nabla|^{-1} (\phi_1 \partial_t \phi_2)\|_{X^{s+\frac{1}{4},-\frac{1}{2}+\epsilon}_{\tau=0}} &+ \| |\nabla|^{-1} (\phi_2 \partial_t \phi_1)\|_{X^{s+\frac{1}{4},-\frac{1}{2}+\epsilon}_{\tau=0}} \\ \nonumber
&\lesssim \|\phi_1\|_{X^{s+\frac{1}{4},\frac{1}{2}+\epsilon}_{\tau=0}} \|\phi_2\|_{X^{s,\frac{3}{4}+\epsilon}_{|\tau|=|\xi|}} \, , \\
\label{17a}
\| |\nabla|^{-1} (\psi_1 \psi_2)\|_{X^{s+\frac{1}{4},-\half+\epsilon+}_{\tau =0}} & \lesssim \|\psi_1\|_{X^{l,\half+\epsilon}_{|\tau|=|\xi|}}\|\psi_2\|_{X^{l,\half+\epsilon}_{|\tau|=|\xi|}} \, .
\end{align}
In order to control $\partial_t A^{cf}$ we need
\begin{align}
\label{19}
\| |\nabla|^{-1} (A_1 \partial_t A_2)\|_{C^0(H^{s-1})} \lesssim &
(\|A_1^{cf}\|_{X^{s+\frac{1}{4},\frac{1}{2}+}_{\tau=0}} + \sum_{\pm}
\|A^{df}_{1\pm}\|_{X^{s,\frac{1}{2}+}_{\pm}})\\
\nonumber
& \cdot (\|\partial_t
A^{cf}_2\|_{C^0(H^{s-1})} + \sum_{\pm} \|A^{df}_{2\pm}\|_{X^{s,\frac{1}{2}+}_{\pm}})
\end{align}
and
\begin{equation}
\label{19a}
\| |\nabla|^{-2} \nabla \langle \psi_i, T^a_{ij} \psi_j \rangle T_a \|_{C^0(H^{s-1})} \lesssim \sum_{\pm_1} \|\psi_{i,\pm_1}\|_{X^{l,\half+}_{\pm_1}} \sum_{\pm_2} \|\psi_{j,\pm_2}\|_{X^{l,\half+}_{\pm_2}} \, .
\end{equation}
Concerning (\ref{0.2}) it remains to consider the terms which contain a factor $A^{cf}$. We need
\begin{equation}
\label{29}
\| \nabla A^{cf} A^{df} \|_{X^{s-1,-\frac{1}{4}+2\epsilon}_{|\tau|=|\xi|}} +
\| A^{cf} \nabla A^{df} \|_{X^{s-1,-\frac{1}{4}+2\epsilon}_{|\tau|=|\xi|}} \lesssim \|A^{cf}\|_{X^{s+\frac{1}{4},\frac{1}{2}+\epsilon}_{\tau =0}} \|A^{df}\|_{X^{s,\frac{3}{4}+\epsilon}_{|\tau|=|\xi|}}
\end{equation}
and
\begin{equation}
\label{30}
\| \nabla A^{cf} A^{cf} \|_{X^{s-1,-\frac{1}{4}+2\epsilon}_{|\tau|=|\xi|}} \lesssim \|A^{cf}\|_{X^{s+\frac{1}{4},\frac{1}{2}+\epsilon}_{\tau =0}}^2 \, .
\end{equation}
All the cubic terms are estimated by
\begin{equation}
\label{31}
\| A_1 A_2 A_3 \|_{X^{s-1,-\frac{1}{4}+2\epsilon}_{|\tau|=|\xi|}} \lesssim \prod_{i=1}^3 \min(\|A_i\|_{X^{s,\frac{3}{4}+\epsilon}_{|\tau|=|\xi|}},\|A_i\|_{X^{s+\frac{1}{4},\frac{1}{2}+\epsilon}_{\tau =0}} ) \, .
\end{equation}
Concerning (\ref{0.3}) it remains to prove the following estimate:
\begin{equation}
\label{43}
\|A^{cf} \psi\|_{X^{l,-\half++}_{|\tau|=|\xi|}} \lesssim \|A^{cf}\|_{X^{s+\frac{1}{4},\half+}_{\tau =0}} \|\psi\|_{X^{l,\half+}_{|\tau|=|\xi|}} \, .
\end{equation}
\begin{proof}[Proof of (\ref{7.1})]
We conclude
\begin{align}
\nonumber
&[A^{df}_i,\partial^i A^{df}] = [R^k(R_i A_k - R_k A_i),\partial^i A^{df}] \\
\nonumber
&= \frac{1}{2} \big([R^k(R_i A_k - R_k A_i),\partial^i A^{df}] + [R^i(R_k A_i - R_i A_k),\partial^k A^{df}]\big) \\
\nonumber
&=\frac{1}{2} \big([R^k(R_i A_k - R_k A_i),\partial^i A^{df}] - [R^i(R_i A_k - R_k A_i),\partial^k A^{df}]\big) \\
\label{50}
&= \frac{1}{2} Q^{ik} [ |\nabla|^{-1}(R_i A_k - R_k A_i),A^{df}] \,,
\end{align}
where
$$ Q_{ij}[u,v] := [\partial_i u,\partial_jv] - [\partial_j u,\partial_i v] = Q_{ij}(u,v) + Q_{ji}(v,u) $$
with the standard null form
$$ Q_{ij}(u,v) := \partial_i u \partial_j v - \partial_j u \partial_i v \, . $$
Thus, ignoring $P$, which is a bounded operator, we obtain
\begin{equation}
\label{N2}
P[A_i^{df},\partial^i A^{df}] \sim \sum Q_{ik}[|\nabla|^{-1} A^{df},A^{df}] \, .
\end{equation}
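The reason such null forms are favorable can be seen on the Fourier side: the symbol of $Q_{ij}$ satisfies
$$ |\xi_i \eta_j - \xi_j \eta_i| \le |\xi \times \eta| = |\xi| |\eta| \sin \angle(\xi,\eta) \le |\xi| |\eta| \angle(\xi,\eta) \, , $$
so that $Q_{ij}(u,v)$ degenerates when the interacting frequencies are (nearly) parallel, which is precisely the regime where products of free waves concentrate. This angular gain is what the bilinear estimates of Prop. \ref{Prop.1.2} exploit.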
It is well-known (cf. e.g. \cite{ST}) that the bilinear form $Q^{jk}_{\pm_{1}, \pm_{2}}$ , defined by
\begin{align*}
& Q^{jk}_{\pm_{1}, \pm_{2}} (\phi_{1_{\pm_{1}}}, \phi_{2_{ \pm_{2}}})
:= R^j_{\pm_{1}} \phi_{1_{\pm_{1}}} R^k_{\pm_{2}} \phi_{2_{ \pm_{2}}} - R^k_{\pm_{1}} \phi_{1_{ \pm_{1}}} R^j_{\pm_{2}} \phi_{2_{ \pm_{2}}} \, ,
\end{align*}
similarly to the standard null form $Q_{jk}$ , which is defined by replacing the modified Riesz transforms $R^k_{\pm}$ by $\partial^k$, fulfills the following estimate:
$$ Q^{jk}_{\pm_{1}, \pm_{2}} (\phi_{1_{ \pm_{1}}}, \phi_{2_{ \pm_{2}}}) \precsim B_{\pm_1,\pm_2}(\phi_{1_{\pm_1}},\phi_{2_{\pm_2}} )\, , $$
where $u \precsim v$ means $|\widehat{u}| \lesssim |\widehat{v}|$ .
We have to prove
$$ \|B_{\pm_1,\pm_2}(u,v) \|_{H^{s-1,-\frac{1}{4}+}} \lesssim \|u\|_{X^{s,\frac{3}{4}+}_{\pm_1}} \|v\|_{X^{s-1,\frac{3}{4}+}_{\pm_2}} \,. $$
This is implied by Prop. \ref{Prop.1.2} with parameters $\sigma_0=1-s$ , $\sigma_1 =s$ , $\sigma_2=s-1$ , $\beta_0=\frac{1}{4}-$ , $\beta_1=\beta_2=\frac{3}{4}+$ , provided $s >\frac{3}{4}$ .
\end{proof}
\begin{proof}[Proof of (\ref{7.2})]
\begin{align*}
(P(A_i^{df} \nabla A_i^{df}))_j & = R^k(R_j(A_i^{df} \partial_k A_i^{df}) - R_k( A_i^{df} \partial_j A_i^{df})) \\
& = |\nabla|^{-2} \partial^k(\partial_j ( A_i^{df} \partial_k A_i^{df}) - \partial_k ( A_i^{df} \partial_j A_i^{df})) \\
& = |\nabla|^{-2} \partial^k(\partial_j A_i^{df} \partial_k A_i^{df} - \partial_k A_i^{df} \partial_j A_i^{df}) \\
& = |\nabla|^{-2} \partial^k Q_{jk}( A_i^{df}, A_i^{df})
\end{align*}
so that
\begin{equation}
\label{N3}
P[ A_i^{df},\nabla A_i^{df}] \sim \sum |\nabla|^{-1} Q_{jk} [ A^{df}, A^{df}] \, .
\end{equation}
We have to prove
$$\| Q_{jk}(u,v) \|_{H^{s-2,-\frac{1}{4}+}} \lesssim \|u\|_{H^{s,\frac{3}{4}+}} \|v\|_{H^{s,\frac{3}{4}+}} \, . $$
We use the estimate
$$ Q_{jk}(u,v) \precsim D^{\half} D_-^{\half} (D^{\half} u D^{\half} v) + D^{\half}(D^{\half} D_-^{\half} u D^{\half} v) + D^{\half}(D^{\half} u D^{\half} D_-^{\half} v) \, , $$
which was proven by \cite{KM2}, Prop. 1.
This reduces the proof to the estimates
\begin{align*}
\|uv\|_{H^{s-\frac{3}{2},\frac{1}{4}+}} & \lesssim \|u\|_{H^{s-\half,\frac{3}{4}+}} \|v\|_{H^{s-\half,\frac{3}{4}+}} \, , \\
\|uv\|_{H^{s-\frac{3}{2},-\frac{1}{4}+}} & \lesssim \|u\|_{H^{s-\half,\frac{1}{4}+}} \|v\|_{H^{s-\half,\frac{3}{4}+}} \, .
\end{align*}
Both are implied by Prop. \ref{Prop.1.2'} with parameters $s_0= \frac{3}{2}-s$ , $s_1=s_2= s-\half$ , and $b_0=-\frac{1}{4}-$ , $b_1=b_2=\frac{3}{4}+$ for the first one and $b_0= \frac{1}{4}-$ , $b_1=\frac{1}{4}+$ , $b_2=\frac{3}{4}+$ for the second one, which both require the assumption $ s >\frac {3}{4}$ .
\end{proof}
\begin{proof}[Proof of (\ref{7.3})]
Using the definition of $P$ and ignoring the irrelevant term $T^a$ we have to prove
\begin{equation}
\nonumber
\| R^k(R_j \langle \psi_1,\alpha_k \psi_2 \rangle - R_k \langle \psi_1,\alpha_j \psi_2 \rangle) \|_{X_{\pm_0}^{s-1,-\frac{1}{4}++}} \lesssim \|\psi_1\|_{X^{l,\half+}_{\pm_1}} \|\psi_2\|_{X^{l,\half+}_{\pm_2}} \, .
\end{equation}
By (\ref{2.7}) we obtain:
\begin{align*}
& R^k(R_j \langle \psi_1,\alpha_k \psi_2 \rangle - R_k \langle \psi_1,\alpha_j \psi_2 \rangle)\\
&= \sum_{\pm_1,\pm_2}R^k (R_j \langle \psi_{1_{\pm_1}}, \alpha_k \Pi_{\pm_2} \psi_{2_{\pm_2}} \rangle - R_k \langle \psi_{1_{\pm_1}}, \alpha_j \Pi_{\pm_2} \psi_{2_{\pm_2}} \rangle) \\
& = \sum_{\pm_1,\pm_2} R^k (R_j \langle \psi_{1_{\pm_1}}, \Pi_{\mp_2}(\alpha_k \psi_{2_{\pm_2}}) \rangle - R_k \langle \psi_{1_{\pm_1}}, \Pi_{\mp_2}(\alpha_j \psi_{2_{\pm_2}}) \rangle)\\
& \hspace{1em} - \sum_{\pm_1,\pm_2} R^k(R_j \langle \psi_{1_{\pm_1}}, R^k_{\pm_2} \psi_{2_{\pm_2}} \rangle - R_k \langle \psi_{1_{\pm_1}}, R^j_{\pm_2} \psi_{2_{\pm_2}} \rangle)\\
&= I + II \, .
\end{align*}
Both terms are null forms.
Concerning I we consider each term separately and remark that at this point $R_k$ and $R_j$ are irrelevant. We obtain
\begin{align}
\label{30a}
&{\mathcal F}(\langle \Pi_{\pm_1} \psi_{1_{\pm_1}},\Pi_{\mp_2}\alpha_k \psi_{2_{\pm_2}}\rangle) (\tau_0,\xi_0) \\
\nonumber
&= \int_{\tau_1+\tau_2=\tau_0 \, , \, \xi_1 + \xi_2= \xi_0} \langle \Pi(\pm_1 \xi_1) \widehat{\psi_{1_{\pm_1}}} (\tau_1,\xi_1),\Pi(\mp_2 \xi_2)\alpha_k \Pi(\pm_2 \xi_2) \widehat{\psi_{2_{\pm_2}}}(\tau_2,\xi_2)\rangle d\tau_1 d \xi_1 \\ \nonumber
&= \int_{\tau_1+\tau_2=\tau_0 \, , \, \xi_1 + \xi_2= \xi_0} \langle \widehat{\psi_{1_{\pm_1}}} (\tau_1,\xi_1),\Pi(\pm_1 \xi_1)\Pi(\mp_2 \xi_2)\alpha_k \Pi(\pm_2 \xi_2) \widehat{\psi_{2_{\pm_2}}}(\tau_2,\xi_2)\rangle d\tau_1 d \xi_1 .
\end{align}
Now we use the estimate ("spinorial null structure")
$$ | \Pi(\pm \xi_1) \Pi(\mp \xi_2) z| \lesssim |z| \angle (\pm \xi_1,\pm \xi_2) $$
proven by \cite{AFS}, Lemma 2. This implies
$$ I \lesssim B_{\pm_1,\pm_2} (\psi_{1_{\pm_1}}, \psi_{2_{\pm_2}}) \, , $$
where $ B_{\pm_1,\pm_2}$ is defined by (\ref{2}).
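Let us briefly recall the idea behind this spinorial null structure (the rigorous statement is \cite{AFS}, Lemma 2): since $\Pi(\pm \xi_1) \Pi(\mp \xi_1) = 0$ and $\xi \mapsto \Pi(\xi)$ is Lipschitz on the unit sphere, we may write
$$ \Pi(\pm \xi_1) \Pi(\mp \xi_2) = \Pi(\pm \xi_1) \big( \Pi(\mp \xi_2) - \Pi(\mp \xi_1) \big) \quad \text{with} \quad \| \Pi(\mp \xi_2) - \Pi(\mp \xi_1) \| \lesssim \angle(\pm \xi_1,\pm \xi_2) \, , $$
which yields the angular factor in (\ref{2}).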
We have to prove
\begin{equation}
\label{3}
\| B_{\pm_1,\pm_2} (\psi_{1_{\pm_1}}, \psi_{2_{\pm_2}}) \|_{X_{\pm_0}^{s-1,-\frac{1}{4}++}} \lesssim \|\psi_{1_{\pm_1}} \|_{X^{l,\half+}_{\pm_1}} \|\psi_{2_{\pm_2}} \|_{X^{l,\half+}_{\pm_2}} \, .
\end{equation}
We apply Prop. \ref{Prop.1.2} with parameters $\sigma_0=1-s$ , $\sigma_1=\sigma_2=l$ , $\beta_0=\frac{1}{4}-$ , $\beta_1=\beta_2=\half+$ . This requires $2l-s > - \frac{1}{4}$ , $4l-s >0$ and $3l-2s > -1$ , which follows from our assumptions.
Next we obtain
$$ II \precsim \sum_{\pm_1,\pm_2} (R_{j,\pm_0} \langle \psi_{1_{\pm_1}}, R^k_{\pm_2} \psi_{2_{\pm_2}} \rangle - R_{k,\pm_0} \langle \psi_{1_{\pm_1}}, R^j_{\pm_2} \psi_{2_{\pm_2}} \rangle) \, . $$
By duality we have to prove
\begin{align*}
&\left|\int\left(\langle \psi_{1_{\pm_1}},R^k_{\pm_2} \psi_{2_{\pm_2}} \rangle R^j_{\pm_0} \psi_{0_{\pm_0}} - \langle \psi_{1_{\pm_1}},R^j_{\pm_2} \psi_{2_{\pm_2}} \rangle R^k_{\pm_0} \psi_{0_{\pm_0}} \right) dx dt \right| \\
& \hspace{1em}
\lesssim \|\psi_{1_{\pm_1}}\|_{X^{l,\half+}_{\pm_1}} \|\psi_{2_{\pm_2}}\|_{X^{l,\half+}_{\pm_2}} \|\psi_{0_{\pm_0}}\|_{X^{-s+1,\frac{3}{4}--}_{\pm_0}} \, .
\end{align*}
We remark that the left hand side possesses a $Q^{jk}$-type null form between $\psi_{2_{\pm_2}}$ and $\psi_{0_{\pm_0}}$ . We have to prove
\begin{equation}
\label{4}
\|B_{\pm_2,\pm_0}(\psi_{2_{\pm_2}},\psi_{0_{\pm_0}}) \|_{X^{-l,-\half-}_{\pm_1}} \lesssim \|\psi_{2_{\pm_2}}\|_{X^{l,\half+}_{\pm_2}} \|\psi_{0_{\pm_0}}\|_{X^{-s+1,\frac{3}{4}--}_{\pm_0}} \, .
\end{equation}
We apply Prop. \ref{Prop.1.2} with $\sigma_0=\sigma_1=l$ , $\sigma_2 = 1-s$, $\beta_0=\beta_1=\half+$ , $ \beta_2 = \frac{3}{4}-$ , which requires $3l-2s >-1$ and $4l-s > 0$ as before.
\end{proof}
\begin{proof}[Proof of (\ref{8.1})]
Using (\ref{2.7}) we obtain
\begin{align*}
\Pi_{\pm_0} (A_{k,df}^{\pm_2} \alpha^k \psi) & = \sum_{\pm} \Pi_{\pm_0} (A_{k,df}^{\pm_2} \alpha^k \Pi_{\pm} \psi) \\
& = \sum_{\pm} \Pi_{\pm_0} (A_{k,df}^{\pm_2} \Pi_{\mp}(\alpha^k \Pi_{\pm} \psi)) - \sum_{\pm} \Pi_{\pm_0} (A_{k,df}^{\pm_2} R^k_{\pm}\psi_{\pm}) = I + II \, .
\end{align*}
Concerning I we have to prove by duality
\begin{align*}
& | \int \int \langle \Pi_{\pm_0} (A_{k,df}^{\pm_2} \Pi_{\mp_1}(\alpha^k \psi_{\pm_1})),\psi_{0_{\pm_0}} \rangle dx \, dt |\\
& \lesssim \|A_{k,df}^{\pm_2}\|_{X^{s,\frac{3}{4}+}_{\pm_2}} \|\psi_{\pm_1}\|_{X^{l,\half+}_{\pm_1}} \|\Pi_{\pm_0} \psi_{0_{\pm_0}}\|_{X^{-l,\half-}_{\pm_0}} \, .
\end{align*}
The left-hand side equals
\begin{align*}
&| \int \int A_{k,df}^{\pm_2} \langle \Pi_{\mp_1}(\alpha^k \psi_{\pm_1}),\Pi_{\pm_0} \psi_{0_{\pm_0}} \rangle dx \, dt | \\
&= | \int \int A_{k,df}^{\pm_2} \langle \Pi_{\pm_0} \Pi_{\mp_1}(\alpha^k \psi_{\pm_1}),\psi_{0_{\pm_0}} \rangle dx \, dt | \,.
\end{align*}
It contains a spinorial null form between $\psi_{\pm_1}$ and $\psi_{0_{\pm_0}}$ as in (\ref{30a}), so that it remains to prove
\begin{equation}
\label{71}
\|B_{\pm_1,\pm_0}(\psi_{\pm_1},\psi_{0_{\pm_0}})\|_{X^{-s,-\frac{3}{4}-}_{\pm_2}} \lesssim \|\psi_{\pm_1}\|_{X^{l,\half+}_{\pm_1}} \|\psi_{0_{\pm_0}}\|_{X_{\pm_0}^{-l,\half--}} \, .
\end{equation}
We apply Prop. \ref{Prop.1.2} with parameters $\sigma_0 = s$ , $\sigma_1=l$ , $\sigma_2 = -l$ , $\beta_0=\frac{3}{4}+$, $\beta_1= \half+$, $\beta_2= \half--$ . We need $2s-l > 1$ and $s >\frac{3}{4}$ .
Concerning II we remark that
\begin{align*}
A^{df} &= |\nabla|^{-2} \nabla \times (\nabla \times A) = |\nabla|^{-2} \nabla \times (\nabla \times A^{df}) + |\nabla|^{-2} \nabla \times (\nabla \times A^{cf}) \\ &=|\nabla|^{-2} \nabla \times (\nabla \times A^{df}) \, .
\end{align*}
This implies
$$A_{l}^{df} R^l_{\pm_1} \psi_{\pm_1} = \epsilon^{lkm} \partial_k w_m R^l_{\pm_1} \psi_{\pm_1} = (\nabla w_m \times \frac{\nabla}{|\nabla|} \psi_{\pm_1})^m \, , $$
where $\epsilon^{lkm}$ denotes the Levi-Civita symbol with $\epsilon^{123} = 1$ and $w= |\nabla|^{-2} \nabla \times A^{df}$ , so that $\partial_j w_m = |\nabla|^{-2} \partial_j \partial_k A^{df}_l \epsilon^{lkm}$ . This is a $Q_{ij}$-type null form between $w_m$ and $ |\nabla|^{-1} \psi_{\pm_1}$ , so that we have to prove
$$\|B_{\pm_1,\pm_2}(A^{df,\pm_2}_l ,\psi_{\pm_1})\|_{X^{l,-\half++}_{\pm}} \lesssim \|A^{df,\pm_2}_l \|_{X^{s,\frac{3}{4}+}_{\pm_2}} \|\psi_{\pm_1}\|_{X^{l,\half+}_{\pm_1}} \, . $$
This is implied by Prop. \ref{Prop.1.2} with parameters $\sigma_0 = -l$ , $\sigma_1=l$ , $\sigma_2 = s$ , $\beta_0=\half --$, $\beta_1= \half+$ , $\beta_2= \frac{3}{4}+$ , if $2s-l > 1$ and $s >\frac{3}{4}$ .
\end{proof}
The estimates (\ref{16})-(\ref{18}) have been essentially given by Tao \cite{T1}. For the sake of completeness we give the details. We remark that it is especially (\ref{18}) which prevents a large data result, because it seems to be difficult to replace $X^{s+\frac{1}{4},-\frac{1}{2}+\epsilon}_{\tau=0}$ by $X^{s+\frac{1}{4},-\frac{1}{2}+\epsilon+}_{\tau=0}$ on the left hand side.\\
\begin{proof}[Proof of (\ref{17})]
As usual the singularity of $|\nabla|^{-1}$ is harmless in dimension 3 (\cite{T}, Cor. 8.2) and it can be replaced by $\langle \nabla \rangle^{-1}$. Taking care of the time derivative we reduce to
\begin{align*}
\big|\int \int u_1 u_2 u_3 dx dt\big| \lesssim \|u_1\|_{X^{s+\frac{1}{4},\frac{1}{2}+\epsilon}_{\tau =0}}
\|u_2\|_{X^{s+\frac{1}{4},-\frac{1}{2}+\epsilon}_{\tau =0}}
\|u_3\|_{X^{\frac{3}{4} -s,\frac{1}{2}-2\epsilon+}_{\tau =0}} \, ,
\end{align*}
which follows from Sobolev's multiplication rule, because $s>\frac{1}{4}$ .
\end{proof}
\begin{proof}[Proof of (\ref{18})]
a. If $\widehat{\phi}$ is supported in $ ||\tau|-|\xi|| \gtrsim |\xi| $ , we obtain
$$ \|\phi\|_{X^{s+\frac{1}{4},\frac{1}{2}+\epsilon}_{\tau=0}} \lesssim \|\phi\|_{X^{s,\frac{3}{4}+\epsilon}_{|\tau|=|\xi|}} \,. $$
Thus (\ref{18}) follows from (\ref{17}).\\
b. It remains to show
$$ \big|\int\int (uv_t w + uvw_t) dxdt \big| \lesssim
\|u\|_{X^{\frac{3}{4}-s,\frac{1}{2}-\epsilon}_{\tau =0}}
\|w\|_{X^{s,\frac{3}{4}+\epsilon}_{|\tau| =|\xi|}}
\|v\|_{X^{s+\frac{1}{4}-\epsilon,\frac{1}{2}+\epsilon}_{\tau =0}} \, $$
whenever $\widehat w$ is supported in $||\tau|-|\xi|| \ll |\xi|$.
This is equivalent to
$$ \int_* m(\xi_1,\xi_2,\xi_3,\tau_1,\tau_2,\tau_3) \prod_{i=1}^3 \widehat{u}_i(\xi_i,\tau_i) d\xi d\tau \lesssim \prod_{i=1}^3 \|u_i\|_{L^2_{xt}} \, $$
where $d\xi = d\xi_1 d\xi_2 d\xi_3$ , $d\tau = d\tau_1 d\tau_2 d\tau_3$ and * denotes integration over $\sum_{i=1}^3 \xi_i = \sum_{i=1}^3 \tau_i = 0$. We may assume without loss of generality that the Fourier transforms are nonnegative. Here
$$ m= \frac{(|\tau_2|+|\tau_3|) \chi_{||\tau_3|-|\xi_3|| \ll |\xi_3|}}{\langle \xi_1 \rangle^{\frac{3}{4}-s} \langle \tau_1 \rangle^{\frac{1}{2}-\epsilon} \langle \xi_2 \rangle^{s+\frac{1}{4}-\epsilon} \langle \tau_2 \rangle^{\frac{1}{2}+\epsilon} \langle \xi_3 \rangle^s \langle |\tau_3|-|\xi_3|\rangle^{\frac{3}{4}+\epsilon}} \, . $$
Since $\langle \tau_3 \rangle \sim \langle \xi_3 \rangle$ and $\tau_1+\tau_2+\tau_3=0$ we have
\begin{equation}
\label{N4'}
|\tau_2| + |\tau_3| \lesssim \langle \tau_1 \rangle^{\frac{1}{2}-\epsilon} \langle \tau_2 \rangle^{\frac{1}{2}+\epsilon} +\langle \tau_1 \rangle^{\frac{1}{2}-\epsilon} \langle \xi_3 \rangle^{\frac{1}{2}+\epsilon} +\langle \tau_2 \rangle^{\frac{1}{2}+\epsilon} \langle \xi_3 \rangle^{\frac{1}{2}-\epsilon} ,
\end{equation}
so that concerning the first term on the right hand side of (\ref{N4'}) we have to show
$$\big|\int\int uvw dx dt\big| \lesssim \|u\|_{X^{\frac{3}{4}-s,0}_{\tau=0}} \|v\|_{X^{s+\frac{1}{4}-\epsilon,0}_{\tau=0}} \|w\|_{X^{s,\frac{3}{4}+\epsilon}_{|\tau|=|\xi|}} \ , $$
which follows from Sobolev's multiplication rule, because $s> \half$. This is sharp with respect to the time derivative. As a consequence we need the smallness assumption on the data for local existence.\\
Concerning the second term on the right hand side of (\ref{N4'}) we use $\langle \xi_1 \rangle^{s-\frac{3}{4}} \lesssim \langle \xi_2 \rangle^{s-\frac{3}{4}} + \langle \xi_3 \rangle^{s-\frac{3}{4}}$, so that we reduce to
\begin{equation}
\label{51}
\big|\int\int uvw dx dt\big|
\lesssim\|u\|_{X^{0,0}_{\tau=0}} \|v\|_{X^{1-\epsilon,\frac{1}{2}+\epsilon}_{\tau=0}} \|w\|_{X^{s-\frac{1}{2}-\epsilon,\frac{3}{4}+\epsilon}_{|\tau|=|\xi|}}
\end{equation}
and
\begin{equation}
\label{52}
\big|\int\int uvw dx dt\big|
\lesssim\|u\|_{X^{0,0}_{\tau=0}}
\|v\|_{X^{s+\frac{1}{4}-\epsilon,\frac{1}{2}+\epsilon}_{\tau=0}} \|w\|_{X^{\frac{1}{4}-\epsilon,\frac{3}{4}+\epsilon}_{|\tau|=|\xi|}} \,.
\end{equation}
(\ref{51}) is implied by Sobolev and (\ref{Tao}) as follows:
$$ \big| \int\int uvw dx dt \big| \le \|u\|_{L^2_xL^2_t} \|v\|_{L^4_x L^{\infty}_t} \|w\|_{L^4_xL^2_t}
\lesssim \|u\|_{X^{0,0}_{\tau =0}} \|v\|_{X^{1-\epsilon,\frac{1}{2}+}_{\tau = 0}} \|w\|_{X^{\frac{1}{4},\frac{1}{2}+}_{|\tau|=|\xi|}} \, . $$
For (\ref{52}) we obtain
\begin{align*}
\big| \int\int uvw dx dt \big| & \le \|u\|_{L^2_x L^2_t} \|v\|_{L^q_x L^{\infty}_t} \|w\|_{L^p_x L^2_t} \,
\end{align*}
where $\frac{1}{q}=\frac{1}{4}-\epsilon$ and $\frac{1}{p} =\frac{1}{4}+\epsilon$ . For $s >\frac{3}{4}$ we obtain by Sobolev $$\|v\|_{L^q_x L^{\infty}_t} \lesssim\|v\|_{X^{s+\frac{1}{4}-\epsilon,\frac{1}{2}+\epsilon}_{\tau=0}} \, .$$
Interpolation between (\ref{Tao}) $\|w\|_{L^4_x L^2_t} \lesssim \|w\|_{X_{|\tau|=|\xi|}^{\frac{1}{4},\half+}}$ and the trivial identity $\|w\|_{L^2_x L^2_t} = \|w\|_{X^{0,0}_{|\tau|=|\xi|}}$ implies
$$\|w\|_{L^p_x L^2_t} \lesssim \|w\|_{X^{\frac{1}{4}-\epsilon,\half+}_{|\tau|=|\xi|}} \, $$
so that (\ref{52}) follows.
Concerning the last term on the right hand side of (\ref{N4'}) we use
$\langle \xi_1 \rangle^{s-\frac{3}{4}} \lesssim \langle \xi_2 \rangle^{s-\frac{3}{4}} + \langle \xi_3 \rangle^{s-\frac{3}{4}}$ so that we reduce to
\begin{equation}
\label{53}
\big|\int\int uvw dx dt\big|
\lesssim \|u\|_{X^{0,\frac{1}{2}-\epsilon}_{\tau=0}}
\|v\|_{X^{1-\epsilon,0}_{\tau=0}}
\|w\|_{X^{s-\frac{1}{2}+\epsilon,\frac{3}{4}+\epsilon}_{|\tau|=|\xi|}}
\end{equation}
and
\begin{equation}
\label{54}
\big|\int\int uvw dx dt\big|
\lesssim \|u\|_{X^{0,\frac{1}{2}-\epsilon}_{\tau=0}}
\|v\|_{X^{s+\frac{1}{4}-\epsilon,0}_{\tau=0}} \|w\|_{X^{\frac{1}{4}+\epsilon,\frac{3}{4}+\epsilon}_{|\tau|=|\xi|}} \,.
\end{equation}
We estimate as follows:
\begin{align}
\nonumber
\big| \int\int uvw dx dt \big| &\lesssim \|u\|_{L^2_x L^{\frac{1}{\epsilon}-}_t} \|v\|_{L^{p}_x L^2_t} \|w\|_{L^{r}_x L^{q}_t} \\
\label{54'}
&\lesssim \|u\|_{X^{0,\frac{1}{2}-\epsilon}_{\tau=0}}
\|v\|_{X^{1-\epsilon,0}_{\tau=0}}
\|w\|_{X^{\frac{1}{6}+\epsilon+,\frac{1}{2}+}_{|\tau|=|\xi|}} \, ,
\end{align}
which would be sufficient for (\ref{53}) and (\ref{54}) under our assumption $ s > \frac{3}{4} $ . For the proof of (\ref{54'}) we choose $\frac{1}{q}=\half-\epsilon+$ , $\frac{1}{p}=\frac{1}{6}+\frac{\epsilon}{3}$ and $\frac{1}{r}=\frac{1}{3}-\frac{\epsilon}{3}$ so that $\|u\|_{L^2_x L^{\frac{1}{\epsilon}-}_t} \lesssim \|u\|_{X^{0,\half-\epsilon}}$ and by Sobolev $\|v\|_{L^{p}_x L^2_t} \lesssim \|v\|_{X^{1-\epsilon,0}_{\tau=0}}$ . Moreover interpolation between (\ref{Tao}) $\|w\|_{L^4_x L^2_t} \lesssim \|w\|_{X^{\frac{1}{4},\half+}_{|\tau|=|\xi|}}$ and the trivial identity $\|w\|_{L^2_x L^2_t} = \|w\|_{X^{0,0}_{|\tau|=|\xi|}}$ implies
$$\|w\|_{L^r_x L^2_t} \lesssim \|w\|_{X^{\half-\frac{1}{r},\half+}_{|\tau|=|\xi|}} \, . $$
Interpolation between Strichartz' inequality (\ref{15}) $\|w\|_{L^4_{xt}} \lesssim \|w\|_{X^{\half,\half+}_{|\tau|=|\xi|}}$ and the trivial identity gives
$$\|w\|_{L^r_x L^r_t} \lesssim \|w\|_{X^{1-\frac{2}{r},\half+}_{|\tau|=|\xi|}} \,, $$
so that another interpolation between these estimates implies the following bound for the last factor
$$\|w\|_{L^r_x L^q_t} \lesssim \|w\|_{X^{\frac{1}{6}+\epsilon+,\half+}_{|\tau|=|\xi|}} \, , $$
as one easily checks. This implies (\ref{54'}).
\end{proof}
\begin{proof}[Proof of (\ref{16})]
If $\widehat{\phi}$ is supported in $||\tau|-|\xi|| \gtrsim |\xi|$ we obtain
$$\|\phi\|_{X^{s+\frac{1}{4},\frac{1}{2}+\epsilon}_{\tau =0}}
\lesssim \|\phi\|_{X^{s,\frac{3}{4}+\epsilon}_{|\tau|=|\xi|}} \, , $$
which implies that (\ref{16}) follows from (\ref{18}), if $\widehat{\phi}_1$ or $\widehat{\phi}_2$ has this support property. So we may assume that both functions are supported in $||\tau|-|\xi|| \ll |\xi|$. This means that it suffices to show
$$ \int_* m(\xi_1,\xi_2,\xi_3,\tau_1,\tau_2,\tau_3) \prod_{i=1}^3 \widehat{u}_i(\xi_i,\tau_i) d\xi d\tau \lesssim \prod_{i=1}^3 \|u_i\|_{L^2_{xt}} \, , $$
where
$$m= \frac{|\tau_3|\chi_{||\tau_2|-|\xi_2|| \ll |\xi_2|} \chi_{||\tau_3|-|\xi_3|| \ll |\xi_3|}}{\langle \xi_1 \rangle^{\frac{3}{4}-s} \langle \tau_1 \rangle^{\frac{1}{2}-\epsilon-} \langle \xi_2 \rangle^s \langle |\tau_2|-|\xi_2| \rangle^{\frac{3}{4}+\epsilon} \langle \xi_3 \rangle^s \langle |\tau_3|-|\xi_3|\rangle^{\frac{3}{4}+\epsilon}} \, . $$
Since $\langle \tau_3 \rangle \sim \langle \xi_3 \rangle$ , $\langle \tau_2 \rangle \sim \langle \xi_2 \rangle$ and $\tau_1+\tau_2+\tau_3=0$ we have
\begin{equation}
|\tau_3| \lesssim \langle \tau_1 \rangle^{\frac{1}{2}-\epsilon-} \langle \xi_3 \rangle^{\frac{1}{2}+\epsilon+} +\langle \xi_2 \rangle^{\frac{1}{2}-\epsilon-} \langle \xi_3 \rangle^{\frac{1}{2}+\epsilon+} \, .
\end{equation}
Concerning the first term on the right hand side we have to show
$$\big|\int \int uvw dx dt\big| \lesssim \|u\|_{X^{\frac{3}{4}-s,0}_{\tau=0}} \|v\|_{X^{s,\frac{3}{4}+\epsilon}_{|\tau|=|\xi|}} \|w\|_{X^{s-\frac{1}{2}-\epsilon-,\frac{3}{4}+\epsilon}_{|\tau|=|\xi|}} \, .$$
We use Prop. \ref{Prop.1.2'} , which shows
$$\|vw\|_{L^2_t H^{s-\frac{3}{4}}_x} \lesssim \|v\|_{X^{s,\frac{1}{2}+}_{|\tau|=|\xi|}} \|w\|_{X^{s-\frac{1}{2}-\epsilon-,\frac{1}{2}+}_{|\tau|=|\xi|}} $$
under the assumption $s > \frac{3}{4}$. \\
Concerning the second term on the right hand side we use $\langle \xi_1 \rangle^{s-\frac{3}{4}} \lesssim \langle \xi_2 \rangle^{s-\frac{3}{4}} + \langle \xi_3 \rangle^{s-\frac{3}{4}}$ , so that we reduce to
$$
\big|\int\int uvw dx dt\big|
\lesssim \|u\|_{X^{0,\frac{1}{2}-\epsilon-}_{\tau=0}}
\|v\|_{X^{\frac{1}{4}+\epsilon+,\frac{3}{4}+\epsilon}_{|\tau|=|\xi|}}
\|w\|_{X^{s-\frac{1}{2}-\epsilon-,\frac{3}{4}+\epsilon}_{|\tau|=|\xi|}}
$$
and
$$
\big|\int\int uvw dx dt\big|
\lesssim \|u\|_{X^{0,\frac{1}{2}-\epsilon-}_{\tau=0}}
\|v\|_{X^{s-\frac{1}{2}+\epsilon+,\frac{3}{4}+\epsilon}_{|\tau|=|\xi|}} \|w\|_{X^{\frac{1}{4}-\epsilon-,\frac{3}{4}+\epsilon}_{|\tau|=|\xi|}} \, .
$$
We obtain
\begin{align*}
\big|\int\int uvw dx dt\big| & \lesssim \|u\|_{L^2_x L^{\frac{1}{2\epsilon}-}_t} \|v\|_{L^{r+}_x L^{q+}_t} \|w\|_{L^{p-}_x L^2_t} \\
&
\lesssim \|u\|_{X^{0,\half-2\epsilon+}_{\tau=0}}
\|v\|_{X^{\frac{1}{4}+5\epsilon+,\half+}_{|\tau|=|\xi|}} \|w\|_{X^{\frac{1}{4}-\epsilon-,\half+}_{|\tau|=|\xi|}} \, ,
\end{align*}
which implies both estimates for $s>\frac{3}{4}$ and $\epsilon>0$ sufficiently small. Here we choose $\frac{1}{r}= \frac{1}{4}-\epsilon$ , $\frac{1}{q}= \half-2\epsilon$ and $\frac{1}{p}=\frac{1}{4}+\epsilon$ . For the first factor we interpolated between $\|u\|_{L^2_x L^{\infty}_t} \lesssim \|u\|_{X^{0,\half+}_{|\tau|=|\xi|}}$ and the trivial identity $\|u\|_{L^2_{xt}} = \|u\|_{X^{0,0}_{|\tau|=|\xi|}}$, for the second factor between (\ref{Tao}) $\|v\|_{L^4_x L^2_t} \lesssim \|v\|_{X^{\frac{1}{4},\half+}_{|\tau|=|\xi|}}$ and the Sobolev type inequality $\|v\|_{L^{\infty}_{xt}} \lesssim \|v\|_{X^{\frac{3}{2}+,\half+}_{|\tau|=|\xi|}}$ , and for the last factor between (\ref{Tao}) and the trivial identity, both with interpolation parameter $\theta = 1-4\epsilon-$ .
\end{proof}
\begin{proof}[Proof of (\ref{17a})]
By the fractional Leibniz rule we have to prove
$$\|uv\|_{X^{0,-\half+\epsilon+}_{\tau=0}} \lesssim \|u\|_{X^{l-s+\frac{3}{4},\half+}_{|\tau|=|\xi|}}
\|v\|_{X^{l,\half+}_{|\tau|=|\xi|}} \, , $$
which is equivalent to
$$| \int \int uvw \, dx\, dt| \lesssim \|u\|_{X^{l-s+\frac{3}{4},\half+}_{|\tau|=|\xi|}}
\|v\|_{X^{l,\half+}_{|\tau|=|\xi|}} \|w\|_{X^{0,\half-\epsilon-}_{\tau=0}} \, . $$
By H\"older's inequality we obtain:
$$| \int \int uvw \, dx\, dt| \lesssim \|u\|_{L^4_x L^2_t}
\|v\|_{L^4_x L^{\frac{2}{1-2\epsilon}+}_t} \|w\|_{L^2_x L^{\frac{1}{\epsilon}-}_t} \, . $$
The first factor is estimated by (\ref{Tao}) using the assumption $l-s\ge -\half$ and the last factor by Sobolev. For the second factor we interpolate between (\ref{Tao}) $\|v\|_{L^4_x L^2_t} \lesssim \|v\|_{X^{\frac{1}{4},\half+}_{|\tau|=|\xi|}}$ and Strichartz' estimate $\|v\|_{L^4_x L^4_t} \lesssim \|v\|_{X^{\half,\half+}_{|\tau|=|\xi|}}$ , which implies $\|v\|_{L^4_x L^{\frac{2}{1-2\epsilon}+}_t} \lesssim \|v\|_{X^{\frac{1}{4}+\epsilon+,\half+}_{|\tau|=|\xi|}} \lesssim \|v\|_{X^{l,\half+}_{|\tau|=|\xi|}}$ for $l > \frac{1}{4}$ ,
so that (\ref{17a}) is proven.
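For the reader's convenience we record the interpolation computation: with interpolation parameter $\theta = 4\epsilon$ the time exponent is
$$ \frac{1-\theta}{2} + \frac{\theta}{4} = \frac{1}{2} - \epsilon = \frac{1-2\epsilon}{2} $$
and the regularity is
$$ (1-\theta) \cdot \frac{1}{4} + \theta \cdot \frac{1}{2} = \frac{1}{4} + \epsilon \, , $$
which is exactly the claimed bound $\|v\|_{L^4_x L^{\frac{2}{1-2\epsilon}}_t} \lesssim \|v\|_{X^{\frac{1}{4}+\epsilon,\half+}_{|\tau|=|\xi|}}$ .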
\end{proof}
\begin{proof}[Proof of (\ref{19})] Sobolev's multiplication law shows the estimate
$$ \| |\nabla|^{-1} (A_1 \partial_t A_2)\|_{C^0(H^{s-1})} \lesssim \|A_1\|_{C^0(H^s)} \|\partial_t A_2\|_{C^0(H^{s-1})}$$
for $s > \half$. Use now $$ A=A^{cf} + \sum_{\pm} A^{df}_{\pm} \quad , \quad \partial_t A = \partial_t A^{cf} + i \langle \nabla \rangle(A_+^{df} -A_-^{df}) \, $$
from which the estimate (\ref{19}) easily follows.
\end{proof}
\begin{proof}[Proof of (\ref{19a})]
This reduces to the estimate
$$\| \psi_1 \psi_2\|_{C^0(H^{s-2})} \lesssim \|\psi_1\|_{C^0(H^l)} \|\psi_2\|_{C^0(H^l)} \, , $$
which by the Sobolev multiplication law requires $2-s+2l > \frac{3}{2}$ . This is implied by our assumption $l-s \ge - \half$ and $l>\frac{1}{4}$ .
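Indeed, using $s \le l + \half$ we obtain
$$ 2-s+2l \ge 2 - \Big(l+\frac{1}{2}\Big) + 2l = \frac{3}{2} + l > \frac{3}{2} + \frac{1}{4} > \frac{3}{2} \, . $$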
\end{proof}
\begin{proof}[Proof of (\ref{29})]
This is a variant of a proof given by Tao \cite{T1} for the Yang-Mills case.
We have to show
$$
\int_* m(\xi,\tau) \prod_{i=1}^3 \widehat{u}_i(\xi_i,\tau_i) d\xi d\tau \lesssim \prod_{i=1}^3 \|u_i\|_{L^2_{xt}} \, ,
$$
where $\xi=(\xi_1,\xi_2,\xi_3) \, , \,\tau=(\tau_1,\tau_2,\tau_3)$ , * denotes integration over $ \sum_{i=1}^3 \xi_i = \sum_{i=1}^3 \tau_i = 0$ , and
$$ m = \frac{(|\xi_2|+|\xi_3|) \langle \xi_1 \rangle^{s-1} \langle |\tau_1|-|\xi_1| \rangle^{-\frac{1}{4}+2\epsilon}}{\langle \xi_2 \rangle^s \langle |\tau_2| - |\xi_2|\rangle^{\frac{3}{4}+\epsilon} \langle \xi_3 \rangle^{s+\frac{1}{4}}\langle \tau_3 \rangle^{\frac{1}{2}+\epsilon}} \, .$$
Case 1: $|\xi_2| \le |\xi_1|$ ($\Rightarrow$ $|\xi_2|+|\xi_3| \lesssim |\xi_1|$). \\
By two applications of the averaging principle (\cite{T}, Prop. 5.1) we may replace $m$ by
$$ m' = \frac{ \langle \xi_1 \rangle^s \chi_{||\tau_2|-|\xi_2||\sim 1} \chi_{|\tau_3| \sim 1}}{ \langle \xi_2 \rangle^s \langle \xi_3 \rangle^{s+\frac{1}{4}}} \, . $$
Let now $\tau_2$ be restricted to the region $\tau_2 =T + O(1)$ for some integer $T$. Then $\tau_1$ is restricted to $\tau_1 = -T + O(1)$, because $\tau_1 + \tau_2 + \tau_3 =0$, and $\xi_2$ is restricted to $|\xi_2| = |T| + O(1)$. The $\tau_1$-regions are essentially disjoint for $T \in {\mathbb Z}$ and similarly the $\tau_2$-regions. Thus by Schur's test (\cite{T}, Lemma 3.11) we only have to show
\begin{align*}
&\sup_{T \in {\mathbb Z}} \int_* \frac{\langle \xi_1 \rangle^s \chi_{\tau_1=-T+O(1)} \chi_{\tau_2=T+O(1)} \chi_{|\tau_3|\sim 1} \chi_{|\xi_2|=|T|+O(1)}}{\langle \xi_2 \rangle^s \langle \xi_3 \rangle^{s+\frac{1}{4}}} \prod_{i=1}^3 \widehat{u}_i(\xi_i,\tau_i) d\xi d\tau \\
& \hspace{25em} \lesssim \prod_{i=1}^3 \|u_i\|_{L^2_{xt}} \, .
\end{align*}
The $\tau$-behaviour of the integral is now trivial, thus we reduce to
\begin{equation}
\label{55}
\sup_{T \in {\mathbb N}} \int_{\sum_{i=1}^3 \xi_i =0} \frac{ \langle \xi_1 \rangle^s \chi_{|\xi_2|=|T|+O(1)}}{ \langle T \rangle^s \langle \xi_3 \rangle^{s+\frac{1}{4}}} \widehat{f}_1(\xi_1)\widehat{f}_2(\xi_2)\widehat{f}_3(\xi_3)d\xi \lesssim \prod_{i=1}^3 \|f_i\|_{L^2_x} \, .
\end{equation}
Assuming now $|\xi_3| \le |\xi_1|$ (the other case being simpler)
it only remains to consider the following two cases: \\
Case 1.1: $|\xi_1| \sim |\xi_3| \gtrsim T$. We obtain in this case
\begin{align*}
L.H.S. \, of \, (\ref{55})
&\lesssim \sup_{T \in{\mathbb N}} \frac{1}{T^{s+\frac{1}{4}}} \|f_1\|_{L^2} \|f_3\|_{L^2} \| {\mathcal F}^{-1}(\chi_{|\xi|=T+O(1)} \widehat{f}_2)\|_{L^{\infty}({\mathbb R}^3)} \\
&\lesssim \sup_{T \in{\mathbb N}} \frac{1}{ T^{s+\frac{1}{4}}}
\|f_1\|_{L^2} \|f_3\|_{L^2} \| \chi_{|\xi|=T+O(1)} \widehat{f}_2\|_{L^1({\mathbb R}^3)} \\
&\lesssim \hspace{-0.1em}\sup_{T \in {\mathbb N}} \frac{T}{T^{s+\frac{1}{4}}} \prod_{i=1}^3 \|f_i\|_{L^2} \lesssim\hspace{-0.1em}
\prod_{i=1}^3 \|f_i\|_{L^2} \, ,
\end{align*}
provided $s \ge \frac{3}{4}$ .\\
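Here the passage from the $L^1$-norm of $\chi_{|\xi|=T+O(1)} \widehat{f}_2$ to $T \|f_2\|_{L^2}$ follows from Cauchy-Schwarz, since the annulus $\{|\xi|=T+O(1)\} \subset {\mathbb R}^3$ has measure $O(T^2)$:
$$ \| \chi_{|\xi|=T+O(1)} \widehat{f}_2\|_{L^1({\mathbb R}^3)} \le |\{|\xi|=T+O(1)\}|^{\frac{1}{2}} \, \|\widehat{f}_2\|_{L^2} \lesssim T \|f_2\|_{L^2} \, . $$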
Case 1.2: $|\xi_1| \sim T \gtrsim |\xi_3|$.
An elementary calculation shows that
\begin{align*}
L.H.S. \, of \, (\ref{55})
\lesssim \sup_{T \in{\mathbb N}} \| \chi_{|\xi|=T+O(1)} \ast \langle \xi \rangle^{-2(s+\frac{1}{4})}\|^{\frac{1}{2}}_{L^{\infty}(\mathbb{R}^3)} \prod_{i=1}^3 \|f_i\|_{L^2_x} \lesssim \prod_{i=1}^3 \|f_i\|_{L^2_x}
\end{align*}
for $s > \frac{3}{4}$ ,
so that the desired estimate follows.\\
Case 2. $|\xi_1| \le |\xi_2|$ ($\Rightarrow$ $|\xi_2|+|\xi_3| \lesssim |\xi_2|$). \\
Exactly as in case 1 we reduce to
$$
\sup_{T \in {\mathbb N}} \int_{\sum_{i=1}^3 \xi_i =0} \frac{ \langle \xi_1 \rangle^{s-1} \chi_{|\xi_2|=|T|+O(1)}}{ \langle T \rangle^{s-1} \langle \xi_3 \rangle^{s+\frac{1}{4}}} \widehat{f}_1(\xi_1)\widehat{f}_2(\xi_2)\widehat{f}_3(\xi_3)d\xi \lesssim \prod_{i=1}^3 \|f_i\|_{L^2_x} \, .
$$
This can be treated as in case 1.
\end{proof}
\begin{proof}[Proof of (\ref{30})]
By the Sobolev multiplication law we obtain
$$ |\int \int fgh \, dx\, dt| \lesssim \|f\|_{X^{s+\frac{1}{4},\half+\epsilon}_{\tau=0}} \|g\|_{X^{s-\frac{3}{4},\half+\epsilon}_{\tau=0}} \|h\|_{X^{\frac{5}{4}-s-2\epsilon,-\half}_{\tau=0}} $$
for $s > \frac{3}{4}$ . The elementary estimate $\langle \xi \rangle^{\frac{1}{4}-2\epsilon} \langle \tau \rangle^{-(\frac{1}{4}-2\epsilon)} \lesssim \langle |\tau|-|\xi| \rangle^{\frac{1}{4}-2\epsilon}$ implies
$$ \|h\|_{X^{\frac{5}{4}-s-2\epsilon,-\half}_{\tau=0}} \lesssim \|h\|_{X^{1-s,\frac{1}{4}-2\epsilon}_{|\tau|=|\xi|}} \,, $$
thus the claimed estimate.
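The elementary estimate itself follows from the triangle-type inequality $\langle \xi \rangle \lesssim \langle \tau \rangle \langle |\tau|-|\xi| \rangle$ together with $\langle \tau \rangle^{-\half} \le \langle \tau \rangle^{-(\frac{1}{4}-2\epsilon)}$, so that on the Fourier side the weights compare as
$$ \langle \xi \rangle^{\frac{5}{4}-s-2\epsilon} \langle \tau \rangle^{-\half} \lesssim \langle \xi \rangle^{1-s} \langle |\tau|-|\xi| \rangle^{\frac{1}{4}-2\epsilon} \, . $$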
\end{proof}
\begin{proof}[Proof of (\ref{31})]
We may assume $\frac{3}{4}<s\le 1$ , because the case $s>1$ follows easily by the fractional Leibniz rule.
We obtain $$\|A\|_{L^{4-}_t L^2_x} \lesssim \|A\|_{X^{0,\frac{1}{4}-}_{|\tau|=|\xi|}} \lesssim \|A\|_{X^{1-s,\frac{1}{4}-}_{|\tau|=|\xi|}}\, $$
so that by duality
$$ \|A_1 A_2 A_3\|_{X^{s-1,-\frac{1}{4}+}_{|\tau|=|\xi|}} \lesssim \|A_1 A_2 A_3\|_{L^{\frac{4}{3}+}_t L^2_x} \lesssim \prod_{i=1}^3 \|A_i\|_{L^{4+}_t L^6_x} \, . $$
Now by Sobolev
$$ \|A_i\|_{L^{4+}_t L^6_x} \lesssim \|A_i\|_{L^{4+}_t H^1_x} \lesssim \|A_i\|_{X^{s+\frac{1}{4},\half+}_{\tau=0}} $$
and by Strichartz and Sobolev as well
$$\|A_i\|_{L^{4+}_t L^6_x} \lesssim \|A_i\|_{L^{4+}_t H^{\frac{1}{4},4+}_x} \lesssim \|A_i\|_{X^{\frac{3}{4}+,\half+}_{|\tau|=|\xi|}} \lesssim \|A_i\|_{X^{s,\half+}_{|\tau|=|\xi|}} \, ,$$
which implies the claim.
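Both Sobolev embeddings used here are standard in three space dimensions:
$$ H^1_x \hookrightarrow L^6_x \quad , \quad H^{\frac{1}{4},4}_x \hookrightarrow L^6_x \, , \quad \text{since} \quad \frac{1}{6} = \frac{1}{4} - \frac{1}{3} \cdot \frac{1}{4} \, . $$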
\end{proof}
\begin{proof}[Proof of (\ref{43})]
We apply again Tao's method (\cite{T1}). We have to show
$$
\int_* m(\xi,\tau) \prod_{i=1}^3 \widehat{u}_i(\xi_i,\tau_i) d\xi d\tau \lesssim \prod_{i=1}^3 \|u_i\|_{L^2_{xt}} \, ,
$$
where
$$ m = \frac{ \langle \xi_1 \rangle^{l} \langle |\tau_1|-|\xi_1| \rangle^{-\frac{1}{2}++}}{\langle \xi_2 \rangle^l \langle |\tau_2| - |\xi_2|\rangle^{\half+} \langle \xi_3 \rangle^{s+\frac{1}{4}}\langle \tau_3 \rangle^{\frac{1}{2}+}} \, .$$
By two applications of the averaging principle (\cite{T}, Prop. 5.1) we may replace $m$ by
$$ m' = \frac{ \langle \xi_1 \rangle^l \chi_{||\tau_2|-|\xi_2||\sim 1} \chi_{|\tau_3| \sim 1}}{ \langle \xi_2 \rangle^l \langle \xi_3 \rangle^{s+\frac{1}{4}}} \, . $$
Let now $\tau_2$ be restricted to the region $\tau_2 =T + O(1)$ for some integer $T$. Then $\tau_1$ is restricted to $\tau_1 = -T + O(1)$, because $\tau_1 + \tau_2 + \tau_3 =0$, and $\xi_2$ is restricted to $|\xi_2| = |T| + O(1)$. The $\tau_1$-regions are essentially disjoint for $T \in {\mathbb Z}$ and similarly the $\tau_2$-regions. Thus by Schur's test (\cite{T}, Lemma 3.11) we only have to show
\begin{align*}
&\sup_{T \in {\mathbb Z}} \int_* \frac{\langle \xi_1 \rangle^l \chi_{\tau_1=-T+O(1)} \chi_{\tau_2=T+O(1)} \chi_{|\tau_3|\sim 1} \chi_{|\xi_2|=|T|+O(1)}}{\langle \xi_2 \rangle^l \langle \xi_3 \rangle^{s+\frac{1}{4}}} \prod_{i=1}^3 \widehat{u}_i(\xi_i,\tau_i) d\xi d\tau \\
& \hspace{25em} \lesssim \prod_{i=1}^3 \|u_i\|_{L^2_{xt}} \, .
\end{align*}
The $\tau$-behaviour of the integral is now trivial, thus we reduce to
\begin{equation}
\label{55'}
\sup_{T \in {\mathbb N}} \int_{\sum_{i=1}^3 \xi_i =0} \frac{ \langle \xi_1 \rangle^l \chi_{|\xi_2|=|T|+O(1)}}{ \langle T \rangle^l \langle \xi_3 \rangle^{s+\frac{1}{4}}} \widehat{f}_1(\xi_1)\widehat{f}_2(\xi_2)\widehat{f}_3(\xi_3)d\xi \lesssim \prod_{i=1}^3 \|f_i\|_{L^2_x} \, .
\end{equation}
Assuming now $|\xi_3| \le |\xi_1|$ (the other case being simpler)
it only remains to consider the following two cases: \\
Case 1.1: $|\xi_1| \sim |\xi_3| \gtrsim T$. We obtain in this case
\begin{align*}
L.H.S. \, of \, (\ref{55'})
&\lesssim \sup_{T \in{\mathbb N}} \frac{1}{T^{s+\frac{1}{4}}} \|f_1\|_{L^2} \|f_3\|_{L^2} \| {\mathcal F}^{-1}(\chi_{|\xi|=T+O(1)} \widehat{f}_2)\|_{L^{\infty}({\mathbb R}^3)} \\
&\lesssim \sup_{T \in{\mathbb N}} \frac{1}{ T^{s+\frac{1}{4}}}
\|f_1\|_{L^2} \|f_3\|_{L^2} \| \chi_{|\xi|=T+O(1)} \widehat{f}_2\|_{L^1({\mathbb R}^3)} \\
&\lesssim \hspace{-0.1em}\sup_{T \in {\mathbb N}} \frac{T}{T^{s+\frac{1}{4}}} \prod_{i=1}^3 \|f_i\|_{L^2} \lesssim\hspace{-0.1em}
\prod_{i=1}^3 \|f_i\|_{L^2} \, ,
\end{align*}
because $s \ge \frac{3}{4}$ .\\
Case 1.2: $|\xi_1| \sim T \gtrsim |\xi_3|$.
An elementary calculation shows that
\begin{align*}
L.H.S. \, of \, (\ref{55'})
\lesssim \sup_{T \in{\mathbb N}} \| \chi_{|\xi|=T+O(1)} \ast \langle \xi \rangle^{-2(s+\frac{1}{4})}\|^{\frac{1}{2}}_{L^{\infty}(\mathbb{R}^3)} \prod_{i=1}^3 \|f_i\|_{L^2_x} \lesssim \prod_{i=1}^3 \|f_i\|_{L^2_x} \, ,
\end{align*}
using that $2(s+\frac{1}{4}) > 2$ ,
so that the desired estimate follows.
\end{proof}
\section{Removal of the assumption $A^{cf}(0)=0$}
Applying an idea of Keel and Tao \cite{T1} we use the gauge invariance of the Yang-Mills-Dirac system to show that the condition $A^{cf}(0)=0$, which had to be assumed in Prop. \ref{Prop}, can be removed.
\begin{lemma}
\label{Lemma}
Let $s>\frac{3}{4}$ and $0 < \epsilon \ll 1$. Assume
$(A,\psi)\in \left( C^0([0,1],H^s) \cap C^1([0,1],H^{s-1}) \right) \times C^0([0,1],H^{l})$ , $A_0 = 0$ and
\begin{equation}
\label{***}
\|A^{df}(0)\|_{H^s} + \|(\partial_t A)^{df}(0)\|_{H^{s-1}} + \|A^{cf}(0)\|_{H^s} + \|\psi(0)\|_{H^l} \le \epsilon \, .
\end{equation}
Then there exists a gauge transformation $T$ preserving the temporal gauge such that $(TA)^{cf}(0) = 0$ and
\begin{align}
\label{T1}
\|(TA)^{df}(0)\|_{H^s} + \|(\partial_t TA)^{df}(0)\|_{H^{s-1}} + \|(T \psi)(0)\|_{H^l} \lesssim \epsilon \, .
\end{align}
$T$ also preserves the regularity, i.e. $TA\in C^0([0,1],H^s) \cap C^1([0,1],H^{s-1})$ , $T\psi \in C^0([0,1],H^l)$. If
$A \in X^{s,\frac{3}{4}+}_+[0,1] + X^{s,\frac{3}{4}+}_-[0,1] + X^{s+\frac{1}{4},\frac{1}{2}+}_{\tau=0}[0,1]$, $\partial_t A^{cf} \in C^0([0,1],H^{s-1})$ and $\psi\in X^{l,\half+}_+[0,1] + X^{l,\half+}_-[0,1]$ , then $TA$ , $T\psi$ belong to the same spaces. Its inverse $T^{-1}$ has the same properties.
\end{lemma}
\begin{proof}[Proof of Lemma \ref{Lemma}]
For details of the proof we refer to a similar result for the Yang-Mills and Yang-Mills-Higgs equation in \cite{P}, Lemma 4.1 (cf. also the sketch of proof in \cite{T1}).
It is achieved by an iteration argument. We use the Hodge decomposition of $A$:
$$A=A^{cf}+A^{df} = -|\nabla|^{-2} \nabla \,div \,A + A^{df}\, . $$
We define $V_1 := -|\nabla|^{-2} \,div \,A(0)$ , so that $\nabla V_1 = A^{cf}(0)$. Thus $$ \|V_1\|_X := \| \nabla V_1\|_{H^s} = \|A^{cf}(0)\|_{H^s} \le \epsilon \, . $$ We define $U_1 := \exp(V_1)$ and consider the gauge transformation $T_1$ with
\begin{align*}
A_0 & \longmapsto U_1 A_0 U_1^{-1} - (\partial_t U_1) U_1^{-1} \\
A & \longmapsto U_1 A U_1^{-1} - (\nabla U_1) U_1^{-1} \\
\psi_{\pm} & \longmapsto U_1 \psi_{\pm} \, .
\end{align*}
Then $T_1$ preserves the temporal gauge, because $U_1$ is independent of $t$ .
We obtain by Sobolev
$$\|(T_1 \psi_{\pm})(0)\|_{H^l} \lesssim \|\exp V_1\|_X \|\psi_{\pm}(0)\|_{H^l} \lesssim (1+\epsilon) \|\psi_{\pm}(0)\|_{H^l} \, . $$
Iteratively we define for $k \ge 2$ :
$\nabla V_k := (T_{k-1}A)^{cf}(0)$ and $U_k := \prod_{l=k}^1 \exp V_l ,$ so that as in \cite{P}, Lemma 4.1 we obtain
$$\|V_k\|_X = \|(T_{k-1} A)^{cf}(0)\|_{H^s} \lesssim \epsilon^{\frac{k+1}{2}} \quad \forall k \ge 2 \, $$
and $\|U_k-I\|_X \lesssim \epsilon$ (thus $\|U_k\|_X \lesssim 1$) .
Let the gauge transformation $T_k$ be defined by
\begin{align*}
A_0 & \longmapsto U_k A_0 U_k^{-1} - (\partial_t U_k) U_k^{-1} \\
A & \longmapsto U_k A U_k^{-1} - (\nabla U_k) U_k^{-1} \\
\psi_{\pm} & \longmapsto U_k \psi_{\pm} \, .
\end{align*}
This implies
$$ \|(T_k \psi_{\pm})(0) \|_{H^l} \lesssim \|\exp V_k\|_X \|\psi_{\pm}(0)\|_{H^l} \lesssim (1+\epsilon) \|\psi_{\pm}(0)\|_{H^l} $$
independently of $k$ .
As in \cite{P}, Lemma 4.1 this allows us to define a gauge transformation $T$ by
$TA := \lim_{k \to \infty} T_k A$ in $C^0([0,1];H^s)$ , $\partial_t TA := \lim_{k \to \infty} \partial_t T_k A$ in $C^0([0,1];H^{s-1})$ and $T \psi_{\pm} := \lim_{k \to \infty} T_k \psi_{\pm}$ in $C^0([0,1];H^l)$ , which fulfills
$(TA)^{cf}(0)=0$ .
We also deduce
$$ \|(TA)^{df}(0)\|_{H^s} + \|(\partial_t TA)^{df}(0)\|_{H^{s-1}} + \|(T\psi_{\pm})(0)\|_{H^l} \lesssim \epsilon \, $$
and $$TA=U A U^{-1} -\nabla U U^{-1} \, , \, T\psi_{\pm} = U\psi_{\pm} \, ,$$
where $U = \prod_{l=\infty}^1 \exp V_l$ , $U^{-1} = \prod_{l=1}^{\infty} \exp(-V_l)$ and the limits are taken with respect to $\|\cdot\|_X$ . It has the property $\|U\|_X = \|\nabla U\|_{H^s} \lesssim 1$ .
We want to show that $T$ preserves the regularity. That $TA$ has the same regularity was shown in \cite{P}, Lemma 4.1. Let now $\chi=\chi(t)$ be a smooth function with $\chi(t)=1$ for $0 \le t \le 1$ and $\chi(t)=0$ for $t\ge 2$. We obtain:
\begin{align*}
\|U \psi_{\pm}\|_{X^{l,\half+}_{\pm}[0,1]} \lesssim \|U \psi_{\pm} \chi\|_{X^{l,\half+}_{\pm}} & \lesssim \|\nabla U \chi\|_{X^{s,1}_{\pm}} \|\psi_{\pm}\|_{X^{l,\half+}_{\pm}} \lesssim \|\nabla U\|_{H^s} \|\psi_{\pm}\|_{X^{l,\half+}_{\pm}} \\
&\lesssim \|\psi_{\pm}\|_{X^{l,\half+}_{\pm}} < \infty\,
\end{align*}
Here we applied the estimate
$$\|uv\|_{X^{l,\half+\epsilon}_{\pm}} \lesssim \|\nabla u\|_{X^{s,1}_{\pm}} \|v\|_{X^{l,\half+\epsilon}_{\pm}} \,,$$
provided $ s > \half$ ,
for the second step, which is proved as \cite{P}, Lemma 4.2. Thus the regularity of $\psi_{\pm}$ is also preserved.
The inverse $T^{-1}$ , defined by
$$T^{-1}B=U^{-1} B U + U^{-1}\nabla U \, , \, T^{-1}\psi_{\pm} = U^{-1}\psi_{\pm} \,, $$
has the same properties as $T$ .
\end{proof}
\section{Proof of Theorem \ref{Theorem1.1}}
\begin{proof}
It suffices to construct a unique local solution of (\ref{6}),(\ref{7}),(\ref{8}) with initial conditions
$$ A^{df}(0) = a^{df} \, , \, (\partial_t A^{df})(0) = {a'}^{df} \, , \,
A^{cf}(0) = a^{cf} \, , \,\psi(0)=\psi_0 \, , $$
which fulfill
$$ \|A^{df}(0)\|_{H^s} + \|(\partial_t A)^{df}(0)\|_{H^{s-1}} + \|A^{cf}(0)\|_{H^s} + \|\psi(0)\|_{H^l} \le \epsilon $$
for a sufficiently small $\epsilon > 0$.
By Lemma \ref{Lemma} there exists a gauge transformation $T$ which fulfills the smallness condition (\ref{T1}) and $(TA)^{cf}(0) =0$. We use Prop. \ref{Prop} to construct a unique solution $(\tilde{A},\tilde{\psi})$ of (\ref{6}),(\ref{7}),(\ref{8}) , where $\tilde{A}=\tilde{A}_+^{df} + \tilde{A}_-^{df} +\tilde{A}^{cf}$ and $\tilde{\psi} = \tilde{\psi}_+ + \tilde{\psi}_-$ , with data
$$\tilde{A}^{df}(0)= (TA)^{df}(0) \, , \, (\partial_t \tilde{A})^{df}(0) = (\partial_t (TA)^{df})(0) \, , \, \tilde{A}^{cf}(0)= (TA)^{cf}(0)=0 \, ,$$
$$ \, \tilde{\psi}(0) = (T\psi)(0) \, , $$
with the regularity
$$ \tilde{A}^{df}_{\pm} \in X^{s,\frac{3}{4}+}_{\pm}[0,1] , \tilde{A}^{cf} \in X^{s+\frac{1}{4},\frac{1}{2}+}_{\tau=0}[0,1] , \partial_t \tilde{A}^{cf} \in C^0([0,1],H^{s-1}) ,
\tilde{\psi}_{\pm} \in X^{l,\half+}_{\pm}[0,1] \, . $$
This solution also satisfies $\tilde{A} \in C^0([0,1],H^s) \cap C^1([0,1],H^{s-1})$, $\tilde{\psi}\in C^0([0,1],H^l)$ .
Applying the inverse gauge transformation $T^{-1}$ according to Lemma \ref{Lemma} we obtain a unique solution of (\ref{6}),(\ref{7}),(\ref{8}) with the required initial data and also the same regularity.
\end{proof}
\section{Introduction}
Several publications have shown that Face Recognition Systems (FRSs) and humans are vulnerable to \textit{morphing attacks}. A morph is an image that contains identity information from the faces of two different individuals. When such a morphed image is compared to an image of either contributing identity it is likely to be accepted as a match. This can lead to security risks in e.g. border control, since a criminal could travel using the identity document of an accomplice. Over the past few years, several methods to address such attacks have been proposed. In most cases, the morphs on which Morphing Attack Detection (MAD) methods are trained and tested are generated in-house, and are usually created by detecting corresponding landmarks in the faces, warping the images to an average geometry and then blending the pixel values (see Fig. \ref{morphing}) \cite{FFM14,Sc19,Ro17,MW18}.
The fact that MAD methods are often tested on datasets with similar characteristics as the sets used for training them may lead to overfitting to certain dataset-specific characteristics - especially if only one morphing algorithm was used \cite{SRB18,8553018}. In many countries (e.g. the Netherlands) someone applying for an identity document can provide their own printed passport photo \cite{BZK}. This means that before a passport photo is stored in an electronic Machine Readable Travel Document (eMRTD), it has been printed and scanned (P\&S). During this process some morphing traces may be masked, which means an MAD method may perform differently on P\&S morphs. Furthermore, the quality of automatically generated morphs is limited by the quality of the landmark detector used, while a criminal might spend more time to manually select landmarks. These are ways in which morphs can vary, even when the underlying algorithm with which they were generated is the same.
However, even if MAD methods are tested on datasets from different sources and that include P\&S images, it is possible that there are other tools for generating morphs of which the research community is not yet aware. Therefore, it is necessary to explore other potential morphing tools in order to better understand the weaknesses of FRSs. Benchmarks for validating MAD methods - such as \cite{SOTAMD,NIST,BOEP} - can be extended and improved by evaluating morphs generated with different methods.
We introduce a new method for creating morphs in order to fill this gap. More precisely, we first examine which morphs are most challenging from the point of view of an FRS by exploiting the fact that FRSs are equipped with a (dis)similarity measure that is used to measure how similar two images are. Since this allows us to define the theoretically most challenging morph - the image that is most similar to two input images according to an FRS - we call these \textit{worst-case morphs}. We then train a neural network to generate images that approximate these worst-case morphs. We examine how well our approximations of worst-case morphs can fool the FRS and whether this extends to other systems. Our results show that our morphs pose a significant threat to the FRS that was used to generate them, but that more work is needed to generate images that are truly close to worst-case morphs.
Our proposed system differs from GAN-based morphs \cite{9404267,VZ20} that are also generated using a neural network. While GANs are trained to generate images from a low-dimensional embedding space (that is different from the embedding space of an FRS) that look as ``real'' as possible, we directly work with the embedding space of an FRS and its associated (dis)similarity measure to approximate the most challenging morph, without having the constraint that the image needs to look ``real''. These two different goals could potentially be combined in future research.\\
Our main contributions are:
\begin{itemize}
\item a theoretical framework that when given a face recognition system and two images can be used to define a \textit{worst-case morph},
\item showing how a deep-learning-based method can be applied to approximate worst-case morphs,
\item examining how vulnerable two existing FRSs are to these morphs,
\item providing a new method for generating morphs that leads to more variation in morph datasets and enables researchers to more accurately validate MAD methods in future work.
\end{itemize}
\begin{figure}[h]
\centering
\resizebox{0.5\columnwidth}{!}{%
\begin{tikzpicture}
\node[inner sep=0pt] (whitehead) at (-2.5,0)
{\includegraphics[width=0.08\paperwidth]{img/04211d138}};
\draw[black, ->] (-1.6,0.1) -- (-0.9,0.1);
\node[inner sep=0pt] (whitehead) at (-2.5,-3)
{\includegraphics[width=0.08\paperwidth]{img/04212d211}};
\draw[black, ->] (-1.6,-3.1) -- (-0.9,-3.1);
\filldraw[black] (-1.05,0) node[anchor=west,rotate=-90] {\textit{Landmark detection}};
\filldraw[black] (-1.35,-0.2) node[anchor=west,rotate=-90] {\textit{\& triangulation}};
\node[inner sep=0pt] (whitehead) at (0,0)
{\includegraphics[width=0.08\paperwidth]{img/Im0_landmarks}};
\node[inner sep=0pt] (whitehead) at (0,-3)
{\includegraphics[width=0.08\paperwidth]{img/Im1_landmarks}};
\draw[black, ->] (1,-0.2) -- (1.4,-0.8);
\draw[black, ->] (1,-2.8) -- (1.4,-2.2);
\filldraw[black] (1.1,-0.8) node[anchor=west,rotate=-90] {\textit{Morphing}};
\node[inner sep=0pt] (whitehead) at (2.35,-1.5)
{\includegraphics[width=0.08\paperwidth]{img/morph}};
\draw[black, ->] (3.25,-1.5) -- (3.8,-1.5);
\node[inner sep=0pt] (2) (whitehead) at (4.7,-1.5)
{\includegraphics[width=0.08\paperwidth]{img/alpha0_5}};
\draw[dashed,->,black] (1,-3.5) to [out=-35,in=-130] (4.8,-2.8);
\filldraw[black] (3.5,-2) node[anchor=west,rotate=-90] {\textit{Splicing}};
\end{tikzpicture}%
}
\caption{\label{morphing}Landmark-based morphing process resulting in a \textit{spliced} (sometimes called \textit{complete}) morph.}
\end{figure}
\section{Related Work}
This section discusses variation in morphing algorithms in existing research. These morphs can be used to train and evaluate MAD methods, which can be either Single-Image MAD (S-MAD) methods - also called no-reference detection - or Differential MAD (D-MAD) methods.
\cite{8897214} post-processes landmark-based morphs using a style transfer-based method in order to mask certain effects caused by the morphing process. \cite{FFM21} introduces a model to simulate the effects of P\&S on images and \cite{9304856} considers the influence of ageing on morphing attacks.
However, if the underlying morphing algorithms use the same landmark-based method this provides limited information on the robustness of the detection method.
GANs were used in \cite{VZ20,8698563} in an attempt to create a different type of morph, which was shown to be able to fool FRSs, if not as consistently as landmark-based morphs. By using existing generation networks from StyleGAN \cite{StyleGAN} and StyleGAN2 \cite{StyleGAN}, and introducing Identity Priors, the results were improved in \cite{9404267}. Since GANs are notoriously difficult to train \cite{8253599} more stable methods for generating morphs would be useful.
\cite{SKR20} introduces an overview of S-MAD methods and discusses characteristics of the datasets used to train them. They address the lack of variation in morphing techniques by including morphs created using different algorithms as well as printed-and-scanned morphs in the evaluation of a multi-scale block binary pattern fusion approach for S-MAD. However, all morphing algorithms used are landmark-based and GAN morphs or other methods are not taken into consideration.
\section{Proposed System \label{approach}}
In this section we introduce a framework to generate images that approximate the worst-case morph, which is the image that is most similar to two input images according to the FRS.
Landmark-based morphing attempts to create images in image space that are similar to two given images. Instead, we use the embedding space of an FRS, since this should contain more structured information on the similarity of images. Let $f$ be the function that describes an FRS's mapping from the image space $X$ to the embedding space $Z$:
\begin{align*}
f: X &\rightarrow Z\\
\text{I} &\mapsto z.
\end{align*}
Let $d$ be a distance metric applied to $Z$ by the FRS (i.e. it returns dissimilarity scores). In that case the \textit{worst-case embedding} for two images $I_1$ and $I_2$ is
\begin{equation}
z^* := \text{argmin}_{z \in Z} \left( \max \left[ d(z,z_1), d(z,z_2) \right] \right), \label{z_wc}
\end{equation}
where $z_1=f(I_1), \ z_2=f(I_2)$. For example, in the case that $d(z_1, z_2)$ returns the Euclidean distance between two embeddings $z_1$ and $z_2$, the worst-case embedding becomes $z^*=\frac{z_1+z_2}{2}$.
Equation \ref{z_wc} can be used for an FRS that returns dissimilarity scores. In the case of an FRS that uses similarity scores, defined by a function $S$, it becomes
\begin{equation}
z^*:=\text{argmax}_{z \in Z} \left( \min \left[ S(z,z_1), S(z,z_2) \right] \right) .
\end{equation}
For example, if $S(z_1, z_2)$ returns the cosine similarity, then $S(z_1, z_2) =\cos(\theta)$, where $\theta$ is the angle between $z_1$ and $z_2$. In that case $z^*$ is any $z$ for which $S(z_1,z)=S(z,z_2)=\cos(\theta/2)$.
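Both cases can be illustrated concretely. The following sketch is our own (not code from the paper); it uses plain NumPy, and the function names are ours. For Euclidean distance the worst-case embedding is the midpoint; for cosine similarity any point on the angle bisector of the two (normalised) embeddings attains $\cos(\theta/2)$ against both.

```python
import numpy as np

def worst_case_euclidean(z1, z2):
    """Midpoint: minimises the larger of the two Euclidean distances."""
    return (z1 + z2) / 2.0

def worst_case_cosine(z1, z2):
    """Unit vector along the angle bisector of z1 and z2: it has equal
    cosine similarity cos(theta/2) to both inputs (assumes z1 != -z2)."""
    u1 = z1 / np.linalg.norm(z1)
    u2 = z2 / np.linalg.norm(z2)
    b = u1 + u2
    return b / np.linalg.norm(b)

# toy 128-dimensional embeddings standing in for FRS outputs
rng = np.random.default_rng(0)
z1, z2 = rng.normal(size=128), rng.normal(size=128)

z_star = worst_case_euclidean(z1, z2)
assert np.isclose(np.linalg.norm(z_star - z1), np.linalg.norm(z_star - z2))

c = worst_case_cosine(z1, z2)
assert np.isclose(c @ z1 / np.linalg.norm(z1), c @ z2 / np.linalg.norm(z2))
```

The assertions check the defining property of a worst-case embedding: both contributing identities are equally (dis)similar to it.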
A worst-case morph is an image $M^*$ for which $f(M^*)=z^*$. We approximate $M^*$ using a decoder $D$ that maps from $Z$ back to $X$, inverting the mapping of the FRS. $D$ is trained to approximate a function $f^{-1}$ that returns a preimage\footnote{It is highly unlikely that $f$ is invertible.} of an embedding in $Z$:
\begin{align*}
f^{-1}: Z \rightarrow & X\\
z \mapsto & I' \in \{I | f(I)=z\}.
\end{align*}
If $D$ can successfully reconstruct images, then our hypothesis is that $D(z^*) = M \approx M^*$. We call such morphs \textit{embedding-based} morphs. While GAN-based morphs \cite{9404267,VZ20} are also generated from embeddings using a decoder network, this is a different approach, since it does not directly use embeddings from the embedding space of an FRS.
\subsection{Morph generation \label{MorphGenSection}}
For our experiments, we rely on a ResNet34-based face recognition system \cite{HeZRS15}. We use the pretrained model {\fontfamily{Courier}\selectfont dlib\_face\_recognition\_resnet\_model\_v1} from Dlib \cite{dlib09}. When given input images $I_1$ and $I_2$, this FRS returns a 128-dimensional embedding for each image, resulting in $z_1=f(I_1)$ and $z_2=f(I_2)$, where $f$ describes the mapping defined by the neural network from the image space to the embedding space:
\begin{align*}
X = [0,1]^{n_c\times w\times h} \quad \text{ and } \quad Z = \mathbb{R}^{128},
\end{align*}
where the number of colour channels $n_c=3$ and the image size $w\times h = 224\times224$.
It can be used as a face verification system by calculating the Euclidean distance between $z_1$ and $z_2$. $I_1$ and $I_2$ are not accepted as a match if $||z_1-z_2||_2>t$, where $t=0.6$ is the recommended decision threshold. When using this threshold the system achieves an accuracy of 99.38\% on LFW \cite{dlib09,LFWTech}.
In this case, the worst-case embedding $z^*$ that minimises Eq. \ref{z_wc} is the embedding that lies on the interpolating line exactly between $z_1$ and $z_2$. Thus a worst-case morph is any image $M^*$ for which $f(M^*)=\frac{z_1+z_2}{2}$. Note that, as mentioned in Section \ref{approach}, using a different dissimilarity measure $d$ would lead to a different $z^*$ and $M^*$.
\begin{figure}[h]
\centering
\begin{tikzpicture}
\node[inner sep=0pt] (whitehead) at (-2.9,0)
{\includegraphics[width=0.07\paperwidth]{img/04211d138}};
\draw[black, <-] (-2,0) -- (-1.1,0);
\node[inner sep=0pt] (whitehead) at (-2.5,-3)
{\includegraphics[width=0.07\paperwidth]{img/04212d211}};
\draw[black, <-] (-1.6,-3) -- (-0.7,-3);
\node at (-0.4,0) [circle,fill,inner sep=1.5pt]{};
\node[] at (-0.7,0)
{\text{$z_1$}};
\node at (0,-3) [circle,fill,inner sep=1.5pt]{};
\node[inner sep=0pt] (whitehead) at (-0.3,-3)
{\text{$z_2$}};
\node at (-0.2,-1.5) [circle,fill,inner sep=1.5pt]{};
\node[inner sep=0pt] (whitehead) at (0.1,-1.5)
{\text{$z^*$}};
\node at (0.4,-1) [circle,fill,inner sep=1.5pt]{};
\node[inner sep=0pt] (whitehead) at (1,-1.1)
{\text{$z_{\text{Morph}}$}};
\draw[dashed, black, -] (-0.4,0) -- (0,-3);
\draw[black, ->] (1,-0.9) -- (2,-0.5);
\draw[black, ->] (0.1,-1.7) -- (1.5,-2.5);
\node[inner sep=0pt] (whitehead) at (3,0)
{\includegraphics[width=0.07\paperwidth]{img/morph}};
\node[inner sep=0pt, opacity=0.3] (2) (whitehead) at (2.5,-3)
{\includegraphics[width=0.07\paperwidth]{img/morph}};
\node[scale=8,inner sep=0pt, white] (whitehead) at (2.5,-3)
{\text{?}};
\end{tikzpicture}
\caption{\label{worstcase}Visualisation of the difference between a landmark-based morph and the worst-case morph.}
\end{figure}
We train a network $D$ with six deconvolutional layers to approximate $f^{-1}$ by minimising
\begin{equation}
\pazocal{L}^i = \frac{1}{N} \sum_{(x,y) \in \text{pixels}} \left( I^i_{x,y}-D(f(I^i))_{x,y} \right)^2,
\end{equation}
where $N$ is the number of pixels in the $i$-th image $I^i$ in a mini-batch. The hyperparameters of this network can be found in the appendix. We train the network for 100 epochs. The loss function for one mini-batch with $M$ images is
\begin{equation}
\pazocal{L} = \frac{1}{M} \sum_{i=1}^M \pazocal{L}^i.\label{loss}
\end{equation}
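A minimal NumPy sketch of this mini-batch loss follows. It is our own illustration: the lambdas are toy stand-ins for the FRS mapping $f$ and the decoder $D$, and all names are ours.

```python
import numpy as np

def reconstruction_loss(images, f, D):
    """Mean over a mini-batch of the per-image pixelwise MSE between
    each image and the decoding of its FRS embedding."""
    per_image = []
    for img in images:                  # img: (channels, width, height)
        recon = D(f(img))               # decode the embedding back to an image
        per_image.append(np.mean((img - recon) ** 2))
    return float(np.mean(per_image))

# toy stand-ins: a fake 3-dim "embedding" and a decoder that inverts it
f = lambda img: img.mean(axis=(1, 2))
D = lambda z: np.ones((3, 4, 4)) * z[:, None, None]

batch = [np.zeros((3, 4, 4)), np.ones((3, 4, 4))]
loss = reconstruction_loss(batch, f, D)   # 0.0: both toy images reconstruct exactly
```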
When given two images $I_1$ and $I_2$ with corresponding embeddings $z_1$ and $z_2$, we create an embedding-based morph by forwarding $(z_1+z_2)/2$ through the network.
\section{Morph Evaluation \label{metrics}}
In this section we describe how to measure the vulnerability of two FRSs to landmark-based and embedding-based morphs. We examine the morphs with the same deep-learning-based FRS used for training the inverse network and a commercial system (Cognitec) \cite{COG}, using the Mated Morph Presentation Match Rate (MMPMR($t$)), which is the proportion of (morphing) attacks for which both contributing identities are considered a match by the FRS when using a threshold~$t$ \cite{8053499}.
It is not always relevant whether an FRS considers \textit{both} identities as a match, since for example during application for an eMRTD, the comparison of the applicant with the passport image may be performed by a human officer. For that reason we also share the scores of morphs compared with images of contributing identities in Section~\ref{results}.
We set the threshold $t$ to the recommended values for the respective FRSs and evaluate using a validation set. Dlib uses dissimilarity scores (distances) and has a recommended decision threshold of 0.6, the commercial system uses similarity scores and has a recommended decision threshold of 0.5.
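The MMPMR can be sketched in a few lines. This is our own minimal implementation for a dissimilarity-score FRS such as Dlib; for a similarity-score system the comparison direction flips.

```python
def mmpmr(attacks, t):
    """Mated Morph Presentation Match Rate: fraction of morphing attacks
    for which BOTH contributing identities match the morph at threshold t.
    `attacks` is a list of (d1, d2) dissimilarity scores of a morph
    compared against an image of each contributing identity."""
    accepted = sum(1 for d1, d2 in attacks if d1 <= t and d2 <= t)
    return accepted / len(attacks)

# three hypothetical attacks against a Dlib-style FRS (threshold 0.6):
scores = [(0.3, 0.5), (0.4, 0.7), (0.55, 0.59)]
rate = mmpmr(scores, t=0.6)   # 2 of 3 attacks fool the system
```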
\section{Creation of Morphing Dataset}
We train a network as described in Section \ref{approach}. We create two sets of morphs using the validation set, the first using the landmark-based method and the second using the embedding-based approach.
\begin{table}[h]
\footnotesize
\centering
\caption{\label{tab1}Training and validation sets.}
\begin{tabular}{|l|l|l|}
\hline
\rowcolor[gray]{.9}
& Training & Validation \\
\hline
\# real IDs & 482 & \ 86\\
\hline
\# real imgs & 18,143 & \ 3,629 \\
\hline
\# morph IDs & - & \ 115 \\
\hline
\# morph imgs (embedding) & - & \ 7,410 \\
\hline
\# morph imgs (landmark) & - & \ 22,230 \\
\hline
\end{tabular}
\end{table}
We use a dataset of in total 21,772 facial images, which is the subset of all portrait images provided in the FRGC dataset \cite{FRGC}. For each identity in the validation set, we select the next four most similar identities. We do this because selecting identities that resemble each other leads to more challenging morphs than e.g. randomly selecting pairs of identities for morphing \cite{8987316}.
We define a mean embedding for each identity by computing the embedding returned by Dlib for each image of that identity and averaging these embeddings, i.e. the mean embedding for the $i$-th identity is
\begin{equation}
\overline{z}_i = \frac{1}{N_i} \sum_{k_i=1}^{N_i}
f(I_{k_i}),
\end{equation}
where $I_{k_i}$ is an image of the $i$-th identity, $N_i$ is the number of available images of the $i$-th identity and $f$ is defined as in Section \ref{MorphGenSection}. For each identity $i$ we select the four most similar identities by finding those $\overline{z}_j$, $j\neq i$, that have the smallest Euclidean distance to $\overline{z}_i$. This results in $4\cdot 482=1928$ pairs of identities. We remove any duplicate pairs and split the dataset of normal images into training and validation sets, see Table \ref{tab1}. To prevent any overlap between the training and validation sets, we generate morphed images using only those identity pairs for which both identities are in the validation set, after which 115 pairs of identities remain. For each identity pair (id$_1$, id$_2$) we then randomly select image pairs from the subset of faces with neutral expression and create landmark-based morphs as described in Fig. \ref{morphing}. Both the warping and blending parameters are set to 0.5. We generate embedding-based morphs by passing embeddings through the Decoder as described in Section \ref{MorphGenSection}. For each pair of images ($\text{Im}_1, \text{Im}_2$) selected for morphing we create one embedding-based morph and three landmark-based morphs: one full morph (with ghosting artifacts in the background) and two spliced morphs, resulting in 7,410 embedding-based morphs and 22,230 landmark-based morphs. An example is shown in Fig.~\ref{worst-case}.
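The mean-embedding and pair-selection steps above can be sketched as follows. This is a minimal sketch; the per-image embedding dictionary and helper names are illustrative stand-ins, not our actual pipeline code:

```python
import numpy as np

def mean_embeddings(embeddings_by_id):
    """Average the per-image embeddings of each identity,
    i.e. z_bar_i = (1/N_i) * sum_k f(I_{k_i})."""
    return {i: np.mean(np.stack(embs), axis=0)
            for i, embs in embeddings_by_id.items()}

def most_similar(means, i, k=4):
    """The k identities j != i whose mean embedding has the smallest
    Euclidean distance to that of identity i."""
    dists = {j: np.linalg.norm(means[i] - means[j])
             for j in means if j != i}
    return sorted(dists, key=dists.get)[:k]

# Toy example with 2-dimensional "embeddings" for three identities.
means = mean_embeddings({0: [[0, 0], [2, 0]], 1: [[1, 0]], 2: [[5, 5]]})
print(most_similar(means, 0, k=1))  # identity 1 is closest to identity 0
```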
\begin{figure}[h]
\centering
\resizebox{0.6\columnwidth}{!}{%
\begin{tikzpicture}
\filldraw[black] (-5.35,1.25) node[anchor=west] {Im$_1$};
\node[inner sep=0pt] (whitehead) at (-5,0)
{\includegraphics[width=0.1\paperwidth]{img/04361d00}};
\filldraw[black] (4.65,1.25) node[anchor=west] {Im$_2$};
\node[inner sep=0pt] (whitehead) at (5,0)
{\includegraphics[width=0.1\paperwidth]{img/04484d06}};
\draw[black, ->] (-3.5,0.5) -- (-2,0.8);
\draw[black, ->] (3.5,0.5) -- (2,0.8);
\filldraw[black] (-1.75,3.65) node[anchor=west] {\textit{Landmark-based morphs}};
\filldraw[black] (-3.4,3.25) node[anchor=west] {Spliced into Im$_1$};
\node[inner sep=0pt] (whitehead) at (-2.25,2)
{\includegraphics[width=0.1\paperwidth]{img/04361d00_04484d06_50_in0}};
\filldraw[black] (-0.8,3.25) node[anchor=west] {Full morph};
\node[inner sep=0pt] (whitehead) at (0,2)
{\includegraphics[width=0.1\paperwidth]{img/04361d00_04484d06_50}};
\filldraw[black] (1.1,3.25) node[anchor=west] {Spliced into Im$_2$};
\node[inner sep=0pt] (whitehead) at (2.25,2)
{\includegraphics[width=0.1\paperwidth]{img/04361d00_04484d06_50_in1}};
\draw[black, ->] (-3.5,-0.5) -- (-2,-1);
\draw[black, ->] (3.5,-0.5) -- (2,-1);
\filldraw[black] (-1.7,0) node[anchor=west] {\textit{Embedding-based morph}};
\node[inner sep=0pt] (whitehead) at (0,-1.2)
{\includegraphics[width=0.1\paperwidth]{img/04361d00_04484d06_50_WC}};
\end{tikzpicture}
}
\caption{\label{worst-case}Embedding-based and landmark-based morphs.}
\end{figure}
\section{Results \& Discussion}\label{results}
We first examine the resulting embedding-based morphs using the same FRS (Dlib) used to generate them. The resulting histograms of genuine, impostor and morph scores are shown in Fig. \ref{histograms}. The left half of the seemingly bimodal distribution of morph scores appears to be caused by certain impostor pairs (those leading to scores between approximately 0.4 and 0.6) that contribute to a higher false match rate. These include several non-white faces that are considered similar by the FRS. The MMPMR for the embedding-based morphs is 65.2\%, while it is close to 100\% for landmark-based morphs. This means that the majority of the embedding-based morphs contain enough identity information of the two contributing identities to fool the FRS, if not as consistently as the landmark-based morphs.
\begin{figure}
\centering
\includegraphics[width=0.3\paperwidth]{img/Dlib_scores_FRGC_similar_worstcase}
\includegraphics[width=0.3\paperwidth]{img/Dlib_scores_FRGC_similar3}
\caption{\label{histograms}Vulnerability of Dlib face recognition to embedding-based morphs (top) and landmark-based morphs (bottom). The blue histograms describe the genuine comparison scores, red the impostor comparison scores and green the morph scores (morph compared to a different image of one of the contributors).}
\end{figure}
Because the decoder network is trained with a pixel-based loss, the output images are blurry, especially outside the facial area. The fact that the images do not look convincingly real may explain the lower MMPMR for embedding-based morphs. This might be resolved by improving the realism of the output images, for example by training the decoder to reconstruct only a region of interest and splicing the morph into a different background, just as is done for landmark-based morphs. Another option to improve reconstruction quality would be to introduce a discriminative loss, as is done in GANs. In that case, the approach becomes more similar to existing GAN-based morphing techniques, with the difference that embedding-based morphs depend on worst-case embeddings.
Another improvement may be realised by adapting the pixel-based loss, since it is not ideal for preserving identity information. If the FRS's mapping $f$ to embedding space is known and we can backpropagate gradients through it, we can extend the loss function in Eq. \ref{loss} with a term that encourages the network to return an image that maps onto the embedding it was given as input:
\begin{equation}
\pazocal{L}^i_{\text{rec}} = ||z^i - f(f^{-1}(z^i)) ||_2. \label{rec_loss}
\end{equation}
The loss function can then be extended to
\begin{equation*}
\pazocal{L} = \frac{1}{M} \sum_{i=1}^M \left( \pazocal{L}^i + \alpha \pazocal{L}^i_{\text{rec}} \right),
\end{equation*}
where $\alpha>0$ is the weight given to reconstructing the input embedding. In this case the attacks may become more similar to adversarial attacks and do not necessarily result in visually convincing images. This may be prevented by using, for example, a combination of different FRSs to estimate the reconstruction loss in Eq. \ref{rec_loss}. If the mapping of an FRS is unknown, but the embeddings corresponding to a set of images are given, we can train an approximation of the original FRS and use it to calculate the loss given in Eq. \ref{rec_loss}.
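Assuming $f$ and the decoder are available as differentiable functions, the extended loss can be sketched as follows. The lambdas standing in for $f$ and the decoder are toy placeholders, not the actual networks described above:

```python
import numpy as np

def rec_loss(z, f, decoder):
    """L_rec^i = || z^i - f(decoder(z^i)) ||_2: penalise decoded images
    whose embedding drifts away from the input embedding."""
    return np.linalg.norm(z - f(decoder(z)))

def total_loss(pixel_losses, rec_losses, alpha):
    """L = (1/M) * sum_i (L^i + alpha * L_rec^i)."""
    p = np.asarray(pixel_losses, dtype=float)
    r = np.asarray(rec_losses, dtype=float)
    return float(np.mean(p + alpha * r))

# Toy check: a decoder that exactly inverts f gives zero reconstruction loss.
f = lambda img: 0.5 * img        # hypothetical embedding map
decoder = lambda z: 2.0 * z      # its exact inverse
print(rec_loss(np.array([1.0, 2.0]), f, decoder))  # 0.0
```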
The embedding-based morphs are also not as challenging for the Cognitec FRS (MMPMR$\approx$0.9\%) as landmark-based morphs (MMPMR$\approx$90\%), see Fig.~\ref{histograms2}. This is not entirely surprising, since our method is trained to generate approximations of worst-case morphs specifically for one FRS, which does not necessarily generalise to other FRSs. The morph comparison scores are however significantly different from the impostor scores, indicating that they may still pose a risk. Especially if the morphs are further improved, they may be useful in understanding an FRS's vulnerability.
\begin{figure}
\centering
\includegraphics[width=0.3\paperwidth]{img/FaceVacs_scoreswc_FRGC_test}
\includegraphics[width=0.3\paperwidth]{img/morph_scores_FRGC}
\caption{\label{histograms2}Vulnerability of FaceVacs face recognition to embedding-based morphs (top) and landmark-based morphs (bottom). Blue: genuine scores, red: impostor scores, green: morph scores. While few morph comparison scores are above the decision threshold of 0.5, they are significantly larger than the impostor comparison scores.}
\end{figure}
\section{Conclusion \& Future Work}
We introduced the concept of worst-case morphs, which are morphs that are most similar to the contributing identities according to an FRS. We trained a decoder network to generate embedding-based morphs that approximate such worst-case morphs. The majority of these morphs successfully fooled the FRS they were trained to fool. However, they were not (yet) as successful as landmark-based morphs. We also showed that this does not necessarily translate to the ability to fool other FRSs. We suggested several possible ways to improve the approximation of worst-case morphs.
Embedding-based approximations of worst-case morphs do not yet fool FRSs to the same extent that landmark-based morphs do. However, they can be helpful in understanding and visualising the vulnerabilities of FRSs, and may also offer some insight into the robustness of MAD techniques. Furthermore, our approach has the advantage of being far more stable to train than GANs.
Since GAN-based methods are currently the only existing alternative for generating morphs that are different from landmark-based morphs, our method is a valuable contribution to creating more varied morph databases.
For now though, Face Recognition Systems can rest easy knowing that embedding-based morphs are more a bad dream than a nightmare.
\bibliographystyle{unsrt}
\section{Introduction}
A {\em map} $M$ is an embedding of a graph $G$ on a surface $S$ such that the closures of the components of $S \setminus G$, called the {\em faces} of $M$, are homeomorphic to $2$-discs. A map $M$ is said to be a {\em polyhedral map} if the intersection of any two distinct faces is either empty, a common vertex, or a common edge. Here, a map always means a polyhedral map.
The $face\mbox{-}cycle$ $C_u$ of a vertex $u$ (also called the {\em vertex-figure} at $u$) in a map is the ordered sequence of faces incident to $u$.
So, $C_u$ is of the form $(F_{1,1}\mbox{-}\cdots \mbox{-}F_{1,n_1})\mbox{-}\cdots\mbox{-}(F_{k,1}\mbox{-}$ $\cdots \mbox{-}F_{k,n_k})\mbox{-}F_{1,1}$, where $F_{i,\ell}$ is a $p_i$-gon for $1\leq \ell \leq n_i$, $1\leq i \leq k$, $p_r\neq p_{r+1}$ for $1\leq r\leq k-1$ and $p_k\neq p_1$. The types of the faces in $C_u$ define the type of $C_u$; in this case the type $[p_1^{n_1}, \dots, p_k^{n_k}]$ of $C_u$ is called the vertex type of $u$. A map $M$ is called {\em semi-equivelar} (\cite{DM2018}; we include the definition for the sake of completeness) if $C_u$ and $C_v$ are of the same type for all $u, v \in V(M)$. More precisely, there exist integers $p_1, \dots, p_k\geq 3$ and $n_1, \dots, n_k\geq 1$, $p_i\neq p_{i+1}$ (addition in the suffix is modulo $k$) such that $C_u$ is of the form above for all $u\in V(M)$. In such a case, $M$ is called a semi-equivelar map of type (or vertex type) $[p_1^{n_1}, \dots, p_k^{n_k}]$ (or, a map of type $[p_1^{n_1}, \dots, p_k^{n_k}]$).
Two maps of fixed type on the torus are {\em isomorphic} if there exists a {\em homeomorphism} of the torus which maps vertices to vertices, edges to edges, faces to faces and preserves incidence. More precisely,
for two polyhedral complexes $M_{1}$ and $M_{2}$, an isomorphism is a map $f \colon M_{1}\rightarrow M_{2}$ such that $f|_{V(M_{1})} \colon V(M_{1}) \rightarrow V(M_{2})$ is a bijection and $f(\sigma)$ is a cell in $M_{2}$ if and only if $\sigma$ is a cell in $M_{1}$. In particular, if $M_1 = M_2$, then $f$ is called an {\em automorphism}. The \emph{automorphism group} $Aut(M)$ of $M$ is the group consisting of all automorphisms of $M$.
Throughout the last few decades there have been many results about maps and semi-equivelar maps that are highly symmetric. In particular, there has been recent interest in the study of discrete objects using combinatorial, geometric, and algebraic approaches, with the symmetries of maps receiving particular attention. There is a long history of work on maps on the Euclidean plane $\mathbb{R}^2$ and the $2$-dimensional torus.
An {\em Archimedean} tiling of the plane $\mathbb{R}^2$ is a tiling of $\mathbb{R}^2$ by regular polygons such that all the vertices of the tiling are of the same type.
Gr\"{u}nbaum and Shephard \cite{GS1977} showed that there are exactly eleven types of Archimedean tilings on the plane (see Example \ref{exam:plane}). These types are $[3^6]$, $[4^4]$, $[6^3]$, $[3^4,6^1]$, $[3^3,4^2]$, $[3^2,4^1,3^1,4^1]$, $[3^1,6^1,3^1,6^1]$, $[3^1,4^1,6^1,4^1]$, $[3^1,12^2]$, $[4^1,6^1,12^1]$, $[4^1,8^2]$.
Clearly, these tilings are also semi-equivelar on $\mathbb{R}^2$. But there are semi-equivelar maps on $\mathbb{R}^2$ which are not (isomorphic to) Archimedean tilings. In fact, there exist equivelar maps of type $[p^q]$ on $\mathbb{R}^2$ whenever $1/p+1/q < 1/2$ (e.g., \cite{CM1957}, \cite{FT1965}). We know from \cite{DU2005, DM2017, DM2018} that the Archimedean tilings $E_i$ $(1 \le i \le 11)$ (in Section \ref{sec:examples}) are unique as semi-equivelar maps. That is, we have
\begin{proposition} \label{theo:plane}
Let $E_1, \dots, E_{11}$ be the Archimedean tilings on the plane given in Example $\ref{exam:plane}$. Let $X$ be a semi-equivelar map on the plane. If the type of $X$ is the same as the type of $E_i$ for some $i\leq 11$, then $X\cong E_i$. In particular, $X$ is vertex-transitive.
\end{proposition}
As a consequence of Proposition \ref{theo:plane} we have
\begin{proposition} \label{propo-1}
Every semi-equivelar map on the torus is the quotient of an Archimedean tiling on the plane by a discrete subgroup of the automorphism group of the tiling.
\end{proposition}
A map is {\em regular} if its automorphism group acts regularly on flags (which, in nondegenerate cases, may be identified with mutually incident vertex-edge-face triples). In general, a map is \emph{semiregular} (or \emph{almost regular}) if it has as few flag orbits as possible for its type. A map is \emph{$k$-regular} if it is equivelar and the number of its flag orbits under the automorphism group is $k$; in particular, if $k =1$, it is called regular. Similarly, a map is called \emph{$k$-semiregular} if it has more flag orbits than the minimum possible for its type and the number of its flag orbits is $k$.
The study of regular maps on compact surfaces has a long and rich history. Its early stages go back to the ancient Greeks' interest in highly symmetric solids and (much later) to Kepler's discovery of stellated polyhedra. A new dimension to the combinatorial and group-theoretic nature of the study of highly symmetric maps was added in the late 19th century in the work of Klein and Poincar{\'e} by revealing facts that relate the theory of maps to hyperbolic geometry and automorphic functions.
A systematic approach to classification of regular maps on a given surface was initiated by Brahana in the early 20th century. In the span of the following 70 years this was gradually extended by contributions of numerous authors,
resulting by the end of 1980's in a classification of all chiral and regular maps on orientable surfaces of genus up to 7, and regular maps on nonorientable surfaces of genus at most 8. Details of this development are summarized in
the survey paper \cite{siran2006}.
In 2000, the classification was extended with the help of computing power to orientable and nonorientable surfaces of genus up to 101 and 202, respectively \cite{conder2001}. Nevertheless, by the end of 20th century, classification of regular maps
was available only for a finite number of surfaces.
There is also much interest in finding minimal regular covers of different families of maps and polytopes (see \cite{HW2012, MPW2013, pw2011}). In \cite{drach:2015}, Drach et al. constructed the minimal rotary cover of any equivelar toroidal map. They then extended their idea to toroidal maps that are no longer equivelar, and constructed minimal toroidal covers of the Archimedean toroidal maps with maximal symmetry (see \cite{drach:2019}), calling these covers almost regular; such covers are no longer regular (or chiral), but instead have the same number of flag orbits as their associated tessellation of the Euclidean plane. In this context, we prove the following.
\begin{theorem} \label{no-of-orbits}
Let $X$ be a semi-equivelar map on the torus. Let the flags of $X$ form $m$ ${\rm Aut}(X)$-orbits. \\
{\rm (a)} If the type of $X$ is $[3^6]$, $[4^4]$ or $[6^3]$ then $m \leq 1$.\\
{\rm (b)} If the type of $X$ is $[3^3, 4^2]$ then $m \leq 4$.\\
{\rm (c)} If the type of $X$ is $[3^1,12^2]$ or $[4^1,8^2]$ then $m\leq 9$.\\
{\rm (d)} If the type of $X$ is $[3^1,6^1,3^1,6^1]$, $[3^1,4^1,6^1,4^1]$ or $[3^4,6^1]$ then $m\leq 12$.\\
{\rm (e)} If the type of $X$ is $[3^2,4^1,3^1,4^1]$ then $m\leq 15$. \\
{\rm (f)} If the type of $X$ is $[4^1,6^1, 12^1]$ then $m\leq 36$.
These bounds are also sharp.
\end{theorem}
Many ideas about discrete symmetric structures on the torus follow from the concepts introduced by Coxeter and Moser in \cite{CM1957}. A surjective mapping $\eta \colon X \to Y$ from a map $X$ to a map $Y$ is called a {\em covering} if it preserves adjacency and sends vertices, edges, faces of $X$ to vertices, edges, faces of $Y$ respectively. More precisely, let $G \leq$ Aut($X$) be a discrete group acting on a map $X$ \emph{properly discontinuously} (\cite[Chapter 2]{katok:1992}). This means that each element $g$ of $G$ is associated with an automorphism $h_g$ of $X$ onto itself, in such a way that $h_{gh}$ is always equal to $h_g h_h$ for any two elements $g$ and $h$ of $G$, and the $G$-orbit of any vertex $u\in V(X)$ is locally finite. Then there exists $\Gamma \leq$ Aut($X$) such that $Y = X/\Gamma$. In such a case, $X$ is called a cover of $Y$. Recall that a map $X$ is regular if the automorphism group of $X$ acts transitively on the set of flags of $X$; clearly, if a semi-equivelar map is not equivelar then it cannot be regular.
A natural question then is:
\begin{question}\label{ques}
Let $X$ be a semi-equivelar map on the torus and let $X$ be $k$-semiregular. Does there exist an $m$-semiregular cover $Y(\neq X)$ of $X$? Does such a cover exist for every number of sheets, and if so, how many? How are the flag orbits of $X$ and $Y$ related?
\end{question}
In this context, we know the following on the torus.
\begin{proposition} (\cite{drach:2015, drach:2019}) \label{datta2020}
Let $E$ be a $k$-semiregular Archimedean tiling of type $Z$. If $X$ is a semi\mbox{-}equivelar toroidal map of type $Z$ then there exists a covering $\eta \colon Y \to X$ where $Y$ is $k$-semiregular and unique.
\end{proposition}
Here we prove the following.
\begin{theorem}\label{thm-main1}
{\rm (a)} If $X_1$ is a $m_1$-semiregular toroidal map of type $[3^3, 4^2]$, then there exists a covering $\eta_{k_1} \colon Y_{k_1} \to X_1$ where $Y_{k_1}$ is $k_1$-semiregular for each $k_1 \le m_1$ such that $4$ divides $k_1$.\\
{\rm (b)} If $X_2$ is a $m_2$-semiregular toroidal map of type $[3^1,12^2]$ or $[4^1,8^2]$, then there exists a covering $\eta_{k_2} \colon Y_{k_2} \to X_2$ where $Y_{k_2}$ is $k_2$-semiregular for each $k_2 \le m_2$ such that $3$ divides $k_2$.\\
{\rm (c)} If $X_3$ is a $m_3$-semiregular toroidal map of type $[3^1,6^1,3^1,6^1]$, then there exists a covering $\eta_{k_3} \colon Y_{k_3} \to X_3$ where $Y_{k_3}$ is $k_3$-semiregular for each $k_3 \le m_3$ such that $2$ divides $k_3$.\\
{\rm (d)} If $X_4$ is a $m_4$-semiregular toroidal map of type $[3^1,4^1,6^1,4^1]$ or $[3^4,6^1]$, then there exists a covering $\eta_{k_4} \colon Y_{k_4} \to X_4$ where $Y_{k_4}$ is $k_4$-semiregular for each $k_4 \le m_4$ such that $4$ divides $k_4$.\\
{\rm (e)} If $X_5$ is a $m_5$-semiregular toroidal map of type $[3^2,4^1,3^1,4^1]$, then there exists a covering $\eta_{k_5} \colon Y_{k_5} \to X_5$ where $Y_{k_5}$ is $k_5$-semiregular for each $k_5 \le m_5$ such that $5$ divides $k_5$.\\
{\rm (f)} If $X_6$ is a $m_6$-semiregular toroidal map of type $[4^1,6^1, 12^1]$, then there exists a covering $\eta_{k_6} \colon Y_{k_6} \to X_6$ where $Y_{k_6}$ is $k_6$-semiregular for each $k_6 \le m_6$ such that $6$ divides $k_6$ except $(k_6,m_6) = (12,18),(18,24),(24,36)$.
\end{theorem}
\begin{theorem}\label{thm-main2}
Let $X$ be a semi\mbox{-}equivelar toroidal map which is $k$-semiregular. Then, there exists an $n$-sheeted covering $\eta \colon Y \to X$ for each $n \in \mathbb{N}$ where $Y$ is $m$-semiregular for some $m\le k$.
\end{theorem}
\begin{theorem}\label{thm-main3}
Let $X$ be an $n$-sheeted semi\mbox{-}equivelar $k$-semiregular toroidal map and let $\sigma(n) = \sum_{d|n}d$. Then, there exist $\sigma(n)$ different $n$-sheeted $m$-semiregular coverings $\eta_{\ell} \colon Y_{\ell} \to X$ for $\ell \in \{1, 2, \dots, \sigma(n)\}$, i.e., $Y_{1},$ $Y_{2},$ $\dots,$ $Y_{\sigma(n)}$ are $n$-sheeted $m$-semiregular covers of $X$, pairwise distinct up to isomorphism, for some $m\le k$.
\end{theorem}
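The count $\sigma(n)$ above matches the classical count of index-$n$ sublattices of $\mathbb{Z}^2$, which are enumerated by Hermite normal form matrices. The following is only an illustrative verification sketch of that count, not part of the proof:

```python
def sigma(n):
    """Sum of the divisors of n."""
    return sum(d for d in range(1, n + 1) if n % d == 0)

def index_n_sublattices(n):
    """Hermite-normal-form matrices [[a, b], [0, d]] with a*d = n and
    0 <= b < d enumerate the sublattices of Z^2 of index n (hence the
    n-sheeted toroidal quotients); there are exactly sigma(n) of them."""
    return [(a, b, n // a)
            for a in range(1, n + 1) if n % a == 0
            for b in range(n // a)]

for n in (2, 3, 4, 6, 12):
    assert len(index_n_sublattices(n)) == sigma(n)
print(sigma(6))  # 1 + 2 + 3 + 6 = 12
```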
\begin{theorem}\label{thm-main4}
Let $X$ be an $m$-semiregular semi\mbox{-}equivelar toroidal map and let $Y$ be a $k$-semiregular cover of $X$. Then, there exists a $k$-semiregular covering map $\eta \colon Z \to X$ such that $Z$ is minimal.
\end{theorem}
We prove the above theorems in Section \ref{sec:proofs-1}. The idea is as follows: suppose $X$ is a given map. Then, by Prop. \ref{propo-1}, $X = E/K$ for some discrete subgroup $K$ of Aut($E$). Now for every subgroup $L$ of $K$ the group $K/L$ acts on $E/L$. Hence we get a covering $E/L \longrightarrow \frac{E/L}{K/L}$. We construct suitable subgroups $L$ and $G$ with $L \trianglelefteq G \le$ Aut($E$) such that $E/L$ has the desired number of orbits under the action of $G/L$, and we show that these orbits remain unchanged under the action of some element of Aut($E/L$) $\setminus$ $G/L$. To classify these coverings we relate them to some special types of matrices; from the enumeration of these matrices we obtain the classification of the coverings.
\section{Examples} \label{sec:examples}
We first present eleven Archimedean tilings on the plane. We need these examples for the proofs of our results in Section \ref{sec:proofs-1}.
\begin{example} \label{exam:plane}
{\rm Eleven Archimedean tilings on the plane are given in Fig. \ref{fig:Archi}. These are all the Archimedean tilings on the plane $\mathbb{R}^2$ (cf. \cite{GS1977}). All of these are vertex-transitive maps.
}
\end{example}
\bigskip
\setlength{\unitlength}{2.5mm}
\begin{picture}(58,23)(-6,-7)
\thinlines
\put(-6,-3){\line(1,0){40}}\put(-6,0){\line(1,0){40}}
\put(-6,6){\line(1,0){40}}\put(-6,3){\line(1,0){40}}
\put(-6,9){\line(1,0){40}}\put(-6,12){\line(1,0){40}}
\put(-6,15){\line(1,0){40}}
\put(-5.65,14){\line(2,3){1.3}}
\put(-5.65,8){\line(2,3){5.3}} \put(-5.65,2){\line(2,3){9.3}}
\put(-5.65,-4){\line(2,3){13.3}} \put(-1.65,-4){\line(2,3){13.3}}
\put(2.35,-4){\line(2,3){13.3}} \put(6.35,-4){\line(2,3){13.3}}
\put(10.35,-4){\line(2,3){13.3}} \put(14.35,-4){\line(2,3){13.3}}
\put(18.35,-4){\line(2,3){13.3}} \put(22.35,-4){\line(2,3){11.3}}
\put(26.35,-4){\line(2,3){7.3}} \put(30.35,-4){\line(2,3){3.3}}
\put(-4.35,-4){\line(-2,3){1.5}}
\put(-.35,-4){\line(-2,3){5.5}} \put(3.65,-4){\line(-2,3){9.5}}
\put(7.65,-4){\line(-2,3){13.3}} \put(11.65,-4){\line(-2,3){13.3}}
\put(15.65,-4){\line(-2,3){13.3}} \put(19.65,-4){\line(-2,3){13.3}}
\put(23.65,-4){\line(-2,3){13.3}} \put(27.65,-4){\line(-2,3){13.3}}
\put(31.65,-4){\line(-2,3){13.3}} \put(33.65,-1){\line(-2,3){11.3}}
\put(33.3,5.5){\line(-2,3){6.9}} \put(34,10.5){\line(-2,3){3.6}}
\put(-4.3,-2.6){\mbox{\tiny $u_{-3,-3}$}}
\put(-.3,-2.6){\mbox{\tiny $u_{-2,-3}$}}
\put(3.7,-2.6){\mbox{\tiny $u_{-1,-3}$}}
\put(7.7,-2.6){\mbox{\tiny $u_{0,-3}$}}
\put(11.7,-2.6){\mbox{\tiny $u_{1,-3}$}}
\put(15.7,-2.6){\mbox{\tiny $u_{2,-3}$}}
\put(19.7,-2.6){\mbox{\tiny $u_{3,-3}$}}
\put(23.7,-2.6){\mbox{\tiny $u_{4,-3}$}}
\put(27.7,-2.6){\mbox{\tiny $u_{5,-3}$}}
\put(31.7,-2.6){\mbox{\tiny $u_{6,-3}$}}
\put(-2.5,.4){\mbox{\tiny $u_{-3,-2}$}}
\put(1.5,.4){\mbox{\tiny $u_{-2,-2}$}}
\put(5.5,.4){\mbox{\tiny $u_{-1,-2}$}}
\put(9.5,.4){\mbox{\tiny $u_{0,-2}$}}
\put(13.5,.4){\mbox{\tiny $u_{1,-2}$}}
\put(17.5,.4){\mbox{\tiny $u_{2,-2}$}}
\put(21.5,.4){\mbox{\tiny $u_{3,-2}$}}
\put(25.5,.4){\mbox{\tiny $u_{4,-2}$}}
\put(29.5,.4){\mbox{\tiny $u_{5,-2}$}}
\put(33.5,.4){\mbox{\tiny $u_{6,-2}$}}
\put(-4.5,3.4){\mbox{\tiny $u_{-4,-1}$}}
\put(-.5,3.4){\mbox{\tiny $u_{-3,-1}$}}
\put(3.5,3.4){\mbox{\tiny $u_{-2,-1}$}}
\put(7.5,3.4){\mbox{\tiny $u_{-1,-1}$}}
\put(11.5,3.4){\mbox{\tiny $u_{0,-1}$}}
\put(15.5,3.4){\mbox{\tiny $u_{1,-1}$}}
\put(19.5,3.4){\mbox{\tiny $u_{2,-1}$}}
\put(23.5,3.4){\mbox{\tiny $u_{3,-1}$}}
\put(27.5,3.4){\mbox{\tiny $u_{4,-1}$}}
\put(31.5,3.4){\mbox{\tiny $u_{5,-1}$}}
\put(-2.5,6.4){\mbox{\tiny $u_{-4,0}$}}
\put(1.5,6.4){\mbox{\tiny $u_{-3,0}$}}
\put(5.6,6.4){\mbox{\tiny $u_{-2,0}$}}
\put(9.6,6.4){\mbox{\tiny $u_{-1,0}$}}
\put(13.4,6.4){\mbox{\tiny $u_{0,0}$}}
\put(17.4,6.4){\mbox{\tiny $u_{1,0}$}}
\put(21.4,6.4){\mbox{\tiny $u_{2,0}$}}
\put(25.4,6.4){\mbox{\tiny $u_{3,0}$}}
\put(29.4,6.4){\mbox{\tiny $u_{4,0}$}}
\put(33.4,6.4){\mbox{\tiny $u_{5,0}$}}
\put(-4.3,9.4){\mbox{\tiny $u_{-5,1}$}}
\put(-.3,9.4){\mbox{\tiny $u_{-4,1}$}}
\put(3.7,9.4){\mbox{\tiny $u_{-3,1}$}}
\put(7.7,9.4){\mbox{\tiny $u_{-2,1}$}}
\put(11.7,9.4){\mbox{\tiny $u_{-1,1}$}}
\put(15.6,9.4){\mbox{\tiny $u_{0,1}$}}
\put(19.6,9.4){\mbox{\tiny $u_{1,1}$}}
\put(23.6,9.4){\mbox{\tiny $u_{2,1}$}}
\put(27.6,9.4){\mbox{\tiny $u_{3,1}$}}
\put(31.6,9.4){\mbox{\tiny $u_{4,1}$}}
\put(-2.4,12.4){\mbox{\tiny $u_{-5,2}$}}
\put(1.6,12.4){\mbox{\tiny $u_{-4,2}$}}
\put(5.6,12.4){\mbox{\tiny $u_{-3,2}$}}
\put(9.6,12.4){\mbox{\tiny $u_{-2,2}$}}
\put(13.6,12.4){\mbox{\tiny $u_{-1,2}$}}
\put(17.6,12.4){\mbox{\tiny $u_{0,2}$}}
\put(21.6,12.4){\mbox{\tiny $u_{1,2}$}}
\put(25.6,12.4){\mbox{\tiny $u_{2,2}$}}
\put(29.6,12.4){\mbox{\tiny $u_{3,2}$}}
\put(33.6,12.4){\mbox{\tiny $u_{4,2}$}}
\put(-4.4,15.4){\mbox{\tiny $u_{-6,3}$}}
\put(-.4,15.4){\mbox{\tiny $u_{-5,3}$}}
\put(3.6,15.4){\mbox{\tiny $u_{-4,3}$}}
\put(7.6,15.4){\mbox{\tiny $u_{-3,3}$}}
\put(11.6,15.4){\mbox{\tiny $u_{-2,3}$}}
\put(15.6,15.4){\mbox{\tiny $u_{-1,3}$}}
\put(19.6,15.4){\mbox{\tiny $u_{0,3}$}}
\put(23.6,15.4){\mbox{\tiny $u_{1,3}$}}
\put(27.6,15.4){\mbox{\tiny $u_{2,3}$}}
\put(31.6,15.4){\mbox{\tiny $u_{3,3}$}}
\put(6,-6.5){(a) $E_8([3^6])$}
\thinlines
\put(38,-2){\line(1,0){14}} \put(38,2){\line(1,0){14}} \put(38,6){\line(1,0){14}} \put(38,10){\line(1,0){14}}
\put(39,-3){\line(0,1){14}} \put(43,-3){\line(0,1){14}} \put(47,-3){\line(0,1){14}} \put(51,-3){\line(0,1){14}}
\put(35.5,-1.5){\mbox{\tiny $v_{-1,-1}$}} \put(40.2,-1.5){\mbox{\tiny $v_{0,-1}$}} \put(44.2,-1.5){\mbox{\tiny $v_{1,-1}$}} \put(48.2,-1.5){\mbox{\tiny $v_{2,-1}$}}
\put(36.2,2.5){\mbox{\tiny $v_{-1,0}$}} \put(41,2.5){\mbox{\tiny $v_{0,0}$}} \put(45,2.5){\mbox{\tiny $v_{1,0}$}} \put(49,2.5){\mbox{\tiny $v_{2,0}$}}
\put(36.2,6.5){\mbox{\tiny $v_{-1,1}$}} \put(41,6.5){\mbox{\tiny $v_{0,1}$}} \put(45,6.5){\mbox{\tiny $v_{1,1}$}} \put(49,6.5){\mbox{\tiny $v_{2,1}$}}
\put(36.2,10.5){\mbox{\tiny $v_{-1,2}$}} \put(41,10.5){\mbox{\tiny $v_{0,2}$}} \put(45,10.5){\mbox{\tiny $v_{1,2}$}} \put(49,10.5){\mbox{\tiny $v_{2,2}$}}
\put(38,-6.5) {(b) $E_{9}([4^4])$}
\end{picture}
\begin{figure}[ht!]
\tiny
\tikzstyle{ver}=[]
\tikzstyle{vert}=[circle, draw, fill=black!100, inner sep=0pt, minimum width=4pt]
\tikzstyle{vertex}=[circle, draw, fill=black!00, inner sep=0pt, minimum width=4pt]
\tikzstyle{edge} = [draw,thick,-]
\centering
\begin{tikzpicture}[scale=0.4]
\draw ({sqrt(3)}, 1) -- (0, 2) -- ({-sqrt(3)}, 1) -- ({-sqrt(3)}, -1) -- (0, -2) -- ({sqrt(3)}, -1) -- ({sqrt(3)}, 1);
\draw ({6+sqrt(3)}, 1) -- (6+0, 2) -- ({6-sqrt(3)}, 1) -- ({6-sqrt(3)}, -1) -- (6+0, -2) -- ({6+sqrt(3)}, -1) -- ({6+sqrt(3)}, 1);
\draw ({12+sqrt(3)}, 1) -- (12+0, 2) -- ({12-sqrt(3)}, 1) -- ({12-sqrt(3)}, -1) -- (12+0, -2) -- ({12+sqrt(3)}, -1) -- ({12+sqrt(3)}, 1);
\draw ({-4.8+sqrt(3)}, 1) -- ({-sqrt(3)}, 1);
\draw ({-4.8+sqrt(3)}, -1) -- ({-sqrt(3)}, -1);
\draw ({sqrt(3)}, 1) -- ({6-sqrt(3)}, 1);
\draw ({sqrt(3)}, -1) -- ({6-sqrt(3)}, -1);
\draw ({6+sqrt(3)}, 1) -- ({12-sqrt(3)}, 1);
\draw ({6+sqrt(3)}, -1) -- ({12-sqrt(3)}, -1);
\draw ({12+sqrt(3)}, 1) -- ({16.8-sqrt(3)}, 1);
\draw ({12+sqrt(3)}, -1) -- ({16.8-sqrt(3)}, -1);
\draw ({-sqrt(3)}, 1) -- (-3+0, 2.6);
\draw (0, 2) -- (-1.25, 3.6);
\draw (0, 2) -- (1.3, 3.6);
\draw ({sqrt(3)}, 1) -- (3.05, 2.6);
\draw ({6-sqrt(3)}, 1) -- (3+0, 2.6);
\draw (6, 2) -- (4.75, 3.6);
\draw (6, 2) -- (7.3, 3.6);
\draw ({6+sqrt(3)}, 1) -- (9.05, 2.6);
\draw ({12-sqrt(3)}, 1) -- (9, 2.6);
\draw (12, 2) -- (10.75, 3.6);
\draw (12, 2) -- (13.3, 3.6);
\draw ({12+sqrt(3)}, 1) -- (15.05, 2.6);
\draw [xshift = 86, yshift = 130] (-6+0, -2) -- ({-6+sqrt(3)}, -1) -- ({-6+sqrt(3)}, 1) -- ({-6+sqrt(3)}, 1) -- (-6+0, 2);
\draw [xshift = 86, yshift = 130] ({sqrt(3)}, 1) -- (0, 2) -- ({-sqrt(3)}, 1) -- ({-sqrt(3)}, -1) -- (0, -2) -- ({sqrt(3)}, -1) -- ({sqrt(3)}, 1);
\draw [xshift = 86, yshift = 130] ({6+sqrt(3)}, 1) -- (6+0, 2) -- ({6-sqrt(3)}, 1) -- ({6-sqrt(3)}, -1) -- (6+0, -2) -- ({6+sqrt(3)}, -1) -- ({6+sqrt(3)}, 1);
\draw [xshift = 86, yshift = 130] (12+0, 2) -- ({12-sqrt(3)}, 1) -- ({12-sqrt(3)}, -1) -- (12+0, -2) ;
\draw [xshift = 86, yshift = 130] ({-6+sqrt(3)}, 1) -- ({-sqrt(3)}, 1);
\draw [xshift = 86, yshift = 130] ({-6+sqrt(3)}, -1) -- ({-sqrt(3)}, -1);
\draw [xshift = 86, yshift = 130] ({sqrt(3)}, 1) -- ({6-sqrt(3)}, 1);
\draw [xshift = 86, yshift = 130] ({sqrt(3)}, -1) -- ({6-sqrt(3)}, -1);
\draw [xshift = 86, yshift = 130] ({6+sqrt(3)}, 1) -- ({12-sqrt(3)}, 1);
\draw [xshift = 86, yshift = 130] ({6+sqrt(3)}, -1) -- ({12-sqrt(3)}, -1);
\draw [yshift = 260] ({sqrt(3)}, 1) -- (0, 2) -- ({-sqrt(3)}, 1) -- ({-sqrt(3)}, -1) -- (0, -2) -- ({sqrt(3)}, -1) -- ({sqrt(3)}, 1);
\draw [yshift = 260] ({6+sqrt(3)}, 1) -- (6+0, 2) -- ({6-sqrt(3)}, 1) -- ({6-sqrt(3)}, -1) -- (6+0, -2) -- ({6+sqrt(3)}, -1) -- ({6+sqrt(3)}, 1);
\draw [yshift = 260]({12+sqrt(3)}, 1) -- (12+0, 2) -- ({12-sqrt(3)}, 1) -- ({12-sqrt(3)}, -1) -- (12+0, -2) -- ({12+sqrt(3)}, -1) -- ({12+sqrt(3)}, 1);
\draw [yshift = 260] ({-4.8+sqrt(3)}, 1) -- ({-sqrt(3)}, 1);
\draw [yshift = 260] ({-4.8+sqrt(3)}, -1) -- ({-sqrt(3)}, -1);
\draw [yshift = 260] ({sqrt(3)}, 1) -- ({6-sqrt(3)}, 1);
\draw [yshift = 260] ({sqrt(3)}, -1) -- ({6-sqrt(3)}, -1);
\draw [yshift = 260] ({6+sqrt(3)}, 1) -- ({12-sqrt(3)}, 1);
\draw [yshift = 260] ({6+sqrt(3)}, -1) -- ({12-sqrt(3)}, -1);
\draw [yshift = 260] ({12+sqrt(3)}, 1) -- ({16.8-sqrt(3)}, 1);
\draw [yshift = 260] ({12+sqrt(3)}, -1) -- ({16.8-sqrt(3)}, -1);
\draw [xshift = 86, yshift = 130] ({-sqrt(3)}, 1) -- (-3+0, 2.6);
\draw [xshift = 86, yshift = 130](0, 2) -- (-1.25, 3.6);
\draw [xshift = 86, yshift = 130](0, 2) -- (1.3, 3.6);
\draw [xshift = 86, yshift = 130]({sqrt(3)}, 1) -- (3.05, 2.6);
\draw [xshift = 86, yshift = 130]({6-sqrt(3)}, 1) -- (3+0, 2.6);
\draw [xshift = 86, yshift = 130](6, 2) -- (4.75, 3.6);
\draw [xshift = 86, yshift = 130](6, 2) -- (7.3, 3.6);
\draw [xshift = 86, yshift = 130]({6+sqrt(3)}, 1) -- (9.05, 2.6);
\draw [xshift = 86, yshift = 130]({12-sqrt(3)}, 1) -- (9, 2.6);
\draw [xshift = 86, yshift = 130](12, 2) -- (10.75, 3.6);
\draw [yshift = 130](-3, 2) -- (-1.7, 3.6);
\draw [yshift = 130]({-3+sqrt(3)}, 1) -- (.05, 2.6);
\draw [xshift = 86, yshift = -130] ({-.8-sqrt(3)}, 2) -- (-3+0, 2.6);
\draw [xshift = 86, yshift = -130](-.8, 3) -- (-1.3, 3.6);
\draw [yshift = -130](-2.2, 3) -- (-1.7, 3.6);
\draw [yshift = -130]({-2.2+sqrt(3)}, 2) -- (.05, 2.6);
\draw [xshift = 256, yshift = -130] ({-.8-sqrt(3)}, 2) -- (-3+0, 2.6);
\draw [xshift = 256, yshift = -130](-.8, 3) -- (-1.3, 3.6);
\draw [xshift = 170,yshift = -130](-2.2, 3) -- (-1.7, 3.6);
\draw [xshift = 170,yshift = -130]({-2.2+sqrt(3)}, 2) -- (.05, 2.6);
\draw [xshift = 426, yshift = -130] ({-.8-sqrt(3)}, 2) -- (-3+0, 2.6);
\draw [xshift = 426, yshift = -130](-.8, 3) -- (-1.3, 3.6);
\draw [xshift = 340,yshift = -130](-2.2, 3) -- (-1.7, 3.6);
\draw [xshift = 340,yshift = -130]({-2.2+sqrt(3)}, 2) -- (.05, 2.6);
\draw [yshift = 260] ({-sqrt(3)}, 1) -- (-2.2+0, 1.6);
\draw [yshift = 260] (0, 2) -- (-0.5, 2.6);
\draw [yshift = 260] (0, 2) -- (.5, 2.6);
\draw [yshift = 260] ({sqrt(3)}, 1) -- (2.2, 1.6);
\draw [xshift = 170, yshift = 260] ({-sqrt(3)}, 1) -- (-2.2+0, 1.6);
\draw [xshift = 170, yshift = 260] (0, 2) -- (-0.5, 2.6);
\draw [xshift = 170, yshift = 260] (0, 2) -- (.5, 2.6);
\draw [xshift = 170, yshift = 260] ({sqrt(3)}, 1) -- (2.2, 1.6);
\draw [xshift = 340, yshift = 260] ({-sqrt(3)}, 1) -- (-2.2+0, 1.6);
\draw [xshift = 340, yshift = 260] (0, 2) -- (-0.5, 2.6);
\draw [xshift = 340, yshift = 260] (0, 2) -- (.5, 2.6);
\draw [xshift = 340, yshift = 260] ({sqrt(3)}, 1) -- (2.2, 1.6);
\node[ver] () at (-.85,-.8){$u_{0,-2}$};
\node[ver] () at (0,-1.4){$v_{0,-2}$};
\node[ver] () at (.85,-.8){$w_{0,-2}$};
\node[ver] () at (.85,.6){$u_{0,-1}$};
\node[ver] () at (0,1.3){$v_{0,-1}$};
\node[ver] () at (-.85,.6){$w_{0,-1}$};
\node[ver] () at (-.85+6,-.8){$u_{1,-2}$};
\node[ver] () at (0+6,-1.4){$v_{1,-2}$};
\node[ver] () at (.85+6,-.8){$w_{1,-2}$};
\node[ver] () at (.85+6,.6){$u_{1,-1}$};
\node[ver] () at (0+6,1.3){$v_{1,-1}$};
\node[ver] () at (-.85+6,.6){$w_{1,-1}$};
\node[ver] () at (-.85+12,-.8){$u_{2,-2}$};
\node[ver] () at (0+12,-1.4){$v_{2,-2}$};
\node[ver] () at (.85+12,-.8){$w_{2,-2}$};
\node[ver] () at (.85+12,.6){$u_{2,-1}$};
\node[ver] () at (0+12,1.3){$v_{2,-1}$};
\node[ver] () at (-.85+12,.6){$w_{2,-1}$};
\node[ver] () at (0-3,-1.5+4.6){$v_{-1,0}$};
\node[ver] () at (.7-3,-.8+4.5){$w_{-1,0}$};
\node[ver] () at (.7-3,.6+4.5){$u_{-1,1}$};
\node[ver] () at (0-3,1.3+4.6){$v_{-1,1}$};
\node[ver] () at (-1+3,-.8+4.5){$u_{0,0}$};
\node[ver] () at (0+3,-1.5+4.6){$v_{0,0}$};
\node[ver] () at (1+3,-.8+4.5){$w_{0,0}$};
\node[ver] () at (5.5,.6+4.5){$u_{0,1}$};
\node[ver] () at (0+3,1.3+4.6){$v_{0,1}$};
\node[ver] () at (.6,.6+4.5){$w_{0,1}$};
\node[ver] () at (-1+9,-.8+4.5){$u_{1,0}$};
\node[ver] () at (0+9,-1.5+4.6){$v_{1,0}$};
\node[ver] () at (1+9,-.8+4.5){$w_{1,0}$};
\node[ver] () at (1+9,.6+4.5){$u_{1,1}$};
\node[ver] () at (0+9,1.3+4.6){$v_{1,1}$};
\node[ver] () at (-1+9,.6+4.5){$w_{1,1}$};
\node[ver] () at (-1+15,-.8+4.5){$u_{2,0}$};
\node[ver] () at (0+15,-1.5+4.6){$v_{2,0}$};
\node[ver] () at (0+15,1.3+4.6){$v_{2,1}$};
\node[ver] () at (-1+15,.6+4.5){$w_{2,1}$};
\node[ver] () at (-.85,-.8+9){$u_{-1,2}$};
\node[ver] () at (-1,7){$v_{-1,2}$};
\node[ver] () at (2.7,-.8+9.2){$w_{-1,2}$};
\node[ver] () at (.85,.6+9){$u_{-1,3}$};
\node[ver] () at (0,1.3+9.1){$v_{-1,3}$};
\node[ver] () at (-.85,.6+9){$w_{-1,3}$};
\node[ver] () at (-.85+5.7,-.8+9.2){$u_{0,2}$};
\node[ver] () at (0+6,-1.5+9.1){$v_{0,2}$};
\node[ver] () at (.85+6,-.8+9){$w_{0,2}$};
\node[ver] () at (.85+6,.6+9){$u_{0,3}$};
\node[ver] () at (0+6,1.3+9.1){$v_{0,3}$};
\node[ver] () at (-.85+6,.6+9){$w_{0,3}$};
\node[ver] () at (-1+12,-.8+9){$u_{1,2}$};
\node[ver] () at (0+12,-1.5+9.1){$v_{1,2}$};
\node[ver] () at (.85+12,-.8+9){$w_{1,2}$};
\node[ver] () at (.85+12,.6+9){$u_{1,3}$};
\node[ver] () at (0+12,1.3+9.1){$v_{1,3}$};
\node[ver] () at (-1+12,.6+9){$w_{1,3}$};
\draw [thick, dotted] (3,4.5) -- (9,4.5);
\draw [thick, dotted] (3,4.5) -- (6,9);
\draw [thick, dotted] (3,4.5) -- (0,9);
\node[ver] () at (3,4.5){$\bullet$};
\put(3.6,6.8){\mbox{$O$}}
\node[ver] () at (9,4.5){$\bullet$};
\put(14.6,7.2){\mbox{$A_5$}}
\node[ver] () at (6,9){$\bullet$};
\put(10,14){\mbox{$B_5$}}
\node[ver] () at (0,9){$\bullet$};
\node[ver] () at (5.2, -4){\normalsize (c) $E_5([3^1, 4^1, 6^1, 4^1])$};
\end{tikzpicture}\hfill
\begin{tikzpicture}[scale=0.48]
\draw [xshift = -55, yshift = 285] ({2*cos(195)},{2*sin(185)}) -- ({2*cos(195)},{2*sin(195)}) -- ({2*cos(225)},{2*sin(225)}) -- ({2*cos(255)},{2*sin(255)}) -- ({2*cos(285)},{2*sin(285)}) -- ({2*cos(315)},{2*sin(315)}) -- ({2*cos(345)},{2*sin(345)});
\draw [xshift = 55, yshift = 285] ({2*cos(195)},{2*sin(185)}) -- ({2*cos(195)},{2*sin(195)}) -- ({2*cos(225)},{2*sin(225)}) -- ({2*cos(255)},{2*sin(255)}) -- ({2*cos(285)},{2*sin(285)}) -- ({2*cos(315)},{2*sin(315)}) -- ({2*cos(345)},{2*sin(345)});
\draw [xshift = 165, yshift = 285] ({2*cos(195)},{2*sin(185)}) -- ({2*cos(195)},{2*sin(195)}) -- ({2*cos(225)},{2*sin(225)}) -- ({2*cos(255)},{2*sin(255)}) -- ({2*cos(285)},{2*sin(285)}) -- ({2*cos(315)},{2*sin(315)}) -- ({2*cos(345)},{2*sin(345)});
\draw [xshift = 275, yshift = 285] ({2*cos(195)},{2*sin(185)}) -- ({2*cos(195)},{2*sin(195)}) -- ({2*cos(225)},{2*sin(225)}) -- ({2*cos(255)},{2*sin(255)}) -- ({2*cos(270)},{2*sin(270)});
\draw [xshift = -110] ({2*cos(90)},{2*sin(90)}) -- ({2*cos(75)},{2*sin(75)}) -- ({2*cos(45)},{2*sin(45)}) -- ({2*cos(15)},{2*sin(15)}) -- ({2*cos(345)},{2*sin(345)}) -- ({2*cos(315)},{2*sin(315)}) -- ({2*cos(285)},{2*sin(285)}) -- ({2*cos(270)},{2*sin(270)});
\draw ({2*cos(15)},{2*sin(15)}) -- ({2*cos(45)},{2*sin(45)}) -- ({2*cos(75)},{2*sin(75)}) -- ({2*cos(105)},{2*sin(105)}) -- ({2*cos(135)},{2*sin(135)}) -- ({2*cos(165)},{2*sin(165)}) -- ({2*cos(195)},{2*sin(195)}) -- ({2*cos(225)},{2*sin(225)}) -- ({2*cos(255)},{2*sin(255)}) -- ({2*cos(285)},{2*sin(285)}) -- ({2*cos(315)},{2*sin(315)}) -- ({2*cos(345)},{2*sin(345)}) -- ({2*cos(15)},{2*sin(15)});
\draw [xshift = 110] ({2*cos(15)},{2*sin(15)}) -- ({2*cos(45)},{2*sin(45)}) -- ({2*cos(75)},{2*sin(75)}) -- ({2*cos(105)},{2*sin(105)}) -- ({2*cos(135)},{2*sin(135)}) -- ({2*cos(165)},{2*sin(165)}) -- ({2*cos(195)},{2*sin(195)}) -- ({2*cos(225)},{2*sin(225)}) -- ({2*cos(255)},{2*sin(255)}) -- ({2*cos(285)},{2*sin(285)}) -- ({2*cos(315)},{2*sin(315)}) -- ({2*cos(345)},{2*sin(345)}) -- ({2*cos(15)},{2*sin(15)});
\draw [xshift = 220] ({2*cos(15)},{2*sin(15)}) -- ({2*cos(45)},{2*sin(45)}) -- ({2*cos(75)},{2*sin(75)}) -- ({2*cos(105)},{2*sin(105)}) -- ({2*cos(135)},{2*sin(135)}) -- ({2*cos(165)},{2*sin(165)}) -- ({2*cos(195)},{2*sin(195)}) -- ({2*cos(225)},{2*sin(225)}) -- ({2*cos(255)},{2*sin(255)}) -- ({2*cos(285)},{2*sin(285)}) -- ({2*cos(315)},{2*sin(315)}) -- ({2*cos(345)},{2*sin(345)}) -- ({2*cos(15)},{2*sin(15)});
\draw [xshift = -55, yshift = 95] ({2*cos(15)},{2*sin(15)}) -- ({2*cos(45)},{2*sin(45)}) -- ({2*cos(75)},{2*sin(75)}) -- ({2*cos(105)},{2*sin(105)}) -- ({2*cos(135)},{2*sin(135)}) -- ({2*cos(165)},{2*sin(165)}) -- ({2*cos(195)},{2*sin(195)}) -- ({2*cos(225)},{2*sin(225)}) -- ({2*cos(255)},{2*sin(255)}) -- ({2*cos(285)},{2*sin(285)}) -- ({2*cos(315)},{2*sin(315)}) -- ({2*cos(345)},{2*sin(345)}) -- ({2*cos(15)},{2*sin(15)});
\draw [xshift = 55, yshift = 95] ({2*cos(15)},{2*sin(15)}) -- ({2*cos(45)},{2*sin(45)}) -- ({2*cos(75)},{2*sin(75)}) -- ({2*cos(105)},{2*sin(105)}) -- ({2*cos(135)},{2*sin(135)}) -- ({2*cos(165)},{2*sin(165)}) -- ({2*cos(195)},{2*sin(195)}) -- ({2*cos(225)},{2*sin(225)}) -- ({2*cos(255)},{2*sin(255)}) -- ({2*cos(285)},{2*sin(285)}) -- ({2*cos(315)},{2*sin(315)}) -- ({2*cos(345)},{2*sin(345)}) -- ({2*cos(15)},{2*sin(15)});
\draw [xshift = 165, yshift = 95] ({2*cos(15)},{2*sin(15)}) -- ({2*cos(45)},{2*sin(45)}) -- ({2*cos(75)},{2*sin(75)}) -- ({2*cos(105)},{2*sin(105)}) -- ({2*cos(135)},{2*sin(135)}) -- ({2*cos(165)},{2*sin(165)}) -- ({2*cos(195)},{2*sin(195)}) -- ({2*cos(225)},{2*sin(225)}) -- ({2*cos(255)},{2*sin(255)}) -- ({2*cos(285)},{2*sin(285)}) -- ({2*cos(315)},{2*sin(315)}) -- ({2*cos(345)},{2*sin(345)}) -- ({2*cos(15)},{2*sin(15)});
\draw [xshift = 275, yshift = 95] ({2*cos(90)},{2*sin(90)}) -- ({2*cos(105)},{2*sin(105)}) -- ({2*cos(135)},{2*sin(135)}) -- ({2*cos(165)},{2*sin(165)}) -- ({2*cos(195)},{2*sin(195)}) -- ({2*cos(225)},{2*sin(225)}) -- ({2*cos(255)},{2*sin(255)}) -- ({2*cos(270)},{2*sin(270)});
\draw [xshift = -110, yshift = 190] ({2*cos(90)},{2*sin(90)}) -- ({2*cos(75)},{2*sin(75)}) -- ({2*cos(45)},{2*sin(45)}) -- ({2*cos(15)},{2*sin(15)}) -- ({2*cos(345)},{2*sin(345)}) -- ({2*cos(315)},{2*sin(315)}) -- ({2*cos(285)},{2*sin(285)}) -- ({2*cos(270)},{2*sin(270)});
\draw [yshift = 190] ({2*cos(15)},{2*sin(15)}) -- ({2*cos(45)},{2*sin(45)}) -- ({2*cos(75)},{2*sin(75)}) -- ({2*cos(105)},{2*sin(105)}) -- ({2*cos(135)},{2*sin(135)}) -- ({2*cos(165)},{2*sin(165)}) -- ({2*cos(195)},{2*sin(195)}) -- ({2*cos(225)},{2*sin(225)}) -- ({2*cos(255)},{2*sin(255)}) -- ({2*cos(285)},{2*sin(285)}) -- ({2*cos(315)},{2*sin(315)}) -- ({2*cos(345)},{2*sin(345)}) -- ({2*cos(15)},{2*sin(15)});
\draw [xshift = 110, yshift = 190] ({2*cos(15)},{2*sin(15)}) -- ({2*cos(45)},{2*sin(45)}) -- ({2*cos(75)},{2*sin(75)}) -- ({2*cos(105)},{2*sin(105)}) -- ({2*cos(135)},{2*sin(135)}) -- ({2*cos(165)},{2*sin(165)}) -- ({2*cos(195)},{2*sin(195)}) -- ({2*cos(225)},{2*sin(225)}) -- ({2*cos(255)},{2*sin(255)}) -- ({2*cos(285)},{2*sin(285)}) -- ({2*cos(315)},{2*sin(315)}) -- ({2*cos(345)},{2*sin(345)}) -- ({2*cos(15)},{2*sin(15)});
\draw [xshift = 220, yshift = 190] ({2*cos(15)},{2*sin(15)}) -- ({2*cos(45)},{2*sin(45)}) -- ({2*cos(75)},{2*sin(75)}) -- ({2*cos(105)},{2*sin(105)}) -- ({2*cos(135)},{2*sin(135)}) -- ({2*cos(165)},{2*sin(165)}) -- ({2*cos(195)},{2*sin(195)}) -- ({2*cos(225)},{2*sin(225)}) -- ({2*cos(255)},{2*sin(255)}) -- ({2*cos(285)},{2*sin(285)}) -- ({2*cos(315)},{2*sin(315)}) -- ({2*cos(345)},{2*sin(345)}) -- ({2*cos(15)},{2*sin(15)});
\draw [xshift = -55, yshift = -95] ({2*cos(15)},{2*sin(15)}) -- ({2*cos(45)},{2*sin(45)}) -- ({2*cos(75)},{2*sin(75)}) -- ({2*cos(105)},{2*sin(105)}) -- ({2*cos(135)},{2*sin(135)}) -- ({2*cos(165)},{2*sin(165)}) -- ({2*cos(165)},{2*sin(172.5)});
\draw [xshift = 55, yshift = -95] ({2*cos(15)},{2*sin(15)}) -- ({2*cos(45)},{2*sin(45)}) -- ({2*cos(75)},{2*sin(75)}) -- ({2*cos(105)},{2*sin(105)}) -- ({2*cos(135)},{2*sin(135)}) -- ({2*cos(165)},{2*sin(165)}) -- ({2*cos(165)},{2*sin(172.5)});
\draw [xshift = 165, yshift = -95] ({2*cos(15)},{2*sin(15)}) -- ({2*cos(45)},{2*sin(45)}) -- ({2*cos(75)},{2*sin(75)}) -- ({2*cos(105)},{2*sin(105)}) -- ({2*cos(135)},{2*sin(135)}) -- ({2*cos(165)},{2*sin(165)}) -- ({2*cos(165)},{2*sin(172.5)});
\draw [xshift = 275, yshift = -95] ({2*cos(90)},{2*sin(90)}) -- ({2*cos(105)},{2*sin(105)}) -- ({2*cos(135)},{2*sin(135)}) -- ({2*cos(165)},{2*sin(165)}) -- ({2*cos(165)},{2*sin(172.5)});
\node[ver] () at (1.1-4.4,-1.3){\tiny $w_{-1,-1}$};
\node[ver] () at (-2.3,-2.2){\tiny $w_{-2,-1}$};
\node[ver] () at (1.1-4.2,1.2){\tiny $v_{-1,0}$};
\node[ver] () at (.5-4.3,1.7){\tiny $v_{-2,0}$};
\node[ver] () at (-.9,-.6){\tiny $u_{-1,-1}$};
\node[ver] () at (-.8,-1.3){\tiny $v_{0,-1}$};
\node[ver] () at (0,-1.7){\tiny $v_{1,-1}$};
\node[ver] () at (2.7,-.6){\tiny $u_{1,-1}$};
\node[ver] () at (.8,-1.3){\tiny $w_{1,-1}$};
\node[ver] () at (1.3,-2.2){\tiny $w_{0,-1}$};
\node[ver] () at (2.6,.4){\tiny $u_{0,0}$};
\node[ver] () at (1,1.2){\tiny $v_{1,0}$};
\node[ver] () at (1.1,1.9){\tiny $v_{0,0}$};
\node[ver] () at (-1,.4){\tiny $u_{-2,0}$};
\node[ver] () at (-.5,1.2){\tiny $w_{-2,0}$};
\node[ver] () at (-1.4,1.9){\tiny $w_{-1,0}$};
\node[ver] () at (3.2,-1.3){\tiny $v_{2,-1}$};
\node[ver] () at (3.8,-1.7){\tiny $v_{3,-1}$};
\node[ver] () at (5.1,-.5){\tiny $u_{3,-1}$};
\node[ver] () at (1.1+3.6,-1.3){\tiny $w_{3,-1}$};
\node[ver] () at (1.3+3.8,-2.2){\tiny $w_{2,-1}$};
\node[ver] () at (1.4+3.8,.5){\tiny $u_{2,0}$};
\node[ver] () at (1.1+3.8,1.1){\tiny $v_{3,0}$};
\node[ver] () at (5,1.9){\tiny $v_{2,0}$};
\node[ver] () at (-.7+3.8,1.3){\tiny $w_{0,0}$};
\node[ver] () at (2.7,1.9){\tiny $w_{1,0}$};
\node[ver] () at (6.9,-1.3){\tiny $v_{4,-1}$};
\node[ver] () at (6.7,-2.2){\tiny $v_{5,-1}$};
\node[ver] () at (1.4+7.6,-.5){\tiny $u_{5,-1}$};
\node[ver] () at (8.4,-1.3){\tiny $w_{5,-1}$};
\node[ver] () at (1.3+7.6,-2.2){\tiny $w_{4,-1}$};
\node[ver] () at (1.5+7.6,.5){\tiny $u_{4,0}$};
\node[ver] () at (1.1+7.6,1.1){\tiny $v_{5,0}$};
\node[ver] () at (8.8,1.9){\tiny $v_{4,0}$};
\node[ver] () at (-.7+7.6,1.3){\tiny $w_{2,0}$};
\node[ver] () at (-.4+7,1.9){\tiny $w_{3,0}$};
\node[ver] () at (1-4.1,-1.2 + 4){\tiny $u_{-3,0}$};
\node[ver] () at (0.8,-1.2 + 4){\tiny $u_{-1,0}$};
\node[ver] () at (4.5,-1.2 + 4){\tiny $u_{1,0}$};
\node[ver] () at (8.3,-1.2 + 4){\tiny $u_{3,0}$};
\node[ver] () at (1.2-4,-1.2 - 1.7){\tiny $u_{-2,-1}$};
\node[ver] () at (1.4-.4,-1.2 - 1.7){\tiny $u_{0,-1}$};
\node[ver] () at (1.4+3.4,-1.2 - 1.7){\tiny $u_{2,-1}$};
\node[ver] () at (1.6+7,-1.2 - 1.7){\tiny $u_{4,-1}$};
\node[ver] () at (-3.3,5.3){\tiny $w_{-3,1}$};
\node[ver] () at (-2.5,4.6){\tiny $w_{-4,1}$};
\node[ver] () at (1.1-4.2,7.8){\tiny $v_{-3,2}$};
\node[ver] () at (.5-4.3,1.7+6.6){\tiny $v_{-4,2}$};
\node[ver] () at (-1.2,-.5+6.7){\tiny $u_{-3,1}$};
\node[ver] () at (-.7,5.3){\tiny $v_{-2,1}$};
\node[ver] () at (-1.2,4.6){\tiny $v_{-1,1}$};
\node[ver] () at (1.2,-.5+6.7){\tiny $u_{-1,1}$};
\node[ver] () at (.8,5.3){\tiny $w_{-1,1}$};
\node[ver] () at (1.4,4.6){\tiny $w_{-2,1}$};
\node[ver] () at (1.1,.5+6.7){\tiny $u_{-2,2}$};
\node[ver] () at (.8,7.8){\tiny $v_{-1,2}$};
\node[ver] () at (1.2,8.7){\tiny $v_{-2,2}$};
\node[ver] () at (-2.8,.5+6.7){\tiny $u_{-4,2}$};
\node[ver] () at (-.6,7.8){\tiny $w_{-4,2}$};
\node[ver] () at (-1.2,8.7){\tiny $w_{-3,2}$};
\node[ver] () at (2.1,5){\tiny $v_{0,1}$};
\node[ver] () at (3.7,4.9){\tiny $v_{1,1}$};
\node[ver] () at (1.4+3.8,-.5+6.7){\tiny $u_{1,1}$};
\node[ver] () at (1.1+3.8,-1.1+6.7){\tiny $w_{1,1}$};
\node[ver] () at (1.3+3.8,-1.6+6.2){\tiny $w_{0,1}$};
\node[ver] () at (1.4+3.8,.5+6.7){\tiny $u_{0,2}$};
\node[ver] () at (1.1+3.8,7.8){\tiny $v_{1,2}$};
\node[ver] () at (4.8,8.7){\tiny $v_{0,2}$};
\node[ver] () at (3.4,7.8){\tiny $w_{-2,2}$};
\node[ver] () at (3.8,8.2){\tiny $w_{-1,2}$};
\node[ver] () at (-1+7.6,-1.1+6.7){\tiny $v_{2,1}$};
\node[ver] () at (-.5+7.6,-1.6+6.7){\tiny $v_{3,1}$};
\node[ver] () at (1.4+7.6,-.5+6.7){\tiny $u_{3,1}$};
\node[ver] () at (1.1+7.6,-1.1+6.7){\tiny $w_{3,1}$};
\node[ver] () at (.5+7.6,-1.6+6.7){\tiny $w_{2,1}$};
\node[ver] () at (1.5+7.6,.5+6.7){\tiny $u_{2,2}$};
\node[ver] () at (1.1+7.6,7.8){\tiny $v_{3,2}$};
\node[ver] () at (8.7,8.7){\tiny $v_{2,2}$};
\node[ver] () at (-.7+7.6,7.8){\tiny $w_{0,2}$};
\node[ver] () at (6.5,8.7){\tiny $w_{1,2}$};
\node[ver] () at (1-4.1,-1.2 + 4+6.7){\tiny $u_{-5,2}$};
\node[ver] () at (0.8,-1.2 + 4+6.7){\tiny $u_{-3,2}$};
\node[ver] () at (4.6,-1.2 + 4+6.7){\tiny $u_{-1,2}$};
\node[ver] () at (8.4,-1.2 + 4+6.7){\tiny $u_{1,2}$};
\node[ver] () at (1.1-4.2,-1.2 - 1.7+6.7){\tiny $u_{-4,1}$};
\node[ver] () at (1.2-.4,-1.2 - 1.7+6.7){\tiny $u_{-2,1}$};
\node[ver] () at (1+3.5,-1.2 - 1.7+6.7){\tiny $u_{0,1}$};
\node[ver] () at (1.4+7,-1.2 - 1.7+6.7){\tiny $u_{2,1}$};
\draw [thick, dotted] (2,3.3) -- (5.8,3.3);
\draw [thick, dotted] (2,3.3) -- (3.7,6.7);
\draw [thick, dotted] (2,3.3) -- (0,6.7);
\node[ver] () at (2,3.3){$\bullet$};
\put(2.8,6.3){\mbox{$O$}}
\node[ver] () at (5.8,3.3){$\bullet$};
\put(12,6.3){\mbox{$A_6$}}
\node[ver] () at (3.7,6.7){$\bullet$};
\put(8,13){\mbox{$B_6$}}
\node[ver] () at (0,6.7){$\bullet$};
\node[ver] () at (3, -4){\normalsize (d) $E_{6}([3^1, 12^2])$};
\end{tikzpicture}
\end{figure}
\smallskip
\begin{figure}[ht!]
\tiny
\tikzstyle{ver}=[]
\tikzstyle{vert}=[circle, draw, fill=black!100, inner sep=0pt, minimum width=4pt]
\tikzstyle{vertex}=[circle, draw, fill=black!00, inner sep=0pt, minimum width=4pt]
\tikzstyle{edge} = [draw,thick,-]
\centering
\begin{tikzpicture}[scale=0.2]
\draw[edge, thin](5,5)--(10,5)--(10,10)--(5,10)--(5,5);
\draw[edge, thin](15,2.5)--(20,2.5)--(20,7.5)--(15,7.5)--(15,2.5);
\draw[edge, thin](25,0)--(30,0)--(30,5)--(25,5)--(25,0);
\draw[edge, thin](10,5)--(15,2.5);
\draw[edge, thin](10,10)--(15,7.5);
\draw[edge, thin](10,5)--(15,7.5);
\draw[edge](4,5.5)--(5,5)--(10,5)--(15,2.5)--(20,2.5)--(25,0)--(30,0)--(31,-0.5);
\draw[edge](12,-8.5)--(12.5,-7.5)--(12.5,-2.5)--(15,2.5)--(15,7.5)--(17.5,12.5)--(17.5,17.5)--(18,18.5);
\node[ver] () at (10.3,-5.2){\scriptsize $u_{-1,-2}$};
\node[ver] () at (15,-7){\scriptsize $u_{0,-2}$};
\node[ver] () at (20.2,-7.7){\scriptsize $u_{1,-2}$};
\node[ver] () at (10.2,0){\scriptsize $u_{-1,-1}$};
\node[ver] () at (15.1,-3.5){\scriptsize $u_{0,-1}$};
\node[ver] () at (19.8,-2.7){\scriptsize $u_{1,-1}$};
\node[ver] () at (24.9,-4.5){\scriptsize $u_{2,-1}$};
\node[ver] () at (29.7,-5){\scriptsize $u_{3,-1}$};
\node[ver] () at (7,5.5){\scriptsize $u_{-2,0}$};
\node[ver] () at (13,4.8){\scriptsize $u_{-1,0}$};
\node[ver] () at (13,1.8){\scriptsize $u_{0,0}$};
\node[ver] () at (22.5,2.3){\scriptsize $u_{1,0}$};
\node[ver] () at (26.7,.5){\scriptsize $u_{2,0}$};
\node[ver] () at (32,-.2){\scriptsize $u_{3,0}$};
\node[ver] () at (7.5,10.5){\scriptsize $u_{-2,1}$};
\node[ver] () at (12.5,10){\scriptsize $u_{-1,1}$};
\node[ver] () at (17,8){\scriptsize $u_{0,1}$};
\node[ver] () at (22,7.3){\scriptsize $u_{1,1}$};
\node[ver] () at (27,5.5){\scriptsize $u_{2,1}$};
\node[ver] () at (32,4.8){\scriptsize $u_{3,1}$};
\node[ver] () at (10,14.2){\scriptsize $u_{-2,2}$};
\node[ver] () at (15.5,14.6){\scriptsize $u_{-1,2}$};
\node[ver] () at (19.6,11.7){\scriptsize $u_{0,2}$};
\node[ver] () at (25,12.2){\scriptsize $u_{1,2}$};
\node[ver] () at (29.1,10.6){\scriptsize $u_{2,2}$};
\node[ver] () at (19.2,16.7){\scriptsize $u_{0,3}$};
\node[ver] () at (24.7,17.3){\scriptsize $u_{1,3}$};
\node[ver] () at (29.1,14){\scriptsize $u_{2,3}$};
\draw[edge, thin](20,2.5)--(25,0);
\draw[edge, thin](20,7.5)--(25,5);
\draw[edge, thin](20,2.5)--(25,5);
\draw[edge, thin](30,5)--(31,7);
\draw[edge, thin](30,5)--(31,4.5);
\draw[edge, thin](30,0)--(31,0.5);
\draw[edge, thin](5,10)--(4,10.4);
\draw[edge, thin](10,10)--(7.5,15)--(5,10);
\draw[edge, thin](10,10)--(7.5,15)--(12.5,15)--(10,10);
\draw[edge, thin](20,7.5)--(17.5,12.5)--(15,7.5);
\draw[edge, thin](20,7.5)--(17.5,12.5)--(22.5,12.5)--(20,7.5);
\draw[edge, thin](30,5)--(27.5,10)--(25,5);
\draw[edge, thin](12.5,15)--(17.5,12.5);
\draw[edge, thin](22.5,12.5)--(27.5,10);
\draw[edge, thin](17.5,17.5)--(12.5,15);
\draw[edge, thin](17.5,12.5)--(22.5,12.5)--(22.5,17.5)--(17.5,17.5)--(17.5,12.5);
\draw[edge, thin](22.5,17.5)--(27.5,15)--(22.5,12.5);
\draw[edge, thin](27.5,15)--(27.5,10);
\draw[edge, thin](27.5,15)--(31,15);
\draw[edge, thin](27.5,10)--(31,10);
\draw[edge, thin](7.5,15)--(5.5,16);
\draw[edge, thin](12.5,15)--(12.5,18);
\draw[edge, thin](17.5,17.5)--(15.5,18.5);
\draw[edge, thin](5,10)--(4,9.5);
\draw[edge, thin](5,5)--(3.5,3);
\draw[edge, thin](7.5,15)--(7.5,18);
\draw[edge, thin](22.5,17.5)--(23.5,19);
\draw[edge, thin](22.5,17.5)--(21.5,19);
\draw[edge, thin](27.5,15)--(28.5,17);
\draw[edge, thin](10,5)--(7.5,0)--(5,5);
\draw[edge, thin](10,5)--(7.5,0)--(12.5,-2.5)--(15,2.5);
\draw[edge, thin](20,2.5)--(17.5,-2.5)--(15,2.5);
\draw[edge, thin](17.5,-2.5)--(12.5,-2.5);
\draw[edge, thin](30,0)--(27.5,-5)--(25,0);
\draw[edge, thin](27.5,-5)--(22.5,-5)--(25,0);
\draw[edge, thin](22.5,-5)--(17.5,-2.5);
\draw[edge, thin](17.5,-2.5)--(12.5,-2.5)--(12.5,-7.5)--(17.5,-7.5)--(17.5,-2.5);
\draw[edge, thin](22.5,-5)--(17.5,-7.5);
\draw[edge, thin](12.5,-2.5)--(12.5,-7.5)--(7.5,-5)--(12.5,-2.5);
\draw[edge, thin](7.5,-5)--(7.5,0);
\draw[edge, thin](7.5,-5)--(6.5,-7);
\draw[edge, thin](7.5,-5)--(4.5,-5);
\draw[edge, thin](7.5,0)--(4.5,0);
\draw[edge, thin](22.5,-5)--(22.5,-7.5);
\draw[edge, thin](17.5,-7.5)--(19.5,-8.5);
\draw[edge, thin](17.5,-7.5)--(17,-8.5);
\draw[edge, thin](12.5,-7.5)--(13,-8.5);
\draw[edge, thin](27.5,-5)--(27.5,-7);
\draw[edge, thin](27.5,-5)--(29.5,-6);
\node[ver] () at (17.5,5){\scriptsize $\bullet$};
\node[ver] () at (27.5,2.5){\scriptsize $\bullet$};
\node[ver] () at (7.5,7.5){\scriptsize $\bullet$};
\node[ver] () at (15,-5){\scriptsize $\bullet$};
\node[ver] () at (20,15){\scriptsize $\bullet$};
\put(22.4,2.2){\mbox{$A_1$}}
\put(16.4,12.2){\mbox{$B_1$}}
\put(13,3.3){\mbox{$O$}}
\draw [dashed] (3.5,8.5) -- (31.5,1.5);
\draw [dashed] (14,-9) -- (21,19);
\node[ver] () at (16, -11){\normalsize (e) $E_{1}([3^2, 4^1, 3^1, 4^1])$};
\end{tikzpicture}\hfill
\begin{tikzpicture}[scale=.14]
\draw[edge, thin](-1.5,0)--(51.5,0);
\draw[edge, thin](-1.5,10)--(51.5,10);
\draw[edge, thin](-1.5,20)--(51.5,20);
\draw[edge, thin](-1.5,30)--(51.5,30);
\draw[edge, thin](-0.5,19)--(5.5,31);
\draw[edge, thin](-.5,-1)--(15.5,31);
\draw[edge, thin](9.5,-1)--(25.5,31);
\draw[edge, thin](19.5,-1)--(35.5,31);
\draw[edge, thin](29.5,-1)--(45.5,31);
\draw[edge, thin](39.5,-1)--(51.5,23);
\draw[edge, thin](49.5,-1)--(51.5,3);
\draw[edge, thin](5.5,-1)--(-0.5,11);
\draw[edge, thin](15.5,-1)--(-0.5,31);
\draw[edge, thin](25.5,-1)--(9.5,31);
\draw[edge, thin](35.5,-1)--(19.5,31);
\draw[edge, thin](45.5,-1)--(29.5,31);
\draw[edge, thin](50.5,9)--(39.5,31);
\draw[edge, thin](51.5,27)--(49.5,31);
\draw [dashed] (22.5,15) -- (32.5,15);
\draw [dashed] (22.5,15) -- (17.5,25);
\draw [dashed] (22.5,15) -- (27.5,25);
\node[ver] () at (22.5,15){\scriptsize $\bullet$};
\node[ver] () at (32.5,15){\scriptsize $\bullet$};
\node[ver] () at (27.5,25){\scriptsize $\bullet$};
\node[ver] () at (17.5,25){\scriptsize $\bullet$};
\put(18.5,8.5){\mbox{$A_4$}}
\put(16,13.5){\mbox{$B_4$}}
\put(12,7.5){\mbox{$O$}}
\node[ver] () at (2.5,-1.7){\scriptsize ${w_{-2,-1}}$};
\node[ver] () at (7.5,1.2){\scriptsize ${v_{-1,-1}}$};
\node[ver] () at (12.5,-1.7){\scriptsize ${w_{-1,-1}}$};
\node[ver] () at (17.5,1.2){\scriptsize ${v_{0,-1}}$};
\node[ver] () at (22.5,-1.7){\scriptsize ${w_{0,-1}}$};
\node[ver] () at (27.5,1.2){\scriptsize ${v_{1,-1}}$};
\node[ver] () at (32.5,-1.7){\scriptsize ${w_{1,-1}}$};
\node[ver] () at (37.5,1.2){\scriptsize ${v_{2,-1}}$};
\node[ver] () at (42.5,-1.7){\scriptsize ${w_{2,-1}}$};
\node[ver] () at (47.5,1.2){\scriptsize ${v_{3,-1}}$};
\node[ver] () at (52.5,-1.7){\scriptsize ${w_{3,-1}}$};
\node[ver] () at (6.3,5){\scriptsize ${u_{-2,-1}}$};
\node[ver] () at (16.3,5){\scriptsize ${u_{-1,-1}}$};
\node[ver] () at (25.5,5){\scriptsize ${u_{0,-1}}$};
\node[ver] () at (35.5,5){\scriptsize ${u_{1,-1}}$};
\node[ver] () at (45.5,5){\scriptsize ${u_{2,-1}}$};
\node[ver] () at (2.9,11){\scriptsize ${v_{-2,0}}$};
\node[ver] () at (7.5,8.7){\scriptsize ${w_{-2,0}}$};
\node[ver] () at (12.5,11){\scriptsize ${v_{-1,0}}$};
\node[ver] () at (17.5,8.7){\scriptsize ${w_{-1,0}}$};
\node[ver] () at (22,11){\scriptsize ${v_{0,0}}$};
\node[ver] () at (27,8.7){\scriptsize ${w_{0,0}}$};
\node[ver] () at (32,11){\scriptsize ${v_{1,0}}$};
\node[ver] () at (37,8.7){\scriptsize ${w_{1,0}}$};
\node[ver] () at (42,11){\scriptsize ${v_{2,0}}$};
\node[ver] () at (47,8.7){\scriptsize ${w_{2,0}}$};
\node[ver] () at (52,11){\scriptsize ${v_{3,0}}$};
\node[ver] () at (5,15){\scriptsize ${u_{-2,0}}$};
\node[ver] () at (15,15){\scriptsize ${u_{-1,0}}$};
\node[ver] () at (30,13.5){\scriptsize ${u_{0,0}}$};
\node[ver] () at (40.5,15){\scriptsize ${u_{1,0}}$};
\node[ver] () at (50,15){\scriptsize ${u_{2,0}}$};
\node[ver] () at (2.5,18.5){\scriptsize ${w_{-3,1}}$};
\node[ver] () at (8,20.7){\scriptsize ${v_{-2,1}}$};
\node[ver] () at (13,18.5){\scriptsize ${w_{-2,1}}$};
\node[ver] () at (18,20.7){\scriptsize ${v_{-1,1}}$};
\node[ver] () at (23,18.5){\scriptsize ${w_{-1,1}}$};
\node[ver] () at (27.5,20.7){\scriptsize ${v_{0,1}}$};
\node[ver] () at (32,18.5){\scriptsize ${w_{0,1}}$};
\node[ver] () at (37.5,20.7){\scriptsize ${v_{1,1}}$};
\node[ver] () at (42,18.5){\scriptsize ${w_{1,1}}$};
\node[ver] () at (47.5,20.7){\scriptsize ${v_{2,1}}$};
\node[ver] () at (52,18.5){\scriptsize ${w_{2,1}}$};
\node[ver] () at (6,25){\scriptsize ${u_{-3,1}}$};
\node[ver] () at (15.5,24){\scriptsize ${u_{-2,1}}$};
\node[ver] () at (25.5,24){\scriptsize ${u_{-1,1}}$};
\node[ver] () at (35.5,25){\scriptsize ${u_{0,1}}$};
\node[ver] () at (45,25){\scriptsize ${u_{1,1}}$};
\node[ver] () at (2.5,31){\scriptsize ${v_{-3,2}}$};
\node[ver] () at (7.7,28.5){\scriptsize ${w_{-3,2}}$};
\node[ver] () at (12.5,31){\scriptsize ${v_{-2,2}}$};
\node[ver] () at (17.5,28.5){\scriptsize ${w_{-2,2}}$};
\node[ver] () at (22.5,31){\scriptsize ${v_{-1,2}}$};
\node[ver] () at (27.5,28.5){\scriptsize ${w_{-1,2}}$};
\node[ver] () at (32.5,31){\scriptsize ${v_{0,2}}$};
\node[ver] () at (37.5,28.5){\scriptsize ${w_{0,2}}$};
\node[ver] () at (42.5,31){\scriptsize ${v_{1,2}}$};
\node[ver] () at (47.5,28.5){\scriptsize ${w_{1,2}}$};
\node[ver] () at (52.5,31){\scriptsize ${v_{2,2}}$};
\node[ver] () at (28,-6) {\normalsize (f) $E_4([3^1, 6^1, 3^1, 6^1])$};
\end{tikzpicture}
\end{figure}
\setlength{\unitlength}{3mm}
\begin{picture}(58,23)(-6,-7)
\thinlines
\put(-6,-3){\line(1,0){17}}\put(19,-3){\line(1,0){15}}
\put(-6,0){\line(1,0){7}}\put(9,0){\line(1,0){20}}
\put(-1,3){\line(1,0){20}}\put(27,3){\line(1,0){7}}
\put(-6,6){\line(1,0){15}}\put(17,6){\line(1,0){17}}
\put(-6,9){\line(1,0){5}}\put(7,9){\line(1,0){20}}
\put(-6,12){\line(1,0){23}}\put(25,12){\line(1,0){9}}
\put(-6,15){\line(1,0){13}}\put(15,15){\line(1,0){19}}
\put(10.7,14.85){\mbox{${}_\bullet$}}
\put(12.7,5.85){\mbox{${}_\bullet$}}
\put(20.7,11.85){\mbox{${}_\bullet$}}
\put(22.7,2.85){\mbox{${}_\bullet$}}
\put(13.27,5.45){\mbox{$\cdot$}}
\put(13.77,5.3){\mbox{$\cdot$}}
\put(14.27,5.15){\mbox{$\cdot$}}
\put(14.77,5){\mbox{$\cdot$}}
\put(15.27,4.85){\mbox{$\cdot$}}
\put(15.77,4.7){\mbox{$\cdot$}}
\put(16.27,4.55){\mbox{$\cdot$}}
\put(16.77,4.4){\mbox{$\cdot$}}
\put(17.27,4.25){\mbox{$\cdot$}}
\put(17.77,4.1){\mbox{$\cdot$}}
\put(18.27,3.95){\mbox{$\cdot$}}
\put(18.77,3.8){\mbox{$\cdot$}}
\put(19.27,3.65){\mbox{$\cdot$}}
\put(19.77,3.5){\mbox{$\cdot$}}
\put(20.27,3.35){\mbox{$\cdot$}}
\put(20.77,3.2){\mbox{$\cdot$}}
\put(21.27,3.05){\mbox{$\cdot$}}
\put(21.77,2.9){\mbox{$\cdot$}}
\put(22.27,2.75){\mbox{$\cdot$}}
\put(23.5,2.75){\mbox{$A_3$}}
\put(13.17,5.9){\mbox{$\cdot$}}
\put(13.57,6.2){\mbox{$\cdot$}}
\put(13.97,6.5){\mbox{$\cdot$}}
\put(14.37,6.8){\mbox{$\cdot$}}
\put(14.77,7.1){\mbox{$\cdot$}}
\put(15.17,7.4){\mbox{$\cdot$}}
\put(15.57,7.7){\mbox{$\cdot$}}
\put(15.97,8){\mbox{$\cdot$}}
\put(16.37,8.3){\mbox{$\cdot$}}
\put(16.77,8.6){\mbox{$\cdot$}}
\put(17.17,8.9){\mbox{$\cdot$}}
\put(17.57,9.2){\mbox{$\cdot$}}
\put(17.97,9.5){\mbox{$\cdot$}}
\put(18.37,9.8){\mbox{$\cdot$}}
\put(18.77,10.1){\mbox{$\cdot$}}
\put(19.17,10.4){\mbox{$\cdot$}}
\put(19.57,10.7){\mbox{$\cdot$}}
\put(19.97,11){\mbox{$\cdot$}}
\put(20.37,11.3){\mbox{$\cdot$}}
\put(21.5,11.3){\mbox{$B_3$}}
\put(10.87,14.15){\mbox{$\cdot$}}
\put(10.97,13.7){\mbox{$\cdot$}}
\put(11.07,13.25){\mbox{$\cdot$}}
\put(11.17,12.8){\mbox{$\cdot$}}
\put(11.27,12.35){\mbox{$\cdot$}}
\put(11.37,11.9){\mbox{$\cdot$}}
\put(11.47,11.45){\mbox{$\cdot$}}
\put(11.57,11){\mbox{$\cdot$}}
\put(11.67,10.55){\mbox{$\cdot$}}
\put(11.77,10.1){\mbox{$\cdot$}}
\put(11.87,9.65){\mbox{$\cdot$}}
\put(11.97,9.2){\mbox{$\cdot$}}
\put(12.07,8.75){\mbox{$\cdot$}}
\put(12.17,8.3){\mbox{$\cdot$}}
\put(12.27,7.85){\mbox{$\cdot$}}
\put(12.37,7.4){\mbox{$\cdot$}}
\put(12.47,6.95){\mbox{$\cdot$}}
\put(12.57,6.5){\mbox{$\cdot$}}
\put(12.67,6.05){\mbox{$\cdot$}}
\put(11.87,14.15){\mbox{$F_3$}}
\put(11.4,5){\mbox{$O$}}
\put(-5.65,14){\line(2,3){1.3}}
\put(-5.65,8){\line(2,3){5.3}}
\put(-3,6){\line(2,3){6.6}}
\put(-5.65,-4){\line(2,3){6.65}}\put(5,12){\line(2,3){2.5}}
\put(-1.65,-4){\line(2,3){10.65}}
\put(2,-4.5){\line(2,3){1}}\put(7,3){\line(2,3){8.7}}
\put(6.35,-4){\line(2,3){4.7}} \put(15,9){\line(2,3){4.7}}
\put(10.35,-4){\line(2,3){8.65}} \put(23,15){\line(2,3){1}}
\put(17,0){\line(2,3){10.7}}
\put(18.35,-4){\line(2,3){2.7}} \put(25,6){\line(2,3){6.5}}
\put(22.35,-4){\line(2,3){6.7}} \put(33,12){\line(2,3){1}}
\put(26.35,-4){\line(2,3){7.3}}
\put(30,-4.5){\line(2,3){1}}
\put(-4.35,-4){\line(-2,3){1.5}}
\put(-.35,-4){\line(-2,3){2.7}}
\put(3.65,-4){\line(-2,3){9.5}}
\put(8,-4.5){\line(-2,3){1}} \put(3,3){\line(-2,3){8.5}}
\put(11.65,-4){\line(-2,3){6.7}}\put(1,12){\line(-2,3){2.7}}
\put(13,0){\line(-2,3){10.7}}
\put(19.65,-4){\line(-2,3){4.7}}\put(11,9){\line(-2,3){4.7}}
\put(23.65,-4){\line(-2,3){10.7}}
\put(27.65,-4){\line(-2,3){2.7}}\put(21,6){\line(-2,3){6.7}}
\put(31.65,-4){\line(-2,3){8.7}} \put(19,15){\line(-2,3){1}}
\put(31,3){\line(-2,3){8.7}}
\put(29,12){\line(-2,3){2.7}} \put(34,4.5){\line(-2,3){1}}
\put(34,10.5){\line(-2,3){3.6}}
\put(-4.5,-2.6){\mbox{\tiny $u_{-3,-3}$}}
\put(-.5,-2.6){\mbox{\tiny $u_{-2,-3}$}}
\put(3.5,-2.6){\mbox{\tiny $u_{-1,-3}$}}
\put(7.5,-2.6){\mbox{\tiny $u_{0,-3}$}}
\put(11.5,-2.6){\mbox{\tiny $u_{1,-3}$}}
\put(19.5,-2.6){\mbox{\tiny $u_{3,-3}$}}
\put(23.5,-2.6){\mbox{\tiny $u_{4,-3}$}}
\put(27.5,-2.6){\mbox{\tiny $u_{5,-3}$}}
\put(31,-2.6){\mbox{\tiny $u_{6,-3}$}}
\put(-2.5,.4){\mbox{\tiny $u_{-3,-2}$}}
\put(1.5,.4){\mbox{\tiny $u_{-2,-2}$}}
\put(9.5,.4){\mbox{\tiny $u_{0,-2}$}}
\put(13.5,.4){\mbox{\tiny $u_{1,-2}$}}
\put(17.5,.4){\mbox{\tiny $u_{2,-2}$}}
\put(21.5,.4){\mbox{\tiny $u_{3,-2}$}}
\put(25.5,.4){\mbox{\tiny $u_{4,-2}$}}
\put(29.5,.4){\mbox{\tiny $u_{5,-2}$}}
\put(-.5,3.4){\mbox{\tiny $u_{-3,-1}$}}
\put(3.5,3.4){\mbox{\tiny $u_{-2,-1}$}}
\put(7.5,3.4){\mbox{\tiny $u_{-1,-1}$}}
\put(11,3.4){\mbox{\tiny $u_{0,-1}$}}
\put(15.5,3.4){\mbox{\tiny $u_{1,-1}$}}
\put(19.5,2.5){\mbox{\tiny $u_{2,-1}$}}
\put(27.5,3.4){\mbox{\tiny $u_{4,-1}$}}
\put(31.5,3.4){\mbox{\tiny $u_{5,-1}$}}
\put(-2.5,6.4){\mbox{\tiny $u_{-4,0}$}}
\put(1 ,6.4){\mbox{\tiny $u_{-3,0}$}}
\put(5.6,6.4){\mbox{\tiny $u_{-2,0}$}}
\put(9.6,6.4){\mbox{\tiny $u_{-1,0}$}}
\put(17.4,6.4){\mbox{\tiny $u_{1,0}$}}
\put(21.4,6.4){\mbox{\tiny $u_{2,0}$}}
\put(25.4,6.4){\mbox{\tiny $u_{3,0}$}}
\put(29 ,6.4){\mbox{\tiny $u_{4,0}$}}
\put(33.4,6.4){\mbox{\tiny $u_{5,0}$}}
\put(-4.2,9.4){\mbox{\tiny $u_{-5,1}$}}
\put(-.3,9.4){\mbox{\tiny $u_{-4,1}$}}
\put(7.7,9.4){\mbox{\tiny $u_{-2,1}$}}
\put(11.7,9.4){\mbox{\tiny $u_{-1,1}$}}
\put(15.6,9.4){\mbox{\tiny $u_{0,1}$}}
\put(19,9.4){\mbox{\tiny $u_{1,1}$}}
\put(23.6,9.4){\mbox{\tiny $u_{2,1}$}}
\put(27.6,9.4){\mbox{\tiny $u_{3,1}$}}
\put(-2.4,12.4){\mbox{\tiny $u_{-5,2}$}}
\put(1.6,12.4){\mbox{\tiny $u_{-4,2}$}}
\put(5.6,12.4){\mbox{\tiny $u_{-3,2}$}}
\put(9.3,12.4){\mbox{\tiny $u_{-2,2}$}}
\put(13.6,12.4){\mbox{\tiny $u_{-1,2}$}}
\put(17.6,12.4){\mbox{\tiny $u_{0,2}$}}
\put(25.6,12.4){\mbox{\tiny $u_{2,2}$}}
\put(29.6,12.4){\mbox{\tiny $u_{3,2}$}}
\put(33.6,12.4){\mbox{\tiny $u_{4,2}$}}
\put(-4.4,15.4){\mbox{\tiny $u_{-6,3}$}}
\put(-.4,15.4){\mbox{\tiny $u_{-5,3}$}}
\put(3.6,15.4){\mbox{\tiny $u_{-4,3}$}}
\put(7.6,15.4){\mbox{\tiny $u_{-3,3}$}}
\put(15.6,15.4){\mbox{\tiny $u_{-1,3}$}}
\put(19.6,15.4){\mbox{\tiny $u_{0,3}$}}
\put(23.6,15.4){\mbox{\tiny $u_{1,3}$}}
\put(27.6,15.4){\mbox{\tiny $u_{2,3}$}}
\put(31.6,15.4){\mbox{\tiny $u_{3,3}$}}
\put(6,-6.5) {(g) $E_3([3^4, 6^1])$}
\end{picture}
\begin{figure}[ht!]
\tiny
\tikzstyle{ver}=[]
\tikzstyle{vert}=[circle, draw, fill=black!100, inner sep=0pt, minimum width=4pt]
\tikzstyle{vertex}=[circle, draw, fill=black!00, inner sep=0pt, minimum width=4pt]
\tikzstyle{edge} = [draw,thick,-]
\centering
\begin{tikzpicture}[scale=0.45]
\draw [xshift = -80, yshift = 8] ({1*cos(25)},{1*sin(14)}) -- ({1*cos(100)},{1*sin(14)});
\draw [xshift = -80, yshift = -22] ({1*cos(25)},{1*sin(14)}) -- ({1*cos(100)},{1*sin(14)});
\draw [xshift = -140] ({2*cos(90)},{2*sin(90)}) -- ({2*cos(75)},{2*sin(75)}) -- ({2*cos(45)},{2*sin(45)}) -- ({2*cos(15)},{2*sin(15)}) -- ({2*cos(345)},{2*sin(345)}) -- ({2*cos(315)},{2*sin(315)}) -- ({2*cos(285)},{2*sin(285)}) -- ({2*cos(270)},{2*sin(270)});
\draw [xshift = 60, yshift = 8] ({1*cos(25)},{1*sin(14)}) -- ({1*cos(100)},{1*sin(14)});
\draw [xshift = 60, yshift = -22] ({1*cos(25)},{1*sin(14)}) -- ({1*cos(100)},{1*sin(14)});
\draw ({2*cos(15)},{2*sin(15)}) -- ({2*cos(45)},{2*sin(45)}) -- ({2*cos(75)},{2*sin(75)}) -- ({2*cos(105)},{2*sin(105)}) -- ({2*cos(135)},{2*sin(135)}) -- ({2*cos(165)},{2*sin(165)}) -- ({2*cos(195)},{2*sin(195)}) -- ({2*cos(225)},{2*sin(225)}) -- ({2*cos(255)},{2*sin(255)}) -- ({2*cos(285)},{2*sin(285)}) -- ({2*cos(315)},{2*sin(315)}) -- ({2*cos(345)},{2*sin(345)}) -- ({2*cos(15)},{2*sin(15)});
\draw [xshift = -78, yshift = -22] (-1.7,2.7) -- (-1.2,3.6);
\draw [xshift = -90, yshift = 10] (-.35,1.05) -- (0.1,1.95);
\draw [xshift = 62, yshift = -22] (-1.7,2.7) -- (-1.2,3.6);
\draw [xshift = 50, yshift = 10] (-.35,1.05) -- (0.1,1.95);
\draw [xshift = 202, yshift = -22] (-1.7,2.7) -- (-1.2,3.6);
\draw [xshift = 190, yshift = 10] (-.35,1.05) -- (0.1,1.95);
\draw [xshift = 342, yshift = -22] (-1.7,2.7) -- (-1.2,3.6);
\draw [xshift = 330, yshift = 10] (-.35,1.05) -- (0.1,1.95);
\draw [xshift = 200, yshift = 8] ({1*cos(25)},{1*sin(14)}) -- ({1*cos(100)},{1*sin(14)});
\draw [xshift = 200, yshift = -22] ({1*cos(25)},{1*sin(14)}) -- ({1*cos(100)},{1*sin(14)});
\draw [xshift = 140] ({2*cos(15)},{2*sin(15)}) -- ({2*cos(45)},{2*sin(45)}) -- ({2*cos(75)},{2*sin(75)}) -- ({2*cos(105)},{2*sin(105)}) -- ({2*cos(135)},{2*sin(135)}) -- ({2*cos(165)},{2*sin(165)}) -- ({2*cos(195)},{2*sin(195)}) -- ({2*cos(225)},{2*sin(225)}) -- ({2*cos(255)},{2*sin(255)}) -- ({2*cos(285)},{2*sin(285)}) -- ({2*cos(315)},{2*sin(315)}) -- ({2*cos(345)},{2*sin(345)}) -- ({2*cos(15)},{2*sin(15)});
\draw [xshift = 280] ({2*cos(15)},{2*sin(15)}) -- ({2*cos(45)},{2*sin(45)}) -- ({2*cos(75)},{2*sin(75)}) -- ({2*cos(105)},{2*sin(105)}) -- ({2*cos(135)},{2*sin(135)}) -- ({2*cos(165)},{2*sin(165)}) -- ({2*cos(195)},{2*sin(195)}) -- ({2*cos(225)},{2*sin(225)}) -- ({2*cos(255)},{2*sin(255)}) -- ({2*cos(285)},{2*sin(285)}) -- ({2*cos(315)},{2*sin(315)}) -- ({2*cos(345)},{2*sin(345)}) -- ({2*cos(15)},{2*sin(15)});
\draw [xshift = 0, yshift = 50] (-.5,.15) -- (-1.13,1.05);
\draw [xshift = -26, yshift = 35] (-.5,.15) -- (-1.13,1.05);
\draw [xshift = 140, yshift = 50] (-.5,.15) -- (-1.13,1.05);
\draw [xshift = 114, yshift = 35] (-.5,.15) -- (-1.13,1.05);
\draw [xshift = 280, yshift = 50] (-.5,.15) -- (-1.13,1.05);
\draw [xshift = 254, yshift = 35] (-.5,.15) -- (-1.13,1.05);
\draw [xshift = -68, yshift = -70] (-.92,.7) -- (-1.13,1.05);
\draw [xshift = -93, yshift = -85] (-.92,.7) -- (-1.13,1.05);
\draw [xshift = 72, yshift = -70] (-.92,.7) -- (-1.13,1.05);
\draw [xshift = 47, yshift = -85] (-.92,.7) -- (-1.13,1.05);
\draw [xshift = 212, yshift = -70] (-.92,.7) -- (-1.13,1.05);
\draw [xshift = 187, yshift = -85] (-.92,.7) -- (-1.13,1.05);
\draw [xshift = 352, yshift = -70] (-.92,.7) -- (-1.13,1.05);
\draw [xshift = 327, yshift = -85] (-.92,.7) -- (-1.13,1.05);
\draw [xshift = -70, yshift = 170] (-.6,.15) -- (-1.13,1.05);
\draw [xshift = -96, yshift = 155] (-.6,.15) -- (-1.13,1.05);
\draw [xshift = 70, yshift = 170] (-.6,.15) -- (-1.13,1.05);
\draw [xshift = 44, yshift = 155] (-.6,.15) -- (-1.13,1.05);
\draw [xshift = 210, yshift = 170] (-.6,.15) -- (-1.13,1.05);
\draw [xshift = 184, yshift = 155] (-.6,.15) -- (-1.13,1.05);
\draw [xshift = 350, yshift = 170] (-.6,.15) -- (-1.13,1.05);
\draw [xshift = 324, yshift = 155] (-.6,.15) -- (-1.13,1.05);
\draw [xshift = -72, yshift = 120] ({2*cos(15)},{2*sin(15)}) -- ({2*cos(45)},{2*sin(45)}) -- ({2*cos(75)},{2*sin(75)}) -- ({2*cos(105)},{2*sin(105)}) -- ({2*cos(135)},{2*sin(135)}) -- ({2*cos(165)},{2*sin(165)}) -- ({2*cos(195)},{2*sin(195)}) -- ({2*cos(225)},{2*sin(225)}) -- ({2*cos(255)},{2*sin(255)}) -- ({2*cos(285)},{2*sin(285)}) -- ({2*cos(315)},{2*sin(315)}) -- ({2*cos(345)},{2*sin(345)}) -- ({2*cos(15)},{2*sin(15)});
\draw [xshift = -12, yshift = 128] ({1*cos(25)},{1*sin(14)}) -- ({1*cos(100)},{1*sin(14)});
\draw [xshift = -12, yshift = 98] ({1*cos(25)},{1*sin(14)}) -- ({1*cos(100)},{1*sin(14)});
\draw [xshift = 68, yshift = 120] ({2*cos(15)},{2*sin(15)}) -- ({2*cos(45)},{2*sin(45)}) -- ({2*cos(75)},{2*sin(75)}) -- ({2*cos(105)},{2*sin(105)}) -- ({2*cos(135)},{2*sin(135)}) -- ({2*cos(165)},{2*sin(165)}) -- ({2*cos(195)},{2*sin(195)}) -- ({2*cos(225)},{2*sin(225)}) -- ({2*cos(255)},{2*sin(255)}) -- ({2*cos(285)},{2*sin(285)}) -- ({2*cos(315)},{2*sin(315)}) -- ({2*cos(345)},{2*sin(345)}) -- ({2*cos(15)},{2*sin(15)});
\draw [xshift = 128, yshift = 128] ({1*cos(25)},{1*sin(14)}) -- ({1*cos(100)},{1*sin(14)});
\draw [xshift = 128, yshift = 98] ({1*cos(25)},{1*sin(14)}) -- ({1*cos(100)},{1*sin(14)});
\draw [xshift = 208, yshift = 120] ({2*cos(15)},{2*sin(15)}) -- ({2*cos(45)},{2*sin(45)}) -- ({2*cos(75)},{2*sin(75)}) -- ({2*cos(105)},{2*sin(105)}) -- ({2*cos(135)},{2*sin(135)}) -- ({2*cos(165)},{2*sin(165)}) -- ({2*cos(195)},{2*sin(195)}) -- ({2*cos(225)},{2*sin(225)}) -- ({2*cos(255)},{2*sin(255)}) -- ({2*cos(285)},{2*sin(285)}) -- ({2*cos(315)},{2*sin(315)}) -- ({2*cos(345)},{2*sin(345)}) -- ({2*cos(15)},{2*sin(15)});
\draw [xshift = 268, yshift = 128] ({1*cos(25)},{1*sin(14)}) -- ({1*cos(100)},{1*sin(14)});
\draw [xshift = 268, yshift = 98] ({1*cos(25)},{1*sin(14)}) -- ({1*cos(100)},{1*sin(14)});
\draw [xshift = 348, yshift = 120] ({2*cos(90)},{2*sin(90)}) -- ({2*cos(105)},{2*sin(105)}) -- ({2*cos(135)},{2*sin(135)}) -- ({2*cos(165)},{2*sin(165)}) -- ({2*cos(195)},{2*sin(195)}) -- ({2*cos(225)},{2*sin(225)}) -- ({2*cos(255)},{2*sin(255)}) -- ({2*cos(270)},{2*sin(270)});
\draw [xshift = 272, yshift = 98] (-1.7,2.7) -- (-1.25,3.58);
\draw [xshift = 259, yshift = 130] (-.35,1.05) -- (0.1,1.95);
\draw [xshift = 132, yshift = 98] (-1.7,2.7) -- (-1.25,3.58);
\draw [xshift = 119, yshift = 130] (-.35,1.05) -- (0.1,1.95);
\draw [xshift = -8, yshift = 98] (-1.7,2.7) -- (-1.25,3.58);
\draw [xshift = -21, yshift = 130] (-.35,1.05) -- (0.1,1.95);
\draw [xshift = -83, yshift = 248] ({1*cos(25)},{1*sin(14)}) -- ({1*cos(100)},{1*sin(14)});
\draw [xshift = -83, yshift = 218] ({1*cos(25)},{1*sin(14)}) -- ({1*cos(100)},{1*sin(14)});
\draw [xshift = -143, yshift = 240] ({2*cos(90)},{2*sin(90)}) -- ({2*cos(75)},{2*sin(75)}) -- ({2*cos(45)},{2*sin(45)}) -- ({2*cos(15)},{2*sin(15)}) -- ({2*cos(345)},{2*sin(345)}) -- ({2*cos(315)},{2*sin(315)}) -- ({2*cos(285)},{2*sin(285)}) -- ({2*cos(270)},{2*sin(270)});
\draw [xshift = 57, yshift = 248] ({1*cos(25)},{1*sin(14)}) -- ({1*cos(100)},{1*sin(14)});
\draw [xshift = 57, yshift = 218] ({1*cos(25)},{1*sin(14)}) -- ({1*cos(100)},{1*sin(14)});
\draw [xshift = -3, yshift = 240] ({2*cos(15)},{2*sin(15)}) -- ({2*cos(45)},{2*sin(45)}) -- ({2*cos(75)},{2*sin(75)}) -- ({2*cos(105)},{2*sin(105)}) -- ({2*cos(135)},{2*sin(135)}) -- ({2*cos(165)},{2*sin(165)}) -- ({2*cos(195)},{2*sin(195)}) -- ({2*cos(225)},{2*sin(225)}) -- ({2*cos(255)},{2*sin(255)}) -- ({2*cos(285)},{2*sin(285)}) -- ({2*cos(315)},{2*sin(315)}) -- ({2*cos(345)},{2*sin(345)}) -- ({2*cos(15)},{2*sin(15)});
\draw [xshift = 197, yshift = 248] ({1*cos(25)},{1*sin(14)}) -- ({1*cos(100)},{1*sin(14)});
\draw [xshift = 197, yshift = 218] ({1*cos(25)},{1*sin(14)}) -- ({1*cos(100)},{1*sin(14)});
\draw [xshift = 317, yshift = 248] ({1*cos(25)},{1*sin(14)}) -- ({1*cos(60)},{1*sin(14)});
\draw [xshift = 317, yshift = 218] ({1*cos(25)},{1*sin(14)}) -- ({1*cos(60)},{1*sin(14)});
\draw [xshift = 320, yshift = 7] ({1*cos(25)},{1*sin(14)}) -- ({1*cos(60)},{1*sin(14)});
\draw [xshift = 320, yshift = -21] ({1*cos(25)},{1*sin(14)}) -- ({1*cos(60)},{1*sin(14)});
\draw [xshift = -153, yshift = 128] ({1*cos(25)},{1*sin(14)}) -- ({1*cos(60)},{1*sin(14)});
\draw [xshift = -153, yshift = 98] ({1*cos(25)},{1*sin(14)}) -- ({1*cos(60)},{1*sin(14)});
\draw [xshift = 137, yshift = 240] ({2*cos(15)},{2*sin(15)}) -- ({2*cos(45)},{2*sin(45)}) -- ({2*cos(75)},{2*sin(75)}) -- ({2*cos(105)},{2*sin(105)}) -- ({2*cos(135)},{2*sin(135)}) -- ({2*cos(165)},{2*sin(165)}) -- ({2*cos(195)},{2*sin(195)}) -- ({2*cos(225)},{2*sin(225)}) -- ({2*cos(255)},{2*sin(255)}) -- ({2*cos(285)},{2*sin(285)}) -- ({2*cos(315)},{2*sin(315)}) -- ({2*cos(345)},{2*sin(345)}) -- ({2*cos(15)},{2*sin(15)});
\draw [xshift = 277, yshift = 240] ({2*cos(15)},{2*sin(15)}) -- ({2*cos(45)},{2*sin(45)}) -- ({2*cos(75)},{2*sin(75)}) -- ({2*cos(105)},{2*sin(105)}) -- ({2*cos(135)},{2*sin(135)}) -- ({2*cos(165)},{2*sin(165)}) -- ({2*cos(195)},{2*sin(195)}) -- ({2*cos(225)},{2*sin(225)}) -- ({2*cos(255)},{2*sin(255)}) -- ({2*cos(285)},{2*sin(285)}) -- ({2*cos(315)},{2*sin(315)}) -- ({2*cos(345)},{2*sin(345)}) -- ({2*cos(15)},{2*sin(15)});
\draw [xshift = 275, yshift = -142] (-1.45,3.2) -- (-1.25,3.58);
\draw [xshift = 262, yshift = -110] (-.11,1.55) -- (0.1,1.95);
\draw [xshift = 135, yshift = -142] (-1.45,3.2) -- (-1.25,3.58);
\draw [xshift = 122, yshift = -110] (-.11,1.55) -- (0.1,1.95);
\draw [xshift = -5, yshift = -142] (-1.45,3.2) -- (-1.25,3.58);
\draw [xshift = -18, yshift = -110] (-.11,1.55) -- (0.1,1.95);
\draw [xshift = -80, yshift = 218] (-1.7,2.7) -- (-1.47,3.1);
\draw [xshift = -93, yshift = 250] (-.35,1.05) -- (-.1,1.5);
\draw [xshift = 60, yshift = 218] (-1.7,2.7) -- (-1.47,3.1);
\draw [xshift = 47, yshift = 250] (-.35,1.05) -- (-.1,1.5);
\draw [xshift = 200, yshift = 218] (-1.7,2.7) -- (-1.47,3.1);
\draw [xshift = 187, yshift = 250] (-.35,1.05) -- (-.1,1.5);
\draw [xshift = 340, yshift = 218] (-1.7,2.7) -- (-1.47,3.1);
\draw [xshift = 327, yshift = 250] (-.35,1.05) -- (-.1,1.5);
\draw [xshift = -1, yshift = 290] (-.6,.15) -- (-0.83,.55);
\draw [xshift = -26, yshift = 275] (-.6,.15) -- (-0.83,.55);
\draw [xshift = 139, yshift = 290] (-.6,.15) -- (-0.83,.55);
\draw [xshift = 114, yshift = 275] (-.6,.15) -- (-0.83,.55);
\draw [xshift = 279, yshift = 290] (-.6,.15) -- (-0.83,.55);
\draw [xshift = 254, yshift = 275] (-.6,.15) -- (-0.83,.55);
\node[ver] () at (1-5,-.5){\tiny $x_{-1,-2}$};
\node[ver] () at (-4.3,-1.3){\tiny $y_{-1,-2}$};
\node[ver] () at (-3.3,-2.1){\tiny $z_{-1,-2}$};
\node[ver] () at (1-5,.5){\tiny $u_{-1,-1}$};
\node[ver] () at (-4.3,1.3){\tiny $v_{-1,-1}$};
\node[ver] () at (.1-5,1.7){\tiny $w_{-1,-1}$};
\node[ver] () at (-1.1,-.5){\tiny $u_{0,-2}$};
\node[ver] () at (-.8,-1.3){\tiny $v_{0,-2}$};
\node[ver] () at (-1.5,-2){\tiny $w_{0,-2}$};
\node[ver] () at (1.1,-.5){\tiny $x_{0,-2}$};
\node[ver] () at (.8,-1.3){\tiny $y_{0,-2}$};
\node[ver] () at (1.4,-2.1){\tiny $z_{0,-2}$};
\node[ver] () at (1.1,.5){\tiny $u_{0,-1}$};
\node[ver] () at (.8,1.1){\tiny $v_{0,-1}$};
\node[ver] () at (1.4,1.9){\tiny $w_{0,-1}$};
\node[ver] () at (-1.1,.5){\tiny $x_{0,-1}$};
\node[ver] () at (-.8,1.1){\tiny $y_{0,-1}$};
\node[ver] () at (.1,1.5){\tiny $z_{0,-1}$};
\node[ver] () at (-1.1+5,-.5){\tiny $u_{1,-2}$};
\node[ver] () at (-.8+5,-1.3){\tiny $v_{1,-2}$};
\node[ver] () at (3.5,-2){\tiny $w_{1,-2}$};
\node[ver] () at (1.1+5,-.5){\tiny $x_{1,-2}$};
\node[ver] () at (.7+5,-1.3){\tiny $y_{1,-2}$};
\node[ver] () at (6.4,-2.1){\tiny $z_{1,-2}$};
\node[ver] () at (1.1+5,.5){\tiny $u_{1,-1}$};
\node[ver] () at (.8+5,1.1){\tiny $v_{1,-1}$};
\node[ver] () at (6.3,1.9){\tiny $w_{1,-1}$};
\node[ver] () at (-1.1+5,.5){\tiny $x_{1,-1}$};
\node[ver] () at (-.8+5,1.1){\tiny $y_{1,-1}$};
\node[ver] () at (5.1,1.5){\tiny $z_{1,-1}$};
\node[ver] () at (-1.2+10,-.5){\tiny $u_{2,-2}$};
\node[ver] () at (-.9+10,-1.3){\tiny $v_{2,-2}$};
\node[ver] () at (8.4,-2){\tiny $w_{2,-2}$};
\node[ver] () at (1+10,-.5){\tiny $x_{2,-2}$};
\node[ver] () at (.6+10,-1.3){\tiny $y_{2,-2}$};
\node[ver] () at (11.4,-2.1){\tiny $z_{2,-2}$};
\node[ver] () at (1+10,.5){\tiny $u_{2,-1}$};
\node[ver] () at (.7+10,1.1){\tiny $v_{2,-1}$};
\node[ver] () at (11.3,1.9){\tiny $w_{2,-1}$};
\node[ver] () at (-1.2+10,.5){\tiny $x_{2,-1}$};
\node[ver] () at (-.8+10,1.1){\tiny $y_{2,-1}$};
\node[ver] () at (10,1.5){\tiny $z_{2,-1}$};
\node[ver] () at (-1.1-2.5,-.5+4){\tiny $u_{-1,0}$};
\node[ver] () at (-1-2.3,-1.1+4){\tiny $v_{-1,0}$};
\node[ver] () at (-4,2.3){\tiny $w_{-1,0}$};
\node[ver] () at (1.2-2.6,-.5+4){\tiny $x_{-1,0}$};
\node[ver] () at (1-2.8,-1.1+4){\tiny $y_{-1,0}$};
\node[ver] () at (-1.1,2.2){\tiny $z_{-1,0}$};
\node[ver] () at (1.2-2.6,.5+4){\tiny $u_{-1,1}$};
\node[ver] () at (1-2.5,1.1+4){\tiny $v_{-1,1}$};
\node[ver] () at (.6-2.5,1.8+4){\tiny $w_{-1,1}$};
\node[ver] () at (-1.2-2.5,.5+4){\tiny $x_{-1,1}$};
\node[ver] () at (-1-2.5,1.1+4){\tiny $y_{-1,1}$};
\node[ver] () at (-.6-2.4,1.6+4){\tiny $z_{-1,1}$};
\node[ver] () at (-1.4+2.5,-.5+4.2){\tiny $u_{0,0}$};
\node[ver] () at (-1.2+2.6,-1.1+4){\tiny $v_{0,0}$};
\node[ver] () at (-.6+3,-1.6+4){\tiny $w_{0,0}$};
\node[ver] () at (1.2+2.5,-.5+4.2){\tiny $x_{0,0}$};
\node[ver] () at (1.1+2.2,-1.1+4){\tiny $y_{0,0}$};
\node[ver] () at (3.5,2.1){\tiny $z_{0,0}$};
\node[ver] () at (1.4+2.3,.5+4.2){\tiny $u_{0,1}$};
\node[ver] () at (1.4+3,1.1+4.4){\tiny $v_{0,1}$};
\node[ver] () at (.5+2.2,1.6+4.2){\tiny $w_{0,1}$};
\node[ver] () at (-1.4+2.5,.5+4.2){\tiny $x_{0,1}$};
\node[ver] () at (-1.2+1.5,1.1+4.4){\tiny $y_{0,1}$};
\node[ver] () at (2.4,6.3){\tiny $z_{0,1}$};
\node[ver] () at (-1.4+7.5,-.5+4.2){\tiny $u_{1,0}$};
\node[ver] () at (-1.2+7.6,-1.1+4){\tiny $v_{1,0}$};
\node[ver] () at (7.2,-1.6+4.1){\tiny $w_{1,0}$};
\node[ver] () at (1.2+7.5,-.5+4.2){\tiny $x_{1,0}$};
\node[ver] () at (1.1+7.1,-1.1+4){\tiny $y_{1,0}$};
\node[ver] () at (8.5,2.1){\tiny $z_{1,0}$};
\node[ver] () at (1.4+7.3,.5+4.2){\tiny $u_{1,1}$};
\node[ver] () at (1.1+7.2,1.1+4.3){\tiny $v_{1,1}$};
\node[ver] () at (8.5,1.6+4.5){\tiny $w_{1,1}$};
\node[ver] () at (-1.4+7.5,.5+4.2){\tiny $x_{1,1}$};
\node[ver] () at (-1.2+7.5,1.1+4.2){\tiny $y_{1,1}$};
\node[ver] () at (7.3,1.6+4.2){\tiny $z_{1,1}$};
\node[ver] () at (-1.6+12.5,-.5+4.2){\tiny $u_{2,0}$};
\node[ver] () at (-1.2+12.5,-1.1+4){\tiny $v_{2,0}$};
\node[ver] () at (-.6+12.5,-1.8+4.2){\tiny $w_{2,0}$};
\node[ver] () at (-1.6+12.5,.5+4.2){\tiny $x_{2,1}$};
\node[ver] () at (-1.2+12.5,1.1+4.4){\tiny $y_{2,1}$};
\node[ver] () at (-.4+12.5,1.6+4.2){\tiny $z_{2,1}$};
\node[ver] () at (2-6,-.5+8.3){\tiny $x_{-2,2}$};
\node[ver] () at (1.7-6,-1.3+8.5){\tiny $y_{-2,2}$};
\node[ver] () at (.1-5,-1.8+8.5){\tiny $z_{-2,2}$};
\node[ver] () at (1.2-5,.5+8.3){\tiny $u_{-2,3}$};
\node[ver] () at (.8-5,1.3+8.3){\tiny $v_{-2,3}$};
\node[ver] () at (.1-5,1.7+8.3){\tiny $w_{-2,3}$};
\node[ver] () at (-1.2,-.5+8.3){\tiny $u_{-1,2}$};
\node[ver] () at (-.8,-1.1+8.3){\tiny $v_{-1,2}$};
\node[ver] () at (-1.6,-1.6+8){\tiny $w_{-1,2}$};
\node[ver] () at (1.1,-.5+8.5){\tiny $x_{-1,2}$};
\node[ver] () at (2.2,-1.1+8.1){\tiny $y_{-1,2}$};
\node[ver] () at (0,-1.6+8.4){\tiny $z_{-1,2}$};
\node[ver] () at (1.1,.5+8.4){\tiny $u_{-1,3}$};
\node[ver] () at (1.3,10.4){\tiny $v_{-1,3}$};
\node[ver] () at (.6,1.5+8.2){\tiny $w_{-1,3}$};
\node[ver] () at (-1.2,.5+8.3){\tiny $x_{-1,3}$};
\node[ver] () at (-.8,1.1+8.3){\tiny $y_{-1,3}$};
\node[ver] () at (0,1.6+8.5){\tiny $z_{-1,3}$};
\node[ver] () at (-1.5+5,-.5+8.3){\tiny $u_{0,2}$};
\node[ver] () at (-1.3+5,-1.1+8.3){\tiny $v_{0,2}$};
\node[ver] () at (4.8,6.2){\tiny $w_{0,2}$};
\node[ver] () at (1.2+5,-.5+8.3){\tiny $x_{0,2}$};
\node[ver] () at (.7+5,-1.1+8.3){\tiny $y_{0,2}$};
\node[ver] () at (6,-1.6+8){\tiny $z_{0,2}$};
\node[ver] () at (1.2+5,.5+8.3){\tiny $u_{0,3}$};
\node[ver] () at (.7+5,1.1+8.3){\tiny $v_{0,3}$};
\node[ver] () at (6,10.4){\tiny $w_{0,3}$};
\node[ver] () at (-1.6+5.1,.5+8.3){\tiny $x_{0,3}$};
\node[ver] () at (-1.2+5,1.1+8.3){\tiny $y_{0,3}$};
\node[ver] () at (4.6,1.6+8.4){\tiny $z_{0,3}$};
\node[ver] () at (-1.5+10,-.5+8.3){\tiny $u_{1,2}$};
\node[ver] () at (-1.3+10.2,-1.1+8.1){\tiny $v_{1,2}$};
\node[ver] () at (9.8,6.2){\tiny $w_{1,2}$};
\node[ver] () at (1+10,-.5+8.3){\tiny $x_{1,2}$};
\node[ver] () at (.6+10,-1.1+8.2){\tiny $y_{1,2}$};
\node[ver] () at (11,-1.6+8){\tiny $z_{1,2}$};
\node[ver] () at (1.1+10,.5+8.3){\tiny $u_{1,3}$};
\node[ver] () at (.7+10,1.1+8.5){\tiny $v_{1,3}$};
\node[ver] () at (11,10.4){\tiny $w_{1,3}$};
\node[ver] () at (-1.6+10,.5+8.3){\tiny $x_{1,3}$};
\node[ver] () at (-1.2+10,1.1+8.3){\tiny $y_{1,3}$};
\node[ver] () at (-.6+10,1.6+8.4){\tiny $z_{1,3}$};
\draw [thick, dotted] (2.5,4.2) -- (7.5,4.2);
\draw [thick, dotted] (2.5,4.2) -- (5,8.4);
\draw [thick, dotted] (2.5,4.2) -- (0,8.4);
\node[ver] () at (2.5,4.2){$\bullet$};
\put(3.5,5.5){\mbox{$O$}}
\node[ver] () at (7.5,4.2){$\bullet$};
\put(11.5,6.2){\mbox{$A_7$}}
\node[ver] () at (5,8.4){$\bullet$};
\put(7,13.2){\mbox{$B_7$}}
\node[ver] () at (0,8.4){$\bullet$};
\node[ver] () at (3, -3.5){\normalsize (h) $E_{7}([4^1, 6^1, 12^1])$};
\end{tikzpicture}\hfill
\begin{tikzpicture}[scale=0.41]
\draw [yshift = -106] ({-3.7+2*cos(337.5)}, {1+2*sin(337.5)}) -- ({-3.7+2*cos(22.5)}, {2*sin(22.5)}) -- ({-3.7+2*cos(67.5)}, {2*sin(67.5)}) -- ({-5.7+2*cos(22.5)}, {2*sin(67.5)});
\draw [yshift = -106] ({2*cos(22.5)}, {2*sin(22.5)}) -- ({2*cos(67.5)}, {2*sin(67.5)}) -- ({2*cos(112.5)}, {2*sin(112.5)}) -- ({2*cos(157.5)}, {2*sin(157.5)});
\draw [yshift = -106] ({3.7+2*cos(337.5)}, {1+2*sin(337.5)}) -- ({3.7+2*cos(22.5)}, {2*sin(22.5)}) -- ({3.7+2*cos(67.5)}, {2*sin(67.5)}) -- ({3.7+2*cos(112.5)}, {2*sin(112.5)}) -- ({3.7+2*cos(157.5)}, {2*sin(157.5)}) -- ({3.7+2*cos(202.5)}, {1+2*sin(202.5)});
\draw [yshift = -106] ({7.4+2*cos(337.5)}, {1+ 2*sin(337.5)}) -- ({7.4+2*cos(22.5)}, {2*sin(22.5)}) -- ({7.4+2*cos(67.5)}, {2*sin(67.5)}) -- ({7.4+2*cos(112.5)}, {2*sin(112.5)}) -- ({7.4+2*cos(157.5)}, {2*sin(157.5)}) -- ({7.4+2*cos(202.5)}, {1+2*sin(202.5)});
\draw [yshift = -106] ({10.4+2*cos(67.5)}, {2*sin(67.5)}) -- ({11.1+2*cos(112.5)}, {2*sin(112.5)}) -- ({11.1+2*cos(157.5)}, {2*sin(157.5)});
\draw ({-3+2*cos(247.5)}, {2*sin(247.5)}) -- ({-3.7+2*cos(292.5)}, {2*sin(292.5)}) -- ({-3.7+2*cos(337.5)}, {2*sin(337.5)}) -- ({-3.7+2*cos(22.5)}, {2*sin(22.5)}) -- ({-3.7+2*cos(67.5)}, {2*sin(67.5)}) -- ({-5.7+2*cos(22.5)}, {2*sin(67.5)});
\draw ({2*cos(22.5)}, {2*sin(22.5)}) -- ({2*cos(67.5)}, {2*sin(67.5)}) -- ({2*cos(112.5)}, {2*sin(112.5)}) -- ({2*cos(157.5)}, {2*sin(157.5)}) -- ({2*cos(202.5)}, {2*sin(202.5)}) -- ({2*cos(247.5)}, {2*sin(247.5)}) -- ({2*cos(292.5)}, {2*sin(292.5)}) -- ({2*cos(337.5)}, {2*sin(337.5)}) -- ({2*cos(22.5)}, {2*sin(22.5)});
\draw ({3.7+2*cos(22.5)}, {2*sin(22.5)}) -- ({3.7+2*cos(67.5)}, {2*sin(67.5)}) -- ({3.7+2*cos(112.5)}, {2*sin(112.5)}) -- ({3.7+2*cos(157.5)}, {2*sin(157.5)}) -- ({3.7+2*cos(202.5)}, {2*sin(202.5)}) -- ({3.7+2*cos(247.5)}, {2*sin(247.5)}) -- ({3.7+2*cos(292.5)}, {2*sin(292.5)}) -- ({3.7+2*cos(337.5)}, {2*sin(337.5)}) -- ({3.7+2*cos(22.5)}, {2*sin(22.5)});
\draw ({7.4+2*cos(22.5)}, {2*sin(22.5)}) -- ({7.4+2*cos(67.5)}, {2*sin(67.5)}) -- ({7.4+2*cos(112.5)}, {2*sin(112.5)}) -- ({7.4+2*cos(157.5)}, {2*sin(157.5)}) -- ({7.4+2*cos(202.5)}, {2*sin(202.5)}) -- ({7.4+2*cos(247.5)}, {2*sin(247.5)}) -- ({7.4+2*cos(292.5)}, {2*sin(292.5)}) -- ({7.4+2*cos(337.5)}, {2*sin(337.5)}) -- ({7.4+2*cos(22.5)}, {2*sin(22.5)});
\draw ({10.4+2*cos(67.5)}, {2*sin(67.5)}) -- ({11.1+2*cos(112.5)}, {2*sin(112.5)}) -- ({11.1+2*cos(157.5)}, {2*sin(157.5)}) -- ({11.1+2*cos(202.5)}, {2*sin(202.5)}) -- ({11.1+2*cos(247.5)}, {2*sin(247.5)}) -- ({10.4+2*cos(292.5)}, {2*sin(292.5)});
\draw [yshift = 106] ({-3+2*cos(247.5)}, {2*sin(247.5)}) -- ({-3.7+2*cos(292.5)}, {2*sin(292.5)}) -- ({-3.7+2*cos(337.5)}, {2*sin(337.5)}) -- ({-3.7+2*cos(22.5)}, {2*sin(22.5)}) -- ({-3.7+2*cos(67.5)}, {2*sin(67.5)}) -- ({-5.7+2*cos(22.5)}, {2*sin(67.5)});
\draw [yshift = 106] ({2*cos(22.5)}, {2*sin(22.5)}) -- ({2*cos(67.5)}, {2*sin(67.5)}) -- ({2*cos(112.5)}, {2*sin(112.5)}) -- ({2*cos(157.5)}, {2*sin(157.5)}) -- ({2*cos(202.5)}, {2*sin(202.5)}) -- ({2*cos(247.5)}, {2*sin(247.5)}) -- ({2*cos(292.5)}, {2*sin(292.5)}) -- ({2*cos(337.5)}, {2*sin(337.5)}) -- ({2*cos(22.5)}, {2*sin(22.5)});
\draw [yshift = 106] ({3.7+2*cos(22.5)}, {2*sin(22.5)}) -- ({3.7+2*cos(67.5)}, {2*sin(67.5)}) -- ({3.7+2*cos(112.5)}, {2*sin(112.5)}) -- ({3.7+2*cos(157.5)}, {2*sin(157.5)}) -- ({3.7+2*cos(202.5)}, {2*sin(202.5)}) -- ({3.7+2*cos(247.5)}, {2*sin(247.5)}) -- ({3.7+2*cos(292.5)}, {2*sin(292.5)}) -- ({3.7+2*cos(337.5)}, {2*sin(337.5)}) -- ({3.7+2*cos(22.5)}, {2*sin(22.5)});
\draw [yshift = 106] ({7.4+2*cos(22.5)}, {2*sin(22.5)}) -- ({7.4+2*cos(67.5)}, {2*sin(67.5)}) -- ({7.4+2*cos(112.5)}, {2*sin(112.5)}) -- ({7.4+2*cos(157.5)}, {2*sin(157.5)}) -- ({7.4+2*cos(202.5)}, {2*sin(202.5)}) -- ({7.4+2*cos(247.5)}, {2*sin(247.5)}) -- ({7.4+2*cos(292.5)}, {2*sin(292.5)}) -- ({7.4+2*cos(337.5)}, {2*sin(337.5)}) -- ({7.4+2*cos(22.5)}, {2*sin(22.5)});
\draw [yshift = 106] ({10.4+2*cos(67.5)}, {2*sin(67.5)}) -- ({11.1+2*cos(112.5)}, {2*sin(112.5)}) -- ({11.1+2*cos(157.5)}, {2*sin(157.5)}) -- ({11.1+2*cos(202.5)}, {2*sin(202.5)}) -- ({11.1+2*cos(247.5)}, {2*sin(247.5)}) -- ({10.4+2*cos(292.5)}, {2*sin(292.5)});
\draw [yshift = 212] ({-3+2*cos(247.5)}, {2*sin(247.5)}) -- ({-3.7+2*cos(292.5)}, {2*sin(292.5)}) -- ({-3.7+2*cos(337.5)}, {2*sin(337.5)}) -- ({-3.7+2*cos(22.5)}, {2*sin(22.5)}) -- ({-3.7+2*cos(67.5)}, {2*sin(67.5)}) -- ({-5.7+2*cos(22.5)}, {2*sin(67.5)});
\draw [yshift = 212] ({2*cos(22.5)}, {2*sin(22.5)}) -- ({2*cos(67.5)}, {2*sin(67.5)}) -- ({2*cos(112.5)}, {2*sin(112.5)}) -- ({2*cos(157.5)}, {2*sin(157.5)}) -- ({2*cos(202.5)}, {2*sin(202.5)}) -- ({2*cos(247.5)}, {2*sin(247.5)}) -- ({2*cos(292.5)}, {2*sin(292.5)}) -- ({2*cos(337.5)}, {2*sin(337.5)}) -- ({2*cos(22.5)}, {2*sin(22.5)});
\draw [yshift = 212] ({3.7+2*cos(22.5)}, {2*sin(22.5)}) -- ({3.7+2*cos(67.5)}, {2*sin(67.5)}) -- ({3.7+2*cos(112.5)}, {2*sin(112.5)}) -- ({3.7+2*cos(157.5)}, {2*sin(157.5)}) -- ({3.7+2*cos(202.5)}, {2*sin(202.5)}) -- ({3.7+2*cos(247.5)}, {2*sin(247.5)}) -- ({3.7+2*cos(292.5)}, {2*sin(292.5)}) -- ({3.7+2*cos(337.5)}, {2*sin(337.5)}) -- ({3.7+2*cos(22.5)}, {2*sin(22.5)});
\draw [yshift = 212] ({7.4+2*cos(22.5)}, {2*sin(22.5)}) -- ({7.4+2*cos(67.5)}, {2*sin(67.5)}) -- ({7.4+2*cos(112.5)}, {2*sin(112.5)}) -- ({7.4+2*cos(157.5)}, {2*sin(157.5)}) -- ({7.4+2*cos(202.5)}, {2*sin(202.5)}) -- ({7.4+2*cos(247.5)}, {2*sin(247.5)}) -- ({7.4+2*cos(292.5)}, {2*sin(292.5)}) -- ({7.4+2*cos(337.5)}, {2*sin(337.5)}) -- ({7.4+2*cos(22.5)}, {2*sin(22.5)});
\draw [yshift = 212] ({10.4+2*cos(67.5)}, {2*sin(67.5)}) -- ({11.1+2*cos(112.5)}, {2*sin(112.5)}) -- ({11.1+2*cos(157.5)}, {2*sin(157.5)}) -- ({11.1+2*cos(202.5)}, {2*sin(202.5)}) -- ({11.1+2*cos(247.5)}, {2*sin(247.5)}) -- ({10.4+2*cos(292.5)}, {2*sin(292.5)});
\draw [yshift = 318] ({-3+2*cos(247.5)}, {2*sin(247.5)}) -- ({-3.7+2*cos(292.5)}, {2*sin(292.5)}) -- ({-3.7+2*cos(337.5)}, {2*sin(337.5)}) -- ({-3.7+2*cos(22.5)}, {-1+2*sin(22.5)});
\draw [yshift = 318] ({2*cos(202.5)}, {2*sin(202.5)}) -- ({2*cos(247.5)}, {2*sin(247.5)}) -- ({2*cos(292.5)}, {2*sin(292.5)}) -- ({2*cos(337.5)}, {2*sin(337.5)}) -- ({2*cos(22.5)}, {-1+2*sin(22.5)});
\draw [yshift = 318] ({3.7+2*cos(202.5)}, {2*sin(202.5)}) -- ({3.7+2*cos(247.5)}, {2*sin(247.5)}) -- ({3.7+2*cos(292.5)}, {2*sin(292.5)}) -- ({3.7+2*cos(337.5)}, {2*sin(337.5)}) -- ({3.7+2*cos(22.5)}, {-1+2*sin(22.5)});
\draw [yshift = 318] ({7.4+2*cos(202.5)}, {2*sin(202.5)}) -- ({7.4+2*cos(247.5)}, {2*sin(247.5)}) -- ({7.4+2*cos(292.5)}, {2*sin(292.5)}) -- ({7.4+2*cos(337.5)}, {2*sin(337.5)}) -- ({7.4+2*cos(22.5)}, {-1+2*sin(22.5)});
\draw [yshift = 318] ({11.1+2*cos(202.5)}, {2*sin(202.5)}) -- ({11.1+2*cos(247.5)}, {2*sin(247.5)}) -- ({10.4+2*cos(292.5)}, {2*sin(292.5)});
\node[ver] () at (-1.8,-2){\tiny $v_{-3,-1}$};
\node[ver] () at (0,-1.5){\tiny $v_{-2,-1}$};
\node[ver] () at (1.8,-2){\tiny $v_{-1,-1}$};
\node[ver] () at (3.4,-1.5){\tiny $v_{0,-1}$};
\node[ver] () at (5.5,-2){\tiny $v_{1,-1}$};
\node[ver] () at (7.1,-1.5){\tiny $v_{2,-1}$};
\node[ver] () at (9.2,-2){\tiny $v_{3,-1}$};
\node[ver] () at (10.7,-1.5){\tiny $v_{4,-1}$};
\node[ver] () at (-1.8,-1.5+3.3){\tiny $v_{-3,0}$};
\node[ver] () at (0,-1.5+3.6){\tiny $v_{-2,0}$};
\node[ver] () at (1.8,-1.5+3.3){\tiny $v_{-1,0}$};
\node[ver] () at (3.4,-1.5+3.6){\tiny $v_{0,0}$};
\node[ver] () at (5.5,-1.5+3.3){\tiny $v_{1,0}$};
\node[ver] () at (7.1,-1.5+3.6){\tiny $v_{2,0}$};
\node[ver] () at (9.2,-1.5+3.3){\tiny $v_{3,0}$};
\node[ver] () at (10.7,-1.5+3.6){\tiny $v_{4,0}$};
\node[ver] () at (-1.8,5.5){\tiny $v_{-3,1}$};
\node[ver] () at (0,5.8){\tiny $v_{-2,1}$};
\node[ver] () at (1.8,5.5){\tiny $v_{-1,1}$};
\node[ver] () at (3.15,6){\tiny $v_{0,1}$};
\node[ver] () at (5.5,5.5){\tiny $v_{1,1}$};
\node[ver] () at (7.1,5.8){\tiny $v_{2,1}$};
\node[ver] () at (9.2,5.5){\tiny $v_{3,1}$};
\node[ver] () at (10.7,5.8){\tiny $v_{4,1}$};
\node[ver] () at (-1.8,9.3){\tiny $v_{-3,2}$};
\node[ver] () at (-.1,9.6){\tiny $v_{-2,2}$};
\node[ver] () at (1.8,9.3){\tiny $v_{-1,2}$};
\node[ver] () at (3.4,9.6){\tiny $v_{0,2}$};
\node[ver] () at (5.5,9.3){\tiny $v_{1,2}$};
\node[ver] () at (7.1,9.6){\tiny $v_{2,2}$};
\node[ver] () at (9.2,9.3){\tiny $v_{3,2}$};
\node[ver] () at (10.7,9.6){\tiny $v_{4,2}$};
\node[ver] () at (10,3+3.7+3.7){\tiny $u_{2,4}$};
\node[ver] () at (10,.8+3.6+3.7){\tiny $u_{2,3}$};
\node[ver] () at (10,3+3.7){\tiny $u_{2,2}$};
\node[ver] () at (10,.8+3.6){\tiny $u_{2,1}$};
\node[ver] () at (10,3){\tiny $u_{2,0}$};
\node[ver] () at (10.2,.8+3.6-3.7){\tiny $u_{2,-1}$};
\node[ver] () at (10.2,-3.7+3){\tiny $u_{2,-2}$};
\node[ver] () at (10.2,-3.2){\tiny $u_{2,-3}$};
\node[ver] () at (-1+7.2,3+3.7+3.7){\tiny $u_{1,4}$};
\node[ver] () at (-1+7.3,.8+3.6+3.7){\tiny $u_{1,3}$};
\node[ver] () at (-1+7.2,3+3.7){\tiny $u_{1,2}$};
\node[ver] () at (-1+7.3,.8+3.6){\tiny $u_{1,1}$};
\node[ver] () at (-1+7.2,3){\tiny $u_{1,0}$};
\node[ver] () at (-1+7.5,.8+3.5-3.7){\tiny $u_{1,-1}$};
\node[ver] () at (-1+7.4,-3.7+3){\tiny $u_{1,-2}$};
\node[ver] () at (6.6,-3.2){\tiny $u_{1,-3}$};
\node[ver] () at (-.9+3.6,3+3.7+3.7){\tiny $u_{0,4}$};
\node[ver] () at (-.9+3.6,.8+3.6+3.7){\tiny $u_{0,3}$};
\node[ver] () at (-.9+3.6,3+3.7){\tiny $u_{0,2}$};
\node[ver] () at (-.9+3.6,.8+3.6){\tiny $u_{0,1}$};
\node[ver] () at (-.9+3.6,3){\tiny $u_{0,0}$};
\node[ver] () at (-.9+3.7,.8+3.6-3.7){\tiny $u_{0,-1}$};
\node[ver] () at (-.9+3.6,-3.7+3){\tiny $u_{0,-2}$};
\node[ver] () at (2.8,-3.2){\tiny $u_{0,-3}$};
\node[ver] () at (-.8,3+3.7+3.7){\tiny $u_{-2,4}$};
\node[ver] () at (-.8,.8+3.6+3.7){\tiny $u_{-2,3}$};
\node[ver] () at (-.8,3+3.7){\tiny $u_{-2,2}$};
\node[ver] () at (-.8,.8+3.6){\tiny $u_{-2,1}$};
\node[ver] () at (-.8,3){\tiny $u_{-2,0}$};
\node[ver] () at (-.7,.5){\tiny $u_{-2,-1}$};
\node[ver] () at (-.7,-.7){\tiny $u_{-2,-2}$};
\node[ver] () at (-.7,-3.2){\tiny $u_{-2,-3}$};
\draw [thick, dotted] (3.7,3.7) -- (7.4,3.7);
\draw [thick, dotted] (3.7,3.7) -- (3.7,7.4);
\node[ver] () at (3.7,3.7){$\bullet$};
\put(5,4.2){\mbox{$O$}}
\node[ver] () at (7.4,3.7){$\bullet$};
\put(10.6,4.5){\mbox{$A_2$}}
\node[ver] () at (3.7,7.4){$\bullet$};
\put(5.4,10){\mbox{$B_2$}}
\node[ver] () at (3.5, -4.5){\normalsize (i) $E_{2}([4^1, 8^2])$};
\end{tikzpicture}
\end{figure}
\begin{figure}[ht!]
\tiny
\tikzstyle{ver}=[]
\tikzstyle{vert}=[circle, draw, fill=black!100, inner sep=0pt, minimum width=4pt]
\tikzstyle{vertex}=[circle, draw, fill=black!00, inner sep=0pt, minimum width=4pt]
\tikzstyle{edge} = [draw,thick,-]
\begin{tikzpicture}[scale=.7]
\draw(1,-.5)--(2,-1)--(3,-.5)--(3,.5)--(2,1)--(1,.5)--(1,-.5);
\draw(3,-.5)--(4,-1)--(5,-.5)--(5,.5)--(4,1)--(3,.5)--(3,-.5);
\draw(5,-.5)--(6,-1)--(7,-.5)--(7,.5)--(6,1)--(5,.5)--(5,-.5);
\draw(1,-.5)--(.5,-.7);\draw(1,.5)--(.5,.7);
\draw(1,2.5)--(.5,2.3);\draw(1,3.5)--(.5,3.7);
\draw(7,-.5)--(7.5,-.7);\draw(7,.5)--(7.5,.7);
\draw(7,2.5)--(7.5,2.3);\draw(7,3.5)--(7.5,3.7);
\draw(2,-1)--(2,-1.2);\draw(4,-1)--(4,-1.2);\draw(6,-1)--(6,-1.2);
\draw(2,4)--(2,4.2);\draw(4,4)--(4,4.2);\draw(6,4)--(6,4.2);
\draw(2,1)--(3,.5)--(4,1)--(4,2)--(3,2.5)--(2,2)--(2,1);
\draw(4,1)--(5,.5)--(6,1)--(6,2)--(5,2.5)--(4,2)--(4,1);
\draw(5,2.5)--(6,2)--(7,2.5)--(7,3.5)--(6,4)--(5,3.5)--(5,2.5);
\draw(1,2.5)--(2,2)--(3,2.5)--(3,3.5)--(2,4)--(1,3.5)--(1,2.5);
\draw(3,2.5)--(4,2)--(5,2.5)--(5,3.5)--(4,4)--(3,3.5)--(3,2.5);
\node[ver] () at (1.7,-.4){$u_{-2,-1}$};
\node[ver] () at (2.7,-1.2){$u_{-1,-1}$};
\node[ver] () at (3.6,-.4){$u_{0,-1}$};
\node[ver] () at (4.7,-1.2){$u_{1,-1}$};
\node[ver] () at (5.6,-.4){$u_{2,-1}$};
\node[ver] () at (6.7,-1.2){$u_{3,-1}$};
\node[ver] () at (7.6,-.4){$u_{4,-1}$};
\node[ver] () at (1.6,.3){$u_{-2,0}$};
\node[ver] () at (2.6,1.1){$u_{-1,0}$};
\node[ver] () at (3.6,.3){$u_{0,0}$};
\node[ver] () at (4.6,1.1){$u_{1,0}$};
\node[ver] () at (5.6,.3){$u_{2,0}$};
\node[ver] () at (6.6,1.1){$u_{3,0}$};
\node[ver] () at (7.6,.3){$u_{4,0}$};
\node[ver] () at (1.7,2.6){$u_{-3,1}$};
\node[ver] () at (2.7,1.8){$u_{-2,1}$};
\node[ver] () at (3.6,2.6){$u_{-1,1}$};
\node[ver] () at (4.7,1.8){$u_{0,1}$};
\node[ver] () at (5.6,2.6){$u_{1,1}$};
\node[ver] () at (6.7,1.8){$u_{2,1}$};
\node[ver] () at (7.6,2.6){$u_{3,1}$};
\node[ver] () at (1.6,3.3){$u_{-4,2}$};
\node[ver] () at (2.6,4.1){$u_{-3,2}$};
\node[ver] () at (3.6,3.3){$u_{-2,2}$};
\node[ver] () at (4.6,4.1){$u_{-1,2}$};
\node[ver] () at (5.6,3.3){$u_{0,2}$};
\node[ver] () at (6.6,4.1){$u_{1,2}$};
\node[ver] () at (7.6,3.3){$u_{2,2}$};
\draw [dashed] (4,0) -- (6,0);
\draw [dashed] (4,0) -- (5,1.5);
\node[ver] () at (4,0){\scriptsize $\bullet$};
\put(8.4,-0.2){\mbox{$O$}}
\node[ver] () at (6,0){\scriptsize $\bullet$};
\put(14.2,0){\mbox{$A_{10}$}}
\node[ver] () at (5,1.5){\scriptsize $\bullet$};
\put(11.8,4){\mbox{$B_{10}$}}
\node[ver] () at (3.7,-2){\normalsize (j) $E_{10}([6^3])$ };
\end{tikzpicture}\hfill
\begin{tikzpicture}[scale=0.2]
\draw[edge, thin](8.5,5)--(36.5,5);
\draw[edge, thin](8.5,10)--(36.5,10);
\draw[edge, thin](8.5,15)--(36.5,15);
\draw[edge, thin](8.5,20)--(36.5,20);
\draw[edge, thin](10,5)--(12.5,10);
\draw[edge, thin](12.5,10)--(15,5);
\draw[edge, thin](15,5)--(17.5,10);
\draw[edge, thin](17.5,10)--(20,5);
\draw[edge, thin](20,5)--(22.5,10);
\draw[edge, thin](22.5,10)--(25,5);
\draw[edge, thin](25,5)--(27.5,10);
\draw[edge, thin](27.5,10)--(30,5);
\draw[edge, thin](30,5)--(32.5,10);
\draw[edge, thin](32.5,10)--(35,5);
\draw[edge, thin](12.5,10)--(12.5,15);
\draw[edge, thin](17.5,10)--(17.5,15);
\draw[edge, thin](22.5,10)--(22.5,15);
\draw[edge, thin](27.5,10)--(27.5,15);
\draw[edge, thin](32.5,10)--(32.5,15);
\draw[edge, thin] (10,5)--(10,4.3);\draw[edge, thin] (15,5)--(15,4.3);
\draw[edge, thin] (20,5)--(20,4.3);\draw[edge, thin] (25,5)--(25,4.3);
\draw[edge, thin] (30,5)--(30,4.3);\draw[edge, thin] (35,5)--(35,4.3);
\draw[edge, thin](12.5,15)--(10,20);
\draw[edge, thin](15,20)--(12.5,15);
\draw[edge, thin](17.5,15)--(15,20);
\draw[edge, thin](20,20)--(17.5,15);
\draw[edge, thin](22.5,15)--(20,20);
\draw[edge, thin](25,20)--(22.5,15);
\draw[edge, thin](27.5,15)--(25,20);
\draw[edge, thin](30,20)--(27.5,15);
\draw[edge, thin](32.5,15)--(30,20);
\draw[edge, thin](35,20)--(32.5,15);
\draw[edge, thin](10,20)--(10,20.7);
\draw[edge, thin](15,20)--(15,20.7);
\draw[edge, thin](20,20)--(20,20.7);
\draw[edge, thin](25,20)--(25,20.7);
\draw[edge, thin](30,20)--(30,20.7);
\draw[edge, thin](35,20)--(35,20.7);
\draw [dashed] (22.5,15) -- (32.5,15);
\node[ver] () at (21,3){\tiny $u_{-1,-1}$};
\node[ver] () at (26,3){\tiny $u_{0,-1}$};
\node[ver] () at (31,3){\tiny $u_{1,-1}$};
\node[ver] () at (35.8,3){\tiny $u_{2,-1}$};
\node[ver] () at (16.3,3){\tiny $u_{-2,-1}$};
\node[ver] () at (11.3,3){\tiny $u_{-3,-1}$};
\node[ver] () at (24.5,10.5){\tiny $u_{0,0}$};
\node[ver] () at (29.5,10.5){\tiny $u_{1,0}$};
\node[ver] () at (34.5,10.5){\tiny $u_{2,0}$};
\node[ver] () at (20,10.5){\tiny $u_{-1,0}$};
\node[ver] () at (15,10.5){\tiny $u_{-2,0}$};
\node[ver] () at (24.5,14){\tiny $u_{0,1}$};
\node[ver] () at (29.5,14){\tiny $u_{1,1}$};
\node[ver] () at (34.5,14){\tiny $u_{2,1}$};
\node[ver] () at (20,14){\tiny $u_{-1,1}$};
\node[ver] () at (15,14){\tiny $u_{-2,1}$};
\node[ver] () at (21.5,21){\tiny $u_{0,2}$};
\node[ver] () at (26,21){\tiny $u_{1,2}$};
\node[ver] () at (31,21){\tiny $u_{2,2}$};
\node[ver] () at (36,21){\tiny $u_{3,2}$};
\node[ver] () at (16.5,21){\tiny $u_{-1,2}$};
\node[ver] () at (11.5,21){\tiny $u_{-2,2}$};
\draw [dashed] (20,7.5) -- (25,7.5);
\draw [dashed] (20,7.5) -- (22.5,17.5);
\node[ver] () at (20,7.5){\scriptsize $\bullet$};
\put(16.3,5.5){\mbox{$A_{11}$}}
\put(13,4.2){\mbox{$O$}}
\node[ver] () at (25,7.5){\scriptsize $\bullet$};
\node[ver] () at (22.5,17.5){\scriptsize $\bullet$};
\put(14.6,12.5){\mbox{$B_{11}$}}
\node[ver] () at (23, 0){\normalsize (k) $E_{11}([3^3,4^2])$};
\end{tikzpicture}
\caption{{\bf Archimedean tilings on the plane $\mathbb{R}^2$}}\label{fig:Archi}
\end{figure}
\section{Semi-equivelar maps and classification of their $k$-semiregular covers}\label{sec:proofs-1}
Before proceeding to the proofs of the main theorems, we need the following series of results.
From \cite[Propositions 3.2--3.7]{drach:2019} we obtain the following.
\begin{proposition}
Let $E$ be a semi\mbox{-}equivelar tiling on the plane, and suppose $E$ has $m$ flag\mbox{-}orbits. Then:\\
{\rm (a)} If the type of $E$ is $[3^6]$, $[4^4]$, or $[6^3]$, then $m = 1$.\\
{\rm (b)} If the type of $E$ is $[3^1,6^1,3^1,6^1]$, then $m = 2$.\\
{\rm (c)} If the type of $E$ is $[3^1, 12^2]$ or $[4^1,8^2]$, then $m = 3$.\\
{\rm (d)} If the type of $E$ is $[3^1,4^1,6^1,4^1]$, $[3^3,4^2]$, or $[3^4,6^1]$, then $m = 4$.\\
{\rm (e)} If the type of $E$ is $[3^2,4^1,3^1,4^1]$, then $m = 5$. \\
{\rm (f)} If the type of $E$ is $[4^1,6^1, 12^1]$, then $m = 6$.
\end{proposition}
\begin{proof}[Proof of Theorem \ref{no-of-orbits}]
For $i=1,2,\dots,11$, let $E_i$ be the Archimedean tiling of the plane as in Example \ref{exam:plane}, and let $\alpha_i$ and $\beta_i$ be the fundamental translations of $E_i$, given by $\alpha_i\colon z\mapsto z+A_i$ and $\beta_i\colon z\mapsto z+B_i$. Let $X$ be a semi\mbox{-}equivelar map of type $[p_1^{r_1},\dots, p_k^{r_k}]$. Then there exists a discrete subgroup $K_i$ of ${\rm Aut}(E_i)$ without any fixed element such that $X=E_i/K_i$. Let $p_i\colon E_i \to X$ be the polyhedral covering map.
By the above description, $K_i$ contains only translations and glide reflections. Since $X$ is orientable, $K_i$ contains no glide reflections; thus $K_i \le H_i$. Suppose $K_i = \langle \gamma_i, \delta_i \rangle$.
Let $\chi_i$ denote the point reflection through the origin in $E_i$. Then $\chi_i \in {\rm Aut}(E_i)$. Consider the group $G_i = \langle \alpha_i, \beta_i, \chi_i \rangle \le {\rm Aut}(E_i)$.
\begin{claim}
$K_i \trianglelefteq G_i$.
\end{claim}
To prove this, it is enough to show that $\chi_i\circ\gamma_i \circ \chi_i^{-1}$ and $\chi_i\circ\delta_i \circ \chi_i^{-1}$ belong to $K_i$. Conjugating a translation by a reflection yields the translation by the reflected vector.
Let $\gamma_i$ and $\delta_i$ be the translations by the vectors $C_i$ and $D_i$, respectively. Then $\chi_i\circ\gamma_i \circ \chi_i^{-1}$ and $\chi_i\circ\delta_i \circ \chi_i^{-1}$ are the translations by $-C_i$ and $-D_i$. Clearly these vectors belong to the lattice of $K_i$, and the claim follows.
Now we will proceed case by case.
\smallskip
Case 1. Let $X$ be of type $[3^6], [4^4]$ or $[6^3]$, and suppose $X =E_i/K_i$. The flags of $E_i$ form a single orbit under the action of $H_i$, so the action of $H_i/K_i$ on the flags of $X$ also yields a single orbit. Since $H_i/K_i \le {\rm Aut}(X)$, it follows that $F(X)$ has one ${\rm Aut}(X)\mbox{-}$orbit.
\smallskip
Case 2. Let $X$ be a semi-equivelar map of type $[3^3,4^2]$. By Proposition \ref{propo-1} we may assume $X = E_{11}/K_{11}$ for some subgroup $K_{11}$ of Aut($E_{11}$). Now $F(E_{11})$ has $4$ $G_{11}\mbox{-}$orbits, so $F(X)$ has $4$ $G_{11}/K_{11}\mbox{-}$orbits. Since $G_{11}/K_{11} \le {\rm Aut}(X)$, the number of ${\rm Aut}(X)\mbox{-}$orbits of $F(X)$ is at most $4$.
\smallskip
Case 3. Let $X$ be a semi-equivelar map of type $[3^1,6^1,3^1,6^1]$. By Proposition \ref{propo-1} we may assume $X = E_{4}/K_{4}$ for some subgroup $K_{4}$ of Aut($E_{4}$). Now $F(E_{4})$ has $6$ $G_{4}\mbox{-}$orbits, so $F(X)$ has $6$ $G_{4}/K_{4}\mbox{-}$orbits. Since $G_{4}/K_{4} \le {\rm Aut}(X)$, the number of ${\rm Aut}(X)\mbox{-}$orbits of $F(X)$ is at most $6$.
Case 4. Let $X$ be a semi-equivelar map of type $[3^1, 12^2]$ or $[4^1,8^2]$. By Proposition \ref{propo-1} we may assume $X = E_{j}/K_{j}$ for some subgroup $K_{j}$ of Aut($E_{j}$), where $j = 2,6$. Now $F(E_{j})$ has $9$ $G_{j}\mbox{-}$orbits, so $F(X)$ has $9$ $G_{j}/K_{j}\mbox{-}$orbits. Since $G_{j}/K_{j} \le {\rm Aut}(X)$, the number of ${\rm Aut}(X)\mbox{-}$orbits of $F(X)$ is at most $9$.
Case 5. Let $X$ be a semi-equivelar map of type $[3^1,4^1,6^1,4^1]$ or $[3^4,6^1]$. By Proposition \ref{propo-1} we may assume $X = E_{j}/K_{j}$ for some subgroup $K_{j}$ of Aut($E_{j}$), where $j = 3,5$. Now $F(E_{j})$ has $12$ $G_{j}\mbox{-}$orbits, so $F(X)$ has $12$ $G_{j}/K_{j}\mbox{-}$orbits. Since $G_{j}/K_{j} \le {\rm Aut}(X)$, the number of ${\rm Aut}(X)\mbox{-}$orbits of $F(X)$ is at most $12$.
Case 6. Let $X$ be a semi-equivelar map of type $ [3^2,4^1,3^1,4^1]$. By Proposition \ref{propo-1} we may assume $X = E_{1}/K_{1}$ for some subgroup $K_{1}$ of Aut($E_{1}$). Now $F(E_{1})$ has $15$ $G_{1}\mbox{-}$orbits, so $F(X)$ has $15$ $G_{1}/K_{1}\mbox{-}$orbits. Since $G_{1}/K_{1} \le {\rm Aut}(X)$, the number of ${\rm Aut}(X)\mbox{-}$orbits of $F(X)$ is at most $15$.
Case 7. Let $X$ be a semi-equivelar map of type $[4^1,6^1, 12^1]$. By Proposition \ref{propo-1} we may assume $X = E_{7}/K_{7}$ for some subgroup $K_{7}$ of Aut($E_{7}$). Now $F(E_{7})$ has $36$ $G_{7}\mbox{-}$orbits, so $F(X)$ has $36$ $G_{7}/K_{7}\mbox{-}$orbits. Since $G_{7}/K_{7} \le {\rm Aut}(X)$, the number of ${\rm Aut}(X)\mbox{-}$orbits of $F(X)$ is at most $36$.
\end{proof}
\begin{lemma}\label{lemm1}
Let $X$ be a semi\mbox{-}equivelar toroidal map with vertex type $[p_1^{r_1}, \dots, p_k^{r_k}]$, and let $N_f$ and $N_v$ denote the numbers of flag\mbox{-}orbits and vertex\mbox{-}orbits of $X$, respectively. Then $N_f = mN_v$, where $m$ is the number of flag orbits of $E$.
\end{lemma}
\begin{proof}
Let $u$ be a vertex of $X$. Then there are $m$ distinct flag orbits meeting $u$. Moreover, if two vertices $u$ and $v$ lie in different vertex\mbox{-}orbits, then a flag containing $u$ and a flag containing $v$ cannot lie in the same flag orbit. Hence $N_f = mN_v$; in particular, $m$ divides $N_f$.
\end{proof}
Thus the number of flag orbits of the universal cover always divides the number of flag orbits of a map.
We now move on to classifying the covers of a given map $X$. For that we need:
\begin{proposition}(\cite{KM2021})\label{vertex}
{\rm (a)} If $X_1$ is an $m_1$-orbital toroidal map of type $[3^2,4^1,3^1,4^1]$ or $[4^1,8^2]$, then there exists a covering $\eta_{k_1} \colon Y_{k_1} \to X_1$ where $Y_{k_1}$ is $k_1$-orbital for each $k_1 \le m_1$.\\
{\rm (b)} If $X_2$ is an $m_2$-orbital toroidal map of type $[3^1,6^1,3^1,6^1]$, $[3^4,6^1]$, $[3^1,4^1,6^1, 4^1]$ or $[3^1, 12^2]$, then there exists a covering $\eta_{k_2} \colon Y_{k_2} \to X_2$ where $Y_{k_2}$ is $k_2$-orbital for each $k_2 \le m_2$.\\
{\rm (c)} If $X_3$ is an $m_3$-orbital toroidal map of type $[4^1,6^1,12^1]$, then there exists a covering $\eta_{k_3} \colon Y_{k_3} \to X_3$ where $Y_{k_3}$ is $k_3$-orbital for each $k_3 (\neq 5) \le m_3$ except $(k_3, m_3) = (2,3), (3, 4), (4, 6)$.
\end{proposition}
Here we briefly indicate the idea of the proof; for details see \cite{KM2021}. Suppose $X = E/K$, where $K \le {\rm Aut}(E)$ is a discrete subgroup. To construct a cover of $X$, we first take a subgroup $G$ of Aut($E$) such that $V(E)$ has $m_i$ $G$-orbits and $K \trianglelefteq G$. Then $G/K$ acts on $V(X)$ with $m_i$ orbits. Next we adjoin symmetries of $E$ to $G$ so that the action of the enlarged group $G'$ on $V(E)$ has $k_i$ orbits. Any cover of $X$ is of the form $Y=E/L$ for some subgroup $L$ of $K$; to make $Y$ $k_i$-orbital we need $L \trianglelefteq G'$. With such a suitable $L$ we obtain $Y$.
\begin{proof}[Proof of Theorem \ref{thm-main1}]
By Theorem \ref{no-of-orbits} and Lemma \ref{lemm1} we have the following possibilities.
$k_1 = 4$, $k_2 \in \{ 3,6,9 \}$, $k_3 \in \{4,8,12\}, k_4 \in \{5,10,15\},$ and $k_5 \in \{6,12,18,24,30,36\}$.
Now if $X$ is $N_f$-flag-orbital then, by Lemma \ref{lemm1}, the number of vertex orbits of $X$ is $N_v = N_f/m$. From \cite{KM2021} we know that there does not exist a $5$-vertex-orbital toroidal map of vertex type $[4^1,6^1,12^1]$; hence there does not exist a $30$-semiregular map of type $[4^1,6^1,12^1]$, and thus $k_5 \neq 30$. By \cite[Theorem 1.7]{KM2021} we conclude that there does not exist a $k_5$-semiregular cover of an $m_5$-semiregular map of type $[4^1,6^1,12^1]$ for $(k_5,m_5) \neq (12,18),(18,24),(24,36)$.
The process of constructing a $k$-semiregular cover $Y$ of a given $m$-semiregular map $X$ is the same for all types of maps.
Let $X$ be an $N_f$-semiregular map of type $[p_1^{r_1}, p_2^{r_2}, \dots, p_n^{r_n}]$ with $N_v$ vertex orbits, and suppose the tiling of $\mathbb{R}^2$ of this type has $m$ flag orbits. Then $N_f = mN_v$.
By Lemma \ref{lemm1}, a $k$-orbital cover of such a map is an $mk$-semiregular map. The proof of Theorem \ref{thm-main1} now follows from Proposition \ref{vertex}.
\end{proof}
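The candidate values $k_1,\dots,k_5$ appearing in the proof are just the products $N_f = m N_v$ from Lemma \ref{lemm1}. A small arithmetic sketch (the flag-orbit counts $m$ come from the Proposition; the vertex-orbit ranges below are read off from the candidate lists in the proof, not derived here):

```python
# Lemma "lemm1": N_f = m * N_v, where m is the flag-orbit count of the
# corresponding planar tiling.  The m values are from the Proposition; the
# assumed vertex-orbit ranges are taken from the lists in the proof.
cases = {
    "k_2": (3, [1, 2, 3]),           # m = 3: [3^1,12^2], [4^1,8^2]
    "k_3": (4, [1, 2, 3]),           # m = 4: [3^1,4^1,6^1,4^1], [3^4,6^1]
    "k_4": (5, [1, 2, 3]),           # m = 5: [3^2,4^1,3^1,4^1]
    "k_5": (6, [1, 2, 3, 4, 5, 6]),  # m = 6: [4^1,6^1,12^1]
}
possible = {k: [m * v for v in vs] for k, (m, vs) in cases.items()}
```
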
The proofs of Theorems \ref{thm-main2}, \ref{thm-main3} and \ref{thm-main4} are exactly the same as those of \cite[Theorems 1.8, 1.9, 1.10]{KM2021}.
\section{Acknowledgements}
The authors are supported by NBHM, DAE (No. 02011/9/2021-NBHM(R.P.)/R$\&$D-II/9101).
{\small
|
{
"timestamp": "2021-12-01T02:26:44",
"yymm": "2111",
"arxiv_id": "2111.15484",
"language": "en",
"url": "https://arxiv.org/abs/2111.15484"
}
|
\section{Introduction}
\label{sec1}
On a probability space $(\Omega, \mathcal{F}, \mathbb{P})$, suppose $f_1, f_2, \cdots \,$ are independent measurable functions with the same distribution, and integrable: $\mathbb E ( | f_1|) < \infty\,$. The renowned \textsc{Kolmogorov} strong law of large numbers (\cite{Chu}, p.\,126) states that the ``sample average" $ \, ( f_1 + \cdots + f_N) / N\,$ converges $\mathbb{P}-$a.e.\,to the ``ensemble average" $\,\mathbb E (f_1) = \int_\Omega f_1 \, \mathrm{d} \mathbb{P}\,,$ as $ N \to \infty$.
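For intuition, the strong law is easy to visualize numerically. The following Python sketch (with a uniform distribution and a fixed seed chosen purely for illustration) shows the sample average drifting toward the ensemble average $\mathbb E (f_1) = 1/2$:

```python
import random

random.seed(0)  # illustrative fixed seed

def sample_average(n_samples):
    # i.i.d. draws f_1, ..., f_N, uniform on [0, 1]; the ensemble
    # average is E(f_1) = 1/2.
    draws = [random.random() for _ in range(n_samples)]
    return sum(draws) / n_samples

# By the strong law, the error shrinks as N grows (typical rate ~ 1/sqrt(N)).
err_small = abs(sample_average(100) - 0.5)
err_large = abs(sample_average(100_000) - 0.5)
```
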
A deep result of \textsc{Koml\'os} \cite{K}, already 55 years old but always very striking, says that such an averaging occurs within {\it any} sequence $f_1, f_2, \cdots \,$ of measurable functions with $\, \sup_{n \in \mathbb N} \mathbb E ( | f_n|) < \infty\,.$ More precisely, there exist then a function $f$ and a subsequence $f_{n_1}, f_{n_2}, \cdots \,,$ such that $ \, ( f_{n_1} + \cdots + f_{n_K}) / K\,$ converges $\mathbb{P}-$a.e.\,to $f$ as $ K \to \infty$; and the same is true for any further subsequence of $\big\{ f_{n_k} \big\}_{k \in \mathbb N}\,.$
This result, and its ramifications involving forward convex combinations in \cite{DS1}--\cite{DS2}, are very useful in the context of convex optimization; and more generally, when one seeks objects with specific properties and tries to ascertain their existence using weak compactness arguments. Stochastic control, optimal stopping and hypothesis testing provide instances
of the former (e.g.,\,\cite{KS},\,\cite{KW},\,\cite{CK}); whereas the \textsc{Doob-Meyer} and \textsc{Bichteler-Dellacherie} theorems in stochastic analysis are examples of the latter (e.g.,\,\cite{J},\,\cite{BSV1},\,\cite{BSV2}).
We develop here a very simple argument for the \textsc{Koml\'os} theorem, in the important case of nonnegative $f_1, f_2, \cdots \,.$ The proof dispenses with boundedness in $ \mathbb{L}^1$, at the cost of allowing the function $f$ to take infinite values. When the sequence $\{ f_n \}_{n \in \mathbb N}$ {\it is} bounded in $\mathbb{L}^1$, the method provides also an elementary proof for the original \textsc{Koml\'os} result.
\section{Background}
\label{sec2}
We place ourselves on a given, fixed probability space $(\Omega, \mathcal{F}, \mathbb{P})$, and consider a sequence $ \{f_n \}_{n \in \mathbb{N}}$ of measurable, real-valued functions defined on it.
We say that this sequence {\it converges hereditarily in \textsc{Ces\`aro} mean} to some measurable $f:\Omega \to \mathbb{R} \cup \{\pm \infty\}$, and write
$
f_n \xrightarrow[n \to \infty]{hC}f,~~ \mathbb{P} -\hbox{a.e.,}
$
if, for {\it every} subsequence $\big\{f_{n_k} \big\}_{k \in \mathbb{N}}$ of the original sequence, we have
\begin{equation}
\label{1}
\lim_{K\to \infty} \frac{1}{K} \sum_{k=1}^K f_{n_k} = f,\qquad \mathbb{P} -\hbox{a.e.}
\end{equation}
Clearly then, every other such sequence $ \{g_n \}_{n \in \mathbb{N}}$ which is \textsc{Borel-Cantelli} {\it equivalent to} $ \{f_n \}_{n \in \mathbb{N}}\,,$ in the sense $\, \sum_{n \in \mathbb N} \mathbb{P}( f_n \neq g_n) < \infty\,,$ also has this property.
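The qualifier {\it hereditary} in this definition has real force: plain \textsc{Ces\`aro} convergence of the full sequence need not survive passage to subsequences. A minimal numerical sketch (deterministic constant sequences standing in for the measurable functions, chosen for illustration only):

```python
def cesaro_mean(seq):
    # Average of the first K terms of the sequence.
    return sum(seq) / len(seq)

# a_n = (-1)^n converges to 0 in Cesaro mean along the full sequence ...
full = [(-1) ** n for n in range(1, 2001)]

# ... but the subsequence of even indices has Cesaro mean 1, so the
# convergence is NOT hereditary: "hC" is a strictly stronger notion.
evens = [(-1) ** n for n in range(2, 4001, 2)]

m_full = cesaro_mean(full)
m_evens = cesaro_mean(evens)
```
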
\smallskip
In 1967, \textsc{Koml\'os} proved the following remarkable result. The proof in \cite{K} is very clear, but also long and quite involved. Simpler arguments have appeared since (e.g.,\,\cite{S}). We provide such an argument in \S \,\ref{sec5d}.
\begin{theorem}
[\textsc{Koml\'os} (1967)]
\label{Kom}
If the sequence $\{f_n \}_{n \in \mathbb{N}}$
satisfies $\sup_{n \in \mathbb{N}} \mathbb{E} (|f_n|) < \infty\,,$ there exist an integrable function $f:\Omega \to \mathbb{R}$ and a subsequence $\big\{f_{n_k}\big\}_{k \in \mathbb{N}}$ of $\{f_n \}_{n \in \mathbb{N}}\,,$ which converges hereditarily in \textsc{Ces\`aro} mean to $f$:
\begin{equation}
\label{02}
f_{n_k} \xrightarrow[k \to \infty]{hC}f, \qquad \mathbb{P}-\hbox{a.e.}
\end{equation}
\end{theorem}
This result was motivated by an earlier one of \textsc{R\'ev\'esz} (\cite{R}; \cite{Re}, p.\,103; \cite{Ch}, pp.\,137-141) whose proof is straightforward, based on simple $\mathbb{L}^2-$martingale convergence theory.
\begin{theorem}
[\textsc{R\'ev\'esz} (1965)]
\label{Rev}
If the sequence $ \{f_n \}_{n \in \mathbb{N}}$ satisfies $\,\sup_{n \in \mathbb{N}} \mathbb{E} (f_n^2) < \infty\,,$ there exist a function $f \in \mathbb{L}^2$ and a subsequence $ \{f_{n_k} \}_{k \in \mathbb{N}}\,,$ such that
$
\, \sum_{k \in \mathbb{N}} a_k \big(f_{n_k} -f\big)\,
$
converges $\,\mathbb{P}-$a.e., for any $ \{a_n \}_{n \in \mathbb{N}} \subset \mathbb R$ with $\,\sum_{n \in \mathbb{N}} a^2_n < \infty$.
\end{theorem}
\section{Results}
\label{sec3}
The purpose of this note is to state and prove the following version of Theorem \ref{Kom} and its companion result, Theorem \ref{theorem4} below.
\begin{theorem}
\label{theorem3}
Given a sequence $ \{f_n \}_{n \in \mathbb{N}}$ of {\rm nonnegative}, measurable functions on a probability space $(\Omega, \mathcal{F}, \mathbb{P})$, there exist a measurable function $f:\Omega \to [0, \infty]$ and a subsequence $\big\{f_{n_k}\big\}_{k \in \mathbb{N}}$ of the original sequence, such that \eqref{02} holds.
\end{theorem}
We note that the result imposes no condition on the functions $f_1, f_2, \cdots$, apart from measurability and nonnegativity. This comes at a price: the function $f$, constructed carefully starting with \eqref{5} below, can take the value $+\infty$ on a set of positive measure.
In a related development, \textsc{Delbaen \& Schachermayer} (\cite{DS1},\,Lemma A1.1;\,\cite{DS2}) showed with very simple arguments that, from every sequence $\{f_n \}_{n \in \mathbb{N}}$ of nonnegative, measurable functions, a sequence of convex combinations $\,g_n \in \text{conv}(f_n, f_{n+1} \cdots ), ~ n \in \mathbb N\,$ of its elements can be extracted, which converges $\mathbb{P}-$a.e.\,to a measurable $f : \Omega \to [0, \infty]$. Clearly Theorem \ref{theorem3} implies this result, which was called ``a somewhat vulgar version of \textsc{Koml\'os}'s theorem" in \cite{DS2}. Indeed, the assertion of convergence is much more precise for \textsc{Ces\`aro} means, than it is for unspecified forward convex combinations.
In several contexts, however, including optimization problems treated via convex duality, nonnegativity is often no restriction at all, but rather the natural setting of things (e.g.,\,\cite{KS}; \cite{KK}, Chapter 3 and Appendix). Then, in the presence of convexity, Lemma A1.1 in \cite{DS1}, or Theorem \ref{theorem3} here, are very useful analogues of Theorem \ref{Kom}: they lead to limit functions $f$ in convex sets (such as the positive orthant in $ \mathbb{L}^1$, or the unit ball in $ \mathbb{L}^1$) which are not compact in the usual sense, but are ``convexly compact" as in \textsc{\v Zitkovi\'c} \cite{Z}.
We formulate now our second result, a direct consequence of the first, recalling the notation $x^\pm = \max ( \pm x, 0)$ for the positive and negative parts of a real number $x$.
\begin{theorem}
\label{theorem4}
Given a sequence $ \{f_n \}_{n \in \mathbb{N}}$ of real-valued, measurable functions on a probability space $(\Omega, \mathcal{F}, \mathbb{P})$ with $\sup_{n \in \mathbb N} \mathbb E \big( f_n^- \big) < \infty\,,$ there exist a measurable function $f:\Omega \to (-\infty, \infty]$ and a subsequence $\big\{f_{n_k}\big\}_{k \in \mathbb{N}}$ of the original sequence, such that \eqref{02} holds.
\end{theorem}
\begin{remark}
\label{rem0}
{\rm
The function $f$ in Theorem \ref{theorem4} is integrable if, in addition to the conditions there, we have $\,\sup_{n \in \mathbb N} \mathbb E \big( f_n^+ \big) < \infty\,$ (equivalently, $\,\sup_{n \in \mathbb N} \mathbb E \big( | f_n | \big) < \infty$) as well. Thus, Theorem \ref{Kom} is a consequence of Theorem \ref{theorem4}.
}
\end{remark}
\section{Preparation}
\label{sec4}
We place ourselves in the setting of Theorem \ref{theorem3}. In the arguments that follow we shall pass often to subsequences, and to diagonal subsequences, of a given $ \{f_n \}_{n \in \mathbb{N}}$. To simplify typography, we denote frequently such subsequences by the same symbols, $ \{f_n \}_{n \in \mathbb{N}}$.
For each integer $k \in \mathbb{N}$, we introduce now the ``truncated'' functions
\begin{equation}
\label{3}
f_n^{(k)}\, :=\,\mathbf{ 1}_{ [ k-1 , k ) }(f_n) \cdot f_n, \qquad n \in \mathbb{N}
\end{equation}
and note the partition of unity
$\,
\sum_{k \in \mathbb{N}} f^{(k)}_n = f_n\,, ~ \forall ~ n \in \mathbb{N}\,.
$
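The truncation in \eqref{3} and the accompanying partition of unity can be checked mechanically; the following Python sketch (with arbitrary illustrative values) recovers each value from its truncated pieces:

```python
def truncate(x, k):
    # f^{(k)} = 1_{[k-1, k)}(f) * f : keep x only when k-1 <= x < k.
    return x if (k - 1) <= x < k else 0.0

values = [0.25, 1.0, 2.7, 9.99, 14.5]

# Partition of unity: summing the truncations over k recovers the value,
# since exactly one level k satisfies k-1 <= x < k (here k up to 15
# suffices to cover every value in the list).
recovered = [sum(truncate(x, k) for k in range(1, 16)) for x in values]
```
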
\begin{lemma}
\label{lemma04}
For the sequence of functions $ \{f_n \}_{n \in \mathbb{N}}$ in Theorem \ref{theorem3}, there exists a subsequence, denoted by the same symbols and such that the functions of \eqref{3} converge for every $k\in \mathbb{N}$ to an appropriate measurable function $f^{(k)}:\Omega \to [0,\infty)\,,$ in the sense
\begin{equation}
\label{4}
f_n^{(k)} \, \xrightarrow[n \to \infty]{hC} \, f^{(k)}, \qquad \mathbb{P}-\text{a.e.}
\end{equation}
\end{lemma}
\noindent
{\it Proof} (cf.\,\cite{Ch}, pp.\,145--146): For arbitrary, fixed $k\in \mathbb{N}\,,$ the sequence $ \big\{f_n^{(k)} \big\}_{n \in \mathbb{N}}$ of \eqref{3} is bounded in $\mathbb{L}^\infty,$ thus also in $\mathbb{L}^2$. Theorem \ref{Rev} provides a function $ f^{(k)} \in \mathbb{L}^2$ and a subsequence $ \big\{ f_{n_j}^{(k)} \big\}_{j \in \mathbb{N}}$ of $ \big\{f_n \big\}_{n \in \mathbb{N}}\,$, such that $\,\sum_{j \in \mathbb{N}} \big(f_{n_j}^{(k)} -f^{(k)} \big)/ j\,$ converges $\,\mathbb{P}-$a.e. We obtain now from the \textsc{Kronecker} Lemma (\cite{Chu}, p.\,123)
$$
0= \lim_{J\to \infty} \frac{1}{J} \sum_{j=1}^J \big( f_{n_j}^{(k)} - f^{(k)} \big)=\lim_{ J \to \infty} \frac{1}{J} \sum_{j=1}^J f_{n_j}^{(k)} - f^{(k)} ,\qquad \mathbb{P} -\hbox{a.e.}
$$
for the sequence $ \big\{ f_{n_j}^{(k)} \big\}_{j \in \mathbb{N}}$ and all its subsequences. We pass now to a diagonal subsequence, still denoted $ \big\{f_n \big\}_{n \in \mathbb{N}}\,$, and such that \eqref{4} holds for {\it every} $k \in \mathbb N\,.$ \qed
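The \textsc{Kronecker} Lemma step above can be illustrated numerically: whenever $\sum_j d_j/j$ converges, the plain averages $\frac1J \sum_{j \le J} d_j$ must vanish. A sketch with an illustrative bounded sequence:

```python
# Kronecker's Lemma (Chung, p.123): if sum_j d_j / j converges, then
# (1/J) * sum_{j=1}^J d_j -> 0 as J -> infinity.  Illustration with
# d_j = (-1)^j * (1 + 1/j): the weighted series sum_j d_j / j is a
# convergent alternating-type series, so the Cesaro mean must vanish.
J = 10_000
d = [(-1) ** j * (1 + 1 / j) for j in range(1, J + 1)]
cesaro = sum(d) / J  # close to 0 for large J
```
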
\bigskip
With these ingredients, we introduce the measurable function $f:\Omega \to [0,\infty]$ via
\begin{equation}
\label{5}
f \,:=\, \sum_{k \in \mathbb{N}} f^{(k)}, \qquad \text{and consider the set} \quad A_\infty \,:= \,\{f=\infty\}.
\end{equation}
With the help of \textsc{Fatou}'s Lemma, and the notation of (\ref{3})--(\ref{5}), we obtain then
\begin{equation}
\label{6}
\varliminf_{N\to \infty} \frac{1}{N} \sum^{N}_{n=1} f_n \geq f\,, \qquad \mathbb{P} -\text{a.e.}
\end{equation}
\begin{equation}
\label{7}
\lim_{N\to \infty} \frac{1}{N} \sum^{N}_{n=1} f_n = \infty =f\,, \qquad \mathbb{P} -\text{a.e.} \quad \text{on} \quad A_\infty
\end{equation}
from Lemma \ref{lemma04}, for a suitable subsequence (denoted by the same symbols) of the original sequence $ \{f_n \}_{n \in \mathbb{N}}$ and for all further subsequences of this subsequence.
\begin{remark}
\label{rem1}
{\rm
The inequality in \eqref{6} can easily be strict. Consider, for instance, $f_n \equiv n\,,$ so that $f_n^{(k)} =0$ holds in \eqref{3} for every fixed $k \in \mathbb{N}$ and all $n \in \mathbb{N}$ sufficiently large. This leads to $f^{(k)} =0$ in \eqref{4}, thus $f=0$ in \eqref{5}; but $\frac{1}{N} \sum^N_{n=1} f_n \to \infty$ as $N\to \infty$.
}
\end{remark}
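The remark's example is easy to reproduce numerically; the sketch below (with hypothetical cutoffs $N=200$ and $k=5$, chosen only for illustration) shows every truncated level vanishing eventually, while the raw \textsc{Ces\`aro} averages grow without bound:

```python
def truncate(x, k):
    # f^{(k)} = 1_{[k-1, k)}(f) * f, as in (3).
    return x if (k - 1) <= x < k else 0.0

N = 200
f = list(range(1, N + 1))          # f_n = n

# For a fixed level k, f_n^{(k)} = 0 for every n >= k, so each truncated
# Cesaro limit f^{(k)} is 0 -- hence f = 0 in (5) ...
k = 5
trunc_tail = [truncate(fn, k) for fn in f[k:]]

# ... while the raw Cesaro averages (1/N) * sum_{n<=N} n = (N+1)/2 diverge.
cesaro = sum(f) / N
```
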
This preparation allows us to formulate a somewhat more technical and precise version of Theorem \ref{theorem3}, as follows.
\begin{proposition}
\label{prop05}
Fix a sequence $\{ f_n \}_{n \in \mathbb N}$ of nonnegative, measurable functions on the probability space $(\Omega, \mathcal{F}, \mathbb{P})$, and recall the notation of \eqref{3}, \eqref{5}. There exist then a subsequence, denoted again as $\{f_n\}_{n\in \mathbb{N}}\,$, and a set $A \supseteq A_\infty\,,$ such that
\begin{equation}
\label{8}
f_n \, \xrightarrow[n \to \infty]{hC} \,f^A := \max \big(f, \, \infty \cdot \mathbf{1}_A\big)\,, \qquad \mathbb{P} -\text{a.e.}
\end{equation}
\end{proposition}
We are employing, here and below, the familiar convention $ \,\infty \cdot 0=0$. It is clear that Theorem \ref{theorem3} will have been established, once Proposition
\ref{prop05} is.
\begin{remark}
\label{rem2}
{\rm
When $ \,C := \sup_{n \in \mathbb{N}} \mathbb{E} (f_n)< \infty \, $ holds, $f$ in \eqref{5} is integrable: $\,\mathbb{E} (f ) \le C,$ from \eqref{6} and \textsc{Fatou}. In particular, $f$ is then real-valued, thus $ f^A \equiv f $ in \eqref{8}.
}
\end{remark}
\section{Proofs}
\label{sec5}
We shall need a couple of auxiliary results. First, and always with the notation of (\ref{3})--(\ref{5}), we note the following consequence of monotone and dominated convergence.
\begin{lemma}
\label{lemma_2}
Suppose the set $\,D\subseteq \Omega \backslash A_\infty = \{f < \infty \}\,$ satisfies $\,\mathbb{E}\big(f \, \mathbf{1}_D\big) < \infty\,.$
Then for any given $\varepsilon \in (0,1)$ there exists, after passing to a suitable subsequence, an integer $K \in \mathbb N $ such that
\begin{equation}
\label{10}
\lim_{n \to \infty} \mathbb{E} \Big[\,f_n^{\,[K,L)}\, \mathbf{ 1}_D\, \Big] < \varepsilon\,, \qquad \forall ~~L = K+1, K+2, \cdots \,.
\end{equation}
\end{lemma}
We are using throughout the notation
\begin{equation}
\label{11}
f_n^{\,[K,L)} \,:= \sum^L_{k=K+1} f_n^{(k)} = f_n \, \mathbf{1}_{[K,L)} (f_n)\,, \quad ~~~f_n^{\,[K,\infty)} \,:=\, \sum_{k \geq K+1} f_n^{(k)} = f_n \, \mathbf{1}_{[K, \infty)} (f_n)\,,
\end{equation}
and in an analogous manner
$\,
f^{\,[K,L)} \,:=\, \sum^L_{k=K+1} f^{(k)}\,, ~~ f^{\,[K,\infty)} \,:=\, \sum_{k \geq K+1} f^{(k)}.
\,$
\smallskip
Secondly, we observe the following dichotomy.
\begin{lemma}
\label{lemma_3}
In the setting of Proposition \ref{prop05}, suppose that a measurable set $B$ exists, such that $\,B \supseteq A_\infty\, $ and $\,f_n \xrightarrow[n \to \infty]{hC} \infty\,$ holds $\,\mathbb{P}-$a.e.\,on $B$.
Then, either
\medskip
\noindent
(i) there exist a set $C \supseteq B$ with $\mathbb{P} (C) > \mathbb{P} (B)$ and a subsequence, still denoted $ \{f_n \}_{n \in \mathbb{N}}\,,$ such that
\begin{equation}
\label{9}
f_n \xrightarrow[n \to \infty]{hC} \infty \qquad \text{holds} \quad \mathbb{P} -\text{a.e.} ~ \text{on} \quad C\,; \qquad \text{or}
\end{equation}
(ii) the \textsc{Ces\`aro} convergence $~f_n \xrightarrow[n \to \infty]{hC} f < \infty \quad \text{holds} \quad \mathbb{P} -a.e. ~ \text{on} \quad \Omega \setminus B \subseteq \{f< \infty\}$.
\end{lemma}
\smallskip
When the contingency $(ii)$ prevails, the set $B$ is ``maximal'' for the convergence in \eqref{9}, i.e., not contained in any set with bigger measure on which $f_n \stackrel{hC}{\longrightarrow} \infty$ holds $\mathbb{P} -$a.e. This leads then to the claim of Proposition \ref{prop05}, and thus to Theorem \ref{theorem3} as well.
\subsection{Proof of Lemma \ref{lemma_2}}
\label{sec5a}
We shall argue by contradiction, assuming the existence of an $\varepsilon \in (0,1)$ with the property that, for every $K \in \mathbb{N}$, there exists an integer $L>K$ such that
\begin{equation}
\label{11b}
\mathbb{E} \bigg[\sum^L_{k=K+1} f^{(k)}_n \, \mathbf{1}_D \bigg] = \mathbb{E} \Big(f_n^{[K,L)} \, \mathbf{1}_D \Big) \geq \varepsilon
\end{equation}
holds for infinitely many integers $n \in \mathbb{N}$. But this means that there is a subsequence, again denoted by $\{f_n\}_{n \in \mathbb{N}}\,,$ {\it along which \eqref{11b} holds for every $n \in \mathbb{N}$.} As a result, also
\begin{equation}
\label{11c}
\mathbb{E} \bigg[\sum^L_{k=K+1} \Big(\frac{1}{N} \sum^N_{n=1} f_n^{(k)}\Big) \, \mathbf{1}_D \bigg] \geq \varepsilon
\end{equation}
holds for every $N \in \mathbb{N}$. Now all the truncated functions $f_n^{(k)}$ as in \eqref{3}, for $k=K+1, \dots, L$ and $n \in \mathbb{N}$, take values ``on the Procrustean bed'' $\{ 0 \} \cup [K,L)$; and $\, \lim_{N\to\infty} \frac{1}{N} \sum^N_{n=1} f_n^{(k)} = f^{(k)}\,$ holds $\mathbb{P} -$a.e., on account of Lemma \ref{lemma04}. Thus, $\,
\mathbb{E} \big[\sum^L_{k=K+1} f^{(k)} \, \mathbf{1}_D \big] \geq \varepsilon
\,$ from bounded convergence and \eqref{11c}; and the nonnegativity of these $f^{(k)}$'s implies also
\begin{equation}\label{11d}
\mathbb{E} \bigg(\sum_{k\geq K+1} f^{(k)} \, \mathbf{1}_D \bigg) = \mathbb{E} \Big(f^{\,[K, \infty)} \, \mathbf{1}_D \Big) \geq \varepsilon\,, \qquad \forall ~~K \in \mathbb{N}\,.
\end{equation}
This nonnegativity gives also $\,\lim_{K\to \infty} \uparrow \sum^K_{k=1} f^{(k)} =f$, both $\mathbb{P} -$a.e.\,and in $\mathbb{L}^1$ on $D$. Consequently, $ \mathbb{E} \big[f^{\,[K, \infty)} \, \mathbf{1}_D\big] < \varepsilon/2\,$ holds for all $K \in \mathbb{N}$ large enough, by monotone convergence. This contradicts \eqref{11d}, and we are done. \qed
\subsection{Proof of Lemma \ref{lemma_3}}
\label{sec5b}
We fix $j \in \mathbb{N}$, define
\begin{equation}
\label{12}
D_j := \{f \leq j\}\backslash B\,, \qquad
E_n^{[K, \infty)} \,:=\, \big\{f_n^{\,[K,\infty)} \geq K \big\} \cap D_j \,=\, \big\{f_n \geq K \big\} \cap D_j \,,
\end{equation}
and distinguish two cases:
\smallskip
\noindent
{\it Case A:} $~\alpha \,:=\, \lim_{K\to \infty} \varliminf_{n \to \infty} \mathbb{P} \big(E_n^{[K, \infty)}\big) > 0\,.$
\smallskip
\noindent
{\it Case B:} $~\alpha = 0\,.$
\medskip
\noindent
$\bullet~$
In Case A, we pass to a subsequence $\{f_n\}_{n \in \mathbb{N}}$ with $\mathbb{P} \big(E_n^{\,[n^2, \infty)}\big) \geq \alpha /2 \,,$ for all $\,n \in \mathbb{N}$; consider the indicators
$\,g_n := \mathbf{1}_{E_n^{[n^2, \infty)}}, ~n \in \mathbb{N}\,;$
and find a subsequence, still denoted by $\{g_n\} _{n \in \mathbb{N}}$, with the property
\begin{equation}
g_n \xrightarrow[n \to \infty]{hC} g, \qquad \mathbb{P} -\hbox{a.e.}
\end{equation}
for some measurable $g:\Omega \to [0,1]$ with $\mathbb{E}(g) \geq \alpha /2$, by bounded convergence. We are appealing here also to Theorem \ref{Rev} and \textsc{Kronecker}, as we did in the proof of Lemma \ref{lemma04}. In this manner, we obtain $f_n \xrightarrow[n \to \infty]{hC} \infty\,,$ $\, \mathbb{P} -$a.e. on $\,\{g >0\}\,$.
\smallskip
This set satisfies $\{g >0\} \cap B= \emptyset\,$, and has measure
$\,
\mathbb{P} \{g>0\} = \mathbb{E} [\mathbf{1}_{\{g>0\}} ] \geq \mathbb{E}[g] \geq \alpha/2 \,;
$
thus we are in case $(i)$ of Lemma \ref{lemma_3}, with $C := \{g >0\} \cup B$ and $\mathbb{P}(C) > \mathbb{P}(B)$.
\medskip
\noindent
$\bullet~$
Now we pass to Case B. We fix $\varepsilon >0$ and $D_j=\{f \leq j\} \backslash B$, and apply Lemma \ref{lemma_2} to find $K \in \mathbb{N}$ for which \eqref{10} holds. We construct by induction a subsequence, this time indexed by $\{n_m\}_{m\in \mathbb{N}}$, and an increasing sequence $\{K_m\}_{m \in \mathbb{N}}$ of integers, as follows:
{\it Start with $n_1=1, K_1=K$, and suppose $n_1, \dots, n_m$ as well as $K_1, \dots, K_m$ have been constructed. Select, by the premise of Case B, an integer $K_{m+1} > K_m$, such that
\begin{equation}
\label{13}
\varlimsup_{n\to \infty} \mathbb{P} \big(E_n^{\,[K_{m+1}, \infty ) }\big) < 2^{-m}\,.
\end{equation}
Using \eqref{10}, select now an integer $n_{m+1} > n_m$ so large, that
$\,
\mathbb{E} \big(f_{n_{m+1}}^{\,[K,K_{m+1})} \, \mathbf{1}_{D_j} \big) < \varepsilon\,
$
holds, therefore also
\begin{equation}
\label{14}
\mathbb{E} \big( f_{n_{m+1}}^{\,[0,K_{m+1} )} \, \mathbf{1}_{D_j} \big)\, < \,\mathbb{E} \big(f_{n_{m+1}}^{\,[0,K)} \, \mathbf{1}_{D_j} \big) + \varepsilon \,,
\end{equation}
completing the induction.} Now, from \eqref{13}, \eqref{12} and \textsc{Borel-Cantelli}, we have $\mathbb{P} -$a.e.
\begin{equation}
\label{15}
\lim_{M\to\infty}\bigg( \frac{1}{M} \sum^M_{m=1} f^{\,[0,K_m)}_{n_m} \cdot \mathbf{1}_{D_j}\bigg) \,= \,\lim_{M\to \infty} \bigg( \frac{1}{M} \sum^M_{m=1} f_{n_m} \cdot \mathbf{1}_{D_j} \bigg)\,.
\end{equation}
On account of \eqref{14}, the integral of the function on the left-hand side in \eqref{15} satisfies
\begin{equation}
\label{16}
\begin{split}
&\varlimsup_{M\to\infty} \mathbb{E} \bigg[ \bigg( \frac{1}{M} \sum^M_{m=1} f_{n_m}^{\,[0,K_m)} \bigg) \, \mathbf{1}_{D_j} \bigg] \, \le \, \varlimsup_{M\to\infty} \mathbb{E} \bigg[ \bigg( \frac{1}{M} \sum^M_{m=1} f_{n_m}^{\,[0,K)} \bigg) \mathbf{1}_{D_j} \bigg] +\varepsilon \\
& \qquad \qquad \qquad \qquad \qquad \qquad \qquad ~ \leq \mathbb{E}(f \,\mathbf{1}_{D_j})+\varepsilon\,.
\end{split}
\end{equation}
At this point, we need to let $\varepsilon \downarrow 0, \,j\to \infty$. We take $\varepsilon \downarrow 0$ first, with $j \in \mathbb N$ fixed, and find a diagonal subsequence, still denoted by $\{f_n\}_{n \in \mathbb{N}}\,,$ with
\begin{equation}
\label{5.13}
f_n \xrightarrow[n \to \infty]{hC}f,~~ ~~~\mathbb{P} -\hbox{a.e. on $D_j\,;$}
\end{equation}
details for this argument are supplied right below. Thus, on account of \eqref{15}, \eqref{16}:
\begin{equation}
\label{5.14}
\mathbb{E} \bigg[\bigg( \lim_{N\to \infty} \bigg(\frac{1}{N} \sum^N_{n=1} f_{n } \bigg) -f \bigg) \, \mathbf{1}_{D_j} \bigg] =0\,, \qquad \forall ~~ j\in \mathbb{N}\,.
\end{equation}
The next step is to let $ \,j\to \infty$. We do this again by extracting subsequences, successively for each $j \in \mathbb N\,$, then passing to a diagonal subsequence. In this manner, we establish \eqref{5.14} with $D_j$ replaced there by the set $D= \bigcup_{j \in \mathbb N}D_j =\{ f < \infty \} \backslash B\,.$
We invoke at this point \eqref{6}, which gives $\lim_{N\to \infty} \frac{1}{N} \sum^N_{n=1} f_{n } =f$, $\mathbb{P}-$a.e.\,on $D$, and deduce that we are in Case (ii) of Lemma \ref{lemma_3}. \qed
\bigskip
\noindent
{\it Proof of \eqref{5.13}}: Let us recapitulate what has been done up to this point, for $\varepsilon^{(1)} = \varepsilon >0$: We have seen that the subsequence
$\{ f_{n_m} \mathbf{1}_{D_j} \}_{m \in \mathbb{N}}\,$ of $\{f_{n } \mathbf{1}_{D_j}\}_{n \in \mathbb{N}}\,$ is \textsc{Borel-Cantelli} equivalent (cf.\,section \ref{sec2})
to $\, h_m^{(1)}:= f_{n_m}^{\,[0, K_m)} \mathbf{1}_{D_j}$, $\,m \in \mathbb N\,;$ namely, $\sum_{m \in \mathbb N} \mathbb{P}
\big( f_{n_m} \mathbf{1}_{D_j} \neq f_{n_m}^{\,[0, K_m)} \mathbf{1}_{D_j} \big) < \infty\,.
$ And for a given $\varepsilon^{(1)} >0,$ we have found an integer $ K_1\in \mathbb N$ with the property $\, \mathbb E \big[ \, h_m^{(1)} \,\mathbf{ 1}_{ \{ h_m^{(1)} \ge K_1 \}\cap D_j } \big] < \varepsilon^{(1)}\,$ for all $m \in \mathbb N$ large enough.
\smallskip
Repeating this argument with $\varepsilon^{(2)} >0, $ we extract a subsequence $\{f_{n_{m_\ell}} \mathbf{1}_{D_j} \}_{\ell \in \mathbb{N}}\, $ of $\{f_{n_m}\mathbf{1}_{D_j} \}_{m \in \mathbb{N}}\,$, and find $\, K_2 \in \mathbb N$ and a sequence $\, \big\{ h_\ell^{(2)} \big\}_{\ell \in \mathbb N}\,$ which is \textsc{Borel-Cantelli} equivalent to $\{f_{n_{m_\ell}} \mathbf{1}_{D_j} \}_{\ell \in \mathbb{N}}\, $ with
$$
\mathbb E \Big[ \, h_\ell^{(2)} \,\mathbf{ 1}_{\{ h_\ell^{(2)} \ge K_2 \} \cap D_j} \Big] < \varepsilon^{(2)} \quad \text{for all $\,\ell \in \mathbb N\,$ sufficiently large.}
$$
Continuing in this manner, then passing to a diagonal subsequence, we obtain a sequence $\, \big\{ f_n \big\}_{n \in \mathbb N}$ such that, for every term $\varepsilon^{(i)} >0$ in a sequence with $\varepsilon^{(i)} \downarrow 0\,$, there exist $\, K_i \in \mathbb N\,$ and a sequence $\, \big\{ h_n^{(i)} \big\}_{n \in \mathbb N}\,,$ \textsc{Borel-Cantelli} equivalent to $\, \big\{ f_n \big\}_{n \in \mathbb N}\,,$ with
$$
\mathbb E \Big[ \, h_n^{(i)} \,\mathbf{ 1}_{\{ h_n^{(i)} \ge K_i \} \cap D_j} \Big] < \varepsilon^{(i)} \quad \text{~ for all $\,n \in \mathbb N\,$ sufficiently large.}
$$
But this implies $ \,
f_n \xrightarrow[n \to \infty]{hC}f,~~ \mathbb{P} -\hbox{a.e. on $D_j\,,$}
$ as claimed in (\ref{5.13}). \qed
\subsection{Proof of Proposition \ref{prop05}}
\label{sec5c}
On the strength of Lemma \ref{lemma_3} we construct, by exhaustion or transfinite induction arguments and as long as we are in the realm of case $(i)$ there, an increasing sequence $B_1 \subseteq B_2 \subseteq \dots$ of sets as postulated there, whose union $B_\infty := \bigcup_{j \in \mathbb{N}} B_j$ is maximal with the property \eqref{9} for an appropriate subsequence.
But such maximality means that, on the complement $\Omega \backslash B_\infty$ of this set, we must be in the contingency of case $(ii)$. This establishes the Proposition, thus also Theorem \ref{theorem3}. \qed
\subsection{Proof of Theorems \ref{theorem4} and \ref{Kom}}
\label{sec5d}
The sequence $ \{f_n^- \}_{n \in \mathbb{N}}$ satisfies the conditions of Theorem \ref{theorem3}, and $\sup_{n \in \mathbb N} \mathbb E \big( f_n^- \big) < \infty\,.$ Thus, from Theorem \ref{theorem3} and \textsc{Fatou} we obtain, after passing to a subsequence,
$$
f_{n_k}^- \xrightarrow[k \to \infty]{hC}f^{\,(-)}, \qquad \mathbb{P}-\hbox{a.e.}
$$
for some $f^{\,(-)}: \Omega \to [0, \infty)$ which is integrable: $\mathbb E \big( f^{\,(-)} \big) \le \sup_{n \in \mathbb N} \mathbb E \big( f_n^{-} \big) < \infty\,.$ Passing yet again to a subsequence, still denoted $\{f_{n_k} \}_{k \in \mathbb{N}}\,,$ we apply Theorem \ref{theorem3} to the sequence $\big\{f^+_{n_k}\big\}_{k \in \mathbb{N}}$ of its positive parts and obtain
$
\,f_{n_k}^+ \xrightarrow[k \to \infty]{hC}f^{\,(+)}, ~~ \mathbb{P}-\hbox{a.e.,}
$
for some measurable $f^{\,(+)}: \Omega \to [0, \infty]$. The proof of Theorem \ref{theorem4} is completed by the observation
$$
f_{n_k}^+ - f_{n_k}^- =f_{n_k} \, \xrightarrow[k \to \infty]{hC} \, f:= f^{\,(+)} - f^{\,(-)}, \quad \mathbb{P}-\hbox{a.e.}
$$
If, in addition, $\sup_{n \in \mathbb N} \mathbb E \big( f_n^+ \big) < \infty\,$ holds, we have as before $\mathbb E \big( f^{\,(+)} \big) \le \sup_{n \in \mathbb N} \mathbb E \big( f_n^{+} \big) < \infty\,,$ the just defined function $f$ is integrable, and Theorem \ref{Kom} with $\,\mathbb E (|f|)<\infty\,$ follows. \qed
\section{Introduction}
\label{sec:introduction}
Deep Neural Networks (DNNs) achieve state-of-the-art results in a wide range of areas and have various applications across industries, including self-driving cars \cite{rao2018deep}, virtual assistants \cite{rawassizadeh2019manifestation}, intelligent healthcare \cite{miotto2018deep}, and personalized recommendation \cite{naumov2019deep}. This success usually relies on a large amount of training data. However, DNN generalization is often hampered in domains where training data is insufficient, because data annotation is labor-intensive and expensive.
Regularization is a popular method that helps to improve generalization by introducing inductive bias, and it is one of the key elements of machine learning, particularly of deep learning \cite{goodfellow2016deep}. Specifically, an inductive bias represents assumptions about model properties other than the consistency of outputs with targets. There have been tremendous efforts in identifying such desired properties, resulting in a series of widely used regularization methods. For example, L2 regularization \cite{plaut1986experiments, lang1990dimensionality} penalizes large norms of model weights, which constrains the ``parameter scale''. L1 regularization improves ``sparseness'' by encouraging zero weights or neuron responses. Jacobian regularization \cite{sokolic2017robust, hoffman2019robust} minimizes the norm of the input-output Jacobian matrix to improve the ``smoothness'' of the learned mapping function. Orthogonal regularization \cite{cui2020towards, brock2016neural} enlarges ``weight diversity'' to reduce feature redundancy. Batch Normalization \cite{ioffe2015batch} promotes ``training dynamics stability'' by reducing internal covariate shift.
Although existing works have proposed various inductive biases from diverse perspectives, including the aforementioned ``parameter scale'', ``sparseness'', ``smoothness'', ``weight diversity'', and ``training dynamics stability'', to the best of our knowledge, no work explores inductive bias from the perspective of the characteristics of the neuron response distribution on each class. Put differently, existing works leverage information related to weights (parameter scale regularization), weight correlations (orthogonal regularization), derivatives of the mapping function (smoothness regularization), or collective neuron responses (sparseness regularization, Batch Normalization), but none of them considers the intra-class response distribution of individual neurons.
\begin{figure*}[!ht]
\centering
\includegraphics[width=1.0\textwidth]{fig/class.pdf}
\caption{\label{fig:correctness}Comparison of the intra-class response variance of correctly and incorrectly classified testing samples for different architectures: four-layer MLP for MNIST, ResNet-18 for CIFAR-10, and GraphSAGE for PubMed. The horizontal axis and the vertical axis represent class indexes and the value of intra-class response variance, respectively. Each bar represents the intra-class response variance aggregated from all neurons in the penultimate layer.}
\end{figure*}
\begin{figure*}[!ht]
\centering
\subfigure[Testing accuracy]{
\begin{minipage}[b]{0.30\textwidth}
\centering
\includegraphics[width=1\textwidth]{fig/mlp_val_acc.pdf}
\end{minipage}
}
\subfigure[Cross entropy loss]{
\begin{minipage}[b]{0.30\textwidth}
\centering
\includegraphics[width=1\textwidth]{fig/_cross_entropy.pdf}
\end{minipage}
}
\subfigure[Average intra-class variance]{
\begin{minipage}[b]{0.30\textwidth}
\centering
\includegraphics[width=1\textwidth]{fig/_node_stable.pdf}
\end{minipage}
}
\caption{\label{fig:hyperparam}Training procedure of the vanilla four-layer MLP and the four-layer MLP with NSR on MNIST. (a) represents the testing accuracy; (b) and (c) illustrate the corresponding cross entropy loss and average intra-class response variance of these two models on training set.}
\end{figure*}
In this paper, we study the characteristics of the intra-class response distribution of each individual neuron to identify a new regularization method. In more detail, for each individual neuron, we analyze the variance of its response to samples of the same class, which we call the \textbf{neuron intra-class response variance}. We find that this intra-class response variance has a clear correlation with classification correctness. As shown in Figure \ref{fig:correctness}, correctly classified samples usually have smaller intra-class response variance than misclassified samples. Besides, the vanilla model with cross entropy as the optimization target usually cannot control the intra-class response variance well, as shown in Figure \ref{fig:hyperparam}, which leaves room for improvement by regularization. The details of the experiments and observations are explained in Section \ref{sec:Observations}.
Based on these observations, we articulate the \textbf{Neuron Steadiness Hypothesis}: a neuron with similar responses to instances of the same class, i.e., smaller neuron intra-class response variance, can lead to better generalization.
Accordingly, we propose the regularization method called \textbf{Neuron Steadiness Regularization} to improve generalization by penalizing large neuron intra-class response variance.
Our regularization method shows significant improvement on various network architectures, including Graph Neural Network (GNN), Convolutional Neural Network (CNN), and Multilayer Perceptron (MLP).
To sum up, our contributions are as follows:
\begin{itemize}
\item We articulate the Neuron Steadiness Hypothesis and demonstrate its validity, which provides a new regularization perspective based on the neuron-level class-dependent response distribution.
\item We propose a new regularization method called Neuron Steadiness Regularization that improves generalization ability. The proposed method is computationally efficient and adaptive to various architectures and tasks.
\item Extensive experiments are conducted on multiple kinds of datasets, like images, citation graphs, and product graphs, with various network architectures, including GNN, CNN, and MLP. The significant accuracy improvement evidently verifies the effectiveness of the proposed Neuron Steadiness Regularization.
\end{itemize}
\section{Observations}\label{sec:Observations}
In this section, we verify the validity of our proposed Neuron Steadiness Hypothesis through experiments, summarized in the following observations.
\subsection{\textbf{Correlation between neuron intra-class response variance and classification correctness}}
The neuron intra-class response variance is derived from the neuron response distribution. For any neuron in a given neural network, its response distribution can be obtained by recording its responses as we feed each input sample to the model. In this paper, for a neuron with ReLU as its activation function, we do not take zero responses into account, because a zero response corresponds to the inactivated state where the neuron does not respond at all. In other words, for neurons with ReLU activations, we only record non-zero responses to represent the response distribution.
With the obtained response distribution of any given neuron, it is straightforward to calculate the statistic we study, i.e., the intra-class response variance, which is the mean of the squared intra-class response deviations. Such a deviation is the difference between the neuron's response to a particular sample and its average response to the class that sample belongs to. For each neuron, we then calculate the intra-class response variance of the correctly classified samples and of the misclassified samples, respectively. Finally, we aggregate these two variances over all neurons in the penultimate layer separately and present them in Figure \ref{fig:correctness}.
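The computation just described can be sketched as follows; this is a minimal NumPy illustration under our own function and variable names (not code from the paper), with zero responses dropped as described above for ReLU neurons:

```python
import numpy as np

def intra_class_variance(responses, labels, num_classes):
    """Per-neuron intra-class response variance, averaged over classes.

    responses: (num_samples, num_neurons) post-activation responses
    labels:    (num_samples,) integer class labels
    Zero responses (inactive ReLU units) are excluded from the statistic.
    """
    num_neurons = responses.shape[1]
    variances = np.zeros(num_neurons)
    for n in range(num_neurons):
        per_class = []
        for j in range(num_classes):
            r = responses[labels == j, n]
            r = r[r != 0.0]          # drop inactive (zero) ReLU responses
            if r.size > 1:           # variance needs at least two responses
                per_class.append(r.var())
        if per_class:
            variances[n] = np.mean(per_class)
    return variances
```

In practice one would compute this separately over the correctly and incorrectly classified subsets, as in Figure \ref{fig:correctness}.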
Figure \ref{fig:correctness} shows that, for different networks, the average intra-class response variance of correctly classified samples is smaller than that of misclassified ones on every class. This illustrates a strong correlation between classification correctness and neuron intra-class response variance.
\subsection{\textbf{Dynamics of neuron intra-class response variance during training procedure}} \label{sec:2.2}
We investigate the tendency of the neuron intra-class response variance during training. For comparison, we perform the analysis on the vanilla model and on the model with our proposed Neuron Steadiness Regularization. We calculate the intra-class response variance of each neuron on the entire training set after each training epoch. The variance is then averaged over all neurons and denoted as the average variance in Figure \ref{fig:hyperparam}(c). We also show the testing accuracy and the training cross entropy loss in Figure \ref{fig:hyperparam}(a) and (b), respectively. Other architectures demonstrate similar tendencies; see the Appendix.
From Figure \ref{fig:hyperparam}, we can see that the cross entropy objective keeps being optimized during training, which leads to increasing classification accuracy for both models. However, for the vanilla version, the average neuron intra-class response variance keeps growing, because training imposes no constraint on it.
For the model trained with our proposed regularization, the neuron intra-class response variance is controlled and decreases after a few epochs. More importantly, the learned model with well-controlled intra-class response variance has higher testing accuracy than the vanilla version, although its cross entropy loss is even larger. One potential reason is that cross entropy only increases the distance among different classes while ignoring the intra-class distribution, whereas training with our regularization gradually enlarges the distance to the decision boundary by reducing the neuron intra-class response variance, which results in higher testing accuracy. From Figure \ref{fig:hyperparam}, we can also see that regulating the neuron intra-class response variance may help the optimization procedure reach higher accuracy in earlier epochs.
Summarizing the above observations, it is reasonable to design a regularization based on the Neuron Steadiness Hypothesis, i.e., one that reduces the neuron intra-class response variance, for better generalization.
\section{Method}
In this section, we first describe the proposed \textbf{Neuron Steadiness Regularization}. Then we further introduce several techniques for computational efficiency with mini-batch.
\subsection{Definition of \textbf{Neuron Steadiness Regularization}}
\textbf{Neuron Steadiness Regularization} (NSR) for a specific neuron is defined as the sum of its intra-class response variances over the different classes. The NSR term for the $n_{th}$ neuron can be formulated as:
\begin{equation}
\label{equ:neuron_stable_reg}
\mathrm{\sigma}_{n}= \sum_{j=1}^{J}{\alpha_{j} \cdot \operatorname{Var}\left (X_{n,j}\right)} =
\sum_{j=1}^{J}{\alpha_{j} \cdot\mathbb{E}\left[(X_{n,j}-\mathbb{E}\left[X_{n,j}\right])^{2}\right]}
\end{equation}
where $X_{n,j}$ is a random variable denoting the $n_{th}$ neuron's response for a sample belonging to the $j_{th}$ class. $J$ is the number of classes.
$\alpha_{j} = \frac{z_j}{\sum_i z_i}$ is the prior probability of the $j_{th}$ class, where $z_j$ is the number of samples of the $j_{th}$ class. Notice that $\alpha_{j}$ is not a hyper-parameter; it only reflects the relative importance of the different classes.
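As an illustration of Eq. \eqref{equ:neuron_stable_reg}, the following NumPy sketch computes $\sigma_n$ for a single neuron from recorded responses; the function name is ours, and the class priors $\alpha_j$ are taken empirically from the given samples:

```python
import numpy as np

def nsr_for_neuron(x, y, num_classes):
    """Eq. (1): sigma_n = sum_j alpha_j * Var(X_{n,j}) for one neuron.

    x: (num_samples,) responses of the n-th neuron
    y: (num_samples,) integer class labels
    alpha_j is the empirical class prior z_j / sum_i z_i.
    """
    counts = np.bincount(y, minlength=num_classes).astype(float)
    alpha = counts / counts.sum()
    sigma = 0.0
    for j in range(num_classes):
        xj = x[y == j]
        if xj.size > 0:
            sigma += alpha[j] * xj.var()   # Var = E[(X - E[X])^2]
    return sigma
```

The overall term $\mathcal{L}_S$ is then the $\lambda_n$-weighted sum of these per-neuron values, added to the cross entropy loss.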
With the NSR term defined for each individual neuron, the overall regularization is derived by applying NSR term on all neurons in the network as follows:
\begin{equation}
\mathcal{L}_{S}=\sum_{n=1}^{N}{\lambda_{n} \mathrm{\sigma}_{n}}
\end{equation}
where $\mathcal{L}_{S}$ represents the NSR term for the entire network and $N$ is the number of neurons in the network. $\lambda_{n}$ is a hyper-parameter controlling the weight of the $n_{th}$ neuron's term. In this paper, for practical simplicity, $\lambda_n$ is set to the same value for all neurons in the same layer.
Adding the overall regularization term to the main training target, i.e., the cross entropy loss, the final regularized loss function can be written as:
\begin{equation}
\mathcal{L}=\mathcal{L}_{C}+\mathcal{L}_{S},
\end{equation}
where $\mathcal{L}_{C}$ represents cross entropy loss.
\subsection{Practical Implementation}
\label{implementation}
\subsubsection{Mini-batch Training}
In order to use mini-batch training, we adapt our NSR method to allow forward and backward propagation for mini-batches of samples.
To be specific, we first transform Eq. \eqref{equ:neuron_stable_reg} as:
\begin{equation}
\begin{split}
\sigma_n &= \sum_{j=1}^J \alpha_j
\left(\mathbb{E}\left[X_{n, j}^2\right] - \mathbb{E}^2 \left[X_{n, j}\right]\right) \\
&= \mathbb{E}\left[\sum_{j=1}^J \alpha_j X_{n, j}^2\right] - \sum_{j=1}^J \alpha_j \mathbb{E}^2 \left[ X_{n, j}\right].
\end{split}
\end{equation}
The first term has the same form as typical loss functions, i.e., an expectation of the function of data samples, which can be easily generalized to a mini-batch training setting via estimating the expectation with a batch of samples. Although the second term can be estimated in the same way, the square operation magnifies the estimation error, especially when the batch size is not large enough.
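Both the decomposition and the fragility of the second term can be checked numerically; the following small NumPy sketch assumes i.i.d. Gaussian responses as a stand-in for $X_{n,j}$:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=1000)          # stand-in responses X_{n,j} for one class

# the identity used above: Var(X) = E[X^2] - (E[X])^2
assert abs(x.var() - ((x ** 2).mean() - x.mean() ** 2)) < 1e-12

# why the second term is fragile under mini-batching: for a batch mean xbar
# over B samples, E[xbar^2] = (E[X])^2 + Var(X)/B, so squaring a small-batch
# mean overestimates (E[X])^2 by Var(X)/B on average
B = 10
batch_means = x.reshape(-1, B).mean(axis=1)
bias = (batch_means ** 2).mean() - x.mean() ** 2   # roughly x.var() / B
```

This $\mathrm{Var}(X)/B$ inflation is exactly the estimation error that the memory-queue method below is designed to reduce, by averaging over many batches.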
To alleviate this problem while not introducing much computational overhead, we propose a memory-queue-based estimation method that leverages more historical samples for estimation without additional sampling or forward/backward computation. For each class, we record the number of samples and the summed neuron responses within each batch. To further reduce the storage overhead, we only maintain these values for the latest $M$ batches using two $M$-length queues.
More specifically, an element $c_{m,j}$ in the first memory queue is the count of $j_{th}$-class instances in the $m_{th}$ batch, represented as:
\begin{equation}
c_{m,j}=\sum_{y_i \in Y_m} \delta\left(y_{i}=j\right),
\end{equation}
where $Y_m$ is the set of labels in the $m_{th}$ batch, $y_i$ is the label of the $i_{th}$ sample, and $\delta$ is the indicator function, i.e., $\delta(\text{condition})=1$ if the condition is satisfied and $\delta(\text{condition})=0$ otherwise.
An element ${s}_{m,j}^{(n)}$ in the second memory queue is the summation value of $n_{th}$ neuron's response for the samples belonging to the $j_{th}$ class within the $m_{th}$ batch, represented as:
\begin{equation}
{s}_{m,j}^{(n)}=\sum_{x^{(n)}_i\in \mathcal{X}_{m}^{(n)}} \delta\left(y_{i}=j\right) \cdot x^{(n)}_{i},
\end{equation}
where $\mathcal{X}_{m}^{(n)}$ is the set of the $n_{th}$ neuron's response within the $m_{th}$ batch and $x_{i}^{(n)}$ is the $n_{th}$ neuron's response of the $i_{th}$ sample.
When a new batch is fed, the estimation of expectation $\mathbb{E} \left[ X_{n, j}\right]$ can be updated by the following steps:
\begin{equation}
\begin{split}
C_j := C_j - c_{0,j} + c_{*,j},& \quad S^{(n)}_j := S^{(n)}_j - s^{(n)}_{0,j} + s^{(n)}_{*,j} \\
\hat{\mathbb{E}} \left[ X_{n, j}\right] &:= S_j^{(n)} / C_j,
\end{split}
\end{equation}
where $\hat{\mathbb{E}} \left[ X_{n, j}\right]$ is the estimate of the expectation $\mathbb{E} \left[ X_{n, j}\right]$, $C_j = \sum_m c_{m,j}$, $S_j^{(n)} = \sum_m s^{(n)}_{m,j}$, and $c_{*,j}$, $s^{(n)}_{*,j}$ are the count and the summation for the new batch, appended to the queues as new elements (while $c_{0,j}$, $s^{(n)}_{0,j}$ are the oldest elements, which are dequeued). With this dynamic update, the additional memory overhead is negligible; we give a space complexity analysis in the Appendix.
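The two queues and the update above can be sketched as follows. This is an illustrative NumPy version (class and method names are ours) that, for clarity, re-sums the queues rather than maintaining the running totals $C_j$, $S_j^{(n)}$ incrementally as in the update equations:

```python
from collections import deque
import numpy as np

class RunningClassMean:
    """Memory-queue estimate of E[X_{n,j}] over the latest M batches.

    Keeps, per batch, the class counts c_{m,j} and per-neuron response
    sums s_{m,j}^{(n)}; the estimate is S_j^{(n)} / C_j.
    """
    def __init__(self, num_neurons, num_classes, M):
        self.counts = deque(maxlen=M)   # each entry: (num_classes,)
        self.sums = deque(maxlen=M)     # each entry: (num_classes, num_neurons)
        self.num_classes = num_classes
        self.num_neurons = num_neurons

    def update(self, responses, labels):
        """responses: (batch, num_neurons); labels: (batch,)."""
        c = np.bincount(labels, minlength=self.num_classes).astype(float)
        s = np.zeros((self.num_classes, self.num_neurons))
        for j in range(self.num_classes):
            s[j] = responses[labels == j].sum(axis=0)
        self.counts.append(c)           # deque with maxlen drops the oldest
        self.sums.append(s)

    def mean(self):
        """Estimated E[X_{n,j}] as a (num_classes, num_neurons) array."""
        C = sum(self.counts)                       # C_j
        S = sum(self.sums)                         # S_j^{(n)}
        return S / np.maximum(C, 1.0)[:, None]     # guard empty classes
```

The `deque(maxlen=M)` automatically discards the oldest batch on append, which corresponds to the $-\,c_{0,j}$ and $-\,s^{(n)}_{0,j}$ terms of the update.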
\subsubsection{Steadiness Redundancy among Neurons}
\label{sec:Steadiness redundancy}
Considering the trade-off between performance gain and computational overhead, it may not be necessary to apply NSR to all neurons. The experimental results in Figure \ref{fig:reduction} show that every layer's variance ratio decreases even when NSR is applied to only one specific layer. This indicates a correlation, or redundancy, among the steadiness constraints of different layers: applying NSR to different layers has an overlapping effect on neuron steadiness control. More detailed experimental results in Section \ref{sec:ablation} show that applying NSR to a single well-chosen layer, i.e., the last layer of an MLP or CNN and the first layer of a GNN, achieves accuracy comparable to the best result obtained by applying NSR to multiple layers.
\begin{figure}
\centering
\subfigure[NSR only applied on the second layer]{
\begin{minipage}[b]{0.47\textwidth}
\centering
\includegraphics[width=1\textwidth]{fig/StabilityConduction_1e-3_0.png}
\end{minipage}
}
\subfigure[NSR only applied on the last layer]{
\begin{minipage}[b]{0.47\textwidth}
\centering
\includegraphics[width=1\textwidth]{fig/StabilityConduction_0_1e-4.png}
\end{minipage}
}
\caption{{The trend of the variance ratio along the training procedure on MNIST. The variance ratio is the neuron intra-class response variance of the three-layer MLP with NSR applied (a) only to the second layer and (b) only to the last layer, divided by the corresponding variance of the vanilla three-layer MLP. }}
\label{fig:reduction}
\end{figure}
\section{Experiment}
In this section, we conduct experiments over a variety of datasets to examine the performance of our neuron steadiness regularization on three extensively used neural network architectures.
We design a series of experiments to answer the following research questions:
\begin{itemize}
\item RQ1: How does NSR perform on various datasets and different neural architectures?
\item RQ2: Does NSR outperform other classical regularization methods?
\item RQ3: What is the effect of combining NSR with other popular methods like Batch Normalization or Dropout?
\item RQ4: How to decide the layer(s) to be applied with NSR?
\end{itemize}
\subsection{Experiment Setup}
\subsubsection{Network Architectures and Datasets}
The three architectures utilized in experiments are Multilayer Perceptron (MLP), Convolutional Neural Network (CNN) and Graph Neural Network (GNN).
As vanilla baselines for the different architectures, we adopt ResNet-18 \cite{he2016deep}, VGG-19 \cite{liu2018rethinking}, and ResNet-50 for CNN, and GCN \cite{kipf2016semi} and GraphSAGE \cite{hamilton2017inductive} for GNN. We run five MLP models with different depths; an MLP with $L$ layers (including the input layer) is denoted MLP-$L$. We elaborate on the details of these vanilla models in the Appendix.
As for benchmark datasets used in our experiments, MLP and CNN are applied to image recognition task on \textbf{MNIST} \cite{lecun1998mnist}, \textbf{CIFAR-10} \cite{krizhevsky2009learning} and \textbf{ImageNet} \cite{deng2009imagenet} datasets, respectively, while GNN is applied to node classification on four real-world graph datasets: \textbf{WikiCS} \cite{mernyei2020wiki}, \textbf{PubMed} \cite{yang2016revisiting}, \textbf{Amazon-Photo} and \textbf{Amazon-Computers} \cite{shchur2018pitfalls}. Notice that ImageNet is a large benchmark dataset with 1000 classes. We describe the details of these datasets in Appendix.
\begin{table*}[!ht]
\centering
{%
\begin{tabular}{c|ccccc}
\toprule
Model & MLP-3 & MLP-4 & MLP-6 & MLP-8 & MLP-10 \\ \midrule
Vanilla (\%) & 3.09 $\pm$ 0.10 & 2.29 $\pm$ 0.07 & 2.44 $\pm$ 0.09 & 2.87 $\pm$ 0.09 & 3.06 $\pm$ 0.06 \\
Vanilla+NSR (\%) & \textbf{2.80 $\pm$ 0.08} & \textbf{1.64 $\pm$ 0.04} & \textbf{1.76 $\pm$ 0.06} & \textbf{1.98 $\pm$ 0.09} & \textbf{1.72 $\pm$ 0.14}\\
Gain & 9.39\% & 28.38\% & 27.87\% & 30.87\% & 43.79\% \\ \bottomrule
\end{tabular}%
}
\caption{Error rate of applying our NSR on five MLP models for MNIST.}
\label{tab:mlp}
\end{table*}
\begin{table}[!ht]
\centering
\resizebox{1.0\columnwidth}{!}{%
\begin{tabular}{c|ccc}
\toprule
Model & ResNet-18 & VGG-19 & ResNet-50 \\ \midrule
Vanilla(\%) & 4.22 $\pm$ 0.07 & 9.19 $\pm$ 0.18 & 7.82 $\pm$ 0.07\\
Vanilla+NSR(\%) & \textbf{3.84 $\pm$ 0.08} & \textbf{8.09 $\pm$ 0.17} & \textbf{7.09 $\pm$ 0.08} \\
Gain & 9.00\% & 11.97\% & 9.34\% \\ \bottomrule
\end{tabular}
}
\caption{Error rate of applying our NSR on ResNet-18 and VGG-19 for CIFAR-10, and top-5 error rate on ResNet-50 for ImageNet.}
\label{tab:cnn}
\end{table}
\subsubsection{Experiment Settings}
\label{sec:experiment_settings}
We divide the 60000 training images of MNIST into a training set with 50000 samples and a validation set with the remaining 10000 samples. Each of the four graph datasets is randomly split into training, validation, and testing sets with ratio 6:2:2. The implementation settings of GraphSAGE and GCN follow \cite{du2021understanding, 10.1145/3442381.3449896}.
For ResNet-18 and VGG-19 on CIFAR-10, we follow the implementation setting details of \cite{zhang2019lookahead, gouk2018maxgain} correspondingly.
For ResNet-50 on ImageNet, we follow the official implementation provided by the torchvision library \footnote{https://pytorch.org/hub/pytorch\_vision\_resnet/}.
To be specific, SGD \cite{ruder2016overview} is used to optimize the MLPs, and Adam \cite{kingma2014adam} for the other models, except for ResNet-18, which is optimized with momentum SGD \cite{ruder2016overview} following the implementation of \cite{zhang2019lookahead}.
We use the typical setting of batch size as 100 for all experiments.
To ensure model convergence, the number of training epochs is set to 100 for both MLP and GNN, 200 for ResNet, and 500 for VGG.
For practical simplicity, we use the same $\lambda_n^l$ for all neurons at the $l$-th layer, denoted $\lambda^l$, and we apply NSR to only one particular layer in our experiments except for RQ4. We tune this hyper-parameter like most regularization methods, applying a random search strategy to find a proper $\lambda^l$ from $10^{-2}$ to $10^{2}$. The error rate is the metric, and each result is averaged over 5 runs with different random seeds. The hardware environment is detailed in the Appendix.
\subsection{Experiment Results}
\subsubsection{\textbf{RQ1: Performance of Neuron Steadiness Regularization (NSR)}}
It is worth mentioning that for the experiments presented under this RQ, we apply NSR to only one specific layer instead of the entire network. One reason is to reduce the number of hyper-parameters $\lambda$. Another reason comes from the steadiness redundancy among neurons: we find that adding NSR to one specific layer usually achieves results comparable to the best results obtained by adding it to multiple layers. The detailed comparison results and the criterion for determining the layer to which NSR is applied are discussed in RQ4.
Here, we discuss the performance of NSR on different models and present the results in Tab. \ref{tab:mlp} $\sim$ Tab. \ref{tab:gnn}. ``Gain'' is the percentage of relative reduction in the error rate. Tab. \ref{tab:mlp} demonstrates that NSR improves the performance of MLPs of different depths: the relative error rate is reduced by 9.39\% at least and by 43.79\% at most. Besides, as the number of layers increases, the gain of NSR shows a roughly upward trend. The four-layer MLP achieves the lowest classification error rate among the vanilla baselines, and accuracy degrades as the network grows deeper, indicating that deep MLPs suffer from severe overfitting. The success of our regularization in addressing this problem reveals the importance of stabilizing the response of each individual neuron to instances of the same class.
For the CNN models, Tab. \ref{tab:cnn} demonstrates that NSR reduces the relative error rate on CIFAR-10 by 11.97\% for VGG-19 and 9.00\% for ResNet-18, and reduces the relative top-5 error rate on ImageNet by 9.34\% for ResNet-50. Considering that VGG-19, ResNet-18, and ResNet-50 already adopt Batch Normalization, Dropout, and weight decay in the vanilla models, the accuracy gain of our method reveals that NSR brings extra benefits that further enhance generalization when combined with these techniques. We show more evidence of this in RQ3.
\begin{table*}[!ht]
\centering
\begin{tabular}{l|c|cc|cc}
\toprule
Dataset & Layers & GraphSAGE (\%) & GraphSAGE+NSR (\%) & GCN (\%) & GCN+NSR (\%) \\
\midrule
\multirow{3}{*}{PubMed}
& 2 & 10.73 $\pm$ 0.06 & \textbf{9.89 $\pm$ 0.08} & 12.02 $\pm$ 0.00 & \textbf{11.92 $\pm$ 0.00} \\
& 3 & 10.20 $\pm$ 0.25 & \textbf{9.48 $\pm$ 0.12} & 12.76 $\pm$ 0.18 & \textbf{12.19 $\pm$ 0.11} \\
& 4 & 10.43 $\pm$ 0.17 & \textbf{9.79 $\pm$ 0.19} & 14.01 $\pm$ 0.07 & \textbf{12.96 $\pm$ 0.08} \\
\midrule
\multirow{3}{17mm}{Amazon-Photo}
& 2 & 5.82 $\pm$ 0.00 & \textbf{4.54 $\pm$ 0.10} & 6.73 $\pm$ 0.00 & \textbf{6.27 $\pm$ 0.00} \\
& 3 & 5.20 $\pm$ 0.14 & \textbf{4.86 $\pm$ 0.13} & 8.00 $\pm$ 0.11 & \textbf{7.96 $\pm$ 0.10} \\
& 4 & 6.37 $\pm$ 0.30 & \textbf{5.62 $\pm$ 0.59} & 10.24 $\pm$ 0.14 & \textbf{9.03 $\pm$ 0.25} \\
\midrule
\multirow{3}{17mm}{Amazon-Computers}
& 2 & 11.37 $\pm$ 0.55 & \textbf{10.47 $\pm$ 0.05} & 12.17 $\pm$ 0.07 & \textbf{10.86 $\pm$ 0.03} \\
& 3 & 11.88 $\pm$ 1.05 & \textbf{10.22 $\pm$ 0.54} & 14.90 $\pm$ 0.25 & \textbf{13.66 $\pm$ 0.12} \\
& 4 & 15.49 $\pm$ 0.90 & \textbf{12.86 $\pm$ 0.82} & 18.07 $\pm$ 0.74 & \textbf{16.02 $\pm$ 0.23} \\
\midrule
\multirow{3}{*}{WikiCS}
& 2 & 16.81 $\pm$ 0.21 & \textbf{16.06 $\pm$ 0.33} & 18.41 $\pm$ 0.06 & \textbf{17.99 $\pm$ 0.05} \\
& 3 & 15.97 $\pm$ 0.18 & \textbf{15.27 $\pm$ 0.21} & 18.66 $\pm$ 0.23 & \textbf{18.10 $\pm$ 0.27} \\
& 4 & 16.63 $\pm$ 0.31 & \textbf{15.43 $\pm$ 0.24} & 19.21 $\pm$ 0.31 & \textbf{18.84 $\pm$ 0.26}\\
\bottomrule
\end{tabular}
\caption{Error rate of applying our NSR on GCN and GraphSAGE over four graph datasets.}
\label{tab:gnn}
\end{table*}
\begin{table*}[!h]
\centering
\begin{tabular}{c|ccc}
\toprule
Regularization & MLP-4 (\%) &ResNet-18 (\%) &GraphSAGE (\%) \\ \midrule
Vanilla & 2.29 $\pm$ 0.07& 7.96 $\pm$ 0.12& 11.37 $\pm$ 0.55 \\
L1 & 2.27 $\pm$ 0.05& 7.83 $\pm$ 0.23& 10.81 $\pm$ 0.13 \\
L2 & 2.27 $\pm$ 0.05& 7.67 $\pm$ 0.18& 10.68 $\pm$ 0.35 \\
Jacobian & 2.21 $\pm$ 0.04& 7.90 $\pm$ 0.07& 11.27 $\pm$ 0.45 \\
NSR & \textbf{1.64 $\pm$ 0.04}& \textbf{7.20 $\pm$ 0.09}& \textbf{10.52 $\pm$ 0.22} \\ \bottomrule
\end{tabular}
\caption{Error rate comparison of different regularization methods on different models.}
\label{tab:regularization}
\end{table*}
\begin{table*}[!h]
\centering
\begin{tabular}{c|ccc}
\toprule
MLP-4 & Vanilla (\%) &Vanilla+BN (\%) &Vanilla+BN+NSR (\%) \\ \midrule
Error rate & 2.29 $\pm$ 0.07& 2.22 $\pm$ 0.04& \textbf{1.62 $\pm$ 0.08} \\ \midrule
MLP-4 & Vanilla (\%) &Vanilla+DO (\%) &Vanilla+DO+NSR (\%) \\ \midrule
Error rate & 2.29 $\pm$ 0.07& 2.19 $\pm$ 0.04& \textbf{1.64 $\pm$ 0.04} \\ \bottomrule
\end{tabular}
\caption{Combination of our NSR with Batch Normalization (BN) and Dropout (DO) for training.}
\label{tab:further gain}
\end{table*}
\begin{table}[!h]
\centering
\begin{tabular}{p{1.4cm}|c}
\toprule
Method & Error rate (\%) \\ \midrule
MLP & 2.29 $\pm$ 0.07 \\
MLP$_2$ & 2.22 $\pm$ 0.08 \\
MLP$_3$ & 1.90 $\pm$ 0.13 \\
MLP$_4$ & 1.64 $\pm$ 0.04 \\
MLP$_{3,4}$ & \textbf{1.63 $\pm$ 0.08} \\\bottomrule
\end{tabular}
\caption{Effect of applying NSR on different layer(s) of MLP-4. The number in the subscript indicates which layer(s) NSR is applied on.}
\label{tab:different_layer_MLP}
\end{table}
\begin{table}[!h]
\centering
\begin{tabular}{p{2cm}|c}
\toprule
Method & Error rate (\%) \\ \midrule
GraphSAGE & 11.37 $\pm$ 0.55 \\
GraphSAGE$_1$ & 10.47 $\pm$ 0.05 \\
GraphSAGE$_2$ & 10.52 $\pm$ 0.22 \\
GraphSAGE$_{1,2}$ & \textbf{10.30 $\pm$ 0.17} \\\bottomrule
\end{tabular}
\caption{Effect of applying NSR on different layer(s) of GraphSAGE-2. The number in the subscript indicates which layer(s) NSR is applied on.}
\label{tab:different_layer_GNN}
\end{table}
For the two GNN models, the number of layers varies from 2 to 4. This setting follows empirical experience, as nodes capture similar information from their neighbors and suffer from over-smoothing when GNNs grow deeper. Tab. \ref{tab:gnn} shows that GraphSAGE and GCN with NSR outperform the vanilla models at all depths on all datasets. Specifically, GraphSAGE and GCN achieve average improvements of 8.6\% and 5.8\%, respectively, and 17.0\% improvement at most.
\subsubsection{\textbf{RQ2: Comparison with Other Regularization}}
We first compare our NSR method with several classical regularization methods, as shown in Tab. \ref{tab:regularization}. Notice that, as in the setting of RQ1, our NSR has only one hyper-parameter $\lambda$ to tune. We use the same hyper-parameter search strategy to select the best values for both NSR and the other regularization methods. As shown in Tab. \ref{tab:regularization}, NSR performs best among these methods on all three models with different architectures, i.e., MLP-4 (MLP), ResNet-18 (CNN), and GraphSAGE (GNN). The results indicate that NSR yields remarkable improvement and generalizes across architectures.
\subsubsection{\textbf{RQ3: Combination with Other Regularization}\label{sec:comparison}}
We further investigate the effect of combining our NSR with other popular methods such as Batch Normalization and Dropout. We conduct experiments on MLP-4 and organize the results in Tab. \ref{tab:further gain}. From Tab. \ref{tab:further gain}, we find that both Batch Normalization and Dropout reduce the error rate compared with the vanilla baseline, and adding NSR on top of them further improves performance significantly. This indicates that NSR provides regularization benefits complementary to Batch Normalization and Dropout.
\subsubsection{\textbf{RQ4: Layer selection for applying NSR}}
\label{sec:ablation}
In this section, we shed light on how to decide which layer to apply NSR to. As NSR is used to reduce the neuron intra-class response variance, the intuitive criterion is to apply NSR to the layer with the largest aggregated neuron intra-class response variance. According to our experiments, this criterion works well.
Taking MLP-4 on MNIST and GraphSAGE-2 on Amazon-Computers as two examples, we apply NSR to different layer(s), and their corresponding error rates are listed in Tab. \ref{tab:different_layer_MLP} $\sim$ Tab. \ref{tab:different_layer_GNN}, where model$_{l}$ means NSR is applied to the $l_{th}$ layer of the model. Notice that, excluding the first layer of MLP-4, which is the input layer, the neuron intra-class response variances of the second, third, and last layers of MLP-4 are 409, 510, and 1660, respectively. For GraphSAGE-2, the variances are 4.15 and 2.68 for the first and last layers, respectively. The variances of MLP-4 and GraphSAGE-2 differ considerably because of the different data characteristics of MNIST and Amazon-Computers.
The results in Tab. \ref{tab:different_layer_MLP} $\sim$ Tab. \ref{tab:different_layer_GNN} show that, no matter which layer(s) NSR is applied to, it always improves the accuracy over the vanilla baseline for both MLP-4 and GraphSAGE-2. In addition, applying NSR to the layer with the largest variance, i.e., the last layer for MLP-4 and the first layer for GraphSAGE-2, achieves the most significant gain among individual layers, and reaches accuracy similar to the best result obtained by applying NSR to multiple layers. Furthermore, based on more experiments, we summarize an empirical guideline: simply applying NSR to the last layer of an MLP or CNN, and to the first layer of a GNN, usually yields a significant accuracy improvement.
\section{Related Work}
Regularization improves generalization ability by introducing an inductive bias based on prior knowledge. According to the type of prior knowledge, existing works can be roughly categorized into Domain Specific Regularization and Model Generic Regularization. Domain Specific Regularization encodes domain-specific requirements or preferences based on domain background knowledge, and usually brings a nice gain for the target problems or domains. In contrast, Model Generic Regularization is usually based on reasonable hypotheses about general properties of DNNs, learning dynamics, etc., and can be helpful in a wide range of domains and scenarios.
\textbf{Domain Specific Regularization} shows success in various domains including Face Recognition \cite{wen2016discriminative, liu2017sphereface, cai2018island, liu2017adaptive}, Knowledge Graph Completion \cite{zhang2020duality, lacroix2018canonical, minervini2017regularizing}, Graph Neural Network \cite{chen2020measuring,hou2019measuring,rong2019dropedge,yang2020domain,long2019hierarchical}, and so on \cite{du2021tabularnet,wang2020cocogum,wang2019tag2vec,wang2019tag2gauss}.
In Face Recognition, \cite{cai2018island} points out the existence of external factors, such as different environmental illumination, head poses, and facial expressions, which brings the challenge that face images of the same person may differ even more than face images of different persons. To address such challenges, \cite{liu2017sphereface} imposes a carefully designed regularization that enforces the representations of face instances from the same person to be similar.
In Knowledge Graph Completion, learning the embedding of entities and relations is usually a key step of performing statistical analysis. \cite{minervini2017regularizing} regularizes the training of neural knowledge graph embedding by imposing a set of soft model-dependent constraints on the predicate embedding. Those constraints encode the background knowledge of equivalence and inversion axioms.
In Graph Neural Networks, popular models usually suffer from the over-smoothing problem due to the six degrees of separation \cite{kleinfeld2002could}. \cite{chen2020measuring} statistically analyzes the high correlation between smoothness and the mean average distance (MAD) among node representations, and then proposes a regularization called MADGap that penalizes over-smoothing.
\textbf{Model Generic Regularization} can be categorized into network-wise Regularization, layer-wise Regularization, and neuron-wise Regularization, according to the granularity of the studied properties.
Network-wise Regularization encodes desired properties of the entire network, such as the sparseness of the network or the smoothness of the mapping function between input and output. For example, L2 Regularization \cite{plaut1986experiments, lang1990dimensionality} is the most well-known regularization term, which encourages a small sum of squared magnitudes of model weights, while L1 Regularization is widely used to improve the sparseness of the network. Recently, \cite{hoffman2019robust} introduced an efficient framework that minimizes the norm of the input-output Jacobian matrix for noise robustness. Layer-wise regularization became popular due to the success of Batch Normalization \cite{ioffe2015batch} and a series of layer-wise normalization methods \cite{ulyanov2016instance, plaut1986experiments}. It should be pointed out that some methods work well at the network level while causing issues at the layer level. For example, \cite{gu2014towards} imposes a layer-wise Jacobian penalty to ensure the smoothness of the mapping function of each individual layer. Although it operates at a finer granularity with a motivation similar to traditional Jacobian-based regularization, it was reported to degrade performance on noiseless data \cite{goodfellow2014explaining}.
Neuron-wise Regularization is the most fine-grained category. Dropout \cite{hinton2012improving,srivastava2014dropout} is a typical example: it randomly removes neurons with a certain probability to avoid strong co-adaptation between neurons. Our proposed Neuron Steadiness Regularization belongs to neuron-wise regularization and leverages the information of individual neuron response distributions.
\section{Conclusion and Future Work}
\label{sec:conclusion}
We explore the inductive bias from the new perspective of class-dependent response distributions of individual neurons. Based on our experimental observations, we articulate the Neuron Steadiness Hypothesis and propose Neuron Steadiness Regularization, which penalizes large intra-class response variance of neurons. Extensive evaluations conducted on diverse datasets with various network architectures demonstrate the effectiveness of the proposed method.
In particular, we show its effectiveness on the classification task with a large model, i.e., ResNet-50, and a big dataset with many classes, i.e., ImageNet with 1000 classes.
We also carefully consider the broader impact from various perspectives such as fairness, security, and harm to people, and do not find any apparent risk related to our work.
One future direction is to explore other statistics based on such neuron-level class-dependent response distributions; there may be better statistics for regularization than the intra-class response variance. Another direction is to explore hyper-parameter setting strategies. In this paper, for simplicity, we use the same value of $\lambda$ for all neurons in the same layer. However, as NSR concerns individual neurons, $\lambda$ need not be the same across neurons. One intuitive idea is to set $\lambda$ according to neuron importance, as in \cite{dhamdhere2018important}.
\section{History of k-nearest neighbor search, problem statement and overview of results}
\label{sec:intro}
\begin{figure}
\centering
\input images/cover-tree.txt
\hspace{1.5cm}
\input images/explicit_cover_tree.tex
\caption{\textbf{Left:} an implicit form of a cover tree defined in 2006 \cite[Section~2]{beygelzimer2006cover} for the finite set of reference points $R = \{1,2,3,4,5\}$.
\textbf{Right:} a new compressed cover tree in Definition~\ref{dfn:cover_tree_compressed} corrects the past complexity results for $k$-nearest neighbors search in $R$.}
\label{fig:implicitcompressed}
\end{figure}
The search for nearest neighbors was one of the first data-driven problems and led to the neighbor rule for classification \cite{cover1967nearest}.
In a modern formulation, the problem is to find all $k\geq 1$ nearest neighbors in a reference set $R$ for all points from a query set $Q$.
Both sets live in an ambient space $X$ with a given distance $d$ satisfying all metric axioms.
The simplest example is $X=\mathbb{R}^n$ with the Euclidean metric, where a query set $Q$ can be a single point or a subset of a larger set $R$.
\medskip
The \emph{exact} $k$-nearest neighbor problem asks for exact (true) $k$-nearest neighbors of every query point $q$.
The probabilistic version of the $k$-nearest neighbor search \cite{manocha2007empirical} aims to find exact $k$-nearest neighbors with a given probability.
The approximate version \cite{arya1993approximate}, \cite{krauthgamer2004navigating}, \cite{andoni2018approximate}, \cite{wang2021comprehensive} looks, for every query point $q \in Q$, for an approximate neighbor $r\in R$ satisfying $d(q,r) \leq (1+\epsilon)d(q,\mathrm{NN}(q))$, where $\epsilon>0$ is fixed and $\mathrm{NN}(q)$ is the exact first nearest neighbor of $q$.
\medskip
The naive approach to finding all first nearest neighbors of points from $Q$ within $R$ takes time proportional to the product $|Q|\cdot|R|$ of the sizes of $Q$ and $R$.
Already in 1974 real data was big enough to motivate faster algorithms.
Namely, a \emph{quadtree} \cite{finkel1974quad} hierarchically indexed a reference set $R\subset\mathbb{R}^2$ by subdividing its bounding box (a root) into four smaller boxes (children), which are recursively subdivided until final boxes (leaf nodes) contain only a small number of reference points.
\medskip
The straightforward extension of a quadtree to $\mathbb{R}^n$ leads to an exponential dependence on $n$, because the $n$-dimensional box is subdivided into $2^n$ smaller boxes.
The first attempt to overcome this curse of dimensionality was the $kd$-tree \cite{bentley1975multidimensional} subdividing a subset of $R$ at every level into two subsets instead of $2^n$ subsets.
The nearest search algorithms have positively impacted many related problems: a minimum spanning tree \cite{bentley1978fast}, range search \cite{pelleg1999accelerating}, k-means clustering \cite{pelleg1999accelerating} and ray tracing \cite{fussell1988fast}.
The single-tree structures for finding nearest neighbors in the chronological order are $k$-means tree \cite{fukunaga1975branch}, $R$ tree \cite{beckmann1990r}, ball tree \cite{omohundro1989five}, $R^*$ tree \cite{beckmann1990r}, vantage-point tree \cite{yianilos1993data}, Hilbert $R$ tree \cite{ kamel1993hilbert}, TV trees \cite{lin1994tv}, X trees \cite{berchtold1996x}, principal axis tree \cite{mcnames2001fast}, spill tree \cite{liu2004investigation}, cover tree \cite{beygelzimer2006cover}, cosine tree \cite{holmes2008quic}, max-margin tree \cite{ram2012nearest}, cone tree \cite{ram2012maximum} and others.
\medskip
In 2004 the paper \cite{krauthgamer2004navigating} attempted to solve the $k$-nearest neighbor problem in a general metric space in near-linear time: \cite[Theorem~2.7]{krauthgamer2004navigating} claimed that all $k$-nearest neighbors of a query point $q$ can be found in $R$ by using the navigating nets \cite[Section~2.1]{krauthgamer2004navigating} in time $2^{O(\mathrm{dim}_{KR}(R \cup \{q\}))}(k + \log|R|)$, where $\mathrm{dim}_{KR}(R \cup \{q\})$ is the expansion rate from \cite{karger2002finding} and $|R|$ is the size of the set $R$.
The proof was omitted and the authors did not reply to our request for details.
\medskip
In 2006 the authors of \cite{beygelzimer2006cover} introduced a cover tree inspired by the navigating nets \cite{krauthgamer2004navigating} above.
This cover tree was especially designed to prove worst-case bounds on the search complexity in the size $|R|$ and the expansion constant $c$ of the reference set $R$.
In 2015 \cite[Section~5.3]{curtin2015improving} pointed out that the proof of \cite[Theorem~5]{beygelzimer2006cover} estimated the number of iterations in the nearest neighbor search for $k=1$ as $O(c^2\log n)$ and claimed the final complexity $O(c^{12}\log|R|)$ per query point, though the proposed argument guarantees only $O(c^{12}|R|)$.
In the case $Q=R$, the latter crude estimate gives only a quadratic complexity $O(|R|^2)$ for the all $k$-nearest neighbor search as in a brute-force approach.
\medskip
The similar estimate $O(c^2\log n)$ was used in several papers later: for a dual-tree based all-nearest neighbor search \cite[Theorem~3.1]{ram2009linear}, for a Minimum Spanning Tree \cite[Theorem~5.1]{march2010fast}, for a fast exact max-kernel search \cite[Lemma~5.2]{curtin2013fast}.
The current paper fills the first important gap in the literature and rigorously proves new parametrized time complexities in the single-tree case.
Another forthcoming paper will similarly correct the more advanced cases above.
\medskip
To avoid any misunderstanding, we should formally define the \emph{$k$-nearest neighbor set} $\mathrm{NN}_k(q)$, which might contain several points in a singular case when these reference points have equal distances to a query point $q$.
\begin{dfn}[$k$-nearest neighbor set $\mathrm{NN}_k$]
\label{dfn:kNearestNeighbor}
Let $Q,R$ be finite subsets of a metric space $(X,d)$.
For any point $q \in Q$ and any point $u\in R$, the \emph{neighborhood}
$N(q; u) = \{p \in R \mid d(q,p) \leq d(q,u)\}$ consists of all points that are non-strictly closer to $q$ than $u$.
For any integer $k\geq 1$, the \emph{$k$-nearest neighbor set} $\mathrm{NN}_k(q)$ consists of all points $u\in R$ such that the neighborhood size $|N(q;u)|\geq k$ and any point $v\in R$ with $d(v,q) < d(u,q)$ has a smaller neighborhood of size $|N(q;v)| < k$.
\hfill $\blacksquare$
\end{dfn}
For $Q = R = \{0,1,2,3\}$, the nearest neighbor sets of $0$ are
$\mathrm{NN}_1(0) = \{1\}$, $\mathrm{NN}_2(0) = \{2\}$, $\mathrm{NN}_3(0) = \{3\}$.
Due to the neighborhoods $N(1;0)=\{0,1,2\}=N(1;2)$, both sets $\mathrm{NN}_1(1) = \{0,2\}=\mathrm{NN}_2(1)$ consist of two points $0,2$ at equal distance to $1$.
The 3-nearest neighbor set $\mathrm{NN}_3(1) = \{3\}$ is a single point.
Because of a potential ambiguity of an exact $k$-nearest neighbor, Problem \ref{pro:knn} below allows any neighbors within a set $\mathrm{NN}_k(q)$.
In the above example, point $0$ can be chosen as a 1st neighbor of $1$, then $2$ as a 2nd neighbor of $1$, or these neighbors can be found in a different order.
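Definition \ref{dfn:kNearestNeighbor} can be checked by brute force. The sketch below is our own illustration (not an algorithm from the paper); following the convention of the example above, when $q \in R$ the query point is excluded from its own neighbors.

```python
def nn_k(q, R, d, k):
    """All k-th nearest neighbors of q in R, by brute force.

    Implements the definition: u belongs to NN_k(q) if |N(q;u)| >= k and
    every strictly closer point v has |N(q;v)| < k. The query point itself
    is excluded from the candidates, matching the example's convention.
    """
    cand = [r for r in R if r != q]

    def N(u):  # neighborhood: candidate points non-strictly closer than u
        return [p for p in cand if d(q, p) <= d(q, u)]

    return {u for u in cand
            if len(N(u)) >= k
            and all(len(N(v)) < k for v in cand if d(v, q) < d(u, q))}
```

On $Q = R = \{0,1,2,3\}$ with $d(x,y)=|x-y|$, this reproduces the neighbor sets listed above, including the two-point sets caused by ties.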
\begin{pro}[all $k$-nearest neighbors search]
\label{pro:knn}
For any finite subsets $Q,R$ of a metric space $(X,d)$ and any integer $k\geq 1$, design an algorithm that exactly finds distinct points $p_i\in\mathrm{NN}_{i}(q) \subseteq R$ for all $i = 1,\dots, k$ and all points $q\in Q$, so that the total complexity is near linear in $n=\max\{|Q|,|R|\}$ with hidden constants that may depend on some structure of $Q,R$.
\hfill $\blacksquare$
\end{pro}
To solve Problem \ref{pro:knn}, Definition \ref{dfn:cover_tree_compressed} introduces a compressed cover tree $\mathcal{T}(R)$ on any finite set $R$ with a metric $d$.
Definition \ref{dfn:depth} introduces a new concept of the height $|H(\mathcal{T}(R))|$, the number of levels of $\mathcal{T}(R)$ that contain at least one node.
The new tree $\mathcal{T}(R)$ in the right-hand picture of Fig.~\ref{fig:implicitcompressed} has nodes at levels $-1,0,1,2$, so its height is 4.
\medskip
Theorem \ref{thm:construction_time} will prove that a compressed cover tree $\mathcal{T}(R)$ can be constructed in time $O(c^8 \cdot |H(\mathcal{T}(R))| \cdot |R|)$, where $c$ is an expansion constant depending on $R$, see Definition~\ref{dfn:expansion_constant}.
Then Corollary \ref{cor:cover_tree_knn_time} resolves Problem~\ref{pro:knn} in time
$O( c^{10} \cdot k \log(k) \cdot \log_2(\Delta(R)) \cdot (|Q| + |R|))$, where $\Delta(R)$ is the aspect ratio (the diameter of $R$ divided by the minimum inter-point distance of $R$), see Definition \ref{dfn:radius+d_min}.
Corollary~\ref{cor:approximate_k_nearestneighbors} will prove a similar parameterized complexity for a $(1+\epsilon)$-approximate $k$-nearest neighbor search.
In all cases we carefully analyze the hidden constants and show typical scenarios when these constants are small.
Then all complexities are near linear in the key input size $n=\max\{|Q|,|R|\}$.
\begin{figure}
\centering
\includegraphics[scale = 0.75]{images/PngDependancies/dependancy1.png}
\caption{Dependency diagram of Sections \ref{sec:cover_tree}, \ref{sec:ChallangesCoverTree} and \ref{sec:ConstructionCovertree}. }
\label{fig:dependancyDiagram}
\end{figure}
\section{A new compressed cover tree for $k$-nearest neighbor search in any metric space}
\label{sec:cover_tree}
\begin{figure}[h]
\centering
\begin{subfigure}{.30\textwidth}
\centering
\input images/easy_tree_one.tex
\label{fig:cover_tree_variant_one}
\end{subfigure}
\begin{subfigure}{.30\textwidth}
\centering
\input images/easy_tree_two.tex
\label{fig:cover_tree_variant_two}
\end{subfigure}
\begin{subfigure}{.30\textwidth}
\centering
\input images/easy_tree_three.tex
\label{fig:cover_tree_variant_three}
\end{subfigure}
\caption{For any integer $i\geq 2$, the set $R = \{0,1,2^{i}\}$ has at least three compressed cover trees $\mathcal{T}(R)$ satisfying Definition \ref{dfn:cover_tree_compressed}. }
\label{fig:cover_tree_easy_example}
\end{figure}
\begin{dfn}[A compressed cover tree $\mathcal{T}(R)$]
\label{dfn:cover_tree_compressed}
Let $R$ be a finite set in an ambient space $X$ with a metric $d$.
\emph{A compressed cover tree} $\mathcal{T}(R)$ has the vertex set $R$ with a root $r \in R$ and a \emph{level} function $l : R \rightarrow \mathbb{Z}$ satisfying the conditions below.
\medskip
\noindent
(\ref{dfn:cover_tree_compressed}a)
\emph{Root condition} :
the level of the root node $r$ is $l(r) \geq 1 + \max_{p \in R \setminus \{r\}}l(p)$.
\medskip
\noindent
(\ref{dfn:cover_tree_compressed}b)
\emph{Covering condition} :
for every non-root node $q \in R\setminus \{r\}$, we select a unique \emph{parent} $p$ and a level $l(q)$ such that $d(q,p) \leq 2^{l(q)+1}$ and $l(q) < l(p)$;
this parent node $p$ has a single link to its \emph{child} node $q$ in the tree $\mathcal{T}(R)$.
\medskip
\noindent
(\ref{dfn:cover_tree_compressed}c)
\emph{Separation condition} :
for $i \in \mathbb{Z}$, the \emph{cover set}
$C_i = \{p \in R \mid l(p) \geq i\}$ has
$d_{\min}(C_i) = \min\limits_{p \in C_{i}}\min\limits_{q \in C_{i}\setminus \{p\}} d(p,q) > 2^{i}$.
\medskip
\noindent
Since there is a 1-1 correspondence between all points of $R$ and all nodes of $\mathcal{T}(R)$, the same notation $p$ can refer to a point in the set $R$ or to a node of the tree $\mathcal{T}(R)$.
Set $l_{\max} = 1 + \max_{p \in R \setminus \{r\}}l(p) $ and $l_{\min} = \min_{p \in R}l(p)$.
For any node $p\in\mathcal{T}(R)$, $\mathrm{Children}(p)$ denotes the set consisting of all children of $p$, including $p$ itself, which will be convenient later.
\medskip
For any node $p \in\mathcal{T}(R)$, define the \emph{node-to-root} path as a unique sequence of nodes $w_0,\dots,w_m$ such that $w_0 = p$, $w_m$ is the root and $w_{j+1}$ is the parent of $w_{j}$ for all $j=0,...,m-1$.
A node $q \in\mathcal{T}(R)$ is called a \emph{descendant} of another node $p$ if $p$ belongs to the node-to-root path of $q$.
A node $p$ is an \emph{ancestor} of $q$ if $q$ belongs to the node-to-root path of $p$.
The set of all descendants of a node $p$ is denoted by $\mathrm{Descendants}(p)$ and includes $p$.
\hfill $\blacksquare$
\end{dfn}
\begin{figure}[h]
\centering
\input images/CoverTreeLongExample/good_tree_example_0.tex
\caption{Compressed cover tree $\mathcal{T}(R)$ built on the set $R$ defined in Example \ref{exa:cover_tree_big} with root $8$. }
\label{fig:cover_tree_big}
\end{figure}
\begin{exa}[$\mathcal{T}(R)$ in Fig.~\ref{fig:cover_tree_big}]
\label{exa:cover_tree_big}
Let $(\mathbb{R}, d = |x-y|)$ be the real line with the Euclidean metric
and let $R = \{1,2,3,\dots,15\}$ be its finite subset.
Fig.~\ref{fig:cover_tree_big} shows a compressed cover tree on the set $R$ with the root $r=8$.
The cover sets of $\mathcal{T}(R)$ are $ C_{-1} = \{1,2,3,...,15\}$, $C_0 = \{2,4,6,8,10,12,14\}$, $C_{1} = \{4,8,12\}$ and $C_{2} = \{8\}$.
We check the conditions of Definition \ref{dfn:cover_tree_compressed}.
\begin{itemize}
\item Root condition $(\ref{dfn:cover_tree_compressed}a)$:
since $\max_{p \in R \setminus \{8\}}d(p, 8) = 7$ and $\ceil{\log_2(7)} - 1= 2$, the root can have the level $l(8) = 2$.
\item Covering condition (\ref{dfn:cover_tree_compressed}b) : for any $i \in \{-1,0,1,2\}$, let $p_i$ be an arbitrary point with $l(p_i) = i$. Then we have
$d(p_{-1}, p_{0}) = 1 \leq 2^{0}$,
$d(p_0, p_1) = 2 \leq 2^{1}$ and
$d(p_1, p_2) = 4 \leq 2^{2}$.
\item Separation condition (\ref{dfn:cover_tree_compressed}c) : $d_{\min}(C_{-1}) = 1 > \frac{1}{2} = 2^{-1}$, $d_{\min}(C_{0}) = 2 > 1 = 2^{0}, d_{\min}(C_{1}) = 4 > 2 = 2^{1}$.
\hfill $\blacksquare$
\end{itemize}
\end{exa}
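The three conditions of Definition \ref{dfn:cover_tree_compressed} are straightforward to verify programmatically. The sketch below checks them for Example \ref{exa:cover_tree_big}; the parent assignment is one valid choice consistent with the cover sets above and is our own assumption, since Fig.~\ref{fig:cover_tree_big} is not reproduced here.

```python
from itertools import combinations

# One valid level function and parent assignment for R = {1,...,15}, root 8,
# consistent with the cover sets C_{-1}, C_0, C_1, C_2 of the example.
level = {8: 2, 4: 1, 12: 1, 2: 0, 6: 0, 10: 0, 14: 0,
         **{p: -1 for p in (1, 3, 5, 7, 9, 11, 13, 15)}}
parent = {4: 8, 12: 8, 2: 4, 6: 4, 10: 12, 14: 12,
          1: 2, 3: 2, 5: 6, 7: 6, 9: 10, 11: 10, 13: 14, 15: 14}
d = lambda a, b: abs(a - b)
root, R = 8, sorted(level)

# (a) Root condition: l(root) >= 1 + max level of the non-root nodes.
assert level[root] >= 1 + max(level[p] for p in R if p != root)
# (b) Covering condition: d(q, parent(q)) <= 2^(l(q)+1) and l(q) < l(parent(q)).
for q, p in parent.items():
    assert d(q, p) <= 2 ** (level[q] + 1) and level[q] < level[p]
# (c) Separation condition: every cover set C_i is 2^i-sparse.
for i in sorted(set(level.values())):
    C_i = [p for p in R if level[p] >= i]
    assert all(d(p, q) > 2 ** i for p, q in combinations(C_i, 2))
```

All assertions pass, confirming the root, covering, and separation conditions for this level and parent assignment.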
A cover tree was defined in \cite[Section~2]{beygelzimer2006cover} as a tree version of a navigating net from \cite[Section ~ 2.1]{krauthgamer2004navigating}.
For any index $i \in \mathbb{Z}\cup \{\pm\infty\}$, the level $i$ set of this cover tree coincides with the cover set $C_i$ above, which can have nodes at different levels in Definition~\ref{dfn:cover_tree_compressed}.
Any point $p \in C_i$ has a single parent in the set $C_{i+1}$, which satisfies conditions (\ref{dfn:cover_tree_compressed}b,c).
\cite[Section~2]{beygelzimer2006cover} referred to this original tree as an implicit representation of a cover tree.
Such a tree in Figure \ref{fig:tripleexample} (left) contains infinitely many repetitions of every point $p\in R$ in long branches and will be called an \emph{implicit cover tree}.
\medskip
Since an implicit cover tree is formally infinite, for practical implementations, the authors of \cite{beygelzimer2006cover} had to use another version that they named an explicit representation of a cover tree.
We call this version an \emph{explicit cover tree}.
Here is the full defining quote at the end of \cite[Section~2]{beygelzimer2006cover}: "The explicit representation of the tree coalesces all nodes in which the only child is a self-child".
In an explicit cover tree, whenever a subpath of a node-to-root path consists of identical nodes that have no other children, all these identical nodes collapse to a single node, see Figure \ref{fig:tripleexample} (middle).
\medskip
Since an explicit cover tree still contains repeated points, Definition~\ref{dfn:cover_tree_compressed} is well-motivated by the aim to include every point only once, which saves memory and simplifies all subsequent algorithms, see Fig.~\ref{fig:tripleexample} (right).
\begin{figure}
\centering
\input images/tripleExample.tex
\caption{A comparison of past cover trees and a new tree in Example \ref{exa:implicitexplicitexample}. \textbf{Left:} an implicit cover tree contains infinite repetitions of points. \textbf{Middle:} an explicit cover tree. \textbf{Right:} a compressed cover tree from Definition \ref{dfn:cover_tree_compressed} includes every point exactly once. }
\label{fig:tripleexample}
\end{figure}
\begin{exa}[a short train line tree]
\label{exa:implicitexplicitexample}
Let $G$ be the unoriented metric graph consisting of two vertices $r,q$ connected by three different edges $e,h,g$ of lengths $|e| = 2^6$ , $|h| = 2^{3}$ , $|g| = 1$. Let $p_{4}$ be the middle point of the edge $e$.
Let $p_{3}$ be the middle point of the subedge $(p_4 , q)$.
Let $p_{2}$ be the middle point of the edge $h$.
Let $p_{1}$ be the middle point of the subedge $(p_{2}, q)$.
Let $R = \{p_1, p_2,p_3,p_4,r\}$.
We construct a compressed cover tree $\mathcal{T}(R)$ by choosing the level $l(p_i) = i$ and by setting the root $r$ to be the parent of both $p_2$ and $p_4$, $p_4$ to be the parent of $p_{3}$, and $p_{2}$ to be the parent of $p_{1}$.
Then $\mathcal{T}(R)$ satisfies all the conditions of Definition \ref{dfn:cover_tree_compressed}, see a comparison of the three cover trees in Fig.~\ref{fig:tripleexample}.
\hfill $\blacksquare$
\end{exa}
In any metric space $X$, let $\bar B(p,t)\subseteq X$ be the closed ball with a center $p$ and a radius $t$.
If this metric space is finite, $|\bar B(p,t)|$ denotes the number of points in $\bar B(p,t)$.
The expansion constant $c(R)$ below was originally defined in \cite{beygelzimer2006cover}.
\begin{dfn}[Expansion constants $c$ and $c_m$]
\label{dfn:expansion_constant}
Let $R$ be a finite subset of an ambient metric space $(X,d)$.
The \emph{expansion constant} $c(R)$ is the smallest real number $c(R)\geq 2$ such that $|\bar{B}(p,2t)|\leq c(R) \cdot |\bar{B}(p,t)|$ for any $p\in R$ and radius $t\geq 0$.
The \emph{minimized expansion constant} $c_m(R) = \inf\limits_{R\subseteq A\subseteq X}c(A)$ is minimized over all finite sets $A$ with $R\subseteq A\subseteq X$.
\hfill $\blacksquare$
\end{dfn}
\begin{lem}[properties of $c_m$]
\label{lem:expansion_constant_property}
For any finite sets $R\subseteq U$ in a metric space, we have $c_m(R) \leq c_m(U)$, $c_m(R) \leq c(R)$.
\hfill $\blacksquare$
\end{lem}
\begin{proof}
The proof easily follows from Definition \ref{dfn:expansion_constant}.
\end{proof}
\begin{figure}[h]
\centering
\input images/outlierconstruction.tex
\caption{Example~\ref{exa:outlierconstruction} describes a set $R$ with a big expansion constant $c(R)$.
Let $R\setminus \{p\}$ be a finite subset of a unit square lattice in $\mathbb{R}^2$, but a point $p$ is located far away from $R\setminus \{p\}$ at a distance larger than $\mathrm{diam}(R \setminus \{p\})$.
Definition \ref{dfn:expansion_constant} implies that $c(R) = |R|$. }
\label{fig:outlierconstruction}
\end{figure}
Example~\ref{exa:outlierconstruction} shows that the expansion constant of a set $R$ can be as big as $|R|$.
\begin{exa}[one outlier can make the expansion constant big]
\label{exa:outlierconstruction}
Let $R$ be a finite metric space and let a point $p \in R$ satisfy $d(p,R \setminus \{p\}) > \mathrm{diam}(R \setminus \{p\})$.
For any radius $t$ slightly smaller than $d(p,R \setminus \{p\})$, we have $\bar{B}(p, 2t) = R$ and $\bar{B}(p, t) = \{p\}$, hence $c(R) = |R|$, see Fig.~\ref{fig:outlierconstruction}.
\hfill $\blacksquare$
\end{exa}
Example~\ref{exa:minimized_normal_expansion_constant} shows that the minimized expansion can be significantly smaller than the original expansion constant.
\begin{exa}[minimized expansion constants]
\label{exa:minimized_normal_expansion_constant}
Let $(\mathbb{R}, d)$ be the Euclidean line.
For an integer $n>10$, consider the finite sets $R = \{2^{i} \mid i \in [1,n]\}$ and $Q = \{i \mid i \in [1,2^n]\}$.
If $0<\epsilon < 10^{-9}$, then
$\bar{B}(2^n, 2^{n-1} - \epsilon) = \{2^n\} $ and $\bar{B}(2^n, 2(2^{n-1} - \epsilon)) = R$, so $c(R) = n$.
For any $q \in Q$ and any $t \geq 0$, the balls $\bar{B}(q,t) = Q \cap [q - t, q + t]$ and
$\bar{B}(q,2t) = Q \cap [q - 2t, q + 2t]$ give $c(Q) \leq 4$.
Lemma \ref{lem:expansion_constant_property} implies that $c_m(R) \leq c_m(Q) \leq c(Q) \leq 4$.
\hfill $\blacksquare$
\end{exa}
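For small finite sets, the expansion constant of Definition \ref{dfn:expansion_constant} can be computed by brute force (our own sketch, not from the paper): the ball cardinalities in the defining inequality change only when the radius $t$ passes a distance $d(p,x)$ or a half-distance $d(p,x)/2$, so it suffices to test those candidate radii. The minimized constant $c_m$ involves an infimum over all finite supersets of $R$ and is not computable this way.

```python
def expansion_constant(R, d):
    """Smallest c >= 2 with |B(p, 2t)| <= c * |B(p, t)| for all p in R, t >= 0.

    Both ball sizes are step functions of t that change only at distances
    and half-distances from p, so those radii are the only candidates.
    """
    dists = {d(p, q) for p in R for q in R}
    candidates = dists | {x / 2 for x in dists}
    c = 2.0  # the definition requires c(R) >= 2
    for p in R:
        for t in candidates:
            inner = sum(1 for q in R if d(p, q) <= t)      # |B(p, t)|
            outer = sum(1 for q in R if d(p, q) <= 2 * t)  # |B(p, 2t)|
            c = max(c, outer / inner)
    return c
```

On $R = \{2^i \mid i \in [1,12]\}$ with the Euclidean metric this returns $12 = |R|$, matching $c(R) = n$ in Example \ref{exa:minimized_normal_expansion_constant}, and on an outlier set as in Example \ref{exa:outlierconstruction} it returns $|R|$.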
Lemma~\ref{lem:compressed_cover_tree_descendant_bound} provides an upper bound for a distance between a node and its descendants.
\begin{lem}[a distance bound on descendants]
\label{lem:compressed_cover_tree_descendant_bound}
Let $R$ be a finite subset of an ambient space $X$ with a metric $d$.
In a compressed cover tree $\mathcal{T}(R)$, let $q$ be any descendant of a node $p$. Let the node-to-root path $S$ of $q$ contain a node $u$ satisfying $u \in \mathrm{Children}(p) \setminus \{p\}$. Then $d(p,q) \leq 2^{l(u) + 2} \leq 2^{l(p) + 1}$.
\hfill $\blacksquare$
\end{lem}
\begin{proof}
Let $(w_0, \dots, w_m)$ be the subpath of $S$ satisfying $w_0 = q$, $w_{m-1} = u$ and $w_m = p$. The covering condition gives $d(w_{i}, w_{i+1}) \leq 2^{l(w_i) + 1}$ for any $i$.
Since the levels strictly increase along the path, the triangle inequality implies the first upper bound:
$$ d(p,q) \leq \sum^{m-1}_{j = 0}d(w_j, w_{j+1}) \leq \sum^{m-1}_{j = 0}2^{l(w_j) + 1} \leq \sum_{t = l_{\min}}^{l(u) + 1}2^{t}\leq 2^{l(u) + 2}. $$
Since $l(u) \leq l(p) - 1$, we get the second upper bound $d(p,q) \leq 2^{l(p)+1}$.
\end{proof}
Lemma~\ref{lem:packing} uses the idea of \cite[Lemma~1]{curtin2015plug} to show that if $S$ is a $\delta$-sparse subset of a metric space $X$, then $S$ has at most $(c_m(S))^\mu$ points in any ball $\bar{B}(p,t)$, where $c_m(S)$ is the minimized expansion constant of $S$ and $\mu$ depends on $\delta,t$.
\begin{figure}
\centering
\input images/packingLemmaIllustration.tex
\caption{This volume argument proves Lemma~\ref{lem:packing}. Using an expansion constant, we can bound the number of smaller balls of radius $\frac{\delta}{2}$ that fit inside a larger ball $\bar{B}(p, t)$. }
\label{fig:packingLemma}
\end{figure}
\begin{lem}[packing]
\label{lem:packing}
Let $S$ be a finite $\delta$-sparse set in a metric space $(X,d)$, so $d(a,b) > \delta$ for all $a,b \in S$.
Then, for any point $ p \in X$ and any radius $t > \delta$, we have
$|\bar{B}(p, t) \cap S | \leq (c_m(S))^{\mu}$, where $\mu = \lceil \log_2(\frac{4t}{\delta} + 1) \rceil $.
\hfill $\blacksquare$
\end{lem}
\begin{proof}
Consider two cases. First, let $d(p,q) > t$ for every point $q \in S$; then $\bar{B}(p, t) \cap S = \emptyset$ and the lemma holds trivially.
Otherwise $\bar{B}(p, t) \cap S$ is non-empty. By the definition of the minimized expansion constant, for any $\epsilon > 0$ we can find a set $A$ satisfying $S \subseteq A \subseteq X$ such that
\begin{ceqn}
\begin{equation}
\label{eqa:dfn_of_exp_constant}
|\bar{B}(q,2s) \cap A| \leq (c_m(S) + \epsilon) \cdot | \bar{B}(q,s) \cap A|,
\end{equation}
\end{ceqn}
for any $q \in A$ and $s \in \mathbb{R}$. Note that for any $u \in \bar{B}(p,t) \cap S$ we have $\bar{B}(u, \frac{\delta}{2}) \subseteq \bar{B}(p, t + \frac{\delta}{2})$. Therefore for any $q \in \bar{B}(p,t) \cap S$ it follows that
$$\bigcup_{u \in \bar{B}(p, t) \cap S}\bar{B}(u, \frac{\delta}{2}) \subseteq \bar{B}(p,t + \frac{\delta}{2}) \subseteq \bar{B}(q, 2t + \frac{\delta}{2}).$$
Since all the points of $S$ are separated by $\delta$, the balls $\bar{B}(u, \frac{\delta}{2})$ are disjoint, hence:
\begin{equation*}
\label{eqa:packing_zero}
| \bar{B}(p, t) \cap S| \cdot \min_{u \in \bar{B}(p, t) \cap S}| \bar{B}(u, \frac{\delta}{2}) \cap A| \leq \sum_{u \in \bar{B}(p, t) \cap S} | \bar{B}(u, \frac{\delta}{2}) \cap A |\leq | \bar{B}(q, 2t + \frac{\delta}{2}) \cap A |
\end{equation*}
In particular, by setting $q = \mathrm{argmin}_{a \in \bar{B}(p,t) \cap S}\, | \bar{B}(a, \frac{\delta}{2}) \cap A| $ we get:
\begin{ceqn}
\begin{equation}
\label{eqa:packing_one}
| \bar{B}(p, t) \cap S| \cdot | \bar{B}(q, \frac{\delta}{2}) \cap A| \leq | \bar{B}(q, 2t + \frac{\delta}{2}) \cap A |
\end{equation}
\end{ceqn}
Inequality (\ref{eqa:dfn_of_exp_constant}) applied $\mu$ times
on radii $s_i = \dfrac{2t + \frac{\delta}{2}}{2^{i}} $ for $i = 1,...,\mu$ implies that:
\begin{ceqn}
\begin{equation}
\label{eqa:packing_two}
|\bar{B}(q,2t + \frac{\delta}{2}) \cap A| \leq (c_m(S) + \epsilon)^{\mu}|\bar{B}(q, \dfrac{2t + \frac{\delta}{2}}{2^{ \mu}}) \cap A | \leq (c_m(S) + \epsilon)^{ \mu}|\bar{B}(q, \frac{\delta}{2}) \cap A|
\end{equation}
\end{ceqn}
By combining inequalities (\ref{eqa:packing_one}) and (\ref{eqa:packing_two}):
$$| \bar{B}(p,t) \cap S |\leq \dfrac{|\bar{B}(q, 2t + \frac{\delta}{2}) \cap A |}{|\bar{B}(q, \frac{\delta}{2}) \cap A|} \leq (c_m(S)+\epsilon)^{\mu}.$$
The required inequality is obtained by letting $\epsilon \rightarrow 0$.
\end{proof}
\cite[Section~1.1]{krauthgamer2004navigating} defined the dimension $\text{dim}(X)$ of a space $(X,d)$ as the minimum number $m$ such that every set $U \subseteq X$ can be covered by $2^{m}$ sets whose diameter is half the diameter of $U$.
If $U$ is finite, an easy application of Lemma \ref{lem:packing} for $\delta = \frac{r}{2}$ shows that
$\text{dim}(X) \leq \sup_{A \subseteq X}(c_m(A))^4 \leq \sup_{A \subseteq X}\inf_{A \subseteq B \subseteq X}(c(B))^4,$
where $A$ and $B$ are finite subsets of $X$.
\medskip
Let $T(R)$ be an implicit cover tree of \cite{beygelzimer2006cover} on a finite set $R$.
\cite[Lemma~4.1]{beygelzimer2006cover} showed that the number of children of any node $p \in T(R)$ has the upper bound $(c(R))^4$.
Lemma~\ref{lem:compressed_cover_tree_width_bound} generalizes \cite[Lemma~4.1]{beygelzimer2006cover} for a compressed cover tree.
\begin{lem}[width bound]
\label{lem:compressed_cover_tree_width_bound}
Let $R$ be a finite subset of a metric space $(X,d)$.
For any compressed cover tree $\mathcal{T}(R)$, any node $p$ has at most $(c_m(R))^4$ children at every level $i$, where $c_m(R)$ is the minimized expansion constant of the set $R$.
\hfill $\blacksquare$
\end{lem}
\begin{proof}
By the covering condition of $\mathcal{T}(R)$, any child $q$ of $p$ located on the level $i$ satisfies $d(q,p) \leq 2^{i+1}$.
Hence the number of children of the node $p$ at level $i$ is at most $|\bar{B}(p,2^{i+1}) \cap C_i|$.
The separation condition in Definition~\ref{dfn:cover_tree_compressed} implies that the cover set $C_i$ is a $2^{i}$-sparse subset of $R$.
We apply Lemma \ref{lem:packing} for $t = 2^{i+1}$ and $\delta = 2^{i}$.
Since $4 \cdot \frac{t}{\delta} + 1 = 4 \cdot 2 + 1 \leq 2^4$, we get $|\bar{B}(p,2^{i+1}) \cap C_i| \leq (c_m(C_{i}))^4$. Lemma \ref{lem:expansion_constant_property} implies that $(c_m(C_{i}))^4 \leq (c_m(R))^4 $, so the upper bound is proved.
\end{proof}
In the original work \cite{beygelzimer2006cover}, the \emph{explicit depth} of a single point in a cover tree was defined in this quote:
"explicit depth of any point p, defined as the number of explicit grandparent nodes on the path from the root to node p in the lowest level in which p is explicit".
\cite[Lemma~4.3]{beygelzimer2006cover} showed that the depth of any node $p$ has an upper bound $O(c^2\log|R|)$.
\medskip
Example~\ref{exa:tall_imbalanced_tree} demonstrates that, for any $m \in \mathbb{Z}_{+}$, there is a set $R$ whose compressed cover tree $\mathcal{T}(R)$ has $m^2$ levels, but the explicit depth of any node is at most $2m+1$ by Lemma \ref{lem:tall_imbalanced_tree_explicit_depth}.
Counterexamples \ref{cexa:construction_algorithm_of_original_cover_tree} and \ref{cexa:original_all_nearest_neighbors_algorithm} prove that the explicit depth cannot be used to estimate the complexities of \cite[Algorithms~1 and 2]{beygelzimer2006cover}, respectively.
Therefore a new concept of the height $|H(\mathcal{T})|$ was needed to justify a new near linear parameterized complexity in Theorem~\ref{thm:cover_tree_knn_time}.
\begin{dfn}[the height of a compressed cover tree]
\label{dfn:depth}
For a compressed cover tree $\mathcal{T}(R)$ on a finite set $R$,
the \emph{height set} is $H(\mathcal{T}(R))=\{l_{\max},l_{\min}\}\cup \{ i \mid C_{i-1} \setminus C_{i} \neq \emptyset\}$.
The size $|H(\mathcal{T}(R))|$ of this set is called the \emph{height} of $\mathcal{T}(R)$.
\hfill $\blacksquare$
\end{dfn}
By condition~(\ref{dfn:cover_tree_compressed}b), the height $|H(\mathcal{T}(R))|$ counts the number of levels $i$ whose cover sets $C_i$ include new points that were absent on higher levels.
Since any point can appear alone at its own level, $|H(\mathcal{T})|\leq|R|$ is the worst-case upper bound of the height.
The following parameters help prove an upper bound for the height $|H(\mathcal{T}(R))|$ in Lemma~\ref{lem:depth_bound}.
\begin{dfn}[diameter and aspect ratio of a reference set $R$]
\label{dfn:radius+d_min}
For any finite set $R$ with a metric $d$, the \emph{diameter} is $\mathrm{diam}(R) = \max_{p \in R}\max_{q \in R}d(p,q)$ and the shortest distance between distinct points is $d_{\min}(R) = \min_{p \neq q \in R}d(p,q)$.
The \emph{aspect ratio} \cite{krauthgamer2004navigating} is $\Delta(R) = \frac{\mathrm{diam}(R)}{d_{\min}(R)}$.
\hfill $\blacksquare$
\end{dfn}
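For a concrete illustration, the quantities of Definition \ref{dfn:radius+d_min} can be computed by brute force over all pairs of points; this is a minimal Python sketch (the function names are ours, not from the paper).

```python
from itertools import combinations

def diameter(R, d):
    # diam(R): the largest pairwise distance in R
    return max(d(p, q) for p, q in combinations(R, 2))

def d_min(R, d):
    # d_min(R): the smallest distance between two distinct points of R
    return min(d(p, q) for p, q in combinations(R, 2))

def aspect_ratio(R, d):
    # Delta(R) = diam(R) / d_min(R)
    return diameter(R, d) / d_min(R, d)
```

For example, for $R=\{0,1,3,7\}$ on the real line, $\mathrm{diam}(R)=7$, $d_{\min}(R)=1$ and $\Delta(R)=7$.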
\begin{lem}[upper bound of height $|H(\mathcal{T}(R))|$]
\label{lem:depth_bound}
Any finite set $R$ has the upper bound $|H(\mathcal{T}(R))|\leq 1+\log_2(\Delta(R))$.
\hfill $\blacksquare$
\end{lem}
\begin{proof}
We have $|H(\mathcal{T}(R))|\leq l_{\max} - l_{\min}+1$ by Definition~\ref{dfn:depth}.
We estimate $l_{\max} - l_{\min}$ as follows.
\medskip
Let $p \in R$ be a point such that $\mathrm{diam}(R) = \max_{q \in R}d(p,q)$.
Then $R$ is covered by the closed ball $\bar B(p; \mathrm{diam}(R))$.
Hence the cover set $C_i$ at the level $i=\log_2(\mathrm{diam}(R))$ consists of a single point $p$.
The separation condition in Definition~\ref{dfn:cover_tree_compressed} implies that
$l_{\max}\leq \log_2(\mathrm{diam}(R))$.
Since any distinct points $p,q \in R$ have $d(p,q)\geq d_{\min}(R)$, the covering condition implies that no new points can enter the cover set $C_i$ at the level $i=\lfloor\log_2(d_{\min}(R))\rfloor$, so $l_{\min}\geq\log_2(d_{\min}(R))$.
Then
$|H(\mathcal{T}(R))| \leq 1+l_{\max} - l_{\min} \leq
1+\log_2(\frac{\mathrm{diam}(R)}{d_{\min}(R)})$.
\end{proof}
If the aspect ratio $\Delta(R) = O(\text{Poly}(|R|))$ depends polynomially on the size $|R|$, then $|H(\mathcal{T}(R))| = O(\log(|R|))$.
\section{Challenging data for implicit cover trees}
\label{sec:ChallangesCoverTree}
In this section we show that all the main results of \cite{beygelzimer2006cover} require further justification.
Counterexample~\ref{cexa:construction_algorithm_of_original_cover_tree} shows a gap in the proof for the complexity of the Insert algorithm for an implicit cover tree \cite[Theorem~6]{beygelzimer2006cover}.
Counterexample \ref{cexa:original_all_nearest_neighbors_algorithm} shows another gap in the proof of \cite[Theorem~5]{beygelzimer2006cover}, which gives an upper bound for the complexity of Algorithm \ref{alg:cover_tree_k-nearest_original}.
Both counterexamples are based on Example \ref{exa:tall_imbalanced_tree}, which extends Example \ref{exa:implicitexplicitexample}.
\begin{exa}[tall imbalanced tree]
\label{exa:tall_imbalanced_tree}
For any integer $m > 10$, let $G$ be the metric graph pictured in Figure \ref{fig:GraphConstructionOfExample} that has $2$ vertices $r,q$ and $m+1$ edges $e_i$ for $i \in \{0, \dots, m\}$, where $|e_0| = 1$ and $|e_i| = 2^{m \cdot i +2}$ for $i \geq 1$. For every $i \in \{1, \dots, m^2\}$: if $i$ is divisible by $m$, let $p_{i}$ be the midpoint of the edge $e_{i / m}$; otherwise let $p_i$ be the midpoint of the segment $(p_{i+1}, q)$. Let $d$ be the path metric induced by the graph $G$, so that $d(q,r) = 1$, $d(q,p_{i}) = 2^{i+1}$, $d(r, p_{i}) = 2^{i+1}$ if $m$ divides $i$ and $d(r, p_{i}) = 1 + 2^{i+1}$ otherwise; if $i > j $ and $\ceil{\frac{i}{m}} = \ceil{\frac{j}{m}}$, then $ d(p_{j}, p_{i}) = \sum^{i}_{t = j+1} 2^{t} = 2^{i+1} - 2^{j+1}$. Define $R = \{r\} \cup \{p_{i} \mid i \in \{1,2,3,\dots,m^2\} \}$.
Let us define a compressed cover tree $\mathcal{T}(R)$ by setting $r$ to be the root node and $l(p_{i}) = i$ for all $i$. If $i$ is divisible by $m$, we set $r$ to be the parent of $p_{i}$; otherwise we set $p_{i+1}$ to be the parent of $p_{i}$. Note that, for every $i$ divisible by $m$, the point $p_i$ is the midpoint of the edge $e_{i / m}$ and therefore $d(p_{i}, r) \leq 2^{i + 1}$.
For every $i$ not divisible by $m$, the point $p_i$ is by definition the midpoint of the segment $(p_{i+1},q)$ and therefore $d(p_i, p_{i+1}) \leq 2^{i+1}$. Since the distance from any point $p_i$ to its parent is at most $2^{i+1}$, the tree $\mathcal{T}(R)$ satisfies the covering condition of Definition \ref{dfn:cover_tree_compressed}.
By definition $C_t = \{r\} \cup \{ p_i \mid i \geq t\}$. We now prove that $C_t$ satisfies the separation condition. Note first that if $p_{i} \in C_t$ and $i$ is divisible by $m$, then
$d(r, p_i) = 2^{i+1} \geq 2^{t + 1} > 2^t.$ If $i$ is not divisible by $m$, then
$d(r, p_{i}) = d(r,q) + d(q,p_{i}) = 1 + 2^{i+1} > 2^{t}$. Therefore the root $r$ is separated from the other points. Consider now arbitrary points $p_{i}$ and $p_{j}$ with $i > j \geq t$. If $\ceil{\frac{i}{m}} = \ceil{\frac{j}{m}}$, then
$$d(p_{i}, p_{j}) = \sum^{i}_{s = j+1} 2^{s} \geq 2^{j+1} \geq 2^{t+1} > 2^{t}$$
On the other hand, if $i > j \geq t $ and $\ceil{\frac{i}{m}} \neq \ceil{\frac{j}{m}}$, then
$$d(p_{i} , p_{j}) = d(p_{i},q) + d(p_{j} ,q) = 2^{i+1} + 2^{j+1} \geq 2^{j+1} \geq 2^{t+1} > 2^t. $$
Since we have shown for all $t$ that every pair of points of $C_{t}$ satisfies the separation property, the separation condition of Definition \ref{dfn:cover_tree_compressed} is satisfied for the whole tree $\mathcal{T}(R)$.
\hfill $\blacksquare$
\end{exa}
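As a sanity check, the distances derived in Example \ref{exa:tall_imbalanced_tree} (in particular $d(q,p_i)=2^{i+1}$, and $d(r,p_i)=2^{i+1}$ for $i$ divisible by $m$, $1+2^{i+1}$ otherwise) and the covering and separation conditions of $\mathcal{T}(R)$ can be verified numerically. The Python sketch below is our own test harness, not part of the paper's algorithms; it uses a small parameter $m$ purely for testing, although the example formally requires $m > 10$.

```python
def make_metric(m):
    # Returns the induced path metric d of the graph G from the example.
    # Points are encoded as "r", "q" and tuples ("p", i) for i = 1..m^2.
    def block(i):
        # ceil(i / m): the index of the edge containing p_i
        return -(-i // m)
    def d(a, b):
        if a == b:
            return 0
        if "q" in (a, b):                        # distances to the vertex q
            x = a if b == "q" else b
            return 1 if x == "r" else 2 ** (x[1] + 1)
        if "r" in (a, b):                        # distances to the root r
            x = a if b == "r" else b
            i = x[1]
            return 2 ** (i + 1) if i % m == 0 else 1 + 2 ** (i + 1)
        i, j = max(a[1], b[1]), min(a[1], b[1])
        if block(i) == block(j):                 # same edge: along the segment
            return 2 ** (i + 1) - 2 ** (j + 1)
        return 2 ** (i + 1) + 2 ** (j + 1)       # different edges: through q
    return d
```

With this metric one can check, for every level $t$, that the cover set $C_t=\{r\}\cup\{p_i \mid i \geq t\}$ is $2^t$-sparse and that every $p_i$ lies within $2^{i+1}$ of its parent.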
Recall that in \cite[Section~2]{beygelzimer2006cover} the explicit representation of a cover tree was defined as follows:
"the explicit representation of the tree coalesces all nodes in which the only child is a self-child". The simplest way to interpret this is to consider the cover sets $C_i$ and call a node $p \in C_i$ explicit if $p$ has a child at level $i-1$. In \cite[Lemma~4.3]{beygelzimer2006cover} the depth of any node $p$ is "defined as the number of explicit grandparent nodes on the path from the root
to p in the lowest level in which $p$ is explicit". The explicit depth of a node $p$ in any compressed cover tree $\mathcal{T}$ is defined in Definition \ref{dfn:explicit_depth_for_compressed_cover_tree} using the simplest interpretation of the quotes above.
\begin{lem}[Explicit depth for compressed cover tree]
\label{dfn:explicit_depth_for_compressed_cover_tree}
Let $R$ be a finite subset of some metric space $(X,d)$ and let $\mathcal{T}(R)$ be a compressed cover tree built on $R$.
For any node $p \in \mathcal{T}(R)$, let $s = (w_0, \dots , w_m)$ be the node-to-root path of $p$, where $w_0 = p$ and $w_m$ is the root. Then the explicit depth $D(p)$ of the node $p$ in the compressed cover tree can be expressed as the sum
$$D(p) = \sum^{m-1}_{i = 0}| \{q \in \mathrm{Children}(w_{i+1}) \mid l(q) \in [l(w_i), l(w_{i+1}) - 1] \} |. $$
\hfill $\blacksquare$
\end{lem}
\begin{proof}
Note that the node-to-root path of an implicit cover tree constructed on $R$ has $l(w_{j+1}) - l(w_{j}) - 1$ extra copies of $w_{j+1}$ between every $w_{j}$ and $w_{j+1}$ for any index $j \in [0,m-1]$. Recall that a node is explicit if it has a non-trivial child. Therefore there are exactly $$| \{q \in \mathrm{Children}(w_{j+1}) \mid l(q) \in [l(w_j), l(w_{j+1}) - 1] \} | $$
explicit nodes between $w_{j}$ and $w_{j+1}$.
\end{proof}
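Under the assumption that each node object stores its level, parent and list of children, the sum above can be computed by walking the node-to-root path; this is an illustrative Python sketch of ours, not one of the paper's algorithms.

```python
class Node:
    def __init__(self, point, level, parent=None):
        self.point, self.level, self.parent = point, level, parent
        self.children = []
        if parent is not None:
            parent.children.append(self)

def explicit_depth(p):
    # D(p): for every consecutive pair (w, parent(w)) on the node-to-root
    # path, count the children of parent(w) whose level lies in
    # [l(w), l(parent(w)) - 1].
    total, node = 0, p
    while node.parent is not None:
        par = node.parent
        total += sum(1 for q in par.children
                     if node.level <= q.level <= par.level - 1)
        node = par
    return total
```

For instance, if a root at level $3$ has a child $a$ at level $2$, and $a$ has children at levels $1$ and $0$, then the node at level $0$ has explicit depth $3$: two explicit levels under $a$ plus one under the root.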
\begin{lem}
\label{lem:tall_imbalanced_tree_explicit_depth}
Let $R$ be the set of Example \ref{exa:tall_imbalanced_tree} for some parameter $m > 10$ and let $\mathcal{T}(R)$ be the compressed cover tree defined there. Then for any $p \in R$ the explicit depth $D(p)$ of Definition \ref{dfn:explicit_depth_for_compressed_cover_tree} is at most $2m+1$. \hfill $\blacksquare$
\end{lem}
\begin{proof}
Note that, for any $p_i$ with $i$ divisible by $m$, the root $r$ is the parent of $p_i$. By definition
$D(p_i) = |\{p \in \mathrm{Children}(r) \mid l(p) \in [l(p_i), m^2] \} |$. Since $r$ has children exactly on the levels $j$ divisible by $m$, we get $D(p_i) = m - \frac{i}{m} + 1$. Let us now consider an index $i$ not divisible by $m$. Note that $p_{j+1}$ is the parent of $p_j$ for all $j \in [i, m \cdot \ceil{i / m} - 1]$. Therefore the path of ancestors of $p_i$ from $p_i$ to the root node $r$ has the form $(p_{i}, p_{i+1}, \dots, p_{m \cdot \ceil{i / m}} , r)$. It follows that
$$D(p_i) = \sum^{m \cdot \ceil{i / m} -1}_{j = i}| \{p \in \mathrm{Children}(p_{j+1}) \mid l(p) \in [l(p_{j}), l(p_{j+1}) - 1] \}| + D(p_{m \cdot \ceil{i / m}}).
$$
Therefore since $i \geq m \cdot (\ceil{i / m} -1) + 1 $ and $\ceil{i / m} \geq 1$ we have: $$D(p_i) = (m \cdot \ceil{i / m} - i ) + (m - \ceil{i / m} + 1) \leq m + (m+1) = 2m + 1$$
\end{proof}
\begin{figure}
\centering
\input images/MultiGraphExampleExtended.tex
\caption{Illustration of a graph $G$ and a point cloud $R$ defined in Example \ref{exa:tall_imbalanced_tree}}
\label{fig:GraphConstructionOfExample}
\end{figure}
\begin{figure}
\centering
\input images/bad_tree_example_expanded.tex
\caption{Illustration of the compressed cover tree $\mathcal{T}(R)$ defined in Example \ref{exa:tall_imbalanced_tree}}
\label{fig:bad_cover_tree}
\end{figure}
\begin{algorithm}
\caption{Original Insert() algorithm for inserting point $p$ into an implicit cover tree $T$ \cite[Algorithm~2]{beygelzimer2006cover}. This algorithm is launched with $i = l_{\max}$ and $Q_{i} = \{r\}$, where $r$ is the root node of $T$. }
\label{alg:cover_tree_construction_original}
\begin{algorithmic}[1]
\STATE \textbf{Insert}(point $p$, cover set $Q_i$, level $i$)
\STATE Set $Q = \bigcup_{q \in Q_i}\mathrm{Children}(q)$
\IF {$d(p,Q) > 2^{i}$}
\STATE \textbf{return} "no parent found"
\ELSE
\STATE Set $Q_{i-1} = \{q \in Q \mid d(p,q) \leq 2^{i}\}$
\IF{\textbf{Insert}$(p,Q_{i-1}, i-1)$ = "no parent found" and $d(p,Q_{i}) \leq 2^{i}$}
\STATE Pick $q \in Q_i$ satisfying $d(p,q) \leq 2^{i}$ and insert $p$ into $\mathrm{Children}(q)$, \textbf{return} "parent found"
\ELSE
\STATE \textbf{return} "no parent found"
\ENDIF
\ENDIF
\end{algorithmic}
\end{algorithm}
\begin{cexa}[Counterexample for a step in the proof of {\cite[Theorem~6]{beygelzimer2006cover}}]
\label{cexa:construction_algorithm_of_original_cover_tree}
First we cite a part of the proof of \cite[Theorem~6]{beygelzimer2006cover}.
"\textbf{Theorem 6} Any insertion or removal takes time at most $O(c^6\log(n))$"
[In other words, the run time of Algorithm \ref{alg:cover_tree_construction_original} is $O(c^6\log(n))$, where $n$ is the number of points in the original dataset $S$ on which the tree $T$ was constructed.]
[\emph{Partial proof:}] "Let $k = c^2 \log(|S|)$ be the maximum explicit depth
of any point, given by Lemma 4.3. Then the total
number of cover sets with explicit nodes is at most
$3k+k = 4 k$, where the first term follows from the fact
that any node that is not removed must be explicit at
least once every three iterations, and the additional
$k$ accounts for a single point that may be implicit for
many iterations.
Thus the total amount of work in Steps 1 [Our line 2] and 2 [Our lines 3-5] is
proportional to $O(k \cdot \max_i|Q_i|)$. Step 3 [Our lines 5-11] requires work
no greater than step 1 [Our line 2]."
\medskip
In other words, the above argument says that the total number of times line $1$ [our line 2] is called during the algorithm has the upper bound $4 \cdot \max_{p \in R}D(p)$, where $D(p)$ is the explicit depth of a point $p$; see Definition \ref{dfn:explicit_depth_for_compressed_cover_tree}.
Take the reference set $R$, the compressed cover tree $\mathcal{T}(R)$ and the point $q$ from Example \ref{exa:tall_imbalanced_tree} for any parameter $m > 200$.
Assume that the tree $\mathcal{T}(R)$ has already been constructed.
Let us show that building $\mathcal{T}(R \cup \{q\})$ by Algorithm~\ref{alg:cover_tree_construction_original} from the input $q$, $i = m^2+1$, $Q_i = \{r\}$ requires at least $m^2-2$ recursive calls.
This will lead to a contradiction since by Lemma \ref{lem:tall_imbalanced_tree_explicit_depth} any node $ p\in \mathcal{T}(R)$ has $D(p) \leq 2m + 1$.
We show by downward induction that, for every level $i \in [1, m^2]$, we have $Q_i = \{r,p_i\}$.
The proof for the base case $i = m^2$ is similar to the induction step and thus will be omitted.
Assume that $Q_{i}$ has the desired form for some $i$.
Let us show that the claim holds for $i-1$.
If the level $i-1$ is divisible by $m$, then the node $p_{i-1}$ is a child of the root $r$.
If the level $i-1$ is not divisible by $m$, then the node $p_{i-1}$ is a child of $p_{i}$.
Since $\mathcal{T}(R)$ contains exactly one node at each level, in both cases we have $Q = \{r, p_{i}, p_{i-1}\}$.
Since $d(q,r) = 1$, $d(q,p_{i}) = 2^{i+1}$ and $d(q,p_{i-1}) = 2^{i}$, we have $$Q_{i-1} = \{a \in Q \mid d(q,a) \leq 2^i\} = \{r,p_{i-1}\},$$ which completes the induction step.
In its actual implementation, Algorithm \ref{alg:cover_tree_construction_original} iterates only over the levels $i$ for which some node in $Q_i$ has at least one non-trivial child on level $i-1$ and the condition in line $7$ is satisfied. Since $Q_i = \{r,p_i\}$ for every index $i \in [2,m^2+1]$, since either $r$ or $p_{i}$ has a child at level $i-1$, and since the condition in line $7$ is always satisfied, it follows that $m^2-2$ is a lower bound for the number $\xi$ of recursive calls.
Therefore the contradiction follows from the chain of inequalities
$$m^2-2 \leq \xi \leq 4 \cdot \max_{p \in R}D(p) \leq 4 \cdot (2m + 1) = 8m + 4,$$
which fails for any $m > 20$.
\hfill $\blacksquare$
\end{cexa}
\begin{algorithm}
\caption{Original \cite[Algorithm~1]{beygelzimer2006cover} based on an implicit cover tree $T$ \cite[Section~2]{beygelzimer2006cover} for nearest neighbor search, which is used in Counterexample \ref{cexa:original_all_nearest_neighbors_algorithm}. The children of a node $q$ of an implicit cover tree are the nodes one level below $q$ that have $q$ as their parent. In the actual implementation, the loop in lines 3-6 runs only over the levels containing nodes with non-trivial children (not coinciding with their parents).
The level $+\infty$ can be replaced by $l_{\max}(T)$ and $-\infty$ by $l_{\min}(T)$ in the code.
}
\label{alg:cover_tree_k-nearest_original}
\begin{algorithmic}[1]
\STATE \textbf{Input} : implicit cover tree $T$, a query point $p$
\STATE Set $Q_{\infty} = C_{\infty}$ where $C_{\infty}$ is the root level of $T$
\FOR{$i$ from $\infty$ down to $-\infty$}
\STATE Set $Q = \bigcup_{q \in Q_i}\text{Children}(q).$
\STATE Form cover set $Q_{i-1} = \{q \in Q \mid d(p,q) \leq d(p,Q) + 2^{i}\}$
\ENDFOR
\STATE \textbf{return} $\text{argmin}_{q \in Q_{-\infty}}d(p,q)$
\end{algorithmic}
\end{algorithm}
\begin{cexa}[Counterexample for a step in the proof of {\cite[Theorem~5]{beygelzimer2006cover}}]
\label{cexa:original_all_nearest_neighbors_algorithm}
We cite a part of the proof of \cite[Theorem~5]{beygelzimer2006cover}.
"\textbf{Theorem 5}
If the dataset $S \cup \{p\}$ has expansion constant $c$, the nearest neighbor of $p$ can be found in time $O(c^{12}\log(n))$."
[\emph{Partial proof:}] "Let $Q^{*}$ be the last $Q$ considered by the Algorithm \ref{alg:cover_tree_k-nearest_original} (so $Q^{*}$ consists only of leaf nodes with scale $-\infty$). Lemma 4.3 bounds the explicit depth of any node in the tree (and in particular any node in $Q^{*}$) by $k = O(c^2 \log (N))$. Consequently the number of iterations is at most $k|Q^{*}| \leq k \max_i|Q_i|$."
\smallskip
In other words, the above argument claims that the total number $\xi$ of times when Algorithm \ref{alg:cover_tree_k-nearest_original} runs lines 3-6 has an upper bound $$\xi \leq \max_{p \in R}D(p) \cdot \max_i|Q_i|.$$
Take $R,\mathcal{T}(R)$ and $q$ from Example \ref{exa:tall_imbalanced_tree}. We will apply Algorithm \ref{alg:cover_tree_k-nearest_original} to the tree $\mathcal{T}(R)$ and query point $q$.
By Lemma \ref{lem:tall_imbalanced_tree_explicit_depth} the cover tree $\mathcal{T}(R)$ with parameter $m$ satisfies $D(p) \leq 2m+1$ for all $p \in R$. A contradiction to the original argument will follow after showing that $\max_i|Q_i| \leq 2$ and $\xi \geq m^2 - 2$.
Let us first estimate $\max_i |Q_i|$.
Similarly to Counterexample \ref{cexa:construction_algorithm_of_original_cover_tree}, we show by downward induction that, for every iteration (lines 3-5) of Algorithm \ref{alg:cover_tree_k-nearest_original} with $i \in [1, m^2]$, we have
$Q_i = \{r,p_i\}$.
The proof for the base case $i = m^2$ is similar to the induction step and thus will be omitted.
Assume that $Q_{i}$ has the desired form for some $i$.
Let us show that the claim holds for $i-1$.
If the level $i-1$ is divisible by $m$, then the node $p_{i-1}$ is a child of the root $r$.
If the level $i-1$ is not divisible by $m$, then the node $p_{i-1}$ is a child of $p_{i}$.
Since $\mathcal{T}(R)$ contains exactly one node at each level, in both cases
we have $Q = \{r, p_{i}, p_{i-1}\}$.
Since $d(q,r) = 1$, $d(q,p_{i}) = 2^{i+1}$ and $d(q,p_{i-1}) = 2^{i}$, we have $d(q,Q) = 1$ and
$$Q_{i-1} = \{a \in Q \mid d(q,a) \leq d(q,Q) + 2^i = 1 + 2^{i}\} = \{r,p_{i-1}\}.$$
Therefore $|Q_i| \leq 2$ for all $i \in [1, m^2]$.
In its actual implementation, Algorithm \ref{alg:cover_tree_k-nearest_original} iterates only over the levels $i$ for which some node in $Q_i$ has at least one non-trivial child at level $i-1$.
Since $Q_i = \{r,p_i\}$ and, for every index $i \in [2,m^2+1]$, either $r$ or $p_{i}$ has a child on level $i-1$, it follows that $m^2-2$ is a lower bound for the number $\xi$ of iterations.
A contradiction follows from
$$m^2 - 2 \leq \xi \leq \max_{p \in R}D(p) \cdot \max_i|Q_i| \leq (2m+1) \cdot 2 = 4m+2 \text{ for any }m > 20.$$
\hfill $\blacksquare$
\end{cexa}
\section{Building a compressed cover tree with a near linear parameterized complexity}
\label{sec:ConstructionCovertree}
In this section, the main Theorem \ref{thm:construction_time} corrects the complexity claimed in \cite[Theorem~6]{beygelzimer2006cover}, whose proof was discussed in Counterexample \ref{cexa:construction_algorithm_of_original_cover_tree}.
\begin{dfn}[$\mathrm{Children}(p,i)$ and $\mathrm{Next}(p,i,\mathcal{T}(R))$ for a compressed cover tree]
\label{dfn:implementation_compressed_cover_tree}
In a compressed cover tree $\mathcal{T}(R)$ on a set $R$, for any level $i$ and a node $p \in R$, set $\mathrm{Children}(p,i) = \{ a \in \mathrm{Children}(p) \mid l(a) = i \}$.
Let $\mathrm{Next}(p,i,\mathcal{T}(R))$ be the maximal level $j$ satisfying $j < i$ and $\mathrm{Children}(p,j) \neq \emptyset$.
For every node $p$, we store its set of children in a linked hash map so that
\begin{enumerate}[label=(\arabic*)]
\item any key $i$ gives access to $\mathrm{Children}(p,i)$,
\item every $\mathrm{Children}(p,i)$ has access to $\mathrm{Children}(p,\mathrm{Next}(p,i, \mathcal{T}(R)))$,
\item we can directly access $\max \{j \mid \mathrm{Children}(p,j) \neq \emptyset\}$.
\hfill $\blacksquare$
\end{enumerate}
\end{dfn}
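The bookkeeping of Definition \ref{dfn:implementation_compressed_cover_tree} can be sketched in Python as follows; note that a plain dict with a linear scan stands in for the linked hash map, so `next_level` below takes time proportional to the number of stored levels rather than the $O(1)$ access assumed in the definition.

```python
class CoverTreeNode:
    def __init__(self, point, level):
        self.point, self.level = point, level
        self.children_by_level = {}   # level i -> list of children at level i

    def add_child(self, child):
        self.children_by_level.setdefault(child.level, []).append(child)

    def children_at(self, i):
        # Children(p, i): children of p whose level equals i
        return self.children_by_level.get(i, [])

    def next_level(self, i):
        # Next(p, i, T(R)): maximal level j < i with Children(p, j) non-empty
        levels = [j for j, c in self.children_by_level.items() if j < i and c]
        return max(levels, default=None)

    def max_child_level(self):
        # max{ j | Children(p, j) non-empty }
        levels = [j for j, c in self.children_by_level.items() if c]
        return max(levels, default=None)
```

A linked hash map (e.g. levels chained in decreasing order) would make `next_level` and `max_child_level` constant-time, which is what the complexity analysis below relies on.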
Let $R$ be a finite subset of a metric space $(X,d)$.
A compressed cover tree $\mathcal{T}(R)$ will be incrementally constructed by adding points one by one as summarized in Algorithm \ref{alg:cover_tree_k-nearest_construction_whole}.
First we select a root node $r \in R$ and form a tree $\mathcal{T}(\{r\})$ of a single node $r$ at the level $l_{\max} = l_{\min} = +\infty$.
Assume that we have a compressed cover tree $\mathcal{T}(W)$ for a subset $W \subset R$.
For any point $p \in R \setminus W$, Algorithm \ref{alg:cover_tree_k-nearest_construction} builds a larger compressed cover tree $\mathcal{T}(W \cup \{p\})$ from $\mathcal{T}(W)$.
\begin{algorithm}
\caption{Building a compressed cover tree $\mathcal{T}(R)$ from Definition \ref{dfn:cover_tree_compressed} for a finite metric space $(R,d)$.}
\label{alg:cover_tree_k-nearest_construction_whole}
\begin{algorithmic}[1]
\STATE \textbf{Input} : a finite subset $R$ of a metric space $(X,d)$
\STATE \textbf{Output} : a compressed cover tree $\mathcal{T}(R)$.
\STATE Choose a random point $r \in R$ to be a root of $\mathcal{T}(R)$
\STATE Build the initial compressed cover tree $\mathcal{T} = \mathcal{T}(\{r\})$ by making $l(r) = +\infty$.
\FOR{$p \in R \setminus \{r\}$}
\STATE $\mathcal{T} \leftarrow $ run AddPoint$(\mathcal{T} , p )$ described in Algorithm \ref{alg:cover_tree_k-nearest_construction}.
\ENDFOR
\STATE For root $r$ of $\mathcal{T}$ set $l(r) = 1 + \max_{p \in R \setminus \{r\}}l(p)$
\STATE \textbf{Return} a compressed cover tree $\mathcal{T}$ built on the set $R$.
\end{algorithmic}
\end{algorithm}
\begin{algorithm}
\caption{This algorithm building $\mathcal{T}(W \cup \{p\})$ from $\mathcal{T}(W)$ runs in the main loop (lines 3-5) of Algorithm \ref{alg:cover_tree_k-nearest_construction_whole}.}
\label{alg:cover_tree_k-nearest_construction}
\begin{algorithmic}[1]
\STATE \textbf{Function} AddPoint( a compressed cover tree $\mathcal{T}(W)$ with a root $r$ for a finite subset $W\subseteq X$, a point $p\in X$)
\STATE \textbf{Output} : compressed cover tree $\mathcal{T}(W \cup \{p\})$.
\STATE Set $i \leftarrow l_{\max}(\mathcal{T}(W))$ \COMMENT{If the root $r$ has no children, $l_{\max}$ is undefined and we set $i \leftarrow -\infty$}
\STATE Set $R_{i} \leftarrow \{r\}$, $i' \leftarrow +\infty$
\STATE Set $m \leftarrow \max_{j \leq i} \{j \mid d(p,R_{i}) > 2^{j}\} $
\WHILE{$m \leq i$} \label{line:cof:loop_start}
\STATE Set $\mathcal{C}(R_i) \leftarrow \{a \in \mathrm{Children}(q) \text{ for some }q \in R_i \mid l(a) \geq i-1 \}$ \\ \COMMENT{Recall that $\mathrm{Children}(q)$ contains the node $q$ itself}
\label{line:cof:dfn_C}
\STATE Set $R_{i-1} = \{a \in \mathcal{C}(R_i) \mid d(p,a) \leq 2^{i} \}$
\label{line:cof:defRim1}
\STATE $t = \max_{ a \in R_{i-1}} \mathrm{Next}(a,i-1,\mathcal{T}(W)) $ \\ \COMMENT{If $R_{i-1}$ is empty or its nodes have no children below level $i-1$, we set $t = -\infty$}
\label{line:cof:dfn_t}
\STATE Set $m = \max_{j \leq i-1} \{j \mid d(p,R_{i-1}) > 2^{j}\}$ \label{line:cof:dfn_m}
\STATE Set $R_{t+1} \leftarrow R_{i}$, $i' \leftarrow i$ and $i \leftarrow t + 1$ \COMMENT{if $t = -\infty$, then $t + 1 = -\infty$}
\ENDWHILE \label{line:cof:loop_end}
\STATE Set $j \leftarrow \begin{cases}
i & \text{ if }R_i \neq \emptyset \\
i' & \text{ else}
\end{cases}$
\label{line:cof:selectstart}
\STATE Pick $a \in R_j$ minimizing $d(p,a)$.
\STATE Set $l(p) = m$ and define $a$ to be the parent of $p$
\label{line:cof:selectend}
\end{algorithmic}
\end{algorithm}
Note that during the construction of the compressed cover tree in Algorithm \ref{alg:cover_tree_k-nearest_construction} we store additional information for every node $p$: the number of descendants of $p$ and the maximal level of the nodes in $\mathrm{Children}(p)$.
\begin{thm}[correctness of Algorithm \ref{alg:cover_tree_k-nearest_construction_whole}]
\label{thm:construction_correctness}
Algorithm \ref{alg:cover_tree_k-nearest_construction_whole} builds a compressed cover tree satisfying Definition~\ref{dfn:cover_tree_compressed}.
\hfill $\blacksquare$
\end{thm}
\begin{proof}
It suffices to prove that Algorithm~\ref{alg:cover_tree_k-nearest_construction} correctly extends a compressed cover tree $\mathcal{T}(W)$ for any finite subset $W\subseteq X$ by adding a point $p$.
Let $C_i$ be the $i$-th cover set of $\mathcal{T}(W)$.
Since $d(p, W) > 0$, the integer $m$ from line~\ref{line:cof:dfn_m} is always well-defined.
Since $\mathcal{T}(W)$ is a finite tree, it will run out of children at a finite level $l_{\min}$.
Thus the condition $m \leq i$ of the while loop (lines \ref{line:cof:loop_start} - \ref{line:cof:loop_end}) will eventually fail and the algorithm will terminate.
\smallskip
It remains to prove that $\mathcal{T}(W \cup \{p\})$ satisfies Definition~\ref{dfn:cover_tree_compressed}.
The parent of $p$ is selected at the end (lines \ref{line:cof:selectstart} - \ref{line:cof:selectend}), which is reached only after the while loop terminates, i.e. when $m > i$.
By the definition of $m$ in line \ref{line:cof:dfn_m} we have $d(p,R_{i-1}) \leq 2^{m + 1}$.
Therefore the node $v \in R_{i-1}$ minimizing the distance $d(p,v)$ satisfies $d(p,v) \leq 2^{m + 1}$.
Since the rest of the tree is unchanged, condition (\ref{dfn:cover_tree_compressed}b) holds.
\smallskip
To check (\ref{dfn:cover_tree_compressed}c), let $w \in C_{h}$ be any other node at a level $h \leq l(p)$.
Let $q$ be the ancestor of $w$ that has the lowest index $j$ among all ancestors of $w$ contained in some set $R_j$. If $j = i-1$ in the last iteration (lines \ref{line:cof:loop_start} - \ref{line:cof:loop_end}) for the input $p$, then the separation condition is satisfied trivially by the definition of $m$ on line \ref{line:cof:dfn_m}. Otherwise $q \in R_{j} \setminus R_{j-1}$ and thus $d(p,q) > 2^{j}$.
Since $q$ is an ancestor of $w$, there is a chain of nodes $(a_s)$ satisfying $a_0 = w$ and $a_m = q$ such that $a_{s+1}$ is the parent of $a_{s}$ for all $s \in [0,m-1]$. Since $l(w) \geq h$, we know that $l(a_{1}) \geq h + 1$ and, in general, $l(a_{s}) \geq h + s$ for any $s \in [0,m-1]$. Then
$d(q,w) \leq \sum\limits^{m-1}_{s = 0}d(a_s, a_{s+1}) \leq \sum\limits^{m-1}_{s = 0}2^{h+s} \leq \sum\limits^{j-1}_{x = h+1} 2^{x} = 2^{j} - 2^{h+1}$.
The triangle inequality gives
$d(p,w) \geq d(p,q) - d(q,w) > 2^{j} - (2^{j} - 2^{h + 1}) = 2^{h+1}$, hence $d(p,w)> 2^{h}$.
Since $w$ was arbitrary, we get $d(C_{h}, p) > 2^{h}$, therefore (\ref{dfn:cover_tree_compressed}c) holds.
\end{proof}
To correct \cite[Theorem~6]{beygelzimer2006cover} in Theorem \ref{thm:construction_time}, the depth estimate $O(c^2\log n)$ is replaced by the concept of the height $|H(\mathcal{T}(R))|$, which has the upper bound $O(\log_2(\Delta(R)))$ by Lemma~\ref{lem:depth_bound}, where $\Delta(R)$ is the aspect ratio of the set $R$.
\medskip
Additionally, another step in the past proof of \cite[Theorem~6]{beygelzimer2006cover} estimated the complexity of line $1$ of \cite[Algorithm~2]{beygelzimer2006cover}, corresponding to line \ref{line:cof:dfn_C} of Algorithm \ref{alg:cover_tree_k-nearest_construction}, as $(c(R))^4$. However, since $\mathcal{C}(R_i)$ is the set of children of the nodes of $R_i$, we should have $|\mathcal{C}(R_i)| \leq (c_m(R))^4\max_i|R_i|$. The proof of Theorem \ref{thm:construction_time} shows that $\max_i|R_i| \leq (c_m(R))^4$, so our new estimate is $|\mathcal{C}(R_i)| \leq (c_m(R))^8$ instead of the original estimate $(c(R))^4$.
\begin{thm}[complexity of a compressed cover tree]
\label{thm:construction_time}
Let $R$ be a finite subset of a metric space $(X,d)$.
Algorithm~\ref{alg:cover_tree_k-nearest_construction_whole} builds
a compressed cover tree $\mathcal{T}(R)$ with a height $|H(\mathcal{T}(R))|$ from Definition \ref{dfn:depth} in time $O((c_m(R))^8 \cdot |H(\mathcal{T}(R))| \cdot |R|),$
where $c_m(R)$ is the minimized expansion constant from Definition \ref{dfn:expansion_constant}.
\hfill $\blacksquare$
\end{thm}
\begin{proof}
The complexity of Algorithm \ref{alg:cover_tree_k-nearest_construction_whole} is dominated by lines $3-5$ which call Algorithm \ref{alg:cover_tree_k-nearest_construction} $O(|R|)$ times.
\smallskip
Assume that we have already constructed a compressed cover tree $\mathcal{T}(W)$; the goal of Algorithm \ref{alg:cover_tree_k-nearest_construction} is to construct the tree $\mathcal{T}(W \cup \{p\})$ for some $p \in R \setminus W$.
Since $\mathcal{T}(R)$ can have a chain-like structure, in the worst case the loop in lines
\ref{line:cof:loop_start}-\ref{line:cof:loop_end} is performed $|H(\mathcal{T}(R))|$ times. By Lemma \ref{lem:compressed_cover_tree_width_bound}, since $W \subseteq R \subseteq X$, the set $\mathcal{C}(R_i)$ defined in line \ref{line:cof:dfn_C} has at most $(c_m(W))^4|R_i| \leq (c_m(R))^4|R_i|$ nodes. Therefore both lines \ref{line:cof:dfn_C} and \ref{line:cof:defRim1} take at most $O((c_m(R))^4|R_i|)$ time. Line \ref{line:cof:dfn_t} handles $|R_{i-1}|$ elements; for each of them the index $\mathrm{Next}(a,i-1, \mathcal{T}(W))$ can be retrieved in $O(1)$ time, because for every node $a \in \mathcal{T}(R)$ we can maintain, while executing line \ref{line:cof:dfn_C}, the last level $j$ on which $a$ had children. Therefore line \ref{line:cof:dfn_t} takes at most $O(|R_i|)$ time. Line \ref{line:cof:dfn_m} can be computed in $O(|R_i|)$ time.
Let us now bound the maximal size of $R_i$ during the whole run-time of the algorithm.
Note that $R_{i-1} \subseteq \bar{B}(p,2^{i}) \cap C_{i-1}$, where $C_{i-1}$ is the $(i-1)$-th cover set of $\mathcal{T}(R)$. Since $C_{i-1}$ is a $2^{i-1}$-sparse subset of $R$, we can apply the packing Lemma \ref{lem:packing} with $t = 2^{i}$ and $\delta = 2^{i-1}$ to obtain
$|\bar{B}(p,2^{i}) \cap C_{i-1} | \leq (c_m(W))^4 $.
Lemma \ref{lem:expansion_constant_property} implies that $(c_m(W))^4 \leq (c_m(R))^4 $, therefore $|\bar{B}(p,2^{i}) \cap C_{i-1} | \leq (c_m(R))^4$.
\smallskip
The complexity of Algorithm \ref{alg:cover_tree_k-nearest_construction} is dominated by line \ref{line:cof:dfn_C}, which takes time $O(|\mathcal{C}(R_i)|) \leq O((c_m(R))^4\max_i|R_i|) \leq O((c_m(R))^8)$.
Then the whole Algorithm \ref{alg:cover_tree_k-nearest_construction_whole} has time
$O((c_m(R))^8 \cdot |H(\mathcal{T}(R))| \cdot |R|)$ as desired.
\end{proof}
\section{Distinctive descendant set}
\label{sec:distinctive_descendant_set}
In this section we introduce auxiliary concepts needed for the subsequent sections. The main concept of this section is the distinctive descendant set of Definition \ref{dfn:distinctive_descendant_set}. The distinctive descendant set at a level $i$ of a node $p \in \mathcal{T}(R)$ in a compressed cover tree corresponds to the set of descendants of the copy of the node $p$ at level $i$ in the original implicit cover tree $T(R)$.
Another important concept is the $\lambda$-point of Definition \ref{dfn:lambda-point}, which is used in Algorithm \ref{alg:cover_tree_k-nearest} as an approximation of the $k$-th nearest neighbor. Its $\beta$-point property of Lemma \ref{lem:beta_point} plays a major role in the proof of the main time complexity result, Theorem \ref{thm:cover_tree_knn_time}.
\begin{figure}
\centering
\input images/explicit_cover_tree_extension.tex
\caption{Consider a compressed cover tree $\mathcal{T}(R)$ built on the set $R = \{1,2,3,4,5,7,8\}$. Let $\mathcal{S}_i(p, \mathcal{T}(R))$ be the distinctive descendant set of Definition \ref{dfn:distinctive_descendant_set}. Then $V_2(1) = \emptyset$, $V_{1}(1) = \{5\}$ and $V_{0}(1) = \{3,5,7\}$.
Moreover, $\mathcal{S}_2(1, \mathcal{T}(R)) = \{1,2,3,4,5,7,8\}$, $\mathcal{S}_1(1, \mathcal{T}(R)) = \{1,2,3,4\}$ and $\mathcal{S}_{0}(1, \mathcal{T}(R)) = \{1\}$.}
\label{fig:uniqueDescendant}
\end{figure}
\begin{dfn}[Distinctive descendant sets]
\label{dfn:distinctive_descendant_set}
Let $R\subseteq X$ be a finite reference set with a compressed cover tree $\mathcal{T}(R)$.
For any node $p \in \mathcal{T}(R)$ and any level $i \leq l(p) - 1$, set
$V_{i}(p) = \{u \in \mathrm{Descendants}(p) \mid i \leq l(u)\leq l(p) - 1\}.$
If $i \geq l(p)$, then set $V_i(p) = \emptyset$.
For any level $i \leq l(p) $, the \emph{distinctive descendant set} is
$\mathcal{S}_i(p, \mathcal{T}(R)) = \mathrm{Descendants}(p) \setminus \bigcup_{u \in V_{i}(p)} \mathrm{Descendants}(u)$
whose size is denoted by $|\mathcal{S}_i(p, \mathcal{T}(R)) |$.
\hfill $\blacksquare$
\end{dfn}
\begin{algorithm}
\caption{This algorithm performs a depth-first traversal of a tree to compute the number of descendants of every node $p \in \mathcal{T}(R)$}
\label{alg:cover_tree_number_of_descendants}
\begin{algorithmic}[1]
\STATE \textbf{Function} : CountDescendants(Node $p$ of $\mathcal{T}(R)$)
\STATE \textbf{Output} : Number of descendants of node $p$ in tree $\mathcal{T}(R)$.
\STATE Set $n = 1$
\FOR{$a \in \mathrm{Children}(p)$}
\STATE $n = n + \text{CountDescendants}(a)$
\ENDFOR
\STATE \textbf{Set} $|\mathrm{Descendants}(p)| = n$ and \textbf{return} $n$
\end{algorithmic}
\end{algorithm}
Lemma \ref{lem:descendants_precompute} shows that it is possible to precompute the number of descendants of every node in $O(|R|)$ time.
\begin{lem}
\label{lem:descendants_precompute}
Let $R$ be a finite subset of a metric space $(X,d)$. Assume that a compressed cover tree $\mathcal{T}(R)$ has already been constructed. Then Algorithm \ref{alg:cover_tree_number_of_descendants} finds the number of descendants $|\mathrm{Descendants}(p)|$ for all points $p \in R$ in $O(|R|)$ time.
\end{lem}
\begin{proof}
The claim follows by noting that, excluding recursive calls, CountDescendants() performs $O(1)$ work and is called exactly once for every $p \in R$. Therefore the time complexity of this method is $O(|R|)$.
\end{proof}
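The depth-first count of Algorithm \ref{alg:cover_tree_number_of_descendants} can be sketched in Python as follows; this is a minimal sketch, and the \texttt{Node} class with its attribute names is hypothetical, not part of the paper's implementation.

```python
class Node:
    """Hypothetical tree node; the `children` list excludes the node itself."""
    def __init__(self, point, children=None):
        self.point = point
        self.children = children or []
        self.num_descendants = 0  # filled in by count_descendants


def count_descendants(p):
    # Each node is visited exactly once, so the total work over the whole
    # tree is O(|R|), matching Lemma lem:descendants_precompute.
    n = 1  # by the paper's convention, p counts as its own descendant
    for a in p.children:
        n += count_descendants(a)
    p.num_descendants = n
    return n
```

For example, a root with children $\{2,4\}$ where node $2$ has the single child $3$ yields $4$ descendants for the root.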
\begin{algorithm}
\caption{This algorithm returns the sizes of the distinctive descendant sets $\mathcal{S}_i(p, \mathcal{T}(R))$ for all levels $i \in H(\mathcal{T}(R))$}
\label{alg:cover_tree_distinctive_descendants}
\begin{algorithmic}[1]
\STATE \textbf{Function} : CountDistinctiveDescendants(Node $p$ of $\mathcal{T}(R)$)
\STATE \textbf{Output} : Sizes $|\mathcal{S}_i(p, \mathcal{T}(R))|$ of the distinctive descendant sets for all $i \in H(\mathcal{T}(R))$.
\STATE Set $i = l(p)$
\STATE Set $|\mathcal{S}_i(p, \mathcal{T}(R))| = |\mathrm{Descendants}(p)|$
\WHILE{$\mathrm{Next}(p,i,\mathcal{T}(R))$ is defined}
\STATE Set $j = \mathrm{Next}(p,i, \mathcal{T}(R))$
\FOR {$t \in [j+1,i] \cap H(\mathcal{T}(R))$}
\STATE Set $|\mathcal{S}_t(p, \mathcal{T}(R))| = |\mathcal{S}_i(p, \mathcal{T}(R))|$
\ENDFOR
\STATE Set $|\mathcal{S}_{j}(p, \mathcal{T}(R))| = |\mathcal{S}_{i}(p, \mathcal{T}(R))| - \sum_{a \in \mathrm{Children}(p,j)} |\mathrm{Descendants}(a)|$
\STATE Set $i = j$
\ENDWHILE
\end{algorithmic}
\end{algorithm}
\begin{lem}
\label{lem:distinctive_descendants_precompute}
Let $R$ be a finite subset of a metric space $(X,d)$ and let $\mathcal{T}(R)$ be its compressed cover tree. Assume that we have precomputed the number of descendants $|\mathrm{Descendants}(p)|$ for every $p \in R$ using Algorithm \ref{alg:cover_tree_number_of_descendants}.
Then for any fixed $p \in R$ Algorithm \ref{alg:cover_tree_distinctive_descendants} computes for all $i \in H(\mathcal{T}(R))$ the sizes of distinctive descendant sets $\mathcal{S}_i(p, \mathcal{T}(R))$ of Definition \ref{dfn:distinctive_descendant_set} in $O((c_m(R))^4\cdot|H(\mathcal{T}(R))|)$ time.
\end{lem}
\begin{proof}
In each iteration of the loop (lines 5-12), the most costly operation is performed in line 10. By Lemma \ref{lem:compressed_cover_tree_width_bound} any node $p$ has at most $(c_m(R))^4$ children on any level, and since the number of descendants was precomputed for all nodes of $\mathcal{T}(R)$, the complexity of line 10 is $O((c_m(R))^4)$. Since there are at most $|H(\mathcal{T}(R))|$ levels, the final time complexity is $O((c_m(R))^4\cdot|H(\mathcal{T}(R))|)$.
\end{proof}
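The subtraction step in line 10 of Algorithm \ref{alg:cover_tree_distinctive_descendants} can be sketched for a single node $p$ as follows. This is a simplified sketch that assumes the children of $p$ have been grouped by level together with their precomputed descendant counts; all names are hypothetical.

```python
def distinctive_sizes(total_descendants, children_counts_by_level):
    """For one node p: total_descendants = |Descendants(p)|, and
    children_counts_by_level maps a level j < l(p) to the list of
    |Descendants(a)| over the children a of p at level j.
    Returns |S_j(p)| for every level j at which p has children."""
    sizes = {}
    current = total_descendants  # |S_i(p)| is constant between child levels
    for j in sorted(children_counts_by_level, reverse=True):  # top-down
        # line 10 of the algorithm: subtract descendants of children at level j
        current -= sum(children_counts_by_level[j])
        sizes[j] = current
    return sizes
```

For instance, a node with 7 descendants, one child subtree of size 3 at level 1, and two child subtrees of sizes 1 and 2 at level 0 gets sizes $4$ and $1$ at levels $1$ and $0$, mirroring Figure \ref{fig:uniqueDescendant}.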
Let $i$ be an arbitrary level and let $j = \mathrm{Next}(p,i, \mathcal{T}(R))$. By the definition of $\mathrm{Next}$ it follows that $l(q) \notin [j+1,i-1]$ for all $q \in \mathrm{Descendants}(p)$. Therefore $V_i(p) = V_j(p)$ and $\mathcal{S}_i(p, \mathcal{T}(R)) = \mathcal{S}_j(p, \mathcal{T}(R))$. It follows that $|\mathcal{S}_i(p, \mathcal{T}(R))|$ can change only at indices $i \in H(\mathcal{T}(R))$. Therefore Lemma \ref{lem:distinctive_descendants_precompute} shows that all the essential distinctive descendant sets of a compressed cover tree $\mathcal{T}(R)$ can be precomputed in $O((c_m(R))^4\cdot|R| \cdot |H(\mathcal{T}(R))|)$ time.
Recall that the neighborhood $N(q;r) = \{p \in C \mid d(q,p) \leq d(q,r)\}$ was introduced in Definition~\ref{dfn:kNearestNeighbor}.
\begin{dfn}[$\lambda$-point]
\label{dfn:lambda-point}
Fix a query point $q$ in a metric space $(X,d)$ and fix any level $i \in \mathbb{Z}$.
Let $\mathcal{T}(R)$ be its compressed cover tree on a finite reference set $R \subseteq X$.
Let $C$ be a subset of a cover set $C_i$ from Definition~\ref{dfn:cover_tree_compressed} satisfying $\sum_{p \in C}|\mathcal{S}_i(p, \mathcal{T}(R))| \geq k$, where $\mathcal{S}_i(p, \mathcal{T}(R))$ is the distinctive descendant set from Definition \ref{dfn:distinctive_descendant_set}.
For any $k\geq 1$, define $\lambda_k(q,C)$ as a point $\lambda\in C$ that minimizes $d(q,\lambda)$ subject to $\sum_{p \in N(q;\lambda)}|\mathcal{S}_i(p, \mathcal{T}(R)) |\geq k$.
\hfill $\blacksquare$
\end{dfn}
\begin{algorithm}
\caption{Computation of a $\lambda$-point of Definition \ref{dfn:lambda-point} in line \ref{line:knns:dfnLambda} of Algorithm \ref{alg:cover_tree_k-nearest} }
\label{alg:lambda}
\begin{algorithmic}[1]
\STATE \textbf{Input:} A point $q \in X$, a subset $C$ of level set $C_i$ of a compressed cover tree $\mathcal{T}(R)$, an integer $k \in \mathbb{Z}$
\STATE Initialize an empty max-binary heap $B$ and an empty array $D$.
\FOR{$p \in C$}
\STATE add $p$ to $B$ with priority $d(q,p)$
\IF{$|B| > k$}
\STATE remove the point with a maximal value from $B$
\ENDIF
\ENDFOR
\STATE Transfer points from the binary heap $B$ to the array $D$ in reverse order.
\STATE Find the smallest index $j$ such that $\sum^{j}_{t = 0}|\mathcal{S}_i(D[t] , \mathcal{T}(R))| \geq k$.
\STATE \textbf{return} $\lambda = D[j]$.
\end{algorithmic}
\end{algorithm}
\begin{lem}[Time complexity of $\lambda$-point]
\label{lem:time_lambdapoint}
In the conditions of Definition \ref{dfn:lambda-point}, the time complexity of Algorithm \ref{alg:lambda} is $O(|C| \cdot \log(k))$.
\end{lem}
\begin{proof}
To compute $\lambda = \lambda_k(q,R)$ in Algorithm \ref{alg:lambda} (launched from line \ref{line:knns:dfnLambda}), we select the $k$ closest elements of the set $C$, which takes at most $|C| \cdot \log(k)$ time using a binary heap data structure \cite[Section~6.5]{Cormen1990}.
\end{proof}
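A sketch of Algorithm \ref{alg:lambda} in Python, using the standard \texttt{heapq} module: a min-heap on negated distances simulates the max-heap of the pseudocode. The inputs \texttt{dist} and \texttt{sizes} (a precomputed map $p \mapsto |\mathcal{S}_i(p,\mathcal{T}(R))|$) are assumptions of this sketch, not names from the paper.

```python
import heapq


def lambda_point(q, C, dist, sizes, k):
    # Keep the k closest points of C to q in a max-heap of size k:
    # each push/pop costs O(log k), giving O(|C| log k) in total.
    heap = []  # entries (-dist(q, p), p); the root is the farthest kept point
    for p in C:
        heapq.heappush(heap, (-dist(q, p), p))
        if len(heap) > k:
            heapq.heappop(heap)  # discard the farthest of the k+1 points
    # Transfer to an array D ordered by increasing distance to q.
    D = [p for _, p in sorted(heap, key=lambda e: -e[0])]
    # Smallest index j whose cumulative distinctive-descendant sizes reach k.
    total = 0
    for p in D:
        total += sizes[p]
        if total >= k:
            return p
    raise ValueError("distinctive descendant sizes sum to less than k")
```

On the data of Example \ref{exa:simulatedRun}, iteration $i=1$ with $C = \{2,4,6,8,10,12,14\}$, $q=0$, $k=5$ and sizes $3,1,3,\dots$ returns $\lambda = 6$, as in the simulated run.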
\begin{lem}[Separation lemma]
\label{lem:separation}
In the conditions of Definition \ref{dfn:distinctive_descendant_set}, let $p$ and $q$ be distinct nodes of $\mathcal{T}(R)$ with levels $l(p) \geq i$ and $l(q) \geq i$. Then $\mathcal{S}_i(p , \mathcal{T}(R)) \cap \mathcal{S}_{i}(q, \mathcal{T}(R)) = \emptyset$.
\end{lem}
\begin{proof}
Without loss of generality assume $l(p) \geq l(q)$. If $q$ is not a descendant of $p$, then the claim holds trivially since $\mathrm{Descendants}(q) \cap \mathrm{Descendants}(p) = \emptyset$. If $q$ is a descendant of $p$, then $l(q) \leq l(p) - 1$ and therefore $q \in V_i(p)$. It follows
$\mathcal{S}_i(p , \mathcal{T}(R)) \cap \mathrm{Descendants}(q) = \emptyset$ and therefore
$$\mathcal{S}_{i}(p, \mathcal{T}(R)) \cap \mathcal{S}_{i}(q, \mathcal{T}(R)) \subseteq \mathcal{S}_{i}(p, \mathcal{T}(R)) \cap \mathrm{Descendants}(q) = \emptyset.$$
\end{proof}
\begin{lem}[Sum lemma]
\label{lem:sum}
In the conditions of Definition \ref{dfn:lambda-point}, for any $V \subseteq C$ where $C$ is a subset of a cover set $C_i$ of $\mathcal{T}(R)$ we have: $$|\bigcup_{p \in V}\mathcal{S}_i(p,\mathcal{T}(R))| = \sum_{p \in V} |\mathcal{S}_i(p, \mathcal{T}(R))| $$
\end{lem}
\begin{proof}
The proof follows from Lemma \ref{lem:separation} by noting that any $p \in V \subseteq C$ has $l(p) \geq i$.
\end{proof}
By Lemma \ref{lem:sum} in Definition \ref{dfn:lambda-point} it is enough to assume that $|\bigcup_{p \in C}\mathcal{S}_i(p, \mathcal{T}(R)) | \geq k$.
\begin{lem}
\label{lem:distinctive_descendant_child_level}
In the conditions of Definition \ref{dfn:distinctive_descendant_set}, let $p \in \mathcal{T}(R)$ be an arbitrary node. If $w \in \mathcal{S}_i(p)$, then either $w = p$ or there exists $a \in \mathrm{Children}(p) \setminus \{p\}$ with $l(a) < i$ and $w \in \mathrm{Descendants}(a)$.
\end{lem}
\begin{proof}
Let $w \in \mathcal{S}_i(p)$ be an arbitrary node satisfying $w \neq p$. Let $s$ be the node-to-root path of $w$. Since $\mathcal{S}_i(p) \subseteq \mathrm{Descendants}(p)$ we have $w \in \mathrm{Descendants}(p)$. Let $a \in \mathrm{Children}(p) \setminus \{p\}$ be a child on path $s$. If $l(a) \geq i$ then $a \in V_i(p)$. Note that $w \in \mathrm{Descendants}(a)$, therefore $w \notin \mathcal{S}_i(p)$, which is a contradiction. We conclude that $l(a) < i$.
\end{proof}
\begin{lem}
\label{lem:child_set_equivalence}
Let $R$ be a finite subset of a metric space $(X,d)$ and let $\mathcal{T}(R)$ be a compressed cover tree on $R$. Let $R_i \subseteq C_i$, where $C_i$ is the $i$th cover set of $\mathcal{T}(R)$. Let $\mathcal{C}(R_i) = \{a \in \mathrm{Children}(p) \text{ for some }p \in R_i \mid l(a) \geq i-1 \}$. Then
$$\bigcup_{p \in \mathcal{C}(R_i)}\mathcal{S}_{i-1}(p, \mathcal{T}(R)) = \bigcup_{p \in R_i}\mathcal{S}_{i}(p, \mathcal{T}(R))$$
\end{lem}
\begin{proof}
Let $a \in \bigcup_{p \in \mathcal{C}(R_i)}\mathcal{S}_{i-1}(p, \mathcal{T}(R))$ be an arbitrary node, so that $a \in \mathcal{S}_{i-1}(v, \mathcal{T}(R))$ for some $v \in \mathcal{C}(R_i)$. Let $w \in R_i$ be the ancestor of $v$ that has the lowest level among all ancestors of $v$ located in $R_i$.
Such an ancestor always exists: by definition $v \in \mathcal{C}(R_i)$, so either $v \in R_i$ or the parent of $v$ belongs to $R_i$. Clearly $a \in \mathrm{Descendants}(w)$. Since $w$ was chosen to have the minimal level among all the ancestors of $v$ in $R_i$, we have $a \notin \bigcup_{u \in V_i(w)}\mathrm{Descendants}(u)$, therefore
$$a \in \mathcal{S}_i(w, \mathcal{T}(R)) \subseteq \bigcup_{p \in R_i}\mathcal{S}_i(p , \mathcal{T}(R)).$$
Conversely, assume that $a \in \bigcup_{p \in R_i}\mathcal{S}_{i}(p, \mathcal{T}(R))$, so that $a \in \mathcal{S}_{i}(w, \mathcal{T}(R))$ for some $w \in R_i$.
Assume first that $w$ has no children at level $i-1$. Then trivially $V_i(w) = V_{i-1}(w)$ and therefore
$a \in \mathcal{S}_{i-1}(w, \mathcal{T}(R)) \subseteq \bigcup_{p \in \mathcal{C}(R_i)}\mathcal{S}_{i-1}(p ,\mathcal{T}(R))$. Assume now that $w$ has children at level $i-1$. If $a \in \mathrm{Descendants}(b)$ for some child $b$ of $w$ at level $i-1$, then since $V_{i-1}(b) = \emptyset$ we have $a \in \mathcal{S}_{i-1}(b, \mathcal{T}(R)) \subseteq \bigcup_{p \in \mathcal{C}(R_i)}\mathcal{S}_{i-1}(p ,\mathcal{T}(R))$, where $b \in \mathcal{C}(R_i)$ because $l(b) = i-1$. If $a \notin \mathrm{Descendants}(b)$ for every such child $b$, then the identity $\mathcal{S}_{i-1}(w,\mathcal{T}(R)) = \mathcal{S}_i(w,\mathcal{T}(R)) \setminus \bigcup \{\mathrm{Descendants}(b) \mid b \in \mathrm{Children}(w),\; l(b) = i-1\}$ yields $a \in \mathcal{S}_{i-1}(w, \mathcal{T}(R)) \subseteq \bigcup_{p \in \mathcal{C}(R_i)}\mathcal{S}_{i-1}(p, \mathcal{T}(R))$. It follows that
$$\bigcup_{p \in R_i}\mathcal{S}_{i}(p, \mathcal{T}(R)) \subseteq \bigcup_{p \in \mathcal{C}(R_i)}\mathcal{S}_{i-1}(p, \mathcal{T}(R)). $$
\end{proof}
\begin{lem}[$\beta$-point]
\label{lem:beta_point}
In the conditions of Definition~\ref{dfn:lambda-point}, let $C$ be a subset of the cover set $C_i$ such that $\cup_{p \in C}\mathcal{S}_i(p, \mathcal{T}(R))$ contains all $k$ nearest neighbors of $q$. Let $\lambda = \lambda_k(q,C)$. Then there is a point $\beta\in R$ among the $k$ nearest neighbors of $q$ such that $d(q,\lambda) \leq d(q,\beta) + 2^{i+1}$.
\end{lem}
\begin{proof}
Let us first show that there exists a node $\beta$ that is among the $k$ nearest neighbors of $q$ and satisfies
$$\beta \in \bigcup_{p \in C}\mathcal{S}_i(p, \mathcal{T}(R)) \setminus \bigcup_{p \in N(q, \lambda) \setminus \{\lambda\} }\mathcal{S}_i(p, \mathcal{T}(R)).$$
By using Lemma \ref{lem:sum} and Definition \ref{dfn:lambda-point} we obtain:
$$ | \bigcup_{p \in N(q, \lambda) \setminus \{\lambda\} }\mathcal{S}_i(p, \mathcal{T}(R)) |= \sum_{p \in N(q, \lambda) \setminus \{\lambda\} }| \mathcal{S}_i(p, \mathcal{T}(R)) | < k.$$
Since $\cup_{p \in C}\mathcal{S}_i(p, \mathcal{T}(R))$ contains all $k$ nearest neighbors of $q$, such a $\beta$ exists.
Let us now show that such $\beta$ satisfies $d(q,\lambda) \leq d(q,\beta) + 2^{i+1}$.
Let $\gamma \in (C \setminus N(q;\lambda)) \cup \{\lambda \}$ be an ancestor of $\beta$. Since $\gamma \notin N(q;\lambda) \setminus \{\lambda\}$ we have $d(\gamma, q) \geq d(q, \lambda)$. By the triangle inequality $ d(q, \gamma) \leq d(q,\beta) + d(\gamma ,\beta) $, and by Lemma \ref{lem:compressed_cover_tree_descendant_bound} we have $d(\gamma, \beta) \leq 2^{i+1}$. It follows that:
$$d(q,\lambda) \leq d(q, \gamma) \leq d(q,\beta) + d(\gamma ,\beta) \leq d(q,\beta) +2^{i+1}$$
Hence $\beta$ is the desired point among the $k$ nearest neighbors of $q$ satisfying the required condition $d(q,\lambda) \leq d(q,\beta) + 2^{i+1}$.
\end{proof}
\section{Corrected parameterized complexity for exact all $k$-nearest neighbor search}
\label{sec:cover_tree_knn}
In Counterexample \ref{cexa:original_all_nearest_neighbors_algorithm} it was shown that the proof of \cite[Theorem~5]{beygelzimer2006cover} contained mistakes.
The new results, Theorems \ref{thm:cover_tree_knn_correct} and~\ref{thm:cover_tree_knn_time}, will not only correct but also extend the run-time bound for $k$-nearest neighbor search of \cite[Theorem~5]{beygelzimer2006cover} from $k=1$ to any $k\geq 1$. Note that line 7 of Algorithm \ref{alg:cover_tree_k-nearest} uses a different pruning rule from line 5 of Algorithm \ref{alg:cover_tree_k-nearest_original}: the distance bound $d(p,Q) + 2^{i}$ is replaced by $d(q,\lambda) + 2^{i+1}$, so that no $k$-nearest neighbor of the point $q$ is removed during the traversal of the algorithm.
By Lemma \ref{lem:distinctive_descendants_precompute} we can precompute the sizes of the distinctive descendant sets $|\mathcal{S}_i(p, \mathcal{T}(R))|$ faster than building a compressed cover tree.
Hence we can retrieve the size $|\mathcal{S}_i(p, \mathcal{T}(R))|$ in $O(1)$ time for any $p \in R$ and $i \in H(\mathcal{T}(R))$.
\begin{algorithm}
\caption{Updated $k$-nearest neighbor search by using a compressed cover tree, see Theorems~\ref{thm:cover_tree_knn_correct}
and~\ref{thm:cover_tree_knn_time}.}
\label{alg:cover_tree_k-nearest}
\begin{algorithmic}[1]
\STATE \textbf{Input} : compressed cover tree $\mathcal{T}(R)$, a query point $q\in X$, positive integer $ k \in \mathbb{Z}_{+} $
\STATE Set $i \leftarrow l_{\max}(\mathcal{T}(R))$
\STATE Let $r$ be the root node of $\mathcal{T}(R)$. Set $R_{i}=\{r\}$.
\WHILE{$i > l_{\min}$} \label{line:knns:loop_begin}
\STATE Assign $\mathcal{C}(R_i) \leftarrow \{a \in \mathrm{Children}(p) \text{ for some }p \in R_i \mid l(a) \geq i-1 \}$ \\ \COMMENT{Recall that $\mathrm{Children}(p)$ contains node $p$ } \label{line:knns:dfn_C}
\STATE Compute $\lambda = \lambda_k(q,\mathcal{C}(R_i))$ from Definition \ref{dfn:lambda-point} by running Algorithm \ref{alg:lambda} \label{line:knns:dfnLambda}
\STATE Find $R_{i-1} = \{r \in \mathcal{C}(R_i) \mid d(q,r) \leq d(q,\lambda) + 2^{i+1}\}$ \label{line:knns:dfnRi}
\STATE Set $j \leftarrow \max_{ a \in R_{i-1}} \mathrm{Next}(a,i-1,\mathcal{T}(R))$
\COMMENT{If such $j$ is undefined, we set $j = l_{\min}$} \label{line:knns:dfnindexj}
\STATE Set $R_j = R_{i-1}$ and $i = j$
\ENDWHILE \label{line:knns:loop_end}
\STATE Compute $k$-nearest neighbors of query point $q$ from set $R_{l_{\min}}$ and \textbf{output} them as array.
\label{line:knns:final_line}
\end{algorithmic}
\end{algorithm}
\begin{figure}
\centering
\includegraphics[scale = 0.75]{images/PngDependancies/dependancy2.png}
\caption{Dependency diagram of Section \ref{sec:cover_tree_knn}}
\label{fig:dependancyDiagram2}
\end{figure}
\begin{exa}[Simulated run of Algorithm \ref{alg:cover_tree_k-nearest}]
\label{exa:simulatedRun}
Let $R$ and $\mathcal{T}(R)$ be as in Example \ref{exa:cover_tree_big}. Let $q = 0$ and $k = 5$. Figures \ref{fig:iteration3goodexample}, \ref{fig:iteration2goodexample}, \ref{fig:iteration1goodexample} and \ref{fig:iteration0goodexample} illustrate a simulated run of Algorithm \ref{alg:cover_tree_k-nearest} on the input $(\mathcal{T}(R), q, k)$. Recall that $l_{\max} = 2$ and $l_{\min} = -1$. During iteration $i$ of Algorithm \ref{alg:cover_tree_k-nearest} we maintain the following coloring: points in $R_i$ are colored orange; points of $\mathcal{C}(R_i)$ (line 5) that are not contained in $R_i$ are colored yellow; the $\lambda$-point of line \ref{line:knns:dfnLambda} is colored purple; all nodes that were present in $\mathcal{C}(R_i)$ but are not included in $R_{i-1}$ are colored red. Finally, all points selected as $k$-nearest neighbors of $q$ are colored green in the final iteration. Nodes that have not yet been visited, or that will never be visited, are colored white. Consider the following steps:
\smallskip
\textbf{Iteration} $i = 2$: Figure \ref{fig:iteration3goodexample} illustrates iteration $i = 2$ of Algorithm \ref{alg:cover_tree_k-nearest}. In line \ref{line:knns:dfn_C} we find
$\mathcal{C}(R_2) = \{4,8,12\}$. Since node $4$ minimizes the distance $d(\mathcal{C}(R_2),0)$ and the distinctive descendant set $\mathcal{S}_2(4, \mathcal{T}(R))$ consists of 7 elements, we get $\lambda = 4$.
In line \ref{line:knns:dfnRi} we find $R_{1} = \{r \in \mathcal{C}(R_2) \mid d(0,r) \leq d(q,\lambda) + 2^{3} = 12\} = \{4,8,12\}$.
\smallskip
\textbf{Iteration} $i = 1$: Figure \ref{fig:iteration2goodexample} illustrates iteration $i = 1$ of Algorithm \ref{alg:cover_tree_k-nearest}. In line \ref{line:knns:dfn_C} we find
$\mathcal{C}(R_1) = \{2,4,6,8,10,12,14\}.$ We have $|\mathcal{S}_1(2, \mathcal{T}(R))| = 3$, $|\mathcal{S}_1(4, \mathcal{T}(R))| = 1$ and $|\mathcal{S}_1(6, \mathcal{T}(R))| = 3$, and $6$ is the node with the smallest distance to $0$ satisfying $\sum_{p \in N(0; 6) = \{2,4,6\}} | \mathcal{S}_1(p, \mathcal{T}(R))| \geq 5 = k.$ It follows that $\lambda = 6$. In line \ref{line:knns:dfnRi} we find $R_{0} = \{r \in \mathcal{C}(R_1) \mid d(0,r) \leq d(q,\lambda) + 2^{2} = 10\} = \{2,4,6,8,10\}$.
\smallskip
\textbf{Iteration} $i = 0$: Figure \ref{fig:iteration1goodexample} illustrates iteration $i = 0$ of Algorithm \ref{alg:cover_tree_k-nearest}. In line \ref{line:knns:dfn_C} we find $\mathcal{C}(R_0) = \{1,2,3,4,5,6,7,8,9,10,11\}.$ We note that $|\mathcal{S}_0(p, \mathcal{T}(R))| = 1$ for all $p \in \mathcal{C}(R_0)$. Thus $5$ is the first node to satisfy
$\sum_{p \in N(0; 5) = \{1,2,3,4,5\}} | \mathcal{S}_0(p, \mathcal{T}(R))| \geq 5 = k.$ It follows that $\lambda = 5$.
In line \ref{line:knns:dfnRi} we find $R_{-1} = \{r \in \mathcal{C}(R_0) \mid d(0,r) \leq d(q,\lambda) + 2^{1} = 7\} = \{1,2,3,4,5,6,7\}$.
\smallskip
\textbf{Final selection}: Figure \ref{fig:iteration0goodexample} illustrates the final iteration of Algorithm \ref{alg:cover_tree_k-nearest}. We simply select the $k$ shortest distances $d(q,p)$ for $p \in R_{-1}$ to obtain the final output $\{1,2,3,4,5\}$.
\end{exa}
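The final selection step of the example can be reproduced with a few lines of Python; \texttt{heapq.nsmallest} performs the same heap-based $k$-selection used in Algorithm \ref{alg:lambda}.

```python
import heapq

# Final selection of Example exa:simulatedRun: pick the k = 5 points of
# R_{-1} = {1,...,7} closest to the query q = 0 under the metric |p - q|.
q, k = 0, 5
R_final = [1, 2, 3, 4, 5, 6, 7]
neighbors = sorted(heapq.nsmallest(k, R_final, key=lambda p: abs(p - q)))
# neighbors == [1, 2, 3, 4, 5]
```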
\begin{figure}[H]
\centering
\input images/CoverTreeLongExample/good_tree_example_1.tex
\caption{Iteration $i = 2$ of simulation in Example \ref{exa:simulatedRun} of Algorithm \ref{alg:cover_tree_k-nearest} }
\label{fig:iteration3goodexample}
\end{figure}
\begin{figure}[H]
\centering
\input images/CoverTreeLongExample/good_tree_example_2.tex
\caption{Iteration $i = 1$ of simulation in Example \ref{exa:simulatedRun} of Algorithm \ref{alg:cover_tree_k-nearest} }
\label{fig:iteration2goodexample}
\end{figure}
\begin{figure}[H]
\centering
\input images/CoverTreeLongExample/good_tree_example_3.tex
\caption{Iteration $i = 0$ of simulation in Example \ref{exa:simulatedRun} of Algorithm \ref{alg:cover_tree_k-nearest} }
\label{fig:iteration1goodexample}
\end{figure}
\begin{figure}[H]
\centering
\input images/CoverTreeLongExample/good_tree_example_4.tex
\caption{Final iteration $i = -1$ of simulation in Example \ref{exa:simulatedRun} of Algorithm \ref{alg:cover_tree_k-nearest} }
\label{fig:iteration0goodexample}
\end{figure}
Note that the candidate set $\bigcup_{p \in R_i}\mathcal{S}_i(p, \mathcal{T}(R))$ shrinks as $i$ decreases, with $\bigcup_{p \in R_{l_{\max}}}\mathcal{S}_{l_{\max}}(p, \mathcal{T}(R)) = R$ and $\bigcup_{p \in R_{l_{\min}}}\mathcal{S}_{l_{\min}}(p, \mathcal{T}(R)) = R_{l_{\min}}$.
\begin{lem}[Real $k$-nearest neighbors are contained in candidate set for all $i$]
\label{lem:cover_tree_knn_correct_lem}
Let $R$ be a finite subset of an ambient metric space $(X,d)$ and let $k \in \mathbb{Z} \cap [1,\infty)$ be a parameter. Let $\mathcal{T}(R)$ be a compressed cover tree of $R$. Assume that $|R| \geq k$. Then for any iteration $i \in H(\mathcal{T}(R))$ of lines
\ref{line:knns:loop_begin}-\ref{line:knns:loop_end} of Algorithm~\ref{alg:cover_tree_k-nearest} the candidate set $\bigcup_{p \in R_i}\mathcal{S}_i(p, \mathcal{T}(R))$ contains all $k$-nearest neighbors of $q$.
\end{lem}
\begin{proof}
Since $R_{l_{\max}} = \{r\}$, where $r$ is the root of $\mathcal{T}(R)$, we have $\mathcal{S}_{l_{\max}}(r,\mathcal{T}(R)) = R$, and therefore every $k$-nearest neighbor of $q$ is contained in the initial candidate set. Suppose, for contradiction, that $i$ is the largest index for which some point among the $k$ nearest neighbors of $q$ does not belong to $\bigcup_{p \in R_{i-1}}\mathcal{S}_{i-1}(p, \mathcal{T}(R))$. Denote such a point by $\beta$; then:
$$\beta \in \bigcup_{p \in R_i}\mathcal{S}_i(p, \mathcal{T}(R)) \setminus \bigcup_{p \in R_{i-1}}\mathcal{S}_{i-1}(p, \mathcal{T}(R)).$$
By Lemma \ref{lem:child_set_equivalence} we have
\begin{ceqn}
\begin{equation}
\label{eqa:neighborsContained}
\bigcup_{p \in \mathcal{C}(R_i)}\mathcal{S}_{i-1}(p, \mathcal{T}(R)) = \bigcup_{p \in R_i}\mathcal{S}_{i}(p, \mathcal{T}(R))
\end{equation}
\end{ceqn}
Let $\lambda$ be as in line \ref{line:knns:dfnLambda} of Algorithm \ref{alg:cover_tree_k-nearest}. By Equation (\ref{eqa:neighborsContained}) we have $|\bigcup_{p \in \mathcal{C}(R_i)}\mathcal{S}_{i-1}(p, \mathcal{T}(R))| \geq k$, therefore by Definition \ref{dfn:lambda-point} such a $\lambda$ exists. Since $\beta \in \bigcup_{p \in \mathcal{C}(R_i)}\mathcal{S}_{i-1}(p, \mathcal{T}(R))$, there exists $\alpha \in \mathcal{C}(R_i)$ satisfying $\beta \in \mathcal{S}_{i-1}(\alpha, \mathcal{T}(R))$. By assumption $\alpha \notin R_{i-1}$. By line \ref{line:knns:dfnRi} of the algorithm we have
\begin{ceqn}
\begin{equation}
\label{eqa:neighborsContained2}
d(\alpha, q) > d(q, \lambda) + 2^{i+1}.
\end{equation}
\end{ceqn}
Let $\theta$ be an arbitrary point in the set $\bigcup_{p \in N(q;\lambda)}\mathcal{S}_{i-1}(p, \mathcal{T}(R))$. Then $\theta \in \mathcal{S}_{i-1}(\gamma, \mathcal{T}(R))$ for some $\gamma \in N(q;\lambda)$. By Lemma \ref{lem:distinctive_descendant_child_level}, either $\theta = \gamma$ or $\theta \in \mathrm{Descendants}(a)$ for some $a \in \mathrm{Children}(\gamma) \setminus \{\gamma\}$ with $l(a) < i$.
If $\theta = \gamma$, then trivially $d(\gamma, \theta) \leq 2^{i}$. Otherwise $\theta$ is a descendant of a child $a$ of node $\gamma$ on level $i-1$ or below, so by Lemma \ref{lem:compressed_cover_tree_descendant_bound} we again have $d(\gamma, \theta) \leq 2^{i}$. By Definition \ref{dfn:lambda-point}, since $\gamma \in N(q;\lambda)$ we have $d(q,\gamma) \leq d(q,\lambda)$. By (\ref{eqa:neighborsContained2}) and the triangle inequality we obtain:
\begin{ceqn}
\begin{equation}
\label{eqa:neighborsContained3}
d(q,\theta) \leq d(q, \gamma) + d(\gamma,\theta) \leq d(q,\lambda) + 2^{i} < d(\alpha,q) - 2^{i}
\end{equation}
\end{ceqn}
On the other hand $\beta$ is a descendant of $\alpha$ thus we can estimate:
\begin{ceqn}
\begin{equation}
\label{eqa:neighborsContained4}
d(q,\beta) \geq d(q,\alpha) - d(\alpha,\beta) \geq d(\alpha,q) - 2^{i}
\end{equation}
\end{ceqn}
By combining Inequality (\ref{eqa:neighborsContained3}) with Inequality (\ref{eqa:neighborsContained4}) we obtain $d(q,\theta) < d(q,\beta)$. Since $\theta$ was an arbitrary point of $\bigcup_{p \in N(q;\lambda)}\mathcal{S}_{i-1}(p, \mathcal{T}(R))$, which contains at least $k$ points, $\beta$ cannot be among the $k$ nearest neighbors of $q$, a contradiction.
\end{proof}
\begin{thm}[correctness of Algorithm~\ref{alg:cover_tree_k-nearest}]
\label{thm:cover_tree_knn_correct}
Algorithm~\ref{alg:cover_tree_k-nearest} correctly finds all $k$ nearest neighbors of query point $q$ within reference set $R$.
\end{thm}
\begin{proof}
The claim follows directly from Lemma \ref{lem:cover_tree_knn_correct_lem} by noting that at the final level
$i = l_{\min}$ the nodes $p \in R_{l_{\min}}$ have no children. Therefore $\bigcup_{p \in R_{l_{\min}}}\mathcal{S}_{l_{\min}}(p, \mathcal{T}(R)) = R_{l_{\min}}$, so all the $k$ nearest neighbors of $q$ are contained in the set $R_{l_{\min}}$.
\end{proof}
Theorem \ref{thm:cover_tree_knn_time} estimates the complexity as the number of iterations multiplied by the maximal size of the reference subsets $R_i\subseteq R$ in Algorithm~\ref{alg:cover_tree_k-nearest}.
Example \ref{exa:simulatedRun} shows how $R_i$ varies in size during the main iteration phase (lines \ref{line:knns:loop_begin}-\ref{line:knns:loop_end}) of Algorithm~\ref{alg:cover_tree_k-nearest}.
The size of $R_i$ depends on the distance $d(q, \lambda)$ for $\lambda$ in Definition \ref{dfn:lambda-point}.
If $d(q, \lambda) \geq 2^{i}$, we use the expansion constant $c(R \cup \{q\})$ to estimate the size of $R_i$:
since $\bar{B}(q, \frac{d(q,\lambda)}{2})$ contains at most $k$ points, the definition of the expansion constant implies that $|R_i|\leq(c(R \cup \{q\}))^3 \cdot k$.
If $d(q, \lambda)$ is small, Lemma \ref{lem:packing} implies that $|R_i|\leq (c_m(R))^6$.
\begin{thm}[complexity for exact all $k$-nearest neighbors]
\label{thm:cover_tree_knn_time}
Let $R$ be a finite reference set in a metric space $(X,d)$.
Let $q\in X$ be a query point, $c(R \cup \{q\})$ be the expansion constant of $R \cup \{q\}$ and $c_m(R)$ be the minimized expansion constant from Definition \ref{dfn:expansion_constant}.
Given a compressed cover tree $\mathcal{T}(R)$ with a height $|H(\mathcal{T}(R))|$ from Definition~\ref{dfn:depth}, Algorithm~\ref{alg:cover_tree_k-nearest} finds all $k$ nearest neighbors of $q$ in time $O\Big ((c_m(R))^{4} \cdot \max\{(c(R \cup \{q\}))^3 \cdot k, (c_m(R))^6 \} \cdot \log (k) \cdot |H(\mathcal{T}(R))| \Big ).$
\hfill $\blacksquare$
\end{thm}
\begin{proof}
We note that the number of iterations of lines 4-10 is bounded by the height $|H(\mathcal{T}(R))|$. The total number of children encountered in line \ref{line:knns:dfn_C} during a single iteration (lines 4-10) is at most $(c_m(R))^4 \cdot \max_i|R_i|$ by Lemma \ref{lem:compressed_cover_tree_width_bound}.
From Lemma \ref{lem:time_lambdapoint} we obtain that line \ref{line:knns:dfnLambda}, which launches Algorithm \ref{alg:lambda}, takes at most
$|\mathcal{C}(R_i)| \cdot \log(k) \leq (c_m(R))^4 \cdot \max_i |R_i| \cdot \log(k) $ time. Line \ref{line:knns:dfnRi} never does more work than line \ref{line:knns:dfn_C}, because in the worst case $R_i$ is copied to $R_{i-1}$ unchanged. Line \ref{line:knns:dfnindexj} handles $|R_{i-1}|$ nodes; since we can keep track of the value of $\mathrm{Next}(a,i,\mathcal{T}(R))$ by updating it when necessary in line \ref{line:knns:dfn_C}, we can retrieve it in $O(1)$ time. Therefore the maximal run-time of line \ref{line:knns:dfnindexj} is $O(\max_i|R_i|)$. The final line \ref{line:knns:final_line} picks $k$ elements from the set $R_{l_{\min}}$, which can be done using a binary heap similarly to Algorithm \ref{alg:lambda}, giving a time complexity of $O(\log(k) \cdot \max_i|R_i|)$. Consequently, the running time is bounded by
\begin{ceqn}
\begin{equation} \label{eqa:thm2impeqa}
\centering
O((c_m(R))^4 \cdot \log(k) \cdot \max_i|R_i| \cdot |H(\mathcal{T}(R))|) .
\end{equation}
\end{ceqn}
To finish the proof we show that $\max_i|R_i| \leq \max \{ (c_m(R))^6, k \cdot (c(R \cup \{q\}))^3 \}$.
Consider any $R_{i-1}$ constructed during the $i$th iteration and let $d = d(q,\lambda)$. We have
\begin{ceqn}
\begin{align}
\label{eqa:QBoundOne}
R_{i-1} &= \{r \in \mathcal{C}(R_i) \mid d(q,r) \leq d + 2^{i+1}\} \\
&= B(q,d+2^{i+1}) \cap \mathcal{C}(R_i) \\
&\subseteq B(q,d+2^{i+1}) \cap C_{i-1}
\label{eqa:QboundTwo}
\end{align}
\end{ceqn}
\textbf{We assume first that $d > 2^{i+1}$}. Note that by Lemma \ref{lem:cover_tree_knn_correct_lem} the set $\cup_{p \in R_i}\mathcal{S}_i(p, \mathcal{T}(R))$ contains all $k$ nearest neighbors of $q$, and by Lemma \ref{lem:child_set_equivalence} $\cup_{p \in \mathcal{C}(R_i)}\mathcal{S}_{i-1}(p, \mathcal{T}(R)) = \cup_{p \in R_i}\mathcal{S}_{i}(p, \mathcal{T}(R))$. Therefore $\cup_{p \in \mathcal{C}(R_i)}\mathcal{S}_{i-1}(p, \mathcal{T}(R))$ contains all $k$ nearest neighbors of $q$. Using Lemma \ref{lem:beta_point} we find $\beta$ among the $k$ nearest neighbors of $q$ satisfying $d \leq d(q,\beta) + 2^{i}$. From the assumption it follows that $2^{i} \leq d(q,\beta)$. Combining this with Equation (\ref{eqa:QBoundOne}) and Definition \ref{dfn:expansion_constant}, we obtain:
\begin{ceqn}
\begin{align}
|R_{i-1}| \leq |B(q,2d(q,\beta) +2^{i+1})| &\leq |B(q,4d(q,\beta))| \leq (c(R \cup \{q\}))^3 \cdot |B(q,\frac{d(q,\beta)}{2})|
\end{align}
\end{ceqn}
We note that $\beta$ is among the $k$ nearest neighbors of $q$, so $|B(q,\frac{d(q,\beta)}{2})| \leq k$. It follows that $|R_{i-1}| \leq (c(R \cup \{q\}))^3 \cdot k$.
\textbf{Assume now that $d \leq 2^{i+1}$.} Using (\ref{eqa:QboundTwo}) we obtain:
$$R_{i-1} \subseteq B(q,2^{i+2}) \cap C_{i-1}.$$
From the cover tree condition we know that all points in $C_{i-1}$ are separated by at least $2^{i-1}$. We now apply Lemma \ref{lem:packing} with $r = 2^{i+2}$ and $\delta = 2^{i-1}$. Since $4\frac{r}{\delta} + 1 = 2^5 + 1 \leq 2^6$, we obtain $|R_{i-1}| \leq |B(q,2^{i+2}) \cap C_{i-1}| \leq (c_m(R))^6$.
By combining both cases we obtain $|R_{i-1}| \leq \max \{(c_m(R))^6, (c(R \cup \{q\}))^3\cdot k \}$. Substituting this bound into (\ref{eqa:thm2impeqa}) yields the final result.
\end{proof}
Theorem \ref{thm:cover_tree_knn_time} shows that we can find the $k$ nearest neighbors of a single point $q$ in near-linear time that depends on the expansion constant $c(R \cup \{q\})$ and the minimized expansion constant $c_m(R)$. If the ambient space $X = \mathbb{R}^d$ is Euclidean, it can be shown that $c_m(R) \leq 2^{d}$. However, no such bound exists for $c(R \cup \{q\})$, since $c(R \cup \{q\})$ depends on the distribution of points in the set $R \cup \{q\}$. If $d(q,R) > \mathrm{diam}(R)$, then $c(R \cup \{q\})$ can reach the order of $|R|$, leading to a worse time complexity than simple linear search. In general, the further the query point $q$ is from the set $R$, the less benefit is obtained from the compressed cover tree data structure.
\begin{cor}[solution to Problem \ref{pro:knn}]
\label{cor:cover_tree_knn_time}
In the notations of Theorem~\ref{thm:cover_tree_knn_time},
set $c = \max\limits_{q \in Q}c(R \cup \{q\})$.
Let $\Delta(R)$ be the aspect ratio from Definition \ref{dfn:radius+d_min}.
Algorithms~\ref{alg:cover_tree_k-nearest_construction_whole} and \ref{alg:cover_tree_k-nearest} solve Problem \ref{pro:knn} in time $O( c^{10} k \log(k) \cdot \log_2(\Delta(R)) \cdot (|Q| + |R|)).$
\hfill $\blacksquare$
\end{cor}
\begin{proof}
We estimate the complexity from Theorem \ref{thm:cover_tree_knn_time} by using the upper bounds $c_m(R)\leq c(R\cup\{q\})\leq c$ and $|H(\mathcal{T}(R))| \leq \log_2(\Delta(R))$ by Lemmas~\ref{lem:expansion_constant_property} and~\ref{lem:depth_bound}, respectively.
Then we multiply the result by the size $|Q|$ of the query set and add the complexity $O(c^8 \cdot \Delta(R) \cdot |R|)$ to build a compressed cover tree $\mathcal{T}(R)$ by Theorem \ref{thm:construction_time}.
\end{proof}
\section{Approximate $k$-nearest neighbor search by a compressed cover tree}
\label{sec:approxknearestneighbor}
\begin{figure}
\centering
\includegraphics[scale = 0.75]{images/PngDependancies/Dependancy3.png}
\caption{Dependency diagram of Section \ref{sec:approxknearestneighbor}}
\label{fig:dependancyDiagram3}
\end{figure}
The original navigating nets and cover trees were used in \cite[Theorem~2.2]{krauthgamer2004navigating} and \cite[Section~3.2]{beygelzimer2006cover} to solve the $(1+\epsilon)$-approximate nearest neighbor problem for $k=1$.
Theorem~\ref{thm:approximate_k_nearestneighbors} establishes a near-linear parameterized complexity for finding an approximate $k$-nearest neighbor set $\mathcal{P}$, as formalized in Definition \ref{dfn:ApproxKNearestNeighbor}.
\begin{dfn}[approximate $k$-nearest neighbor set $\mathcal{P}$]
\label{dfn:ApproxKNearestNeighbor}
Let $R$ be a finite reference set and let $Q$ be a finite query set of a metric space $(X,d)$.
Let $q \in Q \subseteq X$ be a query point, $k \geq 1$ be an integer and $\epsilon > 0$ be a real number.
Let $\mathcal{N}_k = \cup_{i=1}^k \mathrm{NN}_i(q)$ be the union of neighbor sets from Definition~\ref{dfn:kNearestNeighbor}.
A set $\mathcal{P} \subseteq R$ is called an \emph{approximate $k$-nearest neighbors set}, if $|\mathcal{P}| = k$ and there is an injection $f: \mathcal{P} \rightarrow \mathcal{N}_k$ satisfying $d(q, p) \leq (1+\epsilon) \cdot d(q,f(p)) $ for all $p \in \mathcal{P}$.
\hfill $\blacksquare$
\end{dfn}
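Definition \ref{dfn:ApproxKNearestNeighbor} can be checked mechanically: for the threshold condition $d(q,p) \leq (1+\epsilon)\cdot d(q,f(p))$, an injection exists if and only if the distance sequences of $\mathcal{P}$ and $\mathcal{N}_k$, both sorted in increasing order, satisfy the inequality position by position. The following Python sketch illustrates this; the function name and metric are ours, not the paper's.

```python
def is_approx_knn_set(q, P, R, k, eps, dist):
    """Check whether P is an approximate k-nearest neighbor set of q:
    |P| = k, P is a subset of R, and there is an injection f from P to the
    true k nearest neighbors N_k with d(q, p) <= (1+eps) * d(q, f(p)).
    For this threshold condition an injection exists iff the sorted
    distance sequences satisfy the inequality position by position."""
    if len(P) != k or not set(P) <= set(R):
        return False
    true_knn = sorted(R, key=lambda r: dist(q, r))[:k]
    pd = sorted(dist(q, p) for p in P)
    nd = sorted(dist(q, r) for r in true_knn)
    return all(dp <= (1 + eps) * dn for dp, dn in zip(pd, nd))

R = [0.0, 1.0, 2.0, 10.0]
d = lambda a, b: abs(a - b)
q = 0.1
# True 2-nearest neighbors of q are {0.0, 1.0}; the set {0.0, 2.0}
# is acceptable only when epsilon is large enough.
```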
We can modify Algorithm \ref{alg:cover_tree_k-nearest} so that it terminates as soon as the condition $\frac{2^{i+1}}{\epsilon} + 2^{i} \leq d(q,\lambda)$ is satisfied, checked between lines \ref{line:knns:dfnLambda} and \ref{line:knns:dfnRi}. This extra step is illustrated in Algorithm \ref{alg:cover_tree_k-nearest_approximate}.
\begin{algorithm}
\caption{Expansion for Algorithm \ref{alg:cover_tree_k-nearest} that can be inserted between lines \ref{line:knns:dfnLambda} and \ref{line:knns:dfnRi} to find approximate $k$-nearest neighbor of Definition \ref{dfn:ApproxKNearestNeighbor} }
\label{alg:cover_tree_k-nearest_approximate}
\begin{algorithmic}[1]
\STATE \textbf{Input}: a compressed cover tree $\mathcal{T}(R)$, a query point $q\in X$, a positive integer $k \in \mathbb{Z}_{+}$, an index $i \in \mathbb{Z}$, a subset $\mathcal{C}(R_i)$ of the cover set $C_{i-1}$ of $\mathcal{T}(R)$, and the $\lambda$-point from line \ref{line:knns:dfnLambda} of Algorithm \ref{alg:cover_tree_k-nearest}.
\IF {$\frac{2^{i+1}}{\epsilon} + 2^{i} \leq d(q, \lambda)$}\label{line:aknn:ifcondition}
\STATE Let $\mathcal{P} = \emptyset$.
\FOR {$p \in \mathcal{C}(R_i)$}
\IF {$d(p,q) < d(q,\lambda)$}
\STATE $\mathcal{P} = \mathcal{P} \cup \mathcal{S}_{i-1}(p,\mathcal{T}(R))$
\ENDIF
\ENDFOR
\STATE Fill $\mathcal{P}$ until it has $k$ points by adding any points from sets $\mathcal{S}_{i-1}(p,\mathcal{T}(R))$, where $d(p,q) = d(q, \lambda)$.
\STATE \textbf{return} $\mathcal{P}$.
\ENDIF
\end{algorithmic}
\end{algorithm}
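A minimal Python sketch of the inserted early-termination step, assuming $\lambda$ is the $k$-th nearest collected candidate and representing each descendant set $\mathcal{S}_{i-1}(p,\mathcal{T}(R))$ by a plain dictionary entry; the data layout is illustrative, not the paper's implementation.

```python
def approx_expansion(q, i, k, eps, candidates, children, dist):
    """Sketch of the early-termination block inserted into the k-nearest
    neighbor search.  `candidates` plays the role of C(R_i) and
    `children[p]` stands for the descendant set S_{i-1}(p, T(R)).
    Returns the approximate answer set P when the termination condition
    holds, or None when the main loop should continue."""
    # lambda-point: k-th smallest distance among the collected candidates
    d_lambda = sorted(dist(q, p) for p in candidates)[k - 1]
    if 2 ** (i + 1) / eps + 2 ** i > d_lambda:
        return None  # if-condition not met; keep iterating over levels
    P = []
    for p in candidates:
        if dist(q, p) < d_lambda:
            P.extend(children[p])        # all strictly closer descendants
    for p in candidates:
        if dist(q, p) == d_lambda:       # fill up to k points from ties
            for s in children[p]:
                if len(P) < k:
                    P.append(s)
    return P[:k]
```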
Lemma \ref{lem:approximate_knn_correctness} shows that Algorithm \ref{alg:cover_tree_k-nearest_approximate} correctly returns an approximate $k$-nearest neighbor set of Definition \ref{dfn:ApproxKNearestNeighbor}.
\begin{lem}[correctness of Algorithm \ref{alg:cover_tree_k-nearest_approximate}]
\label{lem:approximate_knn_correctness}
In the notations of Definition~\ref{dfn:ApproxKNearestNeighbor}, Algorithm \ref{alg:cover_tree_k-nearest} modified by inserting Algorithm~\ref{alg:cover_tree_k-nearest_approximate} between lines \ref{line:knns:dfnLambda} and \ref{line:knns:dfnRi} finds an approximate $k$-nearest neighbors set of any query point $q \in X$.
\hfill $\blacksquare$
\end{lem}
\begin{proof}
Note first that by Lemma \ref{lem:cover_tree_knn_correct_lem} set $\cup_{p \in R_i}\mathcal{S}_i(p, \mathcal{T}(R))$ contains all $k$-nearest neighbors of $q$ and by Lemma \ref{lem:child_set_equivalence} $\cup_{p \in \mathcal{C}(R_i)}\mathcal{S}_{i-1}(p, \mathcal{T}(R)) = \cup_{p \in R_i}\mathcal{S}_{i}(p, \mathcal{T}(R))$. Therefore $\cup_{p \in \mathcal{C}(R_i)}\mathcal{S}_{i-1}(p, \mathcal{T}(R))$ contains all $k$-nearest neighbors of $q$.
Assume first that the condition on line \ref{line:aknn:ifcondition} of Algorithm \ref{alg:cover_tree_k-nearest_approximate} is satisfied during some iteration $i \in H(\mathcal{T}(R))$ of Algorithm \ref{alg:cover_tree_k-nearest}. Let us denote
$$\mathcal{A} = \bigcup_{p \in \mathcal{C}(R_i)} \{\mathcal{S}_{i-1}(p) \mid d(p,q) < d(q,\lambda) \} \text{ and }
\mathcal{B} = \bigcup_{p \in \mathcal{C}(R_i)} \{\mathcal{S}_{i-1}(p) \mid d(p,q) = d(q,\lambda) \}.$$
By Algorithm \ref{alg:cover_tree_k-nearest_approximate}, the set $\mathcal{P}$ contains all points of $\mathcal{A}$, and the remaining points are filled from $\mathcal{B}$.
We now define $f: \mathcal{P} \rightarrow \mathcal{N}_k$ by mapping every point $p \in \mathcal{A} \cap \mathcal{P}$ to itself and then extending $f$ to an injective map on the whole set $\mathcal{P}$. The claim holds trivially for all points $p \in \mathcal{A} \cap \mathcal{P}$. Consider now a point $p \in \mathcal{P} \setminus \mathcal{A}$. Let $\gamma \in \mathcal{C}(R_i)$ be such that $p \in \mathcal{S}(\gamma, \mathcal{T}(R))$ and let $\psi \in \mathcal{C}(R_i)$ be such that $f(p) \in \mathcal{S}(\psi, \mathcal{T}(R))$. By the triangle inequality, Lemma \ref{lem:compressed_cover_tree_descendant_bound} and the fact that $p \in \mathcal{A} \cup \mathcal{B}$ we obtain:
\begin{ceqn}
\begin{equation}
\label{eqa:ANNCorrectness1}
d(q, p) \leq d(q, \gamma) + d(\gamma,p) \leq d(q, \lambda) + 2^{i}
\end{equation}
\end{ceqn}
On the other hand since $f(p) \notin \mathcal{A}$ we have
\begin{ceqn}
\begin{equation}
\label{eqa:ANNCorrectness2}
(1+\epsilon) \cdot d(q, f(p)) \geq (1+\epsilon) \cdot ( d(q, \psi) - d(\psi,f(p))) \geq (1+\epsilon) \cdot (d(q, \lambda) - 2^{i})
\end{equation}
\end{ceqn}
Note that the condition on line \ref{line:aknn:ifcondition} gives $\frac{2^{i+1}}{\epsilon} + 2^{i} \leq d(q, \lambda)$, equivalently $2^{i+1} + \epsilon \cdot 2^{i} \leq \epsilon \cdot d(q,\lambda)$. Therefore
\begin{ceqn}
\begin{equation}
\label{eqa:ANNCorrectness3}
(1+\epsilon) \cdot (d(q,\lambda) - 2^{i}) = d(q,\lambda) + \epsilon \cdot d(q,\lambda) - (1+\epsilon) \cdot 2^{i} \geq d(q,\lambda) + 2^{i+1} + \epsilon \cdot 2^{i} - (1+\epsilon) \cdot 2^{i} = d(q,\lambda) + 2^{i}
\end{equation}
\end{ceqn}
By combining Equations (\ref{eqa:ANNCorrectness1}) - (\ref{eqa:ANNCorrectness3}) we obtain $d(q, p) \leq (1+\epsilon) \cdot d(q,f(p)) $.
If the condition on line \ref{line:aknn:ifcondition} of Algorithm \ref{alg:cover_tree_k-nearest_approximate} is never satisfied, then the algorithm finds the exact $k$-nearest neighbors of the point $q$ by the end of its run, so the claim holds as well.
\end{proof}
\begin{thm}[complexity of modified Algorithm~\ref{alg:cover_tree_k-nearest}]
\label{thm:approximate_k_nearestneighbors}
In the notations of Definition \ref{dfn:ApproxKNearestNeighbor}, the complexity of modified Algorithm~\ref{alg:cover_tree_k-nearest} after
inserting Algorithm \ref{alg:cover_tree_k-nearest_approximate} between lines \ref{line:knns:dfnLambda} and \ref{line:knns:dfnRi} is
$O\Big( (c_m(R))^{8 + \lceil \log_2(2 + \frac{1}{\epsilon}) \rceil} \cdot \log(k) \cdot \log_2(\Delta(R)) + k \Big ).$
\hfill $\blacksquare$
\end{thm}
\begin{proof}
Note first that the extra running time of Algorithm \ref{alg:cover_tree_k-nearest_approximate} is $O(k)$.
Then, similarly to Theorem \ref{thm:cover_tree_knn_time}, one can show that the running time of the algorithm is bounded by
\begin{ceqn}
\begin{equation}
\label{eqa:boundapproximate}
O((c_m(R))^4 \cdot \log(k) \cdot \max_i|R_i| \cdot |H(\mathcal{T}(R))| + k)
\end{equation}
\end{ceqn}
Let us now bound the size of $R_i$. By line \ref{line:aknn:ifcondition} of Algorithm \ref{alg:cover_tree_k-nearest_approximate}, either the inserted block is executed and terminates the algorithm, or $\frac{2^{i+1}}{\epsilon} + 2^{i} > d(q, \lambda)$. To bound $|R_i|$ we may assume the latter. Similarly to Theorem \ref{thm:cover_tree_knn_time} we have:
\begin{ceqn}
\begin{align}
\label{eqa:QBoundOne1}
R_{i-1} &= \{p \in \mathcal{C}(R_i) \mid d(p,q) \leq d(q,\lambda) + 2^{i+1}\} \\
&= \bar{B}(q,d(q,\lambda)+2^{i+1}) \cap \mathcal{C}(R_i) \\
&\subseteq \bar{B}(q,d(q,\lambda)+2^{i+1}) \cap C_{i-1} \\
&\subseteq \bar{B}(q,2^{i+1}(\frac{3}{2} + \frac{1}{\epsilon})) \cap C_{i-1}
\label{eqa:QboundTwo1}
\end{align}
\end{ceqn}
Since the cover set $C_{i-1}$ is a $2^{i-1}$-sparse subset of the ambient metric space $X$, we can apply Lemma~\ref{lem:packing} with $t = 2^{i+1}(\frac{3}{2} + \frac{1}{\epsilon})$ and $\delta = 2^{i-1}$.
Since $4\frac{t}{\delta} + 1 = 2^4(\frac{3}{2} + \frac{1}{\epsilon}) + 1 \leq 2^4(2 + \frac{1}{\epsilon})$, we get $\max |R_i| \leq (c_m(R))^{4 + \lceil \log_2(2 + \frac{1}{\epsilon}) \rceil}$.
The final complexity is obtained by substituting the above upper bound on $\max_i|R_i|$ into (\ref{eqa:boundapproximate}).
\end{proof}
\begin{cor}[complexity for approximate $k$-nearest neighbors set $\mathcal{P}$]
\label{cor:approximate_k_nearestneighbors}
In the notations of Definition \ref{dfn:ApproxKNearestNeighbor}, an approximate $k$-nearest neighbors set is found for all $q \in Q$ in time
$O\Big( |Q| \cdot (c_m(R))^{8 + \lceil \log_2(2 + \frac{1}{\epsilon}) \rceil} \cdot \log(k) \cdot \log_2(\Delta(R)) + |Q| \cdot k \Big ).$
\hfill $\blacksquare$
\end{cor}
\begin{proof}
The corollary follows from Theorem \ref{thm:approximate_k_nearestneighbors} by running the modified algorithm once for each of the $|Q|$ points of the query set $Q$.
\end{proof}
\section{Conclusions}
\label{sec:Conclusions}
This paper resolved the complexity problems for the $k$-nearest neighbor search, which is used in many areas of Computer Science.
The motivations were the past challenges in the proofs of time complexities in \cite[Theorem~2.7]{krauthgamer2004navigating}, \cite[Theorem~5]{beygelzimer2006cover} and in other related problems \cite[Theorem~3.1]{ram2009linear}, \cite[Theorem~5.1]{march2010fast}.
Though \cite[Section~5.3]{curtin2015improving} pointed out some difficulties, no corrections were published.
The main results (Theorem~\ref{thm:cover_tree_knn_time} and Corollary~\ref{cor:cover_tree_knn_time}) fill the above gaps in the literature.
\medskip
First, Definition~\ref{dfn:kNearestNeighbor} and Problem~\ref{pro:knn} rigorously deal with a potential ambiguity of $k$-nearest neighbors at equal distances, which was not discussed in past work.
The main new data structure of a compressed cover tree in Definition~\ref{dfn:cover_tree_compressed} substantially simplifies the navigating nets \cite{krauthgamer2004navigating} and original cover trees \cite{beygelzimer2006cover} by avoiding any repetitions of given data points.
This compression has substantially clarified the construction and search Algorithms~\ref{alg:cover_tree_k-nearest_construction_whole} and~\ref{alg:cover_tree_k-nearest}.
\medskip
Second, the past approaches missed broad classes of challenging data, which are constructed in
Counterexamples~\ref{cexa:construction_algorithm_of_original_cover_tree} and~\ref{cexa:original_all_nearest_neighbors_algorithm}.
To estimate the worst-case time complexity for any reference set,
Definition~\ref{dfn:depth} introduced the height of a compressed cover tree, which has become a novel parameter in the proved complexities.
If the height, expansion constants and aspect ratio of a reference set $R$ are fixed, all time complexities are linear in the maximum size of $R$ and a query set $Q$ and near linear $O(k\log k)$ in the number $k$ of neighbors.
Third, the proofs for the $k$-nearest neighbor search are generic enough to hold in any metric space and were extended to the approximate $k$-nearest neighbor search in Section~\ref{sec:approxknearestneighbor}.
\medskip
The authors are grateful in advance to all reviewers for their valuable time and helpful comments.
This research was supported by the £3.5M EPSRC grant ‘Application-driven Topological Data Analysis’ (2018-2023, EP/R018472/1), the £10M Leverhulme Research Centre for Functional Materials Design and the last author’s Royal Academy of Engineering Fellowship ‘Data Science for Next Generation Engineering of Solid Crystalline Materials’ (IF2122/186).
\bibliographystyle{plainurl}
\section{Introduction}
Named Entity Recognition (NER) is the task of locating and classifying named entities in a given piece of text into pre-defined entity categories such as Person (PER), Location (LOC), Organisation (ORG), etc. NER is considered an essential preprocessing step that can benefit many downstream applications in Natural Language Processing (NLP), such as Machine Translation \cite{babych-hartley-2003-improving}, Information Retrieval \citep{articleir_ner} and Text Classification \citep{article_tc_ner}.
Over the past few years, Deep Learning has been the key to solving not only NER but many other NLP applications \citep{le-etal-2018-deep, kouris-etal-2019-abstractive}.
On the downside, these models also demand a lot of well-structured and annotated data for their training. This restricts the applicability of trained models to a real-world scenario as the model's behavior and predictions become very specific to the type of data they are trained on. To conquer this, many studies have recently evolved that focus on building models that can incorporate world knowledge for enhanced modeling and inference on the task at hand, such as \citet{qizhen2020} for NER, \citet{denk-peleteiro-ramallo-2020-contextual} for Representation Learning and \citet{7451560} for Dependency Parsing, etc.
Although recent works in the literature have successfully incorporated world knowledge for Sequence Labeling \citep{qizhen2020}, they come with certain limitations, which we discuss ahead. First, just as words in a language can be polysemous \citep{The_Structure_of_Polysemy}, entities and relations in a knowledge graph can be polysemous too \citep{xiao2015}. We noticed that previously proposed approaches for introducing Knowledge Graph Embeddings (KGEs) have primarily used pre-trained static embeddings obtained from extensive sources such as Wikidata. KGEs in these models fundamentally rely on the assumption that the tail entity is a linear transformation of the head entity and the relation, making them non-contextualized in nature. Second, we noticed that prior work only considered the head-entity and relation embeddings to obtain the knowledge graph embedding and ignored the tail entity of the triplet completely. Dropping the tail entity entirely could lead to a potential loss of information: in addition to carrying information about the triplet itself, the head-relation-tail structure also helps in understanding and extracting implicit relationships between entities across triplets. Therefore, the model must know where the head and the relation are leaning towards to achieve accurate embedding estimation. The final limitation lies in applying a recurrent architecture to obtain KGEs, which introduces time inefficiency and a high computation cost \citep{annervaz2018learning}.
To further understand the importance and our motivation behind using world knowledge for NER, consider a couple of examples mentioned below.\\
\noindent A: \emph{Google announced the launch of Maps.}\\
B: \emph{Pichai announced the launch of Maps.}\\
In the training phase, the significant contextual overlap between sentences can confuse the model in labeling the named entities correctly. The model is likely to memorize sentence templates rather than learn to predict correct entity labels, leading to misclassifications. Also, suppose we trained our NER model on an out-of-domain NER dataset. In that case, the model would have hardly received any information from the training set that "Google" is an organization and "Pichai" indicates a Person. \\
\noindent A: \emph{Berlin died in Season 2 of Money Heist.}\\
B: \emph{Messi saved Barcelona with an equalizer.}\\
In the first sentence above, "Berlin" refers to a person, whereas in the second sentence, "Barcelona" refers to an organization. There is a reasonable chance of misclassification in these two sentences, because the training data is unlikely to cover such nuanced differences among all the possible entity tags for a named entity.
From the examples mentioned above, we can infer that for the model to be aware of such subtle differences, we should provide it with the ability to look up relevant details from a reliable source. Therefore, world knowledge can open the gates for the model to access such information and learn details about entities that it might never come across in the training data. In addition to this, with access to structured world knowledge, far better applicability to a real-world setting can be expected.
Setting these points as our objective, in this work, we propose Knowledge Aware Representational Learning Network for Named Entity Recognition using Transformer (KARL-Trans-NER), which
\begin{enumerate}
\item Encodes the entities and relations existing in a knowledge base using a self-attention network to obtain Knowledge Graph Embeddings (KGEs). The embeddings thus obtained are dynamic and fully contextualized in nature.
\item Takes the encoded contextualized representations for entities and relations and generates a knowledge-aware representation for words. The representation obtained, which we also call "Global Representation" for words, can be augmented with the other underlying features to boost the NER model's performance.
\item Generates sentence embeddings using BERT by fusing task-specific information through NER tag embeddings.
\item And lastly, relies on a Transformer as its context encoder incorporating direction-aware, distance-aware, and un-scaled attention for enhanced encoder representation learning.
\end{enumerate}
To verify the effectiveness of our proposed model, we conduct our experiments on three publicly available datasets for NER. These are CoNLL 2003 \citep{DBLP:journals/corr/cs-CL-0306050}, CoNLL++ \citep{wang2019crossweigh} and OntoNotes v5 \citep{pradhan-etal-2013-towards}. Experimental results show that the global embeddings generated for every word using KARL, when used for feature augmentation, can result in significant performance gains of over 0.35-0.5 \emph{F}\textsubscript{1} on all the three NER datasets. Also, to validate the model's generalizability and applicability in a real-world setting, we generate the model's prediction on random texts taken from the web. Results suggest that incorporating world knowledge enables the model to make accurate predictions for every entity in the sentence.
\section{Related work}
The research community in NER moved from approaches using character and word representations \citep{yao2015, zhou2017, kuru2016} to sentence-level contextual representations \citep{yang2017, zhang2018}, and recently to document-level representations as proposed by \citet{qian2018} and \citet{akbik2018}. Expanding the scope of embeddings from character and word level to document level has shown significant improvements in the results for many NLP tasks, including NER \citep{luo2019hierarchical}. To expand the scope further, researchers have explored external knowledge bases to learn facts existing in the universe that may not be present in the training data \citep{annervaz2018learning, qizhen2020}.
Incorporating information present in Knowledge Graph is an emerging research topic in NLP.
While some methods focus on graph structure encoding \citep{lin-etal-2015-modeling, das-etal-2017-chains}, others focus on learning entity-relation embeddings \citep{wang2020coke, jiang-etal-2019-adaptive}.
\citet{zhong2015} proposed an alignment model for jointly embedding a knowledge base and a text corpus that achieved better or comparable performance on four NLP tasks: link prediction, triplet classification, relational fact extraction, and analogical reasoning.
\citet{xiao2015} proposed a generative embedding model, TransG, which can discover the latent semantics of a relation and leverage a mixture of related components for generating embedding. They also reported substantial improvements over the state-of-the-art baselines on the task of link prediction.
\citet{10.1145/3038912.3052675} presented a neural network to answer simple questions over large-scale knowledge graphs using a hierarchical word and character-level question encoder.
\citet{annervaz2018learning} leveraged world knowledge in training task-specific models and proposes a novel convolution-based architecture to reduce the attention space over entities and relations. It outperformed other models on text classification and natural language inference tasks.
Despite producing state-of-the-art results in many NLP tasks, Knowledge Graphs are relatively unexplored for NER. \citet{qizhen2020} introduced a Knowledge-Graph Augmented Word Representation (KAWR). The proposed model encoded the prior knowledge of entities from an external knowledge base into the representation. Though KAWR performed better than its benchmark BERT \citep{bert}, the model underperformed compared to the SOTA models for NER.
\section{Knowledge Augmented Representation Learning Network}
In this section, we describe the end-to-end proposed model for the task of NER (KARL-Trans-NER).
\subsection{Knowledge Graph Embedding Model}
World Knowledge is represented in the form of fact triplets in a Knowledge Graph, such as \emph{("Albert Einstein", "BornYear", "1879")}. Like any other representation technique, the information residing inside these fact triplets needs to be encoded into a numerical representation before it can be used.
To take into account polysemy and learn graph embeddings as a function of the graph context, we take inspiration from CoKE \citep{wang2020coke} and train our knowledge graph embedding model on an entity prediction task using the idea of Masked Language Modeling \citep{bert}. \\
\noindent \textbf{Model Architecture}
\noindent Our KGE model is based on a Transformer architecture \citep{vaswani2017attention}; the corresponding architecture is shown in Figure 1. The training objective of our KG embedding module is inspired by that of CoKE. Since its application to a task like NER was not straightforward, we designed character-level enriched knowledge graph embeddings to understand the underlying representation of characters in a KG. For example, in the fact triplet \emph{"(BarackObama, HasChild, SashaObama)"}, the tokens do not appear the way words do in English sentences. Introducing a character-level representation layer in CoKE helps the model learn the intrinsic character-sequence representations of entities and relations alongside the word-sequence representation, achieving enhanced representation learning of the knowledge graph. Also, since a collection of fact triplets in the form of a knowledge graph is an ever-expanding resource, entity/relation-level embeddings alone cannot adapt to newer entities, making it difficult to obtain embeddings for entities that were not observed while training the KGEs. We use two different character-sequence encoders, one for entities and another for relations. Finally, instead of feeding sinusoidal positional embeddings with the input embeddings \citep{vaswani2017attention}, we adopt relative positional embeddings \citep{shaw-etal-2018-self, DBLP:journals/corr/abs-1901-02860}. Sinusoidal positional embeddings are distance-aware but unaware of directionality; our choice of relative positional embeddings relies on the findings of \citet{yan2019tener}.\\
\noindent \textbf{Training Data Preparation}
\noindent We are given a Knowledge Graph composed of fact triplets as follows:
\begin{center}
$ KG = \{<s,r,o> | (s,o) \in E, r \in R\} $
\end{center}
\noindent where, $s, o$ is the subject and object entity respectively, $r$ is the relation between them, $E$ is the entity set and $R$ is the relation set. A sequence of tokens or a context is created using each triplet as $\emph{s} \rightarrow \emph{r} \rightarrow \emph{o}$.
All the triplets in the knowledge graph are formulated in this manner to obtain a set of graph contexts as follows:
\begin{center}
$ S = \{(s \rightarrow r \rightarrow o) | (s,o) \in E, r \in R\} $
\end{center}
\noindent Next, we create the training set $T$ from $S$. This is done by replacing each object entity $o$ in $S$ with the "[MASK]" token and defining the object entity $o$ as the prediction label for the triplet as follows:
\begin{center}
$ T = \{(<s,r,[MASK]>, o)\} $
\end{center}
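The construction of $T$ from $S$ amounts to replacing each object entity with a mask token and keeping it as the label; a minimal Python sketch (the function name and sample triplets are illustrative):

```python
def make_training_set(kg_triplets):
    """Build the entity-prediction training set T from fact triplets:
    each (s, r, o) becomes the input (s, r, "[MASK]") with o as the
    prediction label."""
    return [((s, r, "[MASK]"), o) for (s, r, o) in kg_triplets]

kg = [("BarackObama", "HasChild", "SashaObama"),
      ("AlbertEinstein", "BornYear", "1879")]
T = make_training_set(kg)
```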
\noindent \textbf{Model Training}
\begin{figure}[t]
\includegraphics[width=0.49\textwidth]{rdf_model.png}
\caption{The overall architecture of the Knowledge Graph Embedding Learning Network.}
\end{figure}
\noindent Consider an input sequence $T$ = $(t_{1}, t_{2}, t_{3})$.
First, each token $t_{i}$ is passed through its corresponding character level encoder to obtain the character sequence representation. Further, for each token $t_{i}$ in the input sequence $T$, we obtain its word-level representation which is tuned during model's training. The character sequence representation $(x_{i}^{char})$ and the word-level representation $(x_{i}^{word})$ are then concatenated together to obtain the element or the token embedding $(x_{i}^{ele})$ for $t_{i}$.
$$x_{i}^{ele} = [x_{i}^{word} ; x_{i}^{char}]$$
The final embedding input $(h_{\emph{\text{i}}}^{\text{0}})$ which is given as an input to the transformer encoder for token $t_{i}$ is obtained by the element wise sum of its element embedding $(x_{i}^{ele})$ and its relative positional embedding $(x_{i}^{pos})$.
\begin{center}
$h_{i}^{0} = x_{i}^{ele} \oplus x_{i}^{pos}$
\end{center}
Once the input representation is generated, it is fed to a transformer encoder with $L$ successive layers. The hidden state for token $t_{i}$ at layer $j$ is denoted as $h_{i}^{j}$ and is given by:
\begin{center}
$h_{i}^{j}$ = TransEnc($h_{i}^{j-1}$), $j = 1,2, \dots, L$
\end{center}
We treat the hidden representations obtained at the very last layer as the output of the transformer encoder. This is denoted by $\{h_{1}^{L}, h_{2}^{L}, h_{3}^{L}\}$.
After obtaining the transformer encoder representations of the last layer, $\{h_{i}^{L}\}_{i=1}^{3}$, we select the encoder representation corresponding to the "[MASK]" token, i.e., $h_{3}^{L}$. This is fed through a feedforward layer, followed by a softmax classification layer, to predict the third token, i.e., the object entity $t_{3}$ in $(t_{1}, t_{2}, t_{3})$.
Mathematically, the feedforward and softmax classification layers are defined as follows:
\begin{center}
$f$ = $W*h_{3}^{L} + b$
\end{center}
\begin{center}
$o_{k}$ = $\frac{\exp(f_{k})}{\sum_{k'}\exp(f_{k'})}$
\end{center}
Here, $W \in \mathbb{R}^{V\times D}$ and $b \in \mathbb{R}^{V\times 1}$ are learnable parameters of the feedforward layer, $V$ is entity vocabulary size, $D$ is the hidden size, $f$ is the output of the feedforward layer and $o$ is the predicted probabilities through softmax layer over all the entities in the entity vocabulary.
The model is trained using Adam Optimizer \citep{kingma2017adam} and we define training loss as the cross-entropy loss \citep{10.5555/646815.708603} between the predicted entity probabilities $o$ and one-hot ground truth entity $p$ as follows:
\begin{center}
$loss$ = $ - \sum_{k}^{}p_{k}\log{o_{k}}$
\end{center}
Here, $p_{k}$ and $o_{k}$ are the $k^{th}$ components of $p$ and $o$ respectively.
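The feedforward, softmax and cross-entropy equations above can be sketched in NumPy as follows; the max-subtraction for numerical stability is our addition, and all array shapes are illustrative.

```python
import numpy as np

def predict_and_loss(h_mask, W, b, p_true):
    """Feedforward layer, softmax over the entity vocabulary, and the
    cross-entropy training loss from the equations above.
    h_mask: (D,) encoder output for the [MASK] token;
    W: (V, D) and b: (V,) are the feedforward parameters;
    p_true: (V,) one-hot ground-truth entity."""
    f = W @ h_mask + b                  # logits over the V entities
    f = f - f.max()                     # numerical stabilization (our addition)
    o = np.exp(f) / np.exp(f).sum()     # softmax probabilities
    loss = -np.sum(p_true * np.log(o))  # cross-entropy loss
    return o, loss
```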
\subsection{NER Model}
The architecture of the proposed model is shown in Figure 2. We describe each component of our NER model in detail below.
\subsubsection{Input Representation Layer}
Given an input sequence of tokens $S = (x_{1}, x_{2}, \dots, x_{n})$, we first obtain the embeddings by generating features in six different ways of varying scope. These features are defined below:\\
\noindent \textbf{Word-level Representations:} For word-level features, we use the pre-trained 100-dimensional Glove Embeddings \citep{pennington-etal-2014-glove} and tune them during model's training. \\
\noindent \textbf{Character-level Representations:} Using word-level representations alone typically is not considered the best approach to NER \citep{santos2015boosting} due to the out-of-vocabulary (OOV) problem. To address this, many neural NER systems have shown the effectiveness of incorporating character-level representations for words \citep{ma2016endtoend, chiu-nichols-2016-named}. In addition to solving the OOV problem, they also help the model understand the underlying structure of words. This includes learning the arrangement of chars, the distinctive features a named entity follows such as the capitalization of the first letter of named entities, etc. Most works in the literature prefer a CNN-based character-level encoder over an LSTM based encoder \citep{li-etal-2017-leveraging-linguistic}. This is because of CNN's parallelization capabilities and almost similar or even better performance than the latter. To enrich our NER model with character-level features, we adopt IntNet \citep{intnet}, a funnel-shaped convolutional neural network. IntNet combines kernels of various sizes to extract different n-grams from words and learn an enhanced representation of words.
The entire network of IntNet comprises a series of convolution blocks, and each block consists of two layers. The first layer is a basic $N \times 1$ convolution on the input. The second layer applies convolutions of different kernel sizes on the first layer's output, concatenates them, and feeds it to the next convolution block.\\
\begin{figure}
\centering
\includegraphics[width=0.40\textwidth, height = 6cm]{NERmodel.png}
\caption{The detailed architecture of the proposed KARL-Trans-NER model.}
\end{figure}
\noindent \textbf{Context-level Representations:} To enlighten the model with contextual knowledge, here, we use the dominant pre-trained contextualized embedding model BERT \citep{bert}. As BERT generates embeddings at the word-piece-level, we take the average of all the word pieces to obtain the contextualized word embedding of a word. \\
\noindent \textbf{Sentence-level Representations:} Most sentence embedding learning techniques rely on word-level embeddings to generate sentence-level features \citep{8639211}. For computational ease, some methods compute the average of all the token embeddings in a sentence \citep{annervaz2018learning, coates-bollegala-2018-frustratingly} to obtain the sentence representation.
\begin{center}
$s$ = $\frac{\sum_{i} w_{i}}{n}$
\end{center}
where $n$ is the sequence length and $w_{i}$ is the word embedding of token $i$.
Averaging word embeddings neglects word-order information. It also assumes an equal contribution from each word towards the sentence representation. Therefore, it is not the best approach to obtain sentence embeddings.
Sentence transformers proposed by \citet{reimers2019sentencebert} generate sentence embeddings using contextualized word embeddings, making them adaptive to new contexts. Though sentence transformers have provided SOTA results in many NLP tasks \citep{ke2020sentilare, Li_2020}, we believe that these representations could be enhanced by incorporating task-specific information. To achieve this, we adopt the techniques proposed by \citet{wang2018joint} and incorporate task-specific information by learning the NER label embeddings. We discuss our approach of generating sentence embeddings using the contextualized word embeddings obtained from BERT in detail below.
We begin by first taking the contextualized representations of words obtained from BERT and linearly projecting them to a different space. Let us denote the obtained representations by $c = (c_{1}, c_{2},\dots, c_{n})$, where $c \in \mathbb{R}^{n\times d}$, $c_{i}$ denotes the contextualized embedding of word $i$, $n$ is the number of words in the sentence and $d$ is the dimension BERT embeddings were projected to. The next step involves embedding the labels (PER, LOC etc.) to the same space. The unique set of labels in the data are embedded into dense representations. Let these representations be defined as $l = (l_{1}, l_{2}, \dots, l_{m})$. Here, $l \in \mathbb{R}^{m\times d}$, $m$ denotes the number of tag labels and $d$ is the hidden size.
Next, we apply label-embedding attention over the word representations. The cosine similarity between each token embedding $c_{i}$ and label embedding $l_{j}$ is treated as the compatibility function.
\begin{center}
$simi(c_{i}, l_{j}) = \frac{c_{i}^{T}l_{j}}{||c_{i}|| ||l_{j}||}$
\end{center}
A convolution operation is applied to the similarities obtained above. This enables the model to capture the behavior of the same label across neighboring words; in other words, the convolution layer captures the relative spatial information of a particular label over a phrase.
Considering a phrase of length $2k+1$, where $k$ is the kernel size, the convolution for token $i$ is applied over $simi_{[i-k:i+k, :]}$, followed by max-pooling. The attention weights for the entire sentence are then generated by applying a softmax over the scores. Finally, the sentence-level representation $s$ is computed as the weighted sum of the token embeddings under the attention weights. This is mathematically defined as follows:
\begin{center}
$scores = max(W^{T} simi_{[i-k:i+k, :]} + b) $ \\
$\alpha = softmax(scores)$ \\
$s = \sum_{i=1}^{n} \alpha_{i}c_{i}$
\end{center}
Here, $W \in \mathbb{R}^{2k+1}$ and $b \in \mathbb{R}^{m}$ are learnable parameters, $\alpha \in \mathbb{R}^{n}$ and $s \in \mathbb{R}^{d}$.\\
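The full pipeline above (cosine similarity, convolution over each token's neighbourhood, max-pooling, softmax, weighted sum) can be sketched in NumPy; $W$ and $b$ are randomly initialised here to stand in for the learnable parameters, and the function name is ours:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def label_attentive_sentence(c, l, k=1):
    """Sentence embedding from token embeddings c (n x d) and label
    embeddings l (m x d): similarity -> conv -> max-pool -> softmax
    -> weighted sum, as described in the text."""
    n, d = c.shape
    m = l.shape[0]
    # cosine similarity between every token and every label: (n, m)
    simi = (c @ l.T) / (np.linalg.norm(c, axis=1, keepdims=True)
                        * np.linalg.norm(l, axis=1, keepdims=True).T)
    W = np.random.randn(2 * k + 1) * 0.1        # learnable in the real model
    b = np.random.randn(m) * 0.1
    padded = np.pad(simi, ((k, k), (0, 0)))
    scores = np.empty(n)
    for i in range(n):
        window = padded[i:i + 2 * k + 1]        # (2k+1, m) neighbourhood
        scores[i] = (W @ window + b).max()      # conv + max-pool over labels
    alpha = softmax(scores)                     # attention weights, (n,)
    return alpha @ c                            # weighted sum: (d,)

c = np.random.randn(6, 8)                       # 6 tokens, d = 8
l = np.random.randn(4, 8)                       # 4 NER labels
s = label_attentive_sentence(c, l)
print(s.shape)                                  # (8,)
```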
\noindent \textbf{Document-level Representations:} To obtain Document-level features, we adopt a key-value Memory Network \citep{weston2015memory} that generates document-aware representations of every unique word in the training data. Specifically, we refer to the model adopted by \citet{luo2019hierarchical} to create features at a document level. \\
\noindent \textbf{Global-level Representations:}
\citet{annervaz2018learning} made one of the first attempts to augment learning models with structured graph knowledge, testing their model in a sentence-classification setting. To retrieve entities and relations from the knowledge base, they first clustered them and used a convolution network to obtain cluster representations. Though the technique worked well for sentence classification, we identified a couple of limitations in their approach.
The first lies in the clustering step: to retrieve relevant entities and relations, the clustering must be accurate, and clustering generally results in a loss of information. A cluster representation is a generalized representation of its constituents and can therefore neglect essential information carried by the individual objects in the cluster. Second, they augmented their model with graph knowledge at the sentence level. For sequence labeling, sentence-level augmentation is not effective because we require precise entity-level information from the knowledge graph. We address the limitations mentioned above and propose a more efficient and reliable technique for incorporating relevant information from the knowledge graph.
Instead of performing clustering, we first shortlist entities and relations that are relevant to our task. The shortlisted entities and relations are passed through the already trained knowledge graph embedding module. Here, we remove the classification layer of the knowledge graph embedding module and keep the representations generated by the transformer network to create the fact triplet embeddings. Next, we discuss the entity and relation shortlisting step. \\
\noindent \textbf{Entity Shortlisting:} One caveat of similarity-based methods is that their performance depends heavily on the features adopted for similarity estimation, which can introduce errors. Therefore, we adopt n-gram based matching against the input document to shortlist entities from the knowledge graph. The idea is to generate a candidate set for each entity in the document; specifically, we follow the rules proposed by \citet{10.1145/3038912.3052675} for candidate-set generation. As a particular entity in the document can match multiple entities in the knowledge graph, we rank all matched entities by the number of triplets in which they appear as subjects. The top $k_{1}$ entities are selected and added to the candidate entity set $C_{w}$ of the current word $w$. \\
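The ranking step can be sketched as follows; the helper name and the toy triplets are ours, not part of the actual pipeline:

```python
from collections import Counter

def shortlist_entities(candidates, triples, k1=2):
    """Rank candidate KG entities by how often they appear as the
    subject of a triplet, and keep the top k1 (hypothetical helper)."""
    subject_counts = Counter(h for h, r, t in triples)
    ranked = sorted(candidates, key=lambda e: -subject_counts[e])
    return ranked[:k1]

triples = [("Paris", "capitalOf", "France"),
           ("Paris", "locatedIn", "Europe"),
           ("Paris_Hilton", "occupation", "actress")]
cw = shortlist_entities(["Paris", "Paris_Hilton"], triples, k1=1)
print(cw)   # ['Paris']
```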
\noindent \textbf{Relation Shortlisting:} After generating the candidate entity set $C_{w}$ for a word $w$, we extract the $k_{2}$ most frequent relations that emerge from entities in the candidate entity set $C_{w}$. This results in the Entity Relation set $ER_{w}$ for a word $w$ as follows:
\begin{equation*}
ER_{w} = \Bigg\{
\begin{matrix}
(e_{1}, r_{1}) & (e_{1}, r_{2}) & \dots & (e_{1}, r_{k_{2}})\\
(e_{2}, r_{1}) & (e_{2}, r_{2}) & \dots & (e_{2}, r_{k_{2}})\\
\vdots & \vdots & \ddots & \vdots \\
(e_{k_{1}}, r_{1}) & (e_{k_{1}}, r_{2}) & \dots & (e_{k_{1}}, r_{k_{2}})
\end{matrix}
\Bigg\}
\end{equation*}
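Constructing $ER_{w}$ from the candidate entity set amounts to a cross product of the candidates with the $k_{2}$ most frequent relations; a toy sketch (helper name ours):

```python
from collections import Counter
from itertools import product

def entity_relation_set(candidate_entities, triples, k2=2):
    """Pair each candidate entity with the k2 most frequent relations
    emerging from entities in the candidate set (hypothetical helper)."""
    rel_counts = Counter(r for h, r, t in triples if h in candidate_entities)
    top_rels = [r for r, _ in rel_counts.most_common(k2)]
    return list(product(candidate_entities, top_rels))

triples = [("Paris", "capitalOf", "France"),
           ("Paris", "locatedIn", "Europe"),
           ("Lyon", "locatedIn", "France")]
er_w = entity_relation_set(["Paris", "Lyon"], triples, k2=1)
print(er_w)   # [('Paris', 'locatedIn'), ('Lyon', 'locatedIn')]
```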
After shortlisting relevant entities and relations, we augment the current word with world knowledge. We first encode all the triplets in the set $ER_{w}$ using the pre-trained knowledge graph embedding model: each entity-relation pair in $ER_{w}$ is appended with the [MASK] token and fed to the transformer network, and the hidden state obtained from the last layer acts as the contextualized representation of that triplet. Finally, we apply soft attention over the hidden states obtained from the transformer encoder, treating the contextualized embedding obtained from BERT as the query vector for augmentation at the word level. For a word $w$, this is mathematically defined as follows:
\begin{center}
$I = [h_{w,1}^{L}; h_{w,2}^{L}; h_{w,3}^{L}] $\\
$Q, K, V = W^{q}B_{w}, W^{k}I, W^{v}I $\\
$A_{i} = QK_{i}^{T}$\\
$g_{w} = \sum_{i}Softmax(\frac{A_{i}}{\sqrt{3d}})V_{i}$
\end{center}
Here, $W^{q} \in \mathbb{R}^{3d\times h}$, $W^{k} \in \mathbb{R}^{n \times 3d}$ and $W^{v} \in \mathbb{R}^{n \times 3d}$ are learnable parameters, $B_{w} \in \mathbb{R}^{h}$ is the contextualized word embedding of word $w$ obtained from BERT, $I \in \mathbb{R}^{n\times3d}$ is the concatenation of the individual hidden states $\{h_{w,i}^{L}\}_{i=1}^{3}$ corresponding to the last layer of the transformer encoder, $n$ is the number of triplets in the set $ER_{w}$, $d$ is the output dim of individual hidden state from the transformer encoder and lastly, $g_{w} \in \mathbb{R}^{3d}$ is the global representation of word $w$.
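The attention computation above can be sketched for a single word in NumPy; the randomly initialised projections stand in for the learnable $W^{q}$, $W^{k}$, $W^{v}$, and the function name is ours:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def global_representation(B_w, H, rng=np.random.default_rng(0)):
    """Soft attention over triplet representations H (n x 3d) with the
    BERT word embedding B_w (h,) as the query; dimension names follow
    the text, and the projections are learnable in the real model."""
    n, d3 = H.shape
    h = B_w.shape[0]
    Wq = rng.standard_normal((d3, h)) * 0.1     # maps the query to 3d
    Wk = rng.standard_normal((d3, d3)) * 0.1
    Wv = rng.standard_normal((d3, d3)) * 0.1
    Q = Wq @ B_w                                # (3d,)
    K = H @ Wk.T                                # (n, 3d)
    V = H @ Wv.T                                # (n, 3d)
    A = K @ Q                                   # (n,) attention logits
    alpha = softmax(A / np.sqrt(d3))            # scaled by sqrt(3d)
    return alpha @ V                            # g_w, (3d,)

H = np.random.randn(5, 12)                      # 5 triplets, 3d = 12
B_w = np.random.randn(16)                       # BERT embedding, h = 16
g_w = global_representation(B_w, H)
print(g_w.shape)                                # (12,)
```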
This concludes our input representation layer. Its output is the token-level concatenation of the features described above, yielding a word-level representation that is 1) character-aware, 2) word-aware, 3) context-aware, 4) sentence-aware, 5) document-aware, and 6) knowledge-aware.
\subsubsection{Encoder Layer}
As the context encoder, we use the fully-connected self-attention network, i.e., the Transformer. For NER, \citet{guo2019startransformer} found the Transformer encoder to perform poorly compared to a recurrent context encoder such as the LSTM \citep{articlelstm}. \citet{yan2019tener} showed that using un-scaled attention and relative positional encodings in place of absolute positional encodings can significantly improve the Transformer's performance. We therefore adopt their proposed modifications in our Transformer-based context encoder.
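A minimal single-head sketch of the two modifications (no $\sqrt{d}$ scaling, plus a relative-position term) is shown below; the per-distance bias used here is a simplification of the full relative encoding in TENER, and all names are ours:

```python
import numpy as np

def softmax_rows(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def unscaled_relative_attention(x, Wq, Wk, Wv, rel_bias):
    """Single-head self-attention in the spirit of TENER: the logits
    are left un-scaled and a learnable bias indexed by the signed
    token distance is added before the softmax."""
    n = x.shape[0]
    Q, K, V = x @ Wq, x @ Wk, x @ Wv
    logits = Q @ K.T                             # NOTE: no / sqrt(d) scaling
    idx = np.arange(n)
    # signed distance i - j, shifted to index into rel_bias (direction-aware)
    logits += rel_bias[idx[:, None] - idx[None, :] + n - 1]
    return softmax_rows(logits) @ V

n, d = 5, 8
x = np.random.randn(n, d)
Wq, Wk, Wv = (np.random.randn(d, d) * 0.1 for _ in range(3))
rel_bias = np.random.randn(2 * n - 1) * 0.1      # one bias per signed distance
out = unscaled_relative_attention(x, Wq, Wk, Wv, rel_bias)
print(out.shape)                                 # (5, 8)
```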
\subsubsection{Decoder Layer}
The conditional random field (CRF) \citep{10.5555/645530.655813} is a widely adopted decoder in many state-of-the-art NER models \citep{ma2016endtoend, lample-etal-2016-neural}. A CRF can establish strong dependencies between the output tags, which helps it make better predictions than a plain softmax layer. In the decoding phase, the Viterbi algorithm is applied to obtain the output label sequence with the highest probability among all valid label sequences.
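Viterbi decoding over the CRF scores can be sketched as follows; the emission and transition scores below are toy values chosen so that the transition matrix overrides the greedy per-token choice:

```python
import numpy as np

def viterbi_decode(emissions, transitions):
    """Viterbi decoding as used in a CRF layer: returns the highest-
    scoring tag sequence given per-token emission scores (n x m) and
    a tag-transition score matrix (m x m)."""
    n, m = emissions.shape
    score = emissions[0].copy()
    back = np.zeros((n, m), dtype=int)
    for t in range(1, n):
        # total[i, j]: best score ending in tag j after coming from tag i
        total = score[:, None] + transitions + emissions[t][None, :]
        back[t] = total.argmax(axis=0)
        score = total.max(axis=0)
    tags = [int(score.argmax())]
    for t in range(n - 1, 0, -1):                # backtrack
        tags.append(int(back[t, tags[-1]]))
    return tags[::-1]

# 3 tokens, 2 tags; transitions strongly discourage the 0 -> 0 move,
# so the greedy path [0, 0, 0] is rejected in favour of [0, 1, 0]
emissions = np.array([[2.0, 0.0], [2.0, 0.0], [2.0, 0.0]])
transitions = np.array([[-5.0, 0.0], [0.0, 0.0]])
print(viterbi_decode(emissions, transitions))    # [0, 1, 0]
```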
\section{Experiments and Results}
We evaluated our models on three NER tasks. To reduce the impact of randomness, we conducted each experiment three times and reported the average span-level F1 score and standard deviation. Starting with word-level, we incrementally conducted our experiments and augmented features in increasing order of the extent of information modeled by them. To verify the performance of these trained models in a real-world scenario and compare their adaptability to unseen entities, we also generated predictions on two random pieces of texts taken from the web. In the subsequent section, we discuss the data statistics, model details, results, and performance on unseen entities.\\
\begin{table}[t]
\resizebox{0.49\textwidth}{!}
{%
\begin{tabular}{r|c|c|c|c|c}
\toprule
\textbf{Dataset} & \textbf{Type} & \textbf{Train} & \textbf{Test} & \textbf{Dev} & \textbf{Tags} \\
\midrule
\multirow{2}{*}{CoNLL 2003} & Sentence & 14041 & 3453 & 3250 & \multirow{2}{*}{4}\\
& Token & 203621 & 46435 & 51362 \\
\midrule
\multirow{2}{*}{OntoNotes v5} & Sentence & 59924 & 8262 & 8528 & \multirow{2}{*}{18}\\
& Token & 1088503 & 152728 & 147724 \\
\bottomrule
\end{tabular}%
}
\caption{Statistics of the CoNLL 2003 and OntoNotes v5 datasets. The statistics for the CoNLL++ dataset are the same as those of CoNLL 2003.}
\end{table}
\noindent \textbf{NER datasets and Knowledge Graph}
\noindent The statistics of the three NER datasets we experimented on are listed in Table 1. Wikidata is one of the largest sources of real-world knowledge, covering concepts from various domains. We filtered Wikidata and kept only the fact triplets relevant to our task, leaving approximately 10 million of the roughly 400 million fact triplets present in Wikidata.\\
\noindent \textbf{Model Hyperparameters}
\noindent We tuned all model hyper-parameters manually and list the respective search ranges in Table 2. \\
\begin{table}[t]
\resizebox{0.49\textwidth}{!}
{%
\begin{tabular}{p{0.20\textwidth} p{0.20\textwidth}||p{0.20\textwidth}p{0.20\textwidth}}
\toprule
\multicolumn{2}{c||}{\textbf{NER-Transformer}} & \multicolumn{2}{c}{\textbf{Knowledge-Graph Transformer}} \\
\midrule
Layers & ~\hfill~[2] ~\hfill~ & Layers & ~\hfill~[2] ~\hfill~ \\
\midrule
Learning Rate & ~\hfill~[0.001, 0.0009] ~\hfill~ & Learning Rate & ~\hfill~[0.0005, 0.0003] ~\hfill~ \\
\midrule
Heads & ~\hfill~[8, 12, 14] ~\hfill~ & Heads & ~\hfill~[4, 8] ~\hfill~\\
\midrule
Head Dim. & ~\hfill~ [64, 96, 128] ~\hfill~ & Head Dim. & ~\hfill~ [256, 128] ~\hfill~\\
\midrule
FC Dropout & ~\hfill~ [0.40] ~\hfill~ & FC Dropout & ~\hfill~ [0.40] ~\hfill~ \\
\midrule
Attn. Dropout & ~\hfill~ [0.15] ~\hfill~ & Attn. Dropout & ~\hfill~ [0.25] ~\hfill~ \\
\midrule
Optimizer & ~\hfill~ [SGD] ~\hfill~ & Optimizer & ~\hfill~ [Adam] ~\hfill~ \\
\midrule
\midrule
\multicolumn{4}{c}{\textbf{Other Hyperparameters and Model Settings}}\\
\midrule
\multicolumn{1}{c}{IntNet Layers} & \multicolumn{1}{c|}{[5]} &
\multicolumn{1}{c}{IntNet Kernel Sizes} & \multicolumn{1}{c}{[[3,4,5]]}\\
\midrule
\multicolumn{1}{c}{IntNet Embedding Dim.} & \multicolumn{1}{c|}{[16, 32, 64]} &
\multicolumn{1}{c}{IntNet Hidden Dim.} & \multicolumn{1}{c}{[8, 16, 32]}\\
\midrule
\multicolumn{1}{c}{Momentum} & \multicolumn{1}{c|}{[0.9]} & \multicolumn{1}{c}{Epochs} & \multicolumn{1}{c}{[100]} \\
\midrule
\multicolumn{1}{c}{Sentence Transformer} & \multicolumn{1}{c|}{[stsb-bert-large]} & \multicolumn{1}{c}{Bert pre-trained} & \multicolumn{1}{c}{[bert-large-cased]} \\
\midrule
\multicolumn{1}{c}{Glove Embedding} & \multicolumn{1}{c|}{[glove-en-100d]} & & \\
\bottomrule
\end{tabular}%
}
\caption{Hyper-parameter search ranges for the NER and Knowledge-Graph Transformer encoders, along with other hyper-parameters and model settings.}\label{tab:hyperparams}
\end{table}
\noindent \textbf{Results and Analysis}
\noindent We report the results achieved by our models in Table 3. Starting with word-level features, we observe that performance on every dataset increases consistently as features are augmented. For the CoNLL datasets, the lift produced by each feature-augmentation step follows the trend \emph{Char $>$ Context $>$ Global $>$ Doc $>$ Sent}; for OntoNotes v5 it is \emph{Context $>$ Char $>$ Global $>$ Doc $>$ Sent}.
\begin{table}[h]
\resizebox{0.49\textwidth}{!}
{
\begin{tabular}{p{0.20\textwidth}||c|c|c|c|c|c}
\toprule
\multicolumn{1}{c||}{\textbf{Model}} & \multicolumn{2}{c|}{\textbf{CoNLL 2003}} & \multicolumn{2}{c|}{\textbf{CoNLL++}} & \multicolumn{2}{c}{\textbf{OntoNotes}}\\
\midrule
\multicolumn{1}{c||}{LUKE \citep{yamada2020luke}} & \multicolumn{2}{c|}{\underline{94.3}} & \multicolumn{2}{c|}{-} & \multicolumn{2}{c}{-}\\
\midrule
\multicolumn{1}{c||}{\multirow{2}{*}{CMV \citep{luoma2020exploring}}} & \multicolumn{2}{c|}{{93.74 (0.25)}{$\bot$}} & \multicolumn{2}{c|}{-} &\multicolumn{2}{c} {-}\\
\multicolumn{1}{c||}{} & \multicolumn{2}{c|}{93.44 (0.06)} & \multicolumn{2}{c|}{-} & \multicolumn{2}{c}{-}\\
\midrule
\multicolumn{1}{c||}{ACE \citep{wang2020automated}} & \multicolumn{2}{c|}{{93.6}{$\bot$}} & \multicolumn{2}{c|}{-} & \multicolumn{2}{c}{-}\\
\midrule
\multicolumn{1}{c||}{CL-KL \citep{wang2021improving}} & \multicolumn{2}{c|}{{93.56}{$\bot$}} & \multicolumn{2}{c|}{\underline{94.81}} & \multicolumn{2}{c}{-}\\
\midrule
\multicolumn{1}{c||}{CrossWeigh \citep{wang2019crossweigh}} & \multicolumn{2}{c|}{{93.43}{$\bot$}} & \multicolumn{2}{c|}{{94.28}{$\bot$}} & \multicolumn{2}{c}{-}\\
\midrule
\multicolumn{1}{c||}{BERT-MRC+DSC \citep{li2020dice}} & \multicolumn{2}{c|}{93.33 (0.29)} & \multicolumn{2}{c|}{-} & \multicolumn{2}{c}{\underline{{92.07 (0.96)}}{$\bot$}} \\
\midrule
\multicolumn{1}{c||}{Biaffine-NER \citep{yu2020named}} & \multicolumn{2}{c|}{{93.5} $\bot$ } & \multicolumn{2}{c|}{-} & \multicolumn{2}{c}{91.30} \\
\midrule
\multicolumn{1}{c||}{KAWR \citep{qizhen2020}} & \multicolumn{2}{c|}{91.80 (0.24)} & \multicolumn{2}{c|}{-} & \multicolumn{2}{c}{-} \\
\midrule
\midrule
\multicolumn{1}{c||}{\textbf{KARL-Trans-NER}} & F1 & $\triangle$ & F1 & $\triangle$ & F1 & $\triangle$ \\
\midrule
\multicolumn{1}{c||}{Word-level} & 90.12 (0.02) & - & 90.90 (0.01) & - & 86.97 (0.01) & -\\
\multicolumn{1}{c||}{+ Char-level} & 91.60 (0.01) & 1.48 & 92.53 (0.01) & 1.63 & 88.40 (0.08) & 1.43 \\
\multicolumn{1}{c||}{+ Context-level} & 92.92 (0.02) & 1.32 & 93.77 (0.05) & 1.24 & 90.13 (0.02) & 1.73 \\
\multicolumn{1}{c||}{+ Sentence-level} & 93.10 (0.07) & 0.18 & 93.90 (0.07) & 0.13 & 90.42 (0.03) & 0.29 \\
\multicolumn{1}{c||}{+ Document-level} & 93.38 (0.04) & 0.28 & 94.17 (0.03) & 0.27 & 90.91 (0.04) & 0.49 \\
\multicolumn{1}{c||}{+ Global-level} & \textbf{93.74 (0.05)} & 0.36 & \textbf{94.52 (0.06)} & 0.35 & \textbf{91.41 (0.06)} & 0.50\\
\midrule
\multicolumn{1}{c||}{Context-level + Global-level} & 92.44 (0.09) & - & 92.98 (0.11) & - & - & - \\
\bottomrule
\end{tabular}
}
\caption{F1 scores on CoNLL 2003, CoNLL++ and OntoNotes v5. Results marked with {$\bot$} used both the Train and Dev sets for model training. SOTA results are underlined and the best results of our model are marked in bold. Standard deviations are given in parentheses. {$\triangle$} denotes the change in F1 from each feature addition.}
\end{table}
In addition, we observe that with access to world knowledge, the proposed model achieves superior results to most previously proposed systems in the literature. To better evaluate the effectiveness of our approach, we also compare our results with KAWR \citep{qizhen2020}, whose authors likewise leveraged world knowledge for NER. To the best of our knowledge, theirs is the only work in the literature that utilized knowledge graph embeddings and applied them to the datasets used in our work. Comparing their reported results with ours, we observe an approximate lift of \emph{2 F\textsubscript{1}}.
As the results reported for KAWR utilized only context-level and knowledge-level embeddings, we also ran KARL with exactly that embedding configuration. Experimental results show that KARL achieves far better performance than KAWR, outperforming it by approximately \emph{0.65} F\textsubscript{1} on the CoNLL 2003 dataset, which indicates the effectiveness of our proposed model in incorporating world knowledge. \\
\noindent \textbf{Evaluation on Unseen Entities}
\begin{table}[]
\resizebox{0.49\textwidth}{!}
{%
\begin{tabular}{@{}r|p{0.28\textwidth}|c|p{0.25\textwidth}|p{0.25\textwidth}@{}}
\toprule
\textbf{S.No} & ~\hfill~\textbf{Text}~\hfill~ & ~\hfill~\textbf{Model} ~\hfill~ & ~\hfill~\textbf{Predictions} ~\hfill~ & ~\hfill~\textbf{Predictions (lowercased)} ~\hfill~ \\
\midrule
\multirow{11}{*}{1} & \multirow{6}{*}{\parbox{0.28\textwidth}{[SpaceX (\emph{ORG})] is an aerospace manufacturer and space transport services company headquartered in [California (\emph{LOC})].}} & Word & None & None\\
\cline{3-5}
& & \multirow{2}{*}{+ Char} & SpaceX (PER) & \multirow{2}{*}{None} \\
& & & California (LOC) &\\
\cline{3-5}
& & \multirow{2}{*}{+ Context} & SpaceX (ORG) & SpaceX (MISC) \\
& & & California (LOC) & California (LOC)\\
\cline{3-5}
& & \multirow{2}{*}{+ Sent} & SpaceX (ORG) & SpaceX (MISC)\\
& & & California (LOC) & California (LOC) \\
\cline{3-5}
& & \multirow{2}{*}{+ Doc} & SpaceX (ORG) & SpaceX (MISC)\\
& & & California (LOC) & California (LOC)\\
\cline{3-5}
& & \multirow{2}{*}{+ Global} & SpaceX (ORG) & SpaceX (ORG)\\
& & & California (LOC) & California (LOC)\\
\midrule
\multirow{13}{*}{2} & \multirow{6}{*}{\parbox{0.28\textwidth}{[Liverpool (\emph{ORG})] suffered an upset first time home league defeat of the season, beaten 1 by a [Guy Whittingham (\emph{PER}]) goal for [Sheffield Wednesday (\emph{ORG})].}} & Glove & None & None \\
\cline{3-5}
& & \multirow{1}{*}{+ Int} & Liverpool (\emph{ORG}) & None\\
\cline{3-5}
& & \multirow{2}{*}{+ Bert} & Liverpool (\emph{ORG}) & Liverpool (\emph{PER}) \\
& & & Sheffield (\emph{PER}) & Sheffield (\emph{PER}) \\
\cline{3-5}
& & \multirow{3}{*}{+ Sent} & Liverpool (\emph{ORG}) & Liverpool (\emph{PER})\\
& & & Sheffield (\emph{PER}) & Sheffield (\emph{PER}) \\
& & & Whittingham (\emph{PER}) & Whittingham (\emph{PER}) \\
\cline{3-5}
& & \multirow{3}{*}{+ Doc} & Liverpool (\emph{ORG}) & Liverpool (\emph{PER})\\
& & & Sheffield (\emph{PER}) & Sheffield (\emph{PER})\\
& & & Whittingham (\emph{PER}) & Whittingham (\emph{PER})\\
\cline{3-5}
& & \multirow{3}{*}{+ KG} & Liverpool (\emph{ORG}) & Liverpool (\emph{ORG})\\
& & & Sheffield Wednesday (\emph{ORG}) & Sheffield Wednesday (\emph{ORG})\\
& & & Guy Whittingham (\emph{PER}) & Guy Whittingham (\emph{PER})\\
\bottomrule
\end{tabular}%
}
\caption{Predictions made by each of the six models we trained. The "Predictions" column lists the entities and tags predicted by the model on the corresponding text; "Predictions (lowercased)" shows the predictions on the lowercased version of the text.}
\end{table}
\noindent Next, we test each model's performance on unseen entities. We take the models trained on the CoNLL 2003 dataset and predict named entities for random pieces of texts taken from the web, which we annotated manually. We show the entities identified and their corresponding entity tags as predicted by these models for two such sentences in the "Predictions" column of Table 4. We observe that the word-level model failed to identify any named entity from the first sentence. With the introduction of character-level features, the model made one classification error and classified "SpaceX" as a Person rather than an Organisation. After augmenting features at the context level, all the subsequent models generated accurate predictions and identified all the named entities and the corresponding tags precisely.
In the second sentence, the word-level model again failed to identify any named entities in the input text. Although the models did start to identify parts of named entities with further feature augmentation, we observe that the majority of the predicted tag labels were wrong. For instance, the "+Bert", "+Sent" and "+Doc" models misclassified "Sheffield" as a person. It is not until world knowledge is added that we start observing accurate predictions. The Global model, as before, achieved 100\% accuracy in named-entity identification and tag labeling.
To introduce some complexity, we repeated the above experiments on the same two sentences, but the entire sentence had been lowercased this time. We did this to verify the model's sensitivity towards casing. As shown in the table above, the predictions made by each of the models on lowercased inputs varied significantly, and the models committed more entity misclassifications than before. However, the predictions made by the model augmented with knowledge remained unaltered and unaffected, which verified the applicability and adaptability of our proposed method to a real-world scenario where the raw text is not guaranteed to carry any specific formatting.
\section{Conclusion and Future Work}
This work proposed a novel world knowledge augmentation technique that leveraged large knowledge bases represented as fact triplets and successfully extracted relevant information for word-level augmentation. The model was trained and tested in an NER setting. Experimental results showed that knowledge level representation learning outperformed most NER systems in literature and made the model highly applicable to a real-world scenario by accurately predicting entities in random pieces of text.
Since we augment features at the word level, we believe our method could facilitate many other NLP tasks, such as chunking, word sense disambiguation, and question answering. As future work, we therefore plan to test the applicability of the proposed methods on other NLP tasks; our intuition is that any system can leverage the proposed method as a general knowledge-representation learning tool. Moreover, being among the very few works in this direction, we see ample scope for improvement. For instance, the knowledge graph embedding model was trained separately on a masked language modeling task, and the trained model was then used on the task at hand. This prevented the embedding model from interacting with and learning from the downstream task, NER in our case; we believe a technique that trains the NER model jointly with the knowledge representation module could be more beneficial. Another improvement lies in the entity-shortlisting step. Although the technique is quite reliable, it does not consider any semantic information about the entities: different entities in a knowledge base can be highly correlated yet have different names. We therefore plan to improve the entity-shortlisting technique for more accurate and robust shortlisting.
\section{Introduction}
Named Entity Recognition (NER) is the task of locating and classifying named entities in a given piece of text into pre-defined entity categories such as Person (PER), Location (LOC), Organisation (ORG), etc. NER is considered an essential preprocessing step that can benefit many downstream applications in Natural Language Processing (NLP), such as Machine Translation \cite{babych-hartley-2003-improving}, Information Retrieval \citep{articleir_ner} and Text Classification \citep{article_tc_ner}.
Over the past few years, Deep Learning has been the key to solving not only NER but many other NLP applications \citep{le-etal-2018-deep, kouris-etal-2019-abstractive}.
On the downside, these models also demand large amounts of well-structured, annotated data for training. This restricts the applicability of trained models in real-world scenarios, as a model's behavior and predictions become very specific to the type of data it is trained on. To overcome this, many recent studies focus on building models that can incorporate world knowledge for enhanced modeling and inference on the task at hand, such as \citet{qizhen2020} for NER, \citet{denk-peleteiro-ramallo-2020-contextual} for representation learning, and \citet{7451560} for dependency parsing.
Although recent works in the literature have successfully incorporated world knowledge for sequence labeling \citep{qizhen2020}, they come with certain limitations, which we discuss next. First, just as words in a language can be polysemous \citep{The_Structure_of_Polysemy}, entities and relations in a knowledge graph can be polysemous too \citep{xiao2015}; yet to introduce Knowledge Graph Embeddings (KGEs), previously proposed approaches have primarily used pre-trained static embeddings obtained from extensive sources such as Wikidata. The KGEs in these models fundamentally rely on the assumption that the tail entity is a linear transformation of the head entity and the relation, making them non-contextualized in nature. Second, prior work considered only the head-entity and relation embeddings when building the knowledge graph embedding, ignoring the tail entity of the triplet completely. Dropping the tail entity can lead to a loss of information: in addition to carrying information about the triplet itself, the head-relation-tail structure helps in understanding and extracting implicit relationships between entities across triplets, so the model must know where the head and the relation are leaning to achieve an accurate embedding estimation. The final limitation lies in applying a recurrent architecture to obtain KGEs, which introduces time inefficiency and a high computation cost \citep{annervaz2018learning}.
To further understand the importance and our motivation behind using world knowledge for NER, consider a couple of examples mentioned below.\\
\noindent A: \emph{Google announced the launch of Maps.}\\
B: \emph{Pichai announced the launch of Maps.}\\
In the training phase, the significant contextual overlap between such sentences can confuse the model in labeling the named entities correctly: the model is likely to memorize sentence templates rather than learn to predict correct entity labels, leading to misclassifications. Moreover, suppose we trained our NER model on an out-of-domain NER dataset; the model would then hardly receive any signal from the training set that "Google" is an organization and "Pichai" refers to a person. \\
\noindent A: \emph{Berlin died in Season 2 of Money Heist.}\\
B: \emph{Messi saved Barcelona with an equalizer.}\\
In the first sentence above, "Berlin" refers to a person, whereas in the second sentence, "Barcelona" refers to an organization. There are reasonable chances of misclassification in these two sentences, because the training data is likely to miss such nuanced differences among all the possible entity tags for a named entity.
From the examples mentioned above, we can infer that for the model to be aware of such subtle differences, we should provide it with the ability to look up relevant details from a reliable source. Therefore, world knowledge can open the gates for the model to access such information and learn details about entities that it might never come across in the training data. In addition to this, with access to structured world knowledge, far better applicability to a real-world setting can be expected.
Setting these points as our objective, in this work, we propose Knowledge Aware Representational Learning Network for Named Entity Recognition using Transformer (KARL-Trans-NER), which
\begin{enumerate}
\item Encodes the entities and relations existing in a knowledge base using a self-attention network to obtain Knowledge Graph Embeddings (KGEs). The embeddings thus obtained are dynamic and fully contextualized in nature.
\item Takes the encoded contextualized representations for entities and relations and generates a knowledge-aware representation for words. The representation obtained, which we also call "Global Representation" for words, can be augmented with the other underlying features to boost the NER model's performance.
\item Generates sentence embeddings using BERT by fusing task-specific information through NER tag embeddings.
\item And lastly, relies on a Transformer as its context encoder incorporating direction-aware, distance-aware, and un-scaled attention for enhanced encoder representation learning.
\end{enumerate}
To verify the effectiveness of our proposed model, we conduct our experiments on three publicly available datasets for NER. These are CoNLL 2003 \citep{DBLP:journals/corr/cs-CL-0306050}, CoNLL++ \citep{wang2019crossweigh} and OntoNotes v5 \citep{pradhan-etal-2013-towards}. Experimental results show that the global embeddings generated for every word using KARL, when used for feature augmentation, can result in significant performance gains of over 0.35-0.5 \emph{F}\textsubscript{1} on all the three NER datasets. Also, to validate the model's generalizability and applicability in a real-world setting, we generate the model's prediction on random texts taken from the web. Results suggest that incorporating world knowledge enables the model to make accurate predictions for every entity in the sentence.
\section{Related work}
The research community in NER moved from approaches using character and word representations \citep{yao2015, zhou2017, kuru2016} to sentence-level contextual representations \citep{yang2017, zhang2018}, and recently to document-level representations as proposed by \citet{qian2018} and \citet{akbik2018}. Expanding the scope of embeddings from character and word level to document level has shown significant improvements in the results for many NLP tasks, including NER \citep{luo2019hierarchical}. To expand the scope further, researchers have explored external knowledge bases to learn facts existing in the universe that may not be present in the training data \citep{annervaz2018learning, qizhen2020}.
Incorporating information present in Knowledge Graph is an emerging research topic in NLP.
While some methods focus on graph structure encoding \citep{lin-etal-2015-modeling, das-etal-2017-chains}, others focus on learning entity-relation embeddings \citep{wang2020coke, jiang-etal-2019-adaptive}.
\citet{zhong2015} proposed an alignment model for jointly embedding a knowledge base and a text corpus that achieved better or comparable performance on four NLP tasks: link prediction, triplet classification, relational fact extraction, and analogical reasoning.
\citet{xiao2015} proposed a generative embedding model, TransG, which can discover the latent semantics of a relation and leverage a mixture of related components for generating embedding. They also reported substantial improvements over the state-of-the-art baselines on the task of link prediction.
\citet{10.1145/3038912.3052675} presented a neural network to answer simple questions over large-scale knowledge graphs using a hierarchical word and character-level question encoder.
\citet{annervaz2018learning} leveraged world knowledge in training task-specific models and proposed a novel convolution-based architecture to reduce the attention space over entities and relations, outperforming other models on text classification and natural language inference tasks.
Despite producing state-of-the-art results in many NLP tasks, Knowledge Graphs are relatively unexplored for NER. \citet{qizhen2020} introduced a Knowledge-Graph Augmented Word Representation (KAWR). The proposed model encoded the prior knowledge of entities from an external knowledge base into the representation. Though KAWR performed better than its benchmark BERT \citep{bert}, the model underperformed compared to the SOTA models for NER.
\section{Knowledge Augmented Representation Learning Network}
In this section, we describe the proposed end-to-end model for the task of NER (KARL-Trans-NER).
\subsection{Knowledge Graph Embedding Model}
World knowledge is represented in a Knowledge Graph in the form of fact triplets, such as \emph{("Albert Einstein", "BornYear", "1879")}. As with any other representation technique, the information residing in these fact triplets must be encoded into a numerical representation before it can be used.
To take into account polysemy and learn graph embeddings as a function of the graph context, we take inspiration from CoKE \citep{wang2020coke} and train our knowledge graph embedding model on an entity prediction task using the idea of Masked Language Modeling \citep{bert}. \\
\noindent \textbf{Model Architecture}
\noindent Our KGE model is based on the Transformer architecture \citep{vaswani2017attention}; the corresponding architecture is shown in Figure 1. The training objective of our KG embedding module is inspired by that of CoKE. Since applying CoKE to a task like NER is not straightforward, we designed character-aware knowledge graph embeddings to capture the underlying representation of characters in a KG. For example, in the fact triplet \emph{"(BarackObama, HasChild, SashaObama)"}, the tokens do not appear the way words do in English sentences. Introducing a character-level representation layer in CoKE helps the model learn the intrinsic character-sequence representations of entities and relations alongside the word-sequence representation, yielding an enhanced representation of the knowledge graph. Moreover, since a knowledge graph is an ever-expanding resource, entity/relation-level embeddings alone cannot adapt to new entities, making it difficult to obtain embeddings for entities not observed while training the KGEs. We use two different character-sequence encoders, one for entities and one for relations. Also, instead of feeding sinusoidal positional embeddings with the input embeddings \citep{vaswani2017attention}, we adopt relative positional embeddings \citep{shaw-etal-2018-self, DBLP:journals/corr/abs-1901-02860}. Sinusoidal positional embeddings are distance-aware but unaware of directionality; our use of relative positional embeddings relies on the findings of \citet{yan2019tener}.\\
\noindent \textbf{Training Data Preparation}
\noindent We are given a Knowledge Graph composed of fact triplets as follows:
\begin{center}
$ KG = \{\langle s,r,o\rangle \mid s, o \in E, r \in R\} $
\end{center}
\noindent where $s$ and $o$ are the subject and object entities, respectively, $r$ is the relation between them, $E$ is the entity set, and $R$ is the relation set. A sequence of tokens, or a context, is created from each triplet as $\emph{s} \rightarrow \emph{r} \rightarrow \emph{o}$.
All the triplets in the knowledge graph are formulated in this manner to obtain a set of graph contexts as follows:
\begin{center}
$ S = \{(s \rightarrow r \rightarrow o) \mid s, o \in E, r \in R\} $
\end{center}
\noindent Next, we create the training set $T$ from $S$. This is done by replacing each object entity $o$ in $S$ with the "[MASK]" token and defining the object entity $o$ as the prediction label for the triplet as follows:
\begin{center}
$ T = \{(\langle s,r,\text{[MASK]}\rangle, o)\} $
\end{center}
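As a concrete illustration of the construction above, the following sketch (function and variable names are ours, not from the paper) turns each fact triplet into a masked graph context paired with its prediction label:

```python
# Sketch of the training-set construction described above: each fact
# triplet (s, r, o) becomes a masked context <s, r, [MASK]> with the
# object entity o kept aside as the prediction label.

def build_training_set(knowledge_graph):
    """knowledge_graph: iterable of (subject, relation, object) triplets."""
    training_set = []
    for s, r, o in knowledge_graph:
        context = (s, r, "[MASK]")          # masked graph context
        training_set.append((context, o))   # o is the label to predict
    return training_set

kg = [("BarackObama", "HasChild", "SashaObama"),
      ("AlbertEinstein", "BornYear", "1879")]
examples = build_training_set(kg)
```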
\noindent \textbf{Model Training}
\begin{figure}[t]
\includegraphics[width=0.49\textwidth]{rdf_model.png}
\caption{The overall architecture of the Knowledge Graph Embedding Learning Network.}
\end{figure}
\noindent Consider an input sequence $T$ = $(t_{1}, t_{2}, t_{3})$.
First, each token $t_{i}$ is passed through its corresponding character-level encoder to obtain its character-sequence representation. For each token $t_{i}$ in the input sequence $T$, we also obtain its word-level representation, which is tuned during training. The character-sequence representation $(x_{i}^{char})$ and the word-level representation $(x_{i}^{word})$ are then concatenated to obtain the element, or token, embedding $(x_{i}^{ele})$ for $t_{i}$.
$$x_{i}^{ele} = [x_{i}^{word} ; x_{i}^{char}]$$
The final input embedding $(h_{i}^{0})$, which is fed to the transformer encoder for token $t_{i}$, is obtained as the element-wise sum of its element embedding $(x_{i}^{ele})$ and its relative positional embedding $(x_{i}^{pos})$.
\begin{center}
$h_{i}^{0} = x_{i}^{ele} \oplus x_{i}^{pos}$
\end{center}
Once the input representation is generated, it is fed to a transformer encoder with $L$ successive layers. The hidden state for token $t_{i}$ at layer $j$ is denoted as $h_{i}^{j}$ and is given by:
\begin{center}
$h_{i}^{j}$ = TransEnc($h_{i}^{j-1}$), $j = 1,2, \dots, L$
\end{center}
The hidden representations at the final layer, $\{h_{1}^{L}, h_{2}^{L}, h_{3}^{L}\}$, form the output of the transformer encoder. From these, we select the representation corresponding to the "[MASK]" token, i.e., $h_{3}^{L}$, and feed it through a feedforward layer followed by a softmax classification layer to predict the third token, i.e., the object entity $t_{3}$ in $(t_{1}, t_{2}, t_{3})$.
Mathematically, the feedforward and softmax classification layers are defined as follows:
\begin{center}
$f = Wh_{3}^{L} + b$
\end{center}
\begin{center}
$o_{k} = \frac{\exp(f_{k})}{\sum_{k'}\exp(f_{k'})}$
\end{center}
Here, $W \in \mathbb{R}^{V\times D}$ and $b \in \mathbb{R}^{V\times 1}$ are learnable parameters of the feedforward layer, $V$ is the entity vocabulary size, $D$ is the hidden size, $f$ is the output of the feedforward layer, and $o$ is the vector of predicted softmax probabilities over all entities in the entity vocabulary.
The model is trained using the Adam optimizer \citep{kingma2017adam}, and the training loss is defined as the cross-entropy \citep{10.5555/646815.708603} between the predicted entity probabilities $o$ and the one-hot ground-truth entity $p$ as follows:
\begin{center}
$loss$ = $ - \sum_{k}^{}p_{k}\log{o_{k}}$
\end{center}
Here, $p_{k}$ and $o_{k}$ are the $k^{th}$ components of $p$ and $o$ respectively.
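The prediction head and loss above can be sketched as follows in NumPy; the sizes are illustrative and a random vector stands in for the encoder output at the [MASK] position:

```python
import numpy as np

# Minimal sketch of the prediction head: the [MASK] hidden state h3 is
# projected to entity-vocabulary logits, softmaxed into probabilities,
# and scored with the cross-entropy loss against a one-hot label.
rng = np.random.default_rng(0)
V, D = 5, 8                      # entity vocab size, hidden size
h3 = rng.standard_normal(D)      # stands in for h_3^L at [MASK]
W = rng.standard_normal((V, D))  # feedforward weight
b = rng.standard_normal(V)       # feedforward bias

f = W @ h3 + b                   # feedforward layer
o = np.exp(f - f.max())          # numerically stable softmax
o = o / o.sum()                  # predicted entity probabilities

p = np.zeros(V)
p[2] = 1.0                       # one-hot ground-truth entity
loss = -np.sum(p * np.log(o))    # cross-entropy loss
```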
\subsection{NER Model}
The architecture of the proposed model is shown in Figure 2. We describe each component of our NER model in detail below.
\subsubsection{Input Representation Layer}
Given an input sequence of tokens $S = (x_{1}, x_{2}, \dots, x_{n})$, we first obtain the embeddings by generating features in six different ways of varying scope. These features are defined below:\\
\noindent \textbf{Word-level Representations:} For word-level features, we use the pre-trained 100-dimensional GloVe embeddings \citep{pennington-etal-2014-glove} and tune them during training. \\
\noindent \textbf{Character-level Representations:} Using word-level representations alone is typically not considered the best approach to NER \citep{santos2015boosting} due to the out-of-vocabulary (OOV) problem. To address this, many neural NER systems have shown the effectiveness of incorporating character-level representations for words \citep{ma2016endtoend, chiu-nichols-2016-named}. In addition to mitigating the OOV problem, they also help the model understand the underlying structure of words, including the arrangement of characters and the distinctive features named entities follow, such as capitalization of the first letter. Most works in the literature prefer a CNN-based character-level encoder over an LSTM-based encoder \citep{li-etal-2017-leveraging-linguistic}, owing to the CNN's parallelization capabilities and similar or even better performance. To enrich our NER model with character-level features, we adopt IntNet \citep{intnet}, a funnel-shaped convolutional neural network. IntNet combines kernels of various sizes to extract different n-grams from words and learn an enhanced representation of words.
The entire network of IntNet comprises a series of convolution blocks, and each block consists of two layers. The first layer is a basic $N \times 1$ convolution on the input. The second layer applies convolutions of different kernel sizes on the first layer's output, concatenates them, and feeds it to the next convolution block.\\
\begin{figure}
\centering
\includegraphics[width=0.40\textwidth, height = 6cm]{NERmodel.png}
\caption{The detailed architecture of the proposed KARL-Trans-NER model.}
\end{figure}
\noindent \textbf{Context-level Representations:} To provide the model with contextual knowledge, we use the widely adopted pre-trained contextualized embedding model BERT \citep{bert}. As BERT generates embeddings at the word-piece level, we take the average of all the word pieces of a word to obtain its contextualized word embedding. \\
\noindent \textbf{Sentence-level Representations:} Most sentence embedding learning techniques rely on word-level embeddings to generate sentence-level features \citep{8639211}. For computational ease, some methods compute the average of all the token embeddings in a sentence \citep{annervaz2018learning, coates-bollegala-2018-frustratingly} to obtain the sentence representation.
\begin{center}
$s$ = $\frac{\sum_{i} w_{i}}{n}$
\end{center}
where $n$ is the sequence length and $w_{i}$ is the word embedding of token $i$.
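The averaging baseline above amounts to a single mean over token embeddings, as in this tiny NumPy illustration with made-up values:

```python
import numpy as np

# Averaging baseline for sentence embeddings: the sentence vector is
# the mean of its token embeddings, s = (sum_i w_i) / n.
w = np.array([[1.0, 2.0],    # embedding of token 1
              [3.0, 4.0],    # embedding of token 2
              [5.0, 6.0]])   # embedding of token 3
s = w.sum(axis=0) / len(w)   # sentence embedding
```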
Averaging word embeddings neglects word-order information. It also assumes an equal contribution from each word towards the sentence representation. Therefore, it is not the best approach to obtain sentence embeddings.
Sentence transformers proposed by \citet{reimers2019sentencebert} generate sentence embeddings using contextualized word embeddings, making them adaptive to new contexts. Though sentence transformers have provided SOTA results in many NLP tasks \citep{ke2020sentilare, Li_2020}, we believe that these representations could be enhanced by incorporating task-specific information. To achieve this, we adopt the techniques proposed by \citet{wang2018joint} and incorporate task-specific information by learning the NER label embeddings. We discuss our approach of generating sentence embeddings using the contextualized word embeddings obtained from BERT in detail below.
We begin by first taking the contextualized representations of words obtained from BERT and linearly projecting them to a different space. Let us denote the obtained representations by $c = (c_{1}, c_{2},\dots, c_{n})$, where $c \in \mathbb{R}^{n\times d}$, $c_{i}$ denotes the contextualized embedding of word $i$, $n$ is the number of words in the sentence and $d$ is the dimension BERT embeddings were projected to. The next step involves embedding the labels (PER, LOC etc.) to the same space. The unique set of labels in the data are embedded into dense representations. Let these representations be defined as $l = (l_{1}, l_{2}, \dots, l_{m})$. Here, $l \in \mathbb{R}^{m\times d}$, $m$ denotes the number of tag labels and $d$ is the hidden size.
Next, we apply the label embedding attention over the word representations. The cosine similarity between each token embedding $c_{i}$ and label embedding $l_{j}$ is treated as the compatibility function.
\begin{center}
$simi(c_{i}, l_{j}) = \frac{c_{i}^{T}l_{j}}{||c_{i}|| ||l_{j}||}$
\end{center}
A convolution operation is applied to the similarities obtained above. This enables the model to capture the behavior of the same label in its neighboring words. In other words, the application of a convolution layer captures the relative spatial information for a particular label over a phrase.
Considering a phrase of length $2k+1$ where $k$ is the kernel size, the convolution for token $i$ is applied over $simi_{[i-k:i+k, :]}$ which is followed by max-pooling. The attention weights for the entire sentence are generated after that by applying softmax operation over the scores. Finally, sentence-level representation $s$ is computed as the weighted sum of the token embeddings over the attention weights. This is mathematically defined as follows:
\begin{center}
$scores = max(W^{T} simi_{[i-k:i+k, :]} + b) $ \\
$\alpha = softmax(scores)$ \\
$s = \sum_{i=1}^{n} \alpha_{i}c_{i}$
\end{center}
Here, $W \in \mathbb{R}^{2k+1}$ and $b \in \mathbb{R}^{m}$ are learnable parameters, $\alpha \in \mathbb{R}^{n}$ and $s \in \mathbb{R}^{d}$.\\
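The label-attention computation above can be sketched as follows in NumPy. Random values stand in for the projected BERT embeddings and label embeddings, and the padding of the similarity matrix at the sentence boundaries is our own assumption (the text does not specify boundary handling):

```python
import numpy as np

# Sketch of the label-embedding attention: cosine similarity between
# word and label embeddings, a convolution over a (2k+1)-token window
# with max-pooling over labels, softmax attention weights, and a
# weighted sum of word embeddings as the sentence representation.
rng = np.random.default_rng(1)
n, m, d, k = 6, 4, 8, 1           # words, labels, hidden size, half-window
c = rng.standard_normal((n, d))   # projected contextual word embeddings
l = rng.standard_normal((m, d))   # label embeddings

# cosine-similarity compatibility matrix simi[i, j]
simi = (c @ l.T) / (np.linalg.norm(c, axis=1, keepdims=True)
                    * np.linalg.norm(l, axis=1))

Wc = rng.standard_normal(2 * k + 1)   # convolution weights, W in R^{2k+1}
bc = rng.standard_normal(m)           # bias, b in R^{m}

pad = np.pad(simi, ((k, k), (0, 0)))  # zero-pad so every token has a window
scores = np.array([(Wc @ pad[i:i + 2 * k + 1] + bc).max()  # max-pool over labels
                   for i in range(n)])
alpha = np.exp(scores - scores.max())
alpha = alpha / alpha.sum()           # attention weights over tokens
s = alpha @ c                         # sentence-level representation
```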
\noindent \textbf{Document-level Representations:} To obtain Document-level features, we adopt a key-value Memory Network \citep{weston2015memory} that generates document-aware representations of every unique word in the training data. Specifically, we refer to the model adopted by \citet{luo2019hierarchical} to create features at a document level. \\
\noindent \textbf{Global-level Representations:}
\citet{annervaz2018learning} was among the first attempts to augment learning models with structured graph knowledge. They tested their model in a sentence classification setting. To retrieve entities and relations from the knowledge base, they first clustered entities and relations. A convolution network was used to obtain the cluster representation. Though the technique did work well for sentence classification, we identified a couple of limitations in their approach.
The first lies in the clustering step. To retrieve relevant entities and relations, clustering must be accurate; moreover, clustering generally results in a loss of information. Cluster representations are assumed to generalize over their constituents and, being generalized, can neglect essential information carried by the cluster's individual objects. Second, they augmented their model with graph knowledge at the sentence level. For sequence labeling, augmentation at the sentence level is not effective, as we require precise entity-level information from the knowledge graph. We address the limitations mentioned above and propose a more efficient and reliable technique for incorporating relevant information from the knowledge graph.
Instead of performing clustering, we first shortlist entities and relations that are relevant to our task. The shortlisted entities and relations are passed through the already trained knowledge graph embedding module. Here, we remove the classification layer of the knowledge graph embedding module and keep the representations generated by the transformer network to create the fact triplet embeddings. Next, we discuss the entity and relation shortlisting step. \\
\noindent \textbf{Entity Shortlisting:} One caveat of similarity-based methods is that their performance heavily depends on the modeling of the features adopted for similarity estimation, which can introduce errors. Therefore, we adopt n-gram based matching with the input document to shortlist entities from the Knowledge Graph. The idea is to generate a candidate set for each entity in the document. More specifically, we follow the rules proposed by \citet{10.1145/3038912.3052675} for candidate set generation. As a particular entity in the document can match multiple entities in the knowledge graph, we first rank all matched entities by the number of triplets in the knowledge graph in which they appear as the subject. The top $k_{1}$ entities with the highest rank are selected and added to the candidate entity set $C_{w}$ of the current word $w$. \\
\noindent \textbf{Relation Shortlisting:} After generating the candidate entity set $C_{w}$ for a word $w$, we extract the $k_{2}$ most frequent relations that emerge from entities in the candidate entity set $C_{w}$. This results in the Entity Relation set $ER_{w}$ for a word $w$ as follows:
\begin{equation*}
ER_{w} = \Bigg\{
\begin{matrix}
(e_{1}, r_{1}) & (e_{1}, r_{2}) & \dots & (e_{1}, r_{k_{2}})\\
(e_{2}, r_{1}) & (e_{2}, r_{2}) & \dots & (e_{2}, r_{k_{2}})\\
\vdots & \vdots & \ddots & \vdots\\
(e_{k_{1}}, r_{1}) & (e_{k_{1}}, r_{2}) & \dots & (e_{k_{1}}, r_{k_{2}})
\end{matrix}
\Bigg\}
\end{equation*}
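The two shortlisting steps can be sketched as follows; the n-gram candidate generation itself is elided, and the function name and toy knowledge graph are ours:

```python
from collections import Counter

# Sketch of entity and relation shortlisting: rank the KG entities
# matched to a word by how often they appear as subjects, keep the top
# k1 as the candidate set C_w, then keep the k2 most frequent relations
# emerging from those candidates to form the pairs in ER_w.

def shortlist(word_matches, kg, k1=2, k2=2):
    """word_matches: KG entities matched to the word by n-gram rules."""
    subj_counts = Counter(s for s, r, o in kg)
    C_w = sorted(word_matches, key=lambda e: -subj_counts[e])[:k1]
    rel_counts = Counter(r for s, r, o in kg if s in C_w)
    rels = [r for r, _ in rel_counts.most_common(k2)]
    return [(e, r) for e in C_w for r in rels]   # entity-relation set ER_w

kg = [("Paris", "capitalOf", "France"),
      ("Paris", "locatedIn", "France"),
      ("Paris", "capitalOf", "IleDeFrance"),
      ("ParisHilton", "occupation", "actor")]
ER = shortlist(["Paris", "ParisHilton"], kg)
```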
After shortlisting relevant entities and relations, we augment the current word with world knowledge. We do this by first encoding all the triplets in the set $ER_{w}$ using the pre-trained knowledge graph embedding model. Each entity-relation pair in $ER_{w}$ is appended with the [MASK] token and fed to the transformer network. The hidden states obtained from the last layer of the transformer network act as the contextualized representations of these triplets. Finally, we apply soft attention over the hidden states obtained from the transformer encoder, treating the input contextualized embedding obtained from BERT as the query vector for augmentation at the word level. For a word $w$, this is mathematically defined as follows:
\begin{center}
$I = [h_{w,1}^{L}; h_{w,2}^{L}; h_{w,3}^{L}] $\\
$Q, K, V = W^{q}B_{w}, W^{k}I, W^{v}I $\\
$A_{i} = QK_{i}^{T}$\\
$g_{w} = \sum_{i}Softmax(\frac{A_{i}}{\sqrt{3d}})V_{i}$
\end{center}
Here, $W^{q} \in \mathbb{R}^{3d\times h}$, $W^{k} \in \mathbb{R}^{n \times 3d}$ and $W^{v} \in \mathbb{R}^{n \times 3d}$ are learnable parameters, $B_{w} \in \mathbb{R}^{h}$ is the contextualized word embedding of word $w$ obtained from BERT, $I \in \mathbb{R}^{n\times3d}$ is the concatenation of the individual hidden states $\{h_{w,i}^{L}\}_{i=1}^{3}$ corresponding to the last layer of the transformer encoder, $n$ is the number of triplets in the set $ER_{w}$, $d$ is the output dim of individual hidden state from the transformer encoder and lastly, $g_{w} \in \mathbb{R}^{3d}$ is the global representation of word $w$.
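A hedged sketch of this attention step in NumPy follows. Random values stand in for the encoder outputs and the BERT embedding, and we simplify the projection shapes so that query and keys share a dimension; only the overall flow (query from BERT, keys/values from the encoded triplets, scaled soft attention, weighted sum) follows the text:

```python
import numpy as np

# Knowledge-attention sketch: the BERT embedding of word w is projected
# to a query; the concatenated triplet hidden states I give keys and
# values; scaled dot-product attention yields the global representation.
rng = np.random.default_rng(2)
n_trip, d, h = 4, 6, 8                      # triplets in ER_w, state dim, BERT dim
I = rng.standard_normal((n_trip, 3 * d))    # concatenated hidden states per triplet
B_w = rng.standard_normal(h)                # BERT embedding of word w

Wq = rng.standard_normal((3 * d, h))        # query projection
Wk = rng.standard_normal((3 * d, 3 * d))    # key projection
Wv = rng.standard_normal((3 * d, 3 * d))    # value projection

Q = Wq @ B_w                  # query in R^{3d}
K = I @ Wk.T                  # one key per triplet
V = I @ Wv.T                  # one value per triplet

A = K @ Q / np.sqrt(3 * d)    # scaled attention scores A_i
w_att = np.exp(A - A.max())
w_att = w_att / w_att.sum()   # softmax over triplets
g_w = w_att @ V               # global representation of word w
```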
This concludes our input representation layer. Its output is the token-level concatenation of the features described above, yielding a word representation that is 1) character-aware, 2) word-aware, 3) context-aware, 4) sentence-aware, 5) document-aware, and 6) knowledge-aware.
\subsubsection{Encoder Layer}
As the context encoder, we use the fully-connected self-attention network, i.e., the Transformer. For NER, \citet{guo2019startransformer} found the Transformer encoder to perform poorly compared to a recurrent context encoder such as the LSTM \citep{articlelstm}. \citet{yan2019tener} showed that using un-scaled attention and relative positional encodings in place of sinusoidal positional encodings can significantly improve the Transformer's performance. We therefore adopt their proposed modifications in our Transformer-based context encoder.
\subsubsection{Decoder Layer}
The conditional random field (CRF) \citep{10.5555/645530.655813} is a widely adopted decoder in many state-of-the-art NER models \citep{ma2016endtoend, lample-etal-2016-neural}. A CRF models dependencies between the output tags, which yields better predictions than an independent softmax layer. The Viterbi algorithm is applied in the decoding phase to obtain the output label sequence with the highest probability among all valid label sequences.
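The Viterbi decoding step can be sketched as follows; the emission and transition scores are made-up toy values, and for brevity the sketch omits the label-transition constraints a full CRF decoder would enforce:

```python
import numpy as np

# Viterbi decoding: given per-token emission scores and a tag-to-tag
# transition matrix, recover the highest-scoring tag sequence by
# dynamic programming with backpointers.

def viterbi(emissions, transitions):
    """emissions: (T, K) token-tag scores; transitions: (K, K) tag-tag scores."""
    T, K = emissions.shape
    score = emissions[0].copy()          # best score ending in each tag
    back = np.zeros((T, K), dtype=int)   # backpointers
    for t in range(1, T):
        # cand[i, j] = score of arriving at tag j from previous tag i
        cand = score[:, None] + transitions + emissions[t]
        back[t] = cand.argmax(axis=0)
        score = cand.max(axis=0)
    path = [int(score.argmax())]         # best final tag
    for t in range(T - 1, 0, -1):        # follow backpointers
        path.append(int(back[t][path[-1]]))
    return path[::-1]

em = np.array([[2.0, 0.0], [0.0, 1.0], [2.0, 0.0]])   # toy emission scores
tr = np.array([[1.0, -1.0], [-1.0, 1.0]])             # toy transition scores
path = viterbi(em, tr)
```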
\section{Experiments and Results}
We evaluated our models on three NER tasks. To reduce the impact of randomness, we conducted each experiment three times and report the average span-level F1 score and standard deviation. Starting with word-level features, we conducted our experiments incrementally, augmenting features in increasing order of the scope of information they model. To verify the performance of the trained models in a real-world scenario and compare their adaptability to unseen entities, we also generated predictions on two random pieces of text taken from the web. In the following, we discuss the data statistics, model details, results, and performance on unseen entities.\\
\begin{table}[t]
\resizebox{0.49\textwidth}{!}
{%
\begin{tabular}{r|c|c|c|c|c}
\toprule
\textbf{Dataset} & \textbf{Type} & \textbf{Train} & \textbf{Test} & \textbf{Dev} & \textbf{Tags} \\
\midrule
\multirow{2}{*}{CoNLL 2003} & Sentence & 14041 & 3453 & 3250 & \multirow{2}{*}{4}\\
& Token & 203621 & 46435 & 51362 \\
\midrule
\multirow{2}{*}{OntoNotes v5} & Sentence & 59924 & 8262 & 8528 & \multirow{2}{*}{18}\\
& Token & 1088503 & 152728 & 147724 \\
\bottomrule
\end{tabular}%
}
\caption{Statistics of the CoNLL 2003 and OntoNotes v5 datasets. The statistics for the CoNLL++ dataset are the same as those of CoNLL 2003.}
\end{table}
\noindent \textbf{NER datasets and Knowledge Graph}
\noindent The statistics of the three NER datasets we experimented on are listed in Table 1. Wikidata is one of the largest sources of real-world knowledge, covering concepts from various domains. We filter Wikidata and use only the fact triplets relevant to our task, leaving approximately 10 million of the roughly 400 million fact triplets present in Wikidata.\\
\noindent \textbf{Model Hyperparameters}
\noindent We tuned all the model hyper-parameters manually and list the respective search ranges for all hyper-parameters in Table 2. \\
\begin{table}[t]
\resizebox{0.49\textwidth}{!}
{%
\begin{tabular}{p{0.20\textwidth} p{0.20\textwidth}||p{0.20\textwidth}p{0.20\textwidth}}
\toprule
\multicolumn{2}{c||}{\textbf{NER-Transformer}} & \multicolumn{2}{c}{\textbf{Knowledge-Graph Transformer}} \\
\midrule
Layers & ~\hfill~[2] ~\hfill~ & Layers & ~\hfill~[2] ~\hfill~ \\
\midrule
Learning Rate & ~\hfill~[0.001, 0.0009] ~\hfill~ & Learning Rate & ~\hfill~[0.0005, 0.0003] ~\hfill~ \\
\midrule
Heads & ~\hfill~[8, 12, 14] ~\hfill~ & Heads & ~\hfill~[4, 8] ~\hfill~\\
\midrule
Head Dim. & ~\hfill~ [64, 96, 128] ~\hfill~ & Head Dim. & ~\hfill~ [256, 128] ~\hfill~\\
\midrule
FC Dropout & ~\hfill~ [0.40] ~\hfill~ & FC Dropout & ~\hfill~ [0.40] ~\hfill~ \\
\midrule
Attn. Dropout & ~\hfill~ [0.15] ~\hfill~ & Attn. Dropout & ~\hfill~ [0.25] ~\hfill~ \\
\midrule
Optimizer & ~\hfill~ [SGD] ~\hfill~ & Optimizer & ~\hfill~ [Adam] ~\hfill~ \\
\midrule
\midrule
\multicolumn{4}{c}{\textbf{Other Hyperparameters and Model Settings}}\\
\midrule
\multicolumn{1}{c}{IntNet Layers} & \multicolumn{1}{c|}{[5]} &
\multicolumn{1}{c}{IntNet Kernel Sizes} & \multicolumn{1}{c}{[[3,4,5]]}\\
\midrule
\multicolumn{1}{c}{IntNet Embedding Dim.} & \multicolumn{1}{c|}{[16, 32, 64]} &
\multicolumn{1}{c}{IntNet Hidden Dim.} & \multicolumn{1}{c}{[8, 16, 32]}\\
\midrule
\multicolumn{1}{c}{Momentum} & \multicolumn{1}{c|}{[0.9]} & \multicolumn{1}{c}{Epochs} & \multicolumn{1}{c}{[100]} \\
\midrule
\multicolumn{1}{c}{Sentence Transformer} & \multicolumn{1}{c|}{[stsb-bert-large]} & \multicolumn{1}{c}{Bert pre-trained} & \multicolumn{1}{c}{[bert-large-cased]} \\
\midrule
\multicolumn{1}{c}{Glove Embedding} & \multicolumn{1}{c|}{[glove-en-100d]} & & \\
\bottomrule
\end{tabular}%
}
\caption{Hyper-parameter search ranges for the NER and Knowledge Graph Transformer encoders, along with other hyperparameters and model settings.}\label{tab:hyperparams}
\end{table}
\noindent \textbf{Results and Analysis}
\noindent We report the results achieved by our models in Table 3. Starting with word-level features, we observe that performance on every dataset increases consistently as further features are added. For the CoNLL datasets, the F1 lift generated by each feature augmentation step follows the trend \emph{Char $>$ Context $>$ Global $>$ Doc $>$ Sent}; the corresponding trend for OntoNotes v5 is \emph{Context $>$ Char $>$ Global $>$ Doc $>$ Sent}.
\begin{table}[h]
\resizebox{0.49\textwidth}{!}
{
\begin{tabular}{p{0.20\textwidth}||c|c|c|c|c|c}
\toprule
\multicolumn{1}{c||}{\textbf{Model}} & \multicolumn{2}{c|}{\textbf{CoNLL 2003}} & \multicolumn{2}{c|}{\textbf{CoNLL++}} & \multicolumn{2}{c}{\textbf{OntoNotes}}\\
\midrule
\multicolumn{1}{c||}{LUKE \citep{yamada2020luke}} & \multicolumn{2}{c|}{\underline{94.3}} & \multicolumn{2}{c|}{-} & \multicolumn{2}{c}{-}\\
\midrule
\multicolumn{1}{c||}{\multirow{2}{*}{CMV \citep{luoma2020exploring}}} & \multicolumn{2}{c|}{{93.74 (0.25)}{$\bot$}} & \multicolumn{2}{c|}{-} &\multicolumn{2}{c} {-}\\
\multicolumn{1}{c||}{} & \multicolumn{2}{c|}{93.44 (0.06)} & \multicolumn{2}{c|}{-} & \multicolumn{2}{c}{-}\\
\midrule
\multicolumn{1}{c||}{ACE \citep{wang2020automated}} & \multicolumn{2}{c|}{{93.6}{$\bot$}} & \multicolumn{2}{c|}{-} & \multicolumn{2}{c}{-}\\
\midrule
\multicolumn{1}{c||}{CL-KL \citep{wang2021improving}} & \multicolumn{2}{c|}{{93.56}{$\bot$}} & \multicolumn{2}{c|}{\underline{94.81}} & \multicolumn{2}{c}{-}\\
\midrule
\multicolumn{1}{c||}{CrossWeigh \citep{wang2019crossweigh}} & \multicolumn{2}{c|}{{93.43}{$\bot$}} & \multicolumn{2}{c|}{{94.28}{$\bot$}} & \multicolumn{2}{c}{-}\\
\midrule
\multicolumn{1}{c||}{BERT-MRC+DSC \citep{li2020dice}} & \multicolumn{2}{c|}{93.33 (0.29)} & \multicolumn{2}{c|}{-} & \multicolumn{2}{c}{\underline{{92.07 (0.96)}}{$\bot$}} \\
\midrule
\multicolumn{1}{c||}{Biaffine-NER \citep{yu2020named}} & \multicolumn{2}{c|}{{93.5} $\bot$ } & \multicolumn{2}{c|}{-} & \multicolumn{2}{c}{91.30} \\
\midrule
\multicolumn{1}{c||}{KAWR \citep{qizhen2020}} & \multicolumn{2}{c|}{91.80 (0.24)} & \multicolumn{2}{c|}{-} & \multicolumn{2}{c}{-} \\
\midrule
\midrule
\multicolumn{1}{c||}{\textbf{KARL-Trans-NER}} & F1 & $\triangle$ & F1 & $\triangle$ & F1 & $\triangle$ \\
\midrule
\multicolumn{1}{c||}{Word-level} & 90.12 (0.02) & - & 90.90 (0.01) & - & 86.97 (0.01) & -\\
\multicolumn{1}{c||}{+ Char-level} & 91.60 (0.01) & 1.48 & 92.53 (0.01) & 1.63 & 88.40 (0.08) & 1.43 \\
\multicolumn{1}{c||}{+ Context-level} & 92.92 (0.02) & 1.32 & 93.77 (0.05) & 1.24 & 90.13 (0.02) & 1.73 \\
\multicolumn{1}{c||}{+ Sentence-level} & 93.10 (0.07) & 0.18 & 93.90 (0.07) & 0.13 & 90.42 (0.03) & 0.29 \\
\multicolumn{1}{c||}{+ Document-level} & 93.38 (0.04) & 0.28 & 94.17 (0.03) & 0.27 & 90.91 (0.04) & 0.49 \\
\multicolumn{1}{c||}{+ Global-level} & \textbf{93.74 (0.05)} & 0.36 & \textbf{94.52 (0.06)} & 0.35 & \textbf{91.41 (0.06)} & 0.50\\
\midrule
\multicolumn{1}{c||}{Context-level + Global-level} & 92.44 (0.09) & - & 92.98 (0.11) & - & - & - \\
\bottomrule
\end{tabular}
}
\caption{F1 scores on CoNLL 2003, CoNLL++ and OntoNotes v5. Results marked with {$\bot$} used both Train and Dev sets for model training. The SOTA results are underlined and the best results of our model are marked in bold. Standard deviation is written in parenthesis. {$\triangle$} denotes change in F1 by feature addition.}
\end{table}
In addition, we observe that with access to world knowledge, the proposed model outperforms most previously proposed systems in the literature. To better evaluate the effectiveness of our approach, we also compare our results with KAWR \citep{qizhen2020}, which likewise leveraged world knowledge for NER. To the best of our knowledge, it is the only prior work that utilized knowledge graph embeddings on the datasets used here. Comparing their reported results with ours, we observe an approximate lift of \emph{2 F\textsubscript{1}} points.
As the results reported by KAWR only utilized Context-level and Knowledge-level embeddings, we conducted our experiments with KARL using the exact configuration of the embeddings. Experimental results show that KARL achieved far better performance as compared to KAWR, outperforming it by approximately \emph{0.65} units on F\textsubscript{1} on the CoNLL 2003 dataset, which indicates the effectiveness of our proposed model in incorporating world knowledge. \\
\noindent \textbf{Evaluation on Unseen Entities}
\begin{table}[]
\resizebox{0.49\textwidth}{!}
{%
\begin{tabular}{@{}r|p{0.28\textwidth}|c|p{0.25\textwidth}|p{0.25\textwidth}@{}}
\toprule
\textbf{S.No} & ~\hfill~\textbf{Text}~\hfill~ & ~\hfill~\textbf{Model} ~\hfill~ & ~\hfill~\textbf{Predictions} ~\hfill~ & ~\hfill~\textbf{Predictions (lowercased)} ~\hfill~ \\
\midrule
\multirow{11}{*}{1} & \multirow{6}{*}{\parbox{0.28\textwidth}{[SpaceX (\emph{ORG})] is an aerospace manufacturer and space transport services company headquartered in [California (\emph{LOC})].}} & Word & None & None\\
\cline{3-5}
& & \multirow{2}{*}{+ Char} & SpaceX (PER) & \multirow{2}{*}{None} \\
& & & California (LOC) &\\
\cline{3-5}
& & \multirow{2}{*}{+ Context} & SpaceX (ORG) & SpaceX (MISC) \\
& & & California (LOC) & California (LOC)\\
\cline{3-5}
& & \multirow{2}{*}{+ Sent} & SpaceX (ORG) & SpaceX (MISC)\\
& & & California (LOC) & California (LOC) \\
\cline{3-5}
& & \multirow{2}{*}{+ Doc} & SpaceX (ORG) & SpaceX (MISC)\\
& & & California (LOC) & California (LOC)\\
\cline{3-5}
& & \multirow{2}{*}{+ Global} & SpaceX (ORG) & SpaceX (ORG)\\
& & & California (LOC) & California (LOC)\\
\midrule
\multirow{13}{*}{2} & \multirow{6}{*}{\parbox{0.28\textwidth}{[Liverpool (\emph{ORG})] suffered an upset first time home league defeat of the season, beaten 1 by a [Guy Whittingham (\emph{PER}]) goal for [Sheffield Wednesday (\emph{ORG})].}} & Glove & None & None \\
\cline{3-5}
& & \multirow{1}{*}{+ Int} & Liverpool (\emph{ORG}) & None\\
\cline{3-5}
& & \multirow{2}{*}{+ Bert} & Liverpool (\emph{ORG}) & Liverpool (\emph{PER}) \\
& & & Sheffield (\emph{PER}) & Sheffield (\emph{PER}) \\
\cline{3-5}
& & \multirow{3}{*}{+ Sent} & Liverpool (\emph{ORG}) & Liverpool (\emph{PER})\\
& & & Sheffield (\emph{PER}) & Sheffield (\emph{PER}) \\
& & & Whittingham (\emph{PER}) & Whittingham (\emph{PER}) \\
\cline{3-5}
& & \multirow{3}{*}{+ Doc} & Liverpool (\emph{ORG}) & Liverpool (\emph{PER})\\
& & & Sheffield (\emph{PER}) & Sheffield (\emph{PER})\\
& & & Whittingham (\emph{PER}) & Whittingham (\emph{PER})\\
\cline{3-5}
& & \multirow{3}{*}{+ KG} & Liverpool (\emph{ORG}) & Liverpool (\emph{ORG})\\
& & & Sheffield Wednesday (\emph{ORG}) & Sheffield Wednesday (\emph{ORG})\\
& & & Guy Whittingham (\emph{PER}) & Guy Whittingham (\emph{PER})\\
\bottomrule
\end{tabular}%
}
\caption{Predictions made by each of the six models we trained. The "Predictions" column lists the entities and tags predicted by the model on the corresponding Text. "Predictions (lowercased)" lists the predictions on the lowercased version of the Text.}
\end{table}
\noindent Next, we test each model's performance on unseen entities. We take the models trained on the CoNLL 2003 dataset and predict named entities for random pieces of text taken from the web, which we annotated manually. The entities identified and the entity tags predicted by these models for two such sentences are shown in the "Predictions" column of Table 4. We observe that the word-level model failed to identify any named entity in the first sentence. With the introduction of character-level features, the model made one classification error, classifying "SpaceX" as a person rather than an organisation. After augmenting features at the context level, all subsequent models generated accurate predictions, identifying all named entities and their corresponding tags precisely.
In the second sentence, the word-level model again failed to identify any named entities. Although the model did start to identify parts of named entities with further feature augmentation, the majority of the predicted tag labels were wrong. For instance, the "+Bert", "+Sent" and "+Doc" models misclassified "Sheffield" as a person. It is not until the augmentation of world knowledge that we start observing accurate predictions. The Global model, as before, achieved an accuracy of 100\% in named entity identification and tag labeling.
To introduce some complexity, we repeated the above experiments on the same two sentences, this time entirely lowercased, to verify the models' sensitivity to casing. As shown in the table, the predictions made by each of the models on lowercased inputs varied significantly, and the models committed more entity misclassifications than before. However, the predictions made by the knowledge-augmented model remained unaltered, which verifies the applicability and adaptability of our proposed method to real-world scenarios where raw text is not guaranteed to carry any specific formatting.
\section{Conclusion and Future Work}
This work proposed a novel world-knowledge augmentation technique that leveraged large knowledge bases, represented as fact triplets, and successfully extracted relevant information for word-level augmentation. The model was trained and tested in an NER setting. Experimental results showed that knowledge-level representation learning outperformed most NER systems in the literature and made the model highly applicable to real-world scenarios, accurately predicting entities in arbitrary pieces of text.
Since we augmented features at the word level, we believe our method could facilitate many other NLP tasks, such as chunking, word sense disambiguation and question answering. As future work, we therefore plan to test the applicability of the proposed method on other NLP tasks; we expect that other systems could leverage it as a general knowledge-representation learning tool. Moreover, as one of the few works in this direction, it leaves ample scope for improvement. For instance, the Knowledge Graph Embedding model was trained separately on a Masked Language Modelling task, and the trained model was then applied to the task at hand. This prevented the knowledge module from interacting with, and learning from, the downstream task (NER in our case); we believe that jointly training the NER model with the knowledge-representation module could be more beneficial. Another possible improvement lies in the entity-shortlisting step. Although the technique is quite reliable, it does not consider any semantic information about the entities: different entities in a knowledge base can be highly correlated with each other and yet have different names. We therefore plan to refine the entity-shortlisting technique for more accurate and robust shortlisting.
\section*{Abstract}
{\bf
The sensitivity of particle-level fiducial cross section measurements from ATLAS, CMS and LHCb to a leptophobic top-colour model is studied. The model has previously been the subject of resonance searches. Here we compare it directly to state-of-the-art predictions for Standard Model top quark production and also take into account next-to-leading order predictions for the new physics signal.
We make use of the \CONTUR framework to evaluate the sensitivity of the current measurements, first under the default \CONTUR assumption that the measurement and the SM exactly coincide, and then using the full SM theory calculation for $t\bar{t}$ at next-to-leading and next-to-next-to-leading order as the background model.
We derive exclusion limits, discuss the differences between these approaches, and compare to the limits from resonance searches by ATLAS and CMS.
}
\vspace{10pt}
\noindent\rule{\textwidth}{1pt}
\tableofcontents\thispagestyle{fancy}
\noindent\rule{\textwidth}{1pt}
\vspace{10pt}
\section{Introduction}
\label{sec:intro}
The quest for physics beyond the Standard Model (SM) of particle physics is one of the most important research goals of the Large Hadron Collider (LHC) at CERN, particularly after the great success of the Higgs boson discovery in 2012. Indeed, for at least the next 15 years the LHC will remain our best hope for discovering new physics in a controlled collider environment. During Run 2, the LHC collected data corresponding to an integrated luminosity of about 140 fb$^{-1}$ per experiment. During Run 3 (2021--2023) the statistics will roughly double to 300 fb$^{-1}$, and during the High Luminosity LHC phase (HL-LHC), starting in 2026, an integrated luminosity of up to 3000 fb$^{-1}$ is expected. This will give access to smaller cross sections, in particular in high-energy regions where differential cross sections fall rapidly. The full exploitation of future LHC data therefore remains one of the most important tasks in particle physics in the coming years.
Apart from a few promising hints, e.g.\ in rare decays of heavy $B$-mesons~\cite{PhysRevLett.115.111803,PhysRevLett.120.171802,HFLAV:2019otj,Belle:2019gij}, no clear signs of new physics have so far appeared in any of the experimental analyses. It therefore becomes increasingly probable that any potential new-physics effect at the LHC will be subtle, e.g.\ a small deviation in kinematic distributions due to the influence of loop effects. As a consequence, precise theoretical predictions for observables in the SM and in theories Beyond the SM (BSM) are very important. In view of the many null results in channel-by-channel searches, it also becomes necessary to change perspective. Firstly, a more global approach is required, as opposed to benchmark-driven signature-by-signature searches. Secondly, the use of differential cross section measurements allows direct comparison to precision SM predictions. As well as facilitating such a global approach to discovering where BSM physics may hide, this also allows the level of precision at which the SM describes those measurements to be quantified.
It is natural to perform global analyses in the context of an effective field theory (EFT) such as the SM EFT \cite{Brivio:2017vri}. The advantage of this approach is that it is rather model-independent, so that a large variety of postulated BSM theories and scenarios can be efficiently constrained. On the other hand, in order for the EFT to be valid at LHC energies, the scale of new physics $\Lambda$ has to lie above the LHC energy scale, i.e.\ beyond the direct reach of the LHC. For this reason, a complementary direct approach remains relevant. Here, specific models are probed in the context of a global analysis of a variety of LHC data. One may then constrain the allowed parameter space of the model, or, in the case of clear deviations from the SM, analyse the likelihood of this specific BSM theory, without the restrictions on the applicability and the ambiguity of an EFT. Obviously the constraints themselves are model dependent; however, by making use of particle-level cross sections, the model-independence of the data is retained and so many models may be rapidly investigated with the same measurements.
In this study we follow the latter approach, using \CONTUR~\cite{Butterworth:2016sqg,Buckley:2021neu} to examine the sensitivity of ATLAS, CMS and LHCb particle-level fiducial cross section measurements, available in \rivet~3.1.4~\cite{Bierlich:2019rhm}, to a leptophobic top-colour \cite{HILL1991419, Hill1994hp} scenario. There are a number of improvements with respect to previous analyses with \CONTUR:
\begin{itemize}
\item This is the first \CONTUR~analysis using higher-order theory predictions for the SM background. Previous studies have used data as the
background expectation. Since the measurements concerned have all been shown to agree with SM expectations, this is equivalent to assuming the SM uncertainties are negligible compared to the measurement uncertainties. The inclusion of the SM theory predictions for the relevant fiducial cross sections in the \CONTUR~framework, carried out as part of this work, allows us to examine the validity of this assumption.
\item We also obtain next-to-leading order (NLO) predictions for the new physics signals. The relevant NLO calculations are consistently matched to parton shower Monte Carlo generators in the \POWHEG~BOX framework and also include electroweak contributions~\cite{Alioli:2010xd,Altakach:2020ugg}. Most \CONTUR results to date have used the inclusive LO calculations of \HERWIG \cite{Bellm:2019zci} for their signal predictions.
\end{itemize}
The top-colour model considered here (see Sec.\ \ref{sec:2}) has previously been analysed in several experimental searches for new heavy spin-one resonances \cite{ATLAS:2015hef,ATLAS:2018rvc,ATLAS:2019npw,ATLAS:2020lks,CMS:2012zja,CMS:2012jea,CMS:2018rkg}. The fact that the signature is simply a resonance in the $t\bar{t}$ channel implies that the benefits of a global analysis are less clear than they might be for models with a more complex phenomenology, or models which are less well studied. However, our purpose is to examine the direct use of precision SM calculations in probing BSM physics, and in this sense the model is a good test case, since higher-order predictions for both signal and background are available.
As such, this paper is a proof of concept exploring the possibility to extend the \CONTUR~idea to higher perturbative orders. For example, the calculation in Ref.\ \cite{Altakach:2020ugg} covers a wider class of models with $Z'$ and $W'$ resonances, which can be scanned in the future. Furthermore, the theory predictions for the SM background remain relevant also for other classes of models.
The paper is structured as follows: first, we discuss the calculations used, comparing the full NLO \POWHEG calculation of $t\bar{t}$ production~\cite{Bonciani:2015hgv, Altakach:2020ugg, Altakach:2020azd,Frixione:2007nw} -- the main process of interest -- with the more inclusive, but LO, \HERWIG calculations based on the same model. We then evaluate the sensitivity of the current measurements, both under the default \CONTUR assumptions that
the measurement and the SM exactly coincide, and using the full SM theory calculation for $t\bar{t}$ as the background model and discuss the differences. We conclude with an estimate of the current exclusion limits and the potential future reach of LHC data.
\section{Calculations of signal and background}
\label{sec:2}
In this section, we describe the theoretical framework of our \POWHEG calculations for both the signal and background processes.
First, we employ the NLO LUXqed parton distribution functions (PDFs) obtained within the NNPDF3.1 global fit \cite{Bertone:2017bme,Manohar:2016nzj,Manohar:2017eqh} as implemented in the LHAPDF library (ID $=$ 324900) \cite{Buckley:2014ana,Andersen:2014efa}. This set provides, in addition to the quark and gluon PDFs, a precise determination of the photon PDF inside the proton, which we need for our predictions of electroweak cross section contributions. The PDF uncertainties are calculated using Eqs.~(21) and (22) of Ref.~\cite{Butterworth:2015oua}.
Second, the strong coupling constant $\alpha_s(\mu_R)$ is evaluated at NLO in the $\overline{\text{MS}}$ scheme. It is provided together with the PDF set and satisfies $\alpha_s(M_Z) = 0.118$. While our choices of renormalisation and factorisation scales depend on the considered subprocess, we always identify the two scales for our central predictions and evaluate the scale uncertainties with the usual seven-point method, i.e.\ by independently multiplying the scales by factors $\xi_R, \xi_F \in \{0.5, 1, 2\}$ and discarding the combinations with $\xi_F/\xi_R = 4$ or $1/4$.
For the total theoretical uncertainty on the SM cross section, we take the envelope of all predictions resulting from the scale and PDF variations. This uncertainty is applied to the SM background calculations when evaluating the sensitivities with \CONTUR, which treats the PDF and scale variations as correlated uncertainty sources within a given measurement and sums them in quadrature with the statistical uncertainty. These uncertainties are not applied to the signal calculations, where statistical uncertainties dominate.
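The seven-point scale variation and the envelope-plus-quadrature combination described above can be sketched as follows. The function names and the symmetrised envelope treatment are our own illustrative choices, not \CONTUR internals:

```python
from itertools import product
import math

def seven_point_scales():
    """All (xi_R, xi_F) pairs from {0.5, 1, 2}, discarding the two
    antipodal combinations with xi_F / xi_R = 4 or 1/4."""
    return [(xr, xf) for xr, xf in product([0.5, 1.0, 2.0], repeat=2)
            if xf / xr not in (4.0, 0.25)]

def total_uncertainty(central, variations, stat):
    """Envelope of the scale/PDF-varied predictions around the central
    value, summed in quadrature with the statistical uncertainty
    (symmetrised, for illustration only)."""
    envelope = max(abs(v - central) for v in variations)
    return math.sqrt(envelope**2 + stat**2)
```

The scan yields seven scale choices; for example, `total_uncertainty(10.0, [9.0, 11.5], 0.5)` combines an envelope of 1.5 with a statistical uncertainty of 0.5 in quadrature.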
The setup described above is used throughout the rest of the publication unless specified otherwise.
\subsection{Top-colour model signal}
The large mass of the top quark suggests that it may play a special role in electroweak symmetry breaking. One possibility to generate a large top-quark mass is provided by the so-called Top-Colour (TC) model \cite{HILL1991419, Hill1994hp}, in which a top-quark pair condensate is dynamically generated by an additional strong SU(3) gauge group that couples only to the third generation, while the original SU(3) gauge group couples only to the first and second generations. The two groups can then be broken to the QCD group SU(3)$_C$ in order to restore the strong dynamics of the SM.
To prevent the formation of a bottom-quark condensate, an additional U(1) symmetry and an associated $Z'$-boson must be introduced. In Ref.~\cite{Harris1999ya}, four variants of the TC model are proposed, corresponding to four different choices of the couplings between the additional $Z'$-boson and the three fermion generations. In this article we focus on Model IV of Ref.~\cite{Harris1999ya}, known as the leptophobic TC model \cite{Harris2011ez}. The $Z'$-boson in this model does not couple to the second generation of quarks and, as the model's name indicates, has no significant couplings to leptons.
The Lagrangian of the leptophobic TC model is given in Ref.~\cite{Harris2011ez} and reads
\begin{equation}\begin{aligned}
\mathcal{L} = &(\frac{1}{2}g_1 \cot \theta_H) Z'^\mu (\bar{t}_L \gamma_\mu t_L + \bar{b}_L \gamma_\mu b_L + f_1 \bar{t}_R \gamma_\mu t_R + f_2 \bar{b}_R \gamma_\mu b_R \\
&-\bar{u}_L \gamma_\mu u_L - \bar{d}_L \gamma_\mu d_L - f_1 \bar{u}_R \gamma_\mu u_R - f_2 \bar{d}_R \gamma_\mu d_R).
\end{aligned}\end{equation}
Here, $g_1$ is the U(1)$_Y$ coupling constant of the SM hypercharge, $\cot \theta_H$ is the ratio of the two U(1) coupling constants, and $f_1$ and $f_2$ are the relative strengths of the couplings of right-handed up- and down-type quarks with respect to those of the left-handed quarks. We set $f_1$ and $f_2$ to 1 and 0, respectively. The parameter $\cot \theta_H$ is related to the total decay width of the $Z'$-boson, which is given in Ref.~\cite{Harris2011ez} as
\begin{equation}
\Gamma_{Z'} = \frac{\alpha \cot^2\theta_H M_{Z'}}{8\cos^2\theta_W} \bigg[\sqrt{1 - \frac{4M^2_t}{M^2_{Z'}}} \Big(2+ \frac{4M^2_t}{M^2_{Z'}}\Big) +4 \bigg].
\label{eq:width}
\end{equation}
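For orientation, Eq.~\eqref{eq:width} can be evaluated numerically. The input values below ($\alpha \simeq 1/128$, $\sin^2\theta_W = 0.2315$, $M_t = 172.5$~GeV) are illustrative assumptions rather than the exact parameter card used in our simulations:

```python
import math

# Assumed inputs (illustrative, not the simulation parameter card):
ALPHA = 1.0 / 128.0   # electromagnetic coupling near the Z pole
SIN2W = 0.2315        # sin^2 of the weak mixing angle
M_TOP = 172.5         # top-quark mass in GeV

def gamma_zp(cot_theta_h, m_zp):
    """Total Z' width (GeV) in the leptophobic TC model,
    valid above the ttbar threshold m_zp > 2 * M_TOP."""
    x = 4.0 * M_TOP**2 / m_zp**2
    prefactor = ALPHA * cot_theta_h**2 * m_zp / (8.0 * (1.0 - SIN2W))
    return prefactor * (math.sqrt(1.0 - x) * (2.0 + x) + 4.0)
```

With these inputs, `gamma_zp(0.45, 2000.0) / 2000.0` reproduces $\GZP/\MZP \approx 0.00154$ for $\cotH = 0.45$ and $\MZP = 2$~TeV; the width scales as $\cot^2\theta_H$ at fixed mass.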
The TC signal is then calculated using our {\tt PBZpWp} event generator \cite{Bonciani:2015hgv, Altakach:2020ugg}, where both the BSM production of top-quark pairs and the interference with the electroweak SM processes are implemented. Note that {\tt PBZpWp} also provides predictions for the interference of BSM production with the SM QCD processes, but these contributions vanish in the TC model. The {\tt PBZpWp} generator employs the \POWHEG \cite{Nason:2004rx, Frixione:2007vw} method within the \POWHEG BOX framework \cite{Alioli:2010xd,Jezo:2015aia} and matches NLO calculations with parton showers (PS). For the TC signal, we set the factorisation and renormalisation scales to the partonic centre-of-mass energy, $\mu_F = \mu_R = \sqrt{\hat{s}}$. The top-quark decay, PS and modelling of non-perturbative effects are all performed by {\tt Pythia\,8.2}~\cite{Sjostrand:2014zea}. The mass of the \ZP-boson is treated as a free parameter, as is $\cot\theta_H$, which in turn determines the width (see above).
For comparison with the {\tt PBZpWp} results, we also use the \HERWIG event generator \cite{Bellm:2019zci}. This method is less precise, being based on leading-order (LO) estimates, but it is fast and is the default method for evaluating potential signals in \CONTUR. Using the UFO \cite{Degrande:2011ua} model file for the TC model, we have generated all $2 \rightarrow 2$ diagrams involving a BSM particle either in the $s$-channel propagator or as an outgoing leg. In this case, there is no matching or merging between the PS and higher-order QCD diagrams. Instead, \HERWIG separates $s$-channel diagrams of the type $q\bar{q} \rightarrow \ZP \rightarrow t\bar{t}$ from the QCD radiative diagrams $q\bar{q} \rightarrow \ZP g$ (with subsequent decay $\ZP \rightarrow t\bar{t}$) using a transverse momentum cut, $\ktmin$, on the radiated gluon. This approximate procedure can emulate the most important real-emission part of the higher-order corrections to $s$-channel \ZP exchange, but will create double-counting with the PS, and thus overestimate the cross section, if \ktmin is too low. We therefore varied \ktmin from 10~GeV to 1~TeV (the default value being 20~GeV) and \MZP between 2 and 5~TeV. We find that the cross section for the $q\bar{q} \rightarrow \ZP g$ subprocess drops below that of the $s$-channel process for $\ktmin \approx 100$~GeV. Furthermore, above about 50~GeV the \HERWIG calculation is in good agreement with \POWHEG for the considered subprocess. We therefore use $\ktmin=50$~GeV in our \HERWIG studies, together with the CT14~\cite{Dulat:2015mca} PDF set, which is the default in \HERWIG.
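The \ktmin scan described above amounts to locating the crossover at which the radiative subprocess falls below the $s$-channel rate. A minimal sketch of this criterion, with placeholder cross-section values rather than actual generator output:

```python
def first_safe_ktmin(scan):
    """Given {ktmin_gev: (sigma_s_channel, sigma_zp_g)} pairs from a
    scan, return the smallest ktmin at which q qbar -> Z' g drops
    below the s-channel q qbar -> Z' -> t tbar rate, i.e. where
    double-counting with the parton shower no longer inflates the
    cross section."""
    for ktmin in sorted(scan):
        sigma_s, sigma_rad = scan[ktmin]
        if sigma_rad < sigma_s:
            return ktmin
    return None  # no crossover found in the scanned range
```

For example, `first_safe_ktmin({10: (1.0, 3.0), 50: (1.0, 1.4), 100: (1.0, 0.8)})` returns `100`, mirroring the crossover near 100~GeV observed in the scan.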
\subsection{Standard Model background}
\label{sec:SM}
In the SM, pairs of top quarks can be produced both strongly and electroweakly. The electroweak production modes are often neglected, as they are suppressed by the small value of the corresponding coupling constant. However, in the BSM model considered here the new physics couples via electroweak-like couplings, so we must also consider SM electroweak $t\bar{t}$ production and its QCD corrections. To be precise, we consider QCD top-pair production to $\mathcal{O}(\alpha_S^2)$ and $\mathcal{O}(\alpha_S^3)$, electroweak top-pair production to $\mathcal{O}(\alpha^2)$ and $\mathcal{O}(\alpha^2 \alpha_S)$, and mixed production to $\mathcal{O}(\alpha \alpha_S)$. We consider neither electroweak corrections to the strong $\mathcal{O}(\alpha_S^2)$ processes, nor QCD corrections to the mixed $\mathcal{O}(\alpha \alpha_S)$ processes, which are of the same order, nor non-resonant production modes that can yield the same final state as the resonant ones after both top quarks have decayed.
We simulate the QCD production of top-quark pairs up to NLO QCD using the {\tt hvq}~\cite{Frixione:2007nw} event generator, which again matches NLO corrections to the PS using the \POWHEG method.
For the $s$-channel and $t$-channel electroweak production mediated by the $Z$- and $W$- bosons up to NLO QCD, we use our {\tt PBZpWp} event generator.
It also includes the mixed QCD and electroweak production, i.e.~both the interference between the purely QCD and the purely electroweak production modes and the photon induced channels.
For the SM background processes, the factorisation and renormalisation scales in both {\tt hvq} and {\tt PBZpWp} are identified with the transverse mass of the top quark in the rest frame of the $q\bar{q}$ system: $\mu_F = \mu_R = \sqrt{p_T^2 + M_t^2}$.
Higher order QCD corrections for top-pair production up to next-to-next-to-leading order (NNLO) have now been available for some time \cite{Czakon:2013goa, Czakon:2015owf, Catani:2019iny, Catani:2019hip}. %
Recently, a method for matching such NNLO calculations to PS has been introduced in Ref.~\cite{Mazzitelli:2020jio}.
In addition to the {\tt hvq} event sample, we consider the event sample of Ref.~\cite{Mazzitelli:2020jio}, which was obtained for the LHC operating at 13 TeV with the NNPDF31\_nnlo\_as\_0118 (303600) PDF set.
The renormalisation and factorisation scales in this sample are set to $\mu_F = \mu_R = 0.5 M_{t\bar{t}}$.
The decay of the top quark (if not already included in the event generator), the PS and
the modelling of non-perturbative effects are, as in the signal case, carried out by {\tt Pythia\,8.2}.
Using the event generators described above, we simulate twelve different LHC measurements of top-quark pair production at both 8 TeV (NLO predictions only) and 13 TeV (NNLO and NLO) centre-of-mass energies \cite{Sirunyan:2018wem, Aad:2015hna, Aad:2015mbv, Sirunyan:2017yar, Sirunyan:2019rfa, Aad:2019hzw, Aad:2019ntk, Aaboud:2017fha, Aaboud:2018uzf, Sirunyan:2018ptc, Aaboud:2018eqg, Khachatryan:2016mnb}.
In addition, we simulate the ATLAS inclusive jet and dijet cross section measurement \cite{ATLAS:2017ble} using the {\tt dijet}~\cite{Alioli:2010xa} \POWHEG package, with the default choice for the renormalisation and factorisation scales, i.e.\ the transverse momentum of the two jets in the underlying Born configuration. We set the minimum generation cut and the Born suppression parameter to 50~GeV and 1000~GeV, respectively.
Again, the showering, the hadronisation and the multiparton interactions are performed using {\tt Pythia\,8.2}.
For convenience, all employed data sets are summarised in Table~\ref{tab:measurements}.
\begin{table}
\begin{center}
\resizebox{\columnwidth}{!}{\begin{tabular}{||c| c| c| >{\centering\arraybackslash}p{2cm}| >{\centering\arraybackslash}p{4cm}||}
\hline
Contur Category & $\mathcal{L}$ [fb$^{-1}$] & Rivet/Inspire ID & Highest SM Order & Rivet description \\ [0.5ex]
\hline\hline
ATLAS 8 LMETJET & 20.3 & ATLAS$\_$2015$\_$I1397637 \cite{Aad:2015hna} & NLO & Boosted $t \bar t$ differential cross-section \\
\hline
ATLAS 8 LMETJET & 20.3 & ATLAS$\_$2015$\_$I1404878 \cite{Aad:2015mbv} & NLO & $t\bar t$ (to l+jets) differential cross sections at 8 TeV \\
\hline
CMS 8 LMETJET & 19.7 & CMS$\_$2017$\_$I1518399 \cite{Sirunyan:2017yar} & NLO & Differential $t\bar{t}$ cross-section as a function of the leading jet mass for boosted top quarks at 8 TeV \\
\hline
ATLAS 13 LMETJET & 3.2 & ATLAS$\_$2017$\_$I1614149 \cite{Aaboud:2017fha} & NNLO & Resolved and boosted $t \bar t$ l+jets cross sections at 13 TeV \\
\hline
ATLAS 13 LMETJET & 3.2 & ATLAS$\_$2018$\_$I1656578 \cite{Aaboud:2018uzf} & NNLO & Differential $t\bar{t}$ l+jets cross-sections at 13 TeV \\
\hline
ATLAS 13 LMETJET & 36 & ATLAS$\_$2019$\_$I1750330 \cite{Aad:2019ntk} & NNLO & Semileptonic $t \bar t$ at 13 TeV \\
\hline
CMS 13 LMETJET & 2.3 & CMS$\_$2016$\_$I1491950 \cite{Khachatryan:2016mnb} & NNLO & Differential $t\bar t$ cross sections using the lepton+jets final state in $pp$ collisions at 13 TeV \\
\hline
CMS 13 LMETJET & 35.9 & CMS$\_$2018$\_$I1662081 \cite{Sirunyan:2018ptc} & NNLO & Differential $t \bar t$ cross sections as a function of kinematic event variables in $pp$ collisions at 13 TeV \\
\hline
CMS 13 LMETJET & 35.8 & CMS$\_$2018$\_$I1663958 \cite{Sirunyan:2018wem} & NNLO & $t\bar t$ lepton+jets 13 TeV \\
\hline
ATLAS 13 L1L2METJET & 36.1 & ATLAS$\_$2019$\_$I1759875 \cite{Aad:2019hzw} & NNLO & Dileptonic $t\bar t$ at 13 TeV \\
\hline
ATLAS 13 TTHAD & 36.1 & ATLAS$\_$2018$\_$I1646686 \cite{Aaboud:2018eqg} & NNLO & All-hadronic boosted $t\bar t$ at 13 TeV \\
\hline
CMS 13 TTHAD & 35.9 & CMS$\_$2019$\_$I1764472 \cite{Sirunyan:2019rfa} & NNLO & Differential $t\bar t$ cross section as a function of the jet mass and top quark mass in boosted hadronic top quark decays \\
\hline
ATLAS 13 JETS & 3.2 & ATLAS$\_$2018$\_$I1634970 \cite{ATLAS:2017ble} & NLO & ATLAS inclusive jet and dijet cross section measurement at 13 TeV
\\ [1ex]
\hline
\end{tabular}
}
\end{center}
\caption{Table of the Rivet routines used for the limit-setting scan.}
\label{tab:measurements}
\end{table}
\section{Sensitivity}
\subsection{Default background model}
As discussed previously, the default \CONTUR approach takes the fact that all the measurements considered have been shown, in their original publications, to be consistent with the SM, and makes the additional assumption that they are identical to it; the sensitivity is then derived from how much room the experimental uncertainties leave for a BSM contribution, using a $\chi^2$ test to evaluate the relative likelihood, as discussed in Ref.~\cite{Buckley:2021neu}. The results of this approach, employing either \POWHEG or \HERWIG for the signal, are shown in Fig.~\ref{fig:phhw_data}.
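Schematically, the test asks how large a signal the measurement uncertainties can absorb. A simplified per-bin version (\CONTUR itself uses a profiled likelihood and the CLs prescription, so the functions below are only a sketch with illustrative names):

```python
def chi2_data_as_background(signal, uncert):
    """Default assumption: the data equal the SM exactly, so the chi^2
    between the measurement and (measurement + BSM signal) reduces to
    sum_i (s_i / sigma_i)^2."""
    return sum((s / u)**2 for s, u in zip(signal, uncert))

def chi2_sm_as_background(data, sm, signal, uncert):
    """With an explicit SM prediction as background, any existing
    data-SM offset adds to (or cancels against) the injected BSM
    contribution."""
    return sum((d - b - s)**2 / u**2
               for d, b, s, u in zip(data, sm, signal, uncert))
```

For a single bin, a signal of twice the uncertainty gives $\chi^2 = 4 > 3.84$, i.e.\ exclusion beyond 95\% CL for one degree of freedom; if the SM prediction already overshoots the data, the same injected signal is excluded more strongly.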
\begin{figure}
\begin{center}
\subfloat[]{\includegraphics[width=0.45\textwidth]{figures/ph-data}\label{fig:ph_data}}
\subfloat[]{\includegraphics[width=0.45\textwidth]{figures/hw-data}\label{fig:hw_data}}
\caption{Sensitivity to the leptophobic TC model, in the \ZP mass (GeV) versus the $\cot \theta_H$ plane.
The coloured blocks indicate the most sensitive
final state (see legend below). The 95\% CL (solid red) and 68\% CL exclusion (dashed red) contours are superimposed,
considering the data as background. (a) NLO $t\bar{t}$ signal calculated using \POWHEG, (b) signal calculated using \HERWIG (inclusive LO).}
\label{fig:phhw_data}
\begin{tabular}{llll}
\swatch{powderblue}~CMS $\ell$+\ensuremath{E_T^{\rm miss}}{}+jet &
\swatch{blue}~ATLAS $\ell$+\ensuremath{E_T^{\rm miss}}{}+jet &
\swatch{cadetblue}~ATLAS $e$+\ensuremath{E_T^{\rm miss}}{}+jet \\
\swatch{navy}~ATLAS $\mu$+\ensuremath{E_T^{\rm miss}}{}+jet &
\swatch{silver}~ATLAS jets &
\swatch{wheat}~CMS Hadronic $t\bar{t}$ \\
\swatch{snow}~ATLAS Hadronic $t\bar{t}$ &
\swatch{cornflowerblue}~ATLAS $\ell_1\ell_2$+\ensuremath{E_T^{\rm miss}}{} &
\swatch{turquoise}~ATLAS $\ell_1\ell_2$+\ensuremath{E_T^{\rm miss}}{}+jet \\
\end{tabular}
\end{center}
\end{figure}
The general features are similar with both \POWHEG and \HERWIG, with measurements involving tops giving the greatest sensitivity.
At lower masses, several measurements have similar sensitivity, with the most sensitive at each point subject to statistical fluctuations, leading to a patterning in those regions of the figures. At high \MZP, the boosted, fully hadronic top cross section gives the greatest sensitivity of the top measurements.
However, especially in the \HERWIG case, the sensitivity extends to higher masses than for \POWHEG, and this is driven by the ATLAS jet measurements at 13~TeV~\cite{Aad:2020fch,Aaboud:2017wsi}. This final state receives
contributions not only from $q\bar{q} \rightarrow \ZP \rightarrow t\bar{t}$,
but also from $q\bar{q} \rightarrow \ZP \rightarrow q\bar{q}$,
where $q = u, d$. In the inclusive, but LO, generation of \HERWIG these are included, whereas in the NLO calculation of \POWHEG only
the \ZP decay to tops is implemented.
We will return to this aspect in Section~\ref{sec:jets}, after first discussing the higher order
top calculations in more detail.
\subsection{SM calculation as background}
The higher order SM predictions for top final states, discussed in Section~\ref{sec:SM}, can also be used directly as the
background expectation by \CONTUR when calculating the sensitivity.
\begin{figure}
\begin{center}
\subfloat[]{\includegraphics[width=0.45\textwidth]{figures/ph-to-data.pdf}\label{fig:ph_data_to}}
\subfloat[]{\includegraphics[width=0.45\textwidth]{figures/ph-nlo.pdf}\label{fig:ph_nlo}}\\
\subfloat[]{\includegraphics[width=0.45\textwidth]{figures/ph-nnlo.pdf}\label{fig:ph_nnlo}}
\subfloat[]{\includegraphics[width=0.45\textwidth]{figures/ph-nnlo-exp.pdf}\label{fig:ph_nnlo_exp}}
\caption{Sensitivity to the leptophobic TC model, in the \ZP mass (GeV) versus $\GZP/\MZP$ plane, for $\ZP \rightarrow t\bar{t}$.
The coloured blocks indicate the most sensitive
final state (see legend below). The 95\% CL (solid red) and 68\% CL exclusion (dashed red) contours are superimposed.
(a) Data used as background, but only those measurements with available SM predictions are used.
(b) Using NLO SM prediction for background.
(c) Using NNLO SM prediction for background.
(d) Expected limit using NNLO SM prediction for background.
}
\label{fig:ph-smth}
\end{center}
\begin{tabular}{llll}
\swatch{snow}~ATLAS Hadronic $t\bar{t}$ &
\swatch{wheat}~CMS Hadronic $t\bar{t}$ &
\swatch{green}~ATLAS \ensuremath{E_T^{\rm miss}}{}+jet \\
\swatch{blue}~ATLAS $\ell$+\ensuremath{E_T^{\rm miss}}{}+jet &
\swatch{silver}~ATLAS jets &
\swatch{powderblue}~CMS $\ell$+\ensuremath{E_T^{\rm miss}}{}+jet \\
\swatch{darkorange}~ATLAS $\mu\mu$+jet &
\swatch{orangered}~ATLAS $ee$+jet &
\swatch{turquoise}~ATLAS $\ell_1\ell_2$+\ensuremath{E_T^{\rm miss}}{}+jet
\end{tabular}
\end{figure}
Fig.~\ref{fig:ph-smth} shows the sensitivity again, now in the plane of the ratio of the width of the \ZP to its mass \MZP,
versus \MZP. For a given \MZP, there is a one-to-one correspondence between \cotH and \GZP, given by Eq.~\eqref{eq:width}; for example, $\cotH=0.45$ corresponds to $\GZP/\MZP = 0.00154$ for $\MZP = 2$~TeV. In Fig.~\ref{fig:ph-smth}a we again use the data as the SM background, but
only use the subset of measurements for which \CONTUR has access to the NLO SM predictions (see Table~\ref{tab:measurements}).
This then allows a fair comparison
with Fig.~\ref{fig:ph-smth}b, in which the NLO SM calculations are used as the background. It can be seen that the limits
are similar, which is expected since the SM theory agrees reasonably well with the measurement, and the measurement uncertainties
dominate given the precision of the SM calculation. The limits in Fig.~\ref{fig:ph-smth}b are somewhat stronger than the default case because
in some regions the SM prediction already overshoots the data slightly, so
this existing minor discrepancy adds to that caused by injecting an additional BSM contribution, as seen in Fig.~\ref{fig:ph-1pt_smth}a and Fig.~\ref{fig:ph-1pt_smth}b.
In Fig.~\ref{fig:ph-smth}c, NNLO SM $t\bar{t}$ predictions are used for the (13~TeV) SM backgrounds; again the limits are stronger, for example increasing from 4.6~TeV at NLO to 5.2~TeV at NNLO, at $\GZP/\MZP = 50\%$. This is due to a reduction in scale uncertainties,
as seen in Fig.~\ref{fig:ph-1pt_smth}c, and highlights the importance
of increased SM precision in extending the reach of the LHC for BSM
physics. Finally, in Fig.~\ref{fig:ph-smth}d
we show this ``expected'' limit, evaluated by moving the central value of the measurement to lie exactly on the SM theory
prediction, but retaining the measurement uncertainties. We see that the actual limits are slightly stronger than the
expectation, again due to the fact that the SM theory lies slightly above the data.
\begin{figure}
\begin{center}
\subfloat[]{\includegraphics[width=0.45\textwidth]{figures/data.pdf}\label{fig:ph_1pt_data}}
\subfloat[]{\includegraphics[width=0.45\textwidth]{figures/NLO-theory.pdf}\label{fig:ph_1pt_nlo}}\\
\subfloat[]{\includegraphics[width=0.45\textwidth]{figures/NNLO-theory.pdf}\label{fig:ph_1pt_nnlo}}
\subfloat[]{\includegraphics[width=0.45\textwidth]{figures/NNLO--el.pdf}\label{fig:ph_1pt_nnlo_exp}}
\caption{ATLAS all-hadronic boosted $t\bar{t}$ measurement, and {\tt PBZpWp} signal for $\MZP=4.56$~TeV, $\GZP/\MZP=0.5$. Transverse momentum distribution for $t\bar{t}$,
(a) using data as background,
(b) using NLO SM as background,
(c) using NNLO SM as background,
(d) expected exclusion using the NNLO SM prediction as background.
In each case the black points are the measurement, the red histogram is the SM background + BSM signal, and the green is the SM
prediction. The lower insets show the ratio of the signal plus background to the measurement, with the yellow band
indicating the combined $1~\sigma$ uncertainty on the ratio, and the green band indicating the uncertainty on the SM prediction.}
\label{fig:ph-1pt_smth}
\end{center}
\end{figure}
\subsection{Dijet signature}
\label{sec:jets}
As discussed above in the \HERWIG comparison, the LO \HERWIG calculation is inclusive, so all decays of the \ZP are generated,
including those to first-generation quarks. This, coupled with the fact that hadronic top decays also lead to jets, means that the
sensitivity at the highest masses is dominated by the ATLAS 13~TeV jet measurements~\cite{ATLAS:2017ble}, with an improved sensitivity compared to \POWHEG at low \cotH; see Fig.~\ref{fig:phhw_data}.
This comes principally from contributions to the
central dijet invariant mass measurement~\cite{Aaboud:2017wsi}, with the high mass multijet final states~\cite{Aad:2020fch} playing a minor role.
SM predictions for these final states are
less precise than for top production, and uncertainties can be at least comparable to those in the data, so the assumption
that the SM is identical to the data becomes difficult to justify.
For the multijet final states, the state-of-the art predictions
are high-multiplicity tree-level calculations matched to parton
showers\footnote{Although NNLO calculations for three-jet final states have recently been presented~\cite{Czakon:2021mjy},
comparisons to these measurements are not yet available.}.
The spread of such predictions (as shown in \cite{Aad:2020fch}) is indeed comparable to the data uncertainties.
If the multijet
measurements are removed, and only measurements for which more precise predictions are available are used, the sensitivity is
slightly reduced, to that shown in Fig.~\ref{fig:hw_data_to}, with the 95\% exclusion now stopping just below 4~TeV, rather than
just above it as seen in Fig.~\ref{fig:hw_data}.
The dijet measurement, for which an NLO QCD calculation is available~\cite{Alioli:2010xa}, is still used in this case.
The exclusion due to this measurement, using
the data as the background, is illustrated in Fig.~\ref{fig:dijet}a for $\cotH=4.5$ and $\MZP=3.6$~TeV. However, also shown in that
figure is the NLO QCD SM prediction. Not only are the uncertainties comparable to those of the measurement, but the prediction falls
below the data at high dijet mass.
The expected exclusion (Fig.~\ref{fig:dijet}b) would still be above 95\%, but the actual
exclusion using the SM prediction as background is zero. The impact of this is that at high
\cotH the expected limit, shown in Fig.~\ref{fig:hw_exp}, is higher than the actual limit shown in Fig.~\ref{fig:hw_sm}, and the jet cross section
measurements are in fact never the most sensitive.
\begin{figure}
\begin{center}
\subfloat[]{\includegraphics[width=0.33\textwidth]{figures/hw-data-to.pdf}\label{fig:hw_data_to}}
\subfloat[]{\includegraphics[width=0.33\textwidth]{figures/hw-exp.pdf}\label{fig:hw_exp}}
\subfloat[]{\includegraphics[width=0.33\textwidth]{figures/hw-sm.pdf}\label{fig:hw_sm}}
\caption{Exclusions derived using \HERWIG. (a) As Fig.~\ref{fig:hw_data} but only using those measurements for which
SM predictions are available. (b) Expected limit. (c) Measured limits using the SM predictions as background.
}
\label{fig:dijet_limits}
\begin{tabular}{llll}
\swatch{blue}~ATLAS $\ell$+\ensuremath{E_T^{\rm miss}}{}+jet &
\swatch{snow}~ATLAS Hadronic $t\bar{t}$ &
\swatch{powderblue}~CMS $\ell$+\ensuremath{E_T^{\rm miss}}{}+jet \\
\swatch{silver}~ATLAS jets &
\swatch{darkorange}~ATLAS $\mu\mu$+jet \\
\end{tabular}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=1.0\textwidth]{figures/dijet-comb.pdf}
\caption{ATLAS jets measurements, and \HERWIG signal for $\MZP=3.6$~TeV, $\cotH=4.5$.
(a) Dijets using data as background.
(b) Dijets expected exclusion.
(c) Dijets using NLO SM as background.
In each case the black points are the measurement, the red histogram is the SM background + BSM signal, and the green is the SM
prediction. The lower insets show the ratio of the signal plus background to the measurement, with the yellow band
indicating the combined $1~\sigma$ uncertainty on the ratio, and the green band indicating the uncertainty on the SM prediction.
}
\label{fig:dijet}
\end{center}
\end{figure}
\section{Discussion and conclusions}
\begin{table}
\begin{center}
\begin{tabular}{||c| c| c| c||}
\hline
\multicolumn{4}{||c||}{Excluded $M_{Z'}$ [TeV]}\\
\hline
$\Gamma_{Z'}/\MZP$ [\%] & Data as bgd. & NLO as bgd. & NNLO as bgd. \\ [0.5ex]
\hline\hline
1 & 2.29 & 2.35 & 2.50 \\
\hline
10 & 3.17 & 3.22 & 3.55 \\
\hline
30 & 4.01 & 4.04 & 4.53 \\
\hline
50 & 4.54 & 4.61 & 5.19 \\
\hline
\end{tabular}
\end{center}
\caption{Exclusion limits on \MZP obtained in this analysis.}
\label{tab:results}
\end{table}
The exclusion limits obtained in this analysis are summarised
in Tab.\ \ref{tab:results} where we have used
the available measurements in both leptonic and hadronic decay modes, with the maximum integrated luminosity of any measurement being 36.1/fb.
As can be seen, we exclude \MZP below 2.29, 3.17 and 4.01 TeV when data are used as background, for widths of 1, 10, and 30\% of the mass respectively. For the same width fractions, these numbers become 2.35, 3.22 and 4.04 TeV when the NLO
prediction is used for background, and 2.50, 3.55 and 4.53 TeV when the NNLO predictions are used.
Moreover, our scans, which reach up to $\Gamma_{Z'}/M_{Z'} = 50\%$, exclude \MZP below 4.54, 4.61 and 5.19 TeV, again for data, NLO and NNLO predictions used as background, respectively.
The fact that the limits using SM calculations as background are somewhat stronger than those obtained in the default \CONTUR mode when data are used may seem surprising, since the default mode effectively assumes that the SM uncertainties are negligible, whereas the SM uncertainties are correctly accounted for when the calculations are used. It arises, as already mentioned, because the SM prediction lies slightly above the data, so any signal on top of it takes the prediction still further away from the data. The impact of more precise SM predictions is seen in the increased limits when NNLO predictions are used compared to NLO.
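The mechanism just described can be made concrete with a small toy computation. This is purely illustrative: all numbers below are invented, and a simple Gaussian $\chi^2$ stands in for the actual \CONTUR statistical treatment.

```python
# Toy illustration only: invented bin contents, uncertainties and signal;
# a Gaussian chi-square stands in for the actual CONTUR statistics.
data = [100.0, 80.0, 60.0]     # hypothetical measured bin contents
sigma = [10.0, 9.0, 8.0]       # hypothetical total uncertainties
signal = [5.0, 5.0, 5.0]       # hypothetical BSM signal in each bin

def chi2(background):
    """chi2 of (signal + background) against the data."""
    return sum((s + b - d) ** 2 / e ** 2
               for s, b, d, e in zip(signal, background, data, sigma))

chi2_data_bkg = chi2(data)                    # default mode: background = data
chi2_sm_bkg = chi2([1.05 * d for d in data])  # SM prediction 5% above the data

# A positive signal on top of a prediction that already overshoots the data
# moves it further away still, so the exclusion is stronger:
assert chi2_sm_bkg > chi2_data_bkg
```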
Our exclusions can be compared to the strongest limits to date on this model, coming from resonance
searches by ATLAS and CMS. CMS~\cite{CMS:2018rkg} excludes the TC $Z'$ boson below 3.80, 5.25, and 6.65 TeV for 1, 10, and 30\% widths respectively, using leptonic and hadronic decays of the top in 35.9/fb of data. ATLAS \cite{ATLAS:2020lks} excludes it below 3.9 and 4.7 TeV for decay widths of 1 and 3\% respectively using the fully hadronic decay channel only in 139/fb of integrated luminosity.
An earlier ATLAS search~\cite{ATLAS:2018rvc}, using the semileptonic decay mode in 36.1/fb of integrated luminosity excludes the $Z'$ bosons with \MZP below 3 (3.8) TeV for 1\% (3\%) decay width. In \cite{ATLAS:2019npw}, using the fully hadronic decay mode in 36.1/fb of integrated luminosity, ATLAS excludes $Z'$ bosons with mass below 3.1 (3.6) TeV for 1\% (3\%) decay width.
The limits in our analysis are significantly weaker than the direct searches. Some of this difference comes from the fact that no measurements using the full Run 2 luminosity of the LHC are yet available in Rivet. However, a more significant factor is the binning of the measurements. In a measurement unfolded to particle level, the binning is generally chosen to ensure that there are several events (typically at least of order ten) in each bin. The searches use a binned maximum likelihood fit with no such constraint, and of course the sensitivity at high mass comes from the tails of the distribution, where there are many empty bins.
With \CONTUR we are also able to derive new exclusion limits in a previously unexplored region of the parameter space where $\Gamma_{Z'}/M_{Z'} > 30\%$, a region where direct searches based on bump hunting, without precise SM background calculations, become more challenging.
This analysis therefore illustrates both the strengths and the weaknesses of a \CONTUR-like approach, using differential cross section measurements to constrain BSM physics.
On the one hand, in the regions where the SM cross section is significant, we validate the \CONTUR approach, using either data or SM predictions as background. The advantage of this is that a very wide range of BSM models can be rapidly studied. This advantage becomes very apparent in models with a greater number of free parameters and more complex phenomenology \cite{Buckley:2020wzk,Butterworth:2020vnb,Butterworth:2021jto}.
In this sense our results support the assumptions made in such studies.
On the other hand, in this study we have addressed a model with a single, clear signature for which several dedicated searches already exist. In this case, the benefits of a more global analysis are minimal, and the \CONTUR exclusions are not found to be competitive. The greater reach of the searches comes from their use of the low-statistics tails of distributions, where particle-level cross section measurements have not yet been made, or have been made with very coarse binning. It is not clear that this is a fundamental limitation; upper limits on model-independent cross sections could be used by \CONTUR when provided, and discussions about the best way to publish statistical information from experiments~\cite{LHCReinterpretationForum:2020xtr,Cranmer:2021urp} should also consider these observables.
Looking to the future, the precision of the measurements, and probably the SM predictions, will increase throughout the high-luminosity LHC period, while no large leaps in energy are anticipated for the foreseeable future. This implies that the relative reach of measurement-based approaches compared to searches seems likely to increase. Meanwhile, the theoretical landscape of BSM ideas continues to grow, increasing the value of making model-independent measurements which can be reinterpreted in multiple scenarios.
\section*{Acknowledgements}
We are grateful to L.\ Corpe and S.\ Kraml for carefully reading the draft and for useful discussions. We also thank J.~Mazzitelli for providing us with the NNLO+PS event samples.
\paragraph{Funding information}
JMB has received funding from the European Union's Horizon 2020 research and innovation
program as part of the Marie Skłodowska-Curie Innovative Training
Network MCnetITN3 (grant agreement no. 722104) and from a UKRI Science and Technology Facilities Council
(STFC) consolidated grant for experimental particle physics.
Work at WWU M\"unster was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) through Project-Id 273811115 - SFB 1225 and the Research Training Network 2149 “Strong and weak interactions - from hadrons to dark matter”.
The work of TJ was also supported by the DFG under grant 396021762 - TRR 257.
The work of MMA and IS was supported in part by the IN2P3 master project Th\'eorie–BSMGA. The
work of MMA is also supported by the National Science Center, Poland, under the research grant 2017/26/E/ST2/00135.
\section{Introduction: extending the domain of the Fourier transform}
The Fourier transform (FT) and generalized functions (GF) are naturally
interwoven, since the former leads naturally to suitable spaces of
the latter. This occurs even in trivial cases, such as transforming
a simple sound wave $f(t)=A\sin(2\pi\omega_{0}t)$, whose spectrum
must be, in some way, concentrated at the frequencies $\pm\omega_{0}$.
Even the link between constants and delta-like functions was already
conceived by Fourier (see e.g.~\cite{Lau92}). Although different
theories of generalized functions arise for different motivations,
from distribution theory of Sobolev, Schwartz \cite{Sch45,Sob50}
up to Hairer's regularity structures \cite{Hai}, almost all these
theories are usually augmented with a corresponding calculus of FT,
which can be applied to an appropriate subspace of generalized functions.
Since the beginning of distribution theory, it has hence been natural
to try to extend the domain of the FT with fewer, or even no, growth
restrictions imposed. Indeed, as a consequence of these restrictions,
the only solution of the trivial ODE $y'=y$ that we can obtain using
tempered distributions is the trivial one. Among such extensions, we can cite \cite{GeSh1,GeSh2}
for the definition of the FT as the limit of a sequence of functions integrated
on a finite domain, \cite{Zem65} for a two-sided Laplace transform
defined on a space larger than that of tempered distributions, and
similarly \cite{AtPiSa} for the directional short-time Fourier
transform of exponential-type distributions. In the same direction
we can inscribe the works \cite{AtMaPi,CaKaPi,DiPr,KaPePi,PiRaTeVi,Teo,Smi,EsFu,EsViYa}
on ultradistributions, hyperfunctions and thick distributions.
On the other hand, problems originating from physics, such as singularities
and point-source fields, also suggest considering alternative modeling,
ranging from non-smooth functions as test functions in the theory
of distributions (see e.g.~\cite{Yan} and references therein) to
non-Archimedean analysis (i.e.~mathematical analysis over a ring
extending the real field and containing infinitesimal and/or infinite
numbers, see \cite{GKOS,FGBL}). In the interplay between mathematics
and physics, it is well-known that heuristically manipulating non-linear
pointwise equalities such as $H^{2}=H$ ($H$ being the Heaviside
function) can easily lead to contradictions (see e.g.~\cite{Bow,GKOS}).
This can make it particularly difficult to realize the strategy of \cite{LiWe},
where the authors search for a metaplectic representation from symplectic
maps to symplectic relations. According to A.~Weinstein (personal
communication, May 2019), this would require an algebra of generalized
functions extending the usual algebra of smooth functions and a FT
acting on them with the usual inversion formula and transforming the
Dirac delta into $1$. As we will see in more detail in the following
sections, this is not possible in the classical approach to Colombeau's
algebra, see \cite{Col85,Das91,NedPil92,Hor99}.
To overcome this type of problems, we are going to use the category
of \emph{generalized smooth functions} (GSF), see \cite{GiKu15,GiKu16,LeLuGi17,GiKu18,GIO1}.
This theory seems to be a good candidate, since it is an extension
of classical distribution theory which allows one to model nonlinear singular
problems, while at the same time sharing many nonlinear properties
with ordinary smooth functions, like the closure with respect to composition
(thereby, they form an algebra extending the algebra of \emph{smooth}
functions with pointwise product) and several nontrivial classical
theorems of the calculus. One could describe GSF as a methodological
restoration of Cauchy-Dirac's original conception of generalized function,
see \cite{Dirac,Lau89,KaTa99}. In essence, the idea of Cauchy and
Dirac (but also of Poisson, Kirchhoff, Helmholtz, Kelvin and Heaviside)
was to view generalized functions as suitable types of smooth set-theoretical
maps obtained from ordinary smooth maps depending on suitable infinitesimal
or infinite parameters. For example, the density of a Cauchy-Lorentz
distribution with an infinitesimal scale parameter was used by Cauchy
to obtain classical properties which nowadays are attributed to the
Dirac delta, cf.~\cite{KaTa99}.
The basic idea to define a very general FT in this setting is the
following: Since GSF form a non-Archimedean framework, we can consider
a positive infinite generalized number $k$ (i.e.~$k>r$ for all
$r\in\R_{>0}$) and define the FT with the usual formula, but integrating
over the $n$-dimensional interval $[-k,k]^{n}$. Although $k$ is
an infinite number (hence, $[-k,k]^{n}\supseteq\R^{n}$), this interval
behaves like a compact set for GSF, so that, e.g., on these domains
we always have an extreme value theorem and integrals always exist.
Clearly, this leads to a FT, called \emph{hyperfinite} FT, that depends
on the parameter $k$, but, on the other hand, where we can transform
\emph{all} the GSF defined on this interval and these, for a suitable
$k$ depending on the open set $\Omega\subseteq\R^{n}$, include \emph{all}
the Colombeau generalized functions $\gs(\Omega)$\emph{ }and hence
\emph{all} the Sobolev-Schwartz distributions $\mathcal{D}'(\Omega)$.
Not all the properties of the classical FT remain unchanged for this
more general transform, but the final formalism still retains the
useful properties of the FT in dealing with differential equations.
Even more, the new formula for the transform of derivatives leads
to discover also exponential solutions of the aforementioned ODE $y'=y$.
Since \cite{DeHaPiVa} proves that ultradistributions and periodic
hyperfunctions can be embedded in a Colombeau-type algebra, this gives
strong hints for the conjecture that the hyperfinite FT is very general,
and it justifies the title of this article.
The structure of the paper is as follows. We start with an introduction
to the setting of GSF and give the basic notions concerning GSF and
their calculus that are needed for a first study of the hyperfinite
FT (Sec.~\ref{sec:Basic-notions}). We then define the hyperfinite
FT in Sec.~\ref{sec:Hyperfinite-Fourier-transform} and the convolution
of compactly supported GSF in Sec.~\ref{sec:Convolution}. In Sec.~\ref{sec:Elementary-properties},
we show how the elementary properties of FT change for the hyperfinite
FT. In Sec.~\ref{sec:The-inverse-hyperfinite} and Sec.~\ref{sec:preservation},
we respectively prove the inversion theorem and that the embedding
of Sobolev-Schwartz tempered distributions preserves their FT, i.e.~that
the hyperfinite FT commutes with the embedding of Schwartz functions
and tempered distributions. In this section, we also recall the problems
of the FT in the Colombeau setting and how we overcome them. Finally,
in Sec.~\ref{sec:Examples-and-applications} we give several examples
which underscore the new possibility of transforming any generalized
function. Thanks to the developed formalism, which stresses the similarities
with ordinary smooth functions, frequently the proofs we are going
to present are very simple and similar to those for smooth functions,
but replacing the real field $\R$ with the non-Archimedean ring of
Robinson-Colombeau $\RC{\rho}$.
The paper is self-contained, in the sense that it contains all the
statements required for the proofs we are going to present. If proofs
of preliminaries are omitted, we clearly give references to where
they can be found. Therefore, to understand this paper, only a basic
knowledge of distribution theory is needed.
\section{Basic notions\label{sec:Basic-notions}}
\subsection{The new ring of scalars}
In this work, $I$ denotes the interval $(0,1]\subseteq\R$ and we
will always use the variable $\eps$ for elements of $I$; we also
denote $\eps$-dependent nets $x\in\R^{I}$ simply by $(x_{\eps})$.
By $\N$ we denote the set of natural numbers, including zero.
We start by defining a new simple non-Archimedean ring of scalars
that extends the real field $\R$. The entire theory is constructive
to a high degree, e.g.~neither ultrafilters nor nonstandard methods
are used. For all the proofs of results in this section, see \cite{GiKu18,GiKu15,GIO1,GiKu16}.
\begin{defn}
\label{def:RCGN}Let $\rho=(\rho_{\eps})\in(0,1]^{I}$ be a net such
that $(\rho_{\eps})\to0$ as $\eps\to0^{+}$ (in the following, such
a net will be called a \emph{gauge}), then
\begin{enumerate}
\item $\mathcal{I}(\rho):=\left\{ (\rho_{\eps}^{-a})\mid a\in\R_{>0}\right\} $
is called the \emph{asymptotic gauge} generated by $\rho$.
\item If $\mathcal{P}(\eps)$ is a property of $\eps\in I$, we use the
notation $\forall^{0}\eps:\,\mathcal{P}(\eps)$ to denote $\exists\eps_{0}\in I\,\forall\eps\in(0,\eps_{0}]:\,\mathcal{P}(\eps)$.
We can read $\forall^{0}\eps$ as \emph{for $\eps$ small}.
\item We say that a net $(x_{\eps})\in\R^{I}$ \emph{is $\rho$-moderate},
and we write $(x_{\eps})\in\R_{\rho}$ if
\[
\exists(J_{\eps})\in\mathcal{I}(\rho):\ x_{\eps}=O(J_{\eps})\text{ as }\eps\to0^{+},
\]
i.e., if
\[
\exists N\in\N\,\forall^{0}\eps:\ |x_{\eps}|\le\rho_{\eps}^{-N}.
\]
\item Let $(x_{\eps})$, $(y_{\eps})\in\R^{I}$, then we say that $(x_{\eps})\sim_{\rho}(y_{\eps})$
if
\[
\forall(J_{\eps})\in\mathcal{I}(\rho):\ x_{\eps}=y_{\eps}+O(J_{\eps}^{-1})\text{ as }\eps\to0^{+},
\]
that is if
\begin{equation}
\forall n\in\N\,\forall^{0}\eps:\ |x_{\eps}-y_{\eps}|\le\rho_{\eps}^{n}.\label{eq:negligible}
\end{equation}
This is a congruence relation on the ring $\R_{\rho}$ of moderate
nets with respect to pointwise operations, and we can hence define
\[
\RC{\rho}:=\R_{\rho}/\sim_{\rho},
\]
which we call \emph{Robinson-Colombeau ring of generalized numbers}.
This name is justified by \cite{Rob73,C1}: Indeed, in \cite{Rob73}
A.~Robinson introduced the notion of moderate and negligible nets
depending on an arbitrary fixed infinitesimal $\rho$ (in the framework
of nonstandard analysis); independently, J.F.~Colombeau, cf.~e.g.~\cite{C1}
and references therein, studied the same concepts without using nonstandard
analysis, but considering only the particular gauge $\rho_{\eps}=\eps$.
\end{enumerate}
\end{defn}
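As a purely illustrative aside, the moderateness condition of Def.~\ref{def:RCGN} can be probed numerically on a finite sample of values of $\eps$. This is a heuristic check, not a proof; the gauge $\rho_{\eps}=\eps$, the sample points and the bound on $N$ are arbitrary choices made here for illustration.

```python
# Finite-sample heuristic (not a proof) for rho-moderateness of a net
# (x_eps) with the particular gauge rho_eps = eps:
# (x_eps) is moderate iff there exists N with |x_eps| <= eps**(-N) for eps small.
import math

EPS = [0.1, 0.05, 0.02, 0.01]    # a hand-picked sample of "small" eps values

def looks_moderate(x, n_max=10):
    """Check |x(eps)| <= eps**(-N) on the sample, for some N <= n_max."""
    return any(all(abs(x(e)) <= e ** (-n) for e in EPS)
               for n in range(n_max + 1))

assert looks_moderate(lambda e: 1.0 / e)                 # [1/eps] is moderate
assert looks_moderate(lambda e: 42.0)                    # constants are moderate
assert not looks_moderate(lambda e: math.exp(1.0 / e))   # grows faster than any eps**(-N)
```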
We will also use other directed sets instead of $I$: e.g.~$J\subseteq I$
such that $0$ is a closure point of $J$, or $I\times\N$. The reader
can easily check that all our constructions can be repeated in these
cases. We can also define an order relation on $\RC{\rho}$ by saying
that $[x_{\eps}]\le[y_{\eps}]$ if there exists $(z_{\eps})\in\R^{I}$
such that $(z_{\eps})\sim_{\rho}0$ (we then say that $(z_{\eps})$
is \emph{$\rho$-negligible}) and $x_{\eps}\le y_{\eps}+z_{\eps}$
for $\eps$ small. Equivalently, we have that $x\le y$ if and only
if there exist representatives $[x_{\eps}]=x$ and $[y_{\eps}]=y$
such that $x_{\eps}\le y_{\eps}$ for all $\eps$. Although the order
$\le$ is not total, we still have the possibility to define the infimum
$[x_{\eps}]\wedge[y_{\eps}]:=[\min(x_{\eps},y_{\eps})]$, the supremum
$[x_{\eps}]\vee[y_{\eps}]:=\left[\max(x_{\eps},y_{\eps})\right]$
of a finite number of generalized numbers. See \cite{MTAG} for a
complete study of supremum and infimum in $\RC{\rho}$. Henceforth, we
will also use the customary notation $\RC{\rho}^{*}$ for the set
of invertible generalized numbers, and we write $x<y$ to say that
$x\le y$ and $x-y\in\RC{\rho}^{*}$. Our notations for intervals are:
$[a,b]:=\{x\in\RC{\rho}\mid a\le x\le b\}$, $[a,b]_{\R}:=[a,b]\cap\R$,
and analogously for segments $[x,y]:=\left\{ x+r\cdot(y-x)\mid r\in[0,1]\right\} \subseteq\RC{\rho}^{n}$
and $[x,y]_{\R^{n}}=[x,y]\cap\R^{n}$. We also set $\mathcal{C}_{\rho}:=\R_{\rho}+i\cdot\R_{\rho}$
and $\frontRise{\mathbb{C}}{\rho}\widetilde{\mathbb{C}}:=\RC{\rho}+i\cdot\RC{\rho}$, where $i=\sqrt{-1}$. On the $\RC{\rho}$-module
$\RC{\rho}^{n}$ we can consider the natural extension of the Euclidean
norm, i.e.~$|[x_{\eps}]|:=[|x_{\eps}|]\in\RC{\rho}$, where $[x_{\eps}]\in\RC{\rho}^{n}$.
As in every non-Archimedean ring, we have the following
\begin{defn}
\label{def:nonArchNumbs}Let $x\in\RC{\rho}^{n}$ be a generalized
number, then
\begin{enumerate}
\item $x$ is \emph{infinitesimal} if $|x|\le r$ for all $r\in\R_{>0}$.
If $x=[x_{\eps}]$, this is equivalent to $\lim_{\eps\to0^{+}}\left|x_{\eps}\right|=0$.
We write $x\approx y$ if $x-y$ is infinitesimal.
\item $x$ is \emph{finite} if $|x|\le r$ for some $r\in\R_{>0}$.
\item $x$ is \emph{infinite} if $|x|\ge r$ for all $r\in\R_{>0}$. If
$x=[x_{\eps}]$, this is equivalent to $\lim_{\eps\to0^{+}}\left|x_{\eps}\right|=+\infty$.
\end{enumerate}
\end{defn}
\noindent For example, setting $\diff{\rho}:=[\rho_{\eps}]\in\RC{\rho}$,
we have that $\diff{\rho}^{n}\in\RC{\rho}$, $n\in\N_{>0}$, is an
invertible infinitesimal, whose reciprocal is $\diff{\rho}^{-n}=[\rho_{\eps}^{-n}]$,
which is necessarily a positive infinite number. Of course, in the
ring $\RC{\rho}$ there exist generalized numbers which are not in
any of the three classes of Def.~\ref{def:nonArchNumbs}, like e.g.~$x_{\eps}=\frac{1}{\eps}\sin\left(\frac{1}{\eps}\right)$.
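The last example can be checked on suitable subsequences of $\eps$. The sketch below is a heuristic numerical illustration, with sample points chosen by hand so that $\sin(1/\eps)$ is exactly $0$ or $\pm1$.

```python
import math

x = lambda e: math.sin(1.0 / e) / e   # the net x_eps = (1/eps) * sin(1/eps)

zero_eps = [1.0 / (k * math.pi) for k in range(1, 6)]          # 1/eps = k*pi
big_eps = [1.0 / ((k + 0.5) * math.pi) for k in range(1, 6)]   # 1/eps = (k + 1/2)*pi

# Along one subsequence of eps the net is (numerically) zero, along another
# it is unbounded, so [x_eps] is neither infinitesimal, nor finite, nor infinite.
assert all(abs(x(e)) < 1e-9 for e in zero_eps)
assert all(abs(x(e)) > 4.0 for e in big_eps)
```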
\begin{defn}
\label{def:stronWeak}We say that $x$ is a \emph{strong infinite
number} if $|x|\ge\diff{\rho}^{-r}$ for some $r\in\R_{>0}$, whereas
we say that $x$ is a \emph{weak infinite number} if $|x|\le\diff{\rho}^{-r}$
for all $r\in\R_{>0}$. For example, $x=-N\log\diff{\rho}$, $N\in\N,$
is a weak infinite number, whereas if $x_{\eps}=\rho_{\eps}^{-1}$
for $\eps=\frac{1}{k}$, $k\in\N_{>0}$, and $x_{\eps}=-\log\rho_{\eps}$
otherwise, then $x$ is neither a strong nor a weak infinite number.
\end{defn}
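These two notions can again be illustrated numerically. The checks below are heuristic sample evaluations for the gauge $\rho_{\eps}=\eps$, with the witnesses for "$\eps$ small" chosen by hand.

```python
import math

def weak_witness(r, e):
    """Check -log(e) <= e**(-r) at one sampled small eps value e."""
    return -math.log(e) <= e ** (-r)

# x = [-log eps] stays below eps**(-r) for *every* r > 0 once eps is small
# enough (a smaller r just needs a smaller eps witness): a weak infinite number.
assert weak_witness(1.0, 1e-30)
assert weak_witness(0.1, 1e-30)
assert weak_witness(0.01, 1e-300)

# [1/eps] exceeds eps**(-r) already for the fixed r = 1/2: a strong infinite number.
assert all(1.0 / e >= e ** (-0.5) for e in [1e-10, 1e-20, 1e-30])
```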
The following result is useful to deal with positive and invertible
generalized numbers. For its proof, see e.g.~\cite{GKOS}.
\begin{lem}
\label{lem:mayer} Let $x\in\RC{\rho}$. Then the following are equivalent:
\begin{enumerate}
\item \label{enu:positiveInvertible}$x$ is invertible and $x\ge0$, i.e.~$x>0$.
\item \label{enu:strictlyPositive}For each representative $(x_{\eps})\in\R_{\rho}$
of $x$ we have $\forall^{0}\eps:\ x_{\eps}>0$.
\item \label{enu:greater-i_epsTom}For each representative $(x_{\eps})\in\R_{\rho}$
of $x$ we have $\exists m\in\N\,\forall^{0}\eps:\ x_{\eps}>\rho_{\eps}^{m}$.
\item \label{enu:There-exists-a}There exists a representative $(x_{\eps})\in\R_{\rho}$
of $x$ such that $\exists m\in\N\,\forall^{0}\eps:\ x_{\eps}>\rho_{\eps}^{m}$.
\end{enumerate}
\end{lem}
\subsection{Topologies on $\RC{\rho}^{n}$}
As we mentioned above, on the $\RC{\rho}$-module $\RC{\rho}^{n}$
we defined $|[x_{\eps}]|:=[|x_{\eps}|]\in\RC{\rho}$, where $[x_{\eps}]\in\RC{\rho}^{n}$.
Even if this generalized norm takes values in $\RC{\rho}$, it shares
some essential properties with classical norms:
\begin{align*}
& |x|=x\vee(-x)\\
& |x|\ge0\\
& |x|=0\Rightarrow x=0\\
& |y\cdot x|=|y|\cdot|x|\\
& |x+y|\le|x|+|y|\\
& ||x|-|y||\le|x-y|.
\end{align*}
It is therefore natural to consider on $\RC{\rho}^{n}$ a topology
generated by balls defined by this generalized norm and the set of
radii $\RC{\rho}_{>0}$ of positive invertible numbers:
\begin{defn}
\label{def:setOfRadii}Let $c\in\RC{\rho}^{n}$ then:
\begin{enumerate}
\item $B_{r}(c):=\left\{ x\in\RC{\rho}^{n}\mid\left|x-c\right|<r\right\} $
for each $r\in\RC{\rho}_{>0}$.
\item $\Eball_{r}(c):=\{x\in\R^{n}\mid|x-c|<r\}$, for each $r\in\R_{>0}$,
denotes an ordinary Euclidean ball in $\R^{n}$ if $c\in\R^{n}$.
\end{enumerate}
\end{defn}
\noindent The relation $<$ has better topological properties as compared
to the usual strict order relation $a\le b$ and $a\ne b$ (that we
will \emph{never} use) because the set of balls $\left\{ B_{r}(c)\mid r\in\RC{\rho}_{>0},\ c\in\RC{\rho}^{n}\right\} $
is a base for a topology on $\RC{\rho}^{n}$ called \emph{sharp topology}.
We will call \emph{sharply open set} any open set in the sharp topology.
The existence of infinitesimal neighborhoods (e.g.~$r=\diff{\rho}$)
implies that the sharp topology induces the discrete topology on $\R$.
This is a necessary result when one has to deal with continuous generalized
functions which have infinite derivatives. In fact, if $f'(x_{0})$
is infinite, we have $f(x)\approx f(x_{0})$ only for $x\approx x_{0}$, see \cite{GiKu18}. Also open intervals are defined using the relation
$<$, i.e.~$(a,b):=\{x\in\RC{\rho}\mid a<x<b\}$.
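The fact that infinitesimal radii separate distinct real numbers can be sketched on sampled values of $\eps$ (a heuristic illustration, not a proof; the gauge $\rho_{\eps}=\eps$ and the sampled tail are arbitrary choices).

```python
# Heuristic sample check: a ball of infinitesimal radius r = [eps] around a
# real center contains no other real number, so the sharp topology induces
# the discrete topology on R (gauge rho_eps = eps).
EPS_TAIL = [10.0 ** (-k) for k in range(6, 12)]   # a tail of small eps values

def eventually_in_ball(x, c):
    """|x - c| < eps for all sampled eps in the tail (radius r = [eps])."""
    return all(abs(x - c) < e for e in EPS_TAIL)

assert eventually_in_ball(3.0, 3.0)              # only the center itself
assert not eventually_in_ball(3.0 + 1e-7, 3.0)   # any other real eventually escapes
```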
\subsection{\label{subsec:subpoints}The language of subpoints}
The following simple language allows us to simplify some proofs using
steps that recall the classical real field $\R$, see \cite{MTAG}.
We first introduce the notion of \emph{subpoint}:
\begin{defn}
For subsets $J$, $K\subseteq I$ we write $K\subseteq_{0} J$ if $0$
is an accumulation point of $K$ and $K\subseteq J$ (we read it as:
$K$ \emph{is co-final in $J$}). Note that for any $J\subseteq_{0} I$,
the constructions introduced so far in Def.~\ref{def:RCGN} can be
repeated using nets $(x_{\eps})_{\eps\in J}$. We indicate the resulting
ring with the symbol $\RC{\rho}^{n}|_{J}$. More generally, no peculiar
property of $I=(0,1]$ will ever be used in the following, and hence
all the presented results can be easily generalized considering any
other directed set. If $K\subseteq_{0} J$, $x\in\RC{\rho}^{n}|_{J}$ and
$x'\in\RC{\rho}^{n}|_{K}$, then $x'$ is called a \emph{subpoint} of
$x$, denoted as $x'\subseteq x$, if there exist representatives
$(x_{\eps})_{\eps\in J}$, $(x'_{\eps})_{\eps\in K}$ of $x$, $x'$
such that $x'_{\eps}=x_{\eps}$ for all $\eps\in K$. In this case
we write $x'=x|_{K}$, $\dom{x'}:=K$, and the restriction $(-)|_{K}:\RC{\rho}^{n}\longrightarrow\RC{\rho}^{n}|_{K}$
is a well defined operation. In general, for $X\subseteq\RC{\rho}^{n}$
we set $X|_{J}:=\{x|_{J}\in\RC{\rho}^{n}|_{J}\mid x\in X\}$.
\end{defn}
In the next definition, we introduce binary relations that hold only
\emph{on subpoints}. Clearly, this idea is inherited from nonstandard
analysis, where co-final subsets are always taken in a fixed ultrafilter.
\begin{defn}
Let $x$, $y\in\RC{\rho}$, $L\subseteq_{0} I$, then we say
\begin{enumerate}
\item $x<_{L}y\ :\iff\ x|_{L}<y|_{L}$ (the latter inequality has to be
meant in the ordered ring $\RC{\rho}|_{L}$). We read $x<_{L}y$ as ``\emph{$x$
is less than $y$ on $L$}''.
\item $x\sbpt{<}y\ :\iff\ \exists L\subseteq_{0} I:\ x<_{L}y$. We read $x\sbpt{<}y$
as ``\emph{$x$ is less than $y$ on subpoints''.}
\end{enumerate}
Analogously, we can define other relations holding only on subpoints
such as e.g.: $=_{L}$, $\in_{L}$, $\sbpt{\in}$, $\sbpt{\le}$,
$\sbpt{=}$, $\sbpt{\subseteq}$, etc.
\end{defn}
\noindent For example, we have
\begin{align*}
x\le y\ & \iff\ \forall L\subseteq_{0} I:\ x\le_{L}y\\
x<y\ & \iff\ \forall L\subseteq_{0} I:\ x<_{L}y,
\end{align*}
the former following from the definition of $\le$, whereas the latter
following from Lem.~\ref{lem:mayer}. Moreover, if $\mathcal{P}\left\{ x_{\eps}\right\} $
is an arbitrary property of $x_{\eps}$, then
\begin{equation}
\neg\left(\forall^{0}\eps:\ \mathcal{P}\left\{ x_{\eps}\right\} \right)\ \iff\ \exists L\subseteq_{0} I\,\forall\eps\in L:\ \neg\mathcal{P}\left\{ x_{\eps}\right\} .\label{eq:negation}
\end{equation}
Note explicitly that, generally speaking, relations on subpoints,
such as $\sbpt{\le}$ or $\sbpt{=}$, do not inherit the same properties
of the corresponding relations for points. So, e.g., both $\sbpt{=}$
and $\sbpt{\le}$ are not transitive relations.
The next result clarifies how to equivalently write a negation of
an inequality or of an equality using the language of subpoints.
\begin{lem}
\label{lem:negationsSubpoints}Let $x$, $y\in\RC{\rho}$, then
\begin{enumerate}
\item \label{enu:neg-le}$x\nleq y\quad\Longleftrightarrow\quad x\sbpt{>}y$
\item \label{enu:negStrictlyLess}$x\not<y\quad\Longleftrightarrow\quad x\sbpt{\ge}y$
\item \label{enu:negEqual}$x\ne y\quad\Longleftrightarrow\quad x\sbpt{>}y$
or $x\sbpt{<}y$
\end{enumerate}
\end{lem}
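A concrete instance of Lem.~\ref{lem:negationsSubpoints} is the oscillating net seen above: for $x=[\sin(1/\eps)]$ neither $x\le0$ nor $x\ge0$ holds, but both signs occur on subpoints. The sketch below is a heuristic numerical check on hand-picked co-final families of $\eps$.

```python
import math

x = lambda e: math.sin(1.0 / e)   # a representative of x = [sin(1/eps)]

# Two co-final families of eps values (both accumulate at 0):
L = [1.0 / ((4 * k + 1) * math.pi / 2) for k in range(1, 6)]  # sin(1/e) = +1
K = [1.0 / ((4 * k + 3) * math.pi / 2) for k in range(1, 6)]  # sin(1/e) = -1

# Neither x <= 0 nor x >= 0 holds globally, but on subpoints both signs occur:
assert all(x(e) > 0.5 for e in L)    # x >_L 0, with an invertible margin
assert all(x(e) < -0.5 for e in K)   # x <_K 0
```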
Using the language of subpoints, we can write different forms of dichotomy
or trichotomy laws for inequality.
\begin{lem}
\label{lem:trich1st}Let $x$, $y\in\RC{\rho}$, then
\begin{enumerate}
\item \label{enu:dichotomy}$x\le y$ or $x\sbpt{>}y$
\item \label{enu:strictDichotomy}$\neg(x\sbpt{>}y$ and $x\le y)$
\item \label{enu:trichotomy}$x=y$ or $x\sbpt{<}y$ or $x\sbpt{>}y$
\item \label{enu:leThen}$x\le y\ \Rightarrow\ x\sbpt{<}y$ or $x=y$
\item \label{enu:leSubpointsIff}$x\sbpt{\le}y\ \Longleftrightarrow\ x\sbpt{<}y$
or $x\sbpt{=}y$.
\end{enumerate}
\end{lem}
\noindent As usual, we note that these results can also be trivially
repeated for the ring $\RC{\rho}|_{L}$. So, e.g., we have $x\not\le_{L}y$
if and only if $\exists J\subseteq_{0} L:\ x>_{J}y$, which is the analog
of Lem.~\ref{lem:negationsSubpoints}.\ref{enu:neg-le} for the ring
$\RC{\rho}|_{L}$.
\subsection{Open, closed and bounded sets generated by nets}
A natural way to obtain sharply open, closed and bounded sets in $\RC{\rho}^{n}$
is by using a net $(A_{\eps})$ of subsets $A_{\eps}\subseteq\R^{n}$.
We have two ways of extending the membership relation $x_{\eps}\in A_{\eps}$
to generalized points $[x_{\eps}]\in\RC{\rho}^{n}$ (cf.\ \cite{ObVe08,GiKu15}).
\begin{defn}
\label{def:internalStronglyInternal}Let $(A_{\eps})$ be a net of
subsets of $\R^{n}$, then
\begin{enumerate}
\item $[A_{\eps}]:=\left\{ [x_{\eps}]\in\RC{\rho}^{n}\mid\forall^{0}\eps:\,x_{\eps}\in A_{\eps}\right\} $
is called the \emph{internal set} generated by the net $(A_{\eps})$.
\item Let $(x_{\eps})$ be a net of points of $\R^{n}$, then we say that
$x_{\eps}\in_{\eps}A_{\eps}$, and we read it as $(x_{\eps})$ \emph{strongly
belongs to $(A_{\eps})$}, if
\begin{enumerate}
\item $\forall^{0}\eps:\ x_{\eps}\in A_{\eps}$.
\item If $(x'_{\eps})\sim_{\rho}(x_{\eps})$, then also $x'_{\eps}\in A_{\eps}$
for $\eps$ small.
\end{enumerate}
\noindent Moreover, we set $\sint{A_{\eps}}:=\left\{ [x_{\eps}]\in\RC{\rho}^{n}\mid x_{\eps}\in_{\eps}A_{\eps}\right\} $,
and we call it the \emph{strongly internal set} generated by the net
$(A_{\eps})$.
\item We say that the internal set $K=[A_{\eps}]$ is \emph{sharply bounded}
if there exists $M\in\RC{\rho}_{>0}$ such that $K\subseteq B_{M}(0)$.
\item Finally, we say that $(A_{\eps})$ is a \emph{sharply bounded
net} if there exists $N\in\R_{>0}$ such that $\forall^{0}\eps\,\forall x\in A_{\eps}:\ |x|\le\rho_{\eps}^{-N}$.
\end{enumerate}
\end{defn}
\noindent Therefore, $x\in[A_{\eps}]$ if and only if there exists a representative
$(x_{\eps})$ of $x$ such that $x_{\eps}\in A_{\eps}$ for $\eps$ small,
whereas this membership is independent of the chosen representative
in the case of strongly internal sets. An internal set generated by a
constant net $A_{\eps}=A\subseteq\R^{n}$ will simply be denoted by
$[A]$.
The following theorem (cf.~\cite{ObVe08,GiKu15,GIO1}) shows that
internal and strongly internal sets have dual topological properties:
\begin{thm}
\noindent \label{thm:strongMembershipAndDistanceComplement}For $\eps\in I$,
let $A_{\eps}\subseteq\R^{n}$ and let $x_{\eps}\in\R^{n}$. Then
we have
\begin{enumerate}
\item \label{enu:internalSetsDistance}$[x_{\eps}]\in[A_{\eps}]$ if and
only if $\forall q\in\R_{>0}\,\forall^{0}\eps:\ d(x_{\eps},A_{\eps})\le\rho_{\eps}^{q}$.
Therefore $[x_{\eps}]\in[A_{\eps}]$ if and only if $[d(x_{\eps},A_{\eps})]=0\in\RC{\rho}$.
\item \label{enu:stronglyIntSetsDistance}$[x_{\eps}]\in\sint{A_{\eps}}$
if and only if $\exists q\in\R_{>0}\,\forall^{0}\eps:\ d(x_{\eps},A_{\eps}^{\text{c}})>\rho_{\eps}^{q}$,
where $A_{\eps}^{\text{c}}:=\R^{n}\setminus A_{\eps}$. Therefore,
if $(d(x_{\eps},A_{\eps}^{\text{c}}))\in\R_{\rho}$, then $[x_{\eps}]\in\sint{A_{\eps}}$
if and only if $[d(x_{\eps},A_{\eps}^{\text{c}})]>0$.
\item \label{enu:internalAreClosed}$[A_{\eps}]$ is sharply closed.
\item \label{enu:stronglyIntAreOpen}$\sint{A_{\eps}}$ is sharply open.
\item \label{enu:internalGeneratedByClosed}$[A_{\eps}]=\left[\text{\emph{cl}}\left(A_{\eps}\right)\right]$,
where $\text{\emph{cl}}\left(S\right)$ is the closure of $S\subseteq\R^{n}$.
\item \label{enu:stronglyIntGeneratedByOpen}$\sint{A_{\eps}}=\sint{\text{\emph{int}}\left(A_{\eps}\right)}$,
where $\text{\emph{int}}\left(S\right)$ is the interior of $S\subseteq\R^{n}$.
\end{enumerate}
\end{thm}
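\noindent For instance, take the constant net $A_{\eps}=(0,1)\subseteq\R$.
By \ref{enu:internalSetsDistance}, $0\in[(0,1)]$, since $d(0,(0,1))=0$;
on the other hand, $0\notin\sint{(0,1)}$ by \ref{enu:stronglyIntSetsDistance},
because $d(0,(0,1)^{\text{c}})=0$. The infinitesimal $\diff{\rho}:=[\rho_{\eps}]$
instead satisfies $d(\rho_{\eps},(0,1)^{\text{c}})=\rho_{\eps}>\rho_{\eps}^{2}$
for $\eps$ small, so that $\diff{\rho}\in\sint{(0,1)}$. This agrees
with \ref{enu:internalAreClosed} and \ref{enu:stronglyIntAreOpen}:
$[(0,1)]=\left[[0,1]_{\R}\right]$ is sharply closed and contains the ``boundary''
points $0$, $1$, whereas $\sint{(0,1)}$ is sharply open.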
\noindent For example, it is not hard to show that the closure in
the sharp topology of a ball of center $c=[c_{\eps}]$ and radius
$r=[r_{\eps}]>0$ is
\begin{equation}
\overline{B_{r}(c)}=\left\{ x\in\RC{\rho}^{d}\mid\left|x-c\right|\le r\right\} =\left[\overline{\Eball_{r_{\eps}}(c_{\eps})}\right],\label{eq:closureBall}
\end{equation}
whereas
\[
B_{r}(c)=\left\{ x\in\RC{\rho}^{d}\mid\left|x-c\right|<r\right\} =\sint{\Eball_{r_{\eps}}(c_{\eps})}.
\]
\subsection{Generalized smooth functions and their calculus}
Using the ring $\RC{\rho}$, it is easy to consider a Gaussian with an
infinitesimal standard deviation. If we denote this probability density
by $f(x,\sigma)$, and if we set $\sigma=[\sigma_{\eps}]\in\RC{\rho}_{>0}$,
where $\sigma\approx0$, we obtain the net of smooth functions $(f(-,\sigma_{\eps}))_{\eps\in I}$.
This is the basic idea we are going to develop in the following
\begin{defn}
\label{def:netDefMap}Let $\left(\Omega_{\eps}\right)$ be a net of
open subsets of $\R^{n}$. Let $X\subseteq\RC{\rho}^{n}$ and $Y\subseteq\RC{\rho}^{d}$
be arbitrary subsets of generalized points. Then we say that
\[
f:X\longrightarrow Y\text{ is a \emph{generalized smooth function}}
\]
if there exists a net $f_{\eps}\in\mathcal{C}^\infty(\Omega_{\eps},\R^{d})$
defining the map $f:X\longrightarrow Y$ in the sense that
\begin{enumerate}
\item $X\subseteq\langle\Omega_{\eps}\rangle$,
\item $f([x_{\eps}])=[f_{\eps}(x_{\eps})]\in Y$ for all $x=[x_{\eps}]\in X$,
\item $(\partial^{\alpha}f_{\eps}(x_{\eps}))\in\R_{{\scriptscriptstyle \rho}}^{d}$
for all $x=[x_{\eps}]\in X$ and all $\alpha\in\N^{n}$.
\end{enumerate}
The space of generalized smooth functions (GSF) from $X$ to $Y$
is denoted by $\frontRise{\mathcal{G}}{\rho}\mathcal{GC}^{\infty}(X,Y)$.
\end{defn}
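\noindent As a first example, consider again the Gaussian with infinitesimal
standard deviation mentioned above; we only sketch why it yields a
GSF. Set $f_{\eps}(x):=\frac{1}{\sqrt{2\pi}\sigma_{\eps}}e^{-x^{2}/2\sigma_{\eps}^{2}}$
for $x\in\Omega_{\eps}:=\R$. Since $\sigma=[\sigma_{\eps}]\in\RC{\rho}_{>0}$
is invertible, we have $\sigma_{\eps}\ge\rho_{\eps}^{m}$ for some
$m\in\N$ and for $\eps$ small. By induction, $\partial^{k}f_{\eps}(x)=\sigma_{\eps}^{-k}P_{k}(x/\sigma_{\eps})f_{\eps}(x)$
for a suitable polynomial $P_{k}$, so that
\[
\sup_{x\in\R}\left|\partial^{k}f_{\eps}(x)\right|\le C_{k}\sigma_{\eps}^{-k-1}\le C_{k}\rho_{\eps}^{-m(k+1)},
\]
which is $\rho$-moderate. Therefore, the net $(f_{\eps})$ defines
a GSF $f(-,\sigma)\in\frontRise{\mathcal{G}}{\rho}\mathcal{GC}^{\infty}(\RC{\rho},\RC{\rho})$; note that $f(0,\sigma)=\frac{1}{\sqrt{2\pi}\sigma}$
is an infinite number if $\sigma\approx0$.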
Let us note explicitly that this definition states minimal logical
conditions to obtain a set-theoretical map from $X$ into $Y$ that
is defined by a net of smooth functions and of which we can take arbitrary
derivatives while still remaining in the space of $\rho$-moderate nets.
In particular, the following Thm.~\ref{thm:propGSF} states that
the equality $f([x_{\eps}])=[f_{\eps}(x_{\eps})]$ is meaningful,
i.e.~that we have independence from the representatives for all derivatives
$[x_{\eps}]\in X\mapsto[\partial^{\alpha}f_{\eps}(x_{\eps})]\in\RC{\rho}^{d}$,
$\alpha\in\N^{n}$.
\begin{thm}
\label{thm:propGSF}Let $X\subseteq\RC{\rho}^{n}$ and $Y\subseteq\RC{\rho}^{d}$
be arbitrary subsets of generalized points. Let $f_{\eps}\in\mathcal{C}^\infty(\Omega_{\eps},\R^{d})$
be a net of smooth functions that defines a generalized smooth map
of the type $X\longrightarrow Y$, then
\begin{enumerate}
\item $\forall\alpha\in\N^{n}\,\forall(x_{\eps}),(x'_{\eps})\in\R_{\rho}^{n}:\ [x_{\eps}]=[x'_{\eps}]\in X\ \Rightarrow\ (\partial^{\alpha}f_{\eps}(x_{\eps}))\sim_{\rho}(\partial^{\alpha}f_{\eps}(x'_{\eps}))$.
\item \label{enu:GSF-cont}Each $f\in\frontRise{\mathcal{G}}{\rho}\mathcal{GC}^{\infty}(X,Y)$ is continuous with respect
to the sharp topologies induced on $X$, $Y$.
\item \label{enu:globallyDefNet}$f:X\longrightarrow Y$ is a GSF if and
only if there exists a net $v_{\eps}\in\mathcal{C}^\infty(\R^{n},\R^{d})$ defining
a generalized smooth map of type $X\longrightarrow Y$ such that $f=[v_{\eps}(-)]|_{X}$.
\item \label{enu:category}GSF are closed with respect to composition, i.e.~subsets
$S\subseteq\RC{\rho}^{s}$ with the trace of the sharp topology as objects, and
GSF as arrows, form a subcategory of the category of topological spaces.
We will call this category $\frontRise{\mathcal{G}}{\rho}\mathcal{GC}^{\infty}$, the \emph{category of GSF}. Therefore,
with pointwise sum and product, any space $\frontRise{\mathcal{G}}{\rho}\mathcal{GC}^{\infty}(X,\RC{\rho})$ is an
algebra.
\end{enumerate}
\end{thm}
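\noindent Note that the moderateness condition in Def.~\ref{def:netDefMap}
depends on the gauge $\rho$. For instance, consider the net of linear
maps $f_{\eps}(x):=e^{1/\eps}x$, $x\in\R$. If $\rho=(\eps)$, then
$\left(f_{\eps}(1)\right)=\left(e^{1/\eps}\right)$ is not $\rho$-moderate,
and hence $(f_{\eps})$ does not define a GSF on any $X\ni1$. If instead
$\rho=(e^{-1/\eps})$, then $e^{1/\eps}=\rho_{\eps}^{-1}$, all the
derivatives $\partial^{k}f_{\eps}$ are $\rho$-moderate, and $(f_{\eps})$
defines the GSF $x\in\RC{\rho}\mapsto\diff{\rho}^{-1}x\in\RC{\rho}$.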
The differential calculus for GSF can be introduced by showing existence
and uniqueness of another GSF serving as incremental ratio (sometimes
this is called \emph{derivative à la Carathéodory}, see e.g.~\cite{Kuh91}).
\begin{thm}[Fermat-Reyes theorem for GSF]
\label{thm:FR-forGSF} Let $U\subseteq\RC{\rho}^{n}$ be a sharply
open set, let $v=[v_{\eps}]\in\RC{\rho}^{n}$, and let $f\in\frontRise{\mathcal{G}}{\rho}\mathcal{GC}^{\infty}(U,\RC{\rho})$
be a GSF generated by the net of smooth functions $f_{\eps}\in\mathcal{C}^\infty(\Omega_{\eps},\R)$.
Then
\begin{enumerate}
\item \label{enu:existenceRatio}There exists a sharp neighborhood $T$
of $U\times\{0\}$ and a generalized smooth map $r\in\frontRise{\mathcal{G}}{\rho}\mathcal{GC}^{\infty}(T,\RC{\rho})$,
called the \emph{generalized incremental ratio} of $f$ \emph{along}
$v$, such that
\[
\forall(x,h)\in T:\ f(x+hv)=f(x)+h\cdot r(x,h).
\]
\item \label{enu:uniquenessRatio}Any two generalized incremental ratios
coincide on a sharp neighborhood of $U\times\{0\}$, so that we can
use the notation $f[x;h]:=r(x,h)$ for $h$ sufficiently small.
\item \label{enu:defDer}We have $f[x;0]=\left[\frac{\partial f_{\eps}}{\partial v_{\eps}}(x_{\eps})\right]$
for every $x\in U$ and we can thus define $Df(x)\cdot v:=\frac{\partial f}{\partial v}(x):=f[x;0]$,
so that $\frac{\partial f}{\partial v}\in\frontRise{\mathcal{G}}{\rho}\mathcal{GC}^{\infty}(U,\RC{\rho})$.
\end{enumerate}
\end{thm}
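\noindent For example, for $f(x)=x^{2}$, $x\in\RC{\rho}$, and $v=1$,
we can take $T=\RC{\rho}\times\RC{\rho}$ and $r(x,h)=2x+h$, since
\[
f(x+h)=x^{2}+h\cdot(2x+h),
\]
so that $f[x;0]=2x=f'(x)$, as expected. Note that no limit of difference
quotients is involved: such a limit would be delicate in the non-Archimedean
ring $\RC{\rho}$, where the increment $h$ can be, e.g., a nonzero infinitesimal.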
Note that this result permits us to consider the partial derivative
of $f$ with respect to an arbitrary generalized vector $v\in\RC{\rho}^{n}$,
which can be, e.g., infinitesimal or infinite. Using this result recursively,
we can also define subsequent differentials $D^{j}f(x)$ as
$j$-multilinear maps, and we set $D^{j}f(x)\cdot h^{j}:=D^{j}f(x)(h,\ldots,h)$,
where the vector $h$ is repeated $j$ times.
The set of all the $j$-multilinear maps $\left(\RC{\rho}^{n}\right)^{j}\longrightarrow\RC{\rho}^{d}$
over the ring $\RC{\rho}$ will be denoted by $L^{j}(\RC{\rho}^{n},\RC{\rho}^{d})$.
For $A=[A_{\eps}(-)]\in L^{j}(\RC{\rho}^{n},\RC{\rho}^{d})$, we set $\Vert{A}\Vert:=[|{A_{\eps}}|]$,
the generalized number defined by the operator norms of the multilinear
maps $A_{\eps}\in L^{j}(\R^{n},\R^{d})$.
The following result follows from the analogous properties for the
nets of smooth functions defining $f$ and $g$.
\begin{thm}
\label{thm:rulesDer} Let $U\subseteq\RC{\rho}^{n}$ be an open subset
in the sharp topology, let $v\in\RC{\rho}^{n}$ and $f$, $g:U\longrightarrow\RC{\rho}$
be generalized smooth maps. Then
\begin{enumerate}
\item $\frac{\partial(f+g)}{\partial v}=\frac{\partial f}{\partial v}+\frac{\partial g}{\partial v}$
\item $\frac{\partial(r\cdot f)}{\partial v}=r\cdot\frac{\partial f}{\partial v}\quad\forall r\in\RC{\rho}$
\item $\frac{\partial(f\cdot g)}{\partial v}=\frac{\partial f}{\partial v}\cdot g+f\cdot\frac{\partial g}{\partial v}$
\item For each $x\in U$, the map $\diff{f}(x).v:=\frac{\partial f}{\partial v}(x)\in\RC{\rho}$
is $\RC{\rho}$-linear in $v\in\RC{\rho}^{n}$
\item Let $U\subseteq\RC{\rho}^{n}$ and $V\subseteq\RC{\rho}^{d}$ be open subsets
in the sharp topology and $g\in{}^{\rho}\Gcinf(V,U)$, $f\in{}^{\rho}\Gcinf(U,\RC{\rho})$
be generalized smooth maps. Then for all $x\in V$ and all $v\in\RC{\rho}^{d}$,
we have $\frac{\partial\left(f\circ g\right)}{\partial v}(x)=\diff{f}\left(g(x)\right).\frac{\partial g}{\partial v}(x)$.
\end{enumerate}
\end{thm}
One-dimensional integral calculus of GSF is based on the following
\begin{thm}
\label{thm:existenceUniquenessPrimitives}Let $f\in{}^{\rho}\Gcinf([a,b],\RC{\rho})$
be a GSF defined in the interval $[a,b]\subseteq\RC{\rho}$, where
$a<b$. Let $c\in[a,b]$. Then, there exists one and only one GSF
$F\in{}^{\rho}\Gcinf([a,b],\RC{\rho})$ such that $F(c)=0$ and $F'(x)=f(x)$
for all $x\in[a,b]$. Moreover, if $f$ is defined by the net $f_{\eps}\in\Coo(\R,\R)$
and $c=[c_{\eps}]$, then $F(x)=\left[\int_{c_{\eps}}^{x_{\eps}}f_{\eps}(s)\diff{s}\right]$
for all $x=[x_{\eps}]\in[a,b]$.
\end{thm}
\noindent We can thus define
\begin{defn}
\label{def:integral}Under the assumptions of Theorem \ref{thm:existenceUniquenessPrimitives},
we denote by $\int_{c}^{(-)}f:=\int_{c}^{(-)}f(s)\,\diff{s}\in{}^{\rho}\Gcinf([a,b],\RC{\rho})$
the unique GSF such that:
\begin{enumerate}
\item $\int_{c}^{c}f=0$
\item $\left(\int_{c}^{(-)}f\right)'(x)=\frac{\diff{}}{\diff{x}}\int_{c}^{x}f(s)\,\diff{s}=f(x)$
for all $x\in[a,b]$.
\end{enumerate}
\end{defn}
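\noindent For instance, for $f(s)=s$, $c=0$ and, e.g., $[a,b]=[-\diff{\rho}^{-1},\diff{\rho}^{-1}]$,
we obtain $\int_{0}^{x}s\,\diff{s}=\frac{x^{2}}{2}$ for all $x\in[a,b]$,
exactly as in the classical case, but now the extremes of integration
can be generalized numbers: $\int_{0}^{\diff{\rho}}s\,\diff{s}=\frac{1}{2}\diff{\rho}^{2}\approx0$
is infinitesimal, whereas $\int_{0}^{\diff{\rho}^{-1}}s\,\diff{s}=\frac{1}{2}\diff{\rho}^{-2}$
is an infinite number.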
\noindent All the classical rules of integral calculus hold in this
setting:
\begin{thm}
\label{thm:intRules}Let $f\in{}^{\rho}\Gcinf(U,\RC{\rho})$ and $g\in{}^{\rho}\Gcinf(V,\RC{\rho})$
be two GSF defined on sharply open domains in $\RC{\rho}$. Let $a$,
$b\in\RC{\rho}$ with $a<b$ and $c$, $d\in[a,b]\subseteq U\cap V$,
then
\begin{enumerate}
\item \label{enu:additivityFunction}$\int_{c}^{d}\left(f+g\right)=\int_{c}^{d}f+\int_{c}^{d}g$
\item \label{enu:homog}$\int_{c}^{d}\lambda f=\lambda\int_{c}^{d}f\quad\forall\lambda\in\RC{\rho}$
\item \label{enu:additivityDomain}$\int_{c}^{d}f=\int_{c}^{e}f+\int_{e}^{d}f$
for all $e\in[a,b]$
\item \label{enu:chageOfExtremes}$\int_{c}^{d}f=-\int_{d}^{c}f$
\item \label{enu:foundamental}$\int_{c}^{d}f'=f(d)-f(c)$
\item \label{enu:intByParts}$\int_{c}^{d}f'\cdot g=\left[f\cdot g\right]_{c}^{d}-\int_{c}^{d}f\cdot g'$
\item \label{enu:intMonotone}If $f(x)\le g(x)$ for all $x\in[a,b]$, then
$\int_{a}^{b}f\le\int_{a}^{b}g$.
\item \label{enu:derUnderInt}Let $a$, $b$, $c$, $d\in\RC{\rho}$, with
$a<b$ and $c<d$, and $f\in\frontRise{\mathcal{G}}{\rho}\mathcal{GC}^{\infty}([a,b]\times[c,d],\RC{\rho})$, then
\[
\frac{\diff{}}{\diff{s}}\int_{a}^{b}f(\tau,s)\,\diff{\tau}=\int_{a}^{b}\frac{\partial}{\partial s}f(\tau,s)\,\diff{\tau}\quad\forall s\in[c,d].
\]
\end{enumerate}
\end{thm}
\begin{thm}
\label{thm:changeOfVariablesInt}Let $f\in{}^{\rho}\Gcinf(U,\RC{\rho})$
and $\phi\in{}^{\rho}\Gcinf(V,U)$ be GSF defined on sharply open
domains in $\RC{\rho}$. Let $a$, $b\in\RC{\rho}$, with $a<b$, such that
$[a,b]\subseteq V$, $\phi(a)<\phi(b)$, $[\phi(a),\phi(b)]\subseteq U$.
Finally, assume that $\phi([a,b])\subseteq[\phi(a),\phi(b)]$. Then
\[
\int_{\phi(a)}^{\phi(b)}f(t)\diff{t}=\int_{a}^{b}f\left[\phi(s)\right]\cdot\phi'(s)\diff{s}.
\]
\end{thm}
We also have a generalization of Taylor's formula:
\begin{thm}
\label{thm:Taylor}Let $f\in\frontRise{\mathcal{G}}{\rho}\mathcal{GC}^{\infty}(U,\RC{\rho})$ be a generalized smooth
function defined in the sharply open set $U\subseteq\RC{\rho}^{d}$.
Let $a$, $b\in\RC{\rho}^{d}$ such that the line segment $[a,b]\subseteq U$,
and set $h:=b-a$. Then, for all $n\in\N$ we have
\begin{enumerate}
\item \label{enu:LagrangeRest}$\exists\xi\in[a,b]:\ f(a+h)=\sum_{j=0}^{n}\frac{\diff{^{j}f}(a)}{j!}\cdot h^{j}+\frac{\diff{^{n+1}f}(\xi)}{(n+1)!}\cdot h^{n+1}.$
\item \label{enu:integralRest}$f(a+h)=\sum_{j=0}^{n}\frac{\diff{^{j}f}(a)}{j!}\cdot h^{j}+\frac{1}{n!}\cdot\int_{0}^{1}(1-t)^{n}\,\diff{^{n+1}f}(a+th)\cdot h^{n+1}\,\diff{t}.$
\end{enumerate}
\noindent Moreover, there exists some $R\in\RC{\rho}_{>0}$ such that
\begin{equation}
\forall k\in B_{R}(0)\,\exists\xi\in[a,a+k]:\ f(a+k)=\sum_{j=0}^{n}\frac{\diff{^{j}f}(a)}{j!}\cdot k^{j}+\frac{\diff{^{n+1}f}(\xi)}{(n+1)!}\cdot k^{n+1}\label{eq:LagrangeInfRest}
\end{equation}
\begin{equation}
\frac{\diff{^{n+1}f}(\xi)}{(n+1)!}\cdot k^{n+1}=\frac{1}{n!}\cdot\int_{0}^{1}(1-t)^{n}\,\diff{^{n+1}f}(a+tk)\cdot k^{n+1}\,\diff{t}\approx0.\label{eq:integralInfRest}
\end{equation}
\end{thm}
Formulas \ref{enu:LagrangeRest} and \ref{enu:integralRest} correspond
to a plain generalization of Taylor's theorem for ordinary smooth
functions with Lagrange and integral remainder, respectively. Dealing
with generalized functions, it is important to note that this direct
statement also includes the possibility that the differential $\diff{^{n+1}f}(\xi)$
may be an infinite number at some point. For this reason, in \eqref{eq:LagrangeInfRest}
and \eqref{eq:integralInfRest}, considering a sufficiently small
increment $k$, we get more classical infinitesimal remainders $\diff{^{n+1}f}(\xi)\cdot k^{n+1}\approx0$.
We can also define right and left derivatives as e.g.~$f'(a):=f'_{+}(a):=\lim_{\substack{t\to a\\
a<t
}
}f'(t)$, which always exist if $f\in\frontRise{\mathcal{G}}{\rho}\mathcal{GC}^{\infty}([a,b],\RC{\rho}^{d})$.
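\noindent As a simple illustration of \ref{enu:LagrangeRest}, let $f\in\frontRise{\mathcal{G}}{\rho}\mathcal{GC}^{\infty}(B_{1}(0),\RC{\rho})$
be defined by the constant net $f_{\eps}:=\exp$, and let $a=0$. For
$n=1$ and any infinitesimal $h$ (so that $[0,h]\subseteq B_{1}(0)$),
we get some $\xi\in[0,h]$ such that
\[
e^{h}=1+h+\frac{e^{\xi}}{2}h^{2}.
\]
Since $\xi\approx0$, the number $e^{\xi}$ is finite and the remainder
$\frac{e^{\xi}}{2}h^{2}\approx0$, i.e.~$e^{h}=1+h$ up to a second-order
infinitesimal. On the contrary, for generalized functions such as
the Dirac delta of Sec.~\ref{subsec:Embedding}, the differentials
$\diff{^{n+1}f}(\xi)$ can be infinite numbers, and only sufficiently
small increments $k$ as in \eqref{eq:LagrangeInfRest} guarantee an
infinitesimal remainder.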
\subsection{\label{subsec:Embedding}Embedding of Sobolev-Schwartz distributions
and Colombeau functions}
We finally recall two results that give a certain flexibility in constructing
embeddings of Schwartz distributions. Note that both the gauge
$\rho$ and the embedding of Schwartz distributions have to be chosen
depending on the problem we aim to solve. A trivial example in this
direction is the ODE $y'=y/\diff{\eps}$, which cannot be solved for
$\rho=(\eps)$, but it has a solution for $\rho=(e^{-1/\eps})$. As
another simple example, if we need the property $H(0)=1/2$, where
$H$ is the Heaviside function, then we have to choose the embedding
of distributions accordingly. In other words, both the gauges and
the particular embedding we choose have to be thought of as elements
of the mathematical structure we consider in order to deal with the
particular problem we want to solve. See also \cite{GiLu16,LuGi17}
for further details in this direction.\\
If $\phi\in\mathcal{D}(\R^{n})$, $r\in\R_{>0}$ and $x\in\R^{n}$,
we use the notations $r\odot\phi$ for the function $x\in\R^{n}\mapsto\frac{1}{r^{n}}\cdot\phi\left(\frac{x}{r}\right)\in\R$
and $x\oplus\phi$ for the function $y\in\R^{n}\mapsto\phi(y-x)\in\R$.
These notations permit us to highlight that $\odot$ is a free action
of the multiplicative group $(\R_{>0},\cdot,1)$ on $\mathcal{D}(\R^{n})$
and $\oplus$ is a free action of the additive group $(\R^{n},+,0)$
on $\mathcal{D}(\R^{n})$. We also have the distributive property
$r\odot(x\oplus\phi)=rx\oplus r\odot\phi$.
\begin{lem}
\label{lem:strictDeltaNet}Let $b=[b_{\eps}]\in\RC{\rho}$ be such that $\lim_{\eps\to0^{+}}b_{\eps}=+\infty$.
Let $d\in(0,1)_{\R}$. Then there exists a net $\left(\psi_{\eps}\right)_{\eps\in I}$
of functions of $\mathcal{D}(\R^{n})$ with the following properties:
\begin{enumerate}
\item \label{enu:suppStrictDeltaNet}$\text{supp}(\psi_{\eps})\subseteq B_{1}(0)$
and $\psi_{\eps}$ is even for all $\eps\in I$.
\item \label{enu:c_n}Let $\omega_{n}$ denote the surface area of $S^{n-1}$
and set $c_{n}:=\frac{2n}{\omega_{n}}$ for $n>1$ and $c_{1}:=1$,
then $\psi_{\eps}(0)=c_{n}$ for all $\eps\in I$.
\item \label{enu:intOneStrictDeltaNet}$\int\psi_{\eps}=1$ for all $\eps\in I$.
\item \label{enu:moderateStrictDeltaNet}$\forall\alpha\in\N^{n}:\ \sup_{x\in\R^{n}}\left|\partial^{\alpha}\psi_{\eps}(x)\right|=O(b_{\eps}^{2+|\alpha|})$
as $\eps\to0^{+}$.
\item \label{enu:momentsStrictDeltaNet}$\forall j\in\N\,\forall^{0}\eps:\ 1\le|\alpha|\le j\Rightarrow\int x^{\alpha}\cdot\psi_{\eps}(x)\,\diff{x}=0$.
\item \label{enu:smallNegPartStrictDeltaNet}$\forall\eta\in\R_{>0}\,\forall^{0}\eps:\ \int\left|\psi_{\eps}\right|\le1+\eta$.
\item \label{enu:int1Dim}If $n=1$, then the net $(\psi_{\eps})_{\eps\in I}$
can be chosen so that $\int_{-\infty}^{0}\psi_{\eps}=d$.
\end{enumerate}
\noindent In particular $\psi_{\eps}^{b}:=b_{\eps}^{-1}\odot\psi_{\eps}$
satisfies \ref{enu:intOneStrictDeltaNet} - \ref{enu:smallNegPartStrictDeltaNet}.
\end{lem}
\noindent Concerning embeddings of Schwartz distributions, we have
the following result, where $\csp{\Omega}:=\{[x_{\eps}]\in[\Omega]\mid\exists K\Subset\Omega\,\forall^{0}\eps:\ x_{\eps}\in K\}$
is called the set of \emph{compactly supported points in }$\Omega\subseteq\R^{n}$.
Note that $\csp{\Omega}=\left\{ x\in[\Omega]\mid x\text{ is finite}\right\} $
(see Def.~\ref{def:nonArchNumbs}).
\begin{thm}
\label{thm:embeddingD'}Under the assumptions of Lemma \ref{lem:strictDeltaNet},
let $\Omega\subseteq\R^{n}$ be an open set and let $(\psi_{\eps}^{b})$
be the net defined in Lemma \ref{lem:strictDeltaNet}. Then the mapping
\begin{equation}
\iota_{\Omega}^{b}:T\in\mathcal{E}'(\Omega)\mapsto\left[\left(T\ast\psi_{\eps}^{b}\right)(-)\right]\in\frontRise{\mathcal{G}}{\rho}\mathcal{GC}^{\infty}(\csp{\Omega},\RC{\rho})\label{eq:embE'}
\end{equation}
uniquely extends to a sheaf morphism of real vector spaces
\[
\iota^{b}:\mathcal{D}'\longrightarrow\frontRise{\mathcal{G}}{\rho}\mathcal{GC}^{\infty}(\csp{-},\RC{\rho}),
\]
and satisfies the following properties:
\begin{enumerate}
\item \label{enu:embSmooth}If $b\in\RC{\rho}_{>0}$ is a strong infinite number,
then $\iota^{b}|_{\Coo(-)}:\Coo(-)\longrightarrow\frontRise{\mathcal{G}}{\rho}\mathcal{GC}^{\infty}(\csp{-},\RC{\rho})$ is
a sheaf morphism of algebras and $\iota_{\Omega}^{b}(f)(x)=f(x)$
for all smooth functions $f\in\Coo(\Omega)$ and all $x\in\Omega$;
\item \label{enu:supportDistr}If $T\in\mathcal{E}'(\Omega)$ then $\text{\emph{supp}}(T)=\text{\emph{stsupp}}(\iota_{\Omega}^{b}(T))$,
where
\begin{equation}
\text{\emph{stsupp}}(f):=\left(\bigcup\left\{ \Omega'\subseteq\Omega\mid\Omega'\text{ open},\ f|_{\Omega'}=0\right\} \right)^{\text{c}}\label{eq:stSupp}
\end{equation}
for all $f\in\frontRise{\mathcal{G}}{\rho}\mathcal{GC}^{\infty}(\csp{\Omega},\RC{\rho})$.
\item \label{enu:D'}Let $b\in\RC{\rho}_{>0}$ be a strong infinite number. Then
$\big[\int_{\Omega}\iota_{\Omega}^{b}(T)_{\eps}(x)\cdot\phi(x)\,\diff{x}\big]=\langle T,\phi\rangle$
for all $\phi\in\mathcal{D}(\Omega)$ and all $T\in\mathcal{D}'(\Omega)$;
\item $\iota^{b}$ commutes with partial derivatives, i.e.~$\partial^{\alpha}\left(\iota_{\Omega}^{b}(T)\right)=\iota_{\Omega}^{b}\left(\partial^{\alpha}T\right)$
for each $T\in\mathcal{D}'(\Omega)$ and $\alpha\in\N^{n}$.
\item Similar results also hold for the embedding of tempered distributions:
\[
\iota_{\Omega}^{b}:T\in\mathcal{S}'(\Omega)\mapsto\left[\left(T\ast\psi_{\eps}^{b}\right)(-)\right]\in\frontRise{\mathcal{G}}{\rho}\mathcal{GC}^{\infty}(\csp{\Omega},\RC{\rho}).
\]
\end{enumerate}
\end{thm}
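\noindent For example, since $\delta\ast\psi_{\eps}^{b}=\psi_{\eps}^{b}$,
the Dirac delta is embedded as $\iota_{\R^{n}}^{b}(\delta)=\left[\psi_{\eps}^{b}(-)\right]$,
i.e.~$\iota^{b}(\delta)(x)=b^{n}\cdot\psi(b\cdot x)$ (cf.~Example~\ref{exa:deltaCompDelta}
below). To see \ref{enu:D'} in this case, we only sketch the computation:
substituting $y=b_{\eps}x$,
\[
\int_{\R^{n}}\psi_{\eps}^{b}(x)\phi(x)\,\diff{x}=\int_{\R^{n}}\psi_{\eps}(y)\phi\left(\frac{y}{b_{\eps}}\right)\diff{y}=\phi(0)+n_{\eps},
\]
where the Taylor expansion of $\phi$ at $0$, together with the vanishing
moments of Lem.~\ref{lem:strictDeltaNet}.\ref{enu:momentsStrictDeltaNet}
and $\lim_{\eps\to0^{+}}b_{\eps}=+\infty$, shows that $(n_{\eps})$
is negligible if $b$ is a strong infinite number. Analogously, since
$\iota^{b}$ commutes with derivatives and $H'=\delta$ in $\mathcal{D}'(\R)$,
the embedded Heaviside function satisfies $H'=\delta$ also as a GSF.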
Concerning the embedding of Colombeau generalized functions (CGF),
we recall that the special Colombeau algebra on $\Omega$ is defined
as the quotient $\gs(\Omega):=\mathcal{E}_{M}(\Omega)/\ns(\Omega)$
of \emph{moderate nets} over \emph{negligible nets}, where the former
is
\[
\mathcal{E}_{M}(\Omega):=\{(u_{\eps})\in\mathcal{C}^\infty(\Omega)^{I}\mid\forall K\Subset\Omega\,\forall\alpha\in\N^{n}\,\exists N\in\N:\sup_{x\in K}|\partial^{\alpha}u_{\eps}(x)|=O(\eps^{-N})\}
\]
and the latter is
\[
\ns(\Omega):=\{(u_{\eps})\in\mathcal{C}^\infty(\Omega)^{I}\mid\forall K\Subset\Omega\,\forall\alpha\in\N^{n}\,\forall m\in\N:\sup_{x\in K}|\partial^{\alpha}u_{\eps}(x)|=O(\eps^{m})\}.
\]
Using $\rho=(\eps)$, we have the following compatibility result:
\begin{thm}
\label{thm:inclusionCGF}A Colombeau generalized function $u=(u_{\eps})+\ns(\Omega)^{d}\in\gs(\Omega)^{d}$
defines a GSF $u:[x_{\eps}]\in\csp{\Omega}\mapsto[u_{\eps}(x_{\eps})]\in\Rtil^{d}$.
This assignment provides a bijection of $\gs(\Omega)^{d}$ onto $\frontRise{\mathcal{G}}{\rho}\mathcal{GC}^{\infty}(\csp{\Omega},\RC{\rho}^{d})$
for every open set $\Omega\subseteq\R^{n}$.
\end{thm}
\begin{example}
\label{exa:deltaCompDelta}~
\begin{enumerate}
\item \label{enu:deltaH}Let $\delta\in\frontRise{\mathcal{G}}{\rho}\mathcal{GC}^{\infty}(\csp{\R^{n}},\RC{\rho})$ and $H\in\frontRise{\mathcal{G}}{\rho}\mathcal{GC}^{\infty}(\csp{\R},\RC{\rho})$
be the $\iota^{b}$-embeddings of the Dirac delta and of the Heaviside
function. Then $\delta(x)=b^{n}\cdot\psi(b\cdot x)$, where $\psi(x):=[\psi_{\eps}(x_{\eps})]$
is called \emph{$n$-dimensional Colombeau mollifier}. Note that $\delta$
is an even function because of Lem.~\ref{lem:strictDeltaNet}.\ref{enu:suppStrictDeltaNet}.
We have that $\delta(0)=c_{n}b^{n}$ is a strong infinite number and
$\delta(x)=0$ if $|x|>r$ for some $r\in\R_{>0}$ because of Lem.~\ref{lem:strictDeltaNet}.\ref{enu:suppStrictDeltaNet}
(see Lem.~\ref{lem:strictDeltaNet}.\ref{enu:c_n} for the definition
of $c_{n}\in\R_{>0}$). If $n=1$, by the intermediate value theorem
(see \cite{GIO1}), $\delta$ takes any value in the interval $[0,b]\subseteq\RC{\rho}$.
Similar properties can be stated e.g.~for $\delta^{2}(x)=b^{2}\cdot\psi(b\cdot x)^{2}$.
Using these formulas, we can simply consider $\delta\in\frontRise{\mathcal{G}}{\rho}\mathcal{GC}^{\infty}(\RC{\rho}^{n},\RC{\rho})$
and $H\in\frontRise{\mathcal{G}}{\rho}\mathcal{GC}^{\infty}(\RC{\rho},\RC{\rho})$.
\item Analogously, we have $H(x)=1$ if $x>r$ for some $r\in\R_{>0}$;
$H(x)=0$ if $x<-r$ for some $r\in\R_{>0}$, and finally $H(0)=\frac{1}{2}$
because of Lem.~\ref{lem:strictDeltaNet}.\ref{enu:suppStrictDeltaNet}.
By the intermediate value theorem, $H$ takes any value in the interval
$[0,1]\subseteq\RC{\rho}$.
\item If $n=1$, the composition $\delta\circ\delta\in\frontRise{\mathcal{G}}{\rho}\mathcal{GC}^{\infty}(\RC{\rho},\RC{\rho})$
is given by $(\delta\circ\delta)(x)=b\psi\left(b^{2}\psi(bx)\right)$
and is an even function. If $|x|>r$ for some $r\in\R_{>0}$, then
$(\delta\circ\delta)(x)=b$. Since $(\delta\circ\delta)(0)=0$, again
using the intermediate value theorem, we have that $\delta\circ\delta$
takes any value in the interval $[0,b]\subseteq\RC{\rho}$. By suitably
choosing the net $(\psi_{\eps})$, it is possible to arrange that if $0\le x\le\frac{1}{kb}$
for some $k\in\N_{>1}$ (hence $x$ is infinitesimal), then $(\delta\circ\delta)(x)=0$.
If $x=\frac{k}{b}$ for some $k\in\N_{>0}$, then $x$ is still infinitesimal
but $(\delta\circ\delta)(x)=b$. Analogously, one can deal with compositions
such as $H\circ\delta$ and $\delta\circ H$.
\end{enumerate}
\noindent See Fig.~\ref{fig:MollifierHeaviside} for graphical
representations of $\delta$ and $H$. The infinitesimal oscillations
shown in this figure can be proved to actually occur as a consequence
of Lem.~\ref{lem:strictDeltaNet}.\ref{enu:momentsStrictDeltaNet}
which is a necessary property to prove Thm.~\ref{thm:embeddingD'}.\ref{enu:embSmooth},
see \cite{GIO1,GiLu16}. It is well-known that the latter property
is one of the core ideas used to bypass Schwartz's impossibility theorem,
see e.g.~\cite{GKOS}.
\end{example}
\noindent \begin{center}
\begin{figure}
\begin{centering}
\includegraphics[scale=0.15]{"delta"}
\par\end{centering}
\begin{centering}
\includegraphics[scale=0.15]{"H"}
\par\end{centering}
\caption{\label{fig:MollifierHeaviside}Representations of Dirac delta and
Heaviside function}
\end{figure}
\par\end{center}
\subsection{Functionally compact sets and multidimensional integration}
\subsubsection{\label{subsec:EVTandFcmp}Extreme value theorem and functionally
compact sets}
For GSF, suitable generalizations of many classical theorems of differential
and integral calculus hold: intermediate value theorem, mean value
theorems, suitable sheaf properties, local and global inverse function
theorems, Banach fixed point theorem and a corresponding Picard-Lindelöf
theorem both for ODE and PDE, see \cite{GiKu15,GiKu16,GIO1,LuGi17,GiLu16}.
Even though the intervals $[a,b]\subseteq\RC{\rho}$, $a$, $b\in\R$,
are not compact in the sharp topology (see \cite{GiKu15}), analogously
to the case of smooth functions, a GSF satisfies an extreme value
theorem on such sets. In fact, we have:
\begin{thm}
\label{thm:extremeValues}Let $f\in\Gcinf(X,\RC{\rho})$ be a GSF defined
on the subset $X$ of $\RC{\rho}^{n}$. Let $\emptyset\ne K=[K_{\eps}]\subseteq X$
be an internal set generated by a sharply bounded net $(K_{\eps})$
of compact sets $K_{\eps}\Subset\R^{n}$, then
\begin{equation}
\exists m,M\in K\,\forall x\in K:\ f(m)\le f(x)\le f(M).\label{eq:epsExtreme}
\end{equation}
\end{thm}
We shall use the assumptions on $K$ and $(K_{\eps})$ given in this
theorem to introduce a notion of ``compact subset'' which behaves
better than the usual classical notion of compactness in the sharp
topology.
\begin{defn}
\label{def:functCmpt-1} A subset $K$ of $\RC{\rho}^{n}$ is called
\emph{functionally compact}, denoted by $K\Subset_{\text{\rm f}}\RC{\rho}^{n}$, if there
exists a net $(K_{\eps})$ such that
\begin{enumerate}
\item \label{enu:defFunctCmpt-internal-1}$K=[K_{\eps}]\subseteq\RC{\rho}^{n}$.
\item \label{enu:defFunctCmpt-sharpBound-1}$\exists R\in\RC{\rho}_{>0}:\ K\subseteq B_{R}(0)$,
i.e.~$K$ is sharply bounded.
\item \label{enu:defFunctCmpt-cmpt-1}$\forall\eps\in I:\ K_{\eps}\Subset\R^{n}$.
\end{enumerate}
If, in addition, $K\subseteq U\subseteq\RC{\rho}^{n}$ then we write
$K\Subset_{\text{\rm f}} U$. Finally, we write $[K_{\eps}]\Subset_{\text{\rm f}} U$ if \ref{enu:defFunctCmpt-sharpBound-1},
\ref{enu:defFunctCmpt-cmpt-1} and $[K_{\eps}]\subseteq U$ hold.
Any net $(K_{\eps})$ such that $[K_{\eps}]=K$ is called a \emph{representative}
of $K$.
\end{defn}
\noindent We motivate the name \emph{functionally compact subset}
by noting that on this type of subsets, GSF have properties very similar
to those that ordinary smooth functions have on standard compact sets.
\begin{rem}
\noindent \label{rem:defFunctCmpt}\
\begin{enumerate}
\item \label{enu:rem-defFunctCmpt-closed}By Thm.~\ref{thm:strongMembershipAndDistanceComplement}.\ref{enu:internalAreClosed},
any internal set $K=[K_{\eps}]$ is closed in the sharp topology and
hence functionally compact sets are always closed. In particular,
the open interval $(0,1)\subseteq\RC{\rho}$ is not functionally compact
since it is not closed.
\item \label{enu:rem-defFunctCmpt-ordinaryCmpt}If $H\Subset\R^{n}$ is
a non-empty ordinary compact set, then the internal set $[H]$ is
functionally compact. In particular, $[0,1]=\left[[0,1]_{\R}\right]$
is functionally compact.
\item \label{enu:rem-defFunctCmpt-empty}The empty set $\emptyset=\widetilde{\emptyset}\Subset_{\text{\rm f}}\RC{\rho}$.
\item \label{enu:rem-defFunctCmpt-equivDef}$\RC{\rho}^{n}$ is not functionally
compact since it is not sharply bounded.
\item \label{enu:rem-defFunctCmpt-cmptlySuppPoints}The set of compactly
supported points $\csp{\R}$ is not functionally compact because the
GSF $f(x)=x$ does not satisfy the conclusion \eqref{eq:epsExtreme}
of Thm.~\ref{thm:extremeValues}.
\end{enumerate}
\end{rem}
\noindent In the present paper, we need the following properties of
functionally compact sets.
\begin{thm}
\label{thm:image}~
\begin{enumerate}
\item Let $K\subseteq X\subseteq\RC{\rho}^{n}$, $f\in\Gcinf(X,\RC{\rho}^{d})$.
Then $K\Subset_{\text{\rm f}}\RC{\rho}^{n}$ implies $f(K)\Subset_{\text{\rm f}}\RC{\rho}^{d}$.
\item \label{enu:fcmpIntCup}Let $K$, $H\Subset_{\text{\rm f}}\RC{\rho}^{n}$. If $K\cup H$
is an internal set, then it is a functionally compact set. If $K\cap H$
is an internal set, then it is a functionally compact set.
\item \label{enu:fcmpSub}Let $H\subseteq K\Subset_{\text{\rm f}}\RC{\rho}^{n}$, then if $H$
is an internal set, then $H\Subset_{\text{\rm f}}\RC{\rho}^{n}$.
\end{enumerate}
\end{thm}
\noindent As a corollary of this theorem and Rem.~\ref{rem:defFunctCmpt}.\ref{enu:rem-defFunctCmpt-ordinaryCmpt}
we get
\begin{cor}
\label{cor:intervalsFunctCmpt}If $a$, $b\in\RC{\rho}$ and $a\le b$,
then $[a,b]\Subset_{\text{\rm f}}\RC{\rho}$.
\end{cor}
\noindent Let us note that $a$, $b\in\RC{\rho}$ can also be infinite
numbers, e.g.~$a=\diff{\rho}^{-N}$, $b=\diff{\rho}^{-M}$ or $a=-\diff{\rho}^{-N}$,
$b=\diff{\rho}^{-M}$ with $M>N$, so that e.g.~$[-\diff{\rho}^{-N},\diff{\rho}^{-M}]\supseteq\R$.
Finally, in the following result we consider the product of functionally
compact sets:
\begin{thm}
\noindent \label{thm:product}Let $K\Subset_{\text{\rm f}}\RC{\rho}^{n}$ and $H\Subset_{\text{\rm f}}\RC{\rho}^{d}$,
then $K\times H\Subset_{\text{\rm f}}\RC{\rho}^{n+d}$. In particular, if $a_{i}\le b_{i}$
for $i=1,\ldots,n$, then $\prod_{i=1}^{n}[a_{i},b_{i}]\Subset_{\text{\rm f}}\RC{\rho}^{n}$.
\end{thm}
Applying the extreme value theorem Thm.~\ref{thm:extremeValues}
to the first derivative, we also have the following
\begin{thm}
\label{thm:meanValue}Let $a$, $b\in\RC{\rho}^{n}$, $a<b$, $f\in{}^{\rho}\Gcinf([a,b],\RC{\rho})$
be a GSF. Then
\begin{enumerate}
\item \label{enu:meanValue}$\exists c\in[a,b]:\ f(b)-f(a)=(b-a)\cdot f'(c)$.
\item \label{enu:finiteIncr}Setting $M:=\max_{c\in[a,b]}\left|f'(c)\right|\in\RC{\rho}$,
we hence have $\forall x,y\in[a,b]:\ \left|f(x)-f(y)\right|\le M\cdot\left|x-y\right|$.
\end{enumerate}
\end{thm}
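\noindent Note that in Thm.~\ref{thm:meanValue} the number $M$ can
be an infinite number: e.g.~for $f=\delta$ and $[a,b]=[0,1]$, property
\ref{enu:finiteIncr} with $x=0$, $y=1$ gives $M\ge\left|\delta(0)-\delta(1)\right|=c_{1}b=b$,
because $\delta(1)=0$ (see Example~\ref{exa:deltaCompDelta}). Therefore,
every Lipschitz constant of $\delta$ on $[0,1]$ is an infinite number,
coherently with the fact that $\delta$ falls from the infinite value
$\delta(0)=b$ to $0$ within an infinitesimal neighborhood of the
origin: e.g.~$\delta(2b^{-1})=b\cdot\psi(2)=0$.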
A theory of compactly supported GSF has been developed in \cite{GiKu16},
and it closely resembles the classical theory of LF-spaces of compactly
supported smooth functions.
\subsubsection{\label{subsec:Multidimensional-integration}Multidimensional integration}
Finally, to define FT of multivariable GSF we have to introduce multidimensional
integration on suitable subsets of $\RC{\rho}^{n}$ (see \cite{GIO1}).
\begin{defn}
\label{def:intOverCompact}Let $\mu$ be a measure on $\R^{n}$ and
let $K$ be a functionally compact subset of $\RC{\rho}^{n}$. Then,
we call $K$ $\mu$-\emph{measurable} if the limit
\begin{equation}
\mu(K):=\lim_{m\to\infty}[\mu(\overline{\Eball}_{\rho_{\eps}^{m}}(K_{\eps}))]\label{eq:muMeasurable}
\end{equation}
exists for some representative $(K_{\eps})$ of $K$. Here $m\in\N$,
the limit is taken in the sharp topology on $\RC{\rho}$, and $\overline{\Eball}_{r}(A):=\{x\in\R^{n}:d(x,A)\le r\}$.
\label{def:integrableMap}Let $K\Subset_{\text{\rm f}}\RC{\rho}^{n}$. Let $(\Omega_{\eps})$
be a net of open subsets of $\R^{n}$, and $(f_{\eps})$ be a net
of continuous maps $f_{\eps}:\Omega_{\eps}\longrightarrow\R$.
Then we say that
\[
(f_{\eps})\textit{ defines a generalized integrable map}:K\longrightarrow\RC{\rho}
\]
if
\begin{enumerate}
\item $K\subseteq\sint{\Omega_{\eps}}$ and $[f_{\eps}(x_{\eps})]\in\RC{\rho}$
for all $[x_{\eps}]\in K$.
\item $\forall(x_{\eps}),(x'_{\eps})\in\R_{\rho}^{n}:\ [x_{\eps}]=[x'_{\eps}]\in K\ \Rightarrow\ (f_{\eps}(x_{\eps}))\sim_{\rho}(f_{\eps}(x'_{\eps}))$.
\end{enumerate}
\noindent If $f\in\Set(K,\RC{\rho})$ is such that
\begin{equation}
\forall[x_{\eps}]\in K:\ f\left([x_{\eps}]\right)=\left[f_{\eps}(x_{\eps})\right],
\end{equation}
we say that $f:K\longrightarrow\RC{\rho}$ is a \emph{generalized
integrable function}.
\noindent We will again say that $f$ \emph{is defined by the net}
$(f_{\eps})$ or that the net $(f_{\eps})$ \emph{represents} $f$.
The set of all these generalized integrable functions will be denoted
by $\GI(K,\RC{\rho})$.
\end{defn}
\noindent E.g., if $f=[f_{\eps}(-)]|_{K}\in\frontRise{\mathcal{G}}{\rho}\mathcal{GC}^{\infty}(K,\RC{\rho})$, then
both $f$ and $|f|=[|f_{\eps}(-)|]|_{K}$ are integrable on $K$ (but
note that, in general, $|f|$ is not a GSF).
\noindent In the following result, we show that this definition generates
a correct notion of multidimensional integration for GSF.
\begin{thm}
\label{thm:muMeasurableAndIntegral}Let $K\subseteq\RC{\rho}^{n}$
be $\mu$-measurable.
\begin{enumerate}
\item \label{enu:indepRepr}The definition of $\mu(K)$ is independent of
the representative $(K_{\eps})$.
\item \label{enu:existsRepre}There exists a representative $(K_{\eps})$
of $K$ such that $\mu(K)=[\mu(K_{\eps})]$.
\item \label{enu:epsWiseDefInt}Let $(K_{\eps})$ be any representative
of $K$ and let $f=[f_{\eps}(-)]|_{K}\in\GI(K,\RC{\rho})$. Then
\[
\int_{K}f\,\diff{\mu}:=\lim_{m\to\infty}\biggl[\int_{\overline{\Eball}_{\rho_{\eps}^{m}}(K_{\eps})}f_{\eps}\,\diff{\mu}\biggr]\in\RC{\rho}
\]
exists and its value is independent of the representative $(K_{\eps})$.
\item \label{enu:existsReprDefInt}There exists a representative $(K_{\eps})$
of $K$ such that
\begin{equation}
\int_{K}f\,\diff{\mu}=\biggl[\int_{K_{\eps}}f_{\eps}\,\diff{\mu}\biggr]\in\RC{\rho}\label{eq:measurable}
\end{equation}
for each $f=[f_{\eps}(-)]|_{K}\in\GI(K,\RC{\rho})$. From \eqref{eq:measurable},
it also follows that $\left|\int_{K}f\,\diff{\mu}\right|\le\int_{K}\left|f\right|\,\diff{\mu}$.
\item \label{enu:int-ndimInt}If $K=\prod_{i=1}^{n}[a_{i},b_{i}]$, then
$K$ is $\lambda$-measurable ($\lambda$ being the Lebesgue measure
on $\R^{n}$) and for each $f=[f_{\eps}(-)]|_{K}\in\GI(K,\RC{\rho})$
we have
\begin{equation}
\int_{K}f\,\diff{\lambda}=\left[\int_{a_{1,\eps}}^{b_{1,\eps}}\,\diff{x_{1}}\dots\int_{a_{n,\eps}}^{b_{n,\eps}}f_{\eps}(x_{1},\dots,x_{n})\,\diff{x_{n}}\right]\in\RC{\rho}\label{eq:intInt}
\end{equation}
for any representatives $(a_{i,\eps})$, $(b_{i,\eps})$ of $a_{i}$
and $b_{i}$ respectively. Therefore, if $n=1$, this notion of integral
coincides with that of Thm.~\ref{thm:existenceUniquenessPrimitives}
and Def.~\ref{def:integral}. Note that \eqref{eq:intInt} also directly
implies Fubini's theorem for this type of integrals.
\item Let $K\subseteq\RC{\rho}^{n}$ be $\lambda$-measurable, where $\lambda$
is the Lebesgue measure, and let $\phi\in\frontRise{\mathcal{G}}{\rho}\mathcal{GC}^{\infty}(K,\RC{\rho}^{d})$ be
such that $\phi^{-1}\in\frontRise{\mathcal{G}}{\rho}\mathcal{GC}^{\infty}(\phi(K),\RC{\rho}^{n})$. Then $\phi(K)$
is $\lambda$-measurable and
\[
\int_{\phi(K)}f\,\diff{\lambda}=\int_{K}(f\circ\phi)\left|\det(\diff{\phi})\right|\,\diff{\lambda}
\]
for each $f\in\GI(\phi(K),\RC{\rho})$.
\end{enumerate}
\end{thm}
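\noindent For instance, note that \ref{enu:int-ndimInt} also allows infinite endpoints: taking $n=1$, $K=[0,b]$ with $b=[b_{\eps}]\in\RC{\rho}_{>0}$ (possibly an infinite number) and $f(x)=x$, we get
\[
\int_{[0,b]}x\,\diff{\lambda}=\left[\int_{0}^{b_{\eps}}x\,\diff{x}\right]=\left[\frac{b_{\eps}^{2}}{2}\right]=\frac{b^{2}}{2}\in\RC{\rho},
\]
where $\frac{b^{2}}{2}$ is $\rho$-moderate because $b\in\RC{\rho}$.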
\noindent In order to state a continuity property for this notion
of integration, we first have to introduce \emph{hypernatural numbers
and hyperlimits}:
\begin{defn}
\label{def:hyperfiniteN}~
\begin{enumerate}
\item \label{enu:hypernatural}$\hyperN{\rho}:=\left\{ [n_{\eps}]\in\RC{\rho}\mid n_{\eps}\in\N\quad\forall\eps\right\} $.
Elements of $\hyperN{\rho}$ are called \emph{hypernatural numbers} or \emph{hyperfinite
numbers}. We clearly have $\N\subseteq\hyperN{\rho}$, but among hypernatural
numbers we also have infinite numbers.
\item $\N_{\rho}:=\left\{ (n_{\eps})\in\R_{\rho}\mid n_{\eps}\in\N\quad\forall\eps\right\} $.
\item \label{enu:hyperlimit}A map $x:\hyperN{\sigma}\longrightarrow\RC{\rho}$, whose
domain is the set of hyperfinite numbers $\hyperN{\sigma}$ is called
a ($\sigma$-)\emph{hypersequence} (of elements of $\RC{\rho}$) and
denoted by $(x_{n})_{n\in\hyperN{\sigma}}$, or simply $(x_{n})_{n}$
if the gauge on the domain is clear from the context. Let $\sigma$,
$\rho$ be two gauges, $x:\hyperN{\sigma}\longrightarrow\RC{\rho}$ be a hypersequence
and $l\in\RC{\rho}$. We say that $l$ is the \emph{hyperlimit} of $(x_{n})_{n}$
as $n\rightarrow\infty$, $n\in\hyperN{\sigma}$, if
\[
\forall q\in\N\,\exists M\in\hyperN{\sigma}\,\forall n\in\hyperN{\sigma}_{\geq M}:\ |x_{n}-l|<\diff{\rho}^{q}.
\]
It can be easily proved that there exists at most one hyperlimit,
and in this case it is denoted by $\hyperlim{\rho}{\sigma}x_{n}=l$.
Note that $\diff{\rho}<\frac{1}{n}$ for all $n\in\N_{>0}$, so that $\frac{1}{n}\not\to0$
in the sharp topology as $n\to\infty$ in $\N$. On the contrary, $\hyperlim{\rho}{\rho}\frac{1}{n}=0$
because $\hyperN{\rho}$ contains arbitrarily large infinite hypernatural
numbers.
\end{enumerate}
\end{defn}
The following continuity result once again underscores that functionally
compact sets (even if they can be unbounded from a classical point
of view) behave as compact sets for GSF.
\begin{thm}
\label{thm:contResult}Let $K\subseteq\RC{\rho}^{n}$ be a $\mu$-measurable
functionally compact set and $f_{n}\in\GI(K,\RC{\rho}^{d})$ for all
$n\in\N$. If the hyperlimit $\hyperlim{\rho}{\sigma}f_{n}(x)$
exists for each $x\in K$, then $\hyperlim{\rho}{\sigma}f_{n}$ is
integrable on $K$ and
\begin{equation}
\hyperlim{\rho}{\sigma}\int_{K}f_{n}\,\diff{x}=\int_{K}\hyperlim{\rho}{\sigma}f_{n}\,\diff{x}.\label{eq:ContinuityProperty}
\end{equation}
\end{thm}
\noindent For the proof of this theorem see \cite{GIO1}, and for
the notion of hyperlimit see \cite{MTAG}.
\section{Convolution on $\RC{\rho}^{n}$\label{sec:Convolution}}
In this section, we define and study the convolution $f*g$ of two GSF,
where $f$ or $g$ is compactly supported. Compactly supported GSF
were introduced in \cite{GiKu18} for the gauge $\rho_{\eps}=\eps$.
For an arbitrary gauge, we here define and study the notions needed
for the HFT as well as for the study of convolution of GSF.
\begin{defn}
Assume that $X\subseteq\RC{\rho}^{n}$, $Y\subseteq\RC{\rho}^{d}$ and $f\in\frontRise{\mathcal{G}}{\rho}\mathcal{GC}^{\infty}\left(X,Y\right)$,
then
\begin{enumerate}
\item \label{enu:support}$\text{supp}\left(f\right):=\overline{\left\{ x\in X\mid\left|f\left(x\right)\right|>0\right\} }$,
where $\overline{\left(\cdot\right)}$ denotes the relative closure
in $X$ with respect to the sharp topology, is called the \emph{support}
of $f$. We recall (see just after Def.~\ref{def:RCGN} and Lem.~\ref{lem:mayer})
that $x>0$ means that $x\in\RC{\rho}_{\ge0}$ is positive and invertible.
\item \label{enu:exterior}For $A\subseteq\RC{\rho}$ we call the set $\text{ext}\left(A\right):=\left\{ x\in\RC{\rho}\mid\forall a\in A:\ \left|x-a\right|>0\right\} $
the \emph{strong exterior of }$A$. Recalling Lem.~\ref{lem:mayer},
if $x\in\text{ext}(A)$, then $|x-a|\ge\diff{\rho}^{q}$ for all $a\in A$
and for some $q=q(a)\in\N$.
\item \label{enu:CSgsf}Let $H\Subset_{\text{\rm f}}\RC{\rho}^{n}$. We say that $f\in\frontRise{\mathcal{D}}{\rho}\mathcal{GD}\left(H,Y\right)$
if $f\in\frontRise{\mathcal{G}}{\rho}\mathcal{GC}^{\infty}(\RC{\rho}^{n},Y)$ and $\text{supp}\left(f\right)\subseteq H$.
We say that $f\in\frontRise{\mathcal{D}}{\rho}\mathcal{GD}(\RC{\rho}^{n},Y)$ if $f\in\frontRise{\mathcal{D}}{\rho}\mathcal{GD}\left(H,Y\right)$
for some $H\Subset_{\text{\rm f}}\RC{\rho}^{n}$. Such an $f$ is called \emph{compactly
supported}; for simplicity we set $\frontRise{\mathcal{D}}{\rho}\mathcal{GD}(H):=\frontRise{\mathcal{D}}{\rho}\mathcal{GD}(H,\frontRise{\mathbb{C}}{\rho}\widetilde{\mathbb{C}})$. Note
that $\text{supp}(f)$ is clearly always closed, and if $f\in\frontRise{\mathcal{D}}{\rho}\mathcal{GD}\left(H,Y\right)$
then it is also sharply bounded. However, in general it is not an
internal set so it is not a functionally compact set. Accordingly,
the theory of multidimensional integration of Sec.~\ref{subsec:Multidimensional-integration}
does not allow us to consider $\int_{\text{supp}(f)}f$ even if $f$
is compactly supported.
\end{enumerate}
\end{defn}
\begin{rem}
~
\begin{enumerate}
\item Note that the notion of \emph{standard support} $\text{stsupp}\left(f\right)$
as defined in Thm. \ref{thm:embeddingD'} and the present notion $\text{supp}\left(f\right)$
of support, as defined above, are different. The main distinction
is that $\text{stsupp}\left(f\right)\subseteq\mathbb{R}^{n}$ while
$\text{supp}\left(f\right)\subseteq\RC{\rho}^{n}$. Moreover if we consider
a CGF $f\in\frontRise{\mathcal{G}}{\rho}\mathcal{GC}^{\infty}(\csp{\Omega},\RC{\rho}^{d})$, then $\text{supp}\left(f\right)\cap\Omega\subseteq\text{stsupp}\left(f\right)$.
\item Since $\delta\left(0\right)>0$, we have $\delta|_{B_{r}\left(0\right)}>0$
for some $r\in\RC{\rho}_{>0}$ by the sharp continuity of $\delta$,
i.e.~Thm\@.~\ref{thm:propGSF}.\ref{enu:GSF-cont}, hence $B_{r}\left(0\right)\subseteq\text{supp}\left(\delta\right)$,
whereas $\text{stsupp}\left(\delta\right)=\left\{ 0\right\} $. Example
\ref{exa:deltaCompDelta}.\ref{enu:deltaH} also yields that $\text{supp}(\delta)\subseteq[-r,r]^{n}$
for all $r\in\R_{>0}$.
\item \label{enu:rapDecrComptSupp}Any rapidly decreasing function $f\in\mathcal{S}(\R^{n})$
satisfies the inequality $\left|f\left(x\right)\right|\leq\left|x\right|^{-q}$,
$\forall q\in\mathbb{N}$, for $\left|x\right|$ finite and sufficiently
large. Therefore, for all strongly infinite $x$, we have $f\left(x\right)=0$,
i.e.~$f\in\frontRise{\mathcal{D}}{\rho}\mathcal{GD}\left(\RC{\rho}^{n}\right)$.
\end{enumerate}
\end{rem}
\begin{lem}
\label{lem:extOpen}Let $\emptyset\ne H\Subset_{\text{\rm f}}\RC{\rho}^{n}$. Then
$\text{\emph{ext}}\left(H\right)$ is sharply open.
\end{lem}
\begin{proof}
If $x=\left[x_{\eps}\right]\in\text{ext}\left(H\right)$, we set $d_{\eps}:=d\left(x_{\eps},H_{\eps}\right)$
where $H=\left[H_{\eps}\right]$ and $\emptyset\ne H_{\eps}\Subset\mathbb{R}^{n}$
for all $\eps$ (because $H\ne\emptyset$). Since each $H_{\eps}$ is
compact, there exists $h_{\eps}\in H_{\eps}$ such that $d_{\eps}=d\left(x_{\eps},h_{\eps}\right)$;
we set $h:=\left[h_{\eps}\right]\in H$ and $d:=\left[d_{\eps}\right]=\left|x-h\right|>0$
because $x\in\text{ext}(H)$ and $h\in H$. Now, by taking $r:=\frac{d}{2}>0$,
we prove that $B_{r}\left(x\right)\subseteq\text{ext}\left(H\right)$.
Pick $y\in B_{r}\left(x\right)$, then for all $a\in H$, we have
$\left|y-a\right|\ge|x-a|-|y-x|\ge d-\frac{d}{2}>0$.
\end{proof}
\begin{thm}
\label{thm:DerivativeIsZero}Let $H\Subset_{\text{\rm f}}\RC{\rho}^{n}$ and $f\in\frontRise{\mathcal{G}}{\rho}\mathcal{GC}^{\infty}(\RC{\rho}^{n},\frontRise{\mathbb{C}}{\rho}\widetilde{\mathbb{C}})$,
then the following properties hold:
\begin{enumerate}
\item \label{enu:equivCmptSupp}$f\in\frontRise{\mathcal{D}}{\rho}\mathcal{GD}\left(H\right)$ if and only if
$f|_{\text{\emph{ext}}(H)}=0$.
\end{enumerate}
If $f\in\frontRise{\mathcal{D}}{\rho}\mathcal{GD}\left(H\right)$, $x\in\RC{\rho}^{n}$ and $\alpha\in\mathbb{N}^{n}$,
then:
\begin{enumerate}[resume]
\item \label{enu:derZeroExt}$\partial^{\alpha}f\left(x\right)=0$ for all
$x\in\text{\emph{ext}}(H)$.
\item \label{enu:derZeroBound}If $H\subseteq[-h,h]^{n}$ then $\partial^{\alpha}f(x)=0$
whenever $x_{p}\ge h$ or $x_{p}\le-h$ for some $p=1,\ldots,n$.
\item \label{enu:intBound}If $H\subseteq[-h,h]^{n}\subseteq\prod_{p=1}^{n}[a_{p},b_{p}]$,
then
\[
\intop_{a_{1}}^{b_{1}}\,\diff{x_{1}}\ldots\intop_{a_{n}}^{b_{n}}f\left(x\right)\,\diff{x_{n}}=\intop_{-h}^{h}\,\diff{x_{1}}\ldots\intop_{-h}^{h}f\left(x\right)\,\diff{x_{n}}
\]
\end{enumerate}
\end{thm}
\begin{proof}
\ref{enu:equivCmptSupp}: Assume that $\text{supp}(f)\subseteq H$
and $x=[x_{\eps}]\in\text{ext}(H)$, but $f(x)\ne0$. This implies
that $|f(x)|\not\le0$ because we always have $|f(x)|\ge0$. Thereby, Lem.~\ref{lem:negationsSubpoints}
yields $|f(x)|>_{L}0$ for some $L\subseteq_{0} I$. Applying Lem.~\ref{lem:mayer}
for the ring $\RC{\rho}|_{L}$ we get $|f(x)|>_{L}\diff{\rho}^{q}$ for
some $q\in\R_{>0}$, i.e.~$\left|f_{\eps}(x_{\eps})\right|>\rho_{\eps}^{q}$
for all $\eps\in L_{\le\eps_{0}}$. Define $\bar{x}_{\eps}:=x_{\eps}$
for all $\eps\in L$ and $\bar{x}_{\eps}:=x_{\eps_{0}}$ otherwise,
so that $\bar{x}:=[\bar{x}_{\eps}]\in\RC{\rho}^{n}$ and $|f(\bar{x})|>\diff{\rho}^{q}$.
This yields $\bar{x}\in\text{supp}(f)\subseteq H$, and hence $|x-\bar{x}|>0$,
which is impossible by construction because $\bar{x}|_{L}=x|_{L}$
and because of Lem.~\ref{lem:mayer}.
Vice versa, assume that $f|_{\text{ext}(H)}=0$ and take $x=[x_{\eps}]\in\text{supp}(f)\setminus H$.
The property
\[
\forall q\in\R_{>0}\,\forall^{0}\eps:\ d(x_{\eps},H_{\eps})\le\rho_{\eps}^{q}
\]
cannot hold, because for $q\to+\infty$ Thm\@.~\ref{thm:strongMembershipAndDistanceComplement}.\ref{enu:internalSetsDistance}
would imply $x\in H=[H_{\eps}]$. Therefore, for some $q\in\R_{>0}$
and some $L\subseteq_{0} I$, we have $d(x_{\eps},H_{\eps})\ge\rho_{\eps}^{q}$
for all $\eps\in L$. Thereby, if $a=[a_{\eps}]\in H$ where $a_{\eps}\in H_{\eps}$
for all $\eps$, we get $d(x_{\eps},a_{\eps})\ge d(x_{\eps},H_{\eps})\ge\rho_{\eps}^{q}$
for all $\eps\in L$, i.e.~$x|_{L}\in\text{ext}(H)|_{L}$. Applying
Lem.~\ref{lem:extOpen} for the ring $\RC{\rho}|_{L}$ we get
\begin{equation}
B_{r}(x)|_{L}\subseteq\text{ext}(H)|_{L}\label{eq:extLBall}
\end{equation}
for some $r\in\RC{\rho}_{>0}$. From $x\in\text{supp}(f)$, we get the
existence of a sequence $(x_{p})_{p\in\N}$ of points of $\left\{ x\in\RC{\rho}^{n}\mid|f(x)|>0\right\} $
such that $x_{p}\to x$ as $p\to+\infty$ in the sharp topology. Therefore,
$x_{p}\in B_{r}(x)$ for $p\in\N$ sufficiently large. Thereby, $x_{p}|_{L}\in\text{ext}(H)|_{L}$
from \eqref{eq:extLBall} and hence $f(x_{p})|_{L}=\left[\left(f_{\eps}(x_{p\eps})\right)_{\eps\in L}\right]=0$,
which contradicts $|f(x_{p})|>0$.
Property \ref{enu:derZeroExt} follows by induction on $|\alpha|\in\N$
using Thm.~\ref{thm:FR-forGSF}. We prove property \ref{enu:derZeroBound}
for the case $x_{p}\ge h$, the other case being similar. We consider
\[
\bar{x}_{q}:=(x_{1},\ldots,x_{p-1},x_{p}+\diff{\rho}^{q},x_{p+1},\ldots,x_{n})\quad\forall q\in\N.
\]
Then $|\bar{x}_{q}-a|\ge|x_{p}+\diff{\rho}^{q}-a_{p}|\ge\diff{\rho}^{q}$
for all $a\in[-h,h]^{n}\supseteq H$ because $x_{p}\ge h\ge a_{p}$.
Therefore, $\bar{x}_{q}\in\text{ext}(H)$ and hence $\partial^{\alpha}f(\bar{x}_{q})=0$
from the previous \ref{enu:derZeroExt}. The conclusion now follows
from the sharp continuity of the GSF $\partial^{\alpha}f$ (Thm.~\ref{thm:propGSF}.\ref{enu:GSF-cont}).
\ref{enu:intBound}: The inclusion $\pm(h,\ldots,h)\in[-h,h]^{n}\subseteq\prod_{p=1}^{n}[a_{p},b_{p}]$
implies $a_{p}\le-h$ and $b_{p}\ge h$ for all $p=1,\ldots,n$. Using
Thm.~\ref{thm:muMeasurableAndIntegral}.\ref{enu:int-ndimInt}, we
can write
\begin{align*}
\intop_{a_{1}}^{b_{1}}\,\diff{x_{1}}\ldots\intop_{a_{n}}^{b_{n}}f\left(x\right)\,\diff{x_{n}} & =\intop_{a_{1}}^{b_{1}}\,\diff{x_{1}}\ldots\intop_{a_{n-1}}^{b_{n-1}}\,\diff{x_{n-1}}\intop_{a_{n}}^{-h}f\left(x\right)\,\diff{x_{n}}+\\
& \phantom{=}\intop_{a_{1}}^{b_{1}}\,\diff{x_{1}}\ldots\intop_{a_{n-1}}^{b_{n-1}}\,\diff{x_{n-1}}\intop_{-h}^{+h}f\left(x\right)\,\diff{x_{n}}+\\
& \phantom{=}\intop_{a_{1}}^{b_{1}}\,\diff{x_{1}}\ldots\intop_{a_{n-1}}^{b_{n-1}}\,\diff{x_{n-1}}\intop_{h}^{b_{n}}f\left(x\right)\,\diff{x_{n}}.
\end{align*}
But if $x_{n}\in[a_{n},-h]$ or $x_{n}\in[h,b_{n}]$, then property
\ref{enu:derZeroBound} yields $f(x)=0$ and we obtain
\[
\intop_{a_{1}}^{b_{1}}\,\diff{x_{1}}\ldots\intop_{a_{n}}^{b_{n}}f\left(x\right)\,\diff{x_{n}}=\intop_{a_{1}}^{b_{1}}\,\diff{x_{1}}\ldots\intop_{a_{n-1}}^{b_{n-1}}\,\diff{x_{n-1}}\intop_{-h}^{h}f\left(x\right)\,\diff{x_{n}}.
\]
Proceeding in the same way with all the other integrals we get the
claim.
\end{proof}
\noindent In particular, if $T\in\mathcal{E}'(\Omega)$, then Thm.~\ref{thm:DerivativeIsZero}.\ref{enu:equivCmptSupp}
implies that $\iota_{\Omega}^{b}(T)\in\frontRise{\mathcal{D}}{\rho}\mathcal{GD}\left(\RC{\rho}^{n}\right)$.
Also observe that $f(x)=e^{-x^{2}}$, $x\in\left\{ x\in\RC{\rho}\mid\exists N\in\N:\ x^{2}\ge N\log\diff{\rho}\right\} $,
satisfies $f(x)\le\left|x\right|^{-q}$ for all infinite $x$ and all $q\in\N$.
Therefore
\[
\forall Q\in\N:\ f\in\frontRise{\mathcal{D}}{\rho}\mathcal{GD}\left([-\diff{\rho}^{-Q},\diff{\rho}^{-Q}]\right).
\]
\noindent Based on these results, we can define
\begin{defn}
\label{def:intCmpSupp}Let $f\in\frontRise{\mathcal{D}}{\rho}\mathcal{GD}(\RC{\rho}^{n})$, then
\begin{equation}
\int f:=\int_{\RC{\rho}^{n}}f:=\intop_{a_{1}}^{b_{1}}\,\diff{x_{1}}\ldots\intop_{a_{n}}^{b_{n}}f\left(x\right)\,\diff{x_{n}}\label{eq:intCmpSupp}
\end{equation}
where $\text{supp}(f)\subseteq\prod_{p=1}^{n}[a_{p},b_{p}]$. This
equality does not depend on $a_{p}$, $b_{p}$ because of Thm.~\ref{thm:DerivativeIsZero}.\ref{enu:intBound}.
\end{defn}
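\noindent For example, for the embedded Dirac delta $\delta(x)=b^{n}\psi(bx)$
(Example \ref{exa:deltaCompDelta}.\ref{enu:deltaH}), assuming as usual
the normalization $\left[\int\psi_{\eps}\right]=1$ of the Colombeau
mollifier $\psi$, the change of variables $t=bx$ yields
\[
\int\delta=\int b^{n}\psi(bx)\,\diff{x}=\int\psi(t)\,\diff{t}=1,
\]
in accordance with the classical picture of $\delta$ as a normalized
spike concentrated at the origin.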
\noindent Note that we can also write \eqref{eq:intCmpSupp} as
\begin{equation}
\int f=\lim_{\substack{a_{p}\to-\infty\\
b_{p}\to+\infty\\
p=1,\ldots,n
}
}\ \intop_{a_{1}}^{b_{1}}\,\diff{x_{1}}\ldots\intop_{a_{n}}^{b_{n}}f\left(x\right)\,\diff{x_{n}}=\lim_{h\to+\infty}\intop_{-h}^{h}\,\diff{x_{1}}\ldots\intop_{-h}^{h}f\left(x\right)\,\diff{x_{n}}\label{eq:intCmpSuppLimit}
\end{equation}
even if we are actually considering limits of eventually constant
functions. Using this notion of integral of a compactly supported
GSF, we can also write the value of a distribution $\langle T,\phi\rangle$
as an integral: let $b\in\RC{\rho}_{>0}$ be a strong infinite number,
$\Omega\subseteq\R^{n}$ be an open set, $T\in\mathcal{D}'(\Omega)$
and $\phi\in\mathcal{D}(\Omega)$, with $\text{supp}(\phi)\subseteq\prod_{i=1}^{n}[a_{i},b_{i}]_{\R}=:J$.
Then from Thm.~\ref{thm:embeddingD'}.\ref{enu:D'} and Thm.~\ref{thm:muMeasurableAndIntegral}.\ref{enu:int-ndimInt}
we get
\begin{equation}
\langle T,\phi\rangle=\int_{[J]}\iota_{\Omega}^{b}(T)(x)\cdot\phi(x)\,\diff{x}=\int\iota_{\Omega}^{b}(T)(x)\cdot\phi(x)\,\diff{x},\label{eq:pairTphiAsInt}
\end{equation}
where the equalities are in $\RC{\rho}$.
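\noindent For example, taking $T=\delta\in\mathcal{D}'(\R^{n})$ and any
$\phi\in\mathcal{D}(\R^{n})$, formula \eqref{eq:pairTphiAsInt} reads
\[
\phi(0)=\langle\delta,\phi\rangle=\int\iota_{\R^{n}}^{b}(\delta)(x)\cdot\phi(x)\,\diff{x},
\]
i.e.~the usual informal sifting property of the Dirac delta becomes
an actual equality in $\RC{\rho}$.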
\begin{defn}
\label{def:convolution}Let $f$, $g\in\frontRise{\mathcal{G}}{\rho}\mathcal{GC}^{\infty}(\RC{\rho}^{n})$, with $f\in\frontRise{\mathcal{D}}{\rho}\mathcal{GD}(\RC{\rho}^{n})$
or $g\in\frontRise{\mathcal{D}}{\rho}\mathcal{GD}(\RC{\rho}^{n})$. In the former case, by Thm.~\ref{thm:propGSF}.\ref{enu:category}
and Thm.~\ref{thm:DerivativeIsZero}.\ref{enu:equivCmptSupp}, for
all $x\in\RC{\rho}^{n}$, $f\cdot g(x-\cdot)\in\frontRise{\mathcal{D}}{\rho}\mathcal{GD}(\RC{\rho}^{n})$ with $\text{supp}\left(f\cdot g(x-\cdot)\right)\subseteq\text{supp}(f)\Subset_{\text{\rm f}}\RC{\rho}^{n}$.
Moreover, $\text{supp}\left(f(x-\cdot)\cdot g\right)\subseteq x-\text{supp}(f)\Subset_{\text{\rm f}}\RC{\rho}^{n}$.
Similarly, we can argue in the latter case, and we can hence define
\begin{equation}
\left(f\ast g\right)\left(x\right):=\intop f\left(y\right)g\left(x-y\right)\,\diff{y}=\intop f\left(x-y\right)g\left(y\right)\,\diff{y}\quad\forall x\in\RC{\rho}^{n}.\label{eq:defConvolution}
\end{equation}
\end{defn}
\noindent Note that directly from Thm\@.~\ref{thm:existenceUniquenessPrimitives}
and Def.~\ref{def:intCmpSupp}, it follows that $f*g\in\frontRise{\mathcal{G}}{\rho}\mathcal{GC}^{\infty}(\RC{\rho}^{n})$.
The next theorems provide the usual basic properties of convolution,
suitably formulated in our framework. We start by studying how the
convolution relates to the supports of its factors:
\begin{thm}
\label{thm:convSupp}Let $f$, $g\in\frontRise{\mathcal{D}}{\rho}\mathcal{GD}(\RC{\rho}^{n})$. Then
the following properties hold:
\begin{enumerate}
\item \label{enu:supp1}Let $\text{\emph{supp}}(f)\subseteq[-a,a]^{n}$,
$\text{\emph{supp}}(g)\subseteq[-b,b]^{n}$, $a$, $b\in\RC{\rho}_{>0}$,
and $x\in\RC{\rho}^{n}$. Set $L_{x}:=[-a,a]^{n}\cap\left(x-[-b,b]^{n}\right)$,
then
\begin{align}
\text{\emph{supp}}\left(f\cdot g(x-\cdot)\right) & \subseteq L_{x}=\prod_{p=1}^{n}[\max(-a,x_{p}-b),\min(a,x_{p}+b)]\label{eq:supp0}\\
\left(f\ast g\right)\left(x\right) & =\intop_{L_{x}}f\left(y\right)g\left(x-y\right)\,\diff{y}.\label{eq:supp1}
\end{align}
\item \label{enu:supp2}$\text{\emph{supp}}(f*g)\subseteq\overline{\text{\emph{supp}}(f)+\text{\emph{supp}}(g)}$,
therefore $f*g\in\frontRise{\mathcal{D}}{\rho}\mathcal{GD}(\RC{\rho}^{n})$.
\end{enumerate}
\end{thm}
\begin{proof}
\ref{enu:supp1}: If $|f(t)g(x-t)|>0$, then $t\in\text{supp}(f)$
and $x-t\in\text{supp}(g)$. Therefore, $\text{supp}\left(f\cdot g(x-\cdot)\right)\subseteq[-a,a]^{n}\cap\left(x-[-b,b]^{n}\right)$.
As in the case of real numbers, if $t\in[-a,a]^{n}\cap\left(x-[-b,b]^{n}\right)$,
then $-a\le t_{p}\le a$ and $-b\le x_{p}-t_{p}\le b$ for all $p=1,\ldots,n$,
so that $t_{p}\in[\max(-a,x_{p}-b),\min(a,x_{p}+b)]$. The opposite
inclusion is proved similarly, which gives the equality in \eqref{eq:supp0}.
The conclusion \eqref{eq:supp1} now follows from Def.~\ref{def:intCmpSupp}.
For completeness, recall that in general $\text{supp}(f)$ and $\text{supp}(g)$
are not functionally compact sets, and our integration theory only
allows us to integrate over the latter kind of sets. This justifies
our formulation of the present property using intervals.
\ref{enu:supp2}: Since $f$ and $g$ are compactly supported, we
have $\text{supp}(f)\subseteq H$ and $\text{supp}(g)\subseteq L$
for some $H$, $L\Subset_{\text{\rm f}}\RC{\rho}^{n}$. Assume that $\left|(f*g)(x)\right|>0$.
Then, by Thm.~\ref{thm:intRules}.\ref{enu:intMonotone}, Thm.~\ref{thm:muMeasurableAndIntegral}.\ref{enu:int-ndimInt}
and the extreme value Thm.~\ref{thm:extremeValues}, we get
\[
0<\left|(f*g)(x)\right|\le\lambda(H)\cdot\max_{y\in H}|f(y)g(x-y)|,
\]
where $\lambda$ is the extension of the Lebesgue measure given by
Def.~\ref{def:intOverCompact}. Therefore, there exists $y\in H$
such that $0<\lambda(H)\cdot\left|f(y)g(x-y)\right|$. This implies
that $y\in\text{supp}(f)$ and $x-y\in\text{supp}(g)$. Thereby, $x=y+(x-y)\in\text{supp}(f)+\text{supp}(g)$.
Taking the sharp closure we get the conclusion. Finally, $\overline{\text{supp}(f)+\text{supp}(g)}\subseteq\overline{H+L}=H+L$
and $H+L\Subset_{\text{\rm f}}\RC{\rho}^{n}$ because it is the image under the sum $+$
of $H\times L$ (see Thm.~\ref{thm:product} and Thm.~\ref{thm:image}).
\end{proof}
Now, we consider algebraic properties of convolution and its relations
with derivations and integration:
\begin{thm}
\label{thm:convAlgDiffInt}Let $f$, $g$, $h\in\frontRise{\mathcal{G}}{\rho}\mathcal{GC}^{\infty}(\RC{\rho}^{n})$ and
assume that at least two of them are compactly supported. Then the
following properties hold:
\begin{enumerate}
\item \label{enu:Commutative:conv}$f\ast g=g*f$.
\item \label{enu:Associative:conv}$\left(f\ast g\right)\ast h=f\ast\left(g\ast h\right)$.
\item \label{enu:Distributive:conc}$f\ast\left(h+g\right)=f\ast h+f\ast g$.
\item \label{enu:complex_conjugate}$\overline{f*g}=\overline{f}*\overline{g}$.
\item \label{enu:translation_conv}${\displaystyle t\oplus\left(f*g\right)=\left(t\oplus f\right)*g=f*\left(t\oplus g\right)}$
where $t\oplus f$ is the translation of the function $f$ by $t$
defined by $\left(t\oplus f\right)\left(x\right)=f\left(x-t\right)$
(see Sec.~\ref{subsec:Embedding}).
\item \label{enu:differen_conv}$\frac{\partial}{\partial x_{p}}\left(f*g\right)=\frac{\partial f}{\partial x_{p}}*g=f*\frac{\partial g}{\partial x_{p}}$
for all $p=1,\ldots,n$.
\item \label{enu:integration_conv}$\intop\left(f\ast g\right)\left(x\right)\,\diff{x}=\left(\intop f\left(x\right)\,\diff{x}\right)\left(\intop g\left(x\right)\,\diff{x}\right)$.
\end{enumerate}
\end{thm}
\begin{proof}
\ref{enu:Commutative:conv}: We assume, e.g., that $f\in\frontRise{\mathcal{D}}{\rho}\mathcal{GD}(\RC{\rho}^{n})$.
Take $h\in\RC{\rho}_{>0}$ such that $\text{supp}(f)\subseteq[-h,h]^{n}$.
By \eqref{eq:supp1} and Def.~\ref{def:intCmpSupp}, we can write
\[
\left(f\ast g\right)\left(x\right)=\intop_{-h}^{h}\,\diff{y_{1}}\ldots\intop_{-h}^{h}f\left(y\right)g\left(x-y\right)\,\diff{y_{n}}.
\]
We can now proceed as in the classical case, i.e.~considering the
change of variable $z=x-y$ (Thm.~\ref{thm:changeOfVariablesInt}).
We get
\[
\left(f\ast g\right)\left(x\right)=\intop_{x_{1}-h}^{x_{1}+h}\,\diff{z_{1}}\ldots\intop_{x_{n}-h}^{x_{n}+h}f\left(x-z\right)g\left(z\right)\,\diff{z_{n}}.
\]
Taking the limit $h\to+\infty$ (see \eqref{eq:intCmpSuppLimit}),
we obtain the desired equality. Similarly, we can also prove \ref{enu:Associative:conv}
and \ref{enu:Distributive:conc}.
As usual, \ref{enu:complex_conjugate} is a straightforward consequence
of the definition of complex conjugate.
\ref{enu:translation_conv}: The usual proof applies, in fact
\begin{align}
t\oplus\left(f*g\right)\left(x\right) & =\left(f*g\right)\left(x-t\right)=\intop f\left(y\right)g\left(x-t-y\right)\,\mathrm{d}y=\nonumber \\
& =\intop f\left(y\right)\left(t\oplus g\right)\left(x-y\right)\,\mathrm{d}y=\left(f\ast\left(t\oplus g\right)\right)\left(x\right).\label{eq:tInside}
\end{align}
Finally, the commutativity property \ref{enu:Commutative:conv} yields
$\left(t\oplus f\right)*g=g*(t\oplus f)$ and applying \eqref{eq:tInside}
$g*(t\oplus f)=t\oplus\left(g*f\right)=t\oplus\left(f*g\right)$.
\ref{enu:differen_conv}: Set $h:=f*g$ and take $x\in\RC{\rho}^{n}$.
Using differentiation under the integral sign (Thm.~\ref{thm:intRules}.\ref{enu:derUnderInt})
and Def.~\ref{def:intCmpSupp} we get
\[
\frac{\partial}{\partial x_{p}}h\left(x\right)=\intop_{\RC{\rho}^{n}}f\left(y\right)\frac{\partial g}{\partial x_{p}}\left(x-y\right)\,\diff{y}=\left(f\ast\frac{\partial g}{\partial x_{p}}\right)\left(x\right).
\]
Using \ref{enu:Commutative:conv}, we also have $\frac{\partial}{\partial x_{p}}h=\frac{\partial f}{\partial x_{p}}\ast g$.
To prove \ref{enu:integration_conv}, we show the case $n=1$; the
general case is similar. Let $a$, $b\in\RC{\rho}_{>0}$ be such
that $\text{supp}(f*g)\subseteq[-a,a]$ (Thm.~\ref{thm:convSupp})
and $\text{supp}(f)\subseteq[-b,b]$. Then
\[
\int(f*g)(x)\,\diff{x}=\int_{-a}^{a}\diff{x}\int_{-b}^{b}f(y)g(x-y)\,\diff{y}.
\]
Using Fubini's Thm.~\ref{thm:muMeasurableAndIntegral}.\ref{enu:int-ndimInt},
we can write
\begin{align*}
\int(f*g)(x)\,\diff{x} & =\int_{-b}^{b}f(y)\int_{-a}^{a}g(x-y)\,\diff{x}\,\diff{y}=\\
& =\int_{-b}^{b}f(y)\int_{-a-y}^{a-y}g(z)\,\diff{z}\,\diff{y}=\\
& =\int_{-b}^{b}f(y)\,\diff{y}\int_{-c}^{c}g(z)\,\diff{z},
\end{align*}
where, in the last step, we have taken $a\to+\infty$ (see \eqref{eq:intCmpSuppLimit})
or, equivalently, considered any $c\ge a+b$.
\end{proof}
Young's inequality for convolution is based on the generalized Hölder's
inequality, on the inequality $\left|\int_{K}f\,\diff{\mu}\right|\le\int_{K}\left|f\right|\,\diff{\mu}$
(see Thm.~\ref{thm:muMeasurableAndIntegral}.\ref{enu:existsReprDefInt}),
the monotonicity of the integral (see Thm.~\ref{thm:intRules}.\ref{enu:intMonotone})
and Fubini's theorem (see Thm.~\ref{thm:muMeasurableAndIntegral}.\ref{enu:int-ndimInt}).
Therefore, the usual proofs can be repeated in our setting if we take
sufficient care of terms such as $|f(x)|^{p}$ if $p\in\RC{\rho}_{\ge1}$:
\begin{defn}
\label{def:pNorm}Let $f\in\frontRise{\mathcal{D}}{\rho}\mathcal{GD}(\RC{\rho}^{n})$ and $p\in\RC{\rho}_{\ge1}$
be a finite number. Then, we set
\[
\Vert f\Vert_{p}:=\left(\int|f(x)|^{p}\,\diff{x}\right)^{1/p}\in\RC{\rho}_{\ge0}.
\]
Note that $|f|^{p}$ is a generalized integrable function (Def.~\ref{def:intOverCompact})
because $p$ is a finite number (in general the power $x^{y}$ is
not well-defined, e.g.~$\left(\frac{1}{\rho_{\eps}}\right)^{1/\rho_{\eps}}=\rho_{\eps}^{-1/\rho_{\eps}}$
is not $\rho$-moderate).
\end{defn}
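\noindent For example, for the embedded Dirac delta $\delta(x)=b^{n}\psi(bx)$,
the change of variables $t=bx$ and properties \ref{enu:suppStrictDeltaNet}
and \ref{enu:smallNegPartStrictDeltaNet} of Lem.~\ref{lem:strictDeltaNet}
give the finite estimate
\[
\Vert\delta\Vert_{1}=\int\left|b^{n}\psi(bx)\right|\,\diff{x}=\int_{[-1,1]^{n}}\left|\psi(t)\right|\,\diff{t}\le2,
\]
even though $\delta$ takes the infinite value $\delta(0)=b^{n}\psi(0)$.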
\noindent On the other hand, Hölder's inequality, under the assumptions $\Vert f\Vert_{p}>0$
and $\Vert g\Vert_{q}>0$, is simply based on the monotonicity of the integral,
Fubini's theorem and Young's inequality for products. The latter holds
also in $\RC{\rho}_{\ge0}$ because it holds in the entire $\R_{\ge0}$,
see e.g.~\cite{Sch05}.
\begin{thm}[Hölder]
\label{thm:Holder}Let $f_{k}\in\frontRise{\mathcal{D}}{\rho}\mathcal{GD}(\RC{\rho}^{n})$ and $p_{k}\in\RC{\rho}_{\ge1}$
for all $k=1,\ldots,m$ be such that $\sum_{k=1}^{m}\frac{1}{p_{k}}=1$
and $\Vert f_{k}\Vert_{p_{k}}>0$. Then
\[
\left\Vert \prod_{k=1}^{m}f_{k}\right\Vert _{1}\le\prod_{k=1}^{m}\Vert f_{k}\Vert_{p_{k}}.
\]
\end{thm}
\begin{thm}[Young]
\label{thm:convYoung}Let $f$, $g\in\frontRise{\mathcal{D}}{\rho}\mathcal{GD}(\RC{\rho}^{n})$ and $p$,
$q$, $r\in\RC{\rho}_{\ge1}$ be such that $\frac{1}{p}+\frac{1}{q}=1+\frac{1}{r}$
and $\Vert f\Vert_{p}$, $\Vert g\Vert_{q}>0$. Then $\Vert f*g\Vert_{r}\le\Vert f\Vert_{p}\cdot\Vert g\Vert_{q}$.
\end{thm}
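\noindent For instance, with $p=q=r=1$ (so that $\frac{1}{p}+\frac{1}{q}=2=1+\frac{1}{r}$),
Thm.~\ref{thm:convYoung} yields the familiar estimate
\[
\Vert f*g\Vert_{1}\le\Vert f\Vert_{1}\cdot\Vert g\Vert_{1}
\]
for all $f$, $g\in\frontRise{\mathcal{D}}{\rho}\mathcal{GD}(\RC{\rho}^{n})$ with $\Vert f\Vert_{1}$, $\Vert g\Vert_{1}>0$.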
\noindent In the following theorem, we consider when the equality
$\left(\delta*f\right)(x)=f(x)$ holds. As we will see in Sec.~\ref{subsec:The-Riemann-Lebesgue-lemma},
the Riemann-Lebesgue lemma implies that the validity of this equality
is necessarily limited.
\begin{thm}
\label{thm:convIdentity}Let $\delta$ be the $\iota_{\R^{n}}^{b}$-embedding
of the $n$-dimensional Dirac delta (see Thm.~\ref{thm:embeddingD'}).
Assume that $f\in\frontRise{\mathcal{G}}{\rho}\mathcal{GC}^{\infty}\left(\RC{\rho}^{n}\right)$ satisfies, at the point
$x\in\RC{\rho}^{n}$, the condition
\begin{equation}
\exists r\in\R_{>0}\,\exists M,c\in\RC{\rho}\,\forall y\in\overline{B}_{r}(x)\,\forall j\in\mathbb{N}:\ \left|\diff{^{j}f}\left(y\right)\right|\leq Mc^{j},\label{eq:regular}
\end{equation}
\[
\frac{b}{c}\text{ is a large infinite number}
\]
i.e., all the derivatives of $f$ in a \emph{finite} neighborhood of $x$ are
bounded by a suitably small polynomial $Mc^{j}$ (such a function
$f$ will be called \emph{bounded by a tame polynomial at} $x$).
Then $\left(\delta\ast f\right)(x)=f(x)$.
\end{thm}
\begin{proof}
Considering that $\delta(y)=b^{n}\psi(by)$, where $\psi$ is the
considered $n$-dimensional Colombeau mollifier and $b$ is a strong
infinite number (see Example \ref{exa:deltaCompDelta}.\ref{enu:deltaH}),
we have:
\begin{align*}
\left(\delta\ast f\right)\left(x\right)-f\left(x\right) & =\int f\left(x-y\right)\delta\left(y\right)\,\diff{y}-f\left(x\right)\int\delta\left(y\right)\,\diff{y}=\\
& =\int\left(f\left(x-y\right)-f\left(x\right)\right)\delta\left(y\right)\,\diff{y}=\\
& =\int_{\left[-\frac{r}{\sqrt{n}},\frac{r}{\sqrt{n}}\right]^{n}}\left(f\left(x-y\right)-f\left(x\right)\right)\delta\left(y\right)\,\diff{y}=\\
& =\int_{\left[-\frac{r}{\sqrt{n}},\frac{r}{\sqrt{n}}\right]^{n}}\left(f\left(x-y\right)-f\left(x\right)\right)b^{n}\psi\left(by\right)\,\diff{y},
\end{align*}
where $r\in\R_{>0}$ is the radius from \eqref{eq:regular}, so
that $\text{supp}(\delta)\subseteq\left[-\frac{r}{\sqrt{n}},\frac{r}{\sqrt{n}}\right]^{n}$
because $\frac{r}{\sqrt{n}}\in\R_{>0}$. By changing the variable $by=t$, and setting
$H:=\left[-\frac{br}{\sqrt{n}},\frac{br}{\sqrt{n}}\right]^{n}$ we
have
\[
\left(\delta\ast f\right)\left(x\right)-f\left(x\right)=\int_{H}\left(f\left(x-\frac{t}{b}\right)-f\left(x\right)\right)\psi\left(t\right)\,\diff{t}.
\]
Using Taylor's formula (Thm.~\ref{thm:Taylor}.\ref{enu:integralRest})
up to an arbitrary order $q\in\N$, we get
\begin{multline}
\int_{H}\left(f\left(x-\frac{t}{b}\right)-f\left(x\right)\right)\psi\left(t\right)\,\diff{t}=\int_{H}\sum_{0<\left|\alpha\right|\leq q}\frac{1}{\alpha!}\left(-\frac{t}{b}\right)^{\alpha}\partial^{\alpha}f\left(x\right)\psi\left(t\right)\,\diff{t}+\\
+\int_{H}\frac{1}{(q+1)!}\int_{0}^{1}\left(1-z\right)^{q}\diff{^{q+1}f}\left(x-z\frac{t}{b}\right)\left(-\frac{t}{b}\right)^{q+1}\psi\left(t\right)\,\diff{z}\,\diff{t}.\label{eq:tay}
\end{multline}
But \ref{enu:suppStrictDeltaNet} and \ref{enu:momentsStrictDeltaNet}
of Lem.~\ref{lem:strictDeltaNet} yield:
\[
\int_{H}t^{\alpha}\psi(t)\,\diff{t}=\left[\int_{\left[-\frac{b_{\eps}r}{\sqrt{n}},\frac{b_{\eps}r}{\sqrt{n}}\right]^{n}}t^{\alpha}\psi_{\eps}(t)\,\diff{t}\right]=\left[\int t^{\alpha}\psi_{\eps}(t)\,\diff{t}\right]=0\quad\forall\alpha:\ 0<|\alpha|\le q,
\]
where we also used that $\frac{b_{\eps}r}{\sqrt{n}}>1$ for $\eps$
sufficiently small because $b>0$ is an infinite number and $r\in\R_{>0}$.
Thereby, in \eqref{eq:tay} we only have to consider the remainder
\begin{multline*}
R_{q}\left(x\right):=\int_{H}\frac{1}{(q+1)!}\int_{0}^{1}\left(1-z\right)^{q}\diff{^{q+1}f}\left(x-z\frac{t}{b}\right)\left(-\frac{t}{b}\right)^{q+1}\psi\left(t\right)\,\diff{z}\,\diff{t}=\\
=\frac{(-1)^{q+1}}{b^{q+1}(q+1)!}\int_{H}\int_{0}^{1}\left(1-z\right)^{q}\diff{^{q+1}f}\left(x-z\frac{t}{b}\right)t^{q+1}\psi\left(t\right)\,\diff{z}\,\diff{t}.
\end{multline*}
For all $z\in(0,1)$ and $t\in H=\left[-\frac{br}{\sqrt{n}},\frac{br}{\sqrt{n}}\right]^{n}$,
we have $\left|\frac{zt}{b}\right|\leq\left|\frac{t}{b}\right|\leq\frac{\sqrt{n}|t|_{\infty}}{b}\le\frac{rb}{b}=r$
and hence $x-z\frac{t}{b}\in\overline{B}_{r}(x)$. Thereby, assumption
\eqref{eq:regular} yields $\left|\diff{^{q+1}f}\left(x-z\frac{t}{b}\right)\right|\le Mc^{q+1}$,
and hence
\begin{align*}
\left|R_{q}\left(x\right)\right| & \leq b^{-q-1}\frac{Mc^{q+1}}{(q+1)!}\intop_{H}\left|t^{q+1}\psi\left(t\right)\right|\,\diff{t}=\\
& =\left(\frac{b}{c}\right)^{-q-1}\frac{M}{(q+1)!}\intop_{[-1,1]^{n}}\left|t^{q+1}\psi\left(t\right)\right|\,\diff{t}\le\\
& \le\left(\frac{b}{c}\right)^{-q-1}\frac{M}{(q+1)!}\intop_{[-1,1]^{n}}\left|\psi\left(t\right)\right|\,\diff{t}\le\\
& \le\left(\frac{b}{c}\right)^{-q-1}\frac{2M}{(q+1)!},
\end{align*}
where we used \ref{enu:suppStrictDeltaNet} and \ref{enu:smallNegPartStrictDeltaNet}
of Lem.~\ref{lem:strictDeltaNet} and $\frac{br}{\sqrt{n}}>1$. We
can now let $q\rightarrow+\infty$ considering that $\frac{b}{c}>\diff{\rho}^{-s}$
for some $s\in\R_{>0}$, so that $\left|R_{q}\left(x\right)\right|\rightarrow0$
and hence $\left(\delta*f\right)(x)=f(x)$.
\end{proof}
\begin{example}
\label{exa:tamePol}~
\begin{enumerate}
\item If $f_{\omega}(x)=e^{-ix\omega}$, $b\ge\diff{\rho}^{-r}$ and $\omega\in\RC{\rho}$
satisfies $|\omega|\le\diff{\rho}^{-s}$ with $s<r$ (e.g.~if $\omega$
is a weak infinite number, see Def.~\ref{def:stronWeak}), then $\frac{b}{|\omega|}\ge\diff{\rho}^{-(r-s)}$
and $f_{\omega}$ is bounded by a tame polynomial at each point $x\in\RC{\rho}$.
On the contrary, e.g.~if $b=\diff{\rho}^{-r}$ and $\left|\omega\right|\ge\diff{\rho}^{-r}$,
then $\frac{b}{|\omega|}\le1$ and $f_{\omega}$ is not bounded by
a tame polynomial at any $x\in\RC{\rho}$.
\item If $f\in\frontRise{\mathcal{G}}{\rho}\mathcal{GC}^{\infty}(\RC{\rho}^{n})$ has finite derivatives of all orders at a finite
point $x\in\RC{\rho}^{n}$ (e.g.~if it originates from the embedding of an
ordinary smooth function), then it suffices to take $c=\diff{\rho}^{-r-1}$
to prove that $f$ is bounded by a tame polynomial at $x$. We can
argue similarly if $f$ is polynomially bounded for $x\to\infty$ and
$x\in\RC{\rho}^{n}$ is not finite.
\item The Dirac delta $\delta(x)=b^{n}\psi(bx)$ is not bounded by a tame
polynomial at $x=0$. This also shows that, generally speaking, the
embedding of a compactly supported distribution is not bounded by
a tame polynomial. Below we will show that indeed $\delta*\delta\ne\delta$,
even if we clearly have $\left(\delta*\delta\right)(x)=\delta(x)=0$
for all $x\in\RC{\rho}^{n}$ such that $|x|\ge r\in\R_{>0}$.
\item \label{enu:intAgainstDelta}If $f\in\frontRise{\mathcal{G}}{\rho}\mathcal{GC}^{\infty}\left(\RC{\rho}^{n}\right)$
is bounded by a tame polynomial at $0$, then since $\delta$ is an
even function (see Example \ref{exa:deltaCompDelta}.\ref{enu:deltaH}),
we have:
\begin{equation}
\int\delta(x)\cdot f(x)\,\diff{x}=\int\delta(0-x)\cdot f(x)\,\diff{x}=\left(f*\delta\right)(0)=f(0).\label{eq:deltaAgainstsFncn}
\end{equation}
\end{enumerate}
\end{example}
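The classical counterpart of \eqref{eq:deltaAgainstsFncn} can be illustrated numerically. The following sketch (Python, outside the GSF framework) uses a Gaussian mollifier $\psi$ and a large but finite parameter $b$ as stand-ins for the strict delta net and the infinite number $b$ of the text; both choices are illustrative assumptions.

```python
import numpy as np

# Classical, finite-parameter sketch of the sampling property:
# integrating delta_b(x) = b*psi(b*x) against a smooth f recovers f(0).
# psi is a Gaussian of integral 1 (an illustrative stand-in for the
# strict delta net); b = 200 stands in for the infinite number b.

def trapezoid(y, x):
    # simple trapezoidal rule, kept explicit so the sketch is self-contained
    return np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x))

b = 200.0
x = np.linspace(-1.0, 1.0, 200_001)
psi = lambda t: np.exp(-t**2 / 2) / np.sqrt(2 * np.pi)
delta_b = b * psi(b * x)
val = trapezoid(delta_b * np.cos(x), x)   # integral of delta_b * f, with f = cos
# val is close to f(0) = 1, with an error vanishing as b grows
```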
Finally, the following theorem considers the relations between convolution
of distributions and their embedding as GSF:
\begin{thm}
\label{thm:compatConv}Let $S\in\mathcal{E}'(\R^{n})$, $T\in\mathcal{D}'(\R^{n})$
and $b\in\RC{\rho}_{>0}$ be a strong positive infinite number, then for
all $\phi\in\mathcal{D}(\R^{n})$:
\begin{enumerate}
\item \label{enu:convDist1}$\langle S*T,\phi\rangle=\int\iota_{\R^{n}}^{b}(S)(x)\cdot\iota_{\R^{n}}^{b}(T)(y)\cdot\phi(x+y)\,\diff{x}\,\diff{y}=\int\left(\iota_{\R^{n}}^{b}(S)*\iota_{\R^{n}}^{b}(T)\right)(z)\cdot\phi(z)\,\diff{z}.$
\item \label{enu:convDist2}$T*\phi=\iota_{\R^{n}}^{b}(T)*\phi$.
\end{enumerate}
\end{thm}
\begin{proof}
\ref{enu:convDist1}: Using \eqref{eq:pairTphiAsInt}, we have
\begin{align*}
\langle S*T,\phi\rangle & =\langle S(x),\langle T(y),\phi(x+y)\rangle\rangle=\langle S(x),\int\iota_{\R^{n}}^{b}(T)(y)\phi(x+y)\,\diff{y}\rangle=\\
& =\int\iota_{\R^{n}}^{b}(S)(x)\int\iota_{\R^{n}}^{b}(T)(y)\phi(x+y)\,\diff{y}\,\diff{x}=\\
& =\int\left(\iota_{\R^{n}}^{b}(S)*\iota_{\R^{n}}^{b}(T)\right)(z)\phi(z)\,\diff{z},
\end{align*}
where, in the last step, we used the change of variables $x=z-y$
and Fubini's theorem.
\ref{enu:convDist2}: For all $x\in\csp{\R^{n}}$, using again \eqref{eq:pairTphiAsInt},
we have $\left(T*\phi\right)(x)=\langle T(y),\phi(x-y)\rangle=\int\iota_{\R^{n}}^{b}(T)(y)\phi(x-y)\,\diff{y}=\left(\iota_{\R^{n}}^{b}(T)*\phi\right)(x)$.
\end{proof}
\noindent We note that an equality of the type $\iota_{\R^{n}}^{b}(S*T)=\iota_{\R^{n}}^{b}(S)*\iota_{\R^{n}}^{b}(T)$
cannot hold because from Thm.~\ref{thm:convAlgDiffInt}.\ref{enu:Associative:conv}
it would imply $1*(\delta'*H)=(1*\delta')*H$ as distributions. Considering
their embeddings, we have $\iota_{\R^{n}}^{b}(1)*\left(\iota_{\R^{n}}^{b}(\delta')*\iota_{\R^{n}}^{b}(H)\right)=\iota_{\R^{n}}^{b}(1)*\left(\iota_{\R^{n}}^{b}(\delta)*\iota_{\R^{n}}^{b}(\delta)\right)=\left(\iota_{\R^{n}}^{b}(1)*\iota_{\R^{n}}^{b}(\delta')\right)*\iota_{\R^{n}}^{b}(H)=\left(\iota_{\R^{n}}^{b}(1')*\iota_{\R^{n}}^{b}(\delta)\right)*\iota_{\R^{n}}^{b}(H)=0$.
In particular, at the term $\iota_{\R^{n}}^{b}(\delta)*\iota_{\R^{n}}^{b}(\delta)$
we cannot apply Thm.~\ref{thm:convIdentity} because $\delta^{(j)}(x)=b^{j+1}\psi^{(j)}(bx)$.
This also implies that $\iota_{\R^{n}}^{b}(\delta)*\iota_{\R^{n}}^{b}(\delta)\ne\iota_{\R^{n}}^{b}(\delta)$
because otherwise we would have $0=\iota_{\R^{n}}^{b}(1)*\left(\iota_{\R^{n}}^{b}(\delta)*\iota_{\R^{n}}^{b}(\delta)\right)=\iota_{\R^{n}}^{b}(1)*\iota_{\R^{n}}^{b}(\delta)=\int\delta=1$.
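The failure of $\delta*\delta=\delta$ already shows up at the classical, finite-parameter level: convolving the model $\delta_{b}(x)=b\psi(bx)$ with itself at the same scale $b$ changes its height at the origin by a fixed factor, for every $b$. A numerical sketch (the Gaussian $\psi$ and the finite value $b=200$ are illustrative assumptions):

```python
import numpy as np

# For a Gaussian mollifier, (delta_b * delta_b)(0) = b/(2*sqrt(pi)) while
# delta_b(0) = b/sqrt(2*pi); the ratio 1/sqrt(2) != 1 persists for every b,
# mirroring the fact that delta*delta cannot equal delta.

def trapezoid(y, x):
    return np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x))

b = 200.0
x = np.linspace(-1.0, 1.0, 200_001)
psi = lambda t: np.exp(-t**2 / 2) / np.sqrt(2 * np.pi)
delta_b = b * psi(b * x)
conv_at_0 = trapezoid(delta_b * delta_b, x)   # (delta_b*delta_b)(0); psi is even
ratio = conv_at_0 / delta_b[len(x) // 2]      # compare with delta_b(0)
```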
\section{Hyperfinite Fourier transform\label{sec:Hyperfinite-Fourier-transform}}
\begin{defn}
\label{def:HyperfiniteFT}Let $k\in\RC{\rho}_{>0}$ be a positive infinite
number. For $f\in\frontRise{\mathcal{G}}{\rho}\mathcal{GC}^{\infty}(K,\frontRise{\mathbb{C}}{\rho}\widetilde{\mathbb{C}})$, we define the $n$-dimensional
\emph{hyperfinite Fourier transform (HFT) $\mathcal{F}_{k}(f)$ of
$f$} \emph{on }$K:=\left[-k,k\right]^{n}$ as follows:
\begin{equation}
\mathcal{F}_{k}\left(f\right)\left(\omega\right):=\intop_{K}f\left(x\right)e^{-ix\cdotp\omega}\,\diff{x}=\intop_{-k}^{k}\,\diff{x_{1}}\ldots\intop_{-k}^{k}f\left(x_{1},\ldots,x_{n}\right)e^{-ix\cdotp\omega}\,\diff{x_{n}},\label{eq:Hpfinite_FT}
\end{equation}
\noindent where $x=\left(x_{1},\ldots,x_{n}\right)\in K$ and $\omega=\left(\omega_{1},\ldots,\omega_{n}\right)\in\RC{\rho}^{n}$.
As usual, the product $x\cdotp\omega$ on $\RC{\rho}^{n}$ denotes the
dot product $x\cdotp\omega=\sum_{j=1}^{n}x_{j}\omega_{j}\in\RC{\rho}$.
For simplicity, in the following we will also use the notation $\frontRise{\mathcal{G}}{\rho}\mathcal{GC}^{\infty}(X):=\frontRise{\mathcal{G}}{\rho}\mathcal{GC}^{\infty}(X,\frontRise{\mathbb{C}}{\rho}\widetilde{\mathbb{C}})$.
If $f\in\frontRise{\mathcal{D}}{\rho}\mathcal{GD}(X)$ and $\text{supp}(f)\subseteq K=[-k,k]^{n}$, based
on Def.~\ref{def:intCmpSupp}, we can use the simplified notation
$\mathcal{F}(f):=\mathcal{F}_{k}(f)$.
\end{defn}
In the following, $k=[k_{\eps}]\in\RC{\rho}_{>0}$ will always denote
a positive infinite number, and we set $K:=\left[-k,k\right]^{n}\Subset_{\text{\rm f}}\RC{\rho}^{n}$.
The adjective \emph{hyperfinite} can be motivated as follows: on the
one hand, $k\in\RC{\rho}$ is an infinite number, but on the other hand
we already mentioned that GSF behave on a functionally compact set
like $K$ as if it were a compact set. Similarly to the case of hyperfinite
numbers $\hyperN{\rho}$ (see Def.~\ref{def:hyperfiniteN}), the adjective
\emph{hyperfinite }is frequently used to denote mathematical objects
which are in some sense infinite but behave, from several points of
view, as bounded ones.
\begin{thm}
\label{thm:base}Let $f\in\frontRise{\mathcal{G}}{\rho}\mathcal{GC}^{\infty}\left(K\right)$, then the following
properties hold:
\begin{enumerate}
\item \label{enu:epsIntFT}Let $\omega=[\omega_{\eps}]\in\RC{\rho}^{n}$ and
let $f$ be defined by the net $(f_{\eps})$. Then we have:
\[
\mathcal{F}_{k}\left(f\right)\left(\omega\right)=\intop_{K}f\left(x\right)e^{-ix\cdotp\omega}\,\diff{x}=\left[\intop_{-k_{\eps}}^{k_{\eps}}\,\diff{x_{1}}\ldots\intop_{-k_{\eps}}^{k_{\eps}}f_{\eps}\left(x_{1},\ldots,x_{n}\right)e^{-ix\cdotp\omega_{\eps}}\,\diff{x_{n}}\right]\in\frontRise{\mathbb{C}}{\rho}\widetilde{\mathbb{C}}.
\]
\item \label{enu:bound}$\forall\omega\in\RC{\rho}^{n}:\ \left|\mathcal{F}_{k}(f)(\omega)\right|\le\int_{K}\left|f(x)\right|\,\diff{x}=\Vert f\Vert_{1}$,
so that the HFT is always sharply bounded.
\item \label{enu:Fourier_Map}$\mathcal{F}_{k}:\frontRise{\mathcal{G}}{\rho}\mathcal{GC}^{\infty}\left(K\right)\longrightarrow\frontRise{\mathcal{G}}{\rho}\mathcal{GC}^{\infty}\left(\RC{\rho}^{n}\right)$.
\end{enumerate}
\end{thm}
\begin{proof}
\ref{enu:epsIntFT}: For all $\omega\in\RC{\rho}^{n}$ fixed, the map
$x\in K\mapsto f\left(x\right)e^{-ix\cdotp\omega}$ is a GSF by the
closure with respect to composition, i.e.~Thm.~\ref{thm:propGSF}.\ref{enu:category}.
Therefore, we can apply Thm.~\ref{thm:muMeasurableAndIntegral}.\ref{enu:int-ndimInt}.
To prove \ref{enu:Fourier_Map}, we have to show that $\mathcal{F}_{k}(f):\RC{\rho}^{n}\longrightarrow\frontRise{\mathbb{C}}{\rho}\widetilde{\mathbb{C}}$
is defined by a net $\left(\mathcal{F}_{k}\right)_{\eps}\in\mathcal{C}^{\infty}\left(\R^{n},\mathbb{C}\right)$
(see Def.~\ref{def:netDefMap}). We can naturally define such a net
as
\[
\left(\mathcal{F}_{k}\right)_{\eps}\left(y\right):=\intop_{-k_{\eps}}^{k_{\eps}}\,\diff{x_{1}}\ldots\intop_{-k_{\eps}}^{k_{\eps}}f_{\eps}\left(x_{1},\ldots,x_{n}\right)e^{-ix\cdotp y}\,\diff{x_{n}}\quad\forall y\in\R^{n},
\]
and we claim it satisfies the following properties:
\begin{enumerate}[label=(\alph*)]
\item \label{enu:hypotheis1}$\left[\left(\mathcal{F}_{k}\right)_{\eps}\left(\omega_{\eps}\right)\right]\in\frontRise{\mathbb{C}}{\rho}\widetilde{\mathbb{C}}$,
$\forall\left[\omega_{\eps}\right]\in\RC{\rho}^{n}$.
\item \label{enu:hypothesis2}$\forall\left[\omega_{\eps}\right]\in\RC{\rho}^{n}\,\forall\alpha\in\mathbb{N}^{n}:\ \left(\partial^{\alpha}\left(\mathcal{F}_{k}\right)_{\eps}\left(\omega_{\eps}\right)\right)\in\mathcal{C}_{\rho}$.
\end{enumerate}
Claim \ref{enu:hypotheis1} is justified by \ref{enu:epsIntFT} above,
and property \ref{enu:bound} also follows directly from \ref{enu:epsIntFT}.
In order to prove \ref{enu:hypothesis2}, we use the standard differentiation
under the integral sign to get
\[
\partial^{\alpha}\left(\mathcal{F}_{k}\right)_{\eps}\left(\omega_{\eps}\right)=\intop_{-k_{\eps}}^{k_{\eps}}\,\diff{x_{1}}\ldots\intop_{-k_{\eps}}^{k_{\eps}}f_{\eps}\left(x_{1},\ldots,x_{n}\right)e^{-ix\cdotp\omega_{\eps}}(-ix)^{\alpha}\,\diff{x_{n}}.
\]
We can now proceed as above to prove \ref{enu:hypothesis2} and hence
the claim \ref{enu:Fourier_Map}.
\end{proof}
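The sharp bound in \ref{enu:bound} is the elementary estimate $\left|\int f(x)e^{-ix\cdot\omega}\,\diff{x}\right|\le\int|f(x)|\,\diff{x}$, which survives discretization verbatim since the quadrature weights are positive. A classical numerical sketch (the finite radius $k=5$ and the particular integrand are illustrative assumptions):

```python
import numpy as np

# Check |F_k(f)(omega)| <= ||f||_1 for several frequencies; the inequality
# holds exactly for trapezoidal sums because the weights are positive.

def trapezoid(y, x):
    return np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x))

k = 5.0                                    # finite stand-in for the infinite radius k
x = np.linspace(-k, k, 100_001)
f = np.exp(-x**2) * np.sin(3 * x)          # an arbitrary smooth integrand
norm1 = trapezoid(np.abs(f), x)            # ||f||_1
F = lambda w: trapezoid(f * np.exp(-1j * x * w), x)
vals = [abs(F(w)) for w in (0.0, 0.7, 2.0, 10.0)]
```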
\subsection{\label{subsec:The-heuristic-motivation}The heuristic motivation
of the FT in a non-Archimedean setting}
Frequently, the formula for the definition of the FT (e.g.~for rapidly
decreasing functions) is informally motivated using its relations
with Fourier series. In order to replicate a similar argument for
GSF, we need the notion of \emph{hyperseries}. In fact, exactly as
the ordinary limit $\lim_{n\in\N}a_{n}$ is not well suited for the
sharp topology (because of its infinitesimal neighbourhoods) and we
have to consider hyperlimits $\hyperlim{\rho}{\sigma}a_{n}$ (see
Def.~\ref{def:hyperfiniteN}.\ref{enu:hyperlimit}), likewise to
study series of $a_{n}\in\frontRise{\mathbb{C}}{\rho}\widetilde{\mathbb{C}}$, $n\in\N$, we have to consider
\begin{align*}
\hypersum{\rho}{\sigma}a_{n} & :=\hyperlimarg{\rho}{\sigma}{N}\sum_{n=0}^{N}a_{n}\in\frontRise{\mathbb{C}}{\rho}\widetilde{\mathbb{C}},\\
\hypersumZ{\rho}{\sigma}a_{n} & :=\hyperlimarg{\rho}{\sigma}{N}\sum_{n=-N}^{N}a_{n}\in\frontRise{\mathbb{C}}{\rho}\widetilde{\mathbb{C}},
\end{align*}
where $\hyperZ{\sigma}:=\hyperN{\sigma}\cup\left(-\hyperN{\sigma}\right)\subseteq\RC{\sigma}$.
The main problem in this definition is how to define the \emph{hyperfinite
sums} $\sum_{n=M}^{N}a_{n}\in\frontRise{\mathbb{C}}{\rho}\widetilde{\mathbb{C}}$ for arbitrary hypernatural
numbers $N$, $M\in\hyperN{\sigma}$ and starting from suitable \emph{ordinary}
sequences $\left(a_{n}\right)_{n\in\N}$ of $\frontRise{\mathbb{C}}{\rho}\widetilde{\mathbb{C}}$. However, this
can be done, and the resulting notion extends several classical theorems,
see \cite{TiwGio21}.
Only for this section, we hence assume that $f\in\frontRise{\mathcal{D}}{\rho}\mathcal{GD}([-T,T])$,
$T\in\RC{\rho}_{>0}$, can be written as a Fourier hyperseries
\[
f(t)=\hypersumZ{\rho}{\sigma}c_{n}e^{2\pi i\frac{n}{T}t}\quad\forall t\in(-T,T),
\]
where $\sigma$ is another gauge such that $\sigma_{\eps}\le\rho_{\eps}^{q}$
for all $q\in\N$ and for $\eps$ small (so that $\R_{\rho}\subseteq\R_{\sigma}$,
see Def.~\ref{def:RCGN}). Using Thm.~\ref{thm:contResult} to exchange
hyperseries and integration, for each $h\in\hyperZ{\sigma}$, we have
\[
\int_{-T}^{T}f(t)e^{-2\pi i\frac{h}{T}t}\,\diff{t}=\hypersumZ{\rho}{\sigma}c_{n}\int_{-T}^{T}e^{2\pi i\frac{t}{T}(n-h)}\,\diff{t}=2T\cdot c_{h}.
\]
That is, $c_{h}=\frac{1}{2T}\mathcal{F}(f)\left(2\pi\frac{h}{T}\right)$.
It is also well-known that, informally, if $T$ is ``sufficiently
large'', then the Fourier coefficients $c_{n}$ ``approximate''
the FT scaled by $\frac{1}{2T}$ and dilated by $2\pi$. Using our
non-Archimedean language, this can be formalized as follows: Let $\omega=[\omega_{\eps}]\in\RC{\rho}$,
and assume that $T=[T_{\eps}]$ is an infinite number, then setting
$h_{\omega}:=\left[\text{int}\left(\omega_{\eps}\cdot T_{\eps}\right)\right]\in\hyperZ{\rho}$
(here we use $\R_{\rho}\subseteq\R_{\sigma}$), we have $\omega_{\eps}\le\frac{h_{\omega\eps}}{T_{\eps}}\le\omega_{\eps}+\frac{1}{T_{\eps}}$,
so that $\frac{h_{\omega}}{T}\approx\omega$ because $T$ is an infinite
number. By Thm.~\ref{thm:base}, $\mathcal{F}(f)$ is a GSF. Let
$a$, $b$, $c$, $d\in\RC{\rho}$, with $a<c<d<b$, and set $M:=\max_{\omega\in[2\pi a,2\pi b]}\left|\mathcal{F}(f)'(\omega)\right|$.
Using Lem.~\ref{lem:mayer}, we can find $q\in\N$ such that $c-a\ge\diff{\rho}^{q}$
and $b-d\ge\diff{\rho}^{q}$. Assume that $T$ is sufficiently large
so that the following conditions hold
\[
\frac{1}{T}\le\diff{\rho}^{q},\quad\frac{M}{T}\approx0.
\]
Then, for all $\omega\in[c,d]$, we have $\frac{h_{\omega}}{T}\le\omega+\frac{1}{T}\le d+\diff{\rho}^{q}\le b$
and $\frac{h_{\omega}}{T}\ge\omega\ge c>a$, so that both $\frac{h_{\omega}}{T}$
and $\omega$ lie in $[a,b]$. From the mean value theorem Thm.~\ref{thm:meanValue},
we hence have
\[
\left|\mathcal{F}(f)\left(2\pi\frac{h_{\omega}}{T}\right)-\mathcal{F}(f)\left(2\pi\omega\right)\right|\le2\pi M\left|\frac{h_{\omega}}{T}-\omega\right|\le2\pi\frac{M}{T}\approx0.
\]
We hence proved that
\[
\exists Q\in\N\,\forall T\ge\diff{\rho}^{-Q}:\ c_{h_{\omega}}\approx\frac{1}{2T}\mathcal{F}(f)(2\pi\omega).
\]
Finally, note that since $T$ is an infinite number, if $h_{\omega}\in\Z$,
then necessarily $\omega$ must be infinitesimal; on the contrary,
if $\omega\ge r\in\R_{\ne0}$, then necessarily $h_{\omega}\in\hyperZ{\sigma}\setminus\Z$
is an infinite integer number.
Therefore, with the precise meaning given above, the heuristic relations
between Fourier coefficients and the HFT also hold for GSF.
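The approximation $c_{h_{\omega}}\approx\frac{1}{2T}\mathcal{F}(f)(2\pi\omega)$ can also be illustrated classically with a large but finite period. In the following sketch (Python, outside the GSF framework), the bump function $f$, the period $T=1000$ and the frequency $\omega=1/3$ are illustrative assumptions; the observed error is controlled by $2\pi M/T$ exactly as in the mean value argument above.

```python
import numpy as np

def trapezoid(y, x):
    return np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x))

# f: smooth bump supported in [-1,1]; T plays the role of the infinite period.
t = np.linspace(-1.0, 1.0, 100_001)
f = np.where(np.abs(t) < 1, np.exp(-1.0 / np.maximum(1 - t**2, 1e-300)), 0.0)
F = lambda theta: trapezoid(f * np.exp(-1j * theta * t), t)  # transform of f

T = 1000.0
omega = 1.0 / 3.0
h = np.floor(omega * T)          # h_omega = int(omega*T), here h = 333
coeff = F(2 * np.pi * h / T)     # 2T*c_h, the rescaled Fourier coefficient
target = F(2 * np.pi * omega)    # the transform at 2*pi*omega
err = abs(coeff - target)        # bounded by 2*pi*M/T
```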
\subsection{\label{subsec:The-Riemann-Lebesgue-lemma}The Riemann-Lebesgue lemma
in a non-linear setting}
The following result represents the Riemann-Lebesgue lemma in our
framework. It immediately highlights an important difference with
respect to the classical approach since it states that the HFT of
a very large class of compactly supported GSF is still compactly supported
(see also Thm.~\ref{thm:uncertainty} for a classical formulation
of the uncertainty inequality for GSF).
\begin{lem}
\label{lem:Rieman-Lebesgue}Let $H\Subset_{\text{\rm f}}\RC{\rho}^{n}$ and $f\in\frontRise{\mathcal{D}}{\rho}\mathcal{GD}\left(H\right)$
be a compactly supported GSF. Assume that
\begin{equation}
\exists C,b\in\RC{\rho}_{>0}\,\forall x\in H\,\forall j\in\N:\ \left|\diff{^{j}f}(x)\right|\le C\cdot b^{j}.\label{eq:derPol}
\end{equation}
For all $N_{1},\ldots,N_{n}\in\N$ and $\omega\in\RC{\rho}^{n}$, if $\omega_{1}^{N_{1}}\cdot\ldots\cdot\omega_{n}^{N_{n}}$
is invertible, then
\begin{equation}
\left|\mathcal{F}(f)(\omega)\right|\le\frac{1}{\left|\omega_{1}^{N_{1}}\cdot\ldots\cdot\omega_{n}^{N_{n}}\right|}\cdot\int_{H}\left|\partial_{1}^{N_{1}}\ldots\partial_{n}^{N_{n}}f(x)\right|\,\diff{x}.\label{eq:R-Lineq}
\end{equation}
Therefore
\begin{equation}
\lim_{\omega\to\infty}\left|\mathcal{F}(f)(\omega)\right|=0.\label{eq:R-Llim}
\end{equation}
Actually, \eqref{eq:R-Lineq} yields the stronger result:
\begin{equation}
\exists Q\in\N:\ \mathcal{F}(f)\in\frontRise{\mathcal{D}}{\rho}\mathcal{GD}\left(\overline{B_{\diff{\rho}^{-Q}}(0)}\right).\label{eq:HFTCmptSupp}
\end{equation}
\end{lem}
\begin{proof}
Let us apply integration by parts Thm.~\ref{thm:intRules}.\ref{enu:intByParts}
at the $p$-th integral in \eqref{eq:Hpfinite_FT} (assuming that
$N_{p}>0$):
\begin{align*}
\intop_{-k}^{k}f\left(x\right)e^{-i\omega\cdot x}\,\diff{x_{p}} & =\left.-\frac{f\left(x\right)}{i\omega_{p}}e^{-i\omega\cdot x}\right|_{x_{p}=-k}^{x_{p}=k}+\frac{1}{i\omega_{p}}\intop_{-k}^{k}\partial_{p}f\left(x\right)e^{-i\omega\cdot x}\,\diff{x_{p}}=\\
 & =\frac{1}{i\omega_{p}}\intop_{-k}^{k}\partial_{p}f\left(x\right)e^{-i\omega\cdot x}\,\diff{x_{p}},
\end{align*}
because Thm.~\ref{thm:DerivativeIsZero}.\ref{enu:derZeroBound}
yields $f(x)=0$ if $x_{p}=\pm k$. Applying the same idea with $N_{p}\in\N$
repeated integrations by parts for each integral in \eqref{eq:Hpfinite_FT},
and using Thm.~\ref{thm:DerivativeIsZero}.\ref{enu:derZeroBound},
we obtain
\[
\mathcal{F}(f)(\omega)=\frac{1}{\omega_{1}^{N_{1}}\cdot\ldots\cdot\omega_{n}^{N_{n}}i^{N_{1}+\ldots+N_{n}}}\int_{K}\partial_{1}^{N_{1}}\ldots\partial_{n}^{N_{n}}f(x)e^{-ix\cdot\omega}\,\diff{x}.
\]
Claims \eqref{eq:R-Lineq} and \eqref{eq:R-Llim} both follow from
Thm.~\ref{thm:muMeasurableAndIntegral}.\ref{enu:existsReprDefInt}
and from the closure of GSF with respect to differentiation, i.e.~Thm.~\ref{thm:FR-forGSF}.
To prove \eqref{eq:HFTCmptSupp}, we first recall \eqref{eq:closureBall},
so that $\overline{B_{\diff{\rho}^{-Q}}(0)}\Subset_{\text{\rm f}}\RC{\rho}^{n}$. Let $C$,
$b\in\RC{\rho}_{>0}$ from \eqref{eq:derPol} and $\lambda(H)\in\RC{\rho}$,
where $\lambda$ is the Lebesgue measure. Therefore, $b\le\diff{\rho}^{-R}$
for some $R\in\N$, and we can set $Q:=R+1$. We want to prove the
claim using Thm.~\ref{thm:DerivativeIsZero}.\ref{enu:equivCmptSupp},
so that we take $\omega=(\omega_{1},\ldots,\omega_{n})\in\text{ext}\left(\overline{B_{\diff{\rho}^{-Q}}(0)}\right)$.
We cannot have $|\omega|\sbpt{<}\diff{\rho}^{-Q}$, because this would
yield $|\omega-a|\sbpt{=}0$ for some $a\in\overline{B_{\diff{\rho}^{-Q}}(0)}$;
thereby, $|\omega|\ge\diff{\rho}^{-Q}$ by Lem.~\ref{lem:trich1st}.
It always holds $\max_{l=1,\ldots,n}|\omega_{l}|\ge\frac{1}{n}|\omega|$,
i.e.~$\left[\max_{l=1,\ldots,n}|\omega_{l\eps}|\right]\ge\frac{1}{n}\left[|\omega_{\eps}|\right]$,
where $\omega_{l}=[\omega_{l\eps}]$ and $\omega_{\eps}:=\left|(\omega_{1\eps},\ldots,\omega_{n\eps})\right|$.
In general, we cannot say that $|\omega_{p}|=\max_{l=1,\ldots,n}|\omega_{l}|$
for some $p=1,\ldots,n$ because at most this equality holds only
for subpoints. In fact, set $L_{p}:=\left\{ \eps\in I\mid\max_{l=1,\ldots,n}|\omega_{l\eps}|=\left|\omega_{p\eps}\right|\right\} $
and let $P\subseteq\{1,\ldots,n\}$ be the nonempty set of all the
indices $p=1,\ldots,n$ such that $L_{p}\subseteq_{0} I$. We hence have
$|\omega_{p}|=_{L_{p}}\max_{l=1,\ldots,n}|\omega_{l}|\ge\frac{1}{n}|\omega|\ge\frac{1}{n}\diff{\rho}^{-Q}$
for all $p\in P$, and
\begin{equation}
\forall^{0}\eps\,\exists p\in P:\ \eps\in L_{p}.\label{eq:L_p}
\end{equation}
We apply assumption \eqref{eq:derPol} and inequality \eqref{eq:R-Lineq}
with an arbitrary $N_{p}=N\in\N$, $p\in P$, and with $N_{j}=0$
for all $j\ne p$ to get
\begin{align*}
\left|\mathcal{F}(f)(\omega)\right| & \le\frac{1}{|\omega_{p}|^{N}}\cdot\int_{H}\left|\partial_{p}^{N}f(x)\right|\,\diff{x}\le_{L_{p}}n^{N}\cdot\diff{\rho}^{NQ}Cb^{N}\lambda(H)\le\\
& \le\diff{\rho}^{-1}\cdot\diff{\rho}^{N(Q-R)}C\lambda(H)=\diff{\rho}^{N-1}C\lambda(H).
\end{align*}
For $N\to+\infty$ (in the ring $\RC{\rho}|_{L_{p}}$), we hence have that
$\mathcal{F}(f)(\omega)=_{L_{p}}0$. From \eqref{eq:L_p} we hence
finally get $\mathcal{F}(f)(\omega)=0$.
\end{proof}
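In one dimension, inequality \eqref{eq:R-Lineq} reads $|\mathcal{F}(f)(\omega)|\le|\omega|^{-N}\int|f^{(N)}|$. The following classical numerical sketch checks the case $N=2$ (the bump function, the grid, and $\omega=10$ are illustrative assumptions; $f''$ is computed by finite differences):

```python
import numpy as np

def trapezoid(y, x):
    return np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x))

# Smooth bump supported in [-1,1]; all derivatives vanish at the endpoints,
# so the boundary terms of the two integrations by parts disappear.
x = np.linspace(-1.2, 1.2, 240_001)
f = np.where(np.abs(x) < 1, np.exp(-1.0 / np.maximum(1 - x**2, 1e-300)), 0.0)
d2f = np.gradient(np.gradient(f, x), x)                # finite-difference f''

omega = 10.0
lhs = abs(trapezoid(f * np.exp(-1j * omega * x), x))   # |F(f)(omega)|
rhs = trapezoid(np.abs(d2f), x) / omega**2             # |omega|^{-2} * int |f''|
```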
\begin{rem}
\label{rem:R-L}~
\begin{enumerate}
\item Considering that $\delta(t)=b^{n}\psi(bt)$ and that $\psi$ is an
even function (Lem.~\ref{lem:strictDeltaNet}.\ref{enu:suppStrictDeltaNet}),
we have
\begin{equation}
\mathcal{F}(\delta)(\omega)=\int\delta(t)e^{-it\omega}\,\diff{t}=\int\delta(0-t)e^{-it\omega}\,\diff{t}=\left(\delta*e^{-i(-)\omega}\right)(0).\label{eq:deltaConvFT}
\end{equation}
We already know that if $b/|\omega|$ is a strong infinite number,
then the function $f_{\omega}(t)=e^{-it\omega}$ is bounded by a tame
polynomial. Thereby, using Thm.~\ref{thm:convIdentity}, we have
$\mathcal{F}(\delta)(\omega)=f_{\omega}(0)=1$; in particular, $\mathcal{F}(\delta)|_{\R}=1$.
\item On the other hand, $\delta^{(j)}(t)=b^{j+n}\psi^{(j)}(bt)$ and hence
Lem.~\ref{lem:strictDeltaNet}.\ref{enu:moderateStrictDeltaNet}
yields
\[
\left|\delta^{(j)}(t)\right|\le b^{j+n}Cb^{j+2}=Cb^{n+2}\left(b^{2}\right)^{j}\quad\forall t\in\RC{\rho}.
\]
Thus, Dirac's delta satisfies condition \eqref{eq:derPol} and hence
\begin{equation}
\exists Q\in\N:\ \mathcal{F}(\delta)\in\frontRise{\mathcal{D}}{\rho}\mathcal{GD}(\overline{B_{\diff{\rho}^{-Q}}(0)}).\label{eq:FTDeltaCmptSupp}
\end{equation}
In the following, we will use the notation $\mathbb{1}:=\mathcal{F}(\delta)$.
\item The previous result also yields that $f*\delta=f$ cannot hold in
general: otherwise, arguing as in \eqref{eq:deltaConvFT}, we would
get $\mathcal{F}(\delta)(\omega)=1$ for all $\omega\in\RC{\rho}$,
in contradiction with \eqref{eq:FTDeltaCmptSupp}.
\end{enumerate}
\end{rem}
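The behaviour of $\mathcal{F}(\delta)$ described in this remark, close to $1$ on finite frequencies yet eventually vanishing, has a transparent classical analogue: for $\delta_{b}(x)=b\psi(bx)$ one has $\mathcal{F}(\delta_{b})(\omega)=\widehat{\psi}(\omega/b)$, which is near $1$ for $|\omega|\ll b$ and negligible for $|\omega|\gg b$. A numerical sketch (the Gaussian $\psi$ and $b=100$ are illustrative assumptions):

```python
import numpy as np

def trapezoid(y, x):
    return np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x))

# F(delta_b)(omega) = psi_hat(omega/b); for Gaussian psi this is
# exp(-omega^2/(2 b^2)): a plateau at 1 for |omega| << b, decay for |omega| >> b.
b = 100.0
x = np.linspace(-1.0, 1.0, 400_001)
delta_b = b * np.exp(-(b * x)**2 / 2) / np.sqrt(2 * np.pi)
F_delta = lambda w: trapezoid(delta_b * np.exp(-1j * w * x), x)

near = abs(F_delta(1.0))       # omega finite: F(delta_b) ~ 1
far = abs(F_delta(2000.0))     # omega >> b:   F(delta_b) ~ 0
```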
Inequality \eqref{eq:R-Lineq} can also be stated as a general impossibility
theorem (where we intuitively think $n=1$).
\begin{thm}
\label{thm:R-Limp}Let $(R,\le)$ be an ordered ring and let $G$ be
an $R$-module. Assume that we have the following maps (for which we
use notation suggestive of the interpretation where $G$ is a space
of generalized functions):
\begin{align*}
(-)' & :G\longrightarrow G\\
\int & :G\longrightarrow R\\
(-)\cdot\exp_{\omega} & :G\longrightarrow G\quad\forall\omega\in R\\
|-| & :R\longrightarrow R.
\end{align*}
Assume that these maps satisfy the following integration by parts formula
\begin{equation}
\int f\cdot\exp_{\omega}=\frac{1}{\omega}\int f'\cdot\exp_{\omega}\label{eq:absIntParts}
\end{equation}
for all invertible $\omega\in R^{*}$, $f\in G$, and
\begin{equation}
|rs|=|r||s|\quad\forall r,s\in R\label{eq:absProd}
\end{equation}
\begin{equation}
\forall f\in G\,\exists C\in R\,\forall\omega\in R^{*}:\ \left|\int f\cdot\exp_{\omega}\right|\le C.\label{eq:boundInt}
\end{equation}
Then for all $f\in G$ and all $N\in\N_{>0}$ there exists $C=C(f,N)\in R$
such that
\begin{equation}
\forall\omega\in R^{*}:\left|\int f\cdot\exp_{\omega}\right|\le\frac{C}{|\omega|^{N}}.\label{eq:absRL}
\end{equation}
Therefore, if $\delta\in G$ satisfies $\frac{C(\delta,N)}{|\omega|^{N}}<1$
for some $\omega\in R$ and some $N\in\N$, then
\[
\left|\int\delta\cdot\exp_{\omega}\right|<1.
\]
\end{thm}
\begin{proof}
For $f\in G$, we recursively define $f^{(p)}\in G$ in the usual way
using the map $(-)':G\longrightarrow G$. Applying formula \eqref{eq:absIntParts}
$N\in\N_{>0}$ times, we get $\int f\cdot\exp_{\omega}=\frac{1}{\omega^{N}}\int f^{(N)}\cdot\exp_{\omega}$.
Applying $|-|$ and using \eqref{eq:absProd} and \eqref{eq:boundInt}
we get the conclusion \eqref{eq:absRL}.
\end{proof}
\noindent Note that we can take $R=\left\{ i\cdot r\mid r\in\frontRise{\mathbb{C}}{\rho}\widetilde{\mathbb{C}}\right\} $
to apply this abstract result to the case of Lem.~\ref{lem:Rieman-Lebesgue}.
This result also underscores that in the case $G=\mathcal{D}'(\R)$,
$R=\R$, we cannot have an integration by parts formula such as \eqref{eq:absIntParts}.
It also underscores that, since \eqref{eq:absIntParts} holds in our
setting, we cannot have $f*\delta=f$ without limitations, because
this would imply $\mathcal{F}(\delta)(\omega)=1$ for all $\omega\in\RC{\rho}$.
\begin{example}
\label{exa:exp}Let $f\left(x\right)=e^{x}$ for all $\left|x\right|\leq k$,
where $k:=-\log\left(\diff\rho\right)$. The hyperfinite Fourier transform
$\mathcal{F}_{k}$ of $f$ is
\begin{align*}
\mathcal{F}_{k}\left(f\right)\left(\omega\right) & =\frac{e^{k\left(1-i\omega\right)}-e^{-k\left(1-i\omega\right)}}{1-i\omega}=\frac{\diff{\rho}^{i\omega-1}-\diff{\rho}^{1-i\omega}}{1-i\omega}=\\
 & =\frac{1}{1-i\omega}\left(\frac{\diff{\rho}^{i\omega}}{\diff{\rho}}-\frac{\diff{\rho}}{\diff{\rho}^{i\omega}}\right)\quad\forall\omega\in\RC{\rho}.
\end{align*}
Note that $1-i\omega$, $\omega\in\RC{\rho}$, is always invertible with
the usual inverse $\frac{1+i\omega}{1+\omega^{2}}$, moreover, $\diff{\rho}^{i\omega}=e^{i\omega\log\diff{\rho}}$
and hence $|\diff{\rho}^{i\omega}|=1$. Therefore, $\mathcal{F}_{k}(f)(\omega)$
is always an infinite complex number for all finite numbers $\omega$.
If $\omega\ge\diff{\rho}^{-1-r}$, $r\in\R_{>0}$, then $\mathcal{F}_{k}(f)(\omega)$
is infinitesimal but not zero. Clearly, $f\notin\frontRise{\mathcal{D}}{\rho}\mathcal{GD}(K)$.
\end{example}
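For a finite stand-in of the radius $k$ the closed form used in this example can be checked numerically; the values $k=3$ and $\omega=2$ below are illustrative assumptions.

```python
import numpy as np

def trapezoid(y, x):
    return np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x))

# Check: int_{-k}^{k} e^x e^{-i*omega*x} dx
#        = (e^{k(1-i*omega)} - e^{-k(1-i*omega)}) / (1 - i*omega)
k, omega = 3.0, 2.0
x = np.linspace(-k, k, 200_001)
numeric = trapezoid(np.exp(x) * np.exp(-1j * omega * x), x)
closed = (np.exp(k * (1 - 1j * omega)) - np.exp(-k * (1 - 1j * omega))) / (1 - 1j * omega)
err = abs(numeric - closed)
```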
\section{Elementary properties of the hyperfinite Fourier transform\label{sec:Elementary-properties}}
In this section, we list and prove the elementary properties of the
HFT.
\begin{thm}
\label{thm:thmProperties}(see Sec.~\ref{subsec:Embedding} for the
notations $\odot$ and $\oplus$) Let $f\in\frontRise{\mathcal{G}}{\rho}\mathcal{GC}^{\infty}\left(K\right)$ and
$g:\RC{\rho}^{n}\longrightarrow\frontRise{\mathbb{C}}{\rho}\widetilde{\mathbb{C}}$, then
\begin{enumerate}
\item \label{enu:prop1}$\mathcal{F}_{k}\left(f+g\right)=\mathcal{F}_{k}\left(f\right)+\mathcal{F}_{k}\left(g\right)$
if $g\in\frontRise{\mathcal{G}}{\rho}\mathcal{GC}^{\infty}(K)$.
\item \label{enu:prop2}$\mathcal{F}_{k}\left(bf\right)=b\mathcal{F}_{k}\left(f\right)$
for all $b\in\frontRise{\mathbb{C}}{\rho}\widetilde{\mathbb{C}}$.
\item \label{enu:prop3}$\mathcal{F}_{k}\left(\overline{f}\right)=\overline{-1\diamond\mathcal{F}_{k}(f)}$,
where $-1\diamond f$ is the \emph{reflection} of $f$, i.e.~$\left(-1\diamond f\right)\left(x\right):=f\left(-x\right)$.
\item \label{enu:prop4}$\mathcal{F}_{k}\left(-1\diamond f\right)=-1\diamond\mathcal{F}_{k}(f)$.
\item \label{enu:prop5}$\mathcal{F}_{k}\left(t\diamond g\right)=t\odot\mathcal{F}_{tk}\left(g\right)$
for all $t\in\RC{\rho}_{>0}$ such that $tk$ is still infinite and $g|_{K}\in\frontRise{\mathcal{G}}{\rho}\mathcal{GC}^{\infty}(K)$,
$g|_{tK}\in\frontRise{\mathcal{G}}{\rho}\mathcal{GC}^{\infty}(tK)$. Here, $t\diamond g$ is the \emph{dilation}
of $g$, i.e.~$\left(t\diamond g\right)\left(x\right):=g\left(tx\right)$.
\item \label{enu:prop6}Let $k>h>0$ be infinite numbers, $s\in[-(k-h),k-h]^{n}$,
$f\in\frontRise{\mathcal{D}}{\rho}\mathcal{GD}([-h,h]^{n})$. Then
\[
\mathcal{F}_{k}\left(s\oplus f\right)=e^{-is\cdot\left(-\right)}\mathcal{F}_{k}\left(f\right)=e^{-is\cdot\left(-\right)}\mathcal{F}_{h}\left(f\right)=e^{-is\cdot\left(-\right)}\mathcal{F}\left(f\right).
\]
In particular, if $h\ge\diff{\rho}^{-p}$, $k\ge\diff{\rho}^{-q}$,
$p$, $q\in\R_{>0}$, $q>p$, and $s\in\csp{\R^{n}}$, then $s\in[-(k-h),k-h]^{n}$;
indeed, $\R^{n}\subseteq[-(k-h),k-h]^{n}$.
\item \label{enu:prop7}$\mathcal{F}_{k}\left(e^{is\cdot\left(-\right)}f\right)=s\oplus\mathcal{F}_{k}\left(f\right)$
for all $s\in\RC{\rho}^{n}$.
\item \label{enu:prop8}Let $\omega\in\RC{\rho}^{n}$ and $\alpha\in\N^{n}\setminus\{0\}$.
For $p=1,\ldots,|\alpha|$, define $\beta_{p}=\left(\beta_{p,q}\right)_{q=1,\ldots,n}\in\N^{n}$
with
\begin{align*}
\beta_{0} & :=\alpha\\
\beta_{p+1} & :=(\underbrace{0,\ldots,0}_{j_{p}-1},\beta_{p,j_{p}}-1,\beta_{p,j_{p}+1},\ldots,\beta_{p,n})\text{ if }j_{p}:=\min\left\{ q\mid\beta_{p,q}>0\right\} .
\end{align*}
Finally, for all $\bar{f}\in\frontRise{\mathcal{G}}{\rho}\mathcal{GC}^{\infty}(K)$ and $j=1,\ldots,n$, set
\begin{align*}
\Delta_{1k}\bar{f}(\omega):= & \left[\bar{f}(x)e^{-ix\cdot\omega}\right]_{x_{j}=-k}^{x_{j}=k}\\
\Delta_{jk}\bar{f}(\omega):= & \intop_{-k}^{k}\,\diff{x_{1}}\ldots\intop_{-k}^{k}\,\diff{x_{j-1}}\intop_{-k}^{k}\,\diff{x_{j+1}}\ldots\intop_{-k}^{k}\left[\bar{f}(x)e^{-ix\cdot\omega}\right]_{x_{j}=-k}^{x_{j}=k}\,\diff{x_{n}}.
\end{align*}
Then, we have
\begin{align}
\mathcal{F}_{k}\left(\partial_{j}f\right) & =i\omega_{j}\mathcal{F}_{k}\left(f\right)+\Delta_{jk}f\quad\forall j=1,\ldots,n\label{eq:DerRule1}\\
\mathcal{F}_{k}\left(\partial^{\alpha}f\right) & =\left(i\omega\right)^{\alpha}\mathcal{F}_{k}\left(f\right)+\sum_{p=0}^{|\alpha|-1}(i\omega)^{\alpha-\beta_{p}}\Delta_{j_{p}k}(\partial^{\beta_{p+1}}f).\label{eq:DerRule}
\end{align}
In particular, if
\begin{equation}
f\left(x_{1},\ldots,x_{j-1},k,x_{j+1},\ldots,x_{n}\right)=f\left(x_{1},\ldots,x_{j-1},-k,x_{j+1},\ldots,x_{n}\right)=0\quad\forall x\in K,\label{eq:Hpk-k0}
\end{equation}
then
\[
\mathcal{F}_{k}\left(\partial_{j}f\right)=i\omega_{j}\mathcal{F}_{k}\left(f\right).
\]
\item \label{enu:prop9}$\frac{\partial}{\partial\omega_{j}}\mathcal{F}_{k}\left(f\right)=-i\mathcal{F}_{k}\left(x_{j}f\right)$
for all $j=1,\ldots,n$.
\item \label{enu:prop10}If $f\in\frontRise{\mathcal{D}}{\rho}\mathcal{GD}(K)$ or $g\in\frontRise{\mathcal{D}}{\rho}\mathcal{GD}(K)$, then $\mathcal{F}_{k}\left(f\ast g\right)=\mathcal{F}_{k}\left(f\right)\mathcal{F}_{k}\left(g\right)$.
Therefore, if $f\in\frontRise{\mathcal{D}}{\rho}\mathcal{GD}(\RC{\rho}^{n})$ and $g\in\frontRise{\mathcal{D}}{\rho}\mathcal{GD}(\RC{\rho}^{n})$, then
$\mathcal{F}\left(f\ast g\right)=\mathcal{F}\left(f\right)\mathcal{F}\left(g\right)$.
\item \label{enu:prop11}$\mathcal{F}_{k}\left(s\odot g\right)=s\diamond\mathcal{F}_{\frac{k}{s}}\left(g\right)$
for all invertible $s\in\RC{\rho}_{>0}$ such that $\frac{k}{s}$ is infinite,
$g|_{K}\in\frontRise{\mathcal{G}}{\rho}\mathcal{GC}^{\infty}(K)$ and $g|_{K/s}\in\frontRise{\mathcal{G}}{\rho}\mathcal{GC}^{\infty}(K/s)$.
\end{enumerate}
\end{thm}
\begin{proof}
Properties \ref{enu:prop1}-\ref{enu:prop5} can be proved as in the
case of rapidly decreasing smooth functions. For \ref{enu:prop6},
we have
we have
\begin{align*}
\mathcal{F}_{k}\left(s\oplus f\right)\left(\omega\right) & =\mathcal{F}_{k}\left(f\left(x-s\right)\right)\left(\omega\right)=\intop_{K}f\left(x-s\right)e^{-ix\cdot\omega}\,\diff{x}=\\
& =\intop_{-k}^{k}\,\diff{x_{1}}\ldots\intop_{-k}^{k}f\left(x-s\right)e^{-ix\cdotp\omega}\,\diff{x_{n}}.
\end{align*}
Considering the change of variable $x-s=u$ we have
\[
\mathcal{F}_{k}\left(s\oplus f\right)\left(\omega\right)=e^{-is\cdot\omega}\intop_{-k-s_{1}}^{k-s_{1}}\,\diff{u_{1}}\ldots\intop_{-k-s_{n}}^{k-s_{n}}f\left(u\right)e^{-iu\cdotp\omega}\,\diff{u_{n}}.
\]
Finally, considering that $k>h$ and $s\in[-k+h,k-h]^{n}$ we have
$k-s_{i}\ge h$, $-h\ge-k-s_{i}$ and $k+s_{i}\ge h$ for all $i=1,\ldots,n$,
so that
\begin{align*}
\intop_{-k-s_{1}}^{k-s_{1}}\,\diff{u_{1}}\ldots\intop_{-k-s_{n}}^{k-s_{n}}f\left(u\right)e^{-iu\cdotp\omega}\,\diff{u_{n}} & =\intop_{-h}^{h}\,\diff{u_{1}}\ldots\intop_{-h}^{h}f\left(u\right)e^{-iu\cdotp\omega}\,\diff{u_{n}}=\\
& =\intop_{-k}^{k}\,\diff{u_{1}}\ldots\intop_{-k}^{k}f\left(u\right)e^{-iu\cdotp\omega}\,\diff{u_{n}}
\end{align*}
from Def.~\ref{def:intCmpSupp} since $f\in\frontRise{\mathcal{D}}{\rho}\mathcal{GD}([-h,h]^{n})$.
\ref{enu:prop7} is immediate from Def.~\ref{def:HyperfiniteFT}.
To prove \ref{enu:prop8}, using the integration by parts formula,
we have
\begin{align*}
\mathcal{F}_{k}\left(\partial_{j}f\right)\left(\omega\right) & =\intop_{K}\partial_{j}f\left(x\right)e^{-ix\cdot\omega}\,\diff{x}=\intop_{-k}^{k}\,\diff{x_{1}}\ldots\intop_{-k}^{k}\partial_{j}f\left(x\right)e^{-ix\cdotp\omega}\,\diff{x_{n}}=\\
& =-\intop_{-k}^{k}\,\diff{x_{1}}\ldots\intop_{-k}^{k}f\left(x\right)\left(-i\omega_{j}\right)e^{-ix\cdotp\omega}\,\diff{x_{n}}+\\
& \phantom{=}+\intop_{-k}^{k}\,\diff{x_{1}}\ldots\intop_{-k}^{k}\,\diff{x_{j-1}}\intop_{-k}^{k}\,\diff{x_{j+1}}\ldots\intop_{-k}^{k}\left[f(x)e^{-ix\cdot\omega}\right]_{x_{j}=-k}^{x_{j}=k}\,\diff{x_{n}}=\\
& =i\omega_{j}\mathcal{F}_{k}\left(f\right)\left(\omega\right)+\Delta_{jk}f(\omega).
\end{align*}
Therefore, by applying this formula with $\partial_{p}f$ instead
of $f$, we obtain
\[
\mathcal{F}_{k}\left(\partial_{j}\partial_{p}f\right)(\omega)=-\omega_{j}\omega_{p}\mathcal{F}_{k}(f)(\omega)+i\omega_{j}\Delta_{pk}\left(f\right)(\omega)+\Delta_{jk}\left(\partial_{p}f\right)(\omega).
\]
Proceeding similarly by induction on $|\alpha|$, we can prove the
general claim.
To prove \ref{enu:prop9}, we use Thm.~\ref{thm:intRules}.\ref{enu:derUnderInt},
i.e.~differentiation under the integral sign:
\begin{align*}
\frac{\partial}{\partial\omega_{j}}\mathcal{F}_{k}\left(f\right)\left(\omega\right) & =\frac{\partial}{\partial\omega_{j}}\left(\intop_{-k}^{k}\,\diff{x_{1}}\ldots\intop_{-k}^{k}f\left(x\right)e^{-ix\cdotp\omega}\,\diff{x_{n}}\right)=\\
& =\intop_{-k}^{k}\,\diff{x_{1}}\ldots\intop_{-k}^{k}\frac{\partial}{\partial\omega_{j}}\left(f\left(x\right)e^{-ix\cdotp\omega}\right)\,\diff{x_{n}}=\\
& =\intop_{-k}^{k}\,\diff{x_{1}}\ldots\intop_{-k}^{k}-ix_{j}f\left(x\right)e^{-ix\cdotp\omega}\,\diff{x_{n}}=\\
& =-i\mathcal{F}_{k}\left(x_{j}f\right)\left(\omega\right).
\end{align*}
We next prove \ref{enu:prop10}:
\begin{align*}
\mathcal{F}_{k}\left(f\ast g\right)\left(\omega\right) & =\intop_{K}e^{-ix\omega}\left(f\ast g\right)\left(x\right)\,\diff{x}=\\
& =\intop_{K}e^{-ix\omega}\intop_{K}f\left(y\right)g\left(x-y\right)\,\diff{y}\,\diff{x}.
\end{align*}
Considering the change of variable $x-y=t$ and using Fubini's theorem,
we have
\begin{align*}
\intop_{K}e^{-i\left(t+y\right)\omega}\intop_{K}f\left(y\right)g\left(t\right)\,\diff{y}\,\diff{t} & =\intop_{K}e^{-iy\omega}f\left(y\right)\,\diff{y}\intop_{K}e^{-it\omega}g\left(t\right)\,\diff{t}=\\
& =\mathcal{F}_{k}\left(f\right)\left(\omega\right)\mathcal{F}_{k}\left(g\right)\left(\omega\right).
\end{align*}
Finally, we prove \ref{enu:prop11}:
\[
\mathcal{F}_{k}\left(s\odot g\right)\left(\omega\right)=\mathcal{F}_{k}\left(\frac{1}{s^{n}}g\left(\frac{x}{s}\right)\right)\left(\omega\right)=\intop_{K}e^{-ix\cdot\omega}g\left(\frac{x}{s}\right)\,\frac{\diff{x}}{s^{n}}.
\]
\noindent Considering the change of variable $\frac{x}{s}=y$ we have
\begin{align*}
\intop_{K}e^{-ix\cdot\omega}g\left(\frac{x}{s}\right)\,\frac{\diff{x}}{s^{n}} & =\intop_{-k/s}^{k/s}\,\diff{y_{1}}\ldots\intop_{-k/s}^{k/s}g\left(y\right)e^{-isy\cdotp\omega}\,\diff{y_{n}}=\\
& =\intop_{K/s}g\left(y\right)e^{-iy\cdotp s\omega}\,\diff{y}=\mathcal{F}_{k/s}\left(g\right)\left(s\omega\right)=\\
& =\left[s\diamond\mathcal{F}_{k/s}\left(g\right)\right](\omega).
\end{align*}
\end{proof}
We will see in Sec.~\ref{sec:Examples-and-applications} that the
additional term in \eqref{eq:DerRule} plays an important role in
finding \emph{non-tempered} solutions of differential equations (like
the exponential solutions of the trivial ODE $y'=y$). We also note that condition
\eqref{eq:Hpk-k0} is clearly weaker than requiring that $f$ be compactly supported.
For example, setting
\[
l_{j}(x):=\frac{1}{2k}\left[f\left(x\right)|_{x_{j}=k}-f\left(x\right)|_{x_{j}=-k}\right]\cdot(x_{j}+k)+f\left(x\right)|_{x_{j}=-k},
\]
then $\bar{f}:=f-l_{j}$ satisfies \eqref{eq:Hpk-k0}.
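To make the boundary correction concrete, the following is a small numerical sketch of ours in the classical finite-$k$, one-dimensional setting (the function $f$ below is an arbitrary choice): subtracting the interpolant $l_{j}$ yields a function that vanishes at $x_{j}=\pm k$, so that condition \eqref{eq:Hpk-k0} holds.

```python
import numpy as np

# Classical finite-k illustration (names and test function are ours):
# subtracting the linear interpolant of the boundary values makes
# f_bar = f - l vanish at x = +-k, so its boundary term disappears.
k = 5.0

def f(x):
    return np.exp(x / 3.0) + x**2        # does not vanish at x = +-k

def l(x):
    # the interpolant l_j from the text, written in one dimension
    return (f(k) - f(-k)) / (2 * k) * (x + k) + f(-k)

def f_bar(x):
    return f(x) - l(x)

print(f_bar(-k), f_bar(k))               # both vanish (up to rounding)
```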
\section{\label{sec:The-inverse-hyperfinite}The inverse hyperfinite Fourier
transform, Parseval's relation, Plancherel's identity and the uncertainty
principle}
We naturally define the inverse HFT as follows:
\begin{defn}
Let $f\in\frontRise{\mathcal{G}}{\rho}\mathcal{GC}^{\infty}\left(K\right)$. We define the \emph{inverse} HFT as
\begin{equation}
\mathcal{F}_{k}^{-1}\left(f\right)(x):=\frac{1}{\left(2\pi\right)^{n}}\intop_{K}f\left(\omega\right)e^{ix\cdot\omega}\,\diff{\omega}\label{eq:InverseFT}
\end{equation}
\noindent for all $x\in\RC{\rho}$. As we proved in Thm.~\ref{thm:base},
we have $\mathcal{F}_{k}^{-1}:\frontRise{\mathcal{G}}{\rho}\mathcal{GC}^{\infty}\left(K\right)\longrightarrow\frontRise{\mathcal{G}}{\rho}\mathcal{GC}^{\infty}\left(\RC{\rho}^{n}\right)$.
We immediately note that the notation of the inverse function $\mathcal{F}_{k}^{-1}$
is an abuse of language because the codomain of $\mathcal{F}_{k}$
is larger than the domain of $\mathcal{F}_{k}^{-1}$ (and vice versa).
When dealing with inversion properties, it is hence better to think
of
\begin{align*}
\mathcal{F}_{k}|_{K} & :=(-)|_{K}\circ\mathcal{F}_{k}:\frontRise{\mathcal{G}}{\rho}\mathcal{GC}^{\infty}\left(K\right)\longrightarrow\frontRise{\mathcal{G}}{\rho}\mathcal{GC}^{\infty}\left(K\right)\\
\mathcal{F}_{k}^{-1}|_{K} & :=(-)|_{K}\circ\mathcal{F}_{k}^{-1}:\frontRise{\mathcal{G}}{\rho}\mathcal{GC}^{\infty}\left(K\right)\longrightarrow\frontRise{\mathcal{G}}{\rho}\mathcal{GC}^{\infty}\left(K\right).
\end{align*}
We will see in Sec.~\ref{sec:Examples-and-applications} that overlooking
this distinction can easily lead to inconsistencies.
Note that
\begin{equation}
\left(2\pi\right)^{n}\mathcal{F}_{k}^{-1}(f)=\mathcal{F}_{k}\left(-1\diamond f\right)=-1\diamond\mathcal{F}_{k}(f),\label{eq:reflection}
\end{equation}
where $-1\diamond$ denotes the reflection $\left(-1\diamond g\right)(x):=g(-x)$.
\end{defn}
Our main goal is clearly to investigate the relationship between HFT
and its inverse HFT, i.e.~to prove the Fourier inversion theorem
for the HFT. Three important ingredients used in the classical proof of
the Fourier inversion theorem are: the application of approximate
identities for convolution defined by Gaussian-like functions, the Lebesgue
dominated convergence theorem (which we can replace with Thm\@.~\ref{thm:contResult}),
and the translation property of the FT. In our setting, the last property
corresponds to Thm.~\ref{thm:thmProperties}.\ref{enu:prop6}, which
works only for compactly supported GSF. The idea is hence not to prove
the inversion theorem first at the origin and then employ the
translation property, but to prove it directly at an arbitrary
interior point $y\in K$ using approximate identities obtained by
mollification of a Gaussian function.
We hence start with the latter, explicitly noting that, contrary to
the usual setting, over the Robinson-Colombeau generalized numbers
the Gaussian is compactly supported:
\begin{lem}
\label{lem:example}Let $f\left(x\right)=e^{-\frac{\left|x\right|^{2}}{2}}$
for all $x\in\RC{\rho}^{n}$. Then $f\in\frontRise{\mathcal{D}}{\rho}\mathcal{GD}\left(\overline{B_{h}(0)}\right)$
for every strongly infinite number $h\in\RC{\rho}_{>0}$. Moreover, $\mathcal{F}\left(f\right)\left(\omega\right)=\left(2\pi\right)^{\frac{n}{2}}e^{-\frac{\left|\omega\right|^{2}}{2}}$.
\end{lem}
\begin{proof}
The function $f$ satisfies the inequality $0\leq f\left(x\right)\leq\left|x\right|^{-q}$,
$\forall q\in\mathbb{N}$, for $\left|x\right|$ finite and sufficiently
large. Therefore, for all strongly infinite $x$, we have $f\left(x\right)=0$,
i.e.~$f\in\frontRise{\mathcal{D}}{\rho}\mathcal{GD}\left(\RC{\rho}^{n}\right)$. We first prove the second
claim in dimension $n=1$, by noting that $f\left(x\right)=e^{-\frac{\left|x\right|^{2}}{2}}$
satisfies the ODE
\begin{equation}
f'\left(x\right)+xf\left(x\right)=0\label{eq:1dimODE}
\end{equation}
\noindent with the initial value $f\left(0\right)=1$. By separation
of variables, any solution of \eqref{eq:1dimODE} is of the form $f\left(x\right)=c\cdot e^{-\frac{x^{2}}{2}}$,
where $c=f\left(0\right).$ Applying the HFT to \eqref{eq:1dimODE},
and considering \ref{enu:prop8} and \ref{enu:prop9} of Thm.~\ref{thm:thmProperties},
we have
\[
i\omega\mathcal{F}\left(f\right)\left(\omega\right)+i\mathcal{F}\left(f\right)'\left(\omega\right)=0.
\]
\noindent Thus $\mathcal{F}\left(f\right)$ also solves the ODE \eqref{eq:1dimODE}.
Therefore we must have $\mathcal{F}\left(f\right)\left(\omega\right)=ce^{-\frac{\omega^{2}}{2}}$
and, taking as $h=[h_{\eps}]$ any strong infinite number so that
$\mathcal{F}_{h}(f)=\mathcal{F}(f)$, we have
\begin{align*}
c=\mathcal{F}\left(f\right)\left(0\right) & =\intop_{-h}^{h}e^{-\frac{x^{2}}{2}}\,\diff{x}=\left[\intop_{-h_{\eps}}^{h_{\eps}}e^{-\frac{x^{2}}{2}}\,\diff{x}\right]=\\
 & =\left[\intop_{-\infty}^{\infty}e^{-\frac{x^{2}}{2}}\,\diff{x}-\intop_{-\infty}^{-h_{\eps}}e^{-\frac{x^{2}}{2}}\,\diff{x}-\intop_{h_{\eps}}^{\infty}e^{-\frac{x^{2}}{2}}\,\diff{x}\right]=\sqrt{2\pi}
\end{align*}
\noindent since $\intop_{-\infty}^{-h_{\eps}}e^{-\frac{x^{2}}{2}}\,\diff{x}$
and $\intop_{h_{\eps}}^{\infty}e^{-\frac{x^{2}}{2}}\,\diff{x}$ are
negligible: in fact, using L'Hôpital's rule we can prove that $\lim_{y\to0^{+}}\frac{\intop_{\pm1/y}^{\pm\infty}e^{-\frac{x^{2}}{2}}\,\diff{x}}{y^{q}}=0$
for all $q\in\N$. We note that, naturally, the equality above holds in
$\RC{\rho}$. In dimension $n>1$ we directly calculate using Fubini's
theorem:
\begin{alignat*}{1}
\mathcal{F}\left(e^{-\frac{\left|x\right|^{2}}{2}}\right)\left(\omega\right) & =\prod_{j=1}^{n}\intop e^{-ix_{j}\cdot\omega_{j}}e^{-\frac{x_{j}^{2}}{2}}\,\diff{x_{j}}\\
& =\prod_{j=1}^{n}\mathcal{F}\left(f\right)\left(\omega_{j}\right)=\prod_{j=1}^{n}\left(2\pi\right)^{\frac{1}{2}}e^{-\frac{\omega_{j}^{2}}{2}}=\left(2\pi\right)^{\frac{n}{2}}e^{-\frac{\left|\omega\right|^{2}}{2}}.
\end{alignat*}
\end{proof}
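The tail estimate used in the proof can be illustrated numerically in the classical setting: already for a moderately large finite truncation parameter, the truncated Fourier integral of the Gaussian is indistinguishable from $\sqrt{2\pi}\,e^{-\omega^{2}/2}$. The following is an informal sketch of ours; all numerical parameters are arbitrary choices.

```python
import numpy as np

# Classical finite-k sketch of the lemma: the truncated Fourier integral
# of exp(-x^2/2) over [-k, k] already equals sqrt(2*pi)*exp(-w^2/2),
# because the tails beyond +-k are negligible.
k = 30.0
x = np.linspace(-k, k, 200001)
dx = x[1] - x[0]
gauss = np.exp(-x**2 / 2)

def trunc_ft(w):
    # Riemann sum for the truncated Fourier transform of the Gaussian;
    # the integrand vanishes at the endpoints, so this equals the
    # trapezoidal rule, which is spectrally accurate here
    return np.sum(gauss * np.exp(-1j * x * w)) * dx

for w in (0.0, 0.5, 1.0, 2.0):
    print(w, abs(trunc_ft(w) - np.sqrt(2 * np.pi) * np.exp(-w**2 / 2)))
```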
\noindent In the following result, we use the notation $\forall^{\infty}p\in\hyperN{\rho}$
to denote $\exists P\in\hyperN{\rho}\,\forall p\in\hyperN{\rho}_{\ge P}$, and we
read it \emph{for all $p\in\hyperN{\rho}$ sufficiently large}.
\begin{thm}
\label{thm:approxId} Let $y$ be a sharply interior point of $K$,
$f\in\frontRise{\mathcal{G}}{\rho}\mathcal{GC}^{\infty}\left(K\right)$, $G_{p}\in\frontRise{\mathcal{D}}{\rho}\mathcal{GD}\left(\RC{\rho}^{n}\right)$,
for $p\in\hyperN{\rho}_{>0}$, satisfy
\begin{enumerate}[label=(\alph*)]
\item \label{enu:int1}$\intop G_{p}=1$ for $p\in\hyperN{\rho}_{>0}$ sufficiently
large.
\item \label{enu:G_tZero}For $p$ sufficiently large, $\left(G_{p}\right)_{p\in\hyperN{\rho}_{>0}}$
is zero outside every ball $B_{\delta}\left(0\right)$, $\delta\in\RC{\rho}_{>0}$,
i.e.,
\begin{equation}
\forall\delta\in\RC{\rho}_{>0}\,\forall^{\infty}p\in\hyperN{\rho}\,\forall x:\ \left|x\right|\geq\delta\Rightarrow G_{p}\left(x\right)=0.\label{eq:mid-condition}
\end{equation}
\item \label{enu:delta_bd}$\exists M\in\RC{\rho}_{>0}\,\forall^{\infty}p\in\hyperN{\rho}:\ \intop\left|G_{p}\left(x\right)\right|\,\diff{x}\leq M.$
\end{enumerate}
Then
\begin{enumerate}
\item \label{enu:cond1}$\intop_{y+K}f\left(x\right)G_{p}\left(y-x\right)\,\diff{x}=\intop f\left(x\right)G_{p}\left(y-x\right)\,\diff{x}=\intop f\left(y-z\right)G_{p}\left(z\right)\,\diff{z}$
for all $p\in\hyperN{\rho}_{>0}$ sufficiently large.
\item \label{enu:cond2}$\hyperlimarg{\rho}{\rho}{p}\left(f*G_{p}\right)(y)=f\left(y\right)$.
\end{enumerate}
\end{thm}
\begin{proof}
We only have to generalize the classical proof concerning limits of
convolutions between continuous functions and approximate identities.
Since $B_{\delta}(y)\subseteq K$ for some $\delta\in\RC{\rho}_{>0}$,
we have that $\text{supp}\left(G_{p}(y-\cdot)\right)\subseteq y+K$,
for $p$ large, by condition \eqref{eq:mid-condition}. The remaining
equality of property \ref{enu:cond1} is immediate by considering
the change of variable $y-x=z$. For \ref{enu:cond2}, we proceed
as follows. Using \ref{enu:int1}, for $p$ large, let us say for
$p\ge P\in\hyperN{\rho}_{>0}$, we get
\begin{align*}
\left|\int f(x)G_{p}(y-x)\diff{x}-f(y)\right| & =\left|\int\left[f(x)-f(y)\right]G_{p}(y-x)\,\diff{x}\right|\\
& \le\int\left|f(x)-f(y)\right|\cdot\left|G_{p}(y-x)\right|\,\diff{x}.
\end{align*}
Now, for each $l\in\RC{\rho}_{>0}$, sharp continuity of $f$ at $y$
yields a $\delta\in\RC{\rho}_{>0}$ such that $\left|f(x)-f(y)\right|<l$
for all $x$ with $|x-y|<\delta$. By \ref{enu:G_tZero}, for $p$ large, we have
\begin{equation}
\left|\int f(x)G_{p}(y-x)\,\diff{x}-f(y)\right|\le l\int_{\overline{B_{\delta}(0)}}\left|G_{p}(y-x)\right|\,\diff{x}\le l\cdot M,\label{eq:rTimesIntegG}
\end{equation}
where we have taken $p$ sufficiently large so that also \ref{enu:delta_bd}
holds. The right hand side of \eqref{eq:rTimesIntegG} can be taken
arbitrarily small in $\RC{\rho}_{>0}$ because $l\in\RC{\rho}_{>0}$ is an
arbitrary positive generalized number.
\end{proof}
\noindent For example, we can set $t_{p}:=\frac{1}{p}$ for $p\in\hyperN{\rho}_{>0}$,
$g(z):=e^{-|z|^{2}/2}$ and $G_{p}:=(2\pi)^{-n/2}\,t_{p}\odot g$, so that $G_{p}(z)=(2\pi)^{-n/2}\cdot e^{-\frac{|z|^{2}}{2t_{p}^{2}}}\cdot\frac{1}{t_{p}^{n}}$
for all $z\in\RC{\rho}^{n}$. Let $\delta\ge\diff{\rho}^{Q}$ and $|z|\ge\delta$;
for all $q\in\N$ and for $p\in\hyperN{\rho}$ sufficiently large, we have
\[
e^{-\frac{|z|^{2}}{2t_{p}^{2}}}\cdot\frac{1}{t_{p}^{n}}\le p^{n}e^{-\frac{1}{2}\delta{}^{2}p^{2}}\le p^{n}\frac{1}{\delta^{2q}p^{2q}}\le\diff{\rho}^{q},
\]
where the latter inequality holds e.g.~for all $p\ge\diff{\rho}^{-Q}$.
This shows that property \eqref{eq:mid-condition} holds in this case.
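The convergence $(f*G_{p})(y)\to f(y)$ of Thm.~\ref{thm:approxId} can also be illustrated numerically in the classical one-dimensional setting. The following is an informal sketch of ours; the test function, evaluation point and step sizes are arbitrary choices.

```python
import numpy as np

# Classical sketch (n = 1) of the approximate identity above: with the
# normalized Gaussian G_t(z) = (2*pi)**(-1/2) * t**(-1) * exp(-z**2/(2 t**2)),
# the convolution (f * G_t)(y) tends to f(y) as t -> 0.
def conv_at(f, y, t, half_width=10.0, m=200001):
    z = np.linspace(-half_width, half_width, m)
    dz = z[1] - z[0]
    G = np.exp(-z**2 / (2 * t**2)) / (t * np.sqrt(2 * np.pi))
    return np.sum(f(y - z) * G) * dz          # Riemann sum of the convolution

f = lambda x: np.cos(x) * np.exp(-x**2 / 8)   # an arbitrary smooth test function
y = 0.7
errs = [abs(conv_at(f, y, t) - f(y)) for t in (0.5, 0.1, 0.02)]
print(errs)                                    # decreasing as t -> 0
```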
\begin{thm}
\label{thm:MainPropertiesHFT}Let $h\in\RC{\rho}_{>0}$ be an infinite
number and set $H:=\left[-h,h\right]^{n}$. Let $f\in\frontRise{\mathcal{G}}{\rho}\mathcal{GC}^{\infty}\left(K\right)$
and $g\in\frontRise{\mathcal{G}}{\rho}\mathcal{GC}^{\infty}\left(H\right)$. Then
\begin{enumerate}
\item \label{enu:IntegralProp}$\intop_{H}\mathcal{F}_{k}\left(f\right)\left(\omega\right)g\left(\omega\right)\,\diff{\omega}=\intop_{K}f\left(x\right)\mathcal{F}_{h}\left(g\right)\left(x\right)\,\diff{x}$.
\item \label{enu:HFFIT}$\mathcal{F}_{k}^{-1}|_{K}\left(\mathcal{F}_{k}|_{K}\left(f\right)\right)=f=\mathcal{F}_{k}|_{K}\left(\mathcal{F}_{k}^{-1}|_{K}\left(f\right)\right)$.
\item \label{enu:hom}$\mathcal{F}_{k}|_{K}:\frontRise{\mathcal{G}}{\rho}\mathcal{GC}^{\infty}(K)\longrightarrow\frontRise{\mathcal{G}}{\rho}\mathcal{GC}^{\infty}(K)$ is a homeomorphism.
\item \label{enu:FF}$\mathcal{F}_{k}|_{K}(\mathcal{F}_{k}|_{K}(f))=(2\pi)^{n}\left(-1\diamond f\right)$
and $\mathcal{F}_{k}^{-1}|_{K}(\mathcal{F}_{k}^{-1}|_{K}(f))=(2\pi)^{-n}\left(-1\diamond f\right)$,
where we recall that $-1\diamond f$ is the reflection of $f$.
\end{enumerate}
If $H=K$, then
\begin{enumerate}[resume]
\item \label{enu:Parseval's-relation}(Parseval's relation) $(2\pi)^{n}\intop_{K}f\overline{g}=\intop_{K}\mathcal{F}_{k}\left(f\right)\overline{\mathcal{F}_{k}\left(g\right)}$.
\item \label{enu:Plancherel=002019s-identity}(Plancherel’s identity) $(2\pi)^{n}\intop_{K}\left|f\right|^{2}=\intop_{K}\left|\mathcal{F}_{k}\left(f\right)\right|^{2}$.
\item \label{enu:five}$\intop_{K}fg=\intop_{K}\mathcal{F}_{k}\left(f\right)\mathcal{F}_{k}^{-1}\left(g\right)$.
\end{enumerate}
\end{thm}
\begin{proof}
\ref{enu:IntegralProp} follows from Def.~\ref{def:HyperfiniteFT}
and Fubini's theorem.
\ref{enu:HFFIT}: We first prove the inversion theorem for sharply
interior points $y\in K$. Hence, let $B_{\delta}(y)\subseteq K$
for some $\delta>0$. Set $t_{p}$, $g$ and $G_{p}$ as above. Set
$g_{p}(\omega,y):=\frac{e^{iy\cdot\omega}}{(2\pi)^{n}}\cdot e^{-\frac{|t_{p}\omega|^{2}}{2}}$
for all $\omega\in\RC{\rho}^{n}$, and hence $g_{p}(-,y)=\frac{e^{iy\cdot\left(-\right)}}{(2\pi)^{n}}\cdot\left(t_{p}\diamond g\right)$.
Thereby, from $g\in\frontRise{\mathcal{D}}{\rho}\mathcal{GD}(\RC{\rho}^{n})$, we also get $g_{p}(-,y)\in\frontRise{\mathcal{D}}{\rho}\mathcal{GD}(\RC{\rho}^{n})$.
Since $k>0$ and $\text{supp}(G_{p})\subseteq\overline{B_{t_{p}\cdot\diff{\rho}^{-1}}(0)}$
(see Lem.~\ref{lem:example}), we get that
\begin{equation}
\text{supp}(G_{p})\subseteq K\label{eq:suppG_p}
\end{equation}
for all $p$ sufficiently large. Let us compute the HFT $\mathcal{F}\left[g_{p}(-,y)\right]$
for an arbitrary $p\in\hyperN{\rho}$:
\[
\mathcal{F}\left[g_{p}(-,y)\right]=\frac{1}{(2\pi)^{n}}\mathcal{F}\left[e^{iy\cdot(-)}\cdot\left(t_{p}\diamond g\right)\right]=\frac{1}{(2\pi)^{n}}\cdot y\oplus\mathcal{F}\left(t_{p}\diamond g\right),
\]
where we used Thm.~\ref{thm:thmProperties}.\ref{enu:prop7}. We
have $\text{supp}(t_{p}\diamond g)\subseteq\overline{B_{\diff{\rho}^{-1}/t_{p}}(0)}$
because $\text{supp}(g)\subseteq\overline{B_{\diff{\rho}^{-1}}(0)}$.
Set $h_{p}:=\frac{\diff{\rho}^{-1}}{t_{p}}$, and use Thm.~\ref{thm:thmProperties}.\ref{enu:prop5}
noting that $t_{p}h_{p}=\diff{\rho}^{-1}$ is an infinite number:
\begin{align*}
\mathcal{F}\left[g_{p}(-,y)\right](x) & =\left[\frac{1}{(2\pi)^{n}}\cdot y\oplus\mathcal{F}_{h_{p}}\left(t_{p}\diamond g\right)\right](x)=\\
 & =\frac{1}{(2\pi)^{n}}\cdot\left[t_{p}\odot\mathcal{F}(g)\right](x-y)=\\
 & =\frac{1}{(2\pi)^{n}}\cdot\left[t_{p}\odot(2\pi)^{n/2}g\right](x-y)=\\
 & =\frac{1}{(2\pi)^{n/2}}\left[t_{p}\odot g\right](x-y)=G_{p}(x-y)=G_{p}(y-x).
\end{align*}
Therefore, using \ref{enu:IntegralProp}, and for $p$ sufficiently
large, we obtain
\begin{align}
\int_{K}\mathcal{F}_{k}(f)(\omega)\cdot g_{p}(\omega,y)\,\diff{\omega} & =\int_{K}f(x)\cdot\mathcal{F}\left[g_{p}(-,y)\right](x)\,\diff{x}=\nonumber \\
& =\int_{K}f(x)\cdot G_{p}(y-x)\,\diff{x}=(f*G_{p})(y),\label{eq:twoInt}
\end{align}
\noindent where for $p$ large, we have $\text{supp}\left(G_{p}(y-\cdot)\right)\subseteq B_{\delta}(y)\subseteq K$.
Taking the hyperlimit for $p\in\hyperN{\rho}$ of the right hand side of
\eqref{eq:twoInt}, Thm.~\ref{thm:approxId} yields that it converges
to $f(y)$. The same hyperlimit of the left hand side and Thm.~\ref{thm:contResult}
give
\begin{align*}
\hyperlimarg{\rho}{\rho}{p}\int_{K}\mathcal{F}_{k}(f)(\omega)\cdot g_{p}(\omega,y)\,\diff{\omega} & =\int_{K}\mathcal{F}_{k}(f)(\omega)\cdot\hyperlim{\rho}{\rho}g_{p}(\omega,y)\,\diff{\omega}=\\
& =\int_{K}\mathcal{F}_{k}(f)|_{K}(\omega)\frac{e^{iy\cdot\omega}}{(2\pi)^{n}}\,\diff{\omega}
\end{align*}
because of the definition of $g_{p}$. For boundary points of $K$,
the identity follows using sharp continuity. This proves that $\mathcal{F}_{k}^{-1}\left(\mathcal{F}_{k}(f)|_{K}\right)=f$
on $K$, i.e.~that $\mathcal{F}_{k}^{-1}|_{K}\left(\mathcal{F}_{k}|_{K}(f)\right)=f$.
To prove the other identity, it suffices to apply this equality to
$-1\diamond f$ and consider \eqref{eq:reflection}.
\ref{enu:hom} follows directly from Thm.~\ref{thm:base}.\ref{enu:bound}
and the previous inversion theorem.
\ref{enu:FF} follows by \eqref{eq:reflection} using \ref{enu:HFFIT}
and the definition of reflection.
To prove \ref{enu:Parseval's-relation}, use \ref{enu:IntegralProp}
with $\overline{\mathcal{F}_{k}\left(g\right)}$ instead of $g$,
then Thm.~\ref{thm:thmProperties}.\ref{enu:prop3}, and finally
\ref{enu:FF}.
Plancherel’s identity \ref{enu:Plancherel=002019s-identity} is a
trivial consequence of \ref{enu:Parseval's-relation}.
Finally, \ref{enu:five} follows from \ref{enu:HFFIT} and \ref{enu:IntegralProp}.
\end{proof}
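The inversion property \ref{enu:HFFIT} can be checked numerically in its classical finite analogue for a compactly supported bump function. The following is an informal sketch of ours; the cutoff and grids are arbitrary choices.

```python
import numpy as np

# Classical finite analogue of the inversion property: for a smooth bump
# supported in [-1, 1] and a large cutoff, applying the truncated FT and
# then its inverse recovers the function pointwise.
cutoff = 60.0                                  # finite stand-in for the infinite k
x = np.linspace(-1, 1, 1001)
dx = x[1] - x[0]
f = np.zeros_like(x)
inner = np.abs(x) < 1
f[inner] = np.exp(-1.0 / (1.0 - x[inner]**2))  # the standard bump function

w = np.arange(-cutoff, cutoff, 0.02)
F = np.exp(-1j * np.outer(w, x)) @ f * dx      # truncated FT on a grid

def invert_at(x0):
    # (1/2pi) * int F(w) e^{i w x0} dw, Riemann sum over the cutoff window
    return (np.sum(F * np.exp(1j * w * x0)) * 0.02 / (2 * np.pi)).real

for x0 in (0.0, 0.3, -0.5):
    print(x0, invert_at(x0), np.exp(-1.0 / (1.0 - x0**2)))
```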
We close this section with a proof of the uncertainty principle:
\begin{thm}
\label{thm:uncertainty}If $\psi\in\frontRise{\mathcal{D}}{\rho}\mathcal{GD}(\RC{\rho})$, then
\begin{enumerate}
\item If $\psi\in\frontRise{\mathcal{D}}{\rho}\mathcal{GD}(H)\cap\frontRise{\mathcal{D}}{\rho}\mathcal{GD}(K)$, then
\[
\intop_{H}\omega^{2}\left|\mathcal{F}\left(\psi\right)\left(\omega\right)\right|^{2}\,\diff{\omega}=\intop_{K}\omega^{2}\left|\mathcal{F}\left(\psi\right)\left(\omega\right)\right|^{2}\,\diff{\omega}=:\intop\omega^{2}\left|\mathcal{F}\left(\psi\right)\left(\omega\right)\right|^{2}\,\diff{\omega}
\]
\item \label{enu:uncertainty}$\left(\intop x^{2}\left|\psi\left(x\right)\right|^{2}\,\diff{x}\right)\left(\intop\omega^{2}\left|\mathcal{F}\left(\psi\right)\left(\omega\right)\right|^{2}\,\diff{\omega}\right)\geq\frac{1}{4}\Vert\psi\Vert_{2}^{2}\Vert\mathcal{F}(\psi)\Vert_{2}^{2}$.
\end{enumerate}
\end{thm}
\begin{proof}
Properties \ref{enu:derZeroExt} and \ref{enu:equivCmptSupp} of Thm.~\ref{thm:DerivativeIsZero}
imply that also $\psi'\in\frontRise{\mathcal{D}}{\rho}\mathcal{GD}(H)$. Therefore, Plancherel's identity
Thm.~\ref{thm:MainPropertiesHFT}.\ref{enu:Plancherel=002019s-identity}
yields
\[
\int_{H}\left|\psi'\right|^{2}=\frac{1}{2\pi}\int_{H}\left|\mathcal{F}(\psi')\right|^{2}.
\]
But $\left|\mathcal{F}(\psi')\right|^{2}=\omega^{2}\left|\mathcal{F}(\psi)\right|^{2}$
from Thm.~\ref{thm:thmProperties}.\ref{enu:prop8} because $\psi$
is compactly supported and hence $\Delta_{1k}\psi=0$. Therefore
\begin{equation}
\int_{H}\left|\psi'\right|^{2}=\frac{1}{2\pi}\int_{H}\omega^{2}\left|\mathcal{F}(\psi)(\omega)\right|^{2}\,\diff{\omega}.\label{eq:varHFT}
\end{equation}
We arrive at the same result considering $K$ instead of $H$. Finally,
we apply Def.~\ref{def:intCmpSupp} of integral of a compactly supported
GSF, which yields independence from the functionally compact integration
domain.
To prove inequality \ref{enu:uncertainty}, we assume that $\psi\in\frontRise{\mathcal{D}}{\rho}\mathcal{GD}(K)$;
using integration by parts, we get:
\begin{align*}
\int x\overline{\psi(x)}\psi'(x)\,\diff{x} & =\int_{-k}^{k}x\overline{\psi(x)}\psi'(x)\,\diff{x}=\\
& =\left[x\overline{\psi(x)}\psi(x)\right]_{x=-k}^{x=k}-\int\psi(x)\left(\overline{\psi(x)}+x\overline{\psi'(x)}\right)\,\diff{x}=\\
& =-\int\left[\left|\psi(x)\right|^{2}+x\psi(x)\overline{\psi'(x)}\right]\,\diff{x}.
\end{align*}
Thereby
\begin{align*}
\int\left|\psi(x)\right|^{2}\,\diff{x} & =-2\text{Re}\left(\int x\psi(x)\overline{\psi'(x)}\,\diff{x}\right)\le\\
& \le2\left|\text{Re}\left(\int x\psi(x)\overline{\psi'(x)}\,\diff{x}\right)\right|\le\\
 & \le2\int\left|x\psi(x)\overline{\psi'(x)}\right|\,\diff{x},
\end{align*}
where we used the triangle inequality for integrals (see Thm.~\ref{thm:muMeasurableAndIntegral}.\ref{enu:existsReprDefInt}).
Using the Cauchy-Schwarz inequality (see Thm.~\ref{thm:Holder}), we
hence obtain
\begin{align*}
\left(\int\left|\psi(x)\right|^{2}\,\diff{x}\right)^{2} & \le4\left(\int\left|x\psi(x)\overline{\psi'(x)}\right|\,\diff{x}\right)^{2}\le\\
& \le4\left(\int x^{2}\left|\psi(x)\right|^{2}\,\diff{x}\right)\left(\int\left|\psi'(x)\right|^{2}\,\diff{x}\right).
\end{align*}
From this, thanks to \eqref{eq:varHFT} and Plancherel's equality,
the claim follows.
\end{proof}
\noindent Note that if $\Vert\psi\Vert_{2}\in\RC{\rho}$ is invertible,
then also $\Vert\mathcal{F}(\psi)\Vert_{2}$ is invertible by Plancherel's
equality, and we can hence write the uncertainty principle in the
usual normalized form.
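The classical counterpart of inequality \ref{enu:uncertainty}, in the form $\left(\int x^{2}|\psi|^{2}\right)\left(\int\omega^{2}|\mathcal{F}(\psi)|^{2}\right)\ge\frac{1}{4}\Vert\psi\Vert_{2}^{2}\Vert\mathcal{F}(\psi)\Vert_{2}^{2}$ with the convention $\mathcal{F}(\psi)(\omega)=\int\psi(x)e^{-ix\omega}\,\diff{x}$ (so that $\Vert\mathcal{F}(\psi)\Vert_{2}^{2}=2\pi\Vert\psi\Vert_{2}^{2}$), can be verified numerically: equality holds for the Gaussian, while the inequality is strict otherwise. The following is an informal sketch of ours; all numerical choices are arbitrary.

```python
import numpy as np

# Check (int x^2|psi|^2)(int w^2|F psi|^2) >= (1/4)||psi||^2 ||F psi||^2
# for the Gaussian (equality) and the first Hermite function (strict).
x = np.linspace(-10, 10, 2001)
dx = x[1] - x[0]
w = np.linspace(-10, 10, 2001)
dw = w[1] - w[0]
kernel = np.exp(-1j * np.outer(w, x))          # Fourier kernel on the grid

def sides(psi):
    F = kernel @ psi * dx                      # truncated FT of psi
    lhs = (np.sum(x**2 * np.abs(psi)**2) * dx) * (np.sum(w**2 * np.abs(F)**2) * dw)
    rhs = 0.25 * (np.sum(np.abs(psi)**2) * dx) * (np.sum(np.abs(F)**2) * dw)
    return lhs, rhs

l1, r1 = sides(np.exp(-x**2 / 2))              # Gaussian: equality
l2, r2 = sides(x * np.exp(-x**2 / 2))          # first Hermite function: strict
print(l1 / r1, l2 / r2)
```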
\begin{example}
\label{exa:uncertaintyDelta}Contrary to the classical
formulation of the uncertainty principle in $L^{2}(\R)$, in Thm.~\ref{thm:uncertainty}
we can e.g.~consider $\psi=\delta\in\frontRise{\mathcal{D}}{\rho}\mathcal{GD}(\RC{\rho})$, and we have
\[
\int x^{2}\delta(x)^{2}\,\diff{x}=\left[\int_{-1}^{1}x^{2}b_{\eps}^{2}\psi_{\eps}(b_{\eps}x)^{2}\,\diff{x}\right]
\]
where $\psi(x)=[\psi_{\eps}(x_{\eps})]$ is a Colombeau mollifier
and $b=[b_{\eps}]\in\RC{\rho}$ is a strong infinite number (see Example
\ref{exa:deltaCompDelta}). Since the functions $x\mapsto b_{\eps}^{2}\psi_{\eps}(b_{\eps}x)^{2}$,
once normalized, form an approximate identity, we have $\lim_{\eps\to0^{+}}\int_{-1}^{1}x^{2}b_{\eps}^{2}\psi_{\eps}(b_{\eps}x)^{2}\,\diff{x}=0$,
and hence $\int x^{2}\delta(x)^{2}\,\diff{x}\approx0$, i.e.~it is an infinitesimal.
The uncertainty principle Thm.~\ref{thm:uncertainty} implies that
it is an invertible infinitesimal. Considering the HFT $\mathbb{1}=\mathcal{F}(\delta)\in\frontRise{\mathcal{D}}{\rho}\mathcal{GD}(\RC{\rho})$,
we have
\[
\int\omega^{2}\mathbb{1}(\omega)^{2}\,\diff{\omega}\ge\int_{-r}^{r}\omega^{2}\,\diff{\omega}=2\frac{r^{3}}{3}\quad\forall r\in\R_{>0}.
\]
Thereby, $\int\omega^{2}\mathbb{1}(\omega)^{2}\,\diff{\omega}$ is
an infinite number.
\end{example}
\section{\label{sec:preservation}Preservation of classical Fourier transform}
It is natural to inquire into the relations between the classical FT of tempered
distributions and our HFT.
Let us start with a couple of exploratory examples:
\begin{enumerate}
\item $\mathcal{F}_{k}(1)(\omega)=\int_{-k}^{k}1\cdot e^{-ix\omega}\,\diff{x}=\int_{-k}^{k}\cos(x\omega)\,\diff{x}$.
If $L\subseteq_{0} I$ and $\omega|_{L}$ is invertible (see Sec.~\ref{subsec:subpoints}
for the language of subpoints), then $\mathcal{F}_{k}(1)(\omega)=_{L}2\frac{\sin(k\omega)}{\omega}$;
if $\omega=_{L}0$, then $\mathcal{F}_{k}(1)(\omega)=_{L}2k$. Classically,
we have $\hat{1}=2\pi\delta$.
\item \label{enu:HFT_H}$\mathcal{F}_{k}(H)(\omega)=\int_{-k}^{k}H(x)e^{-ix\omega}\,\diff{x}$.
Assuming that $\omega|_{L}$ is invertible on $L\subseteq_{0} I$, we have
$\mathcal{F}_{k}(H)(\omega)=_{L}\frac{i}{\omega}e^{-ik\omega}-\frac{i}{\omega}\mathbb{1}(\omega)$.
Classically, we have $\hat{H}=\pi\delta-\frac{i}{\omega}$. Therefore,
if $k$ is a strong infinite number and $\omega$ is far from the
origin, $|\omega|\ge r\in\R_{>0}$, we have $\mathcal{F}_{k}(H)(\omega)=\iota_{\R}^{b}\left(\hat{H}\right)(\omega)$
(here $\iota_{\R}^{b}$ is an embedding of $\mathcal{D}'(\R)$ into
$\frontRise{\mathcal{G}}{\rho}\mathcal{GC}^{\infty}(\csp{\R})$, see Sec.~\ref{sec:preservation}). However, the
latter equality does not hold if $\omega\approx0$.
\end{enumerate}
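The computation in the first example can be checked numerically for a finite $k$ (an informal sketch of ours; all parameters are arbitrary): the kernel $2\frac{\sin(k\omega)}{\omega}$ matches the truncated integral, and its total mass stays $2\pi$ independently of $k$, in accordance with the classical identity $\hat{1}=2\pi\delta$.

```python
import numpy as np

# F_k(1)(w) = 2*sin(k*w)/w for w away from 0, and the total mass of this
# Dirichlet-type kernel is 2*pi, matching 1^ = 2*pi*delta classically.
k = 50.0
x = np.linspace(-k, k, 400001)
dx = x[1] - x[0]

def F_k_one(w):
    vals = np.cos(x * w)                       # the odd sine part cancels
    return (np.sum(vals) - 0.5 * (vals[0] + vals[-1])) * dx   # trapezoid rule

for w in (0.3, 1.0, 2.7):
    print(w, F_k_one(w), 2 * np.sin(k * w) / w)

wgrid = np.linspace(-200, 200, 2000001)
kernel = 2 * k * np.sinc(k * wgrid / np.pi)    # = 2*sin(k*w)/w, safe at w = 0
mass = np.sum(kernel) * (wgrid[1] - wgrid[0])
print(mass, 2 * np.pi)                         # total mass ~ 2*pi
```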
Since classically we do not have infinite numbers, the former of these
examples leads us to the following idea:
\[
\mathcal{F}(1\cdot\mathbb{1})=\mathcal{F}(\mathcal{F}(\delta))=2\pi\left(-1\diamond\delta\right)=2\pi\delta.
\]
Note that if $f\in\frontRise{\mathcal{G}}{\rho}\mathcal{GC}^{\infty}(K)$, then $\left(f\cdot\mathbb{1}\right)(\omega)=f(\omega)$
for all finite points $\omega\in K$. We therefore call $f\cdot\mathbb{1}$
the \emph{finite part of $f$}. The same idea works for $e^{iax}$
and hence also for $\sin$, $\cos$. Let us now consider $\delta\cdot\mathbb{1}$:
\[
\mathcal{F}(\delta\cdot\mathbb{1})(\omega)=\int\delta(x)\mathcal{F}(\delta)(x)e^{-ix\omega}\,\diff{x}.
\]
We recall that integrating against $\delta$ yields the evaluation
of the second factor at $0$ only if the latter is bounded by a tame
polynomial at $0$ (see Example \ref{exa:tamePol}.\ref{enu:intAgainstDelta}).
Now, the function $x\mapsto\mathcal{F}(\delta)(x)e^{-ix\omega}$ is indeed
bounded by a tame polynomial at $x=0$ for all $\omega$, and we get
$\mathcal{F}(\delta\cdot\mathbb{1})(\omega)=1$. Being bounded by
a tame polynomial is, in general, a necessary assumption because
\begin{align*}
\mathcal{F}(H\cdot\mathbb{1})(\omega) & =\int H(x)\cdot\mathcal{F}(\delta)(x)e^{-ix\omega}\,\diff{x}=\\
& =\int\delta(x)\mathcal{F}_{k}(H\cdot e^{-i(-)\omega})(x)\,\diff{x}=\\
& =\int\delta(x)\mathcal{F}_{k}(H)(x+\omega)\,\diff{x},
\end{align*}
but $\mathcal{F}_{k}(H)(x+\omega)=\frac{i}{x+\omega}e^{-ik(x+\omega)}-\frac{i}{x+\omega}\mathbb{1}(x+\omega)$
is not bounded by a tame polynomial at $x=0$ if $\omega\approx0$
because of the term $\frac{1}{\omega}$.
These exploratory examples lead us to the following result:
\begin{thm}
\label{thm:preservationFinitePart}Let $f\in\frontRise{\mathcal{G}}{\rho}\mathcal{GC}^{\infty}(K)$, and assume
that $\mathcal{F}_{k}(f)$ is bounded by a tame polynomial at $\omega\in\RC{\rho}^{n}$.
Then $\mathcal{F}(f\cdot\mathbb{1})(\omega)=\mathcal{F}_{k}(f)(\omega)$.
\end{thm}
\begin{proof}
It suffices to apply Thm.~\ref{thm:MainPropertiesHFT}.\ref{enu:IntegralProp}:
\begin{align*}
\mathcal{F}(f\cdot\mathbb{1})(\omega) & =\int f(x)\mathcal{F}(\delta)(x)e^{-ix\cdot\omega}\,\diff{x}=\\
& =\int\delta(x)\mathcal{F}_{k}\left(f\cdot e^{-i(-)\cdot\omega}\right)(x)\,\diff{x}=\\
& =\int\delta(x)\mathcal{F}_{k}\left(f\right)(x+\omega)\,\diff{x}=\mathcal{F}_{k}\left(f\right)(\omega).
\end{align*}
\end{proof}
Since $\frac{\partial}{\partial\omega_{j}}\left[\mathcal{F}_{k}(f)\right](\omega)=-i\mathcal{F}_{k}(x_{j}f)(\omega)$,
we have the following sufficient condition for $\mathcal{F}_{k}(f)$
to be bounded by a tame polynomial at $\omega\in\RC{\rho}^{n}$:
\begin{thm}
\label{thm:preservationS}Let $b\in\RC{\rho}_{>0}$ be a large infinite
number, and let $f\in\frontRise{\mathcal{G}}{\rho}\mathcal{GC}^{\infty}(K)$ be uniformly bounded by a tame polynomial
on $K$, i.e.
\begin{equation}
\exists M,c\in\RC{\rho}\,\forall y\in K\,\forall j\in\N:\ \left|\diff{^{j}}f(y)\right|\le M\cdot c^{j},\quad\frac{b}{c}\text{ is a large infinite number}.\label{eq:unifBTP}
\end{equation}
Then for all $\omega\in\RC{\rho}^{n}$, the HFT $\mathcal{F}_{k}(f)$ is
bounded by a tame polynomial at $\omega$. In particular, every $f\in\mathcal{S}(\R^{n})$
satisfies condition \eqref{eq:unifBTP}, and hence
\begin{equation}
\mathcal{F}(f)=\mathcal{F}(f\cdot\mathbb{1})=\iota_{\R^{n}}^{b}(\hat{f}),\label{eq:preservRapDecr}
\end{equation}
where $\hat{f}\in\mathcal{S}(\R^{n})$ is the classical FT of $f$.
\end{thm}
\begin{proof}
We have
\begin{align*}
\left|\diff{^{j}}\mathcal{F}_{k}(f)(\omega)\right| & \le\max_{h\le j}\left|\mathcal{F}_{k}(x_{h}f)(\omega)\right|\le\max_{h\le j}\int_{K}\left|x_{h}f(x)\right|\,\diff{x}\le\\
& \le Mc^{j}\max_{h\le j}\int_{K}|x_{h}|\,\diff{x}=:\bar{M}c^{j}.
\end{align*}
If $f\in\mathcal{S}(\R^{n})$, then $\left|\diff{^{j}}f(y)\right|\in\R$,
so that if $b\ge\diff{\rho}^{-r}$, $r\in\R_{>0}$, it suffices to
take $c=\diff{\rho}^{-r+s}$ where $0<s<r$ to have that \eqref{eq:unifBTP}
holds. The last equality in \eqref{eq:preservRapDecr} is equivalent
to saying that $\int_{\R^{n}}f(x)e^{-ix\cdot\omega}\,\diff{x}=\int_{K}f(x)e^{-ix\cdot\omega}\,\diff{x}$,
which can be proved as for the Gaussian, see Lem.~\ref{lem:example}.
\end{proof}
We can now consider the relations between $\iota_{\R^{n}}^{b}(\hat{T})$
and $\mathcal{F}_{k}(\iota_{\R^{n}}^{b}(T))$ when $T\in\mathcal{S}'(\R^{n})$.
A first trivial link is given by the so-called \emph{equality in the
sense of generalized tempered distributions}: For all $\phi\in\mathcal{S}(\R^{n})$,
from \eqref{eq:pairTphiAsInt} we have
\[
\int\iota_{\R^{n}}^{b}(\hat{T})\phi=\langle\hat{T},\phi\rangle=\langle T,\hat{\phi}\rangle=\int\iota_{\R^{n}}^{b}(T)\hat{\phi}.
\]
Using the previous Thm.~\ref{thm:preservationS} we get $\hat{\phi}=\mathcal{F}(\phi)$
(identifying a smooth function with its embedding). Thereby
\begin{equation}
\int\iota_{\R^{n}}^{b}(\hat{T})\phi=\int\iota_{\R^{n}}^{b}(T)\mathcal{F}(\phi)=\int\mathcal{F}\left(\iota_{\R^{n}}^{b}(T)\right)\phi\quad\forall\phi\in\mathcal{S}(\R^{n}).\label{eq:gtd}
\end{equation}
In Colombeau's theory, this relation is usually written $\iota_{\R^{n}}^{b}(\hat{T})=_{\text{g.t.d.}}\mathcal{F}\left(\iota_{\R^{n}}^{b}(T)\right)$.
In the following result, we give a sufficient condition to have a
pointwise equality between the HFT of the finite part of $\iota_{\R^{n}}^{b}(T)$
and $\hat{T}$:
\begin{thm}
\label{thm:preservation}Let $b\in\RC{\rho}_{>0}$ be a large infinite
number and $T\in\mathcal{S}'(\R^{n})$. Assume that $\mathcal{F}(\iota_{\R^{n}}^{b}(T))$
is bounded by a tame polynomial at $\omega\in\RC{\rho}^{n}$. Then
\[
\mathcal{F}_{k}(\iota_{\R^{n}}^{b}(T))(\omega)=\mathcal{F}(\iota_{\R^{n}}^{b}(T)\cdot\mathbb{1})(\omega)=\iota_{\R^{n}}^{b}(\hat{T})(\omega).
\]
\end{thm}
\begin{proof}
For simplicity of notation, we use $\iota:=\iota_{\R^{n}}^{b}$. Using
Thm.~\ref{thm:preservationFinitePart}, we have
\[
\mathcal{F}\left(\iota(T)\cdot\mathbb{1}\right)(\omega)=\mathcal{F}_{k}(\iota(T))(\omega).
\]
Let $\psi(x)=[\psi_{\eps}(x_{\eps})]$ be an $n$-dimensional
Colombeau mollifier defined by $b$, and set $K_{\eps}:=[-k_{\eps},k_{\eps}]^{n}$;
we have
\begin{align*}
\mathcal{F}_{k}\left(\iota(T)\right)(\omega) & =\left[\int_{K_{\eps}}\langle T(y),\psi_{\eps}(x-y)\rangle e^{-ix\cdot\omega_{\eps}}\,\diff{x}\right]=\\
& =\left[\langle T(y),\int\psi_{\eps}(x-y)e^{-ix\cdot\omega_{\eps}}\,\diff{x}\rangle\right]=\\
& =\left[\langle T(y),\widehat{y\oplus\psi_{\eps}}(\omega_{\eps})\rangle\right]=\\
& =\left[\langle\hat{T}(y),\left(y\oplus\psi_{\eps}\right)(\omega_{\eps})\rangle\right]=\\
& =\left[\langle\hat{T}(y),\psi_{\eps}(\omega_{\eps}-y)\rangle\right]=\iota(\hat{T})(\omega).
\end{align*}
\end{proof}
\subsection{Fourier transform in the Colombeau's setting}
Only in this section do we assume a very basic knowledge of Colombeau's
theory.
Let $\Omega\subseteq\mathbb{R}^{n}$ be an open set. The algebra
$\mathcal{G}_{\tau}^{s}\left(\Omega\right)$ of tempered generalized
functions was introduced by J.F.~Colombeau in \cite{C1} in order
to develop a theory of the Fourier transform. Since then, Fourier
analysis, regularity theory and micro-local analysis have developed
rapidly in this setting.
\begin{defn}
The $\mathcal{G}_{\tau}^{s}(\Omega)$ algebra of \emph{Colombeau tempered
GF} (trivially generalized by using an arbitrary gauge $\rho$) is
defined as follows:
\begin{align*}
\mathcal{E}_{\tau}^{s}\left(\Omega\right) & :=\left\{ \left(u_{\eps}\right)\in\left(\Coo(\Omega)\right)^{I}\mid\forall\alpha\in\mathbb{N}^{n}\,\exists N\in\mathbb{N}:\right.\\
& \phantom{:=\qquad}\left.\sup_{x\in\Omega}\left(1+\left|x\right|\right)^{-N}\left|\partial^{\alpha}u_{\eps}\left(x\right)\right|=O(\rho_{\eps}^{-N})\right\} ,
\end{align*}
\begin{align*}
\mathcal{N}_{\tau}^{s}\left(\Omega\right) & :=\left\{ \left(u_{\eps}\right)\in\left(\Coo(\Omega)\right)^{I}\mid\forall\alpha\in\mathbb{N}^{n}\,\exists p\in\N\,\forall m\in\mathbb{N}:\right.\\
& \phantom{:=\qquad}\left.\sup_{x\in\Omega}\left(1+\left|x\right|\right)^{-p}\left|\partial^{\alpha}u_{\eps}\left(x\right)\right|=O(\rho_{\eps}^{m})\right\} ,
\end{align*}
\[
\mathcal{G}_{\tau}^{s}(\Omega):=\mathcal{E}_{\tau}^{s}(\Omega)/\mathcal{N}_{\tau}^{s}(\Omega).
\]
\end{defn}
\noindent Colombeau tempered GF can be embedded as GSF, at least if
the internal set $[\Omega]$ is sharply bounded:
\begin{thm}
\label{thm:embCTGF}Let $\Omega\subseteq\R^{n}$ be an open set such
that $[\Omega]$ is sharply bounded. A Colombeau tempered GF $u=(u_{\eps})+\ns_{\tau}(\Omega)\in\gs_{\tau}(\Omega)$
defines a GSF $u:[x_{\eps}]\in[\Omega]\longrightarrow[u_{\eps}(x_{\eps})]\in\frontRise{\mathbb{C}}{\rho}\widetilde{\mathbb{C}}$.
This assignment provides a bijection of $\gs_{\tau}(\Omega)$ onto
the space $\frontRise{\mathcal{G}}{\rho}\mathcal{GC}^{\infty}_{\tau}([\Omega])$, defined by the condition: $u\in\frontRise{\mathcal{G}}{\rho}\mathcal{GC}^{\infty}_{\tau}([\Omega])$ if and only if $u\in\frontRise{\mathcal{G}}{\rho}\mathcal{GC}^{\infty}([\Omega])$
and
\[
\forall\alpha\in\mathbb{N}^{n}\,\exists N\in\mathbb{N}\,\forall x\in[\Omega]:\ \left|\partial^{\alpha}u\left(x\right)\right|\le\frac{\left(1+\left|x\right|\right)^{N}}{\diff{\rho}^{N}}.
\]
\end{thm}
Integration of a CGF $u=[u_{\eps}]\in\gs(\Omega)$ over a standard
$K\Subset\Omega$ can be defined $\eps$-wise as $\int_{K}u\left(x\right)\,\diff{x}:=\left[\int_{K}u_{\varepsilon}\left(x\right)\,\diff{x}\right]\in\RC{\rho}$.
Similarly, we can proceed for $\int_{\Omega}u$ if $u$ is compactly
supported and $\Omega\subseteq\R^{n}$ is an arbitrary open set. On
the other hand, to define the Fourier transform, we have to integrate
tempered CGF over the entire $\R^{n}$. With this integration of CGF,
this is accomplished by multiplying the generalized function by a
so-called \emph{damping measure} $\phi$, see e.g.~\cite{Hor99}:
\begin{defn}
\label{def:damping}Let $\phi\in\mathcal{S}(\R^{n})$ with $\int_{\R^{n}}\phi=1$,
$\int_{\R^{n}}x^{\alpha}\phi(x)\,\diff{x}=0$ for all $\alpha\in\N^{n}\setminus\{0\}$,
and set $\phi_{\eps}(x):=\rho_{\eps}\odot\phi(x)=\rho_{\eps}^{-n}\phi(\rho_{\eps}^{-1}x)$.
Let $u=[u_{\eps}]\in\mathcal{G}_{\tau}(\R^{n})$, then $u_{\hat{\phi}}:=[u_{\eps}\widehat{\phi_{\eps}}]$,
$\int_{\R^{n}}u(x)\,\diff{}_{\hat{\phi}}x:=\int_{\R^{n}}u_{\hat{\phi}}\,\diff{x}=\left[\int_{\R^{n}}u_{\eps}(x)\widehat{\phi_{\eps}}(x)\,\diff{x}\right]\in\Ctil$,
where $\widehat{\phi_{\eps}}$ denotes the classical FT. In particular,
\begin{align*}
\mathcal{F}_{\hat{\phi}}(u)(\omega) & :=\int_{\R^{n}}e^{-ix\omega}u(x)\,\diff{}_{\hat{\phi}}x=\left[\int_{\R^{n}}e^{-ix\omega}u_{\eps}(x)\widehat{\phi_{\eps}}(x)\,\diff{x}\right]\\
\mathcal{F}_{\hat{\phi}}^{*}(u)(x) & :=(2\pi)^{-n}\int_{\R^{n}}e^{-ix\omega}u(\omega)\,\diff{}_{\hat{\phi}}\omega=\left[(2\pi)^{-n}\int_{\R^{n}}e^{-ix\omega}u_{\eps}(\omega)\widehat{\phi_{\eps}}(\omega)\,\diff{\omega}\right].
\end{align*}
\end{defn}
As a result, although this notion of Fourier transform in the Colombeau
setting shares several properties with the classical one, it lacks
e.g.~the Fourier inversion theorem, which holds only at the level
of equality in the sense of generalized tempered distributions \cite{Col85,Das91,NedPil92},
see also \eqref{eq:gtd}. See also \cite{Sor96} for a Paley-Wiener
like theorem. In other words, we only have e.g.~$\mathcal{F}_{\hat{\phi}}(\partial^{\alpha}u)=_{\text{g.t.d.}}i^{|\alpha|}\omega^{\alpha}\mathcal{F}_{\hat{\phi}}(u)$,
$i^{|\alpha|}\mathcal{F^{*}}_{\hat{\phi}}(\partial^{\alpha}u)=_{\text{g.t.d.}}x^{\alpha}\mathcal{F^{*}}_{\hat{\phi}}(u)$,
$\mathcal{F}_{\hat{\phi}}\mathcal{F^{*}}_{\hat{\phi}}u=_{\text{g.t.d.}}\mathcal{F^{*}}_{\hat{\phi}}\mathcal{F}_{\hat{\phi}}u$,
where $\mathcal{F}_{\hat{\phi}}(u)$ denotes the Fourier transform
with respect to the damping measure. Moreover, $\langle\iota_{\R}(\hat{T}),\psi\rangle\approx\langle\mathcal{F}_{\hat{\phi}}\iota_{\R}(T),\psi\rangle$
for all $T\in\mathcal{S}'(\R)$ and all $\psi\in\mathcal{S}(\R)$,
where $\iota_{\R}(T)$ is the embedding of Thm.~\ref{thm:embeddingD'}.
Intuitively, one could say that the multiplicative damping measure
introduces a perturbation of infinitesimal frequencies that inhibits
several results which, on the contrary, hold for the HFT. On the other
hand, the HFT relies on a better integration theory, which allows
one to integrate any GSF on the functionally compact set $K$. The
only known way to obtain a strict Fourier inversion theorem in
Colombeau's theory is the approach of \cite{Nig16}, where smoothing
kernels are used as index set (instead of the simpler $\eps\in I$),
so that knowledge of infinite-dimensional calculus in convenient
vector spaces is needed. Unfortunately, the latter approach is not
widely known, even in the community of CGF, and it can be considered
technically involved.
Finally, the following result links the HFT with the FT of tempered
CGF as defined above.
\begin{thm}
\label{thm:presFT_TCGF}Let $\Omega\subseteq\R^{n}$ be an open set
such that $[\Omega]$ is sharply bounded, and let $u\in\frontRise{\mathcal{G}}{\rho}\mathcal{GC}^{\infty}_{\tau}([\Omega])$
be a tempered CGF (identified with the corresponding GSF). Finally,
let $\phi\in\mathcal{S}(\R^{n})$ be a damping measure. Then
\[
\mathcal{F}_{\hat{\phi}}(u)=\mathcal{F}\left[u\cdot\hat{\phi}((-)\cdot\diff{\rho})\right]=\mathcal{F}\left[u\cdot\mathcal{F}(\phi)((-)\cdot\diff{\rho})\right].
\]
\end{thm}
\begin{proof}
Def.~\ref{def:damping} yields
\begin{align*}
\mathcal{F}_{\hat{\phi}}(u)(\omega) & =\int_{\R^{n}}u(x)e^{-ix\cdot\omega}\,\diff{_{\hat{\phi}}x}=\\
& =\int_{\R^{n}}u(x)e^{-ix\cdot\omega}\widehat{\diff{\rho}\odot\phi}(x)\,\diff{x}=\\
& =\int_{\R^{n}}u(x)e^{-ix\cdot\omega}\left(\diff{\rho}\diamond\hat{\phi}\right)(x)\,\diff{x}=\\
& =\int_{\R^{n}}u(x)e^{-ix\cdot\omega}\hat{\phi}(\diff{\rho}\cdot x)\,\diff{x}=\\
& =\mathcal{F}\left[u\cdot\hat{\phi}((-)\cdot\diff{\rho})\right](\omega)=\\
& =\mathcal{F}\left[u\cdot\mathcal{F}(\phi)((-)\cdot\diff{\rho})\right](\omega),
\end{align*}
where, in the last equality, we applied \eqref{eq:preservRapDecr}.
\end{proof}
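The key scaling step used above, $\widehat{\diff{\rho}\odot\phi}(x)=\hat{\phi}(\diff{\rho}\cdot x)$ (cf.~Def.~\ref{def:damping}), has an elementary classical analogue that can be checked numerically for a finite stand-in of the small parameter. The following Python sketch is an illustration only: it uses a Gaussian test function (which satisfies $\int\phi=1$ but not the vanishing-moment conditions of a damping measure), a finite value of $\rho$, and a midpoint Riemann sum for the classical FT.

```python
import math

def ft(f, xi, a=-5.0, b=5.0, n=40000):
    # midpoint approximation of the classical FT  int f(x) e^{-i x xi} dx
    h = (b - a) / n
    re = im = 0.0
    for j in range(n):
        x = a + (j + 0.5) * h
        v = f(x)
        re += v * math.cos(x * xi)
        im -= v * math.sin(x * xi)
    return re * h, im * h

rho = 0.05                       # finite stand-in for the infinitesimal rho_eps
phi = lambda x: math.exp(-x * x / 2) / math.sqrt(2 * math.pi)  # Gaussian test function
phi_rho = lambda x: phi(x / rho) / rho                         # (rho (.) phi)(x) = rho^{-1} phi(x / rho)

xi = 3.0
re, im = ft(phi_rho, xi)
# scaling property: FT(rho (.) phi)(xi) = FT(phi)(rho * xi) = exp(-(rho * xi)^2 / 2)
print(re, math.exp(-(rho * xi) ** 2 / 2))
```

The printed values agree to quadrature accuracy, illustrating why the damping factor $\hat{\phi}(\diff{\rho}\cdot x)$ is infinitely close to $\hat{\phi}(0)=1$ on finite points.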
\section{Examples and applications\label{sec:Examples-and-applications}}
In this section we present an initial study of possible applications
of HFT. Our aim is mainly to highlight the new potentialities of the
theory.
\subsection{Applications of HFT to ordinary differential equations}
\subsubsection*{The simplest ODE}
We first consider the following, apparently trivial but actually meaningful,
example:
\begin{equation}
y'=y,\ y\left(0\right)=c,\ y\in\frontRise{\mathcal{G}}{\rho}\mathcal{GC}^{\infty}\left(\left[-k,k\right]\right),\ c\in\RC{\rho},\label{eq:first_order_ode}
\end{equation}
where $k=-\log\left(\diff{\rho}\right)$. Since we do not impose limitations
on the initial value $c$, this simple example clearly shows the ability
of the HFT to handle non-tempered generalized functions. Applying
the HFT to both sides of \eqref{eq:first_order_ode} and using the
derivation formula \eqref{eq:DerRule1}, we get
\begin{equation}
\mathcal{F}_{k}\left(y\right)=\Delta_{1k}y+i\omega\mathcal{F}_{k}\left(y\right).\label{eq:FT_of_first_order_ode}
\end{equation}
Set for simplicity $C\left(\omega\right):=\Delta_{1k}y\left(\omega\right)=y(k)e^{-ik\omega}-y(-k)e^{ik\omega}$
and note that the function $C$ does not depend on the whole function
$y$ but only on the two values $y(\pm k)$. We get $\mathcal{F}_{k}\left(y\right)\left(\omega\right)=\frac{C\left(\omega\right)}{1-i\omega}$,
and applying the inverse HFT Thm.~\ref{thm:MainPropertiesHFT}.\ref{enu:HFFIT},
we obtain
\begin{equation}
y=\mathcal{F}_{k}^{-1}|_{K}\left(\left.\frac{C\left(\omega\right)}{1-i\omega}\right|_{K}\right).\label{eq:ODE1ViceVersa}
\end{equation}
Using the initial condition in \eqref{eq:first_order_ode}, we have
\begin{equation}
y\left(0\right)=\mathcal{F}_{k}^{-1}\left(\frac{C\left(\omega\right)}{1-i\omega}\right)\left(0\right)=\frac{1}{2\pi}\int_{-k}^{k}\frac{C\left(\omega\right)}{1-i\omega}\,\diff{\omega}=c.\label{eq:C}
\end{equation}
Clearly, e.g.~by separation of variables, \eqref{eq:first_order_ode}
necessarily yields $y(x)=ce^{x}$ for all $x\in[-k,k]$. Therefore,
$y\left(k\right)=ce^{-\log\diff{\rho}}=\frac{c}{\diff{\rho}}$, $y\left(-k\right)=ce^{\log\diff{\rho}}=c\diff{\rho}$
and $C\left(\omega\right)=c\diff{\rho^{i\omega-1}}-c\diff{\rho^{-i\omega+1}}$
because $\diff{\rho^{i\omega}}=e^{-ik\omega}$. Vice versa, if \eqref{eq:ODE1ViceVersa}
holds, using again Thm.~\ref{thm:MainPropertiesHFT}.\ref{enu:HFFIT},
we obtain $\mathcal{F}_{k}|_{K}(y)(\omega)=\frac{C(\omega)}{1-i\omega}$
for all $\omega\in K$; from the differentiation formula \eqref{eq:DerRule1}
we hence get $\mathcal{F}_{k}(y)(\omega)-\mathcal{F}_{k}(y')(\omega)+C(\omega)=C(\omega)$.
Another application of the inversion Thm.~\ref{thm:MainPropertiesHFT}.\ref{enu:HFFIT}
yields $y'=y$ in $K$. We have hence proved that $y$ satisfies \eqref{eq:first_order_ode}
if and only if
\begin{equation}
y(x)=ce^{x}=\mathcal{F}_{k}^{-1}|_{K}\left(\left.c\frac{\diff{\rho^{i\omega-1}}-\diff{\rho^{-i\omega+1}}}{1-i\omega}\right|_{K}\right)(x)\quad\forall x\in K.\label{eq:iff1ODE}
\end{equation}
We finally underscore that:
\begin{enumerate}[label=(\alph*)]
\item In the classical theory, the absence of the term $C(\omega)$ makes
it impossible to obtain the non-tempered solution for $c\ne0$: in other
words, if $c\ne0$, then \eqref{eq:C} implies that $C\ne0$.
\item In the previous deduction, it is clearly important that the HFT can
be applied to all the GF of the space $\frontRise{\mathcal{G}}{\rho}\mathcal{GC}^{\infty}(K)$.
\item If we missed the restriction to $K$ in \eqref{eq:ODE1ViceVersa},
we would wrongly obtain that $y=ce^{x}\in\frontRise{\mathcal{G}}{\rho}\mathcal{GC}^{\infty}(\RC{\rho})$, which would necessarily
imply $c=0$ because the exponential function is not defined on
the whole of $\RC{\rho}$.
\item Compare \eqref{eq:iff1ODE} with Example \ref{exa:exp} to note that
if $c\ge r\in\R_{>0}$, then in \eqref{eq:iff1ODE} we are considering
the inverse HFT of a GSF which always takes infinite values for all
finite $\omega$. Clearly, this strongly motivates the use of a non-Archimedean
framework for this type of problems.
\item All our results, in particular the inversion Thm.~\ref{thm:MainPropertiesHFT}.\ref{enu:HFFIT},
hold for an arbitrary infinite number $k$. In this particular case,
we considered $k$ of logarithmic type to get moderateness of the
exponential function.
\end{enumerate}
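The classical finite-interval analogue of the identity behind \eqref{eq:iff1ODE} can be tested numerically: for a finite $k$, the truncated transform of $y=ce^{x}$ must equal $C(\omega)/(1-i\omega)$ with $C(\omega)=y(k)e^{-ik\omega}-y(-k)e^{ik\omega}$. The Python sketch below uses arbitrary finite stand-ins for the generalized quantities $c$, $k$, $\omega$ and a midpoint quadrature.

```python
import cmath, math

def F_k(f, omega, k, n=40000):
    # midpoint approximation of F_k(f)(omega) = int_{-k}^{k} f(x) e^{-i omega x} dx
    h = 2 * k / n
    s = 0j
    for j in range(n):
        x = -k + (j + 0.5) * h
        s += f(x) * cmath.exp(-1j * omega * x)
    return s * h

c, k, omega = 2.0, 5.0, 1.5              # finite stand-ins for the generalized quantities
y = lambda x: c * math.exp(x)            # the solution y = c e^x
lhs = F_k(y, omega, k)
C = y(k) * cmath.exp(-1j * k * omega) - y(-k) * cmath.exp(1j * k * omega)  # Delta_{1k} y (omega)
rhs = C / (1 - 1j * omega)
print(abs(lhs - rhs))
```

The residual is of the order of the quadrature error, confirming $\mathcal{F}_{k}(y)(\omega)=C(\omega)/(1-i\omega)$ term by term.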
\subsubsection*{General constant coefficient ODE}
Let us consider an arbitrary $n$-th order constant (generalized)
coefficient ODE
\begin{equation}
a_{n}y^{\left(n\right)}+\ldots+a_{1}y^{\left(1\right)}+a_{0}y=h,\ y,h\in\frontRise{\mathcal{G}}{\rho}\mathcal{GC}^{\infty}(\left[-k,k\right]),\ a_{j}\in\RC{\rho},\ n\in\mathbb{N}_{\geq1}.\label{eq:non_homog_ode}
\end{equation}
Note that simply assuming the existence of a solution $y$ defined on the infinite
interval $[-k,k]$ already yields an implicit limitation on the coefficients
$a_{j}\in\RC{\rho}$. In fact, the equation $y'-\frac{1}{\diff{\rho}}y=0$
has solution $y(x)=y(0)e^{x/\diff{\rho}}$, which is defined only
if $x\le-N\diff{\rho}\log\diff{\rho}\approx0$ for some $N\in\N$.
By applying the HFT to both sides of equation \eqref{eq:non_homog_ode},
the differential equation is converted into the algebraic equation
\begin{equation}
P\left(\omega\right)\mathcal{F}_{k}\left(y\right)+C\left(\omega\right)=\mathcal{F}_{k}\left(h\right),\label{eq:hft_applied_to_nonhomog_ode}
\end{equation}
where
\[
P\left(\omega\right)=\sum_{j=0}^{n}a_{j}\left(i\omega\right)^{j},
\]
and $C\left(\omega\right)$ is the sum of all the extra terms in Thm.~\ref{thm:thmProperties}.\ref{enu:prop8},
which in this case becomes
\[
C(\omega):=\sum_{j=1}^{n}a_{j}\cdot\sum_{p=1}^{j}(i\omega)^{j-p}\Delta_{1k}y^{(p-1)}(\omega)\quad\forall\omega\in K.
\]
Note that the function $C$ depends only on the points $y^{(p)}(\pm k)$
for $p=0,\ldots,n-1$ and not on the whole function $y$. Assuming
that $P(\omega)$ is invertible for all $\omega\in K$, from \eqref{eq:hft_applied_to_nonhomog_ode}
and the inversion Thm\@.~\ref{thm:MainPropertiesHFT}.\ref{enu:HFFIT},
we get
\begin{equation}
y=\mathcal{F}_{k}^{-1}|_{K}\left(\left.\frac{\mathcal{F}_{k}(h)-C}{P}\right|_{K}\right).\label{eq:finalODE}
\end{equation}
Proceeding as in the previous example, i.e.~using again the inversion
Thm\@.~\ref{thm:MainPropertiesHFT}.\ref{enu:HFFIT} and the differentiation
formula \eqref{eq:DerRule1}, we can actually prove that \eqref{eq:finalODE}
is equivalent to \eqref{eq:non_homog_ode}. For a generalization to
GSF of the usual results about $n$-th order constant generalized
coefficient ODE, see \cite{LuGi16a}.
\subsubsection*{Airy equation}
A simple example of non-constant coefficient linear ODE is given by
the Airy equation
\begin{gather}
u''(x)-x\cdot u(x)=0,\ u\in\frontRise{\mathcal{G}}{\rho}\mathcal{GC}^{\infty}(\left[-k,k\right],\RC{\rho}).\label{eq:airy}
\end{gather}
By applying the derivative formulas Thm.~\ref{thm:thmProperties}.\ref{enu:prop8}
and Thm.~\ref{thm:thmProperties}.\ref{enu:prop9}, we get
\[
-\omega^{2}\mathcal{F}_{k}\left(u\right)+i\omega\Delta_{1k}u+\Delta_{1k}u'-i\mathcal{F}{}_{k}\left(u\right)'=0
\]
or, dividing both sides by $i$,
\[
i\omega^{2}\mathcal{F}_{k}\left(u\right)+\omega\Delta_{1k}u-i\Delta_{1k}u'-\mathcal{F}'_{k}\left(u\right)=0.
\]
Let us now set $C\left(\omega\right):=\omega\Delta_{1k}u\left(\omega\right)-i\Delta_{1k}u'\left(\omega\right)$,
$\forall\omega\in K$. Note, once again, that the function $C$ does
not depend on the whole functions $u$ and $u'$ but only on the points
$u\left(\pm k\right)$ and $u'\left(\pm k\right)$. We hence obtain
\begin{equation}
\mathcal{F}'_{k}\left(u\right)-i\omega^{2}\mathcal{F}_{k}\left(u\right)=C.\label{eq:non_CC_ode}
\end{equation}
Equation \eqref{eq:non_CC_ode} is a first order non-constant coefficient,
non-homogeneous generalized ODE with respect to the variable $\omega$.
We can solve it e.g.~by considering the integrating factor $\mu\left(\omega\right):=e^{\int_{0}^{\omega}-iz^{2}\,\diff{z}}=e^{-i\frac{\omega^{3}}{3}}$.
Then, the solution of \eqref{eq:non_CC_ode} is given by
\[
\mathcal{F}_{k}\left(u\right)\left(\omega\right)=\frac{\intop_{0}^{\omega}\mu\left(z\right)C\left(z\right)\,\diff z+c}{\mu\left(\omega\right)}=\frac{\intop_{0}^{\omega}e^{-\frac{iz^{3}}{3}}C\left(z\right)\,\diff z+c}{e^{-\frac{i\omega^{3}}{3}}}\quad\forall\omega\in\RC{\rho},
\]
where $c:=\mathcal{F}_{k}(u)(0)\in\RC{\rho}$. Finally, we apply the
inversion Thm.~\ref{thm:MainPropertiesHFT}.\ref{enu:HFFIT} and
substitute $C\left(\omega\right)$ to recover the original function
\begin{align*}
u(x) & =\left.\mathcal{F}_{k}^{-1}\right|_{K}\left(\left.\frac{\intop_{0}^{\omega}e^{-\frac{iz^{3}}{3}}C\left(z\right)\diff z+c}{e^{-\frac{i\omega^{3}}{3}}}\right|_{K}\right)(x)=\\
& =\left.\mathcal{F}_{k}^{-1}\right|_{K}\left(\left.\frac{\intop_{0}^{\omega}e^{-\frac{iz^{3}}{3}}C\left(z\right)\diff z}{e^{-\frac{i\omega^{3}}{3}}}\right|_{K}\right)(x)+\frac{c}{\pi}\int_{0}^{k}\cos\left(\frac{\omega^{3}}{3}+\omega x\right)\,\diff\omega=\\
& =\frac{1}{\pi}\int_{0}^{k}\cos\left(\omega x+\frac{\omega^{3}}{3}\right)\int_{0}^{\omega}e^{-i\left(kz+\frac{z^{3}}{3}\right)}\left(zu\left(k\right)-iu'\left(k\right)\right)\diff z\diff\omega-\\
& \phantom{=}-\frac{1}{\pi}\int_{0}^{k}\cos\left(\omega x+\frac{\omega^{3}}{3}\right)\int_{0}^{\omega}e^{-i\left(-kz+\frac{z^{3}}{3}\right)}\left(zu\left(-k\right)-iu'\left(-k\right)\right)\diff z\diff\omega+\\
& \phantom{=}+\frac{c}{\pi}\int_{0}^{k}\cos\left(\frac{\omega^{3}}{3}+\omega x\right)\,\diff\omega\quad\forall x\in K.
\end{align*}
If we assume that $u(\pm k)\approx0$ and $u'\left(\pm k\right)\approx0$,
then we get the first Airy function up to infinitesimals $u(x)\approx c\cdot\text{Ai}(x)$.
Therefore, if $u(\pm k)\not\approx0$ or $u'\left(\pm k\right)\not\approx0$
and $c=0$, we must get, up to infinitesimals, a multiple of the second
Airy function (see e.g.~\cite{AbrSte72})
\[
\exists d\in\RC{\rho}:\ u(x)\approx d\cdot\text{Bi}(x)=\frac{d}{\pi}\int_{0}^{+\infty}\left\{ \exp\left(-\frac{t^{3}}{3}+xt\right)+\sin\left(\frac{t^{3}}{3}+xt\right)\right\} \diff t.
\]
We explicitly recall that $\text{Bi}(x)$ is of exponential order
as $x\to+\infty$ and hence is not a tempered distribution, so that
classically we would miss this solution.
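In the case $u(\pm k)\approx0\approx u'(\pm k)$, the solution reduces to $u(x)\approx\frac{c}{\pi}\int_{0}^{k}\cos\left(\frac{\omega^{3}}{3}+\omega x\right)\diff\omega\approx c\cdot\text{Ai}(x)$, and the oscillatory integral can be illustrated classically. The Python sketch below (the finite truncation $K=25$ is an arbitrary stand-in for the infinite $k$) approximates it by a midpoint sum and compares with the exact value $\text{Ai}(0)=3^{-2/3}/\Gamma(2/3)$.

```python
import math

def Ai_osc(x, K=25.0, n=800000):
    # (1/pi) int_0^K cos(t^3/3 + x t) dt, midpoint rule (truncated Airy integral)
    h = K / n
    s = 0.0
    for j in range(n):
        t = (j + 0.5) * h
        s += math.cos(t ** 3 / 3 + x * t)
    return s * h / math.pi

ai_num = Ai_osc(0.0)
ai0 = 1 / (3 ** (2 / 3) * math.gamma(2 / 3))   # exact Ai(0)
print(ai_num, ai0)
```

The truncation error decays like $O(K^{-2})$ by integration by parts, so already $K=25$ reproduces $\text{Ai}(0)$ to about three decimal places.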
\subsection{Applications of HFT to partial differential equations}
\subsubsection*{\label{subsec:The-wave-equation}The wave equation}
Let us consider the one dimensional (generalized) wave equation\textit{
\begin{equation}
\frac{\partial^{2}u}{\partial t^{2}}=c^{2}\frac{\partial^{2}u}{\partial x^{2}},\ c\in\RC{\rho},\ u\in\frontRise{\mathcal{G}}{\rho}\mathcal{GC}^{\infty}(\left[-k,k\right]\times\RC{\rho}_{\geq0}),\label{eq:wave}
\end{equation}
}where $c$ is invertible, and subject to the boundary conditions
at $t=0$ and $x=\pm k$
\begin{align}
u(-,0) & =f,\quad\partial_{t}u(-,0)=g,\label{eq:ICwave}\\
u(\pm k,-) & =F_{\pm},\quad\partial_{x}u(\pm k,-)=G_{\pm}.
\end{align}
Explicitly note that all the GSF $f$, $g\in\frontRise{\mathcal{G}}{\rho}\mathcal{GC}^{\infty}(\left[-k,k\right])$,
$F_{+}$, $F_{-}$, $G_{+}$, $G_{-}\in\frontRise{\mathcal{G}}{\rho}\mathcal{GC}^{\infty}(\RC{\rho}_{\ge0})$ are completely
arbitrary. As usual, we directly apply the HFT with respect to the
variable $x$ to both sides and then apply Thm.~\ref{thm:thmProperties}.\ref{enu:prop8}
to the right-hand side:
\[
\mathcal{F}_{k}\left(\frac{\partial^{2}u}{\partial t^{2}}\right)=c^{2}\mathcal{F}_{k}\left(\frac{\partial^{2}u}{\partial x^{2}}\right),
\]
\[
\frac{\partial^{2}\mathcal{F}_{k}\left(u\right)}{\partial t^{2}}=-c^{2}\omega^{2}\mathcal{F}_{k}\left(u\right)+i\omega\Delta_{1k}u+\Delta_{1k}\left(\partial_{x}u\right).
\]
Note that the $\Delta_{1k}$-terms also refer to the variable $x$,
although the result is a function of $t$. Set $C\left(\omega,t\right):=i\omega\Delta_{1k}\left(u(-,t)\right)+\Delta_{1k}\left(\partial_{x}u(-,t)\right)$.
The function $C$ does not depend on the whole functions $u$ and
$\partial_{x}u$ but only on the functions of $t$: $u\left(\pm k,-\right)=F_{\pm}$
and $\partial_{x}u\left(\pm k,-\right)=G_{\pm}$:
\begin{equation}
C(\omega,t)=i\omega\left[F_{+}(t)e^{-ik\omega}-F_{-}(t)e^{ik\omega}\right]+G_{+}(t)e^{-ik\omega}-G_{-}(t)e^{ik\omega}\quad\forall\omega\in\RC{\rho}.\label{eq:Cwave}
\end{equation}
Hence, we get
\begin{equation}
\frac{\partial^{2}\mathcal{F}_{k}\left(u\right)}{\partial t^{2}}(\omega,-)+c^{2}\omega^{2}\mathcal{F}_{k}\left(u\right)(\omega,-)=C(\omega,-)\quad\forall\omega\in\RC{\rho}.\label{eq:SOnHODE}
\end{equation}
We obtain, for each fixed $\omega$, a constant (generalized) coefficient,
non-ho\-mo\-ge\-ne\-ous, second order ODE in the unknown $\mathcal{F}_{k}\left(u\right)\left(\omega,-\right)$.
Clearly, \eqref{eq:SOnHODE} already highlights a difference with
the classical FT, where $C=0$. To solve equation \eqref{eq:SOnHODE},
we can use the standard method of variation of parameters to get
\begin{align}
\mathcal{F}_{k}(u)(\omega,t) & =d_{2}(\omega)tS(c\omega t)+d_{1}(\omega)\cos(c\omega t)+\nonumber \\
 & \phantom{=}+tS(c\omega t)\int_{1}^{t}C(\omega,s)\cos(c\omega s)\,\diff s-\nonumber \\
& \phantom{=}-\cos(c\omega t)\int_{1}^{t}sC(\omega,s)S(c\omega s)\,\diff s,\label{eq:Fwave}\\
S(z) & :=\frac{1}{2}\int_{-1}^{1}\cos(zt)\,\diff t.\label{eq:sinx/x}
\end{align}
More precisely, in the previous step we applied the general theory
of linear constant generalized coefficient, non-homogeneous ODE developed
in \cite{LuGi16a}, which generalizes the classical theory proving
that the space of all the solutions is a $2$-dimensional $\RC{\rho}$-module,
generated in this case by $tS(c\omega t)$ and $\cos(c\omega t)$,
and translated by a particular solution of \eqref{eq:SOnHODE}. Explicitly
note that every function in \eqref{eq:Fwave} is a smooth function
or a GSF; in particular, $S(z)$ is the smooth extension of $\frac{\sin(z)}{z}$.
We also note that the functions $d_{1}$, $d_{2}$ satisfy
\begin{align*}
d_{1}(\omega) & =\mathcal{F}_{k}(f)(\omega)-\int_{0}^{1}sC(\omega,s)S(c\omega s)\,\diff s\\
d_{2}(\omega) & =\mathcal{F}_{k}(g)(\omega)+\int_{0}^{1}C(\omega,s)\cos(c\omega s)\,\diff s.
\end{align*}
They hence depend on all the functions of the boundary conditions.
Finally, applying the inversion Thm.~\ref{thm:MainPropertiesHFT}.\ref{enu:HFFIT}
we get
\begin{align*}
u(x,t) & =\left.\mathcal{F}_{k}^{-1}\right|_{K}\left(\left.d_{2}(\omega)tS(c\omega t)+d_{1}(\omega)\cos(c\omega t)\right|_{K}\right)(x,t)+\\
& \phantom{=}+\left.\mathcal{F}_{k}^{-1}\right|_{K}\left(\left.tS(c\omega t)\int_{1}^{t}C(\omega,s)\cos(c\omega s)\,\diff s\right|_{K}\right)(x,t)-\\
& \phantom{=}-\left.\mathcal{F}_{k}^{-1}\right|_{K}\left(\left.\cos(c\omega t)\int_{1}^{t}sC(\omega,s)S(c\omega s)\,\diff s\right|_{K}\right)(x,t).
\end{align*}
Following the usual calculations, the first summand yields the d'Alembert
formula
\begin{align}
u(x,t) & =\frac{1}{2}\left[f(x-ct)+f(x+ct)\right]+\frac{1}{2c}\int_{x-ct}^{x+ct}g(x')\,\diff x'+\nonumber \\
& \phantom{=}+\left.\mathcal{F}_{k}^{-1}\right|_{K}\left(\left.tS(c\omega t)\int_{1}^{t}C(\omega,s)\cos(c\omega s)\,\diff s\right|_{K}\right)(x,t)-\nonumber \\
& \phantom{=}-\left.\mathcal{F}_{k}^{-1}\right|_{K}\left(\left.\cos(c\omega t)\int_{1}^{t}sC(\omega,s)S(c\omega s)\,\diff s\right|_{K}\right)(x,t).\label{eq:waveSol}
\end{align}
Given $f$, $g\in\frontRise{\mathcal{G}}{\rho}\mathcal{GC}^{\infty}(\left[-k,k\right])$ and $F_{\pm}$, $G_{\pm}\in\frontRise{\mathcal{G}}{\rho}\mathcal{GC}^{\infty}(\RC{\rho}_{\ge0})$,
we can define $u(x,t)$ using \eqref{eq:waveSol} and reverse all
the steps above to get a global solution of the wave equation subject
to the given boundary conditions. This proves the following
\begin{thm}
\label{thm:waveGlobal}Given $f$, $g\in\frontRise{\mathcal{G}}{\rho}\mathcal{GC}^{\infty}(\left[-k,k\right])$
and $F_{\pm}$, $G_{\pm}\in\frontRise{\mathcal{G}}{\rho}\mathcal{GC}^{\infty}(\RC{\rho}_{\ge0})$, there exists one and
only one solution $u\in\frontRise{\mathcal{G}}{\rho}\mathcal{GC}^{\infty}(\left[-k,k\right]\times\RC{\rho}_{\geq0})$
of the wave equation subject to the boundary conditions \eqref{eq:ICwave}.
In particular, if $F_{\pm}=G_{\pm}=0$, we get the usual solution,
and if in addition we take $f=0$, $g=\delta$, we get the wave kernel
$u(x,t)=\frac{1}{2c}\left[H(x+ct)-H(x-ct)\right]$.
\end{thm}
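For $F_{\pm}=G_{\pm}=0$, the solution reduces to the classical d'Alembert formula, which can be checked directly. The Python sketch below uses arbitrary sample data $f$, $g$ (finite stand-ins, chosen for illustration only) and verifies the PDE $u_{tt}=c^{2}u_{xx}$ at a sample point by central finite differences.

```python
import math

c = 2.0
f = lambda x: math.exp(-x * x)   # u(x, 0)
g = lambda x: math.sin(x)        # u_t(x, 0)
G = lambda x: -math.cos(x)       # an antiderivative of g

def u(x, t):
    # d'Alembert formula: (f(x-ct) + f(x+ct))/2 + (1/(2c)) int_{x-ct}^{x+ct} g
    return 0.5 * (f(x - c * t) + f(x + c * t)) + (G(x + c * t) - G(x - c * t)) / (2 * c)

# check the PDE u_tt = c^2 u_xx at a sample point by central differences
x0, t0, e = 0.3, 0.7, 1e-3
utt = (u(x0, t0 + e) - 2 * u(x0, t0) + u(x0, t0 - e)) / e ** 2
uxx = (u(x0 + e, t0) - 2 * u(x0, t0) + u(x0 - e, t0)) / e ** 2
print(abs(utt - c * c * uxx))
```

The initial condition $u(x,0)=f(x)$ holds exactly by construction, and the finite-difference residual of the PDE is of order $e^{2}$.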
\subsubsection*{The Heat equation}
Let us consider the one dimensional (generalized) heat equation\textit{
\begin{equation}
\frac{\partial u}{\partial t}=a^{2}\frac{\partial^{2}u}{\partial x^{2}},\ \ u\in\frontRise{\mathcal{G}}{\rho}\mathcal{GC}^{\infty}(\left[-k,k\right]\times\RC{\rho}_{\geq0}).\label{eq:Heat}
\end{equation}
}where $a\in\RC{\rho}_{>0}$, $t\leq-\frac{N}{a^{2}k^{2}}\log\left(\diff\rho\right)$,
$N\in\mathbb{N}_{>0}$, and subject to the boundary conditions at $t=0$
and $x=\pm k$
\begin{align}
u(-,0) & =f,\label{eq:ICheat}\\
u(\pm k,-) & =F_{\pm},\quad\partial_{x}u(\pm k,-)=G_{\pm}.
\end{align}
Note, once again, that all the GSF $f\in\frontRise{\mathcal{G}}{\rho}\mathcal{GC}^{\infty}(\left[-k,k\right])$,
$F_{+}$, $F_{-}$, $G_{+}$, $G_{-}\in\frontRise{\mathcal{G}}{\rho}\mathcal{GC}^{\infty}(\RC{\rho}_{\ge0})$ are completely
arbitrary. Applying, as usual, the HFT with respect to the variable
$x$ to both sides of \eqref{eq:Heat}, together with Thm.~\ref{thm:thmProperties}.\ref{enu:prop8},
we get
\[
\frac{\partial\mathcal{F}_{k}\left(u\right)}{\partial t}=-a^{2}\omega^{2}\mathcal{F}_{k}\left(u\right)+i\omega\Delta_{1k}u+\Delta_{1k}\left(\partial_{x}u\right).
\]
For all $\omega\in\RC{\rho}$, set
\begin{align*}
C\left(\omega,t\right) & :=i\omega\Delta_{1k}\left(u(-,t)\right)+\Delta_{1k}\left(\partial_{x}u(-,t)\right)=\\
& =i\omega\left[F_{+}(t)e^{-ik\omega}-F_{-}(t)e^{ik\omega}\right]+G_{+}(t)e^{-ik\omega}-G_{-}(t)e^{ik\omega}.
\end{align*}
Therefore, we get
\begin{equation}
\frac{\partial\mathcal{F}_{k}\left(u\right)}{\partial t}\left(\omega,-\right)+a^{2}\omega^{2}\mathcal{F}_{k}\left(u\right)\left(\omega,-\right)=C\left(\omega,-\right)\ \forall\omega\in\RC{\rho}.\label{eq:FOnHODE}
\end{equation}
Solving \eqref{eq:FOnHODE} with the integrating factor $\mu\left(t\right):=e^{a^{2}\omega^{2}\intop_{0}^{t}\diff s}=e^{a^{2}\omega^{2}t}$
(which is well-defined if $\omega\in K$ because we assumed that $t\leq-\frac{N}{a^{2}k^{2}}\log\left(\diff\rho\right)$),
we have
\[
\mathcal{F}_{k}\left(u\right)\left(\omega,t\right)=\frac{\intop_{0}^{t}e^{a^{2}\omega^{2}s}C\left(\omega,s\right)\diff s+c(\omega)}{e^{a^{2}\omega^{2}t}},
\]
where $c(\omega):=\mathcal{F}_{k}\left(u\right)\left(\omega,0\right)=\mathcal{F}_{k}(f)(\omega)\in\RC{\rho}$,
so that
\begin{align*}
\mathcal{F}_{k}\left(u\right)\left(\omega,t\right) & =e^{-a^{2}\omega^{2}t}\intop_{0}^{t}e^{a^{2}\omega^{2}s}C\left(\omega,s\right)\diff s+\mathcal{F}_{k}(f)(\omega)e^{-a^{2}\omega^{2}t}=\\
 & =e^{-a^{2}\omega^{2}t}\intop_{0}^{t}e^{a^{2}\omega^{2}s}C\left(\omega,s\right)\diff s+\mathcal{F}_{k}(f)(\omega)\mathcal{F}\left(\frac{1}{2a\sqrt{\pi t}}e^{-\frac{x^{2}}{4a^{2}t}}\right)(\omega,t)=:\\
 & =:e^{-a^{2}\omega^{2}t}\intop_{0}^{t}e^{a^{2}\omega^{2}s}C\left(\omega,s\right)\diff s+\mathcal{F}_{k}(f)(\omega)\mathcal{F}\left(H_{t}^{a}(x)\right)(\omega,t)=\\
 & =e^{-a^{2}\omega^{2}t}\intop_{0}^{t}e^{a^{2}\omega^{2}s}C\left(\omega,s\right)\diff s+\mathcal{F}_{k}\left(f*H_{t}^{a}\right)(\omega,t),
\end{align*}
where $H_{t}^{a}(x):=\frac{1}{2a\sqrt{\pi t}}e^{-\frac{x^{2}}{4a^{2}t}}$
is the heat kernel (which, in our setting, is a compactly supported
GSF). Finally, applying the inversion Thm.~\ref{thm:MainPropertiesHFT}.\ref{enu:HFFIT}
and the convolution formula Thm.~\ref{thm:thmProperties}.\ref{enu:prop10}
we get
\[
u(x,t)=(f*H_{t}^{a})(x,t)+\mathcal{F}_{k}^{-1}|_{K}\left(e^{-a^{2}\omega^{2}t}\intop_{0}^{t}e^{a^{2}\omega^{2}s}C\left(\omega,s\right)\diff s\right)(x,t).
\]
As usual, if $C\left(\omega,t\right)$ equals zero, we obtain the
classical solution. We can reverse all the steps above to get a global
solution of the heat equation subject to the given boundary conditions.
This proves the following
\begin{thm}
\label{thm:waveGlobal-1}Given $f\in\frontRise{\mathcal{G}}{\rho}\mathcal{GC}^{\infty}(\left[-k,k\right])$ and
$F_{\pm}$, $G_{\pm}\in\frontRise{\mathcal{G}}{\rho}\mathcal{GC}^{\infty}(\RC{\rho}_{\ge0})$, there exists one and only
one solution $u\in\frontRise{\mathcal{G}}{\rho}\mathcal{GC}^{\infty}(\left[-k,k\right]\times\RC{\rho}_{\geq0})$ of
the heat equation subject to the boundary conditions \eqref{eq:ICheat}.
In particular, if $F_{\pm}=G_{\pm}=0$, we get the usual solution,
and if in addition we take $f=\delta$, then we get the heat kernel
$u(x,t)=H_{t}^{a}(x)=\frac{1}{2a\sqrt{\pi t}}e^{-\frac{x^{2}}{4a^{2}t}}$.
\end{thm}
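The step $\mathcal{F}_{k}(f)(\omega)e^{-a^{2}\omega^{2}t}=\mathcal{F}_{k}(f*H_{t}^{a})(\omega,t)$ rests on the classical identity $\mathcal{F}(H_{t}^{a})(\omega)=e^{-a^{2}\omega^{2}t}$, which the following Python sketch verifies numerically (the parameters are arbitrary finite choices, and a midpoint quadrature stands in for the exact integral).

```python
import math

a, t, omega = 1.3, 0.5, 2.0      # arbitrary finite parameters

def H(x):
    # heat kernel H_t^a(x) = exp(-x^2 / (4 a^2 t)) / (2 a sqrt(pi t))
    return math.exp(-x * x / (4 * a * a * t)) / (2 * a * math.sqrt(math.pi * t))

# F(H_t^a)(omega) = int H(x) e^{-i x omega} dx  (real by symmetry of H)
L, n = 40.0, 80000
h = 2 * L / n
re = 0.0
for j in range(n):
    x = -L + (j + 0.5) * h
    re += H(x) * math.cos(x * omega)
re *= h
print(re, math.exp(-a * a * omega * omega * t))
```

The two printed values coincide to quadrature accuracy, so the Gaussian damping of $\mathcal{F}_{k}(f)$ really is multiplication by the transform of the heat kernel.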
\subsubsection*{Laplace's equation}
Let us consider the two-dimensional Laplace equation \textit{
\begin{equation}
\frac{\partial^{2}u}{\partial x^{2}}+\frac{\partial^{2}u}{\partial y^{2}}=0,\ u\in\frontRise{\mathcal{G}}{\rho}\mathcal{GC}^{\infty}([-k,k]\times[\frac{N}{k}\log\diff\rho,-\frac{N}{k}\log\diff\rho]),\label{eq:laplace}
\end{equation}
}where $N\in\N_{>0}$, and subject to the boundary conditions at $y=0$
and $x=\pm k$
\begin{align}
u(-,0) & =f,\quad\partial_{y}u(-,0)=0,\label{eq:IClaplace}\\
u(\pm k,-) & =F_{\pm},\quad\partial_{x}u(\pm k,-)=0,
\end{align}
where $f\in\frontRise{\mathcal{G}}{\rho}\mathcal{GC}^{\infty}(\left[-k,k\right])$, $F_{+}$, $F_{-}\in\frontRise{\mathcal{G}}{\rho}\mathcal{GC}^{\infty}(Y)$
and $Y:=[\frac{N}{k}\log\diff\rho,-\frac{N}{k}\log\diff\rho]\subseteq\RC{\rho}$.
We include this example only for the sake of completeness,
and we present here just a preliminary study. By applying the HFT
with respect to $x$ and applying Thm.~\ref{thm:thmProperties}.\ref{enu:prop8},
the problem is converted into
\[
\frac{\partial^{2}\mathcal{F}_{k}\left(u\right)}{\partial y^{2}}=\omega^{2}\mathcal{F}_{k}\left(u\right)-i\omega\Delta_{1k}u-\Delta_{1k}\left(\partial_{x}u\right).
\]
Set
\begin{align}
C\left(\omega,y\right): & =-i\omega\Delta_{1k}\left(u\left(-,y\right)\right)-\Delta_{1k}\left(\partial_{x}u\left(-,y\right)\right)=\nonumber \\
 & =-i\omega\left[F_{+}(y)e^{-ik\omega}-F_{-}(y)e^{ik\omega}\right].\label{eq:CLaplace}
\end{align}
Note explicitly that $\frac{C(\omega,y)}{\omega}$ is a GSF exactly
because of our boundary condition $\partial_{x}u(\pm k,-)=0$ (compare
\eqref{eq:CLaplace} with \eqref{eq:Cwave}). Hence, we get
\begin{equation}
\frac{\partial^{2}\mathcal{F}_{k}\left(u\right)}{\partial y^{2}}\left(\omega,y\right)-\omega^{2}\mathcal{F}_{k}\left(u\right)\left(\omega,y\right)=C\left(\omega,y\right),\ \forall\omega\in\RC{\rho},\label{eq:LnHODE}
\end{equation}
whose general solution is
\begin{align*}
\mathcal{F}_{k}(u)(\omega,y) & =d_{1}(\omega)e^{\omega y}+d_{2}(\omega)e^{-\omega y}+\\
 & \phantom{=}+e^{-\omega y}\int_{0}^{y}\frac{e^{z\omega}}{2}i\left[F_{+}(z)e^{-ik\omega}-F_{-}(z)e^{ik\omega}\right]\,\diff z-\\
 & \phantom{=}-e^{\omega y}\int_{0}^{y}\frac{e^{-z\omega}}{2}i\left[F_{+}(z)e^{-ik\omega}-F_{-}(z)e^{ik\omega}\right]\,\diff z=:\\
 & =:d_{1}(\omega)e^{\omega y}+d_{2}(\omega)e^{-\omega y}+D(\omega,y)
\end{align*}
where the functions \textbf{$d_{1}$}, $d_{2}$ satisfy
\begin{align*}
\mathcal{F}_{k}(f)(\omega) & =d_{1}(\omega)+d_{2}(\omega)+D(\omega,0)\\
0 & =\omega d_{1}(\omega)-\omega d_{2}(\omega),
\end{align*}
because $\partial_{y}u(-,0)=0$ and $\partial_{y}D(\omega,0)=0$.
Since the set of invertible numbers in $\RC{\rho}$ is dense in the sharp
topology, we hence have
\[
d_{1}(\omega)=d_{2}(\omega)=\frac{1}{2}\left[\mathcal{F}_{k}(f)(\omega)-D(\omega,0)\right].
\]
Note that $e^{\pm\omega y}$ is well defined for all $\omega\in K$
and all $y\in Y=[\frac{N}{k}\log\diff\rho,-\frac{N}{k}\log\diff\rho]$.
Finally, applying the inversion Thm.~\ref{thm:MainPropertiesHFT}.\ref{enu:HFFIT}
we get
\begin{align*}
u(x,y) & =\left.\mathcal{F}_{k}^{-1}\right|_{K}\left(\left.d_{1}(\omega)e^{\omega y}+d_{2}(\omega)e^{-\omega y}\right|_{K}\right)(x,y)+\\
& \phantom{=}+\left.\mathcal{F}_{k}^{-1}\right|_{K}\left(\left.D(\omega,y)\right|_{K}\right)(x,y).
\end{align*}
\begin{thm}
Given $f\in\frontRise{\mathcal{G}}{\rho}\mathcal{GC}^{\infty}(\left[-k,k\right])$ and $F_{\pm}\in\frontRise{\mathcal{G}}{\rho}\mathcal{GC}^{\infty}([\frac{N}{k}\log\diff\rho,-\frac{N}{k}\log\diff\rho])$,
$N\in\N_{>0}$, there exists one and only one solution $u\in\frontRise{\mathcal{G}}{\rho}\mathcal{GC}^{\infty}(\left[-k,k\right]\times[\frac{N}{k}\log\diff\rho,-\frac{N}{k}\log\diff\rho])$
of Laplace's equation subject to the boundary conditions \eqref{eq:IClaplace}.
In particular, if $F_{\pm}=0$, we get the usual solution, and if
in addition we take $f=\delta$, then $u(x,y)=\left.\mathcal{F}_{k}^{-1}\right|_{K}\left(\left.\mathbb{1}(\omega)\cosh(\omega y)\right|_{K}\right)(x,y)$.
\end{thm}
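In the case $f=\delta$, $F_{\pm}=0$ of the theorem, every mode $\cos(\omega x)\cosh(\omega y)$ is harmonic, so the finite-$k$ classical analogue $u(x,y)=\frac{1}{\pi}\int_{0}^{k}\cos(\omega x)\cosh(\omega y)\,\diff\omega$ satisfies Laplace's equation exactly. A Python sketch (finite stand-in $k$ and a hypothetical sample point) checking this with a five-point finite-difference Laplacian:

```python
import math

k = 5.0                          # finite stand-in for the infinite k

def u(x, y, n=20000):
    # u(x, y) = (1/pi) int_0^k cos(w x) cosh(w y) dw   (case f = delta, F_pm = 0)
    h = k / n
    s = 0.0
    for j in range(n):
        w = (j + 0.5) * h
        s += math.cos(w * x) * math.cosh(w * y)
    return s * h / math.pi

# five-point finite-difference Laplacian at a sample point
x0, y0, e = 0.4, 0.2, 1e-3
lap = (u(x0 + e, y0) + u(x0 - e, y0) + u(x0, y0 + e) + u(x0, y0 - e) - 4 * u(x0, y0)) / e ** 2
print(abs(lap))
```

The residual is dominated by the $O(e^{2})$ truncation of the finite-difference stencil, consistent with each mode being an exact solution.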
\subsection{Applications to convolution equations}
Consider the following convolution equation in $y$
\begin{equation}
g=f*y,\label{eq:integralEquation}
\end{equation}
where we assume that $y$, $g\in\frontRise{\mathcal{G}}{\rho}\mathcal{GC}^{\infty}(K)$ and $f\in\frontRise{\mathcal{D}}{\rho}\mathcal{GD}(\RC{\rho})$.
As in the classical theory, we apply the convolution Thm.~\ref{thm:thmProperties}.\ref{enu:prop10}
to get
\[
\mathcal{F}_{k}\left(g\right)=\mathcal{F}\left(f\right)\mathcal{F}_{k}\left(y\right).
\]
Assuming that $\mathcal{F}\left(f\right)(\omega)$ is invertible for
all $\omega\in K$, the inversion Thm.~\ref{thm:MainPropertiesHFT}.\ref{enu:HFFIT}
yields
\[
y=\left.\mathcal{F}_{k}^{-1}\right|_{K}\left(\left.\frac{\mathcal{F}_{k}\left(g\right)}{\mathcal{F}\left(f\right)}\right|_{K}\right).
\]
\noindent For example, to highlight the differences with the classical
theory, let us consider the convolution equation $(\delta'+\delta)*y=\delta$
with $y(-1)=0$. We have $g=\delta$, and $f=\delta'+\delta$ so that
$\mathcal{F}(f)=i\omega\mathbb{1}+\mathbb{1}$, where as usual $\mathbb{1}=\mathcal{F}_{k}\left(\delta\right)$.
Since $\mathbb{1}(\omega)\in\RC{\rho}$ for all $\omega$, the quantity
$i\omega\mathbb{1}(\omega)+\mathbb{1}(\omega)$ is always invertible,
and we obtain
\[
y=\left.\mathcal{F}^{-1}\right|_{K}\left(\left.\frac{\mathbb{1}}{i\omega\mathbb{1}+\mathbb{1}}\right|_{K}\right).
\]
It is easy to prove that $y(t)+y'(t)=\mathcal{F}^{-1}|_{K}\left(1|_{K}\right)(t)=\frac{1}{2\pi}\int_{-k}^{k}e^{i\omega t}\,\diff\omega=\frac{k}{\pi}S(kt)$
(see \eqref{eq:sinx/x}) and hence $y(t)=e^{-t}\frac{k}{\pi}\int_{-1}^{t}S(kx)e^{x}\,\diff x$
e.g.~for all $\log(\diff\rho)\le t\le-\log(\diff\rho)$. Therefore
\[
y(t)=e^{-t}\int_{-1}^{t}\mathcal{F}^{-1}|_{K}\left(1|_{K}\right)(s)e^{s}\,\diff s\approx e^{-t}\int_{-1}^{t}\delta(s)e^{s}\,\diff s=e^{-t}H(t)
\]
for all $t\in\RC{\rho}$ which are far from the origin, i.e.~such that
$|t|\ge r$ for some $r\in\R_{>0}$.
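Although the derivation above lives in the non-Archimedean framework of GSF, its content can be checked numerically by replacing the infinite $k$ with a large finite one. The following sketch (our own illustration, not code from the paper) approximates $y(t)=e^{-t}\frac{k}{\pi}\int_{-1}^{t}S(kx)e^{x}\,\diff x$ on a grid and confirms that, away from the origin, it approaches $e^{-t}H(t)$.

```python
import numpy as np

# Numerical illustration (ours, outside the paper's non-Archimedean setting):
# with a large but finite k, y(t) = e^{-t} (k/pi) * int_{-1}^{t} S(k x) e^x dx,
# S(x) = sin(x)/x, should approach e^{-t} H(t) for t away from the origin.
k = 200.0
x = np.linspace(-1.0, 1.0, 400001)                    # dx much smaller than 1/k
dx = x[1] - x[0]
f = (k / np.pi) * np.sinc(k * x / np.pi) * np.exp(x)  # np.sinc(z) = sin(pi z)/(pi z)

# cumulative trapezoidal integral from -1 up to every grid point
cum = np.concatenate(([0.0], np.cumsum(0.5 * (f[1:] + f[:-1]) * dx)))
y = np.exp(-x) * cum

def y_at(t):
    return y[np.searchsorted(x, t)]

print(y_at(0.5), np.exp(-0.5))   # nearly equal: y follows e^{-t} for t > 0
print(y_at(-0.5))                # nearly zero for t < 0
```

Increasing $k$ shrinks the discrepancy near the origin, mirroring the role of the infinite $k$ in the exact statement.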
\section{Conclusions}
In the introduction of this article, we motivated the natural attempts
of several authors to extend the domain of various kinds of Fourier transform.
The HFT presented in this paper can be applied to the entire space
of GSF defined on the infinite interval $[-k,k]^{n}$. This space
clearly includes all tempered Schwartz distributions and all tempered
Colombeau GF, but also a large class of non-tempered GF, such as
exponential functions, and non-linear examples like $\delta^{a}\circ\delta^{b}$,
$\delta^{a}\circ H^{b}$, $a$, $b\in\N$, etc.
We close by listing some features of the theory that enabled the
main results presented above:
\begin{enumerate}
\item The power of a non-Archimedean language permeates the whole theory
from the beginning (e.g.~in the definition of GF as set-theoretical maps
with possibly infinite values and derivatives, or in the use of sharp continuity).
This power turned out to be important also for the HFT: see the heuristic
motivation of the FT in Sec.~\ref{subsec:The-heuristic-motivation},
Example \ref{exa:uncertaintyDelta} about application of the uncertainty
principle to a delta distribution, or the HFT of exponential functions
in Example \ref{exa:exp} and in Sec.~\ref{sec:Examples-and-applications}.
\item The results presented here rest on a strong and flexible
theory of multidimensional integration of GSF on functionally compact
sets: the possibility to exchange hyperlimits and integration is an
important step in the proof of the Fourier inversion theorem Thm.~\ref{thm:MainPropertiesHFT}.\ref{enu:HFFIT};
the possibility to compute $\eps$-wise integrals on intervals is
another feature used in several theorems and a key step in defining
integration of compactly supported GSF.
\item It is also worth mentioning explicitly that the definition of the
HFT is based on the classical formulas, used classically only for rapidly
decreasing smooth functions, and not on duality pairing. In our opinion, this
is a strong simplification that further underscores the strict analogies
between ordinary smooth functions and GSF. All this in spite of the
fact that the ring of scalars $\RC{\rho}$ is not a field and is not totally
ordered.
\item Important differences with respect to the classical theory result
from the Riemann-Lebesgue Lem.~\ref{lem:Rieman-Lebesgue} and the
differentiation formula \eqref{eq:DerRule1}. In the former case,
we explained these differences as a general consequence of the integration
by parts formula, i.e.~of the non-linear framework we are working
in, see Thm.~\ref{thm:R-Limp}. The compact support of the HFT $\mathbb{1}$
of Dirac's delta turns out to be very important in stating and proving
the preservation properties of the HFT, see Sec.~\ref{sec:preservation}.
Surprisingly (the classical formula dates back at least to 1822),
in Sec.~\ref{sec:Examples-and-applications} we showed that the new
differentiation formula is essential to move beyond the constrained
world of tempered solutions.
\item Finally, Example \ref{exa:uncertaintyDelta}, on the application of the
uncertainty principle, further suggests that the space $\frontRise{\mathcal{G}}{\rho}\mathcal{GC}^{\infty}(K)$
may be a useful framework for quantum mechanics, so as to have both
GF and smooth functions in a space sharing several properties with
the classical $L^{2}(\R^{n})$ (but which, on the other hand, is a
\emph{graded} Hilbert space).
\end{enumerate}
\section{Introduction}
The extraction of vector representations of building polygons from aerial and satellite imagery has been growing in importance in many remote sensing applications, such as cartography, city modelling and reconstruction, as well as map generation.
Most building extraction and polygonization methods rely on the vectorization of probability maps produced by a segmentation network.
These approaches are not learned end-to-end, which means that imperfections and artifacts produced by the segmentation model are carried through the entire pipeline, with the consequent generation of irregular polygons.
In this paper, we present a new way of tackling the building polygonization problem.
Rather than learning a segmentation network which is then followed by a polygonization method, we propose a novel neural network architecture called PolyWorld that detects building corners from a satellite image and uses a learned matching procedure to connect them in order to form polygons.
Thereby, our method allows the generation of valid polygons in an end-to-end fashion.
PolyWorld extracts positions and visual descriptors of building corners using a Convolutional Neural Network (CNN) and generates polygons by evaluating whether the connections between vertices are valid.
This procedure finds the best connection assignment between the detected vertex descriptors, which means that every corner must be matched with the subsequent vertex of the polygon.
The connections between polygon vertices can be represented as the solution of a linear sum assignment problem.
In PolyWorld, an important role is played by a Graph Neural Network (GNN) that propagates global information through all the vertex embeddings, increasing the descriptors' distinctiveness.
Moreover, it refines the position of the detected corners in order to minimize the combined segmentation and polygonal angle difference loss.
PolyWorld demonstrates superior performance compared to state-of-the-art building extraction and polygonization methods, not only achieving higher segmentation and detection scores, but also producing more regular and clean building polygons.
\section{Related work}
Since building detection and segmentation from satellite images has been of major research interest throughout the last few decades, discussing all work is beyond the scope of this paper. In this section we therefore focus on the most relevant contributions in different related categories. \smallskip
\textbf{Building segmentation:} Before the great success of deep learning methods, building footprint delineation was mainly done with multi-step, bottom-up approaches by combining multi-spectral overhead images and airborne LiDAR data\cite{sohn2007data, awrangjeb2010automatic}.
Nowadays, deep learning-based methods are state-of-the-art, mainly addressing the problem by refining raster footprints via heuristic polygonization approaches computed by powerful semantic or instance segmentation networks \cite{hamaguchi2018building, iglovikov2018ternausnetv2, golovanov2018building, iglovikov2018ternausnet, liu2018path, he2017mask}.
The majority of these segmentation models are trained with cross-entropy, soft intersection over union, or Focal-based losses \cite{lin2017focal, berman2018lovasz, rahman2016optimizing, sudre2017generalised}, achieving great scores in terms of intersection over union, recall, and precision, but mostly generating irregular building outlines that are neither visually pleasing nor employable in most cartographic applications.
A typical problem of semantic and instance segmentation networks is, in fact, the inability to outline straight building walls and sharp corners in the presence of ground truth noise, e.g. misalignment between the segmentation mask and the intensity image.
Some publications, therefore, suggest post-processing the segmented building footprints in order to align the segmentation outlines to the actual building contours visible in the intensity image.
DSAC \cite{marcos2018learning} employs an Active Contour Model to integrate geometrical priors and constraints in the segmentation process, while DARNet \cite{cheng2019darnet} proposes a loss function that encourages the contours to match the building boundaries.
Another technique to make the building contours more regular and realistic is to combine adversarial and regularized losses \cite{zorzi2019regularization, tang2018normalized, tang2018regularized}. \smallskip
\textbf{Polygon prediction:} Standard semantic and instance segmentation networks are easy to train and generate accurate segmentation masks, but most remote sensing applications that involve building layers require segmentation data in vector format rather than rasterized masks.
Object detection and polygonization methods found in literature can be classified into two categories.
The first category includes methods that perform the vectorization of grid-like information, e.g. the probability map produced by a segmentation network.
In \cite{zhao2018building} the authors corrected the segmentation masks produced with Mask R-CNN\cite{he2017mask} by first simplifying the detected boundaries using the Douglas-Peucker algorithm \cite{douglas1973algorithms} and subsequently refining the resulting polygons using a Minimum Descriptor Length method \cite{sohn2012implicit}.
More recently, Chen et al. \cite{chen2021quantization} suggested to regularize the segmentation produced with a CNN via quantizing the histogram of building boundaries in angle space, which can be achieved by exploiting a Relative Angle Gradient Transform.
Zorzi et al. \cite{zorzi2021machine} apply three different networks in series to perform the extraction and polygonization:
first, a CNN generates the building segmentation; then, the raster data is regularized by an autoencoder trained with regularized \cite{tang2018normalized, tang2018regularized} and adversarial losses; finally, building corners are detected using a third CNN.
The polygonization is performed by ordering the detected corners following the regularized boundaries.
All these methods are developed with the idea of decomposing the building extraction and polygonization problem into smaller tasks that can be tackled individually.
As a result, most of these approaches are computationally heavy, lack parallelization, and their hyperparameters must be carefully tuned in order to achieve the desired results.
Most importantly, since they are composed of a sequence of blocks, these methods can accumulate errors through their pipeline, which can harm the quality of the final polygonization.
The current state-of-the-art in the field is achieved by the Frame Field Learning (FFL) method \cite{girard2021polygonal}, which generates a frame field that encodes useful boundary information alongside the corresponding segmentation mask.
Moreover, the contour is optimized to be aligned to the frame field using an Active Skeleton Model.
The second category is represented by methods that directly learn a vector representation.
PolyTransform \cite{liang2020polytransform} initializes a polygon for every object instance and refines the vertex positions using a Transformer network \cite{vaswani2017attention}.
Curve GCN \cite{ling2019fast} learns a graph convolutional network to deform polygons in an iterative manner.
Some networks also utilize recurrent neural networks (RNNs) to extract polygons vertex by vertex, e.g. Polygon-RNN \cite{castrejon2017annotating} and Polygon-RNN++ \cite{acuna2018efficient}.
PolyMapper \cite{li2019topological} also applies an RNN to predict building and road vertices one by one.
All these methods directly process polygon parameters, but they are typically more difficult to train and need multiple iterations during inference.
Moreover, they have trouble dealing with complex building shapes, e.g. structures having curved walls or holes in their shape.
PolyWorld, which is presented in this paper, fits well into the second category of direct polygon prediction, although the employed architecture and the general idea fundamentally differ from all existing work.
\section{The PolyWorld Architecture}
\label{sec:architecture}
\begin{figure}[t]
\centering
\includegraphics[width=1\linewidth]{img/polygons.png}
\caption{In PolyWorld, the connections between polygon vertices are described with a permutation matrix. The $i$-th row of the permutation matrix $P_{clock}$ or $P_{count}$ indicates the index of the next clockwise or counterclockwise vertex connected to $v_i$. Please note that the permutation matrix of the clockwise oriented polygons $P_{clock}$ is the transpose of the permutation matrix of the counterclockwise oriented polygons $P_{count}$.}
\label{fig:polygons}
\end{figure}
The main idea behind PolyWorld is to represent building polygons in the scene as a set of vertices connected according to a permutation matrix, as illustrated in Figure \ref{fig:polygons}.
Each corner of the polygon is associated with a specific row of the permutation matrix that indicates the next clockwise vertex.
The permutation matrix must fulfill certain polygonal constraints: \textcolor{black}{\Circled{1}} every vertex corresponds to at most one clockwise connection and one counterclockwise connection; \textcolor{black}{\Circled{2}} the permutation matrix of the clockwise oriented polygons is the transpose of the counterclockwise permutation matrix; \textcolor{black}{\Circled{3}} a vertex having its entry on the diagonal of the permutation matrix can be discarded since, in reality, there are no building polygons with a single corner, e.g. vertex $v_6$ in Fig. \ref{fig:polygons}.
PolyWorld is composed of three blocks: a Vertex Detection Network that extracts a set of possible building corner candidates, an Attentional Graph Neural Network that aggregates information through the vertices and refines their position, and an Optimal Connection Network that generates the connections between vertices.
Given the input image, the model provides the position of the detected building corners and a valid permutation matrix.
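The permutation-matrix encoding and its three constraints can be sketched in a few lines of numpy (function and variable names are ours, not from the paper):

```python
import numpy as np

# Our sketch of the polygon <-> permutation-matrix encoding described above.
# Each polygon is a list of vertex indices in clockwise order; vertices that
# belong to no polygon get their entry on the diagonal (constraint 3).

def polygons_to_permutation(polygons, n):
    P = np.zeros((n, n), dtype=int)
    used = set()
    for poly in polygons:
        for i, v in enumerate(poly):
            P[v, poly[(i + 1) % len(poly)]] = 1   # next vertex along the polygon
            used.add(v)
    for v in range(n):
        if v not in used:
            P[v, v] = 1
    return P

P_clock = polygons_to_permutation([[0, 1, 2, 3], [4, 5]], n=7)  # vertex 6 unmatched
# constraint (1): exactly one successor and one predecessor per vertex
assert (P_clock.sum(axis=1) == 1).all() and (P_clock.sum(axis=0) == 1).all()
# constraint (2): the counterclockwise matrix is the transpose
P_count = polygons_to_permutation([[3, 2, 1, 0], [5, 4]], n=7)
assert (P_count == P_clock.T).all()
```

Reading the matrix back is the inverse operation: starting from any off-diagonal row and repeatedly following the `1` entries traverses one polygon.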
\subsection{Vertex Detection Network}
The Vertex Detection Network is depicted in Figure \ref{fig:vertices_detection}.
The module receives an image $I \in \mathbb{R}^{3 \times H \times W}$ as input, forward propagates $I$ through a fully convolutional backbone, and returns a $D$-dimensional feature map $F \in \mathbb{R}^{D \times H \times W}$.
The vertex detection mask $Y \in \mathbb{R}^{H \times W}$ is obtained by propagating the features $F$ through a $1 \times 1$ convolutional layer.
The detection mask $Y$ is then filtered using a Non Maximum Suppression algorithm with kernel size of 3, in order to retain the most relevant peaks.
The \textit{positions} $p$ of the $N$ highest peaks are then used to extract $N$ \textit{visual descriptors} $d \in \mathbb{R}^D$ from the feature map $F$.
Vertex positions consist of $x$ and $y$ image coordinates $p_i:=(x,y)_i$.
During training, the backbone not only learns to produce a feature map $F$ useful to segment building corners, but it also learns to embed an abstract representation of the latter: this information is constrained to represent the building vertex through the matching with the other detected corners.
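The peak-selection step can be illustrated with a small numpy sketch (ours; the real $Y$ and $F$ come from the CNN backbone, here replaced by random stand-ins). A pixel survives the $3\times3$ NMS if it equals the maximum of its neighbourhood; the $N$ strongest survivors give the vertex positions, at which the visual descriptors are sampled from $F$:

```python
import numpy as np

def nms_peaks(Y, n_peaks, ksize=3):
    """Keep pixels that are 3x3 local maxima, return the N strongest (x, y)."""
    H, W = Y.shape
    r = ksize // 2
    pad = np.pad(Y, r, mode="constant", constant_values=-np.inf)
    local_max = np.max(
        [pad[dy:dy + H, dx:dx + W] for dy in range(ksize) for dx in range(ksize)],
        axis=0)
    ys, xs = np.nonzero(Y >= local_max)               # local maxima only
    order = np.argsort(Y[ys, xs])[::-1][:n_peaks]     # strongest N peaks
    return np.stack([xs[order], ys[order]], axis=1)   # (N, 2) positions

rng = np.random.default_rng(0)
F = rng.normal(size=(64, 32, 32))    # feature map stand-in, D = 64
Y = rng.random((32, 32))             # vertex detection mask stand-in
p = nms_peaks(Y, n_peaks=8)
d = F[:, p[:, 1], p[:, 0]].T         # (N, D) visual descriptors
```

In the actual model $N=256$ and the gather is performed on the GPU, but the indexing logic is the same.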
\begin{figure}[t]
\centering
\includegraphics[width=1\linewidth]{img/vertices_detection.png}
\caption{The Vertex Detection Network of PolyWorld. A backbone CNN receives the intensity image and returns a features map and a vertex detection mask. A Non Maximum Suppression (NMS) algorithm removes undesired vertices and returns $N$ locations that correspond to the highest peaks in the detection mask. The visual descriptors are then extracted from the feature map at every location provided by the NMS.}
\label{fig:vertices_detection}
\end{figure}
\subsection{Attentional Graph Neural Network}
\begin{figure*}
\centering
\includegraphics[width=1\linewidth]{img/graph.png}
\label{fig:short-a}
\caption{The Attention Graph Neural Network and the Optimal Connection Network of PolyWorld. The first module uses a vertex encoder to map \textit{vertex positions} $p$ and \textit{visual descriptors} $d$ into a single vector, and uses $L$ self-attention layers to increase their distinctiveness. The module returns a set of \textit{offsets} $t$ and the \textit{matching descriptors} $m$. The offsets are used to refine the vertex positions, while $m$ are propagated through the optimal connection network that creates a $N \times N$ score matrix and generates the permutation matrix using the Sinkhorn algorithm.}
\label{fig:short}
\end{figure*}
Besides the position and the visual appearance of a building corner, considering other contextual information is essential to describe it in a richer and more distinctive way.
Capturing relationships between its position and appearance with other vertices in the image can be helpful to link it with corners having the same roof style, having a compatible shape and pose for the matching, or simply with adjacent corners.
Motivated by this consideration, we design the next PolyWorld block using an Attentional Graph Neural Network (GNN) that computes a set of \textit{matching descriptors} $m_i \in \mathbb{R}^D$ by learning short and long term vertex relationships, analyzing the vertex positions $p$ and the visual descriptors $d$ extracted by the vertex detection network.
Moreover, this block also estimates a \textit{positional offset} $t_i \in \mathbb{R}^2$ in order to refine the vertex positions optimizing the corner angle and the footprint segmentation.
As we will show in the following sections, aggregating features from all the detected vertices and refining the vertex positions leads not only to improved segmentation scores, but also to more realistic building polygons.
\subsubsection{Vertices Encoder}
Before forward propagating through the Graph Neural Network, positions $p$ and visual descriptors $d$ are merged by a Multilayer Perceptron (MLP).
\begin{equation}
d'_i = \textsc{MLP}_{enc} \left( \left[ d_i || p_i \right] \right)
\end{equation}
$\textsc{MLP}_{enc}$ receives the concatenation $[\cdot||\cdot]$ of $p_i$ and $d_i$ and returns a new descriptor $d'_i \in \mathbb{R}^D$ that encodes positional and visual information together.
\subsubsection{Self Attention Network}
The aggregation is performed by a self-attention mechanism \cite{vaswani2017attention} that propagates the information across vertices, increasing their contextual information.
Given the intermediate descriptors $x \in \mathbb{R}^{D \times N}$, the model employs a linear projection to produce a query $Q(x)$, a key $K(x)$, and a value $V(x)$.
The weights between the nodes are computed taking the softmax over the dot product $Q(x) K(x)^\top$.
The result is then multiplied with the values $V(x)$ in order to propagate the information across all the vertices.
The attention mechanism can be written as:
\begin{equation}
A = \text{softmax} \left( \frac{Q(x) \cdot K(x)^\top}{\sqrt{d_k}} \right) V(x)
\end{equation}
where the normalization term $d_k$ is the dimension of the queries and keys.
This operation is repeated for a fixed number of layers $L$.
The message $A^{(l)} \in \mathbb{R}^{D \times N}$ is the attention result at layer $l$ and is used to update the vertex descriptors at every step.
We denote by $a^{(l)}_i$ the $i$-th column of $A^{(l)}$, which represents the attention message relative to the $i$-th vertex of the graph.
In every layer the vertex descriptors are updated as follows:
\begin{equation}
x^{(l+1)}_i = \textsc{MLP}^{(l)} \left( \left[ x^{(l)}_i || a^{(l)}_i \right] \right)
\end{equation}
The embeddings received by the first attention layer are the descriptors produced by the vertex encoder, $d' = x^{(l=1)}$.
Finally, the embedding of the $i$-th vertex produced by the last attention layer, $x^{(L)}_i$, is decomposed into two components: a \textit{matching descriptor} $m_i \in \mathbb{R}^{D}$ and a \textit{positional offset} $t_i \in \mathbb{R}^{2}$.
\begin{equation}
m_i = \textsc{MLP}_{match} \left( x^{(L)}_i \right)
\end{equation}
\begin{equation}
t_i = \textsc{MLP}_{offset} \left( x^{(L)}_i \right)
\end{equation}
The matching descriptors are further used to generate a valid combination of connections between the vertices, while the offsets are combined with the vertex positions as follows:
\begin{equation}
\hat{p}_i = p_i + \gamma \cdot t_i
\label{eq:offset}
\end{equation}
where $\gamma$ is a factor that regulates the correction radius, since the offsets are generated through a HardTanh activation function and their values range between $-1$ and $1$.
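A single-head, random-weight numpy sketch of one such update may clarify the shapes involved (this is our illustration with untrained stand-in weights; the paper's model stacks several layers with multiple heads and learned MLPs):

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)   # numerically stable softmax
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
D, N = 64, 16
x = rng.normal(size=(D, N))                     # vertex embeddings x^{(l)}
Wq, Wk, Wv = (rng.normal(size=(D, D)) / np.sqrt(D) for _ in range(3))
Q, K, V = Wq @ x, Wk @ x, Wv @ x                # linear projections, (D, N)

# attention weights softmax(Q^T K / sqrt(d_k)) and messages A (one per vertex)
att = softmax(Q.T @ K / np.sqrt(D), axis=-1)    # (N, N), rows sum to 1
A = V @ att.T                                   # (D, N) aggregated messages

# MLP([x || a]) update; a single random linear layer + ReLU as a stand-in
W_mlp = rng.normal(size=(D, 2 * D)) / np.sqrt(2 * D)
x_next = np.maximum(W_mlp @ np.concatenate([x, A], axis=0), 0.0)

# decomposition into matching descriptors m and offsets t (HardTanh in [-1, 1])
W_m, W_t = rng.normal(size=(D, D)), rng.normal(size=(2, D))
m = W_m @ x_next
t = np.clip(W_t @ x_next, -1.0, 1.0)            # (2, N) positional offsets
```

The offsets $t$ would then be scaled by $\gamma$ and added to the vertex positions as in Equation \eqref{eq:offset}.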
\subsection{Optimal Connection Network}
The last block of PolyWorld is the optimal connection layer, which connects the vertices by generating a permutation matrix $P \in \mathbb{R}^{N \times N}$.
The assignment can be obtained by calculating a score matrix $S \in \mathbb{R}^{N \times N}$ for all possible vertex pairs and maximizing the overall score $\sum_{i,j} P_{i,j} S_{i,j}$.
Given two matching descriptors $m_i$ and $m_j$ encoding the information of two distinct vertices, we exploit $\textsc{MLP}_{clock}$ to detect whether the clockwise connection $m_i \xrightarrow{} m_j$ is possible.
The network receives the concatenation of the two descriptors and returns a high score value if the connection between them is strong; e.g. if $m_i$ represents the top-left corner of an orange roof, it is likely that $m_j$ is the next clockwise vertex if it represents a top-right corner of an orange roof.
\begin{equation}
s^{clock}_{i \xrightarrow{} j} = \textsc{MLP}_{clock} \left( \left[ m_i || m_j \right] \right)
\end{equation}
Vice versa, we estimate how strong the counterclockwise connection $m_i \xrightarrow{} m_j$ is by exploiting a second network $\textsc{MLP}_{count}$.
\begin{equation}
s^{count}_{i \xrightarrow{} j} = \textsc{MLP}_{count} \left( \left[ m_i || m_j \right] \right)
\end{equation}
By enforcing the constraint \textcolor{black}{\Circled{2}} we can establish a consistency check between the clockwise and the counterclockwise path of vertices.
The final score matrix $S$ is calculated as the combination of the clockwise score matrix $S_{clock}$ and the transposed version of the counterclockwise score matrix $S_{count}$:
\begin{equation}
S = S_{clock} + S^{\top}_{count}
\end{equation}
The double path consistency ensures stronger matches, better connections and, ultimately, higher polygon quality.
As a final step, we use the Sinkhorn algorithm \cite{sinkhorn1967concerning, cuturi2013sinkhorn, peyre2019computational, sarlin2020superglue} to find the optimal assignment matrix $P$ given the score matrix $S$.
The Sinkhorn algorithm is a GPU-efficient and differentiable version of the Hungarian algorithm \cite{munkres1957algorithms}, used to solve linear sum assignment problems, and consists of normalizing the rows and columns of $\text{exp}(S)$ for a certain number of iterations.
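The normalization step is compact enough to sketch directly (our minimal version; subtracting the maximum of $S$ before exponentiating only rescales $\exp(S)$ uniformly, so the normalized result is unchanged):

```python
import numpy as np

def sinkhorn(S, n_iters=100):
    """Alternate row/column normalization of exp(S): the result approaches a
    doubly stochastic matrix, a soft relaxation of the permutation matrix P."""
    P = np.exp(S - S.max())          # uniform rescaling for numerical stability
    for _ in range(n_iters):
        P /= P.sum(axis=1, keepdims=True)   # normalize rows
        P /= P.sum(axis=0, keepdims=True)   # normalize columns
    return P

rng = np.random.default_rng(0)
S = rng.normal(size=(6, 6)) * 5.0    # toy score matrix
P = sinkhorn(S)
```

A hard assignment can then be read off per row (e.g. by `argmax`), or, as done at inference time, computed exactly with the Hungarian algorithm.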
\section{Losses}
\textbf{Detection:} We train the corner detection as a segmentation task using weighted binary cross-entropy loss:
\begin{equation}
\begin{aligned}
\mathcal{L}_{det} = &- \omega \cdot \sum^H_{i=1} \sum^W_{j=1} \bar{Y}_{i,j} \cdot \text{log} \left( Y_{i,j} \right) \\
&- \sum^H_{i=1} \sum^W_{j=1} (1 - \bar{Y}_{i,j}) \cdot \text{log} \left( 1 - Y_{i,j} \right)
\end{aligned}
\end{equation}
The ground truth $\bar{Y}$ is a sparse array of zeros, where pixels that indicate the presence of a building corner have a value of one.
Since the segmentation is heavily unbalanced against the foreground pixels, we use a weight $\omega=100$ to counterbalance the positive samples.\smallskip
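The weighted binary cross-entropy above amounts to a few lines of numpy (illustrative; `eps` guards the logarithms and the toy masks are made up):

```python
import numpy as np

def detection_loss(Y, Y_gt, omega=100.0, eps=1e-7):
    """Weighted BCE: positive (corner) pixels are up-weighted by omega."""
    Y = np.clip(Y, eps, 1.0 - eps)
    pos = -omega * np.sum(Y_gt * np.log(Y))
    neg = -np.sum((1.0 - Y_gt) * np.log(1.0 - Y))
    return pos + neg

Y_gt = np.zeros((8, 8)); Y_gt[2, 3] = 1.0   # a single corner pixel
Y = np.full((8, 8), 0.01); Y[2, 3] = 0.9    # a reasonably confident prediction
loss = detection_loss(Y, Y_gt)
```

With $\omega=1$ the single positive pixel would contribute almost nothing next to the 63 background pixels; the weight restores the balance.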
\textbf{Matching:} The attention graph neural network and the optimal connection network of PolyWorld are fully differentiable which allows us to backpropagate from the generated partial assignment to the backbone that generates the visual descriptors.
This path is trained in a supervised manner from the ground truth permutation matrix $\bar{P}$ using cross entropy loss:
\begin{equation}
\begin{aligned}
\mathcal{L}_{match} = - \sum^N_{i=1} \sum^N_{j=1} \bar{P}_{i,j} \cdot \text{log} \left( P_{i,j} \right)
\end{aligned}
\end{equation}
Due to the iterative normalization through rows and columns made by the Sinkhorn algorithm, minimizing the negative log-likelihood of the positive matches of $P$ leads to simultaneously maximizing the precision and the recall of the matching.\smallskip
\textbf{Positional refinement:} Due to low image resolution, ground truth misalignments, or wrong building labelling, the position of the vertices provided by the vertex detection network is not optimal in practice.
The subsequent matching procedure, therefore, could produce polygons having corner angles different from the ground truth, altering the visual appeal of the extracted polygons.
In order to suppress this phenomenon, we minimize the difference between the corner angles of the predicted polygons and the ground truth polygons.
We denote by $\mathcal{C}$ the function that converts a permutation matrix and vertex positions into a list of polygons $\mathcal{P}$.
The polygons predicted by PolyWorld and the ground truth polygons are then $\mathcal{P} = \mathcal{C} \left( \hat{p}, P \right)$ and $\mathcal{\bar{P}} = \mathcal{C} \left( \bar{p}, \bar{P} \right)$, respectively.
Indicating with $\mathcal{P}_k$ the $k$-th polygon instance extracted from the image and composed of a set of clockwise ordered vertex positions, we formulate the angle loss as:
\begin{equation}
\begin{aligned}
\mathcal{L}_{angle} = \sum^K_{k=1} \sum_{(u\scriptveryshortarrow v\scriptveryshortarrow w)} 1 - \text{exp} \left( -\sigma \cdot \left| \Delta_{k, (u,v,w)} \right| \right)\\
\Delta_{k,(u,v,w)} = \angle \left( \hat{p}_u, \hat{p}_v, \hat{p}_w \right)_k - \angle \left( \bar{p}_u, \bar{p}_v, \bar{p}_w \right)_k
\end{aligned}
\end{equation}
where $(u \veryshortarrow v \veryshortarrow w)$ denotes the indices of any three consecutive vertices in polygon $\mathcal{P}_k$ and $\mathcal{\bar{P}}_k$.
The strength of the loss term is regulated by the factor $\sigma$, while $ \angle \left( \hat{p}_u, \hat{p}_v, \hat{p}_w \right)_k$ and $\angle \left( \bar{p}_u, \bar{p}_v, \bar{p}_w \right)_k$ indicate the angle at the $v$-th vertex of the polygon $\mathcal{P}_k$ and $\mathcal{\bar{P}}_k$, respectively.
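For a single polygon, the per-corner penalty $1 - \exp(-\sigma |\Delta|)$ can be sketched as follows (our illustration; $\sigma$ and the vertex coordinates are made-up values):

```python
import numpy as np

def corner_angle(pu, pv, pw):
    """Interior angle at pv formed by the three consecutive vertices."""
    pu, pv, pw = map(np.asarray, (pu, pv, pw))
    a, b = pu - pv, pw - pv
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def angle_loss(poly, poly_gt, sigma=1.0):
    n, loss = len(poly), 0.0
    for v in range(n):                      # every triple of consecutive vertices
        u, w = (v - 1) % n, (v + 1) % n
        delta = corner_angle(poly[u], poly[v], poly[w]) \
              - corner_angle(poly_gt[u], poly_gt[v], poly_gt[w])
        loss += 1.0 - np.exp(-sigma * abs(delta))
    return loss

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
skewed = [(0, 0), (1, 0), (1.4, 1), (0, 1)]
assert angle_loss(square, square) == 0.0    # identical corner angles
assert angle_loss(skewed, square) > 0.0     # distorted corners are penalized
```

The saturating exponential keeps every corner's contribution below one, so a single badly misplaced vertex cannot dominate the loss.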
Even if the network is encouraged to fix corner angles, $\mathcal{L}_{angle}$ can induce unexpected modifications of the polygon shapes, since it leaves the network some degrees of freedom in how to warp the vertices.
In our experiments, the network stretched the polygons in undesired ways while still respecting the angle criterion, potentially producing misaligned footprints.
PolyWorld fixes this issue by minimizing a segmentation loss between the ground truth and predicted polygons.
This refinement loss not only inhibits unwanted effects of $\mathcal{L}_{angle}$, but it also increases the segmentation scores, as documented in the following sections.
We generate the footprint mask of the predicted polygons exploiting a Differentiable Polygon Rendering method \cite{stekovic2021montefloor}.
It is a soft version of the winding number algorithm, which checks whether a pixel location $x$ is inside the polygon $\mathcal{P}_k$ with the equation:
\begin{equation}
\begin{aligned}
W(x, \mathcal{P}_k) = \sum_{(u \scriptveryshortarrow v)} \frac{\lambda \cdot \text{det}(\overline{\hat{p}_u x}, \overline{\hat{p}_v x} )_k }{1 + \left| \lambda \cdot \text{det}(\overline{\hat{p}_u x}, \overline{\hat{p}_v x} )_k \right|} \cdot \angle \left( \hat{p}_u, x, \hat{p}_v \right)_k
\end{aligned}
\end{equation}
where $(u \veryshortarrow v)$ are the indices of any two consecutive vertices of $\mathcal{P}_k$, $\text{det}(\cdot)$ is the determinant of the vectors $\overline{\hat{p}_u x}$ and $\overline{\hat{p}_v x}$, and the value $\lambda$ fixes the smoothness of the raster contours.
Calculating the winding number for every pixel location in the image, we generate the raster mask $M_k \in \mathbb{R}^{H \times W}$ of the polygon $\mathcal{P}_k$.
The segmentation loss $\mathcal{L}_{seg}$ is finally calculated as the soft intersection over union \cite{rahman2016optimizing} between the ground truth segmentation mask $\bar{M}$ and the combination of extracted polygon masks:
\begin{equation}
\begin{aligned}
\mathcal{L}_{seg} = \text{softIoU} \left( \sum^K_{k=1} M_k, \bar{M} \right)
\end{aligned}
\end{equation}
Since the NMS block is not differentiable, the only way for the network to minimize $\mathcal{L}_{seg}$ and $\mathcal{L}_{angle}$ is to generate a proper set of offsets $t$ in Equation \ref{eq:offset}.
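The soft winding-number test for a single pixel can be sketched as follows (our numpy illustration; $\lambda$ and the coordinates are made-up values). As $\lambda$ grows, the rational factor approaches $\operatorname{sign}(\det)$, and the sum of signed view angles is about $\pm 2\pi$ when $x$ lies inside the polygon and about $0$ when it is outside:

```python
import numpy as np

def soft_winding(x, poly, lam=100.0):
    x, poly = np.asarray(x, float), np.asarray(poly, float)
    w = 0.0
    for u in range(len(poly)):
        pu, pv = poly[u], poly[(u + 1) % len(poly)]
        du, dv = x - pu, x - pv                      # vectors p_u -> x, p_v -> x
        det = du[0] * dv[1] - du[1] * dv[0]          # 2x2 determinant
        soft_sign = lam * det / (1.0 + abs(lam * det))
        a, b = pu - x, pv - x                        # view angle of the edge at x
        cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
        w += soft_sign * np.arccos(np.clip(cos, -1.0, 1.0))
    return w

square = [(0, 0), (1, 0), (1, 1), (0, 1)]            # counterclockwise square
inside = soft_winding((0.5, 0.5), square)            # close to 2*pi
outside = soft_winding((2.0, 0.5), square)           # close to 0
```

Normalizing this value over all pixels yields the soft raster mask $M_k$; because every operation is differentiable, gradients flow back to the vertex positions.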
\begin{figure*}[t]
\centering
\includegraphics[width=0.99\linewidth]{img/thumbnails.png}
\caption{Examples of building extraction and polygonization on the CrowdAI test dataset. Top row: Frame Field Learning \cite{girard2021polygonal} approach with Res101-UNet as backbone and ACM polygonization. Bottom row: PolyWorld results.}
\label{fig:thumbnails}
\end{figure*}
\section{Implementation details}
\textbf{Training and inference:} The NMS algorithm extracts a list of $N=256$ vertex positions $p$ with the highest detection confidence.
During training, these positions are not directly used to extract the descriptors $d$ from the features $F$, but they are first sorted to match the nearest neighboring ground truth point.
After sorting, $p_i$ is the closest vertex to the ground truth point $\bar{p}_i$.
This procedure ensures to have index consistency between the positions $p$ and the ground truth permutation matrix $\bar{P}$.
In practice, the number of extracted points $N$ is always greater than the number of building corners in the image; therefore, the vertices that do not minimize the distance to any of the ground truth points have their entry assigned to the diagonal of $\bar{P}$.
PolyWorld is trained from scratch by linearly combining the detection, matching, and refinement losses: $\mathcal{L}_{det} + \mathcal{L}_{match} + \mathcal{L}_{angle} + \mathcal{L}_{seg}$.
Rather than learning the matching branch at the early training stage, we prefer to first pretrain the vertex detection network using only $\mathcal{L}_{det}$.
Once it extracts sufficiently accurate building corners, we continue training the full PolyWorld architecture with the complete loss.
During inference, vertices that have their entry in the diagonal of the permutation matrix are discarded (constraint \textcolor{black}{\Circled{3}}). \smallskip
\textbf{Architecture:} As backbone PolyWorld uses a Residual U-Net model \cite{alom2019recurrent}.
The descriptor dimension and the intermediate representations of the attention graph neural network have the same size $D=64$.
We use $L=4$ self attention layers having $4$ parallel heads each.
During training the permutation matrix $P$ is calculated performing $T=100$ Sinkhorn iterations, while during inference we calculate the exact linear sum assignment result using the Hungarian algorithm on the CPU.
With this configuration, a forward pass takes on average 24 ms per image ($320\times320$ pixels) on an NVIDIA RTX 3090 and an AMD Ryzen7 3700X.
\section{Experiments}
\textbf{Dataset:} Building extraction and polygonization networks require ground truth polygonal annotations in order to be trained.
Therefore, we perform all our experiments using the CrowdAI Mapping Challenge dataset \cite{Mohanty:2018}, which is composed of over 280k satellite images of size $300 \times 300$ pixels for training and 60k images for testing.
The dataset provides the polygon annotations in MS COCO format \cite{lin2014microsoft} for every image. \smallskip
\textbf{Evaluation metrics:} We evaluate and compare the results of PolyWorld computing classical segmentation and detection metrics, such as Intersection over Union (IoU), and MS COCO \cite{lin2014microsoft} Average Precision (AP) and Average Recall (AR).
In order to evaluate the regularity of the extracted building contours, we also calculate the Max Tangent Angle Error \cite{girard2021polygonal}.
This metric compares the tangent angles of the predicted and ground truth polygons, penalizing building contours not aligned with the ground truth.
In general, simple polygonization methods applied to the raster output of classical segmentation networks produce irregular polygons with a high number of redundant vertices.
On the other hand, building extraction and polygonization methods tend to reduce the segmentation scores in favour of more regular and realistic footprints.
Since the goal of the proposed method is to generate high quality building polygons ready to be used in geographical applications, we introduce the \textit{complexity aware IoU (C-IoU)} metric, computed as follows:
\begin{equation}
\text{C-IoU}(A, \bar{A}) = \text{IoU}(A, \bar{A}) \cdot \left( 1 - \text{RD}(N_A, N_{\bar{A}}) \right)
\end{equation}
where the first term $\text{IoU}(A, \bar{A})$ indicates the intersection over union between the predicted polygons raster mask $A$ and the ground truth segmentation $\bar{A}$.
The second term $\text{RD}(N_A, N_{\bar{A}}) = |N_A - N_{\bar{A}}| / (N_A + N_{\bar{A}})$ is the relative difference between the number of extracted vertices $N_A$ in the image used to produce the raster $A$, and the number of ground truth vertices $N_{\bar{A}}$.
The metric aims to favor polygonizations with a complexity similar to that of the ground truth, penalizing both oversimplified building shapes and polygons with redundant vertices.
Ideally, a method achieves a high C-IoU score if it balances the trade-off between segmentation accuracy and polygonization complexity.
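The C-IoU computation is straightforward to implement; a minimal sketch on boolean raster masks (the helper name is ours):

```python
import numpy as np

def c_iou(pred_mask, gt_mask, n_pred, n_gt):
    """Complexity-aware IoU: raster IoU discounted by the relative
    difference between predicted and ground truth vertex counts."""
    pred_mask = np.asarray(pred_mask).astype(bool)
    gt_mask = np.asarray(gt_mask).astype(bool)
    inter = np.logical_and(pred_mask, gt_mask).sum()
    union = np.logical_or(pred_mask, gt_mask).sum()
    iou = inter / union if union else 0.0
    rd = abs(n_pred - n_gt) / (n_pred + n_gt)  # relative difference in [0, 1]
    return iou * (1.0 - rd)
```

For example, a prediction with a perfect mask but twice the ground truth vertex count is discounted by a factor $1 - 4/12 = 2/3$ when $N_A = 8$ and $N_{\bar{A}} = 4$.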
\begin{table*}
\centering
\resizebox{\linewidth}{!}{%
\begin{tabular}{l|cccccc|cccccc}
\hline
\textbf{Method} & $AP$ & $AP_{50}$ & $AP_{75}$ & $AP_{S}$ & $AP_{M}$ & $AP_{L}$ & $AR$ & $AR_{50}$ & $AR_{75}$ & $AR_{S}$ & $AR_{M}$ & $AR_{L}$ \\ \hline
Mask R-CNN\cite{he2017mask} & 41.9 & 67.5 & 48.8 & 12.4 & 58.1 & 51.9 & 47.6 & 70.8 & 55.5 & 18.1 & 65.2 & 63.3 \\
PANet\cite{liu2018path} & 50.7 & 73.9 & 62.6 & 19.8 & 68.5 & 65.8 & 54.4 & 74.5 & 65.2 & 21.8 & 73.5 & 75.0 \\
PolyMapper\cite{li2019topological} & 55.7 & 86.0 & 65.1 & 30.7 & 68.5 & 58.4 & 62.1 & 88.6 & 71.4 & 39.4 & 75.6 & 75.4 \\ \hline
FFL (no field), mask & 57.8 & 84.0 & 66.9 & 33.8 & 74.1 & 80.7 & 67.0 & 90.4 & 76.9 & 46.2 & 79.7 & 85.7 \\
FFL (no field), simple poly & 61.1 & 87.4 & 71.2 & 35.1 & 74.5 & 82.3 & 64.7 & 89.4 & 74.1 & 41.7 & 77.9 & 85.7 \\
FFL (with field), mask & 57.7 & 83.8 & 66.3 & 33.8 & 73.8 & 81.0 & 68.1 & 91.0 & 77.7 & 47.5 & 80.0 & 86.7 \\
FFL (with field), simple poly & 61.7 & 87.6 & \textbf{71.4} & 35.7 & 74.9 & 83.0 & 65.4 & 89.8 & 74.6 & 42.5 & 78.6 & 85.8 \\
FFL (with field), ACM poly\cite{girard2021polygonal} & 61.3 & 87.4 & 70.6 & 33.9 & 75.1 & 83.1 & 64.9 & 89.4 & 73.9 & 41.2 & 78.7 & 85.9 \\ \hline
PolyWorld (offset off) & 58.7 & 86.9 & 64.5 & 31.8 & 80.1 & 85.9 & 71.7 & 92.6 & 79.9 & 47.4 & 85.7 & 94.0 \\
PolyWorld (offset on) & \textbf{63.3} & \textbf{88.6} & 70.5 & \textbf{37.2} & \textbf{83.6} & \textbf{87.7} & \textbf{75.4} & \textbf{93.5} & \textbf{83.1} & \textbf{52.5} & \textbf{88.7} & \textbf{95.2} \\ \hline
\end{tabular}
}
\caption{MS COCO \cite{lin2014microsoft} results on the CrowdAI test dataset \cite{Mohanty:2018} for all the building extraction and polygonization experiments. The results of PolyWorld are calculated discarding the correction offsets (offset off) and refining the vertex positions (offset on). FFL refers to the Frame Field Learning \cite{girard2021polygonal} method; its results are computed with and without frame field estimation. "mask" refers to the pure segmentation produced by the model, "simple poly" refers to the marching squares \cite{lorensen1987marching} contour detection followed by the Douglas–Peucker polygon simplification \cite{douglas1973algorithms}, and "ACM poly" refers to the Active Contour Model \cite{girard2021polygonal} polygonization method.}
\label{tab:coco}
\end{table*}
\begin{table}
\centering
\resizebox{\linewidth}{!}{%
\begin{tabular}{l|cccc}
\hline
\textbf{Method} & IoU & C-IoU & MTA & N ratio \\ \hline
FFL (no field), simple poly & 83.9 & 23.6 & 51.8° & 5.96 \\
FFL (with field), simple poly & 84.0 & 30.1 & 48.2° & 2.31 \\
FFL (with field), ACM poly & 84.1 & 73.7 & 33.5° & 1.13 \\ \hline
PolyWorld (offset off) & 89.9 & 86.9 & 35.0° & 0.93 \\
PolyWorld (offset on) & \textbf{91.3} & \textbf{88.2} & \textbf{32.9°} & 0.93 \\ \hline
\end{tabular}
}
\caption{\textit{Intersection over union (IoU)}, \textit{complexity aware IoU (C-IoU)}, and \textit{max tangent angle error (MTA)} results on the test-set of the CrowdAI dataset \cite{Mohanty:2018}. The last column reports the ratio between the number of detected vertices and the number of ground truth vertices.}
\label{tab:iou_MTA}
\end{table}
\textbf{Results:} Results of experiments conducted on the CrowdAI \cite{Mohanty:2018} dataset are shown in Figure \ref{fig:thumbnails}.
The images represent different kinds of urban areas and are sorted by building complexity from left to right.
We compare the results of PolyWorld with the Frame Field Learning (FFL) method \cite{girard2021polygonal}, which represents the state of the art in building extraction and polygonization.
Both FFL and PolyWorld generalize well to every kind of building, but PolyWorld produces overall cleaner and more linear geometries without developing undesired artifacts.
It is interesting to note that PolyWorld deals better with hard object occlusions, estimating the position of the hidden corners and connecting them to produce more regular and realistic footprints, as shown in the left image.
The robustness of the vertex detection and matching process is shown in the right images, where PolyWorld has no difficulty generating polygons for complex buildings with curved walls or inner courtyards.
More images can be found in the supplementary material.
In Table \ref{tab:coco} we report the MS COCO metrics results using the test-set of CrowdAI.
We computed the scores of PolyWorld considering and discarding the positional offsets used to correct the vertex positions ("offset on" and "offset off").
Our approach is compared with FFL, PolyMapper \cite{li2019topological}, and two general-purpose instance segmentation networks: Mask R-CNN \cite{he2017mask} and PANet\cite{liu2018path}.
For the FFL method, we report the results of the model trained with and without frame field output, and with different polygonization approaches: "mask" is raster segmentation, "simple poly" refers to the marching squares \cite{lorensen1987marching} contour detector followed by the Douglas–Peucker \cite{douglas1973algorithms} simplification, and "ACM poly" refers to the Active Contour Model \cite{girard2021polygonal} polygonization.
The results of PolyWorld show state-of-the-art precision and recall even when the refinement offsets are ignored.
With the vertex position refinement enabled, all the scores improve by a considerable margin, demonstrating the effectiveness of the refinement losses.
Another interesting observation is that PolyWorld uses considerably fewer points to describe the buildings than the FFL approach.
In the 60k test images of the CrowdAI dataset, the ground truth contains a total of about 4.4M vertices.
PolyWorld extracts 4.2M polygon vertices on the test-set, compared to the 5.1M extracted by FFL with ACM polygonization.
Nevertheless, our approach achieves better segmentation scores, suggesting that the PolyWorld vertex extraction is more efficient.
In Table \ref{tab:iou_MTA} we report the intersection over union, mean tangent angle error, and complexity aware IoU results.
Once again, there is a noticeable improvement in all the metrics when the vertex position refinement is enabled.
Even though the ACM polygonization of FFL significantly outperforms the Douglas–Peucker polygonization in terms of MTA and C-IoU, the full PolyWorld method surpasses all the FFL results.
\section{Limitations and future work}
In future work we want to demonstrate the capability of PolyWorld to generalize and produce accurate polygons on large-scale data sets with a number of unseen conditions.
This will include the Inria segmentation dataset \cite{maggiori2017dataset} with OpenStreetMap annotations, since it contains varied areas captured from different cities around the globe and includes adjacent buildings with common corners.
From a technical point of view, the case of common corners can be handled by PolyWorld by generalizing the vertex detection network to multiclass segmentation, detecting the number of vertices located at the same position, and sampling the visual descriptor multiple times from the feature map when a shared corner is detected.
\section{Conclusion}
We presented PolyWorld, a novel method capable of elegantly extracting building polygons from satellite and aerial images in an end-to-end manner.
The evaluation results experimentally prove the power and effectiveness of self-attention graph neural networks for matching and positional refinement of detected building vertices.
By solving an optimal transport problem, our method provides strong and reliable vertex connections and implicitly avoids redundant points.
Our experiments show that PolyWorld significantly outperforms existing building extraction approaches, enabling highly accurate and regular building footprints, which fulfill the strict requirements of geographic and cartographic applications.
\section*{Acknowledgments}
Thanks to VRVis for financing the project.
VRVis is funded by BMK, BMDW, Styria, SFG, Tyrol and Vienna Business Agency in the scope of COMET - Competence Centers for Excellent Technologies (879730) which is managed by FFG.
\newpage
{\small
\bibliographystyle{ieee_fullname}
\section*{Results}
As a proof of concept, we provide a parallelized BHt-SNE, namely pt-SNE, in which we apply the chunk\&mix protocol and a simplified parametric setup (see subsection~\nameref{sec:parametric_configuration} in Methods). Nevertheless, we want to highlight the generality of the concept, which is possibly also suitable for FIt-SNE or UMAP. We show that pt-SNE achieves good approximations to FIt-SNE, which we take here as our ground truth, and significantly reduces the computational complexity that penalizes t-SNE at increasing perplexities (ppxs). Hence, we can explore data structuring at scales that otherwise remain prohibitive. The downside is a small cost in local accuracy that, if required, can be recovered with simple post-processing.
We assess the goodness of pt-SNE visualizations by means of the \textit{k-ary neighborhood preservation} (kNP, \cite{Lee:2015}). The kNP depicts the matching between neighborhoods in the high- and the low-dimensional spaces (HD, LD) across a wide range of neighborhood sizes. A further summarization of the kNP is the area under the curve (AUC, \cite{Lee:2015}). The kNP is commonly depicted against a logarithmic transformation of the neighborhood size to enhance the results in the low range of neighborhood sizes. As our interest now extends to higher values of ppx, we include a linear version of the kNP. Consequently, we distinguish linear and logarithmic versions of the AUC (linAUC and logAUC, respectively) to characterize the high-dimensional to low-dimensional matching of global and local data structures.
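A brute-force sketch of the kNP computation (the helper is ours, assuming Euclidean distances and a small $n$; the original formulation in \cite{Lee:2015} may differ in details): for each neighborhood size $k$, it averages over all points the fraction of the $k$ nearest HD neighbors that are also among the $k$ nearest LD neighbors.

```python
import numpy as np

def knp(X_hd, X_ld, ks):
    """k-ary neighborhood preservation between an HD data set and its
    LD embedding, evaluated at the neighborhood sizes in `ks`."""
    def neighbor_order(X):
        D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
        np.fill_diagonal(D, np.inf)      # exclude each point itself
        return np.argsort(D, axis=1)     # neighbors sorted by distance
    r_hd, r_ld = neighbor_order(X_hd), neighbor_order(X_ld)
    out = []
    for k in ks:
        overlap = [len(set(r_hd[i, :k]) & set(r_ld[i, :k]))
                   for i in range(len(X_hd))]
        out.append(np.mean(overlap) / k)  # fraction of preserved neighbors
    return np.array(out)
```

A perfect embedding yields a kNP of 1 at every $k$; the linAUC/logAUC summaries then correspond to averaging this curve over a linear or logarithmic $k$ axis.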
\subsection*{Global/Local structure trade-off}
\begin{figure}[tp!]
\centering
\includegraphics[width=11.66cm, height=14.83cm]{figs/results/fig1.pdf}
\caption{\textbf{Global/Local trade-off} panels a) to i) pt-SNE visualization with ppx values corresponding to 0.01, 0.05, 0.10, 0.30, 0.50, 0.80, 0.90, 0.95 and 0.99 of the data set size. Colors depict relative pair-wise distances in original space (red:closer, blue:farther); j) log and k) linear kNP.}
\label{fig:s3d}
\end{figure}
It is well established that t-SNE fails to capture the global structure of large data sets \cite{Wattenberg:2016, Kobak:2018} and, since the appearance of UMAP, an increasing feeling among practitioners is that UMAP outperforms t-SNE both in output quality and running time \cite{Becht:2018}. This debate has so far been distorted by the inability to work with large neighborhood affinity matrices. Seemingly, the captured global structure is just the result of an informative initialization and can hardly be credited to any intrinsic advantage of either of the algorithms \cite{Kobak:2019}. We stress that t-SNE simultaneously optimizes both the local and the global structure in the data, and featuring more of the latter is just a matter of using large enough perplexities, as the method prescribes. We show this by following the \textit{retrieval information} approach presented in \cite{Venna:2010}, where the authors introduce the \textit{neighbor retrieval visualizer} (NeRV). NeRV is a visualization method that relates the cost of retrieving/missing neighbors from the HD/LD representations of the data to the fundamental trade-off between \textit{precision} and \textit{recall} in information retrieval. By extending their approach (see Sec.~S1 in the Supplementary Material), we reach a reformulation of the t-SNE cost function that explicitly states how t-SNE assesses data structure at the local and global scale,
\begin{equation}
KL\left(P\| Q\right) \equiv \mathbb{E}_{p_{i}}\left[KL\left(p_{.\mid i}, q_{.\mid i}\right)\right] + KL\left(p_{i\mid .}, q_{i\mid .}\right).
\label{eq:qMeasure}
\end{equation}
The distributions $p_{.\mid i}$ and $q_{.\mid i}$ in Eq.~\ref{eq:qMeasure} are probabilistic models of \textit{smoothed recall}, i.e. describe the probability of picking a data point from the neighborhood of $\mathbf{x}_i$ in the HD and LD spaces, respectively. Analogously, the distributions $p_{i\mid .}\equiv p_{i}$ and $q_{i\mid .}\equiv q_{i}$ are probabilistic models of \textit{global prevalence}, i.e. represent the average probability of picking data point $\mathbf{x}_{i}$ as a neighbor of any other point. $KL\left(\right)$ is the Kullback-Leibler divergence. Thus, the first term in Eq. \ref{eq:qMeasure}, known as the \textit{expected smooth recall}, is an average measure of the mismatch between the HD and LD representations of the neighborhood of $\mathbf{x}_{i}$, weighted by the prevalence of $\mathbf{x}_{i}$. The second term is a measure of matching between the HD and LD models of prevalence, closely related to an index of global structure preservation. In summary, the t-SNE approach maximizes the \textit{expected smoothed recall} prioritizing areas where local structure is more significant while simultaneously trying to preserve the global structure.
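The decomposition in Eq.~\ref{eq:qMeasure} is an instance of the chain rule for the Kullback-Leibler divergence and can be checked numerically. The following is a minimal sanity check (the helper is ours), assuming the prevalence model is the marginal of the joint distribution over pairs:

```python
import numpy as np

rng = np.random.default_rng(0)

def kl(p, q):
    """Kullback-Leibler divergence between two discrete distributions."""
    return float(np.sum(p * np.log(p / q)))

# random joint distributions P and Q over pairs (i, j)
P = rng.random((6, 6)); P /= P.sum()
Q = rng.random((6, 6)); Q /= Q.sum()

p_i, q_i = P.sum(axis=1), Q.sum(axis=1)              # marginals (prevalence)
P_cond = P / p_i[:, None]                            # conditionals p_{.|i}
Q_cond = Q / q_i[:, None]                            # conditionals q_{.|i}

# expected smoothed recall + prevalence mismatch
rhs = sum(p_i[i] * kl(P_cond[i], Q_cond[i]) for i in range(6)) + kl(p_i, q_i)
assert np.isclose(kl(P.ravel(), Q.ravel()), rhs)
```

The identity holds exactly: writing $p_{ij} = p_i\, p_{j\mid i}$ and expanding the logarithm splits the joint divergence into the two terms of Eq.~\ref{eq:qMeasure}.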
We illustrate this idea using a data set with a known structure (Fig.~\ref{fig:s3d}). The Sierpinski-3D data set is a graph representation of the well-known Sierpinski 3D-triangle, where each row represents a node of the graph (a vertex of the structure) and each column is the \textit{shortest-path-distance} to the rest of the nodes (https://sparse.tamu.edu/, \cite{Hu:2005}, \cite{Kruiger:2017}). The Sierpinski-3D data set is not challenging in size but represents a challenging fractal structure for a dimensionality reduction algorithm. By scanning this data set across a wide range of perplexities (Fig.~\ref{fig:s3d}) we observe how the algorithm accurately resolves the fractal structure at each specific scale (Fig.~\ref{fig:s3d}, panels a to i; see also the Supplementary Material https://rpubs.com/bigMap/839757). Interestingly, the embedding starts showing the complete structure for ppxs beyond 30\% of the data (Fig.~\ref{fig:s3d}~f), an indication of the convenience of exploring high-ranged perplexities. Additionally, we include the kNP plots to provide a quantitative assessment of the embedding (Fig.~\ref{fig:s3d}~j,~k). The kNP curves show how pt-SNE balances the prevalence of local and global structure signatures as ppx increases (Fig.~\ref{fig:s3d}~j), while the waving in each curve shows how pt-SNE captures the inherent fractality of the object across different scales.
\subsection*{Speed/Accuracy trade-off}
\label{sec:speed_accuracy}
A fundamental parameter in the chunk\&mix protocol is the thread-ratio $\rho$ defining the proportion of data running in each partial t-SNE (see subsection~\nameref{sec:chunkandmix} in Methods). Using low thread-ratios, that is, splitting the data into more and smaller chunks, is the key to achieving reasonable running times at large perplexities. However, as the data chunks become smaller, the amount of structural information contained in each of them is lower, leading to a quality loss in the visualization of the local structure. This balance between chunk size and visualization quality is the speed/accuracy trade-off in pt-SNE. As an example, we show the visualization of the Sierpinski-3D data set for $ppx = 1948$ (same as in Fig.~\ref{fig:s3d}~h) with decreasing thread-ratio $\rho=\{1.0, 0.67, 0.50, 0.40, 0.33, 0.25\}$ (Fig.~\ref{fig:speedAcc} panels a to f, see also the Supplementary Material https://rpubs.com/bigMap/841359). As no subset of this data set can adequately reflect the overall data structure, a decrease in the \textit{thread-ratio} rapidly translates into a loss of information. In terms of kNP, a degradation of the local structure is clear (Fig.~\ref{fig:speedAcc}~g), but the global structure is preserved (Fig.~\ref{fig:speedAcc}~h). We also depict the speed/accuracy trade-off by plotting the running times and the logAUC versus the thread-ratio ranging from 1.0 down to 0.25 (Fig.~\ref{fig:speedAcc}~i).
\begin{figure}[tp!]
\centering
\includegraphics[width=14.0cm, height=12.33cm]{figs/results/fig2.pdf}
\caption{\textbf{Speed/Accuracy trade-off} Panels a) to f) ptSNE visualization for $ppx = 1948$ and thread-ratio $\rho=\{1.0,\,0.67,\,0.5,\,0.40,\,0.33,\,0.25\}$; the local structure deteriorates as the thread-ratio decreases. Colors depict relative pair-wise distances in original space (red:closer, blue:farther); g) log and h) linear kNP; i) running-time and $logAUC$ versus thread-ratio.}
\label{fig:speedAcc}
\end{figure}
\subsection*{Parametric setup}
Our parallel implementation of t-SNE demands a smooth parameter optimization, avoiding too-early cluster arrangements that may compromise the long-run convergence among partial t-SNE solutions. To achieve such a smooth optimization, pt-SNE uses a minimal parametric configuration in which we drop \textit{exaggeration} and use auto-adaptive schemes for \textit{momentum} and \textit{learning-rate} (see subsection~\nameref{sec:parametric_configuration} in Methods). Furthermore, pt-SNE works well with random initialization. Therefore, we are basically left with ppx (and the Barnes-Hut parameter $\theta$, which in general works well with the default value $\theta = 0.5$ \cite{Maaten:2014}). Additionally, the chunk\&mix protocol involves the \textit{thread-ratio} $\rho$, given by way of two parameters: $threads$, defining the number of chunks of data, and $layers$, defining the degree of overlapping among threads. These two parameters determine the proportion of data running in each partial t-SNE, $\rho=layers/threads$ (see subsection~\nameref{sec:chunkandmix} in Methods and Section~\nameref{sec:speed_accuracy}).
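As an illustration of how $threads$ and $layers$ determine the thread-ratio, the following hypothetical sketch (not the actual bigMap implementation) shuffles the data, splits it into $threads$ base chunks, and gives each worker $layers$ consecutive base chunks, so that each partial t-SNE sees a fraction $\rho = layers/threads$ of the data and each data point participates in $layers$ partial solutions:

```python
import numpy as np

def assign_chunks(n, threads, layers, seed=0):
    """Illustrative chunk assignment: each of the `threads` workers gets
    `layers` of the `threads` base chunks (cyclically), so each partial
    t-SNE runs on a fraction rho = layers / threads of the data."""
    rng = np.random.default_rng(seed)
    base = np.array_split(rng.permutation(n), threads)
    return [np.concatenate([base[(t + l) % threads] for l in range(layers)])
            for t in range(threads)]

chunks = assign_chunks(n=1000, threads=10, layers=4)   # rho = 0.4
```

Because the workers' chunks overlap, mixing the partial embeddings after each epoch propagates information across the whole data set.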
\begin{figure}[tp!]
\centering
\includegraphics[width=11.66cm, height=14.55cm]{figs/results/fig3.pdf}
\caption{\textbf{UMI-based transcriptomic data} Cluster assignments and annotations taken from the original publications. Panels a to d: \cite{Tasic:2018} $n = 23822$ cells from adult mouse cortex classified into 133 clusters. Warm colors correspond to inhibitory neurons, cold colors correspond to excitatory neurons, brown/gray colors correspond to non-neural cells. Panels e to h: \cite{Macosko:2015} $n=44808$ cells from the mouse retina classified into 39 classes. Amacrine cells (green) and bipolar cells (dark blue) comprise 8 and 21 sub-classes respectively, not shown in the legend, but visible in the embedding). Panels a,~e: Global structure as depicted by the two first principal components of the data. Panels b,~f: FIt-SNE embedding as shown in \cite{Kobak:2018} (multi-scale affinities combining ppxs 30 and $n/100$). Panels c,~g: pt-SNE embedding using high perplexities (20\% and 40\% respectively). Panels d,~h: pt-SNE embedding using low perplexities (0.01\% and 0.005\% respectively).}
\label{fig:UMI}
\end{figure}
\cite{Kobak:2018} explains how to achieve improved t-SNE visualizations that preserve the global structuring in the data using real UMI-based transcriptomic data, e.g. \cite{Tasic:2018} and \cite{Macosko:2015}. The main settings in \cite{Kobak:2018} included PCA initialization, multi-scale affinities combining perplexities 30 and $n/100$, learning-rate $\eta=n/12$ and exaggeration $\alpha=12$. We run pt-SNE on the same data sets to validate our simple parametric setup involving random initialization, no exaggeration, and auto-adaptive schemes for learning-rate and momentum. Performing PCA on the \cite{Tasic:2018} data (Fig.~\ref{fig:UMI}~a) shows three well-separated groups corresponding to excitatory neurons (cold colors), inhibitory neurons (warm colors) and non-neural cells such as astrocytes or microglia (grey/brown colors). Performing separate PCA on these three subsets reveals further (local) structure in each of them (\cite{Kobak:2018}~Supplementary Material). \cite{Kobak:2018} showed that, using their settings, FIt-SNE preserved much of the global structure in the data, clustering the different classes/sub-classes of cells in a coherent pattern. In the case of the \cite{Tasic:2018} data (Fig.~\ref{fig:UMI}~b), inhibitory neurons are well separated into two groups, Pvalb/SSt-expressing (red/yellow) and Vip/Lamp5-expressing (purple/salmon). In the case of the \cite{Macosko:2015} data (Fig.~\ref{fig:UMI}~f), multiple clusters of amacrine cells (green), bipolar cells (dark-blue), and non-neural cells (classes from Mueller glia to microglia) are close together. Using pt-SNE, we achieve similar results by simply switching from high (Fig.~\ref{fig:UMI}~c,~g) to low (Fig.~\ref{fig:UMI}~d,~h) ppx values. Of note, pt-SNE does not mix multi-scale affinities but, instead, shows the different scales by gradually increasing the ppx (see the Supplementary Material https://rpubs.com/bigMap/840112 and https://rpubs.com/bigMap/840131).
For instance, panels d and h show a similar level of local structure as reported in \cite{Kobak:2019} (panels b and f) but a lower level of the global one. However, pt-SNE can achieve higher levels of global structure (panels c and g), up to the point of improving the global structure suggested by the PC plots (panels a and e).
\subsection*{Computational scaling}
By breaking the t-SNE down into chunks, pt-SNE could potentially scale as much as the number of parallel t-SNE instances allowed by our hardware resources. On top of this, we designed pt-SNE to effortlessly work with high-performance computing (HPC) platforms using both intra- and inter-node parallelization. However, two limitations arise: (i) decreasing the \textit{thread-ratio} in excess, that is, reducing the size of the data chunks too much, can result in a loss of data structure information and over-blurring of the visualization (Fig.~\ref{fig:speedAcc}, see also subsection~\nameref{sec:speed_accuracy}), and (ii) the chunk\&mix scheme (alternating short t-SNE runs with mixing of the partial solutions) adds an extra computational time (inter-epoch time) required to pool all partial t-SNE outputs after an epoch has finished, and this inter-epoch time increases with the number of partial t-SNEs (see subsection~\nameref{sec:chunkandmix} in Methods).
\begin{figure}[tp!]
\centering
\includegraphics[width=14.3cm, height=15.8cm]{figs/results/fig4.pdf}
\caption{\textbf{1.3 Million Brain Cells from E18 Mice} \cite{10xGenomics:2018}. Clustering labels taken from \cite{Wolf:2018}. a) pt-SNE embedding with $ppx=65306$ b) Overlay of the 4 largest classes; c),~d) log and linear kNP.; e),~f) pt-SNE vs. FIt-SNE, AUC and running times for different values of ppx.}
\label{fig:10xm}
\end{figure}
We run pt-SNE on the \cite{10xGenomics:2018} data set with $ppx=\{130, 653, 1301, 6531, 13061, 65306\}$, the latter equivalent to 5\% of the data set size (Fig.~\ref{fig:10xm}), to show that pt-SNE scales computationally for large biological data sets of about this size and up to perplexity values far beyond a few hundred. The \cite{10xGenomics:2018} data set is a large transcriptomic data set ($n=1306127$ brain cells from E18 mice) commonly used to assess large-scale visualization procedures \cite{Wolf:2018, Kobak:2018, Belkina:2019}. The clustering labels for these data were derived by \cite{Wolf:2018} using the Louvain clustering algorithm \cite{Blondel:2008} and comprise 39 clusters. However, when visualized through dimensionality reduction algorithms, these clusters show a significant overlap. Our analysis, with an augmented perplexity, offers an improved visualization in terms of compactness of the Louvain clusters (Supplementary Material https://rpubs.com/bigMap/841421) and in terms of kNP (both log and linear, Fig.~\ref{fig:10xm}~c,~d), which could bring more accurate insights about the data. We run this analysis on an HPC platform using MPI and 130 cores (i.e. thread-ratio $\rho=0.015$, data chunks of 20094 observations). At this order of magnitude, it took about 300 min (Fig.~\ref{fig:10xm}~f) to complete the whole process (i.e. computing the bandwidths $\beta_i$ and running the gradient descent). We also run FIt-SNE for values of ppx ranging from 32 to 320 with quite similar results (Supplementary Material https://rpubs.com/bigMap/841412). We could not explore higher ranges because the amount of memory needed to run FIt-SNE with $ppx=320$ was already above 400GB. Of note, the ppxs in pt-SNE and FIt-SNE are not equivalent: in pt-SNE, the ppx must be higher to get similar results because the neighborhood size is smaller (see subsection~\nameref{sec:chunkandmix} in Methods).
In terms of running times, FIt-SNE ($\mathcal{O}\left(n\right)$) is obviously faster than pt-SNE (indeed a BHt-SNE, $\mathcal{O}\left(n\,\log\,n\right)$), but FIt-SNE is extremely dependent on ppx while pt-SNE is almost independent of it (Fig.~\ref{fig:10xm}~f). FIt-SNE performed extremely well at capturing global structure from the very lowest value of ppx, only surpassed by pt-SNE at much larger values of ppx (Fig.~\ref{fig:10xm}~e). However, the pt-SNE embedding shows higher sensitivity to ppx than FIt-SNE (note the increasing linAUC for increasing ppxs, dashed red line in panel e). Therefore, pt-SNE highlights much better the differences in data organization across scales. The drawback is a loss of local structure (solid red line in panel e) due to the small size of the data chunks (i.e. low thread-ratio $\rho=0.015$).
We also compared pt-SNE and FIt-SNE on the \textit{Primes-1M} data set~\cite{Williamson:2019}. This data set is a structured representation of the first $10^6$ integers based on their prime factor decomposition, forming a highly sparse matrix with 78498 dimensions (the set of primes lower than $10^6$) whose values are the prime factorization powers. We note some differences: (i) in Williamson's work, any factor is represented as either a zero or a one, just indicating divisibility by that factor, while we use the actual power of each factor to have unique representations for each integer; (ii) Williamson computes cosine-similarity based affinities, while we use euclidean-distance based affinities.
We run pt-SNE on the \textit{Primes-1M} data set with $ppx=\{1000, 5000, 10000, 20000, 50000\}$ (equivalent to a neighborhood size of 0.1, 0.5, 1, 2 and 5\% of the data set size; see Supplementary Material https://rpubs.com/bigMap/840981). We run pt-SNE using MPI and 100 cores with 4GB/core (thread-ratio $\rho=0.02$). We also run FIt-SNE on the \textit{Primes-1M} data set with $ppx=\{50, 100, 200, 300, 400\}$ (Supplementary Material https://rpubs.com/bigMap/840980). As the FIt-SNE implementation in~\cite{klugerlab:2019} does not support sparse matrices, for this data set we used the FIt-SNE implementation from openTSNE \cite{Policar:2019}, using a single node with 20 cores and 400GB.
\begin{figure}[tp!]
\centering
\includegraphics[width=13.00cm, height=11.45cm]{figs/results/fig5.pdf}
\caption{\textbf{Primes-1M data set embedding}. a), b) pt-SNE visualization with $ppx=\{1000, 50000\}$; c), d) FIt-SNE visualization with $ppx = \{50, 400\}$. Hue color component is a combination of the two first prime factors while saturation depict the factors' powers. Embedding position of the first 15 primes (2 to 47) and of the last prime (999983) displayed in black. Embedding positions of some primes' powers of interest (e.g. 4, 8, 16, 32) displayed in red.}
\label{fig:P1G_embedding}
\end{figure}
\begin{figure}[tp!]
\centering
\includegraphics[width=13.00cm, height=12.90cm]{figs/results/fig6.pdf}
\caption{\textbf{Primes-1M data set. ptSNE embedding versus FIt-SNE embedding}. a), b) pt-SNE log and linear kNP; c), d) FIt-SNE log and linear kNP; e) pt-SNE vs. FIt-SNE logAUC; f) pt-SNE vs. FIt-SNE running times.}
\label{fig:P1G_plots}
\end{figure}
The~\textit{Primes-1M} data set showed a complex structure with strong hierarchical relations between its components, not equally depicted by the two algorithms (Fig.~\ref{fig:P1G_embedding}). The differences observed between the pt-SNE and the FIt-SNE visualizations are mainly due to a different computation of the bandwidths $B_{i}$ (see Supplementary Material Section~S2.3). Also, the two algorithms showed unequal behaviour as we moved from local to global scales. On the one hand, FIt-SNE returned a quite neat depiction of the structure, capturing much of both the local and the global structure, even from the lowest value of ppx (Fig.~\ref{fig:P1G_embedding}~c). Increased ppxs slightly improved the global structure without any loss of information at the local level (Fig.~\ref{fig:P1G_plots}~c,~d). However, the embedding landscape did not change significantly overall (Fig.~\ref{fig:P1G_embedding}~c,~d). On the other hand, beyond $ppx=10000$ pt-SNE starts revealing similar structures to those shown by FIt-SNE, but adding the characteristic blurring due to the chunk\&mix protocol (Fig.~\ref{fig:P1G_embedding}~a,~b). Notably, pt-SNE embedding outputs were much more sensitive to ppx than the ones derived from FIt-SNE (compare Fig.~\ref{fig:P1G_plots}~a,~b with Fig.~\ref{fig:P1G_plots}~c,~d), revealing novel details at high perplexities not observed with FIt-SNE. We can also assess the higher sensitivity of pt-SNE landscapes to ppx by looking at the linAUC curves (global structure) in Fig.~\ref{fig:P1G_plots}~e (dotted lines). By means of pt-SNE, we \textit{gradually} evolve the landscape from local (Fig.~\ref{fig:P1G_embedding}~a) to global structure (Fig.~\ref{fig:P1G_embedding}~b) by simply increasing the ppx (see Supplementary Material https://rpubs.com/bigMap/840981), and this is precisely the aim of pt-SNE. In terms of running times, pt-SNE shows a lower dependence on ppx and lower running times than FIt-SNE as implemented in openTSNE (Fig.~\ref{fig:P1G_plots}~f).
Both examples in this section, the first from the biological domain and the second from the mathematical one, illustrate the main features of pt-SNE: (i) independence of the running times from ppx, enabling global data structure analysis beyond the state of the art; (ii) high sensitivity of the embedded landscapes to ppx, gradually depicting data organization across structuring scales; and (iii) blurring at local scales due to the chunk\&mix protocol, highlighting an unavoidable trade-off between computational speed and accuracy. In Section~S3 of the Supplementary Material, we explain how simple post-processing can efficiently restore the accuracy of the local structure while keeping the global data structure obtained with pt-SNE (Figures S1 and S2).
\section*{Discussion}
As a first principle, methods for exploratory data analysis should be as simple and flexible as possible. These principles are not easy to accommodate, however, and key conceptual and operational limits exist in current visualization methods. FIt-SNE \cite{Linderman:2019} and UMAP \cite{McInnes:2018} are both outstanding algorithms that can perform the gradient descent on large data sets within very reasonable times, but they show complex parameterizations and computational limitations when exploring large-scale data structures. Focusing on t-SNE, our work aims at breaking through these two limitations.
At the origin of the parametric complexity of t-SNE there is contrived trickery in the computation of the gradient descent, intended to force quick convergence and a neat depiction of the structure. We noted that the gradient descent equation by itself suggests an auto-adaptive scheme for the learning-rate (Eq.~\ref{eq:learning_rate}) that smoothly drives the embedding to an optimal solution. Based on this observation, we showed that a simple parametric setup, using random initialization and auto-adaptive schemes for learning-rate and momentum, and discarding the use of exaggeration, can achieve results similar to those reported in the related literature.
The limitation in exploring data across scales might stem from the t-SNE heuristic itself or from the algorithmic implementation of the heuristic. We showed that the t-SNE heuristic simultaneously optimizes both local and global structure (Eq.~\ref{eq:qMeasure}) and that the final embedding is a balanced visualization of one or the other depending only on the value of ppx, just as the heuristic prescribes. Thus, the actual limitation in exploring data across scales is purely an implementation issue, stemming from both the data set size and the neighborhood size parameter (i.e. ppx). To overcome this limitation, we introduced the chunk\&mix protocol as a parallelized re-implementation of t-SNE. The underlying assumption is that large data sets convey a lot of redundant evidence, so that random subsets of the data comprehensively reflect the structure in the data. Therefore, we can approximate the solution by running instances of the algorithm on random subsets of the data and combining the partial results. This approach extends the state of the art, overcoming the impossibility of computing the affinity matrix for large values of ppx, and allows scanning data structure across a wide range of scales. The downside is a loss of detail in the definition of the local structure, the loss increasing as the data chunks get smaller or the redundancy in the data is lower.
As a proof of concept, we developed pt-SNE, a parallel version of BHt-SNE, and showed that the chunk\&mix protocol converges to a good global embedding. Hence, we expect the same approach to apply to other visualization algorithms (e.g. FIt-SNE or UMAP). pt-SNE runs efficiently on HPC platforms up to $10^6$ observations. To scale beyond this order of magnitude, we can increase either the number or the size of the data chunks. Increasing the number of data chunks may result in too-small data subsets and an excessive loss of information at local scales. Still, we could overcome this accuracy loss by combining a large-ppx pt-SNE with a subsequent dimensionality reduction algorithm (e.g. FIt-SNE) at lower ppx, using the former to sketch the global structure and the latter to refine the local one. Increasing the size of the data chunks results in a quadratic increase in running time (pt-SNE being based on BHt-SNE), so it is not a practical solution. However, algorithms like FIt-SNE and UMAP, with running times around $\mathcal{O}\left(n\right)$, could alleviate this limitation. Therefore, a promising solution is to apply the chunk\&mix approach with these much faster algorithms, thus boosting the computation of the gradient descent and allowing larger thread sizes within reasonable running times.
\section*{Methods}
pt-SNE runs multiple instances (independent threads) of t-SNE on different chunks of data (partial t-SNEs). The algorithm starts by randomly allocating the data points in the low-dimensional space (a 2D half-unit disk, i.e. of radius $r=0.5$). Afterwards, a cyclic scheme arranged into \textit{epochs} optimizes the embedding (Fig.~\ref{fig:ptSNE_scheme1}), each epoch involving: shuffling and chunking the data set indexes, exporting a chunk of indexes to each thread, running the partial t-SNEs for a small number of iterations, and pooling the solutions of the partial t-SNEs into a global embedding.
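The epoch cycle can be sketched as follows. This is a serial Python illustration only: `run_partial_tsne` and `pool` are hypothetical placeholders for the parallel worker machinery of the actual implementation, and the initial positions are drawn in a unit square rather than the half-unit disk, for simplicity.

```python
import random

def ptsne_epochs(X, z, epochs, run_partial_tsne, pool):
    """Schematic chunk&mix cycle: shuffle, chunk, run partial t-SNEs, pool.

    run_partial_tsne(chunk, positions) -> updated positions for that chunk;
    pool(partial_results) -> merged global positions. Both are placeholders.
    """
    n = len(X)
    # random initial positions (a square of side 1 here; pt-SNE uses
    # a half-unit disk of radius 0.5)
    Y = {i: (random.uniform(-0.5, 0.5), random.uniform(-0.5, 0.5))
         for i in range(n)}
    for _ in range(epochs):
        idx = list(range(n))
        random.shuffle(idx)                     # shuffle the data set indexes
        chunks = [idx[k::z] for k in range(z)]  # split into z chunks
        partial = [run_partial_tsne(c, {i: Y[i] for i in c}) for c in chunks]
        Y.update(pool(partial))                 # mix the partial solutions
    return Y
```

In the real algorithm each call to `run_partial_tsne` runs on its own worker process and performs a few gradient-descent iterations on its chunk.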
\subsection*{Chunk\&Mix parameters}
\label{sec:chunkandmix}
The parameters that define the chunk\&mix scheme are the following:
\begin{figure}[!t]\centering
\includegraphics[width=13.5cm, height=6.5cm]{figs/methods/ptSNE_scheme1.png}
\caption{\textbf{ptSNE basic parallelization scheme}. The parallelized implementation runs a number (5 in this example) of instances of the t-SNE algorithm in an alternating scheme of short runs and mixing of partial solutions. Each run-and-mix phase is an epoch. In this example, each thread iterates on a single chunk of data, starting with random mapping positions. After a number of iterations the partial t-SNEs are pooled together and mixed. A new epoch is started with each thread iterating on a new chunk of data and its current mapping positions.}
\label{fig:ptSNE_scheme1}
\end{figure}
\begin{figure}[!th]\centering
\includegraphics[width=12cm, height=5.5cm]{figs/methods/ptSNE_scheme3.png}
\caption{\textbf{ptSNE parallelization scheme with 3 layers}. Each thread iterates on 3 chunks of data sharing each one of them with successive threads. Common data points create a link between the partial solutions that favors convergence. As each data point is running on 3 different threads we get 3 different mapped positions for each one. After pooling all partial solutions we get 3 global mapping layers.}
\label{fig:ptSNE_scheme3}
\end{figure}
\paragraph{threads}
The number of threads $z$ is the number of partial t-SNEs that will run. pt-SNE splits the data set into this number of elementary chunks, so that the larger the number of threads, the faster the computation of the final solution. Note that the number of threads can be higher than the number of physical cores available (known as multi-threading), which can yield further reductions in computational time.
\paragraph{layers}
In the simplest scheme (Fig.~\ref{fig:ptSNE_scheme1}), each thread runs a single chunk of data. However, the key to convergence to a common solution is to impose some overlap among data chunks. Instead of running single chunks of data, threads are cyclically chained so that each thread includes at least two chunks of data. Thus, each thread shares one half of its data points with the previous thread and the other half with the next one. The number of \textit{layers} sets the degree of overlap. We name this parameter \textit{layers} because the overlap makes each data point take part in at least two partial t-SNEs and, after pooling all partial solutions, we will have at least two layers of global solutions. As an example, setting $threads=5$ and $layers=3$ (Fig.~\ref{fig:ptSNE_scheme3}), pt-SNE pools chunks 1, 2, and 3 into thread 1, chunks 2, 3 and 4 into thread 2, and so on, up to chunks 5, 1 and 2 into thread 5.
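The cyclic chaining of chunks to threads amounts to a simple modular assignment; a sketch (0-indexed, so thread 0 corresponds to thread 1 in the example above):

```python
def thread_chunks(threads, layers):
    """Cyclically chain chunks to threads: thread k gets chunks
    k, k+1, ..., k+layers-1 (mod threads), so consecutive threads
    overlap in layers-1 chunks."""
    return [[(k + l) % threads for l in range(layers)]
            for k in range(threads)]
```

With `threads=5` and `layers=3` this reproduces the assignment in the text: thread 0 gets chunks 0, 1, 2 and thread 4 gets chunks 4, 0, 1.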
\paragraph{thread-ratio and thread-size}
The \textit{thread-ratio} is defined as $\rho=layers/threads$ and determines the thread-size $\nu=\rho\,n$. Since t-SNE is of order quadratic in the size of the data set, making $\nu\ll n$ overcomes the unsuitability of the t-SNE algorithm for large-scale data sets. The thread-ratio represents a trade-off between accuracy and computational time: the closer the thread-ratio is to 1, the more robust and comprehensive the solution, but the larger the computational cost. However, for large data sets ($n>10^4$), where the redundancy assumption holds, pt-SNE yields a good global solution even with values of $\rho$ as low as 0.01.
\paragraph{epochs and iterations}
The number of epochs is set to $2\log n^2$ (where $n$ is the data set size) and the number of iterations per epoch is set to $2\log \nu^2$ (where $\nu$ is the \textit{thread-size}). We empirically determined that these settings allow reaching a stable solution. Nevertheless, the user can restart the process and run it for additional epochs if the cost function does not show a flat line (Fig.~\ref{fig:gradient_descent}~b,~c). The epoch running time depends on the number of iterations performed by the threads, plus an inter-epoch time for the master process to pool the solutions, mix them, and send new chunks to the workers.
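These defaults reduce to $4\log n$ epochs and $4\log\nu$ iterations per epoch; a sketch (assuming the natural logarithm, which the text does not state explicitly):

```python
import math

def schedule(n, rho):
    """Default run lengths: 2*log(n^2) epochs and 2*log(nu^2) iterations
    per epoch, with thread-size nu = rho * n. Natural log is an assumption."""
    nu = rho * n
    epochs = max(1, round(2 * math.log(n ** 2)))
    iters_per_epoch = max(1, round(2 * math.log(nu ** 2)))
    return epochs, iters_per_epoch
```

For example, $n=10^6$ with $\rho=0.01$ gives 55 epochs of 37 iterations each.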
\subsection*{Partial t-SNEs}
The t-SNE algorithm starts by transforming similarities (usually given as pairwise Euclidean distances) into a probability distribution known as the \textit{affinity matrix}. Computing the whole affinity matrix is prohibitive for large data sets. pt-SNE splits the data set into chunks and runs parallel instances of t-SNE on subsets of the data, each partial t-SNE computing a much smaller affinity matrix. When splitting a data set $X$ into $z$ chunks, we denote the subset of data in each partial t-SNE as $X^k,\, 1\leq k\leq z$, the neighborhood of $i$ in $X$ as $N_{i}$, and the neighborhood of $i$ in $X^{k}$ as $N^{k}_{i}$. Thus, in each partial t-SNE, we compute the similarities in the input and output spaces as follows:
\paragraph{Similarities in the input (high dimensional) space, $\mathcal{X}\in \mathcal{R}^m$}
The similarity between observations $x_j$ and $x_i$, expressed as $\|x_i-x_j \| ^2$, is converted into the conditional probability $p_{j\mid i}$ given by a Gaussian kernel centered at $x_{i}$,
\begin{flalign*}
p_{j\mid i} = \frac{\exp\left(-\beta_{i}\,\|x_i-x_j\| ^2\right)}{\sum_{k\neq i}\exp\left(-\beta_{i}\,\|x_i-x_k\|^2\right)}
&\;,\quad i\in X^{k},\; j\in N^{k}_{i}
\\
p_{j\mid i} = 0
&\;,\quad i\in X^{k},\; j\notin N^{k}_{i}
\end{flalign*}
\noindent with precision $\beta_{i}=1/\left(2\,\sigma_i^2\right)$. Decreasing values of $\beta_{i}$ induce a probability distribution of increasing entropy $H\left(p_{j|i}\right)$ and increasing \textit{perplexity}, defined as,
\begin{equation*}
ppx\left(p_{j|i}\right)=2^{H\left(p_{j|i}\right)}\;,\quad i\in X,\; j\in N_{i}
\label{eq:perplexity}
\end{equation*}
\noindent We stress that $\beta_{i}$ are computed globally, i.e. for the whole data set. The maximum perplexity is equal to $n-1$ (where $n$ is the data set size), corresponding to $\beta_{i}=0$ and a uniform affinity distribution. In a preliminary step, t-SNE computes the values $\beta_{i}$ that result in a fixed perplexity for all $x_{i}$. We describe the procedure to find $\beta_{i}$ in Supplementary File S1. Computing similarities based on perplexities is a powerful transformation because it allows defining affinities in terms of spatial proximity without explicitly referring to any actual distance. The usual interpretation of perplexity is the number of neighbors to be picked: low perplexity will unveil the local structure in the data, whereas high perplexity will enhance the emergence of global structuring. Thus, it is fundamental to tune perplexity according to our requirements or, alternatively, explore data structuring across a range of perplexities. Afterwards, each partial t-SNE computes a symmetric joint probability given by,
\begin{equation*}
p_{ij} = p_{ji} = \frac{p_{j\mid i}+p_{i\mid j}}{2\nu}
\label{eq:h_joint_prob}
\end{equation*}
\noindent where $\nu$ is the thread-size, thus $\sum_{i,j}\,p_{ij} = 1$, and $\sum_j\,p_{ij}>\frac{1}{2\nu},\;\forall\,x_{i}$, so that each data point plays its role in the embedding process \cite{Maaten:2008}.
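In standard t-SNE implementations the precisions $\beta_i$ are found by a one-dimensional search so that the entropy of $p_{j\mid i}$ matches the target perplexity; a minimal bisection sketch (the exact procedure used by pt-SNE is described in Supplementary File S1):

```python
import math

def cond_probs(sq_dists, beta):
    """p_{j|i} for one point i, given squared distances to its neighbours."""
    w = [math.exp(-beta * d) for d in sq_dists]
    s = sum(w)
    return [x / s for x in w]

def beta_for_perplexity(sq_dists, target_ppx, tol=1e-5, iters=50):
    """Bisect on beta so that 2^H(p_{j|i}) matches the target perplexity."""
    lo, hi, beta = 0.0, float("inf"), 1.0
    for _ in range(iters):
        p = cond_probs(sq_dists, beta)
        H = -sum(x * math.log2(x) for x in p if x > 0.0)
        if abs(2.0 ** H - target_ppx) < tol:
            break
        if 2.0 ** H > target_ppx:       # too flat: increase precision
            lo = beta
            beta = beta * 2.0 if hi == float("inf") else (lo + hi) / 2.0
        else:                           # too peaked: decrease precision
            hi = beta
            beta = (lo + hi) / 2.0
    return beta
```

Bisection works because the perplexity is monotonically decreasing in $\beta_i$.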
\paragraph{} To reduce the complexity of the computation of attractive forces, \cite{Maaten:2014} proposed using a neighborhood size of $N_i=3\,ppx$, dramatically decreasing the size of the affinity matrix. To find the nearest-neighbor sets, standard implementations of t-SNE make use of fast approximations like \textit{vantage point trees} \cite{Nielsen:2009} or ANNOY \cite{Annoy:2018} and, afterwards, determine the local bandwidths and compute the affinity matrix. In pt-SNE, the partial t-SNEs run a different chunk of data at each new epoch, thus requiring the recomputation of the affinity matrices. We alleviate this process by precomputing the global neighborhoods $N_i$ and the bandwidths $\beta_i\left(N_i\right)$, and we extract the distance of the furthest neighbor $L_i=\max\{L_{ij}\mid\, j\in N_i\}$. The values $\beta_i$ and $L_i$ are shared with all processes so that each one can use them to determine the neighborhoods $N^k_i=\{j\in\,X^k\mid\,L_{ij}\leq L_i\}$ and compute the partial affinity matrix. This process adds a short inter-epoch computation time. On average, $N^k_i\approx\rho\,3\,ppx$; thus, for low values of $\rho$, the number of forces acting on a data point is much lower in pt-SNE than in standard t-SNE. Therefore, as a general rule, we will use higher perplexities in pt-SNE to obtain equivalent outputs.
\paragraph{Similarities in the output (low-dimensional) space, $\mathcal{Y}\in \mathcal{R}^d$, $d\in\{2, 3\}$}
The similarities between mapped data points $y_j$ and $y_i$, also expressed as $\|y_i - y_j\|^2$, are treated differently. A well-known issue of embedding processes is the so-called \textit{crowding problem}: a surface at a given distance from a data point in a high-dimensional space can enclose more data points than fit in the corresponding low-dimensional area \cite{Maaten:2008}. This problem is alleviated by using a heavy-tailed distribution to represent affinities in the low-dimensional space, namely a Cauchy distribution (i.e. a Student's t-distribution with one degree of freedom). Therefore, we define the joint probabilities $q_{ij}$ as,
\begin{equation}
q_{ij} = \frac{\left(1 + \| y_i-y_j \|^2\right)^{-1}}{\sum_{k\neq l}\left(1 + \| y_k-y_l \|^2\right)^{-1}}\;,\quad i,\,j \in X^{k}
\label{eq:l_joint_prob}
\end{equation}
\subsection*{Cost function}
t-SNE uses a gradient descent method to find a low-dimensional representation of the data that minimizes the mismatch between $p_{ij}$ and $q_{ij}$. The cost function is defined as the Kullback-Leibler divergence between both distributions (\cite{Maaten:2008}),
\begin{equation}
\label{eq:cost_function}
C =KL\left(P\|Q\right)=\sum_{i,j}p_{ij}\log\frac{p_{ij}}{q_{ij}}
\end{equation}
\noindent with a gradient with respect to the low-dimensional mapped positions given as (\cite{Maaten:2008}),
\begin{equation}
\frac{\delta C}{\delta y_i} = 4\sum_j\left(p_{ij}-q_{ij}\right)\left(y_i-y_j\right)\left(1+\|y_i-y_j\|^2\right)^{-1}
\label{eq:cost_gradient}
\end{equation}
\noindent The mode in which the gradient descent operates becomes clear by noting $L_{ij} \equiv \frac{\left(y_i-y_j\right)}{\left(1+\|y_i-y_j\|^2\right)}$ and writing Eq.~\ref{eq:cost_gradient} as,
\begin{equation}
\frac{1}{4}\frac{\delta C}{\delta y_i} = \sum_j p_{ij}\,L_{ij} - \sum_j q_{ij}\,L_{ij} = \mathcal{F}_{attr, i} - \mathcal{F}_{rep, i}
\label{eq:forces}
\end{equation}
\noindent $L_{ij}$ depends only on the pair-wise distances in the embedding, and the computation of the gradient represents a balance between attractive and repulsive forces exerted over each data point by all the others. On the one hand, high affinities in the HD space embedded as long distances in the LD space result in strong attractive forces, and low affinities in the HD space embedded as close distances result in weak attractive forces. On the other hand, repulsive forces monotonically decrease as a function of the embedding distance. Thus, the final embedding positions correspond to the point of equilibrium where attractive and repulsive forces are optimally balanced.
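The force balance can be checked numerically with a direct, exact evaluation of the gradient (a plain-Python sketch, without the Barnes-Hut approximation used by BHt-SNE; `P` and `Y` are nested lists):

```python
def tsne_gradient(P, Y):
    """Exact t-SNE gradient for each point i:
    dC/dy_i = 4 * sum_j (p_ij - q_ij) * (y_i - y_j) / (1 + ||y_i - y_j||^2)."""
    n = len(Y)
    W = [[0.0] * n for _ in range(n)]   # Student-t kernel weights
    Z = 0.0                             # normalisation sum_{k != l} w_kl
    for i in range(n):
        for j in range(n):
            if i != j:
                d2 = sum((a - b) ** 2 for a, b in zip(Y[i], Y[j]))
                W[i][j] = 1.0 / (1.0 + d2)
                Z += W[i][j]
    grad = []
    for i in range(n):
        g = [0.0] * len(Y[i])
        for j in range(n):
            if i != j:
                # (p_ij - q_ij) times the kernel gives the net force along y_i - y_j
                coeff = 4.0 * (P[i][j] - W[i][j] / Z) * W[i][j]
                for d in range(len(Y[i])):
                    g[d] += coeff * (Y[i][d] - Y[j][d])
        grad.append(g)
    return grad
```

For a symmetric $P$ the gradients sum to zero (the net force on the embedding vanishes), and at equilibrium, where $p_{ij}=q_{ij}$, every individual gradient vanishes.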
\paragraph{} In pt-SNE, the gradient descent is performed independently at each partial t-SNE, and the global cost function is computed as an average of the cost functions in each partial t-SNE,
\begin{equation*}
C = \frac{1}{z}\sum_{k} C^{k} = \frac{1}{z}\sum_{k} KL\left(P^{k}\|Q^{k}\right)
\end{equation*}
\paragraph{Finite affinity mass}
t-SNE holds an implicit dependence on the size $n$ of the data set. The reason is that the joint affinity distribution has a finite amount of probability mass to be allocated among all pairwise distances, the latter growing with $n\,\left(n-1\right)$. Therefore, as $n$ grows, affinities will be lower on average, tending to uniformity for the limiting case $n\rightarrow\infty$. This loss in discriminating power generates undesired side effects that we generically call the \textit{finite affinity mass} problem.
\paragraph{Pseudo-normalized cost function}
A first effect of the \textit{finite affinity mass} is that the cost function (Eq.~\ref{eq:cost_function}) itself holds an implicit dependence on $n$, which makes it difficult to interpret its value objectively. Taking into account that $p_{ij}$ and $q_{ij}$ decrease on average with $n\,\left(n-1\right)$, we see that this dependence amounts to,
\begin{flalign}
\nonumber
\langle C \rangle
&\propto -\log \langle q_{ij} \rangle \sum_{i,j} p_{ij} \\
&\propto \;\log\left(n \,\left(n-1\right)\right)
\label{eq:average_cost}
\end{flalign}
\noindent where we have dropped the term $\sum_{i,j}p_{ij}\log p_{ij}$, which is constant along the gradient descent.
The expression in Eq.~\ref{eq:average_cost} is the cost of a uniform distribution of affinities, i.e. the cost of a uniform embedding of $n\,\left(n-1\right)$ pairwise distances, expressing that all data points are equally similar. While it is not feasible to arrange $n\,\left(n-1\right)$ uniform pairwise distances in 2D, such a uniform distribution constitutes the worst possible embedding with respect to $P$, whatever $P$ may be. Thus, Eq.~\ref{eq:average_cost} is an upper bound for $KL\left(P\|Q\right)$ and it makes sense to define a pseudo-normalized cost function as,
\begin{equation}
C = -\frac{\sum_{i,j}p_{ij}\log q_{ij}}{\log\left(n\,\left(n-1\right)\right)}
= -\frac{H\left(P, Q\right)}{H\left(P, U\right)}
\label{eq:normalized_cost}
\end{equation}
\noindent In terms of information theory, this is the \textit{normalized cross-entropy} of distributions $P$ and $Q$, that is, the average cost of coding $P$ as $Q$ relative to the worst-case cost, which is the cost of a uniform embedding of $n\,\left(n-1\right)$ pairwise distances. Also, we can interpret this normalization factor as an expression of the loss in discriminating power due to the \textit{finite affinity mass} which, intuitively, we could approach as the relative increase of entropy of the affinity distribution, amounting to $\log\left(n\left(n -1\right)\right)$. This pseudo-normalized expression results in a value close to 1 for a random initial mapping, thus helping to make sense of the gradient descent and assess the stability of the final output.
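The pseudo-normalized cost is a one-liner given the two affinity distributions; a sketch assuming $P$ and $Q$ are supplied as dictionaries keyed by pairs $(i,j)$:

```python
import math

def normalized_cost(P, Q, n):
    """Pseudo-normalized cost: cross-entropy H(P, Q) over the
    uniform-embedding bound log(n * (n - 1))."""
    H_pq = -sum(p * math.log(Q[ij]) for ij, p in P.items() if p > 0.0)
    return H_pq / math.log(n * (n - 1))
```

As a sanity check, a uniform $Q$ against a uniform $P$ yields exactly 1, consistent with the claim that a random initial mapping gives a value close to 1.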
\subsection*{Parametric configuration of the cost gradient}
\label{sec:parametric_configuration}
pt-SNE starts with a random embedding in a 2D half-unit disk (i.e. of radius $r=0.5$) drawn from an isotropic Gaussian and updates the embedding positions $y_{i}$ using the following expression,
\begin{flalign}
\nonumber
y^t_{i}
&= y^{t-1}_{i} +\Delta y_{i}^t
\\
\Delta y_{i}^{d,t}
&= \mu^t\Delta y_{i}^{d,t-1} - 4\,\eta^t\sum_{j\in N^{k}_{i}}\left(\alpha^t\,p_{ij}-q^t_{ij}\right)\,\frac{l^{d,t}_{ij}}{1+\left(L^{t}_{ij}\right)^2}
\label{eq:mapping_update}
\end{flalign}
\noindent where $t$ and $d$ indicate the current iteration and the embedding dimension respectively, $\mu$ is the \textit{momentum}, $\eta$ is the \textit{learning-rate} and $\alpha$ is the \textit{exaggeration factor}. Also, we recall that $p_{ij}$ and $q_{ij}$ are the affinities in the HD/LD spaces, $L_{ij}$ is the pairwise distance in the embedding space and $l^d_{ij}$ is the component of $L_{ij}$ along $d$.
\paragraph{Learning-rate} The learning-rate $\eta$ has a strong impact on convergence speed, and it is not clear from the literature how to set its value: the default in most t-SNE implementations is $\eta=200$, while \cite{Wolf:2018} recommends increasing it to 1000 and \cite{Belkina:2019} suggests the value $\eta=n/12$. In pt-SNE, we use an auto-adaptive scheme given by,
\begin{equation}
\eta^t_{i} = 2\,\left(d^t+\frac{1}{d^t}\right)\,\log\left(\nu\,N^k_{i}\right)\,g^t_{i}
\label{eq:learning_rate}
\end{equation}
where $d^t$ is the diameter of the embedding at iteration $t$, and where we note the following:
\begin{itemize}
\item
While the size of the embedding clearly changes along the optimization, Eq.~\ref{eq:mapping_update} seems to be lacking a reference size for the distance factors $L_{ij}$ and $l_{ij}$, thus suggesting,
\begin{equation*}
\eta^t \propto \frac{1+\left(d^t\right)^2}{d^t} = d^t+\frac{1}{d^t}
\end{equation*}
The factor $\left(d^t+\frac{1}{d^t}\right)$ renders Eq.~\ref{eq:mapping_update} independent of the size of the embedding and, interestingly, yields equally increasing learning-rates at both sides of $d=1$ (Fig.~\ref{fig:gradient_descent}~a), i.e. for either an expanding or a compressing embedding, the latter occurring for large ppx. Furthermore, if we specifically set
\begin{equation}
\eta^t \propto 2\,\frac{1+\left(d^t\right)^2}{d^t} = \frac{1+\left(d^t\right)^2}{r^t}
\label{eq:plain_learning_rate}
\end{equation}
\noindent where $r$ is the radius of the embedding, then for $d=1$ we have $\eta=4$, which makes sense of the factor 4 in the gradient expression (Eq.~\ref{eq:cost_gradient}) as the learning-rate corresponding to a half-unit-disk embedding (i.e. $r=0.5$). Therefore, we can drop the factor 4 from Eq.~\ref{eq:mapping_update}.
\item
$\log\left(\nu\,N^k_{i}\right)$ compensates for the effect of the \textit{finite affinity mass} problem, rendering Eq.~\ref{eq:mapping_update} independent of the size of the affinity matrix operated on by each partial t-SNE, actually given by $\nu\,N^k_i =\left(\rho\,n\right)\,\left(\rho\,3\,ppx\right)$.
\item
$g^t_i$ is an acceleration factor given as a point-wise step size update by means of the Jacobs adaptive learning rate scheme \cite{Jacobs:1988} which is indeed implemented in BHt-SNE \cite{Maaten:2014}. This factor increases the learning rate in directions in which the gradient is stable, anticipating position updates that are likely to occur in the next steps,
\begin{flalign*}
\text{init:}\quad &g\left(i, 1\right) = 1.0
\\
\text{if}\quad & \text{gradient\_direction}\left(i, t\right) == \text{gradient\_direction}\left(i, t-1\right)
\\
&g\left(i, t\right) \,+= 0.1\,\text{gain}
\\
\text{else}\quad &
\\
&g\left(i, t\right) \,*= \left(1 - 0.1\,\text{gain}\right)
\end{flalign*}
\noindent with $gain = 2.0$ by default.
\end{itemize}
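The gain update above can be transcribed directly, per point and dimension, with `gain` defaulting to 2.0 as stated:

```python
def update_gain(g, grad, prev_grad, gain=2.0):
    """Jacobs-style step-size gain: additive increase while the gradient
    keeps its sign, multiplicative decrease when it flips."""
    same_direction = (grad * prev_grad) > 0.0
    return g + 0.1 * gain if same_direction else g * (1.0 - 0.1 * gain)
```

With the default gain, a stable gradient direction adds 0.2 to the gain factor at each step, while a sign flip shrinks it by a factor 0.8.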
In summary, our auto-adaptive scheme for the learning-rate (Eq.~\ref{eq:learning_rate}) operates in favor of a stable gradient descent. In the initial stages of the optimization, the mismatch between $p_{ij}$ and $q_{ij}$ will likely be large but the embedding size is small, so the learning-rate mitigates the impact of the strong attraction/repulsion forces arising at this critical moment. Along the course of the optimization, the size of the embedding grows, increasing the learning-rate to compensate for the decreasing attraction/repulsion forces.
\begin{figure}[!t]\centering
\includegraphics[width=15.0cm, height=5.3cm]{figs/methods/grad.pdf}
\caption{\textbf{Effect of the parameters in the gradient descent}. a) Plain learning-rate as resulting from Eq.~\ref{eq:plain_learning_rate}; the learning-rate increases proportionally on both sides of $d=1$. b) Plain gradient descent as resulting from Eq.~\ref{eq:plain_learning_rate} (solid lines) and after correcting for sample size by a factor $\log\left(\nu\,N_{i}\right)$ (dashed lines). c) Gradient descent including the gain $g\left(i, t\right)$ (Eq.~\ref{eq:learning_rate}) (solid lines), and gradient descent with learning-rate (Eq.~\ref{eq:learning_rate}) and momentum (Eq.~\ref{eq:momentum}) (dashed lines). Color indicates sample size.}
\label{fig:gradient_descent}
\end{figure}
\paragraph{Momentum} Using momentum $\mu$ speeds up the gradient descent but might blow up the embedding size in the final steps. At this final stage, the learning-rate is high because the embedding has grown large, but the attractive/repulsive forces are almost balanced, hence the position updates should be minor. Adding momentum is therefore risky, so we use a decaying momentum of the form,
\begin{equation}
\mu^t = 0.8\,\left(1-\frac{e}{\epsilon}\right)^2
\label{eq:momentum}
\end{equation}
\noindent where $e$ stands for the current epoch and $\epsilon$ is the total number of epochs.
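The decay schedule in Eq.~\ref{eq:momentum} is straightforward:

```python
def momentum(epoch, total_epochs):
    """Decaying momentum mu = 0.8 * (1 - e/epsilon)^2: starts at 0.8
    and decays quadratically to 0 over the run."""
    return 0.8 * (1.0 - epoch / total_epochs) ** 2
```

The quadratic decay keeps momentum strong early on, when it accelerates convergence, and removes it in the final epochs, when the nearly balanced forces make large position updates risky.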
\paragraph{Exaggeration factor} Using the exaggeration factor $\alpha$ is likely to be detrimental in pt-SNE. Forcing an early arrangement of clusters in each thread might depict a more fragmented global solution at the early stages of the gradient descent, which is unlikely to be solved in subsequent epochs. Therefore we dismiss using exaggeration in pt-SNE.
\paragraph{} We illustrate the effect of the parametric setup described above on the gradient descent (Fig.~\ref{fig:gradient_descent}~b,~c). We note that the plain learning-rate (Eq.~\ref{eq:plain_learning_rate}) yields a very smooth decay of the embedding cost (Fig.~\ref{fig:gradient_descent}~b, solid lines) with a strong dependence on the size of the data set $n$. Including the acceleration factor $g\left(i, t\right)$ has a massive impact on the gradient descent, dramatically reducing the number of iterations needed to reach a stable embedding (Fig.~\ref{fig:gradient_descent}~c; compare the range of the x-axis with panel b). Using a decaying momentum contributes to reaching a stable solution faster, although the final embedding is in general not as good (Fig.~\ref{fig:gradient_descent}~c, dashed lines).
\section*{Data availability}
All data used in this paper is available at https://github.com/jgarriga65/bigMap/examples.
\section*{Code availability}
The pt-SNE algorithm is implemented in the \textbf{bigMap} R-package along with additional tools for the post-processing of the output. We worked out the examples in this paper using version bigMap\_4.5.6 available at https://github.com/jgarriga65/bigMap/package and using the HPC platform at the Computational Biology Lab (CEAB-CSIC) (Table~\ref{tbl:CBLab}).
\section*{Acknowledgments}
We acknowledge members of the Theoretical and Computational Ecology lab for providing comments on previous versions of the manuscript and insights as beta users of the bigMap R-package. This work was supported by the Spanish Ministry (MINECO, Grant CGL2016-78156-R), and the Max Planck Institute for Ornithology (MPIO, Germany). The high-performance computation cluster at the Computational Biology Lab (CEAB-CSIC) was supported by the Spanish Ministry (MINECO, CSIC13-4E-1999).
\section*{Author information}
The authors contributed equally in designing and writing the paper. J.G. developed pt-SNE and performed the analysis.
\section*{Competing interests}
The authors declare no competing interests.
\begin{table}[!t]
\centering
\small
\begin{tabular}{ccccccc}
nodes & model & CPU & cores & fr.(MHz) & RAM & OS (bits)\\
\hline
5 & PowerEdge R420 & Intel(R) Xeon(R) E5-2450L & 16 & 1800 & 161G & 64 \\
7 & PowerEdge R430 & Intel(R) Xeon(R) E5-2650 & 20 & 2300 & 193G & 64 \\
1 & PowerEdge R815 & AMD Opteron(tm) 6380 & 64 & 2500 & 515G & 64 \\
\hline
\end{tabular}
\caption{\textbf{High-performance computing cluster at the Computational Biology Lab (CEAB-CSIC)}. Technical specifications.}
\label{tbl:CBLab}
\end{table}
\section{Introduction}
\label{s:introduction}
Cosmic rays are one of the major ingredients in the interstellar medium (ISM), their energy density being comparable to that of the gaseous phases. Hence, cosmic rays play a major role in shaping the formation and evolution of galaxies in the Universe. The physics of cosmic rays is now investigated with multi-messenger astronomy \citep[see][for a recent review]{becker_tjus_20a}, with a focus on the Milky Way. In recent years, nearby galaxies have become accessible both with radio continuum \citep{irwin_12a} and $\gamma$-ray observations \citep{ackermann_12a} to better constrain cosmic-ray transport parameters. In this review, we present some observational inferences that have been made in the past few years with improved (i.e.\ more sensitive) radio continuum observations, and some of the advances made in modelling them. Our aims are several-fold. First, we wish to explore the physics at cloud scale, at least in an indirect way, such as the entrainment of clouds in a hot wind \citep{brueggen_20a}. Second, the global structure of the ISM dynamics is studied -- something that can be done well for external galaxies -- which may inform simulations ranging from column-type simulations that can resolve supernova blast waves on a 10-pc scale \citep{girichidis_18a}, through global simulations of isolated galaxies \citep{salem_14a,Pakmor_16}, to cosmological zoom-in simulations \citep{pakmor_17a}. Third, we can also explore the relationship with the magnetic field in the halo, which fascinatingly takes the form of an X-shaped morphology \citep{tuellmann_00a,soida_11a}, and compare this with models and simulations that include the effect of magnetic fields \citep{pakmor_17a,steinwandel_20a}. Our work may eventually lead to the understanding necessary to put the frequently used simple recipes for `sub-grid physics' in cosmological simulations of galaxy evolution \citep{vogelsberger_20a} on a sound physical basis.
We will in particular address the question to what extent cosmic rays can influence galaxy evolution in the form of galactic winds \citep[see][for a recent review on the cold component of winds]{veilleux_20a}. Cosmic rays are thought to be responsible for winds that are `cooler and smoother' \citep{girichidis_18a} and can thus lead to higher mass-loss rates than purely thermally driven winds. Also, cosmic ray-driven winds can be successful in environments that are more typical of $L_\star$ galaxies, such as our own Milky Way, and in particular our solar neighbourhood \citep{everett_08a}. These environments have much lower star-formation rate surface densities ($\Sigma_{\rm SFR}$), with $\Sigma_{\rm SFR}$ $\approx 3\times 10^{-3}$~$\rm M_{\odot}\,yr^{-1}\,kpc^{-2}$; observationally, however, they are more difficult to access than canonical starburst galaxies such as M~82 and the nuclear region of NGC~253. These `superwind' galaxies with $\Sigma_{\rm SFR}$\ $\sim 10^{-1}$~$\rm M_{\odot}\,yr^{-1}\,kpc^{-2}$\ \citep{heckman_00a} are more extreme than the relatively benign late-type galaxies that have radio haloes \citep{wiegert_15a}. \citet{dahlem_95a} already suggested a low critical $\Sigma_{\rm SFR}$-value based on radio continuum observations, which was later corroborated by optical emission-line studies using integral field unit spectroscopy \citep{ho_16a,lopez_coba_19a}.
More generally speaking, we can explore which effects drive galactic winds, with processes related to stellar feedback and active galactic nuclei (AGNs) being the main candidates \citep{yu_20a}. Not only the mass-loss rates, but also the composition of the wind fluid is important for galaxy evolution, as are the final fate of the gas and the relation that galaxies have with the circum-galactic medium \citep[CGM; see][for a recent review]{tumlinson_17a}. The main questions that we would like to address with the study of radio continuum haloes (see Fig.~\ref{fig:radiohalo}) are: (i) how predominant are galactic winds?; (ii) what is the role of supernovae, radiation pressure, cosmic-ray pressure, and AGN, and is there a minimum threshold of star formation or black hole activity needed to trigger cool outflows?; (iii) what is the relative distribution of the cool, warm, and hot phases in the wind?; and (iv) what feedback effects do they exert on the host galaxy ISM and CGM?
\begin{figure}[tb]
\centering
\includegraphics[width=0.8\columnwidth]{radiohalo_crop.pdf}
\caption{The three principal components that we aim to study in the radio continuum of a galaxy as seen in the edge-on position}
\label{fig:radiohalo}
\end{figure}
Cosmic rays have recently become a promising candidate to drive galactic winds, although the basic idea was already explored by \citet{ipavich_75a}. Cosmic rays have a relatively soft equation of state, meaning that they build up a gentle pressure gradient in the halo with a scale height of $\sim$1~kpc. This pressure gradient can gently accelerate the gas, possibly in conjunction with the hot ionised gas \citep{breitschwerdt_93a,everett_08a,recchia_16a}. In order to build up the necessary pressure gradient, cosmic rays first have to escape from the star-forming regions \citep{salem_14a}, either by diffusion or by streaming \citep{uhlig_12a}; if the cosmic rays are only passively advected, they act merely as an additional pressure component and so only puff up the gaseous disc a bit more without driving a wind \citep{farber_18a}. Besides creating a wind, cosmic rays may play a key role in accelerating clouds of cold gas via the `bottleneck effect', in which streaming plays an important role \citep{wiener_17a}, significantly boosting the mass-loss rate.
Radio continuum observations trace cosmic-ray electrons, the spectra of which give important clues on their transport. Early works on the integrated radio continuum spectra of galaxies showed that their curved spectra can be explained by a transition from escape-dominated radio haloes at low frequencies to radiation loss-dominated haloes at high frequencies \citep{pohl_91a}. The changing radio spectral index with distance from the star-forming mid-plane can be modelled with diffusion and advection, which result in characteristically different vertical profiles \citep{lisenfeld_00a}.
The analysis of the radio spectral index in external galaxies was for a long time limited by observational capabilities: it is relatively hard to measure the radio spectral index of extended objects using radio interferometry, for instance owing to a lack of sufficiently short baselines. However, with new instruments such as the LOw-Frequency ARray \citep[LOFAR;][]{vanHaarlem_13a} and the upgraded Jansky Very Large Array \citep[JVLA;][]{irwin_12a}, together with improved data reduction techniques, in particular image deconvolution with the multi-scale multi-frequency MS-MFS {\sc clean} algorithm \citep{rau_11a}, some of these limitations have now been overcome.
\subsection{A simplified overview of cosmic ray transport}
We follow the standard paradigm, where cosmic rays are accelerated and injected into the ISM at supernova remnants (SNRs) by diffusive shock acceleration \citep[DSA;][]{bell_78a}. On average, the kinetic energy per supernova is $10^{51}~\rm erg$, a few per cent of which is used for the acceleration of cosmic rays \citep[e.g.][]{rieger_13a}. The cosmic-ray luminosity of a galaxy is then \citep{socrates_08a}:
\begin{equation}
L_{\rm CR} = 3\left(\frac{\epsilon_{\rm SN}}{0.1}\right)\left(\frac{\rm SFR}{\rm M_\odot\,yr^{-1}}\right) \times 10^{40}~\rm erg\,s^{-1},
\label{eq:cosmic_ray_luminosity}
\end{equation}
where $\epsilon_{\rm SN}$ is the energy conversion factor from SN kinetic energy into cosmic rays. Of the energy stored in the cosmic rays, between 1 and 2 per cent is channelled into cosmic-ray electrons, with the rest going into protons and heavier nuclei \citep{beck_05a}.
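As a quick numerical illustration, equation~\eqref{eq:cosmic_ray_luminosity} can be evaluated as follows (a minimal sketch in Python; the function name and example values are our own):

```python
def cosmic_ray_luminosity(sfr, eps_sn=0.1):
    """Cosmic-ray luminosity in erg/s for a star-formation rate in
    Msun/yr, following L_CR = 3 (eps_SN/0.1) SFR x 1e40 erg/s."""
    return 3.0 * (eps_sn / 0.1) * sfr * 1e40

# a Milky Way-like star-formation rate of ~2 Msun/yr:
L_cr = cosmic_ray_luminosity(2.0)    # 6e40 erg/s
L_cre = 0.015 * L_cr                 # ~1.5 per cent goes into electrons
```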
Cosmic-ray transport proceeds by diffusion along and across magnetic field lines, by streaming, or by advection \citep{ensslin_11a}. Diffusion of cosmic rays can be understood as them being scattered at magnetic field irregularities, so that they follow a stochastic path with a bulk speed much smaller than the speed of light. This view is corroborated by the fact that in the Milky Way the cosmic-ray flux has a directional anisotropy of only $10^{-4}$ \citep{ahlers_17a}. Cosmic rays reside in the Galaxy for an energy-dependent time, which is (1--2)$\times 10^{7}$~yr at 1~GeV and decreases as a low fractional power of energy \citep{zweibel_13a}. The turbulence of the magnetic field can either be created by external processes such as supernovae and stellar winds, which inject the turbulence at scales of tens of parsec from where it cascades down to the cosmic-ray gyroradius; this case is usually referred to as cosmic-ray diffusion. Alternatively, cosmic rays can transfer some of their energy and momentum to the magnetic field, thereby creating their own turbulence; this case is referred to as cosmic-ray streaming, in which the cosmic rays also follow the magnetic field lines.
The question of which values of the diffusion coefficient and streaming speed to use is important for numerical simulations. Values for the diffusion coefficient range from $10^{27}$~$\rm cm^2\,s^{-1}$ \citep{salem_14a} over more conventional values of $10^{28}$~$\rm cm^2\,s^{-1}$ \citep{girichidis_18a} to even larger values of $10^{29}$--$10^{30}$~$\rm cm^2\,s^{-1}$ \citep{hopkins_20a}. The canonical Milky Way value of $3\times 10^{28}$~$\rm cm^2\,s^{-1}$ \citep{strong_07a} is model-dependent, particularly on the size of the halo, so that the diffusion coefficient may be higher if the halo is larger. In several works, a small diffusion coefficient is argued to be important so that the interaction with the gas is strong enough \citep{pakmor_16a}. In contrast, \citet{hopkins_20a} argue that the diffusion coefficient needs to be larger, at $10^{29}$~$\rm cm^2\,s^{-1}$, so that the $\gamma$-ray flux in star-forming galaxies is not too high. If anisotropic diffusion is modelled, the ratio of parallel to perpendicular diffusion coefficients is important but only poorly constrained, with canonical values of $D_\parallel/D_\perp = 10$--100. Similarly, the speed of cosmic-ray streaming is largely unknown, although most theories agree that it should be of the order of the Alfv\'en speed. If the Alfv\'en waves are damped, for instance by ion--neutral damping, the wave growth is kept in check and cosmic rays can stream at super-Alfv\'enic speeds \citep{ruszkowski_17a}.
\subsection{Review structure}
\label{ss:review_structure}
A study of cosmic-ray transport in external galaxies aims to determine the value of the diffusion coefficient, including its energy dependence, whether diffusion proceeds isotropically or anisotropically, and to what extent streaming takes over from diffusion in galactic discs as the dominant transport process. To do this, we exploit the synchrotron emission from cosmic-ray electrons. As cosmic rays are injected at sites of star formation, the smearing-out of the radio continuum emission with respect to the star-formation distribution allows us to measure the cosmic-ray transport length. In conjunction with spectral ageing, we can model cosmic-ray transport using the electrons as proxies. This is the basic idea of our approach.
This review is structured as follows. In Section~\ref{s:methodology}, we introduce the methodology used to interpret the radio continuum observations. Section~\ref{s:spinnaker} gives an overview of the software {\sc spinnaker}, which we have developed to model the observations. The next three sections provide an overview of the different methods that have been used: in Section~\ref{s:radio_haloes}, we present the inferences that we can gain from the vertical intensity profiles in edge-on galaxies; Section~\ref{s:radio_continuum_spectrum} summarises what we can learn from the radio continuum spectrum; in Section~\ref{s:face_on_galaxies}, we extend this approach to face-on galaxies. In Section~\ref{s:results}, we summarise the most important results from our studies thus far. These results motivate a new approach to model radio haloes by stellar feedback-driven winds, as laid out in Section~\ref{s:wind}. We put our results into the context of inferences from absorption- and emission-line studies in Section~\ref{s:optical_inferences} and from theory in Section~\ref{s:inferences_from_theory}. In Section~\ref{s:missing_physics}, we discuss the physics missing from our models thus far and how to address this shortcoming in the future. In Section~\ref{s:summary}, we summarise.
\section{Methodology}
\label{s:methodology}
\subsection{Radio continuum emission from galaxies}
Radio continuum emission from galaxies traces cosmic-ray electrons (CR$e^{-}$), which emit synchrotron radiation while spiralling around magnetic field lines. The other contribution is \emph{thermal} emission, which stems from the free--free emission of thermal electrons; for this contribution, the thermal H\,$\alpha$ emission is a good tracer, so that the thermal emission can be separated if desired.
In the interstellar medium, CR$e^{-}$ lose their energy mainly through synchrotron and inverse-Compton (IC) radiation, so that GeV electrons have lifetimes of a few $10^7~\rm yr$. Ionisation and bremsstrahlung losses for typical ISM densities of $n=0.05~\rm cm^{-3}$ result in lifetimes of the order of $10^9~\rm yr$ and can hence be neglected \citep{heesen_09a}, except at low frequencies in dense gaseous, star-forming regions \citep{basu_15a}. A comparison of $\gamma$-ray luminosities with Monte Carlo simulations has shown that cosmic rays sample the mean density of the interstellar medium \citep{boettcher_13a}, hence such an assumption may be justified. The combined synchrotron and IC loss rate for CR$e^{-}$ is given by \citep{longair_11a}:
\begin{equation}
-\left (\frac{{\rm d}E}{{\rm d}t}\right )=b(E)=\frac{4}{3} \sigma_{\rm T} c \left (\frac{E}{m_{\rm
e}c^2} \right )^2 (U_{\rm rad}+U_{\rm B}),
\label{eq:be}
\end{equation}
where $U_{\rm rad}$ is the radiation energy density, $U_{\rm B}=B^2/8\pi$ is the magnetic
energy density, $\sigma_{\rm T}=6.65\times 10^{-25}~\rm cm^2$ is the
Thomson cross-section and $m_{\rm e}=511~\rm keV\,c^{-2}$ is the electron
rest mass. The CR$e^{-}$ energy can be inferred from the critical frequency, where the synchrotron spectrum peaks for an individual electron \citep{beck_15a}:
\begin{equation}
\frac{E}{\rm GeV} \approx \left(\frac{\nu}{16~\rm MHz}\right )^{1/2} \left(\frac{\mu\rm G}{B_{\perp}}\right)^{1/2},
\label{eq:cre_energy}
\end{equation}
where $B_{\perp}$ is the total magnetic field strength perpendicular to the line of sight (i.e. in the sky plane). The time dependence of the energy of an individual CR$e^{-}$ is $E(t)=E_0(1+t/t_{\rm syn})^{-1}$, so that at $t=t_{\rm syn}$ the energy has dropped to half of its initial value $E_0$. The CR$e^{-}$ lifetime, determined by synchrotron losses with a smaller contribution from IC radiation losses, can be expressed as \citep{heesen_16a}:
\begin{eqnarray}
t_{\rm syn} & = &34.2 \left (\frac{\nu}{\rm 1\,GHz}\right )^{-0.5}
\left (\frac{B}{\rm 10\,\mu G}\right )^{-1.5} \\\nonumber & & \left
(1+\frac{U_{\rm rad}}{U_{\rm B}}\right )^{-1}~{\rm Myr}.
\label{eq:t_syn}
\end{eqnarray}
If the CR$e^{-}$ escape time is $t_{\rm esc}$, the effective CR$e^{-}$ lifetime $\tau$ is then given by:
\begin{equation}
\tau^{-1} = t_{\rm syn}^{-1} + t_{\rm esc}^{-1}.
\label{eq:cre_lifetime}
\end{equation}
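Equations~(\ref{eq:t_syn}) and (\ref{eq:cre_lifetime}) are straightforward to evaluate numerically; the following sketch (in Python; the function names and example values are our own) illustrates the frequency scaling of the lifetime:

```python
def t_syn_myr(nu_ghz, b_mug, urad_over_ub=0.0):
    """Synchrotron lifetime in Myr, following
    t_syn = 34.2 (nu/GHz)^-0.5 (B/10 muG)^-1.5 (1 + U_rad/U_B)^-1."""
    return 34.2 * nu_ghz**-0.5 * (b_mug / 10.0)**-1.5 / (1.0 + urad_over_ub)

def tau_eff_myr(t_syn, t_esc):
    """Effective CRe lifetime from 1/tau = 1/t_syn + 1/t_esc."""
    return 1.0 / (1.0 / t_syn + 1.0 / t_esc)

# CRe observed at 150 MHz live sqrt(10) ~ 3 times longer than at 1.5 GHz:
t_lofar = t_syn_myr(0.15, 10.0)    # ~88 Myr
t_lband = t_syn_myr(1.5, 10.0)     # ~28 Myr
```

This factor of $\sqrt{10}$ in lifetime between the two frequencies is what makes low-frequency observations so valuable for tracing cosmic-ray transport far into the halo.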
The CR$e^{-}$ injection spectrum $N(E)\,{\rm d}E = N_0 E^{-\gamma_{\rm inj}}\,{\rm d}E$ is a power law with an injection spectral index of $\gamma_{\rm inj}\approx 2.2$ \citep[fig.~3a in][]{caprioli_11a}. Hence, the integrated radio continuum spectrum can give us important clues about the escape of CR$e^{-}$ because, depending on the energy dependence of the various loss processes, the injection spectrum is converted into a power law with a different slope. For instance, the spectrum is steepened to $\propto E^{-\gamma_{\rm inj}-1}$ if the energy losses are proportional to $E^2$, as is the case for both synchrotron and IC radiation losses \citep{longair_11a}. This means that the radio spectral index is steepened to $\alpha=\alpha_{\rm inj} - 0.5$, where $\alpha_{\rm inj}=(1-\gamma_{\rm inj})/2$ is the injection radio spectral index.\footnote{Radio spectral indices are defined as $I_\nu\propto \nu^{\alpha}$.} Thus, in galaxies with free CR$e^{-}$ escape, the radio continuum spectrum is a power law with $\alpha\approx -0.6$. In contrast, if the CR$e^{-}$ synchrotron and IC losses are important, the spectrum steepens to $\alpha\approx -1.2$ \citep{lisenfeld_00a}.
\subsection{Advection--diffusion approximation}
The CR$e^{-}$ energy spectrum $N(E){\rm d}E$ can be
modelled by solving
the diffusion--loss equation for the CR$e^{-}$ \citep[e.g.][]{longair_11a}:
\begin{equation}
\frac{{\rm d} N(E)}{{\rm d}t} = D \nabla^2 N(E) +
\frac{\partial}{\partial E}\left[ b(E) N(E)\right ] + Q(E,t),
\label{eq:diffloss}
\end{equation}
where $b(E)=-{\rm d}E/{\rm d}t$ for a single CR$e^{-}$ as given by equation~(\ref{eq:be}). Massive spiral galaxies have rather constant star-formation histories, so that the CR$e^{-}$ injection rate can be assumed to be approximately constant and the source term $Q(E,t)$ has no explicit time dependence. If we assume that all sources of CR$e^{-}$ are located in the disc plane, the source term is $Q(E,t)=0$ for $z>0$ (Fig.~\ref{fig:radiohalo}). Equation~(\ref{eq:diffloss}) can be evolved in time until a stationary solution is found. We use a slightly different approach: first, we restrict ourselves to a one-dimensional (1D) problem; second, we impose a fixed inner boundary condition of $N(E, 0) = N_0 E^{-\gamma_{\rm inj}}$. In the stationary case, the change of the CR$e^{-}$ number density $\partial N/\partial t$ is solely determined by the energy-loss term (second term on the right-hand side of equation~\ref{eq:diffloss}). Noticing that for advection we have $\partial N /\partial t = v\, \partial N /\partial z$, we can re-write equation~\eqref{eq:diffloss} for the case of pure advection as:
\begin{equation}
\frac{\partial N}{\partial z} = \frac{1}{v} \left \lbrace \frac{\partial}{\partial E} [b(E)N(E,z)]\right \rbrace,
\label{eq:n_advection}
\end{equation}
where $v$ is the advection speed, assumed here to be constant. Similarly, for diffusion we have $\partial N /\partial t = D\,\partial^2 N /\partial z^2$ (Fick's second law of diffusion), so that we can re-write equation~\eqref{eq:diffloss} for the case of pure diffusion to:
\begin{equation}
\frac{\partial^2 N}{\partial z^2} = \frac{1}{D} \left \lbrace \frac{\partial}{\partial E} [b(E)N(E,z)]\right \rbrace ,
\label{eq:n_diffusion}
\end{equation}
where the diffusion coefficient can be parametrised as a function of energy as $D = D_0(E/{\rm GeV})^{\mu}$. If the diffusion coefficient is energy-dependent, values for $\mu$ are thought to lie between $0.3$ and $0.6$ \citep*{strong_07a}. For diffusion, we also assume that the halo size is much larger than the CR$e^{-}$ diffusion length, so that the CR$e^{-}$ cannot escape at the halo boundary and the decrease of the CR$e^{-}$ number density is solely determined by the energy losses (synchrotron and IC radiation).
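To make the advection solution concrete, the steady-state equation~\eqref{eq:n_advection} can be integrated numerically with a first-order upwind scheme in energy. The following sketch (in Python; all parameter values are illustrative assumptions, not fitted quantities, and the scheme is our own simplification rather than the implementation used in our modelling) marches a power-law injection spectrum up through the halo and recovers the expected loss-induced steepening:

```python
import numpy as np

# steady-state pure advection, v dN/dz = d/dE [ b(E) N ], with
# synchrotron/IC-type losses b(E) = k E^2 (illustrative values)
gamma_inj = 2.2              # injection spectral index
v = 0.1                      # advection speed [kpc/Myr] (~100 km/s)
k = 0.01                     # loss coefficient [1/(GeV Myr)], so the
                             # loss time 1/(k E) is 100 Myr at 1 GeV

E = np.logspace(np.log10(0.5), np.log10(50.0), 200)   # energy grid [GeV]
N = E**-gamma_inj                                     # injection spectrum

dz = 0.002                   # step [kpc]; keeps the energy CFL number < 1
for _ in np.arange(0.0, 3.0, dz):          # march from z = 0 to 3 kpc
    F = k * E**2 * N                       # energy-space flux b(E) N
    dFdE = np.empty_like(F)
    # upwind difference: losses move electrons towards lower energies
    dFdE[:-1] = (F[1:] - F[:-1]) / (E[1:] - E[:-1])
    dFdE[-1] = -F[-1] / (E[-1] - E[-2])    # no electrons above E_max
    N = N + (dz / v) * dFdE

# spectral slope between 1 and 2 GeV after 3 kpc of advection:
# steeper than the injection slope because of the losses
i1, i2 = np.searchsorted(E, [1.0, 2.0])
q_top = -np.log(N[i2] / N[i1]) / np.log(E[i2] / E[i1])
```

The same loss term combined with $D\,\partial^2 N/\partial z^2$ instead of $v\,\partial N/\partial z$ yields the diffusion solution of equation~\eqref{eq:n_diffusion}.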
If we drop the assumption of a constant advection speed, the CR$e^{-}$ number density will change even if the cross-sectional area $A$ of the outflow is constant, as follows from the continuity equation:
\begin{equation}
n_{\rm CR} v A = \rm const.,
\end{equation}
where $v$ is the advection speed and $n_{\rm CR}$ is the CR$e^{-}$ number density. Additionally, there are adiabatic losses (cooling) that can be described as:
\begin{equation}
-\left (\frac{{\rm d}E}{{\rm d}t}\right ) = \frac{1}{3}(\nabla\cdot v)E = \frac {E}{t_{\rm ad}}.
\end{equation}
For a linearly accelerating wind with a constant cross-sectional area, the adiabatic loss time-scale is:
\begin{equation}
t_{\rm ad} = 3\left (\frac{{\rm d}v}{{\rm d}z}\right)^{-1}.
\end{equation}
An outflow that is either expanding laterally with an increasing cross-section or accelerating hence leads to adiabatic losses. Both effects can of course also work in combination; they decrease the cosmic-ray energy density, such that the cosmic rays can be in equipartition with the magnetic field. Assuming that the cosmic rays are in equipartition with the magnetic field in the disc plane, a constant advection speed in conjunction with a non-expanding outflow leads to a severe violation of equipartition in the halo \citep{mora_19a}.
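For a sense of the magnitude, a wind accelerating linearly by 100~km\,s$^{-1}$ per kpc gives the following adiabatic time-scale (a minimal Python sketch; the numbers are illustrative, the unit constants are standard):

```python
KPC_KM = 3.0857e16    # kilometres per kiloparsec
MYR_S = 3.156e13      # seconds per megayear

# t_ad = 3 (dv/dz)^-1 for a linearly accelerating wind
dv_dz = 100.0 / KPC_KM          # velocity gradient [1/s]
t_ad = 3.0 / dv_dz / MYR_S      # adiabatic time-scale, ~29 Myr
```

This is comparable to the synchrotron lifetime of GeV electrons at GHz frequencies, so adiabatic losses cannot in general be neglected in an accelerating wind.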
We also have to assume a magnetic field distribution. For simplicity, we first parametrise the magnetic field as an exponential distribution, so that the magnetic field strength is:
\begin{equation}
B(z) = B_0 \exp(-z/h_{\rm B}),
\label{eq:one_comp_bfield}
\end{equation}
where $h_{\rm B}$ is the magnetic field scale height. The magnetic field strength in the mid-plane $B_0$ is then a fixed parameter calculated with the revised equipartition formula \citep{beck_05a}. Alternatively, we also use a two-component exponential magnetic field:
\begin{equation}
B(z) = B_1 \exp(-z/h_{\rm B1}) + B_2 \exp(-z/h_{\rm B2}),
\label{eq:two_comp_bfield}
\end{equation}
where $h_{\rm B1}$ and $h_{\rm B2}$ are the magnetic field scale heights in the thin and thick radio disc, respectively, with the magnetic field strengths related as $B_0=B_1+B_2$. The thick radio disc is also referred to as radio halo (see Fig.~\ref{fig:radiohalo}).
\begin{figure}[tb]
\includegraphics[width=\columnwidth]{t_syn.pdf}
\caption{Comparison of advection and diffusion, where we plot the normalised CR$e^{-}$ number density as function of the CR$e^{-}$ transport length. The CR$e^{-}$ injection spectral index is $\gamma_{\rm inj}=3$ (\emph{top panel}) and $\gamma_{\rm inj}=2$ (\emph{bottom panel}). Further parameters as described in the text are $B=10~\mu\rm G$, $U_{\rm rad}/U_{\rm B}=0.3$, $D=10^{28}~\rm cm^2\,s^{-1}$, and $v=100~\rm km\, s^{-1}$. Models adopted from \citet{heesen_16a}}
\label{fig:t_syn}
\end{figure}
\subsubsection{Cosmic-ray electron transport length}
\label{sss:cre_transport_length}
With the most simplistic description, the cosmic-ray diffusion length can be described as:
\begin{equation}
D = \frac{L^2}{4\tau},
\label{eq:diffusion}
\end{equation}
where $D$ is the isotropic diffusion coefficient and $\tau$ is the CR$e^{-}$ lifetime. Hence, it follows that the cosmic-ray transport length $L$ scales only with the square root of the CR$e^{-}$ lifetime as $L=\sqrt{4D\tau}$. Using convenient units, we find:
\begin{equation}
D = 0.75\times 10^{29}~\frac{(L/{\rm kpc})^2}{\tau/{\rm Myr}}~\rm cm^2\,s^{-1}.
\label{eq:diffusion_coefficient}
\end{equation}
Conversely, advection can be simply described as:
\begin{equation}
v = \frac{L}{\tau},
\label{eq:advection}
\end{equation}
where $v$ is the advection speed. Or, in convenient units:
\begin{equation}
v = 980\frac{L/{\rm kpc}}{\tau/{\rm Myr}}~\rm km\,s^{-1}.
\end{equation}
For advection, the CR$e^{-}$ transport length scales linearly with the CR$e^{-}$ lifetime as $L=v\tau$. For small CR$e^{-}$ lifetimes, diffusion happens faster than advection and so diffusion dominates over advection near the sources in the star-forming disc. Equating diffusion and advection length, $\sqrt{4D\tau}=v\tau$, the CR$e^{-}$ lifetime becomes:
\begin{equation}
\tau = \frac{4D}{v^2},
\end{equation}
or, in convenient units:
\begin{equation}
\tau = 12\frac {D / 10^{28}~{\rm cm^2\,s^{-1}}}{(v/100~{\rm km\,s^{-1})^2}}~\rm Myr.
\end{equation}
Inserting this lifetime into equation~\eqref{eq:advection}, we obtain the cosmic-ray transport length, where the transition from diffusion to advection happens:
\begin{equation}
z_{\star} \approx 1.2 \frac{D / 10^{28}~{\rm cm^2\,s^{-1}}}{v / {\rm 100~\rm km\,s^{-1}}}~\rm kpc.
\label{eq:diff_adv}
\end{equation}
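The transition can be evaluated directly (a Python sketch; the function name is our own, and the small differences with respect to the rounded coefficients quoted above stem from the exact unit conversions):

```python
KPC_CM = 3.0857e21    # centimetres per kiloparsec
MYR_S = 3.156e13      # seconds per megayear

def diffusion_advection_transition(D_cms, v_kms):
    """Return (tau in Myr, z_star in kpc) at which advection overtakes
    diffusion: tau = 4 D / v^2 and z_star = v tau."""
    v = v_kms * 1e5                   # advection speed in cm/s
    tau_s = 4.0 * D_cms / v**2
    return tau_s / MYR_S, v * tau_s / KPC_CM

# D = 1e28 cm^2/s and v = 100 km/s give ~12.7 Myr and ~1.3 kpc
tau, z_star = diffusion_advection_transition(1e28, 100.0)
```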
\begin{figure*}[tb]
\includegraphics[width=\textwidth]{n5775}
\caption{{\sc spinnaker} as viewed with {\sc spinteractive}. Application to NGC~5775 with LOFAR 150-MHz and CHANG-ES 1.5-GHz data. The left panel summarises the parameters. The middle panel shows, from top to bottom, the 150-MHz data, the 1.5-GHz data, and the radio spectral index. The right panel shows the parameters that can be interactively changed.}
\label{fig:spinteractive}
\end{figure*}
The diffusion-dominated region near the mid-plane extends to heights of $z\lesssim z_\star$, whereas the advection-dominated region in the halo is at heights of $z\gtrsim z_\star$ \citep{recchia_16a}.
In Fig.~\ref{fig:t_syn}, we plot the CR$e^{-}$ number density for both advection and diffusion as a function of the CR$e^{-}$ transport length. The transition happens at about $0.6$~kpc, where for diffusion the CR$e^{-}$ number density drops rapidly, so that advection takes over as the dominant transport mode. For the modelling of the cosmic-ray transport, it is hence useful to approximate the transport by \emph{pure advection} if the galaxy has a wind, because diffusion is suppressed in the halo, where we model the data. In contrast, if a galaxy has no wind, we can approximate the transport by \emph{pure diffusion}. This is the approach we take in the following.
\subsection{Expected relations}
\label{ss:expected_relations}
Intensity scale heights in edge-on galaxies can be used in two ways to investigate cosmic-ray transport. For both methods, we use the \emph{equipartition} assumption to derive the CR$e^{-}$ scale height $h_{\rm e}$ from the non-thermal intensity scale height $h_{\rm syn}$, where $\alpha_{\rm nt}$ is the non-thermal radio spectral index:
\begin{equation}
h_{\rm e} = \frac{3 -\alpha_{\rm nt}}{2}\, h_{\rm syn}.
\label{eq:h_e}
\end{equation}
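For instance (a minimal Python sketch; the function name is our own):

```python
def cre_scale_height(h_syn_kpc, alpha_nt):
    """CRe scale height from the synchrotron scale height under
    equipartition, h_e = (3 - alpha_nt)/2 * h_syn."""
    return 0.5 * (3.0 - alpha_nt) * h_syn_kpc

# a halo with h_syn = 1 kpc and a non-thermal spectral index of -1:
h_e = cre_scale_height(1.0, -1.0)    # 2.0 kpc
```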
The first method is then to measure the scale height at two different frequencies (or more), where the different frequency-dependence of the scale height can be used to distinguish between advection and diffusion. Combining the CR$e^{-}$ synchrotron lifetime (equation~\ref{eq:t_syn}) with the advection transport length (equation~\ref{eq:advection}), we obtain for \emph{advection}:
\begin{equation}
h_{\rm e} \propto \nu^{-0.5}B^{-3/2}.
\label{eq:advection_scale_height}
\end{equation}
Similarly, using the diffusion transport length (equation~\ref{eq:diffusion}), we obtain for \emph{diffusion}:
\begin{equation}
h_{\rm e} \propto \nu^{(\mu-1)/4}B^{-(\mu+3)/4}.
\label{eq:diffusion_scale_height}
\end{equation}
Hence, for diffusion the CR$e^{-}$ scale height depends less on the frequency than for advection. For an energy-dependent diffusion coefficient, this frequency dependence of the CR$e^{-}$ scale height is reduced even further, such that for a hypothetically strong energy dependence with $\mu=1$, the frequency dependence of the scale height vanishes entirely.
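These scalings translate into concrete, testable scale-height ratios between observing frequencies. A short sketch (in Python; the frequency pair is our choice, matching typical LOFAR and L-band observations):

```python
nu_ratio = 0.15 / 1.5    # 150 MHz relative to 1.5 GHz

# advection: h_e ~ nu^-0.5
h_ratio_adv = nu_ratio**-0.5                      # ~3.2

# diffusion: h_e ~ nu^((mu - 1)/4)
h_ratio_diff = nu_ratio**((0.5 - 1.0) / 4.0)      # mu = 0.5: ~1.3
h_ratio_diff_mu1 = nu_ratio**((1.0 - 1.0) / 4.0)  # mu = 1: exactly 1
```

A factor-of-ten lever arm in frequency thus changes the advection scale height by a factor $\sqrt{10}\approx 3.2$, but the diffusion scale height (for $\mu=0.5$) by only $\approx$30 per cent, which is what makes the two transport modes distinguishable.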
It is important to be aware that the above relations only apply as long as the energy losses of the CR$e^{-}$ are high, as is for instance the case if the magnetic field strength in the halo is constant, so that the CR$e^{-}$ lose all their energy. This scenario is referred to as the \emph{calorimetric case}. More realistically, galaxies may lose some of their CR$e^{-}$, or the CR$e^{-}$ may even escape almost freely from the galaxy; this is referred to as the \emph{non-calorimetric} case. For the latter, we do not expect any dependence of the scale height on frequency. This is the case if the escape time-scale:
\begin{equation}
t_{\rm esc} = \frac{h_{\rm e}}{v}
\end{equation}
is much smaller than the CR$e^{-}$ lifetime, i.e. $t_{\rm esc}\ll \tau$.
Since the CR$e^{-}$ lifetime depends mostly on the frequency and the magnetic field strength, attempts so far have concentrated on measuring the CR$e^{-}$ transport length as a function of these two quantities. Even more challenging is to quantify the influence of the magnetic field structure, whose effect on the parallel diffusion coefficient can be parametrised as \citep{shalchi_09a}:
\begin{equation}
D_\parallel \propto \left (\frac{B_{\rm ord}}{B_{\rm turb}}\right )^2B_{\rm ord}^{-1/3},
\end{equation}
where $B_{\rm ord}$ is the ordered magnetic field strength and $B_{\rm turb}$ is the turbulent magnetic field strength. As we shall see, the main challenge lies in separating the effects of spectral ageing from the influence of the magnetic field. The basic idea is to use the equations for advection and diffusion to separate them; to this end, we implemented them in a simple-to-use computer program.
\input{par}
\begin{figure}[tb]
\includegraphics[width=\columnwidth]{diffadv_crop.pdf}
\caption{Family of {\sc spinnaker} models for various CR$e^{-}$ injection spectral indices at 1.4~GHz (solid lines) and 5~GHz (dashed lines). (a) is for advection and (b) is for diffusion. The magnetic field is a 1-component exponential function with $B=10~\mu\rm G\exp(-z/4~{\rm kpc})$, the advection speed is constant with $v_0=200~\rm km\,s^{-1}$ and the diffusion coefficient is $D=3\times 10^{28}(E/{\rm GeV})^{0.5}$~$\rm cm^2\,s^{-1}$. The first row shows the CR$e^{-}$ number density, the second row the non-thermal intensity and the third row the non-thermal radio spectral index between $1.4$ and 5~GHz. From \citet{heesen_16a}}
\label{fig:diffadv}
\end{figure}
\section{An overview of {\sc spinnaker}}
\label{s:spinnaker}
The above equations were implemented in the computer program SPectral INdex Numerical Analysis of K(c)osmic-ray Electron Radio-emission ({\sc spinnaker}).\footnote{https://github.com/vheesen/Spinnaker} The interactive version, {\sc spinteractive}, allows one to fit the intensity and radio spectral index profiles in a convenient way (see Fig.~\ref{fig:spinteractive}). In Table~\ref{tbl:par}, we present the parameters that are fitted in each model.
Before we present the various options, we briefly summarise the degeneracies involved in the empirical modelling, in particular with respect to the CR$e^{-}$ density, advection speed, and diffusion coefficient. We treat the magnetic field strength in the disc as a fixed parameter, measured from the energy equipartition between cosmic rays and the magnetic field. The main degeneracy we have to resolve is that either a high advection speed or a high diffusion coefficient leads to a higher CR$e^{-}$ density in the halo, which can compensate a weaker magnetic field such that the model still matches the observed level of intensity. Conversely, a strong magnetic field can compensate a lower CR$e^{-}$ density in the halo, resulting in the same radio continuum intensity. This degeneracy can be resolved by using the radio spectral index, since a higher advection speed or diffusion coefficient leads to a flatter radio spectral index profile as the ageing of CR$e^{-}$ is suppressed. This is the reason why this kind of modelling can work at all, so that we obtain fairly reliable values for the diffusion coefficient and/or the advection speed \citep{heesen_16a}.
\subsection{Diffusion}
Pure diffusion is chosen by ${\tt mode=1}$, where the diffusion equation~\eqref{eq:n_diffusion} provides the CR$e^{-}$ number density profile as presented in Fig.~\ref{fig:diffadv}(b). As can be seen, the diffusion approximation results in CR$e^{-}$ number density profiles that are flatter in the inner parts of the galaxy but steeper in the outskirts. The corresponding radio intensity profiles can thus be better described by Gaussian rather than by exponential functions \citep{heesen_19a}. The models presented in Fig.~\ref{fig:diffadv}(b) assume a non-constant, exponential magnetic field; while the magnetic field distribution influences the intensity profiles, the profiles remain markedly different from those for advection (see also Fig.~\ref{fig:t_syn}). The profiles of the radio spectral index are affected as well and have a `parabolic' shape. For diffusion, we fit both the diffusion coefficient $D_0$ and the energy dependence $\mu$.
\subsection{Advection}
The option $\tt mode = 2$ selects pure advection for the CR$e^{-}$ transport, where the CR$e^{-}$ number density is calculated according to equation~\eqref{eq:n_advection}.
\subsubsection{Constant advection speed}
For $\tt velocity\_field=0$, the advection speed is constant, which means the CR$e^{-}$ number density is regulated by radiation losses only. Hence, the CR$e^{-}$ number density decreases gradually with distance, in contrast to diffusion (see Fig.~\ref{fig:diffadv}a). The radio spectral index then also steepens more gradually than in the diffusion solution, so that a linear function is a better fit.
For advection with a constant wind speed, we fit simultaneously for the advection speed $v_0$ and the magnetic field scale height. In principle, there is a \emph{degeneracy} between the advection speed and the magnetic field scale height if only one of the intensities is studied: a smaller magnetic field scale height can be compensated by a larger advection speed. However, the radio spectral index is also very dependent on the advection speed, so that a unique solution can be found (Fig.~\ref{fig:chi2}). Depending on whether the vertical profile needs one or two magnetic field components (equations~\ref{eq:one_comp_bfield} and \ref{eq:two_comp_bfield}), we may also need to fit the magnetic field strength $B_1$ and scale height $h_{\rm B1}$ of the thin radio disc. If the angular resolution is sufficiently high to resolve the thin disc, it may be beneficial to only fit the radio spectral index in the halo, where advection dominates \citep{heesen_18b}.
\begin{figure}[tb]
\includegraphics[width=\columnwidth]{n4631_chi2_crop.pdf}
\caption{Reduced $\chi^2$ for advection with a constant speed in the northern halo of NGC~4631. Solid lines (best-fitting intensities) define a diagonal area running from the top left to the bottom right, so that there is a degeneracy between advection speed and magnetic field scale height. The solution is nevertheless unique because the radio spectral index requires advection velocities as indicated by the dashed lines, almost independently of the magnetic field scale height. The overlapping area in the middle, with the star at its centre, defines the allowed best-fitting solutions. From \citet{heesen_18b}}
\label{fig:chi2}
\end{figure}
The assumption of a constant advection speed has the advantage that the advection speeds can be accurately measured, and these speeds can be regarded as lower limits. The downside is that the cosmic-ray energy density is then not in equipartition with the magnetic field, for which an accelerating wind would be necessary \citep{mora_19a}.
\begin{figure*}[tb]
\centering
\includegraphics[width=\textwidth]{dumbbell_crop.pdf}
\caption{Examples of two prominent dumbbell-shaped radio haloes: NGC~253 (a) and NGC~891 (b). Shown is the radio continuum intensity at 4850~MHz (a) and 146~MHz (b) as contours overlaid on optical images from the Digitized Sky Survey (DSS). From \citet{heesen_09a} (a) and \citet{mulcahy_18a} (b), respectively}
\label{fig:dumbbell}
\end{figure*}
\input{sample}
\subsubsection{Accelerating advection speed}
If the outflow has no lateral expansion, an accelerating wind can be a way to ensure energy equipartition in the halo. We note that radio haloes have a box-shaped outline, where the radial extent of the halo hardly changes with height and is well correlated with the size of the star-forming disc \citep{dahlem_06a,heesen_18c,heald_21a}, which argues against a strong lateral expansion. Hence, dropping the assumption of a constant advection speed, we are able to ensure energy equipartition in the halo, for instance by using an exponential velocity distribution ($\tt velocity\_field =1$). This introduces one more free parameter, the velocity scale height $h_v$, so that the advection speed becomes $v(z)=v_0\exp(z/h_v)$. Galactic winds essentially accelerate linearly in the region where mass and energy are injected, before the acceleration tails off. This is the basic picture of the analytic wind model of \citet{chevalier_85a}. Including different driving agents such as radiation pressure and cosmic rays changes this picture only slightly \citep{yu_20a}. \citet{heesen_18c} applied such a model successfully to the dwarf irregular galaxy IC~10. For exponential magnetic fields, energy equipartition requires $h_v\approx h_{\rm B}/2$, so that the magnetic energy density is in agreement with the cosmic-ray energy density.
Another option is a wind velocity profile with a polynomial shape ($\tt velocity\_field=2$), where the advection velocity is parametrised as:
\begin{equation}
v = v_0 \left [ 1 + \left (\frac{z}{h_v}\right )^\beta\right ].
\label{eq:advection_profile}
\end{equation}
For $\beta=1$, the wind accelerates linearly, whereas for $\beta=0.5$ the wind accelerates rapidly at first and the acceleration then tails off. The former is a good approximation for a cosmic ray-driven wind, where both simulations \citep{girichidis_18a} and semi-analytical 1D wind models \citep{everett_08a} predict linear velocity profiles. The latter is a closer approximation to stellar-driven wind models \citep{lamers_99a}.
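The two accelerating-wind parametrisations above can be sketched numerically as follows (a minimal illustration, assuming heights in kpc and speeds in $\rm km\,s^{-1}$; the function names are illustrative and not the actual {\sc spinnaker} interface):

```python
import numpy as np

def v_exponential(z, v0, h_v):
    """Advection speed for velocity_field = 1: v(z) = v0 * exp(z / h_v)."""
    return v0 * np.exp(z / h_v)

def v_polynomial(z, v0, h_v, beta):
    """Advection speed for velocity_field = 2: v(z) = v0 * (1 + (z/h_v)**beta)."""
    return v0 * (1.0 + (z / h_v) ** beta)

z = np.linspace(0.0, 5.0, 6)                            # height above mid-plane [kpc]
v_exp  = v_exponential(z, v0=150.0, h_v=2.0)            # exponential acceleration
v_lin  = v_polynomial(z, v0=150.0, h_v=2.0, beta=1.0)   # linear acceleration
v_fast = v_polynomial(z, v0=150.0, h_v=2.0, beta=0.5)   # fast initial rise

# All profiles start at v0 in the mid-plane; for beta = 1 the profile is linear in z.
```

At $z=h_v$ the polynomial profile has doubled to $2v_0$ regardless of $\beta$, so the parameters mainly control where and how quickly the wind accelerates.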
\subsubsection{Advection in a wind}
Acceleration is not the only way to achieve equipartition in the halo; the second possibility is lateral expansion. Such a geometry can be a spherical outflow, as is the case for M~82, or a flux tube geometry, which has been used to model cosmic ray-driven winds. We use the latter as it better represents the morphology of radio haloes. There is a choice of magnetic field parametrisation with either a purely vertical field geometry or a helical field with both azimuthal and vertical components. Faraday rotation measurements indicate that the magnetic field in the halo may be \emph{helical} \citep{heesen_11a,mora_19b,stein_20a}, so that there is an azimuthal component as well; hence we chose such a configuration. Nevertheless, we point out that there is a degeneracy between the assumed magnetic field geometry and the acceleration of the advection speed. Changing the magnetic field strength results in different energies of the CR$e^{-}$ we can probe (Equation~\ref{eq:cre_energy}), so that the spectral ageing is changed as well.
Hence, the third possibility is advection as a result of a simplified wind model using an iso-thermal wind solution (${\tt velocity\_field} =3$). This option will be motivated in more detail in Section~\ref{s:wind} \citep[see also][]{heald_21a}. Basically, this results in an approximately linear advection speed profile with approximate energy equipartition between the cosmic rays and the magnetic field. The simplified wind equation assumes a constant sound speed (iso-thermal wind model) and a flux tube geometry \citep{breitschwerdt_91a}. This allows us to describe a stellar feedback-driven wind with few free parameters; the parameters to be fitted are then the advection speed at the critical point $v_0$, the flux tube scale height $z_0$ and the flux tube opening parameter $\beta$. This updated model is successful in matching the vertical distribution of non-thermal radio emission, and the vertical steepening of the associated spectral index, in a consistent conceptual framework with few free parameters.
\begin{figure*}[tb]
\includegraphics[width=\textwidth]{profiles_crop.pdf}
\caption{Vertical intensity profiles in the haloes of four nearby edge-on spiral galaxies. The top row shows two examples of diffusion-dominated radio haloes, NGC~4565 at 140~MHz (a) and NGC~7462 at 1.4 and 4.7~GHz (b). The bottom row shows two advection-dominated examples, NGC~4217 at 140~MHz (c) and NGC~4631 at 1.4 and 4.9~GHz (d). Diffusion-dominated haloes have Gaussian intensity profiles (preferentially with only a 1-component profile), whereas advection-dominated radio haloes have exponential intensity profiles, preferentially with 2-component profiles. Adapted from (a) \citet{heesen_19b}, (b) \citet{heesen_16a}, (c) \citet{stein_20a} and (d) \citet{heesen_18b}}
\label{fig:diffusion}
\end{figure*}
\section{Radio haloes}
\label{s:radio_haloes}
Radio haloes offer us the possibility to apply the simple models of cosmic-ray transport to the distribution of electrons in the halo. While some degeneracy remains between the magnetic field and the cosmic rays, the radio spectral index and intensity distributions agree to first order with the models. This motivates exploiting the spatially resolved radio continuum emission to study cosmic-ray transport in more detail. Figure~\ref{fig:dumbbell} shows two prominent radio haloes as examples of what can be seen in the radio continuum. What is immediately clear is that the morphology of the radio haloes is not sphere-like, something that has been invoked to explain the radio sky background \citep{singal_15a}. With such an outside view we can also fairly easily check the size of the radio halo: \citet{miskolczi_19a} showed that a radio halo can extend to a size of up to 10~kpc, as was also suggested by the modelling of the Milky Way halo \citep{orlando_13a}.
As this review focuses on what we have learned from the modelling with {\sc spinnaker}, we build on the sample of \citet{heesen_18b}, who investigated 12 edge-on galaxies. Since then, a few more galaxies have been investigated in a similar way, so that we now have a sample of 16 galaxies that were analysed in a consistent way. These galaxies are listed in Table~\ref{tbl:sample}.
\subsection{Profile shape}
\label{ss:profile_shape}
Depending on the shape of the magnetic field distribution in the halo, the CR$e^{-}$ distribution differs between diffusion and advection, allowing us to distinguish between these two processes. Assuming an exponential magnetic field distribution is a natural first step, since the radio continuum emission in the halo has an exponential distribution as well. The advection--diffusion approximation then shows that diffusion leads to approximately \emph{Gaussian} intensity profiles, whereas advection leads to approximately \emph{exponential} intensity profiles \citep[see][and Section~\ref{s:spinnaker}]{heesen_16a}.
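The two limiting profile shapes can be written down directly (a minimal sketch; $I_0$ and $h_{\rm syn}$ are illustrative parameters, and the normalisation is arbitrary):

```python
import numpy as np

def gaussian_profile(z, I0, h_syn):
    # diffusion-dominated halo: I_nu ∝ exp(-z^2 / h_syn^2)
    return I0 * np.exp(-(z / h_syn) ** 2)

def exponential_profile(z, I0, h_syn):
    # advection-dominated halo: I_nu ∝ exp(-|z| / h_syn)
    return I0 * np.exp(-np.abs(z) / h_syn)

# Both profiles fall to I0/e at z = h_syn, but the Gaussian drops
# much faster beyond that height:
z = 2.0
ratio = gaussian_profile(z, 1.0, 1.0) / exponential_profile(z, 1.0, 1.0)
# exp(-4) / exp(-2) = exp(-2), i.e. the Gaussian is ~7x fainter at z = 2 h_syn
```

This steeper fall-off at large heights is what makes the two transport modes distinguishable in deep vertical intensity profiles.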
\subsubsection{Gaussian profile shape}
\label{sss:gaussian_profile_shape}
Examples for Gaussian radio haloes with $I_{\nu}\propto \exp(-z^2/h_{\rm syn}^2)$ are rare so far (see Fig.~\ref{fig:diffusion}(a) and (b)), the only examples being NGC~4013 \citep{stein_19a}, NGC~4565 \citep{heesen_19b} and NGC~7462 \citep{heesen_16a}. What these three galaxies have in common, however, are their low star-formation rate surface densities of $\Sigma_{\rm SFR} < 2\times 10^{-3}~\rm M_{\odot}\,yr^{-1}\,kpc^{-2}$. At these low values of $\Sigma_{\rm SFR}$, simulations suggest that the formation of outflows is suppressed \citep{vasiliev_19a}. It is an exciting prospect that radio haloes can possibly establish whether such an outflow $\Sigma_{\rm SFR}$-threshold really exists, and whether there are any other contributing factors such as a high mass surface density.
A possible $\Sigma_{\rm SFR}$-threshold for the existence of gaseous haloes was already posited by \citet{rossa_03a}, who observed the extra-planar diffuse ionised gas (eDIG) in galaxies; this was later confirmed by X-ray observations of the hot ionised gas \citep{tuellmann_06a}. These observations suggested a $\Sigma_{\rm SFR}$-threshold value similar to the one indicated by the diffusion--advection transition. The only other galaxy outside of this sample fitted with a single Gaussian component is NGC~4594 (M~104), an early-type galaxy with a very low $\Sigma_{\rm SFR}$\ \citep{krause_06a}, hence fitting the trend.
\subsubsection{Exponential profile shape}
\label{sss:exponential_profile_shape}
Most vertical intensity profiles are exponential, so that $I_{\nu}\propto \exp(-z/h_{\rm syn})$, with either one or two components \citep{krause_18a}, showing that they are advection dominated. If there are two components, we refer to them as the thin and thick radio disc, respectively. This is in agreement with the finding that the scale heights are almost identical at both 1.5 and 6~GHz, suggesting an almost free escape of CR$e^{-}$ in a wind. Of the 16 galaxies considered in this review (see Table~\ref{tbl:sample}), 13 have exponential radio continuum profiles (Fig.~\ref{fig:diffusion}(c) and (d)). There are other galaxies outside of our sample that have been fitted with exponential profiles, such as NGC~3034 \citep[M~82;][]{adebahr_13a}.
\subsubsection{Multi-component radio disc}
\label{sss:multi_component_radio_disc}
It is an open question whether galaxies always have both thin and thick radio discs, as is predominantly found by observations thus far. Generally speaking, our observations so far indicate that galaxies have either a 2-component exponential vertical distribution, consisting of both thin and thick radio discs, or a 1-component Gaussian distribution, consisting of a thick radio disc only. Of course, this will be resolution dependent, since most thin radio discs have a scale height of only a few hundred parsec \citep{heesen_18b}, so that the angular resolution has to be sufficiently high in order to resolve them. In the sample discussed here, only 2 of the 16 galaxies do not have a multi-component radio disc, NGC~4565 and NGC~7462, which both possess only a thick radio disc. It is notable that these two galaxies are diffusion dominated. The only other galaxy outside of this sample that has a Gaussian vertical intensity profile is NGC~4594 (M~104), which is also fitted by a single Gaussian component \citep{krause_06a}. We can speculate that diffusion results in only a thick radio disc, whereas in the case of advection both the thin and thick discs form. Since diffusion dominates near the disc, the thin radio disc will be diffusion dominated, with advection taking over as the dominant transport mode where the profile flattens and the thick radio disc begins (Section~\ref{sss:cre_transport_length}).
Such a transition in the cosmic-ray distribution is also seen in cosmological simulations with {\sc fire-2}, where the transition occurs at 10~kpc height \citep[which is expected, as][use much larger diffusion coefficients]{hopkins_20a}. \citet{girichidis_18a} find a flattening of the profile at $0.5$~kpc height with a more typical diffusion coefficient of $10^{28}$~$\rm cm^2\,s^{-1}$. As equation~\eqref{eq:diff_adv} predicts, for typical advection speeds of a few $100~\rm km\,s^{-1}$ and diffusion coefficients of $10^{28}~\rm cm^2\,s^{-1}$, we expect the transition to happen at around 1~kpc or less. Thus we raise the possibility that a galaxy with a wind has a two-component radio disc, whereas galaxies without winds have only a one-component radio disc, consisting of a thick disc only. NGC~4013 is the only galaxy that has a two-component Gaussian radio disc; this galaxy is a hybrid case where diffusion and advection both contribute because the advection speed is sufficiently slow \citep{stein_19b}.
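The quoted transition height follows from balancing the diffusion time $z^2/D$ against the advection time $z/v$, which gives $z_\star \approx D/v$; a quick order-of-magnitude check (illustrative values only):

```python
KPC_CM = 3.086e21      # centimetres per kiloparsec

def transition_height_kpc(D, v_kms):
    """Height where the diffusion time z^2/D equals the advection time z/v,
    i.e. z_star = D / v (order-of-magnitude estimate only)."""
    return D / (v_kms * 1e5) / KPC_CM   # v converted from km/s to cm/s

# typical diffusion coefficient and advection speed from the text
z_star = transition_height_kpc(D=1e28, v_kms=100.0)   # ≈ 0.3 kpc
```

With $v$ a few $100~\rm km\,s^{-1}$ the estimate drops to $\sim$0.1~kpc, consistent with the statement that the transition occurs at around 1~kpc or less for these parameters.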
\subsection{Scale heights}
\label{s:scale_heights}
\subsubsection{Global measurements}
\label{ss:scale_heights}
\begin{figure}[tb]
\includegraphics[width=\columnwidth]{scaleheight.pdf}
\caption{The exponential radio continuum scale height in the CHANG-ES sample at $1.5$ and 6~GHz with the corresponding ratio. Shaded areas show the expectation for non-calorimetric advection (free escape; yellow), calorimetric diffusion (green), and calorimetric advection (blue). From \citet{krause_18a}}
\label{fig:scaleheight}
\end{figure}
\begin{figure}[tb]
\includegraphics[width=\columnwidth]{hadvection.pdf}
\caption{The non-thermal exponential radio continuum scale height in NGC~253 as a function of the CR$e^{-}$ synchrotron lifetime. The line shows the best-fitting advection solution. From \citet{heesen_09a}}
\label{fig:hadvection}
\end{figure}
In Fig.~\ref{fig:scaleheight}, we present the scale height ratio between $1.5$ and 6~GHz in the CHANG-ES sample \citep{krause_18a}. For advection, we would expect the ratio to be 2 (equation~\ref{eq:advection_scale_height}), for diffusion around $1.3$ (depending on $\mu$; equation~\ref{eq:diffusion_scale_height}), and for a free escape we would expect the ratio to be 1. As can be seen, the ratio is in agreement with either diffusion with a significant energy loss or free escape. What we can rule out, however, is advective transport in a calorimetric halo, although diffusive transport in a calorimetric halo would still be possible. However, two reasons argue against this latter option: first, the exponential profiles are in agreement with advection; second, the galaxies have integrated radio spectral indices that are not steep enough to classify them as CR$e^{-}$ calorimeters. Hence, the scale height analysis points to advective transport in winds \citep{krause_18a}.
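The expected scale height ratios can be reproduced in a few lines (a sketch assuming $h\propto v\,t_{\rm syn}$ for calorimetric advection, $h\propto\sqrt{D\,t_{\rm syn}}$ with $D\propto E^{\mu}$ for calorimetric diffusion, and $t_{\rm syn}\propto\nu^{-1/2}$; the exact prefactors depend on the model details):

```python
nu_ratio = 6.0 / 1.5     # frequency ratio of the two CHANG-ES bands

def ratio_advection():
    # calorimetric advection: h ∝ v * t_syn with t_syn ∝ nu^(-1/2)
    return nu_ratio ** 0.5                 # = 2.0

def ratio_diffusion(mu=0.0):
    # calorimetric diffusion: h ∝ sqrt(D * t_syn) with D ∝ E^mu ∝ nu^(mu/2)
    return nu_ratio ** ((1.0 - mu) / 4.0)  # 1.19–1.41 for mu = 0.5–0

def ratio_escape():
    # free escape: the scale height is frequency independent
    return 1.0
```

For $\mu$ between 0 and 0.5 the diffusion ratio lies between roughly 1.2 and 1.4, consistent with the value of about 1.3 quoted above.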
\subsubsection{Spatially resolved measurements}
The second method using scale heights to measure CR$e^{-}$ transport is to use spatially resolved measurements. For a given galaxy, the mode of CR$e^{-}$ transport should not change much across the galaxy; for instance, in a galaxy-wide outflow advection dominates everywhere. In this case, the \emph{local} scale height will be a function of the local CR$e^{-}$ lifetime, which depends on the local magnetic field strength. The motivation for this approach was the observation of radio haloes that have a `dumbbell' shape, meaning smaller radio scale heights in the centre of the galaxy and increasing scale heights in the outskirts. Examples of this type of halo are NGC~253 (Fig.~\ref{fig:dumbbell}(a)), NGC~891 (Fig.~\ref{fig:dumbbell}(b)) and NGC~4217 \citep{stein_20a}.
The CR$e^{-}$ scale height can be compared with the expected relations for advection (equation~\ref{eq:advection_scale_height}) and diffusion (equation~\ref{eq:diffusion_scale_height}). The first measurement of cosmic-ray advection with this method of comparing the CR$e^{-}$ distribution with the magnetic field strength was presented by \citet{heesen_09a}, who found that the radio continuum scale height scales linearly with the CR$e^{-}$ lifetime, as presented in Fig.~\ref{fig:hadvection}. Consequently, they calculated the cosmic-ray advection speed to be $v=300\pm 30$~$\rm km\,s^{-1}$. The alternative is to study directly the dependence of the CR$e^{-}$ scale height on the magnetic field strength. This has been done by \citet{mulcahy_18a} for NGC~891, who found a dependence of $h_{\rm e}\propto B^{-1.2\pm 0.6}$, in agreement with either diffusion or advection (see Fig.~\ref{fig:n891_h}).
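For a purely advective halo, $h \approx v\,t_{\rm syn}$, so the advection speed follows from the slope of the scale height--lifetime relation; a minimal sketch with mock data (the numbers are illustrative, not the NGC~253 measurements):

```python
import numpy as np

KMS_MYR_TO_KPC = 1.0227e-3   # 1 km/s sustained for 1 Myr ≈ 1.02e-3 kpc

# hypothetical local scale heights h = v * t_syn for an assumed v = 300 km/s
t_myr = np.array([5.0, 10.0, 20.0, 40.0])      # local CRe synchrotron lifetimes
h_kpc = 300.0 * t_myr * KMS_MYR_TO_KPC         # corresponding scale heights

# a linear fit of h against t_syn recovers the advection speed from the slope
slope, intercept = np.polyfit(t_myr, h_kpc, 1)
v_fit = slope / KMS_MYR_TO_KPC                 # recovered advection speed [km/s]
```

In practice the lifetimes come from the local magnetic field strength, and the scatter of the data about the linear relation sets the uncertainty of the fitted speed.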
\subsection{Size--scale height relation}
\label{ss:size_scale_height_relation}
\citet{krause_18a} studied the scale heights in CHANG-ES galaxies and found that the scale height scales linearly with the size of the galaxy. In order to remove the influence of galaxy size, they defined a normalised scale height. This normalised scale height obeys a scale height--mass surface density relation, where the normalised scale height decreases with increasing mass surface density. Both relations point to a connection of the radio halo with stellar feedback. Interestingly, neither the intensity nor the magnetic field scale height depends on the SFR, $\Sigma_{\rm SFR}$, or rotation speed \citep{heesen_18b}. This might also point to a geometric model with an expanding outflow, as do the results of \citet{krause_18a}.
\begin{figure}[tb]
\includegraphics[width=\columnwidth]{n891_h.pdf}
\caption{The radio continuum exponential scale height at 146~MHz in NGC~891 as a function of the magnetic field strength in the mid-plane. The red line shows the best-fitting exponential function. From \citet{mulcahy_18a}}
\label{fig:n891_h}
\end{figure}
\begin{figure}[tb]
\includegraphics[width=\columnwidth]{alpha.pdf}
\caption{Spectral curvature as a function of the inclination angle. The differential spectral index is defined as $\Delta\alpha = \alpha_{\rm low}-\alpha_{\rm high}$, where $\alpha_{\rm low}$ is the radio spectral index between 150 and 1500~MHz and $\alpha_{\rm high}$ is the radio spectral index between 1500 and 5000~MHz. From \citet{chyzy_18a}}
\label{fig:alpha}
\end{figure}
\begin{figure}[!tbh]
\includegraphics[width=\columnwidth]{n891_alpha.pdf}
\caption{The radio spectral index in NGC~891 between 146 and 1500~MHz. The spectral index is flat in the disc (although clearly non-thermal) and steepens in the halo. From \citet{mulcahy_18a} }
\label{fig:n891_alpha}
\end{figure}
\section{Radio continuum spectrum}
\label{s:radio_continuum_spectrum}
\subsection{Global spectrum}
\label{ss:global_spectrum}
Observations show the integrated (global) radio continuum spectrum of galaxies to be in agreement with a power law with a non-thermal radio spectral index of $-0.9$ at frequencies between 1 and 10~GHz \citep{tabatabaei_17a}. However, at low frequencies ($<1~\rm GHz$) the radio continuum spectrum deviates from a power law and flattens significantly \citep{marvil_15a}. The most comprehensive study to date is that of \citet{chyzy_18a}, who studied $\sim$100 galaxies with LOFAR and archival data between 50~MHz and 5~GHz. They found that the spectral index flattens by $\Delta\alpha = 0.2$, from $\alpha=-0.77$ above $1.5$~GHz to $\alpha=-0.57$ below $1.5$~GHz. Hence the low-frequency spectral index is close to the injection spectral index, which means that the CR$e^{-}$ may be able to escape the galaxy freely. This view is supported by the observation that the steepening of the spectrum is independent of the inclination angle (see Fig.~\ref{fig:alpha}). Previously, it had been proposed that internal effects, such as \emph{free--free absorption} at low frequencies, artificially flatten the spectrum \citep{israel_90a}. This interpretation now seems unlikely.
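The quoted flattening can be illustrated with a two-point spectral index computation (a sketch with mock flux densities constructed to have the quoted slopes; the convention $S_\nu\propto\nu^{\alpha}$ is assumed):

```python
import numpy as np

def spectral_index(S1, S2, nu1, nu2):
    """Two-point spectral index alpha such that S ∝ nu^alpha."""
    return np.log(S1 / S2) / np.log(nu1 / nu2)

# mock flux densities with alpha = -0.57 below 1.5 GHz and -0.77 above
S_150  = 10.0                                    # arbitrary flux units at 150 MHz
S_1500 = S_150 * (1500.0 / 150.0) ** (-0.57)
S_5000 = S_1500 * (5000.0 / 1500.0) ** (-0.77)

alpha_low  = spectral_index(S_150, S_1500, 150.0, 1500.0)    # ≈ -0.57
alpha_high = spectral_index(S_1500, S_5000, 1500.0, 5000.0)  # ≈ -0.77
delta_alpha = alpha_low - alpha_high                         # ≈ +0.20
```

A positive $\Delta\alpha$ of this kind is exactly the low-frequency flattening discussed above.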
\subsection{Spatially resolved measurements}
In edge-on galaxies, the radio spectral index can also be measured locally. In the disc, i.e.\ the star-forming galactic mid-plane, the radio spectral index is fairly flat ($\alpha\approx -0.6$), suggesting that the CR$e^{-}$ are freshly injected. In the halo, the radio spectral index steepens to values of $\alpha\approx -1$ or even steeper (see Fig.~\ref{fig:n891_alpha}).
\begin{figure}[tb]
\includegraphics[width=\columnwidth]{n891_alpha_profile2.pdf}
\caption{Vertical profiles of the non-thermal radio spectral index between 146 and 1500~MHz in NGC~891. Lines show best-fitting advection models, which well describe the linear decrease of the spectral index in the halo at $|z|\gtrsim 0.5$~kpc. From \citet{schmidt_19a}}
\label{fig:n891_alpha_profile}
\end{figure}
\begin{figure}[!htb]
\includegraphics[width=\columnwidth]{n7462_diffusion.pdf}
\caption{Vertical radio spectral index profile between $1.4$ and $4.7$~GHz in NGC~7462 as an example of diffusion-dominated transport. The profile has a `parabolic' shape; lines show the best-fitting diffusion model, which was fitted in the halo only, at $|z|>1~\rm kpc$. From \citet{heesen_16a}}
\label{fig:n7462_diffusion}
\end{figure}
The next step is to use the advection--diffusion approximation to calculate vertical radio spectral index profiles. In Fig.~\ref{fig:n891_alpha_profile}, we present vertical spectral index profiles in NGC~891, which are approximately linear. The spectral index profiles show a flat spectral index in the disc, rapidly steepening in the halo, so that one finds a two-component spectral index profile. In contrast, in Fig.~\ref{fig:n7462_diffusion} we present the vertical radio spectral index profile of NGC~7462, a diffusion-dominated galaxy. In this case, the spectral index is already quite steep in the disc, with values of $\alpha\approx -1.2$, as would be expected for a calorimetric galaxy with no CR$e^{-}$ escape. Remarkably, the radio spectral index does not steepen out to distances of $z\approx 2$~kpc, quite different from advective galaxies. The best other example of this kind of radio spectral index profile is NGC~4565 \citep{schmidt_19a}, which also has remarkably steep spectral indices in the disc. Hence, we indeed find vertical spectral index profiles in approximate agreement with our idealised versions of the pure diffusion and advection models (Section~\ref{s:spinnaker}).
This then motivates the application of the {\sc spinnaker} models to the edge-on galaxies to decide whether they are diffusion- or advection-dominated and to measure diffusion coefficients and advection speeds (see Table~\ref{tbl:sample}). We will return to these results in Section~\ref{s:results}.
\begin{figure*}[tb]
\includegraphics[width=\textwidth]{smoothing_crop.pdf}
\caption{Example of the influence of CR$e^{-}$ transport in the face-on galaxy NGC~5194. The 145-MHz radio continuum map (a) is a smoothed version of the $\Sigma_{\rm SFR}$-map (b). Both maps show the star-formation rate surface density as $\log_{10}(\Sigma_{\rm SFR}/{\rm M_\odot\,yr^{-1}\,kpc^{-2}})$. Panel (c) shows the radio spectral index between 145 and 1365~MHz. From \citet{heesen_19a}}
\label{fig:smoothing}
\end{figure*}
\section{Face-on galaxies}
\label{s:face_on_galaxies}
\subsection{Smoothing experiments}
\label{ss:smoothing_experiment}
A different approach to measuring the cosmic-ray transport length is to study face-on galaxies. The radio continuum emission is smoothed with respect to the star-formation rate surface density ($\Sigma_{\rm SFR}$), which can be explained by CR$e^{-}$ transport (Fig.~\ref{fig:smoothing}). This idea was first exploited by \citet{murphy_08a}, who compared the $1.4$-GHz emission from the WSRT--SINGS sample of \citet{braun_07a} with 70-$\mu$m far-infrared emission from \emph{Spitzer}. They convolved the far-infrared map with both exponential and Gaussian kernels and minimised the difference between the convolved map and the radio continuum map. The half-width of the smoothing kernel is then the cosmic-ray transport length, which lies between $0.4$ and $2.3$~kpc. They found this length to be a function of $\Sigma_{\rm SFR}$, which can be explained by increased synchrotron and IC losses and thus shorter CR$e^{-}$ lifetimes.
The same approach was used by \citet{vollmer_20a}, who investigated both $1.4$- and 5-GHz radio continuum maps. They found that the cosmic-ray transport length is a function of frequency, with $1.8\pm 0.5$~kpc at $1.4$~GHz and $0.9\pm 0.3$~kpc at 5~GHz. They also tested both exponential and Gaussian kernels and found that the goodness of fit cannot be used to distinguish between advection (streaming) and diffusion. However, they found that in several galaxies the $1.4$/5-GHz ratio of the transport length is larger than $1.5$, an indication for streaming (equation~\ref{eq:advection_scale_height}). This interpretation does not depend on the question of electron calorimetry, since escape would lead to an even weaker frequency dependence. Ideally, one would like to measure the shape of the CR$e^{-}$ transport kernel in order to distinguish between different models, but so far exponential and Gaussian kernels cannot be distinguished by their fitting quality alone \citep{murphy_08a,vollmer_20a}. This is easier to do in edge-on galaxies, since we can measure the shape directly, assuming that the CR$e^{-}$ are injected only in the thin star-forming disc.
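The smoothing experiment can be sketched with a mock map (illustrative only; real analyses fit exponential as well as Gaussian kernels and work on calibrated maps):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(42)
sfr_map = rng.random((64, 64))                    # mock Sigma_SFR map

# mock "radio" map: the SFR map blurred by cosmic-ray transport
sigma_true = 3.0                                  # transport length in pixels
radio_map = gaussian_filter(sfr_map, sigma=sigma_true)

# scan a grid of kernel widths and minimise the residual to the radio map
sigmas = np.arange(0.5, 6.1, 0.5)
residuals = [np.sum((gaussian_filter(sfr_map, s) - radio_map) ** 2)
             for s in sigmas]
best_sigma = sigmas[int(np.argmin(residuals))]    # recovers sigma_true
```

With real data the residual minimum is broad and noise-limited, which is why exponential and Gaussian kernels fit almost equally well.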
\subsection{Radio--SFR relation}
\label{ss:radio_sfr}
A variation of the smoothing experiment (Section~\ref{ss:smoothing_experiment}) is to study the spatially resolved radio--SFR relation, where we plot the radio continuum emission as a function of $\Sigma_{\rm SFR}$ \citep{berkhuijsen_13a}. The radio continuum--star-formation rate (radio--SFR) relation is approximately linear for global measurements \citep{heesen_14a}, but the spatially resolved radio--SFR relation is sub-linear, with slopes of $0.6$ when measured at 1-kpc spatial resolution. The $\Sigma_{\rm SFR}$-map can then be convolved with a Gaussian kernel in order to linearise the radio--SFR relation \citep{berkhuijsen_13a,heesen_14a,heesen_19a}. \citet{heesen_19a} found that the half-width of the Gaussian kernel, the cosmic-ray transport length, is a function of frequency. Depending on the frequency dependence, the transport is dominated either by cosmic-ray diffusion or by streaming.
The key finding of the spatially resolved radio--SFR relation is that the deviation from the theoretical expectation, such as the Condon relation \citep{condon_92a} and its more recent derivatives \citep{murphy_11a}, depends on the radio spectral index. For a fairly flat radio spectral index of $\alpha\approx -0.6$, the deviation is small and so the relation is almost linear (Fig.~\ref{fig:m51_sfr}). This fits our expectation that on a kpc-scale the radio--SFR relation is linear as long as the CR$e^{-}$ are young and cosmic-ray transport plays no role. \citet{dumas_11a} and \citet{basu_15a} found linear radio--SFR relations in the spiral arms of galaxies, where the spectral index is flat as well. In contrast, if the radio spectral index is steep ($\alpha<-0.85$), the radio continuum emission lies above the radio--SFR relation. This finding is important because it shows that spectral ageing is important in shaping the relation. In areas of low star-formation rates, old CR$e^{-}$ have diffused into these areas and thus boost the radio continuum emission above the level expected for the local $\Sigma_{\rm SFR}$.
\begin{figure}[tb]
\includegraphics[width=\columnwidth]{radio_sfr.pdf}
\caption{Spatially resolved radio--SFR relation in four late-type galaxies at 145~MHz. The radio-derived SFR surface density is here shown as a function of the mid-infrared and far-ultraviolet hybrid SFR surface density. Data points are coloured according to their radio spectral index, with red data points indicating young CR$e^{-}$, green points CR$e^{-}$ of intermediate age, and blue points old CR$e^{-}$. The solid line shows the best-fitting relation with a sub-linear slope (when compared with the dashed 1:1 relation) that can be attributed to cosmic-ray transport. From \citet{heesen_19a}}
\label{fig:m51_sfr}
\end{figure}
\begin{figure}[tb]
\includegraphics[width=\columnwidth]{m51_alpha.pdf}
\caption{The radial radio spectral index profile in NGC~5194 (M~51) between 140 and 1500~MHz. Lines show various 1D diffusion models. From \citet{mulcahy_16a}}
\label{fig:m51_alpha}
\end{figure}
\subsection{Diffusion modelling}
\citet{mulcahy_16a} solved the diffusion--loss equation for radial transport of CR$e^{-}$ in the 1D case. The radial intensity and spectral index profiles are fitted with the diffusion coefficient and its energy dependence as free parameters. They found a diffusion coefficient of $6\times 10^{28}~\rm cm^2\,s^{-1}$ as their best-fitting solution. The energy dependence is consistent with zero, meaning that the diffusion coefficient is constant. Fig.~\ref{fig:m51_alpha} shows the radial spectral index profile that they modelled. A key finding is that the spectral index is too steep without escape of CR$e^{-}$. Hence, they included the diffusive escape time as:
\begin{equation}
t_{\rm esc} = \frac{h^2}{D}.
\end{equation}
They used H\,{\sc i} scale heights to measure $h$, which are between 3 and 9~kpc, so that the escape time is between 11 and 88~Myr. The spectral index profile in Fig.~\ref{fig:m51_alpha} shows the best-fitting solution. It can be seen that the model shows a smaller radial variation than the observed data, in particular at the minimum at $r=4~\rm kpc$, so that a better fit might be obtained with a smaller diffusion coefficient and escape in a wind.
\section{Results}
\label{s:results}
In this section, we summarise the results that have so far been obtained for cosmic-ray transport in external galaxies using radio continuum observations.
\subsection{Diffusion coefficients}
\label{ss:diffusion_coefficients}
The measured diffusion coefficients lie between $10^{27}$ and $10^{29}$~$\rm cm^2\,s^{-1}$, with most values around $10^{28}$~$\rm cm^2\,s^{-1}$ \citep{murphy_08a,murphy_11a,berkhuijsen_13a,heesen_18b,heesen_19b,vollmer_20a}. This is expected: since we are tracing scales of a few kpc and the CR$e^{-}$ lifetime is a few 10~Myr, equation~\eqref{eq:diffusion_coefficient} yields this number. The lowest diffusion coefficients are found in dwarf galaxies \citep{murphy_11a,heesen_18c}, the highest ones in radio haloes \citep{heesen_09a,heesen_18b}. The diffusion coefficients depend weakly on the far-infrared (SFR) surface density, as \citet{murphy_08a} have shown. This is expected as long as the CR$e^{-}$ lifetime depends on $\Sigma_{\rm SFR}$: because $t_{\rm syn}\propto B^{-3/2}$ and $B\propto \Sigma_{\rm SFR}^{1/3}$, we expect $t_{\rm syn}\propto \Sigma_{\rm SFR}^{-1/2}$. Thus, the diffusion length should scale as $L\propto \Sigma_{\rm SFR}^{-1/4}$ for an energy-independent diffusion coefficient. This is in approximate agreement with the results of \citet{murphy_08a}, although the scatter is quite significant. \citet{tabatabaei_13a} repeated this experiment and found no dependence on the SFR surface density, although their sample was fairly small.
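The quoted order-of-magnitude estimate and the $\Sigma_{\rm SFR}$ scaling can be checked in a few lines (illustrative values; the prefactor in $L\approx\sqrt{Dt}$ depends on the adopted convention):

```python
import numpy as np

KPC_CM = 3.086e21     # cm per kpc
MYR_S  = 3.156e13     # seconds per Myr

def diffusion_length_kpc(D, t_myr):
    """Order-of-magnitude diffusion length L ≈ sqrt(D * t)."""
    return np.sqrt(D * t_myr * MYR_S) / KPC_CM

# typical values: D = 1e28 cm^2/s, t_syn = 30 Myr  ->  L ≈ 1 kpc
L = diffusion_length_kpc(1e28, 30.0)

# scaling check: t_syn ∝ Sigma_SFR^(-1/2) implies L ∝ Sigma_SFR^(-1/4),
# so a 16x higher Sigma_SFR (t_syn -> t_syn/4) halves the diffusion length
L_high = diffusion_length_kpc(1e28, 30.0 / 4.0)
```

This reproduces the kpc-scale transport lengths inferred from observations for the quoted diffusion coefficients and lifetimes.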
\subsubsection{Energy dependence}
\label{sss:energy_dependence}
The energy dependence of the diffusion coefficient has been explored as well. There are cases where no energy dependence is needed to fit the data, which is the case if the diffusion length scales as $L\propto t_{\rm syn}^{1/2}$ or, expressed in frequency, $L\propto \nu^{-1/4}$ (Section~\ref{ss:expected_relations}). Frequently, the frequency dependence of the diffusion length $L(\nu)$ is flatter, such as $L\propto \nu^{-1/8}$, but this can also be a result of electron non-calorimetry \citep{heesen_19a}. The edge-on galaxy with a pure diffusion halo analysed in the most detail so far, NGC~4565, is indeed more consistent with an energy-independent, or only weakly energy-dependent, diffusion coefficient \citep{heesen_19b, schmidt_19a}. Essentially, an energy-dependent diffusion coefficient would lead to an even more pronounced curvature of the radio spectral index profile than what is observed. \citet{heesen_16a} tested the energy dependence in NGC~7462, but did not find a strong indication for it.
A different approach is to fit the Gaussian convolution kernel in face-on galaxies with a cosmic-ray diffusion model. \citet{heesen_19a} did this and found the energy dependence to vary widely with $\mu=0$--$0.6$. There are indications, however, in particular from the radio spectral index, that high values of $\mu$ are the result of CR$e^{-}$ escape (electron non-calorimetry) rather than an intrinsic feature of the CR$e^{-}$ transport. In summary, diffusion coefficients in the GeV range seem not to be energy dependent. In those cases where we do see a dependence, the indication is either weak (in edge-on galaxies) or can be largely explained by flat radio spectral indices hinting at CR$e^{-}$ escape (in face-on galaxies). The observation that at a few GeV the diffusion coefficient is not energy dependent is in agreement with the boron-to-carbon (secondary-to-primary) cosmic-ray ratio in the Milky Way \citep{becker_tjus_20a}.
\subsection{Cosmic-ray streaming}
\label{ss:cosmic_ray_streaming}
The indications for cosmic-ray streaming come mostly from the scaling of the CR$e^{-}$ transport length with frequency, which in the case of streaming resembles advection rather than diffusion (Section~\ref{ss:expected_relations}). \citet{vollmer_20a} found two galaxies where the CR$e^{-}$ transport length scales more strongly with frequency than can be explained by pure diffusion, even when the diffusion coefficient is assumed to be energy independent. Similarly, \citet{beck_15b} found the CR$e^{-}$ propagation length in IC~342 to scale as $L\propto \nu^{-0.5}$. This can be explained by cosmic-ray streaming, where the CR$e^{-}$ are transported at a constant speed, for instance the Alfv\'en speed. We may consider the influence of an advection-dominated radio halo, which would result in a similar behaviour. What argues against such a halo is that an advective halo limits the confinement of cosmic rays, which would in turn limit the effective CR$e^{-}$ lifetime and thus reduce the frequency dependence. Taken together, the results of \citet{beck_15b} and \citet{vollmer_20a} seem to be strongly indicative of cosmic-ray streaming. Another hint comes from \citet{tabatabaei_13a}, who found that the CR$e^{-}$ transport length in NGC~6946 is larger than what one would expect from the ratio of ordered to turbulent magnetic field strength.
In edge-on galaxies, cosmic rays can stream from the disc into the halo along vertical magnetic field lines. Obviously, in diffusion-dominated galaxies streaming must be suppressed, so that we can assume that galaxies without outflows do not have the right type of magnetic field structure, presumably lacking vertical magnetic field lines. Indeed, the two pure diffusion haloes in our sample, NGC~4565 and NGC~7462, have no dominant vertical magnetic field lines \citep{heesen_16a,wiegert_15a}. The hybrid diffusion--advection galaxy NGC~4013 has at least a significant vertical magnetic field component \citep{stein_19a}. In galaxies with winds, advection and streaming may be observed together although a separation of them is difficult. In the edge-on galaxy NGC~5775, the vertical radio spectral index gradient is much reduced at the position of vertical magnetic field lines \citep{duric_98a,heald_21a}. This could also be the result of CR$e^{-}$ streaming; the effective CR$e^{-}$ bulk speed is then the superposition of Alfv\'en and wind speed.
\input{scale}
\begin{figure*}[!thb]
\centering
\includegraphics[width=\textwidth]{advection_sfr_pdf_crop.pdf}
\caption{Re-evaluated scaling relations of the CR$e^{-}$ advection speed in edge-on galaxies. Panel (a) shows the advection speed as a function of SFR, (b) as a function of SFR surface density, and (c) as a function of the rotation speed. Best-fitting advection scaling relations (Table~\ref{tbl:scale}) are shown as solid lines. Values for the filled circular data points are from Table~\ref{tbl:sample}; grey stars show UV absorption line measurements for a different sample of galaxies, taken from \citet{heckman_15a}}
\label{fig:scaling}
\end{figure*}
\subsection{Anisotropic diffusion}
\label{ss:anisotropic_diffusion}
The question of whether diffusion is isotropic or anisotropic is important for the modelling of galaxy evolution. \citet{vollmer_20a} used elliptical smoothing kernels aligned with the magnetic field as measured from linear polarisation and found a slight indication that the CR$e^{-}$ are preferentially transported along magnetic field lines. An indirect way to study the influence of the magnetic field is to use the radio spectral index as a proxy for CR$e^{-}$ confinement times. In face-on galaxies, we find steep radio spectral indices in inter-arm regions with strong ordered magnetic fields. Such areas may be the places where the CR$e^{-}$ are stored by disc-parallel magnetic fields before they can escape into the halo. Prominent examples are NGC~5055 \citep{heesen_19a}, NGC~5194 \citep[M~51;][]{mulcahy_14a} and NGC~6946 \citep{tabatabaei_13a}. Corroborating the influence of the magnetic field, galaxies lacking a large-scale spiral magnetic field, such as the flocculent spiral galaxy NGC~2403, show a flat spectral index throughout the disc (Sridhar et al., in preparation).
In NGC~253, \citet{heesen_11a} found that the CR$e^{-}$ diffusion across a magnetic filament, perpendicular to the field direction, is quite fast, with a diffusion coefficient of $D_\perp = 1.5\times 10^{28}~\rm cm^2\,s^{-1}$. This is a fairly high diffusion coefficient for pure perpendicular diffusion, which can be explained by a small amount of turbulence in the magnetic field. One can also take the radio haloes as a proxy for anisotropic diffusion; in this case, diffusion coefficients tend to be quite high, of the order of $10^{29}~\rm cm^2\,s^{-1}$ \citep{dahlem_95a, heesen_09a}. \citet{buffie_13a} provided a theoretical explanation for the ratio of the perpendicular to the parallel diffusion coefficient, which involves the turbulent component of the magnetic field as described by the so-called correlation length (similar to the field line bend-over length).
\subsection{Advection speed scaling relations}
\label{ss:scaling_relations}
The advection speed scaling relations with SFR, $\Sigma_{\rm SFR}$, and the rotation speed $v_{\rm rot}$ were already investigated by \citet{heesen_18b}. For this review, we have re-evaluated their sample, which we extended to 16 galaxies (Table~\ref{tbl:sample}). Three galaxies in our sample are diffusion-dominated; we exclude them from the fitting but present them in the plots for comparison. Table~\ref{tbl:scale} gives an overview of the scaling relations discussed.
The advection speed as a function of the SFR is presented in Fig.~\ref{fig:scaling}(a), where the advection speed scales with the SFR as $v\propto SFR^{0.4}$. Similarly, the advection speed scales with the SFR surface density as $v\propto \Sigma_{\rm SFR}^{0.4}$, as shown in Fig.~\ref{fig:scaling}(b). However, this relation only holds if the starburst dwarf irregular galaxy IC~10, analysed by \citet{heesen_18c}, is excluded from the fitting. IC~10 has a very high SFR surface density, but only a relatively small advection speed. This outlier may point to the limitations of a scaling with $\Sigma_{\rm SFR}$. The advection speed also scales with the rotation speed of the galaxy as $v\propto v_{\rm rot}^{1.4}$ (Fig.~\ref{fig:scaling}(c)). The fact that the advection speed is related to the SFR surface density may be a consequence of a supernova-driven blast wave \citep{vijayan_20a}. In contrast, for cosmic ray-driven wind models, or for any other wind model that includes gravity, the advection speed is expected to scale with the escape velocity \citep{ipavich_75a,breitschwerdt_91a,everett_08a}, so the scaling with rotation speed is expected as well. Including IC~10 gives an indication that a wind model is preferred, but clearly more dwarf irregular galaxies need to be studied.
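Power-law relations of this kind are conventionally obtained by a least-squares fit in log--log space. The sketch below (our own code; the data points are synthetic, generated from a hypothetical $v\propto SFR^{0.4}$ relation, and are not the measurements of Table~\ref{tbl:sample}) illustrates the procedure:

```python
# Least-squares fit of y = c * x^a in log-log space.
# The data below are synthetic and purely illustrative.

import math

def fit_power_law(x, y):
    """Fit y = c * x^a by ordinary least squares on (log10 x, log10 y)."""
    lx = [math.log10(v) for v in x]
    ly = [math.log10(v) for v in y]
    n = len(lx)
    mx = sum(lx) / n
    my = sum(ly) / n
    a = sum((u - mx) * (w - my) for u, w in zip(lx, ly)) / \
        sum((u - mx) ** 2 for u in lx)
    c = 10.0 ** (my - a * mx)
    return a, c

sfr = [0.5, 1.0, 2.0, 5.0, 10.0]        # hypothetical SFRs [Msun/yr]
v = [150.0 * s ** 0.4 for s in sfr]      # exact v ~ SFR^0.4 test relation
a, c = fit_power_law(sfr, v)
print(round(a, 3), round(c, 1))          # -> 0.4 150.0
```

Since the synthetic data follow the power law exactly, the fit recovers the input slope and normalisation; with real measurements, scatter and error weighting would of course enter.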
\subsection{Accelerated advection speed}
\label{ss:accelerated_advection}
With the advent of LOFAR, we are now able to probe areas in the halo far away from the star-forming mid-plane, at heights in excess of 10~kpc, where we can test the compatibility of our data with accelerating winds. \citet{miskolczi_19a} have shown that in the galaxy NGC~3556 (M~108) an accelerating wind fits better than advection with a constant wind speed; they assumed a wind accelerating linearly from 123~$\rm km\,s^{-1}$ near the mid-plane to 350~$\rm km\,s^{-1}$ at 14~kpc distance (see Fig.~\ref{fig:m108}). This is the first time that an accelerating wind fits better, whereas with GHz observations a constant wind speed fits as well as an accelerating wind \citep{schmidt_19a}. An accelerating wind has the advantage that one can have \emph{energy equipartition} between the cosmic rays and the magnetic field in the halo. For instance, \citet{mora_19a} have shown that a constant advection speed can lead to a divergence between the cosmic-ray and magnetic energy densities of up to a factor of 40 in the halo. For an accelerating wind, cosmic rays can be in equipartition with the magnetic field and possibly even with the warm neutral and warm ionised gas (see Fig.~\ref{fig:energy}), which is physically more plausible.
\begin{figure}[tb]
\includegraphics[width=\columnwidth]{m108.png}
\caption{Advection speed in NGC~3556 (M~108) as an example of advection-dominated CR$e^{-}$ transport with an accelerating wind. Shaded areas show the expected escape velocity for different dark matter halo distributions. The dashed line shows the best-fitting model with a constant advection speed for comparison. From \citet{miskolczi_19a}}
\label{fig:m108}
\end{figure}
\begin{figure}[tb]
\includegraphics[width=\columnwidth]{energy.pdf}
\caption{Vertical profiles of the energy densities in the dwarf irregular galaxy IC~10 of the magnetic field ($B$), the warm neutral medium (H\,{\sc i}), the warm ionised medium (H\,$\alpha$), and the cosmic rays (CRs). From \citet{heesen_18c}}
\label{fig:energy}
\end{figure}
Several possible advection profiles were investigated by \citet{miskolczi_19a}, who parametrised the advection velocity using equation~\eqref{eq:advection_profile}. For $\beta=1$, the wind accelerates linearly; for $\beta=0.5$, the wind acceleration is high near the disc and then tails off in the halo. They found that $\beta=1$ fits the observations best. \citet{schmidt_19a} also successfully used a linear advection velocity profile. Hence, a linear advection speed profile appears to be favoured by observations thus far. In \citet{schmidt_19a}, the local advection speed was investigated as well. Surprisingly, the advection speed is smaller in the centre of the galaxy. This is in contrast to the expectation that the stronger gravitational acceleration in the centre of the galaxy should lead to higher advection speeds, as \citet{breitschwerdt_02a} demonstrated for the case of the Milky Way.
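The qualitative difference between the two $\beta$ values can be illustrated numerically. The exact form of equation~\eqref{eq:advection_profile} is given elsewhere in this review; the sketch below (our own code) assumes a parametrisation of the form $v(z)=v_0\,[1+(z/z_0)^{\beta}]$, consistent with the behaviour described above:

```python
# Illustrative advection velocity profile, assuming the hypothetical
# parametrisation v(z) = v0 * [1 + (z/z0)**beta].

def advection_speed(z_kpc, v0=100.0, z0=1.0, beta=1.0):
    """Advection speed [km/s] at height z [kpc] above the mid-plane."""
    return v0 * (1.0 + (z_kpc / z0) ** beta)

def mean_acceleration(z1, z2, **kwargs):
    """Average velocity gradient between heights z1 and z2 [km/s/kpc]."""
    dv = advection_speed(z2, **kwargs) - advection_speed(z1, **kwargs)
    return dv / (z2 - z1)

# beta = 1: the same gradient at all heights (linear acceleration)
print(mean_acceleration(0.0, 1.0, beta=1.0), mean_acceleration(4.0, 5.0, beta=1.0))
# beta = 0.5: strong acceleration near the disc, tailing off in the halo
print(mean_acceleration(0.0, 1.0, beta=0.5), mean_acceleration(4.0, 5.0, beta=0.5))
```

For $\beta=1$ the gradient is the same everywhere, while for $\beta=0.5$ the gradient between 4 and 5~kpc is only about a quarter of that in the first kpc, matching the description in the text.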
\section{Stellar feedback-driven wind}
\label{s:wind}
Thus far we have used the CR$e^{-}$ as tracers of a galactic wind and neglected the dynamical influence that the cosmic rays themselves have on the wind. Together with the thermal gas, they may be able to drive a wind as a result of stellar feedback, as is now widely accepted in the literature \citep[e.g.][]{ipavich_75a,breitschwerdt_91a,everett_08a,recchia_16a,mao_18a}. In this section, we present a simple approach that tries to \emph{emulate} such a wind model, while sidestepping the details of cosmic-ray transport that are needed to create such a wind in the first place. For the latter, it is usually assumed that either diffusion or streaming, in addition to advection, is needed to prevent the adiabatic cooling of the wind. Without such detailed modelling it is not possible to distinguish between the dynamical influence of the thermal and cosmic-ray gas; hence we refer to this model as a generic `stellar feedback-driven wind'. Nevertheless, our approach already fulfils some of the requirements we identified in Section~\ref{s:results}:
\begin{itemize}
\item (i) the advection speed is a `wind solution';
\item (ii) energy equipartition between cosmic rays and the magnetic field;
\item (iii) a linearly increasing advection speed.
\end{itemize}
Assumption (i) is motivated by the observed tight correlation between advection speed and escape velocity, i.e. rotation velocity (Section~\ref{ss:scaling_relations}). Assumption (ii) imposes the energy equipartition suggested by the tight radio--SFR relation (Section~\ref{ss:radio_sfr}). The magnetic fields should be approximately exponential, since that is the shape of the vertical intensity profiles (Section~\ref{sss:gaussian_profile_shape}). Assumption (iii) reflects our finding that linear profiles fit the LOFAR data well (Section~\ref{ss:accelerated_advection}). We attempt to meet these requirements with a simple \emph{isothermal} wind model.
\subsection{Motivation}
We assume that the cosmic rays are advected in a flow of magnetised plasma, which is directed vertically and expands adiabatically. We use the following functional form for the cross-sectional area:
\begin{equation}
A(z) = A_0 \left [1 + \left (\frac{z}{z_0}\right )^\beta \right ],
\end{equation}
which describes the `flux tube' geometry. It has been widely used in semi-analytic 1D cosmic ray-driven wind models \citep{breitschwerdt_93a,everett_08a,recchia_16a}. This choice eases the comparison with these aforementioned models. If $\beta=2$, the model is an expanding cone with a constant opening angle. We may tentatively identify these flux tubes with the bubble-like features found in radio continuum observations \citep{stein_20a} and in simulations \citep{krause_21a}; these features certainly play a role as well, and the disc--halo interface may be more akin to a `boiling disc'. Once these bubbles break out of the thin gaseous disc, the field lines open up and a chimney is formed \citep{norman_89a}. These bubbles and chimneys may then merge and form a kpc-sized superbubble that expands further into the halo, something that is suggested by the properties of warm dust in the halo \citep{yoon_19a}. The boundary of such a bubble may be related to the X-shaped structures centred on the nucleus, with footpoints at a galactocentric radius $r_0$. This would then define the mid-plane flow radius, which may be the boundary of this outflow \citep[][see also Fig.~\ref{fig:cone}]{veilleux_21a}. We now also need an equation that governs the magnetic field strength:
\begin{equation}
B = B_0 \left (\frac{r_0}{r}\right )\times \left ( \frac{v_0}{v}\right ),
\end{equation}
where $B_0$ is the magnetic field strength in the galactic mid-plane, and $r_0$ and $v_0$ are the mid-plane flow radius and advection speed, respectively. This is the expected behaviour for radial and toroidal magnetic field components in a quasi-1D flow \citep{baum_97a}. Since we do not take rotation into account, we cannot include any dynamical effect that the magnetic field might have on the wind \citep[see][for a simulation of a magnetically driven wind]{steinwandel_20a}. The continuity equation needs to be fulfilled:
\begin{equation}
\rho v A = \rm const.,
\end{equation}
where $v$ is the advection speed and $\rho$ is the gas density. The momentum conservation is governed by the Euler equation:
\begin{equation}
\rho v \frac{{\rm d}v}{{\rm d}z} = -\frac{{\rm d}P}{{\rm d}z} - g\rho,
\end{equation}
where $P$ is the combined cosmic-ray and gas pressure and $g$ is the gravitational acceleration. With such a setup, we obtain approximate energy equipartition. Integrating the Euler equation leads to a wind equation, where we assume for simplicity that the compound sound speed, $v_{\rm c}^2=P/\rho$, is constant. It can be shown that the wind velocity profile is, in the linear approximation:
\begin{equation}
v = v_{\rm c} \left (1 + \frac{z-z_{\rm c}}{z_{0}} \right ),
\label{eq:linear_advection_speed}
\end{equation}
where $z=z_{\rm c}$ is the so-called critical point of the wind solution and $v=v_{\rm c}$ is the velocity at the critical point, which is equal to the compound sound speed \citep{heald_21a}. This means we can parametrise the wind velocity profile in a linear way, as required.
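For completeness, we sketch the intermediate step (a standard isothermal wind calculation, written here in our own notation rather than reproduced from the cited works). Continuity, $\rho v A = \rm const.$, gives $\rho^{-1}\,{\rm d}\rho/{\rm d}z = -v^{-1}\,{\rm d}v/{\rm d}z - A^{-1}\,{\rm d}A/{\rm d}z$; substituting ${\rm d}P/{\rm d}z = v_{\rm c}^2\,{\rm d}\rho/{\rm d}z$ into the Euler equation then yields the wind equation
\begin{equation}
\left( v - \frac{v_{\rm c}^2}{v} \right) \frac{{\rm d}v}{{\rm d}z}
= \frac{v_{\rm c}^2}{A}\frac{{\rm d}A}{{\rm d}z} - g.
\end{equation}
At the critical point $z=z_{\rm c}$ both sides vanish, so $v(z_{\rm c})=v_{\rm c}$, and a Taylor expansion of $v(z)$ around $z_{\rm c}$ gives the linear profile of equation~\eqref{eq:linear_advection_speed}.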
As we do not solve the energy equation explicitly, we have to check whether energy conservation is indeed fulfilled. This is done via a cloud entrainment factor $\epsilon$, where the total energy flux in the wind is limited by the cosmic-ray luminosity (equation~\ref{eq:cosmic_ray_luminosity}), $L_{\rm CR}=1/2\,\epsilon \dot M v^2$, with $\dot M$ the global mass-loss rate. This entrainment factor is expected to be of order unity for a cosmic ray-driven wind.
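As a rough numerical illustration of this bookkeeping, one can invert $L_{\rm CR}=1/2\,\epsilon \dot M v^2$ for $\epsilon$. The sketch below is our own code, and the input values are purely hypothetical (not the measured quantities of this review); it merely shows that plausible numbers give $\epsilon$ of order unity:

```python
# Solve L_CR = (1/2) * eps * Mdot * v^2 for the entrainment factor eps.
# All input values below are purely illustrative.

MSUN = 1.989e33   # solar mass [g]
YR = 3.156e7      # year [s]

def entrainment_factor(l_cr_erg_s, mdot_msun_yr, v_km_s):
    """eps = 2 L_CR / (Mdot v^2), evaluated in cgs units."""
    mdot = mdot_msun_yr * MSUN / YR   # mass-loss rate [g/s]
    v = v_km_s * 1.0e5                # wind speed [cm/s]
    return 2.0 * l_cr_erg_s / (mdot * v ** 2)

# Hypothetical inputs: L_CR = 1e41 erg/s, Mdot = 3 Msun/yr, v = 300 km/s
print(entrainment_factor(1.0e41, 3.0, 300.0))  # order unity
```

For these illustrative inputs the result is $\epsilon\approx 1.2$, i.e. of order unity as expected for a cosmic ray-driven wind.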
\begin{figure}[tb]
\includegraphics[width=\columnwidth]{N5775bkgrd8crop2Label.pdf}
\caption{Outflow geometry for the stellar feedback-driven wind model. The conical expanding cross-section can be described by the flux tube approximation. From \citet{heald_21a}}
\label{fig:cone}
\end{figure}
\subsection{Application to NGC 5775}
\label{ss:n5775}
The model is applied to LOFAR 150-MHz and CHANG-ES $1.5$-GHz observations of NGC~5775 \citep{heald_21a}. The data can indeed be well fitted, with a linear acceleration of the advection speed as a result of the wind model (equation~\ref{eq:linear_advection_speed}) and an expanding bi-conical outflow (see Fig.~\ref{fig:cone}). Using the compound sound speed, the thermal electron densities can be calculated. The electron density decreases from a few $10^{-3}~\rm cm^{-3}$ by a factor of 10 at the detection limit of the halo at $z\approx 15$~kpc. This phase seems to be most consistent with the hot ionized medium (HIM). There are indications that the warm ionized medium (WIM) may be entrained in certain places, in particular near the H\,$\alpha$ filaments \citep{tuellmann_00a}. The implied mass-loss rate $\dot M$ is a few solar masses per year. As the advection speed exceeds the escape velocity at the edge of the halo, it is suggested that the mass is lost entirely from the galaxy. The mass-loss efficiency $\eta=\dot M/SFR$ would then be of order unity. However, we point out that there is substantial uncertainty arising from the outflow geometry and the poorly understood distribution of ISM material entrained in the vertical flow.
\section{Spectroscopic observations}
\label{s:optical_inferences}
\subsection{Wind speed}
\label{ss:wind_speed}
Arguably the most direct way to identify outflows and measure outflow speeds is via spectroscopic observations. In the optical wavelength range, the interstellar Na\,{\sc i} absorption line can be used \citep{martin_05a,rupke_05a}, although the drawback of this particular line is that it works only in galaxies at the higher end of the luminosity scale. This limitation was remedied with the Cosmic Origins Spectrograph (COS) aboard the \emph{Hubble Space Telescope} (HST), which made it possible to use ultraviolet absorption lines such as those of Si\,{\sc ii} \citep{chisholm_15a} and C\,{\sc ii}, Si\,{\sc iii}, Si\,{\sc iv}, and N\,{\sc ii} \citep{heckman_15a,heckman_16a}. These data trace the warm ionized phase, which is thought to carry the bulk of the mass in an outflow, and so allow us to trace winds in normal star-forming galaxies. This phase can also be seen in emission using the H\,$\alpha$ line, but this again requires high SFRs, so that the galaxies are classified as (U)LIRGs \citep{arribas_14a}.
Our advection speeds increase with the SFR, $\Sigma_{\rm SFR}$, and rotation speed $v_{\rm rot}$, which indicates that they trace stellar feedback-driven winds (Section~\ref{ss:scaling_relations}). We now compare the advection speed scaling relations (Table~\ref{tbl:scale}) with the equivalent relations of the gaseous tracers. The UV-absorption line measurements by \citet{chisholm_15a} point to a weak dependence of the outflow speed on the SFR of $v\propto SFR^{0.08-0.22}$, and similarly on the rotation speed of $v\propto v_{\rm rot}^{0.44-0.87}$. In contrast, \citet{heckman_16a}, also using UV-absorption lines, find much stronger dependencies with $v\propto SFR^{0.32\pm 0.02}$ and $v\propto v_{\rm rot}^{1.16\pm 0.37}$ \citep[see also][]{heckman_15a}. \citet{martin_05a} used Na\,{\sc i} and K\,{\sc i} absorption lines in ultra-luminous infrared galaxies and found $v\propto SFR^{0.35}$.
On the question of whether the wind speed depends on $\Sigma_{\rm SFR}$, the literature is even more divided. \citet{chisholm_15a} found no notable correlation, whereas \citet{davies_19a} claim a strong correlation of $v\propto \Sigma_{\rm SFR}^{0.34\pm 0.10}$. Notably, the sample of \citet{davies_19a} contains mostly starbursts with $\Sigma_{\rm SFR}$ $= 0.1$--1~$\rm M_{\odot}\,yr^{-1}\,kpc^{-2}$, whereas the sample by \citet{chisholm_15a} also covers lower values of $\Sigma_{\rm SFR}$. \citet{heckman_16a} claimed a correlation of $v\propto \Sigma_{\rm SFR}^{0.34}$ up to a value of 100~$\rm M_{\odot}\,yr^{-1}\,kpc^{-2}$, flattening out at even higher values. In Fig.~\ref{fig:scaling}, we compare the UV measurements of \citet{heckman_15a} with our advection speeds. In general, we find good agreement with their wind speeds as a function of both the SFR and the rotation speed, although the scatter is fairly large for the UV measurements. For the comparison with the SFR surface density, the agreement is not as good, with our winds occurring at much lower values of $\Sigma_{\rm SFR}$. In part this may be explained by our different definition of $\Sigma_{\rm SFR}$, which employs the full extent of the star-forming disc, whereas \citet{heckman_15a} use an effective (half-light) star-forming disc radius.
\subsection{Mass loading}
\label{ss:mass_loading}
The mass-loading factor is defined as $\eta=\dot M/SFR$, where $\dot M$ is the mass-loss rate. The mass-loading factor is predicted to increase strongly with decreasing rotation speed, so that in dwarf galaxies it could easily exceed unity, whereas in Milky Way-type $L_\star$ galaxies it is of order unity. \citet{chisholm_17a} parametrised the mass-loading factor as:
\begin{equation}
\eta = 1.12 \pm 0.27 \left (\frac{v_{\rm rot}}{100~\rm km\,s^{-1}} \right )^{-1.56\pm 0.25},
\label{eq:mass_loss_rate}
\end{equation}
using UV-absorption line studies of outflows. Similarly, \citet{heckman_16a}, also using UV-absorption line studies, found a slightly flatter dependency of $\eta\propto v_{\rm rot}^{-0.98}$. While we have not yet applied our stellar feedback-driven wind model (Section~\ref{s:wind}) to a sample, we can use the theoretical expectation of a similar cosmic ray-driven wind model of $\eta\propto v_{\rm rot}^{-5/3}$ \citep{mao_18a}, which gives quite reasonable agreement. It is also encouraging that our one data point for NGC~5775 (Section~\ref{ss:n5775}) predicts a mass-loading factor of order unity, in good agreement with equation~\eqref{eq:mass_loss_rate}.
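Equation~\eqref{eq:mass_loss_rate} can be evaluated directly; the short sketch below (the function name is ours, the normalisation and slope are those quoted above) illustrates how strongly $\eta$ rises towards dwarf galaxies:

```python
# Mass-loading factor eta = 1.12 * (v_rot / 100 km/s)^(-1.56)
# (central values of the parametrisation quoted in the text).

def mass_loading(v_rot_km_s, eta0=1.12, slope=-1.56):
    """Mass-loading factor as a function of rotation speed [km/s]."""
    return eta0 * (v_rot_km_s / 100.0) ** slope

for v_rot in (50.0, 100.0, 200.0):
    print(v_rot, round(mass_loading(v_rot), 2))
```

For these rotation speeds the factor runs from $\eta\approx 3.3$ at 50~$\rm km\,s^{-1}$ to $\eta\approx 0.4$ at 200~$\rm km\,s^{-1}$, i.e. well above unity for dwarfs and of order unity for $L_\star$ galaxies, as stated above.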
\subsection{Wind velocity profile}
\label{ss:wind_velocity_profile}
Wind velocity profile measurements are few and far between, since they require spatially resolved line observations. The wind velocity profiles from optical measurements look significantly different from linear acceleration: the acceleration happens close to the disc and the velocity converges quickly \citep{chisholm_16a}. Notably, the acceleration largely happens within 1~kpc of the starburst region. \citet{chisholm_16a} attribute this velocity profile to either radiation pressure or cosmic-ray pressure, assuming that the accelerating force falls off with distance squared. There are a handful of other galaxies where the wind velocity profile has been measured, such as NGC~253 \citep{westmoquette_11a}, where outflow speeds of a few hundred $\rm km\,s^{-1}$\ are found within a few 100~pc of the disc and increase linearly with height.
Our radio haloes may require acceleration, in particular if the lateral expansion needs to be limited, as the morphology of the radio haloes suggests. On the other hand, wind models such as that of \citet{chevalier_85a}, even with the inclusion of cosmic rays \citep{samui_10a,yu_20a}, all predict rapid acceleration near the disc, even when adapted to the flux tube geometry \citep{heald_21a}. Hence, the jury is still out on whether the wind velocity profiles are more in agreement with a linear acceleration across the size of the halo ($\sim$10~kpc), possibly extending even further, as predicted by wind models that do not include an extended mass-loading region but inject all energy at $z=0~\rm kpc$ \citep{breitschwerdt_91a,everett_08a,recchia_16a}. While using the radio spectral index is a rather indirect way of measuring the velocity profile and is subject to assumptions about the magnetic field, some form of acceleration seems most plausible, as it is also the result of any stellar feedback-driven wind model (Section~\ref{s:wind}).
\subsection{Outflow size}
\label{ss:outflow_size}
The outflow size in most absorption line studies is only a few kpc at most \citep{heckman_16a}, whereas radio haloes, with radii of typically a few kpc, are indicative of galaxy-wide outflows. Although the boundary of radio haloes is poorly defined, their size is typically comparable to the size of the star-forming disc \citep{dahlem_06a}. The connection of galaxy-wide outflows with nuclear starbursts is rather uncertain \citep{westmoquette_11a}. A case in point is NGC~253, which has a nuclear starburst with $\Sigma_{\rm SFR}$ $\sim$ 1~$\rm M_{\odot}\,yr^{-1}\,kpc^{-2}$ and shows a well-defined nuclear outflow \citep{heesen_11a}. The same galaxy also has a galaxy-wide advective radio halo \citep{heesen_09a} and an X-ray halo indicating a galaxy-wide outflow as well \citep{bauer_08a}. It is possible that `down the barrel' optical and UV spectroscopic surveys overlook the larger size of the outflow region due to sensitivity issues, since a broad emission or absorption line component has to be identified.
When other measurements are used, such as integral field unit (IFU) spectroscopy, the inferred outflows are much larger in width. In the SAMI data of \citet{ho_16a}, the velocity fields of galaxies are widely asymmetric, indicating a larger outflow size. In the CALIFA sample, \citet{lopez_coba_19a} identified outflows via increasing line ratios such as [N\,{\sc ii}]/H$\alpha$ away from the disc. Such increasing ratios are consistent with shock ionization in galactic outflows. Again, the morphology points to galactic outflows.
\subsection{Outflow threshold}
\label{ss:outflow_threshold}
The existence of a minimum value of the star-formation rate surface density for outflows was first proposed by \citet{rossa_03b}, who studied the extra-planar diffuse ionized gas (eDIG) in edge-on galaxies. Their value is $\Sigma_{\rm SFR}$ $= 2\times 10^{-3}$~$\rm M_{\odot}\,yr^{-1}\,kpc^{-2}$, which was then corroborated by \citet{tuellmann_06a}, who studied extra-planar hot ionized gas via X-ray emission. In most galaxies, no extended eDIG emission is detected below this threshold, and if there is a detection, the dust temperature is significantly higher. This threshold is much lower than the canonical threshold for galactic winds by \citet{heckman_00a}, who suggested $\Sigma_{\rm SFR}$ $\approx 10^{-1}$~$\rm M_{\odot}\,yr^{-1}\,kpc^{-2}$. Galaxies exceeding this value are commonly referred to as `superwind' galaxies and are known to have extensive X-ray haloes \citep{strickland_04a}. More recent observations have shown this outflow threshold to be potentially much lower, as suggested for instance by the detection of a superbubble of warm dust in NGC~891 with a local $\Sigma_{\rm SFR}$\ of $0.03~$$\rm M_{\odot}\,yr^{-1}\,kpc^{-2}$\ \citep{yoon_19a}. It is probably the local value of $\Sigma_{\rm SFR}$\ that needs to be $\sim 10^{-2}$~$\rm M_{\odot}\,yr^{-1}\,kpc^{-2}$\ to allow the formation of chimneys facilitating outflows. The chimneys would form at spiral arms and predominantly at smaller galactocentric radii, allowing an outflow in the inner parts of the galaxy \citep[see instructive simulations by][]{krause_21a}.
\citet{ho_16a} suggest much lower values of the globally averaged star-formation rate surface density, $\Sigma_{\rm SFR}$ $=SFR/(\pi r_{\rm e}^2)$, with values of $10^{-3}$--$10^{-1.5}$~$\rm M_{\odot}\,yr^{-1}\,kpc^{-2}$, where $r_{\rm e}$ is the effective radius. \citet{lopez_coba_19a} suggest $\Sigma_{\rm SFR}$ $> 10^{-2}$ $\rm M_{\odot}\,yr^{-1}\,kpc^{-2}$\ in conjunction with a centrally concentrated gas distribution. Clearly, this value depends on the detection method. The radio continuum method also suggests a low threshold, around the value found for the existence of eDIG. This threshold is identified by the transition from diffusion-dominated haloes to advection-dominated ones (Section~\ref{ss:profile_shape} and Fig.~\ref{fig:scaling}(b)). That this property of radio haloes fits the optical observations raises the possibility that the vertical profiles of radio haloes (Gaussian and exponential, Section~\ref{ss:profile_shape}) and the multi-component fitting (Section~\ref{sss:multi_component_radio_disc}) already allow us to distinguish between galaxies with and without outflows.
\subsection{Cosmic-ray calorimetry}
The detection of $\gamma$-rays from star-forming galaxies with \emph{Fermi} has allowed us to compare the $\gamma$-ray luminosity with expectations from cosmic-ray calorimetry \citep{ackermann_12a}. \citet{lacki_11a} show that the starburst nuclei of NGC~253 and M~82 are closest to calorimetry, although there is some uncertainty with regard to the GeV emission from secondary CR$e^{-}$. \citet{yoast_hull_13a} showed that M~82 is a good electron calorimeter but not a proton calorimeter, while for NGC~253 the situation is more complex \citep{yoast_hull_14a}. It has recently been suggested by \citet{hopkins_20a} that galaxies at lower SFRs are only poor cosmic-ray proton calorimeters. This requires a fast escape of cosmic rays, either by diffusion, as facilitated by a high diffusion coefficient \citep{hopkins_20a}, or via a galactic wind.
\subsection{Extra-planar gas}
The extra-planar gas comprises several phases: the warm neutral medium traced by H\,{\sc i}, the WIM traced by H\,$\alpha$, and the HIM traced by X-ray emission. The HIM as traced by X-ray emission has electron density scale heights between 4 and 8~kpc, with exponential profiles preferred over Gaussian or power-law profiles \citep{strickland_04a,hodges-kluck_13a}. These data would be in approximate agreement with what is expected for an outflow of a hot wind. In contrast, the WIM as traced by H$\alpha$ emission has much smaller scale heights of $\sim$1~kpc \citep{dettmar_06a}, although in places there can be filaments with much larger scale heights of 3--5~kpc \citep{boettcher_13a}. These profiles can again be approximated by exponential functions. The atomic gas as traced by H\,{\sc i} emission has a large variety of scale heights \citep{zschaechner_15a}, sometimes with both a thin and a thick component, of the order of a few 100~pc and a few kpc, respectively.
Because both H\,{\sc i} and H\,$\alpha$ are emission lines, one can measure the rotation speed of edge-on galaxies as a function of height. It is observed that the rotation speed of the gas decreases approximately linearly with height. This is referred to as a `rotational lag' and has an amplitude of 5--20~$\rm km\,s^{-1}\,kpc^{-1}$ \citep{heald_06a,zschaechner_15a}.
\section{Inferences from theory}
\label{s:inferences_from_theory}
\subsection{Cosmic ray-driven wind models}
For cosmic rays to be able to drive a wind, they have to be effectively confined in the galaxy for some time. We recall that the cosmic-ray mean free path is only a few pc, so that galaxies are effectively optically thick to cosmic rays. The cosmic rays then transfer a small part of their momentum and energy to the gas every time they interact via Alfv\'en waves, which they generate themselves via the streaming instability \citep[`self-confinement' picture;][]{zweibel_13a}. The 1D cosmic ray-driven wind models by \citet{breitschwerdt_91a} and \citet{everett_08a} showed that cosmic rays streaming along the magnetic field lines can compensate for the adiabatic cooling of the wind fluid, so that the compound sound speed increases slightly in the halo. This is required for a wind solution to pass through the critical point when the gravitational acceleration is approximately constant, as is the case for a galaxy halo \citep{mao_18a}. The wind velocity profiles are approximately linear before the speed converges to a few times the rotation speed \citep{everett_08a}. One of the limitations of these wind models is that all the energy and mass are injected at $z=0$, which is not very physical. The widely used analytical wind model of \citet{chevalier_85a} has a driving region where the energy and momentum are injected, which is more realistic. This model was extended by \citet{samui_10a} to include cosmic rays; it shows rapid acceleration in the driving region and a nearly constant velocity at larger radii. As the driving region is confined to the disc plane with a height of $\sim 100$~pc, similar to the gaseous scale height of the warm neutral medium, the acceleration will be small at heights $\gtrsim$1~kpc.
In a steady state and without considering internal losses, the cosmic rays would transfer a fraction of their luminosity to the total energy flux in the wind, so that $L_{\rm CR}\approx 1/2(\epsilon/0.5)\dot M v^2$, where $\epsilon$ is an efficiency factor. As we have shown in Section~\ref{s:wind}, this is consistent with a stellar feedback-driven wind model in NGC~5775. A consequence of cosmic ray-driven winds is that the hydrodynamical equilibrium state of galaxies will be affected by the cosmic rays \citep{crocker_20b}. Another property of cosmic ray-driven wind models that include gravity is that the wind velocity scales linearly with the rotation speed of the galaxy \citep{ipavich_75a}. Such behaviour is also expected for `momentum-driven' winds, for which radiative cooling is important \citep{murray_05a,veilleux_20a}. The advection speed scaling relation with the rotation speed (Section~\ref{ss:scaling_relations}) is in good agreement with this expectation.
\input{cr1d}
\subsection{Simulations}
Simulations have shown how important cosmic rays are in creating galaxy-wide outflows. There are now many MHD simulations available that include cosmic rays, such as that of \citet{girichidis_18a}. They showed that cosmic ray-driven outflows are significantly cooler, with a large fraction of the gas staying at $10^4$~K rather than $10^6$~K as in the thermally driven case. The reason that cosmic rays are so effective at driving galactic winds is that they can diffuse out of star-forming regions and then create a `background sea' with a pressure gradient on kpc scales \citep{salem_14a}. This gradient can then lift the gas into the halo. The transport of cosmic rays is important: if the cosmic rays are just advected with the gas, they act only as an additional pressure component, so that the gaseous disc is `puffed up' but no outflow is created \citep{ruszkowski_17a}.
The cosmological simulations by \citet{pakmor_16a} showed the influence of anisotropic cosmic-ray diffusion. Galaxies with spiral magnetic fields, created in part by differential rotation, can store cosmic rays for longer. While a wind develops both for anisotropic and isotropic diffusion, isotropic diffusion suppresses the magnetic field amplification in the disc. Again, without diffusion, a wind does not form at all. In \citet{jacob_18a}, simulations of dwarf galaxies show spherical winds with slow speeds of 20~$\rm km\,s^{-1}$, whereas galaxies with higher masses have more bi-conical outflows with higher speeds of 200~$\rm km\,s^{-1}$. Interestingly, galaxies in excess of a virial mass of $10^{11.5}~\rm M_{\odot}$ do not form cosmic ray-driven winds beyond the virial radius. In all cases, the outflow speed is in good agreement with the escape velocity near the galactic mid-plane. However, their wind speeds seem to be always on the low side when compared with optical observations and our data.
\subsubsection{Wind velocity profiles}
The simulations by \citet{girichidis_18a} showed approximately linearly accelerating mass-weighted velocity profiles. They simulated only the first 2~kpc near the galactic mid-plane, so it is not clear whether the escape velocity is reached; the authors point out that further acceleration is expected in the halo. The outflow speeds they find are $\sim$50~$\rm km\,s^{-1}$, significantly lower than what we measure near the disc. In MHD simulations of isolated galaxies with cosmic rays, \citet{jacob_18a} found vertical velocity profiles in good agreement with the wind model of \citet{chevalier_85a}, with rapid acceleration near the disc followed by a nearly constant velocity in the halo.
\subsubsection{Cloud entrainment}
The entrainment of clouds is important to load the wind, which consists mostly of the HIM, with further mass. \citet{banda_20a} showed that the hot wind is able to accelerate clouds to a speed comparable to the escape velocity within the first 1~kpc away from the disc. These clouds form a `mist' of WIM that can be further accelerated in the wind. The wind forms this close to the disc as expanding superbubbles, which are particularly good at converting the kinetic energy of SNe into thermal energy, i.e. the thermalization efficiency is quite high at a few tens of per cent \citep{sharma_14a}. Importantly, these clouds can then be considered as the starting point for the reference level at 1~kpc, where we begin to see advection-dominated haloes. It hence appears possible that these clouds are advected as a thin mist with the fast flow of the HIM and traced by the CR$e^{-}$.
As noted in Section~\ref{ss:wind_speed}, there is good agreement between the scaling relations found for the advection speeds and those of the cool neutral and warm ionized outflows as measured from UV interstellar absorption lines. This holds both for the magnitude of the outflow velocity and for the slopes of the scaling relations. Since we assume that the radio emission traces the HIM (Section~\ref{ss:n5775}), this agreement requires the entrainment of clouds in order to obtain similar wind speeds for the clouds.
\section{Missing physics}
\label{s:missing_physics}
We have now reached a stage where we have demonstrated that radio continuum observations give us a complementary view of galactic winds. What we have not yet been able to do is to identify the mechanisms that would allow us to probe the workings of a cosmic ray-driven galactic wind. In our iso-thermal wind model we have only \emph{assumed} that the cosmic rays do not cool adiabatically and that the sound speed is constant (Section~\ref{s:wind}). This can be achieved either by cosmic-ray streaming along magnetic field lines or by anisotropic diffusion. Both have already been implemented in 1D models in the literature, so their application remains to be carried out in future work. In Table~\ref{tbl:cr1d}, we present an overview of our purely phenomenological models and the 1D wind models from the literature. We now discuss the main physical effects.
\subsection{Cosmic-ray streaming}
The vertical spectral index profiles are well fitted with our advection models, although our concave model profiles may still be improved in order to fit the data better \citep{miskolczi_19a}. This would require a faster CR$e^{-}$ bulk speed without the change in magnetic field demanded by the continuity equation and energy equipartition. Such a change may hence be consistent with cosmic-ray streaming, which does change the CR$e^{-}$ bulk speed. It has also been suggested that the streaming speed can be a few times the Alfv\'en speed when the cosmic rays decouple from the magnetic field, for instance when neutral atoms suppress Alfv\'en waves \citep{ruszkowski_17a}. This raises the possibility of detecting a particularly fast cosmic-ray bulk speed in areas with an excess of H\,{\sc i} emission.
An alternative suggestion is to measure the velocity differential between the CR$e^{-}$ bulk speed and the ionized gas in the outflow. For edge-on galaxies, the line-of-sight velocity is of course fairly small, depending on the outflow opening angle. The entrained clouds should soon reach the speed of the HIM, so that we can use these clouds as a tracer of the outflow speed \citep{banda_20a}. The global wind speeds appear to be in good agreement with what has been measured for the WIM using optical and UV spectroscopy (Section~\ref{ss:wind_speed}), but there may still be local effects, in particular where the magnetic field has a strong vertical component \citep{tuellmann_00a}.
\subsection{Cosmic-ray diffusion}
Near the galactic mid-plane, cosmic-ray diffusion is important. Only high-angular-resolution images at the 1-kpc scale resolve the thin radio disc, where the escape of the cosmic rays is governed by the superposition of diffusion and advection. Since we do not resolve this region well with our observations, we have neglected its influence. However, as \citet{breitschwerdt_93a} and \citet{recchia_16a} have shown, this region is quite important for the launching of the wind and also for the cosmic-ray spectrum \citep{ptuskin_97a}. Obviously, the escape of CR$e^{-}$ would also have a bearing on the integrated radio spectral index, which we can now measure over several decades in frequency. As pointed out before, while we may neglect diffusion in comparison to advection for simply measuring the transport of CR$e^{-}$ (Section~\ref{sss:cre_transport_length}), from a theoretical point of view we cannot trace the makings of a cosmic ray-driven wind without taking diffusion into account.
\subsection{Rotation}
We have neglected the influence of the magnetic pressure on the outflow velocity. This is justified if the outflow follows the magnetic field lines, so that the lines of magnetic force are parallel to the wind velocity. Obviously, this has consequences for the outflow geometry, which we can potentially test with linear polarisation measurements. Due to the superposition of the azimuthal and vertical velocities, the magnetic field lines would wind up into a helical shape in the halo. This may potentially be observable as a rotation measure signal. So far, linear polarisation measurements have shown X-shaped magnetic fields but little or no large-scale rotation measure signal in the halo \citep{soida_11a}.
What then also needs to be taken into account is a proper treatment of the angular momentum in the outflow. This will change the wind solutions as \citet{zirakashvili_96a} have demonstrated. They also found that the dynamics of the ionized gas in the halo is changed since some of the angular momentum is transferred to this gas.
\section{Summary}
\label{s:summary}
As we have demonstrated in this review, radio continuum observations open up a new window on cosmic-ray transport in nearby galaxies. These observations allow us to calibrate the influence that cosmic rays have on galactic winds, a process that shapes and influences galaxy evolution in a unique way. Cosmic rays are transported by diffusion, advection, and streaming, which all contribute to different degrees. Since galaxies have complex magnetic field configurations, they are effectively optically thick to the scattering of cosmic rays. With radio continuum observations we trace cosmic-ray electrons in the GeV energy range, corresponding to the peak of the cosmic-ray energy density in the spectrum of protons and heavier nuclei. In summary, our main results are as follows:
\begin{enumerate}
\item Diffusion coefficients for GeV-cosmic rays are of order $10^{28}$~$\rm cm^2\,s^{-1}$, with either little or no energy dependence (Section~\ref{ss:diffusion_coefficients});
\item Gaussian radio haloes are diffusion-dominated and are found in galaxies with no winds, whereas exponential radio haloes are advection-dominated indicative of winds (Section~\ref{ss:profile_shape});
\item Advective radio haloes are predominant in galaxies with $\Sigma_{\rm SFR}$$\geq 2\times 10^{-3}$~$\rm M_{\odot}\,yr^{-1}\,kpc^{-2}$\ (Section~\ref{sss:gaussian_profile_shape}); this suggests that there is a $\Sigma_{\rm SFR}$-threshold value for galactic winds (Section~\ref{ss:outflow_threshold});
\item The advection speed scales with SFR, $\Sigma_{\rm SFR}$, and $v_{\rm rot}$ (Section~\ref{ss:scaling_relations}), corroborating stellar feedback as the cause for radio haloes;
\item The high advection speeds, comparable to the escape velocities, suggest that the gas can escape into the CGM and contribute to the escape of baryons and metals;
\item The advection speed scaling relations are in good agreement with what has been measured using optical and UV spectroscopy, further corroborating that galactic winds and radio haloes have a common cause (Section~\ref{ss:wind_speed});
\item A stellar feedback-driven wind model suggests that the hot ionized medium is the main wind fluid (Section~\ref{s:wind}). If cosmic rays are driving the wind, the mass-loading factor could be of order unity for Milky Way-type galaxies. However, due to the uncertain geometry and entrainment of warm ionized and cold neutral medium clouds, this mass-loss rate might be substantially different.
\end{enumerate}
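The distinction between diffusion- and advection-dominated haloes in points (i)--(iii) can be made quantitative by comparing transport lengths. The sketch below uses the diffusion coefficient of $10^{28}~\rm cm^2\,s^{-1}$ quoted above; the advection speed of $300~\rm km\,s^{-1}$ and the CR$e^{-}$ lifetime of 30~Myr are illustrative assumptions only.

```python
# Hedged sketch: compare diffusive and advective transport lengths for
# GeV cosmic-ray electrons. D is taken from the review; v and t are
# assumed fiducial values for illustration.
import math

KPC_CM = 3.086e21  # kpc in cm
MYR_S = 3.156e13   # Myr in s

def diffusion_length_kpc(D_cm2_s, t_myr):
    """1D diffusion length l = sqrt(4 D t), returned in kpc."""
    return math.sqrt(4.0 * D_cm2_s * t_myr * MYR_S) / KPC_CM

def advection_length_kpc(v_kms, t_myr):
    """Advection length l = v t, returned in kpc."""
    return v_kms * 1.0e5 * t_myr * MYR_S / KPC_CM

l_d = diffusion_length_kpc(1.0e28, 30.0)   # D from the review
l_a = advection_length_kpc(300.0, 30.0)    # assumed wind speed and lifetime
print(f"diffusion: {l_d:.1f} kpc, advection: {l_a:.1f} kpc")
```

For these assumed numbers the advection length exceeds the diffusion length by a factor of a few, consistent with neglecting diffusion when measuring CR$e^{-}$ transport in advection-dominated haloes.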
There are a few caveats relevant to these conclusions, however. One of them is that it can be difficult to distinguish advection and diffusion based purely on the radio spectral index \citep{stein_19b}. It is thus possible that galaxies near the diffusion--advection boundary may be mis-classified. Another uncertainty is the unknown contribution from cosmic-ray streaming in edge-on galaxies, which may lead to advection speeds overestimating wind velocities. Hence, in the future a better modelling of streaming would be required, incorporating it into the stellar feedback-driven wind model (Section~\ref{ss:cosmic_ray_streaming}). In face-on galaxies, a better modelling of cosmic-ray diffusion and streaming while accounting for the escape of CR$e^{-}$ would be necessary in order to affirm measurements of the diffusion coefficient and to be able to distinguish between these transport modes (Section~\ref{s:face_on_galaxies}).
Our long-term aim is to inform cosmological simulations of galaxies, which have to build in this kind of physics as part of `subgrid' models \citep{vogelsberger_20a}. With new radio facilities now producing many observational data sets with which we can test these models, we expect that important input will come from observations for the foreseeable future.
\section*{Data availability}
All data generated or analysed during this study are included in this published article.
\section*{Statements and Declarations}
The authors have no competing interests to declare that are relevant to the content of this article.
\acknowledgments
We would like to thank the editors of this Topical Collection, Manami Sasaki, Ralf-J\"urgen Dettmar and Julia Tjus, for the invitation to write this review article. We also would like to thank the anonymous referee for an insightful report that helped to improve this paper. We thank our colleagues for providing important contributions to this article. In particular, we would like to thank Arpad Miskolczi, Yelena Stein, Philip Schmidt, Sarrvesh Sridahr, and George Heald. They all helped a lot with studying the radio continuum observations and applying the {\sc spinnaker} models to them. Arpad Miskolczi is also thanked especially for developing {\sc spinteractive}, which made the application of these models so much easier.
\bibliographystyle{spr-mp-nameyear-cnd}
\section*{Appendix \thesection\protect\indent \parbox[t]{11.15cm}{#1}}
\addcontentsline{toc}{section}{Appendix \thesection\ \ \ #1}}
\newcommand{\appwithlabel}[2]{\addtocounter{section}{1}\setcounter{equation}{0}
\renewcommand{\thesection}{\Alph{section}}
\section*{Appendix \thesection\protect\indent \parbox[t]{11.15cm}{#1}}\label{#2}
\addcontentsline{toc}{section}{Appendix \thesection\ \ \ #1}}
\def{\bf{e}}{{\bf{e}}}
\def{\bf{\ell}}{{\bf{\ell}}}
\font\mybb=msbm10 at 11pt
\font\mybbb=msbm10 at 17pt
\def\bb#1{\hbox{\mybb#1}}
\def\bbb#1{\hbox{\mybbb#1}}
\def\bb{Z} {\bb{Z}}
\def\bb{R} {\bb{R}}
\def\bb{E} {\bb{E}}
\def\bb{H} {\bb{H}}
\def\bb{C} {\bb{C}}
\def\bb{N} {\bb{N}}
\def{\cal{L}} {{\cal{L}}}
\def{\tilde{X}} {{\tilde{X}}}
\def{\hat{V}} {{\hat{V}}}
\newcommand\cn{\Check{\mathrm{\nabla}}}
\newcommand\cirn{\mathring{\mathrm{\nabla}}}
\newcommand\cird{\mathring{\mathcal{D}}}
\newcommand\fe{{\mathbf e}}
\newcommand\te{{\mathrm e}}
\def\mathrm{a}{\mathrm{a}}
\def\mathrm{b}{\mathrm{b}}
\def\mathrm{c}{\mathrm{c}}
\def\Gamma\mkern-4.0mu X{\Gamma\mkern-4.0mu X}
\def\Gamma\mkern-4.0mu \omega{\Gamma\mkern-4.0mu \omega}
\def\Gamma\mkern-2.0mu Y{\Gamma\mkern-2.0mu Y}
\def\Gamma\mkern-4.0mu F{\Gamma\mkern-4.0mu F}
\def\Gamma\mkern-4.0mu Q{\Gamma\mkern-4.0mu Q}
\def\Gamma\mkern-4.0mu H{\Gamma\mkern-4.0mu H}
\def\Gamma\mkern-4.0mu G{\Gamma\mkern-4.0mu G}
\defF\mkern-4.0mu H{F\mkern-4.0mu H}
\defD {D}
\def\underline{\alpha}{\underline{\alpha}}
\def\underline{\beta}{\underline{\beta}}
\def\underline{\gamma}{\underline{\gamma}}
\def\underline{A}{\underline{A}}
\def\underline{B}{\underline{B}}
\def\underline{C}{\underline{C}}
\def\underline{a}{\underline{a}}
\def\underline{b}{\underline{b}}
\def\underline{c}{\underline{c}}
\def\underline{r}{\underline{r}}
\def\underline{s}{\underline{s}}
\def\underline{t}{\underline{t}}
\def\slashed {X}{\slashed {X}}
\def\slashed {\gX}{\slashed {\Gamma\mkern-4.0mu X}}
\def\slashed {Y}{\slashed {Y}}
\def\slashed {\gY}{\slashed {\Gamma\mkern-2.0mu Y}}
\def\slashed {h}{\slashed {h}}
\def\slashed {F}{\slashed {F}}
\def\slashed {\gF}{\slashed {\Gamma\mkern-4.0mu F}}
\def\slashed {Q}{\slashed {Q}}
\def\slashed {\gQ}{\slashed {\Gamma\mkern-4.0mu Q}}
\def\slashed{\FH}{\slashed{F\mkern-4.0mu H}}
\def\slashed{H}{\slashed{H}}
\def\slashed{\gH}{\slashed{\Gamma\mkern-4.0mu H}}
\def\slashed{\gG}{\slashed{\Gamma\mkern-4.0mu G}}
\def{\cal D}{{\cal D}}
\def\hat{\ell}{\hat{\ell}}
\def\hat{r}{\hat{r}}
\newcommand{\text{\tiny $I$}}{\text{\tiny $I$}}
\newcommand{\text{\tiny $J$}}{\text{\tiny $J$}}
\newcommand{\text{\tiny $A$}}{\text{\tiny $A$}}
\newcommand{\text{\tiny $B$}}{\text{\tiny $B$}}
\newcommand{\begin{eqnarray}}{\begin{eqnarray}}
\newcommand{\end{eqnarray}}{\end{eqnarray}}
\newcommand{\nonumber \\}{\nonumber \\}
\newcommand{\begingroup\color{blue}}{\begingroup\color{blue}}
\newcommand{\endgroup}{\endgroup}
\DeclareMathOperator{\dvol}{dvol}
\def\operatorname{Re}{\operatorname{Re}}
\def\operatorname{Im}{\operatorname{Im}}
\def\mathbb{R}{\mathbb{R}}
\def\f5{f_{(5)}}
\def\tilde{f}_{(5)}{\tilde{f}_{(5)}}
\def\g4{f_{(4)}}
\def\tilde{f}_{(4)}{\tilde{f}_{(4)}}
\def\bar{H}{\bar{H}}
\def\bar{\Phi}{\bar{\Phi}}
\def\bar{G}{\bar{G}}
\newcommand{\new}[1]{\textcolor{red}{#1}}
\usepackage[normalem]{ulem}
\usepackage{microtype}
\usepackage[colorlinks, citecolor=blue, linkcolor=blue, urlcolor=Maroon, filecolor=Maroon, linktocpage=true]{hyperref}
\usepackage{chngcntr}
\begin{document}
\begin{center}
\vspace*{-1.0cm}
\begin{flushright}
\end{flushright}
\vspace{2.0cm} {\Large \bf TCFHs, hidden symmetries and type II theories} \\[.2cm]
\vskip 2cm
L. Grimanellis, G. Papadopoulos and J. Phillips
\\
\vskip .6cm
\begin{small}
\textit{Department of Mathematics
\\
King's College London
\\
Strand
\\
London WC2R 2LS, UK}\\
\texttt{loukas.grimanellis@kcl.ac.uk}
\\
\texttt{george.papadopoulos@kcl.ac.uk}
\\
\texttt{jake.phillips@kcl.ac.uk}
\end{small}
\\*[.6cm]
\end{center}
\vskip 2.5 cm
\begin{abstract}
\noindent
We present the twisted covariant form hierarchies (TCFH) of type IIA and IIB 10-dimensional supergravities and show that all form bilinears of supersymmetric backgrounds satisfy the conformal Killing-Yano equation with respect to a TCFH connection. We also compute the Killing-St\"ackel, Killing-Yano and closed conformal Killing-Yano tensors of all spherically symmetric type II brane backgrounds and demonstrate that the geodesic flow on these solutions is completely integrable by giving all independent charges in involution. We then identify all form bilinears of common sector and D-brane backgrounds which generate hidden symmetries for particle and string probe actions. We also explore the question of whether charges constructed from form bilinears are sufficient to prove the integrability of probes on supersymmetric backgrounds.
\end{abstract}
\vskip 1.5 cm
\newpage
\renewcommand{\thefootnote}{\arabic{footnote}}
\section{Introduction}
It has been known for some time that some gravitational backgrounds admit Killing-St\"ackel (KS) and Killing-Yano (KY) tensors; see \cite{carter-b}-\cite{lun}, the reviews \cite{revky} and \cite{frolov}, and references therein. These are used to demonstrate the separability and integrability of classical equations, such as the geodesic, Hamilton-Jacobi and Dirac equations, on these backgrounds. A key property of KS tensors is that they generate hidden symmetries for relativistic particles, while KY tensors generate hidden symmetries \cite{gibbons} for spinning particles \cite{bvh} propagating on gravitational backgrounds.
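For orientation, the conserved charges behind these hidden symmetries can be stated explicitly (a standard fact recalled here, not a result of the references above): along an affinely parametrised geodesic $x^M(\tau)$, a KS tensor $K$ and a KY $k$-form $Y$ give

```latex
% Conserved charges generated by KS and KY tensors along an affinely
% parametrised geodesic x^M(\tau):
\begin{align}
\nabla_{(P} K_{MN)} = 0 \quad &\Longrightarrow \quad
\frac{d}{d\tau}\big( K_{MN}\, \dot x^M \dot x^N \big) = 0~,
\\
\nabla_{(M} Y_{N) P_1\dots P_{k-1}} = 0 \quad &\Longrightarrow \quad
\dot x^M \nabla_M \big( Y_{N P_1 \dots P_{k-1}}\, \dot x^N \big) = 0~,
\end{align}
```

i.e. the quadratic charge of a KS tensor is constant, while the contraction of a KY form with the velocity is parallel-propagated along the geodesic.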
It has been shown in \cite{gptcfh} that the conditions imposed by the gravitino Killing spinor equation (KSE) on the (Killing spinor) form bilinears can be arranged as a twisted covariant form hierarchy (TCFH) \cite{jggp}. This means that there is a connection, ${\cal D}^{\cal F}$, on the space of spacetime forms which depends on the fluxes, ${\cal F}$, of the theory such that the highest weight representation of ${\cal D}^{\cal F}\Omega$ vanishes, where $\Omega$ is a collection of forms of various degrees and ${\cal D}^{\cal F}$ may not be form degree preserving. Equivalently, this condition can be written as
\begin{eqnarray}
{\cal D}_X^{\cal F}\Omega= i_X {\cal P}+ X\wedge {\cal Q}~,
\label{tcfheqn}
\end{eqnarray}
for every spacetime vector field $X$, where ${\cal P}$ and ${\cal Q}$ are appropriate multi-forms and $X$ also denotes the associated 1-form constructed from the vector field $X$ after using the spacetime metric $g$, $X(Y)=g(X,Y)$. The proof of this result is rather general and includes supergravities on spacetimes of any signature as well as the effective theories of strings which include higher order curvature corrections. It also puts the conditions imposed by the KSEs on the form bilinears on a firm geometric basis.
One consequence of the TCFH is that the form bilinears satisfy a generalisation of the conformal Killing-Yano (CKY) equation with respect to the connection ${\cal D}^{\cal F}$. This can easily be seen by taking the skew-symmetric part and the contraction with respect to the metric $g$ of (\ref{tcfheqn}); one then identifies ${\cal P}$ with an exterior derivative constructed from ${\cal D}^{\cal F}$ and ${\cal Q}$ with a formal adjoint of ${\cal D}^{\cal F}$ acting on $\Omega$. This raises the question of whether the form bilinears generate hidden symmetries for worldvolume actions which describe the propagation of certain probes in supersymmetric backgrounds. This question was first investigated in the context of 5- and 4-dimensional supergravities in \cite{gpeb}.
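To make the comparison concrete, recall the standard CKY equation for a $k$-form $\omega$ on an $n$-dimensional spacetime (quoted here for reference; the TCFH statement is the analogous equation with $\nabla$ replaced by ${\cal D}^{\cal F}$ and with $d$, $\delta$ built from ${\cal D}^{\cal F}$):

```latex
% Conformal Killing-Yano equation for a k-form \omega in n dimensions;
% identifying i_X P + X \wedge Q with the right-hand side gives
% P = d\omega/(k+1) and Q = -\delta\omega/(n-k+1).
\begin{equation}
\nabla_X \omega = \frac{1}{k+1}\, i_X d\omega
 - \frac{1}{n-k+1}\, X \wedge \delta\omega~,
\end{equation}
```

so the multi-forms ${\cal P}$ and ${\cal Q}$ of (\ref{tcfheqn}) play the roles of the exterior derivative and co-derivative terms, respectively.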
The purpose of this paper is twofold. One is to present the TCFHs of IIA and IIB supergravities and to discuss some of the properties of the TCFH connections ${\cal D}^{\cal F}$, like for example their holonomy, on generic as well as on some special supersymmetric backgrounds. As a consequence we demonstrate that the form bilinears of these theories satisfy a CKY equation with respect to ${\cal D}^{\cal F}$ in agreement with the general result of \cite{gptcfh}. Another purpose of this paper is to give the KS tensors of type II branes\footnote{Brane solutions have been instrumental in the understanding of string dualities \cite{hulltown, town}.} \cite{funstring}-\cite{d8} and to use them to prove the complete integrability of the geodesic flow of those solutions that are spherically symmetric, i.e. those that depend on a harmonic function with one centre. In addition, the KY tensors that square to the KS tensors of these backgrounds will be given and the symmetries of spinning particles propagating on these backgrounds will be explored. Furthermore we shall investigate the conditions required for the TCFH to yield symmetries for particle and string probes propagating in common sector and D-brane backgrounds. Finally we shall compare the results we have obtained from the point of view of KS and KY tensors with those that arise from the TCFHs.
To investigate under which conditions the form bilinears generate symmetries for certain probe actions propagating in type II supersymmetric backgrounds, we shall match the conditions required for certain probe actions to be invariant under transformations generated by form bilinears with those imposed on them by the TCFHs. For the common sector of type II theories, it is shown that all form bilinears which are covariantly constant with respect to a connection with torsion given by the NS-NS 3-form field strength generate symmetries for string and spinning particle probes propagating on these backgrounds. Common sector backgrounds also admit form bilinears which are not covariantly constant and instead satisfy a general TCFH. These form bilinears may not generate symmetries for probes propagating in common sector backgrounds but nevertheless are part of their geometric structure. In particular the form bilinears of the fundamental string and NS5-brane solutions that are allowed to depend on multi-centre harmonic functions have been computed. It has been found that the type II fundamental string solution admits $2^7$ covariantly constant independent form bilinears while the type II NS5-brane solution admits $2^5$ covariantly constant independent form bilinears. All these forms generate (hidden) symmetries for probe string and spinning particle actions propagating on these backgrounds.
A similar analysis is presented for all type II D-branes. In particular, the form bilinears of all D-branes are computed. It is found that the requirement for these to generate symmetries for spinning particle probes propagating on these backgrounds is rather restrictive. This is due to the difficulties of constructing probe actions which exhibit appropriate form couplings. Nevertheless all type II D-branes, which may depend on multi-centre harmonic functions, admit form bilinears which generate symmetries for spinning particle probe actions. It turns out that all such form bilinears have components only along the worldvolume directions of the D-branes. A comparison of the symmetries we have found generated by the KS and KY tensors and those generated by the form bilinears in type II brane backgrounds will be presented in the conclusions.
This paper is organised as follows. In sections 2 and 3, we give the TCFHs of IIA and IIB supergravities and discuss some of the properties of the TCFH connections.
In section 4, after a summary of the properties of the KS and KY tensors, we present the KS and KY tensors of all type II branes. In addition, we prove the complete integrability of the geodesic flow in all type II branes that depend on a harmonic function with one centre by presenting all the independent conserved charges which are in involution. In section 5, we demonstrate that all covariantly constant form bilinears with respect to a connection with skew-symmetric torsion generate symmetries for certain probe string and particle actions propagating on common sector backgrounds. In addition, we explicitly give all the
covariantly constant form bilinears for the type II fundamental string and NS5-brane solutions. In sections 6 and 7, we identify the form bilinears which
generate symmetries for spinning particle actions propagating on type II D-brane backgrounds. In section 8, we give our conclusions. In appendices \ref{apa}
and \ref{apb}, we give all the form bilinears of type II common sector branes and type II D-branes, respectively.
\section{The TCFH of (massive) IIA supergravity }\label{iiatcfhs}
The KSEs of massive IIA supergravity \cite{romans} are given by the vanishing conditions of the supersymmetry variations of the gravitino and dilatino fields evaluated at the locus that all fermions are set to zero. The KSE associated with the gravitino field is a parallel transport equation for the supercovariant connection
$\mathcal{D}$. In the string frame, this is given by
\begin{equation}
\begin{split}
\mathcal{D}_M \defeq \nabla_M &+ \frac{1}{8}H_{MPQ}\Gamma^{PQ}\Gamma_{11} + \frac{1}{8}e^\Phi S \Gamma_M \\ &+ \frac{1}{16}e^\Phi F_{PQ} \Gamma^{PQ}\Gamma_M\Gamma_{11}+\frac{1}{8\cdot 4!}e^\Phi G_{P_1 \dots P_4} \Gamma^{P_1\dots P_4}\Gamma_M~,
\label{iiasc}
\end{split}
\end{equation}
see e.g. \cite{eric}, where $H$ is the NS-NS 3-form field strength, $\Phi$ is the dilaton, and $F$ and $G$ are the 2-form and 4-form R-R field strengths, respectively. In addition, $\nabla$ is the Levi-Civita connection induced on the spinor bundle and $S = e^\Phi m$, where $m$ is a constant which is non-zero in massive IIA and vanishes in the standard IIA supergravity. Furthermore $\Gamma$ denotes the gamma matrices which satisfy the Clifford algebra relation $\Gamma_A\Gamma_B+\Gamma_B \Gamma_A=2 \eta_{AB}$ and in our conventions $\Gamma_{11} \defeq -\Gamma_{012\dots 9}$. In what follows, we shall not make a sharp distinction between spacetime and frame indices but we shall always assume that the indices of gamma matrices are frame indices. It turns out that $\mathcal{D}$ is a connection on the spin bundle over the spacetime associated with the Majorana (real) representation of $\mathfrak{spin} (9,1)$. The (reduced) holonomy of ${\cal D}$ for generic backgrounds is $SL(32, \bb{R})$ \cite{gpdt}, see \cite{hull, duff, gpdtx} for the computation of the holonomy of the supercovariant derivative of 11-dimensional supergravity.
The Killing spinors $\epsilon$ satisfy the gravitino KSE, $\mathcal{D}\epsilon=0$, as well as the dilatino KSE which is an algebraic equation. Backgrounds that admit such Killing spinors are special and both the spacetime metric and fluxes are suitably restricted, see \cite{gpug} where the IIA KSEs have been solved for one Killing spinor. The TCFHs are associated with the gravitino KSE which we shall focus on in what follows.
Given $N$ Killing spinors $\epsilon^r$, $r=1,\dots, N$, one can construct the form bilinears
\begin{eqnarray}
\phi^{rs}={1\over k!}\langle \epsilon^r, \Gamma_{A_1\dots A_k}\epsilon^s\rangle_D\, e^{A_1}\wedge\dots\wedge e^{A_k}~,
\label{fbil}
\end{eqnarray}
where $\langle\cdot, \cdot\rangle_D$ denotes the Dirac inner product and $e^A$ is a suitable spacetime frame, $g_{MN}= \eta_{AB} e^A_M e^B_N$. As
\begin{eqnarray}
\nabla_M \phi^{rs}_{A_1\dots A_k}=\langle \nabla_M \epsilon^r, \Gamma_{A_1\dots A_k}\epsilon^s\rangle_D+\langle \epsilon^r, \Gamma_{A_1\dots A_k} \nabla_M\epsilon^s\rangle_D~,
\end{eqnarray}
one can use the gravitino KSE, $\mathcal{D}\epsilon=0$, and (\ref{iiasc}) to express the right-hand side of the above equation in terms of the fluxes and form bilinears of the theory. It has been shown in \cite{gptcfh} that these equations can be organised as a TCFH.
Using the reality condition on $\epsilon$, there are form bilinears which are either symmetric or skew-symmetric in the exchange
of spinors $\epsilon^r$ and $\epsilon^s$ in (\ref{fbil}). As a consequence, the TCFH of IIA supergravity factorises into two parts.
A basis in form bilinears, up to a Hodge duality\footnote{Our convention for the Hodge duality operation is
${}^\star{\omega}_{N_1 \dots N_{n-p}} = \frac{1}{p!}\omega_{P_1\dots P_p}\epsilon^{P_1\dots P_p}{}_{N_1\dots N_{n-p}}$ with $\epsilon_{012\dots (n-1)} = -1$, where $n$ is the spacetime dimension.}
operation, which are symmetric in the exchange of the two Killing spinors $\epsilon^r$ and $\epsilon^s$ is
\begin{eqnarray}
&& \tilde{\sigma}^{rs} = \D{\epsilon^r}{\Gamma_{11}\epsilon^s} , \quad k^{rs} = \D{\epsilon^r}{\Gamma_N\epsilon^s} \, e^N , \quad \tilde{k}^{rs} = \D{\epsilon^r}{\Gamma_N\Gamma_{11}\epsilon^s} \, e^N , \qquad\qquad \cr
&&
\omega^{rs} = \frac{1}{2} \D{\epsilon^r}{\Gamma_{NR} \epsilon^s} \,e^N \wedge e^R, \quad
\tilde{\zeta}^{rs} = \frac{1}{4!} \D{\epsilon^r}{\Gamma_{N_1 \dots N_4}\Gamma_{11} \epsilon^s}\, e^{N_1} \wedge \dots \wedge e^{N_4},
\cr
&&
\tau^{rs} = \frac{1}{5!} \D{\epsilon^r}{\Gamma_{N_1 \dots N_5}\epsilon^s}\, e^{N_1} \wedge \dots \wedge e^{N_5}~.
\label{symiia}
\end{eqnarray}
A direct computation reveals that the TCFH is
\begin{equation}
{\cal D}^{\cal F}_M\tilde\sigma\defeq \nabla_M \tilde{\sigma} = -\frac{1}{4}H_{MPQ}\omega^{PQ} + \frac{1}{4}e^\Phi S \tilde{k}_M - \frac{1}{4} e^\Phi F_{MP}k^P - \frac{1}{4 \cdot 5!}{}^\star{G}_{MP_1\dots P_5}\tau^{P_1\dots P_5}~,
\label{iiatcfha}
\end{equation}
\begin{eqnarray}
&&{\cal D}_M^{\cal F} k_N \defeq \nabla_M k_N = -\frac{1}{2}H_{MNP}\tilde{k}^P + \frac{1}{4}e^\Phi S \omega_{MN} + \frac{1}{8} e^\Phi F_{PQ}\tilde{\zeta}^{PQ}{}_{MN} +\frac{1}{4}e^\Phi F_{MN}\tilde{\sigma} \cr
&&\qquad+\frac{1}{4\cdot 4!}e^\Phi {}^\star{G}_{MNP_1\dots P_4}\tilde{\zeta}^{P_1 \dots P_4} + \frac{1}{8}e^\Phi G_{MNPQ}\omega^{PQ}~,
\label{iiatcfhb}
\end{eqnarray}
\begin{eqnarray}
&&{\cal D}_M^{\cal F}\tilde k_N\defeq \nabla_M \tilde{k}_N - \frac{1}{2} e^\Phi F_{MP}\omega^P{}_N- \frac{1}{12}e^\Phi G_{MPQR}\tilde{\zeta}^{PQR}{}_N= -\frac{1}{2}H_{MNP}k^P \cr
&&\qquad+ \frac{1}{4}e^\Phi g_{MN}S\tilde{\sigma} +\frac{1}{8}e^\Phi g_{MN} F_{PQ}\omega^{PQ} -\frac{1}{2}e^\Phi F_{[M|P|}\omega^P{}_{N]}
\cr
&&\qquad+ \frac{1}{4\cdot 4!}e^\Phi g_{MN} G_{P_1\dots P_4}\tilde{\zeta}^{P_1 \dots P_4} - \frac{1}{12}e^\Phi G_{[M|PQR|}\tilde{\zeta}^{PQR}{}_{N]}~,
\label{iiatcfhc}
\end{eqnarray}
\begin{eqnarray}
&&{\cal D}_M^{\cal F}\omega_{NR}\defeq \nabla_M \omega_{NR} +\frac{1}{4}H_{MPQ}\tilde{\zeta}^{PQ}{}_{NR}+ e^\Phi F_{M[N}\tilde{k}_{R]} - \frac{1}{12} e^\Phi G_{MP_1P_2P_3}\tau^{P_1P_2P_3}{}_{NR} \cr
&&\qquad = \frac{1}{2}H_{MNR}\tilde{\sigma}+ \frac{1}{2}e^\Phi S g_{M[N}k_{R]} + \frac{3}{4} e^\Phi F_{[MN}\tilde{k}_{R]} + \frac{1}{2} e^\Phi g_{M[N}F_{R]P}\tilde{k}^P
\cr
&&\qquad+\frac{1}{4 \cdot 5!} e^\Phi {}^\star{F}_{MNRP_1\dots P_5}\tau^{P_1\dots P_5} + \frac{1}{2\cdot 4!}e^\Phi g_{M[N} G_{|P_1\dots P_4|}\tau^{P_1 \dots P_4}{}_{R]}
\cr
&&\qquad- \frac{1}{8}e^\Phi G_{[M|P_1 P_2 P_3|}\tau^{P_1 P_2 P_3}{}_{NR]} - \frac{1}{4}e^\Phi G_{MNRP} k^P ~,
\label{iiatcfhd}
\end{eqnarray}
\begin{eqnarray}
&&{\cal D}_M^{\cal F} \tilde{\zeta}_{N_1 \dots N_4} \defeq \nabla_M \tilde{\zeta}_{N_1 \dots N_4} +\frac{1}{3}{}^\star{H}_{M[N_1N_2N_3|PQR|}\tilde{\zeta}^{PQR}{}_{N_4]} - 3 H_{M[N_1 N_2}\omega_{N_3 N_4]}
\cr
&&\qquad+ \frac{1}{2} e^\Phi F_{MP}\tau^P{}_{N_1 \dots N_4}-\frac{1}{2}e^\Phi {}^\star{G}_{M[N_1N_2|PQR|}\tau^{PQR}{}_{N_3N_4]}+ 2e^\Phi G_{M[N_1N_2N_3}\tilde{k}_{N_4]} \cr
&&\qquad
= -\frac{1}{12}g_{M[N_1}{}^\star{H}_{N_2 N_3 N_4] P_1 \dots P_4}\tilde{\zeta}^{P_1 \dots P_4}+ \frac{5}{12} {}^\star{H}_{[MN_1N_2N_3|PQR|}\tilde{\zeta}^{PQR}{}_{N_4]}
\cr
&&\qquad - \frac{1}{4 \cdot 5!}e^\Phi {}^\star{S}_{MN_1\dots N_4P_1\dots P_5}\tau^{P_1\dots P_5} -\frac{1}{2} e^\Phi g_{M[N_1}F_{|PQ|}\tau^{PQ}{}_{N_2N_3N_4]}
\cr
&&\qquad + \frac{5}{8} e^\Phi F_{[M|P|}\tau^P{}_{N_1 \dots N_4]} -\frac{5}{12}e^\Phi {}^\star{G}_{[MN_1N_2|PQR|}\tau^{PQR}{}_{N_3N_4]}+ 3e^\Phi g_{M[N_1}F_{N_2N_3}k_{N_4]}
\cr
&&\qquad+ \frac{1}{8}e^\Phi g_{M[N_1}{}^\star{G}_{N_2 N_3|P_1 \dots P_4|}\tau^{P_1 \dots P_4}{}_{N_4]} -\frac{1}{4}e^\Phi {}^\star{G}_{MN_1\dots N_4P}k^P
\cr
&&
\qquad + \frac{5}{4}e^\Phi G_{[MN_1N_2N_3}\tilde{k}_{N_4]}+ e^\Phi g_{M[N_1}G_{N_2N_3N_4]P}\tilde{k}^P ~,
\label{iiatcfhe}
\end{eqnarray}
\begin{equation}
\begin{split}
{\cal D}_M^{\cal F}& \tau_{N_1 \dots N_5}\defeq \nabla_M \tau_{N_1 \dots N_5}+\frac{5}{6} {}^\star{H}_{M[N_1N_2N_3|PQR|}\tau^{PQR}{}_{N_4 N_5]} - \frac{5}{2}e^\Phi F_{M[N_1}\tilde{\zeta}_{N_2 \dots N_5]}
\\
&
+\frac{5}{2}e^\Phi {}^\star{G}_{M[N_1N_2N_3|PQ|}\tilde{\zeta}^{PQ}{}_{N_4N_5]} +5e^\Phi G_{M[N_1N_2N_3}\omega_{N_4N_5]} = \frac{5}{4} {}^\star{H}_{[MN_1N_2N_3|PQR|}\tau^{PQR}{}_{N_4 N_5]} \\
& - \frac{5}{12} g_{M[N_1}{}^\star{H}_{N_2N_3N_4|P_1\dots P_4|}\tau^{P_1\dots P_4}{}_{N_5]} +\frac{1}{4\cdot 4!}e^\Phi {}^\star{S}_{MN_1\dots N_5 P_1 \dots P_4}\tilde{\zeta}^{P_1\dots P_4} \\
&-\frac{1}{8} e^\Phi {}^\star{F}_{MN_1\dots N_5}{}^{PQ}\omega_{PQ}-5e^\Phi g_{M[N_1}F_{N_2|P|}\tilde{\zeta}^P{}_{N_3N_4N_5]} - \frac{15}{4} e^\Phi F_{[MN_1}\tilde{\zeta}_{N_2\dots N_5]} \\
&+ \frac{15}{8}e^\Phi {}^\star{G}_{[MN_1N_2N_3|PQ|}\tilde{\zeta}^{PQ}{}_{N_4N_5]} + \frac{1}{4}e^\Phi {}^\star{G}_{MN_1\dots N_5}\tilde{\sigma} + \frac{5}{6 }e^\Phi g_{M[N_1}{}^\star{G}_{N_2N_3N_4|PQR|}\tilde{\zeta}^{PQR}{}_{N_5]} \\
&+ \frac{15}{4} e^\Phi G_{[MN_1N_2N_3}\omega_{N_4N_5]} + 5 e^\Phi g_{M[N_1}G_{N_2N_3N_4|P|}\omega^P{}_{N_5]} ~,
\label{iiatcfhf}
\end{split}
\end{equation}
where for simplicity we have suppressed the $r, s$ indices on the form bilinears which count the different Killing spinors. The connection ${\cal D}^{\cal F}$ is the minimal connection of the TCFH; see \cite{gptcfh} for the definition. As explained in the introduction, the above TCFH implies that the form bilinears (\ref{symiia}) satisfy a generalisation of the CKY equation
with respect to the connection ${\cal D}^{\cal F}$. As expected, $k$ is Killing, $\nabla_{(M} k_{N)}=0$.
A basis, up to a Hodge duality operation, for the form bilinears which are skew-symmetric in the exchange of the two Killing spinors is
\begin{eqnarray}
&&
\sigma^{rs} = \D{\epsilon^r}{\epsilon^s}~, \quad \tilde\omega^{rs} = \frac{1}{2} \D{\epsilon^r}{\Gamma_{NR}\Gamma_{11}\epsilon^s} \, e^N \wedge e^R~,
\cr
&& \pi^{rs} = \frac{1}{3!} \D{\epsilon^r}{\Gamma_{NRS} \epsilon^s} \, e^N \wedge e^R \wedge e^S~,\quad \tilde\pi^{rs} = \frac{1}{3!} \D{\epsilon^r}{\Gamma_{NRS}\Gamma_{11}\epsilon^s} \, e^N \wedge e^R \wedge e^S~,
\cr
&&\zeta^{rs} = \frac{1}{4!} \D{\epsilon^r}{\Gamma_{N_1 \dots N_4} \epsilon^s} \, e^{N_1} \wedge \dots \wedge e^{N_4}~.
\label{iiaskew}
\end{eqnarray}
The associated TCFH with respect to the minimal connection is
\begin{eqnarray}
\mathcal{D}^\mathcal{F}_M\sigma\defeq \nabla_M \sigma = - \frac{1}{4} H_{MPQ} \tilde \omega^{PQ} - \frac{1}{8} e^\Phi F_{PQ} \tilde\pi^{PQ}{}_M + \frac{1}{4!} e^\Phi G_{MPQR} \pi^{PQR} ~,
\end{eqnarray}
\begin{eqnarray}
&&\mathcal{D}^\mathcal{F}_M \tilde\omega_{NR}\defeq \nabla_M \tilde\omega_{NR} +\frac{1}{4} H_{MPQ} \zeta^{PQ}{}_{NR} + \frac{1}{2}e^\Phi F_{MP} \pi^P{}_{NR}
- \frac{1}{2} e^\Phi G_{M[N|PQ|} \tilde\pi^{PQ}{}_{R]}
\cr
&&
\qquad= \frac{1}{2} H_{MNR} \sigma + \frac{1}{4} e^\Phi S \tilde\pi_{MNR} -\frac{1}{4}e^\Phi g_{M[N}F_{|PQ|}\pi^{PQ}{}_{R]} + \frac{3}{4} e^\Phi F_{[M|P|} \pi^P{}_{NR]}
\cr
&&\qquad+ \frac{1}{4!} e^\Phi {}^\star{G}_{MNRP_1P_2P_3}\pi^{P_1P_2P_3} - \frac{1}{12} e^\Phi g_{M[N}G_{R]P_1P_2P_3}\tilde\pi^{P_1P_2P_3}
\cr
&&
\qquad- \frac{3}{8}e^\Phi G_{[MN|PQ|} \tilde\pi^{PQ}{}_{R]} ~,
\label{iiatcfha1}
\end{eqnarray}
\begin{eqnarray}
&&\mathcal{D}^\mathcal{F}_M \pi_{NRS}\defeq \nabla_M \pi_{NRS} +\frac{3}{2} H_{M[N|P|} \tilde\pi^P{}_{RS]}
-\frac{3}{2} e^\Phi F_{M[N} \tilde\omega_{RS]}- \frac{3}{4}e^\Phi G_{M[N|PQ|} \zeta^{PQ}{}_{RS]}
\cr
&& \qquad= \frac{1}{4} e^\Phi S\zeta_{MNRS}+ \frac{1}{4 \cdot 4!} e^\Phi {}^\star{F}_{MNRSP_1 \dots P_4} \zeta^{P_1\dots P_4}
- \frac{3}{2} e^\Phi g_{M[N} F_{R|P|} \tilde \omega^P{}_{S]}
\cr
&&\qquad - \frac{3}{2} e^\Phi F_{[MN} \tilde\omega_{RS]} - \frac{1}{4} e^\Phi G_{MNRS} \sigma -\frac{1}{8} e^\Phi {}^\star{G}_{MNRSPQ} \tilde\omega^{PQ} \cr
&&\qquad - \frac{1}{4} e^\Phi g_{M[N} G_{R|P_1P_2P_3|}\zeta^{P_1P_2P_3}{}_{S]} - \frac{3}{4}e^\Phi G_{[MN|PQ|}\zeta^{PQ}{}_{RS]} ~,
\label{iiatcfha2}
\end{eqnarray}
\begin{eqnarray}
&&\mathcal{D}^\mathcal{F}_M \tilde\pi_{NRS}\defeq \nabla_M \tilde\pi_{NRS}+\frac{3}{2} H_{M[N|P|} \pi^P{}_{RS]} - \frac{1}{2} e^\Phi F_{MP} \zeta^P{}_{NRS}
+\frac{3}{2} e^\Phi G_{M[NR|P|} \tilde\omega^P{}_{S]}
\cr
&&\qquad+ \frac{1}{4}e^\Phi {}^\star{G}_{M[NR|P_1P_2P_3|}\zeta^{P_1P_2P_3}{}_{S]}= \frac{3}{4} e^\Phi S g_{M[N} \tilde\omega_{RS]} + \frac{1}{2} e^\Phi F_{MP} \zeta^P{}_{NRS}
\cr
&&\qquad
+ \frac{3}{8} e^\Phi g_{M[N} F_{|PQ|} \zeta^{PQ}{}_{RS]} - e^\Phi F_{[M|P|} \zeta^P{}_{NRS]} - \frac{3}{4}e^\Phi g_{M[N}F_{RS]} \sigma
\cr
&&\qquad- \frac{3}{8} e^\Phi g_{M[N}G_{RS]PQ} \tilde \omega^{PQ}+ e^\Phi G_{[MNR|P|} \tilde\omega^P{}_{S]} \cr
&&\qquad - \frac{1}{32} e^\Phi g_{M[N} {}^\star{G}_{RS]P_1 \dots P_4}\zeta^{P_1\dots P_4} + \frac{1}{6} e^\Phi {}^\star{G}_{[MNR|P_1P_2P_3|}\zeta^{P_1P_2P_3}{}_{S]} ~,
\label{iiatcfha3}
\end{eqnarray}
\begin{eqnarray}
&&\mathcal{D}^\mathcal{F}_M \zeta_{N_1 \dots N_4} \defeq \nabla_M \zeta_{N_1 \dots N_4} + \frac{1}{3} {}^\star{H}_{M[N_1N_2N_3|PQR|}\zeta^{PQR}{}_{N_4]}
- 3 H_{M[N_1N_2} \tilde\omega_{N_3N_4]}
\cr
&&
\qquad +2 e^\Phi F_{M[N_1} \tilde\pi_{N_2 N_3 N_4]}
+3 e^\Phi G_{M[N_1N_2|P|}\pi^P{}_{N_3N_4]}- e^\Phi {}^\star{G}_{M[N_1N_2N_3|PQ|}\tilde\pi^{PQ}{}_{N_4]}
\cr
&&\qquad
= -\frac{1}{12} g_{M[N_1} {}^\star{H}_{N_2 N_3 N_4] P_1 \dots P_4} \zeta^{P_1\dots P_4} + \frac{5}{12} {}^\star{H}_{[MN_1N_2N_3|PQR|}\zeta^{PQR}{}_{N_4]}
\cr
&&\qquad
+e^\Phi S g_{M[N_1} \pi_{N_2N_3N_4]}
- \frac{1}{4!} e^\Phi {}^\star{F}_{MN_1 \dots N_4 PQR} \pi^{PQR} + 3 e^\Phi g_{M[N_1} F_{N_2|P|}\tilde\pi^P{}_{N_3N_4]}
\cr
&&\qquad+ \frac{5}{2} e^\Phi F_{[MN_1}\tilde\pi_{N_2N_3N_4]}
- \frac{1}{6} e^\Phi g_{M[N_1} {}^\star{G}_{N_2N_3N_4]PQR}\tilde\pi^{PQR} - \frac{5}{8} e^\Phi {}^\star{G}_{[MN_1N_2N_3|PQ|}\tilde\pi^{PQ}{}_{N_4]}
\cr
&&\qquad- \frac{3}{2} e^\Phi g_{M[N_1} G_{N_2N_3|PQ|}\pi^{PQ}{}_{N_4]} + \frac{5}{2} e^\Phi G_{[MN_1N_2|P|} \pi^P{}_{N_3N_4]} ~.
\label{iiatcfha4}
\end{eqnarray}
As in the previous case, a consequence of the TCFH above is that the forms (\ref{iiaskew}) satisfy a generalisation of the CKY equation with respect to the connection $\mathcal{D}^\mathcal{F}$. Later we shall demonstrate that in some cases the forms (\ref{symiia}) and (\ref{iiaskew}) generate symmetries
in string and particle actions probing some IIA backgrounds.
The factorisation of the domain on which the minimal TCFH connection $\mathcal{D}^\mathcal{F}$ acts in (\ref{symiia}) and (\ref{iiaskew}) can be understood as follows.
The product of two Majorana representations $\Delta_{32}$ decomposes in terms of forms as $\otimes^2 \Delta_{32}=\Lambda^*(\bb{R}^{9,1})$. Therefore the form bilinears of any pair of spinors span all spacetime forms and, generically, the TCFH connection acts on the space of all spacetime forms. However, we have seen that the TCFH connection preserves the forms which are symmetric (skew-symmetric) in the exchange of the two Killing spinors, i.e. it preserves the symmetrised
$S^2\,(\Delta_{32})$ and skew-symmetrised $\Lambda^2\,(\Delta_{32})$ subspaces of the product. As $\mathrm{dim}\, S^2 ( \Delta_{32})=528$ and $\mathrm{dim}\, \Lambda^2 ( \Delta_{32})=496$, the holonomy of $\mathcal{D}^\mathcal{F}$ is included\footnote{In fact the (reduced) holonomy of $\mathcal{D}^\mathcal{F}$ is included in $GL(527)\times GL(495)$ as it acts with partial derivatives on the scalars $\tilde \sigma$ and $\sigma$ but the holonomy of other TCFH connections, like that of the maximal connection, will be included in $GL(528)\times GL(496)$.} in $GL(528)\times GL(496)$. Of course, the holonomy of $\mathcal{D}^\mathcal{F}$ reduces for special backgrounds.
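This counting can be verified directly. The following short Python check (bookkeeping only, added here as an illustration) confirms that the forms on $\bb{R}^{9,1}$ match the tensor square of $\Delta_{32}$ and its symmetric/skew split:

```python
from math import comb

# The exterior algebra of R^{9,1} has dimension 2^10 = 1024 = 32*32,
# matching the tensor square of the 32-dimensional Majorana representation.
dim_forms = sum(comb(10, k) for k in range(11))
assert dim_forms == 2**10 == 32 * 32

# Symmetric and skew-symmetric parts of the tensor square of Delta_32
dim_sym, dim_skew = 32 * 33 // 2, 32 * 31 // 2
assert (dim_sym, dim_skew) == (528, 496)
assert dim_sym + dim_skew == dim_forms
```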
\section{The TCFH of IIB supergravity}
The KSEs of IIB supergravity \cite{js} are again associated with the vanishing conditions of the gravitino and dilatino supersymmetry variations. The gravitino KSE is a parallel transport equation for the supercovariant derivative ${\cal D}$ of the theory. In the string frame, this can be expressed \cite{Bergshoeff:2005ac} as
\begin{align}
{\cal D}_M&\defeq \nabla_M - \frac{1}{8}\,\Gamma^{N_1N_2}\,H_{MN_1N_2}\,\sigma_3 - \frac{1}{4}\,e^{\Phi}\,\Gamma^{N}{}_{M}\,G^{(1)}_N\,(i\sigma_2) - \frac{1}{4}\,e^{\Phi}\,G^{(1)}_M\,(i\sigma_2) \nonumber \\
&-\frac{1}{24}\,e^{\Phi}\,\Gamma^{N_1N_2N_3}{}_{M}\,G^{(3)}_{N_1N_2N_3}\,\sigma_1 - \frac{1}{8}\,e^{\Phi}\,\Gamma^{N_1N_2}\,G^{(3)}_{MN_1N_2}\,\sigma_1 \nonumber \\
&- \frac{1}{96}\,e^{\Phi}\,\Gamma^{N_1\dots N_4}\,G^{(5)}_{MN_1\dots N_4}\,(i\sigma_2)~,
\end{align}
where $H$ and $G^{(n)}$, $n=1,3,5$, are the NS-NS 3-form and R-R $n$-form field strengths of the theory, respectively, $\Phi$ is the dilaton and $\sigma^i$, $i=1,2,3$, are the Pauli matrices. The field strength $G^{(5)}$ is anti-self-dual\footnote{Our Hodge duality conventions are as in the IIA theory.}. ${\cal D}$ is a connection of the spin bundle over the spacetime associated to two copies, $\oplus^2 \Delta^+_{16}$, of the positive chirality Majorana-Weyl representation, $\Delta^+_{16}$, of $\mathfrak{spin}(9,1)$. The (reduced) holonomy of ${\cal D}$ for generic IIB backgrounds
is included in $SL(32,\bb{R})$ \cite{gpdt}. The KSEs of IIB supergravity have been solved for one Killing spinor in \cite{ugjggp}.
As expected from the general result in \cite{gptcfh}, the conditions imposed on the form bilinears by the gravitino KSE, ${\cal D}\epsilon=0$, can be organised as a TCFH.
Given any two spinors $\epsilon^r$ and $\epsilon^s$, the form bilinears are given by
\begin{eqnarray}
&&k^{rs} = \delta_{ab}\,\left\langle \epsilon^{ra}, \Gamma_P\,\epsilon^{sb} \right\rangle_D\, e^P~, \qquad
k^{(i)rs} = \delta_{ab}\,\left\langle \epsilon^{ra}, \Gamma_P\,(\sigma^{i}\,\epsilon^s)^b \right\rangle_D\, e^P ~,
\cr
&&
\pi^{rs} = \frac{1}{3!}\delta_{ab}\,\left\langle \epsilon^{ra}, \Gamma_{P_1P_2P_3}\,\epsilon^{sb} \right\rangle_D\, e^{P_1}\wedge e^{P_2} \wedge e^{P_3}~,
\cr
&&
\pi^{(i)rs} = \frac{1}{3!}\,\delta_{ab}\,\left\langle \epsilon^{ra}, \Gamma_{P_1P_2P_3}\,(\sigma^i\,\epsilon^s)^b \right\rangle_D\, e^{P_1}\wedge e^{P_2} \wedge e^{P_3}~,
\cr
&&
\tau^{rs} = \frac{1}{5!}\,\delta_{ab}\,\left\langle \epsilon^{ra}, \Gamma_{P_1P_2P_3P_4P_5}\,\epsilon^{sb} \right\rangle_D\, e^{P_1}\wedge \dots \wedge e^{P_5}~,
\cr
&&
\tau^{(i)rs} = \frac{1}{5!}\,\delta_{ab}\,\left\langle \epsilon^{ra}, \Gamma_{P_1P_2P_3P_4P_5}\,(\sigma^{i}\epsilon^s)^b \right\rangle_D\, e^{P_1}\wedge \dots \wedge e^{P_5}~,
\end{eqnarray}
where $\left\langle \sigma^i\,\alpha, \beta \right\rangle_D = \left\langle \alpha, \sigma^i\,\beta \right\rangle_D$ as the Pauli matrices are hermitian and $a,b=1,2$. Note that the forms $k$, $k^{(1)}$, $k^{(3)}$, $\pi^{(2)}$, $\tau$, $\tau^{(1)}$ and $\tau^{(3)}$
are symmetric in the exchange of $\epsilon^r$ and $\epsilon^s$ while the rest are skew symmetric.
The forms $k^{(2)}$, $\pi^{(2)}$ and $\tau^{(2)}$ are purely imaginary while the rest are real. One could multiply them by the imaginary unit $i$ so that they become real, but in that case the expression for the TCFH below would be more involved. We shall therefore not do so here; later, when we consider applications, we shall replace $k^{(2)}$, $\pi^{(2)}$ and $\tau^{(2)}$ with $ik^{(2)}$, $i\pi^{(2)}$ and $i\tau^{(2)}$.
Using the gravitino KSE, ${\cal D}\epsilon=0$, one can show that the TCFH of IIB supergravity expressed in terms of the minimal connection ${\cal D}^{\cal F}$ is
\begin{eqnarray}
{\cal D}^{\cal F}_M k^{}_{P} &\defeq& \nabla_M\, k^{}_P = \frac{1}{2}\,H_{MP}{}^N\,k^{(3)}_N + \frac{i}{2}\, e^{\Phi}\,G^{(1)N}\,\pi^{(2)}_{NMP} + \frac{1}{12}\,e^{\Phi}\,G^{(3)N_1N_2N_3}\,\tau^{(1)}_{N_1N_2N_3MP} \cr
&&+ \frac{1}{2}\,e^{\Phi}\,G^{(3)}_{MP}{}^N\, k^{(1)}_{N} + \frac{i}{12}\,e^{\Phi}\,G^{(5)}_{MP}{}^{N_1N_2N_3}\,\pi^{(2)}_{N_1N_2N_3}~,
\label{iibtcfh1}
\end{eqnarray}
\begin{eqnarray}
{\cal D}^{\cal F}_M k^{(i)}_{P} &\defeq& \nabla_M\, k^{(i)}_P + \frac{i}{4}\,\varepsilon_{3ij}\,H_{M}{}^{N_1N_2}\pi^{(j)}_{PN_1N_2} - \varepsilon_{2ij}\,e^{\Phi}\,G_{M}^{(1)}\,k^{(j)}_{P} \cr
&& +\frac{i}{2}\,\varepsilon_{1ij}\,e^{\Phi}\,G^{(3)N_1N_2}_M\,\pi^{(j)}_{PN_1N_2} - \frac{1}{48}\,\varepsilon_{2ij}\,e^{\Phi}\,G_{M}^{(5)N_1\dots N_4}\,\tau^{(j)}_{PN_1\dots N_4} \cr
&&= \frac{1}{2}\,\delta_{i3}\,H_{MP}{}^N\,k^{}_N + \frac{i}{2}\,\delta_{i2}\,e^{\Phi}\,G^{(1)N}\,\pi^{}_{MPN} - \varepsilon_{2ij}\,e^{\Phi}\,G^{(1)}_{[M}\,k^{(j)}_{P]} \cr
&&-\frac{1}{2}\,\varepsilon_{2ij}\,e^{\Phi}\,g_{MP}\,G^{(1)N}\,k^{(j)}_N + \frac{1}{12}\,\delta_{i1}\,e^{\Phi}\,G^{(3)N_1N_2N_3}\,\tau^{}_{MPN_1N_2N_3} \cr
&&+\frac{i}{12}\,\varepsilon_{1ij}\,e^{\Phi}\,g_{MP}\,G^{(3)N_1N_2N_3}\,\pi^{(j)}_{N_1N_2N_3} + \frac{i}{2}\,\varepsilon_{1ij}\,e^{\Phi}\,G^{(3)N_1N_2}{}_{[M}\,\pi^{(j)}_{P]N_1N_2}
\cr
&&+\frac{1}{2}\,\delta_{i1}\,e^{\Phi}\,G^{(3)}_{MP}{}^N\,k^{}_N + \frac{i}{12}\,\delta_{i2}\,e^{\Phi}\,G^{(5)}_{MP}{}^{N_1N_2N_3}\,\pi^{}_{N_1N_2N_3}~,
\label{iibtcfh2}
\end{eqnarray}
\begin{eqnarray}
{\cal D}^{\cal F}_M \pi^{}_{P_1P_2P_3} &\defeq& \nabla_M\,\pi^{}_{P_1P_2P_3} - \frac{3}{2}\,H_{M[P_1}{}^N\,\pi^{(3)}_{P_2P_3]N} - 3\,e^{\Phi}\,G^{(3)}_{M[P_1}{}^N\,\pi^{(1)}_{P_2P_3]N} \cr
&&-\frac{i}{4}\,e^{\Phi}\,G^{(5)}_{M[P_1}{}^{N_1N_2N_3}\,\tau^{(2)}_{P_2P_3]N_1N_2N_3} \cr
&&= \frac{i}{2}\,e^{\Phi}\,G^{(1)N}\,\tau^{(2)}_{MP_1P_2P_3N} + 3i\,e^{\Phi}\,g_{M[P_1}\,G^{(1)}_{P_2}\,k^{(2)}_{P_3]} \cr
&&- \frac{1}{12}\,e^{\Phi}\,{}^{\star}G^{(7)}_{MP_1P_2P_3}{}^{N_1N_2N_3}\,\pi^{(1)}_{N_1N_2N_3}+\frac{3}{2}\,e^{\Phi}\,g_{M[P_1}\,G^{(3)}_{P_2}{}^{N_1N_2}\,\pi^{(1)}_{P_3]N_1N_2} \cr
&&+ 3\,e^{\Phi}\,G^{(3)}_{[P_1P_2}{}^N\,\pi^{(1)}_{P_3M]N} -\frac{i}{2}\,e^{\Phi}\,G^{(5)}_{MP_1P_2P_3}{}^N\,k^{(2)}_N~,
\label{iibtcfh3}
\end{eqnarray}
\begin{eqnarray}
{\cal D}^{\cal F}_M \pi^{(i)}_{P_1P_2P_3} &\defeq& \nabla_M\,\pi^{(i)}_{P_1P_2P_3} - \frac{3}{2}\,\delta_{i3}\,H_{M[P_1}{}^N\,\pi^{}_{P_2P_3]N} + \frac{i}{4}\,\varepsilon_{3ij}\,H_{MN_1N_2}\,\tau^{(j)}_{P_1P_2P_3}{}^{N_1N_2} \cr
&&-\frac{3i}{2}\,\varepsilon_{3ij}\,H_{M[P_1P_2}\,k^{(j)}_{P_3]} - \varepsilon_{2ij}\,e^{\Phi}\,G^{(1)}_M\,\pi^{(j)}_{P_1P_2P_3} - 3\,\delta_{i1}\,e^{\Phi}\,G^{(3)}_{M[P_1}{}^N\,\pi^{}_{P_2P_3]N}
\cr
&&+\frac{i}{2}\,\varepsilon_{1ij}\,e^{\Phi}\,G^{(3)}_{M}{}^{N_1N_2}\,\tau^{(j)}_{P_1P_2P_3N_1N_2} - 3i\,\varepsilon_{1ij}\,e^{\Phi}\,G^{(3)}_{M[P_1P_2}\,k^{(j)}_{P_3]} \cr
&&+\frac{3}{2}\,\varepsilon_{2ij}\,e^{\Phi}\,G^{(5)}_{M[P_1P_2}{}^{N_1N_2}\,\pi^{(j)}_{P_3]N_1N_2} -\frac{i}{4}\,\delta_{i2}\,e^{\Phi}\,G^{(5)}_{M[P_1}{}^{N_1N_2N_3}\,\tau^{}_{P_2P_3]N_1N_2N_3} \cr
&&= \frac{i}{2}\,\delta_{i2}\,e^{\Phi}\,G^{(1)N}\,\tau^{}_{MP_1P_2P_3N} + 3i\,\delta_{i2}\,e^{\Phi}\,g_{M[P_1}\,G^{(1)}_{P_2}\,k^{}_{P_3]} + 2\,\varepsilon_{2ij}\,e^{\Phi}\,G^{(1)}_{[P_1}\,\pi^{(j)}_{P_2P_3M]} \cr
&&-\frac{3}{2}\,\varepsilon_{2ij}\,e^{\Phi}\,G^{(1)N}\,g_{M[P_1}\,\pi^{(j)}_{P_2P_3]N} - \frac{1}{12}\,\delta_{i1}\,e^{\Phi}\,{}^{\star}G^{(7)}_{MP_1P_2P_3}{}^{N_1N_2N_3}\,\pi^{}_{N_1N_2N_3} \cr
&&+\frac{3}{2}\,\delta_{i1}\,e^{\Phi}\,g_{M[P_1}\,G^{(3)}_{P_2}{}^{N_1N_2}\,\pi^{}_{P_3]N_1N_2} + 3\,\delta_{i1}\,e^{\Phi}\,G^{(3)}_{[P_1P_2}{}^N\,\pi^{}_{P_3M]N}
\cr
&&+\frac{i}{4}\,\varepsilon_{1ij}\,e^{\Phi}\,G^{(3)N_1N_2N_3}\,g_{M[P_1}\,\tau^{(j)}_{P_2P_3]N_1N_2N_3} - i\,\varepsilon_{1ij}e^{\Phi}\,G^{(3)}_{[P_1}{}^{N_1N_2}\,\tau^{(j)}_{P_2P_3M]N_1N_2} \cr
&&-\frac{3i}{2}\,\varepsilon_{1ij}\,e^{\Phi}\,g_{M[P_1}\,G^{(3)}_{P_2P_3]}{}^N\,k^{(j)}_N + 2i\,\varepsilon_{1ij}\,e^{\Phi}\,G^{(3)}_{[P_1P_2P_3}\,k^{(j)}_{M]} \cr
&&-\frac{i}{2}\,\delta_{i2}\,e^{\Phi}\,G^{(5)}_{MP_1P_2P_3}{}^N\,k^{}_N + \frac{1}{4}\,\varepsilon_{2ij}\,e^{\Phi}\,g_{M[P_1}\,G^{(5)}_{P_2P_3]}{}^{N_1N_2N_3}\,\pi^{(j)}_{N_1N_2N_3} \cr
&&-\varepsilon_{2ij}\,e^{\Phi}\,G^{(5)}_{[P_1P_2P_3}{}^{N_1N_2}\,\pi^{(j)}_{M]N_1N_2}~,
\label{iibtcfh4}
\end{eqnarray}
\begin{eqnarray}
{\cal D}^{\cal F}_M \tau^{}_{P_1 \dots P_5} &\defeq& \nabla_M\,\tau^{}_{P_1 \dots P_5} + \frac{5}{2}\,H_M{}^N{}_{[P_1}\,\tau^{(3)}_{P_2\dots P_5]N} - 5\,e^{\Phi}\,G^{(3)}_{M[P_1}{}^N\,\tau^{(1)}_{P_2\dots P_5]N} \cr
&&+10i\,e^{\Phi}\,G^{(5)}_{M[P_1P_2P_3}{}^N\,\pi^{(2)}_{P_4P_5]N} \cr
&&=-\frac{i}{12}\,e^{\Phi}\,{}^{\star}G^{(9)}_{MP_1\dots P_5}{}^{N_1N_2N_3}\,\pi^{(2)}_{N_1N_2N_3}\, + 10i\,e^{\Phi}\,g_{M[P_1}\,G^{(1)}_{P_2}\,\pi^{(2)}_{P_3P_4P_5]} \cr
&&+ \frac{1}{2}\,e^{\Phi}\,{}^{\star}G^{(7)}_{MP_1\dots P_5}{}^N\,k^{(1)}_N +\frac{15}{2}\,e^{\Phi}\,G^{(3)}_{[P_1P_2}{}^N\,\tau^{(1)}_{P_3P_4P_5M]N} \cr
&&+ 5\,e^{\Phi}\,g_{M[P_1}\,G^{(3)}_{P_2}{}^{N_1N_2}\,\tau^{(1)}_{P_3P_4P_5]N_1N_2} -10\,e^{\Phi}\,g_{M[P_1}\,G^{(3)}_{P_2P_3P_4}\,k^{(1)}_{P_5]} \cr
&&-5i\,e^{\Phi}\,g_{M[P_1}\,G^{(5)}_{P_2P_3P_4}{}^{N_1N_2}\,\pi^{(2)}_{P_5]N_1N_2} -\frac{15i}{2}\,e^{\Phi}\,G^{(5)}_{[P_1\dots P_4}{}^N\,\pi^{(2)}_{P_5M]} ~,
\label{iibtcfh5}
\end{eqnarray}
\begin{eqnarray}
{\cal D}^{\cal F}_M \tau^{(i)}_{P_1\dots P_5} &\defeq& \nabla_M\,\tau^{(i)}_{P_1\dots P_5} +\frac{5}{2}\,\delta_{i3}\,H_M{}^N{}_{[P_1}\,\tau^{}_{P_2\dots P_5]N} +\frac{5i}{4}\,\varepsilon_{3ij}\,{}^{\star}H_{M[P_1\dots P_4}{}^{N_1N_2}\,\pi^{(j)}_{P_5]N_1N_2} \cr
&&-5i\,\varepsilon_{3ij}\,H_{M[P_1P_2}\,\pi^{(j)}_{P_3P_4P_5]} -\varepsilon_{2ij}\,e^{\Phi}\,G^{(1)}_M\,\tau^{(j)}_{P_1\dots P_5} - 5\,\delta_{i1}\,e^{\Phi}\,G^{(3)}_{M[P_1}{}^N\,\tau^{}_{P_2 \dots P_5]N} \cr
&&+\frac{5i}{2}\,\varepsilon_{1ij}\,e^{\Phi}\,{}^{\star}G^{(7)}_{M[P_1 \dots P_4}{}^{N_1N_2}\,\pi^{(j)}_{P_5]N_1N_2} -10i\,\varepsilon_{1ij}\,e^{\Phi}\,G^{(3)}_{M[P_1P_2}\,\pi^{(j)}_{P_3P_4P_5]} \cr
&&- 5\,\varepsilon_{2ij}\,e^{\Phi}\,G^{(5)}_{M[P_1\dots P_4}\,k^{(j)}_{P_5]} +\frac{5}{2}\,\varepsilon_{2ij}\,e^{\Phi}\,G^{(5)}_{M[P_1P_2}{}^{N_1N_2}\,\tau^{(j)}_{P_3P_4P_5]N_1N_2} \cr
&&+10i\,\delta_{i2}\,e^{\Phi}\,G^{(5)}_{M[P_1P_2P_3}{}^N\,\pi^{}_{P_4P_5]N} \cr
&&= \frac{5i}{12}\varepsilon_{3ij}\,g_{M[P_1}\,{}^{\star}H_{P_2\dots P_5]}{}^{N_1N_2N_3}\,\pi^{(j)}_{N_1N_2N_3} -\frac{3i}{2}\,\varepsilon_{3ij}\,{}^{\star}H_{[P_1\dots P_5}{}^{N_1N_2}\,\pi^{(j)}_{M]N_1N_2} \cr
&&- \frac{i}{12}\,\delta_{i2}\,e^{\Phi}\,{}^{\star}G^{(9)}_{MP_1\dots P_5}{}^{N_1N_2N_3}\,\pi^{}_{N_1N_2N_3} +10i\,\delta_{i2}\,e^{\Phi}\,g_{M[P_1}\,G^{(1)}_{P_2}\,\pi^{}_{P_3P_4P_5]} \cr
&&- \frac{5}{2}\,\varepsilon_{2ij}\,e^{\Phi}\,G^{(1)N}\,g_{M[P_1}\,\tau^{(j)}_{P_2\dots P_5]N} +3\,\varepsilon_{2ij}\,e^{\Phi}\,G^{(1)}_{[P_1}\,\tau^{(j)}_{P_2 \dots P_5M]} \cr
&&+\frac{1}{2} \delta_{i1}\,e^{\Phi}\,{}^{\star}G^{(7)}_{MP_1\dots P_5}{}^N\,k^{}_N +5\,\delta_{i1}\,e^{\Phi}\,g_{M[P_1}\,G^{(3)}_{P_2}{}^{N_1N_2}\,\tau^{}_{P_3P_4P_5]N_1N_2} \cr
&&+ \frac{15}{2}\delta_{i1}\,e^{\Phi}\,G^{(3)}_{[P_1P_2}{}^N\,\tau^{}_{P_3P_4P_5M]N} -10\delta_{i1}\,e^{\Phi}\,g_{M[P_1}\,G^{(3)}_{P_2P_3P_4}\,k^{}_{P_5]} \cr
&&+ 10i\,\varepsilon_{1ij}\,e^{\Phi}\,G^{(3)}_{[P_1P_2P_3}\,\pi^{(j)}_{P_4P_5M]} -15i\,\varepsilon_{1ij}\,e^{\Phi}\,g_{M[P_1}\,G^{(3)}_{P_2P_3}{}^N\,\pi^{(j)}_{P_4P_5]N} \cr
&&+ \frac{5i}{12}\,\varepsilon_{1ij}e^{\Phi} \,g_{M[P_1}\,{}^{\star}G^{(7)}_{P_2\dots P_5]}{}^{N_1N_2N_3}\,\pi^{(j)}_{N_1N_2N_3} -\frac{3i}{2}\,\varepsilon_{1ij}\,e^{\Phi}\,{}^{\star}G^{(7)}_{[P_1\dots P_5}{}^{N_1N_2}\,\pi^{(j)}_{M]N_1N_2} \cr
&&- \frac{5}{2}\,\varepsilon_{2ij}\,e^{\Phi}\,g_{M[P_1}\,G^{(5)}_{P_2\dots P_5]}{}^N\,k^{(j)}_N +3\,\varepsilon_{2ij}\,e^{\Phi}\,G^{(5)}_{[P_1\dots P_5}\,k^{(j)}_{M]} \cr
&&- 5i\,\delta_{i2}\,e^{\Phi}\,g_{M[P_1}G^{(5)}_{P_2P_3P_4}{}^{N_1N_2}\,\pi^{}_{P_5]N_1N_2} -\frac{15i}{2}\delta_{i2}\,e^{\Phi}\,G^{(5)}_{[P_1\dots P_4}{}^N\,\pi^{}_{P_5M]N}~,
\label{iibtcfh6}
\end{eqnarray}
where for simplicity we have suppressed the $r,s$ indices on the form bilinears that label the Killing spinors.
Although it is not manifest from the expression of the TCFH above, the TCFH preserves the form bilinears that are either symmetric or skew-symmetric in the exchange of the two Killing spinors. Moreover, all terms of the IIB TCFH can be arranged to be real: the imaginary unit that appears in some terms can be eliminated after replacing the purely imaginary forms $k^{(2)}$, $\pi^{(2)}$ and $\tau^{(2)}$ with the real forms $ik^{(2)}$, $i\pi^{(2)}$ and $i\tau^{(2)}$.
A consequence of the TCFH is that all form bilinears satisfy a generalisation of the CKY equation with respect to the minimal connection
${\cal D}^{\cal F}$. In particular $k$ is Killing, $\nabla_{(M}\,k^{rs}_{P)}=0$, as expected.
To understand the factorisation of the domain on which ${\cal D}^{\cal F}$ acts, note that the product of two Majorana-Weyl representations $\Delta_{16}^+$ of $\mathfrak{spin}(9,1)$ decomposes as
\begin{eqnarray}
\otimes^2 \Delta_{16}^+=
\Lambda^1(\bb{R}^{9,1})\oplus \Lambda^3(\bb{R}^{9,1})\oplus \Lambda^{5-}(\bb{R}^{9,1})~,
\end{eqnarray}
where $\Lambda^{5-}(\bb{R}^{9,1})$ is the space of anti-self-dual 5-forms on $\bb{R}^{9,1}$.
The Killing spinors lie in two copies of $\Delta_{16}^+$, i.e. $\Delta^+_{32}=\oplus^2 \Delta_{16}^+$. Therefore the space of all IIB form bilinears is identified with the product
$\otimes^2 \Delta^+_{32}$. This product can be decomposed in terms of spacetime forms as indicated above. Indeed, notice that $\mathrm{dim} \,\big(\otimes^2 \Delta^+_{32}\big)=32\cdot 32=4 \big[\mathrm{dim}\,\big(\Lambda^1(\bb{R}^{9,1})\big)+ \mathrm{dim}\,\big(\Lambda^3(\bb{R}^{9,1})\big)+\mathrm{dim}\,\big(\Lambda^{5-}(\bb{R}^{9,1})\big)\big]$. The minimal connection $\mathcal{D}^\mathcal{F}$ of the TCFH preserves
the symmetric $S^2\big(\Delta^+_{32}\big)$ and skew-symmetric $\Lambda^2\big(\Delta^+_{32}\big)$ subspaces of $\otimes^2 \Delta^+_{32}$. As a consequence,
the (reduced) holonomy of $\mathcal{D}^\mathcal{F}$ for a generic background is included in $GL(528)\times GL(496)$, as in the IIA case investigated in the previous section.
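The dimension count in this decomposition can be checked in a few lines (an illustrative bookkeeping sketch):

```python
from math import comb

# dim Lambda^1 = 10, dim Lambda^3 = 120, dim Lambda^{5-} = 252/2 = 126,
# where the anti-self-duality condition halves the space of 5-forms.
d1, d3, d5m = comb(10, 1), comb(10, 3), comb(10, 5) // 2
assert (d1, d3, d5m) == (10, 120, 126)

# Delta_32^+ is two copies of Delta_16^+, so the IIB bilinears fill
# four copies of the decomposition of the tensor square of Delta_16^+.
assert 32 * 32 == 4 * (d1 + d3 + d5m)

# Symmetric/skew split giving the GL(528) x GL(496) holonomy bound
assert (32 * 33 // 2, 32 * 31 // 2) == (528, 496)
```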
\section{Particles and integrability of type II branes }
Before we proceed to investigate the symmetries of particle and string probes generated by the TCFHs of type II theories, we shall summarise some of the properties of KS, KY and CKY tensors and their applications to generating symmetries of particle actions; for more detailed studies, see the reviews \cite{revky} and \cite{frolov} and references therein. We shall also present some of the particle actions that are invariant under the symmetries generated by such tensors. Then we shall construct the KS and KY tensors of type II brane solutions, which to our knowledge have not been presented before. We shall use these to argue that the geodesic flow of some of these solutions is completely integrable and we shall give the associated independent conserved charges in involution.
\subsection{Killing-St\"ackel and Killing-Yano tensors}
\subsubsection{Definitions and outline of properties}
A rank $k$ conformal Killing-St\"ackel ($k$-CKS) tensor is a symmetric (0,$k$) tensor $d$ on a $n$-dimensional spacetime $M$ with metric $g$ which satisfies the equation
\begin{eqnarray}
\nabla_{(M} d_{N_1N_2\cdots N_k)}= g_{(MN_1} q_{N_2\cdots N_k)}~,
\label{cks}
\end{eqnarray}
where $q$ is a symmetric $(0, k-1)$ tensor and $\nabla$ is the Levi-Civita connection of $g$. For $k=1$, the equation reduces to that of a conformal Killing vector field. If $q$ vanishes, $q=0$, then $d$ is a Killing-St\"ackel (KS) tensor.
Furthermore observe that if $d$ and $e$ are $k-$ and $\ell-$ CKS (KS) tensors on $M$, then
\begin{eqnarray}
(d\otimes_s e)_{N_1\cdots N_{k+\ell}}\defeq d_{(N_1\cdots N_k} e_{N_{k+1}\cdots N_{k+\ell})}~,
\end{eqnarray}
is a $(k+\ell)-$CKS (KS) tensor on $M$.
KS tensors are associated with conserved charges of test particle systems. Indeed consider the action
\begin{eqnarray}
A={1\over2}\int\, d\tau\, g_{MN}\, \dot x^M\, \dot x^N~,
\label{gact}
\end{eqnarray}
which describes the geodesic flow\footnote{When viewing the geodesic flow as a dynamical system, $M$ is identified with its configuration space.} on a spacetime (manifold) $M$ with metric $g$, where $\dot x$ denotes the derivative of the coordinate $x$ with respect to the affine parameter $\tau$. It is straightforward to show that if the spacetime $M$ admits a KS tensor $d$, then
\begin{eqnarray}
Q(d)= d_{N_1N_2\cdots N_k}\, \dot x^{N_1}\, \dot x^{N_2}\cdots \dot x^{N_k}~,
\label{ccd}
\end{eqnarray}
is conserved along the geodesic flow, i.e. $\dot Q(d)=0$ subject to the geodesic equations with affine parameter $\tau$. This charge generates the infinitesimal transformation
\begin{eqnarray}
\delta x^M= \epsilon\, d^M{}_{N_1\cdots N_{k-1}} \dot x^{N_1} \cdots \dot x^{N_{k-1}}~,
\end{eqnarray}
which is a symmetry of the action (\ref{gact}) with infinitesimal parameter $\epsilon$.
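As a simple illustration of the conservation of $Q(d)$ along geodesics, consider the flat plane in polar coordinates, where the square of the rotational Killing vector provides a rank-2 KS tensor. The following Python sketch (an illustrative example chosen here, not taken from the text) integrates the geodesic equations numerically and checks that $Q(d)$ stays constant:

```python
import numpy as np

# Geodesic flow on the flat plane with metric g = dr^2 + r^2 dphi^2.
# k = d/dphi is Killing and d = k (x)_s k is a rank-2 Killing-Stackel
# tensor, so Q(d) = (r^2 phidot)^2 should be conserved along geodesics.

def rhs(s):
    r, phi, rdot, phidot = s
    # geodesic equations from the Christoffel symbols of the polar metric
    return np.array([rdot, phidot, r * phidot**2, -2.0 * rdot * phidot / r])

def rk4_step(s, h):
    k1 = rhs(s); k2 = rhs(s + 0.5*h*k1)
    k3 = rhs(s + 0.5*h*k2); k4 = rhs(s + h*k3)
    return s + (h / 6.0) * (k1 + 2*k2 + 2*k3 + k4)

s = np.array([1.0, 0.0, 0.3, 0.7])   # initial (r, phi, rdot, phidot)
Q0 = (s[0]**2 * s[3])**2
for _ in range(2000):
    s = rk4_step(s, 1e-3)
Q1 = (s[0]**2 * s[3])**2
assert abs(Q1 - Q0) < 1e-8           # conserved up to integration error
```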
A rank $k$ conformal Killing-Yano ($k$-CKY) tensor is a $k$-form, $\alpha$, which satisfies the condition
\begin{eqnarray}
\nabla_M \alpha_{N_1N_2\cdots N_k}={1\over k+1} (d\alpha)_{MN_1\dots N_k}-{k\over n-k+1} g_{M[N_1} (\delta\alpha)_{N_2\cdots N_k]}~.
\label{cky}
\end{eqnarray}
If $\alpha$ is co-closed, $\delta\alpha=0$, then $\alpha$ is a Killing-Yano (KY) form, while if $\alpha$ is closed, $d\alpha=0$, $\alpha$ is a closed conformal Killing-Yano (CCKY) form. It turns out that if $\alpha$ is KY, then the Hodge dual ${}^\star\alpha$ of $\alpha$ is a CCKY form.
Furthermore, if $\alpha$ and $\beta$ are $k$-CKY ($k$-KY) forms, then
\begin{eqnarray}
\alpha_{(M }{}^{L_1\cdots L_{k-1}} \beta_{N) L_1\cdots L_{k-1}}~,
\end{eqnarray}
is a 2-CKS (2-KS) tensor. In addition, if $\alpha$ and $\beta$ are CCKY forms of rank $k$ and $\ell$, respectively, then $\alpha\wedge \beta$ is a ($k+\ell$)-CCKY form.
KY forms generate symmetries \cite{gibbons} for spinning particle actions \cite{bvh}. These are supersymmetric extensions of (\ref{gact}). Such an action is
\begin{eqnarray}
A=-{i\over2} \int\, d\tau\, d\theta\, g_{MN}\, D x^M\, \dot x^N~,
\label{sgact}
\end{eqnarray}
where $x$ are superfields $x=x(\tau, \theta)$, $\tau$ is the even and $\theta$ is the odd coordinate of the worldline superspace, and the superspace derivative $D$ satisfies $D^2=i\partial_\tau$.
In particular, the KY form $\alpha$ generates the infinitesimal symmetry
\begin{eqnarray}
\delta x^M=\epsilon\, \alpha^M{}_{N_1\cdots N_{k-1}} Dx^{N_1}\cdots Dx^{N_{k-1}}~,
\label{svar}
\end{eqnarray}
for the action (\ref{sgact}), where $\epsilon$ is an infinitesimal parameter. The associated conserved charge is
\begin{eqnarray}
&&Q(\alpha)=(k+1) \alpha_{N_1N_2\cdots N_k} \partial_\tau x^{N_1} Dx^{N_2}\cdots Dx^{N_k}
\cr
&&
\qquad\qquad\qquad
-{i\over k+1} (d\alpha)_{N_1N_2\cdots N_{k+1}} Dx^{N_1} Dx^{N_2}\cdots Dx^{N_{k+1}}~.
\label{ccalpha}
\end{eqnarray}
Observe that $Q(\alpha)$ is conserved, $DQ(\alpha)=0$, subject to the equations of motion of (\ref{sgact}).
Note that if the KY form $\alpha$ is closed, $d\alpha=0$, and so $\alpha$ is covariantly constant (or equivalently parallel) with respect to the Levi-Civita connection, then
\begin{eqnarray}
\tilde Q(\alpha)= \alpha_{N_1N_2\cdots N_k} D x^{N_1} Dx^{N_2}\cdots Dx^{N_k}~,
\end{eqnarray}
is also conserved subject to the field equations of (\ref{sgact}), $\partial_\tau \tilde Q(\alpha)=0$. This gives the conservation of two charges
$\tilde Q(\alpha)$ and $D \tilde Q(\alpha)$. The latter is proportional to that in (\ref{ccalpha}) with $d\alpha=0$.
There are several generalisations of CKY tensors \cite{gggpks, kubiznak, houri1, houri2, kygp, howe2}. One of the most common ones is to replace the Levi-Civita connection that appears in the
definition (\ref{cky}) with another connection, for example a connection with skew-symmetric torsion. Some of the properties mentioned above extend to the generalised KY tensors. For an application of the KY forms to G-structures see \cite{Ggp, sat}.
\subsubsection{Integrability and separability}
A dynamical system with a $2n$-dimensional phase space $P$ is completely integrable according to Liouville provided it admits $n$ independent constants of motion, $Q^r$, $r=1,\dots,n$, including the Hamiltonian $H$, in involution. Independence means that the map $Q: P\rightarrow \bb{R}^n$ is of rank $n$, where $Q=(Q^1, \dots, Q^n)$, and in involution means that the Poisson bracket algebra of the constants of motion $Q^r$ vanishes
\begin{eqnarray}
\{Q^r, Q^s\}_{\mathrm{PB}}=0~.
\end{eqnarray}
Returning to the particle system described by the action (\ref{gact}),
the conserved charges (\ref{ccd}) can be written as functions on phase space, $T^*M$, as
\begin{eqnarray}
Q(d)= d^{N_1\cdots N_k} p_{N_1} \dots p_{N_k}~,
\label{kscharge}
\end{eqnarray}
where $p_M$ is the conjugate momentum of $x^M$. It turns out that if $Q(d)$ and $Q(e)$ are conserved charges associated with KS tensors $d$ and $e$, then
$\{Q(d), Q(e)\}_{\mathrm{PB}}$ is associated with the KS tensor given in terms of the Nijenhuis-Schouten bracket
\begin{eqnarray}
([d, e]_{\mathrm{NS}})^{N_1\cdots N_{k+\ell-1}}=k\, d^{M(N_1\cdots N_{k-1}} \partial_{M} e^{N_k\cdots N_{k+\ell-1})}-\ell\, e^{M(N_1\cdots N_{\ell-1}} \partial_{M} d^{N_\ell\cdots N_{k+\ell-1})}~,
\end{eqnarray}
of $d$ and $e$. Therefore, one has
\begin{eqnarray}
\{Q(d), Q(e)\}_{\mathrm{PB}}= Q([d, e]_{\mathrm{NS}})~.
\label{nsb}
\end{eqnarray}
Observe that if $d$ is a vector, then $[d, e]_{\mathrm{NS}}={\cal L}_d e$, i.e. the Nijenhuis-Schouten bracket is the Lie derivative of $e$ with respect to the vector field $d$. So two charges are in involution provided that the Nijenhuis-Schouten bracket of the associated KS tensors vanishes.
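The relation (\ref{nsb}) can be verified symbolically in a simple instance: take $d$ a translation Killing vector on flat $\bb{R}^2$ and $e$ the symmetric square of the rotation Killing vector, so that $Q(e)$ is the square of the angular momentum. The sketch below (an illustrative check, with a Poisson bracket sign convention chosen so that the relation holds as stated; the opposite convention flips the overall sign) computes both sides:

```python
import sympy as sp

x, y, px, py = sp.symbols('x y p_x p_y')
q, p = (x, y), (px, py)

def pb(f, g):
    # Poisson bracket; sign convention chosen so that {Q(d),Q(e)} = Q([d,e]_NS)
    return sum(sp.diff(f, p[i])*sp.diff(g, q[i]) - sp.diff(f, q[i])*sp.diff(g, p[i])
               for i in range(2))

d = (sp.Integer(1), sp.Integer(0))        # translation Killing vector d/dx
K = (-y, x)                               # rotation Killing vector
e = [[K[i]*K[j] for j in range(2)] for i in range(2)]   # rank-2 KS tensor

Qd = sum(d[i]*p[i] for i in range(2))
Qe = sum(e[i][j]*p[i]*p[j] for i in range(2) for j in range(2))

# Nijenhuis-Schouten bracket for k = 1, ell = 2 (reduces to the Lie derivative)
ns = [[sum(d[m]*sp.diff(e[i][j], q[m]) for m in range(2))
       - sum(e[m][i]*sp.diff(d[j], q[m]) + e[m][j]*sp.diff(d[i], q[m])
             for m in range(2))
       for j in range(2)] for i in range(2)]
Qns = sum(ns[i][j]*p[i]*p[j] for i in range(2) for j in range(2))

assert sp.simplify(pb(Qd, Qe) - Qns) == 0
```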
Completely integrable systems are special. There are difficulties in both finding conserved charges in involution and in proving that they are independent. For example if $Q(d)$ and $Q(e)$ are conserved charges, $Q(d) Q(e)$ is not an independent conserved charge, as its inclusion in the map $Q: P\rightarrow \bb{R}^n$ does not alter its rank. However for the geodesic flow described by the action (\ref{gact}) that we shall investigate below, there is a simplifying feature. The spacetimes we shall be considering admit a non-abelian group of isometries.
For every isometry generated by a Killing vector field $K_r$, there is an associated conserved charge
\begin{eqnarray}
Q_r= K_r^M p_M~.
\end{eqnarray}
Of course, these charges may not be in involution. However, note that the charges $Q_r$, written in phase space, do not depend on the spacetime metric; they only depend on the way that the isometry group acts on the spacetime. Typically, there are many metrics for which the $Q_r$ are constants of motion for the action (\ref{gact}). Of course, any polynomial in the $Q_r$ is also conserved and is independent of the metric of the particle system. We shall refer to these charges as {\it orbital} to emphasise their independence from the spacetime metric. In many cases, it is possible to find polynomials in the $Q_r$ which are independent and in involution. Suppose that one can find $n-1$ such independent (polynomial) orbital charges in involution and that the Hamiltonian,
\begin{eqnarray}
H={1\over 2} g^{MN} p_M p_N~,
\end{eqnarray}
is independent of the orbital charges. Then the geodesic flow is completely integrable because the orbital charges Poisson commute with the Hamiltonian. Of course, the Hamiltonian depends on the spacetime metric. To distinguish the conserved charges which depend on the spacetime metric from the orbital ones, we shall refer to the former as {\it Hamiltonian} charges. We shall demonstrate that this strategy for proving complete integrability of a geodesic flow based on non-abelian isometries is particularly effective whenever the non-abelian group of isometries has a principal orbit in the spacetime of codimension at most one. The complete integrability of geodesic flows on homogeneous manifolds has been extensively investigated in the mathematics literature, see e.g. \cite{thimm}.
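The strategy can be illustrated on $\bb{R}^3$ with an $SO(3)$-invariant metric $g = h(|y|)\,\delta$: the angular momenta are orbital charges, and $L_z$, $L^2$ together with the Hamiltonian give three independent charges in involution on the 6-dimensional phase space. The following symbolic sketch (an illustrative check; here $h$ is taken as an arbitrary function of $|y|^2$, an equivalent parametrisation of $h(|y|)$) verifies the involution:

```python
import sympy as sp

x, y, z, px, py, pz = sp.symbols('x y z p_x p_y p_z')
q, p = (x, y, z), (px, py, pz)

def pb(f, g):
    # canonical Poisson bracket on T^*R^3
    return sum(sp.diff(f, q[i])*sp.diff(g, p[i]) - sp.diff(f, p[i])*sp.diff(g, q[i])
               for i in range(3))

# Orbital charges of the SO(3) isometry: metric-independent angular momenta
Lx, Ly, Lz = y*pz - z*py, z*px - x*pz, x*py - y*px
L2 = Lx**2 + Ly**2 + Lz**2

# Hamiltonian of the geodesic flow for g = h delta, h an arbitrary function
h = sp.Function('h')
H = (px**2 + py**2 + pz**2) / (2*h(x**2 + y**2 + z**2))

# L_z, L^2 and H are pairwise in involution, hence Liouville integrability
residuals = [sp.simplify(pb(f, g)) for f, g in [(L2, Lz), (H, Lz), (H, L2)]]
assert all(r == 0 for r in residuals)
```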
\subsubsection{An example}
Before we proceed to investigate the KS and KY tensors and the integrability of the geodesic flow on some type II backgrounds, let us present an example. The standard example is that of the Kerr black hole. However more suitable for the examples that follow is to consider $\bb{R}^{2n}$ with a conformally flat metric
\begin{eqnarray}
g= h(|y|)\delta_{ij} dy^i dy^j~,
\label{cflmetr}
\end{eqnarray}
where $|y|$ is the Euclidean norm of the coordinates $y$ and $h>0$.
A direct computation reveals that the following tensors
\begin{eqnarray}
d_{i_1\dots i_k}= h^{k}(|y|)~ y^{j_1}\dots y^{j_q} a_{j_1\dots j_q, i_1\dots i_k}~,
\end{eqnarray}
are KS tensors provided that the coefficients $a$ are constant and satisfy
\begin{eqnarray}
a_{(j_1\dots j_q, i_1)\dots i_k}=a_{j_1\dots (j_q, i_1\dots i_k)}=0~.
\end{eqnarray}
For each of these KS tensors, there is an associated conserved charge $Q(d)$ given in (\ref{kscharge}) of the geodesic flow on $\bb{R}^{2n}$ with metric (\ref{cflmetr}). These generate an infinite dimensional symmetry algebra for the action (\ref{gact}) with metric (\ref{cflmetr}) which is isomorphic to the Poisson algebra of $Q(d)$'s up to terms proportional to the equations of motion, i.e. the algebra of symmetry transformations is isomorphic on-shell to the Poisson bracket algebra of the charges. The conserved charges $Q(d)$ may neither be independent nor in involution.
Next let us turn to find the KY and CCKY tensors on $\bb{R}^{2n}$ with metric (\ref{cflmetr}). After some computation, one finds that
\begin{eqnarray}
\alpha= h^{{k\over2}} i_Y \varphi~,~~~~\beta= h^{k+2\over2} Y\wedge \varphi~,
\end{eqnarray}
are KY and CCKY forms, respectively, for any constant $k$-form $\varphi$ on $\bb{R}^{2n}$, where $Y$ is either the vector field $Y=y^i\partial_i$ or the one-form $Y=y_i dy^i$; it is clear from the context what $Y$ denotes in each case.
For each KY tensor above, one can construct the infinitesimal variation (\ref{svar}) which is a symmetry of the action (\ref{sgact}). However the commutator of two such infinitesimal transformations does not close to an infinitesimal transformation of the same type. Typically, the right-hand side of the commutator will involve a term polynomial in $Dx$ as well as a term which is linear in the velocity $\dot x$. A systematic exploration of such commutators in a related context can be found
in \cite{phgp}.
Next let us turn to investigate the integrability of the geodesic flow of the metric (\ref{cflmetr}). The geodesic equations can be easily integrated in angular coordinates. However it is instructive to provide a symmetry argument for the complete integrability of the geodesic equations.
The isometry group of the above backgrounds is $SO(2n)$. The Killing vector fields are
\begin{eqnarray}
k_{ij}=y_i\partial_j-y_j\partial_i~,~~~i<j~,
\label{rotvf}
\end{eqnarray}
where $y_i=y^i$. The associated conserved charges are
\begin{eqnarray}
Q_{ij}=Q(k_{ij})= y_i p_j-y_j p_i~.
\label{rotcc}
\end{eqnarray}
Notice that all these conserved charges are orbital as they do not depend on the metric (\ref{cflmetr}). As ${\cal L}_{k_{ij}} g=0$, one can show that $Q_{ij}$ commute with the Hamiltonian $H= {1\over2} h^{-1} \delta^{ij} p_i p_j$, i.e. $\{H, Q_{ij}\}_{\mathrm{PB}}=0$.
The conserved charges $Q_{ij}$ are not in involution as $\{Q(k_{ij}), Q({k_{pq}})\}_{\mathrm{PB}}=Q([k_{ij}, k_{pq}])$. However using these, one can verify that the $2n-1$ orbital conserved charges
\begin{eqnarray}
D_m={1\over4} \sum_{i,j\geq 2n+1-m} (Q_{ij})^2~,~~~m=2, \dots, 2n~,
\label{casichso2n}
\end{eqnarray}
are in involution. These together with the Hamiltonian $H= {1\over2} h^{-1} \delta^{ij} p_i p_j$ give $2n$ charges in involution. Therefore the geodesic flow of the metric
(\ref{cflmetr}) is completely (Liouville) integrable.
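The involution of the charges (\ref{casichso2n}) and their commutation with the Hamiltonian can be verified symbolically in a low-dimensional instance. The following SymPy sketch takes $2n=4$ with a generic conformal factor $h(|y|)$; the names `pb`, `Q` and `D` are illustrative and not taken from the text.

```python
import sympy as sp

n = 4  # R^4, i.e. the 2n = 4 case
y = sp.symbols('y1:5')
p = sp.symbols('p1:5')
r = sp.sqrt(sum(yi**2 for yi in y))
h = sp.Function('h')(r)  # generic conformal factor h(|y|) > 0

def pb(f, g):
    """Canonical Poisson bracket {f, g} on T*R^4."""
    return sum(sp.diff(f, y[i])*sp.diff(g, p[i])
               - sp.diff(f, p[i])*sp.diff(g, y[i]) for i in range(n))

# Angular momentum charges Q_ij = y_i p_j - y_j p_i (orbital, metric independent)
Q = {(i, j): y[i]*p[j] - y[j]*p[i]
     for i in range(n) for j in range(n) if i < j}

def D(m):
    """D_m = (1/4) sum_{i,j >= 2n+1-m} Q_ij^2, written with 0-based indices."""
    lo = n - m
    return sp.Rational(1, 2)*sum(Q[(i, j)]**2
                                 for i in range(lo, n) for j in range(i + 1, n))

H = sum(pi**2 for pi in p)/(2*h)

D2, D3, D4 = D(2), D(3), D(4)
# The orbital charges are in involution ...
assert sp.expand(pb(D2, D3)) == 0
assert sp.expand(pb(D2, D4)) == 0
assert sp.expand(pb(D3, D4)) == 0
# ... and Poisson-commute with the metric-dependent Hamiltonian.
for Dm in (D2, D3, D4):
    assert sp.expand(pb(H, Dm)) == 0
```

Together with $H$, the charges $D_2, D_3, D_4$ give the $2n=4$ charges in involution asserted above.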
An alternative way to think about the complete integrability of the geodesic flow on $\bb{R}^{2n}$ with metric (\ref{cflmetr}) is to consider it as a motion along the round sphere $S^{2n-1}$ in $\bb{R}^{2n}$ and as a motion along the radial direction
$r=|y|$. For this write the metric (\ref{cflmetr}) as
\begin{eqnarray}
g= h(r) (dr^2+ r^2 g(S^{2n-1}))~,
\end{eqnarray}
where $g(S^{2n-1})$ is the metric on the round $S^{2n-1}$ sphere. It is well known that the vector fields (\ref{rotvf}) are tangential to $S^{2n-1}$ and leave the round metric on $S^{2n-1}$ invariant. The associated conserved charges are as in (\ref{rotcc}) and they are functions on $T^*S^{2n-1}$, i.e. they do not depend on the radial component $p_r$ of the momentum $p$. One can proceed to define (\ref{casichso2n}) and in turn show that the geodesic flow on $S^{2n-1}$ is completely integrable. Notice that $D_{2n}$ is the Hamiltonian of the geodesic flow on $S^{2n-1}$. All these charges including the Hamiltonian on $S^{2n-1}$ are orbital as they do not depend on the metric (\ref{cflmetr}).
As there are $2n-1$ independent charges in involution associated with the geodesic flow on $S^{2n-1}$, the addition of the Hamiltonian $H= {1\over2} h^{-1} \delta^{ij} p_i p_j$ of the geodesic flow on $\bb{R}^{2n}$ gives
$2n$ independent conserved charges in involution proving the complete integrability of the geodesic flow of the metric (\ref{cflmetr}).
This construction can be reverse engineered and generalised. In particular, consider a metric on an $n$-dimensional manifold $M^n$,
\begin{eqnarray}
g(M^n)=dz^2+ g(N^{n-1})(z)~,
\end{eqnarray}
where $z$ is a coordinate and $g(N^{n-1})(z)$ is a metric on the submanifold $N^{n-1}$ of $M^n$ which may depend on $z$. Suppose now there is a group of isometries on $M^n$ which has as a principal orbit $N^{n-1}$. Clearly the associated conserved charges $Q=K^M p_M$, for each Killing vector field $K$, will be functions on $T^*N$. If one is able to find orbital conserved charges $D_m$, $m=1,\dots, n-1$ in involution, then the geodesic flow on $M^n$ will be completely integrable after the inclusion of the Hamiltonian $H$
of the geodesic flow on $M^n$ as an additional conserved charge. This is because $H$ is a function on $T^*M^n$ and so it is independent from $D_m$ which are functions on $T^*N^{n-1}$. Moreover $\{D_m, H\}_{\mathrm{PB}}=0$ as
$D_m$ are constructed as polynomials of the conserved charges associated with the isometries on $M^n$. This argument will be repeatedly used to prove complete integrability of geodesic flows of brane backgrounds and clearly can be adapted to all manifolds which have a principal orbit of codimension at most one with respect to a group action.
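A minimal toy instance of this codimension-one argument can be checked directly. The sketch below takes the hypothetical warped product $M^2=\bb{R}_z\times S^1$ with $g=dz^2+f(z)\,d\theta^2$ (not one of the brane metrics); the orbital charge $Q=p_\theta$ is a function on $T^*S^1$ and Poisson-commutes with the metric-dependent Hamiltonian.

```python
import sympy as sp

# Toy warped product M^2 = R_z x S^1 with metric g = dz^2 + f(z) dtheta^2.
z, theta, pz, pth = sp.symbols('z theta p_z p_theta')
f = sp.Function('f')(z)   # generic warp factor f(z) > 0

H = (pz**2 + pth**2/f)/2  # geodesic Hamiltonian H = (1/2) g^{MN} p_M p_N
Q = pth                   # orbital charge of the Killing vector d/dtheta

def pb(a, b):
    """Canonical Poisson bracket {a, b} on T*M^2."""
    return (sp.diff(a, z)*sp.diff(b, pz) - sp.diff(a, pz)*sp.diff(b, z)
            + sp.diff(a, theta)*sp.diff(b, pth) - sp.diff(a, pth)*sp.diff(b, theta))

# Q and H are two independent charges in involution on a two-dimensional
# manifold, so the toy geodesic flow is Liouville integrable.
assert sp.simplify(pb(Q, H)) == 0
```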
\subsection{D-branes}
\subsubsection{The KS and CCKY tensors of D-branes}
The metric of type II Dp-branes in the string frame \cite{d5, d5hs, d3, d7, d8a, d8} is
\begin{eqnarray}
g=h^{-{1\over2}} \sum_{a,b=0}^p \eta_{ab}d\sigma^a d\sigma^b+ h^{{1\over2}} \sum_{i,j=1}^{9-p}\delta_{ij} dy^i dy^j~,
\label{dbrane}
\end{eqnarray}
where $p=0,\dots, 8$ with $p$ even (odd) for IIA (IIB) D-branes, $\sigma^a$ are the worldvolume coordinates, $y^i$ are the transverse coordinates and $h=h(y)$ is a harmonic function, $\delta^{ij} \partial_i\partial_j h=0$. Apart from the metric, the solutions depend on a non-vanishing dilaton field and an appropriate form field strength, which we suppress. For planar branes located at different points $y_s$ in $\bb{R}^{9-p}$, one takes for $p\leq 6$
\begin{eqnarray}
h=1+ \sum_{s} {q_s\over |y-y_s|^{7-p}}~,
\label{mhf}
\end{eqnarray}
where $|\cdot|$ is the Euclidean norm in $\bb{R}^{9-p}$ and $q_s$ is a constant proportional to the charge density of the branes.
The solution is invariant under the action of the Poincar\'e group, $SO(p,1)\ltimes \bb{R}^{p,1}$, acting on the worldvolume coordinates $\sigma^a$. If the harmonic function is chosen such that $h=h(|y|)$\footnote{The harmonic function is $h=1+{q\over |y|^{7-p}}$ for $p=0, \dots, 6$, $h=1+ q \log |y|$ for $p=7$ and $h=1+ q |y|$ for $p=8$.}, then the solution will be invariant under the action of the $SO(9-p)$ group acting on the transverse coordinates $y$.
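As a quick consistency check, the harmonicity of $h=1+q/|y|^{7-p}$ on $\bb{R}^{9-p}$ away from the origin can be verified symbolically; the sketch below takes the illustrative instance $p=3$, i.e. six transverse dimensions.

```python
import sympy as sp

# Check harmonicity of h = 1 + q/|y|^(7-p) on R^(9-p), away from y = 0.
# Illustrative instance: p = 3, so d = 9 - p = 6 transverse dimensions.
d = 6
q = sp.Symbol('q')
y = sp.symbols(f'y1:{d + 1}')
r = sp.sqrt(sum(yi**2 for yi in y))
h = 1 + q/r**(d - 2)  # note 7 - p = d - 2

laplacian = sum(sp.diff(h, yi, 2) for yi in y)
assert sp.simplify(laplacian) == 0
```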
Considering the Dp-branes (\ref{dbrane}) with $h=h(|y|)$, the KS tensors which are invariant under the worldvolume symmetry of the solution are
\begin{eqnarray}
d_{a_1\dots a_{2m}i_1\dots i_k}= h^{{1\over4} (k-m)}(|y|)\,\, y^{j_1}\dots y^{j_q} a_{j_1\dots j_q, i_1\dots i_k} \eta_{(a_1a_2}\dots \eta_{a_{2m-1} a_{2m})}~,
\end{eqnarray}
provided that the constant coefficients $a$ satisfy
\begin{eqnarray}
a_{(j_1\dots j_q, i_1)\dots i_k}=a_{j_1\dots (j_q, i_1\dots i_k)}=0~.
\end{eqnarray}
Each of these KS tensors will generate a symmetry of the relativistic particle action (\ref{gact}). As a result each such action on a D-brane background admits
an infinite number of symmetries. The algebra of the associated transformations is on-shell isomorphic to that of the Poisson bracket algebra of the associated charges.
To investigate the symmetries of the spinning particles (\ref{sgact}) propagating on D-branes, it suffices to find the KY tensors of these backgrounds. For this, one begins with an ansatz which respects the worldvolume isometries of the solutions. As the KY tensors are dual to CCKY ones, let us focus on the latter. It turns out that
\begin{eqnarray}
\beta(\varphi)=h^{k+1-p\over 4}(|y|)\,\, Y\wedge \varphi\wedge d{\mathrm{vol}}(\bb{R}^{p,1})~,
\end{eqnarray}
is a CCKY tensor for any constant $k$-form $\varphi$ on $\bb{R}^{8-p}$, where $d{\mathrm{vol}}(\bb{R}^{p,1})$ is the volume form of $\bb{R}^{p,1}$ with respect to the flat metric and $Y=\delta_{ij} y^i dy^j$. Therefore Dp-branes admit $2^{8-p}$ linearly independent KY forms each generating a symmetry of the action (\ref{sgact}) of spinning particle probes in these backgrounds.
The associated conserved charges are given in (\ref{ccalpha}).
\subsubsection{Complete integrability of geodesic flow}
The geodesic flow on all Dp-brane backgrounds with $h=h(|y|)$ is completely integrable. Of course one can separate the geodesic equation in angular variables. Here we shall give all the charges which are in involution. As we have already mentioned, the isometry group of such a Dp-brane solution is $SO(p,1)\ltimes \bb{R}^{p,1}\times SO(9-p)$. Such a group has a codimension one principal orbit $\bb{R}^{p,1}\times S^{8-p}$ in the Dp-brane background. In particular, the Killing vectors generated by the translations along the worldvolume coordinates are $k_a=\partial_a$ and those generated by $SO(9-p)$ rotations on the transverse coordinates are
\begin{eqnarray}
k_{ij}=y_i \partial_j-y_j\partial_i~,~~~i<j~,
\label{rotvfx}
\end{eqnarray}
where $y_i=y^i$. The associated conserved charges written in terms of the momenta are
\begin{eqnarray}
Q_a=p_a~,~~~ Q_{ij}=Q(k_{ij})= y_i p_j-y_j p_i~.
\end{eqnarray}
These charges are not in involution. However, one can verify that the 9 conserved charges
\begin{eqnarray}
Q_a~,~~~D_m={1\over4} \sum_{i,j\geq 10-p-m} (Q_{ij})^2~,~~~m=2, \dots, 9-p~,
\end{eqnarray}
are all orbital, independent and in involution. These together with the Hamiltonian of (\ref{gact}) yield 10 charges in involution and the geodesic flow on all such Dp-brane solutions is completely integrable.
\subsection{Common sector branes}
\subsubsection{KS and KY tensors of common sector branes}
The metric of the fundamental string solution \cite{funstring} is
\begin{eqnarray}
g=h^{-1} \eta_{ab} d\sigma^a d\sigma^b+ \delta_{ij} dy^i dy^j~,
\label{fstring}
\end{eqnarray}
where $a,b=0,1$ and $i,j=1,\dots, 8$ and $h$ is a harmonic function on $\bb{R}^8$, $\delta^{ij} \partial_i\partial_j h=0$. We have suppressed the other two fields of the solution, the dilaton and the 3-form field strength.
As for the D-branes, consider the fundamental string solution with $h=h(|y|)=1+{q\over |y|^6}$. Such a solution admits the same isometry group as that of the D1-brane. One can then demonstrate that
the KS tensors that preserve the worldvolume symmetry of the fundamental string are
\begin{eqnarray}
d_{a_1\dots a_{2m}i_1\dots i_k}= h^{-m}(|y|) y^{j_1}\dots y^{j_q} a_{j_1\dots j_q, i_1\dots i_k} \eta_{(a_1a_2}\dots \eta_{a_{2m-1} a_{2m})}~,
\end{eqnarray}
provided that the constant coefficients satisfy
$
a_{(j_1\dots j_q, i_1)\dots i_k}=a_{j_1\dots (j_q, i_1\dots i_k)}=0
$.
As a result, a relativistic particle whose dynamics is described by the action (\ref{gact}) on such a background admits an infinite number of symmetries generated by these KS tensors.
After some computation, one can verify that CCKY forms of the fundamental string solution are
\begin{eqnarray}
\beta(\varphi)=h^{-1}(|y|)\, Y\wedge \varphi\wedge d\sigma^0\wedge d\sigma^1~,
\end{eqnarray}
for any constant $k$-form $\varphi$ on $\bb{R}^8$, where $Y=\delta_{ij} y^i dy^j$. These give rise to $2^7$ linearly independent dual KY forms which generate symmetries for a spinning particle with action (\ref{sgact}) propagating on this background.
The metric of the NS5-brane solution \cite{ns5, callan} is
\begin{eqnarray}
g= \eta_{ab} d\sigma^a d\sigma^b+ h \delta_{ij} dy^i dy^j~,
\label{ns5}
\end{eqnarray}
where $a,b=0,\dots, 5$, $i,j=1,2,3,4$ and $h$ is a harmonic function on $\bb{R}^4$. We have again suppressed the dilaton and 3-form fields of the solution. For $h=h(|y|)=1+{q\over |y|^2}$, the solution has the same isometry group as that of the D5-brane.
As for the fundamental string solution above, the KS tensors that preserve the worldvolume symmetry of the NS5-brane are
\begin{eqnarray}
d_{a_1\dots a_{2m}i_1\dots i_k}= h^{k}(|y|)\,\, y^{j_1}\dots y^{j_q} a_{j_1\dots j_q, i_1\dots i_k} \eta_{(a_1a_2}\dots \eta_{a_{2m-1} a_{2m})}~,
\end{eqnarray}
provided that the constant tensors $a$ satisfy
$
a_{(j_1\dots j_q, i_1)\dots i_k}=a_{j_1\dots (j_q, i_1\dots i_k)}=0$.
Therefore the action (\ref{gact}) of a relativistic particle propagating in this background admits an infinite number of symmetries generated by these KS tensors.
The CCKY forms of the NS5-brane are
\begin{eqnarray}
\beta(\varphi)=h^{{k+2\over2}}(|y|)\,\, Y\wedge \varphi\wedge d{\mathrm{vol}}(\bb{R}^{5,1})~,
\end{eqnarray}
for any constant $k$-form $\varphi$ on $\bb{R}^4$, where $Y=\delta_{ij} y^i dy^j$ and $d{\mathrm{vol}}(\bb{R}^{5,1})$ is the volume form of the worldvolume of the NS5-branes with respect to the flat metric. These give rise to $2^3$ linearly independent dual KY forms that generate symmetries of a spinning particle with action
(\ref{sgact}) propagating on the background.
\subsubsection{Complete integrability of geodesic flow}
Consider a relativistic particle propagating on the fundamental string solution with $h=h(|y|)$. The worldsheet translations and transverse coordinate $SO(8)$ rotations give rise to the conserved charges
\begin{eqnarray}
Q_a=p_a~,~~~a=0,1~;\qquad Q_{ij}= y_i p_j-y_j p_i~,~~~i,j=1,\dots,8~,
\end{eqnarray}
respectively. From these one can construct the nine orbital conserved charges
\begin{eqnarray}
Q_a~,~~~D_m={1\over4} \sum_{i,j\geq 9-m} (Q_{ij})^2~,~~~m=2, \dots, 8~,
\end{eqnarray}
which are independent and in involution. These together with the Hamiltonian of the relativistic particle (\ref{gact}) imply the complete integrability of the geodesic flow on the fundamental string background.
Similarly, the conserved charges of a relativistic particle propagating on a NS5-brane background associated with the worldvolume translations and transverse $SO(4)$ rotations are
\begin{eqnarray}
Q_a= p_a~,~~~a=0,\dots, 5~;~~~Q_{ij}= (y_i p_j-y_j p_i)~,~~~i,j=1,2,3,4~.
\end{eqnarray}
These give rise to the nine orbital conserved charges
\begin{eqnarray}
Q_a~,~~~D_m={1\over4} \sum_{i,j\geq 5-m} (Q_{ij})^2~,~~~m=2,\dots, 4~,
\end{eqnarray}
which are independent and in involution. These together with the Hamiltonian of the relativistic particle imply the complete integrability of the geodesic flow of the NS5-brane.
\section{Common sector and TCFHs}
The simplest sector in which to explore the TCFH of type II supergravities is the common sector. For this sector, all fields vanish apart from the metric, dilaton and
the NS-NS 3-form field strength $H$, $dH=0$. A direct inspection of the TCFH of type II supergravities reveals that some of the spinor bilinears are covariantly constant with respect to a connection with skew-symmetric torsion while some others satisfy a more general TCFH. The former are well known, especially in the context of string compactifications, and have been extensively investigated in the sigma model approach to string theory. They generate additional supersymmetries of the worldvolume actions as well as
W-type of symmetries \cite{phgp}. Here we shall demonstrate that string probes on all common sector supersymmetric solutions admit W-type of symmetries generated by the form bilinears.
\subsection{Probes}
Before we proceed with the details of describing how the TCFHs generate symmetries for probes in supersymmetric backgrounds, we shall first describe the probe actions that we shall be considering. The main focus will be on string and particle probes. The dynamics of string probes propagating on a spacetime with metric $g$ and a 2-form gauge potential $b$ \cite{ zumino, lag, ghull, howesierra} is described by the action
\begin{eqnarray}
A=\int d^2\rho\, d^2\theta\, (g+b)_{MN}\, D_+x^M\, D_-x^N~,
\label{sact}
\end{eqnarray}
where $x=x(\rho, \theta)$ are real superfields that depend on the worldsheet superspace with commuting $(\rho^0, \rho^1)$ and
anti-commuting $(\theta^+, \theta^-)$ real coordinates. The action above has been given as in \cite{howegp2x} where one defines the lightcone coordinates, $\rho^{=\kern-0.40em{\vert}}= \rho^0+\rho^1$, $\rho^{=}=-\rho^0+\rho^1$, and the algebra of superspace derivatives is $D_-^2=i\partial_{=}$, $D_+^2=i\partial_{{=\kern-0.40em{\vert}}}$ and $D_+ D_-+D_- D_+=0$\,. Note that the sign labelling of the worldsheet superspace coordinates denotes $\mathfrak{spin}(1,1)$ chirality.
The infinitesimal symmetries of (\ref{sact}) that we shall be considering are given by
\begin{eqnarray}
\delta x^M= \epsilon^{(+)} \beta^M{}_{P_1\dots P_k}\, D_+x^{P_1}\dots D_+x^{P_k}~,
\label{ssym1}
\end{eqnarray}
where $\beta$ is a spacetime ($k+1$)-form and $\epsilon^{(+)}$ is an infinitesimal parameter; the superscript $(+)$ indicates that the weight of the infinitesimal parameter $\epsilon$ is such that the right-hand side of (\ref{ssym1}) is a $\mathfrak{spin}(1,1)$ scalar. The action (\ref{sact}) is invariant under such transformations provided that
\begin{eqnarray}
\nabla^{(+)}_M \beta_{P_1\dots P_{k+1}}=0~,
\label{conp}
\end{eqnarray}
where
\begin{eqnarray}
\nabla^{(\pm)}=\nabla\pm {1\over 2}C~,
\end{eqnarray}
with $C=db$, i.e. $\nabla^{(\pm)}_M Y^N=\nabla_M Y^N\pm{1\over2} C^N{}_{MR} Y^R$. Therefore $\beta$ generates a symmetry provided it is a $\nabla^{(+)}$-covariantly constant form.
One can also consider symmetries of (\ref{sact}) generated by the infinitesimal transformation
\begin{eqnarray}
\delta x^M= \epsilon^{(-)} \beta^M{}_{P_1\dots P_k} D_-x^{P_1}\dots D_-x^{P_k}~,
\label{ssym2}
\end{eqnarray}
where $\epsilon^{(-)}$ is an infinitesimal parameter.
The condition for invariance of the action in such a case is
\begin{eqnarray}
\nabla^{(-)}_M \beta_{P_1\dots P_{k+1}}=0~,
\label{conm}
\end{eqnarray}
i.e. $\beta$ is a $\nabla^{(-)}$-covariantly constant form. In many examples that follow the spacetime will admit several $\nabla^{(\pm)}$-covariantly
constant forms which generate symmetries of the string probe action (\ref{sact}). All $\nabla^{(+)}$-covariantly constant forms of the common sector backgrounds coincide with those of heterotic supersymmetric backgrounds. In turn these can be computed using the classification results of \cite{hetgpug} for all heterotic background Killing spinors. The
$\nabla^{(-)}$-covariantly constant forms of common sector backgrounds can also be read from the classification results of \cite{hetgpug}.
One can easily investigate the commutators of these symmetries (\ref{ssym1}) and (\ref{ssym2}). In general these symmetries are of W-type and have been previously explored in \cite{phgp} both in the context of string compactifications and special geometric structures.
Actions of spinning particle probes are also invariant under the symmetries generated by either $\nabla^{(+)}-$ or $\nabla^{(-)}-$ covariantly constant forms $\beta$. One such worldline probe action is
\begin{eqnarray}
A=\int\, d\tau\, d^2\theta\, (g+b)_{MN} D_+x^M D_-x^N~,
\label{pact}
\end{eqnarray}
which in addition to the metric exhibits a 2-form coupling $b$, where the superfields $x^M=x^M(\tau, \theta)$ depend on the worldline superspace with commuting $\tau$ and anti-commuting $(\theta^+, \theta^-)$ real coordinates; see \cite{colesgp} for a systematic description of spinning particle actions with form and other couplings. The algebra of the worldline superspace derivatives is $D_+^2=D_-^2=i\partial_\tau$ and $D_+D_-+D_-D_+=0$. The signs on $\theta^\pm$ are just labels; there is no chirality in one dimension. The infinitesimal variation of the superfields is as in either (\ref{ssym1}) or (\ref{ssym2}), but now the fields are worldline superfields and the superspace derivatives are those of the worldline superspace. The conditions for invariance of the action above are given in either (\ref{conp}) or (\ref{conm}), respectively.
Another class of spinning particle probes we shall be considering are described by the action \cite{colesgp}
\begin{eqnarray}
A=-{1\over2} \int\, d\tau\, d\theta\, \big(i g_{MN} Dx^M \partial_\tau x^N+{1\over6} C_{MNR} Dx^M Dx^N Dx^R\big)~,
\label{1part}
\end{eqnarray}
where $g$ is the spacetime metric and $C$ is a 3-form on the spacetime; $C$ is not necessarily a closed 3-form. Moreover
$x^M$ is a superfield that depends on the worldline superspace coordinates $(\tau, \theta)$ and $D^2=i\partial_\tau$. Given a ($k$+1)-form $\beta$
one can construct the infinitesimal transformation
\begin{eqnarray}
\delta x^M= \alpha\,\, \beta^M{}_{P_1\dots P_k} Dx^{P_1}\dots Dx^{P_k}~,
\label{ssym3}
\end{eqnarray}
where $\alpha$ is an infinitesimal parameter. The conditions required for this action to be invariant under the transformation (\ref{ssym3}) can be arranged in two different ways. One way is to require, as in previous cases, that $\beta$ is $\nabla^{(+)}$-covariantly constant. An alternative way to arrange the conditions for invariance of (\ref{1part}) is
\begin{eqnarray}
&&\nabla^{(+)}_M \beta_{P_1\dots P_{k+1}}=\nabla^{(+)}_{[M} \beta_{P_1\dots P_{k+1}]}~,
\cr
&&di_\beta C+(-1)^k {k+2\over2} i_\beta dC=0~.
\label{addcon}
\end{eqnarray}
These conditions and an explanation of the notation can be found in \cite{kygp}.
Therefore this set of conditions implies that $\beta$ is a $\nabla^{(+)}$-KY form. For $C=0$, one obtains that $\beta$ is a KY form as for the spinning particles
described by the action (\ref{sgact}).
\subsection{IIA common sector}
\subsubsection{The TCFH}
The TCFH of the common sector can be written as
\begin{eqnarray}
\nabla_M \phi_{N_1 \dots N_p} -\frac{p}{2} H^P{}_{M[N_1} \tilde\phi_{|P|\dots N_p]}=0~,~~\nabla_M \tilde\phi_{N_1 \dots N_p} -\frac{p}{2} H^P{}_{M[N_1} \phi_{|P|\dots N_p]}=0~,
\label{iiacovf}
\end{eqnarray}
for $\phi= k, \pi, \tau$ and
\begin{equation}
\nabla_M \tilde{\sigma} = -\frac{1}{4}H_{MPQ}\omega^{PQ} ~,~~~ \nabla_M \omega_{NR} +\frac{1}{4}H_{MPQ}\tilde{\zeta}^{PQ}{}_{NR} = \frac{1}{2}H_{MNR}\tilde{\sigma} ~ ,
\label{iaom}
\end{equation}
\begin{eqnarray}
&&\nabla_M \tilde{\zeta}_{N_1 \dots N_4} +\frac{1}{3} {}^\star{H}_{M[N_1N_2N_3|PQR|}\tilde{\zeta}^{PQR}{}_{N_4]} - 3 H_{M[N_1 N_2}\,\omega_{N_3 N_4]} =
\cr
&&\quad-\frac{1}{12}g_{M[N_1}{}^\star{H}_{N_2 N_3 N_4] P_1 \dots P_4}\tilde{\zeta}^{P_1 \dots P_4} + \frac{5}{12} {}^\star{H}_{[MN_1N_2N_3|PQR|}\tilde{\zeta}^{PQR}{}_{N_4]}~,
\end{eqnarray}
\begin{equation}
\nabla_M \sigma = - \frac{1}{4} H_{MPQ} \tilde \omega^{PQ} ~ ,~~~
\nabla_M \tilde\omega_{NR}+\frac{1}{4} H_{MPQ} \zeta^{PQ}{}_{NR} = \frac{1}{2} H_{MNR} \sigma ~ ,
\label{iiatom}
\end{equation}
\begin{eqnarray}
&&
\nabla_M \zeta_{N_1 \dots N_4} + \frac{1}{3} {}^\star{H}_{M[N_1N_2N_3|PQR|}\zeta^{PQR}{}_{N_4]}- 3 H_{M[N_1N_2} \tilde\omega_{N_3N_4]}= \cr
&&
\qquad -\frac{1}{12} g_{M[N_1} {}^\star{H}_{N_2 N_3 N_4] P_1 \dots P_4} \zeta^{P_1\dots P_4} + \frac{5}{12} {}^\star{H}_{[MN_1N_2N_3|PQR|}\zeta^{PQR}{}_{N_4]} ~.
\label{iiiatom}
\end{eqnarray}
These can be easily derived from the general IIA TCFH in section \ref{iiatcfhs} upon setting all other fields apart from the metric, dilaton and NS-NS 3-form to zero.
It is clear from the TCFH that $k^{\pm rs}=k^{rs}\pm \tilde k^{rs}$, $\pi^{\pm rs}=\pi^{rs}\pm \tilde \pi^{rs}$ and $\tau^{\pm rs}= \tau^{rs}\pm \tilde \tau^{rs}$ are covariantly constant
\begin{eqnarray}
\nabla^{(\pm)}k^{\pm rs}=\nabla^{(\pm)} \pi^{\pm rs}=\nabla^{(\pm)} \tau^{\pm rs}=0~,
\label{ccf}
\end{eqnarray}
with respect to the connections
\begin{eqnarray}
\nabla^{(\pm)}=\nabla\pm {1\over2} H~.
\label{nablapm}
\end{eqnarray}
These are the forms that have mostly been explored in the literature. Although the rest do not satisfy such a straightforward condition, they are nevertheless part of the
geometric structure of the common sector backgrounds. A consequence of the TCFH above is that
the reduced holonomy of the minimal connection ${\cal D}^{\cal F}$ of a generic common sector background is included in $\times^6 SO(9,1)\times^2 GL(255)$.
\subsubsection{Probe hidden symmetries generated by the TCFH}
After identifying the 3-form coupling $C=db$ of the probe actions (\ref{pact}) and (\ref{sact}) with the 3-form field strength $H$ of common sector backgrounds, $C=H$, the conditions
on the form bilinears $k^{\pm rs}$, $\pi^{\pm rs}$ and $\tau^{\pm rs}$ imposed by the TCFH (\ref{ccf}) coincide with those in (\ref{conp}) and (\ref{conm}) as required for the invariance of these probe actions. Therefore the $\nabla^{(\pm)}$-covariantly constant form bilinears $k^{\pm rs}$, $\pi^{\pm rs}$ and $\tau^{\pm rs}$ generate symmetries for the particle (\ref{pact}) and string (\ref{sact}) probe actions. These are given by the infinitesimal transformations
\begin{eqnarray}
&&\delta x^M= \epsilon^{(\pm)}_{ rs}(k^{\pm rs})^M~,~~~ \delta x^M= \epsilon^{(\pm)}_{ rs} (\pi^{\pm rs})^M{}_{PQ} D_\pm x^P D_\pm x^Q~,~~~
\cr
&&\delta x^M= \epsilon^{(\pm)}_{ rs} (\tau^{\pm rs})^M{}_{N_1\dots N_4} D_\pm x^{N_1}\dots D_\pm x^{N_4}~,
\label{iiainf}
\end{eqnarray}
where $\epsilon^{(\pm)}_{ rs}$ are the infinitesimal parameters.
Similarly after identifying $C$ with $H$ the spinning particle probes described by the action (\ref{1part})
are invariant under symmetries generated by the $\nabla^{(+)}$-covariantly constant forms $k^{+ rs}$, $\pi^{+ rs}$ and $\tau^{+ rs}$. The infinitesimal variations are given
as in (\ref{iiainf}) after replacing the worldsheet superfields with the worldline ones and the superspace derivative $D_+$ with $D$. The $\nabla^{(-)}$-covariantly constant forms $k^{- rs}$, $\pi^{- rs}$ and $\tau^{- rs}$ also generate symmetries of the spinning particle probe with action (\ref{1part}), now with the coupling $C$ identified with $-H$, $C=-H$.
The interpretation of the rest of the form bilinears satisfying the TCFH conditions (\ref{iaom})-(\ref{iiiatom}) as generators of symmetries of worldvolume probe actions
is not apparent. For generic common sector backgrounds, these bilinears do not generate symmetries for the probe actions we have considered here. Nevertheless, they may generate symmetries for probes on some special backgrounds, as some terms in the TCFH may vanish and so the remaining TCFH conditions can be interpreted as invariance conditions of some worldvolume probe action.
\subsubsection{Hidden symmetries of probes on common sector IIA branes}
We have demonstrated that particle and string probes in common sector backgrounds exhibit a large number of symmetries generated by the $\nabla^{(\pm)}$-covariantly constant forms $k^{\pm rs}$, $\pi^{\pm rs}$ and $\tau^{\pm rs}$. To present some examples, we shall explore the symmetries generated by the form bilinears of the fundamental string and NS5-brane. For this, we have to compute the form bilinears of these two backgrounds.
To begin, let us assume that the worldsheet directions of the fundamental string are along $05$. Then the Killing spinors of the solution can be written as $\epsilon=h^{-{1\over4}} \epsilon_0$, where $\epsilon_0$ is a constant spinor that satisfies the condition $\Gamma_0\Gamma_5\Gamma_{11}\epsilon_0=\pm \epsilon_0$ with the gamma matrices in a frame basis\footnote{This will be the case for the conditions on the Killing spinors of all brane solutions that we shall investigate from now on.}. The metric of the solution is given in (\ref{fstring}) after changing the worldvolume directions from $01$ to $05$ and taking $h$ to be any harmonic function on $\bb{R}^8$, e.g. $h$ can be a multi-centred harmonic function as in (\ref{mhf}) for $p=1$. The choice of worldsheet directions we have made for the string above may be thought of as unconventional. However, it turns out that such a choice is aligned with the basis used in spinorial geometry \cite{uggp} to construct realisations of Clifford algebras in terms of forms; for a review on spinorial geometry techniques see \cite{review}. We shall use spinorial geometry to solve the condition on $\epsilon_0$ and so this labelling of the coordinates is convenient.
Indeed choosing the plus sign in the condition on $\epsilon_0$ and using the realisation of spinors in terms of forms\footnote{In spinorial geometry the Dirac spinors of $\mathfrak{spin}(9,1)$ are identified with
$\Lambda^*(\bb{C}^5)$. The Gamma matrices are realised on $\Lambda^*(\bb{C}^5)$ using the exterior multiplication and inner derivation operations with respect to a Hermitian basis $(e_1, \dots, e_5)$ in $\bb{C}^5$. The Majorana spinors satisfy the reality condition $\Gamma_{6789} *\epsilon=\epsilon$. For more details see e.g. appendix B of \cite{review}.} write $\epsilon_0=\eta+ e_5\wedge \lambda$, where $\eta$ and $\lambda$ are constant Majorana $\mathfrak{spin}(8)$ spinors. Then the condition $\Gamma_0\Gamma_5\Gamma_{11}\epsilon_0= \epsilon_0$ restricts $\eta$ and $\lambda$ to be positive chirality Majorana-Weyl spinors of $\mathfrak{spin}(8)$, i.e. $\eta, \lambda \in \Delta_8^{+}\equiv \Lambda^{\mathrm{ev}}(\bb{R}\langle e_1, e_2, e_3, e_4\rangle)$.
Thus the most general solution of $\Gamma_0\Gamma_5\Gamma_{11}\epsilon_0= \epsilon_0$ is
\begin{eqnarray}
\epsilon_0=\eta+ e_5\wedge \lambda~,
\label{fsol}
\end{eqnarray}
where $\eta$ and $\lambda$ are positive chirality Majorana-Weyl spinors of $\mathfrak{spin}(8)$.
Using (\ref{fsol}) one can easily express all the form bilinears of the fundamental string background in terms of the form bilinears of $\eta$ and $\lambda$. The explicit expressions have been collected in appendix \ref{apa}. Using these one finds that
\begin{eqnarray}
&&k^{+rs}= 2 h^{-{1\over2}}\langle \eta^r, \eta^s\rangle (e^0-e^5)~,~~~ k^{-rs}= h^{-{1\over2}} \langle \lambda^r, \lambda^s\rangle (e^0+e^5)~,
\cr
&&
\pi^{+rs}=h^{-{1\over2}} \langle \eta^r, \Gamma_{ij}\eta^s\rangle (e^0-e^5)\wedge e^i\wedge e^j~,~~~
\pi^{-rs}= h^{-{1\over2}}\langle \lambda^r,\Gamma_{ij} \lambda^s\rangle (e^0+e^5)\wedge e^i\wedge e^j~,
\cr
&&
\tau^{+rs}={2\over 4!}h^{-{1\over2}}\langle \eta^r,\Gamma_{ijk\ell} \eta^s\rangle (e^0-e^5)\wedge e^i\wedge e^j\wedge e^k\wedge e^\ell~,
\cr
&&
\tau^{-rs}= {2\over 4!} h^{-{1\over2}} \langle \lambda^r,\Gamma_{ijk\ell} \lambda^s\rangle (e^0+e^5)\wedge e^i\wedge e^j\wedge e^k\wedge e^\ell~,
\end{eqnarray}
where $(e^0, e^5, e^i)$ is a pseudo-orthonormal frame for the metric (\ref{fstring}), i.e. $g=-(e^0)^2+ (e^5)^2+\sum_i (e^i)^2$, and $\langle\cdot, \cdot\rangle$ is the $\mathfrak{spin}(8)$-invariant (Hermitian) inner product on $\Delta_8^{+}$. Both $k^{\pm rs}$ are along the worldvolume directions and Killing. This implies that both $k$ and $\tilde k$ are Killing as well. This is expected for $k$ but not for $\tilde k$. Nevertheless $\tilde k$ is Killing because the fundamental string is a special background. Observe that the $\nabla^{(+)}$- ($\nabla^{(-)}$-) parallel form bilinears are left- (right-) handed from the string worldvolume perspective as indicated by their dependence on the worldsheet lightcone directions.
It remains to compute the bilinears of $\mathfrak{spin}(8)$ Majorana-Weyl spinors $\eta$ and $\lambda$. These can be obtained using the decomposition of the product of two positive chirality Majorana-Weyl representations $\Delta_8^{+}$ in terms of forms on $\bb{R}^8$ as
\begin{eqnarray}
&&\Delta_8^{+}\otimes \Delta_8^+= \Lambda^{0}(\bb{R}^8)\oplus \Lambda^{2}(\bb{R}^8)\oplus \Lambda^{4+}(\bb{R}^8)~,
\label{repdec}
\end{eqnarray}
where $\Lambda^{4+}(\bb{R}^8)$ are the self-dual 4-forms on $\bb{R}^8$. As $\eta$ and $\lambda$ are in $\Delta_8^{+}$ and otherwise unrestricted, their bilinears span all 0-, 2- and self-dual 4-forms in $\bb{R}^8$. As a consequence, the string probe (\ref{sact}) and particle probe (\ref{pact}) actions are invariant under $2^7$ independent symmetries.
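The count of $2^7$ independent symmetries follows from the dimensions in the decomposition (\ref{repdec}); the following arithmetic sketch records the check.

```python
from math import comb

# Dimensions in Delta_8^+ (x) Delta_8^+ = Lambda^0 + Lambda^2 + Lambda^{4+}:
# one chirality sector gives 1 + 28 + 35 = 64 = 8*8 form bilinears.
dim_sector = comb(8, 0) + comb(8, 2) + comb(8, 4)//2
assert dim_sector == 8*8

# The nabla^(+) and nabla^(-) sectors together give 2*64 = 2^7 symmetries.
assert 2*dim_sector == 2**7
```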
Next let us turn to the symmetries of probes on the NS5-brane background. Choosing the worldvolume of the NS5-brane along the $012567$ directions, the Killing spinors $\epsilon=\epsilon_0$ of the background satisfy the condition
$\Gamma_{3489}\Gamma_{11}\epsilon_0=\pm\epsilon_0$, where $\epsilon_0$ is a constant Majorana spinor. The metric of the solution is given in (\ref{ns5}) after changing the worldvolume directions from $012345$ to $012567$ for similar reasons as those explained for the fundamental string above and after taking $h$ to be a harmonic function on $\bb{R}^4$ as in (\ref{mhf}) for $p=5$. Choosing the plus sign, the condition $\Gamma_{3489}\Gamma_{11}\epsilon_0=\epsilon_0$ can be solved using spinorial geometry. It is convenient to first solve this condition for Dirac spinors and then impose the reality condition on $\epsilon$. The solution can be expressed as
\begin{eqnarray}
\epsilon=\eta^1+e_{34}\wedge \lambda^1+ e_3\wedge \eta^2+e_4\wedge \lambda^2~,
\label{ns5s}
\end{eqnarray}
where $\eta$ and $\lambda$ are positive chirality Weyl spinors of $\mathfrak{spin}(5,1)$, i.e. $\eta, \lambda\in \Delta^+_{(6)}\equiv \Lambda^{\mathrm{ev}}(\bb{C}\langle e_1,e_2, e_5\rangle)$. Imposing the reality condition on $\epsilon$, $\Gamma_{6789}*\epsilon=\epsilon$, one finds that
\begin{eqnarray}
\lambda^1=-\Gamma_{67} (\eta^1)^*~,~~~\lambda^2=-\Gamma_{67} (\eta^2)^*~.
\end{eqnarray}
So the Killing spinor $\epsilon$ is completely determined by the (complex) positive chirality $\mathfrak{spin}(5,1)$ spinors $\eta^1$ and $\eta^2$.
Using (\ref{ns5s}), one can easily compute all the form bilinears of the NS5-brane background and express them in terms of the form bilinears of $\eta^1$ and $\eta^2$. All these can be found in appendix \ref{apa}.
In particular the $\nabla^{(\pm)}$-covariantly constant spinor bilinears are
\begin{eqnarray}
&&k^{+rs}=4 \mathrm{Re}\langle \eta^{1r}, \Gamma_a\eta^{1s}\rangle_D e^a~,~~~k^{-rs}=4\mathrm{Re}\langle \eta^{2r}, \Gamma_a\eta^{2s}\rangle_D ~e^a~,
\end{eqnarray}
\begin{eqnarray}
&&\pi^{+rs}={2\over3}\mathrm{Re}\langle \eta^{1r},\Gamma_{abc} \eta^{1s}\rangle_D\, e^a\wedge e^b\wedge e^c
-4\mathrm{Re}\langle \eta^{1r},\Gamma_{a} \lambda^{1s}\rangle_D\, (e^3\wedge e^4 -e^8\wedge e^9)\wedge e^a
\cr
&&\qquad\qquad
-4\mathrm{Im}\langle \eta^{1r},\Gamma_{a} \eta^{1s}\rangle_D\, (e^3\wedge e^8 +e^4\wedge e^9)\wedge e^a
\cr
&&\qquad\qquad
-4\mathrm{Im}\langle \eta^{1r},\Gamma_{a} \lambda^{1s}\rangle_D\, (e^3\wedge e^9 -e^4\wedge e^8)\wedge e^a~,
\end{eqnarray}
\begin{eqnarray}
&&\pi^{-rs}={2\over3}\mathrm{Re}\langle \eta^{2r}, \Gamma_{abc} \eta^{2s}\rangle_D\, e^a\wedge e^b\wedge e^c
+4\mathrm{Re}\langle \eta^{2r},\Gamma_{a} \lambda^{2s}\rangle_D\, (e^3\wedge e^4 +e^8\wedge e^9)\wedge e^a
\cr
&&\qquad\qquad
+4\mathrm{Im}\langle \eta^{2r},\Gamma_{a} \eta^{2s}\rangle_D\, (e^3\wedge e^8 -e^4\wedge e^9)\wedge e^a
\cr
&&\qquad\qquad
+4\mathrm{Im}\langle \eta^{2r},\Gamma_{a} \lambda^{2s}\rangle_D\, (e^3\wedge e^9 +e^4\wedge e^8)\wedge e^a~,
\end{eqnarray}
\begin{eqnarray}
&&\tau^{+rs}= k^{+rs}\wedge e^3\wedge e^4\wedge e^8\wedge e^9
-{2\over3}\mathrm{Re}\langle \eta^{1r},\Gamma_{abc} \lambda^{1s}\rangle_D\, (e^3\wedge e^4 -e^8\wedge e^9)\wedge e^a\wedge e^b\wedge e^c
\cr
&&\qquad\qquad
-{2\over3}\mathrm{Im}\langle \eta^{1r},\Gamma_{abc} \eta^{1s}\rangle_D\, (e^3\wedge e^8 +e^4\wedge e^9)\wedge e^a\wedge e^b\wedge e^c
\cr
&&\qquad\qquad
-{2\over3}\mathrm{Im}\langle \eta^{1r},\Gamma_{abc} \lambda^{1s}\rangle_D\, (e^3\wedge e^9 -e^4\wedge e^8)\wedge e^a\wedge e^b\wedge e^c
\cr
&&\qquad\qquad
+{4\over5!} \mathrm{Re}\langle \eta^{1r}, \Gamma_{a_1\dots a_5}\eta^{1s}\rangle_D~e^{a_1}\wedge\dots\wedge e^{a_5}~,
\end{eqnarray}
\begin{eqnarray}
&&\tau^{-rs}=- k^{-rs}\wedge e^3\wedge e^4\wedge e^8\wedge e^9
+{2\over3}\mathrm{Re}\langle \eta^{2r},\Gamma_{abc} \lambda^{2s}\rangle_D (e^3\wedge e^4 +e^8\wedge e^9)\wedge e^a\wedge e^b\wedge e^c
\cr
&&\qquad\qquad
+{2\over3}\mathrm{Im}\langle \eta^{2r},\Gamma_{abc} \eta^{2s}\rangle_D (e^3\wedge e^8 -e^4\wedge e^9)\wedge e^a\wedge e^b\wedge e^c
\cr
&&\qquad\qquad
+{2\over3}\mathrm{Im}\langle \eta^{2r},\Gamma_{abc} \lambda^{2s}\rangle_D (e^3\wedge e^9 +e^4\wedge e^8)\wedge e^a\wedge e^b\wedge e^c
\cr
&&\qquad\qquad
+{4\over5!} \mathrm{Re}\langle \eta^{2r}, \Gamma_{a_1\dots a_5}\eta^{2s}\rangle_D~e^{a_1}\wedge\dots\wedge e^{a_5}~,
\end{eqnarray}
where $a,b,c=0,1,2,5,6,7$ are the worldvolume directions, $(e^a, e^3, e^4, e^8, e^9)$ is a pseudo-orthonormal frame for the metric (\ref{ns5}), $\langle\cdot,\cdot\rangle_D$ is the $\mathfrak{spin}(5,1)$ invariant Dirac inner product and $\epsilon_{3489}=1$. Both $k^{\pm rs}$ are along the worldvolume directions of the brane and are Killing. This in turn implies that both $k$ and $\tilde k$ are Killing as well. Again $\tilde k$ is Killing because the NS5-brane is a special background. The 3- and 5-forms have mixed components along both worldvolume and transverse directions. Note that the anti-self-dual and self-dual 2-forms along the transverse directions contribute to $\nabla^{(+)}$ and $\nabla^{(-)}$ covariantly constant
forms, respectively.
Therefore the NS5-brane form bilinears have been expressed in terms of those of two positive chirality Weyl $\mathfrak{spin}(5,1)$ spinors. The decomposition of two positive chirality Weyl $\mathfrak{spin}(5,1)$ representations into forms on $\bb{C}^6$ is given by
\begin{eqnarray}
\otimes^2 \Delta^+_{(6)}= \Lambda^1(\bb{C}^6)\oplus \Lambda^{3+} (\bb{C}^6)~.
\end{eqnarray}
Therefore the string probe with action (\ref{sact}) and
particle probe with action (\ref{pact}) are invariant under $2^5$ symmetries counted over the reals. To see this, observe that from the decomposition above all 1- and self-dual 3-forms along the NS5-brane worldvolume are spanned by these spinors. So there are $6+10=2^4$ independent symmetries generated by the $\nabla^{(+)}$-covariantly constant forms and similarly for the $\nabla^{(-)}$-covariantly constant forms yielding $2^5$ in total. These generate a symmetry algebra of W-type \cite{phgp}. For the remaining form bilinears in appendix \ref{apa}, there is not a straightforward way to relate them to symmetries of particle or string probe actions.
\subsection{IIB common sector}
\subsubsection{The TCFH and probe hidden symmetries}
The TCFH of the IIB common sector can be written as
\begin{eqnarray}
\nabla_M\, \phi^{rs}_{N_1\dots N_p} -\frac{p}{2}\,H_{M[N_1}{}^P\,\phi^{(3)rs}_{|P|\dots N_p]}=0 ~, ~~~\nabla_M\, \phi^{(3)rs}_{N_1\dots N_p} -\frac{p}{2}\,H_{M[N_1}{}^P\,\phi^{rs}_{|P|\dots N_p]}=0 ~,
\end{eqnarray}
for $\phi= k, \pi$ and $\tau$.
The rest of the TCFH is
\begin{eqnarray}
\nabla_M\, k^{(\alpha)rs}_P + \frac{i}{4}\,\varepsilon_{\alpha\beta}\,H_{M}{}^{N_1N_2}\pi^{(\beta)rs}_{PN_1N_2}
= 0 ~,
\end{eqnarray}
\begin{eqnarray}
\nabla_M\,\pi^{(\alpha)rs}_{P_1P_2P_3} + \frac{i}{4}\,\varepsilon_{\alpha\beta}\,H_{MN_1N_2}\,\tau^{(\beta)rs}_{P_1P_2P_3}{}^{N_1N_2}
-\frac{3i}{2}\,\varepsilon_{\alpha\beta}\,H_{M[P_1P_2}\,k^{(\beta)rs}_{P_3]}
= 0~,
\end{eqnarray}
\begin{eqnarray}
&&
\nabla_M\,\tau^{(\alpha)rs}_{P_1\dots P_5}
+\frac{5i}{4}\,\varepsilon_{\alpha\beta}\,{}^{\star}H_{M[P_1\dots P_4}{}^{N_1N_2}\,\pi^{(\beta)rs}_{P_5]N_1N_2}
-5i\,\varepsilon_{\alpha\beta}\,H_{M[P_1P_2}\,\pi^{(\beta)rs}_{P_3P_4P_5]}
=
\cr
&&
\qquad
+\frac{5i}{12}\varepsilon_{\alpha\beta}\,g_{M[P_1}\,{}^{\star}H_{P_2\dots P_5]}{}^{N_1N_2N_3}\,\pi^{(\beta)rs}_{N_1N_2N_3} -\frac{3i}{2}\,\varepsilon_{\alpha\beta}\,{}^{\star}H_{[P_1\dots P_5}{}^{N_1N_2}\,\pi^{(\beta)rs}_{M]N_1N_2} ~,
\end{eqnarray}
where $\alpha, \beta=1,2$ and $\varepsilon_{12}=1$. As has already been explained, the TCFH becomes real after replacing the purely imaginary form bilinears $k^{(2)}, \pi^{(2)}$ and $\tau^{(2)}$ with $ik^{(2)}, i\pi^{(2)}$ and $i\tau^{(2)}$.
It is clear from the TCFH above, that the forms $k^{\pm}\defeq k\pm k^{(3)}$, $\pi^{\pm}\defeq\pi\pm \pi^{(3)}$ and $\tau^{\pm}\defeq\tau\pm\tau^{(3)}$ are covariantly constant with respect to the $\nabla^{(\pm)}$ connection defined in (\ref{nablapm}). As a result these form bilinears generate symmetries in the worldvolume probe actions
given in (\ref{sact}) and (\ref{pact}). These are the form bilinears that have mostly been explored in the literature. The remaining form bilinears in the TCFH do not have such an apparent interpretation. Nevertheless they are part of the
geometric structure of the common sector backgrounds.
A consequence of the TCFH above is that
the holonomy of the minimal connection ${\cal D}^{\cal F}$ of a generic common sector background is included in $\times^6 SO(9,1)\times^2 GL(256)$.
Note that the holonomy of the IIA common sector minimal connection is included in $\times^6 SO(9,1)\times^2 GL(255)$. The difference is that the action of ${\cal D}^{\cal F}$ on the IIA 0-form bilinears $\sigma$ and $\tilde \sigma$ is via a partial derivative and so the holonomy is trivial. However if instead we had considered the maximal TCFH connections, see \cite{gptcfh}, of the IIA and IIB common sector both would have reduced holonomy contained in $\times^6 SO(9,1)\times^2 GL(256)$.
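The number $256$ can be understood as a component count. Assuming that the 5-form bilinears are self-dual, as follows from the chirality of the IIB Killing spinors, each multiplet $(k^{(\alpha)}, \pi^{(\alpha)}, \tau^{(\alpha)})$ on which the minimal connection acts has
\begin{eqnarray}
{\rm dim}\,\Lambda^{1}+{\rm dim}\,\Lambda^{3}+{\rm dim}\,\Lambda^{5+}=10+120+126=256
\end{eqnarray}
components, in agreement with ${\rm dim}\,(\Delta^{+}_{16}\otimes \Delta^{+}_{16})=256$ for two Majorana-Weyl $\mathfrak{spin}(9,1)$ spinors of the same chirality.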
\subsubsection{Hidden symmetries of probes on common sector IIB branes}
As an example, we shall explicitly give the symmetries of string and particle probes on the IIB fundamental string and NS5-brane backgrounds. For this one has to calculate the form bilinears of these solutions. Starting with the fundamental string and
choosing the worldsheet along the $05$ directions as in the IIA case, the Killing spinors of the background are $\epsilon=h^{-{1\over 4}} \epsilon_0$, where the constant spinor $\epsilon_0$, $\epsilon_0^t=(\epsilon^1_0, \epsilon_0^2)$, and the two components of $\epsilon_0$ satisfy the conditions
\begin{eqnarray}
\Gamma_{05}\epsilon_0^1=\pm \epsilon^1_0~,~~~\Gamma_{05} \epsilon^2_0=\mp \epsilon^2_0~.
\label{iibfcon}
\end{eqnarray}
Both $\epsilon_0^1$ and $\epsilon_0^2$ are Majorana-Weyl $\mathfrak{spin}(9,1)$ spinors.
The metric of the solution is described in (\ref{fstring}) after changing the worldsheet directions from $01$ to $05$ and $h$ is taken to be a general harmonic function on $\bb{R}^8$, given in (\ref{mhf}) for $p=1$.
To solve the above condition, we shall again use spinorial geometry \cite{uggp}. In particular choosing the plus sign in (\ref{iibfcon}) and writing $\epsilon_0=\eta+e_5\wedge \lambda$, where $\eta$ ($\lambda$) is a doublet of chiral (anti-chiral) Majorana-Weyl $\mathfrak{spin}(8)$ spinors, one finds that
\begin{eqnarray}
\epsilon_0^1=\eta^1=\eta~,~~~\epsilon_0^2=e_5\wedge\lambda^2=e_5\wedge \lambda~,
\label{fiibsol}
\end{eqnarray}
i.e. the condition on the Killing spinor implies $\lambda^1=\eta^2=0$. One can use the solution (\ref{fiibsol}) to express the bilinears of the Killing spinors in terms of those of independent $\mathfrak{spin}(8)$ spinors $\eta$ and $\lambda$. The results can be found in appendix \ref{apa}.
In particular one finds that the $\nabla^{(\pm)}$-covariantly constant form bilinears can be expressed as
\begin{eqnarray}
&&
k^{+rs}=2 h^{-{1\over2}}\langle \eta^r, \eta^s\rangle (e^0-e^5)~,~~k^{-rs}=2 h^{-{1\over2}} \langle \lambda^r, \lambda^s\rangle (e^0+e^5)~,
\cr
&&
\pi^{+rs}= h^{-{1\over2}} \langle\eta^r, \Gamma_{ij} \eta^s\rangle (e^0-e^5) \wedge e^i\wedge e^j~,~~~\pi^{-rs}= h^{-{1\over2}} \langle\lambda^r, \Gamma_{ij} \lambda^s\rangle (e^0+e^5) \wedge e^i\wedge e^j~,
\cr
&&
\tau^{+rs}={2\over4!} h^{-{1\over2}} \langle\eta^r, \Gamma_{i_1\dots i_4} \eta^s\rangle (e^0-e^5) \wedge e^{i_1}\wedge \dots \wedge e^{i_4}~,
\cr
&&
\tau^{-rs}={2\over4!} h^{-{1\over2}} \langle\lambda^r, \Gamma_{i_1\dots i_4} \lambda^s\rangle (e^0+e^5) \wedge e^{i_1}\wedge\dots \wedge e^{i_4}~,
\end{eqnarray}
where $(e^0, e^5, e^i)$ is a pseudo-orthonormal frame for the metric (\ref{fstring}) and $\langle\cdot, \cdot\rangle$ is the $\mathfrak{spin}(8)$-invariant inner product. As in the IIA case, both $k^{\pm rs}$ are along the worldvolume directions and Killing, which in turn implies that $k$ and $k^{(3)}$ are Killing as well. The latter is a special property of the IIB fundamental string solution. In addition, as in the IIA case, the $\nabla^{(+)}$- ($\nabla^{(-)}$-) parallel form bilinears are left- (right-) handed from the string worldvolume perspective, as indicated by their dependence on the worldsheet lightcone directions.
It remains to find the form bilinears of the $\mathfrak{spin}(8)$ spinors $\eta$ and $\lambda$. These can be identified from the decomposition
of the product of two chiral $\Delta^+_{(8)}$ and two anti-chiral $\Delta^-_{(8)}$ Majorana-Weyl representations of $\mathfrak{spin}(8)$. It is well known that
\begin{eqnarray}
\Delta^\pm_{(8)}\otimes \Delta^\pm_{(8)}= \Lambda^0(\bb{R}^8)\oplus \Lambda^2(\bb{R}^8)\oplus \Lambda^{4\pm}(\bb{R}^8)~.
\end{eqnarray}
Therefore these bilinears span all constant 0-, 2- and self-dual or anti-self-dual 4-forms on $\bb{R}^8$. As a result, the probe actions (\ref{sact}) and (\ref{pact})
admit $2^7$ independent symmetries generated by these forms. Commutators of symmetries generated by $\nabla^{(\pm)}$-covariantly constant forms have been examined in \cite{phgp} and it was found that they are of W-type.
After some investigation it has been found that the remaining form bilinears do not generate symmetries in particle and string probe actions like
(\ref{sact}), (\ref{pact}) and (\ref{1part}).
Next let us turn to investigate the form bilinears of the IIB NS5-brane. Choosing the worldvolume along the directions $051627$, the Killing spinors $\epsilon$ of the solution are constant, $\epsilon=\epsilon_0$, and satisfy the condition $\Gamma_{3489}\epsilon^1=\pm \epsilon^1$ and $\Gamma_{3489}\epsilon^2=\mp \epsilon^2$, where
both $\epsilon^1$ and $\epsilon^2$ are Majorana-Weyl $\mathfrak{spin}(9,1)$ spinors. Choosing
the first sign, one can solve the above conditions using spinorial geometry \cite{uggp}. As in the IIA case, it is best to first solve the condition for $\epsilon$ complex and then impose the reality condition. The solution is
\begin{eqnarray}
\epsilon^1=\eta^1+ e_{34}\wedge \lambda^1~,~~~\epsilon^2=e_3\wedge \eta^2+ e_4 \wedge \lambda^2~,
\label{iibsolns5}
\end{eqnarray}
where $\eta^1, \lambda^1$ are positive chirality $\mathfrak{spin}(5,1)$ spinors, i.e. $\eta^1, \lambda^1\in \Lambda^{\mathrm{ev}}(\bb{C}\langle e_1, e_2, e_5\rangle)$, and $\eta^2,\lambda^2$ are negative chirality $\mathfrak{spin}(5,1)$ spinors, i.e. $\eta^2, \lambda^2\in \Lambda^{\mathrm{odd}}(\bb{C}\langle e_1, e_2, e_5\rangle)$.
Moreover the reality condition on the $\epsilon^1$ and $\epsilon^2$ spinors implies that
\begin{eqnarray}
\lambda^1=-\Gamma_{67} (\eta^1)^*~,~~~\lambda^2=-\Gamma_{67} (\eta^2)^*~.
\end{eqnarray}
Using (\ref{iibsolns5}), one can easily compute the form bilinears in terms of those of $\eta^1$ and $\eta^2$. These can be found in appendix \ref{apa}.
Using the expressions of the form bilinears in appendix \ref{apa}, one finds that the $\nabla^{(\pm)}$-covariantly constant bilinears are
\begin{eqnarray}
&&
k^{+rs}= 4 \mathrm{Re} \langle \eta^{1r}, \Gamma_a \eta^{1s}\rangle_D\, e^a~,~~~k^{-rs}=
4 \mathrm{Re} \langle \eta^{2r}, \Gamma_a \eta^{2s}\rangle_D\, e^a~,
\end{eqnarray}
\begin{eqnarray}
&&\pi^{+rs}=- 4 \mathrm{Re} \langle \eta^{1r}, \Gamma_a \lambda^{1s}\rangle_D \, e^a \wedge (e^3\wedge e^4-e^8\wedge e^9)
-4 \mathrm{Im}\langle \eta^{1r}, \Gamma_a \eta^{1s}\rangle_D\, e^a\wedge (e^3\wedge e^8+ e^4\wedge e^9)
\cr
&&\qquad\qquad
- 4 \mathrm{Im} \langle \eta^{1r}, \Gamma_a \lambda^{1s}\rangle_D \, e^a \wedge (e^3\wedge e^9-e^4\wedge e^8)
\cr
&&\qquad\qquad
+{2\over3}\mathrm{Re} \langle \eta^{1r}, \Gamma_{abc} \eta^{1s}\rangle_D e^a\wedge e^b \wedge e^c~,
\end{eqnarray}
\begin{eqnarray}
&&\pi^{-rs}=4 \mathrm{Re} \langle \eta^{2r}, \Gamma_a \lambda^{2s}\rangle_D\, e^a \wedge (e^3\wedge e^4+e^8\wedge e^9)
+4 \mathrm{Im}\langle \eta^{2r}, \Gamma_a \eta^{2s}\rangle_D\, e^a\wedge (e^3\wedge e^8- e^4\wedge e^9)
\cr
&&\qquad\qquad
+4 \mathrm{Im} \langle \eta^{2r}, \Gamma_a \lambda^{2s}\rangle_D\, e^a \wedge (e^3\wedge e^9+e^4\wedge e^8)
\cr
&&\qquad\qquad
+{2\over3} \mathrm{Re} \langle \eta^{2r}, \Gamma_{abc} \eta^{2s}\rangle_D e^a\wedge e^b \wedge e^c~,
\end{eqnarray}
\begin{eqnarray}
&&\tau^{+rs}= 4\mathrm{Re}\langle \eta^{1r}, \Gamma_a \eta^{1s}\rangle_D
\,e^a\wedge e^3\wedge e^4\wedge e^8\wedge e^9+{4\over5!} \mathrm{Re} \langle \eta^{1r}, \Gamma_{a_1\dots a_5} \eta^{1s}\rangle_D e^{a_1}\wedge\dots \wedge e^{a_5}
\cr
&&\qquad\qquad
- {2\over3} \mathrm{Re} \langle \eta^{1r}, \Gamma_{abc} \lambda^{1s}\rangle_D \, e^a \wedge e^b \wedge e^c \wedge(e^3\wedge e^4-e^8\wedge e^9)
\cr
&&\qquad\qquad
-{2\over3} \mathrm{Im}\langle \eta^{1r}, \Gamma_{abc} \eta^{1s}\rangle_D\, e^a\wedge e^b \wedge e^c \wedge (e^3\wedge e^8+ e^4\wedge e^9)
\cr
&&\qquad\qquad
- {2\over3} \mathrm{Im} \langle \eta^{1r}, \Gamma_{abc} \lambda^{1s}\rangle_D \, e^a\wedge e^b \wedge e^c \wedge (e^3\wedge e^9-e^4\wedge e^8)~,
\end{eqnarray}
\begin{eqnarray}
&&\tau^{-rs}= -4 \mathrm{Re}\langle \eta^{2r}, \Gamma_a \eta^{2s}\rangle_D
\,e^a\wedge e^3\wedge e^4\wedge e^8\wedge e^9+{4\over5!} \mathrm{Re} \langle \eta^{2r}, \Gamma_{a_1\dots a_5} \eta^{2s}\rangle_D e^{a_1}\wedge\dots \wedge e^{a_5}
\cr
&&\qquad\qquad
+{2\over3} \mathrm{Re} \langle \eta^{2r}, \Gamma_{abc} \lambda^{2s}\rangle_D\, e^a\wedge e^b \wedge e^c \wedge (e^3\wedge e^4+e^8\wedge e^9)
\cr
&&\qquad\qquad
+{2\over3} \mathrm{Im}\langle \eta^{2r}, \Gamma_{abc} \eta^{2s}\rangle_D\, e^a\wedge e^b \wedge e^c \wedge (e^3\wedge e^8- e^4\wedge e^9)
\cr
&&\qquad\qquad
+{2\over3} \mathrm{Im} \langle \eta^{2r}, \Gamma_{abc} \lambda^{2s}\rangle_D\, e^a\wedge e^b \wedge e^c \wedge (e^3\wedge e^9+e^4\wedge e^8)~,
\end{eqnarray}
where $(e^a, e^3, e^4, e^8, e^9)$ is a pseudo-orthonormal frame of the NS5-brane metric (\ref{ns5}) and $\langle\cdot,\cdot\rangle_D$ is the $\mathfrak{spin}(5,1)$ invariant Dirac inner product. Clearly $k^{\pm}$ are Killing which implies that both
$k$ and $k^{(3)}$ are Killing as well. As in all previous common sector branes, the latter generates an additional symmetry for particle and string actions on NS5-brane backgrounds. The 3- and 5-form bilinears above have mixed components along both the worldvolume and transverse directions, and the anti-self-dual and self-dual 2-forms along the transverse directions contribute to $\nabla^{(+)}$- and $\nabla^{(-)}$- covariantly constant
forms, respectively.
We have expressed the $\nabla^{(\pm)}$-covariantly constant bilinears in terms of the bilinears of the chiral and anti-chiral $\mathfrak{spin}(5,1)$ spinors
$\eta^1$ and $\eta^2$, respectively. To determine these, note that
\begin{eqnarray}
\otimes^2 \Delta^\pm_{(6)}= \Lambda^1(\bb{C}^6)\oplus \Lambda^{3\pm} (\bb{C}^6)~.
\end{eqnarray}
Therefore these span all 1-forms and self-dual or anti-self-dual 3-forms along the worldvolume of the NS5-brane. In particular, they generate $2^5$ independent symmetries, counted over the real numbers, for the
spinning particle and string probe actions in (\ref{sact}) and (\ref{pact}). The algebra of these symmetries is of W-type \cite{phgp}. An investigation reveals that the remaining bilinears do not generate symmetries for the (\ref{sact}) and (\ref{pact}) probe actions.
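The counting over the reals parallels that of the IIA NS5-brane:
\begin{eqnarray}
{\rm dim}_{\bb{R}}\,\Lambda^{1}+{\rm dim}_{\bb{R}}\,\Lambda^{3\pm}=6+10=16=2^4
\end{eqnarray}
symmetries for each of the $\nabla^{(+)}$- and $\nabla^{(-)}$-parallel sectors, giving $2\times 16=32=2^5$ in total.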
\section{IIA D-branes}
There is no classification of IIA supersymmetric backgrounds. So to give more examples for which the TCFH can be interpreted as an invariance condition for probe particle and string actions under symmetries generated by the form bilinears, we shall turn to some special solutions and in particular to the D-branes\footnote{A consequence of this investigation is that we shall find all form bilinears of the type II D-brane solutions.}. It is convenient to organise the investigation into electric-magnetic brane pairs, as the non-vanishing fields that appear in the TCFH are the same.
The TCFH for each D-brane pair can be easily found from that of the IIA TCFH given in (\ref{iiatcfha})-(\ref{iiatcfhf}) and (\ref{iiatcfha1})-(\ref{iiatcfha4}) upon setting all the form field strengths to zero apart from those associated to the D-brane under investigation.
\subsection{D0- and D6-branes}
\subsubsection{D0-branes}
The Killing spinors of the D0-brane are given by $\epsilon= h^{-{1\over8}} \epsilon_0$, where $\epsilon_0$ is a constant spinor
restricted as $\Gamma_0\Gamma_{11}\epsilon_0=\pm \epsilon_0$, the worldline is along the 0-th direction and $h$ is a multi-centred harmonic function as in (\ref{mhf}) for $p=0$. Choosing the plus sign and using spinorial geometry \cite{uggp}, one can solve this condition by setting
\begin{eqnarray}
\epsilon_0=\eta-e_5\wedge \Gamma_{11} \eta~,
\label{d0sol}
\end{eqnarray}
where
$\eta\in \Lambda^*(\bb{R}\langle e_1, \dots, e_4\rangle)$ and the reality condition is imposed by $\Gamma_{6789}*\eta=\eta$. Using this, one can compute the form bilinears. These are given in appendix \ref{apb}.
As expected $k$ is a Killing vector. As a result $k$ generates a symmetry in all probe actions (\ref{sact}), (\ref{pact}) and (\ref{1part}) after setting the form couplings to zero. It also generates a symmetry in the probe action of \cite{gpeb} with the 2-form coupling; the D0-brane 2-form field strength $F=F_{0i}\, e^0\wedge e^i$ is invariant under the action of $k$. An investigation of the TCFH for the rest of the form bilinears using that $F_{0i}\not=0$ reveals that these do not generate symmetries for the probe actions we have been considering. Because of this we postpone a more detailed analysis
of the TCFH for later and in particular for the D6- and D2-branes.
\subsubsection{D6-brane}
Choosing the transverse directions of the D6-brane along $549$, the Killing spinor $\epsilon=h^{-{1\over8}} \epsilon_0$ satisfies the condition
\begin{eqnarray}
\Gamma_{549} \Gamma_{11}\epsilon_0=\pm\epsilon_0~,
\label{d6pro}
\end{eqnarray}
where $\epsilon_0$ is a constant spinor and $h$ is a multi-centred harmonic function as in (\ref{mhf}) with $p=6$. To solve this condition with the plus sign using spinorial geometry, set
\begin{eqnarray}
\epsilon_0=\eta+e_4\wedge \lambda~,
\label{d6sol}
\end{eqnarray}
where $\eta, \lambda\in \Lambda^*(\bb{C}\langle e_1, e_2, e_3, e_5\rangle)$. Then the condition (\ref{d6pro}) gives
\begin{eqnarray}
\Gamma_5 \Gamma_{11}\eta=-i\eta~,~~~\Gamma_5 \Gamma_{11}\lambda=i\lambda~.
\label{d6elcon}
\end{eqnarray}
One can proceed to expand $\eta$ and $\lambda$ as $\eta=\eta^1+e_5\wedge \eta^2$ and $\lambda=\lambda^1+e_5\wedge \lambda^2$ in which case the conditions (\ref{d6elcon}) give $\eta^2=i \Gamma_{11} \eta^1$ and $\lambda^2=-i \Gamma_{11}\lambda^1$, where $\eta^1, \lambda^1 \in \Lambda^*(\bb{C}\langle e_1, e_2, e_3\rangle)$
are the independent spinors. However if one proceeds in this way the form bilinears will not be manifestly worldvolume Lorentz covariant, as the $0$-th direction will be separated from the rest. Because of this, we shall not solve (\ref{d6elcon}) and do the computation with $\eta$ and $\lambda$. After the computation of the form bilinears, one can substitute in the formulae the solution of (\ref{d6elcon}) in terms of $\eta^1$ and $\lambda^1$. However this is not necessary for the purpose of this paper.
It remains to impose the reality condition on $\epsilon_0$. This gives $\eta=-i \Gamma_{678}*\lambda$ or equivalently $\lambda=-i \Gamma_{678}*\eta$.
The form bilinears are given in appendix \ref{apb}.
The TCFH for $k$ on a background with a 2-form field strength is
\begin{equation}
\begin{split}
\nabla_M k_N = \frac{1}{8} e^\Phi F_{PQ}\tilde{\zeta}^{PQ}{}_{MN} +\frac{1}{4}e^\Phi F_{MN}\tilde{\sigma}~.
\end{split}
\label{ftcfh1}
\end{equation}
As expected $k$ generates isometries and so symmetries in all the probe actions (\ref{pact}), (\ref{sact}) and (\ref{1part}) with vanishing form couplings. It also generates a symmetry for the probe action of \cite{gpeb} with the 2-form coupling, as the D6-brane 2-form field strength $F=\frac{1}{2} F_{ij}\, e^i\wedge e^j$ is invariant under the action of $k$. In what follows we shall be mostly concerned with the symmetries generated by the form bilinears for the probe action (\ref{1part}). The invariance of this action imposes the weakest conditions on the form bilinears amongst all probe actions that we have been investigating.
Next consider the $\tilde k$ and $\omega$ bilinears on a background with a 2-form field strength. The
TCFH for these is
\begin{equation}
\begin{split}
\nabla_M \tilde{k}_N- \frac{1}{2} e^\Phi F_{MP}\omega^P{}_N = \frac{1}{8}e^\Phi g_{MN} F_{PQ}\omega^{PQ} -\frac{1}{2}e^\Phi F_{[M|P|}\omega^P{}_{N]}~,
\end{split}
\label{ftcfh2}
\end{equation}
\begin{eqnarray}
\nabla_M \omega_{NR} + e^\Phi F_{M[N}\tilde{k}_{R]} & = & \frac{3}{4} e^\Phi F_{[MN}\tilde{k}_{R]} + \frac{1}{2} e^\Phi g_{M[N}F_{R]P}\tilde{k}^P
\cr
&& \qquad\qquad
+\frac{1}{4 \cdot 5!} e^\Phi {}^\star{F}_{MNRP_1\dots P_5}\tau^{P_1\dots P_5}~.
\label{ftcfh3}
\end{eqnarray}
For $\tilde k$ to generate symmetries in probe action (\ref{1part}) with $C=0$, it must be a KY tensor. As for D6-branes $F_{ij}\not=0$, the term proportional to the spacetime metric in the first of the equations above must vanish. This requires that $\omega_{ij}=0$. Then from the expressions of the form bilinears of D6-brane in appendix \ref{apb} and (\ref{d6elcon}), one concludes that $\tilde k=0$. Therefore $\tilde k$ does not generate symmetries for the probe action (\ref{1part}).
Similarly for $\omega$ to generate a symmetry for probe action (\ref{1part}) with $C=0$, one finds from the last TCFH above that $\tilde k=0$. Then from the expressions for the D6-brane form bilinears in appendix \ref{apb}, this implies that $\omega_{ij}=0$ or equivalently
\begin{eqnarray}
{\D{\eta^r}{\Gamma_{11} \lambda^s}}=\mathrm{Im} \D{\eta^r}{\eta^s}=0~.
\end{eqnarray}
Then
\begin{eqnarray}
\omega={1\over2} \omega_{ab}\, e^a\wedge e^b= h^{-\frac{1}{4}} \mathrm{Re}{\D{\eta^r}{\Gamma_{ab}\eta^s}}\, e^a \wedge e^b~,
\end{eqnarray}
is a KY form and generates a (hidden) symmetry for the probe action (\ref{1part}) with $C=0$. Note that there are Killing spinors for which $\omega\not=0$ even though $\omega_{ij}=0$. This can be verified using a decomposition similar to (\ref{repdec}) but now for $\mathfrak{spin}(6,1)$ spinors.
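Since KY forms play a central role in what follows, we recall the definition: a $p$-form $\omega$ is Killing-Yano iff its covariant derivative is totally skew-symmetric,
\begin{eqnarray}
\nabla_{(M}\omega_{N)P_1\dots P_{p-1}}=0~,~~~{\rm equivalently}~~~
\nabla_M\omega_{NP_1\dots P_{p-1}}=\nabla_{[M}\omega_{NP_1\dots P_{p-1}]}~.
\end{eqnarray}
Schematically, and up to normalization conventions, such a form generates the conserved supercharge $Q_\omega\sim \omega_{M_1\dots M_p}\,\dot x^{M_1}\psi^{M_2}\cdots \psi^{M_p}+ \tfrac{i}{p+1}\,(d\omega)_{M_1\dots M_{p+1}}\psi^{M_1}\cdots\psi^{M_{p+1}}$ of the spinning particle action (\ref{1part}) with $C=0$.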
The TCFH for the bilinears $\tilde \omega$ and $\pi$ is
\begin{equation}
\begin{split}
\nabla_M \tilde\omega_{NR} + \frac{1}{2}e^\Phi F_{MP} \pi^P{}_{NR}= -\frac{1}{4}e^\Phi g_{M[N}F_{|PQ|}\pi^{PQ}{}_{R]} + \frac{3}{4} e^\Phi F_{[M|P|} \pi^P{}_{NR]}~,
\end{split}
\label{ftcfh4}
\end{equation}
\begin{equation}
\begin{split}
\nabla_M \pi_{NRS} - &\frac{3}{2} e^\Phi F_{M[N} \tilde\omega_{RS]}= \frac{1}{4 \cdot 4!} e^\Phi {}^\star{F}_{MNRSP_1 \dots P_4} \zeta^{P_1\dots P_4} \\
&- \frac{3}{2} e^\Phi g_{M[N} F_{R|P|} \tilde \omega^P{}_{S]} - \frac{3}{2} e^\Phi F_{[MN} \tilde\omega_{RS]}~.
\end{split}
\label{ftcfh5}
\end{equation}
For $\tilde \omega$ to be a KY form and so generate a symmetry in the probe action (\ref{1part}) with $C=0$, $\pi_{aij}=0$. As it can be seen from the D6-brane bilinears in appendix \ref{apb} after using (\ref{d6elcon}), this implies that $\tilde \omega=0$ and so $\tilde \omega$ does not generate any symmetries. Turning to $\pi$, one finds that this is a KY tensor provided that $\tilde\omega=0$ which implies that $\pi_{aij}=0$ or equivalently
\begin{eqnarray}
{\D{\eta^r}{\Gamma_a \Gamma_{5} \lambda^s}}=\mathrm{Im}{\D{\eta^r}{\Gamma_a \eta^s}}=0~.
\label{bibix}
\end{eqnarray}
The remaining components of $\pi$,
\begin{eqnarray}
\pi={1\over3!} \pi_{abc} e^a\wedge e^b\wedge e^c=\frac{1}{3} h^{-\frac{1}{4}} \mathrm{Re}{\D{\eta^r}{\Gamma_{abc} \eta^s}} e^a \wedge e^b \wedge e^c~,
\label{bibixx}
\end{eqnarray}
generate a (hidden) symmetry for the probe action (\ref{1part}) with $C=0$.
From now on, to simplify the analysis that follows on the symmetries generated by TCFHs for all IIA D-branes, we shall only mention the components of the form bilinears that are required to vanish in order for some others to become KY forms. In particular, we shall not give the explicit
expressions for the vanishing components of the form bilinears and those of the KY forms in terms of the Killing spinors as we have done in e.g. (\ref{bibix}) and (\ref{bibixx}), respectively. These can be easily read from the expressions of the form bilinears of D-branes given in appendix \ref{apb}.
The TCFH for the bilinears $\zeta$ and $\tilde \pi$ is
\begin{equation}
\begin{split}
\nabla_M \tilde\pi_{NRS}-&\frac{1}{2} e^\Phi F_{MP} \zeta^P{}_{NRS} = \frac{3}{8} e^\Phi g_{M[N} F_{|PQ|} \zeta^{PQ}{}_{RS]} \\
&- e^\Phi F_{[M|P|} \zeta^P{}_{NRS]} - \frac{3}{4}e^\Phi g_{M[N}F_{RS]} \sigma~,
\end{split}
\label{ftcfh6}
\end{equation}
\begin{equation}
\begin{split}
\nabla_M \zeta_{N_1 \dots N_4}& +2 e^\Phi F_{M[N_1} \tilde\pi_{N_2 N_3 N_4]}=
- \frac{1}{4!} e^\Phi {}^\star{F}_{MN_1 \dots N_4 PQR} \pi^{PQR}
\\
&+ 3 e^\Phi g_{M[N_1} F_{N_2|P|}\tilde\pi^P{}_{N_3N_4]} + \frac{5}{2} e^\Phi F_{[MN_1}\tilde\pi_{N_2N_3N_4]} ~ .
\end{split}
\label{ftcfh7}
\end{equation}
A similar analysis to the one presented above reveals that $\tilde \pi$ does not generate symmetries in the probe actions we have been considering. On the other hand,
for $\zeta$ to be a KY form, and so generate a (hidden) symmetry for the probe action (\ref{1part}) with $C=0$, one requires that $\tilde \pi=0$. This in turn implies that $\zeta_{abij}=0$ and that $\zeta={1\over 24} \zeta_{a_1\dots a_4} e^{a_1}\wedge\dots\wedge e^{a_4}$ is a KY form.
The TCFH for $\tilde \zeta$ and $\tau$ is
\begin{equation}
\begin{split}
\nabla_M \tilde{\zeta}_{N_1 \dots N_4}+& \frac{1}{2} e^\Phi F_{MP}\tau^P{}_{N_1 \dots N_4} =
-\frac{1}{2} e^\Phi g_{M[N_1}F_{|PQ|}\tau^{PQ}{}_{N_2N_3N_4]} + \frac{5}{8} e^\Phi F_{[M|P|}\tau^P{}_{N_1 \dots N_4]} \\
&+ 3e^\Phi g_{M[N_1}F_{N_2N_3}k_{N_4]}~,
\end{split}
\label{ftcfh8}
\end{equation}
\begin{equation}
\begin{split}
\nabla_M \tau_{N_1 \dots N_5} &-\frac{5}{2}e^\Phi F_{M[N_1}\tilde{\zeta}_{N_2 \dots N_5]}= -\frac{1}{8} e^\Phi {}^\star{F}_{MN_1\dots N_5}{}^{PQ}\omega_{PQ} \\
&-5e^\Phi g_{M[N_1}F_{N_2|P|}\tilde{\zeta}^P{}_{N_3N_4N_5]} - \frac{15}{4} e^\Phi F_{[MN_1}\tilde{\zeta}_{N_2\dots N_5]}~.
\end{split}
\label{ftcfh9}
\end{equation}
For $\tilde \zeta$ to generate a symmetry, the above TCFH requires $k_a=0$ and $\tau_{abcij}=0$. These imply that $\tilde \zeta=0$ and so this bilinear does not generate a symmetry.
It turns out that $\tau$ is a KY form provided that $\tilde\zeta_{abci}=0$. As a result $\tau_{abcij}=0$. The remaining non-vanishing components of
$\tau$, $\tau={1\over 5!} \tau_{a_1\dots a_5} e^{a_1}\wedge\dots\wedge e^{a_5}$ generate a (hidden) symmetry of the probe action (\ref{1part}) with $C=0$.
It is clear from the TCFH in (\ref{ftcfh1})-(\ref{ftcfh3}), (\ref{ftcfh4}), (\ref{ftcfh5}), (\ref{ftcfh6}), (\ref{ftcfh7}), (\ref{ftcfh8}) and (\ref{ftcfh9})
that the holonomy of the minimal connection reduces for backgrounds with only a 2-form field strength. In particular, it is contained in
a subgroup of $O(9,1)\times GL(55, \bb{R})\times GL(165, \bb{R})\times GL(330, \bb{R})\times GL(462, \bb{R})$.
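These dimensions are those of the pairs of form bilinears that the TCFH mixes:
\begin{eqnarray}
&&(\tilde k, \omega):~10+45=55~,\qquad (\tilde\omega, \pi):~45+120=165~,
\cr
&&(\tilde\pi, \zeta):~120+210=330~,\qquad (\tilde\zeta, \tau):~210+252=462~,
\end{eqnarray}
i.e. each $GL$ factor acts on the direct sum of a $(p-1)$- and a $p$-form bilinear. Equivalently, $55$, $165$, $330$ and $462$ are the dimensions of the spaces of 2-, 3-, 4- and 5-forms in eleven dimensions.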
For completeness we state the TCFH on the scalar bilinears
\begin{equation}
\nabla_M \tilde{\sigma} = - \frac{1}{4} e^\Phi F_{MP}k^P~,~~ \nabla_M \sigma = - \frac{1}{8} e^\Phi F_{PQ} \tilde\pi^{PQ}{}_M~.
\end{equation}
These give a trivial contribution to the holonomy of the minimal connection.
To summarise the results of this section, we have concluded as a consequence of the TCFH that there are Killing spinors such that $k$, $\pi$, $\zeta$ and $\tau$, which have non-vanishing components only along the worldvolume directions
of the D6-brane, are KY forms. Therefore they generate symmetries for the probe described by the action (\ref{1part}) with $C=0$ in a D6-brane background. This is the case for any multi-centred harmonic function $h$ that the D6-brane solution depends on.
\subsection{D2 and D4-branes}
\subsubsection{D2 brane}
Choosing the worldvolume directions of the D2-brane along $051$, the Killing spinors $\epsilon=h^{-{1\over8}} \epsilon_0$ of the solution satisfy the condition
\begin{eqnarray}
\Gamma_{051}\epsilon_0=\pm\epsilon_0~,
\label{d2scon}
\end{eqnarray}
where $\epsilon_0$ is a constant spinor and $h$ is given in (\ref{mhf}) for $p=2$. To solve this condition with the plus sign using spinorial geometry, set
\begin{eqnarray}
\epsilon_0=\eta+ e_5\wedge \lambda~,
\label{d2solscon}
\end{eqnarray}
to find that the remaining restrictions on $\eta$ and $\lambda$ are
\begin{eqnarray}
\Gamma_1\eta=\eta~,~~~\Gamma_1\lambda=\lambda~,
\end{eqnarray}
where $\eta, \lambda\in \Lambda^*(\bb{R} \langle e_1, e_2, e_3, e_4\rangle)$; the reality condition is imposed with $\Gamma_{6789}*\eta=\eta$ and $\Gamma_{6789}*\lambda=\lambda$. As in the D6-brane case, the remaining condition on $\eta$ and $\lambda$ can be solved by setting
$\eta=\eta^1+ e_1\wedge \eta^1$ and $\lambda=\lambda^1+e_1\wedge \lambda^1$, where $\eta^1, \lambda^1\in \Lambda^*(\bb{R} \langle e_2, e_3, e_4\rangle)$
label the independent solutions of (\ref{d2scon}).
However, we shall perform the computation of the form bilinears using (\ref{d2solscon}), as otherwise their expressions will not be manifestly covariant along the
transverse directions of the D2-brane, e.g. the 6-th direction would have to be treated separately from the rest. The form bilinears of the D2-brane can be found in appendix \ref{apb}.
D2-branes exhibit a non-vanishing 4-form field strength, $G_{015i}\not=0$. As the probe actions we have been considering do not exhibit such a coupling, the only remaining coupling is that to the spacetime metric. Therefore, for the form bilinears to generate a symmetry, they must be KY forms.
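Recall that a $p$-form $\alpha$ is a KY form whenever its covariant derivative is totally skew-symmetric,
\begin{equation}
\nabla_M \alpha_{N_1\dots N_p}=\nabla_{[M}\alpha_{N_1\dots N_p]}~,
\end{equation}
which for $p=1$ is equivalent to the Killing condition $\nabla_{(M}\alpha_{N)}=0$.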
To investigate which of the form bilinears are KY, we shall organise the TCFH according to the domain that the minimal connection acts on.
As expected the TCFH
\begin{equation}
\begin{split}
\nabla_M k_N = \frac{1}{4\cdot 4!}e^\Phi {}^\star{G}_{MNP_1\dots P_4}\tilde{\zeta}^{P_1 \dots P_4} + \frac{1}{8}e^\Phi G_{MNPQ}\omega^{PQ}~,
\end{split}
\label{iiatcfhbx1}
\end{equation}
implies that $k$ is a Killing 1-form. As a result, it generates symmetries in all the probe actions (\ref{pact}), (\ref{sact}) and (\ref{1part})
after setting $b=C=0$.
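In the simplest instance, for the bosonic part of the probe motion along geodesics, the Killing 1-form $k$ yields the familiar conserved charge $Q(k)=k_M\dot x^M$, since
\begin{equation}
\frac{d}{dt}\big(k_M\,\dot x^M\big)=\nabla_{(M}k_{N)}\,\dot x^M\dot x^N=0~.
\end{equation}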
Next observe that
\begin{equation}
\begin{split}
\nabla_M \tilde{k}_N - \frac{1}{12}e^\Phi G_{MPQR}\tilde{\zeta}^{PQR}{}_N= \frac{1}{4\cdot 4!}e^\Phi g_{MN} G_{P_1\dots P_4}\tilde{\zeta}^{P_1 \dots P_4} - \frac{1}{12}e^\Phi G_{[M|PQR|}\tilde{\zeta}^{PQR}{}_{N]}~,
\end{split}
\label{iiatcfhcx2}
\end{equation}
\begin{eqnarray}
&&\nabla_M \tilde{\zeta}_{N_1 \dots N_4} -\frac{1}{2}e^\Phi {}^\star{G}_{M[N_1N_2|PQR|}\tau^{PQR}{}_{N_3N_4]}+ 2e^\Phi G_{M[N_1N_2N_3}\tilde{k}_{N_4]}
\cr
&&\qquad\qquad= \frac{1}{8}e^\Phi g_{M[N_1}{}^\star{G}_{N_2 N_3|P_1 \dots P_4|}\tau^{P_1 \dots P_4}{}_{N_4]}
\cr
&&\qquad\qquad-\frac{5}{12}e^\Phi {}^\star{G}_{[MN_1N_2|PQR|}\tau^{PQR}{}_{N_3N_4]} -\frac{1}{4}e^\Phi {}^\star{G}_{MN_1\dots N_4P}k^P
\cr
&&\qquad\qquad+ \frac{5}{4}e^\Phi G_{[MN_1N_2N_3}\tilde{k}_{N_4]} + e^\Phi g_{M[N_1}G_{N_2N_3N_4]P}\tilde{k}^P ~,
\label{iiatcfhex3}
\end{eqnarray}
\begin{eqnarray}
&&
\nabla_M \tau_{N_1 \dots N_5}+\frac{5}{2}e^\Phi {}^\star{G}_{M[N_1N_2N_3|PQ|}\tilde{\zeta}^{PQ}{}_{N_4N_5]} +5e^\Phi G_{M[N_1N_2N_3}\omega_{N_4N_5]}
\cr
&&
\qquad\qquad= \frac{15}{8}e^\Phi {}^\star{G}_{[MN_1N_2N_3|PQ|}\tilde{\zeta}^{PQ}{}_{N_4N_5]}
\cr
&&\qquad\qquad
+ \frac{1}{4}e^\Phi {}^\star{G}_{MN_1\dots N_5}\tilde{\sigma} + \frac{5}{6 }e^\Phi g_{M[N_1}{}^\star{G}_{N_2N_3N_4|PQR|}\tilde{\zeta}^{PQR}{}_{N_5]} + \frac{15}{4} e^\Phi G_{[MN_1N_2N_3}\omega_{N_4N_5]}
\cr
&&\qquad\qquad
+ 5 e^\Phi g_{M[N_1}G_{N_2N_3N_4|P|}\omega^P{}_{N_5]} ~,
\label{iiatcfhfx4}
\end{eqnarray}
\begin{eqnarray}
&&\nabla_M \omega_{NR} - \frac{1}{12} e^\Phi G_{MP_1P_2P_3}\tau^{P_1P_2P_3}{}_{NR} = \frac{1}{2\cdot 4!}e^\Phi g_{M[N} G_{|P_1\dots P_4|}\tau^{P_1 \dots P_4}{}_{R]}
\cr
&&
- \frac{1}{8}e^\Phi G_{[M|P_1 P_2 P_3|}\tau^{P_1 P_2 P_3}{}_{NR]} - \frac{1}{4}e^\Phi G_{MNRP} k^P ~,
\label{iiatcfhdx5}
\end{eqnarray}
and so the minimal connection acts on the domain of the $\tilde k$, $\tilde\zeta$, $\tau$ and $\omega$ form bilinears. Using that for D2-branes $G_{015i}\not=0$ and the explicit expressions for the form bilinears in appendix \ref{apb}, one finds that the TCFH implies that the form bilinears $\tilde k$, $\tilde\zeta$ and $\tau$ cannot be KY tensors. So these do not generate symmetries in the probe actions.
On the other hand for $\omega$ to be a KY tensor, the TCFH implies that $\tau_{abcij}=0$. This in turn implies that $\omega_{ij}=0$. As a result $\omega={1\over2} \omega_{ab} e^a\wedge e^b$ is a KY form and generates a (hidden) symmetry in the probe action (\ref{1part}). The condition $\tau_{abcij}=0$ on the Killing spinors and the expression for $\omega_{ab}$ in terms of Killing spinors can be easily read from the expressions of these form bilinears in appendix \ref{apb}. There are Killing spinors such that $\tau_{abcij}=0$ and $\omega\not=0$.
The TCFH on the remaining form bilinears is
\begin{eqnarray}
&&\nabla_M \pi_{NRS} - \frac{3}{4}e^\Phi G_{M[N|PQ|} \zeta^{PQ}{}_{RS]}= - \frac{1}{4} e^\Phi G_{MNRS} \sigma
-\frac{1}{8} e^\Phi {}^\star{G}_{MNRSPQ} \tilde\omega^{PQ}
\cr
&&\qquad\qquad
- \frac{1}{4} e^\Phi g_{M[N} G_{R|P_1P_2P_3|}\zeta^{P_1P_2P_3}{}_{S]} - \frac{3}{4}e^\Phi G_{[MN|PQ|}\zeta^{PQ}{}_{RS]} ~,
\label{iiatcfha2x1}
\end{eqnarray}
\begin{eqnarray}
&&
\nabla_M \zeta_{N_1 \dots N_4} + 3 e^\Phi G_{M[N_1N_2|P|}\pi^P{}_{N_3N_4]}- e^\Phi {}^\star{G}_{M[N_1N_2N_3|PQ|}\tilde\pi^{PQ}{}_{N_4]}
\cr
&& \qquad\qquad = - \frac{1}{6} e^\Phi g_{M[N_1} {}^\star{G}_{N_2N_3N_4]PQR}\tilde\pi^{PQR}
- \frac{5}{8} e^\Phi {}^\star{G}_{[MN_1N_2N_3|PQ|}\tilde\pi^{PQ}{}_{N_4]}
\cr
&&
\qquad\qquad- \frac{3}{2} e^\Phi g_{M[N_1} G_{N_2N_3|PQ|}\pi^{PQ}{}_{N_4]} + \frac{5}{2} e^\Phi G_{[MN_1N_2|P|} \pi^P{}_{N_3N_4]} ~,
\label{iiatcfha4x2}
\end{eqnarray}
\begin{eqnarray}
&&
\nabla_M \tilde\pi_{NRS} +\frac{3}{2} e^\Phi G_{M[NR|P|} \tilde\omega^P{}_{S]}
+ \frac{1}{4}e^\Phi {}^\star{G}_{M[NR|P_1P_2P_3|}\zeta^{P_1P_2P_3}{}_{S]}
\cr
&&\qquad\qquad =
- \frac{3}{8} e^\Phi g_{M[N}G_{RS]PQ} \tilde \omega^{PQ}+ e^\Phi G_{[MNR|P|} \tilde\omega^P{}_{S]} - \frac{1}{32} e^\Phi g_{M[N} {}^\star{G}_{RS]P_1 \dots P_4}\zeta^{P_1\dots P_4}
\cr
&&\qquad\qquad
+ \frac{1}{6} e^\Phi {}^\star{G}_{[MNR|P_1P_2P_3|}\zeta^{P_1P_2P_3}{}_{S]} ~,
\label{iiatcfha3x3}
\end{eqnarray}
\begin{eqnarray}
&&\nabla_M \tilde\omega_{NR}
- \frac{1}{2} e^\Phi G_{M[N|PQ|} \tilde\pi^{PQ}{}_{R]}= \frac{1}{4!} e^\Phi {}^\star{G}_{MNRP_1P_2P_3}\pi^{P_1P_2P_3}
\cr
&&
\qquad\qquad- \frac{1}{12} e^\Phi g_{M[N}G_{R]P_1P_2P_3}\tilde\pi^{P_1P_2P_3}
- \frac{3}{8}e^\Phi G_{[MN|PQ|} \tilde\pi^{PQ}{}_{R]} ~.
\label{iiatcfha1x4}
\end{eqnarray}
Requiring that these form bilinears be KY tensors, the above TCFH together with the explicit expressions for the D2-brane form bilinears in appendix \ref{apb} reveals that
$\zeta= \tilde \pi=\tilde \omega=0$. For $\pi$ to be a KY form, the TCFH implies that $\zeta_{ijab}=0$, which in turn gives $\pi_{ija}=0$. The remaining non-vanishing component of $\pi$, $\pi={1\over3!} \pi_{abc} e^a\wedge e^b\wedge e^c$, is a KY tensor and generates a (hidden) symmetry in the probe action (\ref{1part}) with $C=0$. Again, the expressions of the conditions $\zeta_{ijab}=0$ and of $\pi$ in terms of the Killing spinors can be found in appendix \ref{apb}. There are Killing spinors such that $\zeta_{ijab}=0$ and $\pi\not=0$.
It is clear that the holonomy of the minimal connection of the TCFH with only the 4-form field strength reduces. In particular, the reduced holonomy is
included in $O(9,1)\times GL(517, \bb{R})\times GL(495,\bb{R})$. For completeness we give the TCFH on the scalars as
\begin{eqnarray}
\nabla_M \tilde{\sigma} = - \frac{1}{4 \cdot 5!} e^\Phi\, {}^\star{G}_{MP_1\dots P_5}\tau^{P_1\dots P_5}~,~~\nabla_M \sigma = \frac{1}{4!} e^\Phi G_{MPQR} \pi^{PQR} ~,
\end{eqnarray}
which give a trivial contribution to the holonomy of the minimal connection.
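The counting behind the dimensions quoted above is straightforward: the minimal connection mixes $\tilde k$, $\tilde\zeta$, $\tau$ and $\omega$ on one domain and $\pi$, $\zeta$, $\tilde\pi$ and $\tilde\omega$ on the other, and indeed
\begin{equation}
10+210+252+45=517~,~~~120+210+120+45=495~.
\end{equation}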
To summarise the results of this section, we have shown that there are choices of Killing spinors such that $\omega$ and $\pi$, with non-vanishing components only along the worldvolume directions of the D2-brane, are KY tensors. Therefore these bilinears generate (hidden) symmetries for a probe described by the action (\ref{1part}) with $C=0$ on all D2-brane backgrounds, including those that depend on a multi-centred harmonic function $h$.
\subsubsection{D4 brane}
Choosing the transverse directions of the D4-brane as $23849$, the Killing spinors $\epsilon=h^{-{1\over8}} \epsilon_0$ of the solution satisfy the condition
\begin{eqnarray}
\Gamma_{23849}\epsilon_0=\pm\epsilon_0~,
\label{d4ks}
\end{eqnarray}
where $\epsilon_0$ is a constant spinor and $h$ is a harmonic function as in (\ref{mhf}) for $p=4$. To solve this condition with the plus sign using spinorial geometry write
\begin{eqnarray}
\epsilon_0=\eta^1+e_{34}\wedge \eta^2+ e_3\wedge \lambda^1+ e_4\wedge \lambda^2~,
\label{d4sol}
\end{eqnarray}
where $\eta^1, \eta^2, \lambda^1, \lambda^2 \in \Lambda^*(\bb{R}\langle e_5, e_1, e_2\rangle)$. Substituting this into (\ref{d4ks}), one finds that
\begin{eqnarray}
\Gamma_2\eta^{1}=-\eta^{1}~,~~~\Gamma_2\lambda^{1}=-\lambda^{1}~,
\label{rd4ks}
\end{eqnarray}
and similarly for $\eta^2$ and $\lambda^2$. The reality condition on $\epsilon$ implies that $\eta^1=\Gamma_{67}* \eta^2$ and $\lambda^1=\Gamma_{67}* \lambda^2$.
The remaining conditions (\ref{rd4ks}) can be solved by setting $\eta^1=\rho-e_2\wedge \rho$, where $\rho\in \Lambda^*(\bb{R}\langle e_5, e_1\rangle)$, and similarly for the rest of the spinors. However, as for the D2-brane, we shall not do this, as otherwise the expressions for the form bilinears will not be manifestly covariant in the worldvolume directions because the 6-th
direction will have to be treated separately from the rest. The form bilinears of the D4-brane can be expressed in terms of those of the $\eta$ and $\lambda$ spinors.
Their expressions can be found in appendix \ref{apb}.
As in the D2-brane case, the form bilinears generate symmetries in the probe actions we have been considering provided that they are KY forms. This condition requires that certain terms in the TCFH must vanish. Using that for the D4-brane solution $G_{ijkl}\not=0$ and the explicit expression of the form bilinears in appendix \ref{apb}, one finds after a detailed analysis of the TCFH that only $k$, $\tilde \zeta$, $\tau$, $\tilde \omega$ and $\pi$ can be KY tensors while the rest of the bilinears vanish. In particular, as expected, $k$ is Killing
and so generates a symmetry for the probe actions we have been considering.
For $\tilde \zeta$ to be a KY tensor, the TCFH requires that $\tilde k=0$, $\tau_{ija_1a_2a_3}=0$ and $\tau_{a_1\dots a_5}=0$. These imply that $\tilde \zeta_{ija_1a_2}=0$. The non-vanishing component of $\tilde \zeta$, $\tilde\zeta={1\over 4!} \tilde\zeta_{a_1\dots a_4} e^{a_1}\wedge \dots \wedge e^{a_4}$, generates a (hidden) symmetry for the probe action (\ref{1part}) with $C=0$.
Similarly for $\tau$ to be a KY form, the TCFH requires that $\omega=0$ and $\tilde\zeta_{ijab}=0$. These imply that $\tau={1\over 5!} \tau_{a_1\dots a_5} e^{a_1}\wedge \dots \wedge e^{a_5}$ is a KY form and generates a (hidden) symmetry for the probe action (\ref{1part}) with $C=0$.
For $\tilde \omega$ to be a KY form, the TCFH requires that $\tilde \pi_{ijk}=0$, which in turn implies that $\tilde \omega_{ij}=0$. The remaining component
of $\tilde\omega$, $\tilde \omega={1\over2} \tilde\omega_{ab} e^a\wedge e^b$, is a KY tensor and generates a symmetry for the probe action (\ref{1part}) with $C=0$.
Similarly for $\pi$ to be a KY form, the TCFH requires that $\zeta_{aijk}=0$, which in turn gives $\pi_{aij}=0$. Then
$\pi={1\over3!} \pi_{abc} e^a\wedge e^b\wedge e^c$ is a KY form and generates a (hidden) symmetry for the probe action (\ref{1part}) with $C=0$.
In all the above cases, the explicit expressions for the vanishing conditions on some of the components of the form bilinears, as well as the expressions of KY forms in terms of the Killing spinors, can be easily read from the results of appendix \ref{apb} and so they will not be repeated here.
To summarise the results of this section, there are Killing spinors such that $k$, $\tilde \zeta$, $\tau$, $\tilde \omega$ and $\pi$, with non-vanishing components only along the worldvolume directions of the D4-brane, are KY tensors. Therefore, they generate (hidden) symmetries for the probe described by the action (\ref{1part}) with $C=0$
on any D4-brane background depending on a harmonic function $h$ as in (\ref{mhf}) for $p=4$.
\subsection {D8-brane}
To derive the TCFH on D8-brane backgrounds, set all the IIA form field strengths to zero apart from $S$. Then the IIA TCFH in section \ref{iiatcfhs} reduces to
\begin{equation}
\nabla_M \tilde{\sigma} = \frac{1}{4}e^\Phi S \tilde{k}_M~,~~~
\nabla_M k_N = \frac{1}{4}e^\Phi S \omega_{MN} ~,~~~
\nabla_M \tilde{k}_N = \frac{1}{4}e^\Phi g_{MN}S\tilde{\sigma}~,
\end{equation}
\begin{equation}
\nabla_M \omega_{NR} = \frac{1}{2}e^\Phi S g_{M[N}k_{R]}~,~~~
\nabla_M \tilde{\zeta}_{N_1 \dots N_4} = - \frac{1}{4 \cdot 5!}e^\Phi {}^\star{S}_{MN_1\dots N_4P_1\dots P_5}\tau^{P_1\dots P_5} ~,
\end{equation}
\begin{eqnarray}
&&\nabla_M \tau_{N_1 \dots N_5} = \frac{1}{4\cdot 4!}e^\Phi {}^\star{S}_{MN_1\dots N_5 P_1 \dots P_4}\tilde{\zeta}^{P_1\dots P_4}~,~~
\nabla_M \sigma = 0 ~,~~
\cr
&&
\nabla_M \tilde\omega_{NR} = \frac{1}{4} e^\Phi S \tilde\pi_{MNR}~,
\end{eqnarray}
\begin{eqnarray}
&&\nabla_M \pi_{NRS}= \frac{1}{4} e^\Phi S\zeta_{MNRS}~,~~
\nabla_M \tilde\pi_{NRS}= \frac{3}{4} e^\Phi S g_{M[N} \tilde\omega_{RS]}~,~~
\cr
&&
\nabla_M \zeta_{N_1 \dots N_4} = e^\Phi S g_{M[N_1} \pi_{N_2N_3N_4]}~.
\end{eqnarray}
It is clear from this that $k$, $\tilde\zeta$, $\tau$, $\tilde\omega$ and $\pi$ are KY tensors and generate a (hidden) symmetry of the probe action (\ref{1part}) with $C=0$.
Note that all these form bilinears $k$, $\tilde\zeta$, $\tau$, $\tilde\omega$ and $\pi$ have components only along the worldvolume directions of the D8-brane.
Notice also that the (reduced) holonomy of the minimal TCFH connection is included in $SO(9,1)$.
To find an explicit expression for the form bilinears of the D8-brane solution,
choose the worldvolume directions along $012346789$. The Killing spinors $\epsilon= h^{-{1\over8}} \epsilon_0$ of the solution satisfy the condition
$\Gamma_5\epsilon_0=\pm \epsilon_0$, where $\epsilon_0$ is a constant spinor and $h=1+\sum_\ell q_\ell |y-y_\ell|$. Taking the plus sign, this condition can be solved using spinorial geometry by setting
\begin{eqnarray}
\epsilon_0=\eta+ e_5\wedge \eta~,
\label{d8sol}
\end{eqnarray}
where $\eta\in \Lambda^*(\bb{R}\langle e_1, e_2, e_3, e_4\rangle)$ after imposing the reality condition $\Gamma_{6789} * \eta=\eta$.
Using the solution for $\epsilon_0$ above, one can easily compute the form bilinears of the D8-brane in terms of those of $\eta$. Their expressions can be found in appendix \ref{apb}. Imposing the condition that the remaining form bilinears $\tilde k$, $\zeta$, $\omega$ and $\tilde \pi$ must be KY forms, the TCFH together
with their explicit expressions in appendix \ref{apb} implies that they should vanish. Therefore they do not generate symmetries for the probe actions.
\section{TCFH and probe symmetries on IIB D-branes}
As in the IIA case, there is no classification of IIB supersymmetric backgrounds. So we shall turn to IIB D-branes to give more examples of backgrounds for which the TCFH can be interpreted as the condition for invariance of particle and string probe actions under symmetries generated by the form bilinears. The computation will again be organised in D-brane electric-magnetic pairs.
The TCFH for each pair can be easily found from that of the IIB TCFH given in (\ref{iibtcfh1})-(\ref{iibtcfh6}) upon setting all the form field strengths to zero apart from those associated to the D-brane under investigation.
\subsection{D1- and D5-branes}
\subsubsection{The TCFH of D1- and D5-branes}
To illustrate the construction of symmetries for probes propagating on D1- and D5-brane backgrounds using the IIB TCFH, we shall present the D1- and D5-brane TCFH. This is easily derived from (\ref{iibtcfh1})-(\ref{iibtcfh6}) upon setting $G^{(1)}=G^{(5)}=0$. After a re-arrangement of terms so that $e^{\Phi}\,G^{(3)}$ can be interpreted as torsion of a TCFH connection, one finds
\begin{align}
&
\nabla_M\, k^{rs}_P - \frac{1}{2}\,e^{\Phi}\,G^{(3)}_{MP}{}^N\, k^{(1)rs}_{N} = \frac{1}{12}\,e^{\Phi}\,G^{(3)N_1N_2N_3}\,\tau^{(1)rs}_{N_1N_2N_3MP} \nonumber \\
&
\nabla_M\, k^{(i)rs}_P -\frac{1}{2}\,\delta_{i1}\,e^{\Phi}\,G^{(3)}_{MP}{}^N\,k^{rs}_N +\frac{i}{2}\,\varepsilon_{1ij}\,e^{\Phi}\,G^{(3)N_1N_2}_M\,\pi^{(j)rs}_{PN_1N_2}
=
\frac{1}{12}\,\delta_{i1}\,e^{\Phi}\,G^{(3)N_1N_2N_3}\,\tau^{rs}_{MPN_1N_2N_3} \nonumber \\
&
\qquad+\frac{i}{12}\,\varepsilon_{1ij}\,e^{\Phi}\,g_{MP}\,G^{(3)N_1N_2N_3}\,\pi^{(j)rs}_{N_1N_2N_3}
+ \frac{i}{2}\,\varepsilon_{1ij}\,e^{\Phi}\,G^{(3)N_1N_2}{}_{[M}\,\pi^{(j)rs}_{P]N_1N_2}
\nonumber \\
&
\nabla_M\,\pi^{rs}_{P_1P_2P_3} - 3\,e^{\Phi}\,G^{(3)}_{M[P_1}{}^N\,\pi^{(1)rs}_{P_2P_3]N}
= - \frac{1}{12}\,e^{\Phi}\,{}^{\star}G^{(7)}_{MP_1P_2P_3}{}^{N_1N_2N_3}\,\pi^{(1)rs}_{N_1N_2N_3}
\nonumber \\
&\qquad+\frac{3}{2}\,e^{\Phi}\,g_{M[P_1}
\,G^{(3)}_{P_2}{}^{N_1N_2}\,\pi^{(1)rs}_{P_3]N_1N_2} + 3\,e^{\Phi}\,G^{(3)}_{[P_1P_2}{}^N\,\pi^{(1)rs}_{P_3M]N}
\nonumber \\
&
\nabla_M\,\pi^{(i)rs}_{P_1P_2P_3}- 3\,\delta_{i1}\,e^{\Phi}\,G^{(3)}_{M[P_1}{}^N\,\pi^{rs}_{P_2P_3]N} +\frac{i}{2}\,\varepsilon_{1ij}\,e^{\Phi}\,G^{(3)}_{M}{}^{N_1N_2}\,\tau^{(j)rs}_{P_1P_2P_3N_1N_2}
\nonumber \\
&\qquad- 3i\,\varepsilon_{1ij}\,e^{\Phi}\,G^{(3)}_{M[P_1P_2}\,k^{(j)rs}_{P_3]} =- \frac{1}{12}\,\delta_{i1}\,e^{\Phi}\,{}^{\star}G^{(7)}_{MP_1P_2P_3}{}^{N_1N_2N_3}\,\pi^{rs}_{N_1N_2N_3} \nonumber \\
&
\qquad+
\frac{3}{2}\,\delta_{i1}\,e^{\Phi}\,g_{M[P_1}\,G^{(3)}_{P_2}{}^{N_1N_2}\,\pi^{rs}_{P_3]N_1N_2} + 3\,\delta_{i1}\,e^{\Phi}\,G^{(3)}_{[P_1P_2}{}^N\,\pi^{rs}_{P_3M]N} \nonumber \\
&
\qquad +\frac{i}{4}\,\varepsilon_{1ij}\,e^{\Phi}\,G^{(3)N_1N_2N_3}\,g_{M[P_1}\,\tau^{(j)rs}_{P_2P_3]N_1N_2N_3} - i\,\varepsilon_{1ij}\,e^{\Phi}\,G^{(3)}_{[P_1}{}^{N_1N_2}\,\tau^{(j)rs}_{P_2P_3M]N_1N_2} \nonumber \\
&
\qquad
-\frac{3i}{2}\,\varepsilon_{1ij}\,e^{\Phi}\,g_{M[P_1}\,G^{(3)}_{P_2P_3]}{}^N\,k^{(j)rs}_N + 2i\,\varepsilon_{1ij}\,e^{\Phi}\,G^{(3)}_{[P_1P_2P_3}\,k^{(j)rs}_{M]} ~, \nonumber \\
& \nabla_M\,\tau^{rs}_{P_1 \dots P_5} - 5\,e^{\Phi}\,G^{(3)}_{M[P_1}{}^N\,\tau^{(1)rs}_{P_2\dots P_5]N}
= \frac{1}{2}\,e^{\Phi}\,{}^{\star}G^{(7)}_{MP_1\dots P_5}{}^N\,k^{(1)rs}_N
\nonumber \\
&\quad
+\frac{15}{2}\,e^{\Phi}\,G^{(3)}_{[P_1P_2}{}^N\,\tau^{(1)rs}_{P_3P_4P_5M]N} + 5\,e^{\Phi}\,g_{M[P_1}\,G^{(3)}_{P_2}{}^{N_1N_2}\,\tau^{(1)rs}_{P_3P_4P_5]N_1N_2} -10\,e^{\Phi}\,g_{M[P_1}\,G^{(3)}_{P_2P_3P_4}\,k^{(1)rs}_{P_5]}
\nonumber \\
&\nabla_M\,\tau^{(i)rs}_{P_1\dots P_5} - 5\,\delta_{i1}\,e^{\Phi}\,G^{(3)}_{M[P_1}{}^N\,\tau^{rs}_{P_2 \dots P_5]N} +\frac{5i}{2}\,\varepsilon_{1ij}\,e^{\Phi}\,{}^{\star}G^{(7)}_{M[P_1 \dots P_4}{}^{N_1N_2}\,\pi^{(j)rs}_{P_5]N_1N_2}
\nonumber \\
&\qquad-10i\,\varepsilon_{1ij}\,e^{\Phi}\,G^{(3)}_{M[P_1P_2}\,\pi^{(j)rs}_{P_3P_4P_5]} = \frac{1}{2} \delta_{i1}\,e^{\Phi}\,{}^{\star}G^{(7)}_{MP_1\dots P_5}{}^N\,k^{rs}_N +5\,\delta_{i1}\,e^{\Phi}\,g_{M[P_1}\,G^{(3)}_{P_2}{}^{N_1N_2}\,\tau^{rs}_{P_3P_4P_5]N_1N_2} \nonumber \\
&\qquad+ \frac{15}{2}\delta_{i1}\,e^{\Phi}\,G^{(3)}_{[P_1P_2}{}^N\,\tau^{rs}_{P_3P_4P_5M]N} -10\delta_{i1}\,e^{\Phi}\,g_{M[P_1}\,G^{(3)}_{P_2P_3P_4}\,k^{rs}_{P_5]} \nonumber \\
&\qquad + 10i\,\varepsilon_{1ij}\,e^{\Phi}\,G^{(3)}_{[P_1P_2P_3}\,\pi^{(j)rs}_{P_4P_5M]} -15i\,\varepsilon_{1ij}\,e^{\Phi}\,g_{M[P_1}\,G^{(3)}_{P_2P_3}{}^N\,\pi^{(j)rs}_{P_4P_5]N} \nonumber \\
&\qquad + \frac{5i}{12}\,\varepsilon_{1ij}\,e^{\Phi}\,g_{M[P_1}\,{}^{\star}G^{(7)}_{P_2\dots P_5]}{}^{N_1N_2N_3}\,\pi^{(j)rs}_{N_1N_2N_3} -\frac{3i}{2}\,\varepsilon_{1ij}\,e^{\Phi}\,{}^{\star}G^{(7)}_{[P_1\dots P_5}{}^{N_1N_2}\,\pi^{(j)rs}_{M]N_1N_2}~.
\end{align}
Clearly the (reduced) holonomy of the minimal TCFH connection for generic backgrounds with only $G^{(3)}$ non-vanishing is included
in $\times^6 SO(9,1)\times ^2 GL(256)$.
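The factor $GL(256)$ is consistent with counting the components of the bilinears that the minimal connection mixes, namely a 1-form, a 3-form and a (self-dual) 5-form:
\begin{equation}
\binom{10}{1}+\binom{10}{3}+\frac{1}{2}\binom{10}{5}=10+120+126=256~.
\end{equation}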
The difficulties that one encounters when interpreting the TCFH above as invariance conditions for a particle probe
described by an action\footnote{A probe with action (\ref{1part}) is chosen because it gives the weakest invariance conditions on the couplings and on the forms that generate the symmetries. }, like (\ref{1part}), for symmetries
generated by the form bilinears are twofold. One is that the TCFH connection contains terms that involve double and higher contractions of indices between the
$G^{(3)}$ field strength and the form bilinears. The other is that the right-hand side of the TCFH involves terms that contain the spacetime metric. Terms such as these do not occur as invariance conditions for actions like (\ref{1part}) under symmetries generated by spacetime forms, see (\ref{addcon}). The only option is to set both such terms to zero. As $G^{(3)}$ is given for each solution, this puts restrictions on the form bilinears and, in turn, on the choice of Killing spinors used to construct these bilinears.
\subsubsection{D1-brane}
To find the form bilinears of the D1-brane, choose the worldsheet along the directions $05$. The Killing spinors of the solution are $\epsilon=h^{-{1\over8}} \epsilon_0$, where the constant spinor $\epsilon_0=(\epsilon_0^1, \epsilon^2_0)^t$ is a doublet of Majorana-Weyl $\mathfrak{spin}(9,1)$ spinors satisfying the additional condition
\begin{eqnarray}
\Gamma_{05}\sigma_1 \epsilon_0 =\pm \epsilon_0~,
\end{eqnarray}
and $h$ is a harmonic function on $\bb{R}^8$ as in (\ref{mhf}) for $p=1$. The metric of the D1-brane is given in (\ref{dbrane}) for $p=1$. Choosing the plus sign in the condition above,
the components of the doublet $\epsilon_0$ are restricted as $\Gamma_{05} \epsilon^1_0=\epsilon_0^2$ and $\Gamma_{05} \epsilon^2_0=\epsilon_0^1$. As in previous cases, these conditions are solved using spinorial geometry \cite{uggp}. After a short computation, one finds that
\begin{eqnarray}
\epsilon^1_0=\eta+ e_5\wedge \lambda~,~~~\epsilon^2_0=\eta-e_5\wedge \lambda~,
\label{d1ks}
\end{eqnarray}
where $\eta\in \Delta^+_{(8)}=\Lambda^{\mathrm{ev}}(\bb{R}\langle e_1, e_2, e_3, e_4\rangle)$ and $\lambda\in \Delta^+_{(8)}=\Lambda^{\mathrm{odd}}(\bb{R}\langle e_1, e_2, e_3, e_4\rangle)$ are chiral and anti-chiral Majorana-Weyl $\mathfrak{spin}(8)$ spinors, respectively. The form bilinears of $\epsilon$ can be computed in terms of those of $\eta$ and $\lambda$. The result can be found in appendix \ref{apb}.
For the D-string, the non-vanishing components of $G^{(3)}$ are proportional to $G^{(3)}_{05i}$. Using these, and the expression for the form bilinears in appendix \ref{apb}, one concludes from the TCFH that
\begin{eqnarray}
\nabla_M\, k^{rs}_P - \frac{1}{2}\,e^{\Phi}\,G^{(3)}_{MP}{}^N\, k^{(1)rs}_{N} =0~,~~~\nabla_M\, k^{(1)rs}_P -\frac{1}{2}\,e^{\Phi}\,G^{(3)}_{MP}{}^N\,k^{rs}_N=0~.
\end{eqnarray}
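Adding and subtracting these two equations shows that (suppressing the $r,s$ labels) the combinations $k\pm k^{(1)}$ satisfy
\begin{equation}
\nabla_M \big(k\pm k^{(1)}\big)_P=\pm\frac{1}{2}\,e^{\Phi}\,G^{(3)}_{MP}{}^{N}\,\big(k\pm k^{(1)}\big)_N~.
\end{equation}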
Therefore both $\tilde k^{\pm}=k\pm k^{(1)}$ are covariantly constant with respect to the connection $\nabla^{(\pm)}$, as in (\ref{nablapm}), but now with torsion $e^{\Phi}\,G^{(3)}$. As $d(e^{\Phi}\,G^{(3)})=0$, $\tilde k^{\pm}$ generate symmetries for the probe actions (\ref{sact}) and (\ref{pact}), where the coupling $b$ is given by $e^{\Phi}\,G^{(3)}=db$. Furthermore $\tilde k^{+}$ ($\tilde k^{-}$) generates a symmetry for the probe action (\ref{1part}), where the coupling $C$ is $C=e^{\Phi}\,G^{(3)}$ ($C=-e^{\Phi}\,G^{(3)}$). Note that both $\tilde k^{\pm}$ have components along the worldsheet directions of the D-string.
It can be shown that the remaining form bilinears do not generate symmetries for the probe actions
(\ref{sact}), (\ref{pact}) and (\ref{1part}). The details of this analysis are similar to those explained for the IIA D-branes and will not be presented here.
\subsubsection{D5-brane}
Choosing the transverse directions of the D5-brane as $3489$, the condition on the Killing spinors $\epsilon=h^{-{1\over8}} \epsilon_0$ for the D5-brane is
\begin{eqnarray}
\Gamma_{3489} \sigma_1\epsilon_0=\pm\epsilon_0~,
\end{eqnarray}
where $\epsilon_0=(\epsilon_0^1, \epsilon^2_0)^t$ is a doublet of constant Majorana-Weyl $\mathfrak{spin}(9,1)$ spinors and $h$ is a harmonic function as in (\ref{mhf}) for $p=5$. This condition with the plus sign can be solved using spinorial geometry to yield
\begin{eqnarray}
\epsilon^1_0=\eta^1+e_{34}\wedge \lambda^1+e_3\wedge \eta^2+e_4\wedge \lambda^2~,~~~\epsilon^2_0=\eta^1+e_{34}\wedge \lambda^1-e_3\wedge \eta^2-e_4\wedge \lambda^2~,
\label{d5ks}
\end{eqnarray}
where $\eta^1, \lambda^1$ ($\eta^2, \lambda^2)$ are positive (negative) chirality spinors of $\mathfrak{spin}(5,1)$. The reality condition on $\epsilon_0$ implies
that $\lambda^1=-\Gamma_{67}*\eta^1$ and $\lambda^2=-\Gamma_{67}*\eta^2$. Using this, one can calculate the form bilinears of the D5-brane solution. These have been presented in appendix \ref{apb}.
As for D1-branes, let us define $\tilde k^{\pm }=k\pm k^{(1)}$. The TCFH together with the expression of the form bilinears for this background in appendix \ref{apb} give
\begin{eqnarray}
\nabla^{(\pm)}_M \tilde k_N^{\pm}=\nabla^{(\pm)}_{[M} \tilde k_{N]}^{\pm}~.
\end{eqnarray}
Therefore $\tilde k^{\pm}$ satisfy the KY equation with respect to the connection $\nabla^{(\pm)}$ as in (\ref{nablapm}) with torsion $\pm e^\Phi G^{(3)}$.
A consequence of this is that $\tilde k^{\pm}$ generate symmetries in the particle probe action (\ref{1part}) with 3-form coupling $\pm e^\Phi G^{(3)}$. Note that the second condition in (\ref{addcon}) required for this is also satisfied as $i_{\tilde k^{\pm}}d(e^\Phi G^{(3)})=0$ and $i_{\tilde k^{\pm}} G^{(3)}=0$ because $\tilde k^{\pm}$ have components only along the worldvolume directions of the D5-brane. A similar investigation reveals that $k^{(2)}$ and $k^{(3)}$ do not generate symmetries for the probe actions we are considering.
Next define $\tilde \pi^{\pm}=\pi\pm \pi^{(1)}$. The TCFH can be re-organised as a KY equation with respect to a connection with skew-symmetric torsion provided
that the term proportional to the spacetime metric $g$ vanishes. For this the $\tilde \pi^{\pm}_{Mij}$ components of the 3-form bilinears should vanish. In particular, $\tilde \pi^{+}$ is a KY form with respect to $\nabla^{(+)}$ connection provided that
\begin{eqnarray}
\langle \eta^{1r}, \Gamma_a \lambda^{1s}\rangle_D=\mathrm{Im} \langle \eta^{1r}, \Gamma_a \eta^{1s}\rangle_D=0~,
\end{eqnarray}
and similarly $\tilde \pi^{-}$ is a KY form with respect to $\nabla^{(-)}$ connection provided that
\begin{eqnarray}
\langle \eta^{2r}, \Gamma_a \lambda^{2s}\rangle_D=\mathrm{Im} \langle \eta^{2r}, \Gamma_a \eta^{2s}\rangle_D=0~.
\end{eqnarray}
The remaining non-vanishing components of $\tilde \pi^{\pm}$ are
\begin{align}
&\tilde\pi^{+rs} = \frac{4}{3}\,h^{-1/4}\operatorname{Re} \left\langle \eta^{1r}, \Gamma_{abc} \eta^{1s} \right\rangle_D e^a \wedge e^b \wedge e^c~, \nonumber \\
&\tilde\pi^{-rs} = \frac{4}{3}\,h^{-1/4}\operatorname{Re} \left\langle \eta^{2r}, \Gamma_{abc} \eta^{2s} \right\rangle_D e^a \wedge e^b \wedge e^c~.
\end{align}
These generate (hidden) symmetries of the probe action (\ref{1part}) with 3-form coupling $\pm 2 e^\Phi G^{(3)}$, respectively. Note that the second condition in (\ref{addcon}) required for the invariance of the action (\ref{1part}) generated by $\tilde\pi^{\pm }$ is also satisfied, as $i_{\tilde \pi^{\pm}} d(e^\Phi G^{(3)})=0$ and $i_{\tilde \pi^{\pm}} G^{(3)}=0$. A similar investigation reveals that $\pi^{(2)}$ and $\pi^{(3)}$ do not generate symmetries for the probe actions we have been considering.
The same applies to all four 5-form bilinears.
To summarise the results of this section, we have demonstrated that there are Killing spinors such that the form bilinears $\tilde k^{\pm}$ and $\tilde \pi^{\pm}$, with non-vanishing components only along the worldvolume directions of the D5-brane, are KY forms with respect to connections with skew-symmetric torsion proportional to $\pm e^\Phi G^{(3)}$ and $\pm 2 e^\Phi G^{(3)}$, respectively. It turns out that these forms $\tilde k^{\pm}$ ($\tilde \pi^{\pm}$) generate (hidden) symmetries for the probes described by action (\ref{1part}) with form coupling $C$
equal to $\pm e^\Phi G^{(3)}$ ( $\pm 2 e^\Phi G^{(3)}$).
\subsection{D3-brane}
Choosing the worldvolume directions of the D3-brane as 0549, the Killing spinors, $\epsilon=h^{-{1\over8}} \epsilon_0$, of this solution satisfy the condition
\begin{eqnarray}
\Gamma_{0549}\epsilon_0^1=\pm \epsilon^2_0~,
\end{eqnarray}
where $\epsilon_0=(\epsilon_0^1, \epsilon^2_0)^t$ is a doublet of constant Majorana-Weyl spinors of $\mathfrak{spin}(9,1)$ and $h$ a harmonic function as in (\ref{mhf}) with $p=3$. This condition with the plus sign can be solved using spinorial geometry as
\begin{eqnarray}
\epsilon^1_0= \eta^1+e_{45}\wedge \lambda^1+e_4\wedge \eta^2+e_5\wedge \lambda^2~,~~\epsilon_0^2=i \eta^1+i e_{45}\wedge \lambda^1-i e_4\wedge \eta^2-i e_5\wedge \lambda^2~,
\label{d3ks}
\end{eqnarray}
where $\eta^1, \lambda^1 \in \Lambda^{\mathrm{ev}}(\bb{C}\langle e_1, e_2, e_3\rangle)$ ($\eta^2, \lambda^2 \in \Lambda^{\mathrm{odd}}(\bb{C}\langle e_1, e_2, e_3\rangle)$) are positive (negative) chirality Weyl spinors of $\mathfrak{spin}(6)$. Furthermore the reality condition on $\epsilon$ implies that
\begin{eqnarray}
\eta^2=-i \Gamma_{678}*\eta^1~,~~~\lambda^2=i \Gamma_{678} * \lambda^1~.
\end{eqnarray}
Using these, one can easily express the form bilinears of the D3-brane solution in terms of those of the $\eta$ and $\lambda$ $\mathfrak{spin}(6)$ spinors. The form bilinears can be found in appendix \ref{apb}.
As the probe actions (\ref{sact}), (\ref{pact}) and (\ref{1part}) do not exhibit a 5-form coupling, the only coupling one should consider is that of the spacetime metric. For the form bilinears to generate a symmetry for the probe described by the action (\ref{1part}), they must be KY tensors. To see whether this is the case, let us begin with the 1-form bilinears $k$ and $k^{(2)}$. The TCFH\footnote{We have replaced $k^{(2)}, \pi^{(2)}$ and $\tau^{(2)}$ with $ik^{(2)}, i\pi^{(2)}$ and $i\tau^{(2)}$ so that the TCFH for the D3-brane, and later for the D7-brane, is manifestly real.}
gives
\begin{eqnarray}
&& \nabla_M\, k^{rs}_P = \frac{1}{12}\,e^{\Phi}\,G^{(5)}_{MP}{}^{N_1N_2N_3}\,\pi^{(2)rs}_{N_1N_2N_3}~,
\cr
&&
\nabla_M\, k^{(2)rs}_P
= -\frac{1}{12}\,e^{\Phi}\,G^{(5)}_{MP}{}^{N_1N_2N_3}\,\pi^{rs}_{N_1N_2N_3}~.
\end{eqnarray}
Clearly both are KY tensors and so generate symmetries for the probe action (\ref{1part}) with $C=0$. Using that the components $G^{(5)}_{a_1\dots a_4 i}$ and $G^{(5)}_{i_1\dots i_5}$ of the 5-form field strength of the D3-brane solution do not vanish, together with the expressions for the bilinears in appendix \ref{apb}, one can show that the remaining
two 1-form bilinears do not generate a symmetry for the probe actions we have been considering.
Next let us turn to the 3-form bilinears $\pi$ and $\pi^{(2)}$. The TCFH on a D3-background reads
\begin{eqnarray}
\nabla_M\,\pi^{rs}_{P_1P_2P_3}
-\frac{1}{4}\,e^{\Phi}\,G^{(5)}_{M[P_1}{}^{N_1N_2N_3}\,\tau^{(2)rs}_{P_2P_3]N_1N_2N_3}
= -\frac{1}{2}\,e^{\Phi}\,G^{(5)}_{MP_1P_2P_3}{}^N\,k^{(2)rs}_N~,
\end{eqnarray}
\begin{eqnarray}
\nabla_M\,\pi^{(2)rs}_{P_1P_2P_3} +\frac{1}{4}\,e^{\Phi}\,G^{(5)}_{M[P_1}{}^{N_1N_2N_3}\,\tau^{rs}_{P_2P_3]N_1N_2N_3}
=
\frac{1}{2}\,e^{\Phi}\,G^{(5)}_{MP_1P_2P_3}{}^N\,k^{rs}_N~.
\end{eqnarray}
For either $\pi$ or $\pi^{(2)}$ to be a KY form, the connection term involving $G^{(5)}$ in the TCFH must vanish. For $\pi$, this requires that
$\tau^{(2)}_{ijabc}=0$, which in turn implies that
\begin{align}
&\operatorname{Re}\left\langle \eta^{1r}, \Gamma_{ij}\eta^{1s} \right\rangle = \operatorname{Re}\left\langle \lambda^{1r}, \Gamma_{ij} \lambda^{1s} \right\rangle =0~, \nonumber \\
&\operatorname{Re}\left\langle \eta^{1r}, \Gamma_{ij}\lambda^{1s} \right\rangle + \operatorname{Re}\left\langle \lambda^{1r}, \Gamma_{ij} \eta^{1s} \right\rangle = 0~, \nonumber \\
&\operatorname{Im}\left\langle \eta^{1r}, \Gamma_{ij}\lambda^{1s} \right\rangle - \operatorname{Im}\left\langle \lambda^{1r}, \Gamma_{ij}\eta^{1s} \right\rangle = 0~.
\label{d3picon}
\end{align}
There are solutions to these conditions; for example, take $\eta^{1r}=\lambda^{1r}$ and $\eta^{1s}=\lambda^{1s}$. The tensor product of two Weyl representations
of $\mathfrak{spin}(6)$ decomposes as $\Lambda^0(\bb{C}^6)\oplus \Lambda^2(\bb{C}^6)$. Thus one can choose $\eta^{1r}$ and $\eta^{1s}$ such that the component of their tensor product in $\Lambda^2(\bb{C}^6)$ vanishes. Imposing the above conditions, the non-vanishing components of $\pi$ are
\begin{align}
\pi^{rs} = &-4 h^{-\frac{1}{4}}\operatorname{Im}\left\langle \eta^{1r}, \eta^{1s} \right\rangle(e^0-e^5)\wedge e^4 \wedge e^9 \nonumber \\
&+4 h^{-\frac{1}{4}} \operatorname{Im}\left\langle \lambda^{1r}, \lambda^{1s} \right\rangle(e^0+e^5)\wedge e^4 \wedge e^9 \nonumber \\
&+4 h^{-\frac{1}{4}} \left( \operatorname{Re}\left\langle \eta^{1r}, \lambda^{1s} \right\rangle - \operatorname{Re}\left\langle \lambda^{1r}, \eta^{1s} \right\rangle \right)e^0\wedge e^5 \wedge e^4 \nonumber \\
&+4 h^{-\frac{1}{4}} \left( \operatorname{Im}\left\langle \eta^{1r}, \lambda^{1s} \right\rangle + \operatorname{Im}\left\langle \lambda^{1r}, \eta^{1s} \right\rangle \right)e^0\wedge e^5 \wedge e^9~.
\end{align}
Similarly, for $\pi^{(2)}$ to be a KY form, $\tau_{ijabc}=0$ is required. The conditions on the spinors are given as in (\ref{d3picon}) after replacing $\operatorname{Re}$ with $\operatorname{Im}$ and vice versa. After imposing these conditions, the non-vanishing components of $\pi^{(2)}$ are
\begin{align}
\pi^{(2)rs} = &-4h^{-\frac{1}{4}} \operatorname{Re}\left\langle \eta^{1r}, \eta^{1s} \right\rangle(e^0-e^5)\wedge e^4 \wedge e^9 \nonumber \\
&+4 h^{-\frac{1}{4}} \operatorname{Re}\left\langle \lambda^{1r}, \lambda^{1s} \right\rangle(e^0+e^5)\wedge e^4 \wedge e^9 \nonumber \\
&-4 h^{-\frac{1}{4}} \left( \operatorname{Im}\left\langle \eta^{1r}, \lambda^{1s} \right\rangle - \operatorname{Im}\left\langle \lambda^{1r}, \eta^{1s} \right\rangle \right)e^0\wedge e^5 \wedge e^4 \nonumber \\
&+4 h^{-\frac{1}{4}} \left( \operatorname{Re}\left\langle \eta^{1r}, \lambda^{1s} \right\rangle + \operatorname{Re}\left\langle \lambda^{1r}, \eta^{1s} \right\rangle \right)e^0\wedge e^5 \wedge e^9~.
\end{align}
Next let us focus on the two remaining 3-form bilinears $\pi^{(1)}$ and $\pi^{(3)}$. It turns out that they do not generate symmetries for the probe action (\ref{1part}) that we are considering. In particular for $\pi^{(1)}$ to be a KY form, the TCFH requires that $\pi^{(3)}=0$. This in turn implies that $\pi^{(1)}=0$. To establish the latter
the Hodge duality properties of the transverse components of $\pi^{(3)}$ have to be used.
To find the conditions for $\tau$ and $\tau^{(2)}$ to be KY forms, note that the TCFH for these bilinears on a D3-brane background is
\begin{eqnarray}
\nabla_M\,\tau^{rs}_{P_1 \dots P_5}
+10\,e^{\Phi}\,G^{(5)}_{M[P_1P_2P_3}{}^N\,\pi^{(2)rs}_{P_4P_5]N}
&=&
-5\,e^{\Phi}\,g_{M[P_1}\,G^{(5)}_{P_2P_3P_4}{}^{N_1N_2}\,\pi^{(2)rs}_{P_5]N_1N_2}
\cr
&&-\frac{15}{2}\,e^{\Phi}\,G^{(5)}_{[P_1\dots P_4}{}^N\,\pi^{(2)rs}_{P_5M]N} ~,
\end{eqnarray}
\begin{eqnarray}
\nabla_M\,\tau^{(2)rs}_{P_1\dots P_5}
-10\,e^{\Phi}\,G^{(5)}_{M[P_1P_2P_3}{}^N\,\pi^{rs}_{P_4P_5]N}
&=& 5\,e^{\Phi}\,g_{M[P_1}G^{(5)}_{P_2P_3P_4}{}^{N_1N_2}\,\pi^{rs}_{P_5]N_1N_2}
\cr
&&+\frac{15}{2}\,e^{\Phi}\,G^{(5)}_{[P_1\dots P_4}{}^N\,\pi^{rs}_{P_5M]N}~.
\end{eqnarray}
It turns out that for $\tau$ to be a KY tensor, $\pi^{(2)}=0$. Using the chirality of $\eta^1$ and $\eta^2$ as $\mathfrak{spin}(6)$ spinors, one concludes that
$\tau=0$, and so there are no symmetries generated by this 5-form bilinear. Similarly, $\tau^{(2)}$ does not generate any symmetries for the probe action (\ref{1part}) we have been considering.
Finally, let us turn to investigate the TCFH of $\tau^{(1)}$ and $\tau^{(3)}$ on a D3-brane background. One finds that
\begin{eqnarray}
\nabla_M\,\tau^{(1)rs}_{P_1\dots P_5}
+5\,e^{\Phi}\,G^{(5)}_{M[P_1\dots P_4}\,k^{(3)rs}_{P_5]} -\frac{5}{2}\,e^{\Phi}\,G^{(5)}_{M[P_1P_2}{}^{N_1N_2}\,\tau^{(3)rs}_{P_3P_4P_5]N_1N_2} \nonumber \\
= \frac{5}{2}\,e^{\Phi}\,g_{M[P_1}\,G^{(5)}_{P_2\dots P_5]}{}^N\,k^{(3)rs}_N -3\,e^{\Phi}\,G^{(5)}_{[P_1\dots P_5}\,k^{(3)rs}_{M]}~,
\end{eqnarray}
\begin{eqnarray}
\nabla_M\,\tau^{(3)rs}_{P_1\dots P_5}
- 5\,e^{\Phi}\,G^{(5)}_{M[P_1\dots P_4}\,k^{(1)rs}_{P_5]} +\frac{5}{2}\,e^{\Phi}\,G^{(5)}_{M[P_1P_2}{}^{N_1N_2}\,\tau^{(1)rs}_{P_3P_4P_5]N_1N_2} \nonumber \\
=- \frac{5}{2}\,e^{\Phi}\,g_{M[P_1}\,G^{(5)}_{P_2\dots P_5]}{}^N\,k^{(1)rs}_N +3\,e^{\Phi}\,G^{(5)}_{[P_1\dots P_5}\,k^{(1)rs}_{M]}~.
\end{eqnarray}
Focusing on the former condition, $\tau^{(1)}$ is a KY tensor provided that $k^{(3)}=0$ and $\tau^{(3)}_{ijkab}=0$. Using the chirality of $\eta^2$ and $\lambda^2$ as $\mathfrak{spin}(6)$ spinors, one finds that $\tau^{(1)}=0$. A similar calculation for $\tau^{(3)}$ reveals that $\tau^{(3)}=0$. These two forms
do not generate symmetries for the probe action (\ref{1part}).
To summarise the results of this section, we have demonstrated that there are Killing spinors such that the form bilinears $k$, $k^{(2)}$, $\pi$ and $\pi^{(2)}$
of the D3-brane background are KY forms and so generate (hidden) symmetries for the probe described by the action (\ref{1part}) with $C=0$. All these forms have components only along the worldvolume directions of the D3-brane.
\subsection{D7-brane}
Choosing the transverse directions of the D7-brane to be the $4$ and $9$ directions, the Killing spinor $\epsilon=h^{-{1\over8}} \epsilon_0$ of the solution satisfies the condition
\begin{eqnarray}
\Gamma_{49}\epsilon_0^1=\pm\epsilon_0^2~,
\end{eqnarray}
where $\epsilon_0=(\epsilon_0^1, \epsilon_0^2)^t$ is a constant doublet of Majorana-Weyl $\mathfrak{spin}(9,1)$ spinors and $h=1+\sum_\ell q_\ell \log|y-y_\ell|$. This condition with the plus sign can be solved using spinorial geometry as
\begin{eqnarray}
\epsilon_0^1=\eta+e_4\wedge \lambda~,~~~\epsilon_0^2=i\eta-i e_4\wedge \lambda~,
\label{d7ks}
\end{eqnarray}
where $\eta\in \Lambda^{\mathrm{ev}}(\bb{C}\langle e_1, e_2, e_3, e_5\rangle)$ is a positive-chirality and $\lambda\in \Lambda^{\mathrm{odd}}(\bb{C}\langle e_1, e_2, e_3, e_5\rangle)$ a negative-chirality $\mathfrak{spin}(7,1)$ Weyl spinor. The reality condition on $\epsilon$ implies that
\begin{eqnarray}
\lambda=-i \Gamma_{678}*\eta~.
\end{eqnarray}
Using the above expression for the Killing spinors, the form bilinears can be easily computed and can be found in appendix \ref{apb}.
The TCFH for the form bilinears $k$ and $k^{(2)}$ gives
\begin{eqnarray}
\nabla_M\, k^{rs}_P = \frac{1}{2}\, e^{\Phi}\,G^{(1)N}\,\pi^{(2)rs}_{NMP}~,~~~\nabla_M k^{(2)rs}_P = -\frac{1}{2}\,e^{\Phi}\,G^{(1)N}\,\pi^{rs}_{NMP}~.
\end{eqnarray}
As a result, they are both KY forms. Therefore both generate symmetries for the probe action (\ref{1part}) with $C=0$.
It can be shown that the remaining two 1-form bilinears $k^{(1)}$ and $k^{(3)}$ do not generate symmetries for the probe actions we are considering.
Similarly, the TCFH of $\pi$ and $\pi^{(2)}$ on a D7-brane background reads
\begin{eqnarray}
&&
\nabla_M\pi^{rs}_{P_1P_2P_3} = \frac{1}{2}\,e^{\Phi}\,G^{(1)N}\,\tau^{(2) rs}_{MP_1P_2P_3N} + 3\,e^{\Phi}\,g_{M[P_1}\,G^{(1)}_{P_2}\,k^{(2) rs}_{P_3]}~,
\cr
&&
\nabla_M\pi^{(2)rs}_{P_1P_2P_3} = -\frac{1}{2}\,e^{\Phi}\,G^{(1)N}\,\tau^{rs}_{MP_1P_2P_3N} - 3\,e^{\Phi}\,g_{M[P_1}\,G^{(1)}_{P_2}\,k^{rs}_{P_3]}~.
\end{eqnarray}
For these to be KY forms, the terms of the TCFH that explicitly contain the spacetime metric must vanish. As the 1-form field strength of the D7-brane solution does not vanish, $G^{(1)}\not=0$, for $\pi$ this leads to the condition $k^{(2)}=0$, or equivalently,
\begin{equation}
\operatorname{Im}\left\langle \eta^r, \Gamma_a \eta^s \right\rangle_D = 0~.
\end{equation}
Therefore
\begin{align}
\pi^{rs} &= \frac{2}{3} h^{-\frac{1}{4}} \operatorname{Re}\left\langle \eta^r, \Gamma_{abc}\eta^s \right\rangle_D e^a\wedge e^b \wedge e^c~,
\end{align}
is a KY form and generates a (hidden) symmetry for the particle probe described by the action (\ref{1part}) with $C=0$.
Similarly the condition for $\pi^{(2)}$ to be a KY form is
\begin{equation}
\operatorname{Re}\left\langle \eta^r, \Gamma_a \eta^s \right\rangle_D = 0~.
\end{equation}
As a result
\begin{align}
\pi^{(2)rs} &= -\frac{2}{3} h^{-\frac{1}{4}} \operatorname{Im}\left\langle \eta^r, \Gamma_{abc}\eta^s \right\rangle_D e^a\wedge e^b \wedge e^c~,
\end{align}
is a KY form and generates a (hidden) symmetry for the particle probe described by the action (\ref{1part}) with $C=0$. The remaining two 3-form bilinears $\pi^{(1)}$ and $\pi^{(3)}$ do not generate symmetries for the probe action we have been considering.
It remains to investigate whether any of the 5-form bilinears generate symmetries for the probe action (\ref{1part}). To begin, consider $\tau$ and $\tau^{(2)}$.
The TCFH for these on a D7-brane background is
\begin{eqnarray}
&&
\nabla_M\tau^{rs}_{P_1\dots P_5} =- \frac{1}{12}\,e^{\Phi}{}^{\star}G^{(9)}_{MP_1\dots P_5}{}^{N_1N_2N_3}\,\pi^{(2)rs}_{N_1N_2N_3} + 10\,e^{\Phi}\,g_{M[P_1}\,G^{(1)}_{P_2}\,\pi^{(2)rs}_{P_3P_4P_5]}~,
\cr
&&
\nabla_M\tau^{(2)rs}_{P_1\dots P_5} = \frac{1}{12}\,e^{\Phi}{}^{\star}G^{(9)}_{MP_1\dots P_5}{}^{N_1N_2N_3}\,\pi^{rs}_{N_1N_2N_3} - 10\,e^{\Phi}\,g_{M[P_1}\,G^{(1)}_{P_2}\,\pi^{rs}_{P_3P_4P_5]}~.
\end{eqnarray}
For $\tau$, the vanishing of the last term in the first TCFH that contains the metric leads to the condition $\pi^{(2)}=0$.
As a result
\begin{equation}
\tau^{rs} = \frac{4}{5!} h^{-\frac{1}{4}} \operatorname{Re}\left\langle \eta^r, \Gamma_{a_1 \dots a_5} \eta^s \right\rangle_D e^{a_1}\wedge \dots \wedge e^{a_5}~,
\end{equation}
is a KY form and generates a (hidden) symmetry for the probe action (\ref{1part}) with $C=0$.
Similarly $\tau^{(2)}$ is a KY form provided that $\pi=0$
and so
\begin{equation}
\tau^{(2)rs} = -\frac{4}{5!} h^{-\frac{1}{4}} \operatorname{Im}\left\langle \eta^r, \Gamma_{a_1 \dots a_5} \eta^s \right\rangle_D e^{a_1}\wedge \dots \wedge e^{a_5}~,
\end{equation}
is a KY form and generates a (hidden) symmetry for the probe action (\ref{1part}) with $C=0$. The remaining two 5-forms do not generate symmetries
for the probe actions we have been considering.
In all the above cases, there are Killing spinors that satisfy the conditions required for the existence of non-vanishing KY forms. This can be seen from the decomposition of the tensor product of two $\mathfrak{spin}(7,1)$ spinor representations in terms of forms, as described in previous cases.
To summarise the results of this section, we have demonstrated that there are Killing spinors such that the form bilinears $k$, $k^{(2)}$, $\pi$, $\pi^{(2)}$, $\tau$ and $\tau^{(2)}$
of the D7-brane background are KY forms and so generate (hidden) symmetries for the probe described by the action (\ref{1part}) with $C=0$. All these forms have components only along the worldvolume directions of the D7-brane.
\section{Concluding Remarks}
We have presented the TCFH of both IIA and IIB supergravities and demonstrated that the form bilinears satisfy a generalisation
of the CKY equation with respect to the minimal TCFH connection, in agreement with the general theorem in \cite{gptcfh}. Then, prompted by the well-known result that KY forms generate (hidden) symmetries in spinning particle actions, we explored the question
of whether the form bilinears of some known supergravity backgrounds, which include all type II branes, generate symmetries for various particle and string
probes propagating on these backgrounds.
We have also explored the complete integrability of geodesic flow on all type II brane backgrounds. We have demonstrated that if the harmonic function
that the solutions depend on has at most one centre, i.e. they are spherically symmetric, then the geodesic flow is completely integrable. We have explicitly given all independent conserved
charges in involution. We have also presented the KS, KY and CCKY tensors of these brane backgrounds associated with their integrability structure.
Returning to the symmetries generated by the TCFH, supersymmetric type II common sector backgrounds admit form bilinears which are covariantly constant with respect to a connection
with skew-symmetric torsion given by the NS-NS 3-form field strength. All these bilinears generate (hidden) symmetries for string and particle probe actions with 3-form couplings. The type II fundamental string and NS5-brane background form bilinears have explicitly been given. Common sector backgrounds admit additional form bilinears which satisfy a TCFH but they are not
covariantly constant with respect to a connection with skew-symmetric torsion. Although these forms are part of the geometric structure of common sector backgrounds, their geometric interpretation is less straightforward.
Moreover we found that there are Killing spinors in all Dp-brane backgrounds, for $p\not=1,5$, such that the associated bilinears are KY forms and so generate (hidden) symmetries for spinning particle probes. All these form bilinears
have components only along the worldvolume directions of the Dp-branes. A similar conclusion holds for the D1- and D5-brane solutions, except that in this case the form bilinears are KY forms with respect to a connection with skew-symmetric torsion
that is determined by the 3-form field strength of the backgrounds. These form bilinears generate (hidden) symmetries for particle probes described by the
action (\ref{1part}) with a non-vanishing 3-form coupling. Again these form bilinears have non-vanishing components only along the worldvolume directions of the D-branes.
It is fruitful to compare the KY forms we have obtained from the TCFH with those that are needed to investigate the integrability of the geodesic flow
in type II brane backgrounds. TCFH KY forms exist for any choice of the harmonic function that the brane solutions depend on. Moreover, as we have mentioned, these KY forms have non-vanishing components only along the worldvolume directions of D-branes. It is clear from this that although they generate symmetries for particle probes propagating on D-brane backgrounds these symmetries are not necessarily connected to the integrability
properties of such dynamical systems. This is because it is not expected, for example, that the geodesic flow of brane solutions which depend on a multi-centred harmonic function is completely
integrable. Indeed the KS and KY tensors we have found that are responsible for the integrability of the geodesic flow on spherically symmetric branes also have components
along the transverse directions of these solutions. As the brane metrics have a non-trivial dependence on the transverse coordinates, this is essential for proving the integrability of the geodesic flow. Therefore one concludes that although the form bilinears of supersymmetric backgrounds can generate symmetries in string and
particle probes propagating in these backgrounds, they are not sufficient to prove the complete integrability of probe dynamics. Nevertheless the TCFH
KY tensors, when they exist, are associated with symmetries of probes propagating on brane backgrounds which are not necessarily spherically symmetric.
To find TCFH KY tensors, we have imposed a rather stringent
set of conditions on the form bilinears. In particular in several D-brane backgrounds, we set all terms of the minimal TCFH connection that depend on a form field
strength to zero. It is likely that such a restriction can be lifted and the only condition necessary for invariance of a probe action will be that the terms
in the TCFH which contain explicitly the metric should vanish. For this a new set of probe actions should be found that have couplings which depend on the form
field strengths of the supergravity theories and generalise (\ref{1part}) which exhibits only a 3-form coupling. We hope to report on such a development in the future.
\section*{Acknowledgments}
JP is supported by the EPSRC grant EP/R513064/1.
\section{Introduction}
Star formation is observed to occur in giant molecular clouds, where the stars form in embedded groups \citep{lada_embedded_2003}. These embedded groups are often part of star-forming regions that have a range of different morphologies (i.e. smooth centrally concentrated spherical distributions or more complex substructured distributions) and densities \citep[][]{bressert_spatial_2010,kruijssen_fraction_2012}. Quantifying the amount of spatial (and kinematic) substructure is key to determining whether star formation is universal (i.e. the same everywhere) or whether it is dependent on local environmental factors. \par
Young star-forming regions are often observed to be substructured and subvirial, but this substructure can be erased over a very short time period (the order of a few crossing times within substructured regions) due to dynamical interactions. These interactions lead to dynamical mass segregation \citep[e.g.][]{mcmillan_dynamical_2007, allison_using_2009, allison_early_2010, moeckel_limits_2009, parker_dynamical_2014, dominguez_how_2017} where the most massive stars migrate to the centre of the region over the order of the crossing time scale \citep[][]{bonnell_mass_1998}. Consequently, the observed locations and spatial arrangement of massive stars are not necessarily identical to those when they formed.\par
To test theories of star formation the overall structure of such regions and the distribution of the massive stars inside them must be detected and quantified. Methods that can accurately detect and quantify structure and mass segregation are therefore needed. The mean surface density of companions (two-point or auto correlation function) has been used in \citet{gomez_spatial_1993}, \citet{larson_star_1995} and \citet{gouliermis_complex_2014} to quantify the distributions of stars. This method looks at the excess number of pairs as a function of the separation compared to a random distribution of stars \citep[][]{gomez_spatial_1993,simon_clustering_1997, bate_interpreting_1998,kraus_spatial_2008}. \par
\citet{cartwright_statistical_2004} introduced the $Q$-Parameter which uses minimum spanning trees (MSTs) to determine the overall structure of a star-forming region (see also \citet{schmeja_evolving_2006}, \citet{cartwright_measuring_2009}, \citet{bastian_spatial_2009}, \citet{sanchez_spatial_2009}, \citet{lomax_statistical_2011} and \citet{jaffa_mathcal_2017}). The $Q$-Parameter can be used as a proxy for the dynamical age, with lower $Q$ values (substructured distributions) corresponding to dynamically younger regions and higher $Q$ values (smooth, centrally concentrated distributions) corresponding to dynamically older regions \citep{parker_dynamics_2014}. Using the $Q$-parameter as a proxy for dynamical age in combination with $\Sigma$ (local stellar surface density), \citet{parker_characterizing_2012} and \citet{parker_dynamical_2014} showed that the current dynamical state of a star-forming region could be estimated. \par
\citet{allison_using_2009} developed the mass segregation ratio ($\Lambda_{\rm{MSR}}$) method which detects and quantifies the amount of mass segregation present in a star-forming region (again using MSTs). The $\Lambda_{\rm{MSR}}$ method allows the degree of mass segregation to be found using a plot of $\Lambda_{\rm{MSR}}$ against the number of stars in the minimum spanning tree (see $\S$~\ref{sec:mass_segregation_ratio}). \par
To mitigate the effects of outlying datapoints on $\Lambda_{\rm{MSR}}$, \citet{maschberger_global_2011} introduced the local stellar surface density ratio, defined as the ratio between the median surface densities of a chosen subset and all stars in the region. This method determines if the most massive stars are located in areas of higher than average surface density. \par
The results obtained from these methods must be interpreted with care, especially when determining whether a star-forming region is mass segregated (see \citet{parker_comparisons_2015}). Using these methods, mass segregation has been defined in two main ways: i) the most massive stars are located in areas of higher than average local stellar surface density (as measured by $\Sigma_{\rm{LDR}}$) and ii) the most massive stars are centrally concentrated in the region (as measured by $\Lambda_{\rm{MSR}}$). \par
INDICATE is a new method proposed in \citet{buckner_spatial_2019} to quantify the clustering tendencies of points in a distribution (e.g. stars in a star-forming region), which they used to characterise the spatial behaviours of stars in the Carina Nebula (NGC 3372). The method was also employed in \citet{buckner_spatial_2020} to investigate the clustering tendencies of young stellar objects (YSOs), and thus the star formation history of NGC 2264 (see also \citet{nony_mass_2021}). \par
In this paper we further investigate the ability of INDICATE to quantify overall structure and its ability to detect and quantify mass segregation. We then apply INDICATE to pre-main sequence stars in a selection of nearby star-forming regions for the first time. \par
The paper is organised as follows. In $\S$~\ref{sec:methods} we describe current methods in more detail. In $\S$~\ref{sec:making_synth_sfr} we describe how the synthetic star-forming regions are constructed. In $\S$~\ref{sec:can_indicate_detect_structure} we test the ability of INDICATE to quantify the overall structure of star-forming regions and in $\S$~\ref{sec:can_indicate_detect_mass_segregation} we test the ability of INDICATE to detect and quantify mass segregation in synthetic star-forming regions. In $\S$~\ref{section:observational_data_indicate_results} we present results of applying INDICATE to real star-forming regions. We conclude in $\S$~\ref{sec:conclusions}.
\section{Methods}
\label{sec:methods}
In this section we will describe the INDICATE method along with other methods for comparing the spatial distributions of stars and the detection of mass segregation in star-forming regions.
\subsection{INDICATE}
\label{sec:INDICATE}
Previous methods of defining structure and clustering tendencies, such as the $Q$-parameter \citep[e.g.][]{cartwright_statistical_2004}, involve calculating a single value for the entire region which quantifies the amount of substructure present. INDICATE is different in that it assigns a degree of clustering to each star. This allows INDICATE to determine the significance of any clustering on a star-by-star basis.
The INDICATE algorithm proceeds as follows. First, an evenly spaced control field (i.e. a regular, evenly spaced grid of points) is generated with the same number density as the dataset. The number density is calculated by dividing the number of points in the dataset by the rectangular area covered by the data. We use the minimum and maximum values of the x and y coordinates to define the edges of the rectangle. For each point \textit{j} in the dataset, the Euclidean distance to the $N^{\rm{th}}$ nearest neighbour in the control field is measured (i.e. for $N = 5$ the distance from the data point $j$ to its $5^{\rm{th}}$ nearest neighbour in the control field) and then the mean of those distances, $\Bar{r}$, is calculated. Then the algorithm counts how many other points $N_{\rm{\Bar{r}}}$ from the dataset are within a radius of $\Bar{r}$ of point $j$. The (unit-less) index $I_{\rm{j}}$ for point $j$ is then defined as,
\begin{equation}
I_{\rm j} = \frac{N_{\Bar{\rm r}}}{N},
\label{eq:index}
\end{equation}
where $N$ is the nearest neighbour number. The index is independent of the shape, size and density of the dataset \citep{buckner_spatial_2019}.
To determine the index above which stars are considered to be clustered rather than randomly distributed, \mbox{INDICATE} is applied to a uniformly distributed set of points (see \textit{Appendix~\ref{ab:poisson_ctrl_field}}) with the same number density as the dataset, using the same control field.
In \citet{buckner_spatial_2020} this is repeated 100 times to remove any statistical fluctuations. In this work we present the results for running this once but we also show results for 100 repeats in \textit{Appendix}~\ref{ab:sig_repeats}.
The significant index is defined as $I_{\rm{sig}} = \Bar{I} + 3\sigma$ where $\Bar{I}$ is the mean index of the uniform distributions and $\sigma$ is the standard deviation of this mean. Any star in the dataset with an index greater than $I_{\rm{sig}}$ is considered to have a degree of clustering above random.
We pick the 50 most massive stars when testing for the presence of mass segregation as it was the minimum sample size that was tested in \citet{buckner_spatial_2019}. For this sample of 50 stars we run 100 repeats to account for statistical fluctuations that can significantly alter the number of stars with indexes above the significant index.
Following \citet{allison_using_2009} and \citet{parker_dynamical_2014} we define a subset of the 10 most massive stars to compare the median index against the entire regions median index.
To ensure that the correct distance to the nearest neighbour is found for points on the outskirts of the regions the control grid is extended beyond the dataset. If the control grid is not extended then edge effects can make a very small change to the index of those stars (see \textit{Appendix~B} in \citet{buckner_spatial_2019}).
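The procedure above can be sketched as follows. This is a minimal illustration, not the reference implementation of \citet{buckner_spatial_2019}: it omits the extension of the control grid beyond the dataset, and it assumes scipy's \texttt{cKDTree} for the neighbour searches.

```python
import numpy as np
from scipy.spatial import cKDTree

def indicate_index(points, nn=5):
    """INDICATE index I_j = N_rbar / N for each 2D point (sketch)."""
    points = np.asarray(points, dtype=float)
    npts = len(points)
    # Evenly spaced control grid with (approximately) the same number
    # density as the data, over the bounding rectangle of the data.
    xmin, ymin = points.min(axis=0)
    xmax, ymax = points.max(axis=0)
    side = int(np.ceil(np.sqrt(npts)))
    gx, gy = np.meshgrid(np.linspace(xmin, xmax, side),
                         np.linspace(ymin, ymax, side))
    control = np.column_stack([gx.ravel(), gy.ravel()])
    # Mean distance rbar from the data points to their nn-th nearest
    # control-grid point.
    dists, _ = cKDTree(control).query(points, k=nn)
    rbar = dists[:, -1].mean()
    # Count, for each data point, the other data points within rbar.
    tree = cKDTree(points)
    counts = np.array([len(tree.query_ball_point(p, rbar)) - 1
                       for p in points])
    return counts / nn

def significant_index(points, nn=5, n_repeat=1, rng=None):
    """I_sig = Ibar + 3 sigma over the indexes of uniformly random points
    with the same number density and bounding box as the data."""
    rng = np.random.default_rng(rng)
    lo, hi = points.min(axis=0), points.max(axis=0)
    vals = np.concatenate([
        indicate_index(rng.uniform(lo, hi, size=points.shape), nn)
        for _ in range(n_repeat)])
    return vals.mean() + 3.0 * vals.std()
```

Stars with $I_{\rm j} > I_{\rm sig}$ are then flagged as clustered above random.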
\subsection{Local Stellar Surface Density Ratio $\Sigma_{\rm{LDR}}$}
\label{sec:local_stellar_surface_density_ratio}
\citet{maschberger_global_2011} developed the local stellar surface density ratio $\Sigma_{\rm{LDR}}$ to quantify the relative surface density of the most massive stars compared to all stars in a region. This method makes use of the local stellar surface density defined as,
\begin{equation}
\Sigma = \frac{N-1}{\pi R_{\rm{N}}^{2}},
\end{equation}
where \textit{N} is the nearest neighbour number (here we take $N\,=\,5$) and $R_{\rm{N}}$ is the distance to the $N^{\rm{th}}$ nearest neighbour \citep{casertano_core_1985}.
This allows the local surface density of a subset of stars to be quantified by taking the median surface density of that subset.
The same can be done for all of the stars in the region to find the ratio,
\begin{equation}
\Sigma_{\rm{LDR}} = \frac{\Tilde{\Sigma}_{\rm{subset}}}{\Tilde{\Sigma}_{\rm{all}}},
\end{equation}
where $\Tilde{\Sigma}_{\rm{subset}}$ is the median local stellar surface density of the subset (in this paper the 10 most massive stars) and $\Tilde{\Sigma}_{\rm{all}}$ is the median local stellar surface density of the entire star-forming region.
Following \citet{kupper_mass_2011} and \citet{parker_dynamical_2014} we use $\Sigma_{\rm{LDR}}$ to compare the local surface density of the most massive stars to that of all stars. If $\Sigma_{\rm{LDR}}\,>\, 1$ then the most massive stars are located in areas of higher than average surface density. If $\Sigma_{\rm{LDR}}\,<\,1$ then the most massive stars are found in areas of lower than average surface density. To determine the significance of this difference a two-sample Kolmogorov-Smirnov (KS) test is used. In this work we have chosen an arbitrary threshold p-value of 0.01, below which the null hypothesis that the two samples share the same underlying distribution is rejected.
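A minimal sketch of the $\Sigma_{\rm{LDR}}$ calculation under the definitions above (illustrative only; the neighbour number, subset size and KS test follow the text, and scipy is assumed):

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.stats import ks_2samp

def local_surface_density(points, nn=5):
    """Sigma = (N - 1) / (pi R_N^2), with R_N the distance to the
    N-th nearest neighbour (Casertano & Hut 1985)."""
    # column 0 of the query result is the point itself (distance 0)
    dists, _ = cKDTree(points).query(points, k=nn + 1)
    r_n = dists[:, -1]
    return (nn - 1) / (np.pi * r_n ** 2)

def sigma_ldr(points, masses, nsubset=10, nn=5):
    """Ratio of median densities (most massive subset / all stars),
    plus the two-sample KS p-value for the two density samples."""
    sigma = local_surface_density(np.asarray(points, dtype=float), nn)
    subset = np.argsort(masses)[-nsubset:]   # nsubset most massive stars
    ratio = np.median(sigma[subset]) / np.median(sigma)
    _, pvalue = ks_2samp(sigma[subset], sigma)
    return ratio, pvalue
```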
\subsection{Mass Segregation Ratio $\Lambda_{\rm{MSR}}$}
\label{sec:mass_segregation_ratio}
We quantify the mass segregation of stars using the mass segregation ratio $\Lambda_{\rm{MSR}}$ \citep[][]{allison_using_2009}. This method makes use of minimum spanning trees (MSTs); these are graphs where each point is connected to at least one other point such that the total edge length is minimised with no closed loops.
First a minimum spanning tree is generated for the 10 most massive stars and this is compared to the average length of a set of MSTs made by randomly picking 10 stars from the distribution. If the average edge length of the 10 most massive stars' MST $l_{\rm{10}}$ is significantly smaller than the average edge length of the random MSTs $l_{\rm average}$, then the most massive stars are said to be mass segregated.
This is quantified by the ratio,
\begin{equation}
\Lambda_{\rm{MSR}} = \frac{\left<l_{\rm{average}}\right>}{l_{\rm{10}}}_{-\sigma_{1/6}/l_{\rm{10}}}^{+\sigma_{5/6}/l_{\rm{10}}}.
\end{equation}
For this work the 10 most massive stars are chosen, and then we successively add the next 10 most massive stars to the subset group and repeat the method for the new larger subsets.
The uncertainty is found in the same way as in \citet{parker_spatial_2018}, where the upper and lower errors are the lengths of the random MSTs which lie 5/6 and 1/6 of the way through an ordered list of all random MST lengths, respectively. These values correspond to a 66 per cent deviation from the median value and prevent any single outlying star from heavily influencing the uncertainty, which would be an issue if, as in \citet{allison_using_2009}, a Gaussian dispersion were used as the uncertainty estimator instead.
If $\Lambda_{\rm{MSR}} \gg 1$ then the subset of the most massive stars is mass segregated, if $\Lambda_{\rm{MSR}} \ll 1$ then the subset is inversely mass segregated, and if $\Lambda_{\rm{MSR}} \approx 1$ then the subset is not mass segregated, with the most massive stars at similar distances from each other as the average stars in the star-forming region.
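A sketch of the $\Lambda_{\rm{MSR}}$ calculation for a single subset size (the 1/6 and 5/6 ordered-list error bars described above are omitted, and scipy's MST routine is assumed):

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.sparse.csgraph import minimum_spanning_tree

def mst_mean_edge(points):
    """Mean edge length of the Euclidean minimum spanning tree."""
    mst = minimum_spanning_tree(squareform(pdist(points)))
    return mst.data.mean()   # .data holds the n - 1 edge lengths

def lambda_msr(points, masses, nsubset=10, nrandom=200, rng=None):
    """Lambda_MSR = <l_average> / l_subset for the nsubset most massive stars."""
    rng = np.random.default_rng(rng)
    points = np.asarray(points, dtype=float)
    massive = points[np.argsort(masses)[-nsubset:]]
    l_subset = mst_mean_edge(massive)
    l_random = [mst_mean_edge(points[rng.choice(len(points), nsubset,
                                                replace=False)])
                for _ in range(nrandom)]
    return np.mean(l_random) / l_subset
```

Repeating with successively larger subsets (the 20, 30, \dots most massive stars) gives the $\Lambda_{\rm{MSR}}$ curve discussed above.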
\subsection{Radial Distribution}
A straightforward way to quantify mass segregation is to compare the cumulative distributions of the most massive stars' positions to the cumulative distributions of all stellar positions.
To do this a central position needs to be defined. We follow \citet{parker_comparisons_2015} and use the origin (0,0) of the regions as the centre, as they find that the origin of the distribution is a reasonably robust estimation of the centre in centrally concentrated and fractal distributions. We then find the distance from the origin of the region to each star and plot the cumulative distribution functions of all stars in the region and the 10 most massive stars.
\section{Making Synthetic Star-Forming Regions}
\label{sec:making_synth_sfr}
To investigate the performance of INDICATE we make use of synthetic star-forming regions of different idealised geometries (substructured, smooth centrally concentrated, and uniform). We create synthetic star-forming regions of 1000 stars each and the set-up of these regions is described in the following subsections. Our choice of 1000 stars is motivated by the observation that star clusters and star-forming regions in the Galaxy follow a $N_{\rm{cl}} \propto M_{\rm{cl}}^{-2}$ power law (where $N_{\rm{cl}}$ is the number of regions and $M_{\rm{cl}}$ is the mass of the region) between $10 < M_{\rm{cl}}/M_{\rm{\odot}} < 10^{5}$ \citep{lada_embedded_2003}. A star-forming region containing 1000 stars therefore sits somewhere in the middle of this distribution. We note that many star-forming regions in the Solar neighbourhood contain fewer stars (e.g. 100s); INDICATE (and other methods to quantify structure such as the $Q$-parameter) are affected by statistical noise when applied to regions with fewer than 50 stars. In each case we randomly assign masses to the synthetic data using the initial mass function from \mbox{\citet{maschberger_function_2013}}, with the lower mass, upper mass and mean stellar mass set as $0.01\, M_{\rm{\odot}}$, $150\, M_{\rm{\odot}}$ and $0.2\, M_{\rm{\odot}}$ respectively. The probability distribution is as follows,
\begin{equation}
p(m) \propto \left(\frac{m}{\mu} \right)^{-\alpha} \left(1 + \left(\frac{m}{\mu}\right)^{1 - \alpha}\right)^{-\beta},
\label{eq:maschberger_imf}
\end{equation}
where $\mu$ is the mean stellar mass, $\alpha = 2.3$ is the high mass index and $\beta = 1.4$ is the low mass index for the power law.
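Masses can be drawn from this distribution by inverse-transform sampling; \citet{maschberger_function_2013} gives the quantile function in closed form via the auxiliary function $G(m) = \left(1 + (m/\mu)^{1-\alpha}\right)^{1-\beta}$. A sketch:

```python
import numpy as np

def sample_maschberger_imf(n, mu=0.2, alpha=2.3, beta=1.4,
                           m_lower=0.01, m_upper=150.0, rng=None):
    """Draw n stellar masses (in solar masses) from the Maschberger (2013)
    IMF by inverting its cumulative distribution between m_lower and m_upper."""
    rng = np.random.default_rng(rng)

    def G(m):
        # auxiliary function; monotonically increasing in m
        return (1.0 + (m / mu) ** (1.0 - alpha)) ** (1.0 - beta)

    u = rng.uniform(size=n)
    g = G(m_lower) + u * (G(m_upper) - G(m_lower))
    return mu * (g ** (1.0 / (1.0 - beta)) - 1.0) ** (1.0 / (1.0 - alpha))
```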
\subsection{Fractal Star-Forming Regions}
We generate a substructured star-forming region using the box fractal method \citep{goodwin_dynamical_2004,cartwright_statistical_2004}. This method has also been used in previous works \citep[e.g.][]{allison_using_2009, parker_comparisons_2015, daffern-powell_dynamical_2020}. An example of a star-forming region that was generated using this method is presented in figure~\ref{fig:example_clusters_fractal_radial}(a), consisting of 1000 stars with a fractal dimension $D = 1.6$.
The method works as follows. A single star is placed at the centre of a cube of side length $N_{\rm{Div}} = 2$. This cube is then subdivided into $N_{\rm{Div}}^{3}$ (in this case 8) sub-cubes. A star is placed at the centre of each sub-cube, and each sub-cube has a probability $N_{\rm{Div}}^{(D - 3)}$ of being subdivided again into 8 more sub-cubes, where $D$ is the fractal dimension of the region.
Stars whose cubes are not subdivided are removed from the region, along with the generations of stars that preceded them. A small amount of noise is added to the position of each star to prevent an artificially regular structure. This process of subdivision and adding new stars is repeated until the target number of stars is reached or exceeded. Only the final generation of stars is kept (all previous generations are removed), and any excess stars are then removed at random so that the target number of stars lies inside a spherical boundary \citep[][]{daffern-powell_dynamical_2020}.
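The subdivision procedure above can be sketched as follows. This is a minimal illustration, assuming a survival probability of $N_{\rm{Div}}^{(D-3)}$ per sub-cube and Gaussian positional noise, and is not the implementation used in this work.

```python
import itertools
import numpy as np

def box_fractal(n_target, D=1.6, n_div=2, rng=None):
    """Sketch of the box-fractal method: generate n_target stars with
    fractal dimension D inside a sphere of unit radius."""
    rng = np.random.default_rng(rng)
    # unit offsets from a parent to the centres of its n_div^3 sub-cubes
    offsets = (np.array(list(itertools.product(range(n_div), repeat=3)))
               - (n_div - 1) / 2.0)
    p_sub = n_div ** (D - 3.0)          # probability a sub-cube subdivides
    gen, side = np.zeros((1, 3)), 2.0   # one parent star in a cube of side 2
    while True:
        child_side = side / n_div
        kids = (gen[:, None, :] + offsets * child_side).reshape(-1, 3)
        kids = kids[rng.random(len(kids)) < p_sub]   # cull dead sub-cubes
        if len(kids) == 0:
            continue                     # whole generation died out: redraw
        # small positional noise so the grid pattern is not visible
        gen = kids + rng.normal(0.0, 0.1 * child_side, kids.shape)
        side = child_side
        inside = gen[np.linalg.norm(gen, axis=1) < 1.0]  # spherical boundary
        if len(inside) >= n_target:      # keep only the final generation,
            return inside[rng.choice(len(inside), n_target, replace=False)]
```

Only the last generation is returned, with excess stars inside the spherical boundary removed at random, as in the description above.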
\subsection{Smooth Centrally Concentrated Star-Forming Regions}
We generate smooth, centrally concentrated star-forming regions by distributing the stars according to the following expression,
\begin{equation}
n\propto r^{-\alpha},
\end{equation}
where $n$ is the number density, $r$ is the radial distance from the origin of the region and $\alpha$ is the radial density exponent; higher values of $\alpha$ produce more centrally concentrated regions \citep{cartwright_statistical_2004}.
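For $\alpha < 3$ the enclosed number of stars scales as $N(<r) \propto r^{3-\alpha}$, so radii can be drawn by inverse-transform sampling with isotropic directions. A minimal sketch (an illustration, not the code used in this work):

```python
import numpy as np

def radial_region(n, alpha=2.0, r_max=1.0, rng=None):
    """Sketch: n stars with number density n(r) ~ r^-alpha (alpha < 3)."""
    rng = np.random.default_rng(rng)
    # enclosed number N(<r) ~ r^(3 - alpha), so invert the CDF directly
    r = r_max * rng.random(n) ** (1.0 / (3.0 - alpha))
    # isotropic directions from normalised Gaussian vectors
    v = rng.normal(size=(n, 3))
    v /= np.linalg.norm(v, axis=1, keepdims=True)
    return v * r[:, None]
```

For $\alpha = 2$ the radii are uniform in $r$, which is a quick sanity check on the sampler.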
\subsection{Uniformly Distributed Star-Forming Regions}
A uniform distribution of 1000 points is generated to test INDICATE when there is no initial structure. Like all other synthetic star-forming regions in this work, all points lie within $-1<x<1$ and $-1<y<1$.
\section{Applying INDICATE to synthetic star-forming regions}
\subsection{Can INDICATE determine the structure of star-forming regions?}
\label{sec:can_indicate_detect_structure}
To see how INDICATE performs on star-forming regions with different structures, and to test whether it can differentiate between them, we run it over different sets of star-forming regions, with each set corresponding to a different structural parameter used in its creation. Each set contains 100 different realisations of regions made with the same parameter. For example, the fractal dimensions $D = 1.6, 2.0, 2.6$ and $3.0$ make up 4 sets of star-forming regions, each containing 100 realisations of substructured regions of 1000 stars each. The same is done for smooth, centrally concentrated star-forming regions, where 7 sets of regions are made using the radial density exponents $\alpha = 0.0,\, 0.5, \,1.0, \,1.5, \,2.0, \,2.5$ and $2.9$, with each realisation again containing 1000 stars. The results of applying INDICATE to all of these sets are shown in figure~\ref{fig:structure_indicate}, which shows the mean, median, mean median and mean maximum index for each set of star-forming regions.
The mean and median INDICATE indexes for each set are calculated by collecting the indexes of all stars across all 100 realisations and taking the mean and median of these values. The mean median INDICATE index is calculated by finding the median INDICATE index of each individual region in a given set, giving 100 median values whose mean is then taken. The mean maximum INDICATE index is calculated analogously, from the maximum index of each individual region in a given set.
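The four statistics above can be computed as follows. This is an illustrative sketch; `index_sets` is a hypothetical list of per-realisation index arrays, not a data structure from this work.

```python
import numpy as np

def summarise_indexes(index_sets):
    """index_sets: list of 1-D arrays of INDICATE indexes,
    one array per realisation in the set."""
    pooled = np.concatenate(index_sets)          # all stars, all realisations
    return {
        "mean": pooled.mean(),                   # mean over all stars
        "median": np.median(pooled),             # median over all stars
        # mean of the per-realisation medians
        "mean_median": np.mean([np.median(s) for s in index_sets]),
        # mean of the per-realisation maxima
        "mean_max": np.mean([s.max() for s in index_sets]),
    }
```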
Figure~\ref{fig:structure_indicate} clearly shows that the index is degenerate because different smooth and fractal distributions can have the same INDICATE index. We present the index distributions for the synthetic regions in \textit{Appendix}~\ref{appsec:index_distribution_synth_sfr}. Because the index is similar across star-forming regions with different geometries and levels of substructure, we suggest that INDICATE cannot be used to quantify the type of morphology in the same way that the $Q$-Parameter can.
\begin{figure*}
\subfigure[A typical fractal distribution of 1000 stars with D = 1.6. ]{\includegraphics[width=0.45\linewidth]{figures/Synthetic Data/fractals/fractal1d6_example.png}}
\hspace{10pt}
\subfigure[A typical smooth, centrally concentrated distribution of 1000 stars with a density index $\alpha = 2.0$.]{\includegraphics[width=0.45\linewidth]{figures/Synthetic Data/radial/radial2d0_example.png}}
\caption{Typical examples of our synthetic star-forming regions: (a) a substructured star-forming region with fractal dimension $D = 1.6$, (b) a radial smooth centrally concentrated star-forming region with density index $\alpha = 2.0$.}
\label{fig:example_clusters_fractal_radial}
\end{figure*}
\subsection{Can INDICATE be used to quantify mass segregation?}
\label{sec:can_indicate_detect_mass_segregation}
To test the ability of INDICATE to detect and quantify whether the most massive stars lie in regions of localised, above-average stellar surface density, we apply it to all 1000 stars in our substructured, smooth centrally concentrated and uniform synthetic star-forming regions, where the masses are randomly assigned to stars using the IMF from \citet{maschberger_function_2013}. We change the mass configurations of these regions in two ways: by swapping the 10 most massive stars with the 10 stars of highest INDICATE index, the \textit{high mass high index} (\textit{hmhi}) configuration, and by swapping the 10 most massive stars with the 10 most central stars, the \textit{high mass centre} (\textit{hmc}) configuration. Table~\ref{tab:indicate repeats all stars} shows how many times, across 100 realisations of a substructured region ($D=1.6$), a smooth, centrally concentrated region ($\alpha=2.0$) and a uniform distribution, the median index of the entire region of 1000 stars, of the 10 most massive stars, and of 10 random stars is above the significant index. Table~\ref{tab:mass seg tests MSR LDR} shows the results of applying $\Lambda_{\rm{MSR}}$ and $\Sigma_{\rm{LDR}}$ to all stars in the same 100 realisations of each morphology.
\begin{table}
\setlength{\tabcolsep}{4pt}
\centering
\caption{INDICATE results of 100 different realisations for each of the presented morphologies. From left to right the columns are: the median of the median indexes found across all 100 realisations for all stars, the number of times a realisation's median index for all stars is above its significant index, the median of the median indexes of the 10 most massive stars, the number of times a realisation's median index for the 10 most massive stars is above its significant index, the median of the median indexes for 10 randomly chosen stars, and the number of times a realisation's median index for 10 random stars is above its significant index.}
\begin{tabular}{l|cccccc}
\hline
Region & $\Tilde{I}_{\rm{all}}$ & $\#>I_{\rm{sig}}$ & $\Tilde{I}_{\rm{10,mm}}$ & $\# > I_{\rm{sig}}$ & $\Tilde{I}_{\rm{10,ran}}$ & $\# > I_{\rm{sig}}$\\
\hline
$D=1.6$, m & 4.4 & 100 & 4.4 & 97 & 4.5 & 98 \\
$D=1.6$, hmhi & 4.4 & 100 & 11.8 & 100 & 4.6 & 98 \\
$D=1.6$, hmc & 4.4 & 100 & 3.5 & 84 & 4.5 & 97 \\
$\alpha=2.0$, m & 1.8 & 2 & 2.0 & 38 & 2.0 & 41 \\
$\alpha=2.0$, hmhi & 1.8 & 2 & 22.7 & 100 & 2.0 & 32 \\
$\alpha=2.0$, hmc & 1.8 & 2 & 22.2 & 100 & 2.1 & 32 \\
Uniform, m & 1.0 & 0 & 0.9 & 0 & 0.9 & 0 \\
Uniform, hmhi & 1.0 & 0 & 2.2 & 34 & 0.9 & 0 \\
Uniform, hmc & 1.0 & 0 & 0.9 & 0 & 0.9 & 0 \\
\hline
\end{tabular}
\label{tab:indicate repeats all stars}
\end{table}
\begin{table}
\setlength{\tabcolsep}{4pt}
\centering
\caption{Results of applying $\Lambda_{\rm{MSR}}$ and $\Sigma_{\rm{LDR}}$ to all 1000 stars in 100 different realisations of each morphology and mass configuration. From left to right the columns are: the number of times $\Lambda_{\rm{MSR}} < 0.5$ (which would indicate significant inverse mass segregation, as is observed in the Taurus star-forming region), the number of times $\Lambda_{\rm{MSR}} > 2$ (which counts how many times $\Lambda_{\rm{MSR}}$ detects a strong signal of mass segregation), and the number of times the ratio $\Sigma_{\rm{LDR}}$ is found to be $> 1$ and significant according to a KS test with a threshold p-value $< 0.01$.}
\begin{tabular}{l|ccc}
\hline
Region & $\# \Lambda_{\rm{MSR}}<0.5$ & $\# \Lambda_{\rm{MSR}}>2$ & $\#\Sigma_{\rm{LDR, Sig}} > 1$ \\
\hline
D=1.6, m & 0 & 0 & 1 \\
D=1.6, hmhi & 0 & 94 & 71 \\
D=1.6, hmc & 0 & 100 & 6 \\
$\alpha=2.0$, m & 0 & 1 & 2 \\
$\alpha=2.0$, hmhi & 0 & 100 & 100 \\
$\alpha=2.0$, hmc & 0 & 100 & 100 \\
Uniform, m & 0 & 0 & 0 \\
Uniform, hmhi & 0 & 34 & 100 \\
Uniform, hmc & 0 & 100 & 9 \\
\hline
\end{tabular}
\label{tab:mass seg tests MSR LDR}
\end{table}
\begin{table}
\setlength{\tabcolsep}{4pt}
\centering
\caption{INDICATE results when applied only to the 50 most massive stars across 100 realisations of each morphology. From left to right the columns are: the median of the median indexes found for all 50 stars across all 100 realisations, the number of times the median index for the 50 most massive stars is above the significant index in a realisation, the median of the median indexes of the 10 most massive stars found across all regions, the number of times the median index for the 10 most massive stars is greater than the significant index in a realisation, the number of times $\Lambda_{\rm{MSR}}$ detects mass segregation in the realisations in which INDICATE has detected mass segregation, the median of the median indexes found for 10 randomly chosen stars across all regions, and the number of times a realisation's median index for the 10 random stars is greater than the significant index.}
\begin{tabular}{l|ccccccc}
\hline
Region & $\Tilde{I}_{\rm{50}}$ & $\#>I_{\rm{sig}}$ & $\Tilde{I}_{\rm{10,mm}}$ & $\# > I_{\rm{sig}}$ & $\#$ MS &$\Tilde{I}_{\rm{10,ran}}$ & $\# > I_{\rm{sig}}$\\
\hline
$D=1.6$, m & 1.5 & 12 & 1.5 & 14 & 0 & 1.5 & 16 \\
$D=1.6$, hmhi & 1.7 & 28 & 3.6 & 94 & 90 & 1.7 & 35 \\
$D=1.6$, hmc & 1.6 & 23 & 3.0 & 95 & 95 & 1.8 & 29 \\
$\alpha=2.0$, m & 1.4 & 14 & 1.5 & 29 & 1 & 1.6 & 30 \\
$\alpha=2.0$, hmhi & 2.6 & 54 & 4.8 & 100 & 100 & 2.7 & 65 \\
$\alpha=2.0$, hmc & 2.6 & 57 & 4.8 & 100 & 100 & 2.8 & 62 \\
Uniform, m & 0.8 & 0 & 0.7 & 0 & 0 & 0.7 & 0 \\
Uniform, hmhi & 0.8 & 0 & 1.2 & 15 & 11 & 0.8 & 0 \\
Uniform, hmc & 0.8 & 0 & 2.4 & 87 & 87 & 0.8 & 3 \\
\hline
\end{tabular}
\label{tab:indicate mass segregation tests}
\end{table}
Comparing the median INDICATE index of all the stars in the region with the median index of the 10 most massive stars allows the relative clustering tendencies of the most massive stars to be determined. If the median index of the most massive stars is greater than the median index for the entire region, then the most massive stars are more clustered than the typical star in the region, and consequently are found in locations of higher than average local surface density. If the opposite is true, then the most massive stars are less clustered than the typical star and are found in areas of lower than average local surface density. To determine whether any difference detected by INDICATE in the spatial distribution of the most massive stars is significant, we use a 2-sample KS test with a significance threshold of 0.01, below which we reject the null hypothesis that the 10 most massive stars and the entire population of stars are drawn from the same spatial distribution. If the p-value $\ll 0.01$ then there is a significant difference in the index distributions (and therefore clustering tendencies) of the 10 most massive stars compared to the entire region.
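The statistic underlying this test is the maximum distance between the two empirical CDFs. A minimal sketch of the 2-sample KS statistic is given below; in practice a library routine such as `scipy.stats.ks_2samp` computes both the statistic and the p-value.

```python
import numpy as np

def ks_2samp_stat(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum absolute
    distance between the empirical CDFs of samples a and b."""
    a, b = np.sort(a), np.sort(b)
    grid = np.concatenate([a, b])        # evaluate both ECDFs at every sample
    cdf_a = np.searchsorted(a, grid, side="right") / len(a)
    cdf_b = np.searchsorted(b, grid, side="right") / len(b)
    return np.abs(cdf_a - cdf_b).max()
```

The statistic is 0 for identical samples and 1 for completely disjoint ones; the p-value then quantifies how likely such a distance is under the null hypothesis of a common parent distribution.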
To test the ability of INDICATE to detect and quantify mass segregation we apply INDICATE to just the 50 most massive stars in each of the regions. The criteria used for INDICATE to detect mass segregation are from \citet{buckner_spatial_2019} and require that the 10 most massive stars are non-randomly clustered with respect to all 50 most massive stars (i.e. $\Tilde{I}_{\rm{10}} > I_{\rm{sig}}$).
To see how often INDICATE detects mass segregation of the 10 most massive stars we apply INDICATE to only the 50 most massive stars across the 100 realisations of each morphology. We present these results in table~\ref{tab:indicate mass segregation tests}. INDICATE detects mass segregation in many of these realisations, specifically for the realisations with \textit{hmhi} and \textit{hmc} mass configurations. We apply $\Lambda_{\rm{MSR}}$ to these realisations and count how many times $\Lambda_{\rm{MSR}}$ finds the 10 most massive stars to be mass segregated in the realisations that INDICATE has detected mass segregation. For centrally concentrated regions with the mass configurations \textit{hmhi} and \textit{hmc} both INDICATE and $\Lambda_{\rm{MSR}}$ find mass segregation in all 100 realisations. For the centrally concentrated region with randomly assigned masses INDICATE finds 29 realisations with mass segregation but $\Lambda_{\rm{MSR}}$ detects mass segregation in only one of these. A similar result is seen for the substructured regions where INDICATE finds that 14 of the realisations are mass segregated whereas $\Lambda_{\rm{MSR}}$ finds no mass segregation in these realisations. INDICATE detects mass segregation in the uniform distribution \textit{hmhi} and \textit{hmc} mass configurations in 15 and 87 realisations, respectively. $\Lambda_{\rm{MSR}}$ detects mass segregation in 11 of the 15 realisations and all 87 realisations for the \textit{hmhi} and \textit{hmc} mass configurations, respectively.
For tables~\ref{tab:indicate repeats all stars} and \ref{tab:indicate mass segregation tests} the median indexes are calculated by finding the median index in each region for all stars, the 10 most massive stars and the 10 random stars for each individual realisation then finding the median of these 100 values. For the tests in table~\ref{tab:indicate mass segregation tests} the swap is performed in a full region of 1000 stars then the 50 most massive stars are taken out. For the ranges of indexes across all realisations for the different morphologies see \textit{Appendix}~\ref{app:testing indicate 100 realisations}.
We use the 50 most massive stars because this is the minimum sample size tested in \citet{buckner_spatial_2019}; any subset at least this large may be selected. For example, \citet{buckner_spatial_2019} used a subset of 121 OB stars.
To see if the 10 most massive stars are spatially clustered differently from the entire subset we run a 2-sample KS test with a significance threshold of 0.01. The INDICATE plots for the 50 most massive stars in each of the synthetic regions are shown in \textit{Appendix}~\ref{appsec:classical_massseg_synth_data}.
For all of these tests the $\Sigma_{\rm{LDR}}$, $\Lambda_{\rm{MSR}}$ and cumulative distribution of stellar position methods are also applied for comparison. We further employ the 2 sample KS test to determine if any $\Sigma_{\rm{LDR}}$ or CDF results are significantly different between the 10 most massive stars and the entire population in a region.
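For reference, $\Lambda_{\rm{MSR}}$ \citep{allison_using_2009} compares the length of the minimum spanning tree (MST) of the $N$ most massive stars with the mean MST length of random subsets of the same size. The sketch below is a hypothetical illustration of that idea (using Prim's algorithm for the MST), not the implementation used in this work.

```python
import numpy as np

def mst_length(pts):
    """Total edge length of the Euclidean MST of pts (Prim's algorithm)."""
    n = len(pts)
    d = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
    in_tree = np.zeros(n, bool)
    in_tree[0] = True
    best = d[0].copy()          # cheapest edge from each node to the tree
    total = 0.0
    for _ in range(n - 1):
        best[in_tree] = np.inf  # never re-select nodes already in the tree
        j = best.argmin()
        total += best[j]
        in_tree[j] = True
        best = np.minimum(best, d[j])
    return total

def lambda_msr(pts, masses, n_massive=10, n_draws=100, rng=None):
    """Mass segregation ratio: <MST length of random subsets> /
    MST length of the n_massive most massive stars."""
    rng = np.random.default_rng(rng)
    top = np.argsort(masses)[-n_massive:]
    l_massive = mst_length(pts[top])
    l_rand = [mst_length(pts[rng.choice(len(pts), n_massive, replace=False)])
              for _ in range(n_draws)]
    return np.mean(l_rand) / l_massive
```

$\Lambda_{\rm{MSR}} \approx 1$ indicates no mass segregation, $\Lambda_{\rm{MSR}} > 1$ indicates that the massive stars are more tightly grouped than a random subset.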
\subsection{Fractal Star-Forming Regions}
\label{sec:indicate fractal results}
\begin{figure*}
\subfigure[Mean Indexes]{\includegraphics[width=0.49\linewidth]{newfigs/fig2/mean_index.png}}
\hspace{0.8pt}
\subfigure[Median Indexes]{\includegraphics[width=0.49\linewidth]{newfigs/fig2/median_index.png}}
\hspace{0.8pt}
\subfigure[Mean of Median Indexes]{\includegraphics[width=0.49\linewidth]{newfigs/fig2/mean_median.png}}
\hspace{0.8pt}
\subfigure[Mean Maximum Indexes]{\includegraphics[width=0.49\linewidth]{newfigs/fig2/mean_max_index.png}}
\caption{INDICATE results for regions with different spatial distributions. Panel (a) shows the mean indexes found for 100 different realisations of ideal star-forming regions of differing fractal dimension and radial densities and the error bars represent the standard deviation of the mean indexes. Panel (b) shows the median indexes for the same regions and the error bars here represent the median absolute deviation. Panel (c) shows the mean of the median found for each of the 100 regions with the error bars representing the standard deviation of the mean. In panel (d) the mean maximum index is shown with the error bars being the standard deviation of this value.}
\label{fig:structure_indicate}
\end{figure*}
\subsubsection{Random masses}
Figure~\ref{fig:results_fractal}(a) shows that INDICATE clearly identifies areas of high spatial clustering, and finds that 82.2 per cent of stars have an index greater than the significant index of 2.3. The median index of the significantly clustered stars is $5.2_{-1.4}^{+2.6}$, where the sub- and superscript numbers show the uncertainty defined by the 25$^{\rm{th}}$ and 75$^{\rm{th}}$ percentiles respectively (this value is the same for the \textit{hmhi} and \textit{hmc} configurations because swapping masses does not change stellar positions). The maximum index for all stars is 15.8 (also the same for \textit{hmhi} and \textit{hmc}), with a median index for the entire region of $4.4_{-1.4}^{+2.6}$ and of $4.5_{-0.6}^{+3.3}$ for the 10 most massive stars. As both medians are above the significant index, both the entire region and the 10 most massive stars are clustered above random. A KS test returns a p-value of 0.9, suggesting that the difference in clustering tendencies of the most massive stars and all stars is not significant, and that both high and low mass stars share similar non-random clustering tendencies.
Applying INDICATE to the 50 most massive stars, we find that 22 per cent of stars in the subset have indexes above the significant index of 2.1. For significantly clustered stars the median index is $2.2_{-0.0}^{+0.2}$, and the maximum index across all 50 stars is 2.6. The median index for the subset is $1.4_{-0.6}^{+0.6}$ and the median index for the 10 most massive stars is $1.6_{-0.8}^{+0.4}$, which is below the significant index, meaning that INDICATE does not detect mass segregation. A KS test returns a p-value $= 1.00$, correctly identifying that there is no difference in the clustering tendencies of the 10 most massive stars and the entire subset.
Figure~\ref{fig:results_fractal}(d) shows local stellar surface density against mass, with $\Sigma_{\rm{LDR}} = 1.3$, and a p-value of 0.69, meaning no significant difference between the local stellar surface density of high mass stars and the entire region. This is in agreement with the INDICATE result that the most massive stars are distributed in a similar way to the other stars in the region.
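For reference, $\Sigma_{\rm{LDR}}$ compares the median local stellar surface density of the most massive stars with that of the whole region. The sketch below assumes the common $N$th-nearest-neighbour surface density estimate, $\Sigma = (N-1)/(\pi r_{N}^{2})$; the implementation used in this work may differ in detail.

```python
import numpy as np

def local_surface_density(xy, n_neigh=7):
    """Surface density from the distance to the Nth nearest neighbour:
    Sigma = (N - 1) / (pi * r_N^2)."""
    d = np.linalg.norm(xy[:, None] - xy[None, :], axis=-1)
    r_n = np.sort(d, axis=1)[:, n_neigh]   # column 0 is the star itself
    return (n_neigh - 1) / (np.pi * r_n ** 2)

def sigma_ldr(xy, masses, n_massive=10, n_neigh=7):
    """Ratio of the median local surface density of the n_massive most
    massive stars to the median over the whole region."""
    sig = local_surface_density(xy, n_neigh)
    top = np.argsort(masses)[-n_massive:]
    return np.median(sig[top]) / np.median(sig)
```

$\Sigma_{\rm{LDR}} > 1$ indicates that the most massive stars sit in locations of above-average surface density; a KS test on the two density samples then assesses significance.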
Figure~\ref{fig:results_fractal}(g) shows the $\Lambda_{\rm{MSR}}$ result, with $\Lambda_{\rm{MSR}} = {0.97}_{-\,0.11}^{+\,0.09}$ for the 10 most massive stars which is consistent with no significant mass segregation and is in agreement with INDICATE.
Figure~\ref{fig:results_fractal}(j) shows the cumulative radial distribution of the 10 most massive stars and all the stars in the region. The radial distribution starts at 0.6 pc for the 10 most massive stars; this is due to the randomly assigned stellar masses, which happen to mainly be in clumps a large distance away from the centre (i.e. the origin at (0,0)). When comparing the two distributions using a KS test a p-value $\ll 0.01$ is returned, meaning that the most massive stars could be mistakenly inferred to have been drawn from a different underlying radial distribution than all of the stars.
\subsubsection{High Mass High Index}
We now swap the 10 most massive stars with stars that have the highest INDICATE index and these results are shown in the middle column of figure~\ref{fig:results_fractal}. In figure~\ref{fig:results_fractal}(b) we show the positions of the 10 most massive stars (the black crosses).
The median index of the 10 most massive stars has increased from $4.5_{-0.6}^{+3.3}$ to $15.3_{-0.4}^{+0.2}$, while the median index for the entire region stays the same at $4.4_{-1.4}^{+2.6}$, as do the significant index of 2.3 and the percentage of stars with indexes greater than it. These quantities stay the same because the overall geometry of the region has not changed; only the masses assigned to 20 of the stars have been swapped. Comparing the 10 most massive stars to the rest of the population using a KS test gives a p-value $\ll 0.01$, implying a significant difference in the clustering tendencies of the most massive stars. Figure~\ref{fig:results_fractal}(b) shows this difference clearly, as the 10 most massive stars are now positioned in the most clustered locations according to INDICATE and, consequently, appear visibly more spatially concentrated.
Applying INDICATE to just the 50 most massive stars, we find that the percentage of stars with indexes above the significant index of 2.1 is 38 per cent, with a median index for all 50 stars of $1.3_{-0.5}^{+2.3}$ and a median index of $3.6_{-0.0}^{+0.0}$ for the 10 most massive stars. The median index for significantly clustered stars is $3.6_{-0.0}^{+0.0}$, and the maximum INDICATE index found for the 50 most massive stars is 3.8. As the median index for the 10 most massive stars is above the significant index, mass segregation has been detected in the region. The number of stars with significant indexes changes because we perform the swap in the full region of 1000 stars and then take the 50 most massive from that. A KS test returns a p-value $\ll 0.01$, confirming that the tendency of the 10 most massive stars to cluster with high mass stars is significantly different to that of the entire subset of the 50 most massive stars.
$\Sigma_{\rm{LDR}}$ has increased from 1.3 to 2.3, with a p-value $\ll 0.01$. Figure~\ref{fig:results_fractal}(e) shows the 10 most massive stars are now above the median surface density of all of the stars (shown by the horizontal dashed black line). In this case the reason for this is that swapping stars to the most clustered areas as measured using INDICATE results in them also being swapped into areas with higher than average local stellar surface density.
We now measure mass segregation according to $\Lambda_{\rm{MSR}}$ of the region, and find the 10 most massive stars have a mass segregation ratio of $\Lambda_{\rm{MSR}} = 33.74_{\,-\,5.27}^{\,+\,2.54}$. This implies significant mass segregation. In this particular region this is because all of the most massive stars are located in a single clump with a high INDICATE index.
Because all of the most massive stars have been moved to the same clump, the average distance between them is shorter than the average distance between random stars in the region. Figure~\ref{fig:results_fractal}(h) shows the peak signal for the 10 most massive stars, which then rapidly decreases to $\sim1$, meaning no mass segregation for lower mass stars; this is to be expected because these stars have not been swapped.
When comparing the cumulative distributions of the positions of the 10 most massive stars and all the stars (see figure~\ref{fig:results_fractal}(k)), a clear difference can be seen. The distribution of positions for the 10 most massive stars is very narrow because they occupy a very concentrated location and are therefore all a similar distance from the origin. A KS test between the cumulative distributions of positions for all the stars and the 10 most massive stars returns a p-value $\ll 0.01$.
\subsubsection{High Mass Centre}
The 10 most massive stars are now swapped with the 10 most central stars (the results for this are shown in the right-hand column of figure~\ref{fig:results_fractal}). Because of the box-fractal construction of the region the origin (at (0,0)) is in empty space, so the most massive stars are split into two groups around the origin.
The median INDICATE index for the 10 most massive stars is $2.2_{-0.0}^{+0.0}$ (decreasing from $4.5_{-0.6}^{+3.3}$ in the original region with randomly assigned masses), below the median index for the region of $4.4_{-1.4}^{+2.6}$. A KS test returns a p-value $\ll 0.01$, implying a significant difference in the distributions; in this case the most massive stars are in locations of lower clustering than the rest of the stars in the region. Figure~\ref{fig:results_fractal}(c) shows the two main groups of the most massive stars on either side of the centre of the star-forming region; the stars here have relatively low INDICATE indexes, as the central locations of this region happen to have relatively low indexes compared to the rest of the star-forming region.
Applying INDICATE to just the 50 most massive stars we find that 28 per cent of stars have indexes above the significant index of 2.1. For significantly clustered stars the median index is $2.6_{-0.2}^{+0.0}$. The median index for the entire subset is $1.4_{-0.4}^{+0.8}$ and is $2.6_{-0.2}^{+0.0}$ for the 10 most massive stars, which is above the significant index of 2.1 meaning the region is mass segregated. A maximum INDICATE index of 2.6 is found for the 50 most massive stars. A KS test returns p-value $\ll 0.01$ confirming that the tendency of the 10 most massive stars to cluster with high mass stars is significantly different to the entire subset.
In figure~\ref{fig:results_fractal}(f) we show the local surface density against mass plot. We find $\Sigma_{\rm{LDR}} = 0.56$ with a p-value $= 0.03$, meaning there is no significant difference in the local surface density of the 10 most massive stars compared to all the stars.
The surface densities therefore display similar behaviour to INDICATE, which also shows a decrease in the measured index.
When we apply $\Lambda_{\rm{MSR}}$ to this region we detect a significant amount of mass segregation, with $\Lambda_{\rm{MSR}}~=~{8.87}_{\,-\,0.78}^{\,+\,0.74}$ (figure~\ref{fig:results_fractal}(i)). This is much lower than when swapping the most massive stars with the most clustered stars as measured with INDICATE, decreasing from over 30 in figure~\ref{fig:results_fractal}(h) to 8.87 in figure~\ref{fig:results_fractal}(i). This is because the areas of highest INDICATE index are concentrated in one area, whereas here the most central point lies in empty space, so the most massive stars are spread out around it.
The cumulative distribution of the positions of stars is shown in figure~\ref{fig:results_fractal}(l), which also shows the massive stars to be mass segregated and much closer to the centre than the average star. A KS test returns a p-value $\ll 0.01$.
\begin{figure*}
\subfigure[INDICATE, m]{\includegraphics[width=0.32\linewidth]{newfigs/fractal1d6_m.png}}
\hspace{0.8pt}
\subfigure[INDICATE, hmhi]{\includegraphics[width=0.32\linewidth]{newfigs/fractal1d6_hmhi.png}}
\hspace{0.8pt}
\subfigure[INDICATE, hmc]{\includegraphics[width=0.32\linewidth]{newfigs/fractal1d6_hmc.png}}
\hspace{0.8pt}
\subfigure[$\Sigma - m$, m]{\includegraphics[width=0.32\linewidth]{newfigs/frac1d6_sigma_m.png}}
\hspace{0.8pt}
\subfigure[$\Sigma - m$, hmhi]{\includegraphics[width=0.32\linewidth]{newfigs/frac1d6_sigma_hmhi.png}}
\hspace{0.8pt}
\subfigure[$\Sigma - m$, hmc]{\includegraphics[width=0.32\linewidth]{newfigs/frac1d6_sigma_hmc.png}}
\hspace{0.8pt}
\subfigure[$\Lambda_{\rm{MSR}}$, m]{\includegraphics[width=0.32\linewidth]{newfigs/correct_scale_yaxis_frac1d6_m.png}}
\hspace{0.8pt}
\subfigure[$\Lambda_{\rm{MSR}}$, hmhi]{\includegraphics[width=0.32\linewidth]{newfigs/newer_frac1d6_hmhi.png}}
\hspace{0.8pt}
\subfigure[$\Lambda_{\rm{MSR}}$, hmc]{\includegraphics[width=0.32\linewidth]{newfigs/newer_frac1d6_hmc.png}}
\hspace{0.8pt}
\subfigure[Radial distribution, m]{\includegraphics[width=0.32\linewidth]{newfigs/frac1d6_cdf_m.png}}
\hspace{0.8pt}
\subfigure[Radial distribution, hmhi]{\includegraphics[width=0.32\linewidth]{newfigs/frac1d6_cdf_hmhi.png}}
\hspace{0.8pt}
\subfigure[Radial distribution, hmc]{\includegraphics[width=0.32\linewidth]{newfigs/frac1d6_cdf_hmc.png}}
\caption{A synthetic fractal star-forming region of 1000 stars with a fractal dimension of 1.6. The rows are (i) the INDICATE values, (ii) the $\Sigma$-$m$ plots, (iii) the $\Lambda_{\rm{MSR}}$ plots and (iv) the cumulative distributions of radial distances from the centre of the star-forming region. From left to right, the columns are the region with randomly assigned masses (m), the highest masses moved to the highest INDICATE indexes (hmhi) and the highest masses moved to the centre (hmc). In panels (a), (b) and (c) the stars are colour mapped using the range of indexes found for the centrally concentrated region in figure~\ref{fig:results_radial}, as this region was found to have the greatest INDICATE index. In the colour bar the solid black line represents the significant index for the star-forming region, the dashed black line represents the median index for all of the stars and the dotted black line represents the median index of the 10 most massive stars. The 10 most massive stars in (a), (b) and (c) are highlighted with black crosses. The centre of the region is located in the middle of the black ring. In (d), (e) and (f) the median surface density of the stars is shown by the black dashed line, and the median surface density of the 10 most massive stars is shown by the red dash-dotted line. In (g), (h) and (i) the mass segregation ratio is shown by the black line; the horizontal dash-dotted red line shows the value of 1, corresponding to no mass segregation. In (j), (k) and (l) the black line represents the CDF of radial distance from the centre for all the stars, and the red dashed line is the CDF for the 10 most massive stars.}
\label{fig:results_fractal}
\end{figure*}
\subsection{Smooth, Centrally Concentrated Star-Forming Regions}
\label{sec:indicate radial results}
\subsubsection{Random Masses}
The region shown in figure~\ref{fig:results_radial}(a) has a clear central region where INDICATE has detected high levels of spatial clustering, finding that 44.1 per cent of stars are spatially clustered above random, with indexes greater than the significant index of 2.3. The median index for stars significantly clustered above random is $5.2_{-1.6}^{+5.2}$. The same values are also found for the \textit{hmhi} and \textit{hmc} configurations. In this case the most massive stars are spread out across the region with none of the 10 most massive stars located in the central area.
The maximum index for the entire region is 20.4, with a median index of $1.8_{-1.0}^{+3.0}$ for all stars and a median index of $2.0_{-0.8}^{+2.4}$ for the 10 most massive stars: neither the massive stars nor the rest of the population is typically in areas of non-random stellar affiliation, as both have median indexes below the significant index. A KS test confirms that there is no significant difference in their spatial clustering, with a p-value $= 0.55$.
Applying INDICATE to just the 50 most massive stars in the region we find that 52 per cent of stars have indexes above the significant index of 2.1, with a median index for the entire subset of 2.6 and 1.1 for the 10 most massive stars. A median index of $4.4_{-1.4}^{+0.4}$ is found for significantly clustered stars. A maximum INDICATE index of 5.2 is found for the 50 most massive stars. As the median index for the 10 most massive stars is below the significant index INDICATE detects no mass segregation in the region. A KS test returns a p-value $= 0.48$ implying no difference in the clustering tendencies of the 10 most massive stars compared to the rest in the subset.
Figure~\ref{fig:results_radial}(d) shows the local stellar surface density against mass plot for this region with the red dashed-dotted line showing the median surface density for the 10 most massive stars and the black dashed line showing the median surface density of all the stars.
A similar result is seen with $\Sigma_{\rm{LDR}} = 1.24$ with a p-value $= 0.63$ from the KS test, indicating the difference is not significant.
Figure~\ref{fig:results_radial}(g) shows the mass segregation ratio for the 10 most massive stars is $\Lambda_{\rm{MSR}} = {1.00}_{\,-\,0.23}^{\,+\,0.11}$ meaning that $\Lambda_{\rm{MSR}}$ finds no significant mass segregation for the 10 most massive stars in this region.
The cumulative distributions of positions are very similar between the most massive stars and the rest, with a p-value $= 0.67$. Figure~\ref{fig:results_radial}(j) shows the radial distribution of the 10 most massive stars in red; it closely matches the radial distribution of the entire region.
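The radial CDFs plotted in the bottom row of the figure can be built directly from the distances to the region centre; a toy sketch (assumed centre at the origin, synthetic data, not the authors' pipeline):

```python
import numpy as np
from scipy.stats import ks_2samp

def ecdf(x):
    """Empirical CDF: sorted values and cumulative fractions,
    as plotted in the bottom row of the figure."""
    xs = np.sort(x)
    return xs, np.arange(1, len(xs) + 1) / len(xs)

rng = np.random.default_rng(7)
xy = rng.normal(0.0, 1.0, size=(1000, 2))   # toy centrally concentrated region
masses = rng.pareto(2.3, size=1000) + 0.1   # toy mass function

r = np.hypot(xy[:, 0], xy[:, 1])            # radial distance from the centre
massive = np.argsort(masses)[-10:]          # the 10 most massive stars

r_all, cdf_all = ecdf(r)
r_m, cdf_m = ecdf(r[massive])
stat, p = ks_2samp(r[massive], r)           # compare the two radial distributions
print(f"p-value = {p:.2f}")
```

A steep CDF for the massive subset relative to the full sample, together with a small p-value, is the signature seen when the most massive stars are moved to the centre.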
\subsubsection{High Mass High Index}
As before, we swap the most massive stars to the areas of greatest clustering as measured by INDICATE. Figure~\ref{fig:results_radial}(b) shows the 10 most massive stars are now located in the centre of the region. The median index for the 10 most massive stars has increased from $2.0_{-0.8}^{+2.4}$ to $19.4_{-0.0}^{+0.8}$, with a p-value $\ll 0.01$ indicating a significant difference in the clustering tendencies between all the stars (which have a median index of 1.8) and the 10 most massive stars. Therefore, the method correctly detects that the 10 most massive stars are now located in areas of above-average stellar affiliation.
We apply INDICATE to just the 50 most massive stars in the region and find that 64 per cent of stars have indexes above the significant index of 2.1, with the subset having a median index of $5.7_{-5.0}^{+0.3}$ and a median index of $6.0_{-0.0}^{+0.2}$ for the 10 most massive stars. The median index for the significantly clustered stars is $6.0_{-0.2}^{+0.0}$. A maximum INDICATE index of 6.2 is found for the 50 most massive stars. As the median index for the 10 most massive stars is larger than the significant index, INDICATE has detected mass segregation. A KS test returns a p-value $\ll 0.01$, implying a significant difference in the clustering tendencies of the 10 most massive stars despite the median indexes being similar.
Figure~\ref{fig:results_radial}(e) shows the clear difference between the median surface density of the entire region (black dashed line) and the median surface density of the 10 most massive stars (red dash-dotted line). $\Sigma_{\rm{LDR}} = 17$ (increasing from 1.24) with a p-value $\ll 0.01$, in agreement with INDICATE that the 10 most massive stars are found in areas of greater stellar clustering compared to the rest of the population.
$\Lambda_{\rm{MSR}}$ detects significant mass segregation with $\Lambda_{\rm{MSR}} = {28.23}_{\,-\,7.20}^{\,+\,3.05}$, increasing from 1.00 in the same region with randomly assigned masses, in agreement with \mbox{INDICATE} applied to the 50 most massive stars. As in the fractal region in figure~\ref{fig:results_fractal}, there is one area of high clustering, so the most massive stars are moved closer together as a result.
This is also reflected in the cumulative distribution of positions (figure~\ref{fig:results_radial}(k)), with a much steeper function for the 10 most massive stars and a p-value $\ll$ 0.01; this is very similar to the results in figure~\ref{fig:results_fractal}(k) for a fractal distribution.
\subsubsection{High Mass Centre}
Figure~\ref{fig:results_radial}(c) highlights that the most massive stars are now closer to each other after being swapped with the stars closest to the origin. The median index for the 10 most massive stars is $18.8_{-0.0}^{+0.2}$, larger than the median index for the region of $1.8_{-1.0}^{+3.0}$, meaning the 10 most massive stars find themselves in locations of greater than average stellar affiliation. A KS test returns a p-value $\ll 0.01$, finding a significant difference in the clustering tendencies of the 10 most massive stars compared to the rest.
We apply INDICATE to the 50 most massive stars and find that 62 per cent of them have indexes above the significant index of 2.1, with a median index of $5.6_{-4.9}^{+0.4}$ for the entire subset and $6.0_{-0.0}^{+0.0}$ for the 10 most massive stars. The median index found for the significantly clustered stars is again $6.0_{-0.2}^{+0.0}$. A maximum INDICATE index of 6.2 is found for the 50 most massive stars. The median index for the 10 most massive stars is above the significant index, and so the region is found to be mass segregated by INDICATE. A KS test returns a p-value $\ll 0.01$, implying a significant difference between the spatial clustering of the 10 most massive stars and that of the 50 most massive stars.
In figure~\ref{fig:results_radial}(f) the most massive stars find themselves in areas of much higher surface density than when they were moved to areas of greatest INDICATE index. $\Sigma_{\rm{LDR}}$ increases from 17 to 174.72, with a p-value $\ll$ 0.01. As the median INDICATE index for the 10 most massive stars has decreased from 19.4 to 18.8, this demonstrates that INDICATE is not measuring exactly the same quantity as the local stellar surface density: if it were, we would expect the median index of the 10 most massive stars to be larger than 19.4. This may be because the origin of the region also happens to be located in the area of greatest local stellar density, which is not the case for the substructured regions.
$\Lambda_{\rm{MSR}} = {120.90}_{\,-\,33.05}^{\,+\,15.71}$, signifying significant mass segregation for the 10 most massive stars (see figure~\ref{fig:results_radial}(i)).
Figure~\ref{fig:results_radial}(l) shows that the 10 most massive stars are now much closer to the centre of the region than before. A KS test returns a p-value $\ll 0.01$, implying that there is a significant difference in the spatial distribution of the 10 most massive stars compared to the rest.
\begin{figure*}
\subfigure[INDICATE, m]{\includegraphics[width=0.32\linewidth]{newfigs/radial2d0_m.png}}
\hspace{0.8pt}
\subfigure[INDICATE, hmhi]{\includegraphics[width=0.32\linewidth]{newfigs/radial2d0_hmhi.png}}
\hspace{0.8pt}
\subfigure[INDICATE, hmc]{\includegraphics[width=0.32\linewidth]{newfigs/radial2d0_hmc.png}}
\hspace{0.8pt}
\subfigure[$\Sigma - m$, m]{\includegraphics[width=0.32\linewidth]{newfigs/rad2d0_sigma_m.png}}
\hspace{0.8pt}
\subfigure[$\Sigma - m$, hmhi]{\includegraphics[width=0.32\linewidth]{newfigs/rad2d0_sigma_hmhi.png}}
\hspace{0.8pt}
\subfigure[$\Sigma - m$, hmc]{\includegraphics[width=0.32\linewidth]{newfigs/rad2d0_sigma_hmc.png}}
\hspace{0.8pt}
\subfigure[$\Lambda_{\rm{MSR}}$, m]{\includegraphics[width=0.32\linewidth]{newfigs/newer_rad2d0_m.png}}
\hspace{0.8pt}
\subfigure[$\Lambda_{\rm{MSR}}$, hmhi]{\includegraphics[width=0.32\linewidth]{newfigs/newer_rad2d0_hmhi.png}}
\hspace{0.8pt}
\subfigure[$\Lambda_{\rm{MSR}}$, hmc]{\includegraphics[width=0.32\linewidth]{newfigs/newer_rad2d0_hmc.png}}
\hspace{0.8pt}
\subfigure[Radial distribution, m]{\includegraphics[width=0.32\linewidth]{newfigs/rad2d0_cdf_m.png}}
\hspace{0.8pt}
\subfigure[Radial distribution, hmhi]{\includegraphics[width=0.32\linewidth]{newfigs/rad2d0_cdf_hmhi.png}}
\hspace{0.8pt}
\subfigure[Radial distribution, hmc]{\includegraphics[width=0.32\linewidth]{newfigs/rad2d0_cdf_hmc.png}}
\caption{A synthetic, centrally concentrated star-forming region of 1000 stars with radial density exponent $\alpha = 2.0$. The rows are (i) the INDICATE values, (ii) the $\Sigma$-m plots, (iii) the $\Lambda_{\rm{MSR}}$ plots and (iv) the cumulative distributions of radial distances from the centre of the star-forming region. From left to right the columns are the region with randomly assigned masses (m), highest mass stars moved to locations of highest INDICATE index (hmhi) and highest mass stars moved to centre of the region (hmc). Panels (a), (b) and (c) show the indexes found for each star; in the colour bar the significant INDICATE index is shown with the solid black line, the median index for the entire region is shown by the dashed black line and the median index for the 10 most massive stars is shown with the dotted black line. The 10 most massive stars in (a), (b) and (c) are highlighted with black crosses. The centre of the region is located in the middle of the black ring. In (d), (e) and (f) the median surface density of the region is shown by the black dashed line and the median surface density of the 10 most massive stars is shown by the red dash-dotted line. In (g), (h) and (i) the mass segregation ratio is shown by the black line and the horizontal dash-dotted red line shows the value of 1 corresponding to no mass segregation. In (j), (k) and (l) the black line represents the CDF of radial distances from the centre of the region for all the stars and the red dashed line is the CDF for the 10 most massive stars.}
\label{fig:results_radial}
\end{figure*}
\subsection{Uniform Star-Forming Regions}
\label{sec:indicate uniform results}
\subsubsection{Random Masses}
Figure~\ref{fig:results_uniform_cluster}(a) shows the INDICATE results for a uniform distribution with randomly assigned masses. The maximum index is 2.4 with a median INDICATE index for the entire region of $1.0_{-0.4}^{+0.2}$ and $0.8_{-0.0}^{+0.4}$ for the 10 most massive stars. There is no significant difference between the 10 most massive stars and the rest of the region with a p-value $= 0.68$. In this region one star has an INDICATE index equal to the significant index of 2.4.
Applying INDICATE to just the 50 most massive stars we find that no stars have an index above the significant index of 2.1. The median index for the subset is $0.8_{-0.4}^{+0.2}$ and the median index of the 10 most massive stars is $0.8_{-0.2}^{+0.3}$. A maximum INDICATE index of 1.4 is found for the 50 most massive stars. As the median index for the 10 most massive stars is below the significant index, INDICATE detects no mass segregation in the region. A KS test returns a p-value $= 1.00$, confirming that the 10 most massive stars and the entire subset have similar clustering tendencies.
In figure~\ref{fig:results_uniform_cluster}(d) the most massive stars find themselves in similar areas of surface density as the rest of the stars in the region with $\Sigma_{\rm{LDR}} = 0.94$. No significant difference is detected with a KS test returning a p-value $= 0.59$.
Figure~\ref{fig:results_uniform_cluster}(g) shows, as expected, a very weak signal of $\Lambda_{\rm{MSR}} = {1.02}_{\,-\,0.09}^{\,+\,0.11}$ meaning no mass segregation.
Figure~\ref{fig:results_uniform_cluster}(j) shows the cumulative distribution of positions starting further out for the 10 most massive stars than for the entire region before quickly matching the overall distribution. No significant difference is detected, with a p-value $= 0.60$.
\subsubsection{High Mass High Index}
We now move the 10 most massive stars to the areas of highest INDICATE index (figure~\ref{fig:results_uniform_cluster}(b)). Unlike in the centrally concentrated and fractal star-forming regions, there is more than one area with relatively high indexes. The median index for the 10 most massive stars has increased from $0.8_{-0.0}^{+0.4}$ to $2.2_{-0.2}^{+0.0}$, with a p-value $\ll 0.01$ when comparing the 10 most massive stars to the entire region, suggesting a significant difference. As both the median index for the region and the median index for the 10 most massive stars are below the significant index of 2.4, all the stars still have a random spatial distribution according to INDICATE.
We apply INDICATE to just the 50 most massive stars and find that no stars have an index above the significant index of 2.1. The median index of the entire subset is $0.7_{-0.3}^{+0.5}$ and for the 10 most massive it is $1.1_{-0.4}^{+0.1}$. A maximum INDICATE index of 1.4 is once again found for the 50 most massive stars. As the median index for the 10 most massive stars is below the significant index, INDICATE detects no mass segregation. A KS test returns a p-value $= 0.67$, implying no difference in the distribution of the 10 most massive stars compared to the entire subset.
Figure~\ref{fig:results_uniform_cluster}(e) shows that the median local stellar surface density of the 10 most massive stars is greater than the median local stellar surface density of the region, with $\Sigma_{\rm{LDR}} = 2.11$ (increasing from 0.94), and a KS test returns a p-value $\ll 0.01$, indicating a significant difference between the local stellar surface density of the 10 most massive stars and that of all the stars.
Figure~\ref{fig:results_uniform_cluster}(h) shows that $\Lambda_{\rm{MSR}} = {1.32}_{\,-\,0.07}^{\,+\,0.23}$ for the 10 most massive stars, suggesting that weak mass segregation is being detected, before decreasing as more stars are added to the subset. This is due to the random picking of stars for the subset MST. \citet{parker_comparisons_2015} suggest ignoring results of $\Lambda_{\rm{MSR}} < 2$ to avoid false positives such as this.
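This false-positive mechanism can be seen in a minimal $\Lambda_{\rm{MSR}}$ sketch (an illustration under the standard MST definition, with toy data; not the authors' implementation): the MST length of the most massive stars is compared with the mean MST length of many random subsets of the same size, whose scatter sets the uncertainty.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.sparse.csgraph import minimum_spanning_tree

def mst_length(xy):
    """Total edge length of the Euclidean minimum spanning tree."""
    dist = squareform(pdist(xy))
    return minimum_spanning_tree(dist).sum()

def lambda_msr(xy, masses, n_massive=10, n_trials=200, seed=0):
    """Lambda_MSR: mean MST length of random subsets over the MST
    length of the n_massive most massive stars."""
    rng = np.random.default_rng(seed)
    massive = np.argsort(masses)[-n_massive:]
    l_massive = mst_length(xy[massive])
    l_random = np.array([
        mst_length(xy[rng.choice(len(xy), n_massive, replace=False)])
        for _ in range(n_trials)
    ])
    return l_random.mean() / l_massive, l_random.std() / l_massive

rng = np.random.default_rng(3)
xy = rng.uniform(-1, 1, size=(300, 2))      # toy uniform region
masses = rng.pareto(2.3, size=300) + 0.1    # toy mass function

lam, err = lambda_msr(xy, masses)
print(f"Lambda_MSR = {lam:.2f} +/- {err:.2f}")
```

For a uniform region with randomly assigned masses the ratio scatters around 1, and individual random draws can push it somewhat above 1, which is the kind of weak signal \citet{parker_comparisons_2015} recommend discounting.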
In figure~\ref{fig:results_uniform_cluster}(k) the cumulative distributions of positions are shown for the 10 most massive stars in red and all stars in black. A KS test returns a p-value $= 0.19$ showing no significant difference.
\subsubsection{High Mass Centre}
The 10 most massive stars are now swapped with the 10 most central stars, as shown in figure~\ref{fig:results_uniform_cluster}(c).
The median index for the 10 most massive stars is $0.8_{-0.0}^{+0.1}$ (the same as when masses are randomly assigned), and similarly there is no significant difference in the spatial clustering of the most massive stars compared to the rest of the stars, with a \mbox{p-value $=0.64$}.
We apply INDICATE to just the 50 most massive stars and find that 10 per cent of stars have an index above the significant index of 2.1. The median index for the significantly clustered stars is $2.4_{-0.2}^{+0.0}$. The median index for the subset is found to be $0.8_{-0.4}^{+0.4}$ and $2.0_{-0.4}^{+0.4}$ for the 10 most massive stars. A maximum INDICATE index of 2.4 is found for the 50 most massive stars. As the median index for the 10 most massive stars is below the significant index, INDICATE considers them randomly distributed, and so no mass segregation has been detected.
Figure~\ref{fig:results_uniform_cluster}(f) shows the local surface density against mass plot. We find $\Sigma_{\rm{LDR}} = 0.88$ (lower than when masses are swapped with stars of greatest INDICATE index), with a KS test giving a p-value $= 0.49$, implying no significant difference in the surface density of the most massive stars compared to all the stars.
Figure~\ref{fig:results_uniform_cluster}(i) shows the $\Lambda_{\rm{MSR}}$ results; the value is lower than for the other regions with the highest-mass stars moved to the centre. $\Lambda_{\rm{MSR}} = {8.25}_{\,-\,1.36}^{\,+\,0.88}$, meaning mass segregation is detected for the 10 most massive stars, and this quickly drops off as the rest of the stars are uniformly distributed. This contrasts with the INDICATE result, which finds no mass segregation using the given criteria but does find a significant difference in the clustering of the 10 most massive stars compared to the entire subset.
Figure~\ref{fig:results_uniform_cluster}(l) shows the radial cumulative distributions of positions. The 10 most massive stars show a similar trend to the fractal and smooth star-forming regions, with a much steeper function when the most massive stars are swapped with the most central stars. A KS test returns a p-value $\ll 0.01$.
\subsection{Summary}
The INDICATE method has clearly identified regions of clustering in the synthetic datasets.
INDICATE gives results that are in agreement with $\Sigma_{\rm{LDR}}$ when applied to the entire region and results that are generally in agreement with $\Lambda_{\rm{MSR}}$ when applied to only the 50 most massive stars in the synthetic star-forming regions. The INDICATE results when applied to all 1000 stars in a region are summarised in table~\ref{tab:indicate_results} and the results when applied only to the 50 most massive stars are shown in table~\ref{tab:indicate_results_50_most_massive}. The results for the $\Sigma_{\rm{LDR}}$, $\Lambda_{\rm{MSR}}$ and CDF methods are summarised in table~\ref{tab:other_results}.
{\renewcommand{\arraystretch}{1.5}
\begin{table}
\centering
\caption{Results of INDICATE being applied to all stars in the synthetic star-forming regions. From left to right the columns are: the median index for all stars in the region, the median index for the 10 most massive stars, the significant index, the percentage of stars with indexes greater than the significant index and the p-value returned from a KS test comparing the indexes between the 10 most massive stars and all stars in the region. The null hypothesis is rejected when p-value $\ll 0.01$.}
\begin{tabular}{l|ccccc}
\hline
Region & $\Tilde{I}_{\rm{all}}$ & $\Tilde{I}_{\rm{10}}$ & $I_{\rm{sig}}$ & $\%>I_{\rm{sig}}$ & p\\ \hline
$D=1.6$, m & $4.4_{-1.4}^{+2.6}$ & $4.5_{-0.6}^{+3.3}$ & 2.3 & 82.2 & 0.90\\
$D=1.6$, hmhi & $4.4_{-1.4}^{+2.6}$ & $15.3_{-0.4}^{+0.2}$ & 2.3 & 82.2 & $\ll 0.01$\\
$D=1.6$, hmc & $4.4_{-1.4}^{+2.6}$ & $2.2_{-0.0}^{+0.0}$ & 2.3 & 82.2 & $\ll 0.01$\\
$\alpha=2.0$, m & $1.8_{-1.0}^{+3.0}$ & $2.0_{-0.8}^{+2.4}$ & 2.3 & 44.1 & 0.55\\
$\alpha=2.0$, hmhi & $1.8_{-1.0}^{+3.0}$ & $19.4_{-0.0}^{+0.8}$ & 2.3 & 44.1 & $\ll 0.01$\\
$\alpha=2.0$, hmc & $1.8_{-1.0}^{+3.0}$ & $18.8_{-0.0}^{+0.2}$ & 2.3 & 44.1 & $\ll 0.01$\\
Uniform, m & $1.0_{-0.4}^{+0.2}$ & $0.8_{-0.0}^{+0.4}$ & 2.4 & 0.0 & 1.00\\
Uniform, hmhi & $1.0_{-0.4}^{+0.2}$ & $2.2_{-0.2}^{+0.0}$ & 2.4 & 0.0 & $\ll 0.01$\\
Uniform, hmc & $1.0_{-0.4}^{+0.2}$ & $0.8_{-0.0}^{+0.1}$ & 2.4 & 0.0 & 0.64\\
\hline
\end{tabular}
\label{tab:indicate_results}
\end{table}}
{\renewcommand{\arraystretch}{1.5}
\begin{table}
\centering
\caption{Results of applying INDICATE to just the 50 most massive stars in the synthetic regions. From left to right the columns are: the median index for all 50 stars, the median index for the 10 most massive stars, the significant index, the percentage of stars with indexes greater than the significant index and the p-value from a KS test between all 50 stars and the 10 most massive stars.}
\begin{tabular}{l|ccccc}
\hline
Region & $\Tilde{I}_{\rm{50}}$ & $\Tilde{I}_{\rm{10}}$ & $I_{\rm{sig}}$ & $\%>I_{\rm{sig}}$ & p\\ \hline
$D=1.6$, m & $1.4_{-0.6}^{+0.6}$ & $1.6_{-0.8}^{+0.4} $ & 2.1 & 22 & 1.00\\
$D=1.6$, hmhi & $1.3_{-0.5}^{+2.3}$ & $3.6_{-0.0}^{+0.0} $ & 2.1 & 38 & $\ll 0.01$\\
$D=1.6$, hmc & $1.4_{-0.4}^{+0.8}$ & $2.6_{-0.2}^{+0.0} $ & 2.1 & 28 & $\ll 0.01$\\
$\alpha=2.0$, m & $2.6_{-2.0}^{+1.8}$ & $1.1_{-0.7}^{+2.0} $ & 2.1 & 52 & 0.48\\
$\alpha=2.0$, hmhi & $5.7_{-5.0}^{+0.3}$ & $6.0_{-0.0}^{+0.2} $ & 2.1 & 64 & $\ll 0.01$\\
$\alpha=2.0$, hmc & $5.6_{-4.9}^{+0.4}$ & $6.0_{-0.0}^{+0.0} $ & 2.1 & 62 & $\ll 0.01$\\
Uniform, m & $0.8_{-0.4}^{+0.2}$ & $0.8_{-0.2}^{+0.3} $ & 2.1 & 0 & 1.00\\
Uniform, hmhi & $0.7_{-0.3}^{+0.5}$ & $1.1_{-0.4}^{+0.1}$ & 2.1 & 0 & 0.67\\
Uniform, hmc & $0.8_{-0.4}^{+0.4}$ & $2.0_{-0.4}^{+0.4} $ & 2.1 & 10 & 0.01\\
\hline
\end{tabular}
\label{tab:indicate_results_50_most_massive}
\end{table}}
{\renewcommand{\arraystretch}{1.5}
\begin{table}
\centering
\caption{Results of the other methods being applied to all stars in the synthetic star-forming regions. From left to right the columns are: the local stellar surface density ratio, the p-value from a KS comparing the median local stellar surface density of the 10 most massive stars to the median local stellar surface density of the entire region, the mass segregation ratio and the p-value of a KS test comparing the CDF of positions of the 10 most massive stars and all the stars in each region.}
\begin{tabular}{l|cccc}
\hline
Region & $\Sigma_{\rm{LDR}}$ & $\Sigma_{\rm{LDR}}$ (p) & $\Lambda_{\rm{MSR}}$ & CDF (p) \\ \hline
$D=1.6$, m & 1.30 & 0.69 & $0.97_{\rm{\,-\,0.11}}^{\rm{\,+\,0.09}}$ & $\ll 0.01$ \\
$D=1.6$, hmhi & 2.30 & $\ll 0.01$ & $33.74_{\rm{\,-\,5.27}}^{\rm{\,+\,2.54}}$ & $\ll 0.01$ \\
$D=1.6$, hmc & 0.56 & 0.03 & $8.87_{\rm{\,-\,0.78}}^{\rm{\,+\,0.74}}$ & $\ll 0.01$ \\
$\alpha=2.0$, m & 1.24 & 0.63 & $1.00_{\rm{\,-\,0.23}}^{\rm{\,+\,0.11}}$ & 0.67 \\
$\alpha=2.0$, hmhi & 17.00 & $\ll 0.01$ & $28.23_{\rm{\,-\,7.20}}^{\rm{\,+\,3.05}}$ & $\ll 0.01$ \\
$\alpha=2.0$, hmc & 174.72 & $\ll 0.01$ & $120.90_{\rm{\,-\,33.05}}^{\rm{\,+\,15.71}}$ & $\ll 0.01$ \\
Uniform, m & 0.94 & 0.59 & $1.02_{\rm{\,-\,0.09}}^{\rm{\,+\,0.11}}$ & 0.60 \\
Uniform, hmhi & 2.11 & $\ll 0.01$ & $1.32_{\rm{\,-\,0.07}}^{\rm{\,+\,0.23}}$ & 0.19 \\
Uniform, hmc & 0.88 & 0.49 & $8.25_{\rm{\,-\,1.36}}^{\rm{\,+\,0.88}}$ & $\ll 0.01$ \\
\hline
\end{tabular}
\label{tab:other_results}
\end{table}}
\begin{figure*}
\subfigure[INDICATE, m]{\includegraphics[width=0.32\linewidth]{newfigs/uniform_m.png}}
\hspace{0.8pt}
\subfigure[INDICATE, hmhi]{\includegraphics[width=0.32\linewidth]{newfigs/uniform_hmhi.png}}
\hspace{0.8pt}
\subfigure[INDICATE, hmc]{\includegraphics[width=0.32\linewidth]{newfigs/uniform_hmc.png}}
\hspace{0.8pt}
\subfigure[$\Sigma - m$, m]{\includegraphics[width=0.32\linewidth]{newfigs/uni_sigma_m.png}}
\hspace{0.8pt}
\subfigure[$\Sigma - m$, hmhi]{\includegraphics[width=0.32\linewidth]{newfigs/uni_sigma_hmhi.png}}
\hspace{0.8pt}
\subfigure[$\Sigma - m$, hmc]{\includegraphics[width=0.32\linewidth]{newfigs/uni_sigma_hmc.png}}
\hspace{0.8pt}
\subfigure[$\Lambda_{\rm{MSR}}$, m]{\includegraphics[width=0.32\linewidth]{newfigs/correct_scale_yaxis_uniform_m.png}}
\hspace{0.8pt}
\subfigure[$\Lambda_{\rm{MSR}}$, hmhi]{\includegraphics[width=0.32\linewidth]{newfigs/newer_uni_hmhi.png}}
\hspace{0.8pt}
\subfigure[$\Lambda_{\rm{MSR}}$, hmc]{\includegraphics[width=0.32\linewidth]{newfigs/newer_uni_hmc.png}}
\hspace{0.8pt}
\subfigure[Radial distributions, m]{\includegraphics[width=0.32\linewidth]{newfigs/uni_cdf_m.png}}
\hspace{0.8pt}
\subfigure[Radial distributions, hmhi]{\includegraphics[width=0.32\linewidth]{newfigs/uni_cdf_hmhi.png}}
\hspace{0.8pt}
\subfigure[Radial distributions, hmc]{\includegraphics[width=0.32\linewidth]{newfigs/uni_cdf_hmc.png}}
\caption{Uniform distribution of 1000 stars. The rows are (i) the INDICATE values, (ii) the $\Sigma$-m plots, (iii) the $\Lambda_{\rm{MSR}}$ plots and (iv) the cumulative distributions of radial distances from the centre of the star-forming region. From left to right, we show the distribution with randomly assigned masses (m), highest mass moved to highest INDICATE index (hmhi), highest masses moved to centre (hmc). The colour bars in panels (a), (b) and (c) show the INDICATE index, with the solid black line representing the significant index, the dashed black line represents the median index for the region and the dotted black line represents the median index of the 10 most massive stars. The 10 most massive stars in (a),(b) and (c) are highlighted with black crosses. The centre of the region is located in the middle of the black ring. In (d), (e) and (f) the median surface density of all stars is shown by the black dashed line, the median surface density of the 10 most massive stars is shown by the red dash-dotted line. In (g), (h) and (i) the mass segregation ratio is shown by the black line and the horizontal dashed dotted red line shows the value of 1 corresponding to no mass segregation. In (j), (k) and (l) the black line represents the CDF of radial distance from the centre for all of the stars and the red dashed line is the CDF for the 10 most massive stars.}
\label{fig:results_uniform_cluster}
\end{figure*}
\section{Applying INDICATE to Observational Data}
\label{section:observational_data_indicate_results}
We now apply INDICATE to the following star-forming regions: Taurus, the Orion Nebula Cluster (ONC), NGC1333, IC348 and $\rho$ Ophiuchi. The ONC catalogue may be incomplete due to saturation caused by its high stellar density, and may therefore be missing the most massive stars; however, \citet{hillenbrand_preliminary_1998} find that while the optical point-source component of the ONC is incomplete, it is near complete when combined with infrared data (see $\S$~2 of \citet{hillenbrand_preliminary_1998}).
For the ONC, NGC1333 and IC348 we removed stars without known masses when performing KS tests between the 10 most massive stars and all of the stars in the regions.
The results of applying INDICATE to all points in the observational data sets are presented in table~\ref{tab:results_table_ks_ob}. The results of applying INDICATE to just the 50 most massive stars in these regions are shown in \textit{Appendix}~\ref{sec:classical_massseg_observational_data}.
{\renewcommand{\arraystretch}{1.5}
\begin{table}
\centering
\caption{Results of applying INDICATE to all stars in the observational regions. From left to right the columns are: the median index for all the stars, the median index for the 10 most massive stars in the region, the significant index, the percentage of stars that are significantly clustered above random and the p-value from a KS test comparing the 10 most massive stars to all stars in each region.}
\label{tab:results_table_ks_ob}
\begin{tabular}{l|ccccl}
\hline
Name & $\Tilde{I}_{\rm{all}}$ & $\Tilde{I}_{\rm{10}}$ & $I_{\rm{sig}}$ & $\% > I_{\rm{sig}}$ & p \\
\hline
Taurus & $6.6_{-1.6}^{+1.8}$ & $3.6_{-0.2}^{+3.3} $ & 2.1 & 85.9 & 0.07 \\
ONC & $1.4_{-0.8}^{+3.0}$ & $10.3_{-5.8}^{+16.5} $ & 2.4 & 46.1 & 0.003 \\
NGC1333 & $5.1_{-2.9}^{+2.9}$ & $5.9_{-1.5}^{+2.4} $ & 2.4 & 73.9 & 0.57 \\
IC348 & $3.2_{-1.8}^{+3.5}$ & $2.6_{-0.4}^{+8.3} $ & 2.3 & 59.2 & 0.63 \\
$\rho$ Ophiuchi & $1.8_{-1.0}^{+1.2}$ & $1.3_{-0.7}^{+1.1} $ & 2.2 & 39.6 & 0.82 \\
\hline
\end{tabular}
\end{table}}
\subsection{Taurus}
The Taurus star-forming region is located 140 pc away with an estimated age of around $1$ Myr \citep[][]{bell_pre-main-sequence_2013}. We use the dataset from \citet{parker_mass_2011}, comprising 361 objects. The masses of the 10 most massive stars are between $1.9 \, M_\odot$ and $4.1 \, M_\odot$ (masses are calculated in $\S$~2 of \citet{parker_mass_2011}).
\begin{figure}
\includegraphics[width=\columnwidth]{newfigs/taurus_indicate_1.png}
\caption{The Taurus star-forming region. The ten most massive stars are highlighted with black crosses. In the colour bar the significant INDICATE index is shown by the solid black line, the median index for all the stars is shown by the dashed black line and the median index for the 10 most massive stars is shown using the dotted black line.}
\label{fig:fig_taurus_indicate}
\end{figure}
Previous studies of Taurus find a fractal dimension $D$ of $1.55 \pm 0.25$, inferred from its $Q$-parameter value of 0.45 \citep{cartwright_statistical_2004}. \citet{parker_mass_2011} find a mass segregation ratio $\Lambda_{\rm{MSR}} = 0.70 \pm 0.10$ for the 20 most massive stars.
Figure~\ref{fig:fig_taurus_indicate} shows Taurus after INDICATE is applied, revealing that 85.9 per cent of stars are spatially clustered above random, with indexes greater than the significant index of 2.1. The median index of significantly clustered stars is $6.8_{-0.8}^{+1.8}$. A maximum INDICATE index of 14.6 is found for the region. Taurus has a median index of $6.6_{-1.6}^{+1.8}$ for the entire region and $3.6_{-0.2}^{+3.3}$ for the 10 most massive stars. The most massive stars are highlighted with crosses in figure~\ref{fig:fig_taurus_indicate}; they are spread out across the star-forming region (see also \citet{parker_mass_2011}), with most lying in less clustered areas. INDICATE detects no significant difference in the distribution of indexes between the most massive stars and all stars, with a \mbox{p-value $= 0.07$}.
\subsection{ONC}
We use the dataset from \citet{hillenbrand_preliminary_1998}, which contains 1576 objects. The ONC is a very dense, centrally concentrated region, shown in figure~\ref{fig:fig_onc_indicate}. The line of empty space to the south of the area of highest index, extending from the north-east to the south-west, is due to a band of extinction. 641 objects do not have an assigned mass in the dataset and are therefore removed when comparing the indexes between the 10 most massive stars and the entire region. The masses of the 10 most massive stars are between $5.7 \, M_\odot$ and $45.7 \, M_\odot$. The distance to the ONC is about 400~pc, with an estimated age of around 1~Myr \citep[][]{jeffries_no_2011, reggiani_quantitative_2011}. The mass segregation ratio of the ONC as found using the 4 most massive stars is $\Lambda_{\rm{MSR}} = 8.0 \pm 3.5$.
We determine the $Q$-parameter separately for stars with mass measurements, stars without, and the full sample, and find no significant difference between them, suggesting that the three subsets follow the same spatial distribution. The median index for the ONC is $1.4_{-0.8}^{+3.0}$ and the median index for the 10 most massive stars is $10.3_{-5.8}^{+16.5}$, with 46.1 per cent of stars spatially clustered above random (the ONC has a significant index of 2.4). A median index of $8.6_{-4.8}^{+8.4}$ is found for significantly clustered stars. A maximum INDICATE index of 28.8 is found for the ONC. These results are similar to the synthetic region shown in figure~\ref{fig:results_radial}. A KS test gives a p-value of 0.003, below our chosen threshold of 0.01, meaning that the 10 most massive stars have different clustering tendencies when compared to the entire region.
\begin{figure}
\includegraphics[width=\columnwidth]{newfigs/ONC_indicate_1.png}
\caption{The Orion Nebula Cluster. The ten most massive stars are highlighted with black crosses. In the colour bar the significant index is shown with the solid black line, the median index of all the stars is shown by the dashed black line and the median index for the 10 most massive stars is shown by the dotted black line.}
\label{fig:fig_onc_indicate}
\end{figure}
\subsection{NGC 1333}
The NGC 1333 star-forming region (shown in figure~\ref{fig:fig_NGC1333_indicate}) contains 203 objects, 162 of which have an assigned mass in the dataset used by \citet{parker_dynamical_2017}. The masses of the 10 most massive stars are between $1.1 \, M_\odot$ and $3.3 \, M_\odot$. The distance to the region is 235~pc with an age of around 1~Myr \citep{parker_dynamical_2017,pavlidou_substructure_2021}. \citet{parker_dynamical_2017} find a mass segregation ratio of $\Lambda_{\rm{MSR}} = 1.2_{-0.3}^{+0.4}$, implying no mass segregation is present among the 10 most massive stars.
\begin{figure}
\includegraphics[width=\columnwidth]{newfigs/ngc1333_indicate_1.png}
\caption{The NGC1333 star-forming region. The ten most massive stars are highlighted with black crosses. The significant index from INDICATE is shown in the colour bar by the solid black line, the median index for all the stars is shown by the dashed black line and the median index for the 10 most massive stars is shown by the dotted black line.}
\label{fig:fig_NGC1333_indicate}
\end{figure}
INDICATE finds that 74 per cent of stars are spatially clustered above random (the region has a significant index of 2.4). The median index of significantly clustered stars is $7.4_{-2.8}^{+1.2}$. A maximum INDICATE index of 9.8 is found. INDICATE has highlighted an extended central region of relatively high spatial clustering, with the most massive stars spread out around this region. A median index of $5.1_{-2.9}^{+2.9}$ is found for all stars and for the 10 most massive stars a median index of $5.9_{-1.5}^{+2.4}$ is found with a p-value $= 0.57$. This implies no significant difference in the spatial clustering of the most massive stars compared to all the stars.
\subsection{IC 348}
The data from \citet{parker_dynamical_2017} contains 478 objects for IC 348, 19 of which do not have an assigned mass in the dataset and are ignored when comparing the clustering tendencies of the most massive stars and all stars.
The results of running INDICATE on this region are shown in figure~\ref{fig:fig_ic348_indicate}, clearly showing a central region of relatively higher spatial clustering. The distance to IC 348 is around 300~pc \citep{parker_dynamical_2017} with an age between $2-6$~Myr \citep{cartwright_statistical_2004, bell_pre-main-sequence_2013}. IC 348 has been previously investigated in \citet{parker_dynamical_2017} using the $Q$-parameter to determine overall structure. It was found to have a $Q$-value of 0.85, corresponding to a smooth and centrally concentrated distribution with a radial density exponent of $\alpha = 2.5$. \citet{parker_dynamical_2017} find a mass segregation ratio of $\Lambda_{\rm{MSR}} = 1.1_{-0.3}^{+0.2}$ for the 10 most massive stars, meaning no mass segregation is detected.
\begin{figure}
\includegraphics[width=\columnwidth]{newfigs/ic348_indicate_1.png}
\caption{IC348 star-forming region. The ten most massive stars are highlighted with black crosses. The significant index from INDICATE is shown by the solid black line in the colour bar, the median index for all the stars is shown with the dashed black line and the median index for the 10 most massive stars is shown with the dotted black line.}
\label{fig:fig_ic348_indicate}
\end{figure}
INDICATE is applied to IC 348, finding 59.2 per cent of stars to be spatially clustered above random and a significant index of 2.3 for the region. The median index for significantly clustered stars is $5.6_{-2.0}^{+4.0}$, and a maximum index of 14.6 is found. Median indexes of $3.2_{-1.8}^{+3.5}$ and $2.6_{-0.4}^{+8.3}$ are found for the entire region and the 10 most massive stars, respectively. The masses of the 10 most massive stars are between $2.4 \, M_\odot$ and $4.7 \, M_\odot$. A KS test between the 10 most massive stars and the rest of the stars gives a p-value $= 0.63$, meaning no significant difference in the distribution of the most massive stars and the rest. This is because the most massive stars are spread out over the entire star-forming region: 5 of them are located within the central region, 4 are found between the edge of this region and the outskirts, and one is found right at the edge of the plot.
\subsection{$\rho$ Ophiuchi}
$\rho$ Ophiuchi is located around 130~pc away with an age of around $0.3 - 2.0$ Myr \citep[][]{parker_search_2012, bontemps_isocam_2001}. We use the dataset from \citet{parker_search_2012}, which contains 255 objects. In \citet{parker_search_2012} the mass segregation ratio is found to be $\Lambda_{\rm{MSR}} = 0.89_{-0.13}^{+0.09}$ for the 20 most massive stars, implying no mass segregation is present in the region.
\begin{figure}
\includegraphics[width=\columnwidth]{newfigs/rhooph_indicate_1.png}
\caption{The $\rho$~Oph star-forming region. The ten most massive stars are highlighted with black crosses. The significant index from INDICATE is shown by the solid black line in the colour bar, the median index for all the stars is shown by the dashed black line and the median index for the 10 most massive stars is shown by the dotted black line.}
\label{fig:fig_rho_oph_indicate}
\end{figure}
The region is shown in figure~\ref{fig:fig_rho_oph_indicate}.
INDICATE finds that 39.6 per cent of stars are spatially clustered above random, with a significant index of 2.2. The median index for significantly clustered stars is $3.0_{-0.6}^{+0.6}$. The maximum index of the region is 5.0. A median index of $1.8_{-1.0}^{+1.2}$ is found for the entire region and $1.3_{-0.7}^{+1.1}$ is found for the 10 most massive stars, with a p-value $= 0.82$ meaning no significant difference between clustering tendencies of the most massive stars and the rest of the stars. The masses of the 10 most massive stars are between $3.6 \, M_\odot$ and $7.7 \, M_\odot$. These results are similar to IC 348 which also has its most massive stars spread out over the star-forming region. One of the most massive stars is located in an area of high clustering, but the rest have been spread out over relatively lower clustered locations in $\rho$ Ophiuchi.
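The p-values quoted for these regions come from two-sample KS tests comparing the INDICATE indexes of the most massive stars with those of the remaining stars. As a sketch of the statistic (pure Python, with the standard asymptotic p-value approximation; the index values below are illustrative, not taken from any of the catalogues):

```python
import math

def ks_2samp(a, b):
    """Two-sample Kolmogorov-Smirnov test.

    Returns the KS statistic D (maximum distance between the two
    empirical CDFs) and an asymptotic p-value approximation.
    """
    a, b = sorted(a), sorted(b)
    n, m = len(a), len(b)
    d = 0.0
    for x in sorted(set(a) | set(b)):
        cdf_a = sum(v <= x for v in a) / n
        cdf_b = sum(v <= x for v in b) / m
        d = max(d, abs(cdf_a - cdf_b))
    en = math.sqrt(n * m / (n + m))
    lam = (en + 0.12 + 0.11 / en) * d
    if lam < 1e-3:                      # identical samples: p -> 1
        return d, 1.0
    p = 2.0 * sum((-1) ** (k - 1) * math.exp(-2.0 * (k * lam) ** 2)
                  for k in range(1, 101))
    return d, max(0.0, min(1.0, p))

# Hypothetical INDICATE indexes, for illustration only.
massive = [1.3, 0.9, 2.1, 1.1, 0.7, 2.4, 1.8, 1.0, 1.5, 0.6]
all_stars = [1.8, 0.5, 3.0, 1.2, 0.8, 2.2, 1.6, 2.8, 1.1, 0.9]
d_stat, p_value = ks_2samp(massive, all_stars)
```

A large p-value (conventionally above 0.05), as found for NGC 1333, IC 348 and $\rho$ Ophiuchi, means the null hypothesis that both samples are drawn from the same distribution cannot be rejected.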
\section{Conclusions}
\label{sec:conclusions}
We have investigated the performance of the INDICATE method to detect the spatial clustering tendencies in young star-forming regions. We have assessed its ability to quantify mass segregation, and have applied it to pre-main sequence stars in nearby star-forming regions.
We have shown in figure~\ref{fig:structure_indicate} that whilst INDICATE can be used to quantify the clustering tendencies for individual stars in a region it \mbox{cannot} be used to provide any further information on the overall structure of a star-forming region due to the degeneracy of the INDICATE index across different morphologies.
We confirm that when INDICATE is applied to an entire region it can detect significant differences in the local stellar surface density between the 10 most massive stars and the entire population and will find results that are in agreement with $\Sigma_{\rm{LDR}}$.
When INDICATE is applied to the subset of the 50 most massive stars only, it detects when the 10 most massive stars are more clustered with respect to the other massive stars, and in most cases agrees with the $\Lambda_{\rm{MSR}}$ method. The two methods are in agreement when applied to regions with \textit{hmhi} and \textit{hmc} mass configurations across 100 realisations of the three different morphologies. The largest discrepancies are found for the substructured realisations and the smooth, centrally concentrated realisations with randomly assigned masses, where INDICATE detects 14 and 29 realisations with mass segregation, respectively. When $\Lambda_{\rm{MSR}}$ is applied to the realisations where INDICATE has detected mass segregation, it finds no mass segregation in the substructured realisations and 1 in the smooth, centrally concentrated realisations.
We also quantify the clustering tendencies of the most massive stars in these regions compared to all the stars. In the ONC, we find significant differences in the clustering tendencies of the 10 most massive stars compared to all the stars: the 10 most massive stars are in areas of greater local stellar surface density than the average star in the region.
The other observed regions show no significant differences in the clustering tendencies of the 10 most massive stars as measured using INDICATE compared to all stars in the region. This is due to the 10 most massive stars in these regions being spread out (see figures~\ref{fig:fig_taurus_indicate}, \ref{fig:fig_NGC1333_indicate}, \ref{fig:fig_ic348_indicate} and \ref{fig:fig_rho_oph_indicate}), resulting in a wider range of INDICATE indexes.
In summary, whilst INDICATE can be used to quantify the degree of affiliation between individual stars, and can detect signals of both local stellar surface density and mass segregation depending on whether it is applied to all the stars or just a subset of stars in a region, it does not provide any further information on the type of morphology of a star-forming region.
In a follow-up paper, we will investigate the evolution of $N$-Body simulations with respect to the clustering tendencies of different subsets of stars as measured by INDICATE.
\section*{Acknowledgements}
We wish to thank the anonymous referee for their feedback and suggestions which have improved the paper.
RJP acknowledges support from the Royal Society in the form of a Dorothy Hodgkin fellowship.
AB is funded by the European Research Council H2020-EU.1.1 ICYBOB project (Grant No. 818940).
\section*{Data Availability}
Data is available on request from the authors.
\bibliographystyle{mnras}
\section{Introduction}
\label{sec:IntroductionSection}
Image degradation by motion blur generally occurs due to camera movement during the capture process, capture with lightweight devices such as mobile phones, or low light intensity during camera exposure.
Blur in the images degrades the perceptual quality. For example, blur distorts the object's structure (Fig. {\color{red} \ref{fig:object_detection}}).
\begin{figure}[!htb]
\centering
\subfigure[Blur Image]{
\includegraphics[width=0.36\linewidth,height=35mm]{images/predictions_real_A.jpeg}
}
\subfigure[Sharp Image]{
\includegraphics[width=0.36\linewidth,height=35mm]{images/predictions_real_B.jpeg}
}
\setlength{\belowcaptionskip}{-15pt}
\caption{The figure shows that object detection becomes easier after deblurring using FMD-cGAN. YOLO{\color{green}\cite{yolo2015detection}} object detection on (a) the blurry picture and (b) the sharp picture from the GoPro dataset{\color{green}\cite{deblurgan}}.}
\label{fig:object_detection}
\vspace{-1mm}
\end{figure}
Image deblurring is a method to remove blurring artifacts and distortion from a blurry image. Human vision can easily perceive the blur in an image; however, it is challenging to create metrics that can estimate the blur present in the image. The image degradation model using a non-uniform blur kernel {\color{green} \cite{blur2017flow,nonuniform2015blur}} is given in Eq.~{\color{red}\ref{eq:Image deblur}}.
\begin{equation}
I_B = K(M) * I_S + N
\label{eq:Image deblur}
\end{equation}
where $I_B$ denotes a blurred image, $K(M)$ denotes an unknown blur kernel determined by the motion field $M$, $I_S$ denotes a latent sharp image, $*$ denotes the convolution operation, and $N$ denotes additive noise. As an inverse problem, the deblurring process retrieves the sharp image $I_S$ from the blurred image $I_B$. The deblurring problem is generally classified as non-blind deblurring {\color{green} \cite{nonblind2006deblurring}} or blind deblurring {\color{green} \cite{blind2010deblur,blind2011deblur}}, according to whether the blur kernel $K(M)$ is known or not.
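To make Eq.~{\color{red}\ref{eq:Image deblur}} concrete, the sketch below simulates a spatially uniform version of the degradation model by convolving a sharp image with a fixed kernel and optionally adding noise; real motion blur uses the spatially varying kernel $K(M)$, so this is an illustration only:

```python
def blur_image(sharp, kernel, noise=None):
    """I_B = K * I_S + N: zero-padded 2D convolution with a fixed kernel
    (implemented as correlation, identical to convolution for the
    symmetric kernels used here)."""
    h, w = len(sharp), len(sharp[0])
    kh, kw = len(kernel), len(kernel[0])
    ph, pw = kh // 2, kw // 2
    blurred = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            acc = 0.0
            for u in range(kh):
                for v in range(kw):
                    ii, jj = i + u - ph, j + v - pw
                    if 0 <= ii < h and 0 <= jj < w:
                        acc += kernel[u][v] * sharp[ii][jj]
            blurred[i][j] = acc + (noise[i][j] if noise is not None else 0.0)
    return blurred

# A 1x3 horizontal box kernel mimics simple horizontal motion blur.
motion_kernel = [[1 / 3, 1 / 3, 1 / 3]]
```

The inverse problem is then to recover `sharp` given only `blurred`, without (blind) or with (non-blind) knowledge of `kernel`.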
Our work targets the single-image blind motion deblurring task using deep learning. Deep-learning methods are effective in performing various computer vision tasks such as object removal {\color{green} \cite{objrem2018mulmed,srobust}}, style transfer {\color{green} \cite{deep2017styletransf}}, and image restoration {\color{green} \cite{sr_photo_reliastic,deblurgan,deblur_dynamicscene}}. More specifically, convolutional neural network (CNN) based approaches for image restoration tasks are increasing, e.g., image denoising {\color{green} \cite{gaussiandenoise}}, super-resolution {\color{green} \cite{sr_photo_reliastic}}, and deblurring {\color{green} \cite{deblurgan,deblur_dynamicscene}}.
The applications of Generative Adversarial Networks (GANs) {\color{green} \cite{gan2014ian}} are increasing immensely; in particular, image-to-image translation GANs {\color{green} \cite{pix2pix}} have been successfully used for image enhancement, image synthesis, image editing, and style transfer. Image deblurring can be formulated as an image-to-image translation task. Generally, applications that interact with humans (e.g., object detection) need to be fast and lightweight for a better experience. Image deblurring can be a useful pre-processing step for other computer vision tasks such as object detection (Fig. {\color{red} \ref{fig:object_detection}}).
\begin{figure*}
\centering
\begin{minipage}{0.3\linewidth}
\includegraphics[width=\linewidth,height=30mm]{images/box_3_epoch067_Blurred_Train.jpg}
\end{minipage}\hspace*{5pt}%
\begin{minipage}{0.3\linewidth}
\includegraphics[width=\linewidth,height=30mm]{images/box_3_epoch067_Restored_Train.jpg}
\end{minipage}\hspace*{5pt}%
\begin{minipage}{0.3\linewidth}
\includegraphics[width=\linewidth,height=30mm]{images/box_3_epoch067_Sharp_Train.jpg}
\end{minipage} \\%
\begin{minipage}{0.3\linewidth}
\includegraphics[width=\linewidth,height=30mm]{images/box_3_epoch031_Blurred_Train.jpg}\\ \hspace*{1cm} (a) Corrupted Image
\end{minipage}\hspace*{5pt}%
\begin{minipage}{0.3\linewidth}
\includegraphics[width=\linewidth,height=30mm]{images/box_3_epoch031_Restored_Train.jpg}\\ \hspace*{1cm} (b) FMD-cGAN (ours)
\end{minipage}\hspace*{5pt}%
\begin{minipage}{0.3\linewidth}
\includegraphics[width=\linewidth,height=30mm]{images/box_3_epoch031_Sharp_Train.jpg}\\ \hspace*{1cm} (c) Original Image
\end{minipage}%
\caption{First-row images are from the GoPro dataset {\color{green} \cite{multi_scale2017}} and second-row images are from the REDS dataset {\color{green} \cite{reds}}, processed by our FMD-cGAN.}
\label{fig:gopro_result}
\vspace{-8mm}
\end{figure*}
In this paper, we propose a Fast Motion Deblurring conditional Generative Adversarial Network architecture (FMD-cGAN). Our FMD-cGAN architecture is based on conditional GANs {\color{green} \cite{conditional2014gan}} and the ResNet architecture {\color{green} \cite{deep_res}} (Fig. {\color{red} \ref{fig:generator architecture}}). We also use depthwise separable convolutions (Fig. {\color{red} \ref{fig:modified_resnet_block}}), inspired by MobileNet, to improve efficiency. A MobileNet network {\color{green} \cite{mobilenetv1}} has fewer multiplication and addition operations (lower complexity) and fewer parameters (smaller model size) compared to the same network with regular convolutions.
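The saving from depthwise separable convolutions can be checked with the standard cost formulas; the sketch below compares parameters and multiply-add (MAC) counts for one layer, with illustrative sizes rather than the exact FMD-cGAN configuration:

```python
def standard_conv_cost(k, c_in, c_out, h, w):
    """Parameters and MACs of a k x k standard convolution
    producing a c_out x h x w output (bias ignored)."""
    params = k * k * c_in * c_out
    macs = params * h * w
    return params, macs

def separable_conv_cost(k, c_in, c_out, h, w):
    """Depthwise k x k convolution (one filter per input channel)
    followed by a 1 x 1 pointwise convolution."""
    params = k * k * c_in + c_in * c_out
    macs = (k * k * c_in) * h * w + (c_in * c_out) * h * w
    return params, macs

# Example: a 3x3 layer with 256 input/output channels on a 64x64 map.
std_params, std_macs = standard_conv_cost(3, 256, 256, 64, 64)
sep_params, sep_macs = separable_conv_cost(3, 256, 256, 64, 64)
# The separable version costs roughly 1/k^2 + 1/c_out of the standard one.
```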
Unlike other GAN frameworks, where the sharp image (real example) and the generator output (fake example) are given separately as inputs to the Discriminator network {\color{green} \cite{pix2pix,conditional2014gan}}, we train our Discriminator (Fig. {\color{red} \ref{fig:Discriminator architecture}}) on the blurred image combined with the generator output (or the blurred image combined with the sharp image).
Different from previous work, we propose to use the Hinge loss {\color{green} \cite{geometric_gan}} and the Perceptual loss {\color{green} \cite{perceptualloss}} to improve the quality of the output image. The Hinge loss improves the fidelity and diversity of the generated images {\color{green} \cite{autogan}}. Using the Hinge loss in our FMD-cGAN allows us to build lightweight neural network architectures for the single-image motion deblurring task compared to standard deep ResNet architectures. The Perceptual loss {\color{green} \cite{perceptualloss}} is used as the content loss to generate photo-realistic images in our GAN framework. \vspace*{4pt}
\noindent \textbf{Contributions:} The major contributions are summarized as below.
\begin{itemize}
\item[$\bullet$] We propose a faster and light-weight conditional GAN architecture (FMD-cGAN) for blind motion deblurring tasks. We show that FMD-cGAN (ours) is efficient with lesser inference time than DeblurGAN {\color{green} \cite{deblurgan}}, DeblurGANv2 {\color{green} \cite{deblur_v2}}, and DeepDeblur {\color{green} \cite{multi_scale2017}} models (Table {\color{red} \ref{tab:Performance and efficiency comparison on the GoPro test dataset}}).
\item[$\bullet$] We have performed extensive experiments on the GoPro and REDS datasets (Sec.~{\color{red}\ref{sec:experimentalResults}}). The results show that our FMD-cGAN outputs images with good visual quality and structural similarity (Fig.~{\color{red} \ref{tbl:comparison on the REDS dataset}}, Fig.~{\color{red} \ref{tbl:comparison on the GoPro dataset}}, and Table~{\color{red} \ref{tab:Performance and efficiency comparison on the REDS test dataset}}).
\item[$\bullet$] We also provide two variants (WILD and Comb) of FMD-cGAN to show that image deblurring task could be improved by pre-training network (Table {\color{red} \ref{tab:Performance and efficiency comparison on the GoPro test dataset}} and {Sec. \color{red} \ref{sec:training_details}}).
\item[$\bullet$] We have also performed an ablation study to illustrate that our network design choices improve the deblurring performance (Sec. {\color{red} \ref{sec:ablation_study}}).
\end{itemize}
\section{Background}
\subsection{Image Deblurring}
Images can have different types of blur problems, such as motion blur, defocus blur, and handshake blur. We have described that image deblurring is classified into two types: Non-blind image deblurring and Blind image deblurring (Sec.~{\color{red} \ref{sec:IntroductionSection}}).
Non-blind deblurring is an ill-posed problem: the inversion is unstable in the presence of noise, and a small quantity of noise can cause critical distortions. Most of the earlier works {\color{green} \cite{cv_richard,william_hadley,wiener_filter}} aim to perform the non-blind deblurring task by assuming that the blur kernel $K(M)$ is known. For blind deblurring of a single image, deep-learning-based approaches are observed to be effective {\color{green} \cite{SRN,multi_scale2017}}, because most kernel-based methods are not sufficient to model real-world blur {\color{green} \cite{raw2020deblur}}. The task is to estimate both the sharp image $I_S$ and the blur kernel $K(M)$ for image restoration. There are also classical approaches, such as the low-rank prior {\color{green}\cite{lowrank_prior}} and the dark channel prior {\color{green}\cite{darkchannel_prior}}, that are useful for deblurring, but they also have shortcomings.
\subsection{Generative Adversarial Networks}
The Generative Adversarial Network (GAN) was initially developed and introduced by Ian Goodfellow and his fellow workers in 2014 {\color{green} \cite{gan2014ian}}. The GAN framework includes two competing network architectures: a generator network $G$ and a discriminator network $D$.
\noindent The Generator's ($G$) task is to generate fake samples similar to the input by capturing the input data distribution; on the opposite side, the Discriminator ($D$) aims to differentiate between fake and real samples and passes this information to $G$ so that $G$ can learn. The Generator $G$ and Discriminator $D$ follow the minimax objective defined as follows.
\begin{equation}
\min_G\max_D V(D,G) = E_{x\sim p_{data}(x)}[\log(D(x))]+E_{z\sim p_{z}(z)}[\log(1 - D(G(z)))]
\label{eq:gan minimax}
\end{equation}
Here, in Eq.~{\color{red}\ref{eq:gan minimax}}, the generator $G$ aims to minimize the value function $V$, and the discriminator $D$ tries to maximize it. Moreover, the generator $G$ faces problems such as mode collapse and diminishing gradients (e.g., in the vanilla GAN). \\
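For concreteness, the value function of Eq.~{\color{red}\ref{eq:gan minimax}} can be estimated from minibatch discriminator outputs; a minimal sketch (plain Python over lists of probabilities, not a training loop):

```python
import math

def vanilla_gan_losses(d_real, d_fake):
    """Minibatch estimate of the minimax objective.

    d_real: discriminator probabilities D(x) on real samples, in (0, 1).
    d_fake: discriminator probabilities D(G(z)) on generated samples.
    D maximizes V, i.e. minimizes d_loss; G minimizes g_loss.
    """
    d_loss = -(sum(math.log(p) for p in d_real) / len(d_real)
               + sum(math.log(1.0 - q) for q in d_fake) / len(d_fake))
    g_loss = sum(math.log(1.0 - q) for q in d_fake) / len(d_fake)
    return d_loss, g_loss
```

When $D$ is maximally confused (all outputs 0.5), `d_loss` equals $2\log 2$; in practice the non-saturating variant $-\log D(G(z))$ is often used for the generator to mitigate the diminishing gradients mentioned above.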
\noindent \textbf{WGAN and WGAN-GP:} To deal with mode collapse and diminishing gradients, the WGAN method {\color{green} \cite{wgan}} uses the Earth-Mover (Wasserstein-1) distance in the loss function. In this implementation, the discriminator output layer is linear rather than sigmoid (the discriminator outputs a real value). WGAN {\color{green} \cite{wgan}} performs weight clipping to $[{-c},\ c]$ to enforce the Lipschitz constraint on the critic (i.e., discriminator). This method faces gradient explosion/vanishing without a proper value of the weight-clipping parameter $c$. WGAN with gradient penalty (WGAN-GP) {\color{green} \cite{gradientpenalty}} resolves the above issues with WGAN {\color{green} \cite{wgan}} by enforcing a penalty on the gradient norm for random samples $\tilde{x} \sim P_{\tilde{x}}$.
The objective function of WGAN-GP is as below.
\begin{equation}
V(D,G) = \min_G\max_D E_{\tilde{x} \sim p_{g}}[D(\tilde{x})] - E_{x \sim p_{r}} [D(x)] +\lambda E_{\tilde{x} \sim P_{\tilde{x}}} [(||{\nabla_{\tilde{x}}D(\tilde{x})}||_2 - 1)^2]
\label{eq:gradient panelty}
\end{equation}
\noindent WGAN-GP {\color{green} \cite{gradientpenalty}} makes the WGAN {\color{green} \cite{wgan}} training more stable and does not require hyperparameter tuning. The DeblurGAN {\color{green} \cite{deblurgan}} used WGAN-GP method (Eq. {\color{red}\ref{eq:gradient panelty}}) for single image blind motion deblurring. \\
\noindent \textbf{Hinge Loss:} In our method, we use the Hinge loss {\color{green} \cite{geometric_gan,sa_gan}}, which gives better results than the WGAN-GP-based {\color{green} \cite{gradientpenalty}} deblurring method. The hinge-loss discriminator also outputs a real value.
Generator loss $L_G$ and Discriminator loss $L_D$ in the presence of Hinge loss is defined as follows.
\begin{equation}
L_D ={} -E_{(x,y) \sim p_{data}}[\min(0, -1+D(x,y))] -E_{z \sim p_{z}, y \sim p_{data}}[\min(0,-1-D(G(z),y))]
\label{eq:hinge discriminator}
\end{equation}
\begin{equation}
L_G = - E_{z \sim p_{z}, y \sim p_{data}}D(G(z),y)
\label{eq:hinge generator}
\end{equation}
\noindent Here, $D$ is trained so that a real image receives a large value and a fake (generated) image receives a small value.
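Minibatch estimates of Eq.~{\color{red}\ref{eq:hinge discriminator}} and Eq.~{\color{red}\ref{eq:hinge generator}} reduce to a few lines; a sketch over raw real-valued discriminator scores:

```python
def d_hinge_loss(scores_real, scores_fake):
    """L_D: penalize real scores below +1 and fake scores above -1.
    Equivalent to -E[min(0, -1 + D(real))] - E[min(0, -1 - D(fake))]."""
    real_term = sum(max(0.0, 1.0 - s) for s in scores_real) / len(scores_real)
    fake_term = sum(max(0.0, 1.0 + s) for s in scores_fake) / len(scores_fake)
    return real_term + fake_term

def g_hinge_loss(scores_fake):
    """L_G = -E[D(G(z))]: the generator pushes fake scores up."""
    return -sum(scores_fake) / len(scores_fake)
```

Once real scores exceed $+1$ and fake scores fall below $-1$, the discriminator loss saturates at zero, which gives the hinge formulation its margin behaviour.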
\begin{figure}
\begin{minipage}{0.45\textwidth}
\includegraphics[width=\linewidth,height=45mm]{images/building_block.jpg}
\caption{Modified Resnet Block}
\label{fig:modified_resnet_block}
\end{minipage}\hspace*{0.75cm}%
\begin{minipage}{0.5\textwidth}
\includegraphics[width=\linewidth, height=35mm]{images/discriminator_architecture.jpg}
\caption{The figure shows the architecture of the critic network (Discriminator). }
\label{fig:Discriminator architecture}
\end{minipage}
\vspace{-6mm}
\end{figure}
\section{Related Works}
The deep learning-based methods attempt to estimate the motion blur in the degraded image and use this blur information to restore the sharp image {\color{green} \cite{nonunblrrem}}. Methods that use a multi-scale framework {\color{green} \cite{multi_scale2017}} to recover the deblurred image are computationally expensive. The use of GANs is also increasing in blind, kernel-free single-image deblurring tasks; for example, Ramakrishnan et al. {\color{green} \cite{ramakrishnan}} used the image translation framework {\color{green} \cite{pix2pix}} and a densely connected convolution network {\color{green} \cite{densenetwork}}. The methods above perform the image deblurring task when the input image may contain blur from multiple sources. Kupyn et al. {\color{green} \cite{deblurgan}} proposed the DeblurGAN method, which uses the Wasserstein GAN {\color{green} \cite{wgan}} with gradient penalty {\color{green} \cite{gradientpenalty}} and the Perceptual loss {\color{green} \cite{perceptualloss}}. Kupyn et al. {\color{green} \cite{deblur_v2}} later proposed DeblurGAN-v2, which is faster and has better results than the previously proposed method; this method uses the feature pyramid network {\color{green} \cite{pyramid_network}} in the generator. A study of various single-image blind deblurring methods is provided in {\color{green} \cite{cvpr16_deblur_study}}.
\section{Our Method}
In our proposed method, the blur kernel is unknown: given a blurred image $I_B$ as input, our purpose is to recover a sharp image $I_S$. For the deblurring task, we train a generator network denoted by $G_{\theta_G}$. During training, another CNN, $D_{\theta_D}$, referred to as the critic network (i.e., Discriminator), is also present. The Generator $G_{\theta_G}$ and the Discriminator $D_{\theta_D}$ are trained in an adversarial manner. In what follows, we describe the network architecture and the loss functions of our method.
\subsection{Network Architecture}
The generator network, the chief component of the proposed model, is a transformed version of the residual network architecture {\color{green} \cite{deep_res}} (Sec.{\color{red}~\ref{ssec:generatorArchitecture})}. The discriminator architecture, which guides the learning of the Generator, is a transformed version of the Markovian discriminator (PatchGAN) {\color{green} \cite{pix2pix}} (Sec.{\color{red}~\ref{ssec:discriminatorArchitecture})}. The residual network architecture helps us to build deeper CNN architectures. This architecture is also effective because we want our network to learn only the difference between pairs of sharp and blurred images, as they are almost alike in values.
We use depthwise separable convolutions in place of the standard convolution layers to reduce the inference time and model size {\color{green} \cite{mobilenetv1}}. The Generator aims to generate sharp images given blurred images as input. Note that the generated images need to be realistic so that the Discriminator considers them drawn from the real data distribution. In this way, the Generator produces a visually attractive sharp image from an input blurred image. The Discriminator's goal is to classify whether its input comes from the real data distribution or is an output of the generator, which it accomplishes by analyzing patches in the input image. The changes we made to the resnet block are displayed in Fig. {\color{red} \ref{fig:modified_resnet_block}}: we convert structure (a) into structure (b).
\begin{figure*}[h]
\centering
\includegraphics[width=\linewidth]{images/architecture.jpg}
\caption{The figure shows the generator architecture of our Fast Motion Deblurring-cGAN. Given a blurred image as input, Generator outputs a realistic-looking sharp image as output.}
\label{fig:generator architecture}
\vspace{-8mm}
\end{figure*}
\subsubsection{Generator Architecture.}
\label{ssec:generatorArchitecture}
The Generator's CNN architecture is displayed in Fig. {\color{red} \ref{fig:generator architecture}}. This architecture is similar to the style transfer architecture proposed by Johnson et al. {\color{green} \cite{perceptualloss}}. The generator network begins with two strided convolution blocks with stride 2, followed by nine depthwise-separable-convolution-based residual blocks (MobileResnet blocks) {\color{green} \cite{mobilenetv1,deep_res}}, two transposed convolution blocks, and a global skip connection.
In our architecture, most of the computation is done by MobileResNet-Block. Therefore, we use depthwise separable convolution here to reduce computation cost without affecting accuracy.
Every convolution and transposed convolution layer is followed by an instance normalization layer {\color{green} \cite{inst_norm}} and a ReLU activation layer {\color{green} \cite{dl2019relu}}. Each MobileResnet block consists of two depthwise separable convolutions {\color{green} \cite{mobilenetv1}}, a dropout layer {\color{green} \cite{dropout}}, an instance normalization layer after each separable convolution, and a ReLU activation layer. In each MobileResnet block, a dropout regularization layer with probability zero is added after the first depthwise separable convolution layer. Furthermore, we add a global skip connection to the model, also referred to as ResOut.
When many convolution layers are used, it becomes difficult to generalize over first-level features: deep generative CNNs often unintentionally memorize high-level representations of edges. As a result, the network would be unable to retrieve sharp boundaries at the proper positions from blurred photos. We therefore connect the head and tail of the network. Since gradients can now travel from the tail straight to the beginning and affect the updates in the lower layers, generation efficiency improves significantly {\color{green} \cite{idnmap}}. Given the blurred image $I_B$, the CNN learns a residual correction $I_R$, so the resulting sharp image is $I_S = I_B + I_R$. From experiments, we find that this formulation improves the training time and generalizes the resulting model better.
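The ResOut global skip connection amounts to a one-line output rule; in the sketch below, the clamp to a fixed intensity range is our assumption for illustration, not a detail stated above:

```python
def res_out(blurred, residual, lo=-1.0, hi=1.0):
    """Global skip: I_S = I_B + I_R. The network predicts only the
    residual correction I_R; the sum is clamped to the assumed
    image intensity range [lo, hi]."""
    return [
        [min(hi, max(lo, b + r)) for b, r in zip(row_b, row_r)]
        for row_b, row_r in zip(blurred, residual)
    ]
```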
\subsubsection{Discriminator Architecture.}
\label{ssec:discriminatorArchitecture}
In our model, we create a critic network $D_{\theta_D}$, also referred to as the Discriminator. $D_{\theta_D}$ guides the generator network $G_{\theta_G}$ to generate sharp images by giving feedback on whether its input comes from the real data distribution or from the generator. The architecture of the Discriminator network is shown in Fig. {\color{red} \ref{fig:Discriminator architecture}}. We avoid a high-depth Discriminator network, as its goal is a classification task, unlike the image synthesis task of the Generator network. In our FMD-cGAN framework, the Discriminator network is similar to the Markovian patch discriminator, also referred to as PatchGAN {\color{green} \cite{pix2pix}}. Except for the last convolutional layer, every convolutional layer of the network is followed by an InstanceNorm layer and a LeakyReLU activation with slope 0.2. This architecture looks for explicit structural characteristics at many local patches. It also ensures that the generated raw images have rich color.
\begin{table*}[htb]
\fontsize{8.5}{10}\selectfont \renewcommand{\arraystretch}{1.2}
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
\textbf{Method} & \textbf{PSNR} & \textbf{SSIM} & \textbf{Time (GPU)} & \textbf{Time (CPU)} & \textbf{\#Parameters} & \textbf{MACs} \\
\hline
Sun et al. {\color{green} \cite{nonuniform2015blur}} & 24.64 & 0.842 & N/A & N/A & N/A & N/A\\
\hline
Xu et al. {\color{green} \cite{unnaturalsparse}} & 25.1 & 0.89 & N/A & N/A & N/A & N/A\\
\hline
DeepFilter {\color{green} \cite{ramakrishnan}} & 28.94 & 0.922 & 0.3 sec & 3.09 sec & 3.20M & N/A\\
\hline
$DeblurGAN_{WILD}$ {\color{green} \cite{deblurgan}} & 27.2 & 0.954 & 0.45 sec & 3.36 sec & 6.06M & 35.07G\\
$DeblurGAN_{Comb}$ & 28.7 & 0.958 & & & & \\
\hline
$DeblurGANv2_{Resnetv2}$ {\color{green} \cite{deblur_v2}} & 29.55 & 0.934 & 0.14 sec & 3.67 sec & 66.594M & 274.20G\\
$DeblurGANv2_{Mobnetv2}$ & 28.17 & 0.925 & 0.04 sec & 1.23 sec & 3.12M & 39.05G \\
\hline
SRN {\color{green} \cite{SRN}} & 30.10 & 0.932 & 1.6 sec & 28.85 sec & 6.95M & N/A\\
\hline
DeepDeblur {\color{green} \cite{multi_scale2017,github_deepdeblur}} & \textbf{30.40} & 0.901 & 2.93 sec & 56.76 sec & 11.72M & 4727.22G\\
\hline
FMD-cGAN$_{WILD}$ & 28.33 & 0.962 & \textbf{0.01 sec} & \textbf{0.28 sec} & \textbf{1.98M} & \textbf{18.36G} \\
FMD-cGAN$_{Comb}$ & 29.675 & \textbf{0.971} & & & & \\
\hline
\end{tabular}
\end{center}
\caption{The table shows the results on the GoPro test dataset. Here, FMD-cGAN$_{WILD}$ and FMD-cGAN$_{Comb}$ are our methods (Sec.~\ref{sec:training_details}). It can be observed that our framework achieves good quantitative performance. }
\label{tab:Performance and efficiency comparison on the GoPro test dataset}
\vspace{-12mm}
\end{table*}
\subsection{Loss Functions}
The total loss function for the FMD-cGAN deblurring framework is a mixture of an adversarial loss and a content loss.
\begin{equation}
L_{total} = L_{GAN} + \lambda \cdot L_X
\label{eq:total loss}
\end{equation}
In Eq.~{\color{red}\ref{eq:total loss}}, $L_{GAN}$ represents the adversarial loss (Sec.{\color{red} \ref{sec:adversarial_loss}}), $L_X$ represents the content loss (Sec.{\color{red} \ref{sec:content_loss}}), and $\lambda$ is a hyperparameter that controls the effect of $L_X$. The value of $\lambda$ is set to 100 in the current experiment.
\\
\subsubsection{Adversarial Loss.}
\label{sec:adversarial_loss}
To train a learning-based image restoration network, we need to compare the difference between the restored and original images during the training stage. Many image restoration works use an adversarial network to generate sharp images {\color{green} \cite{sr_photo_reliastic,deblur_dynamicscene}}. During training, the adversarial loss, combined with other losses, helps determine how well the Generator is working against the Discriminator {\color{green} \cite{multi_scale2017}}. Initial works based on conditional GANs use the objective function of the vanilla GAN as the loss function {\color{green} \cite{sr_photo_reliastic}}. Lately, the least-squares GAN {\color{green} \cite{lsgan}} was observed to be better balanced and to produce good-quality outputs. We apply the Hinge loss {\color{green} \cite{geometric_gan}} (Eq. {\color{red} \ref{eq:hinge discriminator}} and Eq. {\color{red}\ref{eq:hinge generator}}) in our model, which provides good results with our generator architecture {\color{green} \cite{autogan}}. The Generator loss ($L_G$) and Discriminator loss ($L_D$) are computed as follows (Eq.~{\color{red}\ref{eq:hinge generator loss}} and Eq.~{\color{red}\ref{eq:hinge discriminator loss}}).
\begin{equation}
L_G = - \sum\limits_{n=1}^N D_{\theta_D}(G_{\theta_G}(I^B))
\label{eq:hinge generator loss}
\end{equation}
\begin{equation}
L_D = - \sum\limits_{n=1}^N \min(0, D_{\theta_D}(I^S)-1) - \sum\limits_{n=1}^N \min(0, -D_{\theta_D}(G_{\theta_G}(I^B))-1)
\label{eq:hinge discriminator loss}
\end{equation}
If we do not use the adversarial loss in our network, it still converges; however, the output images are dull, without many sharp edges, and remain blurry because the blur at edges and corners is still intact. If we only use the adversarial loss, edges are retained and more realistic color assignment happens, but two issues remain: the network still has no notion of structure, and the Generator works only according to the patch-based guidance provided by the Discriminator on the generated image. We remove these issues by combining the adversarial loss with the Perceptual loss.
\subsubsection{Content loss.}
\label{sec:content_loss}
Generally, there are two choices for a pixel-based content loss: (a) the L1 (MAE) loss and (b) the L2 (MSE) loss. However, these loss functions tend to produce blurry artifacts in the generated image due to pixel averaging {\color{green} \cite{sr_photo_reliastic}}. For this reason, we use the Perceptual loss {\color{green} \cite{perceptualloss}} as the content loss. Unlike the L2 loss, the Perceptual loss compares the difference between CNN feature maps of the restored image and the original image. This loss function injects structural knowledge into the Generator, complementing the patch-wise decisions of the Markovian Discriminator. The Perceptual loss is defined as follows:
\begin{equation}
L_X = \frac{1}{W_{i,j}H_{i,j}} \sum\limits_{x=1}^{W_{i,j}} \sum\limits_{y=1}^{H_{i,j}} (\phi_{i,j}(I^S)_{x,y} - \phi_{i,j}(G_{\theta_G}(I^B))_{x,y})^2
\label{eq:Perceptual loss}
\end{equation}
where $W_{i,j}$ and $H_{i,j}$ are the width and height of the feature map of the $(i,j)^{th}$ ReLU layer of the \textbf{VGG-16} network \cite{vgg_19}; here $i$ and $j$ denote the ${j^{th}}$ convolution (after activation) before the $i^{th}$ max-pooling layer, and $\phi_{i,j}$ denotes the corresponding feature map. In our current method, we use the output of the activations from the $VGG_{3,3}$ convolutional layer. Activations from deeper layers of the network represent higher-level feature information {\color{green} \cite{sr_photo_reliastic,vu_cnn}}. The Perceptual loss helps to restore the general content {\color{green} \cite{pix2pix,sr_photo_reliastic}}, while the adversarial loss helps to restore texture details. If we do not use the Perceptual loss, or use a simple MSE-based pixel loss instead, the network does not converge to a good state.
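Eq.~\eqref{eq:Perceptual loss} is a mean squared error computed over feature maps rather than pixels. A minimal sketch, with nested lists standing in for one channel of the VGG feature maps $\phi_{i,j}(I^S)$ and $\phi_{i,j}(G(I^B))$ (illustrative only, not the authors' implementation):

```python
def perceptual_loss(feat_sharp, feat_restored):
    """(1 / (W * H)) * sum over (x, y) of (phi(I^S) - phi(G(I^B)))^2,
    where feat_* are H x W nested lists for a single feature channel."""
    H = len(feat_sharp)
    W = len(feat_sharp[0])
    total = 0.0
    for y in range(H):
        for x in range(W):
            d = feat_sharp[y][x] - feat_restored[y][x]
            total += d * d
    return total / (W * H)
```

In practice the feature maps would be extracted by running both images through a frozen VGG network up to the chosen activation layer.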
\subsection{Training Datasets}
\subsubsection{GoPro Dataset.} The images of the GoPro dataset {\color{green} \cite{multi_scale2017}} are generated using the GoPro Hero 4 camera. The camera captures 240 frames per second video sequences. The blurred images are captured by averaging consecutive short-exposure frames. It is the most commonly used benchmark dataset in motion deblurring tasks, containing 3214 pairs of blur and sharp images. We use 1111 pairs of images for testing purposes and the remaining 2103 pairs of images for training {\color{green} \cite{multi_scale2017}}.
\subsubsection{REDS Dataset.} The Realistic and Dynamic Scenes dataset {\color{green} \cite{reds}} was designed for video deblurring and super-resolution, but it is also useful for image deblurring. The dataset comprises 300 video sequences with a resolution of $720 \times 1280$. The training set contains 240 videos, the validation set 30 videos, and the testing set 30 videos; each video has 100 frames. The REDS dataset is generated from 120 fps videos, synthesizing blurry frames by merging subsequent frames. This yields $240 \times 100$ pairs of blurred and sharp images for training and $30 \times 100$ pairs for testing.
\begin{table*}
\centering
\begin{tabular}{cccc}
\toprule
\textbf{Blurry} & \textbf{DeepDeblur} {\color{green} \cite{multi_scale2017}} & \textbf{FMD-cGAN(Ours)} & \textbf{Sharp} \\
\midrule
\includegraphics[width=0.24\linewidth,height=30mm]{images/box_2_epoch084_Blurred_Train.jpg} & \includegraphics[width=0.24\linewidth,height=30mm]{images/box_2_DeepDeblur_00000050.jpg} &
\includegraphics[width=0.24\linewidth,height=30mm]{images/box_2_epoch084_Restored_Train.jpg} &
\includegraphics[width=0.24\linewidth,height=30mm]{images/box_2_epoch084_Sharp_Train.jpg} \\
\includegraphics[width=0.24\linewidth,height=30mm]{images/box_2_000_00000057_real_A.jpg} & \includegraphics[width=0.24\linewidth,height=30mm]{images/box_2_DeepDeblur_00000057.jpg} &
\includegraphics[width=0.24\linewidth,height=30mm]{images/box_2_000_00000057_fake_B.jpg} &
\includegraphics[width=0.24\linewidth,height=30mm]{images/box_2_000_00000057_real_B.jpg} \\
\includegraphics[width=0.24\linewidth,height=30mm]{images/box_2_023_00000048_real_A.jpg} & \includegraphics[width=0.24\linewidth,height=30mm]{images/box_2_DeepDeblur_00000048.jpg} &
\includegraphics[width=0.24\linewidth,height=30mm]{images/box_2_023_00000048_fake_B.jpg} &
\includegraphics[width=0.24\linewidth,height=30mm]{images/box_2_023_00000048_real_B.jpg} \\
\bottomrule
\end{tabular}
\captionof{figure}{The figure shows visual comparison on the REDS dataset (images are best viewed after zooming).}
\label{tbl:comparison on the REDS dataset}
\vspace{-6mm}
\end{table*}
\begin{table}[!ht]
\renewcommand{\arraystretch}{1.2}
\begin{minipage}{0.45\textwidth}
\begin{center}
\begin{tabular}{|c|c|c|}
\hline
\textbf{Method} & \textbf{PSNR} & \textbf{SSIM} \\
\hline
DeepDeblur {\color{green} \cite{multi_scale2017,github_deepdeblur}} & 32.89 & 0.9207\\
\hline
FMD-cGAN (ours) & 31.79 & \textbf{0.9804}\\
\hline
\end{tabular}
\end{center}
\caption{The table shows the PSNR and SSIM comparison between FMD-cGAN (ours) and DeepDeblur {\color{green} \cite{multi_scale2017,github_deepdeblur}} on the REDS test dataset.}
\label{tab:Performance and efficiency comparison on the REDS test dataset}
\end{minipage}\hspace*{0.75cm}%
\begin{minipage}{0.5\textwidth}
\begin{center}
\begin{tabular}{|*4{c|}}
\hline
\textbf{Model} & \textbf{Dataset} & \textbf{\#Train Images} & \textbf{\#Test Images} \\
\hline
FMD-cGAN$_{WILD}$ & GoPro & 2103 & 1111 \\
\hline
FMD-cGAN$_{Comb}$ & 1. REDS & 24000 & 3000 \\
\cline{2-4}
& 2. GoPro & 2103 & 1111 \\
\hline
\end{tabular}
\end{center}
\caption{The table summarises training details of our methods.}
\label{tab:training details}
\end{minipage}
\vspace{-9mm}
\end{table}
\section{Training Details}
\label{sec:training_details}
The Pytorch\footnote{https://pytorch.org/} deep learning library is used to implement our model. The model is trained on a single Nvidia Quadro RTX 5000 GPU using different datasets. The model takes image patches as input and is fully convolutional, so it can be applied to images of arbitrary size. The learning rate is constant for the first 150 epochs; afterwards, we decrease it linearly to zero over the subsequent 150 epochs. We use the Adam {\color{green} \cite{adam}} optimizer for the loss functions of both the Generator and the Discriminator, with a learning rate of 0.0001. During training, we use a batch size of 1, which gives better results.\\
Furthermore, we use dropout (rate = 0) and instance normalization instead of batch normalization in both the Generator and the Discriminator {\color{green} \cite{pix2pix}}. The training time of the network is approximately 2.5 days, which is significantly less than that of competing networks. Training details are provided in Table {\color{red} \ref{tab:training details}}. We discuss the two variants of FMD-cGAN as follows. \\
\noindent \textbf{(1). FMD-cGAN$_{wild}$}: our first trained model is \textbf{WILD}, meaning that the model is trained only on the single dataset (GoPro or REDS) on which it is evaluated. For example, for the GoPro dataset the model is trained on 2103 pairs of blurred and sharp images from GoPro. \\
\noindent \textbf{(2) FMD-cGAN$_{comb}$}: \label{fmdcgan_comb} The second trained model is \textbf{Comb}, which is first trained on the REDS training dataset and evaluated on the REDS test dataset. We then continue training this pre-trained model on the GoPro dataset. The final performance of both trained models, \textbf{Comb} and \textbf{WILD}, is evaluated on the 1111 test images of the GoPro dataset.
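The learning-rate schedule described above (constant for 150 epochs, then linear decay to zero over the next 150 epochs) can be sketched as a small helper; the function and its defaults are illustrative, not the authors' code:

```python
def lr_at_epoch(epoch, base_lr=1e-4, n_const=150, n_decay=150):
    """Return the learning rate for a given epoch: constant base_lr for the
    first n_const epochs, then linear decay to zero over n_decay more epochs."""
    if epoch < n_const:
        return base_lr
    frac = (epoch - n_const) / float(n_decay)  # 0.0 at epoch n_const, 1.0 at the end
    return base_lr * max(0.0, 1.0 - frac)
```

In PyTorch, the same schedule is typically realized with a `LambdaLR` scheduler wrapping the Adam optimizer.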
\section{Experimental Results}
\label{sec:experimentalResults}
We compare the results of our FMD-cGAN with relevant models using the standard performance metrics (PSNR, SSIM). We also report the inference time of each model (i.e., the average running time per image) on a single \textbf{GPU (Nvidia RTX 5000)} and \textbf{CPU (2 X Intel Xeon 4216 (16C))}. To calculate the number of parameters and the number of MAC operations of our PyTorch model, we use the pytorch-summary\footnote{https://github.com/sksq96/pytorch-summary} and torchprofile\footnote{https://github.com/zhijian-liu/torchprofile} libraries.
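The parameter counts reported by such summary tools reduce to summing, over all weight tensors, the product of each tensor's dimensions. A minimal sketch with hypothetical tensor shapes (e.g., one $64 \times 3 \times 3 \times 3$ convolution kernel plus its bias):

```python
def count_params(shapes):
    """Total parameter count: sum over tensors of the product of their dims."""
    total = 0
    for shape in shapes:
        n = 1
        for d in shape:
            n *= d
        total += n
    return total
```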
\begin{table}[!ht]
\renewcommand{\arraystretch}{1.2}
\begin{minipage}{0.45\textwidth}
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
\textbf{\#ngf} & \textbf{PSNR} & \textbf{SSIM} & \textbf{Time (CPU)} & \textbf{\#Param} & \textbf{MACs} \\
\hline
48 & 27.95 & 0.960 & 0.20 sec & \textbf{1.13M} & \textbf{10.60G}\\
\hline
\textbf{64} & 28.33 & 0.963 & 0.28 sec & 1.98M & 18.36G\\
\hline
96 & \textbf{28.52} & \textbf{0.964} & 0.5 sec & 4.41M & 40.23G\\
\hline
\end{tabular}
\end{center}
\caption{Performance and efficiency comparison on the different no. of generator filters (\#ngf)}
\label{tab:different generator frames}
\end{minipage}\hspace*{0.75cm}%
\begin{minipage}{0.5\textwidth}
\begin{center}
\begin{tabular}{|p{100pt}|c|c|c|c|}
\hline
\textbf{Model} & \textbf{PSNR} & \textbf{SSIM} & \textbf{\#Param} & \textbf{MACs}\\
\hline
Only ResNetBlock & \textbf{28.33} & 0.963 & 1.98M & 18.36G\\
\hline
Downsample + ResNetBlock & 28.24 & 0.962 & \textbf{1.661M} & 16.81G\\
\hline
Upsample + ResNetBlock & 28.19 & 0.961 & 1.663M & \textbf{11.79G}\\
\hline
\end{tabular}
\end{center}
\caption{Performance comparison after applying convolution decomposition in different parts of network and \#ngf=64}
\label{tab:convolution decomposition in different parts}
\end{minipage}
\vspace{-8mm}
\end{table}
\begin{table}[!ht]
\centering
\begin{tabular}{cm{45mm}m{45mm}m{45mm}}
\toprule
\textbf{Model} & \textbf{Example 1} & \textbf{Example 2} & \textbf{Example 3} \\
\midrule
Blurry & \includegraphics[width=\linewidth,height=30mm]{images/box_2_GOPR0384_11_00_000005_real_A.jpg} &
\includegraphics[width=\linewidth,height=30mm]{images/box_2_GOPR0869_11_00_000028_real_A.jpg} &
\includegraphics[width=\linewidth,height=30mm]{images/box_2_GOPR0854_11_00_000073_real_A.jpg} \\
\pbox{25cm}{DeblurGANv2 \\ {\color{green} \cite{deblur_v2}}} & \includegraphics[width=\linewidth,height=30mm]{images/box_2_deblurv2_GOPR0384_11_00_000005.jpg} &
\includegraphics[width=\linewidth,height=30mm]{images/box_2_GOPR0869_11_00_000028.jpg} &
\includegraphics[width=\linewidth,height=30mm]{images/box_2_deblurv2_GOPR0854_11_00_000073.jpg} \\
\pbox{20cm}{DeepDeblur \\ {\color{green} \cite{multi_scale2017,github_deepdeblur}}} & \includegraphics[width=\linewidth,height=30mm]{images/box_2_deepdeblur_GOPR0384_11_00_000005.jpg} &
\includegraphics[width=\linewidth,height=30mm]{images/box_2_deepdeblur_GOPR0869_11_00_000028.jpg} &
\includegraphics[width=\linewidth,height=30mm]{images/box_2_deepdeblur_GOPR0854_11_00_000073.jpg} \\
\pbox{20cm}{SRN \\ {\color{green} \cite{SRN}}} &
\includegraphics[width=\linewidth,height=30mm]{images/box_2_srn_GOPR0384_11_00_000005.jpg} &
\includegraphics[width=\linewidth,height=30mm]{images/box_2_srn_GOPR0869_11_00_000028.jpg} &
\includegraphics[width=\linewidth,height=30mm]{images/box_2_SRN_GOPR0854_11_00_000073.jpg} \\
\pbox{20cm}{FMD-cGAN$_{Wild}$\\(Ours)} & \includegraphics[width=\linewidth,height=30mm]{images/box_2_Wild_GOPR0384_11_00_000005_fake_B.jpg} &
\includegraphics[width=\linewidth,height=30mm]{images/box_2_WILD_GOPR0869_11_00_000028_fake_B.jpg} &
\includegraphics[width=\linewidth,height=30mm]{images/box_2_wild_GOPR0854_11_00_000073_fake_B.jpg} \\
\pbox{20cm}{FMD-cGAN$_{Comb}$\\(Ours)} & \includegraphics[width=\linewidth,height=30mm]{images/box_2_GOPR0384_11_00_000005_fake_B.jpg} &
\includegraphics[width=\linewidth,height=30mm]{images/box_2_deblurv2_GOPR0869_11_00_000028_fake_B.jpg} &
\includegraphics[width=\linewidth,height=30mm]{images/box_2_GOPR0854_11_00_000073_fake_B.jpg} \\
\bottomrule
\end{tabular}
\captionof{figure}{The figure shows visual comparison on the GoPro dataset (images are best viewed after zooming).}
\label{tbl:comparison on the GoPro dataset}
\vspace{-10mm}
\end{table}
\subsection{Quantitative Evaluation on GoPro Dataset}
Here, we discuss the performance of our method on the GoPro dataset. We used 1111 pairs of blurred and sharp images from the GoPro test dataset for evaluation. We compare our model's results with those of other state-of-the-art models: Sun et al. {\color{green} \cite{nonuniform2015blur}} is a traditional method, while the others are deep-learning-based methods: Xu et al. {\color{green} \cite{unnaturalsparse}}, DeepDeblur {\color{green} \cite{multi_scale2017}}, DeepFilter {\color{green} \cite{ramakrishnan}}, $DeblurGAN$ {\color{green} \cite{deblurgan}}, $DeblurGANv2$ {\color{green} \cite{deblur_v2}} and SRN {\color{green} \cite{SRN}}. We take the PSNR and SSIM values of the other methods from their respective papers.
We show the results in \textbf{Table \color{red}\ref{tab:Performance and efficiency comparison on the GoPro test dataset}}. FMD-cGAN (ours) achieves the lowest inference time and the smallest number of parameters and MAC operations, while its PSNR and SSIM values remain comparable to the other models in the comparison.
\subsection{Quantitative Evaluation on REDS Dataset}
We also show the performance of our framework on the REDS dataset. We used 3000 pairs of blurred and sharp images from the REDS test dataset for evaluation. We compare the performance of FMD-cGAN (ours) with the DeepDeblur model {\color{green}\cite{multi_scale2017}}. We obtained the results of DeepDeblur from its official GitHub repository, DeepDeblur-PyTorch\footnote{https://github.com/SeungjunNah/DeepDeblur-PyTorch}.
We show the results in \textbf{Table {\color{red}\ref{tab:Performance and efficiency comparison on the REDS test dataset}}}. Our method achieves high SSIM and PSNR values comparable to DeepDeblur {\color{green} \cite{multi_scale2017}}, and we emphasize that our network is significantly smaller than DeepDeblur {\color{green} \cite{multi_scale2017}}. To date, only the DeepDeblur model has used the REDS dataset for training and performance evaluation.
\subsection{Visual Comparison}
Fig. {\color{red} \ref{tbl:comparison on the REDS dataset} } shows the visual comparison on the REDS dataset. FMD-cGAN (ours) restores images of quality comparable to relevant top-performing works such as DeepDeblur {\color{green} \cite{multi_scale2017}} and SRN {\color{green} \cite{SRN}}. For example, row 1 of Fig. {\color{red} \ref{tbl:comparison on the REDS dataset} } shows that our method recovers fine object structure (i.e., the building) that is missing in the blurry image. \\
Fig. {\color{red} \ref{tbl:comparison on the GoPro dataset}} shows the visual comparison results on the GoPro dataset. The output of our method is visually appealing even in the presence of strong motion blur in the input image (see Example~3 of Fig. {\color{red} \ref{tbl:comparison on the GoPro dataset}}). For clarity, we show the results of both FMD-cGAN$_{Wild}$ and FMD-cGAN$_{Comb}$ (Sec. {\color{red} \ref{sec:training_details}}). FMD-cGAN (ours) is faster and outputs better reconstructions than other motion deblurring methods, even though our model has fewer parameters (Table {\color{red} \ref{tab:Performance and efficiency comparison on the GoPro test dataset}}). Extended versions of Fig. {\color{red} \ref{tbl:comparison on the REDS dataset} } and Fig. {\color{red} \ref{tbl:comparison on the GoPro dataset}} are provided in the supplementary material for better visual comparison.
\section{Ablation Study}
\label{sec:ablation_study}
Table {\color{red} \ref{tab:different generator frames}} shows an ablation study of the generator network architecture for different design choices. Here, we train and test our network only on the GoPro dataset. Let \#ngf denote the number of filters in the initial layer of the generator network, which determines the filter counts of the subsequent layers. Table {\color{red} \ref{tab:different generator frames}} demonstrates how \#ngf affects model performance: increasing \#ngf improves image quality (PSNR), but it also increases the number of parameters and MAC operations, which affects inference time and model size.
We divide our generator network into three parts according to its structure: Downsample (two 3x3 convolutions), ResnetBlocks (9 blocks), and Upsample (two 3x3 deconvolutions). To evaluate the effect on performance, we apply separable convolutions to the different parts. Table {\color{red} \ref{tab:convolution decomposition in different parts}} reports model performance after applying convolution decomposition to each part of the generator network. The ResNet blocks account for most of the computation in the network; from Table~{\color{red} \ref{tab:convolution decomposition in different parts}}, we can see that applying convolution decomposition to this part gives the best trade-off between performance and cost.
\section{Conclusion}
We proposed a Fast Motion Deblurring method (FMD-cGAN) for a single image. FMD-cGAN does not require knowledge of the blur kernel. Our method uses a conditional generative adversarial network for this task and is optimized using a multi-part loss function. Our method shows that a MobileNetv1-style architecture built on depthwise separable convolutions reduces computational cost and memory requirements without losing accuracy. We also showed that using the Hinge loss in the network gives good results. Our method produces better blur-free images, as confirmed by the quantitative and visual comparisons. FMD-cGAN is fast, with low inference time and memory requirements, and it outperforms various state-of-the-art models for blind motion deblurring of a single image (\textbf{Table \color{red}\ref{tab:Performance and efficiency comparison on the GoPro test dataset}}). As future work, we propose deploying our model on lightweight devices for real-time image deblurring tasks.
\section{Introduction}
\label{sec:Introduction}
\input{Latexfiles/Introduction.tex}
\section{Model description}
\label{sec:Model}
\input{Latexfiles/Model.tex}
\section{Initial concentrations}
\label{sec:InitialConcentrations}
\input{Latexfiles/InitialConcentrations.tex}
\section{Results}
\label{sec:Results}
\input{Latexfiles/Results.tex}
\section{Conclusions}
\label{sec:Conclusions}
\input{Latexfiles/Conclusions.tex}
\section*{Code and data availability}
\input{Latexfiles/CodeDataAvailability.tex}
\printbibliography
\end{document}
\subsection{Experimental setup}
\label{sec:ExperimentalSetup}
For the calculation of a steady annual cycle, we used the marine ecosystem
toolkit for optimization and simulation in 3D (Metos3D) \parencite{PiwSla16}
and ran each spin-up over \num{10000} model years. We applied the parameter
vectors listed in Table \ref{table:ParameterValues-Modelhierarchy} for the
different biogeochemical models identifying the parameter vector of the N-DOP
model with that of the MITgcm-PO4-DOP model.
We assessed the approximations of the steady annual cycle using different
initial concentrations based on the norm of difference
\eqref{eqn:StoppingCriterion} and the accuracy of the approximation. For
this purpose, we compared these approximations with a reference solution,
denoted by $\mathbf{y}_{\text{default}}^{10000}$, namely the result obtained
by a spin-up with Metos3D using the default initial concentration. We
measured the accuracy of an approximation $\mathbf{x} \in
\mathbb{R}^{n_y n_x}$ by the relative difference
\begin{align}
\label{eqn:relativeError}
\frac{\left\| \mathbf{x} - \mathbf{y}_{\text{default}}^{10000} \right\|_2}
{\left\| \mathbf{y}_{\text{default}}^{10000} \right\|_2}
\end{align}
and called this quantity \eqref{eqn:relativeError} the \emph{(relative)
error} of the respective result $\mathbf{x}$.
\subsection{Numerical Results}
\label{sec:NumericalResults}
\begin{figure}[!tb]
\centering
\subfloat[N model: Norm of difference \eqref{eqn:StoppingCriterion}.]{\includegraphics{Figures/Spinup_N.pdf}}
\quad
\subfloat[N model: Relative error \eqref{eqn:relativeError}.]{\includegraphics{Figures/2Norm_N.pdf}}
\quad
\subfloat[N-DOP model: Norm of difference \eqref{eqn:StoppingCriterion}.]{\includegraphics{Figures/Spinup_N-DOP.pdf}}
\quad
\subfloat[N-DOP model: Relative error \eqref{eqn:relativeError}.]{\includegraphics{Figures/2Norm_N-DOP.pdf}}
\quad
\subfloat[MITgcm-PO4-DOP model: Norm of difference \eqref{eqn:StoppingCriterion}.]{\includegraphics{Figures/Spinup_MITgcm-PO4-DOP.pdf}}
\quad
\subfloat[MITgcm-PO4-DOP model: Relative error \eqref{eqn:relativeError}.]{\includegraphics{Figures/2Norm_MITgcm-PO4-DOP.pdf}}
\caption{Convergence of the spin-up using different initial concentrations
for the N, N-DOP and MITgcm-PO4-DOP model. Shown are the norm of
difference \eqref{eqn:StoppingCriterion} between consecutive
iterations in the spin-up and the relative error
\eqref{eqn:relativeError} for one exemplary parameter vector of
each initial concentration type.}
\label{fig:Convergence_1}
\end{figure}
\begin{figure}[!tb]
\centering
\subfloat[N model.]{\includegraphics{Figures/fig2/fig2a.pdf}}
\quad
\subfloat[N-DOP model.]{\includegraphics{Figures/fig2/fig2b.pdf}}
\quad
\subfloat[MITgcm-PO4-DOP model.]{\includegraphics{Figures/fig2/fig2c.pdf}}
\caption{Visualization of the norm of difference
\eqref{eqn:StoppingCriterion} and the relative error
\eqref{eqn:relativeError} for $\ell = 10000$ for the N, N-DOP and
MITgcm-PO4-DOP model. Shown are the results for \num{100}
different initial concentrations respectively of the various
initial concentrations types. The figures in the right
column contain a detail of the figure in the left column.}
\label{fig:ScatterPlot_1}
\end{figure}
Regardless of the initial concentration, the spin-up calculation resulted in
the same approximation of the steady annual cycle for the N, N-DOP and
MITgcm-PO4-DOP model. Figure \ref{fig:Convergence_1} demonstrates a similar
convergence behavior for all different initial concentrations indicating that
the spin-up reached nearly the same accuracy of the norm of differences
\eqref{eqn:StoppingCriterion}. Particularly, the spin-up ended with the same
approximation of the steady annual cycle for the different initial
concentrations (Figure \ref{fig:Convergence_1}, right column). Using an
initial concentration with the whole concentration in only one box of the
discretization for each tracer, the spin-up required several thousand model
years to distribute the tracer concentration throughout the ocean.
Consequently, the norm of differences is slightly larger and the accuracy
after \num{10000} model years is slightly worse. To reach the accuracy of the
other initial concentration types, further model years would be necessary.
Furthermore, the marginal differences using a random partitioning of the mass
(N-DOP and MITgcm-PO4-DOP model) resulted from the smaller concentration for
the tracer N compared to the default initial concentration because most
of the concentration was present as nutrients for the reference solution.
Figure \ref{fig:ScatterPlot_1} shows the same results for all used initial
concentrations. Except for the use of the initial concentration with the
whole concentration in only one box of the spatial discretization for each
tracer, the spin-ups reached almost the same norm of differences
\eqref{eqn:StoppingCriterion}. More importantly, each of the spin-ups
calculated an excellent approximation of the reference solution. Using the
initial concentrations with the whole concentration in one single box for
each tracer, the slightly larger relative error resulted from the required
model years to distribute the tracer concentrations from the single boxes
throughout the ocean. Nevertheless, these approximations adequately reflected
the reference solution.
\begin{figure}[!tb]
\centering
\subfloat[NP-DOP model: Norm of difference \eqref{eqn:StoppingCriterion}.]{\includegraphics{Figures/Spinup_NP-DOP.pdf}}
\quad
\subfloat[NP-DOP model: Relative error \eqref{eqn:relativeError}.]{\includegraphics{Figures/2Norm_NP-DOP.pdf}}
\quad
\subfloat[NPZ-DOP model: Norm of difference \eqref{eqn:StoppingCriterion}.]{\includegraphics{Figures/Spinup_NPZ-DOP.pdf}}
\quad
\subfloat[NPZ-DOP model: Relative error \eqref{eqn:relativeError}.]{\includegraphics{Figures/2Norm_NPZ-DOP.pdf}}
\quad
\subfloat[NPZD-DOP model: Norm of difference \eqref{eqn:StoppingCriterion}.]{\includegraphics{Figures/Spinup_NPZD-DOP.pdf}}
\quad
\subfloat[NPZD-DOP model: Relative error \eqref{eqn:relativeError}.]{\includegraphics{Figures/2Norm_NPZD-DOP.pdf}}
\caption{Convergence of the spin-up using different initial concentrations
for the NP-DOP, NPZ-DOP and NPZD-DOP model. Shown are the norm of
difference \eqref{eqn:StoppingCriterion} between consecutive
iterations in the spin-up and the relative error
\eqref{eqn:relativeError} for one exemplary parameter vector of
each initial concentration type.}
\label{fig:Convergence_2}
\end{figure}
\begin{figure}[!tb]
\centering
\subfloat[NP-DOP model.]{\includegraphics{Figures/fig4/fig4a.pdf}}
\quad
\subfloat[NPZ-DOP model.]{\includegraphics{Figures/fig4/fig4b.pdf}}
\quad
\subfloat[NPZD-DOP model.]{\includegraphics{Figures/fig4/fig4c.pdf}}
\caption{Visualization of the norm of difference
\eqref{eqn:StoppingCriterion} and the relative error
\eqref{eqn:relativeError} for $\ell = 10000$ for the NP-DOP,
NPZ-DOP and NPZD-DOP model. Shown are the results for \num{100}
different initial concentrations respectively of the various
initial concentration types. The figures in the right column
contain a detail of the figure in the left column.}
\label{fig:ScatterPlot_2}
\end{figure}
For the NP-DOP, NPZ-DOP and NPZD-DOP model, the initial concentration
influenced the approximation of the steady annual cycle. The relative errors
in Figure \ref{fig:Convergence_2} indicate the spin-up calculation of nearly
the same approximation of the steady annual cycle using the different initial
concentrations for each of the three biogeochemical models except for some
outliers with a huge relative error. Clearly, the norm of differences
\eqref{eqn:StoppingCriterion} was very small using the initial concentration
that had been created with the lognormal distribution and random ratio of the
tracer mass for the NP-DOP model as well as the initial concentration with the
whole concentration in a single box for each tracer for the NPZ-DOP and
NPZD-DOP model. Still, the spin-up ended with an invalid approximation of the
steady annual cycle (Figure \ref{fig:Convergence_2}). In these approximations,
first, all tracers were nearly constant, second, the tracer N contained more
mass than was initially available and, third, the tracer concentrations of the
other tracers (i.e., P, Z, D and DOP, if present) were exclusively negative.
Likewise, the approximation computed with the initial concentration, which
was generated with the normal distribution and random ratio of the tracer
mass, was inadmissible because, as above, apart from the tracer N, all tracers
were nearly constant and had exclusively negative concentrations (Figure
\ref{fig:Convergence_2}). Conversely, the spin-up calculated a reasonable
approximation of the reference solution for the NPZ-DOP and NPZD-DOP model
starting from the initial concentration generated with the lognormal
distribution and random ratio of the tracers (for the NPZ-DOP and NPZD-DOP
model) or the initial concentration with the whole concentration in a single
box and random ratio of the tracers (for the NPZ-DOP model), although the
error (especially on the surface in the North Atlantic (Baffin Bay) or South
Atlantic (southwest coast of Africa) for the tracers N and DOP) was slightly
larger (Figure \ref{fig:Convergence_2}). Interestingly, the proportion of the
tracer N compared to the other tracers was lowest of these three initial
concentrations. Figure \ref{fig:ScatterPlot_2} summarizes the error of the
approximations using the spin-up that was initialized with all different
initial concentrations for the three different biogeochemical models NP-DOP,
NPZ-DOP and NPZD-DOP. These approximations were mainly almost identical to the
reference solution. As a result of the necessary distribution of the
concentration throughout the ocean, the error was generally somewhat larger
when an initial concentration with the whole concentration in single boxes
was used. Especially in the case of using either a random ratio of the mass
between the tracers or an initial concentration with the whole concentration
in a single box for each tracer, there were, however, some initial
concentrations for which the spin-up calculated inadmissible approximations
containing negative tracer concentrations and, consequently, the error was
large (Figure \ref{fig:ScatterPlot_2}).
\subsection{Model equations for marine ecosystems}
\label{sec:ModelEquation}
A system of partial differential equations represents the marine ecosystem
model. The number of modeled tracers determines the complexity of the marine
ecosystem model and, thus, the size of the system of differential equations.
In the rest of this paper, we consider marine ecosystem models using an
offline model with $n_y \in \mathbb{N}$ tracers on a spatial domain
$\Omega \subset \mathbb{R}^3$ (i.e., the ocean) and a time interval $[0,1]$
(i.e., one model year). Function $y_i: \Omega \times [0,1] \rightarrow
\mathbb{R}$, $i \in \left\{1, \ldots, n_y \right\}$, describes the tracer
concentrations of tracer $y_i$ and $\mathbf{y} := \left( y_i
\right)_{i=1}^{n_{y}}$ summarizes the tracer concentrations of all tracers.
For $i = 1, \ldots, n_y$, the system of parabolic partial differential
equations
\begin{align}
\label{eqn:Modelequation}
\frac{\partial y_i}{\partial t} (x,t)
+ \left( D (x,t) + A(x,t) \right) y_i (x,t)
&= q_i \left( x, t, \mathbf{y}, \mathbf{u} \right),
& x \in \Omega, t &\in [0,1], \\
\label{eqn:Boundarycondition}
\frac{\partial y_i}{\partial n} (x,t) &= 0,
& x \in \partial \Omega, t &\in [0,1],
\end{align}
describes the tracer transport of a marine ecosystem model. Here, the
homogeneous Neumann boundary condition \eqref{eqn:Boundarycondition} includes
the normal derivative and models no fluxes on the boundary.
The ocean currents, modeled by spatially discretized advection and diffusion,
transport the tracers in marine water. The linear operator $A: \Omega \times
[0,1] \rightarrow \mathbb{R}$ describes the advection as
\begin{align}
\label{eqn:Advection}
A(x,t) y_i (x,t) &:= \textrm{div} \left( v(x,t) y_i (x,t) \right),
& x \in \Omega, t &\in [0,1],
\end{align}
$i \in \{1, \ldots, n_y\}$, using a given velocity field $v: \Omega \times
[0,1] \rightarrow \mathbb{R}^3$. The diffusion operator $D: \Omega \times
[0,1] \rightarrow \mathbb{R}$ models the turbulent effects of the ocean
circulation but neglects the molecular diffusion of the tracers themselves
because this is known to be much smaller than the diffusion induced by
turbulence. Due to the quite different spatial scales in horizontal and
vertical direction, the diffusion operator requires a splitting
$D = D_h + D_v$ into a horizontal and a vertical part and an implicit
treatment of the vertical part $D_v$ in the time-integration. Both directions
are modeled in the second-order form as
\begin{align}
\label{eqn:Diffusion-horizontal}
D_{h} (x,t) y_i (x,t) &:= -{\textrm{div}_h} \left( \kappa_h (x,t) \nabla_h
y_i (x,t) \right)
& x \in \Omega, t &\in [0,1],\\
\label{eqn:Diffusion-vertical}
D_{v} (x,t) y_i (x,t) &:= -\frac{\partial}{\partial z}
\left(\kappa_{v} (x,t)\frac{\partial y_i}{\partial z}(x,t)\right),
& x \in \Omega, t &\in [0,1],
\end{align}
$i \in \{1, \ldots, n_y\}$, where $\textrm{div}_h$ and $\nabla_h$ denote the
horizontal divergence and gradient, $\kappa_h, \kappa_v: \Omega \times [0,1]
\rightarrow \mathbb{R}$ the diffusion coefficient fields and $z$ the vertical
coordinate. The diffusion coefficient fields are identical for all tracers
since the molecular diffusion is neglected.
The biogeochemical model contains the biogeochemical processes modeled in the
marine ecosystem. In addition to the biogeochemical model, the marine
ecosystem model also takes the effects of the ocean dynamics into account and,
therefore, contains the whole system \eqref{eqn:Modelequation} to
\eqref{eqn:PeriodicCondition}. The nonlinear function $q_i: \Omega \times
[0,1] \rightarrow~\mathbb{R}, \left( x, t \right) \mapsto q_i \left( x, t,
\mathbf{y}, \mathbf{u} \right)$ represents the biogeochemical processes for
tracer $y_i$, $i \in \{1, \ldots, n_y\}$. These functions $q_i$
depend, firstly, on space and time (for example, on the variability of the
solar radiation), secondly, on the coupling to the other tracers and, thirdly,
on $n_u \in \mathbb{N}$ model parameters $\mathbf{u} \in \mathbb{R}^{n_u}$
(such as growth, loss and mortality rates or the sinking speed). The
biogeochemical model $\mathbf{q} = \left( q_i \right)_{i=1}^{n_{y}}$
summarizes the biogeochemical processes of all tracers.
An annually periodic solution of the marine ecosystem model (i.e., a steady
annual cycle) fulfills, in addition to \eqref{eqn:Modelequation} and
\eqref{eqn:Boundarycondition}, the periodicity condition
\begin{align}
\label{eqn:PeriodicCondition}
y_i (x, 0) &= y_i (x, 1), & x &\in \Omega,
\end{align}
for $i = 1, \ldots, n_y$. Therefore, we assume that the operators $A, D$ and
the functions $q_i$ are also annually periodic in time.
\subsection{Biogeochemical models}
\label{sec:BiogeochemicalModels}
The biogeochemical models differ in the given number of ecosystem species. In
the present paper, we applied a hierarchy of five different biogeochemical
models of increasing complexity introduced by \textcite{KrKhOs10} as well
as a biogeochemical model introduced by \textcite{DuSoScSt05}. In the
following, we briefly introduce the biogeochemical models but we refer to
\textcite{KrKhOs10, DuSoScSt05, PiwSla16} for a detailed description of the
modeled processes and model equations. Table
\ref{table:Parameter-Modelhierarchy} summarizes the model parameters of the
biogeochemical models. Moreover, Table
\ref{table:ParameterValues-Modelhierarchy} contains the assignment of the
model parameters to the various biogeochemical models and the values used in
this paper.
\begin{table}[tb]
\caption{Model parameters of the biogeochemical models.}
\label{table:Parameter-Modelhierarchy}
\centering
\begin{tabular}{l l l}
\hline
Parameter & Description & Unit \\
\hline
$k_w$ & Attenuation coefficient of water & \si{\per \metre} \\
$k_c$ & Attenuation coefficient of phytoplankton & \si{\Bracketo \milli \mol \Phosphate \per \cubic \metre \per \Bracketc \per \metre} \\
$\mu_P$ & Maximum growth rate & \si{\per \day} \\
$\mu_Z$ & Maximum grazing rate & \si{\per \day} \\
$K_N$ & Half saturation constant for $\textrm{PO}_4$ uptake & \si{\milli \mol \Phosphate \per \cubic \metre} \\
$K_P$ & Half saturation constant for grazing & \si{\milli \mol \Phosphate \per \cubic \metre} \\
$K_I$ & Light intensity compensation & \si{\watt \per \square \metre} \\
$\sigma_Z$ & Fraction of production remaining in $\textrm{Z}$ & \si{1} \\
$\sigma_\text{DOP}$ & Fraction of losses assigned to $\textrm{DOP}$ & \si{1} \\
$\lambda_P$ & Linear phytoplankton loss rate & \si{\per \day} \\
$\kappa_P$ & Quadratic phytoplankton loss rate & \si{\Bracketo \milli \mol \Phosphate \per \cubic \metre \per \Bracketc \per \day} \\
$\lambda_Z$ & Linear zooplankton loss rate & \si{\per \day} \\
$\kappa_Z$ & Quadratic zooplankton loss rate & \si{\Bracketo \milli \mol \Phosphate \per \cubic \metre \per \Bracketc \per \day} \\
$\lambda'_P$ & Phytoplankton mortality rate & \si{\per \day} \\
$\lambda'_Z$ & Zooplankton mortality rate & \si{\per \day} \\
$\lambda'_D$ & Degradation rate & \si{\per \day} \\
$\lambda'_\text{DOP}$ & Decay rate & \si{\per \year} \\
$b$ & Implicit representation of sinking speed & \si{1} \\
$a_D$ & Depth-dependent increase of sinking speed & \si{\per \day} \\
$b_D$ & Initial sinking speed & \si{\metre \per \day} \\
\hline
\end{tabular}
\end{table}
\begin{table}[tb]
\caption{Assignment of the model parameters to the biogeochemical models and used parameter values. The model parameters of the MITgcm-PO4-DOP model correspond to those of the N-DOP model.}
\label{table:ParameterValues-Modelhierarchy}
\centering
\begin{tabular}{l r r r r r}
\hline
Parameter & N & N-DOP & NP-DOP & NPZ-DOP & NPZD-DOP \\
\hline
$k_w$ & 0.02 & 0.02 & 0.02 & 0.02 & 0.02 \\
$k_c$ & & & 0.48 & 0.48 & 0.48 \\
$\mu_P$ & 2.0 & 2.0 & 2.0 & 2.0 & 2.0 \\
$\mu_Z$ & & & 2.0 & 2.0 & 2.0 \\
$K_N$ & 0.5 & 0.5 & 0.5 & 0.5 & 0.5 \\
$K_P$ & & & 0.088 & 0.088 & 0.088 \\
$K_I$ & 30.0 & 30.0 & 30.0 & 30.0 & 30.0 \\
$\sigma_Z$ & & & & 0.75 & 0.75 \\
$\sigma_\text{DOP}$ & & 0.67 & 0.67 & 0.67 & 0.67 \\
$\lambda_P$ & & & 0.04 & 0.04 & 0.04 \\
$\kappa_P$ & & & 4.0 & & \\
$\lambda_Z$ & & & & 0.03 & 0.03 \\
$\kappa_Z$ & & & & 3.2 & 3.2 \\
$\lambda'_P$ & & & 0.01 & 0.01 & 0.01 \\
$\lambda'_Z$ & & & & 0.01 & 0.01 \\
$\lambda'_D$ & & & & & 0.05 \\
$\lambda'_\text{DOP}$ & & 0.5 & 0.5 & 0.5 & 0.5 \\
$b$ & 0.858 & 0.858 & 0.858 & 0.858 & \\
$a_D$ & & & & & 0.058 \\
$b_D$ & & & & & 0.0 \\
\hline
\end{tabular}
\end{table}
Many biogeochemical processes depend on the amount of available light. Based
on the astronomical formula of \textcite{PalPla76} and taking into account the
ice cover, the exponential attenuation of water as well as phytoplankton (if
included in the model), the light intensity is described by the light
limitation function $I: \Omega \times [0, 1] \rightarrow \mathbb{R}_{\geq 0}$.
Due to the decreasing light intensity, the ocean is divided into a euphotic
(sunlit) zone of about \SI{100}{\metre} and an aphotic zone below. The
biological production (for example, photosynthesis, grazing or mortality)
takes place mainly in the euphotic zone, and particulate matter sinks to depth
where it remineralizes according to an empirical power-law relationship
\parencite{MKKB87}.
The N model contains only one tracer modeling phosphate ($\textrm{PO}_4$) as
inorganic nutrients (\textrm{N}) (i.e., $\mathbf{y} = \mathbf{y}_{\text{N}}$)
and is the simplest biogeochemical model of the hierarchy
\parencite[cf.][]{BacMai90, KrKhOs10}. Depending on available nutrients and
light, the phytoplankton production (or biological uptake)
\begin{align}
\label{eqn:Phytoplankton}
f_P: \Omega \times [0,1] \rightarrow \mathbb{R},
f_P (x, t) &= \mu_P y_P^* \frac{I(x,t)}{K_I + I(x,t)}
\frac{\mathbf{y}_N (x,t)}{K_N + \mathbf{y}_N (x,t)}
\end{align}
is limited by a maximum production rate $\mu_P \in \mathbb{R}_{>0}$ and
applies an implicitly prescribed concentration of phytoplankton $y_P^* =
0.0028$~\si{\milli\mole\Phosphate\per\cubic\metre}. Table
\ref{table:ParameterValues-Modelhierarchy} lists the $n_u = 5$ model
parameters.
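For illustration, the production term \eqref{eqn:Phytoplankton} is a product of two Michaelis--Menten-type limitation factors. A minimal Python sketch, using the N-model parameter values from Table \ref{table:ParameterValues-Modelhierarchy} and treating the light intensity $I(x,t)$ as a given input:

```python
# Sketch of the phytoplankton production (biological uptake) term f_P of
# the N model: a product of Michaelis-Menten-type limitation factors in
# light I and nutrients y_N.  Parameter values follow the N column of the
# parameter table; the light intensity I is a given input here.

MU_P = 2.0         # maximum production rate [1/d]
K_I = 30.0         # light intensity compensation [W/m^2]
K_N = 0.5          # half saturation constant for PO4 uptake [mmol P/m^3]
Y_P_STAR = 0.0028  # implicitly prescribed phytoplankton conc. [mmol P/m^3]

def f_P(I, y_N):
    """Phytoplankton production, limited by light and nutrients."""
    return MU_P * Y_P_STAR * (I / (K_I + I)) * (y_N / (K_N + y_N))

# Production vanishes without light and saturates at MU_P * Y_P_STAR.
print(f_P(0.0, 1.0))  # 0.0
```

The zooplankton grazing term \eqref{eqn:Zooplankton} of the more complex models has the same structure, with a quadratic (Holling type III) response in the phytoplankton concentration.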
The N-DOP model includes two tracers, nutrients (\textrm{N}) and dissolved
organic phosphorus (\textrm{DOP}), i.e. $\mathbf{y} =
(\mathbf{y}_{\textrm{N}}, \mathbf{y}_{\textrm{DOP}})$
\parencite[cf.][]{BacMai91, PaFoBo05, KrKhOs10}. Using the same phytoplankton
production \eqref{eqn:Phytoplankton} as the N model, the N-DOP model contains
$n_u = 7$ model parameters (see Table
\ref{table:ParameterValues-Modelhierarchy}).
The NP-DOP model comprises three tracers: nutrients (\textrm{N}),
phytoplankton (\textrm{P}) and dissolved organic phosphorus (\textrm{DOP}),
i.e., $\mathbf{y} = (\mathbf{y}_{\textrm{N}}, \mathbf{y}_{\textrm{P}},
\mathbf{y}_{\textrm{DOP}})$ \parencite[cf.][]{KrKhOs10}. Instead of using an
implicit treatment $y_P^*$ of phytoplankton, the NP-DOP model computes the
phytoplankton production \eqref{eqn:Phytoplankton} using the explicit
phytoplankton concentration $\mathbf{y}_{\text{P}}$. Using the implicitly
prescribed zooplankton concentration $y_Z^* =
0.01$~\si{\milli\mole\Phosphate\per\cubic\metre}, the zooplankton grazing
\begin{align}
\label{eqn:Zooplankton}
f_Z: \Omega \times [0,1] \rightarrow \mathbb{R},
f_Z (x,t) &= \mu_Z y_Z^* \frac{\mathbf{y}_P(x,t)^2}{K_P^2 + \mathbf{y}_P(x,t)^2}
\end{align}
models the loss of phytoplankton. Overall, this model includes the $n_u = 13$
model parameters listed in Table \ref{table:ParameterValues-Modelhierarchy}.
The NPZ-DOP model includes four tracers, nutrients (\textrm{N}), phytoplankton
(\textrm{P}), zooplankton (\textrm{Z}) and dissolved organic phosphorus
(\textrm{DOP}), i.e., $\mathbf{y} = (\mathbf{y}_{\textrm{N}},
\mathbf{y}_{\textrm{P}}, \mathbf{y}_{\textrm{Z}}, \mathbf{y}_{\textrm{DOP}})$
\parencite[cf.][]{KrKhOs10}. The phytoplankton production
\eqref{eqn:Phytoplankton} and zooplankton grazing \eqref{eqn:Zooplankton} are
the same as for the NP-DOP model but this model uses explicitly the
zooplankton concentration $\mathbf{y}_{\text{Z}}$ instead of implicitly
prescribed concentration $y_Z^*$. Table
\ref{table:ParameterValues-Modelhierarchy} lists the $n_u = 16$ model
parameters.
The NPZD-DOP model comprises five tracers, nutrients (\textrm{N}),
phytoplankton (\textrm{P}), zooplankton (\textrm{Z}), detritus (\textrm{D})
and dissolved organic phosphorus (\textrm{DOP}), i.e., $\mathbf{y} =
(\mathbf{y}_{\textrm{N}}, \mathbf{y}_{\textrm{P}},$ $\mathbf{y}_{\textrm{Z}},
\mathbf{y}_{\textrm{D}}, \mathbf{y}_{\textrm{DOP}})$, and is the most complex
biogeochemical model of the hierarchy \parencite[cf.][]{SOGES05, KrKhOs10}.
Using the phytoplankton production \eqref{eqn:Phytoplankton} and zooplankton
grazing \eqref{eqn:Zooplankton} as for the NPZ-DOP model, the NPZD-DOP model
contains $n_u = 18$ model parameters (see Table
\ref{table:ParameterValues-Modelhierarchy}).
The MITgcm-PO4-DOP model contains two tracers, phosphate ($\textrm{PO}_4$)
and dissolved organic phosphorus (\textrm{DOP}), i.e. $\mathbf{y} =
(\mathbf{y}_{\textrm{N}}, \mathbf{y}_{\textrm{DOP}})$
\parencite[cf.][]{DuSoScSt05}. This model resembles the N-DOP model and,
therefore, we identified the $n_u = 7$ model parameters with those of the
N-DOP model (see Table \ref{table:ParameterValues-Modelhierarchy}).
\subsection{Transport matrix method}
\label{sec:TransportMatrixMethod}
The \emph{transport matrix method} (TMM) efficiently approximates the tracer
transport of the ocean circulation by matrix-vector multiplications
\parencite{KhViCa05, Kha07}. The discretized advection-diffusion equation can
be written as a linear matrix equation because the application of the
advection and diffusion operator, $A$ and $D$, on a spatially discretized
tracer vector is linear. Therefore, the TMM replaces the direct
implementation of a discretization scheme for the advection and diffusion by
the application of matrices approximating the ocean circulation. In
particular, the TMM approximates the ocean circulation solely by these
matrices, which include the influence on the transport of all parameterized
processes represented in the underlying ocean circulation model
\parencite{KhViCa05}.
Using the TMM, each time step of the simulation for a marine ecosystem model
consists only of two matrix-vector multiplications, modeling the ocean
circulation, and an evaluation of the biogeochemical model. In order to
discretize the advection-diffusion equation, we use a grid with
$n_x \in \mathbb{N}$ grid points $\left( x_k \right)_{k=1}^{n_x}$ as spatial
discretization of the domain $\Omega$ (i.e., the ocean) and the time steps
$t_0, \ldots, t_{n_{t}} \in [0,1]$, $n_t \in \mathbb{N}$, specified by
\begin{align*}
t_j &:= j \Delta t, & j &= 0, \ldots, n_t, & \Delta t &:= \frac{1}{n_t},
\end{align*}
as an equidistant grid for the discretization of the time interval $[0,1]$
(i.e., one model year). For time instant $t_j$, $j \in \{0, \ldots,
n_t -1\}$, vector $\mathbf{y}_{ji} \approx \left( y_{i} \left( t_{j},
x_{k} \right) \right)_{k=1}^{n_x} \in \mathbb{R}^{n_x}$, firstly, represents
the numerical approximation of a spatially discretized tracer $y_i$, $i \in
\{1, \ldots, n_y\}$, and $\mathbf{q}_{ji} \approx \left( q_i \left( x_k, t_j,
\mathbf{y}_j, \mathbf{u} \right) \right)_{k=1}^{n_x} \in \mathbb{R}^{n_x}$,
$\mathbf{u} \in \mathbb{R}^{n_u}$, the spatially discretized biogeochemical
term $q_i$ for the tracer $y_i$. Using a reasonable concatenation,
$\mathbf{y}_{j} := \left( \mathbf{y}_{ji} \right)_{i=1}^{n_y} \in
\mathbb{R}^{n_y n_x}$ and $\mathbf{q}_j := \left( \mathbf{q}_{ji}
\right)_{i=1}^{n_y} \in \mathbb{R}^{n_y n_x}$, secondly, combine the numerical
approximations as well as the biogeochemical terms of all tracers at time
instant $t_j$. With an explicit discretization of the advection and horizontal
diffusion as well as an implicit discretization of the vertical diffusion, the
application of a semi-discrete Euler scheme for \eqref{eqn:Modelequation}
results in a time-stepping
\begin{align*}
\mathbf{y}_{j+1} &= \left( \mathbf{I} + \Delta t \mathbf{A}_j
+ \Delta t \mathbf{D}_j^h \right) \mathbf{y}_j
+ \Delta t \mathbf{D}_j^v \mathbf{y}_{j+1}
+ \Delta t \mathbf{q}_j \left( \mathbf{y}_j, \mathbf{u} \right),
& j &= 0, \ldots, n_t -1,
\end{align*}
with the identity matrix $\mathbf{I} \in \mathbb{R}^{n_x \times n_x}$ and the spatially
discretized counterparts $\mathbf{A}_j, \mathbf{D}_j^h$ and $\mathbf{D}_j^v$
of the operators $A, D_h$ and $D_v$ at time instant $t_j$, $j \in \{0, \ldots,
n_t - 1\}$. Defining the explicit and implicit transport matrices
\begin{align*}
\mathbf{T}_{j}^{\text{exp}} &:= \mathbf{I} + \Delta t \mathbf{A}_j
+ \Delta t \mathbf{D}_j^h \in \mathbb{R}^{n_x \times n_x}, \\
\mathbf{T}_{j}^{\text{imp}} &:= \left( \mathbf{I} - \Delta t \mathbf{D}_j^v
\right)^{-1} \in \mathbb{R}^{n_x \times n_x}
\end{align*}
for each time instant $t_j$, $j \in \{0, \ldots, n_t - 1\}$, a time step of a
marine ecosystem model simulation using the TMM is specified by
\begin{align}
\label{eqn:TMM}
\mathbf{y}_{j+1} &= \mathbf{T}_{j}^{\text{imp}}
\left( \mathbf{T}_{j}^{\text{exp}} \mathbf{y}_j
+ \Delta t \mathbf{q}_j \left( \mathbf{y}_j, \mathbf{u} \right)
\right)
=: \varphi_j \left( \mathbf{y}_j, \mathbf{u} \right),
& j &= 0, \ldots, n_t - 1.
\end{align}
In practical computations, twelve sparse explicit and implicit transport
matrices represent the monthly averaged tracer transport. These matrices are
sparse because they are generated using a grid-point based ocean circulation
model and the implicit ones (i.e., the inverse of the discretization matrices)
include only the vertical part of the diffusion. For any time instant $t_j$,
$j \in \{0, \ldots, n_t - 1\}$, the matrices are interpolated linearly. In
the present paper, we used transport matrices computed with the MIT ocean
model \parencite{MAHPH97} using a global configuration with a latitudinal and
longitudinal resolution of \ang{2.8125} and \num{15} vertical layers
\parencite[see][]{KhViCa05}.
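A single time step \eqref{eqn:TMM} thus amounts to two sparse matrix-vector products plus one evaluation of the biogeochemical term. A minimal Python sketch, with random sparse stand-ins for the transport matrices (in practice these are the monthly matrices interpolated to $t_j$) and a linear decay as placeholder biogeochemistry:

```python
import numpy as np
import scipy.sparse as sp

rng = np.random.default_rng(0)
n_x = 200           # toy number of grid points
dt = 1.0 / 2880     # toy time step (n_t steps per model year)

# Random sparse stand-ins for the explicit and implicit transport matrices;
# in practice these are derived from an ocean circulation model.
T_exp = sp.random(n_x, n_x, density=0.02, random_state=0, format="csr")
T_imp = sp.random(n_x, n_x, density=0.02, random_state=1, format="csr")

def q(y, u):
    """Placeholder biogeochemical term: a simple linear decay with rate u."""
    return -u * y

def tmm_step(y, u):
    """One TMM time step: y_{j+1} = T_imp (T_exp y_j + dt q_j(y_j, u))."""
    return T_imp @ (T_exp @ y + dt * q(y, u))

y = rng.random(n_x)
y_next = tmm_step(y, u=0.5)
```

The sparsity of both matrices keeps the cost of one step linear in the number of their nonzero entries.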
\subsection{Computation of steady annual cycles}
\label{sec:ComputationSteadyAnnualCycles}
For a marine ecosystem model, an annually periodic solution (i.e., a steady
annual cycle) is, in a fully discrete setting, a fixed point of the nonlinear
mapping
\begin{align*}
\Phi &:= \varphi_{n_t -1} \circ \ldots \circ \varphi_{0}
\end{align*}
describing the time integration of \eqref{eqn:TMM} over one model year with
$\varphi_j$, $j \in \{0, \ldots, n_t -1\}$, defined in \eqref{eqn:TMM}.
Accordingly, an annual periodic solution fulfills
\begin{align*}
\mathbf{y}_{n_t} &= \Phi \left( \mathbf{y}_0 \right) = \mathbf{y}_0
\end{align*}
when applying the iteration \eqref{eqn:TMM} over one model year. Starting
from an arbitrary initial concentration vector $\mathbf{y}^{0} \in
\mathbb{R}^{n_y n_x}$ and using the fixed model parameters $\mathbf{u} \in
\mathbb{R}^{n_u}$, a classical fixed-point iteration takes the form
\begin{align}
\label{eqn:Spin-upIteration}
\mathbf{y}^{\ell + 1} &= \Phi \left( \mathbf{y}^{\ell}, \mathbf{u} \right),
& \ell &= 0, 1, \ldots.
\end{align}
If we interpret this fixed-point iteration \eqref{eqn:Spin-upIteration} as
pseudo-time stepping or \emph{spin-up}, vector $\mathbf{y}^{\ell} \in
\mathbb{R}^{n_y n_x}$ contains the tracer concentrations at the first time
instant of model year $\ell \in \mathbb{N}$.
The difference between two consecutive iterates determined by
\begin{align}
\label{eqn:StoppingCriterion}
\varepsilon_{\ell} := \left\| \mathbf{y}^{\ell} - \mathbf{y}^{\ell - 1} \right\|_2
\end{align}
is a measure of the numerical convergence (i.e., the periodicity of the
steady annual cycle) of the iteration \eqref{eqn:Spin-upIteration} for model
year $\ell \in \mathbb{N}$.
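The spin-up \eqref{eqn:Spin-upIteration} with the stopping criterion \eqref{eqn:StoppingCriterion} can be sketched as follows; a contractive toy map stands in for the one-model-year propagator $\Phi$:

```python
import numpy as np

def spin_up(Phi, y0, u, tol=1e-8, max_years=10000):
    """Fixed-point (pseudo-time stepping) iteration y^{l+1} = Phi(y^l, u),
    stopped when eps_l = ||y^l - y^{l-1}||_2 falls below tol."""
    y = y0
    for year in range(1, max_years + 1):
        y_new = Phi(y, u)
        eps = np.linalg.norm(y_new - y)
        y = y_new
        if eps < tol:
            break
    return y, year, eps

# Toy stand-in for the one-model-year propagator: a contraction whose
# fixed point is 2u.
Phi_toy = lambda y, u: 0.5 * y + u
y_star, years, eps = spin_up(Phi_toy, np.zeros(3), u=1.0)
```

For the toy contraction the iterate converges geometrically to the fixed point $2u$; for the actual marine ecosystem model, several thousand model years are typically needed.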
\section{Introduction}
\label{intro}
The Universal Dependencies (UD) resources have steadily grown over the years, and now treebanks for over 100 languages are available. The UD community has made a tremendous effort in providing a rich toolset for utilizing the treebanks for downstream applications, including pre-trained models for dependency parsing \cite{straka-etal-2016-udpipe,qi-etal-2020-stanza} and tools for manipulating UD trees \cite{popel-etal-2017-udapi,peng-zeldes-2018-roads,kalpakchi-boye-2020-udon2}.
Such an extensive infrastructure makes it more appealing to develop multilingual downstream applications based on UD, as a deterministic and more explainable competitor to the currently dominant neural methods. It is also compelling to use UD-based metrics for evaluation in multilingual settings. In fact, researchers have already started exploring such possibilities on both mentioned tracks. Kalpakchi and Boye \shortcite{kalpakchi2021quinductor} proposed a UD-based multilingual method for generating reading comprehension questions. Chaudhary et al. \shortcite{chaudhary-etal-2020-automatic} designed a UD-based method for automatically extracting rules governing morphological agreement. Pratapa et al. \shortcite{pratapa2021evaluating} proposed a UD-based metric to evaluate the morphosyntactic well-formedness of generated texts.
The authors of the latter two articles trained their own more robust versions of the dependency parsers, suitable for their needs. The authors of the first article relied on the off-the-shelf model, making the robustness of pre-trained dependency parsers crucial for the success of the downstream application. For instance, sentence simplification rules based on dependency trees might simply not fire due to a mistakenly identified head or dependency relation. In fact, state-of-the-art dependency parsers are not perfect, and assuming otherwise might harm the performance of downstream applications. A more relaxed (and realistic) assumption is that the errors made by the parser are at least {\em consistent\/}, so that potentially useful patterns for the task at hand can still be inferred from data. These patterns might not always be linguistically motivated, but as long as the parser makes consistent errors, they can still be useful for the task at hand.
In this article, we perform a case study operating under this relaxed assumption and investigate the consistency of errors while parsing sentences containing numerals. This step is useful, for instance, in question generation (especially for reading comprehension in the history domain) or numerical entity identification (e.g., distinguishing years from weights or distances).
\section{Background: Convolution partial tree kernels}
In order to measure parser accuracy, metrics like Unlabelled or Labelled Attachment Score (UAS and LAS, respectively) are often used. However, these metrics do not fully reflect the usefulness of the parsers in downstream applications. A minor error in attaching one dependency arc will result in a minor decrease in UAS and LAS. In fact, the very same minor error might lead to a completely unusable tree for the task at hand, depending on how close the error is to the root. Therefore, we need a metric that penalizes errors more heavily the closer they are to the root.
One metric possessing this desirable property is the convolution partial tree kernel (CPTK), originally proposed by Moschitti \shortcite{moschitti2006efficient} as a similarity measure for dependency trees. The basic idea is to represent trees as vectors in a common vector space, in such a way that the more common substructures two given trees have, the higher the dot product is between the corresponding two vectors (as illustrated in Figure~\ref{fig:cptk_example}). However, the vector space is induced only implicitly, whereas the dot product (the CPTK) itself is calculated using a dynamic programming algorithm (for more details we refer to the original article). CPTK values increase with the size of the trees, and thus can take any non-negative values, making them hard to interpret. Hence, we use normalized CPTK (NCPTK) which takes values between 0 and 1, and is calculated as shown in Figure \ref{fig:cptk_example}.
However, CPTKs cannot handle labeled edges and were originally applied to dependency trees containing only lexicals.
In this article, we use an extension proposed by Croce et al. \shortcite{croce-etal-2011-structured}, which includes edge labels (DEPREL) as separate nodes. The resulting computational structure, the Grammatical Relation Centered Tree (GRCT), is illustrated in Figure~\ref{fig:grct_example}. A dependency tree is transformed into a GRCT by making each UPOS node a child of a DEPREL node and a father of a FORM node.
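The normalization is the usual cosine normalization of a kernel, so that identical trees obtain the value 1. A minimal sketch, assuming some CPTK implementation `cptk` is given (the explicit stand-in kernel below is only for illustration, since the real CPTK induces its feature space implicitly):

```python
import math

def ncptk(t1, t2, cptk):
    """Cosine-normalized CPTK: lies in [0, 1], equals 1 for identical trees."""
    return cptk(t1, t2) / math.sqrt(cptk(t1, t1) * cptk(t2, t2))

# Toy stand-in kernel: a dot product over explicit substructure counts.
def toy_kernel(a, b):
    return sum(a[k] * b.get(k, 0) for k in a)

t = {"subtree_a": 2, "subtree_b": 1}
s = {"subtree_a": 1}
print(ncptk(t, t, toy_kernel))  # 1.0
```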
\begin{figure}
\centering
\begin{minipage}{.68\textwidth}
\centering
\includegraphics[width=\textwidth]{conv_kernels.pdf}
\caption{A simple example illustrating \emph{the concept} behind convolution partial tree kernels (in practice the vector space is induced only implicitly and CPTK is calculated using dynamic programming)}
\label{fig:cptk_example}
\end{minipage}%
\hspace{1.3em}
\begin{minipage}{0.28\textwidth}
\centering
\includegraphics[width=\textwidth]{t1_combined.pdf}
\caption{A simple example of a GRCT transformation}
\label{fig:grct_example}
\end{minipage}
\end{figure}
\section{Method}
\label{sec:method}
To explore the consistency of errors while parsing numerals, we have used UD treebanks for 4 European languages (2 Germanic and 2 Slavic). To simplify, we considered only sentences containing numerals representing years, later referred to as \emph{original sentences}. We defined these numerals as 4 digits surrounded by spaces, via the simple regular expression {\tt "(?<= )\textbackslash d\{4\}(?= )"}. We then sampled uniformly at random 50 integers between 1100 and 2100 using a fixed random seed, and replaced the occurrences of the previously identified numerals in the original sentences by each of these numbers. Thus, for every found original sentence in a treebank, we synthesized 50 \emph{augmented sentences} (later referred to as \emph{an augmented batch}), only differing in the 4-digit numbers.
We only substituted the first found occurrence of a 4-digit number in a sentence. However, if the same number appeared multiple times in the sentence, then all its occurrences were substituted.
Given such minor changes, a consistent dependency parser should output the same dependency tree for every sentence in each augmented batch. These trees should not necessarily be the same as gold original trees (although this is obviously desirable), but at the very least, the errors made in each augmented batch should be of the same kind. We consider two trees to have the errors of the same kind, and thus belonging to the same \emph{cluster of errors}, if their dependency trees only differ in the 4-digit numerals. All DEPRELs, UPOS tags and FEATS should be exactly the same for any two trees in the same cluster.
Evidently, not all 4-digit numbers in the original sentences were actually years, but the argument about the consistency of errors still stands even if the numbers were amounts of money, temperatures, etc. The magnitude of the numbers was not drastically changed (they are still 4-digit numbers), so the sentences should remain intelligible also after substitution.
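The augmentation step described above can be sketched as follows; the regular expression is the one from the text, while the example sentence, seed and defaults are illustrative:

```python
import random
import re

# Regular expression from the text: 4 digits surrounded by spaces.
YEAR_RE = re.compile(r"(?<= )\d{4}(?= )")

def augment(sentence, n=50, lo=1100, hi=2100, seed=42):
    """Build an augmented batch: replace all occurrences of the first
    4-digit number found in the sentence by each of n sampled integers."""
    match = YEAR_RE.search(sentence)
    if match is None:
        return []
    rng = random.Random(seed)
    numbers = rng.sample(range(lo, hi + 1), n)  # distinct 4-digit numbers
    return [sentence.replace(match.group(0), str(k)) for k in numbers]

batch = augment("The treaty was signed in 1648 , ending the war .")
```

Since only the first matched number is targeted but all of its occurrences are replaced, the sketch mirrors the substitution behavior described above.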
In order to evaluate both the consistency of errors and correctness of a dependency parser after introducing the changes above, we need to answer the following questions.
\begin{enumerate}[label=Q\arabic*]
\item How many augmented batches are parsed completely correctly?
\begin{itemize}
\item if the corresponding original sentence is parsed correctly
\item if the corresponding original sentence is parsed incorrectly
\end{itemize}
\item How many sentences in each augmented batch are parsed correctly on average?
\begin{itemize}
\item if the corresponding original sentence is parsed correctly
\item if the corresponding original sentence is parsed incorrectly
\end{itemize}
\item How many augmented batches corresponding to incorrectly parsed original sentences have consistent errors, i.e. have the same dependency trees within a batch except FORMs and LEMMAs?
\item On average, how many clusters of errors does an augmented batch with inconsistent errors have?
\item On average, how similar are dependency trees in the clusters found in Q4?
\end{enumerate}
Answering Q1 to Q3 is straightforward: we parse the original and augmented sentences using a pre-trained dependency parser and calculate descriptive statistics. To answer Q4 and Q5, we propose to calculate the NCPTK for each pair of trees in an augmented batch. To perform the calculations, we transform each dependency tree to a GRCT, replacing the FORMs (which differ by experimental design) with the FEATS. We can then construct an undirected graph, where each node is a dependency tree in the batch and two nodes are connected if their NCPTK is exactly 1 (i.e., their dependency trees are identical). The problem of finding error clusters in Q4 then boils down to finding all maximal cliques in this graph, for which we use the Bron–Kerbosch algorithm \cite{bron1973algorithm}. The similarity of the dependency trees in the resulting clusters can be assessed using the already calculated NCPTKs, which provides the answer to Q5.
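The clustering step can be sketched as follows, using the basic (pivot-free) form of the Bron–Kerbosch recursion on a toy batch of four trees whose pairwise NCPTK values are assumed given:

```python
def bron_kerbosch(R, P, X, adj, out):
    """Basic (pivot-free) Bron-Kerbosch: collect all maximal cliques."""
    if not P and not X:
        out.append(sorted(R))
        return
    for v in list(P):
        bron_kerbosch(R | {v}, P & adj[v], X & adj[v], adj, out)
        P.remove(v)
        X.add(v)

def error_clusters(ncptk):
    """Connect trees i, j of a batch whenever NCPTK(i, j) == 1 (identical
    trees) and return the maximal cliques, i.e. the clusters of errors."""
    n = len(ncptk)
    adj = {i: {j for j in range(n) if j != i and ncptk[i][j] == 1.0}
           for i in range(n)}
    out = []
    bron_kerbosch(set(), set(range(n)), set(), adj, out)
    return out

# Toy batch of four trees: trees 0, 1, 2 identical, tree 3 different.
K = [[1.0, 1.0, 1.0, 0.4],
     [1.0, 1.0, 1.0, 0.4],
     [1.0, 1.0, 1.0, 0.4],
     [0.4, 0.4, 0.4, 1.0]]
print(error_clusters(K))  # [[0, 1, 2], [3]]
```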
In hopes of improving the parsers' performance and the consistency of errors, we have also tried to retrain the tokenizer, lemmatizer, PoS tagger and dependency parser (later referred to as a \emph{pipeline}) from scratch using two approaches. The first approach relies on \emph{numeral augmentation} and starts by sampling 20 four-digit integers using a different random seed (while ensuring no overlap with the previously used 50 integers). Using these 20 new numbers and the same procedure as before, we synthesized 20 additional sentences for each previously found original sentence in the training and development treebanks. We will refer to treebanks formed by original and newly synthesized sentences as \emph{augmented treebanks}. The second approach uses \emph{token substitution} and replaces previously found four-digit integers with a special token {\tt NNNN}. The training and development treebanks after this procedure keep their size the same (in contrast to the numeral augmentation method) and will be later referred to as \emph{substituted treebanks}.
We have used Stanza \cite{qi-etal-2020-stanza} to get pretrained dependency parsers as well as to train the whole pipeline from scratch and UDon2 \cite{kalpakchi-boye-2020-udon2} to perform the necessary manipulations on dependency trees and calculate NCPTK. The code is available at \href{https://github.com/dkalpakchi/ud_parser_consistency}{{\tt https://github.com/dkalpakchi/ud\_parser\_consistency}}.
\section{Experimental results}
\subsection{Pretrained pipeline}
We have started the experiment by parsing all original and augmented sentences in the training and development treebanks of the respective languages. A summary of the results for the off-the-shelf parsers is presented in Table~\ref{tab:pretrained_desc}. To our surprise, some sentences were not segmented correctly, i.e., one sentence was split into multiple sentences, both among original and augmented sentences. However, we did not find any consistent pattern: for instance, the Swedish parser made more segmentation errors for augmented sentences, whereas all the other parsers exhibited the opposite. Nonetheless, we have excluded the cases with wrong sentence segmentation from further analysis. The final number of sentences considered is shown in the rows ``Original considered'' and ``Augmented considered'' in Table~\ref{tab:pretrained_desc}.
\begin{table}[h]
\centering
\begin{tabular}{|l|c|c|c|c|c|c|c|c|}
\hline
\multirow{2}{*}{\bf Metric} & \multicolumn{2}{c|}{\bf English} & \multicolumn{2}{c|}{\bf Swedish} & \multicolumn{2}{c|}{\bf Russian} & \multicolumn{2}{c|}{\bf Ukrainian} \\ \cline{2-9}
& Train & Dev & Train & Dev & Train & Dev & Train & Dev \\ \hline
Original in total & 235 & 14 & 108 & 5 & 1420 & 270 & 103 & 29\\
Wrong sent. segm. & 12 & 0 & 2 & 0 & 25 & 5 & 1 & 1 \\
Original considered & 223 & 14 & 106 & 5 & 1395 & 265 & 102 & 28 \\
Corr. parsed sent. & 53 & 1 & 76 & 1 & 360 & 53 & 27 & 2 \\
Corr. parsed sent. (\%) & 23.8\% & 7.1\% & 71.7\% & 20\% & 25.8\% & 20\% & 26.5\% & 7.1\% \\ \hline
Augmented in total & 11150 & 700 & 5300 & 250 & 69750 & 13250 & 5100 & 1400 \\
Wrong sent. segm. & 0 & 0 & 17 & 14 & 13 & 0 & 0 & 0 \\
Augmented considered & 11150 & 700 & 5283 & 236 & 69737 & 13250 & 5100 & 1400 \\
Corr. parsed sent. & 2689 & 50 & 3525 & 43 & 17787 & 2540 & 1227 & 100 \\
Corr. parsed sent. (\%) & 24.1\% & 7.1\% & 66.7\% & 18.2\% & 25.5\% & 19.2\% & 24.1\% & 7.1\% \\ \hline
\end{tabular}
\caption{Results of parsing the original and augmented sentences with pre-trained parsers from Stanza. ``Corr'' stands for ``Correctly'', ``sent'' stands for sentence(s)}
\label{tab:pretrained_desc}
\end{table}
We have excluded metrics commonly used within the UD community, e.g.\ UAS, LAS or BLEX, because for these metrics we observed only minor changes (less than 1 percentage point).
Another argument for omitting these metrics is that while they are useful in comparing different parsers, they do not fully reflect the usefulness of the parsers in downstream applications. In fact, even a minor error in attaching one dependency arc might lead to a completely wrong tree for the task at hand (depending on how close the error is to the root). Keeping this in mind, we compared accuracy on the sentence level only (reported in the rows ``Correctly parsed'' in Table \ref{tab:pretrained_desc}). We deemed a sentence to be correctly parsed if the NCPTK between its dependency tree and its gold counterpart was 1. We transformed all trees to GRCT and replaced FORM with FEATS, thus requiring not only all DEPREL to be identical, but also all UPOS and FEATS. As can be seen, the number of correctly parsed sentences is either on par or worse for augmented sentences, reaching a performance drop of 5 percentage points for the Swedish training set!
Results of a more detailed analysis needed for answering questions 1 - 5 (posed in Section \ref{sec:method}) are reported in Tables \ref{tab:pretrained_en} - \ref{tab:pretrained_uk}. We adopt the following notation for these tables: ``Original +'' (``Original -'') indicates cases when the original sentence was correctly (incorrectly) parsed. ``QX'' indicates a row with data necessary for answering question X, ``Corr'' stands for ``Correct(ly)'', ``sent'' stands for sentences.
We observe a number of interesting patterns from these reports. If the original sentences are incorrectly parsed, the vast majority of sentences in the corresponding augmented batches will also be incorrectly parsed (see mean and median in Q2 rows for ``Original -''). The fact that an original sentence is correctly parsed does not mean that all sentences in augmented batches will be correctly parsed (see mean and median in Q2 rows for ``Original +''). In fact, the number of wrong batches in such a case can be surprisingly large, e.g.\ 24 (31.5\%) for the Swedish training set.
\begin{table}[H]
\centering
\begin{tabular}{|l|c|c|c|c|}
\hline \multirow{2}{*}{\bf Metric} & \multicolumn{2}{c|}{\bf Training set} & \multicolumn{2}{c|}{\bf Development set} \\ \cline{2-5}
& Original + & Original - & Original + & Original -\\ \hline
Batches considered & 53 & 170 & 1 & 13\\
Completely corr. batches (Q1) & 49 & 0 & 1 & 0 \\
\hline
Corr. parsed sent. within a batch (Q2) & & & & \\
\FirstIndent Mean (SD) & 49 (6.14) & 0.54 (3.67) & 50 (0) & 0 (0)\\
\FirstIndent Median (Min - Max) & 50 (5 - 50) & 0 (0 - 37) & 50 (50 - 50) & 0 (0 - 0)\\
\hline
Batches with consistent errors (Q3) & 0 & 101 & NA & 4\\
\hline
Number of error clusters (Q4) & & & &\\
\FirstIndent Mean (SD) & 2 (0) & 2.63 (0.95) & NA & 3.89 (2.64)\\
\FirstIndent Median (Min - Max) & 2 (2 - 2) & 2 (2 - 7) & NA & 3 (2 - 10)\\
\hline
Between-cluster NCPTK (Q5) & & & &\\
\FirstIndent Mean (SD) & 0 (0) & 0.07 (0.15) & NA & 0.04 (0.09)\\
\FirstIndent Median (Min - Max) & 0 (0 - 0) & 0 (0 - 0.8) & NA & 0 (0 - 0.28)\\
\hline
\end{tabular}
\caption{A detailed analysis of the parsing results for English using a pretrained pipeline}
\label{tab:pretrained_en}
\end{table}
\begin{table}[H]
\centering
\begin{tabular}{|l|c|c|c|c|}
\hline \multirow{2}{*}{\bf Metric} & \multicolumn{2}{c|}{\bf Training set} & \multicolumn{2}{c|}{\bf Development set} \\ \cline{2-5}
& Original + & Original - & Original + & Original -\\ \hline
Batches considered & 76 & 30 & 1 & 4\\
Completely corr. batches (Q1) & 52 & 0 & 0 & 0 \\
\hline
Corr. parsed sent. within a batch (Q2) & & & & \\
\FirstIndent Mean (SD) & 45.05 (10.77) & 3.37 (10.5) & 43 (0) & 0 (0)\\
\FirstIndent Median (Min - Max) & 50 (0 - 50) & 0 (0 - 42) & 43 (43 - 43) & 0 (0 - 0)\\
\hline
Batches with consistent errors (Q3) & 0 & 16 & 0 & 1\\
\hline
Number of error clusters (Q4) & & & &\\
\FirstIndent Mean (SD) & 2.29 (0.68) & 2.43 (1.05) & 2 (0) & 2.33 (0.47)\\
\FirstIndent Median (Min - Max) & 2 (2 - 4) & 2 (2 - 5) & 2 (2 - 2) & 2 (2 - 3)\\
\hline
Between-cluster NCPTK (Q5) & & & &\\
\FirstIndent Mean (SD) & 0.04 (0.12) & 0.04 (0.11) & 0 (0) & 0.0002 (0.0003)\\
\FirstIndent Median (Min - Max) & 0 (0 - 0.67) & 0 (0 - 0.37) & 0 (0 - 0) & 0 (0 - 0.0008)\\
\hline
\end{tabular}
\caption{A detailed analysis of the parsing results for Swedish using a pretrained pipeline}
\label{tab:pretrained_sv}
\end{table}
\begin{table}[H]
\centering
\begin{tabular}{|l|c|c|c|c|}
\hline \multirow{2}{*}{\bf Metric} & \multicolumn{2}{c|}{\bf Training set} & \multicolumn{2}{c|}{\bf Development set} \\ \cline{2-5}
& Original + & Original - & Original + & Original -\\ \hline
Batches considered & 360 & 1035 & 53 & 212\\
Completely corr. batches (Q1) & 341 & 0 & 48 & 0 \\
\hline
Corr. parsed sent. within a batch (Q2) & & & & \\
\FirstIndent Mean (SD) & 48.85 (6.34) & 0.19 (2.11) & 47.87 (7.81) & 0.01 (0.21)\\
\FirstIndent Median (Min - Max) & 50 (2 - 50) & 0 (0 - 41) & 50 (3 - 50) & 0 (0 - 3)\\
\hline
Batches with consistent errors (Q3) & 0 & 860 & 0 & 173\\
\hline
Number of error clusters (Q4) & & & &\\
\FirstIndent Mean (SD) & 2.21 (0.69) & 2.16 (0.43) & 2.2 (0.4) & 2.13 (0.4)\\
\FirstIndent Median (Min - Max) & 2 (2 - 5) & 2 (2 - 4) & 2 (2 - 3) & 2 (2 - 4)\\
\hline
Between-cluster NCPTK (Q5) & & & &\\
\FirstIndent Mean (SD) & 0.08 (0.18) & 0.04 (0.14) & 0 (0) & 0.08 (0.2)\\
\FirstIndent Median (Min - Max) & 0 (0 - 0.67) & 0 (0 - 0.75) & 0 (0 - 0) & 0 (0 - 0.72)\\
\hline
\end{tabular}
\caption{A detailed analysis of the parsing results for Russian using a pretrained pipeline}
\label{tab:pretrained_ru}
\end{table}
\begin{table}[H]
\centering
\begin{tabular}{|l|c|c|c|c|}
\hline \multirow{2}{*}{\bf Metric} & \multicolumn{2}{c|}{\bf Training set} & \multicolumn{2}{c|}{\bf Development set} \\ \cline{2-5}
& Original + & Original - & Original + & Original -\\ \hline
Batches considered & 27 & 75 & 2 & 26\\
Completely corr. batches (Q1) & 24 & 0 & 2 & 0 \\
\hline
Corr. parsed sent. within a batch (Q2) & & & & \\
\FirstIndent Mean (SD) & 45.41 (13.14) & 0.01 (0.11) & 50 (0) & 0 (0)\\
\FirstIndent Median (Min - Max) & 50 (4 - 50) & 0 (0 - 1) & 50 (50 - 50) & 0 (0 - 0)\\
\hline
Batches with consistent errors (Q3) & 0 & 52 & NA & 11\\
\hline
Number of error clusters (Q4) & & & &\\
\FirstIndent Mean (SD) & 2 (0) & 2.61 (1.37) & NA & 2.8 (0.9)\\
\FirstIndent Median (Min - Max) & 2 (2 - 2) & 2 (2 - 8) & NA & 3 (2 - 5)\\
\hline
Between-cluster NCPTK (Q5) & & & &\\
\FirstIndent Mean (SD) & 0 (0) & 0.12 (0.22) & NA & 0.06 (0.19)\\
\FirstIndent Median (Min - Max) & 0 (0 - 0) & 0 (0 - 0.775) & NA & 0 (0 - 0.77)\\
\hline
\end{tabular}
\caption{A detailed analysis of the parsing results for Ukrainian using a pretrained pipeline}
\label{tab:pretrained_uk}
\end{table}
The errors in augmented batches are not consistent. The degree of inconsistency varies between the languages, ranging from around 17\% (175 of 1035) for the Russian training set to 75\% (3 of 4) for the Swedish development set (see the Q3 rows); the average observed inconsistency is around 44\%, and its magnitude is similar between the training and development sets. The most typical number of error clusters is 2, and the maximum observed is 10 (see the Q4 rows). The trees in different error clusters mostly have a low NCPTK between them (see the Q5 rows), indicating either a large number of errors or errors occurring early on (close to the root). We provide some examples of batches with inconsistent errors in the Appendix.
\subsection{Pipeline trained from scratch on treebanks with numeral augmentation}
We have repeated the same experiment as in the previous section, but with a pipeline trained from scratch on augmented treebanks (as outlined in Section \ref{sec:method}). The results summary is reported in Table~\ref{tab:retrained_desc}.
\begin{table}[h]
\centering
\begin{tabular}{|l|c|c|c|c|c|c|c|c|}
\hline
\multirow{2}{*}{\bf Metric} & \multicolumn{2}{c|}{\bf English} & \multicolumn{2}{c|}{\bf Swedish} & \multicolumn{2}{c|}{\bf Russian} & \multicolumn{2}{c|}{\bf Ukrainian} \\ \cline{2-9}
& Train & Dev & Train & Dev & Train & Dev & Train & Dev \\ \hline
Original in total & 235 & 14 & 108 & 5 & 1420 & 270 & 103 & 29\\
Wrong sent. segm. & 5 & 0 & 3 & 0 & 18 & 5 & 0 & 0 \\
Original considered & 230 & 14 & 105 & 5 & 1402 & 265 & 103 & 29 \\
Corr. parsed sent. & 230 & 0 & 97 & 2 & 976 & 48 & 102 & 3 \\
Corr. parsed sent. (\%) & \textbf{100\%} & 0\% & \textbf{92.4\%} & \textbf{40\%} & \textbf{69.6\%} & 18.1\% & \textbf{99\%} & \textbf{10.3\%} \\ \hline
Augmented in total & 11500 & 700 & 5250 & 250 & 70100 & 13250 & 5150 & 1450 \\
Wrong sent. segm. & 0 & 0 & 0 & 0 & 13 & 0 & 0 & 0 \\
Augmented considered & 11500 & 700 & 5250 & 250 & 70087 & 13250 & 5150 & 1450 \\
Corr. parsed sent. & 11452 & 0 & 4864 & 100 & 49005 & 2437 & 5100 & 133 \\
Corr. parsed sent. (\%) & \textbf{99.6\%} & 0\% & \textbf{92.7\%} & \textbf{40\%} & \textbf{69.9\%} & 18.4\% & \textbf{99\%} & \textbf{9.2\%} \\ \hline
\end{tabular}
\caption{Results of parsing the original and augmented sentences with the pipeline trained on augmented treebanks. ``Corr'' stands for ``Correctly'', ``sent'' stands for sentence(s). Performance improvements with respect to the pre-trained parser (see Table \ref{tab:pretrained_desc}) are indicated in \textbf{bold}.}
\label{tab:retrained_desc}
\end{table}
Retraining with numeral augmentation resulted in a clear and substantial performance boost for all languages, especially on the training treebanks. The boost on the development treebanks is less pronounced and sometimes even turns into a slight performance degradation. We attribute this to possible overfitting, indicating that 20 samples per original sentence might have been too many and that the procedure needs to be refined in the future. Nevertheless, the detailed analysis, reported in the Appendix, shows that the number of wrong sentence segmentations decreased for all languages and that the consistency of errors is either better than or on par with the pretrained counterparts. The number of error clusters was reduced to a maximum of 4, compared to 10 for the off-the-shelf parser.
\subsection{Pipeline trained from scratch on treebanks with token substitution}
We have repeated the same experiment as in the previous section, but with a pipeline trained from scratch on substituted treebanks (as outlined in Section \ref{sec:method}). The results summary is reported in Table~\ref{tab:retrained_tokens_desc}.
\begin{table}[h]
\centering
\begin{tabular}{|l|c|c|c|c|c|c|c|c|}
\hline
\multirow{2}{*}{\bf Metric} & \multicolumn{2}{c|}{\bf English} & \multicolumn{2}{c|}{\bf Swedish} & \multicolumn{2}{c|}{\bf Russian} & \multicolumn{2}{c|}{\bf Ukrainian} \\ \cline{2-9}
& Train & Dev & Train & Dev & Train & Dev & Train & Dev \\ \hline
Substituted in total & 235 & 14 & 108 & 5 & 1420 & 270 & 103 & 29\\
Wrong sent. segm. & 14 & 0 & 1 & 0 & 10 & 1 & 2 & 1 \\
Substituted considered & 221 & 14 & 107 & 5 & 1410 & 269 & 101 & 28 \\
Corr. parsed sent. & 81 & 1 & 73 & 2 & 341 & 59 & 23 & 2 \\
Corr. parsed sent. (\%) & \textbf{36.7\%} & 7.1\% & 68.2\% & \textbf{40\%} & 24.2\% & \textbf{21.9\%} & 22.8\% & 7.1\% \\ \hline
\end{tabular}
\caption{Results of parsing the substituted sentences with the pipeline trained on treebanks with token substitution. ``Corr'' stands for ``Correctly'', ``sent'' stands for sentence(s). Performance improvements with respect to the pre-trained parser (see Table \ref{tab:pretrained_desc}) are indicated in \textbf{bold}.}
\label{tab:retrained_tokens_desc}
\end{table}
Retraining with token substitution resulted in a slight performance boost for Russian and Swedish on the development treebanks, and a slight performance degradation on the training treebanks for all languages except English. Interestingly, more sentences were segmented correctly for Russian and Swedish, while the parsers for English and Ukrainian produced more segmentation errors compared to the pre-trained parsers. At the same time, more sentences were segmented incorrectly compared to the numeral augmentation method (except for Russian). Given that all models were re-trained with the same default seed from Stanza, we are unsure what this can be attributed to, other than the choice of the token {\tt NNNN} itself. The tokenization model in Stanza is based on unit (character) embeddings, so it might benefit from a token without letters, or simply from replacing all 4-digit numerals with one fixed integer, say 0000. This is, however, highly speculative and requires further investigation.
An obvious advantage of token substitution is that the errors become consistent by construction (since all 4-digit numerals are mapped to the same token, no clusters of errors can form). However, the observed effect on performance suggests that token substitution with this specific token {\tt NNNN} is not the best solution to the problem.
\section{Conclusion}
We have observed that as minor a change as replacing one 4-digit number with another leads to surprising performance fluctuations for pretrained parsers. Furthermore, we have found these errors to be inconsistent, which complicates the development of downstream applications. To alleviate the issue, we tried out two methods and trained two proof-of-concept pipelines from scratch. One of the methods, namely the numeral augmentation scheme, resulted in substantial performance gains.
Finally, the results of the experiment suggest that UD treebanks might be biased towards specific time intervals, e.g.\ the 19th and 20th centuries. Bias in the data leads to bias in the models, making it harder to use the parsers for some downstream applications, e.g.\ in the history domain. The results also prompt a further and more extensive investigation of other possible biases, such as names of geographical entities, gendered pronouns, currencies, etc.
\section*{Acknowledgements}
This work was supported by Vinnova (Sweden's Innovation Agency) within the project 2019-02997. We would like to thank the anonymous reviewers for their comments and the suggestion to try token substitution.
\bibliographystyle{acl}
\section{Introduction}
\label{intro}
The Universal Dependencies (UD) resources have steadily grown over the years, and now treebanks for over 100 languages are available. The UD community has made a tremendous effort in providing a rich toolset for utilizing the treebanks for downstream applications, including pre-trained models for dependency parsing \cite{straka-etal-2016-udpipe,qi-etal-2020-stanza} and tools for manipulating UD trees \cite{popel-etal-2017-udapi,peng-zeldes-2018-roads,kalpakchi-boye-2020-udon2}.
Such an extensive infrastructure makes it more appealing to develop multilingual downstream applications based on UD, as a deterministic and more explainable competitor to the currently dominant neural methods. It is also compelling to use UD-based metrics for evaluation in multilingual settings. In fact, researchers have already started exploring possibilities along both of these tracks. Kalpakchi and Boye \shortcite{kalpakchi2021quinductor} proposed a UD-based multilingual method for generating reading comprehension questions. Chaudhary et al. \shortcite{chaudhary-etal-2020-automatic} designed a UD-based method for automatically extracting rules governing morphological agreement. Pratapa et al. \shortcite{pratapa2021evaluating} proposed a UD-based metric to evaluate the morphosyntactic well-formedness of generated texts.
The authors of the latter two articles trained their own, more robust versions of dependency parsers, suitable for their needs. The authors of the first article relied on an off-the-shelf model, making the robustness of pre-trained dependency parsers crucial for the success of the downstream application. For instance, sentence simplification rules based on dependency trees might simply not fire due to a mistakenly identified head or dependency relation. In fact, state-of-the-art dependency parsers are not perfect, and assuming otherwise might harm the performance of downstream applications. A more relaxed (and realistic) assumption is that the errors made by the parser are at least {\em consistent\/}, so that potentially useful patterns for the task at hand can still be inferred from data. These patterns might not always be linguistically motivated, but if the parser makes consistent errors, they can still be useful.
In this article, we perform a case study operating under this relaxed assumption and investigate the consistency of errors while parsing sentences containing numerals. This step is useful, for instance, in question generation (especially for reading comprehension in the history domain) or numerical entity identification (e.g., distinguishing years from weights or distances).
\section{Background: Convolution partial tree kernels}
In order to measure parser accuracy, metrics like the Unlabelled or Labelled Attachment Score (UAS and LAS, respectively) are often used. However, these metrics do not fully reflect the usefulness of the parsers in downstream applications. A minor error in attaching one dependency arc will result in only a minor decrease in UAS and LAS, yet the very same error might lead to a completely unusable tree for the task at hand, depending on how close the error is to the root. Therefore, we need a metric that penalizes errors more heavily the closer they are to the root.
One metric possessing this desirable property is the convolution partial tree kernel (CPTK), originally proposed by Moschitti \shortcite{moschitti2006efficient} as a similarity measure for dependency trees. The basic idea is to represent trees as vectors in a common vector space, in such a way that the more common substructures two given trees have, the higher the dot product between the corresponding two vectors (as illustrated in Figure~\ref{fig:cptk_example}). However, the vector space is induced only implicitly, while the dot product (the CPTK) itself is calculated using a dynamic programming algorithm (for more details we refer to the original article). CPTK values grow with the size of the trees and can thus take arbitrary non-negative values, making them hard to interpret. Hence, we use the normalized CPTK (NCPTK), which takes values between 0 and 1 and is calculated as shown in Figure \ref{fig:cptk_example}.
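In symbols, this is the standard normalization for kernels (we assume this is precisely the computation depicted in Figure~\ref{fig:cptk_example}):
\[
\mathrm{NCPTK}(T_1, T_2) = \frac{\mathrm{CPTK}(T_1, T_2)}{\sqrt{\mathrm{CPTK}(T_1, T_1)\,\mathrm{CPTK}(T_2, T_2)}},
\]
so that every tree has an NCPTK of 1 with itself and, by the Cauchy--Schwarz inequality, $0 \le \mathrm{NCPTK}(T_1, T_2) \le 1$ for any pair of trees.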
However, CPTKs cannot handle labeled edges and were originally applied to dependency trees containing only lexicals.
In this article, we therefore use an extension proposed by Croce et al. \shortcite{croce-etal-2011-structured}, which includes edge labels (DEPREL) as separate nodes. The resulting computational structure, the Grammatical Relation Centered Tree (GRCT), is illustrated in Figure~\ref{fig:grct_example}. A dependency tree is transformed into a GRCT by making each UPOS node a child of a DEPREL node and the parent of a FORM node.
\begin{figure}
\centering
\begin{minipage}{.68\textwidth}
\centering
\includegraphics[width=\textwidth]{conv_kernels.pdf}
\caption{A simple example illustrating \emph{the concept} behind convolution partial tree kernels (in practice the vector space is induced only implicitly and CPTK is calculated using dynamic programming)}
\label{fig:cptk_example}
\end{minipage}%
\hspace{1.3em}
\begin{minipage}{0.28\textwidth}
\centering
\includegraphics[width=\textwidth]{t1_combined.pdf}
\caption{A simple example of a GRCT transformation}
\label{fig:grct_example}
\end{minipage}
\end{figure}
\section{Method}
\label{sec:method}
To explore the consistency of errors when parsing numerals, we have used UD treebanks for 4 European languages (2 Germanic and 2 Slavic). For simplicity, we considered only sentences containing numerals representing years, later referred to as \emph{original sentences}. We defined these numerals as 4 digits surrounded by spaces, via the simple regular expression {\tt "(?<= )\textbackslash d\{4\}(?= )"}. We then sampled 50 integers uniformly at random between 1100 and 2100 using a fixed random seed, and replaced the previously identified numerals in the original sentences by each of these numbers. Thus, for every original sentence found in a treebank, we synthesized 50 \emph{augmented sentences} (later referred to as \emph{an augmented batch}), differing only in the 4-digit numbers.
We only substituted the first 4-digit number found in a sentence; however, if that same number appeared multiple times in the sentence, all of its occurrences were substituted.
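The sampling and substitution procedure above can be sketched as follows. This is a minimal illustration rather than the actual experiment code: the seed value, the inclusive sampling bounds, and sampling without replacement are our assumptions, and real sentences come from treebank files rather than raw strings.

```python
# Sketch of the numeral augmentation procedure: find the first 4-digit
# number surrounded by spaces, then replace all its occurrences with each
# of 50 integers sampled from [1100, 2100] under a fixed seed.
# NOTE: seed=42, inclusive bounds, and sampling without replacement are
# illustrative assumptions, not details taken from the paper.
import random
import re

YEAR_RE = re.compile(r"(?<= )\d{4}(?= )")

def augment(sentence, n_samples=50, seed=42):
    """Return the augmented batch for one original sentence ([] if no match)."""
    match = YEAR_RE.search(sentence)
    if match is None:
        return []
    rng = random.Random(seed)
    numbers = rng.sample(range(1100, 2101), n_samples)
    # Replace every occurrence of the *first* found number, as in the text.
    # (str.replace is a simplification; it operates on raw substrings.)
    return [sentence.replace(match.group(0), str(num)) for num in numbers]

batch = augment("The treaty was signed in 1648 , and 1648 is remembered.")
```

Because the 50 sampled integers are distinct, each original sentence yields 50 distinct augmented sentences that differ only in the 4-digit number.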
Given such minor changes, a consistent dependency parser should output the same dependency tree for every sentence in each augmented batch. These trees need not be the same as the gold original trees (although this is obviously desirable), but at the very least, the errors made in each augmented batch should be of the same kind. We consider two trees to have errors of the same kind, and thus to belong to the same \emph{cluster of errors}, if their dependency trees differ only in the 4-digit numerals. All DEPRELs, UPOS tags and FEATS should be exactly the same for any two trees in the same cluster.
Evidently, not all 4-digit numbers in the original sentences were actually years, but the argument about the consistency of errors still stands even if the numbers were amounts of money, temperatures, etc. The magnitude of the numbers was not changed drastically (they are still 4-digit numbers), so the sentences should remain intelligible after substitution as well.
In order to evaluate both the consistency of errors and correctness of a dependency parser after introducing the changes above, we need to answer the following questions.
\begin{enumerate}[label=Q\arabic*]
\item How many augmented batches are parsed completely correctly?
\begin{itemize}
\item if the corresponding original sentence is parsed correctly
\item if the corresponding original sentence is parsed incorrectly
\end{itemize}
\item How many sentences in each augmented batch are parsed correctly on average?
\begin{itemize}
\item if the corresponding original sentence is parsed correctly
\item if the corresponding original sentence is parsed incorrectly
\end{itemize}
\item How many augmented batches corresponding to incorrectly parsed original sentences have consistent errors, i.e.\ have the same dependency trees within a batch except for FORMs and LEMMAs?
\item On average, how many clusters of errors does an augmented batch with inconsistent errors have?
\item On average, how similar are dependency trees in the clusters found in Q4?
\end{enumerate}
Q1 to Q3 can be answered simply by parsing the original and augmented sentences using a pre-trained dependency parser and calculating descriptive statistics. To answer Q4 and Q5, we propose to calculate the NCPTK for each pair of trees in an augmented batch. To perform the calculations, we transform each dependency tree into a GRCT, replacing the FORMs (which differ by experimental design) with the FEATS. We can then construct an undirected graph, where each node is a dependency tree in the batch and two nodes are connected if their NCPTK is exactly 1 (i.e., their dependency trees are identical). The problem of finding the error clusters in Q4 then boils down to finding all maximal cliques in the induced undirected graph, for which we use the Bron--Kerbosch algorithm \cite{bron1973algorithm}. The similarity of the dependency trees across the found clusters can be assessed using the already calculated NCPTKs, which provides the answer to Q5.
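The clustering step for Q4 can be sketched as follows. This is a minimal illustration under simplifying assumptions: {\tt ncptk} is a toy stand-in for the real kernel (which is computed with UDon2 over GRCTs), and we use a bare-bones Bron--Kerbosch implementation without pivoting.

```python
# Sketch of the error-clustering step (Q4): connect two parsed trees of an
# augmented batch whenever their NCPTK is exactly 1 (identical trees), then
# read off error clusters as maximal cliques of the resulting graph.

def maximal_cliques(adj):
    """Bron-Kerbosch without pivoting; adj maps node -> set of neighbours."""
    cliques = []

    def expand(r, p, x):
        if not p and not x:
            cliques.append(sorted(r))
        for v in list(p):
            expand(r | {v}, p & adj[v], x & adj[v])
            p.remove(v)
            x.add(v)

    expand(set(), set(adj), set())
    return cliques

def error_clusters(trees, ncptk):
    """Group the trees of one augmented batch into clusters of identical errors."""
    n = len(trees)
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if ncptk(trees[i], trees[j]) == 1:  # identical trees
                adj[i].add(j)
                adj[j].add(i)
    return maximal_cliques(adj)

# Toy batch: stand-in "trees" are strings, and the toy NCPTK is 1 iff the
# two strings are identical (mimicking identical dependency trees).
batch = ["treeA", "treeA", "treeB", "treeA", "treeB"]
clusters = error_clusters(batch, lambda a, b: 1 if a == b else 0)
```

Since NCPTK-equality between identical trees is an equivalence relation, the graph decomposes into disjoint cliques, and each maximal clique is one cluster of errors.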
In hopes of improving the parsers' performance and the consistency of errors, we have also tried to retrain the tokenizer, lemmatizer, PoS tagger and dependency parser (later referred to as a \emph{pipeline}) from scratch, using two approaches. The first approach relies on \emph{numeral augmentation} and starts by sampling 20 four-digit integers using a different random seed (while ensuring no overlap with the previously used 50 integers). Using these 20 new numbers and the same procedure as before, we synthesized 20 additional sentences for each previously found original sentence in the training and development treebanks. We will refer to the treebanks formed by the original and newly synthesized sentences as \emph{augmented treebanks}. The second approach uses \emph{token substitution} and replaces the previously found four-digit integers with a special token {\tt NNNN}. The training and development treebanks after this procedure keep the same size (in contrast to the numeral augmentation method) and will later be referred to as \emph{substituted treebanks}.
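The token-substitution preprocessing can be sketched as follows, assuming the same regular expression as before. This is a minimal illustration on raw strings; the actual procedure is applied to the treebank files themselves.

```python
# Sketch of token substitution: every 4-digit number surrounded by spaces
# (same regex as in the augmentation setup) is replaced by the token NNNN.
import re

YEAR_RE = re.compile(r"(?<= )\d{4}(?= )")

def substitute(sentence):
    """Replace all space-delimited 4-digit numbers with the NNNN token."""
    return YEAR_RE.sub("NNNN", sentence)

example = substitute("The war lasted from 1914 to 1918 .")
```

Note that, mirroring the simple regex from the text, numbers not surrounded by spaces on both sides (e.g.\ followed directly by a comma) are left untouched.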
We have used Stanza \cite{qi-etal-2020-stanza} to get pretrained dependency parsers as well as to train the whole pipeline from scratch and UDon2 \cite{kalpakchi-boye-2020-udon2} to perform the necessary manipulations on dependency trees and calculate NCPTK. The code is available at \href{https://github.com/dkalpakchi/ud_parser_consistency}{{\tt https://github.com/dkalpakchi/ud\_parser\_consistency}}.
\section{Experimental results}
\subsection{Pretrained pipeline}
We started the experiment by parsing all original and augmented sentences in the training and development treebanks of the respective languages. The results summary for the off-the-shelf parsers is presented in Table~\ref{tab:pretrained_desc}. To our surprise, some sentences were not segmented correctly, i.e.\ one sentence was split into several, both among original and augmented sentences. However, we did not find any consistent pattern: for instance, the Swedish parser made more segmentation errors for augmented sentences, whereas all the other parsers exhibited the opposite behavior. We have excluded the cases with wrong sentence segmentation from further analysis. The final number of sentences considered is shown in the rows ``Original considered'' and ``Augmented considered'' in Table~\ref{tab:pretrained_desc}.
\begin{table}[h]
\centering
\begin{tabular}{|l|c|c|c|c|c|c|c|c|}
\hline
\multirow{2}{*}{\bf Metric} & \multicolumn{2}{c|}{\bf English} & \multicolumn{2}{c|}{\bf Swedish} & \multicolumn{2}{c|}{\bf Russian} & \multicolumn{2}{c|}{\bf Ukrainian} \\ \cline{2-9}
& Train & Dev & Train & Dev & Train & Dev & Train & Dev \\ \hline
Original in total & 235 & 14 & 108 & 5 & 1420 & 270 & 103 & 29\\
Wrong sent. segm. & 12 & 0 & 2 & 0 & 25 & 5 & 1 & 1 \\
Original considered & 223 & 14 & 106 & 5 & 1395 & 265 & 102 & 28 \\
Corr. parsed sent. & 53 & 1 & 76 & 1 & 360 & 53 & 27 & 2 \\
Corr. parsed sent. (\%) & 23.8\% & 7.1\% & 71.7\% & 20\% & 25.8\% & 20\% & 26.5\% & 7.1\% \\ \hline
Augmented in total & 11150 & 700 & 5300 & 250 & 69750 & 13250 & 5100 & 1400 \\
Wrong sent. segm. & 0 & 0 & 17 & 14 & 13 & 0 & 0 & 0 \\
Augmented considered & 11150 & 700 & 5283 & 236 & 69737 & 13250 & 5100 & 1400 \\
Corr. parsed sent. & 2689 & 50 & 3525 & 43 & 17787 & 2540 & 1227 & 100 \\
Corr. parsed sent. (\%) & 24.1\% & 7.1\% & 66.7\% & 18.2\% & 25.5\% & 19.2\% & 24.1\% & 7.1\% \\ \hline
\end{tabular}
\caption{Results of parsing the original and augmented sentences with pre-trained parsers from Stanza. ``Corr'' stands for ``Correctly'', ``sent'' stands for sentence(s)}
\label{tab:pretrained_desc}
\end{table}
We have excluded metrics commonly used within the UD community, e.g.\ UAS, LAS and BLEX, because for these metrics we observed only minor changes (less than 1 percentage point).
Another argument for omitting these metrics is that while they are useful in comparing different parsers, they do not fully reflect the usefulness of the parsers in downstream applications. In fact, even a minor error in attaching one dependency arc might lead to a completely wrong tree for the task at hand (depending on how close the error is to the root). Keeping this in mind, we compared accuracy on the sentence level only (reported in the rows ``Correctly parsed'' in Table \ref{tab:pretrained_desc}). We deemed a sentence to be correctly parsed if the NCPTK between its dependency tree and its gold counterpart was 1. We transformed all trees to GRCT and replaced FORM with FEATS, thus requiring not only all DEPREL to be identical, but also all UPOS and FEATS. As can be seen, the number of correctly parsed sentences is either on par or worse for augmented sentences, reaching a performance drop of 5 percentage points for the Swedish training set!
Results of a more detailed analysis needed for answering questions 1 - 5 (posed in Section \ref{sec:method}) are reported in Tables \ref{tab:pretrained_en} - \ref{tab:pretrained_uk}. We adopt the following notation for these tables: ``Original +'' (``Original -'') indicates cases when the original sentence was correctly (incorrectly) parsed. ``QX'' indicates a row with data necessary for answering question X, ``Corr'' stands for ``Correct(ly)'', ``sent'' stands for sentences.
We observe a number of interesting patterns from these reports. If the original sentences are incorrectly parsed, the vast majority of sentences in the corresponding augmented batches will also be incorrectly parsed (see mean and median in Q2 rows for ``Original -''). The fact that an original sentence is correctly parsed does not mean that all sentences in augmented batches will be correctly parsed (see mean and median in Q2 rows for ``Original +''). In fact, the number of wrong batches in such a case can be surprisingly large, e.g.\ 24 (31.5\%) for the Swedish training set.
\begin{table}[H]
\centering
\begin{tabular}{|l|c|c|c|c|}
\hline \multirow{2}{*}{\bf Metric} & \multicolumn{2}{c|}{\bf Training set} & \multicolumn{2}{c|}{\bf Development set} \\ \cline{2-5}
& Original + & Original - & Original + & Original -\\ \hline
Batches considered & 53 & 170 & 1 & 13\\
Completely corr. batches (Q1) & 49 & 0 & 1 & 0 \\
\hline
Corr. parsed sent. within a batch (Q2) & & & & \\
\FirstIndent Mean (SD) & 49 (6.14) & 0.54 (3.67) & 50 (0) & 0 (0)\\
\FirstIndent Median (Min - Max) & 50 (5 - 50) & 0 (0 - 37) & 50 (50 - 50) & 0 (0 - 0)\\
\hline
Batches with consistent errors (Q3) & 0 & 101 & NA & 4\\
\hline
Number of error clusters (Q4) & & & &\\
\FirstIndent Mean (SD) & 2 (0) & 2.63 (0.95) & NA & 3.89 (2.64)\\
\FirstIndent Median (Min - Max) & 2 (2 - 2) & 2 (2 - 7) & NA & 3 (2 - 10)\\
\hline
Between-cluster NCPTK (Q5) & & & &\\
\FirstIndent Mean (SD) & 0 (0) & 0.07 (0.15) & NA & 0.04 (0.09)\\
\FirstIndent Median (Min - Max) & 0 (0 - 0) & 0 (0 - 0.8) & NA & 0 (0 - 0.28)\\
\hline
\end{tabular}
\caption{A detailed analysis of the parsing results for English using a pretrained pipeline}
\label{tab:pretrained_en}
\end{table}
\begin{table}[H]
\centering
\begin{tabular}{|l|c|c|c|c|}
\hline \multirow{2}{*}{\bf Metric} & \multicolumn{2}{c|}{\bf Training set} & \multicolumn{2}{c|}{\bf Development set} \\ \cline{2-5}
& Original + & Original - & Original + & Original -\\ \hline
Batches considered & 76 & 30 & 1 & 4\\
Completely corr. batches (Q1) & 52 & 0 & 0 & 0 \\
\hline
Corr. parsed sent. within a batch (Q2) & & & & \\
\FirstIndent Mean (SD) & 45.05 (10.77) & 3.37 (10.5) & 43 (0) & 0 (0)\\
\FirstIndent Median (Min - Max) & 50 (0 - 50) & 0 (0 - 42) & 43 (43 - 43) & 0 (0 - 0)\\
\hline
Batches with consistent errors (Q3) & 0 & 16 & 0 & 1\\
\hline
Number of error clusters (Q4) & & & &\\
\FirstIndent Mean (SD) & 2.29 (0.68) & 2.43 (1.05) & 2 (0) & 2.33 (0.47)\\
\FirstIndent Median (Min - Max) & 2 (2 - 4) & 2 (2 - 5) & 2 (2 - 2) & 2 (2 - 3)\\
\hline
Between-cluster NCPTK (Q5) & & & &\\
\FirstIndent Mean (SD) & 0.04 (0.12) & 0.04 (0.11) & 0 (0) & 0.0002 (0.0003)\\
\FirstIndent Median (Min - Max) & 0 (0 - 0.67) & 0 (0 - 0.37) & 0 (0 - 0) & 0 (0 - 0.0008)\\
\hline
\end{tabular}
\caption{A detailed analysis of the parsing results for Swedish using a pretrained pipeline}
\label{tab:pretrained_sv}
\end{table}
\begin{table}[H]
\centering
\begin{tabular}{|l|c|c|c|c|}
\hline \multirow{2}{*}{\bf Metric} & \multicolumn{2}{c|}{\bf Training set} & \multicolumn{2}{c|}{\bf Development set} \\ \cline{2-5}
& Original + & Original - & Original + & Original -\\ \hline
Batches considered & 360 & 1035 & 53 & 212\\
Completely corr. batches (Q1) & 341 & 0 & 48 & 0 \\
\hline
Corr. parsed sent. within a batch (Q2) & & & & \\
\FirstIndent Mean (SD) & 48.85 (6.34) & 0.19 (2.11) & 47.87 (7.81) & 0.01 (0.21)\\
\FirstIndent Median (Min - Max) & 50 (2 - 50) & 0 (0 - 41) & 50 (3 - 50) & 0 (0 - 3)\\
\hline
Batches with consistent errors (Q3) & 0 & 860 & 0 & 173\\
\hline
Number of error clusters (Q4) & & & &\\
\FirstIndent Mean (SD) & 2.21 (0.69) & 2.16 (0.43) & 2.2 (0.4) & 2.13 (0.4)\\
\FirstIndent Median (Min - Max) & 2 (2 - 5) & 2 (2 - 4) & 2 (2 - 3) & 2 (2 - 4)\\
\hline
Between-cluster NCPTK (Q5) & & & &\\
\FirstIndent Mean (SD) & 0.08 (0.18) & 0.04 (0.14) & 0 (0) & 0.08 (0.2)\\
\FirstIndent Median (Min - Max) & 0 (0 - 0.67) & 0 (0 - 0.75) & 0 (0 - 0) & 0 (0 - 0.72)\\
\hline
\end{tabular}
\caption{A detailed analysis of the parsing results for Russian using a pretrained pipeline}
\label{tab:pretrained_ru}
\end{table}
\begin{table}[H]
\centering
\begin{tabular}{|l|c|c|c|c|}
\hline \multirow{2}{*}{\bf Metric} & \multicolumn{2}{c|}{\bf Training set} & \multicolumn{2}{c|}{\bf Development set} \\ \cline{2-5}
& Original + & Original - & Original + & Original -\\ \hline
Batches considered & 27 & 75 & 2 & 26\\
Completely corr. batches (Q1) & 24 & 0 & 2 & 0 \\
\hline
Corr. parsed sent. within a batch (Q2) & & & & \\
\FirstIndent Mean (SD) & 45.41 (13.14) & 0.01 (0.11) & 50 (0) & 0 (0)\\
\FirstIndent Median (Min - Max) & 50 (4 - 50) & 0 (0 - 1) & 50 (50 - 50) & 0 (0 - 0)\\
\hline
Batches with consistent errors (Q3) & 0 & 52 & NA & 11\\
\hline
Number of error clusters (Q4) & & & &\\
\FirstIndent Mean (SD) & 2 (0) & 2.61 (1.37) & NA & 2.8 (0.9)\\
\FirstIndent Median (Min - Max) & 2 (2 - 2) & 2 (2 - 8) & NA & 3 (2 - 5)\\
\hline
Between-cluster NCPTK (Q5) & & & &\\
\FirstIndent Mean (SD) & 0 (0) & 0.12 (0.22) & NA & 0.06 (0.19)\\
\FirstIndent Median (Min - Max) & 0 (0 - 0) & 0 (0 - 0.775) & NA & 0 (0 - 0.77)\\
\hline
\end{tabular}
\caption{A detailed analysis of the parsing results for Ukrainian using a pretrained pipeline}
\label{tab:pretrained_uk}
\end{table}
The errors in augmented batches are not consistent. The degree of inconsistency varies between the languages ranging from around 17\% (175 of 1035) for the Russian training set to 75\% (3 of 4) for the Swedish development set (see Q3 rows). The average observed inconsistency of errors is around 44\%. The degree of inconsistency has a similar magnitude between the training and development sets. The most typical number of error clusters is 2 and maximum observed is 10 (see Q4 rows). The trees between the error clusters have mostly low NCPTK (see Q5 rows) indicating either a large number of errors or errors occurring early on (close to the root). We provide some examples of batches with inconsistent errors in the Appendix.
\subsection{Pipeline trained from scratch on treebanks with numeral augmentation}
We have repeated the same experiment as in the previous section, but with a pipeline trained from scratch on augmented treebanks (as outlined in Section \ref{sec:method}). The results summary is reported in Table~\ref{tab:retrained_desc}.
\begin{table}[h]
\centering
\begin{tabular}{|l|c|c|c|c|c|c|c|c|}
\hline
\multirow{2}{*}{\bf Metric} & \multicolumn{2}{c|}{\bf English} & \multicolumn{2}{c|}{\bf Swedish} & \multicolumn{2}{c|}{\bf Russian} & \multicolumn{2}{c|}{\bf Ukrainian} \\ \cline{2-9}
& Train & Dev & Train & Dev & Train & Dev & Train & Dev \\ \hline
Original in total & 235 & 14 & 108 & 5 & 1420 & 270 & 103 & 29\\
Wrong sent. segm. & 5 & 0 & 3 & 0 & 18 & 5 & 0 & 0 \\
Original considered & 230 & 14 & 105 & 5 & 1402 & 265 & 103 & 29 \\
Corr. parsed sent. & 230 & 0 & 97 & 2 & 976 & 48 & 102 & 3 \\
Corr. parsed sent. (\%) & \textbf{100\%} & 0\% & \textbf{92.4\%} & \textbf{40\%} & \textbf{69.6\%} & 18.1\% & \textbf{99\%} & \textbf{10.3\%} \\ \hline
Augmented in total & 11500 & 700 & 5250 & 250 & 70100 & 13250 & 5150 & 1450 \\
Wrong sent. segm. & 0 & 0 & 0 & 0 & 13 & 0 & 0 & 0 \\
Augmented considered & 11500 & 700 & 5250 & 250 & 70087 & 13250 & 5150 & 1450 \\
Corr. parsed sent. & 11452 & 0 & 4864 & 100 & 49005 & 2437 & 5100 & 133 \\
Corr. parsed sent. (\%) & \textbf{99.6\%} & 0\% & \textbf{92.7\%} & \textbf{40\%} & \textbf{69.9\%} & 18.4\% & \textbf{99\%} & \textbf{9.2\%} \\ \hline
\end{tabular}
\caption{Results of parsing the original and augmented sentences with the pipeline trained on augmented treebanks. ``Corr'' stands for ``Correctly'', ``sent'' stands for sentence(s). Performance improvements with respect to the pre-trained parser (see Table \ref{tab:pretrained_desc}) are indicated in \textbf{bold}.}
\label{tab:retrained_desc}
\end{table}
Retraining with numeral augmentation resulted in a clear and substantial performance boost for all languages, especially on the training treebanks. The boost on the development treebanks is less pronounced and in some cases turns into a slight performance degradation. We attribute this to possible overfitting, indicating that 20 samples per original sentence might have been too many and that the procedure needs to be refined in the future. Nevertheless, the detailed analysis, reported in the Appendix, shows that the number of wrong sentence segmentations decreased for all languages and that the consistency of errors is either better than or on par with that of the pretrained counterparts. The number of error clusters was reduced to a maximum of 4, compared to 10 for the off-the-shelf parser.
\subsection{Pipeline trained from scratch on treebanks with token substitution}
We have repeated the same experiment as in the previous section, but with a pipeline trained from scratch on substituted treebanks (as outlined in Section \ref{sec:method}). The results summary is reported in Table~\ref{tab:retrained_tokens_desc}.
\begin{table}[h]
\centering
\begin{tabular}{|l|c|c|c|c|c|c|c|c|}
\hline
\multirow{2}{*}{\bf Metric} & \multicolumn{2}{c|}{\bf English} & \multicolumn{2}{c|}{\bf Swedish} & \multicolumn{2}{c|}{\bf Russian} & \multicolumn{2}{c|}{\bf Ukrainian} \\ \cline{2-9}
& Train & Dev & Train & Dev & Train & Dev & Train & Dev \\ \hline
Substituted in total & 235 & 14 & 108 & 5 & 1420 & 270 & 103 & 29\\
Wrong sent. segm. & 14 & 0 & 1 & 0 & 10 & 1 & 2 & 1 \\
Substituted considered & 221 & 14 & 107 & 5 & 1410 & 269 & 101 & 28 \\
Corr. parsed sent. & 81 & 1 & 73 & 2 & 341 & 59 & 23 & 2 \\
Corr. parsed sent. (\%) & \textbf{36.7\%} & 7.1\% & 68.2\% & \textbf{40\%} & 24.2\% & \textbf{21.9\%} & 22.8\% & 7.1\% \\ \hline
\end{tabular}
\caption{Results of parsing the substituted sentences with the pipeline trained on treebanks with token substitution. ``Corr'' stands for ``Correctly'', ``sent'' stands for sentence(s). Performance improvements with respect to the pre-trained parser (see Table \ref{tab:pretrained_desc}) are indicated in \textbf{bold}.}
\label{tab:retrained_tokens_desc}
\end{table}
Retraining with token substitution resulted in a slight performance boost for Russian and Swedish on the development treebanks and a slight performance degradation on the training treebanks for all languages except English. Interestingly, more sentences were segmented correctly for Russian and Swedish, while the parsers for English and Ukrainian produced more segmentation errors compared to the pre-trained parsers. At the same time, more sentences were segmented incorrectly compared to the numeral augmentation method (except for Russian). Given that all models were re-trained with the same default seed from Stanza, we are unsure what this can be attributed to, other than the choice of the token {\tt NNNN} itself. The tokenization model in Stanza is based on unit (character) embeddings, so it might benefit from a token without letters, or simply from replacing all 4-digit numerals with one fixed integer, say 0000. This is, however, highly speculative and requires further investigation.
An obvious advantage of token substitution is that the errors become consistent (since no clusters of errors could potentially be formed). However, the observed effect on performance suggests that token substitution with this specific token {\tt NNNN} is not the best solution to the problem.
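For reference, the token-substitution preprocessing itself reduces to a single regular-expression pass. A minimal sketch, assuming that 4-digit numerals are whole tokens (Stanza's own tokenization boundaries may differ):

```python
import re

def substitute_numerals(text, token="NNNN"):
    """Replace every standalone 4-digit numeral with a fixed placeholder token."""
    return re.sub(r"\b\d{4}\b", token, text)

print(substitute_numerals("The treaty was signed in 1648 and revised in 1815."))
# → "The treaty was signed in NNNN and revised in NNNN."
```

The word boundaries (`\b`) prevent partial matches inside longer digit runs, so 5-digit numbers are left untouched.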
\section{Conclusion}
We have observed that a change as minor as replacing one 4-digit number with another leads to surprising performance fluctuations for pretrained parsers. Furthermore, we have noted that the errors are inconsistent, making the development of downstream applications more complicated. To alleviate the issue, we tried two methods and trained two proof-of-concept pipelines from scratch. One of the methods, namely the numeral augmentation scheme, resulted in substantial performance gains.
Finally, the results of the experiment suggest that UD treebanks might be biased towards specific time intervals, e.g.\ the 19th and 20th centuries. Bias in the data leads to bias in the models making it harder to use the parser for some downstream applications, e.g.\ in the history domain. The results of this experiment also prompt a further and more extensive investigation of possible other biases, such as names of geographical entities, gender pronouns, currencies, etc.
\section*{Acknowledgements}
This work was supported by Vinnova (Sweden's Innovation Agency) within the project 2019-02997. We would like to thank the anonymous reviewers for their comments and the suggestion to try token substitution.
\bibliographystyle{acl}
\section{Introduction}
$(\alpha,n)$ reactions are a common feature of thermonuclear environments with a neutron excess. Temperatures near or above tens of megakelvin are sufficient for $\alpha$-particles to overcome the Coulomb barrier and, for the neutron-rich nuclides in these environments, the $Q$-value for the neutron-emitting exit channel is generally the most favorable. For extreme neutron excesses, the reverse reactions $(n,\alpha)$, which are related to the forward $(\alpha,n)$ reactions via detailed balance, are themselves dominant. Furthermore, $\alpha$-emission from actinides present in fissile material or as contaminants leads to an additional neutron flux from subsequent $(\alpha,n)$ reactions on otherwise inactive surrounding material, complicating neutron flux analyses and contributing to detector backgrounds.
As such, $(\alpha,n)$ reactions play an important role for a variety of topics, including energy generation and diagnostics from nuclear fission and nuclear fusion~\cite{Serp14,Cerj18}, nucleosynthesis in astrophysical environments~\cite{Brav12,Blis20}, dark matter and neutrino detector backgrounds~\cite{Apri13,Hari05}, and forensic analyses and predictions associated with nuclear explosives~\cite{Runk10,Dola14}. Meanwhile, a large fraction of $(\alpha,n)$ cross sections for stable targets have not been measured and many of the measured reactions have been shown to disagree with theory predictions~\cite{Pere16}.
$(\alpha,n)$ cross sections can be measured using various techniques, including both activation and direct neutron detection. In an activation measurement, target nuclei are bombarded with a monoenergetic beam for some time, and $\gamma$-rays from the resulting radioactive sample are measured to determine the cross section. This method, while precise, is not applicable to all reactions of interest due to measurement time constraints created by decay products that are either too short- or too long-lived, or by decay radiation that is overwhelmed by prominent background radiation. The stacked-foil technique has been used to measure the cross section at several energies simultaneously with a single particle beam; however, a large reaction energy uncertainty is introduced for the lowest center-of-mass energies included in the measurement~\cite{Hagi18}.
Direct neutron detection often involves neutron long counters. Neutron long counters~\cite[e.g.][]{Utsu17,Arno11,Laur14,Gome11,Tari17,Fala13,Pere10,Math12} offer high efficiency and many are designed to have near 4$\pi$ solid angle coverage. In the long counter scheme, neutrons from the reaction are emitted from the target location and are typically moderated by a low-atomic-number ($Z$) material before being captured in neutron-sensitive proportional counters that are embedded within the moderator. Due to the moderating material, the initial energy of a neutron is lost before detection. In many cases the initial neutron energy distribution is unknown, so a large variation in efficiency with respect to initial neutron energy can lead to large relative uncertainties~\cite{Banu19}. When this is not taken into consideration, systematic uncertainties can be introduced into the measurement result~\cite{Pete17,Mohr18}.
The $^{3}{\rm He}$ BF$_{3}$ Giant Barrel (HeBGB) at the Edwards Accelerator Laboratory at Ohio University is a neutron long counter optimized to measure ($\alpha$,n) cross sections of interest for nuclear astrophysics and applications. HeBGB has been optimized to provide near-constant neutron detection efficiency for the relevant neutron energies.
Reaction-based validation measurements and source measurements have been performed to confirm the simulated efficiency of the detector. The following section discusses the design of the detector in MCNP6~\cite{Goor12}; validation measurements are discussed in Section~\ref{section:Experimental Validations}, and conclusions are given in Section~\ref{section:Conclusions}.
\section{Design of HeBGB}
\label{section:Design}
The HeBGB moderator is made of four 5.0'' $\times$ 23.6'' $\times$ 23.6'' pieces of natural ultra-high-molecular-weight polyethylene having a density of 0.95 g\,cm$^{-3}$. Embedded in the polyethylene are rings of 16 $^{3}$He and 18 BF$_{3}$ proportional counters and a T-style beam pipe which allows for passage of the beam and a target drive system as shown in Figure~\ref{HeBGBView}. The BF$_{3}$ proportional counters were previously used in Ref.~\cite{Mick03}. See Table~\ref{tab:i} for a full list of the detector specifications.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=.9\columnwidth,angle=0]{Figures/HeBGBViewsNoBkgrd.png}
\caption{(color online) Cross sectional view of the HeBGB long counter. Left: top down view and right: view from the beam path. The moderating material shown in green is enclosed on 3 sides by borated polyethylene shown in red. The bottom of the detector is held by a steel table shown in orange. BF$_{3}$ detectors make up the inner and middle rings shown as dark blue circles, while $^{3}$He detectors make up the outer ring shown as lighter blue circles in the right image.
\label{HeBGBView}}
\end{center}
\end{figure}
\begin{table}[htbp]
\centering
\caption{\label{tab:i} Detector Specification for the $^{3}$He (model number SA - P4-0814-102 and operating voltage 975V) and BF$_{3}$ (model number RS-P1-0813-101 and operating voltage 2400V) proportional counters.}
\smallskip
\begin{tabular}{|lrrrrr|}
\hline
Type & Manufacturer & Pressure [atm] & Sensitive Length [in] & Length [in] & Diameter [in]\\
\hline
$^{3}$He & Baker Hughes & 4.00 & 14.00 & 16.19 & 1.00\\
BF$_{3}$ & Reuter--Stokes & 0.723 & 12.25 & 14.44 & 1.00\\
\hline
\end{tabular}
\end{table}
A large number of MCNP6 calculations were performed to optimize the configuration of the proportional counters within the moderator. The configurations were created by varying the number of rings, ring radii, and number of proportional counters in each ring. Exploratory calculations were performed for 2, 3 and 4 rings with ring radii anywhere between 5 to 28 cm, where the radial spacing between rings was a minimum of 3 cm to prevent overlap of the detectors within the simulation. The number of proportional counters in each ring varied between 2 and 16, with equal numbers in the left and right quadrants, and the location of the BF$_{3}$ and $^{3}$He counters in the inner versus outer rings was also explored. Based on these initial calculations, a smaller phase space with 3 rings was simulated in more detail with 0.50 to 0.1~cm steps in the ring radius.
For each configuration, 10,000 monoenergetic events were generated at each energy, from 10~keV to 9~MeV in 1~MeV steps, with an isotropic angular distribution. The neutron detection efficiency was evaluated at each energy and an average $\epsilon_{\rm avg}$ was evaluated over the energy range. The relative uncertainty in the efficiency $\delta\epsilon$ was defined as half of the spread between the maximum efficiency $\epsilon_{\rm max}$ and minimum efficiency $\epsilon_{\rm min}$ over the energy range:
\begin{equation}
\delta\epsilon = \frac{\epsilon_{\rm max} - \epsilon_{\rm min}}{2\epsilon_{\rm avg}}.
\end{equation}
We use $\epsilon_{\rm avg}$ and $\delta\epsilon$ as metrics to compare the anticipated performance of each proportional counter configuration.
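Both figures of merit follow directly from a simulated efficiency curve. A minimal sketch (the efficiency values below are illustrative, not MCNP6 output):

```python
def efficiency_metrics(eff):
    """Return (average, relative half-spread) of an efficiency curve, Eq. (1)."""
    avg = sum(eff) / len(eff)
    delta = (max(eff) - min(eff)) / (2.0 * avg)
    return avg, delta

# Illustrative efficiency values over the 10 keV - 9 MeV energy grid.
eff = [0.090, 0.088, 0.085, 0.080, 0.074, 0.068, 0.062, 0.056, 0.050, 0.046]
avg, delta = efficiency_metrics(eff)
```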
Before presenting the results of all calculations, we first consider an example subset, which involves one inner ring of BF$_{3}$ and one outer ring of $^{3}$He counters. These calculations were performed using two rings in each configuration, beginning with an inner ring radius of 8 cm and outer ring radius of 11 cm. For a fixed inner ring radius, the outer ring radius was increased in steps of 1~cm until the outer dimensions of the moderator were reached. The inner ring radius was then increased by 1~cm and the process was repeated. Figure~\ref{TwoRings} shows $\epsilon_{\rm avg}$ and $\delta\epsilon$ for the example subset, where each point corresponds to a detector configuration and lines connect the results of calculations performed with the same inner ring radius.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=.9\columnwidth,angle=0]{Figures/2ringsAvgEandDeltaE.png}
\caption{(color online) Sampled $\epsilon_{\rm avg}$ and $\delta\epsilon$ parameter space for the 2 ring configurations. Lines are drawn connecting the inner ring radius cases for 8, 9, 10, and 11 cm.
\label{TwoRings}}
\end{center}
\end{figure}
Generally, smaller ring radii result in a larger $\epsilon_{\rm avg}$. This is because the density of detectors within a ring decreases with increasing ring radius and because of the neutron thermalization process. The majority of neutron captures within a proportional counter occur for thermal neutron energies and the majority of neutrons are thermalized within a short path length. The average path length that a neutron of a given initial energy travels before capture is the sum of the mean-free-paths $\ell_{\rm mfp}$ of the neutron over its scattering history. On average, for neutron energies below 10~MeV, each scattering event will reduce the neutron energy from its precollision energy by half~\cite{Knoll}. For a given energy, $\ell_{\rm mfp}=1/\Sigma_{\rm s}$, where the macroscopic scattering cross section is
\begin{equation}
\Sigma_{\rm s}=\frac{\rho N_{\rm A}}{M}(2\sigma_{\rm C}+4\sigma_{\rm H}).
\label{eqn:ScatterCS}
\end{equation}
Here, $\rho=0.95$~g\,cm$^{-3}$ and $M=28$~g\,mol$^{-1}$ are the density and molecular weight of polyethylene, and $\sigma_{\rm C}$ and $\sigma_{\rm H}$ are the microscopic neutron scattering cross sections from Ref.~\cite{Chad11}. Following this procedure, for a neutron with a 2~MeV initial energy, the average path length is 2.7~cm. For a 9~MeV neutron, the average path length is 7.5~cm.
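This path-length estimate can be sketched by combining the macroscopic cross section formula above with the energy-halving rule. The flat cross sections below are illustrative placeholders rather than the evaluated data of Ref.~\cite{Chad11}, so the resulting length will not reproduce the quoted 2.7~cm:

```python
N_A = 6.022e23            # Avogadro's number [1/mol]
RHO, M = 0.95, 28.0       # polyethylene density [g/cm^3], molecular weight [g/mol]

def sigma_macro(sig_c, sig_h):
    """Macroscopic scattering cross section of polyethylene; sigmas in barns."""
    return RHO * N_A / M * (2.0 * sig_c + 4.0 * sig_h) * 1e-24   # [1/cm]

def path_length(e0_ev, sigma_of_e, e_thermal_ev=0.025):
    """Sum mean free paths, halving the neutron energy at each collision."""
    e, length = e0_ev, 0.0
    while e > e_thermal_ev:
        length += 1.0 / sigma_macro(*sigma_of_e(e))
        e /= 2.0
    return length

# Hypothetical, energy-independent cross sections (illustration only).
flat = lambda e: (2.0, 3.0)   # (sigma_C, sigma_H) in barns
path_2mev = path_length(2.0e6, flat)
```

With energy-dependent evaluated cross sections in place of `flat`, the same loop reproduces the averages quoted in the text.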
Meanwhile, $\delta\epsilon$ tends to decrease with increasing distance between the inner and outer ring radii. This follows from the preceding discussion of the average path length, where, for a fixed inner ring radius, increasing the outer ring radius will improve the efficiency for higher-energy neutron detection.
The curvature within the $\epsilon_{\rm avg}$--$\delta\epsilon$ phase space for the subset of calculations featured in Figure~\ref{TwoRings} differs depending on the inner ring radius. Case A, with an 8~cm inner ring radius, initially decreases in $\delta\epsilon$ with increasing outer ring radius before finally increasing again. Case D, with an inner ring radius of 11~cm, shows the opposite behavior. To understand this, we turn to the by-ring efficiencies shown in Figures~\ref{fig:IR8} and \ref{fig:IR11}. For case A, increasing the outer ring radius decreases $\epsilon$ at all energies, but the decrease is more rapid for low neutron energies. As such, $\delta\epsilon$ initially decreases. However, for the largest outer-ring radius, the outer ring efficiency drops significantly even for the highest energy neutrons, thus increasing $\delta\epsilon$. For case D, the inner ring alone already has a relatively flat $\delta\epsilon$, as the distance is within the path length distribution of some of the lowest and highest-energy neutrons. As such, it is advantageous for the outer ring radius to be large enough that it is only enhancing the efficiency for the highest energy neutrons.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=.9\columnwidth,angle=0]{Figures/A1_A3withLabel.png}
\caption{(color online) Efficiency curves for 3 sample cases along the green 8 cm line of Figure 2. Open black circles are the total efficiency, blue triangles are the inner ring BF$_{3}$ efficiency and pink squares are the outer ring $^{3}$He efficiency.
\label{fig:IR8}}
\end{center}
\end{figure}
\begin{figure}[ht]
\begin{center}
\includegraphics[width=.9\columnwidth,angle=0]{Figures/D1_D3withLabel.png}
\caption{(color online) Efficiency curves for 3 sample cases along the pink 11 cm line of Figure 2. Open black circles are the total efficiency, blue triangles are the inner ring BF$_{3}$ efficiency and pink squares are the outer ring $^{3}$He efficiency.
\label{fig:IR11}}
\end{center}
\end{figure}
Figure~\ref{AllPoints} shows the results for all configurations. Distributing the proportional counters amongst more rings generally enables a larger $\epsilon_{\rm avg}$, as detectors can be placed near the average path length for several neutron energies. However, increasing the number of rings ultimately forces inner rings to smaller radii, which increases $\delta\epsilon$ due to the high detection efficiency for low-energy neutrons.
The optimum configuration for HeBGB was identified by taking the configuration with the smallest $\delta\epsilon$ from the initial configuration phase space and then taking smaller step sizes, 0.1~cm, in radius. Within these, the optimum configuration was defined as the one with the highest $\epsilon_{\rm avg}$. The geometry of the optimum configuration is shown in Figure~\ref{HeBGBView}, where the corresponding neutron detection efficiency is shown in Figure~\ref{EffandE}.
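In code, this two-stage selection amounts to an argmin over the coarse scan followed by an argmax over the refined candidates. The configuration records below are illustrative:

```python
def select_optimum(coarse, refined_scans):
    """Two-stage selection: smallest delta-epsilon, then largest epsilon-avg.

    coarse: list of dicts with keys 'name', 'avg', 'delta' (assumed structure).
    refined_scans: dict mapping a coarse configuration name to its list of
    refined-configuration dicts (finer steps in ring radius).
    """
    best_coarse = min(coarse, key=lambda c: c["delta"])   # stage 1: min delta
    refined = refined_scans[best_coarse["name"]]
    return max(refined, key=lambda c: c["avg"])           # stage 2: max avg

coarse = [{"name": "A", "avg": 0.08, "delta": 0.40},
          {"name": "B", "avg": 0.07, "delta": 0.15}]
refined = {"B": [{"avg": 0.069, "delta": 0.14}, {"avg": 0.072, "delta": 0.15}]}
opt = select_optimum(coarse, refined)
```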
\begin{figure}[ht]
\begin{center}
\includegraphics[width=.9\columnwidth,angle=0]{Figures/EfficiencyVsRelativeUncertainty.pdf}
\caption{(color online) The average efficiency and relative uncertainty for each design configuration simulated. Black filled circles are 4 rings with equal numbers in each ring alternating between helium and boron. Cyan open up-pointing triangles are 4 rings with equal numbers in each ring, with 2 inner rings of boron and 2 outer rings of helium. Green down-pointing open triangles are 3 rings with no helium counters. Blue open squares are 3 rings with 2 inner of helium and 1 outer of boron. Purple down-pointing filled triangles are 3 rings with 2 inner of boron and 1 outer of helium. Red filled diamonds are 2 rings with 1 inner of boron and 1 outer of helium.
\label{AllPoints}}
\end{center}
\end{figure}
\begin{figure}[ht]
\begin{center}
\includegraphics[width=.9\columnwidth,angle=0]{Figures/CompareLongCounters.pdf}
\caption{HeBGB efficiency (black triangles) calculated with MCNP for the final configuration, compared to the efficiencies calculated for similar detectors~\cite[e.g.][]{Utsu17,Laur14,Tari17,Pere10,Hari05}. Note that the final efficiency of HeBGB involves scaling the efficiency by ring and is shown in Figure~\ref{EffandData}.
\label{EffandE}}
\end{center}
\end{figure}
The effect of nonisotropic neutron distributions was not considered during the optimization process but was explored after the optimum configuration was identified. Figure~\ref{EffandCosTheta} shows the efficiency as a function of angle for a select range of neutron energies with the final detector design configuration. The effects of anisotropy are greatest for higher-energy neutrons, as these require more collisions within the moderator to thermalize and are therefore more likely to encounter the empty spaces left by the target ladder and beam pipe.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=.9\columnwidth,angle=0]{Figures/AngularEfficiency.pdf}
\caption{(color online) Calculated angular dependence of the HeBGB neutron detection efficiency for several initial neutron energies.
\label{EffandCosTheta}}
\end{center}
\end{figure}
\section{Design Validation}
\label{section:Experimental Validations}
\subsection{Experimental Setup}
HeBGB is located at the Edwards Accelerator Laboratory at Ohio University~\cite{Meis17}. The experimental setup of the HeBGB detector system consists of a 0.5~cm gold-plated collimator 48~cm upstream from the center of the detector, a 2~cm by 6~cm Faraday cup located 37~cm downstream from the center of the detector, and a gold-plated target ladder drive located in the center of the detector. The beam current is read separately from the collimator, cup, and target during the tuning process to ensure all beam is on target, and it is integrated during experiments to mitigate double-counting of accumulated charge due to escaping electrons.
Neutron signals from each proportional counter are read together for both the inner and middle ring. The outer ring of $^{3}$He detectors is split into two sections consisting of eight counters in both the left and right hemispheres.
Signals from the BF$_{3}$ and $^{3}$He detector sets are amplified using EG\&G Ortec Model 142B and 142IH preamplifiers, respectively. A 60~Hz signal from a single Ortec Model 419 Precision Pulse Generator is fed into each detector set to measure deadtime, with the amplitude set sufficiently high to not interfere with the neutron counts in the spectra. Example spectra for each detector set are shown in Figure~\ref{Spectra}.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=.9\columnwidth,angle=0]{Figures/SpectraExample.pdf}
\caption{Sample spectra taken with a $^{252}$Cf source for the inner and middle ring BF$_{3}$ proportional counters (top row) and segmented $^{3}$He outer ring (bottom row). The right most peak in each spectrum is the pulser, which is added to each detector set to measure deadtime.
\label{Spectra}}
\end{center}
\end{figure}
\subsection{Validation Measurements}
The MCNP6 simulations of HeBGB were validated with three measurements. The first involved measuring neutrons from a calibrated $^{252}{\rm Cf}$ source. The source, provided by Eckert and Ziegler Isotope Products, contained 5.198 $\mu$Ci of $^{252}$Cf referenced on 2017-10-01, with a total quoted uncertainty of 3.6\%. Mass and activity percentages of other Cf isotopes are quoted by the manufacturer and their contributing influences are accounted for in the calculation of activity at the measurement time. $^{248}$Cm decay products were last separated from this source in 2014 and are therefore not included in the calculation of neutron count rate, as the contribution is negligible~\cite{Rado14}. The source was placed at the center of HeBGB, in the usual target location, using a custom jig. Our efficiency uncertainty for this data point is due to the source activity uncertainty. The measured efficiency is compared with MCNP6 simulations of a $^{252}$Cf energy distribution using a Watt fission spectrum
\begin{equation}
N(E_{n})=e^{-E_{n}/a}\sinh\left(\sqrt{bE_{n}}\right)
\label{eqn:watt}
\end{equation}
with parameters $a=1.18$ and $b=1.03419$~\cite{Rado14}. These parameters are given in the MCNP6 documentation and are extremely close to the values given in the evaluation of Fr\"ohner~\cite{Froh90}. Within this distribution, the most probable energy is 0.70 MeV and the average is 2.13 MeV. When comparing to the efficiency simulated for monoenergetic neutrons, we set the energy at the most probable neutron energy and set the upper and lower error bars to encompass 68\% of the neutron spectrum about the mean.
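As a cross-check, numerically averaging the Watt distribution with these parameters reproduces the 2.13~MeV mean quoted above (the closed-form Watt mean, $3a/2 + a^{2}b/4$, is a standard result used here for comparison), and the peak of the sampled curve lands near the quoted most probable energy:

```python
import numpy as np

a, b = 1.18, 1.03419                          # MCNP6 Watt parameters for 252Cf
E = np.linspace(1e-4, 30.0, 300_000)          # neutron energy grid [MeV]
w = np.exp(-E / a) * np.sinh(np.sqrt(b * E))  # unnormalized Watt spectrum

mean = float((E * w).sum() / w.sum())         # spectrum-averaged energy
most_probable = float(E[np.argmax(w)])        # peak of the distribution
closed_form_mean = 1.5 * a + a * a * b / 4.0  # standard Watt-spectrum result
```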
The second validation measurement was $^{51}{\rm V}(p,n)^{51}{\rm Cr}$ activation using 1.808 MeV protons. This results in neutrons with $E_{n}=0.209-0.259$~MeV. Following a 2.3~hr irradiation, the activity of the irradiated sample was determined by counting 321~keV $\gamma$-rays emitted from $^{51}{\rm Cr}$ using a high-purity germanium detector (HPGe) located 10~cm from the sample, with both housed inside stacked lead bricks. The HPGe detector efficiency was determined using $^{152}{\rm Eu}$, $^{133}{\rm Ba}$, and $^{60}{\rm Co}$ sources, where a summing correction was performed following the method of Ref.~\cite{Semk90}. Via the activation equation, the number of neutrons emitted over the irradiation time is
\begin{equation}
N=\frac{A(t)}{1-\exp\left(-\ln(2)\,t/t_{1/2}\right)},
\label{eqn:activation}
\end{equation}
where $A(t)$ is the target activity measured at time $t$ and the $^{51}{\rm Cr}$ half-life $t_{1/2}$ is taken from Ref.~\cite{Wang17}.
Our efficiency uncertainty for this point is primarily due to the 3.0\% HPGe detector efficiency uncertainty. The measured efficiency of HeBGB is compared with MCNP6 simulations of this reaction assuming an isotropic distribution of neutrons in the center-of-mass frame, which follows from the statistical nature of this reaction.
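The activation relation can be evaluated directly as printed. The activity and irradiation time below are placeholder values rather than our measured ones, and the half-life is the evaluated $\sim$27.7~d value:

```python
import math

T_HALF_CR51 = 27.70 * 24 * 3600   # 51Cr half-life [s] (approx. evaluated value)

def neutrons_emitted(activity_bq, t_irr_s, t_half_s=T_HALF_CR51):
    """Activation equation as printed: N inferred from end-of-irradiation A(t)."""
    lam = math.log(2.0) / t_half_s
    return activity_bq / (1.0 - math.exp(-lam * t_irr_s))

# Placeholder numbers: a 100 Bq sample after a 2.3 h irradiation.
n_neutrons = neutrons_emitted(100.0, 2.3 * 3600)
```

Because the irradiation is short compared to the half-life, the denominator is small and $N$ greatly exceeds the instantaneous activity.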
The third validation measurement was the measurement of the well-characterized resonance in $^{13}{\rm C}(\alpha,n)$ at laboratory $\alpha$ energy $E_{\alpha}=1.053$~MeV, resulting in $E_{n}\sim2.45-3.27$~MeV. The neutron yield was measured for $E_{\alpha}=1.041-1.112$~MeV, as shown in Figure~\ref{C13Res}. Background measurements with an empty target frame were performed at each energy and were found to be at the level of room background. Room background was measured before, after, and during the scan; an average room background contribution was calculated and subtracted from the data runs. The data were fit with a standard resonant yield curve~\cite{Iliadis} combined with a linear background intended to account for the non-resonant contribution to the reaction:
\begin{equation}
Y(E_{\alpha})=a+bE_{\alpha}+Y^{\rm meas}_{\rm max}\left[\arctan\left(\frac{E_{\alpha}-E_{\rm r}}{\Gamma/2 + c }\right)-\arctan\left(\frac{E_{\alpha}-E_{\rm r}-\Delta E}{\Gamma/2 + d}\right)\right].
\label{eqn:ResFit}
\end{equation}
Here, $a$ and $b$ are terms describing the background, $c$ and $d$ are diffuseness parameters describing the beam resolution and straggling, $Y^{\rm meas}_{\rm max}$ is the maximum yield in the resonance curve, $E_{\rm r}=1.053$~MeV is the resonance energy, $\Delta E$ is the total energy loss of the $\alpha$-particle within the $^{13}{\rm C}$ target, and $\Gamma$ is the resonance width, which is fixed to the value reported by Ref.~\cite{Bair73} during the fit.
The predicted maximum yield is determined by the resonance strength $\omega\gamma$ and stopping power $\varepsilon_{\rm r}$ (each in the center-of-mass frame) of an $\alpha$-particle in the $^{13}{\rm C}$ target at $E_{\rm r}$~\cite{Iliadis}:
\begin{equation}
Y^{\rm calc}_{\rm max}=\frac{\lambda_{r}^2\omega\gamma}{2\varepsilon_{\rm r}}.
\label{eqn:Ymax}
\end{equation}
For $\omega\gamma$, we employ the results from a recent reanalysis of Refs.~\cite{Kell89,Brun92,Brun93} that uses an improved model for neutron detection efficiency and takes advantage of updated nuclear data. That reanalysis will be reported in an upcoming work~\cite{BruneTBD}. For $\varepsilon_{\rm r}$, we take a weighted average of the world data for $\alpha$-particles in $^{13}{\rm C}$ at this energy~\cite{Mont17}. Our detector efficiency is $Y^{\rm meas}_{\rm max}/Y^{\rm calc}_{\rm max}$. In order to compare this efficiency to the results of Figure~\ref{EffandE}, we simulated the impact of the angular distribution as determined by $R$-matrix calculations, also described by Ref.~\cite{BruneTBD}. The 8.2\% uncertainty for this efficiency determination is primarily due to the uncertainties for $\varepsilon_{\rm r}$ (6.5\%) and $\omega\gamma$ (5.0\%).
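The predicted maximum yield follows from the center-of-mass de Broglie wavelength at the resonance. In the sketch below, the resonance strength and stopping power are placeholders rather than the values adopted from the references above:

```python
import math

HBARC = 197.327                     # hbar*c [MeV fm]
M_ALPHA, M_C13 = 3727.38, 12112.55  # rest energies [MeV]

def y_max(omega_gamma_ev, eps_ev_cm2, e_lab_mev):
    """Thick-target maximum yield per incident alpha, Y = lambda^2*wg/(2*eps)."""
    e_cm = e_lab_mev * M_C13 / (M_ALPHA + M_C13)   # CM resonance energy [MeV]
    mu = M_ALPHA * M_C13 / (M_ALPHA + M_C13)       # reduced rest energy [MeV]
    # de Broglie wavelength h/p at the CM resonance energy, converted to cm.
    lam_cm = 2.0 * math.pi * HBARC / math.sqrt(2.0 * mu * e_cm) * 1e-13
    return lam_cm**2 * omega_gamma_ev / (2.0 * eps_ev_cm2)

# Placeholder resonance strength [eV] and stopping power [eV cm^2 / atom].
y_calc = y_max(1.0, 1.0e-15, 1.053)
```

The detector efficiency is then the ratio of the fitted $Y^{\rm meas}_{\rm max}$ to this calculated value.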
\begin{figure}[ht]
\begin{center}
\includegraphics[width=.9\columnwidth,angle=0]{Figures/C13ResScanTotal.pdf}
\caption{Neutron yield from $^{13}{\rm C}(\alpha,n)$, measured with HeBGB (points) and fit with a resonance yield curve and a linear background.
\label{C13Res}}
\end{center}
\end{figure}
\section{Efficiency Corrections}
The results of the validation measurements and MCNP6 simulations are compared in Figure~\ref{EffandData}. The measured efficiencies are larger than the simulation results. To account for this discrepancy, we have examined the effects of modifying both the polyethylene density and the gas pressure in the MCNP calculations. A sample piece provided by the manufacturer from the same batch as the larger polyethylene pieces measured 0.95 g\,cm$^{-3}$ and was used for the initial simulations. Reported densities for ultra-high-molecular-weight polyethylene range from 0.931 to 0.949 g\,cm$^{-3}$~\cite{Huss20}. Using densities within this range resulted in negligible differences between calculation results, consistent with the findings of earlier investigations~\cite{Char03}. An increased BF$_{3}$ pressure, from 550 Torr to 760 Torr, was also simulated and found to more closely resemble the data; however, the manufacturer-quoted uncertainty in the proportional counter fill pressure is only 0.25\%\footnote{This fill-pressure uncertainty was communicated by Baker-Hughes, which is the modern-day parent company of the manufacturer of our BF$_{3}$ detectors, Reuter-Stokes. We have no original records from the detectors and all specifications come from interpreting the detector model numbers.}. Alternatively, keeping the original fill pressure and scaling each ring by a factor found by minimizing $\chi^{2}$ between the measurements and the calculated results for each ring, as performed in Refs.~\cite{Pere10,Csed21}, results in a similar efficiency to the adjusted BF$_{3}$ pressure. Scale factors of 1.104, 1.177, and 1.159 are used for the inner, middle, and outer rings, respectively, to arrive at the final detector efficiency. With this scaling, the HeBGB neutron detection efficiency across the neutron energy range 0.01--9.00~MeV is 7.6$\pm$0.6\%.
\begin{figure}[ht!]
\begin{center}
\includegraphics[width=.9\columnwidth,angle=0]{Figures/ComparisonTotal.pdf}
\caption{(color online) Measured neutron detection efficiency for HeBGB (open blue triangles), compared to various MCNP calculations. The black filled triangles are the original MCNP simulations, red circles are the simulations with modified polyethylene density, green stars are simulations with modified BF$_{3}$ pressure, and the open cyan circles are the simulation results scaled to the data. Simulations of $^{252}$Cf are shown in pink, with an open diamond for the original simulation with no modifications and a closed diamond for the simulation with modified BF$_{3}$ pressure.
\label{EffandData}}
\end{center}
\end{figure}
\section{Average Initial Neutron Energy Determination}
Due to the statistical nature of the path of the neutrons in the moderator, the initial energy of a particular neutron emitted from the target location is unknown. However, average energies can be determined from the ratio of neutrons detected in the inner ring to neutrons detected in the combined two outer rings. MCNP results in Figure~\ref{RingRatioSim} show the trend in the ring ratio with increasing initial neutron energy, where larger energies result in a smaller ratio. Therefore, we can use the measured ring ratio to obtain a coarse estimate of the average initial neutron energy for a particular measurement.
\begin{figure}[ht!]
\begin{center}
\includegraphics[width=.9\columnwidth,angle=0]{Figures/RingRatioSimModBF.pdf}
\caption{MCNP results for the ratio of counts detected within the inner ring of HeBGB to counts within the outer two rings for several initial neutron energies.
\label{RingRatioSim}}
\end{center}
\end{figure}
To demonstrate this method, we have calculated the ring ratio for data from a measurement of $^{13}$C($\alpha$,n) from $E_{\alpha}=1.041$--$1.112$~MeV and $3.00$--$8.00$~MeV. The full details of the measurement and results will be presented in a future publication; only the details pertinent to the ring-ratio measurement are discussed here. The ratio of neutron detections within the inner ring to detections in the outer rings is calculated for each $^{13}{\rm C}(\alpha,n)$ measurement energy, and that ratio is mapped onto the ring-ratio calculations for an initial neutron energy shown in Figure~\ref{RingRatioSim}. Figure~\ref{C13NeutronEnergy} shows the resulting estimate of the initial neutron energy over the measured $\alpha$-energy range, together with the average neutron energy calculated from kinematics for all de-excitations to the $^{16}$O nucleus. The branching ratios for each state are taken from the Hauser-Feshbach based calculations of Ref.~\cite{Mohr18}.
Figure~\ref{C13NeutronEnergy} shows that the ring-ratio method qualitatively agrees with the predicted average neutron energy. The disagreement between $E_{\alpha}\sim$3--5~MeV is likely due to the shallow slope of the ring-ratio versus neutron-energy trend shown in Figure~\ref{RingRatioSim}: given the shallow slope, small changes in the ring ratio can lead to large shifts in the estimated initial neutron energy. Above $\sim$5~MeV, decay branchings open to excited states of $^{16}{\rm O}$, and it is possible that some of the disagreement is due to inaccurate branching-ratio estimates in Ref.~\cite{Mohr18}. Our preliminary investigations indicate that the large fluctuations in the estimated neutron energy are due to angular-distribution impacts on the ring ratio.
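The mapping from a measured ring ratio to an average initial neutron energy amounts to inverting the monotonically decreasing simulated curve. A minimal sketch, with an illustrative (energy, ratio) table standing in for the actual MCNP results:

```python
# Invert a monotonically decreasing inner/outer ring-ratio curve, tabulated
# as (energy_MeV, ratio) pairs, by linear interpolation.
# The curve values below are illustrative, not the actual MCNP results.

def energy_from_ratio(ratio, curve):
    """Estimate the initial neutron energy corresponding to a measured ratio."""
    pts = sorted(curve)  # ascending in energy, descending in ratio
    for (e0, r0), (e1, r1) in zip(pts, pts[1:]):
        if r1 <= ratio <= r0:
            # Linear interpolation within the bracketing segment.
            return e0 + (e1 - e0) * (r0 - ratio) / (r0 - r1)
    raise ValueError("ratio outside simulated range")

curve = [(0.5, 1.9), (1.0, 1.6), (2.0, 1.3), (4.0, 1.1), (8.0, 0.9)]
E = energy_from_ratio(1.45, curve)  # falls between 1.0 and 2.0 MeV
```

Because the curve flattens at high energy, a small ratio uncertainty there translates into a large energy uncertainty, which is the behavior discussed above.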
\section{Conclusions}
\label{section:Conclusions}
We developed the HeBGB neutron detector, designed to accurately count neutrons emitted from $(\alpha,n)$ reactions. Monte Carlo calculations were used to optimize the placement of neutron-sensitive proportional counters within a polyethylene moderator in order to achieve a near-constant neutron detection efficiency, 7.6$\pm$0.6\%, over the neutron energy range 0.01--9.00~MeV. The detector performance was validated using neutron-source and neutron-emitting-reaction measurements. It was demonstrated that coarse constraints on the initial neutron energy can be obtained from the ratio of neutron detections in the inner ring of HeBGB to detections in the outer rings. By providing a near-constant neutron detection efficiency, HeBGB will enable accurate measurements of $(\alpha,n)$ cross sections of interest for astrophysics and applications, avoiding a significant systematic uncertainty present in earlier setups.
\begin{figure}[ht!]
\begin{center}
\includegraphics[width=.9\columnwidth,angle=0]{Figures/RingRatioMohrComparisonModBF.pdf}
\caption{(color online) Average initial neutron energy from $^{13}{\rm C}(\alpha,n)$ determined using the ring ratio method (black solid points) compared to the average neutron energy predicted in Ref.~\cite{Mohr18}.
\label{C13NeutronEnergy}}
\end{center}
\end{figure}
\bibliographystyle{JHEP}
\section{Introduction}
Galaxy clusters are located at the peaks of the (dark) matter density field and, as they evolve, they accrete galaxies, galaxy groups, and other clusters from the cosmic web. Some of those merging events are among the most energetic and violent events in the Universe, releasing energies up to 10$^{64}$ ergs \citep{Sarazin2002, Sarazin2004}, providing extreme conditions to study a range of phenomena, from particle physics \citep[e.g.][]{markevitch04,harvey15,kim2017} to cosmology \citep[e.g.][]{clowe06,thompson15}, including galaxy evolution \citep[e.g.][]{ribeiro13,zenteno2020}.
The cluster assembly process affects galaxies via several physical processes, including harassment, galaxy-galaxy encounters \citep[e.g.,][]{ToomreToomre72}, tidal truncation, starvation, and ram pressure stripping \citep{GunnGott72}, which act upon the galaxies at different cluster-centric distances \citep[e.g.,][]{treu03}. Such events not only change the galaxies in terms of stellar populations and morphologies \citep[e.g., ][]{Kapferer2009, mcpartland16, poggianti16, Kelkar20}, but can also destroy them, as indicated by a halo occupation number index lower than 1 \citep[e.g.,][]{lin04a,zenteno11,zenteno16,hennig17}.
In such extreme environments, galaxies are exposed to conditions that may quench \citep[e.g.][]{Poggianti2004, Pallero2021} or trigger star formation \citep[e.g.][]{Ferrari2003, Owers2012}. For example, \citet{Kalita2019} found evidence of a jellyfish galaxy in the dissociative merging galaxy cluster A1758N ($z\sim0.3$), concluding that it suffered ram-pressure stripping due to the merging event. \citet{pranger14} studied the galaxy population of the post-merger system Abell 2384 (z$\sim$0.094), finding that the population of spiral galaxies at the center of the cluster does not show star formation activity, and proposing that this could be a consequence of ram-pressure stripping of spiral galaxies falling into the cluster from the field. \citet{ma10} discovered a population of lenticular post-starburst galaxies in the region between two colliding structures in the merging galaxy cluster MACS J0025.4-1225 (z$\sim$0.59), finding that the starburst episode occurred during the first passage ($\sim$0.5-1 Gyr ago), while the morphology was already being affected, the galaxies being transformed into lenticulars by either ram-pressure events or tidal forces towards the central region.
On the other hand, \citet{Yoon2020} found evidence of an increase in the star formation activity of galaxies in merging galaxy clusters, suggesting that it could be due to an increased fraction of barred galaxies in these systems \citep{Yoon2019}. \citet{Stroe2014} found an increase of H$\alpha$ emission in star-forming galaxies in the merging cluster ``Sausage'' (CIZA J2242.8+5301) and, by comparing the galaxy population with that of the more evolved merging cluster ``Toothbrush'' (1RXS J0603.3+4213), concluded that merger shocks could enhance the star formation activity of galaxies, causing them to exhaust their gas reservoirs faster \citep{Stroe2015}.
To understand how the merger process impacts cluster galaxies, it is crucial to assemble large samples of merging clusters and determine their corresponding merger phase: pre, ongoing, or post. Among the available cluster samples, SZ-selected samples are ideal, as they are composed of the most massive clusters in the Universe and are bound to host the most extreme events. The South Pole Telescope \citep[SPT;][]{carlstrom11} has completed a thermal SZ survey, finding 677 cluster candidates \citep{bleem15b} and providing a well-understood sample with which to study the impact of cluster mergers on the galaxy population. Rich information is available for those clusters, including the gas centroids (via SZ and/or X--ray), optical imaging, near-infrared imaging, cluster masses, and photometric redshifts. Furthermore, as the SPT cluster selection is nearly independent of redshift, a merging cluster sample will also allow evolutionary studies out to high redshifts.
Using SPT-SZ selected clusters and optical imaging, \citet{song12b} reported the brightest cluster galaxy (BCG) positions for 158 SPT cluster candidates and, using the separation between the cluster BCG and the SZ centroid as a dynamical-state proxy, found that SPT-CLJ0307-6225 is the most disturbed galaxy cluster of the sample. Recently, \cite{zenteno2020} employed optical data from the first three years of the Dark Energy Survey \citep[DES;][]{abbott18,morganson18,des16} to use the BCG in 288 SPT SZ-selected clusters \citep{bleem15b} to classify their dynamical state. They identified the 43 most extreme systems, all with a separation greater than 0.4 $r_{200}$, once again including SPT-CLJ0307-6225. Furthermore, an X--ray morphological analysis by \cite{Nurgaliev2017} of 90 SPT-selected galaxy clusters shows SPT-CLJ0307-6225 as one of the two most extreme cases \citep[the other being `El Gordo';][]{Marriage2011,Menanteau2012}, making this cluster an interesting system in which to test the impact of a massive merging event on galaxy evolution, the goal of this paper. We use VLT/MUSE and Gemini/GMOS spectroscopy, X--ray data from {\it Chandra}, and Megacam imaging to characterize the SPT-CLJ0307-6225 merger stage and its impact on the galaxy population.
The paper is organized as follows: in $\S$\ref{sec:observations} we provide details of the observations and data reduction.
In $\S$\ref{sec:analysis} we describe the analysis of the spectroscopic and optical data, while in $\S$\ref{sec:results} we report our findings for both the merging scenario and the galaxy population. In $\S$\ref{sec:discussion} we propose a scenario for the merging event and connect it to the galaxy population. In $\S$\ref{sec:conclusions} we give a summary of the results. Throughout the paper we assume a flat Universe with a $\Lambda$CDM cosmology, $h=0.7$, $\Omega_m = 0.27$ \citep[][]{Komatsu2011}. Within this cosmology, 1 arcsec corresponds to $\sim$6.66 kpc.
\begin{figure*}
\centering
\includegraphics[width=\linewidth]{plots/0307_muse4.pdf}
\vskip-0.13in
\caption{Pseudo-color image, from a $gri$ filter combination, of the central area of SPT-CLJ0307-6225. Magenta squares show the MUSE footprints, with the cube number in the top-right corner of each square. Orange contours were derived from archival \textit{Chandra} images. The cyan plus-sign marks the X--ray centroid \citep{mcdonald13}. The arrows show the positions of the two brightest galaxies of the cluster. The white bar on the bottom shows the scale of 1 arcminute. The inset shows the 2D galaxy number density with the same scale as the main figure, where the two highest-intensity areas correspond to the areas around the BCGs.}
\label{fig:rgb_image}
\end{figure*}
\section{Observations and Data Reduction}
\label{sec:observations}
\subsection{Optical Imaging}
\label{sec:imaging}
Optical images were obtained using the Magellan Clay telescope with Megacam during a single night on November 26, 2011 (UT). Megacam has a 24\hbox{$^{\prime}$} x 24\hbox{$^{\prime}$} field-of-view, which at redshift 0.579 corresponds to $\sim$10 Mpc. Several dithered exposures were taken in the $g$, $r$, and $i$ filters for total times of 1200 s, 1800 s, and 2400 s, respectively. The median seeing of the images was approximately 0.79 arcsec, or about 5 kpc, with better seeing in the $r$ band, averaging 0.60 arcsec. The 10$\sigma$ limiting magnitudes in $gri$ are 24.24, 24.83, and 23.58, respectively \citep{chiu16b}. In Fig.~\ref{fig:rgb_image} we show the $gri$ pseudo-color image, centered on the SZ cluster position of SPT-CLJ0307-6225, with the white bar on the bottom right showing the corresponding scale.
The catalogs for the photometric calibration were created following \citet{High2012} and \citet{Dietrich2019}, including standard bias subtraction and bad-pixel masking, as well as flat-fielding, illumination, and fringe (for the $i$ band only) corrections. The stellar locus regression \citep{High2009} was constrained by cross-matching with 2MASS catalogs, giving uncertainties of 0.05 mag in absolute magnitude and 0.03 mag in color.
For the creation of the galaxy photometric catalogs, we use a combination of the Source Extractor \citep[\texttt{SExtractor};][]{bertin96} and Point Spread Function Extractor \citep[\textsc{PSFex};][]{Bertin2011} software packages. \textsc{SExtractor} is run in dual mode, using the $i$-band image as the detection reference given the redshift of the cluster. We extract all sources with at least 6 connected pixels above the 4$\sigma$ threshold, using a 5-pixel Gaussian kernel. Deblending is performed with 64 sub-thresholds and a minimum contrast of 0.0005.
Galaxy magnitudes are \textsc{SExtractor}'s \texttt{MAG\_AUTO} estimation, whereas colors are derived from aperture magnitudes.
The star-galaxy separation in our sample is performed following \citet{Crocce2019}, using the \textsc{SExtractor} parameter \textsc{spread\_model} and its corresponding error, \textsc{spreaderr\_model}, derived from the $i$-band image, for objects within $R_{200}$ of the SZ center \citep[$R_{200} = 3.84'$;][]{song12b, zenteno2020}. \citet{Crocce2019} classify a source as a galaxy if it satisfies
\begin{equation}\label{eq:phot_gals}
\textsc{spread\_model} + \left(\frac{5}{3}\right)\times\textsc{spreaderr\_model} > 0.007
\end{equation}
\noindent
ensuring a 97\% purity galaxy catalog.
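A minimal sketch of applying this morphological criterion to catalog rows; the spread_model and spreaderr_model values below are illustrative, not actual SExtractor output:

```python
# Crocce et al. (2019)-style star-galaxy separation: a source is kept as a
# galaxy when spread_model + (5/3) * spreaderr_model exceeds the threshold
# (0.007 for ~97% purity; the text also discusses a relaxed 0.004 cut).
# The catalog rows below are illustrative values only.

def is_galaxy(spread_model, spreaderr_model, threshold=0.007):
    """Morphological galaxy classification from i-band model-fit spread."""
    return spread_model + (5.0 / 3.0) * spreaderr_model > threshold

sources = [
    {"id": 1, "spread_model": 0.0105, "spreaderr_model": 0.0004},  # extended
    {"id": 2, "spread_model": 0.0005, "spreaderr_model": 0.0002},  # point-like
]
galaxies = [s for s in sources if is_galaxy(s["spread_model"], s["spreaderr_model"])]
```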
With this separation we find 423 sources classified as galaxies. A visual inspection reveals that only 3 of them are not galaxies; however, most of the galaxies spectroscopically classified as cluster members are not included. To remedy this, we relax the limit in Eq.~\ref{eq:phot_gals} to $>0.004$ \citep[for reference, 0.005 corresponds to $\sim$95\% purity;][]{Sevilla-Noarbe2018}, which then includes most of the spectroscopic galaxies (30 missing out of 131, see $\S$\ref{sec:redshifts}), but also increases the contamination by other sources (e.g., stars). To mitigate this, we apply a second, bright-end cut in magnitude, removing sources with $i_{\rm{auto}}<18.5$ mag, which is $\sim0.5$ mag brighter than the BCG. On the faint end the cut is set at $i_{\rm{auto}} < m^*+3 = 23.39$, which is beyond the limit of our spectroscopic catalogue (see $\S$\ref{sec:completeness}). With this we obtain 789 galaxies, plus the 30 spectroscopic galaxies that did not make the cut.
We inspect the properties of these 30 missing objects by comparing their \textsc{SExtractor} parameter \texttt{class\_star}, measured in the same filter, with that of the other 789. \texttt{class\_star} is derived using a neural network, giving the probability of a source being a star (\texttt{class\_star} $\approx$ 1) or a galaxy (\texttt{class\_star} $\approx$ 0). The 30 missing galaxies all lie at the higher end of this parameter with respect to the other 789, with \texttt{class\_star}$_i$ $\geq$ 0.80. On the other hand, \cite{Sevilla-Noarbe2018} use a limit of $<0.002$ in Eq.~\ref{eq:phot_gals} to classify a source as a star, and these 30 sources are located in the region between that star cut and the galaxy cut we set above.
The flux-weighted galaxy surface density map (see $\S$~\ref{sec:subcluster} and the inset in Fig.~\ref{fig:rgb_image}) is generated from the population of red-sequence galaxies, determined after the star-galaxy separation. Potential galaxies missing from the photometric catalog do not alter the general form of the surface density map, and thus do not affect our conclusions.
\subsection{Spectroscopic data}
\label{sec:reductions}
The MUSE \citep{bacon12} observations were taken on August 22nd, 23rd, and 24th, 2016 (program ID: 097.A-0922(A), PI: Zenteno), and November 10 and December 20, 2017 (program ID: 100.A-0645(A), PI: Zenteno). The observations consisted of four pointings, with a total exposure time of 1.25 hours per data cube, at an airmass of 1.4. The pointings were selected to cover the two BCGs (labeled BCG1 and BCG2 in Fig.~\ref{fig:rgb_image}) and the area between them. The MUSE footprints of the 4 observed data cubes are shown as magenta squares in Fig.~\ref{fig:rgb_image}, with the cubes enumerated in the top-right corner of each square. We use these numbers to refer to the cubes throughout the paper.
\begin{table}
\caption{Central coordinates and seeing conditions of the observed MUSE fields}
\label{tab:muse_data}
\centering
\resizebox{\linewidth}{!}{%
\begin{tabular}{c c c c c}
\hline
\hline
CUBE & Program & \multicolumn{2}{c}{Coordinates} & Seeing \\
& ID & R.A. (J2000) & Dec. (J2000) & (")\\
\hline
1 & 097.A-0922(A) & $03^{\rm h}\ 07^{\rm m}\ 16.34^{\rm s}$ & $-62^\circ\ 26^{\prime}\ 54.98^{\prime\prime}$ & 0.56 \\
2 & 097.A-0922(A) & $03^{\rm h}\ 07^{\rm m}\ 19.052^{\rm s}$ & $-62^\circ\ 25^{\prime}\ 36.430^{\prime\prime}$ & 0.70 \\
3 & 0100.A-0645(A) & $03^{\rm h}\ 07^{\rm m}\ 22.271^{\rm s}$ & $-62^\circ\ 24^{\prime}\ 42.140^{\prime\prime}$ & 0.68 \\
4 & 0100.A-0645(A) & $03^{\rm h}\ 07^{\rm m}\ 25.302^{\rm s}$ & $-62^\circ\ 23^{\prime}\ 46.570^{\prime\prime}$ & 0.97 \\
\hline
\hline
\end{tabular}
}
\end{table}
The data were taken in WFM-NOAO-N mode, with a position angle of 18 deg for three of the cubes and 72 deg for the one to the south, using the dithering pattern recommended for best calibration: 4 exposures with offsets of 1" and 90-degree rotations (MUSE User Manual ver. 1.3.0). The raw data were reduced with the MUSE pipeline \citep{weilbacher14, Weilbacher2016} provided by ESO.
We construct 1D spectra from the MUSE cubes using the {\tt MUSELET} software \citep{bacon16}. {\tt MUSELET} finds sources by constructing spectrally line-weighted narrow-band images, 5$\times$1.25~\AA\ wide, and running \texttt{SExtractor} on them. In order to create well-fitted masks for the sources, the parameter {\tt DETECT\_THRESH} is set to 2.5; below that value, \texttt{SExtractor} detects noise and outputs incorrect shapes in the segmentation map. We then use the source file to extract the \texttt{SExtractor} parameters {\tt A\_WORLD}, {\tt B\_WORLD}, and {\tt THETA\_WORLD} to create an elliptical mask centered on each source.
Finally, we use the {\tt MUSELET} routines {\tt mask\_ellipse} and {\tt sum} to create the 1D weighted spectra of the sources. To make sure the objects fit into their apertures, the SExtractor parameter {\tt PHOT\_FLUXFRAC} is set at 0.9, which means that 90\% of the source's flux will be contained within the mask's radius.
We complemented the MUSE galaxy redshifts with Gemini/GMOS data published by \citet{bayliss16}. Their galaxy redshift sample consists of 35\ cluster-galaxy redshifts, 8\ of which are not present in our MUSE data. The spectroscopic data from their sample can be found online at the \textsc{VizieR Catalogue Service} \citep{Vizier}, with the details of the data reduction described in \cite{bayliss16} and \cite{Bayliss2017}. For SPT-CLJ0307-6225, they used 2 spectroscopic masks with an exposure time of 1 hour each. The target selection consisted mostly of galaxies from the red sequence (selected as an overdensity in the color-magnitude and color-color spaces) up to $m^* + 1$, prioritising BCG candidates.
\subsection{X-ray data}
\label{subsec:xraydata}
SPT-CLJ0307-6225 was observed by {\it Chandra} as part of a larger, multi-cycle effort to follow up the 100 most massive SPT-selected clusters spanning $0.3 < z < 1.8$ \citep{mcdonald13,McDonald2017}. In particular, this observation (12191) was obtained via the ACIS Guaranteed Time program (PI: Garmire). A total of 24.7\,ks was obtained with ACIS-I in VFAINT mode, centering the cluster $\sim$1.5$^{\prime}$ from the central chip gap. The data was reprocessed using \textsc{ciao} v4.10 and \textsc{caldb} v.4.8.0. For details of the observations and data processing, see \cite{mcdonald13}. The derived X--ray centroid is shown as a cyan plus-sign on Fig.~\ref{fig:rgb_image}.
\section{Analysis}
\label{sec:analysis}
\subsection{Color-Magnitude Diagram and RCS selection}
\label{sec:photometric}
The color-magnitude diagram (CMD) for the cluster is shown in Fig.~\ref{fig:cmd}, where the magenta triangles are galaxies from our spectroscopic sample (see $\S$\ref{sec:redshifts}) and the dots represent galaxies from our photometric sample (selected as described in \S~\ref{sec:imaging}). For the selection of the red cluster sequence (RCS) galaxies, which consist mostly of passive galaxies likely to be at the redshift of the cluster \citep{Gladders2000}, we examine the location of the galaxies from our spectroscopic sample in the CMD. With this information, we select all galaxies with $r-i>0.65$ and perform a 3$\sigma$-clipping cut on the color index to remove outliers. We keep all the galaxies from our previous magnitude cut in $\S$~\ref{sec:imaging} ($i_{\rm auto}< 23.39$). Finally, we fit a linear regression to the remaining objects, shown as a red dashed line in Fig.~\ref{fig:cmd}. The green dotted lines denote the limits of the RCS, chosen to be $\pm$0.22 mag from the fit, which corresponds to the average scatter of the RCS at 3$\sigma$ \citep{lopez04}. This gives us a total of 187\ optically selected RCS galaxy candidates, 64\ of which are spectroscopically confirmed members.
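The selection above (initial $r-i>0.65$ cut, 3$\sigma$ clipping in color, a linear fit, and a $\pm$0.22 mag band) can be sketched on a synthetic catalog; the red-sequence slope, scatter, and all galaxy values below are made up for illustration:

```python
import numpy as np

# Red-sequence selection sketch: keep galaxies redder than r-i = 0.65,
# iteratively 3-sigma clip in color, fit a line color = a*mag + b, and
# select galaxies within +/-0.22 mag of the fit.
# The synthetic catalog below is illustrative only.

def rcs_members(mag, color, color_cut=0.65, clip=3.0, width=0.22):
    sel = color > color_cut
    m, c = mag[sel], color[sel]
    for _ in range(5):  # iterate sigma clipping until stable
        mu, sd = c.mean(), c.std()
        keep = np.abs(c - mu) < clip * sd
        if keep.all():
            break
        m, c = m[keep], c[keep]
    a, b = np.polyfit(m, c, 1)  # linear red-sequence fit
    return np.abs(color - (a * mag + b)) < width

rng = np.random.default_rng(0)
mag = rng.uniform(19.0, 23.4, 200)
color = np.where(rng.random(200) < 0.5,
                 1.1 - 0.02 * (mag - 21.0) + rng.normal(0, 0.04, 200),  # red sequence
                 rng.uniform(0.0, 0.6, 200))                            # blue cloud
members = rcs_members(mag, color)
```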
\begin{figure}
\centering
\includegraphics[width=\linewidth]{plots/0307_cmd_v2.pdf}
\vskip-0.13in
\caption{Color-magnitude diagram (CMD) of SPT-CLJ0307-6225 from Megacam data within R$_{200}$. The $y$-axis shows the color index $r-i$ estimated from aperture magnitudes, with a fixed aperture of $\sim$40 kpc ($\sim$6 arcseconds) at the cluster redshift, while the $x$-axis shows \textsc{SExtractor}'s \texttt{MAG\_AUTO}. Magenta triangles represent galaxies from our spectroscopic sample, whereas dots are galaxies from the photometric sample. The red cluster sequence (RCS) estimated for the cluster is shown as a red dashed line, while the green dotted lines mark the 0.22 mag width established for the RCS.}
\label{fig:cmd}
\end{figure}
\subsection{Spectroscopic catalog}
\subsubsection{Galaxy redshifts}
\label{sec:redshifts}
To obtain the redshifts,
we use an adapted version of \textsc{MARZ} \citep{Hinton2016} for MUSE spectra\footnotemark \footnotetext{\url{http://saimn.github.io/Marz/\#/overview} (Hinton, private communication)}. \textsc{MARZ} is an automatic redshifting Javascript web application that can be used interactively or via the command line; we provide the 1D spectrum of each object as input and obtain the spectral type (late-type galaxy, star, quasar, etc.) and best-fitting redshift as output. The results are examined visually for each object, calibrating them using the 4000\AA\ break and the Calcium $H$ and $K$ lines. A heliocentric correction was applied to all redshifts using the \textsc{rvcorrect} task from \textsc{iraf}.
There are three sources in the cube 4 region that appear to be part of the cluster but were not well fitted by \textsc{MARZ}. These sources are indicated by the white arrows in the top panel of Fig.~\ref{fig:manual_redshift}, and their spectra are shown in black in the bottom panel. For comparison, the red arrow points to a galaxy with an automatically estimated redshift close to the cluster redshift, whereas the cyan arrow points to a galaxy with an estimated redshift higher than that of the cluster.
In total we estimate spectroscopic redshifts for 116\ objects within the MUSE fields, 4 of which are classified as stars. In Table~\ref{tab:all_objs_properties} we show the redshifts and magnitudes for these objects. For details of the different columns please refer to Appendix~\ref{sec:appendix_catalog}.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{plots/0307N_manual_redshift.pdf}\\
\includegraphics[width=\linewidth]{plots/manual_redshift_spec_v4.pdf}
\vskip-0.13in
\caption{\textit{top}: Zoom-in of Fig.~\ref{fig:rgb_image} on cube 4. White arrows point to galaxies whose redshifts were estimated manually using \textsc{MARZ}, the red arrow points to a galaxy with an automatic redshift detection, and the cyan arrow points to a galaxy with characteristics similar to those of the cluster members, but at $z=0.716$. \textit{bottom}: Spectra of the sources indicated by the arrows in the top panel. Sources indicated by white arrows are shown as black spectra, while the others are colored according to the arrow pointing at them. The redshift found by \textsc{MARZ} for each source is written above its spectrum. The black dotted and cyan dashed lines mark the Calcium $H$ and $K$ lines redshifted to $z=0.58$ and $z=0.7156$, respectively. We also show as a black dotted line the G-band feature at 4304 \AA, redshifted to $z=0.58$.}
\label{fig:manual_redshift}
\end{figure}
In addition, we supplement these data with the 35\ archival reduced GMOS spectra \citep{bayliss16}. Unfortunately, the headers of these spectroscopic data lack the wavelength-calibration information, so we perform that calibration manually and then use \textsc{fxcor} to estimate redshifts.
For the \textsc{fxcor} estimations we use 4 template spectra from the \textsc{IRAF} package \textsc{rvsao}: \textit{eltemp} and \textit{sptemp}, composites of elliptical and spiral galaxies, respectively, produced with the \textsc{FAST} spectrograph for the Tillinghast Telescope \citep{Fabricant_1998}; \textit{habtemp0}, produced with the \textsc{hectospec} spectrograph for the MMT as a composite of absorption-line galaxies \citep{Fabricant_1998}; and a synthetic galaxy template, \textit{syn4}, from stellar spectral libraries constructed using stellar light ratios \citep{Quintana_2000}. The redshifts are solved in the spectrum mode of \textsc{fxcor}, taking the $r$-value \citep{Tonry1979} as the main reliability factor of the correlation, following \cite{Quintana_2000}. They consider $r > 4$ the limit for a reliable result; here we use the resulting velocity only if (a) at least 3 out of the 4 redshifts estimated from the templates agree with the heliocentric velocity within $\pm$100 km s$^{-1}$ of the median, and (b) at least 2 of those have $r > 5$. Finally, the radial heliocentric velocity of each galaxy and its error are calculated as the mean of the values from the ``on-redshift'' correlations.
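The two acceptance conditions above can be sketched as follows; the per-template velocities and $r$-values are illustrative, and for simplicity the median of four values is taken here as the upper middle element rather than the average of the middle pair:

```python
# Acceptance logic for the fxcor cross-correlation results: keep a redshift
# only if (a) at least 3 of the 4 template velocities lie within +/-100 km/s
# of their median and (b) at least 2 of those agreeing templates have
# Tonry-Davis r > 5. All numerical values below are illustrative.

def accept_fxcor(velocities, r_values, window=100.0):
    """Return the mean velocity of agreeing templates, or None if rejected."""
    med = sorted(velocities)[len(velocities) // 2]  # median of 4 -> upper middle
    agree = [(v, r) for v, r in zip(velocities, r_values) if abs(v - med) <= window]
    if len(agree) < 3:
        return None  # condition (a) failed
    if sum(1 for _, r in agree if r > 5) < 2:
        return None  # condition (b) failed
    return sum(v for v, _ in agree) / len(agree)

# Three templates agree within 100 km/s; two of them have r > 5 -> accepted.
v = accept_fxcor([173900.0, 173950.0, 173870.0, 175400.0], [6.1, 5.4, 4.8, 3.2])
```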
Out of the 35\ GMOS spectra from above, 12\ galaxies have a common MUSE measurement, 10\ of them belonging to the cluster (see below for details on the selection of cluster members). We use these 12\ galaxies in common to compare the results given by \textsc{fxcor} and \textsc{MARZ}, obtaining a mean difference of $60\pm205$ km s$^{-1}$ in the heliocentric reference frame. However, only one galaxy showed a velocity difference higher than 3$\sigma$. Excluding this galaxy from the analysis gives a mean velocity difference of $4\pm96$ km s$^{-1}$.
With respect to the redshift measurements presented in \cite{bayliss16}, for galaxies within $\pm$5000 km s$^{-1}$ of their cluster redshift estimate ($z_{\rm cl} = 0.5801$) we find a mean velocity difference of $|\Delta cz| \approx 300$ km s$^{-1}$, with a large dispersion. Regarding potential cluster members, we select only galaxies for which the redshifts reported by \cite{bayliss16} and the ones estimated using \textsc{fxcor} differ by less than 500 km s$^{-1}$, which at $z_{\rm cl} = 0.5801$ corresponds to a difference of $\sim$0.1\%. This eliminates 2 potential cluster members, one from each method.
In Table~\ref{tab:all_objs_properties} we show the properties of 22 objects from GMOS, excluding the 12 in common with MUSE and the potential cluster member from our measured redshifts. The other potential cluster member is ID 27 from GMOS-2, where \cite{bayliss16} estimated $z = 0.5811$. Redshifts in Table~\ref{tab:all_objs_properties} correspond to the ones measured using \textsc{fxcor}. Our final spectroscopic catalog is composed of 136 objects; 131 galaxies and 5 stars.
\subsubsection{Cluster redshift estimation}
\label{sec:cluster_redshifts}
The cluster's redshift is estimated with the biweight average estimator from \citet{Beers1990}, starting from the median redshift of all objects with measured redshifts in our sample. The estimated redshift then replaces the median in their equation to produce a new estimate, and this process is iterated 3 times. We select only spectroscopic sources with a peculiar velocity (see below) within $\pm$5000 km s$^{-1}$ of the cluster's estimated redshift, in order to exclude most foreground and background objects \citep[e.g.,][]{Bosch2013,pranger14}. We then estimate the velocity dispersion ($\sigma_v$) using the biweight sample variance presented in \citet{Ruel2014}, so that
\begin{equation}\label{eq:vdisp}
\sigma_{\rm bi}^2 = N \frac{\sum_{|u_i|<1} (1-u_i^2)^4(v_i-\bar{v})^2}{D(D-1)}
\end{equation}
\begin{equation}
D = \sum_{|u_i|<1} (1-u_i^2)(1-5u_i^2)
\end{equation}
\noindent
where the proper velocities of the galaxies, $v_i$, and the biweight weighting, $u_i$, are estimated as
\begin{equation}
v_i = \frac{c (z_i - z_{\rm cl})}{1+z_{\rm cl}}
\end{equation}
\begin{equation}
u_i = \frac{v_i - \bar{v}}{9\rm{MAD}(v_i)}
\end{equation}
\noindent
with $c$ being the speed of light, $\rm{MAD}$ the median absolute deviation, and $z_i$ and $z_{\rm cl}$ the redshifts of the galaxies and the biweight estimate for the sample, respectively. The velocity dispersion is then estimated as the square root of $\sigma_{\rm bi}^2$, with its uncertainty estimated as $0.92\sigma_{\rm bi}/\sqrt{N_{\rm members} - 1}$. To obtain a final redshift for the cluster we use a 3$\sigma$-clipping iteration (with $\sigma=\sigma_v$), obtaining $z_{\rm cl}=0.5803\pm0.0006$, where the error is estimated as the standard error, i.e., the standard deviation over the square root of the number of cluster members. The velocity cut for the selection of the cluster members is discussed below.
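A minimal sketch of the biweight dispersion defined above, applied to a synthetic list of member redshifts (all values illustrative):

```python
import statistics

# Biweight dispersion sketch: proper velocities relative to the cluster
# redshift are weighted by u_i and summed only over |u_i| < 1, following the
# equations above. The member redshifts below are synthetic, for
# illustration only.

C_KMS = 299792.458  # speed of light in km/s

def biweight_dispersion(velocities):
    n = len(velocities)
    vbar = statistics.mean(velocities)
    med = statistics.median(velocities)
    mad = statistics.median([abs(v - med) for v in velocities])
    u = [(v - vbar) / (9.0 * mad) for v in velocities]
    num = sum((1 - ui**2) ** 4 * (v - vbar) ** 2
              for ui, v in zip(u, velocities) if abs(ui) < 1)
    d = sum((1 - ui**2) * (1 - 5 * ui**2) for ui in u if abs(ui) < 1)
    return (n * num / (d * (d - 1))) ** 0.5

z_cl = 0.5803
z_gal = [0.5761, 0.5784, 0.5790, 0.5801, 0.5805, 0.5812, 0.5820, 0.5843, 0.5856]
v_pec = [C_KMS * (z - z_cl) / (1 + z_cl) for z in z_gal]  # proper velocities
sigma_v = biweight_dispersion(v_pec)
```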
\subsubsection{Cluster member selection}
\label{sec:cluster_member_selection}
Observationally, galaxies belonging to a cluster are selected by imposing restrictions on their distance to the cluster center and their velocities relative to the BCG. In this section, we study the appropriate cut in the line-of-sight (LoS) projected velocity of the galaxies relative to their BCG using the IllustrisTNG TNG300 simulation. IllustrisTNG is a suite of cosmological magnetohydrodynamic simulations which aims to study the physical processes that drive galaxy formation \citep{nelson17, pillepich17, springel17, naiman18, marinacci18}. TNG300 is the simulation of the suite with the largest volume, having a side length of $L\sim 250\,h^{-1}$ Mpc. This volume contains $2000^3$ dark matter (DM) particles and $2000^3$ baryonic particles. The relatively large size of the simulated box allows us to identify a significant number of massive structures to be analyzed. The mass resolution of TNG300 is $5.9\times 10^7 M_{\odot}$ and $1.1\times 10^7 M_{\odot}$ for the DM and baryonic matter, respectively. The adopted softening length is 1 $h^{-1}$ kpc for the DM particles and 0.25 $h^{-1}$ kpc for the baryonic particles \citep{marinacci18}.
From this simulation we select a total of 80 clusters with masses $4 \times 10^{14}{M_\odot}\leq M_{200}\leq 9 \times 10^{14}{M_\odot}$, located at redshifts $0.1\leq z\leq 1$. Here $M_{200}$ is the mass within a sphere whose mean mass density is 200 times the critical density of the Universe. To ensure that our results are not affected by numerical resolution effects, we only select subhalos with at least 1000 dark matter particles per galaxy ($M_{\rm DM}\geq 5.9 \times 10^{10}{M_\odot}$) and at least 100 stellar particles ($M_{\rm stellar}\geq 1.1 \times 10^{9}{M_\odot}$). The final set of 80 virialized and perturbed clusters provides a sample of 9163 associated cluster galaxies. The bound substructures were identified using the SUBFIND algorithm \citep{Springel2001}.
To stack information from the 80 selected clusters we normalize the velocity distributions using the $\sigma_v - M_{200}$ scaling relation from \citet{Munari2013}. This scaling relation was obtained from a radiative simulation which included both (a) star formation and supernova-triggered feedback, and (b) active galactic nucleus feedback (which they call the AGN-set). The relation reads
\begin{equation}\label{eq:mass}
\sigma_{\rm 1D} = A_{\rm 1D} \left[ \dfrac{h(z)\, M_{200}}{10^{15} M_\odot} \right]^{\alpha},
\end{equation}
\noindent
where $\sigma_{\rm 1D}$ is the one-dimensional velocity dispersion and $h(z) = H(z)/100$ km s$^{-1}$ Mpc$^{-1}$. We adopt the values $A_{\rm 1D} = 1177 \pm 4.2$ km s$^{-1}$ and $\alpha=0.364 \pm 0.0021$, obtained using galaxies associated to subhaloes in the AGN-set simulation \citep{Munari2013}.
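As a concrete illustration, Eq.~\ref{eq:mass} can be evaluated with a few lines of Python. The function below is a sketch; the value of $h(z)$ at $z=0.6$ used in the example is an assumption that depends on the adopted cosmology.

```python
def sigma_1d(m200, hz, a_1d=1177.0, alpha=0.364):
    """1D velocity dispersion (km/s) from the Munari et al. (2013)
    scaling relation: sigma_1D = A_1D * [h(z) * M200 / 1e15 Msun]^alpha.
    m200 is in solar masses; hz = H(z) / (100 km/s/Mpc)."""
    return a_1d * (hz * m200 / 1e15) ** alpha

# Example: a 5e14 Msun cluster at z = 0.6; h(z) ~ 0.9 is an assumed
# illustrative value for a flat LCDM cosmology.
sigma = sigma_1d(5e14, 0.9)
```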
To find the intrinsic LoS velocity distribution of a simulated cluster with mass $M_{200}=5 \times 10^{14}{M_\odot}$ at a given redshift of $z=0.6$, we proceed as follows. We first fit the projected 1D velocity distribution of the cluster galaxies relative to the BCG with a Gaussian of mean $\mu_0$ and dispersion $\sigma_0$. Then, using Equation \ref{eq:mass}, we compute the 1D velocity dispersion $\sigma_1$ that the cluster would have if it had a mass of $M_{200}=5 \times 10^{14}{M_\odot}$. Next, we obtain the 1D velocities of each galaxy, normalized by mass and redshift, using Equation \ref{transformation}. Finally, we obtain the LoS velocities by applying 200 different randomized rotations to each cluster,
\begin{equation}
z=\sigma_1 \left( \frac{x-\mu_0}{\sigma_0} +\mu_0\right).
\label{transformation}
\end{equation}
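A minimal sketch of this normalization step is given below, applying Equation \ref{transformation} with $(\mu_0,\sigma_0)$ taken from the sample moments; a full implementation would fit the Gaussian explicitly, and the mock velocities are purely illustrative.

```python
import numpy as np

def normalize_velocities(v, sigma_1):
    """Rescale 1D velocities v (km/s, relative to the BCG) to the target
    dispersion sigma_1 via z = sigma_1 * ((x - mu_0)/sigma_0 + mu_0),
    with mu_0, sigma_0 estimated from the observed distribution."""
    mu_0, sigma_0 = np.mean(v), np.std(v)
    return sigma_1 * ((v - mu_0) / sigma_0 + mu_0)

# Mock cluster: 500 galaxies with a 600 km/s dispersion, rescaled to an
# assumed sigma_1 = 880 km/s from the scaling relation.
rng = np.random.default_rng(42)
v_obs = rng.normal(100.0, 600.0, size=500)
v_norm = normalize_velocities(v_obs, 880.0)
```

By construction the rescaled sample has dispersion $\sigma_1$; stacking such samples over many clusters and random rotations yields a distribution like the one in Fig.~\ref{histogram}.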
Figure \ref{histogram} presents the histogram of the stacked LoS velocities of the galaxies in the different projections (blue histogram), the best-fit normal distribution (red dashed line), and the confidence intervals (shaded red areas). We conclude that, for a theoretical cluster of mass $M_{200}=7.64 \times 10^{14}{M_\odot}$, the LoS velocities are normally distributed with a dispersion of $\sigma_v = 960$ km s$^{-1}$. This means that $95\%$ of the galaxies belonging to this cluster would have LoS velocities lower than 1920 km s$^{-1}$, and $99\%$ of them lower than 2900 km s$^{-1}$. In what follows we adopt a cut of 3,000 km s$^{-1}$.
\begin{figure}
\begin{center}
\includegraphics[width=0.9\linewidth]{plots/vel_distribution_1D.png}
\vskip-0.13in
\caption{Histogram of the LoS satellite velocity distribution for a cluster with mass $M_{200}=7.64 \times 10^{14}{M_\odot}$ at redshift $z=0.6$. The red dashed line shows the fitted normal distribution and the light red areas the confidence intervals.}
\label{histogram}
\end{center}
\end{figure}
Applying the $\pm$3,000 km s$^{-1}$ cut we obtain a total of \NgalallMUSEGMOS{} cluster redshifts, including 25 members from cube 1, 21 from cube 2, 11 from cube 3, 22 from cube 4, and 8 from the GMOS data.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{plots/0307_redshift_histogram_correct.pdf}
\vskip-0.13in
\caption{Redshift distribution of spectroscopic sources with good measurement from \textsc{MARZ} and \textsc{fxcor}. Hashed red bars represent the region within a range of $\pm$3000 km s$^{-1}$ in peculiar velocity from the cluster's redshift. The histogram insert on the top left shows the distribution of galaxies within this velocity range, where the black dashed and dotted lines represent the cuts at $\pm$3000 km s$^{-1}$ and the velocity of the BCG, respectively.}
\label{fig:redshift_distribution}
\end{figure}
\subsubsection{Summary of spectroscopic catalog}
\label{sec:spec_summary}
In total, we obtain 87 galaxies with spectroscopic redshifts for SPT-CLJ 0307-6225. Of those, 79 come from the 1D MUSE objects of $\S$\ref{sec:reductions} and 8 from the GMOS archival spectroscopic data \citep{bayliss16}. The final cluster redshift, estimated with the biweight average estimator, is $z_{\rm cl} = 0.5803 \pm 0.0006$. The final galaxy cluster redshift distribution is shown in Fig.~\ref{fig:redshift_distribution}. The inset shows the peculiar velocities of the selected galaxies, with the black dashed lines denoting the velocity cut and the black dotted line the velocity of the BCG. The velocity dispersion of the cluster, estimated following Eq.~\ref{eq:vdisp}, is $\sigma_v = 1093\pm108$ km s$^{-1}$.
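For reference, the biweight average estimator used for $z_{\rm cl}$ can be sketched as follows. This is a hand-rolled version of the classic robust estimator (Beers et al. 1990); in practice a library routine such as \textsc{astropy}'s \texttt{biweight\_location} serves the same purpose, and the redshift sample below is mock data.

```python
import numpy as np

def biweight_location(x, c=6.0):
    """Biweight 'average' (Beers et al. 1990): a robust alternative to
    the mean, down-weighting points far from the median in units of the
    median absolute deviation (MAD)."""
    x = np.asarray(x, dtype=float)
    med = np.median(x)
    mad = np.median(np.abs(x - med))
    if mad == 0:
        return med
    u = (x - med) / (c * mad)
    w = (1.0 - u**2) ** 2
    w[np.abs(u) >= 1] = 0.0          # points beyond c*MAD get zero weight
    return med + np.sum(w * (x - med)) / np.sum(w)

# Mock redshift sample centred on z ~ 0.580 with one interloper
z = np.array([0.579, 0.580, 0.581, 0.582, 0.580, 0.578, 0.640])
z_cl = biweight_location(z)
```

The interloper at $z=0.640$ receives zero weight, so the estimate stays at the core of the distribution, which is the reason this estimator is preferred over the plain mean for cluster redshifts.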
\subsubsection{Completeness of MUSE catalog}
\label{sec:completeness}
Since our aim is to study the properties of the galaxy population, we first need to characterize a limiting magnitude to define that population. Fig.~\ref{fig:cmd} shows that the population of spectroscopic RS galaxies stops at $i_{\rm auto} \approx 22.8$, with blue galaxies reaching as deep as $i_{\rm auto} \approx 23.3$. To determine the limiting magnitude, we compare our red-sequence catalog inside the cube footprints in bins of magnitude, checking the fraction of spectroscopically confirmed galaxies within each bin. This check allows us to (1) validate our method for selecting RCS members, which will become important when looking for substructures (see $\S$\ref{sec:subcluster}), and (2) look for potential cluster members not found by \textsc{MARZ}.
In Fig.~\ref{fig:completeness} we show the estimated completeness within magnitude bins of 0.5 mags. The $y$-axis shows the ratio of spectroscopically confirmed red sequence cluster galaxies to the total number of red-sequence galaxies (photometrically selected + spectroscopically confirmed)
within magnitude bins, while the N$_{\rm red}$ on the top $x$-axis shows the number of red galaxies within a bin. The dashed lines show the limits for the regions with magnitudes $i_{\rm auto}<m^*$, $m^*+1$, $m^*+2$, $m^*+3$, with the completeness of each luminosity range written to the left of each dashed line.
The one ``missing'' galaxy at $i_{\rm auto}<m^*$ is at $z = 0.611$ ($\Delta v=5,940$ km s$^{-1}$), while the two missing galaxies near $i_{\rm auto}<m^* +1$ correspond to spectroscopically confirmed background galaxies at $z=0.612$ and $z=0.716$ ($\Delta v=6,130$ km s$^{-1}$ and $\Delta v=25,867$ km s$^{-1}$, respectively). The latter showed properties similar to those of the cluster galaxies: comparable size and visual color, and spatial proximity to the BCG. Its $r-i$ color also fell within (towards the higher end of) the rather generous width used for our RCS catalog. At $i_{\rm auto}\geq m^* +2$, galaxies look like they belong to the cluster, but do not show spectral features strong enough to estimate the redshift accurately.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{plots/completeness_all.pdf}
\vskip-0.13in
\caption{Ratio of the spectroscopically confirmed members with respect to the red galaxies from our catalog at different bins of magnitudes. The top axis shows the number of red galaxies per magnitude bin. The dashed lines denote the limits for $m^*$, $m^*+1$, $m^*+2$ and $m^*+3$, with the percentages being the accumulated completeness for a given limit.}
\label{fig:completeness}
\end{figure}
We therefore impose a cut at $i_{\rm auto} < m^* +2$ (over 80\% completeness) for the analysis of the galaxy population in our spectroscopic sample.
\subsubsection{Spectral classification}
\label{sec:spectral_class}
To understand whether the merger plays a role in the star formation activity of the galaxies, we make use of two measurements: the equivalent widths (EW) of the [OII] $\lambda$3727 \AA\ and H$\delta$ lines. [OII] $\lambda$3727 \AA\ traces recent star formation activity on timescales $\leq$10 Myr, while the Balmer line H$\delta$ probes timescales between 50 Myr and 1 Gyr \citep{Paulino2019}. A strong H$\delta$ absorption line is interpreted as evidence of an explosive episode of star formation which ended 0.5--1.5 Gyr ago \citep{Dressler1983}. To measure the equivalent widths of [OII] $\lambda$3727 \AA, EW(OII), and H$\delta$, EW(H$\delta$), the flux spectrum of each object is integrated over the ranges described by \citet{balogh99} using the \textsc{IRAF} task \textsc{sbands}. We only use MUSE galaxies, excluding the 8 added GMOS galaxies, given that the MUSE selection is unbiased. We do not expect this to change our main results since these galaxies are not located along the merger axis.
We use the scheme defined by \citet{balogh99} to classify our galaxies into different categories: passive, star forming (SF), short-starburst (SSB), post-starburst \citep[PSB, K+A in][]{balogh99} and A+em (which could be dusty star-forming galaxies). For this classification we only consider galaxies with $i_{\rm auto} < m^* +2$ and a signal-to-noise ratio SNR $>3$ (62 galaxies), since galaxies with low SNR can affect the measurement of lines in crowded sections, such as the region of the [OII] $\lambda$3727 \AA\ line \citep{Paccagnella2019}. The median SNR of our MUSE galaxies is 12.0 for sources with magnitude $i_{\rm{auto}} < m^*$, 7.8 for $m^* \leq i_{\rm{auto}} < m^*+1$, 4.0 for $m^*+1 \leq i_{\rm{auto}} < m^*+2$, and 2.3 for $i_{\rm{auto}} \ge m^*+2$. We estimate the SNR over the entire spectral range of our data using the \textsc{der\_snr} algorithm \citep{Stoehr2007}. The results of this classification are discussed further in $\S$\ref{sec:galpopulation}.
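The \textsc{der\_snr} estimate is simple enough to sketch in a few lines. The version below follows the published recipe of \citet{Stoehr2007} (median signal over a median absolute third-order flux difference, scaled for Gaussian statistics) and is an illustration rather than the exact code we ran; the flat mock spectrum is an assumption.

```python
import numpy as np

def der_snr(flux):
    """SNR over the full spectral range following the DER_SNR recipe:
    signal = median flux; noise = scaled median of |2 f_i - f_{i-2} -
    f_{i+2}|, where 0.6052697 = 1/(0.6745 * sqrt(6)) converts the median
    absolute third-order difference to a Gaussian sigma."""
    f = np.asarray(flux, dtype=float)
    f = f[np.isfinite(f)]
    signal = np.median(f)
    noise = 0.6052697 * np.median(np.abs(2.0 * f[2:-2] - f[:-4] - f[4:]))
    return signal / noise

# Flat mock spectrum with true SNR ~ 10
rng = np.random.default_rng(1)
spec = 100.0 + rng.normal(0.0, 10.0, size=2000)
snr = der_snr(spec)
```

The third-order difference makes the noise estimate insensitive to smooth continuum shape and to isolated emission or absorption lines, which is why a single number is meaningful for a whole spectrum.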
\subsection{Galaxy association}
\label{sec:subcluster}
Depending on the stage of the merging event, it may be possible to determine which are the main colliding structures and which galaxies belong to each of them.
Several techniques are available to estimate the level of substructure in galaxy clusters using velocities. One of the most common is to analyze the galaxy velocity distribution in one dimension, where for a relaxed cluster it is assumed to be close to Gaussian \citep{Menci1996, ribeiro13}. \citet{Hou2009} used Monte Carlo simulations to show that the Anderson-Darling (AD) test is among the most powerful for classifying Gaussian (G) and non-Gaussian (NG) clusters, which is why it has been widely used in astronomy with different separation criteria \citep[e.g.][]{Hou2009,ribeiro13,Nurgaliev2017, Lopes2018}.
\citet{Hou2009} estimate an $\alpha$ value (the significance level of the statistic) to separate G and NG clusters (see Eq. 17 in their paper), where $\alpha<0.05$ indicates a NG distribution. \citet{Nurgaliev2017} use the p-value of the statistic (p$_{\rm AD}$) and separate the clusters using p$_{\rm AD} < 0.05/n$ for NG clusters, where $n$ is the number of tests being conducted. \citet{Roberts2018} also use the p-value, with p$_{\rm AD} < 0.1$ indicating a NG cluster. We divide our data into 4 subsets for the application of the AD test: Cubes 2 and 3 (the middle overdensity), Cubes 1 and 4 (to compare the two most overdense regions), all the data cubes, and all the data cubes plus the GMOS data.
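As an illustration of the AD test in this setting, the statistic (with the usual small-sample correction for estimated mean and dispersion) can be computed directly. The $0.752$ threshold for the 5\% level is from D'Agostino \& Stephens (1986); the mock velocity samples are purely illustrative.

```python
import numpy as np
from math import erf, sqrt

def anderson_darling_normal(v):
    """Anderson-Darling statistic for normality with mean and dispersion
    estimated from the data, including the small-sample correction
    A*^2 = A^2 (1 + 0.75/n + 2.25/n^2). Values above ~0.752 reject
    Gaussianity at the 5% level."""
    x = np.sort(np.asarray(v, dtype=float))
    n = len(x)
    y = (x - x.mean()) / x.std(ddof=1)
    cdf = 0.5 * (1.0 + np.array([erf(t / sqrt(2.0)) for t in y]))
    i = np.arange(1, n + 1)
    a2 = -n - np.sum((2 * i - 1) * (np.log(cdf) + np.log(1.0 - cdf[::-1]))) / n
    return a2 * (1.0 + 0.75 / n + 2.25 / n**2)

# Mock 1D velocity samples: a relaxed (Gaussian) cluster vs. a bimodal one
rng = np.random.default_rng(2)
gauss = rng.normal(0.0, 1000.0, size=80)
bimodal = np.concatenate([rng.normal(-1500.0, 400.0, 40),
                          rng.normal(1500.0, 400.0, 40)])
a_gauss = anderson_darling_normal(gauss)
a_bimodal = anderson_darling_normal(bimodal)
```

The bimodal sample, mimicking two superposed subclusters along the line of sight, yields a much larger statistic than the single Gaussian, which is the signature the AD test exploits.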
To test for 3D substructure (using velocities and on-sky positions), we use the Dressler-Shectman test \citep[DS-test,][]{dressler88}, which combines the on-sky coordinates with the velocity information and can be used to trace perturbed structures \citep[e.g.][]{pranger14, Olave2018}. The DS-test uses the velocity information of the closest (projected) neighbors of each galaxy to estimate a $\Delta$ statistic, given by
\begin{equation}
\Delta = \sum^{N_{\rm tot}}_{i=1} \delta_i,
\end{equation}
\noindent
where $N_{\rm tot}$ corresponds to the total number of members of the cluster and
\begin{equation}
\delta_i^2 = \frac{N+1}{\sigma^2_{\rm cl}} \left[ (\bar{v}_{\rm loc}-\bar{v}_{\rm cl})^2 + (\sigma_{\rm loc} - \sigma_{\rm cl})^2 \right],
\end{equation}
\noindent
where $\delta_i$ is estimated for each galaxy. $N$ is the number of neighbors of the galaxy used to estimate the statistic, taken to be $N = \sqrt{N_{\rm tot}}$ \citep{Pinkney1996}; $\sigma_{\rm cl}$ and $\sigma_{\rm loc}$ are the velocity dispersions of the whole cluster and of the $N$ neighbors, respectively; and $\bar{v}_{\rm cl}$ and $\bar{v}_{\rm loc}$ are the mean peculiar velocities of the cluster and of the $N$ neighbors, respectively. A value of $\Delta/N_{\rm tot} \leq 1$ implies that there are no substructures in the cluster.
To calibrate our DS-test results, we perform $10^4$ Monte Carlo simulations by shuffling the velocities, i.e., randomly interchanging the velocities among the galaxies while keeping their sky coordinates fixed (so that the neighbors are always the same). The p-value of the statistic (p$_{\Delta}$) is estimated by counting how many times the simulated $\Delta$ is higher than that of the original sample and dividing by the total number of simulations. Requiring p$_{\Delta} < 0.05$ ensures a low probability of false identification \citep{Hou2012}; below this threshold the distribution is considered non-random. Both AD- and DS-test results are shown in Table~\ref{tab:tests}.
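The DS-test and its Monte Carlo calibration fit in a short function. The sketch below uses a brute-force neighbour search and a reduced number of shuffles; the two-group mock data are purely illustrative.

```python
import numpy as np

def ds_test(x, y, v, n_shuffle=500, seed=0):
    """Dressler-Shectman Delta/N_tot statistic and its Monte Carlo
    p-value. Velocities are shuffled over fixed sky positions, so each
    galaxy keeps the same N = sqrt(N_tot) projected neighbours."""
    x, y, v = (np.asarray(a, dtype=float) for a in (x, y, v))
    n_tot = len(v)
    n_nb = int(np.sqrt(n_tot))
    v_cl, s_cl = np.mean(v), np.std(v)
    # each galaxy plus its n_nb nearest projected neighbours
    d2 = (x[:, None] - x[None, :]) ** 2 + (y[:, None] - y[None, :]) ** 2
    nb = np.argsort(d2, axis=1)[:, : n_nb + 1]

    def delta_sum(vel):
        v_loc = vel[nb].mean(axis=1)
        s_loc = vel[nb].std(axis=1)
        delta = np.sqrt((n_nb + 1) / s_cl**2
                        * ((v_loc - v_cl) ** 2 + (s_loc - s_cl) ** 2))
        return delta.sum()

    delta_obs = delta_sum(v)
    rng = np.random.default_rng(seed)
    hits = sum(delta_sum(rng.permutation(v)) > delta_obs
               for _ in range(n_shuffle))
    return delta_obs / n_tot, hits / n_shuffle

# Mock merger: two projected groups with offset mean velocities
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0.0, 0.5, 25), rng.normal(5.0, 0.5, 25)])
y = rng.normal(0.0, 0.5, 50)
v = np.concatenate([rng.normal(-500.0, 200.0, 25),
                    rng.normal(500.0, 200.0, 25)])
d_stat, p_delta = ds_test(x, y, v)
```

For this mock system, where local velocity means differ strongly from the global mean, $\Delta/N_{\rm tot} > 1$ and the shuffled realizations rarely exceed the observed statistic, giving a small p$_{\Delta}$.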
When velocities are not available, or the velocity difference between the clusters is small, another common practice is to use the sky positions of the galaxies and build surface density maps to look for substructures \citep[see, e.g., ][]{White2015, Monteiro2017, Monteiro2018, Monteiro2020, Yoon2019}. The galaxy surface density map at the top right of Fig.~\ref{fig:rgb_image} implies that there are at least two colliding structures. To obtain the density map we use the RCS galaxy catalog and the \textsc{sklearn.neighbors.KernelDensity} python module, applying a Gaussian kernel with a bandwidth of 50 kpc.
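A density map of this kind can be sketched with plain \textsc{numpy}: evaluating the Gaussian kernel sum on a grid is equivalent to fitting \textsc{sklearn.neighbors.KernelDensity} and scoring it on the same grid. The coordinates, bandwidth and mock clump positions below are illustrative.

```python
import numpy as np

def surface_density(x, y, grid_x, grid_y, bandwidth=50.0):
    """Galaxy surface density on a grid from a Gaussian kernel with the
    given bandwidth (same units as the coordinates, e.g. kpc). Returns
    galaxies per unit area; dens[i, j] corresponds to (grid_x[j],
    grid_y[i])."""
    gx, gy = np.meshgrid(grid_x, grid_y)
    dens = np.zeros_like(gx)
    for xi, yi in zip(x, y):
        r2 = (gx - xi) ** 2 + (gy - yi) ** 2
        dens += np.exp(-0.5 * r2 / bandwidth**2)
    return dens / (2.0 * np.pi * bandwidth**2)

# Two mock galaxy clumps ~1 Mpc apart (coordinates in kpc)
rng = np.random.default_rng(3)
x = np.concatenate([rng.normal(0.0, 100.0, 30), rng.normal(1000.0, 100.0, 30)])
y = np.concatenate([rng.normal(0.0, 100.0, 30), rng.normal(0.0, 100.0, 30)])
grid = np.linspace(-500.0, 1500.0, 81)
dmap = surface_density(x, y, grid, grid)
```

Contouring such a map at fixed surface-density levels (e.g. starting at 100 gal Mpc$^{-2}$) recovers the two clumps as separate overdensities.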
\subsection{X-ray morphology}
\label{subsec:xray}
An image in the 0.5--4.0\,keV bandpass was extracted and adaptively smoothed using \textsc{csmooth}\footnote{\url{https://cxc.harvard.edu/ciao/ahelp/csmooth.html}}. This smoothed image, shown as orange contours in Fig.~\ref{fig:rgb_image}, reveals a highly asymmetric X-ray morphology, with a bright, dense core offset from the large-scale centroid by $\sim$1$^{\prime}$ ($\sim$400\,kpc). \citet{Nurgaliev2017} used these same data to estimate the X-ray asymmetry of this system, finding it to be the second\footnote{In \citet{zenteno2020}, the most asymmetric system, SPT-CLJ2332-5053, was reported to be a cluster in a pre-merger state with a close companion, which would contaminate the estimated asymmetry index. Excluding SPT-CLJ2332-5053 would make SPT-CLJ0307-6225 the most asymmetric system in the sample.} most asymmetric system in the full SPT-Chandra sample, with an X-ray morphology as disturbed as that of El Gordo, a well-known major merger \citep[][]{williamson11,Menanteau2012}.
\section{Results}
\label{sec:results}
\subsection{Cluster substructures}
\label{sec:dynamics}
In Table \ref{tab:tests} we show the results of both the AD-test and the DS-test applied to different subsets. The second column gives the number of spectroscopic galaxies in each subsample. The subset yielding the smallest p-values for both tests is Cubes 1+4, with these cubes located on top of the two density peaks and enclosing the area next to the two brightest galaxies (see Fig.~\ref{fig:rgb_image}). We find that neither the AD-test nor the DS-test provides evidence of substructure, and applying a 3$\sigma$-clipping iteration to the samples does not change the results. These results, together with the X--ray morphology, show no evidence of substructure along the line of sight and rather support a merger in the plane of the sky; we therefore examine the spatial distribution of the galaxies.
\begin{table}\caption{Results for the substructure-identification tests applied to different subsamples.}\label{tab:tests}
\centering
\begin{tabular}{l r r r r r}
\hline
\hline
Subsample&N&\multicolumn{2}{c}{AD-test}&\multicolumn{2}{c}{DS-test}\\
&&$\alpha$&P-value&$\Delta/N_{\rm{tot}}$&P-value\\
\hline
Cubes 2+3 & 32 & 0.264 & 0.674 & 0.967 & 0.421 \\
Cubes 1+4 & 48 & 0.383 & 0.383 & 1.329 & 0.097 \\
All Cubes & 79\ & 0.234 & 0.789 & 1.205 & 0.138 \\
MUSE+GMOS & \NgalallMUSEGMOS\ & 0.272 & 0.662 & 1.203 & 0.152 \\
\hline
\end{tabular}
\end{table}
In Fig.~\ref{fig:mpcdensitymap} we show the contours of the unweighted and flux-weighted density maps (top and bottom panels, respectively) of the RCS galaxies. The contour levels begin at 100 gal Mpc$^{-2}$ and increase in intervals of 50 gal Mpc$^{-2}$. Dots correspond to galaxies from our spectroscopic samples. In both maps, weighted and unweighted, the cores of the two main structures with their corresponding BCGs can be seen, along with a high density of galaxies in between them.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{plots/0307_density_100gal_zoomin4_unweighted_weighted_flux_v3.pdf}
\vskip-0.13in
\caption{Unweighted (top) and flux-weighted (bottom) numerical density maps of the RCS galaxies (photometric and spectroscopic), shown as black contours with levels beginning at 100 galaxies per Mpc$^{2}$; the flux was estimated from the $i$ band. Galaxies not close to the density levels, or classified as not being part of any structure by the \texttt{DBSCAN} algorithm, are shown as black dots, while galaxies assigned to substructures by the algorithm are colored by substructure: 0307-6225N (red), 0307-6225S (orange) and an in-between overdensity (green).}
\label{fig:mpcdensitymap}
\end{figure}
For the definition of the substructures we only take into account spectroscopic members within (or near) the limits of our density contours. To identify the galaxies with the highest probability of belonging to each structure we use the Density-Based Spatial Clustering of Applications with Noise \citep[\texttt{DBSCAN},][]{Ester1996} algorithm. The advantage of this algorithm is that galaxies are not necessarily assigned to a group, leaving some of them out. We use a \textsc{python}-based implementation of this algorithm, following the work of \citet[][substructure defined as at least three neighbouring galaxies within a separation of $\sim$140 kpc]{Olave2018}.
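To make the selection criterion concrete, a minimal \texttt{DBSCAN} in the spirit of \citet{Ester1996} is sketched below. The $\sim$140 kpc linking length and three-galaxy minimum follow the definition quoted above; the implementation itself is a simplified sketch rather than the code we ran, and the mock coordinates are illustrative.

```python
import numpy as np

def dbscan(points, eps=140.0, min_samples=3):
    """Minimal DBSCAN: groups points having at least min_samples
    neighbours (self included) within eps; points not absorbed into any
    group are labelled -1 (noise)."""
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    dist = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    neighbours = [np.flatnonzero(row <= eps) for row in dist]
    labels = np.full(n, -1)
    current = 0
    for i in range(n):
        if labels[i] != -1 or len(neighbours[i]) < min_samples:
            continue                      # already assigned, or not a core point
        labels[i] = current
        stack = [i]
        while stack:
            j = stack.pop()
            if len(neighbours[j]) >= min_samples:  # only core points expand
                for k in neighbours[j]:
                    if labels[k] == -1:
                        labels[k] = current
                        stack.append(k)
        current += 1
    return labels

# Two compact mock groups (pair separations < 140 kpc) plus one isolated galaxy
group_a = [(0, 0), (50, 0), (0, 50), (50, 50), (25, 25)]
group_b = [(px + 2000, py) for px, py in group_a]
labels = dbscan(np.array(group_a + group_b + [(5000, 5000)]))
```

The isolated point is left with label $-1$, illustrating how \texttt{DBSCAN} naturally discards galaxies that do not belong to any substructure.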
The different structures found are shown with differently coloured dots in Fig.~\ref{fig:mpcdensitymap}. Black dots represent galaxies that either were too far from our density contours or were discarded by the \texttt{DBSCAN} algorithm. We name the two most prominent structures defined by \texttt{DBSCAN} 0307-6225N (red dots) and 0307-6225S (orange dots), comprising 23 and 25 members, respectively. The BCGs of 0307-6225S and 0307-6225N are marked in Table~\ref{tab:all_objs_properties} by the superscripts $S_1$ and $N$, respectively. Both structures show a Gaussian velocity distribution when applying the AD test, and the distance between them is $\sim$1.10 Mpc between their BCGs and $\sim$1.15 Mpc between the peaks of the density distribution.
Regarding the in-between overdensity of 19 galaxies (green dots in Fig.~\ref{fig:mpcdensitymap}), we chose to discard it as an actual structure given that (1) unlike the other two structures, it does not have a massive dominant galaxy, and (2) the estimated velocity dispersion is $\sigma_v=1400$ km s$^{-1}$, which translates into an unlikely mass of $1.7\times10^{15}$ M$_\odot$ (see $\S$\ref{subsec:dmass}). We come back to this overdensity in $\S$~\ref{sec:dis_recoverymh}.
\subsection{Cluster dynamical mass}\label{subsec:dmass}
We estimate the masses using the \citet{Munari2013} scaling relation between the mass and the velocity dispersion of the cluster (see Eq.~\ref{eq:mass}). The Gaussian velocity distributions, together with the large separation between the centers of the two structures ($\sim$1.1 Mpc between the BCGs) and the small velocity difference between them, $\Delta v_{N-S} = 342$ km s$^{-1}$ (in the cluster's frame of reference), strongly suggest a plane-of-the-sky merger \citep[see, e.g.][]{Dawson2015, Mahler2020}, which could imply that the overestimation of the masses from scaling relations is minimal \citep{Dawson2015}. We explore this further in $\S$\ref{sec:mass_overstimation_merging}. To minimize the possible overestimation when using scaling relations, we only use RCS spectroscopic galaxies to estimate $\sigma_v$, since in clusters with a high accretion rate blue galaxies tend to raise the value of the velocity dispersion \citep{Zhang2012}.
In Table \ref{tab:substructures} we show the properties of the two substructures. The two structures have similar masses, with a most probable ratio of $M_{\rm S}/M_{\rm N}\approx 1.3$, albeit with large uncertainties. Galaxies selected for the dynamical mass estimation are likely to belong to the core regions of the two clusters. Galaxies in these regions are expected to be virialized and should more closely follow the gravitational potential of the clusters during a collision, giving a better mass estimate from the velocity dispersion.
\begin{table}
\caption{Substructure properties}
\label{tab:substructures}
\centering
\resizebox{\linewidth}{!}{%
\begin{tabular}{c c c c c c c}
\hline
\hline
Structure & R.A. & Dec. & $z$ & $\sigma_{v}$ & M$_{200,\rm{dyn}}$ & N$_{\rm Members}$ \\
0307-6225 & (J2000) & (J2000) & & km s$^{-1}$ & $\times$10$^{14}$ M$_\odot$ & \\
\hline
S & 46.8198 & -62.4465 & 0.5792 $\pm$ 0.0003 & 753 $\pm$ 163 & 3.13 $\pm$ 1.87 & 25\\
N & 46.8494 & -62.4028 & 0.5810 $\pm$ 0.0002 & 686 $\pm$ 145 & 2.42 $\pm$ 1.40 & 23\\
\hline
\end{tabular}
}
\end{table}
\subsection{Cluster merger orbit}\label{sec:history}
To understand the merging event, we use the Monte Carlo Merger Analysis Code \citep[\texttt{MCMAC},][]{Dawson2013}, which analyzes the dynamics of the merger and outputs its kinematic parameters. The model assumes a two-body collision of two spherically symmetric halos with NFW profiles \citep[][]{NFW0, NFW1}, where the total energy is conserved and the impact parameter is assumed to be zero. The parameters are estimated through a Monte Carlo analysis by randomly drawing from the probability density functions of the inputs. The inputs required for each substructure are the redshift and the mass, with their respective errors, along with the distance between the structures and the errors on their positions. We use the values in Table \ref{tab:substructures} as inputs, where the errors on the redshifts are the standard errors, while the errors on the distance are taken as the distances between the BCGs and the density-distribution peak of each structure (0.144' and 0.017' for 0307-6225N and 0307-6225S, respectively). The results are obtained by sampling through 10$^5$ iterations and are shown and described in Table \ref{tab:mcmac}, with errors corresponding to the 1$\sigma$ level.
\begin{table}
\caption{Output from the \texttt{MCMAC} code, with the priors from Table \ref{tab:substructures}. Errors correspond to the $1\sigma$ level.}
\label{tab:mcmac}
\centering
\begin{tabular}{l r c l}
\hline
\hline
Param. & Median & Unit & Description\\
\hline
\vspace{3.5pt}
$\alpha$ & 39$^{+13}_{-11}$ & deg & Merger axis angle\\
\vspace{3.5pt}
$d3D_{\rm obs}$ & 1.29$^{+0.32}_{-0.15}$ & Mpc & \footnotesize{3D distance of the halos at T$_{\rm{obs}}$.}\\
\vspace{3.5pt}
$d3D_{\rm{max}}$ & 1.72$^{+0.44}_{-0.22}$ & Mpc & \footnotesize{3D distance of the halos at apoapsis.}\\
\vspace{3.5pt}
$v3D_{\rm{col}}$ & 2300$^{+122}_{-96}$ & km/s & \footnotesize{3D velocity at collision time.}\\
\vspace{3.5pt}
$v3D_{\rm{obs}}$ & 547$^{+185}_{-103}$ & km/s & \footnotesize{3D velocity at T$_{\rm{obs}}$.}\\
\vspace{3.5pt}
$v_{\rm{rad}}$ & 339$^{+28}_{-28}$ & km/s & \footnotesize{Radial velocity of the halos at T$_{\rm{obs}}$.}\\
\vspace{3.5pt}
TSP0 & 0.96$^{+0.31}_{-0.18}$ & Gyr & \footnotesize{TSP for outgoing system.}\\
\vspace{3.5pt}
TSP1 & 2.60$^{+1.07}_{-0.53}$ & Gyr & \footnotesize{TSP for incoming system.}\\
\hline
\end{tabular}
\end{table}
\texttt{MCMAC} outputs the merger axis angle $\alpha$, the estimated distances and velocities at different times, and two possible current stages of the merger: outgoing after the first pericentric passage, and incoming after reaching apoapsis. The times since pericentric passage (TSP) for the two scenarios are denoted TSP0 for the outgoing scenario and TSP1 for the incoming one. These last two estimates are the ones we will discuss further when recovering the merger orbit of the system.
To further constrain the stage of the merger we compare the observational features with simulations. We use the Galaxy Cluster Merger Catalog \citep{ZuHone2018}\footnote{\url{http://gcmc.hub.yt/simulations.html}}, in particular the ``A Parameter Space Exploration of Galaxy Cluster Mergers'' simulation \citep{ZuHone2011}, which consists of an adaptive-mesh-refinement grid-based hydrodynamical simulation of a binary collision between two galaxy clusters, with a box size of 14.26 Mpc. The binary merger initial configuration separates the two clusters by a distance of the order of the sum of their virial radii, with their gas profiles in hydrostatic equilibrium. With this simulation one can explore the properties of a collision of clusters with mass ratios of 1:1, 1:3 and 1:10, where the mass of the primary cluster is $M_{200}=6\times10^{14}$ M$_\odot$, similar to the SZ-derived mass of $M_{200}=7.63 \times 10^{14}\, h^{-1}_{70}$ M$_\odot$ for SPT-CLJ 0307-6225 \citep{bleem15b}, and with different impact parameters ($b = 0, 500, 1000$ kpc).
We use merger mass ratios of both 1:3 and 1:1. Since we cannot constrain the impact parameter, we use all of them and study their differences; for example, the larger the impact parameter, the longer it takes for the merging clusters to reach apoapsis. We also note that for our analysis we use a projection along the $z$-axis, since the evidence suggests a collision taking place in the plane of the sky.
\subsubsection{Determining TSP0 and TSP1 from the simulations}
To determine the collision time, we use the dark matter distribution of both objects, focusing on the distance between their density cusps at different snapshots. To determine the snapshots for an outgoing and an incoming scenario closest to what we see in our system, we look for the snapshot where the separation between the peaks is similar to the projected distance between our BCGs ($\sim$1.10 Mpc).
In Table \ref{tab:simulation_prop} we show the results for the different impact parameters, where the second column indicates the mass ratio. The third column shows the simulation time at which the distance between the two halos is minimal (pericentric passage time); the errors are the temporal resolution of the simulation at the chosen snapshot. Following the previous nomenclature, the fourth column, TSP0$_{\rm sim}$, corresponds to the time since the first pericentric passage (minimum approach), while the fifth column, TSP1$_{\rm sim}$, corresponds to the time since the pericentric passage for a system that has reached the first turnaround and is heading towards the second passage. Times are either the snapshot time or an average between two snapshots if the estimated separations are nearly equally close to the $\sim$1.10 Mpc distance.
For $b=0$ kpc, the maximum distance achieved between the two dark matter halos was 1.05 Mpc in the 1:3 mass ratio simulation and 0.99 Mpc in the 1:1 case, meaning that we cannot separate the two scenarios by comparing with the projected distance between 0307-6225N and 0307-6225S.
\begin{table}
\caption{Estimated collision times and times since collision for the simulations with different impact parameters and mass ratios.}
\label{tab:simulation_prop}
\centering
\begin{threeparttable}
\begin{tabular}{r c c c c}
\hline
\hline
$b$ & Mass ratio & Collision time & TSP0$_{\rm sim}$ & TSP1$_{\rm sim}$ \\
kpc & & Gyr & Gyr & Gyr \\
\hline
0 & 1:3 & 1.22 $\pm$ 0.02 & 0.78 $\pm$ 0.20 & -\\
500 & 1:3 & 1.24 $\pm$ 0.02 & 0.66 $\pm$ 0.20 & 0.96 $\pm$ 0.20 \\
1000 & 1:3 & 1.34 $\pm$ 0.02 & 0.56 $\pm$ 0.20 & 1.46 $\pm$ 0.20 \\
0 & 1:1 & 1.32 $\pm$ 0.02 & 0.68 $\pm$ 0.20 & -\\
500 & 1:1 & 1.34 $\pm$ 0.02 & 0.46 $\pm$ 0.20 & - \\
1000 & 1:1 & 1.40 $\pm$ 0.02 & 0.80 $\pm$ 0.20 & 1.00 $\pm$ 0.20 \\
\hline
\end{tabular}
\begin{tablenotes}
\small
\item {\bf Notes.} No TSP1 value is provided when we cannot separate between the outgoing and incoming scenarios by requiring a distance of $\sim$1.1 Mpc.
\end{tablenotes}
\end{threeparttable}
\end{table}
In Fig.~\ref{fig:density_simulation} we show, as an example, the density contours of the galaxies from the simulation with mass ratio 1:3 and $b=1000$ kpc, estimated as described in $\S$\ref{sec:subcluster}. The density contours at T = 1.9 Gyr and T = 2.7 Gyr are shown in the top (outgoing scenario) and bottom (incoming scenario) panels, respectively, where T is the time since the beginning of the simulation. Dots are from our spectroscopic sample, with the same colors as in Fig.~\ref{fig:mpcdensitymap}; the red contours are the unweighted RCS galaxy numerical density map from the same figure. It is worth noting that, although the density contours from the simulations and the galaxies from our observations seem well correlated, the simulations (and therefore their density contours) were in no way influenced by our observations. The only manipulation of the contours is a rotation and translation of the simulation coordinate system so that they match the position of the galaxies of 0307-6225S.
The results in Table \ref{tab:simulation_prop} suggest that the \texttt{MCMAC} estimate of TSP1 is too large, giving preference to the outgoing scenario. We discuss this further in $\S$\ref{sec:constraining_tsc_with_sims}.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{plots/0307_density_simulation_v4_final_v5.pdf}
\vskip-0.13in
\caption{Density contours similar to the ones shown in Fig.~\ref{fig:mpcdensitymap}, but drawn from galaxies of the merger simulation (1:3 mass ratio and $b=1000$ kpc) at t=1.9 Gyr and t=2.7 Gyr since the beginning of the simulation (top and bottom panels, respectively). The coordinates of the density maps were rotated and translated to be comparable with the positions of the galaxies (dots) of SPT-CLJ0307-6225. For comparison, the red contours show the SPT-CLJ0307-6225 unweighted density map from Fig.~\ref{fig:mpcdensitymap}, with dots being the spectroscopic galaxies following the same color scheme.}
\label{fig:density_simulation}
\end{figure}
\subsubsection{X--ray morphology}
The hydrodynamical simulations provide a gas distribution that can be directly compared with the observations. Fig.~\ref{fig:sim_TSP0_xray} shows the snapshots of the outgoing scenario and Fig.~\ref{fig:sim_TSP1_xray} those of the incoming scenario, where the projected X--ray emission is overplotted as blue contours on top of the projected total density, for the simulation snapshots close to the derived TSP (Table \ref{tab:simulation_prop}), with the simulation time shown at the bottom left of each panel. Note, however, that for the 1:1 mass ratio and $b=500$ kpc the system reaches the $\sim$1.1 Mpc distance at turnaround, which means that we cannot differentiate between an outgoing and an incoming scenario; we keep the same snapshot in both Figures \ref{fig:sim_TSP0_xray} and \ref{fig:sim_TSP1_xray} for comparison. The 1:3 mass ratio scenarios most closely resemble the gas distribution from our {\it Chandra} observations (orange contours in Fig.~\ref{fig:rgb_image}). We come back to this in $\S$~\ref{sec:constraining_tsc_with_sims}.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{plots/simulations_xray_contours_tsc0_v2.png}
\vskip-0.13in
\caption{Density and X--ray contours of the different simulations.
The simulation times are shown on the bottom left corner, and correspond to (or are close to in case of averaging over two snapshots) the collision time plus the TSP0 time since collision (see Table \ref{tab:simulation_prop}).
The projected total density of the simulations is shown in red in the background, with the contrast starting at $1\times10^{7}$ M$_\odot$ kpc$^{-2}$. Blue contours were derived from the projected X--ray emission, with levels of $0.5, 1, 5, 10, 15\times10^{-8}$ photons/s/cm$^{2}$/arcsec$^{2}$. Simulations are divided according to their mass ratio (1:3 on top and 1:1 on the bottom) and according to the impact parameter (500 kpc in the left panels and 1000 kpc in the right panels). The box size is the same as the one used in Fig.~\ref{fig:rgb_image}. The white bar corresponds to the same 1 arcmin scale shown in Fig.~\ref{fig:rgb_image}.
}
\label{fig:sim_TSP0_xray}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{plots/simulations_xray_contours_tsc1_v2.png}
\vskip-0.13in
\caption{Same as Fig.~\ref{fig:sim_TSP0_xray}, but derived from the simulations at the TSP1 times.}
\label{fig:sim_TSP1_xray}
\end{figure}
\subsection{The impact of the merging event in the galaxy populations}
\label{sec:galpopulation}
In Fig.~\ref{fig:gal_properties} we show the CMD for each subsample: all galaxies, galaxies belonging to 0307-6225N and 0307-6225S, and galaxies not belonging to either of them. Galaxies are color coded according to their spectral classification. Most of the star-forming galaxies are located within the two main structures (9 out of 10 SF+SSB galaxies), with some of them classified as RCS galaxies (4; 2 SF and 2 SSB). Galaxies with SNR < 3 and/or $i_{\rm auto}$ > $m^* + 2$ are plotted as black crosses.
For simplicity, we use the following notation (and their combinations) to refer to the different galaxy populations throughout the text:
\begin{itemize}
\item \textit{SSB:} Short starburst galaxies, following \cite{balogh99}.
\item \textit{PSB:} Post-starburst galaxies, defined as galaxies with EW(H$\delta$) $\geq 5$ \AA\ and EW(OII) $< 5$ \AA\ \citep[K+A in][]{balogh99}.
\item \textit{EL:} Emission-line galaxies (EW(OII) $\geq 5$ \AA), including SSB, star-forming (SF) and A+em galaxies, the latter believed to be dusty star-forming galaxies \citep{balogh99}.
\item \textit{NEL:} Non emission-line galaxies, i.e., galaxies with EW(OII) $< 5$ \AA; these comprise the passive and PSB populations.
\item \textit{Red galaxies:} Galaxies belonging to (or redder than) the red cluster sequence from $\S$\ref{sec:photometric}.
\item \textit{Blue galaxies:} Galaxies with colors lower than the red cluster sequence.
\end{itemize}
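As an illustration, the numeric cuts quoted above can be encoded in a minimal Python sketch (not part of our pipeline). Only the EW(OII) and EW(H$\delta$) thresholds of 5 \AA\ are used; the finer EL subclasses (SF, SSB, A+em) require additional criteria from \cite{balogh99} not listed here, and the sign convention assumed is that a positive EW denotes a detected line.

```python
def spectral_class(ew_oii, ew_hdelta):
    """Classify a galaxy from its rest-frame equivalent widths (Angstrom).

    Implements only the cuts quoted in the text: EL if EW(OII) >= 5 A,
    otherwise NEL, split into PSB (EW(Hdelta) >= 5 A) and passive.
    The EL subclasses (SF, SSB, A+em) need extra criteria not encoded here.
    """
    if ew_oii >= 5.0:
        return "EL"          # emission-line: SF, SSB or A+em
    if ew_hdelta >= 5.0:
        return "NEL (PSB)"   # K+A post-starburst
    return "NEL (passive)"

# Examples:
spectral_class(12.0, 2.0)  # -> "EL"
spectral_class(1.0, 7.0)   # -> "NEL (PSB)"
spectral_class(0.5, 1.0)   # -> "NEL (passive)"
```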
\begin{figure}
\centering
\includegraphics[width=\linewidth]{plots/0307_specclas_prop6_v3.pdf}
\vskip-0.13in
\caption{CMD of the cluster for the different samples. Galaxies are color-coded according to their spectral classification described in \S\ref{sec:galpopulation}. \textit{top left}: entire spectroscopic data sample. \textit{top right}: sample comprising galaxies not belonging to 0307-6225N or 0307-6225S, i.e., galaxies from the in-between overdensity plus galaxies not belonging to any substructure according to \textsc{DBSCAN}. \textit{bottom}: 0307-6225S and 0307-6225N samples shown in the left and right panels, respectively. The green dotted lines are the limits of the RCS zone. Black crosses are galaxies with SNR < 3 or $i_{\rm auto} \geq m^* + 2$. Filled colors are galaxies classified as SSB.}
\label{fig:gal_properties}
\end{figure}
Given that most of the SF galaxies seem to be located at the cluster's cores, especially the red SF galaxies, it is plausible that they were part of the merging event, instead of being accreted after it.
In Fig.~\ref{fig:phase_spectro} we show a phase-space diagram, with the X-axis being the separation from the SZ center, negative for objects to the south of it. Circles are red galaxies, while triangles are blue galaxies. Inverted triangles are blue galaxies with no emission lines (filled for PSB and unfilled for passive), while filled circles are SSB galaxies. Galaxies are color coded dark red if they belong to 0307-6225N, dark orange for 0307-6225S, and black for neither of the above. In Fig.~\ref{fig:crop_galaxies} we show small crops of 7$\times$7 arcseconds (47$\times$47 kpc at the cluster's redshift) of the EL galaxies plus the two NEL blue galaxies. The top and middle rows show galaxies from 0307-6225S and 0307-6225N, respectively, while the bottom row shows galaxies that do not belong to the clusters' cores.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{plots/0307_phase-space_emission_v5.pdf}
\vskip-0.13in
\caption{Phase-space diagram of spectroscopic members with SNR $\geq3$ and $i_{\rm auto} < m^* + 2$. Galaxies are colored dark red, dark orange and black if they were classified as belonging to 0307-6225N, 0307-6225S or to neither of them, respectively. Crosses are galaxies classified as non-emission line galaxies. Emission line galaxies which belong to (or have redder colors than) the RCS are plotted as circles, triangles are galaxies with colors bluer than the RCS, whereas inverted triangles are blue post-starburst (filled) or passive (unfilled) galaxies. The sizes of EL galaxies are correlated with their EW(OII) strength. Filled circles correspond to SSB galaxies.}
\label{fig:phase_spectro}
\end{figure}
\begin{figure*}
\centering
\includegraphics[width=0.16\linewidth]{plots/13_Red_SSB_1.png}
\includegraphics[width=0.16\linewidth]{plots/3_Red_SSB_1.png}
\includegraphics[width=0.16\linewidth]{plots/66_Blue_SF_1.png}
\includegraphics[width=0.16\linewidth]{plots/64_Blue_SF_1.png}
\includegraphics[width=0.16\linewidth]{plots/63_Blue_SF_1.png}
\includegraphics[width=0.16\linewidth]{plots/65_Blue_A+em_1.png}
\\
\includegraphics[width=0.16\linewidth]{plots/73_Blue_SF_2.png}
\includegraphics[width=0.16\linewidth]{plots/40_Red_SF_2.png}
\includegraphics[width=0.16\linewidth]{plots/75_Blue_A+em_2.png}
\\
\includegraphics[width=0.16\linewidth]{plots/41_Red_SF_0.png}
\includegraphics[width=0.16\linewidth]{plots/71_Blue_PSB_0.png}
\includegraphics[width=0.16\linewidth]{plots/72_Blue_Passive_0.png}
\caption{Pseudo-color crop images (box size of 7$\times$7 arcseconds) of the SF, A+em, SSB and PSB galaxies from our sample (plus one blue passive galaxy). On the bottom left of each image the spectral type of the galaxy is shown, with a white bar on the bottom right representing the scale of 1 arcsecond. Galaxies in the top and middle rows belong to 0307-6225S and 0307-6225N, respectively, while galaxies in the bottom row are those that do not belong to either of the aforementioned.
}
\label{fig:crop_galaxies}
\end{figure*}
\subsection{The particular case of 0307-6225S}\label{sec:southernstr}
Fig.~\ref{fig:gal_properties} shows that 0307-6225S has (1) the bluest members of our sample and (2) two very bright galaxies with nearly the same magnitudes (galaxies with ID 35 and 46 from the MUSE-1 field in Table~\ref{tab:all_objs_properties}, marked with superscripts $S_1$ and $S_2$, respectively). In Fig.~\ref{fig:southernstr} we provide a zoom of Fig.~\ref{fig:rgb_image} to show the southern structure in more detail. Red circles mark spectroscopic members in this region with SNR $>3$ and $i_{\rm auto} < m^* + 2$. The two brightest galaxies are the two elliptical galaxies in the middle marked with red stars, with $\Delta m_i = 0.0152 \pm 0.0063$ and $\Delta v = 600$ km s$^{-1}$. The small on-sky separation between their centers ($\sim$41 kpc) suggests that these galaxies could be interacting with each other.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{plots/0307S_v4.pdf}
\vskip-0.13in
\caption{Zoom of Fig.~\ref{fig:rgb_image} into 0307S, with the white bar on the top left showing the scale of the image. Spectroscopic members with SNR < 3 or $i_{\rm auto} \geq m^* + 2$ are shown as cyan circles, while red and green circles/stars represent passive and emission-line cluster galaxies, respectively, where emission-line refers to SF or SSB galaxies. The two brightest galaxies are marked with stars.
}
\label{fig:southernstr}
\end{figure}
In Fig.~\ref{fig:0307S_vel_distr} we show the peculiar velocity distribution, with respect to the redshift of 0307-6225S ($z=0.5810$), of all galaxies (black unfilled histogram) and of RCS galaxies (red hashed lines) belonging to this structure. The blue shaded area denotes the area within 1$\sigma_v$ for this structure, and the black dashed lines represent the peculiar velocities of the two BCG candidates, where the one to the south has a peculiar velocity closer to 0 ($\Delta v = -8$ km s$^{-1}$). For this reason, we choose this galaxy (ID 46) as the BCG of 0307-6225.
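The peculiar velocities quoted throughout follow the standard approximation $v_{\rm pec} = c\,(z_{\rm gal} - z_{\rm cl})/(1 + z_{\rm cl})$; a minimal Python sketch (with a hypothetical member redshift, not a galaxy from our catalog):

```python
C_KM_S = 299_792.458  # speed of light in km/s

def peculiar_velocity(z_gal, z_cluster):
    """Line-of-sight peculiar velocity (km/s) relative to the cluster
    redshift: v_pec = c (z_gal - z_cl) / (1 + z_cl)."""
    return C_KM_S * (z_gal - z_cluster) / (1.0 + z_cluster)

# Hypothetical member of 0307-6225S (z_cl = 0.5810 from the text):
v = peculiar_velocity(0.5842, 0.5810)  # ~607 km/s
```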
\begin{figure}
\centering
\includegraphics[width=\linewidth]{plots/0307_southveldistr.png}
\vskip-0.13in
\caption{Velocity distribution of galaxies belonging to the 0307-6225S substructure, estimated with respect to its redshift. Hatched red lines denote where the RCS members are located. Dashed black lines show the velocities of the two brightest galaxies in this subsample, with the shaded area representing the area within 1$\sigma_v$ for 0307-6225S.}
\label{fig:0307S_vel_distr}
\end{figure}
\section{Discussion}
\label{sec:discussion}
\subsection{Merging history of 0307-6225S and 0307-6225N}\label{sec:disc_merger}
Here we discuss the estimated masses, how they compare with previous estimations, and the risks of using scaling relations to study dynamically perturbed systems. We then discuss how the merging parameters derived by \texttt{MCMAC} could be further refined by constraining the merging angle, especially the error bars on the estimated times for an outgoing and an incoming system. Finally, we show how the comparison with simulations favors an outgoing scenario given the estimated times and the X--ray morphology, with the latter also indicating a preferred mass ratio of 1:3.
\subsubsection{Mass estimation of a merging cluster}\label{sec:mass_overstimation_merging}
Recovering the merging history of two observed galaxy clusters is not trivial. Most methods require a mass estimation of the colliding components, which is not always an easy task \citep[see merging effects on cluster mass in][]{takizawa10,nelson12,nelson14}. The use of lensing measurements is one of the most precise ways of obtaining a mass estimation for the components \citep[e.g.][]{clowe06,Pandge19,Monteiro2020}; however, this method requires deep, high-quality photometric images to measure the distortions.
\citet{Dietrich2019} used the same ground-based optical imaging described in this paper to measure the weak lensing surface mass density of SPT-CLJ0307-6225. However, their result shows that for this cluster the signal was not strong enough (as shown in their Figure B4), as the peak of the surface mass density is at a distance greater than R$_{200}$ from the SZ center.
The velocity dispersion (along the line of sight) of the galaxies of a cluster can also be used to infer its mass, using for example the virial theorem \citep[e.g.][]{Rines2013,White2015} or scaling relations \citep[e.g.][]{Evrard2008, Saro2013, Munari2013, Dawson2015, Monteiro-Oliveira21}. For the mass estimations of our structures we use the latter, although it is important to note that these measurements are also affected by the merging event, as colliding structures can show alterations in the velocities of their members. \citet{White2015} argues that the masses of merging systems estimated using scaling relations can be overestimated by a factor of two. Since we have a separation of $\sim$1.1 Mpc between the two structures and the velocity distribution of each cluster is Gaussian, we believe the overestimation is low. Also, the velocity difference of $|\Delta v_{N-S}|= 342$ km s$^{-1}$ suggests that the merger is taking place close to the plane of the sky, similar to what \citet{Mahler2020} find for the dissociative merging galaxy cluster SPT-CLJ0356-5337. Furthermore, the velocity difference between the BCGs and the redshift of each substructure is $\leq$20 km s$^{-1}$ for both 0307-6225N and 0307-6225S, which might indicate that the two merging substructures were not strongly dynamically perturbed by the merger. To further minimize the bias of using scaling relations, we use only RCS galaxies; blue galaxies are nevertheless taken into account when reporting the number of members in Table~\ref{tab:substructures}, and also when analysing the galaxy populations below.
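A $\sigma_v$--$M$ scaling relation of the \cite{Munari2013} form, $\sigma_{\rm 1D} = A_{\rm 1D}\,[h(z)\,M_{200}/10^{15}\,{\rm M}_\odot]^{\alpha}$, can be inverted for the mass as in the sketch below. The coefficient values and cosmological parameters are assumptions for illustration (one of the published fits; the appropriate fit depends on whether galaxies, subhalos, or dark matter particles are used), not the exact numbers of our analysis.

```python
import math

# One of the Munari et al. (2013) sigma_1D--M200 fits (values assumed here;
# the paper lists slightly different coefficients per tracer):
A_1D = 1177.0   # km/s
ALPHA = 0.364

def m200_from_sigma(sigma_v, z, h0=0.7, om=0.3, ol=0.7):
    """Invert sigma_1D = A_1D [h(z) M200 / 1e15 Msun]^alpha for M200 (Msun).

    h(z) is the dimensionless Hubble parameter H(z)/(100 km/s/Mpc) for an
    assumed flat LCDM cosmology."""
    hz = h0 * math.sqrt(om * (1.0 + z) ** 3 + ol)
    return 1.0e15 / hz * (sigma_v / A_1D) ** (1.0 / ALPHA)

# A cluster with sigma_v = 800 km/s at z = 0.58 yields a few 1e14 Msun:
m = m200_from_sigma(800.0, 0.58)
```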
It is worth noting that \cite{Ferragamo2020} recently suggested correction factors for both $\sigma_v$ and the estimated mass to account for cases with a low number of galaxies. They also apply other correction factors to turn $\sigma_v$ into an unbiased estimator by taking into account, for example, interlopers and the radius within which the sources are enclosed. However, applying these corrections does not change our results drastically, with the newly derived masses being within the errors of the previously derived ones.
To check how masses derived from the velocity dispersion of merging galaxy clusters could be overestimated, we estimate the masses, following the equations of \cite{Munari2013}, of the simulated clusters from the 1:3 merging simulation (from $\S$\ref{sec:history}) at all times (and all $b$) using their velocity dispersion. It is worth noting that we cannot select RCS members when estimating the velocity dispersions, since the simulation gives no information on the galaxy population. Fig.~\ref{fig:masses_sims} shows the $\sigma_v$-derived masses at different times for the 1:3 mass ratio simulation for different values of $b$. The black dotted lines represent the collision time, and the dashed lines with the gray shaded areas represent the TSPs and their errors from Table \ref{tab:simulation_prop}, respectively. It can be seen that before the collision and a few Gyr after it, the masses are overestimated, especially for the lower-mass cluster. However, near the TSP0 times, the derived masses agree, within the errors, with the true masses. This is also true for TSP1 with $b=500$ kpc, but at the same time with $b=1000$ kpc the main cluster's mass is actually underestimated. Although we cannot further constrain the masses from the simulation using only RCS members, this does suggest that our derived masses are not strongly affected by the merger itself, given the possible times since collision.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{plots/masses_sim1to3.png}
\vskip-0.13in
\caption{Velocity dispersion derived masses for the 1:3 mass ratio simulations used in this work, with different $b$. The x-axis is the time since the simulation started running, with the blue and orange dots corresponding to the main cluster and the secondary cluster, respectively. The blue and orange dashed lines represent the masses of $6\times10^{14}$ and $2\times10^{14}$ M$_\odot$, respectively. Black dotted lines mark the collision times estimated following $\S$\ref{sec:history}. Vertical black dashed lines mark the estimated TSP0 and TSP1 shown in Table \ref{tab:simulation_prop}, with the gray area being the errors on this estimation.}
\label{fig:masses_sims}
\end{figure}
\citet{bleem15b} estimated a total Sunyaev-Zeldovich based mass of M$_{500,\rm SZ}= 5.06\pm0.90\times 10^{14}$ $h_{70}^{-1}$ M$_{\odot}$, corresponding to M$_{200,\rm SZ}= 7.63\pm1.37\times 10^{14}$ $h_{70}^{-1}$ M$_{\odot}$ \citep{zenteno2020}, which is in agreement with our estimation of the total dynamical mass from scaling relations, M$_{200,\rm dyn}$ = M$_{\rm S}$ + M$_{\rm N}$ = $5.55 \pm 2.33 \times 10^{14}$ M$_{\odot}$, at the 1$\sigma$ level.
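The quoted 1$\sigma$ agreement follows from comparing the mass difference with the quadrature sum of the two uncertainties, as this short check illustrates:

```python
import math

# Masses in units of 1e14 Msun, from the text:
m_sz,  e_sz  = 7.63, 1.37   # M200 from the SZ signal
m_dyn, e_dyn = 5.55, 2.33   # M_S + M_N from scaling relations

diff = abs(m_sz - m_dyn)        # 2.08 (x 1e14 Msun)
err = math.hypot(e_sz, e_dyn)   # ~2.70, combined 1-sigma uncertainty
consistent = diff < err         # True -> agreement at the 1-sigma level
```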
\subsubsection{Recovery of the merger orbit}\label{sec:dis_recoverymh}
With the masses estimated, the merging history can be recovered by using a two-body model \citep{Beers1990,Cortese2004,Gonzalez2018} or by using hydrodynamical simulations constrained with the observed properties of the merging system \citep[e.g.][]{Mastropietro2008, Machado+2015, Doubrawa2020,Moura21}, with the disadvantage that the latter method is computationally expensive. The method presented by \citet{Dawson2013}, \texttt{MCMAC}, is a good compromise between computational time and accuracy, with a dynamical parameter estimation accuracy of about 10\% for two dissociative mergers: the Bullet Cluster and the Musket Ball Cluster.
\begin{figure*}
\centering
\includegraphics[width=\linewidth]{plots/density_all_sims_v2.pdf}
\vskip-0.13in
\caption{Density maps for the simulated 1:3 mass ratio cluster merger. Each row represents the time evolution around the TSP0 for the different impact parameters $b=0,500,1000$ kpc shown at the top, middle and bottom rows, respectively. For each panel, the simulation time is written on the bottom left.}
\label{fig:density_sims_all}
\end{figure*}
\texttt{MCMAC} gives as a result two different times since collision, TSP0=0.96$^{+0.31}_{-0.18}${} Gyr and TSP1=2.60$^{+1.07}_{-0.53}${} Gyr, for an outgoing and an incoming merger, respectively, after the first pericentric passage. A more detailed analysis of the X--ray data could further constrain both the \texttt{MCMAC} output, e.g. by constraining the merging angle \citep{Monteiro2017, Monteiro2018} and the TSP \citep{Dawson2013, Ng2015, Monteiro2017} from shocks (if any), and also the merging scenario from hydrodynamical simulations, e.g. by comparing the temperature maps or by running a simulation that recovers the features (both of the galaxies and of the ICM) of this particular merger. This is particularly interesting given that the simulations we use for comparison have a merger axis angle of $\alpha = 0.0$ deg. \cite{Dawson2013} runs \texttt{MCMAC} on the Bullet Cluster data and finds $\alpha = 50^{+23}_{-23}$ deg; however, by adding a prior using the X--ray shock information, he is able to constrain the angle to $\alpha = 24^{+14}_{-8}$ deg, which is closer to the plane of the sky and also significantly decreases the error bars on the estimated collision times.
For instance, if we assume that the merger is nearly in the plane of the sky and constrain the merging angle $\alpha$ from \texttt{MCMAC} to lie between 0$^\circ$ and $45^\circ$, then the resulting values are $\alpha = 25^{+6}_{-6}$ deg, TSP0=$0.73^{+0.09}_{-0.09}$ Gyr and TSP1=$2.10^{+0.51}_{-0.30}$ Gyr, which are still consistent with the previously estimated values (within the errors) and have smaller error bars. However, the estimated TSP1 is still higher than any of those estimated from the simulations (see Table~\ref{tab:simulation_prop}).
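In practice, imposing such an angle constraint on \texttt{MCMAC}-style posterior samples amounts to discarding samples outside the allowed angle range and re-summarizing the TSP distribution. A minimal sketch on synthetic, hypothetical samples (not our actual \texttt{MCMAC} chains):

```python
import numpy as np

def constrain_by_angle(alpha_deg, tsp_gyr, alpha_max=45.0):
    """Re-summarize a TSP posterior after imposing a flat merger-angle prior
    alpha in [0, alpha_max] deg (a crude stand-in for an X-ray-shock prior).

    alpha_deg, tsp_gyr : matched arrays of posterior samples.
    Returns (median, -err, +err) as the 50th and 16/84th percentiles."""
    keep = (alpha_deg >= 0.0) & (alpha_deg <= alpha_max)
    lo, med, hi = np.percentile(tsp_gyr[keep], [16, 50, 84])
    return med, med - lo, hi - med

# Toy correlated samples just to exercise the function:
rng = np.random.default_rng(1)
alpha = rng.uniform(0.0, 90.0, 20000)
tsp = 0.5 + 0.02 * alpha + rng.normal(0.0, 0.1, alpha.size)
med, lo_err, hi_err = constrain_by_angle(alpha, tsp)
```

Since TSP and $\alpha$ are positively correlated in this toy example, cutting the high-angle tail both lowers the median and shrinks the credible interval, mirroring the behavior described above.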
A similar system is the one studied by \citet{Dawson2012}: DLSCL J0916.2+2951, a major merger at $z=0.53$ with a projected distance of $1.0^{+0.11}_{-0.14}$ Mpc. Their dynamical analysis gives masses similar to those of our structures (when using $\sigma_v - M$ scaling relations), with a mass ratio between their northern and southern structures of $M_{\rm S}/ M_{\rm N} = 1.11 \pm 0.81$. Using an analytical model, they were able to recover a merging angle $\alpha = 34^{+20}_{-14}$ deg and a physical separation of $d_{\rm 3D} = 1.3^{+0.97}_{-0.18}$ Mpc, both values in agreement with what we found. Furthermore, their time since collision, TSP $= 0.7^{+0.2}_{-0.1}$ Gyr, is also similar to the one found for our outgoing system; however, they do not differentiate between an outgoing and an incoming system.
Regarding the in-between structure, the estimated velocity dispersion is very high ($\sigma_v = 1400$ km s$^{-1}$) and the density map shows that this region is not as dense as the other two. To check whether this is common in a merger of two galaxy clusters, we examine how the density map varies in the 1:3 mass ratio simulations near the estimated TSP0. In Fig.~\ref{fig:density_sims_all} we show, in each row, the density maps of the simulations, with the corresponding time at the bottom left of each panel and the impact parameter of the row at the top left of its first panel. Levels start at 100 galaxies Mpc$^{-2}$ and increase in steps of 50. The cluster with 6$\times$10$^{14}$ M$_\odot$ is located at the bottom. The middle column shows the density map at TSP0, with the previous and next two snapshots also shown. At different times, the density maps for the same impact parameter are rather irregular, with the in-between region changing from snapshot to snapshot. In particular, both $b=0$ kpc and $b=1000$ kpc show an overdense in-between area near TSP0. However, this is not the case in other snapshots, so we cannot state with confidence that it is common for a merging cluster to show such a pronounced in-between overdense region.
\subsubsection{Constraining the TSP with simulations}\label{sec:constraining_tsc_with_sims}
We compare the results derived by \texttt{MCMAC} with those estimated from a hydrodynamical simulation of two merging structures with a mass ratio of 1:3 \citep{ZuHone2011, ZuHone2018}. We chose this ratio since the X--ray morphologies of the simulation and the system are a better match than for the 1:1 mass ratio, where the X--ray intensity from the simulation is similar for the two structures (see Fig.~\ref{fig:sim_TSP0_xray} and \ref{fig:sim_TSP1_xray}), unlike our system, which has two distinctly different structures (see the orange contours in Fig.~\ref{fig:rgb_image}).
To compare the results from \texttt{MCMAC} with the simulation, it is necessary to have a good estimate of (1) the time when the two structures have their first pericentric passage and (2) the TSP$_{\rm sim}$ for the outgoing and incoming scenarios. For the former, we determined the time in the simulation at which the separation of the dark matter halos was at its minimum, while for the latter we used the time at which the separation was similar to that of our BCGs. For each $b$, the estimated TSP0$_{\rm sim}$ is smaller than but in agreement with the result from \texttt{MCMAC}; however, the estimated TSP1$_{\rm sim}$ is never in agreement (at least at $1\sigma$).
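The timing procedure just described (pericenter from the minimum halo separation; TSP0/TSP1 from the separation matching the observed BCG distance on the outgoing and incoming legs) can be sketched as follows, on a synthetic separation history rather than the actual simulation outputs:

```python
import numpy as np

def merger_times(t_gyr, sep_kpc, d_obs_kpc=1100.0):
    """Toy version of the timing procedure described in the text.

    t_gyr, sep_kpc : simulated halo-separation history (hypothetical arrays).
    d_obs_kpc      : observed BCG separation (~1.1 Mpc in our case).
    Returns (t_coll, TSP0, TSP1): pericenter time, outgoing and incoming
    times since pericentric passage."""
    sep = np.asarray(sep_kpc)
    i_peri = int(np.argmin(sep))                         # pericentric passage
    after = sep[i_peri:]
    i_out = i_peri + int(np.argmax(after >= d_obs_kpc))  # outgoing crossing
    i_apo = i_peri + int(np.argmax(after))               # apocenter
    i_in = i_apo + int(np.argmax(sep[i_apo:] <= d_obs_kpc))  # incoming crossing
    t0 = t_gyr[i_peri]
    return t0, t_gyr[i_out] - t0, t_gyr[i_in] - t0

# Synthetic separation curve (infall, pericenter, rebound, turnaround, re-infall):
t = np.linspace(0.0, 10.0, 1001)
sep = 2000.0 * np.abs(np.cos(0.5 * t)) + 100.0
t_coll, tsp0, tsp1 = merger_times(t, sep)
```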
Using dark-matter-only simulations, \cite{Wittman2019} looked for halos with configurations similar to those of observed merging clusters (such as the Bullet and Musket Ball clusters) and compared the times since collision to those derived by \texttt{MCMAC} and other hydrodynamical simulations, finding that the merging angles and TSP derived from the halos are consistent with those of the hydrodynamical simulations. However, both the outgoing and incoming TSP and the angles are lower than those derived by \texttt{MCMAC}; they attribute the differences to the \texttt{MCMAC} assumption of zero distance between the structures at the collision time.
\citet{Sarazin2002} discuss that most merging systems should have a small impact parameter, of the order of a few kpc. \citet{Dawson2012} argues that, given the displayed gas morphology, the dissociative merging galaxy cluster DLSCL J0916.2+2951 has a small impact parameter. The argument is that simulations show that the morphology of mergers with small impact parameters is elongated transverse to the merger direction \citep{Schindler1993,Poole2006, Machado2013}. The X--ray morphology shown in this paper is similar to that of \citet{Dawson2012}. It is also similar to that of Abell 3376 \citep{Monteiro2017}, a merging galaxy cluster which was simulated by \citet{Machado2013} with different impact parameters ($b=0, 150, 350$ and $500$ kpc), with their results suggesting that a model with $b<150$ kpc is preferred. Given the similarity between the X--ray morphology of SPT-CLJ0307-6225 and that of other systems with small impact parameters, such as Abell 3376 and DLSCL J0916.2+2951, we suggest that the simulations with $b=0$ kpc or $b=500$ kpc are better representations of our system. This implies that the preferred scenario for this merging cluster is that of an outgoing system or a system very close to turnaround. This can also be seen when comparing the X--ray morphology of SPT-CLJ0307-6225 with that of the 1:3 mass ratio simulations at the estimated TSP0$_{\rm sim}$ and TSP1$_{\rm sim}$, shown in Fig.~\ref{fig:sim_TSP0_xray} and Fig.~\ref{fig:sim_TSP1_xray}, respectively, where the X--ray contours at TSP0$_{\rm sim}$ are noticeably more similar to the observations than those at TSP1$_{\rm sim}$ for $b=500,1000$ kpc.
\subsection{Galaxy population in a merging galaxy cluster}\label{sec:disc_galpop}
From Fig.~\ref{fig:gal_properties}, it is noticeable that EL galaxies are located preferentially towards the cluster cores. We divide the discussion of the galaxy population by studying the differences between the two clumps, analysing the red EL galaxy population, and then the population in the area in between the merging structures.
\subsubsection{Comparison between North and South}
One interesting optical feature of 0307-6225S is the pair of very bright galaxies ($d_{\rm proj}=41$ kpc) at the center of its distribution (Fig.~\ref{fig:southernstr}). A similar, but rather extreme, case is that of the galaxy cluster Abell 3827 at $z=0.099$, which shows evidence for a recent merger with four nearly equally bright galaxies within 10 kpc of the central region \citep{Carrasco2010, Massey2015}. Using GMOS data, \cite{Carrasco2010} found that the peculiar velocities of at least three of these galaxies are within $\sim$300 km s$^{-1}$ of the cluster redshift, with the remaining one having an offset of $\sim$1000 km s$^{-1}$.
BCGs have low peculiar velocities in relaxed clusters, whereas for disturbed clusters their peculiar velocities are expected to be 20--30\% of the velocity dispersion of the cluster \citep{Yoshikawa2003, Ye2017}. For 0307-6225S, one of the bright galaxies has a peculiar velocity of $\sim$666 km s$^{-1}$, which is $\sim$88\% of the velocity dispersion of this subcluster. This could be evidence of a past merger between 0307-6225S and another cluster prior to the merger with 0307-6225N. The AD test is consistent with a Gaussian distribution, and the result does not change when applying a 3-$\sigma$ clipping iteration, which could indicate that the substructure is a post-merger system.
\cite{Raouf2019R} use the magnitude difference between the first and second brightest galaxies of a group ($\Delta M_{12}$), along with the distance from the BCG to the luminosity center ($D_{\rm offset}$), to separate relaxed from unrelaxed systems. They propose $\Delta M_{12} < 0.5$ and $\log_{10}(D_{\rm offset}) > 1.8$ to define unrelaxed clusters, whereas relaxed systems are defined by $\Delta M_{12} > 1.7$ and $\log_{10}(D_{\rm offset}) < 1.8$. In our case, we only check the magnitude difference, since we are already studying a merging cluster.
For 0307-6225S the magnitude difference is $\Delta M_{12} = 0.0152 < 0.5$, which supports the scenario that 0307-6225S suffered a merger prior to the one with 0307-6225N. Central galaxies take $\approx$1 Gyr to settle to the cluster centre during the post-merger phase \citep{White1976,Bird1994}, meaning that this previous merger must have taken place over 1 Gyr before the observed merger between 0307-6225S and 0307-6225N. On the other hand, for 0307-6225N the value is $\Delta M_{12} \approx 1.8 > 1.7$, meaning that 0307-6225N was a relaxed system prior to this merger.
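The magnitude-gap criterion applied above (using only the $\Delta M_{12}$ thresholds quoted from \citealt{Raouf2019R}, and ignoring the $D_{\rm offset}$ condition, as in our analysis) reduces to a simple threshold test:

```python
def dynamical_state_from_gap(delta_m12):
    """Magnitude-gap classification using only the Delta M_12 thresholds
    quoted in the text; the D_offset condition is deliberately ignored."""
    if delta_m12 < 0.5:
        return "unrelaxed"
    if delta_m12 > 1.7:
        return "relaxed"
    return "intermediate"

dynamical_state_from_gap(0.0152)  # 0307-6225S -> "unrelaxed"
dynamical_state_from_gap(1.8)     # 0307-6225N -> "relaxed"
```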
Regarding the overall galaxy population, the fraction of EL galaxies in 0307-6225S (24\%) is nearly two times that of 0307-6225N ($\sim$13\%), although consistent within 1$\sigma$. However, it can be seen in Fig.~\ref{fig:phase_spectro} that all the EL galaxies from 0307-6225N have small peculiar velocities (with the SF galaxies within 1$\sigma_v$), while for 0307-6225S we notice that most of the blue SF galaxies have velocities higher than 2$\sigma_v$. These galaxies, which are bluer than the blue EL galaxies of 0307-6225N, could be in the process of being accreted.
Considering the scenario in which mergers between clusters accelerate the quenching of galaxies by boosting their star formation activity \citep{Stroe2014,Stroe2015}, the fact that there are fewer star-forming galaxies towards the central region of 0307-6225S compared to 0307-6225N could indicate that the previous merger of 0307-6225S already exhausted the star formation of the cluster, with the observed blue SF galaxy population (with larger peculiar velocities) being recently accreted, or in the process of being accreted, from the field.
\subsubsection{Red EL galaxies}
Of particular interest are our EL galaxies located in the RCS. Out of the 4 red EL galaxies, 3 are located in the cores of the two main structures, with 2 of them classified as SSB. Most of the blue SF galaxies are best matched by a high-redshift star forming or late-type emission galaxy template, whereas most of the red SF galaxies are best matched with an early-type absorption galaxy template.
\cite{Koyama2011} studied the region in and around the $z=0.41$ rich cluster CL0939+4713 (A851) using H$\alpha$ imaging to identify SF emission-line galaxies. A851 is a dynamically young cluster with numerous groups in the outskirts. They found that the red H$\alpha$ emitters are preferentially located in low-density environments, such as the groups and the outskirts, whereas in the core of the cluster they found no red H$\alpha$ emitters. \cite{ma10} studied the galaxy population of the merging galaxy cluster MACS J0025.4-1225 at $z=0.586$. In the areas around the cluster cores (with a radius of 150 kpc) they find emission-line galaxies corresponding to two spiral galaxies (one for each subcluster), plus some spiral galaxies without spectroscopic information, accounting for 14\% of the total galaxies within that radius. Their Fig. 15 shows that they also have red EL galaxies; however, they do not specify whether the two spiral galaxies within the cluster cores are part of this population. The results from both \cite{ma10} and \cite{Koyama2011} indicate that red EL galaxies are not likely to be found within the cores of dense regions.
It can be observed from Fig.~\ref{fig:crop_galaxies} that most of our red EL galaxies do not have close neighbours that could supply gas to them. It is possible, then, that these objects accreted gas from the ICM, with the merger subsequently triggering the SF. Given the peculiar velocities of the two SSB galaxies from our sample (which are classified as red), at least one of them was most likely part of the merging event. If, for example, merger shocks travelling through the ICM can trigger a starburst episode in galaxies with gas reservoirs for a few 100 Myr \citep[][]{Owers2012,Stroe2014,Stroe2015}, then these galaxies would make the outgoing scenario a better candidate than the incoming one.
\subsubsection{Area in-between the merger}
With respect to the in-between area, it is mostly composed of red passive galaxies, with the only EL galaxy belonging to the RCS. Moreover, the two blue galaxies are classified as passive and PSB. \citet{ma10} found a population of post-starburst galaxies in the major cluster merger MACS J0025.4-1225, in the region between the two merging components, where, given the timescales, their starburst episodes occurred during the first passage. Similarly to our blue galaxies in this region, they found that their colors lie between those of blue EL galaxies and red passive galaxies.
\section{Summary and Conclusions}
\label{sec:conclusions}
In this paper we use deep optical imaging and new MUSE spectroscopic data, along with archival GMOS data, to study the photometric and spectral properties of the merging cluster candidate SPT-CLJ0307-6225, estimating redshifts for 69 new galaxy cluster members. We used these data to characterize (a) its merging history by means of a dynamical analysis and (b) its galaxy population by means of their spectroscopic and photometric properties.
With respect to the merging history, we were able to confirm the merging state of the cluster and conclude that:
\begin{itemize}
\item Using the galaxy surface density map of the RCS galaxies we can see a bi-modality in the galaxy distribution. However, the cluster does not show signs of substructures along the line-of-sight.
\item We assign galaxy members to each substructure by means of the \textsc{DBSCAN} algorithm. We name the two main substructures as 0307-6225N and 0307-6225S, referring to the northern and southern overdensities, respectively.
\item For each substructure we measured the redshift, velocity dispersion and velocity-derived masses from scaling relations. We find a mass ratio of $M_S/M_N \approx$ 1.3{} and a velocity difference of $v_N-v_S=342$ km s$^{-1}$ between the northern and southern structures.
\item To estimate the time since collision we use the \textsc{MCMAC} algorithm, which gives the times for both an outgoing and an incoming system. By means of hydrodynamical simulations we constrain the most likely solution to be that of an outgoing system with TSP=0.96$^{+0.31}_{-0.18}${} Gyr.
\item The outgoing configuration is also supported by the comparison between the observed and simulated X--ray morphologies. This comparison also provides a constraint on the masses, where a merger with a mass ratio of 1:3 seems more likely than a 1:1 merger.
\end{itemize}
With respect to the galaxy population, we find that:
\begin{itemize}
\item EL galaxies are located preferentially near the cluster cores (in projected separation), where the low average peculiar velocities of red SF galaxies indicate that they were most likely accreted before the merger between 0307-6225N and 0307-6225S occurred.
\item EL galaxies on 0307-6225N have smaller peculiar velocities than those of 0307-6225S, where in the latter it appears that blue SF galaxies were either recently accreted or are in the process of being accreted.
\item 0307-6225S shows two possible BCGs, which are very close in projected space. The magnitude and velocity differences between them are $\sim0$ mag and $\sim$674 km s$^{-1}$, respectively, with one of them having a peculiar velocity close to 0 km s$^{-1}$ with respect to 0307-6225S, while the other is close to the estimated $1\sigma_v$. However, the velocity distribution of the cluster shows no signs of being perturbed. This suggests that 0307-6225S could be the result of a previous merger which was at its last stage when the observed merger occurred.
\item With respect to the in-between region, the galaxy population is comprised mostly of red galaxies, with the population of blue galaxies classified as passive or PSB, with colors close to the RCS.
\end{itemize}
In summary, our work supports a nearly face-on, in the plane of the sky, major merger scenario for SPT-CLJ0307-6225. This interaction accelerates the quenching of galaxies as a result of a rapid enhancement of their star formation activity and the subsequent gas depletion. This is in line with literature findings indicating that the dynamical state of a cluster merger has a strong impact on the galaxy population. Of particular importance is to differentiate dynamically young and old mergers. Comparisons between such systems will further increase our understanding of the connection between mergers and the quenching of star formation in galaxies. In future studies, we will replicate the analysis performed on SPT-CLJ0307-6225 for a larger cluster sample, including the most disturbed cluster candidates in the SPT sample. These studies will be the basis for a comprehensive analysis of star formation in mergers over a wide dynamical range.
\section*{Acknowledgements}
DHL acknowledges financial support from the MPG Faculty Fellowship program, the new ORIGINS cluster funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy - EXC-2094 - 390783311, and the Ludwig-Maximilians-Universit\"at Munich. FA was supported by the doctoral thesis scholarship of ANID-Chile, grant 21211648. FAG acknowledges financial support from FONDECYT Regular 1211370. FAG and FA acknowledge funding from the Max Planck Society through a Partner Group grant. J. L. N. C. is grateful for the financial support received from the Southern Office of Aerospace Research and Development of the Air Force Office of the Scientific Research International Office of the United States (SOARD/AFOSR) through grants FA9550-18-1-0018 and FA9550-22-1-0037. AS is supported by the ERC-StG `ClustersXCosmo' grant agreement 716762, by the FARE-MIUR grant `ClustersXEuclid' R165SBKTMA, and by INFN InDark Grant.
\section*{Data Availability}
\textit{Chandra} and Megacam/Magellan data are available upon request from McDonald, M. and Stalder, B., respectively. GMOS data are available in \cite{bayliss16}. The raw MUSE data are available from the ESO Science Archive (\url{https://archive.eso.org/}, with program IDs: 097.A-0922(A) and 100.A-0645(A)). Additional data on derived physical parameters are available in this paper.
\section*{Affiliations}
\noindent
{\it
$^{1}$Faculty of Physics, Ludwig-Maximilians-Universit\"{a}t, Scheinerstr.\ 1, 81679 Munich, Germany \\
$^{2}$Cerro Tololo Inter-American Observatory, NSF's National Optical-Infrared Astronomy Research Laboratory, Casilla 603, La Serena, Chile\\
$^{3}$Departamento de Astronom\'ia, Universidad de La Serena, Avenida Juan Cisternas 1200, La Serena, Chile\\
$^{4}$School of Physics, University of Melbourne, Parkville, VIC 3010, Australia \\
$^{5}$Instituto de F\'isica y Astronom\'ia, Universidad de Valpara\'iso, Avda. Gran Bretaña 1111, Valpara\'iso, Chile\\
$^{6}$Academia Sinica, Institute of Astronomy and Astrophysics, 11F of AS/NTU Astronomy-Mathematics Building, No.1, Sec. 4, Roosevelt Rd, Taipei 10617, Taiwan, R.O.C.\\
$^{7}$Universidade Estadual de Santa Cruz, Laborat\'orio de Astrof\'isica Te\'orica e Observacional - 45650-000, Ilh\'eus-BA, Brazil\\
$^{8}$Instituto de Investigaci\'on Multidisciplinar en Ciencia y Tecnolog\'ia, Universidad de La Serena, Ra\'ul Bitr\'an 1305, La Serena, Chile\\
$^{9}$ Gemini Observatory, NSF’s National Optical-Infrared Astronomy Research Laboratory, Casilla 603, La Serena, Chile \\
$^{10}$European Southern Observatory, Alonso de Cordova 3107, Vitacura, Casilla 19001, Santiago de Chile, Chile \\
$^{11}$Center for Astrophysics — Harvard \& Smithsonian, 60 Garden Street, Cambridge, MA 02138, USA\\
$^{12}$Vera C. Rubin Observatory Project Office, 950 N. Cherry Ave, Tucson, AZ 85719, USA\\
$^{13}$Kavli Institute for Astrophysics and Space Research, Massachusetts Institute of Technology, 77 Massachusetts Avenue, Cambridge, MA 02139 \\
$^{14}$ Department of Physics, University of Cincinnati, Cincinnati, OH 45221, USA\\
$^{15}$ Department of Physics and Astronomy, University of Missouri--Kansas City, 5110 Rockhill Road, Kansas City, MO 64110, USA\\
$^{16}$ Huntingdon Institute for X-ray Astronomy, LLC, 10677 Franks Road, Huntingdon, PA 16652, USA\\
$^{17}$Department of Geography, Ludwig-Maximilians-Universit\"at, Luisenstr 37, D-80333 Munich, Germany \\
$^{18}$Potsdam Institute for Climate Impact Research, Telegrafenberg, 14473 Potsdam, Germany\\
$^{19}$Department of Physics and Astronomy, University of Potsdam, Karl-Liebknecht-Str. 24/25, 14476 Potsdam-Golm, Germany\\
$^{20}$Department of Astronomy, University of Michigan, 1085 South University Ave, Ann Arbor, MI 48109, USA\\
$^{21}$Direcci\'on de Investigaci\'on y Desarrollo, Universidad de La Serena, Av. Ra\'ul Bitr\'an Nachary Nº 1305, La Serena, Chile.\\
$^{22}$Dipartimento di Fisica, Sezione di Astronomia, Universit\'a di Trieste, Via Tiepolo 11, I-34143 Trieste, Italy
\\
$^{23}$INAF – Osservatorio Astronomico di Trieste, via Tiepolo 11, I-34131 Trieste, Italy \\
$^{22}$IFPU – Institute for Fundamental Physics of the Universe, via Beirut 2, 34151, Trieste, Italy \\
$^{24}$INFN – Sezione di Trieste, I-34100 Trieste, Italy
}
\bibliographystyle{mnras}
\section{Introduction}
Channel modeling plays an important role in communication system design. In the traditional channel modeling process, channel measurement is an important approach to obtain data for channel characterization. However, measurement-based channel modeling faces various problems, especially in complex scenarios where channel measurements are challenging. Moreover, with the development of 5G and 6G communications \cite{C}, large bandwidths and antenna arrays are used, and high mobility is supported. This further makes it challenging to obtain enough data for channel characterization and modeling. Take mobile channel measurement as an example: based on Lee's theorem\cite{Lee}, to accurately extract the large-scale channel information, at least 36-50 data points need to be obtained within a range of 40 wavelengths. This means that if the measurement is conducted in a high mobility scenario and the sampling rate is limited by hardware, it may be challenging to obtain enough data within the 40-wavelength window, and the accuracy of channel characterization (such as path loss and shadow fading modeling) is reduced. Such a problem has existed for a long time, and the reduced measurement data significantly limit the development of accurate channel modeling. A possible solution is to reconstruct the reduced channel data set using well-designed algorithms, which can avoid the high cost of repeating channel measurements.\\
Recently, Artificial Intelligence (AI) has been widely considered in the investigation of channel characterization and modeling. Machine learning algorithms can solve problems such as clustering, classification, and regression in wireless channel data\cite{w}. In \cite{w44}, the authors used the transmitter (Tx) and receiver (Rx) heights, the distance from Tx to Rx and the carrier frequency as Radial Basis Function Neural Network (RBF-NN) inputs to predict path loss. Ref.\cite{w47} applies a network for modeling and storing ray launching results, and the proposed method achieves a high gain in terms of computational efficiency. Ref.\cite{w48} uses an Artificial Neural Network (ANN) to model the wireless channel and proposes a cluster-nuclei based channel model; with the measurement data, the prediction of the channel is achieved. In \cite{w51}, the received signal strength is predicted by a Multi-Layer Perceptron (MLP). The distance from Tx to Rx is regarded as the input, and the output is the received signal strength. Ref.\cite{w53} discusses an optimal AI-based approach to fit channel sounder data to a time-variant tapped delay line model. Ref.\cite{w54} proposes an AI-based method to predict coverage and field strength maps in environments where measurements are unavailable. In \cite{w45}, the Relevance Vector Machine (RVM) is used to estimate the channel direction of arrival, which obtains signal locations by the sparsity-inducing RVM on a predefined spatial grid and then obtains refined direction estimates by searching. Moreover, K-means, Fuzzy C-Means (FCM) and Density-Based Spatial Clustering of Applications with Noise (DBSCAN) have been widely used for channel data clustering \cite{w55}. However, the above existing works mainly use AI to improve channel characterization and modeling, and they actually rely on having a large amount of channel data. There is still a lack of an effective method to solve the problem of having insufficient channel data for analysis.\\
This paper proposes a channel prediction algorithm, which trains an ANN to learn the channel history and predict path loss and shadow fading at different distances. Fig. \ref{fig1} illustrates the framework of the proposed algorithm. In the training process, $ d_{t} $ and $ d_{t+q+1} $ are the distances of the measured channel data at times $\mathit{t}$ and $\mathit{t+q}+1$, respectively, where $ \mathit{q} $ is the number of predicted points. Parameters $ l_{t} $ and $ l_{t+q+1} $ are the network outputs in the training process (including path loss and shadowing). After the training, the distances of the unmeasured points (from $ d_{t+1} $ to $ d_{t+q} $) are used as the network input, and the predicted channel data (from $ \hat{l}_{t+1} $ to $ \hat{l}_{t+q} $) are obtained as the output of the ANN. It is found that the proposed approach can achieve fairly high accuracy in channel prediction, which can be used to improve channel characterization when measurements are insufficient.\\
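The selection of training and prediction points described above can be sketched as follows. This is a minimal illustration with synthetic data; the function name and the synthetic path-loss curve are our own and are not taken from the measurement campaign.

```python
import numpy as np

def split_for_prediction(d, pl, q):
    """Keep every (q+1)-th sample as training data; the q points in between
    each pair of kept samples are the ones to be predicted by the ANN.
    Training ratio r = 1/(q+1), hence r <= 50% for q >= 1."""
    train_idx = np.arange(0, len(d), q + 1)
    mask = np.zeros(len(d), dtype=bool)
    mask[train_idx] = True
    return (d[mask], pl[mask]), (d[~mask], pl[~mask])

# Example: 3000 samples spaced 1.42 m apart, q = 6 gives a ~14% training ratio
d = np.arange(3000) * 1.42
pl = 80 + 20 * np.log10(1 + d)          # synthetic path-loss curve (assumption)
(train_d, train_pl), (test_d, test_pl) = split_for_prediction(d, pl, 6)
print(len(train_d) / len(d))            # prints 0.143
```

With $q=6$ this reproduces the 14\% training ratio used in the simulations below.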
\begin{figure}[t]
\centering
\includegraphics[width=0.4\textwidth]{chazhi.pdf}
\caption{Process of training and prediction}
\label{fig1}
\end{figure}
\section{Channel Measurement}
For ANN training, measurement data is needed. In this work, channel measurements were carried out in a railway mobile scenario at 460 MHz with a bandwidth of 20 MHz. The sample interval $ T_{r} $ is 5.12 ms, which satisfies the Nyquist criterion for mobile channel measurements\cite{M2011}. In the measurements, the distance interval between adjacent data points is 1.42 m, and the total number of measurement data points is 3000. More details of the measurements can be found in \cite{wen}.\\
The measurement system records channel transfer functions $ \mathit{H(d,f_{l})} $, where $ \mathit{d} $ is the distance between Tx and Rx antennas and $ f_{l} $ is the $ \mathit{l} $ th frequency point. The power of the received signal $ P_{r}(d) $ is determined from $ \mathit{H(d,f_{l})} $ as
\begin{equation}\label{PG}
P_{r}(d)=\dfrac{1}{N_{f}}(\sum_{l=1}^{N_{f}}\left|H(d,f_{l})\right| ^{2})
\end{equation}
where $ \mathit{N_{f}} $ is the total number of the measured frequency points, and in the measurements, $ \mathit{N_{f}}=1024 $.\\
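Eq.~\eqref{PG} amounts to averaging the squared magnitude of the transfer function over the $N_f$ frequency points. A minimal sketch (with a conversion to dB, which is our own addition, and hypothetical array shapes):

```python
import numpy as np

def received_power_db(H):
    """Average |H(d, f_l)|^2 over the N_f measured frequency points (Eq. (1));
    H has shape (N_d, N_f). Returns the received power in dB."""
    p_lin = np.mean(np.abs(H) ** 2, axis=1)
    return 10 * np.log10(p_lin)

# A flat channel with |H| = 1 at every frequency gives 0 dB received power
H = np.ones((5, 1024), dtype=complex)
print(received_power_db(H))  # prints [0. 0. 0. 0. 0.]
```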
The raw channel large-scale component $ \mathit{PL} $, including both path loss and large-scale fading (LSF), can be calculated as follows
\begin{equation}\label{PL}
PL(dB)=P_{t}+G_{T_{X}}+G_{R_{X}}-P_{r}
\end{equation}
To remove the components of small-scale fading, $ \mathit{PL} $ is averaged with a sliding window of 40 wavelengths. To extract the LSF component $ \mathit{X_{\sigma}}(dB) $, the linear model of path loss is calculated first, and then the LSF $ \mathit{X_{\sigma}}$ is obtained as
\begin{equation}\label{LSF}
\mathit{X_{\sigma}}(dB)=PL(dB)-\overline{PL}(dB)
\end{equation}
where $ \overline{PL} $ is the linear model of path loss.
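The large-scale processing of Eqs.~\eqref{PL}-\eqref{LSF} can be sketched as follows. The conversion of the 40-wavelength window into a number of samples and the log-distance form of the linear path-loss model are our assumptions, not details given in the text.

```python
import numpy as np

WAVELENGTH = 3e8 / 460e6   # ~0.65 m at the 460 MHz carrier
SPACING = 1.42             # m between adjacent measurement samples

def sliding_average(x, win):
    """Moving average; 'valid' mode drops the win-1 edge samples."""
    kernel = np.ones(win) / win
    return np.convolve(x, kernel, mode="valid")

def extract_lsf(d, pl_raw):
    """Remove small-scale fading with a 40-wavelength sliding average, fit a
    linear (log-distance) path-loss model, and return X_sigma = PL - PL_bar
    (Eqs. (2)-(3)). Output is aligned to the trailing edge of each window."""
    win = max(1, int(round(40 * WAVELENGTH / SPACING)))  # window in samples
    pl = sliding_average(pl_raw, win)
    d = d[win - 1:]                                      # align distances
    a, b = np.polyfit(np.log10(d), pl, 1)                # linear model PL_bar
    return pl - (a * np.log10(d) + b)
```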
\section{Channel Prediction with ANN}
ANNs include many types of neural networks such as the back propagation network (BPN), RBF-NN, extreme learning machine (ELM), etc. They have been widely used in different applications, including pattern recognition, image processing, forecasting, automatic control, etc.\cite{plrailway}. In this paper, the BPN, RBF-NN and ELM are chosen due to their good performance in interpolation prediction. These methods can realize complex nonlinear mappings and have good self-learning ability. The basic architectures of the three kinds of ANN are shown in Fig. \ref{fig2}. The number of hidden layers and the numbers of neurons in the input and output layers are all set to 1 in this paper. The output of the ANN can be expressed as
\begin{equation}\label{BPN}
y_{i}=F_{o}(\sum_{j=1}^{M}v_{j}F_{n}(w_{j}d_{i}))
\end{equation}
where $ d_{i} $ is the $\mathit{i}$ th input data point, which in this paper represents the distance $ \mathit{d} $. Parameter $ y_{i} $ is the $\mathit{i}$ th output data point, which in this paper represents the predicted $ \mathit{PL} $. Parameter $ F_{o}(\cdot) $ is the activation function of the output layer, and $ F_{n}(\cdot) $ is the activation function of the hidden layer. Parameter $ v_{j} $ represents the synaptic weight from the $ \mathit{j} $ th neuron in the hidden layer to the output layer, $ w_{j} $ represents the synaptic weight from the input layer to the $ \mathit{j} $ th neuron in the hidden layer, and $ \mathit{M} $ is the number of hidden neurons. For BPN and ELM, $ v_{j} $ and $ w_{j} $ are determined by the training process. For RBF-NN, $ w_{j} $ is fixed to $ 1 $, and $ v_{j} $ is determined by the training process. For channel prediction, some measurement data are chosen at equal spacing as the training data set to predict the remaining data. After the ANN has been trained successfully, the distances that need to be predicted are input to the ANN to obtain the channel data.\\
In the training process of BPN, the core is the back propagation algorithm, which adjusts the network weights to reduce the deviation between the network output and the measurement data. This can be expressed as
\begin{equation}\label{ek}
E_{i}=\frac{1}{2}\sum(y_{i^{'}}-y_{i})^{2}
\end{equation}
\begin{equation}\label{d1}
\Delta v_{j}=-\eta\frac{ \partial E_{i}}{ \partial v_{j}}
\end{equation}
\begin{equation}\label{d4}
\Delta v_{j}=\eta (y_{i^{'}}-y_{i})y_{i}(1-y_{i})b_{j}
\end{equation}
where $y_{i^{'}}$ is the $\mathit{i}$ th measurement data point and $E_{i}$ is the current prediction error. Parameter $ \eta =10^{-6} $ is the learning rate of BPN and $\Delta v_{j}$ is the weight adjustment based on the current $E_{i}$. Parameter $b_{j}$ is the output of the $ \mathit{j} $ th hidden neuron. With the back propagation algorithm, the current prediction result can be used to adjust the synaptic weights. Each round of prediction and back propagation is called an iteration. When $ E_{i} $ is smaller than the error threshold ($ 10^{-5} $) or the number of iterations reaches the preset upper limit, the training process is finished; the maximum number of iterations is set to 1000 in order to prevent the training from running indefinitely when the error stays above the threshold.\\
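A minimal sketch of this training loop for a single-input, single-output network (Eq.~\eqref{BPN} with sigmoid hidden units and a linear output unit, the latter being our implementation choice; the weight updates follow the negative-gradient convention of Eq.~\eqref{d1}, and the learning rate and stopping rule are illustrative):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_bpn(d, y, M=10, eta=0.005, max_iter=2000, tol=1e-5, seed=0):
    """Backpropagation training of a 1-M-1 network, pred = v . sigmoid(w*d + c).
    Weights move along the negative gradient of E = 0.5 * sum (y' - y)^2;
    training stops when E < tol or after max_iter iterations."""
    rng = np.random.default_rng(seed)
    w = rng.normal(size=M)                  # input -> hidden weights
    c = rng.normal(size=M)                  # hidden biases
    v = rng.normal(size=M)                  # hidden -> output weights
    E = np.inf
    for _ in range(max_iter):
        b = sigmoid(np.outer(d, w) + c)     # hidden outputs, shape (N, M)
        err = b @ v - y                     # prediction error per sample
        E = 0.5 * np.sum(err ** 2)
        if E < tol:
            break
        grad_v = b.T @ err                          # dE/dv_j
        grad_h = (err[:, None] * v) * b * (1 - b)   # back-propagated error
        w -= eta * grad_h.T @ d
        c -= eta * grad_h.sum(axis=0)
        v -= eta * grad_v
    return (w, c, v), E

def predict(params, d):
    w, c, v = params
    return sigmoid(np.outer(d, w) + c) @ v

# Example: fit a short synthetic segment (normalized units, our own data)
d = np.linspace(0.0, 1.0, 20)
params, E = train_bpn(d, 0.3 * np.ones(20))
```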
The ELM abandons the iterative training process of BPN. With $ w_{j} $ set randomly, $ v_{j} $ can be determined by solving a system of linear equations.\\
The RBF-NN focuses on extracting local characteristics of the data. It finds $ \mathit{M} $ central data points by K-Means clustering. For each central point, the activation function is given by a Gaussian function, and the weights between the hidden layer and the output layer are again determined by linear equations.
\begin{figure}[t]
\centering
\includegraphics[width=0.5\textwidth]{BPnetwork.pdf}
\caption{Structure of ANN}
\label{fig2}
\end{figure}
\begin{figure*}[t]
\centering
\subfloat[]{\includegraphics[width=0.4\textwidth]{BP_14_1-10_3000data_linear-eps-converted-to}%
\label{fig3a}}
\subfloat[]{\includegraphics[width=0.4\textwidth]{BP_50_1-10_3000data_linear-eps-converted-to}%
\label{fig3b}}
\hfil
\subfloat[]{\includegraphics[width=0.4\textwidth]{BP_14_1-50_3000data_linear-eps-converted-to}%
\label{fig3c}}
\subfloat[]{\includegraphics[width=0.4\textwidth]{BP_50_1-50_3000data_linear-eps-converted-to}%
\label{fig3d}}
\caption{Prediction results of $ \mathit{PL} $ by BPN. (a) 14$\%$ training ratio and 10 neurons; (b) 50$\%$ training ratio and 10 neurons; (c) 14$\%$ training ratio and 50 neurons; (d) 50$\%$ training ratio and 50 neurons}
\label{fig3}
\end{figure*}
\section{Simulation Result}
In order to verify the proposed prediction method and explore the factors that could affect prediction performance, simulations are conducted. In the simulations, different proportions of the training data set and numbers of neurons are chosen for analysis. The proportion of the training data set can be expressed as $ \mathit{r=\frac{1}{q+1}} $, where $ \mathit{q} $ is the number of predicted data points between adjacent measurement data, as shown in Fig. \ref{fig1}. It can be seen that $ \mathit{r} $ cannot exceed 50$ \% $. For the number of neurons, the cases of 10, 20, 30, 40 and 50 neurons are considered in the simulations. In this paper, the single hidden layer ANN is considered, which is found to have fairly high accuracy and low complexity.\\
Fig. \ref{fig3} shows the simulation results for the 14$ \% $ (6 predicted points between adjacent training data) and 50$ \% $ (1 predicted point between adjacent training data) training ratios with BPNs of 10 and 50 neurons, respectively. The black data points are the measurement data; the red data points are the output data for comparison, which include the training data set and the predicted data. In Fig. \ref{fig3}, it is found that the predicted results are generally in good agreement with the measurements. It is also found that the number of neurons has a larger impact on the prediction result than the proportion of the training data set. A larger number of neurons can significantly improve the prediction, whereas the improvement from a higher proportion of training data is relatively small.\\
\begin{figure}[t]
\centering
\includegraphics[width=0.45\textwidth]{3000data_linear_rmse-eps-converted-to}
\caption{RMSE of $ \mathit{PL} $ prediction}
\label{fig4}
\end{figure}
\begin{figure*}[t]
\centering
\subfloat[]{\includegraphics[width=0.4\textwidth]{BP_14_1-10_3000data_distribution-eps-converted-to}%
\label{fig6a}}
\subfloat[]{\includegraphics[width=0.4\textwidth]{BP_50_1-10_3000data_distribution-eps-converted-to}%
\label{fig6b}}
\hfil
\subfloat[]{\includegraphics[width=0.4\textwidth]{BP_14_1-50_3000data_distribution-eps-converted-to}%
\label{fig6c}}
\subfloat[]{\includegraphics[width=0.4\textwidth]{BP_50_1-50_3000data_distribution-eps-converted-to}%
\label{fig6d}}
\caption{Predicting result of LSF by BPN. (a) 14$\%$ training ratio and 10 neurons; (b) 50$\%$ training ratio and 10 neurons; (c) 14$\%$ training ratio and 50 neurons; (d) 50$\%$ training ratio and 50 neurons}
\label{fig6}
\end{figure*}
To better evaluate the prediction accuracy, the Root-Mean-Square Error (RMSE) $ \mathit{R} $ is calculated as
\begin{equation}\label{RMSE}
\mathit{R}=\sqrt{\dfrac{\sum_{i=1}^{N}(m_{i}-p_{i})^{2}}{N}}
\end{equation}
where $ m_{i} $ is the $ \mathit{i} $ th $ \mathit{PL} $ value from the measurements, $ p_{i} $ is the corresponding predicted $ \mathit{PL} $, and $ \mathit{N} $ is the total number of measurement data points.\\
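Eq.~\eqref{RMSE} in code form (the example values are our own):

```python
import numpy as np

def rmse(measured, predicted):
    """Root-mean-square error between measured and predicted PL (Eq. (8))."""
    m = np.asarray(measured, dtype=float)
    p = np.asarray(predicted, dtype=float)
    return np.sqrt(np.mean((m - p) ** 2))

print(rmse([100, 102, 104], [101, 101, 105]))  # prints 1.0
```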
In Fig. \ref{fig4}, the RMSEs of the predicted results using different types of networks are shown. It is found that the BPN has the best performance, whereas the performances of ELM and RBF-NN are generally worse. The main advantage of ELM is running speed, since it uses a simplified training process to reduce calculation time. The running speed of RBF-NN is between those of ELM and BPN, because RBF-NN adopts partial learning. It is also observed in Fig. \ref{fig4} that, for a specific network, a higher proportion of training data leads to a lower prediction error. However, the impact of the proportion of training data is fairly small: for most cases, the RMSE is reduced by less than 0.2 dB when the proportion increases from 14$ \% $ to 50$ \% $, as shown in Fig. \ref{fig4}. Compared with the impact of the training data proportion, the decrease in RMSE obtained by using more neurons is much more distinct, especially for BPN and RBF-NN. It can be seen from Fig. \ref{fig4} that the RMSE of the 50-neuron case is generally 1 dB lower than that of the 10-neuron case for BPN. Therefore, when an ANN is used to predict $ \mathit{PL} $, more neurons significantly improve the prediction accuracy. Moreover, Fig. \ref{fig4} shows that the ANN can be well used for channel prediction: even with few training data (i.e., a low proportion of training data), the accuracy is fairly close to that obtained with a large number of training data (e.g., a training proportion of 50$ \% $).\\
\begin{figure}[t]
\centering
\includegraphics[width=0.45\textwidth]{3000data_lsf_rmse-eps-converted-to}
\caption{RMSE of LSF prediction}
\label{fig5}
\end{figure}
For the LSF prediction analysis, the zero-mean Gaussian distribution is chosen as the model of the LSF, and the probability density is used for the comparison between the predictions and the measurements. The results of BPN are shown in Fig. \ref{fig6}, where the black circles represent the probability density of the measurement data and the red circles represent the probability density of the predicted data. It is observed that more neurons and a higher $ \mathit{r} $ can significantly improve the fit to the statistical distribution of the LSF, especially in Fig. \ref{fig6}(d). Fig. \ref{fig5} shows the RMSE results of the LSF prediction. It is found that the trends of the curves are generally similar to the results in Fig. \ref{fig4}. Moreover, the RMSE values of the predicted $ \mathit{PL} $ and LSF are fairly close to each other. It is concluded that the BPN has the best performance for LSF prediction, and using more neurons significantly improves the prediction accuracy.
\section{Conclusion}
In this paper, a scheme of using ANN to predict channel path loss and large-scale fading is proposed. The ANN can extract the characteristics of the channel and predict the channel data at neighboring unmeasured points. The performances of different types of ANN are compared. The BPN is found to have the smallest prediction error, the error of ELM is slightly larger than that of BPN, and the RBF-NN is found to have the highest channel prediction error. Moreover, it is found that the number of neurons and the proportion of training data affect the prediction error: using more neurons can significantly reduce the prediction error, whereas the influence of the proportion of training data is relatively small.
\section{Introduction}
We are concerned with a mathematically novel and somewhat unusual combination of PDEs aiming to describe the dynamic interplay of magnetization structures and (collision-free) electric currents, which is essential for various spintronic applications. Magnetization structures are given in terms of an
$\mathbb S^2$-valued field $\bm m=\bm m(\bm x,t)$ which is governed by a
micromagnetic interaction energy $E=E(\bm m)$, a quadratic integral functional
that we shall specify later. We focus on a dynamic model,
where $\bm m$ evolves according to the following Landau-Lifshitz-Gilbert equation (LLG)
\begin{equation}\label{eq:abstract_LLG}
\partial_t \bm m + (\bm j \cdot \nabla) \bm m
= \bm m \times \left( \alpha \, \partial_t \bm m - \bm h_{\rm eff} \right) \, .
\end{equation}
Here $\alpha>0$ is the Gilbert damping factor and $\bm h_{\rm eff}=-\frac{\delta E}{\delta \bm m}(\bm m)$ is the effective field induced by $\bm m$. The dynamics is
driven by a current $\bm j= \bm j(\bm x,t)$ of electrons whose spins are assumed to adiabatically align with the local magnetization direction $\bm m$, giving rise to a spin current $\bm Q = \bm j \otimes \bm m$. The divergence of $\bm Q$ perpendicular to $\bm m$, featured in \eqref{eq:abstract_LLG}, is called adiabatic spin-transfer torque \cite{ZhangLi2004}.
In the absence of $\bm j$, LLG is a hybrid of heat and Hamiltonian flow of $E$. The spin-transfer torque has the form of a transport term. In a simplified approach it is assumed that $\bm j$ is constant. Due to the hybrid structure, however, the term cannot be eliminated by means of a simple Galilean transformation. In models of current driven domain walls it is customary to include non-adiabatic spin-transfer terms $\beta \bm m \times (\bm j \cdot \nabla) \bm m$ with an additional parameter $\beta$. Existence and well-posedness results for this so-called Landau-Lifshitz-Slonczewski equation have been derived in \cite{MelcherPtashnyk2013, DoeringMelcher2017}.
In the context of topological phases on very small scales the interplay of electron currents and magnetization structures becomes more complex and multifaceted, calling for a more precise description that takes into account mutual interactions, i.e. the counter-effect of magnetization structures
on the electron flow. Electron transport is described in terms of an electron distribution function $f=f(t,\bm x,\bm v)$ depending on time $t$, position $\bm x\in \mathbb R^3$ and velocity $\bm v \in \mathbb R^3$, so that the current is obtained as the first velocity moment
$\bm j =q \int_{\mathbb R^3} \bm v f \, d\bm v$. Ignoring collisions, distribution functions generally evolve according to Vlasov equations
\begin{equation*}
\partial_t f + \bm v \cdot \nabla_{\bm x} f + \bm F \cdot \nabla_{\bm v} f =0 \, ,
\end{equation*}
where $\bm F=q (\bm E + \bm v \times \bm B)$ with $q=-1$ is the Lorentz force induced by electromagnetic fields $\bm E$ and $\bm B$ that satisfy Maxwell's equations in a self-consistent way, giving rise to the Vlasov-Maxwell system, see \cite{Schmeiser1990,Juengel2009} for a detailed discussion in the context of semiconductors.\\
Short-time existence and uniqueness of classical solutions to the Vlasov-Maxwell system have been proven by Wollman in \cite{Wollman1984}
based on a generalization of a general local existence result for quasilinear symmetric hyperbolic systems \cite{Kato1975}.
Global existence of weak solutions has been obtained by DiPerna and Lions in \cite{DiPernaLions1989} based on a regularization procedure, velocity averaging and the method of renormalization. \\
Coupling to the magnetization structure is based on a recent physical observation in connection with Hall effects in the presence of nontrivial topologies, namely the emergence of virtual electromagnetic fields $\bm e$ and $\bm b$, which contribute to the conventional Maxwell fields on the level of electron transport with a modified Lorentz force
\begin{equation} \label{eq:total_Lorentz}
\bm F=q \left(\bm E + \bm e + \bm v \times (\bm B + \bm b)\right),
\end{equation}
\cite{Schulz2012, Nagaosa2013}. The emergent fields are derived from the evolving magnetization field $\bm m$, see below.
The resulting system will be called Landau-Lifshitz-Gilbert-Vlasov-Maxwell system (LLG-VM). A key property of this system is the following energy-dissipation law
\begin{equation} \label{eq:abstract_energy_law}
\alpha \int_0^T \|\partial_t \bm m\|^2_{L^2} \, dt + \Big[ \mathbb{E}(f,\bm E, \bm B, \bm m)\Big]_{t=0}^T \le 0 \, ,
\end{equation}
for the total energy
\begin{equation} \label{eq:abstract_energy}
\mathbb{E}(f,\bm E, \bm B, \bm m) =
\int_{\mathbb R^3} \int_{\mathbb R^3} \frac{|\bm v|^2}{2} f \, d\bm v \, d\bm x + \frac{1}{2}\left( \varepsilon_r \, \|\bm E\|^2_{L^2} + \frac{1}{\mu_r} \, \|\bm B\|^2_{L^2} \right) + E(\bm m) \, ,
\end{equation}
valid for sufficiently regular solutions and weak limits. Here $\varepsilon_r \geq 1$ and $\mu_r \geq 1$ represent the relative permittivity and permeability constants, respectively. Notably, there is no explicit dependence on the emergent fields $\bm e$ and $\bm b$, which indicates that this coupling is indeed natural. Depending on the choice of $E=E(\bm m)$, the resulting a priori bounds are the basis of our global existence result
for (weak) solutions
\[
(f,\bm E, \bm B, \bm m) \in L^\infty((0,\infty);\{\mathbb{E}< \infty\})
\quad \text{and} \quad \frac{\partial \bm m}{\partial t} \in L^2((0, \infty) \times \mathbb R^3) \, ,
\]
under further requirements on the initial data and with further specifications on the regularity.
To this end we shall focus on a small scale model for a frustrated magnet that takes into account second order gradient terms
\begin{align}\label{eq:frustratedmagnet}
E(\bm m) = \frac{1}{2} \int_{\mathbb{R}^3} |\nabla^2 \bm m|^2 - |\nabla \bm m|^2 + h |\bm m - \bm{\hat {e}}_3|^2 d \bm x.
\end{align}
The first and second order gradient terms account for competing nearest-neighbor ferromagnetic and higher-neighbor antiferromagnetic interaction, respectively. The last term accounts for the interaction with a Zeeman field pointing in the $\bm{\hat {e}}_3$ direction.
In the ferromagnetic regime with $h>1/4$
so that $E(\bm m) \gtrsim \|\bm m - \bm{\hat{e}}_3\|_{H^2}^2$,
such models are known to host topological solitons in space dimensions $d=2,3$, magnetic skyrmions and hopfions, respectively, see e.g. \cite{Sutcliffe2017}. A key analytical consequence of a second order gradient term is that LLG behaves subcritically with respect to the energy $E(\bm m)$, i.e., concentration effects are ruled out by the fundamental energy law \eqref{eq:abstract_energy_law}. Moreover, the topology of the field $\bm m$ is preserved under a flow that exhibits the space-time bounds provided by \eqref{eq:abstract_energy_law}. Finally, the resulting regularity of the emergent fields $\bm e$ and $\bm b$ facilitates the compactness arguments for the transport equation.
Related systems arising in connection with domain wall motion and magnetic switching in multi-layers have been developed and examined in \cite{Chen2015, Chen2016, Chai2018}. Starting from Schr\"odinger-Poisson equations for spinors, the semiclassical mean-field limit yields a Vlasov-Poisson equation for the associated Wigner function coupled to the Landau-Lifshitz-Gilbert equation.
The coupling is realized by means of a spin transfer term in the effective field, which is induced by Pauli projections, and a source term in the Wigner equation, respectively, rather than an adiabatic spin-transfer torque and emergent electromagnetic fields as in our case. Magnetostatic stray-fields, which play a particular role for conventional micromagnetic structures such as domain walls, are taken into account as a lower order perturbation on the level of LLG. \\
Here our focus is on models that describe the transport of magnetic topological solitons occurring on very small scales. Stray-fields are less relevant in this regime and are neglected in our discussion, focusing on the new difficulties due to the lack of regularizing properties of the full Maxwell equations in combination with emergent electromagnetism and topology that we now discuss in more detail.
\subsection*{Emergent electromagnetism and topology}\label{subsection:emergentfields}
A smoothly evolving magnetization field $\bm m$
induces a space-time vorticity, i.e., a two-form $\omega= \frac 1 2 \, \omega_{\mu \nu} \, dx_\mu \wedge dx_\nu$ with components
\begin{align*}
\omega_{\mu \nu}= \bm m \cdot \left( \partial_\mu \bm m \times \partial_\nu \bm m \right)\, ,
\end{align*}
for space-time indices $0 \le \mu, \nu \le 3$ so that $\partial_0 = \partial_t$. The two-form $\omega$ is the pull-back $\bm m^\ast \omega_{\mathbb{S}^2}$
of the standard volume form $\omega_{\mathbb{S}^2}$ on $\mathbb{S}^2$ by $\bm m$, see e.g. \cite{KurzkeMelcherMoser2011, KurzkeMelcherMoserSpirn2010}.
It follows that $\omega$ is closed so that Bianchi's identity
holds true for $0 \le \mu, \nu, \sigma \le 3$:
\begin{align*}
\partial_\sigma \omega_{\mu \nu} +\partial_\nu \omega_{\sigma \mu } +\partial_\mu \omega_{\nu \sigma} =0 \, .
\end{align*}
In the spirit of the Faraday form from electromagnetism, the decomposition
\begin{align*}
\omega =e_i \, dx_i \wedge dt + \frac 1 2 \omega_{jk} \, dx_j \wedge dx_k,
\end{align*}
for spatial indices $1 \le i,j,k \le 3$, gives rise to the previously mentioned
emergent electromagnetic fields $\bm e$ and $\bm b$ with components
\begin{align} \label{eq:emergent_fields}
b_i = \frac{1}{2}\epsilon_{ijk} \, \bm m \cdot (\partial_j \bm m \times \partial_k \bm m) \quad \text{and} \quad e_i = \bm m \cdot (\partial_i \bm m \times \partial_t \bm m).
\end{align}
According to the Bianchi identity the emergent fields satisfy the homogeneous Maxwell equations (Gau{\ss} law for magnetism and Faraday's law)
\begin{align*}
\nabla \cdot \bm b =0 \quad \text{and} \quad \frac{\partial \bm b}{\partial t} + \nabla \times \bm e =0.
\end{align*}
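Indeed, both equations follow from the Bianchi identity by a direct computation. Contracting the purely spatial identity with the Levi-Civita symbol gives
\begin{align*}
\nabla \cdot \bm b = \tfrac{1}{2} \, \epsilon_{ijk} \partial_i \omega_{jk} = \tfrac{1}{6} \, \epsilon_{ijk} \left( \partial_i \omega_{jk} + \partial_k \omega_{ij} + \partial_j \omega_{ki} \right) = 0 \, ,
\end{align*}
while the choice $(\sigma, \mu, \nu)=(0,j,k)$, contracted with $\tfrac{1}{2} \epsilon_{ijk}$ and combined with $e_i = \omega_{i0}$ and $b_i = \tfrac{1}{2} \epsilon_{ijk} \omega_{jk}$, gives
\begin{align*}
\frac{\partial b_i}{\partial t} + \left( \nabla \times \bm e \right)_i = \tfrac{1}{2} \, \epsilon_{ijk} \left( \partial_0 \omega_{jk} + \partial_k \omega_{0j} + \partial_j \omega_{k0} \right) = 0 \, .
\end{align*}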
The emergent electromagnetism bears physical relevance as it captures the interplay between the magnetization structure and electric currents.
Particles of charge $q$ moving with velocity $\bm v$ and momentum $\bm p$ experience an additional Lorentz force $q \left( \bm e + \bm v \times \bm b\right)$, giving rise to the total force $\bm F$ in \eqref{eq:total_Lorentz} such that $\frac{d \bm p}{dt}=\bm F$. There is no universal counterpart to the inhomogeneous Maxwell equations on the level of emergent fields.
\medskip
An intriguing new feature compared to conventional electromagnetism is quantization, which gives certain localized magnetization structures the character of charged particles. Such topological solitons in magnetism are classified according to their dimension as skyrmions ($d=2$) and hopfions ($d=3$).
The Gau{\ss} law $\nabla \cdot \bm b=0$ gives rise to a vector potential $\bm a$ such that $\bm b = \nabla \times \bm a$. The scalar product
$\bm a \cdot \bm b$ is the emergent magnetic helicity density. Under suitable decay conditions $\bm m \to \bm{\hat e}_3$ as $|\bm x| \to \infty$, the helicity integral exists and is quantized
\begin{align*}
\int_{\mathbb R^3} \bm a \cdot \bm b \; d \bm x =(4 \pi)^2 \, H(\bm m) \, ,
\end{align*}
where $H(\bm m) \in \mathbb{Z}$ is the Hopf invariant associated to $\bm m$,
considered as a continuous map from the compactification $\mathbb{S}^3$ of $\mathbb{R}^3$.
The Hopf invariant is a homotopy invariant and describes the topology of the field $\bm m$ in terms of the linking number of two generic fibers of $\bm m$.
Moreover, the flux of $\bm b$ through a hyperplane, say $\mathbb{R}^2$, is also quantized
\begin{align*}
\int_{\mathbb R^2} b_3 \; dx =4 \pi \, Q(\bm m) \, ,
\end{align*}
where $Q(\bm m)\in \mathbb Z$ is the Brouwer degree or skyrmion number associated to $\bm m =\bm m|_{\mathbb R^2}$, considered as a continuous map from the compactification $\mathbb{S}^2$ of $\mathbb{R}^2$. The invariants $Q$ and $H$ are used for the topological classification of localized structures in magnetism.
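For illustration (the overall sign depending on orientation conventions), consider an axisymmetric profile in polar coordinates $(r, \phi)$ on $\mathbb R^2$,
\begin{align*}
\bm m = \big( \sin \theta(r) \cos(n \phi), \, \sin \theta(r) \sin(n \phi), \, \cos \theta(r) \big) \, , \qquad \theta(0)=\pi \, , \quad \theta(r) \to 0 \text{ as } r \to \infty \, ,
\end{align*}
with $n \in \mathbb{N}$. Then $b_3 \, dx_1 \wedge dx_2 = \bm m^\ast \omega_{\mathbb{S}^2} = n \sin \theta \, d\theta \wedge d\phi$, and hence
\begin{align*}
\int_{\mathbb R^2} b_3 \; dx = 2 \pi n \, \big[ \cos \theta(0) - \cos \theta(\infty) \big] = - 4 \pi n \, ,
\end{align*}
so that $Q(\bm m) = -n$ for this configuration.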
\medskip
The mathematical theory of topological solitons in magnetism has mainly been developed in the context of two-dimensional structures. The so-called chiral skyrmions were theoretically predicted some 20 years before their observation \cite{Bogdanov1989} and owe their stability to spin-orbit effects (Dzyaloshinskii-Moriya interaction), see e.g. \cite{Melcher2014, LiMelcher2017} for a mathematical account. An alternative stabilization mechanism is based on magnetic frustration, i.e., alternating ferromagnetic and anti-ferromagnetic interaction in a Heisenberg lattice, see e.g.
\cite{Meyer2019}. A continuum theory for frustrated magnets
including \eqref{eq:frustratedmagnet}
is derived in \cite{Lin2016} and is shown to support two-dimensional skyrmions \cite{Lin2016} as well as their three-dimensional topological counterparts (hopfions) \cite{Sutcliffe2017, Rybakov2019}. Here we focus on this three-dimensional case. Hopfion dynamics have been recently studied in chiral magnets \cite{Wang2019} and frustrated magnets \cite{Liu2020}. The model considered in this work is the combined system of equations of motion that has been suggested in \cite{Nagaosa2013} for the case of magnetic skyrmions.
\subsection*{Main result}\label{subsection:LLGVM}
The complete Landau-Lifshitz-Gilbert-Vlasov-Maxwell (LLG-VM) system for the magnetization field $\bm m=\bm m (\bm x, t)$,
the distribution function $f=f(\bm x, \bm v, t)$ and the
electromagnetic fields
$\bm E=\bm E(\bm x, t)$ and $\bm B=\bm B(\bm x, t)$ with
$(\bm x, \bm v, t) \in \mathbb{R}^3 \times \mathbb{R}^3 \times (0, \infty)$ has the following explicit form:
The Landau-Lifshitz-Gilbert equation
\begin{equation}\label{eq:LLG}
\partial_t \bm m = \bm m \times ( \alpha\, \partial_t \bm m + \Delta \bm m + \Delta^2 \bm m-h\, \bm{\hat{e}}_3) - (\bm j \cdot \nabla) \bm m,
\end{equation}
inducing emergent fields $\bm e$ and $\bm b$ given by
\eqref{eq:emergent_fields},
is coupled to the Vlasov-Maxwell system
\begin{equation} \label{eq:Vlasov-1}
\partial_t f + \bm v \cdot \nabla_x f - (\bm E + \bm e + \bm v \times(\bm B + \bm b)) \cdot \nabla_v f =0 \, ,
\end{equation}
with the homogeneous Maxwell equations
\begin{equation}
\partial_t \bm B + \nabla \times \bm E = 0
\quad \text{and} \quad \nabla \cdot \bm B = 0 \label{eq:Maxwell4} \, ,
\end{equation}
and the inhomogeneous Maxwell equations
\begin{equation}
\partial_t \bm D - \, \nabla \times \bm H = - \bm j
\quad \text{and} \quad \nabla \cdot \bm D = \rho \label{eq:Maxwell-1} \, ,
\end{equation}
with constitutive laws in terms of relative permittivity and permeability $\varepsilon_r \geq 1$ and $\mu_r \geq 1$
\begin{equation}
\bm D = \varepsilon_r \bm E \quad \text{and}
\quad \bm B = \mu_r \bm H \, ,
\end{equation}
and current and charge densities
\begin{equation*}
\bm j =- \int_{\mathbb R^3} \bm v f \, d\bm v \quad \text{and} \quad \rho = -\int_{\mathbb R^3} f \, d\bm v \, .
\end{equation*}
\subsubsection*{Notation and function spaces}
By $\mathscr{D}(\mathbb R^3)$ we denote the space of infinitely smooth functions with compact support, and by $\mathscr{D}'(\mathbb R^3)$ the
space of Schwartz distributions.
For any $s\in\mathbb R$ and $1<p<\infty$, with $W^{s,p}(\mathbb R^N)$ we denote the fractional Sobolev space
\begin{equation*}
W^{s,p}(\mathbb R^N) \defeq \big( I-\Delta \big)^{-s/2}L^p(\mathbb R^N) \, .
\end{equation*}
In the case $p=2$ we let $H^s(\mathbb R^N)\defeq W^{s,2}(\mathbb R^N).$ We refer to \cite{Taylor3} for the definition on domains and other properties. In the formulation of the theorems we use the following notation:
\begin{equation*}
H^s(\mathbb R^3 ;\mathbb S^2) \defeq \{ \bm m :\mathbb R^3 \rightarrow \mathbb S^2 : \bm m - \bm{\hat {e}}_3 \in H^s(\mathbb R^3;\mathbb R^3) \} \, .
\end{equation*}
With $C>0$ we always denote a generic constant that may change from line to line.
With $\langle \cdot,\cdot \rangle$ we denote the $L^2$ scalar product with respect to the spatial variable $\bm x$.
With $B_R$ we denote an open ball in $\mathbb R^3$ with radius $R>0$.
With $\mathcal{F}$ we denote the Fourier transform.
\subsubsection*{Global weak solutions of the LLG-VM system}
We are concerned with global existence of distributional solutions for initial data in the energy space. Under some further integrability properties on the initial distribution
function we obtain the following existence result:
\begin{thm}\label{thm:glavni}
Let $\bm m_0 \in H^2 (\mathbb R^3;\mathbb S^2)$ and $h>1/4\,$. Let $f_0\in L^1\cap L^r(\mathbb R^3 \times \mathbb R^3)$ for $r>3$ be non-negative and satisfy
\begin{equation*}
\int\int_{\mathbb R^3 \times \mathbb R^3} f_0 |\bm v|^2 \, d\bm x \, d\bm v < \infty.
\end{equation*}
Let $\bm E_0,\bm B_0 \, \in L^2(\mathbb R^3; \mathbb R^3)$ satisfy the following compatibility condition:
\begin{equation*}
\nabla \cdot \bm E_0 = - \int_{\mathbb R^3} f_0 \, d\bm v\, , \quad \nabla \cdot \bm B_0 = 0 \quad \text{in} \quad \mathscr{D}'(\mathbb R^3) \, .
\end{equation*}
Then there exist
\begin{gather*}
\bm m \in L^\infty((0,\infty);H^2 (\mathbb R^3;\mathbb S^2)) \cap \dot H^1((0,\infty);L^2(\mathbb R^3;\mathbb R^3)) \, , \\
f\in L^\infty((0,\infty) \, ;L^1 \cap L^r ( \mathbb R^3 \times \mathbb R^3))\, , \\
\bm E, \bm B \in L^\infty((0,\infty) ; L^2(\mathbb R^3;\mathbb R^3)) \, ,
\end{gather*}
which satisfy the LLG-VM system \eqref{eq:LLG}-\eqref{eq:Maxwell-1} in the sense of distributions such that
\begin{alignat*}{3}
&\hphantom{\bm E,}\bm m \in C([0,\infty)\, ;\, H^s(\mathbb R^3 \,;\mathbb S^2)) \quad &&\text{for all } s<2 \, , \\
&f \in C([0,\infty)\, ; \,W_{loc}^{-s,\,p}(\mathbb R^3 \times \mathbb R^3)) \quad &&\text{for all } s>0 \, , \; p\in [3r/(2r+3),\, 3] \, , \\%,\text{ all } R<\infty \, , \\
&\bm E,\, \bm B \in C([0,\infty)\, ;\, H^{-s}_{loc}(\mathbb R^3\,;\mathbb R^3)) \quad &&\text{for all } s>0 \,
\end{alignat*}
where $f$ is non-negative and $f|_{t=0} = f_0,\ \bm E|_{t=0}=\bm E_0, \ \bm B|_{t=0}=\bm B_0,\ \bm m|_{t=0} = \bm m_0$.
\end{thm}
\paragraph{Remarks.}
\begin{enumerate}
\item The magnetization field $\bm m$ obtained in Theorem \ref{thm:glavni} is, by virtue of Sobolev embedding, continuous in space-time and therefore topology preserving. Alternatively, it can be shown that
$t \mapsto H(\bm m(t)) \in \mathbb{Z}$ is continuous and therefore constant, see Lemma \ref{lem:hopfion}. The model is therefore suitable to describe the current-driven dynamics of skyrmions and hopfions in frustrated magnets.
\item The solution $(\bm m, f, \bm E, \bm B)$ satisfies the energy inequality \eqref{eq:abstract_energy_law}. An interesting open question concerns strong continuity in the energy norm, i.e., whether \eqref{eq:abstract_energy_law} upgrades to an identity, which is unknown even for weak solutions of the Vlasov-Maxwell system alone.
\item Due to the strong regularizing effect of the governing micromagnetic energy \eqref{eq:frustratedmagnet}, the free LLG equation \eqref{eq:LLG} for $\bm j=0$ allows for regular solutions $\bm m$ up to any order.
In the coupled system, however, the regularity of $\bm m$ is limited by the regularity of $\bm j$ arising from $f$.
\item The critical regularity assumption for $f_0$ is $r=3$ in order to have integrability of the coupling term $\bm e \, f$ in the Vlasov equation. Consequently, from the velocity moment estimate in Lemma \ref{lem:jregularity} and the energy-dissipation law \eqref{eq:abstract_energy_law}, $\bm j$ has spatial regularity in $L^{6/5}$. Interestingly, in view of Sobolev embedding, this is at the same time the critical exponent for the coupling term $(\bm j \cdot \nabla) \bm m$ in the LLG equation. In contrast to \cite{DiPernaLions1989}, the proof of Theorem \ref{thm:glavni} fails in the critical case due to the lack of strong convergence of $\bm j$ in $L^{6/5}$.
\item Another open question is uniqueness, which is unknown even for weak solutions of the Vlasov-Maxwell system alone. The regularity theory for the LLG part is strong enough to obtain a partial uniqueness result, even for the critical exponent $L^{6/5}$.
\end{enumerate}
\begin{thm}\label{thm:uniqueness}
Let $\bm j \in L^\infty ((0,\infty); L^{6/5}(\mathbb R^3;\mathbb R^3))$ be fixed. Then the distributional solution to equation \eqref{eq:LLG} with regularity from the previous Theorem
\begin{align*}
\bm m \in L^\infty((0,\infty);H^2 (\mathbb R^3;\mathbb S^2)) \cap \dot H^1((0,\infty);L^2(\mathbb R^3;\mathbb R^3))
\end{align*}
is unique.
\end{thm}
\paragraph{Further questions and possible extensions}
\begin{enumerate}
\item The system of equations considered in this work is perhaps the most basic model that
features a coupling of dissipative magnetization dynamics and classical electron transport via emergent electromagnetic fields. The compactness methods of
\cite{DiPernaLions1989}, and thus the present result, extend to
the relativistic case where $\bm v$ is replaced by $\bm v/\sqrt{1+|\bm v|^2}$, see also \cite{Rein2004, Glassey1986} for existence results in this case. Classical solutions can be extended globally in time provided the momentum support can be controlled.
\item A weightier generalization towards a more realistic model of electron transport in solids lies in the inclusion of particle interactions in the form of collisions by means of a Boltzmann operator or its simplified BGK variant. Ignoring magnetic fields, global weak solutions to the Vlasov-Poisson-BGK system have been constructed in \cite{Zhang2010,Zhang2013}. Due to the limited regularity of the Lorentz force, however, the arguments based on velocity moments lemmata as in \cite{Perthame1989, Perthame1990} do not extend to the Vlasov-Maxwell-BGK system, an open problem of its own.
\item In the context of the Landau-Lifshitz-Gilbert equation, we neglect the coupling of the magnetization field to the Maxwell equations through the magnetic field,
i.e. a constitutive law $\bm B= \bm H + \bm m$, which would give rise to the Landau-Lifshitz-Gilbert-Maxwell system. In micromagnetics, it is customary to assume a quasistatic situation where electric fields are ignored and the magnetic Gau{\ss} law
gives rise to the so-called stray-field interaction, which is a non-local but lower order contribution, see e.g. \cite{Melcher2010} and literature therein.
\item It would also be interesting to further investigate the role of Gilbert damping $\alpha>0$. The corresponding space-time $L^2$ bound on $\partial_t \bm m$
provides a suitable bound for the emergent electric field. The lack of a natural uniform bound in the case $\alpha=0$ requires high regularity of $f$, and it would be interesting to investigate local well-posedness results for this fully conservative system.
\end{enumerate}
\section{Solving the Landau-Lifshitz-Gilbert equation}
We examine global solvability of \eqref{eq:LLG_fourth} for a fixed current density $\bm j$ with regularity specified below. The requisite higher order Sobolev estimates can eventually be reduced to an $H^2$ estimate which is bounded by the energy \eqref{eq:frustratedmagnet}
for large enough $h$.
\begin{lem}\label{lem:energija}
Suppose $h>1/4$. Then $E(\bm m)$ is equivalent to $\|\bm m- \bm{\hat {e}}_3 \|_{H^2}^2\,.$
\end{lem}
\begin{proof}
The upper bound is straightforward. To obtain a lower bound, using Young's inequality for arbitrary $\epsilon>0$ we have
\begin{align*}
\|\nabla \bm m \|_{L^2}^2 = \int_{\mathbb R^3} |\bm \xi|^2 \left| \mathcal F (\bm m - \bm{\hat {e}}_3) \right|^2 \, d\bm \xi \leq \int_{\mathbb R^3} \left(\frac{|\bm \xi|^4}{4\epsilon} + \epsilon \right) \left| \mathcal F (\bm m - \bm{\hat {e}}_3) \right|^2 \, d\bm \xi \, .
\end{align*}
It follows that
\begin{align*}
E(\bm m ) \geq \left(1-\frac{1}{4\epsilon}\right)\| \nabla^2 \bm m \|^2_{L^2} + (h-\epsilon) \| \bm m - \bm{\hat {e}}_3 \|_{L^2}^2 \, .
\end{align*}
Since $h>1/4$ we can take any $\epsilon>0$ such that $1/4<\epsilon<h$ and conclude.
\end{proof}
$H^2$ coercivity of the energy is closely related to uniform parabolicity of the governing Landau-Lifshitz-Gilbert equation.
To highlight the structure of \eqref{eq:LLG} as a fourth order parabolic system, we pass to the so-called Landau-Lifshitz formulation, see e.g. \cite{MelcherPtashnyk2013}. Extracting the leading fourth order terms yields
\begin{equation} \label{eq:LLG_fourth}
(1+\alpha^2) \partial_t \bm m + A(\bm m) \Delta^2 \bm m = A(\bm m) \bm f - \alpha \Lambda \bm m \, ,
\end{equation}
where $A(\bm m) \in \mathbb{R}^{3 \times 3}$ is such that for all $\bm \xi \in \mathbb{R}^3$
\[
A(\bm m) \bm \xi = \alpha \bm \xi - \bm m \times \bm \xi.
\]
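Note that $A(\bm m)$ is uniformly elliptic: since $\bm \xi \cdot (\bm m \times \bm \xi) = 0$ and $|\bm m \times \bm \xi|^2 = |\bm \xi|^2 - (\bm m \cdot \bm \xi)^2$ for $|\bm m|=1$, we have
\begin{align*}
\bm \xi \cdot A(\bm m) \bm \xi = \alpha \, |\bm \xi|^2 \quad \text{and} \quad |A(\bm m) \bm \xi|^2 = \alpha^2 |\bm \xi|^2 + |\bm m \times \bm \xi|^2 \leq (1+\alpha^2) \, |\bm \xi|^2 \, ,
\end{align*}
with equality in the latter precisely for $\bm \xi \perp \bm m$, as is the case for tangent fields.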
The vector field $\bm f$ and the function $\Lambda$ depend on $\bm m$ and its derivatives. More precisely
\begin{equation*}
\bm f = \left[ h \, \bm{\hat {e}}_3 - \bm m \times (\bm j \cdot \nabla) \bm m - \Delta \bm m \right]^{\rm tan} \, ,
\end{equation*}
where $\bm \xi^{\rm tan}= \bm \xi - (\bm \xi \cdot \bm m) \bm m$. Moreover, $\Lambda = -\bm m \cdot \Delta^2 \bm m$.
Taking into account that $|\bm m|=1$, we obtain
\begin{equation}
\Lambda = |\Delta \bm m|^2 + \Delta | \nabla \bm m|^2 + 2 \nabla \bm m \cdot \nabla \Delta \bm m \, .
\end{equation}
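For completeness, this identity follows from differentiating the constraint $|\bm m|^2 = 1$ twice, which gives $\bm m \cdot \Delta \bm m = -|\nabla \bm m|^2$, together with the product rule $\Delta (\bm m \cdot \Delta \bm m) = \bm m \cdot \Delta^2 \bm m + 2 \nabla \bm m \cdot \nabla \Delta \bm m + |\Delta \bm m|^2$, so that
\begin{align*}
\Lambda = - \bm m \cdot \Delta^2 \bm m = |\Delta \bm m|^2 + 2 \nabla \bm m \cdot \nabla \Delta \bm m - \Delta ( \bm m \cdot \Delta \bm m ) = |\Delta \bm m|^2 + \Delta |\nabla \bm m|^2 + 2 \nabla \bm m \cdot \nabla \Delta \bm m \, .
\end{align*}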
\begin{lem} Suppose that, for $T>0$ and $l\ge 4$,
\[
\bm m\in C([0,T]; H^l(\mathbb R^3;\mathbb S^2)) \quad \text{with} \quad \partial_t \bm m \in C([0,T];H^{l-4}(\mathbb R^3;\mathbb R^3)) \, ,
\]
is a solution of \eqref{eq:LLG}. Then
\begin{equation} \label{eq:LLG_energy_1}
E(\bm m (T)) + \alpha \int_0^T \| \partial_t \bm m \|_{L^2}^2 \, dt = E(\bm m_0) -\int_0^T \langle \bm j, \bm e \rangle \, dt \, ,
\end{equation}
and, for $\lambda=\alpha/(1+\alpha^2)$,
\begin{align} \label{eq:LLG_energy_2}
\|\Delta \bm m(T)\|_{L^2}^2 + \lambda \int_0^T \|\Delta^2 \bm m\|_{L^2}^2 \, dt
\le\|\Delta \bm m_0\|_{L^2}^2 + C\, &\int_0^T \alpha^{-1} \| \bm f\|_{L^2}^2 + \lambda\| \Lambda \|_{L^2}^2 \, dt \, .
\end{align}
\end{lem}
\begin{proof}
\eqref{eq:LLG_energy_1} is obtained upon multiplying \eqref{eq:LLG} by $\bm m \times \partial_t \bm m$. \eqref{eq:LLG_energy_2} is obtained upon multiplying \eqref{eq:LLG_fourth} by $\Delta^2 \bm m$, using Young's inequality and
the fact that $|A \bm \xi|^2=(1+\alpha^2)|\bm \xi|^2$.
\end{proof}
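To indicate the computation behind \eqref{eq:LLG_energy_1}: writing $\bm h_{\rm eff} = \Delta \bm m + \Delta^2 \bm m - h \, \bm{\hat e}_3$ and using the Lagrange identity $(\bm m \times \bm a) \cdot (\bm m \times \bm b) = \bm a \cdot \bm b - (\bm m \cdot \bm a)(\bm m \cdot \bm b)$ together with $\bm m \cdot \partial_t \bm m = 0$, the precessional terms contribute
\begin{align*}
\big( \bm m \times ( \alpha \, \partial_t \bm m + \bm h_{\rm eff} ) \big) \cdot ( \bm m \times \partial_t \bm m ) = \alpha \, |\partial_t \bm m|^2 + \bm h_{\rm eff} \cdot \partial_t \bm m \, ,
\end{align*}
while the triple product identity yields for the current term
\begin{align*}
\big( (\bm j \cdot \nabla) \bm m \big) \cdot ( \bm m \times \partial_t \bm m ) = - j_i \, \bm m \cdot ( \partial_i \bm m \times \partial_t \bm m ) = - \bm j \cdot \bm e \, .
\end{align*}
Since $\bm h_{\rm eff} \cdot \partial_t \bm m$ coincides with the rate of change of the energy density along the flow (again using $\bm m \cdot \partial_t \bm m = 0$), integration in space and time gives \eqref{eq:LLG_energy_1}.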
\paragraph{Short-time solutions} Fourth order quasilinear parabolic systems of the more general form
$\partial_t \bm u +A(\bm u) \Delta^2 \bm u = \bm B(t, \bm x, \bm u, \dots, \nabla^3 \bm u)$
admit a local existence theory in Sobolev spaces $H^l(\mathbb{R}^3; \mathbb{R}^3)$ with $l\ge 5$ so that, by Sobolev embedding, $\nabla^k \bm u$ is uniformly bounded in space for $0 \le k \le 3$.
Assuming that
$A$ is smooth and uniformly elliptic, i.e., there exists $\alpha>0$ such that $\bm \xi \cdot A(\bm u) \bm \xi \ge \alpha |\bm \xi|^2$ for all $\bm u$ and $\bm \xi$, and $\bm B$ is continuous with smooth dependence on $\bm u$ and its derivatives and local bounds that are independent of $\bm x$ and $t$, a priori estimates
are obtained by using multipliers $\Delta^k \bm u$ as in \eqref{eq:LLG_energy_2} but for $1 \le k \le l$. A bootstrap and Gronwall-type argument yields an $H^l$-bound up to some time $T>0$. Analogous bounds
can be obtained for suitably approximated or truncated systems that can be solved locally by an ODE argument. Compactness arguments then yield a short-time solution to the original system as in \cite{Taylor3}. Details can be found in \cite{Tvrtko_Diss}. Letting $\bm u = \bm m - \bm{\hat {e}}_3$
as in \cite{Melcher2014, MelcherPtashnyk2013}, this modified Galerkin method applies to \eqref{eq:LLG} and yields short-time solutions $\bm m\in C([0,T]; H^l(\mathbb R^3;\mathbb S^2)) \cap C^1([0,T]; H^{l-4}(\mathbb R^3;\mathbb S^2))$.
\subsubsection*{Global smooth solution}
Owing to the special structure of the geometric nonlinearities of \eqref{eq:LLG}, uniform bounds extend to all times.
\begin{thm}\label{thm:LLGglobalno}
Suppose $\bm m_0 \in H^l(\mathbb R^3;\mathbb S^2)$ and $\bm j \in C([0,\infty );H^{l-2}(\mathbb R^3;\mathbb R^3))$ for some integer $l\geq 5\,$. Then there exists a unique global solution of \eqref{eq:LLG_fourth} such that
\begin{equation*}
\bm m\in C([0,\infty); H^l(\mathbb R^3;\mathbb S^2)) \quad \text{and} \quad \partial_t \bm m \in C([0,\infty);H^{l-4}(\mathbb R^3;\mathbb R^3))\, .
\end{equation*}
\end{thm}
\begin{rem}
In Theorem \ref{thm:uniqueness} we assume less regularity on $\bm j$ and thus, in particular, it yields uniqueness of the solution from Theorem \ref{thm:LLGglobalno}.
\end{rem}
\begin{proof}[\textbf{Proof of Theorem \ref{thm:LLGglobalno}}]
We proceed in three steps. We first apply Gronwall's inequality to the energy inequality \eqref{eq:LLG_energy_1}. Then we derive estimates that justify the use of Gronwall's inequality in \eqref{eq:LLG_energy_2}, which proves that $\Delta^2 \bm m$ remains bounded in $L^2((0,T)\times \mathbb R^3;\mathbb R^3)$ for all $T>0$. Finally, we establish uniform bounds on higher order Sobolev norms, which allow the local solution to be extended globally.
\textbf{\textit{Step 1}} From \eqref{eq:LLG_energy_1}, Lemma \ref{lem:energija} and Young's inequality we obtain
\begin{align*}
\frac{\alpha}{2} \, \int_0^T \|\partial_t \bm m\|_{L^2}^2 \, dt + C_1\, \| \bm m(T) - \bm{\hat {e}}_3\|_{H^2}^2 \leq E(\bm m_0) + C_2 \, \|\bm j\|_{L^\infty_{t,x}}^2 \int_0^T \| \nabla \bm m \|^2_{L^2} \, dt \, ,
\end{align*}
for some constants $C_1,C_2>0$. Using Gronwall's inequality we get
\begin{align}\label{eq:energyStep1}
\frac{\alpha}{2} \, \int_0^T \|\partial_t \bm m\|_{L^2}^2 \, dt + C_1 \, \| \bm m(T) - \bm{\hat {e}}_3\|_{H^2}^2 \leq E(\bm m_0) \, e^{C(T,\bm j) } \, ,
\end{align}
where $C(T,\bm j) = C_2/C_1 \,T \, \|\bm j\|_{L^\infty_{t,x}}^2 \, .$
\textbf{\textit{Step 2}} We expand on \eqref{eq:LLG_energy_2}. To estimate $\|\Lambda\|_{L^2}$ we use the following Sobolev and interpolation inequalities
\begin{align*}
\| f \|_{L^4} \leq C \, \| f\|_{\dot H^{3/4}} \, , \quad
\| f\|_{\dot H^s} \leq \| f\|_{\dot H^{s_1}}^{(s_2-s)/(s_2-s_1)} \, \| f\|_{\dot H^{s_2}}^{(s-s_1)/(s_2-s_1)} \, ,
\end{align*}
where $0\leq s_1<s<s_2$. For the first term of $\Lambda$ we have
\begin{align*}
\| |\Delta \bm m|^2 \|_{L^2} &\leq C \, \| \Delta \bm m \|_{\dot H^{3/4}}^2
\leq C \, \| \Delta \bm m \|^{3/4}_{\dot H^{2}} \| \Delta \bm m \|_{L^2}^{5/4}
\leq C \, \| \Delta^2 \bm m \|_{L^2}^{3/4} \| \bm m - \bm{\hat {e}}_3 \|_{H^2}^{5/4} \, .
\end{align*}
The second term of $\Lambda$ is bounded by
\begin{align*}
\| \Delta |\nabla \bm m|^2\|_{L^2} \leq C \left( \| |\nabla^2 \bm m |^2 \|_{L^2} + \| \nabla \bm m \cdot \nabla \Delta \bm m \|_{L^2} \right) \, ,
\end{align*}
where $\| |\nabla^2 \bm m |^2 \|_{L^2}$ satisfies the same estimate as $\| |\Delta \bm m|^2 \|_{L^2}$.
Moreover,
\begin{align*}
\| \nabla \bm m \cdot \Delta \nabla \bm m \|_{L^2} &\leq C \, \| \nabla \bm m \|_{L^4} \| \Delta \nabla \bm m \|_{L^4} \\
&\leq C \, \| \nabla \bm m \|_{\dot H^{3/4}} \| \Delta \nabla \bm m \|_{\dot H^{3/4}} \\
&\leq C \, \| \bm m - \bm{\hat {e}}_3\|_{H^2} \| \Delta \bm m \|_{\dot H^{7/4}} \\
&\leq C \, \| \bm m - \bm{\hat {e}}_3\|_{H^2} \| \Delta \bm m \|_{L^2}^{1/8} \| \Delta \bm m \|_{\dot H^2}^{7/8} \\
&\leq C \, \| \bm m - \bm{\hat {e}}_3 \|_{H^2}^{9/8} \| \Delta^2 \bm m \|^{7/8}_{L^2} \, ,
\end{align*}
which provides the required estimate for the last term of $\Lambda$ as well. We estimate $\|\bm f\|_{L^2}$ by
\begin{align*}
\| \bm f \|_{L^2} \leq h \| \bm{\hat {e}}_3^{\rm tan} \|_{L^2} + \| (\bm j \cdot \nabla ) \bm m \|_{L^2} + \|\Delta \bm m \|_{L^2} \, ,
\end{align*}
where we used $|\bm \xi ^{\rm tan} | \leq |\bm \xi|.$ Since $|\bm m| = 1$ we have $2(1-m_3) = |\bm m- \bm{\hat {e}}_3|^2$ and hence
\begin{align*}
\| \bm{\hat {e}}_3^{\rm tan} \|_{L^2} = \left(\int_{\mathbb R^3} 1-m_3^2 \, d\bm x \right)^{1/2} = \left(\int_{\mathbb R^3} \frac{1}{2}|\bm m - \bm{\hat {e}}_3 |^2 (1+m_3) \, d\bm x \right)^{1/2} \leq \|\bm m - \bm{\hat {e}}_3\|_{L^2} \, .
\end{align*}
The term including $\bm j$ is bounded by
\begin{align*}
\| (\bm j \cdot \nabla ) \bm m \|_{L^2} \leq \| \bm j \|_{L^\infty_{t,x}} \|\nabla \bm m \|_{L^2} \, .
\end{align*}
Therefore
\begin{align*}
\| \bm f \|_{L^2} \leq C \, (1+\| \bm j\|_{L^\infty_{t,x}} ) \| \bm m - \bm{\hat {e}}_3\|_{H^2}
\end{align*}
for a constant only depending on $h$.
Using the estimates above in \eqref{eq:LLG_energy_2} we get
\begin{align*}
\| \Delta \bm m (T) \|^2_{L^2} + \lambda \int_0^T \|\Delta^2\bm m\|^2_{L^2} \, dt \leq \| \Delta \bm m_0 \|^2_{L^2} + C &\int_0^T R\big(\| \bm m - \bm{\hat {e}}_3\|_{H^2},\| \Delta^2\bm m \|_{L^2}\big) \\
\ &\phantom{{} = }+ \| \bm j \|_{L^\infty_{t,x}}^2 \|\bm m - \bm{\hat {e}}_3 \|_{H^2}^2 \, dt \, ,
\end{align*}
where the function $R(a,b)$ is given by
\begin{align*}
R(a,b) = a^2 + a^4 + a^{5/2} b^{3/2} + a^{9/4} b^{7/4}.
\end{align*}
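Here the mixed terms of $R$ are absorbed by Young's inequality with conjugate exponents $(4, 4/3)$ and $(8, 8/7)$, respectively: for any $\delta > 0$,
\begin{align*}
a^{5/2} \, b^{3/2} \leq C_\delta \, a^{10} + \delta \, b^2 \quad \text{and} \quad a^{9/4} \, b^{7/4} \leq C_\delta \, a^{18} + \delta \, b^2 \, ,
\end{align*}
which explains the appearance of the power $18$; choosing $\delta$ small, the contributions $\delta \, b^2$ with $b = \| \Delta^2 \bm m \|_{L^2}$ are absorbed into the left-hand side, while all powers of $a = \| \bm m - \bm{\hat e}_3 \|_{H^2}$ are dominated by $1 + a^{18}$.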
Using Young's inequality and absorbing the highest order term on the left-hand side leads to
\begin{align*}
\| \Delta \bm m (T) \|^2_{L^2} + \frac{\lambda}{2} \int_0^T \|\Delta^2\bm m\|^2_{L^2} \, dt \leq \| \Delta \bm m_0\|^2_{L^2} + C &\int_0^T 1+ \| \bm m - \bm{\hat {e}}_3\|_{H^2}^{18} \\
\ &\phantom{{} = } +\| \bm j \|_{L^\infty_{t,x}}^2 \| \bm m - \bm{\hat {e}}_3\|_{H^2}^2 \, dt \, .
\end{align*}
Using the $H^2$ estimate \eqref{eq:energyStep1}, we obtain
\begin{align}\label{eq:energyStep2}
\| \Delta \bm m(T) \|^2_{L^2} + \frac{\lambda}{2} \, \int_0^T \|\Delta^2\bm m\|^2_{L^2} \, dt \leq \| \Delta \bm m_0\|_{L^2}^2 + C(E(\bm m_0))\, T \, e^{C(T,\bm j)} \left(1+\| \bm j \|_{L^\infty_{t,x}}^2 \right) \, ,
\end{align}
where $C(T,\bm j) = C_2/C_1 \, T \, \|\bm j\|_{L^\infty_{t,x}}^2 \, .$
\textbf{\textit{Step 3}} We show that the estimates from the previous two steps imply uniform bounds on higher
order Sobolev norms of $\bm m - \bm{\hat {e}}_3$. To this end, we use multipliers $\Delta^k \bm m$ for all $1\leq k \leq l$ and integrate by parts, i.e., letting $D^k = \nabla \otimes \dots \otimes \nabla$ denote the $k$-fold gradient, we apply $D^k$ to \eqref{eq:LLG_fourth} and integrate against $D^k \bm m$. We focus on the highest order terms, using Moser's inequality
$\|fg\|_{H^k} \le C \left( \|f\|_{H^k} \|g\|_{L^\infty}+ \|f\|_{L^\infty} \|g\|_{H^k} \right)$
as an additional tool.
We estimate the first term coming from $\Lambda$ by
\begin{align}\label{eq:LLGStep3_21}
\big \langle D^k(\bm m \, |\Delta \bm m |^2),D^k \bm m \big \rangle &\leq \| |\Delta \bm m|^2 \bm m \|_{H^l} \| \bm m - \bm {\hat e}_3\|_{H^l} \\
&\leq C \, \left ( \| \Delta \bm m \|_{L^\infty}^2 \| \bm m - \bm {\hat e}_3\|_{H^l} + \| \Delta \bm m \|_{L^\infty} \| \Delta \bm m \|_{H^l} \right) \| \bm m - \bm {\hat e}_3 \|_{H^l} \nonumber \, .
\end{align}
To estimate the second term from $\Lambda$ we first rewrite it as in Step 2
\begin{align*}
\big \langle D^k( \bm m \, \Delta |\nabla \bm m|^2), D^k \bm m \big \rangle = \big \langle D^k(\bm m |\nabla^2 \bm m |^2 ) + D^k \big(\bm m (\nabla \bm m \cdot \nabla \Delta \bm m )\big), D^k \bm m \rangle \eqdef (\RN{1})+ (\RN{2}) \, .
\end{align*}
We estimate $(\RN{1})$ in the same way as \eqref{eq:LLGStep3_21} to obtain the same bound.
To estimate $(\RN{2})$ we need to integrate by parts to obtain
\begin{alignat*}{3}
\big \langle D^k \big(\bm m (\nabla \bm m \cdot \nabla \Delta \bm m )\big) , D^k \bm m \big \rangle &= &&- \big \langle D^k (\bm m \, |\Delta \bm m |^2 ), D^k \bm m \big \rangle \\
& && - \big \langle D^k \big( (\nabla \bm m \cdot \Delta \bm m)\big) \nabla \bm m, D^k \bm m\big \rangle \\
& &&- \big \langle D^k \big(\bm m (\nabla \bm m \cdot \Delta \bm m ) \big), \nabla D^k \bm m \big \rangle \\
&\eqdef && \, (a)+(b)+(c) \, .
\end{alignat*}
The first term $(a)$ we already estimated in \eqref{eq:LLGStep3_21}. We estimate $(b)$ as follows
\begin{align*}
|(b)| &\leq C \,\|\bm m - \bm{\hat {e}}_3\|_{H^l} \left(\| \nabla \bm m\|^2_{L^\infty}\|\Delta \bm m \|_{H^l} + \|\nabla \bm m \|_{L^\infty} \|\Delta \bm m \|_{L^\infty} \|\nabla \bm m \|_{H^l} \right)\, .
\end{align*}
In the same way we get
\begin{align*}
|(c)|\leq C\, \| \nabla \bm m \|_{H^l} &\Big( \| \Delta \bm m \|_{L^\infty} \| \nabla \bm m \|_{H^l} + \| \nabla \bm m \|_{L^\infty} \| \Delta \bm m \|_{H^l} \\
\ &\phantom{{} + }+ \| \nabla \bm m \|_{L^\infty} \|\Delta \bm m \|_{L^\infty} \|\bm m - \bm{\hat {e}}_3\|_{H^l} \Big).
\end{align*}
Remark that we have already estimated the last term coming from $\Lambda$ in $(\RN{2})$.
To treat the leading order term we have
\begin{align*}
\big \langle D^k \left(A(\bm m) \Delta^2 \bm m \right) , D^k \bm m \big \rangle = \alpha \, \|D^k \Delta \bm m \|_{L^2}^2 + \big \langle D^k ( \bm m \times \Delta^2 \bm m), D^k \bm m \big \rangle \, .
\end{align*}
To estimate the second term we first integrate by parts to get
\begin{align*}
\big \langle D^k ( \bm m \times \Delta^2 \bm m), D^k \bm m \big \rangle = 2\, \big \langle D^k (\nabla \bm m \times \Delta \bm m ) , \nabla D^k \bm m \big \rangle + \big \langle D^k (\bm m \times \Delta \bm m ), \Delta D^k \bm m \big \rangle \, .
\end{align*}
We only need to estimate the second term on the right-hand side since the first one is estimated analogously to (c). Since $\langle \bm m \times D^k \Delta \bm m, D^k \Delta \bm m \rangle= 0$ we have
\begin{align*}
\big \langle D^k (\bm m \times \Delta \bm m ), \Delta D^k \bm m \big \rangle &= \big \langle D^{k-1} (D \bm m \times \Delta \bm m ), \Delta D^k \bm m \big \rangle \\
&\leq C\, \| \Delta \bm m \|_{H^l} \left( \| \nabla \bm m \|_{L^\infty} \|\Delta \bm m \|_{H^{l-1}} + \|\Delta \bm m \|_{L^\infty} \| \bm m - \bm{\hat {e}}_3\|_{H^l} \right) \, .
\end{align*}
We have now estimated all of the highest order terms. We now estimate terms coming from $\bm f$. We have
\begin{align*}
\left \langle D^k \left[A(\bm m) (\Delta \bm m )^{\rm tan} \right] , D^k \bm m \right \rangle &\leq C\, \|\bm m - \bm{\hat {e}}_3\|_{H^l} \left( \| \Delta \bm m \|_{H^l} + \|\Delta \bm m \|_{L^\infty} \| \bm m - \bm{\hat {e}}_3\|_{H^l} \right) \, , \\
\left \langle D^k \left[ A(\bm m) (h \, \bm{\hat {e}}_3)^{\rm tan} \right] , D^k \bm m \right \rangle &\leq C \, \| \bm m - \bm{\hat {e}}_3\|_{H^l}^2 \, .
\end{align*}
For $k\geq 2$, we estimate the terms involving the current density as follows
\begin{align*}
\big \langle D^k \left[ A(\bm m) \, (\bm m \times (\bm j \cdot \nabla ) \bm m) \right], D^k \bm m \big \rangle &= \big \langle D^{k-2} \left[ A(\bm m) \, (\bm m \times (\bm j \cdot \nabla ) \bm m) \right], D^{k-2} \Delta^2 \bm m \big \rangle \\
&\leq C \, \| \Delta \bm m \|_{H^l} \Big( \| \bm m - \bm{\hat {e}}_3\|_{H^{l-2}} \| \bm j \|_{L^\infty} \| \nabla \bm m \|_{L^\infty} \\
\ &\phantom{{} = } + \|\nabla \bm m\|_{L^\infty} \|\bm j \|_{H^{l-2}} + \|\nabla \bm m\|_{H^{l-2}} \|\bm j \|_{L^\infty} \Big) \, .
\end{align*}
For $k=1$ we simply have
\begin{align*}
\big \langle D \left[ A(\bm m) \, (\bm m \times (\bm j \cdot \nabla ) \bm m) \right], D \bm m \big \rangle &= -\big \langle \left[ A(\bm m) \, (\bm m \times (\bm j \cdot \nabla ) \bm m) \right], D^2 \bm m \big \rangle \\ &\leq C \, \|\bm j\|_{L^\infty} \| \bm m - \bm{\hat {e}}_3\|_{H^l}^2 \, .
\end{align*}
Making use of the interpolation inequality $\|D f \|_{H^l} \leq C\,\|D^2 f \|_{H^{l}}^{1/2} \| f \|_{H^{l}}^{1/2}\,$ we have
\begin{align}\label{eq:interpolation}
\|\nabla \bm m \|_{H^l} \leq C\, \|\bm m - \bm{\hat{e}}_3 \|_{H^{l}}^{1/2} \| \Delta \bm m \|_{H^l}^{1/2}\,\quad \text{and} \quad \| \Delta \bm m \|_{H^{l-1}} \leq C\, \|\bm m - \bm{\hat{e}}_3 \|_{H^l}^{1/2} \| \Delta \bm m \|_{H^l}^{1/2} \, .
\end{align}
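For instance, the first estimate in \eqref{eq:interpolation} follows by applying the interpolation inequality to $f = \bm m - \bm{\hat{e}}_3$ and using that $\|D^2 f\|_{H^l} = \|\Delta f\|_{H^l}$ on $\mathbb R^3$ (on the Fourier side, $\sum_{i,j} |\xi_i \xi_j|^2 = |\xi|^4$):
\begin{align*}
\|\nabla \bm m\|_{H^l} = \| D (\bm m - \bm{\hat{e}}_3) \|_{H^l} \leq C\, \| D^2 (\bm m - \bm{\hat{e}}_3) \|_{H^l}^{1/2} \, \| \bm m - \bm{\hat{e}}_3 \|_{H^l}^{1/2} = C\, \|\bm m - \bm{\hat{e}}_3 \|_{H^{l}}^{1/2} \| \Delta \bm m \|_{H^l}^{1/2} \, .
\end{align*}
The second estimate follows analogously from the interpolation $\|g\|_{H^{l-1}} \leq C\, \|g\|_{H^{l-2}}^{1/2} \|g\|_{H^l}^{1/2}$ applied to $g = \Delta \bm m$, together with $\|\Delta \bm m\|_{H^{l-2}} \leq \|\bm m - \bm{\hat{e}}_3\|_{H^l}$.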
Summing up the above estimates over all $k \leq l$, using \eqref{eq:interpolation}, Young's inequality, Sobolev embedding and absorbing the highest order term on the left-hand side we obtain
\begin{align*}
\|\bm m(T) - \bm{\hat {e}}_3\|_{H^l}^2 + \lambda \int_0^T \| \Delta \bm m \|_{H^l}^2 \, dt \leq \| \bm m_0- \bm{\hat {e}}_3 \|_{H^l}^2 + C \int_0^T \| \bm m - \bm{\hat {e}}_3\|_{H^l}^2\, F(\bm m , \bm j ) \, dt \, ,
\end{align*}
where
\begin{align*}
F(\bm m , \bm j ) = 1+ \| \Delta \bm m\|_{L^\infty}^2 + \| \nabla \bm m \|_{L^\infty}^4 + \|\nabla \bm m \|_{L^\infty}^{4/3} \| \Delta \bm m \|_{L^\infty}^{4/3}
+ \big(1+ \| \nabla \bm m \|^2_{L^\infty}\big) \| \bm j \|_{H^{l-2}}^2 \, ,
\end{align*}
in which lower-order terms have been taken into account as well.
Using Gronwall's inequality we get
\begin{align}\label{eq:Gronwallograda}
\| \bm m (T) - \bm{\hat {e}}_3\|_{H^l}^2 \leq \| \bm m_0- \bm{\hat {e}}_3 \|_{H^l}^2 \, e^{C(T,\bm m, \bm j)} \, ,
\end{align}
where
\begin{align*}
C(T,\bm m,\bm j) = C \int_0^T F(\bm m, \bm j) \, dt \, .
\end{align*}
We now turn to estimating $F(\bm m, \bm j)$ by using inequality \eqref{eq:energyStep2}. Let $3/2 < s \leq 2$ and let $q$ be the real number such that $q \,s = 4$. Then we have
\begin{align*}
\int_0^T \| \Delta \bm m\|_{L^\infty}^q \, dt \leq C \int_0^T \| \Delta \bm m \|_{\dot H^s}^q \, dt &\leq C \int_0^T \| \Delta \bm m\|_{\dot H^2}^{qs/2} \|\Delta \bm m\|_{L^2}^{(2-s)q/2} \, dt \\
&\leq C\, \| \Delta \bm m \|^{(2-s)q/2}_{L^\infty_t L^2_x} \, \int_0^T \| \Delta^2 \bm m \|_{L^2}^{2} \, dt \\
&\leq C(T,\bm j, E(\bm m_0)) \, ,
\end{align*}
where $ C(T,\bm j, E(\bm m_0))$ is a constant coming from \eqref{eq:energyStep2}. We have therefore obtained $\Delta \bm m \in L^q_t L^\infty_x$ for all $ 1\leq q < 8/3$\,. Similarly, let $3/2<s \leq 2$ and let $q$ be the real number such that $q (s-1)/2 = 2$; then
\begin{align*}
\int_0^T \| \nabla \bm m \|_{L^\infty}^q \, dt \leq C \int_0^T \| \nabla \bm m\|_{\dot H^s}^q \, dt &\leq C \int_0^T \|\nabla \bm m\|_{\dot H^1}^{(3-s)q/2} \|\nabla \bm m \|_{\dot H^3}^{q(s-1)/2} \, dt \\
&\leq C \,\| \Delta \bm m \|_{L^\infty_t L^2_x}^{(3-s)q/2} \int_0^T \| \Delta^2 \bm m \|_{L^2}^2 \, dt \\
&\leq C(T,\bm j ,E(\bm m_0)) \, ,
\end{align*}
where again $ C(T,\bm j, E(\bm m_0))$ is the appropriate constant coming from \eqref{eq:energyStep2}. We have thus obtained $\nabla \bm m \in L^q_t L^\infty_x$ for all $1\leq q < 8\,$. It remains to bound the mixed term with the help of Hölder's inequality:
\begin{align*}
\int_0^T \| \nabla \bm m \|_{L^\infty}^{4/3} \| \Delta \bm m \|_{L^\infty}^{4/3} \, dt \leq \left(\int_0^T \| \nabla \bm m \|_{L^\infty}^4 \, dt\right)^{1/3} \left(\int_0^T \| \Delta \bm m \|_{L^\infty}^2\, dt \right)^{2/3} \, .
\end{align*}
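For the reader's convenience, we verify the admissible ranges of the exponents appearing above: the constraint $q\,s = 4$ with $3/2 < s \leq 2$ gives
\begin{align*}
q = \frac{4}{s} \in \left[2, \tfrac{8}{3}\right) \, , \quad \text{while} \quad q = \frac{4}{s-1} \in [4,8) \quad \text{under the constraint} \quad \frac{q(s-1)}{2} = 2 \, ;
\end{align*}
the remaining exponents $1 \leq q < 2$, respectively $1 \leq q < 4$, are then recovered by Hölder's inequality in time on the bounded interval $(0,T)$.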
Going back to inequality \eqref{eq:Gronwallograda} we obtain the bound
\begin{align}\label{eq:Gronwallograda1}
\| \bm m(T)-\bm{\hat {e}}_3\|_{H^l} \leq C(T, \bm j , E(\bm m_0)) \, ,
\end{align}
for all times $T>0$, which gives us a global solution.
\end{proof}
\section{Tools for transport equations}\label{subsection:VM}
We summarize some well-known methods and results from the theory of transport equations that will be needed in the subsequent analysis.
\subsubsection*{Characteristic flow}
One of the main tools in the study of kinetic equations is the method of characteristics. In the case of a smooth, divergence-free vector field, the initial distribution is transported along the characteristic curves. In particular, consider the Vlasov equation
\begin{align}\label{eq:Vlasovkarakteristike}
\partial_t f + \bm v \cdot \nabla_x f + \left( \bm F_1(t,\bm x) + \bm v \times \bm F_2(t,\bm x) \right) \cdot \nabla_v f = 0 \, ,
\end{align}
for some bounded functions $\bm F_1,\bm F_2\in C([0,T]\times \mathbb R^3 ;\mathbb R^3)$ which are continuously differentiable with respect to $\bm x$. Then for every $t\in [0,T]$ and $(\bm x,\bm v) \in \mathbb R^3 \times \mathbb R^3$ there exists a unique solution $[0,T] \ni s \mapsto (\bm X,\bm V) (s,t,\bm x,\bm v)$ to the characteristic system of ODEs
\begin{align*}
\begin{cases}
\dot{\bm X}(s) = \bm V(s), \quad &\bm X(t) = \bm x \, , \\
\dot{\bm V}(s) = \bm F_1(s,\bm X(s)) + \bm V(s) \times \bm F_2(s,\bm X(s)), \quad &\bm V(t) = \bm v \, .
\end{cases}
\end{align*}
The characteristic flow is volume preserving as the generating vector field \[
\bm u(t,\bm x, \bm v)=(\bm v,\bm F_1(t ,\bm x) + \bm v \times \bm F_2(t,\bm x))\, ,\]
is divergence-free in $(\bm x, \bm v)$. Note that by means of this vector field, the Vlasov equation \eqref{eq:Vlasovkarakteristike} can be recast into the linear transport equation
\begin{equation}\label{eq:Vlasovtransport}
\partial_t f + \nabla_{x,v} \cdot (\bm u f) = 0.
\end{equation}
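The divergence-free property of $\bm u$ is immediate: since $\bm F_1$ and $\bm F_2$ do not depend on $\bm v$, and $\bm v$ is independent of $\bm x$,
\begin{align*}
\nabla_{x,v} \cdot \bm u = \nabla_x \cdot \bm v + \nabla_v \cdot \big( \bm F_1(t,\bm x) + \bm v \times \bm F_2(t,\bm x) \big) = \epsilon^{ijk} \, \delta_{ij} \, F_{2,k}(t,\bm x) = 0 \, ,
\end{align*}
by the antisymmetry of $\epsilon^{ijk}$.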
For smooth initial data $f_0\in C^1(\mathbb R^3\times \mathbb R^3)$ it is well known that the function given by
\begin{align*}
f(t,\bm x,\bm v) \defeq f_0(\bm X(0,t,\bm x,\bm v), \bm V(0,t,\bm x, \bm v)), \quad t\in [0,T],(\bm x,\bm v) \in \mathbb R^3\times \mathbb R^3 \, ,
\end{align*}
is the unique solution to \eqref{eq:Vlasovkarakteristike} in the space $C^1([0,T]\times \mathbb R^3 \times \mathbb R^3)$. The solution is constant along every solution of the characteristic system. Moreover, if $f_0$ is non-negative then so is $f$. Finally, by the volume preservation of the characteristic flow,
$f$ satisfies the $L^p$ conservation property
\begin{align*}
\|f(t)\|_{L^p} =\|f_0\|_{L^p}, \quad t\in [0,T],\; p \in [1,\infty] \, .
\end{align*}
We refer to \cite{ReinKineticElsevier} for the proof of these results.
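For instance, the constancy of $f$ along characteristics is a one-line computation: by the chain rule and the characteristic system,
\begin{align*}
\frac{d}{ds}\, f\big(s, \bm X(s), \bm V(s)\big) = \Big[ \partial_t f + \bm v \cdot \nabla_x f + \left( \bm F_1 + \bm v \times \bm F_2 \right) \cdot \nabla_v f \Big] \big(s, \bm X(s), \bm V(s)\big) = 0 \, .
\end{align*}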
\subsubsection*{Velocity averaging}
Let $f\in L^2(\mathbb R \times \mathbb R^N\times \mathbb R^N)$ satisfy the transport equation
\begin{equation} \label{eq:transport}
\partial_t f + \bm v \cdot \nabla_x f = g \quad \text{in} \quad \mathscr{D}'(\mathbb R\times \mathbb R^N\times \mathbb R^N).
\end{equation}
Depending on the regularity of the distribution $g$, local averages in $\bm v$ enjoy improved regularity properties in fractional Sobolev spaces in space-time:
\begin{thm}[DiPerna, Lions \cite{DiPernaLions1989}]\label{thm:dipernalions}
Let $m$ be a non-negative integer, let $f\in L^2 (\mathbb R\times \mathbb R^N \times \mathbb R^N)$ satisfy \eqref{eq:transport} where $g$ is given by
\begin{equation*}
g= g_1 + D_v^m g_2
\end{equation*}
and $g_1,\, g_2 \in L^q(B_R \, ; L^p(\mathbb R\times \mathbb R^N_x))$ for all $R<\infty$, where $2(N+1)/(N+3) < p \leq 2$ and $1\leq q \leq 2$. Then,
\begin{equation*}
\int_{\mathbb R^N} f(t,\bm x , \bm v) \psi(\bm v) \, d\bm v \in H^s(\mathbb R \times \mathbb R^N) \quad \text{for each} \quad \psi \in \mathscr D (\mathbb R^N) \, ,
\end{equation*}
where $s=1/2 (1-\theta) (m+1/q+1/2)^{-1}$ and $\theta = (N+1)(2-p) /2p$.
\end{thm}
\subsubsection*{Velocity moment estimates}
To bound the current density $\bm j$ uniformly we make use of the
velocity moment estimate from kinetic theory. We refer to Lemma 1.8 in \cite{ReinKineticElsevier} for the proof of this result.
\begin{lem} \label{lem:jregularity}
For $k\geq 0$ we denote the $k$th-order moment density and the $k$th-order velocity moment of a nonnegative, measurable function $f:\mathbb R^6\rightarrow [0,\infty)$ by
\begin{align*}
m_k(f) (\bm x) \defeq \int_{\mathbb R^3} |\bm v|^k \, f(\bm x,\bm v) \, d \bm v \, ,
\end{align*}
and
\begin{align*}
M_k(f) \defeq \int_{\mathbb R^3} m_k(f) (\bm x) \, d\bm x = \int \int_{\mathbb R^3 \times \mathbb R^3} |\bm v|^k f(\bm x,\bm v)\, d\bm v \, d\bm x \, .
\end{align*}
Let $1\leq p,q\leq \infty$ with $1/p+1/q = 1$, $0\leq k'\leq k <\infty$ and
\begin{align*}
\ell \defeq \frac{k+3/q}{k'+3/q+(k-k')/p} \, .
\end{align*}
If $f\in L^p_+ (\mathbb R^6)$ with $M_k(f)<\infty$ then $m_{k'} (f) \in L^\ell(\mathbb R^3)$ and
\begin{align*}
\| m_{k'}(f)\|_{L^\ell} \leq C \, \| f\|_{L^p}^{(k-k')/(k+3/q)} \, M_k(f)^{(k'+3/q)/(k+3/q)} \, ,
\end{align*}
where $C=C(k,k',p)>0$.
\end{lem}
\section{Proof of Theorem \ref{thm:glavni}} \label{section:Theorem1}
The arguments closely follow the strategy of \cite{DiPernaLions1989}: we start from a regularized system that admits global smooth solutions, for which the energy estimate provides the requisite uniform bounds. This enables us to apply compactness arguments based on velocity averaging and renormalization to the extended Vlasov equation containing the emergent electromagnetic field contributions.
\subsection*{Regularized LLG-VM system}
We first regularize the initial conditions for the VM system, i.e., we consider families
$f_0^\varepsilon \in \mathscr{D}_+ (\mathbb R^3 \times \mathbb R^3)$ and $\bm E^\varepsilon_0, \, \bm B^\varepsilon_0 \in \mathscr{D}(\mathbb R^3;\mathbb R^3)$ so that
\begin{equation*}
\int \int_{\mathbb R^3 \times \mathbb R^3} |f_0 - f_0^\varepsilon| (1+|\bm v|^2) + |f_0 - f_0^\varepsilon|^r \; d\bm x \, d\bm v \xrightarrow[]{\varepsilon} 0 \,
\end{equation*}
and
\begin{equation*}
\int_{\mathbb R^3} |\bm E_0 - \bm E_0^\varepsilon|^2 + |\bm B_0 - \bm B_0^\varepsilon|^2 \, d\bm x \xrightarrow[]{\varepsilon} 0 \, .
\end{equation*}
Moreover, for an integer $l\ge 7$ there exists $\bm m_0^\varepsilon \in H^l(\mathbb R^3;\mathbb S^2)$ such that
\begin{equation*}
\|\bm m_0 - \bm m_0^\varepsilon\|_{H^2} \xrightarrow[]{\varepsilon} 0 \,,
\end{equation*}
see e.g. \cite{Melcher2012}. Then a regularized system is obtained by
regularizing the current density and the Lorentz force by means of a
suitable mollifier $K_\varepsilon$, i.e.
\begin{equation}\label{eq:LLG1}
\partial_t \bm m^\varepsilon = \bm m^{\varepsilon} \times \left( \alpha \, \partial_t \bm m^\varepsilon + \Delta \bm m ^\varepsilon + \Delta^2 \bm m^\varepsilon - h\, \bm{\hat {e}}_3 \right) - (K_\varepsilon \, \bm j^\varepsilon \cdot \nabla ) \bm m^\varepsilon \,,
\end{equation}
coupled to the regularized VM system
\begin{gather}
\partial_t f^\varepsilon + \bm v \cdot \nabla_x f^\varepsilon
+ (K_\varepsilon \,\bm F^{\varepsilon}) \cdot \nabla_v f^\varepsilon =0
\, , \label{eq:Vlasov0} \\[1mm]
\varepsilon_r \, \partial_t \bm E^\varepsilon - \frac{1}{\mu_r} \, \nabla \times \bm B^\varepsilon = - K_\varepsilon \, \bm j^\varepsilon \, , \quad \partial_t \bm B^\varepsilon + \nabla \times \bm E^\varepsilon= 0 \, , \label{eq:Maxwell}
\end{gather}
where
\begin{gather}
\bm j^\varepsilon = -\int_{\mathbb R^3} f^\varepsilon \, \bm v \, d\bm v \, , \nonumber \\
\bm F^{\varepsilon} = - ( \bm E^\varepsilon + \bm e^\varepsilon + \bm v \times( \bm B^\varepsilon + \bm b^\varepsilon)) \, , \nonumber \\
e^\varepsilon_i = \bm m^\varepsilon \cdot (\partial_i \bm m^\varepsilon \times \partial_t \bm m^\varepsilon), \quad b^\varepsilon_i = \epsilon^{ijk}\, \bm m^\varepsilon \cdot (\partial_j \bm m^\varepsilon \times \partial_k \bm m^\varepsilon). \label{eq:regularizedemergent}
\end{gather}
With a slight abuse of notation, the operator $K_\varepsilon$ is a convolution operator defined by
\begin{equation*}
K_\varepsilon f (\bm x) = \int_{\mathbb R^3} K_\varepsilon(\bm x-\bm y)\, f(\bm y) \, d\bm y \quad \text{for} \quad f\in L^1_{loc}(\mathbb R^3)\, ,
\end{equation*}
where $K_\varepsilon$ is a standard mollifier satisfying
\begin{gather*}
K_\varepsilon \in C_c^\infty(\mathbb R^3),\quad \mathrm{supp}\, K_\varepsilon \subseteq \overline{ B_{\varepsilon}}, \quad \int_{\mathbb R^3} K_\varepsilon\, d\bm x = 1,\\
\quad K_\varepsilon(\bm x) \geq 0 \quad \text{and} \quad K_\varepsilon(\bm x) = K_\varepsilon (-\bm x) \text{ for } \bm x\in \mathbb R^3 \, .
\end{gather*}
From $K_\varepsilon (\bm x) = K_\varepsilon (-\bm x)$ it follows that the operator $K_\varepsilon$ is self-adjoint with respect to the $L^2$ scalar product.
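The self-adjointness is immediate from Fubini's theorem and the symmetry of the kernel: for $f, g \in L^2(\mathbb R^3)$,
\begin{align*}
\langle K_\varepsilon f , g \rangle = \int_{\mathbb R^3} \int_{\mathbb R^3} K_\varepsilon(\bm x - \bm y) \, f(\bm y) \, g(\bm x) \, d\bm y \, d\bm x = \int_{\mathbb R^3} f(\bm y) \, (K_\varepsilon \, g)(\bm y) \, d\bm y = \langle f , K_\varepsilon \, g \rangle \, .
\end{align*}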
Equation \eqref{eq:LLG1} is convenient due to the divergence structure of the highest order term, i.e. we have
\begin{align}\label{eq:divergencestructure}
\bm m^\varepsilon \times \Delta ^2 \bm m^\varepsilon &= \Delta (\bm m^\varepsilon \times \Delta \bm m^\varepsilon) - 2 \sum_{k=1}^3 \partial_k \bm m^\varepsilon \times \partial_k (\Delta \bm m^\varepsilon)\notag \\
&= \Delta (\bm m^\varepsilon \times \Delta \bm m^\varepsilon) - 2 \sum_{k=1}^3 \partial_k \left(\partial_k \bm m^\varepsilon \times \Delta \bm m^\varepsilon \right)\, .
\end{align}
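Both identities in \eqref{eq:divergencestructure} follow from the Leibniz rule together with $\bm a \times \bm a = 0$: for the first,
\begin{align*}
\Delta (\bm m^\varepsilon \times \Delta \bm m^\varepsilon) = \Delta \bm m^\varepsilon \times \Delta \bm m^\varepsilon + 2 \sum_{k=1}^3 \partial_k \bm m^\varepsilon \times \partial_k (\Delta \bm m^\varepsilon) + \bm m^\varepsilon \times \Delta^2 \bm m^\varepsilon \, ,
\end{align*}
where the first term on the right-hand side vanishes; the second identity follows in the same way from $\sum_{k} \partial_k \left( \partial_k \bm m^\varepsilon \times \Delta \bm m^\varepsilon \right) = \Delta \bm m^\varepsilon \times \Delta \bm m^\varepsilon + \sum_{k} \partial_k \bm m^\varepsilon \times \partial_k (\Delta \bm m^\varepsilon)$.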
\subsection*{Short-time solution to the regularized system}
\begin{prop}
Let $\varepsilon > 0 $ be fixed. Let $l\geq 7$ be an integer, $R>0$ and suppose we have initial conditions
\begin{gather*}
f_0^\varepsilon \in H^{l-2} (\mathbb R^3\times \mathbb R^3), \; \mathrm{supp} \, f_0^\varepsilon \subseteq B_R \times B_R \, ,\\
\bm m^\varepsilon_0 \in H^l(\mathbb R^3;\mathbb S^2) \,, \; \bm E_0^\varepsilon, \; \bm B_0^\varepsilon \in H^{l-2}(\mathbb R^3;\mathbb R^3).
\end{gather*}
Then there exists $T^*>0$ and a local solution for \eqref{eq:LLG1}-\eqref{eq:Maxwell} such that
\begin{gather*}\label{eq:epsilonOgrade}
\begin{gathered}
\bm m^\varepsilon \in C([0,T^*];H^l(\mathbb R^3;\mathbb S^2))\quad \text{and} \quad \partial_t \bm m^\varepsilon \in C ( [0,T^*] ; H^{l-4}(\mathbb R^3;\mathbb R^3))\, , \\
f^\varepsilon \in C([0,T^*];H^{l-2} (\mathbb R^3\times \mathbb R^3) ) \cap C^1([0,T^*];H^{l-3}(\mathbb R^3\times \mathbb R^3)), \\
\bm E^\varepsilon, \; \bm B^\varepsilon \in C([0,T^*];H^{l-2}(\mathbb R^3;\mathbb R^3)) \cap C^1 ( [0,T^*] ; H^{l-3}(\mathbb R^3;\mathbb R^3)) \, ,
\end{gathered}
\end{gather*}
and $\mathrm{supp} \, f^\varepsilon(t) \subseteq B_{2R} \times B_{2R}$ for all $t\in[0,T^*]$.
\end{prop}
\begin{proof}
We set up an iteration scheme: Starting from $\bm j_0^{\varepsilon}= - \int_{\mathbb{R}^3} \bm v f_0^{\varepsilon} \, d\bm v$, there exists,
by Theorem \ref{thm:LLGglobalno}, a unique global solution
$\bm m_1^{\varepsilon}$ of \eqref{eq:LLG1}. This solution gives rise to emergent fields $\bm e_1^{\varepsilon}$ and $\bm b_1^{\varepsilon}$ according to \eqref{eq:emergent_fields}. Moreover, by virtue of Theorem \RN{1} from \cite{Kato1975}, there exist unique $\bm E_1^{\varepsilon}$ and $\bm B_1^{\varepsilon}$ solving \eqref{eq:Maxwell} for the given initial fields. After modifying equation \eqref{eq:Vlasov0} as in \cite{Wollman1984} to obtain integrable coefficients, Theorem \RN{1} from \cite{Kato1975} provides a unique global solution $f_1^{\varepsilon}$ with the total Lorentz force $\bm F_0^{\varepsilon}$ for the given initial distribution. Since the support of $f_1^{\varepsilon}$ can be shown to remain bounded for finite time, it coincides with the solution to equation \eqref{eq:Vlasov0}, providing an update $\bm j_1^{\varepsilon}$. Hence, we arrive at the following iteration scheme with the LLG equation
\begin{align*}
\begin{aligned}
\partial_t \bm m^\varepsilon_n = \bm m^{\varepsilon}_n \times \left( \alpha \, \partial_t \bm m^\varepsilon_n + \Delta \bm m ^\varepsilon_n + \Delta^2 \bm m^\varepsilon_n - h\, \bm{\hat {e}}_3 \right) - (K_\varepsilon \, \bm j^\varepsilon_{n-1} \cdot \nabla ) \bm m^\varepsilon_n \,,
\end{aligned}
\end{align*}
the Vlasov equation
\begin{equation*}
\partial_t f_n^\varepsilon + \bm v \cdot \nabla_x \, f_n^\varepsilon + (K_\varepsilon \bm F_{n-1}^{\varepsilon}) \cdot \nabla_v\, f_n^\varepsilon = 0 \,
\end{equation*}
with Lorentz force $\bm F_{n-1}^{\varepsilon} = - \left( \bm E^\varepsilon_{n-1} + \bm e^\varepsilon_{n-1} + \bm v \times ( \bm B^\varepsilon_{n-1} + \bm b_{n-1}^\varepsilon) \right)$ and Maxwell equations
\begin{equation*}
\varepsilon_r \, \partial_t \bm E_n^\varepsilon - \frac{1}{\mu_r} \, \nabla \times \bm B_{n}^\varepsilon = - K_\varepsilon \, \bm j_{n-1}^\varepsilon \, , \quad
\partial_t \bm B_n^\varepsilon + \nabla \times \bm E_{n}^\varepsilon = 0 \, .
\end{equation*}
The smoothing properties of $ K_\varepsilon $, the compact support of $f_{n}^\varepsilon$, and the fact that $H^l(\mathbb R^3)$ is an algebra imply
\begin{gather*}
K_\varepsilon \, \bm j_{n-1}^\varepsilon , K_\varepsilon \, \bm E^\varepsilon_{n-1}, K_\varepsilon \, \bm e^\varepsilon_{n-1}, K_\varepsilon \, \bm B^\varepsilon_{n-1} , K_\varepsilon \, \bm b^\varepsilon_{n-1}\in C([0,\infty);H^{l-2}(\mathbb R^3 ; \mathbb R^3)) \, .
\end{gather*}
Hence we find sequences such that
\begin{gather*}
\bm m_{n}^\varepsilon \in C([0,\infty); H^l(\mathbb R^3;\mathbb S^2)) \quad \text{and} \quad \partial_t \bm m_{n}^\varepsilon \in C([0,\infty);H^{l-4}(\mathbb R^3;\mathbb R^3)) \, ,\\
f^\varepsilon_{n} \in C([0,\infty); H^{l-2}(\mathbb R^3 \times \mathbb R^3))\cap C^1([0,\infty); H^{l-3}(\mathbb R^3 \times \mathbb R^3)) \, , \\
\bm E^\varepsilon_{n}, \, \bm B^\varepsilon_{n} \in C([0,\infty);\, H^{l-2}(\mathbb R^3;\mathbb R^3))\cap C^1([0,\infty);\, H^{l-3}(\mathbb R^3;\mathbb R^3))\,.
\end{gather*}
It can be shown by arguments similar to \cite{Wollman1984} that there exists a terminal time $T^*>0$ small enough that these sequences remain bounded in their respective function spaces and $\mathrm{supp}\, f^\varepsilon_n \subseteq B_{2R}\times B_{2R}$ uniformly in $n\,$. Since $l\geq 7$ is large enough, we get pointwise compactness in space-time from Ascoli's theorem. The limit functions
\begin{gather*}
\begin{gathered}
\bm m^\varepsilon \in L^\infty((0,T^*);H^l(\mathbb R^3;\mathbb S^2)) \cap \dot W^{1,\infty}( (0,T^*) ; H^{l-4}(\mathbb R^3;\mathbb R^3))\, ,\\
f^\varepsilon \in L^\infty((0,T^*); H^{l-2}(\mathbb R^3\times\mathbb R^3))\cap W^{1,\infty} ( (0,T^*) ; H^{l-3}(\mathbb R^3 \times \mathbb R^3)) \, , \\
\bm E^\varepsilon, \; \bm B^\varepsilon \in L^\infty((0,T^*);H^{l-2}(\mathbb R^3;\mathbb R^3)) \cap W^{1,\infty} ( (0,T^*) ; H^{l-3}(\mathbb R^3;\mathbb R^3)) \,
\end{gathered}
\end{gather*}
solve the regularized system \eqref{eq:LLG1}-\eqref{eq:Maxwell}. In particular we have that
\begin{gather*}
\bm m^\varepsilon \in C([0,T^*];H^{l-4}(\mathbb R^3;\mathbb S^2)) \, , \\
f^\varepsilon \in C([0,T^*];H^{l-3}(\mathbb R^3\times \mathbb R^3)) \, , \\
\bm E^\varepsilon,\bm B^\varepsilon \in C([0,T^*]; H^{l-3} (\mathbb R^3, \mathbb R^3))\, .
\end{gather*}
Using the compactness of the support of $f^\varepsilon$ we get $\bm j^\varepsilon \in C([0,T^*];H^{l-3}(\mathbb R^3))$. By making use of the mollifier $K_\varepsilon$ again, we obtain from Theorem \ref{thm:LLGglobalno} the required regularity for $\bm m^\varepsilon$. Similarly, using Theorem \RN{1} from \cite{Kato1975} we get that $f^\varepsilon, \bm E^\varepsilon$ and $\bm B^\varepsilon$ belong to the spaces stated in the Proposition.
\end{proof}
\subsection*{Global solution to the regularized system}
Once we have a short-time solution to the regularized system \eqref{eq:LLG1}-\eqref{eq:Maxwell}, we can use an energy argument to extend it to a global one.
\begin{lem}
Let $\varepsilon>0$ be fixed. Then for the regularized system of equations \eqref{eq:LLG1}-\eqref{eq:Maxwell} we have the following energy-dissipation law
\begin{align}\label{eq:energyestimate2}
\alpha \int_0^T \|\partial_t \bm m^\varepsilon\|^2_{L^2} \, dt + \Big[ \mathbb{E}(f^\varepsilon ,\bm E^\varepsilon, \bm B^\varepsilon, \bm m^\varepsilon)\Big]_{t=0}^T = 0 \, ,
\end{align}
where the total energy is defined in \eqref{eq:abstract_energy}.
\end{lem}
\begin{proof}
We note that the local solution obtained above is smooth, and all of the computations below are therefore rigorous. We multiply the Vlasov equation \eqref{eq:Vlasov0} by $|\bm v|^2$ and integrate by parts to get
\begin{equation}\label{eq:Vlasov}
\frac{1}{2}\, \frac{d}{dt} \int \int_{\mathbb R^3 \times \mathbb R^3} f^\varepsilon \, |\bm v|^2 \, d\bm v \, d\bm x = \langle \bm j^\varepsilon , \, K_\varepsilon \, \bm e^\varepsilon + K_\varepsilon \, \bm E^\varepsilon \rangle \, .
\end{equation}
Multiplying Maxwell's equations \eqref{eq:Maxwell} by $\bm E^\varepsilon$ and $\bm B^\varepsilon /\mu_r$ respectively and integrating, we get
\begin{equation}\label{eq:Maxwell1}
\frac{1}{2}\, \frac{d}{dt} \int_{\mathbb R^3} \varepsilon_r \, |\bm E^\varepsilon|^2 + \frac{1}{\mu_r} \, |\bm B^\varepsilon|^2 \, d\bm x = -\langle K_\varepsilon \, \bm j^\varepsilon , \, \bm E^\varepsilon \rangle \, .
\end{equation}
We then use $\bm m^\varepsilon \times \partial_t \bm m^\varepsilon $ as a test function for \eqref{eq:LLG1} and integrate by parts to get
\begin{equation}\label{eq:LL}
\frac{1}{2}\,\frac{d}{dt} E(\bm m^\varepsilon) + \alpha \int_{\mathbb R^3} |\partial_t \bm m^\varepsilon |^2 \, d\bm x = - \langle K_\varepsilon \, \bm j^\varepsilon,\, \bm e^\varepsilon \rangle \, .
\end{equation}
Since $K_\varepsilon$ is self-adjoint, adding \eqref{eq:Vlasov}, \eqref{eq:Maxwell1} and \eqref{eq:LL} together and integrating in time we obtain the energy estimate \eqref{eq:energyestimate2}.
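Explicitly, the coupling terms cancel by the self-adjointness of $K_\varepsilon$:
\begin{align*}
\langle \bm j^\varepsilon , \, K_\varepsilon \, \bm e^\varepsilon + K_\varepsilon \, \bm E^\varepsilon \rangle - \langle K_\varepsilon \, \bm j^\varepsilon , \, \bm E^\varepsilon \rangle - \langle K_\varepsilon \, \bm j^\varepsilon , \, \bm e^\varepsilon \rangle = 0 \, .
\end{align*}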
\end{proof}
We would now like to show that our local solution does not blow up at any finite time $T>0$, in order to extend the solution from $[0,T^*]$ to $[0,\infty)$. In particular, \eqref{eq:energyestimate2} yields the estimate
\begin{equation*}
\int \int_{\mathbb R^3 \times \mathbb R^3} f^{\varepsilon}(t) |\bm v|^2 \, d\bm v \, d \bm x \leq C \quad \text{for all} \quad t\in [0,T)\, .
\end{equation*}
We recall that from the method of characteristics, we have $f^\varepsilon\geq 0$ and
\begin{equation}\label{eq:LpConservation}
\|f^\varepsilon (t) \|_{L^p} = \| f_0^\varepsilon \|_{L^p} \quad \text{for all} \quad t \in [0,T), \; p\in[1,\infty] \, .
\end{equation}
By Hölder's inequality we then get $\|\bm j^\varepsilon (t) \|_{L^1}\leq C$ for all $t\in [0,T)$. We therefore know that $K_\varepsilon\, \bm j^\varepsilon$ remains bounded in $C([0,T);H^{l-2}(\mathbb R^3;\mathbb R^3))$. Then from \eqref{eq:Gronwallograda1}, $\bm m^\varepsilon$ remains bounded in $C([0,T);H^l(\mathbb R^3;\mathbb S^2))$, more precisely
\begin{equation*}
\|\bm m^\varepsilon(t) - \bm{\hat {e}}_3\|_{H^l} \leq C(T,\varepsilon) \quad \text{for all} \quad t\in [0,T)\, .
\end{equation*}
We use Theorem \RN{1} from \cite{Kato1975} to get that $\bm E^\varepsilon, \, \bm B^\varepsilon$ remain bounded in $C([0,T);H^{l-2}(\mathbb R^3 ; \mathbb R^3))$. From the characteristic equations we then get that $\mathrm{supp}\, f^\varepsilon$ remains bounded for finite time $T>0$. We can now use Theorem \RN{1} from \cite{Kato1975} again to obtain that $f^\varepsilon$ is bounded in $C([0,T);H^{l-2}(\mathbb R^3 \times \mathbb R^3))$. Our solution can therefore be extended by continuity to a global solution to the regularized LLG-VM system \eqref{eq:LLG1}-\eqref{eq:Maxwell}.
\subsection*{Compactness}
In this section, we finish the proof of Theorem \ref{thm:glavni} by passing to the limit $\varepsilon\rightarrow 0^+$. By this we mean that we consider a sequence $\varepsilon_k\rightarrow 0^+$ and pass to subsequences when necessary, without relabeling for simplicity.
From the energy-dissipation law \eqref{eq:energyestimate2} and \eqref{eq:LpConservation}, the solutions to the regularized system \eqref{eq:LLG1}-\eqref{eq:Maxwell} given in the previous section are bounded, uniformly in $\varepsilon$, in their respective function spaces
\begin{gather}
\bm m^\varepsilon \in C([0,\infty);H^2(\mathbb R^3;\mathbb R^3)) \cap \dot H^1((0,\infty);L^2(\mathbb R^3;\mathbb R^3)) \, , \label{eq:mUniformbound} \\
f^\varepsilon \in C([0,\infty);L^1\cap L^r(\mathbb R^3\times \mathbb R^3) ) \, , \quad |\bm v|^2 f^\varepsilon \in C([0,\infty);L^1(\mathbb R^3\times \mathbb R^3)) \, , \label{eq:fUniformbound}\\
\bm E^\varepsilon, \; \bm B^\varepsilon \in C([0,\infty);L^2(\mathbb R^3;\mathbb R^3)) \, . \label{eq:EBUniformbound}
\end{gather}
Together with Sobolev embedding, we obtain the following uniform bounds for the emergent electromagnetic fields given by \eqref{eq:regularizedemergent}:
\begin{align}
\begin{aligned}\label{eq:ebUniformbound}
\bm e^\varepsilon &\in L^2((0,\infty)\, ; L^{3/2}(\mathbb R^3 ;\mathbb R^3)) \, , \\
\bm b^\varepsilon &\in C ([0,\infty) \, ; L^3(\mathbb R^3;\mathbb R^3)) \, .
\end{aligned}
\end{align}
Let
\begin{equation*}
\ell \defeq \frac{5\,r-3}{4\,r-2} > \frac{6}{5} \, .
\end{equation*}
Then from Lemma \ref{lem:jregularity} we have
\begin{align*}
\| \bm j^\varepsilon (t) \|_{L^\ell} \leq C\, \| f^\varepsilon(t) \|_{L^r}^{r/(5r-3)} \left (\int \int _{\mathbb R^3 \times \mathbb R^3} f^\varepsilon(t) |\bm v|^2 \, d\bm v \, d\bm x \right) ^{(4r-3)/(5r-3)} \, ,
\end{align*}
and thus we get the uniform bound
\begin{equation}\label{eq:jUnifomBound}
\bm j^\varepsilon \in C([0,\infty)\, ; L^1 \cap L^{\ell}(\mathbb R^3;\mathbb R^3)) \, .
\end{equation}
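The exponent $\ell$ is precisely the one produced by Lemma \ref{lem:jregularity} with $k=2$, $k'=1$ and $p=r$, so that $3/q = 3(r-1)/r$:
\begin{align*}
\ell = \frac{k+3/q}{k'+3/q+(k-k')/p} = \frac{2 + 3\,\frac{r-1}{r}}{1 + 3\,\frac{r-1}{r} + \frac{1}{r}} = \frac{5r-3}{4r-2} \, ,
\end{align*}
and a direct computation shows that $\ell > 6/5$ is equivalent to $r>3$.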
We can now prove existence of a solution for the LLG equation.
\begin{lem}\label{lem:LLGcompactness}
Let $(\bm m^\varepsilon, \bm j^\varepsilon)$ be the sequence solving the regularized LLG equations \eqref{eq:LLG1} satisfying uniform bounds \eqref{eq:mUniformbound} and \eqref{eq:jUnifomBound}. Then, up to a subsequence, there exist limits
\begin{gather}\label{eq:jregularity}
\begin{gathered}
\bm m \in L^\infty((0,\infty);H^2(\mathbb R^3 ; \mathbb S^2)) \cap \dot H^1((0,\infty);L^2(\mathbb R^3;\mathbb R^3)) \, , \\
\bm j \in L^\infty((0,\infty);L^\ell (\mathbb R^3;\mathbb R^3)) \, ,
\end{gathered}
\end{gather}
solving \eqref{eq:LLG} in the sense of distributions. In addition, for emergent electromagnetic fields given by \eqref{eq:regularizedemergent}, we have that
\begin{alignat*}{3}
e^\varepsilon_i &\xrightarrow[\varepsilon]{} e_i = \bm m \cdot (\partial_i \bm m \times \partial_t \bm m) \quad &&\text{in} \quad \mathscr D'((0,\infty) \times \mathbb R^3) \, , \\
b^\varepsilon_i &\xrightarrow[\varepsilon]{} b_i = \epsilon^{ijk} \, \bm m \cdot (\partial_j \bm m \times \partial_k \bm m) \quad &&\text{in} \quad \mathscr D'((0,\infty) \times \mathbb R^3) \, .
\end{alignat*}
\end{lem}
\begin{proof}[\textbf{Proof of Lemma \ref{lem:LLGcompactness}}]
By \eqref{eq:mUniformbound} the functions $\bm m^\varepsilon-\bm{\hat {e}}_3$ are weakly* compact in the energy space
\begin{equation*}
\mathcal E= L^\infty((0,\infty)\, ; H^2(\mathbb R^3;\mathbb R^3)) \cap \dot H^1((0,\infty)\,;L^2(\mathbb R^3;\mathbb R^3)) \, .
\end{equation*}
Up to a subsequence, by the Aubin--Lions lemma we may assume that, for some $\bm m$ with $\bm m - \bm{\hat {e}}_3 \in \mathcal E$,
\begin{alignat}{2}
\bm m^\varepsilon - \bm{\hat {e}}_3 &\xrightharpoonup[\varepsilon]{} \bm m - \bm{\hat {e}}_3 \quad &&\text{weakly* in } \mathcal E\, , \label{eq:weakcvg} \\
\bm m ^\varepsilon&\xrightarrow[\varepsilon]{} \bm m \quad &&\text{in} \quad C ([0,T]\, ; H^s_{loc}( \mathbb R^3\,;\mathbb R^3)) \quad \text{for all} \quad s<2\, . \label{eq:strongcvg}
\end{alignat}
From $|\bm m^\varepsilon|=1$ and \eqref{eq:strongcvg} we have $|\bm m |=1$ almost everywhere in $(0,\infty)\times \mathbb R^3$. Hence
\begin{align*}
\bm m \in L^\infty((0,\infty)\, ;H^2(\mathbb R^3;\mathbb S^2))\, \cap\, \dot H^1((0,\infty)\,;L^2(\mathbb R^3;\mathbb R^3))\,.
\end{align*}
From \eqref{eq:strongcvg} and Sobolev embedding we have
\begin{alignat}{3}\label{eq:Lqkonvergencija}
\begin{aligned}
\bm m^\varepsilon &\xrightarrow[\varepsilon]{} \bm m \quad &&\text{in} \quad L^p((0,T)\, ;L^p_{loc}(\mathbb R^3\,;\mathbb R^3)) \, ,\\
\nabla \bm m^\varepsilon &\xrightarrow[\varepsilon]{} \nabla \bm m \quad &&\text{in} \quad L^p((0,T)\, ;L^q_{loc}(\mathbb R^3\,;\mathbb R^3)) \, ,
\end{aligned}
\end{alignat}
where $1\leq p \leq \infty$ and $1\leq q < 6$.
From \eqref{eq:weakcvg} we deduce that in particular
\begin{align*}
\partial_t \bm m^\varepsilon \xrightharpoonup[\varepsilon]{} \partial_t \bm m\, \quad \text{and} \quad \Delta \bm m^\varepsilon \xrightharpoonup[\varepsilon]{} \Delta \bm m \quad \text{weakly in} \quad L^2((0,\infty)\times \mathbb R^3 ;\mathbb R^3).
\end{align*}
Therefore using \eqref{eq:Lqkonvergencija} we get that
\begin{alignat*}{2}
\bm m^\varepsilon \times \partial_t \bm m^\varepsilon &\xrightarrow[\varepsilon]{} \bm m \times \partial_t \bm m \quad &&\text{in} \quad \mathscr D'((0,\infty)\times \mathbb R^3) \, ,\\
\bm m^\varepsilon \times \Delta \bm m^\varepsilon &\xrightarrow[\varepsilon]{} \bm m \times \Delta \bm m \quad &&\text{in} \quad \mathscr D'((0,\infty)\times \mathbb R^3) \, ,\\
\Delta (\bm m^\varepsilon \times \Delta \bm m^\varepsilon ) &\xrightarrow[\varepsilon]{} \Delta (\bm m \times \Delta \bm m ) \quad &&\text{in} \quad \mathscr D'((0,\infty)\times \mathbb R^3) \, ,\\
\partial_k \left(\partial_k \bm m^\varepsilon \times \Delta \bm m^\varepsilon \right) &\xrightarrow[\varepsilon]{} \partial_k \left(\partial_k \bm m \times \Delta \bm m \right) \quad &&\text{in} \quad \mathscr D'((0,\infty)\times \mathbb R^3) \, .
\end{alignat*}
From \eqref{eq:jUnifomBound} there exists $\bm j \in L^\infty((0,\infty);L^{\ell} (\mathbb R^3;\mathbb R^3))$ such that, up to a subsequence
\begin{align*}
\bm j^\varepsilon \xrightharpoonup[\varepsilon]{} \bm j \quad \text{weakly* in} \quad L^\infty((0,\infty);L^{\ell} (\mathbb R^3;\mathbb R^3)) \, .
\end{align*}
Since the Hölder conjugate exponent satisfies $\ell^* < (6/5)^*=6\,$, using \eqref{eq:Lqkonvergencija} we get that
\begin{equation*}
(K_\varepsilon\,\bm j^\varepsilon \cdot \nabla)\bm m^\varepsilon \xrightarrow[\varepsilon]{} (\bm j \cdot \nabla) \bm m \quad \text{in} \quad \mathscr D'((0,\infty)\times \mathbb R^3)\, .
\end{equation*}
In view of \eqref{eq:divergencestructure}, the above convergences prove that $\bm m$ is a weak solution to the LLG equation \eqref{eq:LLG}. Moreover, from \eqref{eq:Lqkonvergencija} we have that $\bm e^\varepsilon,\, \bm b^\varepsilon$ converge to $\bm e,\, \bm b$, respectively, in the sense of distributions, i.e.
\begin{alignat*}{3}
\bm m^\varepsilon \cdot (\nabla \bm m^\varepsilon \times \partial_t \bm m^\varepsilon ) &\xrightarrow[\varepsilon]{} \bm m \cdot (\nabla \bm m \times \partial_t \bm m) \quad &&\text{in} \quad \mathscr D'((0,\infty) \times \mathbb R^3) \, , \\
\epsilon^{ijk}\, \bm m^\varepsilon \cdot (\partial_j \bm m^\varepsilon \times \partial_k \bm m^\varepsilon) &\xrightarrow[\varepsilon]{} \epsilon^{ijk} \, \bm m \cdot (\partial_j \bm m \times \partial_k \bm m) \quad &&\text{in} \quad \mathscr D'((0,\infty) \times \mathbb R^3) \, . \quad \mbox{\qedhere}
\end{alignat*}
\end{proof}
It remains to show compactness for terms in the Vlasov equation \eqref{eq:Vlasov0}. The key ingredient is the following velocity averaging lemma.
\begin{lem}\label{lem:Stability}
Let $(f^\varepsilon,\bm E^\varepsilon, \bm B^\varepsilon, \bm e^\varepsilon, \bm b^\varepsilon)$ be the sequence solving the regularized Vlasov equation \eqref{eq:Vlasov0} satisfying uniform bounds \eqref{eq:fUniformbound}-\eqref{eq:ebUniformbound} and $\psi \in \mathscr{D} (\mathbb R^3) $. Then, up to a subsequence, we have
\begin{equation}\label{eq:L1compactness}
\int_{\mathbb R^3} \, f^\varepsilon \psi \, d\bm v \xrightarrow[\varepsilon]{} \int_{\mathbb R^3} \, f \psi \, d\bm v \quad \text{in} \quad L^1_{loc}((0,\infty)\times \mathbb R^3) \, .
\end{equation}
\end{lem}
\begin{proof}[\textbf{Proof of Lemma \ref{lem:Stability}}]
We prove the Lemma in two steps, as in \cite{DiPernaLions1989}. In the first step, we additionally assume that $f^\varepsilon$ is uniformly bounded in $L^\infty((0,\infty)\times \mathbb R^3 \times \mathbb R^3)$. In the second step we treat general $f^\varepsilon$.
\textbf{\textit{Step 1.}} We first remark that we can rewrite \eqref{eq:Vlasov0} as
\begin{equation*}
\partial_t f^\varepsilon + \bm v \cdot \nabla_x f^\varepsilon = \nabla_v \cdot \bm g_2^\varepsilon \, ,
\end{equation*}
where
\begin{equation*}
\bm g_2^\varepsilon= K_\varepsilon \, (\bm E^\varepsilon + \bm e^\varepsilon + \bm v \times (\bm B^\varepsilon + \bm b^\varepsilon )) \, f^\varepsilon \, .
\end{equation*}
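In rewriting the force term as a $\bm v$-divergence we used that the fields $\bm E^\varepsilon, \bm e^\varepsilon, \bm B^\varepsilon, \bm b^\varepsilon$ do not depend on $\bm v$ and that the magnetic part is divergence-free in $\bm v$:
\begin{align*}
\nabla_v \cdot \big( \bm v \times (\bm B^\varepsilon + \bm b^\varepsilon) \big) = \epsilon^{ijk} \, \partial_{v_i} \big( v_j \, (B^\varepsilon_k + b^\varepsilon_k) \big) = \epsilon^{ijk} \, \delta_{ij} \, (B^\varepsilon_k + b^\varepsilon_k) = 0 \, .
\end{align*}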
Then if $f^\varepsilon$ is uniformly bounded in $L^\infty((0,T)\times \mathbb R^3 \times \mathbb R^3)$ (for all $T<\infty$), from \eqref{eq:energyestimate2} we deduce that $\bm g_2^\varepsilon$ is uniformly bounded in $L^{2}_{loc}(\mathbb R^3_v\,;L^{3/2}((0,T)\times \mathbb R^3_x))$ (for all $T<\infty$).
Next, for all $\delta>0$ and $T>0$, we choose $\zeta \in \mathscr D(\mathbb R)$ such that $\zeta \equiv 1 $ on $[\delta, T]$, $\mathrm{supp} \, \zeta \subset [\frac{1}{2} \, \delta, 2T]$ and we observe that $\tilde f^\varepsilon \defeq \zeta \, f^\varepsilon$ solves
\begin{align*}
\partial_t \tilde f^\varepsilon + \bm v \cdot \nabla_x \tilde f^\varepsilon = {g}_1^\varepsilon + \nabla_v \cdot \tilde{ \bm g_2}^\varepsilon \, ,
\end{align*}
where both ${g}_1^\varepsilon = \zeta ' f^\varepsilon$ and $\tilde{ \bm g_2}^\varepsilon=\zeta \, \bm g_2^\varepsilon$ are uniformly bounded in ${L^2_{loc}(\mathbb R^3_v \, ; L^{3/2}(\mathbb R \times \mathbb R^3_x))}$. Remark also that $\tilde f^\varepsilon$ is uniformly bounded in ${L^2(\mathbb R \times \mathbb R^3 \times \mathbb R^3)}$ and that
\begin{equation*}
3/2 > 2(N+1)/(N+3)=4/3\, .
\end{equation*}
Therefore if $\psi \in \mathscr D (\mathbb R^3)$, we deduce from Theorem \ref{thm:dipernalions} that
\begin{equation*}
\int_{\mathbb R^3} \tilde f^\varepsilon \, \psi \, d\bm v \quad \text{is bounded in} \quad H^{1/12}(\mathbb R \times \mathbb R^3) \, .
\end{equation*}
In particular, we get
\begin{equation*}
\int_{\mathbb R^3}\zeta \, f^\varepsilon \psi \, d\bm v \xrightarrow[\varepsilon]{} \int_{\mathbb R^3} \zeta \, f \psi \, d\bm v \quad \text{in} \quad L^1_{loc}((0,T)\times \mathbb R^3) \, .
\end{equation*}
Since $\zeta \equiv 1$ on $[\delta,T]$, we conclude that
\begin{equation*}
\int_{\mathbb R^3} \, f^\varepsilon \psi \, d\bm v \xrightarrow[\varepsilon]{} \int_{\mathbb R^3} \, f \psi \, d\bm v \quad \text{in} \quad L^1_{loc}((0,\infty)\times \mathbb R^3) \, ,
\end{equation*}
provided we show that
\begin{equation*}
\sup_{\varepsilon} \int_0^\delta \int \int_{B_R\times B_R} |f^\varepsilon| \, d\bm x \, d\bm v \, dt \rightarrow 0 \quad \text{as} \quad \delta \rightarrow 0_+ \, .
\end{equation*}
This claim is an immediate consequence of the $L^r((0,T)\times \mathbb R^3 \times \mathbb R^3)$ bound on $f^\varepsilon$.
\textbf{\textit{Step 2.}} For general $f^\varepsilon$, we define the $C^1([0,\infty))$ function
\begin{align*}
\beta_\delta(t) = \frac{t}{1+\delta t} \quad \text{for} \quad 0<\delta < 1,\; t\geq 0 \, ,
\end{align*}
and observe that $\beta_\delta (f^\varepsilon)$ solves
\begin{align*}
\partial_t ( \beta_\delta(f^\varepsilon)) + \bm v \cdot \nabla_x\, \beta_\delta(f^\varepsilon) = \nabla_v \cdot \bm g_2^\varepsilon \, ,
\end{align*}
where
\begin{equation*}
\bm g_2^\varepsilon \defeq K_\varepsilon \, ( \bm E^\varepsilon + \bm e^\varepsilon + \bm v \times ( \bm B^\varepsilon + \bm b^\varepsilon )) \,\beta_\delta( f^\varepsilon) \, .
\end{equation*}
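The elementary properties of $\beta_\delta$ used in the estimates below follow by direct computation:
\begin{equation*}
\beta_\delta'(t) = \frac{1}{(1+\delta t)^2} \in (0,1] \, , \qquad \beta_\delta(t) \leq \min\Big\{ t , \frac{1}{\delta} \Big\} \, , \qquad t - \beta_\delta(t) = \frac{\delta \, t^2}{1+\delta t} \, .
\end{equation*}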
Since $\beta_\delta(t)\leq t$ and $\beta_\delta(t)\leq 1/\delta$ we know that $\beta_\delta(f^\varepsilon)$ is bounded in $L^1 \cap L^\infty(\mathbb R^3\times \mathbb R^3)$. Then from \eqref{eq:energyestimate2} we deduce that $\bm g_2^\varepsilon$ is uniformly bounded in $L^{2}_{loc}(\mathbb R^3_v \,;L^{3/2}((0,T)\times \mathbb R^3_x))$ (for all $T<\infty$). We can therefore apply the proof from Step 1 to obtain
\begin{align*}
\int_{\mathbb R^3} \beta_\delta (f^\varepsilon) \, \psi \, d\bm v \xrightarrow[\varepsilon]{} \int_{\mathbb R^3} f_\delta \, \psi \, d\bm v \quad \text{in} \quad L^1_{loc}((0,\infty)\times \mathbb R^3)\, ,
\end{align*}
where $f_\delta$ is the weak limit of $\beta_\delta(f^\varepsilon)$ and $\psi\in \mathscr{D}(\mathbb R^3)$ is arbitrary. In order to conclude that
\begin{equation*}
\int_{\mathbb R^3} \, f^\varepsilon \psi \, d\bm v \xrightarrow[\varepsilon]{} \int_{\mathbb R^3} \, f \psi \, d\bm v \quad \text{in} \quad L^1_{loc}((0,\infty)\times \mathbb R^3) \, ,
\end{equation*}
we have to prove that
\begin{align*}
\sup_{\varepsilon} \int_0^T \int_{B_R \times B_R} |f^\varepsilon - \beta_\delta(f^\varepsilon)| \, d\bm x \, d\bm v \, dt \rightarrow 0 \quad \text{as} \quad \delta \rightarrow 0_+ \quad \text{for all}\quad R,T<\infty \, .
\end{align*}
Indeed, we have
\begin{align*}
0\leq f^\varepsilon - \beta_\delta(f^\varepsilon) = \frac{\delta \, (f^\varepsilon) ^2}{1+\delta f^\varepsilon} \leq M\delta f^\varepsilon + f^\varepsilon \mathds{1}_{\{ f^\varepsilon > M\}} \, ,
\end{align*}
where $f^\varepsilon \mathds{1}_{\{ f^\varepsilon > M\}}$ can be made arbitrarily small in $L^1((0,T)\times B_R \times B_R)$ uniformly in $\varepsilon$ by taking $M$ large.
\end{proof}
We are now ready to finish the proof of Theorem \ref{thm:glavni}.
\begin{proof}[\textbf{Proof of Theorem \ref{thm:glavni}}] Let $(\bm m^\varepsilon, f^\varepsilon,\bm E^\varepsilon, \bm B^\varepsilon)$ be the sequence solving the regularized system \eqref{eq:LLG1}-\eqref{eq:Maxwell}. From Lemma \ref{lem:LLGcompactness}, there exists $(\bm m,\bm j)$ solving \eqref{eq:LLG} in the sense of distributions. In exactly the same way as in \cite{DiPernaLions1989}, from Lemma \ref{lem:Stability} we get that
\begin{align*}
\bm j^\varepsilon \xrightarrow[\varepsilon]{} - \int_{\mathbb R^3} \bm v \, f \, d\bm v \quad \text{in} \quad L^1_{loc}((0,\infty)\times \mathbb R^3) \, ,
\end{align*}
and therefore $\bm j=- \int_{\mathbb R^3}\bm v \, f \, d\bm v \, .$ Moreover, since $\int_{\mathbb R^3} f^\varepsilon \, \psi \, d\bm v$ is uniformly bounded in $L^r_{loc}((0,\infty)\times \mathbb R^3)$ we deduce from Lemma \ref{lem:Stability} that
\begin{equation*}
\int_{\mathbb R^3}f^\varepsilon \psi \, d\bm v \rightarrow \int_{\mathbb R^3} f \psi \, d\bm v \quad \text{in} \quad L^p_{loc}((0,\infty)\times \mathbb R^3) \quad \text{for all} \quad 1\leq p <r.
\end{equation*}
In view of \eqref{eq:Vlasovtransport}, we obtain weak convergence of all terms in the Vlasov equation \eqref{eq:Vlasov0}, taking into account the uniform bounds \eqref{eq:EBUniformbound}-\eqref{eq:ebUniformbound} and $r>3$.
Moreover, from weak convergence of $\bm E^\varepsilon,\,\bm B^\varepsilon$ we get
\begin{equation*}
\varepsilon_r \, \partial_t \bm E - \frac{1}{\mu_r} \, \nabla \times \bm B = -\bm j\,,\quad \partial_t \bm B + \nabla \times \bm E = 0 \quad \text{in} \quad \mathscr D'((0,\infty)\times \mathbb R^3)\, .
\end{equation*}
From the Sobolev embedding we have that $\bm m \in C^{1/2}([0,\infty);L^2(\mathbb R^3;\mathbb S^2))$, and thus the initial data are attained by the limit, i.e. $\bm m|_{t=0}=\bm m_0$. Then by interpolation we have
\begin{align*}
\| \bm m (t_1) - \bm m (t_2)\|_{H^s} &\leq \| \bm m (t_1) - \bm m (t_2)\|_{H^2}^{s/2} \| \bm m (t_1) - \bm m (t_2)\|_{L^2}^{1-s/2} \\
&\leq \| \bm m \|^{s/2}_{L^\infty_t H^2_x} \, |t_1 - t_2|^{1/2 - s/4}\, .
\end{align*}
In particular we have $\bm m \in C([0,\infty);H^s(\mathbb R^3;\mathbb S^2))$ for $s<2$.
The VM system \eqref{eq:Vlasov0}-\eqref{eq:Maxwell} combined with the uniform bounds \eqref{eq:fUniformbound}-\eqref{eq:ebUniformbound} gives that
\begin{gather*}
\partial_t f^\varepsilon \quad \text{is bounded in} \quad L^2((0,\infty); W^{-1,\,3r/(2r+3)}_{loc}(\mathbb R^3 \times \mathbb R^3)) \, , \\%\quad \text{for all} \quad R<\infty \, ,\\%\quad \text{for all} \quad R<\infty,
\partial_t \bm E^\varepsilon , \; \partial_t \bm B^\varepsilon \quad \text{are bounded in} \quad L^\infty((0,\infty); H^{-2}(\mathbb R^3))\, .
\end{gather*}
These bounds imply that $\bm E^\varepsilon, \bm B^\varepsilon$ are compact in $C([0,T];H^{-s}_{loc}(\mathbb R^3))$ and that $f^\varepsilon$ is compact in $C([0,T];W_{loc}^{-s,\,3r/(2r+3)} (\mathbb R^3 \times \mathbb R^3))$ (for all $T<\infty$, all $s>0$). This, in particular, shows that the initial data are attained by the limit, i.e. $f|_{t=0} = f_0,\ \bm E|_{t=0}=\bm E_0, \ \bm B|_{t=0}=\bm B_0$.
The statement $f\geq 0$ follows from $f_0\geq 0$ and Mazur's lemma. Lastly, the proof that
\begin{equation*}
\nabla \cdot \bm E = \frac{\rho}{\varepsilon_r} \,, \quad \nabla \cdot \bm B= 0 \quad \text{in}\quad \mathscr D'((0,\infty)\times \mathbb R^3) \, ,
\end{equation*}
and that the mass $\int \int_{\mathbb R^3 \times \mathbb R^3} f \, d\bm x \, d \bm v $ is independent of $t\geq 0$ is analogous to \cite{DiPernaLions1989}.
\end{proof}
Finally, we show that the helicity functional behaves continuously along the flow of $\bm m$.
\begin{lem}\label{lem:hopfion} The Hopf invariant $H=H(\bm m)$ is a smooth functional over the class of fields
$\bm m: \mathbb{R}^3 \to \mathbb{S}^2$ such that $\bm m \in H^s(\mathbb{R}^3; \mathbb{S}^2)$ where $s>3/2$.
\end{lem}
\begin{proof}It is clear that $H(\bm m)$ is independent of the particular choice of $\bm a$.
By translation invariance and Sobolev embedding
\begin{align*}
|\bm b| \lesssim |\nabla \bm m|^2 \in L^1 \cap L^{\frac{3}{5-2s}}(\mathbb{R}^3).
\end{align*}
By the properties of the Biot-Savart operator $\bm b \mapsto \bm a$ given by the singular integral
\begin{align*}
\bm a(\bm x) = \int_{\mathbb{R}^3} \frac{\bm b(\bm y) \times (\bm x -\bm y)}{|\bm x - \bm y|^3} d \bm y
\end{align*}
we obtain $|\bm a| \in L^p(\mathbb{R}^3)$ for all
$3/2< p < \frac{3}{2(2-s)}$, the upper end of which exceeds the dual exponent of $\frac{3}{5-2s}$, and the claim follows.
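Explicitly, for $3/2<s<2$ (the range in which the stated bound on $p$ is finite), the dual exponent computes to
\begin{equation*}
\Big( \frac{3}{5-2s} \Big)' = \frac{3}{3-(5-2s)} = \frac{3}{2(s-1)} \in \Big( \frac{3}{2}, \frac{3}{2(2-s)} \Big) \, ,
\end{equation*}
since $s>3/2$ gives $s-1>2-s$, while $s<2$ gives $s-1<1$.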
\end{proof}
\section{Uniqueness for the LLG equation}\label{section:uniqueness}
In this section, we prove Theorem \ref{thm:uniqueness}, i.e., uniqueness for
weak solutions to the LLG equation \eqref{eq:LLG} for the fixed current
$\bm j \in L^\infty((0,\infty); L^{6/5}(\mathbb{R}^3;\mathbb{R}^3))$. In particular, since in \eqref{eq:jregularity} we have $\ell>6/5$, the result holds for the solution given by Theorem \ref{thm:glavni}.
\begin{proof}[\textbf{Proof of Theorem \ref{thm:uniqueness}}]We start from a weak solution to the LLG equation \eqref{eq:LLG}, i.e. the solution $\bm m$ satisfies
\begin{align*}
\langle \partial_t \bm m, \bm v \rangle &= \alpha \langle \bm m \times \partial_t \bm m, \bm v \rangle + \langle \bm m \times (\Delta \bm m - h\, \bm{\hat {e}}_3), \bm v\rangle + \langle\bm m \times \Delta \bm m, \Delta \bm v\rangle \\
\ &\phantom{{} = }+ 2 \sum_{k=1}^3 \langle \partial_k \bm m \times \Delta \bm m , \partial_k \bm v \rangle - \langle (\bm j \cdot \nabla ) \bm m, \bm v\rangle \, ,
\end{align*}
for almost every $t \in [0,\infty)$ and for all $\bm v \in H^2(\mathbb R^3;\mathbb R^3).$ Since $H^2(\mathbb R^3)$ is closed under pointwise multiplication, $\bm v \times \bm m \in H^2(\mathbb R^3;\mathbb R^3)$ is a valid test function, and using the identity $\langle \bm m \times \partial_t \bm m ,\bm v \rangle = \langle \partial_t \bm m , \bm v \times \bm m\rangle$ we can pass to the Landau-Lifshitz formulation
\begin{align}\label{eq:LLGpravioblikweakform}
(1+\alpha^2) \langle \partial_t \bm m, \bm v\rangle +\langle A(\bm m )\Delta^2 \bm m , \bm v \rangle = \langle A(\bm m) \bm f , \bm v \rangle -\alpha \langle \Lambda \bm m , \bm v \rangle \, ,
\end{align}
where we use the following convenient notation:
\begin{gather*}
\langle \Delta^2 \bm m , \bm v \rangle \defeq \langle \Delta \bm m, \Delta \bm v \rangle \, , \quad \langle \bm m \times \Delta^2 \bm m , \bm v \rangle \defeq \langle \Delta \bm v \times \bm m , \Delta \bm m \rangle + 2\sum_{k=1}^3 \langle \partial_k \bm v \times \partial_k \bm m, \Delta \bm m \rangle \, , \\
\langle \Lambda \bm m , \bm v \rangle = - \langle (\bm m \cdot \Delta^2 \bm m ) \bm m , \bm v \rangle \defeq - \big\langle |\Delta \bm m |^2 \bm m , \bm v \big\rangle + 2 \,\big\langle D\bm m \otimes D \bm m , D^2(\bm v \cdot \bm m )\big\rangle \, .
\end{gather*}
where $\left \langle D \bm m \otimes D \bm m , D^2(\bm v \cdot \bm m) \right \rangle $ stands for $\sum_{i,j} \left \langle \partial _i \bm m \cdot \partial_j \bm m , \partial_{ij}^2 (\bm v \cdot \bm m ) \right \rangle $. Details can be found in \cite{ChugreevaMelcher2017}. We now assume there exist two distinct solutions to the LLG equation \eqref{eq:LLG}, $\bm m_1$ and $\bm m_2$. We can then subtract the two equations of form \eqref{eq:LLGpravioblikweakform} and integrate in time. We choose the test function to be
\begin{equation*}
\bm v (t) = \bm m_1 (t) - \bm m_2 (t) \, ,
\end{equation*}
and get
\begingroup
\allowdisplaybreaks
\begin{subequations}
\begin{align}
\frac{1+\alpha^2}{2}\| \bm v (T)\|^2_{L^2} = &-\int_0^T \alpha \, \langle \Delta \bm v, \Delta \bm v \rangle \, dt \label{eq:subeqa} \\
&+ \int_0^T \alpha \, \big\langle |\Delta \bm m_1|^2 \bm m_1 - |\Delta \bm m_2|^2 \bm m_2, \bm v \big\rangle \, dt \\
&- \int_0^T \alpha \, \big\langle D \bm m_1 \otimes D \bm m_1 , D^2 (\bm m_1 \cdot \bm v) \big \rangle \, dt \\
&+\int_0^T \alpha \, \big\langle D \bm m_2 \otimes D \bm m_2 , D^2 (\bm m_2 \cdot \bm v) \big \rangle \, dt \\
&+ \int_0^T \langle \Delta \bm m_1 , \bm m_1 \times \Delta \bm v \rangle \, dt - \int_0^T \langle \Delta \bm m_2, \bm m_2 \times \Delta \bm v \rangle \, dt \\
&+ 2\, \sum_{k=1}^3 \left [ \int_0^T \langle \Delta \bm m_1 , \partial_k \bm m_1 \times \partial_k \bm v \rangle \, dt - \int_0^T \langle \Delta \bm m_2 , \partial_k \bm m_2 \times \partial_k \bm v \rangle \, dt \right ] \\
&+ \int_0^T \alpha \, \langle \bm m_1 \times \bm m_1 \times \Delta \bm m_1 - \bm m_2 \times \bm m_2 \times \Delta \bm m_2 , \bm v \rangle \, dt \\
&+\int_0^T \langle \bm m_1 \times \Delta \bm m_1 - \bm m_2 \times \Delta \bm m_2 , \bm v \rangle \, dt \, \label{eq:subeqh} \\
&- \int_0^T \alpha \, \langle \bm m_1 \times \bm m_1 \times (h \, \bm{\hat {e}}_3) - \bm m_2 \times \bm m_2 \times (h \, \bm{\hat {e}}_3) , \bm v \rangle \, dt \label{eq:subeqi} \\
&- \int_0^T \left \langle \bm m_1 \times (h \, \bm{\hat {e}}_3) - \bm m_2 \times (h \, \bm{\hat {e}}_3) ,\bm v \right \rangle \, dt \label{eq:subeqj}\\
&- \int_0^T \alpha \, \langle \bm m_1 \times (\bm j \cdot \nabla) \bm m_1 - \bm m_2 \times (\bm j \cdot \nabla) \bm m_2 , \bm v \rangle \, dt \label{eq:subeqk}\\
&- \int_0^T \langle (\bm j \cdot \nabla) \bm v , \bm v \rangle \, dt. \label{eq:subeql}
\end{align}
\end{subequations}
\endgroup
The estimates concerning the terms \eqref{eq:subeqa}-\eqref{eq:subeqh} were obtained in \cite{ChugreevaMelcher2017}. We estimate \eqref{eq:subeqi} by
\begin{align*}
\langle \bm m_1 \times \bm m_1 \times (h \, \bm{\hat {e}}_3) - \bm m_2 \times \bm m_2 \times (h \, \bm{\hat {e}}_3) , \bm v \rangle &= h\, \big\langle (\bm m_1 \cdot \bm{\hat {e}}_3) \bm v + (\bm v \cdot \bm{\hat {e}}_3) \bm m_2, \bm v\big\rangle \leq C \, \|\bm v\|_{L^2}^2 \, .
\end{align*}
The estimate for \eqref{eq:subeqj} is analogous. It remains to treat the terms \eqref{eq:subeqk} and \eqref{eq:subeql}.
\begin{align*}
\langle \bm m_1 \times (\bm j \cdot \nabla) \bm m_1 - \bm m_2 \times (\bm j \cdot \nabla) \bm m_2 , \bm v\rangle &= \langle \bm m_2 \times (\bm j \cdot \nabla) \bm v, \bm v\rangle \\
& \leq \| \bm j \|_{L^{6/5}} \| \nabla \bm v \|_{L^6} \| \bm v \|_{L^{\infty}} \\
&\leq \| \bm j \|_{L^{6/5}} \| \bm v \|_{H^2}^{7/4} \| \bm v \|_{L^2}^{1/4} \, .
\end{align*}
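The last interpolation combines the Sobolev embedding $\|\nabla \bm v\|_{L^6} \lesssim \|\bm v\|_{H^2}$ with the Gagliardo-Nirenberg inequality in three dimensions,
\begin{equation*}
\| \bm v \|_{L^\infty} \lesssim \| \bm v \|_{H^2}^{3/4} \, \| \bm v \|_{L^2}^{1/4} \, ,
\end{equation*}
which together yield the exponents $7/4$ and $1/4$.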
The last term \eqref{eq:subeql} is estimated in the same way. We therefore get
\begin{align*}
\frac{1}{2} \| \bm v(T) \|_{L^2}^2 + \lambda \int_0^T \| \Delta \bm v \|_{L^2}^2 \, dt \leq C \int_0^T R \big(\|\bm v \|_{L^2}, \|\bm v\|_{H^2} \big) \, dt \, ,
\end{align*}
where the function $R(a,b)$ is given by
\begin{align*}
R(a,b)= a^2 + ab+a^{1/4} b^{7/4}+ a^{1/2} b^{3/2} \, .
\end{align*}
Using Young's inequality and absorbing the $\|\Delta \bm v \|_{L^2}$ terms on the left-hand side we obtain
\begin{align*}
\| \bm v(T) \|_{L^2}^2 + \lambda \int_0^T \| \Delta \bm v \|_{L^2}^2 \, dt \leq C \int_0^T \|\bm v \|_{L^2}^2 \, dt \, .
\end{align*}
We conclude with Gronwall's lemma that $\|\bm v(T)\|^2_{L^2}=0$ for all $T\in [0,\infty)$.
\end{proof}
\paragraph{Acknowledgments.} We would like to thank Martin Frank for numerous helpful and stimulating discussions on transport equations.
This work was funded by the Deutsche Forschungsgemeinschaft (DFG)
RTG 2326 \textit{Energy, Entropy, and Dissipative Dynamics}.
\bibliographystyle{siam}
\section{Introduction}
We consider the following optimisation problem of servicing
timed requests on the line.
For a given integer $k \ge 1$ and
a set of $n$ timed requests $\{(x_i,t_i,w_i),
1 \le i \le n\}$, where $x_i\in (-\infty, + \infty)$,
$t_i \ge 0$ and $w_i \ge 0$ are the location
(on the line),
the time and the weight of request $i$, respectively,
maximise the total weight of requests
which can be serviced
by $k$ robots. Initially, at time $t=0$, all robots are
at the origin point of the line and they can move
freely along
the line, changing direction and speed when needed,
but never exceeding a given maximum speed $v$.
To service request $i$, one of the robots has to be
at location $x_i$ exactly at time $t_i$.
Servicing a request is instantaneous and the robot can
move immediately to serve another request.
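For illustration, deciding whether a single robot can service a given set of requests reduces to checking the speed constraint between consecutive requests; a minimal sketch (the function name and example data are ours, not from the paper):

```python
def feasible_for_one_robot(requests, v=1.0):
    """Check whether one robot, starting at the origin at time 0 and
    never exceeding speed v, can service all given (location, time) requests."""
    # Service the requests in time order; simultaneous requests must coincide.
    requests = sorted(requests, key=lambda r: r[1])
    x_prev, t_prev = 0.0, 0.0  # initial position and time
    for x, t in requests:
        # The robot must cover the distance |x - x_prev| within t - t_prev.
        if abs(x - x_prev) > v * (t - t_prev):
            return False
        x_prev, t_prev = x, t
    return True

print(feasible_for_one_robot([(1.0, 1.0), (0.0, 2.0)]))  # True
print(feasible_for_one_robot([(2.0, 1.0)]))              # False: 2 > v * 1
```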
This is an off-line optimisation problem
with all data about the requests known in advance,
which appeared,
for example,
in the context of
the \emph{ball collecting problems} (BCPs)
considered by Asahiro~{\it et al.}\cite{DBLP:journals/dam/AsahiroHMOSY06}.
The basic BCP is essentially the optimisation
problem stated in the previous paragraph.
There are $n$ weighted balls approaching
the line $L$ where the robots can move.
Each ball will cross $L$
at a specified time and point, and
if a robot is there, then
the ball is intercepted (collected). For the \textit{weighted} case the objective is to compute the movement of the robots so
that the total weight of the intercepted balls is maximised. For the \textit{unweighted} case (i.e. all balls have the same weight) the objective is to maximise the number of intercepted balls.
Asahiro~{\it et al.}\cite{DBLP:journals/dam/AsahiroHMOSY06}
studied a number of BCP variants,
putting them in the context of the \emph{Kinetic Travelling Salesman Problem}
(KTSP)
and establishing the tractability--intractability
(polynomiality vs. NP-hardness) frontier through
the landscape of the studied variants.
The literature on the KTSP includes related work focusing on approximation algorithms \cite{Hammar:1999:ARK:646229.681565,Chalasani1996AlgorithmsFR,Asahiro2008}, polynomial-time exact algorithms \cite{Asahiro2008,HELVIG2003153,RePEc:spr:annopr:v:289:y:2020:i:2:d:10.1007_s10479-019-03412-x}
for special problem settings, and real-world applications~\cite{Menezes2015158,6858759,5400538}.
Variants of the BCP are obtained
by giving each robot $i$ its own
line $L_i$ where it moves and intercepts balls,
or by not fixing the position (i.e., the angle) of the common line $L$
(or the positions of the robots' individual lines $L_i$), asking instead for the optimal position of the line
to be determined as part of the output,
or by considering different optimisation objectives
(e.g., minimising the number of robots needed to
collect all balls). Asahiro~{\it et al.}\
\cite{DBLP:journals/dam/AsahiroHMOSY06} showed that
maximising the total weight of collected balls
when robots move on a common line $L$ is polynomially
solvable, but $\mathcal{NP}$-hard when
each robot moves on its own line.
They also showed that the BCP problems with
a common line $L$, which is not fixed
but part of the
optimisation decision,
can be solved by solving $O(n^2)$ instances
with a fixed line.
The problem
of servicing timed requests on the line
corresponds
to the \textit{weighted} ball collecting problem,
so we will refer to it as BCP, or BCP$(k)$:
a given number of $k$ robots, a single
given (fixed) line $L$,
and the objective of maximising
the total weight.
As shown in
Asahiro {\it et al.}~\cite{DBLP:journals/dam/AsahiroHMOSY06},
for $k\geq 2$
the objective of maximising the total weight with
$k$ robots can be modeled as a minimum cost flow problem
in a directed acyclic graph ($DAG$)
$G^*$, which has $n$ nodes representing balls
and two additional
special source and sink nodes $s$ and $t$, respectively.
There is an edge in $G^*$ from a node $v'$,
representing ball $b'$, to a node $v''$, representing
ball $b''$, if there is enough time for a robot
to move from intercepting $b'$ to intercepting $b''$. An $s-t$ path in $G^*$ represents a schedule for one robot and its weight is equal to the total weight of the balls intercepted by the robot.
The corresponding minimum cost flow problem has
unit node capacities (maximum one unit of flow
through each node) and node weights equal to
negations of the weights of balls,
so is equivalent to finding $k$ node-disjoint paths
from $s$ to $t$
(the paths share only nodes $s$ and $t$)
such that the total weight of the selected paths
is minimised. This problem
can be solved in $O(kn^2)$ time by the successive shortest
path algorithm~\cite{Ahuja:1993:NFT:137406}.
The quadratic dependence on $n$ is due to the fact
that graph $G^*$ can have a quadratic number of edges.
Looking into some technical details,
graph $G^*$ actually has $2n+2$ nodes, since each node
representing a ball
is split into two nodes (connected by an edge) as
in the standard reduction from node capacities to
edge capacities.
The $DAG$ $G^*$ can be implicitly represented
by a set ${\cal{P}}$ of $2n+2$ points
in the 2-D Euclidean plane, illustrated
in Figure~\ref{fig1and2}.
The BCP input with $5$ requests given
in Figure~\ref{fig1and2}(a) is shown in
Figure~\ref{fig1and2}(b)
in the location-time coordinates
(the distances and times are normalised so that
the maximum speed of a robot is equal to $1$).
The arrows show the edges of $G^*$.
Vertex $t$, not shown in the diagram,
is on the time axis
sufficiently high so that there are edges
to $t$ from all other nodes.
For clarity, we also do not show the splitting of
nodes into two. For the \textit{unweighted} BCP and $k=1$, Asahiro {\it et al.}~\cite{DBLP:journals/dam/AsahiroHMOSY06} show an $O(n \log n)$-time algorithm
using the implicit plane representation of graph $G^*$ with the points in ${\cal{P}}$. However, for the weighted BCP and $k=1$, the problem is solved in the standard way of computing a longest path in a directed acyclic graph $G^*$, which requires $O(n^2)$ time.
For $k \ge 2$,
\cite{DBLP:journals/dam/AsahiroHMOSY06} gives only
the $O(kn^2)$ computation as indicated above, which applies
to both \textit{unweighted} and \textit{weighted} BCP.
We show that the implicit plane
representation of graph $G^*$ can lead also to
efficient algorithms for the {weighted}
BCP for $k\ge 1$. More precisely, for the weighted BCP and the special case $k=1$ we show an iterative algorithm with running time of $O(n \log^3 n)$ which improves the previous bound of $O(n^2)$. For the weighted BCP and $k \geq 2$, we show a recursive algorithm for finding a minimum weight collection of $k$ node-disjoint $s-t$ paths in graph $G^*$ with the running time of $O(k^{3k}n\log^{2k+3} n)$, improving
the previous bound of $O(kn^2)$ if $k$ is considered constant. This result also gives an algorithm with the running time of $O(k^{3k}n^3\log^{2k+3} n)$ for
the BCP variant
where the placement of the line $L$ is to be chosen. A summary of the previous and new results for the $BCP$
is shown in Table \ref{t1}.
We also show properties of BCP solutions that may be useful
within the context of applications that motivate the problem.
Specifically, an $s-t$ path in $G^*$ is a schedule for one
robot in the BCP, and the representation of this path
as a concatenation of straight-line segments on the plane
({\it e.g.} path $(s,1,2,4)$ in Figure~\ref{fig1and2}(b))
gives the direction and the speed for each part of the
schedule.
If two paths on the plane cross, then the two robots
following these paths collide (are at the same point at the
same time).
We show that for $k \geq 2$, there is at least one
minimum-weight
collection of $k$ node-disjoint non-crossing $s-t$ paths,
which ensures that the $k$ robots do not collide, and that
such a collection of optimal non-crossing paths can be
computed from any optimal collection of paths within
$O(k n\log n)$ time.
\begin{table}[h]
\centering
\caption{(\textit{Weighted}) BCP: maximize the total weight}
\label{t1}
\begin{tabular}{cccccccccc}
\hline
\textbf{Line $L$}
& \textbf{$k= 1$} \cite{DBLP:journals/dam/AsahiroHMOSY06}
& \textbf{ $k=1$}
& \textbf{$k\geq 2$} \cite{DBLP:journals/dam/AsahiroHMOSY06}
& \textbf{$k\geq 2$} \\ \hline
as part of input & $O(n^2)$ & $O(n \log^3 n)$ & $O(kn^2)$ & $O(k^{3k}n\log^{2k+3} n)$ \\
as part of output & $O(n^4)$ & $O(n^3 \log ^3 n)$ & $O(kn^4)$ & $O(k^{3k}n^3\log^{2k+3} n)$ \\ \hline
\end{tabular}
\end{table}
\begin{figure}
\centering
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[scale=0.55]{Chapter3/C3Images/BCP_basic.png}
\caption{}
\label{fig1}
\end{subfigure}%
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[scale=0.62]{Chapter3/C3Images/BCP_transformation.png}
\caption{}
\label{fig2}
\end{subfigure}
\caption{Figure \ref{fig1}: Five timed requests $1,2,3,4,5$ on the line $L$. Figure \ref{fig2}: Representation of the BCP input in the location-time coordinates and the $\alpha$-$\beta$
coordinates.
}
\label{fig1and2}
\end{figure}
The remaining part of the paper is organised in the following way. In section \ref{c3s2} we discuss the directed acyclic graph (DAG) model and its implicit planar representation. In section \ref{c3s2a} we describe the input and output of algorithm $\mathcal{A}_k$ and provide an overview of its recursive structure. In section \ref{k1sect} we consider the special case $k=2$ as an introduction to our recursive approach. In section \ref{knopi} we consider the general case $k \geq 3$. In section \ref{noncrossingpathsfork} we show the additional property of BCP solutions which ensures the $k$ robots do not collide.
\section{Preliminaries}
\label{c3s2}
\subsection{DAG model of BCP}
\label{sub:dag}
The input of the BCP, as specified in \cite{DBLP:journals/dam/AsahiroHMOSY06}, consists of $n$ tuples $(x_1,y_1,v_1),\ldots,(x_n,y_n,v_n)$ and two additional parameters $v$ and $k$. The parameter $k$ is the number of (identical) robots and $v$ specifies their maximum speed. The tuple $(x_i,y_i,v_i)$, for $1 \leq i \leq n$, specifies the speed $v_i$ of ball $b_i$ and its initial position $(x_i,y_i)$ in a 2-D plane with $x$ and $y$ coordinates. We assume that $y_i \ge 0$.
Starting at time $t=0$, ball $b_i \in B$ moves from
$(x_i,y_i)$ with constant speed
of $v_i$ towards the $x$-axis, reaching the point $(x_i, 0)$ at time $t_i={y_i}/{v_i}$.
A robot intercepts (or collects) ball $b_i$, if this
robot is at time $t_i$ at point $(x_i, 0)$.
In this BCP model, to optimise the interception of balls, we need to know only the numbers $x_i$ and $t_i$, so we will
assume that these numbers are given directly as the input.
Notice that two or more balls can cross the line $L$ at the same time $t$ at the same distance $x$ from the origin.
When a robot is at time $t$ at $x$, it can intercept all
these balls.
In the graph model, we assume that if $w$ balls
are at the same time at the same place on the line, then they are represented by a single ball with weight $w$.
That is, the input to the problem is $n$ weighted timed requests $\{(x_i,t_i,w_i),
1 \le i \le n\}$, where $x_i\in (-\infty, + \infty)$,
$t_i \ge 0$ and $w_i \ge 0$ are the location
(on the line),
the time and the weight of request $i$, respectively.
We assume that the speed $v$ of the $k$ robots is equal to $1$ (this is achieved by dividing $x_i$ by $v$ for $i=1,2,\ldots,n$).
We model the input as a directed graph $G(V,E)$ with $n$ nodes $(1,2,..,n)$ representing the $n$ balls and two special nodes $s$ and $t$.
For $1 \le i \le n$, $1 \le j \le n$, $i \neq j$,
we have an edge $(i,j)$ in $G$, if and only if, $|x_j-x_i| \le t_j-t_i$ (recall that after normalising, $v=1$), which means that if a robot is at point $x_i$ at time $t_i$, having presumably just intercepted ball $b_i$, then it can arrive at point $x_j$ by time $t_j$ to intercept ball $b_j$.
For $1 \le i \le n$, we also have an edge $(s,i)$, if a robot starting at time $t=0$ from the origin $O$ of $L$ can reach point $x_i$ by time
$t_i$ (to intercept ball $b_i$), and we have all edges
$(i,t)$.
Graph $G(V,E)$ is acyclic since an edge $(i,j)$ implies
that $t_i < t_j$.
We assign weight $w_i$ to node $i$ and weight $0$ to nodes
$s$ and $t$.
There are no edges $(s,i)$ for the balls $b_i$ which cannot be intercepted (because $x_i > t_i$). Such balls
can be removed from the input and they do not have to
be included in graph $G$. We can therefore assume
that graph $G$ has an edge $(s,i)$ for each ball~$b_i$. Figure \ref{Fig2} shows the directed acyclic graph $G$ constructed from the BCP input shown in Figure \ref{fig1}.
An $s-t$ path $(s, i_1, i_2, \ldots, i_p, t)$ in $G$ corresponds to a feasible movement
of one robot which intercepts balls $b_{i_1}, b_{i_2}, \ldots, b_{i_p}$, in this order.
The weight of this path (the sum of the weights of the nodes on this path) is equal to the total weight of the intercepted balls.
Consequently, we can find a schedule for one robot that maximises the total weight of the intercepted balls by finding the maximum weight path from $s$ to $t$ in the directed acyclic graph $G$. This can be done in the standard way by negating the weights and moving from node weights to edge weights. That is, we construct graph $G'(V',E')$ with the same sets of nodes and edges, such that the weight $w'(i,j)$ of an edge $(i,j) \in E'$ is equal to $w'(i,j)=-(w(i,j)+w_j)$. Finding the maximum weight path from $s$ to $t$ in $G$ is equivalent to finding a shortest (that is, minimum weight) path from $s$ to $t$ in $G'$.
Since graph $G$ (and subsequently $G'$) may have $\Theta(n^2)$ edges in the worst
case, without referring to a special structure of~$G$, we can only conclude that such a path can be computed in $O(n^2)$ time.
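To make this standard computation concrete, the following is a minimal sketch of the $O(n^2)$ dynamic program (our own illustration with hypothetical data, not code from \cite{DBLP:journals/dam/AsahiroHMOSY06}): sort the balls by interception time and relax every edge $(i,j)$ satisfying $|x_j-x_i|\le t_j-t_i$.

```python
def max_weight_schedule(balls):
    """Maximum total weight that a single robot can collect.

    balls: list of (x, t, w) with the robot speed normalised to v = 1;
    an edge i -> j of G exists iff |x_j - x_i| <= t_j - t_i, and an
    edge s -> i exists iff |x_i| <= t_i.
    """
    # Drop balls unreachable from the origin and sort by interception time.
    balls = sorted((b for b in balls if abs(b[0]) <= b[1]), key=lambda b: b[1])
    best = []  # best[i]: maximum weight of an s-to-i path ending at ball i
    for j, (xj, tj, wj) in enumerate(balls):
        best_j = wj                               # path s -> j directly
        for i in range(j):
            xi, ti, _ = balls[i]
            if abs(xj - xi) <= tj - ti:           # edge i -> j in G
                best_j = max(best_j, best[i] + wj)
        best.append(best_j)
    return max(best, default=0)

# Hypothetical input: three reachable balls given as (x, t, w).
print(max_weight_schedule([(1, 1, 3), (-1, 1, 2), (0, 2, 4)]))  # 7
```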
\begin{figure}
\centering
\includegraphics[bb=0 0 500 200,scale=0.7]{Chapter3/C3Images/DAG1.png}
\caption{The directed acyclic graph $G$ corresponding to the BCP input shown in Figure \ref{fig1} for the appropriate values of $x_i,t_i$. The weight of each node $i \in V \setminus{\{s,t\}}$ is equal to $w_i$.}
\label{Fig2}
\end{figure}
For $k \geq 2$ the problem asks for $k$ node-disjoint paths from $s$ to $t$ in $G$ such that the total weight of the selected paths is minimized. The condition of node-disjoint paths refers to the internal nodes and ensures that no intercepted ball is counted twice.
We change from node-disjoint paths to edge-disjoint paths
in the standard way by considering the following
modified graph $G^{*}$ obtained from $G$ by splitting
nodes, as explained below.
Every node $i \in V\setminus\big\{s,t\big\}$ in $G$ is represented in $G^*$ by two nodes $i^{-},i^{+}$ connected by a \textit{short} edge from $i^{-}$ to $i^{+}$.
The set $V^*$ of nodes in $G^*$ includes also nodes $s$
and $t$. Each edge $(i,j)$ in $G$, where $i,j\in V \setminus \{s,t\}$, is replaced by a \textit{long} edge $(i^{+},j^{-})$. Each edge $(s,i)$ is replaced by a \textit{long} edge $(s,i^-)$ and each edge $(i,t)$ is replaced by a \textit{long} edge $(i^+,t)$ for $i=1,2,\ldots,n$. We note that two paths from $s$ to $t$ share a node $i$ in $G$ if, and only if, the corresponding paths in $G^*$ share the edge $(i^{-},i^{+})$.
The weights are moved from the nodes in $G$ onto the corresponding short edges in $G^*$: the weight of the \textit{short} edge connecting nodes $i^{-},i^{+}$, for each $i$, is equal to $-w_i$. The weight of each \textit{long} edge is equal to zero. The capacity of every edge (long and short) is set to $1$.
Figure \ref{mcf} illustrates the obtained directed acyclic graph $G^{*}$ by applying the transformation described above to graph $G$ of Figure \ref{Fig2}.
Each collection of $k$ node disjoint $s-t$ paths in $G$
corresponds in a natural way to a collection of $k$ edge-disjoint $s-t$ paths in $G^*$, with corresponding paths
having the same weight.
Finding $k$ edge-disjoint $s-t$ paths with the minimum
total weight is equivalent to finding a minimum-cost
flow with source $s$, destination $t$ and demand $k$,
assuming unit edge capacities. A collection of $k$ node-disjoint $s-t$ paths in $G^*$
(which must be also edge-disjoint) with minimum total weight gives an optimal schedule for $k$ robots in the BCP.
A collection of $k$ node-disjoint paths from $s$ to $t$ in $G^*$ with the minimum total weight can be found in $O(kn^2)$ time by the {\em successive shortest path algorithm}~\cite{Ahuja:1993:NFT:137406}.
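The following is a compact sketch of this flow computation (our own illustrative implementation with hypothetical data, not the authors' code): build the split graph $G^*$ with unit capacities and cost $-w_i$ on short edges, and augment along $k$ successive shortest paths found by Bellman-Ford, since the edge costs are negative (but the residual graph has no negative cycles).

```python
def best_k_schedules(balls, k):
    """Maximum total weight that k robots can collect (speed v = 1),
    computed as a min-cost flow of value at most k on the split graph G*."""
    balls = [b for b in balls if abs(b[0]) <= b[1]]   # keep reachable balls
    n = len(balls)
    S, T = 2 * n, 2 * n + 1    # ball i is split into nodes i (= i-) and n+i (= i+)
    graph = [[] for _ in range(2 * n + 2)]

    def add_edge(u, v, cost):  # unit-capacity edge plus its residual twin
        graph[u].append([v, 1, cost, len(graph[v])])
        graph[v].append([u, 0, -cost, len(graph[u]) - 1])

    for i, (xi, ti, wi) in enumerate(balls):
        add_edge(i, n + i, -wi)    # short edge (i-, i+) carries cost -w_i
        add_edge(S, i, 0)          # s -> i-: every remaining ball is reachable
        add_edge(n + i, T, 0)      # i+ -> t
        for j, (xj, tj, _w) in enumerate(balls):
            if i != j and abs(xj - xi) <= tj - ti:
                add_edge(n + i, j, 0)  # long edge i+ -> j-

    total = 0
    for _ in range(k):             # k successive shortest-path augmentations
        dist = [float("inf")] * (2 * n + 2)
        prev = [None] * (2 * n + 2)
        dist[S] = 0
        for _ in range(2 * n + 1): # Bellman-Ford handles the negative costs
            for u in range(2 * n + 2):
                for ei, (v, cap, cost, _r) in enumerate(graph[u]):
                    if cap and dist[u] + cost < dist[v]:
                        dist[v] = dist[u] + cost
                        prev[v] = (u, ei)
        if dist[T] >= 0:           # no weight-improving augmenting path remains
            break
        total -= dist[T]           # path cost is minus the collected weight
        v = T
        while v != S:              # push one unit of flow along the path
            u, ei = prev[v]
            graph[u][ei][1] -= 1
            graph[v][graph[u][ei][3]][1] += 1
            v = u
    return total

# Hypothetical input: two robots, balls given as (x, t, w).
print(best_k_schedules([(1, 1, 3), (-1, 1, 2), (0, 2, 4)], 2))  # 9
```

In the example, one robot collects balls $1$ and $3$ (weight $3+4$) and the second collects ball $2$ (weight $2$), for a total of $9$.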
For $k=1$ and the unweighted BCP, that is, when we are looking for a shortest
$s-t$ path in $G$ and all nodes have weight equal to $1$,
it is shown in~\cite{DBLP:journals/dam/AsahiroHMOSY06} that such a path can be found in $O(n \log n)$ time
by using the special geometric representation of graph $G$
described in Section~\ref{sub:plane}. For $k=1$ and the weighted BCP computing a shortest
$s-t$ path in $G$ requires $O(n^2)$ time.
For $k\ge 2$, \cite{DBLP:journals/dam/AsahiroHMOSY06} gives only
the straightforward $O(kn^2)$ computation indicated above, which applies to both the weighted and the unweighted case.
The main contribution of our work is to show that the geometric representation of graph $G$ can also lead to efficient algorithms for the weighted BCP and $k\ge 1$.
\begin{figure}
\centering
\includegraphics[bb=0 0 500 200,scale=0.7]{Chapter3/C3Images/DAG2.png}
\caption{Directed Acyclic Graph $G^{*}$ with $2n+2$ nodes. The weight of a short edge $(x^-,x^+)$ is equal to $-w_x$ (for clarity we assume that $w_x=1$ $ \forall x \in V^*$). The weight of a long edge $(x,y)$ is equal to $0$. }
\label{mcf}
\end{figure}
\subsection{The plane representation of DAG $G^*$}
\label{sub:plane}
It was shown in \cite{DBLP:journals/dam/AsahiroHMOSY06} that the directed acyclic graph $G$ (and graph $G^*$) can be implicitly represented with a set of points ${\cal{P}}$ on the Euclidean 2-D plane. There are $2n+2$ points in ${\mathcal{P}}$ which correspond to the
nodes in graph $G^*$.
There are two special points $s$ and $t$
which correspond to the special nodes in $G^*$. The remaining $2n$ ``regular'' points can be seen as $n$ pairs of points $(1^-,1^+),\ldots,(n^-,n^+)$. For $i=1,2,\ldots,n$, a pair of points $(i^-,i^+)$ in $\cal{P}$ corresponds to the pair of nodes $(i^-,i^+)$ in $G^*$ and therefore corresponds to
ball $b_i$ of the BCP input.
Because of this correspondence, we will use the terms
ball, node and point (in ${\mathcal{P}}$) interchangeably
(remembering that the special nodes/points $s$ and $t$
do not correspond to any ball).
The placement of a pair of points $(i^-,i^+) \in \cal{P}$ in the $2$-D plane is described with coordinates $\alpha$ and $\beta$ defined in the following way: \begin{itemize}
\item $\alpha_i = t_i + x_i \quad$ and $\quad \beta_i= t_i - x_i$
\item $\alpha_i^+ = \alpha_i, \quad \beta_i^+= \beta_i \quad $ and $\quad \alpha_i^- =\alpha^+_i - \epsilon, \quad \beta_i^-=\beta_i^+ - \epsilon$
\end{itemize}
where $\epsilon$ is an arbitrarily small number which ensures that $i^-$ and $i^+$ are sufficiently ``close'' to each other, so that there is no point $j\neq i$ satisfying $\alpha_i^- \leq \alpha_j \leq \alpha_i^+$ or $\beta_i^- \leq \beta_j \leq \beta_i^+$. This transformation essentially consists of rotating the location-time coordinates by $45^{\circ}$ to the new system of $\alpha$-$\beta$
coordinates -- see Figure~\ref{fig1and2}(b).
To simplify matters, we will refer to the pair of points $(i^-,i^+)$ simply as point $i$. Figure \ref{Figure3b} shows the $\alpha$-$\beta$ planar representation of the directed acyclic graph $G^*$ shown in Figure \ref{mcf} (for clarity we do not show the splitting of points into two). The $\alpha$-$\beta$ planar representation (which can be constructed in $\Theta(n)$ time) implicitly represents the directed acyclic graphs $G$ and $G^*$.
There is an edge in $G$ from node $i$ to node $j$, if, and only if, a robot can intercept
ball $b_j$ after intercepting ball $b_i$. This means that $t_j-t_i\geq |x_j-x_i|$, which is equivalent to having $\alpha_j \geq \alpha_i$ and $\beta_j \geq \beta_i$.
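The change of coordinates and the resulting reachability test can be summarised as follows (a small sketch; the function names are ours, each ball is encoded as a location-time pair $(x, t)$, and the robot speed is taken to be $1$, matching the dominance relation between points in the $\alpha$-$\beta$ plane):

```python
def alpha_beta(x, t):
    """Rotate the location-time coordinates (x, t) by 45 degrees."""
    return t + x, t - x

def can_follow(ball_i, ball_j):
    """True iff a robot that intercepts ball i at (x_i, t_i) can then
    intercept ball j at (x_j, t_j), i.e. t_j - t_i >= |x_j - x_i|.
    In the rotated plane this is exactly coordinate-wise dominance."""
    (xi, ti), (xj, tj) = ball_i, ball_j
    ai, bi = alpha_beta(xi, ti)
    aj, bj = alpha_beta(xj, tj)
    return ai <= aj and bi <= bj
```

Expanding $t_j - t_i \geq \pm(x_j - x_i)$ gives $t_j + x_j \geq t_i + x_i$ and $t_j - x_j \geq t_i - x_i$, which is the dominance test performed above.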
\begin{figure}
\centering
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[scale=0.42]{Chapter3/C3Images/planarA.png}
\caption{}
\label{Figure3b}
\end{subfigure}%
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[scale=0.42]{Chapter3/C3Images/planarB.png}
\caption{}
\label{Figure3}
\end{subfigure}
\caption{Figure \ref{Figure3b} shows the implicit plane representation of $G^*$ in the $\alpha$-$\beta$ coordinate system. Figure \ref{Figure3} shows the transformation of the input such that all points have distinct integer $\alpha$ and $\beta$ coordinates.}
\end{figure}
We want the set of points $\cal{P}$ to represent
correctly the topology (the edges) of graph $G^*$,
but otherwise the values of the
coordinates of the points in $\cal{P}$ are not important.
We can therefore assume that $s=(0,0)$, $t = (n+1,n+1)$, each
regular point in $\cal{P}$ has integral coordinates
and any two distinct points in $\cal{P}$ have
both coordinates distinct. This can be achieved by sorting the $\alpha$ and the $\beta$ coordinates of all points in $\cal{P}$ and setting the value of the $m$-th smallest $\alpha$ (resp.\ $\beta$) coordinate equal to $m$, for $m=1,2,\ldots,n$.
Notice that two points $i$ and $j$ cannot have the same $\alpha$ and $\beta$ coordinates because this would imply that balls $b_i$ and $b_j$ cross the line $L$ at the same time and at the same distance from the origin, and thus $b_i$ and $b_j$ correspond to the same point $i$. It is possible, however, for two points $i$ and $j$ in $\cal{P}$ to share the same $\alpha$ or $\beta$ coordinate. If for two points $i$ and $j$ in $\cal{P}$ we have $\alpha_i=\alpha_j$ (resp. $\beta_i=\beta_j$) but $\beta_i \leq \beta_j$ (resp. $\alpha_i \leq \alpha_j$), then to ensure that $\cal{P}$ represents correctly the topology (the edges) of graph $G^*$ we set the value of $\alpha_i$ equal to $m$ and the value of $\alpha_j$ equal to $m+1$. Figure \ref{Figure3} illustrates an example of the replacement of the $\alpha$ and $\beta$ coordinates with integral values (for clarity we do not show the splitting of points into two).
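The rank-based replacement of coordinates can be sketched as follows (a minimal illustration; the function name is ours, and ties in one coordinate are broken by the other coordinate, as described above, so that the dominance relation is preserved):

```python
def normalize(points):
    """Replace the alpha and beta coordinates of the given (alpha, beta)
    points by distinct integers 1..n, preserving dominance.  Ties in one
    coordinate are broken by the other coordinate."""
    n = len(points)
    # The m-th smallest alpha (resp. beta) coordinate becomes m.
    by_alpha = sorted(range(n), key=lambda i: (points[i][0], points[i][1]))
    by_beta = sorted(range(n), key=lambda i: (points[i][1], points[i][0]))
    new = [[0, 0] for _ in range(n)]
    for rank, i in enumerate(by_alpha, start=1):
        new[i][0] = rank
    for rank, i in enumerate(by_beta, start=1):
        new[i][1] = rank
    return [tuple(p) for p in new]
```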
For two points $i$ and $j$ in the plane,
with coordinates $(\alpha_i,\beta_i)$
and $(\alpha_j,\beta_j)$, respectively,
we write $i \prec j$ to denote that point $j$ dominates
point $i$ in the sense that $i \neq j$,
$\alpha_i \le \alpha_j$ and $\beta_i \le \beta_j$.
We write $i \bowtie j$ to denote that points $i$ and $j$
are distinct and neither $i \prec j$ nor $j \prec i$.
We have $(0,0) = s\prec t$ and for
each regular point $i$ in ${\cal{P}}$,
$s\prec i\prec t$.
Thus for any two points $i$ and $j$ in $\cal{P}$
(regular or special) $(i,j)$
is an edge in $G^*$ if, and only if,
$i\prec j$.
\section{Algorithm $\mathcal{A}_k$}
\label{c3s2a}
\subsection{Input and Output}
\label{sec2:mincostflow}
Consider the implicit representation of the directed acyclic graph $G^*$ with the points in $\mathcal{P}$.
The edges of $G^*$ are represented by straight-line
segments in the $\alpha$-$\beta$ plane.
\begin{definition}
We say that two node-disjoint edges $(u,v)$ and $(x,y)$
in $G^*$ \emph{cross}, if the two (closed)
segments $[u,v]$ and $[x,y]$ in the plane
have a common point.
\end{definition}
Recall that we can assume w.l.o.g.\
that the $\alpha$ coordinates and the $\beta$ coordinates
of the $n$ nodes are distinct integers in $[1,n]$ (see
subsection~\ref{sub:plane}).
We also assume that all points are in
general position (the reduction to achieve this
requires increasing the range of the integer
coordinates). Therefore, if two node-disjoint edges $(u,v)$ and $(x,y)$
in $G^*$ cross then the common point of the closed segments does not correspond to a point in $\mathcal{P}$. We say that two paths $Q$ and $Q'$ in $G^*$ \emph{cross},
if there is an edge $(u,v) \in Q$ crossing with an edge $(x,y) \in Q'$. We say that a path $Q$ in $G^*$ is \textit{non-self-crossing} if $Q$ does not traverse two edges that cross.
To provide an overview of our algorithm in the context of the minimum cost flow problem,
we denote by $\mathcal{N}$ the flow network
based on graph $G^*$, as discussed in subsection \ref{sub:dag}, with negative node weights (\textit{i.e.} weights of the short edges)
and all edges (short and long) having unit capacities.
For network $\mathcal{N}$ and $k-1$ node-disjoint $s$-$t$
paths $\mathcal{Y}_1, \mathcal{Y}_2,$
$\ldots,$ $\mathcal{Y}_{k-1}$ in $\mathcal{N}$, which
represent
an integral flow of value $k-1$ in $\mathcal{N}$,
we define the residual network
${\mathcal{N}}_{k-1}$ in the usual way, by
reversing the edges of
the paths $\mathcal{Y}_1, \mathcal{Y}_2, \ldots,$ $\mathcal{Y}_{k-1}$.
The base case is $\mathcal{N}_0 \equiv \mathcal{N}$.
We show an algorithm $\mathcal{A}_{k}$ which
for an input
$(\mathcal{N}; \mathcal{Y}_1, \mathcal{Y}_2,
\ldots,\mathcal{Y}_{k-1})$, where
$\mathcal{Y}_1, \mathcal{Y}_2,$ $\ldots,$ $\mathcal{Y}_{k-1}$
are $k-1$ node-disjoint \emph{non-crossing} $s$-$t$ paths
minimizing the total weight of any collection of
$k-1$ \emph{node-disjoint} $s$-$t$ paths,
computes
a shortest path tree $T^*$ rooted at $s$ in the residual
network $\mathcal{N}_{k-1}$. The paths $\mathcal{Y}_1, \mathcal{Y}_2,$
$\ldots,$ $\mathcal{Y}_{k-1}$ in $\mathcal{N}$ are given in a left-right order in their plane representation. We maintain two global arrays $h$ and $predecessor$, which are
indexed by the points $\mathcal{P}\setminus\{s\}$.
At the end of the computation,
for each $x\in \mathcal{P}\setminus\{s\}$,
the values $h(x)$ and $predecessor(x)$
should be the shortest path weight from $s$ to $x$ and
the predecessor of point $x$ in the tree $T^*$.
When $k=2$, we have only one path $\mathcal{Y}_1$, so the condition that paths
$\mathcal{Y}_1, \mathcal{Y}_2,$ $\ldots,$ $\mathcal{Y}_{k-1}$
are \emph{non-crossing} is trivially satisfied.
For subsequent values of $k$, this condition will be
ensured inductively.
From now on, when we refer to paths $\mathcal{Y}_1, \mathcal{Y}_2,$ $\ldots,$
$\mathcal{Y}_{k-1}$, we assume that they are non-crossing
paths representing a minimum-cost flow value of $k-1$. The paths $\mathcal{Y}_1, \mathcal{Y}_2,$ $\ldots,$ $\mathcal{Y}_{k-1}$ in network $\mathcal{N}$ and
the computed $s$-$t$ shortest path in the residual network
$\mathcal{N}_{k-1}$ give in the usual way
a minimum-cost flow of value $k$ in $\mathcal{N}$.
This flow is represented by $k$ node-disjoint
$s$-$t$ paths $Y_1,Y_2,\ldots,Y_k$ in $\mathcal{N}$,
which are not necessarily non-crossing.
Let $\mathcal{P}_k \subseteq \mathcal{P}$ be the set of all
points covered by the paths $Y_1,Y_2,\ldots,Y_k$.
The following theorem states that
a valid input for algorithm $\mathcal{A}_{k+1}$ exists
and can be computed in an efficient way.
\begin{theorem}
\label{maintheoremuncrossing}
Given a point set $\mathcal{P}_k$ such that all points in $\mathcal{P}_k$ can be covered with $k$ paths, there is an $O(kn\log n)$ algorithm $\widetilde U_k$ which computes a collection of $k$ node-disjoint non-crossing $s$-$t$ paths $\mathcal{Y}_1,\mathcal{Y}_2,\ldots,\mathcal{Y}_k$ covering all points in $\mathcal{P}_k$.
\end{theorem}
The proof of Theorem \ref{maintheoremuncrossing} is given separately in Section \ref{noncrossingpathsfork}. Starting with the network $\mathcal{N}$,
we compute a minimum-cost integral flow of value $k$
in $\mathcal{N}$,
which gives a solution for BCP,
by iterating algorithm $\mathcal{A}_i$ followed by
algorithm $\widetilde{U}_i$, for $i = 1,2, \ldots, k$. Algorithm $\mathcal{A}_k$ is an instance of the \emph{relaxation technique} for
the single-source shortest paths
problem~\cite{Cormen2001introduction}
in the residual network $\mathcal{N}_{k-1}$. Arrays $h$
and $predecessor$ are only updated by
the following $\mbox{\it relax}(y,x)$ operation,
where $(y,x)$ is an edge and $w(x)$ is the weight of node
$x$:
If $h(x) > h(y) + w(x)$, then
$h(x) \leftarrow h(y) + w(x)$ and
$predecessor(x) \leftarrow y$.
In algorithm $\mathcal{A}_k$ $\mbox{\it relax}$
operations occur in groups:
$\mbox{\it Relax}(x) \equiv
\{\mbox{\it relax}(y,x): y \prec x\}$. The description of operation $\mbox{\it Relax}$ and its implementation details are given in Subsection~\ref{section:arrays}.
For the worst-case running-time efficiency,
we implement operation $\mbox{\it Relax}(x)$ not by
performing explicitly all operations $\mbox{\it relax}(y,x)$ (this would take $O(n)$ time),
but by finding a point $z\prec x$ such
that $h(z) =\min\{ h(y): y\prec x\}$ and
performing only $\mbox{\it relax}(z,x)$.
Finding point $z$ takes $O(\log^3 n)$ time
using a data structure
introduced for the two-dimensional orthogonal-search problem~\cite{Willard:1985:NDS:3674.3690}.
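The group operation can be sketched as follows (a naive rendering: the linear scan for the point $z$ stands in for the $O(\log^3 n)$ orthogonal-search data structure, and the dictionary-based bookkeeping is ours):

```python
def relax(h, pred, y, x, w):
    """Single relax step over edge (y, x); w[x] is the weight of node x."""
    if h[x] > h[y] + w[x]:
        h[x] = h[y] + w[x]
        pred[x] = y

def Relax(h, pred, coords, x, w):
    """Group relaxation Relax(x): instead of relaxing x from every
    dominated point y, find z with h(z) = min{h(y) : y < x in the
    dominance order} and perform the single step relax(z, x).  The scan
    below takes O(n) time; the text obtains z in O(log^3 n) time."""
    ax, bx = coords[x]
    dominated = [y for y in coords
                 if y != x and coords[y][0] <= ax and coords[y][1] <= bx]
    if dominated:
        z = min(dominated, key=lambda y: h[y])
        relax(h, pred, z, x, w)
```

Performing only $\mbox{\it relax}(z,x)$ is enough because every $\mbox{\it relax}(y,x)$ with $h(y) \geq h(z)$ would leave $h(x)$ unchanged.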
\subsection{Overview of Algorithm $\mathcal{A}_k$}
In this section we give an overview of algorithm $\mathcal{A}_k$ for $k\geq 3$. The detailed description and the analysis of algorithm $\mathcal{A}_k$ for
$k \geq 3$ are given in Section \ref{knopi}. The analysis of algorithm $\mathcal{A}_k$ for the special case
$k = 2$ is given separately in Section \ref{k1sect}.
This special case does not refer to
some of the elaborations of the general case, so
the arguments are simpler and shorter, and can be treated
as preliminaries to the general case.
To facilitate the recursive structure of algorithm
${\mathcal{A}_{k}}$, we extend the input specification
to a sub-network
of $\mathcal{N}_{k-1}$
induced by the points in $\mathcal{P}$
with the $\beta$ coordinates in the interval
$(\beta_1, \beta_2]$,
for given $\beta_1 < \beta_2$.
We denote this sub-network by
$(\mathcal{N};
\mathcal{Y}_1, \mathcal{Y}_2,
\ldots,\mathcal{Y}_{k-1})[\beta_1, \beta_2]$,
or $\mathcal{N}[\beta_1, \beta_2]$ for short.
The initial input, that is, the input to the initial
call to algorithm ${\mathcal{A}_{k}}$, is the whole residual network $\mathcal{N}_{k-1}$ which is defined
by the interval $(0, n+1]$.
Arrays $h$ and $predecessor$ are global
and initialized outside of the computation of
algorithm~${\mathcal{A}_{k}}$
(details of this global initialisation are
in subsection \ref{section:arrays}).
The subsequent recursive calls to~${\mathcal{A}_{k}}$
continue from the current state of these arrays, without
re-initialising.
More precisely,
when algorithm $\mathcal{A}_k$ is applied to a sub-network
$\mathcal{N}' = \mathcal{N}[\beta_1,\beta_2]$
(a recursive call), then the computation starts
with each point $x$ in $\mathcal{N}'$
having some value $h(x) \leq 0$, and array $predecessor$
restricted to $\mathcal{N}'$
representing a forest in $\mathcal{N}'$.
At the end of the computation,
for each point $x$ in the sub-network,
$h(x)$ is equal to the weight of some path
to $x$, hopefully smaller than its starting
value, and array $predecessor$
represents a new forest.
A call to algorithm ${\mathcal{A}_{k}}$ for a sub-network $\mathcal{N}[\beta_1, \beta_2]$
includes two recursive calls to
${\mathcal{A}_{k}}$
applied to sub-networks
$\mathcal{N}[\beta_1,(\beta_1+\beta_2)/2]$ and
$\mathcal{N}[(\beta_1+\beta_2)/2,\beta_2]$.
We denote by $\mathcal{N}_1$ and $\mathcal{N}_2$
these two sub-networks, respectively, or the sets of
nodes (points) in these sub-networks,
depending on the context.
The sub-network $\mathcal{N}_1$ (the lower half) contains $\lceil m/2 \rceil$
points from $\mathcal{P}$ and the
sub-network $\mathcal{N}_2$ (the upper half) contains $\lfloor m/2 \rfloor$ points
from $\mathcal{P}$, where $m$ is the number of points in $\mathcal{N}[\beta_1, \beta_2]$.
The base cases of the recursion are sub-problems of size smaller than some constant threshold.
Algorithm $\mathcal{A}_k$ also includes a \textit{coordination} phase which takes place between the two recursive calls and consists of calling a coordination algorithm $\mathcal{C}_k$ on $\mathcal{N}[\beta_1,\beta_2]$.
While the recursive calls to algorithm $\mathcal{A}_k$ on $\mathcal{N}_1$ and $\mathcal{N}_2$ consider paths
which are wholly either in $\mathcal{N}_1$ or $\mathcal{N}_2$,
the coordination algorithm $\mathcal{C}_k$ is responsible for considering paths which have points both in $\mathcal{N}_1$ and $\mathcal{N}_2$. Putting everything together, when algorithm $\mathcal{A}_k$ is applied to a sub-network $\mathcal{N}[\beta_1,\beta_2]$, the computation consists of three phases: the first phase is the recursive call of algorithm $\mathcal{A}_k$ on sub-network $\mathcal{N}_1$; the second phase is the coordination of $\mathcal{N}_1$ and $\mathcal{N}_2$ by algorithm $\mathcal{C}_k$; and the third phase is the recursive call of algorithm $\mathcal{A}_k$ on $\mathcal{N}_2$.
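The three-phase recursive structure can be sketched as follows (a schematic skeleton only: the coordination algorithm $\mathcal{C}_k$ and the base-case processing are supplied as placeholder callbacks, since their details are given later, and the function names are ours):

```python
def A_k(points, beta1, beta2, coordinate, process, threshold=1):
    """Skeleton of the divide-and-conquer structure of algorithm A_k.

    points     : (alpha, beta) pairs; the call handles the sub-network
                 induced by the points with beta in (beta1, beta2]
    coordinate : placeholder for the coordination algorithm C_k
    process    : placeholder for the constant-size base-case work
    """
    sub = [p for p in points if beta1 < p[1] <= beta2]
    if not sub:
        return
    if len(sub) <= threshold:
        process(sub)                      # base case of the recursion
        return
    mid = (beta1 + beta2) / 2
    A_k(points, beta1, mid, coordinate, process, threshold)  # phase 1: N1
    coordinate(beta1, beta2)              # phase 2: C_k on N[beta1, beta2]
    A_k(points, mid, beta2, coordinate, process, threshold)  # phase 3: N2
```

Note that the coordination phase runs on the whole interval $(\beta_1,\beta_2]$, after the lower half has been processed but before the upper half.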
\begin{definition}
We say that the computation of a shortest-path algorithm,
or a part of such algorithm,
follows a given path
$Q = (x_0,x_1,\ldots,x_m)$,
if the computation includes all relax operations $\mbox{\it relax}(x_i,x_{i+1})$,
$i = 0,1,\ldots,m-1$, in this order.
\end{definition}
Note that each operation $\mbox{\it relax}(x_i,x_{i+1})$
may be implicitly included within operation
$\mbox{\it Relax}(x_{i+1})$. Recall that for a sub-network $\mathcal{N}[\beta_1,\beta_2]$ a path $Q$ in the sub-network is non-self-crossing if $Q$ does not traverse two edges that cross.
\begin{definition}
For a sub-network
$(\mathcal{N}; \mathcal{Y}_1, \mathcal{Y}_2,
\ldots,\mathcal{Y}_{k-1})[\beta_1,\beta_2]$
and a point $v$ in this sub-network, we define
path $\widetilde Q_v$ as the minimum weight path
among all non-self-crossing paths in this sub-network which end at $v$.
We denote by $\widetilde h(v)$
the weight of path $\widetilde Q_v$.
\end{definition}
The following theorem describes the specification
of algorithm~$\mathcal{A}_k$.
\begin{theorem}\label{Thm1noPi}
When algorithm~$\mathcal{A}_k$ is applied
to a sub-network
$(\mathcal{N}; \mathcal{Y}_1, \mathcal{Y}_2,
\ldots,\mathcal{Y}_{k-1})[\beta_1,\beta_2]$, the computation follows every non-self-crossing path in this sub-network and the running time is $O(k^{3k}n \log^{2k+3} n)$ where $n$ is the size of the sub-network.
\end{theorem}
If the computation follows a path $Q = (x_0,x_1,\ldots,x_m)$, then
at the end of this computation, the computed shortest path weight $h(x_m)$ is at most the weight of $Q$. Theorem \ref{Thm1noPi} implies the following corollary.
\begin{corollary}
\label{Thm1CornoPi}
When the call of algorithm $\mathcal{A}_{k}$ on a sub-network $(\mathcal{N}; \mathcal{Y}_1, \mathcal{Y}_2,
\ldots,\mathcal{Y}_{k-1})[\beta_1,\beta_2]$ terminates, for every point $v$ in the sub-network we have $h(v) \leq \widetilde h(v)$.
\end{corollary}
The proof of Theorem \ref{Thm1noPi} consists of
showing that the computation of
$\mathcal{A}_k(\,\mathcal{N}[\beta_1,\beta_2]\,)$
follows every non-self-crossing path in
$\mathcal{N}[\beta_1,\beta_2]$.
First we analyse the combinatorial structure of a non-self-crossing path $Q$ by considering its geometric representation on the $\alpha$-$\beta$ plane and then we show how the consecutive computational phases of
algorithm $\mathcal{A}_k$ follow the consecutive
sections of path $Q$.
To use Theorem~\ref{Thm1noPi} to conclude that algorithm $\mathcal{A}_k$ applied to the whole residual network
$\mathcal{N}_{k-1}$ is
correct,
that is, that the computed tree is indeed a shortest path tree in $\mathcal{N}_{k-1}$,
we need Theorem \ref{mainThm2nopi} (given below) which
asserts that there are non-self-crossing shortest paths in $\mathcal{N}_{k-1}$. To simplify the presentation of a non-self-crossing path followed by algorithm $\mathcal{A}_k$ (not necessarily a shortest path) we distinguish between \emph{red} and \emph{black} points and edges.
The {red} points and {red} edges are the points and edges on the paths $\mathcal{Y}_1, \mathcal{Y}_2, \ldots,\mathcal{Y}_{k-1}$. All other points and edges are black. Recall that in the residual network $\mathcal{N}_{k-1}$, the red edges (short and long) of the paths $\mathcal{Y}_1, \mathcal{Y}_2, \ldots,\mathcal{Y}_{k-1}$ have reversed direction and negated weights, in the standard way. That is, a long red edge $(u,v)\in \mathcal{Y}_j$, where $j \in [1,k-1]$ and $u \prec v$, has weight equal to $0$ and reversed direction from $v$ to $u$. A short red edge $(u^-,u^+)$ has direction from $u^+$ to $u^-$ and weight equal to $w_u$.
\begin{theorem}\label{mainThm2nopi}
For a sub-network $\mathcal{N}[\beta_1,\beta_2]$,
there exists
a non-self-crossing shortest path
to every point $v$ in the sub-network.
\end{theorem}
\begin{proof}
For a sub-network $\mathcal{N}[\beta_1,\beta_2]$ consider a point $v$ in this sub-network. Among all shortest paths to point $v$, let $Q^*$ be one with the minimum number of edges. We claim that $Q^*$ is non-self-crossing. Assume, towards a contradiction, that $Q^*$ is self-crossing.
Recall that each point $x$ is a pair of points $(x^-,x^+)$ connected with a short edge of capacity $1$. We denote by $h(x)$ the weight of the sub-path of $Q^*$ to point $x^+$ and by $h^-(x)$ the weight of the sub-path of $Q^*$ to point $x^-$. Notice that if $x$ is a black point then the path to $x^+$ must traverse the short residual edge $(x^-,x^+)$ with weight $-w_x<0$. Therefore we have that $h(x)=h^-(x)-w_x$. If $x$ is a red point then the short edge $(x^+,x^-)$ is not residual and its weight is equal to $w_x>0$, which means that the path to $x^+$ can not traverse the short red edge $(x^+,x^-)$ and therefore we have $h(x) \leq h^-(x) \leq h(x)+w(x^+,x^-)$.
It is easy to see that if $Q^*$ is self-crossing then it must traverse at least one red edge of a path $\mathcal{Y}_j$ where $j \in [1,k-1]$. Notice that since the paths $\mathcal{Y}_1,\mathcal{Y}_2,\ldots,\mathcal{Y}_{k-1}$ are pairwise non-crossing, if $Q^*$ is self-crossing then either $Q^*$ has two black edges $(u,v)$ and $(x,y)$ that cross (see Figure \ref{nonselfcrossingtheorem1}) or a black edge $(u,v)$ crossing with a red edge $(y,x)$ (see Figure \ref{nonselfcrossingtheorem2}).
Without loss of generality, we assume that edge $(u,v)$ appears before edge $(x,y)$ in $Q^*$. Let $\pi$ be the crossing point of edge $(u,v)$ and edge $(x,y)$. Since $\pi$ is a point on the closed segment $[u,v]$ of the black edge $(u,v)$ we have that $u \prec \pi$. Similarly, since $\pi$ is a point on the closed segment $[x,y]$ of the black edge $(x,y)$ (resp. red edge $(y,x)$) we have that $\pi \prec y$. Thus, we conclude that $u \prec y$. Symmetrically, we obtain that $x \prec v$.
We first claim that $h(u)=h^-(y)$. If $h(u)<h^-(y)$, then consider the path $Q_u \cup \{(u^+,y^-)\}$ to point $y^-$, where $Q_{u}$ is the sub-path of $Q^*$ to point $u^+$. The weight of the long edge $(u^+,y^-)$ is equal to zero and, since $h(u)<h^-(y)$, the weight of path $Q_{u}\cup \{(u^+,y^-)\}$ is smaller than the weight of path $Q_{y}$, where $Q_y$ is the sub-path of $Q^*$ to point $y^-$. This contradicts the fact that $Q_{y}$ is a shortest path to point $y^-$. If $h(u)>h^-(y)$, then we obtain that $h^-(v)>h(x)$, since the weight of the long edges $(u^+,v^-)$ and $(x^+,y^-)$ is equal to zero. Consider the cycle $C=Q_{vx} \cup \{(x^+,v^-)\}$, where $Q_{vx}$ is the sub-path of $Q^*$ from $v^-$ to $x^+$. If $h^-(v)>h(x)$ then the total weight of cycle $C$ is negative, a contradiction, since there are no negative cycles in the residual network.
Let $m^*$ be the number of edges in path $Q^*$. Consider the decomposition of $Q^*$ into $Q_u \cup Q_{uy} \cup Q_{y}$, where $Q_u$ is the sub-path of $Q^*$ from its starting point to point $u^+$, $Q_{uy}$ is the sub-path of $Q^*$ from $u^+$ to $y^-$ and $Q_{y}$ is the sub-path of $Q^*$ from $y^-$ to $v$. Consider the path $Q=Q_u \cup \{(u^+,y^-)\} \cup Q_{y}$ and let $m$ be the number of edges in path $Q$. Path $Q$ has the same weight as $Q^*$ since $h(u)=h^-(y)$. Further, $m < m^*$ since the sub-path $Q_{uy}$ consists of at least two edges. This contradicts the choice of $Q^*$ as a shortest path to $v$ with the minimum number of edges.
\end{proof}
\begin{figure}
\centering
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[scale=0.5]{Chapter3/C3Images/nonselfcrossingtheorem1.png}
\caption{}
\label{nonselfcrossingtheorem1}
\end{subfigure}%
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[scale=0.5]{Chapter3/C3Images/nonselfcrossingtheorem2.png}
\caption{}
\label{nonselfcrossingtheorem2}
\end{subfigure}
\caption{Figures \ref{nonselfcrossingtheorem1} and \ref{nonselfcrossingtheorem2}: The schematic representation of the proof for Theorem \ref{mainThm2nopi}.}
\end{figure}
Consider the computation of
$\mathcal{A}_k(\mathcal{N}_{k-1})$, that is, the initial call of algorithm $\mathcal{A}_{k}$ to the whole residual network $\mathcal{N}_{k-1}$.
The array $predecessor$ is initialised to some tree
rooted at $s$. The details of this initialisation are given in subsection \ref{section:arrays}. Array $predecessor$ is updated only by
the $\mbox{\it relax}$ operation. Therefore, by the general properties of the shortest-paths relaxation technique,
since there are no negative cycles in $\mathcal{N}_{k-1}$, the array $predecessor$ always represents some tree.
Let $h$ and $predecessor$ be the arrays when the computation terminates.
From Corollary~\ref{Thm1CornoPi},
for every point $v$ in $\mathcal{N}_{k-1}$,
we have $h(v) \leq \widetilde h(v)$.
From Theorem \ref{mainThm2nopi},
$\widetilde h(v) = h^*(v)$,
where $h^*(v)$ is the weight of a shortest path
from $s$ to $v$ in $\mathcal{N}_{k-1}$.
Therefore we have that
$h^*(v) \leq h(v) \leq \widetilde h(v) = h^*(v)$,
so $h(v) = h^*(v)$. Thus, the computed tree must be a shortest-path tree (from the general properties of
the relaxation technique: $h(v)$ is never smaller than
the weight of the current tree path from $s$ to $v$).
For the special case $k=1$, in Section \ref{k1sect} we show an iterative algorithm $\mathcal{A}_1$ with the running time of
$O(n \log^3 n)$, which considers all points
in topological order (two points $x$ and $x'$ are in topological order if $x \prec x'$) and for each point $x$ performs operation $\mbox{\it Relax}(x)$. Algorithm $\mathcal{A}_1$ essentially implements the standard methodology\footnote{For any directed acyclic graph (DAG) $G$, a shortest path between two points in $G$ can be computed by traversing the nodes in topological order and, for each node $v$, performing the relax operation on all edges outgoing from $v$.} of computing a shortest path in a directed acyclic graph, but accounts for incoming edges (instead of outgoing edges) using operation $\mbox{\it Relax}(x)$. A topological order of the points can be found in $O(n \log n)$ time, as shown in \cite{DBLP:journals/dam/AsahiroHMOSY06}.
Operation $\mbox{\it Relax}(x)$ takes $O(\log^3(n))$ amortized time
using a data structure for orthogonal-search queries \cite{Willard:1985:NDS:3674.3690}.
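Algorithm $\mathcal{A}_1$ can be sketched as follows (a naive $O(n^2)$ rendering: the inner scan for the best dominated predecessor stands in for the orthogonal-search structure, the source $s$ is treated implicitly with $h(s)=0$, and the function name is ours):

```python
def A1(points, weight):
    """Sketch of algorithm A_1: process the points in topological
    (dominance) order and relax each point from its best dominated
    predecessor.

    points : (alpha, beta) pairs with distinct coordinates
    weight : maps each point to the weight of its short edge in G*
             (a negative number, -w_i)
    """
    order = sorted(points)           # lexicographic order extends dominance
    h = {p: float('inf') for p in order}
    pred = {}
    for x in order:
        best, best_h = None, 0.0     # None stands for the source s, h(s) = 0
        for y in order:
            if y != x and y[0] <= x[0] and y[1] <= x[1] and h[y] < best_h:
                best, best_h = y, h[y]
        h[x] = best_h + weight[x]
        pred[x] = best
    return h, pred
```

Sorting lexicographically is a valid topological order here because, with distinct coordinates, $y \prec x$ implies $\alpha_y < \alpha_x$.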
For $k \geq 2$ the proof of Theorem \ref{Thm1noPi} is outlined below. For some $\beta_1,\beta_2$ such that $0 \leq \beta_1 \leq \beta_2 \leq n+1$ consider a sub-network $\mathcal{N}[\beta_1,\beta_2]$ and let $Q$ be a non-self-crossing path in this sub-network. Without loss of generality, we assume that $Q$ has points both in $\mathcal{N}_1$ and $\mathcal{N}_2$.
\begin{definition}
\label{Def:x}
If path $Q$ starts in $\mathcal{N}_1$ then we define $x \in \mathcal{N}_1$ to be the last point in $ Q$ such that all points before $x$ are in $\mathcal{N}_1$. If path $Q$ starts in $\mathcal{N}_2$ we define $x \in \mathcal{N}_2$ to be the starting point of $Q$.
\end{definition}
\begin{definition}
\label{Def:x'}
If path $Q$ ends in $\mathcal{N}_1$ then we define $x' \in \mathcal{N}_1$ to be the last point of $ Q$. If path $Q$ ends in $\mathcal{N}_2$ we define $x'$ to be the first point of $Q$ in $\mathcal{N}_2$ such that all points after $x'$ are in $\mathcal{N}_2$.
\end{definition}
Notice that points $x$ and $x'$ are always unique and well-defined for any path $Q$ in a sub-network $\mathcal{N}[\beta_1,\beta_2]$. Path $Q$ can be decomposed into three parts $(Q_x,q_{xx'},Q_{x'})$ where $Q_x$ is the sub-path of $Q$ from its starting point to point $x$, $q_{xx'}$ is the sub-path of $Q$ from $x$ to $x'$ and $Q_{x'}$ is the sub-path of $Q$ from point $x'$ to its end point. Following Definitions \ref{Def:x} and \ref{Def:x'}, observe that if $x$ is in $\mathcal{N}_1$ then the sub-path $Q_x$ has only points in $\mathcal{N}_1$ and if $x$ is in $\mathcal{N}_2$ then $Q_x$ is empty. Similarly, if $x'$ is in $\mathcal{N}_2$ then all points in the sub-path $Q_{x'}$ are in $\mathcal{N}_2$ and if $x'$ is in $\mathcal{N}_1$ then $Q_{x'}$ is empty. The sub-path $q_{xx'}$ has points both in $\mathcal{N}_1$ and $\mathcal{N}_2$ and it is empty if $x=x'$.
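The decomposition $(Q_x,q_{xx'},Q_{x'})$ can be computed directly from the definitions (a small sketch; paths are given as point sequences, so a single-point sub-path stands for an empty one, and the function name is ours):

```python
def split_path(Q, in_N1):
    """Decompose path Q into (Q_x, q_xx', Q_x') following the
    definitions of points x and x'.  Q is a sequence of points and
    in_N1(p) is True iff point p lies in the lower half N1
    (otherwise p lies in N2)."""
    n = len(Q)
    # x: last point of the initial run of N1 points, or the starting
    # point of Q if Q starts in N2.
    ix = 0
    if in_N1(Q[0]):
        while ix + 1 < n and in_N1(Q[ix + 1]):
            ix += 1
    # x': last point of Q if Q ends in N1, otherwise the first point
    # of the final run of N2 points.
    jx = n - 1
    if not in_N1(Q[-1]):
        while jx - 1 >= 0 and not in_N1(Q[jx - 1]):
            jx -= 1
    return Q[:ix + 1], Q[ix:jx + 1], Q[jx:]
```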
Theorem \ref{Thm:coordinationnoPi} describes the specification of the coordination algorithm $\mathcal{C}_k$. Using Theorem \ref{Thm:coordinationnoPi}, the proof of Theorem \ref{Thm1noPi} follows by double induction on both parameter $k$ and the size of the network~$n$.
\begin{theorem}
\label{Thm:coordinationnoPi}
For $k \geq 2$, assuming that Theorem \ref{Thm1noPi} is true for $k-1$, when algorithm $\mathcal{C}_k$ is applied to a sub-network $(\mathcal{N}; \mathcal{Y}_1, \mathcal{Y}_2,
\ldots,\mathcal{Y}_{k-1})[\beta_1,\beta_2]$ the computation follows the sub-path $q_{xx'}$ of every non-self-crossing path $Q$ in this sub-network.
\end{theorem}
We say that the sub-path $q_{xx'}$ of $Q$ \textit{crosses} from $\mathcal{N}_2$ to $\mathcal{N}_1$ if it traverses a red edge $(u,u') \in \mathcal{Y}_j$, $j \in [1,k-1]$, such that $u \in \mathcal{N}_2$ and $u' \in \mathcal{N}_1$. Notice that for $k \geq 2$ there are exactly $k-1$ red edges $(u_1,u'_1),\ldots,(u_{k-1},u'_{k-1})$ that cross from $\mathcal{N}_2$ to $\mathcal{N}_1$. The proof of Theorem \ref{Thm:coordinationnoPi} depends on the fact that for a sub-network $\mathcal{N}[\beta_1,\beta_2]$, the sub-path $q_{xx'}$ can cross at most $k-1$ times from $\mathcal{N}_2$ to $\mathcal{N}_1$ (as each such crossing traverses one of the $k-1$ red
edges from $\mathcal{N}_2$ to $\mathcal{N}_1$)
and on the analysis of the structure of a non-self-crossing path $Q$.
\paragraph{Computational Example}\textit{} \newline
To make the preceding discussion concrete, in Figures \ref{originalInstance}, \ref{shortestpathtreecross}, \ref{shortestpathtree} and \ref{finalinstance} we show an example of the input and output of algorithm $\mathcal{A}_k$ for the special case $k=2$. Figure \ref{originalInstance} shows the implicit representation of the residual network $\mathcal{N}_{k-1}$ for $k=2$. The red segment represents path $\mathcal{Y}_1=(s,y_2,y_3,y_4,y_5,t)$.
For clarity we do not show the black edges (long and short). Further, to simplify matters, we assume that the weight of each point $i$ is equal to $1$. That is, the weight of a black short edge $(i^-,i^+)$ is equal to $-1$ and the weight of a red short edge is equal to $1$.
Figure \ref{shortestpathtreecross} shows the shortest path tree $T$ computed by algorithm $\mathcal{A}_2$ in the residual network $\mathcal{N}_1$. The computed shortest path from $s$ to $t$ in $T$ is path $Q=(s,b_3,b_4,y_4,y_3,y_2,b_2,b_5,b_6,t)$. Figure \ref{shortestpathtree} shows the two optimal node-disjoint paths $Y_1$ and $Y_2$ (which can cross) in $\mathcal{N}$ if we obtain the flow for $k=2$ in the usual way.
Finally, Figure \ref{finalinstance} shows the resulting collection of two optimal node-disjoint, non-crossing paths $\mathcal{Y}_1$ and $\mathcal{Y}_2$ in $\mathcal{N}$ obtained by the additional post-processing algorithm $\widetilde{U}_2$, which will be the input for algorithm $\mathcal{A}_3$.\footnote{Observe that we need at least 3 robots to collect balls $b_3,b_4$ and $y_3$ and the schedule shown for two robots collects every ball except $y_3$ so it must be optimal.}
To conclude that algorithm $\mathcal{A}_2$ computes a shortest path from $s$ to $t$ in the residual
network, we will show that
the sequence of $\mbox{\it relax}$ operations executed during
the computation includes a sub-sequence of $\mbox{\it relax}$ operations which
corresponds to, or ``follows'', a non-self-crossing shortest path.
For the example shown in Figures \ref{originalInstance}, \ref{shortestpathtreecross}, \ref{shortestpathtree} and \ref{finalinstance}, this sub-sequence of $\mbox{\it relax}$ operations is $(s,b_3),(b_3,b_4),(b_4,y_4),(y_4,y_3),(y_3,y_2),(y_2,b_2),(b_2,b_5),(b_5,b_6),(b_6,t)$. Notice that only the relative order of these relax operations is important.
\begin{figure}
\centering
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[scale=0.55]{Chapter3/C3Images/ex1.png}
\caption{}
\label{originalInstance}
\end{subfigure}%
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[scale=0.55]{Chapter3/C3Images/ex2.png}
\caption{}\label{shortestpathtreecross}
\end{subfigure}
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[scale=0.55]{Chapter3/C3Images/ex3.png}
\caption{}\label{shortestpathtree}
\end{subfigure}%
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[scale=0.55]{Chapter3/C3Images/ex4.png}
\caption{}\label{finalinstance}
\end{subfigure}
\caption{Figures \ref{originalInstance}, \ref{shortestpathtreecross}, \ref{shortestpathtree} and \ref{finalinstance} show a computational example of algorithm $\mathcal{A}_k$ for $k=2$.}
\end{figure}
\subsection{Implementation Details}
\label{section:arrays}
Before we discuss the implementation details of algorithm $\mathcal{A}_k$, we remind the reader of the structural details of the residual network $\mathcal{N}_{k-1}$. Recall that all nodes and edges of the paths $\mathcal{Y}_1, \mathcal{Y}_2, \ldots,$ $\mathcal{Y}_{k-1}$ are red. All other nodes and edges are black. As discussed in subsection \ref{sub:dag}, the node set $V(\mathcal{N}_{k-1})\setminus{\{s,t\}}$ consists of $n$ pairs of nodes $(1^-,1^+),(2^-,2^+),\ldots,(n^-,n^+)$, each connected with a short edge. To simplify matters, we refer to a pair of nodes $(x^-,x^+)$ as a \textit{pair} node $x$ in $\mathcal{N}_{k-1}$ or as a \textit{pair} point $x$ in $\mathcal{P}$, depending on the context.
For two pair nodes $x$ and $y$, if the residual network $\mathcal{N}_{k-1}$ has an edge $(y,x)$, then for the corresponding pair points $x$ and $y$ in $\mathcal{P}$ we say that $x$ \textit{dominates} $y$, denoted by $y \prec x$.
For two pair points $y$ and $x$ such that $y \prec x$, the residual network $\mathcal{N}_{k-1}$ has either a long black edge $(y^+,x^-)$ or
a long red edge $(x^-,y^+)$ (the latter if
edge $(y,x) \in \mathcal{Y}_j$ where $j \in [1,k-1]$),
with weight equal to zero.
For a black pair node $x$, the weight of the short edge $(x^-,x^+)$ is equal to $-w_x<0$. For a red pair node $x$ the
short edge $(x^-,x^+)$ has reversed direction from $x^+$ to $x^-$
and weight equal to $w_x>0$. The capacity of every edge regardless of colour (red or black) or type (long or short) is equal to $1$.
\paragraph{Initialisation of arrays $h$ and $\mbox{\it pred}$} \textit{ }
Consider the two arrays $h$ and $\mbox{\it pred}$, which are
indexed by the nodes $V(\mathcal{N}_{k-1})\setminus\{s\}$.
For a pair node $x \in V(\mathcal{N}_{k-1})\setminus\{s\}$, terms $h(x)$ and $h^-(x)$ denote the current shortest path weights from the source $s$ to points $x^+$ and $x^-$, respectively. Similarly, we denote by $\mbox{\it pred}(x)$ and $\mbox{\it pred}^- (x)$ the predecessors of nodes $x^+$ and $x^-$ in the current tree. We initialize arrays $h$ and $\mbox{\it pred}$
in the following way.
For each black pair node
$x\in V(\mathcal{N}_{k-1})$,
we set $\mbox{\it pred}^-(x) = s$, $h^-(x) = 0$,
$\mbox{\it pred}(x) = x^-$ and $h(x) = -w_x$.
For each red edge
$(x^-, y^+)$ in $\mathcal{N}_{k-1}$
we set $\mbox{\it pred}^-(x) = s$, $h^-(x) = 0$,
$\mbox{\it pred}(y) = x^-$ and $h(y) = 0$.
For node $t$, we set $\mbox{\it pred}(t) = s$ and
$h(t) = 0$.
Finally, so that the initial tree reaches
all nodes in $\mathcal{N}_{k-1}$,
for each red edge $(s,x^-)$ in $\mathcal{N}_{k-1}$,
we set $\mbox{\it pred}^-(x) = x^+$ and $h^-(x) = w_x$,
and for each red edge $(t,x^+)$ in $\mathcal{N}_{k-1}$,
we set $\mbox{\it pred}(x) = t$ and $h(x) = 0$.
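As an illustration, the initialisation rules above can be rendered directly in code. The following is a minimal sketch, not the thesis implementation; all function and argument names are hypothetical, and the nodes $x^-$ and $x^+$ of a pair node $x$ are modelled as the dictionary keys `(x, '-')` and `(x, '+')`.

```python
def initialise(black_nodes, red_long_edges, red_source_nodes, red_sink_nodes, w):
    """Illustrative sketch of the initialisation of arrays h and pred.

    black_nodes:      black pair nodes x
    red_long_edges:   pairs (x, y), one for each red long edge (x^-, y^+)
    red_source_nodes: pair nodes x with a red edge (s, x^-)
    red_sink_nodes:   pair nodes x with a red edge (t, x^+)
    w:                point weights, w[x] > 0
    """
    h, pred = {'s': 0}, {}
    for x in black_nodes:                       # black pair node x
        pred[(x, '-')], h[(x, '-')] = 's', 0
        pred[(x, '+')], h[(x, '+')] = (x, '-'), -w[x]
    for (x, y) in red_long_edges:               # red long edge (x^-, y^+)
        pred[(x, '-')], h[(x, '-')] = 's', 0
        pred[(y, '+')], h[(y, '+')] = (x, '-'), 0
    pred['t'], h['t'] = 's', 0                  # node t
    for x in red_source_nodes:                  # red edge (s, x^-)
        pred[(x, '-')], h[(x, '-')] = (x, '+'), w[x]
    for x in red_sink_nodes:                    # red edge (t, x^+)
        pred[(x, '+')], h[(x, '+')] = 't', 0
    return h, pred
```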
The initialization of arrays
$h$ and $\mbox{\it pred}$ described above is valid for the relaxation
technique since array $\mbox{\it pred}$ defines a tree in
$\mathcal{N}_{k-1}$ which is rooted at $s$
and for each node $x$ in $\mathcal{N}_{k-1}$
other than $s$,
$h(x)$ is the weight of the tree path from $s$ to $x$. An algorithm based on the relaxation technique
updates
arrays $h$ and $\mbox{\it pred}$ only by
the following classic $\mbox{\it relax}(y,x)$ operation \cite{Cormen2001introduction}: if $h(x) > h(y) + w(y,x)$, then $h(x) \leftarrow h(y) + w(y,x)$ and $\mbox{\it pred}(x) \leftarrow y$, where
$(y,x)$ is an edge in the input graph (or equivalently $y \prec x$),
and $w(y,x)$ is the weight of this edge. Notice that for an operation $\mbox{\it relax}(x,y)$, edge $(x,y)$ can be either short or long.
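A minimal sketch of this classic operation, with the arrays $h$ and $\mbox{\it pred}$ modelled as Python dictionaries (names are illustrative); note that the relaxation rule places no sign restriction on the edge weight, which matters here since short black edges have negative weight $-w_x$:

```python
import math

h, pred = {'s': 0}, {}   # tentative shortest-path weights and predecessors

def relax(y, x, w):
    """Classic relax(y, x) on an edge (y, x) of weight w: improve the
    tentative weight of x via y if the path through y is shorter."""
    if h.get(x, math.inf) > h.get(y, math.inf) + w:
        h[x] = h[y] + w
        pred[x] = y
```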
At the end of the computation, for each node $x$,
$h(x)$ (resp. $h^-(x)$) should be equal to the shortest-path weight
from $s$ to $x$ (resp. $x^-$), and array $\mbox{\it pred}$ should
represent a shortest path tree from the source $s$ to all
reachable nodes.
Since we want to compute a shortest path
from $s$ to $t$, we will only require
(and we will verify in the proofs) that
at the end of the computation array $\mbox{\it pred}$
includes a shortest path from $s$ to $t$ and that
values $h(x)$ and $h^-(x)$ are correct for each node $x$ on this path.
An algorithm based on operation $\mbox{\it relax}(x,y)$
computes a shortest $s$-$t$ path for a given input network,
if there is a shortest $s$-$t$ path
$(s=x_0,x_1,x_2,\ldots,x_q=t)$ such that
the sequence of $\mbox{\it relax}$ operations
executed by the algorithm
includes as a sub-sequence $\mbox{\it relax}(x_{i-1},x_i)$,
$i = 1,2,\ldots,q$.
Only the relative order of
such operations $\mbox{\it relax}(x_{i-1},x_i)$ is important,
but they do not have to be
consecutive. They can be interleaved
in an arbitrary way with any number of other
$\mbox{\it relax}$ operations.
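The following toy example illustrates this point (assuming the classic relax rule, with illustrative names): relaxing the edges of a path in their path order, arbitrarily interleaved with other relax operations, still yields the path's weight at its endpoint.

```python
import math

def relax(h, pred, y, x, w):
    if h.get(x, math.inf) > h.get(y, math.inf) + w:
        h[x] = h[y] + w
        pred[x] = y

# Path s -> a -> b -> t with edge weights 1, -2, 3 (total weight 2).
h, pred = {'s': 0}, {}
relax(h, pred, 's', 'a', 1)    # follows the path: edge (s, a)
relax(h, pred, 's', 'b', 10)   # unrelated relax, interleaved
relax(h, pred, 'a', 'b', -2)   # follows the path: edge (a, b)
relax(h, pred, 'x', 'a', 0)    # another unrelated relax (x unreached)
relax(h, pred, 'b', 't', 3)    # follows the path: edge (b, t)
assert h['t'] == 2             # h(t) equals the weight of the path
```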
\paragraph{Two-Dimensional Orthogonal Search Problem} \textit{ }
In the Two-Dimensional Orthogonal Search Problem we are given a set $S$ of $n$ points in the two-dimensional plane, where each point $i$, for $i=1,2,\ldots,n$, is identified by two coordinates $(x_i,y_i)$ and a weight value $u_i$. Given a rectangle query $R=[x_1,x_2]\times[y_1,y_2]$, the orthogonal search problem asks for the point $j$ within $R$ that has the minimum weight value $u_j$. Retrieving the point with the minimum $u$ value within $R$ is treated as a query, which has to be answered relatively fast.
We want to store all points in $S$ in a data structure such that, given a query (\textit{i.e.} a rectangle $R$), we can perform two basic operations: (i) report the point with the minimum weight value within rectangle $R$, and (ii) update the weight value of a given point in $S$. In such data structures the supported operations are usually either only queries (the static version of the problem) or queries together with insertions and deletions of points (the dynamic version of the problem). When we update the weight value of a point $i$ from $u_i$ to $u^{\prime}_i$ in $S$, we assume that we delete point $i$ and insert a new point $i^{\prime}$ with weight $u^{\prime}_i$.
A variety of dynamic data structures, such as range trees \cite{Lueker197828,x,Lee:1980:QTF:320613.320618}, layered range trees \cite{Gabow:1984:SRT:800057.808675,Willard:1985:NDS:3674.3690} and weight-balanced trees \cite{Nievergelt:1972:BST:800152.804906}, have been designed for the dynamic and static versions of the Orthogonal Search Problem. In \cite{Lueker197828} it was shown that the asymptotic upper bound on the time to respond to one query (\textit{i.e.} report the minimum-weight point within a rectangle $R$) is $O(\log^{3} n)$ in the case of a two-dimensional space. Furthermore, it was shown that the upper bound on the running time of a sequence of $n$ operations, which can be queries, insertions and deletions, is $O(n \log^2 n)$.
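For concreteness, a naive $O(n)$-per-query version of this interface can be sketched as follows; it is a brute-force stand-in for the range-tree structures cited above, which support the same operations in polylogarithmic time (class and method names are illustrative):

```python
class OrthogonalSearch:
    """Naive version of the dynamic orthogonal search structure: stores
    weighted planar points, reports the minimum-weight point inside an
    axis-aligned rectangle, and supports weight updates."""

    def __init__(self):
        self.points = {}  # point id -> (x, y, weight)

    def insert(self, pid, x, y, u):
        self.points[pid] = (x, y, u)

    def delete(self, pid):
        del self.points[pid]

    def update(self, pid, u_new):
        # Modelled, as in the text, as a deletion plus a re-insertion.
        x, y, _ = self.points.pop(pid)
        self.points[pid] = (x, y, u_new)

    def query(self, x1, x2, y1, y2):
        """Return the id of the minimum-weight point in [x1,x2] x [y1,y2],
        or None if the rectangle contains no point."""
        best, best_u = None, float('inf')
        for pid, (x, y, u) in self.points.items():
            if x1 <= x <= x2 and y1 <= y <= y2 and u < best_u:
                best, best_u = pid, u
        return best
```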
\paragraph{Operation $\mbox{\it Relax}$} \textit{ }
\label{sub:Relax}
When algorithm ${\mathcal{A}_{k}}$ is applied to a sub-network $\mathcal{N}[\beta_1,\beta_2]$ of the residual network $\mathcal{N}_{k-1}$,
operations $\mbox{\it relax}$ are grouped together
for the edges incoming to the same pair node.
For a pair node $x$,
we define operation $\mbox{\it Relax}(x)$ as
a sequence of all operations
$\{ \mbox{\it relax}(y^+,x^-):
(y^+,x^-) \in \mathcal{N}[\beta_1,\beta_2]\}$,
in an arbitrary order (the order is not relevant),
followed by the $\mbox{\it relax}$ operation applied to the
residual edge outgoing from $x^-$ (if any). That edge
is either
$(x^-,x^+)$, for a black pair node $x$,
or $(x^-,\pi_x^+)$,
for a red pair node $x$ on a path $\mathcal{Y}_j, j\in [1,k-1]$ with predecessor $\pi_x$.
For the worst-case running-time efficiency,
we implement operations $\{ \mbox{\it relax}(y^+,x^-):
(y^+,x^-) \in \mathcal{N}[\beta_1,\beta_2]\}$
not by performing all of them explicitly
(this would take $O(n)$ time)
but by finding the pair node $y_{min}$ such
that $h(y_{min}) =\min\{ h(y):
(y^+,x^-) \in \mathcal{N}[\beta_1,\beta_2]\}$ and
performing only operation $\mbox{\it relax}(y^+_{min},x^-)$.
We keep all pair nodes in
$\mathcal{N}[\beta_1,\beta_2]$
in a data structure
for answering rectangle queries~\cite{Lueker197828}.
The weight-value of a pair node $y$
in this data structure
is equal to the current shortest path weight
$h(y)$.
For a sub-network $\mathcal{N}[\beta_1,\beta_2]$
and a pair node $x$ in this sub-network, we denote by $D_x[\beta_1,\beta_2]$
the set of all pair nodes $y$ in the sub-network such that there is an edge $(y^+,x^-)$, or equivalently $y \prec x$.
Notice that for a pair node $y \in D_x[\beta_1,\beta_2]$ the corresponding pair point $y$ in $\mathcal{P}$ must be within the rectangle $R_x=[0,\alpha_x]\times[\beta_1,\beta_x]$ since $y \prec x$ (\textit{i.e.} $\alpha_y \leq \alpha_x$ and $\beta_y \leq \beta_x$).
Thus, for a pair node $x$ finding pair node $y_{min}$ in $D_x[\beta_1,\beta_2]$ amounts to finding the minimum value pair point in rectangle~$R_x$.
For a black pair node $x$, finding pair node $y_{min}$ consists of answering the rectangle query for $R_x$, since every edge $(y^+,x^-) \in \mathcal{N}_{k-1}[\beta_1,\beta_2]$ is a residual edge. For a red pair node $x$ on some path $\mathcal{Y}_j$, where $j \in [1,k-1]$, we first remove from the data structure the predecessor pair node $\pi_x$ of $x$ on $\mathcal{Y}_j$
(since $(\pi_x^+,x^-)$ is not a residual edge), then
find $y_{min}$ by answering the rectangle query for $R_x$,
and finally re-insert $\pi_x$ back to the data
structure.
Each single operation on the data structure from~\cite{Lueker197828} (rectangle query,
update of the value of a given element, deleting
a given element, or inserting a new element) takes
$O(\log^3 n)$ time, so the running time of
operation $\mbox{\it Relax}$ is $O(\log^3 n)$.
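A sketch of operation $\mbox{\it Relax}(x)$ under these conventions, with illustrative names; $\mbox{\it pred}$ updates are omitted, and the rectangle query is answered by a naive scan standing in for the $O(\log^3 n)$ data structure:

```python
import math

def Relax(x, points, h, h_minus, w, beta1, red_pred=None):
    """Sketch of operation Relax(x) for a pair node x.

    points     -- pair node -> (alpha, beta) coordinates
    h, h_minus -- current shortest-path weights to y^+ and y^-, respectively
    w          -- point weights (used for a black pair node)
    red_pred   -- predecessor pi_x on its path Y_j if x is red, else None
    """
    ax, bx = points[x]
    # Find y_min minimising h(y) over the rectangle [0, alpha_x] x [beta1, beta_x],
    # skipping pi_x for a red node (its long edge is not residual).  A range
    # tree answers this query in O(log^3 n); here a naive scan stands in.
    y_min, best = None, math.inf
    for y, (ay, by) in points.items():
        if y != x and y != red_pred and ay <= ax and beta1 <= by <= bx:
            if h.get(y, math.inf) < best:
                y_min, best = y, h[y]
    # relax(y_min^+, x^-): every long edge has weight 0.
    if y_min is not None and h_minus.get(x, math.inf) > best:
        h_minus[x] = best
    # relax on the residual edge outgoing from x^- (if any).
    if red_pred is None:
        # Black pair node: short edge (x^-, x^+) of weight -w_x.
        if h.get(x, math.inf) > h_minus.get(x, math.inf) - w[x]:
            h[x] = h_minus[x] - w[x]
    else:
        # Red pair node: long edge (x^-, pi_x^+) of weight 0.
        if h.get(red_pred, math.inf) > h_minus.get(x, math.inf):
            h[red_pred] = h_minus[x]
```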
\section{Shortest Path Algorithm $\mathcal{A}_k$ for $k =2$}
\label{k1sect}
In this section we consider the special case $k=2$ as an introduction to our recursive approach. When algorithm $\mathcal{A}_2$ is applied to a sub-network $(\mathcal{N}; \mathcal{Y}_1)[\beta_1,\beta_2]$, or $\mathcal{N}[\beta_1,\beta_2]$ for short, the computation consists of three phases. Consider the two sub-networks $\mathcal{N}[\beta_1,(\beta_1+\beta_2)/2]$ and $\mathcal{N}[(\beta_1+\beta_2)/2,\beta_2]$, which we denote by $\mathcal{N}_1$ and $\mathcal{N}_2$, respectively. The first phase is the recursive call of algorithm $\mathcal{A}_2$ on $\mathcal{N}_1$. The second phase calls the coordination algorithm $\mathcal{C}_2$ on sub-network $\mathcal{N}[\beta_1,\beta_2]$. The third phase is the recursive call of algorithm $\mathcal{A}_2$ on $\mathcal{N}_2$.
The description of algorithm $\mathcal{A}_2$ is shown in pseudo-code in Algorithm \ref{PseudoA2main}.
\begin{algorithm}[h]
\caption{Algorithm $\mathcal{A}_2$ on input $(\mathcal{N}; \mathcal{Y}_1)[\beta_1,\beta_2] \equiv \mathcal{N}[\beta_1,\beta_2]$ }
\SetAlgoLined
\begin{algorithmic}
\State $\mathcal{N}_1 \leftarrow \mathcal{N}[\beta_1,(\beta_1+\beta_2)/2];$ $\mathcal{N}_2 \leftarrow \mathcal{N}[(\beta_1+\beta_2)/2,\beta_2];$
\State $\mathcal{A}_2 (\mathcal{N}_1);$
\State $\mathcal{C}_{2}(\mathcal{N}[\beta_1,\beta_2]);$
\State $\mathcal{A}_2 (\mathcal{N}_2);$
\end{algorithmic}
\label{PseudoA2main}
\end{algorithm}
When algorithm $\mathcal{C}_2$ is applied to a sub-network $\mathcal{N}[\beta_1,\beta_2]$ the computation consists of three steps. The first and third step call algorithm $\mathcal{A}_1$ on the sub-network $\mathcal{N}[\beta_1,\beta_2] \setminus \mathcal{Y}_1$ which denotes the sub-network without the red edges of path $\mathcal{Y}_1$. The second step calls algorithm $\Delta(\mathcal{Y}_1)$ on the sub-network $\mathcal{N}[\beta_1,\beta_2]$.
The computational steps of algorithm $\mathcal{C}_2$ are described in pseudo-code in Algorithm \ref{PseudoC2main}.
For an input sub-network $\mathcal{N}[\beta_1,\beta_2]$, algorithm $\mathcal{A}_1$ consists of two steps. The first step computes a topological order of all points in the sub-network using the $O(n \log n)$-time algorithm of Asahiro~{\it et al.}~\cite{DBLP:journals/dam/AsahiroHMOSY06}. The second step considers the points of the sub-network in topological order, that is, for two points $v$ and $v^{\prime}$ such that $v \prec v^{\prime}$, point $v$ is considered first; when a point $v$ is considered, the algorithm performs operation $\mbox{\it Relax}(v)$ as described in Subsection \ref{sub:Relax}.
For a sub-network $\mathcal{N}[\beta_1,\beta_2]$, algorithm $\Delta(\mathcal{Y}_1)$ traverses the red edges of path $\mathcal{Y}_1$ in the sub-network (if any) and performs operation $\mbox{\it relax}$ on the red edges (long and short). Specifically, let $y_i$, for $i=1,2,\ldots,m$, be the $i^{th}$ red point on path $\mathcal{Y}_1$ in the sub-network, such that $y_1 \prec y_2 \prec \ldots \prec y_m$. Algorithm $\Delta(\mathcal{Y}_1)$ performs operation $\mbox{\it relax}(y^+_j,y^-_j)$ and operation $\mbox{\it relax}(y^-_j,y^+_{j-1})$ for $j=m,m-1,\ldots,2$. For $j=1$ only operation $\mbox{\it relax}(y^+_j,y^-_j)$ is performed, since edge $(y^-_j,y^+_{j-1})$ does not exist.
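Algorithm $\Delta(\mathcal{Y}_1)$ can be sketched as follows (illustrative names; the $h$ and $h^-$ values are kept in separate dictionaries and $\mbox{\it pred}$ updates are omitted). Recall that a short red edge $(y^+_j,y^-_j)$ has weight $w_{y_j}$ and a long red edge $(y^-_j,y^+_{j-1})$ has weight $0$.

```python
import math

def delta_Y1(red_points, h, h_minus, w):
    """Sketch of Delta(Y_1): relax the red edges of Y_1 from last to first.
    red_points = [y_1, ..., y_m] with y_1 < y_2 < ... < y_m (0-indexed)."""
    m = len(red_points)
    for j in range(m - 1, -1, -1):          # path indices m, m-1, ..., 1
        yj = red_points[j]
        # relax(y_j^+, y_j^-): short red edge of weight w_{y_j}
        if h_minus.get(yj, math.inf) > h.get(yj, math.inf) + w[yj]:
            h_minus[yj] = h[yj] + w[yj]
        if j > 0:
            # relax(y_j^-, y_{j-1}^+): long red edge of weight 0
            yprev = red_points[j - 1]
            if h.get(yprev, math.inf) > h_minus.get(yj, math.inf):
                h[yprev] = h_minus[yj]
```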
\begin{algorithm}[h]
\caption{Algorithm $\mathcal{C}_2$ on input $\mathcal{N}[\beta_1,\beta_2]$ }
\SetAlgoLined
\begin{algorithmic}
\State $\mathcal{A}_{1}(\mathcal{N}[\beta_1,\beta_2]\setminus \mathcal{Y}_1);$
\State $\Delta(\mathcal{Y}_1);$
\State $\mathcal{A}_{1}(\mathcal{N}[\beta_1,\beta_2]\setminus \mathcal{Y}_1);$
\end{algorithmic}
\label{PseudoC2main}
\end{algorithm}
For a sub-network $\mathcal{N}[\beta_1,\beta_2]$ we say that a path $Q$ is non-chromatic (red-chromatic) if it traverses only black (red) edges.
Lemmas \ref{lemma:onlyblack} and \ref{lemma:onlyred} describe the specification of algorithms $\mathcal{A}_1$ and $\Delta(\mathcal{Y}_1)$, respectively.
\begin{lemma}
\label{lemma:onlyblack}
When algorithm $\mathcal{A}_1$ is applied to a sub-network $\mathcal{N}[\beta_1,\beta_2]$ the computation follows every non-chromatic path in the sub-network and the running time is $O(n\log^3 n)$ where $n$ is the size of the sub-network.
\end{lemma}
\begin{proof}
Let $Q$ be a path in the sub-network $\mathcal{N}[\beta_1,\beta_2]$ such that $Q$ traverses only black edges. Consider the ordering of the edges $(y_1,y_2),(y_2,y_3),\ldots,(y_{m-1},y_m)$ in $Q$.
Recall that every point $v$ in sub-network $\mathcal{N}[\beta_1,\beta_2]$ is a pair of points $(v^-,v^+)$ which are connected with a short edge of capacity $1$. We show that the computation of algorithm $\mathcal{A}_1$ includes a sequence of $\mbox{\it relax}$ operations on edges $(y_1^-,y_1^+),(y_{1}^+,y^-_2),\ldots,(y_{m-1}^+,y^-_m),(y_m^-,y_m^+)$ in this relative order.
For a black edge $(x,y)$ points $x$ and $y$ can be either black or red. Therefore, if $Q$ includes a red point $v$ (\textit{i.e.} a point on path $\mathcal{Y}_1$) then $v$ must be either the starting or ending point of $Q$. In detail, if the starting point $y_1$ of $Q$ is red, then the first edge of $Q$ is edge $(y_1^+,y_2^-)$ and if the ending point $y_m$ of $Q$ is red, then the last edge of $Q$ is edge $(y_{m-1}^+,y^-_m)$. This is because the short red edges $(y^-_1,y^+_1)$ and $(y^-_m,y^+_m)$ are not residual edges.
Because $Q$ traverses only black edges, for edge $(y_{i},y_{i+1}) \in Q$ where $i=1,2,\ldots,m-1$ it holds that $y_{i} \prec y_{i+1}$. Algorithm $\mathcal{A}_1$ considers all points in the sub-network in topological order and when a point $x$ is considered it performs operation $\mbox{\it Relax}(x)$. Therefore, for any $i \in [1,m-1]$ and an edge $(y_i,y_{i+1}) \in Q$ operation $\mbox{\it Relax}(y_i)$ precedes operation $\mbox{\it Relax}(y_{i+1})$.
For a point $x$, operation $\mbox{\it Relax}(x)\equiv
\{ \mbox{\it relax}(y^+,x^-): y \in D_x[\beta_1,\beta_2]\}$ is equivalent to the sequence of operations $\mbox{\it relax}(y^+,x^-)$ for every point $y$ in the sub-network such that $y \prec x$ (as defined in Subsection \ref{sub:Relax}). Thus, operation $\mbox{\it relax}(y_{i-1}^+,y_{i}^-)$ is implicitly included in operation $\mbox{\it Relax}(y_i)$ for $i=2,3,\ldots,m$. Further, for $i=1,2,\ldots,m$, if point $y_i$ is black, then operation $\mbox{\it Relax}(y_i)$ also includes operation $\mbox{\it relax}(y_i^-,y^+_i)$.
For the special case $i=1$ (resp.\ $i=m$), if point $y_i$ is red, then the first (resp.\ last) $\mbox{\it relax}$ operation in the sequence is on edge $(y_{1}^+,y^-_2)$ (resp.\ $(y_{m-1}^+,y^-_m)$).
We conclude that the computation of algorithm $\mathcal{A}_1$ includes all operations $\mbox{\it relax}(y_{i-1}^+,y_i^-)$ for $i=2,3,\ldots,m$ and all operations $\mbox{\it relax}(y_i^-,y_i^+)$ for $i=1,2,\ldots,m$, in this relative order.
One operation $\mbox{\it Relax}$ requires $O(\log^3 n)$ time, where $n$ is the size of the sub-network $\mathcal{N}[\beta_1,\beta_2]$, and therefore the total running time of algorithm $\mathcal{A}_1$ is $O(n\log^3 n)$.
\end{proof}
\begin{lemma}
\label{lemma:onlyred}
When algorithm $\Delta(\mathcal{Y}_1)$ is applied on a sub-network $\mathcal{N}[\beta_1,\beta_2]$ the computation follows every red-chromatic path in the sub-network and the running time is $O(n)$ where $n$ is the size of the sub-network.
\end{lemma}
\begin{proof}
Let $m$ be the number of red points on path $\mathcal{Y}_1$ and denote by $y_i$, for $i=1,2,\ldots,m$, the $i^{th}$ red point, such that $y_1 \prec y_{2}\prec \ldots \prec y_m$. Let $Q$ be a path in the sub-network such that $Q$ traverses only red edges. Denote by $(y_i,y_{i-1}),(y_{i-1},y_{i-2}),\ldots,(y_{j+1},y_j)$ the ordering of the red edges in $Q$, where $i,j \in [1,m]$ and $j \leq i$ (red edges are traversed in decreasing order with respect to $\prec$). Algorithm $\Delta(\mathcal{Y}_1)$ performs all relax operations $(y^+_m,y^-_m),(y^-_m,y^+_{m-1}),\ldots,(y^-_2,y^+_1),(y^+_1,y^-_1)$.
The proof simply follows by induction for $k=i,i-1,\ldots,j$. Path $\mathcal{Y}_1$ can have at most $n$ points and therefore it can have at most $(n-1)$ edges. Operation $\mbox{\it relax}$ takes constant time and therefore the total time needed is $O(n)$.
\end{proof}
Recall that for a sub-network $\mathcal{N}[\beta_1,\beta_2]$ and a given path $Q = (x_0,x_1,\ldots,x_m)$ in this sub-network (not necessarily a shortest
path or an $s$-$t$ path), we say that the computation follows path $Q$ if the computation includes all relax operations $\mbox{\it relax}(x_i,x_{i+1})$, $i = 0,1,\ldots,m-1$, in this order.
Each operation $\mbox{\it relax}(x_i,x_{i+1})$ may be implicitly included within operation $\mbox{\it Relax}(x_{i+1})$. The following theorem describes the specification of algorithm $\mathcal{A}_2$.
\begin{theorem}
\label{Theorem:specialcasek=2}
When algorithm $\mathcal{A}_2$ is applied to a sub-network $\mathcal{N}[\beta_1,\beta_2]$ the computation follows every path in the sub-network and the running time is $O(n \log^4 n)$ where $n$ is the size of the sub-network.
\end{theorem}
Recall that if the computation follows path $Q=(x_0,x_1,\ldots,x_m)$, then
at the end of this computation, the computed shortest path weight $h(x_m)$ is at most the weight of $Q$. Therefore, Theorem \ref{Theorem:specialcasek=2} implies that at the termination of the computation of algorithm $\mathcal{A}_2$ on a sub-network $\mathcal{N}[\beta_1,\beta_2]$, for every point $v$ in the sub-network we have $h(v)=h^*(v)$ where $h^*(v)$ is the weight of a shortest path to $v$.
\subsection{Proof of Theorem \ref{Theorem:specialcasek=2}}
For a sub-network $\mathcal{N}[\beta_1,\beta_2]$ let $Q$ be a path in the sub-network. Denote by $\mathcal{N}_1$ and $\mathcal{N}_2$ the sub-networks $\mathcal{N}[\beta_1,(\beta_2+\beta_1)/2]$ and $\mathcal{N}[(\beta_2+\beta_1)/2,\beta_2]$, respectively. Without loss of generality, we assume that $Q$ has points both in $\mathcal{N}_1$ and $\mathcal{N}_2$.
Recall that according to Definition \ref{Def:x}, if path $Q$ starts in $\mathcal{N}_1$ then we define $x \in \mathcal{N}_1$ to be the last point in $Q$ such that all points before $x$ are in $\mathcal{N}_1$. If path $Q$ starts in $\mathcal{N}_2$ we define $x$ to be the starting point of $ Q$.
Similarly, according to Definition \ref{Def:x'}, if path $Q$ ends in $\mathcal{N}_1$ then we define $x' \in \mathcal{N}_1$ to be the ending point of $ Q$. If path $Q$ ends in $\mathcal{N}_2$ we define $x'$ to be the first point of $Q$ in $\mathcal{N}_2$ such that all points after $x'$ are in $\mathcal{N}_2$.
We can decompose $Q$ into the following parts $Q_x,q_{xx'},Q_{x'}$ where $Q_x$ is the sub-path of $Q$ from its starting point to point $x$, $q_{xx'}$ is the sub-path of $Q$ from $x$ to $x'$ and $Q_{x'}$ is the sub-path of $Q$ from point $x'$ to the ending point of $Q$. Definition \ref{Def:x} implies that if sub-path $Q_x$ is not empty then all points in $Q_x$ are in $\mathcal{N}_1$. Similarly, Definition \ref{Def:x'} implies that if $Q_{x'}$ is not empty then all points in $Q_{x'}$ are in $\mathcal{N}_2$. The proof of Theorem \ref{Theorem:specialcasek=2} is outlined below.
When algorithm $\mathcal{A}_2$ is applied to a sub-network $\mathcal{N}[\beta_1,\beta_2]$ the first phase of the computation is the recursive call of algorithm $\mathcal{A}_2$ on $\mathcal{N}_1$.
The second phase of the computation calls algorithm $\mathcal{C}_2$ on sub-network $\mathcal{N}[\beta_1,\beta_2]$. Finally, the third phase of the computation is the recursive call of algorithm $\mathcal{A}_2$ on $\mathcal{N}_2$.
The following theorem describes the specification of algorithm $\mathcal{C}_2$.
\begin{theorem}
\label{lemma:coordinationfork=2}
When algorithm $\mathcal{C}_2$ is applied to a sub-network $\mathcal{N}[\beta_1,\beta_2]$ the computation follows the sub-path $q_{xx'}$ of every path $Q$ in the sub-network and the running time is $O(n \log^3 n)$ where $n$ is the size of the sub-network.
\end{theorem}
\begin{figure}
\centering
\includegraphics[scale=0.32]{Chapter3/C3Images/combk1.png}
\caption{The structure of the sub-path $q_{xx'}$ of $Q$ from $x$ to $x'$ if $q_{xx'}$ crosses from $\mathcal{N}_2$ to $\mathcal{N}_1$.}
\label{fig:combk1}
\end{figure}
It remains to show the proof of Theorem \ref{lemma:coordinationfork=2} and to conclude the proof of Theorem \ref{Theorem:specialcasek=2}. The following definitions facilitate the analysis of the combinatorial structure of a path $Q$ in a sub-network $\mathcal{N}[\beta_1,\beta_2]$, followed by algorithms $\mathcal{C}_2$ and $\mathcal{A}_2$.
\begin{definition}
\label{def:run}
For a path $Q$,
a \textit{run} $r$ is a maximal sub-path of $Q$ such that all edges in $r$ are of the same colour.
\end{definition}
A run is \textit{non-chromatic} if it consists of black edges. A run is \textit{chromatic} if all of its edges are red. Notice that an edge $(u,u^{\prime})$ can be traversed at most once by a (simple) path $Q$ and therefore we have the following corollary.
\begin{corollary}
\label{cor:runsdisjoint}
Two chromatic runs $r$ and $r^{\prime}$ in $Q$
of the same colour are edge-disjoint.
\end{corollary}
\begin{proof}[Proof of Theorem \ref{lemma:coordinationfork=2}]
When algorithm $\mathcal{C}_2$ is applied to sub-network $\mathcal{N}[\beta_1,\beta_2]$ the computation consists of the following three steps: $\mathcal{A}_1(\mathcal{N}[\beta_1,\beta_2]\setminus\mathcal{Y}_1),\Delta(\mathcal{Y}_1),\mathcal{A}_1(\mathcal{N}[\beta_1,\beta_2]\setminus\mathcal{Y}_1)$.
Let $Q$ be a path in the sub-network $\mathcal{N}[\beta_1,\beta_2]$ and let $q_{xx'}$ be the sub-path of $Q$ from $x$ to $x'$.
We show that algorithm $\mathcal{C}_2$ follows path $q_{xx'}$.
For $k=2$ there is a unique red edge $(u,u') \in \mathcal{Y}_1$ with $u \in \mathcal{N}_2$ and $u' \in \mathcal{N}_1$, that is, an edge crossing from $\mathcal{N}_2$ to $\mathcal{N}_1$. We say that the sub-path $q_{xx'}$ of $Q$ \textit{crosses} from $\mathcal{N}_2$ to $\mathcal{N}_1$ if it has a chromatic run that includes the red edge $(u,u')$. We consider two cases for the sub-path $q_{xx'}$.
The first case is that $q_{xx'}$ does not cross from $\mathcal{N}_2$ to $\mathcal{N}_1$ and the second case is that $q_{xx'}$ crosses from $\mathcal{N}_2$ to $\mathcal{N}_1$ exactly once. In the former case, following the definition of points $x$ and $x'$ the sub-path $q_{xx'}$ must consist of the single black edge $(x,x')$. Thus, according to Lemma \ref{lemma:onlyblack} the computation of the first step of $\mathcal{C}_2$, that is, the computation of algorithm $\mathcal{A}_1$, follows the sub-path $q_{xx'}$ since it traverses only one black edge.
In the latter case, the sub-path $q_{xx'}$ must consist of the following ordering of runs $(r_b,r,r^{\prime}_b)$, where $r_b$ and $r^{\prime}_b$ are non-chromatic runs and $r$ is a red chromatic run which includes the red edge $(u,u')$. Thus, we can decompose $q_{xx'}$ into three parts $(x,\pi^*_1), (\pi^*_1,\tau^*_1)$ and $(\tau^*_1,x')$, where $\pi^*_1$ and $\tau^*_1$ are the first and last points of the red chromatic run $r$. An example of this decomposition is shown in Figure \ref{fig:combk1}.
According to Lemma \ref{lemma:onlyblack}, the computation of the first step of $\mathcal{C}_2$, that is, the computation of algorithm $\mathcal{A}_1$, follows the path from $x$ to $\pi^*_1$ since it traverses only black edges. According to Lemma \ref{lemma:onlyred}, the computation of the second step of $\mathcal{C}_2$, that is, algorithm $\Delta(\mathcal{Y}_1)$ follows the path from $\pi^*_1$ to $\tau^*_1$ since it traverses only red edges. Finally, according to Lemma \ref{lemma:onlyblack} the computation of the third step of $\mathcal{C}_2$, that is, algorithm $\mathcal{A}_1$ follows the path from $\tau^*_1$ to $x'$ since it traverses only black edges.
The running time of algorithm $\mathcal{C}_2$ when applied to a sub-network $\mathcal{N}[\beta_1,\beta_2]$ of size $n$ is given by the following relationship $T_{\mathcal{C}_2}(n)=2T_{\mathcal{A}_1}(n)+ T_{\Delta}(n)$. According to Lemma \ref{lemma:onlyblack} and Lemma \ref{lemma:onlyred} we have that $T_{\mathcal{A}_1}(n)=O(n\log^3 n)$ and $T_{\Delta}(n)=O(n)$ and therefore $T_{\mathcal{C}_2}(n)=O(n \log^3 n)$.
\end{proof}
\begin{proof}[Proof of Theorem \ref{Theorem:specialcasek=2}]
When algorithm $\mathcal{A}_2$ is applied to sub-network $\mathcal{N}[\beta_1,\beta_2]$
the computation consists of the following three phases: $\mathcal{A}_2(\mathcal{N}_1),\mathcal{C}_2(\mathcal{N}[\beta_1,\beta_2]),\mathcal{A}_2(\mathcal{N}_2)$ which denote the recursive call of algorithm $\mathcal{A}_2$ on $\mathcal{N}_1$, the call of algorithm $\mathcal{C}_2$ on $\mathcal{N}[\beta_1,\beta_2]$ and the recursive call of algorithm $\mathcal{A}_2$ on $\mathcal{N}_2$, respectively.
Let $Q$ be a path in this sub-network and consider the decomposition of $Q$ into $Q_x,q_{xx'},Q_{x'}$. Recall that if sub-path $Q_x$ (resp. $Q_{x'}$) is not empty then it must include points only in $\mathcal{N}_1$ (resp. $\mathcal{N}_2$). By induction, Theorem \ref{Theorem:specialcasek=2} implies that when algorithm $\mathcal{A}_2$ is applied to $\mathcal{N}_1$ then the computation follows the sub-path $Q_x$ of $Q$ since it has points only in $\mathcal{N}_1$. According to Theorem \ref{lemma:coordinationfork=2}, when algorithm $\mathcal{C}_2$ is applied to sub-network $\mathcal{N}[\beta_1,\beta_2]$ the computation follows the sub-path $q_{xx'}$ of $Q$. Finally, by induction, Theorem \ref{Theorem:specialcasek=2} implies that when algorithm $\mathcal{A}_2$ is applied to sub-network $\mathcal{N}_2$ the computation follows the sub-path $Q_{x'}$ of $Q$ since it has points only in $\mathcal{N}_2$.
The running time of algorithm $\mathcal{A}_2$ when applied to a sub-network of size $n$ is given by the following recurrence relationship: $T_{\mathcal{A}_2}(n)=T_{\mathcal{A}_2}(\frac{n}{2})+T_{\mathcal{C}_2}(n)+T_{\mathcal{A}_2}(\frac{n}{2})$ where $T_{\mathcal{C}_2}(n)$ is the running time of coordination algorithm $\mathcal{C}_2$. According to Theorem \ref{lemma:coordinationfork=2}, we have that $T_{\mathcal{C}_2}(n)=O(n \log^3 n)$ and therefore by solving the recurrence relationship we obtain that $T_{\mathcal{A}_2}(n)=O(n\log^4 n)$.
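In more detail, unfolding the recurrence over the recursion tree: at depth $i$ there are $2^i$ sub-problems of size $n/2^i$, so
\[
T_{\mathcal{A}_2}(n) \;=\; \sum_{i=0}^{\log n} 2^i \cdot O\!\left(\frac{n}{2^i}\log^3 \frac{n}{2^i}\right)
\;\leq\; \sum_{i=0}^{\log n} O\!\left(n \log^3 n\right)
\;=\; O(n\log^4 n).
\]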
\end{proof}
\section{Shortest Path Algorithm $\mathcal{A}_k$ for $k \geq 3$}
\label{knopi}
For $k \geq 3$, when algorithm $\mathcal{A}_k$ is applied to a sub-network
$(\mathcal{N}; \mathcal{Y}_1, \mathcal{Y}_2,
\ldots,\mathcal{Y}_{k-1})[\beta_1,\beta_2]$, or $\mathcal{N}[\beta_1,\beta_2]$ for short, the computation consists of three phases. The first and third phase are
the recursive calls of $\mathcal{A}_k$ on $\mathcal{N}_1 = \mathcal{N}[\beta_1,(\beta_1+\beta_2)/2]$ and $\mathcal{N}_2 = \mathcal{N}[(\beta_1+\beta_2)/2,\beta_2]$, respectively. The second, coordination
phase calls algorithm $\mathcal{C}_k$ on the sub-network $\mathcal{N}[\beta_1,\beta_2]$.
Algorithm
$\mathcal{C}_k$ repeats the following three steps $(k-1)$ times. The first and third step consist of the following $k-1$ calls of algorithm $\mathcal{A}_{k-1}$: $\mathcal{A}_{k-1}(\mathcal{N}[\beta_1,\beta_2] \setminus \mathcal{Y}_1),\ldots,$
$\mathcal{A}_{k-1}(\mathcal{N}[\beta_1,\beta_2] \setminus \mathcal{Y}_{k-1})$. Term $\mathcal{N}[\beta_1,\beta_2]\setminus
\mathcal{Y}_i$ denotes the sub-network $\mathcal{N}[\beta_1,\beta_2]$ without the red edges of path $\mathcal{Y}_{i}$. We group this sequence of calls to algorithm $\mathcal{A}_{k-1}$, in this order, to facilitate the analysis, and for simplicity we denote this sequence by $\widehat{\mathcal{A}}_{k-1}$.
The second step of algorithm $\mathcal{C}_k$ calls algorithm $\mathcal{Z}_k$ which is applied to
the sub-network $\mathcal{N}[\beta_1,\beta_2]$.
Algorithm $\mathcal{Z}_k$ has a recursive structure
similar to algorithm $\mathcal{A}_k$
except that it works in the opposite direction.
In more detail, the computation of algorithm $\mathcal{Z}_k$
consists of three phases.
The first and third phase are recursive calls to
$\mathcal{Z}_k$ on $\mathcal{N}_2$ and $\mathcal{N}_1$, respectively
(so the first recursive call is to the top half of the
sub-network).
The second, coordination phase consists of two steps, as explained below.
The first step calls algorithm $\Delta(\mathcal{Y}_1,\mathcal{Y}_2,\ldots,\mathcal{Y}_{k-1})$, which is the natural generalisation of algorithm $\Delta(\mathcal{Y}_1)$. That is, algorithm $\Delta(\mathcal{Y}_1,\mathcal{Y}_2,\ldots,\mathcal{Y}_{k-1})$ traverses the red edges (short and long) of each path $\mathcal{Y}_1,\mathcal{Y}_2,\ldots,\mathcal{Y}_{k-1}$ in the sub-network, starting from the last edge and moving towards the first edge. When a red edge $(u,u')$ is considered, it performs operation $\mbox{\it relax}(u,u')$. The second step repeats $(k-2)$ times
two calls to the sequence $\widehat{\mathcal{A}}_{k-1}$. That is, one iteration consists of $2(k-1)$ calls to algorithm $\mathcal{A}_{k-1}$.
Algorithms $\mathcal{A}_k$, $\mathcal{C}_k$ and
$\mathcal{Z}_k$ are described in pseudocode as
Algorithms \ref{PseudoAmainnoPi}, \ref{PseudoCmainnoPi} and
\ref{PseudoZmainnoPi}, respectively.
\begin{algorithm}[H]
\caption{Algorithm $\mathcal{A}_k$ for input $\mathcal{N}[\beta_1,\beta_2]$ }
\SetAlgoLined
\begin{algorithmic}
\State $\mathcal{N}_1 \leftarrow \mathcal{N}[\beta_1,(\beta_1+\beta_2)/2];$ $\mathcal{N}_2 \leftarrow \mathcal{N}[(\beta_1+\beta_2)/2,\beta_2];$
\State $\mathcal{A}_k (\mathcal{N}_1);$
\State $\mathcal{C}_{k}(\mathcal{N}[\beta_1,\beta_2]);$
\State $\mathcal{A}_k (\mathcal{N}_2);$
\end{algorithmic}
\label{PseudoAmainnoPi}
\end{algorithm}
\begin{algorithm}[H]
\caption{Algorithm $\mathcal{C}_k$ for input $\mathcal{N}[\beta_1,\beta_2]$ }
\SetAlgoLined
\begin{algorithmic}
\State \textbf{Repeat} for $i=1,2,\ldots,k-1$
\State $\mathcal{A}_{k-1}(\mathcal{N}[\beta_1,\beta_2] \setminus \mathcal{Y}_1); \ldots; \mathcal{A}_{k-1}(\mathcal{N}[\beta_1,\beta_2] \setminus \mathcal{Y}_{k-1});$
\State $\mathcal{Z}_k(\mathcal{N}[\beta_1,\beta_2]);$
\State $\mathcal{A}_{k-1}(\mathcal{N}[\beta_1,\beta_2]\setminus \mathcal{Y}_1); \ldots; \mathcal{A}_{k-1}(\mathcal{N}[\beta_1,\beta_2] \setminus \mathcal{Y}_{k-1});$
\end{algorithmic}
\label{PseudoCmainnoPi}
\end{algorithm}
\begin{algorithm}[H]
\caption{Algorithm $\mathcal{Z}_k$ for input $\mathcal{N}[\beta_1,\beta_2]$}
\SetAlgoLined
\begin{algorithmic}
\State $\mathcal{N}_1 \leftarrow \mathcal{N}[\beta_1,(\beta_1+\beta_2)/2];$ $\mathcal{N}_2 \leftarrow \mathcal{N}[(\beta_1+\beta_2)/2,\beta_2];$
\State $\mathcal{Z}_k (\mathcal{N}_2);$
\State $\Delta(\mathcal{Y}_1,\mathcal{Y}_2,\ldots,\mathcal{Y}_{k-1});$
\State \For{$i=1,2,\ldots,(k-2)$}{
\State $\mathcal{A}_{k-1}(\mathcal{N}[\beta_1,\beta_2]\setminus \mathcal{Y}_1); \ldots; \mathcal{A}_{k-1}(\mathcal{N}[\beta_1,\beta_2] \setminus \mathcal{Y}_{k-1});$
\State $\mathcal{A}_{k-1}(\mathcal{N}[\beta_1,\beta_2]\setminus \mathcal{Y}_1); \ldots; \mathcal{A}_{k-1}(\mathcal{N}[\beta_1,\beta_2] \setminus \mathcal{Y}_{k-1});$}
\State $\mathcal{Z}_k (\mathcal{N}_1);$
\end{algorithmic}
\label{PseudoZmainnoPi}
\end{algorithm}
For a sub-network $\mathcal{N}[\beta_1,\beta_2]$ let $Q$ be a non-self-crossing path in this sub-network. Without loss of generality, we assume that $Q$ has points both in $\mathcal{N}_1$ and $\mathcal{N}_2$. Analogously as for $k=2$ and according to Definitions \ref{Def:x} and \ref{Def:x'}, path $Q$ can be decomposed into three parts $(Q_x,q_{xx'},Q_{x'})$ where $Q_x$ is the sub-path of $Q$ from its starting point to point $x$, $q_{xx'}$ is the sub-path of $Q$ from $x$ to $x'$ and $Q_{x'}$ is the sub-path of $Q$ from point $x'$ to its ending point. Recall that if $Q_x$ (resp. $Q_{x'}$) is not empty then it must include points only in $\mathcal{N}_1$ (resp. $\mathcal{N}_2$).
For $k \geq 3$ the proof of Theorem \ref{Thm1noPi} is outlined below. We assume by induction that when algorithm $\mathcal{A}_k$ is applied to $\mathcal{N}_1$ the computation follows every non-self-crossing path that has only points in $\mathcal{N}_1$. Thus, the computation of the recursive call of algorithm $\mathcal{A}_k$ on $\mathcal{N}_1$ follows the sub-path $Q_x$ of $Q$. According to Theorem \ref{Thm:coordinationnoPi} the computation of algorithm $\mathcal{C}_k$ follows the sub-path $q_{xx'}$ of $Q$. Finally, we assume by induction that when algorithm $\mathcal{A}_k$ is applied to $\mathcal{N}_2$ the computation follows every non-self-crossing path that has only points in $\mathcal{N}_2$. Thus, the computation of the second recursive call follows the sub-path $Q_{x'}$ of $Q$.
The remaining part of the section is organized in the following way: In Subsection \ref{sec:algoCknoPi} we specify the structure of paths followed by algorithm $\mathcal{C}_k$. In Subsection \ref{sec:AlgorithmZnoPi} we specify the structure of paths followed by algorithm $\mathcal{Z}_k$. Finally, based on the analysis of Subsection \ref{sec:AlgorithmZnoPi} in Subsection \ref{sec:knoPi} we show the proof of Theorems \ref{Thm1noPi} and \ref{Thm:coordinationnoPi} for $k \geq 3$.
\subsection{Algorithm $\mathcal{C}_k$}
\label{sec:algoCknoPi}
For a sub-network $\mathcal{N}[\beta_1,\beta_2]$ and a non-self-crossing path $Q$ in the sub-network, algorithm $\mathcal{C}_k$ is employed to follow the sub-path $q_{xx'}$ of $Q$ from $x$ to $x'$. In this sub-section we outline the combinatorial structure of path $q_{xx'}$.
To facilitate the analysis we first introduce ``shades'' of red colour
to distinguish between red edges of different paths $\mathcal{Y}_1, \mathcal{Y}_2,\ldots,\mathcal{Y}_{k-1}$. Specifically, the edges of path $\mathcal{Y}_i$ for $i=1,2,\ldots,k-1$ are coloured with red colour $c_i$.
Recall that according to Definition \ref{def:run}, a run $r$ is a maximal sub-path of a path $Q$ such that all edges are of the same colour. A run is \textit{non-chromatic} if it consists of black edges. A run is \textit{chromatic} if all of its edges are of the same red colour $c_i$, for some $1 \leq i \leq k-1$.
\begin{definition}
For $0\leq d\leq k-1$ we say that a non-self-crossing path $Q$ (or a sub-path of $Q$) is $d$-chromatic if the number of distinct red colours appearing in its chromatic runs is equal to $d$.
A path $Q$ (or a sub-path of $Q$) that does not have a chromatic run is $0$-chromatic, or non-chromatic, and traverses only black edges.
\end{definition}
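The two definitions above are easy to state in code. The following Python sketch uses an encoding of our own (not from the text): a path is the list of its edge colours, with $0$ for black and $i \geq 1$ for the red colour $c_i$.

```python
def runs(colours):
    """Maximal runs of a path given as its list of edge colours
    (0 = black, i >= 1 = red colour c_i); returns (colour, length) pairs."""
    out = []
    for c in colours:
        if out and out[-1][0] == c:
            out[-1][1] += 1        # extend the current run
        else:
            out.append([c, 1])     # start a new maximal run
    return [tuple(r) for r in out]

def chromaticity(colours):
    """The value d: number of distinct red colours among the chromatic runs."""
    return len({c for c, _ in runs(colours) if c != 0})
```

For instance, a path traversing two black edges, two edges of colour $c_1$ and one black edge decomposes into three runs, and a path using colours $c_1$ and $c_2$ is $2$-chromatic.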
\begin{definition}
We say that a $(k-1)$-chromatic path $Q$ is short $(k-1)$-chromatic if all chromatic runs of colour $c_1$ appear before all chromatic runs of colour $c_{k-1}$ or vice versa.
\end{definition}
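A short predicate can check this ordering condition. Since runs inherit the order of the edges they contain, it suffices to compare edge positions directly; this sketch uses our list-of-edge-colours encoding ($0$ black, $i$ for $c_i$), and the vacuous answer when one of the two colours is absent is an illustrative choice, as the definition presupposes a $(k-1)$-chromatic path.

```python
def is_short(colours, k):
    """Short (k-1)-chromatic test: every edge of colour c_1 appears before
    every edge of colour c_{k-1}, or vice versa."""
    pos1 = [i for i, c in enumerate(colours) if c == 1]
    posk = [i for i, c in enumerate(colours) if c == k - 1]
    if not pos1 or not posk:
        return True  # vacuous when a colour is absent (illustrative choice)
    return max(pos1) < min(posk) or max(posk) < min(pos1)
```

For example, with $k=3$ the colour sequence $c_1, c_2, c_1$ interleaves the two colours and is therefore not short.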
Recall that for a sub-network $\mathcal{N}[\beta_1,\beta_2]$ we denote by $\widehat{\mathcal{A}}_{k-1}$ the algorithm which performs the following sequence of $(k-1)$ calls $\mathcal{A}_{k-1}(\mathcal{N}[\beta_1,\beta_2]\setminus \mathcal{Y}_1),\ldots,\mathcal{A}_{k-1}(\mathcal{N}[\beta_1,\beta_2]\setminus \mathcal{Y}_{k-1})$ to algorithm $\mathcal{A}_{k-1}$. The following describes the specification of $\widehat{\mathcal{A}}_{k-1}$ when applied to a sub-network.
\begin{lemma}
\label{followatmostnoPi}
For $k\geq 3$, assuming that Theorem \ref{Thm1noPi} holds for $k-1$, when algorithm $\widehat{\mathcal{A}}_{k-1}$ is applied to a sub-network $\mathcal{N}[\beta_1,\beta_2]$ then the computation follows any non-self-crossing path $Q$ in the sub-network such that $Q$ is at most $(k-2)$-chromatic.
\end{lemma}
\begin{proof}
Sub-networks $\mathcal{N}[\beta_1,\beta_2]\setminus \mathcal{Y}_1,\ldots,\mathcal{N}[\beta_1,\beta_2]\setminus \mathcal{Y}_{k-1}$ do not have a negative cycle because they are sub-networks of the residual network $\mathcal{N}_{k-1}$ (\textit{i.e.} the sub-network for $\beta_1=0$ and $\beta_2=n+1$), which does not have a negative cycle. Consider a non-self-crossing path $Q$ such that $Q$ is at most $(k-2)$-chromatic. This means that there is at least one path $\mathcal{Y}_i$, for some $i \in [1,k-1]$, whose red edges $Q$ does not traverse.
Thus, path $Q$ must be a non-self-crossing path in one of the sub-networks $\mathcal{N}[\beta_1,\beta_2] \setminus \mathcal{Y}_1,\ldots,\mathcal{N}[\beta_1,\beta_2] \setminus \mathcal{Y}_{k-1}$. Without loss of generality, we assume that $Q$ is a path on sub-network $\mathcal{N}[\beta_1,\beta_2] \setminus \mathcal{Y}_i$ where $i \in [1,k-1]$. Algorithm $\widehat{\mathcal{A}}_{k-1}$ consists of applying algorithm $\mathcal{A}_{k-1}$ on sub-networks $\mathcal{N}[\beta_1,\beta_2] \setminus \mathcal{Y}_1,\ldots,\mathcal{N}[\beta_1,\beta_2] \setminus \mathcal{Y}_{k-1}$. Thus, assuming that Theorem \ref{Thm1noPi} holds for $k-1$, when algorithm $\mathcal{A}_{k-1}$ is applied to sub-network $\mathcal{N}[\beta_1,\beta_2] \setminus \mathcal{Y}_i$ the computation follows every non-self-crossing path in the sub-network. This completes the proof.
\end{proof}
When algorithm $\widehat{\mathcal{A}}_{k-1}$ is applied twice on a sub-network $\mathcal{N}[\beta_1,\beta_2]$ (denoted by $\widehat{\mathcal{A}}_{k-1}\,\widehat{\mathcal{A}}_{k-1}$) the computation consists of the following calls to algorithm $\mathcal{A}_{k-1}$: $\mathcal{A}_{k-1}(\mathcal{N}[\beta_1,\beta_2] \setminus \mathcal{Y}_1),\ldots,\mathcal{A}_{k-1}(\mathcal{N}[\beta_1,\beta_2] \setminus \mathcal{Y}_{k-1})$ and $\mathcal{A}_{k-1}(\mathcal{N}[\beta_1,\beta_2] \setminus \mathcal{Y}_1),\ldots,\mathcal{A}_{k-1}(\mathcal{N}[\beta_1,\beta_2] \setminus \mathcal{Y}_{k-1})$, in this order.
\begin{lemma}
\label{followatmostshortnoPi}
For $k\geq 3$, assuming that Theorem \ref{Thm1noPi} holds for $k-1$, when algorithm $\widehat{\mathcal{A}}_{k-1}$ is applied twice to a sub-network $\mathcal{N}[\beta_1,\beta_2]$ then the computation follows any non-self-crossing path $Q$ in the sub-network such that $Q$ is short $(k-1)$-chromatic.
\end{lemma}
\begin{proof}
Consider a non-self-crossing path $Q$ from a point $u$ to a point $u'$ such that $Q$ is short $(k-1)$-chromatic. Without loss of generality, we assume that all chromatic runs of colour $c_1$ appear before all chromatic runs of colour $c_{k-1}$ in $Q$. Let $u''$ be the first point of the first chromatic run of colour $c_{k-1}$ in $Q$. The sub-path of $Q$ from $u$ to $u''$ and the sub-path of $Q$ from $u''$ to $u'$ can be at most $(k-2)$-chromatic.
That is, there is no chromatic run of colour $c_{k-1}$ (resp. $c_1$) between $u$ and $u''$ (resp. between $u''$ and $u'$). According to Lemma \ref{followatmostnoPi} when algorithm $\widehat{\mathcal{A}}_{k-1}$ is applied on sub-network $\mathcal{N}[\beta_1,\beta_2]$ the computation follows every non-self-crossing path which is at most $(k-2)$-chromatic. Thus, the first call to $\widehat{\mathcal{A}}_{k-1}$ follows the sub-path of $Q$ from $u$ to $u''$. Similarly, the second call to algorithm $\widehat{\mathcal{A}}_{k-1}$ follows the sub-path of $Q$ from $u''$ to $u'$.
\end{proof}
Note that algorithm $\mathcal{C}_k$ includes at least two calls to algorithm $\widehat{\mathcal{A}}_{k-1}$ (see steps 1 and 3 in Algorithm \ref{PseudoCmainnoPi}). This means that if the sub-path $q_{xx'}$ is at most $(k-2)$-chromatic or short $(k-1)$-chromatic then according to Lemmas \ref{followatmostnoPi} and \ref{followatmostshortnoPi}, the computation of algorithm $\mathcal{C}_k$ follows the sub-path $q_{xx'}$.
Thus, for the remaining part of the analysis we consider the case where $q_{xx'}$ is $(k-1)$-chromatic.
We need the following definitions to outline the combinatorial structure of a $(k-1)$-chromatic non-self-crossing path in a sub-network $\mathcal{N}[\beta_1,\beta_2]$, with respect to the two sub-networks $\mathcal{N}_1$ and $\mathcal{N}_2$.
\begin{definition}
We say that a non-chromatic run crosses from $\mathcal{N}_1$ to $\mathcal{N}_2$ if it traverses a black edge $(u,u')$ such that $u \in \mathcal{N}_1$ and $u' \in \mathcal{N}_2$. We say that a chromatic run $r$ of colour $c_j$ where $j\in [1,k-1]$ crosses from $\mathcal{N}_2$ to $\mathcal{N}_1$ if it traverses a red edge $(u,u') \in \mathcal{Y}_j$ such that $u \in \mathcal{N}_2$ and $u' \in \mathcal{N}_1$.
\end{definition}
If a run $r$ has only points in $\mathcal{N}_1$ (resp. $\mathcal{N}_2$) we say that $r$ is placed in $\mathcal{N}_1$ (resp. $\mathcal{N}_2$).
\begin{definition}
We say that a path $Q$ crosses from $\mathcal{N}_2$ to $\mathcal{N}_1$ if $Q$ has a chromatic run $r$ which crosses from $\mathcal{N}_2$ to $\mathcal{N}_1$. We say that a path $Q$ crosses from $\mathcal{N}_1$ to $\mathcal{N}_2$ if $Q$ has a non-chromatic run that crosses from $\mathcal{N}_1$ to $\mathcal{N}_2$.
\end{definition}
Similarly as for $k=2$, if the sub-path $q_{xx'}$ is not empty and does not cross from $\mathcal{N}_2$ to $\mathcal{N}_1$ it must hold that $x \in \mathcal{N}_1$, $x' \in \mathcal{N}_2$, and the sub-path $q_{xx'}$ simply consists of the black edge $(x,x')$. If the sub-path $q_{xx'}$ crosses at least once from $\mathcal{N}_2$ to $\mathcal{N}_1$, observe that for $k \geq 3$ there are exactly $(k-1)$ red edges which cross from $\mathcal{N}_2$ to $\mathcal{N}_1$ (one for each path $\mathcal{Y}_1,\mathcal{Y}_2,\ldots,\mathcal{Y}_{k-1}$).
Therefore, the sub-path $q_{xx'}$ crosses from $\mathcal{N}_2$ to $\mathcal{N}_1$ some number $m \leq k-1$ of times, as each such crossing must traverse one of the $k-1$ red edges from $\mathcal{N}_2$ to $\mathcal{N}_1$. It is easy to see that every red edge of $q_{xx'}$ crossing from $\mathcal{N}_2$ to $\mathcal{N}_1$ must appear after a black edge crossing from $\mathcal{N}_1$ to $\mathcal{N}_2$.
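The crossing count used here is simple to express in code. In this sketch (encoding ours, not from the text) a path is reduced to the sequence of sides of its consecutive points, and we count the transitions from $\mathcal{N}_2$ back to $\mathcal{N}_1$, each of which must consume one of the $k-1$ red crossing edges.

```python
def crossings_2_to_1(sides):
    """Number of times a path crosses from N_2 to N_1, given the
    sequence of sides ('1' or '2') of its consecutive points."""
    return sum(1 for a, b in zip(sides, sides[1:]) if (a, b) == ('2', '1'))
```

For instance, a path visiting sides 1, 2, 2, 1, 2, 1, 1 crosses back twice, so it must use two distinct red crossing edges.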
\begin{definition}
We define $w_i$ for $i=1,2,\ldots,m$ to be the last point in $\mathcal{N}_1$ before the $i^{th}$ crossing of $q_{xx'}$ from $\mathcal{N}_1$ to $\mathcal{N}_2$. We denote by $q_i$ the sub-path of $q_{xx'}$ from $w_i$ to $w_{i+1}$.
\end{definition}
For clarity, we denote $x$ and $x'$ by $w_1$ and $w_{k}$, respectively, and w.l.o.g.\ we assume that $m=k-1$. For $i=1$, in the special case where $w_1$ is in $\mathcal{N}_2$, the sub-path $q_1$ does not cross from $\mathcal{N}_1$ to $\mathcal{N}_2$. Similarly, for $i=k-1$, in the special case where $w_k$ is in $\mathcal{N}_2$, the sub-path $q_{k-1}$ does not cross from $\mathcal{N}_2$ to $\mathcal{N}_1$. Denote by $w'_i \in \mathcal{N}_2$ the successor of $w_i$ in $q_i$ for $i=1,2,\ldots,k-1$. The following corollary outlines the structure of $q_i$ for $i =1,2,\ldots,k-1$.
\begin{corollary}
\label{cor:CoordinationCk}
For $i=1,2,\ldots,k-1$ the sub-path $q_i$ of $q_{xx'}$ from $w_i$ to $w_{i+1}$ crosses the boundary between $\mathcal{N}_1$ and $\mathcal{N}_2$ twice. The first crossing is from $\mathcal{N}_1$ to $\mathcal{N}_2$ identified with the black edge $(w_i,w'_i)$. The second crossing is from $\mathcal{N}_2$ to $\mathcal{N}_1$ identified with a red chromatic run $r^*$ of colour $c_j$ where $j \in [1,k-1]$.
\end{corollary}
For $i=1,2,\ldots,k-1$ the sub-path $q_i$ of $q_{xx'}$ can be either at most $(k-2)$-chromatic or $(k-1)$-chromatic. In the former case, according to Lemma \ref{followatmostnoPi}, the computation of the first step in the $i^{th}$ iteration of algorithm $\mathcal{C}_k$ follows the sub-path $q_i$ of $q_{xx'}$. For the latter case, we provide the following definition to facilitate the analysis.
\begin{definition}
\label{def:pointssplit}
For a sub-network $\mathcal{N}[\beta_1,\beta_2]$ consider a $(k-1)$-chromatic path $Q$ from a point $u$ to a point $u'$. Let $\pi,\tau$ be the points in $Q$ such that the sub-path of $Q$ from $u$ to $\pi$ (resp. from $\tau$ to $u'$) is a maximal $(k-2)$-chromatic path.
\end{definition}
Points $\pi$ and $\tau$ are always unique and well-defined for any $(k-1)$-chromatic path $Q$. Following Definition \ref{def:pointssplit}, let $\pi_i$ and $\tau_i$ for $i=1,2,\ldots,k-1$ be the points in the sub-path $q_i$ of $q_{xx'}$ from $w_i$ to $w_{i+1}$. Consider the decomposition of sub-path $q_{xx'}$ into the following parts $(w_1,\pi_1,\tau_1,w_2),\ldots,(w_{k-1},\pi_{k-1},\tau_{k-1},w_k)$, as shown in Figure \ref{fig:description}.
Note that for $i=1,2,\ldots,k-1$ if the path from $\pi_i$ to $\tau_i$ is empty (\textit{i.e.} $\tau_i$ appears before $\pi_i$ in $q_i$) then according to Definition \ref{def:pointssplit} the path from $\pi_i$ to $w_{i+1}$ is also $(k-2)$-chromatic. In this case, based on Lemma \ref{followatmostnoPi} we will show that the computation of the first and third step in the $i^{th}$ iteration of algorithm $\mathcal{C}_k$, follows the sub-path $q_i$ of $q_{xx'}$. From now on we consider the case where the path from $\pi_i$ to $\tau_i$ is not empty (\textit{i.e.} $\tau_i$ appears after $\pi_i$ in $q_i$).
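The split points $\pi$ and $\tau$, and the empty-middle case just discussed, can be illustrated with a short Python sketch (encoding ours: the path is the list of its edge colours, $0$ for black and $i$ for $c_i$, and point $j$ lies between edges $j-1$ and $j$, so point indices range over $0,\ldots,\mathrm{len}$).

```python
def split_points(colours, k):
    """Indices (pi, tau) bounding the maximal (k-2)-chromatic prefix
    and suffix of a (k-1)-chromatic path given as edge colours."""
    n = len(colours)

    def distinct_reds(cs):
        return len({c for c in cs if c != 0})

    # pi: end of the longest prefix using at most k-2 red colours
    pi = max(j for j in range(n + 1) if distinct_reds(colours[:j]) <= k - 2)
    # tau: start of the longest suffix using at most k-2 red colours
    tau = min(j for j in range(n + 1) if distinct_reds(colours[j:]) <= k - 2)
    return pi, tau
```

In the second test below, $\tau$ precedes $\pi$: this is exactly the case where the path from $\pi$ to $\tau$ is empty, as discussed above.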
\begin{figure}
\centering
\includegraphics[scale=0.45]{Chapter3/C3Images/descr.png}
\caption{The combinatorial structure of the sub-path $q_{xx'}$ from $x$ to $x'$ of a non-self-crossing path $Q$, for $k \geq 3$. For clarity, we assume that $m=k-1$.}
\label{fig:description}
\end{figure}
\begin{definition}
\label{def:closedterritory}
Consider the plane representation of the residual network $\mathcal{N}_{k-1}$.
We denote by $\Phi$ the closed subset of the plane whose boundary is described by the leftmost and rightmost path $\mathcal{Y}_1$ and $\mathcal{Y}_{k-1}$, respectively.
\end{definition}
The exterior of $\Phi$ contains only black points. A black point $u$ in the exterior of $\Phi$ must be either on the left side of path $\mathcal{Y}_1$ or on the right side of path $\mathcal{Y}_{k-1}$. In the former case we say that $u$ is on the left exterior of $\Phi$, whereas in the latter case we say that $u$ is on the right exterior of $\Phi$. We say that a red point $u$ is a left (resp. right) boundary point of $\Phi$ if $u$ is a red point on path $\mathcal{Y}_1$ (resp. $\mathcal{Y}_{k-1}$). We say that a red or black point $u$ is in the interior of $\Phi$ if $u$ is a black point between two consecutive paths $\mathcal{Y}_j$ and $\mathcal{Y}_{j+1}$ where $j \in [1,k-2]$, or a red point on path $\mathcal{Y}_{j}$ where $j \in [2,k-2]$. A boundary point or a point in the interior of $\Phi$ is said to be in $\Phi$.
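The case analysis above amounts to a small classifier. This sketch uses a point encoding of our own devising (not from the text): \verb|('left_ext',)| and \verb|('right_ext',)| for black points in the exterior, \verb|('on', j)| for a red point of path $\mathcal{Y}_j$, and \verb|('between', j)| for a black point between $\mathcal{Y}_j$ and $\mathcal{Y}_{j+1}$.

```python
def classify(point, k):
    """Position of a point relative to Phi, following the case analysis
    in the text; paths Y_1, ..., Y_{k-1} bound Phi from left to right."""
    kind = point[0]
    if kind in ('left_ext', 'right_ext'):
        return 'exterior'
    if kind == 'on':
        j = point[1]
        if j == 1:
            return 'left boundary'
        if j == k - 1:
            return 'right boundary'
        return 'interior'          # red point on Y_j, 2 <= j <= k-2
    return 'interior'              # black point between consecutive paths
```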
An edge $(u,u')$ is in $\Phi$ if the closed straight line segment $[u,u']$, corresponding to edge $(u,u')$ in the planar representation, is in $\Phi$. Observe that all red edges of the paths $\mathcal{Y}_1,\mathcal{Y}_2,\ldots,\mathcal{Y}_{k-1}$ must be in $\Phi$. Thus, we have the following corollary.
\begin{corollary}
\label{cor:chromaticrunsinPhi}
For a sub-network $\mathcal{N}[\beta_1,\beta_2]$ and a non-self-crossing path $Q$ in this sub-network, all chromatic runs of $Q$ are in $\Phi$.
\end{corollary}
If a black edge is not in $\Phi$, denoted by $(u,u') \notin \Phi$, then the closed segment $[u,u']$ in the planar representation, must have a closed (sub)-segment in the exterior of $\Phi$. A non-chromatic run $r$ is not in $\Phi$ if it has at least one black edge $(u,u')$ such that $(u,u') \notin \Phi$.
A black edge $(u,u') \notin \Phi$ is a \textit{boundary} edge if point $u$ is a right or left boundary point. A black edge $(u,u') \notin \Phi$ is a \textit{crossing} edge if point $u$ is in the interior of $\Phi$. Recall that paths $\mathcal{Y}_1,\mathcal{Y}_2,\ldots,\mathcal{Y}_{k-1}$ are pairwise non-crossing and therefore a crossing edge $(u,u') \notin \Phi$ must necessarily cross at least one red edge of path $\mathcal{Y}_1$ or path $\mathcal{Y}_{k-1}$.
For a sub-network $\mathcal{N}[\beta_1,\beta_2]$ and a non-self-crossing path $Q$ in the sub-network, consider the geometric representation of $Q$ in the plane. Path $Q$ can be seen as a concatenation of straight line segments which represent the edges of $Q$ and form a continuous segment $\phi$ in the planar representation.
To facilitate the analysis, we distinguish between points and space points. A point $u$ in $\phi$ corresponds to node $u$ in the directed acyclic graph model (on which the sub-network $\mathcal{N}[\beta_1,\beta_2]$ is based). A \textit{space} point $l$ in $\phi$ is a geometrical point on the closed segment $[u,u']$ of an edge $(u,u')\in Q$ and does not correspond to a node in the directed acyclic graph model.
\begin{definition}
\label{def:phisegment}
A sub-path $q$ of a non-self-crossing path $Q$ is a covering path if the continuous segment $\phi$ corresponding to $q$ connects two space points (or points) on the right and left boundary of $\Phi$, respectively.
\end{definition}
Notice that the continuous segment $\phi$ of a covering path $q$ forms a boundary which splits $\Phi$ into two subsets, the bottom subset and the top subset.
Recall that for $i=1,2,\ldots,k-1$ we denote by $q_i$ the path from $w_i$ to $w_{i+1}$. Further according to Definition \ref{def:pointssplit}, for $i=1,2,\ldots,k-1$ path $q_i$ is decomposed into the following parts $(w_i,\pi_i)(\pi_i,\tau_i),(\tau_i,w_{i+1})$ (see Figure \ref{fig:description}).
\begin{lemma}
\label{lemma:CoordinationCknoPi}
For $i=1,2,\ldots,k-1$, all runs (chromatic and non-chromatic) in the sub-path of $q_i$ from $\pi_i$ to $\tau_i$ are in $\Phi$.
\end{lemma}
\begin{proof}
According to Corollary \ref{cor:chromaticrunsinPhi}, all chromatic runs in the sub-path of $q_i$ from $\pi_i$ to $\tau_i$ must be in $\Phi$. Thus, it remains to show that all non-chromatic runs are also in $\Phi$.
Assume towards contradiction that for some $i \in [1,k-1]$ there is a non-chromatic run $r_b$ between $\pi_i$ and $\tau_i$ such that $r_b$ is not in $\Phi$. This means that $r_b$ must include at least one black edge which is not in $\Phi$.
We denote by $(u,u')$ the first black edge in $r_b$ such that $(u,u') \notin \Phi$. Recall that a black edge which is not in $\Phi$ must be either a crossing edge or a boundary edge. Let $l$ be the space point which is defined in the following way. If edge $(u,u')$ is a boundary edge then $l$ is defined as point $u$. If edge $(u,u')$ is a crossing edge then $l$ is defined as the first crossing point on the closed segment $[u,u']$ with a red edge of path $\mathcal{Y}_1$ or path $\mathcal{Y}_{k-1}$.
Without loss of generality, we assume that edge $(u,u')$ is a crossing edge and that space point $l$ is on path $\mathcal{Y}_1$. According to Corollary \ref{cor:CoordinationCk} there is exactly one chromatic run $r^*$ in $q_i$ which crosses from $\mathcal{N}_2$ to $\mathcal{N}_1$. There are two possible cases: (1) edge $(u,u')$ appears before $r^*$ and (2) edge $(u,u')$ appears after $r^*$.
\paragraph{Case 1}(see Figure \ref{CoordinationCknoPia})\newline
According to Definition \ref{def:pointssplit}, the path from $w_i$ to $\pi_i$ is a maximal $(k-2)$-chromatic path. If $\pi_i$ is a red point on path $\mathcal{Y}_j$ where $j \in [1,k-2]$ then the path from $w_i$ to $\pi_i$ has at least one chromatic run of colour $c_{k-1}$. If $\pi_i$ is a red point on path $\mathcal{Y}_{k-1}$ then clearly the path from $w_i$ to $\pi_i$ has at least one chromatic run of colour $c_{k-1}$. Because $\pi_i$ appears before edge $(u,u')$ we conclude that there is at least one chromatic run of colour $c_{k-1}$ before edge $(u,u')$. Let $r$ be the last chromatic run of colour $c_{k-1}$ before the black edge $(u,u')$.
Let $p$ be the path from the last point of run $r$ to point $u'$.
According to Definition \ref{def:phisegment}, $p$ must be a covering path since there is a continuous segment $\phi$ which connects a right boundary point (the last point of run $r$ on $\mathcal{Y}_{k-1}$) and a left boundary point (the crossing point $l$ on path $\mathcal{Y}_1$). Notice that all runs in $p$ appear before $r^*$ and therefore $p$ has only points in $\mathcal{N}_2$. This means that the continuous segment $\phi$ is above the boundary separating $\mathcal{N}_1$ and $\mathcal{N}_2$.
Let $\Phi \cap \mathcal{N}_2$ be the subset of $\Phi$ above the boundary separating $\mathcal{N}_1$ and $\mathcal{N}_2$. Consider the subset $\widetilde{\Phi}$ of $\Phi \cap \mathcal{N}_2$ which is described with the following two boundaries. The top boundary is the continuous segment $\phi$. The bottom boundary is the boundary separating $\mathcal{N}_1$ and $\mathcal{N}_2$. In Figure \ref{CoordinationCknoPia}, the subset $\widetilde{\Phi}$ of $\Phi \cap \mathcal{N}_2$ is shown with the shaded area.
The last point of run $r^*$ must be in $\mathcal{N}_1$ since $r^*$ crosses from $\mathcal{N}_2$ to $\mathcal{N}_1$. Since $\widetilde{\Phi}$ is a subset of $\Phi \cap \mathcal{N}_2$, the last point of run $r^*$ must be in the exterior of $\widetilde{\Phi}$. This means that the first point of $r^*$ must be between the top and bottom boundary of $\widetilde{\Phi}$ since otherwise run $r^*$ crosses with the continuous segment $\phi$ which implies a self-crossing. Thus, the first point of run $r^*$ must be in $\widetilde{\Phi}$.
Let $p'$ be the path from point $u$ to the first point of run $r^*$ and let $\phi'$ be the continuous segment (corresponding to $p'$) from the space point $l$ to the first point of run $r^*$. All runs in $p'$ appear before $r^*$ which means that $p'$ has only points in $\mathcal{N}_2$. Thus, the continuous segment $\phi'$ is above the boundary separating $\mathcal{N}_1$ and $\mathcal{N}_2$. Since edge $(u,u') \notin \Phi$, the continuous segment $\phi'$ must have a closed segment $[l,l']$ in the exterior of $\Phi \cap \mathcal{N}_2$ and subsequently in the exterior of $\widetilde{\Phi}$.
If the closed segment $[l,u']$ does not cross any red edges, then space point $l'$ is defined as point $u'$. If the closed segment $[l,u']$ crosses with at least one red edge, then the first crossing point on the closed segment $[l,u']$ must be with a red edge of $\mathcal{Y}_{1}$, since there are no red edges in the exterior of $\Phi$ and subsequently in the exterior of $\widetilde{\Phi}$. In this case, space point $l'$ is defined as the first crossing point on the closed segment $[l,u']$.
The continuous segment from any arbitrary space point on the closed segment $[l,l']$ which is on the exterior of $\widetilde{\Phi}$ to the first point of run $r^*$ which is in $\widetilde{\Phi}$ must cross the top boundary of $\widetilde{\Phi}$. This implies, that path $p$ crosses with path $p'$, which makes a contradiction.
\paragraph{Case 2}(see Figure \ref{CoordinationCknoPib})\newline
Consider the path $p$ from the last point of run $r^*$ to point $u'$ and let $\phi$ be the continuous segment (corresponding to path $p$) from the last point of run $r^*$ to the space point $l$ on edge $(u,u')$. All runs in $p$ appear after $r^*$, which means that $p$ has only points in $\mathcal{N}_1$ and subsequently the continuous segment $\phi$ must be below the boundary separating $\mathcal{N}_1$ and $\mathcal{N}_2$.
Let $\Phi \cap \mathcal{N}_1$ be the subset of $\Phi$ below the boundary separating $\mathcal{N}_1$ and $\mathcal{N}_2$. Consider the subset $\widetilde{\Phi}$ of $\Phi \cap \mathcal{N}_1$ which is described with the following top and bottom boundary. The bottom boundary of $\widetilde{\Phi}$ is described with the continuous segment $\phi$. The top boundary of $\widetilde{\Phi}$ is described with the boundary separating $\mathcal{N}_1$ and $\mathcal{N}_2$. In Figure \ref{CoordinationCknoPib}, the subset $\widetilde{\Phi}$ of $\Phi \cap \mathcal{N}_1$ is shown with the shaded area.
Let $p'$ be the path from $u'$ to point $w_{i+1}$. All runs in $p'$ appear after $r^*$ and therefore $p'$ has only points in $\mathcal{N}_1$. Path $p'$ is non-self-crossing and therefore all chromatic runs in $p'$ must be in $\widetilde{\Phi}$.
The only red edges of path $\mathcal{Y}_{k-1}$ in $\widetilde{\Phi}$ (if any) are the red edges traversed by path $p$. Therefore any red edges of path $\mathcal{Y}_{k-1}$ in $\widetilde{\Phi}$ can not be traversed by $p'$ which means that $p'$ can be at most $(k-2)$-chromatic.
Points $u$ and $u'$ are connected with a black edge. Hence, the path from $u$ to $w_{i+1}$ can also be at most $(k-2)$-chromatic. According to Definition \ref{def:pointssplit} the path from $\tau_i$ to $w_{i+1}$ is a maximal $(k-2)$-chromatic path. Therefore, point $\tau_i$ can not appear after point $u$ in the path from $\pi_i$ to $w_{i+1}$ (\textit{i.e.} either $\tau_i=u$ or $\tau_i$ precedes $u$). All non-chromatic runs in the path from $\pi_i$ to $u$ must be in $\Phi$ since edge $(u,u')$ is the first black edge such that $(u,u')\notin \Phi$. Therefore, all non-chromatic runs between $\pi_i$ and $\tau_i$ must also be in $\Phi$ since $\tau_i$ does not appear after $u$.
\end{proof}
\begin{figure}
\centering
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[scale=0.7]{Chapter3/C3Images/NoPiCoordination1.png}
\caption{}
\label{CoordinationCknoPia}
\end{subfigure}%
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[scale=0.8]{Chapter3/C3Images/NoPiCoordination2.png}
\caption{}
\label{CoordinationCknoPib}
\end{subfigure}
\caption{Figure \ref{CoordinationCknoPia}: The schematic representation of Case 1 for the proof of Lemma \ref{lemma:CoordinationCknoPi}. Figure \ref{CoordinationCknoPib}: The schematic representation of Case 2 for the proof of Lemma \ref{lemma:CoordinationCknoPi}.}
\end{figure}
For a sub-network $\mathcal{N}[\beta_1,\beta_2]$ let $Q$ be a non-self-crossing path in this sub-network. Consider the sub-path $q_{xx'}$ of $Q$ from $x$ to $x'$ and more specifically its decomposition as shown in Figure \ref{fig:description} (for clarity we denote $x$ and $x'$ by $w_1$ and $w_{k}$, respectively, and assume that $m=k-1$). For $i=1,2,\ldots,k-1$ recall that $q_i$ denotes the sub-path of $q_{xx'}$ from $w_i$ to $w_{i+1}$. Without loss of generality, we assume that path $q_i$ for $i=1,2,\ldots,k-1$ is $(k-1)$-chromatic. Let $\mathcal{C}^i_k$ for $i=1,2,\ldots,k-1$ denote the $i^{th}$ iteration of algorithm $\mathcal{C}_k$. The proof of Theorem \ref{Thm:coordinationnoPi} is outlined below.
For $i=1,2,\ldots,k-1$ the first step of $\mathcal{C}^i_k$ calls algorithm $\widehat{\mathcal{A}}_{k-1}$ on sub-network $\mathcal{N}[\beta_1,\beta_2]$. According to Lemma \ref{followatmostnoPi} when algorithm $\widehat{\mathcal{A}}_{k-1}$ is applied on a sub-network the computation follows every non-self-crossing path $Q$ such that $Q$ is at most $(k-2)$-chromatic. According to Definition \ref{def:pointssplit}, the sub-path of $q_i$ from $w_i$ to $\pi_i$ is at most $(k-2)$-chromatic. Thus the computation of the first step follows the sub-path of $q_i$ from $w_i$ to $\pi_i$.
The second step of algorithm $\mathcal{C}^i_k$ calls algorithm $\mathcal{Z}_k$ on sub-network $\mathcal{N}[\beta_1,\beta_2]$. As we will show in the next sub-section when algorithm $\mathcal{Z}_k$ is applied on a sub-network the computation follows any non-self-crossing path $Q$ such that all runs (chromatic and non-chromatic) in $Q$ are in $\Phi$. According to Lemma \ref{lemma:CoordinationCknoPi}, all runs in the sub-path of $q_i$ from $\pi_i$ to $\tau_i$ are in $\Phi$. Thus, the computation of algorithm $\mathcal{Z}_k$ follows the sub-path of $q_i$ from $\pi_i$ to $\tau_i$.
The third step of $\mathcal{C}^i_k$ calls algorithm $\widehat{\mathcal{A}}_{k-1}$ on sub-network $\mathcal{N}[\beta_1,\beta_2]$.
Similarly as for the first step, according to Lemma \ref{followatmostnoPi} when algorithm $\widehat{\mathcal{A}}_{k-1}$ is applied on a sub-network the computation follows every non-self-crossing path $Q$ such that $Q$ is at most $(k-2)$-chromatic. According to Definition \ref{def:pointssplit}, the sub-path of $q_i$ from $\tau_i$ to $w_{i+1}$ is at most $(k-2)$-chromatic. Thus, the computation of the third step follows the sub-path of $q_i$ from $\tau_i$ to $w_{i+1}$.
\subsection{Algorithm $\mathcal{Z}_k$}
\label{sec:AlgorithmZnoPi}
In this section we outline the combinatorial structure of paths followed by algorithm $\mathcal{Z}_k$.
\begin{definition}
\label{def:Phipath}
For a sub-network $\mathcal{N}[\beta_1,\beta_2]$ and a non-self-crossing path $Q$ in this sub-network we say that $Q$ is a $\Phi$-path if all runs (chromatic and non-chromatic) of $Q$ are in $\Phi$.
\end{definition}
Recall that when algorithm $\mathcal{Z}_k$ is applied to a sub-network $\mathcal{N}[\beta_1,\beta_2]$ the computation consists of three phases (see Algorithm \ref{PseudoZmainnoPi}). The first and third phase call algorithm $\mathcal{Z}_k$ recursively on $\mathcal{N}_2$ and $\mathcal{N}_1$, respectively. The second phase coordinates $\mathcal{N}_1$ and $\mathcal{N}_2$ and consists of two steps. The first step calls algorithm $\Delta(\mathcal{Y}_1,\mathcal{Y}_2,\ldots,\mathcal{Y}_{k-1})$. The second step consists of $(k-2)$ iterations, where each iteration performs two calls of algorithm $\widehat{\mathcal{A}}_{k-1}$ on the sub-network $\mathcal{N}[\beta_1,\beta_2]$. Theorem \ref{algo:Znopi} describes the specification of algorithm $\mathcal{Z}_k$.
\begin{theorem}
\label{algo:Znopi}
Assuming that Theorem \ref{Thm1noPi} holds for $k-1$, when algorithm $\mathcal{Z}_k$ is applied to a sub-network $\mathcal{N}[\beta_1,\beta_2]$ the computation follows every $\Phi$-path in this sub-network.
\end{theorem}
For a sub-network $\mathcal{N}[\beta_1,\beta_2]$, let $Q$ be a $\Phi$-path in the sub-network from a point $\pi$ to a point $\tau$. Consider the two sub-networks $\mathcal{N}[\beta_1,(\beta_1+\beta_2)/2]$ and $\mathcal{N}[(\beta_1+\beta_2)/2,\beta_2]$, denoted by $\mathcal{N}_1$ and $\mathcal{N}_2$, respectively. Without loss of generality we assume that $Q$ is $(k-1)$-chromatic and that it has points both in $\mathcal{N}_1$ and $\mathcal{N}_2$. There are only two possible cases: path $Q$ does not cross from $\mathcal{N}_2$ to $\mathcal{N}_1$, or path $Q$ crosses at least once from $\mathcal{N}_2$ to $\mathcal{N}_1$.
\begin{lemma}
\label{nocrossinginZnoPi1}
For a sub-network $\mathcal{N}[\beta_1,\beta_2]$ and a $(k-1)$-chromatic $\Phi$-path $Q$ in the sub-network, if path $Q$ has points both in $\mathcal{N}_1$ and $\mathcal{N}_2$ but does not cross from $\mathcal{N}_2$ to $\mathcal{N}_1$ then the starting point of $Q$ is in $\mathcal{N}_1$ and the ending point of $Q$ is in $\mathcal{N}_2$.
\end{lemma}
\begin{proof}
Let $\pi$ and $\tau$ be the starting and ending point of path $Q$. We claim that if $Q$ has points both in $\mathcal{N}_1$ and $\mathcal{N}_2$ but does not cross from $\mathcal{N}_2$ to $\mathcal{N}_1$ then $\pi$ must be in $\mathcal{N}_1$ and $\tau$ must be in $\mathcal{N}_2$. Assume towards contradiction that our claim is not true. If point $\pi$ is in $\mathcal{N}_2$ and point $\tau$ is in $\mathcal{N}_1$ then $Q$ necessarily crosses from $\mathcal{N}_2$ to $\mathcal{N}_1$, which makes a contradiction. Similarly, if both $\pi$ and $\tau$ are in $\mathcal{N}_2$ (resp. $\mathcal{N}_1$) and $Q$ has points both in $\mathcal{N}_1$ and $\mathcal{N}_2$, then there is at least one point $x$ in $\mathcal{N}_1$ (resp. $\mathcal{N}_2$) between $\pi$ and $\tau$, which means that $Q$ crosses from $\mathcal{N}_2$ to $\mathcal{N}_1$. Again, this makes a contradiction. We conclude that the starting point $\pi$ of $Q$ must be in $\mathcal{N}_1$ and the ending point $\tau$ of $Q$ must be in $\mathcal{N}_2$.
\end{proof}
Recall that a $(k-1)$-chromatic path $Q$ is short $(k-1)$-chromatic if all chromatic runs of colour $c_1$ appear before all chromatic runs of colour $c_{k-1}$, or vice versa.
\begin{lemma}
\label{nocrossinginZnoPi}
For a sub-network $\mathcal{N}[\beta_1,\beta_2]$ and a $(k-1)$-chromatic $\Phi$-path $Q$ in the sub-network, if path $Q$ has points both in $\mathcal{N}_1$ and $\mathcal{N}_2$ but does not cross from $\mathcal{N}_2$ to $\mathcal{N}_1$ then $Q$ is short $(k-1)$-chromatic.
\end{lemma}
\begin{proof}
According to Lemma \ref{nocrossinginZnoPi1} the starting point $\pi$ of $Q$ must be in $\mathcal{N}_1$ and the ending point $\tau$ of $Q$ must be in $\mathcal{N}_2$. This implies that $Q$ has exactly one black edge $(u,u')$ which crosses from $\mathcal{N}_1$ to $\mathcal{N}_2$. Let $l$ be the space point corresponding to the crossing point of edge $(u,u')$ with the boundary separating $\mathcal{N}_1$ and $\mathcal{N}_2$ (shown in green in Figures \ref{fig:NoPiZ1} and \ref{fig:NoPiZ2}).
Since $Q$ is $(k-1)$-chromatic it must have at least one chromatic run of colour $c_1$ and at least one chromatic run of colour $c_{k-1}$. Without loss of generality, we assume that the first chromatic run of colour $c_{k-1}$ appears before the first chromatic run $r'$ of colour $c_{1}$. It is sufficient to show that there is no chromatic run of colour $c_{k-1}$ after $r'$ in $Q$.
Let $r$ be the last chromatic run of colour $c_{k-1}$ before $r'$. We first claim that run $r$ must have all of its points in $\mathcal{N}_1$. Assume towards contradiction that this is not the case. Note that $r$ cannot have points in both $\mathcal{N}_1$ and $\mathcal{N}_2$, since this would imply that $Q$ crosses from $\mathcal{N}_2$ to $\mathcal{N}_1$; hence $r$ must have all of its points in $\mathcal{N}_2$. But if $r$ has all of its points in $\mathcal{N}_2$, then edge $(u,u')$ must appear before $r$. An example is shown in Figure \ref{fig:NoPiZ1}.
Let $p$ be the sub-path of $Q$ from point $u$ to the first point of run $r$. Denote by $\phi$ the continuous segment (corresponding to path $p$) from the space point $l$ to the first point of run $r$. Notice that $\phi$ must be above the boundary separating $\mathcal{N}_1$ and $\mathcal{N}_2$. Consider the subset $\widetilde{\Phi}$ of $\Phi\cap \mathcal{N}_2$ which is described by the following two boundaries. The bottom boundary is the boundary separating $\mathcal{N}_1$ and $\mathcal{N}_2$. The top boundary is the continuous segment $\phi$.
Run $r$ appears before the first chromatic run $r'$ of colour $c_1$ in $Q$, which means that $p$ does not have a chromatic run of colour $c_1$. By definition, all non-chromatic runs in path $Q$, and consequently in $p$, are in $\Phi$. Therefore, there are no red edges of path $\mathcal{Y}_1$ in $\widetilde{\Phi}$. Let $p'$ be the sub-path of $Q$ from the last point of run $r$ to the ending point $\tau$ of $Q$. Clearly, run $r'$ must be in path $p'$.
Path $p'$ cannot cross the boundary from $\mathcal{N}_2$ to $\mathcal{N}_1$. This means that all runs (chromatic and non-chromatic) in $p'$ must be in $\Phi \cap \mathcal{N}_2$. Further, because $Q$ is non-self-crossing, all chromatic runs in $p'$ must be in $\widetilde{\Phi}$. There are no red edges of path $\mathcal{Y}_1$ in $\widetilde{\Phi}$, which means that $p'$ cannot have a chromatic run of colour $c_1$. However, this is a contradiction because the chromatic run $r'$ of colour $c_1$ must be in path $p'$. We conclude that $r$ has all of its points in $\mathcal{N}_1$.
We now claim that run $r'$ must have all of its points in $\mathcal{N}_2$. Assume towards contradiction that this is not the case. As before, it must be that $r'$ has all of its points in $\mathcal{N}_1$, otherwise we obtain a contradiction\footnote{If run $r'$ has points both in $\mathcal{N}_2$ and $\mathcal{N}_1$, then $Q$ crosses from $\mathcal{N}_2$ to $\mathcal{N}_1$.}. This means that $r'$ (and consequently $r$) appears before edge $(u,u')$ crossing from $\mathcal{N}_1$ to $\mathcal{N}_2$. According to Definition \ref{def:phisegment}, the path from the last point of $r$ to the first point of $r'$ has a covering path, since there is a continuous segment $\phi$ which connects two points on the left and right boundaries of $\Phi$ (\textit{i.e.} the last point of $r$ on path $\mathcal{Y}_{k-1}$ and the first point of $r'$ on path $\mathcal{Y}_1$).
The continuous segment $\phi$ is below the boundary separating $\mathcal{N}_2$ and $\mathcal{N}_1$, since $r$ and $r'$ appear before edge $(u,u')$. An example is shown in Figure \ref{fig:NoPiZ2}. All runs (chromatic and non-chromatic) in $Q$ are in $\Phi$. Further, $Q$ is non-self-crossing. Thus, all runs after run $r'$ in $Q$ must be below $\phi$ and consequently below the boundary separating $\mathcal{N}_1$ and $\mathcal{N}_2$. However, this is a contradiction since the ending point $\tau$ of $Q$ is in $\mathcal{N}_2$. We conclude that $r'$ must have all of its points in $\mathcal{N}_2$.
We now show that $Q$ cannot have a chromatic run of colour $c_{k-1}$ that appears after the first chromatic run $r'$ of colour $c_1$. Let $p$ be the sub-path of $Q$ from point $u$ to the first point of run $r'$. Notice that path $p$ cannot have a chromatic run of colour $c_{k-1}$, because run $r$ is the last chromatic run of colour $c_{k-1}$ before run $r'$ and appears before edge $(u,u')$. Let $\phi$ be the continuous segment (corresponding to path $p$) from space point $l$ to the first point of run $r'$.
Consider the subset $\widetilde{\Phi}$ of $\Phi \cap \mathcal{N}_2$ which is described by the following two boundaries. The bottom boundary is the boundary separating $\mathcal{N}_1$ and $\mathcal{N}_2$. The top boundary is the continuous segment $\phi$. Notice that there are no red edges of path $\mathcal{Y}_{k-1}$ in $\widetilde{\Phi}$, since path $p$ does not have a chromatic run of colour $c_{k-1}$. Let $p'$ be the sub-path of $Q$ from the last point of run $r'$ to the ending point $\tau$.
Path $p'$ cannot cross the boundary from $\mathcal{N}_2$ to $\mathcal{N}_1$. This means that all runs (chromatic and non-chromatic) in $p'$ must be in $\Phi \cap \mathcal{N}_2$. Further, because $Q$ is non-self-crossing, all chromatic runs in $p'$ must be in $\widetilde{\Phi}$. However, there are no red edges of path $\mathcal{Y}_{k-1}$ in $\widetilde{\Phi}$, and consequently $p'$ cannot have a chromatic run of colour $c_{k-1}$. Thus, there cannot be a chromatic run of colour $c_{k-1}$ after run $r'$, which means that $Q$ is short $(k-1)$-chromatic.
\end{proof}
\begin{figure}
\centering
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[scale=0.6]{Chapter3/C3Images/NoPiZ1.png}
\caption{}
\label{fig:NoPiZ1}
\end{subfigure}%
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[scale=0.6]{Chapter3/C3Images/NoPiZ2.png}
\caption{}
\label{fig:NoPiZ2}
\end{subfigure}
\caption{Figures \ref{fig:NoPiZ1} and \ref{fig:NoPiZ2}: Schematic representation of the proof of Lemma \ref{nocrossinginZnoPi}.}
\end{figure}
Lemma \ref{nocrossinginZnoPi} specifies the combinatorial structure of a $(k-1)$-chromatic $\Phi$-path $Q$ for the special case where $Q$ does not cross from $\mathcal{N}_2$ to $\mathcal{N}_1$. If path $Q$ crosses at least once and at most $m \leq k-1$ times from $\mathcal{N}_2$ to $\mathcal{N}_1$, we provide the following definition which will allow us to decompose $Q$ with respect to its crossings from $\mathcal{N}_2$ to $\mathcal{N}_1$.
\begin{definition}
\label{Def:winZ}
Define $w_i$ for $i=1,2,\ldots,m$ to be the last point in $\mathcal{N}_2$ before the $i^{th}$ crossing of $Q$ from $\mathcal{N}_2$ to $\mathcal{N}_1$. Let $w'_i$ be the successor point of $w_i$ in $Q$.
\end{definition}
Without loss of generality, we assume that $m=k-1$. Let $\pi$ and $\tau$ be the starting and ending points of $Q$, respectively. We decompose path $Q$ into the following parts: $(\pi,w'_1),(w'_1,w'_2),\ldots,(w'_{k-2},w'_{k-1}),(w'_{k-1},\tau)$.
For $i=1,2,\ldots,k-2$ the sub-path of $Q$ from $w'_i$ to $w_{i+1}$ does not cross from $\mathcal{N}_2$ to $\mathcal{N}_1$ since the red edge $(w_{i+1},w'_{i+1})$ denotes the next crossing of $Q$ from $\mathcal{N}_2$ to $\mathcal{N}_1$. Further, point $w'_i$ is in $\mathcal{N}_1$ and point $w_{i+1}$ is in $\mathcal{N}_2$ which means that the sub-path of $Q$ from $w'_i$ to $w_{i+1}$ has at least one point both in $\mathcal{N}_1$ and $\mathcal{N}_2$. Thus, according to Lemma \ref{nocrossinginZnoPi} we obtain the following corollary.
\begin{corollary}
\label{cor:Znopisubpath}
For $i=1,2,\ldots,k-2$ if the sub-path of $Q$ from $w'_i$ to $w_{i+1}$ is $(k-1)$-chromatic then it is short $(k-1)$-chromatic.
\end{corollary}
\begin{lemma}
\label{lemma:ZnoPiCord}
For $i=1,2,\ldots,k-2$ if the sub-path of $Q$ from $w'_i$ to $w'_{i+1}$ is $(k-1)$-chromatic then it is short $(k-1)$-chromatic.
\end{lemma}
\begin{proof}
For any $i \in [1,k-2]$ let $q_i$ be the sub-path of $Q$ from $w'_i$ to $w'_{i+1}$ and let $q'_i=q_i \setminus \{(w_{i+1},w'_{i+1})\}$ be the sub-path of $q_i$ without the last red edge $(w_{i+1},w'_{i+1})$. According to Corollary \ref{cor:Znopisubpath}, if path $q'_i$ is $(k-1)$-chromatic then it must be short $(k-1)$-chromatic. Without loss of generality, we assume that all chromatic runs of colour $c_1$ appear before all chromatic runs of colour $c_{k-1}$ in $q'_i$.
We show that if we construct $q_i$ from $q'_i$ by adding the red edge $(w_{i+1},w'_{i+1})$, then the short $(k-1)$-chromatic condition is preserved. That is, all chromatic runs of colour $c_1$ appear before all chromatic runs of colour $c_{k-1}$ in $q_i$.
Let $c_j$ where $j \in [1,k-1]$ be the colour of the red edge $(w_{i+1},w'_{i+1})$. If $j \in [2,k-1]$ then clearly our claim is true.
That is, the red edge $(w_{i+1},w'_{i+1})$ is not of colour $c_1$ and therefore all chromatic runs of colour $c_1$ appear before all chromatic runs of colour $c_{k-1}$ in path $q_i$. Thus, it remains to show that $j \neq 1$. Assume towards contradiction that $j=1$.
Notice that since $w'_i \in \mathcal{N}_1$ and $w_{i+1} \in \mathcal{N}_2$, path $q'_i$ must have exactly one black edge $(u,u')$ which crosses from $\mathcal{N}_1$ to $\mathcal{N}_2$. Let $l$ be the space point corresponding to the crossing point of edge $(u,u')$ with the boundary separating $\mathcal{N}_1$ and $\mathcal{N}_2$. Let $r$ be the last chromatic run of colour $c_1$ before the first chromatic run $r'$ of colour $c_{k-1}$ in $q'_i$. Following the same methodology as in Lemma \ref{nocrossinginZnoPi}, we obtain that edge $(u,u')$ must appear between $r$ and $r'$. This means that $r$ has all of its points in $\mathcal{N}_1$ and run $r'$ has all of its points in $\mathcal{N}_2$.
Let $p$ be the sub-path of $q'_i$ from point $u$ to the first point of run $r'$ and let $\phi$ be the continuous segment (corresponding to this path) from the space point $l$ to the first point of run $r'$. All points after $u$ in $q'_i$ must be in $\mathcal{N}_2$ and therefore the continuous segment $\phi$ is above the boundary separating $\mathcal{N}_1$ from $\mathcal{N}_2$. Further, run $r$ is the last chromatic run of colour $c_1$ before $r'$ and $r$ appears before edge $(u,u')$. Thus, path $p$ does not have a chromatic run of colour $c_1$.
Consider the subset $\widetilde{\Phi}$ of $\Phi \cap \mathcal{N}_2$ which is described by the following two boundaries. The top boundary is the continuous segment $\phi$. The bottom boundary is the boundary separating $\mathcal{N}_1$ from $\mathcal{N}_2$. An example is shown in Figure \ref{fig:mainLemmaZnoPi1}. There are no red edges of path $\mathcal{Y}_1$ in $\widetilde{\Phi}$, since path $p$ does not traverse any chromatic run of colour $c_1$.
Therefore, if $j=1$, that is, the red edge $(w_{i+1},w'_{i+1})$ is of colour $c_1$, then point $w_{i+1} \in \mathcal{N}_2$ must be in the exterior of $\widetilde{\Phi}$. Clearly, the last point of run $r'$ is in $\widetilde{\Phi}$. Let $p'$ be the sub-path of $q'_i$ from the last point of run $r'$ to point $w_{i+1}$. All runs (chromatic and non-chromatic) in $p'$ must be in $\Phi$. Path $p'$ cannot cross the boundary separating $\mathcal{N}_1$ and $\mathcal{N}_2$. Thus, all runs in $p'$ must be in $\Phi \cap \mathcal{N}_2$.
Therefore, path $p'$ must cross the top boundary of $\widetilde{\Phi}$ (\textit{i.e.} the continuous segment $\phi$ corresponding to path $p$), since $w_{i+1}$ is in the exterior of $\widetilde{\Phi}$ and the last point of run $r'$ is in $\widetilde{\Phi}$. This implies that path $p'$ crosses path $p$, which is a contradiction since $Q$ is non-self-crossing.
\end{proof}
Figure \ref{fig:worstcaseZ} shows an example of a non-self-crossing $\Phi$-path $Q$ which crosses from $\mathcal{N}_2$ to $\mathcal{N}_1$ exactly $(k-1)$ times for $k=6$. Observe that the sub-path of $Q$ from $w'_i$ to $w'_{i+1}$ for $i=1,2,\ldots,k-2$ is short $(k-1)$-chromatic.
\begin{figure}
\centering
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[scale=0.7]{Chapter3/C3Images/mainLemmaZnoPi2.png}
\caption{}
\label{fig:mainLemmaZnoPi1}
\end{subfigure}%
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[scale=0.62]{Chapter3/C3Images/Zexample.png}
\caption{}
\label{fig:worstcaseZ}
\end{subfigure}
\caption{Figure \ref{fig:mainLemmaZnoPi1}: The subset $\widetilde{\Phi}$ of $\Phi \cap \mathcal{N}_2$ is shown as the shaded region.
Figure \ref{fig:worstcaseZ}: An example of a non-self-crossing $\Phi$-path $Q$ from a point $\pi$ to a point $\tau$ which crosses exactly $(k-1)$ times from $\mathcal{N}_2$ to $\mathcal{N}_1$ (for $k=6$).}
\end{figure}
\begin{proof}[Proof of Theorem \ref{algo:Znopi}]
For a sub-network $\mathcal{N}[\beta_1,\beta_2]$ consider a $\Phi$-path $Q$ in this sub-network. Without loss of generality, we assume that $Q$ is $(k-1)$-chromatic and that $Q$ has points both in $\mathcal{N}_1$ and $\mathcal{N}_2$. Recall that when algorithm $\mathcal{Z}_k$ is applied to sub-network $\mathcal{N}[\beta_1,\beta_2]$ the computation consists of three phases.
The first and third phase call algorithm $\mathcal{Z}_k$ recursively on $\mathcal{N}_2$ and $\mathcal{N}_1$, respectively. The second phase consists of two steps. The first step calls algorithm $\Delta(\mathcal{Y}_1,\mathcal{Y}_2,\ldots,\mathcal{Y}_{k-1})$. The second step consists of $(k-2)$ iterations, where each iteration performs two calls of algorithm $\widehat{\mathcal{A}}_{k-1}$ on sub-network $\mathcal{N}[\beta_1,\beta_2]$.
If $Q$ does not cross from $\mathcal{N}_2$ to $\mathcal{N}_1$, then according to Lemma \ref{nocrossinginZnoPi}, $Q$ is short $(k-1)$-chromatic. According to Lemma \ref{followatmostshortnoPi}, when algorithm $\widehat{\mathcal{A}}_{k-1}$ is applied twice to sub-network $\mathcal{N}[\beta_1,\beta_2]$, the computation follows every non-self-crossing path which is short $(k-1)$-chromatic. Algorithm $\mathcal{Z}_k$ includes at least two calls to algorithm $\widehat{\mathcal{A}}_{k-1}$ and therefore the computation follows path $Q$.
If $Q$ crosses at least once and at most $m \leq k-1$ times from $\mathcal{N}_2$ to $\mathcal{N}_1$, recall that according to Definition \ref{Def:winZ} we define $w_i$ for $i=1,2,\ldots,m$ to be the last point in $\mathcal{N}_2$ before the $i^{th}$ crossing from $\mathcal{N}_2$ to $\mathcal{N}_1$, and denote by $w'_i$ the successor of $w_i$ in $Q$. Consider the decomposition of $Q$ into the following parts: $(\pi,w'_1),(w'_1,w'_2),\ldots,(w'_{k-2},w'_{k-1}),(w'_{k-1},\tau)$, where $\pi$ and $\tau$ are the starting and ending points of $Q$, respectively.
If the sub-path of $Q$ from $\pi$ to $w_1$ is not empty, then it can only have points in $\mathcal{N}_2$. Similarly, if the sub-path of $Q$ from $w'_{k-1}$ to $\tau$ is not empty, then it can only have points in $\mathcal{N}_1$. We now assume by induction that Theorem \ref{algo:Znopi} holds for $k$ and any sub-network of size less than $n$.
By the induction hypothesis, when algorithm $\mathcal{Z}_k$ is applied to sub-network $\mathcal{N}_2$ the computation follows every $\Phi$-path that has only points in $\mathcal{N}_2$. Thus, the computation of the first phase (\textit{i.e.} first recursive call) follows the sub-path of $Q$ from $\pi$ to $w_1$.
The second phase consists of two steps. The first step calls algorithm $\Delta(\mathcal{Y}_1,\mathcal{Y}_2,\ldots,\mathcal{Y}_{k-1})$, whose computation follows the red edge $(w_1,w'_1)$. We now claim that for $i=1,2,\ldots,k-2$ the computation in the $i^{th}$ iteration of the second step follows the sub-path of $Q$ from $w'_i$ to $w'_{i+1}$. According to Lemma \ref{lemma:ZnoPiCord}, the sub-path of $Q$ from $w'_i$ to $w'_{i+1}$ is either at most $(k-2)$-chromatic or short $(k-1)$-chromatic.
In the former case, according to Lemma \ref{followatmostnoPi} when algorithm $\widehat{\mathcal{A}}_{k-1}$ is applied to sub-network $\mathcal{N}[\beta_1,\beta_2]$ the computation follows every non-self-crossing path which is at most $(k-2)$-chromatic. In the latter case, according to Lemma \ref{followatmostshortnoPi} when algorithm $\widehat{\mathcal{A}}_{k-1}$ is applied twice to sub-network $\mathcal{N}[\beta_1,\beta_2]$ the computation follows every non-self-crossing path which is short $(k-1)$-chromatic.
The $i^{th}$ iteration of the second step for $i=1,2,\ldots,k-2$ consists of two applications of algorithm $\widehat{\mathcal{A}}_{k-1}$ to the sub-network. We conclude that the computation in the $i^{th}$ iteration for $i=1,2,\ldots,k-2$ follows the sub-path of $Q$ from $w'_i$ to $w'_{i+1}$.
By the induction hypothesis, when algorithm $\mathcal{Z}_k$ is applied to sub-network $\mathcal{N}_1$ the computation follows every $\Phi$-path that has only points in $\mathcal{N}_1$. Thus, the computation in the third phase (\textit{i.e.} second recursive call) follows the sub-path of $Q$ from $w'_{k-1}$ to $\tau$. This completes the proof.
\end{proof}
\subsection{Proofs of Theorems \ref{Thm:coordinationnoPi} and \ref{Thm1noPi} for $k \geq 3$}
\label{sec:knoPi}
\begin{proof}[Proof of Theorem \ref{Thm:coordinationnoPi} for $k \geq 3$]
Recall that when algorithm $\mathcal{C}_k$ is applied to sub-network $\mathcal{N}[\beta_1,\beta_2]$ the computation consists of $(k-1)$ iterations where each iteration has three steps (see algorithm \ref{PseudoCmainnoPi}). The first and third step call algorithm $\widehat{\mathcal{A}}_{k-1}$ on the sub-network while the second step calls algorithm $\mathcal{Z}_k$ on the sub-network.
For a sub-network $\mathcal{N}[\beta_1,\beta_2]$ let $Q$ be a non-self-crossing path in the sub-network. Consider the sub-path $q_{xx'}$ of $Q$, according to Definitions \ref{Def:x} and \ref{Def:x'}. If $q_{xx'}$ does not cross from $\mathcal{N}_2$ to $\mathcal{N}_1$, it must be that $x \in \mathcal{N}_1$, $x' \in \mathcal{N}_2$, and path $q_{xx'}$ consists of a single black edge $(x,x')$.
According to Lemma \ref{followatmostnoPi}, when algorithm $\widehat{\mathcal{A}}_{k-1}$ is applied to sub-network $\mathcal{N}[\beta_1,\beta_2]$ the computation follows every non-self-crossing path which is at most $(k-2)$-chromatic. Algorithm $\mathcal{C}_k$ includes at least one call to algorithm $\widehat{\mathcal{A}}_{k-1}$ and therefore the computation of $\mathcal{C}_k$ follows the sub-path $q_{xx'}$ of $Q$.
If $q_{xx'}$ crosses at least once from $\mathcal{N}_2$ to $\mathcal{N}_1$, then decompose $q_{xx'}$ into the following parts $(w_1,w_2),\ldots,(w_{k-1},w_k)$, where $w_i$ for $i=1,2,\ldots,k$ is the last point in $\mathcal{N}_1$ before the $i^{th}$ crossing of $q_{xx'}$ from $\mathcal{N}_1$ to $\mathcal{N}_2$ (see subsection \ref{sec:algoCknoPi}).
Without loss of generality, assume that the sub-path $q_i$ of $q_{xx'}$ from $w_i$ to $w_{i+1}$ for $i=1,2,\ldots,k-1$ is $(k-1)$-chromatic. Decompose $q_i$ into $(w_i,\pi_i,\tau_i,w_{i+1})$, according to Definition \ref{def:pointssplit}, such that the path from $w_i$ to $\pi_i$ (resp. $\tau_i$ to $w_{i+1}$) is a maximal $(k-2)$-chromatic path (see Figure~\ref{fig:description}).
We claim that the computation in the $i^{th}$ iteration $\mathcal{C}^i_k$ of algorithm $\mathcal{C}_k$ for $i=1,2,\ldots,k-1$ follows the sub-path of $q_{xx'}$ from $w_i$ to $w_{i+1}$. The first step in $\mathcal{C}^i_k$ calls algorithm $\widehat{\mathcal{A}}_{k-1}$ on the sub-network, whose computation follows any at most $(k-2)$-chromatic non-self-crossing path in the sub-network, according to Lemma \ref{followatmostnoPi}. Thus, the computation of the first step follows the path from $w_i$ to $\pi_i$.
According to Lemma \ref{lemma:CoordinationCknoPi} the path from $\pi_i$ to $\tau_i$ is a $\Phi$-path (\textit{i.e.} all runs are in $\Phi$). The second step in $\mathcal{C}^i_k$ calls algorithm $\mathcal{Z}_k$ whose computation follows every $\Phi$-path in the sub-network, according to Theorem \ref{algo:Znopi}. Thus, the computation of the second step follows the path from $\pi_i$ to $\tau_i$.
The third step in $\mathcal{C}^i_k$ calls algorithm $\widehat{\mathcal{A}}_{k-1}$ on the sub-network, whose computation follows any at most $(k-2)$-chromatic non-self-crossing path in the sub-network, according to Lemma \ref{followatmostnoPi}. Thus, the computation of the third step follows the path from $\tau_i$ to $w_{i+1}$. This completes the proof.
\end{proof}
\begin{proof}[Proof of Theorem \ref{Thm1noPi} for $k \geq 3$]
The proof follows by induction. The base case of the induction is $k=2$, where according to Theorem \ref{Theorem:specialcasek=2} when algorithm $\mathcal{A}_2$ is applied to a sub-network $\mathcal{N}[\beta_1,\beta_2]$ the computation follows every path in the sub-network. We now assume by induction that Theorem \ref{Thm1noPi} holds for any $k'<k$. We also assume by induction that Theorem \ref{Thm1noPi} holds for $k$ and a sub-network of size less than $n$.
Let $Q$ be a non-self-crossing path in a sub-network $\mathcal{N}[\beta_1,\beta_2]$. Consider the decomposition of $Q$ into $(Q_x,q_{xx'},Q_{x'})$ according to Definitions \ref{Def:x} and \ref{Def:x'}. Recall that if path $Q_x$ (resp. $Q_{x'})$ is not empty then all points in this path are in $\mathcal{N}_1$ (resp. $\mathcal{N}_2$).
Recall that when algorithm $\mathcal{A}_k$ is applied to sub-network $\mathcal{N}[\beta_1,\beta_2]$ the computation consists of three phases (see algorithm \ref{PseudoAmainnoPi}). The first and third phase call algorithm $\mathcal{A}_k$ recursively on $\mathcal{N}_1$ and $\mathcal{N}_2$, respectively. The second phase calls algorithm $\mathcal{C}_k$ on the sub-network.
By the induction hypothesis, when algorithm $\mathcal{A}_k$ is applied recursively to $\mathcal{N}_1$ the computation follows the sub-path $Q_x$ of $Q$ since $Q_x$ has only points in $\mathcal{N}_1$. According to Theorem \ref{Thm:coordinationnoPi} when algorithm $\mathcal{C}_k$ is applied to $\mathcal{N}[\beta_1,\beta_2]$ the computation follows the sub-path $q_{xx'}$ of $Q$. By the induction hypothesis, when algorithm $\mathcal{A}_k$ is applied to $\mathcal{N}_2$ the computation follows the sub-path $Q_{x'}$ of $Q$ since it has only points in $\mathcal{N}_2$.
For $k \ge 2$, the running time of algorithm $\mathcal{A}_{k}$ over $n$ points is given by the following relationship $\mathcal{T}^k_A(n)=\mathcal{T}^k_A(\frac{n}{2})+\mathcal{T}^k_C(n)+ \mathcal{T}^k_A(\frac{n}{2})$, where $\mathcal{T}^k_C(n)$ is the running time of algorithm $\mathcal{C}_k$ over $n$ points. The running time of algorithm $\mathcal{C}_k$ is given by the following relationship $\mathcal{T}^k_C(n)=(k-1)[\mathcal{T}^{k-1}_A(n)+\mathcal{T}^k_Z(n)+\mathcal{T}^{k-1}_A(n)]$, where $\mathcal{T}^k_Z(n)$ is the running time of algorithm $\mathcal{Z}_k$ over $n$ points.
The running time of algorithm $\mathcal{Z}_k$ over $n$ points is given by the relationship $\mathcal{T}^k_Z(n)=\mathcal{T}^k_Z(\frac{n}{2})+k^2 \mathcal{T}^{k-1}_A(n)+\mathcal{T}^k_Z(\frac{n}{2})+O(n)$. By solving the recurrence relationships we obtain that $\mathcal{T}^k_Z(n)=O(k^2\log n) \mathcal{T}^{k-1}_A(n)$ and $\mathcal{T}^k_C(n)=O(k^3 \log n)\mathcal{T}^{k-1}_A(n)$, which results in $\mathcal{T}^k_A(n)=O(k^{3k}\log^{2k}n) \mathcal{T}^1_A(n)$. Putting everything together, we conclude that $\mathcal{T}^k_A(n)=O(k^{3k}n \log^{2k+3}n)$.
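In more detail, the unrolling runs as follows (a sketch, assuming each $\mathcal{T}^{k'}_A(n)$ is superadditive in $n$, so that the cost of every level of the recursion tree is bounded by the cost at the root):
\begin{align*}
\mathcal{T}^k_Z(n) &= 2\,\mathcal{T}^k_Z\!\left(\tfrac{n}{2}\right)+k^2\,\mathcal{T}^{k-1}_A(n)+O(n) = O(k^2\log n)\,\mathcal{T}^{k-1}_A(n),\\
\mathcal{T}^k_C(n) &= (k-1)\left[2\,\mathcal{T}^{k-1}_A(n)+\mathcal{T}^k_Z(n)\right] = O(k^3\log n)\,\mathcal{T}^{k-1}_A(n),\\
\mathcal{T}^k_A(n) &= 2\,\mathcal{T}^k_A\!\left(\tfrac{n}{2}\right)+\mathcal{T}^k_C(n) = O(k^3\log^2 n)\,\mathcal{T}^{k-1}_A(n),
\end{align*}
and iterating the last bound from $k$ down to $2$ gives $\mathcal{T}^k_A(n)=O(k^{3k}\log^{2k}n)\,\mathcal{T}^1_A(n)$.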
\end{proof}
\section{Proof of Theorem \ref{maintheoremuncrossing} for $k \geq 3$}
\label{noncrossingpathsfork}
In this section we show the proof of Theorem \ref{maintheoremuncrossing}. That is, for a set of points $\mathcal{P}_k$ such that all points can be covered with $k$ paths, we show an algorithm $\widetilde{U}_k$ with running time $O(k n\log n)$ which computes a collection of $k$ node-disjoint non-crossing paths covering all points in $\mathcal{P}_k$.
For $k \geq 3$ consider the implicit plane $\alpha-\beta$ representation of the directed acyclic graph $G^*$ with the points in $\mathcal{P}$ (see sub-section \ref{sub:plane}). Recall that for an edge $(u,u')$ in $G^*$, point $u \in \mathcal{P}$ is (Pareto) dominated by point $u' \in \mathcal{P}$ ($\alpha_u \le \alpha_{u'}$ and $\beta_u \le \beta_{u'}$), which is denoted by $u \prec u'$. An edge $(u,u')$ in $G^*$ can be seen as a closed segment $[u,u']$ in the plane representation. An $s-t$ path in $G^*$ forms a continuous segment on the plane which consists of a concatenation of straight line segments, corresponding to the edges of this path.
Recall that two node-disjoint edges $(u,u')$ and $(x,x')$ in $G^*$ cross if the two (closed) segments $[u,u']$ and $[x,x']$ in the plane have a common point. All points are in general position and therefore the common point $\pi$ of the closed segments $[u,u']$ and $[x,x']$ is a \textit{crossing} point, which is geometrically defined by coordinates $\alpha_{\pi}$ and $\beta_{\pi}$, but does not correspond to a point in $\mathcal{P}$ and consequently not to a node in $G^*$.
\begin{lemma}
\label{l5}
If two edges $(u,u')$ and $(x,x')$ in $G^*$ cross then $G^*$ has edges $(u,x')$ and $(x,u')$.
\end{lemma}
\begin{proof}
It suffices to show that $u \prec x'$ and $x \prec u'$.
Clearly, for edges $(u,u')$ and $(x,x')$ we have $u \prec u'$ and $x \prec x'$, respectively.
Let $\pi$ be the crossing point of the two closed segments $[u,u']$ and $[x,x']$. We have that $u \prec \pi \prec u'$ (resp. $x \prec \pi \prec x'$) since the crossing point $\pi$ is on the closed segment $[u,u']$ (resp. $[x,x']$). This implies that $u \prec \pi \prec x'$ and $x \prec \pi \prec u'$, and hence $u \prec x'$ and $x \prec u'$ by transitivity.
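The transitivity argument can be checked numerically on a concrete pair of crossing monotone edges. The following Python sketch uses hypothetical coordinates chosen only for illustration; the function names are ours and not part of the algorithms in this chapter.

```python
def det(a, b):
    # signed area of the parallelogram spanned by vectors a and b
    return a[0] * b[1] - a[1] * b[0]

def crossing_point(u, u2, x, x2):
    # intersection of the closed segments [u,u2] and [x,x2],
    # assumed to cross properly (points in general position)
    d = (u2[0] - u[0], u2[1] - u[1])
    e = (x2[0] - x[0], x2[1] - x[1])
    t = det((x[0] - u[0], x[1] - u[1]), e) / det(d, e)
    return (u[0] + t * d[0], u[1] + t * d[1])

def dominated(a, b):
    # a is (Pareto) dominated by b: a <= b in both coordinates
    return a[0] <= b[0] and a[1] <= b[1]

u, u2 = (1, 0), (2, 3)   # edge (u, u'), with u dominated by u'
x, x2 = (0, 1), (3, 2)   # edge (x, x'), with x dominated by x'
pi = crossing_point(u, u2, x, x2)
# u is dominated by pi, pi by x' (and x by pi, pi by u'), so by
# transitivity u is dominated by x' and x by u'
assert dominated(u, pi) and dominated(pi, x2)
assert dominated(x, pi) and dominated(pi, u2)
```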
\end{proof}
We say that two paths in $G^*$ cross if there is an edge of the first path crossing with an edge of the second path. From now on we consider explicitly the plane representation of $G^*$ with the points in $\mathcal{P}$.
\begin{definition}
For $k \geq 3$, let $\mathcal{P}_k\subseteq \mathcal{P} $ be a point set such that all points in $\mathcal{P}_k$ can be covered by $k$ node-disjoint $s-t$ paths $(Y_1,Y_2,\ldots,Y_k)$.
\end{definition}
To facilitate analysis, we denote a collection of $k$ node-disjoint (but not necessarily non-crossing) paths by $(Y_1,Y_2,\ldots,Y_k)$ and a collection of $k$ node-disjoint, non-crossing paths by $(\mathcal{Y}_1,\mathcal{Y}_2,\ldots,\mathcal{Y}_k)$.
We show an $O(kn\log n)$-time algorithm $\widetilde U_k$ for $k \geq 3$, which takes as input a point set $\mathcal{P}_k$ and outputs a collection $(\mathcal{Y}_1,\mathcal{Y}_2,\ldots,\mathcal{Y}_k)$ of $k$ node-disjoint, non-crossing $s-t$ paths. The paths $(\mathcal{Y}_1,\mathcal{Y}_2,\ldots,\mathcal{Y}_k)$ are given in left-to-right order in their planar representation, that is, $\mathcal{Y}_1$ is the leftmost path and $\mathcal{Y}_k$ is the rightmost path.
The rest of the section is organized as follows. In Subsection \ref{collection}, based on a simple argument, we show that for a point set $\mathcal{P}_k$ there exists at least one collection of $k$ node-disjoint, non-crossing paths $(\mathcal{Y}_1,\mathcal{Y}_2,\ldots,\mathcal{Y}_k)$ that covers all points in $\mathcal{P}_k$. In Subsection \ref{selection} we discuss a subroutine algorithm $\mathcal{S}$ which is employed by algorithm $\widetilde U_k$. Given a point set $\mathcal{P}_k$, algorithm $\mathcal{S}$ computes a collection of $k$ node-disjoint (but not necessarily non-crossing) paths $(Y_1,Y_2,\ldots,Y_k)$ covering all points in $\mathcal{P}_k$. The collection of paths obtained by algorithm $\mathcal{S}$ satisfies some geometrical properties which we use in the analysis of algorithm $\widetilde U_k$. In Subsection \ref{algoUK} we provide the detailed description of algorithm $\widetilde U_k$ and show the proof of Theorem \ref{maintheoremuncrossing}.
\subsection{Existence of $k$ non-crossing paths}
\label{collection}
For a point set $\mathcal{P}_k$ let $(Y_1,Y_2,..,Y_k)$ be a collection of $k$ node-disjoint $s-t$ paths (but not necessarily non-crossing) covering all points in $\mathcal{P}_k$.
\begin{definition}
For two indexes $i,j \in [1,k]$ such that $i \neq j$ and two crossing edges $(u,u') \in Y_i$ and $(x,x') \in Y_j$, we define operation uncross, which replaces edge $(u,u')\in Y_i$ and edge $(x,x') \in Y_j$ with edges $(u,x')$ and $(x,u')$, respectively.
\end{definition}
Essentially, for a collection $Y=(Y_1,Y_2,..,Y_k)$ of $k$ paths covering all points in $\mathcal{P}_k$, operation uncross takes as an input two crossing edges $(u,u') \in Y_i$ and $(x,x') \in Y_j$ and outputs a collection $Y'=Y \setminus{\{Y_i,Y_j\}} \cup \{Y'_i,Y'_j\}$ of $k$ paths. Path $Y'_i$ consists of the sub-path of $Y_i$ from $s$ to point $u$, edge $(u,x')$ and the sub-path of $Y_j$ from $x'$ to $t$. Similarly, path $Y'_j$ consists of the sub-path of $Y_j$ from $s$ to point $x$, edge $(x,u')$ and the sub-path of $Y_i$ from $u'$ to $t$.
Notice that $Y'$ is also a collection of $k$ node-disjoint $s-t$ paths covering all points in $\mathcal{P}_k$: operation uncross only exchanges the suffixes of $Y_i$ and $Y_j$ after the crossing edges, moving the sub-path of $Y_i$ from $u'$ to $t$ into $Y'_j$ and the sub-path of $Y_j$ from $x'$ to $t$ into $Y'_i$. Therefore, every point in $\mathcal{P}_k$ belongs to a path in $Y'$.
For two crossing edges $(u,u') \in Y_i$ and $(x,x') \in Y_j$ with crossing point $\pi$, notice that the length of the closed segment $[u,x']$ is smaller than the sum of the lengths of the closed segments $[u,\pi]$ and $[\pi,x']$, by the triangle inequality. Similarly, the length of the closed segment $[x,u']$ is smaller than the sum of the lengths of the closed segments $[x,\pi]$ and $[\pi,u']$. Hence, operation uncross strictly decreases the total length of the edges in the collection.
Therefore, starting from an arbitrary collection $(Y_1,Y_2,\ldots,Y_k)$ of $k$ node-disjoint $s-t$ paths covering all points in $\mathcal{P}_k$, we can select pairs of crossing edges among the paths in arbitrary order and perform operation uncross. This procedure must terminate, since the total edge length is strictly decreasing and there are finitely many collections of $k$ paths over $\mathcal{P}_k$. Therefore, we have the following corollary.
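The uncrossing procedure can be sketched in a few lines of Python. This is a naive quadratic-time illustration with hypothetical coordinates, not the algorithm $\widetilde U_k$ itself; each path is a list of points from $s$ to $t$, and each uncross swaps the suffixes of two paths after a pair of properly crossing edges.

```python
def orient(o, a, b):
    # orientation of point b relative to the directed segment o -> a
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def edges_cross(p, q, r, s):
    # proper crossing of closed segments [p,q] and [r,s]; segments
    # sharing an endpoint (e.g. at the source or sink) do not count
    return (orient(p, q, r) * orient(p, q, s) < 0 and
            orient(r, s, p) * orient(r, s, q) < 0)

def find_crossing(paths):
    # first pair of crossing edges over all pairs of paths, if any
    for a in range(len(paths)):
        for b in range(a + 1, len(paths)):
            for i in range(len(paths[a]) - 1):
                for j in range(len(paths[b]) - 1):
                    if edges_cross(paths[a][i], paths[a][i + 1],
                                   paths[b][j], paths[b][j + 1]):
                        return a, b, i, j
    return None

def uncross_all(paths):
    # terminates: every uncross strictly decreases total edge length
    while (hit := find_crossing(paths)) is not None:
        a, b, i, j = hit
        A, B = paths[a], paths[b]
        # operation uncross: swap the suffixes after the crossing edges
        paths[a], paths[b] = A[:i + 1] + B[j + 1:], B[:j + 1] + A[i + 1:]
    return paths
```

For example, `uncross_all([[(0,0),(1,0),(2,3),(4,4)], [(0,0),(0,1),(3,2),(4,4)]])` resolves the single crossing of these two monotone paths in one step, and the resulting paths cover the same point set.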
\begin{corollary}
\label{collectionwithPi}
Given a point set $\mathcal{P}_k$ such that all points in $\mathcal{P}_k$ can be covered with $k$ paths, there is at least one collection $(\mathcal{Y}_1,\mathcal{Y}_2,\ldots,\mathcal{Y}_k)$ of $k$ node-disjoint, non-crossing paths that covers all points in $\mathcal{P}_k$.
\end{corollary}
\subsection{Selection algorithm $\mathcal{S}$}
\label{selection}
Asahiro et al. \cite{DBLP:journals/dam/AsahiroHMOSY06} show an $O(n \log n)$-time algorithm $\mathcal{S}$ which takes as input a point set $P_j$ such that all points in $P_j$ can be covered with $j$ paths, and outputs a collection of $j$ node-disjoint $s-t$ paths $(Y_1,Y_2,\ldots,Y_j)$ that cover all points in $P_j$. The paths $(Y_1,Y_2,\ldots,Y_j)$ computed by algorithm $\mathcal{S}$ may cross, but they satisfy some useful geometrical properties. Algorithm $\widetilde{U}_k$ relies on these geometrical properties to obtain a collection of $k$ non-crossing $s-t$ paths.
\begin{definition}
For a point $x \in P_j$ we denote by $D^+_x$ all points $x^{\prime}$ in $P_j$ such that $x \prec x^{\prime}$. Similarly, we denote by $D^-_x$ all points $x^{\prime}$ in $P_j$ such that $x^{\prime} \prec x$.
\end{definition}
Algorithm $\mathcal{S}$ is an iterative process based on a point variable $u$. At each iteration we select the point $x$ in $D^+_u$ such that $\alpha_x < \alpha_{x^{\prime}}$ for all points $x^{\prime}$ in $D^+_u \setminus \{x\}$, append edge $(u,x)$, set $u \leftarrow x$, and repeat until the sink $t$ is selected. Upon termination of this process we have path $Y_1$. To obtain the next path $Y_2$ we repeat the same iterative process for all points in $P_j \setminus P(Y_1)$, where $P(Y_1)$ is the set of all points on path $Y_1$.
As shown in Asahiro et al. \cite{DBLP:journals/dam/AsahiroHMOSY06}, after $j$ repetitions of this iterative process we have a collection of $j$ paths $(Y_1,Y_2,\ldots,Y_j)$ covering all points in $P_j$. For the remaining part of the section we denote by $(Y_1,Y_2,\ldots,Y_j)$ the collection of $j$ paths obtained by algorithm $\mathcal{S}$ over a set of points $P_j$ such that all points in $P_j$ can be covered with $j$ paths.
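The iterative process can be sketched as follows. This is a straightforward quadratic-time illustration with hypothetical coordinates, not the $O(n \log n)$ implementation of Asahiro et al., which requires a more careful data structure.

```python
def dominates(u, x):
    # u is (Pareto) dominated by x: u <= x in both coordinates
    return u[0] <= x[0] and u[1] <= x[1]

def select_paths(points, s, t, j):
    # Sketch of algorithm S: grow each path greedily, always moving from
    # the current point u to the not-yet-covered point dominated by no
    # smaller alpha-coordinate in D+_u, falling back to the sink t when
    # no uncovered dominating point remains.
    remaining = set(points)
    paths = []
    for _ in range(j):
        u, path = s, [s]
        while u != t:
            candidates = [x for x in remaining if dominates(u, x)] or [t]
            x = min(candidates, key=lambda p: p[0])  # smallest alpha
            path.append(x)
            remaining.discard(x)
            u = x
        paths.append(path)
    return paths
```

On the hypothetical input `select_paths([(1,2),(2,1),(2,4),(4,2)], (0,0), (5,5), 2)` the sketch produces two monotone $s-t$ paths covering all four points.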
\begin{definition}
\label{def:rightleft}
We say that a point $v$ is on the left (resp. right) side of path $Y_i$, where $i \in [1,j]$, if the horizontal $\beta_v$ crosses an edge $(u,u') \in Y_i$ and the crossing point $x$ on the closed segment $[u,u']$ satisfies $\alpha_v \leq \alpha_x$ (resp. $\alpha_v \geq \alpha_x$).
\end{definition}
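The side test of this definition can be sketched as a small geometric predicate. The sketch below is our own illustration: it assumes points are $(\alpha,\beta)$ pairs and follows the convention, consistent with how the definition is applied in the proofs of Lemmas \ref{Sprop} and \ref{Sprop1}, that $v$ is on the left side of the path when the crossing point of its horizontal lies weakly to its right.

```python
def side_of_path(v, path):
    # v = (alpha_v, beta_v); path = list of (alpha, beta) points in order.
    # Returns 'left' when the crossing point x of the horizontal beta_v
    # with a path edge satisfies alpha_v <= alpha_x, 'right' when
    # alpha_v >= alpha_x, and None when the horizontal misses the path.
    av, bv = v
    for (u, w) in zip(path, path[1:]):
        lo, hi = sorted((u[1], w[1]))
        if lo <= bv <= hi:
            if w[1] == u[1]:
                ax = min(u[0], w[0])  # degenerate horizontal edge
            else:
                # crossing point of the horizontal beta_v with segment [u, w]
                t = (bv - u[1]) / (w[1] - u[1])
                ax = u[0] + t * (w[0] - u[0])
            return 'left' if av <= ax else 'right'
    return None
```

For the single-edge path from $(0,0)$ to $(2,2)$, the point $(0,1)$ is on the left side and $(3,1)$ is on the right side.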
For an edge $(u,w) \in Y_i$ where $i \in [1,j]$ we denote by $B(u,w)=[\alpha_u,\alpha_w]\times [\beta_u,\beta_w]$ the rectangle formed by the verticals $\alpha_u$ and $\alpha_w$ and the horizontals $\beta_u$ and $\beta_w$. From the description of algorithm $\mathcal{S}$ we obtain the following two corollaries.
\begin{corollary}
\label{nopointsinbox}
For an edge $(u,w) \in Y_i$ where $i \in [1,j]$ it holds that there are no points of paths $Y_{z>i}$ in the rectangle $B(u,w)$.
\end{corollary}
\begin{corollary}
\label{Scor2}
For two indexes $i,i' \in [1,j]$ such that $i< i'$ it holds that all points of path $Y_{i'}$ are on the right side of path $Y_i$.
\end{corollary}
Notice that for two paths $Y_i$ and $Y_{i'}$ such that $i,i' \in [1,j]$ and $i<i'$, it is possible that path $Y_i$ has points on the right side of path $Y_{i'}$ (as paths $Y_i$ and $Y_{i'}$ can cross).
Consider a set of points $P_j$ such that all points in $P_j$ can be covered with $j$ paths. Let $(Y_1,Y_2,\ldots,Y_j)$ be a collection of $j$ paths obtained by algorithm $\mathcal{S}$, covering all points in $P_j$. Lemmas \ref{Sprop}, \ref{Sprop1} and \ref{sprop2} specify the geometrical properties satisfied by paths $(Y_1,Y_2,\ldots,Y_j)$.
\begin{lemma}
\label{Sprop}
For two indexes $i,i' \in [1,j]$ such that $i<i'$ and an edge $(x,x^{\prime}) \in Y_{i'}$, let $v_1,v_2,\ldots,v_m$ be all points of path $Y_i$ (ordered topologically) within the horizontals $\beta_x$ and $\beta_{x'}$ on the right side of path $Y_{i'}$. It holds that $\alpha_x < \alpha_{v_\ell} < \alpha_{x^{\prime}}$ for $\ell=1,2,\ldots,m$.
\end{lemma}
\begin{proof}
We refer the reader to Figure \ref{fig:withintriangle} for the schematic representation of the proof.
Notice that since points $v_1,v_2,\ldots,v_m$ are given in topological order we naturally have $\alpha_{v_\ell} < \alpha_{v_{\ell+1}}$ for $\ell=1,2,\ldots,m-1$. Thus, it suffices to show that $\alpha_x < \alpha_{v_1}$ and $\alpha_{v_m} \leq \alpha_{x'}$. The inequality $\alpha_x < \alpha_{v_1}$ holds because point $v_1$ is on the right side of edge $(x,x^{\prime})$ (see Definition \ref{def:rightleft}).
Assume towards contradiction that $\alpha_{v_m} > \alpha_{x^{\prime}}$. All points in $Y_i$ after $v_m$ must be above the horizontal $\beta_{v_m}$ and to the right of the vertical $\alpha_{v_m}$. Thus, the edge $(v_m,\sigma(v_m))$, where $\sigma(v_m)$ is the successor of $v_m$ in $Y_i$, crosses the horizontal $\beta_{x'}$ at a crossing point $p$ such that $\alpha_p > \alpha_{x'}$. This means that $x'$ is on the left side of $Y_i$. However, this contradicts Corollary \ref{Scor2}, since $x^{\prime}$ is a point of path $Y_{i'}$ with $i'>i$ and must therefore be on the right side of $Y_i$.
\end{proof}
\begin{lemma}
\label{Sprop1}
For an edge $(x,x') \in Y_z$ where $z \in [1,j]$, let $f_i$ and $f'_i$ be the first point of path $Y_i$, for $i=1,2,\ldots,z-1$, above the horizontal $\beta_x$ and above the horizontal $\beta_{x'}$, respectively. It holds that for $i=1,2,\ldots,z-1$ point $f_i$ is on the left of the vertical $\alpha_x$ and point $f'_i$ is on the left of the vertical $\alpha_{x'}$.
\end{lemma}
\begin{proof}
We refer the reader to Figure \ref{fig:pointsbelowabove} for the schematic representation of the proof.
Consider an edge $(x,x') \in Y_z$ where $z \in [1,j]$ and assume towards contradiction that for some $i<z$ point $f_i$ is not placed on the left of the vertical $\alpha_x$. Let $\pi(f_i)$ be the predecessor of $f_i$ in $Y_i$. Since $f_i$ is the first point of $Y_i$ above the horizontal $\beta_x$, edge $(\pi(f_i),f_i)$ must cross the horizontal $\beta_x$. Thus, we have $\beta_{\pi(f_i)} \leq \beta_x \leq \beta_{f_i}$.
Because $x$ is a point on path $Y_{z>i}$, according to Corollary \ref{Scor2} it must be on the right side of path $Y_i$. Therefore the horizontal $\beta_x$ must cross with edge $(\pi(f_i),f_i)$ at a point $p$ such that $\alpha_p < \alpha_x$ which means that point $\pi(f_i)$ is on the left of the vertical $\alpha_x$.
If $f_i$ is not on the left of the vertical $\alpha_x$ then $x$ satisfies the inequality $\alpha_{\pi(f_i)} < \alpha_x < \alpha_{f_i}$. As discussed above, we have that $\beta_{\pi(f_i)} \leq \beta_x \leq \beta_{f_i}$ and therefore $\pi(f_i) \prec x \prec f_i$. However, this contradicts Corollary \ref{nopointsinbox} because there is an edge $(\pi(f_i),f_i) \in Y_i$ and a point $x$ on $Y_{z>i}$ such that $x$ is within the rectangle $B(\pi(f_i),f_i)$.
Similarly, assume towards contradiction that $f^{\prime}_i$ is not on the left of the vertical $\alpha_{x^{\prime}}$. Let $\pi(f^{\prime}_i)$ be the predecessor of $f^{\prime}_i$ in $Y_i$. Since $f^{\prime}_i$ is the first point of $Y_i$ above the horizontal $\beta_{x^{\prime}}$, the horizontal $\beta_{x^{\prime}}$ crosses edge $(\pi(f^{\prime}_i),f^{\prime}_i)$. Thus, we have that $\beta_{\pi(f'_i)} < \beta_{x'} < \beta_{f'_i}$.
Because $x^{\prime}$ is a point on $Y_{z>i}$, according to Corollary \ref{Scor2} it must be on the right side of path $Y_i$. Thus, the horizontal $\beta_{x^{\prime}}$ crosses with edge $(\pi(f^{\prime}_i),f^{\prime}_i)$ at a point $p$ such that $\alpha_p < \alpha_{x^{\prime}}$. Therefore, point $\pi(f^{\prime}_i)$ must be on the left of the vertical $\alpha_{x^{\prime}}$.
If $f^{\prime}_i$ is not on the left of the vertical $\alpha_{x^{\prime}}$ then $x^{\prime}$ satisfies the inequality $\alpha_{\pi(f^{\prime}_i)} < \alpha_{x'} < \alpha_{f^{\prime}_i}$. We also have $\beta_{\pi(f'_i)} < \beta_{x'} < \beta_{f'_i}$, which implies that $\pi(f'_i) \prec x' \prec f'_i$. This contradicts Corollary \ref{nopointsinbox}, because $x'$ is a point on $Y_{z>i}$ within the rectangle $B(\pi(f'_i),f'_i)$, where $(\pi(f'_i),f'_i) \in Y_i$.
\end{proof}
\begin{figure}
\centering
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[scale=0.5]{Chapter3/C3Images/withintriangle.png}
\caption{}
\label{fig:withintriangle}
\end{subfigure}%
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[scale=0.5]{Chapter3/C3Images/pointsbelowabove.png}
\caption{}
\label{fig:pointsbelowabove}
\end{subfigure}
\caption{Figure \ref{fig:withintriangle} shows the schematic representation of the proof of Lemma \ref{Sprop}. Figure \ref{fig:pointsbelowabove} shows the schematic representation of the proof of Lemma \ref{Sprop1}.}
\end{figure}
\begin{lemma}
\label{sprop2}
For any $i \in [1,j]$ and a point $x$ on path $Y_i$ there is no collection of $i-1$ paths that covers all points on paths $Y_1,Y_2,..,Y_{i-1}$ and point $x$.
\end{lemma}
\begin{proof}
Assume towards contradiction that for some $i \in [1,j]$ and a point $x$ on path $Y_i$, there is a collection of $i-1$ paths that covers all points on paths $Y_1,Y_2,\ldots,Y_{i-1}$ and additionally point $x$. Notice that if such a collection exists then there must be at least one point $u$ on some path $Y_{z<i}$ such that $u \prec x$, since some path of the collection must append edge $(u,x)$ in order to cover point $x$. Let $\sigma_u$ be the successor of point $u$ in $Y_z$. From Corollary \ref{nopointsinbox} we have that there are no points of paths $Y_{i>z}$ within the rectangle $B(u,\sigma_u)$, which contradicts $u \prec x$, since $x$ is on path $Y_i$.
\end{proof}
\subsection{Algorithm $\widetilde U_k$ description}
\label{algoUK}
For $k \geq 2$ algorithm $\widetilde U_k$ takes as an input a set of points $\mathcal{P}_k$ such that all points in $\mathcal{P}_k$ can be covered with $k$ node-disjoint $s-t$ paths and outputs a collection $(\mathcal{Y}_1,\mathcal{Y}_2,\ldots,\mathcal{Y}_k)$ of $k$ node-disjoint, non-crossing paths. The description of algorithm $\widetilde{U}_k$ for $k \geq 2$ is given below.
For $k \geq 2$ the working of algorithm $\widetilde U_k$ is described as a sequence of algorithms $U_k,U_{k-1},\ldots,U_1$ such that for $j=k,k-1,\ldots,1$ algorithm $U_j$ computes the $j^{th}$ path $\mathcal{Y}_j$ of the collection $(\mathcal{Y}_1,\mathcal{Y}_2,\ldots,\mathcal{Y}_k)$. That is, algorithm $\widetilde U_k$ outputs the paths $(\mathcal{Y}_1,\mathcal{Y}_2,\ldots,\mathcal{Y}_k)$ from right to left, starting with the rightmost path $\mathcal{Y}_k$ and ending with the leftmost path $\mathcal{Y}_1$.
For $j =k,k-1,\ldots,1$ the input to algorithm $U_j$ is a set of points $P_j$ such that all points in $P_j$ can be covered with $j$ paths. We call this condition the \textit{input condition} of algorithm $U_j$. For the base case $j=k$, the input condition is satisfied since $P_k=\mathcal{P}_k$. For $j<k$ the input condition of algorithm $U_j$ will be satisfied inductively, as explained below.
For $j =k,k-1,\ldots,1$, algorithm $U_j$ gives as an output the $j^{th}$ path $\mathcal{Y}_j$ of the collection $(\mathcal{Y}_1,\mathcal{Y}_2,..,\mathcal{Y}_k)$. Let $P(\mathcal{Y}_j)$ be all points on path $\mathcal{Y}_j$. For $j=k,k-1,..,1$ the \textit{output condition} of algorithm $U_j$ is described by the two properties shown below:
\begin{itemize}
\item Property 1: All points in $P_j \setminus{P(\mathcal{Y}_j)}$ can be covered with $(j-1)$ paths.
\item Property 2: Any collection of $(j-1)$ paths that covers all points in $P_j \setminus{P(\mathcal{Y}_j)}$ does not cross with path $\mathcal{Y}_j$.
\end{itemize}
For $j=k,k-1,\ldots,2$ the input to algorithm $U_{j-1}$ is the set of points $P_j \setminus{P(\mathcal{Y}_j)}$. It is easy to see that for $j \leq k$ if algorithm $U_j$ satisfies Property 1 of the output condition, then the set of points $P_j \setminus{P(\mathcal{Y}_j)}$ satisfies the input condition of algorithm $U_{j-1}$. That is, all points in $P_{j} \setminus{P(\mathcal{Y}_{j})}$ can be covered with $j-1$ paths.
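The peeling structure of $\widetilde U_k$ can be sketched as a simple driver loop. In the sketch below, `U` is a hypothetical callable standing in for algorithm $U_j$ and is assumed to return the set of points of the output path $\mathcal{Y}_j$ satisfying Properties 1 and 2; it is our own illustration of the control flow, not the authors' code.

```python
def uncross(points, k, U):
    # Driver loop for algorithm U~_k: peel paths from rightmost (j = k)
    # down to leftmost (j = 1).  U(P, j) is a hypothetical callable
    # standing in for algorithm U_j.
    out, P = [], set(points)
    for j in range(k, 0, -1):
        Yj = U(P, j)          # path satisfying Properties 1 and 2
        out.append(Yj)
        P -= set(Yj)          # Property 1: remainder coverable by j-1 paths
    return list(reversed(out))  # paths in left-to-right order
```

With a toy stand-in for `U` that peels off the largest remaining element until the last round, `uncross({1, 2, 3}, 2, U)` returns the two "paths" `[[1, 2], [3]]` in left-to-right order.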
For $j=k,k-1,\ldots,1$ and a point set $P_j$ such that all points in $P_j$ can be covered with $j$ paths, algorithm $U_j$ consists of two computational steps. The first step obtains a collection of $j$ paths $(Y_1,Y_2,\ldots,Y_j)$ covering all points in $P_j$ by calling algorithm $\mathcal{S}$, as described in Subsection \ref{selection}.
The second step traverses the edges of the rightmost path $Y_j$ from $s$ to $t$ in order to build the output path $\mathcal{Y}_j$. The output path $\mathcal{Y}_j$ contains all points of path $Y_j$ and some additional points of paths $Y_1,Y_2,\ldots,Y_{j-1}$. For an edge $(x,x') \in Y_j$ algorithm $U_j$ builds the segment of the output path $\mathcal{Y}_j$ from $x$ to $x'$ by performing either operation $A$ or operation $B$. Before we describe operation $A$ and operation $B$ for an edge $(x,x') \in Y_j$ we provide the following definition.
\begin{definition}
\label{defprop2}
Recall that $P_j \setminus{P(Y_j)}$ is the set of all points on paths $Y_1,Y_2,\ldots,Y_{j-1}$.
For an edge $(x,x') \in Y_j$ we denote by $P_{xx'}$ the set of all points in $P_j \setminus{P(Y_j)}$ within the horizontals $\beta_x$ and $\beta_{x'}$. We denote by $V_{xx'}$ the set of all points on the output path $\mathcal{Y}_j$ within the horizontals $\beta_x$ and $\beta_{x'}$.
\end{definition}
Operation $A$ sets $V_{xx'}= \emptyset$, and therefore the segment of $\mathcal{Y}_j$ from $x$ to $x'$ simply consists of edge $(x,x')$. Operation $B$ sets $V_{xx'}= (v_1,v_2,\ldots,v_m)$, where $v_i \in P_{xx'}$ for $i=1,2,\ldots,m$, and therefore the segment of $\mathcal{Y}_j$ from $x$ to $x'$ is the path $(x,v_1,v_2,\ldots,v_m,x')$. We will shortly explain in detail how the points $v_1,v_2,\ldots,v_m$ are selected.
Essentially, operation $A$ simply considers the next edge in $Y_j$, whereas operation $B$ can be seen as $m$ insertions where, for $i=1,2,\ldots,m$, the $i^{th}$ insertion removes a point $v_i \in P_{xx'}$ from a path $Y_{z<j}$ and adds $v_i$ to the segment of $\mathcal{Y}_j$ from $x$ to $x'$.
\subsubsection{Property 1}
\label{property1}
For $j=k,k-1,\ldots,1$ and a point set $P_j$ such that all points can be covered with $j$ paths, let $(Y_1,Y_2,\ldots,Y_j)$ be the collection of paths obtained by algorithm $\mathcal{S}$, covering all points in $P_j$. According to Lemma \ref{sprop2}, there is no collection of $(j-1)$ paths that covers all points in~$(P_j \setminus{P(Y_j)}) \cup \{x\}$ where $x$ is a point on path $Y_j$. This implies that for $j=k,k-1,\ldots,1$
if algorithm $U_j$ satisfies Property 1 of the {output condition}, then any point $x$ on path $Y_j$ is also a point on path $\mathcal{Y}_j$.
\begin{lemma}
\label{LemmaProperty1}
For $j=k,k-1,\ldots,2$ algorithm $U_j$ takes as an input a point set $P_j$ such that all points can be covered with $j$ paths and outputs a path $\mathcal{Y}_j$ such that all points in $P_j \setminus{P(\mathcal{Y}_j)}$ can be covered with $j-1$ paths.
\end{lemma}
\begin{proof}
We show the Lemma by induction. For the base case $j=k$, we assume that the point set $P_k$ can be covered with $k$ paths and show that algorithm $U_k$ outputs a path $\mathcal{Y}_k$ such that all points in $P_k \setminus{P(\mathcal{Y}_k)}$ can be covered with $k-1$ paths.
For $j=k$ the first step of algorithm $U_k$ obtains an initial collection of $k$ paths $(Y_1,Y_2,..,Y_k)$ using algorithm $S$. If all points in $P_k$ can be covered with $k$ paths, then naturally all points in $P_k \setminus{P(Y_k)}$ or any subset $X \subseteq P_k \setminus{P(Y_k)}$ can be covered with $k-1$ paths.
Algorithm $U_k$ builds path $\mathcal{Y}_k$ by considering the edges of $Y_k$ from $s$ to $t$, and for an edge $(x,x') \in Y_k$ it performs either operation $A$ or operation $B$ to build the segment of $\mathcal{Y}_k$ from $x$ to $x'$. Notice that for an edge $(x,x') \in Y_k$, neither operation $A$ nor operation $B$ removes a point from path $Y_k$.
Therefore, for path $\mathcal{Y}_k$ we have that $P(\mathcal{Y}_k)=P(Y_k) \cup V$ where $V \subseteq P_k \setminus{P(Y_k)}$. This implies that $P_k \setminus{P(\mathcal{Y}_k)} \subseteq P_k \setminus{P(Y_k)}$, since $P(Y_k) \subseteq P(\mathcal{Y}_k)$. Therefore, all points in $P_k \setminus{P(\mathcal{Y}_k)}$ can be covered with $k-1$ paths.
We now assume that our induction holds for $k,k-1,\ldots,j+1$ and show that it holds for $j$. If our induction holds for $j+1$ then the set of points $P_{j+1} \setminus{P(\mathcal{Y}_{j+1})}$ can be covered with $j$ paths. We show that when algorithm $U_j$ takes as input the point set $P_j=P_{j+1} \setminus{P(\mathcal{Y}_{j+1})}$ and outputs a path $\mathcal{Y}_j$, the point set $P_j \setminus{P(\mathcal{Y}_j)}$ can be covered with $j-1$ paths. To complete the proof we simply follow the same methodology as for $j=k$.
\end{proof}
\subsubsection{Property 2}
\label{Property2}
For $j=k,k-1,..,2,1$ and a point set $P_j$ such that all points can be covered with $j$ paths let $(Y_1,Y_2,\ldots,Y_j)$ be the collection of paths obtained by algorithm $\mathcal{S}$. Recall that for an edge $(x,x') \in Y_j$, we denote by $f_i$ and $f'_i$ for $i=1,2,...,j-1$ the first point of path $Y_i$ above the horizontal $\beta_{x}$ and $\beta_{x'}$, respectively. Further, according to Definition \ref{defprop2}, $P_{xx'}$ is the set of all points in $P_j \setminus{P(Y_j)}$ within the horizontals $\beta_x$ and $\beta_{x'}$ and $V_{xx'}$ is the set of all points on the segment of $\mathcal{Y}_j$ from $x$ to $x'$.
\begin{definition}
\label{def:invariantIxx}
For an edge $(x,x') \in Y_j$ we say that invariant $I_{xx'}$ is satisfied if no edge between two points $u,u' \in (P_{xx'} \cup (f'_1,f'_2,\ldots,f'_{j-1})) \setminus{V_{xx'}}$ can cross the segment of $\mathcal{Y}_j$ from $x$ to $x'$.
\end{definition}
For an edge $(x,x') \in Y_j$ let $(x,x',\pi)$ be the triangle formed by the closed segments $[x,x'],[x,\pi]$ and $[x',\pi]$ where $\pi$ is the crossing point of the vertical $\alpha_x$ with the horizontal $\beta_{x'}$. Notice that any point $u \in P_{xx'}$ within the triangle $(x,x',\pi)$ must satisfy $x \prec u \prec x'$ according to Lemma \ref{Sprop}. Thus, for an edge $(x,x') \in Y_j$ there are only two cases, as shown below.
\begin{itemize}
\item Case 1: There is no point $u \in P_{xx'}$ within the triangle $(x,x',\pi)$.
\item Case 2: There is at least one point $u \in P_{xx'}$ within the triangle $(x,x',\pi)$.
\end{itemize}
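The case distinction reduces to a point-in-triangle test for the right triangle $(x,x',\pi)$. The sketch below is our own illustration, under the assumption that the triangle lies above the segment $[x,x']$, i.e. to the left of the directed edge from $x$ to $x'$.

```python
def in_triangle(u, x, xp):
    # Is u strictly inside the triangle (x, x', pi), where pi is the
    # crossing of the vertical through x with the horizontal through x'?
    # Assumes the triangle lies above (left of) the directed edge (x, x').
    (au, bu), (ax, bx), (axp, bxp) = u, x, xp
    # bounded by the vertical alpha_x and the horizontal beta_{x'}
    if not (ax < au and bu < bxp):
        return False
    # strictly above the line through x and x' (left of the directed edge)
    return (axp - ax) * (bu - bx) - (bxp - bx) * (au - ax) > 0
```

For the edge from $x=(0,0)$ to $x'=(4,4)$, the point $(1,3)$ belongs to Case 2 (inside the triangle) while $(3,1)$ lies on the other side of the edge.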
If an edge $(x,x') \in Y_j$ belongs to Case 1, algorithm $U_j$ performs operation $A$ which sets $V_{xx'}=\emptyset$, whereas if edge $(x,x')$ belongs to Case 2, algorithm $U_j$ performs operation $B$, as defined below.
\begin{definition}
For an edge $(x,x') \in Y_j$ we define $\mathcal{C}_{xx'}=(v_1,v_2,...,v_m)$ to be the convex hull of all points in $P_{xx'}$ within the triangle $(x,x',\pi)$. For an edge $(x,x') \in Y_j$ operation $B$ sets $V_{xx'}=\mathcal{C}_{xx'}$.
\end{definition}
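Under the same orientation assumption (triangle above the segment $[x,x']$), operation $B$ amounts to computing the chain of the convex hull between $x$ and $x'$ on the triangle side. The following monotone-chain sketch is our own illustration, with $x$ and $x'$ used as chain anchors; it is not the authors' implementation, which uses Graham's scan.

```python
def cross(o, a, b):
    # z-component of the cross product (a - o) x (b - o)
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def operation_b(x, xp, pts):
    # pts: points of P_{xx'} inside the triangle (x, x', pi), assumed to
    # lie above the segment [x, x'].  Returns the hull chain (v_1, ..., v_m)
    # from x to x' on the triangle side (the upper chain).
    chain = [x]
    for p in sorted(pts) + [xp]:
        while len(chain) >= 2 and cross(chain[-2], chain[-1], p) >= 0:
            chain.pop()
        chain.append(p)
    return chain[1:-1]  # drop the anchors x and x'
```

For example, with $x=(0,0)$, $x'=(4,4)$ and interior points $(1,3)$ and $(2,2.5)$, only $(1,3)$ survives on the hull chain, so the segment of $\mathcal{Y}_j$ becomes $(x,(1,3),x')$.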
\begin{lemma}
\label{case1indedge}
If an edge $(x,x') \in Y_j$ belongs to Case 1, algorithm $U_j$ performs operation $A$ and invariant $I_{xx'}$ is satisfied.
\end{lemma}
\begin{proof}
Operation $A$ sets $V_{xx'}=\emptyset$ and therefore the segment of $\mathcal{Y}_j$ from $x$ to $x'$ simply consists of edge $(x,x')$. Thus, it is sufficient to show that no edge between two points $u,u' \in P_{xx'} \cup (f'_1,f'_2,\ldots,f'_{j-1})$ crosses edge $(x,x')$. Because edge $(x,x')$ belongs to Case 1, all points in $P_{xx'}$ lie in the exterior of the triangle $(x,x',\pi)$. Thus, any point $u \in P_{xx'}$ is on the left of edge $(x,x')$.
Therefore, any edge $(u,u')$ such that $u,u' \in P_{xx'}$ can not cross edge $(x,x')$.
For the special case where the first edge $(s,x) \in Y_j$ belongs to Case 1, then an edge $(s,u')$ where $u' \in P_{xx'}$ can not cross edge $(s,x)$ because they share a point.
We now consider an edge of the form $(u,f'_i)$ for any $i \in [1,j-1]$ such that $u \in P_{xx'}$. According to Lemma \ref{Sprop1}, for $i=1,2,..,j-1$ we have that $f'_i$ is on the left of vertical $\alpha_{x'}$. Since every point in $P_{xx'}$ is on the left of edge $(x,x')$ we have that edge $(u,f'_i)$ can not cross edge $(x,x')$. For the special case where the first edge $(s,x) \in Y_j$ belongs to Case 1, trivially an edge $(s,f'_i)$ can not cross edge $(s,x)$ because they share a point.
We conclude that any edge $(u,u')$ where $u,u' \in P_{xx'} \cup (f'_1,f'_2,..,f'_{j-1})$ does not cross the segment of $\mathcal{Y}_j$ from $x$ to $x'$ and therefore invariant $I_{xx'}$ is satisfied. This completes the proof.
\end{proof}
\begin{lemma}
\label{Case2ConvH}
If an edge $(x,x') \in Y_j$ belongs to Case 2, algorithm $U_j$ performs operation $B$, sets $V_{xx'}=\mathcal{C}_{xx'}$ and invariant $I_{xx'}$ is satisfied.
\end{lemma}
\begin{proof}
For an edge $(x,x') \in Y_j$ that belongs to Case 2 let $\mathcal{C}_{xx'}=(v_1,v_2,\ldots,v_m)$ be the convex hull of all points in $P_{xx'}$ within the triangle $(x,x',\pi)$. Because $\mathcal{C}_{xx'}=V_{xx'}$ it suffices to show that an edge $(u,u')$ such that $u,u' \in (P_{xx'}\cup (f'_1,f'_2,\ldots,f'_{j-1})) \setminus{\mathcal{C}_{xx'}}$ can not cross the segment of $\mathcal{Y}_j$ from $x$ to $x'$. That is, edge $(u,u')$ can not cross any of the edges $(x,v_1),\ldots,(v_i,v_{i+1}),\ldots, (v_m,x')$.
Notice that for $i=1,2,\ldots,m-1$ we have that $v_i \prec v_{i+1}$ because $v_i$ and $v_{i+1}$ are points in the convex hull $\mathcal{C}_{xx'}$. We also have $x \prec v_1$ and $v_m \prec x'$ since, according to Lemma \ref{Sprop}, for every point $u \in P_{xx'}$ within the triangle $(x,x',\pi)$ we have $x \prec u \prec x'$.
We refer the reader to Figure \ref{fig:convexhull} for the schematic representation of the proof.
By convexity, an edge between two points $u,u' \in P_{xx'} \setminus{\mathcal{C}_{xx'}}$ can not cross edge $(v_i,v_{i+1})$ for $i=1,2,\ldots,m-1$, as this would imply that one of the points $u,u'$ is in the exterior of the convex hull. For the same reason, an edge between two points $u,u' \in P_{xx'} \setminus{\mathcal{C}_{xx'}}$ can not cross edge $(x,v_1)$ or edge $(v_m,x')$.
We now consider an edge of the form $(u,f'_z)$ where $z \in [1,j-1]$ such that $u \in P_{xx'} \setminus{C_{xx'}}$. Consider the convex body which extends above the horizontal $\beta_{x'}$ by taking the vertical $\alpha_{x'}$, as shown in Figure \ref{fig:convexhull}.
Point $u$ is in the interior of the convex body. According to Lemma \ref{Sprop1}, for $z=1,2,\ldots,j-1$ we have that $f'_z$ is on the left of the vertical $\alpha_{x'}$ and therefore $f'_z$ is also in the interior of the convex body. Thus, edge $(u,f'_z)$ can not cross any of the edges $(x,v_1),\ldots,(v_i,v_{i+1}),\ldots, (v_m,x')$. This completes the proof.
\end{proof}
\begin{figure}
\centering
\includegraphics[scale=0.46]{Chapter3/C3Images/convexhull.png}
\caption{The schematic representation of the proof for Lemma \ref{Case2ConvH}.}
\label{fig:convexhull}
\end{figure}
\begin{lemma}
\label{LemmaProperty2}
For $j=k,k-1,\ldots,2$ algorithm $U_j$ takes as an input a point set $P_j$ such that all points in $P_j$ can be covered with $j$ paths and outputs a path $\mathcal{Y}_j$ such that any collection of $j-1$ paths that covers all points in $P_j \setminus{P(\mathcal{Y}_j)}$ does not cross with path $\mathcal{Y}_j$.
\end{lemma}
\begin{proof}
For $j=k,k-1,\ldots,2$, let $(Y_1,Y_2,\ldots,Y_j)$ be the collection of $j$ paths covering $P_j$ obtained by algorithm $\mathcal{S}$. Algorithm $U_j$ traverses the edges of path $Y_j$ from $s$ to $t$ and when an edge $(x,x') \in Y_j$ is considered, it performs operation $A$ if edge $(x,x')$ belongs to Case 1 and operation $B$ if edge $(x,x')$ belongs to Case 2. According to Lemmas \ref{case1indedge} and \ref{Case2ConvH}, for an edge $(x,x') \in Y_j$ when algorithm $U_j$ performs either operation $A$ or operation $B$, invariant $I_{xx'}$ is satisfied.
According to Definition \ref{def:invariantIxx}, if invariant $I_{xx'}$ is satisfied for an edge $(x,x') \in Y_j$ then no edge between two points $u,u' \in (P_{xx'} \cup (f'_1,f'_2,\ldots,f'_{j-1})) \setminus{V_{xx'}}$ can cross the segment of $\mathcal{Y}_j$ from $x$ to $x'$. Summing over all edges $(x,x') \in Y_j$, invariant $I_{xx'}$ implies that any edge $(u,u')$ such that $u,u' \in P_j \setminus{P(\mathcal{Y}_j)}$ can not cross with an edge of path $\mathcal{Y}_j$. Thus, any collection of $j-1$ paths that covers all points in $P_j \setminus{P(\mathcal{Y}_j)}$ can not cross with path $\mathcal{Y}_j$.
\end{proof}
\begin{proof}[Proof of Theorem \ref{maintheoremuncrossing}]
We show the proof of Theorem \ref{maintheoremuncrossing} by induction using Lemmas \ref{LemmaProperty1} and \ref{LemmaProperty2}. For $k \geq 2$, algorithm $\widetilde U_k$ consists of the sequence of algorithms $U_k,U_{k-1},\ldots,U_2,U_1$. We show that for $j=k,k-1,\ldots,2,1$ algorithm $U_j$ computes the $j^{th}$ path $\mathcal{Y}_j$ of a collection of $k$ node-disjoint, non-crossing $s-t$ paths $(\mathcal{Y}_1,\mathcal{Y}_2,\ldots,\mathcal{Y}_k)$. The paths are given in left to right order in their planar representation, meaning that $\mathcal{Y}_1$ is the leftmost path and $\mathcal{Y}_{k}$ is the rightmost path. Algorithm $\widetilde U_k$ iteratively outputs the paths from right to left (\textit{i.e.} from $\mathcal{Y}_k$ to $\mathcal{Y}_1$).
For the base case $j=k$, algorithm $U_k$ takes as input a set of points $P_k$ such that all points in $P_k$ can be covered with $k$ paths and gives as an output path $\mathcal{Y}_k$. According to Lemma \ref{LemmaProperty1} we have that all points in $P_k \setminus{P(\mathcal{Y}_k)}$ can be covered with $k-1$ paths, which means that the set of points $P_k \setminus{P(\mathcal{Y}_k)}$ is a valid input for algorithm $U_{k-1}$. According to Lemma \ref{LemmaProperty2} we have that any collection of $k-1$ paths covering all points in $P_k \setminus{P(\mathcal{Y}_k)}$ can not cross with path $\mathcal{Y}_k$.
We now assume that our induction holds for $i=k,k-1,\ldots,j+1$ and show that it also holds for $j$. According to our inductive hypothesis, we have a collection of $(k-j)$ non-crossing $s-t$ paths $\mathcal{Y}_k,\mathcal{Y}_{k-1},\ldots, \mathcal{Y}_{j+1}$ and a set of points $P_j=\mathcal{P}_k \setminus{P(\mathcal{Y}_k,\mathcal{Y}_{k-1},\ldots, \mathcal{Y}_{j+1})}$ such that all points in $P_j$ can be covered with $j$ paths.
Furthermore, any collection of $j$ paths over $P_j$ can not cross with path $\mathcal{Y}_{j+1}$. This implies that any collection of $j$ paths over $P_j$ can not cross with paths $\mathcal{Y}_{j+2},\ldots,\mathcal{Y}_{k-1},\mathcal{Y}_k$ since they are on the right side of $\mathcal{Y}_{j+1}$ and do not cross pairwise.
Algorithm $U_j$ takes point set $P_j$ as an input and outputs path $\mathcal{Y}_j$. According to Lemma \ref{LemmaProperty1}, the set of points $P_j \setminus{P(\mathcal{Y}_j)}$ can be covered with $j-1$ paths, which means that point set $P_j \setminus{P(\mathcal{Y}_j)}$ is a valid input for algorithm $U_{j-1}$. According to Lemma \ref{LemmaProperty2}, any collection of $j-1$ paths that covers all points in $P_j \setminus{P(\mathcal{Y}_j)}$ can not cross with path $\mathcal{Y}_j$.
Notice that the computed path $\mathcal{Y}_j$ and any collection of $j-1$ paths that covers all points in $P_j \setminus{P(\mathcal{Y}_j)}$ is a collection of $j$ paths that covers all points in $P_j$. According to the induction hypothesis, any collection of $j$ paths that covers all points in $P_j$ can not cross with paths $\mathcal{Y}_k,\mathcal{Y}_{k-1},\ldots, \mathcal{Y}_{j+1}$. Thus, the computed path $\mathcal{Y}_j$ can not cross with paths $\mathcal{Y}_k,\mathcal{Y}_{k-1},\ldots, \mathcal{Y}_{j+1}$.
To complete the proof it remains to show that algorithm $\widetilde U_k$ requires $O(k n \log n)$ time. We show that for $j=k,k-1,\ldots,2,1$ algorithm $U_j$ requires $O(n \log n)$ time. For any $j \in [1,k]$ algorithm $U_j$ consists of two computational steps. The first step obtains a collection of $j$ paths $(Y_1,Y_2,\ldots,Y_j)$ using algorithm $\mathcal{S}$, which requires $O(n \log n)$ time, as shown in Asahiro et al.~\cite{DBLP:journals/dam/AsahiroHMOSY06}. The second step traverses the edges of the rightmost path $Y_j$ from $s$ to $t$ and for an edge $(x,x') \in Y_j$ performs either operation $A$ or operation $B$.
It is easy to see that operation $A$ requires $O(1)$ time since we simply consider the next edge of $Y_j$. We recall that for an edge $(x,x') \in Y_j$ we denote by $P_{xx'}$ the set of all points on paths $Y_1,Y_2,\ldots,Y_{j-1}$ within the horizontals $\beta_x$ and $\beta_{x'}$. For an edge $(x,x') \in Y_j$, operation $B$ computes the convex hull of all points in $P_{xx'}$ using Graham's scan algorithm \cite{Cormen2001introduction} and therefore the running time of operation $B$ is $O(|P_{xx'}| \log |P_{xx'}|)$. Summing over all edges of path $Y_j$, it is easy to see that the total running time of all operations $B$ is $O(n \log n)$ since the input point set $P_j$ can have at most $n$ points.
\end{proof}
\section{Conclusion and Future Work}
We study the optimisation problem of servicing $n$ timed requests on a line by $k$ robots, a generalisation of the Ball Collecting Problem \cite{DBLP:journals/dam/AsahiroHMOSY06} for arbitrary ball weights. The optimisation problem is modelled as a minimum cost flow problem on a flow network $\mathcal{N}$, which can be implicitly represented by a set of points in the two-dimensional plane.
We show an algorithm with the running time of $O(k^{2k}n \log^{2k} n)$ for computing a minimum-cost flow of value $k$
in $\mathcal{N}$, which improves the previous upper bound of $O(kn^2)$ if $k$ is considered constant.
For $k \geq 2$, a natural question is whether there exists an algorithm with the running time of $O(kn \log^c n)$ for some constant $c \geq 1$ (or ideally independent of $k$), that computes a minimum-cost flow of value $k$ in the flow network $\mathcal{N}$.
For $k \geq 2$, we compute a minimum cost flow of value $k$ in $\mathcal{N}$ by iteratively finding an $s-t$ shortest path in the residual network $\mathcal{N}_i$ for $i=1,2,\ldots,k-1$. For $k=1$, an $s-t$ shortest path is computed in the standard way using appropriate data structures \cite{Lueker197828} for efficiency.
Our algorithm is based on the analysis of the geometric structure of non-self-crossing shortest paths, using the implicit representation of the flow network.
We also rely on the fact that for $k \geq 2$, a minimum-cost flow of value $k$ can be represented by $k$ non-crossing red paths $\mathcal{Y}_1,\mathcal{Y}_2,\ldots,\mathcal{Y}_{k}$ (Theorem \ref{maintheoremuncrossing}). We do not know how to find efficiently a non-self-crossing $s-t$ shortest path in the residual network without the assumption that the red paths are non-crossing. We also do not know how to find efficiently an $s-t$ shortest path that does not cross the red paths, and subsequently ensures that the new $k$ paths do not cross, if we obtain the flow in the usual way. This is also why we need the follow-up computation of
un-crossing (algorithm $\widetilde{U}_k$).
Faster algorithms may depend on the existence of shortest paths with more specialised structure, for example, non-self-crossing shortest paths which additionally do not cross the red paths. For $k=2$, we do not rely on the existence of non-self-crossing shortest paths (Theorem \ref{Theorem:specialcasek=2}), so these two conditions may not be useful. However, for $k \geq 3$, a $(k-2)$-chromatic path satisfying these two conditions can traverse red edges of either path $\mathcal{Y}_1$ or path $\mathcal{Y}_{k-1}$, but not both, since the red paths are non-crossing. This observation may lead to more efficient algorithms, but it requires the existence of shortest paths with this specialised structure, which is not trivial.
Finally, an interesting research direction is generalising the problem of servicing $n$ timed requests with $k$ robots to three dimensions, where a request now takes place at time $t_i$ at a point $(x_i,y_i)$ in the plane. Considering that the appropriate data structures for orthogonal search queries can be generalised to higher dimensions, this is an interesting research direction even for $k=1$ robot.
\section{Introduction}
Abstractive summarization has made significant strides since the introduction of transformer-based models in NLP~\cite{vaswani2017attention}. However, the quadratic computational and memory complexity of large transformers has limited their scalability for long document summarization, as the token length of a standard transformer is limited to 512 tokens. One can apply extractive summarization to reduce the length of the document while retaining its key elements, and then take an abstractive approach on the reduced document. In the extractive step, only the most important sentences are chosen so that the reduced document fits within the token limits of the transformer model. Another way is to rely on extractive summarization alone to summarize the document, retaining only the salient information, but this approach is computationally very expensive. Alternative transformer architectures such as Longformer~\citep{beltagy2020longformer} and BigBird~\citep{zaheer2021big} alleviate the computational burden of the self-attention mechanism by limiting the attention window each token has access to. Longformer was created for this purpose, using a pluggable sparse attention mechanism that combines dilated windowed attention for local context with full global attention on some tokens, where the latter varies per task. This yields an attention mechanism that grows linearly with sequence length using a sliding window of size $w$, allowing it to handle documents in excess of 8,000 tokens.
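For intuition, the sliding-window pattern can be sketched as a boolean mask in which token $i$ attends only to tokens within $w/2$ positions on either side; this is our own minimal illustration and omits the dilation and the task-specific global-attention tokens that Longformer adds on top.

```python
def sliding_window_mask(n, w):
    # mask[i][j] is True when token i may attend to token j: |i - j| <= w // 2.
    # The number of attended pairs grows as O(n * w), instead of the O(n^2)
    # cost of full self-attention.
    half = w // 2
    return [[abs(i - j) <= half for j in range(n)] for i in range(n)]
```

For a sequence of 8 tokens and $w=4$, each token attends to at most 5 positions instead of all 64 pairs being active.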
More recently, a group at Google~\citep{leethorp2021fnet} has introduced a new implementation that replaces the self-attention heads in the transformer encoder with a non-parameterized Fourier transform mixing of the tokens, which does not suffer from this quadratic computation penalty. We propose to extend this architecture to the long document summarization problem and compare the results against two current baseline practices: extracting the salient information and then applying abstractive summarization using PEGASUS~\citep{zhang2020pegasus}, and using a Longformer implementation. For both baseline approaches, we investigated multiple hyperparameter settings and evaluated the summaries relative to their corresponding abstracts. This becomes the method for comparing the performance of each methodology.
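The extract-then-abstract baseline can be sketched as a two-stage pipeline. In the sketch below, `extractor` and `abstractor` are placeholder callables (e.g. a sentence ranker and a PEGASUS checkpoint), not APIs from any specific library, and the sentence splitting is deliberately naive.

```python
def extract_then_abstract(document, extractor, abstractor, budget=1024):
    # Stage 1: rank sentences and keep the most salient ones until the
    # abstractive model's token budget is met.
    # Stage 2: summarize the reduced document with a seq2seq model.
    sentences = document.split(". ")
    reduced = extractor(sentences, budget)
    return abstractor(" ".join(reduced))
```

The design point is that the extractor only has to be cheap enough to run over the full document, while the expensive abstractive model only ever sees text within its token limit.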
The primary dataset used for this work is the PubMed dataset \citep{dernoncourt2017pubmed}, as several prior works on long document summarization have used it and can serve as comparisons. According to~\citet{zaheer2021big}, this dataset has a median token length of 2,715, with the $90^{th}$ percentile at 6,101 tokens. \citet{dernoncourt2017pubmed} shows how extensive this dataset is, with close to 200,000 articles.
We use the most common evaluation technique for document summarization: ROUGE scores \citep{lin-2004-rouge}. In our analyses, we report F1 scores for Rouge-1, Rouge-2, Rouge-3, and Rouge-L for completeness.
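For concreteness, a ROUGE-N F1 score is computed directly from n-gram overlap counts. The following minimal Python sketch is illustrative only; in practice one would use an established ROUGE implementation, which adds stemming and, for Rouge-L, longest-common-subsequence matching.

```python
from collections import Counter

def rouge_n_f1(candidate: str, reference: str, n: int = 1) -> float:
    """ROUGE-N F1: n-gram overlap between a candidate summary and a reference."""
    def ngrams(text, n):
        toks = text.lower().split()
        return Counter(tuple(toks[i:i + n]) for i in range(len(toks) - n + 1))

    cand, ref = ngrams(candidate, n), ngrams(reference, n)
    overlap = sum((cand & ref).values())  # clipped n-gram match count
    if not cand or not ref or overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)
```

For example, `rouge_n_f1("the cat sat", "the cat ran")` gives 2/3, since two of three unigrams match in both directions.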
\begin{table}
\twocolumn[{
\centering
\begin{tabular}{|l|c|c|c|c|c|c|}
\hline
\multicolumn{7}{|c|}{PEGASUS Baseline Tuned with 100 PubMed articles} \\
\hline
\hline
Baseline Models & Training & Validation &Rouge-1 & Rouge-2 & Rouge-3 & Rouge-L \\ \hline
\citet{zhang2020pegasus} & None & None & 34.05 & 12.75 & NA & 21.12 \\ \hline
PEGASUS Tuned & None & None & \textbf{36.47} & \textbf{15.46} & 10.08 & \textbf{23.98} \\ \hline
Experiment 1 & BERT & None & \textbf{35.43} & \textbf{12.94} & 8.08 & \textbf{22.18} \\ \hline
Experiment 2 & TextRank & None & 33.11 & 12.54 & 7.57 & \textbf{21.62} \\ \hline
Experiment 3 & BERT & BERT & \textbf{36.08} & 11.61 & 6.08 & \textbf{21.81} \\ \hline
Experiment 4 & BERT & TextRank & \textbf{35.58} & \textbf{12.07} & 6.48 & \textbf{21.29} \\ \hline
Experiment 5 & TextRank & TextRank & \textbf{38.07} & \textbf{15.72} & 9.31 & \textbf{23.96} \\ \hline
Experiment 6 & TextRank & BERT & \textbf{38.79} & \textbf{15.83} & 9.74 & \textbf{24.84} \\ \hline
\end{tabular}
\caption{Baseline model results (high F-scores) for extractive-then-abstractive PEGASUS summarization. Bold scores are higher than PEGASUS \citep{zhang2020pegasus} for 100 PubMed training articles using PEGASUS\textsubscript{LARGE}. See Appendix for tuning parameters. \textbf{Note:} in \citep{zhang2020pegasus}, abstracts longer than 256 tokens were truncated.}
\vspace*{.5\baselineskip}
\label{tab:table1}
}]
\end{table}
\section{Baselines}
\label{sec:Baselines}
\subsection{Long Document Summarization with PEGASUS}
\label{sec: Extractive tuning PEGASUS}
PEGASUS~\citep{zhang2020pegasus} is a state-of-the-art abstractive summarization model built on the transformer framework. However, as discussed above, the computational overhead of the self-attention mechanism limits the input length to 512 tokens, restricting its applicability to long document summarization. To overcome this, we use two approaches to first carry out an extractive summarization down to 512 tokens and then apply PEGASUS to generate the final abstractive summary of the document. In the first approach, we encode each sentence of the PubMed article with BERT embeddings~\citep{devlin2019bert}, computed over segments of 512 tokens each, followed by k-means clustering to pick out the most important sentences, namely those closest to the cluster centers. The other extractive approach uses TextRank~\citep{Mihalcea2004TextRankBO} to reduce the long document to its 512 most important tokens before applying PEGASUS; details of both approaches are discussed below. The novelty of our approach is to use these two extractive methods as a first pass, followed by fine-tuning PEGASUS on this reduced set.
\subsubsection{BERT tuning}
\label{BERT tuning}
Here we rely on an extractive text summarization service using BERT embeddings, originally created on AWS to summarize class lectures~\citep{miller2019leveraging}. This service uses BERT to generate an embedding for each sentence in the long document. The embeddings are then clustered using k-means, with a user-specified ratio parameter controlling the number of important sentences to extract, which also sets the number of k-means clusters. The algorithm then chooses the sentences closest to the cluster centers as the extractive summary. To stay within the token limit of PEGASUS, we set the ratio to 512 divided by the number of tokens in the long article; this approximates the fraction of sentences extracted from the original article.
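A minimal sketch of this nearest-to-centroid selection, assuming the per-sentence embeddings (BERT-derived in the service we used) are already computed. The function name and the plain Lloyd-iteration k-means are our own simplifications of what the package does internally:

```python
import numpy as np

def extract_by_clustering(embeddings: np.ndarray, ratio: float, seed: int = 0):
    """Pick the sentences closest to k-means cluster centers.

    `embeddings` holds one vector per sentence (hypothetically BERT-derived);
    `ratio` is the fraction of sentences to keep, e.g. 512 / n_article_tokens.
    Returns selected sentence indices in document order.
    """
    n = len(embeddings)
    k = max(1, round(n * ratio))
    rng = np.random.default_rng(seed)
    centers = embeddings[rng.choice(n, size=k, replace=False)]
    for _ in range(20):  # plain Lloyd iterations
        dists = np.linalg.norm(embeddings[:, None] - centers[None], axis=2)
        labels = np.argmin(dists, axis=1)
        for j in range(k):
            members = embeddings[labels == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    picked = set()
    for j in range(k):  # one representative per cluster: nearest to its center
        d = np.linalg.norm(embeddings - centers[j], axis=1)
        d[labels != j] = np.inf
        if np.isfinite(d.min()):
            picked.add(int(np.argmin(d)))
    return sorted(picked)
```

Sorting the picked indices preserves the original sentence order in the extractive summary, which matters for readability of the reduced document.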
\subsubsection{TextRank tuning}
\label{Text tuning}
TextRank~\citep{Mihalcea2004TextRankBO} is another extractive summarization method. It uses a graph-based ranking algorithm to extract the important phrases from text, following a random-walker formulation similar to Google's PageRank algorithm~\citep{Pageetal98}. We used a Python package~\citep{DBLP:journals/corr/BarriosLAW16} to implement TextRank. As with BERT tuning, the TextRank package has a ratio parameter, which we set to 512 divided by the number of tokens in the article. However, some caution is needed: unlike the BERT approach, TextRank is not required to produce complete sentences and can therefore work at a lower level of granularity, extracting important phrases.
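The ranking idea behind TextRank can be sketched as a power-iteration PageRank over a word-overlap similarity graph between sentences. This is a toy illustration, not the package we used, which works at finer granularity and includes preprocessing such as stop-word removal:

```python
import math
import re

def textrank_sentences(sentences, ratio=0.3, damping=0.85, iters=50):
    """Rank sentences with a PageRank-style walk over a word-overlap graph."""
    words = [set(re.findall(r"\w+", s.lower())) for s in sentences]

    def similarity(a, b):
        # shared-word count, normalized by sentence lengths (log-damped)
        if not a or not b:
            return 0.0
        denom = math.log(len(a) + 1) + math.log(len(b) + 1)
        return len(a & b) / denom if denom else 0.0

    n = len(sentences)
    w = [[similarity(words[i], words[j]) if i != j else 0.0
          for j in range(n)] for i in range(n)]
    out_sum = [sum(row) or 1.0 for row in w]  # avoid /0 for isolated nodes
    score = [1.0 / n] * n
    for _ in range(iters):  # damped power iteration, as in PageRank
        score = [(1 - damping) / n
                 + damping * sum(w[j][i] / out_sum[j] * score[j]
                                 for j in range(n))
                 for i in range(n)]
    keep = max(1, round(n * ratio))
    top = sorted(range(n), key=lambda i: -score[i])[:keep]
    return [sentences[i] for i in sorted(top)]
```

Sentences sharing vocabulary with many others accumulate score, while an off-topic sentence with no overlap keeps only the $(1-d)/n$ base score and is dropped first.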
\subsubsection{Baseline results}
\label{baseline results}
As seen in Table~\ref{tab:table1}, we produced higher Rouge-1, Rouge-2, and Rouge-L scores tuning PEGASUS\textsubscript{LARGE} with only 100 training examples than those reported by \citet{zhang2020pegasus}. Our highest Rouge scores were produced when tuning PEGASUS\textsubscript{LARGE} with TextRank and running BERT extractive summarization on the validation articles. We used the same 100 PubMed articles for validation across all experiments for consistency. Our top Rouge scores show a 3--5 point improvement over the original paper. We could likely have achieved higher ROUGE scores by increasing the amount of training data or searching for a more optimal set of tuning parameters. Due to the computational limitations of Google Colab, our tool of choice for these experiments, we focused our efforts on a small dataset and used the extractive techniques above to address the 512-token tuning limit of PEGASUS\textsubscript{LARGE}. However, there are other models, such as Longformer, that have a higher token limit and can achieve higher Rouge scores than PEGASUS\textsubscript{LARGE} for abstractive summarization.
\begin{table}
\twocolumn[{
\centering
\begin{tabular}{|l|c|c|c|c|c|c|}
\hline
\multicolumn{7}{|c|}{Longformer Baseline Tuned on PubMed articles} \\ \hline \hline
Baseline Models & Train Articles & Epochs &Rouge-1 & Rouge-2 & Rouge-3 & Rouge-L \\ \hline
Longformer 1 & 100 & 1 & \textbf{36.63} & 11.05 & 4.38 & 17.85 \\ \hline
Longformer 2 & 100 & 3 & 33.97 & 11.15 & 5.27 & 17.98 \\ \hline
Longformer 3 & 1,000 & 1 & 36.05 & \textbf{13.36} & \textbf{7.13} & \textbf{21.04} \\ \hline
\end{tabular}
\caption{Baseline model results (high F-scores) for Longformer summarization. Bold scores are the highest value for each metric. Note that as the number of training articles increased, so did most of the scores. More resources (GPU and memory) would be required to train on more articles.}
\vspace*{.5\baselineskip}
\label{tab:table2}
}]
\end{table}
\subsection{Longformer Baseline}
\label{sec:longformer}
Longformer was first introduced by Allen AI in the paper Longformer: The Long-Document Transformer~\citep{beltagy2020longformer}. The idea behind the approach is to remove the quadratic dependency on sequence length in the self-attention layer, replacing it with an attention operation that scales linearly with sequence length. This is achieved through alternatives to the full attention architecture: sliding window attention, dilated window attention, and global plus sliding window attention. These allow the expensive quadratic term $QK^T$ of \citet{vaswani2017attention} to be replaced with a computation of only a fixed number of diagonals of $QK^T$, using the dilated sliding window attention.
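The banded computation can be sketched as follows: only the entries of $QK^T$ within a window of $w$ tokens on each side of each position are evaluated, so the cost is $O(nw)$ rather than $O(n^2)$. This illustrative numpy sketch stores a dense matrix for clarity (a real implementation keeps only the band):

```python
import numpy as np

def sliding_window_scores(q, k, w):
    """Attention scores restricted to a window of w tokens on each side.

    Only O(n * w) entries of the n x n matrix QK^T / sqrt(d) are computed;
    positions outside the band are masked (-inf before the softmax).
    """
    n, d = q.shape
    scores = np.full((n, n), -np.inf)
    for i in range(n):
        lo, hi = max(0, i - w), min(n, i + w + 1)
        scores[i, lo:hi] = q[i] @ k[lo:hi].T / np.sqrt(d)
    return scores
```

Inside the band the values agree with full attention; Longformer's dilated and global variants change only which entries of the matrix are kept.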
The Allen AI paper does not introduce this methodology for summarization tasks. However, Longformer has since been used in a number of summarization tasks, including work on the PubMed dataset. The approach consists of fine-tuning the smaller LED checkpoint \href{https://huggingface.co/allenai/led-base-16384}{"allenai/led-base-16384"}.
The dataset is first tokenized and then, in addition to the attention mask, we make use of the global attention mask, as is standard for Longformer. Following the suggestion of \citet{beltagy2020longformer}, we apply global attention only to the very first token. The baseline results in Table \ref{tab:table2} are based on the pre-trained LED checkpoint \href{https://huggingface.co/allenai/led-base-16384}{"allenai/led-base-16384"}. The checkpoint was then trained on the PubMed dataset using 250 articles for training and 25 for validation, and the model was trained varying the number of training articles and epochs.
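Constructing the two masks is straightforward; the sketch below (the function name is ours) mirrors the setup described above, with global attention on the first token only. In practice these lists would be converted to tensors and passed to the Hugging Face model:

```python
def led_attention_masks(n_tokens, pad_to=None):
    """Build the two masks an LED/Longformer summarization setup expects.

    attention_mask marks real vs. padding tokens; global_attention_mask puts
    global attention on the first (<s>) token only, following the suggestion
    in the Longformer paper. Returns plain Python lists.
    """
    length = pad_to or n_tokens
    attention_mask = [1] * n_tokens + [0] * (length - n_tokens)
    global_attention_mask = [0] * length
    global_attention_mask[0] = 1
    return attention_mask, global_attention_mask
```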
\subsubsection{Longformer baseline results}
\label{sec:Baseline Results}
Table \ref{tab:table2} shows the results from using Longformer to summarize PubMed articles. As can be seen from the items shown in bold, as the number of training articles increased, so did most of the Rouge scores. However, due to the limited resources available, going beyond 1,000 training articles was not feasible. The results are also in line with the scores obtained by the previous baseline discussed above (i.e., PEGASUS) and shown in Table \ref{tab:table1}.
\section{Fourier Transform Based Attention (FNET)}
New research from a Google team~\citep{leethorp2021fnet} proposes replacing the multi-headed self-attention sub-layers with simple linear transformations that ``mix'' input tokens, significantly speeding up the transformer encoder at limited accuracy cost. Additionally, the complexity and memory footprint of the transformer architecture are reduced. Even more surprisingly, the team discovered that replacing the self-attention sub-layer with a standard, unparameterized Fourier transform achieves 92 percent of the accuracy of BERT on the GLUE benchmark, with training times that are seven times faster on GPUs and twice as fast on TPUs.
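The core replacement is a parameter-free 2D Fourier transform over the sequence and hidden dimensions, keeping only the real part. A minimal numpy sketch of the mixing step, plus a simplified mixing sublayer whose layer-norm placement is illustrative rather than a faithful reproduction of the FNET block:

```python
import numpy as np

def fourier_mixing(x):
    """FNET-style token mixing: 2D FFT over (seq_len, d_model), real part only.

    Replaces multi-headed self-attention with a fixed linear transform;
    there are no learnable parameters in this step.
    """
    return np.fft.fft2(x).real  # shape (seq_len, d_model) -> same shape

def fnet_encoder_sublayer(x, eps=1e-6):
    """One mixing sublayer: Fourier mixing, residual connection, layer norm."""
    mixed = x + fourier_mixing(x)
    mu = mixed.mean(axis=-1, keepdims=True)
    sigma = mixed.std(axis=-1, keepdims=True)
    return (mixed - mu) / (sigma + eps)
```

Because the FFT is linear, the mixing commutes with scaling of the input, and its cost is $O(n \log n)$ in the sequence length instead of the $O(n^2)$ of self-attention.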
\begin{figure}[ht]
\includegraphics[width=7cm]{./images/FNET_transformer_1}
\caption{The original Google FNET paper is implemented with a Fourier module replacing the multi-head self attention mechanism. In this work, we have completed the decoder architecture with a similar replacement of the self-attention mechanism as shown in the figure. To our knowledge, this is the first full implementation of the transformer model using only Fourier transformations for the attention mechanism.}
\label{fig:fourier_1}
\end{figure}
\begin{figure}[ht]
\includegraphics[width=7cm]{./images/FNET_transformer_2}
\caption{Hybrid version of the Fourier token mixing transformer that still uses the multi-headed self attention on the one decoder module that processes the encoder output. This hybrid model yield higher accuracy but with less computational requirements relative to the original multi-headed self attention transformer. In this implementation, all the multi-headed self-attention blocks are removed in both the encoder and the decoder except for the one that connects the encoder to the decoder. This hybrid approach was proposed by the Google research team but we have extended that concept to a transformer implementation.}
\label{fig:fourier_2}
\end{figure}
We investigated several modifications of this Fourier attention architecture and generated their corresponding Rouge F1 scores. The three main architectural changes that we experimented with are shown in the following figures. Initially, we added a decoder to the FNET architecture with Fourier token mixing modules, as shown in Figure~\ref{fig:fourier_1}. Since we needed the entire transformer for long document summarization, we extended the idea by replacing the multi-headed self-attention blocks in the decoder as well, re-implementing the entire decoder from scratch, as shown in the red block in the figure. As the Google researchers pointed out in their paper \citep{leethorp2021fnet}, ``designing the equivalent of encoder-decoder cross-attention remains an avenue for future work''; exactly this is the main contribution of this work, together with its application to the long document summarization problem. To our knowledge, this is the first (toy) implementation of the transformer model using only Fourier transformations in place of the multi-head attention blocks. In the subsequent discussion, we refer to this model as the FNET-transformer. We also investigated using the norm and the imaginary component of the Fourier transform, but found the real component to give the best Rouge scores. It remains puzzling why the norm did not give the best scores, as it incorporates both components of the Fourier token mixing; we leave this investigation for future study. In addition, we compared a single token mixing head against a multi-headed token mixing approach and found that it performed just as well, while needing fewer epochs to achieve high Rouge scores. This significantly reduces the complexity and computational load of this new transformer architecture.
In Figure~\ref{fig:fourier_2}, we investigated a hybrid approach that has Fourier token mixing in both the encoder and decoder inputs but with the conventional multi-headed self-attention connecting the encoder to the decoder. We will refer to this architecture as the hybrid-FNET approach. The other architectural investigation we carried out was moving the Fourier transforms completely outside of the original self-attention blocks for both the encoder and the decoder as shown in Figure~\ref{fig:fourier_3} while completely converting the multi-headed attention heads to matrix multipliers with no learnable parameters. We will refer to this architecture as Full-Fourier-FNET.
Since no full implementation of a pretrained FNET transformer was available in the public domain, we leveraged a toy example to demonstrate the key concepts and extensions discussed in this work. For this purpose, we used only 2,000 PubMed articles for training and 500 for validation. All the FNET transformer architectures that we investigated had the following parameters: a word embedding layer of size 200 using pretrained GloVe, 9,000 neurons in the feed-forward networks, 20 Fourier token mixing heads, and 1--6 stacked layers as a hyperparameter. The performance of all these experiments is summarized in Table \ref{tab:table3}. The table shows that the hybrid approach gives the best performance but is 10 times slower than the Full-Fourier-FNET or FNET-transformer models, which do not use any of the computationally expensive self-attention mechanics. Sample training and validation curves used in the hyperparameter tuning of the models are shown in Figure~\ref{fnet_loss:fig}.
\begin{figure}
\includegraphics[width=7cm]{./images/FNET_transformer_3}
\caption{Alternative implementation that moves the Fourier token mixing completely to the outside of the transformer by Fourier transforming the inputs to both the encoder and decoder while completely removing the multi-headed self-attention blocks. A final Fourier transformation module is added to the output of the decoder before the softmax.}
\label{fig:fourier_3}
\end{figure}
\begin{figure}
\includegraphics[width=7cm]{./images/loss_curve_fnet}
\caption{Training and validation loss curves for a full Fourier token mixing transformer replacement for self attention heads.}
\label{fnet_loss:fig}
\end{figure}
\begin{table}[h]
\twocolumn[{
\centering
\begin{tabular}{|l|c|c|c|c|}
\hline
\multicolumn{5}{|c|}{FNET Based Architectures} \\ \hline \hline
Model & Rouge-1 & Rouge-2 & Rouge-3 & Rouge-L \\ \hline
FNET-Transformer & 30.3 & 11.2 & 5.2 & 10.4 \\ \hline
Hybrid-FNET & 35.6 & 11.5 & 7.2 & 14.5 \\ \hline
Fourier-FNET & 38.3 & 12.0 & 10.5 & 12.1 \\ \hline
\end{tabular}
\caption{Model results (high F-scores) for the FNET-based architectures described in the text, trained on the reduced PubMed set.}
\vspace*{.5\baselineskip}
\label{tab:table3}
}]
\end{table}
\section{Key Contributions}
The key contributions of this work can be summarized as:
\begin{enumerate}
\item First application of the new Fourier attention based FNET architecture to the summarization task.
\item The original FNET architecture as proposed by Google researchers was an encoder only model. We have extended this Fourier based token mixing approach to the decoder using causally masked Fourier transform matrices and thus coded a full transformer model with this type of attention from scratch. As far as we know, there is nothing in the current literature on such a decoder.
\item We also investigated an alternative hybrid approach that used a Fourier transform token mixing for the encoder and a full self-attention decoder (Hybrid-FNET).
\item For dealing with document-level sequence dependencies, we further proposed and demonstrated a modification to the FNET architecture that does away entirely with the concept of multi-headed self-attention by simply carrying out a 2D Fourier transform of the entire source and target corpus a priori. More exhaustive work is planned to validate this concept on a variety of full datasets. Initial experiments with a toy model and reduced PubMed abstracts on a summarization task show a promising Rouge-2 F1 score of 8.
\end{enumerate}
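The causally masked Fourier mixing of contribution 2 can be illustrated with a lower-triangularly masked DFT matrix, so that position $t$ mixes only tokens $0$ through $t$, mirroring the causal mask of a self-attention decoder. The numpy sketch below conveys the concept; the exact masked matrices used in the implementation may differ:

```python
import numpy as np

def causal_fourier_mixing(x):
    """Decoder-side token mixing with a causally masked DFT matrix.

    The n x n DFT matrix is masked to its lower triangle, so output
    position t depends only on input positions 0..t. A sketch of the
    idea, with the real part kept as in FNET.
    """
    n = x.shape[0]
    dft = np.fft.fft(np.eye(n), axis=0)  # dft[k, j] = exp(-2*pi*i*k*j/n)
    causal = np.tril(dft)                # zero out future positions
    return (causal @ x).real
```

Causality can be checked directly: perturbing only the tail of the input sequence leaves the outputs at earlier positions unchanged.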
\section{Conclusion}
We have demonstrated for the first time that the recently proposed FNET architecture can be extended to a full transformer model on an abstractive summarization task with the PubMed dataset. Even with a toy implementation, we have shown several novel architectural changes to the original proposal that can be used for a variety of tasks requiring low computational cost while maintaining reasonable accuracy. Our toy architecture yields lower Rouge scores than the baselines for two main reasons: the transformer model is much smaller, and we did not have a pretrained FNET transformer as a starting point. The contribution of this work is the investigation of alternative implementations of the Fourier token mixing idea in a transformer on a summarization task. With a fully configured, large implementation of these architectures, we believe that competitive Rouge scores can be obtained on the summarization task without the computational overhead of a full self-attention implementation. We also believe that the extractive summarization preprocessing techniques used in this paper would generalize well with larger tuning datasets. Although extractive summarization is the most computationally expensive of the techniques used in this paper, it showed great promise as a tuning instrument for PEGASUS\textsubscript{LARGE}. Using more tuning examples, adjusting the tuning parameters, and adding more computational power would make extractive preprocessing competitive with state-of-the-art techniques for long document summarization with a limited attention capacity of 512 tokens.
\input{main.bbl}
\bibliographystyle{acl_natbib}
\section{Introduction\label{sec:intro}}
The standard cosmological model predicts that galaxies grow through a hierarchical process.
The Milky Way is no exception and has been shown to have accreted a number of dwarf galaxies.
There is indeed evidence for the ongoing accretion in the form of stellar streams from the Sagittarius dwarf galaxy \citep[e.g.,][]{Ibata1994a,Belokurov2006a,Grillmair2006a,Bernard2016,Malhan2018a,Ramos2020a,Antoja2020a}.
However, in the case of ancient accretion events that deposited stars to the inner parts of the Galaxy, streams would have lost their spatial coherence.
This is why chemodynamical analysis of halo stars is a powerful way to recover the accretion history of the Milky Way.
Orbits and chemical abundances generally remain unchanged for a long time.
A number of studies have pointed out correlations between abundance ratios of halo stars and their kinematics \citep[e.g.,][]{Nissen1997a,Gratton2003a,Venn2004a,Nissen2010,Ishigaki2012}.
In particular, \citet[][hereafter NS10]{Nissen2010} clearly showed the presence of two chemically distinct stellar populations among nearby halo stars that also have different kinematics.
They interpreted the population with low [$\alpha$/{Fe}] abundance ratio as a group of accreted stars from dwarf galaxies and the high-$\alpha$ population to have formed in-situ within the Milky Way.
After the data releases from the Gaia mission \citep{GaiaCollaboration2016a,GaiaCollaboration2018a}, it became apparent that there is a kinematic overdensity of stars with radial orbits in the Galactic halo, known as the Gaia-Sausage \citep[e.g.,][]{Belokurov2018a,Koppelman2018a}.
The kinematic overdensity turns out to follow the low-$\alpha$ population \citep[e.g.,][]{Helmi2018a,Haywood2018a,Mackereth2019a} and it is now considered to be the debris from the last major merger that the Milky Way has experienced and has been named Gaia-Enceladus.
The in-situ halo stars with high [{Mg}/{Fe}] are likely those heated by this event from the disk present at that time \citep{Helmi2018a,Belokurov2020a,Gallart2019a,DiMatteo2019a}.
Besides these two major populations, there seems to be an additional, highly retrograde component in the Milky Way halo.
The Gaia data revealed that there is an overdensity of stars in the space of kinematics that has large retrograde motion and large orbital energy \citep[e.g., ][]{Myeong2018c,Koppelman2018a,Yuan2020a,Naidu2020a}, which is now widely called \textit{Sequoia}.
Since Sequoia is less prominent and has lower mean metallicity than Gaia-Enceladus, this could be a disrupted dwarf galaxy smaller than Gaia-Enceladus \citep{Myeong2019a,Koppelman2019a,Matsuno2019a,Naidu2020a,Feuillet2021a}.
However, the picture is somewhat confusing because the region occupied by Sequoia has been suggested to contain multiple components \citep{Naidu2020a}.
\citet{Helmi2018a} suggested that the progenitor of Gaia-Enceladus can also deposit stars onto Sequoia-like orbits depending on the morphology and the inclination of its initial orbit, which is supported by numerical simulations \citep{Koppelman2019a}.
Another complication arises in the selection and definition of Sequoia stars: while \citet{Matsuno2019a}, \citet{Koppelman2019a}, \citet{Naidu2020a}, and \citet{Aguado2021a} use selections in angular momentum ($L_z$) and orbital energy ($E$) of the stars ($L_z-E$ selection), \citet{Myeong2019a}, \citet{Monty2020a}, and \citet{Feuillet2021a} mostly use normalized actions ($\tilde{\textbf{J}}$ selection).
In the latter case, the selected stars extend to much lower orbital energy \citep[e.g.,][]{Feuillet2021a}.
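As an illustration, an $L_z-E$ selection amounts to a simple cut in the $L_z$--$E$ plane. In the sketch below the thresholds are hypothetical round numbers, not any published selection boundary (e.g., \citealt{Koppelman2019a} define their own region); $L_z$ is in $\mathrm{kpc\,km\,s^{-1}}$ and $E$ in units of $10^5\,\mathrm{km^2\,s^{-2}}$:

```python
def in_sequoia_lz_e(lz, energy):
    """Illustrative L_z-E selection for Sequoia-like orbits.

    Thresholds are hypothetical round numbers for illustration only.
    lz: kpc km/s (negative = retrograde); energy: 10^5 km^2/s^2.
    """
    retrograde_enough = lz < -700.0   # strongly retrograde
    loosely_bound = energy > -1.25    # high orbital energy
    return retrograde_enough and loosely_bound
```

A $\tilde{\textbf{J}}$ selection, by contrast, cuts in normalized action space and therefore admits stars down to much lower orbital energy, which is the difference discussed above.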
Chemical abundance analysis from high-resolution spectroscopy is crucial to understand the properties of Sequoia.
Differences in abundance ratios imply different conditions of star formation, e.g., star formation with different efficiency, which, in turn, could be related to the mass of the progenitor galaxy and/or its environment.
Although chemical abundance ratios have been well studied for Gaia-Enceladus, in-situ stars, and surviving dwarf galaxies \citep[e.g.,][]{Venn2004a,Tolstoy2009,Nissen2010}, the current understanding about the chemical abundance ratios of Sequoia is much less clear.
Long before the discovery of Sequoia as a kinematic overdensity, \citet{Venn2004a} pointed out the systematically low [$\alpha$/{Fe}] ratios of highly retrograde stars; they showed that the very low [$\alpha$/{Fe}] ratios seen among the outermost halo stars in \citet{Stephens2002} are rather related to their large retrograde motion.
\citet{Matsuno2019a} for the first time indicated the connection between Sequoia and the results from \citet{Venn2004a}.
They selected stars in an overdensity with large retrograde motion at high orbital energy ($L_z-E$ selection), which was later named Sequoia, from the Stellar Abundances for Galactic Archaeology Database \citep{Suda2008,Suda2011,Yamada2013,Suda2017a}.
They showed that the selected stars have, on average, lower [{Na}/{Fe}], [{Mg}/{Fe}], and [{Ca}/{Fe}] than Gaia-Enceladus or in-situ stars at [{Fe}/{H}]$\sim-1.5$.
This conclusion is supported by \citet{Monty2020a}, who recalibrated the abundances and recalculated the kinematics of stars studied by \citet{Stephens2002}.
Among their Sequoia stars selected with the $\tilde{\textbf{J}}$ selection, stars with low binding energy, corresponding to $L_z-E$ selection, show low values of [{Mg}/{Fe}] and [{Ca}/{Fe}], while stars with higher binding energy have higher Mg and Ca abundances.
Additionally, the proper-motion pair, HD134439 and HD134440, which has large retrograde Galactic motion \citep{King1997a} and indeed satisfies most of the $L_z-E$ selections, has been known to have low $\alpha$-element abundances \citep{King1997a,Chen2006a,Chen2014a,Reggiani2018a}.
Among these studies, \citet{Reggiani2018a} indeed confirmed that the $\alpha$-element abundances of HD134439/HD134440 are even lower than those of NS10's low-$\alpha$ halo population.
Data from GALAH DR3 also seem to support the low-$\alpha$ abundance of Sequoia \citep[see $\alpha$-element abundance presented in][who use $L_z-E$ selection]{Aguado2021a}.
The same feature is, however, not clearly seen in data from APOGEE as presented in \cite{Koppelman2019a} with $L_z-E$ selection, and \citet{Myeong2019a} and \citet{Feuillet2021a} with $\tilde{\textbf{J}}$ selection.
\begin{table*}
\caption{Summary of the data \label{tab:obs}}
\centering
\begin{tabular}{lrrrrrr}
\hline\hline
Object & Gaia EDR3 source id & Telescope & Resolution & $S/N_1$ & $S/N_2$ & $S/N_3$ \\\hline
1336\_6432 & 1336408284224866432 & Subaru & 80,000 & 94 & 139 & 137 \\
2657\_5888 & 2657496656325125888 & Subaru & 80,000 & 41 & 79 & 92 \\
2813\_6032 & 2813331813720876032 & Subaru & 80,000 & 130 & 173 & 126 \\
2870\_9072 & 2870313110476579072 & Subaru & 80,000 & 88 & 118 & 52 \\
3336\_0672 & 3336204190352220672 & Subaru & 80,000 & 90 & 194 & 114 \\
3341\_2720 & 3341934256545182720 & Subaru & 80,000 & 126 & 140 & 99 \\
4587\_5616 & 4587905579084735616 & Subaru & 80,000 & 48 & 81 & 75 \\
4850\_5696 & 4850673911632285696 & Magellan & 32,000-40,000 & 207 & 128 & 176 \\\hline
G90-36 & 876358870971624320 & Keck & 48,000 & 41 & 67 & 50 \\
G115-58 & 1011379899590855936 & Subaru & 100,000 & 90 & 147 & 98 \\
HIP28104 & 2910503176753011840 & VLT & 50,000 & 93 & 161 & 181 \\
HIP98492 & 4299974407538484096 & Keck & 72,000 & 77 & 133 & 83 \\\hline
G112-43 & 3085891537839267328 & Subaru & 100,000 & 214 & 208 & 99 \\
CD$-48^{\circ}$02445 & 5551565291043498496 & VLT & 50,000 & 130 & 289 & 258 \\
HD59392 & 5586241315104190848 & VLT & 50,000 & 170 & 441 & 248 \\
\hline
\end{tabular}
\tablefoot{We obtained new high-resolution spectra for the eight Sequoia stars in the first group, and obtained archival high-resolution spectra for the four Sequoia stars in the second group and for the three standard stars in the last group. The $S/N_1$, $S/N_2$, and $S/N_3$ are measured at around $4500,\,5533,\,\mathrm{and}\, 6370\,\mathrm{\AA}$, respectively, and converted to per $0.01\,\mathrm{\AA}$.}
\end{table*}
\begin{table*}
\caption{Target information\label{tab:kinematics_photo}}
\centering
\begin{tabular}{lrrrrrrrrrr}
\hline\hline
Object & $\pi$ & $\sigma(\pi)$ & $G$ & $B_p-R_p$ & $E(B-V)$ & RV & $L_z$ & $\sigma(L_z)$ & $E$ & $\sigma(E)$ \\
& (mas) & (mas) & & & & $(\mathrm{km\,s^{-1}})$ & $(\mathrm{kpc\,km\,s^{-1}})$ & $(\mathrm{kpc\,km\,s^{-1}})$ & $(\mathrm{km^2\,s^{-2}})$ & $(\mathrm{km^2\,s^{-2}})$ \\\hline
1336\_6432 & 2.770 & 0.011 & 11.735 & 0.626 & 0.038 & -560.2 & -3000 & 12 & -1.022 & 0.005 \\
2657\_5888 & 1.798 & 0.024 & 13.960 & 0.924 & 0.024 & -348.4 & -2934 & 52 & -0.936 & 0.031 \\
2813\_6032 & 2.929 & 0.020 & 11.922 & 0.707 & 0.098 & -461.9 & -1787 & 12 & -1.218 & 0.005 \\
2870\_9072 & 1.397 & 0.017 & 12.885 & 0.855 & 0.116 & -392.1 & -1628 & 25 & -1.258 & 0.019 \\
3336\_0672 & 5.875 & 0.030 & 11.603 & 0.794 & 0.000 & -13.9 & -1722 & 26 & -1.251 & 0.011 \\
3341\_2720 & 1.344 & 0.014 & 13.281 & 0.792 & 0.130 & -137.3 & -2478 & 92 & -0.893 & 0.038 \\
4587\_5616 & 3.675 & 0.019 & 12.771 & 0.929 & 0.010 & -600.2 & -3147 & 9 & -1.016 & 0.004 \\
4850\_5696 & 3.981 & 0.014 & 10.987 & 0.616 & 0.0 & 430.4 & -1604 & 11 & -1.299 & 0.003 \\
G90-36 & 4.120 & 0.013 & 12.559 & 0.828 & 0.004 & 267.4 & -2880 & 46 & -1.004 & 0.019 \\
G115-58 & 2.224 & 0.019 & 11.973 & 0.657 & 0.036 & 226.2 & -1799 & 62 & -1.246 & 0.017 \\
HIP28104 & 2.590 & 0.011 & 12.080 & 0.620 & 0.014 & 253.6 & -2504 & 30 & -1.110 & 0.019 \\
HIP98492 & 2.660 & 0.018 & 11.373 & 0.898 & 0.076 & -266.4 & -1791 & 36 & -1.229 & 0.028 \\\hline
G112-43 & 5.595 & 0.019 & 10.063 & 0.686 & 0.000 & & & & & \\
CD$-48^{\circ}$02445 & 5.369 & 0.014 & 10.418 & 0.616 & 0.0 & & & & & \\
HD59392 & 6.382 & 0.013 & 9.576 & 0.685 & 0.0 & & & & & \\
\hline
\end{tabular}
\tablefoot{Parallax and photometric information from Gaia EDR3. The extinction coefficient is from \citet{Green2019a} except for 4850\_5696, CD$-48^{\circ}$02445, and HD59392, for which $E(B-V)=0.0$ is assumed. Radial velocity is from our newly obtained spectra except for G90-36, HIP28104 \citep{Stephens2002}, G115-58 \citep{Ishigaki2012}, and HIP98492 \citep{Omalley2017a}.}
\end{table*}
In summary, several studies seem to have shown that Sequoia stars have lower $\alpha$-element abundances than Gaia-Enceladus on average if they are selected according to $L_z$ and $E$.
However, the magnitude of the difference is comparable to the typical uncertainties of abundance ratios for individual stars.
This small difference together with the existence of multiple ways to kinematically select Sequoia stars has hampered clear understanding.
High-precision chemical abundances from a differential abundance analysis of high-$S/N$ spectra would allow us to robustly detect the abundance difference, if any; \citet{Nissen2010,Nissen2011} showed this approach to be powerful in characterizing the abundance difference between accreted and in-situ halo stars.
The aim of this study is to carry out a high-precision abundance analysis for Sequoia stars selected by the $L_z-E$ selection using both newly obtained high-resolution spectra and archival spectra, and compare the results with other halo stars.
The comparison halo stars come from NS10 at $-1.5\lesssim [\mathrm{Fe/H}]\lesssim -0.7$ and \citet[][R17]{Reggiani2017a}, who carried out high-precision abundance analysis for stars with $-2.5\lesssim[\mathrm{Fe/H}]\lesssim-1.5$.
As we see below, we clearly detect the very low-$\alpha$ element abundances for Sequoia stars.
We describe our target selection and the data in Section~\ref{sec:obs} and the abundance analysis in Section~\ref{sec:analysis}.
After briefly introducing results in Section~\ref{sec:result}, we provide a discussion in Section~\ref{sec:discussion} and present our conclusions in Section~\ref{sec:conclusion}.
\begin{figure*}
\centering
\includegraphics[width=1.0\textwidth]{./kinematicsCMD.pdf}
\caption{(Left:) Angular momentum and orbital energy of the observed stars and the comparison stars. We also show the distribution of all stars in Gaia EDR3 with good astrometry (relative parallax uncertainty smaller than 20\%) and Gaia DR2 radial velocity. The blue dotted lines represent the Sequoia selection from \citet{Koppelman2019a}. Stars in the orange box are used to define the chemical abundance trends of Gaia-Enceladus. NS10 and R17 stars within this box are shown with filled symbols, while those outside of it are shown with open symbols. (Right:) The Gaia EDR3 color-magnitude diagram of the program stars. We also plot four PARSEC isochrones with an age of $12\ \mathrm{Gyr}$ and [{Fe}/{H}]$=-2.0,\ -1.5,\ -1.0,\ \mathrm{and}\ -0.5$ (from left to right). We note that only stars with available extinction estimates from \citet{Green2019a} are plotted for the NS10 and R17 samples. \label{fig:kinematicsCMD}}
\end{figure*}
\section{Observations and target selection\label{sec:obs}}
We obtained new high-$S/N$, high-resolution spectra for nine Sequoia member candidates with the Subaru Telescope (for eight stars) and with Magellan (for one star).
Out of the nine stars, one (Gaia EDR3 360456543361799808) turned out to have a radial velocity very different from the value used for the selection (see Appendix~\ref{appendixA}), and hence can no longer be regarded as part of Sequoia.
We also collected high-$S/N$, high-resolution archival spectra for four Sequoia candidates and three standard stars from Subaru, Keck, and the Very Large Telescope (VLT).
The data and target information are summarized in Table~\ref{tab:obs} and Table~\ref{tab:kinematics_photo}, respectively.
The Subaru observations were conducted with the High Dispersion Spectrograph \citep[HDS;][]{Noguchi2002} on November 8--10, 2019.
We used the standard setup StdYd of HDS, which provides a wavelength coverage of $4000-6800\ \mathrm{\AA}$, and the image slicer \#2 \citep{Tajitsu2012a}, which yields $R\sim 80,000$.
Two to eight exposures were taken for each object and the total exposure time ranged from 20 minutes to 4 hours depending on the brightness of the stars.
We reduced the data using an \texttt{IRAF}\footnote{IRAF is distributed by the National Optical Astronomy Observatory, which is operated by the Association of Universities for Research in Astronomy (AURA) under a cooperative agreement with the National Science Foundation.} script, \texttt{hdsql}\footnote{\url{http://www.subarutelescope.org/Observing/Instruments/HDS/hdsql-e.html}}, which includes CCD linearity correction, scattered light subtraction, aperture extraction, flat-fielding, wavelength calibration, and heliocentric velocity correction.
The Magellan observation was conducted with the Magellan Inamori Kyocera Echelle spectrograph \citep[MIKE;][]{Bernstein2003a} on December 29, 2019.
Although the MIKE spectrum has a wide spectral coverage, from $3350\ \mathrm{\AA}$ to $9300\ \mathrm{\AA}$, we only use $4000-6800\ \mathrm{\AA}$ for consistency with the HDS spectra.
The slit width was $0.70''$, which yields $R\sim 40,000$ for the region bluer than $4950\,\mathrm{\AA}$ and $R\sim 32,000$ for the redder part.
We searched for reduced archival spectra using the Japanese Virtual Observatory portal for archival HDS spectra\footnote{\url{http://jvo.nao.ac.jp/portal/subaru/hds.do}}, the Keck Observatory Archive (KOA) for data taken with the High Resolution Echelle Spectrometer \citep[HIRES;][]{Vogt1994} on Keck, and the European Southern Observatory Science Archive Facility for data taken with the Ultraviolet and Visual Echelle Spectrograph \citep[UVES;][]{Dekker2000a}.
All the spectra were normalized by fitting continua with cubic splines after combining multiple exposures for individual objects.
For the objects with new observations, radial velocities were measured by comparing the observed wavelengths of \ion{Fe}{I} absorption lines with their laboratory values.
We adopt literature measurements of radial velocities for stars for which spectra were taken from archives (Table~\ref{tab:kinematics_photo}).
All the candidate Sequoia member stars were selected based on their angular momentum around the $z$-axis of the Milky Way ($L_z$) and orbital energy ($E$).
We combined Gaia DR2 astrometry with radial velocity measurements provided in Gaia DR2 and LAMOST DR4 for the target selection.
Table~\ref{tab:kinematics_photo} and Figure~\ref{fig:kinematicsCMD} include updated kinematic information for the program stars, where astrometry is now taken from Gaia EDR3 and the radial velocity comes from high-resolution spectroscopy.
Although we updated astrometry and radial velocity, the obtained orbital parameters did not change significantly.
Here we assumed that the Sun is located at $R_0=8.21\ \mathrm{kpc}$ \citep{McMillan2017a} and $z_0=20.8\ \mathrm{pc}$ \citep{Bennett2019a}, and is moving at $11.1\ \mathrm{km\ s^{-1}}$ toward the Galactic center \citep{Schoenrich2010a}, $245.3\ \mathrm{km\ s^{-1}}$ in the direction of Galactic rotation \citep{Reid2004a,McMillan2017a}, and $7.25\ \mathrm{km\ s^{-1}}$ toward the Galactic north pole \citep{Schoenrich2010a}.
The Milky Way potential used is from \citet{McMillan2017a}.
The calculation of the orbital energy was conducted with the software \textsc{Agama} \citep{Vasiliev2019a}.
We estimated uncertainties through a Monte Carlo method.
All the Sequoia candidates have $L_z<-1600\ \mathrm{kpc\ km\ s^{-1}}$ and $E>-1.3\times 10^5\ \mathrm{km^2\ s^{-2}}$.
We also removed stars with $[\mathrm{Fe/H}]>-1.0$ if they had metallicity estimates from LAMOST.
We note that our kinematic selection is based on our knowledge of Gaia~DR2, and updated selections will become available with more recent and future Gaia data releases (see L\"{o}vdal et al. 2021, in prep., and Ruiz-Lara et al. 2021, in prep.).
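The kinematic selection and its Monte Carlo error propagation can be sketched as follows. This is an illustrative stand-in only: a toy logarithmic halo potential replaces the \citet{McMillan2017a} model that the paper evaluates with \textsc{Agama}, and the star and its uncertainties are hypothetical.

```python
import numpy as np

def phi(R, z, v0=233.0):
    """Toy logarithmic halo potential in km^2 s^-2 (R, z in kpc).
    An illustrative stand-in for the McMillan (2017) potential."""
    return 0.5 * v0**2 * np.log(R**2 + z**2 + 1.0)

def energy_and_Lz(R, z, vR, vphi, vz):
    # E = Phi + v^2/2 and L_z = R * v_phi in cylindrical coordinates
    E = phi(R, z) + 0.5 * (vR**2 + vphi**2 + vz**2)
    Lz = R * vphi
    return E, Lz

# Hypothetical star: Gaussian uncertainties on its 6D phase-space coordinates
rng = np.random.default_rng(42)
n = 10_000
R    = rng.normal(9.0, 0.3, n)      # kpc
z    = rng.normal(2.0, 0.1, n)      # kpc
vR   = rng.normal(50.0, 10.0, n)    # km/s
vphi = rng.normal(-200.0, 10.0, n)  # km/s; negative = retrograde
vz   = rng.normal(30.0, 10.0, n)    # km/s

E, Lz = energy_and_Lz(R, z, vR, vphi, vz)
Lz_mean, Lz_err = np.mean(Lz), np.std(Lz)  # Monte Carlo estimate and uncertainty

# Only the L_z half of the selection is meaningful here, because the
# zero-point of E depends on the normalization of the adopted potential.
frac_pass = np.mean(Lz < -1600.0)
```

For this toy star the retrograde cut is satisfied in nearly all Monte Carlo draws, illustrating how the uncertainty on the membership criterion can be quantified per star.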
To suppress the effect of systematic uncertainties that depend on stellar spectral types and to put our derived abundances onto the same scale as previous studies, we limited the sample to stars around the main-sequence turn-off region using the Gaia DR2 color-magnitude diagram.
The updated photometric information with Gaia EDR3 is summarised in Table~\ref{tab:kinematics_photo} and Figure~\ref{fig:kinematicsCMD}.
The extinction was estimated using the three-dimensional dust map by \citet{Green2019a} and the extinction coefficients were estimated following \citet{Casagrande2021a}.
For three objects (4850\_5696, HD59392, and CD$-48^{\circ}$02445) outside the coverage of \citet{Green2019a}, we assumed the extinction to be negligible, since \citet{RuizDern2018a} and \citet{Lallement2019a} provide very small estimates for them ($E(B-V)<0.01$).
We note that two additional stars (BD+09 2190 and HE1509-0252) were included in the list of Sequoia candidates with archival spectra but were not analysed in the present study.
These two objects have much lower metallicities ([{Fe}/{H}]$=-2.63$ \citep{Ishigaki2012} and $-2.85$ \citep{Cohen2013a}, respectively) than the rest of the sample, and are hence not suitable for the differential abundance analysis conducted in this study.
The removal of these two stars and the metallicity selection ($[\mathrm{Fe/H}]<-1.0$) for LAMOST stars result in a smaller metallicity dispersion (0.19 dex) than the $\sim 0.3\,\mathrm{dex}$ reported in the literature \citep[e.g.,][]{Matsuno2019a}.
To compare the properties of Sequoia stars with those of in-situ and Gaia-Enceladus stars, we contrast chemical abundances of our Sequoia candidates with stars studied by NS10 and R17.
These stars are also plotted in Figure~\ref{fig:kinematicsCMD} and cover a similar region of the color magnitude diagram.
The orbital parameters of most of the NS10 and R17 stars clearly place them outside the Sequoia region.
We note that the most retrograde star in the R17 sample is HIP28104, which is regarded as a Sequoia member candidate and is included in our analysis.
The abundance of this star reported by R17 is marked in all our abundance figures with a special symbol.
\section{Abundance analysis\label{sec:analysis}}
We derived stellar parameters and chemical abundances based on a differential abundance analysis adopting HD59392 as the reference star.
Together with the high quality of the data, this approach enables us to achieve high-precision abundance measurements.
In this section, we describe the analysis method, validate the results by comparing with the literature, and homogenize abundances from the present study, NS10, and R17.
For the abundance analysis, we used the November 2019 version of MOOG \citep{Sneden1973} through $\texttt{q}^2$ \citep{Ramirez2014} and adopted the standard \texttt{MARCS} model atmospheres \citep{Gustafsson2008}.
\subsection{Equivalent width measurements}
\begin{table*}
\caption{Linelist and line-by-line abundance\label{tab:linelist}}
\centering
\begin{tabular}{l*{7}{r}}\hline\hline
Object & species & $\lambda$ & $\chi$ & $\log gf$ & $EW$ & $\sigma(EW)$ & $A(X)$ \\
\hline
1336\_6432 &NaI & 5889.959& 0.000& -0.193& 133.6& 6.2& 4.230\\
1336\_6432 &NaI & 5895.910& 0.000& -0.575& 107.0& 5.0& 4.108\\
1336\_6432 &MgI & 4167.271& 4.346& -0.746& 43.7& 2.1& 5.969\\
1336\_6432 &MgI & 5167.321& 2.709& -0.854& 125.8& 5.8& 6.125\\
1336\_6432 &MgI & 5172.684& 2.712& -0.363& 173.0& 8.0& 6.118\\ \hline\hline
$\sigma(A)_{T_{\rm eff}}$ & $\sigma(A)_{\log g}$ & $\sigma(A)_{v_t}$ & $\sigma(A)_{[\mathrm{Fe/H}]}$ & $\sigma(A)_{EW}$ & $s_X$ & weight \\\hline
0.053& -0.009& -0.030& -0.003& 0.108& 0.000& 60.883\\
0.034& -0.003& -0.024& 0.007& 0.099& 0.000& 78.054\\
0.026& -0.003& -0.003& 0.001& 0.053& 0.000& 204.968\\
0.055& -0.012& -0.023& 0.001& 0.096& 0.000& 21.625\\
0.064& -0.017& -0.014& 0.001& 0.080& 0.000& 10.299\\
\hline
\end{tabular}
\tablefoot{The full table is available online at the CDS; a portion is shown here for guidance.}
\end{table*}
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{./EWNS10.pdf}
\caption{Equivalent width comparison with NS10.\label{fig:ewNS10}}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{./EWR17.pdf}
\caption{Equivalent width comparison with R17. We note that this figure has a wider range in equivalent widths compared to Figure~\ref{fig:ewNS10}. \label{fig:ewR17}}
\end{figure}
Table~\ref{tab:linelist} provides the line list and measured equivalent widths ($EW$).
The line selection follows NS10 and R17, but the $\log gf$ values were updated for homogenization purposes.
We measured equivalent widths of the lines by fitting Voigt profiles, of which the results were visually inspected.
Stellar parameter determination and subsequent elemental abundance measurements are based on these measured equivalent widths unless otherwise stated.
Spectral synthesis was applied to Si, Mn, Zn, and Y.
The equivalent widths of the lines of these elements were not measured by Voigt-profile fitting, but were instead estimated from the best-fitting synthetic spectra.
Hyperfine structure splitting was taken into consideration for Na, Mn and Ba.
We have two and three stars in common with NS10 and R17, respectively.
The equivalent widths measured from the archival spectra are compared with the values reported in the literature for these objects in Figures~\ref{fig:ewNS10} and \ref{fig:ewR17}\footnote{The equivalent widths for the objects in R17 were kindly provided by H. Reggiani in a private communication.}.
In the comparison with NS10, G112-43 particularly offers an opportunity to confirm that different telescopes yield consistent spectra because we used a Subaru/HDS archive spectrum for this object (Table~\ref{tab:obs}) while NS10 used one from VLT/UVES.
We find excellent agreement in the comparison with NS10 for HD59392 and for G112-43.
The average differences in reduced equivalent widths ($REW=\log(EW/\lambda)$) are $\Delta REW=-0.004$ ($\sigma=0.029$) and $\Delta REW= 0.007$ ($\sigma=0.041$), respectively.
The agreement with R17 is also good, although our measured equivalent widths are larger than theirs for strong lines, which is likely due to the different assumptions about the line shape (Voigt versus Gaussian profiles).
We confirmed that Voigt profiles provide better fits for the strongest lines than Gaussian profiles through visual inspection.
Despite the offset at large equivalent widths, the average differences in reduced equivalent widths are small for all three objects ($\Delta REW=-0.012$, $\sigma=0.030$ for HD59392; $\Delta REW=-0.001$, $\sigma=0.026$ for CD$-48^{\circ}$02445; $\Delta REW=-0.001$, $\sigma=0.029$ for HIP28104).
We estimated the uncertainties in the equivalent widths using the formula provided by \citet{Cayrel1988a}.
Although the formula predicts very small fractional uncertainties for strong lines, Figures~\ref{fig:ewNS10} and \ref{fig:ewR17} both show that the actual scatter is larger.
Since this holds even for the strongest lines, the extra scatter is unlikely to depend on the signal-to-noise ratio ($S/N$).
Therefore, based on the scatter in $\Delta REW$ at $-5.5<REW<-4.5$, which is $0.015-0.025\,\mathrm{dex}$, we added in quadrature an error floor of $0.02\,\mathrm{dex}$, equivalent to 4.6\%, to the uncertainties on the equivalent widths.
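The error budget just described can be sketched as follows. The photon-noise term is written in the spirit of the \citet{Cayrel1988a} formula with an approximate prefactor, and the FWHM, pixel size, and $S/N$ values in the example are illustrative, not values from the paper.

```python
import numpy as np

def ew_uncertainty(ew, fwhm, pixel, snr, floor_dex=0.02):
    """EW uncertainty (all widths in Angstrom): a Cayrel (1988)-style
    photon-noise term (the 1.5 prefactor is approximate) plus the
    0.02 dex error floor from the text, converted to a fractional
    error (ln(10) * 0.02 ~ 4.6%) and added in quadrature."""
    sigma_phot = 1.5 * np.sqrt(fwhm * pixel) / snr
    floor_frac = np.log(10.0) * floor_dex
    return np.sqrt(sigma_phot**2 + (floor_frac * ew)**2)

# A strong line (EW = 100 mA) is floor-dominated; a weak line (5 mA) is not:
s_strong = ew_uncertainty(0.100, 0.12, 0.02, 200.0)
s_weak   = ew_uncertainty(0.005, 0.12, 0.02, 200.0)
```

This reproduces the behavior noted in the text: for strong lines the quadrature floor dominates over the photon-noise prediction.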
\subsection{Stellar parameter determination\label{sec:parameters}}
We determined stellar parameters by combining the analysis of iron lines with photometric and astrometric information from the Gaia mission.
We estimated the effective temperature ($T_{\rm eff}$) and microturbulent velocity ($v_t$) by minimizing the correlation coefficients of the iron abundances derived from individual neutral iron lines with excitation potential and with $REW$, respectively.
We calculated surface gravity ($\log g$) from
\begin{eqnarray}
\log g &=& \log g_{\odot} + \log(M/M_\odot) - 2\log(R/R_{\odot})\nonumber\\
&=&\log g_{\odot} + \log(M/M_\odot) + 4\log(T_{\rm eff}/T_{\rm eff,\odot}) - \log (L/L_\odot),\label{eq:logg}
\end{eqnarray}
where $M$, $R$, and $L$ are the mass, radius, and luminosity of the star.
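Eq.~(\ref{eq:logg}) translates directly into code once $M$ and $L$ are known; a minimal sketch, where the solar constants and the example turn-off star ($0.8\,M_\odot$, $T_{\rm eff}=6012\,\mathrm{K}$, $2.5\,L_\odot$) are illustrative assumptions rather than values from the paper:

```python
import numpy as np

LOGG_SUN = 4.438   # solar surface gravity, log(cm s^-2)
TEFF_SUN = 5772.0  # K

def logg_from_luminosity(mass, teff, lum):
    """Second form of Eq. (logg): mass and luminosity in solar units, Teff in K."""
    return (LOGG_SUN + np.log10(mass)
            + 4.0 * np.log10(teff / TEFF_SUN)
            - np.log10(lum))

# Illustrative turn-off star (values are assumptions, not from the paper):
g = logg_from_luminosity(0.8, 6012.0, 2.5)
```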
The mass was obtained by finding the stellar model that best describes the position of the star in the color-absolute magnitude diagram.
Three parameters, namely the age ($\tau$), initial mass ($M_{\rm ini}$), and $\mathrm{[Fe/H]_{\rm model}}$, were varied to find the best model.
We maximized $p(\theta|\bf x)$ through an ensemble Markov chain Monte Carlo sampling using \texttt{emcee} \citep{emcee}, where $\theta$ is $(\tau,\,M_{\rm ini},\,\mathrm{[Fe/H]_{\rm model}})$, $\bf x$ is $(M_G,(Bp-Rp)_0,\mathrm{[Fe/H]}_{\rm input})$, and $p(\theta|\textbf{x})\propto L(\textbf{x}|\theta)p(\theta)$.
The likelihood $L(\textbf{x}|\theta)$ was expressed as a multivariate Gaussian distribution $\mathcal{N}(\textbf{x}_{\rm model}|\textbf{x}_{\rm obs},\Sigma)$, in which $\Sigma$ reflects the observational uncertainties.
For the uncertainty of $M_G$ and $(Bp-Rp)_0$ and the covariance between the two, we considered the uncertainty in Gaia photometry and parallax, and extinction.
We adopted a $0.2\,\mathrm{dex}$ uncertainty for $[\mathrm{Fe/H}]_{\rm input}$, which we set to the spectroscopic metallicity $[\mathrm{Fe/H}]_{\rm sp}$, to take systematic uncertainties into account.
We used the initial mass function of \citet{Kroupa2003a} as the prior for $M_{\rm ini}$ and a flat prior for the age between 1 and 20 Gyr.
The luminosity was obtained from $M_G$ and the bolometric correction by \citet{Casagrande2018a}.
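The Bayesian structure $p(\theta|\textbf{x})\propto L(\textbf{x}|\theta)p(\theta)$ described above can be sketched with a toy model. Here a hypothetical linear "isochrone" stands in for a real PARSEC grid, a simple power law stands in for the Kroupa IMF, and a basic Metropolis-Hastings sampler stands in for \texttt{emcee}'s ensemble sampler; all coefficients and observed values are invented for illustration.

```python
import numpy as np

def toy_isochrone(theta):
    """Hypothetical linear 'isochrone': maps (age tau [Gyr], initial mass
    [Msun], [Fe/H]) to the observables (M_G, (Bp-Rp)_0, [Fe/H])."""
    tau, m_ini, feh = theta
    m_g   = 4.02 - 5.0 * (m_ini - 0.8) + 0.10 * (tau - 12.0)
    color = 0.54 - 1.0 * (m_ini - 0.8) + 0.05 * (tau - 12.0)
    return np.array([m_g, color, feh])

def log_posterior(theta, x_obs, sigma):
    tau, m_ini, feh = theta
    if not (1.0 < tau < 20.0) or m_ini <= 0.0:  # flat 1-20 Gyr age prior
        return -np.inf
    resid = (toy_isochrone(theta) - x_obs) / sigma
    log_like  = -0.5 * np.sum(resid**2)         # diagonal Sigma for simplicity
    log_prior = -2.3 * np.log(m_ini)            # power-law stand-in for the IMF
    return log_like + log_prior

x_obs = np.array([4.02, 0.54, -1.60])           # (M_G, (Bp-Rp)_0, [Fe/H]_input)
sigma = np.array([0.05, 0.02, 0.20])

rng   = np.random.default_rng(0)
theta = np.array([12.0, 0.8, -1.6])             # start at a plausible point
lp    = log_posterior(theta, x_obs, sigma)
chain = []
for _ in range(20000):                          # Metropolis-Hastings sampler
    prop    = theta + rng.normal(0.0, [0.5, 0.02, 0.05])
    lp_prop = log_posterior(prop, x_obs, sigma)
    if np.log(rng.random()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    chain.append(theta.copy())
chain    = np.array(chain)[5000:]               # discard burn-in
mass_est = chain[:, 1].mean()                   # posterior mean mass
```

The posterior mean and spread of the mass then feed into Eq.~(\ref{eq:logg}), mirroring how the paper propagates the fit into $\log g$.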
First estimates of the uncertainties in the stellar parameters were obtained in the following way:
the uncertainties in $T_{\rm eff}$ and $v_t$ were estimated by finding the ranges of the values that provide the corresponding correlation coefficients consistent with zero at $1\sigma$ level;
the $\log g$ uncertainty was computed by randomly sampling $M$ and $L$ in Eq.~\ref{eq:logg} following their uncertainties, where covariances between $M$, $T_{\rm eff}$ and $L$ were assumed to be negligible;
the uncertainty of $[\mathrm{Fe/H}]_{\rm sp}$ was obtained from the standard deviation of the iron abundances from individual lines divided by the square root of the number of lines used.
These estimates, however, do not take correlations between parameters into consideration.
For example, since $[\mathrm{Fe/H}]_{\rm sp}$ clearly depends on the assumed values of the other parameters, we need to propagate the uncertainties in the other parameters into the uncertainty estimate for $[\mathrm{Fe/H}]_{\rm sp}$.
We corrected the estimated uncertainties by properly considering correlations between parameters following the method described in Appendix~\ref{app:uncertainty}.
As a result of this procedure, we also obtained covariances between the estimated parameters.
Since we adopted a differential abundance analysis, the parameters of the standard star HD59392 determine the scale of our parameters.
We adopted $T_{\rm eff}=6012\,\mathrm{K}$ and $[\mathrm{Fe/H}]_{\rm sp}=-1.6$ (NS10), and re-determined $\log g$ using the astrometric information.
The microturbulence was updated to $v_t=1.4\,\mathrm{km\,s^{-1}}$ so that the iron abundances derived from individual neutral iron lines do not depend on the line strength.
The stellar parameters and their uncertainties obtained as just described are provided in Table~\ref{tab:parameters}.
We note that the $[\mathrm{Fe/H}]_{\rm sp}$ values in Table~\ref{tab:parameters} are not the same as the $[\mathrm{Fe/H}]_{\mathrm{I}}$ or $[\mathrm{Fe/H}]_{\mathrm{II}}$ values that we report in the next section.
This is because the computation of the weights used in the next section for the abundance and uncertainty estimates requires pre-determined stellar parameters and their uncertainties.
Since the stellar parameter determination is itself iterative, we adopted the simple mean of the abundances from individual lines here to simplify the computation, whereas we use a weighted average in the next section.
\begin{sidewaystable*}
\caption{Stellar parameters\label{tab:parameters}}
\centering
\begin{tabular}{l*{14}{r}}
\hline\hline
Object & $T_{\rm eff}$ & $\sigma(T_{\rm eff})$ & $\log g$ & $\sigma(\log g)$ & $v_t$ & $\sigma(v_t)$ & $[\mathrm{Fe/H}]_{\rm sp}$ & $\sigma([\mathrm{Fe/H}]_{\rm sp})$ & $\rho_{T_{\rm eff},\log g}$ & $\rho_{T_{\rm eff},v_t}$ & $\rho_{T_{\rm eff},[\mathrm{Fe/H}]_{\rm sp}}$ & $\rho_{\log g,v_t}$ & $\rho_{\log g,[\mathrm{Fe/H}]_{\rm sp}}$ & $\rho_{v_t,[\mathrm{Fe/H}]_{\rm sp}}$ \\\hline
1336\_6432 & 6475 & 65 & 4.166& 0.034& 1.512& 0.102& -1.691& 0.030& 0.485& 0.269& 0.280& 0.108& 0.448& -0.540 \\
2657\_5888 & 5317 & 152 & 4.314& 0.056& 0.784& 0.530& -1.535& 0.053& 0.845& 0.925& -0.751& 0.749& -0.504& -0.807 \\
2813\_6032 & 6414 & 63 & 4.209& 0.035& 1.624& 0.102& -1.662& 0.029& 0.425& 0.435& 0.206& 0.071& 0.468& -0.264 \\
2870\_9072 & 5815 & 60 & 3.788& 0.046& 1.272& 0.066& -1.467& 0.023& 0.351& 0.704& 0.272& 0.123& 0.743& 0.035 \\
3336\_0672 & 5575 & 148 & 4.471& 0.051& 1.062& 0.427& -1.691& 0.033& 0.816& 0.951& -0.542& 0.738& -0.228& -0.625 \\
3341\_2720 & 6538 & 78 & 4.082& 0.038& 1.456& 0.124& -1.831& 0.031& 0.537& 0.355& 0.355& 0.169& 0.525& -0.425 \\
4587\_5616 & 5481 & 80 & 4.496& 0.033& 1.329& 0.235& -1.768& 0.048& 0.661& 0.688& -0.304& 0.343& 0.009& -0.470 \\
4850\_5696 & 6448 & 65 & 4.194& 0.033& 1.415& 0.107& -1.728& 0.027& 0.455& 0.665& 0.018& 0.265& 0.357& -0.287 \\
G90-36 & 5394 & 46 & 4.476& 0.030& 1.210& 0.172& -1.670& 0.045& 0.262& 0.553& -0.306& -0.069& 0.236& -0.491 \\
G115-58 & 6187 & 69 & 4.012& 0.041& 1.361& 0.069& -1.394& 0.021& 0.309& 0.670& 0.126& -0.023& 0.770& -0.249 \\
HIP28104 & 6468 & 45 & 4.247& 0.042& 1.339& 0.090& -1.986& 0.027& 0.255& 0.507& 0.143& 0.073& 0.539& -0.126 \\
HIP98492 & 5510 & 40 & 3.726& 0.037& 1.130& 0.062& -1.272& 0.021& 0.199& 0.526& -0.171& -0.170& 0.713& -0.358 \\\hline
G112-43 & 6125 & 54 & 4.086& 0.031& 1.412& 0.070& -1.286& 0.019& 0.421& 0.695& 0.137& 0.117& 0.626& -0.214 \\
CD$-48^{\circ}$02445 & 6446 & 58 & 4.205& 0.036& 1.457& 0.111& -1.852& 0.025& 0.421& 0.695& 0.005& 0.244& 0.425& -0.310 \\
HD59392 & 6012 & & 3.954& & 1.400& & -1.600& & & & & & & \\
\hline
\end{tabular}
\tablefoot{
The last three stars are standard stars and not part of Sequoia.
}
\end{sidewaystable*}
\longtab{
\begin{landscape}
\begin{longtable}{l*{17}{r}}
\caption{Abundances of Sequoia stars \label{tab:abundance}}\\
\hline\hline
\endfirsthead
\caption{continued.}\\
\hline\hline
\endhead
\endfoot
& \multicolumn{5}{c}{1336\_6432}&& \multicolumn{5}{c}{2657\_5888}&& \multicolumn{5}{c}{2813\_6032}\\\cline{2-6}\cline{8-12}\cline{14-18}
& $N$ & [{X}/{H}] & $\sigma$ & [{X}/{Fe}] & $\sigma$&& $N$ & [{X}/{H}] & $\sigma$ & [{X}/{Fe}] & $\sigma$&& $N$ & [{X}/{H}] & $\sigma$ & [{X}/{Fe}] & $\sigma$\\\hline
FeI & 84& -1.671& 0.033& ...& ...&& 90& -1.343& 0.048& ...& ...&& 97& -1.602& 0.030& ...& ...\\
FeII & 10& -1.692& 0.027& ...& ...&& 9& -1.517& 0.045& ...& ...&& 16& -1.632& 0.026& ...& ...\\
NaI & 2& -2.079& 0.085& -0.408& 0.081&& 4& -1.522& 0.071& -0.179& 0.076&& 4& -1.936& 0.063& -0.334& 0.062\\
MgI & 6& -1.563& 0.039& 0.108& 0.038&& 2& -1.087& 0.086& 0.256& 0.082&& 6& -1.461& 0.037& 0.141& 0.036\\
SiI & 2& -1.533& 0.064& 0.137& 0.065&& 4& -1.025& 0.035& 0.318& 0.053&& 1& -1.242& 0.061& 0.360& 0.063\\
CaI & 15& -1.354& 0.034& 0.316& 0.032&& 13& -0.947& 0.067& 0.396& 0.062&& 19& -1.361& 0.032& 0.241& 0.030\\
TiI & 9& -1.293& 0.056& 0.378& 0.044&& 16& -1.110& 0.113& 0.232& 0.102&& 10& -1.272& 0.053& 0.330& 0.042\\
TiII & 10& -1.421& 0.035& 0.271& 0.034&& 8& -1.273& 0.059& 0.245& 0.064&& 10& -1.403& 0.034& 0.230& 0.034\\
CrI & 3& -1.755& 0.066& -0.084& 0.055&& 3& -1.343& 0.134& -0.001& 0.123&& 3& -1.640& 0.061& -0.039& 0.052\\
MnI & 2& -1.968& 0.055& -0.297& 0.046&& 2& -1.679& 0.114& -0.336& 0.103&& 3& -1.912& 0.048& -0.310& 0.040\\
NiI & 8& -1.711& 0.047& -0.041& 0.042&& 22& -1.464& 0.057& -0.121& 0.063&& 15& -1.659& 0.040& -0.057& 0.036\\
ZnI & 2& -1.654& 0.048& 0.017& 0.043&& 2& -1.354& 0.039& -0.011& 0.060&& 2& -1.534& 0.045& 0.068& 0.041\\
YII & 1& -1.828& 0.061& -0.136& 0.060&& 1& -1.695& 0.062& -0.178& 0.080&& 1& -1.745& 0.054& -0.113& 0.053\\
BaII & 4& -2.074& 0.052& -0.382& 0.049&& 3& -1.709& 0.050& -0.192& 0.063&& 4& -2.021& 0.057& -0.388& 0.056\\\hline\hline
& \multicolumn{5}{c}{2870\_9072}&& \multicolumn{5}{c}{3336\_0672}&& \multicolumn{5}{c}{3341\_2720}\\\cline{2-6}\cline{8-12}\cline{14-18}
& $N$ & [{X}/{H}] & $\sigma$ & [{X}/{Fe}] & $\sigma$&& $N$ & [{X}/{H}] & $\sigma$ & [{X}/{Fe}] & $\sigma$&& $N$ & [{X}/{H}] & $\sigma$ & [{X}/{Fe}] & $\sigma$\\\hline
FeI & 104& -1.451& 0.025& ...& ...&& 94& -1.702& 0.035& ...& ...&& 79& -1.742& 0.031& ...& ...\\
FeII & 17& -1.438& 0.024& ...& ...&& 16& -1.667& 0.028& ...& ...&& 10& -1.822& 0.029& ...& ...\\
NaI & 3& -1.919& 0.055& -0.468& 0.054&& 3& -2.012& 0.163& -0.310& 0.160&& 2& -2.162& 0.095& -0.420& 0.090\\
MgI & 7& -1.455& 0.039& -0.004& 0.039&& 7& -1.622& 0.072& 0.079& 0.074&& 6& -1.626& 0.040& 0.116& 0.039\\
SiI & 5& -1.285& 0.042& 0.166& 0.044&& 1& -1.264& 0.068& 0.438& 0.070&& 1& -1.152& 0.080& 0.590& 0.082\\
CaI & 16& -1.291& 0.035& 0.160& 0.033&& 20& -1.425& 0.061& 0.277& 0.061&& 21& -1.422& 0.035& 0.320& 0.033\\
TiI & 12& -1.357& 0.058& 0.094& 0.052&& 14& -1.586& 0.082& 0.116& 0.080&& 9& -1.321& 0.070& 0.421& 0.061\\
TiII & 11& -1.384& 0.038& 0.054& 0.033&& 9& -1.504& 0.032& 0.163& 0.037&& 11& -1.492& 0.051& 0.331& 0.046\\
CrI & 5& -1.477& 0.058& -0.026& 0.052&& 4& -1.730& 0.127& -0.028& 0.121&& 2& -1.725& 0.094& 0.017& 0.088\\
MnI & 3& -1.882& 0.050& -0.431& 0.045&& 4& -2.151& 0.109& -0.449& 0.105&& 2& -2.001& 0.055& -0.259& 0.047\\
NiI & 22& -1.564& 0.036& -0.113& 0.034&& 17& -1.728& 0.076& -0.026& 0.075&& 4& -1.786& 0.044& -0.044& 0.039\\
ZnI & 2& -1.530& 0.039& -0.079& 0.038&& 2& -1.710& 0.048& -0.009& 0.053&& 2& -1.624& 0.058& 0.118& 0.054\\
YII & 2& -1.913& 0.048& -0.475& 0.044&& 2& -2.108& 0.092& -0.441& 0.096&& 2& -1.824& 0.058& -0.002& 0.053\\
BaII & 4& -1.879& 0.050& -0.441& 0.044&& 4& -2.005& 0.050& -0.338& 0.054&& 4& -2.114& 0.061& -0.292& 0.053\\\hline\hline
& \multicolumn{5}{c}{4587\_5616}&& \multicolumn{5}{c}{4850\_5696}&& \multicolumn{5}{c}{G115-58 }\\\cline{2-6}\cline{8-12}\cline{14-18}
& $N$ & [{X}/{H}] & $\sigma$ & [{X}/{Fe}] & $\sigma$&& $N$ & [{X}/{H}] & $\sigma$ & [{X}/{Fe}] & $\sigma$&& $N$ & [{X}/{H}] & $\sigma$ & [{X}/{Fe}] & $\sigma$\\\hline
FeI & 93& -1.563& 0.039& ...& ...&& 98& -1.684& 0.026& ...& ...&& 106& -1.317& 0.029& ...& ...\\
FeII & 6& -1.730& 0.053& ...& ...&& 13& -1.708& 0.027& ...& ...&& 18& -1.364& 0.021& ...& ...\\
NaI & 4& -1.964& 0.058& -0.400& 0.056&& 3& -2.095& 0.066& -0.411& 0.064&& 3& -1.736& 0.062& -0.418& 0.059\\
MgI & 6& -1.551& 0.056& 0.013& 0.054&& 4& -1.574& 0.047& 0.111& 0.045&& 5& -1.278& 0.050& 0.039& 0.047\\
SiI & 2& -1.242& 0.147& 0.322& 0.150&& 1& -1.583& 0.079& 0.101& 0.079&& 3& -1.175& 0.060& 0.142& 0.062\\
CaI & 13& -1.376& 0.052& 0.187& 0.047&& 23& -1.473& 0.032& 0.211& 0.030&& 19& -1.085& 0.035& 0.232& 0.031\\
TiI & 15& -1.403& 0.079& 0.161& 0.067&& 10& -1.366& 0.055& 0.318& 0.047&& 10& -1.149& 0.062& 0.168& 0.052\\
TiII & 10& -1.617& 0.066& 0.112& 0.078&& 14& -1.540& 0.037& 0.167& 0.039&& 9& -1.106& 0.059& 0.258& 0.057\\
CrI & 5& -1.463& 0.082& 0.101& 0.071&& 6& -1.638& 0.085& 0.046& 0.080&& 4& -1.309& 0.060& 0.008& 0.051\\
MnI & 4& -1.984& 0.090& -0.420& 0.083&& 8& -2.002& 0.061& -0.318& 0.055&& 3& -1.634& 0.052& -0.316& 0.045\\
NiI & 18& -1.663& 0.053& -0.100& 0.051&& 9& -1.770& 0.052& -0.086& 0.049&& 14& -1.440& 0.043& -0.123& 0.038\\
ZnI & 2& -1.570& 0.035& -0.007& 0.048&& 2& -1.605& 0.043& 0.079& 0.039&& 2& -1.411& 0.044& -0.094& 0.040\\
YII & 2& -1.781& 0.051& -0.051& 0.075&& 1& -2.055& 0.058& -0.347& 0.060&& 2& -1.567& 0.045& -0.204& 0.041\\
BaII & 4& -1.842& 0.052& -0.113& 0.071&& 4& -2.181& 0.054& -0.473& 0.055&& 4& -1.518& 0.049& -0.154& 0.045\\\hline\hline
& \multicolumn{5}{c}{G90-36 }&& \multicolumn{5}{c}{HIP28104 }&& \multicolumn{5}{c}{HIP98492 }\\\cline{2-6}\cline{8-12}\cline{14-18}
& $N$ & [{X}/{H}] & $\sigma$ & [{X}/{Fe}] & $\sigma$&& $N$ & [{X}/{H}] & $\sigma$ & [{X}/{Fe}] & $\sigma$&& $N$ & [{X}/{H}] & $\sigma$ & [{X}/{Fe}] & $\sigma$\\\hline
FeI & 80& -1.633& 0.023& ...& ...&& 101& -1.973& 0.021& ...& ...&& 103& -1.291& 0.021& ...& ...\\
FeII & 6& -1.622& 0.040& ...& ...&& 15& -1.980& 0.025& ...& ...&& 18& -1.246& 0.022& ...& ...\\
NaI & 1& -2.106& 0.081& -0.473& 0.081&& 2& -2.332& 0.071& -0.358& 0.070&& 4& -1.429& 0.063& -0.138& 0.062\\
MgI & 4& -1.565& 0.043& 0.069& 0.040&& 6& -1.812& 0.034& 0.161& 0.032&& 5& -1.074& 0.040& 0.217& 0.037\\
SiI & 2& -1.289& 0.050& 0.345& 0.054&& 0& ...& ...& ...& ...&& 8& -1.052& 0.042& 0.240& 0.044\\
CaI & 16& -1.479& 0.035& 0.154& 0.032&& 22& -1.651& 0.025& 0.322& 0.023&& 16& -1.048& 0.031& 0.243& 0.028\\
TiI & 13& -1.585& 0.052& 0.048& 0.044&& 9& -1.544& 0.042& 0.429& 0.034&& 16& -1.157& 0.042& 0.134& 0.036\\
TiII & 4& -1.552& 0.045& 0.070& 0.052&& 14& -1.723& 0.030& 0.257& 0.030&& 7& -0.959& 0.041& 0.287& 0.037\\
CrI & 4& -1.651& 0.055& -0.018& 0.048&& 4& -1.944& 0.048& 0.029& 0.043&& 4& -1.283& 0.049& 0.008& 0.043\\
MnI & 3& -2.019& 0.063& -0.386& 0.059&& 6& -2.357& 0.045& -0.384& 0.039&& 4& -1.688& 0.083& -0.397& 0.081\\
NiI & 17& -1.730& 0.042& -0.097& 0.040&& 8& -2.018& 0.059& -0.045& 0.058&& 24& -1.302& 0.026& -0.011& 0.025\\
ZnI & 2& -1.672& 0.043& -0.039& 0.048&& 1& -1.858& 0.059& 0.116& 0.057&& 2& -1.083& 0.036& 0.208& 0.039\\
YII & 1& -1.900& 0.063& -0.278& 0.072&& 1& -2.038& 0.079& -0.058& 0.079&& 2& -1.506& 0.037& -0.260& 0.034\\
BaII & 4& -1.876& 0.043& -0.254& 0.052&& 4& -2.332& 0.043& -0.352& 0.044&& 4& -1.612& 0.043& -0.366& 0.040\\
\hline
\end{longtable}
\end{landscape}
}
\begin{table}
\caption{Abundances of standard stars \label{tab:abundance_standard}}
\centering
\begin{tabular}{l*{5}{r}}
\hline\hline
& \multicolumn{5}{c}{G112-43 }\\\cline{2-6}
& $N$ & [{X}/{H}] & $\sigma$ & [{X}/{Fe}] & $\sigma$\\\hline
FeI & 100& -1.254& 0.022& ...& ...\\
FeII & 10& -1.256& 0.019& ...& ...\\
NaI & 3& -1.423& 0.076& -0.169& 0.074\\
MgI & 5& -1.075& 0.038& 0.179& 0.036\\
SiI & 5& -1.035& 0.025& 0.219& 0.027\\
CaI & 16& -0.977& 0.028& 0.278& 0.025\\
TiI & 11& -0.951& 0.044& 0.303& 0.037\\
TiII & 7& -0.914& 0.035& 0.343& 0.033\\
CrI & 6& -1.200& 0.058& 0.055& 0.052\\
MnI & 4& -1.527& 0.071& -0.273& 0.069\\
NiI & 20& -1.185& 0.029& 0.069& 0.025\\
ZnI & 2& -0.940& 0.036& 0.315& 0.035\\
YII & 2& -1.390& 0.037& -0.134& 0.035\\
BaII & 3& -1.584& 0.044& -0.328& 0.041\\\hline\hline
& \multicolumn{5}{c}{CD$-48^{\circ}$02445}\\\cline{2-6}
& $N$ & [{X}/{H}] & $\sigma$ & [{X}/{Fe}] & $\sigma$\\\hline
FeI & 111& -1.814& 0.025& ...& ...\\
FeII & 11& -1.829& 0.021& ...& ...\\
NaI & 2& -2.091& 0.082& -0.276& 0.079\\
MgI & 6& -1.563& 0.035& 0.251& 0.033\\
SiI & 2& -1.414& 0.111& 0.400& 0.112\\
CaI & 23& -1.461& 0.026& 0.353& 0.025\\
TiI & 10& -1.372& 0.051& 0.442& 0.043\\
TiII & 14& -1.540& 0.032& 0.289& 0.031\\
CrI & 5& -1.786& 0.049& 0.028& 0.041\\
MnI & 7& -2.161& 0.048& -0.346& 0.042\\
NiI & 13& -1.844& 0.044& -0.029& 0.040\\
ZnI & 1& -1.657& 0.045& 0.158& 0.043\\
YII & 2& -1.749& 0.043& 0.080& 0.040\\
BaII & 4& -1.985& 0.044& -0.156& 0.041\\\hline\hline
& \multicolumn{5}{c}{HD59392 }\\\cline{2-6}
& $N$ & [{X}/{H}] & $\sigma$ & [{X}/{Fe}] & $\sigma$\\\hline
FeI & 134& (-1.600)& ...& ( ...)& ...\\
FeII & 20& (-1.600)& ...& ( ...)& ...\\
NaI & 4& (-1.750)& ...& (-0.150)& ...\\
MgI & 7& (-1.320)& ...& ( 0.280)& ...\\
SiI & 8& (-1.330)& ...& ( 0.270)& ...\\
CaI & 23& (-1.220)& ...& ( 0.380)& ...\\
TiI & 16& (-1.270)& ...& ( 0.330)& ...\\
TiII & 14& (-1.270)& ...& ( 0.330)& ...\\
CrI & 6& (-1.600)& ...& ( 0.000)& ...\\
MnI & 8& (-1.960)& ...& (-0.360)& ...\\
NiI & 26& (-1.610)& ...& (-0.010)& ...\\
ZnI & 2& (-1.460)& ...& ( 0.140)& ...\\
YII & 2& (-1.470)& ...& ( 0.130)& ...\\
BaII & 4& (-1.670)& ...& (-0.070)& ...\\
\hline
\end{tabular}
\tablefoot{The abundance of HD59392 is identical to NS10 since this is the reference star in our analysis.}
\end{table}
\subsection{Elemental abundances\label{sec:abundance}}
Elemental abundances were obtained through a differential abundance analysis, assuming one-dimensional plane-parallel atmospheres and local thermodynamic equilibrium (1D/LTE), except for Na.
Since the Na abundance was sometimes derived from the Na~D lines, departures from LTE can be significant.
We corrected for this effect using the grid provided by \citet{Lind2011a}, which is available through the INSPECT database\footnote{
Data obtained from the INSPECT database, version 1.0 (\url{www.inspect-stars.net})}.
For each species, abundances from individual lines were combined to obtain the best estimate for the abundance of the species following the prescription by \citet{Ji2020a}.
In short, the final abundance is a weighted mean of the abundances from individual lines.
The weight for a line reflects the uncertainty in its equivalent width and the sensitivity of the abundance to stellar parameters.
An error floor ($s_\mathrm{X}$) was added for individual lines so that the log likelihood,
\begin{align}
\log{\cal L}=&-\frac{1}{2}\sum_i\frac{(A_i - \bar{A})^2}{\sigma^2(A_i)_{EW}+s_X^2}\nonumber\\
&-\frac{1}{2}\sum_i\log\left(\sigma^2(A_i)_{EW}+s_X^2\right)+\mathrm{constant},
\end{align}
where $A_i$ is the abundance derived from line $i$ and $\bar{A}$ is the best estimate of the abundance, is maximized.
The abundance ratios [{X}/{Y}] were computed with the correlation between the two elemental abundances taken into account.
When computing [{X}/{Fe}], we used the iron abundance from the same ionization state as the species X.
We indicate $1\sigma$ confidence error ellipses for Sequoia stars in the abundance figures to visualize the uncertainties and the covariance of the measured abundances.
The abundances derived from individual lines, their sensitivities to the uncertainties in the stellar parameters and equivalent widths, the weights, and the error floors are included in Table~\ref{tab:linelist}.
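The line-combination step above can be sketched as follows: a grid search over the error floor $s_X$ that maximizes the log likelihood given in the text, followed by an inverse-variance weighted mean. This is a minimal sketch of the \citet{Ji2020a} prescription, not their implementation, and the grid range is an assumption.

```python
import numpy as np

def combine_lines(A, sigma_ew, s_grid=np.linspace(0.0, 0.5, 501)):
    """Weighted-mean abundance with an error floor s_X chosen to maximize
    the log likelihood from the text (grid search; illustrative only)."""
    A, sigma_ew = np.asarray(A), np.asarray(sigma_ew)

    def neg_log_like(s):
        var  = sigma_ew**2 + s**2
        w    = 1.0 / var
        abar = np.sum(w * A) / np.sum(w)  # weighted mean for this floor
        return 0.5 * np.sum((A - abar)**2 / var) + 0.5 * np.sum(np.log(var))

    s_x  = min(s_grid, key=neg_log_like)  # maximize the likelihood over s_X
    w    = 1.0 / (sigma_ew**2 + s_x**2)
    abar = np.sum(w * A) / np.sum(w)
    return abar, np.sqrt(1.0 / np.sum(w)), s_x  # abundance, its sigma, floor

# Mutually consistent lines: the floor collapses to zero and the result
# reduces to the usual inverse-variance weighted mean.
abar, err, s_x = combine_lines([5.0, 5.0, 5.0], [0.05, 0.05, 0.05])
# Lines scattered well beyond their EW errors: a nonzero floor is inferred.
abar2, err2, s2 = combine_lines([5.0, 5.3, 4.7, 5.2, 4.8], [0.02] * 5)
```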
As stated earlier, the abundances from our analysis and those from NS10, \citet{Nissen2011}, and R17 were put onto a common scale.
Our abundances are anchored to NS10 using the abundance of HD59392.
Although HD59392 was also analysed by R17, it is located at the high metallicity end of their sample, and hence has large measurement uncertainties in their study.
Therefore, we used CD$-48^{\circ}$02445, which is one of the standard stars used by R17, to move the abundances of R17 into the same scale as ours.
Specifically, we added offsets to the R17 abundances so that the abundances of CD$-48^{\circ}$02445 from our analysis and from theirs are consistent.
We note that the Na abundance of CD$-48^{\circ}$02445 was incorrectly reported in R17 and the correct value is [{Na}/{Fe}]$=-0.309$ (H. Reggiani 2021 private communication).
We adopt this correct value to shift the R17 abundances.
We also note that the metal-poor half of the R17 sample was analysed relative to HD338529 and not to CD$-48^{\circ}$02445.
Therefore, it is possible that R17 stars with [{Fe}/{H}]$\lesssim -2.1$ are not on the same abundance scale as the rest of the R17 sample or as the stars in the present study and NS10.
The adopted abundances are summarised in Table~\ref{tab:abundance} for Sequoia stars and in Table~\ref{tab:abundance_standard} for the standard stars.
\subsection{Comparison with literature}
\begin{figure*}
\centering
\includegraphics[width=1.0\textwidth]{./abundanceNS10R17compare.pdf}
\caption{Stellar parameter and abundance comparison with NS10 (G112-43) and R17 (HD59392 and HIP28104). The black error bars reflect the uncertainties reported in the literature. \label{fig:param_ab_comparison}}
\end{figure*}
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{./MgFe.pdf}
\caption{Mg abundance of the stars. Kinematically-selected Sequoia stars are plotted with blue symbols and color-coded according to their orbital energy (see Figure~\ref{fig:kinematicsCMD}). The measurement uncertainty and the covariance between [{Mg}/{Fe}] and [{Fe}/{H}] are indicated by the error ellipses. The R17 data point circled with a black hexagon indicates the abundance of HIP28104. The abundances of the comparison samples come from \citet{Nissen2010,Nissen2011,Reggiani2017a}. Filled symbols denote kinematically-selected Gaia-Enceladus stars among the comparison samples. \label{fig:Mg}}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{./NaFe.pdf}
\caption{Na abundances of the stars. Symbols follow Figure~\ref{fig:Mg}.\label{fig:Na}}
\end{figure}
\begin{figure*}
\centering
\includegraphics[width=1.0\textwidth]{./AlphaFe.pdf}
\caption{Abundances of $\alpha$-elements of the stars. Symbols follow Figure~\ref{fig:Mg}.\label{fig:alpha}}
\end{figure*}
In this section, we compare the stellar parameters and chemical abundances of standard stars with the literature.
For this purpose, we use G112-43 for the comparison with NS10, and HD59392 and HIP28104 for the comparison with R17.
This comparison is presented in Figure \ref{fig:param_ab_comparison}.
The abundances of G112-43 are in excellent agreement with those of NS10.
The difference in [{X}/{Fe}] is smaller than $0.1\,\mathrm{dex}$ in all the elements.
The differences are smaller than two times our measurement uncertainties for most of the species.
We therefore consider that we have successfully put our abundances onto NS10's scale using the standard star and that our uncertainty estimates are not underestimated.
The exceptions are Si and Ni, for which our abundances differ from NS10's measurements by more than the $2\sigma$ uncertainty (2.3$\sigma$ and 3.6$\sigma$, respectively).
However, we note that we have not taken the measurement uncertainties of NS10 into account here, since NS10 do not provide measurement uncertainties for individual objects.
We now compare our results with R17.
Since HD59392 is the standard star, our differential abundances for it carry no statistical uncertainty by construction.
The adopted abundances of HD59392 are consistent with those of R17 within the uncertainties reported by R17.
Therefore, we have successfully put all the abundances onto the same scale for our study, NS10, and R17.
We note that HIP28104 shows large differences in [{X}/{Fe}], especially for Cr and Zn.
Since HIP28104 was not analysed relative to CD$-48^{\circ}$02445 in R17 (see Section~\ref{sec:abundance}), the large offset is likely due to the use of different standard stars in R17.
\section{Results\label{sec:result}}
\begin{figure*}
\centering
\includegraphics[width=1.0\textwidth]{./Fegroup.pdf}
\caption{Abundances of Cr, Mn, Ni, and Zn. Symbols follow Figure~\ref{fig:Mg}.\label{fig:iron}}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=0.8\textwidth]{./ncaptureFe.pdf}
\caption{Abundances of neutron-capture elements. Symbols follow Figure~\ref{fig:Mg}.\label{fig:ncapture}}
\end{figure*}
In this section, we compare the chemical abundances of stars in the present study with those in NS10 and R17.
We kinematically selected Gaia-Enceladus stars from the NS10 and R17 samples using $L_z$ and $E_n$, requiring $|L_z|<600\,\mathrm{kpc\,km\,s^{-1}}$ and $-1.6<E/10^5\,\mathrm{km^2\,s^{-2}}<-1.0$ (Figure~\ref{fig:kinematicsCMD}).
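This kinematic selection amounts to a simple boolean cut; a sketch with illustrative stellar values (not the actual NS10/R17 measurements):

```python
# Gaia-Enceladus kinematic selection quoted in the text, as a simple filter
# (the four stars below are illustrative, not actual NS10/R17 measurements)
stars = [
    {"Lz": 150.0, "E": -1.30e5},
    {"Lz": -800.0, "E": -1.20e5},
    {"Lz": 420.0, "E": -0.80e5},
    {"Lz": -90.0, "E": -1.55e5},
]  # Lz in kpc km/s, E in km^2/s^2

enceladus = [abs(s["Lz"]) < 600.0 and -1.6e5 < s["E"] < -1.0e5 for s in stars]
```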
We have 19 NS10 stars and six R17 stars that satisfy the Gaia-Enceladus kinematic selection.
Among the 19 kinematically selected stars from NS10, 16 stars belong to their low-$\alpha$ population, confirming previous findings on the chemical abundance of Gaia-Enceladus.
One of the remaining three stars is at $[\mathrm{Fe/H}]=-1.08$ and has a high-$\alpha$ abundance, clearly different from the rest of the Gaia-Enceladus stars.
Thus, we removed this star from the Gaia-Enceladus sample.
We kept the remaining two stars from NS10's high-$\alpha$ population, since they are at $[\mathrm{Fe/H}]\sim-1.4$, where the low-$\alpha$ and high-$\alpha$ populations begin to overlap.
We thus ended up with 24 Gaia-Enceladus stars from the literature, which provide us with opportunities to compare abundance patterns of our Sequoia stars with those of Gaia-Enceladus stars.
The other stars in NS10 and R17 allow us to compare Sequoia stars with in-situ (hot thick disk) stars and other typical halo stars.
The high-$\alpha$ population in NS10 is considered to consist of stars formed in-situ; they likely formed in the Milky Way and were later heated onto halo-like orbits.
We simply refer to these stars as ``in-situ stars'' in what follows.
The origin of R17 stars and NS10's low-$\alpha$ stars that do not satisfy the Gaia-Enceladus selection is less clear.
They are either Gaia-Enceladus stars, in-situ stars, or stars from accreted galaxies other than Gaia-Enceladus.
As there are no strong kinematic selections in NS10 and in R17, we consider these stars as typical halo stars.
Figures~\ref{fig:Mg}--\ref{fig:ncapture} show chemical abundances of kinematically-selected Sequoia stars, NS10 and R17 stars.
Although a few stars kinematically selected as part of Sequoia at high metallicity (2657\_5888 and HIP98492) seem to follow the trend of Gaia-Enceladus, there are clear differences between the majority of Sequoia stars and Gaia-Enceladus stars in [{Mg}/{Fe}], [{Na}/{Fe}], [{Ca}/{Fe}], [{Ti}/{Fe}], [{Zn}/{Fe}], and [{Y}/{Fe}].
Our homogeneously derived high-precision chemical abundances robustly confirm the finding of \citet{Matsuno2019a} in Na, Mg, and Ca, \citet{Monty2020a} in Mg and Ca, and \citet{Aguado2021a} in the overall $\alpha$-element abundance.
In order to quantify these differences, we follow the approach of \citet{Nissen2011} (see top panel of Figure \ref{fig:GEseq}).
We first fit the abundance trend of Gaia-Enceladus in [{X}/{Fe}]--[{Fe}/{H}] with a quadratic polynomial using the kinematically selected stars and calculate the residual scatter ($\sigma_{\rm resid}$).
For each of eight Sequoia stars in the metallicity range $-1.8<[\mathrm{Fe/H}]<-1.4$, we compute the deviation from this fit ($\Delta_{\mathrm{Seq-GE}}$).
We then compute $\chi^2=\sum \Delta^2_{\mathrm{Seq-GE}} / (\sigma_{\rm resid}^2+\sigma^2([\mathrm{X/Fe}]))$ and conduct a $\chi^2$-test to obtain the probability that a $\chi^2$-distribution with eight degrees of freedom exceeds the observed $\chi^2$ value.
This is to test if we can explain the displacement in abundance ratios of Sequoia stars from the trend of Gaia-Enceladus stars with the residual in the fit and the measurement uncertainties.
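The test can be sketched in a few lines; for even degrees of freedom the $\chi^2$ survival function has a simple closed form. The deviations below are hypothetical, not our measured values.

```python
import math

def chi2_sf_even_df(x, df):
    """Survival function of a chi-square distribution for even df
    (closed form; sufficient for the df = 8 test described in the text)."""
    k = df // 2
    term, s = 1.0, 1.0
    for i in range(1, k):           # sum_{j=0}^{k-1} (x/2)^j / j!
        term *= (x / 2.0) / i
        s += term
    return math.exp(-x / 2.0) * s

def chi2_test(dev, sig_resid, sig_xfe):
    """chi^2 = sum Delta^2 / (sig_resid^2 + sig^2) over the Sequoia stars."""
    chi2 = sum(d**2 / (sig_resid**2 + s**2) for d, s in zip(dev, sig_xfe))
    return chi2, chi2_sf_even_df(chi2, df=len(dev))

# hypothetical deviations of eight stars from the Gaia-Enceladus fit (dex)
dev = [-0.20, -0.18, -0.22, -0.15, -0.25, -0.19, -0.21, -0.17]
sig = [0.05] * 8
chi2, p = chi2_test(dev, 0.04, sig)
```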
We also compute the abundance difference between Gaia-Enceladus and in-situ stars ($\Delta_{\mathrm{GE-in{\text -}situ}}$) at $-1.1<[\mathrm{Fe/H}]<-0.8$ by fitting the abundance trend of in-situ stars in the same way.
This calculation provides essentially the same quantity as Table 5 of \citet{Nissen2011}.
A difference from \citet{Nissen2011} is that we compare kinematically selected Gaia-Enceladus stars, instead of the low-$\alpha$ population, with in-situ stars (the high-$\alpha$ population).
The results are summarized in Table~\ref{tab:chitest}.
The average abundance difference between Gaia-Enceladus and Sequoia is largest in [{Na}/{Fe}] (Figure~\ref{fig:Na}), which is followed by [{Mg}/{Fe}] (Figure \ref{fig:Mg}).
The differences in these abundance ratios are $\sim 0.2\,\mathrm{dex}$ and highly significant.
Other $\alpha$-elements, Ca and Ti, show differences of $\sim 0.1\,\mathrm{dex}$ in [{X}/{Fe}] while the difference in [{Si}/{Fe}] is not as large as the other $\alpha$-elements (see Figure~\ref{fig:alpha}).
Although the results of the $\chi^2$-tests are significant for all of Si, Ca, and Ti, the large $\chi^2$ in [{Si}/{Fe}] might not be due to the average abundance difference between Sequoia and Gaia-Enceladus but to the large spread in [{Si}/{Fe}] ratios in our Sequoia stars.
Given that we had to rely on one or a few weak Si lines for its abundance determination, the [{Si}/{Fe}] distribution needs to be taken with caution.
These $\alpha$-element abundance differences between Sequoia and Gaia-Enceladus are quite similar to those found by \citet{Nissen2011} between their low-$\alpha$ and high-$\alpha$ populations, which are still seen when we compare Gaia-Enceladus and high-$\alpha$ in-situ population (Table~\ref{tab:chitest}).
Even though the differences are found at different metallicities, this suggests that the same physical mechanism, specifically SNe~Ia in this case, is likely responsible for creating the abundance differences.
We will discuss this further in Section~\ref{sec:discussion}.
Iron-group elements, including Cr, Mn, and Ni, do not show significant abundance differences between Sequoia and Gaia-Enceladus (see Figure~\ref{fig:iron}), although there is clearly a difference in [{Ni}/{Fe}] between Gaia-Enceladus and in-situ stars \citep[Table~\ref{tab:chitest};][]{Nissen2011}.
The Ni abundance difference between Sequoia and Gaia-Enceladus, if present, is only $\Delta[\mathrm{Ni/Fe}]\sim 0.04\,\mathrm{dex}$.
On the other hand, there is a statistically significant difference in [{Zn}/{Fe}] by $\sim 0.11\,\mathrm{dex}$ between Sequoia and Gaia-Enceladus (Table~\ref{tab:chitest}).
Zn also shows abundance differences between Gaia-Enceladus and in-situ stars.
Neutron capture elements (Y and Ba) show relatively large scatters compared to other elements (Figure~\ref{fig:ncapture}, see also Table~\ref{tab:chitest}).
Despite the large scatter, [{Y}/{Fe}] of Sequoia stars is significantly lower than that of Gaia-Enceladus (Table~\ref{tab:chitest}).
Y is another element that shows an abundance difference in [{X}/{Fe}] between Gaia-Enceladus and in-situ stars.
\begin{table*}
\caption{Abundance difference between Sequoia and Gaia-Enceladus at $-1.8<[\mathrm{Fe/H}]<-1.4$, and that between Gaia-Enceladus and in-situ stars at $-1.1<[\mathrm{Fe/H}]<-0.8$. \label{tab:chitest}}
\centering
\begin{tabular}{l*{6}{r}}
\hline\hline
Species & \multicolumn{2}{c}{$\Delta_{{\rm Sequoia}-{\rm GE}}$} & $\sigma([\mathrm{X/Fe}])$ &\multicolumn{2}{c}{$\Delta_{{\rm GE}-{\rm in-situ}}$} & $p_{\chi^2}$ \\\cline{2-3}\cline{5-6}
& Mean & Std. & Median & Mean & Std.& \\\hline
NaI&-0.223&0.055&0.073&-0.300&0.085&0.000\\
MgI&-0.195&0.040&0.040&-0.250&0.059&0.000\\
SiI&-0.037&0.146&0.068&-0.205&0.033&0.005\\
CaI&-0.123&0.054&0.033&-0.092&0.057&0.000\\
TiI&-0.113&0.119&0.050&-0.162&0.073&0.000\\
TiII&-0.096&0.080&0.038&-0.163&0.073&0.005\\
CrI&0.010&0.055&0.063&-0.039&0.035&0.843\\
MnI&-0.013&0.064&0.051&-0.053&0.040&0.552\\
NiI&-0.038&0.023&0.041&-0.147&0.026&0.725\\
ZnI&-0.114&0.049&0.045&-0.205&0.046&0.004\\
YII&-0.159&0.163&0.060&-0.265&0.073&0.000\\
BaII&-0.010&0.111&0.053&-0.106&0.076&0.258\\\hline
\end{tabular}
\end{table*}
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{./deltaxfe_seqGE.pdf}
\caption{(Top:) Visualization of the calculations of $\Delta_{\rm Seq-GE}$ and $\Delta_{\rm GE-in-situ}$. See text for details. (Bottom:) Mean abundance difference between Sequoia and Gaia-Enceladus at $-1.8<[\mathrm{Fe/H}]<-1.4$, and that between Gaia-Enceladus and in-situ stars at $-1.1<[\mathrm{Fe/H}]<-0.8$ ($\langle \Delta_{\rm Seq-GE}\rangle$ and $\langle \Delta_{\rm GE-in-situ}\rangle$). The error bars represent the standard deviations. \label{fig:GEseq}}
\end{figure}
We finally mention the relation between kinematics and chemical abundances within Sequoia.
In Figures~\ref{fig:Mg}--\ref{fig:ncapture}, the orbital energy of the stars is indicated by the intensity of the color.
We do not find any significant correlations between kinematics and chemical abundances among Sequoia stars.
We conducted $t$-tests for the hypothesis that Sequoia stars with high energy ($E>-1.2\times10^5\,\mathrm{km^2\,s^{-2}}$) have the same [{X}/{Fe}] as lower-energy Sequoia stars in the metallicity range $-1.8<[\mathrm{Fe/H}]<-1.4$.
For all species, we cannot reject the hypothesis at more than the $2\sigma$ level, indicating that there are no significant abundance differences between Sequoia stars at high and low energy.
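A minimal sketch of such a comparison; here we use Welch's unequal-variance $t$ statistic (one common choice for two small samples), with made-up [{X}/{Fe}] values in place of our measurements:

```python
import math

def welch_t(x, y):
    """Welch's t statistic for two samples with unequal variances."""
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    vx = sum((v - mx)**2 for v in x) / (nx - 1)   # sample variances
    vy = sum((v - my)**2 for v in y) / (ny - 1)
    return (mx - my) / math.sqrt(vx / nx + vy / ny)

# hypothetical [Mg/Fe] of high- and low-energy Sequoia stars (dex)
high_E = [-0.21, -0.18, -0.24]
low_E = [-0.20, -0.23, -0.19, -0.22, -0.17]
t = welch_t(high_E, low_E)   # |t| < 2 means no significant difference here
```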
\section{Discussion\label{sec:discussion}}
\subsection{Chemical enrichments in Sequoia}
\begin{figure*}
\centering
\includegraphics[width=1.0\textwidth]{./XMg_FeH.pdf}
\caption{Abundance ratios among the neutron-capture elements (Y, Ba) and Mg. Symbols follow Figure~\ref{fig:Mg}. \label{fig:BaY}}
\end{figure*}
The majority of the kinematically-selected Sequoia stars we have studied seem to show chemical abundances different from those of Gaia-Enceladus, indicating that their progenitor is indeed different from Gaia-Enceladus.
In this section, we further discuss the origin of the chemical signature of Sequoia.
Sequoia stars tend to show lower [{X}/{Fe}] in the same elements that are deficient in Gaia-Enceladus compared to the in-situ stars at high metallicity.
We visualize this similarity in Figure~\ref{fig:GEseq}, where the mean abundance difference between Sequoia and Gaia-Enceladus stars at $-1.8<[\mathrm{Fe/H}]<-1.4$ ($\langle \Delta_{\rm Seq-GE} \rangle$) from Table~\ref{tab:chitest} is plotted.
The error bar reflects the standard deviation in [{X}/{Fe}] in Sequoia.
We also plot the mean abundance difference between Gaia-Enceladus and in-situ stars at $-1.1<[\mathrm{Fe/H}]<-0.8$ ($\langle \Delta_{\rm GE-in-situ} \rangle$).
Figure~\ref{fig:GEseq} demonstrates a remarkable similarity between the patterns in $\langle\Delta_{\rm Seq-GE}\rangle$ and $\langle\Delta_{\rm GE-in-situ}\rangle$.
We find the largest differences for Na and Mg, and mild differences for Ca, Ti, Zn, and Y in both comparisons.
Although $\langle\Delta_{\rm GE-in-situ}\rangle$ is clearly non-zero and negative in Si and Ni, it is hard to see a similar feature in $\langle\Delta_{\rm Seq-GE}\rangle$ for these elements.
The lack of difference in the Si abundance comparison between Sequoia and Gaia-Enceladus could be due to our larger measurement uncertainty and smaller number of stars compared to NS10 and \citet{Nissen2011}.
Cr and Mn do not show significant differences in either comparison.
The similarity between $\langle \Delta_{\rm Seq-GE} \rangle$ and $\langle \Delta_{\rm GE-in-situ}\rangle$ could suggest that the same physical process is responsible for shaping both patterns.
The origin of the abundance difference between Gaia-Enceladus and in-situ stars is usually attributed to chemical enrichment by type~Ia supernovae (SNe~Ia);
while in-situ stars are not yet significantly enriched by SNe~Ia at [{Fe}/{H}]$\sim -1$, Gaia-Enceladus is already enriched by SNe~Ia because of a longer star formation timescale, a lower star formation efficiency, and/or a different star formation history \citep[e.g.,][]{Zolotov2010,Nissen2010,Nissen2011,FernandezAlvar2018a,Gallart2019a,Brook2020a,Sanders2021a}, which would be a consequence of the halo mass of Gaia-Enceladus being lower than that of the main progenitor of the Milky Way.
If we apply the same reasoning, the chemical abundance difference between Sequoia and Gaia-Enceladus is likely caused by larger SNe~Ia enrichments in Sequoia at [{Fe}/{H}]$\sim -1.5$, which would indicate lower mass for Sequoia than Gaia-Enceladus.
This is consistent with the mass estimates from kinematic analyses, metallicity distributions, or number counts of stars \citep{Koppelman2019a,Myeong2019a,Matsuno2019a,Naidu2020a}.
Although $\langle \Delta_{\rm Seq-GE} \rangle$ and $\langle \Delta_{\rm GE-in-situ}\rangle $ show a remarkable overall similarity, one may notice a slight difference in Figure~\ref{fig:GEseq}, especially in Ni.
If the similarity in patterns in $\langle \Delta_{\rm Seq-GE} \rangle$ and $\langle \Delta_{\rm GE-in-situ}\rangle $ is primarily driven by the chemical enrichment from SNe~Ia, the lack of Ni abundance difference between Sequoia and Gaia-Enceladus might indicate a variation in the properties of SNe~Ia with, e.g., environments or metallicity.
For example, \citet{Kirby2019a} and \citet{Sanders2021a} used Ni abundances to conclude that explosions of sub-Chandrasekhar-mass white dwarfs are the dominant type of SNe~Ia in dwarf galaxies and in Gaia-Enceladus.
We note that, in addition to the larger contribution of SNe~Ia, \citet{FernandezAlvar2019a} suggested the possibility of a top-light initial mass function for Gaia-Enceladus by modelling $\alpha$-element abundances of halo stars from APOGEE.
It remains to be seen if a similar explanation can be applied to Sequoia.
Let us now focus on neutron-capture elements, Y and Ba.
While both elements are produced by the $s$-process at solar metallicity, Y belongs to the group of light $s$-process elements.
The weak $s$-process, which operates in massive stars, contributes more to the light $s$-process elements than the main $s$-process in low-to-intermediate-mass stars.
In addition, the $r$-process also contributes significantly to the enrichment of these elements in the early universe.
Yttrium shows a relatively large scatter and a mild deficiency in Sequoia (Table~\ref{tab:chitest}).
The Y deficiency seems to be driven by a few stars with low [{Y}/{Fe}].
They have [{Y}/{Fe}]$\sim -0.4$, and such low abundances are hardly seen among Gaia-Enceladus stars in the NS10 and R17 samples.
Barium also shows a large scatter, although the average abundance seems to be comparable to Gaia-Enceladus stars in the comparison samples.
To understand these trends, we plot [{Y}/{Ba}], [{Y}/{Mg}], and [{Ba}/{Mg}] abundances in Figure~\ref{fig:BaY}.
The [{Y}/{Ba}] ratio allows us to infer the importance of weak $s$-process relative to the efficiency of Ba production either by low-to-intermediate mass stars or by $r$-process nucleosynthesis.
The [{Y}/{Ba}] ratios of Sequoia stars tend to be lower than those of Gaia-Enceladus stars.
The average difference between the two systems is $-0.15\,\mathrm{dex}$.
Gaia-Enceladus is also known to possess low [{Y}/{Ba}] values compared to in-situ stars at [{Fe}/{H}]$\sim -1$ \citep{Nissen2011}.
The low [{Y}/{Ba}] can be understood if the efficiency of the weak $s$-process is lower in Sequoia, if low-to-intermediate-mass stars in which the main $s$-process operates started to contribute to the chemical evolution of Sequoia, or if there were more $r$-process nucleosynthesis events in Sequoia that produced more Ba than Y.
The abundance ratios [{Y}/{Mg}] and [{Ba}/{Mg}] displayed in the right two panels of Figure~\ref{fig:BaY} further allow us to infer the efficiency of the enrichment of element X relative to the chemical enrichment by core-collapse supernovae (CCSNe), since CCSNe produce most of the Mg.
Sequoia stars stand out less in [{Y}/{Mg}], indicating that the production of Y is controlled by the abundance of Mg to some extent.
This is expected if the majority of Y is produced by the weak $s$-process, since its efficiency is dependent on CNO abundances \citep[e.g.,][]{Prantzos1990a}, which are mostly produced by massive stars.
A similar argument is applied to explain the Cu (and Y) abundance differences between Gaia-Enceladus and in-situ stars \citep{Nissen2011,Matsuno2021a}.
Sequoia stars, on the other hand, are slightly enhanced in [{Ba}/{Mg}], which might be related to the high Eu abundance reported by \citet{Aguado2021a} and hence indicate efficient $r$-process nucleosynthesis in Sequoia.
While Y and Ba abundances provide some insights about the enrichment of neutron-capture elements in Sequoia, the information is still limited.
It is highly desirable to measure abundances of many neutron-capture elements in future studies.
For example, \citet{Aguado2021a} suggested a possibility of enhanced $r$-process element abundance in Gaia-Enceladus and Sequoia from Eu abundance, and \citet{Matsuno2021b} explained the high Eu abundance of Gaia-Enceladus as a combined effect of delay time in the $r$-process enrichment and the low star formation efficiency of Gaia-Enceladus.
In a forthcoming paper, we plan to revisit weak and main $s$-processes, and $r$-process enrichments in Sequoia with precise abundances of more neutron-capture elements (e.g., Sr, Eu).
There are some similarities in abundance ratios between Sequoia and surviving dwarf galaxies around the Milky Way, such as Sagittarius, Fornax, Draco, Sculptor, Sextans dwarf spheroidal galaxies, and Large and Small Magellanic Clouds (LMC and SMC).
The Milky Way, Gaia-Enceladus, and many of the surviving dwarf galaxies show similarly super-solar [{$\alpha$}/{Fe}] at low metallicity.
However, since the ``knee'' metallicity ($[\mathrm{Fe/H}]_{\rm knee}$), at which systems start to show decreasing [{$\alpha$}/{Fe}] with metallicity, is below $[\mathrm{Fe/H}]\lesssim -1.8$ in the Fornax, Draco, Sculptor, and Sextans dwarf galaxies, they show lower [{$\alpha$}/{Fe}] than Gaia-Enceladus or the Milky Way in-situ stars at $[\mathrm{Fe/H}]>[\mathrm{Fe/H}]_{\rm knee}$ \citep{Tolstoy2009,Cohen2009a,Kirby2011a,Lemasle2012,Lemasle2014,Hendricks2014a,Hill2019a,Theler2020a}.
Although the position of the knee is less clear in LMC, SMC, and Sagittarius, they also show low $\alpha$-element abundance at $[\mathrm{Fe/H}]\sim -1.5$ \citep{Nidever2020a,Hasselquist2021a}.
It is not clear whether Sequoia shows a clear knee because of the insufficient sampling of low-metallicity stars.
Nonetheless, the abundance of the most metal-poor star in our sample (HIP28104) seems to support the existence of the knee at low metallicity.
\subsection{Chemical identification of Sequoia members\label{sec:chemicalseparation}}
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{./meanXFE.pdf}
\caption{Average values of $[\mathrm{X/Fe}]$ in Na, Mg, and Ca are plotted against [{Fe}/{H}]. For the calculation of the uncertainty in $\langle[\mathrm{X/Fe}]\rangle$, we properly take correlated uncertainties between elemental abundances into account. Stars are color-coded according to the significance of their departure from the Gaia-Enceladus sequence, which is shown as the orange solid line. The Gaia-Enceladus sequence is obtained by fitting the abundance ratios of Gaia-Enceladus stars from \citet{Nissen2010} and \citet{Reggiani2017a} with a quadratic polynomial. The residual scatter of Gaia-Enceladus around the fit is also shown.
\label{fig:meanXFE}}
\end{figure}
Under the assumption that each system shows a well-defined track in the [{X}/{Fe}]--[{Fe}/{H}] planes and that there are no chemical outliers, we should be able to chemically identify members of disrupted galaxies based on their abundance ratios.
In this section, we try to separate individual Sequoia stars from Gaia-Enceladus solely based on the abundance ratios.
We first need to explore the best combination of elements for identifying Sequoia stars.
While averaging more elemental abundance ratios might reduce the uncertainty, abundance differences might be smeared out if we add elements whose abundances do not differ between Sequoia and Gaia-Enceladus.
We consider average abundances of various combinations of five elements (Na, Mg, Ca, Ti from Ti~II, and Zn), since these elements show clear abundance differences between Sequoia and Gaia-Enceladus in our study (Table~\ref{tab:chitest}).
To quantitatively identify the best combination of elements, we follow the approach that is explained in Figure~\ref{fig:GEseq}; we first fit the abundance trend of Gaia-Enceladus and then calculate $\chi^2$-values for Sequoia stars in $-1.8<[\mathrm{Fe/H}]<-1.4$.
The $\chi^2$-value takes its maximum when we consider the three elements, Na, Mg, and Ca (Figure~\ref{fig:meanXFE}).
Therefore, we consider that the average of abundance ratios of [{Na}/{Fe}], [{Mg}/{Fe}], and [{Ca}/{Fe}] offers a powerful diagnostic to chemically identify Sequoia stars.
Note, however, that the best combination of abundance ratios varies depending on the data set, specifically on the typical uncertainties in the abundances of the elements.
Among our sample, eight stars (1336\_6432, 2813\_6032, 2870\_9072, 3341\_2720, 4587\_5616, 4850\_5696, G115-58, and G90-36) deviate by more than $2\sigma$ from the Gaia-Enceladus sequence in $\langle [\mathrm{Na/Fe}],[\mathrm{Mg/Fe}],[\mathrm{Ca/Fe}]\rangle$, where the significance is defined as $\Delta_{\rm Seq-GE}/\sqrt{\sigma_{\rm resid}^2+\sigma^2(\langle[\mathrm{X/Fe}]\rangle)}$.
From the chemical point of view, these are the most likely members of Sequoia.
On the other hand, four stars do not deviate by more than $2\sigma$.
Two stars (2657\_5888 and HIP98492) are at high metallicity and seem to be on the Gaia-Enceladus sequence.
These stars might be those stripped in the very early stage of the Gaia-Enceladus accretion \citep{Koppelman2019a}.
The other two stars, 3336\_0572 and HIP28104, have relatively low metallicity.
Since 3336\_0572 is one of the stars for which the uncertainties in elemental abundances are large, the lack of significance might just reflect insufficient precision.
HIP28104 has the lowest metallicity among our sample.
Its metallicity might be too low even for the progenitor of Sequoia to be affected by SNe~Ia.
We now assess the minimum precision ($\sigma_{\rm obs}$) required to separate individual Sequoia stars from Gaia-Enceladus using [{Mg}/{Fe}] or $\langle [\mathrm{Na/Fe}],[\mathrm{Mg/Fe}],[\mathrm{Ca/Fe}]\rangle$.
The scatter among Sequoia stars (Table~\ref{tab:chitest}) and the residual scatter of Gaia-Enceladus stars from the fitting are comparable to the measurement uncertainty ($\sim0.04\,\mathrm{dex}$ in both cases).
Therefore, we consider that the dispersion in abundance ratios among Sequoia stars is dominated by the measurement uncertainty, $\sigma_{\rm obs}$.
The differences between Sequoia and Gaia-Enceladus in these abundance ratios ($|\Delta|$) are $\sim 0.20\,\mathrm{dex}$ and $\sim 0.18\,\mathrm{dex}$, respectively.
The required precision to chemically separate 84\% of Sequoia stars with the $2\sigma$ criterion can be calculated by solving
\begin{equation}
|\Delta| - \sigma_{\rm obs}>2\sqrt{\sigma_{\rm resid}^2+\sigma_{\rm obs}^2},
\end{equation}
where the left-hand side reflects the fact that about 84\% of Sequoia stars deviate from the Gaia-Enceladus trend in these abundance ratios by more than this value.
Assuming $|\Delta|=0.19$ and $\sigma_{\rm resid}=0.04$, we obtain $\sigma_{\rm obs}\lesssim 0.07\,\mathrm{dex}$ from this equation.
This condition is met in our case even if we work only on Mg since the typical precision is $\sigma([\mathrm{Mg/Fe}])\sim 0.04\,\mathrm{dex}$.
Considering more elemental abundances might help in some cases, although in our case the typical uncertainty does not improve significantly when we consider the three elements (Na, Mg, and Ca).
We note here that it is also important to consider correlations between the abundances of the elements to correctly estimate the uncertainty of their average abundance.
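The separability criterion above can be evaluated directly. The sketch below checks it for the values quoted in the text ($|\Delta|\approx0.19$, $\sigma_{\rm resid}\approx0.04$) and for a hypothetical low-precision case:

```python
import math

def separable(delta, sig_resid, sig_obs):
    """Criterion from the text: about 84% of Sequoia stars deviate from the
    Gaia-Enceladus trend by more than |Delta| - sig_obs, so individual stars
    are chemically separable at 2 sigma when this inequality holds."""
    return abs(delta) - sig_obs > 2.0 * math.sqrt(sig_resid**2 + sig_obs**2)

# |Delta| ~ 0.19 dex and sig_resid ~ 0.04 dex, as quoted in the text
ok_at_004 = separable(0.19, 0.04, 0.04)   # our typical [Mg/Fe] precision
ok_at_015 = separable(0.19, 0.04, 0.15)   # a hypothetical survey-like precision
```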
On the other hand, it is much easier to detect the average difference in chemical abundance between Sequoia and Gaia-Enceladus stars.
The uncertainty of the mean decreases as $1/\sqrt{N}$ when systematic uncertainties can be neglected.
Therefore, even if the observational uncertainty is comparable to the abundance difference between Sequoia and Gaia-Enceladus, four stars would be sufficient to detect the average abundance difference, provided the sample contains no chemical outliers or contaminants.
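A quick numerical illustration of this scaling, with a per-star uncertainty set equal to the offset itself:

```python
import math

# Even when the per-star uncertainty equals the offset |Delta|, averaging
# N stars shrinks the uncertainty of the mean as 1/sqrt(N) (toy numbers)
delta, sig_obs = 0.19, 0.19   # dex
N = 4
sig_mean = sig_obs / math.sqrt(N)
significance = delta / sig_mean   # a 2 sigma detection with four stars
```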
\subsection{Sequoia stars in literature and surveys}
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{./Sequoia_literature.png}
\caption{Abundances of stars from the literature and from this study. The literature data come from \citet[][SB02]{Stephens2002}, \citet[][I12]{Ishigaki2012}, and \citet[][RM18]{Reggiani2018a}. Sequoia stars are shown with filled symbols. Stars in the present study are shown with blue circles. We note that the uncertainties are shown with error bars instead of error ellipses for ease of visualization. \label{fig:seq_lite}}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{./APOGEEGALAH_Sequoia.pdf}
\caption{Mg abundances of Sequoia, in-situ, and Gaia-Enceladus stars from GALAH DR3 and APOGEE DR16. The abundances of in-situ and Gaia-Enceladus stars are shown as running medians with a bin size of $0.1\,\mathrm{dex}$. The shaded regions show the 16th--84th percentile ranges. \label{fig:surveyMg}}
\end{figure}
In this section, we compare our results with those seen in previous studies and in surveys.
As we discussed in Section~\ref{sec:intro}, previous studies do not agree on the chemical properties of Sequoia \citep{Matsuno2019a,Koppelman2019a,Monty2020a,Aguado2021a,Feuillet2020a}.
We revisit this problem using updated data from spectroscopic surveys and a kinematic selection similar to the one used for our sample.
We select Sequoia member candidates following the criteria used in \citet{Koppelman2019a}.
The upper limits on circularity and energy are changed from $-0.40$ to $-0.35$ and from $-1.0\times 10^5\,\mathrm{km^2s^{-2}}$ to $-0.9\times 10^5\,\mathrm{km^2s^{-2}}$, respectively, so that the selection covers the kinematic extent of the stars in the present study.
In Figure~\ref{fig:seq_lite}, we plot Sequoia candidates from \citet{Stephens2002}, \citet{Ishigaki2012}, and \citet{Reggiani2018a}, which contain more than one Sequoia candidate at $-2<[\mathrm{Fe/H}]<-1$.
The abundance from \citet{Stephens2002} is updated following \citet{Monty2020a}.
There are nine stars from \citet{Stephens2002} that satisfy the kinematic selection for Sequoia, of which six are at $-2<[\mathrm{Fe/H}]<-1$.
The median reported uncertainty in [{Mg}/{Fe}] is $0.17\,\mathrm{dex}$, and hence it is difficult to chemically separate individual Sequoia stars (Section~\ref{sec:chemicalseparation}).
However, it is still possible to detect the average difference; this is why \citet{Venn2004a} was able to conclude that the most retrograde stars have lower [{Mg}/{Fe}].
The situation with \citet{Ishigaki2012} is similar to that of \citet{Stephens2002}, since the median uncertainty is $0.11\,\mathrm{dex}$ and the number of Sequoia stars is five (four at $-2<[\mathrm{Fe/H}]<-1$).
On the other hand, \citet{Reggiani2018a} conducted a very high precision abundance analysis for a pair of HD134439 and HD134440 (e.g., $\sigma([\mathrm{Mg/Fe}])\sim 0.02$).
With this precision, they were able to conclude that these two stars have lower [$\alpha$/{Fe}] than the low-$\alpha$ population from NS10.
Figure~\ref{fig:surveyMg} shows the Mg abundance of Sequoia candidates from GALAH DR3 and APOGEE DR16.
We have selected APOGEE stars with $\mathtt{ASPCAPFLAG}=0$, $\mathtt{STARFLAG}=0$, $\mathtt{TEFF}<5500$, $\mathtt{LOGG}<3.5$, $\texttt{FE\_H\_FLAG}=0$, and $\texttt{MG\_FE\_FLAG}=0$.
We also remove known calibration cluster members and probable stellar cluster members.
For GALAH DR3, we have selected stars with $\texttt{teff}<5500$, $\texttt{logg}<3.5$, $\texttt{flag\_fe\_h}=0$, and $\texttt{flag\_Mg\_fe}=0$.
We select in-situ stars and Gaia-Enceladus stars following \citet{Matsuno2021b} and compute the running median and 16 and 84 percentiles of [{Mg}/{Fe}] as a function of [{Fe}/{H}] using a $0.1\,\mathrm{dex}$ bin size.
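The running statistics can be sketched as follows; the stars below are toy values standing in for the actual survey samples:

```python
import statistics

def running_percentiles(feh, mgfe, lo=-2.0, hi=-0.5, width=0.1):
    """Running median and 16th/84th percentiles of [Mg/Fe] in [Fe/H] bins
    of 0.1 dex (a sketch of the binning described in the text)."""
    out = []
    nbin = int(round((hi - lo) / width))
    for i in range(nbin):
        x = lo + i * width
        sel = sorted(m for f, m in zip(feh, mgfe) if x <= f < x + width)
        if len(sel) >= 3:   # require a minimum number of stars per bin
            q = statistics.quantiles(sel, n=100, method="inclusive")
            out.append((x + width / 2, q[15], statistics.median(sel), q[83]))
    return out

# hypothetical survey stars: ([Fe/H], [Mg/Fe]) in dex
feh = [-1.45, -1.42, -1.48, -1.41, -1.35, -1.33, -1.38, -1.31]
mgfe = [0.30, 0.28, 0.33, 0.25, 0.22, 0.27, 0.20, 0.24]
bands = running_percentiles(feh, mgfe)   # (bin center, p16, median, p84)
```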
In both surveys, Sequoia stars clearly have lower [{Mg}/{Fe}] than Gaia-Enceladus (Figure~\ref{fig:surveyMg}), reproducing our own finding, although the absolute values of [{Mg}/{Fe}] are different because of different approaches for, e.g., abundance calculation and/or stellar parameter determination.
The low Mg abundance of Sequoia was not clearly seen in previous studies \citep[e.g.,][]{Koppelman2019a,Myeong2019a,Feuillet2021a}, even though these authors used APOGEE data.
This is likely because of different selection criteria and sample sizes.
The selection we made in the $L_z-E_n$ space allows a smaller contamination compared to the normalized action selections adopted in \citet{Myeong2019a} and \citet{Feuillet2021a}, which include low-$E_n$ stars
\footnote{The normalized action space is a space defined by $J_\phi/J_{\rm tot}$ and $(J_z-J_R)/J_{\rm tot}$, where $J_\phi,\,J_z,\,J_R$ are the azimuthal, vertical, and radial actions, and $J_{\rm tot}=|J_{\phi}|+J_z+J_R$. Although a selection in this normalized action space includes both stars at high-$E_n$ and low-$E_n$, it should be in principle possible to make an action-based selection that is equivalent to the $L_z-E_n$ selection.}.
In order to deposit stars over a wide range of orbital energies, the progenitor would need to be much more massive \citep{Koppelman2019a}, which does not seem to be the case for Sequoia.
The number of stars with reliable abundance and astrometric measurements has also increased thanks to more recent data releases from Gaia and APOGEE.
We also see abundance differences in other elements, including Al (APOGEE) and K (GALAH).
However, a detailed investigation of Sequoia stars in large surveys is beyond the scope of the present study and is reserved for future work.
We finally note that both surveys seem to cover a wider metallicity range than the present study.
The narrow metallicity range of our study could be due to the metallicity cut at $[\mathrm{Fe/H}]<-1$ for stars from LAMOST and the removal of two Sequoia candidates at low metallicity as described in Section~\ref{sec:obs}.
\section{Conclusion\label{sec:conclusion}}
Through a differential abundance analysis of high-S/N and high-resolution spectra, we have shown that Sequoia stars are chemically distinguishable from Gaia-Enceladus.
The eight Sequoia stars in the metallicity range of $-1.8<[\mathrm{Fe/H}]<-1.4$ have lower [{Na}/{Fe}], [{Mg}/{Fe}], [{Ca}/{Fe}], [{Ti}/{Fe}], [{Zn}/{Fe}], and [{Y}/{Fe}] compared to the values expected for Gaia-Enceladus.
The abundance difference is $\sim 0.2\,\mathrm{dex}$ in [{Na}/{Fe}] and in [{Mg}/{Fe}] and $\sim 0.1\,\mathrm{dex}$ in other abundance ratios.
This pattern in the abundance difference is similar to that between Gaia-Enceladus and in-situ stars at higher metallicity.
This suggests that Sequoia started experiencing chemical enrichment from SNe~Ia at lower metallicity than Gaia-Enceladus.
We note, however, that we do not see a significant difference between Sequoia and Gaia-Enceladus in the Ni abundance, unlike in the comparison between Gaia-Enceladus and in-situ stars, which might suggest that the dominant types of SNe~Ia differ between Sequoia and Gaia-Enceladus.
We have also shown that Sequoia stars show low [{Y}/{Ba}] ratios, although its cause remains unclear.
We will provide abundances for additional neutron-capture elements (e.g., Sr and Eu) in a future study to separate the contribution of weak $s$-process, main $s$-process, and $r$-process.
We have further shown that the separation between Sequoia and Gaia-Enceladus becomes most prominent when we take the average of [{Na}/{Fe}], [{Mg}/{Fe}], and [{Ca}/{Fe}], although the optimal choice could vary depending on the data set.
We have shown that individual Sequoia stars can be chemically separated if the abundance precision in [{X}/{Fe}] is better than $0.07\,\mathrm{dex}$.
In contrast, detecting the average abundance difference is much easier, since the uncertainty on the mean scales inversely with the square root of the number of stars, provided that a kinematic selection minimizing the contamination is adopted and there are few contaminants and chemical outliers.
Using the average of [{Na}/{Fe}], [{Mg}/{Fe}], and [{Ca}/{Fe}], we have concluded that eight out of 12 stars we studied have distinct chemical abundances compared to Gaia-Enceladus.
These eight stars are most likely true members of Sequoia.
Only two of the remaining four stars seem to be contaminants from Gaia-Enceladus, indicating that the kinematic selection we adopted efficiently selects Sequoia stars.
For the remaining two stars, it is not clear if they have the same chemical abundance as the other Sequoia stars or if they are contaminants from Gaia-Enceladus because of their low metallicity and/or insufficient precision in our abundance measurements.
We have demonstrated that kinematically selected Sequoia stars also show lower Na, Mg, and Ca abundances in data from the literature \citep{Stephens2002,Ishigaki2012,Reggiani2018a}.
\citet{Reggiani2018a} provided sufficiently precise chemical abundances for the pair HD134439/HD134440 to chemically associate them with Sequoia.
We also confirmed low Mg abundances of Sequoia stars using GALAH DR3 and APOGEE DR16.
Now that we have established the chemical distinctness of Sequoia from the major populations in the halo, namely Gaia-Enceladus and in-situ stars, future studies of chemical abundances of Sequoia stars using large spectroscopic surveys are obvious next steps.
A large sample is necessary to study if the group of stars referred to as Sequoia in the present study can be further separated into a few subgroups \citep[][L\"{o}vdal et al. 2021 in prep., Ruiz-Lara et al. 2021 in prep.]{Naidu2020a}.
It would also be of interest to study the kinematic extent of chemically selected Sequoia stars.
Large surveys that measure chemical abundance of stars with high-precision are necessary for these studies.
\begin{acknowledgements}
We thank Henrique Reggiani for providing the data that allow us to compare our results of equivalent width measurements and for checking their results in detail.
We also thank Xiaodi Yu and Ian Roederer for taking the high-resolution spectrum with MIKE on Magellan.
This research has been supported by a Spinoza Grant from the Dutch Research Council (NWO).
WA was supported by JSPS KAKENHI Grant Number 21H04499.
This research is based in part on data collected at Subaru Telescope, which is operated by the National Astronomical Observatory of Japan.
We are honored and grateful for the opportunity of observing the Universe from Maunakea, which has cultural, historical, and natural significance in Hawaii.
This research has made use of the Keck Observatory Archive (KOA), which is operated by the W. M. Keck Observatory and the NASA Exoplanet Science Institute (NExScI), under contract with the National Aeronautics and Space Administration.
Part of the data were retrieved from the JVO portal (http://jvo.nao.ac.jp/portal/) operated by ADC/NAOJ.
This work is partly based on data obtained from the ESO Science Archive Facility, which are based on observations collected at the European Organisation for Astronomical Research in the Southern Hemisphere under ESO programmes 67.D-0086(A), 95.D-0504(A), 095.D-0504(A).
\end{acknowledgements}
\input{draft.bbl}
\begin{appendix}
\section{Analysis of Gaia EDR3 360456543361799808\label{appendixA}}
Gaia EDR3 360456543361799808 turned out not to belong to Sequoia after we updated its radial velocity from the LAMOST DR4 value of $-288.15\,\mathrm{km\,s^{-1}}$ to $-377.6\,\mathrm{km\,s^{-1}}$ based on our high-resolution spectroscopy.
With this updated radial velocity, we estimated $L_z=-841\,\mathrm{kpc\,km\,s^{-1}}$ and $E_n=-1.580\times 10^5\,\mathrm{km^2\,s^{-2}}$.
We note that LAMOST DR6 no longer provides a radial velocity for this object and Gaia DR2 gives $-376.3\,\mathrm{km\,s^{-1}}$, which is closer to our measurement from the high-resolution spectrum.
We still measured stellar parameters and abundances for this object.
Derived stellar parameters are $T_{\rm eff}=6172\pm 70\,\mathrm{K}$, $\log g=3.905\pm0.032$, $v_t=1.567\pm0.108\,\mathrm{km\,s^{-1}}$, and $[\mathrm{Fe/H}]=-1.865\pm0.019$.
Derived abundances are summarised in Table~\ref{tab:0360}.
This star has abundances comparable to those of Gaia-Enceladus and in-situ stars.
\begin{table}
\caption{Abundances of Gaia EDR3 360456543361799808 \label{tab:0360}}
\begin{tabular}{l*{5}{r}}\hline\hline
& $N$ & [{X}/{H}] & $\sigma$ & [{X}/{Fe}] & $\sigma$\\\hline
FeI & 86& -1.809& 0.032& ...& ...\\
FeII & 14& -1.851& 0.019& ...& ...\\
NaI & 2& -2.027& 0.092& -0.218& 0.087\\
MgI & 6& -1.588& 0.039& 0.221& 0.039\\
SiI & 1& -1.432& 0.063& 0.377& 0.066\\
CaI & 18& -1.414& 0.032& 0.395& 0.032\\
TiI & 12& -1.354& 0.064& 0.455& 0.054\\
TiII & 10& -1.442& 0.033& 0.410& 0.030\\
CrI & 2& -1.910& 0.089& -0.101& 0.083\\
MnI & 2& -2.184& 0.057& -0.375& 0.048\\
NiI & 11& -1.761& 0.049& 0.048& 0.045\\
ZnI & 2& -1.670& 0.045& 0.139& 0.041\\
YII & 2& -1.911& 0.049& -0.060& 0.046\\
BaII & 3& -2.032& 0.053& -0.181& 0.049\\\hline
\end{tabular}
\end{table}
\section{Uncertainties in stellar parameters\label{app:uncertainty}}
Stellar parameters depend on each other and hence are determined iteratively.
We therefore need to take into account the uncertainties in the other parameters when we estimate the uncertainty in any one parameter.
We also consider correlations between stellar parameters and their effects on abundances.
The goal of this section is to obtain the covariance matrix $\Sigma$ among the four stellar parameters.
We consider four stellar parameters, $x_1=T_{\rm eff},\,x_2=\log g,\,x_3=v_t,\,x_4=[\mathrm{Fe/H}]_{\rm sp}$.
A parameter $x_i$ is determined through a function $f_i(\mathbf{x})$; the set of best estimates $\tilde{\mathbf{x}}$ satisfies $\tilde{x}_i=f_i(\tilde{\mathbf{x}})$.
We first estimate the uncertainty in each parameter $\epsilon_i$ by fixing other parameters as described in Section~\ref{sec:parameters}.
The $\epsilon_i$ is not necessarily close to the realistic uncertainty since it neglects the effect of the uncertainties in the other parameters.
The values we estimated can be expressed as
\begin{equation}
x_i = f_i(\textbf{x}) + \epsilon_i, \label{eq:app1}
\end{equation}
which can be approximated as
\begin{equation}
x_i \simeq f_i(\tilde{\mathbf{x}})+\sum_{j\neq i} \frac{\partial f_i}{\partial x_j}(x_j-\tilde{x}_{j})+\epsilon_i.
\end{equation}
Here we define $\delta \mathbf{x}=\mathbf{x}-\mathbf{\tilde{x}}$ and the matrix $\mathbf{A}$ whose element is
\begin{equation}
A_{ij}=
\begin{cases}
0 & (i=j)\\
\frac{\partial f_i}{\partial x_j} &(i\neq j).
\end{cases}
\end{equation}
From Eq.~\ref{eq:app1}, we can write
\begin{equation}
\delta\textbf{x}=\epsilon+\textbf{A}\delta\textbf{x},
\end{equation}
hence
\begin{equation}
(\textbf{I}-\textbf{A})\delta\textbf{x}=\epsilon.
\end{equation}
Since $\Sigma = \langle\delta\textbf{x}\delta\textbf{x}^T\rangle$ and $\langle\epsilon_i\epsilon_j\rangle=\delta_{ij}\epsilon_i^2$, the covariance matrix can be calculated from
\begin{equation}
\Sigma=(\textbf{I}-\textbf{A})^{-1}\textrm{diag}(\epsilon_i^2)[(\textbf{I}-\textbf{A})^{-1}]^T. \label{eq:sigma}
\end{equation}
The above calculation is equivalent to considering the following likelihood,
\begin{equation}
{\cal L} \propto \prod \exp[-\frac{1}{2}\frac{(x_i-f_i(\mathbf{x}))^2}{\epsilon_i^2}],
\end{equation}
and calculating Fisher's matrix ${\cal F}$, whose element is expressed as
\begin{equation}
{\cal F}_{lm} = -\frac{\partial^2 \log {\cal L}}{\partial x_l \partial x_m}.
\end{equation}
Assuming $\textbf{x}$ follows a multivariate Gaussian distribution, ${\cal F}$ is equal to $\Sigma^{-1}$ \citep[e.g.,][]{Andrae2010a}.
Eq. \ref{eq:sigma} can also be derived from this equation.
In practice, we estimate $A_{ij}$ by redetermining the parameter $i$ while shifting the parameter $j$ by $\pm\epsilon_j$.
Since we determine both $T_{\rm eff}$ and $v_t$ from neutral Fe lines, the correlation between these two parameters can be significant.
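As a sketch of Eq.~(\ref{eq:sigma}), the following function propagates the fixed-parameter uncertainties $\epsilon_i$ and the response matrix $\mathbf{A}$ into the full covariance matrix; the function name is illustrative:

```python
import numpy as np

def parameter_covariance(A, eps):
    """Covariance of the stellar parameters:
    Sigma = (I - A)^{-1} diag(eps^2) [(I - A)^{-1}]^T,
    where A[i, j] = d f_i / d x_j (zero diagonal) and eps[i] is the
    uncertainty in parameter i with the other parameters held fixed.
    """
    A = np.asarray(A, dtype=float)
    eps = np.asarray(eps, dtype=float)
    n = len(eps)
    B = np.linalg.inv(np.eye(n) - A)       # (I - A)^{-1}
    return B @ np.diag(eps**2) @ B.T
```

With $\mathbf{A}=0$ (uncorrelated parameters) this reduces to $\mathrm{diag}(\epsilon_i^2)$, and off-diagonal response terms inflate the variances, as expected for the $T_{\rm eff}$--$v_t$ correlation noted above.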
\end{appendix}
\end{document}
\section{Introduction}
Meta-surfaces have been considered one of the key auxiliary devices in future sixth-generation (6G) wireless networks due to their substantial benefits, such as low cost and low power consumption, communication coverage extension, and communication quality improvement \cite{2019arXiv190308925D}. With the development of the corresponding fabrication technologies, two typical structures of meta-surfaces have been proposed recently\cite{2021arXiv210309104L}\cite{2021arXiv210109663X}: reconfigurable intelligent surfaces (RISs) and simultaneous transmitting and reflecting reconfigurable intelligent surfaces (STAR-RISs). Different from RISs, which are commonly characterized by their reflecting-only property, STAR-RISs can serve users located on both their front and back sides by simultaneously transmitting and reflecting the incident signals. Motivated by these attractive advantages, extensive research has been devoted to adopting STAR-RISs in novel communication frameworks to achieve smart radio environments.
Non-orthogonal multiple access (NOMA) is a promising 6G technology to achieve high spectrum efficiency and high energy efficiency\cite{9197675}\cite{8535085}. NOMA-assisted STAR-RISs have been envisioned as a promising structure for future wireless networks.
As one of the key performance indicators for future wireless networks, the improvement of energy efficiency (EE) is important to avoid excessive energy overhead and achieve green communications in 6G\cite{2021arXiv210101588M}. However, finding the global optimal solution of the EE maximization problem is challenging due to the fractional form of the objective function and the non-convex constraints. Inspired by the successful application of deep reinforcement learning (DRL) to a variety of wireless communication problems\cite{2020arXiv200210072H}\cite{2021arXiv210406007D}, we design a deep deterministic policy gradient (DDPG)-based algorithm to maximize the EE of a STAR-RIS assisted NOMA-MISO (multiple-input single-output) downlink network.
The proposed algorithm effectively achieves the maximum system EE under various transmission powers at the base station (BS) and different sizes of the STAR-RIS.
\begin{figure}[t]
\centering
\includegraphics[width=3.45in]{STAR-RIS-Multiple_USERS.jpg}
\caption{Model of STAR-RIS assisted NOMA-MISO downlink network.}
\label{system model}
\end{figure}
\section{SYSTEM MODEL AND PROBLEM FORMULATION}
\subsection{System Model}
As shown in Fig. \ref{system model}, we consider a NOMA-MISO assisted STAR-RIS downlink network, where a BS with $\textit{M}$ antennas transmits the signals to multiple single-antenna users via a STAR-RIS which has $\textit{N}$ elements.
$T_{\Bar{a}}, \forall \Bar{a} \in\{1,2,\cdots,A\}$ denote the users located behind the STAR-RIS in the transmission zone, while $R_{\Bar{b}}, \forall \Bar{b} \in\{1,2,\cdots,B\}$ denote the users located in the reflection zone in front of the STAR-RIS, where $\Bar{a}$ and $\Bar{b}$ are the indices of the users in the transmission and reflection zones, respectively. We assume that the direct links between the BS and all the users are blocked by buildings or walls.
In this system, we assume that the STAR-RIS follows the energy splitting (ES) protocol\cite{2021arXiv210109663X}\cite{2021arXiv210401421M}. Under the ES protocol, every element of the STAR-RIS can simultaneously transmit and reflect the incident signal by adopting two coefficient matrices at the same time. The coefficient matrices of the STAR-RIS contain the amplitude and phase-shift coefficients corresponding to transmission and reflection. Under the ES protocol, the energy conservation law must be guaranteed: the sum of the energies of the transmitted and reflected signals must equal the energy of the incident signal, which, ideally, constrains the amplitude coefficients as $\beta_n^T + \beta_n^R = 1, \forall n \in\{1,2,\cdots,N\}$\cite{2021arXiv210109663X, 2021arXiv210309104L}, where $\beta_n^T$ and $\beta_n^R$ denote the transmission and reflection amplitude coefficients of the $n$-th element, respectively. The coefficient matrices ${\bm{\Phi}}^{\tau} \in \mathbb{C}^{N \times N}$, $\tau \in \{T, R\}$, can be expressed as follows:
\begin{equation}
\begin{split}
{\bm{\Phi}}^{\tau} = \text{diag}(\sqrt{{\beta}_1^{\tau}}e^{j{\theta}_1^{\tau}},\sqrt{{\beta}_2^{\tau}}e^{j{\theta}_2^{\tau}}, \cdots \sqrt{{\beta}_N^{\tau}}e^{j{\theta}_N^{\tau}}),
\end{split}
\end{equation}
where $\text{diag}(\cdot)$ denotes a diagonal matrix, and ${\theta}_n^{\tau}\in[0,2\pi)$, $\forall n \in\{1,2,\cdots,N\}$, denotes the phase-shift coefficient of the $n$-th element of the STAR-RIS.
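As an illustration, the ES coefficient matrices above can be constructed as follows (a minimal sketch; the function name and inputs are our own, with the reflection amplitudes derived from energy conservation):

```python
import numpy as np

def es_coefficient_matrices(beta_t, theta_t, theta_r):
    """Build the ES-protocol coefficient matrices Phi^T and Phi^R.

    beta_t: transmission amplitude coefficients beta_n^T in [0, 1];
    the reflection amplitudes follow from energy conservation,
    beta_n^R = 1 - beta_n^T. theta_t, theta_r are phase shifts
    in [0, 2*pi).
    """
    beta_t = np.asarray(beta_t, dtype=float)
    beta_r = 1.0 - beta_t                     # beta_n^T + beta_n^R = 1
    phi_t = np.diag(np.sqrt(beta_t) * np.exp(1j * np.asarray(theta_t)))
    phi_r = np.diag(np.sqrt(beta_r) * np.exp(1j * np.asarray(theta_r)))
    return phi_t, phi_r
```

The squared magnitudes of the paired diagonal entries sum to one, which is exactly the ES energy-conservation constraint.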
In this paper, we consider that User $T_{\Bar{a}}$ and User $R_{\Bar{b}}$ are grouped together and served by the NOMA downlink transmission\cite{2021arXiv210309104L}.
Define $\Omega = \{T_1,T_2,\cdots,T_A,R_1,R_2,\cdots,R_B\}$ as the set of all users in both the transmission and reflection zones.
Letting $y$ denote the superimposed signal transmitted from the BS to all the users, the received signal $y_{\epsilon}$ at User $\epsilon \in \Omega$ can be expressed as follows:
\begin{equation}\label{recieved signal}
y_{\epsilon} = \bm{h}_{\epsilon}^H \bm{\Phi}^{k(\epsilon)} \bm{G} \sum_{\upsilon \in \Omega} \bm{\omega}_{\upsilon} s_{\upsilon} + z,
\end{equation}
where $\bm{h}_{\epsilon}^H \in \mathbb{C}^{1 \times N}$ denotes the conjugate transpose of the transmission or reflection channel between the STAR-RIS and User $\epsilon$, $\bm{G}\in \mathbb{C}^{N\times M}$ denotes the channel between the BS and the STAR-RIS, and $z \thicksim \mathcal{CN}(0, \sigma^2)$ is the additive white Gaussian noise. $\bm{\omega}_{\upsilon} \in \mathbb{C}^{M \times 1}$ denotes the beamforming vector, and $s_{\upsilon}$ is the signal symbol of User $\upsilon$, where we assume $\mathbb{E}(|s_{\upsilon}|^2) = 1$. $k(\cdot)$ is a function that returns the symbol indicating transmission or reflection for the coefficient matrices of the STAR-RIS:
\begin{subnumcases}{}
k(\epsilon) = T, & $if \ \epsilon = T_{\Bar{a}},$\\
k(\epsilon) = R, & $if \ \epsilon = R_{\Bar{b}}.$
\end{subnumcases}
For the NOMA transmission, successive interference cancellation (SIC) must be applied at the users.
We assume the decoding order for all the users in $\Omega$ is $\chi = \{ U_{\Bar{c}}, \cdots, U_2, U_1 \}$, where $U_1, U_2, \cdots, U_{\Bar{c}} \in \Omega$ and $\Bar{c} = A + B$.
To apply SIC, the signals are assumed to be decoded sequentially from the first element to the last element of $\chi$, i.e., from $U_{\Bar{c}}$ to $U_1$. Therefore, the achievable data rate of User $U_i \in \chi$ can be expressed as follows\cite{7555306}:
\begin{align}\label{user rate}
R_{U_i} = \text{min}(R_{U_iU_i},R_{U_iU_j}),
\end{align}
where $R_{U_iU_i}$ denotes the data rate at which User $U_i$ decodes its own signal, and $R_{U_iU_j}$ denotes the data rate at which User $U_j \in \chi$, $j>i$, decodes User $U_i$'s signal. The $\text{min}(\cdot)$ operation guarantees that SIC can be applied successfully\cite{2021arXiv210603001Z}. $R_{U_iU_i}$ and $R_{U_iU_j}$ can be expressed as:
\begin{subnumcases}{}
R_{U_iU_i} = \text{log}_2(1+\frac{|\bm{h}_{U_i}^{H} \bm{\Phi}^{k(U_i)} \bm{G} \bm{\omega}_{U_i}|^2}{\sum\limits_{U_z \in \chi^{\prime}}|\bm{h}_{U_i}^{H} \bm{\Phi}^{k(U_i)} \bm{G} \bm{\omega}_{U_z}|^2 + \sigma^2}),\\
R_{U_iU_j} = \text{log}_2(1+\frac{|\bm{h}_{U_j}^{H} \bm{\Phi}^{k(U_j)} \bm{G} \bm{\omega}_{U_i}|^2}{\sum\limits_{U_z \in \chi^{\prime}}|\bm{h}_{U_j}^{H} \bm{\Phi}^{k(U_j)} \bm{G} \bm{\omega}_{U_z}|^2 + \sigma^2}),
\end{subnumcases}
where $\chi^{\prime} = \{U_{i-1},U_{i-2}, \cdots, U_j,\cdots,U_1\},i \leq \Bar{c}$ is a subset of $\Omega$, and the signals of the users in $\{U_{\Bar{c}}, \cdots, U_{i+1}, U_{i}\}$ are decoded before those of the users $U_z$ in $\chi^{\prime}$.
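A minimal sketch of the SIC rate computation above, assuming users are indexed in decoding order (index 0 decoded first); `H_eff`, `W`, and the helper name are illustrative, and this is a sketch of the standard SIC rule rather than the authors' implementation:

```python
import numpy as np

def noma_rates(H_eff, W, sigma2):
    """Achievable NOMA rates under SIC.

    H_eff[u] is the effective channel row h_u^H Phi^{k(u)} G for user u,
    W[:, i] the beamforming vector of user i. Users are ordered so that
    index 0 is decoded first; signals not yet decoded (indices > i) act
    as interference, and the rate of signal i is the minimum over all
    receivers that must decode it (indices >= i).
    """
    K = W.shape[1]
    gains = np.abs(H_eff @ W) ** 2        # gains[u, i] = |h_u^H ... w_i|^2
    rates = np.empty(K)
    for i in range(K):
        interf = gains[:, i + 1:].sum(axis=1) + sigma2
        sinr = gains[:, i] / interf       # SINR of signal i at every user
        rates[i] = np.log2(1.0 + sinr[i:].min())
    return rates
```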
\subsection{Problem Formulation}
In this paper, we aim to maximize the EE of the proposed downlink network.
The EE can be expressed as follows:
\begin{equation}\label{EE}
\eta_{EE} = \frac{B_w \sum\limits_{\epsilon \in \Omega}^{} R_\epsilon}{\frac{1}{\gamma} P_T+P_C},
\end{equation}
where $\gamma \in (0,1]$ denotes the efficiency of the power amplifier at the BS, $P_C$ denotes the circuit power consumption, and $B_w$ denotes the transmission bandwidth. $P_T$ is the BS transmit power, which ideally equals the total power allocated to all the users, i.e., $P_T = \sum\limits_{\epsilon \in \Omega}^{}||\bm{\omega}_{\epsilon}||^{2}$.
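As a numerical sanity check, the EE expression above is straightforward to evaluate; the default `gamma` and `p_c` below are illustrative placeholders, not values used in this paper:

```python
def energy_efficiency(rates, p_t, bw, gamma=0.8, p_c=1.0):
    """System EE: achievable sum rate times bandwidth, divided by the
    total consumed power P_T / gamma + P_C. Defaults are placeholders."""
    return bw * sum(rates) / (p_t / gamma + p_c)
```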
Therefore, considering the related constraints, the EE maximization problem can be formulated as:
\begin{subequations}\label{optimization}
\begin{align}
\label{problems}\mathop{max}\limits_{(\bm{\omega_\epsilon}, \bm{\Phi^{\tau}})} &\eta_{EE} \\
\label{power contrl}\text{s.t.} \quad &P_{T} \le P_{max}, \\
\label{beta contrl} &\beta^T_n + \beta^R_n = 1, \ \forall n \in\{1,2,\cdots,N\}, \\
\label{theta contrl}& 0\le \theta_{n}^{\tau} < 2\pi, \ \forall n \in\{1,2,\cdots,N\},\\
\label{target rate} & R_\epsilon \ge R_{min},
\end{align}
\end{subequations}
where constraint (\ref{power contrl}) limits the transmission power at the BS, i.e., the total power allocated to the users cannot exceed the maximum transmission power $P_{max}$ at the BS. Constraints (\ref{beta contrl}) and (\ref{theta contrl}) guarantee that the amplitude and phase-shift coefficients of the STAR-RIS are adjusted within their valid ranges. Constraint (\ref{target rate}) guarantees that the data rate of every user meets the minimum data rate requirement of the system.
With these constraints and multiple coupled variables, the EE maximization problem (\ref{optimization}) is non-convex, which makes it challenging to obtain the global optimal solution with traditional mathematical tools such as convex optimization.
To efficiently solve problem (\ref{optimization}), we design a DRL-based algorithm to jointly optimize the beamforming vectors at the BS and the coefficient matrices (amplitudes and phase shifts) at the STAR-RIS to maximize the EE. DRL is an artificial intelligence (AI) technique that trains fully autonomous agents which interact with an environment and apply specific optimization strategies, improving over time through trial and error\cite{DRLIntroduction}. With deep neural networks, DRL can solve complex and high-dimensional optimization problems. As a DRL algorithm, DDPG is designed for optimization in continuous action spaces, which makes it suitable for our maximization problem.
\section{JOINT OPTIMIZATION WITH DDPG}
\subsection{Brief Introduction to DDPG}
Normally, there are four neural networks in DDPG: the actor network, the target actor network, the critic network, and the target critic network, where the two actor networks share the same structure and initial parameters, as do the two critic networks. A replay buffer is used to store past experiences. One experience tuple is organized as $(s^{(t)}, a^{(t)}, r^{(t)}, s^{(t+1)})$, where $s^{(t)}, a^{(t)}, r^{(t)}$ denote the state, action, and reward at the current ($t$-th) training step, and $s^{(t+1)}$ is the state of the next (($t+1$)-th) step obtained by executing action $a^{(t)}$ in the current environment. By randomly sampling $m_c$ tuples from the replay buffer, the parameters of the actor network are updated using the sampled policy gradient, and the critic network is trained by minimizing a loss function\cite{8535085}\cite{2015arXiv150902971L}. A soft update method is adopted to update the parameters of both target networks. With these four neural networks and their update rules, the DDPG model constantly improves itself by repeating this backbone procedure\cite{2015arXiv150902971L} to maximize the reward, which in this work is defined as the EE.
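The soft update of the target networks mentioned above can be sketched as follows, with parameters represented as arrays and `tau` a small illustrative smoothing factor:

```python
import numpy as np

def soft_update(target_params, source_params, tau=0.005):
    """Soft (Polyak) update used for DDPG target networks:
    theta_target <- tau * theta + (1 - tau) * theta_target.

    Parameters are lists of numpy arrays here; in a real implementation
    they would be the weight tensors of the networks.
    """
    return [tau * s + (1.0 - tau) * t
            for t, s in zip(target_params, source_params)]
```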
\subsection{Application of DDPG to EE optimization}
In this section, we briefly introduce the structures and process of our DDPG-based algorithm to the EE maximization. To apply DDPG to the maximization problem, the vectors both for action and state space, the reward function, the constraints normalization handling, and the algorithm process should be properly considered and designed in order to follow the DDPG operating rules.
According to the features in optimization problem (\ref{optimization}), we design the action vector, the state vector and the reward function as follows:
\begin{itemize}
\item [1)]
Action vector:
We select the beamforming vectors $\bm{\omega}_\epsilon^{(t)}$ and the coefficients matrices $\bm{\Phi}^{\tau,(t)}$ to define the action vector at the $t$-th training step.
Note that $\bm{\omega}_\epsilon^{(t)}$ is a complex vector, while the input vectors of neural networks should be real-valued. Thus, we separately take the real and imaginary parts of $\bm{\omega}_\epsilon^{(t)}$ to construct one part of the action vector. Similarly, we take the real and imaginary parts of the diagonal elements of $\bm{\Phi}^{\tau,(t)}$ to construct the remaining part of the action vector. The action vector at the $t$-th training step can be presented as follows:
\begin{equation}\label{action space}
\begin{split}
& a^{(t)} = \{ \text{Re}\{\bm{\omega}_\epsilon^{(t)}\}, \text{Im}\{\bm{\omega}_\epsilon^{(t)}\}, \text{Re}\{\bm{{\Phi}}^{\tau,(t)}_{n}\}, \text{Im}\{\bm{\Phi}^{\tau,(t)}_{n}\} \}, \\
& \forall\tau \in \{T,R\}, \ \forall\epsilon \in \Omega, \ \forall n \in \{1,2,\dots,N\},
\end{split}
\end{equation}
where $\text{Re}\{\cdot\}$ and $\text{Im}\{\cdot\}$ denote the real and imaginary parts of a complex number, respectively, and $\bm{\Phi}^{\tau,(t)}_{n}$ denotes the $n$-th diagonal element of $\bm{\Phi}^{\tau,(t)}$.
\item [2)]
State vector:
The state vector should fully represent the status of the proposed communication system with respect to the optimization problem (\ref{optimization}). We design the state vector at the $t$-th training step as follows:
\begin{equation}\label{state space}
\begin{split}
&s^{(t)} = \{ R_\epsilon^{(t)}, ||\bm{\omega}_\epsilon^{(t)}||^2, |\bm{h}_\epsilon^{H,(t)} \bm{\Phi}^{k(\epsilon),(t)} \bm{G}^{(t)} |^2\},\\ &\forall\epsilon \in \Omega,
\end{split}
\end{equation}
\item [3)]
Reward function: Because our aim is to maximize the EE in (\ref{optimization}), it is natural to use the EE as the reward at the $t$-th training step: $r^{(t)} = \eta_{EE}^{(t)}$.
\end{itemize}
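The real-valued packing of (\ref{action space}) and its inverse can be sketched as follows; the helper names and the ordering of the sub-blocks are our own choices:

```python
import numpy as np

def pack_action(W, phi_t_diag, phi_r_diag):
    """Flatten complex beamformers and STAR-RIS coefficients into a
    real-valued action vector: real and imaginary parts are stacked
    separately so the vector can feed a real-valued neural network."""
    return np.concatenate([W.real.ravel(), W.imag.ravel(),
                           phi_t_diag.real, phi_t_diag.imag,
                           phi_r_diag.real, phi_r_diag.imag])

def unpack_action(a, M, K, N):
    """Inverse of pack_action for M antennas, K users, N elements."""
    w = a[:2 * M * K]
    W = (w[:M * K] + 1j * w[M * K:]).reshape(M, K)
    rest = a[2 * M * K:]
    pt = rest[:N] + 1j * rest[N:2 * N]
    pr = rest[2 * N:3 * N] + 1j * rest[3 * N:]
    return W, pt, pr
```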
\begin{algorithm}[t]
\caption{DDPG-based EE maximization}
\label{algorithm show}
\begin{algorithmic}[1]
\STATE Generate the actor network, the critic network, the target actor network and the target critic network with their parameters;
\STATE Initialize the replay buffer $\mathcal{M}$ with the capacity C;
\FOR{episode $q = 1,2,...,E$}
\STATE Generate the channel $\bm{G}^{(q)}$ and $\bm{h}_{\epsilon}^{(q)},\forall\epsilon\in \Omega$ by (\ref{G channel}) and (\ref{h channel});
\STATE Initial $\bm{\omega}_{\epsilon}^{(1)}$, $\bm{\Phi}^{\tau,(1)}$ and apply SIC to get $s^{(1)}$ by (\ref{state space});
\FOR{step $t = 1,2,...,S$}
\STATE Select the action $a^{(t)}$ from actor network based on the current state $s^{(t)}$;
\STATE Explore $a^{(t)}$ by adding a random process $\mathcal{N}$;
\STATE Obtain $\bm{\hat{\omega}}_{\epsilon}^{(t)}$ by (\ref{normal w}), and obtain $\bm{\hat{\Phi}}^{\tau,(t)}$ by (\ref{normal phi});
\STATE Calculate the data rate $R_{\epsilon}^{(t)}$ at User $\epsilon$ by (\ref{user rate});
\IF {$R_{\epsilon}^{(t)} < R_{min}, \exists \epsilon \in \Omega$}
\STATE Calculate the reward $\Hat{r}^{(t)}$ with (\ref{EE}), (\ref{reward}) and (\ref{punishment2});
\ELSE
\STATE Calculate the reward $\Hat{r}^{(t)}$ with (\ref{EE}), (\ref{reward}) and (\ref{punishment1});
\ENDIF
\STATE Construct a new state $s^{(t+1)}$ by (\ref{state space});
\STATE Store \{$s^{(t)}$, $a^{(t)}$, $\Hat{r}^{(t)}$, $s^{(t+1)}$\} to the replay buffer $\mathcal{M}$;
\STATE Randomly sample $m_c$ tuples from the replay buffer $\mathcal{M}$, and update the parameters of the critic network and the actor network;
\STATE Softly update the parameters of the target actor network and the target critic network;
\STATE $s^{(t)} = s^{(t+1)}$
\ENDFOR
\ENDFOR
\end{algorithmic}
\end{algorithm}
To solve problem (\ref{optimization}), we propose a DDPG-based joint maximization algorithm shown in Algorithm \ref{algorithm show}.
In this algorithm, each neural network is fully connected and sequentially comprises an input layer, a hidden layer, a batch normalization layer, another hidden layer, and an output layer. For the actor network, the dimension of the input layer is given by the size of the state vector; the rectified linear unit (ReLU) activation is used after the batch normalization layer, and the hyperbolic tangent (tanh) function is used in the second hidden layer. For the critic network, the state vector and the action vector are fed into two individual hidden layers followed by two batch normalization layers, whose outputs are concatenated and activated by the ReLU function; the ReLU function is also used in the second hidden layer.
All the hidden layers contain 300 neurons. The learning rates for the critic and actor networks are 0.002 and 0.001, respectively.
Since a built-in constraint structure in a neural network can hardly meet the requirements of the constraints in (\ref{optimization}), we design a constraint-handling process to normalize the beamforming vectors and the coefficient matrices in the original action vector output by the actor network.
Note that the output range of the actor network is $(-1,1)$ due to the tanh activation function, so the action vector must be normalized at every training step to make the EE calculation valid and satisfy the constraints. We normalize the beamforming vectors $\bm{\omega}_\epsilon^{(t)},\forall \epsilon \in \Omega$, as follows:
\begin{align}
\label{normal w} & \hat{\bm{\omega}}_\epsilon^{(t)} = \sqrt{\lambda_\epsilon^{(t)}} \bm{\omega}_\epsilon^{(t)},
\end{align}
where
\begin{subnumcases}{}
\lambda_\epsilon^{(t)} = \frac{\hat{P}_\epsilon^{(t)}}{P_\epsilon^{(t)}},\\
P_\epsilon^{(t)} = ||\bm{\omega}_\epsilon^{(t)}||^2,\\
\hat{P}_\epsilon^{(t)} = \frac{||\bm{\omega}_\epsilon^{(t)}||^2}{(A+B)||\bm{\omega}_{tanh}^{max}||^2} \cdot P_{max}, \\
|\text{Re}\{\bm{\omega}_{tanh}^{max}\}|, |\text{Im}\{\bm{\omega}_{tanh}^{max}\}| \in \bm{1}^{M\times1},
\end{subnumcases}
where $||\bm{\omega}_{tanh}^{max}||^2$ denotes the maximum achievable value of $||\bm{\omega}_\epsilon^{(t)}||^2$ within the range of the tanh function, $P_\epsilon^{(t)}$ denotes the transmission power of the beamforming vector $\bm{\omega}_{\epsilon}^{(t)}$ extracted from the action vector, and $\hat{P}_\epsilon^{(t)}$ denotes the transmission power of the normalized beamforming vector $\hat{\bm{\omega}}_{\epsilon}^{(t)}$. $\hat{P}_\epsilon^{(t)}$ is a fraction of $P_{max}$, which further satisfies:
\begin{align}
\label{satisfed power} P_T^{(t)} = \sum\limits_{\epsilon \in \Omega}^{}\hat{P}_\epsilon^{(t)} \le P_{max},
\end{align}
where (\ref{satisfed power}) guarantees constraint (\ref{power contrl}). Thus, based on (\ref{normal w}), we obtain a normalized beamforming vector through the power ratio $\lambda_\epsilon^{(t)}$, while $\hat{\bm{\omega}}_{\epsilon}^{(t)}$ maintains the same direction as $\bm{\omega}_{\epsilon}^{(t)}$.
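A sketch of the power normalization (\ref{normal w}), under the interpretation that each raw per-user power is mapped to the corresponding fraction of $P_{max}$, with $||\bm{\omega}_{tanh}^{max}||^2 = 2M$ for $M$ antennas (each entry's real and imaginary parts bounded by one); the helper name is ours:

```python
import numpy as np

def normalize_beamformers(W, p_max):
    """Scale raw actor-output beamformers so the total transmit power
    stays within p_max while each column keeps its direction.

    W: M x K complex matrix of per-user beamformers (columns).
    """
    M, K = W.shape
    w_max = 2.0 * M                              # ||w||^2 bound for tanh range
    p = np.sum(np.abs(W) ** 2, axis=0)           # raw per-user powers
    p_hat = p / (K * w_max) * p_max              # target powers, sum <= p_max
    scale = np.sqrt(np.divide(p_hat, p, out=np.zeros_like(p), where=p > 0))
    return W * scale
```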
Similarly, we normalize the coefficient matrices $\hat{\bm{\Phi}}^{\tau,(t)}$, $\forall \tau \in \{T, R\}$, as follows:
\begin{equation}\label{normal phi}
\begin{split}
& \hat{\bm{\Phi}}^{\tau,(t)} = \text{diag}(\sqrt{{\hat{\beta}}_1^{^{\tau,(t)}}}e^{j{\hat{\theta}}_1^{^{\tau,(t)}}},\sqrt{{\hat{\beta}}_2^{^{\tau,(t)}}}e^{j{\hat{\theta}}_2^{^{\tau,(t)}}}, \cdots,\\
&\sqrt{{\hat{\beta}}_N^{^{\tau,(t)}}}e^{j{\hat{\theta}}_N^{^{\tau,(t)}}}),
\end{split}
\end{equation}
where $\forall n \in\{1,2,\cdots,N\}$,
\begin{subnumcases}{}
\label{theta call}\hat{\theta}_n^{\tau,(t)} = \arctan(\frac{\text{Im}\{\bm{\Phi}^{\tau,(t)}_{n} \}}{\text{Re}\{\bm{\Phi}^{\tau,(t)}_{n} \}}), \\
\label{beta call}\hat{\beta}^{\tau,(t)}_{n} = \frac{|\bm{\Phi}^{\tau,(t)}_{n}|^2}{|\bm{\Phi}^{T,(t)}_n|^2 + |\bm{\Phi}^{R,(t)}_n|^2},
\end{subnumcases}
where (\ref{theta call}) guarantees that the polar form of $\bm{\Phi}^{\tau,(t)}_{n}$ maintain the same radians with its rectangular form. (\ref{beta call}) guarantees that the constrains (\ref{beta contrl}) can be satisfied as: $\hat{\beta}^{T,(t)}_{n} + \hat{\beta}^{R,(t)}_{n} = 1$.
Furthermore, a punishment rule is designed for reward at the $t$-th training step, which can be expressed as:
\begin{align}\label{reward}
\hat{r}^{(t)} = \zeta \ r^{(t)},
\end{align}
where $\zeta$ is a punishment for the reward, which can be presented as:
\begin{subnumcases}{}
\zeta = 1,&$if \ R_{\epsilon}^{(t)} \geqslant R_{min}, \forall \epsilon \in \Omega $ \label{punishment1},\\
\zeta = -|R_{\epsilon,min}^{(t)}-R_{min}|,&$if \ R_{\epsilon}^{(t)} < R_{min}, \exists \epsilon \in \Omega$ \label{punishment2},
\end{subnumcases}
where $R_{\epsilon,min}^{(t)}$ presents the minimum data rate of all the users at the $t$-th training step. (\ref{punishment2}) punishes the reward in the negative way if any data rate of the users is less than the data rate requirement, which guarantees that the constraint (\ref{target rate}) can be satisfied. (\ref{punishment1}) denotes that the reward remains the value of the EE if all the users' data rate meet the data rate requirement. (\ref{punishment1}) and (\ref{punishment2}) both affect the reward in training. With (\ref{reward}), the DDPG model adjust the parameters to avoid the negative reward and try to achieve higher EE value through training.
\section{NUMERICAL RESULTS}
In this section, we present the performance of the proposed joint maximization algorithm. Specifically, Based on the related works\cite{2021arXiv210401421M}, we model the channels gain $\bm{G}^{(q)}$ and $\bm{h}_{\epsilon}^{(q)},\forall\epsilon \in \Omega$ as Rician fading channel:
\begin{subequations}
\begin{align}
\label{G channel} & \bm{G}^{(q)} = \sqrt{\frac{\rho_0}{d_G^{\alpha_{BR}}}}(\sqrt{\frac{K_{BR}}{1+K_{BR}}} \bm{G}^{LoS} + \sqrt{\frac{1}{1+K_{BR}}} \bm{G}^{nLoS}), \\
\label{h channel} & \bm{h}_{\epsilon}^{(q)} = \sqrt{\frac{\rho_0}{d_{\epsilon}^{\alpha_{RU}}}}(\sqrt{\frac{K_{AU}}{1+K_{RU}}} \bm{h}_{\epsilon}^{LoS} + \sqrt{\frac{1}{1+K_{RU}}} \bm{h}_{\epsilon}^{nLoS}),
\end{align}
\end{subequations}
where $\rho_0$ denotes the path loss at a reference distance of 1 meter. $\alpha_{BR}$, $\alpha_{RU}$ are path loss exponents. $d_G$, $d_k$ denote distance between the STAR-RIS and the BS as well as distance between the STAR-RIS and the users respectively. $K_{BR},K_{RU}$ denote the Rician factors. $\bm{G}^{LoS}$ and $\bm{h}_{\epsilon}^{LoS}$ are the line-of-sight (Los) components, while $\bm{G}^{nLoS}$ and $\bm{h}_{\epsilon}^{nLoS}$ are the none-line-of-sight (nLos) components both following Rayleigh fading. It is worth to point out that $\bm{G}^{(q)}$ and $\bm{h}_{\epsilon}^{(q)}$ are generated at every episode in training to simulate varying channels.
Furthermore, the mainly setting parameters are demonstrated in Table \ref{table_p}.
\begin{table}[htb]
\begin{center}
\caption{SIMULATION PARAMETERS}
\label{table_p}
\begin{tabular}{|c|c||c|c|
\hline
parameter&value¶meter&value\\
\hline
$d_G$&50 meters&$d_k$&(5,10) meters\\
\hline
$\rho_0$&-30 dB& $\gamma$ & 0.35\\
\hline
$P_c$&40 dBm& $\bm{G}^{LoS}, \bm{h}_k^{LoS}$ & 1\\
\hline
$\alpha_{BR}, \alpha_{RU}$ & 2.2, 2.5 & $\sigma^2$ &-80 dBm\\
\hline
$K_{BR},K_{RU}$&10& $B_w$ & 180 $k$Hz\\
\hline
$C$ & 10000 & $m_c$ & 32\\
\hline
\end{tabular}
\end{center}
\end{table}
Fig. \ref{Rewards versus episode} demonstrates the convergence of the proposed algorithm through the training episodes separately considering the time-varying channel with $P_{max} = 20$ dBm and $P_{max} = 30$ dBm at the BS. Each side of the STAR-RIS has two users, and the minimum data rate requirement is set to 0.1 bps/Hz. From Fig. \ref{Rewards versus episode}, we can see that the rewards rise dramatically and then remain at a relatively high value with the increase of episodes for both transmission power. As one of the benchmarks in our simulation, a random coefficients scheme for the STAR-RIS remains poor performance with the episodes increases, which indicates that our proposed algorithm can significantly maximize the EE for the proposed downlink network.
\begin{figure}[t]
\centering
\includegraphics[width=1\columnwidth]{ee-episode.png}
\caption{Rewards versus Episodes with $M = 10$, $N = 30$, $A = B = 2$, $R_{min}$ = 0.1 bps/Hz, as well as different power at the BS}
\label{Rewards versus episode}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=0.9\columnwidth]{ee-power.png}
\caption{EE versus transmission power at the BS with $R_{min}$ = 0.1 bps/Hz, $N = 30, A = B = 2$, as well as different antennas at the BS}
\label{EE versus power}
\end{figure}
Fig. \ref{EE versus power} shows EE versus the transmission power at the BS with the variable number of antennas at the BS. The number of elements at the STAR-RIS is 30. The user numbers and the data rate requirement are same with Fig. \ref{Rewards versus episode}. From Fig. \ref{EE versus power}, we can see that, as maximum transmitted power increases, EE increases to a peak value and remains, which indicates that EE can not grow continually with the constant growth of power at the BS. Moreover, the improvement of performance continuously gets smaller with the number of antennas increases. This is because the feasible domain of each channel between antennas get narrowed under the same power.
\begin{figure}[t]
\centering
\includegraphics[width=1\columnwidth]{ee-ris.png}
\caption{EE versus elements at the STAR-RIS with $R_{min}$ = 0.1 bps/Hz, $A = B = 2$, as well as different antennas at the BS}
\label{EE versus elements}
\end{figure}
In Fig. \ref{EE versus elements}, we present the EE performance versus the number of the elements at the STAR-RIS with 20 dBm at the BS. It can be observed that the system EE increases with the number of the elements at the STAR-RIS.
\section{Conclusion}
In this paper, we have studied a joint EE maximization problem for a NOMA-MISO assisted STAR-RIS downlink network. We have designed a DDPG-based algorithm to jointly optimize the beamforming vectors at the BS and the coefficients matrices at the STAR-RIS to maximize EE. The numerical results have validated the effectiveness and convergence of the proposed algorithm considering the time-varying channel.
Moreover, we have analyzed the trend of EE with different transmission power at the BS and various elements at the STAR-RIS.
\bibliographystyle{IEEEtran}
\section{Introduction}
Meta-surfaces have been considered as one of the key auxiliary devices in future sixth generation (6G) wireless networks due to their substantial benefits, such as low cost and low power consumption, communication coverage extension, and communication quality improvement \cite{2019arXiv190308925D}. With the development of the corresponding fabrication technologies, two typical structures of meta-surfaces have been proposed recently\cite{2021arXiv210309104L}\cite{2021arXiv210109663X}: reconfigurable intelligent surfaces (RISs) and simultaneously transmitting and reflecting reconfigurable intelligent surfaces (STAR-RISs). Different from conventional RISs, which are commonly characterized by their reflecting-only property, STAR-RISs can serve users located on both their front and back sides by simultaneously transmitting and reflecting the incident signals. Motivated by these attractive advantages, extensive research has been devoted to adopting STAR-RISs in novel communication frameworks to achieve smart radio environments.
Non-orthogonal multiple access (NOMA) is a promising 6G technology to achieve high spectrum efficiency and high energy efficiency\cite{9197675}\cite{8535085}. Accordingly, NOMA-assisted STAR-RISs have been envisioned as a promising structure for future wireless networks.
As one of the key performance indicators for future wireless networks, improving energy efficiency (EE) is important to avoid energy overhead and achieve green communications in 6G\cite{2021arXiv210101588M}. However, finding the global optimal solution of the EE maximization problem is challenging due to the fractional form of the objective function and the non-convex constraints. Inspired by the successful application of deep reinforcement learning (DRL) to a variety of wireless communication problems\cite{2020arXiv200210072H}\cite{2021arXiv210406007D}, we design a deep deterministic policy gradient (DDPG)-based algorithm to maximize the EE of a NOMA multiple-input single-output (MISO) assisted STAR-RIS downlink network.
The proposed algorithm can effectively achieve the maximum system EE by considering various transmission power at the base station (BS) and different sizes of the STAR-RIS.
\begin{figure}[t]
\centering
\includegraphics[width=3.45in]{STAR-RIS-Multiple_USERS.jpg}
\caption{Model of STAR-RIS assisted NOMA-MISO downlink network.}
\label{system model}
\end{figure}
\section{SYSTEM MODEL AND PROBLEM FORMULATION}
\subsection{System Model}
As shown in Fig. \ref{system model}, we consider a NOMA-MISO assisted STAR-RIS downlink network, where a BS with $\textit{M}$ antennas transmits the signals to multiple single-antenna users via a STAR-RIS which has $\textit{N}$ elements.
$T_{\Bar{a}}, \forall \Bar{a} \in\{1,2,\cdots,A\}$ denote the users located behind the STAR-RIS in the transmission zone, while $R_{\Bar{b}}, \forall \Bar{b} \in\{1,2,\cdots,B\}$ denote the users located in the reflection zone in front of the STAR-RIS, where $\Bar{a}$ and $\Bar{b}$ are the indices of the users in the transmission zone and the reflection zone, respectively. We assume that the direct links between the BS and all the users are blocked by buildings or walls.
In this system, we assume that the STAR-RIS follows the energy splitting (ES) protocol\cite{2021arXiv210109663X}\cite{2021arXiv210401421M}, under which every element of the STAR-RIS can simultaneously transmit and reflect the incident signals by adopting two coefficient matrices. The coefficient matrices of the STAR-RIS contain amplitude coefficients and phase-shift coefficients corresponding to transmission or reflection. Under the ES protocol, the energy conservation law should be guaranteed: the sum of the energies of the transmitted and reflected signals must equal the energy of the incident signal, which ideally defines the rule for the amplitude coefficients, i.e., $\beta_n^T + \beta_n^R = 1, \forall n \in\{1,2,\cdots,N\}$\cite{2021arXiv210109663X, 2021arXiv210309104L}, where $\beta_n^T$ and $\beta_n^R$ denote the transmission and reflection amplitude coefficients of the $n$-th element of the STAR-RIS, respectively. The coefficient matrices ${\bm{\Phi}}^{\tau} \in \mathbb{C}^{N \times N}$, $\tau \in \{T, R\}$, can be expressed as follows:
\begin{equation}
\begin{split}
{\bm{\Phi}}^{\tau} = \text{diag}(\sqrt{{\beta}_1^{\tau}}e^{j{\theta}_1^{\tau}},\sqrt{{\beta}_2^{\tau}}e^{j{\theta}_2^{\tau}}, \cdots, \sqrt{{\beta}_N^{\tau}}e^{j{\theta}_N^{\tau}}),
\end{split}
\end{equation}
where $\text{diag}(\cdot)$ denotes a diagonal matrix, and ${\theta}_n^{\tau}\in[0,2\pi)$, $\forall n \in\{1,2,\cdots,N\}$, denotes the phase-shift coefficient of the $n$-th element of the STAR-RIS.
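As a quick numerical check of the ES rule, the two coefficient matrices can be built and verified in a few lines. This is a hedged NumPy sketch with illustrative sizes and random values; it is not part of the paper's implementation:

```python
import numpy as np

N = 4  # number of STAR-RIS elements (illustrative)
rng = np.random.default_rng(0)

# Transmission amplitudes; reflection amplitudes follow from the
# energy conservation rule beta_T + beta_R = 1.
beta_T = rng.uniform(0.0, 1.0, N)
beta_R = 1.0 - beta_T

# Independent phase shifts in [0, 2*pi) for each mode.
theta_T = rng.uniform(0.0, 2.0 * np.pi, N)
theta_R = rng.uniform(0.0, 2.0 * np.pi, N)

Phi_T = np.diag(np.sqrt(beta_T) * np.exp(1j * theta_T))
Phi_R = np.diag(np.sqrt(beta_R) * np.exp(1j * theta_R))

# Energy conservation holds element-wise: |Phi_T[n,n]|^2 + |Phi_R[n,n]|^2 = 1.
energy = np.abs(np.diag(Phi_T)) ** 2 + np.abs(np.diag(Phi_R)) ** 2
print(np.allclose(energy, 1.0))  # True
```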
In this paper, we consider that User $T_{\Bar{a}}$ and User $R_{\Bar{b}}$ are grouped together and served by the NOMA downlink transmission\cite{2021arXiv210309104L}.
Define $\Omega = \{T_1,T_2,\cdots,T_A,R_1,R_2,\cdots,R_B\}$ as a user set by merging both users in the transmission zone and the reflection zone.
Let $y$ denote the superimposed signal transmitted from the BS to all the users; the received signal $y_{\epsilon}$ at User $\epsilon \in \Omega$ can be expressed as follows:
\begin{equation}\label{recieved signal}
y_{\epsilon} = \bm{h}_{\epsilon}^H \bm{\Phi}^{k(\epsilon)} \bm{G} \sum_{\upsilon \in \Omega} \bm{\omega}_{\upsilon} s_{\upsilon} + z,
\end{equation}
where $\bm{h}_{\epsilon}^H \in \mathbb{C}^{1 \times N}$ denotes the conjugate transpose of the transmission or reflection channel between the STAR-RIS and User $\epsilon$, and $\bm{G}\in \mathbb{C}^{N\times M}$ denotes the channel between the BS and the STAR-RIS. $z$ is the additive white Gaussian noise, which follows $z \thicksim \mathcal{CN}(0, \sigma^2)$. $\bm{\omega}_{\upsilon} \in \mathbb{C}^{M \times 1}$ denotes the beamforming vector, and $s_{\upsilon}$ is the signal symbol of User $\upsilon$, for which we assume $\mathbb{E}(|s_{\upsilon}|^2) = 1$. $k(\cdot)$ is a function that returns the notation symbol indicating transmission or reflection for the coefficient matrices of the STAR-RIS:
\begin{subnumcases}{}
k(\epsilon) = T, & $\text{if}\ \epsilon = T_{\Bar{a}},$\\
k(\epsilon) = R, & $\text{if}\ \epsilon = R_{\Bar{b}}.$
\end{subnumcases}
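To make the signal model concrete, the received signal in (\ref{recieved signal}) can be simulated directly. This is a hedged sketch: channel sizes, the number of users, and the unit-amplitude STAR-RIS mode are illustrative assumptions:

```python
import numpy as np

M, N = 4, 8  # BS antennas, STAR-RIS elements (illustrative sizes)
rng = np.random.default_rng(1)
crandn = lambda *s: (rng.standard_normal(s) + 1j * rng.standard_normal(s)) / np.sqrt(2)

G = crandn(N, M)                      # BS -> STAR-RIS channel
h = crandn(N)                         # STAR-RIS -> user channel
Phi = np.diag(np.exp(1j * rng.uniform(0, 2 * np.pi, N)))  # one mode, beta = 1 for brevity
omegas = [crandn(M) for _ in range(3)]               # beamformers of the superposed users
symbols = np.exp(1j * rng.uniform(0, 2 * np.pi, 3))  # unit-modulus symbols
sigma2 = 1e-3
z = np.sqrt(sigma2) * crandn()                       # AWGN sample

# y = h^H Phi G sum_v (omega_v s_v) + z
tx = sum(w * s for w, s in zip(omegas, symbols))     # superimposed transmit vector
y = h.conj() @ Phi @ G @ tx + z
print(np.iscomplexobj(y))  # True
```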
For the NOMA transmission, successive interference cancellation (SIC) must be applied at the users.
We assume the decoding order for all the users in $\Omega$ is $\chi = \{ U_{\Bar{c}}, \cdots, U_2, U_1 \}$, where $U_1, U_2, \cdots, U_{\Bar{c}} \in \Omega$ and $\Bar{c} = A + B$.
To apply SIC, decoding is assumed to proceed sequentially from the first element to the last element of $\chi$, i.e., from $U_{\Bar{c}}$ to $U_1$. Therefore, the achievable data rate of User $U_i \in \chi$ can be expressed as follows\cite{7555306}:
\begin{align}\label{user rate}
R_{U_i} = \text{min}(R_{U_iU_i},R_{U_iU_j}),
\end{align}
where $R_{U_iU_i}$ denotes the decoding data rate at User $U_i$ when it decodes its own signal, and $R_{U_iU_j}$ denotes the decoding data rate at User $U_j \in \chi, j<i$, when User $U_j$ decodes User $U_i$'s signal. The $\min(\cdot)$ operation on the data rate guarantees that SIC can be applied smoothly\cite{2021arXiv210603001Z}. $R_{U_iU_i}$ and $R_{U_iU_j}$ can be expressed as:
\begin{subnumcases}{}
R_{U_iU_i} = \text{log}_2(1+\frac{|\bm{h}_{U_i}^{H} \bm{\Phi}^{k(U_i)} \bm{G} \bm{\omega}_{U_i}|^2}{\sum\limits_{U_z \in \chi^{\prime}}|\bm{h}_{U_i}^{H} \bm{\Phi}^{k(U_i)} \bm{G} \bm{\omega}_{U_z}|^2 + \sigma^2}),\\
R_{U_iU_j} = \text{log}_2(1+\frac{|\bm{h}_{U_j}^{H} \bm{\Phi}^{k(U_j)} \bm{G} \bm{\omega}_{U_i}|^2}{\sum\limits_{U_z \in \chi^{\prime}}|\bm{h}_{U_j}^{H} \bm{\Phi}^{k(U_j)} \bm{G} \bm{\omega}_{U_z}|^2 + \sigma^2}),
\end{subnumcases}
where $\chi^{\prime} = \{U_{i-1},U_{i-2}, \cdots, U_j,\cdots,U_1\},i \leq \Bar{c}$, is a subset of $\Omega$, and the users in $\{U_{\Bar{c}}, \cdots, U_{i+1}, U_{i}\}$ are decoded before User $U_z \in \chi^{\prime}$.
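The two decoding rates and the $\min$ operation can be sketched as follows. This is hedged: `rate` is an illustrative helper operating on pre-collapsed effective channels ($\bm{h}^H\bm{\Phi}\bm{G}$ reduced to a length-$M$ vector), not code from the paper:

```python
import numpy as np

def rate(h_eff, w_sig, w_interf, sigma2):
    """log2(1 + SINR) when decoding the stream carried by w_sig at a receiver
    whose effective channel (h^H Phi G collapsed to length M) is h_eff."""
    sig = np.abs(np.vdot(h_eff, w_sig)) ** 2
    interf = sum(np.abs(np.vdot(h_eff, w)) ** 2 for w in w_interf) + sigma2
    return np.log2(1.0 + sig / interf)

rng = np.random.default_rng(3)
M, sigma2 = 4, 1e-2
h_i, h_j, w_i, w_j = (rng.standard_normal(M) + 1j * rng.standard_normal(M)
                      for _ in range(4))

# U_i's stream must be decodable both at U_i itself and at the user U_j that
# cancels it; U_j's own stream is still uncancelled interference in both cases.
R_ii = rate(h_i, w_i, [w_j], sigma2)
R_ij = rate(h_j, w_i, [w_j], sigma2)
R_i = min(R_ii, R_ij)
print(R_i > 0)  # True
```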
\subsection{Problem Formulation}
In this paper, we aim to maximize the EE of the proposed downlink network.
The EE can be expressed as follows:
\begin{equation}\label{EE}
\eta_{EE} = \frac{B_w \sum\limits_{\epsilon \in \Omega}^{} R_\epsilon}{\frac{1}{\gamma} P_T+P_C},
\end{equation}
where $\gamma \in (0,1]$ denotes the efficiency of the power amplifier at the BS, $P_C$ denotes the total circuit power consumption, and $B_w$ denotes the transmission bandwidth. $P_T$ is the BS transmission power, which ideally equals the total power of all the users' beamforming vectors, i.e., $P_T = \sum\limits_{\epsilon \in \Omega}^{}||\bm{\omega}_{\epsilon}||^{2}$.
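A direct transcription of (\ref{EE}) is straightforward; the helper name and the numbers below are illustrative only, not values claimed by the paper:

```python
# Direct transcription of the EE expression: eta_EE = B_w * sum(R_eps) / (P_T/gamma + P_C).
def energy_efficiency(rates_bps_hz, bandwidth_hz, p_tx_w, p_circuit_w, gamma):
    return bandwidth_hz * sum(rates_bps_hz) / (p_tx_w / gamma + p_circuit_w)

# Hypothetical numbers: 4 users at 1 bps/Hz each, 180 kHz bandwidth,
# 0.1 W transmit power, amplifier efficiency 0.35, 10 W circuit power.
ee = energy_efficiency([1.0] * 4, 180e3, 0.1, 10.0, 0.35)
print(round(ee, 1))  # 70000.0 (bit/s per watt)
```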
Therefore, considering the related constraints, the EE maximization problem can be formulated as:
\begin{subequations}\label{optimization}
\begin{align}
\label{problems}\max\limits_{(\bm{\omega}_\epsilon, \bm{\Phi}^{\tau})} &\eta_{EE} \\
\label{power contrl}\text{s.t.} \quad &P_{T} \le P_{max}, \\
\label{beta contrl} &\beta^T_n + \beta^R_n = 1, \ \forall n \in\{1,2,\cdots,N\}, \\
\label{theta contrl}& 0\le \theta_{n}^{\tau} < 2\pi, \ \forall n \in\{1,2,\cdots,N\},\\
\label{target rate} & R_\epsilon \ge R_{min}, \ \forall \epsilon \in \Omega,
\end{align}
\end{subequations}
where constraint (\ref{power contrl}) limits the transmission power at the BS, indicating that the total power of the users cannot exceed the maximum transmission power $P_{max}$ of the BS. Constraints (\ref{beta contrl}) and (\ref{theta contrl}) guarantee that the amplitude and phase-shift coefficients of the STAR-RIS are adjusted within reasonable ranges. Constraint (\ref{target rate}) guarantees that the data rate of every user meets the minimum data rate requirement of the system.
Obviously, with these constraints and multiple coupled variables, the EE maximization problem (\ref{optimization}) is non-convex, which makes it challenging to obtain the global optimal solution using traditional mathematical tools such as convex optimization.
To efficiently solve problem (\ref{optimization}), we design a DRL-based algorithm to jointly optimize the beamforming vectors at the BS and the coefficient matrices, including amplitudes and phase shifts, at the STAR-RIS to maximize the EE. DRL is an artificial intelligence (AI) technique that trains fully autonomous agents by interacting with the environment and applying specific optimal strategies, improving over time through trial and error\cite{DRLIntroduction}. With deep neural networks, DRL can solve complex and high-dimensional optimization problems. As a DRL method, DDPG is designed for optimization in continuous action spaces, which makes it suitable for our maximization problem.
\section{JOINT OPTIMIZATION WITH DDPG}
\subsection{Brief Introduction to DDPG}
Normally, there are four neural networks in DDPG: the actor network, the target actor network, the critic network, and the target critic network, where the two actor networks share the same structure and initial parameters, as do the two critic networks. A replay buffer is also used to store past experiences. One typical experience tuple is organized as $(s^{(t)}, a^{(t)}, r^{(t)}, s^{(t+1)})$, where $s^{(t)}, a^{(t)}, r^{(t)}$ denote the state, action, and reward at the current ($t$-th) training step, and $s^{(t+1)}$ is the state of the next ($(t+1)$-th) step obtained by executing action $a^{(t)}$ in the current environment. By randomly sampling $m_c$ tuples from the replay buffer, the parameters of the actor network can be updated using the sampled policy gradient, and the critic network can be trained by minimizing the loss function\cite{8535085}\cite{2015arXiv150902971L}. A soft update method is adopted to update the parameters of both target networks. With these four neural networks and their parameter update methods, the DDPG model can constantly improve itself by repeating its backbone procedure\cite{2015arXiv150902971L} to maximize the reward, which in this work is specifically defined through the EE.
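The replay buffer and the soft target update described above can be sketched minimally. This is a hedged sketch: the class layout and the blending factor `tau` are illustrative choices, not the paper's implementation (the capacity $C=10000$ and batch size $m_c=32$ match the later simulation settings):

```python
import random
from collections import deque

class ReplayBuffer:
    """Stores (s, a, r, s') experience tuples up to a fixed capacity C."""
    def __init__(self, capacity):
        self.buf = deque(maxlen=capacity)

    def store(self, s, a, r, s_next):
        self.buf.append((s, a, r, s_next))

    def sample(self, m_c):
        # Uniform sampling without replacement of m_c stored tuples.
        return random.sample(list(self.buf), m_c)

def soft_update(target_params, online_params, tau=0.005):
    """theta_target <- tau * theta_online + (1 - tau) * theta_target."""
    return [tau * p + (1.0 - tau) * t for p, t in zip(online_params, target_params)]

buf = ReplayBuffer(capacity=10000)
for t in range(100):
    buf.store(t, t + 1, 0.0, t + 2)   # dummy (s, a, r, s') tuples
batch = buf.sample(32)                 # m_c = 32
print(len(batch))  # 32
```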
\subsection{Application of DDPG to EE optimization}
In this subsection, we introduce the structure and workflow of our DDPG-based algorithm for the EE maximization. To apply DDPG to the maximization problem, the action and state vectors, the reward function, the constraint-handling normalization, and the overall algorithm flow should be properly designed to follow the DDPG operating rules.
According to the features of the optimization problem (\ref{optimization}), we design the action vector, the state vector and the reward function as follows:
\begin{itemize}
\item [1)]
Action vector:
We select the beamforming vectors $\bm{\omega}_\epsilon^{(t)}$ and the coefficients matrices $\bm{\Phi}^{\tau,(t)}$ to define the action vector at the $t$-th training step.
Note that $\bm{\omega}_\epsilon^{(t)}$ is a complex vector, while the input vectors of neural networks should be real-valued. Thus, we separately take the real part and the imaginary part of $\bm{\omega}_\epsilon^{(t)}$ to construct one part of the action vector. Similarly, we take the real and imaginary parts of the diagonal elements of $\bm{\Phi}^{\tau,(t)}$ to construct the rest of the action vector. The action vector at the $t$-th training step can be presented as follows:
\begin{equation}\label{action space}
\begin{split}
& a^{(t)} = \{ \text{Re}\{\bm{\omega}_\epsilon^{(t)}\}, \text{Im}\{\bm{\omega}_\epsilon^{(t)}\}, \text{Re}\{\bm{{\Phi}}^{\tau,(t)}_{n}\}, \text{Im}\{\bm{\Phi}^{\tau,(t)}_{n}\} \}, \\
& \forall\tau \in \{T,R\}, \ \forall\epsilon \in \Omega, \ \forall n \in \{1,2,\dots,N\},
\end{split}
\end{equation}
where $\text{Re}\{\cdot\}$ and $\text{Im}\{\cdot\}$ denote the real and imaginary parts of a complex number, respectively, and $\bm{\Phi}^{\tau,(t)}_{n}$ denotes the $n$-th diagonal element of $\bm{\Phi}^{\tau,(t)}$.
\item [2)]
State vector:
The state vector should fully present the status of the proposed communication system while accounting for the optimization problem (\ref{optimization}). We design the state vector at the $t$-th training step as follows:
\begin{equation}\label{state space}
\begin{split}
&s^{(t)} = \{ R_\epsilon^{(t)}, ||\bm{\omega}_\epsilon^{(t)}||^2, |\bm{h}_\epsilon^{H,(t)} \bm{\Phi}^{k(\epsilon),(t)} \bm{G}^{(t)} |^2\},\\ &\forall\epsilon \in \Omega,
\end{split}
\end{equation}
\item [3)]
Reward function: Since our aim is to maximize the EE in (\ref{optimization}), it is natural to use the EE as the reward at the $t$-th training step: $r^{(t)} = \eta_{EE}^{(t)}$.
\end{itemize}
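The real/imaginary stacking of item 1) can be sketched as follows (a hedged NumPy sketch; sizes and random values are illustrative):

```python
import numpy as np

# Flatten complex beamformers and STAR-RIS diagonals into the real-valued
# action vector a^(t) (sizes are illustrative).
M, N, K = 4, 8, 4   # antennas, elements, users
rng = np.random.default_rng(4)
crandn = lambda *s: rng.standard_normal(s) + 1j * rng.standard_normal(s)

omegas = crandn(K, M)                 # K beamforming vectors omega_eps^(t)
phi_T = np.sqrt(rng.uniform(0, 1, N)) * np.exp(1j * rng.uniform(0, 2 * np.pi, N))
phi_R = np.exp(1j * rng.uniform(0, 2 * np.pi, N))

action = np.concatenate([
    omegas.real.ravel(), omegas.imag.ravel(),   # Re/Im of all omega_eps
    phi_T.real, phi_T.imag,                     # Re/Im of diag(Phi^T)
    phi_R.real, phi_R.imag,                     # Re/Im of diag(Phi^R)
])
print(action.shape[0] == 2 * K * M + 4 * N)  # True
```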
\begin{algorithm}[t]
\caption{DDPG-based EE maximization}
\label{algorithm show}
\begin{algorithmic}[1]
\STATE Generate the actor network, the critic network, the target actor network and the target critic network with their parameters;
\STATE Initialize the replay buffer $\mathcal{M}$ with capacity $C$;
\FOR{episode $q = 1,2,...,E$}
\STATE Generate the channel $\bm{G}^{(q)}$ and $\bm{h}_{\epsilon}^{(q)},\forall\epsilon\in \Omega$ by (\ref{G channel}) and (\ref{h channel});
\STATE Initialize $\bm{\omega}_{\epsilon}^{(1)}$, $\bm{\Phi}^{\tau,(1)}$ and apply SIC to get $s^{(1)}$ by (\ref{state space});
\FOR{step $t = 1,2,...,S$}
\STATE Select the action $a^{(t)}$ from actor network based on the current state $s^{(t)}$;
\STATE Explore $a^{(t)}$ by adding a random process $\mathcal{N}$;
\STATE Obtain $\bm{\hat{\omega}}_{\epsilon}^{(t)}$ by (\ref{normal w}), and obtain $\bm{\hat{\Phi}}^{\tau,(t)}$ by (\ref{normal phi});
\STATE Calculate the data rate $R_{\epsilon}^{(t)}$ at User $\epsilon$ by (\ref{user rate});
\IF {$R_{\epsilon}^{(t)} < R_{min}, \exists \epsilon \in \Omega$}
\STATE Calculate the reward $\Hat{r}^{(t)}$ with (\ref{EE}), (\ref{reward}) and (\ref{punishment2});
\ELSE
\STATE Calculate the reward $\Hat{r}^{(t)}$ with (\ref{EE}), (\ref{reward}) and (\ref{punishment1});
\ENDIF
\STATE Construct a new state $s^{(t+1)}$ by (\ref{state space});
\STATE Store \{$s^{(t)}$, $a^{(t)}$, $\Hat{r}^{(t)}$, $s^{(t+1)}$\} to the replay buffer $\mathcal{M}$;
\STATE Randomly sample $m_c$ tuples from the replay buffer $\mathcal{M}$, and update the parameters of the critic network and the actor network;
\STATE Softly update the parameters of the target actor network and the target critic network;
\STATE $s^{(t)} = s^{(t+1)}$
\ENDFOR
\ENDFOR
\end{algorithmic}
\end{algorithm}
To solve problem (\ref{optimization}), we propose a DDPG-based joint maximization algorithm shown in Algorithm \ref{algorithm show}.
In this algorithm, each neural network is fully connected and sequentially comprises an input layer, a hidden layer, a batch normalization layer, another hidden layer, and an output layer. Regarding the actor network, the dimension of the input layer depends on the size of the state vector; the rectified linear unit (ReLU) activation function is applied after the batch normalization layer, and the hyperbolic tangent (tanh) function is used in the second hidden layer. For the critic network, the state vector and the action vector are fed into two individual hidden layers followed by two batch normalization layers, whose outputs are concatenated and activated by the ReLU function; the ReLU function is also used in the second hidden layer.
All the hidden layers contain 300 neurons in this paper, and the learning rates of the critic network and the actor network are 0.002 and 0.001, respectively.
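A hedged sketch of the actor's forward pass described above: the batch normalization is reduced to a per-feature standardization for illustration, and all weights are random placeholders rather than trained parameters:

```python
import numpy as np

def actor_forward(state, params):
    """FC -> (BN + ReLU) -> FC -> tanh, mirroring the layer order in the text.
    Batch norm is approximated by a simple standardization here."""
    W1, b1, W2, b2 = params
    h = state @ W1 + b1
    h = (h - h.mean()) / (h.std() + 1e-5)       # stand-in for batch normalization
    h = np.maximum(h, 0.0)                      # ReLU
    return np.tanh(h @ W2 + b2)                 # tanh keeps actions in (-1, 1)

rng = np.random.default_rng(5)
state_dim, hidden, action_dim = 12, 300, 64     # illustrative sizes
params = (rng.standard_normal((state_dim, hidden)) * 0.1, np.zeros(hidden),
          rng.standard_normal((hidden, action_dim)) * 0.1, np.zeros(action_dim))
a = actor_forward(rng.standard_normal(state_dim), params)
print(bool(np.all(np.abs(a) <= 1.0)))  # True: actions lie in (-1, 1)
```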
Since a built-in constraint structure in a neural network can hardly meet the requirements of the constraints in (\ref{optimization}), we design a constraint-handling process to normalize the beamforming vectors and the coefficient matrices in the original action vector, i.e., the output of the actor network.
Note that the output range of the actor network is $(-1,1)$ due to the tanh activation function, so the beamforming vectors in the action vector should be normalized at every training step to correctly calculate the EE and satisfy the constraints. Thus, we normalize the beamforming vectors $\bm{\omega}_\epsilon^{(t)},\forall \epsilon \in \Omega$, as follows:
\begin{align}
\label{normal w} & \hat{\bm{\omega}}_\epsilon^{(t)} = \sqrt{\lambda_\epsilon^{(t)}} \bm{\omega}_\epsilon^{(t)},
\end{align}
where
\begin{subnumcases}{}
\lambda_\epsilon^{(t)} = \frac{\hat{P}_\epsilon^{(t)}}{P_\epsilon^{(t)}},\\
P_\epsilon^{(t)} = ||\bm{\omega}_\epsilon^{(t)}||^2,\\
\hat{P}_\epsilon^{(t)} = \frac{||\bm{\omega}_\epsilon^{(t)}||^2}{(A+B)||\bm{\omega}_{tanh}^{max}||^2} \cdot P_{max}, \\
|\text{Re}\{\bm{\omega}_{tanh}^{max}\}| = |\text{Im}\{\bm{\omega}_{tanh}^{max}\}| = \bm{1}_{M\times1},
\end{subnumcases}
where $||\bm{\omega}_{tanh}^{max}||^2$ denotes the maximum achievable value of $||\bm{\omega}_\epsilon^{(t)}||^2$ within the range of the tanh function, $P_\epsilon^{(t)}$ denotes the transmission power of the beamforming vector $\bm{\omega}_{\epsilon}^{(t)}$ obtained from the action vector, and $\hat{P}_\epsilon^{(t)}$ denotes the transmission power of the normalized beamforming vector $\hat{\bm{\omega}}_{\epsilon}^{(t)}$. $\hat{P}_\epsilon^{(t)}$ is a fraction of $P_{max}$, which further satisfies:
\begin{align}
\label{satisfed power} P_T^{(t)} = \sum\limits_{\epsilon \in \Omega}^{}\hat{P}_\epsilon^{(t)} \le P_{max},
\end{align}
where (\ref{satisfed power}) guarantees constraint (\ref{power contrl}). Thus, based on (\ref{normal w}), we create a new normalized beamforming vector using the power ratio $\lambda_\epsilon^{(t)}$. Meanwhile, $\hat{\bm{\omega}}_{\epsilon}^{(t)}$ maintains the same direction as $\bm{\omega}_{\epsilon}^{(t)}$.
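The power normalization in (\ref{normal w}) can be checked numerically. This is a hedged sketch: the tanh-bounded raw beamformers are simulated with uniform draws, and $K = A+B$ denotes the number of users:

```python
import numpy as np

M, K, P_max = 4, 4, 1.0          # antennas, users (A+B), max BS power (illustrative)
rng = np.random.default_rng(6)
raw = rng.uniform(-1, 1, (K, M)) + 1j * rng.uniform(-1, 1, (K, M))  # omega_eps^(t)

P_tanh_max = 2.0 * M             # ||omega||^2 when |Re| = |Im| = 1 on all M entries
P_raw = np.sum(np.abs(raw) ** 2, axis=1)            # P_eps^(t)
P_hat = P_raw / (K * P_tanh_max) * P_max            # normalized per-user power
lam = P_hat / P_raw                                 # lambda_eps^(t)
norm = np.sqrt(lam)[:, None] * raw                  # hat omega_eps^(t), same direction

P_T = np.sum(np.abs(norm) ** 2)
print(bool(P_T <= P_max))  # True: total power never exceeds P_max
```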
Similarly, the normalized coefficient matrices $\hat{\bm{\Phi}}^{\tau,(t)}$, $\forall \tau \in \{T, R\}$, are obtained as follows:
\begin{equation}\label{normal phi}
\begin{split}
& \hat{\bm{\Phi}}^{\tau,(t)} = \text{diag}(\sqrt{{\hat{\beta}}_1^{^{\tau,(t)}}}e^{j{\hat{\theta}}_1^{^{\tau,(t)}}},\sqrt{{\hat{\beta}}_2^{^{\tau,(t)}}}e^{j{\hat{\theta}}_2^{^{\tau,(t)}}}, \cdots,\\
&\sqrt{{\hat{\beta}}_N^{^{\tau,(t)}}}e^{j{\hat{\theta}}_N^{^{\tau,(t)}}}),
\end{split}
\end{equation}
where $\forall n \in\{1,2,\cdots,N\}$,
\begin{subnumcases}{}
\label{theta call}\hat{\theta}_n^{\tau,(t)} = \arctan(\frac{\text{Im}\{\bm{\Phi}^{\tau,(t)}_{n} \}}{\text{Re}\{\bm{\Phi}^{\tau,(t)}_{n} \}}), \\
\label{beta call}\hat{\beta}^{\tau,(t)}_{n} = \frac{|\bm{\Phi}^{\tau,(t)}_{n}|^2}{|\bm{\Phi}^{T,(t)}_n|^2 + |\bm{\Phi}^{R,(t)}_n|^2},
\end{subnumcases}
where (\ref{theta call}) guarantees that the polar form of $\bm{\Phi}^{\tau,(t)}_{n}$ maintains the same argument as its rectangular form, and (\ref{beta call}) guarantees that constraint (\ref{beta contrl}) is satisfied, i.e., $\hat{\beta}^{T,(t)}_{n} + \hat{\beta}^{R,(t)}_{n} = 1$.
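The coefficient normalization in (\ref{theta call}) and (\ref{beta call}) can be sketched as follows (hedged; `arctan2` is used here so the recovered phase keeps the correct quadrant):

```python
import numpy as np

N = 8  # number of elements (illustrative)
rng = np.random.default_rng(7)
raw_T = rng.uniform(-1, 1, N) + 1j * rng.uniform(-1, 1, N)  # raw diag(Phi^T)
raw_R = rng.uniform(-1, 1, N) + 1j * rng.uniform(-1, 1, N)  # raw diag(Phi^R)

# Phase: arctan of Im/Re with quadrant information preserved.
theta_T = np.arctan2(raw_T.imag, raw_T.real)
theta_R = np.arctan2(raw_R.imag, raw_R.real)

# Amplitudes: renormalize so beta_T + beta_R = 1 for every element.
denom = np.abs(raw_T) ** 2 + np.abs(raw_R) ** 2
beta_T = np.abs(raw_T) ** 2 / denom
beta_R = np.abs(raw_R) ** 2 / denom

print(bool(np.allclose(beta_T + beta_R, 1.0)))  # True: constraint satisfied
```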
Furthermore, a punishment rule is designed for the reward at the $t$-th training step, which can be expressed as:
\begin{align}\label{reward}
\hat{r}^{(t)} = \zeta \ r^{(t)},
\end{align}
where $\zeta$ is a punishment factor for the reward, which can be presented as:
\begin{subnumcases}{}
\zeta = 1,&$\text{if}\ R_{\epsilon}^{(t)} \geqslant R_{min}, \forall \epsilon \in \Omega$ \label{punishment1},\\
\zeta = -|R_{\epsilon,min}^{(t)}-R_{min}|,&$\text{if}\ R_{\epsilon}^{(t)} < R_{min}, \exists \epsilon \in \Omega$ \label{punishment2},
\end{subnumcases}
where $R_{\epsilon,min}^{(t)}$ denotes the minimum data rate among all the users at the $t$-th training step. (\ref{punishment2}) makes the reward negative whenever any user's data rate falls below the requirement, which drives the agent toward satisfying constraint (\ref{target rate}), while (\ref{punishment1}) keeps the reward equal to the EE when all the users' data rates meet the requirement. With (\ref{reward}), the DDPG model adjusts its parameters to avoid negative rewards and to achieve a higher EE through training.
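The punishment rule (\ref{reward})--(\ref{punishment2}) amounts to a small wrapper around the raw EE reward (hedged sketch; `punished_reward` is an illustrative name):

```python
def punished_reward(ee, rates, r_min):
    """Reward = zeta * EE: zeta = 1 if every user meets r_min, otherwise
    zeta = -|worst rate - r_min|, yielding a negative penalized reward."""
    if all(r >= r_min for r in rates):
        return ee                              # zeta = 1
    return -abs(min(rates) - r_min) * ee       # zeta = -|R_min^(t) - R_min|

print(punished_reward(5.0, [0.2, 0.3], 0.1))       # 5.0  (all users satisfied)
print(punished_reward(5.0, [0.05, 0.3], 0.1) < 0)  # True (penalized)
```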
\section{NUMERICAL RESULTS}
In this section, we present the performance of the proposed joint maximization algorithm. Specifically, based on related works\cite{2021arXiv210401421M}, we model the channel gains $\bm{G}^{(q)}$ and $\bm{h}_{\epsilon}^{(q)},\forall\epsilon \in \Omega$, as Rician fading channels:
\begin{subequations}
\begin{align}
\label{G channel} & \bm{G}^{(q)} = \sqrt{\frac{\rho_0}{d_G^{\alpha_{BR}}}}(\sqrt{\frac{K_{BR}}{1+K_{BR}}} \bm{G}^{LoS} + \sqrt{\frac{1}{1+K_{BR}}} \bm{G}^{nLoS}), \\
\label{h channel} & \bm{h}_{\epsilon}^{(q)} = \sqrt{\frac{\rho_0}{d_{\epsilon}^{\alpha_{RU}}}}(\sqrt{\frac{K_{RU}}{1+K_{RU}}} \bm{h}_{\epsilon}^{LoS} + \sqrt{\frac{1}{1+K_{RU}}} \bm{h}_{\epsilon}^{nLoS}),
\end{align}
\end{subequations}
where $\rho_0$ denotes the path loss at a reference distance of 1 meter, $\alpha_{BR}$ and $\alpha_{RU}$ are the path loss exponents, and $d_G$ and $d_{\epsilon}$ denote the distance between the BS and the STAR-RIS and the distance between the STAR-RIS and the users, respectively. $K_{BR}$ and $K_{RU}$ denote the Rician factors. $\bm{G}^{LoS}$ and $\bm{h}_{\epsilon}^{LoS}$ are the line-of-sight (LoS) components, while $\bm{G}^{nLoS}$ and $\bm{h}_{\epsilon}^{nLoS}$ are the non-line-of-sight (NLoS) components, both following Rayleigh fading. It is worth pointing out that $\bm{G}^{(q)}$ and $\bm{h}_{\epsilon}^{(q)}$ are regenerated at every training episode to simulate time-varying channels.
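The Rician model (\ref{G channel})--(\ref{h channel}) with all-ones LoS components (as in the simulation settings) can be sketched as follows; the function name and defaults are illustrative:

```python
import numpy as np

def rician_channel(shape, distance, alpha, K_factor, rho0_db=-30.0, rng=None):
    """Path loss sqrt(rho0 / d^alpha) times a K-weighted mix of a LoS
    component (all ones here) and a Rayleigh-faded NLoS component."""
    rng = rng or np.random.default_rng()
    rho0 = 10.0 ** (rho0_db / 10.0)
    path_loss = np.sqrt(rho0 / distance ** alpha)
    los = np.ones(shape, dtype=complex)
    nlos = (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) / np.sqrt(2)
    return path_loss * (np.sqrt(K_factor / (1 + K_factor)) * los
                        + np.sqrt(1.0 / (1 + K_factor)) * nlos)

# Example: N = 8 elements, M = 4 antennas, d = 50 m, alpha = 2.2, K = 10.
G = rician_channel((8, 4), distance=50.0, alpha=2.2, K_factor=10.0,
                   rng=np.random.default_rng(8))
print(G.shape)  # (8, 4)
```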
Furthermore, the main simulation parameters are listed in Table \ref{table_p}.
\begin{table}[htb]
\begin{center}
\caption{SIMULATION PARAMETERS}
\label{table_p}
\begin{tabular}{|c|c||c|c|}
\hline
Parameter&Value&Parameter&Value\\
\hline
$d_G$&50 meters&$d_k$&(5,10) meters\\
\hline
$\rho_0$&-30 dB& $\gamma$ & 0.35\\
\hline
$P_c$&40 dBm& $\bm{G}^{LoS}, \bm{h}_k^{LoS}$ & 1\\
\hline
$\alpha_{BR}, \alpha_{RU}$ & 2.2, 2.5 & $\sigma^2$ &-80 dBm\\
\hline
$K_{BR},K_{RU}$&10& $B_w$ & 180 kHz\\
\hline
$C$ & 10000 & $m_c$ & 32\\
\hline
\end{tabular}
\end{center}
\end{table}
Fig. \ref{Rewards versus episode} demonstrates the convergence of the proposed algorithm over the training episodes, considering the time-varying channel with $P_{max} = 20$ dBm and $P_{max} = 30$ dBm at the BS. Each side of the STAR-RIS has two users, and the minimum data rate requirement is set to 0.1 bps/Hz. From Fig. \ref{Rewards versus episode}, we can see that the rewards rise dramatically and then remain at a relatively high value as the episodes increase for both transmission power levels. As one of the benchmarks in our simulation, a random-coefficient scheme for the STAR-RIS maintains poor performance as the number of episodes increases, which indicates that our proposed algorithm can significantly improve the EE of the proposed downlink network.
\begin{figure}[t]
\centering
\includegraphics[width=1\columnwidth]{ee-episode.png}
\caption{Rewards versus episodes with $M = 10$, $N = 30$, $A = B = 2$, $R_{min}$ = 0.1 bps/Hz, and different transmission power at the BS.}
\label{Rewards versus episode}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=0.9\columnwidth]{ee-power.png}
\caption{EE versus transmission power at the BS with $R_{min}$ = 0.1 bps/Hz, $N = 30$, $A = B = 2$, and different numbers of antennas at the BS.}
\label{EE versus power}
\end{figure}
Fig. \ref{EE versus power} shows the EE versus the transmission power at the BS for a variable number of antennas at the BS. The number of elements at the STAR-RIS is 30, and the user numbers and the data rate requirement are the same as in Fig. \ref{Rewards versus episode}. From Fig. \ref{EE versus power}, we can see that, as the maximum transmission power increases, the EE increases to a peak value and then saturates, which indicates that the EE cannot grow continually with the constant growth of power at the BS. Moreover, the performance improvement becomes smaller as the number of antennas increases. This is because the feasible domain of each channel between antennas gets narrower under the same power.
\begin{figure}[t]
\centering
\includegraphics[width=1\columnwidth]{ee-ris.png}
\caption{EE versus the number of elements at the STAR-RIS with $R_{min} = 0.1$ bps/Hz, $A = B = 2$, and different numbers of antennas at the BS}
\label{EE versus elements}
\end{figure}
In Fig. \ref{EE versus elements}, we present the EE performance versus the number of elements at the STAR-RIS with a transmission power of 20 dBm at the BS. It can be observed that the system EE increases with the number of elements at the STAR-RIS.
\section{Conclusion}
In this paper, we have studied a joint EE maximization problem for a NOMA-MISO assisted STAR-RIS downlink network. We have designed a DDPG-based algorithm to jointly optimize the beamforming vectors at the BS and the coefficients matrices at the STAR-RIS to maximize EE. The numerical results have validated the effectiveness and convergence of the proposed algorithm considering the time-varying channel.
Moreover, we have analyzed the trend of the EE for different transmission powers at the BS and various numbers of elements at the STAR-RIS.
\bibliographystyle{IEEEtran}
\section{Introduction}
In the theory of semigroups on the Hilbert space $L^2$ generated by bilinear forms, there exists a well-known characterisation -- in terms of the gene\-rating forms -- of the situation when one semigroup dominates another one. It is a consequence of another characterisation, namely of the situation when a semigroup leaves a closed and convex subset of a Hilbert space invariant. The Beurling-Deny criteria which characterise positivity and $L^\infty$-contractivity of a semigroup are also consequences of this abstract result. In this short note, we review domination of semigroups and state two main results, which basically can be read off from the literature but which are perhaps new in this generality:~a representation theorem for regular forms associated with dominated semigroups (compare with the proof of \cite[Theorem~4.1]{ArWa03b}) and a relation between locality and positivity of the dominated semigroup (compare with \cite[Theorem 4.3]{Ak18}).
\subsection*{A characterisation of domination}
Throughout, we let $(\Omega ,{\mathfrak A} ,\mu )$ be a topological measure space. By this we mean that $\Omega$ is a topological space, ${\mathfrak A}$ is the Borel $\sigma$-algebra, and $\mu$ is a (positive) Borel measure on $(\Omega ,{\mathfrak A} )$. Without loss of generality, we assume that $\mu$ has full support in the sense that there exists no nonempty open subset $U\subseteq\Omega$ which has zero measure. Otherwise, we replace $\Omega$ with $\Omega\setminus U$, where $U$ is the largest open subset which has measure zero. All vector spaces in this note are {\em complex} vector spaces.
We now state our main result:
\begin{theorem} \label{main}
Let ${\mathfrak a} :D({\mathfrak a} )\times D({\mathfrak a} )\to{\mathbb C}$ and $\widehat{{\mathfrak a}} : D(\widehat{{\mathfrak a}} ) \times D(\widehat{{\mathfrak a}} ) \to{\mathbb C}$ be two sesquilinear, Hermitean, closed, accretive, and densely defined forms on $L^2_\mu (\Omega )$ such that the associated self-adjoint $C_0$-semigroups (denoted by $T$ and $\widehat{T}$, respectively) are real. Assume that the semigroup $\widehat{T}$ is positive, and that the form ${\mathfrak a}$ is $\mathcal{A}$-regular for some closed, Hermitean, and unital subalgebra $\mathcal{A}$ of $C^b (\Omega )$. Further, let $\widetilde{\Omega}$ denote the compactification of $\Omega$ associated with $\mathcal A$.
Then $T$ is dominated by $\widehat{T}$, in the sense that
\[
|T(t)u| \leq \widehat{T} (t)|u|,\qquad \text{ for every } t\geq 0 \text{ and every } u\in L^2_\mu (\Omega ),
\]
if and only if $D({\mathfrak a} )$ is an ideal of $D(\widehat{{\mathfrak a}} )$ and there exists a (positive) Hermitean Borel measure $\nu$ on $\widetilde{\Omega}\times \widetilde{\Omega}$ such that $\nu$ is absolutely continuous with respect to the ${\mathfrak a}$-capacity, the equality
\begin{equation} \label{eq.domination}
{\mathfrak a} (u,v) = \widehat{{\mathfrak a}} (u,v) + \int_{\widetilde{\Omega}\times \widetilde{\Omega}} u(x) \overline{v(y)} \; d\nu (x,y)
\end{equation}
holds for every $u,v\in D({\mathfrak a} )$, and
\begin{equation} \label{eq.domination.2}
{\rm Re }\int_{\widetilde{\Omega}\times \widetilde{\Omega}} u(x) \overline{v(y)} \; d\nu (x,y) \geq {\rm Re }\ \widehat{{\mathfrak a}} (|u|,|v|) - {\rm Re }\ \widehat{{\mathfrak a}} (u,v)
\end{equation}
for every $u,v\in D({\mathfrak a} )$ which satisfy $u\bar{v}\geq 0$ on $\Omega$.
\end{theorem}
The rest of this section will be dedicated to explaining the terminology used in the statement and that we need to prove Theorem~\ref{main}. Then, in Section~\ref{section:proof} we provide the proof of Theorem~\ref{main}. In Section~\ref{section:locality}, we turn our attention to local forms. Finally, we leave the reader with some additional remarks in Section~\ref{section:remarks}.
\subsection*{Notation and Preliminaries}
If $u$ is a vector in a Banach lattice $E$, then we use the usual notation $u^+$ and $u^-$ to denote the positive and negative parts of $u$ respectively. Recall that $u=u^+-u^-$ and the modulus of $u$ satisfies $|u|=u^++u^-$.
Let ${\mathfrak a} : D({\mathfrak a} ) \times D({\mathfrak a} ) \to{\mathbb C}$ be a sesquilinear form, where $D({\mathfrak a} )$ is a dense subspace of $L^2_\mu (\Omega )$ known as the {\em form domain}. We say that the form ${\mathfrak a}$ is {\em Hermitean} if for every $u$, $v\in D({\mathfrak a} )$, \[
{\mathfrak a} (u,v) = \overline{{\mathfrak a} (v,u)} .
\]
The form ${\mathfrak a}$ is said to be {\em accretive} if
\[
{\rm Re }\,{\mathfrak a} (u):= {\rm Re }\,{\mathfrak a} (u,u) \geq 0 ,
\]
for every $u\in D({\mathfrak a} )$,
and an accretive form is called {\em closed} if the form domain $D({\mathfrak a} )$ is complete with respect to the norm
\[
\|u\|_{D({\mathfrak a})}:= \sqrt{{\rm Re }\ {\mathfrak a}(u) + \|u\|_{L^2_\mu}^2}\qquad (u\in D({\mathfrak a})).
\]
Clearly, $D({\mathfrak a} )$ embeds continuously into $L^2_\mu(\Omega)$ when equipped with this norm.
Given two Hermitean forms ${\mathfrak a}$ and $\widehat{{\mathfrak a}}$, we say that $D({\mathfrak a} )$ is an {\em ideal} of $D(\widehat{{\mathfrak a}})$ if
\begin{enumerate}[\upshape (a)]
\item the implication $u\in D({\mathfrak a})\Rightarrow |u| \in D(\widehat{{\mathfrak a}})$ holds and
\item whenever $u\in D(\widehat{{\mathfrak a}})$ and $v\in D({\mathfrak a} )$, then $0\leq u\leq v$ implies $u\in D({\mathfrak a} )$.
\end{enumerate}
For every Hermitean, accretive, and closed sesquilinear form, the operator given by
\begin{align*}
D(A) & := \{ u\in D ({\mathfrak a} ) |\ \exists\ f\in L^2_\mu (\Omega )\ \forall\ v\in D({\mathfrak a} ) : {\mathfrak a} (u,v) = \langle f,v\rangle_{L^2_\mu} \} , \\
Au & := f ,
\end{align*}
is self-adjoint and positive semi-definite. This operator is the negative generator of a self-adjoint contraction semigroup $T := (T(t))_{t\geq 0}$. Actually, there is a one-to-one correspondence between Hermitean, closed, accretive, and densely defined forms, the self-adjoint and positive semi-definite operators, and the self-adjoint contraction semigroups. For all these facts, we refer the reader to standard monographs, for instance \cite[Chapter~XVII]{DaLi92V} or \cite{Ou04, ReSi78IV}. Note that, by a semigroup, we always mean a $C_0$-semigroup.
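In finite dimensions, where every Hermitean, accretive form is of the type ${\mathfrak a}(u,v) = \langle Au, v\rangle$ with a positive semi-definite matrix $A$, this correspondence can be checked directly; the following sketch (a numerical illustration with an arbitrarily generated matrix, not part of the theory above) verifies that $e^{-tA}$ is a self-adjoint contraction.

```python
import numpy as np
from scipy.linalg import expm

# Finite-dimensional sketch: a Hermitean, accretive form a(u, v) = <Au, v>
# with A symmetric positive semi-definite generates T(t) = exp(-tA),
# a self-adjoint contraction semigroup.
rng = np.random.default_rng(1)
M = rng.standard_normal((5, 5))
A = M.T @ M                                   # symmetric, positive semi-definite

for t in (0.1, 1.0, 10.0):
    T = expm(-t * A)
    assert np.allclose(T, T.T)                # T(t) is self-adjoint
    assert np.linalg.norm(T, 2) <= 1 + 1e-12  # T(t) is a contraction
print("exp(-tA) is a self-adjoint contraction for all tested t")
```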
The semigroup $T$ is said to be {\em real} if each operator $T(t)$ leaves the real part of $L^2_\mu(\Omega)$ invariant, and it is called {\em positive} if each $T(t)$ leaves the positive cone of $L^2_\mu (\Omega )$ invariant. Clearly, a positive semigroup is always real. By \cite[Proposition~2.5]{Ou04}, the semigroup $T$ is real if and only if
\[
\forall\ u\in D({\mathfrak a} ) : {\rm Re }\ u \in D({\mathfrak a} ) \text{ and } {\mathfrak a} ({\rm Re }\ u ,\text{Im }u ) \in{\mathbb R} .
\]
Moreover, by the well-known {\em first Beurling-Deny criterion} (see, for instance, \cite[Theorem~1.3.2]{Da89}, \cite[Theorem~XIII.50]{ReSi78IV}, \cite[Theorem~2.7]{Ou04}), positivity of a real semigroup $T$ is characterised by the property
\[
\forall\ u \in D({\mathfrak a}) : \, |u| \in D({\mathfrak a}) \text{ and } {\mathfrak a}(|u|) \leq {\mathfrak a}(u) .
\]
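In the finite-dimensional toy case of a weighted graph Laplacian, both sides of the first Beurling-Deny criterion can be observed numerically; the script below (our illustration with assumed random edge weights) checks the inequality ${\mathfrak a}(|u|)\leq{\mathfrak a}(u)$ together with the entrywise positivity of $e^{-tL}$.

```python
import numpy as np
from scipy.linalg import expm

# Toy check of the first Beurling-Deny criterion for a weighted graph
# Laplacian L (assumed random weights): the form a(u) = <Lu, u> satisfies
# a(|u|) <= a(u), and accordingly exp(-tL) is entrywise nonnegative.
rng = np.random.default_rng(0)
n = 6
W = rng.random((n, n)); W = (W + W.T) / 2.0
np.fill_diagonal(W, 0.0)                      # symmetric edge weights
L = np.diag(W.sum(axis=1)) - W                # graph Laplacian

a = lambda v: v @ L @ v                       # a(u) = sum_{i<j} w_ij (u_i - u_j)^2
u = rng.standard_normal(n)
assert a(np.abs(u)) <= a(u) + 1e-12           # a(|u|) <= a(u)
assert (expm(-0.7 * L) >= -1e-12).all()       # the semigroup is positive
print("first Beurling-Deny criterion verified on the toy graph")
```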
Let ${\mathcal A}$ be a closed, unital, and Hermitean subalgebra of $C^b (\Omega )$, the space of bounded continuous functions. We say that a closed, accretive, sesquilinear form ${\mathfrak a}$ is {\em $\mathcal A$-regular} if
\begin{align*}
& D({\mathfrak a} ) \cap {\mathcal A} \text{ is dense in the Hilbert space } D({\mathfrak a} ) \text{ and} \\
& {\rm span}\,\big((D({\mathfrak a} ) \cap {\mathcal A}) \cup \{ 1\}\big) \text{ is dense in } \mathcal{A} \text{ in the supremum norm.}
\end{align*}
In this note, we use an abstract compactification $\widetilde{\Omega}$ of $\Omega$, an abstract boundary of $\Omega$, and a capacity on $\widetilde{\Omega}$, which were recently considered in the case of nonlinear Dirichlet forms by Claus \cite{Cl21}. Let us explain this in more detail. First, by the Gelfand representation theorem, the Banach algebra $\mathcal{A}$ is isometrically isomorphic to the Banach algebra $C(\widetilde{\Omega})$ for some compact space $\widetilde{\Omega}$. This compact space $\widetilde{\Omega}$ is a compactification of $\Omega$. The construction of the above isometric isomorphism via the so-called Gelfand space or Gelfand spectrum shows that there is a natural, continuous mapping
\begin{align*}
\iota : \Omega & \to \widetilde{\Omega} , \\
x & \mapsto \iota (x) ,
\end{align*}
such that the inverse of the above mentioned isometric isomorphism is given by
\begin{align*}
J : C(\widetilde{\Omega}) & \to \mathcal{A} , \\
f & \mapsto f\circ\iota .
\end{align*}
The mapping $\iota$ allows us to define a push-forward Borel measure $\widetilde{\mu}$ on $\widetilde{\Omega}$ by setting
\[
\widetilde{\mu} (B) := \mu (\iota^{-1} (B))
\]
for every Borel set $B\subseteq \widetilde{\Omega}$.
Then the mapping $J$ above extends to an isometric isomorphism
\begin{align*}
J : L^2_{\widetilde{\mu}} (\widetilde{\Omega}) & \to L^2_\mu (\Omega ) , \\
f & \mapsto f\circ\iota .
\end{align*}
This means that in the general setting above, if the form ${\mathfrak a}$ is $\mathcal A$-regular, then we may assume without loss of generality that $\Omega$ (actually, $\widetilde{\Omega}$) is a compact, topological space, $D({\mathfrak a} )$ (actually, $J^{-1}D({\mathfrak a} )$) is a dense subspace of $L^2 (\Omega )$, and $D({\mathfrak a} )\cap C(\Omega)$ is dense in $D({\mathfrak a} )$ and a fortiori in $L^2_\mu (\Omega )$.
In order to prove Theorem~\ref{main}, we freely use the theory of tensor norms. For this, we refer to the monograph \cite{DeFl93}.
If ${\mathfrak a}$ is a closed, accretive form, then we define the {\em ${\mathfrak a}$-capacity} on the product space $\widetilde{\Omega}\times\widetilde{\Omega}$ by setting, for every subset $B\subseteq\widetilde{\Omega}\times\widetilde{\Omega}$,
\[
{\rm cap} (B) := \inf \{ \| w\|_{D({\mathfrak a} ) \otimes_\pi D({\mathfrak a} )} |\ w\geq 1 \,\, \mu\text{-a.e. on an open set } U\supseteq B\} .
\]
Here, $\| \cdot\|_{D({\mathfrak a} ) \otimes_\pi D({\mathfrak a} )}$ is the usual projective tensor norm, which on the algebraic tensor product is given by
\[
\| w\|_{D({\mathfrak a} ) \otimes_\pi D({\mathfrak a} )} := \inf \left\{ \sum_{i=1}^n \| u_i\|_{D({\mathfrak a} )} \, \| v_i \|_{D({\mathfrak a} )} \big|\ n\in{\mathbb N} \text{ and } \sum_{i=1}^n u_i \otimes v_i = w \right\}.
\]
The projective tensor product is the completion of the algebraic tensor prod\-uct with respect to this norm. Our definition of the capacity is perhaps unusual in two aspects: first, our assumption on the form is minimal (closed, accretive) in order to define some capacity, and second, we do not define the capacity on $\widetilde{\Omega}$ but on the product space $\widetilde{\Omega}\times\widetilde{\Omega}$. However, when dealing with non-local forms, it is natural to define the capacity on the product space.
A Borel measure $\nu$ on the product space $\widetilde{\Omega}\times\widetilde{\Omega}$ is called {\em symmetric} if $\nu (B) = \nu (\tilde{B} )$ for every Borel set $B$, where $\tilde{B} := \{ (x,y) \in\widetilde{\Omega}\times\widetilde{\Omega} | (y,x)\in B\}$ is the reflection of $B$. We say that a subset $B\subseteq\widetilde{\Omega}\times\widetilde{\Omega}$ is {\em ${\mathfrak a}$-polar} if ${\rm cap} (B) = 0$ and the measure $\nu$ is said to be {\em absolutely continuous with respect to the ${\mathfrak a}$-capacity} if $\nu (B) = 0$ for every ${\mathfrak a}$-polar Borel set $B\subseteq\widetilde{\Omega}\times\widetilde{\Omega}$. Let us mention that if the capacity and the measure $\nu$ were only defined on $\widetilde{\Omega}$, then one could also define absolute continuity of the measure with respect to the capacity; this property was called {\em admissibility} in \cite{ArWa03b}.
\section{Proof of the main result}
\label{section:proof}
In this section, we provide a proof of Theorem~\ref{main}.
But first, we make a brief remark on the boundedness assumption \eqref{eq.domination.2}.
\begin{remark} \label{rem.dom}
In some typical examples -- for instance, forms associated with the Laplace operator with local boundary conditions -- the sesquilinear form $\widehat{{\mathfrak a}}$ satisfies $\widehat{{\mathfrak a}} (u,v) = \widehat{{\mathfrak a}} (|u|,|v|)$ for every real $u$, $v\in D({\mathfrak a})$ such that $uv\geq 0$. In such examples, the boundedness condition \eqref{eq.domination.2} implies the condition
\begin{equation} \label{eq.domination.3}
\int_{\widetilde{\Omega}\times\widetilde{\Omega}} u(x) v(y) \; d\nu (x,y) \geq 0 \text{ for every real } u,v\in D({\mathfrak a} ) \text{ such that } uv\geq 0 .
\end{equation}
Note that the product $uv$ at the end of \eqref{eq.domination.3} is a function on $\widetilde{\Omega}$, while the tensor product $u\otimes v$ appearing under the integral is a function on $\widetilde{\Omega}\times\widetilde{\Omega}$. The positivity of the product $uv$ only means that the tensor product $u\otimes v$ is positive on the diagonal $\Delta := \{ (x,y)\in\widetilde{\Omega}\times\widetilde{\Omega} | x=y\}$. The condition \eqref{eq.domination.3} can be rewritten in the form
\begin{equation} \label{eq.domination.4}
\begin{split}
\int_\Delta u(x) v(x) \; d\nu (x,x) & \geq - \int_{\widetilde{\Omega}\times\widetilde{\Omega} \setminus\Delta} u(x) v(y) \; d\nu (x,y) \\
& \text{ for every real } u,v\in D({\mathfrak a} ) \text{ such that } uv\geq 0 .
\end{split}
\end{equation}
We say that the measure $\nu$ is {\em diagonally dominant} if it satisfies \eqref{eq.domination.4}. Both diagonal dominance and the condition \eqref{eq.domination.2} are certain boundedness assumptions on $\nu$.
Note that there are positive measures $\nu$ which are not diagonally dominant. The sesquilinear form $\mathfrak{b} : H^1 (0,1) \times H^1 (0,1)\to{\mathbb C}$ given by
\[
\mathfrak{b} (u,v) = \int_0^\frac12 u \cdot \int_\frac12^1 \bar{v} + \int_0^\frac12 \bar{v} \cdot \int_\frac12^1 u
\]
is well defined, continuous, and positive on $H^1 (0,1)$, it is of the form of the integral term in \eqref{eq.domination.3} for the Lebesgue measure $\nu$ restricted to the union of the squares $\left(\left[0,\frac12\right]\times \left[\frac12 ,1\right]\right) \cup \left([\frac12 ,1]\times [0,\frac12 ]\right)$, but this measure $\nu$ is not diagonally dominant in the sense of condition \eqref{eq.domination.4}; indeed this can be seen by taking $u = v$ where $u(x) = 1$ for $x\in \left[0,\frac12\right]$ and $u(x) = -1$ for $x\in \left(\frac12 , 1\right]$.
\end{remark}
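The counterexample of Remark~\ref{rem.dom} can be confirmed numerically; the following sketch evaluates $\mathfrak{b}(u,u)$ for the indicated sign-changing $u$ and obtains a negative value, although $u\cdot u\geq 0$ everywhere.

```python
from scipy.integrate import quad

# Numerical confirmation of the counterexample: the real form
#   b(u, v) = int_0^{1/2} u * int_{1/2}^1 v + int_0^{1/2} v * int_{1/2}^1 u
# takes a negative value on a pair with u*v >= 0 everywhere.
def b(u, v):
    iu1, _ = quad(u, 0.0, 0.5); iu2, _ = quad(u, 0.5, 1.0)
    iv1, _ = quad(v, 0.0, 0.5); iv2, _ = quad(v, 0.5, 1.0)
    return iu1 * iv2 + iv1 * iu2

u = lambda x: 1.0 if x <= 0.5 else -1.0   # the function from the remark
val = b(u, u)                             # = 2 * (1/2) * (-1/2)
print(val)                                # approximately -0.5
```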
Next, we outsource the proof of the necessity of the measure $\nu$ to be absolutely continuous with respect to the ${\mathfrak a}$-capacity to the following lemma.
\begin{lemma} \label{lem.abs}
Let the sesquilinear form ${\mathfrak a}$, the algebra ${\mathcal A}$, and the compactification $\widetilde{\Omega}$ be as in Theorem~\ref{main}. Further, let $\nu$ be a positive Borel measure on $\widetilde{\Omega}\times \widetilde{\Omega}$ and let the form $\mathfrak{b} : D({\mathfrak a} ) \times D({\mathfrak a} ) \to{\mathbb C}$ given by
\[
\mathfrak{b} (u,v) = \int_{\widetilde{\Omega}\times \widetilde{\Omega}} u(x) \overline{v(y)} \; d\nu (x,y) \qquad (u,v\in D({\mathfrak a} ))
\]
be well defined.
Suppose $\mathfrak{b}$ is continuous on $D({\mathfrak a} )$, i.e., there exists $C>0$ such that the inequality $|\mathfrak{b} (u,v)|\leq C\, \| u\|_{D({\mathfrak a} )} \| v\|_{D({\mathfrak a} )}$ holds for every $u$, $v\in D({\mathfrak a} )$. Then $\nu (B) \leq C\, {\rm cap} (B)$ for every $B\subseteq \widetilde{\Omega}\times\widetilde{\Omega}$. In particular, $\nu$ is absolutely continuous with respect to the ${\mathfrak a}$-capacity.
\end{lemma}
\begin{proof}
Firstly, note that well-definedness of $\mathfrak{b}$ means that the integral in $\mathfrak{b}$ is absolutely convergent for every $u$, $v\in D({\mathfrak a} )$. Let $B\subseteq \widetilde{\Omega}\times\widetilde{\Omega}$. Then, for every non-negative $w\in D({\mathfrak a} ) \otimes_{alg} D({\mathfrak a} )$ with $w = \sum_{i=1}^n u_i \otimes v_i$ and $w \geq 1$ on a neighbourhood of $B$, we have
\begin{align*}
\nu (B) & \leq \int_{\widetilde{\Omega}\times\widetilde{\Omega}} w(x,y) \; d\nu (x,y) \\
& = \sum_{i=1}^n \int_{\widetilde{\Omega}\times\widetilde{\Omega}} u_i(x) v_i(y) \; d\nu (x,y) \\
& \leq \sum_{i=1}^n C\, \| u_i\|_{D({\mathfrak a} )} \, \| v_i\|_{D({\mathfrak a} )} .
\end{align*}
Taking the infimum over all appropriate decompositions of $w$, the assertion $\nu (B) \leq C\, {\rm cap} (B)$ follows.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{main}]
Recall that, by assumption, both the semigroups $T$ and $\widehat{T}$ are real. Now, by \cite[Theorem~2.21]{Ou04}, the semigroup $T$ is dominated by $\widehat{T}$ if and only if $D({\mathfrak a} )$ is an ideal of $D(\widehat{{\mathfrak a}})$ and
\begin{equation} \label{eq.domination.char}
{\rm Re }\ \widehat{{\mathfrak a}} (|u|,|v|) \leq {\rm Re }\ {\mathfrak a} (u,v)
\end{equation}
for every $u$, $v\in D({\mathfrak a} )$ with $u\bar{v}\geq 0$.
First of all, if $D({\mathfrak a} )$ is an ideal of $D(\widehat{{\mathfrak a}} )$ and if ${\mathfrak a}$ is given as in equation \eqref{eq.domination} for some appropriate measure $\nu$, which is absolutely continuous with respect to the ${\mathfrak a}$-capacity, and which satisfies the lower bound \eqref{eq.domination.2}, then
\begin{align*}
{\rm Re }\ \widehat{{\mathfrak a}} (|u|,|v|) & \leq{\rm Re }\ \widehat{{\mathfrak a}} (u,v) +{\rm Re }\ \int_{\widetilde{\Omega}\times \widetilde{\Omega}} u(x) \overline{v(y)} \; d\nu (x,y) \\
& ={\rm Re }\ {\mathfrak a} (u,v)
\end{align*}
for all $u$, $v\in D({\mathfrak a} )$ with $u\bar{v}\geq 0$. Hence, the inequality \eqref{eq.domination.char} is fulfilled, and therefore $T$ is dominated by $\widehat{T}$.
Conversely, suppose that $T$ is dominated by $\widehat{T}$. In particular, the inequality \eqref{eq.domination.char} is fulfilled for every $u$, $v\in D({\mathfrak a} )$ such that $u\bar{v}\geq 0$. Define the sesquilinear form
\begin{align*}
\Psi : D({\mathfrak a} ) \times D({\mathfrak a} ) & \to {\mathbb C} , \\
(u,v) & \mapsto {\mathfrak a} (u,v) - \widehat{{\mathfrak a}} (u,v) .
\end{align*}
Let $u,v$ be positive elements in $D({\mathfrak a})$; in particular, $u$ and $v$ are real. Since the semigroups $T$ and $\widehat{T}$ are real, the characterisation of real semigroups \cite[Proposition~2.5]{Ou04} yields that ${\mathfrak a}(u,v)$ and $\widehat{{\mathfrak a}} (u,v)$ are real, too.
Thus, the inequality \eqref{eq.domination.char} implies that
\[
\Psi (u,v) \geq 0 ,
\]
that is, $\Psi$ is positive on $D({\mathfrak a} ) \times D({\mathfrak a} )$ in the order sense. By the universal property of the algebraic tensor product $D({\mathfrak a} ) \otimes_{{\rm alg}} D({\mathfrak a} )$, there exists a unique linear mapping $\psi : D({\mathfrak a} ) \otimes_{\rm alg} D({\mathfrak a} )\to{\mathbb C}$ associated with the sesquilinear form $\Psi$ such that the diagram
\[
\begin{tikzcd}
D({\mathfrak a} ) \times D({\mathfrak a} ) \arrow{r}{\mathfrak{b}} \arrow[swap]{dr}{\Psi} & D({\mathfrak a} ) \otimes_{\rm alg} D({\mathfrak a} ) \arrow{d}{\psi} \\
& {\mathbb C}
\end{tikzcd}
\]
commutes. Here, $\mathfrak{b} : D({\mathfrak a} ) \times D({\mathfrak a} ) \to D({\mathfrak a} ) \otimes_{\rm alg} D({\mathfrak a} )$, $(u,v) \mapsto u\otimes \bar{v}$ is the canonical sesquilinear mapping. By the construction of the algebraic tensor product and by the positivity of $\Psi$, the linear mapping $\psi$ is also positive.
This mapping is of course also positive on the smaller tensor product $(D({\mathfrak a} ) \cap {\mathcal A}) \otimes_{\rm alg} (D({\mathfrak a} ) \cap {\mathcal A})$. In the following, we write $C(\widetilde{\Omega})$ instead of $\mathcal A$, where $\widetilde{\Omega}$ denotes the compactification of $\Omega$ with respect to $\mathcal{A}$.
Let us consider two cases. Assume first, that the span of $D({\mathfrak a} ) \cap C(\widetilde{\Omega})$ is dense in $C(\widetilde{\Omega})$ with respect to the supremum norm. Then there exists a positive $u_0\in D({\mathfrak a} )\cap C(\widetilde{\Omega})$ such that $\| u_0 - 1\|_\infty \leq \frac12$. This function $u_0$ is an order unit in $C(\widetilde{\Omega})$ and similarly, $u_0\otimes u_0$ is an order unit in the injective tensor product $C(\widetilde{\Omega})\otimes_\varepsilon C(\widetilde{\Omega} )$; the injective tensor product actually is isometrically isomorphic to the space $C(\widetilde{\Omega} \times \widetilde{\Omega} )$ (see \cite[Example I.4.2(3)]{DeFl93}). The tensor product $(D({\mathfrak a} ) \cap C(\widetilde{\Omega})) \otimes_{\rm alg} (D({\mathfrak a} ) \cap C(\widetilde{\Omega}))$ therefore is majorizing in $C(\widetilde{\Omega} \times \widetilde{\Omega} )$ in the sense that for every $w\in C(\widetilde{\Omega} \times \widetilde{\Omega} )$ there exists $z\in (D({\mathfrak a} ) \cap C(\widetilde{\Omega})) \otimes_{\rm alg} (D({\mathfrak a} ) \cap C(\widetilde{\Omega}))$ such that $w\leq z$. The positivity of the mapping $\psi$ and Kantorovich's theorem \cite[Corollary 1.5.9]{MN91} now imply that $\psi$ uniquely extends to a positive and bounded linear form on the space $C(\widetilde{\Omega} \times \widetilde{\Omega} )$; actually, Kantorovich's theorem in this special situation is an exercise. By the Riesz-Markov representation theorem, there exists a unique, positive, and finite measure $\nu$ on $\widetilde{\Omega}\times\widetilde{\Omega}$ such that
\begin{equation} \label{representation}
\psi (u\otimes \bar{v}) = \Psi (u,v) = \int_{\widetilde{\Omega}\times \widetilde{\Omega}} u(x) \overline{v(y)} \; d\nu (x,y) ,
\end{equation}
and the implication is proved.
Secondly, assume that the span of $D({\mathfrak a} ) \cap C(\widetilde{\Omega})$ is not dense in $C(\widetilde{\Omega})$. By the assumption of $\mathcal A$-regularity, however, the span of $D({\mathfrak a} ) \cap C(\widetilde{\Omega})$ together with the constant function $1$ is dense in $C(\widetilde{\Omega} )$. Hence, the closure $\mathcal A_0$ of the span of $D({\mathfrak a} ) \cap C(\widetilde{\Omega})$ has co-dimension one in the space of continuous functions. The fact that $D({\mathfrak a} )$ is a lattice implies that $\mathcal A_0$ is actually a maximal ideal in $C(\widetilde{\Omega} )$. Hence, there exists a point $x_\infty\in\widetilde{\Omega}$ such that
\[
\mathcal A_0 = \{ u\in C(\widetilde{\Omega} ) | u(x_\infty ) = 0 \} = C_0 (\widetilde{\Omega}_\infty );
\]
where $\widetilde{\Omega}_\infty := \widetilde{\Omega} \setminus \{x_\infty\}$. Similarly as above, one shows that the tensor product $(D({\mathfrak a} ) \cap C(\widetilde{\Omega})) \otimes_{\rm alg} (D({\mathfrak a} ) \cap C(\widetilde{\Omega}))$ is majorizing in $C_c(\widetilde{\Omega}_\infty \times \widetilde{\Omega}_\infty )$, and therefore, $\psi$ uniquely extends to a positive and bounded linear form on $C_c(\widetilde{\Omega}_\infty \times \widetilde{\Omega}_\infty )$. By the Riesz-Markov representation theorem, there exists a unique, positive Radon measure $\nu$ on $\widetilde{\Omega}_\infty \times \widetilde{\Omega}_\infty$ such that \eqref{representation} holds. Note that in this case, the measure $\nu$ need not be finite, but for all functions $u$, $v\in D({\mathfrak a} )$ the product $u\otimes \bar{v}$ is $\nu$-integrable.
Now, the measure $\nu$ is positive and the forms ${\mathfrak a}$, $\widehat{{\mathfrak a}}$, and in turn, $\Psi$ are Hermitean. This implies that the measure $\nu$ can be chosen to be symmetric. In fact, if necessary, it suffices to replace the measure $\nu$ by the measure $\tilde{\nu}$ given by $\tilde{\nu} (B) = \frac12 (\nu (B) + \nu (\tilde{B}))$ for every Borel set $B\subseteq\widetilde{\Omega}\times\widetilde{\Omega}$ (where again, $\tilde{B}$ is the reflection of $B$). Finally, the continuity of $\psi$ on $D({\mathfrak a} ) \otimes D({\mathfrak a} )$ together with Lemma~\ref{lem.abs} implies that $\nu$ is absolutely continuous with respect to the ${\mathfrak a}$-capacity.
\end{proof}
\section{Positivity and locality}
\label{section:locality}
In a recent article, Akhlil found a surprising connection between positivity of a dominated semigroup and locality of the generating form, at least when the dominating semigroup is generated by a local operator or a local form \cite[Theorem~4.3]{Ak18}. Let us recall that an operator $A$ on $L^2_\mu (\Omega )$ is called {\em local} if for every $u\in D(A)$, $v\in L^2_\mu (\Omega )$ satisfying $uv = 0$ one has $\langle Au , v\rangle_{L^2_\mu} = 0$. Note that the condition $uv=0$ for two $L^2$-functions is equivalent to the condition $|u|\wedge |v| = 0$. Similarly, we say that a sesquilinear form ${\mathfrak a}$ is {\em local} if for every $u$, $v\in D({\mathfrak a} )$ satisfying $uv = 0$ one has ${\mathfrak a} (u,v) = 0$. A sesquilinear form is local if and only if for every $u$, $v\in D({\mathfrak a} )$ satisfying $uv = 0$ one has
\begin{equation} \label{eq.local}
{\mathfrak a} (u+v) = {\mathfrak a} (u) + {\mathfrak a} (v).
\end{equation}
The proof of the following is straightforward.
\begin{proposition}
Let ${\mathfrak a}$ be a closed, accretive, and $\mathcal A$-regular sesquilinear form on $L^2_\mu (\Omega )$; where $\mathcal A$ is a closed, Hermitean, and unital subalgebra of $C^b (\Omega )$. Let $\widetilde{\Omega}$ be the compactification of $\Omega$ associated with $\mathcal{A}$ and let $\nu$ be a positive Borel measure on $\widetilde{\Omega} \times \widetilde{\Omega}$ such that the form
\[
{\mathfrak b} (u,v) = \int_{\widetilde{\Omega}\times\widetilde{\Omega}} u(x)\overline{v(y)} \; d\nu (x,y)
\]
is well defined and continuous on $D({\mathfrak a} )$. Then $\mathfrak b$ is local if and only if ${\rm supp}\, \nu$ is contained in the diagonal $\Delta := \{ (x,y)\in\widetilde{\Omega}\times\widetilde{\Omega} | x=y\}$.
\end{proposition}
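A discrete analogue of the proposition can be made completely explicit (our illustration, not part of the statement above): on a finite set, a measure on the product space is just a nonnegative matrix $\nu$, and the induced form is local precisely when $\nu$ is diagonal.

```python
import numpy as np

# Discrete analogue of the proposition: a measure on a finite product
# space is a nonnegative matrix nu, and b(u, v) = u^T nu v.  If nu is
# supported on the diagonal, b vanishes on disjointly supported u, v;
# a single off-diagonal atom already destroys this locality.
def b(u, v, nu):
    return u @ nu @ v

u = np.array([1.0, 2.0, 0.0, 0.0])
v = np.array([0.0, 0.0, 3.0, 1.0])            # u * v = 0 pointwise

nu_diag = np.diag([1.0, 2.0, 3.0, 4.0])       # supported on the diagonal
assert b(u, v, nu_diag) == 0.0                # the form is local

nu_off = nu_diag.copy()
nu_off[0, 2] = 1.0                            # one off-diagonal atom
assert b(u, v, nu_off) != 0.0                 # locality fails
print("locality holds iff the support of nu lies on the diagonal")
```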
The first statement in the following theorem gives a condition under which locality of a form implies positivity of the generated semigroup. The second statement basically is \cite[Theorem~4.3]{Ak18}, although the assumptions in \cite{Ak18} are stronger. The proof in \cite{Ak18} uses the Beurling-Deny and LeJan representation of regular Dirichlet forms \cite{Al75,And75}; in particular, the dominated semigroup is a submarkovian semigroup, that is, it is positive {\em and} $L^\infty$-contractive. We give here a different proof which only uses positivity of the dominated semigroup.
\begin{theorem} \label{prop.local}
Let ${\mathfrak a} :D({\mathfrak a} )\times D({\mathfrak a} )\to{\mathbb C}$ and $\widehat{{\mathfrak a}} : D(\widehat{{\mathfrak a}} ) \times D(\widehat{{\mathfrak a}} ) \to{\mathbb C}$ be two sesquilinear, Hermitean, closed, accretive, and densely defined forms on $L^2_\mu (\Omega )$. Denote the associated semigroups by $T$ and $\widehat{T}$, respectively, and assume that they are real.
\begin{enumerate}[\upshape (a)]
\item If the form ${\mathfrak a}$ is local and $D({\mathfrak a} )$ is a sublattice of $L^2_\mu (\Omega )$, then $T$ is a positive semigroup.
\item Assume that the semigroup $T$ is positive and $T$ is dominated by $\widehat{T}$. Then locality of $\widehat{{\mathfrak a}}$ implies the locality of ${\mathfrak a}$.
\end{enumerate}
\end{theorem}
\begin{proof}
(a) Assume that the form ${\mathfrak a}$ is local and that $D({\mathfrak a} )$ is a sublattice of $L^2_\mu (\Omega )$. Then, by the characterisation \eqref{eq.local} of locality,
\begin{align*}
{\mathfrak a} (u) & = {\mathfrak a} (u^+ - u^-) \\
& = {\mathfrak a} (u^+) + {\mathfrak a} (-u^-) \\
& = {\mathfrak a} (u^+) + {\mathfrak a} (u^-) \\
& = {\mathfrak a} (u^+ + u^-) \\
& = {\mathfrak a} (|u|)
\end{align*}
for all $u\in D({\mathfrak a})$. Thus, the first Beurling-Deny criterion implies that the semigroup $T$ is positive.
(b) Suppose that the form $\widehat{{\mathfrak a}}$ generating $\widehat{T}$ is local. The positivity of $T$ and the first Beurling-Deny criterion imply that for every real $u\in D({\mathfrak a} )$,
\begin{align*}
{\mathfrak a} (u^+) + {\mathfrak a} (u^-) - 2{\rm Re }\ {\mathfrak a} (u^+,u^-) & = {\mathfrak a} (u) \\
& \geq {\mathfrak a} (|u|) \\
& = {\mathfrak a} (u^+) + {\mathfrak a} (u^-) + 2{\rm Re }\ {\mathfrak a} (u^+,u^-) .
\end{align*}
Hence, ${\rm Re }\ {\mathfrak a} (u^+ ,u^-) \leq 0$ for all real $u\in D({\mathfrak a} )$.
Thus, if $u$, $v\in D({\mathfrak a} )$ are both positive and $uv=0$, then the characterisation of domination from \eqref{eq.domination.char} implies
\[
0 \geq {\rm Re }\ {\mathfrak a} (u,v) \geq {\rm Re }\ \widehat{{\mathfrak a}} (u,v) .
\]
However, because $\widehat{{\mathfrak a}}$ is local, the right-hand side of this chain of inequalities is zero. Thus,
\[
{\rm Re }\ {\mathfrak a} (u,v) = 0 \text{ for every positive } u,v\in D({\mathfrak a} ) \text{ such that } uv=0 .
\]
By decomposing $u$ and $v$ into positive and negative parts (of their real and imaginary parts) and using the sesquilinearity of the form, we then obtain
\[
{\rm Re }\ {\mathfrak a} (u,v) = 0 \text{ for every } u,v\in D({\mathfrak a} ) \text{ such that } uv= 0.
\]
Finally, replacing $u$ by $e^{i\theta} u$ yields
\[
{\mathfrak a} (u,v) = 0 \text{ for every } u,v\in D({\mathfrak a} ) \text{ such that } uv= 0.
\]
Hence, the form ${\mathfrak a}$ is local.
\end{proof}
\section{Final Remarks}
\label{section:remarks}
\subsection{The Laplace operator with Dirichlet and Neumann boundary conditions: what is in between?}
Let $T$, $\widehat{T}$, and $S$ be three semigroups generated by closed, accretive, and Hermitean
forms ${\mathfrak a}$, $\widehat{{\mathfrak a}}$, and $\mathfrak{b}$, respectively. If $S$ is sandwiched between $T$ and $\widehat{T}$ in the sense that $\widehat{T}$ dominates $S$, which in turn dominates $T$, then $S$ is necessarily positive. Indeed, for every positive $u\in L^2_\mu(\Omega)$ and every $t\geq 0$,
\[
S(t)u = S(t)|u| \geq |T(t)u| \geq 0 .
\]
Therefore, Theorem~\ref{prop.local} implies that if $\widehat{{\mathfrak a}}$ is local, then $\mathfrak{b}$ is local as well.
Let $\Omega \subset\mathbb{R}^{N}$ ($N \geq 1$) be a bounded open set with boundary $\partial \Omega$. By ${\mathfrak a}^N : H^1 (\Omega) \times H^1 (\Omega) \to {\mathbb R}$ and ${\mathfrak a}^D : H^1_0 (\Omega) \times H^1_0 (\Omega) \to {\mathbb R}$ we denote the sesquilinear and Hermitean forms of the Neumann-Laplace operator and the Dirichlet-Laplace operator, respectively, given by
\begin{align*}
{\mathfrak a}^N (u,v) & := \int_{\Omega} \nabla u \cdot \overline{\nabla v} \; \mathrm{d}x \qquad (u, v\in H^1 (\Omega)) \text{ and} \\
{\mathfrak a}^D (u,v) & := \int_{\Omega} \nabla u \cdot \overline{\nabla v} \; \mathrm{d}x \qquad (u, v\in H^1_0 (\Omega)) .
\end{align*}
Both forms ${\mathfrak a}^N$ and ${\mathfrak a}^D$ are local and $T^N$ dominates $T^D$, where $T^N$ and $T^D$ are the associated semigroups. Let $S$ be sandwiched between $T^N$ and $T^D$, generated by a closed, accretive, and Hermitean form $\mathfrak{b}$. Then (as remarked above), $\mathfrak{b}$ is necessarily local by Theorem~\ref{prop.local}. If $\Omega$ has Lipschitz boundary, so that $\mathfrak{b}$ is also $C(\overline{\Omega})$-regular, then the form is associated to a Laplace operator with local Robin boundary conditions \cite[Theorem~4.1]{ArWa03b}. As shown in Theorem~\ref{prop.local}\,(b) above, and as already shown by Akhlil in \cite{Ak18}, the assumption of locality in \cite[Theorem~4.1]{ArWa03b} is superfluous.
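The domination of a Robin semigroup by the Neumann semigroup can also be observed numerically; the sketch below (a standard finite-difference discretisation with assumed grid size and Robin parameter $\beta$, not taken from \cite{ArWa03b}) checks entrywise that the discrete Robin semigroup is positive and dominated by the discrete Neumann semigroup, since $L_R = L_N + V$ with a nonnegative boundary potential $V$.

```python
import numpy as np
from scipy.linalg import expm

# Finite-difference sketch (assumed discretisation and Robin parameter):
# on (0, 1), L_R = L_N + V with a nonnegative boundary potential V, so
# exp(-t L_R) is entrywise nonnegative and dominated by exp(-t L_N).
n, beta, t = 50, 1.0, 0.05
h = 1.0 / n

main = 2.0 * np.ones(n)
main[0] = main[-1] = 1.0                      # reflecting (Neumann) ends
L_N = (np.diag(main) - np.diag(np.ones(n - 1), 1)
       - np.diag(np.ones(n - 1), -1)) / h**2

V = np.zeros(n)
V[0] = V[-1] = beta / h                       # Robin boundary term
L_R = L_N + np.diag(V)

T_N, T_R = expm(-t * L_N), expm(-t * L_R)
assert (T_R >= -1e-10).all()                  # Robin semigroup is positive
assert (T_R <= T_N + 1e-10).all()             # dominated by Neumann
print("discrete Robin semigroup is positive and dominated by Neumann")
```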
A similar representation theorem was also proved in a nonlinear setting in which closed, accretive, and Hermitean forms are replaced by convex, lower semi-continuous energy functions. For example, \cite{ChWa12} characterises all nonlinear semigroups which are sandwiched between the semigroups generated by the $p$-Laplace operator with Neumann boundary conditions and the $p$-Laplace operator with Dirichlet boundary conditions, if the energy function generating the sandwiched semigroup is local. Later, in \cite{Cl21b}, such a characterisation was generalised to semigroups generated by nonlinear and local Dirichlet forms. It is not clear whether locality is a necessary assumption in the nonlinear situation.
\subsection{Eventual positivity}
Let ${\mathfrak a}:D({\mathfrak a})\times D({\mathfrak a})\to\mathbb C$ be a sesquilinear Hermitean form on $L^2(\Omega)$, where $\Omega\subseteq{\mathbb R}^N$ is a bounded open set.
As a consequence of Theorem~\ref{prop.local}(b), we have that, if the semigroup $T$ associated to ${\mathfrak a}$ is dominated by the semigroup generated by the Neumann Laplacian (see above), then positivity of $T$ implies locality of ${\mathfrak a}$. However, there are non-positive semigroups that are dominated by the semigroup generated by the Neumann Laplacian. Of course, due to Theorem~\ref{prop.local}(a), they are necessarily non-local. We give an example. Let $\Omega=(0,1)$ and let $T$ be the semigroup associated with the non-local form
\begin{align*}
{\mathfrak a}(u,v) & = \int_0^1 u'\overline{v'}\, dx+ \langle Bu\restrict{\{0,1\}},v\restrict{\{0,1\}}\rangle \\
& = \int_0^1 u'\overline{v'}\, dx + \lambda \big(u(0)\overline{v(0)} + u(1)\overline{v(1)} + u(0)\overline{v(1)} + u(1)\overline{v(0)} \big)
\end{align*}
for $u,v\in D({\mathfrak a}):=H^1(0,1)$, where $B=\begin{bmatrix} \lambda &\lambda\\ \lambda &\lambda\end{bmatrix}$ and $\lambda$ is a positive real number. The aforementioned domination is a consequence of Theorem~\ref{main}. Indeed, by Remark~\ref{rem.dom}, it suffices to show that the measure is diagonally dominant on $H^1(0,1)$. Note that the measure here is just the Dirac measure on the four corners of the unit square $[0,1]\times [0,1]$ and hence diagonally dominant. In fact, the domination of the semigroup $T$ by the Neumann Laplacian is also mentioned in \cite[Section~3]{Ak18} for the case $\lambda=1$.
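A quick numerical sanity check of the relevant properties of $B$ (shown here for $\lambda=1$; any $\lambda>0$ behaves identically):

```python
import numpy as np

lam = 1.0  # illustration for lambda = 1; any lambda > 0 behaves the same way
B = lam * np.array([[1.0, 1.0],
                    [1.0, 1.0]])

# B is positive semidefinite: <B xi, xi> = lam * |xi_1 + xi_2|^2 >= 0,
# with eigenvalues 0 and 2*lam
evals = np.linalg.eigvalsh(B)

# diagonal dominance |B_ii| >= sum_{j != i} |B_ij| holds (with equality)
diag_dominant = all(
    abs(B[i, i]) >= sum(abs(B[i, j]) for j in range(2) if j != i)
    for i in range(2)
)
```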
While non-positivity of the semigroup $T$ is a consequence of the first Beurling--Deny criterion, it can alternatively be deduced from Theorem~\ref{prop.local}(b).
Nevertheless, the semigroup $T$ is {\em uniformly eventually positive}, i.e., there exists a time $t_0\geq 0$ such that $T(t)$ leaves the positive cone invariant for all $t\geq t_0$. Indeed, this was shown for $\lambda=1$ in \cite[Theorem~4.2]{DaGl18a}, and the proof for other values of $\lambda$ remains the same.
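As a toy illustration of this notion (a finite-dimensional stand-in chosen only for this purpose, not a discretisation of the boundary form above), the following sketch exhibits a symmetric generator $M$ with a negative off-diagonal entry whose semigroup $e^{tM}$ has a negative entry for small $t$ but becomes entrywise positive for large $t$, because the dominant eigenvector of $M$ is strictly positive:

```python
import numpy as np

# Toy symmetric generator: one negative off-diagonal entry, but a simple
# dominant eigenvalue with a strictly positive eigenvector (Perron-like).
M = np.array([[ 0.0, 1.0, -0.1],
              [ 1.0, 0.0,  1.0],
              [-0.1, 1.0,  0.0]])
w, V = np.linalg.eigh(M)

def semigroup(t):
    # e^{tM} via the spectral decomposition of the symmetric matrix M
    return (V * np.exp(t * w)) @ V.T

small = semigroup(0.05)   # ~ I + tM: inherits the negative (1,3) entry of M
large = semigroup(20.0)   # dominated by the positive eigenvector: all entries > 0
```

So $e^{tM}$ is not positive but is uniformly eventually positive, mirroring the behaviour of $T$ above.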
The first example of an eventually positive semigroup in infinite dimensions was given by Daners in \cite{Da14} which led to a systematic study in \cite{DaGlKe16, DaGlKe16a}. The theory has been further developed in \cite{DaGl17, DaGl18a, DaGl18, ArGl21}.
\subsection*{Acknowledgements}
The first and the third named author were supported by
Deu\-tscher Aka\-de\-mi\-scher Aus\-tausch\-dienst. Part of the work was done during the third author's pleasant stay at TU Dresden.
|
{
"timestamp": "2021-12-01T02:26:48",
"yymm": "2111",
"arxiv_id": "2111.15489",
"language": "en",
"url": "https://arxiv.org/abs/2111.15489"
}
|
\section{Introduction}
Developing powerful language representation techniques has been a key area of research in Natural Language Processing (NLP). The employment of effective representational models has been an essential contributor to improving the performance of many NLP systems. Word vectors or embeddings are fixed-length vectors that capture the semantic properties of words. Emerging from the simple neural-network-based Word2Vec model and recently transitioning to Contextualised Word Embeddings (CWEs), these advancements have consistently brought a revolution to every NLP sub-domain. The introduction of the Word2Vec model not only brought an unprecedented increase in the performance of a wide variety of downstream tasks such as Machine Translation, Sentiment Analysis, and Question Answering, but also laid the foundation for a majority of the Natural Language Understanding (NLU) architectures that we use today.
Recent attempts in NLU research have fundamentally focused on generating context-aware word representations, i.e., embeddings that take into account the polysemous nature of words. Polysemy refers to the changes in the meaning of a word when the context around it changes. One related task in NLP is Word Sense Disambiguation (WSD) which deals with the automatic recognition of the correct sense of a word appearing in a specific context. WSD is an essential component of any NLP system as it helps in generating better semantic representations of words.
\noindent \textbf{Contribution}: The Transformer architectures implemented in the HuggingFace framework \cite{Wolf2019HuggingFacesTS} implicitly provide a model for WSD. We test the performance of nine such pre-trained models on WSD and extensively analyse each one of them. These models are BERT \cite{devlin2018bert}, OpenAI-GPT \cite{radford2018gpt}, OpenAI-GPT2 \cite{radford2019language}, CTRL \cite{keskarCTRL2019}, DistilBERT \cite{sanh2019distilbert}, Transformer-XL \cite{DBLP:journals/corr/abs-1901-02860}, XLNet \cite{DBLP:journals/corr/abs-1906-08237}, ELECTRA \cite{clark2020electra} and ALBERT \cite{lan2019albert}.
This comprehensive study helps us in comparing the ability of different transformer models in incorporating polysemy in embeddings, i.e., their power of segregating various senses of a word in the word-vector space. Through our experiments, we also report a new state-of-the-art on both the lexical sample WSD datasets we experimented on, i.e., SensEval-2 and SensEval-3.
\noindent \textbf{Note}: Although the prime use of the CTRL, OpenAI-GPT, and OpenAI-GPT2 models is Natural Language Generation (NLG), we still include them in our comparative study. We do this to determine the extent to which these models consider polysemy while carrying out NLG as their primary objective.
\section{Related Work}
Word Sense Disambiguation (WSD) is a long-standing problem in NLP. In the early days of Artificial Intelligence, WSD was conceived as a fundamental task of Machine Translation \cite{weaver.1949}. Since then, advancements in NLP have led to the development of a variety of WSD systems. Recent attempts in this respect have tried to tackle the problem by introducing the concept of sense embeddings. For instance, \cite{bartunovetal:2016} induced sense embeddings using a pre-training based approach. \cite{pelevina:2016:RepL4NLP} proposed methods that focus on generating sense embeddings using pre-trained word embeddings such as GloVe vectors \cite{pennington2014glove}.
\cite{DBLP:journals/corr/TraskML15} proposed `Sense2Vec', which utilized the part-of-speech and named entity tag information to distinguish between different meanings of a word.
An extensive survey on further ideas and research on sense representations of words is given by \cite{DBLP:journals/corr/abs-1805-04032}.
Most of the recent approaches have leveraged the power of Deep Learning to build WSD systems. \cite{bosc-vincent-2018-auto} proposed an auto-encoder-based approach that goes from the target word embedding back to the word definition. The method proposed by \cite{yuan.2016} revolved around the computation of a sentence context vector for ambiguous words. They adopted a \emph{k}-Nearest Neighbor (kNN) \cite{cover.1967} based approach for the classification of ambiguous words. In contrast to all the approaches described above, \cite{Wiedemann2019DoesBM} proposed a simple yet effective approach for the classification of ambiguous words. Instead of using any pre-trained embeddings such as GloVe embeddings, they used BERT \cite{devlin2018bert} to obtain Contextualised Word Embeddings (CWEs). For prediction, they used a kNN-based approach. The use of BERT also achieved new state-of-the-art results over previously proposed approaches.
\section{Datasets}
In our experiments, we use two widely-adopted lexical sample corpora available for WSD, SensEval-2 and SensEval-3. Both come with a train and test set to train and evaluate a WSD model. The words in these datasets are annotated with the sense identifiers defined in WordNet 3.0. A brief overview of both datasets is shown in Table 1. To evaluate the performance of a WSD model, we refer to the testing scripts from the comprehensive framework of \cite{raganato.2017}\footnote{\href{https://github.com/getalp/UFSAC}{https://github.com/getalp/UFSAC}.}.
\begin{table*}[t]
\caption{An overview of the datasets used for the study of nine Transformer Models. Average Sentence Length has been rounded off to the nearest integer.}
\resizebox{\textwidth}{!}
{%
\small
\begin{tabular}{@{}ccccccccc@{}}
\toprule
\textbf{Dataset} & \textbf{\thead{No. of \\ Sentences}} & \textbf{\thead{Avg. Sentence \\ Length}} & \textbf{\thead{No. of Distinct \\ Sense Identifiers}} & \textbf{\thead{No. of Sense \\ Embeddings}} & \textbf{\thead{Distinct \\ Words}} & \textbf{Nouns} & \textbf{Adjectives} & \textbf{Verbs} \\
\midrule
SensEval-3 Train & 7860 & 30 & 285 & 9280 & 172 & 3632 & 308 & 3879 \\
SensEval-3 Test & 3944 & 30 & 260 & 4520 & 168 & 1777 & 153 & 1999 \\
SensEval-2 Train & 8611 & 29 & 783 & 8742 & 187 & 3492 & 1400 & 2559 \\
SensEval-2 Test & 4328 & 29 & 620 & 4385 & 184 & 1737 & 702 & 1800 \\
\bottomrule
\end{tabular}%
}
\end{table*}
\section{Experiments}
For our experimentation, we take inspiration from the simple yet effective kNN-based approach to WSD on CWEs proposed by \cite{Wiedemann2019DoesBM}.
This approach uses a cosine similarity-based distance metric for the classification of ambiguous words in the test data. In a nutshell, we obtain the CWEs of all the ambiguous words in the training data by providing their respective contexts to one of the nine contextualization approaches. While classifying an ambiguous word in a test sentence, a kNN classification approach is used, with cosine similarity between the CWE of the ambiguous word and all its instances observed during training as the similarity metric.
Such an experiment is carried out for six different values of the hyper-parameter $k \in \lbrace 1, 3, 5, 7, 10, 11 \rbrace$ in the kNN classifier. An ambiguous word is classified to the sense with the maximum number of nearest neighbors in the ``\emph{k}'' nearest neighbors.
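The classification step can be sketched as follows (a minimal sketch with hypothetical variable names; real CWEs would come from one of the nine contextualization models):

```python
import numpy as np
from collections import Counter

def knn_sense(query_vec, train_vecs, train_senses, k=5):
    """Classify an ambiguous word by cosine-similarity kNN over training CWEs."""
    q = query_vec / np.linalg.norm(query_vec)
    t = train_vecs / np.linalg.norm(train_vecs, axis=1, keepdims=True)
    sims = t @ q                       # cosine similarity to every training CWE
    top = np.argsort(-sims)[:k]        # indices of the k nearest neighbours
    votes = Counter(train_senses[i] for i in top)
    return votes.most_common(1)[0][0]  # majority sense among the k neighbours
```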
We propose a few additions to the existing approach to improve the overall performance. The first improvement lies in the way data is collected. \cite{Wiedemann2019DoesBM} used the lemma of every word in a sentence to obtain sentences from the dataset. This, in some cases, generated inappropriate sentences such as:
\emph{``Nor be this feeling only provoked by the sight or the thought of art, he write."} instead of \emph{``Nor is this feeling only provoked by the sight or the thought of art, he wrote."}. For another sentence, their method collects \emph{``The art\_critic critic be thus bind to consider with care what standard of comparison should be use."} whereas ours collects \emph{``The art critic is thus bound to consider with care what standards of comparison should be used."}. The sentences collected by their method lack proper grammatical structure. We improve this by collecting the lemma only for the ambiguous word and the surface form for every other word in the sentence.
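A sketch of this data-collection fix (the token structure and field names here are hypothetical, chosen only for illustration):

```python
def build_sentence(tokens, target_index):
    """Rebuild a training sentence: lemma for the ambiguous target word only,
    surface form for every other word."""
    words = [tok["lemma"] if i == target_index else tok["surface"]
             for i, tok in enumerate(tokens)]
    return " ".join(words)
```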
Our second improvement is an empirical finding. While obtaining the CWEs from BERT, they treated the concatenation of the output of the last four layers of BERT as the word embeddings. Instead, we used only the final layer of BERT to obtain the embedding of a word.
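Our use of only the final layer can be sketched as follows (a hypothetical helper; in the HuggingFace framework the hidden states would come from `model(**inputs).last_hidden_state`):

```python
import numpy as np

def word_embedding(last_hidden_state, subword_indices):
    # Final-layer CWE of a word: average the vectors of its subword tokens.
    # `last_hidden_state` is the (seq_len, hidden_dim) array from the final
    # transformer layer; `subword_indices` are the positions of the target
    # word's subword tokens in the sequence.
    return last_hidden_state[subword_indices].mean(axis=0)
```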
\section{Experimental Results}
To study and analyze each of the transformer models in detail, we conduct three rounds of experiments. In the first round, we carry out the task of WSD on nine pre-trained Transformer architectures using the kNN approach described above and compare their performance on two WSD Lexical Sample tasks. Further, to visualize each model's power to separate different senses of a word in their embedding space, we draw the t-SNE plots for the CWEs generated by the transformer models. Lastly, we provide a qualitative analysis by examining the correct predictions and the wrong predictions made by each of our WSD models.
\subsection{Contextualized Embeddings}
To compare the models based on their contextualization power, we perform the task of WSD using the language representations they provide. Table 2 lists the results obtained by each of these models for $k \in \lbrace 1, 3, 5, 7, 10, 11 \rbrace$. With our modifications, the BERT model achieves a new state-of-the-art on the SensEval-2 and SensEval-3 tasks, surpassing \cite{Wiedemann2019DoesBM}. The modifications also enable the DistilBERT model to beat the previous state-of-the-art on the SensEval-3 dataset. The results obtained by the ALBERT and DistilBERT models also reflect how closely their architectures resemble BERT's. Based on these observations, we state that employing DistilBERT or ALBERT in place of BERT could remove a major training-time overhead without incurring a significant loss in performance.
An unexpected drop in performance is observed for the XLNet and Transformer-XL models compared to the other well-performing models. Though both models are effective on various NLP tasks using their powerful recurrence-based Transformer architectures, we notice that they still underperform on this task.
Finally, the three NLG models, OpenAI-GPT, OpenAI-GPT2, and CTRL, also performed poorly. CTRL and OpenAI-GPT2 performed only slightly better than the Most Frequent Sense (MFS) baseline on the SensEval-3 dataset, and they even failed to beat the MFS baseline on the SensEval-2 dataset, demonstrating that they are ineffective in capturing polysemy.
\begin{table}[t]
\caption{Results (F1\%) of all the Transformer models for different values of \emph{k} in the \emph{k}-Nearest Neighbor classification approach vs. the Most Frequent Sense (MFS) baseline and the previous state-of-the-art results. The best results for each model are underlined, the best result on a particular dataset is in bold, and the previous state-of-the-art is in italics.}
\resizebox{\textwidth}{!}
{%
\begin{tabular}{@{}p{0.02\textwidth}|c|c|c|c|c|c||c|c|c|c|c|c@{}}
\toprule
\multicolumn{1}{c|}{\textbf{Model}} & \multicolumn{6}{c||}{\textbf{SensEval-2}} & \multicolumn{6}{c}{\textbf{SensEval-3}} \\
& {\small k=1} & {\small k=3} & {\small k=5} & {\small k=7} & {\small k=10} & {\small k=11} & {\small k=1} & {\small k=3} & {\small k=5} & {\small k=7} & {\small k=10} & {\small k=11} \\
\midrule
\multicolumn{1}{c|}{BERT} & 76.02 & 76.78 & 76.62 & 76.62 & 76.76 & \underline{\textbf{76.81}} & 79.40 & 80.31 & 80.49 & \underline{\textbf{80.96}} & 80.75 & 80.72 \\
\multicolumn{1}{c|}{DistilBERT} & 74.81 & \underline{75.64} & 75.36 & 75.43 & 75.41 & 75.43 & 78.62 & 79.71 & 80.05 & 80.15 & \underline{80.23} & 80.07 \\
\multicolumn{1}{c|}{ALBERT} & 74.84 & 75.33 & \underline{75.43} & 74.98 & 75.07 & 75.07 & 77.94 & 78.93 & 79.44 & 79.60 & \underline{79.71} & 79.57 \\
\multicolumn{1}{c|}{XLNet} & 64.74 & 66.24 & \underline{66.48} & 66.45 & 66.38 & 66.45 & 69.97 & 70.64 & 71.50 & \underline{71.78} & 71.42 & 71.42 \\
\multicolumn{1}{c|}{ELECTRA} & 65.98 & 65.88 & 65.98 & \underline{66.10} & 66.07 & 65.95 & 69.45 & 70.10 & 70.82 & \underline{71.14} & 71.11 & 71.01 \\
\multicolumn{1}{c|}{GPT} & 59.80 & 60.84 & 61.29 & 61.24 & 61.15 & \underline{61.54} & 65.63 & 67.65 & 68.51 & 69.29 & 69.58 & \underline{69.60} \\
\multicolumn{1}{c|}{Trans-XL} & 53.36 & 54.35 & 55.01 & \underline{55.18} & 55.01 & 54.45 & 62.07 & 62.82 & 63.32 & \underline{63.99} & 63.50 & 63.50 \\
\multicolumn{1}{c|}{CTRL} & 52.39 & 53.64 & 54.28 & 54.45 & 54.49 & \underline{54.82} & 58.09 & 60.38 & 60.92 & 61.50 & \underline{61.78} & 61.63 \\
\multicolumn{1}{c|}{GPT2} & 50.96 & 53.57 & \underline{53.88} & \underline{53.88} & 53.86 & 53.80 & 57.03 & 59.83 & 60.92 & 61.21 & \underline{61.29} & 61.19 \\
\midrule
\multicolumn{1}{c|}{MFS} & \multicolumn{6}{c||}{54.79} & \multicolumn{6}{c}{58.95} \\
\multicolumn{1}{c|}{kNN \cite{Wiedemann2019DoesBM}} & \multicolumn{6}{c||}{\emph{76.52}} & \multicolumn{6}{c}{\emph{80.12}} \\
\bottomrule
\end{tabular}%
}
\end{table}
\subsection{Sense-space analysis using t-SNE plots}
To understand and interpret a model's power to segregate different senses of a word in the embedding space, we draw the t-SNE plots of the embeddings obtained for the word `bank' from the training data of SensEval-3 for each of the nine models. Figure 2 shows the t-SNE plots thus obtained. Sub-figure 2.(j) gives the interpretable meanings of the senses represented in the t-SNE plots along with their respective frequencies in the SensEval-3 training corpus. For clarity, we exclude any sense with a frequency of less than three from the t-SNE plots. It is evident from the t-SNE plot of OpenAI-GPT2 that it hardly distinguishes between different senses: as all the sense embeddings are in the vicinity of each other, the model hardly learns any decision boundary for sense classification, which we see as a possible reason why its accuracy stays close to the MFS baseline. We can draw a similar conclusion from the t-SNE plots of CTRL and Transformer-XL, implying that the NLG objective of OpenAI-GPT2 and CTRL hardly takes polysemy into account.
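The plots can be reproduced schematically as follows (a sketch using synthetic embeddings in place of real CWEs, assuming scikit-learn's `TSNE`):

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
# Synthetic stand-ins for CWEs of `bank': two well-separated sense clusters
# (real CWEs would be collected from a model for every occurrence of `bank').
emb = np.vstack([rng.normal(5.0, 1.0, (15, 64)),
                 rng.normal(-5.0, 1.0, (15, 64))])
# Project the high-dimensional embeddings down to 2D for plotting.
coords = TSNE(n_components=2, perplexity=5, random_state=0).fit_transform(emb)
```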
\begin{figure*}
\centering
\begin{tabular}{ c @{\hspace{5pt}} c @{\hspace{5pt}} c @{\hspace{5pt}} c @{\hspace{5pt}} c}
\fbox{\includegraphics[width=0.17\textwidth, clip, trim={70 100 150 140}]{images/bank_BERT.png}} &
\fbox{\includegraphics[width=0.17\textwidth, clip, trim={70 100 150 140}]{images/bank_DistilBERT.png}} &
\fbox{\includegraphics[width=0.17\textwidth, clip, trim={70 100 150 140}]{images/bank_ALBERT.png}} &
\fbox{\includegraphics[width=0.17\textwidth, clip, trim={70 100 150 140}]{images/bank_XLNet.png}} &
\fbox{\includegraphics[width=0.17\textwidth, clip, trim={70 100 150 140}]{images/bank_ELECTRA.png}} \\
\small (a) BERT &
\small (b) DistilBERT &
\small (c) ALBERT &
\small (d) XLNet &
\small (e) ELECTRA \\
\end{tabular}
\begin{tabular}{ c @{\hspace{5pt}} c @{\hspace{5pt}} c @{\hspace{5pt}} c @{\hspace{5pt}} c}
\fbox{\includegraphics[width=0.17\textwidth, clip, trim={70 100 150 140}]{images/bank_openaigpt.png}} &
\fbox{\includegraphics[width=0.17\textwidth, clip, trim={70 100 150 140}]{images/bank_transforXL.png}} &
\fbox{\includegraphics[width=0.17\textwidth, clip, trim={70 100 150 140}]{images/bank_ctrl.png}} &
\fbox{\includegraphics[width=0.17\textwidth, clip, trim={70 100 150 140}]{images/bank_gpt2.png}} &
\fbox{\includegraphics[width=0.17\textwidth,
height =2.030cm, clip, clip, trim={120 0 0 0}]{images/label.png}} \\
\small (f) GPT&
\small (g) Trans-XL&
\small (h) CTRL &
\small (i) GPT2&
\small (j) Sense Labels\\
\end{tabular}
\medskip
\caption{t-SNE plots of different senses of `bank' and their contextualized embeddings. The legend (shown separately in Sub-figure (j)) gives a short description of the respective WordNet sense and its frequency of occurrence in the training data. We used the SensEval-3 training dataset for obtaining these plots.}
\end{figure*}
On the other hand, the plots obtained for the OpenAI-GPT, ELECTRA, and XLNet models depict that these models capture polysemy relatively better than the NLG models: they do make some distinction between different senses of a word. Lastly, the models that performed best among the nine we experimented on are BERT, DistilBERT, and ALBERT. These models possess exceptional proficiency in identifying polysemy, which is evident from their t-SNE plots as well as their accuracy on both datasets.
\subsection{Additional Experiments}
Part-of-Speech (POS) information of a word has been regarded as a crucial factor in determining its possible sense. \cite{Wiedemann2019DoesBM} proposed a POS-sensitive approach to WSD for determining the sense of an ambiguous word. Their experiments resulted in an accuracy lift of approximately 2-3 F1 points on the SemEval datasets. Still, this approach did not prove beneficial for models evaluated on the SensEval-2 and SensEval-3 datasets, because each word in these datasets is annotated with only one POS.
Along similar lines, as a final set of error analyses in our comparative study, we attempt to understand each model's behavior on different POS tags. We estimate the percentage of correct classifications made by each model for Nouns, Verbs, and Adjectives in the two datasets. This is presented in Table 4 for \emph{k}=1.
For the SensEval-2 dataset, we observe that each model classified both Nouns and Adjectives correctly to a considerable extent. For Verbs, however, a difference of approximately 15-20\% from Nouns and Adjectives was observed. A similar drop in classification accuracy was observed in SensEval-3 for Adjectives: each model classified Nouns and Verbs in this dataset to a reasonable extent but underperformed during the classification of Adjectives.
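The per-POS percentages can be computed with a simple tally (hypothetical input names):

```python
from collections import defaultdict

def per_pos_accuracy(predictions, gold_senses, pos_tags):
    # Percentage of correct sense classifications, grouped by POS tag.
    correct, total = defaultdict(int), defaultdict(int)
    for pred, gold, pos in zip(predictions, gold_senses, pos_tags):
        total[pos] += 1
        correct[pos] += int(pred == gold)
    return {pos: 100.0 * correct[pos] / total[pos] for pos in total}
```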
\begin{table*}
\caption{The percentage of correct classifications made by the models for Nouns, Verbs and Adjectives on SensEval-2 Test and SensEval-3 Test data. The values are for \emph{k}=1. The best results on a particular POS tag have been marked in bold.}
\centering
{%
\begin{tabular}{@{}p{0.10\textwidth}|p{0.10\textwidth}p{0.10\textwidth}p{0.10\textwidth}|p{0.10\textwidth}p{0.10\textwidth}p{0.10\textwidth}@{}}
\toprule
\multicolumn{1}{c|}{\textbf{Model}} & \multicolumn{3}{c|}{\textbf{SensEval-2}} & \multicolumn{3}{c}{\textbf{SensEval-3}} \\
& ~\hfill~{\small Nouns}~\hfill~ & ~\hfill~{\small Verbs}~\hfill~ & ~\hfill~{\small Adj}~\hfill~ & ~\hfill~{\small Nouns}~\hfill~ & ~\hfill~{\small Verbs}~\hfill~ & ~\hfill~{\small Adj}~\hfill~ \\
\midrule
\multicolumn{1}{c|}{BERT} & ~\hfill~\textbf{81.64}~\hfill~ & ~\hfill~\textbf{67.22}~\hfill~ & ~\hfill~\textbf{81.62}~\hfill~ & ~\hfill~\textbf{78.17}~\hfill~ & ~\hfill~82.33~\hfill~ & ~\hfill~\textbf{56.86}~\hfill~ \\
\multicolumn{1}{c|}{DistilBERT} & ~\hfill~81.00~\hfill~ & ~\hfill~65.61~\hfill~ & ~\hfill~80.06~\hfill~ & ~\hfill~76.14~\hfill~ & ~\hfill~\textbf{82.86}~\hfill~ & ~\hfill~54.25~\hfill~ \\
\multicolumn{1}{c|}{ALBERT} & ~\hfill~82.38~\hfill~ & ~\hfill~64.33~\hfill~ & ~\hfill~80.06~\hfill~ & ~\hfill~75.63~\hfill~ & ~\hfill~81.87~\hfill~ & ~\hfill~55.56~\hfill~ \\
\multicolumn{1}{c|}{XLNet} & ~\hfill~71.50~\hfill~ & ~\hfill~56.39~\hfill~ & ~\hfill~66.81~\hfill~ & ~\hfill~67.92~\hfill~ & ~\hfill~73.58~\hfill~ & ~\hfill~48.37~\hfill~ \\
\multicolumn{1}{c|}{ELECTRA} & ~\hfill~75.30~\hfill~ & ~\hfill~54.83~\hfill~ & ~\hfill~68.80~\hfill~ & ~\hfill~67.02~\hfill~ & ~\hfill~73.16~\hfill~ & ~\hfill~50.98~\hfill~ \\
\multicolumn{1}{c|}{OpenAI-GPT} & ~\hfill~70.87~\hfill~ & ~\hfill~47.44~\hfill~ & ~\hfill~64.10~\hfill~ & ~\hfill~64.15~\hfill~ & ~\hfill~68.06~\hfill~ & ~\hfill~52.29~\hfill~ \\
\multicolumn{1}{c|}{Transformer-XL} & ~\hfill~61.95~\hfill~ & ~\hfill~42.67~\hfill~ & ~\hfill~59.54~\hfill~ & ~\hfill~60.89~\hfill~ & ~\hfill~64.36~\hfill~ & ~\hfill~47.06~\hfill~ \\
\multicolumn{1}{c|}{CTRL} & ~\hfill~62.29~\hfill~ & ~\hfill~41.11~\hfill~ & ~\hfill~56.84~\hfill~ & ~\hfill~57.01~\hfill~ & ~\hfill~60.55~\hfill~ & ~\hfill~39.87~\hfill~ \\
\multicolumn{1}{c|}{OpenAI-GPT2} & ~\hfill~55.79~\hfill~ & ~\hfill~42.72~\hfill~ & ~\hfill~60.11~\hfill~ & ~\hfill~51.55~\hfill~ & ~\hfill~63.42~\hfill~ & ~\hfill~40.52~\hfill~ \\
\bottomrule
\end{tabular}%
}
\end{table*}
\section{Conclusion}
In this paper, we evaluated the contextualisation power of nine pre-trained Transformer models on a WSD task. We presented a comparative study of each model's power to capture polysemy in the embeddings it generates. To accomplish this, we used the kNN-based approach to WSD proposed by \cite{Wiedemann2019DoesBM} and proposed two improvements to their method, which also allowed us to establish a new state-of-the-art on the WSD Lexical Sample tasks of SensEval-2 and SensEval-3. We concluded our study by stating that the BERT, DistilBERT, and ALBERT models prove to be the most effective on the WSD task, based solely on the text encodings they provide. We found these models to possess an extraordinary potential to identify a word's different senses compared to all the other models.
As future work, we plan to make use of POS information as well to classify an ambiguous word. We firmly believe that incorporating POS information in WSD could be very useful and further increase the performance of these models. In addition to this, we also believe that fine-tuning these models could be a potential area of focus. In our experiments, we leveraged the pre-trained models as provided by the authors, and a bit of fine-tuning could be beneficial.
\bibliographystyle{splncs04}
|
{
"timestamp": "2021-12-01T02:24:35",
"yymm": "2111",
"arxiv_id": "2111.15417",
"language": "en",
"url": "https://arxiv.org/abs/2111.15417"
}
|
\section{Introduction}
\input{introduction}
\section{Isolation Forest Algorithm and Testing Datasets}\label{sec:isolation_forest}
\input{isolationforest}
\subsection{Real world datasets}\label{sec:datasets}
\input{datasets}
\section{Analysis of the \emph{standard} Isolation Forest algorithm}\label{sec:analysis}
\input{analysis}
\section{Weakly-supervised algorithm}\label{sec:weakly}
\input{weak}
\section{Conclusion}
\input{conclusion}
\section*{Acknowledgement}
This work has been supported by MIUR (Italian Ministry of Education) under the initiative ``Departments of Excellence'' (Law 232/2016) and by ``Black-box Anomaly Detection: Advanced Approaches and Applications - BADA$^3$'' funded by the Department of Information Engineering of the University of Padova.
|
{
"timestamp": "2021-12-01T02:25:04",
"yymm": "2111",
"arxiv_id": "2111.15432",
"language": "en",
"url": "https://arxiv.org/abs/2111.15432"
}
|
\section{Introduction and Summary}
\label{sec:introduction}
Wilson loops are universal gauge-invariant operators which were originally devised to characterize the vacuum of the given gauge theory \cite{Wilson:1974sk}.
With the advent of AdS/CFT and localization, supersymmetric extensions of Wilson loops came to the forefront as tools to both study the vacuum structure of gauge theories using holography, and also to further our understanding of holography itself. In this paper we focus on the latter.
The most widely studied holographic Wilson loop (WL) is the circular one in ${\cal N}=4$ supersymmetric Yang-Mills theory (SYM). The standard operator is dressed with coupling to adjoint scalars such that the operator preserves 1/2 of the supercharges of the theory. The vast amount of supersymmetry enables the exact computation of the vacuum expectation value (vev) at large rank $N$ using supersymmetric localization \cite{Erickson:2000af,Drukker:2000rr,Pestun:2007rz}. In holography, Wilson loop operators are dual to fundamental strings hanging from the conformal boundary~\cite{Maldacena:1998im}. In particular, for the circular Wilson loop in ${\cal N}=4$ SYM, the dual string is attached to the boundary of AdS$_5$ and extends into the bulk while staying at fixed location on the $S^5$. The partition function of the quantized string should reproduce the Wilson loop vev exactly
\begin{equation}\label{duality}
\langle {\cal W} \rangle = Z_\text{string}\,.
\end{equation}
Of course, the right hand side can only be evaluated in practice when the string coupling is sufficiently weak and the worldsheet sigma model is weakly coupled. In this case we can utilize a saddle point expansion around a classical string in AdS. The leading order contribution is given by the string action evaluated on-shell which is just its regularized area. Next-to-leading order is given by the one-loop string partition function, and so on.
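For the circular Wilson loop this expansion takes the schematic form (with the standard identification of the effective string tension, $T=\sqrt{\lambda}/2\pi$, which we assume here)
\begin{equation*}
Z_\text{string} = e^{-S_\text{cl}}\, Z_\text{1-loop}\, \big( 1 + {\cal O}(T^{-1}) \big)\,, \qquad S_\text{cl} = -2\pi T = -\sqrt{\lambda}\,,
\end{equation*}
so that at leading order $\langle {\cal W} \rangle \sim e^{\sqrt{\lambda}}$, reproducing the well-known strong-coupling asymptotics of the localization result.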
This expansion on the string theory side corresponds to the strong coupling expansion on the SYM side, which fortunately we have access to via supersymmetric localization.
Finding a precise match between the SYM result and the string theory result beyond leading order has unfortunately proved difficult. Presumably this requires a careful treatment of all measure factors and regularization of UV divergences in the string path integral. A careful treatment was initiated in \cite{Drukker:2000ep} where gauge fixing and ghost determinants were discussed in detail. The one-loop path integral was then computed for the circular string in \cite{Kruczenski:2008zk,Kristjansen:2012nz,Buchbinder:2014nia} using various methods. Collecting all contributions to this order does not lead to a satisfactory match with the field theory result. In fact, even the scaling of the answer with $\lambda$ does not agree.
The origin of this mismatch has been discussed recently in \cite{Giombi:2020mhz}.
There it was emphasized that the Wilson loop operator, appearing on the left hand side of \eqref{duality}, should not be normalized with respect to the rank of the gauge group and therefore scales as $\sim N$.
On the string theory side this effect is reproduced by the dilaton coupling of the string worldsheet which is provided by the so-called Fradkin-Tseytlin (FT) action~ \cite{Fradkin:1984pq,Fradkin:1985ys}.
The role of the FT action for holographic Wilson loops was previously emphasized in \cite{Lewkowycz:2013laa,Chen-Lin:2017pay}. In AdS$_5$ the FT term implies that the tree-level string partition function scales as $g_s^{-1}$, where $g_s ={\lambda\over 4 \pi N}$ is the string coupling constant.\footnote{The coupling to the dilaton on a general worldsheet is $g_s^{-\chi}$, where $\chi$ is the Euler characteristic of the worldsheet.} The FT term therefore affects the $\lambda$ scaling of the string partition function but still does not fully resolve the mismatch when compared to the QFT. Giombi and Tseytlin suggested that the remaining discrepancy should be corrected by a careful treatment of the cancellation of UV divergences one naively encounters when computing the one-loop string partition function. The end result should be that the naive string result is multiplied by $(T/2\pi)^{\chi/2}$, where $T$ is the effective tension of the string.%
\footnote{This is the tension felt by the string in a curved background. For a circular string in AdS the classical worldsheet geometry is just AdS$_2$ and the classical action of the string is $S_\text{classical} = -2\pi T$. Since the regularized area of AdS$_2$ is $-2\pi$, the remaining factor in $S_\text{classical}$ can be taken as the definition of $T$ in this case.}
Some possible explanations for this factor were offered in \cite{Giombi:2020mhz} but a full clarification of its origin is still lacking.
Another way to deal with the mismatch in the Wilson loop vev prefactor is to compute ratios of Wilson loop vevs. This program was carried out in \cite{Forini:2015bgo,Faraggi:2016ekd,Forini:2017whz,Cagnazzo:2017sny,Medina-Rincon:2018wjs}, where the circular WL discussed above was compared to the latitude one. The latitude WL~\cite{Drukker:2007qr,Drukker:2007dw,Drukker:2007yx} can also be evaluated on the QFT side using supersymmetric localization~\cite{Pestun:2009nn,Young:2008ed,Bassetto:2008yf}, while on the string theory side the advantage of computing a ratio of WL vevs is that overall multiplicative factors drop out.
In this paper we study the circular Wilson loop in five-dimensional SYM on $S^5$ with radius ${\cal R}$. Since SYM is not conformal in $d\ne 4$, placing the theory on a curved manifold with only minimal couplings breaks supersymmetry. In order to preserve all 16 supersymmetries, one must introduce additional couplings in the Lagrangian \cite{Blau:2000xg}. Once this is done, the theory can be localized to a matrix model, which enables the computation of the free energy and the WL vev just as for ${\cal N}=4$ SYM in four dimensions \cite{Kim:2012ava} (see also \cite{Minahan:2015jta,Minahan:2015any,Bobev:2019bvq}). The matrix model turns out to be the same as for pure supersymmetric Chern-Simons theory in three dimensions. This model has been solved exactly \cite{Marino:2004eq}, which enables us to obtain a closed form expression for the WL expectation value at large $N$
\begin{equation}\label{SYMWL}
\tcbhighmath{
\langle { \cal W} \rangle = \f{N}{\xi}(\mathrm{e}^{\xi}-1) + O(N^{-1})\,, }
\end{equation}
where $\xi = g_\text{YM}^2 N/(2\pi {\cal R})$ is the dimensionless 't~Hooft coupling of the theory, as discussed in section \ref{sec:QFT}.
The holographic dual to SYM on $S^5$ was identified in \cite{Bobev:2018ugk} to be a particular analytic continuation and dimensional reduction of AdS$_7\times S^4$ in eleven-dimensional supergravity. We will review this geometry in detail in section \ref{sec:gravity}. A key feature of the gravitational solution is that the dilaton is non-trivial, signalling the non-conformal nature of the theory. Indeed, it is well known that five-dimensional SYM is naively non-renormalizable, but it grows an extra dimension in the UV and is completed by the six-dimensional $(2,0)$ theory. This is built into the gravitational dual as a dimensional reduction of AdS$_7$. The goal of this paper is to use the holographic dual geometry to compute the vev of the circular WL. In this way we hope to reproduce the first two terms of \eqref{SYMWL} in the large $\xi$ expansion:
\begin{equation}
\log \langle { \cal W} \rangle = \xi + \log \f{N}{\xi} + O(\mathrm{e}^{-\xi})\,.
\end{equation}
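This expansion follows from \eqref{SYMWL} since $\log\big[\tfrac{N}{\xi}(\mathrm{e}^{\xi}-1)\big]=\xi+\log\tfrac{N}{\xi}+\log(1-\mathrm{e}^{-\xi})$; a quick numerical sanity check of the first two terms (a minimal Python sketch; the values of $N$ and $\xi$ below are arbitrary):

```python
import math

def log_W_exact(N, xi):
    # log of the exact planar result <W> = (N/xi) (e^xi - 1)
    return math.log(N / xi) + math.log(math.expm1(xi))

def log_W_expansion(N, xi):
    # first two terms of the large-xi expansion: xi + log(N/xi)
    return xi + math.log(N / xi)

N, xi = 100, 12.0
remainder = log_W_exact(N, xi) - log_W_expansion(N, xi)
# remainder = log(1 - e^{-xi}) = -e^{-xi} + O(e^{-2 xi})
print(remainder, -math.exp(-xi))
```

The difference between the exact and truncated expressions is exponentially small, as claimed.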
The first term was previously reproduced by computing the classical string area in \cite{Bobev:2019bvq}, and so here we are mainly interested in the first quantum correction. Along the way we encounter many of the same issues as in AdS$_5$ discussed above, but due to the non-conformal nature of the theory we also have to resolve some new ones.
As we have discussed, the first quantum correction consists of two terms, the one-loop fluctuations of the string worldsheet, and the FT term.
Since the dilaton is not constant, the FT term gives a non-trivial contribution beyond simply $g_s^{-\chi}$ (see also \cite{Chen-Lin:2017pay}). Moreover, we find that the FT term is highly divergent. We believe this to be a direct consequence of the fact that 5D SYM is non-renormalizable, which means that the dilaton grows without bound in the UV.
We will argue that the divergence of the FT term is cancelled by a similar divergence of the one-loop fluctuation of the string worldsheet.
We see this indirectly by a Weyl rescaling of the worldsheet metric to the flat one.
The worldsheet Ricci scalar then vanishes, and so the bulk FT term does as well. In fact the surface term also vanishes, and so it would seem that we get no contribution from the FT term. This is however naive. As emphasized in \cite{Cagnazzo:2017sny}, the Weyl rescaling is ill-defined at the center of the disk, which effectively changes the topology of the worldsheet from a disk to a cylinder. This also means that the Euler characteristic changes from 1 to 0, and the naive evaluation of the FT term gives $g_s^0$. We must therefore add back the FT term associated with a small disk at the center of the worldsheet before Weyl rescaling.
In order to compute the one-loop partition function of the string, we adopt the phase shift method \cite{Chen-Lin:2017pay,Cagnazzo:2017sny} utilizing the flat metric. The one-loop partition function is both UV and IR divergent, where the IR divergence is associated with a cutoff radius $R$ close to the center of the worldsheet. The structure of the divergences is $\log Z\sim -\log(\Lambda \mathrm{e}^{-R})$ where $\Lambda$ is a cutoff on the phase shift momentum. As we have discussed, the cancellation of divergences requires a careful understanding of all measure factors and ghost determinants. We will sidestep this problem by instead computing a ratio of string partition functions. As long as the same steps are followed for the computations of the two partition functions, we expect the divergences to have exactly the same structure. We verify this explicitly by computing the one-loop partition function for the circular string in AdS$_4\times {\bf C}P^3$ dual to a circular WL in the ABJM theory \cite{Aharony:2008ug} using exactly the same steps as we did for the 5D SYM case. Our choice to use the ABJM WL is somewhat arbitrary, but it is convenient since we want to remain within type IIA string theory. Before cancelling the UV and IR cutoffs in a ratio of partition functions, we must translate the IR regulator $R$ to a diffeomorphism invariant cutoff $A \sim \mathrm{e}^{-2R}$ on the area of the worldsheet, which we remove when computing the one-loop determinants~\cite{Cagnazzo:2017sny}.
This translation depends on the string worldsheet metric and is different for the two cases.
In particular, this introduces a factor reminiscent of the $\sqrt{T/2\pi}$ prefactor proposed by Giombi and Tseytlin \cite{Giombi:2020mhz}.%
\footnote{This is for a worldsheet with disk topology; we expect the factor $(T/2\pi)^{\chi/2}$ to be produced in a similar way on higher-genus worldsheets.}
After this is done, however, we argue that the UV cutoff $\Lambda$ and the IR cutoff $A$ cancel in the ratio of partition functions, resulting in the finite answer
\begin{equation}
\tcbhighmath{\f{Z_\text{SYM}^\text{string}}{Z_\text{ABJM}^\text{string}} = \Big(\f{N_\text{ABJM}}{4\pi\lambda}\mathrm{e}^{\pi\sqrt{2\lambda}}\Big)^{-1}\Big( \f{N_\text{SYM}}{\xi} \mathrm{e}^{\xi}\Big)\,.}
\end{equation}
On the right hand side we see a perfect match with the ratio of vevs of the circular WLs in 5D SYM~\eqref{SYMWL} on the one hand, and ABJM~\eqref{WABJM} on the other \cite{Kapustin:2009kz,Marino:2009jd,Drukker:2008zx}.
We close this summary with a few comments. First, the close connection of 5D SYM and the (2,0) theory in six dimensions implies that our computation should perhaps be rephrased purely in terms of the (2,0) theory. The Wilson loop in SYM corresponds to a BPS surface operator in six dimensions. On the holographic side, instead of computing the string partition function, we should compute the M2 brane partition function in AdS$_7$ with toroidal boundary. We have verified that the classical contribution is identical to the classical string area (see \cite{Mezei:2018url}) but have not attempted to reproduce the quantum correction from a purely eleven-dimensional computation. Recently there has been considerable progress in this direction \cite{Drukker:2020swu,Drukker:2020atp,Wang:2020xkc,Drukker:2020bes}, and it would be interesting to understand whether our result can be rephrased in the M2 surface operator language.
Next, we note that the localization result, reviewed in section \ref{sec:QFT}, can be used to compute the free energy beyond leading order in the 't Hooft coupling $\xi$. It would be interesting to reproduce this answer on the gravity side by computing the on-shell action including higher derivative corrections to the supergravity action. Naively the $R^4$ correction of eleven-dimensional supergravity should be all that is needed (see for example \cite{Tseytlin:2000sf}), and it is easy to verify that it scales in the correct way.
However, it is also apparent that the higher derivative corrections are UV divergent when evaluated on-shell and so a careful treatment of the divergences should be carried out to obtain a precise match. We leave this for future work.
The structure of the remainder of the paper is as follows. In section \ref{sec:QFT}, we review the localization of 5D SYM on $S^5$ and compute the WL vev at large $N$. In section \ref{sec:gravity}, we review the holographic dual geometry in ten dimensions and discuss its relation to the AdS$_7\times S^4$ solution of eleven-dimensional supergravity. In section \ref{sec:string} we introduce the fundamental string solution dual to the circular WL and discuss the one-loop action. In section \ref{sec:oneloop} we compute the one-loop partition function using the phase shift method and compare with a similar computation for ABJM in order to find a match with the QFT in section \ref{ratio}.
We also include three appendices on the details of the one-loop string action (appendix \ref{app:lagrangian}), and the computation of one-loop partition functions of the SYM string (appendix \ref{app:phaseshift}) and of the ABJM string (appendix \ref{app:Ads}).
\section{Super Yang-Mills on $S^5$}
\label{sec:QFT}
The construction of a maximally supersymmetric gauge theory on the round sphere is non-trivial, since introducing only the minimal coupling to the curved metric breaks supersymmetry. Progress in this direction was made in~\cite{Blau:2000xg, Minahan:2015jta}.
In these works an action for Euclidean maximally supersymmetric Yang-Mills (SYM) on a $d$-sphere $S^d$ is obtained by dimensional reduction from ten-dimensional SYM in flat space, with the introduction of a minimal coupling to the sphere metric and additional interaction terms. On the one hand, these additional terms break the original flat-space $R$-symmetry from $\SO(1,9-d)$ to $\SU(1,1) \times \SO(7-d)$; on the other, they guarantee the existence of sixteen real supercharges~\cite{Blau:2000xg, Minahan:2015jta}.
The corresponding Lagrangian~\cite{Blau:2000xg, Minahan:2015jta} is given by
\begin{equation}\label{Lagrangian-SYM}
\begin{split}
\mathcal L &= -{1\over 2 g^2_{\rm YM}} \text{Tr}~ \left( \frac 12 F_{MN} F^{MN}- \Psi \slashed{D} \Psi + {(d-4)\over 2 \mathcal R} \Psi \Gamma^{089} \Psi +{2(d-3)\over \mathcal{R}^2} \phi^A \phi_A+\right.
\\
& +\left. {d-2\over \mathcal R^2} \phi_i\phi^i+ {2 i\over 3 \mathcal R} (d-4) \left[\phi^A, \phi^B\right]\phi^C \varepsilon_{A B C}- K_m K^m \right)\,.
\end{split}
\end{equation}
Here, $\mathcal R$ is the radius of the $d$-dimensional sphere on which the theory lives. The indices $M, N=0, \dots, 9$ are the original ten-dimensional Lorentz indices, which split into the space-time indices $\mu=1, \dots, d$ on the sphere $S^d$ and the scalar indices $I, J=0, d+1, \dots, 9$ inherited from the reduction of the original ten-dimensional SYM theory. The scalar indices $I, J$ are further broken into $i, j=d+1, \dots, 7$ and $A, B=0, 8, 9$ by the terms involving the $S^d$ radius $\mathcal R$ in the above Lagrangian. The ten-dimensional Majorana-Weyl spinors $\Psi$ have 16 real components and obey the chirality condition $\Gamma_{11} \Psi=\Psi$. Finally, $K_m$ are auxiliary fields.
The above Lagrangian \eqref{Lagrangian-SYM} is in Lorentzian signature and needs to be Wick rotated, under which the scalar field transforms as $\phi^0 \to i \phi^0$ and the Lagrangian as $\mathcal L\to - i \mathcal L$.
In this paper, we are interested in the maximal SYM on a five-dimensional sphere $S^5$.
The $R$-symmetry group is then $\SU(1,1)\times \SO(2)$. The theory is Euclidean, and the corresponding spacetime symmetry group is $\SO(6)$.
The full supergroup of symmetries is the four-dimensional ${\cal N}=2$ superconformal group $\SU(4\vert 1,1)$.
Note that in five dimensions the coupling constant $g^2_{\rm YM}$ has negative mass dimension, so that five-dimensional maximal SYM theories are non-renormalisable.
At high energies, these theories are UV completed by the six-dimensional $(2,0)$ superconformal field theory \cite{Douglas:2010iu, Lambert:2010iw}.
For later convenience we introduce the 't~Hooft-like coupling constant%
\footnote{The constant $\xi$ is related to the 't~Hooft coupling constant $\lambda$ used in \cite{Bobev:2019bvq} by $\xi={\lambda\over 2\pi}$.}
\begin{equation}
\xi = \f{g^2_{\rm YM} N}{2\pi \mathcal R}\,.
\end{equation}
The theories described by the Lagrangian \eqref{Lagrangian-SYM} can be localized \cite{Pestun:2007rz, Minahan:2015jta, Minahan:2015any, Gorantis:2017vzz}, and the corresponding matrix-model partition function was given (up to instanton corrections) in \cite{Minahan:2015jta, Minahan:2015any, Gorantis:2017vzz} for any $d$.
The supercharge employed in \cite{Minahan:2015jta} localizes the theory on a locus described by vanishing gauge fields $A_\mu=0$ and vanishing scalar fields $\phi_I=0$ for $I\neq 0$, that is, all scalars except $\phi_0$. This is the field used to construct a (dimensionless) $N\times N$ Hermitian matrix $M$, after being Wick rotated and rescaled by $\mathcal R$.
In the large $N$-limit the gauge fixed partition function can be evaluated in terms of the eigenvalues of the matrix $M$.
Here, for completeness we write the corresponding large $N$ partition function for the five-dimensional case
\begin{equation}\label{Z-matrix-d5}
Z = {1\over N!} \int \prod_{i=1}^N \dd \mu_i \, e^{-S_\text{eff}}\,,
\end{equation}
where the effective action is given by
\begin{equation}\label{Seff-d5}
S_\text{eff}= {2 \pi^2 N \over \xi} \sum_{i=1}^N \mu_i^2- \sum_{i=1}^N \sum_{j\neq i} \log \vert \sinh(\pi (\mu_i-\mu_j))\vert\,,
\end{equation}
and $\mu_i$ are the eigenvalues of the $N\times N$ Hermitian matrix $M$.
In the large $N$ limit the saddle point equation is then
\begin{equation}\label{discrete-saddle-d5}
N{2\pi \over\xi}\mu_i= \sum_{j\neq i}\coth\big(\pi(\mu_i-\mu_j)\big)\,, \qquad \qquad i=1, \dots, N\,.
\end{equation}
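For moderate $N$ these coupled equations can also be solved directly with a numerical root finder. A minimal sketch (assuming NumPy and SciPy are available; the values $\xi=2$ and $N=10$ are arbitrary), which also illustrates that the eigenvalues settle symmetrically inside the interval $(-b,b)$ found from the continuum solution below:

```python
import numpy as np
from scipy.optimize import fsolve

xi, N = 2.0, 10
# continuum endpoint of the eigenvalue support, b = arccosh(e^{xi/2}) / pi
b = np.arccosh(np.exp(xi / 2)) / np.pi

def residual(mu):
    # residual of N (2 pi / xi) mu_i - sum_{j != i} coth(pi (mu_i - mu_j))
    diff = mu[:, None] - mu[None, :]
    coth = np.zeros((N, N))
    off = ~np.eye(N, dtype=bool)
    coth[off] = 1.0 / np.tanh(np.pi * diff[off])
    return 2 * np.pi * N * mu / xi - coth.sum(axis=1)

# spread-out initial guess avoids the collision singularities of coth
mu = fsolve(residual, np.linspace(-0.9 * b, 0.9 * b, N))
print(np.sort(mu))
```

The solution is symmetric about zero, and all eigenvalues lie strictly inside the continuum support, as expected at finite $N$.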
After introducing an eigenvalue distribution $\rho$ as follows
\begin{equation}
\rho(\mu) = {1\over N} \sum_{i=1}^N \delta(\mu-\mu_i)\,,
\end{equation}
and taking the large $N$ continuum limit, the saddle point equation \eqref{discrete-saddle-d5} becomes
\begin{equation}\label{saddle-eq-all-5d}
{2\pi\over \xi}\mu= \text{PV}\, \int_{-b}^{b} \rho(\mu^\prime)\, \coth \big(\pi(\mu-\mu^\prime)\big)\,\dd \mu^\prime\,.
\end{equation}
The integral equation \eqref{saddle-eq-all-5d} for $\rho$ and $b$ is well known.
Indeed, the partition function \eqref{Z-matrix-d5} and the corresponding saddle point equation \eqref{discrete-saddle-d5} appear in the matrix-model formulation of Chern-Simons theories on a three-dimensional sphere $S^3$ \cite{Aganagic:2002wv, Marino:2002fk, Tierz:2002jj, Marino:2004eq}.
The fact that the partition function for maximal SYM on a $S^5$ equals the partition function of Chern-Simons theories on $S^3$ was previously emphasized in~\cite{Kim:2012ava}.
The solution to the integral equation \eqref{saddle-eq-all-5d} is given by \cite{Marino:2004eq}
\begin{equation}\label{rho-exact-5}
\rho(\mu)={2 \over \xi} \arctan\left({\sqrt{e^{\xi}-\cosh^2\left({\pi\mu}\right)} \over \cosh\left({\pi \mu}\right)}\right)\,,
\end{equation}
and
\begin{equation}\label{b-exact-5}
b={1\over \pi}{\rm arccosh}\,(e^{\xi/2})\,.
\end{equation}
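As a numerical sanity check (a minimal Python sketch assuming SciPy; the value $\xi=2$ is arbitrary), one can verify that the density \eqref{rho-exact-5} with endpoint \eqref{b-exact-5} is unit-normalized and satisfies the saddle point condition with the kernel $\coth(\pi(\mu-\mu'))$ inherited from the discrete equation \eqref{discrete-saddle-d5}. The principal value is handled by subtracting the singular part and adding back its integral in closed form:

```python
import numpy as np
from scipy.integrate import quad

xi = 2.0
b = np.arccosh(np.exp(xi / 2)) / np.pi

def rho(m):
    # exact planar eigenvalue density
    c = np.cosh(np.pi * m)
    return (2 / xi) * np.arctan(np.sqrt(max(np.exp(xi) - c * c, 0.0)) / c)

norm, _ = quad(rho, -b, b)   # should integrate to 1

def saddle_lhs(mu):
    # PV integral of rho(m) coth(pi (mu - m)): subtract the singular piece
    # rho(mu) coth(pi (mu - m)) and add back its principal value analytically
    reg = lambda m: (rho(m) - rho(mu)) / np.tanh(np.pi * (mu - m))
    I, _ = quad(reg, -b, b, points=[mu], limit=200)
    pv = rho(mu) / np.pi * np.log(np.sinh(np.pi * (mu + b)) / np.sinh(np.pi * (b - mu)))
    return I + pv

mu = 0.2
print(norm, saddle_lhs(mu), 2 * np.pi * mu / xi)
```

The last two printed numbers agree, confirming that the density solves the saddle point equation for $|\mu|<b$.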
Notice that the solution \eqref{rho-exact-5}-\eqref{b-exact-5} is {\it exact} in the 't~Hooft coupling $\xi$. At leading order in the strong coupling expansion, the eigenvalue density $\rho$ and the endpoint of integration $b$ reduce to
\begin{equation}
\lim_{\xi\to\infty} \rho(\mu) ={\pi\over \xi} \,, \qquad \lim_{\xi\to\infty} b= {\xi\over 2\pi}\,,
\end{equation}
in perfect agreement with the leading-order results obtained in \cite{Bobev:2019bvq}.
Before discussing the Wilson loop, we report the large $N$ result for the free energy of maximal SYM on $S^5$,
\begin{equation}
{F \over N^2}={2 \pi^2\over \xi} \int_{-b}^b \rho(\mu) \mu^2 \dd \mu- \int_{-b}^b \dd \mu \, \rho(\mu) \int_{-b}^b \dd \mu^\prime \rho(\mu^\prime) \log\vert \sinh(\pi (\mu-\mu^\prime))\vert\,,
\end{equation}
which can be computed from the effective action \eqref{Seff-d5} in the continuum limit. The planar result can be read from the Chern-Simons free energy on $S^3$~\cite{Marino:2004eq}
\begin{equation}
{F \over N^2}= -{\xi\over 6}+{\pi^2\over 3\xi } -{2\zeta(3)\over \xi^2}+ \mathcal O(e^{-\xi})\,.
\end{equation}
The leading order was also obtained in \cite{Kim:2012ava, Kallen:2012zn, Minahan:2013jwa, Bobev:2019bvq} from a five-dimensional point of view.
\subsection{$\frac 12$-BPS Wilson loop expectation value from localization}
In this paper our main focus is on the $\frac 12 $-BPS Wilson loop operator and its vacuum expectation value. As discussed in \cite{Bobev:2019bvq}, its vev can be computed using the localization procedure sketched above. Here we extend the result of \cite{Bobev:2019bvq} to all orders in the coupling constant $\xi$ while remaining at large $N$.
The Wilson loop in question wraps the equator of the 5-sphere and its expectation value is
\begin{equation}
\langle \mathcal W \rangle = \left\langle {\rm Tr} \left(P e^{i \oint A_\mu \dd x^\mu+ i \oint \dd s \, \phi^0} \right)\right\rangle\,,
\end{equation}
where $A_\mu$ is the five-dimensional gauge field and $\phi^0$ is the ``timelike'' scalar that does not vanish on the localization locus.
The gauge field, on the other hand, does vanish, and the WL vev can be evaluated by taking the continuum limit and keeping only the leading term in the large $N$ expansion%
\footnote{We have restored an explicit factor of $N$ which was omitted in \cite{Bobev:2019bvq} due to a different normalization convention for the WL operator.}
\begin{equation}\label{W5-final}
\langle \mathcal W \rangle = N \int_{-b}^b \rho(\mu) e^{2\pi \mu} \dd \mu+ \mathcal O\left({1\over N}\right)=\frac{N}{\xi }\left(e^{\xi}- 1 \right) +\mathcal O\left({1\over N}\right)\,.
\end{equation}
This is the expectation value of a $\frac 12$-BPS Wilson loop located on the equator of the sphere $S^5$ at large $N$ but for {\it any} 't~Hooft coupling $\xi$.%
\footnote{We refer the reader to \cite{Gopakumar:1998ki} for analogous results for Wilson loops in Chern-Simons theories on $S^3$.}
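The closed form in \eqref{W5-final} can be checked numerically by integrating the exact density against $e^{2\pi\mu}$ (a minimal sketch assuming SciPy; the sample values of $\xi$ are arbitrary):

```python
import numpy as np
from scipy.integrate import quad

def wilson_loop(xi):
    # <W>/N at large N: the exact density integrated against exp(2 pi mu)
    b = np.arccosh(np.exp(xi / 2)) / np.pi
    def rho(m):
        c = np.cosh(np.pi * m)
        return (2 / xi) * np.arctan(np.sqrt(max(np.exp(xi) - c * c, 0.0)) / c)
    val, _ = quad(lambda m: rho(m) * np.exp(2 * np.pi * m), -b, b)
    return val

for xi in (1.0, 2.0, 5.0):
    print(xi, wilson_loop(xi), np.expm1(xi) / xi)
```

For each $\xi$ the quadrature reproduces $(e^\xi-1)/\xi$ to numerical precision.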
For large values of the 't Hooft coupling constant $\xi$, the $\frac 12$-BPS Wilson loop expectation value approaches
\begin{equation}\label{logW5-exp-cl}
\lim_{\xi \to \infty} \log \langle \mathcal W \rangle =\xi \,,
\end{equation}
which is the classical result derived in \cite{Bobev:2019bvq} both from field theory and supergravity.
Considering the next-to-leading order in $\xi$, we have for the $\frac 12$-BPS Wilson loop VEV
\begin{equation}\label{WL-strong-coupling-exp}
\langle \mathcal W \,\rangle = N {e^\xi\over \xi}\left(1+\mathcal O(e^{-\xi})\right)+ \mathcal O\left({1\over N}\right)\,.
\end{equation}
The goal of the next sections is to reproduce the $\frac 12$-BPS Wilson loop VEV in a string theory setting. In particular,
the exponential behaviour in the large $\xi$-expansion corresponds to (minus) the classical action of the dual string \cite{Bobev:2019bvq}, cf. section \ref{classical}, while the prefactor is encoded in the one-loop string partition function, cf. sections \ref{sec:oneloop}-\ref{ratio}.
\section{Spherical D4 branes}
\label{sec:gravity}
\subsection{Solution of type IIA$^*$}
The holographic dual to SYM on $S^5$ was constructed in \cite{Bobev:2018ugk} by first solving BPS equations in seven-dimensional maximal supergravity and then subsequently uplifting to ten dimensions. Since we use slightly different coordinates here, we will review the full supergravity solution.
Given that we are working with Euclidean branes, the proper framework is the so-called type II$^*$ theories of Hull~\cite{Hull:1998ym,Hull:1998vg,Hull:1998fh}. The only difference with the standard type II theories is that the RR-fields are purely imaginary.\footnote{The II$^*$ nature of the theory does not play a fundamental role in this paper; the imaginary form fields can be thought of as the result of a formal analytic continuation of the supergravity background.}
The ten-dimensional metric is given by
\begin{equation}\label{metricsigma}
\mathrm{d} s_{10}^2 = \ell_s^2(N\pi\mathrm{e}^{\Phi})^{2/3}\,\Bigg[\f{4\big(\mathrm{d}\sigma^2+ \mathrm{d}\Omega_5^2\big)}{\sinh^2\sigma}+ \mathrm{d}\theta^2 + \cos^2\theta\, \mathrm{d} s_{\text{dS}_2}^2 + \f{\sin^2\theta\, \mathrm{d} \phi^2}{1-\f{h^2}{4}\tanh^2\sigma\, \sin^2\theta} \Bigg]\,,
\end{equation}
where the ten-dimensional dilaton $\Phi$ is
\begin{equation}\label{dilaton-10}
\mathrm{e}^{\Phi} = \f{\xi^{3/2}}{N\pi}\bigg(\coth^2\sigma-\f{h^2}{4} \sin^2\theta\bigg)^{3/4}\,.
\end{equation}
This background exhibits
\begin{equation}
\mathop{\rm SO}(6)\times \mathop{\rm SO}(1,2)\times \mathop{\rm {}U}(1)\,
\end{equation}
continuous symmetry, in complete agreement with the field theory. The five-sphere $\mathrm{d}\Omega_5^2$ is where the field theory lives, and $0\le\sigma<\infty$ plays the role of a radial direction. The solution depends on three (dimensionless) parameters $\xi$, $h$, and $N$.
The integer $N$ denotes the number of D4-branes and is taken to be large to ensure that the length scales set by the metric are large in string units.
Next, we have $\xi$, which we already encountered in section \ref{sec:QFT} and which is related to the Yang-Mills coupling constant in the QFT.
Finally, we seem to have one more parameter $h$. For all $0\le h \le 2$ the metric defines a regular background of type IIA$^*$ supergravity. Note that for $h=0$ the symmetry of the background is enhanced to $\mathop{\rm SO}(5)\times \mathop{\rm SO}(1,4)$ which does not match the expected symmetry of the QFT. Indeed in \cite{Bobev:2018ugk} it was determined that for $h=1$ we obtain the relevant supersymmetric background dual to SYM on $S^5$. In this paper we will focus on $h=1$, but will keep $h$ unfixed throughout the computation.
In addition to the metric and dilaton we have the gauge potentials
\begin{equation}\label{BC-fields}
\begin{split}
B_2 &= \f{h \xi\ell_s^2}{2}\cos^3\theta\,\text{vol}_{\text{dS}_2}\,,\\
C_1 &= \f{hi N\pi\xi\ell_s}{2}(N\pi\mathrm{e}^{\Phi})^{-4/3}\sin^2\theta\,\mathrm{d}\phi\,,\\
C_3 &= -iN\pi\ell_s^3\cos^3\theta\,\mathrm{d}\phi\wedge\text{vol}_{\text{dS}_2}\,.
\end{split}\end{equation}
From these we can compute the NSNS and RR field strengths
\begin{equation}\label{BC-fields-v2}
H_3=\mathrm{d} B_2\,,\quad F_2 = \mathrm{d} C_1\,,\quad F_4 = \mathrm{d} C_3 - H_3 \wedge C_1\,.
\end{equation}
In \cite{Bobev:2019bvq}, the geometry \eqref{metricsigma} was used to compute the holographic free energy of SYM on $S^5$. This was done by first reducing to six-dimensional supergravity, and then evaluating the regularized on-shell action. In addition to standard infinite counterterms required to regularize the bare evaluation of the on-shell action, a number of finite counterterms had to be considered. These ultimately allowed for a successful match with the QFT.
As mentioned above, the metric \eqref{metricsigma} is completely regular. In particular, for large $\sigma$ the metric takes the form
\begin{equation}
\mathrm{d} s_{10}^2 \to \ell_s^2(N\pi\mathrm{e}^{\Phi})^{2/3}\,\Bigg[16\big(\mathrm{d} r^2+ r^2\mathrm{d}\Omega_5^2\big)+ \mathrm{d}\theta^2 + \cos^2\theta\, \mathrm{d} s_{\text{dS}_2}^2 + \f{\sin^2\theta\, \mathrm{d} \phi^2}{1-\f{h^2}{4} \sin^2\theta} \Bigg]\,,
\end{equation}
where we have changed coordinates $r=\mathrm{e}^{-\sigma}$. Here we see that as $r\to 0$, the five-sphere smoothly shrinks down to zero size without introducing irregularities in the metric (or other supergravity fields). In the opposite limit, as $\sigma\to0$ we get back the flat-space D4 brane solution:
\begin{equation}\label{metricUVlimit}
\mathrm{d} s_{10}^2 \to \xi \ell_s^2\left[4U^{3/2} \mathrm{d}\Omega_5^2 + \f{\mathrm{d} U^2 + U^2 \mathrm{d} s_{\text{dS}_4}^2 }{U^{3/2}}\right]\,,
\end{equation}
where we have momentarily changed coordinates $\sinh\sigma = U^{-1/2}$ and we have combined the metric on dS$_2$ with the $\theta$ and $\phi$ coordinates to form the metric on dS$_4$. The coordinate $U$ measures the distance to the stack of branes which are formally located at $U=0$.
Here, we clearly see that the five-sphere acts as the brane world-volume, whereas the remaining five directions are transverse to the brane. We also note that the isometry of the solution is enhanced in this limit to $\mathop{\rm SO}(1,4)$ which is the R-symmetry of the UV field theory.
\subsection{11D solution}
The metric for D4 branes in flat space develops a singularity in the far UV ($U\gg1$), where the dilaton also blows up
\begin{equation}
\mathrm{e}^{\Phi} = \f{\xi^{3/2}U^{3/4}}{N\pi}\,.
\end{equation}
This solution should therefore be reinterpreted in eleven dimensions as the geometry around a stack of M5 branes. As explained in \cite{Hull:1998ym,Hull:1998vg,Hull:1998fh,Bobev:2018ugk}, the solutions of type IIA$^*$-theory are uplifted to the exotic M$^*$-theory with $(2,9)$ signature. This means that the extra coordinate introduced during the uplift is in fact timelike.
For completeness we review here the uplift of the spherical D4 brane solution in \eqref{metricsigma}. The eleven-dimensional metric is obtained by combining the ten-dimensional metric with the dilaton and $C_1$ form (see for example \cite{Bobev:2018ugk}),
\begin{equation}\label{11DMetric}
\mathrm{d} s_{11}^2 = L_{\text{AdS}_7}^2\,\Bigg[\f{\mathrm{d}\sigma^2+ \mathrm{d}\Omega_5^2}{\sinh^2\sigma}-\f{\mathrm{d} t^2}{\tanh^2\sigma}+ \f14\Big(\mathrm{d}\theta^2 + \cos^2\theta\, \mathrm{d} s_{\text{dS}_2}^2 + \sin^2\theta\, (\mathrm{d} \phi-h\mathrm{d} t)^2 \Big) \Bigg]
\end{equation}
where the eleventh coordinate is $x_{11}=2N\pi i\ell_s t/\xi$ and is taken to be imaginary to implement the timelike uplift. We note that $t$ is periodic with period $\xi/N$, and the AdS$_7$ length scale $L_{\text{AdS}_7}$ is related to ten-dimensional quantities through
\begin{equation}\label{LAdS7}
L_{\text{AdS}_7}^3 = 8\pi N \ell_s^3\,.
\end{equation}
The three-form is constructed similarly yielding
\begin{equation}
A_3 = -\f{i L_{\text{AdS}_7}^3}{8}\cos^3\theta\,\left(\mathrm{d}\phi-h\mathrm{d} t\right)\wedge\text{vol}_{\text{dS}_2}\,.
\end{equation}
We note that the parameter $\xi$ completely drops out of these expressions, whereas $h$ can be absorbed into a coordinate redefinition $\phi\mapsto \tilde \phi + h t$.
In fact, this is just the metric on AdS$_7\times$dS$_4$ which is the near horizon geometry of $N$ M5 branes in the M$^*$-theory. This geometry is the holographic dual to the six-dimensional $(2,0)$ theory with non-compact R-symmetry.
\section{Holographic Wilson loop}
\label{sec:string}
The holographic dictionary dictates that the vev of a supersymmetric Wilson loop can be computed by evaluating the partition function of a string which satisfies boundary conditions compatible with the WL~\cite{Maldacena:1998im}. To leading order in the large $\xi$ limit, the partition function reduces to the on-shell action of the string. In this paper we are interested in the subleading correction to this leading order answer, and so we keep the first two orders in the large $\xi$ expansion of the partition function:
\begin{equation}
\label{log-Z-tocompute}
\log Z \approx -S_\text{classical}-S_\text{FT} + \log\text{Sdet}^{-1/2} \mathbb{K}= -S_\text{classical}-S_\text{FT}-\Gamma_{\mathbb{K}}\,.
\end{equation}
In this expression the partition function $Z$ of the string is expanded in terms of the classical action $S_\text{classical}$ at order $\xi^1$ as well as two ``quantum'' corrections at order $\xi^0$. The first correction term is the Fradkin-Tseytlin action $S_{\rm FT}$ evaluated on-shell, which we review below. The second correction, $\Gamma_{\mathbb{K}}$, is the one-loop partition function of bosonic and fermionic fluctuations around the classical configuration of the string. Since only the leading order action for the fluctuations is kept, the path integral is Gaussian and reduces to the determinant of bosonic and fermionic operators. We collectively denote the second order operators by $\mathbb{K}$, and their determinants by $\text{Sdet}\, \mathbb{K}$.
Before embarking on the journey of computing these three terms, we give a general discussion of the ingredients in the string action and introduce our notation.
The worldsheet action of the string we will use consists of three parts
\begin{equation}
S = S_\text{bosons} + S_\text{fermions} + S_\text{FT}\,.
\end{equation}
First we have the Polyakov action%
\footnote{Our worldsheets are Euclidean which explains the $i$ multiplying the two-form.}
\begin{equation}\label{BosonicAction}
S_\text{bosons} = \f{1}{4\pi\ell_s^2}\int\Big( \gamma^{ij}G_{ij}\text{vol}_\gamma + 2 i B_{ij}\mathrm{d} x^i \wedge \mathrm{d} x^j\Big)\,,
\end{equation}
where $i,j=1,2$ denote the two-dimensional worldsheet indices and $\gamma_{ij}$ is the worldsheet metric. Here $G_{\mu\nu}$ are the ten-dimensional metric components and $B_{\mu\nu}$ are the components of the 2-form field $B_2$. The notation $G_{ij}$, where we use two-dimensional indices on a ten-dimensional object, refers to the pull-back of the ten-dimensional tensor down to two dimensions
\begin{equation}
G_{ij} = G_{\mu\nu}\partial_i X^\mu \partial_j X^\nu\,,
\end{equation}
where $X^\mu$ are the ten scalar fields living on the worldsheet, and in this context, can be thought of as defining the embedding of the string into the ten-dimensional geometry.
Next, we have the so-called Fradkin-Tseytlin (FT) \cite{Fradkin:1984pq,Fradkin:1985ys} action which introduces a dilaton coupling on the worldsheet
\begin{equation}\label{FTAction}
S_\text{FT}=\f1{4\pi}\int_M \Phi R_\gamma\text{vol}_\gamma+\f1{2\pi} \int_{\partial M} \Phi K\mathrm{d} s\,,
\end{equation}
where $K$ is the geodesic curvature and $\mathrm{d} s$ is the reparametrization invariant measure on the boundary.
We have included the boundary term to ensure that the dilaton coupling correctly accounts for string loop counting even on worldsheets with boundaries. For constant dilaton this simply gives $S_\text{FT} = \chi \Phi_0$, as expected.
Two remarks are in order about the FT action which will be crucial below. First, we note that in the large $\xi$ expansion,%
\footnote{Or in fact any derivative expansion, where the momenta of the string modes are small compared to the length scales set by the classical geometry.}
the FT action \eqref{FTAction} should be thought of as subleading when compared with the bosonic action \eqref{BosonicAction} above. This can be seen from the fact that the bosonic action \eqref{BosonicAction} is of order $T=1/(2\pi\ell_s^{2})$, whereas the FT term \eqref{FTAction} is of order $T^0$.
The second, related point is that the FT action \eqref{FTAction} classically violates Weyl invariance of the worldsheet theory: the classical energy-momentum tensor computed from the FT action has non-zero trace. However, as we will review below, the classical Weyl ``anomaly'' of the FT action is exactly compensated by the one-loop quantum Weyl anomaly of the bosonic theory in \eqref{BosonicAction} as well as the fermionic terms discussed below. This pattern of cancellation of the Weyl anomaly is expected to persist at subleading orders, such that for example the one-loop Weyl anomaly of the FT term cancels the two-loop anomaly of $S_\text{bosons}+S_\text{fermions}$, and so on.
Finally, the Green-Schwarz (GS) action, which couples the ten-dimensional, 32-component worldsheet GS fermions $\theta$ to the background geometry, reads \cite{Cvetic:1999zs}
\begin{equation}
\label{FermionicAction}
S_\text{fermions} = -\frac{1}{2\pi \ell_s^2}\int\Big\{ i\bar\theta P^{ij} \Gamma_i D_j \theta - \frac{i}{8}\bar\theta P^{ij} \Gamma_{11}\Gamma_i^{\p{i}\mu\nu}H_{j\mu\nu} \theta+\frac{i}{8} \mathrm{e}^{\Phi}\bar\theta P^{ij}\Gamma_i(-\Gamma_{11}\slashed{F}_2+\slashed{F}_4)\Gamma_j\theta\Big\}\,,
\end{equation}
where $\Gamma_\mu$ are the ten-dimensional gamma matrices,
$\Gamma_{11}$ is the chirality operator and
\begin{equation}
P^{ij} = \sqrt{\gamma} \gamma^{ij} - i\epsilon^{ij}\Gamma_{11}\,.
\end{equation}
Once again, the pull-back of ten-dimensional indices is implied in our notation, for example $\Gamma_i = \partial_i X^\mu \Gamma_\mu$.
In this paper we will work in static gauge where the two worldsheet coordinates are directly identified with corresponding ten-dimensional coordinates. This means that $\partial_i X^\mu = \delta_i^\mu$.
\subsection{The string configuration}
We consider a Wilson loop wrapping the equator of $S^5$. This is dual to a fundamental string wrapping the same equator and extending along the $\sigma$-coordinate.
The classical string solution was presented in~\cite{Bobev:2019bvq}. In static gauge, the two worldsheet coordinates are identified with the ten-dimensional coordinates $\sigma$ and $\tau$ and $G_{ij} = \gamma_{ij}$.
Here the coordinate $\tau$ has been introduced to parametrize the equator of $S^5$. Minimizing the action forces the remaining scalars $X$ to be constant
\begin{equation}
\label{classicalSol}
\theta = 0~,\qquad \text{equator of $S^5$}~,\qquad \text{any fixed point on dS$_2$}~.
\end{equation}
The worldsheet metric is then
\begin{equation}\label{classicalMetric}
\mathrm{d} s_2^2= \mathrm{e}^{2\rho}\Big(\mathrm{d}\sigma^2+ \mathrm{d}\tau^2\Big)~,
\end{equation}
where
\begin{equation}\label{D4metric}
\mathrm{e}^{2\rho} = \f{4\xi \ell_s^2}{\tanh\sigma\sinh^2\sigma} \,,
\end{equation}
is the conformal factor. In the conformal coordinates used in \eqref{classicalMetric}, the volume form is given by $\text{vol}_\gamma = \mathrm{e}^{2\rho}\mathrm{d}\sigma\wedge\mathrm{d}\tau$ and the curvature scalar is
\begin{equation}\label{ricciscalar}
\mathrm{e}^{2\rho}\,R_\gamma = -2\partial_\sigma^2\rho=\f{\text{sech}\,^2\sigma-4}{\sinh^2\sigma}\,.
\end{equation}
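The closed form \eqref{ricciscalar} follows from the conformal factor \eqref{D4metric} by straightforward differentiation; a short sympy verification (our check, not part of the original derivation):

```python
import sympy as sp

sigma, xi, ls = sp.symbols('sigma xi ell_s', positive=True)

# conformal factor e^{2 rho} of eq. (D4metric)
rho = sp.log(4*xi*ls**2/(sp.tanh(sigma)*sp.sinh(sigma)**2))/2

# for ds^2 = e^{2 rho}(d sigma^2 + d tau^2) one has e^{2 rho} R_gamma = -2 rho''
lhs = -2*sp.diff(rho, sigma, 2)
rhs = (sp.sech(sigma)**2 - 4)/sp.sinh(sigma)**2     # claimed closed form

for s0 in (sp.Rational(3, 10), 1, 2):
    assert abs(sp.N((lhs - rhs).subs({sigma: s0, xi: 1, ls: 1}))) < 1e-9
```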
The classical string solution can be uplifted to an M2-brane in eleven dimensions. The M2-brane wraps the eleven-dimensional directions $\tau$ and $t$ (see \eqref{11DMetric}), and extends in the $\sigma$ direction. The metric on the M2-brane worldvolume is
\begin{equation}\label{M2-metric}
\mathrm{d} s_{\text{M2}}^2 = L_{\text{AdS}_7}^2\,\Bigg[\f{\mathrm{d}\sigma^2+ \mathrm{d}\tau^2}{\sinh^2\sigma}-\f{\mathrm{d} t^2}{\tanh^2\sigma}\Bigg]\,,
\end{equation}
which is just the metric on AdS$_3$ with boundary topology ${\bf T}^2$.
\subsection{The classical on-shell action}
\label{classical}
In this section we review the classical action of the string described by the embedding \eqref{classicalSol}.
As mentioned in section \ref{sec:introduction}, this was discussed in~\cite{Bobev:2019bvq}, however here we provide a different way to regularise the classical string action by its Legendre transform~\cite{Drukker:1999zq}.
Firstly, we notice that the pull-back of the $B$-field \eqref{BC-fields} vanishes.
Hence, the on-shell action of the string \eqref{BosonicAction} takes the simple form
\begin{equation}\label{Scl}
S_\text{classical} = \f{1}{2\pi\ell_s^2}\int \mathrm{e}^{2\rho} \,\mathrm{d} \sigma \,\mathrm{d} \tau = 4\xi\int \f{\mathrm{d} \sigma}{\tanh\sigma\sinh^2\sigma} = -\f{2\xi}{3} +\f{2\xi}{\varepsilon^2}~.
\end{equation}
This diverges at the boundary $\sigma=\varepsilon\to 0$. In order to regularize the integral we must Legendre transform to the variables that are conjugate to the directions transverse to the Wilson loop \cite{Drukker:1999zq} (see also \cite{Agarwal:2009up,Young:2011aa} for a similar setup to ours). These are properly identified in the UV limit of our ten-dimensional solution.
This limit was already discussed in section \ref{sec:gravity} and the metric was given in \eqref{metricUVlimit}. The appropriate Legendre transformation should be done with respect to the UV coordinates, i.e. the five angles parametrizing $\mathrm{d} \Omega_5^2$, and the five flat space coordinates used to parametrize
$$\mathrm{d} s_{\mathbf{R}^{1,4}}^2=\mathrm{d} U^2 + U^2 \mathrm{d} s_{\text{dS}_4}^2\,, \qquad \text{where} \qquad U=\sinh^{-2}\sigma\,.
$$
The latter we can write as $x^a = U \hat\theta^a$ where $a=1,\cdots,5$. The angular variables $\hat\theta^a$ parametrize a unit radius dS spacetime through the constraint $\hat\theta^a\hat\theta^b\eta_{ab} = 1$ and $\eta_{ab}=\text{diag}(1,1,1,1,-1)$. Only these five flat space coordinates are of interest to us as they explicitly depend on $U=\sinh^{-2}\sigma$. We now compute the term required for the Legendre transform and treat that as a counterterm for the classical action\footnote{We ignore the $B$-field term as it does not play a role for our solution.}
\begin{equation}
S_\text{ct}=-\int\partial_i\left(X^\mu \f{\delta {\cal L}}{\delta \partial_i X^\mu}\right) \mathrm{d}\sigma\mathrm{d}\tau= -\f{1}{2\pi\ell_s^2}\int\sqrt{\gamma}\gamma^{ij}\partial_i(X^\mu\partial_j X_\mu)\mathrm{d}\sigma\mathrm{d}\tau\,.
\end{equation}
Reexpressing this as a boundary integral at fixed small $\sigma$, and using the flat $D$-brane coordinates just defined, we find
\begin{equation}\label{Sct-0}
S_\text{ct}=\f{\xi}{2\pi} \int_{\partial M}\f{\partial_\sigma U}{U^{1/2}} \mathrm{d} \tau= -\f{2\xi}{\varepsilon^2}-\f{\xi}{3}\,.
\end{equation}
Combining the bulk term \eqref{Scl} with the counterterm \eqref{Sct-0}, we obtain the final answer
\begin{equation}
\label{s-classical}
S_\text{classical} + S_\text{ct}= -\xi\,.
\end{equation}
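The expansions \eqref{Scl} and \eqref{Sct-0}, as well as the finite limit \eqref{s-classical}, can be verified symbolically. The following sympy sketch is our consistency check (not part of the original derivation), with the $\tau$ integral supplying the factor of $2\pi$ as above:

```python
import sympy as sp

eps, xi, sigma = sp.symbols('epsilon xi sigma', positive=True)

# bulk action, eq. (Scl): integrand 1/(tanh sigma sinh^2 sigma) = cosh sigma/sinh^3 sigma,
# with antiderivative F = -1/(2 sinh^2 sigma)
F = -1/(2*sp.sinh(sigma)**2)
assert sp.simplify(sp.diff(F, sigma) - sp.cosh(sigma)/sp.sinh(sigma)**3) == 0
S_cl = 4*xi*(sp.limit(F, sigma, sp.oo) - F.subs(sigma, eps))
assert sp.simplify(sp.series(S_cl, eps, 0, 1).removeO()
                   - (2*xi/eps**2 - 2*xi/3)) == 0

# counterterm, eq. (Sct-0): S_ct = xi (dU/d sigma)/sqrt(U) at sigma = eps, U = 1/sinh^2 sigma
U = 1/sp.sinh(sigma)**2
S_ct = xi*(sp.diff(U, sigma)/sp.sqrt(U)).subs(sigma, eps)
assert sp.simplify(sp.series(S_ct, eps, 0, 1).removeO()
                   - (-2*xi/eps**2 - xi/3)) == 0

# the regularized classical action is finite, eq. (s-classical): S_cl + S_ct -> -xi
assert sp.simplify(sp.limit(S_cl + S_ct, eps, 0) + xi) == 0
```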
When compared with minus the logarithm of the Wilson loop expectation value \eqref{logW5-exp-cl} as computed in the QFT in section \ref{sec:QFT}, we see that this precisely agrees with the leading order answer in the large $\xi$ expansion.
For the remainder of this paper we will focus on extracting the next-to-leading order contribution in string theory and comparing with the QFT result.
The result \eqref{s-classical} can also be obtained by evaluating the classical volume of the M2 brane \eqref{M2-metric}
\begin{equation}
S_{\text{M2}} = \f{1}{(2\pi)^2\ell_s^3} \int \sqrt{-g_{\text{M2}}}\, \mathrm{d}\sigma\,\mathrm{d}\tau\,\mathrm{d} t\,.
\end{equation}
Using the periodicity of $t$ and the relation \eqref{LAdS7} we recover \eqref{Scl}. In this case the volume can be regularized by introducing a boundary counterterm which is proportional to the boundary area of the M2 brane and the final result is again \eqref{s-classical}.
\subsection{The one-loop string action}
In this subsection we work out the string action at order $\xi^0$, which comprises two terms, the FT action and the action for the quantum fluctuations.
As we explained above, the FT action gives a contribution which is of order $\xi^0$ even though it is a classical term in the string action.
\subsubsection{Fradkin-Tseytlin action}
In order to evaluate the FT action \eqref{FTAction} on-shell, we need the pull-back of the dilaton \eqref{dilaton-10}, which is
\begin{equation}\label{dilaton}
\mathrm{e}^{2\Phi_0}\equiv P[\mathrm{e}^{2\Phi}]=\f{\xi^3 }{ N^2 \pi^2}\coth^3\sigma \,.
\end{equation}
This should be evaluated directly in the action \eqref{FTAction}. For this we need, in addition to the curvature scalar in \eqref{ricciscalar}, the geodesic curvature of the boundary, which is located at a fixed small $\sigma$. This is easily computed in conformal coordinates, and takes the form
\begin{equation}
K \mathrm{d} s = (\nabla^\mu n_\mu)\mathrm{e}^\rho\mathrm{d}\tau=\partial_\sigma\rho\,\mathrm{d}\tau\,.
\end{equation}
Combining these expressions, we obtain
\begin{equation}\label{FTtermdivergent}
S_\text{FT}=-\int_\varepsilon^\infty\Phi_0\partial_\sigma^2\rho\,\mathrm{d}\sigma + \Phi_0\partial_\sigma \rho\Big|_{\sigma = \varepsilon} = \f{9}{4\varepsilon} + {\cal O}(\varepsilon^0)\,,
\end{equation}
where we have performed the integration over the angular variable $\tau$.
We easily see that, even including the boundary term, this expression has divergences that do not cancel in the limit $\varepsilon\to0$. We will argue that this divergence is cancelled by a divergence of the one-loop fluctuations of the string worldsheet. This is the first indication that treating the terms $S_\text{FT}$ and $\Gamma_\mathbb{K}$ separately leads to inconsistencies which are hard to resolve. As we will emphasize, these terms should be treated together to obtain a finite one-loop correction to the holographic Wilson loop.
\subsubsection{Second order fluctuations}
We now turn to the quantum fluctuations of the worldsheet fields around the classical configuration \eqref{classicalSol}. This leads to a Gaussian model whose partition function is formally expressed in terms of determinants. The evaluation of the determinants will be the subject of the next section (Sec. \ref{sec:oneloop}), but here we summarize the structure of the second order action; the derivation is carried out in appendix \ref{app:lagrangian}.
The Gaussian bosonic fields consist of eight scalar modes that determine the fluctuations of the embedding of the worldsheet in the ten-dimensional geometry. Let $X_\text{cl}$ denote the classical embedding of the string in the ten-dimensional geometry defined in \eqref{classicalSol}. The scalar fluctuations around this solution can then be written
\begin{equation}
X^\mu = X_\text{cl}^\mu + \delta X^\mu = X_\text{cl}^\mu + E^\mu_{\hat \mu} \zeta^{\hat \mu}\,,
\end{equation}
where $E^\mu_{\hat \mu}$ are the ten-dimensional vielbein, $E^\mu_{\hat\mu} E^\nu_{\hat \nu}\delta^{\hat \mu\hat \nu} = G^{\mu\nu}$.
A priori there are ten scalar fields $\zeta^{\hat\mu}$ as well as the dynamical worldsheet metric, but in static gauge these are reduced to eight.
For our diagonal ten-dimensional metric, this simply means that $\zeta^{\hat\sigma}$ and $\zeta^{\hat\tau}$ are set to zero.
To underline the fact that we are only working with the transverse fluctuations we use $\zeta^a$ where now $a=1, \dots, 8$ (see appendix \ref{app:lagrangian}).
For a proper treatment of the theory, in place of static gauge one should use conformal gauge, which keeps the conformal factor of the metric as well as the longitudinal modes active.
In addition, one then has to properly treat the reparametrization $bc$-ghost system as explained in \cite{Drukker:2000ep,Forini:2015mca}. The proper treatment ultimately leads to the same result as the static gauge approach where these extra modes are effectively cancelled against the ghosts%
\footnote{Some care must be exercised when treating the ghost determinant and its cancellation against the longitudinal modes. The $bc$ ghost determinant must exclude the zero modes which correspond to conformal Killing vectors, as these are already accounted for in the measure.}.
For this reason, in this paper we choose the simpler static gauge and refer the reader to \cite{Drukker:2000ep,Forini:2015mca} for a detailed account on the difference between the two gauges.
There are eight fermionic fields, just like the scalars. These originate as 32 real target space fermions. The $\kappa$-symmetry gauge fixing reduces the physical modes to 16, which are then mapped to 8 two-dimensional fermions which we denote by $\theta_a$. The combined second order action can be written as
\begin{equation}
\label{sk}
S_{\mathbb{K}} = \f{1}{4\pi \ell_s^2}\int\sqrt{\gamma} \left( \zeta^a{\cal K}_{ab}\zeta^b + \bar\theta^a{\cal D}_{ab}\theta^b\right)\mathrm{d}^2\sigma\,.
\end{equation}
Here the bosonic operators are diagonal with degeneracies $(4,2,2)$:
\begin{equation}
{\cal K}_{ab} = \text{diag}({\cal K}_{x},{\cal K}_{x},{\cal K}_{x},{\cal K}_{x} ,{\cal K}_{y},{\cal K}_{y},{\cal K}_{z},{\cal K}_{z})\,.
\end{equation}
Explicitly the operators take the form
\begin{equation}\label{op-K-Ktilde}
{\cal K}_a = \mathrm{e}^{-2\rho}\tilde {\cal K}_a\,,\qquad \tilde {\cal K}_a=-\partial_\sigma^2 - \partial_\tau^2 + E_a\,,
\end{equation}
with
\begin{equation}\label{theEs}
\begin{split}
E_x &= \partial_\sigma^2 \rho+(\partial_\sigma \rho)^2-1 = \f{7+8\cosh 2\sigma}{\sinh^2 2\sigma}\,,\\
E_y &=\f12 \partial_\sigma^2 \rho= \f{1+2\cosh 2\sigma}{\sinh^2 2\sigma}\,,\\
E_z &=\f{1-3h^2}{2}\partial_\sigma^2 \rho+h^2(\partial_\sigma\rho)^2-h^2= \f{1+2h^2+2(1-h^2)\cosh 2\sigma}{\sinh^2 2\sigma}\,.
\end{split}
\end{equation}
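The closed forms in \eqref{theEs} can be checked directly against derivatives of the conformal factor $\rho$; a short sympy verification (our check, not part of the original text):

```python
import sympy as sp

sigma, h = sp.symbols('sigma h', positive=True)

# conformal factor up to an additive constant (which drops out of derivatives)
rho = -(sp.log(sp.tanh(sigma)) + 2*sp.log(sp.sinh(sigma)))/2
rp, rpp = sp.diff(rho, sigma), sp.diff(rho, sigma, 2)

# left-hand sides of eq. (theEs)
Ex = rpp + rp**2 - 1
Ey = rpp/2
Ez = (1 - 3*h**2)/2*rpp + h**2*rp**2 - h**2

# claimed closed forms
Ex_claim = (7 + 8*sp.cosh(2*sigma))/sp.sinh(2*sigma)**2
Ey_claim = (1 + 2*sp.cosh(2*sigma))/sp.sinh(2*sigma)**2
Ez_claim = (1 + 2*h**2 + 2*(1 - h**2)*sp.cosh(2*sigma))/sp.sinh(2*sigma)**2

for s0 in (sp.Rational(1, 2), 1, 2):
    for h0 in (0, sp.Rational(1, 2), 1):
        vals = {sigma: s0, h: h0}
        assert abs(sp.N((Ex - Ex_claim).subs(vals))) < 1e-9
        assert abs(sp.N((Ey - Ey_claim).subs(vals))) < 1e-9
        assert abs(sp.N((Ez - Ez_claim).subs(vals))) < 1e-9
```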
The fermionic operators are likewise diagonal with all identical entries ${\cal D}_{ab} = {\cal D} \delta_{ab}$ where
\begin{equation}\label{op-D-Dtilde}
\mathcal{D} = \mathrm{e}^{-3\rho/2}\tilde{\mathcal{D}}\mathrm{e}^{\rho/2}\,,\qquad \tilde{\mathcal{D}}=i\slashed{\partial} + \tau_3 a + v
\end{equation}
and
\begin{equation}\label{a-v-ferm}
a = \f{hi}{2\cosh\sigma}\,,\quad v = \f{3 i}{2\sinh\sigma} \,.
\end{equation}
The path integral measure is implicitly defined with respect to the norms
\begin{equation}
\lVert\zeta\rVert^2 = \int\mathrm{d}^2\sigma\sqrt{\gamma}\zeta^a\zeta^b\delta_{ab}\,, \qquad \lVert\theta\rVert^2 = \int\mathrm{d}^2\sigma\sqrt{\gamma}\,\bar\theta^a \theta^b\delta_{ab}\,,
\end{equation}
through
\begin{equation}
1 = \int \big[ D\zeta\big]\mathrm{e}^{-\f1{4\pi\ell_s^2}\lVert\zeta\rVert^2}\,,\qquad 1 = \int \big[ D\theta D\bar\theta\big]\mathrm{e}^{-\f1{4\pi\ell_s^2}\lVert\theta\rVert^2}\,.
\end{equation}
Using this we perform the Gaussian path integral
\begin{equation}\label{Gamma-K}
\Gamma_{\mathbb{K}}= -\log\int \big[D\zeta D\theta D\bar\theta\big]\mathrm{e}^{-S_\mathbb{K}} = \f12 \log \f{(\det{\cal K}_x)^4(\det{\cal K}_y)^2(\det{\cal K}_z)^2}{(\det{\cal D})^8}\,.
\end{equation}
The subject of the next section is the evaluation of these determinants.
\section{One-loop partition function}
\label{sec:oneloop}
The focus of this section is to evaluate the one-loop functional determinants \eqref{Gamma-K}.
To this end, instead of evaluating the determinants of ${\cal K}_a$ and ${\cal D}$, we would like to compute the considerably simpler determinants of the tilded operators $\tilde{\cal K}_a$ and $\tilde{\cal D}$, cf. \eqref{op-K-Ktilde} and \eqref{op-D-Dtilde}.
The operators $\tilde{\cal K}_a$ and $\tilde{\cal D}$ are flat operators, since they are obtained from the operators ${\cal K}_a$ and ${\cal D}$ by stripping off a conformal factor.
This is equivalent to performing a Weyl rescaling, which is allowed only in a Weyl invariant theory.
The computation of the Weyl anomaly and its relation to UV divergences of the one-loop partition function are discussed in the next subsection (Sec. \ref{anomaly}).
In subsection \ref{phaseshiftmethod} we illustrate the main points in the computation of the one-loop determinants \eqref{Gamma-K} by means of the phase shift method.
\subsection{Weyl anomaly}
\label{anomaly}
As mentioned above, the Weyl invariance of the theory should allow us to perform Weyl rescalings of the metric, and thus to employ the flat operators $\tilde{\cal K}_a$ and $\tilde{\cal D}$ in the computation of the functional determinants \eqref{Gamma-K}.
In particular, the absence of a Weyl anomaly is required for this procedure.
Fortunately, consistency of string theory requires the total central charge $c$ to vanish, and with it the Weyl anomaly. In this subsection we will briefly review the Weyl anomaly in string theory and argue that indeed for our particular setup the total anomaly vanishes. Essentially this is a simple consequence of our background geometry being a consistent background of string theory.
We will also review how it relates to the logarithmic divergences of the partition function.
The quantization of the string in general backgrounds leads to a non-trivial trace of the energy momentum tensor. This can be parametrized as\footnote{This expression is relevant for the bosonic string and for the superstring when the fermions vanish on-shell.}
\begin{equation}\label{stringyemtensor}
2 \ell_s^2 \langle {T_i}^i\rangle = \ell_s^2 \beta^\Phi R_\gamma + \beta_{\mu\nu}^G\partial_i X^\mu\partial^i X^\nu + \beta_{\mu\nu}^B\partial_i X^\mu\partial_j X^\nu\epsilon^{ij}\,.
\end{equation}
Here we have introduced the Weyl anomaly functions $\beta$ which are computed in the $\alpha'=\ell_s^2$ expansion of string theory. The consistency of the theory relies on the fact that all $\beta$-functions vanish, eliminating the Weyl anomaly completely. It is proven to all orders in the $\alpha'$-expansion that, if $\beta_{\mu\nu}^G=\beta_{\mu\nu}^B=0$, then $\beta^\Phi$ is constant \cite{Callan:1989nz} and we get back the familiar expression\footnote{We use the convention $T_{ij} = -\f{4\pi}{\sqrt{\gamma}}\f{\delta S}{\delta \gamma^{ij}}.$}
\begin{equation}
\langle {T_i}^i\rangle = -\f{c}{12}R_\gamma\,,\qquad \beta^\Phi = -\f{c}6\,,
\end{equation}
where $c$ is the total central charge. In the RNS formulation of the superstring in flat space, the contribution of the worldsheet scalars and fermions adds up to $3 D/2$. This should be combined with the reparametrization ghosts with $c=-26$ and the superconformal ghosts with $c=11$, giving \cite{Polyakov:1981rd,Polyakov:1981re}
\begin{equation}
c = \f32 (D-10)\,.
\end{equation}
We conclude that the total Weyl anomaly vanishes in the critical dimension $D=10$.
The cancellation of the Weyl anomaly in the GS string works slightly differently, as reviewed in \cite{Drukker:2000ep} (see references therein for further details). The lack of worldsheet supersymmetry means there are no superconformal ghosts, and we only have eight worldsheet fermions instead of ten. However, it is important to note that the GS fermions are really 2d scalars with the wrong statistics. Their contribution to the conformal anomaly is subtle to compute, but effectively they contribute four times the naive expectation for a normal 2d fermion, due to the fact that they couple to the worldsheet metric as scalars would.%
\footnote{Unfortunately this is obscured in our expressions since we use static gauge.}
Combining all contributions yields
\begin{equation}
c = D-10\,,
\end{equation}
which, again, vanishes in the critical dimension.
The Weyl anomaly is closely related to logarithmic divergences in the partition function. This can be observed directly from the definition of the quantum energy-momentum tensor as a variation of the effective action with respect to the conformal factor. Using $\gamma_{ij} = \mathrm{e}^{2\rho} \delta_{ij}$, we have
\begin{equation}\label{QEMtensor}
\langle {T_i}^i\rangle_{\mathbb{K}} = \f{2\pi}{\sqrt{\gamma}}\f{\delta \Gamma_{\mathbb{K}}}{\delta \rho}\,,
\end{equation}
and the right-hand-side can be expressed in terms of the DeWitt-Seeley coefficients that control logarithmic divergences \cite{Drukker:2000ep}. In particular
\begin{equation}
\delta \log \det {\cal K} = -2 a_2(\delta \rho|{\cal K})\,,\qquad \delta \log \det {\cal D}^2 = -2 a_2(\delta \rho| {\cal D}^2)\,,
\end{equation}
where $a_2(f|{\cal O})$ is the second DeWitt-Seeley coefficient for the operator ${\cal O}$ evaluated on a test function $f$.
Using this we can rewrite \eqref{QEMtensor} as
\begin{equation}\label{EMtensSeeley}
\langle {T_i}^i\rangle_{\mathbb{K}} = \f14\Tr b_2({\cal D}^2)-\f12\Tr b_2({\cal K})\,,
\end{equation}
where $a_2$ and $b_2$ are related through
\begin{equation}
a_2(f|{\cal O}) = \f{1}{4\pi} \int\sqrt{\gamma}\, f\, b_2({\cal O})\,\mathrm{d}^2\sigma + \text{boundary terms}\,.
\end{equation}
Notice that in \eqref{EMtensSeeley} we have implicitly assumed that the variation $\delta \rho$ vanishes on the boundary. The ``local'' DeWitt-Seeley coefficients in our conventions (which are the same as those of \cite{Cagnazzo:2017sny}) take the form
\begin{equation}
b_2({\cal D}^2) = -\f1{6} R_\gamma + 2\mathrm{e}^{-2\rho}(v^2-a^2)\,,\qquad b_2({\cal K}) = \f1{6} R_\gamma - \mathrm{e}^{-2\rho}E\,.
\end{equation}
The complete expression for the quantum energy momentum tensor for the eight scalars and eight fermions is then given by
\begin{equation}\label{traceanomalydef}
\langle {T_i}^i\rangle_{\mathbb{K}}=-\f12\Big(2R_\gamma -\mathrm{e}^{-2\rho}\Tr E - \mathrm{e}^{-2\rho}\Tr(v^2-a^2)\Big)\,,
\end{equation}
where the trace should be understood as over all bosonic and fermionic masses \eqref{theEs}-\eqref{a-v-ferm}.
Just as in \eqref{stringyemtensor}, the terms in \eqref{traceanomalydef} should be separated into terms proportional to $R_\gamma$ on one hand, and $\partial_i X^\mu \partial_j X^\nu$ on the other.
However, since we have worked in static gauge, it is difficult to separate the two terms.
In order to do so, we would have to expand the Polyakov action using the conformal gauge and a background metric that is not identified with the induced metric (see \cite{Drukker:2000ep,Forini:2015mca} for a detailed discussion).
In addition to the eight transverse scalars, we would now have the two longitudinal modes as well as ghost fields. The contribution to $\langle {T_i}^i\rangle$ that is proportional to $\partial_i X^\mu \partial_j X^\nu$ turns out to be the total mass contribution of all physical fields. The total scalar mass contribution of all ten fields is $\mathrm{e}^{-2\rho}\Tr E-R_\gamma$, while the fermions still give $\mathrm{e}^{-2\rho}\Tr (v^2-a^2)$. Note that including also the ghost fields, the total contribution to the DeWitt-Seeley coefficients is the same in the two gauges~\cite{Drukker:2000ep}.
With this in mind we can suggestively rewrite \eqref{traceanomalydef} as
\begin{equation}\label{Tseparated}
\langle {T_i}^i\rangle_{\mathbb{K}}=-\f12\Big(R_\gamma -\mathrm{e}^{-2\rho}\Tr E - \mathrm{e}^{-2\rho}\Tr(v^2-a^2)\Big) -\f12 R_\gamma\,,
\end{equation}
where the first term should now be understood as the one proportional to $\partial_i X^\mu \partial_j X^\nu$.
Indeed we can explicitly verify using the ten-dimensional solution that
\begin{equation}
R_\gamma -\mathrm{e}^{-2\rho}\Tr E= \partial_i X^\mu \partial^i X^\nu\Big[ R_{\mu\nu} -\f12 |H|^2_{\mu\nu} \Big]\,,
\end{equation}
which are the first two terms in the expected Weyl anomaly function $\beta^G_{\mu\nu}$. Similarly we have checked that
\begin{equation}
-\mathrm{e}^{-2\rho}\Tr(a^2-v^2)=-\f14 \partial_i X^\mu \partial^i X^\nu\mathrm{e}^{2\Phi}\sum_n|F_n|^2_{\mu\nu} \,.
\end{equation}
Here the sum over $n$ runs over form fields as well as their Hodge duals (this is the so-called democratic formulation). Using these results we see that the $\partial_i X^\mu \partial_j X^\nu$-terms in \eqref{traceanomalydef} can be expressed as
\begin{equation}\label{missingterm}
-\f12\partial_i X^\mu \partial^i X^\nu\Big[ R_{\mu\nu} -\f12 |H|^2_{\mu\nu} -\f14 \mathrm{e}^{2\Phi}\sum_n|F_n|^2_{\mu\nu}\Big]=\partial_i X^\mu \partial^i X^\nu\nabla_\mu\nabla_\nu \Phi\,,
\end{equation}
where we used the equations of motion of ten-dimensional supergravity. We therefore see that if the dilaton is non-constant (as for our background) then the $\partial_i X^\mu \partial_j X^\nu$-terms in the energy momentum tensor do not vanish as they must for a consistent theory.
This should not come as a particular surprise, since we have neglected to take into account the classical Weyl ``anomaly'' of the FT action. It is well known that the classical Weyl variation of the FT action is cancelled by the anomaly of the one-loop fluctuations of the string (see for example \cite{Callan:1989nz} for a detailed discussion). We can correct for this by adding to \eqref{traceanomalydef} the classical energy-momentum tensor computed using the FT action \eqref{FTAction}
\begin{equation}
({T_i}^i)_\text{FT} = -\partial_i X^\mu \partial^i X^\nu\nabla_\mu\nabla_\nu \Phi\,,
\end{equation}
which exactly compensates for the missing term in \eqref{missingterm}, ensuring that the $\partial_i X^\mu \partial_j X^\nu$-terms in the full energy momentum tensor do in fact vanish as expected.
We are then left with
\begin{equation}\label{finalEMtens}
\langle {T_i}^i\rangle =\langle {T_i}^i\rangle_{\mathbb{K}} + ({T_i}^i)_\text{FT} = -\f12 R_\gamma\,,
\end{equation}
which is identical to the result one would get from a similar treatment of the GS string in flat space \cite{Drukker:2000ep}. The remaining anomaly is universal and is cancelled here in exactly the same way as in flat space. Roughly speaking this is accomplished by the combination of two effects. First, the transformation of the GS fermions to two-dimensional fermions on the worldsheet is accompanied by a Jacobian which contributes an additional $-R_\gamma$; next, in conformal gauge the FP determinant can be rewritten as a $bc$-ghost system for which zero-modes must be excluded. This produces an additional $(3/2) R_\gamma$ which makes the total conformal anomaly vanish (see for example \cite{Blumenhagen:2013fgp}).
Since we will not take care of these two ingredients, and only use their universal nature \cite{Giombi:2020mhz}, our partition function will carry logarithmic divergences controlled by the DeWitt-Seeley coefficients. In terms of the Weyl anomaly, the divergence is just
\begin{equation}
\f1{2\pi} \int \langle {T_i}^i\rangle \text{vol}_\gamma = -\chi = -1\,,
\end{equation}
where $\chi$ is the Euler characteristic of the worldsheet.
We will recover exactly this logarithmic divergence when we explicitly compute the partition function in section \ref{phaseshiftmethod}.
To summarize this subsection, our treatment of the Weyl anomaly shows that because the ten-dimensional dilaton is non-trivial in our background, we should not separately compute $S_\text{FT}$ and $\Gamma_\mathbb{K}$ if we want to perform our desired Weyl rescaling. Rather these should be treated as a combined object
\begin{equation}
W \equiv S_\text{FT}+\Gamma_\mathbb{K}\,.
\end{equation}
Since the \emph{total} Weyl anomaly vanishes in string theory, we can perform the Weyl transformation to the flat metric as desired. Because we do not carefully keep track of all the string theory ingredients discussed above, our one-loop determinants will carry divergences. These divergences are, however, argued to be universal. As we will explain in further detail in section \ref{ratio}, in order to control the universal contributions we propose computing the ratio of two string partition functions, keeping in mind that all universal factors drop out.
For now, let us discuss a different issue that we encounter. We denote the corresponding flat space quantities (i.e. Weyl transformed quantities) by a tilde. It turns out that $\tilde S_\text{FT}$ identically vanishes. This is somewhat surprising since we expect to find something in the spirit of $\chi \log g_s$.
Since our worldsheet is a disc (that is $\chi=1$), and since the vacuum value of the dilaton does not vanish, the FT term should not vanish.
The problem is that the Weyl transformation effectively changes the topology of our manifold due to the choice of coordinates used. The ``center'' of our worldsheet is located in our coordinates at $\sigma\to\infty$. In the Weyl rescaled flat metric, this point is pushed infinitely far away.
This was first emphasized in \cite{Cagnazzo:2017sny}.
In all practical computations we must place an IR cutoff at some large finite $\sigma=R$, which changes the topology of the worldsheet to a cylinder. The Euler characteristic of the cylinder vanishes, which explains why the direct application of the formula \eqref{FTAction} results in the answer $\tilde S_\text{FT}=0$. What we have neglected to take into account is the contribution of the small disc located at $\sigma\ge R$ that the cutoff removes. In this region the dilaton \eqref{dilaton} is approximately constant, and we can just use
\begin{equation}
\label{S-FT-final}
\tilde S_\text{FT} =\chi\lim_{\sigma\to\infty} \Phi_0 = -\log \f{N\pi}{\xi^{3/2}}\,.
\end{equation}
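The value \eqref{S-FT-final} follows directly from the limit of \eqref{dilaton}; a one-line sympy check (ours, not part of the original text):

```python
import sympy as sp

sigma, xi, N = sp.symbols('sigma xi N', positive=True)

# pull-back of the dilaton, eq. (dilaton): e^{2 Phi_0} = xi^3 coth^3(sigma)/(N^2 pi^2)
Phi0 = sp.log(xi**3*sp.coth(sigma)**3/(N**2*sp.pi**2))/2

# with chi = 1 for the disc: S_FT = lim_{sigma -> oo} Phi_0 = -log(N pi/xi^{3/2})
S_FT = sp.limit(Phi0, sigma, sp.oo)
assert sp.simplify(sp.expand_log(S_FT + sp.log(N*sp.pi/xi**sp.Rational(3, 2)))) == 0
```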
In principle we should also retain a contribution from the one-loop fluctuations of the fields inside this small disc. However, the bosonic and fermionic operators are exactly free in this limit, and so we obtain only the universal contribution due to UV divergences. We will take these into account when computing the one-loop fluctuations for $\sigma \le R$ and so do not account for them here. In total we then have
\begin{equation}
W = -\log \f{N\pi}{\xi^{3/2}} + \tilde\Gamma_{\mathbb K}(R)\,,
\end{equation}
where $\tilde\Gamma_{\mathbb K}(R)$ is the one-loop partition function of the Weyl rescaled operators using an IR cutoff at $\sigma = R \gg 1$. Notice that this expression, and in particular the FT term \eqref{S-FT-final}, is free from the divergence encountered in \eqref{FTtermdivergent}. We expect that if we had not performed the Weyl rescaling discussed here, the one-loop fluctuations $\Gamma_{\mathbb K}(R)$ would carry a similar divergence as \eqref{FTtermdivergent} and cancel it. After the Weyl rescaling, however, both quantities are free from this power-law UV divergence.
\subsection{Phase shift method}
\label{phaseshiftmethod}
Our remaining task is to compute the one-loop partition function $\tilde\Gamma_{\mathbb K}(R)$ for the tilded (flat) operators, that is
\begin{equation}
\label{Gamma-tilde}
\tilde\Gamma_{\mathbb K}(R)= \f12 \log \f{(\det{\tilde{\cal K}}_x)^4(\det{\tilde{\cal K}}_y)^2(\det{\tilde{\cal K}}_z)^2}{(\det{\tilde{\cal D}})^8}\,,
\end{equation}
where the operators are given in equations \eqref{op-K-Ktilde} and \eqref{op-D-Dtilde}.
To this end we will use the phase shift method as in \cite{Chen-Lin:2017pay,Cagnazzo:2017sny}.
The operators we are interested in are two-dimensional, for example the bosonic operators are of the form
\begin{equation}
\tilde {\cal K}_a=-\partial_\sigma^2 - \partial_\tau^2 + E_a (\sigma)\,,
\end{equation}
where the potentials $E_a$ are defined in \eqref{theEs}.
The first step is to Fourier expand with respect to the angle $\tau$, then $\partial_\tau$ is mapped to $i \omega$.
Bosonic and fermionic operators obey, respectively, periodic and anti-periodic boundary conditions with respect to the $\tau$ coordinate, so $\omega$ is an integer for the bosonic fluctuations and a half-integer for the fermionic ones.
Computing the (log of the) functional determinants in \eqref{Gamma-tilde} amounts to solving the spectral problem for these now one-dimensional Schr\"odinger operators, i.e.
\begin{equation}\label{eigenvaluedef}
\tilde {\cal K}_a \,\eta_{ \omega}(\sigma) =\left(-\partial_\sigma^2 +\omega^2+ E_a (\sigma)\right) \eta_{ \omega}(\sigma)= \lambda\, \eta_{\omega}(\sigma)\,,
\end{equation}
where $\eta_{\omega}(\sigma)$ now represents only the ``radial'' component of the full wave function, that is $\Psi(\sigma, \tau)=\sum_\omega \mathrm{e}^{i \omega \tau} \eta_{\omega}(\sigma)$.
For large $\sigma$ the potentials $E_a(\sigma)$, $v(\sigma)$ and $a(\sigma)$ asymptote to zero, and we are left with free operators.
Hence, the solutions asymptotically behave as waves.
The effect of the potential (and hence the information about the spectrum) is contained in a phase shift, $\delta$, which measures at large $\sigma$ how close the solution is to a free (ingoing or outgoing) wave, that is
\begin{equation}
\label{eta-asymptotic}
\eta_{\omega} \to C \sin(p\sigma + \delta(\omega,p))\,.
\end{equation}
From here it is also manifest that the dispersion relation is nothing but
\begin{equation}\label{disp-rel}
\lambda= \omega^2+p^2\,.
\end{equation}
The goal is to compute the phase shift $ \delta(\omega,p)$ for each of our bosonic and fermionic operators.
This scattering problem is clearly illustrated in~\cite{Cagnazzo:2017sny}, so here we only summarise the main steps.
Firstly, our fluctuations obey Dirichlet boundary conditions at $\sigma=0$.
In the UV limit the potentials diverge as $E_a \sim \sigma^{-2}$, which implies that the wave functions are either non-normalizable or they vanish. We will choose normalizable wavefunctions that vanish in the UV:
\begin{equation}\label{bc-Dirichlet-UV}
\eta_{\omega}(\sigma =0)=0\,.
\end{equation}
Secondly, to get a discrete spectrum we introduce an IR cutoff at large distance $\sigma = R$ and impose Dirichlet boundary conditions at $\sigma=R$, that is $\eta_{\omega}(R)=0$.
Given the asymptotic behaviour \eqref{eta-asymptotic} of the solutions, this quantization condition reads
\begin{equation}\label{quantization}
pR + \delta(\omega,p) = \pi k\,,
\end{equation}
where $k$ is a positive integer.
From this we can read off the density of states, i.e.\ the multiplicity of the eigenvalues,
\begin{equation}
\rho = \f{\mathrm{d} k}{\mathrm{d} p} = \f{1}{\pi}\left(R + \f{\mathrm{d} \delta(\omega,p)}{\mathrm{d} p}\right)\,.
\end{equation}
Hence, the functional determinant reduces to
\begin{equation}
\log \det \tilde {\cal K} = \sum_\omega \int_0^\infty \f{\mathrm{d} p}{\pi}\left(R + \f{\mathrm{d} \delta(\omega,p)}{\mathrm{d} p}\right)\log (p^2+\omega^2)\,,
\end{equation}
where we used the fact that the spectrum is approximately continuous in the large $R$ limit, and the explicit form of the eigenvalues \eqref{disp-rel}.
We can then integrate by parts over $p$, and replace the sum over Matsubara frequencies with a contour integral in the complex $\omega$ plane, which receives contributions only at the poles $\omega=\pm i p$.%
\footnote{More details can be found in~\cite{Cagnazzo:2017sny}.}
This gives for the bosonic operators
\begin{equation}
\log \det \tilde {\cal K} =- \int_0^\infty \mathrm{d} p\,\coth (\pi p)\Big( 2pR + \delta(i p,p)+ \delta(-i p,p) \Big)\,,
\end{equation}
and for the fermionic operators
\begin{equation}
\log \det \tilde {\cal D} =- \int_0^\infty \mathrm{d} p\,\tanh (\pi p)\Big( 2pR + \delta(i p,p)+ \delta(-i p,p) \Big)\,.
\end{equation}
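The $\coth(\pi p)$ and $\tanh(\pi p)$ factors trace back to the classical frequency sums $\sum_{\omega\in\mathbb{Z}} 2p/(p^2+\omega^2)=2\pi\coth(\pi p)$ and $\sum_{\omega\in\mathbb{Z}+1/2} 2p/(p^2+\omega^2)=2\pi\tanh(\pi p)$, which arise when $\log(p^2+\omega^2)$ is differentiated with respect to $p$. A quick numerical sanity check of these identities (a self-contained Python sketch; the truncation parameter $N$ is our own choice):

```python
import math

def matsubara_sum(p, half_integer=False, N=100000):
    """Truncated sum over frequencies w of 2p/(p^2 + w^2),
    with w integer (bosons) or half-integer (fermions)."""
    if half_integer:
        # frequencies +/-(n + 1/2) contribute in pairs
        return sum(2 * 2.0 * p / (p * p + (n + 0.5) ** 2) for n in range(N))
    # w = 0 term plus paired +/- n terms
    return 2.0 / p + sum(2 * 2.0 * p / (p * p + n * n) for n in range(1, N + 1))

p = 0.7
print(matsubara_sum(p), 2 * math.pi / math.tanh(math.pi * p))        # bosonic: 2 pi coth(pi p)
print(matsubara_sum(p, True), 2 * math.pi * math.tanh(math.pi * p))  # fermionic: 2 pi tanh(pi p)
```

The truncation error of the tails scales like $4p/N$, so the agreement improves linearly with $N$.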
Finally we can use the fact that the bosonic operators \eqref{op-K-Ktilde} are Hermitian, which leads to $\delta(i p,p)= \delta(-i p,p)$. We will denote the phase shifts corresponding to the bosonic operators simply by $\delta_a$ with $a=x,y,z$.
On the other hand, the fermionic operators \eqref{op-D-Dtilde} are not Hermitian, and so we have two independent phase shifts for $\omega=\pm i p$, which we denote by $\delta_\pm$.
At this point we are ready to write the full expression for the effective action $\tilde\Gamma_{\mathbb K}(R)$ \eqref{Gamma-tilde} in terms of the phase shifts. Collecting all the terms, we have
\begin{equation}\label{phaseshiftformula}
\tilde\Gamma_{\mathbb K}(R)= -\int_0^\infty \mathrm{d} p\, \Big[\coth (\pi p)(4\delta_x+2\delta_y+2\delta_z)-\tanh (\pi p)(4\delta_{+}+ 4\delta_{-})\Big] - R\,,
\end{equation}
where we have performed the explicit $p$-integral multiplying the cutoff $R$.
\subsubsection{Phase shifts for the bosonic operators}
In this subsection we focus on the bosonic operators.
As we have seen above, in order to compute the phase shifts, we have to solve the zero eigenvalue Schr{\"o}dinger problem
\begin{equation}\label{bosoniceq}
\tilde{\cal K}_a \eta_a(\sigma) = 0\,, \qquad a=x, y,z\,,
\end{equation}
with boundary conditions \eqref{bc-Dirichlet-UV} and \eqref{quantization}. Notice that, by the dispersion relation \eqref{disp-rel}, setting $\lambda=0$ means $\omega=\pm i p$.
Since the bosonic operators are Hermitian, the two independent solutions come in complex conjugate pairs $\eta_{a}(p;\sigma)$ and $\bar \eta_a(p;\sigma)$. As in \cite{Cagnazzo:2017sny}, we normalize our basis functions such that
\begin{equation}\label{normalization-eta}
\lim_{\sigma\to\infty}\mathrm{e}^{-ip\sigma}\eta_a(p;\sigma) = 1 = \lim_{\sigma\to\infty}\mathrm{e}^{ip\sigma}\bar \eta_a(p;\sigma)\,.
\end{equation}
The explicit forms of the bosonic basis functions are then
\begin{equation}
\label{eta-bos-h1}
\begin{split}
\eta_{x}(p;\sigma) &= \big(2\sinh \sigma\big)^{i p}(\coth\sigma)^{1/2}\, {}_2F_1\Big(-\tfrac{1+i p}{2},\tfrac{3-ip}{2};1-ip; -{\rm csch}^2\sigma \Big)\,, \\
\eta_{y}(p;\sigma) &= \big(2\sinh \sigma\big)^{i p}(\coth\sigma)^{1/2}\, {}_2F_1\Big(-\tfrac{i p}{2},\tfrac{2-ip}{2};1-ip;-{\rm csch}^2\sigma \Big)\,, \\
\eta_{z}(p;\sigma) &= \big(2\sinh \sigma\big)^{i p}(\coth\sigma)^{-1/2}\, {}_2F_1\Big(\tfrac{1-i p}{2},-\tfrac{1+ip}{2};1-ip;-{\rm csch}^2\sigma \Big)\,, \\
\end{split}
\end{equation}
for the three operators $\tilde{\cal K}_x$, $\tilde{\cal K}_y$, and $\tilde{\cal K}_z$.
Here we have set the parameter $h$ equal to $1$; the results for generic values of $h$ are discussed in appendix \ref{app:res-various-h}.
A wave function that is regular at $\sigma\to0$ (i.e.\ vanishes there) can be constructed directly as
\begin{equation}
\eta(p;\sigma) = {\cal N}\big(\bar\eta(p;0)\eta(p;\sigma)-\eta(p;0)\bar\eta(p;\sigma)\big)\,,
\end{equation}
where ${\cal N}$ is an unimportant normalization of the wavefunction. The phase shift can then be determined directly by evaluating the limit $\sigma\to\infty$ and imposing the quantization condition \eqref{quantization}. The resulting phase shifts for the three bosonic operators for $h=1$ are
\begin{equation}
\label{delta-bos-h1}
\begin{split}
\delta_x(p) &= \text{Arg}\, \Big[{2^{-ip}\Gamma(\tfrac{3-ip}{2})^2\Gamma(1+ip)}\Big]\,,\\
\delta_y(p) &= \text{Arg}\, \Big[\Gamma(1-\tfrac{ip}{2})\Gamma(\tfrac{1+ip}{2})\Big]\,,\\
\delta_z(p) &= \text{Arg}\, \Big[{2^{-ip}\Gamma(\tfrac{1-ip}{2})\Gamma(\tfrac{3-ip}{2})\Gamma(1+ip)}\Big]\,.
\end{split}
\end{equation}
The bosonic phase shifts for any $h$ are reported in appendix \ref{app:res-various-h}.
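The phase shifts \eqref{delta-bos-h1} are straightforward to evaluate numerically, e.g.\ when assembling the integrand of \eqref{phaseshiftformula}. A minimal pure-Python sketch follows; the helper \texttt{cgamma} (a standard Lanczos approximation) is our own ingredient, since the Python standard library provides the Gamma function only for real arguments:

```python
import cmath

# Lanczos approximation (g = 7, n = 9) for the Gamma function at complex argument
_LANCZOS = [0.99999999999980993, 676.5203681218851, -1259.1392167224028,
            771.32342877765313, -176.61502916214059, 12.507343278686905,
            -0.13857109526572012, 9.9843695780195716e-6, 1.5056327351493116e-7]

def cgamma(z):
    if z.real < 0.5:  # reflection formula Gamma(z) Gamma(1-z) = pi / sin(pi z)
        return cmath.pi / (cmath.sin(cmath.pi * z) * cgamma(1 - z))
    z -= 1
    x = _LANCZOS[0] + sum(c / (z + i) for i, c in enumerate(_LANCZOS[1:], start=1))
    t = z + 7.5
    return cmath.sqrt(2 * cmath.pi) * t ** (z + 0.5) * cmath.exp(-t) * x

# phase shifts of eq. (delta-bos-h1), h = 1
def delta_x(p):
    return cmath.phase(2 ** (-1j * p) * cgamma((3 - 1j * p) / 2) ** 2 * cgamma(1 + 1j * p))

def delta_y(p):
    return cmath.phase(cgamma(1 - 1j * p / 2) * cgamma((1 + 1j * p) / 2))

def delta_z(p):
    return cmath.phase(2 ** (-1j * p) * cgamma((1 - 1j * p) / 2)
                       * cgamma((3 - 1j * p) / 2) * cgamma(1 + 1j * p))
```

Since $\Gamma(\bar z)=\overline{\Gamma(z)}$, each $\delta_a$ is odd in $p$ and vanishes at $p=0$, which provides a basic consistency check on the implementation.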
\subsubsection{Phase shifts for the fermionic operators}
\label{sec:phase-shift-ferm}
Here we illustrate the computation of the fermionic phase shifts and the corresponding results. At the end of the section we collect our findings in the expression \eqref{final-gamma-tilde}.
Unfortunately, for the fermionic operators we could not solve the wave equation analytically for all $h$.
We are now looking at the following matrix equation
\begin{equation}\label{fermioniceq}
\tilde{\mathcal{D}}\eta =0\,,
\end{equation}
where the operator $\tilde{\mathcal D}$ is given in \eqref{op-D-Dtilde}.
Only for a few values of $h$ ($h=0, 3$) can we find an analytic solution to the above equations; the results are reported in appendix \ref{app:res-various-h-ferm}.
For this reason, we have to resort to numerics.
Exactly as for the bosonic operators, we impose regular boundary conditions \eqref{bc-Dirichlet-UV} and read off the oscillating wave function at large $\sigma$, cf.\ \eqref{eta-asymptotic}. We must do this for both $\omega = i p$ and $\omega=- i p$: since the fermionic operator \eqref{op-D-Dtilde} is not Hermitian, the corresponding phase shifts will not be equal.
The most stable approach we have found to numerically extract the phase shifts is to first compute the two-component wave function to high accuracy, imposing regular boundary conditions at small $\sigma$.
Then we evaluate a particular ratio of the two components at large $\sigma$; this ratio approaches a constant that directly encodes the phase shift.
At large $\sigma$ the wave functions take the form\footnote{We reuse $\eta$ here for a two-dimensional fermionic wave function.}
\begin{equation}
\eta(\sigma) =\begin{pmatrix}\eta_1(\sigma) \\\eta_2(\sigma) \end{pmatrix}\sim \begin{pmatrix}c_1 \mathrm{e}^{\mp i p \sigma}\\ c_2 \mathrm{e}^{\pm i p\sigma}\end{pmatrix}\,,
\end{equation}
where the upper sign refers to $\omega=+i p$ and the lower sign to $\omega=-i p$. Here $c_{1,2}$ are two constants that depend on $p$ and must be computed numerically. Using this asymptotic form we can impose the quantization condition at large $\sigma=R$, which takes the form \cite{Cagnazzo:2017sny} (see also \cite{Medina-Rincon:2019bcc} for a more detailed discussion)
\begin{equation}
\tau_2 \eta(R) = \eta(R)\,\quad \text{or} \quad p R = k\pi \pm\f{1}{2} \text{Arg} \f{i c_1}{c_2}\,.
\end{equation}
Comparing with \eqref{quantization} and using the components of the wave function we find
\begin{equation}
\delta = \mp\f{1}{2} \text{Arg} \Big(\f{i \mathrm{e}^{\pm 2i pR}\eta_1(R)}{\eta_2(R)}\Big)\,,
\end{equation}
where the signs are correlated with $\omega=\pm i p$.
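The shooting strategy can be illustrated on a scalar toy model. The sketch below is our own illustrative example, not one of the actual fluctuation operators: it uses the reflectionless potential $V(\sigma)=-2\,{\rm sech}^2\sigma$, for which the exact phase shift on the half line with a Dirichlet condition at $\sigma=0$ is $\delta(p)=\pi/2-\arctan p$ (mod $\pi$). It integrates the zero-mode equation $\eta''=(V-p^2)\eta$ with RK4 and reads off $\delta$ from the asymptotic wave \eqref{eta-asymptotic}:

```python
import math

def phase_shift(p, V, sigma_max=20.0, h=1e-3):
    """Integrate eta'' = (V(s) - p^2) eta with eta(0) = 0 by RK4 and
    read off delta from eta ~ C sin(p s + delta) at large s."""
    def f(s, y):
        return (y[1], (V(s) - p * p) * y[0])
    y, s = (0.0, 1.0), 0.0   # regular solution: eta(0) = 0, slope arbitrary
    for _ in range(int(sigma_max / h)):
        k1 = f(s, y)
        k2 = f(s + h / 2, (y[0] + h / 2 * k1[0], y[1] + h / 2 * k1[1]))
        k3 = f(s + h / 2, (y[0] + h / 2 * k2[0], y[1] + h / 2 * k2[1]))
        k4 = f(s + h, (y[0] + h * k3[0], y[1] + h * k3[1]))
        y = (y[0] + h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
             y[1] + h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))
        s += h
    # eta = C sin(p s + delta), eta' = C p cos(p s + delta)
    return (math.atan2(p * y[0], y[1]) - p * s) % math.pi

V = lambda s: -2.0 / math.cosh(s) ** 2    # toy reflectionless potential
p = 0.9
print(phase_shift(p, V), (math.pi / 2 - math.atan(p)) % math.pi)
```

Unlike the actual potentials, this toy avoids the $\sigma^{-2}$ singularity at the origin, so the regular solution is obtained simply by shooting from $\eta(0)=0$.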
We have performed many numerical evaluations for a large range of $p$ and for many values of the parameter $h$, as discussed in appendix \ref{app:phaseshift}.
For $h=0$ and $3$ we can directly compare the numerical results with the analytic answers \eqref{delta-ferm-h0}-\eqref{delta-ferm-h3} to assess the precision of our code. We find excellent agreement with the analytic phase shifts, with errors ranging between $10^{-9}$ and $10^{-7}$ (see figure \ref{errors}).
\begin{figure}[h]
\centering
\includegraphics[width=0.8\textwidth]{fermionphaseshifterrors.pdf}
\caption{\label{errors}The total absolute numerical error for the sum of all fermionic phase shifts computed using numerical methods compared with the analytic expressions obtained for $h=0, h=3$, and reported in \eqref{delta-ferm-h0} and \eqref{delta-ferm-h3}.}
\end{figure}
We also compare our numerical results against a WKB approximation at large $p$ finding a perfect match.
\vskip 0.3 cm
The numerical fermionic phase shifts are combined with the analytic ones for the bosons \eqref{delta-bos-h1} into the integrand in \eqref{phaseshiftformula} (shown in figure \ref{integrand} for $h=1$).
\begin{figure}[h]
\centering
\includegraphics[width=0.8\textwidth]{numericalIntegrand.pdf}
\caption{\label{integrand}The full integrand in \eqref{phaseshiftformula} obtained by combining numerical results for the fermionic phase shifts and the analytic expression for the bosonic phase shift. We have set the parameter $h=1$ in this figure. The integral over the shaded region gives us the regularized $\Gamma_{\mathbb K}$.}
\end{figure}
We can now perform a numerical integration of our integrand ${\cal I}(p)$.
The integrand is logarithmically divergent at large $p$, with a coefficient given by the worldsheet Euler characteristic $\chi=1$, as expected from our analysis in section \ref{anomaly}.
For this reason we numerically integrate up to a large UV cutoff $p=\Lambda$.
At this point the integral is finite, and we find that it matches $2\log \pi$ up to five digits.
We therefore conclude that, for $h=1$, the one-loop effective action \eqref{Gamma-tilde} is given by
\begin{equation}\label{final-gamma-tilde}
\tilde\Gamma_{\mathbb K}(R) = 2\log \pi +\log(\Lambda \mathrm{e}^{-R})\,.
\end{equation}
\section{Ratio with ABJM and match with QFT}
\label{ratio}
In this section we collect the results obtained previously.
Up to one loop in the 't Hooft coupling constant $\xi$, the logarithm of the string partition function \eqref{log-Z-tocompute} comprises three terms: the regularized classical action \eqref{s-classical}, the Fradkin-Tseytlin contribution \eqref{S-FT-final}, and finally the contribution coming from the fluctuations \eqref{final-gamma-tilde}.
Hence, collecting all the pieces, we find that the string partition function takes the form
\begin{equation}
\label{logZD4result}
\log Z_\text{SYM}^\text{string} \approx \xi +\log \f{N_\text{SYM}}{\xi^{3/2}\pi} - \log(\Lambda \mathrm{e}^{-R_\text{SYM}})\,,
\end{equation}
where $R_{\rm SYM}$ is the IR cutoff on the coordinate $\sigma$.
We have introduced the label SYM on the integer $N$ and the cutoff $R$ to avoid confusion with the ABJM quantities that we will encounter in this section.
In order to compare this result with the field theory prediction \eqref{W5-final} we must first find a way to deal with the two cutoffs $\Lambda, R_{\rm SYM}$.
We follow a procedure similar to that used in \cite{Forini:2015bgo,Faraggi:2016ekd,Forini:2017whz,Cagnazzo:2017sny,Medina-Rincon:2018wjs}: compute a ratio of string partition functions whose worldsheets have the same topology, and then compare it with the corresponding ratio of Wilson loop expectation values on the field theory side.
In \cite{Forini:2015bgo,Faraggi:2016ekd,Forini:2017whz,Cagnazzo:2017sny,Medina-Rincon:2018wjs} the ratio considered was that of a latitude Wilson loop with the circular one.
Since 5D SYM has no analogue of the latitude WL, this approach is not directly applicable here.
However, as anticipated in the introduction (Sec. \ref{sec:introduction}) and discussed in section \ref{sec:oneloop}, the divergences plaguing our string theory computation are universal. Indeed, the UV divergent piece is present even for a string in flat space \cite{Drukker:1999zq}, whereas the IR divergence is directly related to the procedure we used to compute the one-loop functional determinants.
Therefore, we should be able to compute a ratio of string partition functions (for worldsheets with the same topology) and cancel the two regulators, as long as the same computational method, and hence the same regularization scheme, is used for both.
Staying within a type IIA/M-theory setup, we will compute the ratio of our string partition function \eqref{logZD4result} with that of a circular string in AdS$_4\times {\bf C}P^3$.
The partition function of this string should capture the expectation value of a $1/2$-BPS circular (fermionic) Wilson loop operator~\cite{Drukker:2009hy} in the ABJM theory~\cite{Aharony:2008ug}, which can be computed by supersymmetric localization \cite{Kapustin:2009kz,Drukker:2008zx,Marino:2009jd,Drukker:2010nc}:
\begin{equation}
\label{WABJM}
\langle{\cal W} \rangle_\text{ABJM} \approx \f{N_\text{ABJM}}{4\pi\lambda}\mathrm{e}^{\pi\sqrt{2\lambda}}\,,\qquad \lambda\gg 1\,.
\end{equation}
Note that here we have used the conventions of \cite{Giombi:2020mhz} when normalizing the Wilson loop VEV, and included the rank of the gauge group $N_\text{ABJM}$ in this expression.
Let us now consider the dual string wrapping the equator of $S^3$ inside AdS$_4$ (or its analytic continuation ${\bf H}^4$) in global coordinates. The classical solution was first discussed in \cite{Drukker:2008zx}.
Using the same conventions as we have done so far, the metric on the string worldsheet is in this case given by the conformal factor
\begin{equation}\label{M2metric}
\mathrm{e}^{2\rho } = \f{\pi\sqrt{2\lambda}\,\ell_s^2}{\sinh^2\sigma}\,,
\end{equation}
where $\lambda$ is the corresponding 't Hooft coupling.%
\footnote{In the AdS$_4$/CFT$_3$ duality the 't Hooft coupling is given by $\lambda={N\over k}$ where $k$ is the level of gauge groups $\U(N)_k\times \U(N)_{-k}$~\cite{Aharony:2008ug}.}
This allows us to compute the regularized classical action
\begin{equation}
\label{ads4-cl}
S_\text{classical} = -\pi\sqrt{2\lambda}\,,
\end{equation}
reproducing the exponential behavior of the Wilson loop vev in \eqref{WABJM}.
Next we compute the one-loop correction to the classical result. This follows the same procedure we used for the D4-brane background except that, since the geometry is AdS, it is technically simpler.
The one-loop correction consists of two terms: The FT action and the one-loop fluctuations of worldsheet fields.
For the AdS$_4$ string the FT action is simple to evaluate because the dilaton is constant:
\begin{equation}\label{ads4-SFT}
S_\text{FT} = \chi \log g_s = -\log \f{N}{\sqrt{\pi}(2\lambda)^{5/4}}\,,
\end{equation}
where the string coupling is indeed $g_s =\left(32 \pi^2 \lambda^5\right)^{1/4} N^{-1}$. Here we have simply followed the steps discussed at the end of section \ref{anomaly}.
The one-loop fluctuation of worldsheet fields is computed in much the same way as in section \ref{phaseshiftmethod}.
First, we perform a Weyl transformation to strip off the metric factor so that we can compute the functional determinant of operators on flat space.
Next, we use the phase shift method to compute determinants themselves (see appendix \ref{app:Ads}).
The one-loop partition function for a string in AdS$_4\times \mathbf{C} P^3$ dual to a 1/2-BPS circular (fermionic) WL was computed in~\cite{Kim:2012tu, Aguilera-Damia:2018bam}, where either a Gel'fand-Yaglom method or a heat kernel approach was utilized.%
\footnote{See also \cite{Buchbinder:2014nia,Giombi:2020mhz} for a computation of the one-loop string partition function directly using the heat kernel method without performing any Weyl rescaling. }
These methods implicitly employ a different UV regularization scheme with respect to the phase shift method used here, and so we cannot simply borrow their results. In fact, for this reason, the answer \eqref{GammaAdS4} we obtain does not agree e.g. with the heat kernel result.%
\footnote{See eq. (2.21) in~\cite{Giombi:2020mhz} for a summary of the regularised one-loop string effective action in AdS$_n$ calculated by means of the heat kernel.}
%
In \cite{Medina-Rincon:2019bcc} the phase shift method was used to calculate the 1/2-BPS circular WL in ABJM at one loop at strong coupling.
Since there the focus was slightly different, and also in order to match with the notation of this manuscript, we have included our calculation of the AdS$_4$ string partition function in appendix \ref{app:Ads}.
The final expression is
\begin{equation}
\label{GammaAdS4}
\Gamma_{\text{AdS}_4} = 2 \log \pi +\log(\Lambda \mathrm{e}^{-R_\text{ABJM}})\,,
\end{equation}
where $R_{\rm ABJM}$ above is an IR cutoff in the coordinate $\sigma$.
It is important to note that the IR cutoff appearing in \eqref{GammaAdS4} cannot be identified with the IR regulator $R$ in \eqref{logZD4result}, as indeed emphasized by the different suffix.
Since the metric is different in the two cases, it is not sensible to identify the two cutoffs.
Instead, we should follow the procedure in \cite{Cagnazzo:2017sny} and replace $R$ by a diffeomorphism invariant regulator given by the area of the worldsheet that is being cut off in the computation of the phase shifts.
This area is defined by
\begin{equation}
A = 2\pi\int_R^\infty \mathrm{e}^{2\rho}\mathrm{d}\sigma\,.
\end{equation}
For the two cases at hand, i.e.\ \eqref{D4metric} and \eqref{M2metric}, we obtain
\begin{equation}
\label{def-A}
A_\text{ABJM} = 4\pi^2\ell_s^2\sqrt{2\lambda}\,\mathrm{e}^{-2R_\text{ABJM}}\,,\qquad A_\text{SYM} = 16\pi\xi\ell_s^2\,\mathrm{e}^{-2R_\text{SYM}}\,.
\end{equation}
The cutoffs defined in terms of the area $A$ can now safely be identified for the two cases, that is
\begin{equation}
A_\text{ABJM} = A_\text{SYM} \equiv A\,.
\end{equation}
Let us start by rewriting the string partition function for the AdS$_4$ string. Collecting
\eqref{ads4-cl}, \eqref{ads4-SFT}, and \eqref{GammaAdS4}, we obtain
\begin{equation}
\label{logZ-ads4-notinv}
\log Z_\text{ABJM}^\text{string} \approx \pi\sqrt{2\lambda} +\log \f{N}{(2\pi^2 \lambda)^{5/4}} -\log(\Lambda \mathrm{e}^{-R_\text{ABJM}})\,.
\end{equation}
We can now express the string partition functions in the two cases, i.e.\ \eqref{logZD4result} and \eqref{logZ-ads4-notinv}, using the diffeomorphism invariant cutoff $A$ defined in \eqref{def-A}:
\begin{equation}
\begin{split}
\log Z_\text{SYM}^\text{string} &\approx \xi +\log \f{ 4\ell_s N_\text{SYM}}{\xi\sqrt{\pi}} - \log(\Lambda \sqrt{A})\,,\\
\log Z_\text{ABJM}^\text{string} &\approx \pi\sqrt{2\lambda} + \log \f{\ell_s N_\text{ABJM}}{\pi^{3/2}\lambda} - \log (\Lambda \sqrt{A})\,.
\end{split}
\end{equation}
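As an algebraic cross-check of this change of regulator, one can verify numerically that trading $R$ for $A$ via \eqref{def-A} leaves each $\log Z$ unchanged. The sketch below does this in Python; all numerical values are arbitrary placeholders of our own choosing:

```python
import math

xi, lam, ls, Lam = 2.3, 1.7, 0.4, 50.0   # arbitrary illustrative values
R_sym, R_abjm = 6.0, 5.0                 # arbitrary IR cutoffs

# cutoff areas, eq. (def-A)
A_sym = 16 * math.pi * xi * ls**2 * math.exp(-2 * R_sym)
A_abjm = 4 * math.pi**2 * ls**2 * math.sqrt(2 * lam) * math.exp(-2 * R_abjm)

N_sym = N_abjm = 100.0

# SYM: R-form (logZD4result) vs A-form
logZ_sym_R = (xi + math.log(N_sym / (xi**1.5 * math.pi))
              - math.log(Lam * math.exp(-R_sym)))
logZ_sym_A = (xi + math.log(4 * ls * N_sym / (xi * math.sqrt(math.pi)))
              - math.log(Lam * math.sqrt(A_sym)))

# ABJM: R-form (logZ-ads4-notinv) vs A-form
logZ_abjm_R = (math.pi * math.sqrt(2 * lam)
               + math.log(N_abjm / (2 * math.pi**2 * lam)**1.25)
               - math.log(Lam * math.exp(-R_abjm)))
logZ_abjm_A = (math.pi * math.sqrt(2 * lam)
               + math.log(ls * N_abjm / (math.pi**1.5 * lam))
               - math.log(Lam * math.sqrt(A_abjm)))

print(logZ_sym_R - logZ_sym_A, logZ_abjm_R - logZ_abjm_A)  # both vanish
```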
Since the cutoffs in these two expressions are now the same, we can safely cancel them in the ratio of string partition functions, obtaining
\begin{equation}
\f{Z_\text{SYM}^\text{string}}{Z_\text{ABJM}^\text{string}} = \Big(\f{N_\text{ABJM}}{4\pi\lambda}\mathrm{e}^{\pi\sqrt{2\lambda}}\Big)^{-1}\Big( \f{N_\text{SYM}}{\xi} \mathrm{e}^{\xi}\Big)\,.
\end{equation}
Let us then consider the ratio of the two localization results for the vevs of the 1/2-BPS Wilson loop operators, expanded at strong coupling: five-dimensional SYM \eqref{WL-strong-coupling-exp} on one hand, and ABJM \eqref{WABJM} on the other. Comparing the two ratios, we find a perfect match:
\begin{equation}
{\langle{\cal W} \rangle_\text{SYM}\over \langle{\cal W} \rangle_\text{ABJM}}= \f{Z_\text{SYM}^\text{string}}{Z_\text{ABJM}^\text{string}} \,.
\end{equation}
\bigskip
\bigskip
\bigskip
\leftline{\bf Acknowledgements}
\smallskip
\noindent We are grateful to Pieter Bomans, Nikolay Bobev, Nadav Drukker, Valentina Forini, Luca Griguolo, Daniel Medina-Rincon, Joe Minahan, Domenico Seminara, L{\'a}rus Thorlacius, Maxime Tr{\'e}panier, Arkady Tseytlin, Edoardo Vescovi, and Kostya Zarembo for useful discussions. We especially acknowledge Domenico Seminara, Arkady Tseytlin and Kostya Zarembo for comments on the manuscript. FFG is supported by the University of Iceland Recruitment Fund. VGMP is partially supported by grants from the University of Iceland Research Fund.
\newpage
\section{Introduction}\label{sec:1}
\noindent In this paper we always assume that $(M,\omega)$ is a connected and tame symplectic manifold (see~\cite{ALP}). Such manifolds include closed symplectic manifolds, open manifolds which are symplectically convex at infinity, as well as products of such. Denote by $\mathcal{J}$ the space of $\omega$-compatible almost complex structures $J$ such that $(M,g_{J})$ is geometrically bounded, where $g_{J}(\cdot,\cdot)=\omega(\cdot,J\cdot)$ is the associated Riemannian metric.
For $H\in\mathcal{H}_c:=C_c^\infty([0,1]\times M,{\mathbb{R}})$ (clearly, if $M$ is compact this function space coincides with $\mathcal{H}:=C^\infty([0,1]\times M,{\mathbb{R}})$), we denote by $\{\varphi_H^t\}_{t\in[0,1]}$ the Hamiltonian isotopy of $H$ which is given by integrating the time-dependent vector field $X_{H_t}$, where $H_t=H(t,\cdot)$ and $X_{H_t}$ is determined uniquely by $-dH_t=\omega(X_{H_t},\cdot)$. Denote by $\mathcal{H}am_c(M,\omega)$ (resp. $\mathcal{H}am(M,\omega)$) the group of all Hamiltonian diffeomorphisms generated by elements of $\mathcal{H}_c$ (resp. $\mathcal{H}$).
Let $L\subset M$ be a Lagrangian submanifold and let $\varphi=\varphi^1_H$. The Lagrangian version of the homological Arnold conjecture states that under appropriate conditions the set $L\cap\varphi(L)$ contains at least as many points as a topological number of $L$. Namely, if all intersections in $L\cap\varphi(L)$ are transverse, the number should be at least the sum of the Betti numbers of the homology group $H_*(L,{\mathbb{F}})$ with coefficients in any field ${\mathbb{F}}$. In general, the number should be the \textit{cuplength} of $L$ which is defined as
\begin{eqnarray}
cl(L):=\max\big\{k+1:&&\exists\; a_i\in H_{d_i}(L),\; d_i<n,\;i=1,\ldots,k\notag\\
&&\hbox{such that}\;a_1\cap\ldots \cap a_k\neq0\big\}.\notag
\end{eqnarray}
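For instance, for the $n$-torus $T^n$ one may take the $n$ classes $a_i=[T^{n-1}_i]\in H_{n-1}(T^n)$ represented by the coordinate subtori. Their intersection product is the class of a point,
$$a_1\cap\ldots \cap a_n=[\mathrm{pt}]\neq0,\quad \hbox{and hence}\quad cl(T^n)=n+1.$$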
The Arnold conjecture in the degenerate sense was confirmed by Hofer~\cite{Ho} for the zero section of a cotangent bundle $T^*L$; see also Laudenbach and Sikorav~\cite{LS}.
Note that a ``small'' Lagrangian torus in a symplectic manifold can always be moved away from itself by a Hamiltonian diffeomorphism. To exclude this case, Floer~\cite{Fl1,Fl2} first introduced the condition that $\pi_2(M,L)=0$ or $\omega(\pi_2(M,L))=0$ and proved the homological Arnold conjecture on Lagrangian intersections under this condition. Inspired by the work of Chekanov~\cite{Ch1,Ch2}, Liu~\cite{Liu} proved the Arnold conjecture under the same conditions as in~\cite{Ch1}.
It is well known that Lagrangian Floer homology (see, e.g.,~\cite{FOOO2} for the general definition) has been a very powerful tool for proving the Arnold conjecture in the non-degenerate sense since the seminal work of Floer~\cite{Fl1,Fl3}. However, it seems to the author that so far there is no effective way to estimate the number of intersection points of a monotone Lagrangian (a notion which will be briefly recalled later) with its image under a Hamiltonian diffeomorphism when the intersections are not transverse. A possible candidate is to apply Ljusternik--Schnirelman theory, whose usefulness we have already seen in previous works~\cite{Ho,Fl2,Fl3,LO,Sc2,GG,GG2}. This is the very reason to develop a Lagrangian Ljusternik--Schnirelman theory in the present paper.
In this paper we are interested in the size of the intersection $L\cap\varphi(L)$ for monotone Lagrangians $L$ under certain conditions. More specifically, for certain classes of Lagrangians which include ${\mathbb{R}} P^n$ in complex projective spaces ${\mathbb{C}} P^n$, we give a
condition in terms of Lagrangian spectral invariants to make sure that $L\cap\varphi(L)$ is homologically non-trivial and thus infinite. The proofs of these results involve hard tools from symplectic topology of both Lagrangians and the ambient manifold, for instance, Floer homology and Lagrangian quantum homology. Interestingly enough, the condition given here is closely related to the classical
Ljusternik--Schnirelman inequality.
\medskip
\subsection{Notations and conventions}\label{subsec:notation}
Throughout this paper all Lagrangian submanifolds $L\subset (M,\omega)$ will be assumed to be connected and closed. Recall that $L$ is called \textit{monotone} if the two homomorphisms
$$\omega:\pi_2(M,L)\to{\mathbb{R}},\quad \mu: \pi_2(M,L)\to{\mathbb{Z}}$$
which are given by integration and the Maslov index respectively, satisfy
$$\omega=\kappa_L\mu\quad \hbox{for some positive constant } \kappa_L.$$
We define the \textit{minimal Maslov number} of $L$ to be the integer
$$N_L=\min\big\{\mu(A)|A\in\pi_2(M,L),\;\mu(A)>0\big\}.$$
Throughout this paper we assume that all $L$ are monotone with minimal Maslov number at least two, i.e., $N_L\geq 2$. In this case it is known that
$(M,\omega)$ is (spherically) \textit{monotone}, which means that
$$\omega(A)=2\kappa_Lc_1(A),\quad \forall A\in\pi_2(M),$$
where $c_1=c_1(TM,\omega)$ is the first Chern class of $M$. We denote by $C_M$ the minimal positive Chern number of $M$
$$C_M=\min\big\{c_1(A)|A\in\pi_2(M),\;c_1(A)>0\big\}.$$
If the homomorphisms $\omega$ and $\mu$ vanish on $\pi_2(M,L)$, i.e.,
$$\omega(A)=\mu(A)=0,\quad \forall A\in \pi_2(M,L),$$
we call $L$ \textit{weakly exact}, and in this case we have $N_L=\infty$. Similarly, if
$$\omega(A)=c_1(A)=0,\quad \forall A\in \pi_2(M),$$
we call $(M,\omega)$ \textit{symplectically aspherical}.
For any monotone Lagrangian $L$ of $(M,\omega)$ we have that $N_L$ divides $2C_M$. In what follows we denote by $A_L=\kappa_LN_L$ the minimal positive generator of $\omega(\pi_2(M,L))$, and let $A_L=0$ in the weakly exact case.
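For instance, for $L={\mathbb{R}} P^n\subset M={\mathbb{C}} P^n$ one has $C_M=n+1$ and $N_L=n+1$, so that $N_L$ indeed divides $2C_M=2(n+1)$.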
Since the Maslov numbers are multiples of $N_L$, for simplicity we also use the notation
$$\overline{\mu}=\frac{1}{N_L}\mu: \pi_2(M,L)\to{\mathbb{Z}}.$$
In this paper we work with ${\mathbb{Z}}_2$-coefficients unless otherwise specified. For a monotone Lagrangian submanifold $L\subset (M,\omega)$ we denote by $\Lambda={\mathbb{Z}}_2[t^{-1},t]$ the ring of Laurent polynomials in $t$, and grade it by $\deg t=-N_L$. Each element of $\Lambda$ is a semi-infinite sum $\sum_ka_kt^k$, $a_k\in\{0,1\}$. This means that for any $k_0\in{\mathbb{Z}}$ there are only finitely many terms with $a_k\neq 0$, $k\leq k_0$.
We define a valuation map $\Lambda\to {\mathbb{Z}}\cup \{-\infty\}$ as
\begin{equation}\label{e:val}
\nu\bigg(\sum\limits_{k}a_kt^k\bigg)=\max\big\{-k|a_k\neq 0\big\}.
\end{equation}
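Concretely, the valuation is easy to compute; the sketch below is our own illustration, encoding an element of $\Lambda$ as a dictionary mapping exponents $k$ to coefficients $a_k$:

```python
def valuation(poly):
    """nu(sum a_k t^k) = max{-k : a_k != 0}; poly maps exponent k -> a_k in Z_2."""
    support = [k for k, a in poly.items() if a % 2 != 0]
    if not support:
        return float("-inf")   # nu(0) = -infinity by convention
    return -min(support)

# nu(t^{-2} + t^3) = max{2, -3} = 2
print(valuation({-2: 1, 3: 1}))
```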
Similarly, let $\Gamma={\mathbb{Z}}_2[s^{-1},s]$ be the ring of Laurent polynomials in $s$, where the degree of $s$ is $-2C_M$. It is easy to see that there exists a natural degree-preserving embedding of rings $\Gamma\hookrightarrow\Lambda$ given by $s\mapsto t^{2C_M/N_L}$.
\subsection{Main results}\label{subsec:Main}
Recall that by the work of Oh~\cite{Oh1} if $L\subset M$ is a monotone Lagrangian, then the Floer homology $HF(L):=HF(L,L)$ with ${\mathbb{Z}}_2$-coefficients is well-defined.
Recall also~\cite{BC,BC2} that $L$ is said to be \emph{wide} if there exists an isomorphism $HF(L)\cong H(L,{\mathbb{Z}}_2)\otimes\Lambda$, and \emph{narrow} if $HF(L)=0$. Examples of wide Lagrangians are ${\mathbb{R}} P^n$ and the Clifford torus in ${\mathbb{C}} P^n$, as well as weakly exact Lagrangians. By the PSS isomorphism (see~Section~\ref{sec:pss}), we always have $HF(L)\cong QH(L)$, where $QH(L)$ is the Lagrangian quantum homology of $L$, which will be briefly recalled in Section~\ref{sec:lqh}.
Clearly, if $L$ is wide then $QH(L)=\widehat{QH}_*(L)\oplus [L]\Lambda$, where $\widehat{QH}_*(L)\cong H_{*<n}(L,{\mathbb{Z}}_2)\otimes \Lambda$. However, we emphasize that in general there is no canonical isomorphism $QH_*(L)\cong (H(L,{\mathbb{Z}}_2)\otimes\Lambda)_*$, see~\cite[Section~4.5]{BC}.
We notice that for every $p\geq n+1-N_L$, there exists a canonical embedding $H_p(L,{\mathbb{Z}}_2)\otimes\Lambda_*\hookrightarrow QH_{p+*}(L)$, see~\cite[Proposition~4.5.1]{BC}. In what follows, this fact will be used frequently.
In the following theorems we denote by $\circ: QH(L)\otimes QH(L)\longrightarrow QH(L)$ the Lagrangian quantum product, by $\bullet: QH(M,\Lambda)\otimes QH(L)\longrightarrow QH(L)$ the module structure, by $\nu$ and $I_\omega$ the valuation maps on $QH_*(L)$ and $QH(M,\Lambda)$ induced by (\ref{e:val}), and by $\ell:QH(L)\times C^\infty([0,1]\times M)\to{\mathbb{R}}$ the Lagrangian spectral invariant; see Sections~\ref{sec:qh}, \ref{sec:lqh} and \ref{sec:lsi} for the definitions.
\begin{thm}[Lagrangian Ljusternik--Schnirelman inequality~I]\label{thm:lls}
Let $L^n\subset M^{2n}$ be a monotone Lagrangian with minimal Maslov number $N_L\geq 2$.
For any $H\in C_c^\infty([0,1]\times M)$ and $\alpha,\beta\in QH_*(L)$ we have
\[
\ell(\alpha\circ \beta, H)\leq \ell(\beta, H)+A_L\nu(\alpha).
\]
Moreover, if $L$ is wide, $\alpha\in \widehat{QH}_*(L)$ and the intersections of $L$ and $\varphi_H(L)$ are isolated, then
\[
\ell(\alpha\circ \beta, H)< \ell(\beta, H)+A_L\nu(\alpha).
\]
\end{thm}
\begin{thm}[Lagrangian Ljusternik--Schnirelman inequality~II]\label{thm:ll}
Suppose that $L^n$ is a monotone Lagrangian of
a closed symplectic manifold $(M^{2n},\omega)$ with minimal Maslov number $N_L\geq 2$. For any $H\in C^\infty([0,1]\times M)$, $a\in QH(M,\Lambda)$ and $\alpha\in QH(L)$, we have
$$\ell(a\bullet\alpha,H)\leq\ell(\alpha,H)+I_\omega(a).$$
Furthermore, if $a\in \widehat {QH}(M):=H_{*<2n}(M)\otimes_\Gamma\Lambda$ and the intersections of $L$ and $\varphi_H(L)$ are isolated, then the strict inequality holds.
\end{thm}
\begin{rmk}
These two inequalities can be viewed as generalisations of the classical Ljusternik--Schnirelman inequality~\cite{Vi2,GG}. A Hamiltonian version of the Ljusternik--Schnirelman inequality was established by Ginzburg and G\"{u}rel~\cite{GG}. We also remark that the wideness condition on $L$ in Theorem~\ref{thm:lls} is not as strange as it looks at first glance, since all known monotone Lagrangians are either wide or narrow. It was conjectured by Biran and Cornea~\cite{BC} that any monotone Lagrangian submanifold is either narrow or wide. Note that $L\cap\varphi(L)=\emptyset$ implies that $L$ is narrow. So if the \textit{wide--narrow} conjecture holds, the wideness condition we impose on $L$ is not very restrictive.
\end{rmk}
\begin{thm}\label{thm:homess}
Let $L^n\subset M^{2n}$ be a monotone wide Lagrangian with minimal Maslov number $N_L\geq n+1$. Let $H\in C_c^\infty([0,1]\times M)$ be a Hamiltonian generating $\varphi_H$. Suppose that there exist homology classes $\alpha,\beta\in QH_*(L)$ satisfying $\beta\neq 0$ and $\alpha=\sum_i x_i \otimes\lambda_i$ with each homogeneous class $x_i\in H_*(L,{\mathbb{Z}}_2)$ satisfying $\deg(x_i)<n$ so that
$$\ell(\alpha\circ \beta,H)=\ell(\beta,H)+A_L\nu(\alpha).$$
Then $L\cap \varphi_H(L)$ is homologically non-trivial in $L$.
\end{thm}
Here a subset $S$ of a topological space $X$ is called \textit{homologically non-trivial} in $X$ if for every open neighborhood $V$ of $S$ the map $i_*:H_k(V)\to H_k(X)$ induced by the inclusion $i:V\hookrightarrow X$ is non-trivial.
\begin{rmk}
In~\cite{How} Howard established a Hamiltonian version of Theorem~\ref{thm:homess}, which is one of the main motivations for our present work. Recently, Buhovsky, Humili\`{e}re and Seyfaddini~\cite{BHS1} adapted Howard's method and gave a generalization of the Arnold conjecture to non-smooth settings on a closed symplectically aspherical manifold. Moreover, their results on the $C^0$--Arnold conjecture have also been generalized to the cotangent bundle of a closed manifold and many other cases, see~\cite{BHS2,Ka}.
\end{rmk}
\begin{thm}\label{thm:hess}
Let $L^n$ be a monotone Lagrangian of
a closed symplectic manifold $(M^{2n},\omega)$ with minimal Maslov number $N_L\geq 2$, and let $H\in C^\infty([0,1]\times M)$. Suppose that there exists a nonzero homology class $\alpha\in QH_*(L)$ and an element $a=\sum_i x_i \otimes_\Gamma\lambda_i\in QH(M,\Lambda)$ with each homogeneous class $x_i\in H_*(M,{\mathbb{Z}}_2)$ satisfying $\deg(x_i)<2n$ so that
$$\ell(a\bullet\alpha,H)=\ell(\alpha,H)+I_\omega(a).$$
Then $L\cap \varphi_H(L)$ is homologically non-trivial in $M$.
\end{thm}
Since there is no holomorphic disk with boundary on a weakly exact Lagrangian, the Lagrangian quantum product coincides with the intersection product. In this case by convention $N_L=\infty$, $A_L=0$ and $L$ is obviously wide. Thus Theorem~\ref{thm:homess} implies the following.
\begin{cor}
Let $L$ be a compact smooth Lagrangian submanifold of a symplectic manifold $(M,\omega)$ so that $\pi_2(M,L)=0$ or $L$ is weakly exact.
Let $\varphi_H\in\mathcal{H}am_c(M,\omega)$. If the total number of spectral invariants of $(L,H)$ (up to a shift) is smaller than the ${\mathbb{Z}}_2$-cuplength of $L$, then $L\cap \varphi_H(L)$ is homologically non-trivial in $L$. In particular, for $M=T^*L$ this recovers the corresponding result implicitly contained in the work of Buhovsky, Humili\`{e}re and Seyfaddini~\cite{BHS2}.
\end{cor}
\subsection{Applications}
The Lagrangian Ljusternik--Schnirelman inequalities~I and II
have the following immediate applications.
\begin{cor}\label{cor:num}
Let $L^n\subset M^{2n}$ be a monotone wide Lagrangian with minimal Maslov number $N_L\geq 2$.
If there exist $k$ nonzero homology classes $\alpha_i\in \widehat{QH}(L)$, $i=1,\ldots,k$, with $\nu(\alpha_i)<0$ and $0\neq\beta\in QH(L)$, then for any $H\in C_c^\infty([0,1]\times M)$
the total number of Lagrangian spectral invariants (up to a shift) of the pair $(L,H)$ is at least $k+1$.
\end{cor}
\begin{cor}
Let $L^n$ be a monotone non-narrow Lagrangian of
a closed symplectic manifold $(M^{2n},\omega)$ with minimal Maslov number $N_L\geq 2$. If there exist $k$ nonzero homology classes $a_i\in \widehat{QH}(M,\Lambda)$, $i=1,\ldots,k$, with $I_\omega(a_i)<0$, then for any $H\in C^\infty([0,1]\times M)$ the total number of Lagrangian spectral invariants (up to a shift) of the pair $(L,H)$ is at least $k+1$.
\end{cor}
\subsubsection{The Chekanov-type result}
To obtain some estimates of the number of the intersections $\varphi_H(L)\cap L$, we need further information about the Hofer distance~\cite{Ho2} between Lagrangians.
Let $$\mathcal{L}(L)=\{\varphi(L)|\varphi\in\mathcal{H}am_c(M,\omega)\}$$ denote the orbit of $L$ under the Hamiltonian diffeomorphism group $\mathcal{H}am_c(M,\omega)$. For $L_1,L_2\in\mathcal{L}(L)$ we define a function $d_H:\mathcal{L}(L)\times \mathcal{L}(L)\to{\mathbb{R}}$ by setting
$$d_H(L_1,L_2)=\inf_{H\in\mathcal{H}_c}\bigg\{\int^1_0{\rm osc}_M H_t dt\bigg|\varphi^1_H(L_1)=L_2\bigg\},$$
where ${\rm osc}_M H=\max_M H-\min_M H$. It turns out that $d_H$ is a genuine metric on $\mathcal{L}(L)$ which is invariant under the action of $\mathcal{H}am_c(M,\omega)$ in our setting, where $(M,\omega)$ is geometrically bounded and $L$ is a closed Lagrangian submanifold, see~\cite{Oh2,Ch3}. We call $d_H$ the \textit{Lagrangian Hofer metric} on $\mathcal{L}(L)$.
Recall that~\cite{Le,KS} for $QH(L)\neq 0$ the \textit{Lagrangian spectral pseudo-norm} of the pair $(L,H)$ with $H\in\mathcal{H}_c$ is defined by $$\gamma(L,H)=\ell([L],H)+\ell([L],\overline{H}),$$ where $\overline{H}(t,x)=-H(-t,x)$. This quantity is non-negative, see~(LS13) in Section~\ref{sec:lsi}.
Similarly to the Lagrangian Hofer metric, for $L_1,L_2\in\mathcal{L}(L)$ we define
$$\gamma(L_1,L_2):=\inf_{H\in\mathcal{H}_c}\big\{\gamma(L,H)\big|\varphi_H^1(L_1)=L_2\big\}.$$
This pseudo-metric, whenever it is defined, is non-degenerate and invariant under the action of $\mathcal{H}am_c(M,\omega)$, see Kislev and Shelukhin~\cite{KS}.
Note that by the Lagrangian control property~(LS) (see Section~\ref{sec:lsi}) we have for any $\varphi\in \mathcal{H}am_c(M,\omega)$,
$$\gamma(L,\varphi(L))\leq d_{H}(L,\varphi(L)).$$
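This inequality can be seen as follows (a minimal sketch, assuming the normalization $\ell([L],0)=0$ together with the two-sided estimate in the Lagrangian control property~(LS)). The control property gives
$$\ell([L],H)\leq\int_0^1\max_M H_t\,dt,\qquad \ell([L],\overline{H})\leq\int_0^1\max_M \overline{H}_t\,dt=-\int_0^1\min_M H_t\,dt,$$
where the last equality follows from $\overline{H}(t,x)=-H(-t,x)$ after the change of variables $t\mapsto -t$. Summing the two estimates yields
$$\gamma(L,H)=\ell([L],H)+\ell([L],\overline{H})\leq\int_0^1\big(\max_M H_t-\min_M H_t\big)\,dt=\int_0^1{\rm osc}_M H_t\,dt,$$
and taking the infimum over all $H$ with $\varphi^1_H(L)=\varphi(L)$ gives $\gamma(L,\varphi(L))\leq d_H(L,\varphi(L))$.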
In view of the above inequality, the following statement sharpens a Chekanov-type result by Liu~\cite{Liu} about the Arnold conjecture for wide Lagrangians with $N_L>\dim(L)$.
\begin{thm}\label{thm:ArnoldC}
Let $L^n\subset M^{2n}$ be a monotone Lagrangian with $N_L\geq 2$. Suppose that the singular homology $H(L,{\mathbb{Z}}_2)$ is generated as a ring (with the intersection product) by $H_{\geq n+1-N_L}(L,{\mathbb{Z}}_2)$. If $\gamma(L,\varphi(L))<A_L$, then
$$\sharp \big(L\cap\varphi(L)\big)\geq cl(L).$$
\end{thm}
We remark here that, under the assumption that $L$ and $\varphi(L)$ intersect transversely, a sharpened Chekanov-type result has already been established by Kislev and Shelukhin~\cite{KS}.
It is a standard fact that the Clifford torus $\mathbb{T}_{clif}^n\subset{\mathbb{C}} P^n$ is monotone with $N_{\mathbb{T}_{clif}^n}=2$, see~\cite{Cho,BC}.
Let $t_1,\ldots,t_n$ be a basis of $H_{n-1}(\mathbb{T}_{clif}^n,{\mathbb{Z}}_2)$ dual to the basis $[c_1],\ldots,[c_n]\in H_1(\mathbb{T}_{clif}^n,{\mathbb{Z}}_2)$ with respect to the classical intersection product. Clearly, $H_*(\mathbb{T}_{clif}^n,{\mathbb{Z}}_2)$ is generated by $t_1,\ldots,t_n$ and the fundamental class $[\mathbb{T}_{clif}^n]$.
Therefore, Theorem~\ref{thm:ArnoldC} implies the following.
\begin{cor}\label{e:clif}
Let $\varphi$ be a Hamiltonian diffeomorphism of ${\mathbb{C}} P^n$. If
$\gamma(\mathbb{T}_{clif}^n,\varphi(\mathbb{T}_{clif}^n))<A_{\mathbb{T}_{clif}^n}$, then
$\sharp \big(\mathbb{T}_{clif}^n\cap\varphi(\mathbb{T}_{clif}^n)\big)\geq n+1.$
\end{cor}
We expect that $\gamma(\mathbb{T}_{clif}^n,\varphi(\mathbb{T}_{clif}^n))<A_{\mathbb{T}_{clif}^n}$ always holds for any $\varphi\in\mathcal{H}am({\mathbb{C}} P^n,\omega_{FS})$. In other words, the Arnold conjecture would hold for $({\mathbb{C}} P^n,\mathbb{T}_{clif}^n)$ in the degenerate sense, but at present we are not able to prove this.
Examples of wide Lagrangians $L$ with $N_L>\dim L$ are studied by Biran and Cornea, see~\cite[Section~6]{BC}. Since $n+1$ is the maximal possible value of $N_L$ for a monotone Lagrangian $L$ in projective space ${\mathbb{C}} P^n$, see~\cite{Se}, we have the following corollary.
\begin{cor}\label{e:quad}
Suppose that $L$ is a monotone wide Lagrangian submanifold with $N_L=n+1$, or a Lagrangian submanifold of the quadric
$Q:=\{z_0^2+\cdots+z_n^2=z_{n+1}^2\}\subset{\mathbb{C}} P^{n+1}$ with $H_1(L;{\mathbb{Z}})=0$. If $\gamma(L,\varphi(L))<A_L$, then the number of intersections of $\varphi(L)$ with $L$ is greater than or equal to the ${\mathbb{Z}}_2$-cuplength of $L$.
\end{cor}
Other interesting explicit examples which satisfy the conditions of Theorem~\ref{thm:ArnoldC} include:
$$(M,L)=({\mathbb{C}} P^n,\;{\mathbb{R}} P^n), \quad ({\mathbb{C}} P^n\times( {\mathbb{C}} P^n)^-,\;\Delta_{{\mathbb{C}} P^n}), \quad(Q^n,\;S^n),\quad(Gr(2,2n+2),\;\mathbb{H} P^n),$$
where $\Delta_{{\mathbb{C}} P^n}$ is the diagonal in the product symplectic manifold $({\mathbb{C}} P^n\times( {\mathbb{C}} P^n)^-,\omega_{FS}\oplus(-\omega_{FS}))$, $Q^n\subset {\mathbb{C}} P^{n+1}$ ($n>1$) is the complex quadric as before, $S^n=Q^n\cap{\mathbb{R}} P^{n+1}$ is the natural monotone Lagrangian sphere in $Q^n$,
and $\mathbb{H} P^n$ ($n\geq1$) is the quaternionic projective space in
the complex Grassmannian $Gr(2,2n+2)$.
Moreover, these four examples satisfy $\gamma(L,\varphi(L))<A_L$ for any $\varphi\in\mathcal{H}am(M,\omega)$, see Kislev and Shelukhin~\cite[Theorem~G]{KS} for more precise estimates of these Lagrangian spectral norms. Therefore, by Theorem~\ref{thm:ArnoldC} we always have $\sharp(L\cap \varphi(L))\geq cl(L)$ in these four cases. In addition, for $L={\mathbb{R}} P^n, \Delta_{{\mathbb{C}} P^n}, \mathbb{H} P^n$ we have $cl(L)=\dim_{{\mathbb{Z}}_2}H_*(L;{\mathbb{Z}}_2)$. As a consequence, we obtain
\begin{cor}\label{cor:RPn}
If $(M,L)=({\mathbb{C}} P^n,\;{\mathbb{R}} P^n),\;({\mathbb{C}} P^n\times( {\mathbb{C}} P^n)^-,\;\Delta_{{\mathbb{C}} P^n}), \;(Gr(2,2n+2),\;\mathbb{H} P^n)$, then for all $\varphi\in\mathcal{H}am(M,\omega)$ we have
$$\sharp(L\cap\varphi(L))\geq \dim_{{\mathbb{Z}}_2}H_*(L,{\mathbb{Z}}_2).$$
\end{cor}
This improves~\cite[Theorem~D]{KS} by removing the condition that $L$ and $\varphi(L)$ intersect transversely in these three cases. In particular, this recovers a well-known result of Givental~\cite{Gi} that $\sharp({\mathbb{R}} P^n\cap\varphi({\mathbb{R}} P^n))\geq n+1$ for every Hamiltonian diffeomorphism $\varphi$ of ${\mathbb{C}} P^n$; see also Chang and Jiang~\cite{CJ} for a different proof. We also mention that this result was generalized by Lu~\cite{Lu} to the weighted projective spaces $({\mathbb{C}} P^n(\mathbf{q}),\;{\mathbb{R}} P^n(\mathbf{q}))$ with odd weights $\mathbf{q}=(q_1,\ldots,q_{n+1})\in{\mathbb{N}}^{n+1}$.
\subsubsection{Uniform lower bounds for Lagrangian intersections }
\begin{df}\label{def:fqf}
Let $L^n$ be a monotone Lagrangian of
a closed symplectic manifold $(M^{2n},\omega)$. We say that $M$ has a \textit{fundamental quantum factorization} (FQF for short) \textit{of length $k$} if there exist $u_1,\ldots,u_k\in H_{*<2n}(M,{\mathbb{Z}}_2)$ and $\tau\in{\mathbb{Z}}$ such that
$$t^\tau[M]= u_1*u_2*\cdots *u_k\quad \hbox{in}\; QH(M,\Lambda),$$
where the integer $\tau$ is called
the \textit{order} of the FQF. Clearly, $\tau\neq 0$ for degree reasons.
\end{df}
The following symplectic manifolds which contain monotone Lagrangians satisfy the FQF property.
\begin{example}
(1) The complex projective spaces ${\mathbb{C}} P^n$ and complex Grassmannians; (2) The quadric $Q=\{z_0^2+\cdots+z_n^2=z_{n+1}^2\}\subset{\mathbb{C}} P^{n+1}$; (3) $M\times P$ if $M$ satisfies this condition and $P$ is symplectically aspherical; (4) ${\mathbb{C}} P^{n}\times{\mathbb{C}} P^{m_1}\times\cdots\times{\mathbb{C}} P^{m_r}$ with $m_1+1,\ldots,m_r+1$ divisible by $n+1$ and equally normalized symplectic structures, see, e.g.,~\cite{GG,BC}.
For relevant calculations of the quantum homology we refer to McDuff and Salamon~\cite{MS}.
\end{example}
\begin{thm}\label{thm:two}
Let $L^n$ be a monotone non-narrow Lagrangian of
a closed symplectic manifold $(M^{2n},\omega)$ with $N_L\geq 2$, and let $\varphi\in \mathcal{H}am(M,\omega)$. Suppose that $M$ has a FQF of length $k$ with order $\tau$. If the intersections of $L$ and $\varphi(L)$ are isolated, then the number of points of $L\cap\varphi(L)$ is at least $\lceil k/\tau\rceil$. Here $\lceil\cdot\rceil$ denotes the ceiling function, i.e., $\lceil x\rceil$ is the smallest integer that is greater than or equal to $x$.
\end{thm}
\begin{example}
Let $L\subset {\mathbb{C}} P^n$ be a closed Lagrangian submanifold with minimal Maslov number $N_L$. We
denote by $h=[{\mathbb{C}} P^{n-1}]\in H_{2n-2}({\mathbb{C}} P^n,{\mathbb{Z}}_2)$ the class of a hyperplane in the quantum homology $QH({\mathbb{C}} P^n,\Lambda)$. Then we have
\[
h^{*k}=
\begin{cases}h^{\cap k},\quad \ \ & 0\leq k\leq n,\\
[{\mathbb{C}} P^n]t^{(2n+2)/N_L}, \quad \ \ & k=n+1.
\end{cases}
\]
If $2H_1(L;{\mathbb{Z}})=0$, then $L$ is monotone with $N_L=n+1$, see~\cite[Section 6]{BC}.
So for this Lagrangian, ${\mathbb{C}} P^n$ has a FQF of length $n+1$ with order $2$.
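As a quick sanity check on the order (a sketch using only the product formula displayed above): with $N_L=n+1$ the quantum power of the hyperplane class reads
$$h^{*(n+1)}=[{\mathbb{C}} P^n]\,t^{(2n+2)/(n+1)}=[{\mathbb{C}} P^n]\,t^2,$$
that is, $t^2[{\mathbb{C}} P^n]=h*\cdots *h$ with $n+1$ factors, which is precisely a FQF of length $k=n+1$ and order $\tau=2$. Theorem~\ref{thm:two} then yields the lower bound $\lceil (n+1)/2\rceil$ appearing in Corollary~\ref{cor:quadric}.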
\end{example}
\begin{example}
Let
$Q\subset{\mathbb{C}} P^{n+1}$ be the quadric as before. If $L\subset Q$ is a Lagrangian submanifold with $H_1(L;{\mathbb{Z}})=0$, then $L$ is monotone with $N_L=2n$, see~\cite[Section 6]{BC}.
For $n=2k$, let $a,b\in H_{2k}(Q;{\mathbb{Z}})$ be two classes of complex $k$-dimensional planes lying in $Q$ by which $H_n(Q,{\mathbb{Z}})$ is generated, see~\cite{GH}.
Let $u\in H_{2n}(Q;{\mathbb{Z}})$ be the fundamental class and $p\in H_0(Q;{\mathbb{Z}})$ the class of a point.
The quantum product on $QH(Q,\Lambda)$ satisfies: (i) if $k$ is odd then $a*b=p$ and $a*a=b*b=ut$; (ii) if $k$ is even then $a*a=b*b=p$ and $a*b=ut$, see~\cite[Section 6.3.1]{BC}.
So for such $L$, the quadric $Q$ has a FQF of length $2$ with order $1$.
\end{example}
\begin{cor}\label{cor:quadric}
If $L$ is a Lagrangian in ${\mathbb{C}} P^n$ with $2H_1(L;{\mathbb{Z}})=0$, or a Lagrangian in the quadric $Q^{2k}\subset {\mathbb{C}} P^{2k+1}$ ($k\geq 1$) with $H_1(L;{\mathbb{Z}})=0$, then for any $\varphi\in\mathcal{H}am(M,\omega)$ the number of points of $L\cap\varphi(L)$ is at least $\lceil\frac{n+1}{2}\rceil$ or $2$, respectively.
\end{cor}
Here we note that a Lagrangian $L\subset {\mathbb{C}} P^n$ with $2H_1(L;{\mathbb{Z}})=0$ has minimal Maslov number $n+1$ and is known to be homotopy equivalent to ${\mathbb{R}} P^n$, see~\cite{KoS}. It was conjectured by Biran and Cornea~\cite{BC} that such a Lagrangian $L$ must be diffeomorphic, or even Hamiltonian isotopic, to ${\mathbb{R}} P^n$. If the second alternative holds, then for $({\mathbb{C}} P^n,{\mathbb{R}} P^n)$ the uniform lower bound given by Corollary~\ref{cor:RPn} is obviously better than the one given by Corollary~\ref{cor:quadric}.
We also note that for the natural monotone Lagrangian sphere $L=S^{2k}\subset Q^{2k}$, the uniform lower bound given by Corollary~\ref{cor:quadric} is better than the one by Corollary~\ref{e:quad} since $cl(S^{2k})=1<2=\dim_{{\mathbb{Z}}_2}H_*(S^{2k},{\mathbb{Z}}_2)$.
Similarly to Definition~\ref{def:fqf}, using the Lagrangian quantum product on $QH(L)$ one can propose the following definition.
\begin{df}
Let $L^n\subset M^{2n}$ be a monotone wide Lagrangian with $N_L\geq 2$. Suppose that $H_*(L,{\mathbb{Z}}_2)$ is generated as a ring by $H_{\geq n+1-N_L}(L,{\mathbb{Z}}_2)$. We say that $L$ has a \textit{Lagrangian fundamental quantum factorization} (LFQF for short) \textit{of length $l$} if there exist $v_1,\ldots,v_l\in H_{n-N_L<*<n}(L,{\mathbb{Z}}_2)$ and $\nu\in{\mathbb{Z}}$ such that
$$t^\nu[L]= v_1\circ v_2\circ \cdots \circ v_l\quad \hbox{in}\; QH(L),$$
where $\nu$ is called
the \textit{order} of LFQF. By degree reasons again we have $\nu\neq0$.
\end{df}
Correspondingly, one can prove the following.
\begin{thm}\label{thm:more}
Let $L^n\subset M^{2n}$ be a monotone wide Lagrangian with $N_L\geq 2$, and let $\varphi\in \mathcal{H}am_c(M,\omega)$. Suppose that $H_*(L,{\mathbb{Z}}_2)$ is generated as a ring by $H_{\geq n+1-N_L}(L,{\mathbb{Z}}_2)$. If $L$ has a LFQF of length $l$ with order $\nu$, then the number of points of $L\cap\varphi(L)$ is at least $\lceil l/\nu \rceil$.
\end{thm}
It is easy to see that the
only monotone Lagrangian submanifold of the sphere $S^2$ is the ``equator'', by which we mean an embedded circle separating the sphere into two disks of equal area.
It is a standard fact that such an equator $L$ is
\textit{non-displaceable} in the sense that for every Hamiltonian diffeomorphism $\varphi$ of $S^2$ we have $L\cap\varphi(L)\neq \emptyset$. Moreover, we have $\sharp (L\cap\varphi(L))\geq 2$. This fact can be confirmed in many ways, for instance, by using Corollary~\ref{cor:RPn}. Here we provide a simple proof from the viewpoint of Lagrangian Ljusternik--Schnirelman theory.
In fact, for the equator $L\subset {\mathbb{C}} P^1=S^2$, an easy calculation shows that $[pt]\circ [pt]=[L]t$, where $[pt]$ is the point class of $L$ and $\deg t=-2$.
So $L$ has a LFQF of length $2$ with order $1$, and thus Theorem~\ref{thm:more} implies that $L\cap\varphi(L)$ has at least two elements for every $\varphi\in\mathcal{H}am(S^2)$.
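A quick degree check (a sketch, using $\deg t=-2$ and the fact that the product $\circ$ has degree $-n$ with $n=1$ here):
$$\deg\big([pt]\circ[pt]\big)=0+0-1=-1,\qquad \deg\big([L]t\big)=1+\deg t=1-2=-1,$$
so the two sides of $[pt]\circ [pt]=[L]t$ indeed live in the same degree.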
In general, let $t_1,\ldots,t_n$ be a basis of $H_{n-1}(\mathbb{T}_{clif}^n,{\mathbb{Z}}_2)$ dual to the basis $[c_1],\ldots,[c_n]\in H_1(\mathbb{T}_{clif}^n,{\mathbb{Z}}_2)$ as before. It can be shown that $t_i\circ t_j+t_j\circ t_i=[\mathbb{T}_{clif}^n]t$ for $i\neq j$, and $t_i\circ t_i=[\mathbb{T}_{clif}^n]t$ for every $i$, see~\cite{BC,Cho2}. So $\mathbb{T}_{clif}^n$ has a LFQF of length $2$ with order $1$, and thus
\begin{cor}
For all Hamiltonian diffeomorphisms $\varphi$ of ${\mathbb{C}} P^n$, $\sharp (\mathbb{T}_{clif}^n\cap\varphi(\mathbb{T}_{clif}^n))\geq 2.$
\end{cor}
\subsection{Organization of the paper} In Section~\ref{sec:pre} we collect preliminaries from Floer theory, including Lagrangian Floer homology and Hamiltonian Floer homology, and from quantum homology, including Lagrangian quantum homology and the quantum homology of the ambient manifold. In addition, we review the algebraic structures on Lagrangian quantum homology and the relations between Floer homology and quantum homology given by the Piunikhin-Salamon-Schwarz isomorphisms. In Section~\ref{sec:spectinv} we list the basic properties of the two spectral invariants, namely the Hamiltonian spectral invariant and the Lagrangian spectral invariant. In particular, we describe their relations with the classical Ljusternik--Schnirelman theory. In Section~\ref{sec:mainthms} we prove the main results presented in the introduction, including Theorems~\ref{thm:lls}--\ref{thm:hess}. In Section~\ref{sec:last} we prove
Theorems~\ref{thm:ArnoldC} and \ref{thm:two}. In Section~\ref{sec:remarks} we outline some directions for further study of Ljusternik--Schnirelman theory.
\section*{Acknowledgements}
The author is deeply indebted to Jun Zhang for explaining to me the concepts of various Novikov rings/fields, for reading a preliminary version of the paper and for making valuable suggestions. I warmly thank Weiwei Wu for stimulating discussions and persistent encouragement while preparing this paper. Many ideas in this work stem from the excellent work of Hofer and Zehnder~\cite{HZ}, Schwarz~\cite{Sc2} and Ginzburg and G\"{u}rel~\cite{GG}. I have benefited a lot from the fascinating papers by Biran and Cornea~\cite{BC,BC2,BC3} and Leclercq and Zapolsky~\cite{LZ}. Without their pioneering work the present paper would have been impossible to finish. I am grateful to all of them. I thank Lev Buhovsky for explaining to me the main result in~\cite{BHS1}. I would like to thank my colleague Huang Hong for many valuable discussions. Besides, I wish to thank Guangcun Lu for helpful remarks and for pointing out to me the work of Givental~\cite{Gi}. Finally, I would like to mention that, close to the completion of this paper, I was surprised to find that at the end of~\cite{KS} Kislev and Shelukhin mention that they were planning to use Ljusternik--Schnirelman theory to give a lower bound on the number of monotone Lagrangian intersections; so far I do not know whether they have completed that work. The author is supported by NSFC 11701313 and the Fundamental Research Funds for the Central Universities 2018NTST18 at Beijing Normal University.
\section{Preliminaries}\label{sec:pre}
\subsection{Floer homology}
In this section we recall the construction of two Floer theories, that is, Lagrangian Floer homology and Hamiltonian Floer homology.
\subsubsection{Lagrangian Floer homology}
Given a Hamiltonian $H\in\mathcal{H}_c$ and a Lagrangian $L\subset M$ we consider the space of contractible chords relative to $L$
$$\mathcal{P}_L=\big\{x:[0,1]\to M|x(0), x(1)\in L\; \hbox{and } [x]=0\in\pi_1(M,L)\big\}.$$
For every $x\in\mathcal{P}_L$, there is a capping $\overline{x}:{\mathbb{D}}\cap\mathbb{H}\to M$ such that $\overline{x}|_{\partial{\mathbb{D}}\cap\{\im(z)\geq 0\}}=x$ and $\overline{x}|_{{\mathbb{D}}\cap{\mathbb{R}}}\subset L$, where ${\mathbb{D}}=\{z\in{\mathbb{C}}:|z|\leq 1\}$ and $\mathbb{H}=\{z\in{\mathbb{C}}:\im(z)\geq 0\}$. Two cappings $\overline{x},\overline{x}'$ are said to be equivalent if the glued map $v=\overline{x}\sharp\overline{x}':({\mathbb{D}},\partial{\mathbb{D}})\to (M,L)$ along the common boundary chord, defined by $\overline{x}(z)$ for $z\in{\mathbb{D}}\cap\mathbb{H}$
and $\overline{x}'(z)$ for $z\in{\mathbb{D}}\cap\overline{\mathbb{H}}$, satisfies
$\omega(v)=\mu_L(v)=0$. Here $\overline{\mathbb{H}}=\{z\in{\mathbb{C}}:\im(z)\leq 0\}$. Denote by $\widetilde{\mathcal{P}}_L$ the cover of $\mathcal{P}_L$ consisting of all equivalence classes $[x,\overline{x}]$ of pairs $(x,\overline{x})$ with $x\in \mathcal{P}_L$, up to the equivalence relation described above.
The action of $[x,\overline{x}]$ is given by
$$\mathcal{A}_{H,L}([x,\overline{x}])=\int^1_0H(t,x(t))-\int_{\overline{x}}\omega.$$
We denote by $\spec(H,L)$ the set of critical values of $\mathcal{A}_{H,L}$ on $\widetilde{\mathcal{P}}_L$. It is well known that $\spec(H,L)$ is a closed nowhere dense subset of ${\mathbb{R}}$.
Suppose now that $(H,L)$ is non-degenerate, which means that $\varphi^1_H(L)$ intersects $L$ transversely. Fix a family of time-dependent almost complex structures $\{J_t\}_{t\in[0,1]}$ so that $J_t\in\mathcal{J}$ for each $t\in[0,1]$. Take a base point $\eta_0=[x_0,\overline{x}_0]\in \widetilde{\mathcal{P}}_L$ and define the index of each element $\widetilde{x}=[x,\overline{x}]$ by $\mu(\widetilde{x})=\mu_{V}(\widetilde{x},\eta_0)$, where $\mu_V$ is the Viterbo-Maslov index. For a detailed construction of this index we refer to~\cite{Vi3}. Here we remark that $\mu_V$ is a relative Maslov index which depends on the choice of a smooth map
$u:[0,1]^2\to M$ such that $u(0,t)=x_0(t)$, $u(1,t)=x(t)$ and $u(s,0),u(s,1)\in L$. A different choice of the base point in $\widetilde{\mathcal{P}}_L$ gives rise to a shift of degrees. To normalize the index we require that if $H$ is a lift of a $C^2$-small Morse function $f$ on $L$ to a Weinstein neighborhood of $L$, then for a constant path with constant capping $(q,\overline{q})$ at a critical point $q$ of $f$, we have $\mu([q,\overline{q}])=\ind_f(q)$.
The Floer complex is a $\Lambda$-module
$$CF_*(L;H,J)= {\mathbb{Z}}_2\langle\widetilde{\mathcal{P}}_L\rangle\otimes\Lambda$$
which is graded by the formula $|\tilde{x}\otimes t^r|=\mu(\tilde{x})-rN_L$. For every $\widetilde{x}=[x,\overline{x}]\in\widetilde{\mathcal{P}}_L$ we define the differential
$$d_F\widetilde{x}=\sum\sharp_2\mathcal{M}(\widetilde{x},\widetilde{y})\widetilde{y},$$
where $\mathcal{M}(\widetilde{x},\widetilde{y})$ is the moduli space of solutions $u:{\mathbb{R}}\times[0,1]\to M$ of Floer's equation
$$\partial_su+J\partial_tu+\nabla H_t(u)=0$$
which satisfy $u({\mathbb{R}}\times\{0,1\})\subset L$. Here the sum is taken over all $\widetilde{y}=[y,\overline{y}]$ satisfying $\overline{y}=\overline{x}\sharp u$ and $\mu(\widetilde{x})-\mu(\widetilde{y})=1$. Extending $d_F$ by linearity over $\Lambda$ provides a differential on the complex $CF_*(L;H,J)$, i.e., $d_F\circ d_F=0$. The \textit{Lagrangian Floer homology} is defined to be the homology of the complex $(CF_*(L;H,J),d_F)$, denoted by $HF_*(L;H,J)$. It can be shown that this homology is independent of the choice of the family of almost complex structures and invariant under Hamiltonian perturbations.
In particular, there exists a canonical isomorphism $HF_*(L;H,J)\cong HF_*(L;K,J')$ for any two Hamiltonian functions $H,K$ so that the corresponding Lagrangian Floer homology can be well defined.
The ring $\Lambda$ acts on $CF_*(L;H,J)$ by $$t^{\bar{\mu}(A)}\cdot [x,\overline{x}]=[x,\overline{x}\sharp A].$$
The Floer complex $CF_*(L;H,J)$ is filtered by the action $\mathcal{A}_{H,L}$ as follows:
$$\mathcal{A}_{H,L}\big(\sum \lambda_k[x_k,\overline{x}_k]\big)=\max\limits_k\big\{\mathcal{A}_{H,L}([x_k,\overline{x}_k])+A_L\nu(\lambda_k)\big\}.$$
For $a\in{\mathbb{R}}\setminus\spec(H,L)$, we define
$$CF_*^a(L;H,J):=\big\{\beta\in CF_*(L;H,J)\big|\mathcal{A}_{H,L}(\beta)<a\big\}.$$
It is easy to show that $CF_*^a(L;H,J)$ is a subcomplex of $CF_*(L;H,J)$. The homology of this subcomplex is denoted by $HF_*^a(L;H,J)$.
\subsubsection{Hamiltonian Floer homology}
Consider a $1$-periodic Hamiltonian $H:S^1\times M\to {\mathbb{R}}$.
Let $\mathcal{O}(H)=\{\overline{\gamma}=(\gamma,\widehat{\gamma})\}/\sim$,
where $\gamma$ is a contractible $1$-periodic orbit of the Hamiltonian flow of $H$, $\widehat{\gamma}:D\to M$ is a disk-capping of $\gamma$ (i.e., $\widehat{\gamma}|_{\partial D}=\gamma$) and the equivalence relation $\sim$ is $\overline{\gamma}\sim\overline{\gamma}'$ if $\gamma=\gamma'$ and $\omega(\widehat{\gamma})=\omega(\widehat{\gamma}')$.
For a generic pair $(H,J)$ consisting of a $1$-periodic Hamiltonian $H$ and an almost complex structure $J$, the Floer complex $CF_*(H,J)={\mathbb{Z}}_2\langle\mathcal{O}(H)\rangle$ is well defined. The complex $CF_*(H,J)$ is filtered by the values of the action functional
$$\mathcal{A}_H(\overline{\gamma})=\int_{S^1}H(t,\gamma(t))dt-\int_D\widehat{\gamma}^*\omega.$$
This action is compatible with the action of $\Gamma$ (recall that $\Gamma={\mathbb{Z}}_2[s^{-1},s]$, see~Section~\ref{subsec:notation}) which is given by $s\cdot(\gamma,\widehat{\gamma})=(\gamma,\widehat{\gamma}')$, where $\omega(\widehat{\gamma}')=\omega(\widehat{\gamma})+2\kappa_LC_M$. So one can extend it to the complex $CF(H,J;\Lambda)=CF_*(H,J)\otimes_\Gamma\Lambda$
by $\mathcal{A}_H(\overline{\gamma}\otimes t^k)=\mathcal{A}_H(\overline{\gamma})-kA_L$ (recall also that $A_L=\kappa_LN_L$). In this paper we are mainly interested in the Floer complex $CF_*(H,J;\Lambda)$. The resulting homology is independent of the choice of $J$ by standard continuation maps, and hence we denote it by $HF_*(H,\Lambda)$. Given $\nu\in{\mathbb{R}}$, we denote by $CF^\nu_*(H,J;\Lambda)$ the subcomplex of the Floer complex generated by all elements $\overline{\gamma}\otimes\lambda$ of action at most $\nu$, and by $HF^\nu_*(H,\Lambda)$ the corresponding homology.
\subsection{ Quantum homology}\label{sec:qh}
The quantum homology $QH(M)=H(M,{\mathbb{Z}}_2)\otimes\Gamma$ of a closed monotone manifold $(M,\omega)$ is a module over the ring $\Gamma={\mathbb{Z}}_2[s^{-1},s]$. Using the degree-preserving embedding of rings $\Gamma\hookrightarrow\Lambda$ given by $s\mapsto t^{2C_M/N_L}$, we define the obvious extension of the quantum homology:
$$QH(M,\Lambda)=H(M,{\mathbb{Z}}_2)\otimes\Lambda=QH(M)\otimes_\Gamma\Lambda.$$
Now we extend the valuation map $\nu$ (see~(\ref{e:val})) from $\Lambda$ to $QH(M,\Lambda)$ as
$$I_\omega(a)=A_L\max\big\{\nu(\lambda_k)|a_k\neq 0\big\}$$
for $a=\sum_{k}a_k\otimes\lambda_k\in QH(M,\Lambda)$ with $a_k\in H(M,{\mathbb{Z}}_2)$ and $\lambda_k\in\Lambda$.
We endow $QH(M,\Lambda)$ with the quantum intersection product
$$*:QH_i(M,\Lambda)\otimes QH_j(M,\Lambda)\longrightarrow QH_{i+j-2n}(M,\Lambda),$$
see McDuff and Salamon~ \cite{MS} for the definition. This homology is an associative ring with unit $[M]\in QH_{2n}(M,\Lambda)$. Clearly, the quantum product has degree $-2n$. Furthermore, using a Morse-theoretical approach to quantum homology one can define a ring isomorphism
$${\rm PSS}:QH(M,\Lambda)\longrightarrow HF(H,\Lambda)$$
which is induced by the Hamiltonian version of the Piunikhin-Salamon-Schwarz homomorphism
~\cite{PSS} $\widetilde{{\rm PSS}}:C(f,g;\Lambda):={\mathbb{Z}}_2\brat{\crit (f)}\otimes \Lambda\to CF(H,J;\Lambda)$, where the pair $(f,g)$ is Morse-Smale with respect to the Morse function $f:M\to{\mathbb{R}}$ and the Riemannian metric $g$ on $M$, and the pair $(H,J)$ is generic so that the Floer complex $CF_*(H,J;\Lambda)$ is well defined.
\subsection{Lagrangian quantum homology}\label{sec:lqh}
Recall that by the work of Biran and Cornea~\cite{BC,BC2,BC3} if $L$ is a closed monotone Lagrangian then the Lagrangian quantum homology $QH(L)$ is well defined. We briefly recall the construction as follows. Let $f$ be a Morse function on $L$ and let $\rho$ be a Riemannian metric on $L$ so that the pair $(f,\rho)$ is Morse-Smale. For an almost complex structure $J\in\mathcal{J}$ we define $$C(L;f,\rho,J):={\mathbb{Z}}_2\brat{\crit (f)}\otimes \Lambda$$
as the complex generated by the critical points of $f$, which is graded by the Morse indices of $f$ and the grading of $\Lambda$. For two points $x,y\in\crit(f)$ and a class $A\in\pi_2(M,L)$, we consider the space of sequences $(u_1,\ldots, u_l)$ of possible length $l\geq 1$ satisfying
\begin{itemize}
\item $u_i:(D,\partial D)\to (M,L)$ is a non-constant $J$-holomorphic disk, where $D$ denotes the closed unit disk in ${\mathbb{C}}$.
\item $u_1(-1)\in W^u(x)$. Hereafter $W^u(x)$ denotes the unstable submanifold at $x$ of the negative gradient flow of $f$ in $L$.
\item For every $i\in\{1,\ldots,l-1\}$, $u_{i+1}(-1)\in W^u(u_i(1))$.
\item $y\in W^u(u_l(1))$.
\item $[u_1]+\cdots+[u_l]=A$.
\end{itemize}
Two sequences $(u_1,\ldots,u_l)$ and $(u'_1,\ldots,u'_{l'})$ are said to be equivalent if $l=l'$ and for every $1\leq i\leq l$ there exists $\tau_i\in\aut (D)$ such that $\tau_i(-1)=-1, \tau_i(1)=1$ and $u_i=u_i'\circ \tau_i$. Let $\mathcal{M}_{prl}(x,y;A;f,\rho, J)$ be the quotient space with respect to this equivalence relation. Elements of this space are called \textit{pearly trajectories connecting $x$ to $y$}. A typical pearly trajectory is illustrated in Figure~\ref{fig:pear}.
\begin{figure}[ht]
\centering
\includegraphics[scale=0.9]{Figure1}
\caption{A typical pearly trajectory.}
\label{fig:pear}
\end{figure}
We extend the definition of the space of pearly trajectories to the case $A=0$ by defining $\mathcal{M}_{prl}(x,y;0;f,\rho, J)$ to be the space of unparametrized trajectories of the negative gradient flow of $f$ connecting $x$ to $y$. The virtual dimension of $\mathcal{M}_{prl}(x,y;A;f,\rho, J)$ is given by
$$\virdim\mathcal{M}_{prl}(x,y;A;f,\rho, J)=\ind_f(x)-\ind_f(y)+\mu(A)-1.$$
Here $\ind_f(x)$ denotes the Morse index of the critical point $x$ of $f$.
Suppose that $\virdim\mathcal{M}_{prl}(x,y;A;f,\rho, J)=0$.
For a generic $J\in\mathcal{J}$ the space $\mathcal{M}_{prl}(x,y;A;$ $f,\rho, J)$ is a smooth compact $0$-dimensional manifold. Put
$$d(x)=\sum\limits_{y,A}\sharp_2\mathcal{M}_{prl}(x,y;A;f,\rho, J)yt^{\bar{\mu}(A)}$$
and extend $d$ to $C(L;f,\rho,J)$ by linearity over $\Lambda$. Here $\sharp_2\mathcal{M}_{prl}(x,y;A;f,\rho, J)$ denotes the number of elements of $\mathcal{M}_{prl}(x,y;A;f,\rho, J)$ modulo $2$. The compactness and gluing properties of the moduli spaces of virtual dimension $1$ lead to $d^2=0$. We call the homology of $(C(L;f,\rho,J),d)$ the \textit{Lagrangian quantum homology}. Different choices of the datum $\mathcal{D}=(f,\rho,J)$ give rise to isomorphic homologies via continuation isomorphisms. We denote by $QH(L)$ the abstract Lagrangian quantum homology of $L$ (i.e., the limit of the corresponding direct system of graded modules), and by $QH(L;\mathcal{D})$ the homology of $(C(L;f,\rho,J),d)$ for the specific choice of the datum $\mathcal{D}=(f,\rho,J)$. The homology $QH(L)$ is invariant with respect to the action of the symplectomorphism group. This means that for any symplectomorphism $\varphi$ of $M$ with $L'=\varphi(L)$ there exists a chain map from
$C(L;f,\rho,J)$ to $ C(L';f^\varphi,\rho^\varphi,J^\varphi)$
which induces an isomorphism
$$\varphi_*:QH(L;f,\rho,J)\longrightarrow QH(L';f^\varphi,\rho^\varphi,J^\varphi),$$
where $f^\varphi=f\circ\varphi^{-1}$, and $\rho^\varphi,J^\varphi$ are obtained by the pushforwards of $\rho$ and $J$ via $\varphi|_L$ and $\varphi$, respectively.
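Returning to the differential $d$ defined above, here is a consistency check on the grading (a sketch, assuming the conventions $\deg t=-N_L$ and $\bar{\mu}(A)=\mu(A)/N_L$): a pair $(y,A)$ contributes to $d(x)$ only when $\virdim\mathcal{M}_{prl}(x,y;A;f,\rho, J)=0$, i.e., $\ind_f(x)-\ind_f(y)+\mu(A)-1=0$, so the generator $yt^{\bar{\mu}(A)}$ appearing in $d(x)$ has degree
$$\ind_f(y)-\bar{\mu}(A)N_L=\ind_f(y)-\mu(A)=\ind_f(x)-1,$$
confirming that the differential $d$ has degree $-1$.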
We define on the chain complex $(C(L;f,\rho,J),d)$ a map $$\epsilon_L:C(L;f,\rho,J)\longrightarrow\Lambda$$ which is given by $\epsilon_L(x)=1$ for all $x\in\crit_0(f)$ and $\epsilon_L(x)=0$ for all critical points of $f$ with strictly positive index. This is a chain map, and the induced map on $QH(L;\mathcal{D})$ is called the \textit{augmentation}.
Now we extend the valuation map $\nu$ (see~(\ref{e:val})) from $\Lambda$ to $C(L;\mathcal{D})$ as
$$\nu(x)=\max\big\{\nu(\lambda_k)|x_k\neq 0\big\}$$
for $x=\sum_{k}x_k\lambda_k\in C(L;\mathcal{D})$, where the $x_k$ are critical points of $f$. Then, by abuse of notation, we define the valuation on $QH(L;\mathcal{D})$ by setting
$$\nu(\alpha)=\inf \big\{\nu(x)|[x]=\alpha\big\}$$
and $\nu(0)=-\infty$. A similar valuation was first introduced in the work of Entov and Polterovich~\cite{EP}.
\subsubsection{ Lagrangian quantum structures}\label{subsec:lqs}
We give a rapid review of three algebraic structures (see~\cite{BC,BC2,BC3}) of Lagrangian quantum homology which will be useful in this paper.
It is shown that the homology $QH(L)$ carries a supercommutative associative product
$$\circ: QH_i(L)\otimes QH_j(L)\longrightarrow QH_{i+j-n}(L),\quad \alpha\otimes \beta\longmapsto \alpha\circ \beta$$
for every $i,j\in {\mathbb{Z}}$, with unit given by the fundamental class of $L$, $[L]\in QH_n(L)$.
Also, $QH(L)$ has the structure of a module over the quantum homology $QH(M,\Lambda)$. Specifically, for every $i,j\in{\mathbb{Z}}$ there exists a $\Lambda$-bilinear map
$$\bullet:QH_i(M,\Lambda)\otimes QH_j(L)\longrightarrow QH_{i+j-2n}(L),\quad a\otimes\alpha\longmapsto a\bullet\alpha.$$
These two structures make the ring $QH(L)$ a two-sided algebra over the ring $QH(M,\Lambda)$. This means that for any $a\in QH(M,\Lambda)$ and $\alpha,\beta\in QH(L)$, we have $$a\bullet(\alpha\circ \beta)=(a\bullet\alpha)\circ \beta=\alpha\circ(a\bullet\beta).$$
The dual cochain complex of the chain complex $C(L;\mathcal{D})$ is given by
$$C^*(L;\mathcal{D})=\big(\hom_{{\mathbb{Z}}_2}\big({\mathbb{Z}}_2\brat{\crit (f)},{\mathbb{Z}}_2\big)\otimes \Lambda,d^*\big),$$
where for each $x\in\crit_k(f)$ the degree of its dual $x^*\in\hom_{{\mathbb{Z}}_2}({\mathbb{Z}}_2\brat{\crit (f)},{\mathbb{Z}}_2)$ is $k$, and the differential $d^*$ is the dual of $d$. The cohomology of this complex is called the \textit{Lagrangian quantum cohomology} of $L$, which we denote by $QH^*(L;\mathcal{D})$; correspondingly, $QH^*(L)$ denotes the abstract Lagrangian quantum cohomology of $L$. Clearly, we have an evaluation
$\langle\cdot,\cdot\rangle:QH^*(L)\otimes QH_*(L)\to \Lambda$
which is the $\Lambda$-linear extension of the Kronecker pairing. Moreover, there is a canonical isomorphism
$$\mathcal{T}:QH_k(L)\longrightarrow QH^{n-k}(L)$$
called the \textit{Poincar\'{e} duality map}, which is determined by the bilinear map
$$\overline{\mathcal{T}}:QH_k(L)\otimes QH_l(L)\stackrel{\circ}{\longrightarrow}QH_{k+l-n}(L)\stackrel{\epsilon_L}{\longrightarrow}\Lambda$$
via the relation $\mathcal{T}(x)(y)=\overline{\mathcal{T}}(x\otimes y)$.
\subsection{Piunikhin-Salamon-Schwarz isomorphisms}\label{sec:pss}
For generic $(f,\rho,H,J)$ there are chain morphisms
$\psi: C_*(L;f,\rho,J)\to CF_*(L;H,J)$ which induce canonical isomorphisms
\begin{equation}\label{e:PSS}
\Psi_{PSS}:QH_*(L;f,\rho,J)\longrightarrow HF_*(L;H,J).
\end{equation}
These isomorphisms are analogs of the Piunikhin-Salamon-Schwarz isomorphisms~\cite{PSS}, which are constructed between Hamiltonian Floer homology and singular homology. In the Lagrangian setting such isomorphisms have been studied in~\cite{Oh2,Oh3,KM,Al,BC,BC2}. Here we mainly follow the construction of Biran and Cornea~\cite{BC,BC2}. Given $q\in\crit(f)$, $\gamma=[x,\overline{x}]\in\crit\mathcal{A}_{H,L}$ and $A\in\pi_2(M,L)$, consider sequences $(u_1,\ldots,u_l)$ of maps satisfying the following conditions:
\begin{itemize}
\item every $u_i:(D,\partial D)\to (M,L)$, $i=1,\ldots,l-1$, is a $J$-holomorphic disk (which is allowed to be constant).
\item $u_1(-1)\in W^u(q)$.
\item For every $i\in\{1,\ldots,l-2\}$, $u_{i+1}(-1)\in W^u(u_i(1))$.
\item $u_l:{\mathbb{R}}\times[0,1]\to M$ is a solution of the equation
$$\partial_su+J\partial_tu+\chi(s)\nabla H_t(u)=0,$$
and is subject to the conditions: $u_l({\mathbb{R}}\times\{0,1\})\subset L$, $u_l(+\infty,t)=x(t)$, $g^{t_l}(u_{l-1}(1))=u_l(-\infty,t)$, where $g^t$ is the negative gradient flow of $f$ and $\chi(s)$ is a smooth cutoff function satisfying $\chi(s)=0$ for $s\leq 0$ and $\chi(s)=1$ for $s\geq 1$.
\item $[u_1]+\ldots+[u_{l-1}]+[u_l\sharp \overline{x}]=A$.
\end{itemize}
We denote the moduli space of such sequences by $\mathcal{M}^\psi(q,\gamma):=\mathcal{M}^\psi(q,\gamma;J,H,\chi,f,\rho)$. The virtual dimension of this moduli space is
\begin{equation}\label{e:dim}
\virdim\mathcal{M}^\psi(q,\gamma)=\ind_f(q)-|\gamma|+\mu(A).
\end{equation}
If $\virdim\mathcal{M}^\psi(q,\gamma)=0$ then for generic $(f,\rho,H,J)$ one can achieve the appropriate transversality of the evaluation maps. So we can define on generators
$$\psi_{PSS}(q)=\sum\limits_{\gamma,A}\sharp_2\mathcal{M}^\psi(q,\gamma)\gamma t^{\bar{\mu}(A)}$$
and extend this map by linearity over $\Lambda$. This extended map is a chain map and induces the PSS-type isomorphism $\Psi_{PSS}$ in~(\ref{e:PSS}).
\section{Spectral invariants}\label{sec:spectinv}
\subsection{The classical Ljusternik--Schnirelman theory and minmax critical values} Fix a ground field ${\mathbb{F}}$, say ${\mathbb{Z}}_2$, ${\mathbb{Q}}$, or ${\mathbb{C}}$. We will denote by $H_*(X)$ the singular homology of a topological space $X$ with coefficients in ${\mathbb{F}}$.
Let $f\in C^\infty(X)$ be a smooth function on a closed $n$-dimensional manifold $X$. For any $\nu\in{\mathbb{R}}$ we put $$X^\nu:=\{x\in X|f(x)<\nu\}.$$
To a non-zero singular homology class $a\in H_*(X)\setminus\{0\}$, we associate a numerical invariant by
$$c_{LS}(a,f)=\inf\{\nu\in{\mathbb{R}}|a\in\im(i^\nu_*)\},$$
where $i^\nu_*:H_*(X^\nu)\to H_*(X)$ is the map induced by the natural inclusion $i^\nu:X^\nu\to X$. This number is a critical value of $f$. The function $c_{LS}:H_*(X)\setminus\{0\}\times C^\infty(X)\to{\mathbb{R}}$ is often called a \textit{minmax critical value selector}. The following proposition summarizes the properties of this function, which are well-known facts from classical Ljusternik--Schnirelman theory; see, e.g., Hofer and Zehnder~\cite{HZ}, and also~\cite{Vi2,CLOT,GG}.
\begin{prop}\label{pp:minmax}
The minmax critical value selector $c_{LS}$ satisfies the following properties.
\begin{enumerate}
\item[{\rm1.}] {\rm Normalization:} $c_{LS}(a,f)=c$ for any constant function $f\equiv c$.
\item[{\rm 2.}]{\rm Minmax principle:} $c_{LS}(a,f)$ is a critical value of $f$, and $c_{LS}(ka,f)=c_{LS}(a,f)$ for any nonzero $k\in{\mathbb{F}}$.
\item[{\rm3.}] {\rm Continuity:} $c_{LS}(a,f)$ is Lipschitz in $f$ with respect to the $C^0$-topology.
\item[{\rm 4.}] {\rm Triangle inequality:} $c_{LS}(a,f+g)\leq c_{LS}(a,f)+c_{LS}(a,g)$.
\item[{\rm5.}] Let $[pt]$ and $[X]$ denote the class of a point and the fundamental class respectively. Then
$$c_{LS}([pt],f)=\min f\leq c_{LS}(a,f)\leq\max f= c_{LS}([X],f).$$
\item[{\rm 6.}] $c_{LS}(a\cap b,f)\leq c_{LS}(a,f)$ for any $b\in H_*(X)$ with $a\cap b\neq 0$.
\item[{\rm7.}] If $b\neq [X]$ and $c_{LS}(a\cap b,f)= c_{LS}(a,f)$, then the set $\Sigma=\{x\in\crit(f)|f(x)= c_{LS}(a,f)\}$ is homologically non-trivial.
\end{enumerate}
\end{prop}
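To make these properties concrete, the following standard example (included purely as an illustration, with ${\mathbb{F}}={\mathbb{Z}}_2$; it is not part of the setting of this paper) records the minmax values of the height function on the vertically embedded torus.

```latex
% Illustration: X = T^2 embedded vertically in R^3, f = height function,
% with critical values c_0 < c_1 < c_2 < c_3 at the minimum, the two
% saddles, and the maximum. A class enters im(i^nu_*) exactly when nu
% passes the critical value of the handle carrying it, so:
\begin{align*}
c_{LS}([pt],f) &= c_0 = \min f, &
c_{LS}(a_1,f) &= c_1,\\
c_{LS}(a_2,f) &= c_2, &
c_{LS}([T^2],f) &= c_3 = \max f,
\end{align*}
% where a_1, a_2 are the generators of H_1(T^2;Z_2) born at the two
% saddles. All four values are critical values of f, in accordance with
% the minmax principle, and property 5 is visible:
% c_{LS}([pt],f) <= c_{LS}(a_i,f) <= c_{LS}([T^2],f).
```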
\subsection{Hamiltonian spectral invariants}\label{subsec:hsi}
In this subsection we recall the definition of Hamiltonian spectral invariants following Oh~\cite{Oh4}, where these spectral invariants are studied for Hamiltonians on closed weakly monotone manifolds. For the foundations of the theory of Hamiltonian spectral invariants we refer to Viterbo~\cite{Vi}, Schwarz~\cite{Sc}, Oh~\cite{Oh4,Oh5,Oh6} and Fukaya \textit{et al.}~\cite{FOOO}. Fix $a\in QH_*(M,\Lambda)=(H(M,{\mathbb{Z}}_2)\otimes\Lambda)_*$ and define the spectral invariant $\sigma(a,H)$ of $a$ by
$$\sigma(a,H)=\inf\big\{\nu\in{\mathbb{R}}|{\rm PSS}(a)\in\im(i^\nu)\big\},$$
where $i^\nu:HF^\nu(H,\Lambda)\hookrightarrow HF(H,\Lambda)$ is the natural inclusion and ${\rm PSS}:QH(M,\Lambda)\longrightarrow HF(H,\Lambda)$ is the Hamiltonian Piunikhin-Salamon-Schwarz isomorphism. By convention we have $\sigma(0,H)=-\infty$.
The properties of Hamiltonian spectral invariants are summarized as follows:
\begin{prop}
The function $\sigma:QH(M,\Lambda)\setminus\{0\}\times C^\infty(S^1\times M)\to{\mathbb{R}}$ has the following properties:
\begin{enumerate}
\item[{\rm (HS1)}] {\rm Normalization:} For any $a\in H(M,{\mathbb{Z}}_2)$ and any $C^2$-small $H\in C^\infty(M)$, we have that $\sigma(a,H)=c_{LS}(a,H)$. In particular, $\sigma(a,0)=0$.
\item[{\rm(HS2)}] {\rm Spectrality:} $\sigma(a,H)\in\spec(H)$.
\item[{\rm(HS3)}] {\rm Continuity:} $\sigma(a,H)$ is Lipschitz in $H$ in the $C^0$-topology.
\item[{\rm (HS4)}] {\rm Hamiltonian shift:} If $c$ is a function of time then $\sigma(a,H+c)=\sigma(a,H)+\int^1_0c(t)dt$.
\item[{\rm(HS5)}] {\rm Monotonicity:} $\sigma(a,H)\leq\sigma(a,K)$ if $H\leq K$ pointwise.
\item[{\rm(HS6)}] {\rm Symplectic invariance:} $\sigma(\varphi_*(a),H)=\sigma(a,\varphi^*H)$ for any symplectomorphism $\varphi$.
\item[{\rm(HS7)}] {\rm Triangle inequality:} $\sigma(a*b,H\sharp K)\leq\sigma(a,H)+\sigma(b,K)$.
\item[{\rm(HS8)}] {\rm Quantum shift:} For $\lambda\in\Lambda$, $\sigma(\lambda a,H)=\sigma(a,H)+I_\omega(\lambda)$.
\item[{\rm(HS9)}] {\rm Valuation inequality:} $\sigma(a+b,H)\leq\max\{\sigma(a,H),\sigma(b,H)\}$. Moreover, if $\sigma(a,H)\neq\sigma(b,H)$ then this inequality is strict.
\item[{\rm(HS10)}] {\rm Homotopy invariance:} $\sigma(a,H)=\sigma(a,K)$, when $\varphi_H=\varphi_K$ in the universal covering of the group of Hamiltonian diffeomorphisms, where $H$ and $K$ are normalized.
\end{enumerate}
\end{prop}
\subsection{Lagrangian spectral invariants}\label{sec:lsi}
In this subsection we recall the construction of Lagrangian spectral invariants, mainly following Leclercq and Zapolsky~\cite{LZ} in the monotone case; see also~\cite{Za}. These spectral invariants are the Lagrangian counterparts of the spectral invariants constructed by Oh~\cite{Oh4,Oh5,Oh6}. In other settings they have been constructed for Lagrangians in the cotangent bundle of a closed manifold~\cite{Oh2,Oh3,Le,MVZ}. Assume that $(H,J)$ is regular and $H$ is normalized. Fix a nonzero homogeneous $\alpha\in QH_*(L)$ and define the spectral invariant $\ell(\alpha,H)$ of $\alpha$ by
$$\ell(\alpha,H)=\inf\big\{\nu|\Psi_{PSS}(\alpha)\in \im(i^\nu)\big\}$$
where $i^\nu: HF^\nu(L;H,J)\to HF(L;H,J)$ is the natural inclusion map. This invariant is Lipschitz with respect to the Hamiltonian function $H$, i.e.,
for any two nondegenerate $H,K\in \mathcal{H}_c$ and $\alpha\neq 0$, we have
$$\int^1_0\min\limits_M(K_t-H_t)dt\leq \ell(\alpha,K)-\ell(\alpha, H)\leq \int^1_0\max\limits_M(K_t-H_t)dt.$$
So one can extend this spectral invariant to a map
$$\ell:QH(L)\times C^\infty([0,1]\times M)\longrightarrow{\mathbb{R}}.$$
Here by convention we set $\ell(0,H)=-\infty$.
The Lagrangian spectral invariants have the following properties:
\begin{enumerate}
\item[(LS1)] Normalization: If $c$ is a function of time then
$$\ell(\alpha,H+c)=\ell(\alpha,H)+\int^1_0c(t)dt,$$
and $\ell(\alpha,0)=A_L\nu(\alpha)$.
\item[(LS2)] Spectrality: $\ell(\alpha,H)\in\spec(L;H)$.
\item[(LS3)] Quantum shift: $\ell(\lambda\alpha,H)=\ell(\alpha,H)+A_L\nu(\lambda)$ for all $\lambda\in\Lambda$.
\item[(LS4)] Symplectic invariance: $\ell(\alpha,H)=\ell'(\varphi_*(\alpha),H\circ\varphi^{-1})$ for any symplectomorphism $\varphi$ satisfying $L'=\varphi(L)$, where
$\ell':QH(L')\times C^\infty([0,1]\times M)\to{\mathbb{R}}$
is the corresponding spectral invariant.
\item[(LS5)] Continuity: $\ell$ is Lipschitz in $H$ in the $C^0$-topology.
\item[(LS6)] Monotonicity: $\ell(\alpha,H)\geq\ell(\alpha,K)$ for any $\alpha\in QH_*(L)$ provided that $H\geq K$.
\item[(LS7)] Homotopy invariance: $\ell(\alpha,H)=\ell(\alpha,K)$, when $\varphi_H=\varphi_K$ in the universal covering of the group of Hamiltonian diffeomorphisms, and $H,K$ are normalized.
\item[(LS8)] Triangle inequality: $\ell(\alpha\circ\beta,H\sharp K)\leq \ell(\alpha,H)+\ell(\beta,K)$.
\item[(LS9)] Module structure: Let $K\in C^\infty(S^1\times M)$. For all $a\in QH(M)$ and $\alpha\in QH(L)$, we have
$\ell(a\bullet\alpha,H\sharp K)\leq\ell(\alpha,H)+c(a,K).$
\item[(LS10)] Valuation inequality: $\ell(\alpha+\beta,H)\leq\max\{\ell(\alpha,H),\ell(\beta,H)\}$. Moreover, if $\ell(\alpha,H)\neq\ell(\beta,H)$ then this inequality is strict.
\item[(LS11)] Duality: Let $\mathcal{T}(\alpha)\in QH^{n-k}(L)$ be the dual element of $\alpha\in QH_k(L)$, and set $\overline{H}(t,x)=-H(-t,x)$. Then
$$\ell(\alpha,\overline{H})=-\inf\big\{\ell(\beta,H)|\beta\in QH_{n-k}(L),\;\langle \mathcal{T}(\alpha),\beta\rangle\neq 0\big\}.$$
\item[(LS12)] Lagrangian control: For all $H\in\mathcal{H}$ we have
$$\int^1_0\min\limits_{L}H_tdt\leq \ell(\alpha,H)-A_L\nu(\alpha)\leq\int^1_0\max\limits_{L}H_tdt.$$
\item[(LS13)] Non-negativity: $\ell([L],H)+\ell([L],\overline{H})\geq0$.
\end{enumerate}
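As a consistency check, the non-negativity property (LS13) can be deduced formally from (LS1), (LS7) and (LS8). The following sketch is ours, and it assumes the standard convention that for normalized $H$ the Hamiltonian $H\sharp\overline{H}$ generates the path $\varphi_H^t\circ\varphi_H^{-t}=id$, so that (LS7) applies with the zero Hamiltonian.

```latex
% Sketch of (LS13), assuming \varphi^t_{H\sharp\overline{H}} = id:
\begin{align*}
\ell([L],H)+\ell([L],\overline{H})
 &\geq \ell([L]\circ[L],\,H\sharp\overline{H}) && \text{by (LS8)}\\
 &= \ell([L],\,0) && \text{by (LS7)}\\
 &= A_L\,\nu([L]) = 0 && \text{by (LS1)},
\end{align*}
```

using that $[L]$ is the unit for $\circ$, so $[L]\circ[L]=[L]$, and that $\nu([L])=0$.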
In addition to the above properties, the property of Lagrangian spectral invariants with which this paper is chiefly concerned is the following.
\begin{prop}\label{pp:ls=minmax}
Let $f:L\to{\mathbb{R}}$ be a $C^2$-small function and let $H_f$ be a $C^2$-small compactly supported autonomous Hamiltonian which coincides with the lift of $f$ to a Weinstein neighborhood of $L$. More specifically, $H_f=f\circ\pi$ on a ball bundle $T^*_RL$ of $T^*L$ (after identifying $T^*L$ with some Weinstein neighborhood of $L$) containing $L^f:=\{(q,df(q))\in T^*L\,|\,q\in L\}$, and $H_f=0$ outside $T^*_{R+1}L$ in $M$, where $\pi:T^*L\to L$ is the natural projection map. If $L$ is wide and $N_L\geq n+1$ then we have
$$\ell(a,H_f)=c_{LS}(a,f)\quad\hbox{for any}\;a\in H_*(L,{\mathbb{Z}}_2).$$
Here we have used the canonical embedding $H_*(L,{\mathbb{Z}}_2)\hookrightarrow QH(L)=H(L;{\mathbb{Z}}_2)\otimes\Lambda$ for a wide Lagrangian $L$ with $N_L\geq n+1$.
\end{prop}
\begin{proof}
Since both $\ell(\alpha,H)$ and $c_{LS}(a,f)$ are continuous in $H$ and in $f$, respectively, with respect to the $C^0$-topology, it suffices to prove the proposition for a $C^2$-small Morse function $f:L\to{\mathbb{R}}$ and the corresponding lift $H_f:M\to{\mathbb{R}}$. In this case, it is easy to see that the Lagrangian $\varphi_{H_f}(L)$ is the graph of $df$ in $T^*L$ and intersects $L$ transversely, and that a critical point $q$ of $f$ is exactly an intersection point between $L$ and $\varphi_{H_f}(L)$. So we have
\begin{equation}\label{e:action}
\mathcal{A}_{H_f,L}([q,\overline{q}])=f(q),
\end{equation}
where $\overline{q}$ is the constant capping of the constant path $q$. Moreover, from our definition of the grading of Lagrangian Floer homology we have $\mu(q,\overline{q})=\ind_f(q)$.
Given a smooth cutoff function $\chi$ used in the definition of PSS map $\Psi_{PSS}$, for a generic pair $(\rho,J)$ we consider the moduli space $\mathcal{M}^\psi(q,\gamma)=\mathcal{M}^\psi(q,\gamma;J,H_f,\chi,f,\rho)$, where $q,\gamma$ are the critical points of $f$ and $\mathcal{A}_{H_f,L}$ respectively. If the virtual dimension of $\mathcal{M}^\psi(q,\gamma)$ is zero, then by the dimension formula~(\ref{e:dim}) we deduce from our assumption $N_L\geq n+1$ that $\gamma=(q,\overline{q})$.
Furthermore, following the idea of Floer~\cite{Fl3} one can show that if $|f|_{C^2}$ is sufficiently small then the last component $u_l$ of any element $(u_1,\ldots,u_l)$ in
$\mathcal{M}^\psi(q,\gamma)$ has to be independent of the variable $t$ (in fact $u_l$ is a flow line of $-\chi(s)\nabla^\rho f$). Since $N_L\geq n+1$, the holomorphic disks $u_i$, $i=1,\ldots,l-1$, are constant. So $\mathcal{M}^\psi(q,\gamma)$ consists of negative gradient flow lines of $f$ (up to a scaling) from $q$ to $\gamma=(q,\overline{q})$, and hence, by dimension reasons, the only element of $\mathcal{M}^\psi(q,\gamma)$ is the constant flow line. As a consequence, we have the map
\begin{equation}\label{e:inclusion}
\Psi_{PSS}\circ i: H_*(L,{\mathbb{Z}}_2)\longrightarrow HF_*(H_f,J),\quad
\big[\sum_kq_k\big]\longmapsto \big[\sum_k(q_k,\overline{q}_k)\big],
\end{equation}
where $i$ is the inclusion map from $H_*(L,{\mathbb{Z}}_2)$ to $QH_*(L,f,\rho)\cong (H(L;{\mathbb{Z}}_2)\otimes\Lambda)_*$. We remark here that the singular homology $H_*(L,{\mathbb{Z}}_2)$ has been identified with the Morse homology $HM(f,\rho)$ of the Morse-Smale pair $(f,\rho)$.
Clearly, it follows from (\ref{e:action}) and (\ref{e:inclusion}) that $\ell(a,H_f)=c_{LS}(a,f)$ for all $a\in H_*(L,{\mathbb{Z}}_2)$.
\end{proof}
\section{Proofs of main results: Lagrangian Ljusternik--Schnirelman theory}\label{sec:mainthms}
The Lagrangian Ljusternik--Schnirelman inequalities we will prove here are Lagrangian counterparts of the Hamiltonian Ljusternik--Schnirelman inequality (see~\cite{GG}). We remark that similar arguments are implicitly contained in Schwarz's work~\cite{Sc,Sc2}, and that similar results also appear in~\cite{Fl2,Ho,Sc2,GG}. A key point of our arguments here is that the Lagrangian quantum structures must be taken into account in our situation.
\subsection{Proof of Theorem~\ref{thm:lls}}
By (LS8) and (LS1) we have
$$\ell(\alpha\circ \beta, H)=\ell(\alpha\circ \beta, 0\sharp H)\leq \ell(\alpha,0)+\ell(\beta,H)=\ell(\beta,H)+A_L\nu(\alpha).$$
This finishes the proof of the non-strict inequality.
To prove the strict inequality we proceed in three steps.\\
\noindent\textbf{Step 1.} Take a $C^2$-small function $f\in C^\infty(L)$, and let its corresponding Hamiltonian $H_f\in C^\infty(M)$ be as in Proposition~\ref{pp:ls=minmax}. We shall prove the following.
\begin{clm}\label{clm:small}
For any $\alpha\in \widehat{QH}_*(L)\cong H_{*<n}(L,{\mathbb{Z}}_2)\otimes \Lambda$, we have
\begin{equation}\label{e:lsineq}
\ell(\alpha,H_f)<A_L\nu(\alpha)+\max_{x\in L} f(x).
\end{equation}
\end{clm}
\noindent {\bf Proof of Claim~\ref{clm:small}.}
Since both $\ell(\alpha,H)$ and $\max f$ are continuous in $H$ and in $f$, respectively, with respect to the $C^0$-topology, it suffices to prove the claim for a $C^2$-small Morse function $f:L\to{\mathbb{R}}$ and the corresponding lift $H_f:M\to{\mathbb{R}}$. Without loss of generality we may assume that $f$ has a \textbf{unique maximum}, which we denote by $q_{max}$. Since the Lagrangian $\varphi_{H_f}(L)$ is the graph of $df$ in $T^*L$ and intersects $L$ transversely, a critical point $q$ of $f$ is exactly an intersection point between $L$ and $\varphi_{H_f}(L)$. So we have
\begin{equation}\label{e:ptaction}
\mathcal{A}_{H_f,L}([q,\overline{q}])=f(q),
\end{equation}
where $\overline{q}$ is the constant capping of the constant path $q$. Furthermore, by our definition of the grading of Lagrangian Floer homology we have $\mu(q,\overline{q})=\ind_f(q)$.
As before, for a smooth cutoff function $\chi$, which is used in the construction of the PSS map $\Psi_{PSS}$, and a generic pair $(\rho,J)$, we consider the moduli space $\mathcal{M}^\psi(q,\gamma)=\mathcal{M}^\psi(q,\gamma;J,H_f,\chi,f,\rho)$, where $q,\gamma$ are the critical points of $f$ and $\mathcal{A}_{H_f,L}$ respectively. If $|f|_{C^2}$ is sufficiently small then the last component $u_l$ of any element $(u_1,\ldots,u_l)$ in
$\mathcal{M}^\psi(q,\gamma)$ has to be independent of the variable $t$ (in fact $u_l$ is a flow line of $-\chi(s)\nabla^\rho f$). So $\mathcal{M}^\psi(q,\gamma)$, in fact, consists of pearly trajectories connecting $q$ to $q'\in\crit(f)$ (up to a scaling of the negative gradient flow lines of $f$). If the virtual dimension of $\mathcal{M}^\psi(q,\gamma)$ is zero, then by the dimension formula~(\ref{e:dim}) we have $|\gamma|=\ind_f(q)+\mu(A)$. We now distinguish two cases: (1) if $\gamma=[q,\overline{q}]$ then $A=0$, in which case the only pearly trajectory from $q$ to $\gamma$ is the constant flow line; (2) if $\gamma\neq [q,\overline{q}]$ then $A\neq 0$ by dimension reasons.
Let $\sum_k\lambda_kq_k$ be any representative of $\alpha\in\widehat{QH}(L;f,\rho,J)$. Consequently, we have the map
\begin{equation}\label{e:image}
\begin{split}
\Psi_{PSS}: QH(L;f,\rho,J)\longrightarrow HF_*(H_f,J),\\
\big[\sum_k\lambda_kq_k\big]\longmapsto \bigg[\sum_k\lambda_k[q_k,\overline{q}_k]+\sum_l\lambda_l[q_l',\overline{q}_l'\sharp A_l]\bigg].
\end{split}
\end{equation}
Now we look at the actions of $\mathcal{A}_{H_f,L}$ on the generators appearing on the right hand side of~(\ref{e:image}). By (\ref{e:ptaction}) we have
\begin{equation}\label{e:sumand1}
\mathcal{A}_{H_f,L}(\lambda_k[q_k,\overline{q}_k])=f(q_k)+A_L\nu(\lambda_k)\quad \forall k.
\end{equation}
By (\ref{e:ptaction}) again, we have that $$\mathcal{A}_{H_f,L}\big(\lambda_l[q'_l,\overline{q}'_l\sharp A_l]\big)=\mathcal{A}_{H_f,L}\big([q'_l,\overline{q}'_l]\otimes t^{\overline{\mu}(A_l)}\lambda_l\big)=f(q_l')-\overline{\mu}(A_l)A_L+A_L\nu(\lambda_l)\quad \forall l.$$
So if $|f|_{C^0}<A_L/2$, for all $l$ we have
\begin{eqnarray}\label{e:sumand2}
\mathcal{A}_{H_f,L}\big(\lambda_l[q'_l,\overline{q}'_l\sharp A_l]\big)&=&f(q_l)+A_L\nu(\lambda_l)+f(q_l')-f(q_l)-\overline{\mu}(A_l)A_L\notag\\
&<&f(q_l)+A_L\nu(\lambda_l)+(1-\overline{\mu}(A_l))A_L\notag\\
&\leq& f(q_l)+A_L\nu(\lambda_l),
\end{eqnarray}
where each $q_l\in\crit(f)$, coming from the representative $\sum_k\lambda_kq_k$, is the starting point of some pearly trajectory with end point $q'_l$, and we have used $\overline{\mu}(A_l)\geq 1$ for each $l$ in the last inequality.
As $q_{max}$ represents the fundamental class $[L]$, each $q_k$ appearing in $\sum_k\lambda_kq_k$ can not be $q_{max}$. So we have
$$\max_k f(q_k)<\max_{x\in L} f(x).$$
This, combined with (\ref{e:image})--(\ref{e:sumand2}), implies that for any representative $\sum_k\lambda_k q_k$ of $\alpha$ we have
$$\mathcal{A}_{H_f,L}\bigg(\psi_{PSS}\big(\sum_k\lambda_kq_k\big)\bigg)<\max_{x\in L} f(x)+A_L\max_k\nu(\lambda_k)$$
whenever $|f|_{C^2}$ is sufficiently small. Therefore, by the definition of the valuation $\nu$ on $QH(L;f,\rho,J)$ we conclude the desired inequality.
\noindent\textbf{Step 2.} Since the intersections of $L$ and $\varphi_H(L)$ are isolated, we pick a small neighborhood $U\subset L$ of $L\cap \varphi_H(L)$ in $L$. Let $f:L\to{\mathbb{R}}$ be a smooth function such that $f=0$ on $\overline{U}$ and $f<0$ on $L\setminus \overline{U}$, and let $H_f$ be the lift of $f$ to a Weinstein neighborhood of $L$ as in Proposition~\ref{pp:ls=minmax}. Given
$\alpha\in \widehat{QH}_*(L)$, by Step 1, for sufficiently small $\varepsilon>0$ we have
\begin{equation}\label{e:sineq}
\ell(\alpha,\varepsilon H_f)<A_L\nu(\alpha)+\varepsilon\max_{x\in L} f(x)=A_L\nu(\alpha).
\end{equation}
\noindent\textbf{Step 3.} Let $f$, $U$ and $H_f$ be as in Step 2. We will show
\begin{clm}\label{clm:sv}
For sufficiently small $\varepsilon>0$ we have
$\ell(\alpha\circ\beta,H)=\ell(\alpha\circ\beta,\varepsilon H_f\sharp H).$
\end{clm}
\noindent {\bf Proof of Claim~\ref{clm:sv}.}
Recall the construction of $H_f\in C^\infty(M)$: identify $T^*L$ with a Weinstein neighborhood $W\subset M$ of $L$, and pick a smooth function $\sigma:{\mathbb{R}}\to[0,1]$ such that $\sigma(s)=1$ for any $s\leq R$ and $\sigma(s)=0$ for any $s\geq R+1$. We put $H_f(q,p)=\sigma(\|p\|) f(q)$ for all $(q,p)\in T^*L$ and extend it to $M\setminus W$ by setting $H_f=0$, where the norm $\|\cdot\|$ on $T^*L$ is induced by a Riemannian metric $\rho$ on $L$. It is more convenient to work in the cotangent bundle $T^*L$ instead of $W$ whenever the intersections of some Lagrangian with $L$ are considered.
Note that $H_f=\pi^*f$ on the closed ball bundle $T^*_RL$ of $T^*L$ containing $L^f:=\{(q,df(q))\in T^*L\,|\,q\in L\}$ and $H_f=0$ outside $T_{R+1}^*L$. It is easy to show that $\varphi^t_{H_f}(q,p)=(q,p+t\,df(q))\in T^*L$ for $t\in[0,1]$ and $(q,p)\in T^*_RL$. Set $L^H_R=\varphi_H(O_L)\cap T^*_RL$. Then we have
$$\varphi_{\varepsilon H_f}(L^H_R)=\{(q,p+\varepsilon df(q))|(q,p)\in L^H_R\}.$$
Since $L_R^H\cap\pi^{-1}(O_L\setminus U)$ is compact and has no intersections with $O_L$, we deduce that for small enough $\varepsilon>0$, $\varphi_{\varepsilon H_f}(L^H_R)\cap \pi^{-1}(O_L\setminus U)$ has no intersections with $O_L$ as well. For $(q,p)\in T^*L$ with $R\leq\|(q,p)\|\leq R+1$ we have
$$d_\rho\big(\varphi_{\varepsilon H_f}(q,p),(q,p)\big)\leq\bigg\|\int^1_0\frac{d}{dt}
\varphi_{\varepsilon H_f}^t(q,p)dt\bigg\|\leq \varepsilon\sup\limits_{R\leq\|(q,p) \|\leq R+1}\|X_{H_f}\|,$$
where $d_\rho$ is the distance function induced by the metric on $T^*L$. Therefore, for $\varepsilon>0$ sufficiently small, $\varphi_{\varepsilon H_f}(T^*_{R+1}L\setminus T^*_RL)$ does not intersect $O_L$. Since the Hamiltonian diffeomorphism $\varphi_{\varepsilon H_f}$ is supported in $T^*_{R+1}L$, we conclude that $\varphi_{\varepsilon {H_f}}\varphi_H(O_L)\cap\pi^{-1}(O_L\setminus U)$ does not intersect $O_L$ provided that $\varepsilon>0$ is sufficiently small. On the other hand, we have that
$\varphi_{\varepsilon {H_f}}\varphi_H(O_L)\cap\pi^{-1}(U)=\varphi_H(O_L)\cap\pi^{-1}(U)$ because $f=0$ on $U$. So if $\varepsilon>0$ is sufficiently small then the Lagrangians $\varphi_{\varepsilon {H_f}}\varphi_H(O_L)$ and $\varphi_H(O_L)$ have the same intersections with $O_L$.
For each $q\in\varphi_H(O_L)\cap O_L$ we have $\varphi_{\varepsilon H_f}^t \varphi_H^t((\varphi_{\varepsilon H_f} \varphi_H)^{-1}(q))=\varphi_{\varepsilon H_f}^t \varphi_H^t(\varphi_H^{-1}(q))$. It is well known that there is a one-to-one correspondence between the set $\varphi_H(L)\cap L$ and the set $\mathcal{P}_L(H):=\{x\in\mathcal{P}_L|\dot{x}=X_H(x(t))\}$ of Hamiltonian chords, sending $q\in \varphi_H(L)\cap L$ to $x=\varphi_H^t(\varphi^{-1}_H(q))$. So we obtain a bijection between $\mathcal{P}_L(H)$ and $\mathcal{P}_L(\varepsilon H_f\sharp H)$ by mapping $x(t)$ to $\varphi_{\varepsilon H_f}^t(x(t))$. For simplicity, we write $K=\varepsilon H_f$ and put $u(s,t)=\varphi_K^{st}(x(t))$ for $x(t)\in\mathcal{P}_L(H)$, where $(s,t)\in S:=[0,1]\times[0,1]$. Then $u(0,t)=x(t)$ and $u(1,t)=\varphi_K^t(x(t))\in\mathcal{P}_L(K\sharp H)$. To show that $\spec(K\sharp H,L)=\spec(H,L)$, we consider the map
$$\Theta:\crit(\mathcal{A}_{H,L})\longrightarrow\crit(\mathcal{A}_{K\sharp H,L}),\quad \big[x(t),\overline{x}\big]\mapsto\big[\varphi_K^t(x(t)), \overline{x}\sharp u\big]$$
as illustrated in Figure~\ref{fig:map}.
\begin{figure}[ht]
\centering
\includegraphics[scale=0.9]{Figure2}
\caption{The map $\Theta$ sending $[x(t),\overline{x}]$ to $[\varphi_K^t(x(t)),\overline{x}\sharp u]$.}
\label{fig:map}
\end{figure}
Clearly, this map is a bijection. In the following we will show that $\Theta^*\mathcal{A}_{K\sharp H,L}=\mathcal{A}_{H,L}$. To this end, for every $[x,\overline{x}]\in\crit(\mathcal{A}_{H,L})$, we have
\begin{eqnarray}
\mathcal{A}_{K\sharp H,L}(\Theta[x,\overline{x}])&=&\int^1_0K(\varphi_K^t(x(t)))dt+\int^1_0H_t\circ(\varphi_K^t)^{-1}(\varphi_K^t(x(t)))dt-\int_Su^*\omega-\int_{\overline{x}}\omega \notag\\
&=&\mathcal{A}_{H,L}([x,\overline{x}])+\int^1_0K(x(t))dt-\int^1_0\int^1_0\omega\big(\partial_su,sX_K(\varphi_K^{st}(x(t)))\big)dsdt
\notag\\
&&-\int^1_0\int^1_0\omega\big(\partial_su,d\varphi_K^{st}(\dot{x}(t))\big)dsdt\notag\\
&=&\mathcal{A}_{H,L}([x,\overline{x}])+\int^1_0K(x(t))dt-\int^1_0\int^1_0sdK(\varphi_K^{st}(x(t)))[\partial_su]dsdt\notag\\
&&-\int^1_0\int^1_0\omega\big(tX_K(\varphi_K^{st}(x(t))),d\varphi_K^{st}(\dot{x}(t))\big)dsdt\notag\\
&=&\mathcal{A}_{H,L}([x,\overline{x}])+\int^1_0K(x(t))dt-\int^1_0\int^1_0
\frac{d}{ds}\big[sK(\varphi_K^{st}(x(t)))\big]dsdt
\notag\\
&&+\int^1_0\int^1_0
\frac{d}{dt}\big[tK(\varphi_K^{st}(x(t)))\big]dsdt\notag\\
&=&\mathcal{A}_{H,L}([x,\overline{x}])+\int^1_0K(x(t))dt-\int^1_0K(x(t))dt+K(x(1))\notag\\
&=&\mathcal{A}_{H,L}([x,\overline{x}]),\notag
\end{eqnarray}
where in the second and fifth equalities we have used the fact that the value of an autonomous Hamiltonian $H_f$ is invariant along its Hamiltonian flow, and the last equality is implied by $H_f=0$ on the intersections of $\varphi_H(L)$ and $L$.
Therefore, the action spectra $\spec(\varepsilon H_f\sharp H,L)$ and $\spec(H,L)$ are the same. Now fix $\varepsilon>0$ and consider the family of Lagrangians $\varphi_{s\varepsilon {H_f}}\varphi_H(O_L)$ with $s\in[0,1]$. As above, the action spectra $\spec(s\varepsilon H_f\sharp H,L)$ are all the same. Since the action spectrum is a closed nowhere dense subset of ${\mathbb{R}}$, it follows from Continuity (LS5) that
$\ell(\alpha\circ\beta, s\varepsilon H_f\sharp H)$ does not depend on $s$. So we have $\ell(\alpha\circ\beta,H)=\ell(\alpha\circ\beta,\varepsilon H_f\sharp H)$.
Finally, take $f$, $U$ and $H_f$ as in Step 2. For small enough $\varepsilon>0$, by Claim~\ref{clm:sv} and the triangle inequality~(LS8), we obtain
$$\ell(\alpha\circ\beta,H)=\ell(\alpha\circ\beta,\varepsilon H_f\sharp H)\leq \ell(\alpha, \varepsilon H_f)+\ell(\beta, H)<\ell(\beta, H)+A_L\nu(\alpha).$$
This completes the proof.
\qed
The proof of Theorem~\ref{thm:ll} is similar to the proof of Theorem~\ref{thm:lls}, but not completely parallel to it. For the sake of completeness we give a sketch of the proof.
\subsection{Proof of Theorem~\ref{thm:ll}} The non-strict inequality follows immediately from the module structure property~(LS9) of $\ell$, together with the quantum shift property~(HS8) and normalization property~(HS1) of $\sigma$, that is,
$$\ell(a\bullet\alpha,H)=\ell(a\bullet\alpha, H\sharp 0)\leq \sigma(a,0)+\ell(\alpha,H)=\ell(\alpha,H)+I_\omega(a).$$
To prove the strict inequality, we first take any class $a=\sum_k\lambda_kz_k\in QH(M,\Lambda)$ with each $z_k\in H(M,{\mathbb{Z}}_2)$ and $\lambda_k\in\Lambda$. For any $C^2$-small smooth function $f:M\to{\mathbb{R}}$, we claim that
\begin{equation}\label{e:ineqhs}
\sigma(a,f)\leq \sup_k\{\sigma(z_k,f)\}+I_\omega(a).
\end{equation}
This follows from (HS1), (HS8) and (HS9) immediately.
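Spelled out, and writing $I_\omega(a)=\max_k I_\omega(\lambda_k)$ (our reading of the valuation of $a=\sum_k\lambda_k z_k$, consistent with the quantum shift property), the chain of estimates is:

```latex
\begin{align*}
\sigma(a,f) &\leq \max_k\,\sigma(\lambda_k z_k,f) && \text{by (HS9)}\\
 &= \max_k\big(\sigma(z_k,f)+I_\omega(\lambda_k)\big) && \text{by (HS8)}\\
 &\leq \sup_k\{\sigma(z_k,f)\} + \max_k I_\omega(\lambda_k)
  = \sup_k\{\sigma(z_k,f)\} + I_\omega(a).
\end{align*}
```

The normalization property (HS1) then identifies each $\sigma(z_k,f)$ with the minmax value $c_{LS}(z_k,f)$ for $C^2$-small $f$, which is how it enters the argument that follows.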
Secondly, we take
a $C^2$-small function $f:M\to{\mathbb{R}}$ such that $f=0$ on $\overline{V}$ and $f<0$ on $M\setminus \overline{V}$, where $V$ is any small neighborhood of $L\cap \varphi_H^{-1}(L)$ in $M$. Furthermore, we require that $H_k(\overline{V},{\mathbb{Z}}_2)=0$ for all $k>0$. This is possible since we have assumed that the elements of $L\cap \varphi_H(L)$ are isolated. We claim that if $a\in \widehat{QH}_*(M)=H_{*<2n}(M,{\mathbb{Z}}_2)\otimes_\Gamma \Lambda$, then
$$\sigma(a,f)<I_\omega(a).$$
Observe that $c_{LS}(a,f)<-\epsilon_f<0$ for all $a\in H_{*<2n}(M,{\mathbb{Z}}_2)$ and some $\epsilon_f>0$ depending on $f$. In fact, if there exists a sequence of homology classes $a_k\in H_{*<2n}(M,{\mathbb{Z}}_2)$ such that
$c_{LS}(a_k,f)\to0$ as $k\to\infty$, then we have $c_{LS}(a_{l},f)=0$ for some $l\in{\mathbb{N}}$ because $H_{*<2n}(M,{\mathbb{Z}}_2)$ is a finite dimensional vector space over ${\mathbb{Z}}_2$. Then we have $c_{LS}(a_{l},f)=c_{LS}(a_l\cap [M],f)=0$. It follows from Proposition~\ref{pp:minmax}.5 that $c_{LS}([M],f)=\max_M f=0$. So we have $c_{LS}(a_l\cap [M],f)=c_{LS}([M],f)$. Then by Proposition~\ref{pp:minmax}.7 the zero level set $\overline{V}$ of $f$ is homologically non-trivial, which is a contradiction. Therefore, for any
$a=\sum_k\lambda_kz_k\in QH(M,\Lambda)$ with each $z_k\neq0$, we have $c_{LS}(z_k,f)<-\epsilon_f$, and hence $\sigma(z_k,f)<-\epsilon_f$ by (HS1). Consequently, by (\ref{e:ineqhs}) we obtain $\sigma(a,f)\leq-\epsilon_f + I_\omega(a)<I_\omega(a)$.
Thirdly, we will show the following.
\begin{clm}\label{clm:hsv}
$\ell(a\bullet\alpha,H\sharp\varepsilon f)=\ell(a\bullet\alpha,H)$ whenever $\varepsilon>0$ is small enough.
\end{clm}
\noindent {\bf Proof of Claim~\ref{clm:hsv}.} We consider the family of spectral invariants $\ell(a\bullet\alpha,H\sharp s\varepsilon f)$ for $s\in[0,1]$. Note that the function $\ell(a\bullet\alpha,\cdot):C^\infty([0,1]\times M)\to{\mathbb{R}}$ is continuous with respect to the $C^0$-topology and that the action spectrum $\spec(H\sharp s\varepsilon f,L)$ for each $s\in[0,1]$ is a closed nowhere dense subset of ${\mathbb{R}}$. It therefore suffices to prove that for $\varepsilon>0$ sufficiently small the action spectra $\spec(H\sharp s\varepsilon f,L)$ are all the same. To this end, for $\varepsilon>0$ sufficiently small we will show
\begin{equation}\label{e:intersect}
\varphi_H\circ\varphi_{\varepsilon f}(L)\cap L=\varphi_H(L)\cap L.
\end{equation}
On the one hand, we have $\varphi_H\circ\varphi_{\varepsilon f}(L\setminus V)\cap L=\emptyset$ for $\varepsilon>0$ sufficiently small. Otherwise, there would exist a sequence of numbers $\varepsilon_i>0$ with $\varepsilon_i\to 0$, and two sequences of points $x_i\in L$, $y_i\in L\setminus V$ such that $\varphi_{\varepsilon_i f}(y_i)=\varphi_H^{-1}(x_i)$ for all $i$. Since $L$ is compact and $V$ is open in $M$, without loss of generality we may assume that $x_i\to x\in L$ and $y_i\to y\in L\setminus V$ as $i$ goes to infinity. Then we have
$y=\varphi_H^{-1}(x)\in L\cap\varphi_H^{-1}(L)\subseteq V$, which is impossible. On the other hand, since $\varphi_{\varepsilon f}=id$ on $V$, we obtain $\varphi_H\circ\varphi_{\varepsilon f}(L\cap V)\cap L=\varphi_H(L\cap V)\cap L=\varphi_H(L)\cap L$.
So we have (\ref{e:intersect}). Let $\chi:[0,\frac{1}{2}]\to [0,1]$ be a smooth cut-off function so that $\chi=0$ near $t=0$ and $\chi=1$ near $t=\frac{1}{2}$. It is easy to show that the time one flow $\varphi_K$ of the Hamiltonian
\[
K=
\begin{cases}\chi'(t)\varepsilon f(x),\quad \ \ & t\in[0,\frac{1}{2}],\\
\chi'\big(t-\frac{1}{2}\big)H\big(\chi\big(t-\frac{1}{2}\big),x\big), \quad \ \ & t\in[\frac{1}{2},1].
\end{cases}
\]
coincides with the time one flow $\varphi_H\circ\varphi_{\varepsilon f}$ of $H\sharp\varepsilon f$. Moreover, the mean values of $K$ and $H\sharp\varepsilon f$ are the same, i.e., $\int K_t\omega^n=\int (H\sharp\varepsilon f)_t\omega^n$ for all $t$.
It is well known that $\spec(H,L)=\spec(K,L)$ if $\varphi_H=\varphi_K$ in the universal covering of the group of Hamiltonian diffeomorphisms with the same mean value, see e.g., \cite{LZ}. Therefore, to prove $\spec(H,L)=\spec(H\sharp\varepsilon f,L)$ we only need to prove that the action spectra
$\spec(H,L)$ and $\spec(K,L)$ are the same. For each $q\in\varphi_H(L)\cap L$, due to~(\ref{e:intersect}) the corresponding Hamiltonian chords of $H$ and $K$ are
$\varphi_H^t(\varphi^{-1}_H(q))$ and $\varphi_K^t(\varphi^{-1}_K(q))$ respectively, while, by $\varphi_{\varepsilon f}^t=id$ on $V$, we have
\[
\varphi_K^t(\varphi^{-1}_K(q))=
\begin{cases}\varphi_H^{-1}(q),\quad \ \ & t\in[0,\frac{1}{2}],\\
\varphi^{\chi(t-\frac{1}{2})}_H\circ\varphi^{-1}_H(q), \quad \ \ & t\in[\frac{1}{2},1].
\end{cases}
\]
Consequently, we obtain a bijection
$$\Pi:\crit(\mathcal{A}_{H,L})\longrightarrow\crit(\mathcal{A}_{K,L}),\quad [x(t),\overline{x}(s,t)]\mapsto [x'(t),\overline{x}'(s,t)],$$ where $\overline{x}(0,t)=q_0$ for some point $q_0\in L$,
$\overline{x}(1,t)=x(t)$, $\overline{x}'(s,t)=\overline{x}(s,0)$ for $(s,t)\in [0,1]\times[0,\frac{1}{2}]$ and $\overline{x}'(s,t)=\overline{x}(s,\chi(t-\frac{1}{2}))$ for
$(s,t)\in [0,1]\times[\frac{1}{2},1]$.
A direct calculation shows that for any $(x,\overline{x})\in\crit(\mathcal{A}_{H,L})$,
$\mathcal{A}_{K,L}(\Pi[x,\overline{x}])=\mathcal{A}_{H,L}([x,\overline{x}])$. So we have $\spec(H,L)=\spec(H\sharp\varepsilon f,L)$. As above, one can show $\spec(H,L)=\spec(H\sharp s\varepsilon f,L)$ for every $s\in[0,1]$. This finishes the proof of the claim.
Now we are in a position to finish the proof of the strict inequality. Let $f$ be as before, and take $\varepsilon>0$ sufficiently small. From Claim~\ref{clm:hsv} and the module structure property~(LS9) we obtain
$$\ell(a\bullet\alpha,H)=\ell(a\bullet\alpha,H\sharp \varepsilon f)\leq \sigma(a, \varepsilon f)+\ell(\alpha, H)<\ell(\alpha, H)+I_\omega(a),$$
which yields the desired strict inequality.
\qed
\subsection{Proof of Theorem~\ref{thm:homess}}
We first notice that, since $L$ is wide and $N_L\geq n+1$, there exists a canonical isomorphism
$(H(L,{\mathbb{Z}}_2)\otimes\Lambda)_*\cong QH_*(L)$, see~\cite{BC}.
Let $U\subset L$ be any neighborhood of $L\cap \varphi_H(L)$ in $L$. Let $f\in C^\infty(L)$ be a smooth function such that $f=0$ on $\overline{U}$ and $f<0$ on $L\setminus \overline{U}$. Take $H_f\in C^\infty(M)$ as a lift of $f$ in Proposition~\ref{pp:ls=minmax}. Given $\alpha=\sum_ix_i\otimes\lambda_i\in QH(L)$ with $0\neq x_i\in H_{*<n}(L,{\mathbb{Z}}_2)$, we have that
\begin{eqnarray}\label{e:ineqls}
\ell(\alpha,\varepsilon H_f)&\leq&\sup\limits_{i,x_i\neq0}\{\ell(x_i\otimes\lambda_i,\varepsilon H_f)\}\\\notag
&\leq&\sup\limits_{i,x_i\neq0}\{\ell(x_i,\varepsilon H_f)+A_L\nu(\lambda_i)\}\\\notag
&\leq&\sup\limits_{i,x_i\neq0}\{\ell(x_i,\varepsilon H_f)\}+A_L\sup\limits_{i,x_i\neq0}\{\nu(\lambda_i)\}\\\notag
&=&\sup\limits_i\{c_{LS}(x_i,\varepsilon f)\}+A_L\nu(\alpha),
\end{eqnarray}
where the first inequality is obtained by (LS10), the second one by (LS3), and the last equality by Proposition~\ref{pp:ls=minmax}.
It follows from Claim~\ref{clm:sv} and the triangle inequality~(LS7) that
\begin{equation}\label{e:ineqlls}
\ell(\alpha\circ\beta,H)=\ell(\alpha\circ\beta,\varepsilon H_f\sharp H)\leq \ell(\alpha,\varepsilon H_f)+\ell(\beta,H)
\end{equation}
provided that $\varepsilon>0$ is sufficiently small. So if $\ell(\alpha\circ\beta,H)=\ell(\beta,H)+A_L\nu(\alpha)$ then
from (\ref{e:ineqls}) and (\ref{e:ineqlls}) we deduce that for $\varepsilon>0$ sufficiently small
$$A_L\nu(\alpha)\leq \sup\limits_i\{c_{LS}(x_i,\varepsilon f)\}+A_L\nu(\alpha).$$
Consequently, we have $0\leq c_{LS}(x_k,\varepsilon f)=\sup_i\{c_{LS}(x_i,\varepsilon f)\}$ for some $k$. On the other hand, by Proposition~\ref{pp:minmax}.5 we have $c_{LS}(x_k,\varepsilon f)\leq\varepsilon\max_Lf=c_{LS}([L],\varepsilon f)=0$. So $c_{LS}(x_k,\varepsilon f)=c_{LS}(x_k\cap [L],\varepsilon f)=c_{LS}([L],\varepsilon f)$ with $x_k\in H_{*<n}(L,{\mathbb{Z}}_2)$, and hence
by Proposition~\ref{pp:minmax}.7 the zero level set $\overline{U}$ of $f$ is homologically non-trivial.
\qed
\subsection{Proof of Theorem~\ref{thm:hess}} For an arbitrary open neighborhood $V$ of $L\cap\varphi^{-1}_H(L)$ in $M$ we pick a smooth function $f:M\to {\mathbb{R}}$ such that $f|_{\overline{V}}=0$ and $f<0$ on $M\setminus \overline{V}$. If $a=\sum_kz_k\lambda_k\in\widehat{QH}(M,\Lambda)$ with each nonzero $z_k\in H_{*<2n}(M,{\mathbb{Z}}_2)$ (of pure degree), then by (\ref{e:ineqhs}) for $\varepsilon>0$ small enough, we have
$\sigma(a,\varepsilon f)\leq \sup_k\{\sigma(z_k,\varepsilon f)\}+I_\omega(a)$.
The module structure property~(LS9) and Claim~\ref{clm:hsv}
imply that
$ \ell(a\bullet\alpha,H)=\ell(a\bullet\alpha,H\sharp \varepsilon f)\leq \sigma(a,\varepsilon f)+\ell(\alpha,H)$. Then we get $\sigma(z_{k_0},\varepsilon f)\geq \ell(a\bullet\alpha,H)-\ell(\alpha,H)-I_\omega(a)=0$ for some $k_0$. It follows from the normalization property~(HS1) that $c_{LS}(z_{k_0},\varepsilon f)=\sigma(z_{k_0},\varepsilon f)\geq0$ whenever $\varepsilon>0$ is sufficiently small. Now from Proposition~\ref{pp:minmax}.5 we deduce that
$$0\leq c_{LS}(z_{k_0},\varepsilon f)= c_{LS}(z_{k_0}\cap [M],\varepsilon f)\leq c_{LS}( [M],\varepsilon f)\leq 0.$$
Therefore, we obtain $c_{LS}(z_{k_0}\cap [M],\varepsilon f)= c_{LS}( [M],\varepsilon f)$ with $z_{k_0}\in H_{*<2n}(M,{\mathbb{Z}}_2)$. Then Proposition~\ref{pp:minmax}.7 implies that the zero level set $\overline{V}$ of $f$ is homologically non-trivial. Since $\varphi_H$ is a diffeomorphism of $M$, the set $\overline{\varphi_H(V)}$, a neighborhood of the intersection $L\cap\varphi_H(L)$, is also homologically non-trivial. The proof is completed.
\qed
\section{Proofs of Theorem~\ref{thm:ArnoldC} and Theorem~\ref{thm:two}} \label{sec:last}
\subsection{Proof of Theorem~\ref{thm:ArnoldC}}
By definition there exists a Hamiltonian $H\in\mathcal{H}_c$ such that $\gamma(L,H)<A_L$. Since $H(L,{\mathbb{Z}}_2)$ is generated as a ring by $H_{\geq n+1-N_L}(L,{\mathbb{Z}}_2)$, we may assume that there exist $u_i\in H_{n-N_L<*<n}(L,{\mathbb{Z}}_2)$, $i=1,\ldots,k$, such that $u_1\cap\cdots\cap u_k=[pt]$ and $cl(L)=k+1$.
Set
$$[L]=\alpha_0,\alpha_1,\ldots,\alpha_k\in QH(L), \quad \alpha_i=u_{k-i+1}\circ\alpha_{i-1}.$$
As the Lagrangian quantum product $\circ$ is a quantum deformation of the homological intersection product in $H(L,{\mathbb{Z}}_2)$ (see~\cite{BC,BC2}), each $\alpha_i$ is nonzero.
In particular, we have
$$\alpha_k=[pt]+\sum_{r\geq 1}a_{rN_L}t^r,\quad a_{rN_L}\in H_{rN_L}(L,{\mathbb{Z}}_2).$$
This implies that $\langle \mathcal{T}([L]),\alpha_k\rangle\neq 0$, where $\mathcal{T}: QH_*(L)\to QH^{n-*}(L)$ is the Poincar\'{e} duality map (see Section~\ref{subsec:lqs}).
From the Poincar\'{e} duality property~(LS11) of Lagrangian spectral invariants (see Section~\ref{sec:lsi})
we infer that $-\ell([L],\overline{H})\leq \ell(\alpha_k,H)$. So we have
$$\gamma(L,H)=\ell([L],H)+\ell([L],\overline{H})\geq \ell([L],H)-\ell(\alpha_k,H).$$
It follows from Corollary~\ref{cor:num} and the spectral property~(LS2) that there exist $k+1$ elements $\widetilde{x}_i=[x_i,\overline{x}_i]\in\crit(\mathcal{A}_{H,L})$ such that
$$\ell(\alpha_k,H)=\mathcal{A}_{H,L}(\widetilde{x}_k)<\mathcal{A}_{H,L}(\widetilde{x}_{k-1})<\cdots<\mathcal{A}_{H,L}(\widetilde{x}_0)=\ell(\alpha_0,H).$$
Clearly, since $\gamma(L,H)<A_L$, we can exclude the possibility that one of $\{\widetilde{x}_i\}_{i=0}^k$ is a recapping of another.
Therefore, all the $x_i$ are different; equivalently, all the corresponding elements of $L\cap\varphi_H(L)$ are distinct. This completes the proof.
\qed
\subsection{Proof of Theorem~\ref{thm:two}} Assume that $\varphi\in\mathcal{H}am(M,\omega)$ is generated by a Hamiltonian $H\in C^\infty([0,1]\times M)$, i.e., $\varphi=\varphi_H^1$. Since $L$ is non-narrow, $[L]\in QH(L)$ is non-trivial. Note that $[M]\bullet [L]=[L]$.
Then we have
\begin{eqnarray}\label{e:triangle}
\ell([L],H)&=&\ell\big([(t^{-\tau} u_1)*\cdots *u_k]\bullet [L],H\big)\\\notag
&=&\ell\big((t^{-\tau} u_1)\bullet u_2\bullet\cdots\bullet(u_k\bullet [L]),H\big)\\\notag
&=&\ell\big( u_1\bullet u_2\bullet\cdots\bullet(u_k\bullet [L]),H\big)+\tau A_L
\\\notag
&<&\ell\big(u_2\bullet\cdots\bullet(u_k\bullet [L]),H\big)+\tau A_L\\\notag
&<&\cdots
\\\notag
&<&\ell\big(u_k\bullet [L],H\big)+\tau A_L,
\end{eqnarray}
where we have used the module structure of Lagrangian quantum homology in the second equality, the quantum shift property~(LS3) in the third equality, and Theorem~\ref{thm:ll} in the remaining inequalities. Using Theorem~\ref{thm:ll} again we obtain $\ell\big(u_k\bullet [L],H\big)<\ell([L],H)+I_\omega(u_k)=\ell([L],H)$. This together with (\ref{e:triangle}) implies that
\[
\begin{split}
\ell([L],H)-\tau A_L=\ell\big(u_1\bullet\cdots\bullet(u_k\bullet [L]),H\big)<\ell\big( u_2\bullet\cdots\bullet(u_k\bullet [L]),H\big)<\cdots\\
<\ell(u_k\bullet [L],H)<\ell([L],H).
\end{split}
\]
Therefore, the spectral property (LS2) of $\ell$ yields the desired result.
\qed
\section{Concluding remarks}\label{sec:remarks}
Since in Floer theory the Lagrangian (resp. Hamiltonian) spectral invariants serve as minimax critical value selectors, the Lagrangian (resp. Hamiltonian) Ljusternik--Schnirelman theory seems to be a very useful tool towards the Arnold conjecture for degenerate Lagrangian intersections (resp. degenerate symplectic fixed points), and could be of independent interest. In the following concluding remarks we summarize some directions for further study of this class of objects.
\begin{itemize}
\item Just as we have seen in Theorem~\ref{thm:two} and Theorem~\ref{thm:more}, the lower bounds given by the algebraic structures of Lagrangian quantum homology (more specifically, by FQF and LFQF) do not depend on the Hamiltonian diffeomorphisms of the ambient symplectic manifold. So obtaining a lower bound on the number of Lagrangian intersections can be translated into a purely algebraic calculation as long as the product and module structures of Lagrangian quantum homology (equivalently, Lagrangian Floer homology) are known.
\item Since Lagrangian spectral invariants generalize Hamiltonian spectral invariants (see~\cite{LZ}), the Lagrangian Ljusternik--Schnirelman inequality~I in Theorem~\ref{thm:lls} can be viewed naturally as a generalisation of the Hamiltonian Ljusternik--Schnirelman inequality established by Ginzburg and G\"{u}rel~\cite{GG}. Indeed, let $\Delta$ be the Lagrangian diagonal in the product symplectic manifold $(M\times M,\omega\oplus(-\omega))$; then $N_\Delta=2C_M\geq 2$. Furthermore, there is a canonical algebra isomorphism $QH(\Delta)=QH(M)$ so that for every class $\alpha\in QH(\Delta)$ and every Hamiltonian $H\in C_c^\infty(S^1\times M)$ we have
$$\sigma(\alpha,H)=\ell(\alpha,H\oplus 0),$$
see~\cite[Theorem~5]{LZ} for a proof. Correspondingly, for $\alpha\in \widehat{QH}(M)=H_{*<2n}(M,{\mathbb{Z}}_2)\otimes \Gamma$ and $\beta\in QH(M)$ we have the strict inequality
\[
\sigma(\alpha*\beta, H)< \sigma(\beta, H)+I_\omega(\alpha).
\]
In view of this relation between these two Ljusternik--Schnirelman theories, one may propose the following definition:
\begin{df}\label{def:qf}
Let $(M^{2n},\omega)$ be a monotone tame symplectic manifold. We say that $M$ has a \textit{fundamental quantum factorization} (denoted by FQF for short) \textit{of length $l$ with order $\kappa$} if there exist $u_1,\ldots,u_l\in H_{*<2n}(M,{\mathbb{Z}}_2)$ such that
$$s^\kappa[M]= u_1*u_2*\cdots *u_l\quad \hbox{in}\; QH(M).$$
\end{df}
Similarly to Theorem~\ref{thm:more}, one can obtain
\begin{thm}\label{thm:fixpoints}
Let $(M^{2n},\omega)$ be a monotone tame symplectic manifold. Suppose that $M$ has an FQF of length $l$ with order $\kappa$. Let $\varphi$ be any compactly supported Hamiltonian diffeomorphism of $M$ with isolated fixed points. Then $\varphi$ has at least $\lceil l/\kappa\rceil$ fixed points.
\end{thm}
Indeed, the monotone condition in the above theorem could be substituted with the weaker condition that $M$ is weakly monotone (see McDuff and Salamon~\cite{MS} for the definition) with a minor change of the definition of the Novikov ring $\Gamma$, see~\cite{GG}. With this theorem at hand, one can easily deduce from the quantum product of the quantum homology/cohomology of an explicit symplectic manifold the least number of fixed points that a Hamiltonian diffeomorphism must have. For instance, for ${\mathbb{C}} P^n$ we have
\[
h^{*k}=
\begin{cases}h^{\cap k},\quad \ \ & 0\leq k\leq n,\\
[{\mathbb{C}} P^n]s, \quad \ \ & k=n+1.
\end{cases}
\]
Here $h=[{\mathbb{C}} P^{n-1}]\in H_{2n-2}({\mathbb{C}} P^n,{\mathbb{Z}}_2)$ is the class of a hyperplane in the quantum homology $QH({\mathbb{C}} P^n)$. So ${\mathbb{C}} P^n$ has an FQF of length $n+1$ with order $1$, and hence every $\varphi\in\mathcal{H}am({\mathbb{C}} P^n)$ must have at least $n+1$ fixed points. This recovers the classical result by Fortune~\cite{Fo}; see also~\cite{FW,Fl3}. Many other related results can be obtained similarly. A complete list of such examples cannot be presented here; for more examples we refer to~\cite{GG}.
\item In principle, one could extend Ljusternik--Schnirelman theory to a more general class of spectral invariants, for instance, the \textit{boundary depth} introduced by Usher~\cite{Us,Us2}, the \textit{spectral length} introduced by Atallah and Shelukhin~\cite{AS}, the \textit{spectral invariants with bulk} introduced by Fukaya \textit{et al.}~\cite{FOOO}, the \textit{PHF spectral invariants} introduced by Edtmair and Hutchings~\cite{EH}, and so on. After that, one may use the various versions of Ljusternik--Schnirelman theory to study properties of Hamiltonian dynamics or symplectic geometry,\footnote{which we will investigate in our future work~\cite{Go4}} e.g., making more refined estimates of the numbers of Lagrangian intersections, periodic orbits, Reeb chords, etc. For more applications of Ljusternik--Schnirelman theory we refer to~\cite{Go1,Go2,Go3,Go4}.
\end{itemize}
\section{Introduction}
\label{introduction}
\begin{figure}[t]
\vspace{-1.0cm}
\subfigure[Conceptual LED setup]{\includegraphics[width=0.49\textwidth]{pictures/setup_image.png}}
\subfigure[Color space according to the \textbf{C}ommission \textbf{i}nternationale de l’\textbf{é}clairage (CIE-color space adopted from Ding et al. \cite{Ding2013})]{\includegraphics[width=0.49\textwidth]{pictures/colorspace.png}}
\caption{An LED (a) consists of a chip, a conversion system and a multi-layer thin film that features ten layers. The latter focuses the emitted light spectrum into a $\pm 25^\circ$-angle cone. The spectrum is composed of light rays (arrows) of different wavelengths. Such a spectrum can be associated with a point $\mathbf{c}=\left( C_x, C_y \right)$ in the color space (b). In this work, the Euclidean distance between two color points is denoted as the color point deviation $d$.}
\label{fig:introduction}
\end{figure}
For many recent applications in our everyday lives, light-emitting diodes (LEDs) are required to emit a light spectrum that features specific optical properties. Such a light spectrum describes how the (relative) power provided by an LED is distributed over its contributing wavelengths and occurring emergent angles. In this work, we elaborate on how to shift the light distribution of white LEDs towards forward directions using a multi-layer thin film (MLTF). As illustrated in figure \ref{fig:introduction} $(a)$, the term white LED (package) refers to a horizontal stack of a semi-conductor chip, the conversion system and an optional multi-layer thin film. Further on, the common LED structure without an MLTF is referred to as the reference design. By multiplication with a color matching function\cite{Guild1931,Wright1929}, each light spectrum can be characterized by the so-called color point, a two-dimensional vector in the color space of figure \ref{fig:introduction} $(b)$, which describes the color of a spectrum\cite{Ding2013}. In the common case of a white LED without an MLTF, the chip emits blue light that passes through the ensuing conversion system\cite{Cho2017}. Here, two conversion materials, compounded with weight percentages $\mathbf{w}=(w_1, w_2)$, convert a portion of the blue light to green and red light, resulting in white light. The amount of conversion materials determines the degree of conversion and is adapted such that the resulting light spectrum corresponds to the application-specific white target color point $\mathbf{C}$. Another important characteristic of an LED is the power of its radiated spectrum. In radiometry, this power is commonly referred to as \textit{radiant flux}. Hence, both the \textit{color point} $\mathbf{c}^\alpha(\mathbf{w})$ and the \textit{forward power} $P^\alpha(\mathbf{w})$ featured by an LED in a $\pm \alpha$-angle cone are computed based on the anisotropic radiated light spectrum.
Unfortunately, such emitted spectra of LED packages are unknown a priori for varying conversion materials and/or MLTFs. Thus, the deduction of spectrum-related measures of a particular LED like its power or color point would require expensive physical experiments; namely, to physically fabricate and evaluate the spectra of such LED packages. An elegant and cheaper alternative is to statistically trace a large number of rays sampled from the light distribution of the light injecting LED chip, until they exit the LED package and form the emitted spectrum, or vanish due to non-radiative thermal losses. Thereby, the behavior of each ray is governed by geometrical optics\cite{Lahiri2016}. In this framework, only the spectrum of the LED chip needs to be known and thus is measured once. The spectrum of the LED chip in turn is assumed to remain stable over changes of both the conversion system and the MLTF. Such still time-consuming and noisy ray tracing simulations are based on calibrated optical models of LEDs \cite{Kenji2018,Liu2010,Sun2008,Lee2001,Sun2006} and allow an estimation of their spectra. Notably, ray tracing simulations are non-deterministic and can be considered Monte-Carlo simulations \cite{Lee2001}: The random variables of interest (power and color point) cannot be computed analytically and thus need to be estimated via numerous realizations and processing (tracing) of observable random variables (rays that exit the chip). Such ray tracing simulations replace expensive experiments and allow us to conduct extensive optimization of LEDs.\par
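The Monte-Carlo character of such simulations can be illustrated with a toy estimator (a sketch only, not the calibrated optical model used in this work): for an idealized Lambertian emitter the fraction of power radiated into a $\pm\alpha$ cone is $\sin^2\alpha$, and a simple ray-sampling loop recovers this value. The function name and the emitter model are illustrative assumptions.

```python
import math
import random

def forward_power_fraction(alpha_deg, n_rays=100_000, seed=0):
    """Monte-Carlo estimate of the power fraction emitted into a +/- alpha cone
    for a toy Lambertian emitter (radiance proportional to cos(theta))."""
    rng = random.Random(seed)
    alpha = math.radians(alpha_deg)
    inside = 0
    for _ in range(n_rays):
        # For a Lambertian source, sin^2(theta) is uniformly distributed,
        # so theta = asin(sqrt(u)) samples the emission angle.
        theta = math.asin(math.sqrt(rng.random()))
        if theta <= alpha:
            inside += 1
    return inside / n_rays

# Estimate for a +/- 25 degree cone; the analytic value is sin^2(25 deg).
p25 = forward_power_fraction(25.0)
```

The estimate converges at the usual Monte-Carlo rate $\mathcal{O}(1/\sqrt{n})$, which mirrors why the full ray tracing simulations remain noisy and expensive.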
Since the aforementioned so-called \textit{color point optimization problem} based on $\mathbf{w}$ is empirically unambiguous and convex, it is not possible to further increase the power $P^\alpha(\mathbf{w})$ if
\begin{align*}
\mathbf{w} = \text{argmin}_{\boldsymbol{\omega}} \lbrace d \left( \mathbf{C} , \textbf{c}^\alpha\left( \boldsymbol{\omega} \right) \right) \rbrace
\end{align*}
holds based on the Euclidean distance $d \equiv \Vert \cdot \Vert_2$. The proposed idea of this manuscript is to modify the anisotropic light spectrum so as to focus more power $P^\alpha(\mathbf{w}, \mathbf{t})$ into the forward angle cone compared to the reference design by using an MLTF --- parameterized by $\mathbf{t}$ --- on top of the conversion system; thus, increasing the directionality of white light. In this work, directional emission or directionality of white light refers to the power that is emitted in a particular (solid) angle of interest, induced by emitted rays of light accumulated in these directions. The corresponding inverse design problem features $T + 2$ parameters that describe the modified LED: In addition to the weight percentages of the conversion materials, the layer thicknesses $\mathbf{t} = \left( t_1, ..., t_{T} \right)$ of $T$ alternating layers of titanium dioxide ($\text{TiO}_2$) and silicon dioxide ($\text{SiO}_2$) can be adapted. Notably, increasing the usable power may compete with achieving the application-specific white target color point. For instance, due to the \textit{Stokes shift} \cite{Stokes1852}, designing an MLTF that raises the ratio of blue light in forward direction obviously increases the usable power but will no longer retain the desired color point. Namely, it will make the LED's light appear bluish and no longer suitable for a white light application. In other words, the MLTF is required to not only increase the number of rays that emerge in forward direction but also achieve a particular ratio between blue, green and red rays. The aforementioned challenges result in a multi-objective optimization problem: improving the directionality of white light while preserving the color point associated with the spectrum that exits the LED.
More precisely, we aim to increase $P^\alpha(\mathbf{w}, \mathbf{t})$ while keeping the Euclidean distance $ d^\alpha(\mathbf{w}, \mathbf{t}) \equiv d(\mathbf{c}^\alpha(\mathbf{w}, \mathbf{t}), \mathbf{C})$ low. To summarize, the contribution of this work is three-fold:
\begin{itemize}
\item Bayesian optimization is used to adapt the layer thicknesses of an MLTF based on ray tracing simulations of white LEDs
\item MLTFs are investigated with regard to their general ability to increase directionality of white light emission of LEDs
\item The effect that explains the directionality increase of white LEDs in physical terms is identified
\end{itemize}
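As a minimal illustration of the color point deviation $d$ used throughout this work, the following sketch computes the Euclidean distance between a measured CIE color point $\mathbf{c}=(C_x,C_y)$ and a white target $\mathbf{C}$; the function name and the sample coordinates are hypothetical.

```python
import math

def color_point_deviation(c, C):
    """Euclidean distance d = ||c - C||_2 between an observed CIE color
    point c = (Cx, Cy) and the white target color point C."""
    return math.dist(c, C)

# Example: a slightly bluish spectrum versus a hypothetical white target.
d = color_point_deviation((0.30, 0.31), (0.33, 0.33))
```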
\section{State of the art}
Although the directional emission of white light is crucial for many applications like head lamps of cars, to the best of our knowledge, none of the aforementioned points were investigated in previous work. Traditionally, the directionality of incoherent light sources like LEDs is increased via optical devices such as reflectors or lenses \cite{Lorenzo1982,Ma2015}, which shape the beam profile of a light source \cite{Fournier2011}. Notably, such optics are often bulky and therefore cause optical loss or limit the design. On the nanoscale, meta-\cite{Su2020,Lalanne2017, Schreiber2003}
and microlenses\cite{Lee2013,Chen2018} have been introduced to manipulate light by microscopic structures. In addition, scientific work has been conducted towards the development of incoherent and directional light sources based on periodic or random gratings like dielectric\cite{Yang2017,Yan2014} or plasmonic\cite{Kamakura2018} structures. Moreover, extensive studies regarding metasurfaces \cite{Agata2021,Shunsuke2021} have been conducted to increase the directional outcoupling of emitters. However, all of these systems are challenging to fabricate in mass-production or apply only to coherent light sources. Notably, they do not affect the light source itself but rather shape the light distribution over angle that exits a light source like an LED package. In contrast, the proposed epitaxial deposition of an additional MLTF on top of the conversion system of an LED is straightforward from an engineering viewpoint, allows compact integration in existing LED packages, implies only low additional expenditure and is directly applicable for mass-production of optical semiconductors. The closest investigation to ours may be the study conducted by Yi Zheng and Matthew Stough \cite{Zheng2008}: They proposed to use an MLTF as a wavelength selective filter between the LED chip and the conversion system to increase the global efficacy of white LEDs. This filter allows the blue light coming from the chip to pass while simultaneously reflecting the green and red light emitted by the conversion system. Thereby, reabsorption effects of non-blue photons in the chip are suppressed. In other words, the MLTF enforces the green and red light to exit the package rather than irradiating the chip and thus increases the overall extraction efficiency. Such an MLTF is included in the optical LED model used in this work and further considered as an integral component of the LED chip.
In contrast, the contribution of our work is to improve the directional emission into an angle cone rather than increasing the overall outcoupling efficiency. Therefore, we demonstrate that an MLTF, acting as an angle and wavelength selective filter between the conversion system and the ambient air, can increase the directional emission of white light. Thereby, the MLTF helps to shape the angular and spectral light distribution directly through ray ping pong during its generation process rather than to collimate the outcoupled light.\par
Designing MLTFs that feature a particular target reflectivity or transmittivity over angle of incidence and wavelength is a common engineering challenge and therefore many optimization methods have been developed: Some of them are based on gradients \cite{Sullivan1996} or biological inspirations \cite{rabady2014,Guo2014}.
Recently, even some approaches including neural networks \cite{Roberts2018,LiuII2018,Hedge2019} or reinforcement learning \cite{Wankerl2021,Jiang2020} have been implemented to efficiently scan the search space for suitable MLTF designs. These techniques rely on fast-to-evaluate computations regarding the transfer matrix method\cite{Luce2021,Byrnes2020} (TMM) that allow to compute the optical characteristics\footnote{e.g. reflectivity or transmittivity over wavelength and angle of incidence} of an MLTF. To measure\footnote{e.g. with a notion of reconstruction error} how close a particular MLTF's optical characteristic is to an optimal one takes not even a second in total. However, in contrast with applications like anti-reflection coatings\cite{Guo2014}, a specific optimal optical characteristic that causes an MLTF to increase the directionality of white light is not known a priori. To circumvent this lack of information, we conduct noisy ray tracing simulations in order to optimize the power and color point in forward direction regarding the layer thicknesses of an MLTF. As a side benefit, the optical characteristic of MLTFs is implicitly optimized towards the a priori unknown optimal one and can be investigated further a posteriori. Notably, these noisy simulations take about $4 \text{ [min]}$ for a given MLTF, which is $420$ times longer compared to computing the optical characteristics of the MLTF itself based on TMM. Thus, ray tracing simulations are relatively expensive-to-evaluate and render most of the aforementioned data-hungry TMM-based optimization methods of MLTFs impractical. Therefore, we propose to adapt the individual MLTF layer thicknesses in order to maximize $P^\alpha(\mathbf{w}, \mathbf{t})$ while minimizing $d^\alpha(\mathbf{w}, \mathbf{t})$ via a variant of Bayesian optimization\cite{Bradford2018}. 
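For reference, the TMM computations mentioned above can be sketched for the special case of normal incidence on lossless dielectric layers; this is the standard characteristic-matrix formulation, not the solver used in this work (which must also handle oblique incidence and polarization). All names are our own, and the indices and thicknesses below are illustrative.

```python
import numpy as np

def tmm_reflectance(n_layers, d_layers, lam, n_in=1.0, n_out=1.5):
    """Reflectance of a dielectric multi-layer thin film at normal incidence,
    computed with the characteristic-matrix form of the transfer matrix method.
    n_layers: refractive indices; d_layers: thicknesses (same unit as lam)."""
    M = np.eye(2, dtype=complex)
    for n, d in zip(n_layers, d_layers):
        delta = 2.0 * np.pi * n * d / lam  # phase thickness of the layer
        Mj = np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                       [1j * n * np.sin(delta), np.cos(delta)]])
        M = M @ Mj
    (m11, m12), (m21, m22) = M
    r = ((n_in * m11 + n_in * n_out * m12 - m21 - n_out * m22) /
         (n_in * m11 + n_in * n_out * m12 + m21 + n_out * m22))
    return abs(r) ** 2

# Sanity check: a single quarter-wave layer with n1 = sqrt(n_in * n_out)
# is anti-reflective at the design wavelength.
lam0 = 550.0  # design wavelength in nm (illustrative)
n1 = float(np.sqrt(1.0 * 1.5))
R = tmm_reflectance([n1], [lam0 / (4.0 * n1)], lam0, n_in=1.0, n_out=1.5)
```

Evaluating such a stack over wavelength and angle is what makes the TMM check essentially free compared to a full ray tracing run.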
Although Bayesian optimization has shown satisfying results on many mathematical test functions as well as expensive-to-evaluate real-world problems regarding engineering, physical and chemical sciences \cite{Schweidtmann2018,Amar2019,Clayton2020}, to the best of our knowledge no application towards ray tracing simulations is reported in the scientific literature. After optimizing an MLTF, its optical characteristics, like transmittivity, allow us to physically deduce target behaviors that an MLTF needs to fulfill in order to further increase the directionality of white light. Hereby, an effect called \textit{ray ping pong} is identified to be responsible for the improved directional emission of white light. Related effects like photon recycling \cite{Cheng2021} were studied in previous work for solar GaAs cells \cite{Kosten2014,Raja2021,Walker2015}, perovskite LEDs \cite{Cho2020} or thin films themselves \cite{Fu2021}. In our work, depending on their wavelength and incident angle, the rays are considered to be partially trapped in the LED package by an MLTF, enforcing interactions between rays and LED package materials like scattering, (re-)absorption, (re-)emission or non-radiative effects. The MLTF can statistically steer the degree of such interactions for particular parts of the original spectrum emitted by the LED package. Thus, the MLTF has a direct impact on the emitted spectrum and needs to balance the radiated spectrum not only to appear as white in forward direction, but also to suppress radiation in non-forward direction. In other words, the MLTF is found to play angle and wavelength selective ping pong with the rays of light to achieve an equilibrium of emitted rays, which is of advantage regarding directionality while still retaining the color point.
\begin{figure}[!t]
\vspace{-0.5cm}
\begin{center}
\includegraphics[width=0.9\textwidth]{pictures/ColorConvexWithArrow.png}
\caption{(a) Illustration of a multi-layer thin film consisting of two dielectric materials ($\text{TiO}_2$ in light grey and $\text{SiO}_2$ in dark grey). (b) The heatmap of the Euclidean color point deviation over weight percentages of two conversion materials for the multi-layer thin film in (a). The green arrow indicates the possible path of a local optimizer initialized near the origin of the search space to the optimal converter weight percentages.}
\label{fig:convex}
\end{center}
\end{figure}
\begin{figure}[!b]
\begin{algorithm}[H]
\begin{algorithmic}[1]
\State Initialize $\mathcal{D}$, $\mathcal{W}$, $0<\alpha<90$, and set a stopping criterion, e.g. a timeout
\While{stopping criterion not fulfilled}
\State Suggest (next) $\mathbf{t}$ based on $\mathcal{D}$ and TS-SOO \cite{Bradford2018}
\State Solve $\mathbf{w} = \text{argmin}_{\boldsymbol{\omega}} \lbrace d ^\alpha \left( \boldsymbol{\omega} , \mathbf{t} \right) \rbrace$ based on Downhill-simplex algorithm
\State Add acquired data point to data set $\mathcal{D} \leftarrow \mathcal{D} \cup
\lbrace \left( \mathbf{t} , f^\alpha(\mathbf{t}) \right) \rbrace$
and track $\mathcal{W} \leftarrow \mathcal{W} \cup
\lbrace \mathbf{w} \rbrace$
\EndWhile
\end{algorithmic}
\caption{Hierarchical optimization approach: Pseudo-code that illustrates the procedure during optimization. $\mathcal{D}$ denotes the acquired data set of parameters and observations, $\alpha$ denotes the opening angle of the forward cone, and the timeout criterion defines how long the optimization endures. Optionally, the weight percentages are stored in $\mathcal{W}$ for traceability.}
\label{alg:hierarchical_optimizer}
\end{algorithm}
\vspace{-0.3cm}
\end{figure}
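The suggestion step in line 3 of the pseudo-code can be sketched with a plain Gaussian-process surrogate and Thompson sampling over a finite candidate set. This is a deliberately simplified stand-in for TS-SOO~\cite{Bradford2018}: the RBF kernel, its hyperparameters, and the candidate grid are arbitrary assumptions.

```python
import numpy as np

def rbf_kernel(A, B, length=0.2, var=1.0):
    """Squared-exponential kernel between row-stacked points A and B."""
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return var * np.exp(-0.5 * sq / length ** 2)

def thompson_suggest(T, y, candidates, noise=1e-4, rng=None):
    """Suggest the next thickness vector t: condition a GP on the observed
    objective values y at T, draw one posterior sample over the candidate
    set, and return the candidate maximizing that sample."""
    rng = np.random.default_rng(rng)
    K = rbf_kernel(T, T) + noise * np.eye(len(T))
    Ks = rbf_kernel(candidates, T)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mu = Ks @ alpha  # posterior mean at the candidates
    V = np.linalg.solve(L, Ks.T)
    cov = rbf_kernel(candidates, candidates) - V.T @ V  # posterior covariance
    Lc = np.linalg.cholesky(cov + 1e-6 * np.eye(len(candidates)))
    sample = mu + Lc @ rng.standard_normal(len(candidates))
    return candidates[np.argmax(sample)]

# Toy 1-D run: three observed objective values at three layer thicknesses.
T = np.array([[0.1], [0.5], [0.9]])
y = np.array([0.2, 1.0, 0.3])  # noisy f^alpha observations
cand = np.linspace(0.0, 1.0, 101)[:, None]
t_next = thompson_suggest(T, y, cand, rng=0)
```

Because each posterior sample is random, repeated calls explore the thickness space while still favoring regions where the surrogate predicts a high objective.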
\section{Hierarchical optimization as a product of weights}
\label{problem}
Since hitting a suitable color point of an LED is mandatory for production, we reformulate the problem introduced in section \ref{introduction} as a hierarchical optimization task
\begin{align}
\label{eq:problem}
\max_{\mathbf{t}} \big\lbrace P^\alpha \left( \mathbf{w}, \mathbf{t} \right) \vert \mathbf{w} = \text{argmin}_{\boldsymbol{\omega}} \lbrace d^\alpha\left( \boldsymbol{\omega} , \mathbf{t} \right) \rbrace \big\rbrace.
\end{align}
Here, according to domain expert knowledge, we assume that the color point optimization with respect to the conversion materials remains convex for a fixed particular MLTF, if initialized near the origin (see figure \ref{fig:convex}). Equation \eqref{eq:problem} allows us to separate the two sets of parameters: the convex color point optimization problem based on the weight percentages $\mathbf{w}$ of conversion materials, and the non-convex power optimization problem including the layer thicknesses $\mathbf{t}$. In this work, a variant of active learning called Bayesian optimization is applied to maximize the power $P^\alpha$ by adapting the thicknesses of the layers of an MLTF. Then, a variant of the Downhill-simplex\cite{Nelder1965} optimizer is used to customize the conversion system for each particular MLTF in order to retain the desired color point before evaluating the power. To account for this hierarchical structure, we implement a weighted power as a physics-guided real-valued objective function
\begin{align}
\label{eq:objective}
f ^\alpha ( \mathbf{t} ) = P^\alpha \left( \mathbf{w}, \mathbf{t} \right)
\cdot W\left(d ^\alpha \left(\mathbf{w} , \mathbf{t} \right) \right)
&\text{, where }
\mathbf{w} = \text{argmin}_{\boldsymbol{\omega}} \lbrace d ^\alpha \left( \boldsymbol{\omega} , \mathbf{t} \right) \rbrace \\
&\text{ and }
W \left( d^\alpha \left(\mathbf{w} , \mathbf{t} \right) \right) = \text{exp} \left( a \cdot [ d ^\alpha \left(\mathbf{w} , \mathbf{t} \right) ] ^b \right),
\label{eq:weighting}
\end{align}
which is maximized via Thompson-sampling single-objective Bayesian optimization\cite{Bradford2018} (TS-SOO) in this work. Here, $\mathbf{w}$ is a solution to the nested color point optimization in problem \eqref{eq:problem}. As explained, the weight percentage parameters $\mathbf{w}$ are not tunable by the Bayesian optimizer directly.
However, for given thicknesses $ \mathbf{t}$ of an MLTF, the color point deviation $d^\alpha(\mathbf{w} , \mathbf{t})$ may be bounded away from zero with respect to the weight percentages $\mathbf{w}$, meaning that the target color point is not reachable for that MLTF. In such cases, the comparison of power values for different MLTFs at different color points becomes invalid due to the Stokes shift. Here, $W(\cdot)$ allows us to guide the course of the optimization towards suitable MLTFs: the weighting punishes excessive deviations from the target color point by decreasing the objective, even if the power of an MLTF is high. In practice, further weighting functions may be introduced to account for various conflicting or competing effects during LED development.\par
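The evaluation of the hierarchical objective \eqref{eq:objective}--\eqref{eq:weighting} can be sketched as follows. This is a minimal illustration in which \texttt{deviation} and \texttt{power} are toy stand-ins for the ray tracing simulation (the quadratic toy deviation admits a closed-form inner argmin); the weighting uses the parameter values $a=-634914.5425$ and $b=3.3900$ derived in this work.

```python
import math

A, B = -634914.5425, 3.3900  # weighting parameters (a, b) used in this work

def weight(d):
    """Multiplicative weight W(d) = exp(a * d^b): close to 1 near the target
    color point, rapidly decaying for larger deviations."""
    return math.exp(A * d ** B)

# Toy stand-ins for the ray tracing simulation (illustration only):
def deviation(w, t):
    """Hypothetical color point deviation d(w, t): quadratic in the weight
    percentages w, plus a thickness-dependent offset modelling MLTFs for
    which the target color point is not exactly reachable."""
    return (w[0] - 0.12) ** 2 + (w[1] - 0.08) ** 2 + 1e-5 * sum(t)

def power(w, t):
    """Hypothetical forward power P(w, t)."""
    return 0.2 + 0.0003 * sum(t)

def objective(t):
    """Hierarchical objective f(t) = P(w*, t) * W(d(w*, t)),
    where w* = argmin_w d(w, t) is the nested color point optimization.
    For the toy quadratic deviation, the inner argmin is closed-form."""
    w_star = (0.12, 0.08)
    d = deviation(w_star, t)
    return power(w_star, t) * weight(d)
```

For $\mathbf{t}=\mathbf{0}$ the toy deviation vanishes and $f$ reduces to the raw power; thicker toy stacks gain power but are increasingly damped by $W$, mimicking how the weighting punishes designs that miss the color point.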
We compared TS-SOO to Thompson-sampling efficient multi-objective optimization\cite{Bradford2018} (TS-EMO), where a high forward power $P^\alpha$ and a low color point deviation $d^\alpha$ are treated as competing real-valued objectives. TS-EMO directly searches for joint parameter vectors $\left(\mathbf{w}, \mathbf{t} \right)$ that entry-wise maximize
\begin{align}
\label{eq:joint}
\mathbb{R}^2 \times \mathbb{R}^{T} &\rightarrow \mathbb{R}^2 \\
\nonumber
\left(\mathbf{w}, \mathbf{t} \right) &\mapsto \left( P^\alpha\left(\mathbf{w}, \mathbf{t} \right), - d^\alpha\left( \mathbf{w} , \mathbf{t} \right) \right),
\end{align}
where the aforementioned Stokes shift renders the two objectives competing. The minus sign in the second entry of equation \eqref{eq:joint} reflects the intention to minimize the color point deviation. Note that the hierarchical structure of the engineering optimization problem is no longer represented in this formulation.
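In this bi-objective setting, TS-EMO approximates the Pareto front of \eqref{eq:joint}. The notion of entry-wise maximization can be made concrete with a small non-dominated filter; the sample objective vectors below are hypothetical.

```python
def dominates(u, v):
    """u dominates v iff u is at least as good in every entry and strictly
    better in at least one (both entries are maximized)."""
    return all(a >= b for a, b in zip(u, v)) and any(a > b for a, b in zip(u, v))

def pareto_front(points):
    """Non-dominated subset of objective vectors (P_alpha, -d_alpha)."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# Hypothetical evaluations (P_alpha, -d_alpha) for four candidate (w, t) vectors:
samples = [(0.20, -0.0015), (0.25, -0.0060), (0.21, -0.0010), (0.19, -0.0020)]
front = pareto_front(samples)
```

Here the first and last samples are dominated; the front retains both the high-power/high-deviation and the low-power/low-deviation trade-off, which is precisely why an unconstrained front may contain MLTFs with unacceptable color points.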
\begin{figure}[!t]
\vspace{-0.5cm}
\includegraphics[width=0.99\textwidth]{pictures/noiseweight.png}
\caption{(left) The ray tracing for the reference design based on different numbers of traced rays ($10 \cdot 10^3$, $50 \cdot 10^3$ and $100 \cdot 10^3$) is started $15$ times. The conversion material weight percentages remained unchanged. As expected, the corresponding boxplots indicate that the noise level decreases with an increasing number of rays traced. The horizontal dashed line indicates the reference power obtained with $10^6$ rays. This noise effect holds for the estimated color point $\left( C_x, C_y \right)$, too. (right) The weighting function \eqref{eq:weighting} for $a=-634914.5425$ and $b=3.3900$. The dark lines emphasize the condition $W(0.01) = 0.9$.}
\label{fig:implementation}
\end{figure}
\section{Implementation}
For applications like headlights, we set $\alpha=25^\circ$ and choose $0.005$ as a preliminary upper bound for the color point deviation $d^\alpha(\mathbf{w} , \mathbf{t})$ from the required target color point $\mathbf{C} = \left( \nicefrac{1}{3}, \nicefrac{1}{3} \right)$ of a white LED. This enables us to solve for the parameters $ \left( a, b \right)$ such that the empirically derived conditions
\begin{align}
\label{eq:weight}
W(0.005) &= 0.99 \\ \nonumber W(0.010) &= 0.90
\end{align}
hold, yielding $a=-634914.5425$ and $b=3.3900$. The resulting weighting function is illustrated in figure \ref{fig:implementation} (right). The reference design refers to the special case $t_1 = \dots = t_{T} = 0.0$. In general, we propose to formulate hierarchical, competing objectives as a product of weights and a central figure of merit, e.g. the power. Conceptually, this enforces strong (AND-)conditions for all constraints expressed by the weights. In other words, a high figure of merit is only valuable if all constraints are fulfilled. At the same time, this formulation preserves the continuity of the objective function, which keeps it approximable with Gaussian processes \cite{Rasmussen2005}. As the black-box function \eqref{eq:objective} is based on a ray tracing simulation of $25 \cdot 10^4$ rays that takes several minutes ($\approx 4 \text{ [min]}$) to evaluate and features non-linearity and non-convexity, the maximization is conducted via an active learning approach, the aforementioned TS-SOO. This variant of Bayesian optimization is commonly employed to optimize expensive-to-evaluate engineering and chemical problems \cite{Schweidtmann2018,Amar2019,Clayton2020}. Moreover, as illustrated in figure \ref{fig:implementation} (left), the evaluation of the objective function provides noisy samples due to the conducted ray tracing. As outlined in algorithm \ref{alg:hierarchical_optimizer}, in each optimization iteration $n$ a thickness vector $\mathbf{t}^n$ is suggested using TS-SOO. For this MLTF, a variant of the Downhill-simplex algorithm solves the convex color point optimization nested in problem \eqref{eq:problem}, yielding $\mathbf{w}^n$. The acquired data point $\left( \mathbf{t}^n , f ^\alpha \left(\mathbf{t}^n \right) \right)$ is added to the data set $\mathcal{D}$.
This data set is used to update the global surrogate model --- implemented as a Gaussian process --- based on which the next thickness vector $\mathbf{t}^{n+1}$ is derived, until a predefined stopping criterion is fulfilled, e.g. a timeout or a maximum number of iterations $N$. TS-EMO follows essentially the same optimization routine, but directly suggests joint parameter vectors $\left( \mathbf{w}^n, \mathbf{t}^n \right)$ to solve the multi-objective optimization problem \eqref{eq:joint}. For TS-SOO as well as TS-EMO, we set $T=12$ and allow the thicknesses to vary between $10 \text{ [nm]}$ and $200 \text{ [nm]}$ for each layer. Moreover, both conversion material weight percentages are adjustable between $0 \text{ [wt\%}]$ and $25 \text{ [wt\%}]$. After the objective function \eqref{eq:objective} or \eqref{eq:joint} has been optimized via TS-SOO or TS-EMO, respectively, a plain Downhill-simplex algorithm \cite{Nelder1965} is run. In at most $25$ optimizer steps, the thickness parameters and conversion material weight percentages are jointly fine-tuned. Each of these steps takes $8$--$9 \text{ [min]}$ and is based on $5 \cdot 10^{5}$ traced rays per simulation to evaluate the respective objective function. During the local refinement, the acceptable upper bound for the color point deviation in equation \eqref{eq:weight} is narrowed down from $0.005$ to $0.002$, which is practically required for most applications.
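The parameters $(a, b)$ of the weighting function follow in closed form from the two conditions \eqref{eq:weight}: taking logarithms of both conditions and dividing eliminates $a$. A minimal standard-library computation:

```python
import math

d1, W1 = 0.005, 0.99   # condition W(d1) = W1
d2, W2 = 0.010, 0.90   # condition W(d2) = W2

# W(d) = exp(a * d^b)  =>  a * d1^b = ln(W1)  and  a * d2^b = ln(W2).
# Dividing the two equations eliminates a:  (d2 / d1)^b = ln(W2) / ln(W1).
b = math.log(math.log(W2) / math.log(W1)) / math.log(d2 / d1)
a = math.log(W1) / d1 ** b

def W(d):
    return math.exp(a * d ** b)
```

Up to floating-point rounding, this reproduces the reported $b = 3.3900$ and an $a$ close to $-634914.5425$, and the resulting $W$ satisfies both conditions exactly.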
\begin{figure}[!t]
\vspace{-0.5cm}
\includegraphics[width=0.99\textwidth]{pictures/results.png}
\caption{Color point deviation $d^\alpha$ over power $P^\alpha$ after joint (left, TS-EMO), hierarchical (middle, TS-SOO) and local (right, Downhill simplex) optimization for $\alpha=25^\circ$. The horizontal black lines denote the color point deviation of $0.005$ points, where $W\left( 0.005 \right) = 0.99$ holds. The vertical black lines denote the power of the reference design without multi-layer thin film. For each sample in these plots, the color map reflects the objective function values obtained by equation \eqref{eq:objective}.}
\label{fig:results}
\end{figure}
\section{Results}
In this section, we present the results of our investigations. First, we demonstrate that MLTFs can increase the directionality of white LEDs, which may seem counterintuitive at first. Second, we give an explanation of the underlying physical effects and discuss how we can make them observable in the spectra. The results are summarized in table \ref{tab:results}. Here, we report the MLTFs suggested by TS-SOO and TS-EMO that provided the highest power in $\pm25^\circ$ while exhibiting a color point deviation lower than $0.002$ points --- an acceptable deviation for most consumer products in practice. As mentioned, a joint local optimization of both thicknesses and conversion material weight percentages is conducted after the global Bayesian optimization. Unsurprisingly, the total power of the white LED decreases for all considered MLTFs due to absorption losses. However, more power is available at particular forward angles of interest. This directionality increase is of value for applications like automotive headlamps or projection, where non-forward light does not contribute.
\begin{figure}[!t]
\vspace{-0.5cm}
\includegraphics[width=0.99\textwidth]{pictures/spectra.png}
\caption{(left) Common angular and spectral distribution of relative power of a white LED without a multi-layer thin film. (right) Spectrum of a white LED with a multi-layer thin film that contains $28.9 \%$ more relative power in forward direction compared to the reference design. In each case, the blue chip wavelength ($440\text{ [nm]}$) and the green and red wavelengths ($555\text{ [nm]}$ and $600\text{ [nm]}$) of the conversion materials are indicated by vertical dashed lines. The horizontal grey line indicates the $\pm 25^\circ$-forward direction. The color map for the heat maps coincides with figure \ref{fig:photonpingpong} (right).}
\label{fig:spectra}
\end{figure}
\begin{table*}[!b]
\centering
\footnotesize
\makebox[\linewidth]{
\begin{tabular}{ | P{1.5cm} | P{2cm} || P{2cm} || P{2cm} | P{2cm} | }
\hline
& $\alpha$ \textbf{ } $[^\circ]$ & $25$ & $45$ & $90$\\
\hline
\hline
$P^\alpha \text{ [W]}$ & \pbox{25cm}{\vspace{5pt} Reference \\ TS-SOO \\ TS-EMO \vspace{5pt}} &
\pbox{20cm}{$0.201$ \\ $\mathbf{0.259}$ \\ $0.210$} & \pbox{20cm}{$0.559$ \\ $0.640$\\ $0.569$} & \pbox{20cm}{$1.052$ \\ $0.985$ \\ $0.978$} \\
\hline
$d^\alpha$ \textbf{ } $[1]$ & \pbox{20cm}{ \vspace{5pt} Reference \\ TS-SOO \\ TS-EMO \vspace{5pt}} &
\pbox{20cm}{$0.0015$ \\ $\mathbf{0.0007}$ \\ $0.0025$} & \pbox{20cm}{$0.0071$ \\ $0.0230$ \\ $0.0262$} & \pbox{20cm}{$0.0146$ \\ $0.0275$ \\ $0.0281$} \\
\hline
\end{tabular}
}
\caption{Optimization results regarding color point deviation $d^\alpha$ and power $P^\alpha$ of algorithm \ref{alg:hierarchical_optimizer} (TS-SOO) and of TS-EMO for $\alpha = 25^\circ$, after local refinement. The designs featuring a multi-layer thin film are compared with the reference design, for which only a color point optimization is conducted. In addition to the direct optimization objectives regarding $\pm 25^\circ$, we also report the color point deviations and powers for $\pm 45^\circ$ and $\pm 90^\circ$ observed for the respective optimized MLTFs.}
\label{tab:results}
\end{table*}
\subsection{Increase of directionality}
The reference design provides $P^\alpha\left(\mathbf{w},\mathbf{t}=\mathbf{0}\right)=0.201 \text{ [W]}$ after adapting the weight percentages $\mathbf{w}$, while the deviation between the LED's and the target color point is given by $d^\alpha \left( \mathbf{w}, \mathbf{t} = \mathbf{0} \right) = 0.00151 < 0.002$. After optimizing an MLTF's thickness vector $\mathbf{t}$ using algorithm \ref{alg:hierarchical_optimizer}, the forward power of the LED is increased by $25.4 \%$ to $P^\alpha \left( \mathbf{w}, \mathbf{t} \neq \mathbf{0} \right) = 0.252 \text{ [W]}$. The Euclidean color point deviation for this design is given by $0.0014< 0.002$. Notably, using TS-EMO to solve the corresponding original multi-objective optimization problem \eqref{eq:joint} achieved only about $5\%$ more forward power while keeping the color point deviation below the required value of $0.002$. This is not surprising, as TS-EMO is not recommended for more than eight parameters\cite{Bradford2018}. As illustrated in the rightmost plot of figure \ref{fig:results}, the local refinement increased the power in forward direction by an additional $3.5 \%$ relative to the reference design, yielding $0.259 \text{ [W]}$, or $28.9 \%$ more light compared to the reference design, while maintaining an acceptable color point deviation of $0.0007 < 0.002$. Notably, when starting a local refinement from the joint parameters provided by the TS-EMO implementation, either the color point deviation could not be reduced below $0.002$ or the forward power increase remained below $10.0\%$. Due to the high computational effort associated with ray tracing simulations, we restarted each optimization procedure (based on TS-SOO and TS-EMO) only three times for $48\text{ [hours]}$ each. The reported data corresponds to the best run regarding the objective function for TS-SOO and TS-EMO, respectively.
Because the relative and absolute trends of the results of these experiments appeared to be consistent and showed no unexpected anomalies, they were not evaluated in further detail. The Pareto fronts illustrated in figure \ref{fig:results} plot the (forward) power against the color point deviation for both joint optimization using TS-EMO and hierarchical optimization using TS-SOO. As the time horizon was fixed, the different numbers of samples are explained by the implementation of the joint ($719$ samples) and hierarchical ($143$ samples) optimization approach: For each thickness vector $\mathbf{t}$ suggested by TS-SOO, solving the nested color point optimization in problem \eqref{eq:problem} based on ray tracing simulations takes about $20 \text{ [min]}$. In contrast, the evaluation of a joint parameter vector $\left(\mathbf{t}, \mathbf{w}\right)$ requires only one ray tracing simulation and thus takes about $4\text{ [min]}$. The comparison between the left and middle Pareto fronts in figure \ref{fig:results} indicates that TS-SOO circumvents MLTFs that lead to high color point deviations, as those are punished via the multiplicative weights \eqref{eq:weighting}. In contrast, TS-EMO is not implicitly informed about the engineering structure of the problem via the objective function \eqref{eq:joint}. Namely, an increase in forward power at the cost of color point deviation is of no value for specific LED applications. Therefore, most MLTFs suggested by TS-EMO indeed achieve the same or even higher forward power values compared to TS-SOO, but come with an unacceptable color point deviation well above $0.005$.
\begin{figure}[!t]
\vspace{-0.5cm}
\includegraphics[width=0.99\textwidth]{pictures/pingpong.png}
\caption{(left) Transmission of a multi-layer thin film that yields $28.9 \%$ more light in forward direction compared to the reference design. (right) The hypothetical, effective spectrum of light that exits the conversion system and irradiates the multi-layer thin film. In both cases, the vertical dashed lines indicate the wavelengths contributing to the LED spectrum and the horizontal lines denote the separation between forward and non-forward direction.}
\label{fig:photonpingpong}
\end{figure}
\subsection{Ping pong with light rays}
The results of the previous section allow us to deduce an explanation of the mechanism behind white light directionality using an MLTF. We can infer a hypothetical effective spectrum that irradiates the MLTF by dividing, pixel-wise, the observed spectrum in figure~\ref{fig:spectra} (right) by the spectral and directional transmission of the optimized MLTF in figure \ref{fig:photonpingpong} (left). This hypothetical spectrum is illustrated in figure \ref{fig:photonpingpong} (right) and indicates how much relative power must impinge on the MLTF from the conversion system for each angle and wavelength in order to explain the physically detected spectrum that irradiates the ambient air after traversing the MLTF. The transmission describes the probability of a light ray to pass the MLTF, depending on its wavelength and angle of incidence. It was computed with a parallelized version of the TMM package\cite{Byrnes2020} provided by Luce et al.\cite{Luce2021}. Here, the light-injecting substrate (the conversion system) is represented as silicone of infinite thickness, and air of infinite thickness defines the ambient environment. Aside from the relative power peak around the chip wavelength of $440 \text{ [nm]}$, two further contributions of relative power at $555 \text{ [nm]}$ and $600 \text{ [nm]}$ appear as a single broad peak in the spectra of figure \ref{fig:spectra}. These green and red wavelength contributions correspond to the emissions of the conversion materials. As expected, the transmission of the MLTF at all of these wavelengths is low for unfavorably large beam angles. Thus, non-forward light rays are trapped in the LED package until physical interactions change their directional properties such that they are likely to escape, or until the rays are lost to non-radiative thermal effects. We suppose that this phenomenon causes the directionality enhancement and refer to it as \textit{ray ping pong}.
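The pixel-wise division used to infer the effective spectrum can be sketched as follows; the small angle $\times$ wavelength grids are hypothetical placeholders for the full spectra, and a floor value guards against division by near-zero transmission.

```python
EPS = 1e-6  # guard against division by near-zero transmission values

def effective_spectrum(spectrum, transmission):
    """Pixel-wise division S / T over an (angle x wavelength) grid: the
    hypothetical relative power that must impinge on the MLTF from the
    conversion system to explain the spectrum detected behind the MLTF."""
    return [[s / max(t, EPS) for s, t in zip(s_row, t_row)]
            for s_row, t_row in zip(spectrum, transmission)]

# Hypothetical 2x2 grids (rows: angles, columns: wavelengths):
S = [[0.08, 0.30], [0.02, 0.05]]   # observed relative power after the MLTF
T = [[0.80, 0.95], [0.10, 0.25]]   # MLTF transmission probability per pixel
E = effective_spectrum(S, T)       # inferred spectrum irradiating the MLTF
```

Low-transmission pixels (e.g. large angles) require a disproportionately high impinging power, which is the signature of rays bouncing inside the package.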
Here, the MLTF on top of the conversion system directly influences the emitted spectrum of the LED over angle and wavelength. The MLTF not only functions as an angle-selective filter that reflects non-forward light rays back into the LED package, but also balances the statistics of emitted blue, green and red rays so that they appear as white light of a specific color. In other words, the MLTF exploits the process of ray ping pong to reach a statistical equilibrium of emitted rays that is advantageous regarding directionality and color point of the outcoupled light. Notably, the characteristic transmission pattern of an MLTF over wavelength and angle of incidence (see figure \ref{fig:photonpingpong}) depends not only on the layer thicknesses, but also on the constituent materials. Studying table \ref{tab:results} suggests that the MLTF decreases the total efficacy of a white LED corresponding to $\alpha = 90^\circ$: Rays are trapped in the LED package and are thus more likely to further interact with the LED materials instead of escaping into the ambient air. Such interactions may include photon recycling or scattering, but also non-radiative effects like thermal losses of rays absorbed by the conversion materials. As mentioned, for applications like headlamps only power emitted in forward direction $\left( \alpha \ll 90^\circ \right)$ is usable. In this case, any increase in directionality obviously outweighs a (moderate) drop in global efficacy.
\newpage
\subsection{Limits of directionality increase}
\begin{wrapfigure}{R}{0.6\textwidth}
\vspace{-1.5cm}
\begin{center}
\includegraphics[width=0.59\textwidth]{pictures/intensity.png}
\end{center}
\caption{The normalized intensity of an LED with and without a multi-layer thin film (MLTF) is denoted on the left axis. The sine-weighted integral over the intensity difference (grey area) --- the so-called \textit{cumulated intensity difference} --- on the right axis indicates that a directionality increase can be achieved up to $58.4^\circ$. However, the peak of the directionality increase is observed between $35^\circ$ and $40^\circ$.}
\label{fig:intensity}
\end{wrapfigure}
The previous section helps us understand the limits of our approach. Although no explicit optimizations for angles other than the application-specific $25^\circ$ were conducted for this work, our investigations regarding the cumulated intensity difference of figure \ref{fig:intensity} indicate that an increase in forward power is possible up to almost $60^\circ$. Namely, for angles $\alpha>58.4^\circ$, the MLTF would need to re-direct light rays of unsuitable angles between $\alpha$ and $90^\circ$. Due to the Lambertian radiation characteristics of LEDs, the quantity of such rays may be too low to outweigh non-radiative effects like thermal losses, which are statistically enforced by the MLTF for each ray. Thus, the deposition of an MLTF may not be suitable for applications that leverage light with a broad range of emission angles, here larger than $58.4^\circ$.
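The cumulated intensity difference underlying figure \ref{fig:intensity} can be sketched numerically as a sine-weighted integral; the two intensity profiles below (a Lambertian reference and a forward-peaked MLTF profile) are purely illustrative assumptions, not the simulated data.

```python
import math

def cum_intensity_diff(i_mltf, i_ref, alpha_deg, steps=1000):
    """Sine-weighted integral of (I_mltf - I_ref) over [0, alpha], midpoint
    rule: positive values mean more power inside the cone of half-angle alpha."""
    a = math.radians(alpha_deg)
    h = a / steps
    return sum((i_mltf(th) - i_ref(th)) * math.sin(th) * h
               for th in ((k + 0.5) * h for k in range(steps)))

i_ref = math.cos                               # Lambertian reference intensity
i_mltf = lambda th: 1.3 * math.cos(th) ** 2    # hypothetical forward-peaked profile
```

For these toy profiles the cumulated difference is positive for small cones but turns negative at $90^\circ$, mirroring the observation that the MLTF trades total efficacy for forward directionality.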
\section{Conclusions}
Increasing the (forward) power of a white LED while maintaining its color point is a hierarchical multi-objective optimization problem: The Stokes shift renders these objectives competing by nature, because a naive increase in power may turn the emitted light bluish instead of pure white. Since the color point is a strict requirement for many applications, a higher forward power of an LED at the cost of color changes is unacceptable. In this work, we address the competition between color point and power with a Bayesian optimization approach that optimizes a physics-guided, weighted objective function. Hereby, the weights implement hard constraints in a continuous fashion and thus preserve approximability via Gaussian processes. The reported results indicate that multi-layer thin films on top of white LEDs can increase the light directionality. The deposition of such multi-layer thin films implies only low additional expenditure and is directly applicable to the mass production of many optical semiconductors. Our analyses reveal that a carefully designed multi-layer thin film functions as an angle- and wavelength-sensitive filter: The filter statistically balances emitted rays of different wavelengths to meet the color point. In addition, it traps rays that would exit the LED at large angles in order to implicitly enforce their forward (re-)emission. To summarize, the proposed objective function guides the optimization towards a multi-layer thin film that leverages statistical ray ping pong to enforce favorable properties of the spectrum radiated by the LED. Thus, we shine a light on the previously enigmatic effect causing the counterintuitive increase of white light directionality using a multi-layer thin film.
\section*{Material, Data, and Code Availability}
The optical models and related (commercial) software that support the findings of this study are available from OSRAM Opto Semiconductors GmbH but restrictions apply to the availability of these items, which were used under license for the current study, and so are not publicly available. The generated and analysed data as well as any code-related information is included in this manuscript or already published by third parties.
\section{\texorpdfstring{\sectionpadding{1.5 em}}{ } Literature: a rapid survey}
\label{App:Literature}
The present Appendix contains an outline of some
literature related in various ways
to the theory of homogeneous structures
and some natural generalizations.\footnote{With relatively few exceptions, the cut-off for the entries was 2016.}
A small portion of this bears directly
on the classification of
countable homogeneous structures of various types;
the bulk of the literature cited deals with other issues. Of course,
the examples constructed by the \Fraisse method
(and, on occasion, uncovered in the course of a systematic
classification), can serve as the basis for useful case studies
of the study of the associated automorphism groups,
Ramsey properties, reducts, and so forth. And given the breadth
of some unresolved conjectures in the area, it would be good
to have more such examples.
A more focused bibliography accompanies the comprehensive
survey in \cite{Mcp-SHS}.
The topics have been organized as follows.
\begin{enumerate}[(I)]
\item Homogeneous structures:
\begin{enumerate}[(A)]
\item Countable
\item Uncountable
\end{enumerate}
\item Their Automorphism groups:
\begin{enumerate}[(A)]
\item Algebraic properties
\item Dynamical properties
\item Reconstruction
\item Endomorphisms
\end{enumerate}
\item Generalizations of homogeneity
\begin{enumerate}[(A)]
\item $\aleph_0$-categoricity
\item Transitivity conditions
\item Metric geometry
\item Homomorphism homogeneity
\item Continuous or projective \Fraisse theory
\item Hrushovski amalgamation
\item Random Structures
\item Universality
\end{enumerate}
\end{enumerate}
The boundaries of the subject are very fluid.
Any countable structure can in principle be constructed
by \Fraisse's method---but some structures
are most naturally constructed that way,
and for a number of others this approach is a useful
complement to other points of view.
\bigskip
\begin{enumerate}[(I)]
\item Homogeneous structures:
\begin{enumerate}[(A)]
\item Countable
\begin{enumerate}[(1)]
\item
{\it General theory:}\\
\
(\cite{BaZ-AC, Cam-OPG, Cam-ARS, Cam-IPG, Che-CP,
Fra-RGQ, Fra-PCO, Fra-Ama, Fra-TR1, Fra-TR2,
Hod-MT, Hod-SMT, Jon-URS, Jon-HURS, Mcp-SHS,
Sau-TRA, Tar-MRG}, \cite{TeZ-CMT, Wie-PG, Wie-PGIR}),
cf.~\pcite{Bon-CCnEC, Cam-Tr, Che-CP, CsHMM-CFL,
CuP-AR, DePSS-IRS, DrG-UD, DrG-UDA, Eng-GGT, RoW-US,
Wie-ATPG, Wie-PR};\\
{\it amalgamation bases:} \\
(\cite{AlB-BOAB},
\cite{Bac-SA, Bac-AIET, BaH-SCA,
Bel-ACAS, Ber-ACDV, Ber-ACMLV, Ber-ABCG, BlG-ROA,
BlG-ABLG, BuFM-FAS, BuFN-APOM, Che-ABCR, Ekl-ACSA,
Fle-AFMAB, GlSW-NOG, Hal-AIS, Hal-ISVAP, Hal-REAS,
Hal-AIRS, Hal-GISA, Hal-FISA, Hal-AGIS, Hal-RSA, HaP-ABFS,
HaS-FBAB, How-ETAS, How-EAS, ImH-AGI*, Lei-ALFp,
Mag-ANG2, Mai-ANG2, Mai-ALFG, Nas-APOM, Ren-EAMS,
Ren-EAR, Ren-FAM, Ren-PAB, Ren-SAM, Sar-ABn2,
Say-ABCA, Say-AUAL, Sho-SAB, Sho-CSAB1, Sho-ABS,
Sho-BSAB, Sho-ABR, Sho-NBAB, Sho-REABR, Sho-RECNB,
Sho-CNBAB, Sho-AFRS, Sho-FISAB, Sho-DREFS, Sho-RSAB,
Sho-FRSAB, Sho-RSABF}).
\item {\it Constructions:} \\
{\it graphs}
\pcite{ErR-AG, Rad-UG, Hen-HG, Cam-OGHG},
cf.~\pcite{Nes-AGA}; \\
{\it posets, semi-lattices} \pcite{AlB-ECPO};\\
{\it directed graphs}
\pcite{Hen-2A0, PaPPS-OPO};\\
{\it unbalanced digraphs}
\pcite{EmE-PUD};\\
{\it $n$-fold linear orders} \pcite{Bra-HPS};\\
{\it ultrametric spaces}
\pcite{Bog-MHS};\\
{\it generalized ultrametric spaces}
\pcite{Bra-HPS};\\
{\it metrically homogeneous graphs }
\pcite{Che-2P, DaM-JDE, Mos-EUG, Mos-UGD, Mos-DG},
cf.~\pcite{Bon-MUG, How-GUIE};\\
{\it Urysohn space} \pcite{Ury25, Ury27a, Ury27b}, cf.~\pcite{HuN-FPUS, Kat-UMS};\\
{\it generalized polygons }
\pcite{Ten-FP}, cf.~\pcite{Ten-STMT};\\
{\it metric spaces}
(\cite{Ury25, Ury27a, Ury27b, CaT-LC},
\cite{Hus-UUS,
LePRSU-WUS});\\
{\it linear, semi-linear spaces, and Steiner systems }
\pcite{Cam-ILS, Dev-HUS, Dev-USS, Dev-FUSS, DeD-HULS,
Tho-3TSTS, Tre-ISTS};\\
{\it nilpotent groups and rings} \pcite{Bau-FLN, ChSW-HGR};
\\
{\it groups and lattices}
\pcite{AbT-CHL, Hal-CLFG, Hic-HUG, MaS-ULFG,
Tho-CLF, Tho-CLFL};\\
{\it algebras} \pcite{Gol-TCA};\\
{\it domains, event structures, causal sets}
\pcite{BoCD-UHSD, Dro-UHES, Dro-FAUD, Dro-UCS, DrG-UDDS,
DrG-UIS}\\
{\it Chu spaces} \pcite{DrZ-BCS};\\
{\it semigroups} \pcite{Ash-ETAB};\\
{\it finite number of countable models }
\pcite{Tan-TC3M, Woo-FCM1, Woo-FCM2};\\
{\it cofinitary permutation groups} \pcite{Ade-TFRH, Cam-PRF, Cam-CPG, Cam-MTSG, Cam-CPGb, Cam-ACPG};\\
{\it probabilistic} \pcite{AcFP-IM, AcFNP-IM, AlS-PM,
DoM-PGCL, ErS-PMC},
cf.~\pcite{ErKR-EKnF, PeV-IM, PrSS-KnF,
PrT-RGPO}.
\item
{\it Classification:} \\
{\it finite or stable}
\pcite{ChL-SFHS, KnL-SSC, Lac-SHS, Lac-BHS1a, Lac-BH1b, Lac-SH3, Lac-FHD,
Lac-BH2, Lac-HS, Lac-SFH,
LaS-SHB},
cf.~\pcite{Che-SHS};
{\it finite binary primitive}
\pcite{Che-PBPG, DaGS-BPS, GiS-BPAC, GiHS-BPR1, GiLS-BPL, Wis-PBPG};
{\it graphs} \pcite{Gar-HG, LaW-HG, She-SES, She-HG, Woo-4UGT},
cf.~\pcite{Eno-CHG, Gar-HGS, Gar-HCG};
{\it partitioned graphs} \pcite{Ros-H2G}; \index{graph!partitioned}
{\it multipartite edge-colored graphs} \pcite{JeST-HMG, LoT-HCMG};
{\it 3-edge colored complete graph }\pcite{Ara-HS3G, Che-IH3, Lac-SH3, Tri-FH3G};
{\it tournaments} \pcite{Lac-HT, Che-HTR};
{\it coloured partial orders} \pcite{Sch-HPO, ToT-HCPO};
{\it tournaments with a vertex coloring} \pcite{Che-HDG};
{\it directed graphs}
\pcite{Che-HDG1, Che-HDG2, Che-HDG, Lac-FHD};
{\it bipartite digraphs with partition} \pcite{Ham-2PD};
{\it hypergraphs} \pcite{LaT-FH3G};
{\it partial orders with vertex coloring} \pcite{ToT-HCPO};
{\it permutation structures} \pcite{Cam-HP};
{\it linear extensions of partial orders} \pcite{DoM-HLPO};
{\it ordered graphs (this monograph, part I);}
{\it finite or locally finite metrically homogeneous graphs}
\pcite{Cam-6TG, HeP-LFHG, Mcp-DTF} (using \pcite{Dun-CUG}),
cf.~\pcite{Gar-ATG1, Gar-ATG2, Gar-ATG3, Iva-DDRG};
{\it infinite metrically homogeneous graphs }
\pcite{AmCM-MH3, Che-2P};
{\it finite homogeneous 3-hypergraphs}
\pcite{AkL-H3H, LaT-FH3G, Tri-FH3G};
{\it homogeneous 3-hypergraphs with one constraint }
\pcite{AkL-H3H};
{\it rings} \pcite{Ber-EQRp, BeC-QERp, BeC-QERpn, BoMP-QESS,
Sar-ECR2, SaW-QEN, SaW-QEp2, SaW-FHRO, SaW-FQE4,
SaW-HFR2n};
{\it finite or solvable groups} \pcite{ChF-QEG, ChF-HSG, ChF-HFG};
{\it unary algebras} \pcite{Wea-HUDA}.
\item
{\it Connections with computer science:}\\
\label{Item:ConnectionsCS}
{\it cores}
\pcite{Bau-CLP, Bau-CCDG, Bau-FEID, Bod-CCC1, Bod-CCC2, BoP-GBG};
\\
{\it oligomorphic clones} \pcite{BoC-OC, BoJ-CCEI, PeP-PHRS};
\\
{\it constraint satisfaction:}
\pcite{Bod-CCS, BoCKO-MCL, BoHM-UACS, BoKM-CSH, BoN-CSCH1, BoN-CSCH2, BoK-CTCS1, BoK-CTCS2,
BoPT-DD1, BoPT-DD2}
\item {\it Model theoretic properties:}\\
{\it generalized metric spaces} \pcite{Con-DGMS, Con-NHMS};
{\it homogeneous 3-edge colored complete graphs with
simple theory}
\pcite{Ara-HS3G};
{\it simplicity and supersimplicity} \pcite{AhK-SHS, Ara-OCST,
DeK-G1B, Kop-BP1B, Kop-BSH, Kru-PCC, Pal-GAH};
{\it independence relations} \pcite{Con-SIR};
{\it no binary homogeneous pseudo-plane }
\pcite{Tho-BHP};
{\it finite axiomatizability} \pcite{Lip-FACC, Lip-CCMH, Mcp-FAAC};
{\it definable groups} \pcite{Mcp-IG};
{\it strongly determined types} \pcite{Iva-SDGC}.
\item {\it Finite approximation and 0-1 laws:}\\
\pcite{BlH-AAG};
\item {\it Homogenizability:}\\
{\it relational complexity of finite structures }
\pcite{Che-PBPG, ChMS-APG, HaHN-CRS},
cf.~\pcite{KaK-MSS, KaK-IPG, Sar-WP12, Sar-WP3, Wis-PBPG,
XuGLP-IRAC};
{\it relational complexity of infinite structures }
\pcite{Cov-UNFG, Cov-HRS, CoT-HFP, HaHN-BRC},
cf.~\pcite{Neu-CSTP};
\item {\it Applications of Ramsey theory:}\\
{\it reducts }
\pcite{Ben-RBH, JuZ-RQ0, BoP-RRS, BoP-MFRG, BoCP-EPP,
BoPP-RROG, LiP-RGP, Lu-RCCG, PaPPPS-RRPO, Tho-RRG,
Tho-RRH}, \\
cf.~\pcite{CaLPTW-OARG, Hun-TO, JuZ-RQ0};\\
{\it decidability of positive primitive definability}
\pcite{BoPT-DD1};\\
{\it analysis} \pcite{ArT-RMA}.
\item {\it ``Going forth''} \pcite{McL-SGF, McL-FPBF, Vil-HFGF}.
\end{enumerate}
\item {\it Uncountable:}\\
{\it universal locally finite groups}
\pcite{Hic-CULF};
{\it $n$-cardinal spectra} \pcite{Ack-nCS}.
\end{enumerate}
\newpage
\item Their Automorphism groups:
\begin{enumerate}[(A)]
\item Algebraic properties
\begin{enumerate}[(1)]
\item
{\it Normal subgroups and quotients:}\\
{\it simplicity} \pcite{EvHKL-SGMS, McT-SAG, Tru-IPG1, Tru-SPG, Tru-4G3B};\\
{\it O'Nan-Scott} \pcite{McP-IONS};\\
{\it symmetric group} \pcite{AlCM-AQS, Bae-KGAM, Ono-TSI1, Ono-TSI2, Ono-TSI3},
cf.~\pcite{Ber-CIC, Ber-ECC, Ber-SU, BoF-NSISG, DrG-PCP,
ShT-QSG};\\
{\it $m$-edge colored random graph} \pcite{CaT-AmRG};\\
{\it linear or semilinear orders}
\pcite{BaD-NSDT, BlDG-AGOS, BlG-FPOG, BlGGS-AGM,
DrHM-AHSO, DrKT-HSL, GiT-QOPG, GiT-ROS};\\
{\it partial orders} \pcite{GlMR-AGHPO};\\
{\it cycle-free partial orders} \pcite{DrTW-SGCF},
\\cf.~\pcite{Tru-BCF};\\
{\it trees} \pcite{MoV-AGG};\\
{\it distributive lattices} \pcite{DrM-AUDL};\\
{\it rational topology }\pcite{Tru-CHRW};\\
{\it Urysohn space} \pcite{TeZ-IUS, TeZ-IBU};\\
{\it linear groups} \pcite{Ros-IGL};\\
{\it free amalgamation classes} \pcite{McT-SAG};\\
{\it homeomorphisms} \pcite{And-SGH};\\
{\it multiply transitive actions} \pcite{Cam-NSMT}.
\item
{\it Maximal subgroups:} \\
{\it symmetric group }
\pcite{Bal-MSSG, Bal-ISSG, BaST-MSSG, BrCPPW-MSSG,
CoMM-MISG, CoM-SSGF, McP-MSS, Ric-MSSG}, \\
cf.~\pcite{BeS-CSSG};
\item {\it Small index property:}\\
{\it general theory} \pcite{Eva-AGIS, Las-PPI, Tru-IPG2};\\
{\it symmetric group} \pcite{DiNT-SISG, Gau-IISG, ScU-PNZ,
ShT-SISG, Tho-ISG};\\
{\it the random graph} \pcite{Cam-SSIP, HoHLS-SISC, Hru-EPA};\\
{\it Henson graphs} \pcite{Her-EPI, Sol-HLE};\\
{\it linear orders} \pcite{ChT-SI1T, DrT-SIOPG, GlM-AG2H};\\
{\it trees} \pcite{Mol-AGRT, Tru-CFPO};\\
{\it linear groups} \pcite{Eva-SGL, Eva-SIC};\\
{\it relatively free groups} \pcite{BrE-SIF};\\
{\it $\aleph_0$-categorical structures }
\pcite{Her-PISI, HoHLS-SISC};\\
{\it saturated structures} \pcite{Las-ARS, Las-ASS, MeS-SSSI},
cf.~\pcite{Las-AFM, Las-SIPA, LaS-USSI}.
\item {\it Cofinality:}\\
\pcite{DrG-HCD, DrG-CPG, DrHU-ETUC,
DrT-CAGO, DrT-UCDL, Gou-GAQ, McN-SISG, MiS-CUSG,
ShT-UP, ShT-QCS, ShT-UF, ShT-CSSG, Tho-IDCG, Tho-CIPG,
Tho-GDSG}.
\item {\it Bergman property:}\\
\pcite{deC-SBG};\\
{\it symmetric group} \pcite{Ber-GSG};\\
{\it automorphisms of linear orders} \pcite{DrH-AGC,Mor-1TLO}.
\item {\it Representing words}
\pcite{DoM-RIP, DrT-RWRG, Lyn-WIP, Myc-RIP}.
\item {\it Free subgroups}
\pcite{GaK-UFS}.
\item {\it Embedding theorems:}\\
\pcite{Ade-EIPG, BhM-PRF, BhM-LFRG, Bil-GASH, BiJ-RM,
BoDD-MRG, HaKO-SI, Jal-MRT, JaK-RTCT, Mcp-ACC, Mcp-RST,
McW-PGM, Mek-GEAQ, Mel-SCS, Neu-ARW, Nie-USMG,
Nie-UVAG, Tru-EIPG, Usp-IUS},
cf.~\pcite{Dou-AGUS, Huh-UUMS, MbP-SIUS,
MeSST-RGRW, Mel-GUS, MeS-DFSG, ShT-ISSG}.
\item {\it Regular actions:}\\
(\cite{Cam-HCO, CaJ-BG},
\\
\cite{CaV-IGU, Dou-GSU}).
\item {\it Orbit growth or profile:}\\
\pcite{ApC-OnT, BuH-ELW, Cam-OUS, Cam-OC, Cam-OE, Cam-EMT, Cam-AA, Cam-SOPG, Cam-CP, CaS-PUS,
CaT-GUS, EnK-RNF, HaR-POH, Mcp-US, Mcp-GRPG,
Mcp-SIPG, Mcp-PGRG, Mer-OnU, Mer-OGOE, Mil-IAG, MnS-LWT,
Nak-GLWT, Pou-RFR, Pou-OAID, Pou-TR, Rob-TLW, Smi-SGR,
Vat-SPC, Vat-PC, Was-CACCT},
cf.~\pcite{Pro-CUS};
\item {\it Cycle structure:} \\
{\it primitive groups (esp.~Jordan groups)} \pcite{McP-CT};\\
{\it graphs} \pcite{LoT-CT, Tru-GUG},
cf.~\pcite{Tru-AAUG};\\
{\it finitary elements} \pcite{BiM-FOAG, Iva-FPT};\\
{\it Parker vectors} \pcite{GeM-PVIG, GeM-PVOG, GeM-CAT}.
\item {\it Isomorphisms up to language} \pcite{CaT-AmRG, Cou-MHG};
\item {\it First order theories}\\
\pcite{GiGT-UAG, Gla-ALO, GlGHJ-2HC, GlMR-AGHPO,
Kni-AGGP, RuS-AGBA}.
\end{enumerate}
\goodbreak
\item Dynamical properties
\begin{enumerate}[(1)]
\item {\it General theory:}\\
\pcite{Aus-MFE, Ell-TD}.
\item {\it Universal minimal flow:}\\
\pcite{Bar-MFUS, Ell-UMS, KePT-UMF, KeS-AGRP,
MeNT-MUMF, Usp-MCG}.
\item {\it Extreme amenability (and relation to Ramsey theory):}\\
\pcite{Mit-FPM, Usp-EAH};\\
{\it concentration of measure} \pcite{GrM-TAII, Led-CMP, Pes-RMEA};\\
{\it Ramsey theory and dynamics}
\pcite{FaS-RTLG, KePT-RTTD, Moo-ART, MuP-TDUR, Pes-UCTD,
Pes-TGWT, Pes-AC, Pes-mmS, Pes-DIG, Tsa-Hab}\\
cf.~\pcite{Vuk-PRG};\\
{\it oligomorphic groups}
\pcite{Gla-UMSS, GlW-MAGS, GlW-UMGH, Pes-FAMF};\\
{\it homeomorphism groups} \pcite{GlG-UMhH, GlG-MHA};\\
{\it operator algebras} \pcite{GiP-EAG, GiP-OAET},
cf.~\pcite{Gla-AmRP};\\
{\it metrizable universal flows }
\pcite{BeMT-MUMF, MeNT-MUMF};\\
{\it $\aleph_0$-categorical linear orders }
\pcite{DoGMR-CCLO};
{\it $L^0$} \pcite{FaS-RTLG};\\
{\it generic abelian isometry groups }\pcite{MeT-RAG};\\
{\it precompact expansions} \pcite{Ngu-PCE}.
\item {\it Ramsey theory:} \\
\pcite{GrRS-RT1, GrRS-RT2, GrRS-RT2p, Hub-BRD, Nes-RT, Nes-RCH,
NeO-Sp, Ngu-UFREA, Ngu-SRT, Sol-SDR},
cf.~\pcite{Tod-IRS};\\
{\it Ramsey degree in general }
\pcite{Fou-SRT, Fou-SRD, Fou-AR};\\
{\it monotone classes} \pcite{Nes-RCH};\\
{\it canonical partitions} \pcite{LaSV-CPUS, Lar-RTBH};\\
{\it convex equivalence relations} \pcite{Sok-RQ};\\
{\it homogeneous graphs, generalizations}
(\cite{AbH-MWI, Bod-NRC, Deu-PTG,
Fol-MCS, Hen-EPP, Nes-RCG}
\\
\cite{NeR-PSG, NeR-RTF,
NeR-TPP, NeR-FCS, NeR-PFR, NeR-SRT, NeR-RTS, NeR-RTH,
NeR-RGP, NeR-2RTH, NeR-RCSS, NeR-PRSS, NeR-RON,
NeR-PCRS, PoS-EPRG, PrV-PTPS, Sau-EPTF, Sau-EHP}, \\
cf.~\pcite{RoSZ-RFEG, Sau-RFG, Sol-DRT, Spe-EPPT, Spe-RTRT});\\
{\it bipartite graphs} \pcite{FoPS-RDBG};\\
{\it $n$-colorable graphs} \pcite{Ngu-RTnCG};\\
{\it bowtie-free graphs} \pcite{HuN-BF};
\\
{\it trees}
\pcite{Deu-GRT, Deu-RTRT, DePV-CRT, Fou-SRPT, Mil-RRT,
Mil-RTT, Mil-PTIS, Sol-ART};\\
{\it local order} \pcite{LaNS-PDLO};\\
{\it directed graphs} \pcite{JaLNW-RHDG};\\
{\it partial orders} \pcite{Fou-RDPO, Fou-CRPO, NeR-CPPL, NeR-RPO, PaTW-GO, Sok-RPFP,
Sok-RPFP2},
cf.~\pcite{HuN-FPH}; \\
{\it boron trees} \pcite{Jas-HRP, Jas-RBT};\\
{\it metric spaces} \pcite{DiP-MR, ErGMRSS-ERT2,
ErGMRSS-ERT3, Jas-HRP, Nes-RTM, Nes-MSR, Ngu-UUS,
Ngu-SRTD};\\
{\it matroids} \pcite{NePT-AMA};\\
{\it vector space} \pcite{LaNPS-PIVS, GrLR-RTC1, GrLR-RTC2,
Spe-RTS};\\
{\it affine space} \pcite{NePRV-COT};\\
{\it inner product spaces} \pcite{Jas-HRP};
\\
{\it Steiner systems} \pcite{BhNRR-RSS};\\
{\it cubes} \pcite{NePRV-COHJ1, NePRV-COHJ2, Pro-NOCC,
Sol-RTS};
\\
{\it indivisibility}
\pcite{DeLPS-IUM, ElZS-IKn, ElZS-RRS, ElZS-DHDG,
ElZS-DHH, ElZS-GVP, ErHP-SEG, KoR-CUG,
LoAN-OSUS, Mel-GDUS, Ngu-BRD, NgS-USOS, NgS-WIMS,
Pou-RI, Sau-VPP, Sau-CVP, Sau-AWI},\\
cf.~\pcite{ErH-FIC, ImKT-DIG, ImSTW-2DGG, Sau-EHP,
Sau-FSCS, Sau-CSRG, SaWR-CRT, WaZ-DLFT};
\\
{\it pigeonhole property} \pcite{BoCD-TOPP, BoD-PHP};\\
{\it inexhaustible structures}
\\
\pcite{BoD-IG, BoSZ-IHS};
{\it affine and projective space}
\pcite{DeV-PAPR};\\
cf.~\pcite{GrRS-RT1, GrRS-RT2, GrRS-RT2p};
cf.~\pcite{LaNS-NHRS}.
\item {\it Amenability and unique ergodicity:}\\
{\it equivalence relations} \pcite{Iva-AAG, PaS-UEHDG}.\\
{\it graphs and digraphs} \pcite{AnKL-UEAG, Zuc-AUE}.
\item {\it EPPA (Hrushovski property), ample generics, generic automorphisms:}\\
\pcite{Con-HMS, Con-PIGMS, EvHKN-2G_AM, Her-EPI, HeL-EPA, Hod-LGFMP, HoO-FCHC, Iva-GE,
Iva-AHS, Iva-SBAG, Iva-GSDO, KeR-TAGA, KuT-GAPO,
LoT-GEHS, McT-CCC, Ros-GI, Ros-FAGA1, Ros-FAGA2},
cf.~\pcite{Pes-HSV, PeU-RFG, Slu-NGP, Sol-EPI, Tru-GAHS,
Ver-GPI}.
\item {\it Strong non-local compactness} \pcite{Mal-TOS}.
\item {\it Measures on models} \pcite{Alb-MRG}.
\end{enumerate}
\item {\it Reconstruction (see also small index property):}\\
\pcite{Bar-AGOC, Bar-RCG, BaM-RHRS, BeR-RFH, GoR-GQA,
KuR-RUS, LeR-RLCS, Ros-ACGH, RoS-ACFP, Rub-ABA,
Rub-RBA, Rub-RTS, Rub-RBAAG, Rub-RTAG, Rub-RCC,
Rub-LMG, Rub-RFM, RuR-EH1O, Slu-ACFP, Tru-QAG,
Tru-AGCO}.
\item {\it Endomorphisms:}\\
{\it maximal subgroups} \pcite{McP-EFL};
{\it random graph} \pcite{DeD-EMRG, Dol-EMRP};
{\it partial orders }\pcite{Mas-EMPO};
{\it embedding theorem} \pcite{DoM-EMU};\\
{\it Bergman property} \pcite{Dol-BPEM}.
\end{enumerate}
\bigskip
\item Generalizations of homogeneity
\begin{enumerate}[(A)]
\item {\it $\aleph_0$-categoricity: } \\
{\it general theory} \pcite{Eng-UFM, RyN-CC, Sve-CC},
cf.~\pcite{Car-FCTP, Che-ACC, ClK-RHS,
Eva-ECC, Grz-LUCC, Grz-DPCC, Hau-VSC, Sch-CCIS,
Wea-CCT1, WeL-CCT2},
\pcite{Cam-OPG, Cam-PGHC, Cam-PGr, Cam-IPG, Cam-OPG2, DiM-PG, Eva-PGMS, Hod-CPG};\\
{\it constructions }\pcite{Ash-UCC},
\pcite{CaPZ-GCM, Ehr-CCCT, Gla-2A0},\\
cf. \pcite{Iva-CCC},
\pcite{Tho-CCEP, Per-FNCM, Woo-3IT};\\
{\it total categoricity} \pcite{Ahl-TCMT, Ahl-ASM, AhZ-QFA, AhZ-Z/4Z,
AhZ-IS, BaCM-TCGR, Che-TCS, ChHL-CSS, Hod-STC, Hru-TC,
Sch-CICC, Ten-TCG, Zil-SMCC1, Zil-TCT1, Zil-TCCG, Zil-SMCC2,
Zil-SMCC3},\\
cf.~\pcite{HoHM-CCRC, Iva-DLAC, Iva-NGC, IvM-AE, Kos-SFC,
Vas-CCDAG};\\
{\it covers}
\pcite{ChHS-CLG, Eva-SFC, Eva-CGFC, Eva-FCFK, Eva-FCCC,
EvG-KCG, EvH-CRC, EvH-CCPG, EvH-AGFC, EvIM-FC,
EvP-2CFC, EvR-BFC, HoP-CS, Iva-CP, Iva-FCCH, IvM-SDT,
Pas-AFFC, Pas-FCG, Pas-NHC};\\
{\it graphs} \pcite{Shi-CCG, Whe-CUT, Whe-TNCG};\\
{\it colored linear orders} \pcite{MwT-CLO, MwT-FCLO, Ros-CCLO,
Ros-LO},
cf.~\pcite{CrT-AS, CrT-QAS, Gla-OPG, HeMMNT-CCWo,
Kul-WCM, Kul-CCoM, KuM-MCO};\\
{\it Boolean algebras with ideals}
\pcite{Ala-CCBA, Pal-BADI},
cf.~\pcite{Hei-WFBA};\\
{\it multitrees (``reticles'')}
\pcite{Pun-MCD, Pun-MCMT};\\
{\it partial orders} \pcite{Pou-EOU2C, Sch-CCPO};\\
{\it distributive lattices} \pcite{Sch-CCDL};
\\
{\it rings}
\pcite{BaR-CCSR, Che-CCN1, Che-CCN2,
MaR-CCRB, Ros-PQER, Ros-GL2B, Ros-CCG};
\\
{\it groups}
\pcite{App-BPG, App-CCG, App-ECCG, App-FEBP,
App-ECCG, ArM-SCCG,
BaCM-TCGR,
ChR-AFG,
Fel-SCCG, Fel-CCSG, Iva-CCG,
Iva-DIT, IvM-AUG,
Mcp-AUCCG,
Ros-PQER, Ros-GL2B, Ros-CCG,
SaW-PENG, SaW-QEe4, Wil-SCCG},
cf.~\pcite{IvM-ECCG};\\
{\it quasi-groups} \pcite{Shi-CQQ};\\
{\it bilinear maps} \pcite{Bau-CCBM};\\
{\it quasi-varieties} \pcite{BaL-UHC, Pal-CQV, Pal-DCQV,
Pal-CPHT, Pal-CPHC, Pal-CHC};\\
{\it e.c.~structures for some universal Horn classes }\pcite{Alb-PEC};
{\it automorphism groups } \pcite{Bar-AGOC, Tsa-UOG};\\
{\it Keisler measures} \pcite{Ens-AIM};
\\
{\it model companions} \pcite{Sar-MCCC};\\
{\it simplicity} \pcite{Pal-CCST, PaW-SCMT};\\
{\it orbit growth} \pcite{Pal-AFCC}, cf.~\pcite{BoT-HMPG};\\
{\it computable models, decidability}
\pcite{KhM-CCCTA, Mor-CCDM, Puz-CCT, Sch-DCCT,
Sch-CCT, Sch-RNF, Sch-TCCPO, Stu-HFS};\\
cf. \pcite{Per-CCFA, Ric-DS}.
\begin{enumerate}[(1)]
\item {\it Coordinatized by indiscernible sets} \pcite{Lac-IS}.
\item {\it Tree decomposable} \\
\pcite{Lac-CCI1, Lac-CIG, Lac-CCI2, Lac-TDS}, cf.~\pcite{Pal-CCUT}.
\item {\it Smooth approximation:} \\
\pcite{Che-LFFT, ChH-FSFT, Hru-FSFT, KaLM-SAF}.
\item {\it Simple theories }\pcite{Ara-OCST, EvW-SCCG}.
\end{enumerate}
\item Transitivity conditions
\begin{enumerate}[(1)]
\item {\it Setwise homogeneous:}\\
{\it in general} \pcite{Mcp-HIPG},
\\
cf.~\pcite{LoM-OEPG};
\\
{\it graphs} \pcite{DrGMS-SHG, DrGM-EO, GrMPR-SHDG};
\\
{\it directed graphs} \pcite{GrMPR-SHDG}.
\item {\it $k$-homogeneity and variants:} \\
\pcite{Bro-WnH, Cam-LW, Cam-TUS, Cam-PGUS, Cam-OUS, Cam-OUS2, Cam-OUS3, Cam-OUS4, Dro-kHUP, Dro-kHRT, Hig-HR,
Hug-kHG, Kan-4HG, Kan-kHG, Kie-HRBP, Mcp-HIPG, Mar-PGT,
Neu-HIPG, Tru-PHO, Wie-kHPG, Yos-kHPG}, \\
cf.~\pcite{Haj-HIPG, Ros-NCC, ShT-HIPG};\\
{\it graphs} \pcite{FaLLP-L2ATB, GoK-kHG, Gar-SCG, LiW-TPOS},
cf.~\pcite{Dej-K4K222HG, Dej-C4UHG, Dej-OSDT, Gra-kCSG,
IsJP-KUH, LiSS-sAT, Ron-HG, ShS-HPG, YaF-WsATG};\\
{\it linear orders} \pcite{DrS-OAG};\\
{\it circular orders} \pcite{CaT-1TCO, GiH-OhS, KuM-MCO};\\
{\it partial orders} \pcite{Dro-POT1, Dro-POT2, DrM-kHPG,
DrMM-UHPO, SaW-PHPO};\\
{\it cycle-free partial orders}
\pcite{CrTW-kCST, GrT-CFPO, Tru-kCST, War-kCSCF},
cf.~\pcite{Mol-EG1, Mol-EG2};\\
{\it linear spaces} \pcite{Dev-dHLS};\\
{\it affine or projective space} \pcite{Tho-GPS1, Tho-GPS2},
cf.~\pcite{Tho-ISG};\\
{\it real measurement} \pcite{Alp-RMS, Alp-OHFU};
\item {\it Canonical expansions}
\pcite{Tho-ECCS}.
\item {\it $1$-homogeneity in the sense of Myers}\\
\pcite{GaM-SP1H, McA-HCP, Mye-nHG, Mye-1HG}.
\item {\it Distance transitive graphs:}\\
{\it finite} \pcite{Bon-ADTG, Bon-FPDT, Cam-MTG, Cam-6TG, Gar-SG,
Iva-DTG};\\
{\it infinite} \pcite{Cam-Cen, Mcp-DTF, Mol-DTIG},
cf.~\pcite{Mol-LFG};\\
{\it imprimitive} \pcite{AlH-ST, Smi-PIG};\\
{\it distance transitive with more than one end}
\pcite{HaP-TCIG}, using \pcite{DuK-VC};\\
{\it distance regular} \pcite{BrCN-DRG}.
\item {\it Highly arc-transitive digraphs:}\\
\pcite{Ama-DHAT, AmT-HATD, AmT-CFHA, CaPW-HATD,
Che-HATD, DeMS-HATD, MaMSZ-HATD, MaMMSZ-HATD,
Mol-DATD, Neu-HATD, Pra-HATD}, \\
cf.~\pcite{DeJLP-LGGT, GiLP-sATG, GiLP-3ATG, GiLP-5ATG,
JiDLP-GTG, Mol-LCG}.
\item {\it Descendant-homogeneous digraphs:}\\
\pcite{AmET-CDHD, AmT-DHD}.
\item {\it Homogeneous with respect to connected induced subgraphs or digraphs: }\\
\pcite{GrM-CHD, Ham-LFCHD, Ham-ETG, HaH-CHDE, HaP-TCIG},
cf.~\pcite{DuK-VC, KrM-ME, KrM-QIGT}.
\item {\it Orbit-homogeneity}
\pcite{CaD-OH}.
\item {\it Jordan groups:}\\
{\it survey} \pcite{BhMMN-IPG, Mcp-SJG};
\pcite{Ade-GJG, Ade-STSS, Ade-IJPG, Ade-IIJG, AdM-CJG,
AdN-PJS, AdN-IBPG, AdN-RRB, BhM-JGB, Eva-HG, Hic-JRFP,
Hru-UMS, Hyt-LMG, Joh-STS, Kan-HDGL, McD-JG,
Neu-PPG, Neu-IJG}.
\item {\it Universal transversal property:}\\
\pcite{ArC-GHG}.
\item {\it fine partition} \pcite{HoM-RSFIS, Lac-UE}.
\item {\it Extension properties:} \\
{\it graphs} \pcite{Ana-APGPG, AnC-GPAP, AnC-APPG, AnC-GSAP,
AnC-GPAP1, AnC-GPAP2, AnC-CQPnEC,
BaBB-3ECAP, BaBS-nECAP, BaBMP-nECRD,
BlEH-PGA, BlR-EGEP, Bon-nEC, BoC-APG,
BoC-2ECLC, BoC-2ECGT, BoC-APT, BoC-APGE,
BoHK-SR3EC, CaS-SRnEC, DaM-JDE, ErHK-FSU,
ErP-SIS, Exo-APG, ExH-GCAP, ExH-SGAP,
Fag-PFM, RoSW-kUG}, \\
cf.~\pcite{Fla-PCSR, Tro-UMS};\\
{\it triangle free graphs}
\pcite{AlCH-TFA, EZL-3EC, Lar-TFGS, Pac-ISCN, PaS-2SU},
cf.~\pcite{AlR-TFGS};
{\it tournaments} \pcite{GrS-CSTP}.
\item {\it Association schemes:} \pcite{AlBC-ASPG}.
\end{enumerate}
\item {\it Metric geometry:}\\
\pcite{Bir-MFG, Bog-MHS, Bus-LDP, Bus-SFS, Bus-MMFS,
Bus-2PG, Bus-LMG, Bus-GG, Bus-AG, BuP-DG,
DaW-MHR, Fre-RHL1, Fre-RHL2, Nag-TGNC, Nag-HSB,
Nag-WTF, Tit-CEM, Tit-CEMn, Tit-EHGL, Tit-TGM,
Wan-2MS, Wan-2PH},
cf.~\pcite{Tit-G3T, Tit-G2TC}.
\item {\it Homomorphism-homogeneity:}\\
\pcite{CaL-PHH, CaN-HHS, DoM-HHL, DoJ-HHP, HaHM-HHLG,
IlMR-FHHT, IlMR-HHG, JuM-HHMA, Loc-CHHG, LoT-HH,
Mas-HHPO, Mas-HHTL, Mas-HHFA, Mas-HHPLG, Mas-HHOGL,
MaNS-HHBR, MaNS-FHHB, MaP-HHS, RuS-HHG},\\
cf. \pcite{PeP-PHRS}.
\item {\it Continuous and projective \Fraisse theory:}\\
{\it continuous \Fraisse constructions}
\pcite{AvSCGM-BSUD, BaK-LF, BeM-USECS, BeY-FLM,
Cam-QPFL, GaK-GS, IrS-PAMT, IrS-PFPA, Kub-FS, KuS-UGS,
Sch-FTMS, Usv-GSMS}, \\
cf.~\pcite{ArB-UEC, Dza-UEC, Dza-UUEC, Kub-IRFL, Mas-RUS,
ShZ-UMKS};
\\
{\it generic automorphisms, continuous case}
\pcite{BeBM-PTG, GuI-PGRT, Hod-PAQ, KaL-PGAG,
Kwi-HCAG, Kwi-LCPA, RiZ-PTFG};\\
{\it Ramsey theory} \pcite{Kai-ARTM};\\
{\it linear metric rigidity} \pcite{MePV-EMLR, MePV-LRMS};\\
{\it background} \pcite{BeBHU-MTMS};\\
{\it neostability} \pcite{CoT-PUS}.
\item {\it Hrushovski amalgamation:}\\
\pcite{AnI-SSG, Are-HUG, Bal-ASMP, Bal-LBCI, Bal-RHS, Bal-EG, BaH-CR2F, BaH-CSCR, BaH-CSRk, BaH-CSMC, BaI-GPP, BaS-DFG, Bau-AC, Bau-UCG, BaHMW-BF, BaMZ-FVS, BaMZ-HF, BaMZ-RF, BaP-FPS,
EaO-CATF, Eva-HAD, Eva-SIPG, Eva-CCP,
Eva-BTSS, Eva-TNT, EvF-GHC1, EvF-GHC2, EvGT-SAG,
EvP-GAC, Goo-HG, Has-IFMR, Has-HAC, HaH-FSL, HaH-DMP,
Her-SGS, Her-STFT, Hol-GF, Hol-FVS, Hol-MCSM, Hru-CCP,
Hru-SMF, Hru-NSM, Hru-SLG, Ike-MFD, Ike-GPP, Ike-SSGG,
Ike-AIG, IkK-SGS, IkKT-GSSA, KuL-GS, PiT-ACC,
Poi-CE, Poi-EC, Poi-AH,
Pou-SCRT, Pou-SGS, Pou-SFGS, Pou-SFHMT, PoW-SPRT,
Sud-VGC, Sud-GGIW1, Sud-SGM, Sud-GGIW2, Sud-GL,
Ten-HnG, Tsu-RAST, Tsu-AT,
Ver-EISM, VeY-CMT, Wag-RSD, Wag-HA, Zie-FFMR,
Zie-NSM, Zil-CSAV, Zil-PEACF, Zil-CQEC, Zil-CMG};\\
{\it relation to random structures }
\pcite{Bal-NMC01, Bal-FIMT, Bal-MRG, BaM-DTLL, BaS-RSG,
BaS-SG, Bey-RSPF, Bey-RHPF, BeH-ACPF, DeN-ASnG,
DoL-ORS, Lyn-ASU, Lyn-PUF, Lyn-PSRG1, Lyn-PSRG2, Lyn-E01,
Lyn-PGMT, Lyn-CLRG, Lyn-CLRGD}.\\
\item{\it Random Structures:}\\
\pcite{AlF-RGO, Boh-TFP, BoPLPSSV-DT, BoK-HFP, BoK-DCTF,
Bol-RG, BoJ-IRGG, BoJW-nOG, BrT-RSC, Cam-RG, ChS-LRG,
Com-TTFM, Com-LAAC1, Com-CDP, CoHS-IAP, Com-AP1PO,
Com-LAAC2, Com-LLC, DrG-RpG, DrK-RRS, Erd-RTG, Erd-CCG,
ErR-AG, Fag-PFM, Fag-MTPP, GlKLT-FSPC, Gra-AAFS,
Las-SS01, ShS-01SRG, Spe-NDM, Spe-PM, Spe-10L, Spe-PM2,
Spe-EEA, SpJ-T01, Tar-MRG, Ver-RMSU, Win-ROFD, Win-RO,
Win-RODk, Win-TRO, Win-RS01},
cf.~\pcite{KoPR-KnF, KoPR-KnF2};\\
\item Theory of relations (general structures)
\pcite{McPW-CSA}.
\goodbreak
\item Universality
\begin{enumerate}[(1)]
\item {\it Countable case:} \\
{\it graphs}
\pcite{ArBM-CAC, BoT-GCSG, Bon-PHGC, Bon-HA,
ChK-UPF, ChSh-UFT, ChSS-UFS, ChS-FC,
ChS-FSFS, ChST-OBT, ChT-FP2B, ChT-2CP,
FuK-CUG, FuK-UGT, Kom-UG, KoMP-UG, KoP-UBS,
KoP-UE, Mos-EUG, Nur-GMFC, Rad-UGUF, Rad-UG},
cf.~\pcite{BrHM-UGHP, Kom-OSUG, Pac-MPCG};\\
{\it width-2 orders} \pcite{BoD-W2O};\\
{\it partial orders} \pcite{HuN-UGPO};\\
{\it rings} \pcite{Sar-UCR, SaW-UCR};\\
{\it permutation patterns}
\pcite{AtMR-PAS, HuR-UPC, Bon-CP2}, cf.~\pcite{MaT-EPM};\\
\item {\it Uncountable case:} \\
{\it structures with $n$-dimensional amalgamation} \pcite{Mek-US1};
\\
{\it graphs} \pcite{KoP-UE, DzS-UGSS, She-CST, She-UGCH,
She-UGCHR, She-US, She-UGOB, Tho-SUF};\\
{\it groups}
\pcite{GoSW-ULNG, GrS-ULFG, Hic-CULF, She-USAG, She-NEU,
She-NEUAG, ShS-KPTF, ShS-KPTFR};\\
{\it topological groups} \pcite{Shk-UATG};\\
{\it linear orders }\pcite{KoS-UO, Moo-UAL, She-IR};\\
{\it partial orders} \pcite{Joh-UIPO};\\
{\it cardinal spectra} \pcite{Ack-nCS};\\
{\it bipartite graphs} \pcite{GoGK-BU};\\
{\it topological spaces}
\pcite{MaNO-USRT, MaT-URS, Tod-FS2}; \\
{\it metric space} \pcite{Kat-UMS};\\
{\it Banach spaces}
\pcite{BrK-UBSC, BrK-UBSE, BrK-IUBS, Kos-UMUF, Kos-UOBS,
ShU-BSGOP, Szl-NESRU};\\
{\it von Neumann algebras} \pcite{Oza-UII1};\\
{\it using club guessing} \pcite{Dza-CGU};\\
{\it universal models} \pcite{Dza-UM, DzS-EU, DzS-NEU, KoS-USUS}.
\end{enumerate}
\end{enumerate}
\end{enumerate}
\section{Introduction}
\label{sec:intro}
The privacy paradox \cite{Barnes_2006} shows that humans can be quite concerned about security
and privacy in general, but when it comes to their own behaviour they seem to ignore any caution and freely
spread their private data into public cloud-based social network services such as Facebook, Twitter, or
Instagram.
Assuming that humans are rationally acting beings has led to quite successful models and predictions in
economics using what is termed Rational Choice Theory (RCT) \cite{sco:00}. Sociologists have transferred RCT
more generally to social interaction, forming what is known as {\it exchange theory}.
We want to test this theory on the privacy paradox and use the results to improve automated
logical verification of social networks. For this, we consider a dynamic-systems research approach more suitable.
The Isabelle Insider framework permits modeling and analyzing dynamic state transitions, so we can reason
about actions and their effects. Methodologically, we thus follow the action research approach \cite{lew:48},
interleaving empirical research with interventions, here, practical implementations and verification.
This paper first presents an empirical study on increasing privacy awareness through
the construction of a social self-awareness tool for social networks.
The study uses assumptions from RCT, testing and highlighting the significance
of applying this theory.
RCT can be considered a follow-up to Max Weber's sociological explanation, which has
strongly inspired the human actor model of Isabelle's Insider framework. Consequently, it appears
natural to use the RCT interpretation found in the empirical study to extend the human model in the
Isabelle Insider framework.
Moreover, it turns out that the RCT interpretation of social awareness allows us to model unintentional
insiders, a challenge hitherto unanswered.
This paper first gives some background from sociology about RCT and the Isabelle Insider framework
(Section \ref{sec:back}).
Section \ref{sec:snpa} presents the tool-based study on privacy awareness in social networks and the influence
of RCT, giving some insights into the requirements, design, testing, and evaluation of the tool and the
key findings of the RCT interpretation with respect to privacy awareness. This section is based on the
Bachelor of Science dissertation of one of the authors \cite{alv:21}.
Section \ref{sec:isaunins} then transfers the experimental findings into an extension of
the Isabelle Insider framework and illustrates them on the case study. The Isabelle sources are
publicly available on GitHub \cite{kam:21}.
\section{Background}
\label{sec:back}
\subsection{Social Explanation and Rational Choice Theory}
Rational choice theory is based upon the assumption that complex
social phenomena can be explained by the individual actions that constitute them. This
philosophy, now coined {\it methodological individualism}, holds that:
`The elementary unit of social life is the individual human action. To explain social institutions
and social change is to show how they arise as the result of the action and interaction of individuals'
\cite{els:89}.
Seemingly very close to {\it methodological individualism} is
what was originally conceived by Max Weber \cite{we:72,we:20a} as
`understanding explanation' ({\it Verstehendes Erkl\"aren}), sketched in Figure \ref{fig:weber}.
\begin{figure}
\begin{center}
\includegraphics[scale=.5]{Grundmodell_EN}
\caption{Max Weber's sociological explanation model: a macro-micro-macro-level-transition explaining
sociological phenomena by breaking down the global facts from the macro level (a) onto a more
refined local view of individual actors at the micro-level (b). Finally those micro-steps
are generalized and lifted back on the macro-level (c) to explain the global phenomenon (d).}\label{fig:weber}
\end{center}
\end{figure}
Despite these similarities, RCT is more extreme in only considering rational actions.
John Scott explains in his critical overview of RCT \cite{sco:00}:
`what distinguishes RCT [\dots] is that it denies the existence of any kind of action
other than the purely rational [\dots]'. We draw from Scott's overview to
contrast and provide the right context for our work.
He quite critically highlights the limitations of RCT, in particular when branching out
from economics and applying RCT more generally to sociology. According to Scott, Homans
\cite{hom:61} was a ``pioneering figure'' in establishing rational choice theory in sociology,
setting up the basic framework of {\it exchange theory}, which can be understood as RCT for social
interaction. In this framework, the money and market mechanisms of economic theories are
replaced by human resources such as time, information, approval, and prestige.
Besides pioneering RCT, Homans additionally grounded exchange theory on assumptions that
he drew from behaviourist psychology.
While the methodological individualism of rational choice theories starts from individuals' actions
and sees all social phenomena as reducible to these actions, Homans went one step further in
explaining them. For him it was necessary to reduce these actions to conditioned psychological responses.
In brief, human behaviour, like animal behaviour, is not free but determined by rewards and
punishment. This reinforcement is called `conditioning' and determines human behaviour.
Behaviour can thus be studied purely externally and needs no inspection of internal mental
states.
While others rejected Homans' claims about this explanation of human behaviour
-- and even Homans came to see it as inessential -- it is very helpful for our formal model of
awareness and unintentional insiders. In Section \ref{sec:isaunins}, when we formalize
the taxonomy extracted from the experimental work into Isabelle, we model human behaviour
in the sense of `conditioning'. We do, however, model the internal state of the actors,
although Homans considered this unnecessary, because our model permits dynamic state inspection,
including the psychological disposition of human actors.
\subsection{Isabelle Insider framework}
The Isabelle Insider framework \cite{bikp:14,kp:16} has also been inspired by Max Weber and
methodological individualism.
In mapping this fundamental philosophy to logic, this framework follows a common introductory
textbook for sociologists by Hartmut Esser \cite{he:93} written in the spirit of Popper's
critical rationalism. This offers an approach to understand sociological experiments in a
formal way using a logical view on explanation by the logicians Hempel and Oppenheim \cite{ho:48}.
In addition, the Isabelle Insider framework uses a taxonomy provided in \cite{nblgcww:14}
which is founded on empirical and psychological studies of counterproductive workplace behaviour.
In Section \ref{sec:isaunins}, we will present in more detail how the human disposition
and its effects on the environment are modeled in Isabelle and how this model is now extended to accommodate
the unintentional insider.
Isabelle is an interactive proof assistant based on Higher Order Logic (HOL).
Application-specific logics are formalized in new theories extending HOL;
they are called object-logics. Although HOL is undecidable and proving therefore
needs human interaction, the reasoning capabilities are very sophisticated,
supporting ``simple'', i.e., repetitive, tedious proof tasks to a level of
complete automation. The use of HOL has the advantage that it enables expressing
even the most complex application scenarios, conditions, and logical
requirements, while simultaneously enabling the analysis of the meta-theory.
That is, repeating patterns specific to an application can be abstracted and
proved once and for all.
An object-logic contains new types, constants, and definitions. These items
reside in a theory file. For instance, the file \texttt{UnintentionalInsider.thy} contains
the object-logic for unintentional insiders described in the following paragraphs.
This Isabelle Insider framework is a {\it conservative extension} of HOL. This means
that our object logic does not introduce new axioms and hence guarantees consistency.
Conceptually, new types are defined as subsets of existing types and properties are proved
using a one-to-one relationship to the new type from properties of the existing type.
We are going to use Isabelle syntax and concepts in the presentation of the Isabelle Insider
framework and will explain them when they are used.
\section{Social Networks and Privacy Awareness}
\label{sec:snpa}
\subsection{Requirements analysis and design of social awareness tool}
A questionnaire was created in order to research public attitudes to internet security
amongst social media users. Quantitative and qualitative data were obtained through this method,
allowing more time for analysing the results and how they could be used to create a
prototype.
Answers to `How many different social media apps/websites do you use every day?'
show that 84.6\% of 39 respondents use more than 3 different forms of social media every day. This
shows the commonality of and reliance on social media in everyday life, and how many different apps
can hold information about a user.
\includegraphics[scale=.3]{questionnaire2}
The above chart shows the most common applications that people use; Instagram, Facebook, and TikTok
have proved to be the most common platforms. Each of these platforms has been shown to have had
breaches involving misuse of the personal data collected from their users.
How many people keep their accounts private? The majority, 56.4\%, say that only some are private,
meaning that these users have chosen to privatise one or more of their accounts but have left others
accessible to the public.
Are these users aware of how much information they have put out publicly? Surprisingly, the most common
answers are `completely aware' or `somewhat aware'. In total, 59\% said that they have knowledge
of the information they have posted publicly but leave room for uncertainty as to how much is actually
available to the public. This shows a slight concern from users about their social media behaviour.
\subsection{Testing and Evaluation}
The left side of Figure \ref{fig:interface} shows the design of the search page, which focuses on a clear,
minimalistic aesthetic to display concise information that is easily accessible to all. The
website title is placed at the top middle and highlights the purpose of the website. The search bar is in the
middle of the page, letting the user know that the tool has a single purpose. The user is therefore not lost
when navigating the website, easing user comfort. The bottom grey section gives
basic information on the importance of internet security and what the website aims to show. The user enters
a username in the search bar and presses the search button.
\begin{figure}
\begin{center}
\includegraphics[scale=.25]{interface}
\end{center}\caption{Interface of social network awareness tool}\label{fig:interface}
\end{figure}
The right side of the figure displays what the user sees after searching a username. Profiles are created
from the available information, such as other social media accounts that are linked to the searched username.
The profiles are highlighted by the black boxes they are in, which contrast with the white background,
avoiding crowded visuals that might disorient users.
For the implementation, we used open-source APIs. An API (application programming interface) allows
computers to send requests and receive data in return, enabling specific queries and actions.
APIs require keys that grant access to sensitive data while also protecting important and sensitive data
from access by arbitrary users. All social networks allow developers to apply for API keys, allowing APIs
to be used for projects. The APIs allowed us to retrieve the necessary data, connecting to the internet
and using genuine social network server data. Users are able to search any username on any social
network and retrieve related information. The APIs proved to be the best solution for this project, as
we could acquire the necessary data and use it to create a summarized profile.
The results are thus inherently genuine, reflecting real-world scenarios.
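The aggregation step behind the tool, combining per-network lookups for one username into a single summarized profile, can be sketched as follows. This is a minimal illustration only; the function name, data shapes, and field names are our own assumptions, not the tool's actual API responses.

```python
# Hypothetical sketch (not the tool's real code): merge per-network
# API lookup results for one username into a summarized profile.

def summarize_profile(username, network_results):
    """network_results maps a network name to its lookup result
    (a dict of publicly visible fields) or None if not found."""
    profile = {"username": username, "accounts": [], "public_fields": set()}
    for network, data in network_results.items():
        if data is None:  # username not found on this network
            continue
        profile["accounts"].append(network)
        profile["public_fields"].update(data.get("public", []))
    return profile

# Example input imitating results returned by the social network APIs.
results = {
    "instagram": {"public": ["photos", "bio"]},
    "twitter": {"public": ["bio", "location"]},
    "tiktok": None,
}
p = summarize_profile("alice99", results)
print(sorted(p["accounts"]))       # ['instagram', 'twitter']
print(sorted(p["public_fields"]))  # ['bio', 'location', 'photos']
```

Separating the aggregation from the network calls also makes the summarization logic testable without API keys.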
\subsection{Key findings and RCT interpretation of privacy awareness}
\includegraphics[scale=.3]{questionnaire4}
Most importantly, the last question investigates the issue of social media behaviours. It asks whether
respondents would personally change their online behaviours if they were able to see what others could
see about them. In the responses to this question, many showed concern about what others could obtain
from the information they post online and said they would immediately act on this by privatising their
social media. Consequently, this shows the importance of a tool that helps people become more aware of
their online behaviours. It matches the rationality assumption of RCT and demonstrates that creating
awareness changes users' attitudes. This creates a potential for improved privacy in social networks
and shows how awareness could reduce the risk of attacks on privacy.
\section{Modeling Unaware Social Network Users and Unintentional Insiders in Isabelle}
\label{sec:isaunins}
The state-based dynamic semantics of the Isabelle Infrastructure framework allows expressing how
awareness dynamically changes the global policy and thus how a change in awareness eliminates the risk.
We also show how to integrate awareness into the notion of insiderness, thus extending the Isabelle
Insider framework to unintentional insiders based on the findings of our experiment with the social
awareness tool.
\subsection{Infrastructures, Policies, Actors in Isabelle}
\label{sec:infra}
The Isabelle Infrastructure framework supports the representation of infrastructures
as graphs with actors and policies attached to nodes. These infrastructures
are the {\it states} of the Kripke structure.
The transition between states is triggered by non-parameterized
actions \texttt{get}, \texttt{move}, \texttt{eval}, and \texttt{put}
executed by actors.
Actors are given by an abstract type \texttt{actor} and a function
\texttt{Actor} that creates elements of that type from identities
(of type \texttt{string} written \texttt{''s''} in Isabelle).
\begin{ttbox}
{\bf {typedecl}} actor
{\bf {type}}_{\bf{synonym}} identity = string
{\bf {consts}} Actor :: string \ttfun actor
\end{ttbox}
Note that it would seem more natural and simpler to just
define \texttt{actor} as a datatype over identities with a constructor \texttt{Actor},
instead of a simple constant together with a type declaration as used, for example,
in the Isabelle inductive package by Paulson~\cite{pau:97}.
This would, however, make the constructor \texttt{Actor} an injective function
by the underlying foundation of datatypes, thereby excluding the fine-grained
modelling that is at the core of the insider definition:
in fact, the core insider property \texttt{UasI} (see below) defines the
function \texttt{Actor} to be injective for all except insiders and explicitly enables
insiders to have different roles by identifying \texttt{Actor} images.
To represent the macro-level view of the actor within an infrastructure,
we define a graph datatype \texttt{igraph} (see below) for infrastructures.
This datatype has generic input parameters that are going to be supplied as
concrete parts of an application infrastructure on instantiation of an \texttt{igraph}.
They represent the actual location graph, the actors at each location, their roles, credentials,
and psychological disposition (see the following subsection), as well as the locations' state.
\begin{ttbox}
{\bf datatype} igraph = Lgraph (location \tttimes location)set
location \ttfun identity set
actor \ttfun (string list \tttimes string list)
actor \ttfun actor_state
location \ttfun string list
\end{ttbox}
Consider here the social network case study as an example.
\begin{ttbox}
ex_graph \ttequiv Lgraph
\{(aphone,instagram), (bphone,instagram)\}
(\ttlam x. if x = aphone then \{''Alice''\} else
(if x = bphone then \{''Bob''\} else \{\}))
ex_creds ex_locs
\end{ttbox}
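To fix intuitions, the example graph can be mirrored by a small executable sketch in Python. This is purely illustrative: the dictionary encoding and the helper `actors_at` are ours, not part of the Isabelle model.

```python
# Illustrative Python counterpart of the ex_graph example: a location
# graph with actors placed at locations (names as in the case study).

ex_graph = {
    "edges": {("aphone", "instagram"), ("bphone", "instagram")},
    "actors": {"aphone": {"Alice"}, "bphone": {"Bob"}},
}

def actors_at(graph, location):
    """Actors residing at `location` (empty set elsewhere)."""
    return graph["actors"].get(location, set())

print(actors_at(ex_graph, "aphone"))     # {'Alice'}
print(actors_at(ex_graph, "instagram"))  # set()
```

The `\ttlam`-defined placement function in the Isabelle snippet corresponds to the `actors` dictionary here, with the empty set as the default branch.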
Policies specify the expected behaviour of actors of an infrastructure.
Atomic policies of type \texttt{apolicy}
describe prerequisites for actions to be granted to actors given by
pairs of predicates (conditions) and sets of (enabled) actions:
\begin{ttbox}
{\bf type}_{\bf{synonym}} apolicy = ((actor \ttfun bool) \tttimes action set)
\end{ttbox}
For example, the \texttt{apolicy} pair
\texttt{(\ttlam x.\ {has (x, ''PIN'')}, \{move\})}
specifies that all actors {who know the \texttt{PIN}} are enabled to perform action \texttt{move}.
Infrastructures combine an infrastructure graph of type \texttt{igraph}
with a policy function that assigns local policies
over a graph to each location of the graph, that is, it is a function
mapping an \texttt{igraph} to a function from \texttt{location} to
\texttt{apolicy set}.
\begin{ttbox}
{\bf datatype} infrastructure = Infrastructure igraph
[igraph, location] \ttfun apolicy set
\end{ttbox}
For our social network example, the initial infrastructure contains the above graph
\texttt{ex\_graph} and the local policies defined shortly.
\begin{ttbox}
sn_scenario \ttequiv Infrastructure ex_graph local_policies
\end{ttbox}
The function \texttt{local\_policies} gives the policy for each location \texttt{x}
over an infrastructure graph \texttt{G} as a pair: the first element of this pair is
a function specifying the actors \texttt{y} that are entitled to perform the actions
specified in the set which is the second element of that pair.
\begin{ttbox}
local_policies G x \ttequiv
 (case x of
   aphone \ttfun \{((\ttlam y. has G (y,''aPIN'')), \{put,get,move,eval\})\}
 | bphone \ttfun \{((\ttlam y. has G (y,''bPIN'')), \{put,get,move,eval\})\}
 | instagram \ttfun \{((\ttlam y. y \ttin \{Actor ''Alice'', Actor ''Bob''\}),
                 \{put,get,move,eval\})\}
 | _ \ttfun \{\})
\end{ttbox}
We define the behaviour of actors using a predicate \texttt{enables}:
within infrastructure \texttt{I}, at location \texttt{l},
an actor \texttt{h} is enabled to perform an action \texttt{a} if there
is a pair \texttt{(p,e)} in the local policy of \texttt{l} -- \texttt{delta I l}
projects to the local policy -- such that action \texttt{a} is in the action set
\texttt{e} and the policy predicate \texttt{p} holds for actor \texttt{h}.
\begin{ttbox}
enables I l h a \ttequiv \ttexists (p,e) \ttin delta I l. a \ttin e \ttand p h
\end{ttbox}
For example, the statement
\texttt{enables I l (Actor''Bob'') move} is true if
the atomic policy \texttt{(\ttlam x. True, \{move\})} is in the set
of atomic policies \texttt{delta I l} at location \texttt{l} in infrastructure
\texttt{I}. {Double quotes as in \texttt{''Bob''} create a string in Isabelle/HOL.}
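The behaviour of \texttt{enables} can also be made concrete by a small executable sketch in Python. This is an illustration only, not part of the Isabelle formalisation; the dictionary `delta` and the toy policy entries stand in for the Isabelle functions of the same names.

```python
# Illustrative Python sketch of the Isabelle "enables" predicate.
# "delta" maps a location to its set of atomic policies, each a pair
# (predicate over actors, set of enabled actions) -- mirroring apolicy.

def enables(delta, location, actor, action):
    """True iff some local policy pair (p, e) at `location`
    grants `action` to `actor`."""
    return any(action in e and p(actor)
               for (p, e) in delta.get(location, []))

# Toy policy: anyone may move; only Alice and Bob may get/put at "instagram".
delta = {
    "instagram": [
        (lambda a: True, {"move"}),
        (lambda a: a in {"Alice", "Bob"}, {"get", "put"}),
    ],
}

print(enables(delta, "instagram", "Bob", "get"))   # True
print(enables(delta, "instagram", "Eve", "get"))   # False
print(enables(delta, "instagram", "Eve", "move"))  # True
```

As in the Isabelle definition, it suffices that one pair `(p, e)` in the local policy grants the action.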
\subsection{Modelling the human actor and psychological disposition}
\label{sec:human}
The human actor's level is modeled in the Isabelle Insider framework by assigning
the individual actor's psychological disposition to each actor's identity.
\begin{ttbox}
{\bf datatype} actor_state = State psy_state motivations
\end{ttbox}
There are selector functions \texttt{motivation} and \texttt{psy\_state} to project
the components from an \texttt{actor\_state} element.
The psychological state of an actor is not determined within the formal system; instead, we
use empirical facts as input, for example, our own
study from Section \ref{sec:snpa} or other sociological findings like \cite{nblgcww:14}.
The formal representation of {\it Psychological State}
is a simple enumeration datatype distinguishing the ``normal'' state of happiness from one in which the actor is
alerted or ``suspicious''.
\begin{ttbox}
{\bf datatype} psy_states = happy | suspicious
\end{ttbox}
The elements on the right-hand side are the two injective constructors of the new datatype
\texttt{psy\_states}. They are simple constants, modeled as functions without arguments.
Motivation plays a vital role in RCT and, as Homans observed, the strongest motivation is that humans
seek approval (which is only absent in a state of mind that corresponds to complete detachment,
which we abbreviate as ``zen'').
\begin{ttbox}
{\bf datatype} motivations = approval_hungry | zen
\end{ttbox}
The types for psychological state and motivations allow defining a user's
state of unawareness by a predicate.
\begin{ttbox}
{\bf definition} unaware :: actor\_state \ttfun bool
unaware a \ttequiv motivation a = \{approval_hungry\} \ttand happy = psy_state a
\end{ttbox}
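The following Python sketch mirrors \texttt{actor\_state} and the \texttt{unaware} predicate; the class and field names are illustrative stand-ins for the Isabelle definitions above.

```python
# Illustrative Python mirror of actor_state and the "unaware" predicate.
from dataclasses import dataclass

@dataclass
class ActorState:
    psy_state: str          # "happy" or "suspicious"
    motivations: frozenset  # subset of {"approval_hungry", "zen"}

def unaware(a: ActorState) -> bool:
    # unaware a == (motivation a = {approval_hungry} and psy_state a = happy)
    return (a.motivations == frozenset({"approval_hungry"})
            and a.psy_state == "happy")

print(unaware(ActorState("happy", frozenset({"approval_hungry"}))))      # True
print(unaware(ActorState("suspicious", frozenset({"approval_hungry"})))) # False
```

A ``suspicious'' actor, or one with the ``zen'' motivation, is no longer unaware, matching the conjunction in the Isabelle definition.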
\subsection{Privacy by labeling data and state transition}
\label{sec:label}
The Decentralized Label Model (DLM) \cite{ml:98} introduced the idea of
labeling data with owners and readers. We use this idea and formalize
a new type to encode the owner and the set of readers of a data item.
\begin{ttbox}
{\bf type\_synonym} dlm = actor \tttimes actor set
\end{ttbox}
Labelled data is then just given by the type \texttt{dlm \tttimes\ data}
where \texttt{data} can be any data type.
The abstract state transition provided in the underlying Kripke structure
theory is instantiated in the infrastructure model by an inductive definition
of a state transition relation \texttt{\ttrel{n}} over infrastructures.
A set of inductive rules defines this transition relation \texttt{\ttrel{n}}
relative to characteristics of the current state. These characteristics can exploit
the information encoded into the infrastructure as well as the enables predicate
to express how the next infrastructure state evolves from the current one.
We show here the rules for put and get as they suffice to illustrate how to model
the social network application scenario.
\subsubsection{The put data rule} assumes an
actor \texttt{h} residing at a location \texttt{l} in the infrastructure
graph \texttt{G} and being enabled the \texttt{put} action. In addition, the psychological
state \texttt{pgra G h} needs to be unaware. Here we add the newly extended option for the
human actor model to the semantic rule as precondition thus stating that only unaware users
put their data onto the graph.
If infrastructure state \texttt{I} fulfills those preconditions, the next state \texttt{I'} can
be constructed from the current state by adding the data item
\texttt{((Actor h, hs), n)} at location \texttt{l}. The addition is
given by updating (using \texttt{:=}) the existing data storage \texttt{lgra G l}
at location \texttt{l} with the singleton set \texttt{\{((Actor h, hs), n)\}}. Note
that the first component \texttt{Actor h} marks the owner of this data item as \texttt{h}.
\begin{ttbox}
put:
G = graphI I \ttImp h \ttatI l \ttImp enables I l (Actor h) put \ttImp
unaware (pgra G h) \ttImp
I' = Infrastructure
(Lgraph (gra G)(agra G)(cgra G)(pgra G)
((lgra G)(l := (lgra G l \ttcup \{((Actor h, hs), n)\}))))
(delta I)
\ttImp I \ttrel{n} I'
\end{ttbox}
\subsubsection{The get data rule} resembles the put data rule in
many parts. However, here an actor \texttt{h} accesses data in a remote
location \texttt{l'} and adds it to the data in his current location
\texttt{l}. This copying of data is only permitted if the current location
\texttt{l'} of the data enables \texttt{h} to \texttt{get} and if the list of readers
\texttt{hs} in the data item \texttt{((Actor h', hs), n)} contains the entry
\texttt{Actor h}, or if the accessing actor \texttt{h} is the owner \texttt{h'} herself.
\begin{ttbox}
get_data:
G = graphI I \ttImp h \ttatI l \ttImp enables I l' (Actor h) get \ttImp
((Actor h', hs), n) \ttin lgra G l' \ttImp Actor h \ttin hs \ttor h = h' \ttImp
I' = Infrastructure
(Lgraph (gra G)(agra G)(cgra G)(pgra G)
((lgra G)(l := (lgra G l \ttcup \{((Actor h', hs), n)\}))))
(delta I)
\ttImp I \ttrel{n} I'
\end{ttbox}
The global policy is `only the owner and friends can access the data on the cloud'
using for example the definition of \texttt{friends} as \texttt{\{''Alice'', ''Bob''\}}.
\begin{ttbox}
global_policy I a \ttequiv a \ttnin friends
\ttimp \ttneg(enables I instagram (Actor a) get)
\end{ttbox}
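The effect of the put and get rules on the location-indexed data store can be mirrored by a small Python sketch. It is illustrative only: labelled items follow the \texttt{dlm} shape `((owner, readers), data)`, but the `enables` and `unaware` preconditions of the Isabelle rules are omitted for brevity, and all names are ours.

```python
# Illustrative sketch of the put/get transitions on a location-indexed
# data store.  An item is ((owner, readers), data), mirroring dlm x data.

def put(store, location, owner, readers, data):
    """Return a new store with an item owned by `owner` at `location`."""
    new = {l: set(s) for l, s in store.items()}
    new.setdefault(location, set()).add(((owner, frozenset(readers)), data))
    return new

def get(store, src, dst, actor):
    """Copy every item at `src` that `actor` owns or may read to `dst`."""
    new = {l: set(s) for l, s in store.items()}
    for ((owner, readers), data) in store.get(src, set()):
        if actor == owner or actor in readers:
            new.setdefault(dst, set()).add(((owner, readers), data))
    return new

s0 = {}
s1 = put(s0, "instagram", "Alice", {"Bob"}, "Alice's_diary")
s2 = get(s1, "instagram", "bphone", "Bob")   # Bob is a reader: copy succeeds
s3 = get(s1, "instagram", "cphone", "Eve")   # Eve is neither owner nor reader

print(len(s2.get("bphone", set())))  # 1
print(len(s3.get("cphone", set())))  # 0
```

The reader check in `get` corresponds to the premise \texttt{Actor h \ttin\ hs \ttor\ h = h'} of the get data rule.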
We can prove that Bob is enabled to get Alice's data at instagram if Bob is specified
as a reader in an application scenario where Alice sets the label parameter \texttt{hs}
in a put action accordingly. Using the attack tree analysis features of the
Isabelle Insider framework, we can formally prove such statements.
However, we are interested in investigating negative effects of unawareness and how a change of human
behaviour may improve the situation. Therefore, we use the representation of human factors and
(malicious) Insiders in the Isabelle Insider framework, integrating the existing notion of
malicious insiders and extending them to include also unintentional insiders.
\subsection{Representing human factors and insiders}
The Isabelle Insider framework defines ``[a]n insider [as] a trusted user of a system who behaves
like an attacker abusing privileges thereby bypassing security controls'' \cite{kk:21}.
This definition leads to the notion of an insider as an attacker,
formally represented as an actor \texttt{Eve}, a malicious ``evil'' actor outside some
set of actors within the system. Actors are represented as having a unique identity as well as a role,
which normally is the same as their identity unless impersonation happens. Insiderness is now represented
by explicitly identifying the actor \texttt{Eve} with privileged users. Thus the malicious actor Eve can
act like an inside actor.
So far, the Isabelle Insider framework has rooted insiderness on a taxonomy from the insider
threat literature based on psychological studies \cite{nblgcww:14}. Thus, insiderness was uniquely
determined by the description of an insider as a system actor turning bad as a consequence of susceptible
dispositions and triggering events leading to a ``tipping point''.
Technically, we model this explicit yet flexible impersonation of privileged users inside the system
by a function \texttt{Actor} that maps identities to roles. In places where an impersonation is deemed
feasible the function may map the identity of the ``evil'' actor \texttt{Eve} to the same role as
that of a privileged user inside the system.
For all other identities that are not compromised, the
function \texttt{Actor} maps these identities exclusively to roles in the system, that is, for these
identities \texttt{Actor} is injective: $id_0 \neq id_1 \Rightarrow \texttt{Actor}\ id_0 \neq \texttt{Actor}\ id_1\,.$
Here, we want to extend this classical view of an intentional insider to that of an unintentional
insider \cite{InsiderUnintentionalInsider2013}.
As Matt Bishop puts it ``[i]n many cases, unintentional insider attacks are as dangerous as deliberate
insider attacks; preventing them adds more complexity to an already, difficult problem. Any approach
therefore must have not only a technical aspect (detecting the attack), but also a non-technical
aspect (detecting the problem), which includes consideration of social, political, legal, and cultural
influences, among others''\cite{Bishop:2017aa}.
We remain in the spirit of this design decision of representing the human actor but extend it with awareness
and thus unintentional insiderness. In the following we retrace the steps of the formal insider model
as originally conceived in the Isabelle Insider framework highlighting the additions and extensions
to accommodate unintentional insiders.
\subsection{Integrating Unaware with Malicious Insiders}
For the integration of unintentional insiders with the existing malicious insiders, e.g. \cite{kk:21},
we extend the definitions of the types \texttt{motivations} and \texttt{psy\_state} given in Section \ref{sec:human}.
The values for the malicious insider
are based on a taxonomy from psychological insider research by Nurse et al.~\cite{nblgcww:14}.
\begin{ttbox}
{\bf datatype} psy_states = ... | depressed | disgruntled | angry | stressed
\end{ttbox}
Another example is
{\it motivation}, which for malicious insiders ranges widely \cite{nblgcww:14}.
\begin{ttbox}
{\bf datatype} motivations = ... | financial | political | revenge
| fun | competitive_advantage | power | peer_recognition
\end{ttbox}
The transition
to become an insider is represented by a {\it catalyst} that tips the insider
over the edge so he acts as an insider formalized as a ``tipping point''
predicate.
\begin{ttbox}
{\bf definition} tipping_point :: actor\_state \ttfun bool
\ tipping_point a \ttequiv motivation a \ttneq \{\} \ttand motivation a \ttneq \{approval_hungry\}
\ttand happy \ttneq psy_state a
\end{ttbox}
To embed the fact that the attacker is an insider, the actor can then
impersonate other actors.
This assumption entails that an insider \texttt{Actor ''Eve''} can act like
their alter ego, say \texttt{Actor ''Charlie''} within the context of the locale.
This is realized by the predicate \texttt{UasI}.
\begin{ttbox}
UasI a b \ttequiv (Actor a = Actor b) \ttand
\ttforall x y. x \ttneq a \ttand y \ttneq a \ttand Actor x = Actor y \ttimp x = y
\end{ttbox}
Note that this predicate also stipulates that the function \texttt{Actor}
is injective for any other than the identities \texttt{a} and \texttt{b}.
This completes the Actor function to an ``almost everywhere injective function''.
Insiderness can now be defined as a rule that is triggered by conditions that may
be valid in a state of the infrastructure. For the malicious insider, this condition
has been the ``tipping point'' for an actor's state (given here as the parameterized \texttt{as a}).
To extend insiderness to unintentional insiders, we simply add the \texttt{unaware} predicate as an additional
sufficient condition to the rule.
\begin{ttbox}
Insider a C as \ttequiv tipping_point (as a) \ttor unaware (as a)
\ttimp (\ttforall b \ttin C. UasI a b)
\end{ttbox}
Although the above insider predicate is a rule, it is not axiomatized.
It is just an Isabelle definition, that is, it serves as an abbreviation.
To use it in an application, like the auction protocol, we can apply
this rule as a local assumption in theorems
or via the \texttt{assumes} feature of locales \cite{kpw:99}.
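The ``almost everywhere injective'' reading of \texttt{Actor} under \texttt{UasI} can be illustrated by a Python sketch in which the insider's identity is mapped to the role of her alter ego; the `make_actor_map` helper and the role strings are illustrative assumptions, not part of the formalisation.

```python
# Sketch: an Actor map that is injective except that the insider "Eve"
# shares the role of her alter ego -- mirroring UasI ''Eve'' ''Charlie''.

def make_actor_map(identities, insider=None, alter=None):
    """Map each identity to a role; the insider gets the alter's role."""
    actor = {i: f"role_{i}" for i in identities}
    if insider is not None and alter is not None:
        actor[insider] = actor[alter]   # Actor insider = Actor alter
    return actor

actor = make_actor_map({"Alice", "Bob", "Charlie", "Eve"},
                       insider="Eve", alter="Charlie")

print(actor["Eve"] == actor["Charlie"])  # True: Eve acts as Charlie
print(actor["Alice"] == actor["Bob"])    # False: injective elsewhere
```

Outside the identified pair, distinct identities keep distinct roles, exactly the second conjunct of \texttt{UasI}.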
Based on the state transition and the above defined
\texttt{sn\_scenario}, we define the first Kripke structure.
\begin{ttbox}
sn_Kripke \ttequiv
Kripke \{ I. sn_scenario \ttrelIstar I \} \{sn_scenario\}
\end{ttbox}
\subsection{Attack: Eve can get data}
\label{sec:getatt}
How do we find attacks? The key is to use invalidation \cite{kp:14}
of the security property we want to achieve, here the global policy.
Since we consider a predicate transformer semantics, we use
sets of states to represent properties.
The invalidated global policy is given by the following set \texttt{ssn}.
\begin{ttbox}
ssn \ttequiv \{x. \ttneg (global_policy x ''Eve'')\}
\end{ttbox}
The attack we are interested in is to see whether for the scenario
\begin{ttbox}
sn_scenario \ttequiv Infrastructure ex_graph local_policies
\end{ttbox}
from the initial state \texttt{Isn \ttequiv \{sn\_scenario\}},
the critical state \texttt{ssn} can be reached,
that is, is there a valid attack \texttt{(Isn,ssn)}?
For the Kripke structure
\begin{ttbox}
sn_Kripke \ttequiv Kripke \{ I. sn_scenario \ttrelIstar I \} Isn
\end{ttbox}
we first derive a valid and-attack using the attack tree proof calculus.
\begin{ttbox}
\ttvdash [\ttcalN{(Isn,SN)}, \ttcalN{(SN,ssn)}]\ttattand{\texttt{(Isn,ssn)}}
\end{ttbox}
The set \texttt{SN} is an intermediate state where \texttt{Alice} moves to instagram
to then put her data \texttt{''Alice's\_diary''} there.
The attack tree calculus \cite{kam:18b} exhibits that an attack is possible.
\begin{ttbox}
sn_Kripke \ttvdash {\sf EF} ssn
\end{ttbox}
The attack tree formalisation in the Isabelle Infrastructure framework provides
adequacy, that is, Correctness and Completeness theorems for the relationship between
attack trees and CTL statements \cite{kam:18b}.
We can thus simply apply the Correctness theorem \texttt{AT\_EF} to
immediately prove CTL-{\sf EF} statements. This application of the meta-theorem
of Correctness of attack trees saves us proving the CTL formula tediously
by exploring the state space in Isabelle proofs. Alternatively, we could use
generated code for the function \texttt{is\_attack\_tree} in Scala \cite{kam:21}
to check that a refined attack of the above is valid.
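The CTL statement \texttt{sn\_Kripke \ttvdash\ {\sf EF} ssn} asserts that a state violating the global policy is reachable from the initial state. Independently of the Isabelle proofs, reachability of a critical state can be checked by a breadth-first search over an explicitly given finite transition relation, as in this illustrative sketch (the toy three-state system is ours):

```python
from collections import deque

def ef_reachable(init_states, successors, critical):
    """True iff some state satisfying `critical` is reachable
    from `init_states` under the transition function `successors`."""
    seen, queue = set(init_states), deque(init_states)
    while queue:
        s = queue.popleft()
        if critical(s):
            return True
        for t in successors(s):
            if t not in seen:
                seen.add(t)
                queue.append(t)
    return False

# Toy 3-state system: 0 -> 1 -> 2, where state 2 violates the policy.
succ = {0: [1], 1: [2], 2: []}
print(ef_reachable([0], lambda s: succ[s], lambda s: s == 2))  # True
print(ef_reachable([0], lambda s: succ[s], lambda s: s == 3))  # False
```

This is exactly the semantics of {\sf EF} on a finite Kripke structure: existence of a path from the initial states into the critical set.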
\section{Conclusions}
\subsection{Related work on awareness}
Awareness contributes to having knowledge of something; thus, security awareness could be considered as
a cognitive behavioural response to security and understanding its consequences. Some studies investigate
this possible understanding of internet and cyber security awareness, such as Bulgur \cite{bcb:10}.
Korovessis et al. \cite{kfph:17} introduce a ``toolkit approach to information security awareness and education'',
focusing on organisations and the importance of user training.
Training in this sense focuses on teaching skills to safeguard information.
They conducted a series of surveys, focus groups, and interviews with different
participant groups and ages to establish the effectiveness of the toolkit. The results showed that the prototype
was successful in establishing awareness; however, limitations showed in the delivery of the approach,
as the kit was not accessible to everyone.
Kruger and Kearney \cite{kk:06} establish a model prototype for assessing information security awareness. The model
focuses on knowledge, attitude, and behaviour, and the effectiveness of the approach is assessed by the resulting
attitudes and behaviour towards the topic.
As stated by Lacey \cite{lac:09}, the gap in internet security is not the technology, but fundamentally the
awareness in people.
Bada, Sasse and Nurse \cite{bsn:15} also investigated, from a psychological perspective, how lack of
motivation leads to poorly designed security systems and poor security compliance.
The study results showed
raised awareness and positive effects on creating a ``security minded culture''. By introducing human
factors into awareness campaigns, the results became more positive, showing us that security awareness can
be increased if the tool used is more personal and relatable.
Bada, Sasse and Nurse \cite{bsn:15} provide a literature based survey on the effectiveness of campaigns
on human behaviour comparing cyber security awareness campaigns in Africa and UK.
They review Dolan et al's nine critical factors which influence and change human behaviour. Although
these factors provide an even finer granularity of categorizing human motivations, they are aligned
with the psychological characterization by Homans \cite{hom:61} that we use as a basis for our model.
While our work uses an experimental approach, their survey \cite{bsn:15} also leads them to conclude that
``security education has to be more than providing information to users – it needs to be targeted,
actionable, doable and provide feedback''. Our approach is aligned with their findings, since our security
tool and modeling enable a ``targeted, actionable, doable'' analysis of a social network, leading
to feedback to the user.
Labuschagne et al \cite{lbve:11}
proposed a game hosted by social networking sites to increase security awareness.
The game uses social networks, something that is accessible by those at home and at work.
Lack of security knowledge is what makes people vulnerable and unable to protect their information,
an idea clearly stated by Kritzinger and von Solms \cite{ks:10}: with the internet becoming so involved
in personal lives, it is paramount that the tools to raise awareness be accessible to all. The
approach utilises a medium that is popular and therefore accessible. Whilst a prototype has not been
created, the approach would have to be analysed to see whether it actually increases public awareness
of internet security. In this scenario, a game hosted by social media sites would probably be used
mainly by a younger audience, producing a limitation in the form of a non-inclusive medium that leaves
a large part of the public uneducated.
Abawajy [18] concludes that combined delivery methods of text, video, and games are a more
suitable approach to deliver security awareness than individual ones, as they create an inclusive
audience.
\subsection{Related work on Isabelle Insider and Infrastructure framework}
\label{sec:relisa}
A whole range of publications have documented the development of the Isabelle Insider framework.
The publications \cite{kp:13,kp:14,kp:16} first define the fundamental notions of insiderness, policies,
and behaviour, showing how these concepts can express the classical insider threat patterns
identified in the seminal CERT guide on insider threats \cite{cmt:12}.
This Isabelle Insider framework has been applied to auction protocols \cite{kkp:16,kkp:16a}. An airplane
case study \cite{kk:16,kk:21} revealed the need for dynamic state verification, leading to
the extension by a mutable state. Meanwhile, the embedding of Kripke structures and CTL
into Isabelle has enabled the emulation of model checking and provided a semantics for attack
trees \cite{kam:17a,kam:17b,kam:17c,kam:18b,kam:19a}.
Attack trees have provided the leverage to integrate Isabelle formal reasoning for IoT systems,
as has been illustrated in the CHIST-ERA project SUCCESS \cite{suc:16}, where
attack trees have been used in combination with the Behaviour Interaction Priority (BIP) component
architecture model to develop security and privacy enhanced IoT solutions.
This development has emphasized the technical rather than the psychological side of the framework
development and thus branched off the development of the Isabelle {\it Insider} framework into the
Isabelle {\it Infrastructure} framework. Since the strong expressiveness of Isabelle allows formalizing
the IoT scenarios as well as actors and policies, the latter framework can also be applied to
evaluate IoT scenarios with respect to policies like the European data privacy regulation
GDPR \cite{kam:18a}. The application to security protocols, first pioneered in the
auction protocol application \cite{kkp:16,kkp:16a}, has further motivated the analysis of Quantum Cryptography,
which in turn necessitated the extension by probabilities \cite{kam:19b,kam:19c,kam:19d}.
Requirements raised by these various security and privacy case studies have shown the need for a
cyclic engineering process for developing specifications and refining them towards implementations.
A first case study takes the IoT healthcare application and exemplifies a step-by-step
refinement interspersed with attack analysis using attack trees to increase privacy by ultimately
introducing a blockchain for access control \cite{kam:19a}. This formalisation of secure distributed data
labels has given rise to a generalisation to sets of blockchains for inter-blockchain protocols \cite{kn:20}.
First ideas to support a dedicated security refinement process are available in a preliminary
arXiv paper \cite{kam:20a}, but the first work to fully formalize the RR-cycle
and illustrate its application completely is the one on the Corona-virus Warn App (CWA) \cite{kl:20}.
\subsection{Discussion and Outlook}
We have presented a pragmatic action research study into awareness in social networks. User awareness
interviews have given evidence to design, implement, and test a web-based tool that shows the user
how much is known about her. This feedback leads users to be more cautious and not give private data to
social networks. In our research, we have followed the action research methodology, that is, we have used
quantitative and qualitative research with practical interventions, which consisted in implementing
a web application tool based on social network APIs for feeding back to users what is visible of their data.
In addition, we mechanized formal modeling and analysis for social network scenarios including
human actors in Isabelle.
For the latter application, we have used the Isabelle Insider framework to provide a dynamic logic model
enabling
(1) formally reproducing the experimental scenario and (2) embedding the notion of awareness in the general
security notion of insiderness. We have thus linked up social network analysis to formal security engineering
and provided a novel formal notion of unintentional insiderness.
\bibliographystyle{abbrv}
\section{Introduction}
Let $p$ be a prime number and $k$ be an algebraically closed field of characteristic $p$. A finite group $G$ is said to be a \textit{quasi} $p$-\textit{group} if $G$ is generated by all its Sylow $p$-subgroups. In \cite{Abh_57}, Abhyankar posed a conjecture stating that the quasi $p$-groups are precisely the groups occurring as the Galois groups of connected Galois \'{e}tale covers of the affine $k$-line $\mathbb{A}^1$. This is in sharp contrast with the behavior of finite covers defined over $\mathbb{C}$, as there is no non-trivial \'{e}tale cover of the Riemann sphere. The conjecture (known as Abhyankar's Conjecture on the affine line) was proved by Serre (for solvable quasi $p$-groups; \cite{Serre_AC}) and Raynaud (\cite{Raynaud_AC}). Any such Galois \'{e}tale cover of $\mathbb{A}^1$ extends uniquely to a Galois cover of the projective $k$-line $\mathbb{P}^1$ that is branched over $\infty$ and \'{e}tale everywhere else. Moreover, the inertia groups above $\infty$ are conjugate to each other. For a quasi $p$-group $G$ and a subgroup $I \, \subset \, G$, we say that the pair $(G, \, I)$ is \textit{realizable} if there is a connected $G$-Galois cover of $\mathbb{P}^1$ branched only at $\infty$ such that, at a point above $\infty$, the inertia group is $I$. From the theory of extensions of local fields, $I$ is necessarily an extension of a $p$-group by a cyclic group of order prime-to-$p$. The question arises whether, given a quasi $p$-group $G$ and a subgroup $I \, \subset G$ which potentially is an inertia group (i.e., has the above necessary form), the pair $(G, \, I)$ is realizable. Abhyankar conjectured a necessary and sufficient group-theoretic condition, now known as the Inertia Conjecture.
\begin{conj}[The IC, {\cite[Section 16]{Abh_01}}]\label{conj_IC}
Let $G$ be a finite quasi $p$-group. Let $I$ be a subgroup of $G$ which is an extension of a $p$-group $P$ by a cyclic group of order prime-to-$p$. Then the pair $(G, \, I)$ is realizable if and only if the conjugates of $P$ in $G$ generate $G$ (in notation, $G = \, \langle \, P^G \, \rangle$).
\end{conj}
It can be seen that the condition on $I$ in the conjecture is necessary. When $I = P$ is a $p$-group, the special case of the above conjecture is known as the Purely Wild Inertia Conjecture (henceforth referred to as the PWIC).
\begin{conj}[The PWIC, {\cite[Section 16]{Abh_01}}]\label{conj_PWIC}
Let $G$ be a finite quasi $p$-group. Let $P$ be a $p$-subgroup of $G$. Then the pair $(G, \, P)$ is realizable if and only if $G = \langle \, P^G \, \rangle$.
\end{conj}
It is known that the Inertia Conjecture is true for $p$-groups. From \cite[Theorem 2]{2}, it follows that for any quasi $p$-group $G$ and a Sylow $p$-subgroup $P$ of $G$, the pair $(G, \, P)$ is realizable. In particular, the PWIC is true for any quasi $p$-group whose order is strictly divisible by $p$. In other earlier works, the Inertia Conjecture was shown to be true for a few groups. For a summary of these works and other important developments, see \cite[Section 4]{survey_paper}. Recent works \cite{DK}, \cite{Das}, \cite{Dean} shed light on several systematic ways to construct Galois \'{e}tale covers of the affine line. As a consequence, a larger class of groups provides evidence towards the above conjectures. In general, the status of the conjectures remains open at the moment. In \cite{Das}, the PWIC was generalized (the Generalized Purely Wild Inertia Conjecture or GPWIC) to the case of multiple branch points in $\mathbb{P}^1$.
\begin{conj}[The GPWIC, {\cite[Conjecture 6.8]{Das}}]\label{conj_GPWIC}
Let $G$ be a finite quasi $p$-group. Let $P_1, \, \cdots, \,P_r$ be non-trivial $p$-subgroups of $G$ for some $r \geq 1$ such that $G \, = \, \langle P_1^G,\, \cdots, \, P_r^G \rangle$. Let $B = \{x_1, \, \cdots, \, x_r\}$ be a set of closed points in $\mathbb{P}^1$. Then there is a connected $G$-Galois cover of $\mathbb{P}^1$ \'{e}tale away from $B$ such that $P_i$ occurs as an inertia group above the point $x_i$ for $1 \leq i \leq r$.
\end{conj}
In this paper, we concentrate mostly on the PWIC and on its above-mentioned generalization. One interesting class of groups suitable for these conjectures consists of the Alternating and Symmetric groups and their products. When $p$ is an odd prime, $A_d$ is a quasi $p$-group for all $d \geq \max\{5,p\}$; when $p = 2$, $A_d$ for $d \geq 5$ is a quasi $2$-group and $S_d$ for $d \geq 3$ is a quasi $2$-group. Moreover, any product of quasi $p$-groups is again a quasi $p$-group. This way, we have a large class of potential candidates for which the above conjectures can be checked.
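As a quick illustration of the quasi $p$ property (a standard argument, recalled here for convenience), consider $p = 2$ and $A_5$: the subgroup generated by all Sylow $2$-subgroups is normal in $A_5$, since the set of Sylow $2$-subgroups is closed under conjugation, and it is non-trivial; simplicity of $A_5$ then forces
\[
\bigl\langle\, P \, : \, P \in \mathrm{Syl}_2(A_5) \,\bigr\rangle \, = \, A_5 ,
\]
so $A_5$ is a quasi $2$-group. The same argument shows that any non-abelian simple group whose order is divisible by $p$ is a quasi $p$-group.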
When $p$ is an odd prime, \cite[Theorem 1.7]{Das} shows that the GPWIC is true for any product $A_{d_1} \times \cdots \times A_{d_n}$, $n \geq 1$, where each $d_i = p$ or $d_i \geq p+1$ is co-prime to $p$. In Corollary~\ref{cor_GPWIC_Alt_products} we remove this restriction on the $d_i$'s, hence proving the GPWIC for arbitrary product of simple quasi $p$ Alternating groups. In view of the proof of \cite[Theorem 7.4]{Das}, this boils down to showing the following important result.
\begin{theorem}[{Theorem \ref{thm_PWIC_Alternating}}]\label{thm_intro_PWIC_Alternating_multiple_p}
Let $p$ be an odd prime, $r \geq 1$. Then the PWIC (Conjecture~\ref{conj_PWIC}) is true for $A_{rp}$.
\end{theorem}
To prove this, we first apply a formal patching result, Lemma~\ref{lem_fp_main} (which is very similar to \cite[Theorem~2.3.7]{Pries}), to a cover obtained in \cite[Theorem 5.2]{DK} and a suitable Harbater-Katz-Gabber cover to obtain a connected $S_{rp}$-Galois cover of $\mathbb{P}^1$ that is \'{e}tale away from $\{0, \infty\}$. Taking a pullback under a Kummer cover, we obtain the result. The same idea can be used in a more general setting, which is Proposition \ref{prop_purely_wild_from_tame}. This approach is different from the previously employed methods to construct purely wildly ramified covers. Namely, we first enlarge the Galois group, enlarge the inertia by a prime-to-$p$ part, and add new branch points; then we obtain our cover with the desired properties via a suitable pullback. We also note that Lemma~\ref{lem_fp_main} establishes (Corollary~\ref{cor_A_p+1}) the Inertia Conjecture (Conjecture~\ref{conj_IC}) for $A_{p+1}$ for any prime $p \geq 5$ (this was proved in \cite[Theorem~5.3]{Das} under the condition that $p \equiv 2 \pmod{3}$).
We also prove the following important theorem for a general product of perfect groups.
\begin{theorem}[{Theorem~\ref{thm_product_perfect}}]
Let $p$ be a prime number. Let $G_1$ and $G_2$ be two perfect quasi $p$-groups such that the PWIC holds for $G_1$ and $G_2$. Then the PWIC is true for $G_1 \times G_2$.
\end{theorem}
Such a result was proved (\cite[Theorem~7.5]{Das}) for general quasi $p$-groups $G_1$ and $G_2$ under the assumption that $G_1$ and $G_2$ do not have a common non-trivial quotient.
When $p=2$, we consider the product of Alternating and Symmetric groups. There is no earlier result towards the construction of such covers with prescribed inertia groups. We establish the following evidence towards the GPWIC.
\begin{theorem}[{Corollary~\ref{cor_GPWIC_arbit_product_char_2}}]\label{cor_intro_char_2_all_prod}
When $p = 2$, the GPWIC (Conjecture~\ref{conj_GPWIC}) is true for any product $G = G_1 \times \cdots \times G_n$, $n \geq 1$, where $G_i = A_{d_i}$ for some $d_i \geq 5$ with $4 \nmid d_i$ or $G_i = S_{d_i}$ for an odd integer $d_i \geq 5$.
\end{theorem}
The above result is proved in several steps. After observing a group theoretic characterization of the potential inertia groups in $A_d$- and $S_d$-covers (Lemma~\ref{lem_wild_candidates}), we prove the PWIC for $A_d$ when $4 \nmid d$, $d \geq 5$ (Theorem~\ref{thm_PWIC_A_d_char_2}) and for $S_d$ when $d \geq 5$ is an odd integer (Theorem~\ref{thm_PWIC_S_d_char_2}). These results are consequences of formal patching techniques (\cite[Theorem~2.2.3]{Raynaud_AC}, \cite[Theorem~2]{2}); in fact, the covers are constructed out of several simpler Galois covers in which the Sylow $2$-subgroups are realized as inertia groups. Then, applying the patching results again to these $A_d$ and $S_d$ covers, one obtains the general Theorem~\ref{cor_intro_char_2_all_prod}.
We also consider the permutation wreath products $N \wr A_d$ for simple quasi $p$-groups $N$ and $A_d$ for which the PWIC has already been established. A brief introduction to these groups and the characterization of the potential inertia groups is in \Cref{sec_wreath}. In this direction, we prove Theorem~\ref{thm_perfect_group}: for a perfect quasi $p$-group $G$ and a product $P_1 \times P_2 \subset G$ of $p$-subgroups, the pair $(G, \, P)$ is realizable for an index $p$ subgroup $P$ of $P_1 \times P_2$, provided that $P_1$ and $P_2$ are realized as inertia groups for covers with certain subgroups of $G$ as Galois groups. We explicitly write down the families of covers and use patching techniques extensively to obtain a $G$-Galois \'{e}tale cover of the affine line such that $P_1 \times P_2$ occurs as an inertia group at a point above $\infty$, and the local extension near $\infty$ has a lower jump at $1$ (see \Cref{sec_filtration} for a brief summary on jumps in the ramification filtration). This allows us to apply \cite[Theorem~3.7]{Manish_Killing} to finally obtain a cover with $P$ as an inertia group at a point above $\infty$. Using this result, we obtain some covers with $N \wr A_d$ (also with $N \wr S_d$ when $p = 2$) as a Galois group.
The structure of the article is as follows. \Cref{sec_notation} is a review of the notation and definitions we will be using throughout the paper. \Cref{sec_patching} contains formal patching results which are used in the later parts. The GPWIC (Conjecture~\ref{conj_GPWIC}) for the products of Alternating groups (for an arbitrary prime $p$) and Symmetric groups ($p = 2$) are the content of \Cref{sec_odd} and \Cref{sec_two}. The partial results towards the PWIC (Conjecture~\ref{conj_PWIC}) for a certain type of quasi $p$ wreath products are obtained in \Cref{sec_PWIC_wreath}. Finally, we review some details on the ramification filtration in \Cref{sec_filtration}, and the details on the wreath products (with a view towards the PWIC) are in \Cref{sec_wreath}.
\section{Notation, Convention and Definitions}\label{sec_notation}
We will use the following notation throughout this article without further mention.
\begin{enumerate}
\item Let $p$ be a prime number. Let $k$ be an algebraically closed field of characteristic $p$.
\item All the $k$-curves considered will be smooth connected curves over $\text{Spec}(k)$, unless otherwise specified.
\item In this article, we work with finite group covers. We will be mostly interested in the Alternating and Symmetric groups of degree $\geq 5$.
\item For a finite group $G$, let $p(G)$ denote the (necessarily normal) subgroup of $G$ generated by all the Sylow $p$-subgroups of $G$. A finite group $G$ is said to be a \textit{quasi {$p$}-group} if $p(G) = G$.
\item For any scheme $X$ and a point $x \in X$, the local ring at $x$ is denoted by $\mathcal{O}_{X,x}$. We denote its completion at the maximal ideal by $\widehat{\mathcal{O}}_{X,x}$. When the local ring is a domain, so is $\widehat{\mathcal{O}}_{X,x}$; in this case, $K_{X,x}$ stands for the fraction field of $\widehat{\mathcal{O}}_{X,x}$.
\end{enumerate}
Let $R$ be a Noetherian ring. An $R$-scheme will be assumed to be of finite type over $\text{Spec}\left( R \right)$. By an \textit{{$R$}-curve}, we mean an $R$-scheme that is flat of relative dimension $1$ whose generic fibres are smooth and geometrically connected. A \textit{cover} of $R$-curves is defined to be a finite, generically separable morphism of $R$-curves. We say that a cover $Y \, \longrightarrow \, X$ is \textit{connected} (respectively, \textit{integral} or \textit{normal}) if $X$ and $Y$ are both \textit{connected} (respectively, integral or normal). For a finite group $G$, a cover $\phi \, \colon \, Y \, \longrightarrow \, X$ of integral $R$-curves is said to be \textit{Galois with group }$G$ or $G$-\textit{Galois} if there is an inclusion of groups $G \hookrightarrow \text{Aut}\left( Y/X \right) \coloneqq \{ \sigma \in \text{Aut}_k(Y) \, | \, \phi \circ \sigma = \phi \}$ via which $G$ acts simply transitively on each generic geometric fibre. The Galois theory and the ramification theory of covers of Noetherian normal schemes are standard in the literature; see \cite[Chapter~IV]{Serre_loc}, \cite[Section~3]{Thesis}. Any cover $f \, \colon \, Y \longrightarrow X$ of connected integral $R$-curves is \'{e}tale at the generic point of $Y$. So it is \'{e}tale away from a closed set of codimension $\geq 1$. The closed subset of points in $Y$ where $f$ is not \'{e}tale (which may be empty) is said to be the \textit{ramification locus} of $f$, and its image in $X$ is said to be the \textit{branch locus} of $f$. If $X$ and $Y$ are also smooth, the branch locus is either empty or is pure of codimension one by Zariski's purity of branch locus (cf. \cite[Proposition~2]{Zar}).
Our objects of study are the inertia groups over points in a $G$-Galois cover $f \, \colon \, Y \longrightarrow X$ of smooth connected $k$-curves for a finite group $G$. Since $k$ is an algebraically closed field, the inertia group at a point $y \in Y$ is the stabilizer subgroup of $y$ under the natural transitive $G$-action on the finite set $f^{-1}(f(y)) \subset Y$. It turns out that the inertia group at $y$ is also the Galois group of the Galois field extension $K_{Y,y}/K_{X,f(y)}$. Moreover, for $y, \, y' \in Y$ with $f(y) = f(y')$, the inertia groups at $y$ and $y'$ are conjugate in $G$. For a subgroup $I$ of $G$, we say that \textit{{$I$} occurs as an inertia group at a point {$x \in X$}} if there is a point $y \in Y$ with $f(y) = x$ such that $I = \text{Gal}\left( K_{Y,y}/K_{X,x} \right)$. By \cite[Chapter~IV, Corollary~4]{Serre_loc}, every inertia group $I$ is of the form $I = P \rtimes \mathbb{Z}/m$ for some $p$-group $P$ and an integer $m$ co-prime to $p$. We say that the inertia group $I$ is \textit{purely wild} or that \textit{the cover {$f$} is purely wildly ramified over {$x$}} if $I$ is a $p$-group. Similarly, we say that \textit{{$f$} is tamely ramified over {$x$}}, or that \textit{{$I$} is a tame inertia group}, when $I$ is a cyclic group of order prime-to-$p$. We can associate the lower and the upper indexed ramification filtrations (finite, decreasing filtrations) to $I$; these are briefly recalled in \Cref{sec_filtration}. Our main interest in this article is when the base curve $X$ is the projective $k$-line.
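As a basic illustration of a purely wildly ramified cover (a standard example, not drawn from the results above; the coordinate $y$ and the integer $h$ below are ours), consider the Artin-Schreier cover of the projective line given by
\[
y^p - y \, = \, x^h, \qquad (h,p) = 1, \; h \geq 1,
\]
where $\mathbb{Z}/p$ acts by $y \longmapsto y + a$, $a \in \mathbb{F}_p$. This is a connected $\mathbb{Z}/p$-Galois cover of $\mathbb{P}^1$, \'{e}tale away from $\infty$ and totally ramified over $\infty$; the inertia group over $\infty$ is the purely wild group $\mathbb{Z}/p$, and $h$ is the unique jump in its ramification filtration.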
\begin{definition}\label{def_real_pair}
Let $G$ be a finite group, $I \subset G$. We say that the pair $(G, \, I)$ is \textit{realizable} if there exists a $G$-Galois cover $Y \longrightarrow \mathbb{P}^1$ of smooth projective connected $k$-curves that is \'{e}tale away from $\infty$, and $I$ occurs as an inertia group above $\infty$.
\end{definition}
In the above definition, $G$ is necessarily a quasi $p$-group (i.e. $G = p(G)$), and the group $I$ has the necessary structure of an inertia group. We also make the following definition.
\begin{definition}\label{def_Kummer_pullback}
Let $p$ be a prime number and $n$ be co-prime to $p$. The $[n]$-\textit{Kummer cover} is defined to be the unique connected $\mathbb{Z}/n$-Galois cover $\psi \, \colon \, Z \cong \mathbb{P}^1 \longrightarrow \mathbb{P}^1$ that is \'{e}tale away from $\{0,\infty\}$ and totally ramified over $0$ and $\infty$. Let $\phi \, \colon \, Y \longrightarrow \mathbb{P}^1$ be a connected $G$-Galois cover for a finite group $G$. Let $W$ be a dominant component in the normalization of $Y \times_{\mathbb{P}^1} Z$. We say that \textit{the cover} $W \longrightarrow Z$ \textit{is obtained by a pullback by the} $[n]$-\textit{Kummer cover}.
\end{definition}
In the above definition, the cover $W \longrightarrow Z$ is a Galois cover. By \cite[Corollary~5.15]{Thesis}, if $G \in \{A_d, S_d\}$ for some $d \geq p$, the cover $W \longrightarrow Z$ is a connected $A_d$-Galois cover.
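Concretely (a standard description, not spelled out above), if $x$ is a coordinate on the base $\mathbb{P}^1$ and $z$ a coordinate on $Z \cong \mathbb{P}^1$, the $[n]$-Kummer cover may be written as
\[
\psi \, \colon \, \mathbb{P}^1 \longrightarrow \mathbb{P}^1, \qquad z \longmapsto x = z^n,
\]
where a generator of $\mathbb{Z}/n$ acts by $z \longmapsto \zeta_n z$ for a primitive $n$-th root of unity $\zeta_n \in k$ (which exists since $(n,p)=1$ and $k$ is algebraically closed). The cover is \'{e}tale away from $\{0, \infty\}$, with tame cyclic inertia $\mathbb{Z}/n$ over both points.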
For any scheme $V$, we denote the normalization of the reduced scheme $V_{\rm red}$ by $V^{{\scalebox{1.5}{\ensuremath \sim}}}$.
For any $p$-group $P$, let $\mathcal{M}_P$ be the coarse moduli space of $P$-Galois \'{e}tale covers of $\text{Spec}\left( k((x)) \right)$ (see \cite[Proposition~2.1]{Ha_Moduli}), and let $\mathcal{M}^{\text{irr}}_P$ be the dense open subset of $\mathcal{M}_P$ parametrizing the covers that are irreducible (\cite[Remark~2.5(b)]{Ha_Moduli}).
\section{Formal patching results}\label{sec_patching}
A powerful technique to construct covers of curves is to use patching results in formal geometry, a.k.a. formal patching. This enables us to `patch' covers of curves defined over a power series ring $k[[t]]$ under suitable conditions. As a result, starting from some Galois covers, one obtains new covers with potentially bigger Galois and inertia groups. The resulting cover is again defined over the ring $k[[t]]$, and a further Lefschetz type argument produces covers defined over $k$. For details on formal patching, see \cite{Ha_St}; also see \cite[Section 3.4]{Thesis} for a brief summary. We start with the following patching result based on \cite[Proposition~2.3, Proposition~2.6, Corollary~2.7]{Ha_AC}.
\begin{lemma}\label{lem_patching_same_extension}
Let $p$ be a prime number. Let $G_1$ and $G_2$ be quasi $p$-subgroups of a finite group $G$. Suppose that there are $G_1$-Galois and $G_2$-Galois covers $f_1 \, \colon \, Z_1 \longrightarrow \mathbb{P}^1$ and $f_2 \, \colon \, Z_2 \longrightarrow \mathbb{P}^1$ of smooth projective $k$-curves, respectively, such that both covers are \'{e}tale away from $\infty$, and such that for points $z_1 \in Z_1$, \, $z_2 \in Z_2$ over $\infty$, there is an isomorphism
$$K_{Z_1,z_1} \, \cong \, K_{Z_2,z_2}$$
as Galois field extensions of $K_{\mathbb{P}^1,\infty}$. If $G_1$ and $G_2$ generate $G$, there is a $G$-Galois cover $Z \longrightarrow \mathbb{P}^1$ of smooth projective connected $k$-curves, \'{e}tale away from $\infty$, and for a point $z \in Z$ over $\infty$, we have $K_{Z,z} \cong K_{Z_1,z_1}$ as Galois extensions of $K_{\mathbb{P}^1,\infty}$.
\end{lemma}
\begin{proof}
Set $I = \text{Gal}\left( K_{Z_1,z_1}/K_{\mathbb{P}^1,\infty} \right)$. By our hypothesis, we also have $I = \text{Gal} \left( K_{Z_2,z_2}/K_{\mathbb{P}^1, \infty} \right)$. A standard formal patching argument together with a Lefschetz principle (cf. \cite[Section 2]{Ha_AC}) produces a connected $G$-Galois \'{e}tale cover of the affine line with $I$ as an inertia group above $\infty$. We use similar arguments with a variation of the Lefschetz type principle (also used in \cite[Lemma~3.3]{DK}) to obtain the required isomorphism of the local extensions.
Let $\mathcal{S}$ be a regular irreducible projective $k[[t]]$-curve whose generic fibre is $\mathbb{P}^1_{k((t))}$, and whose closed fibre is the union of two projective lines $\mathbb{P}^1_u$ and $\mathbb{P}^1_v$ (with local coordinates $u$ and $v$ near $\infty$) meeting at the point $s \coloneqq (u=0, v=0)$, and $\widehat{\mathcal{O}}_{\mathcal{S} , s} = k[[u,v]][t]/(uv-t) \, \cong \, k[[u,v]]$ (for a construction of $\mathcal{S}$, see \cite[Remark~3.36]{Thesis}). Let $Z_1- f_1^{-1}(\infty) = \text{Spec}\left( B_1 \right)$, and $Z_2 - f_2^{-1}(\infty) = \text{Spec}\left( B_2 \right)$. Consider the trivial deformation $\text{Spec}\left( \widehat{\mathcal{O}}_{Z_1,z_1}[v] \right) \longrightarrow \text{Spec}\left( k[[u]][v] \right)$, and set
$$\mathcal{T} \coloneqq \text{Spec}\left( \widehat{\mathcal{O}}_{Z_1,z_1}[v] \right) \times_{\text{Spec}\left( k[[u]][v] \right)} \text{Spec}\left( k[[u,v]] \right).$$
Then the hypotheses of \cite[Proposition~2.3]{Ha_AC} hold for the $G_1$-Galois and $G_2$-Galois \'{e}tale covers
$$\text{Spec}\left( B_1[[t]] \right) \longrightarrow \text{Spec}\left( k[u^{-1}][[t]] \right) \text{ and } \text{Spec}\left( B_2[[t]] \right) \longrightarrow \text{Spec}\left( k[v^{-1}][[t]] \right),$$
together with the following isomorphisms of covers (where the latter isomorphism is induced from the given isomorphism of field extensions $K_{Z_1,z_1} \, \cong \, K_{Z_2,z_2}$ over $K_{\mathbb{P}^1,\infty}$).
$$\mathcal{T} \times_{\text{Spec}\left( k[[u,v]] \right)} \text{Spec}\left( k((u))[[t]] \right) \, \cong \, \text{Ind}_{I}^{G_1} \, \text{Spec}\left( K_{Z_1,z_1}[[t]] \right), \, \text{and}$$
$$\mathcal{T} \times_{\text{Spec}\left( k[[u,v]] \right)} \text{Spec}\left( k((v))[[t]] \right) \, \cong \, \text{Ind}_{I}^{G_2} \, \text{Spec}\left( K_{Z_2,z_2}[[t]] \right).$$
Since $G = \langle \, G_1, G_2 \, \rangle$, we obtain an irreducible normal $G$-Galois cover $h \, \colon \, \mathcal{V} \longrightarrow \mathcal{S}$ such that the following $G$-equivariant isomorphisms of normal covers hold.
\begin{eqnarray*}
\mathcal{V} \times_{\mathcal{S}} \text{Spec}(k[u^{-1}][[t]]) & \cong & \text{Ind}_{G_1}^G \, \text{Spec}\left( B_1[[t]] \right);\\
\mathcal{V} \times_{\mathcal{S}} \text{Spec}(k[v^{-1}][[t]]) & \cong & \text{Ind}_{G_2}^G \, \text{Spec}\left( B_2[[t]] \right); \, \text{and}\\
\left( \mathcal{V} \times_{\mathcal{S}} \text{Spec}\left( k[[u,v]] \right) \right)^{{\scalebox{1.5}{\ensuremath \sim}}} & \cong & \text{Ind}_{I}^G \, \mathcal{T}.
\end{eqnarray*}
The last isomorphism of normal covers over $\text{Spec} \left( k[[u,v]] \right)$ corresponds to (via pushforwards of the structure sheaves) a $G$-equivariant isomorphism of coherent sheaves of $\mathcal{O}_{\text{Spec}\left( k[[u,v]] \right)}$-algebras. Hence it is defined locally by matrix equations involving only finitely many functions over $\text{Spec}\left( k[[u,v]] \right)$. So there is a finite type $k[t]$-algebra $A \subset k[[t]]$ such that $E = \text{Spec}(A)$ is smooth and connected, and an irreducible $G$-Galois cover $h_A \, \colon \, F_A \longrightarrow \mathbb{P}^1_A$ with the following properties: $h_A$ is \'{e}tale away from $\infty_A \coloneqq \infty \times_{\text{Spec}(k)} E$, \, $I$ occurs as an inertia group above $\infty_A$, the fibre over each point of $E$ is irreducible and non-empty, and there is an isomorphism
$$F_A \times_{E} \text{Spec}\left( k[[t]] \right) \, \cong \, \mathcal{V}$$
of normal, irreducible $G$-Galois covers of $\mathbb{P}^1_A \times_{E} \text{Spec}\left( k[[t]] \right) \, \cong \, \mathcal{S}$. For any $e \in E$, the $G$-Galois cover $(F_A \times_E e)^{{\scalebox{1.5}{\ensuremath \sim}}} \longrightarrow \mathbb{P}^1_k$ has the stated properties.
\end{proof}
Next, we discuss a formal patching result that will be used in our proof of the PWIC for Alternating groups (see Theorem \ref{thm_PWIC_Alternating}) in odd characteristic. It shows that an inertia group whose Sylow $p$-subgroup is cyclic of order $p$ can be realized for a group $G$ if it is realized for a subgroup of $G$. This is done in \cite{Pries} under the hypothesis that $p$ strictly divides the order of $G$.
\begin{lemma}\label{lem_fp_main}
Let $X$ be a smooth projective connected $k$-curve, $x \in X$ be a closed point. Let $G$ be a finite group, $I \subset G$ be an extension of a $p$-cyclic group $\langle \, \tau \, \rangle \cong \mathbb{Z}/p$ by a cyclic group $\langle c \rangle$ of order $m$, $(m,p) \, = \, 1$. Let $G_1$ and $G_2$ be subgroups of $G$ such that $\tau \in G_1, \, I \subset G_2$, and assume that the following hold.
\begin{enumerate}
\item There is a connected $G_1$-Galois cover $\psi_1$ of $X$ \'{e}tale away from a set $B \subset X$ of closed points with $x \in B$. Suppose that $\langle \, \tau \, \rangle$ occurs as an inertia group above $x$. For each point $y \neq x$ in $B$, let $I_y$ denote an inertia group above $y$.
\item $\psi_2 \, \colon \, Y_2 \longrightarrow \mathbb{P}^1$ is a connected $G_2$-Galois cover, \'{e}tale away from a set $\{0 = \eta_0, \eta_1, \ldots, \eta_r \}$ such that $I$ occurs as an inertia group above $0$, and for $1 \leq i \leq r$, let $J_i$ denote an inertia group above $\eta_i$.
\end{enumerate}
If $G = \langle \, G_1, G_2 \, \rangle$, there is a set $B' = \{ x_1, \ldots, x_r \}$ of closed points in $X$, disjoint from $B$, and a connected $G$-Galois cover $Y \longrightarrow X$, \'{e}tale away from $B \sqcup B'$, such that $I$ occurs as an inertia group above $x$, for each point $y \neq x$ in $B$, $I_y$ occurs as an inertia group above $y$, and $J_i$ occurs as an inertia group above $x_i$ for $1 \leq i \leq r$.
\end{lemma}
\begin{proof}
We have $I = \langle \, \tau \, \rangle \rtimes \langle \, c \, \rangle$. Let $m'$ be the order of the prime-to-$p$ part of the center of $I$. Let $u$ be a local parameter in $\mathbb{P}^1$ at $0$. Let $y_2 \in Y_2$ be a point above $0$ such that $I = \text{Gal}\left( K_{Y_2,y_2}/k((u)) \right)$. Let $h_2$ be the conductor of this extension. Then $(h_2,p)=1$, $(h_2,m)=m'$ and $m \, | \, h_2(p-1)$. Let $h_1'$ be the conductor of the local $\langle \, \tau \, \rangle \cong \mathbb{Z}/p$-Galois field extension in $\psi_1$ near $x$. So, $(h_1',p)=1$.
Choose $\gamma$ co-prime to $p$ such that $(\gamma,m)=1$, $p \nmid (\gamma m +1)$ and $\gamma h_2 \geq h_1'$. Set $h_1 \coloneqq \gamma h_2$. By \cite[Theorem 2.2.2]{Pries}, there is a connected $G_1$-Galois cover $\phi_1 \, \colon Y_1 \longrightarrow X$ with the same inertia groups and ramification behavior above the points $y \neq x$ in $X$, and there is a point $y_1 \in Y_1$ above $x$ such that the $\langle \, \tau \, \rangle$-Galois field extension $K_{Y_1,y_1}/K_{X,x}$ has conductor $h_1$.
Set $e \coloneqq h_1 m + h_2 = (\gamma m +1) h_2$. Then $(e,m) = (h_2,m) = m'$, \, $p \nmid e$, and $(m,\frac{e}{(h_1,h_2)}) = (m , \frac{e}{h_2}) = (m, \gamma m +1) = 1$, and \cite[Notation 2.3.2 and Numerical Hypothesis]{Pries} hold. Consider the integral $k[[t]]$-scheme $S \, \coloneqq \text{Spec}\left( k[[u,v,t]]/(uv-t^{\gamma m +1}) \right)$, and denote its closed fibre (the subscheme given by the locus of $t=0$) by $S'$. By \cite[Theorem 2.3.4]{Pries}, there is an irreducible $I$-Galois cover $\widehat{\phi} \, \colon \, \widehat{Z} \longrightarrow S$ whose generic fibre is irreducible, and the special fibre $\widehat{\phi}_k \, \colon \, Z_k \longrightarrow S'$ has the following properties.
\begin{eqnarray*}
Z_k \times_{S'} \text{Spec}\left( K_{X,x} \right) & \cong & \text{Spec}\left( K_{Y_1,y_1} \right),\\
Z_k \times_{S'} \text{Spec}\left( k((u)) \right) & \cong & \text{Spec}\left( K_{Y_2,y_2} \right)
\end{eqnarray*}
as Galois \'{e}tale covers of $\text{Spec}\left( K_{X,x} \right)$ and $\text{Spec}\left( k((u)) \right)$, respectively.
Let $T$ be a regular irreducible projective $k[[t]]$-curve whose generic fibre is $X_{k((t))}$, and whose closed fibre is $X \cup \mathbb{P}^1$ meeting at the point $\sigma = (x,0)$, which we identify with $(u=0,v=0)$ and which has equation $u v = t^{\gamma m +1}$ near $\sigma$ (so $S = \text{Spec}\left( \widehat{\mathcal{O}}_{T, \sigma} \right)$). Let $X - x = \text{Spec}\left( A \right)$, \, $Y_1 - \psi_1^{-1}(x) = \text{Spec}\left( B_1 \right)$, \, $Y_2 - \psi_2^{-1}(0) = \text{Spec}\left( B_2 \right)$. By \cite[Proposition 2.3]{Ha_AC}, there is an irreducible normal $\langle \, G_1, G_2 \, \rangle = G$-Galois cover $h \, \colon \, V \longrightarrow T$ such that
\begin{eqnarray*}
V \, \times_T \, \text{Spec}\left( A[[t]] \right) & \cong & \text{Ind}_{G_1}^G \, \text{Spec}\left( B_1[[t]] \right),\\
V \, \times_T \, \text{Spec}\left( k[u^{-1}][[t]] \right) & \cong & \text{Ind}_{G_2}^G \, \text{Spec}\left( B_2[[t]] \right), \, \text{and}\\
\left( V \, \times_T \, \text{Spec}\left( \widehat{\mathcal{O}}_{T,\sigma} \right) \right)^{{\scalebox{1.5}{\ensuremath \sim}}} & \cong & \text{Ind}_{I}^G \, \widehat{Z}
\end{eqnarray*}
as Galois covers of $\text{Spec}\left( A[[t]] \right)$, \, $\text{Spec}\left( k[u^{-1}][[t]] \right)$ and $S = \text{Spec}\left( \widehat{\mathcal{O}}_{T,\sigma} \right)$, respectively (the first two isomorphisms are of \'{e}tale Galois covers). Let $h^0 \, \colon \, V^0 \longrightarrow X_{k((t))}$ be the generic fibre of $h$. Then there exists a set $B' = \{x_1, \ldots, x_r\} \subset X$ disjoint from $B$ such that $h^0$ is \'{e}tale away from $\{b_{k((t))} \, | \, b \in B \sqcup B'\}$, \, $I$ occurs as an inertia group above $x_{k((t))}$, for each point $y \neq x$ in $B$, $I_y$ occurs as an inertia group above $y_{k((t))}$, and for each $1 \leq i \leq r$, $J_i$ occurs as an inertia group above $x_{i,k((t))}$. As $h^0$ is a cover of smooth irreducible $k((t))$-curves and the closed fibre of $h$ is generically smooth, the result follows from \cite[Corollary 2.7]{Ha_AC}.
\end{proof}
In the following, we see that given a Galois cover, we can always deform the cover to obtain a new cover with the same Galois and inertia groups, but with a bigger first upper jump in its higher ramification filtration (cf. \Cref{sec_filtration}). It will be used in the proof of our next result. When the inertia group is cyclic of order $p$, this follows from \cite[Theorem~2.2.2]{Pries}.
\begin{proposition}\label{prop_changing_first_upper_jump}
Let $p$ be any prime number and $G$ be a quasi $p$-group. Let $P \subset G$ be a $p$-group. Let $f \, \colon \, Y \longrightarrow \mathbb{P}^1$ be a $G$-Galois cover of smooth projective connected $k$-curves, \'{e}tale away from $\infty$, such that $P$ occurs as an inertia group over $\infty$. Suppose that $u \geq 1$ is the first upper jump in the higher ramification filtration for $P$ (cf. \Cref{sec_filtration}). For any integer $\tilde{u} \geq u$, $(\tilde{u},p)=1$, there is a connected $G$-Galois cover $Z \longrightarrow \mathbb{P}^1$, \'{e}tale away from $\infty$, such that $P$ occurs as an inertia group above $\infty$, and $\tilde{u}$ is the first upper jump in the ramification filtration of $P$.
\end{proposition}
\begin{proof}
Let $x_0$ be a local parameter of $\mathbb{P}^1$ near $\infty$. Let $y \in f^{-1}(\infty) \subset Y$ be a point such that $P = \text{Gal}\left( K_{Y,y}/k((x_0)) \right)$. Set $L \coloneqq K_{Y,y}$. Let $\text{Frat}(P)$ be the Frattini subgroup of $P$. Then $P/\text{Frat}(P) \cong (\mathbb{Z}/p)^e$ for some $e \geq 1$. By Proposition~\ref{prop_jump_quotients}, the $(\mathbb{Z}/p)^e$-Galois extension $L^{\text{Frat}(P)}/k((x_0))$ is the compositum of $e$-many $\mathbb{Z}/p$-Galois extensions defined by the Artin-Schreier polynomials $R_{i,0} = Z^p - Z - f_i(x_0) \in k((x_0))[Z]$ satisfying conditions \eqref{d:1}--\eqref{d:3}. Since $u$ is the smallest upper jump in the ramification filtration of $L/k((x_0))$, and $P/\text{Frat}(P)$ is the maximal elementary abelian quotient of $P$, there are prime-to-$p$ integers $ u = v_1 < v_2 < \ldots < v_t$ which occur as the upper jumps for the ramification filtration for $L^{\text{Frat}(P)}/k((x_0))$, and $\{v_{x_0^{-1}}(f_i(x_0)) \, | \, 1 \leq i \leq e\} = \{ u, v_2, \ldots, v_t\}$. Set $u_1 = \tilde{u}$. Let $u_2 < \cdots < u_e$ be integers co-prime to $p$ such that $u_i > \text{max}\{v_{x_0^{-1}}(f_i(x_0)) , \tilde{u}\}$ for each $2 \leq i \leq e$.
Consider the $P/\text{Frat}(P)$-Galois \'{e}tale cover
\begin{equation}\label{eq_1}
\psi' \, \colon \, W' = \text{Spec}\left( k((x_0))[t, Z_1, \ldots, Z_e]/(R_{1,t}, \ldots, R_{e,t}) \right) \longrightarrow \text{Spec}\left( k((x_0))[t] \right),
\end{equation}
where $R_{i,t} = Z_i^p - Z_i - f_i(x_0) - t x_0^{-u_i}$ for $1 \leq i \leq e$. For any $0 \neq \beta \in k$, the fibre of $\psi'$ over the point $(t=\beta)$ is a $P/\text{Frat}(P)$-Galois \'{e}tale cover of $\text{Spec}\left( k((x_0)) \right)$ with upper jumps $\tilde{u} = u_1 < u_2 < \ldots < u_e$. The fibre of $\psi'$ over $(t=0)$ is the cover corresponding to the field extension $L^{\text{Frat}(P)}/k((x_0))$. Let $M/k((x_0))$ be a $P$-Galois extension (which exists by \cite[Lemma~3, Example~5]{Ha_embed_er}) such that $M^{\text{Frat}(P)}/k((x_0))$ is the $P/\text{Frat}(P)$-Galois extension given by the compositum of the $\mathbb{Z}/p$-Galois extensions defined by the Artin-Schreier polynomials $R_{i,1}$, $1 \leq i \leq e$. Let $\psi \colon \, W \longrightarrow \text{Spec} \left( k[[x_0]][t] \right)$ be the $P/\text{Frat}(P)$-Galois cover obtained by taking normalization of $\text{Spec}\left( k[[x_0]][t] \right)$ in the function field of $W'$.
We claim the following.
\begin{description}\label{claim_1}
\item[\underline{Claim (*)}] There is a $P$-Galois cover
\begin{equation}\label{eq_theta}
\theta \, \colon \, T \longrightarrow \text{Spec}\left( k[[x_0]][t] \right)
\end{equation}
of connected integral schemes dominating the cover $\psi$ together with the isomorphisms
\begin{eqnarray}\label{eq_isom_1}
T \times_{\text{Spec}\left( k[[x_0]][t] \right)} \left( \text{Spec}\left( k[[x_0]] \right) \, \times \, (t=0) \right) \, \cong \, \text{Spec}\left( \widehat{\mathcal{O}}_{Y,y} \right),\\
\text{and } \, T \times_{\text{Spec}\left( k[[x_0]][t] \right)} \left( \text{Spec}\left( k((x_0)) \right) \, \times \, (t=1) \right) \, \cong \, \text{Spec}\left( M \right)
\end{eqnarray}
as $P$-Galois covers of $\text{Spec}\left( k[[x_0]] \right)$ and $\text{Spec}\left( k((x_0)) \right)$, respectively; the cover $\theta$ is branched only at $\infty \times \mathbb{A}^1_t$, over which it is totally ramified.
\end{description}
If $\text{Frat}(P) = \{1\}$, take $\theta = \psi$. Assume that $\text{Frat}(P) \neq \{1\}$. Applying \cite[Theorem~3.11]{Ha_extn} with $X'$ as the closed subset of $\text{Spec}\left( k((x_0))[t]\right)$ consisting of points $(t=0)$ and $(t=1)$, and $Z' \longrightarrow Y'$ (where $Y' = W' \times_{\text{Spec}\left( k((x_0))[t] \right)} X'$) given by the disjoint union of the $P/\text{Frat}(P)$-Galois \'{e}tale covers
$$\text{Spec}\left( L \right) \longrightarrow \text{Spec}\left( L^{\text{Frat}(P)} \right)$$
$$\text{and } \, \text{Spec}\left( M \right) \longrightarrow \text{Spec}\left( M^{\text{Frat}(P)} \right),$$
we obtain a $P$-Galois \'{e}tale cover
$$\theta' \, \colon \, T' \, \longrightarrow \text{Spec}\left( k((x_0))[t] \right)$$
of connected integral schemes that dominates the cover $\psi'$ given by Equation~\eqref{eq_1}, together with the following isomorphisms of $P$-Galois \'{e}tale covers of $\text{Spec}\left( k((x_0)) \right)$.
\begin{equation}\label{eq_etale_iso_1}
T' \times_{\text{Spec}\left( k((x_0))[t] \right)} \left( \text{Spec}\left( k((x_0)) \right) \times (t=0) \right) \, \cong \, \text{Spec}\left( L \right),
\end{equation}
$$ \text{and } \, T' \times_{\text{Spec}\left( k((x_0))[t] \right)} \left( \text{Spec}\left( k((x_0)) \right) \times (t=1) \right) \, \cong \, \text{Spec}\left( M \right).$$
Taking the normalization of $\text{Spec}\left( k[[x_0]][t] \right)$ in the function field of $T'$, we obtain $\theta \, \colon \, T \longrightarrow \text{Spec}\left( k[[x_0]][t] \right)$ satisfying the isomorphisms~\eqref{eq_isom_1}. It remains to show that $\theta$ is branched only at $\infty \times \mathbb{A}^1_t$, over which it is totally ramified. Since $\theta'$ is an \'{e}tale cover, the branch locus of $\theta$ is contained in $\infty \times \mathbb{A}^1_t$. The cover $\theta'$ corresponds to a morphism $\Theta \, \colon \, \mathbb{A}^1_t \longrightarrow \mathcal{M}_{P}$ such that $\Theta \left( (t=0) \right) \in \mathcal{M}^{\text{irr}}_{P}$, and $\Theta \left( (t=1) \right) \in \mathcal{M}^{\text{irr}}_{P}$ (here $\mathcal{M}_{P}$ and $\mathcal{M}^{\text{irr}}_{P}$ are the moduli spaces as in \Cref{sec_notation}). Since $\mathcal{M}^{\text{irr}}_{P} \subset \mathcal{M}_P$ is a dense open subset, for all points $(t = \beta)$ in a dense open subset of $\text{Spec}\left( k((x_0))[t] \right)$, we have $\Theta \left( (t = \beta) \right) \in \mathcal{M}^{\text{irr}}_{P}$. This finishes the proof of the claim~(*).
By \cite[Lemma~3.2 and Lemma~3.3]{DK}, there is a dense open subset $\mathcal{V} \subset \mathbb{A}^1_t$ such that for each closed point $(t=\beta)$ in $\mathcal{V}$ the following hold. There is a connected $G$-Galois cover $Z \longrightarrow \mathbb{P}^1$, \'{e}tale away from $\infty$, and for some point $z \in Z$ over $\infty$,
$$\text{Spec}\left( K_{Z,z} \right) \, \cong \, T' \times_{\text{Spec}\left( k((x_0))[t] \right)} \left( \text{Spec}\left( k((x_0)) \right) \times (t = \beta) \right)$$
as $P$-Galois covers of $\text{Spec}\left( k((x_0)) \right)$. In particular, $K_{Z,z}^{\text{Frat}(P)}$ has $\tilde{u}$ as the first upper jump. By Proposition~\ref{prop_jump_quotients}, $\tilde{u}$ is also the first upper jump for the group $P$ in the extension $K_{Z,z}/k((x_0))$. Now the result follows by taking any $\beta \neq 0$ with $(t = \beta) \in \mathcal{V}$.
\end{proof}
The following theorem uses formal patching techniques with \cite[Theorem~3.7]{Manish_Killing} to obtain a cover with a perfect group $G$ as the Galois group. It shows that if $P_1 \times P_2$ is a product of $p$-subgroups of $G$, and there are quasi $p$-subgroups $G_1$ and $G_2$ containing $P_1$ and $P_2$, respectively, such that the pairs $(G_1, \, P_1)$ and $(G_2, \, P_2)$ are realizable (see Definition~\ref{def_real_pair}), and that $G$ is generated by $G_1$ and $G_2$, then the pair $(G, \, P)$ is realizable for an index $p$ subgroup $P$ of $P_1 \times P_2$. We will use this to prove the realization of certain inertia groups in the context of a perfect wreath product (Proposition~\ref{prop_one_tau}) where $P_1$ and $P_2$ both have order $p$, and $G$ is generated by $G_1$ and $G_2$. A special case of this result is \cite[Theorem~5.2]{DK} where $G = G_1 \times G_2$ is a product of perfect quasi $p$-groups, $P_1$ is a cyclic $p$-group and $P_2$ has order $p$. The long proof of the theorem is broken into several steps for ease of reading. In the first step, we use patching techniques to obtain two Galois \'{e}tale covers of the affine line with $P_1 \times P_2$ as an inertia group above $\infty$ for both covers, and such that $1$ occurs as a lower jump (or equivalently, an upper jump; see Equation~\eqref{eq_jumps_relation}) in the corresponding ramification filtration. In the second step, we further use formal patching and obtain covers with the above properties such that, additionally, the local extensions over $\infty$ are isomorphic. This allows us to apply Lemma~\ref{lem_patching_same_extension}, and finally use \cite[Theorem~3.7]{Manish_Killing} (which can be seen as a purely wild analogue of Abhyankar's Lemma) to reduce the inertia in the last step.
\begin{theorem}\label{thm_perfect_group}
Let $p$ be a prime number and $G$ be a finite group. Let $\Gamma = P_1 \times P_2$ be a $p$-subgroup of $G$, where $P_1, \, P_2 \subset G$. Let $Q_1$ and $Q_2$ be two subgroups of index $p$ in $P_1$ and $P_2$, respectively. Suppose that $G_1$ and $G_2$ are two quasi $p$-subgroups of $G$ such that the pairs $(G_1, \, P_1)$ and $(G_2, \, P_2)$ are realizable. Set $H \coloneqq \langle \, G_1, \, G_2 \, \rangle \subset G$. Then there is a connected $H$-Galois cover $Y \longrightarrow \mathbb{P}^1$ that is \'{e}tale away from $\infty$ with the following ramification properties.
\begin{enumerate}
\item $\Gamma$ occurs as an inertia group at a point $y \in Y$ above $\infty$;\label{c:1}
\item $1$ occurs as a jump in the ramification filtration for $\Gamma$;\label{c:2}
\item the higher ramification group $\Gamma_{(2)}$ has index $p$ in $\Gamma$ and contains the group $Q_1 \, \times \, Q_2$.\label{c:3}
\end{enumerate}
Moreover, if $H$ is a perfect group, the pair $(H, \, \Gamma_{(2)})$ is realizable.
\end{theorem}
\begin{proof}
Let $x_0$ be the local parameter at $\infty$ for $\mathbb{P}^1$. So $K_{\mathbb{P}^1,\infty} = k((x_0))$. For $i \, = \, 1, \, 2$, let $\phi_i \, \colon \, X_i \longrightarrow \mathbb{P}^1$ be a connected $G_i$-Galois cover, \'{e}tale away from $\infty$, and let $x_i \in X_i$ be a point above $\infty$ such that $P_i = \text{Gal}\left( K_{X_i,x_i}/k((x_0)) \right)$. After possibly changing the covers $\phi_1$ and $\phi_2$ using Proposition~\ref{prop_changing_first_upper_jump}, we may assume that $1$ does not occur as a jump in the ramification filtration of $P_1$ or $P_2$.
\underline{Step 1:} We construct a connected \'{e}tale cover of the affine line with Galois group $H_1 \coloneqq \langle \, G_1, \, P_2 \, \rangle$ such that $\Gamma = P_1 \times P_2$ occurs as an inertia group at a point above $\infty$, and the first jump in the ramification filtration of $\Gamma$ is $1$.
Set $L_0 \coloneqq K_{X_1,x_1}$. Let the local $\mathbb{Z}/p \cong P_1/Q_1$-Galois extension $L_0^{Q_1}/k((x_0))$ be given by the Artin-Schreier polynomial
$$R_0 \, = \, Z^p - Z - f(x_0) \in k((x_0))[Z],$$
and let $v_{x_0^{-1}}(f(x_0)) = h$ for some integer $h$ coprime to $p$. Since $1$ is not a jump in the ramification filtration for $P_1$, we have $h \geq 2$. Let $\alpha_0 \in L_0^{Q_1}$ be a root of $R_0$. For each $\beta \in k$, $\beta \neq 0$, consider the $\mathbb{Z}/p \cong P_2/Q_2$-Galois field extension $E_\beta/k((x_0))$ given by the Artin-Schreier polynomial
$$R_\beta = Z^p - Z - f(x_0) - \beta x_0^{-1} \in k((x_0))[Z].$$
Let $\alpha_\beta$ be a root of $R_\beta$ in $E_\beta$.
We first show that the extensions $L_0$ and $E_\beta$ are linearly disjoint over $k((x_0))$, $\beta \neq 0$. If $N_1$ is a normal subgroup of $P_1$ such that $L_0^{N_1} \cong E_\beta$ as $\mathbb{Z}/p$-Galois extensions of $k((x_0))$, then $N_1 \neq Q_1$, and the $(\mathbb{Z}/p)^2 \cong P_1/(N_1 \cap Q_1)$-Galois extension $L_0^{N_1 \cap Q_1}/k((x_0))$ is isomorphic to the compositum $L_0^{Q_1} \cdot E_\beta/k((x_0))$. Since the valuation $v_{x_0^{-1}}\left( f(x_0) - (f(x_0) + \beta x_0^{-1}) \right) = 1 < h$, by \cite[Proposition~3.1]{Manish_Compositum}, $1$ is an upper jump of the extension $L_0^{Q_1} \cdot E_\beta/k((x_0))$. Thus $1$ is also an upper jump for the extension $L_0/k((x_0))$ (see Proposition~\ref{prop_jump}\eqref{j:2}), contradicting our assumption. So for $\beta \neq 0$, the extensions $L_0$ and $E_\beta$ are linearly disjoint over $k((x_0))$, and their compositum $F_\beta \coloneqq L_0 \cdot E_\beta$ is a $P_1 \times P_2/Q_2$-Galois extension of $k((x_0))$. Since $\alpha_\beta - \alpha_0$ is a root of the polynomial $Z^p-Z-\beta x_0^{-1}$, for $\beta \neq 0$, by \cite[Proposition~3.1]{Manish_Compositum}, the extension $F_\beta^{Q_2} = k((x_0))(\alpha_0, \alpha_\beta)/k((x_0))$ is a $(\mathbb{Z}/p)^2 \cong P_1/Q_1 \times P_2/Q_2$-Galois extension with two distinct upper jumps $1$ and $h$. Moreover, the ramification filtration is given by
$$(P_1/Q_1 \times P_2/Q_2)^{(1)} = P_1/Q_1 \times P_2/Q_2,$$
$$(P_1/Q_1 \times P_2/Q_2)^{(2)} = (P_1/Q_1 \times P_2/Q_2)^{(h)} = \text{Gal}\left( k((x_0))(\alpha_0,\alpha_\beta)/k((x_0))(\alpha_0-\alpha_\beta) \right).$$
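For concreteness (an illustration only, with the hypothetical choice $f(x_0) = x_0^{-h}$, which is not made in the text): $\alpha_0$ is then a root of $Z^p - Z - x_0^{-h}$, \, $\alpha_\beta$ is a root of $Z^p - Z - x_0^{-h} - \beta x_0^{-1}$, and $\alpha_\beta - \alpha_0$ is a root of $Z^p - Z - \beta x_0^{-1}$, so the $\mathbb{Z}/p$-Galois subextensions generated by $\alpha_0$, \, $\alpha_\beta$, and $\alpha_\beta - \alpha_0$ have conductors $h$, $h$, and $1$, respectively. The two upper jumps $1$ and $h$ of the compositum, and the displayed filtration, then follow from \cite[Proposition~3.1]{Manish_Compositum}.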
Consider the $P_1$-Galois and the $\mathbb{Z}/p \cong P_2/Q_2$-Galois \'{e}tale covers
$$\theta_1 \, \colon \, V_1 = \text{Spec}\left( L_0 \right) \times_k \mathbb{A}^1_t \longrightarrow \text{Spec}\left( \widehat{\mathcal{O}}_{\mathbb{P}^1,\infty} \right) \times_k \mathbb{A}^1_t = \text{Spec}\left( k((x_0))[t] \right),$$
$$\theta_2 \, \colon \, W_2 = \text{Spec}\left( k((x_0))[t][Z]/(Z^p-Z-f(x_0)-tx_0^{-1}) \right) \longrightarrow \text{Spec}\left( k((x_0))[t] \right)$$
of connected integral schemes over the affine $t$-line over the field $k((x_0))$. Let $S'$ be an irreducible component of the fibre product $V_1 \times_{\text{Spec}\left( k((x_0))[t] \right)} W_2$, and let $\eta ' \, \colon \, S' \longrightarrow \text{Spec}\left( k((x_0))[t] \right)$ be the induced morphism. From the construction, we have the following isomorphisms of fibres of $\eta'$.
$$S' \times_{\text{Spec}\left( k((x_0))[t] \right)} \left( \text{Spec}\left( k((x_0)) \right) \times (t=\beta) \right) \, \cong \, \text{Spec}\left( F_\beta \right), \, \, \beta \neq 0;$$
$$\text{and } \, S' \times_{\text{Spec}\left( k((x_0))[t] \right)} \left( \text{Spec}\left( k((x_0)) \right) \times (t=0) \right) \, \cong \, \text{Ind}_{P_1}^{P_1 \times P_2/Q_2} \, \text{Spec}\left( L_0 \right).$$
So $\eta'$ is a connected normal $P_1 \times P_2/Q_2$-Galois \'{e}tale cover of the affine $t$-line over the field $k((x_0))$.
By \cite[Lemma~3, Example~5]{Ha_embed_er}, we obtain a $\Gamma$-Galois extension $M_1$ of $k((x_0))$ such that $M_1^{Q_2} \cong F_1$ as $P_1 \times P_2/Q_2$-Galois extensions of $k((x_0))$. Applying \cite[Theorem~3.11]{Ha_extn} with $X'$ as the closed subset of $\text{Spec}\left( k((x_0))[t]\right)$ consisting of points $(t=0)$ and $(t=1)$, and $Z' \longrightarrow Y'$ (where $Y' = S' \times_{\text{Spec}\left( k((x_0))[t] \right)} X'$) given by the disjoint union of the $Q_2$-Galois \'{e}tale covers
$$\text{Ind}_{P_1}^{\Gamma} \, \text{Spec}\left( L_0 \right) \, \cong \, \text{Ind}_{P_1 \times P_2/Q_2}^{P_1 \times P_2} \, \left( \text{Ind}_{P_1}^{P_1 \times P_2/Q_2} \, \text{Spec}\left( L_0 \right) \right) \longrightarrow \text{Ind}_{P_1}^{P_1 \times P_2/Q_2} \, \text{Spec}\left( L_0 \right)$$
$$\text{and } \, \text{Spec}\left( M_1 \right) \longrightarrow \text{Spec}\left( F_1 \right),$$
we obtain a $\Gamma$-Galois \'{e}tale cover
\begin{equation}\label{eq_theta_2}
\theta' \, \colon \, T' \, \longrightarrow \text{Spec}\left( k((x_0))[t] \right)
\end{equation}
of connected integral schemes that dominates the cover $\eta'$, together with the following isomorphisms of $\Gamma$-Galois \'{e}tale covers of $\text{Spec}\left( k((x_0)) \right)$.
\begin{equation}\label{eq_etale_iso_2}
T' \times_{\text{Spec}\left( k((x_0))[t] \right)} \left( \text{Spec}\left( k((x_0)) \right) \times (t=0) \right) \, \cong \, \text{Ind}_{P_1}^{\Gamma} \text{Spec}\left( L_0 \right),
\end{equation}
$$T' \times_{\text{Spec}\left( k((x_0))[t] \right)} \left( \text{Spec}\left( k((x_0)) \right) \times (t=1) \right) \, \cong \, \text{Spec}\left( M_1 \right).$$
Let $\theta \, \colon \, T \longrightarrow \text{Spec}\left( k[[x_0]][t] \right)$ be the cover obtained from $\theta'$ by taking normalization of $\text{Spec}\left( k[[x_0]][t] \right)$ in the function field of $T'$. Note that $\theta'$ induces a morphism $\Theta \, \colon \, \mathbb{A}^1_t \longrightarrow \mathcal{M}_{\Gamma}$ such that $\Theta \left( (t=1) \right) \in \mathcal{M}^{\text{irr}}_{\Gamma}$ (here $\mathcal{M}_{\Gamma}$ and $\mathcal{M}^{\text{irr}}_{\Gamma}$ are the moduli spaces as in \cref{sec_notation}). As in the proof of Claim~(*) of Proposition~\ref{prop_changing_first_upper_jump}, $\theta$ is a $\Gamma$-Galois cover of connected integral schemes, branched only at $\infty \times \mathbb{A}^1_t$ over which it is totally ramified.
The isomorphism~\eqref{eq_etale_iso_2} induces the $\Gamma$-equivariant isomorphism
$$T \times_{\text{Spec}\left( k[[x_0]][t] \right)} \left( \text{Spec}\left( k[[x_0]] \right) \, \times \, (t=0) \right) \, \cong \, \text{Ind}_{P_1}^{\Gamma} \, \text{Spec}\left( \widehat{\mathcal{O}}_{X_1,x_1} \right)$$
as covers of $\text{Spec}\left( \widehat{\mathcal{O}}_{\mathbb{P}^1,\infty} \right)$. By \cite[Lemma~3.2 and Lemma~3.3]{DK}, there is a dense open subset $\mathcal{V} \subset \mathbb{A}^1_t$ such that for each closed point $(t=\beta)$ in $\mathcal{V}$ the following hold. There is a connected $H_1 = \langle \, G_1 , P_2 \, \rangle$-Galois cover $Y_1 \longrightarrow \mathbb{P}^1$, \'{e}tale away from $\infty$, and for some point $y_1 \in Y_1$ over $\infty$,
$$\text{Spec}\left( K_{Y_1,y_1} \right) \, \cong \, T' \times_{\text{Spec}\left( k((x_0))[t] \right)} \left( \text{Spec}\left( k((x_0)) \right) \times (t = \beta) \right)$$
as $\Gamma$-Galois covers of $\text{Spec}\left( k((x_0)) \right)$. In particular, $K_{Y_1,y_1}^{Q_2} \cong F_\beta$ as $P_1 \times P_2/Q_2$-Galois extensions of $k((x_0))$; since $1$ is an upper jump for this extension, by Proposition~\ref{prop_jump}\eqref{j:2}, we conclude that $1$ is also an upper jump for the extension $K_{Y_1,y_1}/k((x_0))$.
Thus we obtain a connected $H_1$-Galois cover $g_1 \, \colon \, Z_1 \longrightarrow \mathbb{P}^1$ that is \'{e}tale away from $\infty$, $\Gamma$ occurs as an inertia group at a point $z_1 \in Z_1$ above $\infty$, and $1$ occurs as a lower jump in the ramification filtration of $\Gamma$. Moreover, $Q_1 \times Q_2 \subset \Gamma_{(2)}$, and since $[P_1/Q_1 \times P_2/Q_2 \colon (P_1/Q_1 \times P_2/Q_2)^{(h)}] = p$, \, $\Gamma_{(2)}$ has index $p$ in $\Gamma$.
Similarly, we obtain a connected $H_2 = \langle \, P_1, G_2 \, \rangle$-Galois cover $g_2 \, \colon \, Z_2 \longrightarrow \mathbb{P}^1$, \'{e}tale away from $\infty$, such that $\Gamma$ occurs as an inertia group at a point $z_2 \in Z_2$ above $\infty$, with the following ramification properties: $1$ occurs as a lower jump in the ramification filtration of $\Gamma$, \, $Q_1 \times Q_2 \subset \Gamma_{(2)}$, and $\Gamma_{(2)}$ has index $p$ in $\Gamma$.
\underline{Step 2:} We further deform the covers $g_i$, without changing the Galois groups or the ramification properties, to obtain covers for which the local extensions over $k((x_0))$ are isomorphic. This will allow us to patch the covers to obtain an $H$-Galois cover.
Suppose that the $\mathbb{Z}/p \cong \Gamma/\Gamma_{(2)}$-Galois extensions $K_{Z_1,z_1}^{\Gamma_{(2)}}/K_{\mathbb{P}^1,\infty}$ and $K_{Z_2,z_2}^{\Gamma_{(2)}}/K_{\mathbb{P}^1,\infty}$ are given by the following Artin-Schreier polynomials
\begin{eqnarray*}
Z^p-Z-\gamma_1(x_0) \in k((x_0))[Z] \text{ and }\\
Z^p-Z-\gamma_2(x_0) \in k((x_0))[Z],
\end{eqnarray*}
where $v_{x_0^{-1}} \left( \gamma_1 \right) = 1 = v_{x_0^{-1}} \left( \gamma_2 \right)$. Consider the connected $\mathbb{Z}/p$-Galois \'{e}tale cover of integral schemes of the affine $u$-line over the field $k((x_0))$ given by
$$\psi' \, \colon \, U' = \text{Spec}\left( k((x_0))[u][Z]/( Z^p - Z - (1-u)\gamma_1(x_0) - u\gamma_2(x_0) )\right) \longrightarrow \text{Spec}\left( k((x_0))[u] \right).$$
Taking $X'$ as the closed subset of $\text{Spec}\left( k((x_0))[u] \right)$ consisting of two points $(u=0)$ and $(u=1)$, and $Z' \longrightarrow X'$ as the disconnected $\Gamma$-Galois \'{e}tale cover $\text{Spec}\left( K_{Z_1,z_1} \right) \sqcup \text{Spec}\left( K_{Z_2,z_2} \right) \longrightarrow \text{Spec}\left( k((x_0))[u] \right) \times \{(u=0), \, (u=1)\}$ in \cite[Theorem~3.11]{Ha_extn}, we obtain a connected $\Gamma$-Galois \'{e}tale cover $\Psi' \, \colon \, W' \longrightarrow \text{Spec}\left( k((x_0))[u] \right)$ that dominates the $\Gamma/\Gamma_{(2)} \cong \mathbb{Z}/p$-Galois cover $\psi'$ and such that we have the following isomorphisms of $\Gamma$-Galois covers of $\text{Spec}\left( K_{\mathbb{P}^1,\infty}\right)$.
$$W' \times_{\text{Spec}\left( k((x_0))[u] \right)} \left( \text{Spec}\left( k((x_0)) \right) \times (u=0) \right) \, \cong \, \text{Spec}\left( K_{Z_1,z_1} \right),$$
$$W' \times_{\text{Spec}\left( k((x_0))[u] \right)} \left( \text{Spec}\left( k((x_0)) \right) \times (u=1) \right) \, \cong \, \text{Spec}\left( K_{Z_2,z_2} \right).$$
Taking normalization of $\text{Spec}\left( k[[x_0]][u] \right)$ in the function field of $W'$, we obtain a connected $\Gamma$-Galois cover $\psi \, \colon \, W \longrightarrow \text{Spec}\left( k[[x_0]][u] \right)$ of integral schemes, branched at $\infty \times \mathbb{A}^1_u$ over which it is totally ramified (this can be established using the same argument as for the cover $\theta \, \colon \, T \longrightarrow \text{Spec}\left( k[[x_0]][t] \right)$).
By \cite[Lemma~3.2]{DK}, there are connected $H_1$-Galois and $H_2$-Galois covers of $\mathbb{P}^1 \, \times_k \, \text{Spec}\left( k[[u]] \right)$ satisfying the hypotheses of \cite[Lemma~3.3]{DK}. Thus there are dense open subsets $\mathcal{W}_1$ and $\mathcal{W}_2$ of $\mathbb{A}^1_u$ such that for $i \, = \, 1, \, 2$ and every closed point $(u=c_i)$ in $\mathcal{W}_i$, the following hold. There is a connected $H_i$-Galois cover $Y'_i \longrightarrow \mathbb{P}^1$, \'{e}tale away from $\infty$, and for some point $y'_i \in Y'_i$ above $\infty$, we have
$$\text{Spec}\left( K_{Y'_i,y'_i}\right) \cong W \times_{\text{Spec}\left( k[[x_0]][u] \right)} \left( \text{Spec}\left( k((x_0)) \right) \times (u=c_i) \right)$$
as $\Gamma$-Galois covers of $\text{Spec}\left( K_{\mathbb{P}^1,\infty} \right)$. Taking any point $(u = c) \in \mathcal{W}_1 \cap \mathcal{W}_2$, we obtain two connected $H_1$-Galois and $H_2$-Galois covers $f_1 \, \colon \, Y_1 \longrightarrow \mathbb{P}^1$ and $f_2 \, \colon \, Y_2 \longrightarrow \mathbb{P}^1$, respectively, both covers are \'{e}tale away from $\infty$, and there are points $y_1 \in Y_1, \, y_2 \in Y_2$ lying over $\infty$ in the respective covers such that
\begin{equation}\label{isom}
K_{Y_1,y_1} \, \cong \, K_{Y_2,y_2}
\end{equation}
as $\Gamma$-Galois extensions of $K_{\mathbb{P}^1,\infty}$, and $1$ occurs as a lower jump in the ramification filtration for both the extensions. From the construction it follows that for both the covers $f_1$ and $f_2$, $Q_1 \times Q_2$ is contained in the higher ramification group $\Gamma_{(2)}$ which has index $p$ in $\Gamma$.
\underline{Step 3:} By Lemma~\ref{lem_patching_same_extension}, there is a connected $H = \langle \, G_1, \, G_2 \, \rangle$-Galois cover $Y \longrightarrow \mathbb{P}^1$ satisfying conditions~\eqref{c:1}--\eqref{c:3}.
If $H$ is a perfect group, we apply \cite[Theorem~3.7]{Manish_Killing} to obtain a connected $H$-Galois \'{e}tale cover of the affine line with $\Gamma_{(2)}$ as an inertia group above $\infty$.
\end{proof}
\begin{remark}\label{rmk_cyclic_inertia_product}
In the special case of the above theorem with $P_1 \cong \mathbb{Z}/p \cong P_2$, the resulting connected $G$-Galois cover has conductor $1+(h-1)p$, where $h$ is the common conductor for the local extensions $K_{X_1,x_1}/K_{\mathbb{P}^1,\infty}$ and $K_{X_2,x_2}/K_{\mathbb{P}^1,\infty}$ we began with (such a common conductor can always be arranged; cf. \cite[Theorem~2.2.2]{Pries}). To see this, observe that the covers $g_1$ and $g_2$ obtained at the end of Step 1 both have two distinct upper jumps $1$ and $h$ in the ramification filtration of the inertia group $\Gamma \cong (\mathbb{Z}/p)^2$ above $\infty$. Each of the local $\Gamma$-Galois extensions over $\infty$ is a compositum of two $\mathbb{Z}/p$-Galois extensions given by Artin-Schreier polynomials with conductors $1$ and $h$, respectively. In Step 2, we can use these equations to obtain the connected $\Gamma$-Galois cover $W \longrightarrow \text{Spec}\left( k[[x_0]][u] \right)$ whose fibre over any $(u = \beta)$, \, $\beta \neq 0$, is a $\Gamma$-Galois extension with $1$ and $h$ as the distinct upper jumps in the ramification filtration. This shows that the cover $Y \longrightarrow \mathbb{P}^1$ can be obtained so that $1$ and $h$ are the distinct upper jumps (or equivalently, $1$ and $1+(h-1)p$ are the lower jumps by Equation~\eqref{eq_jumps_relation}) of the ramification filtration of the inertia group $\Gamma$ above $\infty$. Finally, from the proof of \cite[Theorem~3.7]{Manish_Killing} it follows that the final cover obtained in Step 3 has conductor $1+(h-1)p$.
\end{remark}
\section{The GPWIC in odd characteristics}\label{sec_odd}
As before, let $k$ be an algebraically closed field of characteristic $p$. The Generalized Purely Wild Inertia Conjecture (the GPWIC, Conjecture~\ref{conj_GPWIC}) asserts that given a quasi $p$-group $G$ and a nonempty finite set $B \, \subset \, \mathbb{P}^1$ of closed points, there is a connected $G$-Galois cover of $\mathbb{P}^1$, \'{e}tale away from $B$, with prescribed purely wild inertia groups above these branch points, subject to the necessary condition that the conjugates of these inertia groups generate $G$.
First, we prove that the GPWIC is true for any arbitrary product of simple Alternating groups when $p \geq 3$. By \cite[Theorem 1.7(3)]{Das}, the GPWIC is true for any product $A_{d_1} \times \cdots \times A_{d_n}$, $n \geq 1$, where each $d_i = p$, or $d_i \geq p+1$ is coprime to $p$. This was proved using formal patching techniques and by understanding the covers of the affine $k$-line given by explicit affine equations, \`{a} la Abhyankar. The restrictions on the $d_i$'s, namely, $p \nmid d_i$ if $d_i > p$, come from the fact that the status of the PWIC remained unknown for the groups $A_{rp}$, $r \geq 2$; this is established in the following theorem.
\begin{theorem}\label{thm_PWIC_Alternating}
Let $p$ be an odd prime, $r \geq 1$. Then the PWIC holds for $A_{rp}$.
\end{theorem}
\begin{proof}
As $p$ exactly divides the order of $A_p$, every potential purely wild inertia group is a Sylow $p$-subgroup of $A_p$. So the PWIC is true for $A_p$ by Raynaud's proof of Abhyankar's Conjecture on the affine line (\cite[Corollary~2.2.2]{Raynaud_AC}). We suppose that $r \geq 2$. For $1 \leq u \leq r$, consider the $p$-cycle
$$\tau_u \coloneqq ((u-1)p+1, \ldots, up) \in A_{rp}.$$
Since $A_{rp}$ is a simple quasi $p$-group, it is generated by the conjugates of any cyclic subgroup of order $p$. As any $p$-subgroup of $A_{rp}$ contains an element of order $p$, and the inertia groups above a point in any connected $A_{rp}$-Galois cover are conjugates in $A_{rp}$, in view of \cite[Theorem~2]{2}, it is enough to prove the following.
\emph{For each $1 \leq u \leq r$, the pair $(A_{rp}, \, \langle \, \tau_1 \cdots \tau_u \, \rangle)$ is realizable.}
When $u < r$, this is \cite[Corollary~5.6]{DK}. Using induction on $r$, we will show that the pair $(A_{rp}, \, \langle \, \tau_1 \cdots \tau_r \, \rangle)$ is realizable. By the induction hypothesis, the pair $(A_{(r-1)p}, \, \langle \, \tau_1 \cdots \tau_{r-1} \, \rangle)$ is realizable. By \cite[Theorem~5.2]{DK}, the pair $(A_{(r-1)p} \times A_p, \, \langle \, (\tau_1 \cdots \tau_{r-1}, \tau_r) \, \rangle)$ is realizable, as well. Consider the odd permutation
$$c \coloneqq (1, (r-1)p+1)\,(2, (r-1)p+2) \, \cdots \, (p, rp) \in S_{rp}$$
of order $2$. Under the natural embedding $A_{(r-1)p} \times A_p \hookrightarrow S_{rp}$, we identify the element $(\tau_1 \cdots \tau_{r-1}, \tau_r)$ with $\tau = \tau_1 \cdots \tau_r$. The element $c$ acts on $\tau$ via conjugation. Now let $Y_1 \longrightarrow \mathbb{P}^1$ be a connected $A_{(r-1)p} \times A_p$-Galois cover, \'{e}tale away from $\infty$, such that $\langle \, (\tau_1 \cdots \tau_{r-1}, \tau_r) \, \rangle$ occurs as an inertia group above $\infty$. Let $Y_2 \longrightarrow \mathbb{P}^1$ be a Harbater-Katz-Gabber cover with Galois group $\langle \, \tau \, \rangle \rtimes \langle \, c \, \rangle$ that is totally ramified over $\infty$, has $\langle \, c \, \rangle$ as an inertia group above $0$, and is \'{e}tale everywhere else. Taking $G_1 = A_{(r-1)p} \times A_p$, $G_2 = \langle \, \tau \, \rangle \rtimes \langle \, c \, \rangle$, $G = S_{rp}$, $I_1 = \langle \, (\tau_1 \cdots \tau_{r-1}, \tau_r) \, \rangle$, and $I_2 = \langle \, \tau \, \rangle \rtimes \langle \, c \, \rangle$ in Lemma~\ref{lem_fp_main}, we obtain a connected $S_{rp}$-Galois cover $Y \longrightarrow \mathbb{P}^1$, \'{e}tale away from $\{0, \infty\}$, such that $\langle \, \tau \, \rangle \rtimes \langle \, c \, \rangle$ occurs as an inertia group above $\infty$, and $\langle \, c \, \rangle$ occurs as an inertia group above $0$. Now taking a $[\text{ord}(c)]$-Kummer pullback (cf. Definition~\ref{def_Kummer_pullback}), we obtain a connected $A_{rp}$-Galois cover of $\mathbb{P}^1$, branched only at $\infty$, such that $\langle \, \tau \, \rangle$ occurs as an inertia group above $\infty$.
\end{proof}
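As a computational sanity check (not part of the proof), the claimed combinatorial properties of $\tau$ and $c$ can be verified in plain Python for the illustrative values $p = 5$, $r = 3$; the permutations are written $0$-indexed, so $\tau_u$ cycles the block $\{(u-1)p, \ldots, up-1\}$ and $c$ swaps the first and last blocks pointwise.

```python
# Sanity check (p = 5, r = 3) of the permutations used in the proof above.
p, r = 5, 3
n = r * p

def compose(a, b):                      # (a o b)(i) = a[b[i]]
    return tuple(a[b[i]] for i in range(n))

def from_cycle(cyc):                    # permutation given by one cycle
    perm = list(range(n))
    for x, y in zip(cyc, cyc[1:] + cyc[:1]):
        perm[x] = y
    return tuple(perm)

def sign(a):                            # (-1)^(n - number of cycles)
    seen, cycles = set(), 0
    for i in range(n):
        if i not in seen:
            cycles += 1
            j = i
            while j not in seen:
                seen.add(j)
                j = a[j]
    return (-1) ** (n - cycles)

taus = [from_cycle(list(range(u * p, (u + 1) * p))) for u in range(r)]
tau = tuple(range(n))
for t in taus:                          # disjoint cycles commute
    tau = compose(tau, t)

c = list(range(n))
for i in range(p):                      # swap block 1 with block r pointwise
    c[i], c[(r - 1) * p + i] = c[(r - 1) * p + i], c[i]
c = tuple(c)

assert compose(c, c) == tuple(range(n))      # c has order 2
assert sign(c) == -1                         # c is an odd permutation (p odd)
assert sign(tau) == 1                        # tau lies in A_{rp}
assert compose(compose(c, tau), c) == tau    # c normalizes <tau>
```

Note that conjugation by $c$ merely swaps the disjoint cycles $\tau_1$ and $\tau_r$ inside the product, so $c$ in fact centralizes $\tau$ itself while still normalizing $\langle \, \tau \, \rangle$, consistent with the statement above.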
An argument similar to the one in \cite[Theorem~7.4]{Das} produces the following as a consequence of the above theorem.
\begin{corollary}\label{cor_GPWIC_Alt_products}
Let $p$ be an odd prime, $n \geq 1$. For $1 \leq i \leq n$, let $d_i \geq \text{max}\{p, 5\}$. The GPWIC holds for the product of Alternating groups $A_{d_1} \times \cdots \times A_{d_n}$.
\end{corollary}
Although we will be interested in the PWIC and the GPWIC, we observe the following consequence of Lemma~\ref{lem_fp_main} and the above corollary.
\begin{proposition}\label{prop_reduction_result}
Let $p$ be a prime number, $G$ be a finite group, and $\mathbb{Z}/p \cong P$ be a subgroup of $G$ such that the pair $(G, \, P)$ is realizable. Let $\beta \in N_G(P)$ be an element of order prime-to-$p$. Suppose that $H$ is a subgroup of $G$ containing $I \coloneqq P \rtimes \langle \, \beta \, \rangle$, and the pair $(H, \, I)$ is realizable. Then for any $p$-subgroup $P'$ of $G$ containing $P$ that is normalized by $\beta$, the pair $(G, \, P' \rtimes \langle \, \beta \, \rangle)$ is realizable.
In particular, let $p$ be an odd prime, $d' \geq d \geq p$. Suppose that for some element $\beta \in N_{A_d}(\langle \, (1, \ldots, p) \, \rangle)$ of order prime-to-$p$, the pair $(A_d, \, \langle \, (1, \ldots, p) \, \rangle \rtimes \langle \, \beta \, \rangle)$ is realizable. If $P \subset A_{d'}$ is a $p$-subgroup containing the $p$-cycle $(1, \ldots, p)$ that is normalized by $\beta$, the pair $(A_{d'}, \, P \rtimes \langle \, \beta \, \rangle)$ is realizable.
\end{proposition}
\begin{proof}
By our assumption, there are connected Galois covers $\psi_1$ and $\psi_2$ of $\mathbb{P}^1$, \'{e}tale away from $\infty$, with Galois groups $G$ and $H$, respectively, and such that the inertia groups above $\infty$ are the conjugates of $P$ and $I$, respectively, in the corresponding Galois groups. By Lemma~\ref{lem_fp_main}, the pair $(G, \, I)$ is realizable. By \cite[Theorem 2]{2}, the pair $(G, \, P' \rtimes \langle \, \beta \, \rangle)$ is realizable as well.
By Corollary~\ref{cor_GPWIC_Alt_products}, for any $d' \geq p$, the pair $(A_{d'}, \, \langle \, (1, \ldots, p) \, \rangle )$ is realizable. So, the first statement applies with $G = A_{d'}$ and $H = A_d \subset A_{d'}$.
\end{proof}
Using the above, we can prove the Inertia Conjecture (Conjecture~\ref{conj_IC}) for $A_{p+1}$ for $p \geq 5$. This was proved in \cite[Theorem~5.3]{Das} under the assumption that $p \equiv 2 \pmod{3}$.
\begin{corollary}\label{cor_A_p+1}
For any prime number $p \geq 5$, the Inertia Conjecture is true for $A_{p+1}$.
\end{corollary}
\begin{proof}
This follows from Proposition~\ref{prop_reduction_result} since
$$N_{A_p}( \langle \, (1, \ldots, p) \, \rangle ) = N_{A_{p+1}}( \langle \, (1, \ldots, p) \, \rangle ),$$
and the PWIC holds for $A_p$, $p \geq 5$ by \cite[Theorem~1.2]{BP}.
\end{proof}
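The equality of normalizers invoked here can be checked by brute force for small $p$ (an illustrative plain-Python script for $p = 5$, not part of the argument); we work $0$-indexed inside $S_6$, with $A_5$ realized as the even permutations fixing the last point.

```python
# Brute-force check that N_{A_5}(<sigma>) = N_{A_6}(<sigma>) for the
# 5-cycle sigma = (0 1 2 3 4), written 0-indexed inside S_6.
from itertools import permutations

def compose(a, b):                  # (a o b)(i) = a[b[i]]
    return tuple(a[b[i]] for i in range(len(a)))

def inverse(a):
    inv = [0] * len(a)
    for i, j in enumerate(a):
        inv[j] = i
    return tuple(inv)

def is_even(a):                     # parity via cycle count
    seen, cycles = set(), 0
    for i in range(len(a)):
        if i not in seen:
            cycles += 1
            j = i
            while j not in seen:
                seen.add(j)
                j = a[j]
    return (len(a) - cycles) % 2 == 0

n = 6
sigma = (1, 2, 3, 4, 0, 5)          # the 5-cycle (0 1 2 3 4), fixing 5
gen = {tuple(range(n))}
g = sigma
while g not in gen:                 # gen = <sigma>
    gen.add(g)
    g = compose(g, sigma)

A6 = [q for q in permutations(range(n)) if is_even(q)]
norm = {q for q in A6
        if compose(compose(q, sigma), inverse(q)) in gen}

assert all(q[5] == 5 for q in norm)  # the normalizer already lies in A_5
assert len(norm) == 10               # it is dihedral of order 10
```

The point is that any power $\sigma^a \neq e$ fixes only the last point, so any normalizing element must fix that point as well and hence lie in $A_5$.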
The argument of Theorem~\ref{thm_PWIC_Alternating} can be used in a more general setting.
\begin{proposition}\label{prop_purely_wild_from_tame}
Let $p$ be a prime. Let $G$ be a finite group, $G_1 \times G_2 \subset G$, where $G_1$ and $G_2$ are quasi $p$-groups. Suppose that for $i = 1, \, 2$, $\tau_i \in G_i$ is an element of order $p$ such that the pair $(G_1 \times G_2, \, \langle \, (\tau_1, \tau_2) \, \rangle)$ is realizable. Let $\sigma \in G$ be an element of order prime-to-$p$ that interchanges $\tau_1$ and $\tau_2$ via conjugation, i.e. $\sigma^{-1} \, (\tau_1, \tau_2) \, \sigma = (\tau_2, \tau_1)$. Consider the subgroup $H \coloneqq \langle \, G_1 \times G_2 , \, \langle \, \sigma \, \rangle \, \rangle$ of $G$. Let $T$ be the maximal common quotient of $H$ and $\langle \, \sigma \, \rangle$. Then the pair $(H \times_T \langle \, \sigma \, \rangle, \, \langle \, (\tau_1, \tau_2) \, \rangle)$ is realizable.
\end{proposition}
\begin{proof}
Let $Y_1 \longrightarrow \mathbb{P}^1$ be a connected $G_1 \times G_2$-Galois cover, \'{e}tale away from $\infty$, such that $\langle \, (\tau_1, \tau_2) \, \rangle$ occurs as an inertia group above $\infty$. Let $Y_2 \longrightarrow \mathbb{P}^1$ be a $\langle \, (\tau_1, \tau_2) \, \rangle \rtimes \langle \, \sigma \, \rangle$-Galois Harbater-Katz-Gabber cover that is totally ramified over $\infty$, \, $\langle \, \sigma \, \rangle$ occurs as an inertia group above $0$, and is \'{e}tale everywhere else. By Lemma~\ref{lem_fp_main}, we obtain a connected $H$-Galois cover $Y \longrightarrow \mathbb{P}^1$, \'{e}tale away from $\{0, \infty\}$, such that $\langle \, (\tau_1, \tau_2) \, \rangle \rtimes \langle \, \sigma \, \rangle$ occurs as an inertia group above $\infty$, and $\langle \, \sigma \, \rangle$ occurs as an inertia group above $0$. Now the result follows by taking a $[\text{ord}(\sigma)]$-Kummer pullback (Definition~\ref{def_Kummer_pullback}).
\end{proof}
\begin{remark}\label{rmk_realization_product}
We know that the pair $(G_1 \times G_2, \, \langle \, (\tau_1, \, \tau_2) \, \rangle)$ is realizable in the following cases.
\begin{enumerate}
\item (\cite[Theorem~5.2]{DK}) When the PWIC holds for two perfect quasi $p$-groups $G_1$ and $G_2$ (a group is perfect if its derived subgroup is the whole group), and for any $1 \leq a \leq p-1$, there is an automorphism of $G_1 \times G_2$ taking $(\tau_1^a, \tau_2)$ to $(\tau_1, \tau_2)$.
\item (\cite[Theorem~7.5]{Das}) Suppose that for $i = 1, \, 2$, the pair $(G_i, \, \langle \, \tau_i \, \rangle)$ is realizable. Let $\pi_i \, \colon \, G_1 \times G_2 \longrightarrow G_i$ be the projections. If $G_1$ and $G_2$ do not have a common quotient, and $Q \subset G_1 \times G_2$ is a $p$-subgroup such that $\pi_i(Q) \cong \langle \, \tau_i \, \rangle$, then the pair $(G_1 \times G_2, \, Q)$ is realizable. In particular, this holds for $Q = \langle \, (\tau_1, \tau_2) \, \rangle$.
\end{enumerate}
\end{remark}
In view of the above remark, we may ask whether the PWIC is true for a product $G = G_1 \times G_2$ of quasi $p$-groups, even when $G_1$ and $G_2$ have a common non-trivial quotient. We show that this question has an affirmative answer when $G_1$ and $G_2$ are perfect groups.
\begin{theorem}\label{thm_product_perfect}
Let $p$ be a prime number. Let $G_1$ and $G_2$ be two perfect quasi $p$-groups such that the PWIC holds for $G_1$ and $G_2$. Then the PWIC is true for $G_1 \times G_2$.
\end{theorem}
\begin{proof}
Set $G \coloneqq G_1 \times G_2$. Let $\pi_1 \, \colon \, G \twoheadrightarrow G_1$ and $\pi_2 \, \colon \, G \twoheadrightarrow G_2$ denote the projections. Let $P \subset G$ be a $p$-subgroup such that $G = \langle \, P^G \, \rangle$ (as before, $P^G$ denotes the set of conjugate subgroups of $P$ in $G$). We want to show that there is a connected $G$-Galois cover of $\mathbb{P}^1$ that is \'{e}tale away from $\infty$, and $P$ occurs as an inertia group above $\infty$. For $j = 1, \, 2$, the conjugates of $\pi_j(P)$ generate $G_j$. By Goursat's lemma, $P = \pi_1(P) \times_{Q} \pi_2(P)$ for some common quotient $Q$ of $\pi_1(P)$ and $\pi_2(P)$. If $Q' \twoheadrightarrow Q$, and $Q'$ is a common quotient of $\pi_1(P)$ and $\pi_2(P)$, then $\pi_1(P) \times_{Q'} \pi_2(P) \subset \pi_1(P) \times_{Q} \pi_2(P)$. In view of \cite[Theorem~2]{2}, we may assume that $Q$ is a maximal common quotient of $\pi_1(P)$ and $\pi_2(P)$.
The PWIC is true for $G_1$ by our hypothesis, and it is true for any $p$-group. So there are connected $G_1$-Galois and $\pi_2(P)$-Galois covers of $\mathbb{P}^1$, both \'{e}tale away from $\infty$, such that $\pi_1(P)$ and $\pi_2(P)$ occur as inertia groups above $\infty$ in the respective covers. By \cite[Lemma~4.6]{Das}, the following hold.
\begin{enumerate}
\item There is a connected $G_1$-Galois cover $f_1 \, \colon \, Y \longrightarrow \mathbb{P}^1$, \'{e}tale away from $\infty$, and there is a point $y \in Y$ over $\infty$ with $\pi_1(P) = \text{Gal}\left( K_{Y,y}/K_{\mathbb{P}^1,\infty} \right)$;
\item there is a connected $\pi_2(P)$-Galois cover $f_2 \, \colon \, Z \longrightarrow \mathbb{P}^1$, \'{e}tale away from $\infty$, and there is a point $z \in Z$ over $\infty$ with $\pi_2(P) = \text{Gal}\left( K_{Z,z}/K_{\mathbb{P}^1,\infty} \right)$;
\item there is an isomorphism
$$K_{Y,y}^{N_{1}} \, \cong \, K_{Z,z}^{N_{2}}$$
of $Q$-Galois field extensions of $K_{\mathbb{P}^1,\infty}$, where $N_1 \trianglelefteq \pi_1(P)$ and $N_2 \trianglelefteq \pi_2(P)$ are such that $\pi_1(P)/N_1 \, \cong \, Q \, \cong \, \pi_2(P)/N_2$.
\end{enumerate}
Let $W_1$ be a connected component of the normalization of the fibre product $Y \times_{\mathbb{P}^1} Z$, and let $h_1 \, \colon \, W_1 \longrightarrow \mathbb{P}^1$ be the induced cover. Since $G_1$ is a perfect group, by \cite[Lemma~2.5]{DK}, $G_1$ has no non-trivial $p$-group quotient, and hence no non-trivial common quotient with $\pi_2(P)$. So the function fields of $Y$ and $Z$ are linearly disjoint over that of $\mathbb{P}^1$, and $h_1$ is a $G_1 \times \pi_2(P)$-Galois cover. By construction, $h_1$ is \'{e}tale away from $\infty$, and the inertia groups above $\infty$ are the conjugates of $\text{Gal}\left( K_{Y,y} \cdot K_{Z,z}/K_{\mathbb{P}^1,\infty} \right) = \pi_1(P) \times_Q \pi_2(P) = P$.
Similarly, we obtain a connected $\pi_1(P) \times G_2$-Galois cover $h_2 \, \colon \, W_2 \longrightarrow \mathbb{P}^1$, \'{e}tale away from $\infty$, such that $P$ occurs as an inertia group above $\infty$.
Finally, applying \cite[Lemma~4.6]{Das} to the covers $h_1$ and $h_2$, we obtain a $G_1 \times \pi_2(P)$-Galois and a $\pi_1(P) \times G_2$-Galois cover as above with the following additional property: the local $P$-Galois extensions over $K_{\mathbb{P}^1,\infty}$ are isomorphic. Since $G = \langle \, \pi_1(P) \times G_2, \, G_1 \times \pi_2(P) \, \rangle$, the result follows from Lemma~\ref{lem_patching_same_extension}.
\end{proof}
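The two Goursat facts used in the proof (the fibered-product decomposition of $P$, and the containment between fibered products over comparable common quotients) can be illustrated on a toy abelian example (an illustration only, not tied to the notation above):

```python
# Goursat illustration: the diagonal P = <(1,1)> in Z/9 x Z/9 has
# pi_1(P) = pi_2(P) = Z/9, and P is the fibered product over the
# maximal common quotient Q = Z/9 (both quotient maps the identity).
P = {(a, a) for a in range(9)}
fib_max = {(a, b) for a in range(9) for b in range(9) if a % 9 == b % 9}
assert P == fib_max

# Over the smaller quotient Z/3 (reduction mod 3 on both factors) the
# fibered product is strictly larger, matching the containment
# pi_1(P) x_{Q'} pi_2(P)  c  pi_1(P) x_Q pi_2(P) for Q' ->> Q.
fib_small = {(a, b) for a in range(9) for b in range(9) if a % 3 == b % 3}
assert P < fib_small and len(fib_small) == 27
```

Here the maximal common quotient picks out the smallest subgroup with the given projections, which is why the proof may assume $Q$ maximal.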
\begin{remark}
Using the above argument in the context of the GPWIC (i.e., if the GPWIC holds for $G_1$ and $G_2$), we end up with a cover that has more branched points than needed. More precisely, let $P_1, \ldots, P_r$ be $p$-subgroups of $G$ whose conjugates generate $G$, and $B = \{x_1, \ldots, x_r\} \subset \mathbb{P}^1$ be a finite set of closed points. The GPWIC asserts that there is a connected $G$-Galois cover of $\mathbb{P}^1$, \'{e}tale away from $B$, and $P_i$ occurs as an inertia group above $x_i$ for $1 \leq i \leq r$. Applying Goursat's Lemma again, each $P_i = \pi_1(P_i) \times_{Q_i} \pi_2(P_i)$, where we may assume that $Q_i$ is a maximal common quotient of $\pi_1(P_i)$ and $\pi_2(P_i)$. We further assume that for $j = 1, \, 2$, $\pi_j(P_1), \ldots, \pi_j(P_r)$ are contained in a single Sylow $p$-subgroup of $G_j$, hence generate a $p$-subgroup $T_j \subset G_j$. Then proceeding as in the proof, we obtain a connected $G_1 \times T_2$-Galois and a connected $T_1 \times G_2$-Galois cover of $\mathbb{P}^1$, both \'{e}tale away from $B$, such that $P_i$ occurs as an inertia group above $x_i$ ($1 \leq i \leq r$) in both the covers, and the local extensions over $K_{\mathbb{P}^1,x_1}$ for the respective covers are isomorphic. Using formal patching techniques, it follows that there is a set $B' = \{x_2', \ldots, x_r'\} \subset \mathbb{P}^1$ disjoint from $B$ and a connected $G$-Galois cover of $\mathbb{P}^1$, \'{e}tale away from $B \sqcup B'$, such that $P_1$ occurs as an inertia group above $x_1$, and for $2 \leq i \leq r$, $P_i$ occurs as an inertia group above $x_i$ and $x_i'$.
\end{remark}
\section{The GPWIC in characteristic two}\label{sec_two}
In this section, we show that the Generalized Purely Wild Inertia Conjecture (GPWIC, Conjecture~\ref{conj_GPWIC}) is true for any product of certain Symmetric and Alternating groups in characteristic $2$. This is the first ever non-trivial result towards the PWIC and the GPWIC in characteristic $2$. We start with the study of the PWIC.
Let $P$ be a $2$-subgroup of a finite quasi $2$-group $G$ such that $\langle \, P^G \, \rangle \, = \, G$, i.e. the conjugates of $P$ in $G$ generate $G$. The PWIC asserts that the pair $(G, \, P)$ is realizable (Definition~\ref{def_real_pair}). By \cite[Theorem 2]{2}, if $P$ is a $2$-subgroup as above for which the pair $(G, \, P)$ is realizable, and $P' \subset G$ is a $2$-subgroup containing $P$, then the pair $(G, \, P')$ is also realizable. Note that the Symmetric group $S_d$ is a quasi $2$-group for all $d \geq 2$ ($S_3$ has order exactly divisible by $2$, and the PWIC holds for it by \cite[Corollary~2.2.2]{Raynaud_AC}). In fact, for $d \geq 2$, \, $S_d$ is a quasi $p$-group only for the prime $p \, = \, 2$. On the other hand, $A_d$ is a quasi $2$-group for all $d \geq 5$ ($A_3$ and $A_4$ are quasi $p$-groups only for the prime $p \, = \, 3$). In these cases, we characterize the potential purely wild inertia groups ($2$-subgroups) $P$.
\begin{lemma}\label{lem_wild_candidates}
Let $d \geq 5$. We have the following.
\begin{enumerate}
\item Let $P \subset S_d$ be a $2$-subgroup such that $\big\langle \, P^{\, S_d} \, \big\rangle \, = \, S_d$. Then $P$ contains an odd permutation $\tau$ such that $\big\langle \, \langle \, \tau \, \rangle^{\, S_d} \, \big\rangle \, = \, S_d$.\label{i:1}
\item Let $P \subset A_d$ be a $2$-subgroup such that $\big\langle \, P^{\, A_d} \, \big\rangle \, = \, A_d$. Then $P$ contains an even permutation $\tau$ of order $2$ such that $\big\langle \, \langle \, \tau \, \rangle^{\, A_d} \, \big\rangle \, = \, A_d$.\label{i:2}
\end{enumerate}
If $P \subset S_4$ is a $2$-subgroup such that $\big\langle \, P^{ \, S_4} \, \big\rangle \, = \, S_4$, then $P$ contains either a transposition or a $4$-cycle $\tau$ such that $\big\langle \, \langle \, \tau \, \rangle^{\, S_4} \, \big\rangle \, = \, S_4$.
\end{lemma}
\begin{proof}
\begin{enumerate}
\item Suppose $P \subset A_d$. Then every conjugate of $P$ is also contained in the normal subgroup $A_d$, contradicting $\big\langle \, P^{\, S_d} \, \big\rangle \, = \, S_d$. So we may choose an odd permutation $\tau \in P$. Since $\big\langle \, \langle \, \tau \, \rangle^{\, S_d} \, \big\rangle$ is a non-trivial normal subgroup of $S_d$ containing the odd permutation $\tau$, it is not contained in $A_d$; as the only normal subgroups of $S_d$ for $d \geq 5$ are $\{1\}$, $A_d$, and $S_d$, we conclude $\big\langle \, \langle \, \tau \, \rangle^{\, S_d} \, \big\rangle \, = \, S_d$.
\item Since $A_d$ is a simple group, for every non-identity element $\tau \in A_d$, we have $\big\langle \, \langle \, \tau \, \rangle^{\, A_d} \, \big\rangle \, = \, A_d$. As $P$ always contains an element $\tau$ of order $2$, the result follows.
\end{enumerate}
As in \eqref{i:1}, $P$ contains an odd permutation $\tau \in S_4$ such that $\big\langle \, \langle \, \tau \, \rangle^{\, S_4} \, \big\rangle \, = \, S_4$. In $S_4$, such an element must be either a transposition or a $4$-cycle.
\end{proof}
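The restriction to transpositions and $4$-cycles in the $S_4$ case reflects a genuine obstruction, which can be checked directly using only the normality of the Klein four-group:
```latex
% In S_4 the even involutions lie in the Klein four-group
%   V = \{1, (1\,2)(3\,4), (1\,3)(2\,4), (1\,4)(2\,3)\},
% which is normal in S_4. Hence for \tau = (1\,2)(3\,4),
\[
  \big\langle \, \langle \, \tau \, \rangle^{\, S_4} \, \big\rangle \;=\; V \;\subsetneq\; S_4,
\]
% so an even involution can never satisfy the generation condition;
% among the 2-elements of S_4, only the transpositions and the
% 4-cycles have conjugates generating all of S_4.
```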
We first consider connected $A_d$-Galois and $S_d$-Galois covers ($d \geq 5$) of $\mathbb{P}^1$ over an algebraically closed field of characteristic $2$, \'{e}tale away from $\{\infty\}$, such that the inertia groups above $\infty$ are cyclic of order $2$; cf. Lemma~\ref{lem_wild_candidates}.
\begin{proposition}\label{prop_cyclic_order_two_inertia}
Let $d \geq 5$, and let $1 \leq r < d/2$ be an integer (equivalently, $2r < d$). For $1 \leq u \leq r$, let $\tau_u$ be the transposition $(2u-1, 2u)$, and consider the element $\tau \coloneqq \tau_1 \cdots \tau_r \in S_d$ of order $2$. Over any algebraically closed field of characteristic $2$, we have the following.
\begin{enumerate}
\item If $r$ is an even integer, the pair $(A_d, \, \langle \, \tau \, \rangle)$ is realizable.\label{Alt}
\item If $r$ is an odd integer, the pair $(S_d, \, \langle \, \tau \, \rangle)$ is realizable.\label{Sym}
\end{enumerate}
\end{proposition}
\begin{proof}
For each $1 \leq u \leq r$ and each $b \in \{2r+1, \ldots , d\}$ (as $2r < d$ by our assumption, this set is non-empty), consider the $3$-cycle
$$\sigma_{u,b} \, \coloneqq \, (2u-1,2u, b).$$
Then for any $u$ and $b$, we have $\tau_u^{-1} \sigma_{u,b} \tau_u = \sigma_{u,b}^2$. As $\tau_{u'}$ has support disjoint from the set $\{2u-1, \, 2u, \, b\}$ for $u' \neq u$, we also have $\tau^{-1} \sigma_{u,b} \tau = \sigma_{u,b}^2$. So for each $u$ and $b$, $\tau \in N_{S_d}(\langle \, \sigma_{u,b} \, \rangle)$, and the subgroup $H_{u,b} \coloneqq \langle \, \sigma_{u,b} \, \rangle \rtimes \langle \, \tau \, \rangle$ is a quasi $2$-group of order $6$. As $\langle \, \tau \, \rangle$ is a Sylow $2$-subgroup of $H_{u,b}$, by Raynaud's proof of Abhyankar's Conjecture for the affine line (\cite[Corollary~2.2.2]{Raynaud_AC}), the pair $(H_{u,b}, \, \langle \, \tau \, \rangle)$ is realizable when the base field has characteristic $2$. Consider the quasi $2$-subgroup
$$H \, \coloneqq \, \langle \, H_{u,b} \, | \, 1 \leq u \leq r, \, 2r+1 \leq b \leq d \, \rangle$$
of $S_d$. From the construction, $H$ is a primitive subgroup of $S_d$. By the patching result \cite[Theorem~2.2.3]{Raynaud_AC}, the pair $(H, \, \langle \, \tau \, \rangle)$ is realizable.
When $d = 5$, by our assumption, $r = 1 \text{ or } 2$. If $r = 1$, $H$ contains the transposition $(1, 2)$. By Jordan's Theorem \cite[Theorem~1.1]{Jones}, $A_5 \subset H$. If $r = 2$, $H$ contains the $3$-cycle $(1,2,5)$, which fixes $2$ points of $\{1, \ldots, 5\}$. By \cite[Theorem~1.2(3)]{Jones}, $A_5 \subset H$ (note that $PGL_2(4) \cong A_5$). For $d \geq 6$, $H$ is a primitive permutation group containing the $3$-cycle $(1,2,d)$; so again by Jordan's Theorem, $A_d \subset H$. Since for all $d \geq 5$, $H$ is a quasi $2$-subgroup of $S_d$ containing $A_d$, we have $H \in \{A_d, S_d\}$.
If $r$ is an even integer, $\tau$ and all the $\sigma_{u,b}$ are even permutations, so $H \subseteq A_d$, and hence $H = A_d$. If $r$ is an odd integer, $\tau$ is an odd permutation, so $H \not\subseteq A_d$, and hence $H = S_d$.
\end{proof}
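For concreteness, the conjugation relation used in the proof can be checked directly in cycle notation (shown here for $u = 1$, $b = 3$; the general case is identical after relabeling):
```latex
% With \tau_1 = (1,2) and \sigma_{1,3} = (1,2,3):
\[
  \tau_1^{-1} \, \sigma_{1,3} \, \tau_1
  \;=\; (1,2)\,(1,2,3)\,(1,2)
  \;=\; (2,1,3)
  \;=\; (1,3,2)
  \;=\; \sigma_{1,3}^{\,2},
\]
% so \langle \sigma_{1,3} \rangle \rtimes \langle \tau_1 \rangle is
% the symmetric group on \{1,2,3\}, a quasi 2-group of order 6.
```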
\begin{remark}\label{rmk_S_4_transposition}
A similar argument as above shows that for any transposition $\tau \in S_4$, the pair $(S_4, \, \langle \, \tau \, \rangle)$ is realizable. The subgroup $H$ as in the proof becomes a primitive quasi $2$-subgroup of $S_4$ containing the transposition $\tau$ and a $3$-cycle. As $PGL_2(3) \cong S_4$, by \cite[Theorem~1.2(3)]{Jones}, $H$ contains $A_4$. Since $H$ is a quasi $2$-subgroup of $S_4$, we conclude that $H = S_4$.
\end{remark}
\begin{theorem}\label{thm_PWIC_A_d_char_2}
Let $d \geq 5$ be an integer that is not a multiple of $4$. The PWIC is true for $A_d$ in characteristic $2$.
\end{theorem}
\begin{proof}
Let $P \subset A_d$ be a $2$-subgroup such that $\left\langle \, P^{\, A_d} \, \right\rangle = A_d$. By Lemma~\ref{lem_wild_candidates}~\eqref{i:2}, there is an even permutation $\tau \in P$ of order $2$ such that $\left\langle \, \langle \, \tau \, \rangle^{\, A_d} \, \right\rangle = A_d$. Write $\tau = \tau_1 \cdots \tau_r$ as a product of disjoint transpositions $\tau_u$; as $\tau$ is even, $r$ is necessarily an even integer, and $2r \leq d$. Since $4 \nmid d$ by our assumption and $r$ is even, $2r \neq d$, so $2r < d$. Moreover, $\tau$ is conjugate to the permutation $(1, 2) \cdots (2r-1, 2r)$. By Proposition~\ref{prop_cyclic_order_two_inertia}~\eqref{Alt}, the pair $(A_d, \, \langle \, \tau \, \rangle)$ is realizable. Now applying \cite[Theorem~2]{2}, the pair $(A_d, \, P)$ is realizable.
\end{proof}
The proof of \cite[Theorem~7.4]{Das} and the above result have the following immediate consequence.
\begin{corollary}\label{cor_GPWIC_product_A_d_char_2}
In characteristic $2$, the GPWIC (Conjecture~\ref{conj_GPWIC}) is true for any product $A_{d_1} \times \cdots \times A_{d_u}$, $u \geq 1$, where each $d_i \geq 5$ is an integer not divisible by $4$.
\end{corollary}
\begin{remark}
The smallest case not covered by the above theorem is $A_8$. When $\tau = (1, 2)(3, 4)(5, 6)(7, 8) \in A_8$, the realizability of the potential purely wild inertia group $\langle \, \tau \, \rangle$ remains unknown. Note that when $4 \mid d$ and $\tau = (1,2) \cdots (d-1,d)$ (i.e., $r = d/2$), proving that the pair $(A_d, \, \langle \, \tau \, \rangle)$ is realizable would imply that the PWIC holds for $A_d$ in characteristic $2$.
\end{remark}
\begin{theorem}\label{thm_PWIC_S_d_char_2}
In characteristic $2$, the PWIC is true for $S_2$, $S_3$, and $S_d$ for every odd integer $d \geq 5$.
\end{theorem}
\begin{proof}
By \cite[Corollary~2.2.2]{Raynaud_AC}, the PWIC in characteristic $2$ is true for the groups $S_2$ and $S_3$. Now let $d \geq 5$ be an odd integer. Let $P \subset S_d$ be a $2$-subgroup such that $\left\langle \, P^{\, S_d} \, \right\rangle = S_d$. Using Lemma~\ref{lem_wild_candidates}~\eqref{i:1}, choose an odd permutation $\tau \in P$ such that $\left\langle \, \langle \, \tau \, \rangle^{\, S_d} \, \right\rangle = S_d$. By \cite[Theorem~2]{2}, it is enough to show that the pair $(S_d, \, \langle \, \tau \, \rangle)$ is realizable. Suppose that $\text{ord}(\tau) = 2^f$, $f \geq 1$.
If $\tau$ has order $2$, then $\tau = \tau_1 \cdots \tau_r$ for disjoint transpositions $\tau_u$, where $r$ is odd since $\tau$ is an odd permutation. As $d$ is an odd integer, $2r < d$. Moreover, $\tau$ is conjugate to the element $(1,2) \cdots (2r-1,2r)$. By Proposition~\ref{prop_cyclic_order_two_inertia}~\eqref{Sym}, the pair $(S_d, \, \langle \, \tau \, \rangle)$ is realizable.
Now let $\text{ord}(\tau) \geq 4$. Then $\tau^2$ is a non-trivial even permutation, and since $A_d$ is simple, $\left\langle \, \langle \, \tau^2 \, \rangle^{\, A_d} \, \right\rangle = A_d$. By Theorem~\ref{thm_PWIC_A_d_char_2} (note that $d$ is odd, so $4 \nmid d$), the pair $(A_d, \, \langle \, \tau^2 \, \rangle)$ is realizable. Also, the pair $(\langle \, \tau \, \rangle, \, \langle \, \tau \, \rangle)$ is realizable, as the PWIC holds for any $2$-group. By \cite[Theorem~2.2.3]{Raynaud_AC}, the pair $\left( S_d = \left\langle \, A_d, \, \tau \, \right\rangle, \, \langle \, \tau \, \rangle \right)$ is realizable.
\end{proof}
\begin{example}
(Failure of an Abhyankar-style equation for the PWIC for $S_4$ in characteristic $2$.) One can construct connected $A_d$-Galois and $S_d$-Galois covers using covers $\mathbb{P}^1 \longrightarrow \mathbb{P}^1$ given by explicit affine equations, following Abhyankar or \cite[Section~3]{Das}. It is not clear whether this method can produce a connected $S_4$-Galois \'{e}tale cover of $\mathbb{A}^1$ such that $\langle \, (1,2,3,4) \, \rangle$ occurs as an inertia group above $\infty$.
Let $\psi \, \colon \, Y = \mathbb{P}^1_y \longrightarrow \mathbb{P}^1_x$ be a degree-$4$ cover given by the affine equation $f(x,y) = 0$, where
$$f(x,y) = y^4 - y^3 +x.$$
Since $(0,0)$ is the only common zero of $f$ and its $y$-derivative $\frac{\partial f}{\partial y}$, the cover $\psi$ is \'{e}tale away from $\{0, \infty\}$. From the equation, it follows that there are two points $(y=0)$ and $(y=1)$ above $x=0$ in $\psi$, having ramification indices $3$ and $1$, respectively, and $\psi^{-1}(\infty)$ consists of the unique point $(y= \infty)$, having ramification index $4$. Let $\phi \, \colon \, Z \longrightarrow \mathbb{P}^1_x$ be the Galois closure of $\psi$, and let $G$ be its Galois group. Since $f(x,y)$ is an irreducible polynomial in $k(x)[y]$, it follows that $G$ is a transitive subgroup of $S_4$. It can also be shown that $\phi$ is \'{e}tale away from $\{0,\infty\}$, that the inertia groups above $0$ are the cyclic groups generated by the $3$-cycles in $G$, and that any inertia group $I_\infty \subset G$ above $\infty$ contains a $4$-cycle $\tau$. From this it follows that $G = S_4$. Note that $N_{S_4}(\langle \, \tau \, \rangle) = \langle \, (1,2,3,4), (1,3) \, \rangle$ is the Dihedral group $D_8$ of order $8$ (which is also a Sylow $2$-subgroup of $S_4$). So $I_\infty$ is either a cyclic group of order $4$ or a Sylow $2$-subgroup of $S_4$. If $I_\infty \cong \mathbb{Z}/4$ (generated by $\tau$), the induced cover $Z \longrightarrow Y = \mathbb{P}^1_y$ is a connected $S_3$-Galois cover, tamely ramified over $(y=1)$ and \'{e}tale everywhere else. By the Riemann--Hurwitz formula, such a cover cannot exist. So the cover $\phi$ has Sylow $2$-subgroups of $S_4$ as the inertia groups above $\infty$.
\end{example}
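The Riemann--Hurwitz obstruction invoked at the end of the example can be spelled out (a sketch; here $e$ denotes the common ramification index of the points of $Z$ above $(y=1)$):
```latex
% A connected S_3-Galois cover Z --> P^1_y of degree 6, etale outside
% (y = 1) and tamely ramified there with index e, has exactly 6/e
% points above (y = 1), each contributing e - 1 to the different.
% Riemann-Hurwitz then gives
\[
  2g_Z - 2 \;=\; 6\,(2 \cdot 0 - 2) + \frac{6}{e}\,(e-1)
          \;=\; -6 - \frac{6}{e} \;<\; -2,
\]
% contradicting g_Z >= 0. Hence no such cover exists.
```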
Now we establish the GPWIC (Conjecture~\ref{conj_GPWIC}) in characteristic $2$ for any product of the Alternating and the Symmetric groups we encountered above.
\begin{theorem}\label{thm_product_S_d_char_2}
The GPWIC in characteristic $2$ is true for any product $G = S_{d_1} \times \cdots \times S_{d_n}$, $n \geq 1$, where each $d_i \geq 5$ is an odd integer.
\end{theorem}
\begin{proof}
We write any element of $G$ as $g = (g_1, \ldots, g_n)$. For $g \in G$, set $\text{Supp}(g) \coloneqq \{1 \leq i \leq n \, | \, g_i \neq 1\}$. Let $r \geq 1$ be an integer, $B = \{x_1, \ldots, x_r\} \subset \mathbb{P}^1$ be a set of closed points, and $P_1, \ldots, P_r \subset G$ be $2$-groups such that $G = \left\langle \, P_1^G, \ldots, P_r^G \, \right\rangle$, where $P_j^G$ denotes the set of conjugates of $P_j$ in $G$. We assert that there is a connected $G$-Galois cover of $\mathbb{P}^1$ that is \'{e}tale away from $B$ such that $P_j$ occurs as an inertia group above $x_j$ for $1 \leq j \leq r$. Since the inertia groups above a point in a connected $G$-Galois cover are conjugate, we may assume that $P_1, \ldots, P_r$ are contained in a single Sylow $2$-subgroup of $G$.
\underline{Step 1:} We start with a group-theoretic observation: each $2$-group $P_j$ contains a subgroup $P_j'$ that is a product of cyclic groups, and the conjugates of all these $P_j'$'s generate $G$.
There is a natural projection map $\pi \, \colon \, G \longrightarrow Q \coloneqq \prod_{1 \leq i \leq n} \mathbb{Z}/2$, given by the sign map on each factor. We see that
$$\left\langle \, \pi(P_1)^Q, \ldots, \pi(P_r)^Q \, \right\rangle = Q.$$
By \cite[Lemma~7.1]{Das}, $\left\langle \, \pi(P_1), \ldots, \pi(P_r) \, \right\rangle = Q$. If $\pi(P_{j_0}) = \{1\}$ for some $j_0$, we have $\left\langle \, P_j^G \, | \, 1 \leq j \leq r, \, j \neq j_0 \, \right\rangle = G$. In view of \cite[Theorem~4.7]{Das}, it is enough to assume that $\pi(P_j) \neq \{1\}$ for all $1 \leq j \leq r$. Since each $\pi(P_j) \subset Q$ is a non-trivial elementary abelian $2$-group, there are integers $0 = t_0 < t_1 < \cdots < t_{r-1} < t_r$ and elements $g^{(1)}, \ldots, g^{(t_r)} \in G$ with $g^{(i)} \in P_j$ for $t_{j-1}+1 \leq i \leq t_j$ such that the following hold.
\begin{enumerate}
\item For each $1 \leq j \leq r$, there is a subgroup $P'_j \coloneqq \left\langle \, g^{(t_{j-1}+1)} \, \right\rangle \times \cdots \times \left\langle \, g^{(t_j)} \, \right\rangle \subset P_j$ such that $\pi(P'_j) = \pi(P_j)$;
\item for each $1 \leq i \leq t_r$, there exists an integer $\lambda(i)$ such that $g^{(i)}_{\lambda(i)} \in S_{\lambda(i)}$ is an odd permutation;
\item $\{ \lambda(i) \, | \, 1 \leq i \leq t_r \} = \{1, \ldots, n\}$, i.e., the conjugates of $\left\langle \, P'_j \, | \, 1 \leq j \leq r \, \right\rangle$ generate $G$.
\end{enumerate}
\underline{Step 2:} Let $1 \leq j \leq r$, and $t_{j-1}+1 \leq i \leq t_j$. We use formal patching techniques to realize $\left\langle \, g^{(i)} \, \right\rangle$ as an inertia group for certain quasi $2$-subgroups of $G$.
First suppose that $g^{(i)}_{\lambda(i)}$ has order $\geq 4$. Set $h^{(i)} \coloneqq (g^{(i)})^2$. Then for any $v \in \text{Supp}\left( h^{(i)} \right)$, \, $h^{(i)}_v \in S_{d_v}$ is a non-trivial even permutation. Set $K_i \coloneqq \prod_{v \in \text{Supp}\left( h^{(i)} \right)} A_{d_v} \subset G$. By Theorem~\ref{thm_PWIC_A_d_char_2}, the pair $\left(K_i, \left\langle \, h^{(i)} \, \right\rangle \right)$ is realizable. As the pair $\left(\left\langle \, g^{(i)} \, \right\rangle, \, \left\langle \, g^{(i)} \, \right\rangle\right)$ is also realizable, by \cite[Theorem~2.2.3]{Raynaud_AC}, we conclude that the pair
\begin{equation}\label{eq_pair_1}
\left(H_i \coloneqq \left\langle \, K_i, \left\langle \, g^{(i)} \, \right\rangle \, \right\rangle, \, \left\langle \, g^{(i)} \, \right\rangle\right) \, \text{ is realizable}.
\end{equation}
Since $g^{(i)}_{\lambda(i)}$ is an odd permutation, the image of $H_i$ under the $\lambda(i)^{\text{th}}$ projection $G \twoheadrightarrow S_{d_{\lambda(i)}}$ is $S_{d_{\lambda(i)}}$.
Now let $\text{ord}\left( g^{(i)}_{\lambda(i)} \right) = 2$. Then $g^{(i)}_{\lambda(i)}$ is conjugate in $S_{d_{\lambda(i)}}$ to the element $\tau \coloneqq (1,2) \cdots (2u-1, 2u)$ for some odd integer $u$ with $2u < d_{\lambda(i)}$; let $w \in S_{d_{\lambda(i)}}$ be such that $g^{(i)}_{\lambda(i)} = w^{-1} \tau w$. For each $1 \leq t \leq u$ and $2u+1 \leq b \leq d_{\lambda(i)}$, consider the $3$-cycle $\sigma_{t,b} \coloneqq (2t-1,2t,b) \in S_{d_{\lambda(i)}}$, and let $\gamma_{t,b} \in G$ be the element with $(\gamma_{t,b})_{\lambda(i)} = \sigma_{t,b}$ and $(\gamma_{t,b})_{s} = 1$ for $s \neq \lambda(i)$. As in the proof of Proposition~\ref{prop_cyclic_order_two_inertia}, we have $\tau^{-1} \gamma_{t,b} \, \tau = \gamma_{t,b}^2$ (viewing $\tau$ and $w$ in the $\lambda(i)^{\text{th}}$ factor of $G$). Hence for each $t$ and $b$, setting $\gamma_{t,b}' \coloneqq w^{-1} \gamma_{t,b} w$, we have
$$\left( g^{(i)} \right)^{-1} \gamma_{t,b}' \, g^{(i)} = \left( \gamma_{t,b}' \right)^2.$$
By \cite[Corollary~2.2.2]{Raynaud_AC}, the pairs $\left( \left\langle \, \gamma_{t,b}' \, \right\rangle \rtimes \left\langle \, g^{(i)} \, \right\rangle, \, \left\langle \, g^{(i)} \, \right\rangle \right)$ are realizable for each $1 \leq t \leq u$ and $2u+1 \leq b \leq d_{\lambda(i)}$. By \cite[Theorem~2.2.3]{Raynaud_AC} we conclude that the pair
\begin{equation}\label{eq_pair_2}
\left( B_i \coloneqq \left\langle \, \left\langle \, \gamma_{t,b}' \, \right\rangle \rtimes \left\langle \, g^{(i)} \, \right\rangle \, | \, 1 \leq t \leq u, \, 2u+1 \leq b \leq d_{\lambda(i)} \, \right\rangle , \, \left\langle \, g^{(i)} \, \right\rangle \right) \, \text{ is realizable}.
\end{equation}
The image of $B_i$ under the projection $G \twoheadrightarrow S_{d_{\lambda(i)}}$ is $S_{d_{\lambda(i)}}$.
\underline{Step 3:} For $1 \leq j \leq r$, let $G_j$ be the subgroup of $G$ generated by the $H_i$'s (for $\text{ord}\left( g^{(i)}_{\lambda(i)} \right) \geq 4$) and the $B_l$'s (for $\text{ord}\left( g^{(l)}_{\lambda(l)} \right) = 2$), $t_{j-1}+1 \leq i, l \leq t_j$. Since $G_j$ surjects onto $\prod_{t_{j-1}+1 \leq i \leq t_j} S_{d_{\lambda(i)}}$ by construction, $\left\langle \, G_j \, | \, 1 \leq j \leq r \right\rangle = G$.
Applying \cite[Theorem~2.2.3]{Raynaud_AC} to the realizable pairs in~\eqref{eq_pair_1} and \eqref{eq_pair_2}, for each $1\leq j \leq r$, the pair $(G_j, \, P'_j)$ is realizable; further applying \cite[Theorem~2]{2}, the pair $(G_j, \, P_j)$ is realizable. Finally, we apply \cite[Theorem~4.7]{Das} inductively to obtain the required $G$-Galois cover.
\end{proof}
\begin{corollary}\label{cor_GPWIC_arbit_product_char_2}
The GPWIC (Conjecture~\ref{conj_GPWIC}) is true in characteristic $2$ for any product $G = G_1 \times \cdots \times G_n$, $n \geq 1$, where $G_i = A_{d_i}$ for some $d_i \geq 5$ with $4 \nmid d_i$ or $G_i = S_{d_i}$ for an odd integer $d_i \geq 5$.
\end{corollary}
\begin{proof}
This follows immediately from Corollary~\ref{cor_GPWIC_product_A_d_char_2}, Theorem~\ref{thm_PWIC_S_d_char_2}, and \cite[Theorem~7.5]{Das}.
\end{proof}
\section{Towards the PWIC for Wreath products}\label{sec_PWIC_wreath}
Let $p$ be a prime number. We show the realizability of certain purely wild inertia groups for wreath products of the form $N \wr A_d$, where $A_d$, $d \geq 5$, is a quasi $p$-group and $N$ is a non-abelian simple quasi $p$-group for which the PWIC has been established.
Let $d \geq \text{max}\{p, \, 5\}$. Let $N$ be a perfect quasi $p$-group (recall that a group $\Gamma$ is perfect if $\Gamma' = \Gamma$, where $\Gamma'$ is the derived subgroup of $\Gamma$; equivalently, by \cite[Lemma~2.5]{DK}, $\Gamma$ has no non-trivial $p$-group quotient). Set $G = N \wr A_d$. By property~\eqref{l:1} and \eqref{eq_derived}, $G$ is a perfect quasi $p$-group. By Proposition~\ref{prop_wr_inertia_candidates}, to prove the PWIC for $G$, it is enough to show that for each $1 \leq r \leq \floor{d/p}$ ($r$ even when $p = 2$), all elements $b_1, \ldots, b_r, \, a_{rp+1}, \ldots, a_d \in N$ of $p$-power order, and $\tau = \tau_1 \cdots \tau_r \in A_d$ with $\tau_u = ( (u-1)p+1, \ldots, up)$ for $1 \leq u \leq r$, the following holds.
\emph{Set $g \coloneqq (a_1, \ldots, a_d; \tau)$, where for $1 \leq i \leq rp$,
$$a_i =
\begin{cases}
b_u , & \tx{if } i = (u-1)p+1, \, 1 \leq u \leq r, \\
1, & \tx{if } i=(u-1)p+j, \, 2 \leq j \leq p, \, 1 \leq u \leq r.
\end{cases}$$
Then the pair $(G, \, \langle \, g \, \rangle)$ is realizable.}
From Proposition~\ref{prop_order_p-power_elements} it follows that if $\text{ord}(g) = p$, then each $b_u = 1$ and $a_i^p = 1$ for $rp+1 \leq i \leq d$. In particular, $g = (\mathbf{1};\tau)$ is such an element of order $p$, and by Remark~\ref{rmk_generation}, the conjugates of the $p$-cyclic group $\langle \, g \, \rangle$ generate $G$.
\begin{proposition}\label{prop_one_tau}
Let $p$ be a prime, $d \geq \text{max}\{5, p\}$. Let $1 \leq r \leq \floor{d/p}$ be an integer (when $p = 2$, assume that $r < d/2$ is an even integer). Let $N$ be a perfect quasi $p$-group containing an element $a$ of order $p$ such that the pair $(N^{rp}, \, \langle \, (a, \ldots, a) \, \rangle)$ is realizable. Set $G \coloneqq N \wr A_d$. Then for $\tau = \tau_1 \, \cdots \, \tau_r \in A_d$ with $\tau_u \, = \, ( (u-1)p+1, \ldots, up)$, $1 \leq u \leq r$, the pair $(G, \, \langle \, (\mathbf{1};\tau) \, \rangle)$ is realizable.
\end{proposition}
\begin{proof}
We first note that the conjugates of $\langle \, (\mathbf{1}; \tau) \, \rangle$ in $G$ generate $G$. When $N$ is a non-abelian simple group, this follows from Remark~\ref{rmk_generation}; in general, we argue as follows. Set $N_1 \coloneqq \langle \, \langle \, (\mathbf{1}; \tau) \, \rangle^G \, \rangle \subset G$. Then $N_1$ is a normal subgroup of $G$, and $A_d \subset N_1$. Thus $N_1$ contains the normal closure of $A_d$ in $G$. Since $N$ is a perfect group, by property~\eqref{l:4} of \Cref{sec_wreath}, $N_1 \, = \, G$.
Let $a \in N$ be an element of order $p$ such that the pair $(N^{rp}, \, \langle \, (a, \ldots, a) \, \rangle)$ is realizable. Since $a^p \, = \, 1$, the element
$$g \coloneqq ( \underbrace{a, \ldots, a}_{rp-\text{times}}, \underbrace{1, \ldots, 1}_{(d-rp)-\text{times}}; \tau)$$
is conjugate to $(\mathbf{1};\tau)$. Since the inertia groups over a point in a connected Galois cover are conjugate, it is enough to prove that the pair $(G, \, \langle \, g \, \rangle)$ is realizable. We will use Theorem~\ref{thm_perfect_group} for this.
Note that $(\mathbf{1};\tau)$ commutes with the element $b = ( \underbrace{a, \ldots, a}_{rp-\text{times}}, \underbrace{1, \ldots, 1}_{(d-rp)-\text{times}}; 1)$. Since the pair $(N^{rp}, \, \langle \, (a, \ldots, a) \, \rangle)$ is realizable by our hypothesis, and $(A_d, \, \langle \, \tau \, \rangle)$ is also realizable (cf. \cite[Corollary~5.5]{DK}, Theorem~\ref{thm_PWIC_Alternating}, Theorem~\ref{thm_PWIC_A_d_char_2}), by Theorem~\ref{thm_perfect_group}, there are $1 \leq i, \, j \leq p$ such that the pair $(G, \, \langle \, ( \underbrace{a^i, \ldots, a^i}_{rp-\text{times}}, \underbrace{1, \ldots, 1}_{(d-rp)-\text{times}}; \tau^j) \, \rangle)$ is realizable. Since $G$ is generated by the conjugates of the group $\langle \, (a^i, \ldots, a^i, 1, \ldots, 1; \tau^j) \, \rangle$, we have $j \neq p$. Finally, since $(a^i, \ldots, a^i, 1, \ldots, 1; \tau^j)$, $1 \leq j \leq p-1$, $1 \leq i \leq p$, is conjugate to $(\mathbf{1};\tau)$ in $G$, the result follows.
\end{proof}
\begin{proposition}\label{prop_diagonal_element_wr}
Let $p$ be any prime number, $d \geq \text{max}\{5, \, p\}$. Let $1 \leq r \leq \floor{d/p}$ be an integer ($1 \leq r < d/2$ an even integer if $p=2$). Let $N$ be a non-abelian simple quasi $p$-group for which the PWIC holds. Let $\tau = \tau_1 \cdots \tau_r \in A_d$, where $\tau_u$ is the $p$-cycle $((u-1)p+1, \ldots, up)$, $1 \leq u \leq r$. Let $a \in N$ be an element of order $p^f$, $f \geq 2$. Then the pair $(N \wr A_d, \, \langle \, (a, \ldots, a; \tau) \, \rangle)$ is realizable.
\end{proposition}
\begin{proof}
By \cite[Theorem~4.8]{Manish_Compositum}, the pair $(\langle \, (a, \ldots, a) \, \rangle \times A_d, \, \langle \, (a, \ldots, a; \tau) \, \rangle)$ is realizable. Since $f \geq 2$, $(a,\ldots,a;\tau)^p = (a^p, \ldots, a^p; 1) \neq 1$. By Theorem~\ref{thm_product_perfect}, $(N^d, \, \langle \, (a, \ldots, a) \, \rangle )$ is also realizable. Let $N_1$ be the subgroup of $G$ generated by $\langle \, (a, \ldots, a) \, \rangle \times A_d$ and $N^d$. Since $N_1$ contains $\langle \, N^d, \, A_d \, \rangle$, by property~\eqref{l:2}, $N_1 = G$. Now the result follows from \cite[Theorem~2.2.3]{Raynaud_AC}.
\end{proof}
Using the same arguments as in Proposition~\ref{prop_one_tau} and Proposition~\ref{prop_diagonal_element_wr}, we obtain the following result.
\begin{proposition}
Let $p$ be a prime number, and let $N$ and $H$ be two non-abelian simple quasi $p$-groups for which the PWIC holds. Let $G = N \wr_{\text{St}} H$ be the standard wreath product. Then for any element $a \in N$ of $p$-power order and any $h \in H$ of order $p$, the pair $(G, \, \langle \, (a, \ldots, a; h) \, \rangle)$ is realizable.
\end{proposition}
In the following, we consider the case $p = 2$.
\begin{proposition}\label{prop_char_2_wr}
Let $d \geq 5$, and let $1 \leq r < d/2$ be an integer. Let $H = A_d$ if $r$ is an even integer, and $H = S_d$ if $r$ is an odd integer. For $1 \leq u \leq r$, consider the transposition $\tau_u = (2u-1,2u)$, and let $\tau \coloneqq \tau_1 \cdots \tau_r \in H$. Let $N$ be a non-abelian simple quasi $2$-group for which the PWIC holds, and set $G = N \wr H$. Let $a_{2r+1}, \ldots, a_d \in N$ be such that $a_i^2 = a_d^2$ for $2r+1 \leq i \leq d$, and let $b \coloneqq a_d^2$. Consider the element $g \coloneqq (a_1,\ldots, a_d;\tau) \in G$, where $a_{2u-1} = b$ and $a_{2u} = 1$ for each $1 \leq u \leq r$. Then the pair $(G, \, \langle \, g \, \rangle)$ is realizable over any algebraically closed field of characteristic $2$.
\end{proposition}
\begin{proof}
We have $g = ( \underbrace{b,1, \ldots, b,1}_{2r-\text{times}}, a_{2r+1}, \ldots, a_d; \tau)$ with $b = a_d^2 = a_i^2$ for $2r+1 \leq i \leq d$. If $a_d = 1$, we have $g = (\mathbf{1}; \tau)$. Then the conclusion follows from Proposition~\ref{prop_one_tau}. So we assume that $a_d \neq 1$.
For each $1 \leq u \leq r$, and $2r+1 \leq l \leq d$, consider the $3$-cycle $\sigma_{u,l} \coloneqq (2u-1, 2u, l)$, and set
$$\gamma_{u,l} \coloneqq (\underbrace{1, \ldots, 1}_{(2u-2)-\text{times}}, a_l, a_l^{-1}, \underbrace{1, \ldots, 1}_{(d-2u)-\text{times}}; \sigma_{u,l}).$$
Since $a_l^2 = b$, we see that
\begin{eqnarray*}
g^{-1} \, \gamma_{u,l} \, g \, = \, \gamma_{u,l}^2 & \text{ for each } 1 \leq u \leq r, \, 2r+1 \leq l \leq d, \text{ and}\\
\text{ord}(\gamma_{u,l}) \, = \, 3. &
\end{eqnarray*}
So for each $u$ and $l$, the quasi $2$-group $\langle \, \gamma_{u,l} \, \rangle \rtimes \langle \, g \, \rangle$ has $\langle \, g \, \rangle$ as a Sylow $2$-subgroup. Thus, by \cite[Corollary~2.2.2]{Raynaud_AC}, for all $1 \leq u \leq r, \, 2r+1 \leq l \leq d$, the pair $(\langle \, \gamma_{u,l} \, \rangle \rtimes \langle \, g \, \rangle, \, \langle \, g \, \rangle)$ is realizable.
Suppose that $\text{ord}(g) = 2$. By our assumption, each $a_i$ has order $2$ for $2r+1 \leq i \leq d$, and $b = 1$. So $g = (1, \ldots, 1, a_{2r+1}, \ldots, a_d; \tau)$. The elements $(1, \ldots, 1, a_{2r+1}, \ldots, a_d; 1)$ and $(\mathbf{1}; \tau)$ commute in $G$. Consider the subgroup
$$G_1 \coloneqq \{(1, \ldots, 1, c_1, \ldots, c_{d-2r} ; h) \, | \, c_i \in N, \, h \in \langle \, \tau \, \rangle\} \subset G.$$
So $G_1 \cong N^{d-2r} \times \langle \, \tau \, \rangle$. By our hypothesis, the PWIC holds for $N$. By Theorem~\ref{thm_product_perfect} and \cite[Corollary~4.6]{Manish_Compositum}, the PWIC holds for $N^{d-2r} \times \langle \, \tau \, \rangle$. Thus $(N^{d-2r} \times \langle \, \tau \, \rangle, \, \langle \, (a_{2r+1}, \ldots, a_d, \tau) \, \rangle)$ is realizable, and hence the pair $(G_1, \, \langle \, g \, \rangle)$ is also realizable. Set
$$N_1 \coloneqq \langle \, G_1, \, \langle \, \gamma_{u,l} \, \rangle \rtimes \langle \, g \, \rangle \, | \, 1 \leq u \leq r, \, 2r+1 \leq l \leq d \, \rangle \subset G.$$
The same argument as in the proof of Proposition~\ref{prop_cyclic_order_two_inertia} shows that $\pi(N_1) = H$. As $\langle \, N^d, \, H \, \rangle = G$, we have $N_1 = G$.
By \cite[Theorem~2.2.3]{Raynaud_AC}, the pair $(G, \, \langle \, g \, \rangle)$ is realizable.
Now let $\text{ord}(g) = 2^f$, $f \geq 2$. Then $g^2 = (b, \ldots, b; 1)$, and $b$ has order $2^{f-1} \geq 2$. Using Theorem~\ref{thm_product_perfect} we conclude that the pair $(N^d, \, \langle \, (b, \ldots, b) \, \rangle)$ is realizable. Since $\langle \, g \, \rangle$ is a $2$-group, the pair $(\langle \, g \, \rangle, \, \langle \, g \, \rangle)$ is also realizable. We again note that the subgroups $N^d$ and $\langle \, g \, \rangle$ together with all the groups $\langle \, \gamma_{u,l} \, \rangle \rtimes \langle \, g \, \rangle$ generate $G$, and the result follows by \cite[Theorem~2.2.3]{Raynaud_AC}.
\end{proof}
\begin{corollary}
Let $N$ be a non-abelian simple quasi $2$-group, and $d \geq 5$ be an integer. We have the following realization of pairs when the base field has characteristic $p = 2$.
\begin{enumerate}
\item If $4 \nmid d$, for any element $g \in N \wr A_d$ of order $2$, the pair $(N \wr A_d, \, \langle \, g \, \rangle)$ is realizable.
\item If $d$ is an odd integer, for any element $g \in N \wr S_d$ of order $2$, the pair $(N \wr S_d, \, \langle \, g \, \rangle)$ is realizable.
\end{enumerate}
\end{corollary}
\begin{appendix}\label{appendix}
\section{Ramification Filtration for purely wild inertia}\label{sec_filtration}
We briefly recall the ramification filtration associated to an inertia group. As before, let $k$ be an algebraically closed field. Let $L/k((x))$ be a Galois field extension with group $I$. By \cite[Chapter~IV, Corollary~4]{Serre_loc}, $I$ is an extension of a $p$-group $P$ by a cyclic group of order prime to $p$. Such local extensions are encountered in the context of $G$-Galois covers $f \, \colon \, Y \longrightarrow X$ of smooth connected $k$-curves, where $G$ is a finite group. For a closed point $\xi \in X$ and $y \in f^{-1}(\xi)$, we obtain an extension $\widehat{\mathcal{O}}_{X,\xi} \subset \widehat{\mathcal{O}}_{Y,y}$ of complete discrete valuation rings. For a local parameter $x$ on $X$ at the point $\xi$, we have $\widehat{\mathcal{O}}_{X,\xi} \cong k[[x]]$. The corresponding extension of fraction fields is an extension $L/k((x))$ as above. The normalization of $k[[x]]$ in $L$ is a complete discrete valuation ring, say $R$, whose fraction field is $L$. Let $v_L$ denote the normalized discrete valuation on $L$.
The group $I$ admits a finite decreasing filtration by normal subgroups of $I$ (\cite[Chapter~IV, Proposition~1]{Serre_loc}) $I = I_{(0)} \trianglerighteq I_{(1)} \trianglerighteq \cdots \trianglerighteq I_{(h)} \triangleright \{1\}$ defined as follows.
$$I_{(j)} \coloneqq \{ g \in I \, | \, v_L(g a - a) \geq j+1 \text{ for all } a \in R\}, \, j \geq 0.$$
This filtration is called the \textit{lower numbering ramification filtration}, and the normal subgroup $I_{(j)}$ is called the $j^{\text{th}}$ higher ramification group. By \cite[Corollary~1, Corollary~3]{Serre_loc}, $I_{(0)}/I_{(1)}$ is a cyclic group of order prime to $p$, and for $j \geq 1$, the quotients $I_{(j)}/I_{(j+1)}$ are elementary abelian $p$-groups. This filtration can be extended to the non-negative real numbers as follows: for $u \in \mathbb{R}$, $u \geq 0$, set $I_{(u)} \coloneqq I_{(\ceil{u})}$. There is also an \textit{upper numbering ramification filtration}, defined using the Herbrand function $\phi \, \colon \, [0,\infty) \longrightarrow [0, \infty)$,
$$\phi(u) \coloneqq \int_{0}^u \frac{dt}{[I:I_{(t)}]}.$$
The function $\phi$ is a homeomorphism from $[0,\infty)$ to itself, with the inverse $\psi$. Then one defines
$$I^{(v)} \coloneqq I_{(\psi(v))} \text{ or equivalently, } I^{(\phi(u))} = I_{(u)}.$$
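As a basic illustration: for $I = P \cong \mathbb{Z}/p$ totally ramified with a single lower jump at $l$, we have $[I : I_{(t)}] = 1$ for $t \leq l$ and $[I : I_{(t)}] = p$ for $t > l$, so the Herbrand function is piecewise linear:
```latex
\[
  \phi(u) \;=\;
  \begin{cases}
    u, & 0 \leq u \leq l,\\[4pt]
    l + \dfrac{u - l}{p}, & u > l,
  \end{cases}
\]
% so the unique upper jump is \phi(l) = l: for Z/p-extensions the
% lower and the upper numbering jumps coincide.
```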
It should be noted that these filtrations are usually defined for the decomposition group, the filtration starts from $-1$, and the functions $\phi$, $\psi$ are homeomorphisms of $[-1,\infty)$ onto itself. But since we are only interested in covers of smooth curves over $k$, the above suffices for our purpose.
A real number $l \geq 1$ (respectively, $u \geq 1$) is said to be a \textit{lower jump} (respectively, an \textit{upper jump}) if $I_{(l+\epsilon)} \neq I_{(l)}$ (respectively, $I^{(u+\epsilon)} \neq I^{(u)}$) for every $\epsilon > 0$. Let $l_1, \ldots, l_h$ (which are integers $\geq 1$ by definition) and $u_1, \ldots, u_h$ be the lower and the upper jumps, respectively. When $I = P$ is a $p$-group, the relation between the lower and the upper jumps is given as follows.
Set $l_0 = u_0 = 0$. For $1 \leq i \leq h$, set $s_i \coloneqq [P : P_{(l_i)}] = [P : P^{(u_i)}]$. Then (\cite[Remark~2.1]{Manish_Compositum}) for $i \geq 1$, we have
\begin{equation}\label{eq_jumps_relation}
u_i = \sum_{j=1}^i \frac{l_j-l_{j-1}}{s_j}, \, \, l_i = \sum_{j=1}^i (u_j - u_{j-1})s_j.
\end{equation}
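The conversion~\eqref{eq_jumps_relation} between lower and upper jumps is easy to carry out mechanically. The following small sketch (not part of the original argument; the sample data is a hypothetical $\mathbb{Z}/4$-extension in characteristic $2$, chosen for illustration only) implements both directions and checks that they are mutually inverse.

```python
# Sketch of the jump relations u_i = sum_{j<=i} (l_j - l_{j-1})/s_j and
# l_i = sum_{j<=i} (u_j - u_{j-1}) s_j, with s_j = [P : P_{(l_j)}] and l_0 = u_0 = 0.
from fractions import Fraction

def lower_to_upper(lower, s):
    """Convert lower jumps l_1, ..., l_h to upper jumps, given the indices s_j."""
    upper, prev, acc = [], 0, Fraction(0)
    for l, sj in zip(lower, s):
        acc += Fraction(l - prev, sj)
        upper.append(acc)
        prev = l
    return upper

def upper_to_lower(upper, s):
    """Inverse conversion: l_i = sum_{j<=i} (u_j - u_{j-1}) * s_j."""
    lower, prev, acc = [], Fraction(0), Fraction(0)
    for u, sj in zip(upper, s):
        acc += (u - prev) * sj
        lower.append(acc)
        prev = u
    return lower

# Hypothetical example: lower jumps 1, 3 with s_1 = 1 (P_{(l_1)} = P) and s_2 = 2.
lower, s = [1, 3], [1, 2]
upper = lower_to_upper(lower, s)
assert upper == [1, 2]                      # u_1 = l_1, u_2 = 1 + (3-1)/2
assert upper_to_lower(upper, s) == lower    # round trip recovers the lower jumps
```

Exact rational arithmetic is used since upper jumps need not be integers in general.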
The following result shows the behavior of the upper ramification filtration with respect to taking quotient extensions.
\begin{proposition}\label{prop_jump}
Let $L/k((x))$ be a $P$-Galois extension and $N$ be a normal subgroup of $P$.
\begin{enumerate}
\item (\cite[Chapter~IV, Proposition~14]{Serre_loc}) $(P/N)^v = P^v N/N$ for all $v$.\label{j:1}
\item (\cite[Lemma~2.3]{Manish_Compositum}) If $u_1, \ldots, u_r$ are the upper jumps of $P/N$, then $u_1, \ldots, u_r$ are also among the upper jumps of $P$.\label{j:2}
\end{enumerate}
\end{proposition}
The above results together with \cite[Proposition~3.1]{Manish_Compositum} show the following.
\begin{proposition}\label{prop_jump_quotients}
Let $L/k((x))$ be a $P$-Galois extension with upper jumps $u_1, \ldots, u_r$. Let $N$ be a normal subgroup of $P$ such that $P/N$ is an elementary abelian group of $p$-exponent $e$, \, $e \geq 1$. Then the $P/N \cong (\mathbb{Z}/p)^e$-Galois extension $L^N/k((x))$ has the following description.
\begin{enumerate}
\item There exists a subset $\{v_1,\ldots,v_t\} \subset \{u_1, \ldots, u_r\}$, positive integers $e_1, \ldots, e_t$ with $\sum_{j=1}^t e_j = e$, and Artin-Schreier polynomials $R_i \coloneqq Z^p-Z-f_i(x) \in k((x))[Z]$ having a root $\alpha_i$ and $v_{x^{-1}}(f_i) = v_j$ for $i \in \{e_{j-1}+1, \ldots, e_j\}$, $1 \leq j \leq t$ such that $L^N/k((x))$ is given by the compositum $k((x))(\alpha_1, \ldots, \alpha_e)/k((x))$ of the $\mathbb{Z}/p$-Galois extensions $k((x))(\alpha_i)/k((x))$.\label{d:1}
\item For any two disjoint subsets $S_1$ and $S_2$ in $\{1, \dots, e\}$, the extension $L_1/k((x))$ constructed as the compositum of the extensions $k((x))(\alpha_i)/k((x))$, $i \in S_1$, and the extension $L_2/k((x))$ constructed as the compositum of the extensions $k((x))(\alpha_j)/k((x))$, $j \in S_2$, are linearly disjoint.\label{d:2}
\item $v_1, \ldots, v_t$ are the upper jumps of the extension $L^N/k((x))$. They also occur among the upper jumps of the extension $L/k((x))$.\label{d:3}
\end{enumerate}
Moreover, if $N = \text{Frat}(P)$ is the Frattini subgroup of $P$, and $v_1$ is minimal among the jumps $v_1, \ldots, v_t$, then $v_1$ is the minimal upper jump in the ramification filtration of $P$ for the extension $L/k((x))$.
\end{proposition}
\begin{proof}
Since $k$ is an algebraically closed field, the Galois extension $L^N/k((x))$ is totally ramified. Since the Galois group is an elementary abelian group of $p$-exponent $e$, $L^N/k((x))$ is a compositum of $e$-many $\mathbb{Z}/p$-Galois extensions of $k((x))$ given by Artin-Schreier polynomials $R_i \coloneqq Z^p-Z-f_i(x) \in k((x))[Z]$. If $\alpha_i$ is a root of $R_i$ for $1 \leq i \leq e$, we have $L^N = k((x))(\alpha_1, \ldots, \alpha_e)$ as a Galois extension of $k((x))$. Since $L^N/k((x))$ is totally ramified, \eqref{d:2} follows. The upper jumps of the group $P/N$ corresponding to the extension are given by the set $U = \{v_{x^{-1}}(f_i(x)) \, | \, 1 \leq i \leq e\}$. Let $U = \{v_1, \ldots, v_t\}$. By Proposition~\ref{prop_jump}~\eqref{j:2}, $U \subset \{u_1, \ldots, u_r\}$, proving~\eqref{d:3}. For $1 \leq j \leq t$, taking $e_j \coloneqq \# \{ 1 \leq i \leq e \, | \, v_{x^{-1}}(f_i(x)) = v_j \}$, statement \eqref{d:1} follows.
The moreover statement follows from the above since $\text{Frat}(P)$ is the normal subgroup of $P$ such that $P/\text{Frat}(P)$ is the maximal elementary abelian quotient of $P$.
\end{proof}
\section{On some details on the wreath products}\label{sec_wreath}
Let $p$ be a prime number. This section concerns the (permutation) wreath products of the form $N \, \wr \, H$ where $N$ is a quasi $p$-group and $H$ is a transitive permutation quasi $p$-group of degree $d$. We will be interested in the cases where $H \, = \, A_d$ for $d \geq \text{max}\{5, \, p\}$, or when $H \, = \, S_d$ with $d \geq 5$ and $p \, = \, 2$. We will classify the potential purely wild inertia groups for such groups. Recall that any group $G \, = \, N \wr H$ has the underlying set of elements $(a_1, \ldots, a_d; h)$, where $a_i \in N$ for $1 \leq i \leq d$ and $h \in H$. The group multiplication in $G$ is defined by the law
$$(a_1, \ldots, a_d; h_1) (c_1, \ldots, c_d; h_2) \, = \, (a_1 \, c_{h_1^{-1}(1)}, \ldots, a_d \, c_{h_1^{-1}(d)}; h_1 \, h_2),$$
and the inverse of an element is given by
$$(a_1, \ldots, a_d; h)^{-1} \, = \, (a_{h(1)}^{-1}, \ldots, a_{h(d)}^{-1}; h^{-1}).$$
Any element of the form $(1, \ldots, 1; h) \in G$, $h \in H$, will be denoted by $(\mathbf{1} ; h)$. Under this notation, the identity element of $G$ is $(\mathbf{1}; 1)$, where $1$ is the trivial permutation.
For any $1 \leq i \leq d$, $N$ can be identified with the subgroup of elements $(a_1, \ldots, a_d; 1) \in G$ where $a_j \, = \, 1$ for $j \neq i$, and $a_i \in N$. Similarly, $H$ is identified with the subgroup $\{ (\mathbf{1}; h) \, | \, h \in H \} \subset G$. For any subset $\chi \subset \{1, \ldots, d\}$ and any subgroup $N_1 \leq N$, the group $N_1^{\# \chi} $ is identified with the subgroup of $G$ consisting of elements $(a_1, \ldots, a_d; 1)$ where $a_j \, = \, 1$ for $j \not\in \chi$ and $a_i \in N_1$ for $i \in \chi$. More generally, if $H_1 \subset H$ is a permutation subgroup and $N_1$ is a subgroup of $N$, for any $H_1$-invariant subset $\chi \subset \{1, \ldots, d\}$, $N_1 \wr H_1 = (N_1^{\# \chi}) \rtimes H_1$ can be considered as a subgroup of $G$ (cf. \cite[Lemma~1.9, Lemma~1.13]{wr}).
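As a computational sketch (purely illustrative, not part of the original text), the wreath-product group law and inverse above can be verified on small data. Here the base group $N$ is taken to be $\mathbb{Z}/3$ written additively, and permutations are $0$-indexed tuples.

```python
# Wreath product G = N wr H: (a; h1)(c; h2) = (a_i + c_{h1^{-1}(i)}; h1 h2),
# with N = Z/3 additive and permutations stored as tuples h with h[i] = h(i).

def mult(x, y, mod=3):
    a, h1 = x
    c, h2 = y
    h1_inv = [h1.index(i) for i in range(len(a))]   # h1^{-1}
    return (tuple((a[i] + c[h1_inv[i]]) % mod for i in range(len(a))),
            tuple(h1[h2[i]] for i in range(len(a))))   # composition h1 o h2

def inv(x, mod=3):
    """(a; h)^{-1} = (-a_{h(1)}, ..., -a_{h(d)}; h^{-1}) in additive notation."""
    a, h = x
    return (tuple((-a[h[i]]) % mod for i in range(len(a))),
            tuple(h.index(i) for i in range(len(a))))

d = 5
e = (tuple([0] * d), tuple(range(d)))           # identity element (1; 1)
g = ((1, 2, 0, 1, 2), (1, 2, 0, 3, 4))          # arbitrary test elements
h = ((2, 0, 1, 2, 0), (0, 1, 3, 2, 4))
assert mult(g, inv(g)) == e and mult(inv(g), g) == e      # two-sided inverse
assert mult(mult(g, h), g) == mult(g, mult(h, g))         # associativity spot-check
```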
We note some important facts about the group $G$ that are used in this paper.
\begin{enumerate}\label{list}
\item $G$ being an extension of a quasi $p$-group $N^d$ (known as the `base group') by the quasi $p$-group $H$, it is a quasi $p$-group.\label{l:1}
\item Since $H$ acts transitively on the set $\{1, \ldots, d\}$, $G$ is generated by the subgroups $N$ and $H$ in $G$ (\cite[Lemma~1.11]{wr}).\label{l:2}
\item For the details of the centralizer and the derived subgroups we refer to \cite[Section~1.4, page~11]{wr}. In particular, by \cite[Corollary~4.9]{wr}, if $N$ is a perfect group (i.e., the derived subgroup $N'$ of $N$ equals $N$), the derived subgroup $G'$ of $G$ is given by
\begin{equation}\label{eq_derived}
G' \, = \, N \wr H',
\end{equation}
where $H'$ is the derived subgroup of $H$.\label{l:3}
\item By \cite[Theorem~4.10]{wr}, when $N$ is a perfect group, the normal closure of $H$ in $G$ is the group $G$ itself.\label{l:4}
\item From the definition of the wreath product, it follows that if $g \in G$ commutes with every element of the base group $N^d$, then $g \in N^d$.\label{l:5}
\end{enumerate}
For the remainder of this section, we assume the following.
\emph{$N$ is a perfect quasi $p$-group, and $H$ is either $A_d$, $d \geq \text{max}\{5, \, p\}$ or $S_d$, $d \geq 5$, when $p \, = \, 2$. $G \, = \, N \wr H$. Let $\pi \, \colon \, G \twoheadrightarrow H$ denote the natural projection, $\pi((a_1, \ldots, a_d; h)) \, = \, h$. Let $\tau \in H$ be the product of disjoint $p$-cycles $\tau \, = \, \tau_1 \, \cdots \, \tau_r$ for some $1 \leq r \leq \floor{d/p}$, where}
\begin{equation}\label{eq_tau_u}
\tau_u \, = \, ( (u-1)p+1, \ldots, up )
\end{equation}
\emph{for $1 \leq u \leq r$.}
We have the following classification of certain elements of $G$ of $p$-power order and their conjugates in $G$.
\begin{proposition}\label{prop_order_p-power_elements}
Let $p$ be a prime number. Under the above hypotheses, an element $g \in G$ with $\pi(g) \, = \, \tau$ of order $p^f$, $f \geq 1$, is of the form $g \, = \, (a_1, \ldots, a_d; \tau)$ satisfying the following conditions.
\begin{enumerate}
\item For each $1 \leq u \leq r$, $\left( a_{up} \, \cdots \, a_{(u-1)p+1} \right)^{p^{f-1}} \, = \, 1$;\label{ord:1}
\item For each $rp+1 \leq i \leq d$, $a_i^{p^{f}} \, = \, 1$.\label{ord:2}
\end{enumerate}
Moreover, if $f \geq 2$, there is either a $u$, $1 \leq u \leq r$, such that the product $a_{up} \, \cdots \, a_{(u-1)p+1}$ has order $p^{f-1}$ or an $i$, $rp+1 \leq i \leq d$, with $a_i$ having order $p^f$.
Two elements $(a_1, \ldots,a_d; \tau)$ and $(a'_1, \ldots, a'_d; \tau)$ are conjugate in $G$ if and only if the following hold.
\begin{enumerate}
\item For each $1 \leq u \leq r$, the elements $a_{up} \, \cdots \, a_{(u-1)p+1}$ and $a'_{up} \, \cdots \, a'_{(u-1)p+1}$ are conjugate in $N$;
\item for each $rp+1 \leq i \leq d$, the elements $a_i$ and $a'_i$ are conjugate in $N$.
\end{enumerate}
\end{proposition}
\begin{proof}
Let $g \in G$ with $\pi(g) \, = \, \tau$ have order $p^f$, $f \geq 1$. Then $g \, = \, (a_1, \ldots, a_d; \tau)$ for some $a_i \in N$, and $g^p$ has order $p^{f-1}$. We have $g^p \, = \, (z_1, \ldots, z_d; 1)$, where
$$ z_i =
\begin{cases}
a_i \, a_{i-1} \, \cdots \, a_{(u-1)p+1} \, a_{up} \, a_{up-1} \, \cdots \, a_{i+1} , & \text{if } \substack{i \, = \, (u-1)p+j, \, 1 \leq j \leq p-1,\\ 1 \leq u \leq r,}\\
a_{up} \, a_{up-1} \, \cdots \, a_{(u-1)p+1}, & \text{if } i \, = \, up, \, 1 \leq u \leq r,\\
a_i^p, & \text{if } rp+1 \leq i \leq d.
\end{cases}$$
Since for $i \, = \, (u-1)p+j$, $1 \leq j \leq p-1$, $1 \leq u \leq r$, we have
$$z_i \, = \, (a_i \, a_{i-1} \, \cdots \, a_{(u-1)p+1}) \, z_{up} \, (a_{(u-1)p+1}^{-1} \, \cdots \, a_i^{-1}),$$
the conditions \eqref{ord:1} and \eqref{ord:2} are necessary. If $f \geq 2$, the element $g^p$ is non-trivial, producing the additional condition. The converse is immediate.
The second statement follows from \cite[Corollary~1.10]{wr}.
\end{proof}
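The coordinate formula for $g^p$ can be checked numerically in a tiny hypothetical case (an illustration only, not part of the original proof): $p = 3$, $d = 5$, $r = 1$, $\tau = (1\,2\,3)$, and the base group taken abelian ($\mathbb{Z}/9$, written additively), so the block product $a_{up} \cdots a_{(u-1)p+1}$ becomes the block sum.

```python
# Check the coordinates of g^3 for g = (a; tau), tau the 3-cycle on the first
# block, N = Z/9 additive; permutations are 0-indexed tuples.

def mult(x, y, mod=9):
    a, h1 = x
    c, h2 = y
    h1_inv = [h1.index(i) for i in range(len(a))]
    return (tuple((a[i] + c[h1_inv[i]]) % mod for i in range(len(a))),
            tuple(h1[h2[i]] for i in range(len(a))))

tau = (1, 2, 0, 3, 4)              # 0-indexed 3-cycle on {0,1,2}, identity elsewhere
a = (1, 2, 3, 4, 5)
g = (a, tau)
g3 = mult(mult(g, g), g)
# Block coordinates all carry the block sum 1+2+3 = 6; fixed coordinates carry p*a_i.
assert g3 == ((6, 6, 6, 12 % 9, 15 % 9), (0, 1, 2, 3, 4))
# With block sum and every p*a_i trivial, g has order p, matching conditions (1)-(2) for f = 1.
g = ((1, 3, 5, 3, 6), tau)
assert mult(mult(g, g), g) == (tuple([0] * 5), (0, 1, 2, 3, 4))
```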
\begin{proposition}\label{prop_wr_inertia_candidates}
Let $p$ be a prime number, $N$ be a simple quasi $p$-group, $G \, = \, N \wr A_d$, $d \geq \text{max}\{p, \, 5\}$. Let $\tau \, = \, \tau_1 \, \cdots \, \tau_r \in A_d$ for some $1 \leq r \leq \floor{d/p}$, where $\tau_u$ is the $p$-cycle as in~\eqref{eq_tau_u}, $1 \leq u \leq r$ ($r$ is even if $p \, = \, 2$). Suppose that the following holds. For any elements $b_1, \, \ldots, \, b_r, \, a_{rp+1}, \, \ldots, \, a_d \in N$ of $p$-power order,
\begin{equation}\label{eq_def_g}
g \coloneqq (a_1, \ldots, a_d; \tau) \in G,
\end{equation}
where $a_{(u-1)p+1} \, = \, b_u$ and $a_{(u-1)p+j} \, = \, 1$ for $2 \leq j \leq p$, the pair $(G, \, \langle \, g \, \rangle)$ is realizable. Then the PWIC holds for $G$.
\end{proposition}
\begin{proof}
Let $P \subset G$ be a $p$-subgroup such that $G \, = \, \langle \, P^G \, \rangle$ (recall that $P^G$ denotes the set of conjugates of $P$ in $G$). So the conjugates of $\pi(P)$ in $A_d$ generate $A_d$, and $P$ contains an element $g_1$ such that $\pi(g_1)$ is a non-trivial permutation in $A_d$. If $\text{ord}(\pi(g_1)) \, = \, p^{f_1}$, $f_1 \geq 2$, then $\text{ord}(g_1) \geq p^{f_1}$ and $\pi(g_1^{p^{f_1-1}})$ is again a non-trivial permutation in $A_d$. So $P$ contains an element $g_2$ such that $\pi(g_2)$ has order $p$ in $A_d$. Since the inertia groups above a point in a connected $G$-Galois cover are conjugate to each other, after replacing $P$ by its conjugate $(\mathbf{1}; h)^{-1} \, P \, (\mathbf{1};h)$ for a suitable $h \in A_d$, we may assume that $P$ contains an element $g_3 \, = \, (a_1, \ldots, a_d; \tau)$ where $\tau \, = \, \tau_1 \, \cdots \, \tau_r$, for some $1 \leq r \leq \floor{d/p}$ ($r$ is necessarily an even integer when $p \, = \, 2$). For $1 \leq u \leq r$, consider $b_u \coloneqq a_{up} \, \cdots \, a_{(u-1)p+1}$. Let $g$ be defined as in \eqref{eq_def_g}. By Proposition~\ref{prop_order_p-power_elements}, $g \, = \, g_4^{-1} \, g_3 \, g_4$ for some $g_4 \in G$. Again replacing $P$ by its conjugate $g_4^{-1} \, P \, g_4$, we may assume that $P$ contains $g$ as above. By our assumption, the pair $(G, \, \langle \, g \, \rangle)$ is realizable. Then by \cite[Theorem~2]{2}, the pair $(G, \, P)$ is realizable as well.
\end{proof}
\begin{remark}\label{rmk_generation}
The hypothesis of the above proposition assumes the realization of the pairs $(G, \, \langle \, g \, \rangle)$. We remark that for any $g \in G$ of $p$-power order such that $\pi(g)$ is a non-trivial even permutation in $A_d$, the conjugates of $\langle \, g \, \rangle$ generate $G$. To see this, set $N_1 \coloneqq \langle \, \langle \, g \, \rangle^G \, \rangle$. This is necessarily a normal subgroup of $G$. An easy calculation shows that the commutators $[z_1, \, z_2] \in N_1 \cap N^d$ for any elements $z_1 \in N_1$, $z_2 \in N^d$. So $N_1 \cap N^d$ is trivial if and only if the elements of $N_1$ commute with the elements of $N^d$ in $G$, and this is equivalent to $N_1$ being contained in $N^d$. As $N_1 \not\subset N^d$, we have a non-trivial element $(c_1, \ldots, c_d; 1) \in N_1 \cap N^d$. Since $N$ is a non-abelian simple group, $N^d \subset N_1$. So we have a surjective homomorphism $G/N^d \twoheadrightarrow G/N_1$; since $G/N^d \cong A_d$ is simple and $N_1 \neq N^d$ (as $\pi(g)$ is non-trivial), the quotient $G/N_1$ is trivial. Thus $N_1 \, = \, G$.
\end{remark}
\begin{remark}\label{rmk_std_wr}
Another important quasi $p$-group is a standard wreath product $N \wr_{\text{St}} H = N^{\# H} \rtimes H$ for two finite quasi $p$-groups $N$ and $H$, where $H$ acts on $N^{\# H}$ via its regular representation. In fact, for a split extension $G_1 \rtimes G_2$ of quasi $p$-groups, there is a natural embedding of $G_1 \rtimes G_2$ in $G_1 \wr_{\text{St}} G_2$. An element of $N \wr_{\text{St}} H$ is of the form $((a_h)_{h \in H}; h_1)$ for $a_h \in N$, $h_1 \in H$. Since the regular representation of $H$ is faithful and transitive, we can classify the $p$-power order elements in $N \wr_{\text{St}} H$ and their conjugates as in Proposition~\ref{prop_order_p-power_elements}. Then for simple non-abelian quasi $p$-groups $N$ and $H$, to show that the PWIC holds for the standard wreath product $G = N \wr_{\text{St}} H$, it is enough to show (in view of \cite[Theorem~2]{2}) that for every element $a \in N$ of $p$-power order and every element $h \in H$ of order $p$, the pair $(G, \, \langle \, ((a, 1, \ldots, 1); h) \, \rangle)$ is realizable.
\end{remark}
\end{appendix}
\bibliographystyle{amsplain}
\section{Introduction}
Recently, researchers have been using the weak gravity conjecture to study inflation models as well as other cosmological implications. The weak gravity conjecture in theories coupled to gravity expresses the important requirement that gravity should be the weakest force. In fact, the weak gravity conjecture is an important tool to distinguish between two sectors of effective low-energy theories: the landscape (theories compatible with quantum gravity) and the swampland (theories incompatible with quantum gravity) \cite{1}. At low energies, the swampland is much wider than the landscape; indeed, the landscape is negligible compared to the swampland at low energy levels. An important question, then, is what kinds of effective field theories are consistent with quantum gravity. Some examples can be seen in Refs. \cite{R1,R2,R3}. The swampland program provides two conditions, the distance and dS conjectures, which are used to test inflation models against specific criteria. For more information about the weak gravity conjecture, the landscape and the swampland, one can consult Refs.~\cite{1,2,3,4,5,6,7,8,9,10,11,12,13,14,15}. Usually, quantum gravitational corrections are ignored in studies of eternal inflation and only large-field models of inflation are considered. In order to take quantum gravitational corrections into account, one should consider small-field models, for example via the swampland conjectures. Of course, we need to mention a few very important points.
In general, the swampland conjectures have not yet been widely accepted in the literature and cannot be considered a complete theory; rather, by stating conjectures and refinements, they seek to resolve a series of ambiguities and, in a sense, to provide evidence for string theory.
Although this framework gives valuable answers in settings such as warm inflation and a few other examples, it still faces many challenges and open problems.
It is possible that in the future, with the further development of this framework and perhaps more observations, a consistent picture of many of these issues will emerge. For now it is still a young program, growing and being modified, and being applied to various cosmological structures such as inflation, black hole physics, and dark energy. The second point is that the quantum gravitational corrections are negligible in small-field models and very strong (potentially dominant) in large-field models. That is why the effective field theories associated with large-field models are not always reliable, whereas those corresponding to small-field models are. In connection with the eternal inflation mentioned above, we note that although the initial swampland conjectures, as indicated by Matsui and Takahashi and by Dimopoulos, are generally incompatible with eternal inflation, more recently William H. Kinney, in his article ``Eternal Inflation and the Refined Swampland Conjecture'', has shown that the refined swampland conjecture, which applies weaker criteria to the potential of the scalar field in inflation, is somewhat consistent with eternal inflation \cite{2R1, 2R2, 2R3}. In fact, the weak gravity conjecture, the landscape and the swampland are used to study inflation models and low-energy gravity theories in order to determine their compatibility with quantum gravity. The use of the swampland criteria to study various inflation models in different theories of gravity has been considered by many researchers \cite{16,17,18,19,20,20-2}.
In the last few years, many inflation models have been evaluated from different aspects, such as the slow-roll and constant-roll conditions \cite{21,22,23,24,25,26, 26p}. To be precise, slow-roll models depict an early phase of de Sitter evolution of the Universe.
In order to have inflation in effective low energy theories, inflationary models must be compatible with a UV complete field theory. The swampland is the set of seemingly consistent effective field theories which cannot be completed into quantum gravity in the UV. Thus, in order to be compatible with quantum gravity, an effective field theory must satisfy the following criteria \cite{27,28,29,30,31}. The first swampland criterion (distance conjecture), which limits the range traversed by scalar fields $\phi$, is
\begin{equation}\label{1}
\frac{\Delta\phi}{M_{pl}}<\Delta \sim \mathcal{O}(1),
\end{equation}
where $M_{pl}$ denotes the Planck mass.
Another swampland criterion (dS conjecture) requires the gradient of the potential $V$ of any scalar field to satisfy the lower bound \cite{32,33}
\begin{equation}\label{2}
M_{pl}\frac{V'}{V}>c \sim \mathcal{O}(1),
\end{equation}
where $c>0$ and $'$ denotes the derivative with respect to $\phi$. In the study of inflation models on the brane, the cosmological parameters always acquire modifications, which we discuss in the next section. However, the number of e-folds for single-field inflation is given by
\begin{equation}\label{3}
N=\frac{1}{M_{pl}^{2}}\int\frac{V}{V'}d\phi\approx\frac{\frac{\Delta\phi}{M_{pl}}}{M_{pl}(\frac{V'}{V})}.
\end{equation}
Recently, researchers have used various methods to overcome the obstacles presented by these criteria in studying such models and formalisms, which can be seen in Refs. \cite{34,35}. From previous studies of inflation models using these criteria, we know that a number of inflation models are compatible with these criteria and some models are incompatible with them \cite{36}. Of course, there are models that are compatible with these criteria but are associated with problems in the initial conditions, which have been extensively discussed in Refs. \cite{2002.02941} and \cite{1910.08837}. The objective of this paper is to improve the aforementioned situation.
In section \ref{sec2}, we study the inflation model on the brane. In section \ref{sec3}, we introduce the modified $f(R)$ gravitational model on the brane. With respect to the concepts proposed in section \ref{sec2}, we investigate some cosmological parameters such as $r$, $n_{s}$, and $\alpha$. Then we plot some figures to determine the allowed regions of each parameter. Finally, in section \ref{sec4}, we summarize the results of this paper.
\section{Inflation Model on Brane}\label{sec2}
In this section, we express the modification of the cosmological parameters from the brane perspective.
Inflation models have been studied from different perspectives. Given the models on the brane as well as the concepts associated with the braneworld, we know that our $4D$ world is a $3$-brane located in the higher dimensions of the bulk \cite{36,37,38,39}. Within the context of brane inflation, there exists a series of modifications of the cosmological parameters. The first relation in the brane scenario is the Friedmann equation, which is modified to
\begin{equation}\label{4}
H^{2}=\frac{1}{3M_{pl}^2}\rho\left(1+\frac{\rho}{2\Lambda}\right).
\end{equation}
Here $\Lambda$ is the three-dimensional brane tension, which relates the Planck masses as follows:
\begin{equation}\label{5}
M_{4}=\sqrt{\frac{3}{4\pi}}\left(\frac{M_{5}^{2}}{\sqrt{\Lambda}}\right)M_{5},
\end{equation}
where $M_{4}$ is $4$-dimensional Planck mass scale and $M_{5}$ is $5$-dimensional Planck mass scale.
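As a quick numerical sanity check (an illustration, not part of the original derivation), the modified Friedmann equation (\ref{4}) interpolates between the standard $4D$ behavior at low energy and a quadratic density dependence at high energy; the sketch below verifies both limits in units where $M_{pl} = 1$, with a hypothetical brane tension.

```python
# Brane-modified Friedmann equation, eq. (4), in units M_pl = 1.
def H2(rho, Lam):
    """H^2 = (rho/3) * (1 + rho/(2*Lam))."""
    return (rho / 3.0) * (1.0 + rho / (2.0 * Lam))

Lam = 1.0  # hypothetical brane tension, for illustration only

# Low-energy limit rho << Lambda: recovers the standard H^2 = rho/3.
rho = 1e-8
assert abs(H2(rho, Lam) - rho / 3.0) / (rho / 3.0) < 1e-6

# High-energy limit rho >> Lambda: H^2 ~ rho^2/(6*Lambda), characteristic of brane inflation.
rho = 1e8
assert abs(H2(rho, Lam) - rho**2 / (6.0 * Lam)) / (rho**2 / (6.0 * Lam)) < 1e-6
```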
Also the two modified slow-roll parameters can be defined \cite{40}. The first slow-roll parameter ($\epsilon$), which is a measure of the slope of the
potential, is given by
\begin{equation}\label{6}
\epsilon\equiv\frac{1}{2}\left(\frac{V'}{V}\right)^{2}\frac{1}{(1+\frac{V}{2\Lambda})^{2}}\left(1+\frac{V}{\Lambda}\right),
\end{equation}
and the second slow-roll parameter ($\eta$) is given by
\begin{equation}\label{7}
\eta\equiv \left(\frac{V''}{V}\right)\left(\frac{1}{1+\frac{V}{2\Lambda}}\right).
\end{equation}
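The modified slow-roll parameters (\ref{6}) and (\ref{7}) can be evaluated directly; the sketch below (illustrative only, with a hypothetical quadratic potential and units $M_{pl}=1$) checks that they reduce to the standard slow-roll parameters as $\Lambda \to \infty$ and that $\epsilon$ is suppressed in the high-energy regime $V/\Lambda \gg 1$.

```python
def brane_slow_roll(V, dV, d2V, phi, Lam):
    """Brane-modified slow-roll parameters of eqs. (6)-(7), units M_pl = 1."""
    v, vp, vpp = V(phi), dV(phi), d2V(phi)
    eps = 0.5 * (vp / v)**2 * (1.0 + v / Lam) / (1.0 + v / (2.0 * Lam))**2
    eta = (vpp / v) / (1.0 + v / (2.0 * Lam))
    return eps, eta

# Hypothetical quadratic potential V = m^2 phi^2 / 2 (illustration only).
m2 = 1e-10
V = lambda p: 0.5 * m2 * p**2
dV = lambda p: m2 * p
d2V = lambda p: m2

phi = 15.0
# Lambda -> infinity recovers the standard slow-roll parameters.
eps, eta = brane_slow_roll(V, dV, d2V, phi, 1e30)
assert abs(eps - 0.5 * (dV(phi) / V(phi))**2) < 1e-12
assert abs(eta - d2V(phi) / V(phi)) < 1e-12

# High-energy regime V/Lambda >> 1: eps ~ 2 (V'/V)^2 Lambda/V, strongly suppressed.
Lam = 1e-20
eps, _ = brane_slow_roll(V, dV, d2V, phi, Lam)
approx = 2.0 * (dV(phi) / V(phi))**2 * Lam / V(phi)
assert abs(eps - approx) / approx < 1e-3
```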
The number of e-folds ($N$) during inflation is as follows \cite{40}:
\begin{equation}\label{8}
N=- \frac{8 \pi}{M_4^2}\int_{\phi_{e}}^{\phi_{i}}\left(\frac{V}{V'}\right)\left(1+\frac{V}{2\Lambda}\right)d\phi.
\end{equation}
The power spectrum $(P_{R})$ is given by
\begin{equation}\label{9}
P_{R}=\frac{1}{12\pi^{2}}\frac{V^{3}}{V'^{2}}\left(1+\frac{V}{2\Lambda}\right)^{3},
\end{equation}
which is evaluated at horizon crossing.
Also, the scale-dependence of the perturbations is described by following scalar spectral index $(n_{s})$:
\begin{equation}\label{10}
n_{s}=1+2\eta-6\epsilon.
\end{equation}
In addition, the running spectral index, $\alpha$, has the following form:
\begin{equation}\label{11}
\alpha=\frac{dn_{s}}{d\ln k}=-\frac{dn_{s}}{dN}.
\end{equation}
Now, we apply these concepts to the modified $f(R)$ gravitational model. Then we determine the range of each parameter by plotting some figures and comparing with observational data.
\section{Modified $f(R)$ gravitational model on the brane}\label{sec3}
Recently, the characteristics of our universe have been described using various inflation models. These modified inflation models play a very important role in describing the features of our universe, for example, dark energy as well as the cosmic acceleration. Different types of modified gravitational models are also used to describe quantum gravity. A number of these models are commonly used to describe neutron stars, while others are used to study a variety of different cosmological models and gluons \cite{41,42,43,44,45,46,47,48}. These inflation models are studied under different implications and conditions.
Now, we want to examine a modified $f(R)$ gravitational model on the brane and by calculating the modified cosmological parameters, we study the compatibility of this model with swampland criteria. We consider a modified gravitational model with following expression of $f(R)$:
\begin{equation}\label{12}
f(R)=R+F(R)=R+\gamma R^{p},
\end{equation}
where $p$ is a positive number (not necessarily an integer). Also, $\gamma$ is a constant parameter; this coefficient resolves the dimensional problem for such a model. Here we take this coefficient to be positive and, for calculating the other important quantities, set it to unity. The result for this model with $\gamma=1$ is very interesting, as we explain in detail in the following. Other values of $\gamma$ can be considered in an independent study.\\
First of all, we assume $V / \Lambda \gg 1$, so that the effect of the brane is very significant. Hence, we should impose an upper bound on $\Lambda$. It should be noted that if $V$ is decreasing during inflation and $V /\Lambda \gg 1$ is satisfied at the end of inflation, then it is satisfied throughout inflation.\\
In order to investigate the potential of $f(R)$ gravitational model, we begin with the following Hilbert-Einstein action:
\begin{equation}\label{13}
S=\int d^{4}x\sqrt{-g}\left[\frac{1}{2k^2}\widetilde{R}-\frac{1}{2}\widetilde{g}^{\mu\nu}\nabla_{\mu}\phi \nabla_{\nu}\phi - V(\phi)\right],
\end{equation}
where $\widetilde{g}^{\mu\nu}= f'(R) {g}^{\mu\nu}$ and $\widetilde{R} =\varphi R$, with $\varphi = f'(R)=\exp\left(\sqrt{\frac{2}{3}}\frac{\phi}{M_{pl}}\right)$.
Now, we arrange the potential as
\begin{equation}\label{16}
V(\phi)=\frac{(\varphi-1)R(\phi)-F(R(\phi))}{2\varphi^{2}},
\end{equation}
and
we investigate the potential of $f(R)$ gravitational model, which is given by
\begin{equation}\label{17}
V(\phi)=\frac{1}{2}e^{-2\sqrt{\frac{2}{3}}\frac{\phi}{M_{pl}}}\left(-1+e^{\sqrt{\frac{2}{3}}\frac{\phi}{M_{pl}}}\right)^{\frac{p}{-1+p}}M_{pl}(p-1)p^{\frac{p}{1-p}}.
\end{equation}
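As an algebraic sanity check (illustrative, not part of the original text), for $p=2$ the potential (\ref{17}) reduces to the familiar Starobinsky form $\tfrac{1}{8}\left(1-e^{-\sqrt{2/3}\,\phi}\right)^2$ in units $M_{pl}=1$, since $e^{-2x}(e^x-1)^2=(1-e^{-x})^2$ and $\tfrac{1}{2}(p-1)p^{p/(1-p)}=\tfrac{1}{8}$. The sketch below verifies this numerically.

```python
import math

def V(phi, p):
    """Potential of eq. (17) in units M_pl = 1."""
    x = math.sqrt(2.0 / 3.0) * phi
    return (0.5 * math.exp(-2.0 * x) * (math.exp(x) - 1.0)**(p / (p - 1.0))
            * (p - 1.0) * p**(p / (1.0 - p)))

# For p = 2, eq. (17) coincides with the Starobinsky potential (1/8)(1 - e^{-x})^2.
for phi in (0.5, 1.0, 5.0):
    x = math.sqrt(2.0 / 3.0) * phi
    assert abs(V(phi, 2) - 0.125 * (1.0 - math.exp(-x))**2) < 1e-12
```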
Then, according to equation (\ref{17}), we have
\begin{equation}\label{18}
\frac{V'}{V}=-\frac{p(-\frac{\phi}{M_{pl}})^{-1+p}(-\frac{\phi}{M_{pl}})^{-p}\left(2^{\frac{3p}{2}}(-1+2p)(-\frac{\phi}{M_{pl}})^{-p}-3^{\frac{p}{2}}(-1+p)p!\right)}{M_{pl}(-1+p)\left(2^{\frac{3p}{2}}(-\frac{\phi}{M_{pl}})^{-p}-3^{\frac{p}{2}}p!\right)},
\end{equation}
where $\prime$ denotes the first derivative with respect to $\phi$.
The second derivative of potential divided by potential leads to
\begin{equation}
\frac{V''}{V}=\frac{D(A+B)}{C},\label{19}
\end{equation}
where the explicit expression for $A$, $B$, $C$ and $D$ are, respectively,
\begin{eqnarray}
A&=&\left(M_{pl}p^{\frac{-2+p}{-1+p}}+2(1+p(-3+4p))\right)\left(-\frac{\phi}{M_{pl}}\right)^{2p},\\
B&=&-2^{1+\frac{3p}{2}}\cdot 3^{\frac{p}{2}}(2+5(-1+p)p)\left(-\frac{\phi}{M_{pl}}\right)^{p}\Gamma(1+p)+2\cdot 3^{p}(-1+p)^{2}\Gamma^{2}(1+p),\\
C&=&M_{pl}^{3}(-1+p)^{2}\left(2^{\frac{3p}{2}}\left(-\frac{\phi}{M_{pl}}\right)^{-p}-3^{\frac{p}{2}}p!\right)^{2},
\\
D&=&8^{p}p^{2+\frac{1}{p-1}}\left(-\frac{\phi}{M_{pl}}\right)^{-2}.
\end{eqnarray}
In the following, we will assume $\frac{V}{\Lambda}\gg1$; at the end of the section, this assumption will turn out to be valid.
According to equations (\ref{6}) and (\ref{7}), the modified slow-roll parameters
are calculated by
\begin{equation}\label{20}
\epsilon=\frac{2^{2-\frac{3p}{2}}3^{\frac{p}{2}}\Lambda p^{3+\frac{1}{-1+p}}p!(2^{3p}(-1+2p)^{2})(\frac{2^{\frac{3p}{2}}3^{-\frac{p}{2}}}{p!})^{-\frac{p}{-1+p}}(-\phi^{\frac{-2p^{2}-p+2}{p-1}})}{2^{3p}(-1+p)^{3}},
\end{equation}
\begin{equation}\label{21}
\eta=\frac{2^{2-\frac{3p}{2}}3^{\frac{p}{2}}\Lambda p^{3+\frac{2}{p-1}} 8^{p}(p^{\frac{p-2}{p-1}}+2(1+p(-3+4p))) p!(\frac{2^{\frac{3p}{2}}3^{-\frac{p}{2}}}{p!})^{\frac{p}{1-p}}(-\phi)^{\frac{-2p^{2}-p+2}{p-1}}}{2^{3p}(-1+p)^{3}}.
\end{equation}
Now the ratio of slow-roll parameters is
\begin{equation}\label{22}
\frac{\eta}{\epsilon}=\frac{p^{\frac{1}{p-1}}(2-6p+8p^{2}+p^{\frac{p-2}{p-1}})}{(1-2p)^{2}}.
\end{equation}
We know that inflation ends at $\phi=\phi_{e}$ when one of the slow-roll parameters becomes of order one, that is, $\epsilon=1$ or $\eta=1$. With respect to the above equation, it is obvious that $\eta>\epsilon$ for different values of $p$, so $\eta$ reaches unity first. Thus, setting $\eta=1$, we have
\begin{equation}\label{23}
(-\phi_{e})^{\frac{-2p^{2}-p+2}{p-1}}= \frac{(-1+p)^{3}2^{3p}}{2^{2-\frac{3p}{2}}3^{\frac{p}{2}}\Lambda p^{ \frac{3p-1}{p-1}} 8^{p}(p^{\frac{p-2}{p-1}}+2 +2p(-3+4p) )p!(\frac{2^{\frac{3p}{2}}3^{-\frac{p}{2}}}{p!})^{\frac{p}{1-p}}}.
\end{equation}
Therefore, following the above computations and equation (\ref{8}), the number of e-folds is calculated as (in units of $\frac{8 \pi}{M_4^2}$)
\begin{equation}\label{24}
\begin{split}
N=&-\frac{2^{3p-2}3^{-p}(-1+p)^{3}p^{-2+\frac{1}{1-p}}(\frac{2^{\frac{3p}{2}}3^{-\frac{p}{2}}}{p!})^{\frac{1}{p-1}}(-\phi_{i})^{\frac{2p^{2}+p-2}{p-1}}}
{\Lambda(2-5p+4p^{3})(p!)^{2}}\\
&+\frac{2^{\frac{3p}{2}}3^{-\frac{p}{2}}p^{1+\frac{1}{1-p}+\frac{2}{p-1}}(p^{\frac{p-2}{p-1}}+2(1+p(-3+4p)))(\frac{2^{\frac{3p}{2}}3^{-\frac{p}{2}}}
{p!})^{\frac{p+1}{p-1}}}{(2-5p+4p^{3})(p!)}.
\end{split}
\end{equation}
If we ignore the second term in the number of e-folds (the contribution of $\phi_{e}$), then (\ref{24}) gives
\begin{equation}\label{25}
(-\phi_{i})^{\frac{2p^{2}+p-2}{p-1}}=\frac{\Lambda(2-5p+4p^{3})(p!)^{2}}{2^{3p-2}
3^{-p}(-1+p)^{3}p^{-2+\frac{1}{1-p}}(\frac{2^{\frac{3p}{2}}3^{-\frac{p}{2}}}{p!})^{\frac{1}{p-1}}}.
\end{equation}
Now, according to equation (\ref{25}), $\epsilon$ in (\ref{20}) and $\eta$ in (\ref{21}) take the following values, respectively:
\begin{eqnarray}
\epsilon &=& -\frac{2^{\frac{3p}{2}}\cdot 3^{-\frac{p}{2}}p^{1+\frac{1}{1-p}+\frac{1}{p-1}}(-1+2p)^{2}(\frac{2^{\frac{3p}{2}}\cdot 3^{-\frac{p}{2}}}{p!})^{\frac{1-p}{p-1}}}{(2-5p+4p^{3})N p!},\label{26}\\
\eta &=&-\frac{2^{\frac{3p}{2}}\cdot 3^{-\frac{p}{2}}p^{1+\frac{1}{1-p}+\frac{2}{p-1}}(p^{\frac{p-2}{p-1}}+2(1+p(-3+4p)))(\frac{2^{\frac{3p}{2}}\cdot 3^{-\frac{p}{2}}}{p!})^{\frac{p+1}{p-1}}}{(2-5p+4p^{3}) Np! }.\label{27}
\end{eqnarray}
Now, corresponding to the above expressions for $\epsilon$ and $\eta$, we are interested in the value of the spectral index defined in (\ref{10}). This leads to the following spectral index:
\begin{equation}\label{28}
n_{s}=\frac{-26p^{2}+12p^{2+\frac{1}{p-1}}-16p^{3+\frac{1}{p-1}}-4p^{\frac{p}{p-1}}+p(6-5N)+2N+4p^{3}(6+N)}{(2-5p+4p^{3})N}.
\end{equation}
To check the validity of the assumption made for $\Lambda$, note the interesting point that, in the above calculations, the cosmological parameters are functions only of $p$ and $N$: they do not depend on the parameter $\Lambda$. So we can say that the initial assumption on $\Lambda$ is consistent and acceptable.
Now, according to equation (\ref{28}), we obtain the expression for running spectral index (\ref{11}) as follows,
\begin{equation}\label{29}
\alpha=\frac{-4p^{\frac{p}{p-1}}-2p(-3+4p)(1+p(-3+2p^{\frac{1}{p-1}}))}{(2-5p+4p^{3})N^{2}}.
\end{equation}
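As an internal cross-check (illustrative, not part of the original derivation), the running (\ref{29}) should equal $-dn_{s}/dN$ computed from (\ref{28}); the sketch below verifies this by a central finite difference for the sample values $p=2$, $N=60$.

```python
# n_s(p, N) of eq. (28) and alpha(p, N) of eq. (29), transcribed directly.
def n_s(p, N):
    e = 1.0 / (p - 1.0)
    num = (-26.0 * p**2 + 12.0 * p**(2.0 + e) - 16.0 * p**(3.0 + e)
           - 4.0 * p**(p * e) + p * (6.0 - 5.0 * N) + 2.0 * N
           + 4.0 * p**3 * (6.0 + N))
    return num / ((2.0 - 5.0 * p + 4.0 * p**3) * N)

def alpha(p, N):
    e = 1.0 / (p - 1.0)
    num = (-4.0 * p**(p * e)
           - 2.0 * p * (-3.0 + 4.0 * p) * (1.0 + p * (-3.0 + 2.0 * p**e)))
    return num / ((2.0 - 5.0 * p + 4.0 * p**3) * N**2)

p, N, h = 2.0, 60.0, 1e-4
fd = -(n_s(p, N + h) - n_s(p, N - h)) / (2.0 * h)   # central finite difference
assert abs(fd - alpha(p, N)) < 1e-8                  # eq. (29) = -d n_s / dN
```

For $p=2$ and $N=60$ this gives $n_s \approx 0.947$, within the ballpark of the observational window discussed below.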
Exploiting expression (\ref{25}), the power spectrum (\ref{9}) is calculated as
\begin{equation}
P_{R}=\frac{\mathcal{L}\cdot\mathcal{M}\cdot\mathcal{N}}{O},\label{30}
\end{equation}
where
\begin{eqnarray}
\mathcal{L}&=&3^{-1-2p}8^{-3+2p}(-1+p)^{6}p^{ \frac{-6p+2}{p-1}} \times\nonumber\\
&&\left(-\frac{\Lambda(2-5p+4p^{3})p!^{2}N}{2^{-2+3p}
3^{-p}(-1+p)^{3}p^{-2+\frac{1}{1-p}}(\frac{2^{\frac{3p}{2}}3^{-\frac{p}{2}}}{p!})^{\frac{1}{p-1}}}\right)^{\frac{(p-1)(4p+2)}{2p^{2}+p-2}}, \\
\mathcal{M}&=&\left(-1+ 2^{\frac{3p}{2}}3^{-\frac{p}{2}} \left(-\frac{\Lambda(2-5p+4p^{3})p!^{2}N}{2^{-2+3p}
3^{-p}(-1+p)^{3}p^{-2+\frac{1}{1-p}}(\frac{2^{\frac{3p}{2}}3^{-\frac{p}{2}}}{p!})^{\frac{1}{p-1}}p!}\right)^{\frac{p(p-1)}{2p^{2}+p-2}} \right)^{\frac{4p}{p-1}},\\
\mathcal{N}&=& 2^{ {3p} } \left(-\frac{\Lambda(2-5p+4p^{3})p!^{2}N}{2^{-2+3p}3^{-p}(-1+p)^{3}p^{-2+\frac{1}{1-p}}(\frac{2^{\frac{3p}{2}}
3^{-\frac{p}{2}}}{p!})^{\frac{1}{p-1}}}\right)^{\frac{p(p-1)}{2p^{2}+p-2}} -3^{\frac{p}{2}}p!, \\
O&=&\Lambda^{3}\pi^{2}(p!)^{4}(2^{\frac{3p}{2}}(-1+2p) \left(-\frac{\Lambda(2-5p+4p^{3})p!^{2}N}{2^{-2+3p}\cdot 3^{-p}(-1+p)^{3}p^{ \frac{-1+p}{1-p}}(\frac{2^{\frac{3p}{2}}3^{-\frac{p}{2}}}{p!})^{\frac{1}{p-1}}}\right)^{\frac{p(p-1)}{2p^{2}+p-2}} \nonumber\\
&-&3^{\frac{p}{2}}(-1+p)p!)^{2}.
\end{eqnarray}
Also, the tensor-to-scalar ratio \cite{49,50} is obtained with respect to equation (\ref{26}) as
\begin{equation}\label{31}
r=24\epsilon =-24\left(\frac{2^{\frac{3p}{2}}+3^{-\frac{p}{2}}p^{1+\frac{1}{1-p}+\frac{1}{p-1}}(-1+2p)^{2}(\frac{2^{\frac{3p}{2}}\,3^{-\frac{p}{2}}}{p!})^{\frac{1-p}{p-1}}}{(2-5p+4p^{3})N p!}\right).
\end{equation}
After calculating the modified cosmological parameters, we plot each parameter in accordance with the computational results and the observational data.
The figures and their explanations are as follows.
Figures \ref{fig1}, \ref{fig2}, and \ref{fig3} depict the ranges associated with the scalar spectral index, the running spectral index, and the slow-roll parameters, respectively, for different values of $p$ and the number of e-folds $N$.
The range of each parameter specified in these plots is consistent with the observable data \cite{51}.
In Fig. \ref{fig4}, we plot $\eta$ with respect to $\epsilon$ for $N=60$.
\begin{figure}[h!]
\begin{center}
{
\includegraphics[height=6cm,width=6cm]{fig1.eps}
\label{1a}}
\caption{The spectral index $n_{s}$ in terms of $p$ for number of e-folds
($N=50-60$).}
\label{fig1}
\end{center}
\end{figure}
\begin{figure}[h!]
\begin{center}
{
\includegraphics[height=6cm,width=6cm]{fig2.eps}
\label{2a}}
\caption{The running spectral index $\alpha$ in terms of $p$ for number of e-folds ($N=50-60$).}
\label{fig2}
\end{center}
\end{figure}
\begin{figure}[h!]
\begin{center}
\subfigure[] {
\includegraphics[height=5cm,width=5cm]{fig3.eps}
\label{3a}}
\subfigure[]{
\includegraphics[height=5cm,width=5cm]{fig4.eps}
\label{3b}}
\caption{\small{The slow-roll parameters $\epsilon$ in (a) and $\eta$ in (b) for $p$ and $N=50-60$. }}
\label{fig3}
\end{center}
\end{figure}
\begin{figure}[h!]
\begin{center}
{
\includegraphics[height=6cm,width=5cm]{fig5.eps}
}
\caption{\small{The plot of slow-roll parameter $\eta$ in terms of $\epsilon$ for $N=60$ and different values of $p$ as brown ($p = 3$), green ($p =4$), and red ($p= 5$). }}
\label{fig4}
\end{center}
\end{figure}
\begin{figure}[h!]
\begin{center}
\subfigure[]{
\includegraphics[height=5cm,width=5cm]{fig7.eps}
\label{5a}}
\subfigure[]{
\includegraphics[height=6cm,width=5cm]{fig6.eps}
\label{5b}}
\caption{\small{The plot of $r$ in terms of $p$ in (a) for $N=50-60$; (b) is a close-up of (a).}}
\label{fig5}
\end{center}
\end{figure}
\begin{figure}[h!]
\begin{center}
\subfigure[]{
\includegraphics[height=5cm,width=5cm]{fig8.eps}
\label{6a}} \subfigure[]{
\includegraphics[height=5cm,width=5cm]{fig9.eps}
\label{6b}}
\subfigure[]{
\includegraphics[height=5cm,width=5cm]{fig10.eps}
\label{6c}}
\caption{\small{The plots of $n_{s}$ in terms of $r$ for $N=50$, $N=60$ and $N=70$ and different values of $p$ as blue ($p=2$), brown ($p =3$), green ($p =4$) and red ($p=5$). }}
\label{fig6}
\end{center}
\end{figure}
We plot figures to investigate each of these cosmological parameters for different values of $p$ and $N$ and compare them with the observational data. We plot $r$ in terms of $p$ for $N=50-60$ in Fig. \ref{fig5}. The viability of this parameter is assessed against the Planck upper bound, $r<0.064$. We also plot the $n_{s}$--$r$ plane for $N=50$, $N=60$, and $N=70$ and different values of $p$ in Fig. \ref{fig6}. As shown in the figures, the values are closer to the observational data for $N = 50, 60$ e-folds.
Now, if we consider the second term of the number of e-folds or the contribution of $\phi_{e}$ corresponding to (\ref{24}), then
\begin{eqnarray}\label{32}
(-\phi)_{i}^{\frac{2p^{2}+p-2}{p-1}}&=& \frac{2^{2-3p}\,3^{p}\Lambda p^{\frac{2p}{p-1}}}{(-1+p)^{3}}
\left[2p(1+p(-3+4p))\right.\nonumber\\
&+& \left. p^{\frac{1}{1-p}}(p^{2}+N(-2+5p-4p^{3}))\right](\frac{2^{\frac{3p}{2}}3^{-\frac{p}{2}}}{p!})^{\frac{1}{1-p}}p!^{2}.
\end{eqnarray}
Therefore, the modified slow-roll parameters given in the equations (\ref{26}) and (\ref{27}) convert, respectively, to
\begin{equation}\label{33}
\epsilon=\frac{(1-2p)^{2}p}{p^{2}-6p^{2+\frac{1}{p-1}}+8p^{3+\frac{1}{p-1}}+2p^{\frac{p}{p-1}}-2N+5pN-4p^{3}N},
\end{equation}
and
\begin{equation}\label{34}
\eta=\frac{p^{\frac{p-2}{p-1}}(2p^{\frac{2}{p-1}}+p^{\frac{p}{p-1}}+8p^{\frac{2p}{p-1}}-6p^{\frac{1+p}{p-1}})}{p^{2}-6p^{2+\frac{1}{p-1}}+8p^{3+\frac{1}{p-1}}+2p^{\frac{p}{p-1}}-2N+5pN-4p^{3}N}.
\end{equation}
The spectral index (\ref{28}) under the new condition becomes
\begin{equation}\label{35}
n_{s}=3+\frac{6(1-2p)^{2}p-2(2-5p+4p^{3})N}{-2p^{\frac{p}{p-1}}+2N+p(-5N+p(-1-2p^{\frac{1}{p-1}}(-3+4p)+4pN))},
\end{equation}
and the running spectral index (\ref{29}) with respect to the new condition becomes
\begin{equation}\label{36}
\alpha=\frac{2(2-5p+4p^{3})(2p^{\frac{p}{p-1}}+p(-3+4p)(1+p(-3+2p^{\frac{1}{p-1}})))}{(p^{2}-6p^{2+\frac{1}{p-1}}+8p^{3+\frac{1}{p-1}}+2p^{\frac{p}{p-1}}-2N+5pN-4p^{3}N)^{2}}.
\end{equation}
Also, the tensor-to-scalar ratio takes the following form:
\begin{equation}\label{37}
r=24\left(\frac{(1-2p)^{2}p}{p^{2}-6p^{2+\frac{1}{p-1}}+8p^{3+\frac{1}{p-1}}+2p^{\frac{p}{p-1}}-2N+5pN-4p^{3}N}\right).
\end{equation}
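A similar numerical sketch (our own illustration, with hypothetical function names, not code from the paper) evaluates the $\phi_{e}$-corrected expressions (\ref{33}), (\ref{35}), and (\ref{37}) and checks the consistency relation $r=24\epsilon$:

```python
# Hypothetical helper names; direct transcriptions of Eqs. (33), (35), (37).
def epsilon(p, N):
    """Slow-roll parameter, Eq. (33), with the denominator shared by Eq. (37)."""
    a = 1.0 / (p - 1)
    D = (p**2 - 6*p**(2 + a) + 8*p**(3 + a) + 2*p**(p*a)
         - 2*N + 5*p*N - 4*p**3*N)
    return (1 - 2*p)**2 * p / D

def tensor_to_scalar(p, N):
    """Tensor-to-scalar ratio, Eq. (37): r = 24 * epsilon."""
    return 24 * epsilon(p, N)

def spectral_index(p, N):
    """Scalar spectral index with the phi_e contribution, Eq. (35)."""
    a = 1.0 / (p - 1)
    den = (-2*p**(p*a) + 2*N
           + p*(-5*N + p*(-1 - 2*p**a*(-3 + 4*p) + 4*p*N)))
    return 3 + (6*(1 - 2*p)**2*p - 2*(2 - 5*p + 4*p**3)*N) / den
```

For $p=2$ and $N=60$ this gives $n_{s}\approx 0.944$; as before, $\Lambda$ enters none of these expressions.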
As can be seen from the above equations, by considering the contribution of $\phi_{e}$ in equation (\ref{28}), the cosmological parameters are again functions of $p$ and $N$ only and are independent of $\Lambda$, which is consistent with our initial assumption for $\Lambda$. According to the observational data and the above equations, we determine the range of each of these parameters from the figures. We then compare the two cases, i.e., with and without the contribution of $\phi_{e}$; in general, the final results are very close to each other. Also, according to equations (\ref{9}) and (\ref{32}), we again calculate the value of $P_{R}$, given as
\begin{equation}\label{38}
\begin{split}
&\phi=(\frac{2^{2-3p}3^{p}\Lambda p^{\frac{2p}{p-1}}(2p(1+p(-3+4p))+p^{\frac{1}{p-1}}(p^{2}+N(-2+5p-4p^{3})))(\frac{2^{\frac{3p}{2}}
3^{-\frac{p}{2}}}{p!})^{\frac{1}{1-p}}(p!)^{2}}{(p-1)^{3}})^{\frac{p-1}{2p^{2}+p-2}}\\
&P_{R}=\frac{3^{-1-2p}8^{-3+2p}(-1+p)^{6}p^{-6-\frac{4}{p-1}}(\phi)^{2+4p}(-1+\frac{2^{\frac{3p}{2}}
3^{-\frac{p}{2}}(\phi)^{p}}{p!})^{\frac{4p}{p-1}}(2^{\frac{3p}{2}}(\phi)^{p}-3^{\frac{p}{2}}p!)^{2}}{\Lambda^{3}\pi^{2}(p!)^{4}(2^{\frac{3p}{2}}(-1+2p)(\phi)^{p}-3^{\frac{p}{2}}(-1+p)p!)^{2}}.
\end{split}
\end{equation}
Now, we plot the range of each cosmological parameter, taking the contribution of $\phi_{e}$ into account, along with the observational data and measured values. With respect to the above description, the range of each cosmological parameter under the new conditions is determined in the plots.
\begin{figure}[h!]
\begin{center}
{
\includegraphics[height=5cm,width=5cm]{fig11.eps}
}
\caption{The spectral index $n_{s}$ in terms of $p$ for number of e-folds
($N=50-60$).}
\label{fig7}
\end{center}
\end{figure}
\begin{figure}[h!]
\begin{center}
{
\includegraphics[height=5cm,width=5cm]{fig12.eps}
}
\caption{The running spectral index $\alpha$ in terms of $p$ for number of e-folds ($N=50-60$).}
\label{fig8}
\end{center}
\end{figure}
\begin{figure}[h!]
\begin{center}
\subfigure[] {
\includegraphics[height=5cm,width=5cm]{fig13.eps}
\label{9a}}
\subfigure[]{
\includegraphics[height=5cm,width=5cm]{fig14.eps}
\label{9b}}
\caption{\small{The slow-roll parameter $\epsilon$ in terms of $p$ for $N=50-60$ in (a); (b) is a close-up of (a). }}
\label{fig9}
\end{center}
\end{figure}
\begin{figure}[h!]
\begin{center}
\subfigure[]{
\includegraphics[height=4cm,width=5cm]{fig15.eps}
\label{10a}}
\subfigure[]{
\includegraphics[height=4cm,width=5cm]{fig16.eps}
\label{10b}}
\caption{\small{The slow-roll parameter $\eta$ in terms of $p$ for $N=60$ in (a); (b) is a close-up of (a). }}
\label{fig10}
\end{center}
\end{figure}
\begin{figure}[h!]
\begin{center}
\subfigure[] {
\includegraphics[height=4cm,width=5cm]{fig17.eps}
}
\caption{\small{The plot of $r$ in terms of $p$ for $N=50-60$.}}
\label{fig11}
\end{center}
\end{figure}
\begin{figure}[h!]
\begin{center}
\subfigure[] {
\includegraphics[height=4cm,width=5cm]{fig18.eps}
\label{12a}}
\subfigure[] {
\includegraphics[height=4cm,width=5cm]{fig19.eps}
\label{12b}}
\subfigure[] {
\includegraphics[height=4cm,width=5cm]{fig20.eps}
\label{12c}}
\caption{\small{The plot of $n_{s}$ in terms of $r$ for $N=50$, $N=60$, and $N=70$ with different values of $p$ as blue ($p=2$), brown ($p=3$), and green ($p=4$). }}
\label{fig12}
\end{center}
\end{figure}
In general, the surprising point is the independence of these cosmological parameters from $\Lambda$. The plots of the scalar spectral index (Fig. \ref{fig7}), the running spectral index (Fig. \ref{fig8}), and the slow-roll parameters (Figs. \ref{fig9} and \ref{fig10}) show the ranges associated with different values of $p$ and $N$. Comparing the ranges specified in these plots with the previous plots (without the contribution of $\phi_{e}$), we find only minor changes; in this case, too, the results match the observational data for certain parameter values. To be precise, we plot $r$ in terms of $p$ for $N=50-60$ in Fig. \ref{fig11}. The viability of this parameter is assessed against the Planck upper bound, $r<0.064$. We also plot $n_{s}$ for $N=50$, $N=60$, and $N=70$ and different values of $p$ in Fig. \ref{fig12}. We find that for $N = 50, 60$ e-folds the values are closer to the observational data.
Finally, we can specify the values and ranges associated with the $\phi$ and $\Lambda$ parameters by using the calculated values and the observational data. In this paper, we examined the modified gravitational model on the brane and its compatibility with the swampland criteria; interestingly, the cosmological parameters turned out to be independent of the parameter $\Lambda$. The present studies can also be extended to other inflation models, with attention to the existing commonalities. An important open point is whether the model studied in this paper falls within the range of the swampland conjectures or, more precisely, how much of the parameter space of the modified gravitational model falls within that range.
What role will each of the important cosmological parameters play in this model?
Is this compatibility in line with the latest cosmological data, or can it be examined more broadly?
Therefore, as shown in Figs. \ref{fig13} and \ref{fig14}, the parametric range allowed for the modified gravitational model is specified in terms of the scalar field $\phi$ and the constant parameters mentioned in the text, especially the parameter $p$, with respect to the condition $V/\Lambda \gg 1$. As shown in Fig. \ref{fig13}, we plot $V'/V$ in terms of $\phi$ for various values of $p$; in a specific region, $V'/V$ is of order one or greater, as the swampland conjecture demands. We have also determined the range for negative values of the scalar field.
The disallowed regions are clearly identified in the figures, and the constant parameters mentioned above play their role well.
Similarly, according to equations (\ref{18}) and (\ref{25}), the variations of the swampland conjecture components are shown in terms of the constant parameter $p$ for different numbers of e-folds $N$ in Fig. \ref{fig14}. The compatibility and alignment of these swampland conjectures with each of the cosmological parameters are also determined. Thus, the gravitational model and the swampland conjectures can be reconciled in a particular region of parameter space.
Deeper challenges can also be posed to the swampland conjectures for such models, where the main focus can be on classifying new types of inflation models according to the swampland criteria, provided that the swampland conjectures can in the future be adapted to existing theories and help solve open problems in modern cosmology. They can thus be seen as a growing, emerging framework.
\begin{figure}[h!]
\begin{center}
\subfigure[]{
\includegraphics[height=6cm,width=6cm]{fig23.eps}
\label{13a}}
\subfigure[]{
\includegraphics[height=6cm,width=6cm]{fig24.eps}
\label{13b}}
\caption{\small{The plot of $C=V'/V$ in terms of $\phi$ with respect to various values of $p$.}}
\label{fig13}
\end{center}
\end{figure}
\begin{figure}[h!]
\begin{center}
\subfigure[]{
\includegraphics[height=6cm,width=6cm]{fig25.eps}
\label{14a}}
\subfigure[]{
\includegraphics[height=6cm,width=6cm]{fig26.eps}
\label{14b}}
\caption{\small{The plot of $V'/V$ in terms of $p$ with respect to $N=60$ in panel (a), and $N=50-60$ in panel (b).}}
\label{fig14}
\end{center}
\end{figure}
In addition to the items mentioned in the text of the article, the swampland conjectures can be challenged in other ways and can also be examined in relation to cosmological parameters.
Among these, two very important cosmological parameters, i.e., the scalar spectrum index $n_{s}$ and tensor-to-scalar ratio $r$, can be examined according to the swampland conjectures. The allowable range of each of these two cosmological parameters can be determined in relation to the swampland conjecture components.
That is, using equations (\ref{18}), (\ref{32}), (\ref{35}), and (\ref{36}), we challenged the first swampland conjecture in terms of these two cosmological parameters, namely $n_{s}$ and $r$.
As you can see in Fig. \ref{fig15}, the allowable range for each of these cosmological parameters and the first component of the swampland are determined according to the constant parameter $p$.
As is clear in the literature, the first and second components of the swampland conjectures are always positive and of order unity.
In this figure, although these two parameters are within the allowable range of the swampland components, their changes are also specified for the changes of the constant parameter $p$.
The allowable range for the second component can be calculated in the same way, determining which range is allowed for different fixed values.
In fact, by challenging the first swampland conjecture, we found that it is satisfied for the model under consideration, as clearly indicated in Fig. \ref{fig15}.
\begin{figure}[h!]
\begin{center}
\subfigure[]{
\includegraphics[height=6cm,width=6cm]{fig21.eps}
\label{7a}}
\subfigure[]{
\includegraphics[height=6cm,width=6cm]{fig22.eps}
\label{7b}}
\caption{\small{The plot of $C=V'/V$ in terms of $r$ and $n_{s}$ with respect to various values of $p$.}}
\label{fig15}
\end{center}
\end{figure}
\section{Conclusions}\label{sec4}
Recently, various inflation models have been studied from different perspectives and under different conditions, such as slow-roll, constant-roll, ultra-slow-roll, and the weak gravity conjecture, in order to introduce a model for the expanding universe. In this paper, we investigated a new condition for inflation models by introducing a modified ($R+\gamma R^{p}$) gravitational model. We considered the special and interesting case $\gamma=1$; it would be interesting to consider other values of $\gamma$ in future work. As our studies are based on a modified $f(R)$ gravitational model on the brane, we encountered modified cosmological parameters. We therefore first introduced these modified cosmological parameters, such as the spectral index, the number of e-folds, etc. Then we applied these conditions to the modified $f(R)$ gravitational model in order to test its compatibility with the swampland criteria. Finally, we determined the range of each of these parameters by plotting figures and comparing with observational data such as Planck 2018. A very interesting point we observed is that the cosmological parameters throughout the calculations do not depend on $\Lambda$.
\section{Introduction}
\label{sec:intro}
At the epoch of recombination, matter and radiation separate, allowing radiation to stream freely in the Universe. This free-streaming radiation permeating the Universe, which we observe as the \textup{cosmic microwave background} (CMB) radiation, encodes a treasure trove of information about the initial conditions in the Universe \citep{ryden2003,jones2017precision}. Although it has a remarkably consistent average temperature, the CMB still exhibits tiny deviations of about $10^{-5}$ from the background average. The temperature fluctuations in the CMB trace the fluctuations in the underlying matter distribution in the early Universe that are linked to the spontaneous quantum fluctuations generated in an otherwise homogeneous medium \citep{harrison1970,peebles1970}. Studying the properties of the temperature fluctuations in the CMB is therefore essential for understanding the properties of the primordial matter field.
The lambda cold dark matter (LCDM) paradigm is the standard paradigm of cosmology. Together with the inflationary models in their simplest form \citep{starobinsky1982,guthpi1982}, the standard model of cosmology predicts the nature of the primordial stochastic matter distribution field to be that of an isotropic and homogeneous Gaussian random field \citep{harrison1970,guth1981}. This prediction finds allies theoretically in the central limit theorem, and observationally in the various measurements of the CMB temperature anisotropy field via ground- and space-based probes such as the \texttt{BOOMERanG} balloon-based experiment \citep{boomerang} and the Wilkinson Microwave Anisotropy Probe (WMAP) satellite \citep{wmap9}. The latest endeavor of measuring the CMB temperature anisotropies was the launch of the \textup{Planck} satellite, which boasts of the highest resolution in measurements to date. The resolution is at scales of a few arcminutes \citep{planckOverview2018}. Despite the general consensus that the CMB exhibits the characteristics of an isotropic and homogeneous Gaussian random field, a growing body of evidence shows anomalies in the observed CMB field with respect to the base model. They include in particular the observed hemispherical asymmetry in the CMB power spectrum \citep{eriksen2004} as well as the alignment of low multipoles \citep{multipoles}; see \cite{cmbanomaliesstarkman} for a review. These observed anomalies raise doubts about the assumption of statistical isotropy and homogeneity, respectively.
Testing the assumption of Gaussianity requires tools that encode information about higher orders. Traditional endeavor in this direction has focused on higher-order correlation functions \citep{durrer1996}, which are generally extremely resource-intensive computationally \citep{planckcollaboration2016a}. Recently, attention has turned toward developing alternative tools beyond the correlation functions and multispectra, which may potentially encode information of all orders. The principal tools in this regard have arisen from integral geometry and involve computing the \textup{Minkowski functionals} or the \textup{Lipschitz-Killing curvatures} \citep{adler1981,mecke94,schmalzing1996,schmalzinggorski,sahni1998,codis2013,ducout2013,matsubara2010,chingangbam2017,pranav2019a,appleby2021minkowskiSDSS}. The $j$-th Minkowski functional and $(D-j)$-th Lipschitz-Killing curvature of a $D$-dimensional manifold $\Mspace$ are related by $Q_j(\Mspace) \ =\ j! \omega_j {\cal L}_{D-j}(\Mspace)$, where $ j=0,\dots,D,$ and $\omega_j$ is the volume of the $j$-dimensional unit ball. There are $D+1$ such quantifiers for a $D$-dimensional set, indexed by $d = 0, \ldots, D$. All but one are purely geometrical quantities that are related to the $d$-dimensional volume of the manifold. The exception is the $0$-th Lipschitz-Killing curvature, or equivalently, the $D$-th Minkowski functional, which is related to a purely topological quantity, the \textup{Euler characteristic} \citep{euler1758,adler1981,pranav2019a,eecestimate}, via Gauss's \textup{Theorema Egregium} \citep{gauss1900,adler1981,pranav2019a}. The Minkowski functional computations of the CMB have consistently shown the observations to be congruent with the standard model \citep{planckIsotropy2015}.
More recently, developments in computational topology have paved the way for extracting topological information from datasets at the level of \textup{homology} \citep{munkres1984,edelsbrunnerharer10,isvd10,pranavthesis,pranav2017,moraleda2019,pranavReview2021} and its hierarchical extension, \textup{persistent homology} \citep{edelsbrunnerharer10,pranav2017,shivashankar2015,rst,pranav2021topology2,heydenreich2021}. \textup{Topological data analysis} (TDA) involving homology and persistent homology has recently started finding application in astrophysical disciplines, for example, in the context of structure identification \citep{shivashankar2015,xu2019} and quantification of large-scale structures \citep{kono2020,wilding2020}, including detection and quantification of non-Gaussianities \citep{feldbrugge2019,biagetti2020}. Homology describes the topology of a space by identifying the holes and the topological cycles that bound them. A $d$-dimensional space may contain topological cycles of $0$ up to $d$ dimensions. The cycles and holes are associated with the \textup{homology groups} of the space. The $p$-th \textup{Betti number}, $\Betti{p}$, is the rank of the $p$-th homology group, $\mathbb{H}_{p, p = 0 \ldots d}$. While itself a purely topological quantity, the Euler characteristic is also the alternating sum of the Betti numbers of all ambient dimensions of a manifold, as denoted by the Euler-Poincar\'{e} formula \citep{adler1981,pranav2017,pranav2019a}. The Euler characteristic has a long history in the analysis of cosmological fields \citep{gdm86,pogosyan2009,ppc13,appleby2020}.
Building on and refining existing tools from computational topology in the context of analyzing the CMB field, this paper presents the homology characteristics of the temperature fluctuation maps of the cosmic microwave background obtained by the \textup{Planck} satellite \citep{planckOverview2018}. We perform our experiments on the fourth and final data release, \textup{Planck 2020} Data Release 4 (DR4), which is based on the \texttt{NPIPE} data-processing pipeline \citep{npipe}. The \texttt{NPIPE} dataset represents a natural evolution of the Planck data-processing pipeline, integrating the best practices from the LFI and HFI pipelines separately. The result is an overall amplification of signal and reduction in the associated systematic, noise, and residuals at almost all angular scales \citep{npipe}. For comparison, we also present results for the \textup{Planck 2018} Data Release 3 (DR3) \citep{planckOverview2018}, which is based on the \textup{Full Focal Plane} (FFP) data processing pipeline \citep{plancksims}, resulting in the \texttt{FFP10} simulations \citep{ffp10}. The paper follows the spirit of \cite{pranav2019b} in methods and analysis, where we present results for the intermediate \textup{Planck 2015} Data Release 2 (DR2) \citep{planckIsotropy2015}; also see \cite{rst}. The novel aspect of the methods presented here and in \cite{pranav2019b} is an analysis pipeline that takes regions with unreliable data on $\Sspace^2$ into account. In the case of CMB, this is reflected in the obfuscation effects of the measurements that are due to foreground objects such as our own galaxy, as well as other extra- and intragalactic foreground sources. We masked these regions and computed the homology of the excursion sets relative to the mask.
We present a brief description of the topological background in Section~\ref{sec:topology}, followed by the results in Section~\ref{sec:result}. We discuss the ramifications of the results and conclude the main body of the paper in Section~\ref{sec:discussion}. The appendices present a brief description of the datasets and the computational pipeline as well as a validation for the statistical tests.
\begin{figure}
\centering
\subfloat{\includegraphics[width=0.5\textwidth]{figs/sim_d64_s3_BBR_roundCorners.png} }
\caption{Visualization of the temperature fluctuations in the CMB sky. The survey surface $\Sspace^2$ is distorted at each point in the direction of the surface normal. The distortion is proportional to the fluctuation in direction and magnitude. The visualization is based on the observed CMB sky cleaned by the \texttt{NPIPE} pipeline and smoothed at $5$ degrees.}
\label{fig:cmbSky}
\end{figure}
\begin{figure*}
\centering
\subfloat[]{\includegraphics[width=0.49\textwidth]{figs/sim_d64_s3_BBR_thldHI_roundCorners.png}}
\subfloat[]{\includegraphics[width=0.49\textwidth]{figs/sim_d64_s3_BBR_thldLow_roundCorners.png}}
\caption{Cosmic microwave background sky thresholded at moderately positive (left) and negative (right) levels. For high thresholds, the excursion set is dominated by isolated components, while at low thresholds, it gives the appearance of a single connected surface indented by numerous holes. For sufficiently low thresholds, the holes fill up, and the excursion set covers the entire sphere, which is composed of a connected surface without a boundary that encloses a single void (cf. Figure~\ref{fig:cmbSky}).}
\label{fig:cmbThld}
\end{figure*}
\begin{figure}
\centering
\subfloat{\includegraphics[width=0.5\textwidth]{figs/obs_d64_s5_masked_BBR_roundCorners.png} }
\caption{Visualization of the temperature fluctuations in the CMB sky in the presence of masks (plotted in gray). It consists of an equatorial belt and numerous patches on the northern and southern cap, which correspond to our galaxy and other bright foreground objects. The visualization is based on the observed CMB sky cleaned by the \texttt{NPIPE} pipeline and smoothed at $5$ degrees.}
\label{fig:maskedSky}
\end{figure}
\section{Topological background}
\label{sec:topology}
Commensurate with our intention of analyzing the topology of the CMB temperature fluctuations, we restricted ourselves to topological definitions on $\Sspace^2$ \citep{pranav2019b} and invoked \textup{relative homology} to account for analysis in the presence of masked regions. The standard reference for this section is \cite{edelsbrunnerharer10}; also see \cite{pranav2019b} for discussion in the context of CMB, as well as \cite{heydenreich2021} in the context of cosmic shear fields.
\subsection{Homology characteristics of excursion sets of $\Sspace^2$}
Denoting the CMB temperature fluctuations on $\Sspace^2$ as $f \colon \Sspace^2 \to \Rspace$, we define the \textup{excursion set} \footnote{Excursion sets are also known as \textup{superlevel sets} and their boundary thresholds as \textup{levelsets}, or simply \textup{levels}.} at a temperature $\nu$ as the subset of $\Sspace^2$ where the temperature is higher than or equal to $\nu$,
\begin{equation}
\Excursion (\nu) = \{ x \in \Sspace^2 \mid f(x) \geq \nu \}.
\end{equation}
In the cosmological setting, the usual practice is to examine the dimensionless threshold, which is the mean-subtracted and variance-scaled field derived from the original field, in which case, $\nu = (f - \mu_f)/\sigma_f$, and $\mu_f$ and $\sigma_f$ are the mean and the standard deviation of the field $f$. If $\Excursion (\nu)$ does not cover the entire $\Sspace^2$, it may be composed of isolated components and holes. Figure~\ref{fig:cmbThld} presents excursion sets corresponding to two different thresholds. For high thresholds, presented in the left panel, the excursion set is dominated by components, while for low thresholds, presented in the right panel, the excursion set is dominated by a few large connected objects indented with holes, which are bounded by loops. The \textup{Betti numbers} $\Betti{0}$ and $\Betti{1}$ count the number of independent components and loops of the excursion set, respectively. In general, for a $d$-dimensional topological space, $\Betti{p}$ is the rank of the $p$-th \textup{homology group}, $\Homology{p};p = 0, \ldots, d$, and counts the number of independent $p$-dimensional cycles \citep{munkres1984,edelsbrunnerharer10,pranav2017}. If $\Excursion (\nu)$ does not cover the entire $\Sspace^2$, the number of independent loops is one less than the total number of loops. If $\Excursion (\nu)$ covers the entire $\Sspace^2$, there are no loops, and $\Betti{2} = 1$, because of the void that is enclosed by the boundary-less surface of the sphere. A related quantity that has a long history of usage in cosmological analyses is the \textup{Euler characteristic}, or alternatively, the \textup{genus} \citep{gdm86,ppc13}, which is the alternating sum of the Betti numbers of the excursion set,
\begin{equation}
\Euler (\nu) = \Betti{0} (\nu) - \Betti{1} (\nu) + \Betti{2} (\nu).
\end{equation}
The Euler characteristic also has a geometric interpretation as one of the \textup{Lipschitz-Killing} curvatures of the manifold \citep{adler1981,pranav2019b}.
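As an illustration of these definitions, the sketch below counts components ($b_0$) and holes ($b_1$) of an excursion set. It is a toy stand-in on a flat pixel grid rather than on $\Sspace^2$, with our own function names and a 4-connectivity convention; it is not the computational pipeline used in this paper:

```python
from collections import deque

def components(cells):
    """Count 4-connected components of a set of (row, col) pixels via BFS."""
    cells = set(cells)
    n = 0
    while cells:
        n += 1
        queue = deque([cells.pop()])
        while queue:
            r, c = queue.popleft()
            for nb in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                if nb in cells:
                    cells.remove(nb)
                    queue.append(nb)
    return n

def betti_2d(field, nu):
    """b0 (components) and b1 (holes) of the excursion set {f >= nu}
    on a flat pixel grid -- a toy stand-in for the sphere."""
    rows, cols = len(field), len(field[0])
    fg = {(r, c) for r in range(rows) for c in range(cols) if field[r][c] >= nu}
    # Pad with one background ring so the exterior forms a single component;
    # every remaining background component is then a hole of the excursion set.
    bg = {(r, c) for r in range(-1, rows + 1) for c in range(-1, cols + 1)} - fg
    return components(fg), components(bg) - 1
```

For a square ring of above-threshold pixels, this yields $(b_0, b_1) = (1, 1)$ and hence $b_0 - b_1 = 0$, as expected for an annulus.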
\subsection{Masks and relative homology}
The measurement of the CMB signal is unreliable in certain parts of the sky due to interference from bright foreground objects. These include extended objects such as our galaxy, as well as bright point sources. We masked these regions and computed the homology characteristics of the excursion set relative to the mask. Figure~\ref{fig:maskedSky} presents a visualization of the masked CMB sky. Letting $\Mask \subseteq \Sspace^2$ be the mask and $\Excursion (\nu)$ the excursion set, we considered the \textup{relative homology} of the pair of closed spaces, $(E, M)$, where $E = \Excursion (\nu)$ and $M = \Mask \cap \Excursion (\nu)$. We note that $M$ is contained in $E$ and is a closed set by definition in this context. We denote the rank of the \textup{relative homology groups} of the pair $(E, M)$ by $\relBetti{p} = \Rank{\Homology{p} (E, M)}; p = 0, 1, 2$. The Betti numbers computed considering the pair $(E, M)$ are different from the Betti numbers of the excursion set without a mask. For a more detailed discussion about relative homology in the context of masked CMB sky, see \cite{pranav2019b}. The \textup{relative Euler characteristic}, as in the case of absolute homology, is the alternating sum of the rank of relative homology groups,
\begin{equation}
\relEuler (\nu) = \relBetti{0} (\nu) - \relBetti{1} (\nu) + \relBetti{2} (\nu).
\end{equation}
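For a good pair $(E, M)$, the Euler characteristic is additive, $\chi(E) = \chi(M) + \chi(E, M)$, so the relative Euler characteristic can be obtained from two absolute ones. The sketch below is our own toy illustration on a pixel grid (not the code used for this analysis): it computes $\chi = V - E + F$ from the vertex, edge, and face counts of a union of closed unit squares, and then applies the additivity relation:

```python
def euler_pixels(pixels):
    """Euler characteristic V - E + F of a union of closed unit squares."""
    pixels = set(pixels)
    verts, edges = set(), set()
    for (r, c) in pixels:
        corners = ((r, c), (r, c + 1), (r + 1, c), (r + 1, c + 1))
        verts.update(corners)
        # The four boundary edges of this square, deduplicated across squares.
        edges.update(frozenset(e) for e in (
            (corners[0], corners[1]), (corners[2], corners[3]),
            (corners[0], corners[2]), (corners[1], corners[3])))
    return len(verts) - len(edges) + len(pixels)

def relative_euler(excursion, mask):
    """chi(E, M) from additivity: chi(E) = chi(M) + chi(E, M)."""
    E = set(excursion)
    M = E & set(mask)
    return euler_pixels(E) - euler_pixels(M)
```

A single pixel gives $\chi = 1$, a square ring gives $\chi = 0$ (an annulus), and a filled block relative to a mask covering its central pixel gives $\chi(E, M) = 1 - 1 = 0$.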
\begin{figure*}
\centering
\subfloat[][]{\includegraphics[width=0.8\textwidth]{figs/npipe_betti0_errorbars.pdf}}\\
\subfloat[][]{\includegraphics[width=0.6\textwidth]{figs/Normalized_CMB_curves_0_9b0_good.png} }\\
\caption{Graphs of $\relBetti{0}$ for the \texttt{NPIPE} dataset for various degraded and smoothing scales. Panel (a) presents the observational curve in yellow, and the curves corresponding to the average of simulations are presented in gray. Error bands corresponding to $(1\sigma:3\sigma)$ are also drawn. Panel (b) presents the curve for the significance of the differences, where the observed curves are presented in red, and the values for the simulations are presented as dotted gray lines.}
\label{fig:betti0_graph_npipe}
\end{figure*}
\begin{figure*}
\centering
\subfloat[][]{\includegraphics[width=0.8\textwidth]{figs/npipe_betti1_errorbars.pdf}}\\
\subfloat[][]{\includegraphics[width=0.6\textwidth]{figs/Normalized_CMB_curves_0_9b1_good.png} }\\
\caption{Graphs of $\relBetti{1}$ for the \texttt{NPIPE} dataset for various resolutions and smoothing scales. In panel (a), the observational curves are presented in yellow, and the curves corresponding to the average of simulations are presented in gray, while panel (b) presents the significance of the differences. Error bands corresponding to $(1\sigma:4\sigma)$ are also presented in various colors. }
\label{fig:betti1_graph_npipe}
\end{figure*}
\begin{figure*}
\centering
\subfloat{\rotatebox{-90}{\includegraphics[height=0.8\textwidth]{figs/npipe_betti1_errorbars_32_16.pdf}}}\\
\subfloat{\includegraphics[width=0.43\textwidth]{figs/npipe_obs_sevem_32_masked.png} }
\subfloat{\includegraphics[width=0.43\textwidth]{figs/npipe_obs_sevem_16_masked.png} }\\
\caption{Graph of $\relBetti{1}$ for $\Res= 32$ and $\Res= 16$, corresponding to $FWHM = 320'$ and $FWHM = 640'$, presenting an enlarged view of these resolutions. The observational curve is presented in yellow, and the curves corresponding to the average of the simulations in gray. Error bands corresponding to $(1\sigma:4\sigma)$ are also drawn. The bottom two panels present visualizations of the scalar temperature field in order to facilitate an appreciation of the structure of the field.}
\label{fig:betti1_graph_32_16}
\end{figure*}
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{figs/betti1_histogram_l-2_5_mask0_9.png}
\caption{Histogram of $b_1$ at $\nu = -2.5$, where the $\sim4\sigma$ deviation occurs. The minimum value attained across $600$ simulations is $18$, with a mean at $\sim21$ and a standard deviation of $\sim1.7$. The observed map exhibits $b_1 = 28$ and is well outside the distribution. }
\label{fig:b1_level-2.5_distr}
\end{figure}
\begin{figure*}
\centering
\subfloat[][]{\includegraphics[width=0.8\textwidth]{figs/npipe_ec_errorbars.pdf}}\\
\subfloat[][]{\includegraphics[width=0.6\textwidth]{figs/Normalized_CMB_curves_0_9EC_good.png}}\\
\caption{Graphs of $\relEuler$ for the \texttt{NPIPE} dataset. Panels (a) and (b) present information similar to that in Figures~\ref{fig:betti0_graph_npipe} and~\ref{fig:betti1_graph_npipe}. $\relEuler$ exhibits deviations commensurate with those in $\relBetti{0}$ and $\relBetti{1}$.}
\label{fig:ec_graph_npipe}
\end{figure*}
\begin{figure*}
\centering
\subfloat[][]{\includegraphics[width=0.8\textwidth]{figs/ffp10_betti0_errorbars.pdf}}\\
\subfloat[][]{\includegraphics[width=0.6\textwidth]{figs/Normalized_CMB_curves_0_9b0_good_ffp10.png} }
\caption{Graph of $\relBetti{0}$ for the \texttt{FFP10} dataset. In panel (a), the observational curves are presented in yellow, and the curves corresponding to the average of the simulations in gray. Error bands corresponding to $(1\sigma:3\sigma)$ are also drawn. Panel (b) presents the significance of the differences. The dataset exhibits milder deviations than the \texttt{NPIPE} dataset in general. However, we note a $2.96\sigma$ deviation in the number of components between the simulations and observation at $\Res = 128, FWHM = 80'$.}
\label{fig:betti0_graph_ffp10}
\end{figure*}
\begin{figure*}
\centering
\subfloat[][]{\includegraphics[width=0.8\textwidth]{figs/ffp10_betti1_errorbars.pdf}}\\
\subfloat[][]{\includegraphics[width=0.6\textwidth]{figs/Normalized_CMB_curves_0_9b1_good_ffp10.png}}
\caption{Graph of $\relBetti{1}$ for the \texttt{FFP10} dataset. In panel (a), the observational curves are presented in yellow, and the curves corresponding to the average of the simulations in gray. Error bands corresponding to $(1\sigma:4\sigma)$ are also drawn. Panel (b) presents the significance of the differences. The dataset exhibits milder deviations than the \texttt{NPIPE} dataset in general, but there are mildly significant deviations at $2.77\sigma$, at scales and thresholds where the \texttt{NPIPE} dataset shows strong deviations as well.}
\label{fig:betti1_graph_ffp10}
\end{figure*}
\begin{figure*}
\centering
\subfloat[][]{\includegraphics[width=0.8\textwidth]{figs/ffp10_ec_errorbars.pdf}}\\
\subfloat[][]{\includegraphics[width=0.6\textwidth]{figs/Normalized_CMB_curves_0_9EC_good_ffp10.png} }
\caption{Graph of $\relEuler$ for the \texttt{FFP10} dataset. In panel (a), the observational curve is presented in yellow, and the curves corresponding to the average of simulations are presented in gray. Error bands corresponding to $(1\sigma:3\sigma)$ are also drawn. Panel (b) presents the significance of the differences. The Euler characteristic exhibits deviations commensurate with $\relBetti{0}$ and $\relBetti{1}$.}
\label{fig:ec_graph_ffp10}
\end{figure*}
\section{Results}
\label{sec:result}
In this section, we present our results in terms of the ranks of homology groups, $\relBetti{p}$, $p = 0, 1$, as well as the Euler characteristic, $\relEuler$, relative to the mask. Section~\ref{sec:npipe_result} presents results from the \texttt{DR4 NPIPE} dataset based on $600$ simulations, obtained using the \texttt{SEVEM} component separation pipeline. Similar results for the \texttt{DR3 FFP10} dataset based on $300$ simulations, obtained using the \texttt{SMICA} component separation pipeline, are presented in Section~\ref{sec:ffp10_result}\footnote{The Planck CMB maps are hosted at {\url{https://pla.esac.esa.int/}}. The secondary datasets encapsulating topological information of CMB maps are hosted at {\url{https://www.pratyushpranav.org/cmb_data/cmb_data_archive.tar.gz}}. The analysis codes are available at {\url{https://www.pratyushpranav.org/codes/topos2.tar.gz}}.}. The latter dataset facilitates a direct comparison with the topo-geometrical studies of the CMB performed by the Planck team in \cite{planckIsotropy2018}. For both datasets, we begin by examining the graphs of $\relBetti{0}$, $\relBetti{1}$, and of the (relative) Euler characteristic, $\relEuler$. This is followed by statistical tests that estimate the significance of the results, taking all of the levels together for a given resolution and scale. We employed the standard $\chi^2$-test in the empirical and theoretical settings, as well as the model-independent \textup{Tukey depth} test for this purpose. We chose a priori levels $\ell_{k}$, where $\ell_k = k/2$, setting $k_{\relBetti{0}} = [0:6]$, $k_{\relBetti{1}} = [-6:0]$, and $k_{\relEuler} = [-6:6]$. We did this to restrict ourselves to analyzing $\relBetti{0}$ for positive thresholds, $\relBetti{1}$ for negative thresholds, and $\relEuler$ across the full threshold range.
The choice of regions is determined by the fact that $\relBetti{0}(\nu)$ tends to be small and carries little information for $\nu<0$, $\relBetti{1}(\nu)$ tends to be small for $\nu>0$, while the Euler characteristic is informative over the full range of levels. These levels are consistent with the levels investigated in \cite{planckIsotropy2015} and \cite{planckIsotropy2018}.
\subsection{Planck 2020 Data Release 4: \texttt{NPIPE} dataset}
\label{sec:npipe_result}
In this section, we analyze the graphs of the topological descriptors for the \texttt{NPIPE} dataset. Treating the topological descriptors as discrete random variables, we compute the mean, $\mu_{sim}$, and the standard deviation, $\sigma_{sim}$, from the simulations in the usual sense for each of the levels. We use them to assess the model-independent significance of the difference between the simulations and observations.
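This per-level significance estimate can be sketched as follows; the function name and array conventions are our own assumptions for illustration, not the analysis code released with the paper:

```python
import numpy as np

def significance_per_level(obs, sims):
    """Deviation of the observed descriptor curve from the simulation
    ensemble, per threshold level, in units of sigma_sim.

    obs  : shape (n_levels,)         -- observed curve
    sims : shape (n_sims, n_levels)  -- one curve per simulation
    """
    mu_sim = sims.mean(axis=0)            # ensemble mean per level
    sigma_sim = sims.std(axis=0, ddof=1)  # ensemble scatter per level
    return (obs - mu_sim) / sigma_sim
```

Levels where the returned values exceed $2$--$3$ in absolute value correspond to the $2\sigma$--$3\sigma$ deviations discussed below; the interpretation assumes the per-level distributions are close to Gaussian.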
\subsubsection{Graph of $\relBetti{0}$}
Figure~\ref{fig:betti0_graph_npipe} presents the graphs of $\relBetti{0}$ for the various degraded resolutions and the associated smoothing scales. The mean and the standard deviation are computed from the simulations. In general, we find that the significance across resolutions and levels is approximately $2\sigma$ or lower.
\subsubsection{Graph of $\relBetti{1}$}
Figure~\ref{fig:betti1_graph_npipe} presents the graphs of $\relBetti{1}$ for the various resolutions and corresponding scales. For resolutions between $\Res = 2048,\ldots,64$, corresponding to $FWHM = 5',\ldots,160'$, we observe the significance to be approximately $2\sigma$ or lower. However, for the next higher smoothing scales corresponding to $FWHM = 320'$ (approximately $5$ degrees) and $FWHM = 640'$ (approximately $10$ degrees), we note interesting deviations that we examine next. These cases are also presented in Figure~\ref{fig:betti1_graph_32_16} in an enlarged view for clarity.
Concentrating on the top left panel of Figure~\ref{fig:betti1_graph_32_16}, at a Gaussian smoothing scale of $FWHM = 320'$, approximately 5 degrees, and at a moderately low dimensionless density threshold $\nu = -2.5$, we find a $3.91\sigma$ deviation between the simulations and the observation. We examine the behavior of $b_1$ at this level further in Figure~\ref{fig:b1_level-2.5_distr}, where we present the histogram of the distribution of $b_1$ from the simulations in gray boxes. The observed value is indicated by a red vertical line. Evidently, the distribution is asymmetric and hence non-Gaussian. Our tests indicate that the Poisson distribution is a poor fit to the data as well. We detect $28$ loops in the observational curves, compared to the average of $\sim 21$ loops in the simulations, with a standard deviation of $\sim1.78$. In the Gaussian context, this would place the significance of the difference at approximately $4\sigma$. However, since the distribution in this bin is distinctly non-Gaussian and does not obey Poisson statistics, ascribing a $\sigma$ significance in the usual sense is not viable in this case. As a result, we simply note that the significance is higher than what may be resolved by $600$ simulations, yielding an empirical $p$-value of at most $0.0016$. While the generally low number of loops in both simulations and observations, owing to the large scale of probing, pushes us into the regime of small numbers, the behavior of the statistics indicates a stable regime.
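The empirical bound quoted above can be made concrete with a standard rank-based estimate. The following sketch, with the conventional $+1$ correction so that the estimate never vanishes exactly, is our illustration rather than the paper's code:

```python
import numpy as np

def empirical_p_value(obs, sims):
    """One-sided empirical p-value of an observed count against a
    simulation ensemble.  The +1 in numerator and denominator keeps
    the estimate away from exactly zero when no simulation is as
    extreme as the observation."""
    sims = np.asarray(sims)
    n_extreme = int(np.sum(sims >= obs))
    return (n_extreme + 1) / (sims.size + 1)
```

With $600$ simulations and no simulated map reaching the observed count, the resolvable significance is bounded by roughly $1/600$, consistent with the empirical $p$-value quoted above.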
Further evidence in support of this argument is presented in Appendix~\ref{sec:stat_validity}, where we examine the distribution of $\chi^2$ and Tukey depth from the simulations, and compare them with the observations. In general, for the $\chi^2$-test, we find good agreement between the histogram of simulations and the theoretical curve. Therefore, the highly significant deviation merits consideration. The deviant behavior of loops in the earlier analysis of \textup{Planck 2015} Data release 2 (DR2) presented in \cite{pranav2019b} at similar scales is also noteworthy in this context, as is the deviant Euler characteristic from WMAP observational data reported by \cite{eriksen04ng} and \cite{park2004}. Figure~\ref{fig:loops} presents a visualization of some of these loops at moderately negative thresholds for observational maps smoothed at $FWHM = 320'$.
At the next higher smoothing scale of $FWHM = 640'$, approximately $10$ degrees, presented in the right panel of Figure~\ref{fig:betti1_graph_32_16}, the shapes of the observational curve and of the mean simulated curve at the negative thresholds, where topological loops are the dominant entities, are widely different. As differences in shape may be a stronger indicator of inherent differences between the models than differences in amplitude alone, our assessment is that this case merits scrutiny as well. However, since the numbers involved are small, about $5$ and fewer in some bins \footnote{The standard requirement for the validity of the $\chi^2$ test is a minimum of five samples in each bin considered for computing the statistic.}, we consider the statistics emerging from this resolution as unstable and reject them as a possible statistical fluke.
\subsubsection{Graph of $\relEuler$ }
Figure~\ref{fig:ec_graph_npipe} presents the graphs of $\relEuler$ for the various resolutions and corresponding scales. The relative Euler characteristic also deviates at the resolutions where $\relBetti{0}$, $\relBetti{1}$, or both deviate. However, the significance of the difference is milder owing to cancellation effects \citep{pranav2019b}. This is because the Euler characteristic is not strictly an independent quantity: being the alternating sum of the ranks of the relative homology groups, it merely reflects the deviations in the contributing Betti numbers.
\subsection{Planck 2018 Data Release 3: \texttt{FFP10} dataset}
\label{sec:ffp10_result}
In this section, we analyze the topological characteristics of the \texttt{FFP10} dataset. We also compare our results for the Euler characteristic with those presented in \cite{planckIsotropy2018}.
\subsubsection{Graph of $\relBetti{0}$}
Figure~\ref{fig:betti0_graph_ffp10} presents the graphs for the topological components in panels (a) and (b), similar to Figure~\ref{fig:betti0_graph_npipe}. In general, the observational values are within the $2\sigma$ band when compared with the simulations for almost all resolutions and levels. The exception is the number of components at $\Res = 128, FWHM = 80'$, where the observations deviate from the simulations at $2.96\sigma$ at the dimensionless density threshold $\nu = 0.5$. The smoothing scale, approximately a degree, and the moderate threshold at which the deviation occurs ensure that the statistics are away from the low-number regime.
\subsubsection{Graph of $\relBetti{1}$}
Figure~\ref{fig:betti1_graph_ffp10} presents the graphs for the topological loops. The deviations between the observations and simulations are within approximately $2\sigma$ for most of the smoothing scales and thresholds. However, for the dimensionless threshold, $\nu = -2.5$, we note a more than $2\sigma$ deviation for $\Res = 64, FWHM = 160'$ onward toward higher smoothing scales. For $\Res = 32, FWHM = 320'$, the deviation is $2.77\sigma$, and similar for $\Res = 16, FWHM = 640'$. These deviations, while not as significant as exhibited in the \texttt{NPIPE} dataset, occur at similar scales and thresholds. In addition, we also note a more than $4\sigma$ deviation for $\Res = 16, FWHM = 640'$ at $\nu = -3$. However, we reject this as a possible statistical fluke because of the extremely low numbers involved, which are about $5$ or fewer.
\subsubsection{Graph of $\relEuler$}
Figure~\ref{fig:ec_graph_ffp10} presents the graphs of $\relEuler$ for the various resolutions and corresponding scales. The Euler characteristic shows deviations commensurate with those of $\relBetti{0}$ and $\relBetti{1}$ because of their contributions to the alternating sum.
\subsection{Statistical significance from combined thresholds}
\begin{table*}
\caption{Two-tailed $p$-values for relative homology for the two datasets. }
\tabcolsep=0.09cm
\centering
\subfloat[][]{\reltabNpipe}\\
\subfloat[][]{\reltabffp}\\
\tablefoot{The $p$-values are obtained via the parametric (Mahalanobis distance) and non-parametric (Tukey depth) tests, for different resolutions and smoothing scales, for the \texttt{NPIPE} (panel (a)) and \texttt{FFP10} (panel (b)) datasets. $p$-values of $0.05$ or smaller are marked in boldface.}
\label{tab:npipe_degrade-pvalues}
\end{table*}
We considered the two methods detailed in Appendix~\ref{sec:stat} to compute the statistics of the combined thresholds and present the $p$-values of the observed maps for them. The first method is the Mahalanobis distance \citep{mahalanobis}, also known as the $\chi^2$-test, which works in model-dependent as well as empirical settings. The second method is the nonparametric Tukey depth test. In Appendix~\ref{sec:stat_validity} we examine the distribution characteristics of these statistics from the simulations. The simulated and theoretical distributions agree well for the $\chi^2$ statistic. Despite this, we computed the $p$-values both theoretically and from the empirical distribution, and found that the empirical distributions yield a milder significance. We also find that the depth histograms and distributions are well behaved and represent a meaningful quantification. Considering all three topological quantities, $\relBetti{0}$, $\relBetti{1}$, and $\relEuler$, we computed the statistics for each resolution separately in order to ascribe a scale dependence to the signals.
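As a sketch of the first test, the Mahalanobis distance of the observed curve from the simulation ensemble, combining all selected thresholds at one resolution, can be computed as follows; the names and array conventions are our own assumptions:

```python
import numpy as np

def mahalanobis_chi2(obs, sims):
    """Squared Mahalanobis distance of the observed curve from the
    simulation ensemble, combining all thresholds at one resolution.

    Under a Gaussian model the statistic follows a chi^2 distribution
    with n_levels degrees of freedom; empirically, it is instead
    ranked against the same statistic evaluated for each simulation.
    """
    sims = np.asarray(sims, dtype=float)
    mu = sims.mean(axis=0)
    cov = np.cov(sims, rowvar=False)   # empirical covariance of levels
    diff = np.asarray(obs, dtype=float) - mu
    return float(diff @ np.linalg.solve(cov, diff))
```

The theoretical $p$-value then follows from the $\chi^2$ survival function, while the empirical $p$-value is the fraction of simulations whose own distance from the ensemble exceeds that of the observation.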
\subsubsection{\texttt{NPIPE} dataset}
All entries before the last in Table~\ref{tab:npipe_degrade-pvalues}, panel (a), present $p$-values for the variables, computed from maps degraded to specific resolutions, with additional Gaussian smoothing applied. The $\chi^2$ results are presented for the theoretical and empirical distributions in the left and middle blocks, respectively. The results for the Tukey depth are presented in the rightmost block. When we consider all three tests, the $p$-values computed from the empirical distribution using the $\chi^2$ statistic yield the most conservative estimate of the significance, and we favor them in order to be conservative in our interpretation. $\relBetti{1}$ shows a significant difference between the observational maps and the simulations for $\Res = 32$ and $\Res = 16$. Additionally, $\relBetti{0}$ also shows a significant difference at $\Res = 32$. However, we note in this context that the distributions in this case are manifestly non-Gaussian, and their form is poorly understood theoretically. As a result, the $\chi^2$ values should be regarded with caution. This also makes the case for admitting the nonparametric Tukey depth test, which detects an outlier event in this case, yielding a $p$-value of $0.0$. The Euler characteristic shows a highly significant difference at $\Res = 32$, an order of magnitude stronger than for either $\relBetti{0}$ or $\relBetti{1}$. This is due to the significant differences shown by the contributing $\relBetti{0}$ and $\relBetti{1}$ at this resolution, in combination with the fact that these deviations occur near the tail of the distribution. In general, because the Euler characteristic is an alternating sum of the Betti numbers, its behavior is influenced by both Betti numbers. Cancellation effects dominate if the deviations in the contributing Betti numbers occur in the zone of overlap, which lies more toward the median thresholds. 
If the deviations occur toward the tail, where either one of the Betti numbers is dominant, the signals of the contributing Betti numbers are amplified in the Euler characteristic signal. As an example, for the next lower resolution, $\Res = 16$, the Euler characteristic shows no significant difference even though $\relBetti{1}$ exhibits significant difference. This is because of the highly nonsignificant behavior of $\relBetti{0}$, whose contribution cancels the contribution of $\relBetti{1}$ toward the Euler characteristic. In many instances, the Tukey depth test yields $p$-values of $0.0$ for all the descriptors, indicating that the observation is a true outlier compared to the simulations. We note the trend that these instances occur when the $p$-values computed from the Mahalanobis distance also exhibit generally low values. We help interpret the Tukey depth values in Appendix~\ref{sec:stat_validity}, where we examine the trends in their distribution.
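For orientation, the Tukey (halfspace) depth can be approximated by random projections: a depth of $0$ marks the observation as lying outside the simulation cloud in at least one direction, which is what the $p$-values of $0.0$ above reflect. The following is a generic sketch of such an approximation, not the implementation used here:

```python
import numpy as np

def tukey_depth_approx(point, cloud, n_dirs=2000, seed=0):
    """Approximate Tukey (halfspace) depth of `point` with respect to
    `cloud`: the minimum, over projection directions, of the fraction
    of cloud points in the closed halfspace containing `point`.
    Sampling directions at random yields an upper bound on the exact
    depth; depth 0 flags a true outlier."""
    rng = np.random.default_rng(seed)
    cloud = np.asarray(cloud, dtype=float)
    point = np.asarray(point, dtype=float)
    dirs = rng.normal(size=(n_dirs, cloud.shape[1]))
    proj_cloud = cloud @ dirs.T   # shape (n_points, n_dirs)
    proj_point = point @ dirs.T   # shape (n_dirs,)
    above = (proj_cloud >= proj_point).mean(axis=0)
    below = (proj_cloud <= proj_point).mean(axis=0)
    return float(np.minimum(above, below).min())
```

A point near the center of the cloud attains a depth close to $1/2$, while a point separated from the cloud by some hyperplane attains depth $0$.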
\subsubsection{\texttt{FFP10} dataset}
Results for the \texttt{FFP10} dataset are presented in panel (b) of Table~\ref{tab:npipe_degrade-pvalues}. We concentrate our analysis on the middle block, which presents the $p$-values from the $\chi^2$ statistic using the empirical distribution. For $\Res = 32, FWHM = 320'$, we note that all the topological descriptors show a nonsignificant deviation between the simulations and the observation, in contrast with the behavior at the same resolution in the \texttt{NPIPE} dataset, where the values are an order of magnitude lower. For the next larger scale of probing, $\Res = 16, FWHM = 640'$, we note a per mil significance of the difference between simulations and observations for the number of loops, which is an order of magnitude smaller than in the \texttt{NPIPE} dataset. Additionally, we also note the low $p$-value for $b_0$ at $\Res = 128, FWHM = 80'$, which is the scale at which an approximately $3\sigma$ deviation occurs for the number of components between the simulations and the observation. Examining the Euler characteristic at this scale, we note that the values are mildly significant, in contrast with the \texttt{NPIPE} dataset, which exhibits nonsignificant behavior. The Tukey depth test shows the observations to be true outliers in instances at different resolutions for all three variables, yielding $p$-values of $0.0$. As in the case of the \texttt{NPIPE} dataset, these instances occur when the $\chi^2$ test also exhibits small $p$-values.
\subsection{Comparison with earlier topo-geometrical results in the literature}
\begin{figure}
\centering
\subfloat{\includegraphics[width=0.5\textwidth]{figs/masked_loops_ver2_roundCorners.png} }\\
\caption{Visualization of the loops surrounding the low-density regions at a moderately negative threshold. The gaps in the manifold correspond to the mask that has been removed. We clearly note the equatorial belt corresponding to our galaxy, as well as additional patches in the visible cap. Some loops live fully in the part of the excursion region that is not influenced by the mask; we show these in red. We also depict some representative relative loops whose end points are masked; we draw closed loops for this category as well and show, in white, the portions in which they overlap the masked regions. The visualization is based on the observed CMB map from the \texttt{NPIPE} dataset smoothed at $5$ degrees.}
\label{fig:loops}
\end{figure}
Topo-geometrical studies aimed at testing the isotropy, homogeneity, and Gaussianity of cosmic fields, such as the CMB and the 3D density distribution, are standard in the cosmological literature. The principal tools employed for this purpose are the geometrical Minkowski functionals and the topological Euler characteristic. The bridge between topology and geometry is provided by the Gauss-Bonnet theorem, which establishes the equivalence between the geometric $0$-th Lipschitz-Killing curvature~(equivalently, the $D$-th Minkowski functional) and the purely topological notion of the Euler characteristic of a $D$-dimensional space. Our results, on the other hand, involve novel and purely topological notions arising from homology theory, quantified by the Betti numbers. The Euler characteristic is connected to these topological measures through the Euler-Poincar\'{e} formula, which expresses the Euler characteristic as the alternating sum of the Betti numbers.
The pioneering works that examined the Minkowski functionals and Euler characteristic of the CMB are by \cite{eriksen04ng} and \cite{park2004}, performed on the WMAP data \citep{wmap9}. While \cite{eriksen04ng} examined the whole set of Minkowski functionals as well as the Morse-theoretical concept of the skeleton length, \cite{park2004} restricted themselves to a genus measurement, which is linearly related to the Euler characteristic. The purely geometric Minkowski functionals, namely the area functional and the contour length functional, show consistency with the standard model. In contrast, both works reported anomalous measurements of the Euler characteristic or genus. \cite{eriksen04ng} reported that the Euler characteristic at negative density thresholds is anomalous with respect to the base model simulations at more than $3\sigma$ for the northern hemisphere at smoothing scales of $FWHM = 3.40^\circ$. The corresponding $\chi^2$ value reported for this scale is at the $95\%$ confidence level, which translates into a $p$-value of $0.05$. \cite{park2004} reported an anomalous genus at more than $2.5\sigma$. The anomalous behavior of the genus or the Euler characteristic, specifically at negative levels, is linked to and generated by the anomalous behavior of the first homology group, represented by topological loops, and establishes a connection between our results and previous results, showing a consistency across datasets.
The analysis of the Minkowski functionals and Euler characteristic on the Planck datasets was performed by the Planck collaboration itself and was reported in \cite{planckIsotropy2015} for DR2 (FFP8) and in \cite{planckIsotropy2018} for DR3 (FFP10). \cite{planckIsotropy2018} examined the Minkowski functionals, and the graphs for the Euler characteristic were presented for $\Res = 1024, 256, 32$. The general indication is that the observational values are within the $2.5\sigma$ band computed from the simulations. Our results are largely consistent with this observation from the Planck analysis for the given resolutions, which is also exhibited by the nonanomalous $p$-values obtained from the empirical $\chi^2$ test. However, we also note that our results show stronger differences than the Planck results. In this context, it is important to note that in the Planck analysis pipeline, it has been the practice to combine all the Minkowski functionals to perform the statistical tests and subsequently present the combined $p$-values. Beyond the fact that this combination mixes signals from topo-geometrical descriptors that are a priori known to represent independent properties, it also prevents a direct comparison with our results of Euler characteristic computations.
In this context, another subtle but important point must be considered. In a detailed treatment, \cite{pranav2019a} showed that the standard equations for the Minkowski functionals used in cosmology are volume-normalized representations and represent exact computations only under specific conditions. The simplest case is a compact manifold without a boundary, for example, the complete $2$-sphere in 2D, or a periodic 3D Euclidean grid. When the manifold has boundaries, for example, the masked CMB sphere, additional boundary terms are involved in the exact computation of the Minkowski functionals. For purely geometric quantities, due to their localized nature, these boundary terms may be ignored when the manifold is large compared to the boundary. For topological quantities, which are nonlocal by nature, there is an interplay between the size of the topological objects, set by the smoothing scale, and the size of the manifold. As a result, for topological descriptors such as the Euler characteristic and the Betti numbers, boundary effects become increasingly important at large scales. A full exact computation that takes the boundary effects generated by the complicated mask into account may therefore be a more accurate reflection of reality. Our computational methods for the topological descriptors perform exact computations in the presence of arbitrary masks. For the purely geometric components of the Minkowski functionals, this involves a computation via the full Gaussian Kinematic Formula \citep{adl10,pranav2019b}, which takes into account the boundary terms that were developed, for example, in \cite{fantaye2015}.
\section{Discussions and conclusion}
\label{sec:discussion}
We presented a topological analysis of the temperature fluctuations in the CMB in terms of homology. To account for regions with unreliable data, we computed the homology of the excursion sets relative to the mask covering these areas. We performed a multiscale analysis by degrading the maps to a range of pixel resolutions, with a subsequent convolution with a Gaussian filter at a range of scales. We performed our analysis on the fourth and final \texttt{NPIPE} data release from the Planck team. This pipeline represents a natural evolution of the data-processing pipeline, commensurate with the better understanding of systematics, residuals, and noise acquired over successive data releases. It incorporates the best strategies from the LFI and HFI processing pipelines, so that the level of noise and residuals at all scales is reduced \citep{npipe}. We also investigated the \textup{Planck 2018} Data release 3 (DR3), accompanied by the \texttt{FFP10} simulations \citep{ffp10}, for comparison and completeness. The present paper is a successor to \cite{pranav2019b}, where we investigated the topology of the temperature fluctuation maps based on the intermediate \textup{Planck 2015} Data Release 2 (DR2), accompanied by the \texttt{FFP8} simulations. Together, these two papers investigate the topological characteristics of the CMB temperature fluctuation maps for the latest three data releases by the Planck team. The overall trends in the results across the datasets show the consistency of the data-processing pipeline and of our own methods.
Examining the behavior of topological components, or isolated objects, represented by the $0$D homology group, we find that the observations deviate by approximately $2\sigma$ or less from the simulations for the \texttt{NPIPE} dataset. For the \texttt{FFP10} dataset, $\relBetti{0}$ exhibits a deviation of $2.96\sigma$ at $\Res = 128, FWHM = 80'$. The $1$D homology group, representing and quantifying the topological loops, is also consistent with the base model at $2\sigma$ for most of the resolutions and scales. However, for $FWHM = 320'$ and at moderately low negative thresholds, we record a strong deviation between the simulations and observations for the \texttt{NPIPE} dataset. In a Gaussian setting, this would amount to an approximately $4\sigma$ significance. However, the distribution characteristics at this threshold are manifestly non-Gaussian, rendering the usual $\sigma$ significance nonviable. The distribution does not obey Poisson statistics either. In view of this, and because we lack a true theoretical understanding of the distribution characteristics, we simply note that the significance is larger than what may be resolved by $600$ simulations, yielding an empirical $p$-value of at most $0.0016$. The \texttt{FFP10} dataset exhibits a deviant behavior at $\sim 2.8\sigma$ at the same threshold and resolution for the loops. Although the deviation occurs in a regime in which the low threshold and large smoothing scale may indicate statistics with low numbers, we find the statistics to be well behaved and stable. Given the high significance, and the fact that the deviation occurs at a scale at which anomalies have been reported across several methods and datasets \citep{eriksen04ng,park2004,cmbanomaliesstarkman}, the case merits consideration. However, in this context, we also note that the statistical analysis is based on merely $600$ and $300$ simulations for the \texttt{NPIPE} and \texttt{FFP10} datasets, respectively. 
For the Euler characteristic, the \texttt{NPIPE} dataset exhibits a high significance due to the high significance exhibited by the loops. For the \texttt{FFP10} dataset, in contrast, the Euler characteristic is within approximately the $2.5\sigma$ band computed from the simulations, which is largely consistent with the observations in \cite{planckIsotropy2018}. We note, however, that our results generally exhibit stronger differences than the Planck analysis pipeline. While ascertaining the source of this discrepancy is beyond the scope of this paper, a plausible source of the difference may be the different methods adopted for estimating the Euler characteristic, and the Minkowski functionals in general, in \cite{planckIsotropy2018}, which are based on theoretical equations for boundary-less manifolds. At the next higher smoothing scale, $\Res = 16, FWHM = 640'$, both datasets exhibit a significantly anomalous behavior with respect to the number of loops, but not the number of components. However, we disregard this resolution because of the low numbers involved in computing the statistics, owing to the large smoothing scale and low thresholds.
In order to test for nonrandom discrepancies, we computed the $p$-values using the theoretical model-dependent $\chi^2$ test, as well as its nonparametric version computed from empirical distributions, by combining all relevant thresholds at a given resolution. We also presented the nonparametric and model-independent Tukey depth test, which indicated that the observations are true outliers with respect to the simulations in numerous instances. However, in the final interpretation, we focused on the most conservative $p$-values, estimated using the $\chi^2$ test from the empirical distributions. For the \texttt{NPIPE} dataset, the observations are consistent with the simulations, with $p$-values higher than $0.05$ for most resolutions. However, the number of components and loops shows mildly significant deviations at $\Res = 32, FWHM = 320'$. The corresponding deviation for the Euler characteristic exhibits an order-of-magnitude lower value, at per mil levels. For the next larger scale, we find that the components and the Euler characteristic are consistent with the base model, while the loops exhibit a mildly anomalous $p$-value of $0.03$. While emphasizing that the statistics are based on low numbers, we note the contrasting behavior of the different topological descriptors. For the \texttt{FFP10} dataset, the components and the Euler characteristic exhibit $p$-values higher than $0.01$ for all resolutions. This is consistent with the observation in \cite{planckIsotropy2018} that the $p$-values obtained from the combined Minkowski functionals are consistent with the base model within the $99\%$ confidence level. The number of loops exhibits a significantly anomalous behavior, yielding per mil $p$-values, in contrast with the number of components and the Euler characteristic, at $\Res = 16, FWHM = 640'$. However, we note that this deviation occurs at large smoothing scales, where the numbers involved in computing the statistics are small. 
Disregarding this anomalous behavior of loops at large scales, which might be affected by low-number statistics, we find that the \texttt{FFP10} dataset is consistent with the standard model simulations at the $99\%$ confidence level.
In summary, while both datasets are largely consistent with the simulations based on the standard model, we find instances of interesting discrepancies in both datasets that may be difficult to dismiss summarily as statistical flukes. Although most, but not all, of the anomalies occur on large scales, where the statistics are inherently based on small numbers, we find them to be in statistically stable regimes in most instances. Based on the evidence presented in this paper, our assessment is that it may be difficult to summarily accept or reject the null hypothesis. A primary but crucial requirement for a clear verdict may be a significantly larger number of simulations. In the absence of a true understanding of the origin of the possible anomalous behavior, any attempt to classify it would merely be speculative, and we refrain from doing so. However, to facilitate a deeper understanding, we envision future research directed toward examining the fiducial frequency maps, as well as the polarization maps, among others, possibly with a larger suite of simulations. This will also involve testing smaller patches of the sky in order to determine whether the effect is global in nature or restricted to a specific part of the sky.
\section*{Acknowledgements}
I am indebted to the anonymous referee, whose incisive yet extremely insightful comments have helped bring this draft to a balanced place. I am greatly indebted to Robert Adler, Thomas Buchert, Herbert Edelsbrunner, Bernard Jones, Armin Schwarzman, Gert Vegter, and Rien van de Weygaert for encouraging this solo venture, and for extremely helpful discussions. My gratitude also goes to Julian Borrill and Reijo Keskitalo for their patience in clarifying doubts, and for their constructive comments on the draft. I also thank Tal Eliezri for insightful comments on the artwork. This work is supported by the ERC advanced grant ARThUs (grant no: 740021; PI: Thomas Buchert), with contributing influence from ERC advanced grant URSAT (grant no: 320422; PI: Robert Adler). I gratefully acknowledge the support of PSMN (P\^ole Scientifique de Mod\'elisation Num\'erique) of the ENS de Lyon, and the Department of Energy's National Energy Research Scientific Computing Center (NERSC) at Lawrence Berkeley National Laboratory, operated under Contract No. DE-AC02-05CH11231, for the use of computing resources.
\bibliographystyle{aa}
\section{Introduction}
\subsection{Main results}
In classical Diophantine approximation, one wants to approximate an irrational number $\alpha$ by rationals $p/q$
for $p,q \in \mathbb{Z}$. Dirichlet's theorem says that for every $N \in \mathbb{N}$, there exist
$p,q \in \mathbb{Z}$ with $0<q<N$, such that
$$|q\alpha-p|<1/N < 1/q.$$
In this way, one can view classical Diophantine approximation as studying the distribution of $q\alpha$ modulo
$\mathbb{Z}$ near zero. Diophantine approximation for irrational numbers has been generalized to
vectors, linear forms, and more generally matrices, and has become a classical subject in metric number theory.
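For completeness, we recall the standard pigeonhole argument behind Dirichlet's theorem, stated here with the slightly weaker bound $q\leq N$: partitioning $[0,1)$ into the $N$ intervals $[k/N,(k+1)/N)$, $k=0,\dots,N-1$, two of the $N+1$ fractional parts $\{0\cdot\alpha\},\{\alpha\},\dots,\{N\alpha\}$ must lie in the same interval, say those of $q_1<q_2$; setting $q=q_2-q_1$ and choosing $p\in\mathbb{Z}$ appropriately yields
$$|q\alpha-p|=\bigl|\{q_2\alpha\}-\{q_1\alpha\}\bigr|<1/N, \qquad 0<q\leq N.$$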
In this article, we consider inhomogeneous Diophantine approximation: the distribution of $q\alpha$ modulo
$\mathbb{Z}$ near a ``target" $b \in \mathbb{R}$. Although Dirichlet's theorem no longer holds,
there exist infinitely many $q\in\mathbb{Z}$ such that
\[
|q\alpha-b-p| < 1/|q| \quad\text{for some }p\in\mathbb{Z}
\]
for almost every $(\alpha,b)\in\mathbb{R}^2$ and moreover,
\[
\liminf_{p,q\in\mathbb{Z}, |q|\to \infty} |q||q\alpha -b-p|=0
\]
for almost every $(\alpha,b)\in\mathbb{R}^2$ by the inhomogeneous Khintchine theorem (\cite[Theorem \rom{2} in
Chapter \rom{7}]{Cas57}).
In analogy with the case of numbers, for an $m \times n$ real matrix $A\in M_{m,n}(\mathbb{R})$, we study $Aq \in \mathbb{R}^m$
modulo $\mathbb{Z}^m$ near the target $b \in \mathbb{R}^m$ for vectors $q \in \mathbb{Z}^n$.
In this general situation as well, using the inhomogeneous Khintchine--Groshev theorem (\cite[Theorem 1]{Sch64}
or \cite[Chapter 1, Theorem 15]{Spr79}), we have
$$\liminf_{q\in\mathbb{Z}^n, \|q\|\to \infty} \|q\|^{n}\idist{Aq-b}^{m}=0$$
for almost every $(A,b) \in {M}_{m,n}(\mathbb{R}) \times \mathbb{R}^m$.
Here, $\idist{v}: =\displaystyle\inf_{p\in \mathbb{Z}^m} \|v-p\|$ denotes the distance from $v \in \mathbb{R}^m$
to the nearest integral vector with respect to the supremum norm $\|\cdot \|$.
The exceptional set of the above equality is our object of interest. We will consider the exceptional set with weights
in the following sense. Let us first fix, throughout the paper, an $m$-tuple and an $n$-tuple of positive reals
$\mathbf{r}=(r_1,\cdots,r_m)$, $\mathbf{s}=(s_1,\cdots,s_n)$ such that $\displaystyle\sum_{1\leq i\leq m}r_i=1=\displaystyle\sum_{1\leq j\leq n}s_j$. The special case where $r_i=1/m$ and $s_j=1/n$ for all $i=1,\dots,m$
and $j=1,\dots,n$ is called the unweighted case.
Define the $\mathbf{r}$-quasinorm of $\mathbf{x}\in\mathbb{R}^m$ and $\mathbf{s}$-quasinorm of $\mathbf{y}\in\mathbb{R}^n$ by
$$\|\mathbf{x}\|_{\mathbf{r}}:=\max_{1\leq i\leq m}|x_i|^{\frac{1}{r_i}} \quad\textrm{and}\quad \|\mathbf{y}\|_{\mathbf{s}}:=\max_{1\leq j\leq n}|y_j|^{\frac{1}{s_j}}.$$
Denote $\idist{\mathbf{x}}_\mathbf{r}: =\displaystyle\inf_{p\in \mathbb{Z}^m} \|\mathbf{x}-p\|_{\mathbf{r}}$.
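As a consistency check, in the unweighted case $r_i=1/m$, $s_j=1/n$ these quasinorms reduce to powers of the supremum norm:
$$\|\mathbf{x}\|_{\mathbf{r}}=\max_{1\leq i\leq m}|x_i|^{m}=\|\mathbf{x}\|^{m}
\quad\textrm{and}\quad
\|\mathbf{y}\|_{\mathbf{s}}=\max_{1\leq j\leq n}|y_j|^{n}=\|\mathbf{y}\|^{n},$$
so that $\|q\|_{\mathbf{s}}\idist{Aq-b}_{\mathbf{r}}=\|q\|^{n}\idist{Aq-b}^{m}$, recovering the quantity in the Khintchine--Groshev statement above.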
We call $A$ $\epsilon$-\textit{bad} for $b\in\mathbb{R}^m$ if
\eqlabel{eq1523}{
\liminf_{q\in\mathbb{Z}^n, \|q\|_{\mb{s}} \to \infty} \|q\|_{\mathbf{s}}\idist{Aq-b}_{\mathbf{r}}\ge \epsilon
.}
Denote
\begin{align*}
\mb{Bad}(\epsilon)&\overset{\on{def}}{=}\set{(A,b)\in {M}_{m,n}(\mathbb{R}) \times \mathbb{R}^m :A\textrm{ is $\epsilon$-bad for $b$}},\\
\mb{Bad}_A(\epsilon)&\overset{\on{def}}{=}\set{b\in\mathbb{R}^m:A\textrm{ is $\epsilon$-bad for $b$}}, \;\;\mb{Bad}_A\overset{\on{def}}{=}\bigcup_{\epsilon>0}\mb{Bad}_A(\epsilon),\\
\mb{Bad}^b(\epsilon)&\overset{\on{def}}{=}\set{A\in M_{m,n}(\mathbb{R}):A\textrm{ is $\epsilon$-bad for $b$}}, \;\; \mb{Bad}^b\overset{\on{def}}{=}\bigcup_{\epsilon>0}\mb{Bad}^b(\epsilon).
\end{align*}
The set $\mb{Bad}^0$ can be seen as the set of badly approximable systems of $m$ linear forms in $n$ variables.
This set is of Lebesgue measure zero \cite{Gro38}, but has full Hausdorff dimension $mn$ \cite{Sch69}.
See \cite{PV02,KTV06,KW10} for the weighted setting.
For every $b$, the set $\mb{Bad}^b$ also has zero Lebesgue measure \cite{Sch} and full Hausdorff dimension
\cite{ET}. Indeed, it is shown that $\mb{Bad}^b$ is a winning set \cite{ET} and even a hyperplane winning set
\cite{HKS}, a property which implies full Hausdorff dimension. On the other hand, the set $\mb{Bad}_A$ also has full
Hausdorff dimension for every $A$ \cite{BHKV10}. See \cite{Har12,HM17,BM17} for the weighted setting.
The sets $\mb{Bad}^b$ and $\mb{Bad}_A$ are unions of the subsets $\mb{Bad}^b(\epsilon)$ and
$\mb{Bad}_A(\epsilon)$ over $\epsilon>0$, respectively; thus a more refined question is whether
$\mb{Bad}^b(\epsilon)$ and $\mb{Bad}_A(\epsilon)$ can still have full Hausdorff dimension.
For the homogeneous case ($b=0$), the Hausdorff dimension of $\mb{Bad}^0(\epsilon)$ is less than the full dimension
$mn$ (see \cite{BK13, Sim} for the unweighted case and \cite{KM19} for the weighted case).
Thus, a natural question is whether $\mb{Bad}^b(\epsilon)$ can have full Hausdorff dimension for some $b$.
Our first main result says that in the unweighted case, $\mb{Bad}^b(\epsilon)$ cannot have full Hausdorff dimension
for any $b$. We provide an effective bound on the dimension in terms of $\epsilon$ as well.
\begin{thm}\label{corb1} For the unweighted case, i.e. $r_i=1/m$ and $s_j=1/n$ for all $i=1,\dots,m$
and $j=1,\dots,n$, there exist $c>0$ and $M_0>0$ depending only on $d=m+n$ such that for any $\epsilon>0$
and $b\in\mathbb{R}^m$, $$\dim_H \mb{Bad}^b(\epsilon)\leq mn-c\epsilon^{M_0}.$$
\end{thm}
As for the set $\mb{Bad}_A(\epsilon)$, the second author, together with U. Shapira and N. de Saxc\'e, showed that
the Hausdorff dimension of $\mb{Bad}_A(\epsilon)$ is less than the full dimension $m$ for almost every $A$ \cite{LSS}.
In fact, it was shown that one can associate to $A$ a certain point $x_A$ in the space of unimodular lattices
$\operatorname{SL}_d(\mathbb{R})/\operatorname{SL}_d(\mathbb{Z})$ such that if $x_A$ has no escape of mass on average for a certain diagonal flow
(see Section~\ref{sec:1.2} for more details), which is satisfied by almost every point, then the Hausdorff dimension
of $\mb{Bad}_A(\epsilon)$ is less than $m$.
In this article, we provide an effective bound on the dimension in terms of $\epsilon$ and a certain Diophantine property of $A$ as follows.
We say that an $m\times n$ matrix $A$ is \emph{singular on average} if for any $\epsilon>0$
$$\lim_{N\to\infty}\frac{1}{N}\left| \set{l\in\set{1,\cdots,N}:\exists q\in\mathbb{Z}^n \ \text{s.t.} \
\idist{Aq}_{\mb{r}}<\epsilon 2^{-l} \ \textrm{and} \ 0<\|q\|_{\mb{s}}<2^l}\right| =1.$$
\begin{thm}\label{thmEff1}
For any $A\in M_{m,n}(\mathbb{R})$ which is not singular on average, there exists a constant $c(A)>0$ depending on $A$
such that for any $\epsilon>0$, $\dim_H \mb{Bad}_{A}(\epsilon)\leq m-c(A)\frac{\epsilon}{\log(1/\epsilon)}.$
\end{thm}
On the other hand, the second author, together with Y. Bugeaud, D. H. Kim and M. Rams, showed that
in the one-dimensional case ($m=n=1$), $\mb{Bad}_\alpha (\epsilon)$ has full Hausdorff dimension for some $\epsilon>0$ if and
only if $\alpha\in\mathbb{R}$ is singular on average \cite{BKLR}.
We generalize this characterization to the general dimensional setting.
\begin{thm}\label{thmA1}
Let $A\in M_{m,n}$ be a matrix. Then the following are equivalent:
\begin{enumerate}
\item\label{S1} For some $\epsilon > 0$, the set $\mb{Bad}_{A}(\epsilon)$ has full Hausdorff dimension.
\item\label{S3} $A$ is singular on average.
\end{enumerate}
\end{thm}
Note that the implication (\ref{S1}) $\implies$ (\ref{S3}) of Theorem \ref{thmA1} follows from
Theorem \ref{thmEff1}. The other direction will be shown in Section \ref{sec6}.
\subsection{Discussion of the proofs}\label{sec:1.2}
We mainly use entropy rigidity in homogeneous dynamics, which means that the measure with maximal
entropy is unique \cite{EL}. The main tool in \cite{LSS} is a relative version of entropy rigidity.
In this article, we effectivize this phenomenon in terms of static entropy and conditional measures.
To use the effective version of entropy rigidity, we construct a ``well-behaved'' partition and $\sigma$-algebra,
and compare the associated dynamical entropy and static entropy. These results are summarized in Section \ref{sec2}
in a general setting as in \cite{EL}, and are of independent interest.
To describe the scheme of the proofs of the main theorems, we consider a more specific homogeneous space as follows.
For $d=m+n$, let us denote by $\operatorname{ASL}_d(\mathbb{R})=\operatorname{SL}_d(\mathbb{R})\ltimes\mathbb{R}^d$ the set of area-preserving affine
transformations and denote by $\operatorname{ASL}_d(\mathbb{Z})=\operatorname{SL}_d(\mathbb{Z})\ltimes\mathbb{Z}^d=\operatorname{Stab}_{\operatorname{ASL}_d(\mathbb{R})}(\mathbb{Z}^d)$
the stabilizer of $\mathbb{Z}^d$.
For given weights $\mb{r}\in\mathbb{R}^{m}_{>0}$ and $\mb{s}\in\mathbb{R}^{n}_{>0}$, we consider the $1$-parameter diagonal subgroup
$$\set{a_t=\mathrm{diag} (e^{r_1t},\cdots,e^{r_mt},e^{-s_1t},\cdots,e^{-s_nt})}_{t \in \mathbb R}$$
in $\operatorname{SL}_d(\mathbb{R})$ and we take a lift of this group to $\operatorname{ASL}_d(\mathbb{R})\subset \operatorname{SL}_{d+1}(\mathbb{R})$ given by
$a_t\longmapsto\left(\begin{matrix}
a_t & 0\\
0 & 1\\
\end{matrix}\right)$
and abusing the notation, we denote it again by $a_t$. Let $a\overset{\on{def}}{=} a_1$ be the time-one map of the diagonal flow $a_t$.
We denote by
$$U=\set{ \left(\begin{matrix}
I_m & A & 0\\
0 & I_n & 0\\
0 & 0 & 1\\
\end{matrix}\right)
:A\in M_{m,n}(\mathbb{R})};\; \;
W=\set{
\left(\begin{matrix}
I_m & 0 & b\\
0 & I_n & 0\\
0 & 0 & 1\\
\end{matrix}\right)
:b\in \mathbb{R}^m},$$ both of which are unstable horospherical subgroups in $\operatorname{ASL}_d(\mathbb{R})$ for $a$.
The homogeneous spaces $\operatorname{SL}_d(\mathbb{R})/\operatorname{SL}_d(\mathbb{Z})$ and $\operatorname{ASL}_d(\mathbb{R})/\operatorname{ASL}_d(\mathbb{Z})$ can be seen
as the space of unimodular lattices and the space of unimodular grids, i.e. unimodular lattices translated by a vector in $\mathbb{R}^d$, respectively.
We say that a point $x\in \operatorname{SL}_d(\mathbb{R})/\operatorname{SL}_d(\mathbb{Z})$ has \emph{$\delta$-escape of mass on average} (with respect to the diagonal flow $a_t$) if for any compact set $Q$ in $\operatorname{SL}_d(\mathbb{R})/\operatorname{SL}_d(\mathbb{Z})$,
$$\displaystyle\liminf_{N\to\infty}\frac{1}{N}|\set{\ell\in\set{1,\dots,N}: a_\ell x\notin Q}|\ge\delta.$$
A point $x\in \operatorname{SL}_d(\mathbb{R})/\operatorname{SL}_d(\mathbb{Z})$ has no escape of mass on average if it does not have $\delta$-escape of mass on average for any $\delta>0$.
For $A\in M_{m,n}$ and $(A,b)\in {M}_{m,n}(\mathbb{R}) \times \mathbb{R}^m$, we associate points
$$x_A:=
\left(\begin{matrix}
I_m & A\\
0 & I_n\\
\end{matrix}\right)\operatorname{SL}_d(\mathbb{Z})\quad \text{and}\quad y_{A,b}:=
\left(\begin{matrix}
I_m & A & -b\\
0 & I_n & 0\\
0 & 0 & 1\\
\end{matrix}\right)\operatorname{ASL}_d(\mathbb{Z}),$$ respectively.
In \cite{LSS}, it was shown that $\dim_H \mb{Bad}_A(\epsilon)<m$ for all $\epsilon>0$ if $x_A$ is \emph{heavy}, a condition equivalent to having no escape of mass on average. Note that $x_A$ is heavy for almost every $A\in M_{m,n}(\mathbb{R})$.
On the other hand, we remark that $A$ is singular on average if and only if
the corresponding point $x_A$ has $1$-escape of mass on average (with respect to the diagonal flow $a_t$) by Dani's correspondence (see also \cite{KKLM}).
Now we give the outline of the proofs for Theorem \ref{corb1} and Theorem \ref{thmEff1}.
From the Dani correspondence, we characterize the Diophantine property
$(A,b)\in \mb{Bad}(\epsilon)$ by the dynamical property that the orbit $(a_t y_{A,b})_{t\geq 0}$ is eventually
in some target $\mathcal{L}_\epsilon$ (Subsection \ref{sec3.2}).
Using this characterization, we can construct $a$-invariant measures with large dynamical entropies relative to
$W$ and $U$ (Proposition \ref{prop5} and Proposition \ref{prop2}), which are related to
the Hausdorff dimensions of $\mb{Bad}_A(\epsilon)$ and $\mb{Bad}^b(\epsilon)$, respectively.
Here, we use the ``well-behaved'' $\sigma$-algebra constructed in Proposition \ref{algebracst}.
Then we can associate the dynamical entropies with the static entropies (Lemma \ref{algexiA}).
Finally, we can obtain effective upper bounds for the Hausdorff dimensions
of $\mb{Bad}_A(\epsilon)$ and $\mb{Bad}^b(\epsilon)$ using an effective version of the variational principle (Proposition \ref{effEL}).
The article is organized as follows. In Section \ref{sec2}, we introduce entropy and relative entropy under a certain weaker invariance assumption, together with a general setup. In this general setup, we construct a ``well-behaved'' partition and
$\sigma$-algebra in a quantitative sense. From this construction, we compare the dynamical entropy and
the static entropy. Finally, we prove an effective version of the variational principle for relative entropy in the spirit
of \cite[7.55]{EL}. In Section \ref{sec3}, we introduce preliminaries for the proofs of dimension upper bounds
including properties of dimensions with respect to quasi-metrics. We also reduce badly approximable properties to dynamical properties in the space of grids in $\mathbb{R}^{m+n}$.
In Section \ref{sec:entropyboundA} and Section \ref{sec:entropyboundb}, we construct $a$-invariant measures on
$\operatorname{ASL}_d(\mathbb{R})/\operatorname{ASL}_d(\mathbb{Z})$ with large relative entropy and estimate the dimension upper bounds in Theorem \ref{thmEff1} and Theorem \ref{corb1} using the effective variational principle. We conclude the paper with Section \ref{sec6}, characterizing the singular on average property in terms of best approximations, and prove the (\ref{S3})$\implies$(\ref{S1}) part of Theorem \ref{thmA1} using a modified version of the Bugeaud--Laurent sequence in \cite{BL}. In Appendix \ref{sec:App}, we gather basic and important properties of entropy under our weaker invariance assumption.
\section{Effective version of entropy rigidity}\label{sec2}
In this section, we will establish an effective version of entropy rigidity in \cite[Section 7]{EL}.
There have been effective uniqueness results along the lines of \cite{EL} in various settings: \cite{Pol} for toral automorphisms, \cite{Kad} for hyperbolic maps on Riemannian manifolds, and \cite{Ruh} on $p$-adic homogeneous spaces.
In our setting of real quotients of Lie groups, a main technical difficulty is that there is no canonical partition of the space, whereas Riemannian manifolds admit Markov partitions and $p$-adic homogeneous spaces admit suitable generating partitions.
\subsection{Entropy and relative entropy}\label{sec2.1}
In this subsection, we recall the definitions and basic properties of the entropy and the relative entropy for $\sigma$-algebras we use in the later sections. We refer the reader to \cite[Chapter 1 \& 2]{ELW} for details.
\begin{defi}
Let $(X,\mathcal{B},\mu,T)$ be a measure-preserving system and let $\mathcal{A}\subseteq\mathcal{B}$ be a countably-generated sub-$\sigma$-algebra.
Note that there exists an $\mathcal{A}$-measurable conull set $X'\subset X$ and a system $\set{\mu_x^\mathcal{A}|x\in X'}$ of measures on $X$, referred to as \emph{conditional measures}, given for instance by \cite[Theorem 2.2]{ELW}.
The \emph{information function} of a countable partition $\mathcal{P}$ given $\mathcal{A}$ with respect to $\mu$ is defined by
$$I_\mu(\mathcal{P}|\mathcal{A})(x)=-\log\mu_x^\mathcal{A}([x]_{\mathcal{P}}),$$
where $[x]_\mathcal{P}$ is the smallest element of $\mathcal{P}$ containing $x$.
\begin{enumerate}
\item The \emph{(static) entropy} of the partition $\mathcal{P}=\set{A_1,A_2,\dots}$ is
$$H_\mu(\mathcal{P}):=H(\mu(A_1),\dots)=-\displaystyle\sum_{i\ge 1}\mu(A_i)\log\mu(A_i)\in[0,\infty],$$
where $0\log0=0$. Moreover, the \emph{conditional (static) entropy of $\mathcal{P}$ given $\mathcal{A}$} is defined by
$$H_\mu(\mathcal{P}|\mathcal{A}):=\int_X I_\mu(\mathcal{P}|\mathcal{A})(x)d\mu(x),$$
which is the average of the information.
\item Let $\mathcal{A}$ be a sub-$\sigma$-algebra such that $T^{-1}\mathcal{A}\subseteq \mathcal{A}$. For a countable partition $\mathcal{P}$ of $X$ with finite entropy, let
$$h_\mu(T,\mathcal{P}):=\displaystyle\lim_{n\to\infty}\frac{1}{n}H_\mu\Bigl(\displaystyle\bigvee_{i=0}^{n-1}T^{-i}\mathcal{P}\Bigr)
=\displaystyle\inf_{n\ge 1}\frac{1}{n}H_\mu\Bigl(\displaystyle\bigvee_{i=0}^{n-1}T^{-i}\mathcal{P}\Bigr),$$
$$h_\mu(T,\mathcal{P}|\mathcal{A}):=\displaystyle\lim_{n\to\infty}\frac{1}{n}H_\mu(\displaystyle\bigvee_{i=0}^{n-1}T^{-i}\mathcal{P}|\mathcal{A})
=\displaystyle\inf_{n\ge 1}\frac{1}{n}H_\mu(\displaystyle\bigvee_{i=0}^{n-1}T^{-i}\mathcal{P}|\mathcal{A}).$$
Then the \emph{(dynamical) entropy of $T$} is
$$h_\mu(T):=\displaystyle\sup_{\mathcal{P}:H_\mu(\mathcal{P})<\infty}h_\mu(T,\mathcal{P}).$$
Moreover, the \emph{conditional (dynamical) entropy of $T$ given $\mathcal{A}$} is
$$h_\mu(T|\mathcal{A}):=\displaystyle\sup_{\mathcal{P}:H_\mu(\mathcal{P})<\infty}h_\mu(T,\mathcal{P}|\mathcal{A}).$$
\end{enumerate}
\end{defi}
Note that the dynamical entropy is usually defined for a \emph{strictly invariant} $\sigma$-algebra $\mathcal{A}$, i.e. $\mathcal{A}=T^{-1}\mathcal{A}$, but here we define it under the weaker assumption $T^{-1}\mathcal{A}\subseteq \mathcal{A}$. Under this weaker assumption, $h_\mu(T,\mathcal{P})$ and $h_\mu(T,\mathcal{P}|\mathcal{A})$ are still well-defined since the sequences $\set{H_\mu\Bigl(\displaystyle\bigvee_{i=0}^{n-1}T^{-i}\mathcal{P}\Bigr)}$ and $\set{H_\mu\Bigl(\displaystyle\bigvee_{i=0}^{n-1}T^{-i}\mathcal{P}|\mathcal{A}\Bigr)}$ are sub-additive. Using the sub-additivity of entropy, $T^{-1}\mathcal{A}\subseteq \mathcal{A}$, and the invariance of entropy (see Proposition \ref{SEProp}), the sub-additivity of the relative entropy is obtained as follows:
\begin{align*}
H_\mu(\displaystyle\bigvee_{i=0}^{m+n-1}T^{-i}\mathcal{P}|\mathcal{A})&\leq H_\mu(\displaystyle\bigvee_{i=0}^{n-1}T^{-i}\mathcal{P}|\mathcal{A})+H_\mu(T^{-n}\displaystyle\bigvee_{i=0}^{m-1}T^{-i}\mathcal{P}|\mathcal{A})\\
&\leq H_\mu(\displaystyle\bigvee_{i=0}^{n-1}T^{-i}\mathcal{P}|\mathcal{A})+H_\mu(T^{-n}\displaystyle\bigvee_{i=0}^{m-1}T^{-i}\mathcal{P}|T^{-n}\mathcal{A})\\
&=H_\mu(\displaystyle\bigvee_{i=0}^{n-1}T^{-i}\mathcal{P}|\mathcal{A})+H_\mu(\displaystyle\bigvee_{i=0}^{m-1}T^{-i}\mathcal{P}|\mathcal{A}).
\end{align*}
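The equality of the limit and the infimum used in the definitions above is an instance of Fekete's subadditive lemma:
$$a_{m+n}\leq a_m+a_n \ \text{ for all } m,n\geq 1
\quad\Longrightarrow\quad
\lim_{n\to\infty}\frac{a_n}{n}=\inf_{n\geq 1}\frac{a_n}{n},$$
applied to $a_n=H_\mu\Bigl(\bigvee_{i=0}^{n-1}T^{-i}\mathcal{P}\Bigr)$ and $a_n=H_\mu\Bigl(\bigvee_{i=0}^{n-1}T^{-i}\mathcal{P}\big|\mathcal{A}\Bigr)$, respectively.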
We gather basic and important properties of the entropy in Appendix \ref{sec:App} under our weaker assumption.
One of the important properties is the following entropy formula for ergodic decompositions:
\begin{prop}[\textbf{Entropy and ergodic decomposition}]\label{ergDec} Let $(X,\mathcal{B},\mu,T)$ be an invertible measure-preserving system on a Borel probability space, with ergodic decomposition
\[
\mu=\int_{X}\mu_{x}^{\mathcal{E}}d\mu(x)
\] as in \cite[Theorem 2.7]{ELW}.
Let $\mathcal{A}$ be a sub-$\sigma$-algebra of $\mathcal{B}$ such that $T^{-1}\mathcal{A}\subseteq \mathcal{A}$. Then
\[
h_{\mu}(T,\xi | \mathcal{A})=\int_{X}h_{\mu_{x}^{\mathcal{E}}}(T,\xi |\mathcal{A})d\mu(x)
\] for any partition $\xi$ with $H_{\mu}(\xi)<\infty$, and
\[
h_{\mu}(T | \mathcal{A})=\int_{X}h_{\mu_{x}^{\mathcal{E}}}(T |\mathcal{A})d\mu(x).
\]
\end{prop}
The proof of the above proposition is in the appendix.
See \cite[Theorem 2.34]{ELW} for the proof under the strictly invariant assumption.
\subsection{General setup}\label{sec2.2}
Let $G$ be a closed real linear group (or connected, simply connected real Lie group) and let $\Gamma<G$ be a lattice.
We consider the quotient $Y=G/\Gamma$ with a $G$-invariant probability measure $m_Y$ and call it
Haar measure on $Y$. Let $d_G$ be a right invariant metric on $G$, which induces the metric $d_Y$ on
the space $Y=G/\Gamma$. Then $Y$ is locally isometric to $G$, that is, for every $y\in Y$ there exists some $r>0$
such that the map $g\mapsto gy$ is an isometry from the open $r$-ball $B^G_r$ around the identity in $G$
onto the open $r$-ball $B^Y_r(y)$ around $y\in Y$.
Let $r_y$ be the maximal injectivity radius at $y\in Y$, i.e. the supremum of $r>0$ for which the above
map is an isometry. For any $r>0$, we denote $Y(r)=\{y \in Y : r_y \geq r\}$.
It follows from the continuity of the injectivity radius that $Y(r)$ is compact.
For any closed subgroup $L<G$, we consider the right invariant metric $d_L$ by restricting $d_G$ on $L$, and similarly denote by $B^L_r$ the open $r$-ball around the identity in $L$.
In this section, we fix an element $a\in G$ which is $\operatorname{Ad}$-diagonalizable over $\mathbb{R}$.
Let $G^{+} = \set{g\in G|a^k g a^{-k}\to id \ \textrm{as} \ k\to -\infty}$ be the unstable (resp. stable) horospherical subgroup associated to $a$ (resp. $a^{-1}$), which is always a closed subgroup of $G$ in our setting.
Let $L<G^+$ be a closed subgroup normalized by $a$ and let $\mathfrak{l}$ denote the Lie algebra of $L$.
We can take a basis $\{v_{1},\dots,v_{\dim(\mathfrak{l})}\}$ of $\mathfrak{l}$ so that the adjoint map
$\operatorname{Ad}_{a}$ on $\mathfrak{l}$ can be considered as the expansion $(v_{i}) \mapsto (e^{c_{i}}v_{i})$ for some
$c_{i}>0$. Now assume that $c_i \leq 1$ for all $i$. Then for $\mb{c}=(c_1,\dots,c_{\dim(\mathfrak{l})})$,
we define the quasinorm $\|\cdot\|_{\mb{c}}$
by $\|x\|_{\mb{c}}=\max_{i}|x_i|^{1/c_{i}}$ for $x=\sum_i x_i v_i\in \mathfrak{l}$.
We remark that for $x, y \in \mathfrak{l}$ and $k\in\mathbb{Z}$,
\begin{itemize}
\item using the convexity of the function $s\mapsto s^{1/c_i}$,
\eqlabel{quasitriang}{
\|x+y\|_{\mb{c}}
\leq 2^{\frac{1-\min{\mb{c}}}{\min{\mb{c}}}} (\|x\|_\mb{c}+\|y\|_\mb{c}); }
\item and
$$\|\operatorname{Ad}_{a^k} x \|_{\mb{c}}=e^{k}\| x \|_{\mb{c}}.$$
\end{itemize}
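For completeness, the first inequality follows coordinatewise from the stated convexity: for $p=1/c_i\geq 1$ and $u,v\geq 0$,
$$(u+v)^{p}=2^{p}\Bigl(\tfrac{u+v}{2}\Bigr)^{p}\leq 2^{p}\cdot\tfrac{u^{p}+v^{p}}{2}=2^{p-1}\bigl(u^{p}+v^{p}\bigr),$$
so $|x_i+y_i|^{1/c_i}\leq 2^{\frac{1}{c_i}-1}\bigl(|x_i|^{1/c_i}+|y_i|^{1/c_i}\bigr)\leq 2^{\frac{1-\min{\mb{c}}}{\min{\mb{c}}}}\bigl(\|x\|_{\mb{c}}+\|y\|_{\mb{c}}\bigr)$, and taking the maximum over $i$ gives \eqref{quasitriang}. The second identity holds since $\operatorname{Ad}_{a^k}x=\sum_i e^{kc_i}x_iv_i$ and $|e^{kc_i}x_i|^{1/c_i}=e^{k}|x_i|^{1/c_i}$.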
The quasinorm induces the quasi-metric $d_{\mathfrak{l},\mb{c}}$ on the Lie algebra $\mathfrak{l}$,
thus induces the quasi-metric $d_{L,\mb{c}}$ locally on $L$ using the logarithm map from $L$ to $\mathfrak{l}$
(see Subsection \ref{sec3.1} for the definition of quasi-metric).
We similarly denote by $B^{L,\mb{c}}_r$ the open $r$-ball around the identity in $L$ with respect to the quasi-metric
$d_{L,\mb{c}}$.
\subsection{Construction of $a^{-1}$-descending, subordinate algebra and its entropy properties}\label{sec2.3}
In this subsection, our goal is to strengthen results of \cite[\S 7]{EL} for our quantitative purposes.
\begin{defi}[7.25. of \cite{EL}]\label{algdef}
Let $G^+ <G$ be the unstable horospherical subgroup associated to $a$. Let $\mu$ be an $a$-invariant measure on $Y$ and $L<G^+$ be a closed subgroup normalized by $a$.
\begin{enumerate}
\item We say that a countably generated $\sigma$-algebra $\mathcal{A}$ is \emph{subordinate to $L$} (mod $\mu$) if for $\mu$-a.e. $y$, there exists $\delta > 0$ such that
\eqlabel{Subordef}{B^{L}_\delta\cdot y \subset [y]_{\mathcal{A}} \subset B^{L}_{\delta^{-1}}\cdot y.}
\item
We say that $\mathcal{A}$ is \emph{$a^{-1}$-descending} if $(a^{-1})^{-1}\mathcal{A}= a\mathcal{A} \subseteq \mathcal{A}$.
\end{enumerate}
\end{defi}
For each $L<G^+$ and each $a$-invariant ergodic probability measure $\mu$ on $Y$, there exists a countably generated $\sigma$-algebra $\mathcal{A}$ which is $a^{-1}$-descending and subordinate to $L$ \cite[Proposition 7.37]{EL}. We will prove that such a $\sigma$-algebra can be constructed so that we also have an explicit upper bound on the measure of the set violating \eqref{Subordef} for fixed $\delta>0$. In order to prove an effective version of the variational principle later, we need this quantitative estimate to be independent of $\mu$.
For a subset $B\subset Y$ and $\delta>0$, we denote by $\partial_\delta B$ the $\delta$-neighborhood of the boundary of $B$, i.e.
$$\partial_\delta B\overset{\on{def}}{=}\set{y\in Y: \displaystyle\inf_{z\in B}d_Y(y,z)+\displaystyle\inf_{z\notin B}d_Y(y,z)<\delta}.$$
We also define the neighborhood of the boundary of a countable partition $\mathcal{P}$ by
$$\partial_\delta \mathcal{P}\overset{\on{def}}{=}\displaystyle\bigcup_{P\in\mathcal{P}}\partial_\delta P.$$
We first construct a finite partition which has small measures on neighborhoods of the boundary. The following lemma is the main ingredient of the effectivization in this section. A key feature is that the measure estimate below is independent of $\mu$.
\begin{lem}\label{partitioncst}
Let $\mu$ be a probability measure on $Y$. For any $r>0$, there exist a constant $0<\delta_0=\delta_0(r)<r$ and a partition
$\mathcal{P}=\set{P_1,\cdots,P_N,P_{\infty}}$ such that
\begin{enumerate}
\item\label{partprop1} $P_{\infty}\subseteq Y\smallsetminus Y(2r)$,
\item\label{partprop2} For any $1\leq i\leq N$, there exists $z_i\in Y$ such that $$B_{\frac{r}{5}}^G\cdot z_i\subseteq P_i\subseteq B_{r}^G\cdot z_i,$$
\item\label{partprop3} $\mu(\partial_\delta\mathcal{P})\leq \delta^{\frac{1}{2}}$ for any $0<\delta<\delta_0$.
\end{enumerate}
\end{lem}
\begin{proof}
Choose a maximal $\frac{9}{10}r$-separated set $\set{y_1,\cdots,y_N}$ of $Y(2r)$. Note that $Y(2r)\subseteq \bigcup_{i}B_{\frac{9}{10}r}^G\cdot y_i$. We claim that there exist elements $g_i\in B^G_{\frac{r}{10}}$ satisfying $\sum_i\mu(\partial_\delta (B_r^G\cdot z_i))\leq \frac{1}{2}\delta^{\frac{1}{2}}$ and $\sum_i\mu(\partial_\delta (B_{\frac{r}{2}}^G\cdot z_i))\leq \frac{1}{2}\delta^{\frac{1}{2}}$ for any $0<\delta<r$, where $z_i=g_iy_i$. To prove this claim, we choose each $g_i$ randomly, independently and uniformly distributed on $B^G_{\frac{r}{10}}$. Then we have
\eq{
\begin{aligned}
\mathbb{E}&\left(\sum_i\mu(\partial_\delta (B_r^G\cdot z_i))\right)=\sum_i\frac{1}{m_G(B^G_{\frac{r}{10}})}\int_{B^G_{\frac{r}{10}}}\int_Y\mathds{1}_{B^G_{r+\delta}\cdot g_iy_i\setminus B^G_{r-\delta}\cdot g_iy_i}(y)d\mu(y) dm_G(g_i)\\
&\asymp\sum_i\frac{1}{r^{\dim G}}\int_Y m_G\left(\set{g_i\in B^G_{\frac{r}{10}}: r-\delta\leq d(g_iy_i,y)<r+\delta}\right)d\mu(y)\\
&\ll \sum_i\frac{1}{r^{\dim G}}\int_{B^G_{\frac{11}{10}r}\cdot y_i}\delta r^{\dim G-1}d\mu
=\frac{\delta}{r}\sum_i\mu(B^G_{\frac{11}{10}r}\cdot y_i)\ll \frac{\delta}{r}.
\end{aligned}
}
In the last line, $\sum_i\mu(B^G_{\frac{11}{10}r}\cdot y_i)$ is bounded above by a constant depending only on $\dim G$ since the balls $B^G_{\frac{11}{10}r}\cdot y_i$ can overlap at most a bounded number of times by the $\frac{9}{10}r$-separatedness of the $y_i$'s.
Applying the same argument for $\partial_\delta (B_{\frac{r}{2}}^G\cdot z_i)$ instead of $\partial_\delta (B_r^G\cdot z_i)$,
$$\mathbb{E}\left(\sum_i\left(\mu(\partial_\delta (B_r^G\cdot z_i))+\mu(\partial_\delta (B_{\frac{r}{2}}^G\cdot z_i))\right)\right)\ll\frac{\delta}{r}.$$
It follows that for any $0<\delta<r$,
$$\mathbb{P}\left(\sum_i\left(\mu(\partial_\delta (B_r^G\cdot z_i))+\mu(\partial_\delta (B_{\frac{r}{2}}^G\cdot z_i))\right)\ge \frac{1}{2}\delta^{\frac{1}{2}}\right)\ll \frac{\delta^{\frac{1}{2}}}{r}.$$
Hence, for any $0<\delta_0<r$, we have
\begin{align*}
\mathbb{P}\left(\bigcap_{k\geq 0}\left\{\sum_i\left(\mu(\partial_{2^{-k}\delta_0} (B_r^G\cdot z_i))+\mu(\partial_{2^{-k}\delta_0} (B_{\frac{r}{2}}^G\cdot z_i))\right)< \frac{1}{2}(2^{-k}\delta_0)^{\frac{1}{2}}\right\}\right)>1-O\left( \frac{\delta_0^{\frac{1}{2}}}{r}\right).
\end{align*}
Thus, if we take $\delta_0$ sufficiently small so that the right-hand side is positive, then we can find $\set{g_i}_{i=1}^{N}$ such that
$$\sum_i\left(\mu(\partial_\delta (B_r^G\cdot z_i))+\mu(\partial_\delta (B_{\frac{r}{2}}^G\cdot z_i))\right)\leq \delta^{\frac{1}{2}}$$
for any $0<\delta<\delta_0$. The set $\set{z_i}_{i=1}^N$ is $\frac{7}{10}r$-separated since $\set{y_i}_{i=1}^N$ is $\frac{9}{10}r$-separated, and $Y(2r)$ is covered by $B_r^G\cdot z_i$'s since it is covered by $B_{\frac{9}{10}r}^G\cdot y_i$'s. Now we define a partition $\mathcal{P}$ inductively as follows:
$$P_i=B^G_r\cdot z_i\setminus\left(\displaystyle\bigcup_{j=1}^{i-1}P_j\cup\displaystyle\bigcup_{j=i+1}^{N}B^G_{\frac{r}{2}}\cdot z_j\right)$$
for $1\leq i\leq N$ and $P_\infty=Y\setminus \bigcup_{1\leq i\leq N} P_i$.
It is clear from the construction that $P_\infty\subseteq Y\smallsetminus Y(2r)$ and $B_{\frac{r}{5}}^G\cdot z_i\subseteq P_i\subseteq B_{r}^G\cdot z_i$ for $1\leq i\leq N$. We also observe that $\partial_\delta\mathcal{P}$ is contained in $\displaystyle\bigcup_{i=1}^N \left(\partial_\delta(B_r^G\cdot z_i) \cup \partial_\delta(B_\frac{r}{2}^G\cdot z_i)\right)$. Hence, for any $0<\delta<\delta_0$,
$$\mu(\partial_\delta\mathcal{P})\leq \sum_i\left(\mu(\partial_\delta (B_r^G\cdot z_i))+\mu(\partial_\delta (B_{\frac{r}{2}}^G\cdot z_i))\right)\leq \delta^{\frac{1}{2}}.$$
\end{proof}
For a partition $\mathcal{P}$ of $Y$, we write for any extended integers $\ell \leq \ell'$ in $\mathbb{Z}\cup\set{\pm\infty}$,
$$\mathcal{P}_{\ell}^{\ell'}=\displaystyle\bigvee_{k=\ell}^{\ell'} a^{k}\mathcal{P}.$$
We will use this notation also for $\sigma$-algebras. The following lemma is a quantitative modification of \cite[Lemma 7.31]{EL}. We remark that the constants below are independent of $\mu$, while the set $E_\delta$ depends on $\mu$.
\begin{lem}\label{Exceptional}
Given an $a$-invariant probability measure $\mu$ on $Y$ and $0<r<1$, let $0<\delta_0=\delta_0(r)<r$ and a partition $\mathcal{P}$ be as in Lemma \ref{partitioncst}.
Then for any $0<\delta<\delta_0$, there exists $E_{\delta}\subset Y$ such that $\mu(E_{\delta})<C\delta^{\frac{1}{2}}$ and
$B^{G^+}_{\delta}\cdot y\subset [y]_{\mathcal{P}_0^{\infty}}$ for any $y\in Y\setminus E_\delta$, where $C>0$ is a constant depending only on $a$ and $G$.
\end{lem}
\begin{proof}
By \cite[Lemma 7.29]{EL}, there are constants $\alpha>0$ and $\rho>0$ depending on $a$ and $G$ such that for every $r'\in (0,1]$,
\eqlabel{excEq}{a^{-k}B_{r'}^{G^+}a^{k}\subset B_{\rho e^{-k\alpha}r'}^{G}}
for any $k\in\mathbb{N}$.
Set $E_{\delta}=\displaystyle\bigcup_{k=0}^{\infty}a^k\partial_{\rho e^{-k\alpha}\delta}\mathcal{P}$.
It follows from $a$-invariance of $\mu$ and (\ref{partprop3}) of Lemma \ref{partitioncst} that for any $0<\delta<\delta_0$,
$$\mu(E_{\delta})\leq \sum_{k=0}^{\infty}\mu(\partial_{\rho e^{-k\alpha}\delta}\mathcal{P})
\leq\sum_{k=0}^{\infty}\rho^{\frac{1}{2}}e^{-\frac{k\alpha}{2}}\delta^{\frac{1}{2}}=C\delta^{\frac{1}{2}},$$
where $C=\rho^{\frac{1}{2}}\sum_{k=0}^{\infty}e^{-\frac{k\alpha}{2}}$.
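Summing the geometric series gives the explicit value
\[
C=\rho^{\frac{1}{2}}\sum_{k=0}^{\infty}e^{-\frac{k\alpha}{2}}=\frac{\rho^{\frac{1}{2}}}{1-e^{-\frac{\alpha}{2}}},
\]
which indeed depends only on $a$ and $G$, through the constants $\rho$ and $\alpha$.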
Let $h\in B^{G^+}_{\delta}$ and suppose $[hy]_{\mathcal{P}_0^{\infty}}\neq[y]_{\mathcal{P}_0^{\infty}}$.
Then there is some $k\geq 0$ such that $a^{-k}hy$ and $a^{-k}y$ belong to different elements of the partition $\mathcal{P}$.
Since $a^{-k}ha^k\in a^{-k}B_\delta^{G^+}a^{k}\subset B_{\rho e^{-k\alpha}\delta}^{G}$ by \eqref{excEq}, we have
$$d_Y (a^{-k}hy, a^{-k}y) \leq d_G (a^{-k}ha^k,id) \leq \rho e^{-k\alpha}\delta.$$
It follows that both $a^{-k}hy$ and $a^{-k}y$ belong to $\partial_{\rho e^{-k\alpha}\delta}\mathcal{P}$; in particular, $y\in E_\delta$.
We conclude that $B^{G^+}_{\delta}\cdot y\subset [y]_{\mathcal{P}_0^{\infty}}$ for any $y\in Y\setminus E_\delta$.
\end{proof}
The following proposition is a quantitative version of \cite[Proposition 7.37]{EL}. Given an $a$-invariant measure $\mu$, it provides a $\sigma$-algebra which is $a^{-1}$-descending and subordinate to $L$ in the following quantitative sense.
\begin{prop}\label{algebracst}
For any $0<r<1$, let $0<\delta_0=\delta_0(r)<r$ be as in Lemma \ref{partitioncst}.
Let $\mu$ be an $a$-invariant probability measure on $Y$, and $L<G^+$ be a closed subgroup normalized by $a$.
There exist $0<\delta_1=\delta_1(r)\leq \delta_0$ and a countably generated sub-$\sigma$-algebra $\mathcal{A}^L$ of the Borel $\sigma$-algebra such that
\begin{enumerate}
\item\label{algebracstprop1} $a\mathcal{A}^L \subset \mathcal{A}^L$, that is, $\mathcal{A}^L$ is $a^{-1}$-descending,
\item\label{algebracstprop2} $[y]_{\mathcal{A}^L}\subset B_{2r}^{L}\cdot y$ for any $y\in Y(2r)$,
\item\label{algebracstprop3} if $0<\delta<\delta_1$, then $B_\delta^{L}\cdot y\subset [y]_{\mathcal{A}^L}$ for any $y\in Y\setminus E_\delta$,
where $E_\delta$ is as in Lemma \ref{Exceptional}.
\end{enumerate}
Moreover, for $\mu$-almost every ergodic component $\mu_{y_0}^{\mathcal{E}}$ with $\mu_{y_0}^{\mathcal{E}}(Y(2r))>0$,
the $\sigma$-algebra $\mathcal{A}^L$ is $L$-subordinate modulo $\mu_{y_0}^\mathcal{E}$.
\end{prop}
\begin{proof}
For given $0<r<1$ and the measure $\mu$, let $\mathcal{P}=\set{P_1,\cdots,P_N,P_\infty}$ be the partition constructed in Lemma \ref{partitioncst}.
We first consider a countably generated $\sigma$-algebra $\mathcal{P}_0^\infty$. It follows from (\ref{partprop1}) and (\ref{partprop2}) of
Lemma \ref{partitioncst} that for any $y\in Y(2r)$, $[y]_{\mathcal{P}}\subset B_{2r}^G\cdot y$, hence $[y]_{\mathcal{P}_0^\infty}\subset B_{2r}^G\cdot y$.
By Lemma \ref{Exceptional}, we can find $E_\delta\subset Y$ such that $B_\delta^{G^+}\cdot y\subset[y]_{\mathcal{P}_0^\infty}$
for any $y\in Y\setminus E_\delta$.
We will replace $\mathcal{P}$ by $\mathcal{P}^L$ in such a way that $\mathcal{A}^L:=(\mathcal{P}^L)_0^\infty$ will be the desired $\sigma$-algebra.
For the natural quotient map $\pi_Y : G \to Y$, there are $B_k \subset G$ with $\diam(B_k)\leq 2r$ such that $P_k = \pi_Y (B_k)$ for $k=1,\dots,N$.
Define the $\sigma$-algebra
$$\mathcal{P}^L=\sigma\left(\left\{P_\infty, \pi_Y(B_k \cap S): k=1,\dots,N,\ S\in \mathcal{B}_{G/L}\right\}\right).$$
Then $\mathcal{P}^L$ is a refinement of $\mathcal{P}$ such that the atoms of $\mathcal{P}^L$ not contained in $P_\infty$ are open $L$-plaques,
i.e., for any $y\notin P_\infty$, $[y]_{\mathcal{P}^L}=[y]_\mathcal{P}\cap B_{2r}^L\cdot y=V_y\cdot y$,
where $V_y\subset B_{2r}^L$ is an open bounded set. For $y\in P_\infty$, the atom is given by $[y]_{\mathcal{P}^L}=P_\infty$.
Since $\mathcal{P}^L$ is countably generated, $\mathcal{A}^L=(\mathcal{P}^L)_0^\infty$ is also countably generated.
By construction, we have $a\mathcal{A}^L=\mathcal{P}_1^\infty\subset \mathcal{A}^L$, which proves the assertion (\ref{algebracstprop1}).
It follows from $P_\infty \subset Y\smallsetminus Y(2r)$ that $[y]_{\mathcal{A}^L}\subset [y]_{\mathcal{P}^L}=V_y\cdot y\subset B_{2r}^L\cdot y$ for $y\in Y(2r)$,
which proves the assertion (\ref{algebracstprop2}).
To prove the assertion (\ref{algebracstprop3}), we recall that there are constants $\alpha>0$ and $\rho>0$ such that for every $r'\in (0,1]$,
the inclusion \eqref{excEq} holds. Choose $\delta_1 = \min(\delta_0,r/\rho)$ which depends only on $r$.
For given $0<\delta<\delta_1$, let $y\in Y\setminus E_\delta$ and assume that $z=hy$ with $h\in B_{\delta}^L$.
Since $B_\delta^{G^+}\cdot y\subset[y]_{\mathcal{P}_0^\infty}$, for any $k\geq 0$ the points $a^{-k}y$ and $a^{-k}z$ belong to
the same element $P_i$ of $\mathcal{P}$. If $i=\infty$, then by the definition of $\mathcal{P}^L$, $a^{-k}y$ and $a^{-k}z$ lie in the same atom of $\mathcal{P}^L$.
If $1\leq i\leq N$, then
\[
a^{-k}y,\; a^{-k}z=a^{-k}ha^{k}\cdot(a^{-k}y)\; \in P_i = \pi_Y (B_i).
\]
It follows from the inclusion \eqref{excEq} with $r'=\delta$ and the choice of $\delta_1$ that $a^{-k}ha^{k} \in B_r^L$.
Thus $a^{-k}y$ and $a^{-k}z$ belong to the same atom of $\mathcal{P}^L$. This proves the assertion (\ref{algebracstprop3}).
Now assume that $\mu_{y_0}^\mathcal{E}$ is an ergodic component with $\mu_{y_0}^{\mathcal{E}}(Y(2r))>0$. Then for $\mu_{y_0}^{\mathcal{E}}$-a.e. $y\in Y$, there exists $k\in \mathbb{N}$ such that $a^{-k}y\in Y(2r)$.
It follows that $[a^{-k}y]_{\mathcal{A}^L}\subset B_{2r}^L\cdot a^{-k}y$, hence using $\mathcal{A}^L \subset a^{-k}\mathcal{A}^L$, we have
$$[y]_{\mathcal{A}^L}= a^k [a^{-k}y]_{a^{-k}\mathcal{A}^L}\subset a^k[a^{-k}y]_{\mathcal{A}^L}\subset a^k B_{2r}^L a^{-k}\cdot y \subset B_R^L \cdot y$$
for some $R>0$. On the other hand, since $\lim_{\delta\to 0}\mu(E_\delta)=0$ by Lemma \ref{Exceptional} and $\on{Supp} \mu_{y_0}^\mathcal{E} \subset \on{Supp} \mu$ for $\mu$-a.e. $y_0\in Y$, we can find $\delta>0$ such that $B_\delta^L\cdot y\subset [y]_{\mathcal{A}^L}$ for $\mu_{y_0}^\mathcal{E}$-a.e. $y\in Y$. Thus, $\mathcal{A}^L$ is $L$-subordinate modulo $\mu_{y_0}^\mathcal{E}$.
\end{proof}
As in \cite[Lemma 3.4]{LSS}, we need to compare the dynamical entropy and the static entropy.
However, under the weaker invariance assumption
on $\mathcal{A}^L$, we cannot apply the Kolmogorov--Sina\u{\i} theorem (Theorem \ref{KST}) to a two-sided generator.
Nevertheless, we obtain a similar result
in Lemma \ref{algexiA} using the following lemma and a modification of the proof of the Kolmogorov--Sina\u{\i} theorem.
\begin{lem}\label{atomfree}
Let $\mu$ be an $a$-invariant probability measure on $Y$ and let $L<G^+$ be a closed subgroup normalized by $a$.
For any $0<r<1$, let $\mathcal{P}$ and $\mathcal{A}^L$ be as in Lemma \ref{partitioncst} and Proposition \ref{algebracst}, respectively.
For $\mu$-almost every ergodic component $\mu_{y_0}^{\mathcal{E}}$ with $\mu_{y_0}^{\mathcal{E}}(Y(2r))>0$, the $\sigma$-algebra
$\mathcal{P}_{-\infty}^{\infty}\vee \mathcal{A}^L$ is the Borel $\sigma$-algebra of $Y$ modulo $\mu_{y_0}^{\mathcal{E}}$.
\end{lem}
\begin{proof}
It is enough to show that for $\mu_{y_0}^{\mathcal{E}}$-a.e. $y\in Y$, $[y]_{\mathcal{P}_{-\infty}^{\infty}\vee \mathcal{A}^L}=\{y\}$.
Let $\mathcal{P}=\{P_1,\dots,P_N,P_\infty\}$.
We claim that for $\mu_{y_0}^{\mathcal{E}}$-a.e. $y\in Y(2r)$, $[y]_{\mathcal{P}_{-\infty}^{\infty}\vee\mathcal{A}^L}=\{y\}$.
If the claim holds, then since $\mu_{y_0}^{\mathcal{E}}$ is ergodic and $\mu_{y_0}^{\mathcal{E}}(Y(2r))>0$, it follows that
for $\mu_{y_0}^{\mathcal{E}}$-a.e. $y\in Y\smallsetminus Y(2r)$, we have $a^{-k}y\in Y(2r)$ for some $k\geq 1$. Since $a\mathcal{A}^L\subset \mathcal{A}^L$, we have
\[
[y]_{\mathcal{P}_{-\infty}^{\infty}\vee\mathcal{A}^L}=a^k\cdot[a^{-k}y]_{a^{-k}(\mathcal{P}_{-\infty}^{\infty}\vee\mathcal{A}^L)}\subset a^k
\cdot[a^{-k}y]_{\mathcal{P}_{-\infty}^{\infty}\vee\mathcal{A}^L}=a^k \cdot\{a^{-k}y\}=\{y\}.
\]
Now let us prove the claim. Assume $y\in Y(2r)$. For all $z\in [y]_{\mathcal{P}_{-\infty}^{\infty}}$, by definition, for any $k \in \mathbb{Z}$,
$a^k y$ and $a^k z$ lie in the same atom of $\mathcal{P}$. By Lemma \ref{partitioncst}, there exist $1\leq i \leq N$ and $z_i \in Y$ such that
$y=hz_i$ and $z=h'z_i$ for some $h,h'\in B_r^G$. Thus $z=gy$ for $g=h'h^{-1} \in B_{2r}^G$.
Since $\mu_{y_0}^{\mathcal{E}}$ is ergodic and $\mu_{y_0}^{\mathcal{E}}(Y(2r))>0$,
we can take a sequence $k_j \in \mathbb{Z}$ with $j\in\mathbb{Z}$ such that
\[
a^{k_j}y\in Y(2r) \quad \text{and}\quad k_j \to \pm \infty\ \text{as}\ j\to \pm\infty.
\]
Since $a^{k_j}y$ and $a^{k_j}z$ lie in the same atom of $\mathcal{P}$, we have $d_Y (a^{k_j}y,a^{k_j}z)=d_Y(a^{k_j}y,a^{k_j}ga^{-k_j}\cdot a^{k_j}y)\leq 2r$, and since $a^{k_j}y\in Y(2r)$, it follows that $d_G(id,a^{k_j}ga^{-k_j})\leq 2r$.
Taking $j\to\pm\infty$, we have $g\in B_{2r}^{G^0}(id)$, where $G^0$ is the centralizer of $a$ in $G$. Thus, we have
$[y]_{\mathcal{P}_{-\infty}^{\infty}}\subset B_{2r}^{G^0}\cdot y$.
On the other hand, for $y\in Y(2r)$, $[y]_{\mathcal{A}^L}\subset B_{2r}^L\cdot y$ by Proposition \ref{algebracst}. Therefore, we have
\[
[y]_{\mathcal{P}_{-\infty}^{\infty}\vee\mathcal{A}^L}\subset B_{2r}^{G^0}\cdot y \cap B_{2r}^{L}\cdot y =\{y\},
\]
which concludes the claim.
\end{proof}
\begin{lem}\label{algexiA}
Let $\mu$ be an $a$-invariant probability measure on $Y$ and let $L<G^+$ be a closed subgroup normalized by $a$.
For any $0<r<1$, let $\mathcal{A}^L$ be as in Proposition \ref{algebracst}.
For $\mu$-almost every ergodic component $\mu_{y_0}^{\mathcal{E}}$ with $\mu_{y_0}^{\mathcal{E}}(Y(2r))>0$, we have
\eq{h_{\mu_{y_0}^{\mathcal{E}}}(a^{-1}|\mathcal{A}^L)\leq H_{\mu_{y_0}^{\mathcal{E}}}(\mathcal{A}^L|a\mathcal{A}^L).}
\end{lem}
\begin{proof}
Let $\mathcal{P}$ be the partition constructed in Lemma \ref{partitioncst}.
For any countable partition $\xi$ with $H_{\mu_{y_0}^{\mathcal{E}}}(\xi)<\infty$, using parts (3), (4), and (6) of Proposition \ref{DEProp}, the continuity of entropy \cite[Proposition 2.14]{ELW}, and Lemma \ref{atomfree}, we have
\[
\begin{split}
h_{\mu_{y_0}^{\mathcal{E}}}(a^{-1},\xi|\mathcal{A}^L)
&\underset{\mathrm{(3)}}{\leq} h_{\mu_{y_0}^{\mathcal{E}}}(a^{-1},\mathcal{P}_{-k}^{k}|\mathcal{A}^L)+H_{\mu_{y_0}^{\mathcal{E}}}(\xi|\mathcal{P}_{-k}^{k}\vee\mathcal{A}^L)\\
&\underset{\mathrm{(4)}}{=}h_{\mu_{y_0}^{\mathcal{E}}}(a^{-1},\mathcal{P}|a^k \mathcal{A}^L)+H_{\mu_{y_0}^{\mathcal{E}}}(\xi|\mathcal{P}_{-k}^{k}\vee\mathcal{A}^L)\\
&\underset{\mathrm{(6)}}{=}H_{\mu_{y_0}^{\mathcal{E}}}(\mathcal{P}|\mathcal{P}_{1}^{\infty}\vee(a^k \mathcal{A}^L)_{\infty})+H_{\mu_{y_0}^{\mathcal{E}}}(\xi|\mathcal{P}_{-k}^{k}\vee\mathcal{A}^L)\\
&=H_{\mu_{y_0}^{\mathcal{E}}}(\mathcal{P}|\mathcal{P}_{1}^{\infty}\vee(\mathcal{A}^L)_{\infty})+H_{\mu_{y_0}^{\mathcal{E}}}(\xi|\mathcal{P}_{-k}^{k}\vee\mathcal{A}^L)\\
&\xrightarrow[k\to\infty]{} H_{\mu_{y_0}^{\mathcal{E}}}(\mathcal{P}|\mathcal{P}_{1}^{\infty}\vee(\mathcal{A}^L)_{\infty})+H_{\mu_{y_0}^{\mathcal{E}}}(\xi|\mathcal{P}_{-\infty}^{\infty}\vee\mathcal{A}^L)\\
&\underset{\mathrm{Lemma}\ \ref{atomfree}}{=}H_{\mu_{y_0}^{\mathcal{E}}}(\mathcal{P}|\mathcal{P}_{1}^{\infty}\vee (\mathcal{A}^L)_{\infty})
\leq H_{\mu_{y_0}^{\mathcal{E}}}(\mathcal{P}|\mathcal{P}_{1}^{\infty}\vee \mathcal{A}^L)\\
&=H_{\mu_{y_0}^{\mathcal{E}}}(\mathcal{P}_{0}^{\infty}\vee\mathcal{A}^L|\mathcal{P}_{1}^{\infty}\vee\mathcal{A}^L)\leq H_{\mu_{y_0}^{\mathcal{E}}}(\mathcal{A}^L|a\mathcal{A}^L),
\end{split}
\]
where $(\mathcal{A}^L)_{\infty} = \bigvee_{k=0}^{\infty}a^{-k}\mathcal{A}^L=\lim_{k\to\infty}a^{-k}\mathcal{A}^L$.
\end{proof}
The quantity $H_\mu(\mathcal{A}^L|a\mathcal{A}^L)$ is called the \textit{empirical entropy}; it is the average of the \textit{conditional information function}
$$I_\mu(\mathcal{A}^L|a\mathcal{A}^L)(x)=-\log \mu_x^{a\mathcal{A}^L}([x]_{\mathcal{A}^L}),$$
and is indeed the \textit{entropy contribution} of $L$ (see \cite[7.8]{EL} for the definition). Combining \cite[Proposition 7.34]{EL} and \cite[Theorem 7.9]{EL}, we have the following upper bound for $H_\mu(\mathcal{A}^L|a\mathcal{A}^L)$.
\begin{thm}[\cite{EL}]\label{thmEL}
Let $L<G^{+}$ be a closed subgroup normalized by $a$, and let $\mathfrak{l}$ denote the Lie algebra of $L$. Let $\mu$ be an $a$-invariant ergodic probability measure on $Y$. If $\mathcal{A}$ is a countably generated sub-$\sigma$-algebra of the Borel $\sigma$-algebra which is $a^{-1}$-descending and $L$-subordinate, then $$H_\mu(\mathcal{A}|a\mathcal{A})\leq \log|\det(\operatorname{Ad}_a|_\mathfrak{l})|$$
and equality holds if and only if $\mu$ is $L$-invariant.
\end{thm}
\subsection{Effective variational principle}\label{sec2.4}
This subsection effectivizes the variational principle \cite[Theorem 7.9]{EL}.
Let $L<G^+$ be a closed subgroup normalized by $a$, let $m_L$ be the Haar measure on $L$, and let $\mu$ be an $a$-invariant probability measure on $Y$. Let $\mathcal{A}$ be a countably generated sub-$\sigma$-algebra of the Borel $\sigma$-algebra which is $a^{-1}$-descending and $L$-subordinate modulo $\mu$. Note that for any $j\in\mathbb{Z}_{\geq 0}$, the sub-$\sigma$-algebra $a^j \mathcal{A}$ is also countably generated, $a^{-1}$-descending, and $L$-subordinate modulo $\mu$.
For $y\in Y$, denote by $V_y \subset L$ the shape of the $\mathcal{A}$-atom at $y\in Y$ so that $V_y \cdot y=[y]_{\mathcal{A}}$. It has positive $m_L$-measure for $\mu$-a.e. $y\in Y$ since $\mathcal{A}$ is $L$-subordinate modulo $\mu$.
Note that for any $j\in\mathbb{Z}_{\geq 0}$, we have $[y]_{a^{j}\mathcal{A}}=a^{j}V_{a^{-j}y}a^{-j}\cdot y$.
As in \cite[7.55]{EL}, which is the proof of \cite[Theorem 7.9]{EL}, let us define $\tau_{y}^{a^j \mathcal{A}}$ for $\mu$-a.e. $y\in Y$ to be the normalized pushforward of $m_L|_{a^j V_{a^{-j}y}a^{-j}}$ under the orbit map, i.e.,
\[
\tau_{y}^{a^j \mathcal{A}}=\frac{1}{m_L (a^j V_{a^{-j}y}a^{-j})}m_L|_{a^j V_{a^{-j}y}a^{-j}}\cdot y,
\]
which is a probability measure on $[y]_{a^j\mathcal{A}}$.
The following proposition is an effective version of \cite[Theorem 7.9 and 7.55]{EL}.
\begin{prop}\label{effEL}
Let $L<G^+$ be a closed subgroup normalized by $a$ and $\mu$ be an $a$-invariant ergodic probability measure on $Y$.
Fix $j\in\mathbb{N}$ and denote by $J\ge 0$ the maximal entropy contribution of $L$ for $a^j$, that is,
\[ J= \log|\det(\operatorname{Ad}_{a^j}|_\mathfrak{l})|.\]
Let $\mathcal{A}$ be a countably generated sub-$\sigma$-algebra of the Borel $\sigma$-algebra which is $a^{-1}$-descending and $L$-subordinate.
Suppose that there exist a measurable set $K\subset Y$ and $r>0$ such that $[y]_{\mathcal{A}}\subset B^{L,\mb{c}}_r\cdot y$ for any $y\in K$, where $B^{L,\mb{c}}_r$ is as in Subsection \ref{sec2.2}. Then we have
$$H_\mu(\mathcal{A}|a^{j}\mathcal{A})\leq J+\int_Y \log\tau_y^{a^{j}\mathcal{A}}((Y\setminus K)\cup B^{L,\mb{c}}_{r}\on{Supp}\mu)d\mu(y).$$
\end{prop}
\begin{proof}
By \cite[Theorem 5.9]{EL}, for instance, for $\mu$-a.e. $y\in Y$, $\mu_y^{a^{j}\mathcal{A}}$ is a probability measure on $[y]_{a^{j}\mathcal{A}}=a^{j}V_{a^{-j}y}a^{-j}\cdot y$, and $H_\mu(\mathcal{A}|a^{j}\mathcal{A})$ can be written as
\eq{H_\mu(\mathcal{A}|a^{j}\mathcal{A})=-\int_Y\log \mu_y^{a^{j}\mathcal{A}}([y]_{\mathcal{A}})d\mu(y).}
Note that $m_L(a^jBa^{-j})=e^{J}m_L(B)$ for any measurable $B\subset L$. Let $$p(y)\overset{\on{def}}{=} \mu_y^{a^{j}\mathcal{A}}([y]_{\mathcal{A}}) \quad\text{and}\quad p^{Haar}(y)\overset{\on{def}}{=} \tau_y^{a^{j}\mathcal{A}}([y]_{\mathcal{A}}).$$ Then we have
$$p^{Haar}(y)=\frac{m_L(V_y)}{m_L(a^jV_{a^{-j}y} a^{-j})}=\frac{m_L(V_y)}{m_L(V_{a^{-j}y})}e^{-J},$$
hence, applying the ergodic theorem, we have $-\int_Y\log p^{Haar}(y)d\mu(y)=J$.
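When $\log m_L(V_y)$ is $\mu$-integrable, this is immediate from the $a$-invariance of $\mu$:
\[
-\int_Y\log p^{Haar}(y)\,d\mu(y)=J+\int_Y\log m_L(V_{a^{-j}y})\,d\mu(y)-\int_Y\log m_L(V_y)\,d\mu(y)=J,
\]
since the last two integrals coincide; in general, one applies the ergodic theorem to the Birkhoff averages of $\log\frac{m_L(V_y)}{m_L(V_{a^{-j}y})}$.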
Now we estimate $H_\mu(\mathcal{A}|a^{j}\mathcal{A})-J$ from above, following the computation in \cite[7.55]{EL}.
As there, we can partition $[y]_{a^{j}\mathcal{A}}$ into a countable union of $\mathcal{A}$-atoms:
$$[y]_{a^{j}\mathcal{A}}=\bigcup_{i=1}^{\infty}[x_i]_\mathcal{A}\cup N_y,$$
where $N_y$ is a null set with respect to $\mu_y^{a^{j}\mathcal{A}}$.
Note that $\mu_y^{a^{j}\mathcal{A}}$ is supported on $\on{Supp}\mu$ for $\mu$-a.e. $y$. Write $Z=(Y\setminus K)\cup B^{L,\mb{c}}_{r}\on{Supp}\mu$. If $x_i\in Y\setminus Z= K\setminus B^{L,\mb{c}}_{r}\on{Supp}\mu$, then $\mu^{a^{j}\mathcal{A}}_y([x_i]_\mathcal{A})=0$, since $[x_i]_{\mathcal{A}}\subset B^{L,\mb{c}}_{r}\cdot x_i$ and the latter set is disjoint from $\on{Supp}\mu$.
Thus we have
\[
\begin{split}
H_{\mu}(\mathcal{A}|a^j \mathcal{A})-J &= -\int_Y \left(\log p(z) - \log p^{Haar}(z) \right)d\mu(z)\\
&= \int_Y \int_Y \left(\log p^{Haar}(z) - \log p(z) \right)d\mu_y^{a^j \mathcal{A}}(z)d\mu(y)\\
&= \int_Y \sum_{x_i \in Z} \int_{z\in [x_i]_\mathcal{A}} \left(\log p^{Haar}(z) - \log p(z) \right)d\mu_y^{a^j \mathcal{A}}(z)d\mu(y)\\
&= \int_Y \sum_{x_i \in Z} \log\left(\frac{\tau^{a^{j}\mathcal{A}}_y([x_i]_\mathcal{A})}{\mu^{a^{j}\mathcal{A}}_y([x_i]_\mathcal{A})}\right)\mu^{a^{j}\mathcal{A}}_y([x_i]_\mathcal{A})d\mu(y)\\
&\leq \int_Y \log\left(\sum_{x_i\in Z}\tau_y^{a^{j}\mathcal{A}}([x_i]_{\mathcal{A}})\right) d\mu(y)\\
&\leq \int_Y \log \tau_y^{a^{j}\mathcal{A}}(Z) d\mu(y).
\end{split}
\]
The first inequality follows from the concavity of the logarithm (Jensen's inequality). This proves the proposition.
\end{proof}
\section{Preliminaries for the upper bound}\label{sec3}
From now on, we fix the following notation:
$$d=m+n,\; G=\operatorname{ASL}_d(\mathbb{R}),\; \Gamma=\operatorname{ASL}_d(\mathbb{Z}),\; \text{and}\; Y=G/\Gamma.$$
We choose a right invariant metric on $G$ so that $\|g-id\| \leq d_G(g,id)$ for $g$ in a sufficiently small ball $B_r^G(id)$,
where $\|\cdot\|$ is the supremum norm on $M_{d+1,d+1}(\mathbb{R})$.
We also use all notations in Subsection \ref{sec2.2} with this setting.
Recall the one-parameter diagonal subgroup $a_t$, with $a=a_1$, and the subgroups $U$ and $W$ from the introduction.
Note that $G^+$ is the unstable horospherical subgroup associated to $a$.
The subgroups $U$ and $W$ are closed subgroups of $G^+$ normalized by $a$.
Denote by $\mathfrak{u}$ and $\mathfrak{w}$ the Lie algebras of $U$ and $W$, respectively.
We can take standard bases for $\mathfrak{u}$ and $\mathfrak{w}$ so that
$\mathfrak{u}=\mathbb{R}^{mn}=M_{m,n}(\mathbb{R})$ and $\mathfrak{w}= \mathbb{R}^{m}$
with the associated quasinorms given by
$$\|A\|_{\mb{r}\otimes\mb{s}}=\max_{\substack{1\leq i\leq m\\1\leq j\leq n}}
|A_{ij}|^{\frac{1}{r_i +s_j}}\quad \text{and}\quad \|b\|_{\mb{r}}=\max_{1\leq i\leq m}
|b_i|^{\frac{1}{r_i}},$$ respectively, for any $A\in M_{m,n}(\mathbb{R})$ and $b\in\mathbb{R}^m$.
We call these quasinorms $\mb{r}\otimes\mb{s}$-quasinorm and $\mb{r}$-quasinorm, respectively.
These quasinorms also satisfy
$$\|\operatorname{Ad}_{a_t} A\|_{\mb{r}\otimes\mb{s}}=e^t\|A\|_{\mb{r}\otimes\mb{s}}\quad\text{and}
\quad \|\operatorname{Ad}_{a_t} b\|_{\mb{r}}=e^t\|b\|_{\mb{r}},$$
for any $A\in M_{m,n}(\mathbb{R})$ and $b\in \mathbb{R}^m$.
These quasinorms induce the quasi-metrics $d_{\mb{r}\otimes\mb{s}}$ and $d_{\mb{r}}$ on
$\mathfrak{u}$ and $\mathfrak{w}$, respectively. For simplicity, we keep the notations
$d_{\mb{r}\otimes\mb{s}}$ and $d_{\mb{r}}$ as locally defined quasi-metrics on $U$ and $W$, respectively.
As in Theorem \ref{thmEL}, we can explicitly compute the maximal entropy contributions for $L=U$ and $L=W$. For $L=U$, the restricted adjoint map is the expansion $\operatorname{Ad}_a:(A_{ij})\mapsto (e^{r_i+s_j}A_{ij})$ of $A\in M_{m,n}(\mathbb{R})$, hence
\eq{\log|\det(\operatorname{Ad}_a|_\mathfrak{u})|=\displaystyle\sum_{i,j}(r_i+s_j)=m+n.} For $L=W$, the restricted adjoint map is the expansion $\operatorname{Ad}_a:(b_i)\mapsto (e^{r_i}b_{i})$ of $b\in \mathbb{R}^m$, hence
\eq{\log|\det(\operatorname{Ad}_a|_\mathfrak{w})|=\displaystyle\sum_{i}r_i=1.}
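Here we used that $\sum_{i}r_i=\sum_{j}s_j=1$; for the double sum,
\[
\sum_{i=1}^{m}\sum_{j=1}^{n}(r_i+s_j)=n\sum_{i=1}^{m}r_i+m\sum_{j=1}^{n}s_j=n+m.
\]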
Denote by $X=\operatorname{SL}_d(\mathbb{R})/\operatorname{SL}_d(\mathbb{Z})$ and by $\pi:Y\to X$ the natural projection
sending a translated lattice $x +v$ to the lattice $x$.
More precisely, it is defined by $\pi\left(
\left(\begin{matrix}
g & v\\
0 & 1\\
\end{matrix}\right)\Gamma\right)
=g \operatorname{SL}_d(\mathbb{Z})$ for $g\in \operatorname{SL}_d(\mathbb{R})$ and $v\in \mathbb{R}^d.$
We also use the following notation:
$ w(v)=\left(\begin{matrix}
I_d & v\\
0 & 1\\
\end{matrix}\right)$ for $v\in \mathbb{R}^d$.
\subsection{Dimensions}\label{sec3.1}
Let $Z$ be a space endowed with a \emph{quasi-metric} $d_Z$, that is, a symmetric, positive definite map
$d_Z: Z \times Z \to \mathbb R_{\geq0}$ such that, for some constant $C$ and all $x,y,z\in Z$,
$d_Z (x,y)\leq C(d_Z (x,z)+d_Z(z,y))$. For a bounded subset $S\subset Z$, the lower Minkowski dimension $\underline{\dim}_{d_Z} S$ with respect to the quasi-metric $d_Z$ is defined by
\[
\underline{\dim}_{d_Z} S \overset{\on{def}}{=} \liminf_{\delta\to 0} \frac{\log N_{d_Z}(S,\delta)}{\log1/\delta},\]
where $N_{d_Z}(S,\delta)$ is the maximal cardinality of a $\delta$-separated subset of $S$ for $d_Z$.
If $S$ is unbounded, we let $\underline{\dim}_{d_Z} S = \sup\{\underline{\dim}_{d_Z} (S\cap K)\ :\ K\subset Z\ \mbox{compact}\}$.
At the beginning of this section, we endowed the Lie algebras $\mathfrak{u}$ and $\mathfrak{w}$ with the $\mb{r}\otimes\mb{s}$-quasinorm and the $\mb{r}$-quasinorm, which induce the quasi-metrics
$d_{\mb{r}\otimes\mb{s}}$ and $d_{\mb{r}}$ on $\mathfrak{u}$ and $\mathfrak{w}$, respectively.
Now, for subsets $S \subset \mathfrak{u}=\mathbb{R}^{mn}$ and $S' \subset \mathfrak{w}=\mathbb{R}^m$, we denote the lower Minkowski dimensions of these subsets as follows:
$$\underline{\dim}_{\mb{r}\otimes\mb{s}} S \overset{\on{def}}{=} \underline{\dim}_{d_{\mb{r}\otimes\mb{s}}} S, \qquad \underline{\dim}_M S \overset{\on{def}}{=} \underline{\dim}_{d_E}S,$$
$$\underline{\dim}_{\mb{r}} S' \overset{\on{def}}{=} \underline{\dim}_{d_{\mb{r}}} S', \qquad \underline{\dim}_M S' \overset{\on{def}}{=} \underline{\dim}_{d_E}S'$$
where $d_E$ is the standard metric.
We will also consider Hausdorff dimensions $\dim_H S$ and $\dim_{H} S'$, always defined with respect to the standard metric.
We refer the reader to \cite{falconer} for general properties of Minkowski or Hausdorff dimensions, such as the inequality
\[ \underline{\dim}_M S\geq \dim_HS.\]
Following \cite{LSS}, we will relate the dimension $\underline{\dim}_M$ to entropy, and then to Hausdorff dimension, using $\underline{\dim}_{\mb{r}\otimes\mb{s}}$ and $\underline{\dim}_{\mb{r}}$ via the following lemma.
\begin{lem}\cite[Lemma 2.2]{LSS}\label{relating dimensions}
For subsets $S \subset \mathfrak{u}$ and $S' \subset \mathfrak{w}$,
\begin{enumerate}
\item $\underline{\dim}_{\mb{r}\otimes\mb{s}} \mathfrak{u} = \sum_{i,j}(r_{i}+s_{j}) = m+n$ and $\underline{\dim}_{\mb{r}} \mathfrak{w} = \sum_{i}r_{i}=1$,
\item $\underline{\dim}_{\mb{r}\otimes\mb{s}} S \geq (m+n) - (r_1 +s_1)(mn - \underline{\dim}_M S)$,
\item $\underline{\dim}_{\mb{r}} S' \geq 1 - r_1 (m-\underline{\dim}_M S' )$.
\end{enumerate}
\end{lem}
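As a consistency check, taking $S=\mathfrak{u}$ in (2), so that $\underline{\dim}_M S=mn$, recovers the value in (1):
\[
\underline{\dim}_{\mb{r}\otimes\mb{s}}\,\mathfrak{u}\geq (m+n)-(r_1+s_1)(mn-mn)=m+n,
\]
and similarly taking $S'=\mathfrak{w}$ in (3) gives $\underline{\dim}_{\mb{r}}\,\mathfrak{w}\geq 1-r_1(m-m)=1$.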
\subsection{Correspondence with dynamics}\label{sec3.2}
For $y=\left(\begin{matrix} g & v \\ 0 & 1 \\ \end{matrix}\right)\Gamma \in Y$ with $g\in \operatorname{SL}_d(\mathbb{R})$ and
$v\in \mathbb{R}^d$, denote by $\Lambda_y$ the corresponding unimodular grid $g\mathbb{Z}^d + v$ in $\mathbb{R}^d$.
We denote the $(\mathbf{r},\mathbf{s})$-quasinorm of $v=(\mathbf{x},\mathbf{y})\in \mathbb{R}^m\times\mathbb{R}^n$ by
$\|v\|_{\mathbf{r},\mathbf{s}}=\max\{\|\mathbf{x}\|_{\mathbf{r}}^{\frac{d}{m}},\|\mathbf{y}\|_{\mathbf{s}}^{\frac{d}{n}}\}$.
Let
$$\mathcal{L}_\epsilon\overset{\on{def}}{=}\set{y\in Y : \forall v\in\Lambda_y, \|v\|_{\mathbf{r},\mathbf{s}}\geq\epsilon},$$
which is a (non-compact) closed subset of $Y$.
Following \cite[Section 1.3]{Kle99}, we say that the pair $(A,b)\in M_{m,n}(\mathbb{R})\times \mathbb{R}^m$ is
\textit{rational} if there exists some $(p,q) \in \mathbb{Z}^m\times \mathbb{Z}^n$ such that
$Aq-b+p=0$, and \textit{irrational} otherwise.
\begin{prop}\label{prop1}
For any irrational pair $(A,b)\in M_{m,n}(\mathbb{R})\times \mathbb{R}^m$,
$(A,b)\in \mb{Bad}(\epsilon)$ if and only if the $a_t$-orbit of the point $y_{A,b}$ is eventually in $\mathcal{L}_\epsilon$,
i.e., there exists $T\ge 0$ such that $a_t y_{A,b}\in \mathcal{L}_\epsilon$ for all $t\ge T$.
\end{prop}
\begin{proof}
Suppose that there exist arbitrarily large $t$'s satisfying $a_t y_{A,b}\notin \mathcal{L}_\epsilon$. Denote
$e^{\mathbf{r} t}:= \textrm{diag}(e^{r_1t},\cdots,e^{r_mt})\in M_{m,m}(\mathbb{R})$ and
$e^{\mathbf{s}t}:=\textrm{diag}(e^{s_1t},\cdots,e^{s_nt})\in M_{n,n}(\mathbb{R}).$
Then the vectors in the grid $\Lambda_{a_t y_{A,b}}$ can be represented as
$$a_t \left(\left(\begin{matrix} I_m & A \\ 0 & I_n \\ \end{matrix}\right)
\left(\begin{matrix} p \\ q \\ \end{matrix}\right)
+\left(\begin{matrix} -b \\ 0 \\ \end{matrix}\right)\right)
=\left(\begin{matrix} e^{\mathbf{r} t}(Aq+p-b)\\ e^{-\mathbf{s} t}q\\ \end{matrix}\right)$$
for $(p,q)\in\mathbb{Z}^m\times\mathbb{Z}^n$.
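For later use, note that the quasinorms scale along the flow:
\[
\|e^{\mathbf{r}t}\mathbf{x}\|_{\mathbf{r}}=e^{t}\|\mathbf{x}\|_{\mathbf{r}}
\quad\text{and}\quad
\|e^{-\mathbf{s}t}q\|_{\mathbf{s}}=e^{-t}\|q\|_{\mathbf{s}},
\]
so, by the definition of $\|\cdot\|_{\mathbf{r},\mathbf{s}}$, the condition $\|v\|_{\mathbf{r},\mathbf{s}}<\epsilon$ for the grid vector $v$ above is equivalent to $e^{t}\|Aq+p-b\|_{\mathbf{r}}<\epsilon^{\frac{m}{d}}$ together with $e^{-t}\|q\|_{\mathbf{s}}<\epsilon^{\frac{n}{d}}$.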
Therefore $a_t y_{A,b}\notin \mathcal{L}_\epsilon$ implies that for some $q\in \mathbb{Z}^n$,
\eqlabel{eqDani}{e^{t}\idist{Aq-b}_{\mathbf{r}}<\epsilon^{\frac{m}{d}} \quad
\text{and}\quad e^{-t}\|q\|_{\mathbf{s}}<\epsilon^{\frac{n}{d}},}
thus $\|q\|_{\mathbf{s}}\idist{Aq-b}_{\mathbf{r}}<\epsilon$. Since $\idist{Aq-b}_\mb{r}\neq0$ for all $q$, the condition $\idist{Aq-b}_{\mathbf{r}}<e^{-t}\epsilon^{\frac{m}{d}}$ for arbitrarily large $t$ forces $\|q\|_{\mathbf{s}}\idist{Aq-b}_{\mathbf{r}}<\epsilon$ to hold for infinitely many $q$'s. This contradicts the assumption that $(A,b)\in \mb{Bad}(\epsilon)$.
On the other hand, if $(A,b)\notin \mb{Bad}(\epsilon)$, then since $(A,b)$ is irrational, there are infinitely many $q\in\mathbb{Z}^n$ such that $\|q\|_{\mathbf{s}}\idist{Aq-b}_{\mathbf{r}}<\epsilon$. Thus we can choose arbitrarily large $t$ so that \eqref{eqDani} holds, which contradicts the assumption that the $a_t$-orbit of the point $y_{A,b}$ is eventually in $\mathcal{L}_\epsilon$.
\end{proof}
We claim that for a fixed $b\in \mathbb{R}^m$, the subset $\mb{Bad}_{0}^b(\epsilon)$ of $\mb{Bad}^b(\epsilon)$ such that
$(A,b)$ is rational is a subset of $\mb{Bad}^0(\epsilon).$ Indeed, if $A\in \mb{Bad}^b(\epsilon)$ for some $b$ and
$(A,b)$ is rational, then $\idist{Aq_0-b}_{\mathbf{r}}=0$ for some $q_0\in\mathbb{Z}^n$ and
$\displaystyle\liminf_{\|q\|_{\mb{s}}\to \infty} \|q\|_{\mb{s}}\idist{Aq-b}_{\mathbf{r}}\ge \epsilon$,
thus $\displaystyle\liminf_{\|q\|_{\mb{s}}\to \infty} \|q\|_{\mathbf{s}}\idist{A(q-q_0)}_{\mathbf{r}}\ge \epsilon$. Therefore, we have
$$\dim_H\mb{Bad}_{0}^{b}(\epsilon)\leq\dim_H\mb{Bad}^0(\epsilon)
=mn-c_{m,n}\frac{\epsilon}{\log 1/\epsilon}<mn$$
for some constant $c_{m,n}>0$ \cite{KM19}.
For a fixed $A\in M_{m,n}(\mathbb{R})$, the subset of $\mb{Bad}_A(\epsilon)$ for which $(A,b)$ is rational consists of points of the form $Aq+p$ with $q\in\mathbb{Z}^n$ and $p\in\mathbb{Z}^m$, and thus has Hausdorff dimension zero.
In the rest of the article, we will focus on the elements $y_{A,b}$ that are eventually in $\mathcal{L}_\epsilon$.
\subsection{Covering counting lemma}\label{sec3.3}
To construct measures of large entropy in Proposition \ref{prop5} and Proposition \ref{prop2}, we will need
the following counting lemma, which is a generalization of \cite[Lemma 2.4]{LSS}.
Here, we consider two cases: $L=U$ and $L=W$. Fix a standard basis $\{e_i: i=1,\dots, \dim \mathfrak{l}\}$ of $\mathfrak{l}$.
For simplicity, denote by $\|\cdot\|_{\mb{c}}$ either of the quasinorms $\|\cdot\|_{\mb{r}\otimes\mb{s}}$ and
$\|\cdot\|_{\mb{r}}$.
Let $J_L$ be the maximal entropy contribution for $L$, that is, $J_L=\sum_i c_i$.
Recall that $J_U=m+n$ and $J_W=1$.
For $y\in Y$, let $r>0$ be smaller than the injectivity radius at $y$. We will use the same notations $d_L$ and $d_{L,\mb{c}}$ for the metrics on
$B_r^L \cdot y$ induced from $d_L$ and $d_{L,\mb{c}}$ on $L$, respectively.
\begin{lem}\label{CovLem}
Let $Q_\infty^0\subset X$ be such that $X\smallsetminus Q_\infty^0$ has compact closure. Set $Q_{\infty} = \pi^{-1}(Q_\infty^0)$
and fix $0<r<1/2$ such that any $d_{L,\mb{c}}$-ball of radius $3r$ has Euclidean diameter smaller than
the injectivity radius on $Y\smallsetminus Q_\infty$.
Let $y\in Y\smallsetminus Q_\infty$ and set $I=\{t\in\mathbb{Z}^+\ |\ a_ty\in Q_\infty\}$.
For any non-negative integer $T$, let
\[ E_{y,T} = \{ z\in B_{r}^{L}\cdot y \ |\
\forall t\in\{1,\dots,T\}\smallsetminus I,\, d_Y(a_ty,a_tz)\leq r\}.\]
For any $D>J_L$, the set $E_{y,T}$ can be covered by $Ce^{D|I\cap\{1,\dots,T\}|}$ $d_{L,\mb{c}}$-balls of radius
$r^{\frac{1}{\max\mb{c}}}e^{-T}$,
where $C$ is a constant depending on $Q_\infty^0$, $r$ and $D$, but independent of $T$.
\end{lem}
\begin{proof}
For given $D>J_L$, choose large enough $T_{D}\in\mathbb{N}$ so that
\eqlabel{eqct}{
\lceil e^{c_{i}T_{D}} \rceil \leq e^{c_{i}T_{D}}e^{\frac{D-J_L}{\dim \mathfrak{l}}}
} for all $i=1,\dots,\dim\mathfrak{l}$.
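Such a $T_D$ exists since each $c_i>0$ and $D>J_L$: using $\lceil x\rceil\leq x+1$,
\[
\frac{\lceil e^{c_i T}\rceil}{e^{c_i T}}\leq 1+e^{-c_i T}\xrightarrow[T\to\infty]{}1<e^{\frac{D-J_L}{\dim\mathfrak{l}}},
\]
so \eqref{eqct} holds for all sufficiently large $T$.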
For $s\in\{0,\dots, T_{D}-1\}$ and $k\in\mathbb{Z}_{\geq 0}$, let $I_{s,k}(T_{D})= \{s,s+T_{D},\dots,s+kT_{D}\}$ and
\[
E_{y,k}^{s}:= \{z\in B_{r}^{L}\cdot y : \forall t \in I_{s,k}(T_{D}) \smallsetminus I, d_{Y}(a_{t}y,a_{t}z)\leq r \}.
\]
Following the proof of \cite[Lemma 2.4]{LSS} with $E_{y,k}^{s}$ instead of $E_{y,T}$, we claim the following:\vspace{0.3cm}\\
\textbf{Claim}\; The set $E_{y,k}^{s}$ can be covered by $C_{s}e^{(J_L(T_{D}-1)+D)|I\cap I_{s,k}(T_{D})|}$ $d_{L,\mb{c}}$-balls of radius
$r^{\frac{1}{\max\mb{c}}}e^{-(s+kT_{D})}$, where $C_{s}$ is a constant depending on $Q_\infty^0$, $r$, $D$ and $s$, but independent of $k$.
\begin{proof}
We prove the claim by induction on $k$. Since the number of $d_{L,\mb{c}}$-balls of radius $r^{\frac{1}{\max\mb{c}}} e^{-s}$
needed to cover $B_r^L \cdot y$ is bounded by an integer constant $C_s$ depending only on $r$ and $s$, the claim holds for $k=0$.
Suppose that $E_{y,k-1}^s$ can be covered by $N_{k-1}=C_s e^{(J_L(T_{D}-1)+D)|I\cap I_{s,k-1}(T_{D})|}$ $d_{L,\mb{c}}$-balls of radius
$r^{\frac{1}{\max\mb{c}}}e^{-(s+(k-1)T_{D})}$. By the inequality \eqref{eqct}, any $d_{L,\mb{c}}$-ball of radius
$r^{\frac{1}{\max\mb{c}}}e^{-(s+(k-1)T_D)}$ can be covered by
\[
\begin{split}
\prod_{i=1}^{\dim\mathfrak{l}} \left\lceil \frac{e^{-(s+(k-1)T_D) c_i }}{e^{-(s+kT_D) c_i }} \right\rceil
&= \prod_{i=1}^{\dim\mathfrak{l}}\lceil e^{T_D c_i} \rceil
\leq \prod_{i=1}^{\dim\mathfrak{l}} e^{c_{i}T_{D}}e^{\frac{D-J_L}{\dim (\mathfrak{l})}}\\
&= e^{J_L T_D}e^{D-J_L}= e^{J_L(T_D-1)+D},
\end{split}
\] $d_{L,\mb{c}}$-balls of radius $r^{\frac{1}{\max\mb{c}}}e^{-(s+kT_D)}$. Thus if $s+kT_D \in I$, then $E_{y,k}^s$ can be covered by
$N_k=e^{J_L(T_D-1)+D}N_{k-1}$ $d_{L,\mb{c}}$-balls of radius $r^{\frac{1}{\max\mb{c}}}e^{-(s+kT_D)}$.
Suppose that $s+kT_D \notin I$. Denote the above covering of $E_{y,k-1}^s$ by $\{B_j:j=1,\dots, N_{k-1}\}$.
Since $E_{y,k}^s \subset E_{y,k-1}^s$, the set $\{E_{y,k}^s \cap B_j : j=1,\dots, N_{k-1}\}$ covers $E_{y,k}^s$.
We now claim that
$$d_{L,\mb{c}}(x_1,x_2)\leq (2r)^{\frac{1}{\max\mb{c}}}e^{-(s+kT_D)}\; \text{for any}\; x_1,x_2\in E_{y,k}^s \cap B_j.$$
Indeed, since $s+kT_D \notin I$, we have $d_Y(a^{s+kT_D} y, a^{s+kT_D} x_\ell)\leq r $ for each $\ell=1,2$. Thus
$d_Y(a^{s+kT_D} x_1, a^{s+kT_D} x_2)\leq 2r$. Since $x_1,x_2 \in B_r^L \cdot y$, there are $h_1,h_2\in B_r^L$ such that $x_1=h_1 y$ and $x_2=h_2 y$.
By our choice of the right invariant metric $d_G$, we have
\[
\begin{split}
&d_Y (a^{s+kT_D}x_1,a^{s+kT_D}x_2)=
d_Y(a^{s+kT_D}h_1y,a^{s+kT_D}h_2y)\\
&= d_L(a^{s+kT_D}h_1h_2^{-1}a^{-(s+kT_D)},id)
\geq \max_{i=1,\dots,\dim\mathfrak{l}} e^{c_i(s+kT_D)} |(\log h_1h_2^{-1})_i|,
\end{split}
\]
where $(\log h_1h_2^{-1})_i$ is the $i$-th coordinate of $\log h_1h_2^{-1}$ with respect to the standard basis $\{e_i: 1\leq i\leq \dim\mathfrak{l}\}$.
Thus, for each $i=1,\dots,\dim\mathfrak{l}$,
\[
|(\log h_1h_2^{-1})_i|=|(\log h_1 - \log h_2)_i|\leq 2re^{-c_i(s+kT_D)}.
\]
Note that
\[
d_{L,\mb{c}}(x_1,x_2)=d_{L,\mb{c}}(h_1,h_2) =\max_{i=1,\dots,\dim\mathfrak{l}} |(\log h_1 - \log h_2)_i|^{\frac{1}{c_i}}.
\]
Therefore, we have
\[
d_{L,\mb{c}}(x_1,x_2) \leq \max_{i=1,\dots,\dim\mathfrak{l}} (2r)^{\frac{1}{c_i}}e^{-(s+kT_D)}\leq (2r)^{\frac{1}{\max\mb{c}}}e^{-(s+kT_D)}.
\]
By the claim, $E_{y,k}^s \cap B_j$ is contained in a single $d_{L,\mb{c}}$-ball of radius $r^{\frac{1}{\max\mb{c}}} e^{-(s+kT_D)}$
for each $j=1,\dots,N_{k-1}$.
Hence $E_{y,k}^s$ can be covered by $N_k=N_{k-1}$ $d_{L,\mb{c}}$-balls of radius $r^{\frac{1}{\max\mb{c}}} e^{-(s+kT_D)}$.
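For later use, the two cases combine into a single bound: each step multiplies the number of covering balls by $e^{J_L(T_D-1)+D}$ exactly when $s+kT_D\in I$, and leaves it unchanged otherwise, so iterating (a routine unrolling of the recursion, with $N_0$ denoting the number of balls needed at step $k=0$) gives
\[
N_k \leq N_0\, e^{(J_L(T_D-1)+D)\,|\set{1\leq \ell\leq k \,:\, s+\ell T_D\in I}|}.
\]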
\end{proof}
Now, for any non-negative integer $T$, by the pigeonhole principle we can find $s\in\{0,\dots,T_{D}-1\}$ and $k\in\mathbb{Z}_{\geq 0}$ such that
\eq{
T_{D}|I\cap I_{s,k}(T_{D})| \leq |I\cap \{1,\dots,T\}|\quad\text{and}\quad T-T_{D}<s+kT_{D}\leq T.
} By the above observation, $E_{y,T}\subset E_{y,k}^{s}$ can be covered by
$C_{s}e^{(J_L(T_{D}-1)+D)|I\cap I_{s,k}(T_{D})|}$ $d_{L,\mb{c}}$-balls of radius $r^{\frac{1}{\max\mb{c}}} e^{-(s+kT_D)}$.
Finally, since $T-T_{D}<s+kT_{D}\leq T$ and $D>J_L$, we can cover $E_{y,T}$ by
$Ce^{D|I\cap \{1,\dots,T\}|}$ $d_{L,\mb{c}}$-balls of radius $r^{\frac{1}{\max\mb{c}}} e^{-T}$, where $C$ is a constant depending on
$Q_\infty^0$, $r$, and $D$, but independent of $T$.
\end{proof}
\section{Upper bound for Hausdorff dimension of $\mb{Bad}_{A}(\epsilon)$}\label{sec:entropyboundA}
\subsection{Constructing measure with entropy lower bound}\label{sec4.1}
Let us denote by $\ov{X}$ and $\ov{Y}$ the one-point compactifications of $X$ and $Y$, respectively.
Let $\mathcal{A}$ be a given countably generated $\sigma$-algebra of $X$ or $Y$. We denote by $\overline{\mathcal{A}}$ the $\sigma$-algebra generated by $\mathcal{A}$
and $\set{\infty}$. The diagonal action $a_t$ extends to an action on $\ov{X}$ and $\overline{Y}$ by setting $a_t(\infty)=\infty$ for $t\in\mathbb{R}$.
For a finite partition $\mathcal{Q}=\set{Q_1,\cdots,Q_N,Q_\infty}$ of $Y$ which has only one non-compact element $Q_\infty$, denote by $\overline{\mathcal{Q}}$
the finite partition $\set{Q_1, \cdots, Q_N, \overline{Q_\infty}\overset{\on{def}}{=} Q_\infty\cup\set{\infty}}$ of $\overline{Y}$.
Note that $\overline{\mathcal{Q}_0^q}=\overline{\mathcal{Q}}_0^q$ for any $q\in\mathbb{N}$.
We also denote by $\crly{P}(X)$ the space of probability measures on $X$, and use similar notations for $Y$, $\ov{X}$, and $\ov{Y}$.
In this subsection, we construct an $a$-invariant measure on $\overline{Y}$ with a lower bound on the conditional entropy for the proof of
Theorem \ref{thmEff1}. Here, the conditional entropy will be computed with respect to the $\sigma$-algebras constructed in the previous section.
If $x_A$ has no escape of mass, such a measure was constructed in \cite[Proposition 2.3]{LSS}.
The following proposition generalizes that construction to $x_A$ with some escape of mass.
\begin{prop}\label{prop5} For $A\in M_{m,n}(\mathbb{R})$ fixed,
let $$\eta_A=\sup\set{\eta:x_A \ \textrm{has} \ \eta\textrm{-escape of mass on average}}.$$
Then there exists $\mu_A\in\crly{P}(\overline{X})$ with $\mu_A(X)=1-\eta_A$.
Moreover, for any $\epsilon>0$, there exists an $a$-invariant measure $\ov{\mu}\in\crly{P}(\overline{Y})$ such that
\begin{enumerate}
\item\label{supp'} $\on{Supp}{\ov{\mu}}\subset \mathcal{L}_\epsilon\cup (\overline{Y}\smallsetminus Y)$,
\item\label{cusp'} $\pi_*\ov{\mu}=\mu_A$; in particular, there exists $\mu\in\crly{P}(Y)$ such that $$\ov{\mu}=(1-\eta_A)\mu+\eta_A\delta_{\infty},$$ where $\delta_\infty$ is the Dirac delta measure on $\overline{Y}\setminus Y$.
\item\label{entropy'} For any $0<r<1$, let $\mathcal{A}^W$ be the $\sigma$-algebra of $Y$ constructed in Proposition \ref{algebracst} for $r$, $\mu$ and $L=W$. Then
$$h_{\ov{\mu}}(a^{-1}|\overline{\mathcal{A}^W})\ge 1-\eta_A-r_{1}(m- \dim_H \mb{Bad}_A(\epsilon)).$$
\end{enumerate}
\end{prop}
\begin{rem}\
\begin{enumerate}
\item Note that if $\eta_A >0$ then $x_{A}$ has $\eta_A$-escape of mass on average.
\item One can check that $\eta_A=0$ if and only if $x_{A}$ is \textit{heavy}, which is defined in \cite[Definition 1.1]{LSS}.
\end{enumerate}
\end{rem}
\begin{proof}
Since $x_{A}$ has $\eta_A$-escape of mass on average but no more than $\eta_A$, we may fix an increasing sequence of integers
$\set{k_i}_{i\geq 1}$ such that
$$\frac{1}{k_i}\displaystyle\sum_{k=0}^{k_i-1}\delta_{a^k x_A}\overset{\on{w}^*}{\longrightarrow}\mu_A\in\crly{P}(\overline{X})$$ with $\mu_A(X)= 1-\eta_A$.
Let us denote by $\mathbb{T}^{m}= [0,1]^{m}/\!\sim$ the torus in $\mathbb{R}^{m}$, where the equivalence relation is modulo $1.$
Let
$$R^{A,T}:=\set{b\in \mathbb{T}^m|\forall t\ge T,a_t y_{A,b}\in\mathcal{L}_\epsilon}\cap\mb{Bad}_A(\epsilon).$$
As explained in Subsection \ref{sec3.2}, the subset of $\mb{Bad}_{A}(\epsilon)$ consisting of those $b$ for which $(A,b)$ is rational
has Hausdorff dimension zero.
Hence, by Proposition \ref{prop1}, $\displaystyle\bigcup_{T=1}^\infty R^{A,T}$ has Hausdorff dimension equal to $\dim_H \mb{Bad}_A(\epsilon)$.
For any $\gamma >0$, it follows that there exists $T_\gamma \in \mathbb{N}$ satisfying
$\dim_H R^{A,T_\gamma}\geq \dim_H \mb{Bad}_A(\epsilon)-\gamma$.
Let $\phi_A:\mathbb{T}^{m}\to Y$ be the map defined by $\phi_A(b)=y_{A,b}$. Note that $\phi_A$ is a one-to-one Lipschitz map between
$\mathbb{T}^m$ and $\phi_A(\mathbb{T}^m)$, so we may consider a quasinorm on $\phi_A(\mathbb{T}^m)$ induced from the $\mb{r}$-quasinorm on $\mathbb{R}^m$
and denote it again by $\|\cdot\|_{\mb{r}}$.
For each $k_i\ge T_\gamma$, let $S_i$ be a maximal $e^{-k_i}$-separated subset of $R^{A,T_\gamma}$ with respect to the $\mb{r}$-quasinorm.
By Lemma \ref{relating dimensions},
\eqlabel{Sicount}{\displaystyle\liminf_{i\to\infty}\frac{\log|S_i|}{k_i}\ge \underline{\dim}_{\mb{r}}(R^{A,T_\gamma})\ge 1-r_{1}(m+\gamma- \dim_H \mb{Bad}_A(\epsilon)).}
Let $\nu_i\overset{\on{def}}{=}\frac{1}{|S_i|}\displaystyle\sum_{b\in S_i}\delta_{y_{A,b}}$ be the normalized counting measure on the set
$D_i:= \phi_{A}(S_{i})=\set{y_{A,b}: b\in S_i}\subset Y$. Extracting a subsequence if necessary, we may assume without loss of generality
that $$\mu_i\overset{\on{def}}{=}\frac{1}{k_i}\displaystyle\sum_{k=0}^{k_i-1}a_*^k\nu_i\overset{\on{w}^*}{\longrightarrow}\mu^\gamma\in\crly{P}(\overline{Y}).$$
The measure $\mu^{\gamma}$ is $a$-invariant since $a_* \mu_i -\mu_i$ converges to the zero measure.
Choose any sequence of positive real numbers $(\gamma_j)_{j\geq 1}$ converging to zero and let $\set{\mu^{\gamma_j}}$ be a family of $a$-invariant probability measures on $\overline{Y}$ obtained from the above construction for each $\gamma_j$.
Extracting a subsequence again if necessary, we may take a $\text{weak}^*$-limit measure $\ov{\mu}\in \crly{P}(\overline{Y})$ of $\set{\mu^{\gamma_j}}$.
We prove that $\ov{\mu}$ is the desired measure. The measure $\ov{\mu}$ is clearly $a$-invariant.\\
(\ref{supp'}) We show that for all $\gamma>0$, $\mu^\gamma(Y\setminus\mathcal{L}_\epsilon)=0$. For any $b\in S_i\subseteq R^{A,T_\gamma}$, $a^T y_{A,b}\in \mathcal{L}_\epsilon$ holds for all $T\ge T_\gamma$. Thus we have
\begin{align*}
\mu_i(Y\setminus\mathcal{L}_\epsilon)&=\frac{1}{k_i}\displaystyle\sum_{k=0}^{k_i-1}a^k_{*}\nu_i(Y\setminus\mathcal{L}_\epsilon)= \frac{1}{k_i}\displaystyle\sum_{k=0}^{T_\gamma-1}a^k_{*}\nu_i(Y\setminus\mathcal{L}_\epsilon)\\
&= \frac{1}{k_{i} |S_{i}|}\sum_{y\in D_{i},\, 0\leq k < T_{\gamma}} \delta_{a^{k}y}(Y\setminus\mathcal{L}_\epsilon) \leq \frac{T_\gamma}{k_i}.
\end{align*}
By taking $k_i\to \infty$, we have $\mu^{\gamma}(Y\setminus\mathcal{L}_\epsilon)=0$ for arbitrary $\gamma>0$, hence
$$\ov{\mu}(Y\setminus\mathcal{L}_\epsilon)=\lim_{j\to\infty}\mu^{\gamma_j}(Y\setminus\mathcal{L}_\epsilon)=0.$$\\
(\ref{cusp'}) For all $\gamma>0$, $\pi_*\mu^\gamma=\mu_A$ holds since $\pi_*\nu_i=\delta_{x_{A}}$ for all $i\ge 1$.
It follows that $\pi_*\ov{\mu}=\mu_A$. Hence,
$$\ov{\mu}(\overline{Y}\setminus Y)=\lim_{j\to\infty}\mu^{\gamma_j}(\overline{Y}\setminus Y)=\mu_A(\overline{X}\setminus X)=\eta_A,$$
so we have a decomposition $\overline{\mu}=(1-\eta_A)\mu+\eta_A\delta_\infty$ for some $\mu\in\crly{P}(Y)$.\\
(\ref{entropy'}) Suppose that $\mathcal{Q}$ is any finite partition of $Y$ satisfying:
\begin{itemize}
\item $\mathcal{Q}$ contains an atom $Q_\infty$ of the form $\pi^{-1}(Q_\infty^0)$, where $X\smallsetminus Q_\infty^0$ has compact closure,
\item $\forall Q\in \mathcal{Q}\smallsetminus\set{Q_\infty}$, $\diam Q<r$, with $r\in (0,\frac{1}{2})$ such that any $d_{\mb{r}}$-ball of radius $3r$ has Euclidean diameter smaller than the injectivity radius on $Y\setminus Q_{\infty}$,
\item $\forall Q\in\mathcal{Q},\forall j\geq 1,\; \mu^{\gamma_j}(\partial Q)=0$.
\end{itemize}
We will first prove the following statement. For all $q\ge 1$,
\eqlabel{sttlowbdd}{\frac{1}{q}H_{\ov{\mu}}(\overline{\mathcal{Q}}_0^{q-1}|\overline{\mathcal{A}^W})\ge 1-r_{1}(m-\dim_H \mb{Bad}_A(\epsilon))-\ov{\mu}(\overline{Q_\infty}).}
It is clear if $\ov{\mu}(\overline{Q_\infty})=1$, so assume that $\ov{\mu}(\overline{Q_\infty})<1$, hence for all large enough $j\geq 1$,
$\mu^{\gamma_j}(\overline{Q_\infty})<1$. Now, we fix such $j\geq 1$ and temporarily write $\gamma=\gamma_j$.
Let $\rho>0$ be small enough so that $\beta\overset{\on{def}}{=}\mu^\gamma(\overline{Q_\infty})+\rho<1$. For large enough $i\geq 1$, we have
\begin{align*}
\beta= \mu^\gamma(\overline{Q_\infty})+\rho>\mu_i(Q_\infty)&=\frac{1}{k_i|S_i|}\displaystyle\sum_{y\in D_i, 0\le k<k_i}\delta_{a^k y}(Q_\infty)\\
&=\frac{1}{k_i}\displaystyle\sum_{0\le k<k_i}\delta_{a^k x_{A}}(Q^0_\infty).
\end{align*}
In other words, for at most $\beta k_i$ values of $k\in\{0,\dots,k_i-1\}$ does $a^k x_A$ lie in $Q^0_\infty$. Since $\pi(a^{k}y)=a^{k}x_A$ for every $y\in D_i$ and $Q_\infty=\pi^{-1}(Q_\infty^0)$, it follows that for any $y\in D_{i}$,
\[
|\{k\in\{0,\dots,k_{i}-1\}: a^{k}y \in Q_{\infty}\}| < \beta k_{i}.
\]
Let $\mathcal{A}^W=(\mathcal{P}^W)_0^\infty$ be a $\sigma$-algebra we constructed in Proposition \ref{algebracst} with respect to $\mu$, where $\mathcal{P}^W$ is a refinement of $\mathcal{P}=\set{P_1,\cdots,P_N,P_\infty}$ such that $[y]_{\mathcal{P}^W}=[y]_\mathcal{P}\cap B_{2r}^W$ for any $y\in Y\setminus P_\infty$.
For any $M\in\mathbb{N}$ and $y\in Y\setminus P_\infty$, we have $[y]_{{(\mathcal{P}^W)}_0^M}=[y]_{\mathcal{P}_0^M}\cap B_{2r}^W$. Since the support of $\nu_i$ is a set of finite points on a single compact $W$-orbit $\phi_A(\mathbb{T}^m)$, $(\mathcal{P}^W)_0^M=\mathcal{P}_0^M$ modulo $\nu_i$. Hence,
\eqlabel{staticbd1}{H_{\nu_i}((\mathcal{P}^W)_0^M)=H_{\nu_i}(\mathcal{P}_0^M)\leq M\log(N+1).}
From Lemma \ref{CovLem} with $L=W$, if $Q$ is any non-empty atom of $\mathcal{Q}_0^{k_i-1}\vee \mathcal{P}_0^M$, then, fixing any $y\in Q$, for any $D>1$,
\eq{
D_{i}\cap Q = D_{i}\cap [y]_{\mathcal{Q}_0^{k_i-1}\vee \mathcal{P}_0^M} \subset E_{y,k_{i}-1}
}
can be covered by $Ce^{D\beta k_{i}}$ many $r^{1/r_1}e^{-k_{i}}$-balls for $d_{\mb{r}}$, where $C$ is a constant depending on $Q_\infty^0,r,$ and $D$,
but not on $k_i$. Since $D_{i}$ is $e^{-k_{i}}$-separated with respect to $d_{\mb{r}}$ and $r^{1/r_{1}}<\frac{1}{2}$, we get
$$\mathrm{Card}(D_i \cap Q)\leq Ce^{D\beta k_i },$$
and therefore using $(\mathcal{P}^W)_0^M=\mathcal{P}_0^M$ modulo $\nu_i$, we have
\eqlabel{staticbd2}{H_{\nu_i}(\mathcal{Q}_0^{k_i-1}\vee (\mathcal{P}^W)_0^M)=H_{\nu_i}(\mathcal{Q}_0^{k_i-1}\vee \mathcal{P}_0^M)\ge \log|S_i|-D\beta k_i-\log C.}
Combining \eqref{staticbd1} and \eqref{staticbd2}, we have
\eqlabel{staticbd3}{
\begin{aligned}
H_{\nu_i}(\mathcal{Q}_0^{k_i-1}|(\mathcal{P}^W)_0^M)&=H_{\nu_i}(\mathcal{Q}_0^{k_i-1}\vee(\mathcal{P}^W)_0^M)-H_{\nu_i}((\mathcal{P}^W)_0^M)\\
&\ge \log |S_i|-D\beta k_i-\log C -M\log (N+1).
\end{aligned}}
For $q\ge 1$, write the Euclidean division of large enough $k_i-1$ by $q$ as
$$k_i-1=qk'+s \ \textrm{with} \ s\in\set{0,\cdots,q-1}.$$
By subadditivity of the entropy with respect to the partition, for each $p\in\set{0,\cdots,q-1}$,
$$H_{\nu_i}(\mathcal{Q}_0^{k_i-1}|(\mathcal{P}^W)_0^M)\leq H_{a^{p}\nu_i}(\mathcal{Q}_0^{q-1}|(\mathcal{P}^W)_0^M)+\cdots+H_{a^{p+qk'}\nu_i}(\mathcal{Q}_0^{q-1}|(\mathcal{P}^W)_0^M)+2q\log |\mathcal{Q}|.$$
Summing these inequalities over $p=0,\cdots,q-1$, and using the concavity of entropy with respect to the measure, we obtain
\begin{align*}
qH_{\nu_i}(\mathcal{Q}_0^{k_i-1}|(\mathcal{P}^W)_0^M)
&\leq\displaystyle\sum_{k=0}^{k_i-1}H_{a^k \nu_i}(\mathcal{Q}_0^{q-1}|(\mathcal{P}^W)_0^M)+2q^2\log |\mathcal{Q}|\\
&\leq k_iH_{\mu_i}(\mathcal{Q}_0^{q-1}|(\mathcal{P}^W)_0^M)+2q^2\log |\mathcal{Q}|,
\end{align*}
and it follows from \eqref{staticbd3} that
\begin{align*}
\frac{1}{q}H_{\mu_i}(\mathcal{Q}_0^{q-1}|&(\mathcal{P}^W)_0^M)
\ge \frac{1}{k_i}H_{\nu_i}(\mathcal{Q}_0^{k_i-1}|(\mathcal{P}^W)_0^M)-\frac{2q\log |\mathcal{Q}|}{k_i}\\
&\ge \frac{1}{k_i}\Bigl\{(\log |S_i|-D\beta k_i -\log C -M\log(N+1)) - 2q\log|\mathcal{Q}|\Bigr\}.
\end{align*}
Now we can take $i\to\infty$ because the atoms $Q$ of $\overline{\mathcal{Q}}$, and hence of $\overline{\mathcal{Q}}_0^{q-1}$, satisfy $\mu^\gamma(\partial Q)=0$. Also, the constants $C,M,N$ and $|\mathcal{Q}|$ are independent of $k_i$. Thus we obtain
\begin{align*}
\frac{1}{q}H_{\mu^\gamma}(\overline{\mathcal{Q}}_0^{q-1}|(\overline{\mathcal{P}^W})_0^M)
&\ge 1-r_{1}(m+\gamma- \dim_H \mb{Bad}_A(\epsilon))-D\beta,
\end{align*}
and by taking $\rho \to 0$ and $D\to 1$ we get
$$\frac{1}{q}H_{\mu^\gamma}(\overline{\mathcal{Q}}_0^{q-1}|(\overline{\mathcal{P}^W})_0^M)
\ge 1-r_{1}(m+\gamma- \dim_H \mb{Bad}_A(\epsilon))-\mu^{\gamma}(\overline{Q_\infty}).$$
Recall that $\gamma=\gamma_j$, and by taking $j\to \infty$ so that $\gamma_j \to 0$, we have
$$\frac{1}{q}H_{\ov{\mu}}(\overline{\mathcal{Q}}_0^{q-1}|(\overline{\mathcal{P}^W})_0^M)
\ge 1-r_{1}(m- \dim_H \mb{Bad}_A(\epsilon))-\ov{\mu}(\overline{Q_\infty}).$$
Since $(\overline{\mathcal{P}^W})_0^M\nearrow \overline{\mathcal{A}^W}$ as $M\to\infty$, we finally get \eqref{sttlowbdd}, i.e.
$$\frac{1}{q}H_{\ov{\mu}}(\overline{\mathcal{Q}}_0^{q-1}|\overline{\mathcal{A}^W})
\ge 1-r_{1}(m- \dim_H \mb{Bad}_A(\epsilon))-\ov{\mu}(\overline{Q_\infty}).$$
As explained in \cite[Proof of Theorem 4.2, Claim 2]{LSS}, we can construct a finite partition $\mathcal{Q}$ of $Y$ satisfying the bullet-requirements above. Hence,
$$h_{\ov{\mu}}(a^{-1}|\overline{\mathcal{A}^W})\ge 1-r_{1}(m- \dim_H \mb{Bad}_A(\epsilon))-\ov{\mu}(\overline{Q_\infty}),$$
for any $Q_\infty$ of $\mathcal{Q}$ satisfying the bullet-requirements.
Moreover, we may take $Q_\infty^0\subset X$ sufficiently small so that $\ov{\mu}(\ov{Q_\infty})$ is sufficiently close to $\ov{\mu}(\overline{Y}\setminus Y)=\eta_A$. This completes the proof.
\end{proof}
\subsection{The proof of Theorem \ref{thmEff1}}
In this subsection, we estimate the dimension upper bound in Theorem \ref{thmEff1} using the $a$-invariant measure with large relative entropy constructed in Proposition \ref{prop5} and the effective variational principle in Proposition \ref{effEL}. To use the effective variational principle, we need the following lemma.
For $x\in X$ and $H\geq 1$ we set:
$$\textrm{ht}(x)\overset{\on{def}}{=}\sup\set{\|gv\|^{-1}: x=gSL_d(\mathbb{Z}), v\in\mathbb{Z}^d\setminus\set{0}},$$
$$X_{\leq H}\overset{\on{def}}{=}\set{x\in X: \textrm{ht}(x)\leq H },\quad Y_{\leq H}\overset{\on{def}}{=}\pi^{-1}(X_{\le H}).$$
Note that $\textrm{ht}(x) \geq 1$ for any $x\in X$ by Minkowski's theorem, and $X_{\leq H}$ and $Y_{\leq H}$ are compact for all $H\geq 1$ by Mahler's compactness criterion.
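As a sanity check on the definition (not needed in the sequel): the set $\set{gv: v\in\mathbb{Z}^d\setminus\set{0}}$ depends only on the lattice $g\mathbb{Z}^d$, so the height is simply the reciprocal of the length of a shortest nonzero lattice vector,
\[
\textrm{ht}(x)=\Big(\min_{v\in\mathbb{Z}^d\setminus\set{0}}\|gv\|\Big)^{-1}\quad \text{for } x=gSL_d(\mathbb{Z});
\]
for instance, $\textrm{ht}(SL_d(\mathbb{Z}))=1$, attained at the standard basis vectors of $\mathbb{Z}^d$.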
\begin{lem}\label{volest}
Let $\mathcal{A}$ be a countably generated sub-$\sigma$-algebra of the Borel $\sigma$-algebra which is $a^{-1}$-descending and $W$-subordinate. Let us fix $y\in Y_{\leq H}$ and suppose that $B^{W,\mb{r}}_{\delta}\cdot y\subset [y]_{\mathcal{A}}\subset B^{W,\mb{r}}_{r}\cdot y$ for some $0<\delta<r$. For any $0<\epsilon<1$, if $j_1\ge \log((2dH^{d-1})^{\frac{1}{r_m}}\delta^{-1})$ and $j_2\ge \log((dH^{d-1})^{\frac{1}{s_n}}\epsilon^{-\frac{n}{d}})$, then $\tau_y^{a^{j_1}\mathcal{A}}(a^{-j_2}\mathcal{L}_\epsilon)\leq 1-e^{-j_1-j_2}r^{-1}\epsilon^{\frac{m}{d}}$, where $\tau_{y}^{a^{j_1}\mathcal{A}}$ is as in Subsection \ref{sec2.4}.
\end{lem}
\begin{proof}
For $x=\pi(y)\in X_{\leq H}$, there exists $g\in SL_d(\mathbb{R})$ such that $x=gSL_d(\mathbb{Z})$ and $\displaystyle\inf_{v\in\mathbb{Z}^d\setminus\set{0}}\|gv\|\ge H^{-1}$. By Minkowski's second theorem with a convex body $[-1,1]^d$, we can choose vectors $gv_1,\cdots,gv_d$ in $g\mathbb{Z}^d$ so that $\displaystyle\prod_{i=1}^{d}\|gv_i\|\leq 1$. Then for any $1\leq i\leq d$, $$\|gv_i\|\leq \displaystyle\prod_{j\neq i}\|gv_j\|^{-1} \leq H^{d-1}.$$ Let $\Delta\subset \mathbb{R}^d$ be the parallelepiped generated by $gv_1,\cdots, gv_d$, then $\|b\|\leq dH^{d-1}$ for any $b\in \Delta$. It follows that $\|b^+\|_{\mb{r}}\leq (dH^{d-1})^{\frac{1}{r_m}}$ and $\|b^-\|_{\mb{s}}\leq (dH^{d-1})^{\frac{1}{s_n}}$ for any $b=(b^+,b^-)\in\Delta$, where $b^+\in\mathbb{R}^m$ and $b^-\in\mathbb{R}^n$. Note that the set $\pi^{-1}(x)\subset Y$ is parametrized as follows: $$\pi^{-1}(x)=\set{w(b)g\Gamma\in Y: b\in\Delta}.$$ Write $y=w(b_0)g\Gamma$ for some $b_0=(b_0^+,b_0^-)\in \Delta$. Denote by $V_y\subset W$ the shape of $\mathcal{A}$-atom so that $V_y\cdot y=[y]_{a^{j_1}\mathcal{A}}$, and $\Xi\subset\mathbb{R}^m$ the corresponding set to $V_y$ containing $0$ given by the canonical bijection between $W$ and $\mathbb{R}^m$. Since $a^{j_1}$ expands the $\mb{r}$-quasinorm with the ratio $e^{j_1}$, we have $B^{W,\mb{r}}_{e^{j_1}\delta}\cdot y\subset [y]_{a^{j_1}\mathcal{A}}\subset B^{W,\mb{r}}_{e^{j_1}r}\cdot y$, i.e.
$B^{\mathbb{R}^m,\mb{r}}_{e^{j_1}\delta}\subset \Xi\subset B^{\mathbb{R}^m,\mb{r}}_{e^{j_1}r}.$ Then the atom $[y]_{a^{j_1}\mathcal{A}}$ is parametrized as follows:
$$[y]_{a^{j_1}\mathcal{A}}=\set{w(b)g \Gamma: b=(b^+,b^-_0), b^+\in b^+_0+\Xi},$$
and $\tau_y^{a^{j_1}\mathcal{A}}$ can be considered as the normalized Lebesgue measure on the set $b^+_0+\Xi\subset \mathbb{R}^m$.
\begin{figure}
\begin{tikzpicture}
\filldraw[red!20!white] (-0.2,-2) rectangle (0.2,3);
\draw[red,thick] (-0.2,-2) rectangle (0.2,3);
\draw[white,thick] (-0.2,-2) -- (0.2,-2);
\draw[->] (-2,0) -- (4,0) node[anchor=west] {\tiny{$\mathbb{R}^{m}$}};
\draw[->] (0,-2) -- (0,4) node[anchor=south] {\tiny{$\mathbb{R}^{n}$}};
\draw[gray] (0,0) -- (0.4,-0.4);
\draw[gray] (0,0) -- (2,2);
\draw[gray] (0.4,-0.4) -- (2.4,1.6);
\draw[gray] (2.4,1.6) -- (2,2);
\fill[blue] (1.3,1) circle (1.5pt);
\draw (1.3,1) node[anchor=north,blue] {\tiny{$b_{0}$}};
\draw[very thin,dashed] (2.4,0) node[anchor=north] {\tiny{$dH^{d-1}$}} -- (2.4,2.4);
\draw[very thin,dashed] (0,2.4) -- (2.4,2.4);
\draw (-0.7,3) node[anchor=south,red] {$\Theta^{+}\times\Theta^{-}$};
\draw[blue,very thin] (-0.5,1) -- (3.1,1) node[anchor=west] {$[y]_{a^{j_{1}}\mathcal{A}}$};
\end{tikzpicture}
\caption{Intersection of $\Theta^{+}\times\Theta^{-}$ and $[y]_{a^{j_{1}}\mathcal{A}}$}
\label{atx}
\end{figure}
Let us consider the following sets:
$$\Theta^+\overset{\on{def}}{=}\set{b^+\in\mathbb{R}^m: \|b^+\|_{\mb{r}}\leq e^{-j_2}\epsilon^{\frac{m}{d}}}
\ \text{ and }\ \Theta^-\overset{\on{def}}{=}\set{b^-\in\mathbb{R}^n: \|b^-\|_{\mb{s}}\leq e^{j_2}\epsilon^{\frac{n}{d}}}.$$
If $b=(b^+,b^-)\in\Theta^+\times\Theta^-$, then $\|e^{\mb{r}j_2}b^+\|_{\mb{r}}\leq \epsilon^{\frac{m}{d}}$ and
$\|e^{-\mb{s}j_2}b^-\|_{\mb{s}}\leq \epsilon^{\frac{n}{d}}$, where $e^{\mb{r}j_2}b^+$ and $e^{-\mb{s}j_2}b^-$
denote the vectors such that $a^{j_2}b=(e^{\mb{r}j_2}b^+,e^{-\mb{s}j_2}b^-)$. It follows that $w(b)g\Gamma\notin a^{-j_2}\mathcal{L}_\epsilon$ since
$$a^{j_2}w(b^+,b^-)g\Gamma=w(e^{\mb{r}j_2}b^+,e^{-\mb{s}j_2}b^-)a^{j_2}g\Gamma\notin\mathcal{L}_\epsilon$$ by the definition of $\mathcal{L}_\epsilon$.
Now we claim that the set $\Theta^+\times\set{b_0^{-}}$ is contained in the intersection of $(b_0^++\Xi)\times\set{b_0^{-}}$ and
$\Theta^+\times\Theta^-$. See Figure \ref{atx}.
It is enough to show that $\Theta^+ \subset b_0^+ + \Xi$ and $b_0^- \in \Theta^-$.
Since $\|b_0^-\|_{\mb{s}} \leq (dH^{d-1})^{\frac{1}{s_n}}$, the latter assertion follows from the assumption
$j_2\ge \log((dH^{d-1})^{\frac{1}{s_n}}\epsilon^{-\frac{n}{d}})$. To show the former assertion, fix any $b^+ \in \Theta^+$.
By the quasi-metric property of $\|\cdot\|_{\mb{r}}$ as in \eqref{quasitriang}, it follows from the assumptions $j_1\ge \log((2dH^{d-1})^{\frac{1}{r_m}}\delta^{-1})$ and $j_2\ge \log((dH^{d-1})^{\frac{1}{s_n}}\epsilon^{-\frac{n}{d}})$ that
\[
\begin{split}
\|b^+ - b_0^+\|_{\mb{r}} &\leq 2^{\frac{1-r_m}{r_m}}(\|b^+\|_{\mb{r}}+\|b_0^+\|_{\mb{r}})\leq 2^{\frac{1-r_m}{r_m}}(e^{-j_2}\epsilon^{\frac{m}{d}} + (dH^{d-1})^{\frac{1}{r_m}})\\
&\leq 2^{\frac{1-r_m}{r_m}}((dH^{d-1})^{-\frac{1}{s_n}}\epsilon + (dH^{d-1})^{\frac{1}{r_m}})
\leq 2^{\frac{1-r_m}{r_m}+1}(dH^{d-1})^{\frac{1}{r_m}} \\
&\leq e^{j_1}\delta.
\end{split}
\] Thus we have $b^+ \in b_0^+ +B_{e^{j_1}\delta}^{\mathbb{R}^m,\mb{r}} \subset b_0^+ +\Xi $, which concludes the former assertion.
By the above claim, we obtain
\begin{align*}
1-\tau_y^{a^{j_1}\mathcal{A}}(a^{-j_2}\mathcal{L}_\epsilon)&=\tau_y^{a^{j_1}\mathcal{A}}(Y\setminus a^{-j_2}\mathcal{L}_\epsilon)\geq \frac{m_{\mathbb{R}^m}(\Theta^{+})}{m_{\mathbb{R}^m}(b_0^{+}+\Xi)}\\
&\geq \frac{m_{\mathbb{R}^m}(B^{\mathbb{R}^m,\mb{r}}_{e^{-j_2}\epsilon^{\frac{m}{d}}})}{m_{\mathbb{R}^m}(B^{\mathbb{R}^m,\mb{r}}_{e^{j_1}r})}
=\frac{e^{-j_2}\epsilon^{\frac{m}{d}}}{e^{j_1}r}.
\end{align*}
This proves the lemma.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thmEff1}]
Suppose that $A\in M_{m,n}(\mathbb{R})$ is not singular on average, and let
$$\eta_A=\sup\set{\eta:x_A \ \textrm{has} \ \eta\textrm{-escape of mass on average}}<1.$$
By Proposition \ref{prop5}, there is an $a$-invariant measure $\ov{\mu}\in\crly{P}(\overline{Y})$ such that
$$\on{Supp}\ov{\mu}\subset\mathcal{L}_\epsilon\cup(\overline{Y}\setminus Y),\; \pi_*\ov{\mu}=\mu_A\in\crly{P}(\overline{X}),\;
\text{and}\; \ov{\mu}(\overline{Y}\setminus Y)=\mu_A(\overline{X}\setminus X)=\eta_A.$$
This measure can be represented by the linear combination $$\ov{\mu}=(1-\eta_A)\mu+\eta_A\delta_\infty,$$
where $\delta_\infty$ is the dirac delta measure on $\overline{Y}\setminus Y$ and $\mu\in\crly{P}(Y)$ is $a$-invariant.
There is a compact set $K\subset X$ such that $\mu_A(K)>0.99\mu_A(X)$. We can choose $0<r<1$ such that $Y(2r)\supset \pi^{-1}(K)$
and $\mu(Y(2r))>0.99$. Note that the choice of $r$ is independent of $\epsilon$ since $\mu_A$ is determined by the fixed $A$ alone.
For such $0<r<1$ and $a$-invariant probability measure $\mu$ on $Y$, let $0<\delta_0=\delta_0(r)<r$, $0<\delta_1=\delta_1(r)\leq \delta_0$,
and a countably generated $\sigma$-algebra $\mathcal{A}^W$ be as in Lemma \ref{partitioncst} and Proposition \ref{algebracst}, respectively.
With respect to this $\sigma$-algebra, we have
$$h_{\ov{\mu}}(a^{-1}|\overline{\mathcal{A}^W})\ge (1-\eta_A)-r_1(m-\dim_H \mb{Bad}_A(\epsilon))$$
by (\ref{entropy'}) of Proposition \ref{prop5}. Since the entropy function is linear with respect to the measure, it follows that
\eq{h_{\mu}(a^{-1}|\mathcal{A}^W)= \frac{1}{1-\eta_A}h_{\ov{\mu}}(a^{-1}|\overline{\mathcal{A}^W})\ge 1-\frac{r_1}{1-\eta_A}(m-\dim_H \mb{Bad}_A(\epsilon)).}
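In the first equality we also used that the component $\delta_\infty$ is supported on the fixed point $\infty$, so its conditional entropy vanishes; that is (a routine check),
\[
h_{\ov{\mu}}(a^{-1}|\overline{\mathcal{A}^W})=(1-\eta_A)\,h_{\mu}(a^{-1}|\mathcal{A}^W)+\eta_A\, h_{\delta_\infty}(a^{-1}|\overline{\mathcal{A}^W})=(1-\eta_A)\,h_{\mu}(a^{-1}|\mathcal{A}^W).
\]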
By Lemma \ref{Exceptional}, there exists $0<\delta<\delta_1$ such that $\mu(E_\delta)<0.01$. Note that the constant $C>0$ in Lemma \ref{Exceptional}
depends only on $a$ and $G$, hence $\delta$ is independent of $\epsilon$ even if the set $E_\delta$ depends on $\epsilon$.
It follows from Proposition \ref{algebracst} that $B_\delta^W\cdot y\subset [y]_{\mathcal{A}^W}\subset B_{2r}^W\cdot y$
for any $y\in Y(2r) \setminus E_\delta$. We write $Z=Y(2r)\setminus E_\delta$ for simplicity.
In the rest of the argument, we will use Lemma \ref{algexiA} and Proposition \ref{effEL}, which require an ergodicity assumption. We do not know whether $\mu$ is ergodic, so we need to choose an ergodic component of $\mu$ properly. We have the ergodic decomposition
$$\mu=\int \mu_y^\mathcal{E}d\mu(y).$$
Let $\Upsilon:=\set{y\in Y: \mu_y^\mathcal{E}(Z)>0.9}$. Since $\mu(Z)\geq 0.98$, it follows that
$$0.98\leq \int_Y \mu_y^\mathcal{E}(Z)d\mu(y)\leq \mu(\Upsilon)+0.9\mu(Y\setminus\Upsilon),$$
hence $\mu(\Upsilon)\ge0.8$. By Proposition \ref{ergDec}, we have
\eq{
\begin{aligned}
\int_\Upsilon h_{\mu_y^\mathcal{E}}(a^{-1}|\mathcal{A}^W)d\mu(y)+\int_{Y\setminus\Upsilon} h_{\mu_y^\mathcal{E}}&(a^{-1}|\mathcal{A}^W)d\mu(y)=\int_Y h_{\mu_y^\mathcal{E}}(a^{-1}|\mathcal{A}^W)d\mu(y)\\&\geq 1-\frac{r_1}{1-\eta_A}(m-\dim_H \mb{Bad}_A(\epsilon)).
\end{aligned}}
On the other hand, by Theorem \ref{thmEL}, $h_{\mu_y^\mathcal{E}}(a^{-1}|\mathcal{A}^W)\leq 1$ for any $y\in Y$, hence $\int_{Y\setminus\Upsilon} h_{\mu_y^\mathcal{E}}(a^{-1}|\mathcal{A}^W)d\mu(y)\leq \mu(Y\setminus\Upsilon)$. It follows that
\eq{
\begin{aligned}
\int_\Upsilon h_{\mu_y^\mathcal{E}}(a^{-1}|\mathcal{A}^W)d\mu(y)&\ge
\mu(\Upsilon)-\frac{r_1}{1-\eta_A}(m-\dim_H \mb{Bad}_A(\epsilon))\\
&\ge \mu(\Upsilon)\left(1-\frac{r_1}{0.8(1-\eta_A)}(m-\dim_H \mb{Bad}_A(\epsilon))\right).
\end{aligned}}
Therefore, we can find $y_0 \in \Upsilon$ such that $\lambda:=\mu_{y_0}^\mathcal{E}$ satisfies $\lambda(Z)>0.9$ and
$$h_{\lambda}(a^{-1}|\mathcal{A}^W)\ge 1-\frac{r_1}{0.8(1-\eta_A)}(m-\dim_H \mb{Bad}_A(\epsilon)).$$
Note that $\mathcal{A}^W$ is $W$-subordinate modulo $\lambda$ by Proposition \ref{algebracst}.
Applying Lemma \ref{algexiA} for $\lambda$, we have
\eqlabel{entest}{H_{\lambda}(\mathcal{A}^W|a\mathcal{A}^W)\ge 1-\frac{r_1}{0.8(1-\eta_A)}(m-\dim_H \mb{Bad}_A(\epsilon)).}
To apply Lemma \ref{volest}, choose $H\geq 1$ such that $Y(2r) \subset Y_{\leq H}$. Note that the constant $H$ depends only on $r$.
Set
$$j_1=\lceil\log((2dH^{d-1})^{\frac{1}{r_m}}{\delta'}^{-1})\rceil\quad\text{and}\quad
j_2=\lceil\log((dH^{d-1})^{\frac{1}{s_n}}\epsilon^{-\frac{n}{d}})\rceil,$$
where $\delta'>0$ will be determined below.
Let $\mathcal{A}=a^{-k}\mathcal{A}^W$ for $k=\lceil\log(2^{\frac{1}{r_m}}(2r)^{\frac{1}{r_1}}\epsilon^{-\frac{m}{d}})\rceil+j_2$.
By Proposition \ref{algebracst}, for any $y\in Z$, we have
$B_{\delta}^{W}\cdot y \subset [y]_{\mathcal{A}^W} \subset B_{2r}^{W}\cdot y$,
which implies that
\[
B^{W,\mb{r}}_{\delta^{\frac{1}{r_m}}}\cdot y\subset[y]_{\mathcal{A}^W}\subset B^{W,\mb{r}}_{(2r)^{\frac{1}{r_1}}}\cdot y.
\]
Thus, for any $y\in Z$,
\[
B^{W,\mb{r}}_{\delta^{\frac{1}{r_m}} e^{-k}}\cdot (a^{-k}y)\subset
[a^{-k}y]_{a^{-k}\mathcal{A}^W}=[a^{-k}y]_{\mathcal{A}}\subset B^{W,\mb{r}}_{(2r)^{\frac{1}{r_1}}e^{-k}}\cdot (a^{-k}y).
\]
Finally, it follows that for any $y\in a^k Z$,
$$B^{W,\mb{r}}_{\delta'}\cdot y\subset [y]_{\mathcal{A}}\subset B^{W,\mb{r}}_{r'}\cdot y,$$
where
$$r'=2^{-\frac{1}{r_m}}e^{-j_2}\epsilon^{\frac{m}{d}}\quad\text{and}\quad \delta'=e^{-1}(2r)^{-\frac{1}{r_1}}\delta^{\frac{1}{r_m}}r'.$$
Now we will use Proposition \ref{effEL} with $L=W$, $\mu=\lambda$, $K=a^k Z$, and $r=r'$.
Note that the maximal relative entropy of $a^{j_1}$ with respect to $\mathcal{A}^W$ is $j_1$, and $\lambda$ is supported on $a^{-j_2}\mathcal{L}_\epsilon$ since $\on{Supp}\lambda\subseteq\on{Supp}\mu\subseteq\mathcal{L}_\epsilon$ and $\lambda$ is $a$-invariant.
We also have
$$B^{W,\mb{r}}_{r'}a^{-j_2}\mathcal{L}_\epsilon=a^{-j_2}B^{W,\mb{r}}_{e^{j_2}r'}\mathcal{L}_\epsilon=a^{-j_2}B^{W,\mb{r}}_{2^{-\frac{1}{r_m}}\epsilon^{\frac{m}{d}}}\mathcal{L}_\epsilon \subseteq a^{-j_2}\mathcal{L}_{2^{-\frac{d}{mr_m}}\epsilon}$$
by using the triangular inequality of $\mb{r}$-quasinorm as in \eqref{quasitriang} and the definition of $\mathcal{L}_\epsilon$ for the last inclusion.
Applying Proposition \ref{effEL}, it follows that
\eqlabel{est1}{
\begin{aligned}
H_\lambda(\mathcal{A}|a^{j_1}\mathcal{A})&\leq j_1+\int_Y\log\tau^{a^{j_1}\mathcal{A}}_y((Y\setminus a^k Z)
\cup B^{W,\mb{r}}_{r'}a^{-j_2}\mathcal{L}_\epsilon)d\lambda(y)\\
&\leq j_1+\int_Y\log\tau^{a^{j_1}\mathcal{A}}_y((Y\setminus a^k Z)
\cup a^{-j_2}\mathcal{L}_{2^{-\frac{d}{mr_m}}\epsilon})d\lambda(y)\\
&\leq j_1+\int_{a^k Z\cap Y_{\leq H}}\log\tau^{a^{j_1}\mathcal{A}}_y(a^{-j_2}\mathcal{L}_{2^{-\frac{d}{mr_m}}\epsilon})d\lambda(y).
\end{aligned}}
By Lemma \ref{volest} with $\delta=\delta'$ and $r=r'$, for any $y\in a^k Z\cap Y_{\leq H}$,
$$\tau^{a^{j_1}\mathcal{A}}_y(a^{-j_2}\mathcal{L}_{2^{-\frac{d}{mr_m}}\epsilon})\leq 1-2^{-\frac{1}{r_m}}e^{-j_1-j_2}r'^{-1}\epsilon^{\frac{m}{d}}=1-e^{-j_1},$$
hence, using $-\log(1-x)\ge x$ for $x\in[0,1)$, we get $-\log\tau^{a^{j_1}\mathcal{A}}_y(a^{-j_2}\mathcal{L}_{2^{-\frac{d}{mr_m}}\epsilon})\ge e^{-j_1}$.
Since $\lambda(a^k Z\cap Y_{\leq H})\geq \frac{1}{2}$, it follows from \eqref{est1} that
\eqlabel{est2}{
\begin{aligned}
1-H_\lambda(\mathcal{A}^W|a\mathcal{A}^W)&=1-\frac{1}{j_1}H_\lambda(\mathcal{A}^W|a^{j_1}\mathcal{A}^W)=1-\frac{1}{j_1}H_\lambda(\mathcal{A}|a^{j_1}\mathcal{A})\\
&\ge-\frac{1}{j_1}\int_{a^k Z\cap Y_{\leq H}}\log\tau^{a^{j_1}\mathcal{A}}_y(a^{-j_2}\mathcal{L}_{2^{-\frac{d}{mr_m}}\epsilon})d\lambda(y)\\
&\ge \frac{e^{-j_1}}{2j_1}.
\end{aligned}}
Recall that $j_1$ is chosen by
\begin{align*}
j_1&=\lceil\log((2dH^{d-1})^{\frac{1}{r_m}} e(2r)^{\frac{1}{r_1}}\delta^{-\frac{1}{r_m}}2^{\frac{1}{r_m}}e^{j_2}\epsilon^{-\frac{m}{d}})\rceil\\
&\leq\lceil\log((2dH^{d-1})^{\frac{1}{r_m}+\frac{1}{s_n}} e^2 (2r)^{\frac{1}{r_1}}\delta^{-\frac{1}{r_m}}2^{\frac{1}{r_m}}
\epsilon^{-\frac{n}{d}}\epsilon^{-\frac{m}{d}})\rceil\\
&\leq \log((2dH^{d-1})^{\frac{1}{r_m}+\frac{1}{s_n}} e^3 (2r)^{\frac{1}{r_1}}\delta^{-\frac{1}{r_m}}2^{\frac{1}{r_m}})
-\log\epsilon.
\end{align*}
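Explicitly, the combination of \eqref{entest} and \eqref{est2} is the elementary estimate
\[
\frac{r_1}{0.8(1-\eta_A)}\big(m-\dim_H \mb{Bad}_A(\epsilon)\big)\ \ge\ 1-H_\lambda(\mathcal{A}^W|a\mathcal{A}^W)\ \ge\ \frac{e^{-j_1}}{2j_1},
\]
and the bound on $j_1$ just displayed gives $e^{-j_1}\ge c_1\epsilon$ and $j_1\le c_2\log(1/\epsilon)$ for all sufficiently small $\epsilon>0$, where $c_1,c_2>0$ are auxiliary constants (introduced only here) depending only on $d$, $\mb{r}$, $\mb{s}$, and $A$.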
Here, the constants $H$, $r$, and $\delta$ depend only on the fixed $A\in M_{m,n}(\mathbb{R})$, not on $\epsilon$. Combining \eqref{entest} and \eqref{est2}, we obtain
$$m-\dim_H \mb{Bad}_A(\epsilon)\geq c(A)\frac{\epsilon}{\log(1/\epsilon)},$$
where the constant $c(A)>0$ depends only on $d$, $\mb{r}$, $\mb{s}$, and $A\in M_{m,n}(\mathbb{R})$, since $\eta_A$ also depends only on $A$. This completes the proof.
\end{proof}
\section{Upper bound for Hausdorff dimension of $\mb{Bad}^{b}(\epsilon)$}\label{sec:entropyboundb}
In this section, as explained in the introduction, we only consider the unweighted setting, that is,
\eq{
\mb{r}=(1/m,\dots,1/m)\quad\text{and}\quad \mb{s}=(1/n,\dots,1/n).
}
\subsection{Constructing measure with entropy lower bound}
Similar to Subsection \ref{sec4.1}, we will construct an $a$-invariant measure on $Y$ with a lower bound on the conditional entropy with respect to the $\sigma$-algebra $\mathcal{A}^U$ obtained in Proposition \ref{algebracst} with $L=U$.
To control the amount of escape of mass for the desired measure, we need a modification of \cite[Theorem 1.1]{KKLM} as Proposition \ref{KKLM'} below.
For any compact set $\mathfrak{S}\subset X$, any positive integer $k$,
and any $0<\eta<1$, let
\begin{align*}
F_{\eta,\mathfrak{S}}&\overset{\on{def}}{=}\set{A\in \mathbb{T}^{mn}\subset M_{m,n}(\mathbb{R}):\frac{1}{k}\displaystyle\sum_{i=0}^{k-1}\delta_{a^i x_A}
(X\setminus \mathfrak{S})<\eta \ \textrm{for infinitely many} \ k},\\
F_{\eta,\mathfrak{S}}^k&\overset{\on{def}}{=}\set{A\in \mathbb{T}^{mn}\subset M_{m,n}(\mathbb{R}):\frac{1}{k}\displaystyle\sum_{i=0}^{k-1}\delta_{a^i x_A}
(X\setminus \mathfrak{S})<\eta}.
\end{align*}
Given a compact set $\mathfrak{S}$ of $X$, $k\in\mathbb{N},\eta\in(0,1)$, and $t\in\mathbb{N}$, define the set
$$Z(\mathfrak{S},k,t,\eta):=\set{A\in\mathbb{T}^{mn}:\frac{1}{k}\displaystyle\sum_{i=0}^{k-1}\delta_{a^{ti} x_A}
(X\setminus \mathfrak{S})\ge\eta};$$
in other words, the set of $A\in\mathbb{T}^{mn}$ such that up to time $k$, the proportion of times $i$ for which the orbit point $a^{ti}x_A$ is in the complement of $\mathfrak{S}$ is at least $\eta$.
The following theorem is one of the main results in \cite{KKLM}.
\begin{thm}\cite[Theorem 1.5]{KKLM}\label{KKLMcov}
There exist $t_0>0$ and $C>0$ such that the following holds. For any $t>t_0$ there exists a compact set $\mathfrak{S}:=\mathfrak{S}(t)$ of $X$ such that for any $k\in\mathbb{N}$ and $\eta\in(0,1)$, the set $Z(\mathfrak{S},k,t,\eta)$ can be covered with $Ct^{3k}e^{(m+n-\eta)mntk}$ balls in $\mathbb{T}^{mn}$ of radius $e^{-(m+n)tk}$.
\end{thm}
The following proposition is a slightly stronger variant of \cite[Theorem 1.1]{KKLM} which will be needed later. We prove this using Theorem \ref{KKLMcov}.
\begin{prop}\label{KKLM'}
There exists a family of compact sets $\set{\mathfrak{S}_\eta}_{0<\eta< 1}$ of $X$ such that the following is true. For any $0<\eta\leq 1$,
\eqlabel{KKLM''}{\dim_H(\mathbb{T}^{mn}\setminus \limsup_{k\to\infty}\displaystyle\bigcap_{\eta'\ge\eta}F^k_{\eta',\mathfrak{S}_{\eta'}})\leq mn-\frac{\eta mn}{2(m+n)}.}
\end{prop}
\begin{proof}
For $\eta\in(0,1)$, let $t_\eta\ge 3$ be the smallest integer such that $\frac{3\log t_\eta}{t_\eta}< \frac{\eta mn}{4}$, and $\mathfrak{S}'_\eta$ be the set $\mathfrak{S}(t_\eta)$ of Theorem \ref{KKLMcov}. For $l\ge 3$, denote by $\eta_l>0$ the smallest real number such that $t_{\eta_l}=l$. Then $\eta_l\ge\frac{2\eta_{l-1}}{3}$ for any $l\ge 4$. We note that these $\mathfrak{S}'_\eta$ can be chosen to satisfy $\mathfrak{S}'_{\eta'} \subseteq \mathfrak{S}'_\eta$ for any $0<\eta\leq\eta'$. Hence, we can find a family of compact sets $\mathfrak{S}''_\eta$ such that $\mathfrak{S}'_{\eta_l} \subseteq \mathfrak{S}''_{\eta'}$ for any $l\ge 4$ and $\eta_{l}\leq\eta'< \eta_{l-1}$. For any $\eta\in(0,1)$, we can choose $\mathfrak{S}_\eta$ to be a compact set so that for any $0\leq t\leq t_\eta$ and $x\in\mathfrak{S}''_\eta$, $a^tx\in\mathfrak{S}_\eta$.
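Let us record why $\eta_l\ge\frac{2\eta_{l-1}}{3}$ holds for $l\ge 4$. Since $t\mapsto\frac{3\log t}{t}$ is decreasing for $t\ge 3$, the definition of $t_\eta$ shows that $\eta_l$ is (up to the boundary convention) the threshold value determined by $\frac{3\log l}{l}=\frac{\eta_l mn}{4}$, so that for $l\ge 4$
$$\frac{\eta_l}{\eta_{l-1}}=\frac{(l-1)\log l}{l\log (l-1)}\ge\frac{l-1}{l}\ge\frac{3}{4}\ge\frac{2}{3}.$$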
Now we will prove that this family of compact sets $\set{\mathfrak{S}_\eta}_{0<\eta<1}$ satisfies \eqref{KKLM''}. Since $\frac{1}{k}\displaystyle\sum_{i=0}^{k-1}\delta_{a^i x_A}
(X\setminus \mathfrak{S}_\eta)\ge \eta$ implies $$\frac{1}{\lceil\frac{k}{t_\eta}\rceil}\displaystyle\sum_{i=0}^{\lceil\frac{k}{t_\eta}\rceil-1}\delta_{a^{t_\eta i} x_A}
(X\setminus \mathfrak{S}''_\eta)\ge\eta,$$
we have $\mathbb{T}^{mn}\setminus F_{\eta,\mathfrak{S}_{\eta}}^{ k}\subseteq Z(\mathfrak{S}''_\eta,\lceil\frac{k}{t_\eta}\rceil,t_\eta,\eta)$ for any $0<\eta<1$ and $k\in\mathbb{N}$. For any $\eta_{l+1}< \eta'\leq \eta_l$, $t_{\eta'}=l$ and the set $Z(\mathfrak{S}''_{\eta'},\lceil\frac{k}{t_{\eta'}}\rceil,t_{\eta'},\eta')$ is contained in $Z(\mathfrak{S}'_{\eta_l},\lceil\frac{k}{t_{\eta_l}}\rceil,l,\eta_l)$. It follows that for any $0<\eta<1$
$$\mathbb{T}^{mn}\setminus\displaystyle\bigcap_{\eta'\ge\eta}F_{\eta',\mathfrak{S}_{\eta'}}^{k}\subseteq \displaystyle\bigcup_{\eta'\ge\eta}Z(\mathfrak{S}''_{\eta'},\lceil\frac{k}{t_{\eta'}}\rceil,t_{\eta'},\eta')\subseteq \displaystyle\bigcup_{l=3}^{t_\eta}Z(\mathfrak{S}'_{\eta_l},\lceil\frac{k}{l}\rceil,l,\eta_l),$$
hence
$$\mathbb{T}^{mn}\setminus\limsup_{k\to\infty}\displaystyle\bigcap_{\eta'\ge\eta}F^{k}_{\eta',\mathfrak{S}_{\eta'}}\subseteq \displaystyle\bigcup_{k_0\ge 1}\displaystyle\bigcap_{k=k_0}^{\infty}\displaystyle\bigcup_{l=3}^{t_\eta}Z(\mathfrak{S}'_{\eta_l},\lceil\frac{k}{l}\rceil,l,\eta_l).$$
By Theorem \ref{KKLMcov}, the set $\displaystyle\bigcup_{l=3}^{t_\eta}Z(\mathfrak{S}'_{\eta_l},\lceil\frac{k}{l}\rceil,l,\eta_l)$ can be covered with
\eq{\begin{aligned}
\displaystyle\sum_{l=3}^{t_\eta}Cl^{3\lceil\frac{k}{l}\rceil}e^{(m+n-\eta_l)mn\lceil\frac{k}{l}\rceil l}&\leq \displaystyle\sum_{l=3}^{t_\eta} Ct_\eta^3e^{\frac{3\log l}{l}k}e^{(m+n-\eta_l)mn(k+t_\eta)}\\
&\leq\displaystyle\sum_{l=3}^{t_\eta}Ct_\eta^3e^{(m+n)mnt_\eta}e^{(m+n-\frac{3\eta_l}{4})mnk}\\
&\leq Ct_\eta^4e^{(m+n)mnt_\eta}e^{(m+n-\frac{\eta}{2})mnk}
\end{aligned}}
balls in $\mathbb{T}^{mn}$ of radius $e^{-(m+n)k}$. Here we used $\eta_{t_{\eta}}\ge\frac{2\eta}{3}$ which follows from $\eta_l\ge\frac{2\eta_{l-1}}{3}$ for any $l\ge 4$. Thus, for any sufficiently large $k_0\in\mathbb{N}$
\eq{\begin{aligned}
\dim_H&\left(\displaystyle\bigcap_{k=k_0}^{\infty}\displaystyle\bigcup_{l=3}^{t_\eta}Z(\mathfrak{S}'_{\eta_l},\lceil\frac{k}{l}\rceil,l,\eta_l)\right)\leq\limsup_{k\to\infty}\frac{\log(Ct_\eta^4e^{(m+n)mnt_\eta}e^{(m+n-\frac{\eta}{2})mnk})}{-\log(e^{-(m+n)k})}\\
&=\limsup_{k\to\infty}\frac{\log(Ct_\eta^4e^{(m+n)mnt_\eta})+(m+n-\frac{\eta}{2})mnk}{(m+n)k}=mn-\frac{\eta mn}{2(m+n)},
\end{aligned}}
hence we get $\dim_H(\mathbb{T}^{mn}\setminus\displaystyle\limsup_{k\to\infty}\displaystyle\bigcap_{\eta'\ge\eta}F^{k}_{\eta',\mathfrak{S}_{\eta'}})\leq mn-\frac{\eta mn}{2(m+n)}$.
\end{proof}
The construction will basically follow that of Proposition \ref{prop5}. However, the additional step using Proposition \ref{KKLM'} is necessary to control the escape of mass, since a small amount of escape will be allowed.
\begin{prop}\label{prop2}
Let $\set{\mathfrak{S}_\eta}_{0<\eta< 1}$ be the family of compact subsets of $X$ as in Proposition~\ref{KKLM'}. For $b$ fixed and $\epsilon>0$, assume that $\dim_H \mb{Bad}^{b}(\epsilon)>\dim_H \mb{Bad}^{0}(\epsilon)$. Let $\eta_0:=2(m+n)(1-\frac{\dim_H \mb{Bad}^{b}(\epsilon)}{mn})$. Then there exists an $a$-invariant measure $\ov{\mu}\in \crly{P}(\overline{Y})$ such that
\begin{enumerate}
\item\label{supp} $\on{Supp}{\ov{\mu}}\subseteq \mathcal{L}_\epsilon\cup (\overline{Y}\setminus Y)$,
\item\label{cusp} $\pi_*\ov{\mu}(\overline{X}\setminus\mathfrak{S}_{\eta'})\leq \eta'$ for any $\eta_0\leq\eta'<1$, in particular, there exist $\mu\in\crly{P}(Y)$ and $0\leq \widehat{\eta}\leq\eta_0$ such that
$$\ov{\mu}=(1-\widehat{\eta})\mu+\widehat{\eta}\delta_\infty,$$
where $\delta_\infty$ is the Dirac delta measure on $\overline{Y}\setminus Y$.
\item\label{entropy} For any $0<r<1$, let $\mathcal{A}^U$ be the $\sigma$-algebra of $Y$ constructed in Proposition \ref{algebracst} for $r$, $\mu$, and $L=U$. Then
$$h_{\ov{\mu}}(a^{-1}|\overline{\mathcal{A}^U})\ge(1-\widehat{\eta}^{\frac{1}{2}})(d-\frac{1}{2}\eta_0-d\widehat{\eta}^{\frac{1}{2}}).$$
\end{enumerate}
\end{prop}
\begin{proof}
For $\epsilon>0$, denote by $R$ the set $\mb{Bad}^{b}(\epsilon)\setminus \mb{Bad}_{0}^{b}(\epsilon)$, and let
$$R^{T}:=\set{A\in R \cap \mathbb{T}^{mn} \subset M_{m,n}(\mathbb{R}) : \forall t\ge T, a^t y_{A,b}\in \mathcal{L}_\epsilon}.$$ The sequence $\set{R^T}_{T\ge 1}$ is increasing, and $R=\displaystyle\bigcup_{T=1}^\infty R^{T}$ by Proposition~\ref{prop1}. Since $\dim_H \mb{Bad}^{b}(\epsilon)>\dim_H \mb{Bad}^{0}(\epsilon)\ge\dim_H \mb{Bad}_{0}^{b}(\epsilon)$, it follows that $\dim_H R=\dim_H \mb{Bad}^{b}(\epsilon)$. Thus for any
$0<\gamma<\frac{mn}{2(m+n)}-(mn-\dim_H \mb{Bad}^{b}(\epsilon))$, there exists $T_\gamma\ge 1$ satisfying
\eqlabel{RTdim}{\dim_H R^{T_\gamma}>\dim_H \mb{Bad}^{b}(\epsilon)-\gamma.}
Let $\eta=2(m+n)(1-\frac{\dim_H \mb{Bad}^{b}(\epsilon)-\gamma}{mn})$.
Note that $0<\eta<1$ in the above range of $\gamma$.
For $k\in\mathbb{N}$, write $\widetilde{F}_\eta^k:=\displaystyle\bigcap_{\eta'\ge\eta}F^k_{\eta',\mathfrak{S}_{\eta'}}$ for simplicity. Recall that we have
\eqlabel{Fetadim}{\dim_H (\mathbb{T}^{mn}\setminus\displaystyle\limsup_{k\to\infty}\widetilde{F}_\eta^k) \leq mn-\frac{\eta mn}{2(m+n)}=\dim_H \mb{Bad}^{b}(\epsilon)-\gamma} by Proposition~\ref{KKLM'}. It follows from \eqref{RTdim} and \eqref{Fetadim} that $$\dim_H(R^{T_\gamma}\cap \limsup_{k\to\infty}\widetilde{F}_\eta^k)>\dim_H \mb{Bad}^{b}(\epsilon)-\gamma.$$
Since $R^{T_\gamma}\cap \displaystyle\limsup_{k\to\infty}\widetilde{F}_\eta^k=\displaystyle\bigcap_{N=1}^\infty \displaystyle\bigcup_{k=N}^{\infty}(R^{T_\gamma}\cap \widetilde{F}_\eta^k)$, we can find an increasing sequence of positive integers $\set{k_i}\to \infty$ such that
$$\dim_H (R^{T_\gamma}\cap \widetilde{F}_{\eta}^{k_i})> \dim_H \mb{Bad}^{b}(\epsilon)-\gamma.$$
For each $k_i\ge T_\gamma$ let $S_i$ be a maximal $e^{-k_i}$-separated subset of $R^{T_\gamma}\cap \widetilde{F}_{\eta}^{k_i}$ with respect to the quasi-distance $d_{\mb{r}\otimes\mb{s}}$. Then by Lemma \ref{relating dimensions},
\eqlabel{eqn1}{\begin{aligned}
\displaystyle\liminf_{i\to\infty}\frac{\log |S_i|}{k_i}
&\ge\underline{\dim}_{\mb{r}\otimes\mb{s}} (R^{T_\gamma}\cap \widetilde{F}_\eta^{k_i})\\
&> m+n-(r_1+s_1)(mn-\dim_H \mb{Bad}^{b}(\epsilon)+\gamma)\\
&= m+n-\frac{m+n}{mn}(mn-\dim_H \mb{Bad}^{b}(\epsilon)+\gamma)\\
&=\frac{m+n}{mn}(\dim_H \mb{Bad}^{b}(\epsilon)-\gamma).
\end{aligned}}
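The second equality above uses the value of $r_1+s_1$ in the unweighted setting (cf.\ Subsection \ref{sec3.2}), where the weights are $r_i=\frac{1}{m}$ and $s_j=\frac{1}{n}$, so that
$$r_1+s_1=\frac{1}{m}+\frac{1}{n}=\frac{m+n}{mn}.$$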
Let $\nu_i\overset{\on{def}}{=} \frac{1}{|S_i|}\displaystyle\sum_{y\in D_i}\delta_{y}= \frac{1}{|S_i|}\displaystyle\sum_{A\in S_i}\delta_{y_{A,b}}$ be the normalized counting measure on the set $D_i:=\set{y_{A,b}:A\in S_i}\subset Y$ and let
$$\mu_i\overset{\on{def}}{=} \frac{1}{k_i}\displaystyle\sum_{k=0}^{k_i-1}a^k_{*}\nu_i.$$
By extracting a subsequence if necessary, there exists a probability measure $\mu^{\gamma}\in\crly{P}(\overline{Y})$ which is a weak*-accumulation point of $\set{\mu_i}$. The measure $\mu^{\gamma}$ is clearly $a$-invariant since $a_*\mu_i-\mu_i$ converges to the zero measure.
Choose any sequence of positive real numbers $(\gamma_j)_{j\ge 1}$ converging to zero, and let $(\eta_j)_{j\ge 1}$ be the corresponding sequence defined by $$\eta_j=2(m+n)(1-\frac{\dim_H \mb{Bad}^{b}(\epsilon)-\gamma_j}{mn}).$$ Let $\set{\mu^{\gamma_j}}$ be the family of $a$-invariant probability measures on $\overline{Y}$ obtained from the above construction for each $\gamma_j$. Extracting a subsequence again if necessary, we may take a weak$^*$-limit measure $\ov{\mu}\in\crly{P}(\overline{Y})$ of $\set{\mu^{\gamma_j}}$. We prove that $\ov{\mu}$ is the desired measure. The measure $\ov{\mu}$ is clearly $a$-invariant.\\
(\ref{supp}) We show that for any $\gamma$, $\mu^\gamma(Y\setminus\mathcal{L}_\epsilon)=0$. For any $A\in S_i\subseteq R^{T_\gamma}$, $a^T y_{A,b}\in \mathcal{L}_\epsilon$ holds for $T>T_\gamma$. Thus
$$\mu_i(Y\setminus\mathcal{L}_\epsilon)=\frac{1}{k_i}\displaystyle\sum_{k=0}^{k_i-1}(a^k)_{*}\nu_i(Y\setminus\mathcal{L}_\epsilon)=\frac{1}{k_i}\displaystyle\sum_{k=0}^{T_\gamma}(a^k)_{*}\nu_i(Y\setminus\mathcal{L}_\epsilon)\leq\frac{T_\gamma+1}{k_i}.$$ Taking the limit as $i\to \infty$, we have $\mu^\gamma(Y\setminus\mathcal{L}_\epsilon)=0$ for arbitrary $\gamma$, hence,
$$\ov{\mu}(Y\setminus\mathcal{L}_\epsilon)=\lim_{j\to\infty}\mu^{\gamma_j}(Y\setminus\mathcal{L}_\epsilon)=0.$$\\
(\ref{cusp}) For any $\gamma$ and $i\in\mathbb{N}$, if $A\in S_i\subset \widetilde{F}_\eta^{k_i}=\displaystyle\bigcap_{\eta'\ge\eta}F_{\eta',\mathfrak{S}_{\eta'}}^{k_i}$, then for all $\eta\leq\eta'< 1$, $\frac{1}{k_i}\displaystyle\sum_{k=0}^{k_i-1}\delta_{a^k x_A}(X\setminus \mathfrak{S}_{\eta'})<\eta'$. Therefore for all $i\in\mathbb{N}$ and $\eta\leq\eta'< 1$,
\begin{align*}
\pi_*\mu_i(X\setminus \mathfrak{S}_{\eta'})
&=\frac{1}{|S_i|}\displaystyle\sum_{y\in D_i} \frac{1}{k_i}\displaystyle\sum_{k=0}^{k_i-1} \pi_*\delta_{a^k y}(X\setminus \mathfrak{S}_{\eta'})\\
&=\frac{1}{|S_i|}\displaystyle\sum_{A\in S_i} \frac{1}{k_i}\displaystyle\sum_{k=0}^{k_i-1} \delta_{a^k x_A}(X\setminus \mathfrak{S}_{\eta'})
<\eta',
\end{align*}
hence $\pi_*\mu^\gamma(\overline{X}\setminus \mathfrak{S}_{\eta'})\leq\displaystyle\liminf_{i\to\infty}\pi_*\mu_i(X\setminus \mathfrak{S}_{\eta'})\leq\eta'$. Since $\eta_j$ converges to $\eta_0$, we have
$$\pi_*\ov{\mu}(\overline{X}\setminus \mathfrak{S}_{\eta'})\leq \eta'$$ for any $\eta'>\eta_0$. Hence,
$$\ov{\mu}(\overline{Y}\setminus Y)\leq \lim_{\eta'\to\eta_0}\pi_*\ov{\mu}(\overline{X}\setminus \mathfrak{S}_{\eta'})\leq \eta_0,$$
so we have a decomposition $\ov{\mu}=(1-\widehat{\eta})\mu+\widehat{\eta}\delta_\infty$ for some $\mu\in\crly{P}(Y)$ and $0\leq\widehat{\eta}\leq\eta_0$.
(\ref{entropy}) It remains to verify the entropy lower bound.
Suppose that $\mathcal{Q}$ is any finite partition of $Y$ satisfying:
\begin{itemize}
\item $\mathcal{Q}$ contains an atom $Q_\infty$ of the form $\pi^{-1}(Q^0_\infty)$, where $X\smallsetminus Q^0_\infty$ has compact closure,
\item $\forall Q\in \mathcal{Q}\smallsetminus\set{Q_\infty}$, $\diam Q<r$, with $r\in (0,\frac{1}{2})$ such that any $d_{\mb{r}\otimes\mb{s}}$-ball of radius $3r$ has Euclidean diameter smaller than the injectivity radius on $Y\setminus Q_{\infty}$,
\item $\forall Q\in\mathcal{Q},\forall j\ge1, \mu^{\gamma_j}(\partial Q)=0$.
\end{itemize}
We will first prove the following statement. For all $q\ge 1$,
\eqlabel{sttlowbdd'}{\frac{1}{q}H_{\mu^{\gamma}}(\overline{\mathcal{Q}}_0^{q-1}|\overline{\mathcal{A}^U})
\ge (m+n)(1-\mu^{\gamma}(\overline{Q_\infty})^{\frac{1}{2}})\left(\frac{\dim_H \mb{Bad}^{b}(\epsilon)-\gamma}{mn}-\mu^{\gamma}(\overline{Q_\infty})^{\frac{1}{2}}\right).}
This is clear if $\ov{\mu}(Q_\infty)=1$, so assume that $\ov{\mu}(Q_\infty)<1$; hence for all large enough $j\ge1$, $\mu^{\gamma_j}(Q_\infty)<1$. We now fix such $j\ge 1$ and temporarily write $\gamma=\gamma_j$.
Let $\rho>0$ be small enough so that $\beta : = \mu^\gamma(Q_\infty)+\rho < 1$. Then
$$\beta = \mu^\gamma(Q_\infty)+\rho>\mu_i(Q_\infty)=\frac{1}{k_i|S_i|}\displaystyle\sum_{y\in D_i, 0\le k<k_i}\delta_{a^k y}(Q_\infty)$$ holds for large enough $i$. In other words, there are at most $\beta k_i|S_i|$ points $a^k y$ in $Q_\infty$ with $y\in D_i$ and $0\leq k<k_i$.
Let $S'_i\subset S_i$ be the set of $A\in S_i$ such that
$$|\{0\leq k\leq k_i-1: a^ky_{A,b} \in Q_\infty \} |< \beta^{\frac{1}{2}}k_i.$$
Then we have $|S_i\setminus S_i'|\leq \beta^{\frac{1}{2}}|S_i|$ by the pigeonhole principle, hence
\eqlabel{Sicount}{|S'_i|\ge (1-\beta^{\frac{1}{2}})|S_i|.}
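The counting behind the pigeonhole step can be spelled out as follows: if $N$ denotes the number of $A\in S_i$ with $|\{0\leq k< k_i: a^ky_{A,b}\in Q_\infty\}|\ge\beta^{\frac{1}{2}}k_i$, then counting the pairs $(A,k)$ with $a^k y_{A,b}\in Q_\infty$ gives
$$N\beta^{\frac{1}{2}}k_i\leq \displaystyle\sum_{y\in D_i, 0\le k<k_i}\delta_{a^k y}(Q_\infty)\leq\beta k_i|S_i|,$$
so that $N\leq\beta^{\frac{1}{2}}|S_i|$.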
Let $\nu'_i\overset{\on{def}}{=} \frac{1}{|S'_i|}\displaystyle\sum_{y\in D'_i}\delta_y$ be the normalized counting measure on $D'_i$, where $D'_i:=\set{y_{A,b}:A\in S'_i}\subset Y$; then
$\nu_i(Q)\ge \frac{|S'_i|}{|S_i|}\nu'_i(Q)$ for every measurable set $Q\subseteq Y$.
Thus, for an arbitrary countable partition $\mathcal{Q}$ of $Y$,
\eqlabel{eqn2}{
\begin{aligned}
H_{\nu_i}(\mathcal{Q})&=-\displaystyle\sum_{\nu_i(Q)\leq\frac{1}{e}}\log(\nu_i(Q))\nu_i(Q)-\displaystyle\sum_{\nu_i(Q)>\frac{1}{e}}\log(\nu_i(Q))\nu_i(Q)\\&\ge -\displaystyle\sum_{\nu_i(Q)\leq\frac{1}{e}}\log(\frac{|S'_i|}{|S_i|}\nu'_i(Q))\frac{|S'_i|}{|S_i|}\nu'_i(Q)
\\&=-\frac{|S'_i|}{|S_i|}\displaystyle\sum_{\nu_i(Q)\leq\frac{1}{e}}\log(\nu'_i(Q))\nu'_i(Q)-\frac{|S'_i|}{|S_i|}\log{\frac{|S'_i|}{|S_i|}}\displaystyle\sum_{\nu_i(Q)\leq\frac{1}{e}}\nu'_i(Q)
\\&\geq \frac{|S'_i|}{|S_i|}\Bigl\{H_{\nu'_i}(\mathcal{Q})+\displaystyle\sum_{\nu_i(Q)>\frac{1}{e}}\log(\nu'_i(Q))\nu'_i(Q)\Bigr\}\\&\ge (1-\beta^{\frac{1}{2}})(H_{\nu'_i}(\mathcal{Q})-\frac{2}{e}).
\end{aligned}
}
In the last inequality, we use the fact that $\nu'_i$ is a probability measure, so there can be at most two atoms $Q$ of the partition for which $\nu'_i (Q) > \frac{1}{e}$.
Let $\mathcal{A}^U=(\mathcal{P}^U)_0^\infty$ be the $\sigma$-algebra constructed in Proposition \ref{algebracst} with respect to $\mu$, where $\mathcal{P}^U$ is a refinement of $\mathcal{P}=\set{P_1,\cdots,P_N,P_\infty}$ such that $[y]_{\mathcal{P}^U}=[y]_{\mathcal{P}}\cap B_{2r}^U$ for any $y\in Y\setminus P_\infty$.
For any $M\in\mathbb{N}$ and $y\in Y\setminus P_\infty$, we have $[y]_{(\mathcal{P}^U)_0^M}=[y]_{(\mathcal{P})_0^M}\cap B_{2r}^U$. Since the support of $\nu_i$ is a finite set of points on a single compact $U$-orbit, $(\mathcal{P}^U)_0^M=\mathcal{P}_0^M$ with respect to $\nu_i$. Hence,
\eqlabel{staticbdU}{H_{\nu_i}((\mathcal{P}^U)_0^M)=H_{\nu_i}(\mathcal{P}_0^M)\leq M\log(N+1).}
From Lemma \ref{CovLem} with $L=U$ and \eqref{Sicount}, if $Q$ is any non-empty atom of $\mathcal{Q}_0^{k_i-1}\vee \mathcal{P}_0^M$, fixing any $y\in Q$, for any $D>m+n$,
\eq{
D_{i}\cap Q = D_{i}\cap [y]_{\mathcal{Q}_0^{k_i-1}\vee \mathcal{P}_0^M} \subset E_{y,k_{i}-1}
}
can be covered by $Ce^{D\sqrt{\beta} k_{i}}$ balls of radius $r^{\frac{1}{r_1 +s_1}}e^{-k_{i}}$ for $d_{\mb{r}\otimes\mb{s}}$, where $C$ is a constant depending on $Q^0_\infty,r,$ and $D$, but not on $k_i$. Since $D_{i}$ is $e^{-k_{i}}$-separated with respect to $d_{\mb{r}\otimes\mb{s}}$ and $r^{\frac{1}{r_1 +s_1}}<\frac{1}{2}$, we get
$$\mathrm{Card}(D_i \cap Q)\leq Ce^{D\sqrt{\beta} k_i },$$
and therefore, since $(\mathcal{P}^U)_0^M=\mathcal{P}_0^M$ with respect to $\nu_i'$,
\eqlabel{staticbdU2}{
H_{\nu_i'}(\mathcal{Q}_0^{k_i-1}\vee (\mathcal{P}^U)_0^M)=H_{\nu_i'}(\mathcal{Q}_0^{k_i-1}\vee \mathcal{P}_0^M)\ge\log |S_i'|-D\beta^{\frac{1}{2}} k_i-\log C.}
Combining \eqref{eqn2}, \eqref{staticbdU}, and \eqref{staticbdU2}, we have
\eqlabel{sttbdd2}{\begin{aligned}
H_{\nu_i}(\mathcal{Q}_0^{k_i-1}|(\mathcal{P}^U)_0^M)&=H_{\nu_i}(\mathcal{Q}_0^{k_i-1}\vee (\mathcal{P}^U)_0^M)-H_{\nu_i}((\mathcal{P}^U)_0^M)\\
&\ge (1-\beta^{\frac{1}{2}})(H_{\nu_i'}(\mathcal{Q}_0^{k_i-1}\vee (\mathcal{P}^U)_0^M)-\frac{2}{e})-M\log(N+1)\\
&\ge (1-\beta^{\frac{1}{2}})(\log |S_i'|-D\beta^{\frac{1}{2}} k_i-\log C-\frac{2}{e})-M\log(N+1).
\end{aligned}}
For $q\ge 1$, write the Euclidean division of large enough $k_i-1$ by $q$ as
$$k_i-1=qk'+s \ \textrm{with} \ s\in\set{0,\cdots,q-1}.$$
By subadditivity of the entropy with respect to the partition, for each $p\in\set{0,\cdots,q-1}$,
$$H_{\nu_i}(\mathcal{Q}_0^{k_i-1}|(\mathcal{P}^U)_0^M)\leq H_{a^{p}_*\nu_i}(\mathcal{Q}_0^{q-1}|(\mathcal{P}^U)_0^M)+\cdots+H_{a^{p+qk'}_*\nu_i}(\mathcal{Q}_0^{q-1}|(\mathcal{P}^U)_0^M)+2q\log |\mathcal{Q}|.$$
Summing these inequalities for $p=0,\cdots,q-1$, and using the concavity of entropy with respect to the measure, we obtain
\begin{align*}
qH_{\nu_i}(\mathcal{Q}_0^{k_i-1}|(\mathcal{P}^U)_0^M)
&\leq\displaystyle\sum_{k=0}^{k_i-1}H_{a^k_* \nu_i}(\mathcal{Q}_0^{q-1}|(\mathcal{P}^U)_0^M)+2q^2\log |\mathcal{Q}|\\
&\leq k_iH_{\mu_i}(\mathcal{Q}_0^{q-1}|(\mathcal{P}^U)_0^M)+2q^2\log |\mathcal{Q}|
\end{align*}
and it follows from \eqref{sttbdd2} that
\begin{align*}
&\frac{1}{q}H_{\mu_i}(\mathcal{Q}_0^{q-1}|(\mathcal{P}^U)_0^M)
\ge \frac{1}{k_i}H_{\nu_i}(\mathcal{Q}_0^{k_i-1}|(\mathcal{P}^U)_0^M)-\frac{2q\log |\mathcal{Q}|}{k_i}\\
&\ge \frac{1}{k_i}\Bigl\{(1-\beta^{\frac{1}{2}})(\log|S_i|-D\beta^{\frac{1}{2}}k_i-\log{C}-\frac{2}{e})-M\log(N+1)-2q\log|\mathcal{Q}|\Bigr\}
\end{align*}
Now we can let $i\to\infty$, because the atoms $Q$ of $\mathcal{Q}$, and hence of $\mathcal{Q}_0^{q-1}$, satisfy $\mu^\gamma(\partial Q)=0$. Also, the constants $C,M,N$ and $|\mathcal{Q}|$ are independent of $k_i$. Thus we have
\begin{align*}
\frac{1}{q}H_{\mu^\gamma}(\overline{\mathcal{Q}}_0^{q-1}|(\overline{\mathcal{P}^U})_0^M)
&\ge (1-\beta^{\frac{1}{2}})\left(\frac{m+n}{mn}(\dim_H \mb{Bad}^{b}(\epsilon)-\gamma)-D\beta^{\frac{1}{2}}\right),
\end{align*}
using the inequality \eqref{eqn1}. Taking $\rho\to 0$ and $D\to m+n$, we get
$$\frac{1}{q}H_{\mu^\gamma}(\overline{\mathcal{Q}}_0^{q-1}|(\overline{\mathcal{P}^U})_0^M)
\ge (m+n)(1-\mu^\gamma(\overline{Q_\infty})^{\frac{1}{2}})\left(\frac{\dim_H \mb{Bad}^{b}(\epsilon)-\gamma}{mn}-\mu^\gamma(\overline{Q_\infty})^{\frac{1}{2}}\right).$$
Recall that $\gamma=\gamma_j$, and by taking $j\to\infty$ so that $\gamma_j\to0$, we have
$$\frac{1}{q}H_{\ov{\mu}}(\overline{\mathcal{Q}}_0^{q-1}|(\overline{\mathcal{P}^U})_0^M)
\ge (m+n)(1-\ov{\mu}(\overline{Q_\infty})^{\frac{1}{2}})(\frac{1}{mn}\dim_H \mb{Bad}^{b}(\epsilon)-\ov{\mu}(\overline{Q_\infty})^{\frac{1}{2}}).$$
Since $(\overline{\mathcal{P}^U})_0^M\nearrow\overline{\mathcal{A}^U}$ as $M\to\infty$, we finally get \eqref{sttlowbdd'}, i.e.
$$\frac{1}{q}H_{\ov{\mu}}(\overline{\mathcal{Q}}_0^{q-1}|\overline{\mathcal{A}^U})
\ge (m+n)(1-\ov{\mu}(\overline{Q_\infty})^{\frac{1}{2}})(\frac{1}{mn}\dim_H \mb{Bad}^{b}(\epsilon)-\ov{\mu}(\overline{Q_\infty})^{\frac{1}{2}}).$$
As in the proof of Proposition \ref{prop5}, we take a finite partition $\mathcal{Q}$ of $Y$ satisfying the three bullet-conditions above, and also take $Q^0_\infty\subset X$ sufficiently small so that $\ov{\mu}(\overline{Q_\infty})$ is sufficiently close to $\widehat{\eta}$. It follows that
\eq{\begin{aligned}
h_{\ov{\mu}}(a^{-1}|\overline{\mathcal{A}^U})&\ge(m+n)(1-\widehat{\eta}^{\frac{1}{2}})(\frac{1}{mn}\dim_H \mb{Bad}^{b}(\epsilon)-\widehat{\eta}^{\frac{1}{2}})\\
&=(1-\widehat{\eta}^{\frac{1}{2}})(d-\frac{1}{2}\eta_0-d\widehat{\eta}^{\frac{1}{2}}).
\end{aligned}}
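The last equality is a direct computation with $d=m+n$: since $\eta_0=2(m+n)(1-\frac{\dim_H \mb{Bad}^{b}(\epsilon)}{mn})$, we have $\frac{\dim_H \mb{Bad}^{b}(\epsilon)}{mn}=1-\frac{\eta_0}{2d}$, hence
$$(m+n)\cdot\frac{\dim_H \mb{Bad}^{b}(\epsilon)}{mn}=d\left(1-\frac{\eta_0}{2d}\right)=d-\frac{\eta_0}{2}.$$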
\end{proof}
\subsection{Effective equidistribution and the proof of Theorem \ref{corb1}}
In this subsection, we recall some effective equidistribution results which are necessary for the proof of Theorem \ref{corb1}. Let $\mathfrak{g}=\on{Lie}\,G(\mathbb{R})$ and choose an orthonormal basis for $\mathfrak{g}$. Define the (left) differentiation action of $\mathfrak{g}$ on $C_c^\infty(X)$ by $Zf(x)=\frac{d}{dt}f(\textrm{exp}(tZ)x)|_{t=0}$ for $f\in C_c^\infty(X)$ and $Z$ in the orthonormal basis. This also defines, for any $l\in\mathbb{N}$, the $L^2$-Sobolev norms $\mathcal{S}_l$ on $C^\infty_c(Y)$:
\eqlabel{Sobol}{\mathcal{S}_l(f)^2\overset{\on{def}}{=}\displaystyle\sum_{\mathcal{D}}\|(\textrm{ht}\circ \pi)^{l}\mathcal{D}(f)\|^2_{L^2},}
where $\mathcal{D}$ ranges over all monomials of degree $\leq l$ in the chosen basis, and $\textrm{ht}\circ \pi$ is the function assigning to a grid the reciprocal of the length of the shortest vector in the corresponding lattice.
Let us define the function $\zeta : (\mathbb{T}^{d}\setminus \mathbb{Q}^d)\times\mathbb{R}^{+}\to\mathbb{N}$ by
\eq{
\zeta(b,T):= \min\left\{N\in\mathbb{N} : \min_{1\leq q\leq N}\|qb\|_{\mathbb{Z}}\leq\frac{T^2}{N} \right\}.
}
Then there exists a sufficiently large $l\in\mathbb{N}$ such that the following equidistribution theorems hold.
\begin{thm}\cite[Theorem 1.3]{Kim21}\label{Teffirr}
Let $K$ be a bounded subset in $SL_d(\mathbb{R})$ and $V\subset U$ be a fixed neighborhood of the identity in $U$ with smooth boundary and compact closure. Then, for any $t\ge 0$, $f\in C_c^\infty(Y)$, and $y=gw(b)\Gamma$ with $g\in K$ and $b\in\mathbb{T}^d\setminus\mathbb{Q}^d$, there exists a constant $\alpha_1>0$ depending only on $d$ and $V$ such that
\eqlabel{effirr}{\frac{1}{m_U(V)}\int_V f(a_tuy)dm_U(u)=\int_Y fdm_Y+O(\mathcal{S}_l(f)\zeta(b,e^{\frac{t}{2m}})^{-\alpha_1}).}
The implied constant in \eqref{effirr} only depends on $d$, $V$, and $K$.
\end{thm}
For $q\in\mathbb{N}$, define
\begin{align*}
X_q &:= \left\{gw(\mb{p}/q)\Gamma\in Y : g\in SL_{d}(\mathbb{R}), \mb{p}\in\mathbb{Z}^d, \gcd(\mb{p},q)=1\right\},\\
\Gamma_q &:= \{\gamma\in SL_{d}(\mathbb{Z}): \gamma e_1 \equiv e_1 \;(\bmod\; q)\}.
\end{align*}
\begin{lem}\label{IdLem}
The subspace $X_q\subset Y$ can be identified with the quotient space $SL_d(\mathbb{R})/\Gamma_{q}$. In particular, this identification is locally bi-Lipschitz.
\end{lem}
\begin{proof}
The action of $SL_d(\mathbb{R})$ on $X_q$ by left multiplication is transitive, and $\on{Stab}_{SL_d(\mathbb{R})}(w(e_1 /q)\Gamma)=\Gamma_q$. To see the transitivity, it is enough to show that $SL_d(\mathbb{Z})e_1 \equiv \{\mb{p}\in\mathbb{Z}^{d}: \gcd(\mb{p},q)=1\} \;(\bmod\; q)$. Write $D=\gcd(\mb{p})$ and $\mb{p}'=\mb{p}/D$. Since $\gcd(D,q)=1$, there are $a,b\in\mathbb{Z}$ such that $aD+bq=1$. Take $A\in GL_d(\mathbb{Z})$ such that $\det(A)=D$ and $Ae_1=\mb{p}$. If we set $\mb{u}=b\mb{p}'+(a-1)Ae_2$, then we have $\mb{p}+q\mb{u}=(A+\mb{u}\times{^{t}}(qe_1 + e_2))e_1$ and $A+\mb{u}\times{^{t}}(qe_1 + e_2)\in SL_d(\mathbb{Z})$, which proves the transitivity. The bi-Lipschitz property of the identification follows since both $X_q$ and $SL_d(\mathbb{R})/\Gamma_q$ are locally isometric to $SL_d(\mathbb{R})$.
\end{proof}
\begin{thm}\cite[Theorem 2.3]{KM12}\label{Teffrat}
For $q\in\mathbb{N}$, let $SL_d(\mathbb{R})/\Gamma_q\simeq X_q\subset Y$. Let $K$ be a bounded subset in $SL_d(\mathbb{R})$ and $V\subset U$ be a fixed neighborhood of the identity in $U$ with smooth boundary and compact closure. Then, for any $t\ge 0$, $f\in C_c^\infty(Y)$, and $y=gw(\frac{\mathbf{p}}{q})\Gamma$ with $g\in K$ and $\mathbf{p}\in\mathbb{Z}^d$, there exists a constant $\alpha_2>0$ depending only on $d$ and $V$ such that
\eqlabel{effrat}{\frac{1}{m_U(V)}\int_V f(a_tuy)dm_U(u)=\int_{X_q} fdm_{X_q}+O(\mathcal{S}_l(f)[\Gamma_1:\Gamma_q]^{\frac{1}{2}}e^{-\alpha_2 t}).}
The implied constant in \eqref{effrat} only depends on $d$, $V$, and $K$.
\end{thm}
\begin{proof}
This result was obtained in \cite[Theorem 2.3]{KM12} in the case $q=1$. For general $q$, we refer the reader to \cite[Theorem 5.4]{KM21}, which gives a sketch of the required modification. \cite[Theorem 5.4]{KM21} is actually stated for congruence subgroups different from our $\Gamma_q$, but the modification still works, and the additional factor in the error term is likewise $[\Gamma_1:\Gamma_q]^{\frac{1}{2}}$.
\end{proof}
Recall the definition of $\mathcal{L}_\epsilon$ in Subsection \ref{sec3.2}. Since we assume the unweighted setting, $\mathcal{L}_{\epsilon}=\{y\in Y : \forall v\in \Lambda_{y},\ \|v\|\geq \epsilon^{1/d} \}$.
\begin{lem}\label{bvol}
For any small enough $\epsilon>0$ and $q\in\mathbb{N}$, $m_Y(Y_{\leq\epsilon^{-1}}\setminus\mathcal{L}_\epsilon)\asymp\epsilon$ and $m_{X_q}(Y_{\leq\epsilon^{-1}}\setminus\mathcal{L}_\epsilon)\gg q^{-d}\epsilon$.
\end{lem}
\begin{proof}
Using the Siegel integral formula \cite[Lemma 2.1]{MM11} with $f=\mathds{1}_{B_{\epsilon^{1/d}}(0)}$, the indicator function of the $\epsilon^{1/d}$-ball centered at $0$ in $\mathbb{R}^d$, we have $m_Y(Y_{\leq\epsilon^{-1}}\setminus\mathcal{L}_\epsilon)\ll\epsilon$. On the other hand, by \cite[Theorem 1]{A15} with $A=B_{\epsilon^{1/d}}(0)$, we have $m_{Y}(\mathcal{L}_\epsilon)< \frac{1}{1+2^d \epsilon}$. It follows from the Siegel integral formula on $X$ that $m_{Y}(Y_{>\epsilon^{-1}})=m_{X}(X_{>\epsilon^{-1}})\leq 2^d\epsilon^d$. Since $d\geq 2$, we have
\eq{
m_Y (Y_{\leq\epsilon^{-1}}\setminus\mathcal{L}_\epsilon) \geq m_Y (Y\setminus\mathcal{L}_\epsilon)-m_Y (Y_{>\epsilon^{-1}}) > \frac{2^d \epsilon}{1+2^d \epsilon}-2^d \epsilon^d \gg \epsilon
} for small enough $\epsilon >0$, which concludes the first assertion.
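For concreteness, the final inequality can be quantified: for $\epsilon>0$ small enough that $2^d\epsilon\leq 1$ and $2^d\epsilon^{d-1}\leq 2^{d-2}$ (possible since $d\geq 2$), we have
$$\frac{2^d \epsilon}{1+2^d \epsilon}-2^d \epsilon^d \geq 2^{d-1}\epsilon-2^{d-2}\epsilon= 2^{d-2}\epsilon.$$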
To prove the second assertion, observe that for any $x\in X_{>\epsilon^{-1/d}}$, $|\pi_{q}^{-1}(x)\cap (Y\setminus\mathcal{L}_\epsilon)|\geq 1$, where $\pi_q : X_q \to X$ is the natural projection. Since $|\pi_{q}^{-1}(x)|\leq q^{d}$ and $m_X (x\in X: \epsilon^{-1/d}<\textrm{ht}(x)\leq \epsilon^{-1}) \asymp \epsilon$, we have
\eq{
m_{X_q}(Y_{\leq\epsilon^{-1}}\setminus\mathcal{L}_\epsilon) \geq q^{-d} m_X (x\in X: \epsilon^{-1/d}<\textrm{ht}(x)\leq \epsilon^{-1}) \gg q^{-d}\epsilon.
}
\end{proof}
\begin{prop}\label{btauest}
There exist $M,M'>0$ such that the following holds. Let $\mathcal{A}$ be a countably generated sub-$\sigma$-algebra of the Borel $\sigma$-algebra which is $a^{-1}$-descending and $U$-subordinate. Fix a compact set $K\subset Y$. Let $1<R'<R$, $k=\lfloor\frac{mn\log R'}{4d}\rfloor$, and $y\in a^{4k}K$. Suppose that $B^{U}_{R'}\cdot y\subset[y]_\mathcal{A}\subset B^{U}_{R}\cdot y$ holds. For $\epsilon>0$ let $\Omega\subset Y$ be a set satisfying $\Omega\cup a^{-3k}\Omega\subseteq\mathcal{L}_{\frac{\epsilon}{2}}$. If $R'\ge\epsilon^{-M'}$, then $$1-\tau^{\mathcal{A}}_y(\Omega)\gg \left(\frac{R'}{R}\right)^{mn}\epsilon^{dM+1},$$ where the implied constant only depends on $K$.
\end{prop}
\begin{proof}
Denote by $V_y\subset U$ the shape of the $\mathcal{A}$-atom of $y$, so that $V_y\cdot y=[y]_{\mathcal{A}}$. Set $V=B^{U}_{1}$. We have $B^U_{e^{-\frac{4d}{mn}}R'} \subseteq a^{4k}Va^{-4k}\subseteq V_y$ since $\frac{mn\log R'}{d}-4\leq4k\leq \frac{mn\log R'}{d}$. It follows that
\eqlabel{tauest1}{
\begin{aligned}
1-&\tau_y^{\mathcal{A}}(\Omega)=\frac{1}{m_U(V_y)}\int_{V_y} \mathds{1}_{Y\setminus\Omega}(uy)dm_U(u)\\
&\geq \frac{1}{m_U(B_R^{U})}\int_{a^{4k}Va^{-4k}} \mathds{1}_{Y\setminus\Omega}(uy)dm_U(u)\\
&\geq e^{-4d}\left(\frac{R'}{R}\right)^{mn}\left(\frac{1}{m_U(a^{4k}Va^{-4k})}\int_{a^{4k}Va^{-4k}} \mathds{1}_{Y\setminus\Omega}(uy)dm_U(u)\right)\\
&=e^{-4d}\left(\frac{R'}{R}\right)^{mn}\left(\frac{1}{m_U(V)}\int_{V} \mathds{1}_{Y\setminus\Omega}(a^{4k}ua^{-4k}y)dm_U(u)\right).
\end{aligned}
}
Let $a^{-4k}y=g_0w(b_0)\Gamma$. For the constants $\alpha_1$ in Theorem \ref{Teffirr} and $\alpha_2$ in Theorem \ref{Teffrat}, let $\alpha=\min(\alpha_1,\alpha_2)$ and $M=\frac{1}{\alpha}\left(2+l+\frac{\dim G}{2d}\right)$. By \cite[Lemma 2.4.7(b)]{KM96} with $r=C\epsilon^{\frac{1}{d}}<1$, we can take the approximation function $\theta\in C_{c}^{\infty}(G)$ of the identity such that $\theta\ge 0$, $\on{Supp} \theta\subseteq B^{G}_r(id)$, $\int_G \theta=1$, and $\mathcal{S}_l(\theta)\ll \epsilon^{-\frac{1}{d}(l+\frac{\dim G}{2})}$.
Let $\psi=\theta*\mathds{1}_{Y_{\leq \epsilon^{-1}}\setminus\mathcal{L}_{\frac{\epsilon}{4}}}$; then we have $\mathds{1}_{Y_{\leq (2\epsilon)^{-1}}\setminus\mathcal{L}_{\frac{\epsilon}{2}}}\leq\psi\leq\mathds{1}_{Y_{\leq 2\epsilon^{-1}}\setminus\mathcal{L}_{\frac{\epsilon}{8}}}$. Moreover, by Young's inequality, its Sobolev norm is bounded as follows:
\eqlabel{psiSobol}{
\begin{aligned}
\mathcal{S}_l(\psi)^2&=\displaystyle\sum_{\mathcal{D}}\|\textrm{ht}\circ\pi^l\mathcal{D}(\psi)\|^2_{L^2}
\ll\epsilon^{-l}\displaystyle\sum_{\mathcal{D}}\|\mathcal{D}(\theta)*\mathds{1}_{Y_{\leq \epsilon^{-1}}\setminus\mathcal{L}_{\frac{\epsilon}{4}}}\|^2_{L^2}\\
&\ll\epsilon^{-l}\|\mathds{1}_{Y_{\leq \epsilon^{-1}}\setminus\mathcal{L}_{\frac{\epsilon}{4}}}\|^2_{L^1}\displaystyle\sum_{\mathcal{D}}\|\mathcal{D}(\theta)\|^2_{L^2}\ll\epsilon^{-l}\mathcal{S}_l(\theta)^2,
\end{aligned}}
hence $\mathcal{S}_l(\psi)\ll \epsilon^{-\frac{l}{2}}\mathcal{S}_l(\theta)\leq \epsilon^{-(l+\frac{\dim G}{2d})}$.
In the following two cases, we apply Theorems \ref{Teffirr} and \ref{Teffrat}, respectively:
\begin{enumerate}[label=(\roman*)]
\item $\zeta(b_0,e^{\frac{2k}{m}})\ge\epsilon^{-M}$;
\item $\zeta(b_0,e^{\frac{2k}{m}})<\epsilon^{-M}$.
\end{enumerate}
\textbf{Case (i):}
Applying Theorem \ref{Teffirr}, we have
\eqlabel{tauest2}{
\begin{aligned}
&\frac{1}{m_U(V)}\int_V\mathds{1}_{Y\setminus\Omega}(a^{4k}ua^{-4k}y)dm_U(u)
\geq\frac{1}{m_U(V)}\int_V \psi(a^{4k}ua^{-4k}y)dm_U(u)\\
&=\frac{1}{m_U(V)}\int_V \psi(a^{4k}ug_0w(b_0)\Gamma)dm_U(u)
=\int_Y\psi dm_Y+O(\mathcal{S}_l(\psi)\zeta(b_0,e^{\frac{2k}{m}})^{-\alpha})\\
&= m_Y(Y_{\leq\epsilon^{-1}}\setminus\mathcal{L}_{\frac{\epsilon}{4}})+O(\epsilon^{-(l+\frac{\dim G}{2d})}\epsilon^{M\alpha}).
\end{aligned}}
It follows from Lemma \ref{bvol} and $M\alpha=2+(l+\frac{\dim G}{2d})$ that
\eqlabel{tauest3}{
\frac{1}{m_U(V)}\int_{V} \mathds{1}_{Y\setminus\Omega}(a^{4k}ua^{-4k}y)dm_U(u)\geq m_Y(Y_{\leq\epsilon^{-1}}\setminus\mathcal{L}_{\frac{\epsilon}{4}})+O(\epsilon^2)\gg \epsilon.}
Hence, $1-\tau^{\mathcal{A}}_y(\Omega)\gg\epsilon\left(\frac{R'}{R}\right)^{mn}$ by \eqref{tauest1} and \eqref{tauest3}.
\textbf{Case (ii):}
The assumption $\zeta(b_0,e^{\frac{2k}{m}})<\epsilon^{-M}$ implies that there exists $q\leq\epsilon^{-M}$ such that $\|qb_0\|_{\mathbb{Z}}\leq q^{2}e^{-\frac{2k}{m}}$, whence $\|b_0-\frac{\mathbf{p}}{q}\|\leq qe^{-\frac{2k}{m}}\leq \epsilon^{-M}e^{-\frac{2k}{m}}$ for some $\mathbf{p}\in\mathbb{Z}^d$. Let $y'=a^{4k}g_0w(\frac{\mathbf{p}}{q})\Gamma$. Then for any $u\in B^{U}_1$,
\eq{\begin{aligned}
\mathbf{d}^Y(a^kua^{-4k}y,a^kua^{-4k}y')&\ll e^{\frac{k}{m}}\mathbf{d}^Y(a^{-4k}y,a^{-4k}y')\\
&=e^{\frac{k}{m}}\mathbf{d}^Y(g_0w(b_0)\Gamma,g_0w(\frac{\mathbf{p}}{q})\Gamma)\\
&\asymp e^{\frac{k}{m}}\|b_0-\frac{\mathbf{p}}{q}\|\leq \epsilon^{-M}e^{-\frac{k}{m}},
\end{aligned}}
hence
\eqlabel{psidiff}{\begin{aligned}
|\psi(a^kua^{-4k}y)-\psi(a^kua^{-4k}y')|&\ll\mathcal{S}_l(\psi)\mathbf{d}^Y(a^kua^{-4k}y,a^kua^{-4k}y')\\&\ll \mathcal{S}_l(\psi)\epsilon^{-M}e^{-\frac{k}{m}}.
\end{aligned}}
Since we are assuming $a^{-3k}\Omega\subseteq\mathcal{L}_{\frac{\epsilon}{2}}$, we have
\eqlabel{tauest4}{
\begin{aligned}
&\frac{1}{m_U(V)}\int_V\mathds{1}_{Y\setminus\Omega}(a^{4k}ua^{-4k}y)dm_U(u)\\
&=\frac{1}{m_U(V)}\int_V\mathds{1}_{Y\setminus a^{-3k}\Omega}(a^{k}ua^{-4k}y)dm_U(u)\\
&\geq\frac{1}{m_U(V)}\int_V \psi(a^{k}ua^{-4k}y)dm_U(u)\\
&\geq\frac{1}{m_U(V)}\int_V \psi(a^{k}ua^{-4k}y')dm_U(u)+O(\mathcal{S}_l(\psi)\epsilon^{-M}e^{-\frac{k}{m}})\\
&=\int_{X_q}\psi dm_{X_q}+O(\mathcal{S}_l(\psi)q^{\frac{d}{2}}e^{-\alpha k}+\mathcal{S}_l(\psi)\epsilon^{-M}e^{-\frac{k}{m}})\\
&\geq m_{X_q}(Y_{\leq (2\epsilon)^{-1}}\setminus\mathcal{L}_{\frac{\epsilon}{8}})+O(\epsilon^{-(l+\frac{\dim G}{2d})-\frac{dM}{2}}e^{-\alpha k}+\epsilon^{-(l+\frac{\dim G}{2d})-M}e^{-\frac{k}{m}}).
\end{aligned}}
Here we used \eqref{psidiff} to replace $y$ by $y'$, and Theorem \ref{Teffrat} for the equidistribution step.
Let $M'=\min\left(\frac{4d}{\alpha}(l+\frac{\dim G}{2d}+\frac{3dM}{2}+2), 4dm(l+\frac{\dim G}{2d}+(d+1)M+2)\right)$. If $R'\ge\epsilon^{-M'}$, then $e^{-4dk}<e^{4d}\epsilon^{M'}$, so $\epsilon^{-(l+\frac{\dim G}{2d})-\frac{dM}{2}}e^{-\alpha k}\ll \epsilon^{dM+2}$ and $\epsilon^{-(l+\frac{\dim G}{2d})-M}e^{-\frac{k}{m}}\ll \epsilon^{dM+2}$. Combining this with Lemma \ref{bvol}, it follows that
\eqlabel{tauest5}{
\begin{aligned}
\frac{1}{m_U(V)}\int_V\mathds{1}_{Y\setminus\Omega}(a^{4k}ua^{-4k}y)&dm_U(u)\gg q^{-d}\epsilon+O(\epsilon^{dM+2})\\
&\gg \epsilon^{dM+1}+O(\epsilon^{dM+2})\gg \epsilon^{dM+1}.
\end{aligned}}
Hence, $1-\tau^{\mathcal{A}}_y(\Omega)\gg\epsilon^{dM+1}\left(\frac{R'}{R}\right)^{mn}$ by \eqref{tauest1} and \eqref{tauest5}.
\end{proof}
\begin{proof}[Proof of Theorem \ref{corb1}]
For fixed $b$, let $\eta_0=2(m+n)(1-\frac{\dim_H \mb{Bad}^b(\epsilon)}{mn})$ as in Proposition \ref{prop2}. It is enough to consider the case where $\dim_H \mb{Bad}^b(\epsilon)$ is sufficiently close to the full dimension $mn$, so we may assume $\dim_H\mb{Bad}^b(\epsilon)>\dim_H\mb{Bad}^0(\epsilon)$ and $\eta_0\leq 0.01$. By Proposition \ref{prop2}, there is an $a$-invariant measure $\ov{\mu}\in\crly{P}(\overline{Y})$ such that $\on{Supp}\ov{\mu}\subseteq \mathcal{L}_\epsilon\cup(\overline{Y}\setminus Y)$, and $\pi_*\ov{\mu}(\overline{X}\setminus\mathfrak{S}_{\eta'})\leq \eta'$ for any $\eta_0\leq\eta'\leq 1$. We also have $\mu\in\crly{P}(Y)$ and $0\leq\widehat{\eta}\leq\eta_0$ such that
$$\ov{\mu}=(1-\widehat{\eta})\mu+\widehat{\eta}\delta_\infty.$$
In particular, for $\eta'=0.01$, we have $\mu(\pi^{-1}(\mathfrak{S}_{0.01}))\geq 0.99$. We can choose $0<r<1$ such that $Y(2r)\supset\pi^{-1}(\mathfrak{S}_{0.01})$. Note that the choice of $r$ is independent of $\epsilon$ and $b$ since $\mathfrak{S}_{0.01}$ is constructed in Proposition \ref{KKLM'} independently of $\epsilon$ and $b$.
For such $0<r<1$ and the $a$-invariant probability measure $\mu$ on $Y$, let $0<\delta_0=\delta_0(r)<r$, $0<\delta_1=\delta_1(r)\leq\delta_0$, and a countably generated $\sigma$-algebra $\mathcal{A}^U$ be as in Lemma \ref{partitioncst} and Proposition \ref{algebracst}, respectively. With respect to this $\sigma$-algebra, we have
\eq{h_{\ov{\mu}}(a^{-1}|\overline{\mathcal{A}^U})\ge(1-\widehat{\eta}^{\frac{1}{2}})(d-\widehat{\eta}-d\widehat{\eta}^{\frac{1}{2}})}
by (\ref{entropy}) of Proposition \ref{prop2}. By the linearity of the entropy function with respect to the measure, we have
\eqlabel{entlow}{
\begin{aligned}
h_\mu(a^{-1}|\mathcal{A}^U)&\ge(1+\widehat{\eta}^{\frac{1}{2}})^{-1}(d-\frac{1}{2}\eta_0-d\widehat{\eta}^{\frac{1}{2}})\\
&\ge d-2d\widehat{\eta}^{\frac{1}{2}}-\frac{1}{2}\eta_0.
\end{aligned}}
By Lemma \ref{Exceptional}, there exists $0<\delta<\delta_1$ such that $\mu(E_\delta)<0.01$. Note that the constant $C>0$ in Lemma \ref{Exceptional} depends only on $a$ and $G$, hence $\delta$ is independent of $\epsilon$ even though the set $E_\delta$ depends on $\epsilon$. It follows from Proposition \ref{algebracst} that $B_\delta^U\cdot y\subset[y]_{\mathcal{A}^U}\subset B^U_{2r}\cdot y$ for any $y\in Y(2r)\setminus E_\delta$. We write $Z=Y(2r)\setminus E_\delta$ for simplicity.
In the rest of the argument, we will use Lemma \ref{algexiA} and Proposition \ref{effEL}, which require an ergodicity assumption, so we need to choose an ergodic component of $\mu$ appropriately. We have the ergodic decomposition
$$\mu=\int \mu_y^\mathcal{E}d\mu(y).$$
Let $\Upsilon:=\set{y\in Y: \mu_y^\mathcal{E}(Z)>0.9}.$ Since $\mu(Z)\geq 0.98$, it follows that
$$0.98\leq \int_Y \mu_y^\mathcal{E}(Z)d\mu(y)\leq \mu(\Upsilon)+0.9\mu(Y\setminus\Upsilon),$$
hence $\mu(\Upsilon)\ge0.8$. By Proposition \ref{ergDec} and \eqref{entlow}, we have
\eq{
\begin{aligned}
\int_\Upsilon h_{\mu_y^\mathcal{E}}(a^{-1}|\mathcal{A}^U)d\mu(y)+\int_{Y\setminus\Upsilon} h_{\mu_y^\mathcal{E}}&(a^{-1}|\mathcal{A}^U)d\mu(y)=\int_Y h_{\mu_y^\mathcal{E}}(a^{-1}|\mathcal{A}^U)d\mu(y)\\&\geq d-2d\widehat{\eta}^{\frac{1}{2}}-\frac{1}{2}\eta_0.
\end{aligned}}
On the other hand, by Theorem \ref{thmEL}, $h_{\mu_y^\mathcal{E}}(a^{-1}|\mathcal{A}^U)\leq d$ for any $y\in Y$, hence $\int_{Y\setminus\Upsilon} h_{\mu_y^\mathcal{E}}(a^{-1}|\mathcal{A}^U)d\mu(y)\leq d\mu(Y\setminus\Upsilon)$. It follows that
\eq{
\begin{aligned}
\int_\Upsilon h_{\mu_y^\mathcal{E}}(a^{-1}|\mathcal{A}^U)d\mu(y)&\ge
d\mu(\Upsilon)-2d\widehat{\eta}^{\frac{1}{2}}-\frac{1}{2}\eta_0\\
&\ge \mu(\Upsilon)\left(d-\frac{1}{0.8}(2d\widehat{\eta}^{\frac{1}{2}}+\frac{1}{2}\eta_0)\right).
\end{aligned}}
Therefore, we can find $y\in Y$ such that $\lambda:=\mu_y^\mathcal{E}$ satisfies $\lambda(Z)>0.9$ and
\eqlabel{empentlow}{h_{\lambda}(a^{-1}|\mathcal{A}^U)\ge d-\frac{1}{0.8}(2d\widehat{\eta}^{\frac{1}{2}}+\frac{1}{2}\eta_0).}
We also note that $\lambda$ is an $a$-invariant ergodic probability measure with $\on{Supp}\lambda\subseteq\mathcal{L}_\epsilon$.
By Proposition \ref{algebracst}, $\mathcal{A}^U$ is $U$-subordinate with respect to $\lambda$ and
$B_\delta^U\cdot y\subset[y]_{\mathcal{A}^U}\subset B_{2r}^U\cdot y$ for any $y\in Z$.
On the other hand, we shall get an upper bound of $h_{\lambda}(a^{-1}|\mathcal{A}^U)$ from Lemma \ref{algexiA} and Proposition \ref{effEL}. Since $\lambda(Z)>0.9$ and $\lambda$ is $a$-invariant, $\lambda(a^{4k}Y(2r))=\lambda(Y(2r))> 0.9$. Let $M$ and $M'$ be the constants in Proposition \ref{btauest}, $r'=(1-2^{-\frac{1}{d}})\epsilon^{\frac{1}{d}}$, $R'=\epsilon^{-M'}$, $R=e^{\frac{mn}{d}}\frac{2r}{\delta}R'$, and $k=\lfloor\frac{mn\log R'}{4d}\rfloor$. Let $\mathcal{A}_1=a^{-j_1}\mathcal{A}^U$ and $\mathcal{A}_2=a^{j_2}\mathcal{A}^U$, where
\eq{\begin{aligned}
j_1&=\lceil-\frac{mn}{d}\log\left((2r)^{-1}(1-2^{-\frac{1}{d}})\epsilon^{\frac{1}{d}}\right)\rceil,\\ j_2&=\lceil-\frac{mn}{d}\log(\delta\epsilon^{M'})\rceil.
\end{aligned}}
Then for $y\in Z$, the atoms with respect to $\mathcal{A}_1$ and $\mathcal{A}_2$ satisfy
$$[y]_{\mathcal{A}_1}\subset B^{U}_{r'} \cdot y,$$
$$B^{U}_{R'}\cdot y\subset[y]_{\mathcal{A}_2}\subset B^{U}_R\cdot y.$$
For $\Omega=B^U_{r'}\on{Supp}\lambda$, note that $\Omega\subseteq B^U_r\mathcal{L}_\epsilon\subseteq\mathcal{L}_{\frac{\epsilon}{2}}$ and $$a^{-3k}\Omega=(a^{-3k}B^U_{r'}a^{3k})a^{-3k}\on{Supp}\lambda\subseteq(a^{-3k}B^U_{r'}a^{3k})\mathcal{L}_\epsilon\subseteq\mathcal{L}_{\frac{\epsilon}{2}}$$ since $\on{Supp}\lambda$ is an $a$-invariant set.
Applying Proposition \ref{btauest} with $K=Y(2r)$, $\mathcal{A}=\mathcal{A}_2$, and the same $R'$, $R$, $\Omega$ as defined above, the following holds: for any $y\in a^{4k}Y(2r)\cap Z$,
\eqlabel{taulow}{1-\tau_y^{\mathcal{A}_2}(\Omega)\gg \epsilon^{dM+1}}
since $\frac{R'}{R}$ is bounded below by a constant independent of $\epsilon$. By Lemma \ref{algexiA}, Proposition \ref{effEL}, and \eqref{taulow}, we have
\eq{
\begin{aligned}
(j_1+j_2)(d-h_\lambda(a^{-1}|\mathcal{A}^U))&=(j_1+j_2)d-H_{\lambda}(\mathcal{A}_1|\mathcal{A}_2)\\
&\ge -\int_Y\log \tau_y^{\mathcal{A}_2}(\Omega)d\lambda(y)\\
&\ge\int_{a^{4k}Y(2r)\cap Z}(1-\tau_y^{\mathcal{A}_2}(\Omega))d\lambda(y)\\
&\gg \lambda(a^{4k}Y(2r)\cap Z)\epsilon^{dM+1}> 0.8\epsilon^{dM+1}.
\end{aligned}}
It follows from \eqref{empentlow} and $j_1+j_2\asymp \log(1/\epsilon)$ that
\eq{\eta_0^{\frac{1}{2}}\gg\frac{1}{0.8}(2d\widehat{\eta}^{\frac{1}{2}}+\frac{1}{2}\eta_0) \geq d-h_\lambda(a^{-1}|\mathcal{A}^U)\gg\epsilon^{dM+2}.}
Since $\eta_0=2(m+n)(1-\frac{\dim_H \mb{Bad}^b(\epsilon)}{mn})$, we have
$$mn-\dim_H \mb{Bad}^b(\epsilon)\ge c\epsilon^{2(dM+2)}$$
for some constant $c>0$ depending only on $d$.
\end{proof}
\section{Characterization of the singular on average property and dimension estimates}\label{sec6}
In this section, we will show (\ref{S3})$\implies$(\ref{S1}) in Theorem \ref{thmA1}.
Let $A\in M_{m,n}$ and consider two subgroups
\eq{
G(A)\overset{\on{def}}{=} A\mathbb{Z}^n + \mathbb{Z}^m \subset \mathbb{R}^m \quad\text{and}\quad G({^{t}A})\overset{\on{def}}{=} {^{t}A}\mathbb{Z}^m + \mathbb{Z}^n \subset \mathbb{R}^n.
}
If we alternatively view $G(A)$ as a group of classes modulo $\mathbb{Z}^m$, lying in the $m$-dimensional torus $\mathbb{T}^m$,
Kronecker's theorem asserts that $G(A)$ is dense in $\mathbb{T}^m$ if and only if the group $G({^{t}A})$ has maximal rank $m+n$ over $\mathbb{Z}$ (see \cite[Chapter \rom{3}, Theorem \rom{4}]{Cas57}). Thus, if $\text{rank}_\mathbb{Z} (G({^{t}A}))<m+n$, then $\text{Bad}_A(\epsilon)$ has full Hausdorff dimension for any $\epsilon>0$. Hence, throughout this section, we consider only matrices $A$ for which $\text{rank}_\mathbb{Z} (G({^{t}A}))=m+n$.
\subsection{Best approximations}
We set up a weighted version of the best approximations following \cite{CGGMS}. (See also \cite{BL} and \cite{BKLR} for the unweighted setting.)
Given $A\in M_{m,n}$, for $\mb{y}\in\mathbb{Z}^m$ we denote
\[ M(\mb{y})= \inf_{\mb{q}\in\mathbb{Z}^n} \|{^{t}A}\mb{y}-\mb{q}\|_\mb{s}.\]
Our assumption that $\text{rank}_\mathbb{Z} (G({^{t}A}))$ equals $m+n$ guarantees that $M(\mb{y})>0$ for all non-zero $\mb{y}\in\mathbb{Z}^m$. One can construct a sequence of $\mb{y}_i\in\mathbb{Z}^m$, called \textit{a sequence of weighted best approximations to $^{t}A$}, which satisfies the following properties:
\begin{enumerate}
\item\label{Best1} Setting $Y_i=\|\mb{y}_i\|_\mb{r}$ and $M_i=M(\mb{y}_i)$, we have \[ Y_1<Y_2<\cdots\quad \text{and}\quad M_1>M_2>\cdots, \]
\item\label{Best2} $M(\mb{y})\geq M_i$ for all non-zero $\mb{y}\in\mathbb{Z}^m$ with $\|\mb{y}\|_\mb{r}<Y_{i+1}$.
\end{enumerate}
The sequence $(Y_i)_{i\geq1}$ grows at least geometrically.
\begin{lem}\cite[Lemma 4.3]{CGGMS}\label{lem:CGGMS}
There exists a positive integer $V$ such that for all $i\geq 1,$
\[ Y_{i+V}\geq 2Y_i.\] In particular, there exist $c>0$ and $\gamma>1$ such that \[ Y_{i}\geq c\gamma^i \] for all $i\geq1$.
\end{lem}
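For concreteness, the second statement follows from the first with an explicit (not optimal) choice of constants: writing $i-1=qV+s$ with $0\leq s<V$ and iterating $Y_{i+V}\geq 2Y_i$ gives

```latex
Y_i \;\geq\; 2^{\lfloor (i-1)/V \rfloor}\, Y_{1+((i-1)\bmod V)}
    \;\geq\; 2^{(i-V)/V} \min_{1\leq j\leq V} Y_j
    \;=\; \Big(\tfrac{1}{2}\min_{1\leq j\leq V} Y_j\Big)\big(2^{1/V}\big)^{i},
```

so one may take $\gamma=2^{1/V}$ and $c=\frac{1}{2}\min_{1\leq j\leq V}Y_j$.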
\begin{rem}\label{rem:DT} \
\begin{enumerate}
\item The first statement in the above lemma can be found in the proof of \cite[Lemma 4.3]{CGGMS}.
\item From the weighted Dirichlet's Theorem (see \cite[Theorem 2.2]{Kle98}), one can check that $M_{k}Y_{k+1}\leq 1 $ for all $k\geq1$.
\end{enumerate}
\end{rem}
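In the unweighted one-dimensional case $m=n=1$, the defining properties (\ref{Best1}) and (\ref{Best2}) can be checked by brute force directly from the definition. The following Python sketch (an illustration only; the function name and cutoff are ours) records each $y$ that improves the distance from $\alpha y$ to the nearest integer, recovering the Fibonacci denominators for the golden ratio, and verifies the Dirichlet bound $M_kY_{k+1}\leq 1$ of Remark \ref{rem:DT}.

```python
def best_approximations(alpha, y_max):
    # Brute-force the (unweighted, m = n = 1) best-approximation sequence:
    # record each y whose distance M(y) = min_q |alpha*y - q| to the nearest
    # integer beats every smaller y, mirroring properties (1) and (2).
    seq, record = [], float("inf")
    for y in range(1, y_max + 1):
        m = abs(alpha * y - round(alpha * y))
        if m < record:
            record = m
            seq.append((y, m))
    return seq

alpha = (1 + 5 ** 0.5) / 2  # golden ratio
seq = best_approximations(alpha, 200)
ys = [y for y, _ in seq]
print(ys)  # Fibonacci numbers: [1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144]

# Dirichlet bound of Remark (2): M_k * Y_{k+1} <= 1 for consecutive records
assert all(m * y_next <= 1 for (_, m), (y_next, _) in zip(seq, seq[1:]))
```

By construction the $Y_i$ are strictly increasing and the $M_i$ strictly decreasing, matching property (\ref{Best1}).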
\subsection{Characterization of singular on average property}
In this section, we characterize the singular on average property in terms of best approximations. We first show that $A$ is singular on average if and only if $^{t}A$ is singular on average. To do this, following \cite[Chapter \rom{5}]{Cas57}, we prove a transference principle between the two homogeneous approximation problems with weights. See also \cite{GE15,Ger20}.
\begin{defi}\label{Parall}
Given positive numbers $\lambda_1,\dots,\lambda_d$, consider the parallelepiped
\eq{
\mathcal{P}=\left\{ \mb{z}=(z_1,\dots,z_d)\in \mathbb{R}^d:|z_i|\leq \lambda_i,\ i=1,\dots,d \right\}.
} We call the parallelepiped
\eq{
\mathcal{P}^{*}=\left\{\mb{z}=(z_1,\dots,z_d)\in \mathbb{R}^d:|z_i|\leq\frac{1}{\lambda_i}\prod_{j=1}^{d}\lambda_j,\ i=1,\dots,d \right\}
} the \textit{pseudo-compound} of $\mathcal{P}$.
\end{defi}
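Concretely, the pseudo-compound only rescales the side lengths of $\mathcal{P}$: the $i$-th side $\lambda_i$ is replaced by $\prod_{j}\lambda_j/\lambda_i$. A minimal numerical sketch (the function name is ours):

```python
def pseudo_compound(lams):
    # Side lengths of the pseudo-compound P* of the box P = {z : |z_i| <= lam_i}:
    # the i-th side of P* is (prod_j lam_j) / lam_i, as in the definition above.
    total = 1.0
    for lam in lams:
        total *= lam
    return [total / lam for lam in lams]

lam = [0.5, 2.0, 4.0]
print(pseudo_compound(lam))  # [8.0, 2.0, 1.0]
```

In particular $\on{vol}(\mathcal{P}^*)=2^d\big(\prod_j\lambda_j\big)^{d-1}$, so $\mathcal{P}$ and $\mathcal{P}^*$ have comparable volumes exactly when $\prod_j\lambda_j\asymp 1$.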
\begin{thm}\label{TranThm}\cite{GE15}
Let $\mathcal{P}$ be as in Definition \ref{Parall} and let $\Lambda$ be a full-rank lattice in $\mathbb{R}^d$. Then
\eq{
\mathcal{P}^{*}\cap \Lambda^{*} \neq \{\mb{0}\} \implies c\mathcal{P}\cap\Lambda \neq\{\mb{0}\},
} where $c=d^{\frac{1}{2(d-1)}}$ and $\Lambda^{*}$ is the dual lattice of $\Lambda$.
\end{thm}
\begin{cor}\label{TranCor1}
For positive integers $m,n$, let $d= m+n$, and let $A\in M_{m,n}$ and $0<\epsilon<1$ be given. For all large enough $X\geq 1$, if there exists a nonzero $\mb{q}\in \mathbb{Z}^n$ such that
\eqlabel{Asol}{
\idist{A\mb{q}}_{\mb{r}}\leq \epsilon X^{-1} \quad \text{and}\quad \|\mb{q}\|_{\mb{s}} \leq X,
} then there exists a nonzero $\mb{y}\in \mathbb{Z}^m$ such that
\eqlabel{TranAsol}{
\idist{^{t}A\mb{y}}_{\mb{s}} \leq c^{(\frac{1}{r_m}+\frac{1}{s_n})}\epsilon^{\frac{r_m s_n}{s_n +r_1 (1-s_n)}}Y^{-1} \quad\text{and}\quad \|\mb{y}\|_\mb{r} \leq Y,
} where $c$ is as in Theorem \ref{TranThm} and $Y=c^{\frac{1}{r_m}}\epsilon^{-\frac{r_m (1-s_n)}{s_n+r_1 (1-s_n)}}X$.
\end{cor}
\begin{proof}
Consider the following two parallelepipeds:
\eq{\begin{split}
\mathcal{Q}&=\left\{\mb{z}=(z_1,\dots,z_d)\in \mathbb{R}^{d}: \begin{split} &|z_i| \leq \epsilon^{r_i}X^{-r_i}, \quad i=1,\dots,m \\ &|z_{m+j}| \leq X^{s_j}, \quad j=1,\dots,n \end{split} \right\},\\
\mathcal{P}&=\left\{\mb{z}=(z_1,\dots,z_d)\in \mathbb{R}^{d}: \begin{split} &|z_i| \leq Z^{r_i}, \quad i=1,\dots,m \\ &|z_{m+j}| \leq \delta^{s_j}Z^{-s_j}, \quad j=1,\dots,n \end{split} \right\},
\end{split}
} where \eq{
\delta=\epsilon^{\frac{r_m s_n}{s_n +r_1 (1-s_n)}}\quad\text{and}\quad Z=\epsilon^{-\frac{r_m (1-s_n)}{s_n +r_1 (1-s_n)}}X.
}
Observe that the pseudo-compound of $\mathcal{P}$ is given by
\eq{
\mathcal{P}^{*}=\left\{\mb{z}=(z_1,\dots,z_d)\in \mathbb{R}^{d}: \begin{split} &|z_i| \leq \delta Z^{-r_i}, \quad i=1,\dots,m \\ &|z_{m+j}| \leq \delta^{1-s_j}Z^{s_j}, \quad j=1,\dots,n \end{split} \right\}
} and that $\mathcal{Q} \subset \mathcal{P}^{*}$ since $\epsilon^{r_i}X^{-r_i}\leq \delta Z^{-r_i}$ and $X^{s_j}\leq \delta^{1-s_j}Z^{s_j}$ for all $i=1,\dots,m$ and $j=1,\dots,n$.
Now, the existence of a nonzero solution $\mb{q}\in \mathbb{Z}^n$ of the inequalities \eqref{Asol} is equivalent to
\eq{
\mathcal{Q}\cap \left(\begin{matrix} I_m & A \\ & I_n \\ \end{matrix}\right) \mathbb{Z}^d \neq \{\mb{0}\},
} which implies that
\eq{
\mathcal{P}^{*}\cap \left(\begin{matrix} I_m & A \\ & I_n \\ \end{matrix}\right) \mathbb{Z}^d \neq \{\mb{0}\}.
} By Theorem \ref{TranThm}, we have
\eq{
c\mathcal{P}\cap \left(\begin{matrix} I_m & \\ -{^{t}A} & I_n \\ \end{matrix}\right) \mathbb{Z}^d \neq \{\mb{0}\},
} A nonzero vector of this intersection yields $\mb{y}\in\mathbb{Z}^m$ satisfying \eqref{TranAsol} with $Y=c^{\frac{1}{r_m}}Z$, which concludes the proof of Corollary \ref{TranCor1}.
\end{proof}
\begin{cor}\label{eqCor}
Let $m,n$ be positive integers and $A\in M_{m,n}$. Then $A$ is singular on average if and only if $^{t}A$ is singular on average.
\end{cor}
\begin{proof}
This follows from Corollary \ref{TranCor1}, applied once as stated and once with the roles of $A$ and $^{t}A$ exchanged.
\end{proof}
Now, we will characterize the singular on average property in terms of best approximations. Let $A\in M_{m,n}$ be a matrix and $(\mb{y}_k)_{k\geq 1}$ be a sequence of weighted best approximations to $^{t}A$ and write
\[Y_k=\|\mb{y}_k\|_\mb{r},\quad M_k=\inf_{\mb{q}\in\mathbb{Z}^n} \|{^{t}A}\mb{y}_k-\mb{q}\|_\mb{s}. \]
\begin{prop}\label{propA1}
Let $A\in M_{m,n}$ be a matrix and let $(\mb{y}_k)_{k\geq 1}$ be a sequence of weighted best approximations to $^{t}A$. Then the following are equivalent:
\begin{enumerate}
\item\label{SS3} $^{t}A$ is singular on average.
\item\label{SS2} For all $\epsilon>0$, \eq{\lim\limits_{k\to\infty}\frac{1}{\log Y_{k}}\left|\{i\leq k:M_{i}Y_{i+1}>\epsilon\}\right|=0.}
\end{enumerate}
\end{prop}
\begin{proof}
($\ref{SS3})\implies(\ref{SS2})$ : Let $0<\epsilon<1$. Observe that for each integer $X$ with $Y_{k} \leq X < Y_{k+1}$, the inequalities
\eqlabel{inequal}{
\|{^{t}A}\mb{p}-\mb{q}\|_\mb{s} \leq \epsilon X^{-1}\quad \text{and}\quad 0 < \|\mb{p}\|_\mb{r} \leq X
}
have a solution if and only if $X\leq \frac{\epsilon}{M_k}$. Thus, for each integer $\ell\in[\log_2{Y_k},\log_2{Y_{k+1}})$ the inequalities \eqref{inequal} have no solutions for $X=2^\ell$ if and only if
\eqlabel{NoSol}{
\log_2{\epsilon}-\log_2{M_k}<\ell<\log_2{Y_{k+1}}.
}
Now we assume that $^{t}A$ is singular on average. For given $\delta>0$, if the set $\{k\in\mathbb{N}:M_k Y_{k+1}>\delta\}$ is finite, then there is nothing to prove. Suppose that the set $\{k\in\mathbb{N}:M_k Y_{k+1}>\delta\}$ is infinite and write \eq{\{k\in\mathbb{N}:M_{k}Y_{k+1}>\delta \}=\left\{j(1)<j(2)<\cdots<j(k)<\cdots:k\in\mathbb{N}\right\}.}
Set $\epsilon=\delta/2$ and fix the positive integer $V$ from Lemma \ref{lem:CGGMS}.
For an integer $\ell$ in $[\log_2{Y_{j(k)+1}}-1,\log_2{Y_{j(k)+1}})$, observe that
\[
\log_2{\epsilon}-\log_2{M_{j(k)}} < \log_2{Y_{j(k)+1}}-1.
\]
Hence the inequalities \eqref{inequal} have no solutions for $X=2^\ell$ by \eqref{NoSol}.
By Lemma \ref{lem:CGGMS}, $\log_2{Y_{j(k)+1+V}}-1\geq\log_2{Y_{j(k)+1}}$. So, we have $\log_2{Y_{j(k+V)+1}}-1\geq\log_2{Y_{j(k)+1}}$.
Now fix $i=0,\cdots,V-1$. Then the intervals
\[[\log_2{Y_{j(i+sV)+1}}-1,\log_2{Y_{j(i+sV)+1}}), \quad s=1,\cdots,k \]
are disjoint. Thus, for an integer $N\in[\log_2{Y_{j(i+kV)+1}},\log_2{Y_{j(i+(k+1)V)+1}})$, the number of $\ell$ in $\left\{1,\cdots,N\right\}$ such that \eqref{inequal} have no solutions for $X=2^\ell$ is at least $k$. Since $^{t}A$ is singular on average,
\[
\frac{k}{\log_{2}{Y_{j(i+(k+1)V)+1}}}\leq\frac{1}{N}\left|\left\{\ell\in\left\{1,\cdots,N\right\}:\eqref{inequal}\text{ have no solutions for }X=2^\ell \right\}\right|
\]
tends to $0$ as $k\to\infty$. It follows that $\frac{i+1+kV}{\log_{2}{Y_{j(i+1+kV)}}}$ tends to $0$ as $k\to\infty$ for each $i=0,\cdots,V-1$, hence $\frac{k}{\log_{2}{Y_{j(k)}}}$ tends to $0$ as $k\to\infty$.
For any $k\geq 1$, there is a unique positive integer $s_k$ such that \[j(s_k)\leq k < j(s_k +1),\] and observe that $s_k=|\{i\leq k : M_i Y_{i+1}>\delta \}|$. Thus, by the monotonicity of $Y_k$, we have
\eq{
\lim_{k\to\infty}\frac{1}{\log_{2}Y_{k}}|\{i\leq k:M_{i}Y_{i+1}>\delta\}|\leq \lim_{k\to\infty}\frac{s_k}{\log_{2} Y_{j(s_k)}}=0.
}
($\ref{SS2})\implies(\ref{SS3})$ : Given $0<\epsilon<1$, the number of integers $\ell$ in $[\log_2{Y_k},\log_2{Y_{k+1}})$ such that \eqref{inequal} have no solutions for $X=2^\ell$ is at most
\[
\lceil \log_2{M_{k}Y_{k+1}}-\log_2{\epsilon} \rceil \leq \log_2{M_{k}Y_{k+1}}-\log_2{\epsilon}+1.
\]
Thus, for an integer $N$ in $[\log_2{Y_k},\log_2{Y_{k+1}})$, we have
\begin{align*}
\frac{1}{N} |\{\ell\in\{1,\cdots,N\}&:\eqref{inequal}\ \text{have no solutions for } X=2^\ell\}| \\
&\leq\frac{1}{N}\sum_{i=1}^{k}\max\left(0,\log_2{M_{i}Y_{i+1}}-\log_2{\epsilon}+1\right)\\
&\leq\frac{1}{\log_2{Y_k}}\sum_{i=1}^{k}\max\left(0,\log_2{M_{i}Y_{i+1}}-\log_2{\epsilon}+1\right).
\end{align*}
Since $M_{i}Y_{i+1}\leq1$ for each $i\geq1$,
\begin{align*}
\frac{1}{\log_2{Y_k}}\sum_{i=1}^{k}&\max\left(0,\log_2{M_{i}Y_{i+1}}-\log_2{\epsilon}+1\right)\\
&\leq\frac{1}{\log_2{Y_k}}\left(-\log_2{\epsilon}+1\right)|\{i\leq k : M_i Y_{i+1}>\epsilon/2\}|.
\end{align*}
Letting $k\to\infty$ and applying (\ref{SS2}) with $\epsilon/2$, the right-hand side tends to $0$. Therefore, $^{t}A$ is singular on average.
\end{proof}
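Proposition \ref{propA1} can be illustrated numerically in the unweighted case $m=n=1$. For a badly approximable number such as the golden ratio, $M_iY_{i+1}$ is bounded below (it is roughly $0.72$ for every $i$), so the counting statistic in (\ref{SS2}) stays bounded away from $0$, consistent with the fact that such numbers are not singular on average. A brute-force Python sketch (illustrative only; names and cutoffs are ours):

```python
import math

def best_approx_records(alpha, y_max):
    # Records of M(y) = dist(alpha * y, Z) for y = 1..y_max; in the
    # unweighted one-dimensional case these are the best approximations.
    seq, best = [], float("inf")
    for y in range(1, y_max + 1):
        m = abs(alpha * y - round(alpha * y))
        if m < best:
            best = m
            seq.append((y, m))
    return seq

alpha = (1 + 5 ** 0.5) / 2        # golden ratio: badly approximable
seq = best_approx_records(alpha, 10 ** 5)
eps = 0.1
k = len(seq) - 1                   # indices i <= k with both M_i and Y_{i+1} known
bad = sum(1 for (_, m), (y2, _) in zip(seq, seq[1:]) if m * y2 > eps)
stat = bad / math.log(seq[k][0])   # (1 / log Y_k) * |{i <= k : M_i Y_{i+1} > eps}|
print(bad == k, stat > 1)          # every index exceeds eps; the statistic stays large
```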
\subsection{Modified Bugeaud-Laurent sequence}
In this subsection we construct the following modified Bugeaud-Laurent sequence assuming the singular on average property. We refer the reader to \cite[Section 5]{BL} for the original version of the Bugeaud-Laurent sequence.
\begin{prop}\label{lem:MBL}
Let $A\in M_{m,n}$ be such that $^{t}A$ is singular on average and let $(\mb{y}_k)_{k\geq 1}$ be a sequence of weighted best approximations to $^{t}A$. For each $S>R>1$, there exists an increasing function $\varphi:\mathbb{Z}_{\geq1}\to\mathbb{Z}_{\geq1}$ satisfying the following properties:
\begin{enumerate}
\item for any integer $i\geq1$,
\eqlabel{Mgrow}{
Y_{\varphi(i+1)}\geq RY_{\varphi(i)}\quad \text{and} \quad M_{\varphi(i)}Y_{\varphi(i+1)}\leq R.
}
\item
\eqlabel{Mdensity}{
\limsup_{k\to\infty}\frac{k}{\log{Y_{\varphi(k)}}}\leq\frac{1}{\log{S}}.
}
\end{enumerate}
\end{prop}
\begin{proof}
The function $\varphi$ is constructed in the following way. Fix a positive integer $V$ in Lemma \ref{lem:CGGMS} and let $\mathcal{J}=\{j\in\mathbb{Z}_{\geq 1}:M_j Y_{j+1}\leq R/S^3\}$. Since $^{t}A$ is singular on average, by Proposition \ref{propA1} with $\epsilon=R/S^3$, we have \eqlabel{assumption}{\lim_{k\to\infty}\frac{1}{\log Y_{k}}\left|\{i\leq k:i\in\mathcal{J}^{c}\}\right|=0.}
If the set $\mathcal{J}$ is finite, then we have $\lim\limits_{k\to\infty}Y_{k}^{1/k}=\infty$ by \eqref{assumption}, hence the proof of \cite[Theorem 2.2]{BKLR} implies that there exists a function $\varphi:\mathbb{Z}_{\geq 1}\to \mathbb{Z}_{\geq 1}$ for which
\[
Y_{\varphi(i+1)}\geq RY_{\varphi(i)}\quad\text{and}\quad Y_{\varphi(i)+1}\geq R^{-1}Y_{\varphi(i+1)}.
\]
The fact that $M_{i}Y_{i+1}\leq 1$ for all $i\geq 1$ implies $M_{\varphi(i)}Y_{\varphi(i+1)}\leq R$. Equation \eqref{Mdensity} follows from $\lim\limits_{k\to\infty}Y_{k}^{1/k}=\infty$, which concludes the proof of Proposition \ref{lem:MBL}.
Now, suppose that $\mathcal{J}$ is infinite. Then there are two possible cases:
\begin{enumerate}[label=(\roman*)]
\item $\mathcal{J}$ contains all sufficiently large positive integers.
\item There are infinitely many positive integers in $\mathcal{J}^{c}$.
\end{enumerate}
\textbf{Case (i).} Assume the first case and let $\psi(1)=\min\{j:\mathcal{J}\supset\mathbb{Z}_{\geq j}\}$. Define the auxiliary increasing sequence $(\psi(i))_{i\geq 1}$ by
\eq{
\psi(i+1)=\min\{j\in\mathbb{Z}_{\geq 1}:SY_{\psi(i)}\leq Y_j\},
} which is well defined since $(Y_i)_{i\geq 1}$ is increasing. Note that $\psi(i+1) \leq \psi(i)+\lceil\log_2{S}\rceil V$ since $Y_{\psi(i)+\lceil\log_2{S}\rceil V}\geq SY_{\psi(i)}$ by Lemma \ref{lem:CGGMS}. Let us now define the sequence $(\varphi(i))_{i\geq 1}$ by, for each $i\geq 1$,
\[ \varphi(i) =
\begin{cases}
\psi(i) &\quad \text{if } M_{\psi(i)}Y_{\psi(i+1)} \leq R/S,\\
\psi(i+1)-1 &\quad \text{otherwise}.
\end{cases}
\]
Then the sequence $(\varphi(i))_{i\geq 1}$ is increasing and $\varphi \geq \psi$.
Now we claim that for each $i \geq 1$,
\eqlabel{MSgrow}{
Y_{\varphi(i+1)}\geq SY_{\varphi(i)}\quad\text{and}\quad M_{\varphi(i)}Y_{\varphi(i+1)}\leq R,
} which implies Equation \eqref{Mdensity} since $Y_{\varphi(k)}\geq S^{k-1}Y_{\varphi(1)}$ for all $k\geq 1$. Thus, the claim concludes the proof of Proposition \ref{lem:MBL}.
\begin{proof}[Proof of Equation \eqref{MSgrow}]
There are four possible cases on the values of $\varphi(i)$ and $\varphi(i+1)$.
\medskip
$\bullet$~ Assume that $\varphi(i)=\psi(i)$ and $\varphi(i+1)
=\psi(i+1)$. By the definition of $\psi(i+1)$, we have
$$
Y_{\varphi(i+1)}=Y_{\psi(i+1)}\geq SY_{\psi(i)}=S Y_{\varphi(i)}.
$$
If $\psi(i)\neq \psi(i+1)-1$, then by the definition of $\varphi(i)$,
we have
$$
M_{\varphi(i)} Y_{\varphi(i+1)}=M_{\psi(i)} Y_{\psi(i+1)}\leq R/S \leq R.
$$
If $\psi(i)= \psi(i+1)-1$, then $\varphi(i+1)= \varphi(i)+1$, hence
$$
M_{\varphi(i)}Y_{\varphi(i+1)}=M_{\varphi(i)}Y_{\varphi(i)+1}\leq 1 \leq R.
$$
This proves Equation \eqref{MSgrow}.
\medskip
$\bullet$~ Assume that $\varphi(i)=\psi(i)$ and $\varphi(i+1)=
\psi(i+2)-1$. By the definition of $\psi(i+1)$, we have
$$
Y_{\varphi(i+1)}=Y_{\psi(i+2)-1}\geq Y_{\psi(i+1)}\geq SY_{\psi(i)}= S Y_{\varphi(i)}.
$$
It follows from the minimality of $\psi(i+2)$ that $SY_{\psi(i+1)}> Y_{\psi(i+2)-1}$. If $\psi(i+1)>\psi(i)+1$, then $M_{\psi(i)}
Y_{\psi(i+1)}\leq R/S$ by the definition of $\varphi(i)$. Hence, we have
$$
M_{\varphi(i)}Y_{\varphi(i+1)}= M_{\psi(i)}Y_{\psi(i+2)-1}\leq SM_{\psi(i)}Y_{\psi(i+1)}\leq R.
$$
If $\psi(i+1)=\psi(i)+1$, then $M_{\psi(i)} Y_{\psi(i)+1} \leq R/S^3$ since $\psi(i)\in\mathcal{J}$. Hence,
$$
M_{\varphi(i)}Y_{\varphi(i+1)}= M_{\psi(i)} Y_{\psi(i+2)-1}\leq SM_{\psi(i)} Y_{\psi(i)+1} \leq R/S^2 \leq R.
$$
This proves Equation \eqref{MSgrow}.
\medskip
$\bullet$~ Assume that $\varphi(i)=\psi(i+1)-1$ and $\varphi(i+1)=
\psi(i+1)$. Since $\psi(i+1)-1\in\mathcal{J}$, we have
$$
M_{\varphi(i)}Y_{\varphi(i+1)}=M_{\psi(i+1)-1}Y_{\psi(i+1)}\leq R/S^3 \leq R.
$$
If $\psi(i+1)-1=\psi(i)$, then by the definition of $\psi(i+1)$, we
have
$$
\frac{Y_{\varphi(i+1)}}{Y_{\varphi(i)}}=\frac{Y_{\psi(i+1)}}{Y_{\psi(i+1)-1}}
=\frac{Y_{\psi(i+1)}}{Y_{\psi(i)}}\geq S.
$$
If $\psi(i+1)-1>\psi(i)$, then we have $M_{\psi(i)}Y_{\psi(i+1)}>R/S$ by the definition of $\varphi(i)$, and we have $Y_{\psi(i+1)-1} < SY_{\psi(i)} \leq SY_{\psi(i)+1}$ from the minimality of $\psi(i+1)$. We also have $M_{\psi(i)}Y_{\psi(i)+1} \leq R/S^3$ since $\psi(i)\in\mathcal{J}$. Therefore
$$
\frac{Y_{\varphi(i+1)}}{Y_{\varphi(i)}} =
\frac{Y_{\psi(i+1)}}{Y_{\psi(i+1)-1}}=
\frac{M_{\psi(i)}Y_{\psi(i+1)}}{M_{\psi(i)}Y_{\psi(i+1)-1}}
\geq\frac{R/S}{SM_{\psi(i)}Y_{\psi(i)+1}}
\geq\frac{R/S}{R/S^2}=S.
$$
This proves Equation \eqref{MSgrow}.
\medskip
$\bullet$~ Assume that $\varphi(i)=\psi(i+1)-1$ and $\varphi(i+1)=\psi(i+2)-1$.
By the previous case computations, we have
$$
\frac{Y_{\varphi(i+1)}}{Y_{\varphi(i)}}
=\frac{Y_{\psi(i+2)-1}}{Y_{\psi(i+1)-1}}
\geq\frac{Y_{\psi(i+1)}}{Y_{\psi(i+1)-1}}\geq S.
$$
We have $SY_{\psi(i+1)}> Y_{\psi(i+2)-1}$ from the minimality of $\psi(i+2)$. Thus since $\psi(i+1)-1\in\mathcal{J}$, we have
$$
M_{\varphi(i)}Y_{\varphi(i+1)}=M_{\psi(i+1)-1}Y_{\psi(i+2)-1}=M_{\psi(i+1)-1}Y_{\psi(i+1)}
\left(\frac{Y_{\psi(i+2)-1}}{Y_{\psi(i+1)}}\right) \leq R.
$$
This proves Equation \eqref{MSgrow}.
\end{proof}
\noindent \textbf{Case (ii).} Now we assume the second case and let $j_0 =\min\mathcal{J}$. Partition $\mathbb{Z}_{\geq j_0}$ into disjoint subsets
\[
\mathbb{Z}_{\geq j_0}= C_1 \sqcup D_1 \sqcup C_2 \sqcup D_2 \sqcup \cdots
\]
where $C_i\subset\mathcal{J}$ and $D_i\subset\mathcal{J}^{c}$ are sets of consecutive integers with
\[
\max C_i < \min D_i \leq \max D_i < \min C_{i+1}
\]
for all $i\geq 1$. We consider the following two subcases.
\\
\textbf{(ii) - 1.} If there is $i_0 \geq 1$ such that $|C_i|< 3\lceil\log_2{S}\rceil V$ for all $i\geq i_0$, then we have, for $k_0 = \min C_{i_0}$,
\eq{
\frac{k}{\log Y_k}\leq \frac{k_0+\left(3\lceil\log_2{S}\rceil V +1\right)|\{i\leq k: i\in\mathcal{J}^{c}\}|}{\log Y_{k}},
}
since any run of $3\lceil\log_2{S}\rceil V +1$ consecutive integers greater than or equal to $k_0$ contains an element of $\mathcal{J}^c$. Therefore $\lim\limits_{k\to\infty}Y_{k}^{1/k}=\infty$ by \eqref{assumption}, and this concludes the proof of Proposition \ref{lem:MBL} as in the case of finite $\mathcal{J}$ treated at the beginning.
\\
\textbf{(ii) - 2.} The remaining case is that the set
\eq{
\{i:|C_{i}|\geq 3\lceil\log_2{S}\rceil V \}=\{i(1)<i(2)<\cdots<i(k)<\cdots:k\in\mathbb{N}\}
}
is infinite.
For each $k\geq 1$, let us define an increasing finite sequence $(\psi_k (i))_{1\leq i \leq m_k +1}$ of positive integers by setting $\psi_k (1)=\min C_{i(k)}$ and by induction
\[
\psi_k (i+1) = \min \{j\in C_{i(k)}: SY_{\psi_k (i)} \leq Y_j \},
\] as long as this set is nonempty. Since $C_{i(k)}$ is a finite sequence of consecutive positive integers with length at least $3\lceil\log_2{S}\rceil V$ and $Y_{i+\lceil\log_2{S}\rceil V}\geq SY_{i}$ for every $i\geq 1$ by Lemma \ref{lem:CGGMS}, there exists an integer $m_k \geq 2$ such that $\psi_k (i)$ is defined for $i=1,\dots,m_k +1$. Note that $\psi_k (i)$ belongs to $\mathcal{J}$ since $C_{i(k)}\subset \mathcal{J}$.
As in \textbf{Case (i)}, let us define an increasing finite sequence $(\varphi_k (i))_{1\leq i\leq m_k}$ of positive integers by
\[
\varphi_k (i) =\begin{cases}
\psi_k (i) &\quad \text{if } M_{\psi_k (i)}Y_{\psi_k (i+1)} \leq R/S,\\
\psi_k (i+1)-1 &\quad \text{otherwise}.
\end{cases}
\]
Following the proof of \textbf{Case (i)}, we have for each $i=1,\dots,m_k -1$,
\eqlabel{kMSgrow}{
Y_{\varphi_k (i+1)}\geq S Y_{\varphi_k (i)} \quad\text{and}\quad M_{\varphi_k (i)}Y_{\varphi_k (i+1)} \leq R.
}
Note that $\varphi_k (m_k)<\varphi_{k+1}(1)$. Let us define an increasing finite sequence $(\varphi_k'(i))_{1\leq i\leq n_k +1}$ of positive integers to interpolate between $\varphi_k (m_k)$ and $\varphi_{k+1}(1)$. Let $j_0 =\varphi_{k+1}(1)$. If the set $\{j\in\mathbb{Z}_{\geq \varphi_k (m_k)}: Y_{j_0}\geq RY_j\}$ is empty, then we set $n_k =0$ and $\varphi_k'(1)=j_0 = \varphi_{k+1}(1)$. Otherwise, following \cite[Theorem 2.2]{BKLR}, by decreasing induction let $n_k \in\mathbb{Z}_{\geq 1}$ be the maximal positive integer for which there exist $j_1,\dots, j_{n_k}\in\mathbb{Z}_{\geq 1}$ such that, for $\ell=1,\dots,n_k$, the set $\{j\in\mathbb{Z}_{\geq \varphi_k (m_k)}: Y_{j_{\ell-1}}\geq RY_j\}$ is nonempty and $j_\ell$ is its largest element. Set $\varphi_k'(i)=j_{n_k +1 -i}$ for $i=1,\dots, n_k +1$. Then the sequence $(\varphi_k'(i))_{1\leq i\leq n_k +1}$ is contained in $[\varphi_k(m_k),\varphi_{k+1}(1)]$ and satisfies that for $i=1,\dots,n_k$,
\eqlabel{kMRgrow}{
Y_{\varphi_k' (i+1)}\geq RY_{\varphi_k'(i)} \quad\text{and}\quad M_{\varphi_k' (i)}Y_{\varphi_k'(i+1)}\leq R
} from the proof of \cite[Theorem 2.2]{BKLR}.
Now, alternately concatenating the sequences $(\varphi_k (i))_{1\leq i\leq m_k -1}$ and $(\varphi_k' (i))_{1\leq i\leq n_k}$ as $k$ ranges over $\mathbb{Z}_{\geq 1}$, we define $N_k = \sum_{\ell =1}^{k-1}(m_\ell -1 +n_\ell)$ and
\[
\varphi (i) =\begin{cases}
\varphi_k (i-N_k) &\quad \text{if } 1+N_k \leq i\leq m_k -1 +N_k,\\
\varphi_k' (i+1-m_k -N_k) &\quad \text{if } m_k +N_k \leq i\leq n_k -1 +m_k +N_k.
\end{cases}
\]
Here, we use the standard convention that an empty sum is zero.
With Equation \eqref{kMSgrow} for $i=1,\dots,m_k -2$ and Equation \eqref{kMRgrow} for $i=1,\dots,n_k$, since $\varphi_k'(n_k +1) = \varphi_{k+1}(1)$, it is enough to show the following lemma to prove that the map $\varphi$ satisfies Equation \eqref{Mgrow}.
\begin{lem}\label{lem:kMgrow}
For every $k\in\mathbb{Z}_{\geq 1}$, we have
\eqlabel{kMgrow}{
Y_{\varphi_k'(1)}\geq RY_{\varphi_{k}(m_k -1)}\quad\text{and}\quad M_{\varphi_k(m_k-1)}Y_{\varphi_k'(1)}\leq R.
}
\end{lem}
\begin{proof}
By $\varphi_k'(1)\geq \varphi_k(m_k)$ and Equation \eqref{kMSgrow} with $i=m_k-1$, we have
\[
Y_{\varphi_k'(1)}\geq Y_{\varphi_k(m_k)} \geq SY_{\varphi_k(m_k-1)} \geq RY_{\varphi_{k}(m_k -1)},
\] which proves the left hand side of Equation \eqref{kMgrow}.
If $\varphi_k'(1)=\varphi_k(m_k)$, then Equation \eqref{kMSgrow} with $i=m_k-1$ gives the right hand side of Equation \eqref{kMgrow}.
Now assume that $\varphi_k'(1)>\varphi_k(m_k)$. By the maximality of $n_k$, we have $Y_{\varphi_k'(1)}\leq RY_{\varphi_k(m_k)}$. First, we will prove that $\varphi_k(m_k)=\psi_k (m_k)$. For a contradiction, assume that $\varphi_k(m_k)=\psi_k(m_k+1)-1>\psi_k(m_k)$. Following the third subcase of the proof of Equation \eqref{MSgrow}, we have
\[
\frac{Y_{\psi_k (m_k+1)}}{Y_{\psi_k(m_k+1)-1}}=\frac{M_{\psi_k(m_k)}Y_{\psi_k (m_k+1)}}{M_{\psi_k(m_k)}Y_{\psi_k(m_k+1)-1}}\geq S.
\]
Hence by the construction of $\varphi_k'(1)$, we have $\varphi_k'(1)=\varphi_k(m_k)$, which is a contradiction to our assumption $\varphi_k'(1)>\varphi_k(m_k)$.
To show the right hand side of Equation \eqref{kMgrow}, we consider two possible values of $\varphi_k(m_k-1)$.
Assume that $\varphi_k(m_k-1)=\psi_k(m_k-1)$. If $\psi_k(m_k-1)>\psi_k(m_k)-1$, then by the definition of $\varphi_k(m_k-1)$, we have $M_{\psi_k(m_k-1)}Y_{\psi_k(m_k)}\leq R/S$. If $\psi_k(m_k-1)=\psi_k(m_k)-1$, then $M_{\psi_k(m_k-1)}Y_{\psi_k(m_k)}\leq R/S^3 \leq R/S$ since $\psi_k(m_k)-1\in \mathcal{J}$. Since $\varphi_k(m_k)=\psi_k(m_k)$, we have
\[
M_{\varphi_k(m_k-1)}Y_{\varphi_k'(1)} = M_{\psi_k(m_k-1)}Y_{\psi_k(m_k)}\left(\frac{Y_{\varphi_k'(1)}}{Y_{\varphi_k(m_k)}}\right) \leq R,
\]
which proves the right hand side of Equation \eqref{kMgrow}.
Assume that $\varphi_k(m_k-1)=\psi_k(m_k)-1$. Since $\varphi_k(m_k)=\psi_k(m_k)$ and $\psi_k(m_k)-1\in \mathcal{J}$, we have
\[
M_{\varphi_k(m_k-1)}Y_{\varphi_k'(1)} = M_{\psi_k(m_k)-1}Y_{\psi_k(m_k)}\left(\frac{Y_{\varphi_k'(1)}}{Y_{\varphi_k(m_k)}}\right) \leq R,
\]
which proves the right hand side of Equation \eqref{kMgrow}, and concludes the proof of Lemma \ref{lem:kMgrow}.
\end{proof}
Finally, we will show Equation \eqref{Mdensity} for the map $\varphi$. Since any run of $3\lceil\log_2{S}\rceil V +1$ consecutive integers in the complement of $\bigcup_{\ell\geq 1} C_{i(\ell)}$ contains an element of $\mathcal{J}^c$, there exists $c_0 \geq 0$ such that for every $k\geq 1$, we have
\[
\frac{|\{j\leq \varphi(k): j\notin \bigcup_{\ell\geq 1} C_{i(\ell)} \}|}{\log Y_{\varphi(k)}} \leq
\frac{c_0+\left(3\lceil\log_2{S}\rceil V +1\right)|\{j\leq \varphi(k): j \in\mathcal{J}^{c}\}|}{\log Y_{\varphi(k)}},
\]
\] which converges to $0$ as $k\to +\infty$ by \eqref{assumption}. Let us define
$$n(k)= |\{i\leq k : Y_{\varphi(i+1)} \geq SY_{\varphi(i)}\}|.$$
For each integer $\ell\geq 1$, since $Y_{i+\lceil\log_2{S}\rceil V}\geq SY_{i}$ for every $i\geq 1$ by Lemma \ref{lem:CGGMS}, and by the maximality of $m_\ell$ in the construction of $(\varphi_\ell (i))_{1\leq i\leq m_\ell}$, we have $|\{j\in C_{i(\ell)}: j\geq \varphi_{\ell}(m_\ell)\}|\leq 2\lceil\log_2{S}\rceil V$. If $\varphi(i)$ belongs to $C_{i(\ell)}$ but $\varphi(i+1)$ does not, then $\varphi(i)\geq \varphi_\ell (m_\ell)$. If $\varphi(i)$ and $\varphi(i+1)$ belong to $C_{i(\ell)}$, then $\varphi$ and $\varphi_\ell$ coincide at $i$ and $i+1$. Thus, by Equation \eqref{kMSgrow}, we have
\[
\begin{split}
k-n(k)&=|\{i\leq k : Y_{\varphi(i+1)} < SY_{\varphi(i)}\}|\\
&\leq \left(2\lceil\log_2{S}\rceil V\right) \big|\{j\leq \varphi(k): j\notin \bigcup_{\ell\geq 1} C_{i(\ell)}\}\big|.
\end{split}
\]
Therefore, we have
\[
\begin{split}
\limsup_{k\to\infty}\frac{k}{\log Y_{\varphi(k)}}&=\limsup_{k\to\infty}\frac{n(k)+k-n(k)}{\log Y_{\varphi(k)}} = \limsup_{k\to\infty} \frac{n(k)}{\log Y_{\varphi(k)}}\\
&\leq \limsup_{k\to\infty}\frac{n(k)}{\log S^{n(k)-1}Y_{\varphi(1)}}=\frac{1}{\log S}.
\end{split}
\]
This proves Equation \eqref{Mdensity} and concludes the proof of Proposition \ref{lem:MBL}.
\end{proof}
\subsection{Dimension estimates}
Following the notation in \cite{BHKV10}, given a sequence $\{\mb{y}_i\}$ in $\mathbb{Z}^m\setminus \{\mb{0}\}$ and $\alpha\in(0,1/2)$, let
\[
\text{Bad}_{\{\mb{y}_i\}}^\alpha \overset{\on{def}}{=}\{\mb{\theta}\in\mathbb{R}^m:|\mb{\theta}\cdot\mb{y}_{i}|_{\mathbb{Z}}\geq\alpha\ \text{for all}\ i\geq1\}.
\]
\begin{prop}\cite{CGGMS}\label{prop:CGGMS}
Let $A\in M_{m,n}$ be a matrix, let $(\mb{y}_k)_{k\geq 1}$ be a sequence of weighted best approximations to $^{t}A$, and let $R>1$ and $\alpha\in(0,1/2)$ be given.
Suppose that there exists an increasing function $\varphi:\mathbb{Z}_{\geq1}\to\mathbb{Z}_{\geq1}$ such that for any integer $i\geq1$
\[
M_{\varphi(i)}Y_{\varphi(i+1)}\leq R.
\]
Then $\textup{Bad}_{\{\mb{y}_{\varphi(i)}\}}^\alpha$ is a subset of $\textup{Bad}_A(\epsilon)$ where $\epsilon=\frac{1}{R}\left(\frac{\alpha^2}{4mn}\right)^{1/\delta}$ and $\delta=\min\{r_i,s_j:1\leq i\leq m, 1\leq j\leq n\}$.
\end{prop}
\begin{proof}
In the proof of \cite[Theorem 1.1]{CGGMS}, the condition $Y_{\varphi(i)+1}\geq R^{-1}Y_{\varphi(i+1)}$ is used. However, the assumption $M_{\varphi(i)}Y_{\varphi(i+1)}\leq R$ also implies the same conclusion.
\end{proof}
\begin{prop}\cite{CGGMS}\label{prop:CGGMS_2}
For any $\alpha\in(0,1/2)$, there exists $R(\alpha)>1$ with the following property. Let $(\mb{y}_k)_{k\geq1}$ be a sequence in $\mathbb{Z}^m\setminus\{\mb{0}\}$ such that $\|\mb{y}_{k+1}\|_{\mb{r}}/\|\mb{y}_{k}\|_{\mb{r}}\geq R(\alpha)$ for all $k\geq1$. Then
\[
\textup{dim}_{H}\left(\textup{Bad}_{\{\mb{y}_i\}}^\alpha\right) \geq m-C\limsup_{k\to\infty}\frac{k}{\log{\|\mb{y}_{k}\|_{\mb{r}}}}
\]
for some positive constant $C=C(\alpha)$.
\end{prop}
\begin{proof}
This proposition follows from the proof of \cite[Theorem 6.1]{CGGMS}.
\end{proof}
The two propositions are used in \cite[Theorem 5.1]{BKLR} in the unweighted setting.
\begin{proof}[Proof of Theorem \ref{thmA1} (\ref{S3})$\implies$(\ref{S1})]
Suppose $A$ is singular on average. By Corollary \ref{eqCor}, $^{t}A$ is also singular on average. Let $(\mb{y}_k)_{k\geq 1}$ be a sequence of weighted best approximations to $^{t}A$. Then, by Proposition \ref{lem:MBL}, Proposition \ref{prop:CGGMS}, and Proposition \ref{prop:CGGMS_2}, for each $S>R(\alpha)>1$, we have
\begin{align*}
\textup{dim}_{H}\left(\textup{Bad}_A(\epsilon)\right)
&\geq \textup{dim}_{H}\left(\textup{Bad}_{\{\mb{y}_{\varphi(i)}\}}^\alpha\right)\\
&\geq m- C\limsup_{k\to\infty}\frac{k}{\log{Y_{\varphi(k)}}}\\
&\geq m- \frac{C}{\log{S}}
\end{align*}
where $\epsilon=\frac{1}{R(\alpha)}\left(\frac{\alpha^2}{4mn}\right)^{1/\delta}$. Letting $S\to\infty$, we conclude that $\textup{dim}_{H}\left(\textup{Bad}_A(\epsilon)\right)=m$ for this choice of $\epsilon$.
\end{proof}
{
"timestamp": "2021-12-01T02:24:27",
"yymm": "2111",
"arxiv_id": "2111.15410",
"language": "en",
"url": "https://arxiv.org/abs/2111.15410"
}
\section{Introduction}
Video frame interpolation (VFI) has been extensively employed to deliver an improved user experience across a wide range of important applications. VFI increases the temporal resolution (frame rate) of a video through synthesizing intermediate frames between every two consecutive original frames. It can mitigate the need for costly high frame rate acquisition processes~\cite{kalluri2020flavr}, enhance the rendering of slow-motion content~\cite{jiang2018super}, support view synthesis~\cite{flynn2016deepstereo} and improve rate-quality trade-offs in video coding~\cite{wu2018video}.
In recent years, deep learning has empowered a variety of VFI algorithms. These methods can be categorized as flow-based~\cite{jiang2018super, xu2019quadratic} or kernel-based~\cite{niklaus2017video, lee2020adacof}. While flow-based methods use the estimated optical flow maps to warp input frames, kernel-based methods learn local or shared convolution kernels for synthesizing the output. To handle challenging scenarios encountered in VFI applications, various techniques have been employed to enhance these methods, including non-linear motion models~\cite{xu2019quadratic, sim2021xvfi, park2021asymmetric}, coarse-to-fine architectures~\cite{park2020bmbc, sim2021xvfi, chen2021pdwn, zhang2020flexible}, attention mechanisms~\cite{choi2020channel, kalluri2020flavr}, and deformable convolutions~\cite{lee2020adacof, gui2020featureflow}.
\begin{figure}[t]
\begin{center}
\includegraphics[width=\linewidth]{figures/overall.pdf}
\end{center}
\vspace{-6mm}
\caption{\label{fig:overview}
High-level architecture of ST-MFNet, which employs a two-stage workflow to interpolate an intermediate frame.}
\vspace{-5mm}
\end{figure}
Although these methods significantly outperform conventional VFI approaches~\cite{baker2011database}, their performance can still be inconsistent, especially for content exhibiting large motions, occlusions and dynamic textures. Large motion typically means large pixel displacements, which are difficult to capture using Convolutional Neural Networks (CNNs) with limited receptive fields~\cite{adaconv, niklaus2017video}. In the case of occlusion, pixels relating to occluded objects will not appear in all input frames, thus preventing interpolation algorithms from accurately estimating the intermediate locations of those pixels~\cite{choi2020channel,kalluri2020flavr}. Finally, \textit{dynamic textures} (e.g. water, fire, foliage, etc.) exhibit more complex motion characteristics compared to the movements of rigid objects~\cite{zhang2011parametric,tafi1}. Typically, they are spatially irregular and temporally stochastic, causing most existing VFI methods to fail, especially those based on optical flow~\cite{liu2017video, jiang2018super}.
To solve these problems, we propose a novel video frame interpolation model, the Spatio-Temporal Multi-Flow Network (ST-MFNet), which consistently offers improved interpolation performance across a wide range of content types. ST-MFNet employs a two-stage architecture, as shown in Figure \ref{fig:overview}. In Stage I, the Multi-InterFlow Network (MIFNet) first predicts multi-interflows~\cite{dai2017deformable,lee2020adacof} at multiple scales (including an up-sampling scale simulating sub-pixel motion estimation), using a customized CNN architecture, UMSResNext, with variable kernel sizes. The multi-flows here correspond to a many-to-one mapping which enables more flexible transformation, facilitating the modeling of complex motions. To further improve the performance for large motions, a Bi-directional Linear Flow Network (BLFNet) is employed to linearly approximate the intermediate flows based on the bi-directional flows between input frames, which are estimated using a coarse-to-fine architecture~\cite{sun2018pwc}. In the second stage, inspired by recent work on texture synthesis~\cite{xie2019learning, yang2021spatiotemporal}, we integrate a 3D CNN, Texture Enhancement Network (TENet) that performs spatial and temporal filtering to capture longer-range dynamics and to predict textural residuals. Finally, we trained our model based on the ST-GAN~\cite{yang2021spatiotemporal} methodology, which was originally proposed for texture synthesis. This ensures both spatial consistency and temporal coherence of interpolated content. Extensive quantitative and qualitative studies have been performed which demonstrate the superior performance of ST-MFNet over current state-of-the-art VFI methods on a wide range of test data including large and complex motions and dynamic textures.
The primary contributions of this work are:
\begin{itemize}[noitemsep,nolistsep,leftmargin=*]
\item A novel VFI method where multi-flow based (MIFNet) and single-flow based warping (BLFNet) are combined to enhance the capturing of complex and large motions.
\item A new CNN architecture (UMSResNext) for the MIFNet, which predicts multiple intermediate flows at various scales, including an up-sampling scale for high precision sub-pixel motion estimation.
\item The use of a spatio-temporal CNN (TENet) and ST-GAN, which were originally designed for texture synthesis, to enhance the interpolation of complex textures.
\item Validation, through comprehensive experiments, that our model consistently outperforms state-of-the-art VFI methods on various scenarios, including large and complex motions and various texture types.
\end{itemize}
\section{Related Work}
In this section, we summarize recent advances in video frame interpolation (VFI) and then briefly introduce examples of dynamic texture synthesis, which have inspired the development of our method.
\subsection{Video Frame Interpolation}
Most existing VFI methods can be classified as flow-based or kernel-based:
\noindent\textbf{Flow-based VFI.} This class typically involves two steps: optical flow estimation and image warping. Input frames, $I_1$ and $I_2$, are warped to a target temporal location $t$ based on either the intermediate optical flows $F_{t\rightarrow 1}, F_{t\rightarrow 2}$ (backward warping~\cite{jaderberg2015spatial}), or $F_{1\rightarrow t}, F_{2\rightarrow t}$ (forward warping~\cite{niklaus2018context}). These flows can be approximated from bi-directional optical flows ($F_{1\rightarrow 2}$ and $F_{2\rightarrow 1}$) between the input frames~\cite{jiang2018super, reda2019unsupervised, bao2019memc, bao2019depth, xu2019quadratic, niklaus2018context, niklaus2020softmax, liu2020enhanced, sim2021xvfi}. Such approximations often assume motion linearity, and hence are prone to errors in non-linear motion scenarios. Various efforts have been made to alleviate this issue, including the use of depth information~\cite{bao2019depth}, higher order motion models~\cite{xu2019quadratic, liu2020enhanced}, and adaptive forward warping~\cite{niklaus2020softmax}. A second group of methods~\cite{liu2017video, xue2019video, park2020bmbc, zhang2020flexible, huang2020rife, park2021asymmetric} have been developed to improve approximation by directly predicting intermediate flows. These approaches typically employ a coarse-to-fine architecture, which supports a larger receptive field for capturing large motions. In all of the above methods, the predicted flows correspond to a one-to-one pixel mapping, which inherently limits the ability to capture complex motions.
\noindent\textbf{Kernel-based VFI.} In these methods, various convolution kernels~\cite{adaconv, niklaus2017video, lee2020adacof, shi2020video, ding2021cdfi, cheng2021multiple, chen2021pdwn,gui2020featureflow, long2016learning, choi2020channel, kalluri2020flavr} are learned as a basis for synthesizing interpolated pixels. Earlier approaches~\cite{adaconv, niklaus2017video} predict a fixed-size kernel for each output location, which is then convolved with co-located input pixels. This limits the magnitude of captured motions to the kernel size used, while more memory and computational capacity are required when larger kernel sizes are adopted. To overcome this problem, deformable convolution (DefConv)~\cite{dai2017deformable} was adapted to VFI in AdaCoF~\cite{lee2020adacof}, which allows kernels to be convolved with any input pixels pointed by local offset vectors. This can be considered as \textit{multi-interflows}, representing a many-to-one mapping. Further improvements to AdaCoF have been achieved by allowing space-time sampling~\cite{shi2020video}, feature pyramid warping~\cite{ding2021cdfi}, and using a coarse-to-fine architecture~\cite{chen2021pdwn}.
\subsection{Dynamic Texture Synthesis}
Dynamic textures (e.g. water, fire, leaves blowing in the wind, etc.) generally exhibit high spatial frequency energy alongside temporal stochasticity, with inter-frame motions irregular in both the spatial and temporal domains. Classic synthesis methods rely on mathematical models such as Markov random fields~\cite{wei2000fast} and auto-regressive moving average models~\cite{doretto2003dynamic} to capture underlying motion characteristics. More recently, deep learning techniques, in particular 3D CNNs and GAN-based training~\cite{gatys2015texture, yang2016stationary, xie2019learning,wang2021conditional,yang2021spatiotemporal}, have been adopted to achieve more realistic synthesis results. It should be noted that both dynamic texture synthesis and VFI require accurate modeling of spatio-temporal characteristics. However, the techniques developed specifically for texture synthesis have not yet been fully exploited in VFI methods. This is a focus of our work.
\begin{figure*}[ht]
\subfloat[MIFNet] {\includegraphics[width=0.68\linewidth]{figures/UMSResNext.pdf}}\;\!\!
\hspace{8mm}
\subfloat[Multi-flow head] {\includegraphics[width=0.23\linewidth]{figures/multiflowhead.pdf}}\;\!\!
\vspace{-3mm}
\caption{\label{fig:mifnet}
Illustration of the MIFNet. (a) The overall architecture of MIFNet, with a U-Net style backbone and multi-flow estimation heads at three scales. (b) The convolutional layers inside the multi-flow head at each scale.}
\vspace{-4mm}
\end{figure*}
\begin{figure}[ht]
\begin{center}
\includegraphics[width=0.9\linewidth]{figures/MSResNextBlock.pdf}
\end{center}
\vspace{-6mm}
\caption{\label{fig:msresnextblock}
Illustration of the MSResNext block, which consists of two ResNext branches with different kernel sizes, followed by a channel attention module.}
\vspace{-4mm}
\end{figure}
\section{Proposed Method: ST-MFNet}
The architecture of ST-MFNet is shown in Figure~\ref{fig:overview}. While conventional VFI methods are formulated as generating the intermediate frame $I_t$ ($t=1.5$) between two given consecutive frames $I_1, I_2$, we instead employ two more frames $I_0, I_3$ to improve the modeling of motion dynamics. Given the consecutive frames $I_0, I_1, I_2, I_3$, our model first processes $I_1, I_2$ in two branches. The Multi-InterFlow Network (MIFNet) branch estimates the multi-scale many-to-one multi-flows from $I_t$ to $I_1, I_2$; the Bi-directional Linear Flow Network (BLFNet) branch approximates one-to-one optical flows from $I_1, I_2$ to $I_t$. The input frames are warped based on the flows generated by MIFNet and BLFNet, and then fused by the Multi-Scale Fusion module to obtain an intermediate result $\tilde{I}_t$. This multi-branch structure combines both single-flow and multi-flow based methods and was found to offer enhanced interpolation performance. In the second stage, this frame is combined with all the inputs $I_0, I_1, I_2, I_3$ in temporal order and fed into the Texture Enhancement Network (TENet), which captures longer-range dynamics and generates residual signals for the final output.
\subsection{Multi-InterFlow Network} \label{sec:method_1}
\noindent\textbf{Multi-InterFlow warping.} For self-completeness, we first briefly describe the multi-interflow warping operation~\cite{lee2020adacof}. Given two images $I_A, I_B$ with size $H\times W$, conventional optical flow $F_{A\rightarrow B} = (\mathbf{f}_x, \mathbf{f}_y)$ from $I_A$ to $I_B$ specifies the x- and y-components of pixel-wise offset vectors, where $\mathbf{f}_x, \mathbf{f}_y\in \mathbb{R}^{H\times W}$. The pixel value at each location $(x,y)$ of the corresponding backwarped~\cite{jaderberg2015spatial} $\hat{I}_A$ is defined as
\begin{gather}
\hat{I}_A(x,y) = I_B(x+\mathbf{f}_x(x,y), y+\mathbf{f}_y(x,y))
\end{gather}
where the values at non-integer grid locations are obtained via bilinear interpolation. The multi-interflow proposed in~\cite{lee2020adacof} can be defined as $G_{A\rightarrow B} = (\boldsymbol{\alpha}, \boldsymbol{\beta}, \mathbf{w})$, where now $\boldsymbol{\alpha}, \boldsymbol{\beta} \in \mathbb{R}^{H\times W\times N}$ represent the collections of the x- and y-components of $N$ flow vectors respectively, and $\mathbf{w}\in [0,1]^{H\times W\times N}$ contains their weights ($\sum_{i=1}^N \mathbf{w}(x,y,i)=1$). That is, for each location $(x,y)$, $G_{A\rightarrow B}$ contains $N$ flow vectors and $N$ weights. The corresponding warping is defined as follows.
\vspace{-10pt}
\small
\begin{equation}
\label{eqn2}
\hat{I}_A(x,y) = \sum_{i=1}^N \mathbf{w}(x,y,i) \cdot I_B(x+\boldsymbol{\alpha}(x,y,i), y+\boldsymbol{\beta}(x,y,i))
\end{equation}
\normalsize
Such multi-flow warping corresponds to a many-to-one mapping, which allows flexible sampling of source pixels. This enables the capture of larger and more complex motions.
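To make the many-to-one mapping of Equation~(\ref{eqn2}) concrete, the following minimal NumPy sketch implements multi-interflow backward warping with bilinear sampling for a single-channel image. This is an illustration only; the actual MIFNet operates on batched multi-channel GPU tensors, and the function name and the edge-clipping behavior here are our own assumptions.

```python
import numpy as np

def backwarp_multiflow(I_B, alpha, beta, w):
    """Many-to-one multi-flow backward warping (sketch of Eqn. 2).

    I_B:   source image, shape (H, W)
    alpha: x-components of the N flow vectors, shape (H, W, N)
    beta:  y-components of the N flow vectors, shape (H, W, N)
    w:     per-vector weights summing to 1 over N, shape (H, W, N)
    """
    H, W, N = alpha.shape
    ys, xs = np.mgrid[0:H, 0:W]
    out = np.zeros((H, W))
    for i in range(N):
        # sample location for the i-th flow vector at each pixel
        sx = xs + alpha[..., i]
        sy = ys + beta[..., i]
        # bilinear interpolation at (sy, sx), with edge clipping
        x0 = np.clip(np.floor(sx).astype(int), 0, W - 1)
        y0 = np.clip(np.floor(sy).astype(int), 0, H - 1)
        x1 = np.clip(x0 + 1, 0, W - 1)
        y1 = np.clip(y0 + 1, 0, H - 1)
        fx = np.clip(sx, 0, W - 1) - x0
        fy = np.clip(sy, 0, H - 1) - y0
        val = (I_B[y0, x0] * (1 - fx) * (1 - fy) + I_B[y0, x1] * fx * (1 - fy)
               + I_B[y1, x0] * (1 - fx) * fy + I_B[y1, x1] * fx * fy)
        # accumulate the weighted contribution of the i-th flow vector
        out += w[..., i] * val
    return out
```

With all-zero flows and uniform weights the operation reduces to the identity, while a conventional single optical flow is recovered as the special case $N=1$.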
Given input frames $I_1, I_2$, the MIFNet predicts the multi-interflows $\{G^l_{t\rightarrow 1}, G^l_{t\rightarrow 2}\}$ from the intermediate frame $I_t$ to the inputs at three scale levels: $l=-1,0,1$, where $l=i$ means spatial down-sampling by $2^i$ (i.e. $l=-1$ denotes up-sampling), so that re-sampled inputs $I^l_{1},I^l_{2}$ can be warped to time $t$ using Equation (\ref{eqn2}) to produce $\hat{I}^l_{t1},\hat{I}^l_{t2}$ respectively. Here the incorporation of the finer scale ($l=-1$) further increases the precision of multi-flow warping (through 8-tap filter up-sampling, see below).
\noindent\textbf{Architecture.} The architecture of the MIFNet is shown in Figure~\ref{fig:mifnet}~(a). In order to capture pixel movements at multiple scales, we devise a U-Net style feature extractor, U-MultiScaleResNext (UMSResNext), consisting of eight MSResNext blocks (illustrated in Figure~\ref{fig:msresnextblock}). Each MSResNext block employs two ResNext blocks~\cite{xie2017aggregated} in parallel with different kernel sizes in the middle layer, 3$\times$3 and 7$\times$7, which further increases the network \textit{cardinality}~\cite{xie2017aggregated,ma2020cvegan}. The outputs of these two ResNext blocks are then concatenated and connected to a channel attention module~\cite{Hu_2018_CVPR}, which learns adaptive weighting of the feature maps extracted by the two ResNext blocks. Such a feature selection mechanism has also been found to enhance motion modeling~\cite{choi2020channel, kalluri2020flavr}. In UMSResNext, the up-sampling operation is performed by replacing the k$\times$k grouped convolutions in the middle layer with (k+1)$\times$(k+1) grouped transposed convolutions.
The features extracted by UMSResNext are then passed to the multi-flow heads for multi-interflow prediction. In contrast to \cite{lee2020adacof}, multi-flows here are predicted at various scales $l=-1,0,1$, and occlusion maps are not generated (occlusion is handled by the BLFNet). As shown in Figure~\ref{fig:mifnet} (b), each multi-flow head contains 6 sub-branches, predicting the x-, y-components ($\boldsymbol{\alpha},\boldsymbol{\beta}$) and the kernel weights ($\mathbf{w}$) of ${G^l_{t\rightarrow 1},G^l_{t\rightarrow 2}}$. The predicted flows are then used to backwarp the inputs $I_1,I_2$ at corresponding scales using Equation~(\ref{eqn2}). Here a bilinear filter is used for down-sampling input frames, and an 8-tap filter originally designed for sub-pixel motion estimation~\cite{sullivan2012overview} is employed for up-sampling.
\subsection{Bi-directional Linear Flow Network}
To improve large motion interpolation, bi-directional flows $F_{1\rightarrow 2}, F_{2\rightarrow 1}$ between inputs $I_1, I_2$ are also predicted using a pre-trained flow estimator~\cite{sun2018pwc}, which is based on a coarse-to-fine architecture. The intermediate flows are then linearly approximated as follows.
\begin{equation}
F_{1\rightarrow t} = 0.5F_{1\rightarrow 2} \quad F_{2\rightarrow t} = 0.5F_{2\rightarrow 1}
\end{equation}
According to the intermediate flows, the frames $I_1, I_2$ are forward warped using the efficient softsplat operator~\cite{niklaus2020softmax}, which learns occlusion-related softmax-like weighting of reference pixels in the forward warping process. Another advantage of softsplat is that it is differentiable, allowing the flow estimator to be end-to-end optimized. Finally, the BLFNet branch outputs warped frames $\hat{I}_{t1}^\text{soft}, \hat{I}_{t2}^\text{soft}$. The employment of the BLFNet branch was found to be essential for handling large motion and occlusion and improving the overall capacity of the proposed model.
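The linear approximation above can be sketched in a few lines. Note that here $t$ is treated as a normalized position in $(0,1)$ (our assumption for the sketch; the paper's frame index $t=1.5$ corresponds to the midpoint $t=0.5$ in this convention), so the general form reduces to the paper's $F_{1\rightarrow t} = 0.5F_{1\rightarrow 2}$ and $F_{2\rightarrow t} = 0.5F_{2\rightarrow 1}$:

```python
import numpy as np

def linear_intermediate_flows(F_12, F_21, t=0.5):
    """Linear approximation of the intermediate flows used in BLFNet.

    F_12, F_21: bi-directional optical flows between I_1 and I_2,
                each of shape (H, W, 2).
    t:          normalized temporal position of the target frame in (0, 1);
                t = 0.5 recovers the midpoint case used in the paper.
    """
    # assume constant velocity between I_1 and I_2 (motion linearity)
    return t * F_12, (1.0 - t) * F_21
```

The constant-velocity assumption is exactly why this branch alone is prone to errors for non-linear motion, which the MIFNet branch compensates for.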
\subsection{Multi-Scale Fusion Module}
The Multi-Scale Fusion Module is employed to produce an intermediate interpolation result using the frames warped at multiple scales in the previous steps. Here we adopt the GridNet~\cite{fourure2017residual} architecture due to its superior performance on fusing multi-scale information~\cite{niklaus2018context, niklaus2020softmax}. The GridNet is configured here to have 4 columns and 3 rows, with the first, second and third rows corresponding to scales of $l=-1,0,1$ respectively. The first and third rows take $\{\hat{I}^{-1}_{t1}, \hat{I}^{-1}_{t2}\}$ and $\{\hat{I}^{1}_{t1}, \hat{I}^{1}_{t2}\}$ as inputs, while the second row takes $\{\hat{I}^{0}_{t1}, \hat{I}^{0}_{t2}, \hat{I}_{t1}^\text{soft}, \hat{I}_{t2}^\text{soft}\}$, where $\{\cdot\}$ denotes channel-wise concatenation. Finally, this module outputs the intermediate result $\tilde{I}_t$ at the original spatial resolution ($l=0$).
\begin{figure}[t]
\begin{center}
\includegraphics[width=1\linewidth]{figures/tenet.pdf}
\end{center}
\vspace{-7mm}
\caption{\label{fig:tenet}
The architecture of the Texture Enhancement Network.}
\vspace{-4.5mm}
\end{figure}
\subsection{Texture Enhancement Network}
At the end of the first stage, the output of the Multi-Scale Fusion module, $\tilde{I}_t$, is concatenated with the four original inputs to form $\{I_0,I_1,\tilde{I}_t,I_2,I_3\}$, which is then fed into the Texture Enhancement Network (TENet). Including additional frames here allows better modeling of higher-order motions and also provides more information on longer-term spatio-temporal characteristics. Motivated by recent work in dynamic texture synthesis~\cite{xie2019learning, yang2021spatiotemporal}, where spatio-temporal filtering was found to be effective for generating coherent video textures, we integrate a 3D CNN for texture enhancement. This CNN architecture (shown in Figure~\ref{fig:tenet}) is a modified version of the network developed in \cite{kalluri2020flavr}, but with reduced layer widths. This reflects the fact that the intermediate frame $\tilde{I}_t$, which is already relatively close to the target, is available at this point; this differs from the original scenario in \cite{kalluri2020flavr}, where the network is expected to directly synthesize the interpolated output from the four original input frames. Finally, the TENet outputs a residual signal containing the textural difference between $\tilde{I}_t$ and the target frame, which contributes to the final output of ST-MFNet.
\subsection{Loss Functions}\label{sec:loss}
We trained two versions of ST-MFNet in this work. For the distortion oriented model, a Laplacian pyramid loss~\cite{bojanowski2017optimizing} ($\mathcal{L}_{lap}$) was used as the objective function. This model was further fine-tuned using an ST-GAN based perceptual loss ($\mathcal{L}_{p}$) to obtain the perceptually optimized version.
\noindent\textbf{Laplacian pyramid loss.} ST-MFNet was trained end-to-end by matching its output $I^{out}_t$ with the ground-truth intermediate frame $I^{gt}_t$ using the Laplacian pyramid loss~\cite{bojanowski2017optimizing}, which has been previously used for VFI in~\cite{niklaus2018context,niklaus2020softmax,liu2020enhanced}. The loss function is defined below.
\begin{equation}
\mathcal{L}_{lap} = \sum_{s=1}^{S} 2^{s-1} \norm{L^s(I^{out}_t)-L^s(I^{gt}_t)}_1 \label{l_lap}
\end{equation}
Here $L^s(I)$ denotes the $s$\textsuperscript{th} level of the Laplacian pyramid of an image $I$, and $S$ is the maximum level.
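The loss in Equation~(\ref{l_lap}) can be sketched as follows for a single-channel image. The 3$\times$3 binomial blur used here is a stand-in for the Gaussian filter of the standard Laplacian pyramid (an assumption of this sketch, not the paper's exact filter), and the helper names are our own:

```python
import numpy as np

def gaussian_blur(img):
    # separable 3x3 binomial blur with edge replication
    # (a simple stand-in for the pyramid's Gaussian filter)
    p = np.pad(img, 1, mode='edge')
    k = np.array([1.0, 2.0, 1.0]) / 4.0
    tmp = k[0] * p[:, :-2] + k[1] * p[:, 1:-1] + k[2] * p[:, 2:]
    return k[0] * tmp[:-2, :] + k[1] * tmp[1:-1, :] + k[2] * tmp[2:, :]

def laplacian_pyramid(img, levels):
    pyr, cur = [], img
    for _ in range(levels - 1):
        low = gaussian_blur(cur)
        pyr.append(cur - low)   # band-pass residual L^s at this level
        cur = low[::2, ::2]     # downsample for the next level
    pyr.append(cur)             # coarsest (low-pass) level
    return pyr

def lap_loss(out, gt, S=5):
    """sum_{s=1}^{S} 2^{s-1} * ||L^s(out) - L^s(gt)||_1 (Eqn. 4 sketch)."""
    p_out, p_gt = laplacian_pyramid(out, S), laplacian_pyramid(gt, S)
    # enumerate starts at s=0, so 2**s corresponds to 2^{s-1} for s=1..S
    return sum(2.0 ** s * np.abs(a - b).sum()
               for s, (a, b) in enumerate(zip(p_out, p_gt)))
```

The exponentially growing weights emphasize the coarser pyramid levels, which encode large-scale structure of the interpolated frame.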
\noindent\textbf{Spatio-temporal adversarial loss.}
To further improve the perceptual quality of the ST-MFNet output, we also trained our model using the Spatio-Temporal Generative Adversarial Network (ST-GAN) training methodology~\cite{yang2021spatiotemporal}. Unlike a conventional GAN~\cite{goodfellow2020generative}, which focuses on a single image, the discriminator $D$ of the ST-GAN also processes temporally adjacent video frames, which improves temporal consistency. This is key for video frame interpolation. The architecture of the ST-GAN discriminator used in this work is provided in Appendix~\ref{sec:dis}. This discriminator was trained with the following loss.
\begin{equation}
\mathcal{L}_D = -\log (1-D(I^{out}_t, I_1, I_2)) - \log (D(I_t^{gt}, I_1, I_2))
\end{equation}
The corresponding adversarial loss for the generator (ST-MFNet) is given below.
\begin{equation}
\mathcal{L}_{adv} = -\log (D(I^{out}_t, I_1, I_2))
\end{equation}
This is then combined with the Laplacian pyramid loss to form the perceptual loss for ST-MFNet fine-tuning,
\begin{align}
\mathcal{L}_{p} = \mathcal{L}_{lap} + \lambda\mathcal{L}_{adv} \label{l_p}
\end{align}
where $\lambda$ is a weighting hyper-parameter that controls the perception-distortion trade-off~\cite{blau2018perception}.
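For clarity, the three loss terms above can be sketched as scalar functions of the discriminator outputs. This sketch assumes $D$ returns a probability in $(0,1)$ and adds a small epsilon for numerical stability (the epsilon is our assumption, not part of the paper's formulation):

```python
import numpy as np

def d_loss(d_fake, d_real, eps=1e-8):
    """Discriminator loss: -log(1 - D(I_out, I1, I2)) - log(D(I_gt, I1, I2)).

    d_fake, d_real: scalar discriminator outputs in (0, 1) for the
    interpolated and ground-truth frames respectively.
    """
    return -np.log(1.0 - d_fake + eps) - np.log(d_real + eps)

def g_perceptual_loss(l_lap, d_fake, lam=100.0, eps=1e-8):
    """Generator fine-tuning loss L_p = L_lap + lambda * L_adv,
    where L_adv = -log(D(I_out, I1, I2))."""
    l_adv = -np.log(d_fake + eps)
    return l_lap + lam * l_adv
```

During fine-tuning, $\lambda$ (set to 100 in our experiments, see Section~\ref{sec:expsetup}) balances the fidelity term against the adversarial term.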
\section{Experimental Setup}\label{sec:expsetup}
\noindent\textbf{Implementation details.}
In our implementation, we set the number of flows $N=25$ (the default value of the original multi-flows in \cite{lee2020adacof}) for the MIFNet branch. The maximum level $S$ for $\mathcal{L}_{lap}$ was set to 5, and the weighting hyper-parameter $\lambda=100$. We used the AdaMax optimizer~\cite{kingma2014adam} in the training with $\beta_1=0.9,\beta_2=0.999$. The learning rate was set to 0.001 and reduced by a factor of 0.5 whenever the validation performance stopped improving for 5 epochs. The pre-trained flow estimator~\cite{sun2018pwc} in the BLFNet branch was frozen for the first 60 epochs and then fine-tuned for 10 more epochs to further improve VFI performance. The network was trained for a total of 70 epochs using a batch size of 4. All training and evaluation processes were executed on an NVIDIA P100 GPU.
\noindent\textbf{Training datasets.} We used the training split of the Vimeo-90k (septuplet) dataset~\cite{xue2019video} which contains 91,701 frame septuplets at a spatial resolution of 448$\times$256. It is noted that Vimeo-90k was produced with constrained motion magnitude and complexity. To further enhance the VFI performance on large motion and dynamic textures, we used an additional dataset, BVI-DVC~\cite{ma2020bvi}, which contains 800 videos of 64 frames at four spatial resolutions (200 videos each): 2160p, 1080p, 540p and 270p. This dataset covers a wide range of texture/motion types and frame rates (from 24 to 120 FPS). For each training epoch, we randomly sampled 12800, 6400, 800, 800 septuplets from these four resolution groups respectively, leaving out a subset of video frames for validation. We augmented all septuplets from both Vimeo-90k and BVI-DVC by randomly cropping 256$\times$256 patches and performing flipping and temporal order reversing. This resulted in more than 100,000 septuplets of 256$\times$256 patches. In each septuplet, the 1\textsuperscript{st}, 3\textsuperscript{rd}, 5\textsuperscript{th} and 7\textsuperscript{th} frames were used as inputs and the 4\textsuperscript{th} as the ground-truth target. The test split of Vimeo-90k together with the unused subset of BVI-DVC was utilized as the validation set for hyper-parameter tuning and training monitoring.
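The augmentation pipeline described above (random 256$\times$256 cropping, flipping and temporal order reversal, with the 1\textsuperscript{st}, 3\textsuperscript{rd}, 5\textsuperscript{th} and 7\textsuperscript{th} frames as inputs and the 4\textsuperscript{th} as target) can be sketched as follows. The function name, flip probabilities and array layout are our own assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def augment_septuplet(frames, patch=256):
    """Random crop + flips + temporal reversal on a frame septuplet.

    frames: array of shape (7, H, W, C) with H, W >= patch.
    Returns (inputs, target): the four input frames and the ground truth.
    """
    _, H, W, _ = frames.shape
    # random spatial crop of size patch x patch
    y = rng.integers(0, H - patch + 1)
    x = rng.integers(0, W - patch + 1)
    out = frames[:, y:y + patch, x:x + patch, :]
    if rng.random() < 0.5:    # horizontal flip
        out = out[:, :, ::-1, :]
    if rng.random() < 0.5:    # vertical flip
        out = out[:, ::-1, :, :]
    if rng.random() < 0.5:    # temporal order reversal
        out = out[::-1]
    # frames 1, 3, 5, 7 are inputs; frame 4 is the ground-truth target
    # (the roles are symmetric, so temporal reversal preserves them)
    inputs, target = out[[0, 2, 4, 6]], out[3]
    return inputs, target
```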
\begin{figure*}[t]
\begin{center}
\subfloat[Overlay]
{\includegraphics[width=0.117\linewidth]{figures/mifnet_overlay.pdf}}\,
\subfloat[GT]
{\includegraphics[width=0.117\linewidth]{figures/mifnet_gt.pdf}}\,
\subfloat[w/o MIFNet]
{\includegraphics[width=0.117\linewidth]{figures/mifnet_womifnet.pdf}}\,
\subfloat[w/ MIFNet]
{\includegraphics[width=0.117\linewidth]{figures/mifnet_wmifnet.pdf}}\,\,
\subfloat[Overlay]
{\includegraphics[width=0.117\linewidth]{figures/blfnet_overlay.pdf}}\,
\subfloat[GT]
{\includegraphics[width=0.117\linewidth]{figures/blfnet_gt.pdf}}\,
\subfloat[w/o BLFNet]
{\includegraphics[width=0.117\linewidth]{figures/blfnet_woblfnet.pdf}}\,
\subfloat[w/ BLFNet]
{\includegraphics[width=0.117\linewidth]{figures/blfnet_wblfnet.pdf}}\\
\subfloat[Overlay]
{\includegraphics[width=0.117\linewidth]{figures/umsresnext_overlay.pdf}}\,
\subfloat[GT]
{\includegraphics[width=0.117\linewidth]{figures/umsresnext_gt.pdf}}\,
\subfloat[U-Net]
{\includegraphics[width=0.117\linewidth]{figures/umsresnext_woumsresnext.pdf}}\,
\subfloat[UMSResNext]
{\includegraphics[width=0.117\linewidth]{figures/umsresnext_wumsresnext.pdf}} \,\,
\subfloat[Overlay]
{\includegraphics[width=0.117\linewidth]{figures/tenet_overlay.pdf}}\,
\subfloat[GT]
{\includegraphics[width=0.117\linewidth]{figures/tenet_gt.pdf}}\,
\subfloat[w/o TENet]
{\includegraphics[width=0.117\linewidth]{figures/tenet_wotenet.pdf}}\,
\subfloat[w/ TENet]
{\includegraphics[width=0.117\linewidth]{figures/tenet_wtenet.pdf}}\\
\subfloat[Overlay]
{\includegraphics[width=0.16\linewidth]{figures/gan_overlay.pdf}} \,
\subfloat[GT]
{\includegraphics[width=0.16\linewidth]{figures/gan_gt.pdf}} \,
\subfloat[Ours-$\mathcal{L}_{lap}$]
{\includegraphics[width=0.16\linewidth]{figures/gan_lap.pdf}} \,
\subfloat[TGAN]
{\includegraphics[width=0.16\linewidth]{figures/gan_ficondgan.pdf}} \,
\subfloat[FIGAN]
{\includegraphics[width=0.16\linewidth]{figures/gan_figan.pdf}} \,
\subfloat[Ours-$\mathcal{L}_{p}$]
{\includegraphics[width=0.16\linewidth]{figures/gan_stgan.pdf}}
\end{center}
\vspace{-5mm}
\caption{Qualitative results interpolated by different variants of our method. Here ``Overlay" means the overlaid adjacent frames. Figures (a)-(d): w/ MIFNet vs w/o MIFNet; figures (e)-(h): w/ BLFNet vs w/o BLFNet; figures (i)-(l): UMSResNext vs U-Net; figures (m)-(p): w/ TENet vs w/o TENet; figures (q)-(v): comparison of different GANs.}
\label{fig:ablation}
\vspace{-3mm}
\end{figure*}
\noindent\textbf{Evaluation dataset.}
Since our model takes four frames as input, the evaluation dataset should be able to provide frame quintuplets $I_0,I_1,I_t^{gt},I_2,I_3$ ($t=1.5$). In this work, we used the test quintuplets in \cite{xu2019quadratic}, which were extracted from the UCF-101~\cite{soomro2012ucf101} (100 quintuplets) and DAVIS~\cite{perazzi2016benchmark} (2847 quintuplets) datasets. The evaluation was also based on the SNU-FILM dataset~\cite{choi2020channel}, which specifies a list of 310 triplets at four motion magnitude levels. As original sequences are provided in the SNU-FILM dataset, we extended its pre-defined test triplets into quintuplets for the evaluation here. Other commonly used test datasets, e.g. Middlebury~\cite{baker2011database} and UCF-DVF~\cite{liu2017video}, have not been employed here. This is because these databases only contain frame triplets, which cannot provide sufficient input frames for our model.
To further test interpolation performance on various texture types, we developed a new test set, VFITex, which contains twenty 100-frame videos at UHD or HD resolution and with a frame rate of 24, 30 or 50 FPS, collected from the Xiph~\cite{montgomery3xiph}, Mitch Martinez Free 4K Stock Footage~\cite{mitch}, UVG database~\cite{mercat2020uvg} and the Pexels website~\cite{pexels}. This dataset covers diverse textured scenes, including crowds, flags, foliage, animals, water, leaves, fire and smoke. Based on the computational capacity available, we center-cropped HD patches from the UHD sequences, preserving the original UHD characteristics. All frames in each sequence were used for evaluation, totaling 940 quintuplets. More details of the training and evaluation datasets and their license information are provided in Appendix~\ref{sec:license}.
\begin{table}[t]
\resizebox{\columnwidth}{!}{\begin{tabular}{llll}
\toprule
& \multicolumn{1}{c}{UCF101} & \multicolumn{1}{c}{DAVIS} & \multicolumn{1}{c}{VFITex} \\ \hline
Ours-\textit{w/o BLFNet} & 33.218/0.970 & 27.767/0.881 & 28.498/0.915 \\
Ours-\textit{w/o MIFNet} & 33.202/0.969 & 27.886/0.889 & 28.357/0.911 \\
Ours-\textit{w/o TENet} & 32.895/0.970 & 27.484/0.880 & 28.241/0.910 \\
Ours-\textit{unet} & 33.378/0.970 & 28.096/0.892 & 28.898/0.925 \\
\hline
Ours & 33.384/0.970 & 28.287/0.895 & 29.175/0.929 \\
\bottomrule
\end{tabular}}
\vspace{-2.5mm}
\caption{Ablation study results (PSNR/SSIM) for ST-MFNet.}
\label{tab:ablation}
\vspace{-5mm}
\end{table}
\noindent\textbf{Evaluation Methods.}
The two most commonly used quality metrics, PSNR and SSIM~\cite{wang2004image}, were employed here for the objective assessment of the interpolated content. We note that these metrics do not always correlate well with video quality as perceived by a human observer~\cite{hore2010image, kalluri2020flavr}. Hence, in order to further compare the video frames interpolated by our method and the benchmark references, a user study was conducted based on a psychophysical experiment. The details of the user study are described in Section~\ref{sec:qualitative}.
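For reference, PSNR reduces to the following computation (a minimal pure-Python sketch for flattened images; SSIM additionally involves windowed luminance, contrast and structure terms~\cite{wang2004image} and is omitted here):

```python
import math

def psnr(ref, dist, peak=255.0):
    """PSNR in dB between two flattened images (equal-length pixel lists)."""
    mse = sum((r - d) ** 2 for r, d in zip(ref, dist)) / len(ref)
    if mse == 0:
        return math.inf  # identical images
    return 10.0 * math.log10(peak ** 2 / mse)
```

For 8-bit content the peak value is 255, so a uniform error of one grey level corresponds to roughly 48.13dB.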
\begin{table*}[t]
\begin{center}
\resizebox{\linewidth}{!}{
\begin{tabular}{lccccccccc}
\toprule
& \multirow{2}[1]{*}{UCF101} & \multirow{2}[1]{*}{DAVIS} & \multicolumn{4}{c}{SNU-FILM} & \multirow{2}[1]{*}{VFITex} & \multirow{2}[2]{*}{\makecell{RT \\ (sec)}} & \multirow{2}[2]{*}{\makecell{\#P \\ (M)}} \\
\cmidrule(l{5pt}r{5pt}){4-7}
& & &Easy&Medium&Hard&Extreme& & \\
\midrule
DVF~\cite{liu2017video} & 32.251/0.965 & 20.403/0.673 & 27.528/0.876 & 24.091/0.817 & 21.556/0.760 & 19.709/0.705 & 19.946/0.709 & 0.157 & 3.82\\
SuperSloMo~\cite{jiang2018super} & 32.547/0.968 & 26.523/0.866 & 36.255/0.984 & 33.802/0.973 & 29.519/0.930 & 24.770/0.855 & 27.914/0.911 & 0.107 & 39.61\\
SepConv~\cite{niklaus2017video} & 32.524/0.968 & 26.441/0.853 & 39.894/0.990 & 35.264/0.976 & 29.620/0.926 & 24.653/0.851 & 27.635/0.907 & 0.062 & 21.68\\
DAIN~\cite{bao2019depth} & \underline{32.524}/\underline{0.968} & 27.086/0.873 & OOM & OOM & OOM & OOM & OOM & 0.896 & 24.03\\
BMBC~\cite{park2020bmbc} & \underline{32.729}/\underline{0.969} & 26.835/0.869 & OOM & OOM & OOM & OOM & OOM & 1.425 & 11.01\\
AdaCoF~\cite{lee2020adacof} & \underline{32.610}/\underline{0.968} & 26.445/0.854 & 39.912/0.990 & 35.269/0.977 & 29.723/0.928 & 24.656/0.851 & 27.639/0.904 & 0.051 & 21.84\\
FeFlow~\cite{gui2020featureflow} & 32.520/0.967 & 26.555/0.856 & OOM & OOM & OOM & OOM & OOM & 1.385 & 133.63\\
CDFI~\cite{ding2021cdfi} & \underline{32.653}/\underline{0.968} & 26.471/0.857 & 39.881/0.990 & 35.224/0.977 & 29.660/0.929 & 24.645/0.854 & 27.576/0.906 & 0.321 & 4.98\\
CAIN~\cite{choi2020channel} & \underline{32.537}/\underline{0.968} & 26.477/0.857 & \underline{39.890}/\underline{0.990} & 35.630/0.978 & 29.998/0.931 & 25.060/0.857 & 28.184/0.911 & 0.071 & 42.78\\
SoftSplat~\cite{niklaus2020softmax} & 32.835/0.969 & \textcolor{blue}{27.582}/0.881 & \textcolor{blue}{40.165}/\textcolor{blue}{0.991} & \textcolor{blue}{36.017}/\textcolor{blue}{0.979} & 30.604/0.937 & \textcolor{blue}{25.436}/0.864 & 28.813/0.924 & 0.206 & 12.46\\
EDSC~\cite{cheng2021multiple} & \underline{32.677}/\underline{0.969} & 26.689/0.860 & 39.792/0.990 & 35.283/0.977 & 29.815/0.929 & 24.872/0.854 & 27.641/0.904 & 0.067 & 8.95\\
XVFI~\cite{sim2021xvfi} & 32.224/0.966 & 26.565/0.863 & 38.849/0.989 & 34.497/0.975 & 29.381/0.929 & 24.677/0.855 & 27.759/0.909 & 0.108 & 5.61\\
QVI~\cite{xu2019quadratic} & 32.668/0.967 & 27.483/\textcolor{blue}{0.883} & 36.648/0.985 & 34.637/0.978 & \textcolor{blue}{30.614}/\textcolor{blue}{0.947} & 25.426/\textcolor{blue}{0.866} & \textcolor{blue}{28.819}/\textcolor{blue}{0.926} & 0.257 & 29.23\\
FLAVR~\cite{kalluri2020flavr} & \underline{\textcolor{red}{33.389}}/\underline{\textcolor{red}{0.971}} & 27.450/0.873 & 40.135/0.990 & 35.988/\textcolor{blue}{0.979} & 30.541/0.937 & 25.188/0.860 & 28.487/0.915 & 0.695 & 42.06\\
\midrule
ST-MFNet (Ours) & \textcolor{blue}{33.384}/\textcolor{blue}{0.970} & \textcolor{red}{28.287}/\textcolor{red}{0.895} & \textcolor{red}{40.775}/\textcolor{red}{0.992} & \textcolor{red}{37.111}/\textcolor{red}{0.985} & \textcolor{red}{31.698}/\textcolor{red}{0.951} & \textcolor{red}{25.810}/\textcolor{red}{0.874} & \textcolor{red}{29.175}/\textcolor{red}{0.929} & 0.901 & 21.03\\
\bottomrule
\end{tabular}}
\vspace{-2.5mm}
\caption{Quantitative comparison results (PSNR/SSIM) for ST-MFNet and 14 tested methods. In some cases, underlined scores based on the pre-trained models are provided in the table, when they outperform their re-trained counterparts. OOM denotes cases where our GPU runs out of memory for the evaluation. For each column, the best result is colored in \textcolor{red}{red} and the second best is colored in \textcolor{blue}{blue}. The average runtime (RT) for interpolating a 480p frame as well as the number of model parameters (\#P) for each method are also reported.}
\label{quantitative}
\vspace{-6mm}
\end{center}
\end{table*}
\section{Results and Analysis}\label{sec:exp}
In this section, we analyze our proposed model through ablation studies, and compare it with 14 state-of-the-art methods both quantitatively and qualitatively.
\subsection{Ablation Study}
The key ablation study results are summarized in Table~\ref{tab:ablation}, where five versions of ST-MFNet have been evaluated. Figure \ref{fig:ablation} provides a visual comparison between the frames generated by each test variant and the full ST-MFNet model. Additional ablation study results are available in Appendix~\ref{sec:abl}.
\noindent\textbf{MIFNet and BLFNet branches.} To verify that the MIFNet and BLFNet branches are both effective, two variants of ST-MFNet, Ours-\textit{w/o BLFNet} and Ours-\textit{w/o MIFNet}, were created by removing the BLFNet and MIFNet branches respectively. Both variants were trained and evaluated using the same configurations described above. While both variants achieve lower overall performance than the full ST-MFNet (Ours), Ours-\textit{w/o MIFNet} exhibits a larger performance drop on the VFITex test set than Ours-\textit{w/o BLFNet}. This indicates that the MIFNet branch plays the more important role in capturing complex texture dynamics. On the other hand, on the DAVIS dataset, which contains many large motions, Ours-\textit{w/o MIFNet} performs better than Ours-\textit{w/o BLFNet}, which confirms the larger contribution of the coarse-to-fine architecture in the BLFNet on this type of content. It can also be observed in Figure {\ref{fig:ablation}} that, without MIFNet (sub-figures (a-d)), the model fails to capture the complex motion of the wave, while without BLFNet (sub-figures (e-h)), the occluded region, which is also undergoing large movement, is not interpolated properly.
\noindent\textbf{UMSResNext for multi-flow estimation.} To measure the efficacy of the new UMSResNext, we replaced the UMSResNext-based feature extractor described in Section~\ref{sec:method_1} with the U-Net used in \cite{lee2020adacof} to predict similar multi-flows. This variant is denoted Ours-\textit{unet}. As shown in Table~\ref{tab:ablation}, ST-MFNet with UMSResNext achieves enhanced performance on all test sets, and this is also demonstrated by the visual comparison example in Figure {\ref{fig:ablation}} (i-l). Another advantage of using UMSResNext is that it has far fewer parameters ({\raise.17ex\hbox{$\scriptstyle\sim$}}4M) than U-Net ({\raise.17ex\hbox{$\scriptstyle\sim$}}21M).
\noindent\textbf{Texture Enhancement.} The importance of the TENet was also analyzed by training another variant Ours-\textit{w/o TENet}, where the TENet is removed. Table~\ref{tab:ablation} shows that there is a significant performance decrease compared to the full version, especially on DAVIS and VFITex. This demonstrates the contribution of the spatio-temporal filtering on frames over a wider temporal window for content with large and complex motions. Figure~{\ref{fig:ablation}} (m-p) also shows an example, where the full ST-MFNet with the TENet produces richer textural detail compared to the version without TENet.
\noindent\textbf{ST-GAN.} To investigate the effectiveness of the ST-GAN training, we compared the visual quality of the interpolated content generated by the fine-tuned network (Ours-$\mathcal{L}_{p}$) and the distortion-oriented model Ours-$\mathcal{L}_{lap}$. We also replaced the ST-GAN with two existing GANs, FIGAN~\cite{lee2020adacof} and TGAN~\cite{saito2017temporal}. Example frames produced by these variants are shown in Figure~{\ref{fig:ablation}} (q-v), where the result generated by Ours-$\mathcal{L}_{p}$ exhibits sharper edges and clearer structures compared to those produced by Ours-$\mathcal{L}_{lap}$, and other GANs.
\begin{figure*}[t]
\centering
\subfloat {\includegraphics[width=0.107\linewidth]{figures/fulls/sunbath_full.pdf}}\;\!\!
\subfloat {\includegraphics[width=0.107\linewidth]{figures/crops/sunbath_bmbc.pdf}}\;\!\!
\subfloat {\includegraphics[width=0.107\linewidth]{figures/crops/sunbath_dain.pdf}}\;\!\!
\subfloat {\includegraphics[width=0.107\linewidth]{figures/crops/sunbath_softsplat.pdf}}\;\!\!
\subfloat {\includegraphics[width=0.107\linewidth]{figures/crops/sunbath_flavr.pdf}}\;\!\!
\subfloat {\includegraphics[width=0.107\linewidth]{figures/crops/sunbath_qvi.pdf}}\;\!\!
\subfloat {\includegraphics[width=0.107\linewidth]{figures/crops/sunbath_ours.pdf}}\;\!\!
\subfloat {\includegraphics[width=0.107\linewidth]{figures/crops/sunbath_oursp.pdf}}\;\!\!
\subfloat {\includegraphics[width=0.107\linewidth]{figures/crops/sunbath_gt.pdf}}\\
\subfloat {\includegraphics[width=0.107\linewidth]{figures/fulls/flamingo_full.pdf}}\;\!\!
\subfloat {\includegraphics[width=0.107\linewidth]{figures/crops/flamingo_bmbc.pdf}}\;\!\!
\subfloat {\includegraphics[width=0.107\linewidth]{figures/crops/flamingo_dain.pdf}}\;\!\!
\subfloat {\includegraphics[width=0.107\linewidth]{figures/crops/flamingo_softsplat.pdf}}\;\!\!
\subfloat {\includegraphics[width=0.107\linewidth]{figures/crops/flamingo_flavr.pdf}}\;\!\!
\subfloat {\includegraphics[width=0.107\linewidth]{figures/crops/flamingo_qvi.pdf}}\;\!\!
\subfloat {\includegraphics[width=0.107\linewidth]{figures/crops/flamingo_ours.pdf}}\;\!\!
\subfloat {\includegraphics[width=0.107\linewidth]{figures/crops/flamingo_oursp.pdf}}\;\!\!
\subfloat {\includegraphics[width=0.107\linewidth]{figures/crops/flamingo_gt.pdf}}\\
\setcounter{subfigure}{0}
\subfloat[Overlay] {\includegraphics[width=0.107\linewidth]{figures/fulls/flag_full.pdf}}\;\!\!
\subfloat[BMBC] {\includegraphics[width=0.107\linewidth]{figures/crops/flag_bmbc.pdf}}\;\!\!
\subfloat[DAIN] {\includegraphics[width=0.107\linewidth]{figures/crops/flag_dain.pdf}}\;\!\!
\subfloat[Softsplat] {\includegraphics[width=0.107\linewidth]{figures/crops/flag_softsplat.pdf}}\;\!\!
\subfloat[FLAVR] {\includegraphics[width=0.107\linewidth]{figures/crops/flag_flavr.pdf}}\;\!\!
\subfloat[QVI] {\includegraphics[width=0.107\linewidth]{figures/crops/flag_qvi.pdf}}\;\!\!
\subfloat[Ours] {\includegraphics[width=0.107\linewidth]{figures/crops/flag_ours.pdf}}\;\!\!
\subfloat[Ours-$\mathcal{L}_p$] {\includegraphics[width=0.107\linewidth]{figures/crops/flag_oursp.pdf}}\;\!\!
\subfloat[GT] {\includegraphics[width=0.107\linewidth]{figures/crops/flag_gt.pdf}}\\[-0.9em]
\vspace*{0.05cm}
\caption{Qualitative interpolation examples by different methods. The first column (a) shows the overlaid adjacent frames. Columns (b-f) correspond to some of the best-performing benchmark methods. The results of our distortion-oriented model (g) and perception-oriented model (h) are also included, along with the ground truth frames (i).}
\label{fig:qualitative}
\vspace*{-0.4cm}
\end{figure*}
\begin{figure}[t]
\begin{center}
\includegraphics[width=\linewidth]{figures/userstudy.pdf}
\end{center}
\vspace{-9mm}
\caption{\label{fig:userstudy}
Results of the user study showing preference ratios for the tested interpolation methods. The error bars denote standard deviation over test videos.}
\vspace{-6mm}
\end{figure}
\subsection{Quantitative Evaluation}
We compared the proposed ST-MFNet with 14 state-of-the-art VFI models including DVF~\cite{liu2017video}, SuperSloMo~\cite{jiang2018super}, SepConv~\cite{niklaus2017video}, DAIN~\cite{bao2019depth}, BMBC~\cite{park2020bmbc}, AdaCoF~\cite{lee2020adacof}, FeFlow~\cite{gui2020featureflow}, CDFI~\cite{ding2021cdfi}, CAIN~\cite{choi2020channel}, SoftSplat~\cite{niklaus2020softmax}, EDSC~\cite{cheng2021multiple}, XVFI~\cite{sim2021xvfi}, QVI~\cite{xu2019quadratic} and FLAVR~\cite{kalluri2020flavr}. For a fair comparison, we re-trained all benchmark models with the same training and validation datasets used for ST-MFNet under identical training configurations. The comprehensive evaluation results are summarized in Table~\ref{quantitative}, where the best and second best results in each column are highlighted in red and blue respectively. For all benchmark networks, we additionally evaluated their pre-trained versions provided in the original literature (where applicable). For each test set, if the pre-trained results are better than the re-trained counterparts, the former are presented and underlined.
Two key observations can be made from Table~\ref{quantitative}. Firstly, by using our training set (Vimeo-90k+BVI-DVC), the re-trained versions of all compared models improve over their pre-trained versions on large and complex motions, i.e. on DAVIS, SNU-FILM (medium, hard, extreme) and VFITex. For seven models, the pre-trained versions achieved higher PSNR and SSIM values on the UCF-101 dataset; this may be due to the similar characteristics of their pre-training dataset, Vimeo-90k, and UCF-101. Secondly, our ST-MFNet offers the best results on DAVIS, SNU-FILM (all subsets) and VFITex, with a significant improvement of 0.36-1.09dB (PSNR) over the runner-up on each test set. It is only outperformed by the pre-trained FLAVR on UCF101, with a marginal difference of 0.005dB (PSNR) and 0.001 (SSIM). This demonstrates the excellent generalization ability of the proposed ST-MFNet.
\noindent\textbf{Complexity.} Since some models cannot be tested on high-resolution content, we measured model complexity solely on the 480p sequences from the DAVIS test set. The average runtime (RT, in seconds) for interpolating one frame is reported in Table~\ref{quantitative} for each tested network, alongside its total number of parameters. ST-MFNet has a relatively high computational complexity among the tested models; reducing its complexity remains an important direction for future work.
\subsection{Qualitative Evaluation}\label{sec:qualitative}
\noindent\textbf{Visual comparisons.}
Example frames interpolated by our model and several of the best-performing state-of-the-art methods are shown in Figure~\ref{fig:qualitative}. It can be observed that the results generated by the perceptually trained ST-MFNet (Ours-$\mathcal{L}_p$) are closer to the ground truth, containing fewer visual artifacts and exhibiting better perceptual quality.
\noindent\textbf{User Study.}
As single frames cannot fully reflect the perceptual quality of interpolated content, we conducted a user study in which our method was compared against three competitive benchmark approaches: QVI, FLAVR and SoftSplat (re-trained using its original perceptual loss~\cite{niklaus2020softmax}). For this study, 20 videos randomly selected from DAVIS, SNU-FILM and VFITex were used as the test content. In each trial of a test session, participants were shown a pair of videos, one interpolated by the perceptually optimized ST-MFNet and the other by QVI, FLAVR or SoftSplat, giving a total of 60 trials per test session. The order of video presentation was randomized in each trial (the order of trials was also random), and the subject was asked in each case to choose the sequence with the higher perceived quality. Twenty subjects were paid to participate in this study. More details of the user study can be found in Appendix~\ref{sec:study}.
The collected user study results are summarized in Figure~\ref{fig:userstudy}. It can be observed that nearly 70\% of users on average preferred ST-MFNet over QVI, a difference that is statistically significant at the 95\% confidence level based on a t-test ($p<.00000003$). The average preference difference between our method and FLAVR is smaller, with 56\% of users in favor of the ST-MFNet results; this is also significant at the 95\% confidence level ($p<.0001$). Finally, when compared against SoftSplat, around 60\% of subjects favored our method, with the significance again holding at the 95\% level ($p<.000005$).
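The exact statistical protocol is not critical to the conclusions, but assuming a one-sample t-test of per-video preference ratios against the 50\% chance level, the test statistic can be sketched as:

```python
import math

def one_sample_t(ratios, chance=0.5):
    """t statistic for per-video preference ratios against the chance level.

    `ratios` holds the fraction of subjects preferring one method for
    each test video. The one-sample-t formulation is our assumption;
    the paper reports only the resulting p-values.
    """
    n = len(ratios)
    mean = sum(ratios) / n
    var = sum((x - mean) ** 2 for x in ratios) / (n - 1)  # sample variance
    return (mean - chance) / math.sqrt(var / n)
```

The resulting $t$ would then be compared against the critical value for $n-1$ degrees of freedom at the chosen confidence level.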
\section{Limitations and Potential Negative Impacts}
Although superior interpolation performance has been observed from the proposed method, we are aware of its relatively low inference speed, which is mainly due to its large network capacity. Training such large models can also have a negative environmental impact due to the significant power consumption of computational hardware~\cite{lacoste2019quantifying}. This can be mitigated by using more efficient hardware, and through model complexity reduction based on network compression~\cite{modelcompression} and knowledge distillation~\cite{hinton2015distilling}.
\section{Conclusion}
In this paper, we propose a novel video frame interpolation algorithm, ST-MFNet, which consistently achieves improved interpolation performance (up to a 1.09dB PSNR gain) over state-of-the-art methods on various challenging video content. The proposed method features three main innovative design elements. Firstly, flexible many-to-one multi-flows were combined with conventional one-to-one optical flows in a multi-branch fashion, which enhances the ability to capture large and complex motions. Secondly, a novel architecture was designed to predict multi-interflows at multiple scales, leading to reduced complexity but enhanced performance. Thirdly, we employed a 3D CNN architecture and the ST-GAN originally proposed for texture synthesis to enhance the visual quality of textures in the interpolated content. Our quantitative and qualitative experiments showed that all of these elements contribute to the final performance of our model, which consistently outperforms many state-of-the-art methods with significant gains.
\section{Acknowledgment}
The author Duolikun Danier is funded jointly by the University of Bristol and China Scholarship Council.
\section{Introduction}
The evasive nature of theorised dark matter particles has fuelled an almost century-long search \citep{Zwicky_1937}, with a present goal of verifying the particle nature via direct detection experiments. These experiments conventionally assume that the local dark matter distribution can be characterised by the Standard Halo Model (SHM) \citep{Evans_2018}. The SHM assumes that the velocity distribution is represented everywhere by a smooth Maxwell-Boltzmann distribution by postulating an isothermal dark matter halo profile with $\rho \propto r^{-2}$. However, this assumption has been challenged on a number of occasions, with data sets such as RAVE-TGAS \citep{Herzog_Arbeitman_2018} and SDSS-\emph{Gaia} DR2 \citep{Necib_2019} suggesting anisotropic effects in the local dark matter distribution.
The Large Magellanic Cloud (LMC), a satellite galaxy of the Milky Way (MW), is both a contributor to, and a significant perturber of, the local dark matter distribution. The LMC is currently on its first infall, travelling at $327 \kms$ in a heliocentric frame \citep{Kallivayalil_2013} and has just passed its pericentre, currently at a distance of $50$kpc from Earth \citep{Pietrzynski_2019}.
The LMC contributes to the local dark matter distribution by populating the high-velocity tail owing to the overlap of its dark matter halo with the solar neighbourhood in standard cosmologically-motivated models \citep{Besla_2019}. Moreover, recent work analysing the dynamics of the MW-LMC system has shown that MW particles in the solar neighbourhood may also have higher velocities than assumed in the SHM \citep{ Petersen..reflexmotion..2020}. These deviations from the SHM provide a unique opportunity for direct detection experiments.
Direct detection experiments probe the mass-cross section parameter space for theorised weakly interacting massive particles by reducing background events until a dark matter signal can be detected through interactions with atomic nuclei \citep{Lewin_1996}. Current experiments place exclusion curves on the mass-cross section plane, but as experiment sensitivities increase to the ton scale \citep[e.g.][]{Aprile_2017}, the next generation of direct detection experiments is approaching an irreducible background signal called the neutrino floor \citep{Boehm_2019}. Neutrinos cause a weak nuclear recoil in the target source, and at low masses these events can dominate the signal until it is no longer possible to distinguish between the neutrino signal and a potential dark matter signal \citep{Grothaus_2014}. The presence of high-velocity particles shifts the present exclusion curves to be sensitive to lower masses, further restricting the space in which a dark matter signal could be found and increasing the need to reduce the neutrino floor as it becomes necessary to probe at higher sensitivities. However, some direct detection experiments are able to analyse the directionality of the detected particles \citep{Grothaus_2014}, which could help reduce this background in both Xenon- \citep{Nygren_2013} and Argon-based \citep{Akimov_2020} detectors.
\vspace{-0.5cm}
\section{Modeling the Milky Way-Large Magellanic Cloud System} \label{Section:Model}
We design updated idealised cosmologically-motivated $N$-body simulations to model the local dark matter distribution. The model consists of a MW comprised of a stellar disc and dark matter halo, and an LMC component comprised of a dark matter halo. Both the MW and LMC halo are modeled using an NFW profile, $\rho(r) = \rho_0\tilde{r}^{-1}(1 + \tilde{r})^{-2}$, where $\rho_0$ is the central density of the dark matter halo, $\tilde{r}=r/R_s$, and $R_s$ is the scale radius of the halo. The MW model roughly matches observational constraints on the mass profile of the MW \citep{Eadie_2019}, with a scale radius $R_{s,{\rm NFW~MW}}=15$kpc.
The MW stellar disc model is an exponential disc with scale length $R_d=3$kpc, a $\sech^2$ vertical profile \citep[$z_0=600$ pc, in the notation of][]{Petersen_2021_commensurabilities}, and mass $5\times10^{10}\msun$.
We use an LMC virial mass of $M_{\rm vir} = 25 \times 10^{10}\msun$ \citep{Penarrubia_2015,Erkal_2019}. We fix the LMC mass enclosed at 8.7 kpc to match circular velocity observations \citep{vanderMarel_2014}, resulting in $R_{s,{\rm NFW~LMC}}=18$kpc. For both halo profiles, we apply an error function truncation such that the initial halo profile is $\rho_{\rm trunc}(r)=0.5\rho(r)\left(1-{\rm erf}\left[(r-r_{\rm trunc})/w_{\rm trunc}\right]\right)$. The truncation parameters are $r_{\rm trunc}=2R_{\rm vir}$ and $w_{\rm trunc}=0.3R_{\rm vir}$, where $R_{\rm vir,~MW}=300$ kpc, and $R_{\rm vir,LMC}=150$ kpc.
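The halo profiles above can be evaluated directly; the following sketch implements the NFW profile and its error-function truncation exactly as written (parameter values in the comments are those quoted in the text; function names are ours):

```python
import math

def nfw_density(r, rho0, r_s):
    """NFW profile: rho(r) = rho0 * (r/r_s)^-1 * (1 + r/r_s)^-2."""
    x = r / r_s
    return rho0 / (x * (1.0 + x) ** 2)

def truncated_nfw(r, rho0, r_s, r_trunc, w_trunc):
    """Error-function truncation: 0.5 * rho(r) * (1 - erf[(r - r_trunc)/w_trunc]).

    For the MW model: r_trunc = 2 * R_vir = 600 kpc, w_trunc = 0.3 * R_vir = 90 kpc.
    """
    return 0.5 * nfw_density(r, rho0, r_s) * (1.0 - math.erf((r - r_trunc) / w_trunc))
```

Well inside the truncation radius the erf factor tends to 2, recovering the untruncated profile; at $r=r_{\rm trunc}$ the density is exactly half the NFW value.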
\begin{figure}
\includegraphics[width=\columnwidth,scale=0.5]{SchematicNewTrajectory.eps}
\caption{\label{fig:schematic} Present-day dark matter distributions in the MW (blue) and LMC (red). The solar radius is marked with a bright red circle, and the solar neighbourhood with a white circle. The outer isodensity surface ($1\times10^{-4}{\rm GeV}/{\rm cm}^3$) of the LMC dark matter encompasses the solar neighbourhood. Further, the LMC deformation is apparent, elongated along its orbit (white dashed line; the LMC recently passed underneath the MW centre roughly along the $y-z$ plane). The MW deformation is more subtle, and primarily stretches down toward the lower left corner of the figure (the effect is not detectable in the solar neighbourhood). Contours are logarithmically spaced in density, and match between the two components.}
\end{figure}
\begin{table}
\centering
\begin{tabular}{ccc}
\hline
& \multicolumn{2}{c}{Local DM Density (GeV/cm$^3$)}\\
& Static & Evolved\\
\hline
\hline
MW & 0.30220 & 0.32295 \\
LMC & 0.00059 & 0.00043 \\
\hline
\end{tabular}
\caption{\label{tab:Densities}The dark matter density at the solar neighbourhood defined as in Section~\ref{Section:Model}.}
\end{table}
We use $N_{\rm MW~disc}=2\times10^6$, $N_{\rm MW~halo}=4\times10^7$, and $N_{\rm LMC}=3\times10^7$ equal-mass particles. The details of the realisation procedure may be found in \citet{Petersen_2021_commensurabilities}. The realisation results in isotropic dark matter halos for the MW and LMC \citep[as in Model 1 of][]{Besla_2019}.
To model the trajectory, we use the LMC centre and mean proper motion computed from the measurements in \citet{Kallivayalil_2013}, \(\left(\alpha_{\rm LMC},\delta_{\rm LMC}\right)=\left(78.76^\circ\pm0.52,-69.19^\circ\pm0.25\right)\), \(\left(\mu_{\alpha^\star,{\rm LMC}},\mu_{\delta,{\rm LMC}}\right)=\left(-1.91\pm0.02~{\rm mas/yr},0.229\pm0.047~{\rm mas/yr}\right)\), the distance from \citet{Pietrzynski..21}, \(d_{\rm LMC} = 49.59\pm0.54~{\rm kpc}\), and the line-of-sight velocity from \citet{vanderMarel_2002}, \(v_{\rm los,~LMC}=262.2\pm3.4~\kms\). The comparison between the simulation and the observed LMC position is done in Cartesian coordinates, and assumes that the peak density of the LMC corresponds to the observed LMC disc centre.
We find the transformation to Cartesian galactocentric coordinates by Monte-Carlo sampling the errors on each of the measurements, and assuming $(U,V,W)_\odot = (11.1,12.24,7.25)$ as the local standard of rest \citep{Schonrich..2010}, a circular velocity at the solar radius $V_{\rm circ,\odot}=229\kms$ following \citet{Eilers..2019}, the distance to the galactic center as $R_{\rm GC}=8.275~{\rm kpc}$ \citep{Gravity..2021}, and the Sun's height above the galactic midplane as 20.8 pc \citep{Bennett..2019}.
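The Monte-Carlo error propagation can be sketched as below; this toy version draws uncorrelated Gaussian errors on the quoted measurements and propagates them only to the heliocentric speed (using $v_t = 4.74047\,\mu\,d$ for proper motion $\mu$ in mas/yr and distance $d$ in kpc), omitting the rotation into galactocentric Cartesian coordinates:

```python
import math
import random

K = 4.74047  # km/s per (mas/yr * kpc): v_t = K * mu * d

def sample_lmc_speed(rng):
    """One Monte-Carlo draw of the heliocentric LMC speed.

    Central values and 1-sigma errors are those quoted in the text;
    Gaussian, uncorrelated errors are an assumption made for brevity.
    """
    mu_a = rng.gauss(-1.910, 0.020)   # mas/yr
    mu_d = rng.gauss(0.229, 0.047)    # mas/yr
    d = rng.gauss(49.59, 0.54)        # kpc
    v_los = rng.gauss(262.2, 3.4)     # km/s
    v_tan = K * math.hypot(mu_a, mu_d) * d
    return math.hypot(v_tan, v_los)

rng = random.Random(42)
speeds = sorted(sample_lmc_speed(rng) for _ in range(10000))
median_speed = speeds[len(speeds) // 2]
```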
We first run point-mass models of the MW and LMC in reverse from their present-day locations in Cartesian coordinates for $T=3.5$ Gyr, giving a first trajectory. We then perform a fine-grained initial conditions search to nearly match the observed LMC phase-space location. The $N$-body runs are evolved using {\sc exp}, a basis field expansion $N$-body solver \citep{Weinberg..1999,Petersen..EXP..2021} that results in low-noise evolution, with the trade off that the evolution must resemble the allowed degrees of freedom in the system. For the evolution of the MW-LMC system, the basis field expansion is able to parameterise the potential structure at least to the present day. To search the space efficiently, we build a reduced version of the initial conditions by sampling a tenth of the particles from each component. We construct a grid of initial positions and velocities for the LMC that span $[-10\%,0,10\%]$ in $(y,z,v,w)$. Following results from the initial grid, we refine the initial search space by selecting the closest model and repeating the procedure, this time allowing $(x,u)$ to also vary at the $[-3\%,0,3\%]$ level. We then select the closest model from this grid and define the result as our initial position and velocity. Ultimately, we start the LMC at $T=-3.5$ Gyr: $(x,y,z)=(30, 555, -39)~\kpc$ and $(u,v,w)=(0, -96, -30)~\kms$. The MW starts initially at rest. Figure \ref{fig:schematic} shows a snapshot of the MW and LMC 3-dimensional dark matter distribution at the present day.
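The initial-conditions grid described above can be generated generically; a minimal sketch (function and argument names are ours):

```python
import itertools

def perturbation_grid(base, axes, levels):
    """Grid of perturbed initial conditions.

    base   : dict of phase-space coordinates, e.g. {'y': 555.0, ...}
    axes   : coordinates allowed to vary, e.g. ('y', 'z', 'v', 'w')
    levels : fractional offsets, e.g. (-0.10, 0.0, 0.10)
    """
    grid = []
    for offsets in itertools.product(levels, repeat=len(axes)):
        trial = dict(base)
        for axis, off in zip(axes, offsets):
            trial[axis] = base[axis] * (1.0 + off)
        grid.append(trial)
    return grid
```

The first stage varies $(y,z,v,w)$ at the $[-10\%,0,10\%]$ level, giving $3^4=81$ trial orbits; the refinement stage additionally varies $(x,u)$ at $[-3\%,0,3\%]$.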
In the static model, we do not allow the MW-LMC system to evolve with time and instead simply place the LMC at its present-day location. While it would be computationally far less expensive to realise a static model, such a model misses significant dynamical effects caused by the interaction of the MW and LMC.
We also define a `solar neighbourhood' of particles. Even at the resolution of our current simulations, we are forced to select a large region around the Sun in order to obtain sufficient statistics for analysis. First, based on arguments from the analytic NFW profile, we assume that we will be insensitive to variations on scales smaller than the local value of $|d\rho/\rho|^{-1}$. For our MW model at 8 kpc, this value is 5 kpc. For our LMC model at 50 kpc, this value is 18 kpc. However, we wish to find the smallest-radius sphere around the Sun that returns a convergent density value, given our particle sampling. To determine this radius, we define increasingly large spheres, stepping by $\Delta r=0.5\kpc$ outward from the solar radius until we obtain two successive values where the density changes by less than 1\%; we adopt this threshold as the systematic error on density measurements in the solar neighbourhood.
Given this study, for the MW, we define the solar neighbourhood as particles with $(r - r_\odot)<2~\kpc$, and for the LMC, we define the solar neighbourhood as particles with $(r - r_\odot)<3~\kpc$. We report the measured densities in Table~\ref{tab:Densities}, highlighting the difference between the static and evolved models: effects induced by the mutual evolution of the MW and LMC.
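The convergence-radius search described above amounts to the following procedure (a sketch; `density_in_sphere` stands in for the particle-based density estimate and is our naming):

```python
def convergence_radius(density_in_sphere, dr=0.5, tol=0.01, r_max=30.0):
    """Smallest sphere radius (kpc) around the Sun giving a convergent density.

    `density_in_sphere(r)` returns the mean DM density inside radius r
    of the Sun. The radius grows in 0.5 kpc steps until two successive
    estimates agree to within 1%, as described in the text; returns
    None if no convergence is reached by r_max.
    """
    r = dr
    prev = density_in_sphere(r)
    while r < r_max:
        r += dr
        cur = density_in_sphere(r)
        if abs(cur - prev) / prev < tol:
            return r
        prev = cur
    return None
```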
\vspace{-0.2cm}
\section{Velocity Distributions} \label{Section:VelocityDistribution}
\begin{figure}
\centering
\includegraphics[width=\columnwidth,scale=0.5]{VelocityDist.eps}
\caption{Probability distribution of observed velocities in the solar neighbourhood for the MW and LMC models. The probability distributions are normalised to unity. The dashed lines show results for the static case, while the solid lines show the results in the evolved case, where the MW and LMC have been allowed to affect each other. \label{fig:VelocityDist}}
\end{figure}
\begin{table}
\centering
\begin{tabular}{llcccc}
\hline
term && MW$_{\rm static}$ &LMC$_{\rm static}$ & MW$_{\rm evolved}$ &LMC$_{\rm evolved}$\\
\hline
\multicolumn{2}{l}{$\langle v_{\rm observed}\rangle$} & 328 & 532 & 337 & 751 \\
\hline
\multicolumn{2}{l}{$\langle v_{\rm model}\rangle$} & 91 & 43 & 91 & 43 \\
\multicolumn{2}{l}{$\langle v_{\rm heliocentric}\rangle$} & 242 & 242 & 242 & 242 \\
\multicolumn{2}{l}{$\langle v_{\rm evolution}\rangle$} & & & & \\
&$\langle v_{\rm secular}\rangle$ & 0 & - & -25 & - \\
&$\langle v_{\rm reflex}\rangle$ & 0 & - & 51 & - \\
&$\langle v_{\rm infall}\rangle$ & - & 323 & - & 317 \\
&$\langle v_{\rm tidal}\rangle$ & - & 0 & - & 225 \\
\hline
\end{tabular}
\caption{\label{tab:PeakVelocities} The median velocities, in $\kms$, of the observed velocities (cf. Figure~\ref{fig:VelocityDist}) as well as velocities broken down by contributions.}
\end{table}
Given the simulated particle phase space for the solar neighbourhood, we wish to develop a succinct description of the density and velocity distributions. The density is directly estimated from the solar neighbourhood (see Section~\ref{Section:Model}). The velocity distributions are more complex, and are the combination of several physical processes. We separate the observed velocity distribution for each component into physically-motivated dynamical terms:
\begin{equation}
\label{eqn:VelocityComponents}
\vec{v}_{\rm observed} = \vec{v}_{\rm model} + \vec{v}_{\rm heliocentric} + \vec{v}_{\rm evolution}.
\end{equation}
We define $\vec{v}_{\rm model}$ as the velocity of particles in the solar neighbourhood of the modeled halo (set by the velocity dispersion profile). The term $\vec{v}_{\rm heliocentric}$ is the annual-average velocity of the Sun (defined in Section~\ref{Section:Model}). The term $\vec{v}_{\rm evolution}$ covers additional effects, and its characterisation is a primary objective of this work. Equation~(\ref{eqn:VelocityComponents}) is valid for every dark matter particle, but for the purposes of this work, we will characterise the median speed (velocity modulus) of the observed distribution, $\langle v_{\rm component}\rangle = |\vec{v}_{\rm component}|_{\rm median}$, where `component' may refer to observed, model, heliocentric, or evolution terms. Median values are listed in Table~\ref{tab:PeakVelocities}. After exploring the data, we find that the overall shape of the velocity distributions is set by $\vec{v}_{\rm model}$, and the additional terms can be modeled as linear-sum contributions\footnote{This formulation is approximate, as it ignores directionality. See the values in Table~\ref{tab:PeakVelocities}. However, most terms are dominated by a single direction, making this approximation roughly true, and instructive for understanding the importance of different terms.}. Therefore, to model the median Earth-observed velocity for each component, we only need to have the model velocity dispersion, the heliocentric velocity conversion, and the dynamical ingredients. The dark matter velocities of particles in both the MW and LMC will obey equation~(\ref{eqn:VelocityComponents}), but the effects contributing to $\vec{v}_{\rm evolution}$ will be different.
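As a quick consistency check, the linear-sum approximation can be applied directly to the contributions in Table~\ref{tab:PeakVelocities}; the sums only roughly track the observed medians because directionality is ignored (cf. the footnote above):

```python
# Median velocity contributions (km/s) from the table above; ordering is
# model, heliocentric, then the evolution terms for each column.
contributions = {
    "MW_static":   [91, 242, 0, 0],
    "MW_evolved":  [91, 242, -25, 51],
    "LMC_static":  [43, 242, 323, 0],
    "LMC_evolved": [43, 242, 317, 225],
}
approx_median = {name: sum(terms) for name, terms in contributions.items()}
# e.g. MW_static -> 333 km/s (observed 328); LMC_evolved -> 827 km/s (observed 751)
```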
\vspace{-0.2cm}
\subsection{Milky Way velocity distribution}
\label{subsubsec:MWdist}
We characterise the dynamical imprint on the MW dark matter particles as
\begin{equation}
\label{eqn:VelocityComponentsMW}
\vec{v}_{\rm MW~evolution} = \vec{v}_{\rm secular} + \vec{v}_{\rm reflex}.
\end{equation}
The term $\vec{v}_{\rm secular}$ arises from the deformation of the MW dark matter halo by secular processes driven by the presence of the MW disc \citep[e.g.][]{Petersen_2016}. However, the more significant contribution is the reflex motion $\vec{v}_{\rm reflex}$: the movement of the MW's disc as the infall of the LMC pulls the MW barycentre away from its equilibrium position.
The importance of the MW's reflex motion has only recently been identified \citep{Petersen..reflexmotion..2020, Petersen_2021_reflex_motion}, and its inclusion in equation~(\ref{eqn:VelocityComponentsMW}) provides a novel approach to the analysis of the velocity distribution. In the heliocentric reference frame, the reflex motion appears inverted, so it seems as if the outer halo is moving relative to the disc. We therefore define particles in the outer halo as those exhibiting the apparent reflex motion ($r_{\rm apo} > 40 \kpc$, in line with \citealt{Petersen_2021_reflex_motion}). Using this definition to reduce the sample to only those particles exhibiting reflex motion, we find that $\langle v_{\rm observed}\rangle$ of the evolved model, given in Table~\ref{tab:PeakVelocities} as $337\kms$, increases to $464\kms$.
The LMC has only recently passed Earth, so local particles have not yet had time to react to its presence; direct acceleration therefore cannot be responsible. Instead, the apparent motion of the MW disc produces this observation in the heliocentric frame.
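The outer-halo selection and its effect on the median speed can be sketched as follows (hypothetical synthetic arrays standing in for the evolved-model sample; the numbers are illustrative, chosen only to echo the medians quoted above):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000

# Hypothetical apocentre radii (kpc) and heliocentric speeds (km/s),
# standing in for the evolved-model solar-neighbourhood sample.
r_apo = rng.uniform(5.0, 120.0, size=n)
speed = np.where(r_apo > 40.0,
                 rng.normal(460.0, 60.0, size=n),   # outer-halo particles
                 rng.normal(330.0, 60.0, size=n))   # inner particles

# Outer-halo cut used to isolate the apparent reflex motion.
outer = r_apo > 40.0
median_all = float(np.median(speed))
median_outer = float(np.median(speed[outer]))
```

Restricting to the apocentre-selected subsample shifts the median upward, mirroring the $337\kms \to 464\kms$ change described in the text.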
\vspace{-0.2cm}
\subsection{Large Magellanic Cloud velocity distribution}
\label{subsubsec:LMCdist}
The motion of the LMC is characterised by the infall velocity of the luminous centre, which is dependent on the initial conditions of the model (see Section \ref{Section:Model}). The infall velocity can be measured directly from the simulation and confirmed observationally.
Figure~\ref{fig:VelocityDist} shows a clear difference in the median velocities of the LMC particles between the static and evolved models, from $\langle v_{\rm observed}\rangle=532\kms$ in the static model to $\langle v_{\rm observed}\rangle=751\kms$ in the evolved model. It is also clear that in the evolved model there is only a small overlap with the MW distribution before the high-velocity regime is composed purely of LMC particles. We therefore aim to quantify the dynamical effects present in the evolved model that lead to this significant discrepancy. We do this by identifying the unique velocity components contributing to the observed median value. These are described in equation~(\ref{eqn:VelocityComponentsLMC}), where we specify the vector components of the LMC dark matter particles:
\begin{equation}
\label{eqn:VelocityComponentsLMC}
\vec{v}_{\rm LMC~evolution} = \vec{v}_{\rm infall} + \vec{v}_{\rm tidal}.
\end{equation}
The term $\vec{v}_{\rm infall}$ is the bulk infall velocity of the LMC, and $\vec{v}_{\rm tidal}$ is the velocity of LMC particles due to the tidal deformation experienced by the LMC. The particles are no longer obviously bound to the LMC, but are instead streaming through the solar neighbourhood. For particles in the solar neighbourhood, both of these terms act approximately as constant offsets owing to their focused directionality. We neglect $\vec{v}_{\rm secular}$ in this case, as we only model the LMC using a spherical distribution. We find a significant velocity boost in the evolved model due to this tidal evolution: the tidal acceleration of particles provides a considerable contribution to the observed median velocity of LMC particles, with a median value of $\langle v_{\rm tidal}\rangle=225\kms$.
\vspace{-0.2cm}
\section{Implications for detection} \label{Section:Discussion}
Quantifying the deviation from the SHM in the solar neighbourhood allows for more robust predictions in the search for dark matter, especially with the improving sensitivity of direct detection experiments such as XENON1T \citep{Aprile_2017} and COHERENT \citep{Akimov_2020}. We also discuss the possibility of detecting LMC dark matter with directional detectors and the implications for annual modulation signals.
\vspace{-0.1cm}
\subsection{Direct detection experiments}
\vspace{-0.2cm}
\begin{figure*}
\centering
\includegraphics[width=18cm,scale=0.5]{DensityPlot.eps}
\caption{The observed direction of origin for particles plotted on an Aitoff projection, shown only for the evolved model. Darker colours indicate higher numbers of particles. \textbf{(a)} shows only the MW particles; \textbf{(b)} shows only the LMC particles; \textbf{(c)} shows all particles with high velocities. The solid curve draws a contour around the densest region in June, and the dotted curve shows the same in December. The larger dashed curve in \textbf{(b)} traces the LMC's infall trajectory (travelling right-to-left).}
\label{fig:DensityPlot}
\end{figure*}
Previous works have suggested that sources of dark matter substructure might be present in the solar neighbourhood. For example, \citet{Evans_2018} proposed a refined SHM to account for the deviations caused by the Gaia Sausage, and \citet{O_Hare_2020_substructure} studied the structure resulting from accretion events. The LMC is an obvious significant perturber as well as a source of dark matter \citep[as first reported by][]{Besla_2019}, with effects that should not be ignored in the search for dark matter.
In our static model, the MW distribution dominates the signal in the detector, as the LMC density in the static model is negligible compared to the number of MW particles at all values of $v_{\rm min}$. Defining the ratio of densities as a function of velocity, $\eta(v)\equiv \rho_{\rm LMC}(v)/(\rho_{\rm MW}(v)+\rho_{\rm LMC}(v))$, we find that $\eta(v)<0.0015$ for all values of $v$, a value that is not feasibly detectable. However, in the evolved model the number of MW particles quickly becomes negligible: 99\% of particles above $782\kms$ are from the LMC, $\eta(>782\kms)>0.99$. This owes to the appreciably higher median velocity of LMC particles in the evolved model, which limits the overlap of the MW and LMC distributions. This creates an opportunity to search for dark matter signatures from LMC particles owing to their high velocities in the heliocentric frame. By imposing a $v_{\rm min}$ threshold to create a sample of pure LMC particles, their signature will not be obscured by the population of MW particles. We find that the LMC particles push the distribution to higher velocities, so the exclusion curves will be pushed to lower masses \citep[in agreement with][]{Besla_2019}.
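The threshold version of the ratio, $\eta(>v_{\rm min})$, can be estimated from speed samples as follows (a minimal sketch with synthetic Gaussian speed distributions whose medians echo those quoted in the text; not the simulation data):

```python
import numpy as np

def lmc_fraction(v_mw, v_lmc, v_min):
    """Fraction of particles above v_min belonging to the LMC:
    eta(>v_min) = N_LMC(>v_min) / (N_MW(>v_min) + N_LMC(>v_min))."""
    n_mw = int(np.count_nonzero(v_mw > v_min))
    n_lmc = int(np.count_nonzero(v_lmc > v_min))
    total = n_mw + n_lmc
    return n_lmc / total if total > 0 else float("nan")

rng = np.random.default_rng(2)
# Synthetic speed samples (km/s); medians and relative counts illustrative.
v_mw = rng.normal(337.0, 90.0, size=20_000)
v_lmc = rng.normal(751.0, 90.0, size=2_000)
```

With these stand-in distributions, the LMC fraction is small with no threshold but approaches unity above a high-speed cut, as in the evolved model.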
\vspace{-0.2cm}
\subsection{Directional detectors}
The resolution of direct detection experiments is inherently limited by background events that begin to dominate the signal at some minimum interaction cross section \citep{Gonzalez_Garcia_2018}. Where neutrinos become the dominant background signal, it becomes challenging to distinguish between neutrino signals and dark matter candidates \citep[][]{Boehm_2019}. The next generation of direct detection experiments aims to reduce this barrier by using directional detectors -- detectors sensitive to the apparent origin direction of dark matter particles -- to identify sources of background such as solar neutrinos \citep{Akimov_2020}.
The distinct trajectory of the LMC ($\vec{v}_{\rm infall}$) provides an exciting opportunity for these detectors. However, it could prove challenging because the LMC's motion is in a similar direction to the solar motion ($\vec{v}_{\rm heliocentric}$). We therefore assess how sensitive directional detectors would need to be in order to identify these as LMC particles. Figure~\ref{fig:DensityPlot} shows the positions of the particles today. The centroid position\footnote{We use the techniques implemented in \href{https://github.com/michael-petersen/unitcentroid}{{\tt unitcentroid}} to compute the centroid of a particle distribution on a unit sphere.} of the observed particle origin direction is given by a white cross, with the size of the arms representing the error bars. The centroid difference between the MW and LMC particles is $26\pm6^\circ$. From Figure~\ref{fig:DensityPlot}, we conclude that the high velocity particles predominantly come from the LMC, with a significantly reduced scatter in angular position on the sky compared to the MW particles.
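A sky centroid of this kind can be sketched as follows (a minimal stand-alone illustration using one common approach, averaging unit vectors and renormalising; not necessarily identical to the method of the {\tt unitcentroid} package, and the clustered directions below are synthetic):

```python
import numpy as np

def sky_centroid(lon, lat):
    """Centroid direction of points on the unit sphere (angles in radians):
    average the unit vectors, renormalise, convert back to angles."""
    x = np.cos(lat) * np.cos(lon)
    y = np.cos(lat) * np.sin(lon)
    z = np.sin(lat)
    m = np.array([x.mean(), y.mean(), z.mean()])
    m /= np.linalg.norm(m)
    return float(np.arctan2(m[1], m[0])), float(np.arcsin(m[2]))

def angular_separation(lon1, lat1, lon2, lat2):
    """Great-circle separation (radians) between two sky directions."""
    c = (np.sin(lat1) * np.sin(lat2)
         + np.cos(lat1) * np.cos(lat2) * np.cos(lon1 - lon2))
    return float(np.arccos(np.clip(c, -1.0, 1.0)))

# Hypothetical clustered arrival directions (e.g. an LMC-like population).
rng = np.random.default_rng(3)
lon = rng.normal(1.0, 0.1, size=2000)
lat = rng.normal(-0.6, 0.1, size=2000)
c_lon, c_lat = sky_centroid(lon, lat)
```

The centroid separation between two such populations plays the role of the $26\pm6^\circ$ MW--LMC difference quoted above.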
Directional detection experiments generally aim to reduce the neutrino barrier by identifying solar neutrinos and removing them from the background counts \citep{Grothaus_2014}. However, there are a number of sources contributing to the neutrino background, not just solar neutrinos, which may complicate this picture. These sources (such as atmospheric neutrinos, geoneutrinos, and detector neutrinos) are outlined in \citet{O_Hare_2020}. Different neutrino sources dominate the background signal in different mass ranges, with solar neutrinos most significant in the low mass range. These alternative neutrino sources produce a far more scattered background signal that directional detectors are less equipped to reduce. Therefore, by utilising the unique velocity and directional properties of LMC dark matter particles, we can invert the technique used by directional detectors; instead of searching for particles to reject as background sources, we search directly for possible dark matter signals from the LMC.
\vspace{-0.1cm}
\subsection{Annual modulation}
As well as a change in the count rate in detectors, we also predict variation in the directionality of the particles over the year, induced by the Earth's motion around the Sun. This directional modulation is present in both the MW and LMC particles, as shown by the contours in Figure~\ref{fig:DensityPlot}. The average centroid position over the year is shown by a white cross, but the centroid position will change due to annual modulation. We can thus define the level of sensitivity that directional detectors would need in order to identify this modulation. We find the modulation amplitude is $11 \pm 6^{\circ}$ for the MW and $4 \pm 4^{\circ}$ for the LMC. With a modest number of detections ($\approx10^4$), one can likely detect the modulation in the MW, but the LMC would require significantly more counts in order to constrain its small amplitude variations.
\vspace{-0.4cm}
\section{Conclusions} \label{Section:Conclusions}
We analyse the phase-space distribution of dark matter particles in the solar neighbourhood in both static and evolved models of the MW-LMC system. The main results of this work are as follows:
\begin{enumerate}
\item The increased velocity and decreased density of LMC particles are primarily due to the LMC's tidal evolution (visible in Figure~\ref{fig:schematic}). For MW particles, the effect of reflex motion is more significant than direct acceleration by the LMC in populating the high velocity tail.
\item The differences in the velocity distributions between the static and evolved models (Figure~\ref{fig:VelocityDist}) owe to the response of the MW and LMC to each other. Therefore, in order to get an accurate description of the local phase-space distribution, the evolved model should be considered.
\item The high velocity of LMC particles and directional coherence ($26\pm6 ^\circ$ between the MW and LMC centroids; Figure~\ref{fig:DensityPlot}) in the heliocentric frame provide an opportunity for targeted detection. By searching above a $v_{\rm min}$ threshold to create a sample of pure LMC particles, directional detectors may be able to search for LMC dark matter particles beyond the neutrino floor.
\item The centroid change in the annual modulation is $11 \pm 6^{\circ}$ for the MW and $4 \pm 4 ^{\circ}$ for the LMC (Figure~\ref{fig:DensityPlot}). It will therefore be easier to detect annual modulation changes for the MW, compared to the LMC. Owing to the velocity distributions, centroid differences from annual modulation will be easier to detect at lower $v_{\rm min}$ detection thresholds.
\end{enumerate}
The next decade will see an influx of data as direct detection experiments probe for dark matter signatures at higher sensitivities. Our work has shown that, as these experiments progress, it is crucial to account for the presence of the LMC in order to model local phase space accurately. Finally, the study of simulations modelling the MW-LMC system is essential to shed more light on the extent of the LMC's impact, and additional models should be explored.
\vspace{-0.6cm}
\section*{Acknowledgements}
KD thanks the Carnegie Trust for project funding. MSP thanks Martin Weinberg for use of the {\sc exp} code and acknowledges funding from a French CNRS grant as well as a UK STFC Consolidated Grant. This work used cuillin, the Institute for Astronomy's computing cluster (\url{http://cuillin.roe.ac.uk}), partially funded by the STFC and managed by Eric Tittley. This project made use of \textsc{numpy} \citep{numpy}, \textsc{matplotlib} \citep{matplotlib}, \textsc{ipython} \citep{ipython}, and \textsc{jupyter} \citep{jupyter}.
\vspace{-0.6cm}
\section*{Data Availability}
The data for particles in the solar neighbourhood, including cumulative distribution curves as well as scripts to generate the figures in this paper, are available on GitHub: \url{https://github.com/katelinbdonaldson/Local-LMC-dark-matter}.
\vspace{-0.5cm}
\bibliographystyle{mnras}
\section{Introduction}
Passivity is the backbone of linear circuit theory. As a system theoretic concept,
it provides a fundamental bridge between physics and computation, well
beyond electrical circuits. Passive linear systems are those that can be realized as port interconnections
of passive elements \autocite{Bott1949}, and the KYP lemma provides an
algorithmic framework for the analysis of passive circuits by convex optimization
\autocite{Yakubovich1962, Kalman1963, Popov1964}.
The circuit concept of passivity has generated amongst the most important developments
of control theory over the last several decades, including dissipativity theory
\autocite{Willems1972, Willems1972a}, nonlinear
passivity theory \autocite{Hill1976, Hill1980, Moylan2014}, and passivity based
control \autocite{vanderSchaft2017, Jayawardhana2006, Ortega1998, Sepulchre1997}.
This paper explores the concept of \emph{maximal monotonicity} as a generalization
of the LTI theory of passivity that retains the fundamental bridge between physics
and computation beyond the world of linear, time-invariant systems.
The property of maximal monotonicity first arose in the
study of nonlinear electrical circuits, in early efforts to extend the
tractability of linear, time invariant, passive networks to networks
containing nonlinear resistors. The prototype of a maximal monotone
element was Duffin's
\emph{quasi-linear} resistor \cite{Duffin1946}, a nonlinear resistor with a
non-decreasing $i-v$ characteristic. Other early forms of monotonicity are
found in the work of \textcite{Golomb1935}, \textcite{Zarantonello1960} and
the work of \textcite{Dolph1961} on ``dissipative'' linear mappings.
Quasi-linearity was refined by Minty
\cite{Minty1960, Minty1961, Minty1961a} to produce the modern concept of
maximal monotonicity, in the context of an algorithm for solving networks of
nonlinear resistors. Desoer and Wu \cite{Desoer1974} studied existence and
uniqueness of solutions to networks of nonlinear resistors, capacitors and
inductors defined by maximal monotone relations.
Following the influential paper of Rockafellar in 1976
\cite{Rockafellar1976}, maximal monotonicity has grown to become a
fundamental property in convex optimization \cite{Rockafellar1997, Ryu2016,
Ryu2021a, Parikh2013, Bertsekas2011, Combettes2011}, forming the basis of a large body of
work on tractable first order methods for large scale and nonsmooth
optimization problems, which have seen a surge of interest in the last
decade. However, the physical significance of maximal monotonicity in
nonlinear circuit theory has been somewhat forgotten.
The operator-theoretic property of
maximal monotonicity can be interpreted, when defined on an appropriate space, as the incremental form of cyclo-passivity
\autocite{Willems1974}. It coincides with passivity only for linear systems. Like
passivity, maximal monotonicity is preserved under port interconnections
\autocite{Camlibel2013}.
However, unlike passivity, maximal monotonicity comes equipped with a convex
algorithmic theory, for linear \emph{and} nonlinear systems. Maximal monotonicity
plays an important role in the simulation of nonsmooth dynamical systems
\autocite{Acary2008, Brogliato2016}. The first connection
between maximal monotone operators and passive linear systems appears in this area, in the work of
\textcite{Brogliato2004}. This work
inspired a line of research on Lur'e systems consisting of a passive LTI system in
feedback with a nonsmooth maximal monotone operator \autocite{Brogliato2009, Brogliato2011a,
Brogliato2013, Camlibel2016, Adly2017, Brogliato2020}.
In this paper, we revisit the classical study of nonlinear electrical
networks in light of recent developments in the theory of
maximal monotone operators. Preliminary results in this direction were
presented at ECC2021 \autocite{Chaffey2021}. We study the problem of computing the
periodic output of a system described by a series/parallel interconnection of
basic elements, which is forced by a periodic input.
The solution is computed using a fixed point iteration
in the space of periodic trajectories, and computation is split using a
splitting algorithm. The
splitting corresponds precisely to the interconnection structure:
computational steps are performed individually for each element.
The interconnection must be monotone; however, the elements themselves need
not be.
The approach of this paper is reminiscent of the frequency response analysis of LTI systems
using the transfer functions of their components. Existing frequency response
methods for nonlinear systems are either approximate and limited in their
applicability, as in harmonic analysis
\cite{Feldmann1996, Blagquiere1966, Krylov1947, Slotine1991}, or involve performing a
transient simulation and waiting for convergence \cite{Aprille1972,
Brogliato2016, Cellier2006}.
A similar problem has been studied by Heemels
\emph{et al.} \cite{Heemels2017}, for the class of systems described by a
Lur'e-type feedback interconnection of a passive LTI state space system and a
maximal monotone nonlinearity. They use a fixed point algorithm to perform a
time-stepping simulation of the state space model. In contrast, in this
paper, we use fixed point algorithms in the space of periodic trajectories.
The first part of this paper introduces a general framework for modelling
monotone one-port circuits, and a general method for computing their periodic
input/output behaviour. In
Section~\ref{sec:illustrative_example},
we motivate our work with a simple example. In Section~\ref{sec:elements}, we introduce the basic theory of
maximal monotonicity. In Section~\ref{sec:1ports}, we develop a modelling
framework for monotone one-port circuits, built from the series/parallel
interconnection of smaller one-ports.
In Section~\ref{sec:computation}, we
develop a technique to compute the periodic output of a periodically
driven maximal monotone one-port using off-the-shelf optimization
methods, and introduce a new splitting algorithm which applies to arbitrary
series/parallel circuits.
The second part of this paper applies this computational technique to two
classes of systems.
Section~\ref{sec:RLC} applies the theory to resistors, inductors and
capacitors, and gives two detailed examples, including a
large-scale circuit consisting of 300,000 elements.
Section~\ref{sec:conductance} applies the theory to memristive systems, using
the specific example of a neuronal potassium conductance. Finally, in
Section~\ref{sec:literature}, several connections to the literature are
explored.
\section{Motivating example}\label{sec:illustrative_example}
We begin with a simple example, which motivates the developments of this paper:
the series interconnection of two resistors (Figure~\ref{fig:series-resistors}).
\begin{figure}[h!]
\centering
\includegraphics{resistors}
\caption{Series interconnection of two resistors.}
\label{fig:series-resistors}
\end{figure}
Consider first the linear, time invariant case, $v_j = R_j i_j$. The series
interconnection maps the applied current $i$ to the port voltage $v$ by the relation
$v = R_1 i + R_2 i$. The inverse relation maps voltage $v$ to current $i$ by $i =
v/(R_1 + R_2)$. A parallel connection is dual: voltage and current are exchanged,
and resistance is replaced by its reciprocal, conductance.
The fundamentals of the circuit remain unchanged if the resistors are each replaced
by a linear passive transfer function. Indeed, the series interconnection of two
passive 1-ports remains passive, and the inverse of a passive transfer function is
again passive.
If we replace the linear resistors by nonlinear, but passive resistors, however,
several attractive properties of the resistors are lost.
A passive resistor can have regions of negative slope in its
$i-v$ curve (\textcite{Chua1983} give a catalogue of physical
examples). The inverse of such a resistor
may not be well defined. If, however, we consider monotone nonlinear resistors,
the fundamentals of the LTI case remain unchanged. Monotonicity of a resistor means
its $i-v$ curve is nondecreasing; most importantly, invertibility of the
interconnection is retained. This is illustrated in
Figure~\ref{fig:monotone-passive}.
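The practical content of monotonicity here is invertibility: a continuous nondecreasing $i$--$v$ curve can always be inverted numerically by bracketing. A minimal sketch (the resistor characteristic below is illustrative, not taken from the paper):

```python
import numpy as np

def invert_monotone(f, y, lo=-1e3, hi=1e3, tol=1e-10):
    """Solve f(x) = y for a continuous nondecreasing f by bisection.
    Monotonicity keeps the bracket valid at every step."""
    assert f(lo) <= y <= f(hi), "target outside bracket"
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) < y:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Illustrative monotone (quasi-linear) resistor characteristic: v = i + tanh(i).
v_of_i = lambda i: i + np.tanh(i)
i_star = invert_monotone(v_of_i, 1.5)
```

A resistor with regions of negative slope would break the bracketing argument: $f(\text{mid}) < y$ would no longer tell us which half-interval contains the solution.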
\begin{figure}[h]
\centering
\includegraphics{passive-monotone}
\caption{The $i$-$v$ curves of a passive and a monotone resistor.}%
\label{fig:monotone-passive}
\end{figure}
\section{Maximal monotone relations}
\label{sec:elements}
We begin by introducing some mathematical preliminaries.
\subsection{Relations}
\begin{definition}
A \emph{relation} on a space $X$ is a subset $S \subseteq X \times X$.
\end{definition}
We write $y \in S(u)$ to denote $(u, y) \in S$.
The usual operations on functions can be extended to relations:
\begin{IEEEeqnarray*}{rCl}
S^{-1} &=& \{ (y, u) \; | \; y \in S(u) \}\\
S + R &=& \{ (x, y + z) \; | \; (x, y) \in S,\; (x, z) \in R \}\\
SR &=& \{ (x, z) \; | \; \exists\, y \text{ s.t. } (x, y) \in R,\; (y, z) \in S \}.
\end{IEEEeqnarray*}
Note that the relational inverse $S^{-1}$ always exists, but in general, $S S^{-1}
\neq I$, where $I$ is the identity relation $\{(x, x)\;|\;x \in X\}$.
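For finite relations, these operations can be made concrete with a toy implementation (sets of pairs; purely illustrative):

```python
# Toy model: finite relations as sets of (input, output) pairs,
# implementing the inverse, sum, and composition defined above.

def rel_inverse(S):
    return {(y, u) for (u, y) in S}

def rel_sum(S, R):
    return {(x, y + z) for (x, y) in S for (xr, z) in R if x == xr}

def rel_compose(S, R):
    """The relation SR: apply R first, then S."""
    return {(x, z) for (x, y) in R for (y2, z) in S if y2 == y}

S = {(0, 1), (0, 2), (1, 3)}      # a genuinely multi-valued relation
I = {(x, x) for x in range(4)}    # identity on {0, 1, 2, 3}
```

Here $S S^{-1}$ is not the identity even though $S^{-1}$ exists, because $S$ is multi-valued at $0$.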
\subsection{Signal spaces}
Throughout this paper, we let $\mathcal{H}$ be a
Hilbert space with inner product $\bra{\cdot}\ket{\cdot}$ and induced norm
$\norm{x} = \sqrt{\bra{x}\ket{x}}$.
The particular Hilbert spaces we consider are spaces of periodic signals, described
by a single period. A trajectory $w(t)$ is said to be
$T$-periodic if $w(t) = w(t + T)$ for all $t$.
Let $L_{2,T}$ denote the space of signals $u: [0, T] \to \R^n$ which are square integrable, that is,
\begin{IEEEeqnarray*}{rCl}
\int_{0}^{T} u\tran(t) u(t) \dd{t} < \infty.
\end{IEEEeqnarray*}
This is a Hilbert space with inner product
\begin{IEEEeqnarray*}{rCl}
\bra{u}\ket{y} \coloneqq \int_{0}^{T} u\tran(t)y(t) \dd{t}
\end{IEEEeqnarray*}
and induced norm $\norm{u} \coloneqq \sqrt{\bra{u}\ket{u}}$.
The discrete-time counterpart of $L_{2, T}$ is denoted by $l_{2, T}$, the space of
length-$T$ sequences which are square summable:
\begin{IEEEeqnarray*}{rCl}
\sum_{t = 0}^{T-1} u\tran(t)u(t) < \infty.
\end{IEEEeqnarray*}
Again,
this is a Hilbert space with inner product
\begin{IEEEeqnarray*}{rCl}
\bra{u}\ket{y} \coloneqq \sum_{t = 0}^{T-1} u\tran(t)y(t)
\end{IEEEeqnarray*}
and induced norm $\norm{u} \coloneqq \sqrt{\bra{u}\ket{u}}$.
$L_{2, [0, \infty)}$ and $L_{2, (-\infty, \infty)}$ are defined analogously to $L_{2,
T}$, but with time intervals $[0, \infty)$ and $(-\infty, \infty)$, respectively.
\subsection{Maximal monotonicity}
The property of \emph{monotonicity} connects the physical property of energy
dissipation in a device to algorithmic analysis methods.
Monotonicity on $\mathcal{H}$ is defined as follows.
\begin{definition}
A relation $S \subseteq \mathcal{H}\times\mathcal{H}$ is called \emph{monotone} if
\begin{IEEEeqnarray*}{rCl}
\langle u_1 - u_2 | y_1 - y_2 \rangle \geq 0
\end{IEEEeqnarray*}
for any $(u_1, y_1), (u_2, y_2) \in S$.
A monotone relation is called \emph{maximal} if it is not properly contained in any
other monotone relation.
\end{definition}
By way of example, a relation $S \subseteq \R \times \R$ is monotone if its graph is
non-decreasing, and maximal if its graph has no endpoints.
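The scalar case can be checked numerically by testing the defining inequality on all pairs of sampled points (a minimal sketch; the example graphs are illustrative):

```python
import numpy as np

def is_monotone_samples(u, y):
    """Check the monotonicity inequality <u1 - u2, y1 - y2> >= 0 on every
    pair of sampled points (u_k, y_k) of a scalar relation."""
    du = u[:, None] - u[None, :]
    dy = y[:, None] - y[None, :]
    return bool(np.all(du * dy >= 0))

u = np.linspace(-2.0, 2.0, 41)
cubic = u ** 3                 # nondecreasing graph: monotone
relu = np.maximum(u, 0.0)      # nondecreasing with a flat region: still monotone
negative = -u                  # decreasing graph: not monotone
```

Flat regions are permitted (the inner product is zero there), which is why monotone relations, unlike strictly increasing functions, can be multi-valued after inversion.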
Note that this definition refers to monotonicity in the operator theoretic sense, and
this is distinct from the notion of monotonicity in the sense of partial order
preservation by a state-space system (see, for example, \cite{Angeli2003}).
Monotonicity is preserved under a number of operations.
The proof of the following lemma may be found in \cite{Ryu2021a}.
\begin{lemma}
\label{lem:monotone_properties}
Consider relations $G$ and $F$ which are monotone on $\mathcal{H}$. Then
\begin{enumerate}
\item $G^{-1}$ is monotone; \label{inversion}
\item $G + F$ is monotone; \label{sum}
\item $\alpha G$ is monotone for $\alpha > 0$.
\end{enumerate}
\end{lemma}
Maximality is preserved under inversion. However, in general, maximality is not
preserved when two relations are added (indeed,
their sum may be empty). We make the following assumption on summations throughout
the rest of this paper, which guarantees maximality of the sum, by \cite[Thm. 1]{Rockafellar1970}.
\begin{assumption}
\label{ass:domains}
Any summation of two relations $G$ and $F$ obeys
\begin{IEEEeqnarray*}{rrCl}
& \interior \dom F \cap \dom G &\neq& \varnothing\\
\text{or } & \interior \dom G \cap \dom F &\neq&
\varnothing,
\end{IEEEeqnarray*}
where $\dom S$ denotes the domain of the relation $S$.
\end{assumption}
This assumption is sufficient (but not necessary) for the existence of solutions to
the summation (that is, the resulting relation is nonempty). We omit the proof of this fact.
\subsection{Stronger monotonicity properties}
\label{sec:more_properties}
\begin{definition}
A relation $S$ has a \emph{Lipschitz constant of} $\lambda>0$, or is
\emph{$\lambda$-Lipschitz} if, for all
$(u_1, y_1), (u_2, y_2) \in S$,
\begin{equation*}
\norm{y_1 - y_2} \leq \lambda\norm{u_1 - u_2}.
\end{equation*}
If $\lambda < 1$, $S$ is called a \emph{contraction}. If $\lambda = 1$, $S$ is called
\emph{nonexpansive}.
\end{definition}
Note that if $S$ is $\lambda$-Lipschitz, it is also $\bar\lambda$-Lipschitz for all
$\bar\lambda > \lambda$.
\begin{definition}
Given $\theta \in (0, 1)$, a relation $S$ is said to be $\theta$-\emph{averaged} if $S = (1 - \theta)I + \theta G$, where $I$ is the
identity relation and $G$ is some nonexpansive relation.
\end{definition}
\begin{definition}
Given $\mu > 0$, a relation $S$ is $\mu$-\emph{coercive} or
$\mu$-\emph{strongly monotone} if, for all $(u_1, y_1), (u_2, y_2) \in S$,
\begin{equation*}
\bra{u_1 -u_2}\ket{y_1 - y_2} \geq \mu\norm{u_1 - u_2}^2.
\end{equation*}
$S$ is called $\mu$-\emph{hypomonotone} in the case that $\mu < 0$.
If the sign of $\mu$ is unknown, we simply say $S$ is
$\mu$-\emph{monotone}.
\end{definition}
\begin{definition}
Given $\gamma > 0$, a relation $S$ is $\gamma$-\emph{cocoercive}
if, for all $(u_1, y_1), (u_2, y_2) \in S$,
\begin{equation*}
\bra{u_1 - u_2}\ket{y_1 - y_2} \geq \gamma \norm{y_1 - y_2}^2.
\end{equation*}
$S$ is called $\gamma$-\emph{cohypomonotone} in the case that $\gamma < 0$.
\end{definition}
It is seen immediately that $F$ is $\mu$-coercive if and only if $F^{-1}$ is
$\mu$-cocoercive. It also follows from the Cauchy-Schwarz inequality that $F$
has a Lipschitz constant of $1/\gamma$ if $F$ is $\gamma$-cocoercive. Finally, if
$A$ is $\mu$-coercive (resp.\ $\gamma$-cocoercive) and $B$ is monotone, then $A + B$
is $\mu$-coercive (resp.\ $\gamma$-cocoercive). For more details on these
properties, we refer the reader to \autocite[\S 2.2]{Ryu2021a} and
\autocite{Giselsson2019}.
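These relationships can be checked numerically for a scalar example (a minimal sketch; the map $F(u) = 2u + \tanh u$ is illustrative, with slope everywhere in $(2, 3]$):

```python
import numpy as np

u = np.linspace(-3.0, 3.0, 121)
y = 2.0 * u + np.tanh(u)   # illustrative map F, secant slopes in (2, 3)

du = u[:, None] - u[None, :]
dy = y[:, None] - y[None, :]
mask = du != 0

# Strong-monotonicity modulus: mu = min <du, dy> / |du|^2  (here > 2).
mu = float(np.min((du * dy)[mask] / (du ** 2)[mask]))

# Cocoercivity modulus: gamma = min <du, dy> / |dy|^2.
gamma = float(np.min((du * dy)[mask] / (dy ** 2)[mask]))

# Largest secant slope: a sample-based Lipschitz estimate.
lip = float(np.max(np.abs(dy[mask]) / np.abs(du[mask])))
```

The sample-based Lipschitz estimate is bounded by $1/\gamma$, consistent with the Cauchy-Schwarz argument above.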
\section{Monotone one-port circuits}
\label{sec:1ports}
The systems considered in this paper are electrical one-port circuits. The study of circuits modelled as
one-ports is classical, dating back to work by Foster \autocite{Foster1924}, Brune
\autocite{Brune1931}, Bott and Duffin \autocite{Bott1949},
and others. In the spirit of this classical work, and of the ``tearing, zooming and linking''
modelling methodology advocated by \textcite{Willems2007}, we will model
one-port circuits by building them as series and parallel interconnections
of smaller one-port circuits.
One-port circuits have two external terminals. The port voltage $v$ may be measured
across these terminals, and the port current $i$ may be measured through them. We assume that each of
these variables takes values in $\R$.
A one-port circuit $E$ is defined
by a relation on $L_{2, T}$ between current and voltage. We denote by $d(E) \in \{i\to v, v \to i\}$ the
direction of the relation $E$, either current to voltage (current controlled) or
voltage to current (voltage controlled). We will often denote current controlled
circuits by $R$, and voltage controlled circuits by $G$.
We say that $E$ is an \emph{$\alpha$-monotone one-port} if it is defined by an $\alpha$-monotone relation.
\subsection{Series and parallel interconnections}
Two one-ports may be combined to build a new one-port by series or parallel
interconnection. These are illustrated in Figure~\ref{fig:series-parallel-circuit}.
\begin{figure}[ht]
\centering
\includegraphics{parallel-series.pdf}
\caption{Series (left) and parallel (right) interconnections of two 1-ports.}
\label{fig:series-parallel-circuit}
\end{figure}
When two one-ports are
connected in parallel, their relations must be from voltage to current. If they are
not, one or both relations must be inverted before interconnection. Let $G_1$ and
$G_2$ be two one-port circuits such that $d(G_1) = d(G_2) = v \to i$.
For a parallel interconnection, the composition of Kirchhoff's laws and the relations
$G_1$ and $G_2$ creates a natural forward relation from voltage to current, as
follows.
\begin{enumerate}
\item KVL: $v = v_1 = v_2$
\item Device: $(v_1, i_1) \in G_1$, $\quad(v_2, i_2) \in G_2$
\item KCL: $i_1 + i_2 = i$.
\end{enumerate}
We therefore have a new relation $G = G_1 + G_2$, $d(G) = v \to i$.
This is illustrated in the left of Figure~\ref{fig:parallel-block}. Calculating the inverse
relation, we have
\begin{IEEEeqnarray*}{rCl}
i &\in& (G_1 + G_2)(v)\\
i - G_2(v) &\in& G_1(v)\\
v &\in& G_1^{-1}(i-G_2(v)),
\end{IEEEeqnarray*}
which is the negative feedback interconnection of $G_1^{-1}$ and $G_2$, illustrated in the right of Figure~\ref{fig:parallel-block}.
For a series interconnection, the roles of current and voltage are reversed. Letting
$R_1$ and $R_2$ be two one-port circuits such that $d(R_1) = d(R_2) = i \to v$, their
series interconnection gives a relation from current to voltage, as follows.
\begin{enumerate}
\item KCL: $i = i_1 = i_2$
\item Device: $(i_1, v_1) \in R_1$, $\quad(i_2, v_2) \in R_2$
\item KVL: $v_1 + v_2 = v$.
\end{enumerate}
The new relation is $R = R_1 + R_2$, with $d(R) = i \to v$. The inverse relation,
from $v$ to $i$, is the negative feedback interconnection of $R_1^{-1}$ and $R_2$.
This is illustrated in Figure~\ref{fig:series-block}.
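The two views of the interconnection can be sketched for a scalar, memoryless example (the conductances below are illustrative; the damped substitution converges here because composing $G_1^{-1}$ with $G_2$ gives a contraction):

```python
import numpy as np

# Two voltage-controlled monotone one-ports (scalar, memoryless, illustrative).
G1 = lambda v: 2.0 * v       # linear conductance, with G1^{-1}(x) = x / 2
G2 = lambda v: np.tanh(v)    # monotone nonlinear conductance

# Forward (parallel) relation: i = G1(v) + G2(v).
G = lambda v: G1(v) + G2(v)

def solve_voltage(i, iters=200):
    """Inverse relation v = G1^{-1}(i - G2(v)), solved by substitution.
    The iteration contracts since |d/dv tanh(v)| <= 1 < 2."""
    v = 0.0
    for _ in range(iters):
        v = (i - G2(v)) / 2.0
    return float(v)

v_star = solve_voltage(1.7)
```

The fixed point of the substitution is precisely the negative feedback interconnection of $G_1^{-1}$ and $G_2$ shown in Figure~\ref{fig:parallel-block}.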
\begin{figure}
\centering
\includegraphics{parallel-block}
\caption{Block diagram of parallel interconnection, illustrating parallel
forward relation from voltage to current, and negative feedback relation from
current to voltage.}
\label{fig:parallel-block}
\end{figure}
\begin{figure}
\centering
\includegraphics{series-block}
\caption{Block diagram of series interconnection, illustrating
parallel forward relation from current to voltage, and negative feedback
relation from voltage to current.}
\label{fig:series-block}
\end{figure}
Monotonicity of circuits is preserved under series and parallel interconnection.
Precisely, we have the following.
\begin{proposition}\label{prop:series_parallel}
\begin{enumerate}
\item Let $E_1$ and $E_2$ be monotone one-port circuits such that
$d(E_1), d(E_2) \in \{i\to v, v \to i\}$. Then the series and
parallel interconnections of $E_1$ and $E_2$ are both
monotone one-ports. \label{prop:series_parallel:one}
\item Let $G_1$ and $G_2$ be one-port circuits such that $G_1$ is
$\alpha$-monotone, $G_2$ is $\beta$-monotone, and
$d(G_1) = d(G_2) = v \to i$. Then the parallel
interconnection of $G_1$ and $G_2$ is $(\alpha +
\beta)$-monotone. \label{prop:series_parallel:two}
\item Let $R_1$ and $R_2$ be one-port circuits such that $R_1$ is
$\alpha$-monotone, $R_2$ is $\beta$-monotone, and
$d(R_1) = d(R_2) = i \to v$. Then the series
interconnection of $R_1$ and $R_2$ is $(\alpha +
\beta)$-monotone. \label{prop:series_parallel:three}
\end{enumerate}
\end{proposition}
\begin{proof}
The proof of Part~\ref{prop:series_parallel:one} follows directly from the
preservation of monotonicity under inversion and addition
(Lemma~\ref{lem:monotone_properties}).
Parts~\ref{prop:series_parallel:two} and~\ref{prop:series_parallel:three}
follow from the fact that if $E_1$ is $\alpha$-monotone and $E_2$ is
$\beta$-monotone, $E_1 + E_2$ is $(\alpha + \beta)$-monotone:
\begin{IEEEeqnarray*}{+cl+x*}
&\bra{u_1 - u_2}\ket{(E_1 + E_2)u_1 - (E_1 + E_2)u_2} \\
=& \bra{u_1 - u_2}\ket{E_1u_1 - E_1u_2} + \bra{u_1 - u_2}\ket{E_2u_1 - E_2u_2}\\
\geq& (\alpha + \beta)\norm{u_1 - u_2}^2. &\qedhere
\end{IEEEeqnarray*}
\end{proof}
Preservation of monotonicity under port interconnections is explored in more detail
by \c{C}amlibel and van der Schaft \autocite{Camlibel2013}.
Repeatedly applying series and parallel interconnections allows a collection of
one-port circuits to be assembled into a single, larger one-port circuit, using the
relational operations of inversion and addition.
\subsection{Monotonicity and passivity}\label{sec:feedback}
Monotone one-ports have close
connections to the classical theory of
feedback systems, and in particular, passivity and cyclo-passivity.
\textcite{Desoer1975} define \emph{incremental positivity} of an operator $S$ on $L_{2, [0, \infty)}$ as
the property that $\bra{u_1 - u_2}\ket{S(u_1) - S(u_2)} \geq 0$ for all inputs $u_1,
u_2 \in L_{2, [0, \infty)}$. Incremental positivity is precisely monotonicity on $L_{2, [0, \infty)}$.
Incremental positivity is closely related to incremental passivity. Define the extended
$L_{2, [0, \infty)}$ space, $L_{2, e}$, to be the space of signals $u$ such that, for all $T>0$, $P_T u \in L_{2, [0, \infty)}$, where $P_T$ is the truncation operator, defined by
\begin{IEEEeqnarray*}{rCl}
P_T(u)(t) = \begin{cases}
u(t) & t < T\\
0 & \text{otherwise}.
\end{cases}
\end{IEEEeqnarray*}
\emph{Incremental passivity} of an operator $S$ on $L_{2, e}$ is then defined as the
property that $\bra{P_T u_1 - P_T u_2}\ket{S(P_T u_1) - S(P_T u_2)} \geq 0$ for all inputs $u_1,
u_2 \in L_{2, e}$ and all $T>0$. In general, incremental passivity is a stronger
notion than incremental positivity; however, the two notions are equivalent for causal
operators. The proof of this fact is identical to that of \autocite[Lemma 2, p. 200]{Desoer1975}.
Cyclo-passivity is a generalisation of
passivity to storages that are not necessarily lower bounded, first introduced by
\textcite{Willems1974} and later developed by \textcite{Hill1975}.
For recent work on cyclo-passivity of multi-ports, see
\textcite{vanderSchaft2020, vanderSchaft2020a, vanderSchaft2021}.
To the best of the authors' knowledge, the incremental version of this property has never been
studied. We define incremental cyclo-passivity as follows:
\begin{definition}
A relation $S$ on $L_{2, (-\infty, \infty)}$ is said to be \emph{incrementally cyclo-passive} if, for all
$T$, and all $(u_1, y_1), (u_2, y_2) \in S$ such that $u_1(-T) = u_1(T)$ and
$u_2(-T) = u_2(T)$ (and likewise for $y_1$ and $y_2$), then
\begin{IEEEeqnarray*}{+rCl+x*}
\bra{u_1 - u_2}\ket{y_1 - y_2} &\geq& 0. & \qedhere
\end{IEEEeqnarray*}
\end{definition}
This corresponds to monotonicity on the space of all periodic trajectories \emph{of any period}.
The property required for the computational methods described in this paper,
monotonicity on the space of periodic signals with a particular period, is a weaker notion.
We conclude this section by remarking that the preservation of monotonicity under
port interconnection, proved in Proposition~\ref{prop:series_parallel}, can be
reinterpreted in terms of negative feedback. As shown in
Figure~\ref{fig:parallel-block}, the negative feedback interconnection
of two operators $F$ and $G$ can be represented as a parallel interconnection of $F^{-1}$ and $G$.
Proposition~\ref{prop:series_parallel} then allows us to recover the incremental form
of the fundamental theorem of passivity.
\begin{corollary}
Given two operators $F$ and $G$, each monotone on a Hilbert space
$\mathcal{H}$, their negative feedback interconnection $(F^{-1} +
G)^{-1}$ is monotone on $\mathcal{H}$.
\end{corollary}
\section{Algorithmic steady-state analysis of series/parallel monotone one-ports}
\label{sec:computation}
In this section, we develop an algorithmic method for computing the periodic response
of a monotone one-port which is forced by a periodic input. We consider a circuit
made of series and parallel interconnections of one-port elements, each defining a
(discrete time) monotone operator on $l_{2, T}$. The circuit defines a monotone
operator $M$. Concrete examples of such circuits are given in Sections~\ref{sec:RLC}
and~\ref{sec:conductance}.
Without loss of generality, we consider the problem of computing the ``output''
current $i^\star$ of the monotone operator $M$ corresponding to an ``input'' voltage
$v^\star$.
We compute the solution as the fixed point of an iterative splitting algorithm
determined from the series and parallel structure of the circuit. The algorithm is
first presented for two elements, then generalized to an arbitrary composition of
series and parallel interconnections.
\subsection{Splitting algorithms for two element circuits} \label{sec:splitting}
\label{sec:fixed_point_algorithms}
There is a large body of literature on splitting algorithms, which solve problems of the form $0 \in M_1(u) +
M_2(u)$, where $M_1 + M_2$ is a maximal monotone relation. If $M$ consists of two
elements, connected in series or parallel, we can convert our problem to this form by
writing $0 \in M_1(i) + M_2(i) - v^\star$ (assuming a series interconnection; the
parallel interconnection is obtained by exchanging $i$ and $v$). The offset
$-v^\star$ does not affect the monotonicity properties of $M$. Splitting algorithms
allow computation to be performed separately for the
components $M_1$ and $M_2$, and are useful when computation for the individual components is
easy, but computation for their sum is hard. Here, we describe two splitting algorithms: the
forward/backward splitting and the Douglas-Rachford splitting.
Given an operator $S$ and a
scaling factor $\alpha$, the $\alpha$-resolvent of $S$ is defined to be the operator
\begin{IEEEeqnarray*}{rCl}
\res_{\alpha S} \coloneqq (I + \alpha S)^{-1}.
\end{IEEEeqnarray*}
If $S$ is maximal monotone, $\res_{\alpha S}$ is single-valued for any $\alpha > 0$ \autocite{Minty1961}.
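For a scalar monotone map, the resolvent can be evaluated by any one-dimensional root-finder, since $x \mapsto x + \alpha S(x)$ is increasing. A minimal Python sketch, with an illustrative nonlinearity $S$ and an ad hoc bracketing interval:

```python
# Sketch: evaluating the alpha-resolvent (I + alpha*S)^{-1} of a scalar
# maximal monotone map S by bisection. S and the bracket are illustrative choices.
def resolvent(S, alpha, y, lo=-1e6, hi=1e6, tol=1e-12):
    """Solve x + alpha*S(x) = y for x; the map x -> x + alpha*S(x) is increasing."""
    f = lambda x: x + alpha * S(x) - y
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

S = lambda x: x**3          # monotone cubic nonlinearity
x = resolvent(S, 1.0, 2.0)  # solves x + x^3 = 2, giving x = 1
print(round(x, 6))          # -> 1.0
```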
\subsubsection*{Forward/backward splitting}
The simplest splitting algorithm is the forward/backward splitting
\autocite{Passty1979, Gabay1983, Tseng1988}. Suppose $M_1$ and
$\res_{\alpha M_2}$ are single-valued. Then:
\begin{IEEEeqnarray*}{lrCl}
& 0 &\in& M_1(x) + M_2(x)\\
\iff & 0 & \in & x - \alpha M_1(x) - (x + \alpha M_2(x))\\
\iff &(I + \alpha M_2)x &\ni& (I - \alpha M_1)x \\
\iff & x &=& \res_{\alpha M_2} (I - \alpha M_1) x.
\end{IEEEeqnarray*}
The fixed point iteration $x^{j+1} = \res_{\alpha M_2}(x^j - \alpha M_1(x^j))$ is the forward/backward splitting algorithm.
The most general convergence conditions for this algorithm are given by
\textcite[$\S6$]{Giselsson2019}, which guarantee that this iteration is an averaged
operator. These conditions may be summarised as follows.
\begin{proposition}
\label{prop:forward-backward}
Let $\mu \geq 0$, $\omega \geq 0$ and $\beta > 0$, and $M_1$ and $M_2$ be
operators on a Hilbert space $\mathcal{H}$.
The forward/backward algorithm, with scaling factor $\alpha \in (0, 2/(\beta + 2\mu))$, converges to a zero of $M_1 + M_2$, if one
exists, in each of
the following cases:
\begin{itemize}
\item $M_1$ is maximally $\mu$-monotone, $M_1 - \mu I$ is
$1/\beta$-cocoercive, $M_2$ is maximally $(-\omega)$-monotone
and $\mu \geq \omega$.
\item $M_1$ is maximally $(-\omega)$-monotone, $M_1 + \omega I$ is
$1/\beta$-cocoercive, $M_2$ is maximally $\mu$-monotone
and $\mu \geq \omega$.
\item $M_1$ is $\beta$-Lipschitz, $M_2$ is maximally $\mu$-monotone and
$\mu \geq \beta$.
\end{itemize}
\end{proposition}
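The forward/backward iteration can be sketched in a few lines of Python. The scalar operators and step size below are illustrative choices, not taken from the paper's examples:

```python
# Sketch: forward/backward splitting for 0 in M1(x) + M2(x), with single-valued
# scalar operators (illustrative choices).
def forward_backward(M1, res_M2, alpha, x0, iters=200):
    """Iterate x <- res_{alpha M2}(x - alpha*M1(x))."""
    x = x0
    for _ in range(iters):
        x = res_M2(x - alpha * M1(x))
    return x

M1 = lambda x: x - 2.0              # monotone and Lipschitz (forward step)
res_M2 = lambda y: y / (1.0 + 0.5)  # resolvent of M2(x) = x with alpha = 0.5
x = forward_backward(M1, res_M2, 0.5, 0.0)
print(round(x, 6))                  # solves (x - 2) + x = 0, i.e. x = 1
```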
\subsubsection*{Douglas-Rachford splitting}
One of the most successful splitting
algorithms is the Douglas-Rachford algorithm \autocite{Douglas1956, Lions1979},
which forms the basis of the Alternating Direction Method of
Multipliers \cite{Boyd2010}.
The reflected resolvent, or Cayley operator, is the operator
\begin{IEEEeqnarray*}{rCl}
R_{\alpha S} \coloneqq 2\res_{\alpha S} - I.
\end{IEEEeqnarray*}
Given two operators $M_1$ and $M_2$, and a scaling factor $\alpha$,
the Douglas-Rachford algorithm is the iteration
\begin{IEEEeqnarray*}{rCl}
x^{k + 1} &=& T(x^{k}),
\end{IEEEeqnarray*}
where $T$ is given by
\begin{IEEEeqnarray}{rCl}
T = \frac{1}{2}(I + R_{\alpha M_1} R_{\alpha M_2}).\label{eq:DR_operator}
\end{IEEEeqnarray}
\textcite[Thm 5.1]{Giselsson2019} give the most general conditions for convergence of
the Douglas-Rachford algorithm, which guarantee that $T$ is averaged.
\begin{proposition} \label{thm:DR_convergence}
Let $M_1$ and $M_2$ be operators on a Hilbert space $\mathcal{H}$. Let $\mu
> \omega \geq 0$ and $\alpha \in (0, 1/\omega)$. The Douglas-Rachford
algorithm converges to a zero of $M_1 + M_2$, if one exists, in each of the
following cases.
\begin{itemize}
\item $M_1$ is maximally $(-\omega)$-monotone and $M_2$ is maximally
$\mu$-monotone.
\item $M_2$ is maximally $(-\omega)$-monotone and $M_1$ is maximally
$\mu$-monotone.
\end{itemize}
\end{proposition}
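A scalar Douglas-Rachford sketch in Python follows. The operators, their resolvents, and the recovery of the solution through $\res_{\alpha M_2}$ are illustrative choices under the standard convention; they are not taken from the paper's examples:

```python
# Sketch: Douglas-Rachford iteration for 0 in M1(x) + M2(x), using scalar
# resolvents (illustrative operators; alpha = 1 here).
def douglas_rachford(res_M1, res_M2, z0, iters=200):
    """Iterate z <- (z + R1(R2(z)))/2, where Rj = 2*res_Mj - I."""
    R1 = lambda z: 2.0 * res_M1(z) - z
    R2 = lambda z: 2.0 * res_M2(z) - z
    z = z0
    for _ in range(iters):
        z = 0.5 * (z + R1(R2(z)))
    return res_M2(z)   # the solution is recovered through res_M2

# M1(x) = x - 3 and M2(x) = 2x, with alpha = 1:
res_M1 = lambda y: (y + 3.0) / 2.0   # solves x + (x - 3) = y
res_M2 = lambda y: y / 3.0           # solves x + 2x = y
x = douglas_rachford(res_M1, res_M2, 0.0)
print(round(x, 6))                   # solves (x - 3) + 2x = 0, i.e. x = 1
```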
\subsection{A nested splitting algorithm for three element
circuits}\label{sec:three}
If $M$ is composed of three elements, with one series interconnection and one
parallel interconnection (see Figure~\ref{fig:three-circuit}), $M$ has the form
$M = M_1 + (M_2 + M_3)^{-1}$, and we can convert our problem to the form $0 \in (M_1 + (M_2 +
M_3)^{-1})(u)$ again by offsetting by the input current or voltage. A naive approach to
solving this problem is to use a splitting algorithm such as the forward/backward algorithm,
with the resolvent step applied for $M_1$ and the forward step applied for $(M_2 +
M_3)^{-1}$. Applying this forward step amounts to solving $y = (M_2 + M_3)^{-1}(u)$
for some $u$, which may be rewritten as $0 \in (M_2 + M_3)(y) - u$. This can be
solved by again applying the forward/backward algorithm.
\begin{figure}[hb]
\centering
\includegraphics{three-circuit}
\caption{The two possible configurations of three elements with one series
interconnection and one parallel interconnection.}%
\label{fig:three-circuit}
\end{figure}
This naive procedure has poor complexity: for every forward/backward
step for $M_1 + (M_2 + M_3)^{-1}$, an \emph{entire} fixed point iteration has to
be computed for (an offset version of) $M_2 + M_3$. In this section, we propose an
alternative procedure. Rather than apply a forward step for the relation $(M_2 +
M_3)^{-1}$, we simply apply a \emph{single step} of the fixed point iteration needed
to compute this forward step, using the forward/backward algorithm. Assume, without
loss of generality, that $d(M_1) = i \to v$ and $d(M_2) = d(M_3) = v \to i$ (the
configuration shown on the left of Figure~\ref{fig:three-circuit}). Suppose that
$v^\star \in (M_1 + (M_2 + M_3)^{-1})(i)$. Assume that $M_3$, $\res_{\alpha_1 M_2}$
and $\res_{\alpha_2 M_1}$ are single-valued. We then have:
\begin{IEEEeqnarray}{rCl}
v^\star &\in& v + M_1(i)\label{eq:i_step}\\
v &\in& (M_2 + M_3)^{-1}(i),\label{eq:v_step}
\end{IEEEeqnarray}
where $v$ is the voltage over $M_2$, illustrated on the left of
Figure~\ref{fig:three-circuit}. Equation~\eqref{eq:i_step} gives
\begin{IEEEeqnarray*}{rCl}
i + \alpha_2 M_1(i) &\ni& i - \alpha_2 v + \alpha_2 v^\star\\
i &=& (I + \alpha_2 M_1)^{-1}(i - \alpha_2 v + \alpha_2 v^\star)\\
i &=& \res_{\alpha_2 M_1}(i - \alpha_2 v + \alpha_2 v^\star).
\end{IEEEeqnarray*}
Equation~\eqref{eq:v_step} gives
\begin{IEEEeqnarray*}{rCl}
i &\in& (M_2 + M_3)(v)\\
v + \alpha_1 M_2(v) &\ni& v - \alpha_1 M_3 (v) + \alpha_1 i\\
v &=& (I + \alpha_1 M_2)^{-1}(v - \alpha_1 M_3(v) + \alpha_1 i)\\
v &=& \res_{\alpha_1 M_2}(v - \alpha_1 M_3(v) + \alpha_1 i).
\end{IEEEeqnarray*}
This shows that a fixed point of the iteration
\begin{IEEEeqnarray*}{rCl}
v^{k + 1} &=& \res_{\alpha_1 M_2} (v^k - \alpha_1 M_3(v^k) + \alpha_1 i^k)\\
i^{k + 1} &=& \res_{\alpha_2 M_1} (i^k - \alpha_2 v^{k+1} + \alpha_2 v^\star)
\end{IEEEeqnarray*}
is a solution to our original problem.
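The two-variable iteration above can be sketched numerically in Python. Scalar linear elements stand in for the one-ports, and the step sizes are arbitrary illustrative choices:

```python
# Sketch: the nested two-variable iteration for v* in (M1 + (M2 + M3)^{-1})(i),
# with scalar linear elements standing in for the one-ports (illustrative values).
def nested_three(res_M1, res_M2, M3, v_star, a1, a2, iters=500):
    v, i = 0.0, 0.0
    for _ in range(iters):
        v = res_M2(v - a1 * M3(v) + a1 * i)
        i = res_M1(i - a2 * v + a2 * v_star)
    return i, v

# M1(i) = i (a 1-ohm resistor), M2(v) = v and M3(v) = v (1-siemens conductances):
res_M1 = lambda y: y / (1.0 + 0.25)   # resolvent of M1 with a2 = 0.25
res_M2 = lambda y: y / (1.0 + 0.25)   # resolvent of M2 with a1 = 0.25
M3 = lambda v: v
i, v = nested_three(res_M1, res_M2, M3, v_star=3.0, a1=0.25, a2=0.25)
print(round(i, 6), round(v, 6))       # -> 2.0 1.0
```

At the fixed point, $3 = v + i$ and $i = 2v$, giving $v = 1$ and $i = 2$, which the iteration recovers.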
In the next section, we generalize this algorithm to an arbitrary series/parallel
monotone one-port, and in Theorem~\ref{thm:nested_convergence}, we give a general
condition under which the algorithm is guaranteed to converge to such a fixed point.
\subsection{A nested splitting algorithm for arbitrary series/parallel
circuits}\label{sec:nested}
In this section, we introduce a new splitting algorithm, the \emph{nested
forward/backward algorithm}, which generalizes the algorithm described in the
previous section to monotone one-ports with
arbitrary series and parallel interconnections, which have the general form shown in
Figure~\ref{fig:nested_algo} (allowing elements to be open circuits, short circuits,
or whole subcircuits). We assume for simplicity that the relations $G_j$ and $R_j$
are single-valued, although the extension to multi-valued relations
is straightforward.
\begin{figure}[hb]
\centering
\includegraphics[width=\linewidth]{nested_algo.pdf}
\caption{Circuit structure with nested series and parallel interconnections.
$R_n$ represents a one-port whose $i-v$ relation is known, $G_n$ represents a
one-port whose $v-i$ relation is known.}%
\label{fig:nested_algo}
\end{figure}
The $v-i$ relation of the circuit in Figure~\ref{fig:nested_algo} is given by
\begin{IEEEeqnarray}{rCl}
i_n &=& (R_n + (G_n + (\ldots + (R_1 + R_0)^{-1}\ldots
)^{-1})^{-1})^{-1}(v_n).\label{eq:nested_relation}
\end{IEEEeqnarray}
If each inversion is solved using a fixed point iteration, the number of fixed points
that must be computed scales with order $\mathcal{O}(m^n)$, where $n$ is the number
of inverses in Equation~\eqref{eq:nested_relation}, and $m$ is the number of steps
needed to compute each inverse. Following the argument of the
previous section, the nested forward/backward
algorithm, given in Algorithm~\ref{alg:nested}, solves equations of the form \eqref{eq:nested_relation} by
replacing inverse operators with a single step of the forward/backward iteration
needed to compute them. In this way, every inversion is computed simultaneously,
using a single fixed point algorithm.
\algdef{SE}[DOWHILE]{Do}{DoWhile}{\algorithmicdo}[1]{\algorithmicwhile\ #1}%
\begin{algorithm}
\caption{Nested Forward/Backward Algorithm}\label{alg:nested}
\begin{algorithmic}[1]
\State \textbf{Data:} Step sizes $\alpha_j$, $j = 1, \ldots, 2n-1$, external signal
$v_n$, convergence tolerance $\epsilon$.
\For{$j = 1, \ldots, n$}
\State Initialize $v^1_{j-1}$, $i^1_j$.
\EndFor
\State $k = 1$
\Do
\State $i_1^{k+1} = \res_{\alpha_1 R_1}(i_1^k - \alpha_1 R_0(i_1^k) +
\alpha_1 v_1^k)$
\label{alg:i_1_update}
\For{$j = 2, \ldots, n$}
\State $v_{j-1}^{k + 1} = \res_{\alpha_{2j-2} G_{j}}
(v_{j-1}^k - \alpha_{2j-2}i_{j-1}^{k+1} + \alpha_{2j-2}i_j^k)$
\label{alg:v_update}
\State $i_{j}^{k + 1} = \res_{\alpha_{2j-1} R_{j}}
(i_{j}^k - \alpha_{2j-1}v_{j-1}^{k+1} + \alpha_{2j-1}v_j^k)$
\label{alg:i_update}
\EndFor
\State $k = k+1$.
\DoWhile{$\max_j (|v_j^{k + 1} - v^k_j|, |i_j^{k + 1} -
i^k_j|) > \epsilon$}
\end{algorithmic}
\end{algorithm}
\begin{theorem}\label{thm:nested_convergence}
Algorithm~\ref{alg:nested} converges to a solution of
Equation~\eqref{eq:nested_relation} as $k \to \infty$ if $R_0$ is coercive and Lipschitz, all the
$R_j, G_j$ are monotone for $j = 1, \ldots, n$, and the eigenvalues of
$\mathcal{A}$ all lie within the unit circle, where $\mathcal{A}$ is defined as the matrix with columns
\begin{IEEEeqnarray*}{l}\label{eq:nested_matrix}
\begin{pmatrix}
\beta_1 \gamma_1 \\
\alpha_2 \beta_1 \gamma_2 \gamma_1 \\
\alpha_3 \alpha_2 \beta_1\gamma_3 \gamma_2 \gamma_1 \\
\alpha_4 \alpha_3 \alpha_2 \beta_1 \gamma_4\gamma_3\gamma_2\gamma_1\\ \vdots
\end{pmatrix},\;
\begin{pmatrix}
\gamma_1 \alpha_1\\
\gamma_2(1 +\gamma_1 \alpha_1 \alpha_2) \\
\alpha_3\gamma_2\gamma_3(1 +\gamma_1\alpha_1 \alpha_2) \\
\alpha_4 \alpha_3 \gamma_4\gamma_3\gamma_2(1 + \gamma_1\alpha_1\alpha_2)\\ \vdots
\end{pmatrix},\\
\begin{pmatrix}
0 \\
\gamma_2 \alpha_2 \\
\gamma_3(1 + \gamma_2 \alpha_2 \alpha_3) \\
\alpha_4\gamma_4 \gamma_3(1 + \gamma_2\alpha_2 \alpha_3)\\ \vdots
\end{pmatrix},\;
\begin{pmatrix}
0\\
0\\
\gamma_3 \alpha_3 \\
\gamma_4(1 + \gamma_3\alpha_3 \alpha_4)\\ \vdots
\end{pmatrix},
\end{IEEEeqnarray*}
and so on, where $\gamma_{2j-2}$ is a Lipschitz constant of
$\res_{\alpha_{2j-2} G_j}$ for $j = 2, \ldots, n$, $\gamma_{2j-1}$ is a Lipschitz constant of
$\res_{\alpha_{2j-1} R_j}$ for $j = 1, \ldots, n$, and $\beta_1$ is a Lipschitz constant of the
operator $(I - \alpha_1 R_0)$.
\end{theorem}
The constants $\alpha_j$ may be chosen to tune the convergence rate of
the algorithm. The constants $\beta_1$ and $\gamma_j$ must be Lipschitz constants
for the relevant operators. Coercivity and Lipschitz continuity of $R_0$ mean that
$\alpha_1$ can be chosen so that $0 < \beta_1 < 1$ \autocite[p. 39]{Ryu2021a}.
Monotonicity of $R_j$ and $G_j$ for all $j$ implies that
all resolvents used in the algorithm are nonexpansive, so the $\gamma_j$ can
always be set to $1$, and this gives the simplest test for convergence. A
less conservative test is given by setting the $\gamma_j$ to their minimum
possible values.
\begin{proof}[Proof of Theorem~\ref{thm:nested_convergence}]
We begin by showing that a fixed point of the iteration in
Algorithm~\ref{alg:nested} is a solution to
Equation~\eqref{eq:nested_relation}.
Indeed, substituting $v_j^{k+1} = v_j^k$ and $i_j^{k+1} = i_j^k$ into
lines~\ref{alg:i_1_update}, \ref{alg:v_update} and~\ref{alg:i_update} of
Algorithm~\ref{alg:nested} gives
\begin{IEEEeqnarray*}{rCl}
v_1 &=& R_1(i_1) + R_0(i_1)\\
i_j &=& G_j(v_{j-1}) + i_{j-1}\\
v_j &=& R_j(i_j) + v_{j-1},
\end{IEEEeqnarray*}
from which we obtain
\begin{IEEEeqnarray*}{rCl}
i_1 &=& (R_1 + R_0)^{-1}(v_1)\\
i_2 &=& (R_2 + (G_2 + (R_1 + R_0)^{-1})^{-1})^{-1} (v_2),
\end{IEEEeqnarray*}
and so on, to arrive at Equation~\eqref{eq:nested_relation}, as required.
We now show that Algorithm~\ref{alg:nested} converges to a fixed point under
the stated conditions. We simplify notation by defining $u_j = i_j$, $j$
odd, and $u_j = v_j$, $j$ even.
Let $u^{k}$ and $w^{k}$ be two sequences
of iterates generated by Algorithm~\ref{alg:nested}, with the same input
$u_n^k = w_n^k = v^\star$, and denote $u_j^k -
w_j^k$ by $\Delta
u_j^{k}$. It then follows from lines~\ref{alg:i_1_update}, \ref{alg:v_update} and~\ref{alg:i_update} of
Algorithm~\ref{alg:nested} that, for $j = 1, \ldots, 2n-1$,
\begin{IEEEeqnarray*}{rCl}
\norm{\Delta u_1^{k+1}} \leq \gamma_1 \norm{\Delta u_1^k -\alpha_1
\Delta R_0(u_1^k) + \alpha_1 \Delta u_2^k}\\
\norm{\Delta u_j^{k+1}} \leq \gamma_j \norm{\Delta u_j^k - \alpha_j
\Delta u_{j-1}^{k+1} + \alpha_j \Delta u_{j+1}^k},
\end{IEEEeqnarray*}
from which it follows, via the triangle inequality, that
\begin{IEEEeqnarray*}{rCl}
\norm{\Delta u_1^{k+1}} \leq \gamma_1 \beta_1 \norm{\Delta u_1^k} +
\gamma_1 \alpha_1 \norm{\Delta u_2^k}\\
\norm{\Delta u_j^{k+1}} \leq \gamma_j \norm{\Delta u_j^k} + \gamma_j \alpha_j
\norm{\Delta u_{j-1}^{k+1}} + \gamma_j \alpha_j \norm{\Delta u_{j+1}^k},
\end{IEEEeqnarray*}
where $\Delta u_n^k = 0$ for all $k$. Let $n(\Delta u^k)$ denote the vector
$(\norm{\Delta u_1^{k}}, \norm{\Delta u_2^{k}}, \norm{\Delta u_3^{k}},
\norm{\Delta u_4^{k}}, \ldots)\tran$. It follows that
\begin{IEEEeqnarray*}{rCl}
n(\Delta u^{k+1}) &\leq&
\mathcal{A} n(\Delta u^k)\\
&\leq& \mathcal{A}^{k} n(\Delta u^1),
\end{IEEEeqnarray*}
where $\mathcal{A}$ is the matrix given in the statement of the theorem. It
follows from the nonnegativity of $n(\Delta u^{k+1})$ (or from the
elementwise nonnegativity of $\mathcal{A}$) that $\mathcal{A}^{k}
n(\Delta u^1)$ is elementwise nonnegative for all $k$.
We then have $0 \leq n(\Delta u^{k+1}) \leq z^{k+1}$, where $z^{k+1}$ is the
solution to the difference equation $z^{k+1} = \mathcal{A}z^k$ with initial
condition $n(\Delta u^1)$. Since the eigenvalues of $\mathcal{A}$ are within
the unit circle, it is a standard result of linear systems theory
that there exist a norm $\norm{\cdot}_P$ and rate $0 < \lambda < 1$ such that
$\norm{z^{k+1}}_P \leq \lambda \norm{z^k}_P$. It follows that the sequence $n(\Delta u^k)$ converges to the zero vector
in the norm $\norm{\cdot}_P$ at least as fast as the sequence $z^k$.
It then follows from the Banach fixed point theorem that each $u_j^k$ converges to a limit $u^\star_j$ as $k\to \infty$, which completes the proof.
\end{proof}
\section{RLC circuits}
\label{sec:RLC}
Here, we consider one-port circuits formed by the series and parallel interconnection
of resistors, capacitors and inductors. This is the class of circuits considered in
the conference version of this paper \autocite{Chaffey2021}.
A resistor is a relation $R$ on $\R$, the \emph{device law}, between current and voltage:
\begin{IEEEeqnarray*}{rrCl}
& R = \left\{(i, v) \in \R \times \R\; |\; v \in R(i)\right\}\\
\text{or } & R = \left\{(v, i) \in \R \times \R\; |\; i \in G(v)\right\}.
\end{IEEEeqnarray*}
A resistor defines a 1-port relation on $L_{2, T}$ by applying
the relation $R$ at each time:
\begin{IEEEeqnarray*}{rCl}
S = \left\{(i, v) \in L_{2, T} \times L_{2, T} \; |\; (i(t), v(t)) \in R \text{ for all } t \right\}.
\end{IEEEeqnarray*}
A capacitor is a relation $C$ on $L_{2, T}$ between the integral of current and
voltage, defined by a device law $C(\cdot):\R \to \R$:
\begin{IEEEeqnarray*}{rrCl}
& C = \left\{(i, v) \in L_{2, T} \times L_{2, T}\; |\;i \in \dv{t} C(v)\right\}\\
\end{IEEEeqnarray*}
An inductor is given by a relation $L$ on $L_{2, T}$ between the integral of voltage and current, defined by a device law $L(\cdot):\R\to \R$:
\begin{IEEEeqnarray*}{rrCl}
& L = \left\{(i, v) \in L_{2, T} \times L_{2, T}\; |\; v \in \dv{t} L(i)\right\}\\
\end{IEEEeqnarray*}
The following proposition shows that resistors map $T$-periodic inputs
to $T$-periodic outputs, capacitors map $T$-periodic voltages to $T$-periodic
currents, and inductors map $T$-periodic currents to $T$-periodic voltages.
\begin{proposition}\label{thm:periodic_elements}
Memoryless relations and the derivative map $T$-periodic inputs to
$T$-periodic outputs.
\end{proposition}
\begin{proof}
Let $f$ be a memoryless relation, that is, a relation
between $u$ and $y$ such that $y(t) \in f(u(t))$. Then $y(t + T) \in f(u(t
+ T)) = f(u(t)) \ni y(t)$.
The property also holds for the derivative:
\begin{IEEEeqnarray*}{+rCl+x*}
\td{u(t)}{t} &=& \lim_{h \rightarrow 0} \frac{u(t + h) - u(t)}{h}\\
&=& \lim_{h \rightarrow 0} \frac{u(t + T + h) - u(t + T)}{h}\\
&=& \td{u(t + T)}{t}.&\qedhere
\end{IEEEeqnarray*}
\end{proof}
The following proposition characterizes the monotonicity of resistors on $L_{2, T}$ in terms of their device laws.
\begin{proposition}
\label{lem:monotone-r}
A resistor is monotone on $L_{2, T}$ if and only if its device law defines a monotone relation
on $\R$ between $i(t)$ and $v(t)$ for all $t$.
\end{proposition}
\begin{proof}
\emph{If:} By monotonicity of the device law on $\R$, we have
\begin{IEEEeqnarray*}{rCl}
(i_1(t) - i_2(t))(v_1(t) - v_2(t)) \geq 0 \text{ for all } t,
\end{IEEEeqnarray*}
from which it follows that
\begin{IEEEeqnarray*}{rCl}
\bra{i_1 - i_2}\ket{v_1 - v_2} &=& \int_{0}^{T} (i_1(t) -
i_2(t))(v_1(t) - v_2(t)) \dd{t}\\
&\geq& 0.
\end{IEEEeqnarray*}
\emph{Only if:} Suppose, for contradiction, that the device law is not monotone
on $\R$, that is, there exist $\iota_1, \iota_2 \in \R$ such that
\begin{IEEEeqnarray*}{rCl}
(\iota_1 - \iota_2)(R(\iota_1) - R(\iota_2)) < 0.
\end{IEEEeqnarray*}
Taking the constant signals $i_1(t) = \iota_1$, $i_2(t) = \iota_2$ on $L_{2,
T}$ shows that the resistor is not monotone on $L_{2, T}$.
\end{proof}
A natural question is whether the same can be said for inductors and capacitors: are
these devices monotone if their device laws $C$ and $L$ are monotone? A striking result of
\textcite{Kulkarni2001} is that this is true if and only if the device laws are
\emph{linear}.
\begin{proposition}
\label{lem:monotone-l-c}
Capacitors and inductors with monotone device laws on $\R$ are monotone on
$L_{2, T}$ for all $T \geq 0$ if and only if their device laws are linear.
\end{proposition}
\begin{proof}
The result is given by \autocite[Lemma~A.2]{Kulkarni2001}, noting that the
signals used in their proof (Equation~A.4) are truncated square waves, which
are signals on $L_{2, T}$ for $T$ equal to the length of the truncation.
\end{proof}
We now collect some results which show that, under mild conditions, series/parallel RLC circuits
define operators on $L_{2, T}$.
\begin{proposition}\label{thm:periodic_series_parallel}
A series (resp. parallel) interconnection of $n$ one-ports which map $T$-periodic
currents (voltages) to
$T$-periodic voltages (currents) also maps $T$-periodic currents (voltages)
to $T$-periodic
voltages (currents).
\end{proposition}
\begin{proof}
Periodicity is preserved under summation of signals, and therefore
preserved by Kirchhoff's laws. Indeed, if $y(t) = u_1(t) + u_2(t)$, and $u_1$
and $u_2$ are both $T$-periodic, then $y(t + T) = u_1(t+T) + u_2(t+T) =
u_1(t) + u_2(t) = y(t)$.
\end{proof}
Next, we show that one-port circuits which obey simple conditions on their
interconnections map periodic inputs to periodic outputs.
Other classes of systems with this property include contractive state space systems
\autocite{Sontag2010} and approximately finite memory input/output maps
\autocite{Sandberg1992}.
\begin{theorem}\label{thm:periodic}
Let $M$ be the relation on $L_{2, T}$, from either $v$ to $i$ or $i$ to $v$, of a
1-port constructed from the series and parallel interconnection of $n$
constituent one-ports $M_i$, such that the
construction obeys the following conditions
\begin{enumerate}
\item $M_i:L_{2, T} \to L_{2, T}$ for all $i$;\label{condition:dom}
\item any one-port which must be inverted during the construction is
coercive and Lipschitz.\label{condition:resistor}
\end{enumerate}
Then $M$ maps any input in $L_{2, T}$ to a unique output in
$L_{2, T}$.
\end{theorem}
\begin{proof}
By assumption, each of the
relations $M_i$ maps $T$-periodic inputs to $T$-periodic outputs (we denote
this property by PIPO for the remainder of this proof). We show that
constructing a circuit under the given conditions preserves this property.
This amounts to showing that the PIPO property is preserved under summation
and inversion. We have already observed that it is preserved under summation
in Proposition~\ref{thm:periodic_series_parallel}. It remains to show
that inversion preserves the PIPO property if the one-port to be inverted is
coercive and Lipschitz. Let $F$ be $\mu$-coercive and $\lambda$-Lipschitz.
Define the incremental operator $\Delta F(u) \coloneqq
F(u) - y^\star$ on $L_{2, T}$. $\Delta
F$ has the same coercivity and Lipschitz properties as $F$.
We show that the operator $I - \gamma \Delta F$ is a contraction mapping on $L_{2, T}$
for small enough $\gamma > 0$. Indeed,
\begin{IEEEeqnarray*}{l}
\norm{(I - \gamma \Delta F)(x) - (I - \gamma\Delta F)y}^2 \\= \norm{x - y}^2 -
2\gamma \bra{x - y}\ket{\Delta Fx - \Delta Fy} + \gamma^2\norm{\Delta Fx -
\Delta Fy}^2\\
\leq \left(1 - 2\gamma \mu + \gamma^2\lambda^2 \right) \norm{x - y}^2,
\end{IEEEeqnarray*}
where the inequality follows from the definitions of coercive and
Lipschitz operators. Solving $0 < \left(1 - 2\gamma \mu +
\gamma^2\lambda^2 \right) < 1$ gives an allowable range of $\gamma \in (0,
2\mu/\lambda^2)$ for $I - \gamma \Delta
F$ to be a contraction mapping on $L_{2, T}$. It then follows from the Banach fixed point
theorem that $I - \gamma \Delta F$ has a unique fixed point $u^\star \in L_{2, T}$ \autocite[\S
2.4.2]{Ryu2021a}, \autocite{Banach1922}:
\begin{IEEEeqnarray*}{lrCl}
&u^\star &=& u^\star - \gamma \Delta F(u^\star)\\
\iff &\Delta F(u^\star) &=& 0\\
\iff & F(u^\star) &=& y^\star.
\end{IEEEeqnarray*}
This shows that
$F$ is invertible on $L_{2, T}$.
\end{proof}
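The contraction used in this proof is itself a practical way to invert a coercive, Lipschitz one-port. A scalar Python sketch (the map $F$, its constants, and the step size are illustrative choices):

```python
# Sketch: inverting a coercive, Lipschitz map F by the contraction
# u <- u - gamma*(F(u) - y_star), as in the proof of the theorem.
import math

def invert(F, y_star, gamma, u0=0.0, iters=300):
    """Fixed-point iteration for F(u) = y_star."""
    u = u0
    for _ in range(iters):
        u = u - gamma * (F(u) - y_star)
    return u

F = lambda u: 2.0 * u + math.sin(u)   # 1-coercive and 3-Lipschitz on R
# gamma must lie in (0, 2*mu/lambda^2) = (0, 2/9); take gamma = 0.2.
u = invert(F, y_star=2.0 + math.sin(1.0), gamma=0.2)
print(round(u, 6))                    # solves 2u + sin(u) = 2 + sin(1), i.e. u = 1
```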
When applied to RLC circuits, condition~\ref{condition:dom} of Theorem~\ref{thm:periodic}
requires capacitors to be connected in parallel and inductors to be connected in series.
The definitions of inductors and capacitors above are time-invariant.
\textcite{Georgiou2020} define time-varying, or adjustable, capacitors and inductors,
termed the \emph{varcapacitor} and \emph{varinductor}:
\begin{IEEEeqnarray*}{rCll}
i(t) &=& c(t) \dv{t}(c(t) v(t))&\qquad\text{varcapacitor}\\
v(t) &=& l(t) \dv{t}(l(t)i(t))&\qquad\text{varinductor}.
\end{IEEEeqnarray*}
If $i(t)$, $v(t)$, $l(t)$ and $c(t)$ are $T$-periodic, these devices are monotone on
$L_{2, T}$.
\begin{proposition}
Varcapacitors with $T$-periodic $c(t)$ and varinductors with $T$-periodic
$l(t)$ are monotone on $L_{2, T}$.
\end{proposition}
\begin{proof}
For a varcapacitor, we have
\begin{IEEEeqnarray*}{cl}
&\int_0^T (v_1(t) - v_2(t))(i_1(t) - i_2(t)) \dd{t} \\
=& \int_0^T \dv{t} \frac{1}{2} c^2(t) (v_1(t) - v_2(t))^2 \dd{t}\\
=& \frac{1}{2}c^2(T)(v_1(T) - v_2(T))^2 - \frac{1}{2}c^2(0)(v_1(0) - v_2(0))^2\\
=& 0.
\end{IEEEeqnarray*}
Likewise, for a varinductor, we have
\begin{IEEEeqnarray*}{+cl+x*}
&\int_0^T (v_1(t) - v_2(t))(i_1(t) - i_2(t)) \dd{t} \\
=& \int_0^T \dv{t} \frac{1}{2} l^2(t) (i_1(t) - i_2(t))^2 \dd{t}\\
=& \frac{1}{2}l^2(T)(i_1(T) - i_2(T))^2 - \frac{1}{2}l^2(0)(i_1(0) - i_2(0))^2\\
=& 0.&\qedhere
\end{IEEEeqnarray*}
\end{proof}
We now give two detailed examples of the steady-state analysis of an RLC circuit.
In order to obtain relations on $l_{2, T}$,
the derivative is discretized to give an operator $D$. Any discretization may be
used. For the examples in this paper, we use the backwards finite difference, given by the relation
\begin{IEEEeqnarray*}{rCl}
D = \bigg\{ (u, y) \; \bigg| \; y = T D_T u \bigg\},
\end{IEEEeqnarray*}
where $D_T$ is the $T \times T$ matrix
\begin{IEEEeqnarray*}{rCl}
D_T &=&
\begin{bmatrix}
1 & 0 & \ldots & 0 & -1\\
-1 & 1 & \ldots & 0 & 0\\
0 & -1 & \ldots & 0 & 0\\
\vdots & \vdots & \ddots & \vdots &
\vdots \\
0 & 0 & \ldots & -1 & 1
\end{bmatrix}.
\end{IEEEeqnarray*}
Note that $D$ is a maximal monotone relation, as $D_T + D_T\tran \succeq 0$
\autocite[$\S2.2.3$]{Ryu2021a}. To obtain an accurate discrete model, a sufficient number of time steps must be used.
\begin{example}\label{ex:envelope_detector}
An envelope detector is a simple nonlinear circuit consisting of a diode in series
with an LTI RC filter
(Figure~\ref{fig:envelope_detector}). It is used to demodulate AM radio signals.
\begin{figure}[hb]
\centering
\includegraphics{ex1_clean}
\caption{An envelope detector, configured as a 1-port.}%
\label{fig:envelope_detector}
\end{figure}
We model the
diode using the Shockley equation:
\begin{equation*}
v = R_{\text{diode}}(i) \coloneqq nV_T \ln\left(\frac{i}{I_s} + 1\right),
\end{equation*}
where $I_s$ is the reverse bias saturation current, $V_T$ is the thermal voltage and $n$
is the ideality factor. The $i-v$ graph of the diode relation is strictly increasing
with no endpoints; the diode relation is therefore maximal monotone.
The RC filter is itself a parallel interconnection of a resistor and capacitor, which maps voltage to current:
\begin{IEEEeqnarray*}{rCl}
G_{RC} = C D + \frac{1}{R}I.
\end{IEEEeqnarray*}
As $G_{RC}$ is linear, it has a Lipschitz constant $L$ equal to its
largest singular value and is coercive with constant $m = \lambda_{\min}((G_{RC} +
G_{RC}\tran)/2)$ \autocite{Ryu2021a}.
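These constants are easy to compute for the discretized operator. The following Python sketch is ours, assuming the period is normalized to $1$ so that the discrete derivative is $N D_N$ for $N$ time steps, with the example's values $R = 1$, $C = 1$:

```python
import numpy as np

R, C, N = 1.0, 1.0, 256
D_N = np.eye(N) - np.roll(np.eye(N), -1, axis=1)  # backward difference
G_RC = C * N * D_N + (1.0 / R) * np.eye(N)        # discretized C*D + (1/R)*I

L = np.linalg.svd(G_RC, compute_uv=False).max()      # Lipschitz constant
m = np.linalg.eigvalsh((G_RC + G_RC.T) / 2.0).min()  # coercivity constant
```

For this circulant discretization the symmetric part of $D_N$ is only positive semidefinite, so $m = 1/R$, while $L$ grows with the number of time steps through the derivative term.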
The incremental voltage $\Delta v = v - v^\star$ is given as a relation of $i$ by
\begin{IEEEeqnarray*}{rCl}
\Delta v = R_\text{diode}(i) + G_{RC}^{-1}(i) - v^\star.
\end{IEEEeqnarray*}
Given an input voltage $v^\star$, we solve for the corresponding current $i^\star$ using the Douglas-Rachford splitting. This
involves applying both the resolvents $\res_{RC}$ and $\res_{\text{diode}}$.
The resolvent $\res_{RC}$ is given by $(I + \lambda G_{RC}^{-1})^{-1}$. This matrix
is pre-computed and stored in memory.
The resolvent of the diode, $\res_{\text{diode}}$, is defined via its inverse, $\res_{\text{diode}}^{-1}(x) = x + \lambda
R_\text{diode}(x) - \lambda v^\star$. There
is no analytic expression for this operator. Rather, the resolvent is computed
numerically using the guarded Newton algorithm \cite{Parikh2013}.
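To make the pointwise solve concrete, here is a minimal Python sketch of ours for evaluating the diode resolvent, i.e., solving $x + \lambda R_{\text{diode}}(x) = w$ for each sample. We substitute a bracketing bisection iteration for the guarded Newton method cited above (and the shift by $\lambda v^\star$ is absorbed into $w$), with the example's parameter values:

```python
import numpy as np

n_id, V_T, I_s, lam = 1.0, 0.02585, 1e-14, 0.01  # example parameters

def diode_v(i):
    # Shockley relation v = n*V_T*ln(i/I_s + 1), defined for i > -I_s
    return n_id * V_T * np.log(i / I_s + 1.0)

def diode_resolvent(w):
    """Solve i + lam*diode_v(i) = w by safeguarded bisection; the scalar
    map is strictly increasing, so the root is unique."""
    f = lambda i: i + lam * diode_v(i) - w
    lo = -I_s * (1.0 - 1e-12)  # just inside the domain of the relation
    hi = abs(w) + 1.0
    while f(hi) < 0.0:         # expand bracket until a sign change
        hi *= 2.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) < 0.0 else (lo, mid)
    return 0.5 * (lo + hi)
```

Since the diode relation is monotone, its resolvent is (firmly) nonexpansive, which can also be checked numerically.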
Figure~\ref{fig:envelope_detector_inverse} shows the results of performing this scheme with
an input of $v^\star = \sin(2\pi t)$ V, with $R = 1\, \Omega$, $C = 1$ F, $I_s =
1\times10^{-14}$ A, ideality factor $n = 1$ and $V_T = 0.02585$ V. The number of time steps used is
$500$.
\begin{figure}[h]
\centering
\begin{tikzpicture}
\begin{groupplot}
[
group style={
group size=1 by 2,
vertical sep = 0.5cm
},
width=0.5\textwidth,
height=4cm,
cycle list name=colors,
grid=both,
grid style={line width=.1pt, draw=Gray!20},
axis x line=bottom,
axis y line=left
]
\nextgroupplot[ylabel={\footnotesize Voltage (V)}, xmin=0, xmax=1]
\addplot[CornflowerBlue] table [x = t, y = v, col sep = comma, mark = none]{"./envelope_detector_inverse.csv"};
\addlegendentry{\footnotesize $v$ - input};
\nextgroupplot[xlabel={\footnotesize Time (s)}, ylabel={\footnotesize Current (A)}, xmin=0, xmax=1]
\addplot[BurntOrange] table [x = t, y = i, col sep = comma, mark = none]{"./envelope_detector_inverse.csv"};
\addlegendentry{\footnotesize $i$ - output};
\end{groupplot}
\end{tikzpicture}
\caption{Input voltage $v^\star$ and the resulting current $i$ for an
envelope detector. One period of a periodic input and output is shown. Circuit parameters are $R = 1\, \Omega$, $C = 1$ F, $I_s =
1\times10^{-14}$ A, ideality factor $n = 1$ and $V_T = 0.02585$ V.
Algorithmic parameters are $\alpha = 0.01$, $\epsilon = 1\times10^{-5}$ and $500$ time
steps.}%
\label{fig:envelope_detector_inverse}
\end{figure}
\end{example}
\begin{example}\label{ex:large_scale}
In this example, we analyze the large-scale circuit shown in
Figure~\ref{fig:large_scale}, which consists
of $n$ identical units, each consisting of a diode and LTI RC filter. The diode and
RC filter are modelled as in Example~\ref{ex:envelope_detector}.
\begin{figure*}[t]
\centering
\includegraphics{ex2_clean}
\caption{Circuit for Example~\ref{ex:large_scale}. The circuit consists of
$n$ repeated units, each consisting of a diode and LTI RC filter.}%
\label{fig:large_scale}
\end{figure*}
When viewed as an interconnection of one-ports, the circuit has a recursive
structure. Following the notation of Figure~\ref{fig:large_scale}, for $1 \leq m \leq n$, we have:
\begin{IEEEeqnarray*}{rCl}
v_{m} &=& R_m(i_m)\\
&=& R_{\text{diode}}(i_m) + G_{m-1}^{-1}(i_m),\\
i_{m} &=& G_{m-1}(v_{m - 1})\\
&=& G_{RC}(v_{m-1}) + R^{-1}_{m-1}(v_{m-1}).
\end{IEEEeqnarray*}
The base case is $G_1 = G_{RC}$. This circuit has the form of
Figure~\ref{fig:nested_algo}, with $R_0 = G_{RC}^{-1}$, $R_j = R_{\text{diode}}$ and
$G_j = G_{RC}$ for all $j>1$. The circuit is solved using the nested
forward/backward algorithm introduced in Section~\ref{sec:nested}.
Figure~\ref{fig:large_scale_result} shows the results of performing this
scheme with $n = 100,000$ repeated units (a total of 300,000 components).
The input is $v^\star = 1 +\sin(2\pi t)$ V, with circuit parameters $R = 1\, \Omega$, $C = 1$ F, $I_s =
1\times10^{-14}$ A, ideality factor $n = 1$ and $V_T = 0.02585$ V. The number of time steps used is
$256$. With $n=100,000$ units, computation took 1937 s on a standard desktop
computer. With $n=10$ units, computation took an average of 243 ms, over 21 runs.
\begin{figure}[h]
\centering
\begin{tikzpicture}
\begin{groupplot}
[
group style={
group size=1 by 2,
vertical sep = 0.5cm
},
width=0.5\textwidth,
height=4cm,
cycle list name=colors,
grid=both,
grid style={line width=.1pt, draw=Gray!20},
axis x line=bottom,
axis y line=left
]
\nextgroupplot[ylabel={\footnotesize Voltage (V)}, xmin=0, xmax=1]
\addplot[CornflowerBlue] table [x = t, y = v, col sep =
comma, mark = none]{"./large_scale_100k.csv"};
\addlegendentry{\footnotesize $v_n$ - input};
\nextgroupplot[xlabel={\footnotesize Time (s)}, ylabel={\footnotesize Current (A)}, xmin=0, xmax=1]
\addplot[BurntOrange] table [x = t, y = i, col sep = comma,
mark = none]{"./large_scale_100k.csv"};
\addlegendentry{\footnotesize $i_n$ - output};
\end{groupplot}
\end{tikzpicture}
\caption{Input voltage $v_n$ and the resulting current $i_n$ for the
circuit of Figure~\ref{fig:large_scale}, with $n=100,000$. Circuit parameters
are $R = 1\, \Omega$, $C = 1$ F, $I_s =
1\times10^{-14}$ A, ideality factor $n = 1$ and $V_T = 0.02585$ V. Algorithm parameters are
$\alpha_j = 1.5$ for all $j$ and $\epsilon = 1\times10^{-4}$. One period of a periodic input and output is shown.}%
\label{fig:large_scale_result}
\end{figure}
\end{example}
\section{Memristive systems}
\label{sec:conductance}
In 1976, \textcite{Chua1976} introduced the class of
\emph{memristive systems}, described by state space models of the form
\begin{subequations}\label{eq:memristive}
\begin{IEEEeqnarray}{rCl}
\dot{x} &=& f(x, u, t)\\
y &=& g(x, u, t)u.
\end{IEEEeqnarray}
\end{subequations}
This model class describes systems which behave like resistors, in that they
cannot store energy and do not produce a phase shift, but, unlike resistors, do have a memory. This
work was motivated by systems such as the Hodgkin-Huxley neural membrane model
\autocite{Hodgkin1952}, thermistors and discharge tubes.
In this section, we
show that our methods may be applied to particular members of this class.
If $\dot{x} = f(x, u, t)$ is a contractive state space system and $u(t)$ is a
$T$-periodic input, there is a unique, globally asymptotically stable $T$-periodic
output $y(t)$ to the memristive system~\eqref{eq:memristive} \autocite{Lohmiller1998}. The
memristive system then defines an operator on $L_{2, T}$, mapping the $T$-periodic
input $u(t)$ to the $T$-periodic output $y(t)$.
To determine the monotonicity properties of memristive systems,
we use the Scaled Relative Graph (SRG).
The SRG of an operator is a region of the extended complex plane, from which the
incremental properties of the operator can be easily read. SRGs were recently
introduced by \textcite{Ryu2021} for the study of monotone operator methods in
optimization, and have been used for the study of systems in feedback by the authors
in references \autocite{Chaffey2021c, Chaffey2021d}.
The SRG of an operator
$R: \mathcal{H} \to \mathcal{H}$ is defined as follows. Given $u_1, u_2 \in
\mathcal{U} \subseteq \mathcal{H}$, $u_1 \neq u_2$, define the set of complex numbers $z_R(u_1, u_2)$ by
\begin{IEEEeqnarray*}{rCl}
z_R(u_1, u_2) \coloneqq &&\left\{\frac{\norm{y_1 - y_2}}{\norm{u_1 - u_2}} e^{\pm j\angle(u_1 -
u_2, y_1 - y_2)}\right.\\&&\bigg|\; y_1 \in R(u_1), y_2 \in R(u_2) \bigg\}.
\end{IEEEeqnarray*}
If $u_1 = u_2$ and there are corresponding
outputs $y_1 \neq y_2$, then
$z_R(u_1, u_2)$ is defined to be $\{\infty\}$. If $R$ is single-valued at $u_1$,
$z_R(u_1, u_1)$ is the empty set.
\begin{definition}
The \emph{Scaled Relative Graph} (SRG) of $R$ over $\mathcal{U} \subseteq \mathcal{H}$ is
\begin{IEEEeqnarray*}{rCl}
\srg[\mathcal{U}]{R} \coloneqq \bigcup_{u_1, u_2 \in\, \mathcal{U}} z_R(u_1, u_2).
\end{IEEEeqnarray*}
If $\mathcal{U} = \mathcal{H}$, we write $\srg{R} \coloneqq \srg[\mathcal{H}]{R}$.
\end{definition}
It follows from \autocite[Prop. 3.3, Thm. 3.5]{Ryu2021} that an operator is
$\mu$-monotone if and only if its SRG lies in the region $\{z \in \C\;|\; \Re(z) \geq
\mu\}$.
SRGs can be determined analytically for large classes of systems, including linear
operators, LTI systems, static nonlinearities and interconnections of these systems
\autocite{Chaffey2021c, Huang2020a, Pates2021, Ryu2021}.
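As a concrete illustration of the definition (a Python sketch of ours, using the elementwise monotone static nonlinearity $R(u) = \tanh(u)$ in place of a dynamical system), SRG points can be sampled by drawing input pairs and computing the gain and angle of the increments:

```python
import numpy as np

rng = np.random.default_rng(0)

def srg_point(R, u1, u2):
    """One SRG sample z_R(u1, u2) (positive-angle branch)."""
    du, dy = u1 - u2, R(u1) - R(u2)
    gain = np.linalg.norm(dy) / np.linalg.norm(du)
    cosang = np.dot(du, dy) / (np.linalg.norm(du) * np.linalg.norm(dy))
    return gain * np.exp(1j * np.arccos(np.clip(cosang, -1.0, 1.0)))

points = [srg_point(np.tanh, rng.normal(size=16), rng.normal(size=16))
          for _ in range(1000)]
# tanh is 0-monotone and 1-Lipschitz: Re(z) >= 0 and |z| <= 1 for all samples
```

This is exactly the kind of sampling used for the potassium conductance in the example below, where the nonlinearity is dynamic rather than static.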
\begin{example}\label{ex:conductance}
The Hodgkin-Huxley model represents a nerve axon membrane as a parallel
interconnection of active \emph{ion channels} with a capacitor
\autocite{Hodgkin1952}. Each ion channel is a time-varying conductance, which may be
modelled as a memristive system. In this example, we consider
the potassium conductance $i = G_\mathrm{K}(v)$, which is given by the equations
\begin{IEEEeqnarray*}{rCl}
i &=& \bar{g}_\mathrm{K} n^4(v - v_\mathrm{K})\\
\dv{n}{t} &=& \alpha_n (v)(1 - n) - \beta_n(v)n\\
\alpha_n(v) &=& \frac{0.01 (10 + v)}{\exp(1 + v/10) - 1}\\
\beta_n(v) &=& 0.125\exp(v/80).
\end{IEEEeqnarray*}
Following \textcite{Hodgkin1952a}, the constants $\bar{g}_\mathrm{K}$ and
$v_\mathrm{K}$ are set to $19$ m mho$/$cm$^2$ and $12$ mV, respectively.
The dynamics in $n$ are contractive \autocite[Prop. 1]{Burghi2021}, therefore the
potassium conductance defines an operator on $L_{2, T}$.
The analytical SRG of the potassium conductance is difficult to determine, but
we can test its monotonicity by sampling its SRG.
Figure~\ref{fig:K_srg} shows points in the SRG of the potassium conductance, computed
over signals of the form $u = \alpha \sin(\gamma t) + \delta$, for real parameters
$\alpha, \gamma, \delta$. This plot suggests that the potassium conductance is $(-0.002)$-monotone on $L_{2, T}$.
While we do not have a theoretical guarantee that this is the case, we can test
whether the potassium conductance behaves as if it is $(-0.002)$-monotone when
connected in a circuit.
\begin{figure}[hb]
\centering
\includegraphics{K_srg}
\caption{Sampling of the SRG of a potassium conductance.}
\label{fig:K_srg}
\end{figure}
We consider the parallel interconnection of the potassium current with an LTI resistor,
shown in Figure~\ref{fig:K_circuit}. The port relation of this circuit is given by
\begin{IEEEeqnarray*}{rCl}
i = R^{-1} v + G_\mathrm{K}(v).
\end{IEEEeqnarray*}
Given a periodic input current $i^\star$, we solve for the corresponding voltage using
the forward/backward algorithm. The algorithm
converges when $R \geq 500\, \Omega$, supporting the hypothesis that the potassium
conductance is $(-0.002)$-monotone. The Lissajous figure, or $i-v$ plot, is shown in
Figure~\ref{fig:K_lissajous} for an input current of $i(t) = \sin(2\pi t)$. The
potassium conductance exhibits the characteristic zero-crossing Lissajous figure of a
memristive system \autocite{Chua1976}.
\begin{figure}[hb]
\centering
\includegraphics{ex2}
\caption{Parallel interconnection of an LTI resistor and a potassium
conductance.}
\label{fig:K_circuit}
\end{figure}
\begin{figure}[hb]
\centering
\begin{tikzpicture}
\begin{axis}
[
no markers,
name = ax1,
width=0.4\textwidth,
height=0.35\textwidth,
ticklabel style={/pgf/number format/fixed},
ylabel={\footnotesize Current (A)},
xlabel={\footnotesize Voltage (V)},
cycle list name=colors,
grid=both,
grid style={line width=.1pt, draw=Gray!20},
axis x line=bottom,
axis y line=left
]
\addplot table [x=v, y=i, col sep = comma, mark =
none]{"./K_current.csv"};
\end{axis}
\end{tikzpicture}
\caption{Lissajous figure of a potassium conductance in parallel with a $500\;
\Omega$ resistor. The large signal magnitudes are not physically
realistic, but are chosen for illustrative purposes. The Lissajous figure
always passes through the origin, a fundamental characteristic of a memristive
system.}%
\label{fig:K_lissajous}
\end{figure}
\end{example}
\section{Connections with the Literature}
\label{sec:literature}
The class of systems which can be represented by a series and parallel interconnection
of maximal monotone resistors and LTI capacitors and inductors encompasses Lur'e
systems with a passive LTI system in the forward path and a maximal monotone relation
in the return path. Indeed, if the forward path has a transfer function $G(s)$ and
the feedback path is a relation $R$, the Lur'e system can be synthesized as the
series interconnection of a resistor with resistance relation $R$ and an LTI network
with impedance $G(s)^{-1}$ (which can be synthesized using the Bott-Duffin
construction \cite{Bott1949}). The $i-v$ relation on $L_{2,
T}$ of this 1-port is $i = (R + \bar G^{-1})^{-1}(v)$, where $\bar G$ is the relation on $L_{2,
T}$ corresponding to $G$.
Lur'e systems with passive linear part and a maximal monotone nonlinearity in the
feedback path have been a focus of research on nonsmooth dynamical systems; see, for
example, the survey by \textcite{Brogliato2020}. These systems may be modelled by
differential inclusions, linear complementarity systems or evolution variational
inequalities. A number of specialized time-stepping methods have been developed for
these classes of systems \autocite{Acary2008}.
The periodic response of such Lur'e systems is studied by Heemels \emph{et al.}
\cite{Heemels2017}, who give two algorithms for computing the output, given a periodic
input. Given a state space realization, they show that
the system can be represented as a maximal monotone differential inclusion, and that
backwards Euler discretization corresponds to computing the resolvent of this
differential inclusion at each time step. Their first algorithm involves iteratively
computing this resolvent forward in time. Their second algorithm combines the
computation of this resolvent over a period with a periodic boundary condition.
In comparison, the algorithm we present here iterates through
signal space, rather than forwards in time, and has several advantages. It is
independent of a state space realization or differential inclusion representation:
the relations used in computation represent the components of the system and retain
their physical meaning. This structure allows splitting algorithms to be applied to
separate computation for each system component.
Our method may be regarded as a signal space analog of the shooting method for
solving boundary value problems in the time domain (see, for example,
\autocite{Keller2018}). Given a transition mapping
$\phi(t, y_0): [0, T] \times \R^n \to \R^n$, which takes an initial condition $y_0$ to
the output value of a dynamical system at time $t$, and a boundary condition $\phi(T, y_0) = y_T$, the shooting
method solves for a compatible initial condition by finding a zero of $F(a) \coloneqq
\phi(T, a) - y_T$. It is interesting to note that if the dynamical system is order-preserving, or monotone in the sense of
\textcite{Smith1995} (that is, $a \leq b \implies \phi(t, a) \leq \phi(t, b)$ for all
$t \geq 0$), then $F$ is a monotone operator on $\R^n$.
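As a toy illustration of this correspondence, the following Python sketch (our own construction: a scalar order-preserving ODE, an explicit Euler transition map, and an arbitrary boundary value) finds a zero of the monotone shooting function $F$ by bisection:

```python
import math

def flow(a, T=1.0, steps=2000):
    """Transition map phi(T, a) for dy/dt = -y + sin(2*pi*t), explicit Euler."""
    y, h = a, T / steps
    for k in range(steps):
        y += h * (-y + math.sin(2.0 * math.pi * k * h))
    return y

y_T = 0.5                       # boundary value (arbitrary choice)
F = lambda a: flow(a) - y_T     # increasing in a: d(phi)/da = exp(-T) > 0

lo, hi = -10.0, 10.0            # F(lo) < 0 < F(hi), so bisection applies
for _ in range(100):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if F(mid) < 0.0 else (lo, mid)
a_star = 0.5 * (lo + hi)        # initial condition with phi(T, a_star) = y_T
```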
In the linear, time invariant case, the physical property of passivity allows
questions to be answered in a computationally tractable way, for example, a passive
storage function can be found by solving an LMI \autocite{Willems1972}. For
nonlinear passive systems, these computationally tractable methods no longer apply, in
general. For nonlinear systems with incremental properties, however, tractable
methods do exist. This is the fundamental result of contraction theory
\autocite{Lohmiller1998} and has been noted more recently in dissipativity analysis by
Verhoek, Koelewijn and T\'oth \autocite{Verhoek2020} and Forni, Sepulchre and van der
Schaft \cite{Forni2013c}. The approaches in these works differ from that
of this paper, however, in their reliance on differentiable state space models and
state-dependent linear matrix inequalities, rather than monotone operator methods.
\section{Conclusions}
\label{sec:conclusions}
We have applied monotone operator optimization methods to the problem of computing
the periodic output of a periodically forced, maximal monotone one-port circuit. Splitting
algorithms allow the computation to be separated in a way which mirrors the structure of
the system, and a new splitting algorithm has been introduced which is suited to
circuits with nested series/parallel interconnections. This method has been demonstrated on the classes of circuits built from maximal monotone
resistors and LTI capacitors and inductors, and memristive dynamic conductances such
as the neuronal conductances of the Hodgkin-Huxley model.
The mathematical property of monotonicity connects the physical property of energy
dissipation with a well-established algorithmic theory for computation, for systems
modelled as nonlinear operators. This mirrors the connection between energy
dissipation in LTI state space systems and computational methods for LMIs,
established by the theory of dissipativity \autocite{Willems1972}. Preliminary work
by the authors \autocite{Das2021} shows that the algorithmic methods proposed here
may be extended beyond the class of systems formed by the interconnection of monotone
elements, to those systems formed by the \emph{difference} of monotone elements.
This includes systems with self-sustaining oscillations.
\section{Acknowledgements}
The authors gratefully acknowledge many insightful discussions with Fulvio Forni,
whose suggestions greatly improved this manuscript.
\printbibliography
\begin{IEEEbiography}[{\includegraphics[width=1in,height=1.25in,clip,keepaspectratio]{chaffey.png}}]{Thomas Chaffey} (M17) received the B.Sc (advmath) degree in
mathematics and computer science in 2015 and the M.P.E degree in mechanical
engineering in 2018, from the University of Sydney, Australia. He is
currently pursuing the Ph.D. degree in engineering at the University of
Cambridge. His research interests are in the modelling, analysis and
control of nonlinear systems. He received the Best Student Paper Award at
the 2021 European Control Conference.
\end{IEEEbiography}
\begin{IEEEbiography}[{\includegraphics[width=1in,height=1.25in,clip,keepaspectratio]{sepulchre.jpeg}}]{Rodolphe Sepulchre} (M96,SM08,F10) received the engineering degree and the Ph.D. degree from the Université catholique de Louvain in 1990 and in 1994, respectively. He has been Professor of engineering at the University of Cambridge since 2013. His research interests are in nonlinear control and optimization, and more recently neuromorphic control. He co-authored the monographs ``Constructive Nonlinear Control'' (Springer-Verlag, 1997) and ``Optimization on Matrix Manifolds'' (Princeton University Press, 2008).
He is Editor-in-Chief of IEEE Control Systems. In 2008, he was awarded the IEEE Control Systems Society Antonio Ruberti Young Researcher Prize. He is a fellow of IEEE, IFAC, and SIAM. He was an IEEE CSS distinguished lecturer from 2010 to 2015. In 2013, he was elected to the Royal Academy of Belgium.
\end{IEEEbiography}
\end{document}
\subsubsection*{\bibname}}
\usepackage{graphicx}
\usepackage[pdftex, plainpages=false]{hyperref}
\usepackage{amsmath, amssymb}
\usepackage{float}
\usepackage{xcolor}
\usepackage{tikz}
\newcommand{\antti}[1]{{{\color{blue} [Antti: #1]}}}
\newcommand{\vitoria}[1]{{{\color{violet} [Vitoria: #1]}}}
\newcommand{\aapo}[1]{{{\color{magenta} [Aapo: #1]}}}
\newcommand{\perp \!\!\! \perp}{\perp \!\!\! \perp}
\newcommand{\mathbf{w}}{\mathbf{w}}
\newcommand{\mathbb{R}}{\mathbb{R}}
\newcommand{\mathbf{d}}{\mathbf{d}}
\renewcommand{\a}{\mathbf{a}}
\renewcommand{\b}{\mathbf{b}}
\newcommand{\mathbf{s}}{\mathbf{s}}
\renewcommand{\S}{\mathbf{S}}
\newcommand{\mathbf{e}}{\mathbf{e}}
\newcommand{\tilde{\mathbf{x}}}{\tilde{\mathbf{x}}}
\newcommand{\tilde{\tilde{\mathbf{x}}}}{\tilde{\tilde{\mathbf{x}}}}
\newcommand{\mathbf{c}}{\mathbf{c}}
\newcommand{\tilde{q}}{\tilde{q}}
\newcommand{\bar{q}}{\bar{q}}
\newcommand{\mathbf{x}}{\mathbf{x}}
\newcommand{\xi}{\xi}
\newcommand{\phi}{\phi}
\newcommand{{\boldsymbol{\xi}}}{{\boldsymbol{\xi}}}
\newcommand{\bar{\mathbf{w}}}{\bar{\mathbf{w}}}
\newcommand{\mathbf{\bar{u}}}{\mathbf{\bar{u}}}
\newcommand{\bar{u}}{\bar{u}}
\newcommand{\mathbf{z}}{\mathbf{z}}
\newcommand{\mathbf{y}}{\mathbf{y}}
\newcommand{\tilde{x}}{\tilde{x}}
\newcommand{\tilde{y}}{\tilde{y}}
\renewcommand{\ss}{\tilde{s}}
\newcommand{\mathbf{\xx}}{\mathbf{\tilde{x}}}
\newcommand{\mathbf{\yy}}{\mathbf{\tilde{y}}}
\newcommand{\mathbf{\ss}}{\mathbf{\ss}}
\newcommand{z}{z}
\newcommand{\mathbf{\sest}}{\mathbf{z}}
\newcommand{y}{y}
\newcommand{\mathbf{m}}{\mathbf{m}}
\newcommand{\mathbf{h}}{\mathbf{h}}
\newcommand{\mathbf{u}}{\mathbf{u}}
\newcommand{\mathbf{k}}{\mathbf{k}}
\newcommand{\mathbf{v}}{\mathbf{v}}
\newcommand{\mathbf{c}}{\mathbf{c}}
\renewcommand{\u}{\mathbf{u}}
\newcommand{\mathbf{f}}{\mathbf{f}}
\newcommand{\mathbf{g}}{\mathbf{g}}
\newcommand{\mathbb{R}}{\mathbb{R}}
\newcommand{\mathbf{J}}{\mathbf{J}}
\renewcommand{\H}{\mathbf{H}}
\newcommand{\mathbf{C}}{\mathbf{C}}
\newcommand{\mathbf{I}}{\mathbf{I}}
\newcommand{\mathbf{D}}{\mathbf{D}}
\newcommand{\mathbf{U}}{\mathbf{U}}
\newcommand{\mathbf{M}}{\mathbf{M}}
\newcommand{\mathbf{B}}{\mathbf{B}}
\renewcommand{\L}{\mathbf{L}}
\newcommand{\mathbf{A}}{\mathbf{A}}
\newcommand{\mathbf{V}}{\mathbf{V}}
\newcommand{\mathbf{W}}{\mathbf{W}}
\newcommand{\mathbf{X}}{\mathbf{X}}
\newcommand{\mathbf{Y}}{\mathbf{Y}}
\newcommand{\mathbf{q}}{\mathbf{q}}
\newcommand{\mathbf{D}_\mathbf{q}}{\mathbf{D}_\mathbf{q}}
\newcommand{\boldsymbol{\mu}}{\boldsymbol{\mu}}
\newcommand{\boldsymbol{\Sigma}}{\boldsymbol{\Sigma}}
\newcommand{\mathbf{Q}}{\mathbf{Q}}
\newcommand{\mathbf{n}}{\mathbf{n}}
\newcommand{\mathbf{0}}{\mathbf{0}}
\newcommand{\mathbf{l}}{\mathbf{l}}
\newcommand{\mathbf{S}}{\mathbf{S}}
\newtheorem{theorem}{Theorem}
\newtheorem{corollary}{Corollary}
\begin{document}
\twocolumn[
\papertitle{Binary Independent Component Analysis via Non-stationarity}
\paperauthor{ Antti Hyttinen \And Vit\'oria Barin-Pacela \And Aapo Hyvärinen }
\paperaddress{HIIT \& Department of Computer Science\\
University of Helsinki\\
Helsinki, Finland \And
Mila\\
Université de Montréal \\
Montreal, Canada
\And Department of Computer Science\\
University of Helsinki\\
Helsinki, Finland} ]
\begin{abstract}
We consider independent component analysis of binary data. While fundamental in practice, this case has been much less developed than ICA for continuous data. We start by assuming a linear mixing model in a continuous-valued latent space, followed by a binary observation model. Importantly, we assume that the sources are non-stationary; this is necessary since any non-Gaussianity would essentially be destroyed by the binarization.
Interestingly, the model allows for closed-form likelihood by employing the cumulative distribution function of the multivariate Gaussian distribution. In stark contrast to the continuous-valued case, we prove non-identifiability of the model with few observed variables; our empirical results imply identifiability when the number of observed variables is higher. We present a practical method for binary ICA that uses only pairwise marginals, which are faster to compute than the full multivariate likelihood.
\end{abstract}
\section{INTRODUCTION}
Despite significant progress in both linear and nonlinear ICA in recent years~\citep{tcl, hyvarinen19, Khemakhem2019}, ICA for binary data remains a challenging and important problem. Binary data is abundant in various fields, such as bioinformatics, health informatics, social sciences, natural language, and electrical engineering. An ICA model for binary data may also open new opportunities in solving problems closely related to ICA, such as
causal discovery~\citep{Shimizu06JMLR}.
Methods for binary ICA have been proposed based on either binary or continuous-valued independent components. In the case of binary components, \citet{Himberg01, nguyen10} assumed an OR mixture model. Some extensions of latent Dirichlet allocation can be seen as binary ICA \citep{Podosinnikova15,Buntine05}.
On the other hand, \citet{kaban06} presented an approach based on a latent linear model and binarized observations, although the components were restricted to the unit interval, which limits its applicability.
\citet{Khemakhem2019} presented a nonlinear ICA model with binarized observations, which is a promising approach for our purposes.
Our goal here is to study the prospects of ICA for binary data by using an intuitively appealing model.
Identifiability of such a model is crucial to investigate, and we also want a consistent estimator which is not based on approximations whose validity is not clear.
None of the approaches above fulfills all these criteria.\footnote{As noted in the Corrigendum of \citet{Khemakhem2019} (v4 on arxiv), their initial identifiability proof for a discrete non-linear ICA model was incorrect.}
We propose a
binary ICA model
inspired by recent developments in nonlinear ICA.
We formulate a latent linear model with a separate binarizing measurement equation.
Crucially, we assume the components to be non-stationary, which is a powerful principle and very useful here because any non-Gaussianity may be destroyed by binarization.
Thus, we obtain a binary ICA model whose likelihood can actually be described in closed form via the multivariate Gaussian cumulative distribution function. We further propose to combine the likelihood with a
moment-matching approach to obtain a
fast and accurate estimation algorithm. In fact, due to the model structure, pairwise marginal distributions of non-binarized data can be accurately estimated from binary sample data and the likelihood can be computed directly from them.
We investigate the identifiability of the model, and somewhat surprisingly, we show that low-dimensional models are in fact non-identifiable---while higher-dimensional models are (empirically) shown to be identifiable.
\clearpage
\section{A MODEL FOR BINARY ICA}
\label{model}
We define here a binary counterpart of the linear ICA model. In particular, we consider here a model with non-stationarity: we assume the data consists of $n$ observed variables, divided into $n_u$ segments which express the non-stationarity. Thus, each data point has a segment index $u$ assigned to it. This additionally observed variable $u$ assumes categorical values, which is a special case of the auxiliary variable framework of \citet{Khemakhem2019}.
Such non-stationarity based on a segment-wise Gaussian model is well-known in linear ICA \citep{Pham01,JSSv076i02}. It is not only natural in the case of non-stationary time series, but also when there is any other external discrete variable, such as the experimental condition or intervention, or even a class label which modulates the distribution of the data.
We assume the data is generated from $n_z$ latent variables (independent components, or sources), collected into a latent random vector $\mathbf{z}$, which are generated independently of each other from a Gaussian distribution. Crucially, the parameters of the Gaussian distribution change as a function of the segment as
$$
\mathbf{z} | u \sim \mathcal{N}(\boldsymbol{\mu}_\mathbf{z}^{u},\boldsymbol{\Sigma}_\mathbf{z}^{u})
$$
where $\boldsymbol{\Sigma}^u_\mathbf{z}$ is a diagonal matrix of the source
variances in segment $u$.
We define ``intermediate'' variables $\mathbf{y}$, which are a linear mixture of the sources given by a mixing matrix $\mathbf{A}$ with $n$ rows and $n_z$ linearly independent columns
$$
\mathbf{y} = \mathbf{A} \mathbf{z} \sim \mathcal{N}(\mathbf{A} \boldsymbol{\mu}_\mathbf{z}^{u}, \ \mathbf{A} \boldsymbol{\Sigma}_\mathbf{z}^{u} \mathbf{A}^{\intercal})
$$
While some work on ICA considers noisy continuous observations by adding noise to $\mathbf{y}$, here we consider binarized observations instead.
Binarization is done using a linking function $\sigma$ so that
$$
P(x_i=1) = \sigma(y_i)
$$
We use a linking function based on the Gaussian cumulative distribution function (CDF):
$$
\sigma(y_i) = \Phi(\sqrt{\frac{\pi}{8}} y_i|0,1)
$$
where $\Phi$ is the CDF of the Gaussian distribution, here with mean $0$ and variance $1$. We use $\sqrt{\frac{\pi}{8}}$ as the coefficient to closely match the often-employed sigmoid function $\sigma(y_i) = \frac{1}{1+e^{-y_i}}$~\citep{waissi,sigmoidtrick}.
This Gaussian CDF-based linking function has certain nice algebraic properties compared to the sigmoid, as we will see in Section~3.
Furthermore, the linking function has the following intuitive interpretation. Take $y_i$, add independent noise $\epsilon$ drawn from $\mathcal{N}(0, \frac{8}{\pi})$, and binarize simply by a hard threshold at $0$ to get $x_i$. This gives the same distribution for $x_i$, since the probabilities match:
\begin{eqnarray*}
P(x_i=1) = P( y_i + \epsilon > 0 ) = P(\epsilon > -y_i)\\
= \int_{-y_i}^{\infty} \mathcal{N}\left(\epsilon|0, \frac{8}{\pi}\right) d \epsilon
=\Phi\left(\sqrt{\frac{\pi}{8}} y_i \vert 0,1\right).
\end{eqnarray*}
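This equivalence is easy to check numerically. The following Python sketch (standard library only; the test point $y = 0.7$ is an arbitrary choice of ours) compares the linking function with a Monte Carlo estimate of the noise-and-threshold probability:

```python
import math
import random

random.seed(0)

def sigma(y):
    # Gaussian-CDF linking function Phi(sqrt(pi/8) * y | 0, 1)
    return 0.5 * (1.0 + math.erf(math.sqrt(math.pi / 8.0) * y / math.sqrt(2.0)))

y, N = 0.7, 200_000
noise_sd = math.sqrt(8.0 / math.pi)
hits = sum(y + random.gauss(0.0, noise_sd) > 0.0 for _ in range(N))
# hits / N approximates P(y + eps > 0), which equals sigma(y)
```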
A binary ICA model $\mathcal{M}=(\mathbf{A},\{\boldsymbol{\mu}_\mathbf{z}^u\}_u,\{\boldsymbol{\Sigma}_\mathbf{z}^u\}_u)$ thus consists of the following parameters: mixing matrix $\mathbf{A}$, the means $\boldsymbol{\mu}_\mathbf{z}^u$ and the diagonal (co)variance matrices $\boldsymbol{\Sigma}_\mathbf{z}^u$ for all segments $u$, denoted by $\{\boldsymbol{\mu}_\mathbf{z}^u\}_u$ and $\{\boldsymbol{\Sigma}_\mathbf{z}^u\}_u$.
\section{THE LIKELIHOOD}
A surprising observation regarding the latent variable model defined in Section~2 is that we can calculate the likelihood in closed form by employing the multivariate Gaussian CDF.
For example, the model
defines the probability of the data vector of all ones, denoted by $\mathbf{1}$, as:
\begin{eqnarray}
&& P(\mathbf{x}=\mathbf{1}|\mathcal{M}, u)
=\int P(\mathbf{x}=\mathbf{1}|\mathbf{y}) P(\mathbf{y}|\mathcal{M}, u)d \mathbf{y} \nonumber\\
&& = \int\Phi \left(\sqrt{\frac{\pi}{8}} \mathbf{y} \vert \mathbf{0}, \mathbf{I} \right) \mathcal{N}(\mathbf{y}\vert \mathbf{A} \boldsymbol{\mu}_\mathbf{z}^{u}, \mathbf{A} \boldsymbol{\Sigma}_\mathbf{z}^{u} \mathbf{A}^{\intercal})d \mathbf{y} \nonumber
\end{eqnarray}
where the univariate Gaussian CDFs are written as a multivariate Gaussian CDF $\Phi$ with an identity covariance matrix.
The benefit of using a Gaussian-CDF-based linking function comes into play here, as the integral is directly a value of a multivariate Gaussian CDF
\citep{waissi,sigmoidtrick}: the above integral specifies the probability of first drawing $\mathbf{y}$ and then, independently, drawing a standard Gaussian variable $\mathbf{n} \sim \mathcal{N}(\mathbf{0},\mathbf{I})$ that is elementwise smaller. We therefore have:
\begin{eqnarray*}
P(\mathbf{x}=\mathbf{1}|\mathcal{M}, u)
=P\left( \mathbf{n} - \sqrt{\frac{\pi}{8}} \mathbf{y} < \mathbf{0} \right)
\end{eqnarray*}
This motivates us to define $\mathbf{q}$, an important quantity in the following developments, as:
\begin{eqnarray}
\mathbf{q}&=& \mathbf{n} -\sqrt{\frac{\pi}{8}}\mathbf{y}, \label{eq:q}
\end{eqnarray}
which is simply a noisy, rescaled and sign-flipped version of the linear mixture $\mathbf{y}$. As noted in the preceding section, using the Gaussian CDF linking function is equivalent to adding Gaussian noise and thresholding, which is exactly what happens here; this is very useful, since in each segment we then have a purely Gaussian generative model which is thresholded. In fact, since $\mathbf{q}$ is the sum of two independent Gaussian random vectors, it also has a Gaussian distribution $\mathbf{q} \sim \mathcal{N} \left(\boldsymbol{\mu}_{ \mathbf{q}}^u , \boldsymbol{\Sigma}_{ \mathbf{q}}^u \right)$ with:
\begin{eqnarray}
\boldsymbol{\mu}_{\mathbf{q}}^u&=& -\sqrt{\frac{\pi}{8}} \mathbf{A} \boldsymbol{\mu}_\mathbf{z}^u, \label{eq:muq}\\
\boldsymbol{\Sigma}_{ \mathbf{q}}^u &=& \mathbf{I} + \frac{\pi}{8} \mathbf{A} \boldsymbol{\Sigma}_\mathbf{z}^u \mathbf{A}^{\intercal}. \label{eq:sigma} \label{eq:sigmaq}
\end{eqnarray}
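The moments above are easy to verify by simulation. The following sketch (with a hypothetical mixing matrix and segment parameters, not values from the paper) samples $\mathbf{q} = \mathbf{n} - \sqrt{\pi/8}\,\mathbf{y}$ directly from its definition and compares the empirical moments to the closed forms:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical example parameters for one segment (n = n_z = 2).
A = np.array([[1.0, 0.5], [-0.3, 1.2]])   # mixing matrix
mu_z = np.array([0.4, -0.2])              # segment-wise source means
Sigma_z = np.diag([1.5, 0.7])             # segment-wise source variances

c = np.sqrt(np.pi / 8.0)

# Sample q = n - c*y with y = A z, z ~ N(mu_z, Sigma_z), n ~ N(0, I).
N = 200_000
z = rng.multivariate_normal(mu_z, Sigma_z, size=N)
y = z @ A.T
noise = rng.standard_normal((N, 2))
q = noise - c * y

# Closed-form moments of q from the equations above.
mu_q = -c * A @ mu_z
Sigma_q = np.eye(2) + (np.pi / 8.0) * A @ Sigma_z @ A.T

print(np.allclose(q.mean(axis=0), mu_q, atol=0.02))
print(np.allclose(np.cov(q, rowvar=False), Sigma_q, atol=0.05))
```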
The probability of the all-ones data vector is then:
\begin{eqnarray}
P(\mathbf{x}=\mathbf{1}|\mathcal{M}, u) = P\left( \mathbf{q} < \mathbf{0} \right) =
\Phi \left(\mathbf{0} \vert\boldsymbol{\mu}_{ \mathbf{q}}^u, \boldsymbol{\Sigma}_{ \mathbf{q}}^u \right), \label{eq:prob}
\end{eqnarray}
where $\Phi$ is the multivariate Gaussian CDF with all variables integrated from $-\infty$ to $0$; it is readily available in standard packages~\citep{mvnorm}.
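For illustration, this probability can be evaluated with off-the-shelf routines; our implementation uses R, but the following Python sketch (with the same hypothetical segment parameters as above) computes the corresponding quantity with SciPy:

```python
import numpy as np
from scipy.stats import multivariate_normal

# Hypothetical segment parameters (n = n_z = 2).
A = np.array([[1.0, 0.5], [-0.3, 1.2]])
mu_z = np.array([0.4, -0.2])
Sigma_z = np.diag([1.5, 0.7])

# Moments of q implied by the model.
mu_q = -np.sqrt(np.pi / 8) * A @ mu_z
Sigma_q = np.eye(2) + (np.pi / 8) * A @ Sigma_z @ A.T

# P(x = 1 | M, u) = P(q < 0): a multivariate Gaussian CDF value at the origin.
p_all_ones = multivariate_normal.cdf(np.zeros(2), mean=mu_q, cov=Sigma_q)
print(p_all_ones)
```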
A similar derivation gives the probabilities for the other assignments of $\mathbf{x}$. These probabilities can be expressed compactly for all value assignments as:
\begin{equation}
P(\mathbf{x} \vert \mathcal{M}, u) = \Phi \left(l(\mathbf{x}), u(\mathbf{x}) \vert \boldsymbol{\mu}_{ \mathbf{q}}^u, \boldsymbol{\Sigma}_{ \mathbf{q}}^u \right)
\end{equation}
in which the multivariate Gaussian probability density function is integrated from the lower bound $l(\mathbf{x})$ to the upper bound $u(\mathbf{x})$ with:
\begin{eqnarray*}
l(\mathbf{x})[i] = \begin{cases}-\infty \text{ if } \mathbf{x}[i]=1 \\
\quad\,0 \text{ otherwise}
\end{cases}
u(\mathbf{x})[i]= \begin{cases}\;\,0 \text{ if } \mathbf{x}[i]=1 \\
\infty \text{ otherwise}
\end{cases}
\end{eqnarray*}
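To make the bounds concrete, the following sketch estimates $P(\mathbf{x} \vert \mathcal{M}, u)$ for every assignment of two binary variables by Monte Carlo (a hypothetical segment distribution for $\mathbf{q}$ is assumed; in practice these box probabilities are computed with Gaussian CDF routines) and checks that the probabilities partition the total mass:

```python
import itertools
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical segment distribution for q (two observed variables).
mu_q = np.array([-0.25, 0.1])
Sigma_q = np.array([[1.6, -0.2], [-0.2, 1.4]])
qs = rng.multivariate_normal(mu_q, Sigma_q, size=500_000)

def box_probability(x):
    """Estimate P(x | M, u): q[i] < 0 where x[i] = 1, q[i] >= 0 otherwise."""
    inside = np.ones(len(qs), dtype=bool)
    for i, xi in enumerate(x):
        inside &= (qs[:, i] < 0) if xi == 1 else (qs[:, i] >= 0)
    return inside.mean()

probs = {x: box_probability(x) for x in itertools.product([0, 1], repeat=2)}
print(probs)                 # one box probability per binary assignment
print(sum(probs.values()))   # the boxes partition the whole plane
```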
\begin{figure}
\centering
\includegraphics[scale=0.25,trim={16cm 9cm 16cm 12cm},clip]{Rplot.jpeg}
\caption{Binary ICA model for two observed variables and three segments. For each segment there is a bivariate Gaussian distribution on $\mathbf{q}$, the probability of an assignment to the binary observed variables is the probability mass in the corresponding quadrant.\label{fig:intuition}}
\end{figure}
Importantly, this formulation allows for a particularly clear intuitive interpretation of the model. Figure~\ref{fig:intuition} shows this for two observed variables and three segments. For each segment the model defines a bivariate Gaussian distribution for $\mathbf{q}$, depicted by colors and contours on the planes. The probability for an assignment of the observed binary variables in a segment is simply the probability mass in the corresponding quadrant. The multivariate Gaussian distributions for $\mathbf{q}$ in the different segments are related in the sense that they are formed by the same mixing matrix acting on independent sources particular to the segment.
The log-likelihood of the whole data set can then be calculated as
\begin{eqnarray}
l
&=&
\sum_{(\mathbf{x},u)} \hspace{-1mm} c(\mathbf{x},u)\log \Phi(l(\mathbf{x}), u(\mathbf{x}) \vert \boldsymbol{\mu}_{ \mathbf{q}}^u, \boldsymbol{\Sigma}_{\mathbf{q}}^u), \label{qdistr}
\end{eqnarray}
where $c(\mathbf{x},u)$ is the count of data points with values $(\mathbf{x},u)$
and the sum is taken over all possible assignments of $(\mathbf{x},u)$.
\section{ON IDENTIFIABILITY}
Many ICA models can only be identified up to scaling and permutation indeterminacies of the sources~\citep{ICAbook,Khemakhem2019}. It is straightforward to see that these limitations apply to our model as well. By re-ordering the columns of the mixing matrix and the sources, the implied distribution is unaffected; similarly, we can counteract a scaling (or sign-flip) of the mixing matrix columns by scaling (or sign-flipping) the sources. However, binarization induces additional indeterminacies, as we show next.
\subsection{The Binarization Indeterminacy}
First, we note that the probability in Equation~\ref{eq:prob} stays exactly the same even if $\mathbf{q}$ is multiplied by a diagonal matrix $\mathbf{Q}^u$ with positive entries on the diagonal, possibly different for each segment:
\begin{eqnarray*}
P\left( \mathbf{q} < \mathbf{0} \right) &=&P\left( \mathbf{Q}^u \mathbf{q} < \mathbf{0} \right).
\end{eqnarray*}
Note that this is valid even if the elementwise operator is $>$ or a mixture of $>$ and $<$.\footnote{For the probability of $\mathbf{x}$ being all ones, any permutation matrix $\mathbf{Q}^u$ would similarly preserve the implied probability, but the probability of some other assignment for $\mathbf{x}$ (each of which corresponds to some mixture of $>$ and $<$) may change then.} That is, we lose the scale information on $\mathbf{q}$ in the binarization.
Two binary ICA models $\mathcal{M}=(\mathbf{A},\{\boldsymbol{\mu}_\mathbf{z}^u\}_u,\{\boldsymbol{\Sigma}_\mathbf{z}^u\}_u)$ and $\mathcal{M}'=(\mathbf{A}',\{{\boldsymbol{\mu}'}_\mathbf{z}^u\}_u,\{{\boldsymbol{\Sigma}'}_\mathbf{z}^u\}_u)$ are indistinguishable if there are positive diagonal matrices $\{\mathbf{Q}^u\}_u$ such that for each segment $u$ means and covariances of $\mathbf{q}$ satisfy:
\begin{eqnarray}
{\boldsymbol{\mu}'}_{ \mathbf{q}}^u &=& \mathbf{Q}^u \boldsymbol{\mu}_{ \mathbf{q}}^u \label{eq_1}, \\
{\boldsymbol{\Sigma}'}_{ \mathbf{q}}^u &=& \mathbf{Q}^u\boldsymbol{\Sigma}_{ \mathbf{q}}^u \mathbf{Q}^u,
\label{eq_2}
\end{eqnarray}
which can be written using the model parameters (Equations~\ref{eq:muq} and~\ref{eq:sigmaq}) as:
\begin{eqnarray}
\sqrt{\dfrac{\pi}{8}}\mathbf{A}' {\boldsymbol{\mu}'}_\mathbf{z}^u&=& \mathbf{Q}^u \sqrt{\dfrac{\pi}{8}} \mathbf{A} \boldsymbol{\mu}_\mathbf{z}^u,
\label{eq_1v} \label{arithmetic1}\\
\mathbf{I} + \dfrac{\pi}{8} \mathbf{A}' {\boldsymbol{\Sigma}'}_\mathbf{z}^u (\mathbf{A}')^\intercal &=& \mathbf{Q}^u(\mathbf{I} + \dfrac{\pi}{8} \mathbf{A} \boldsymbol{\Sigma}_\mathbf{z}^u \mathbf{A}^\intercal )\mathbf{Q}^u.
\label{eq_2v} \label{arithmetic2}
\end{eqnarray}
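This invariance is easy to check numerically. The following sketch (with a hypothetical segment distribution and a hypothetical positive diagonal scaling $\mathbf{Q}^u$) compares the all-ones quadrant probability before and after scaling:

```python
import numpy as np
from scipy.stats import multivariate_normal

# Hypothetical q-distribution for one segment, and a positive diagonal scaling.
mu_q = np.array([-0.3, 0.45])
Sigma_q = np.array([[1.5, 0.4], [0.4, 1.2]])
Q = np.diag([2.0, 0.5])

# P(q < 0) is unchanged when q is replaced by Q q.
p_original = multivariate_normal.cdf(np.zeros(2), mean=mu_q, cov=Sigma_q)
p_scaled = multivariate_normal.cdf(np.zeros(2), mean=Q @ mu_q, cov=Q @ Sigma_q @ Q)

print(p_original, p_scaled)   # identical up to CDF evaluation error
```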
Figure~\ref{fig:indet} shows an example of this equivalence relation for one segment and two observed variables. The two Gaussian distributions represented by the blue and red contours imply the exact same joint distribution for binary observed variables. The amount of mass in each of the 4 quadrants is exactly the same.
\begin{figure}
\centering
\includegraphics[scale=0.50]{scalingfactorplot.pdf}
\caption{Two Gaussian distributions (red and blue) for a two dimensional $\mathbf{q}$ which imply the same binary distributions after binarization by the linking function. That is because the mass of both distributions in each of the 4 quadrants is identical.\label{fig:indet} }
\end{figure}
\subsection{Row Order Indeterminacy for $n=2$}
One consequence of the binarization indeterminacy is the following non-identifiability result, proven in Appendix A.
\begin{theorem}
If the row order of the mixing matrix $\mathbf{A}$ of a two-dimensional binary ICA model is reversed, then the source means $\boldsymbol{\mu}^u_\mathbf{z}$ and variances $\boldsymbol{\Sigma}^u_\mathbf{z}$ can be adjusted such that the implied (observed) binary distributions remain identical.
\end{theorem}
Although the result may generalize to certain sparse higher-dimensional models, it fortunately does not seem to jeopardize the estimation of higher-dimensional models in general.
The result does, however, have serious consequences for
causal discovery~\citep{Shimizu06JMLR}.
Consider two structural equation models, implying opposite causal directions:
$$
\mathbf{y}:= \left( \begin{array}{cc}
0 & 0\\
b & 0
\end{array}\right)\mathbf{y} + \mathbf{z}, \quad \mathbf{y}:= \left( \begin{array}{cc}
0 & b\\
0 & 0
\end{array}\right)\mathbf{y} + \mathbf{z},
$$
where $\mathbf{z}$ has a Gaussian distribution in each segment with diagonal covariance matrix $\boldsymbol{\Sigma}_\mathbf{z}^u$. The models
imply respectively the mixing models:
$$
\mathbf{y}=\left( \begin{array}{cc}
1 & 0\\
b & 1
\end{array}\right)\mathbf{z}, \quad \mathbf{y}=\left( \begin{array}{cc}
1 & b\\
0 & 1
\end{array}\right)\mathbf{z}.
$$
If we observe only the binarized $\mathbf{y}$, i.e.\ $\mathbf{x}$, we can identify the mixing matrix at most up to row order, column order, and column scale.
By switching the column order and then the row order of the mixing matrix on the left, we obtain the mixing matrix on the right. Thus, unlike in the continuous case, we cannot detect the causal direction between two variables without further assumptions.
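The column-and-row switch used in this argument can be verified in a few lines (the coefficient $b$ below is a hypothetical value):

```python
import numpy as np

b = 0.8  # hypothetical causal coefficient

A_left = np.array([[1.0, 0.0], [b, 1.0]])    # mixing matrix for y1 -> y2
A_right = np.array([[1.0, b], [0.0, 1.0]])   # mixing matrix for y2 -> y1

# Switch the column order, then the row order, of the left matrix.
switched = A_left[:, ::-1][::-1, :]
print(np.array_equal(switched, A_right))   # True
```

Combined with the row-order indeterminacy of Theorem 1 and the column-order and column-scale indeterminacies, the two causal structures are therefore indistinguishable from binary observations.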
\subsection{Identifiability of $\mathbf{q}$-Correlations}
Note that the indistinguishable models in Equation~\ref{eq_2} have equal \emph{correlation matrices} (i.e.\ matrices of Pearson correlation coefficients) for the random variables $\mathbf{q}$.
The next theorem and corollary show that the correlations between elements of $\mathbf{q}$ are indeed theoretically identifiable from the distributions of the binary observed variables. Intuitively, the higher the correlation, the more likely the pair of binary observed variables is to receive equal assignments. The fairly technical proof is given in Appendix B.
\begin{theorem}
Two binary ICA models imply different binary distributions (in a given segment) if the correlation matrices for $\mathbf{q}$ are not equal.
\end{theorem}
This result is crucial for the development of our novel estimation method (Section~5.2), via the
corollary:
\begin{corollary} \label{corollary1}
The correlation matrix of $\mathbf{q}$ in a given segment is identifiable from binary observations.
\end{corollary}
On the other hand, the following theorem recaps the well-known result \citep{ICAbook,Pham01} that the means do not help in estimating the mixing matrix:
\begin{theorem}
If two models
$\mathcal{M}$
and $\mathcal{M}'$ with $n=n_z$
imply the same correlation matrices for $\mathbf{q}$ (in a given segment)
then the means $\boldsymbol{\mu}_\mathbf{z}^u$ can be adjusted such that the implied binary distributions are identical. \label{thm:means}
\end{theorem}
\subsection{A Heuristic Variable-Counting Approach}
\begin{table}[ht]
\centering
{\small
\begin{tabular}{r|rrrrr}
& $n_u= 2$ & $n_u= 3$ & $n_u= 4$ & $n_u= 5$ & $n_u= 6$ \\
\hline
n= 2 & -6 & -7 & -8 & -9 & -10 \\
n= 3 & -9 & -9 & -9 & -9 & -9 \\
n= 4 & -12 & -10 & -8 & -6 & -4 \\
n= 5 & -15 & -10 & -5 & \textbf{0} & 5 \\ \hline
n= 6 & -18 & -9 & \textbf{0} & 9 & 18 \\
n= 7 & -21 & -7 & \textbf{7} & 21 & 35 \\
n= 8 & -24 & -4 & \textbf{16} & 36 & 56 \\
n= 9 & -27 & \textbf{0} & 27 & 54 & 81 \\
n= 10 & -30 & \textbf{5} & 40 & 75 & 110 \\
\hline
\end{tabular}}
\caption{Heuristic identifiability analysis.
Each entry states the number of equations (statistics) minus the number of unknowns (parameters). The minimal cases with a non-negative number,
suggesting identifiability, are bolded.\label{tab:idtable}}
\end{table}
We conclude our identifiability analysis with a well-known heuristic approach to identifiability used in factor analysis. It is based on counting the number of statistics we can calculate (or equations that we can formulate) and the number of unknowns (model parameters) we need to solve for. If the number of statistics is at least as large as the number of parameters, there is hope that the model is identifiable.
The calculations in Table~\ref{tab:idtable} are based on Equations~\ref{arithmetic1} and~\ref{arithmetic2} when the number of sources equals the number of observations ($n=n_z$); recall that the number of segments is denoted by $n_u$. The equations/statistics correspond to $n_u(n^2-n)/2$ correlations, $n_u\cdot n$ variances and $n_u\cdot n$ means. Unknowns include $n\cdot n$ mixing matrix coefficients, $n_u\cdot n$ segment-wise source variances,
$n_u\cdot n$ segment-wise source means,
as well as $n_u\cdot n$ segment-wise scaling terms (positive diagonal elements of $\mathbf{Q}^u$).\footnote{In line with the classical literature in factor analysis, we ignore the source scale and order indeterminacies.} These calculations suggest how identifiability depends on the numbers of the segments and the observed variables. We investigate the validity of these predictions experimentally below. Interestingly, the calculations suggest that the bivariate case is never identifiable.
\section{METHODS FOR BINARY ICA}
Next, we present three methods for estimating the binary ICA model, building on the theory in Sections~3 and~4. The \texttt{BLICA} method of Section~5.2 is the main novel contribution of the paper.
\subsection{Maximum Likelihood Estimation}
We have already derived the likelihood of our model. A straightforward approach is then to optimize this using e.g. L-BFGS. The gradient involves the calculation of moments for the \emph{truncated} multivariate Gaussian distribution.
These can be obtained from R-package \texttt{tmvtnorm}~\citep{tmvtnorm}.
Unfortunately, the computation of the likelihood and its gradient can only be done for small models in practice, because the evaluation of multivariate Gaussian CDF is time consuming, necessitating sampling-based approximations. Our experiments refer to this as \texttt{full MLE}.
\subsection{The \texttt{BLICA} Method}
\begin{algorithm}[!t]
\begin{algorithmic}[1]
\State Input data recorded at $n_u$ different segments.
\For{ segment $u \in \{1,\ldots,n_u\}$ }
\For{ each observed variable pair $\{x_i,x_j\}$ }
\State \parbox[t]{6cm}{Estimate the correlation between $q_i$ and $q_j$ by maximizing the marginal pairwise likelihood of $x_i$ and $x_j$ (in segment $u$).}
\EndFor
\State \parbox[t]{7cm}{Form and regularize the correlation matrix $\hat{\boldsymbol{\Sigma}}_\mathbf{q}^u$ obtained from the pairwise correlations.}
\EndFor
\State Optimize scaled Gaussian likelihood
with L-BFGS
over sufficient statistics $\hat{\boldsymbol{\Sigma}}_\mathbf{q}^u$ from all segments $u$.
\State Return the estimated mixing matrix $\mathbf{A}$ and source variances $\boldsymbol{\Sigma}^u_\mathbf{z}$ for all segments $u$.
\end{algorithmic}
\caption{The \texttt{BLICA} algorithm for Binary ICA.\label{alg:BLICA}}
\end{algorithm}
However, we can circumvent the computational burden of the high-dimensional Gaussian CDF.
By the theory in Section 4, the correlations of $\mathbf{q}$ convey the essential information connecting the binary data to the continuous mixing model. Since the marginalization properties of our model are inherited from the multivariate Gaussian, these correlations can be estimated from \emph{pairwise} marginal distributions; in 2D the Gaussian CDF is still quite quick to compute.
Thus, we combine maximum likelihood estimation with what could be called a ``moment-matching'' approach as follows.
We first recover the pairwise correlations of the continuous-valued $\mathbf{q}$ from the observed binary data (this is possible by Corollary~\ref{corollary1}) via MLE in 2D. Then we fit those correlations to the correlations implied by the latent linear mixing model using a more scalable MLE in the continuous-valued latent space. The resulting algorithm is summarized as Algorithm~\ref{alg:BLICA}.
\begin{figure*}
\centering
\includegraphics[scale=0.42]{plot3.pdf}
\caption{Identifiability of the model with equal number of observed variables and sources. The \texttt{BLICA} method used true (pairwise) probability distributions (i.e. infinite sample limit data). Each box is based on 30 models. Runs with log-error less than $-7$ (e.g. those that had $MCS=1$ to machine precision) are marked with $-7$.
\label{fig:idplot} }
\end{figure*}
\textbf{Correlation estimation.}
On line 4, we estimate each correlation separately by directly fitting a model for $\mathbf{q}$ in Equation~\ref{qdistr} in two dimensions.
To calculate multivariate Gaussian CDF we use the R package \texttt{mvtnorm}~\citep{mvnorm}.
We employ the \texttt{GenzBretz} method, which is particularly suitable for the fast evaluation needed here~\citep{genz1993comparison}. Furthermore, the estimation can be simplified~\citep{lee}. Due to Equation~\ref{arithmetic2} the variances $\boldsymbol{\Sigma}_\mathbf{q}[i,i]$ can be set to 1.
Furthermore, since the marginal of $x_i$ is
\begin{eqnarray*}
P(x_i=1|u)&=&\Phi(-\boldsymbol{\mu}^u_\mathbf{q}[i]/\sqrt{\boldsymbol{\Sigma}^u_\mathbf{q}[i,i]}|0,1),
\end{eqnarray*}
$\boldsymbol{\mu}_\mathbf{q}[i]$ can be computed using the inverse CDF~\citep{mvnorm}. The univariate optimization problem in the interval $[-1,1]$ can be solved using a line search method~\citep{brent2013algorithms}. The scalability of Algorithm~1 depends crucially on this step, as $n_u\cdot (n^2-n)/2$ correlations need to be estimated.
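The pairwise estimation step can be sketched as follows (a simplified Python version with hypothetical parameters: variances fixed to 1, the mean assumed known from the marginals, and SciPy's bounded scalar minimizer standing in for the line search). In the infinite-data limit, the exact cell probabilities serve as the observed cell frequencies, and the correlation is recovered:

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import multivariate_normal, norm

def cell_probs(mu, rho):
    """Probabilities of the four assignments (x_i, x_j) under a 2D Gaussian q
    with unit variances, mean mu, and correlation rho."""
    Sigma = np.array([[1.0, rho], [rho, 1.0]])
    p11 = multivariate_normal.cdf(np.zeros(2), mean=mu, cov=Sigma)  # q_i<0, q_j<0
    p1 = norm.cdf(-mu[0])   # marginal P(x_i = 1)
    p2 = norm.cdf(-mu[1])   # marginal P(x_j = 1)
    return np.array([p11, p1 - p11, p2 - p11, 1.0 - p1 - p2 + p11])

# Infinite-data setting: exact cell probabilities of a hypothetical truth.
rho_true = 0.6
mu = np.array([-0.2, 0.3])  # recoverable from the marginals via the inverse CDF
freqs = cell_probs(mu, rho_true)

def neg_loglik(rho):
    return -np.sum(freqs * np.log(cell_probs(mu, rho)))

rho_hat = minimize_scalar(neg_loglik, bounds=(-0.99, 0.99), method="bounded").x
print(rho_hat)   # close to 0.6
```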
\textbf{Regularization.} When estimating the correlations of $\mathbf{q}$ from sample data, it can happen that the correlation matrix $\hat{\boldsymbol{\Sigma}}_\mathbf{q}^u$ is close to singular or not positive definite. We use the following regularization on line 5, based on a parameter $r$~\citep{warton} which specifies the approximate target condition number.
The regularized correlation matrix is then
\begin{equation}
\frac{1}{1+\delta} (\hat{\boldsymbol{\Sigma}}_\mathbf{q}^u + \delta \mathbf{I}), \text{ where } \delta = \max ( 0, \frac{\lambda_1-r\cdot \lambda_n }{ r-1} ), \nonumber
\end{equation}
where $\lambda_1$ is the largest and $\lambda_n$ the smallest eigenvalue of $\hat{\boldsymbol{\Sigma}}_\mathbf{q}^u$.
This regularization keeps the unit diagonal.
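This step can be sketched as follows (the nearly singular input matrix is a hypothetical example); when $\delta > 0$, the regularized matrix has condition number exactly $r$:

```python
import numpy as np

def regularize_correlation(C, r):
    """Shrink a correlation matrix toward the identity so that its condition
    number becomes at most r, preserving the unit diagonal."""
    lam = np.linalg.eigvalsh(C)  # ascending eigenvalues
    delta = max(0.0, (lam[-1] - r * lam[0]) / (r - 1.0))
    return (C + delta * np.eye(len(C))) / (1.0 + delta)

# Hypothetical nearly singular correlation matrix.
C = np.array([[1.0, 0.999], [0.999, 1.0]])
C_reg = regularize_correlation(C, r=10.0)
print(np.diag(C_reg))         # unit diagonal preserved
print(np.linalg.cond(C_reg))  # condition number reduced to r
```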
\textbf{Scaled Gaussian likelihood.}
Finally, on line 6, we fit the estimated correlations using the Gaussian likelihood model
over the different segments and a stationary mixing matrix, as defined by the model (Section~2). In contrast to the usual case where covariance matrices are directly available, here we need to account for the ``binarization indeterminacy'', resulting in additional nuisance scaling parameters, as pointed out above. We use the term scaled Gaussian likelihood to refer to the ordinary multivariate Gaussian likelihood with the additional parameters $\mathbf{Q}^u$ as scaling factors. The fitting is thus done with the following scaled Gaussian likelihood based on the sufficient statistics:
\begin{equation}
l =\sum_{u=1}^{n_u} \frac{N}{2}[
-\log (\det (\boldsymbol{\Sigma}_\mathbf{q}^u)) - \mathrm{TRACE}(\hat{\boldsymbol{\Sigma}}_\mathbf{q}^u (\boldsymbol{\Sigma}_\mathbf{q}^u)^{-1} )] \nonumber
\end{equation}
where we recall that, by Equation~\ref{arithmetic2}, $\boldsymbol{\Sigma}_\mathbf{q}^u=\mathbf{Q}^u(\mathbf{I}+\mathbf{A}\boldsymbol{\Sigma}_\mathbf{z}^u \mathbf{A}^T)\mathbf{Q}^u$ is a function of
the mixing matrix $\mathbf{A}$, the
source variances $\{\boldsymbol{\Sigma}_\mathbf{z}^u\}_u$ (diagonal, positive elements), and the scaling factors $\{\mathbf{Q}^u\}_u$ (diagonal, positive elements). Note that without the scaling factors $\{\mathbf{Q}^u\}_u$, the mixing matrix $\mathbf{A}$ could be found via joint diagonalization~\citep{JSSv076i02}. Note also that due to Theorem~\ref{thm:means} the source means need not be estimated, since only correlations can and need to be estimated. We perform the fitting by maximizing this likelihood with L-BFGS~\citep{lbfgs} with respect to the aforementioned parameters.
\subsection{Binary ICA through Linear iVAE} \label{ivae}
\citet{Khemakhem2019} presented the identifiable Variational Autoencoder (iVAE), an approach for nonlinear ICA employing variational autoencoders \citep{Kingma2014, Rezende2014} that assumes access to an additionally observed variable such that the sources are independent given the auxiliary variable; further, each source follows an exponential family distribution given the auxiliary variable.
Here, we apply iVAE to estimate the binary ICA model from Section~\ref{model}. As proposed by \citet{Kingma2014} and \citet{Khemakhem2019}, we use the factorized Bernoulli observational model and apply a sigmoid function element-wise to the output of the decoder to obtain the binary probability distributions. Due to the linearity of our mixing model and the segment-wise structure, we can simplify the encoder (posterior approximation) of the VAE, and make all the transformations in the iVAE affine or linear, thus greatly simplifying the system. The \texttt{linear iVAE} is presented in more detail in Appendix~E.
\begin{figure*}
\includegraphics[scale=0.36]{plot1b.pdf}\;\includegraphics[scale=0.36]{plot1a.pdf}\;\includegraphics[scale=0.36]{plot2.pdf}
\caption{
Finite sample performance.
Left: 10 observed variables and 10 sources.
Center: 6 observed variables and 6 sources.
Right: 6 observed variables and 2 sources.
Each box is based on thirty 40-segment datasets.
\label{fig:10observed}\label{fig:comparison} }
\end{figure*}
\subsection{Estimation of the Sources}
After estimating the mixing matrix $\mathbf{A}$, it may be desired to estimate the sources $\mathbf{z}$ as well.
We note that in the case of binary data, the individual source values cannot be accurately estimated (even up to scale and order indeterminacies) due to the inherent noise introduced by the binarization procedure. Presumably, though, if the number of observed variables is large and the number of sources is small, the estimation may be reasonable. In any case, the posterior $p(\mathbf{z}|\mathbf{x},u)$ can be easily calculated after estimating the mixing matrix.
\section{EXPERIMENTS}
We implemented our proposed methods and baselines in R (\texttt{BLICA}, \texttt{full MLE}) and Python (\texttt{linear iVAE}). The methods will be published online as open source.
Here we investigate empirically the identifiability of the model, as well as the finite-sample estimation performance and the scalability of our proposed methods, also comparing to previous approaches.
\textbf{Data.} Data was generated in the following way. Means were drawn from $\mathrm{unif}(-0.5,0.5)$, and standard deviations from
$\mathrm{unif}(0.5,3)$. Mixing matrix elements were drawn from $\mathrm{unif}(-3,3)$ while ensuring invertibility by resampling until the condition number was below 20 for $n<20$, or, for $n\geq 20$, below the 75th quantile of 1000 sampled mixing matrices of the same dimension. For practical estimation from finite-sample data we use 40 segments while varying the sample size.
\textbf{Evaluation.}
ICA methods are often compared in terms of the mean correlation coefficient of the estimated sources. Here, however, binarization induces heavy noise and the individual source values cannot be accurately recovered. We therefore focus our evaluation on the mixing matrices,
and measure the mean cosine similarity (MCS) of the columns (see Appendix~D).
\textbf{Identifiability: Results.}
We start by evaluating identifiability empirically. Crucially, since \texttt{BLICA} only employs pairwise marginals, we can use the exact binary distributions implied by the model as inputs, thus avoiding any finite-sample effects. Figure~\ref{fig:idplot} shows which models can be identified when the number of sources equals the number of observations ($n=n_z$). In many cases, the method found the mixing matrix essentially up to machine precision, which can be seen as an indication of identifiability.
Each box includes 30 different data generating models, for each we ran \texttt{BLICA} 3 times; MCS of the run with highest scaled Gaussian likelihood is plotted.
With only 2 segments, or only 2 variables, the model is not identifiable in any setting. Generally, the more observed variables we have, the fewer segments are needed for identification.
The implications of the empirical results in Figure~\ref{fig:idplot} are similar to the equations-vs-unknowns arithmetic in Table~\ref{tab:idtable}. For example, 3 segments cannot identify models with fewer than 9 variables.
\begin{figure*}
\centering
\includegraphics[scale=0.36]{plot4.pdf}\;\includegraphics[scale=0.36]{plot5.pdf}\;\includegraphics[scale=0.36]{timeplot.pdf}
\caption{Scalability to higher dimensions.
Left: equal number of sources and observed variables. Center: 10 sources.
Each box is based on thirty 40-segment datasets with 1000 samples per segment.
Right: Running times of the steps of \texttt{BLICA} (Algorithm~1).
\label{fig:highdim} }
\end{figure*}
\textbf{Finite-sample estimation: Methods.} Next we turn our attention to estimation performance from finite sample data.
We compare our new \texttt{BLICA} method (with different values of the regularization parameter $r$) to its main competitors: \texttt{fastICA}~\citep{Himberg01,fastica} and the baseline implementations of \texttt{linear iVAE} and \texttt{full MLE}. We note that the model of \texttt{fastICA} is somewhat different, but it still employs a linear mixing of the sources and has the same source scale and order indeterminacies; thus, the \texttt{MCS} comparison is sensible. \texttt{fastICA} does not use the segment index, but pools the data from all segments.
Recall from Section~\ref{ivae} that \texttt{linear iVAE} uses the same model, but instead of employing the likelihood, it optimizes the ELBO objective through L-BFGS. For runs with $n<20$ observed variables a time budget of 2h was used, and the results obtained within the time limit are reported. For larger simulations we allowed 12h per run. To avoid local optima due to the difficult optimization landscape, we ran \texttt{linear iVAE}, \texttt{full MLE}, and \texttt{BLICA} with 3 different seeds and selected the best run according to the objective function (e.g.\ the likelihood).
\textbf{Finite-sample estimation: Results.}
Figure~\ref{fig:10observed} (left) shows the results for 10 observed variables and 10 sources. \texttt{BLICA} clearly outperforms the others, consistently improving with increasing sample size. With smaller dimensions, 6 observed variables and 6 sources in Fig.~\ref{fig:10observed} (center), more samples are needed. \texttt{full MLE} cannot perform sufficiently many optimization steps within the time limit of 2h. However, if the number of sources is limited we again obtain good estimation results: Fig.~\ref{fig:10observed} (right) shows that for 6 observed variables and 2 sources, high MCS can be obtained with only 50 samples per segment. Interestingly, \texttt{linear iVAE} performs well only with fewer sources than observations, while \texttt{fastICA} is not able to reliably estimate the mixing matrix from binary data.
\textbf{Scalability: Results.} Figure~\ref{fig:highdim} assesses the performance in higher dimensions on data sets with 40 segments of 1000 samples each, thirty data sets for each $n$.
Only \texttt{BLICA} can estimate the mixing matrix when the number of observed variables equals the number of sources, Fig.~\ref{fig:highdim} (left). When the number of sources is fixed to 10, Fig.~\ref{fig:highdim} (center), \texttt{linear iVAE} also shows improving performance with an increasing number of observed variables. Finally, Fig.~\ref{fig:highdim} (right) shows the running times of the steps of \texttt{BLICA} (Algorithm~1) on the previous runs. The estimation of the quadratically many correlations starts taking considerable time with 100 observed variables. L-BFGS is relatively quick in reaching a solution close to the final result (i.e.\ 1\% lower MCS), and then still improves gradually.
\section{RELATED WORK} \label{sec:related}
\citet{Himberg01}
consider binary observed vectors $\mathbf{x}$ and binary sources $\mathbf{z}$, so that the ICA mixing model is given by the Boolean expression $ x_{i}=\bigvee_{j=1}^{n_z} a_{i j} \wedge z_{j}$.
They show that this Boolean OR mixing can be approximated by a linear mixing model followed by a unit step function. Thus, they propose to estimate the model by ordinary ICA, and obtain reasonable results when the data is very sparse.
Similarly, \citet{nguyen10} studied binary ICA with OR mixtures by defining a disjunctive generative model.
They prove identifiability
and propose an algorithm without continuous-valued approximations.
\citet{kaban06} proposed a model where
continuous sources follow a Beta distribution, followed by a binary observation model.
While their approach is related to ours, their latent variables are restricted to a finite interval, and they estimate the model using variational approximation which is unlikely to yield consistent estimators.
Discrete ICA has further been approached by extensions of LDA where the topic intensities are mutually independent \citep{Podosinnikova15, Buntine05, canny04}. Although their identifiability guarantees are limited \citep{podosinnikova16}, their method has the advantage of allowing for discrete, non-binary data.
\cite{lee} consider PCA for binary data, employing a binarized Gaussian model.
Finally, we note that the very idea of estimating latent variable models by non-stationarity, originating in \citep{Matsuoka95,Pham01}, has been recently increasingly used in estimating generative models \citep{Hyva16NIPS,Khemakhem2019} as well as for causal discovery \citep{zhang2017causal,Monti19UAI}, even in deep learning.
Instead of the widespread idea of joint diagonalization of covariance matrices~\citep{belouchrani1997blind,Tsatsanis}, we used correlation matrices without explicit diagonalization criteria; related work on diagonalizing correlation matrices can be found in \citep{corrdiag}.
\section{CONCLUSION}
We presented a model for ICA of binary data which is based on a linear latent mixing model and non-stationarity of the sources.
We investigated its identifiability, showing some surprising indeterminacies not present in ordinary ICA, including the fact that the two-variable model cannot be identified. We believe that our identifiability results, theoretical and empirical, will be useful in future research on binary ICA. With our Gaussian link function, the likelihood can be obtained in closed form, although the Gaussian CDF is still computationally heavy. These advances allowed for a practical method, \texttt{BLICA}, which combines maximum likelihood estimation and moment matching; it was shown to be applicable in higher dimensions while still empirically showing consistent behaviour.
As future work, we aim to also investigate which prospects the new algorithm opens in applications.
\clearpage
\bibliographystyle{plainnat}
\section{Introduction}
The new generation of extragalactic imaging surveys of galaxies, such as Euclid \citep{euclid_survey} or LSST \citep{lsst}, will deliver an unprecedented volume of data. These new surveys also imply more demanding scientific requirements which are translated into new challenges from the data analysis perspective. The deeper the survey, the larger the density of detected sources, leading to an increase of the probability of observing overlapping galaxies due to projection effects. We call those objects \emph{blended}, and the task of identifying and treating that confusion, \emph{deblending}. Misidentifying blended galaxies significantly impacts the error budget and can even prevent reaching the scientific requirements for precision cosmology \citep{blending_lsst}.
Accurately identifying blended galaxies is therefore a key task to guarantee the full potential of the forthcoming cosmological surveys. Several machine learning (ML) based \citep{boucaud, arcelin} and non-ML based solutions \citep{scarlett} have been proposed, although the issue remains open. In particular, a robust
quantification of uncertainty to be propagated into the final error budget is generally lacking from current approaches.
In this work in progress, we adapt the Probabilistic U-Net (hereafter PUnet) architecture proposed by \citet{PUnet} to work on galaxy images. In particular, we modify the loss function to deal with very unbalanced datasets and demonstrate that the probabilistic deep learning model provides meaningful segmentation uncertainties while obtaining
competitive scores in both completeness and purity, even when trained from a unique ground truth.
\begin{figure}[!htb]
\centering
\includegraphics[width=\linewidth]{blend_prediction3.pdf}
\caption{Examples of outputs of the probabilistic segmentation model on our test dataset. Each row shows a different case. Columns from left to right: input image; ground truth; average of $50$ realisations; standard deviation of the realisations; pixel-wise absolute difference between truth and prediction. The first row highlights the informative uncertainty for the segmentation with overlapping galaxies, along with the capacity to correctly predict truncated objects. The second row shows that the uncertainty follows the residuals at the border of the galaxy, where the predicted segmentation is not well defined. The last row acts as a null case by showing the capacity of our model to identify a pure background image and to not over-segment.}
\label{fig:predictions}
\end{figure}
\section{Model}
The global architecture of our model follows the one used by \citet{PUnet}. The PUnet is made of two parts. On the one hand, there is a classical U-Net \citep{Unet} architecture, whose goal is to create realistic segmentation maps from the input galaxy images. This constitutes the deterministic part of the model. On the other hand, an encoder compresses the images into a Gaussian latent space. This latent space is sampled, and the sample is concatenated to the output of the U-Net before a final convolution layer followed by a softmax activation, producing a non-deterministic output. The output is then thresholded to produce the desired segmentation map with overlapping regions.
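The inference-time sampling logic described above can be sketched in a few lines of NumPy. This is a minimal stand-in, not the actual trained model: \texttt{unet\_features}, \texttt{prior\_encoder} and the random final-convolution weights are hypothetical placeholders for learned networks, kept only to show how a latent sample is broadcast to every pixel, concatenated with the U-Net features, and passed through a final $1\times1$ convolution and softmax.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def unet_features(image):
    # stand-in for the deterministic U-Net trunk (hypothetical, untrained)
    return np.stack([image, -image], axis=-1)          # (H, W, F)

def prior_encoder(image, latent_dim=6):
    # stand-in for the image-only encoder: predicts mean and log-variance
    mu = np.full(latent_dim, image.mean())
    logvar = np.zeros(latent_dim)
    return mu, logvar

def sample_segmentation(image, n_classes=3):
    feats = unet_features(image)                       # (H, W, F)
    mu, logvar = prior_encoder(image)
    # one draw from the Gaussian latent space
    z = mu + np.exp(0.5 * logvar) * rng.standard_normal(mu.shape)
    # broadcast the latent sample to every pixel and concatenate
    zmap = np.broadcast_to(z, feats.shape[:2] + z.shape)
    x = np.concatenate([feats, zmap], axis=-1)         # (H, W, F + latent_dim)
    w = rng.standard_normal((x.shape[-1], n_classes))  # final 1x1 conv (random here)
    return softmax(x @ w)                              # per-pixel class probabilities

probs = sample_segmentation(rng.standard_normal((8, 8)))   # shape (8, 8, 3)
```

Each call to \texttt{sample\_segmentation} draws a fresh latent sample, which is what later allows an ensemble of realisations for the same input.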
During training, the latent space is regularised using a second encoder that takes as input the ground truth segmentation in addition to the input image. At inference time, that second encoder is removed and the samples are taken only from the latent space of the first encoder. That sampling yields multiple realisations of the segmentation and hence an estimate of the uncertainty at the pixel level. See Section \ref{sec:architecture} and Figure \ref{fig:model} in the appendix for more details on the specific architecture used.
The loss function is a weighted sum of two terms: a reconstruction loss and the Kullback--Leibler divergence \citep{KL} between the Gaussian distributions of the two latent-space encoders. The former takes care of producing realistic segmentation maps, while the latter ensures that the distribution inferred without access to the ground truth stays close to the one inferred with it.
We introduce two main modifications to the original PUnet model. First, we replace the reconstruction loss by a Dice loss \citep{dice}, better suited to our imbalanced segmentation problem (see Section \ref{sec:data}). We weight the Dice loss so that a wrong prediction on a blend has more importance than a wrong prediction on the background, which dominates the image. Second, we train our model from a single ground truth. While the original PUnet model was designed to cope with \emph{ambiguous} segmentations, i.e.\ images with different possible ground truths according to experts, in our case we only have one ground truth. We show that the model can nevertheless capture informative uncertainties from the diversity of the dataset.
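The two loss terms can be written compactly as follows. This is our own NumPy sketch under stated assumptions: the class weights and the $\beta$ coefficient are illustrative, not the paper's tuned values, and both latent encoders are assumed to output diagonal Gaussians (as in the original PUnet).

```python
import numpy as np

def weighted_dice_loss(probs, onehot, class_weights, eps=1e-7):
    # probs, onehot: (N, H, W, C). Soft Dice per class, weighted so that
    # errors on the rare blend class cost more than background errors.
    inter = (probs * onehot).sum(axis=(0, 1, 2))
    denom = probs.sum(axis=(0, 1, 2)) + onehot.sum(axis=(0, 1, 2))
    dice = (2.0 * inter + eps) / (denom + eps)
    w = np.asarray(class_weights, dtype=float)
    return float(np.sum(w * (1.0 - dice)) / w.sum())

def kl_diag_gaussians(mu_q, logvar_q, mu_p, logvar_p):
    # KL(q || p) between the posterior (image + ground truth) and prior
    # (image-only) latent encoders, both diagonal Gaussians.
    return float(0.5 * np.sum(
        logvar_p - logvar_q
        + (np.exp(logvar_q) + (mu_q - mu_p) ** 2) / np.exp(logvar_p)
        - 1.0))

def punet_loss(probs, onehot, q_params, p_params,
               class_weights=(1.0, 5.0, 20.0), beta=1.0):
    # total = weighted reconstruction (Dice) term + beta * KL term;
    # class_weights and beta here are hypothetical placeholder values
    return (weighted_dice_loss(probs, onehot, class_weights)
            + beta * kl_diag_gaussians(*q_params, *p_params))
```

A perfect prediction with matching latent distributions gives a loss of zero, and the KL term grows as the two encoders drift apart.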
\section{Data}\label{sec:data}
Our data consists of $128\times128$ image stamps extracted from a large simulated galaxy field, meant to reproduce the typical properties of future space missions both in terms of signal-to-noise and spatial resolution. In this first proof-of-concept work, the galaxies in the field are simulated as pure analytic S\'ersic profiles \citep{sersic}, using the \texttt{Galsim} software \citep{galsim}. A noise realisation is then added to each stamp. The true segmentation maps are created by taking all the pixels of a galaxy that are above the noise level of the image. This produces ternary maps: pixels belonging to the background are set to $0$, pixels of a single galaxy to $1$, and overlapping regions to $2$. We simulate $4$ galaxy fields of $25\,000\times25\,000$ pixels and extract the stamps from them, which represents a training set of $\sim150\,000$ images. We preprocess the images with an inverse hyperbolic sine (\texttt{arcsinh}), followed by a normalisation by the maximum of each stamp.
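The ternary labelling and the preprocessing are simple enough to state as code. The sketch below is our own reconstruction of the procedure described above (the exact thresholding used in the paper may differ in detail, e.g.\ in how the noise level is estimated):

```python
import numpy as np

def make_ternary_map(galaxy_stamps, noise_sigma):
    # galaxy_stamps: (n_gal, H, W) noiseless profiles; a pixel is labelled
    # 0 (background), 1 (one galaxy above the noise level) or 2 (overlap).
    above = galaxy_stamps > noise_sigma
    n_above = above.sum(axis=0)
    return np.clip(n_above, 0, 2).astype(np.int64)

def preprocess(stamp, eps=1e-12):
    # dynamic-range compression, then per-stamp normalisation by the maximum
    x = np.arcsinh(stamp)
    return x / max(np.abs(x).max(), eps)
```

On a stamp where two galaxies overlap, the overlap pixels receive label $2$ while pixels covered by a single profile receive label $1$.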
We emphasise three properties of the dataset which make the problem more complex. First, the three classes are strongly imbalanced: $\sim96\%$ of the pixels belong to the background, $\sim3\%$ to isolated galaxies, and $<1\%$ to the overlapping regions we would like to focus on. The imbalance between the classes is handled by the weighted Dice loss; we do not apply any data augmentation. Second, a significant fraction of the images contain no galaxies at all, which we call background stamps. The model could easily fall into a mode where it predicts only background. Finally, because we arbitrarily cut the entire fields into stamps following a fixed grid, a significant number of galaxies are cut at the borders of the stamps. Our algorithm therefore needs not only to learn the shape of a galaxy, but also to recognise a truncated galaxy profile.
Our test set is built following the exact same procedure, and contains $\sim 3000$ isolated galaxies and the same number of blended objects.
\section{Results}\label{sec: results}
\subsection{Images}\label{sec:images}
We illustrate in Figure \ref{fig:predictions} a set of typical prediction cases extracted from the test dataset to provide a qualitative overview of the model behaviour. Three interesting properties can be highlighted: (1) the capacity of the model to segment overlapping galaxies while retaining some uncertainty in the prediction on the overlapping region; (2) the accurate segmentation of multiple galaxies in the field, including truncated objects, with residuals concentrated towards the galaxy outskirts; and (3) the ability of the network to deal with empty stamps. That last case acts as a null test for the network and represents an improvement over the model from \citet{boucaud}, which fails to predict pure background noise.
\begin{table}
\centering
\begin{tabular}{l|c|c}
& Completeness & Purity \\
\hline
Isolated & 99.1 & 98.5 \\
\hline
Blended & 87.3 & 93.6
\end{tabular}
\caption{Global completeness and purity scores (in per cent) on a large field of galaxies.}
\label{tab:c/p}
\end{table}
\subsection{Completeness, Purity and IoU}
We use three standard metrics to evaluate the results: completeness, purity, and the classical segmentation metric of Intersection over Union (IoU) between the ground truth and the prediction.
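For concreteness, these three metrics can be computed from a boolean prediction mask as below. This is a pixel-level sketch of the standard definitions (completeness is the recall, purity the precision); the paper's object-level matching procedure is not specified here, so this is an illustration rather than the exact evaluation code:

```python
import numpy as np

def segmentation_scores(pred, truth):
    # pred, truth: boolean masks for one class (e.g. the blend class)
    tp = np.sum(pred & truth)          # true positives
    fp = np.sum(pred & ~truth)         # false positives
    fn = np.sum(~pred & truth)         # false negatives
    completeness = tp / (tp + fn)      # recall
    purity = tp / (tp + fp)            # precision
    iou = tp / (tp + fp + fn)          # Intersection over Union
    return completeness, purity, iou
```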
The results are summarised in Table \ref{tab:c/p} and Figure \ref{fig:IOU}. Our model detects almost all isolated galaxies up to magnitude $25.2$, the limiting magnitude of our simulated galaxies. The purity is also very high, at $98.5\%$. For the blended galaxies, the model also reaches high scores compared to other existing approaches: we measure a completeness of $87.3\%$ and a purity of $93.6\%$. Our model reaches an IoU above $0.8$ at any magnitude, and up to $0.9$ for bright objects (low magnitude). The results follow the same trend for blended objects, with lower values (as expected), yet still above $0.55$ at any magnitude.
\begin{figure}[!htb]
\centering
\begin{minipage}{.45\linewidth}
\centering
\includegraphics[width=\linewidth, height=0.2\textheight]{IOU_mag.png}
\caption{Intersection over Union in bins of magnitude, for isolated and blended galaxies. Larger magnitudes mean fainter objects with lower signal-to-noise.}
\label{fig:IOU}
\end{minipage}%
\hspace{1cm}
\begin{minipage}{0.45\linewidth}
\centering
\includegraphics[width=\linewidth, height=0.2\textheight]{var_mag.png}
\caption{Mean variance of the detected objects in bins of magnitude. The increase of the variance with magnitude is the expected behaviour for a physically motivated uncertainty.}
\label{fig:var_mag}
\end{minipage}
\end{figure}
\subsection{Uncertainty calibration}
The third column of Figure \ref{fig:predictions} shows the standard deviation of $50$ different predictions for the same input. The variance behaves in a physically sensible way: it is larger at the borders of the galaxies, where the flux is low and the prediction harder, while it remains low at the centre of the objects, even for fainter galaxies. We also show in Figure \ref{fig:var_mag} that the variance of the objects increases with magnitude, as should be expected if the uncertainty is physically meaningful.
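The ensemble procedure behind these uncertainty maps is straightforward: draw repeated stochastic predictions for the same input and take pixel-wise statistics. A minimal sketch, assuming \texttt{sample\_fn} wraps one stochastic forward pass of the model (a hypothetical callable, standing in for the trained network):

```python
import numpy as np

def predict_with_uncertainty(sample_fn, image, n_samples=50):
    # sample_fn(image) -> (H, W, C) class probabilities; stochastic because
    # each call draws a new latent-space sample
    stack = np.stack([sample_fn(image) for _ in range(n_samples)])
    mean = stack.mean(axis=0)
    std = stack.std(axis=0)            # pixel-wise uncertainty map
    segmentation = mean.argmax(axis=-1)
    return segmentation, std
```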
\section{Future developments}
This work is an ongoing project. Based on the promising preliminary results, we envision several improvements.
We plan to make the weights of the Dice loss trainable and to fine-tune the other hyperparameters. We will compare with state-of-the-art classical detection algorithms \citep[e.g.][]{sextractor}. For now, we have limited our analysis to a single dataset with properties similar to the training set; we will further investigate the behaviour of our model on more realistic data (observations or complex simulations), with other noise levels, etc. We will pursue the calibration of the uncertainty to better understand the link between the variance of the prediction and the aleatoric uncertainty. Finally, we may use a sliding window for the prediction of large fields instead of a fixed grid (see appendix~\ref{app:big_field}).
\section{Conclusion}
This work presents the first application of a probabilistic deep learning model to the segmentation of astronomical images. The model has been modified to take into account the specific properties of these images, such as low signal-to-noise and strong class imbalance. Our preliminary results indicate that our method achieves competitive performance for the detection of isolated and blended galaxies in large photometric fields, while providing an estimate of the model uncertainty.
\bibliographystyle{aa}
\section{Introduction}
The twistor space of a hypercomplex or a hyperk\"ahler manifold $M$ is a complex manifold $Z$ equipped with a holomorphic submersion $\pi:Z\to \oP^1$ and an antiholomorphic involution $\sigma$ covering the antipodal map. The manifold $M$ is then recovered as (a component of) the Kodaira moduli space of $\sigma$-invariant sections of $\pi$ with normal bundle splitting as $\bigoplus\sO(1)$.
In \cite{Sigma} the first author observed that, if $\dim M=4$, then we also obtain a hypercomplex or pseudo-hyperk\"ahler structure on a subset of the Douady space consisting of $\sigma$-invariant curves of degree $d$, $d>1$, which are cohomologically stable, i.e.\ satisfy $h^1(N_{C/Z}(-2))=0$.
In \cite{BP2} C. Peternell and the first author showed that in the case of curves of genus $0$ in $\oP^3\backslash \oP^1$ (i.e.\ in the twistor space of the flat $\oR^4$) this pseudo-hyperk\"ahler structure can be obtained as a pseudo-hyperk\"ahler quotient of a flat space by a non-reductive Lie group. Even in that case, however, we were unable to determine the signature of the metric for $d>3$.
\par
In this work we investigate the pseudo-hyperk\"ahler geometry of higher degree $\oP^1$'s embedded in the twistor space of an arbitrary $4$-dimensional hyperk\"ahler manifold. First of all, if such a $\oP^1$ of degree $d$ is to satisfy reality conditions, then $d$ must be odd. This has been proved in \cite[Prop.\ 5.9]{BP2}, but it also follows from the observation that a rational map $\phi:\oP^1\to\oP^1$ of degree $d$ can commute with the antipodal map only for odd $d$. With this restriction, let us denote by $M_d$ the subset of the Douady space consisting of $\sigma$-invariant cohomologically stable $\oP^1\subset Z$ of degree $d$. $M_d$ is hypercomplex, resp.\ pseudo-hyperk\"ahler, if $M$ is hypercomplex, resp.\ hyperk\"ahler. We remark that in this situation ``cohomologically stable'' is equivalent to $N_{C/Z}\simeq \sO_{\oP^1}(2d-1)\oplus \sO_{\oP^1}(2d-1)$.
Our main result is:
\begin{mtheorem} Let $M$ be a $4$-dimensional hyperk\"ahler manifold. Suppose that $d\in \oN$ is odd and $M_d$ is nonempty. Then:
\begin{itemize}
\item[(i)] the signature of the pseudo-hyperk\"ahler metric on $M_d$ is $(2d+2,2d-2)$;
\item[(ii)] there exists a natural submersion $\rho: M_d\to \oR\oP^{2d-2}$ and an open dense subset $U$ of $M_d$ such that
each fibre of $\rho|_U$ has a natural $d$-hypercomplex\footnote{A definition of a $d$-hypercomplex manifold may be found in \S\ref{realm}.} structure.
\end{itemize}
\end{mtheorem}
Part (ii) holds also if $M$ is only hypercomplex. The subset $U$ consists of $C$ such that the restricted vertical tangent bundle $(\Ker d\pi)|_C$ is isomorphic to $ \sO_{\oP^1}(d)\oplus \sO_{\oP^1}(d)$.
\par
The structure of the paper is as follows. In the next section we recall facts about the geometry of degree $d$ curves (of arbitrary arithmetic genus) in $Z$. We also interpret the $\sO_{\oP^1}(2)$-valued symplectic form on the fibres of
the twistor space of such curves directly in terms of the normal bundles of curves (as long as they are local complete intersections). In \S 2 we define and study the map $\rho$ without any reality assumptions. These are imposed in \S 3, where we prove Theorem A. Finally, we discuss in detail the case of degree $d$ $\oP^1$'s embedded in the twistor space of an ALE or ALF gravitational instanton of type $A_k$. In the ALE case we can actually view $M_d$ as an open subset of the real locus of the Hilbert scheme of degree $d$ rational curves on a singular Fano $3$-fold, a hypersurface in a weighted projective $4$-space (cf.\ Remark \ref{Fano}).
\vspace{1mm}
{\em Acknowledgement.} This work has been carried out while both authors were members of, and the second author was fully funded by, the DFG Priority Programme 2026 ``Geometry at infinity'', the support of which is gratefully acknowledged.
\section{Geometry of Douady spaces of curves in a twistor space\label{Douady}}
Let $Z$ be a complex $3$-dimensional manifold with a holomorphic submersion $\pi:Z\to \oP^1$. We write $\sO_Z(i)$ for $\pi^\ast \sO_{\oP^1}(i)$ and $\sF(i)$ for $\sF\otimes\sO_Z(i)$ for any sheaf $\sF$ on $Z$. We denote by $T_F$ the vertical tangent bundle $\Ker d\pi$ of $Z$. From the exact sequence
$$0\longrightarrow T_F\longrightarrow TZ \longrightarrow \pi^\ast T\oP^1\longrightarrow 0, $$
we conclude that $K_Z\simeq \Lambda^2 T^\ast_F(-2)$. In particular, an $\sO(2)$-valued symplectic form $\omega$ along the fibres of $\pi$, i.e.\ a trivialisation of $\Lambda^2 T^\ast_F(2)$, can be viewed as a nowhere vanishing section of $K_Z(4)$.
\par
We now consider the subset $X_{d}$ of the Douady space of $1$-dimensional compact subspaces of $Z$ consisting of subschemes $C$ such that
$\pi|_C:C\to \oP^1$ is flat of degree $d$. In particular, each such $C$ is pure-dimensional and Cohen-Macaulay. We denote by $X_{d}^{(i)}$, $i=0,1,2$, the subset of $X_{d}$ consisting of $C$, the normal sheaf $\sN_{C/Z}$ of which satisfies $h^1(\sN(-i))=0$. We summarize the main properties of $X_{d}^{(i)}$ as follows:
\begin{proposition} In each statement below suppose that the corresponding $X_{d}^{(i)}$, $i=0,1,2$, is nonempty.
\begin{itemize}
\item[(i)]$X_{d}^{(0)}$ is a smooth $4d$-dimensional manifold with a canonical isomorphism\\ $T_C X_{d}^{(0)}\simeq H^0(C,\sN_{C/Z})$ for each $C$.
\item[(ii)] $X_{d}^{(1)}$ is equipped with a natural integrable $2$-Kronecker structure, i.e.\ a holomorphic vector bundle $E$, $E_C= H^0(C,\sN_{C/Z}(-1))$, and a bundle map $\alpha:E\otimes \cx^2\to TX_{d}^{(1)}$ such that $\alpha(E\otimes v)$ is an integrable rank $2d$ distribution for any nonzero $v\in \cx^2$.
\item[(iii)] $X_{d}^{(2)}$ is a $\cx$-hypercomplex manifold, i.e.\ the map $\alpha$ is an isomorphism everywhere. Consequently $X_{d}^{(2)}$ is equipped with a holomorphic Obata connection, i.e. a torsion-free holomorphic connection with holonomy in $GL(d,\cx)\simeq GL(E)$.
\item[(iv)] If $Z$ is also equipped with an $\sO(2)$-valued symplectic form along the fibres of $\pi$, then $X_{d}^{(2)}$ is a
$\cx$-hyperk\"ahler manifold, i.e.\ it has a nowhere degenerate $\cx$-valued symmetric bilinear form $g$, such that the corresponding holomorphic Levi-Civita connection coincides with the Obata connection.
\end{itemize}\label{geom}
\end{proposition}
\begin{proof} Part (i) is easy in the case when $Z$ is quasiprojective. It follows then from the fact that codimension $2$ Cohen-Macaulay subspaces are locally unobstructed \cite[\S 2.8]{Hart}. In the general case we have to proceed differently. We consider the relative Hilbert scheme $Z^{[d]}_\pi$ of $d$ points along the fibres of $\pi$. It is a smooth $(2d+1)$-dimensional manifold with a holomorphic submersion $\pi^{[d]}:Z^{[d]}_\pi \to \oP^1$, and the same argument as in \cite[Prop.\ 3.1]{BP2} shows that $X_d$ is isomorphic to the Douady space of sections of $\pi^{[d]}$. Furthermore, \cite[Lemma 3.2]{BP2} remains true, so that the normal bundle $N_s$ of a section $s$ corresponding to a curve $C$ is isomorphic to $\pi_\ast \sN_{C/Z}$. Hence, if $H^1(C,\sN_{C/Z})=0$, then $h^1(N_s)=0$, and this means that the Douady space of sections is smooth at $s$. This proves (i).
Parts (ii)-(iv) have been proved in \cite{Sigma, BP1}.
\end{proof}
In the case when the curve $C$ is a {\em local complete intersection} (lci), we can say more:
\begin{proposition} \begin{itemize}
\item[(i)] If $C\in X_{d}^{(0)}$, resp. $C\in X_{d}^{(1)}$, is lci, then there is a canonical isomorphism $$T_C^\ast X_{d}^{(0)}\simeq H^1(C, \sN_{C/Z}\otimes K_Z),\quad \text{resp.}\enskip E_C^\ast \simeq H^1(C,\sN_{C/Z}\otimes K_Z(1)).$$
\item[(ii)] If $C\in X_{d}^{(2)}$ is lci, then there are additional canonical isomorphisms
$$ E_C\simeq H^1(C,\sN_{C/Z}(-3)),\quad E_C^\ast\simeq H^0(C,\sN_{C/Z}\otimes K_Z(3)).$$
\end{itemize}
\end{proposition}
\begin{proof} Write $N$ for the locally free sheaf $\sN_{C/Z}$. The adjunction formula holds for lci subschemes \cite[Ch.\ 6, Thm.\ 4.9]{QL},
and hence:
\begin{equation} K_C\simeq \left.{K_Z}\right|_C\otimes \Lambda^2 N.\label{KCZ}\end{equation}
Consequently:
$$ T_C^\ast X_{d}^{(0)}\simeq H^0(C,N)^\ast\simeq H^1(C,K_C\otimes N^\ast)\simeq H^1(C,K_Z\otimes N).$$
The second isomorphism in (i) follows completely analogously, given that $E_C\simeq H^0(C,N(-1))$.
\par
For (ii) observe that we have a short exact sequence
$$ 0\longrightarrow N(-3)\longrightarrow N(-2)\oplus N(-2)\longrightarrow N(-1)\longrightarrow 0, $$
from which the first isomorphism follows immediately, since $N(-2)$ has trivial cohomology. Since $N^\ast\otimes K_C(2)$ also has trivial cohomology, the same argument, using the exact sequence
$$ 0\longrightarrow N^\ast\otimes K_C(1)\longrightarrow \bigl(N^\ast\otimes K_C(2)\bigr)^{\oplus 2}\longrightarrow N^\ast\otimes K_C(3)\longrightarrow 0$$
and \eqref{KCZ}, shows the second isomorphism.
\end{proof}
It follows that a nowhere vanishing section $\omega$ of $\Lambda^2 T_F^\ast(2)\simeq K_Z(4)$ defines an isomorphism
\begin{equation}E_C\simeq H^0(C,\sN_{C/Z}(-1))\stackrel{\cdot\omega}{\longrightarrow} H^0(C,\sN_{C/Z}\otimes K_Z(3))\simeq E_C^\ast\label{omegad}\end{equation}
for any lci curve $C\in X_{d}^{(2)}$. Write $\omega^{[d]}$ for the corresponding nondegenerate bilinear form on $E$ given by $(s,t)\mapsto (s\omega)(t)$.
\begin{remark} This construction of a symplectic form on $E$ is due to Nash \cite{N}, who showed that the hyperk\"ahler structure on a moduli space of framed Euclidean $SU(2)$-monopoles can be obtained this way.\end{remark}
\par
Let $\zeta\in\oP^1$. For a $C\in X_{d}^{(2)}$, sections of $\sN_{C/Z}(-1)$ can be identified with the tangent space
to $C_\zeta=C\cap \pi^{-1}(\zeta)$ in the Hilbert scheme of $d$ points in the fibre $\pi^{-1}(\zeta)$. Formula \eqref{omegad} implies that $\omega^{[d]}$ coincides with the induced symplectic form \cite{Beau} on the Hilbert scheme of points (this is obvious on the subset where $C_\zeta$ consists of distinct points, and hence, by continuity, everywhere).
Therefore (cf.\ \cite{Sigma}) the symplectic form $\omega^{[d]}$ induces a $\cx$-hyperk\"ahler structure on $X_{d}^{(2)}$, and a pseudo-hyperk\"ahler structure on the $\sigma$-invariant subset of $X_{d}^{(2)}$, if $Z$ is equipped with an antiholomorphic involution $\sigma$ covering the antipodal map.
\par
Following Nash \cite{N}, we are going to give another proof of the skew-symmetry of $\omega^{[d]}$, since the argument will be helpful when proving Theorem A(i).
\begin{proposition} $\omega^{[d]}$ is skew-symmetric.\label{other}\end{proposition}
\begin{proof}\let\qed\relax We can express $\omega^{[d]}$ as the composition of the natural skew-symmetric map
$$H^0(C,N(-1))\times H^0(C,N(-1)){\to}H^0\bigl(C,(\Lambda^2N)(-2)\bigr)$$
with
$$ H^0\bigl(C,(\Lambda^2N)(-2)\bigr)\simeq H^0(C,K_Z^\ast(-2)\otimes K_C)\stackrel{\cdot(\omega\lambda)}{\longrightarrow} H^1(C,K_C)\simeq \cx,$$
where $\lambda\in H^1(C,\sO_C(-2))$ is the pullback of the extension class of
$$0\to \sO_{\oP^1}(-3)\to \sO_{\oP^1}(-2)\oplus \sO_{\oP^1}(-2)\to \sO_{\oP^1}(-1)\to 0.\quad\quad\Box$$
\end{proof}
\section{Rational curves}
Let $\pi:Z\to \oP^1$ be as in the previous section. We denote by $X_{d,0}$ the component of $X_d$ consisting of smooth rational curves $C\simeq \oP^1$, and write $X_{d,0}^{(i)}=X_d^{(i)}\cap X_{d,0}$, $i=0,1,2$. We remark that $C\in X_{d,0}^{(2)}$ if and only if its normal bundle is isomorphic to $\sO_{\oP^1}(2d-1)\oplus \sO_{\oP^1}(2d-1)$.
\par
For a $C\in X_{d,0}$, let $R_C$, resp.\ $B_C$, be the ramification divisor, resp.\ the branch divisor, of $\pi|_C$. These are $0$-dimensional subschemes of $\oP^1$ and we obtain a holomorphic map
\begin{equation} \rho: X_{d,0}\longrightarrow \oP^{2d-2}, \quad C\mapsto B_C.\label{rho}\end{equation}
\subsection{Covers of $\oP^1$ and their parametrisations\label{covers}}
In order to understand the map $\rho$, we make a brief detour. The map $\rho$ can be viewed abstractly as associating to a degree $d$ cover $\pi:C\to \oP^1$ its branch divisor $B_C$. On the other hand, we can also parameterise $C$, $f:\oP^1\to C$, and obtain a degree $d$ rational map $\phi=\pi\circ f$. Let $\Rat_d$ denote the space of degree $d$ rational maps $\oP^1\to\oP^1$. The quotient of $\Rat_d$ by $PGL(2,\cx)$ can be viewed as the moduli space of abstract degree $d$ covers of $\oP^1$, but since the action of $PGL(2,\cx)$ has fixed points, this quotient is not a manifold.
On the other hand, we can associate to $\phi$ its branch divisor. Classical Hurwitz conditions \cite{Hur} imply that, given an effective divisor $B$ of degree $2d-2$ on $\oP^1$, there exist, up to automorphisms, only finitely many rational maps $\phi:\oP^1\to \oP^1$ of degree $d$ with branch divisor $B$.
Let $\phi\in \Rat_d$ and consider the induced sequence
\begin{equation} 0\longrightarrow T\oP^1\stackrel{d\phi}{\longrightarrow} \phi^\ast T\oP^1\longrightarrow\sF_\phi\longrightarrow 0,\label{sF2}\end{equation}
where $\sF_\phi$ is supported on the ramification divisor of $\phi$. The space of global sections of the middle term is naturally isomorphic to $T_\phi\Rat_d$, while global sections of $T\oP^1$ correspond to infinitesimal automorphisms of $\oP^1$. Thus we can identify global sections of $\sF_\phi$ with deformations of the branch divisor $B$ of $\phi$, i.e.\ locally on $\Rat_d$ we have a natural isomorphism $H^0(\oP^1,\sF_\phi)\simeq T\oP^{2d-2}$.
\subsection{The geometry of the map $\rho$\label{map-rho}} With these preparations, we can prove:
\begin{proposition} The map $\rho$ is a submersion on an open subset of $X_{d,0}^{(0)}$ where $h^1({T_F}|_C)=0$. This open subset contains $X_{d,0}^{(2)}$.
\end{proposition}
\begin{proof} Let $C\in X_{d,0}^{(0)}$. We have an analogue of \eqref{sF2}:
\begin{equation} 0\longrightarrow TC\stackrel{d\pi}{\longrightarrow} \pi^\ast T\oP^1\longrightarrow\sF\longrightarrow 0.\label{sF}\end{equation}
The sheaf $\sF$ is supported on the ramification divisor $R_C$ and is isomorphic to $\sF_\phi$ for any parameterisation of $C$. Owing to the above discussion we have a natural isomorphism $T_{B_C}\oP^{2d-2}\simeq H^0(C,\sF)$.
\par
We also have the following two short exact sequences:
\begin{equation} 0\longrightarrow TC\longrightarrow TZ|_C{\longrightarrow} N_{C/Z}\longrightarrow 0,\label{normal}\end{equation}
\begin{equation} 0\longrightarrow T_F\longrightarrow TZ\stackrel{d\pi}{\longrightarrow} \pi^\ast T\oP^1\longrightarrow 0,\label{dpi}\end{equation}
where $T_F$ is the vertical tangent bundle.
Observe that the composition
$$ TZ|_C\stackrel{d\pi}{\longrightarrow} \pi^\ast T\oP^1\longrightarrow \sF$$
factors through $N_{C/Z}$, and we obtain the following short exact sequence of sheaves on $C$:
\begin{equation} 0\longrightarrow {T_F}|_C\longrightarrow N_{C/Z}\longrightarrow \sF\longrightarrow 0.\label{NtoF}\end{equation}
The induced map $H^0(C,N_{C/Z})\to H^0(C,\sF)$ is $d\rho|_C$, and the first statement follows. If $C\in X_{d,0}^{(2)}$, then $N_{C/Z}\simeq \sO_{\oP^1}(2d-1)^{\oplus 2}$. Sequence
\eqref{normal} implies then that the direct summands of $TZ|_C$ have degree at most $2d-1$. Sequence \eqref{dpi}, restricted to $C$, implies now that the direct summands of ${T_F}|_C$ have degree at most $2d-1$.
Since $c_1({T_F}|_C)=2d$, it follows that the direct summands of ${T_F}|_C$ have positive degree, and hence $h^1({T_F}|_C)=0$.
\end{proof}
We now consider the structure of the fibres of $\rho$. As discussed in \S\ref{covers}, the connected components of $\rho^{-1}(B)$ correspond to $PGL(2,\cx)$-orbits of rational maps with branch divisor $B$. Let us fix such a rational map $\phi:\oP^1\to\oP^1$, and suppose that there exists a $C_0\in X^{(0)}_{d,0}$ with a parameterisation $f_0:\oP^1\to C_0$ such that $\pi\circ f_0= \phi$. Then the connected component $X_\phi$ of $\rho^{-1}(B)$ containing $C_0$ is isomorphic to the space of embeddings $f:\oP^1\to Z$ such that $\pi\circ f=\phi$. Let $\phi^\ast Z$ denote the fibred product
\begin{equation} \phi^\ast Z=\{(t,z)\in \oP^1\times Z\:;\: \phi(t)=\pi(z)\},\label{fibred}
\end{equation}
and $\tilde\phi:\phi^\ast Z\to Z$ the projection on the second coordinate.
We conclude that $X_\phi$ is isomorphic to the open subset of the Kodaira moduli space of sections $s$ of $\phi^\ast Z\to \oP^1$ such that $\tilde\phi\circ s$ is an isomorphism.
The tangent space to $X_\phi$ at $C$ is canonically isomorphic to $H^0(C, {T_F}|_C)$ (on the open subset where $h^1({T_F}|_C)=0$).
\begin{remark} Let $s:\oP^1\to Z$ be a section of $\pi$ with normal bundle isomorphic to $\sO(1)\oplus \sO(1)$. Then $s\circ \phi$ is a section of $\phi^\ast Z$ with normal bundle $\sO(d)\oplus\sO(d)$. Hence $\phi^\ast Z$ has a $(2d+2)$-dimensional smooth family of sections with this normal bundle. A generic element of this family will map to an embedded $\oP^1$ in $Z$. Sequence \eqref{NtoF} implies then that the normal bundle $N$ of this $\oP^1$ satisfies $h^1(N(-1))=0$. Consequently, each fibre of $\rho$ contains elements of $X_{d,0}^{(1)}$.\label{X1}
\end{remark}
\begin{example} Let $Z$ be the twistor space of the flat $\oR^4$, i.e. the total space of $\sO_{\oP^1}(1)\oplus \sO_{\oP^1}(1)$. Equivalently $Z=\oP^3\backslash \oP^1$, where $\oP^1=\{[z_0,z_1,0,0]\}$. The map $\pi:Z\to \oP^1$ is then the projection onto the last two coordinates. Let $C$ be a degree $d$ rational curve in $Z$, parameterized by $[f_0(u,v),\dots,f_3(u,v)]$, where $f_i(u,v)$ are homogeneous polynomials of degree $d$, $i=0,\dots,3$. The normal bundle of $C$ is then the cokernel of
$Df:\sO_{\oP^1}(1)\oplus\sO_{\oP^1}(1)\to \sO_{\oP^1}(d)^{\oplus 4}$, where $Df$ is the Jacobian matrix of $(f_0,\dots,f_3)$ \cite{GS}. The sheaf $\sF$ is the cokernel of $D\phi:\sO_{\oP^1}(1)\oplus\sO_{\oP^1}(1)\to \sO_{\oP^1}(d)\oplus \sO_{\oP^1}(d)\simeq {T_F}|_C$ where $\phi=(f_2,f_3)$, and
$T_F=\{(a,b,0,0)\in TZ\}$. If $N_{C/Z}\simeq \sO_{\oP^1}(2d-1)\oplus \sO_{\oP^1}(2d-1)$, then we have an exact sequence
$$ 0\to \sO_{\oP^1}(1)\oplus\sO_{\oP^1}(1)\stackrel{Df}{\longrightarrow} \sO_{\oP^1}(d)^{\oplus 4}\stackrel{(\alpha_1,\alpha_2)}{\longrightarrow} \sO_{\oP^1}(2d-1)\oplus \sO_{\oP^1}(2d-1)\to 0,$$
where $\alpha_1$ and $\alpha_2$ are $2\times 2$ matrices of degree $d-1$ homogeneous polynomials in $u,v$. If we write
$\phi=(f_2,f_3)$ and $\psi=(f_0,f_1)$, then the exactness of the above sequence implies $\alpha_1D\psi+\alpha_2D\phi=0$. The sequence \eqref{NtoF} is then
$$ 0\longrightarrow \sO_{\oP^1}(d)\oplus \sO_{\oP^1}(d) \stackrel{\alpha_1}{\longrightarrow} \sO_{\oP^1}(2d-1)\oplus \sO_{\oP^1}(2d-1)\longrightarrow \sF\longrightarrow 0.$$
The connected component $X_\phi$ of $\rho^{-1}(B)$ is an open subset of $\cx^{2d+2}$ consisting of pairs $(f_0(u,v),f_1(u,v))$ of homogeneous polynomials of degree $d$ such that $[f_0,f_1,f_2,f_3]$ is an embedding.\label{P3}
\end{example}
\section{Real manifolds\label{realm}}
We now suppose, in addition, that $Z$ is equipped with an antiholomorphic involution $\sigma$ covering the antipodal map on $\oP^1$. We denote by $M_d^{(i)}$, $i=0,1,2$, the $\sigma$-invariant part of the corresponding $X_d^{(i)}$, and by $M_{d,0}^{(i)}$ the $\sigma$-invariant part of $X_{d,0}^{(i)}$. The manifolds $M_d^{(1)}$ and $M_d^{(2)}$ are equipped with the real versions of the geometry stated in Proposition \ref{geom}, i.e.\ an integrable quaternionic $2$-Kronecker structure in the case of $M_d^{(1)}$, and a hypercomplex or pseudo-hyperk\"ahler structure on $M_d^{(2)}$. In the case of rational curves we have the following restriction on $d$:
\begin{lemma}{\cite[Prop.\ 5.9]{BP2}} Let $C$ be a connected projective curve of arithmetic genus $0$ equipped with a flat projection $\pi:C\to\oP^1$ of degree $d$. If $C$ admits an antiholomorphic involution covering the antipodal map on $\oP^1$, then $d$ is odd.\hfill $\Box$\label{d-odd}\end{lemma}
We assume, therefore, that $d$ is odd. The restriction of \eqref{rho} to $ M_{d,0}^{(0)}$ yields a smooth map
\begin{equation} \rho: M_{d,0}^{(0)}\longrightarrow \oR\oP^{2d-2}.\label{rho2}\end{equation}
The connected components of its fibres correspond to $SO(3)$-orbits of rational maps $\phi:\oP^1\to \oP^1$ of degree $d$ which commute with the antipodal map (up to automorphisms). We recall the notion of a $d$-hypercomplex manifold \cite{DM1,DM2,BiTAMS}:
\begin{definition} Let $d\in \oN$ be odd. An almost $d$-hypercomplex structure on a smooth manifold $M$ is given by an isomorphism
$T^\cx M\simeq E\otimes \cx^{d+1}$, where $E$ is a quaternionic vector bundle. Moreover, this isomorphism is required to intertwine the complex conjugation on $T^\cx M$ and the tensor product of the quaternionic structure on $E$ and the standard quaternionic structure on $\cx^{d+1}$.
\par
An almost $d$-hypercomplex structure is integrable, i.e.\ a $d$-hypercomplex structure, if, for each Borel subgroup $B_\zeta\subset SL(2,\cx)$, $\zeta\in \oP^1$, the subbundle $E\otimes K_\zeta$ is involutive, where $K_\zeta$ is the direct sum of all, except the lowest, weight subspaces of $\cx^{d+1}$ for the standard irreducible representation of $SL(2,\cx)$. \label{d-hcx}
\end{definition}
As discussed in \cite{BiTAMS}, this is the natural geometry on the space of sections of a holomorphic submersion $\pi:Z\to \oP^1$, the normal bundle of which splits as $\bigoplus\sO(d)$.
\begin{proposition} The fibres of the map $\rho$ restricted to the open subset of $ M_{d,0}^{(0)}$ where ${T_F}|_C\simeq \sO_{\oP^1}(d)\oplus\sO_{\oP^1}(d)$
have a natural $d$-hypercomplex structure.
\end{proposition}
\begin{proof} Let $M_\phi$ be a connected component of a fibre of $\rho$ determined by a rational map $\phi:\oP^1\to \oP^1$ of degree $d$, commuting with the antipodal map. The arguments of the previous section imply that $M_\phi$ is an open subset of the $\sigma$-invariant part of the Kodaira moduli space of sections of $\phi^\ast Z$. As observed in the previous section, the normal bundle of such a section corresponding to $C$ in the open subset of the statement is isomorphic to ${T_F}|_C\simeq \sO_{\oP^1}(d)\oplus\sO_{\oP^1}(d)$.
\end{proof}
\subsection{Signature of the metric}
Suppose now that $Z$ is also equipped with an $\sO(2)$-valued symplectic $2$-form along the fibres of $\pi$, which is compatible with the real structure. Then the manifold $M_{d,0}^{(2)}$ has a natural pseudo-hyperk\"ahler metric $g$. We shall now use the description of the induced symplectic form $\omega^{[d]}$ on the bundle $E$ over $M_{d,0}^{(2)}$, given in \S\ref{Douady}, to determine the signature of the metric. A real tangent vector in $TM_{d,0}^{(2)}\simeq E\otimes \cx^2$ can be written as $x=(e,-je)$, where $j$ is the quaternionic structure of $E$, and the metric is then (cf., e.g., \cite[(3.103)]{HKLR})
\begin{equation}g(x,x)=-2\omega^{[d]}(e,je).\label{metric}\end{equation}
\par
Let now $C\in M_{d,0}^{(2)}$. We fix a parametrisation $r:\oP^1\to C$ such that $\sigma\circ r=r\circ \sigma$ (where $\sigma:\oP^1\to\oP^1$ denotes the antipodal map) and consider its composition with $\pi|_C$. This is a degree $d$ rational map, commuting with the antipodal map. Without loss of generality we can assume that neither $0$ nor $\infty$ is mapped to $\infty$. We can then write this rational map as $p(t)/q(t)$, where $p$ and $q$ are relatively prime degree $d$ polynomials in the affine coordinate $t$ on $\oP^1$.
\par
Since $C\in M_{d,0}^{(2)}$, its normal bundle $N$ is isomorphic to $\sO_{\oP^1}(2d-1)\oplus \sO_{\oP^1}(2d-1)$. Then $N(-1)\simeq\sO_{\oP^1}(d-1)\oplus\sO_{\oP^1}(d-1)$, and the isomorphism
$$E_C\otimes \cx^2\simeq H^0(C,N(-1))\otimes \cx^2\longrightarrow H^0(C,N)\simeq T_C M_{d,0}^{(2)}$$
can be written as
$$ H^0(\oP^1,\sO(d-1)\oplus\sO(d-1))\otimes\cx^2 \ni \bigl((f_1,g_1),(f_2,g_2)\bigr)\longmapsto (pf_1+qg_1,pf_2 +qg_2).$$
We can also assume that the quaternionic structure of $E_C$ is the standard one on $ H^0(\oP^1,\sO(d-1)\oplus\sO(d-1))$, i.e.
$$ j\bigl(f(t),g(t)\bigr)=t^{d-1}\Bigl(-\ol{g(-1/\bar t\,)},\ol{f(-1/\bar t\,)}\Bigr).$$
We now unravel the description of $\omega^{[d]}$, given in the proof of Proposition \ref{other}.
Let $(f_i,g_i)\in H^0(C,N(-1))$, $i=1,2$, be two sections, consisting of pairs of polynomials of degree $d-1$. Then $f_1g_2-g_1f_2\in H^0\bigl(C,(\Lambda^2N)(-2)\bigr)$, which we view (using $\omega\in H^0(C,K_Z(4)|_C)$) as a section of $H^0(C,K_C(2))$. The corresponding meromorphic $1$-form is $(f_1g_2-g_1f_2)(q^2(t))^{-1}dt$, and it has poles bounded by $2(q(t))$. The extension class in $H^1(\oP^1,\sO_{\oP^1}(-2))$ can be viewed as the Laurent tail $\zeta^{-1}\cdot \infty$, and its pullback is then
$$\sum_{i=1}^d \bigl( \text{linear term of $q(t)/p(t)$ at $t=t_i$} \bigr)\cdot t_i,$$
where $t_i$ are the zeros of $q$ (since $\zeta=p(t)/q(t)$). The pairing of $H^0(C,K_C(2))$ and $H^1(C,\sO(-2))$ is given by the residue map
$$\sum_{i=1}^d \Res_{t=t_i} \frac{q(t)}{p(t)}(f_1g_2-g_1f_2)(q^2(t))^{-1}dt=\sum_{i=1}^d \Res_{t=t_i} \frac{f_1g_2-g_1f_2}{p(t)q(t)}dt.$$
Let us write, for a polynomial of degree $k$,
$$ \tau (f)(t)=(-t)^k\ol{f\bigl(-1/\bar t\bigr)}.$$
The square of this map is ${\rm Id}$ if $k$ is even, and $-{\rm Id}$ if $k$ is odd. The fact that $p/q$ commutes with the antipodal map means that $p=-\tau(q)$.
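For $d$ odd this yields a quick consistency check (ours, spelled out for the reader) that $j$ is indeed a quaternionic structure:

```latex
% Since d-1 is even, (-t)^{d-1} = t^{d-1}, so on pairs of degree d-1
% polynomials the formula for j reads j(f,g) = (-\tau(g), \tau(f)).
% As \tau is antilinear and \tau^2 = Id on polynomials of even degree k = d-1,
\[
  j^2(f,g) \;=\; j\bigl(-\tau(g),\tau(f)\bigr)
           \;=\; \bigl(-\tau^2(f),\,-\tau^2(g)\bigr)
           \;=\; -(f,g),
\]
% i.e. j^2 = -Id, as required of a quaternionic structure.
```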
\par
Let $\gamma$ be a simple contour in $\cx$ separating the roots of $q$ from the roots of $\tau(q)=-p$. It follows from the above and from \eqref{metric} that the metric on $T_C M_{d,0}^{(2)}$ is equal to
\begin{equation} \|x\|^2=\bigl\|\bigl((f,g),-j(f,g)\bigr)\bigr\|^2=\frac{1}{\pi i}\oint_\gamma \frac{f\tau(f)+g\tau(g)}{q\tau(q)}dt.\label{hermitian}\end{equation}
We want to determine the signature of the right-hand side on pairs $(f,g)$ of polynomials of degree $d-1$. By continuity, it is enough to compute the signature for one particular $q$, say $q(t)=t^d$. Then $\tau(q)=1$ and the right-hand side is the coefficient of the middle-degree term of $2f\tau(f)+2g\tau(g)$, i.e.:
$$ 2\sum_{i=0}^{d-1}(-1)^{d-1-i}|f_i|^2+2\sum_{i=0}^{d-1}(-1)^{d-1-i}|g_i|^2.$$
Therefore the signature of the metric $g$ is $(2d+2,2d-2)$.
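As a sanity check of the signature formula (ours), specialise to $d=1$:

```latex
% For d = 1 each sum has the single term i = 0, with sign (-1)^{d-1-i} = +1,
% so for every nonzero tangent vector
\[
  \|x\|^2 \;=\; 2|f_0|^2 + 2|g_0|^2 \;>\; 0,
\]
% giving real signature (4,0) = (2d+2, 2d-2): the metric is positive
% definite, as expected of a hyperkahler 4-manifold.
```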
\section{Example: gravitational instantons of type $A_k$}
Consider an ALE or an ALF gravitational instanton $M$ of type $A_k$. We recall, after Hitchin \cite{Hit-pol}, its construction using twistor methods. The twistor space of $M$ has a singular model given as a hypersurface $\bar Z$ in the total space of a vector bundle over $T\oP^1$. If $\zeta$ is the affine coordinate on $\oP^1$ and $\eta$ is the corresponding fibre coordinate on $T\oP^1$, we denote by $L^c$, $c\in \cx$, the line bundle on $T\oP^1$ with transition function $\exp(-c\eta/\zeta)$ from $\zeta\neq\infty$ to $\zeta\neq 0$. Then $\bar Z$ is given by
\begin{equation*} \{(x,y,z)\in L^c(k)\oplus L^{-c}(k)\oplus \sO_{\oP^1}(2)\,;\, xy=\prod_{i=1}^k(z-a_i(\zeta))\}\label{hypers}\end{equation*}
where $c$ is real and $a_i$ are quadratic polynomials satisfying reality conditions. $M$ is then the space of real sections of $\pi:\bar Z\to \oP^1$ obtained by choosing an arbitrary real section $z(\zeta)=(x_2+ix_3)+2x_1\zeta-(x_2-ix_3)\zeta^2$ of $\sO_{\oP^1}(2)$, and dividing the set of all zeros of $z(\zeta)-a_i(\zeta)$, $i=1,\dots,k$, into two subsets $\Delta_1$, $\Delta_2$, interchanged by the antipodal map. This can be done consistently as shown in \cite{Hit-pol}. The sections of $\pi:\bar Z\to \oP^1$ are then
\begin{equation} x(\zeta)=Ae^{c(x_1-(x_2-ix_3)\zeta)}\prod_{\zeta_i\in\Delta_1}(\zeta-\zeta_i),\quad y(\zeta)=Be^{-
c(x_1-(x_2-ix_3)\zeta)
}\prod_{\zeta_i\in\Delta_2}(\zeta-\zeta_i),
\label{xy}\end{equation}
over $\zeta\neq \infty$.
The nonzero scalars $A,B$ are determined up to a circle action, which yields an isometric $S^1$-action on $M$.
We remark that resolving the singularities of $\bar Z$ is not necessary for computing $M$ and its metric.
\par
We now discuss the geometry of real (i.e.\ $\sigma$-invariant) $\oP^1$'s of degree $d$, $d$ odd, in $\bar Z$.
Let $\phi=p(t)/q(t)$ be a rational map of degree $d$, commuting with the antipodal map, and assume for simplicity that $t=\infty$ is not a pole of $\phi$.
The function $\phi$ can be viewed as the transition function for the bundle $\sO_{\oP^1}(d)$ from $U_0=\{q\neq 0\}$ to $U_1=\{p\neq 0\}$. A section of $\sO_{\oP^1}(kd)$ is then represented by $b/q^k$ on $U_0$ and $b/p^k$ on $U_1$, where $b$ is a polynomial of degree $kd$. Let $z=b/q^2$ be a section of $\sO_{\oP^1}(2d)$ and write $b=b_0p+b_1q$. We get a section of the line bundle $L_\phi^c$ with transition function $\exp(cz/\phi)$ by setting
$$(s_0,s_1)=\bigl(\exp(-cb_0/q),\exp(cb_1/p)\bigr)$$
in $U_0$ and $U_1$ respectively. If we now consider the fibred product $\phi^\ast \bar Z$, as in \S\ref{map-rho}, then its sections, and hence the fibre of the map $\rho: M_{d,0}^{(2)}\to \oR\oP^{2d-2}$, are obtained in the same way as for $d=1$: choose an arbitrary real section $z(t)=b(t)/q(t)^2$ of $\sO_{\oP^1}(2d)$, divide the zeros of all $z(t)-a_i(\phi(t))$ into two sets, and obtain $x(t), y(t)$ as in \eqref{xy}, replacing the exponential factors by $\exp(-cb_0/q)$ over $q(t)\neq 0$ and by $\exp(cb_1/p)$ over $p(t)\neq 0$. The space of real sections of $\phi^\ast\bar Z$ with normal bundle $\sO(d)\oplus\sO(d)$ is nonempty owing to Remark \ref{X1}, and is a $d$-hypercomplex analogue of the original gravitational instanton, as introduced in \cite[\S 3.1.2]{DM2}.
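One can verify directly (our computation) that the two local expressions glue to a section of $L_\phi^c$:

```latex
% Writing b = b_0 p + b_1 q gives
\[
  \frac{z}{\phi} \;=\; \frac{b}{q^2}\cdot\frac{q}{p} \;=\; \frac{b}{pq}
  \;=\; \frac{b_0}{q} + \frac{b_1}{p},
\]
% so on the overlap U_0 \cap U_1
\[
  \frac{s_1}{s_0}
  \;=\; \exp\!\Bigl(\frac{cb_1}{p}\Bigr)\exp\!\Bigl(\frac{cb_0}{q}\Bigr)
  \;=\; \exp\!\Bigl(\frac{cz}{\phi}\Bigr),
\]
% which is exactly the transition function of L_\phi^c.
```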
\par
A generic section $s$ of $\phi^\ast\bar Z$ will yield an embedded $\oP^1$ in $\bar Z$, and hence,
by varying $\phi$, we obtain a $4d$-dimensional space of embedded real $\oP^1$'s of degree $d$ in $\bar Z$.
\par
We claim that, for generic $c$ and $a_i$, $i=1,\dots,k$, the normal bundle of a generic such curve is $\sO(2d-1)\oplus \sO(2d-1)$, i.e.\ $M_{d,0}^{(2)}$ is nonempty (and hence of dimension $4d$). Indeed, were this not the case, the normal bundle of every degree $d$ rational curve (flat over $\oP^1$) in the twistor space of $\bigl(\oR^4\backslash \{0\}\bigr)/\oZ_k$ would also be different from $\sO(2d-1)\oplus \sO(2d-1)$. This twistor space $Z_0$ is the quotient by $\oZ_k$ of the total space $W$ of $\sO(1)\oplus\sO(1)$ with the zero section removed. In particular a generic degree $d$ rational curve in $W$ descends to a degree $d$ rational curve in $Z_0$ with isomorphic normal bundle. Since $W$ is an open subset of $\oP^3$, a generic degree $d$ $\oP^1$ in $W$ has normal bundle isomorphic to $\sO(2d-1)\oplus \sO(2d-1)$ \cite{GS}. This contradiction proves our claim.
\par
We can say more in the case $c=0$, i.e.\ when $Z$ is the twistor space of an ALE manifold.
The fibred product $\phi^\ast \bar Z$ is then a hypersurface in the total space of the vector bundle $E_d=\sO(kd)\oplus\sO(kd)\oplus\sO(2d)$.
If $s$ is a section of $\phi^\ast \bar Z$, given by homogeneous polynomials $x(u,v),y(u,v),z(u,v)$ of degrees $kd,kd,$ and $2d$, then its normal bundle fits into a short exact sequence
\begin{equation} 0\longrightarrow N_{s/\phi^\ast\bar Z} \stackrel{j}{\longrightarrow} E_d\longrightarrow \sO(2kd) \longrightarrow 0,\label{NCphi}\end{equation}
since $N_{s/E_d}\simeq E_d$. The projection $ E_d\longrightarrow \sO(2kd)$ is given by $$\Bigl[y,x,-\sum_i\prod_{j\neq i}(z-a_j)\Bigr]^T,$$
from which one can compute $N_{s/\phi^\ast\bar Z} $.
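As a degree check (ours) on the sequence \eqref{NCphi}:

```latex
% deg E_d = kd + kd + 2d, and the quotient has degree 2kd, so
\[
  \deg N_{s/\phi^\ast\bar Z} \;=\; 2kd + 2d - 2kd \;=\; 2d,
\]
% consistent with the generic splitting O(d) \oplus O(d).
% For d = k = 1 the sequence specialises to
\[
  0 \longrightarrow N \longrightarrow \sO(1)\oplus\sO(1)\oplus\sO(2)
    \longrightarrow \sO(2) \longrightarrow 0,
\]
% recovering N \simeq O(1) \oplus O(1), the normal bundle of a twistor line.
```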
The original twistor space $\bar Z$ is a hypersurface in the total space of the bundle $E_1$. We can view the curve $C$ given by $\phi $ and $s$ as being embedded in $E_1$. Its normal bundle fits then into a short exact sequence (cf.\ Ex.\ \ref{P3}):
\begin{equation} 0\longrightarrow \sO(1)\oplus\sO(1) \stackrel{\Psi}{ \longrightarrow} \sO(kd)^{\oplus 2}\oplus\sO(2d)\oplus\sO(d)^{\oplus 2} \longrightarrow N_{C/E_1} \longrightarrow 0,\label{NE1}\end{equation}
\nopagebreak[4]
where $\Psi$ is the Jacobi matrix of $[x(u,v),y(u,v),z(u,v),p(u,v),q(u,v)]^T$. We can extend \eqref{NCphi} and \eqref{NE1} to a commutative diagram:
$$
\begin{tikzcd} & & 0 \ar[d]& 0\ar[d] &\\[-6pt] & & N_{s/\phi^\ast\bar Z}\oplus\sO(d)^{\oplus 2}\ar[r,"\nu"] \ar[d, "j\oplus {\rm Id}"] & N_{C/\bar Z}\ar{r}\ar{d} & 0\\
0 \ar[r] & \sO(1)\oplus\sO(1)\ar[r, "\Psi"] \ar[ru, "\lambda", dashrightarrow] & E_d\oplus\sO(d)^{\oplus 2}\ar[r] \ar[d] &N_{C/E_1} \ar[r]\ar[d] & 0\\ & & \sO(2kd)\ar[d]\ar[r, equal] &\sO(2kd)\ar[d] &\\[-6pt] & & 0 & 0 &
\end{tikzcd}
$$
It is now a matter of (complicated) linear algebra to compute the map $\lambda$ from $\Psi$ and from the vertical projection $E_d\to \sO(2kd)$. Assuming that $ N_{s/\phi^\ast\bar Z}\simeq \sO(d)\oplus\sO(d)$, $\lambda$ will be a $4\times 2$ matrix of degree $d-1$ polynomials in $u,v$, the coefficients of which depend on the polynomials $x,y,z,p,q$. The map $\nu$ can then be computed from $\lambda$, and the condition that $N_{C/\bar Z}\simeq \sO(2d-1)\oplus\sO(2d-1)$ can be written as a determinant of a matrix given by coefficients of $\lambda$. The polynomials $x$ and $y$ depend (up to scale) algebraically on $z,p,q$ and, hence, $X_{d,0}^{(2)}$, and its real part $M_{d,0}^{(2)}$, are described by this algebraic relation between the coefficients of arbitrary polynomials $z,p,q$ of degrees $2d,d,d$.
\begin{remark}
In principle, once the maps $\lambda$ and $\nu$ are determined, the pseudo-hyperk\"ahler metric on $M_{d,0}^{(2)}$ can be computed using the method of the previous section.
\end{remark}
\begin{remark} We should like to point out that in the ALE case, the singular model $\bar Z$ compactifies to a hypersurface in the weighted projective space $\oP=\oP(1,1,2,k,k)$. For more details on this compactification see \cite{Kron}. Since the degree of the hypersurface is $2k$, the adjunction formula implies that its canonical sheaf is isomorphic to $\sO_{\oP}(2k-1-1-2-k-k)\simeq \sO_{\oP}(-4)$, and hence the compactification of $\bar Z $ is Fano.
\label{Fano}\end{remark}
\section{Introduction}
We will use the following hypotheses and notations throughout the paper:
\[\mathbb{N} \triangleq \left\{ {1, 2, \ldots, m, \ldots } \right\},~\mathbb{Z} \triangleq \left\{ {0, \pm 1, \ldots, \pm m, \ldots } \right\},\]
\[\mathbb{R} \triangleq \left( { - \infty ,\infty } \right),~\left( {{a_1}, {a_2}, \cdots, {a_n}} \right) \in {\mathbb{R}^n},~E_{n} \subseteq {\mathbb{R}^n},\;n \in \mathbb{N},~n\geq 2.\]
If the function $f:E_{n}\to \mathbb{R}$ satisfies the condition
\begin{equation*}
f\left( {{a_{k + 1}},{a_{k + 2}}, \cdots, {a_{k + n}}} \right) = f\left( {{a_1},{a_2}, \cdots, {a_n}} \right),\;\forall k \in \mathbb{Z},
\end{equation*}
where
\begin{equation}\label{1.}
{a_k} = {a_i} \Leftrightarrow k \equiv i\;\left( {\bmod \;n} \right),\;\forall k \in \mathbb{Z},\;i = 1, 2, \ldots, n,
\end{equation}
then we say that $f$ is a \textbf{cyclic function}.
According to the above definition, we know that, under the hypotheses in (\ref{1.}), for any function $\chi :E_{m} \to \mathbb{R}, ~2\leq m\leq n$, the function
\[f:E_{n} \to \mathbb{R},\;f\left( {{a_1},{a_2},\cdots, {a_n}} \right) \triangleq \sum_{i = 1}^{n } {\chi \left( {{a_{i }},{a_{i + 1}}, \cdots ,{a_{i + m-1}}} \right)}=\sum_{\text{cyc}:~n}^{1\leq i\leq n} {\chi \left( {{a_{i }},{a_{i + 1}}, \cdots, {a_{i + m-1}}} \right)}\]
is a cyclic function, where, here and in what follows, $\sum_{\text{cyc}:~n}^{1\leq i\leq n}$ denotes the \textbf{cyclic summation}. For example,
\[\sum_{\text{cyc}:~n}^{1\leq i\leq n} {a_i^{{a_{i + 1}}}}={ \sum_{i=1}^{n-1} {a_i^{{a_{i+1}}}}+a_n^{{a_{n+1}}}}={ \sum_{i=1}^{n-1} {a_i^{{a_{i+1}}}}+a_n^{{a_1}}},~\forall n\geq 2.\]
In particular, the function
$f\left( {{a_1},{a_2},\cdots, {a_n}} \right) \triangleq \sum_{i = 1}^{n }a_{i}$ is a cyclic function. In general, if $f:E_{n} \to \mathbb{R}$ is a symmetric function, then $f$ is a cyclic function.
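A minimal computational sketch of the cyclic summation above (the helper names `cyclic_sum` and `is_cyclic` are ours, not from the text): indices are read modulo $n$, as in (\ref{1.}), so every length-$m$ window wraps around the tuple.

```python
def cyclic_sum(chi, a, m):
    """Return sum_{i=1}^{n} chi(a_i, a_{i+1}, ..., a_{i+m-1}), indices mod n."""
    n = len(a)
    return sum(chi(*(a[(i + j) % n] for j in range(m))) for i in range(n))

def is_cyclic(f, a, tol=1e-12):
    """Check that f is invariant under every cyclic shift of its arguments."""
    n = len(a)
    vals = [f(tuple(a[k:]) + tuple(a[:k])) for k in range(n)]
    return all(abs(v - vals[0]) <= tol for v in vals)

# f(a_1,...,a_n) = sum_i a_i^{a_{i+1}} is a cyclic function:
f = lambda a: cyclic_sum(lambda x, y: x ** y, a, 2)
```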
Let $\left( {{a_1},{a_2},\cdots, {a_n}} \right)\in(0,\infty)^{n},~n\geq2.$ Then we say that the function
\begin{equation*} \text{C}:(0,\infty)^{n}\rightarrow \mathbb{R},~\text{C}\left( {{a_1}, {a_2}, \cdots, {a_{n}}} \right)\triangleq\sum_{\text{cyc}:~n}^{1\leq i\leq n} {a_i^{{a_{i + 1}}}} \end{equation*}
is a \textbf{Cater cyclic function}, which is indeed a cyclic function, and
\[\text{C}_{*}:(0,\infty)^{n}\rightarrow \mathbb{R},~\text{C}_{*}\left( {{a_1}, {a_2}, \cdots, {a_{n}}} \right)\triangleq\sum_{i=1}^{n} {a_i^{{a_{i}}}}\]
with
\[\text{C}^{*}:(0,\infty)^{n}\rightarrow \mathbb{R},~\text{C}^{*}\left( {{a_1}, {a_2}, \cdots, {a_{n}}} \right)\triangleq\sum_{i=1}^{n} {a_i^{{a_{n+1-i}}}}\]
are \textbf{Cater-type cyclic functions}, which are also cyclic functions.
Assume that $f:E_{n} \to \mathbb{R}$ and $g:E_{n} \to \mathbb{R}$ are two cyclic functions. Then we say that the inequalities
\begin{equation*} f\left( {{a_1}, {a_2},\cdots, {a_n}} \right) \geq g\left( {{a_1}, {a_2},\cdots, {a_n}} \right)~\text{and}~f\left( {{a_1}, {a_2},\cdots, {a_n}} \right) \leq g\left( {{a_1}, {a_2},\cdots, {a_n}} \right)
\end{equation*}
are \textbf{cyclic inequalities}.
In 1980, F. S. Cater established an interesting cyclic inequality involving power exponents as follows \cite{1}:
\begin{equation}\label{2}
\text{C}\left( {{a_1}, {a_2}, \cdots, {a_{n}}} \right) > 1 + \left( {n - 2} \right)\min \left\{ {a_1^{{a_2}},a_2^{{a_3}}, \cdots, a_{n - 1}^{{a_n}},a_n^{{a_1}}} \right\},~\forall n\geq2,
\end{equation}
which is called the \textbf{Cater inequality}.
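A numerical sanity check of the Cater inequality (ours, not a proof; the helper names are assumptions):

```python
def cater_C(a):
    """Cater cyclic function C(a_1,...,a_n) = sum_i a_i^{a_{i+1}}, indices cyclic."""
    n = len(a)
    return sum(a[i] ** a[(i + 1) % n] for i in range(n))

def cater_bound(a):
    """Right-hand side of the Cater inequality: 1 + (n-2) min_i a_i^{a_{i+1}}."""
    n = len(a)
    return 1 + (n - 2) * min(a[i] ** a[(i + 1) % n] for i in range(n))
```

For instance, for $n=2$ the bound reduces to $a_1^{a_2}+a_2^{a_1}>1$.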
Like periodic functions, cyclic functions form an important class of functions. Since the analytic expression of a cyclic function is typically special and complex, we need to estimate the bounds of cyclic functions, that is, to establish cyclic inequalities.
The Cater inequality is a typical representative of cyclic inequalities. Since the analytic expression of the Cater cyclic function is particularly intricate, it is an interesting problem to establish new Cater-type cyclic inequalities.
In this paper, we study the lower and upper bounds of the Cater cyclic function, establish two Cater-type cyclic inequalities, and demonstrate applications of the dimension reduction method \cite{3,11,9,10} in the theory of inequalities.
The methods of this paper are based on mathematical induction and the dimension reduction method. Our tools include mathematical analysis, the theory of inequalities, and the theory of means \cite{3,11,4,6,10,12,13,7,20,8,16,17,18,15,12.1,21}.
Our main results are the following Theorems \ref{theorem2.4} and \ref{theorem2.3}.
\begin{theorem}\label{theorem2.4} Let $\left( {{a_1}, {a_2}, \cdots ,{a_n}} \right) \in {\left( 0,\infty \right)^n},$ where $n\geq2.$ If
\begin{equation}\label{4.01}
0<a_1\leq a_2\leq\cdots\leq a_n~\text{with}~a_{1}^{a_{n}}\geq e^{-1},
\end{equation}
then we have the following \textbf{Cater-type cyclic inequality}:
\begin{equation}\label{5.01}
\text{C}\left( {{a_1}, {a_2}, \cdots, {a_{n}}} \right)\geq \text{C}^{ *}\left( {{a_1}, {a_2}, \cdots, {a_{n}}} \right).
\end{equation}
Equality in (\ref{5.01}) holds if $n=2$ or ${a_1} = {a_2} =\cdots = {a_n}.$
\end{theorem}
\begin{theorem}\label{theorem2.3} Let $\left( {{a_1}, {a_2}, \cdots ,{a_n}} \right) \in {\left( 0,\infty \right)^n},$ where $n\geq2.$ If
\begin{equation}\label{4}
0<a_1\leq a_2\leq\cdots\leq a_n,
\end{equation}
then we have the following \textbf{Cater-type cyclic inequality}:
\begin{equation}\label{5}
\text{C}\left( {{a_1}, {a_2}, \cdots, {a_{n}}} \right)\leq \text{C}_{*}\left( {{a_1}, {a_2}, \cdots, {a_{n}}} \right).
\end{equation}
Equality in (\ref{5}) holds if and only if ${a_1} = {a_2} =\cdots = {a_n}.$
\end{theorem}
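The two theorems can be illustrated numerically (our sketch; the function names and the sample tuple are ours) by checking $\text{C}^{*}\leq \text{C}\leq \text{C}_{*}$ on a sorted tuple satisfying $a_1^{a_n}\geq e^{-1}$:

```python
import math

def C(a):            # Cater cyclic function: sum_i a_i^{a_{i+1}}, cyclic
    n = len(a)
    return sum(a[i] ** a[(i + 1) % n] for i in range(n))

def C_antidiag(a):   # C^*(a) = sum_i a_i^{a_{n+1-i}}  (lower bound, Thm 2.4)
    n = len(a)
    return sum(a[i] ** a[n - 1 - i] for i in range(n))

def C_diag(a):       # C_*(a) = sum_i a_i^{a_i}        (upper bound, Thm 2.3)
    return sum(x ** x for x in a)

a = (0.8, 1.1, 1.5, 2.0)   # sorted, and a_1^{a_n} = 0.8**2.0 = 0.64 >= 1/e
```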
\section{Proof of Theorem \ref{theorem2.4}}
\begin{proof} Let $j_{1}j_{2}\cdots j_{n}$ be a permutation \cite{07} of $1,2,\ldots,n$, and let
\begin{equation}\label{0.00}
1\leq i< k\leq n~\text{and}~1\leq j_{i}<j_{k}\leq n .
\end{equation}
By (\ref{4.01}) and (\ref{0.00}), we have
\begin{equation}\label{0.0}
0<a_{1}\leq a_{i}\leq a_{k}\leq a_{n}~\text{and}~0<a_{1}\leq a_{j_{i}}\leq a_{j_{k}}\leq a_{n}.
\end{equation}
We first prove that
\begin{equation}\label{0.1}
a_{i}^{a_{j_{i}}}+a_{k}^{a_{j_{k}}}\geq a_{i}^{a_{j_{k}}}+a_{k}^{a_{j_{i}}}.
\end{equation}
Indeed, if $a_{i}= a_{k}$ or $a_{j_{i}}=a_{j_{k}}$, then (\ref{0.1}) is an equality. Now we assume that
\begin{equation}\label{0.011}0<a_{1}\leq a_{i}< a_{k}\leq a_{n}~\text{and}~0<a_{1}\leq a_{j_{i}}< a_{j_{k}}\leq a_{n}.\end{equation}
We define an auxiliary function as follows:
\[f(t)\triangleq t^{a_{j_{i}}}-t^{a_{j_{k}}},~0<a_{i}\leq t \leq a_{k}.\]
Then inequality (\ref{0.1}) can be rewritten as
\begin{equation}\label{0.2}
f(a_{i})\geq f(a_{k}).
\end{equation}
Since
\[\frac{\text{d}f(t)}{\text{d}t}< 0,~\forall t\in \left[a_{i}, a_{k}\right]\Rightarrow f(a_{i})\geq f(a_{k}),\]
so we just need to prove that
\begin{equation}\label{0.3}
\frac{\text{d}f(t)}{\text{d}t}< 0,~\forall t\in \left[a_{i}, a_{k}\right].
\end{equation}
Since $t\geq a_{i}\geq a_{1}>0$,
\begin{eqnarray*}
\frac{\text{d}f(t)}{\text{d}t}&=&a_{j_{i}}t^{a_{j_{i}}-1}-a_{j_{k}}t^{a_{j_{k}}-1}\\
&=&a_{j_{k}}t^{a_{j_{i}}-1}\left(\frac{a_{j_{i}}}{a_{j_{k}}}-t^{a_{j_{k}}-a_{j_{i}}}\right)\\
&\leq& a_{j_{k}}t^{a_{j_{i}}-1}\left(\frac{a_{j_{i}}}{a_{j_{k}}}-{a_{i}}^{a_{j_{k}}-a_{j_{i}}}\right)\\
&\leq& a_{j_{k}}t^{a_{j_{i}}-1}\left(\frac{a_{j_{i}}}{a_{j_{k}}}-{a_{1}}^{a_{j_{k}}-a_{j_{i}}}\right),
\end{eqnarray*}
\[\frac{a_{j_{i}}}{a_{j_{k}}}-{a_{1}}^{a_{j_{k}}-a_{j_{i}}}< 0\Rightarrow (\ref{0.3}),\]
and
\[\frac{a_{j_{i}}}{a_{j_{k}}}-{a_{1}}^{a_{j_{k}}-a_{j_{i}}}< 0\Leftrightarrow \log a_{1}>- \frac{\log a_{j_{k}}-\log a_{j_{i}}}{a_{j_{k}}-a_{j_{i}}},\]
so we just need to prove that
\begin{equation}\label{0.4}
\log a_{1}>- \frac{\log a_{j_{k}}-\log a_{j_{i}}}{a_{j_{k}}-a_{j_{i}}}.
\end{equation}
By (\ref{4.01}), we have
\begin{equation}\label{0.60}
a_{1}^{a_{n}}\geq e^{-1}\Leftrightarrow \log a_{1}\geq -\frac{1}{a_{n}}.
\end{equation}
According to (\ref{0.011}), (\ref{0.60}) and the Lagrange mean value theorem, there exists $\zeta \in \left(a_{j_{i}},a_{j_{k}}\right)$ such that
\[- \frac{\log a_{j_{k}}-\log a_{j_{i}}}{a_{j_{k}}-a_{j_{i}}}=-\frac{\text{d}\log \zeta}{\text{d}\zeta}=-\frac{1}{\zeta}<-\frac{1}{a_{j_{k}}}\leq -\frac{1}{a_{n}}\leq\log a_{1}\Rightarrow (\ref{0.4})\Rightarrow (\ref{0.3}).\]
Hence inequality (\ref{0.1}) is proved.
Based on the above proof, we know that equality in (\ref{0.1}) holds if and only if $a_{i}= a_{k}$ or $a_{j_{i}}=a_{j_{k}}$.
Next, we prove inequality (\ref{5.01}) as follows.
We define an auxiliary function as follows:
\begin{equation}\label{0.6}
F\left(j_{1}j_{2}\cdots j_{n}\right)\triangleq \sum_{i=1}^{n}a_{i}^{a_{j_{i}}},
\end{equation}
where $j_{1}j_{2}\cdots j_{n}$ is a permutation of $1,2,\ldots,n.$ Now we prove that
\begin{equation}\label{0.7}
F\left(j_{1}j_{2}\cdots j_{n}\right)\geq F\left(n(n-1)\cdots(n-m) j_{m+2}^{*}j_{m+3}^{*}\cdots j_{n}^{*}\right),~\forall m:0\leq m\leq n-1,
\end{equation}
where $j_{m+2}^{*}j_{m+3}^{*}\cdots j_{n}^{*}$ is a permutation of $1,2,\ldots,n-m-1.$
We use the mathematical induction for $m$.
(I) Let $m=0.$ If $j_{1}=n,$ then (\ref{0.7}) is an equality. Assume that $j_{1}<n.$ Then there exists $k:2\leq k\leq n$, such that $j_{k}=n$.
Since
\[F\left(j_{1}j_{2}\cdots j_{k}\cdots j_{n}\right)=a_{1}^{a_{j_{1}}}+a_{k}^{a_{j_{k}}}+\sum_{1\leq i\leq n,~i\ne 1,k}a_{i}^{a_{j_{i}}},\]
by inequality (\ref{0.1}), we have
\[a_{1}^{a_{j_{1}}}+a_{k}^{a_{j_{k}}}\geq a_{1}^{a_{j_{k}}}+a_{k}^{a_{j_{1}}}=a_{1}^{a_{n}}+a_{k}^{a_{j_{1}}},\]
that is,
\[F\left(j_{1}j_{2}\cdots j_{n}\right)=F\left(j_{1}j_{2}\cdots j_{k}\cdots j_{n}\right)\geq F\left(j_{k}j_{2}\cdots j_{1}\cdots j_{n}\right)= F\left(nj_{2}^{*} j_{3}^{*}\cdots j_{n}^{*}\right),\]
where $j_{2}^{*}j_{3}^{*}\cdots j_{n}^{*}=j_{2}\cdots j_{1}\cdots j_{n}$ is a permutation of $1,2,\ldots,n-1.$ Hence inequality (\ref{0.7}) holds when $m=0.$
(II) Suppose that inequality (\ref{0.7}) holds, where $0\leq m\leq n-2.$ Now we prove that
\begin{equation}\label{0.8}
F\left(j_{1}j_{2}\cdots j_{n}\right)\geq F\left(n(n-1)\cdots(n-m-1) j_{m+3}^{**}j_{m+4}^{**}\cdots j_{n}^{**}\right),
\end{equation}
where $ j_{m+3}^{**}j_{m+4}^{**}\cdots j_{n}^{**}$ is a permutation of $1,2,\ldots,n-m-2.$
We first prove that
\begin{equation}\label{0.9}
F\left(n(n-1)\cdots(n-m) j_{m+2}^{*}j_{m+3}^{*}\cdots j_{n}^{*}\right)\geq F\left(n\cdots(n-m-1) j_{m+3}^{**}j_{m+4}^{**}\cdots j_{n}^{**}\right),
\end{equation}
where $ j_{m+3}^{**}j_{m+4}^{**}\cdots j_{n}^{**}$ is a permutation of $1,2,\ldots,n-m-2.$
Indeed, if $ j_{m+2}^{*}=n-m-1,$ then (\ref{0.9}) is an equality. Assume that $j_{m+2}^{*}<n-m-1.$ Then there exists $k:m+3\leq k\leq n$, such that $j_{k}^{*}=n-m-1$.
By inequality (\ref{0.1}) and the proof of (I), we have
\begin{eqnarray*}
&&F\left(n(n-1)\cdots(n-m) j_{m+2}^{*}j_{m+3}^{*}\cdots j_{n}^{*}\right)\\
&=& F\left(n(n-1)\cdots(n-m) j_{m+2}^{*}j_{m+3}^{*}\cdots j_{k}^{*}\cdots j_{n}^{*}\right) \\
& \geq & F\left(n(n-1)\cdots(n-m)j_{k}^{*} j_{m+3}^{*}\cdots j_{m+2}^{*} \cdots j_{n}^{*}\right) \\
& = & F\left(n(n-1)\cdots(n-m)(n-m-1) j_{m+3}^{*}\cdots j_{m+2}^{*} \cdots j_{n}^{*}\right) \\
& = & F\left(n(n-1)\cdots(n-m)(n-m-1) j_{m+3}^{**}j_{m+4}^{**}\cdots j_{n}^{**}\right),
\end{eqnarray*}
where $ j_{m+3}^{**}j_{m+4}^{**}\cdots j_{n}^{**}= j_{m+3}^{*}\cdots j_{m+2}^{*} \cdots j_{n}^{*}$ is a permutation of $1,2,\ldots,n-m-2.$ Thus, inequality (\ref{0.9}) is proved.
By inequalities (\ref{0.7}) and (\ref{0.9}), we get inequality (\ref{0.8}). This completes the induction, so (\ref{0.7}) holds for every $m$ with $0\leq m\leq n-1$.
Setting $m=n-1$ in inequality (\ref{0.7}), we get
\[F\left(j_{1}j_{2}\cdots j_{n}\right)\geq F\left(n(n-1)\cdots321\right),\]
that is,
\begin{equation}\label{0.5}
\sum_{i=1}^{n}a_{i}^{a_{j_{i}}}\geq \sum_{i=1}^{n}a_{i}^{a_{n+1-i}}=\text{C}^{ *}\left( {{a_1}, {a_2}, \cdots, {a_{n}}} \right).
\end{equation}
Equality in (\ref{0.5}) holds if $j_{1}j_{2}\cdots j_{n}=n(n-1)\cdots321$ or ${a_1} ={a_2} = \cdots = {a_n}.$
In inequality (\ref{0.5}), set
\[j_{1}j_{2}\cdots j_{n}= 23\cdots (n-1)n1.\]
Then inequality (\ref{0.5}) can be rewritten as inequality (\ref{5.01}). Hence inequality (\ref{5.01}) is proved.
Based on the above proof, we know that equality in (\ref{5.01}) holds if $n=2$ or ${a_1} = {a_2} = \cdots = {a_n}.$
The proof of Theorem \ref{theorem2.4} is completed.
\end{proof}
\begin{remark}\label{remark4.1} Let $n\geq2,$ and let $1\leq a_1\leq a_2\leq\cdots\leq a_n$ or $e^{-1}\leq a_1\leq a_2\leq\cdots\leq a_n\leq 1$. Then (\ref{4.01}) holds: in the first case $a_{1}^{a_{n}}\geq 1> e^{-1}$, and in the second case $a_{1}^{a_{n}}\geq a_{1}\geq e^{-1}$ since $0<a_{1}\leq 1$ and $a_{n}\leq 1$.
According to Theorem \ref{theorem2.4}, inequality (\ref{5.01}) holds.
\end{remark}
\begin{remark}\label{remark4.2} Let $a_{i}=\varepsilon+{(i-1)}/{n},~1\leq i\leq n,~n\geq2,$ where $\varepsilon=0.5173446105249118\cdots $ is the root of the equation
$x^{x+1}=e^{-1},$ where $x\in(0,1).$ Then $0<a_1\leq a_2\leq\cdots\leq a_n$ and
\[a_{1}^{a_{n}}=\varepsilon^{\varepsilon+1-n^{-1}}>\varepsilon^{\varepsilon+1}=e^{-1}.\]
According to Theorem \ref{theorem2.4}, inequality (\ref{5.01}) holds. The calculation of $\varepsilon$ is based on the \textbf{Mathematica} software. For relevant literature on proving inequalities by means of mathematical software, see \cite{11,6,7,8}.
\end{remark}
\begin{remark}\label{remark4.3} Based on the proof of Theorem \ref{theorem2.4}, we know that, under the hypotheses of Theorem \ref{theorem2.4}, we have
\begin{equation}\label{0.10}
\text{C}^{ *}\left( {{a_1}, {a_2}, \cdots, {a_{n}}} \right)=\sum_{i=1}^{n}a_{i}^{a_{n+1-i}}\leq\sum_{i=1}^{n}a_{i}^{a_{j_{i}}}\leq \sum_{i=1}^{n}a_{i}^{a_{i}}=\text{C}_{ *}\left( {{a_1}, {a_2}, \cdots, {a_{n}}} \right),
\end{equation}
where $j_{1}j_{2}\cdots j_{n}$ is a permutation of $1,2,\ldots,n$.
\end{remark}
\begin{remark}\label{remark4.4} For the Cater-type cyclic function $\text{C}^{ *}\left( {{a_1}, {a_2}, \cdots, {a_{n}}} \right)$, we have
\begin{equation}\label{0.11}
\text{C}^{ *}\left( {{a_1}, {a_2}, \cdots, {a_{n}}} \right)>\frac{n}{2},~\forall n\geq2,
\end{equation}
\begin{equation} \label{2.01}
\inf\left\{\text{C}^{ *}\left( {{a_1}, {a_2}, \cdots, {a_{2m}}} \right)\right\}=\text{C}^{ *}\left(0I_{m},I_{m}\right)=m
\end{equation}
and
\begin{equation} \label{2.02}
\inf\left\{\text{C}^{ *}\left( {{a_1}, {a_2}, \cdots, {a_{2m+1}}} \right)\right\}=\text{C}^{ *}\left(0I_{m}, e^{-1}, I_{m}\right)=m+e^{-e^{-1}},
\end{equation}
where $m\in\mathbb{N},$ $I_{m}=(1,1,\cdots, 1)\in\mathbb{R}^{m},$ and $e^{-e^{-1}}=0.6922006275553464\cdots>0.5.$
Indeed, setting $n=2$ in inequality (\ref{2}), we get
\[\text{C}\left( {{a_1}, {a_2}} \right)=a_{1}^{a_{2}}+a_{2}^{a_{1}}>1,~\forall a_{1}>0,~\forall a_{2}>0.\]
Hence
\begin{equation}\label{2.0}
\text{C}^{ *}\left( {{a_1}, {a_2}, \cdots, {a_{n}}} \right)=\frac{1}{2}\sum_{i=1}^{n}\text{C}\left( {{a_i}, {a_{n+1-i}}} \right)>\frac{1}{2}\sum_{i=1}^{n}1=\frac{n}{2}.
\end{equation}
Since
\[\text{C}\left( {0, 1} \right)=\text{C}\left( {1, 0} \right)=1,~\inf_{t>0}\left\{\text{C}\left( {{t}, {t}} \right)\right\}=2\inf_{t>0}\left\{t^t\right\}=\text{C}\left( {{e^{-1}}, {e^{-1}}} \right)=2e^{-e^{-1}},\]
by (\ref{2.0}), we get (\ref{2.01}) and (\ref{2.02}).
\end{remark}
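A numerical check of Remark \ref{remark4.4} (ours; the helper name and the small stand-in $\varepsilon$ for the boundary value $0$ are assumptions):

```python
import math

def C_antidiag(a):   # C^*(a) = sum_i a_i^{a_{n+1-i}}
    n = len(a)
    return sum(a[i] ** a[n - 1 - i] for i in range(n))

eps = 1e-9           # stand-in for the boundary value 0 of the infimum
even = (eps, eps, 1.0, 1.0)                  # m = 2, n = 2m = 4
odd = (eps, eps, math.exp(-1), 1.0, 1.0)     # m = 2, n = 2m + 1 = 5
```

Both configurations approach the stated infima $m$ and $m+e^{-e^{-1}}$, and both values exceed $n/2$.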
\section{Proof of Theorem \ref{theorem2.3}}
In order to prove Theorem \ref{theorem2.3}, we need to establish several lemmas as follows.
\begin{lemma}\label{lemma3.0} Let $\left( {{a_1},{a_2}} \right) \in {\left( 0,\infty \right)^2},$ i.e.\ $n=2.$ Then inequality (\ref{5}) holds. Equality in (\ref{5}) holds if and only if ${a_1} = {a_2}.$
\end{lemma}
\begin{proof} If $a_{1}= a_2$, then $\mbox{\rm C}\left(a_{1},a_{2}\right)=\mbox{\rm C}_{*}\left(a_{1},a_{2}\right).$ Assume that $a_{1}\ne a_2.$ Without loss of generality, we may assume that $a= a_{1}> a_2=b>0.$
If $a\geq 1$, then
\begin{eqnarray*}
a^{a-b}> b^{a-b}&\Rightarrow& b^{b}> \frac{a^{b}b^{a}}{a^{a}}\\
&\Rightarrow& \mbox{\rm C}_{*}\left(a_{1},a_{2}\right)-\mbox{\rm C}\left(a_{1},a_{2}\right)=a^{a}+b^{b}-\left(a^{b}+b^{a}\right)\\
&>& a^{a}+\frac{a^{b}b^{a}}{a^{a}}-\left(a^{b}+b^{a}\right)=\frac{\left(a^{a}-a^{b}\right)\left(a^{a}-b^{a}\right)}{a^{a}}\geq0\\
&\Rightarrow& (\ref{5}).
\end{eqnarray*}
Now we assume that $1>a= a_{1}>a_2=b>0.$ By the A-G inequality \cite{17}
\[\sum_{i=1}^{n}p_{i}x_{i}\geq\prod_{i=1}^{n}x_{i}^{p_{i}},\]
where $p_{i}>0,~x_{i}>0,~i=1, \ldots,n,~n\geq2,~\sum_{i=1}^{n}p_{i}=1,$ and the logarithm inequalities
\[\frac{x}{1+x}<\log(1+x)< x,~\forall x: -1<x\ne0,\]
we have
\begin{eqnarray*}\frac{\text{d}}{\text{d}t}\left(\frac{\log t}{1-t}\right)&=&\frac{1}{(1-t)^2}\left[\log(1+t-1)-\frac{t-1}{1+t-1}\right]>0,\forall t:0<t<1\\
&\Rightarrow& \frac{\log a}{1-a}>\frac{\log b}{1-b}\Leftrightarrow a^{1-b}-b^{1-a}> 0\\
&\Rightarrow& b^{a-1}-a^{b-1}> 0\\
&\Rightarrow&\frac{ab\left(b^{a-1}-a^{b-1}\right)}{a^{b}+b^{a}}=a\times\frac{b^{a}}{a^{b}+b^{a}}-b\times\frac{a^{b}}{a^{b}+b^{a}}>0\\
&\Rightarrow& \frac{\mbox{\rm C}_{*}\left(a_{1},a_{2}\right)}{\mbox{\rm C}\left(a_{1},a_{2}\right)}=\frac{b^{a}}{a^{b}+b^{a}}\left(\frac{a}{b}\right)^{a}+
\frac{a^{b}}{a^{b}+b^{a}}\left(\frac{b}{a}\right)^{b}\\
&\geq&\left(\frac{a}{b}\right)^{a\times\frac{b^{a}}{a^{b}+b^{a}}}\left(\frac{b}{a}\right)^{b\times\frac{a^{b}}{a^{b}+b^{a}}}
=\left(\frac{a}{b}\right)^{a\times\frac{b^{a}}{a^{b}+b^{a}}-b\times\frac{a^{b}}{a^{b}+b^{a}}} >1\\
&\Rightarrow&(\ref{5}).
\end{eqnarray*}
Hence inequality (\ref{5}) holds when $n=2$, and equality in (\ref{5}) holds if and only if ${a_1} = {a_2}$. This ends the proof of Lemma \ref{lemma3.0}.
\end{proof}
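A grid check of the lemma above (ours, not a proof): the two-variable inequality $a^a+b^b\geq a^b+b^a$ for $a,b>0$, with equality when $a=b$.

```python
def gap(a, b):
    """C_*(a,b) - C(a,b) = a**a + b**b - a**b - b**a; the lemma asserts >= 0."""
    return a ** a + b ** b - a ** b - b ** a

grid = [0.1 * k for k in range(1, 31)]   # sample values 0.1, 0.2, ..., 3.0
worst = min(gap(a, b) for a in grid for b in grid)
```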
According to the theory of mathematical analysis, we have the following Lemma \ref{lemma3.1}.
\begin{lemma}\label{lemma3.1} Let the function $ f:\left( {\alpha ,\beta } \right) \to \mathbb{R}$ be continuous, and let \[f(\alpha)\triangleq f\left( {\alpha + 0} \right),~f(\beta)\triangleq f\left( {\beta - 0} \right).\]
If $f$ has no local minimum points, then we have
\begin{equation}\label{7}
\mathop {\inf }\limits_{t \in \left( {\alpha ,\beta } \right)} \left\{ {f\left( t \right)} \right\} = \min \left\{ {f\left( {\alpha} \right),f\left( {\beta } \right)} \right\};
\end{equation}
If $t_{1}, \cdots, t_{k}, ~k\geq1,$ are all the local minimum points of the function $f$, then we have
\begin{equation}\label{7.01}
\mathop {\inf }\limits_{t \in \left( {\alpha ,\beta } \right)} \left\{ {f\left( t \right)} \right\} = \min \left\{ {f\left( {\alpha} \right), f\left( {\beta} \right), f\left(t_{1}\right), \cdots, f\left(t_{k}\right)} \right\}.
\end{equation}
\end{lemma}
In the following discussion, we define an auxiliary function as follows:
\begin{equation}\label{7.1}
F:(0,\infty)^{2}\rightarrow\mathbb{R},~ F\left( {x,y} \right) \triangleq \left( {y - x} \right)\log x + \log y - \log x.
\end{equation}
\begin{lemma}\label{lemma3.01} Let $F$ be defined by (\ref{7.1}). Then we have
\begin{equation}\label{7.2}
F(x,y)>0, ~\forall x,y: 0<x<y<1.
\end{equation}
\end{lemma}
\begin{proof} We arbitrarily fix $x \in \left( {0,1} \right)$; that is, we regard $x \in \left( {0,1} \right)$ as a constant. Since $y\in(x,1),$ and
$$\;\frac{{\partial F\left( {x,y} \right)}}{{\partial y}} = \log x + \frac{1}{y},\;\;\frac{{{\partial ^2}F\left( {x,y} \right)}}{{\partial {y^2}}} = - \frac{1}{{{y^2}}} < 0,$$
the function $y\mapsto F\left( {x,y} \right)$ is concave on $\left( {x,1} \right)$ and hence has no local minimum points. So, by Lemma \ref{lemma3.1}, we have
\begin{equation}\label{12.1}
F\left( {x,y} \right) > \mathop {\inf }\limits_{x < y < 1} \left\{ {F\left( {x,y} \right)} \right\}\; = \min \left\{ {F\left( {x,x} \right),F\left( {x,1} \right)} \right\} = \min \left\{ {0, - x\log x} \right\} = 0.
\end{equation}
This ends the proof of Lemma \ref{lemma3.01}.
\end{proof}
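A grid check of Lemma \ref{lemma3.01} (ours, not a proof), evaluating $F$ on a mesh of pairs with $0<x<y<1$:

```python
import math

def F(x, y):
    """F(x,y) = (y - x) log x + log y - log x, as in (7.1)."""
    return (y - x) * math.log(x) + math.log(y) - math.log(x)

pts = [(0.05 * i, 0.05 * j) for i in range(1, 20) for j in range(1, 20)
       if 0.05 * i < 0.05 * j]          # all grid pairs with 0 < x < y < 1
min_val = min(F(x, y) for x, y in pts)
```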
In the following discussion, we define an auxiliary function as follows:
\begin{equation}\label{13}
\phi: \Omega\rightarrow\mathbb{R},~\phi(x,y,z)\triangleq y^y+z^x-(z^y+y^x),
\end{equation}
where
\begin{equation}\label{14}
\Omega\triangleq \left\{(x,y,z)\in (0,\infty)^{3}:\max\left\{x,z\right\}\leq y \right\}.
\end{equation}
\begin{lemma}\label{lemma3.4} Let the function $\phi: \Omega\rightarrow\mathbb{R}$ be defined by (\ref{13}), and let $(x,y,z)\in\Omega.$ If $y\geq1,$ then we have the inequality
\begin{equation}\label{15}
\phi(x,y,z)\geq 0.
\end{equation}
Equality in (\ref{15}) holds if and only if $y=z$ or $y=x$.
\end{lemma}
\begin{proof} The inequality (\ref{15}) can be rewritten as
\begin{equation}\label{16}
y^{x}\left(y^{y-x}-1\right)\geq z^{x}\left(z^{y-x}-1\right).
\end{equation}
If $ z\leq1\leq y, $ by $0<\max\left\{x,z\right\}\leq y$, we have
\[ y^{x}\left(y^{y-x}-1\right)\geq 0\geq z^{x}\left(z^{y-x}-1\right)\Rightarrow(\ref{16})\Rightarrow(\ref{15});\]
If $1\leq z\leq y, $ from $0<\max\left\{x,z\right\}\leq y$, we have
\begin{eqnarray*}
y^{x}\geq z^{x}>0,~y^{y-x}-1\geq z^{y-x}-1\geq 0\Rightarrow y^{x}\left(y^{y-x}-1\right)\geq z^{x}\left(z^{y-x}-1\right)\Rightarrow(\ref{16})\Rightarrow(\ref{15}).
\end{eqnarray*}
Hence (\ref{15}) is proved.
Based on the above proof, we see that equality in (\ref{15}) holds if and only if $y=z$ or $y=x$. Lemma \ref{lemma3.4} is proved.
\end{proof}
\begin{lemma}\label{lemma3.5} Let the function $\phi: \Omega\rightarrow\mathbb{R}$ be defined by (\ref{13}), and let $(x,y,z)\in\Omega.$ If $0<x\leq z\leq y<1,$ then inequality (\ref{15}) also holds. Equality in (\ref{15}) holds if and only if $y=z$.
\end{lemma}
\begin{proof} By the proof of Lemma \ref{lemma3.4}, we just need to prove (\ref{16}).
Indeed, if $y= z$, then (\ref{16}) holds with equality. Assume now that $0<x\leq z<y<1.$ Then $0<x<y<1.$
We now prove that
\begin{equation}\label{16.1}
y^{x}\left(y^{y-x}-1\right)> z^{x}\left(z^{y-x}-1\right).
\end{equation}
Since
\begin{eqnarray*}
(\ref{16.1})&\Leftrightarrow& y^{x}\left(1-y^{y-x}\right)< z^{x}\left(1-z^{y-x}\right)\\
&\Leftrightarrow& x\log y+\log\left(1-y^{y-x}\right)< x\log z+\log\left(1-z^{y-x}\right)\\
&\Leftrightarrow& x(\log y-\log z)< -\log\left(1-y^{y-x}\right)+\log\left(1-z^{y-x}\right)\\
&\Leftrightarrow& x<\frac{-\log\left(1-y^{y-x}\right)+\log\left(1-z^{y-x}\right)}{\log y-\log z},
\end{eqnarray*}
we see that inequality (\ref{16.1}) can be rewritten as
\begin{equation}\label{17}
x < \frac{{f\left( y \right) - f\left( z \right)}}{{g\left( y \right) - g\left( z \right)}},
\end{equation}
where
\begin{equation}\label{18}
f\left( t \right) \triangleq - \log \left( {1 - {t^{y - x}}} \right),\;g\left( t \right) \triangleq \log t,~0<x\leq z \leq t \leq y < 1,
\end{equation}
and $t$ is the variable of $f$ and $g$, with $x,y$ and $z$ treated as parameters. By (\ref{18}), we have
\begin{equation}\label{18.1}
\;\frac{{{f'}\left( t \right)}}{{{g'}\left( t \right)}} = \frac{{\left( {y - x} \right){t^{y - x}}}}{{1 - {t^{y - x}}}}=(y-x)\left( \frac{{1}}{{1 - {t^{y - x}}}}-1\right),\;0<x\leq z \leq t \leq y < 1.
\end{equation}
By (\ref{18.1}), the function $\omega: (0,1)\rightarrow\mathbb{R},~\omega(t)\triangleq{f'}\left( t \right)/{g'}\left( t \right)$, is strictly increasing. So we have
\begin{equation}\label{18.2}
\frac{{{f'}\left( t \right)}}{{{g'}\left( t \right)}}>\frac{{{f'}\left( z \right)}}{{{g'}\left( z \right)}}\geq\frac{{{f'}\left(x \right)}}{{{g'}\left( x \right)}},
\;\forall t: 0<x\leq z < t < y < 1.
\end{equation}
According to the Cauchy mean value theorem, (\ref{18}), (\ref{18.1}) and (\ref{18.2}), there exists a $\zeta \in \left( {z,y} \right) \subset \left( {0,1} \right)$ such that
\begin{equation}\label{19}
\frac{{f\left( y \right) - f\left( z \right)}}{{g\left( y \right) - g\left( z \right)}} = \frac{{{f'}\left( \zeta \right)}}{{{g'}\left( \zeta \right)}} > \frac{{{f'}\left( x \right)}}{{{g'}\left( x \right)}} = \frac{{\left( {y - x} \right){x^{y - x}}}}{{1 - {x^{y - x}}}}.
\end{equation}
Noting that
\begin{eqnarray*}
\frac{{\left( {y - x} \right){x^{y - x}}}}{{1 - {x^{y - x}}}} > x &\Leftrightarrow& y{x^{y - x}} - {x^{y - x + 1}} > x - {x^{y - x + 1}}\\
&\Leftrightarrow& y{x^{y - x}} >x \\
&\Leftrightarrow& \left( {y - x} \right)\log x + \log y - \log x >0,
\end{eqnarray*}
that is,
\begin{equation}\label{9.2}
\frac{{\left( {y - x} \right){x^{y - x}}}}{{1 - {x^{y - x}}}} > x \Leftrightarrow F\left( {x,y} \right)>0.
\end{equation}
By $0<x<y<1,$ (\ref{19}), (\ref{9.2}) and Lemma \ref{lemma3.01}, we have
\begin{eqnarray*}
&&F\left( {x,y} \right)>0, ~\forall x,y: 0<x<y<1\\
&\Rightarrow&\frac{{\left( {y - x} \right){x^{y - x}}}}{{1 - {x^{y - x}}}} > x, \\
&\Rightarrow&\frac{{f\left( y \right) - f\left( z \right)}}{{g\left( y \right) - g\left( z \right)}}> \frac{{\left( {y - x} \right){x^{y - x}}}}{{1 - {x^{y - x}}}}> x, \\
&\Rightarrow& \frac{{f\left( y \right) - f\left( z \right)}}{{g\left( y \right) - g\left( z \right)}} > x\\
&\Rightarrow& (\ref{17}) \Rightarrow (\ref{16.1}).
\end{eqnarray*}
Hence inequality (\ref{16.1}) is proved.
Since $(\ref{16.1}) \Rightarrow (\ref{16})\Rightarrow(\ref{15})$, inequality (\ref{15}) is also proved.
Based on the above proof, we see that equality in (\ref{15}) holds if and only if $y=z$. Lemma \ref{lemma3.5} is proved.
\end{proof}
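The sign pattern established in Lemmas \ref{lemma3.4} and \ref{lemma3.5} is easy to probe numerically. The following sketch (a random-sampling check, offered purely for illustration and not as a substitute for the proofs) evaluates $\phi$ of (\ref{13}) on both domains:

```python
import random

def phi(x, y, z):
    # Auxiliary function of (13): phi(x, y, z) = y^y + z^x - (z^y + y^x).
    return y**y + z**x - (z**y + y**x)

random.seed(0)
for _ in range(10_000):
    # Lemma 3.4 domain: 0 < max{x, z} <= y with y >= 1.
    y = random.uniform(1.0, 5.0)
    x = random.uniform(1e-6, y)
    z = random.uniform(1e-6, y)
    assert phi(x, y, z) >= -1e-9

    # Lemma 3.5 domain: 0 < x <= z <= y < 1.
    x, z, y = sorted(random.uniform(1e-6, 1.0 - 1e-6) for _ in range(3))
    assert phi(x, y, z) >= -1e-9
```

The equality cases $y=x$ and $y=z$ are also visible numerically, e.g. $\phi(2,2,1)=0$.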
Now we turn to the proof of Theorem \ref{theorem2.3}.
\begin{proof} We use mathematical induction on $n$.
When $n = 2$, by Lemma \ref{lemma3.0}, Theorem \ref{theorem2.3} is true.
Suppose that Theorem \ref{theorem2.3} is true for $n$, $n \geq 2.$ Then
\begin{equation}\label{34}
\text{C}\left( {{a_1}, {a_2}, \cdots, {a_{n}}} \right)-\text{C}_{*}\left( {{a_1}, {a_2}, \cdots, {a_{n}}} \right)\leq 0,
\end{equation}
and equality in (\ref{34}) holds if and only if ${a_1} = {a_2} = \cdots = {a_n}$, where
\[0<{{a_1}\leq{a_2}\leq \cdots\leq{a_{n}}}.\] Now we prove that
\begin{equation}\label{30}
\text{C}\left( {{a_1}, {a_2}, \cdots ,{a_{n+1}}} \right) \leq \text{C}_{*}\left( {{a_1}, {a_2}, \cdots,{a_{n+1}}} \right),
\end{equation}
and equality in (\ref{30}) holds if and only if ${{a_1}={a_2}=\cdots={a_{n+1}}},$ where
\[0<{{a_1}\leq{a_2}\leq \cdots\leq{a_{n+1}}}.\]
Since $0<{{a_1}\leq{a_2}\leq \cdots\leq{a_{n+1}}},$ we have
\begin{equation}\label{31}
\max \left\{ {{a_1},{a_2}, \cdots ,{a_{n + 1}}} \right\} = {a_{n + 1}}.
\end{equation}
For ease of expression, by (\ref{31}), we may assume that
\begin{equation}\label{32}
\left( {x,y,z} \right) \triangleq \left( {{a_1},{a_{n + 1}},{a_n}} \right) \in \Omega,
\end{equation}
where $\Omega$ is defined by (\ref{14}). Since
\begin{eqnarray*}
&&\text{C}\left( {{a_1}, {a_2}, \cdots, {a_{n+1}}} \right) - \text{C}_{*}\left( {{a_1}, {a_2}, \cdots,{a_{n+1}}} \right)\\
&=&\sum\nolimits_{i=1}^{n}{a_i^{{a_{i + 1}}}}+{a_{n+1}^{a_{1}}} -\left(\sum\nolimits_{i=1}^{n}{a_i^{{a_{i }}}}+{a_{n+1}^{a_{n+1}}}\right)\\
&=&\sum\nolimits_{i=1}^{n-1}{a_i^{{a_{i + 1}}}}+{a_{n}^{a_{n+1}}}+{a_{n+1}^{a_{1}}} -\left(\sum\nolimits_{i=1}^{n}{a_i^{{a_{i }}}}+{a_{n+1}^{a_{n+1}}}\right)\\
&=&\sum\nolimits_{\text{cyc}:~n}^{1\leq i\leq n}{a_i^{{a_{i + 1}}}}-{a_{n}^{a_{1}}}+{a_{n}^{a_{n+1}}}+{a_{n+1}^{a_{1}}} -\left(\sum\nolimits_{i=1}^{n}{a_i^{{a_{i }}}}+{a_{n+1}^{a_{n+1}}}\right)\\
&=&\left(\sum\nolimits_{\text{cyc}:~n}^{1\leq i\leq n}{a_i^{{a_{i + 1}}}}-\sum\nolimits_{i=1}^{n}{a_i^{{a_{i }}}}\right)-{a_{n}^{a_{1}}}+{a_{n}^{a_{n+1}}}+{a_{n+1}^{a_{1}}} -{a_{n+1}^{a_{n+1}}}\\
&=&\left(\sum\nolimits_{\text{cyc}:~n}^{1\leq i\leq n}{a_i^{{a_{i + 1}}}}-\sum\nolimits_{i=1}^{n}{a_i^{{a_{i }}}}\right)
-\left[y^y+z^x-\left(z^y+y^x\right)\right]\\
&=&\text{C}\left( {{a_1}, {a_2}, \cdots, {a_{n}}} \right) - \text{C}_{*}\left( {{a_1}, {a_2}, \cdots, {a_{n}}} \right)-\phi(x,y,z),
\end{eqnarray*}
we have
\begin{equation}\label{33}
\text{C}\left( {{a_1}, \cdots, {a_{n+1}}} \right) - \text{C}_{*}\left( {{a_1}, \cdots, {a_{n+1}}} \right)=\text{C}\left( {{a_1}, \cdots ,{a_{n}}} \right) - \text{C}_{*}\left( {{a_1}, \cdots ,{a_{n}}} \right) -\phi(x,y,z),
\end{equation}
where $\phi(x,y,z)$ is defined by (\ref{13}).
Since $ 0<a_1\leq a_2\leq\cdots\leq a_{n+1},$ we see that (\ref{31}) and (\ref{32}) hold and $x\leq z\leq y$. According to the inductive hypothesis, (\ref{34}) holds. By Lemmas \ref{lemma3.4} and \ref{lemma3.5}, we know that (\ref{15}) holds. According to (\ref{15}), (\ref{34}) and (\ref{33}), we have
\begin{eqnarray*}
&& \text{C}\left( {{a_1}, {a_2}, \cdots, {a_{n+1}}} \right) - \text{C}_{*}\left( {{a_1}, {a_2}, \cdots, {a_{n+1}}} \right)\\
&=&\text{C}\left( {{a_1}, {a_2}, \cdots ,{a_{n}}} \right) - \text{C}_{*}\left( {{a_1}, {a_2}, \cdots ,{a_{n}}} \right) -\phi(x,y,z)\\
&\leq& -\phi(x,y,z)\leq0\Rightarrow (\ref{30}).
\end{eqnarray*}
Hence inequality (\ref{30}) is proved.
According to the inductive hypothesis and Lemmas \ref{lemma3.4} and \ref{lemma3.5}, we see that equality in (\ref{30}) holds if and only if
${a_1} = {a_2} = \cdots = {a_n}~\text{and}~a_{n + 1}=y=z={a_n}$, i.e. ${a_1} = {a_2} = \cdots = a_{n+1}.$
This completes the proof of Theorem \ref{theorem2.3}.
\end{proof}
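Theorem \ref{theorem2.3} itself can likewise be spot-checked numerically. The sketch below (illustrative only) compares the cyclic sum $\text{C}$ with $\text{C}_{*}$ on random nondecreasing tuples:

```python
import random

def C(a):
    # C(a1, ..., an) = a1^{a2} + a2^{a3} + ... + a_{n-1}^{a_n} + a_n^{a1}.
    n = len(a)
    return sum(a[i] ** a[(i + 1) % n] for i in range(n))

def C_star(a):
    # C_*(a1, ..., an) = a1^{a1} + a2^{a2} + ... + an^{an}.
    return sum(x ** x for x in a)

random.seed(1)
for _ in range(5_000):
    n = random.randint(2, 8)
    a = sorted(random.uniform(0.05, 4.0) for _ in range(n))
    assert C(a) <= C_star(a) + 1e-9
# Equality holds when all entries coincide.
assert C([1.7] * 5) == C_star([1.7] * 5)
```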
\begin{remark} \label{remark3.1} The proof method of Theorem \ref{theorem2.3} is called the dimensionality reduction method. Relevant literature on proving inequalities by means of the dimensionality reduction method can be found in \cite{3,11,9,10}. The dimension reduction process of the proof is as follows. \\
\textbf{(A)} Inequality (\ref{30}) contains $n+1$ variables and inequality (\ref{5}) contains $n$ variables. We transform inequality (\ref{30}) into inequality (\ref{5}) by means of mathematical induction. This transformation process is based on inequality (\ref{15}). \\
\textbf{(B)} Inequality (\ref{15}) contains three variables $x,y,z.$ By means of Lemmas \ref{lemma3.1}-\ref{lemma3.5}, we transform inequality (\ref{15}) into inequality (\ref{7.2}), which contains only two variables.\\
\textbf{(C)} Lemmas \ref{lemma3.1} and \ref{lemma3.01} transform inequality (\ref{7.2}) into inequality (\ref{12.1}). The right-hand side of inequality (\ref{12.1}) involves only one variable.\\
\textbf{(D)} An inequality with only one variable can then be handled by standard tools of mathematical analysis.\\
\end{remark}
\begin{remark}\label{remark3.2} Let the function $f:[0,1]\rightarrow(0,\infty)$ be continuous and increasing, and suppose that $\lim_{n\rightarrow\infty}n^{-1}\text{C}\left( {{a_1}, {a_2}, \cdots, {a_{n}}} \right)$ exists, where $a_{i}\triangleq f\left({i}/{n}\right),$ $i=1,2,\ldots,n,$ $n\geq 2$, and $n^{-1}\text{C}\left( {{a_1}, {a_2}, \cdots, {a_{n}}} \right)$ is the mean of the positive real numbers $a_1^{{a_2}}, a_2^{{a_3}}, \cdots, a_{n - 1}^{{a_n}}, a_n^{{a_1}}.$ Then, by Theorem \ref{theorem2.3} and mathematical analysis, we have the following inequality:
\begin{equation}\label{02}
\lim_{n\rightarrow\infty}n^{-1}\text{C}\left( {{a_1}, {a_2}, \cdots, {a_{n}}} \right)\leq\int_{0}^{1}\left[f\left(t\right)\right]^{f\left(t\right)}\text{d}t,
\end{equation}
where $\int_{0}^{1}\left[f\left(t\right)\right]^{f\left(t\right)}\text{d}t$ is the mean of the function $f^{f}.$
Relevant literature on the theory of means can be found in \cite{3,11,4,6,10,12,13,7,20,8,16,17,18,15,12.1,21}.
\end{remark}
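For a concrete illustration of Remark \ref{remark3.2}, take $f(t)=1+t$ (a choice made here purely for illustration); the normalized cyclic sum then approaches the integral mean of $f^{f}$, consistent with (\ref{02}):

```python
def C(a):
    # Cyclic power sum: a1^{a2} + a2^{a3} + ... + a_{n-1}^{a_n} + a_n^{a1}.
    n = len(a)
    return sum(a[i] ** a[(i + 1) % n] for i in range(n))

f = lambda t: 1.0 + t  # continuous, increasing, positive on [0, 1]

# Integral mean of f^f over [0, 1] via a fine midpoint rule.
M = 200_000
integral = sum(f((k + 0.5) / M) ** f((k + 0.5) / M) for k in range(M)) / M

for n in (10, 100, 1000):
    a = [f(i / n) for i in range(1, n + 1)]
    assert abs(C(a) / n - integral) < 5.0 / n  # the mean tends to the integral
```

For this continuous $f$ the limit in fact coincides with the integral, so inequality (\ref{02}) is saturated.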
\textbf{Competing interests.} The authors declare that they have no conflicts of interest in
this joint work.
\textbf{Authors' contributions.} All authors contributed equally and significantly to this
paper. All authors read and approved the final manuscript.
\textbf{Acknowledgements.} The authors would like to acknowledge the support from the
National Natural Science Foundation of China (No. 11161024).
\textbf{References}
\section{Introduction}
Particles with a macroscopic decay length,
ranging from a few centimeters
to several hundred
meters and beyond, can be classified as long-lived particles (LLPs)
at the Large Hadron Collider
(LHC).
Such LLPs are endemic in new physics models
beyond the standard model (SM);
see e.g.\
\cite{Lee:2018pag, Alimena:2019zri} for recent reviews.
A number of new detectors at the LHC
have been recently proposed to
search for LLPs, which can be collectively referred to
as lifetime frontier detectors.
These include the detectors that are placed in the forward region:
FACET~\cite{FACET:talk1},
FASER~\cite{Feng:2017uoz, Ariga:2018zuc, Ariga:2018uku, Ariga:2018pin, Ariga:2019ufm},
{FASER2} \cite{ Ariga:2019ufm, Kling:2021fwx},
AL3X~\cite{Gligorov:2018vkc},
and {MoEDAL-MAPP} \cite{Staelens:2019gzt};
the detectors that are placed in the central region:
MATHUSLA
\cite{Chou:2016lxi, Curtin:2018mvb, Alpigiani:2018fgd, Lubatti:2019vkf, Alpigiani:2020tva},
CODEX-b~\cite{Gligorov:2017nwh, Aielli:2019ivi},
ANUBIS~\cite{Bauer:2019vqk};
and the precision timing detectors that are to be installed
at ATLAS, CMS, and LHCb to mitigate the pileup backgrounds
in the coming HL-LHC phase:
CMS-MTD~\cite{CMStiming},
ATLAS-HGTD~\cite{Allaire:2018bof},
LHCb-TORCH~\cite{LHCb:2017MTD, LHCb:2008vvz}.
A plethora of LLPs can be studied in the newly proposed
lifetime frontier detectors
\cite{Curtin:2017izq,
Evans:2017lvd,
Feng:2017vli,
Kling:2018wct,
Helo:2018qej,
Liu:2018wte,
Feng:2018pew,
Cerri:2018rkm,
Curtin:2018ees,
Berlin:2018jbm,
Dercks:2018eua,
Dercks:2018wum,
Flowers:2019gvj,
Kim:2019oyh,
Mason:2019okp,
Boiarska:2019vid,
No:2019gvl,
Krovi:2019hdl,
Jodlowski:2019ycu,
Du:2019mlc,
Hirsch:2020klk,
Yuan:2020eeu,
Liu:2020vur,
Dreiner:2020qbi,
DeVries:2020jbs,
Bertuzzo:2020rzo,
Cottin:2021lzz,
Cheung:2021utb,
Guo:2021vpb,
Bhattacherjee:2021rml,
Mitsou:2021tti}.
One well-motivated new physics particle is the dark photon
(denoted by $A'_\mu$),
which can naturally arise
in kinetic mixing models
\cite{Holdom:1985ag, Foot:1991kb}
and in Stueckelberg models
\cite{Kors:2005uz, Feldman:2006ce, Feldman:2006wb, Feldman:2007wj, Feldman:2009wv, Du:2019mlc}.
The interaction between
the dark photon $A'_\mu$ and the SM fermion $f$ can be parameterized
as
\begin{equation}
e \, \epsilon \, Q_f \, A'_\mu \bar f \gamma^\mu f.
\label{eq:epsilon}
\end{equation}
Long-lived dark photons (LLDPs) have a small $\epsilon$ coupling,
which, however, leads to a suppressed collider signal.
Recently, a new dark photon model was proposed in Ref.\ \cite{Du:2019mlc}
where the dark photon is produced at colliders by hidden fermion
radiation, so that the collider signal no longer suffers from the small
$\epsilon$ parameter. For that reason, the LLDP signal at the LHC
in this new dark photon model can be significantly enhanced.\footnote{See
\cite{Buschmann:2015awa, Arguelles:2016ney, Kim:2019oyh, Krovi:2019hdl}
for other dark photon models with a sizeable LLDP signal.}
Thus, we will refer to the dark photon models,
where the dark photon interacts with the SM sector only via
the interaction Lagrangian in Eq.\ \eqref{eq:epsilon},
as the ``minimal'' dark photon models, to be distinguished
from the dark photon models proposed in
Ref.\ \cite{Du:2019mlc}.
In this paper, we investigate the capability of various lifetime frontier
detectors in probing the parameter space of LLDPs both
in the minimal dark photon model
and in the newly proposed dark photon model \cite{Du:2019mlc}.
We carry out detailed analyses for four detectors:
the far forward detectors FACET and FASER,
the central transverse detector MATHUSLA,
and the precision timing detector CMS-MTD.
We compute
the expected limits from these detectors.
We find that the parameter space probed by
FACET and MATHUSLA
is significantly enlarged
by the hidden fermion radiation in the new dark photon model,
as compared to the minimal dark photon model.
We also find that the LLDP signal at the newly proposed
far detector FACET is significantly larger
than at FASER, owing to the larger decay volume
and the shorter distance to the interaction point of the FACET detector.
The rest of the paper is organized as follows.
We briefly review the dark photon model that has
an enhanced LLDP signal
in section~\ref{sec:model}.
A mini-overview on lifetime-frontier detectors is given
in section~\ref{sec: detector-review}.
We discuss three main DP production channels
in section~\ref{sec:DP_production}.
We analyze the signal events in different lifetime-frontier detectors
in section~\ref{sec:simu-and-considerations}.
Given
in section~\ref{sec:result}
are the {sensitivities} to the parameter space from four different detectors:
FACET,
FASER(2),
MATHUSLA,
and CMS-MTD.
A semi-analytic comparison between far detectors is given
in section~\ref{sec:facet-faser-comparison}.
We summarize our findings in section~\ref{sec:summary}.
\section{The model and its parameter space}
\label{sec:model}
In this analysis, we consider
the dark photon model
that has been proposed
recently to enhance the (suppressed) long-lived
dark photon signal at the LHC \cite{Du:2019mlc}.
In this model,
the standard model is extended by a hidden sector
that consists of two abelian gauge groups
$U(1)_F$ and $U(1)_W$
with corresponding gauge bosons
$X_\mu$ and $C_\mu$ respectively, and
one Dirac fermion $\psi$ charged under both gauge
groups \cite{Du:2019mlc}.
The gauge boson mass terms {(due to the Stueckelberg
mechanism
\cite{Kors:2005uz,
Feldman:2006ce,
Feldman:2006wb,
Feldman:2007wj,
Feldman:2009wv,
Du:2019mlc})}
and the gauge interaction terms in
the hidden sector are given by
\begin{eqnarray}
\label{eq:lagrangian}
{\cal L} =
- \frac{1}{2} ( \partial_\mu \sigma_1 + m_{1}\epsilon_1 B_{\mu} + m_{1} X_{\mu} )^2
- \frac{1}{2} ( \partial_\mu \sigma_2 + m_{2}\epsilon_2 B_{\mu} + m_{2} C_{\mu} )^2
+ g_F \bar \psi \gamma^\mu \psi X_{\mu}
+ g_W \bar \psi \gamma^\mu \psi C_{\mu},
\end{eqnarray}
where $B_\mu$ is the hypercharge boson in the SM,
$\sigma_1$ and $\sigma_2$ are the axion fields in the Stueckelberg
mechanism,
$g_F$ and $g_W$ are the gauge coupling constants,
and $m_1$, $m_2$, $m_{1}\epsilon_1$,
and $m_{2}\epsilon_2$ are mass terms in
the Stueckelberg mechanism
with $\epsilon_{1,2}$ being (small)
dimensionless numbers.
The $2 \times 2$ neutral gauge boson mass matrix in the SM
is
extended to a $4 \times 4$ mass matrix due to the fact that
the two new gauge bosons, $X_\mu$ and $C_\mu$,
have mixed mass terms with the SM hypercharge boson $B_\mu$;
the new neutral gauge boson mass matrix
in the basis of $V= ( C,X, B, A^3)$
is given by \cite{Du:2019mlc}
\begin{equation}
M^2 =
\begin{pmatrix}
m_{2}^2 & 0 & m_{2}^2 \epsilon_2 & 0\cr
0 & m_{1}^2 & m_{1}^2 \epsilon_1 & 0 \cr
m_{2}^2 \epsilon_2 & m_{1}^2 \epsilon_1 &
\sum\limits_{i=1}^2 m_{i}^2 \epsilon_i^2 + {g'^2 v^2 \over 4}
& - {g'g v^2 \over 4} \cr
0 & 0 & - {g'g v^2 \over 4} & {g^2 v^2 \over 4}
\end{pmatrix}
\label{eq:massmatrix}
\end{equation}
where $A^3$ is the third component of the $SU(2)_L$ gauge
bosons, $g$ and $g'$ are gauge couplings for
the SM $SU(2)_L$ and $U(1)_Y$
gauge groups respectively,
and $v = 246$ GeV is the vacuum expectation value of the SM Higgs boson.
Diagonalization of the mass matrix (via an orthogonal transformation
${\cal O}$) leads to the mass eigenstates
$E= ( Z', A', Z, A)$ with $E_i={\cal O}_{ji} V_{j}$
where $A$ is the SM photon,
$Z$ is the SM $Z$ boson,
$A'$ is the dark photon,
and $Z'$ is the new heavy boson.
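The diagonalization can be sketched numerically. In the example below, the mass matrix of Eq.\ \eqref{eq:massmatrix} is built for the benchmark $m_1 = 3$ GeV, $m_2 = 700$ GeV used later in the text; the SM couplings $g \approx 0.65$, $g' \approx 0.36$ and the value $\epsilon_2 = 0.03$ are illustrative assumptions of ours:

```python
import numpy as np

# Benchmark inputs (GeV). g, g' and eps2 are assumed illustrative values;
# m1, m2 follow the benchmark quoted in the text.
m1, m2, eps1, eps2 = 3.0, 700.0, 0.0, 0.03
g, gp, v = 0.652, 0.357, 246.0

# Mass-squared matrix in the basis V = (C, X, B, A^3), as given above.
M2 = np.array([
    [m2**2,        0.0,          m2**2 * eps2,                                          0.0],
    [0.0,          m1**2,        m1**2 * eps1,                                          0.0],
    [m2**2 * eps2, m1**2 * eps1, m1**2 * eps1**2 + m2**2 * eps2**2 + gp**2 * v**2 / 4, -g * gp * v**2 / 4],
    [0.0,          0.0,          -g * gp * v**2 / 4,                                    g**2 * v**2 / 4],
])

masses = np.sort(np.sqrt(np.maximum(np.linalg.eigvalsh(M2), 0.0)))
# Expected spectrum: an exactly massless photon, m_A' ~ m1,
# m_Z near 91 GeV, and m_Z' ~ m2.
```

One can check analytically that the Stueckelberg mixing leaves one eigenvalue exactly zero, so the photon remains massless.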
The interaction Lagrangian between
the mass eigenstates of the neutral gauge {bosons}
and the fermions is given by \cite{Du:2019mlc}
\begin{equation}
\label{eq:coupling}
\left[ \bar f \gamma_\mu (v^f_i - \gamma_5 a^f_i) f
+ v^\psi_i \bar \psi \gamma_\mu \psi \right] E^\mu_i
\end{equation}
where $f$ is the SM fermion.
The small coupling $v_4^\psi$ between the hidden
fermion $\psi$ and the SM photon can be
rewritten as $v_4^\psi \equiv e \delta$
where $\delta$ is usually referred to as
``millicharge''.
\begin{figure}[htbp]
\begin{centering}
\includegraphics[width=0.4\textwidth]{./figures/limit_mchi_eps2_v5}
\caption{
The upper bound on $\epsilon_2$ as a function of $m_\psi$.
The other parameters are
$m_1 = 3$ GeV, $m_2 = 700$ GeV,
$g_F = 1.5$, $g_W = 1.0$,
and $\epsilon_1 \ll \epsilon_2$.
Here $\epsilon_2 \simeq (-g'/g_W) \delta$ where $\delta$ is the millicharge
of $\psi$.
The limits include
the constraints on millicharged particles (shaded light gray)
\cite{Davidson:2000hf, Acciarri:2019jly, Ball:2020dnx},
the electroweak precision measurements for
the $Z$ mass shift (dashed red) \cite{Du:2019mlc},
the $Z$ invisible decay (dashed green)~\cite{ALEPH:2005ab},
the di-lepton high mass resonance search
at ATLAS ({dashdotted} blue) ~\cite{ATLAS:2019vcr},
and the {monojet} search at ATLAS (solid black)
\cite{Aaboud:2017phn}.
}
\label{fig:limit-mchi-eps2}
\end{centering}
\end{figure}
Fig.~\ref{fig:limit-mchi-eps2} shows various experimental
constraints on the model, including
the
{constraints}
from millicharged particle searches
\cite{Davidson:2000hf, Acciarri:2019jly, Ball:2020dnx},
the electroweak precision measurements for
the $Z$ mass shift \cite{Du:2019mlc},
the $Z$ invisible decay \cite{ALEPH:2005ab},
the di-lepton high mass resonance search
at ATLAS \cite{ATLAS:2019vcr},
and the {monojet} search at ATLAS
\cite{Aaboud:2017phn}.
Here we choose $m_1 = 3$ GeV,
$m_2 = 700$ GeV, $g_F = 1.5$, $g_W = 1$,
and $\epsilon_1 \ll \epsilon_2$.
Throughout this analysis
we use $m_2 = 700$ GeV, $g_F = 1.5$, and $g_W = 1$
as the default values for these three parameters
as in {Ref.\ \cite{Du:2019mlc}};
in the {parameter} space
of interest, we have
$m_1 \sim \mathrm{GeV} \ll m_2$
so that the dark photon mass
$m_{A'} \simeq m_1$,
and the heavy $Z'$ boson
has a mass $m_{Z'} \simeq m_2$.
For {the hidden fermion mass} $m_\psi \gtrsim 3$ GeV
the electroweak constraint on the $Z$ mass shift
gives the most stringent limit,
$\epsilon_2 \lesssim 0.036$,
whereas for the mass range
$0.3$ GeV $\lesssim m_\psi \lesssim 3$ GeV,
the leading constraints come from
the recent ArgoNeuT data \cite{Acciarri:2019jly}
and the milliQan demonstrator data \cite{Ball:2020dnx}.
We note that the mass fraction of the millicharged DM
is constrained to be $ \lesssim 0.4\%$ by the CMB data
\cite{Boddy:2018wzy, dePutter:2018xte, Kovetz:2018zan},
which is satisfied in the parameter
space of interest of our model \cite{Du:2019mlc}.
\section{A mini-overview of lifetime-frontier detectors}
\label{sec: detector-review}
A number of new lifetime-frontier
detectors have been proposed recently
at the LHC,
which can be used to search for LLPs.
Table \ref{tab:detectors} shows
the angular coverage,
location, size, and expected running time
of these new detectors.
We classify the detectors into three
categories: forward detectors,
central transverse detectors,
and precision timing detectors.
The forward detectors include
FACET \cite{FACET:talk1},
FASER \cite{Feng:2017uoz, Ariga:2018zuc,
Ariga:2018uku, Ariga:2018pin, Ariga:2019ufm},
{FASER2} \cite{ Ariga:2019ufm, Kling:2021fwx},
AL3X \cite{Gligorov:2018vkc},
and {MoEDAL-MAPP} \cite{Staelens:2019gzt}.
The central transverse detectors include
CODEX-b \cite{Gligorov:2017nwh, Aielli:2019ivi},
MATHUSLA
\cite{Chou:2016lxi, Curtin:2018mvb, Alpigiani:2018fgd,
Lubatti:2019vkf, Alpigiani:2020tva},
ANUBIS \cite{Bauer:2019vqk}.
The precision timing detectors include
CMS-MTD \cite{CMStiming},
ATLAS-HGTD \cite{Allaire:2018bof},
and LHCb-TORCH~\cite{LHCb:2017MTD, LHCb:2008vvz}.
Below we provide a mini-overview
of the new lifetime frontier detectors.
\begin{table*}[htbp]
\begin{tabular}{|l|c|c|c|c|}
\hline
Detector
& \multicolumn{1}{|p{1.5cm}|}{\centering $\eta$}
& \multicolumn{1}{|p{4cm}|}{\centering Distance from IP (m)}
& \multicolumn{1}{|p{3cm}|}{\centering Decay volume (m$^3$)}
& \multicolumn{1}{|p{2cm}|}{\centering LHC runs}
\\ \hline
\hline
FACET~\cite{FACET:talk1}
& $[6, 7.2]$
& $100$ ({upstream})
& $12.3$
& run 4 (2027)
\\
\hline
FASER~\cite{Feng:2017uoz, Ariga:2018zuc,
Ariga:2018uku, Ariga:2018pin, Ariga:2019ufm}
& $> 9$
& $480$ (downstream)
& $0.047$
& run 3 (2022)
\\
\hline
{FASER2} \cite{ Ariga:2019ufm, Kling:2021fwx}
& $> 6.87$
& $480$ (downstream)
& $15.7$
& HL-LHC
\\
\hline
AL3X~\cite{Gligorov:2018vkc}
& $[0.9, 3.7]$
& $5.25$ ({upstream})
& $915.2$
& run 5 ({2032})
\\
\hline
{MoEDAL-MAPP} \cite{Staelens:2019gzt}
& $\sim 3.1$
& $55$ ({upstream})
& $\sim 150$
& run 3 (2022)
\\
\hline
\hline
CODEX-b~\cite{Gligorov:2017nwh, Aielli:2019ivi}
& $[0.14, 0.55]$
& $26$ (transverse)
& $10^3$
& run 4 (2027)
\\
\hline
MATHUSLA
\cite{Chou:2016lxi, Curtin:2018mvb, Alpigiani:2018fgd,
Lubatti:2019vkf, Alpigiani:2020tva}
& $[0.64, 1.43]$
& $60$ (transverse)
& $2.5 \times 10^5$
& HL-LHC
\\
\hline
ANUBIS~\cite{Bauer:2019vqk}
& [0.06, 0.21]
& $24$ (transverse)
& $\sim 1.3 \times 10^4$
& HL-LHC
\\
\hline
\hline
CMS-MTD~\cite{CMStiming}
& $[-3, 3]$
& {$1.17$ (barrel), $3.04$ (endcaps)}
& 25.4
& HL-LHC
\\
\hline
ATLAS-HGTD~\cite{Allaire:2018bof}
& $[2.4, 4]$
& $ 3.5$ (endcaps)
& $8.7$
& HL-LHC
\\
\hline
LHCb-TORCH~\cite{LHCb:2017MTD, LHCb:2008vvz}
& $[1.6, 4.9]$
& $9.5 $ {(beam direction)}
& --
& HL-LHC
\\
\hline
\end{tabular}
\caption{Proposed detectors for long-lived particle searches at the LHC.
The first column shows the detector name,
the second column shows the pseudorapidity coverage,
the third column shows the distance from {interaction point (IP)}
to the near side of the detector
and the location
(to far side of the detector, for FASER),
the fourth column shows the decay volume of the detector,
and the last column shows the starting time of {data-taking}.
The first {five} detectors are located at the
forward region of the {corresponding} IP;
the middle three detectors are located
at the far central {transverse} region of the
{corresponding} IP;
the last three detectors are the
precision timing detectors
to be installed at CMS, ATLAS and LHCb
respectively to control the HL-LHC
pile-up {background}.
The HL-LHC is expected to start
{data-taking} in 2027 (run 4) \cite{LHCtime}.
Here ``upstream'' (``downstream'')
means that the detector is located in
the clockwise (anti-clockwise) direction
of the {corresponding} IP,
{viewed from above}.}
\label{tab:detectors}
\end{table*}
\subsection{Forward detectors}
FASER
(the ForwArd Search ExpeRiment),
is located at $480$ m downstream of the ATLAS detector
along the beam axis
\cite{Feng:2017uoz, Ariga:2018zuc, Ariga:2018uku, Ariga:2018pin, Ariga:2019ufm}.
FASER has a cylindrical
decay volume of length ${L} = 1.5$ m and radius $R = 10$ cm.
FASER has been installed in the TI12 tunnel at the LHC
and is expected to collect data during LHC Run 3 (2022)
\cite{FASER:2019dxq}.
The upgraded version, FASER2, with a decay volume of length ${L = 5}$ m
and radius $R = 1$ m,
is proposed to be installed during the HL-LHC run (2026-35)
\cite{ Ariga:2019ufm, Kling:2021fwx}.
\begin{figure}[htbp]
\begin{centering}
\includegraphics[width=0.5\textwidth]{figures/FACET-new-layout.pdf}
\caption{Schematic layout of the proposed
FACET detector (side view) \cite{albrow}.}
\label{fig:facet-layout1}
\end{centering}
\end{figure}
FACET
(Forward-Aperture CMS ExTension)
is a new lifetime frontier detector which
is proposed to be installed $\sim$100 m
upstream of the CMS detector
along the beam axis \cite{FACET:talk1}.
FACET is proposed to be built based on the CMS Phase 2 Upgrade concept,
combining silicon tracker, timing detector, HGCAL-type EM/HAD calorimeter,
and GEM-type muon system in a compact design \cite{FACET:talk1, FACET:talk2};
the latest design of the
FACET detector is shown in Fig.\ \ref{fig:facet-layout1}.
The decay volume of the FACET experiment
is an enlarged LHC quality vacuum beam pipe
which is 18 m long and has a radius of 50 cm
\cite{FACET:LOI, FACET:talk1, FACET:talk2}.
The FACET detector is shielded by about {35}-50 m
of steel (in the Q1-Q3 quadrupoles and D1 dipole)
in front of it \cite{FACET:LOI};
additional shielding materials are placed before the decay volume,
as shown in Fig.\ \ref{fig:facet-layout1}.
The FACET detector,
surrounding the LHC beam pipe which has a
radius $R = 18$ cm, is placed behind
the decay volume.
As a newly proposed far forward detector, FACET has several merits.
The {35}-50 m steel shielding before FACET,
corresponding to $200-300$ nuclear interaction lengths,
is comparable to the shielding material for FASER, which
is $\sim 100$ m of concrete/rock,
corresponding to $\sim 240$ nuclear interaction lengths.
FACET will benefit from the high quality LHC vacuum
pipe as the decay volume \cite{FACET:talk1, FACET:talk2}.
FACET plans to have both EM
and HAD calorimeters \cite{FACET:LOI},
whereas FASER has only an EM calorimeter
\cite{Ariga:2018zuc, Ariga:2018uku, Ariga:2018pin, Ariga:2019ufm}.
This allows FACET to have a better detection efficiency
for the hadronic decays of the DP,
especially for the neutral hadronic decays.
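The geometric advantage of FACET over FASER can be illustrated with a simple toy estimate. The sketch below uses only the distances and decay-volume lengths from Table \ref{tab:detectors}; the lab-frame decay length is an assumed benchmark, and angular acceptance and production spectra are deliberately ignored:

```python
import math

def decay_prob(L, dL, d):
    # Probability that a particle pointed at the detector decays inside
    # the decay volume [L, L + dL], given lab-frame decay length d.
    return math.exp(-L / d) - math.exp(-(L + dL) / d)

# Geometry from Table I: (distance to near side, decay-volume length) in m.
facet = (100.0, 18.0)
faser = (480.0, 1.5)

d = 200.0  # assumed lab-frame decay length (beta * gamma * c * tau) in meters
ratio = decay_prob(*facet, d) / decay_prob(*faser, d)
assert ratio > 10  # FACET's closer location and longer volume dominate here
```

For this benchmark the per-particle decay probability in FACET exceeds that in FASER by well over an order of magnitude, in line with the discussion above.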
AL3X
(A Laboratory for Long-Lived eXotics)
is an on-axis cylindrical detector which
has been proposed to be installed
at the ALICE experiment during LHC Run 5 \cite{Gligorov:2018vkc}.
The detector will make use of
the existing ALICE time projection chamber and the L3 electromagnet.
It is also envisioned to move the ALICE detector by 11.25 m downstream from its current location,
providing space for a spherical shell segment of tungsten to shield the detector from the IP.
The AL3X detector is then expected to be located $5.25$ m away from the IP along the beam axis,
with a 12 m long cylindrical decay volume of a 0.85 m inner radius
and a 5 m outer radius.
The MoEDAL-MAPP {detector}
is the MAPP
(Apparatus for the detection of Penetrating Particles)
detector at MoEDAL
(Monopole and Exotics Detector at the LHC)
\cite{Staelens:2019gzt}, which
is proposed to be installed at the UGCI gallery
near the LHCb experiment (IP8) in future LHC runs.
MoEDAL-MAPP is located
$55$ m from IP8,
at an angle of $5^{\circ}$ away
from the beam line,
with a fiducial volume
of $\sim$150 m$^3$
\cite{Staelens:2019gzt}.
\subsection{Central detectors}
CODEX-b
(Compact Detector for Exotics at LHCb)
has been proposed to be constructed at the LHCb
cavern
\cite{Gligorov:2017nwh, Aielli:2019ivi}.
The decay volume is designed to be
$10\, \rm{m} \times 10 \,\rm{m} \times 10\, \rm{m}$.
It is located $\sim$5 m along the $z$ axis (beam direction)
and $\sim$26 m in the transverse direction away
from the LHCb IP,
with a pseudorapidity coverage of $0.14 <\eta< 0.55$.
The demonstrator detector, CODEX-$\beta$
(about $2\, \rm{m} \times 2 \,\rm{m} \times 2\, \rm{m}$)
has been developed for the LHC Run 3 \cite{Aielli:2019ivi}.
MATHUSLA
(MAssive Timing Hodoscope for Ultra-Stable neutraL pArticles)
is a new proposed experiment
near the ATLAS or CMS {IP}
\cite{Chou:2016lxi, Curtin:2018mvb, Alpigiani:2018fgd,
Lubatti:2019vkf, Alpigiani:2020tva}.
It is proposed to be placed $\sim$68 m downstream from the IP
and $\sim$60 m above the LHC beam axis
with a decay volume
of $100\, \rm{m} \times 100\, \rm{m} \times 25\, \rm{m}$
\cite{Alpigiani:2020tva}.
MATHUSLA was previously
proposed to be installed at
$\sim$100 m downstream from the IP
and $\sim$100 m above the LHC beam axis
with a decay volume of
$200\, \rm{m} \times 200\, \rm{m} \times 20\, \rm{m}$
\cite{Chou:2016lxi, Curtin:2018mvb, Alpigiani:2018fgd, Lubatti:2019vkf}.
In this analysis, we adopt the parameters from the
recent proposal \cite{Alpigiani:2020tva}.
ANUBIS
(AN Underground Belayed In-Shaft search experiment)
\cite{Bauer:2019vqk}
is a newly proposed experiment taking advantage of
the 18 m diameter,
56 m long PX14 installation shaft
of the ATLAS experiment.
The proposed detector consists of four tracking stations,
each with a cross-sectional area of 230 m$^2$,
spaced 18.5 m apart.
\subsection{Precision timing detectors}
To mitigate the high pile-up background at the
HL-LHC, various precision timing detectors will
be installed at
CMS \cite{CMStiming},
ATLAS \cite{Allaire:2018bof, Allaire:2019ioj,Garcia:2020wxj},
and LHCb \cite{LHCb:2017MTD},
which can be used for LLP searches
\cite{Liu:2018wte, Mason:2019okp,Kang:2019ukr,
Cerri:2018rkm, Du:2019mlc, Liu:2020vur, Cheung:2021utb}.
The CMS-MTD
is a precision minimum ionizing particle (MIP) timing detector
with a timing resolution of 30 picoseconds
\cite{CMStiming}.
The timing layers will be installed between
the inner trackers and the electromagnetic calorimeter
for the barrel and {endcap} regions.
The timing detector in the barrel region
has a length of 6.08 m along the beam axis direction
and a transverse distance of $1.17$ m away from the beam.
The timing detectors in the {endcap} regions
have a pseudorapidity coverage of $1.6 < | \eta | < 3.0$
and are located $\sim 3.0$ m from the IP.
The decay volume of LLPs at CMS-MTD is
$\sim 25.4$ ${\rm m}^3$ if one demands that
the LLPs decay before arriving {at} the timing layers and
the decay vertex has a transverse distance of
$0.2\, {\rm m} < L_T < 1.17\, {\rm m}$ from the beam
axis \cite{Liu:2018wte, Liu:2020vur}.
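As a quick consistency check, the quoted $\sim 25.4$ m$^3$ fiducial volume follows from the barrel numbers above, assuming a simple cylindrical shell of length 6.08 m between the two transverse radii:

```python
import math

# Cylindrical-shell volume implied by the quoted CMS-MTD numbers:
# barrel length 6.08 m, transverse decay-vertex window 0.2 m < L_T < 1.17 m.
volume = math.pi * (1.17**2 - 0.2**2) * 6.08
assert abs(volume - 25.4) < 0.1  # reproduces the ~25.4 m^3 quoted above
```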
The HGTD (High Granularity Timing Detector)
has been proposed to be installed
in front of the ATLAS endcap and forward calorimeters
at $z = \pm 3.5$ m from the IP
during the ATLAS Phase-II upgrade
\cite{Allaire:2018bof, Allaire:2019ioj, Garcia:2020wxj}.
The ATLAS-HGTD can cover the pseudorapidity range
of $2.4 < | \eta | < 4.0$, and
is expected to have a time resolution of 35 ps (70 ps) per hit
at the start (end) of HL-LHC \cite{Garcia:2020wxj}.
The decay volume of ATLAS-HGTD is $\sim 8.7 \,\rm{m}^3$
if LLPs are required to decay before arriving at the timing detector and
the decay vertex has a transverse distance of
$0.12\, {\rm m} < L_T < 0.64\, {\rm m}$
\cite{Garcia:2020wxj}.
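The quoted decay volumes can be cross-checked with simple cylindrical-shell geometry; the sketch below encodes our reading that the CMS-MTD volume spans the 6.08 m barrel length and that the HGTD volume extends from the IP to $z = \pm 3.5$ m on both sides:

```python
import math

def shell_volume(r_in, r_out, length):
    """Volume of a cylindrical shell (annular cross section times length), in m^3."""
    return math.pi * (r_out**2 - r_in**2) * length

# CMS-MTD barrel: 0.2 m < L_T < 1.17 m over the 6.08 m barrel length
v_mtd = shell_volume(0.20, 1.17, 6.08)

# ATLAS-HGTD: 0.12 m < L_T < 0.64 m, from the IP to z = +-3.5 m (both endcaps)
v_hgtd = shell_volume(0.12, 0.64, 2 * 3.5)

print(f"CMS-MTD decay volume   ~ {v_mtd:.1f} m^3")   # ~ 25.4 m^3
print(f"ATLAS-HGTD decay volume ~ {v_hgtd:.1f} m^3")  # ~ 8.7 m^3
```

Both numbers reproduce the values quoted in the text.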
The TORCH (Time Of internally Reflected CHerenkov light) detector
has been proposed for the next upgrade of LHCb \cite{LHCb:2017MTD}.
TORCH will be located at $z\sim 9.5$ m from the LHCb IP
with an angular acceptance of $1.6 < \eta < 4.9$;
its timing precision per track is 15 ps \cite{LHCb:2017MTD}.
\section{Dark photon production}
\label{sec:DP_production}
In our model, there are three main processes for producing
the dark photon $A'$ at the LHC:
rare meson decays (hereafter MD),
coherent proton bremsstrahlung (hereafter PB),
and hidden sector radiation (hereafter HR);
the corresponding Feynman diagrams are shown
in Fig.~\ref{fig:feyndiag}.
The MD and PB processes are common
to all dark photon models,
since in both cases dark photons are produced via
the interaction between the dark photon and
SM charged particles.
The HR process, which is mediated by the interaction between
the dark photon and the hidden sector particle $\psi$,
is new in our model \cite{Du:2019mlc}.\footnote{Here
we do not consider
the direct dark photon production channels
$q\bar{q} \to A'$, $q\bar q \to g A'$, $q g\to q A'$, and $\bar{q} g \to \bar{q} A'$,
because they suffer from large PDF uncertainties
for sub-GeV $A'$
and are suppressed by $\epsilon_1$,
which is much smaller than $\epsilon_2$ in the HR process.}
\begin{figure}[htbp]
\begin{centering}
\includegraphics[width=0.28\textwidth]{./figures/feyn-meson-decay}
\includegraphics[width=0.3\textwidth]{./figures/feyn-proton-brems}
\includegraphics[width=0.26\textwidth]{./figures/feyn-diag-v2}
\caption{
Feynman diagrams for dark photon production at the LHC:
meson decay (left),
proton bremsstrahlung (middle),
and hidden fermion radiation (right).
}
\label{fig:feyndiag}
\end{centering}
\end{figure}
\subsection{Meson decays}
\label{subsec:MD}
Dark photons can be produced in
the decay $m \to \gamma + A'$,
where $m$ denotes a light meson,
as shown in the left diagram of
Fig.~\ref{fig:feyndiag};
the branching ratio can be computed via \cite{Batell:2009di}
\begin{equation}
{\rm BR} \left (m \to A' + \gamma \right) = 2\, \epsilon^{2}
\left( 1-\frac{M_{A'}^2}{M_{m}^2} \right)^{3}
{\rm BR}\left(m \to \gamma \gamma \right),
\label{eq:brm}
\end{equation}
where $\epsilon$ is the coupling constant given in Eq.~\eqref{eq:epsilon}.
In the parameter space of interest in our model,
one has $\epsilon \approx (0.27/e)\, \epsilon_1$ for $m_1 \lesssim 30$ GeV.
For light mesons, one has
${\rm BR}(\pi^{0} \rightarrow \gamma \gamma ) \simeq 0.99$
and ${\rm BR}(\eta \rightarrow \gamma \gamma) \simeq 0.39$ \cite{Zyla:2020zbs}.
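As a numerical illustration, Eq.~\eqref{eq:brm} can be evaluated directly; in the sketch below the benchmark $\epsilon$ and the (PDG) meson masses are inputs we supply, not values fixed by the text:

```python
def br_meson_to_dp(eps, m_dp, m_meson, br_gamgam):
    """BR(m -> A' + gamma), rescaling the measured two-photon branching ratio."""
    if m_dp >= m_meson:
        return 0.0  # decay kinematically forbidden
    return 2.0 * eps**2 * (1.0 - m_dp**2 / m_meson**2)**3 * br_gamgam

# benchmark point: eps = 1e-6, m_A' = 50 MeV; meson masses are PDG values (GeV)
print(br_meson_to_dp(1e-6, 0.05, 0.1350, 0.99))  # pi0 -> A' gamma
print(br_meson_to_dp(1e-6, 0.05, 0.5479, 0.39))  # eta -> A' gamma
```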
Since light mesons are copiously produced
in the forward direction in
high-energy $pp$ collisions
(for example, the
production cross section of $\pi^0$ ($\eta$)
in each hemisphere at the LHC
is $1.6 \times 10^{12}$ pb
($1.7 \times 10^{11}$ pb) \cite{Ariga:2018uku}),
dark photons from rare meson decays
can be a leading dark photon production mode at the LHC
if the decay is kinematically allowed
\cite{Feng:2017uoz}.
We neglect the $m \to A' A'$ process
because $\epsilon \ll 1$
for the LLDP.
In our analysis,
we generate the four-momentum spectra
of the $\pi^0/\eta$ mesons
using the EPOS-LHC \cite{Pierog:2015} model in
CRMC \cite{Ulrich:crmc},
with $10^5$ simulated inelastic $pp$ collisions at the
LHC with $\sqrt{s}=13$ TeV.
We then boost the momentum of the dark photon
(which is isotropically distributed in the $\pi^0/\eta$ rest frame)
to the lab frame using the meson momentum.
Our simulations are found to be consistent with
FORESEE \cite{Kling:2021fwx}.
We also simulate the heavy mesons $D^0$, $B^0$, and $J/\psi$
using PYTHIA 8 \cite{Sjostrand:2014zea}.
We find that the DP production cross section
from decays of these heavy mesons
is about five orders of magnitude smaller than that from the light mesons ($\pi^0$ and $\eta$);
we therefore neglect the contribution from heavy meson decays in our analysis.
\subsection{Proton bremsstrahlung}
\label{subsection:proton_bremss}
The proton bremsstrahlung process
is another major production mode for
light dark photons in high-energy
$pp$ collisions; the corresponding Feynman
diagram is the middle one
in Fig.~\ref{fig:feyndiag}.
The dark photon signal arising from
proton bremsstrahlung
can be computed with the
Fermi-Weizs\"acker-Williams (FWW) method
\cite{Fermi:1924, Williams:1934, Weizsacker:1934},
in which the proton is treated as a coherent object;
the total number of dark photons produced
is given by \cite{Feng:2017uoz}
\begin{eqnarray}
N_{A'}^{\rm PB} &=& {\cal L} \,
{|F_{1}\left(m_{A^{\prime}}^{2}\right)|^{2}}
\int d z \, d p_{T}^{2} \,\sigma_{p p}\left(s^{\prime}\right)
w\left(z, p_{T}^{2}\right)
\Theta\left(\Lambda_{\mathrm{QCD}}^{2}-q_{\rm{min}}^{2}\right),
\label{eq:proton-brem}
\end{eqnarray}
where $N_{A'}^{\rm PB}$ is the number of dark photon
events from the PB process,
${\cal L}$ is the integrated luminosity,
$F_1$ is the form factor function,
$z=p^L_{A'}/p_p$ with $p^L_{A'}$ being
the longitudinal momentum of
the dark photon
and $p_p$ the proton beam momentum,
$p_T$ is the transverse momentum of the dark photon,
$\sigma_{pp}(s^{\prime})$ is the inelastic cross section
\cite{Tanabashi:2018oca} with $s' = 2 m_p (E_p -E_{A'})$
in the rest frame of one of the colliding protons,
$w\left(z, p_{T}^{2}\right)$ is the splitting function,
$\Lambda_{\mathrm{QCD}} \simeq 0.25$ GeV
is the QCD scale,
and $q$ is the momentum carried by the virtual photon
in the middle diagram
in Fig. \ref{fig:feyndiag}.
The splitting function $w\left(z, p_{T}^{2}\right)$ in Eq.~\eqref{eq:proton-brem}
is given by~\cite{Brunner:2014, Kim:1973, Tsai:1977}
\begin{eqnarray}
w\left(z, p_{T}^{2}\right) &\simeq& \frac{\epsilon^{2} \alpha}{2 \pi H}\left\{\frac{1+(1-z)^{2}}{z}-2 z(1-z)\left(\frac{2 m_{p}^{2}+m_{A^{\prime}}^{2}}{H}-z^{2} \frac{2 m_{p}^{4}}{H^{2}}\right)\right. \nonumber \\
&& \left. +\, 2 z(1-z)\left(z+(1-z)^{2}\right) \frac{m_{p}^{2} m_{A^{\prime}}^{2}}{H^{2}} + 2 z(1-z)^{2} \frac{m_{A^{\prime}}^{4}}{H^{2}}\right\},
\end{eqnarray}
where $H=p_{T}^{2}+(1-z) m_{A^{\prime}}^{2}+z^{2} m_{p}^{2}$.
To guarantee the validity of the FWW
approximation, the Heaviside function $\Theta$ is imposed in Eq.~\eqref{eq:proton-brem}
with the minimal virtuality of the photon
cloud around the beam proton given by \cite{Kim:1973, Tsai:1977}
\begin{equation}
\left|q_{\min }^{2}\right| \approx \frac{1}{4 E_{p}^{2} z^{2}(1-z)^{2}}\left[p_{T}^{2}+(1-z) m_{A^{\prime}}^{2}+z^{2} m_{p}^{2}\right]^{2}.
\end{equation}
The form factor $F_1(p_{A'}^2)$ in Eq.~\eqref{eq:proton-brem} is given by~\cite{Feng:2017uoz, Faessler:2009tn}
\begin{equation}
F_1(p_{A'}^2) = \sum_{V = \rho,\, \rho',\, \rho'',\, \omega,\, \omega',\, \omega''} \frac{f_V m_V^2}{m_V^2 - p_{A'}^2 - i m_V \Gamma_V},
\label{eq:PB_formfactor}
\end{equation}
where $m_V$ ($\Gamma_V$) is the mass (decay width) of the vector meson $V$, and
$f_{\rho} = 0.616$, $f_{\rho'} = 0.223$, $f_{\rho''} = -0.339$, $f_{\omega} = 1.011$, $f_{\omega'} = -0.881$, and $f_{\omega''} = 0.369$.
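The PB ingredients above are straightforward to code; in the sketch below the vector-meson masses and widths are PDG values that we supply ourselves (only the couplings $f_V$ are taken from the text):

```python
import math

ALPHA = 1.0 / 137.036  # fine-structure constant
M_P = 0.938            # proton mass [GeV]

def splitting_w(z, pt2, m_dp, eps):
    """FWW splitting function w(z, pT^2) for p -> p + A'."""
    H = pt2 + (1 - z) * m_dp**2 + z**2 * M_P**2
    return (eps**2 * ALPHA / (2 * math.pi * H)) * (
        (1 + (1 - z)**2) / z
        - 2 * z * (1 - z) * ((2 * M_P**2 + m_dp**2) / H - z**2 * 2 * M_P**4 / H**2)
        + 2 * z * (1 - z) * (z + (1 - z)**2) * m_dp**2 * M_P**2 / H**2
        + 2 * z * (1 - z)**2 * m_dp**4 / H**2
    )

def q2_min(z, pt2, m_dp, e_p):
    """Minimal virtuality of the photon cloud around the beam proton."""
    return (pt2 + (1 - z) * m_dp**2 + z**2 * M_P**2)**2 / (4 * e_p**2 * z**2 * (1 - z)**2)

# (mass [GeV], width [GeV], coupling f_V); masses/widths are PDG values
F_V = {"rho":  (0.775, 0.149, 0.616),  "rho'":   (1.465, 0.400, 0.223),
       "rho''": (1.720, 0.250, -0.339), "omega":  (0.783, 0.0085, 1.011),
       "omega'": (1.410, 0.290, -0.881), "omega''": (1.670, 0.315, 0.369)}

def form_factor(p2):
    """F_1(p_{A'}^2) as a sum over rho- and omega-like poles."""
    return sum(f * m**2 / (m**2 - p2 - 1j * m * g) for m, g, f in F_V.values())
```

Evaluating `abs(form_factor(p2))` near $p^2 \simeq (0.78\ {\rm GeV})^2$ exhibits the $\rho/\omega$ resonance enhancement responsible for the PB peak discussed later.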
\subsection{Hidden radiation}
Dark photons can also be produced via radiation off
the hidden fermion in the HR process,
as shown in the right diagram of Fig.~\ref{fig:feyndiag}.
Within certain regions of the parameter
space of the models in Ref.~\cite{Du:2019mlc},
the HR process can be more important
than the MD and PB processes.
For the models considered in this analysis,
dark photons in the HR process
are radiated from
the hidden sector fermions $\psi$,
which are pair-produced at the
LHC via the $q \bar q \to \gamma^{*}/Z/Z' \to \bar \psi \psi$
process.
In the MD and PB processes,
the dark photon production cross section
is suppressed by the small $\epsilon$ parameter
(given in Eq.~\eqref{eq:epsilon})
needed for the long lifetime of the dark photon.
In the HR process, however,
the LHC production of $\psi$ is not
controlled by $\epsilon$, so that
the LHC cross section for $\psi$ pair production
can be sizable even for heavy
$\psi$.
\begin{figure}[htbp]
\includegraphics[width=0.4\textwidth]{figures/xsec_new4}
\caption{
The contributions
to the $pp \rightarrow \psi\bar{\psi}$ cross section
at the LHC
from three different mediators:
$\gamma$ (blue-dashed),
$Z$ (black-dotted), and
$Z'$ (red dash-dotted).
The total cross section (green-solid),
taking into account all contributions
(including the $A'$ contribution and the
interference terms), is also shown.
We use
$\epsilon_{1}=6\times 10^{-7}$,
$\epsilon_{2}= 0.005$,
and $m_{A'} = 0.4$ GeV.
The gray shaded region
indicates the parameter space
excluded by the millicharge constraints
\cite{Davidson:2000hf, Acciarri:2019jly, Ball:2020dnx}.
We use NNPDF23LO \cite{Ball:2012cx},
the default PDF set in MADGRAPH 5.
}
\label{fig:prop-xsec-psipair}
\end{figure}
To obtain the contribution from the HR process,
we use FeynRules \cite{Alloul:2013bka}
to produce the UFO file for our model,
which is then passed into MADGRAPH 5 \cite{Alwall:2014hca}
to generate the $p p \to \psi \bar{\psi}$ events at the LHC.
We further use PYTHIA 8
\cite{Sjostrand:2014zea, Carloni:2010tw, Carloni:2011kk}
to simulate the dark radiation process of the $\psi$ particle
to obtain the dark photons.
Fig.~\ref{fig:prop-xsec-psipair} shows the
contributions to the
$p p \to \psi \bar\psi$ cross section at the LHC
from three different mediators
(photon, $Z$, and $Z'$),
where the interference effects have been
neglected.\footnote{We neglect the process mediated
by the dark photon, since it is suppressed by the
small $\epsilon$ parameter needed for the LLDP, so that
its contribution is several orders of magnitude smaller than
those of the other three mediators in our analysis.
}
We use MADGRAPH 5 \cite{Alwall:2014hca}
to compute the cross sections, where
we have fixed
$m_{A'} = 0.4$ GeV,
$\epsilon_1 = 6 \times 10^{-7}$,
and $\epsilon_2 = {0.005}$.
For $m_{\psi} \lesssim 8$ GeV
the dominant contribution to the
$\psi \bar \psi$ pair-production cross section comes
from the $s$-channel photon process;
for higher $\psi$ masses,
the contributions from $Z$ and $Z'$
exchange become more important.
\subsection{Comparison of the three DP production channels}
In Fig.~\ref{fig:compare-distribution},
we compare
the three dark photon production channels
at the LHC, both in the full $4\pi$ solid angle
and in the very forward region,
defined by the dark photon pseudorapidity $\eta_{A'} > 6$.\footnote{The
angular acceptance of FACET is $6< \eta <7.2$ \cite{FACET:talk1},
and the angular acceptance of FASER is $\eta > 9$
\cite{Ariga:2018zuc, Ariga:2018uku, Ariga:2018pin, Ariga:2019ufm}.}
We choose $\epsilon_{1} = 10^{-6}$ and $\epsilon_2 = 0.005$
for both panels of Fig.~\ref{fig:compare-distribution}.
The dark photon cross section
in the HR process is calculated by
$\sigma_{A'}^{\rm HR} = \bar{n}_{A'} \sigma_{p p \to \psi \bar\psi}$
where $\bar{n}_{A'}$ is the expected number
of dark photons per $\psi \bar \psi$ event.
In our analysis, $\bar{n}_{A'}$ is computed
as the ratio of the total number of dark photons
to the total number of $\psi \bar \psi$ events
in the simulation,
namely $\bar{n}_{A'} = N_{A'}^{\rm HR } / N_{\psi \bar \psi}$.
\begin{figure}[htbp]
\includegraphics[width=0.4\textwidth]{figures/diff-production-full-fwd}
\includegraphics[width=0.4\textwidth]{figures/diff-production-full-fwd-0p2}
\caption{The LHC production cross section $\sigma_{A'}$ of dark photons
from the three contributions: HR (red lines), MD (blue lines), and PB (black lines).
The solid lines correspond to the cross section in the full solid angle,
and the dashed lines to the cross section in
the very forward region with $\eta_{A'} > 6$.
Here we use $\epsilon_1 = 10^{-6}$ and $\epsilon_2 = 0.005$ for both panels;
we choose $m_{1} = 0.4$ GeV in the left panel
and $m_{1} = 0.2 \,m_{\psi}$ in the right panel.
The gray shaded region
($m_{\psi} \lesssim 0.45$ GeV for $\epsilon_2 = 0.005$)
indicates the parameter space
excluded by the millicharge constraints
\cite{Davidson:2000hf, Acciarri:2019jly, Ball:2020dnx}.
}
\label{fig:compare-distribution}
\end{figure}
The left panel of Fig.~\ref{fig:compare-distribution}
shows the dark photon cross sections as a function of the
hidden fermion mass $m_\psi$,
with the dark photon mass fixed at
$m_{A'} = 0.4$ GeV.
The dark photon cross section in the HR process
decreases with the hidden fermion mass $m_\psi$;
the cross sections in the MD and PB processes
are independent of $m_\psi$,
since these processes do not involve the hidden fermion $\psi$.
For light $\psi$ the HR process dominates over the
MD and PB processes,
whereas for heavy $\psi$ the MD and PB
processes become more important.
In particular, the HR process dominates
dark photon production for
$m_\psi \lesssim$ 5 GeV (30 GeV) in the
very forward region ($4\pi$ solid angle).
The right panel of Fig.~\ref{fig:compare-distribution}
shows the dark photon cross sections as a function of the
dark photon mass $m_{A'}$,
with $m_\psi = 5 \, m_{A'}$.
The HR process dominates over
the entire mass range except for the small
resonance region near $m_{A'} \simeq 0.8$ GeV,
where the PB process becomes larger.
We note that, in the right panel of Fig.~\ref{fig:compare-distribution},
the resonance in the PB process is due to
the pole structure (from the various vector mesons)
in the form factor given
in Eq.~\eqref{eq:PB_formfactor},
and the kink features in the MD cross section
arise from mass threshold
effects in the meson decays.
About $10\%$ of the dark photons in the MD and PB processes
are produced in the very forward region as shown
in Fig.~\ref{fig:compare-distribution}.
For the HR process,
the number of dark photons produced in the very forward region
is sizable in the low-$m_\psi$ region,
with a fraction of up to
$\sim 15\%$
for $m_\psi \simeq 0.5$ GeV,
as shown in the left panel of Fig.~\ref{fig:compare-distribution}.
For heavy $\psi$ the cross section in
the very forward region is significantly reduced;
for example, less than 1\% of the dark photons
in the HR process are produced in the forward region
when $m_\psi \gtrsim 6$ GeV.
This is because heavier $\psi$ particles are produced more isotropically
than lighter ones and thus lead to fewer
events in the forward region.
\subsection{PDF uncertainties}
\begin{figure}[htbp]
\centering
\includegraphics[width=0.4\textwidth]{figures/pdf-uncertainty}
\caption{Comparison of LHC cross sections computed with
different PDFs.
The cross sections
$\sigma (pp\to \psi \bar\psi)$ (solid) and
$\sigma_{A'}^{\rm HR}$ of the HR process
in the $4 \pi$ angular region (dotted),
in the forward region $\eta_{A'} > 6$ (dashed),
and in the FACET detector (dash-dotted)
are computed with
NNPDF23 \cite{Ball:2012cx} (black),
NNPDF40 \cite{Ball:2021leu} (red),
and CT18 \cite{Hou:2019qau} (blue).}
\label{fig:pdf-uncertainty}
\end{figure}
For light
$\psi$ one has to integrate over the small-$x$ region
of the PDFs, where the uncertainties are large \cite{Feng:2017uoz}.
In the process $pp \to \psi \bar \psi$, the minimum value of
$x$ is
$
x_{\rm min} = {4 m_\psi^2 / s}
$
if there is no cut on the $\psi$ momentum.
Thus, for the $m_\psi = 15 \, (0.5)$ GeV case,
one has to integrate over the $x$ range down to
$x_{\rm min} \simeq 5 \times 10^{-6}\, (6 \times 10^{-9})$.
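The quoted $x_{\rm min}$ values can be reproduced with a trivial numerical check at $\sqrt{s} = 13$ TeV:

```python
def x_min(m_psi, sqrt_s=13e3):
    """Minimum parton momentum fraction in pp -> psi psibar (no momentum cut), GeV units."""
    return 4.0 * m_psi**2 / sqrt_s**2

print(f"{x_min(15.0):.1e}")  # ~ 5e-6 for m_psi = 15 GeV
print(f"{x_min(0.5):.1e}")   # ~ 6e-9 for m_psi = 0.5 GeV
```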
The minimum value of $x$ is $10^{-9}$
in the PDF sets
NNPDF23LO \cite{Ball:2012cx},
NNPDF40 \cite{Ball:2021leu}, and
CT18 \cite{Hou:2019qau}.
Thus, for the $m_\psi = 0.5$ GeV case,
the dark photon production cross section
in the HR process (denoted as $\sigma_{A'}^{\rm HR}$)
depends on the PDFs near the lower edge of
their $x$ range, where they are poorly constrained.
To check the stability of the LHC cross sections
(for small $m_\psi$)
against different PDFs,
in Fig.~\ref{fig:pdf-uncertainty} we compute various LHC cross sections,
including
$\sigma (pp\to \psi \bar\psi)$ and
$\sigma_{A'}^{\rm HR}$
in the $4 \pi$ angular region,
in the forward region $\eta_{A'} > 6$,
and
in the FACET detector,
using three different PDF sets:
NNPDF23LO
(the default in MADGRAPH 5),
NNPDF40, and CT18.
For $\sigma(pp \to \psi \bar\psi)$
at $m_\psi \simeq 0.5$ GeV,
NNPDF40 (CT18)
leads to a cross section that is about
$30\%$ ($45\%$)
of that from NNPDF23;
for $\sigma_{A'}^{\rm HR}$
in the $4 \pi$ angular region,
these two percentages become
$55\%$ ($80\%$).
This is because the $\psi$ particles have to be
energetic enough to radiate
dark photons,
which corresponds to
larger $x_{\rm min}$ values in the PDF integration
and thus smaller PDF uncertainties.
The PDF uncertainties in the $4\pi$ angular region
are smaller than in the forward region,
because the $4\pi$ region includes
events with significant transverse momentum.
In the sensitivity contours of FACET shown in
Fig.~\ref{fig:psi5DP},
the mass of $\psi$ has to satisfy $m_\psi \gtrsim 1.5$ GeV
to be consistent with the millicharge constraints.
We find that NNPDF40 (CT18) leads to a cross section
of $\sim 33\%$ ($\sim 64\%$) of that from NNPDF23
at $m_\psi \simeq 1.5$ GeV, as shown
in Fig.~\ref{fig:pdf-uncertainty}.
For the $m_\psi \simeq 15$ GeV case
(the $\psi$ mass in Fig.~\ref{fig:psi15GeV}), we find that
the cross section computed with NNPDF40 (CT18)
is $\sim 80\%$ ($\sim 97\%$) of that with NNPDF23.
Thus the PDF uncertainty on our sensitivity contours
is less significant. Furthermore,
the sensitivity contours obtained with different
PDFs, shown in Fig.~\ref{fig:psi5DP},
show that the choice of PDF only modifies the limits
at small $\epsilon_1$ values (the lower edge of
the contours), but has no noticeable effect at
large $\epsilon_1$ values (the upper edge of
the contours).
This is because
large $\epsilon_1$ values correspond to short
decay lengths, so the dark photon must have a
significant momentum to decay inside the far detectors.
For that reason, the $x_{\rm min}$ in the PDF integration
is larger for
the model points with large $\epsilon_1$
values, resulting in an insignificant PDF uncertainty.
\section{Analysis}
\label{sec:simu-and-considerations}
In this analysis, we investigate the LLDP
signals in the following four detectors:
FACET, FASER, MATHUSLA,
and CMS-MTD.
We carry out the analysis for model points
in the parameter space spanned by
the DP mass $m_{A'}$ and the DP lifetime $\tau_{A'}$.\footnote{We
select 60 grid points in the mass range $m_{A'} \in (0.1,\, 30)$ GeV,
and 600 grid points in $\tau_{A'} \in (10^{-4},\, 10^{2})$ m
for the FACET and FASER detectors
and 800 grid points in $\tau_{A'} \in (10^{-4},\, 10^{4})$ m
for the MATHUSLA detector.
The points on both axes are chosen uniformly on a log scale.}
For each model point,
we compute the DP signal events
from the MD, PB and HR processes.
For the MD and PB processes,
we obtain the DP momentum
and the position of its decay vertex
using the simulations discussed in
sections \ref{subsec:MD} and
\ref{subsection:proton_bremss}, respectively.
We then boost the daughter particles of the
dark photon decay,
which are isotropically distributed in the dark photon rest frame,
to the lab frame.
For the HR process, we use MADGRAPH 5 \cite{Alwall:2014hca}
to generate $10^6$ events for the
$p p \to \psi \bar{\psi}$ process, and use PYTHIA 8
\cite{Sjostrand:2014zea, Carloni:2010tw, Carloni:2011kk}
to simulate the hidden radiation of the $\psi$ particle
and the decay of the dark photon,
which outputs the momentum information
for the DP and its daughter
particles, as well as the decay position
of the DP.
To expedite the analysis
(only a small fraction of the events simulated with PYTHIA 8
actually fall inside the decay volume of the detectors),
we disregard the decay position
of the dark photon provided by
PYTHIA 8 and use the dark photons
that decay both inside and outside of the decay volume.
Thus, for the three far detectors (FACET, FASER, MATHUSLA),
we compute the probability of detecting a DP as follows
\begin{equation}
P_{A'} =
f(\theta, \phi)
\int_{L_{\rm min}}^{L_{\rm max}} d \ell
\frac{e^{-\ell/\ell_{A'}}} {\ell_{A'}} \, \omega \, ,
\label{eq:prob-detection}
\end{equation}
where $L_{\rm min}$ ($L_{\rm max}$)
is the minimum (maximum)
distance between the decay volume and the IP
along the $(\theta, \phi)$ direction,
with $\theta$ and $\phi$ the polar and azimuthal angles of
the dark photon, respectively,
$\ell_{A'} = \tau_{A'} |\vec{p}_{A'}|/m_{A'}$
is the decay length of the dark photon
with $\tau_{A'}$ its lifetime,
$f(\theta, \phi)$ describes the angular acceptance
of the decay volume,
and
$\omega = 1$ if the decay final states of the
DP satisfy
additional detector cuts
($\omega = 0$ otherwise).
For a cylindrical detector (e.g.\ FASER or FACET)
placed along the beam direction,
with a distance $d$ from the IP
to the near side of the detector,
the parameters in Eq.~\eqref{eq:prob-detection} are given by
\begin{eqnarray}
L_{\rm min} &=& d, \quad
L_{\rm max} = d+L, \\
f(\theta, \phi) &=& \Theta(R/{L_{\rm min}} - \tan \theta) \,\Theta(\tan \theta - r/L_{\rm max}),
\label{eq:fthetaphi}
\end{eqnarray}
where
$L$ is the length of decay volume of the detector,
$r$ ($R$) is the inner (outer) radius of the decay volume,
and $\Theta$ is the Heaviside step function.
For the FACET detector, one has
$r=18$ cm and $R=50$ cm;
for the FASER (FASER 2) detector, one has $r=0$ and $R=10$ (100) cm.
For cylindrical forward detectors,
the acceptance is often described
by a pseudorapidity range,
$f(\theta, \phi) = \Theta(\eta_{\rm{max}} - \eta_{A'}) \Theta(\eta_{A'} - \eta_{\rm{min}})$.
For the FACET detector,
one has $\eta_{\rm{min}} \simeq 6$ and
$\eta_{\rm{max}}\simeq 7.2$;\footnote{$\eta_{\rm{min}} \simeq 6$
corresponds to
the upper-left corner of the decay volume, and
$\eta_{\rm{max}}\simeq 7.2$ corresponds to
the lower-right (inner radius) corner
of the upper half of the decay volume, as shown in Fig.~\ref{fig:facet-layout1}.
}
for the FASER (FASER 2) detector,
one has $\eta_{\rm{min}} \simeq 9$ (7) and $\eta_{\rm{max}} = +\infty$.
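A minimal sketch of Eq.~\eqref{eq:prob-detection} for an on-axis cylinder follows (with $\omega = 1$; the FACET-like distance $d \simeq 101$ m and length $L \simeq 18$ m are illustrative values chosen to be consistent with $\eta_{\rm min} \simeq 6$, not specifications quoted in this section):

```python
import math

def decay_prob_cylinder(theta, d, L, r_in, r_out, ell_decay):
    """Probability that an LLP emitted at polar angle theta decays inside a
    cylindrical decay volume of length L starting at distance d from the IP
    (inner/outer radii r_in, r_out); the integral of exp(-l/l_d)/l_d is analytic."""
    l_min, l_max = d, d + L
    # angular acceptance f(theta): tan(theta) between r_in/L_max and r_out/L_min
    if not (r_in / l_max < math.tan(theta) < r_out / l_min):
        return 0.0
    return math.exp(-l_min / ell_decay) - math.exp(-l_max / ell_decay)

def eta_to_theta(eta):
    """Convert pseudorapidity to polar angle."""
    return 2.0 * math.atan(math.exp(-eta))

# FACET-like geometry (d and L assumed for illustration; r, R from the text)
d, L, r_in, r_out = 101.0, 18.0, 0.18, 0.50
p = decay_prob_cylinder(eta_to_theta(6.5), d, L, r_in, r_out, ell_decay=50.0)
```

A dark photon at $\eta_{A'} = 6.5$ with a 50 m boosted decay length then has a decay probability of a few percent, while one at $\eta_{A'} = 9$ misses the annular acceptance entirely.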
For a box-shaped detector of height $H$, width $W$, and length $L$,
located at a distance $d$ from the IP along the $z$-axis
and a distance $h$ above the LHC beam line (along the $x$-axis)
(e.g.\ MATHUSLA),
one has\footnote{Note that the distance $d$ here is different from
that in Tab.~\ref{tab:detectors}.}
\begin{eqnarray}
\label{eq:box1-new}
L_{\rm max} &=& \left\{
\begin{aligned}
&\frac{h+H}{\sin \theta \cos \phi}\quad& {\rm if} \;\;
& \tan \theta > \frac{h + H } { (d+L) \cos \phi}\; \&\; |\tan\phi | < \frac{W}{2(h+H)},\\
&\frac{d+L}{\cos \theta }\quad& {\rm if} \;\;
& \tan \theta < \frac{h + H } { (d+L) \cos \phi}\; \&\; |\sin\phi | < \frac{W}{2(d+L)\tan\theta}, \\
&\frac{W}{2\sin \theta |\sin\phi|}\quad& {\rm if} \;\;
& |\sin\phi | > \frac{W}{2(d+L)\tan\theta},
\end{aligned}
\right. \\
\label{eq:box2-new}
L_{\rm min} &=& \left\{
\begin{aligned}
&\frac{ h }{\sin \theta \cos \phi}\quad& {\rm if} \;\; & \tan \theta < \frac{h } { d \cos \phi}, \\
&\frac{d}{\cos \theta }\quad& {\rm if} \;\; & \tan \theta > \frac{h} {d\cos \phi},
\end{aligned}
\right. \\
\label{eq:box3-new}
f(\theta, \phi) &=& \Theta \left( \tan \theta - \frac{h}{(d+L)\cos \phi} \right) \,
\Theta \left( \frac{h + H }{d\cos \phi} - \tan \theta\right) \,
\Theta \left(\frac{W}{2 h} - |\tan \phi| \right) \Theta \left(\cos\phi \right).
\end{eqnarray}
For the MATHUSLA detector, we use
$d=$ 68 m,
$h=$ 60 m,
$W=$ 100 m,
$L=$ 100 m,
and $H=$ 25 m
\cite{Alpigiani:2020tva}.
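Eqs.~\eqref{eq:box1-new}--\eqref{eq:box3-new} can be transcribed directly; a sketch using the MATHUSLA dimensions listed above:

```python
import math

def box_geometry(theta, phi, d, h, W, L, H):
    """L_min, L_max and acceptance for a box detector of height H, width W,
    length L, at distance d along z and height h above the beam line.
    Returns None if the (theta, phi) direction misses the box."""
    t, c, s = math.tan(theta), math.cos(phi), math.sin(phi)
    # acceptance f(theta, phi)
    inside = (t > h / ((d + L) * c) and t < (h + H) / (d * c)
              and abs(math.tan(phi)) < W / (2 * h) and c > 0)
    if not inside:
        return None
    # L_min: entry through the bottom face or the near face
    l_min = h / (math.sin(theta) * c) if t < h / (d * c) else d / math.cos(theta)
    # L_max: exit through the top, far, or side face
    if t > (h + H) / ((d + L) * c) and abs(math.tan(phi)) < W / (2 * (h + H)):
        l_max = (h + H) / (math.sin(theta) * c)
    elif abs(s) < W / (2 * (d + L) * t):
        l_max = (d + L) / math.cos(theta)
    else:
        l_max = W / (2 * math.sin(theta) * abs(s))
    return l_min, l_max

# MATHUSLA-like numbers from the text (all in meters)
geo = box_geometry(math.radians(50), 0.0, d=68, h=60, W=100, L=100, H=25)
```

For the sample direction above ($\theta = 50^\circ$, $\phi = 0$) the trajectory enters through the near face and exits through the roof, giving $L_{\rm min} < L_{\rm max}$ as required; very forward directions return `None`.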
For FACET,
we further require both daughter particles from the DP decay to
traverse both the tracker and
the calorimeter detectors.
For the FASER detector, we further apply a cut
on the energy of the DP daughter particles,
$E_{\rm vis} > 100$ GeV \cite{Ariga:2018uku},
to reduce the trigger rate
and remove possible backgrounds (BG) at low energies.
For the FACET detector, no detector cut is required,
because the BG events
are expected to be highly suppressed
by the front shielding and
the high-quality vacuum of the decay volume.
For the MATHUSLA detector, we require both
DP daughter particles to hit the ceiling detector
and to be well separated, with an opening angle
$\Delta \theta > 0.01$ \cite{Curtin:2018mvb};
note that this cut implies $\omega=0$
for the second and third cases of
Eq.~\eqref{eq:box1-new}.
The number of signal events in a far detector can then be obtained as
\begin{equation}
N = {\cal L} \cdot \sigma_{A'} \cdot \langle P_{A'} \rangle \quad {\rm with} \quad
\langle P_{A'} \rangle = \frac{1}{N_{A'}} \sum_{i=1}^{N_{A'}} P_{A'_i},
\end{equation}
where $\sigma_{A'}$ is the total DP production cross section,
$\langle P_{A'} \rangle$ denotes the average detection probability
of a DP event,
$N_{A'}$ is the total number of DPs in the simulation,
and $P_{A'_i}$ is the detection probability of
the $i$th dark photon event in the simulation,
given by Eq.~\eqref{eq:prob-detection}.
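The event count then follows by averaging the per-event detection probabilities; a sketch with an assumed signal cross section and toy probabilities (both illustrative, not values from the text):

```python
def expected_events(lumi_fb, sigma_fb, probs):
    """N = L * sigma * <P_A'>, with L in fb^-1, sigma in fb, and probs the
    per-event detection probabilities from the simulation."""
    p_avg = sum(probs) / len(probs)
    return lumi_fb * sigma_fb * p_avg

# e.g. 3 ab^-1 = 3000 fb^-1, an assumed 0.1 fb signal cross section,
# and four toy per-event probabilities
n = expected_events(3000.0, 0.1, [0.0, 0.02, 0.05, 0.01])
print(n)  # -> 6.0, above the N = 5 threshold used for the contours below
```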
For the CMS-MTD detector, we only
consider the DPs produced in the HR
process, because
the CMS-MTD detector has no sensitivity
to DP masses below the GeV scale \cite{Du:2019mlc}.
Following Ref.~\cite{Du:2019mlc},
we use MADGRAPH 5
to generate $\psi \bar \psi$ events
with an ISR jet to time stamp the event, i.e.,
$pp \to \psi\bar{\psi}j$ where
the ISR jet is required to have $p_T > 30$ GeV and $|\eta| < 2.5$.
The DP is required to have
a transverse decay length $0.2\,{\rm m} < \ell_{A'}^T < 1.17\,{\rm m}$
and a longitudinal decay length $|z_{A'}| < 3.04$ m.
The final state leptons from DP decays
are detected by the precision timing detector;
the leading lepton should have $p_T > 3$ GeV.
The time delay \cite{Liu:2018wte}
between the ISR jet and the leading lepton
is required to satisfy
$\Delta t > 1.2\ \rm{ns}$ \cite{Du:2019mlc}.
\section{Results}
\label{sec:result}
In this section we discuss the projected sensitivities of
the future LLP detectors, including FACET, FASER, MATHUSLA,
and the precision timing detector CMS-MTD.
Our main results are shown in
Figs.~\ref{fig:MDplusPB},
\ref{fig:psi15GeV},
\ref{fig:psi5DP},
and \ref{fig:FACET-Nevent},
where the sensitivity contours for the far detectors
are obtained by requiring $N = 5$ new physics events,
under the assumption that
the SM processes contribute no events
in the decay volume after the various shieldings
and detector cuts.
We are only interested in the parameter space with $m_{A'} < 2 m_\psi$,
so that the dark photon decay
into a hidden fermion pair is kinematically forbidden, leading to a
long-lived dark photon.\footnote{If
$m_{A'} > 2 m_\psi$, the dark photon can decay into
a pair of hidden fermions, leading to a
promptly decaying dark photon, assuming an order-one
gauge coupling in the hidden sector.}
\begin{figure}[htbp!]
\begin{centering}
\includegraphics[width=0.4\textwidth]{figures/mdplusbrem_300}
\includegraphics[width=0.4\textwidth]{figures/mdplusbrem}
\caption{Projected sensitivities from
FACET (red),
FASER (magenta),
FASER2 (black),
and MATHUSLA (green),
at the HL-LHC
with the integrated luminosities of
${\cal L} = 300$ fb$^{-1}$ (left panel)
and ${\cal L} = 3$ ab$^{-1}$ (right panel)
to the ``minimal'' dark photon models
in which only the MD and PB processes
contribute to the signals.
Contours correspond to
the expected signal events $N=5$.
The dark gray shaded region indicates the
parameter space that has been excluded
by various experiments;
the limits are obtained with the Darkcast
package \cite{darkcast}.
}
\label{fig:MDplusPB}
\end{centering}
\end{figure}
Fig.~\ref{fig:MDplusPB} shows the projected sensitivities
from FACET, FASER, FASER2, and MATHUSLA
with $300$ fb$^{-1}$
and $3$ ab$^{-1}$ of data,
including only the MD and PB processes
(the HR process is absent);
for that reason, the analysis in Fig.~\ref{fig:MDplusPB}
is also applicable to the minimal dark photon model.
Among the new detectors,
FACET probes a larger parameter space than the
other experiments.
In particular, with an integrated luminosity of
$ 300\, {\rm fb}^{-1}$ (3 ${\rm ab}^{-1}$) at the HL-LHC,
FACET can probe the DP mass up to $\sim 1.3$ GeV
($1.5$ GeV),
whereas FASER can only probe
the DP mass up to $\sim 0.12$ GeV
($0.25$ GeV, plus the island near $0.79$ GeV),
and FASER2 can only probe
the DP mass up to $\sim 0.8$ GeV
($1.3$ GeV).
Because DPs arising from the PB and MD processes are mostly
produced in the forward region,
MATHUSLA, a detector located in the central transverse region,
has difficulty probing the parameter space of the minimal dark photon model.
For that reason, MATHUSLA only probes a small parameter region
with 3 ${\rm ab}^{-1}$ of data,
which, however, has already been excluded
by current experimental constraints.
We note that the dips at $m_{A'} \sim 0.8$ GeV in the contours
are due to the resonance in the PB process,
and the kink features
at $m_{A'} \sim 0.2$ GeV are due to
the mass threshold effects in the MD process.
\begin{figure}[htbp]
\begin{centering}
\includegraphics[width=0.4\textwidth]{./figures/psi15contour1_new4}
\includegraphics[width=0.4\textwidth]{./figures/psi15contour2_new4}
\caption{Projected sensitivities from
FACET (red),
FASER (magenta),
FASER2 (black),
and MATHUSLA (green),
at the HL-LHC
with the integrated luminosities of
${\cal L} = 300$ fb$^{-1}$ (left panel)
and ${\cal L} = 3$ ab$^{-1}$ (right panel)
to our dark photon model
in which all the three dark photon
production channels
(MD, PB, and HR)
contribute to the signals.
Here we fix $m_\psi = 15$ GeV
and $\epsilon_2 = 0.01$, and
require $m_{A'} < 2 m_{\psi}$
so that the dark photon cannot
decay into invisible final states.
Contours correspond to
the expected signal events $N=5$.
The dark gray shaded region indicates the
excluded dark photon parameter space
by various experiments
where the HR process is not considered;
the limits are obtained with the Darkcast
package \cite{darkcast}.
}
\label{fig:psi15GeV}
\end{centering}
\end{figure}
Fig.~\ref{fig:psi15GeV} shows the projected sensitivities
for our dark photon model
from FACET, FASER, FASER2, MATHUSLA,
and CMS-MTD.
Here the dark photon production contributions
from all channels, including the MD, PB, and HR processes,
are considered.
With the inclusion of the HR process,
the FACET and MATHUSLA sensitivity
contours are significantly enlarged
toward the heavier DP mass region,
as compared to Fig.~\ref{fig:MDplusPB};
the FASER and FASER2 sensitivity contours,
on the other hand, are similar to those in Fig.~\ref{fig:MDplusPB}.
With $ 300\, {\rm fb}^{-1}$ ($3\,{\rm ab}^{-1}$) of data at the HL-LHC,
FACET can probe the parameter space of our model
up to $ m_{A'} \simeq 1.9 \, (15)$ GeV.
The CMS-MTD probes a relatively large dark photon mass region:
down to dark photon masses of $\sim 3 \, (2)$ GeV
for $ 300\, {\rm fb}^{-1}$ ($ 3\, {\rm ab}^{-1}$) of data at the HL-LHC.
This is due to the fact that a light dark photon
leads not only to a small time delay
but also to small transverse momenta
of the final state leptons,
which suffer from a large SM background
in the time delay searches \cite{Du:2019mlc}.
Interestingly, this CMS-MTD sensitivity region partly overlaps
with the MATHUSLA sensitivity region for a luminosity of $ 300\, {\rm fb}^{-1}$,
and with both the FACET and MATHUSLA sensitivity regions
for a luminosity of $ 3\, {\rm ab}^{-1}$.
Thus, if a dark photon in this overlap region is discovered,
one can use
FACET and MATHUSLA to
verify the results from the CMS-MTD.
\begin{figure}[htbp]
\begin{centering}
\includegraphics[width=0.4\textwidth]{./figures/psi5acontour1_new4}
\includegraphics[width=0.4\textwidth]{./figures/psi5acontour2_new4}
\caption{Same as Fig.\ \ref{fig:psi15GeV} except
$m_\psi=5 \, m_{A'}$.
The light gray region is excluded by
the millicharge constraints
\cite{Davidson:2000hf, Acciarri:2019jly, Ball:2020dnx}.
For the FACET contours we use
NNPDF23 \cite{Ball:2012cx} (red-solid),
CT18 \cite{Hou:2019qau} (red-dashed),
and NNPDF40 \cite{Ball:2021leu} (red-dotted).}
\label{fig:psi5DP}
\end{centering}
\end{figure}
Fig.~\ref{fig:psi5DP}
shows the expected limits
from FACET, FASER, FASER2, MATHUSLA,
and CMS-MTD
on the parameter space of our dark photon model
with the mass relation $m_\psi=5 \, m_{A'}$.
The sensitivity contours are similar to those in Fig.~\ref{fig:psi15GeV},
but with some changes.
For light $\psi$, the millicharge constraints become important;
they exclude the parameter space $m_{A'} \lesssim 0.3$ GeV
(corresponding to $m_\psi \lesssim 1.5$ GeV for $\epsilon_2 = 0.01$).
The parameter space probed by FASER with
${\cal L} = 300\, {\rm fb}^{-1}$ ($3\, {\rm ab}^{-1}$) at the HL-LHC
is (nearly) excluded by the millicharge constraints.
Further, the heavy dark photon mass region can no longer be
probed by various detectors as in Fig.~\ref{fig:psi15GeV}.
This is because a heavier dark photon corresponds to a
heavier $\psi$ via the mass relation $m_\psi=5 \, m_{A'}$,
which leads to a suppressed $pp \to Z^* \to \psi \psi$ cross section.
Similar to the result in Fig.~\ref{fig:psi15GeV},
the CMS-MTD sensitivity region
partly overlaps with those of FACET and MATHUSLA.
To check the PDF uncertainties on the sensitivity contours,
we further compute the FACET contours
using three different sets of PDFs:
NNPDF23 \cite{Ball:2012cx} (red-solid),
CT18 \cite{Hou:2019qau} (red-dashed),
and NNPDF40 \cite{Ball:2021leu} (red-dotted).
As shown in Fig.~\ref{fig:psi5DP},
the upper edges of the FACET contours
from the three PDFs are almost identical;
the lower edges, however,
show visible differences from each other.
For example,
for $m_{A'} \sim 0.3$ GeV,
the lower edge of the FACET contour with
$ 300\, {\rm fb}^{-1}$
is located at
$\epsilon_1 = 1.9 \times 10^{-8}$ with NNPDF23,
$\epsilon_1 = 2.3 \times 10^{-8}$ with CT18,
and $\epsilon_1 = 3.2 \times 10^{-8}$ with NNPDF40,
as shown in the left panel of Fig.~\ref{fig:psi5DP};
for $3\, {\rm ab}^{-1}$ of data, the corresponding values of $\epsilon_1$ are
$5.9 \times 10^{-9}$,
$7.3 \times 10^{-9}$,
and $1.0 \times 10^{-8}$, respectively,
as shown in the right panel.
Thus different PDFs shift the FACET contours,
but the effects are not significant.
\begin{figure}[htbp]
\begin{centering}
\includegraphics[width=0.41\textwidth]{./figures/one_D_slice_ctau}
\includegraphics[width=0.4\textwidth]{./figures/one_D_slice_epsilon}
\caption{The number of signal events
in the FACET detector
at the HL-LHC with ${\cal L} = 3 \,{\rm ab}^{-1}$,
as a function of
the DP lifetime $c\tau_{A'}$ (left panel)
and of the coupling $\epsilon_1$ (right panel).
Here we fix $m_\psi = 15$ GeV and $\epsilon_2 = 0.01$,
and vary the dark photon mass to be
$m_{A'} = 2$ GeV (green),
4 GeV (blue),
and 10 GeV (red).
}
\label{fig:FACET-Nevent}
\end{centering}
\end{figure}
The left panel in Fig.~\ref{fig:FACET-Nevent} shows
the number of signal events in the FACET detector
as a function of the proper lifetime, for three
different dark photon masses.
The number of events decreases
with the dark photon mass.
The peak of the event distribution shifts to a
larger $c\tau_{A'}$ value as the dark photon mass increases.
The peak shift is due to the detector cut on the DP decay length:
a larger $c \tau_{A'}$ is needed for a heavier DP mass
so that the DP has the desired decay length
to disintegrate in the FACET decay volume.
With the criterion of $N > 5$ events, FACET
can probe $c\tau_{A'} \in [0.04\, {\rm m}, 30 \, {\rm m}]$
for DP mass $m_{A'} = 2$ GeV,
$c\tau_{A'} \in [0.09\, {\rm m}, 25 \, {\rm m}]$
for $m_{A'} = 4$ GeV,
and $c\tau_{A'} \in [0.3\, {\rm m}, 10 \, {\rm m}]$
for $m_{A'} = 10$ GeV.
The right panel in Fig.~\ref{fig:FACET-Nevent}
shows the number of signal events
in the FACET detector as a function
of the parameter $\epsilon_1$.
With the criterion of $N > 5$ events, FACET
can probe
$\epsilon_1 \in [2.1 \times 10^{-8}, 5.5 \times 10^{-7}]$
for DP mass $m_{A'} = 2$ GeV,
$\epsilon_1 \in [1.4 \times 10^{-8}, 2.3 \times 10^{-7}]$
for $m_{A'} = 4$ GeV,
and
$\epsilon_1 \in [1.3 \times 10^{-8}, 7.5 \times 10^{-8}]$
for $m_{A'} = 10$ GeV.
\section{Expected number of events in far detectors}
\label{sec:facet-faser-comparison}
Here we provide an approximate expression for the number
of dark photon events in the far detectors, and also compare
the event counts for two far detectors
of different sizes placed at different distances
from the IP.
Denote the cross-sectional area of the decay volume of a
far detector as $A$ and its length as $L$;
the volume of the decay volume is then $V=AL$.
If the far detector is placed at a distance $d$ from
the IP with $d\gg L$, the probability for the DP to decay
within the interval $(d,d+L)$ can be approximated by
\begin{equation}
P \simeq \exp \left[- {d \over \ell_{A'}} \right] {L \over \ell_{A'} },
\end{equation}
where $\ell_{A'}$ is the decay length of the DP.
The number of DPs that
disintegrate inside the decay volume is then given by
\begin{equation}
N \simeq N_{\rm IP} {A \over 4 \pi d^2} P
= N_{\rm IP} {1 \over 4 \pi } {V \over d^3}
\exp \left[- {d \over \ell_{A'}} \right] {d \over \ell_{A'} },
\label{approx_nsig}
\end{equation}
where $N_{\rm IP}$ is the total number of DPs produced at the IP,
and we have assumed an isotropic distribution for DPs
for simplicity.
Thus for given $N_{\rm IP}$, $V$, and $d$, the optimal decay
length to be probed is $\ell_{A'} = d$.
Eq.\ \eqref{approx_nsig} also suggests that in order to obtain
a large signal of LLPs, one should build a large decay volume and place
it close to the IP if the SM backgrounds are under control;
see also \cite{FACET:Green} for a similar discussion.
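The estimate in Eq.\ \eqref{approx_nsig} can be evaluated directly; maximizing $\exp(-d/\ell_{A'})\,(d/\ell_{A'})$ over $\ell_{A'}$ confirms that the optimum is at $\ell_{A'}=d$. The numbers below ($N_{\rm IP}$ and a roughly FACET-like $V$ and $d$) are illustrative assumptions, not the exact parameters of Table \ref{tab:detectors}:

```python
import numpy as np

def n_signal(ell, N_IP=1e15, V=14.0, d=101.0):
    """Approximate number of DP decays inside the decay volume:
    N ~ N_IP/(4*pi) * V/d**3 * exp(-d/ell) * d/ell.
    V (m^3) and d (m) are illustrative, roughly FACET-like values."""
    return N_IP / (4 * np.pi) * V / d**3 * np.exp(-d / ell) * d / ell

# Scan decay lengths; exp(-d/ell)*(d/ell) is maximal at ell = d.
ells = np.logspace(0, 4, 2000)          # decay lengths in metres
best = ells[np.argmax(n_signal(ells))]
print(best)                              # close to d = 101 m
```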
Next we compare two detectors with different $V$ and $d$.
The ratio of the numbers of events is given by
\begin{equation}
{N_1 \over N_2} =
{V_1 \over V_2} \left[ {d_2 \over d_1} \right]^2
\exp \left[- {d_1-d_2 \over \ell_{A'}} \right] .
\end{equation}
Using the parameters given in Table \ref{tab:detectors}, we find that
${N_{\rm FACET}/N_{\rm FASER}} \simeq 7 \times 10^3
\exp(380\, {\rm m}/ \ell_{A'})$.
Thus the number of events in FACET is at least
$7 \times 10^3$ times larger than in FASER,
if one neglects background considerations and
other effects.
This is the main reason that the FACET sensitivity
contours are much larger than those of FASER.
Similarly, we find that ${N_{\rm FACET}/N_{\rm FASER2}} \simeq 18
\exp(380\, {\rm m}/ \ell_{A'})$.
We find that these ratios between FACET and FASER(2) estimated
here are consistent with the results from
our simulations.\footnote{For
example, for the model point $m_{A'} = 0.5$ GeV
and $\epsilon_1 = 2.9 \times 10^{-7}$ in Fig.~\ref{fig:psi5DP},
we find that ${N_{\rm FACET}/N_{\rm FASER}}
\simeq 8400$ and
${N_{\rm FACET}/N_{\rm FASER2}}
\simeq 33$ in our simulations.}
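As a sketch, the ratio formula can be evaluated in the long-lifetime limit (where the exponential factor approaches one, so the quoted ratios are lower bounds). The geometries below are illustrative assumptions, roughly FACET-, FASER-, and FASER2-like; the exact parameters are those of Table \ref{tab:detectors}:

```python
import numpy as np

def ratio(V1, d1, V2, d2, ell):
    """Event-count ratio N1/N2 = (V1/V2) * (d2/d1)**2 * exp(-(d1-d2)/ell)."""
    return (V1 / V2) * (d2 / d1) ** 2 * np.exp(-(d1 - d2) / ell)

# Illustrative decay volumes (m^3) and distances (m); ell -> infinity limit:
r_faser  = ratio(14.0, 101.0, 0.047, 480.0, 1e9)   # FACET vs FASER,  O(7e3)
r_faser2 = ratio(14.0, 101.0, 16.0,  480.0, 1e9)   # FACET vs FASER2, O(20)
print(r_faser, r_faser2)
```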
\section{Summary}
\label{sec:summary}
We study the capability of various new lifetime-frontier
experiments to probe long-lived
dark photon models.
We consider both the minimal dark photon model
and a dark photon model recently proposed by some of us
that exhibits an enhanced long-lived dark photon signal
at the LHC.
In this new model,
the standard model is extended
via the Stueckelberg mechanism to include
a hidden sector, which consists of two gauge bosons
and one Dirac fermion $\psi$.
The Stueckelberg mass terms eventually lead to
a GeV-scale dark photon $A'$
and a TeV-scale $Z'$
with couplings $\epsilon_1$ and $\epsilon_2$
to the SM sector respectively.
The dark photon signal at the LHC in this new model
is enhanced because it is proportional to $\epsilon_2$,
which can be significantly larger than $\epsilon_1$;
the latter is kept small so that the dark photon is long-lived.
We compute various experimental constraints
on the $\epsilon_2$ parameter, including the most recent
constraints on millicharge from
the ArgoNeuT and milliQan demonstrator experiments.
There are three major production channels for
the long-lived dark photon in the parameter space of interest:
the MD, PB, and HR processes.
The MD and PB processes are present in both the minimal dark photon model
and the new dark photon model,
and their events are mostly distributed in the forward region.
The HR process, however, is only present in the new dark photon model,
and contributes significantly to both the forward region
and the transverse region
(but still with dominant contributions in the forward region).
We find that the HR process provides the dominant contributions
for large dark photon mass, which opens up new
parameter space to be probed by various new
lifetime-frontier detectors.
We provide a mini-overview of the various
lifetime-frontier detectors and select four of them
for detailed analysis:
the far detectors
FACET, FASER (and its upgraded version, FASER2),
and MATHUSLA,
and the future precision timing detector CMS-MTD.
We compute the sensitivity contours
in the parameter
space spanned by the dark photon mass and the
parameter $\epsilon_1$.
For example, with $ 300\, {\rm fb}^{-1}$ ($3\,{\rm ab}^{-1}$) data at the HL-LHC,
FACET can probe the parameter space
up to $ m_{A'} \simeq 1.9 \, (15)$ GeV,
for the case where $m_\psi = 15$ GeV.
We find that the sensitivity contours from FACET
and MATHUSLA are significantly enlarged by the HR process,
and the CMS-MTD is only sensitive to the HR process.
The enhancement for the central transverse detector MATHUSLA
is mainly due to the fact that
the MD and PB events are highly concentrated in the
forward direction, whereas the HR process contributes
significantly in the transverse direction.
We further compare the signal events between the two far forward detectors:
FACET and FASER.
We find that FACET is likely to detect many more events than
FASER, mainly because of FACET's larger decay volume
and its smaller distance from the interaction point.
The FASER2 detector, with a much larger decay volume
than FASER, can partly
offset the effect of its large distance from the interaction point.
Nevertheless, we find that the FACET contours are larger than
those of FASER and FASER2 in our analysis.
We also find that there exists parameter space that can be probed by different kinds of
lifetime-frontier experiments. Thus, for example, if a long-lived dark photon
signal were found in one precision timing detector (e.g.\ CMS-MTD),
it could then be verified by a
far forward detector (e.g.\ FACET)
and a far transverse detector (e.g.\ MATHUSLA).
\section{Acknowledgement}
We thank Michael Albrow
for helpful discussions.
This work is supported in part
by the National Natural Science
Foundation of China under Grant No.\
11775109.
|
{
"timestamp": "2021-12-01T02:27:35",
"yymm": "2111",
"arxiv_id": "2111.15503",
"language": "en",
"url": "https://arxiv.org/abs/2111.15503"
}
|
\section{INTRODUCTION}
We study systems of linear ordinary differential equations of Schr{\"o}dinger type
\begin{equation}
\label{eq0}
\psi'(t) = -\,\mathrm{i}\,H(t)\,\psi(t)=:A(t)\psi(t)\,, \qquad t \in [t_0, t_{\mathrm{end}}]\,, \qquad
\psi(t_0) = \psi_0\; \mbox{ given}\,,
\end{equation}
with a time-dependent Hermitian matrix $H \colon \mathbb{R} \to \mathbb{C}^{d \times d}$.
The exact flow of (\ref{eq0}) is denoted by $\nE(t;t_0)\psi_0$ in the following.
We focus on a system that
is described by a Hubbard model of electron-hopping between discrete sites.
For the numerical experiments we use the parameters corresponding to a simple approximation of a Mott transistor~\cite{Zhong2015}.
The time dependence is introduced by coupling
the electronic system to a source--drain potential which is switched on rapidly.
The corresponding electromagnetic field is treated classically;
it enters the equation as a scalar potential and modifies the hoppings via Peierls' substitution \cite{Li2020}.
We consider the numerical solution of the system by exponential-based
Magnus-type time integrators in conjunction with an adaptive Lanczos
method \cite{auzingeretal18b,jaweckietal20},
$$u_{n+1}=\nS(\tau;t_n)u_n, \qquad \mbox{where}$$
\begin{eqnarray*}
&&\nS(\tau;t_n) =
\nS_J(\tau) \cdots \nS_1(\tau)
= \mathrm{e}^{\Omega_J (\tau)} \,\cdots\, \mathrm{e}^{\Omega_1(\tau)}\,, \\
&&\Omega_j(\tau) = \tau\, B_j(\tau),~~ j=1, \ldots, J,
\qquad B_{j}(\tau) = \sum_{k=1}^K a_{jk}\, A_{k}(\tau),\qquad A_{k}(\tau) = A(t_n+c_k \tau)\,.
\end{eqnarray*}
The coefficients $a_{jk}$, $c_k$ are determined from the \emph{order conditions}
such that the method attains convergence order $ p $. In this study,
we will use the methods referred to as \texttt{CF2} (exponential midpoint
rule), \texttt{CF4oH} and \texttt{CF6n} in \cite{solar1}, the two
latter constructed by us with the aim to optimize the error for a given
computational effort. For comparison, we also show the embedded Runge--Kutta
method by Dormand \& Prince \texttt{DoPri45}.
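The composed exponential update above can be sketched, for the simplest member of the family, the exponential midpoint rule \texttt{CF2} ($J=K=1$, $c_1=1/2$), in a few lines of Python. The $2\times2$ Hamiltonian is an illustrative assumption; the matrix exponential is evaluated here by a Hermitian eigendecomposition (in large dimensions one would use a Lanczos approximation instead):

```python
import numpy as np

def expm_antiherm(M):
    """exp(M) for anti-Hermitian M = -i*T (T Hermitian), via eigendecomposition."""
    lam, U = np.linalg.eigh(1j * M)            # 1j*M is Hermitian
    return (U * np.exp(-1j * lam)) @ U.conj().T

def cf2_step(A, t, u, tau):
    """Exponential midpoint rule (CF2): u_{n+1} = exp(tau * A(t_n + tau/2)) u_n."""
    return expm_antiherm(tau * A(t + tau / 2)) @ u

def A(t):
    """Toy A(t) = -i H(t) with a 2x2 Hermitian H(t) (illustrative assumption)."""
    H = np.array([[1.0, 0.3 * np.cos(t)], [0.3 * np.cos(t), -1.0]])
    return -1j * H

u = np.array([1.0, 0.0], dtype=complex)
t, tau = 0.0, 0.01
for _ in range(100):
    u = cf2_step(A, t, u, tau)
    t += tau
print(np.linalg.norm(u))   # each step is unitary, so the norm stays ~ 1
```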
The computational challenge results, on the one hand, from
the high dimension of the underlying system: for a model with
$N$ discrete sites, the state space has dimension $4^N$. Thus, for
a model with a claim to physical relevance, the problem quickly reaches
the limits of modern supercomputers.
On the other hand, the modeling of the switching process in a Mott transistor
features a quickly attenuating electric field within a small time interval.
Thus the smoothness of the solution varies strongly in the course of the
time propagation, which suggests using an adaptive choice of the time step
to increase the efficiency and reliability of the numerical computation.
To this end, we use a classical step-size control based on the asymptotically
correct defect-based error estimators constructed and analyzed in
\cite{auzingeretal18b}, which serve to equidistribute the local error.
\section{MODEL OF A MOTT TRANSISTOR}\label{sec:examples}
The Mott transistor, which is based on the Mott-insulator-to-metal transition driven by the change of the gate voltage $V_g$, can be described within the Hubbard model~\cite{hubbard63}, the paradigm model for the description of strongly
interacting electrons~\cite{Ma93,PKBS16}. We resort here to the \emph{second-quantization} formalism.
It describes the electron occupation on a given number of sites $N$, corresponding to Wannier discretization.
Only a single orbital per site is considered with nearest-neighbor hopping between the sites.
Due to the Pauli exclusion principle, there are only four states per site allowed (no electrons, one electron with spin-up or -down, two electrons with opposite spins). The electrons interact via Coulomb interaction $U_{ij\sigma\sigma'}$.
In the considered model the Hamiltonian in (\ref{eq0}) has the form
\begin{equation}\label{eq1}
H(t) = H_{\mathrm{stat}}+H_{\mathrm{dyn}}(t).
\end{equation}
The static part for arbitrary hopping $v_{ij}$ reads
\begin{equation}\label{eq.HubHamStat}
H_{\mathrm{stat}} = \sum_{i,j,\sigma} v_{ij} \hat{c}_{j\sigma}^{\dagger} \hat{c}_{i\sigma}^{\phantom\dagger}
+ \frac{1}{2}\sum_{i,j,\sigma,\sigma'} U_{ij\sigma\sigma'} \hat n_{i\sigma} \hat n_{j\sigma'}
\end{equation}
where $i,j$ sum over all $N$ sites and the spins $\sigma,\sigma' \in\{\uparrow,\downarrow \}$
are either \emph{up} or \emph{down}.
The notation $\hat{c}_{j\sigma}^\dagger \hat{c}_{i\sigma}^{\phantom\dagger}$ describes the ``hopping'' of an electron
from site $i$ to $j$ with \emph{creation} and \emph{annihilation operators}
$\hat{c}_{j \sigma}^\dagger$ and $\hat{c}_{i \sigma}$.
The \emph{hopping amplitudes} $v_{ij}$ with $i,j=1,\ldots,N$ give the probability (rate) of such an
electron hopping; $\hat{n}_{j\sigma}=\hat{c}_{j\sigma}^\dagger \hat{c}_{j\sigma}^{\phantom\dagger}$
is the \emph{occupation number operator}, which counts the number of electrons with spin $\sigma$ at site $j$.
For details on the notation in~(\ref{eq.HubHamStat}) refer to~\cite{hubbard63,Ma93,PKBS16}.
In the following we take only the local part of the Coulomb interaction, i.e. $U_{ij\sigma\sigma'}=U\delta_{ij}(1-\delta_{\sigma\sigma'})$.
\noindent The dynamic part of the Hamiltonian is in general given by
\begin{equation}\label{eq.HubHamDyn}
H_{\mathrm{dyn}}(t) = \sum_{\alpha=1}^{N_\alpha} \left(\Re (f_\alpha(t)) \:
H_{\mathrm{symm}}^{\alpha} + \mathrm{i} \Im (f_\alpha(t)) \: H_{\mathrm{anti}}^\alpha \right) +
\sum_{\beta=1}^{N_\beta} g_\beta(t) \: H_{\mathrm{pot}}^\beta.
\end{equation}
The real matrices $H_{\mathrm{symm}}^\alpha, H_{\mathrm{anti}}^\alpha,$ and
$H_{\mathrm{pot}}^\beta$ have the following properties:
$ g_\beta \in \mathbb{R}$, \
$ H_{\mathrm{symm}}^\alpha$ is symmetric, \
$ H_{\mathrm{anti}}^\alpha$ is skew-symmetric, \
$ H_{\mathrm{pot}}^\beta$ is diagonal.
Here, we restrict ourselves to the Coulomb gauge where
$f_\alpha(t) =0 \ \forall \alpha,$ $N_\beta=1.$
For the performance analysis we choose a rapidly attenuating potential
\begin{equation}
g_{1}(t) \equiv g(t) = 1 - \frac{1}{\mathrm{e}^{(t-t_0)/T} +1}, \quad t_0,T \in \mathbb{R}^+,\qquad
H_{\mathrm{pot}}^1 = \sum_{i\sigma} V_i \hat{n}_{i \sigma} =: H_{\mathrm{pot}},
\quad V_i \in \mathbb{R}.\label{pulse1}
\end{equation}
\section{ERROR ANALYSIS}\label{sec:ana}
In this section, we investigate the error structure of commutator-free
Magnus-type time integrators for the model of a Mott transistor.
To this end, we recapitulate general convergence results given in \cite{auzingeretal18b}:
The local time-stepping error
can be expressed in terms of an iterated defect $\nD_i(\tau;t_0)$, where
$$ \nD_0=\nD=\psi'(t)-A(t)\psi(t), \qquad \nD_{i+1} = \nD_i'-A\nD_i,\quad i=0,1,\dots.$$
For the exponential midpoint rule, we thereby obtain for the local error
$\nL(\tau;t_0)\psi_0:=(\nS(\tau;t_0)-\nE(\tau;t_0))\psi_0$
\begin{eqnarray*}
\nL(\tau;t_0)\hspace*{-3mm}&=& \hspace*{-3mm}\int_{0}^{\tau} \Pi(\tau,\sigma_1)\,\nD(\sigma_1;t_0)\,\mathrm{d}\sigma_1 =
\int_{0}^{\tau} \Pi(\tau,\sigma_1)\,\int_0^{\sigma_1}\Pi(\sigma_1,\sigma_2)\,
\int_0^{\sigma_2}\Pi(\sigma_2,\sigma_3)\, \mathrm{d}\sigma_3\, \mathrm{d}\sigma_2\,\mathrm{d}\sigma_1
\cdot \nD_2(0;t_0) + O(\tau^4)\\
\hspace*{-3mm}&=:& \hspace*{-1mm}\underbrace{\mathcal{I}_3(\tau)}_{=\,O(\tau^3)}\!\cdot\,(\Gamma''(0)-A''(t_0))
+ O(\tau^4),\qquad \mbox{where} \quad\frac{\mathrm{d}}{\mathrm{d}\tau} \mathrm{e}^{\tau\, A(\tau/2)}
=\Gamma(\tau)\,\mathrm{e}^{\tau\,A(\tau/2)}\,.
\end{eqnarray*}
It follows from the arguments in \cite{auzingeretal18b} that more generally,
the leading local error term for a higher-order CFM of order $p$ depends
on the difference $\Gamma^{(p)}(0)-A^{(p)}(0)$.
For the exponential midpoint rule of order two, this means in particular that the following
error estimate holds:\\[2mm]
\cite[Proposition 4.1]{auzingeretal18b} \label{pro:EMR-locerr}
\textit{Consider the solution of~{\rm (\ref{eq0})} by the exponential
midpoint rule.
If $A\in C^3$, then the local error satisfies
\begin{equation} \label{EMR-L-leading}
\| \nL(\tau;t_0) \|_2 \leq
\frac{1}{12} \tau^3 \big\| [A(t_0),A'(t_0)]
- \frac{1}{2} A''(t_0) \big\|_2
+ O(\tau^4)\,.
\end{equation}
}
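The $O(\tau^3)$ local error in (\ref{EMR-L-leading}) is easy to check numerically on an assumed toy $2\times2$ Hermitian $H(t)$: halving $\tau$ should reduce the one-step error by a factor of about $2^3=8$. A minimal sketch (the reference solution is obtained by fine substepping with the same scheme, whose accumulated error is negligible by comparison):

```python
import numpy as np

def A(t):
    """Toy A(t) = -i H(t), H(t) Hermitian (illustrative assumption)."""
    H = np.array([[1.0, 0.5 * np.sin(t)], [0.5 * np.sin(t), -1.0]])
    return -1j * H

def step(t, u, tau):
    """One exponential-midpoint step via Hermitian eigendecomposition."""
    lam, U = np.linalg.eigh(1j * tau * A(t + tau / 2))
    return (U * np.exp(-1j * lam)) @ (U.conj().T @ u)

def local_error(tau, nref=400):
    """|| S(tau;0) u0 - E(tau;0) u0 ||_2, with a finely substepped reference."""
    u0 = np.array([1.0, 0.0], dtype=complex)
    u1 = step(0.0, u0, tau)                    # one CF2 step of size tau
    uref, t, h = u0, 0.0, tau / nref
    for _ in range(nref):                       # fine reference propagation
        uref = step(t, uref, h)
        t += h
    return np.linalg.norm(u1 - uref)

e1, e2 = local_error(0.1), local_error(0.05)
print(e1 / e2)   # close to 2**3 = 8, consistent with the O(tau^3) local error
```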
For the Hubbard model with the potential (\ref{pulse1}), the convergence analysis above
has the following implications.
Differentiation of $g(t)$ shows the following asymptotics:
\begin{eqnarray}
g(t) &=& 1- \frac{1}{\mathrm{e}^{(t-t_0)/T}+1} \in (0,1)\,,\\
g'(t) &=& \frac{1}{T} \frac{\mathrm{e}^{(t-t_0)/T}}{(\mathrm{e}^{(t-t_0)/T}+1)^2} \in (0,1/T),\qquad
g''(t) = \frac{1}{T^2} \left(\frac{\mathrm{e}^{(t-t_0)/T}}{(\mathrm{e}^{(t-t_0)/T}+1)^2} -
\frac{2 \mathrm{e}^{(t-t_0)/T}}{(\mathrm{e}^{(t-t_0)/T}+1)^3}\right) \in (0,1/T^2)\,,\\
&\vdots& \nonumber \\
g^{(p)}(t) &=& O(T^{-p})\,.
\end{eqnarray}
When considering the asymptotics of the error as a function of $T$, clearly the local
error is dominated by the highest derivative of $A(t)$, where the time-dependence
occurs in $g$. For the exponential midpoint
rule, for example, (\ref{EMR-L-leading}) shows that this is $A''$. Analogously,
for a method of order $p$ the local error
is dominated by $\Gamma^{(p)}(0)-A^{(p)}(0)$ and is thus proportional to $\tau^{p+1}T^{-p}$.
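The scaling $g^{(p)}(t)=O(T^{-p})$ can also be verified numerically: since $g$ depends on $t$ only through $(t-t_0)/T$, halving $T$ doubles $\max|g'|$ and quadruples $\max|g''|$. A short finite-difference sketch (grid parameters are arbitrary choices):

```python
import numpy as np

def g(t, t0=10.0, T=1.0):
    """Switching potential g(t) = 1 - 1/(exp((t-t0)/T) + 1)."""
    return 1.0 - 1.0 / (np.exp((t - t0) / T) + 1.0)

def max_deriv(T, p, t0=10.0):
    """Max of |g^(p)| near the switching time, via repeated finite differences."""
    t = np.linspace(t0 - 10 * T, t0 + 10 * T, 20001)
    y = g(t, t0, T)
    for _ in range(p):
        y = np.gradient(y, t)
    return np.abs(y).max()

for p in (1, 2):
    r = max_deriv(0.5, p) / max_deriv(1.0, p)
    print(p, r)   # ratios ~ 2**p: halving T scales g^(p) by 2**p, i.e. O(T**-p)
```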
\section{NUMERICAL RESULTS}\label{sec:num}
\begin{center}
\begin{figure}
\includegraphics[height=1.4cm, trim=0mm -45mm 0mm 45mm]{drawing_model_8x1.pdf} \hspace*{2mm}
\includegraphics[height=3.0cm, trim=0mm 3mm 0mm 0mm,clip]{8x1__EV__occ_iT5.pdf}
\caption{\emph{Left}: Schematic depiction of the Mott transistor with the source--drain voltage applied to the outer sites [described by the potential $V_{sd}(t)$] and gate voltage ($V_g$).
\emph{Right}: Time dependence of the charge at the sites $n_i(t)$ and of the source--drain potential $g(t)$.
Computed with $\texttt{DoPri45}$ (solid lines) and $\texttt{CF4oH}$ (crosses).}
\label{fig:model}
\end{figure}
\end{center}
As a numerical illustration, we show the adaptive solution of our model of a Mott transistor
by Magnus-type integrators. The geometry of the model is illustrated in Figure~\ref{fig:model} (left).
The dynamics of the transistor is approximated by the dynamics of an 8-site chain with 8 electrons.
The sites are split into three sections: the outer sites correspond to electron reservoirs (left and right) coupled to a source--drain potential $V_{sd}$. The scalar potential added on the mid-sites ($V_g$ in the left part of Figure~\ref{fig:model}) models the gate of the transistor; it controls whether the transistor is in a conducting or an insulating state. For the numerical analysis shown here we chose $V_g=0=V_i$, $i \in \{2,\ldots,7\}$ (conducting state). For the source--drain potential we used $V_1=-V_8=V_{sd}/2=1.04\,U$ and $T=2^{-5}$. The potential is switched on at $t_0=10$, and $\psi_0$ is chosen as the (unique) ground state of the model. The Hubbard interaction is $U=10\,|v_{12}|$.
The error displayed is
$\|{\psi}(t=50) - {\psi}^{\mathrm{ref}}(t=50) \|_2$,
where the reference solution $\psi^{\mathrm{ref}}$ was computed
with adaptive \texttt{CF4oH} with a high accuracy of $\texttt{tol}=10^{-11}$.
The tolerance of the Lanczos iteration was $10^{-12}$ throughout.
In the right plot of Figure~\ref{fig:model}, the index of $n_i=n_{i\uparrow}+n_{i\downarrow}$
corresponds to the sites labeled in the left part of the figure, and
$n_{i\sigma}(t) = \psi^\dagger(t) \, \hat n_{i\sigma} \, \psi(t)$.
It shows the occupation numbers (proportional to the local charge) computed
with a fine error tolerance by the Dormand \& Prince Runge--Kutta method
(solid lines), and the adaptive Magnus-type method \texttt{CF4oH} (crosses)
with the same error tolerance of $\mathtt{tol}=10^{-11}$.
We observe that the solution computed with adaptive time stepping
chooses appropriately small steps where the solution varies strongly,
and very large steps where the solution is smooth. Nonetheless,
the solutions correspond very well at the grid points. We conclude
that adaptive commutator-free Magnus-type methods serve very well
to accurately approximate the solution both in regions where dense
grids are required and where the dynamics allows for large time steps.
The adaptive choice of the steps thus implies a gain in efficiency
in contrast to uniform time steps.
Indeed, in Figure~\ref{fig:comp}, we show the error as a function of the
number of matrix--vector multiplications, which constitute most of the
computational effort required for the integrators.
The left plot gives the results for adaptive time stepping, on
the right-hand side, the results for equidistant time steps are
given for comparison. We observe that the error
is smaller for a comparable computational effort when the adaptive
strategy is used. This results from the nonsmooth solution dynamics
associated with the rapid changes in the electric field. The specially constructed
CFM integrators are the most efficient; adaptive Runge--Kutta methods
show consistent convergence behavior but require a prohibitively large computational effort.
\begin{center}
\begin{figure}
\includegraphics[height=3.5cm]{8x1_ATS__AllSchemes_Error_vs_Mult__iT5.pdf} \hspace*{0.5cm}
\includegraphics[height=3.5cm]{8x1_ETS__AllSchemes_Error_vs_Mult__iT5.pdf}
\caption{Comparison of the error as a function of the number of matrix--vector multiplications
for adaptive time stepping (left) and equidistant time steps (right).}
\label{fig:comp}
\end{figure}
\end{center}
\section*{ACKNOWLEDGMENTS}
This work was supported by the Austrian Science Fund (FWF) under grant P 30819-N32.
|
{
"timestamp": "2021-12-01T02:27:43",
"yymm": "2111",
"arxiv_id": "2111.15505",
"language": "en",
"url": "https://arxiv.org/abs/2111.15505"
}
|
\section{Introduction}\label{sec:I}
The behavior of charged quantum particles inside a uniform magnetic field in \linebreak \mbox{Minkowski spacetime} displays quantized energy levels ---\,the so-called Landau levels\,--- that were first derived by Rabi based on the relativistic Dirac equation \cite{Rabi} and by Landau based on the Schr\"odinger equation \cite{Landau} (see e.g., Ref.\,\cite{LandauLifshitz} for a textbook introduction). This discovery led to many physical applications in condensed matter physics, such as the explanation of the Landau diamagnetism \cite{Landau}, of the Shubnikov–de Haas effect \cite{Hadju}, of the de Haas-van Alphen effect \cite{Shoenberg} and of the integer quantum Hall effect \cite{QHEBook}. Remarkably, the Landau levels have even found astrophysical applications. In fact, the physics of neutron stars and other highly magnetized stellar objects is expected to be governed to some extent by the discrete nature of the energy levels occupied by the charged fermions moving under the influence of the intense magnetic field of such astrophysical objects \cite{BookNeutronStar,WhiteDwarfs,WhiteDwarfs3,Mass-Radius,WhiteDwarfs4,Broderick,Chamel1,Chamel2,Yakovlev, NeutronStarMatter}. When we recall that such highly compact astrophysical objects are also sources of very intense gravitational fields, it becomes extremely important to investigate the fate of Landau levels under the influence of a combination of a magnetic field and the gravitational field of a spherical object. Such a study has recently been conducted in Refs.\,\cite{GravityLandauI,GravityLandauII,QHGravity} based on the Klein-Gordon and Schr\"odinger equations by considering the gravitational interaction of the particle with a massive sphere as a small perturbation compared to the interaction of the particle with the magnetic field. 
Although relying on the Schr\"odinger equation and treating gravity perturbatively as done in these studies might be sufficient for laboratory \mbox{applications \cite{Landry1,Landry2,COWBall,Josephson},} when seeking astrophysical applications, implementing a general-relativistic and a non-perturbative approach and appealing to the relativistic Dirac equation are highly recommended.
The Dirac equation in a curved spacetime caused {\it purely} by an intense magnetic field ---\,known as the Melvin universe \cite{Melvin}\,--- has been solved exactly in Ref.\,\cite{Nunes}, where the quantized energy levels were extracted. Contemplating curved spacetimes caused purely by a magnetic field is motivated by the well-known fact that the strong magnetic fields of neutron stars could reach up to {$10^{12-15}$}\,G
on the surface of magnetars \cite{Magnetars}, and up to $10^{17}\,$G in their interior (see, e.g., Refs.\,\cite{InsideNS1,InsideNS2} and the references therein). On the other hand, neutron stars are made of neutrons, as well as of a plasma of electrons, protons and muons, the energies of which are affected by such strong magnetic fields. The aim of the present paper is therefore to include the general-relativistic effect of the intense gravitational field of such stellar objects by studying the dynamics of charged fermions moving {\it inside} a massive spherical object in the presence of a uniform magnetic field. To the best of our knowledge, such a study has not appeared in the literature before. Unlike the purely magnetic spacetime considered in Ref.\,\cite{Nunes}, however, we consider in the present work the curved spacetime caused purely by the distribution of matter inside the core of the stellar objects rather than by the uniform magnetic field. The contribution of the latter to spacetime curvature will thus be considered negligible. The justification behind such an assumption will be discussed in Section \ref{sec:II}.
The remainder of this paper is structured as follows. In Section \ref{sec:II}, we derive the Dirac equation in a general static and spherically symmetric curved spacetime in the presence of a uniform magnetic field. We discuss the various symmetries of the equation and the complications one faces when attempting to solve the equation exactly. In \mbox{Section \ref{sec:III}}, we apply the equation obtained in Section \ref{sec:II} to the case of the interior Schwarzschild solution. We then extract the quantized energy levels of charged fermions moving along the equator inside a massive spherical object by keeping only the leading terms of the differential equation. In Section \ref{sec:IV}, we briefly discuss the ways to overcome the limitations we encountered in solving our equation in Section \ref{sec:III}. In Section \ref{sec:V}, we apply our results to the study of the magnetization of neutron stars. We conclude this paper with a short discussion section in which we summarize our findings.
\section{Curved-Spacetime Dirac Equation in a Uniform Magnetic Field}\label{sec:II}
Since our aim in this paper is to consider the Dirac equation in the static and spherically symmetric interior Schwarzschild solution, let us start by deriving in this section the equation for a general static and spherically symmetric curved spacetime in the presence of a uniform magnetic field. For that purpose, let us write our spacetime metric in the following general form:
\begin{equation}\label{GeneralMetric}
{\rm d}s^2=-\mathcal{A}^2(r)\,c^2{\rm d}t^2+\mathcal{B}^2(r)\,{\rm d}r^2+r^2({\rm d}\theta^2+\sin^2\theta{\rm d}\phi^2),
\end{equation}
where the radial functions $\mathcal{A}(r)$ and $\mathcal{B}(r)$ are everywhere regular except, maybe, at the origin $r=0$. In addition, let us assume the constant and uniform magnetic field $\bf B$
to be parallel to the $z$-direction. A convenient gauge for the vector potential would then be $A_{\mu}=(0,0,0,\frac{1}{2}{Br^2\sin^2\theta})$. The Dirac equation of a particle of mass $m$ and of charge $e$ (that we take to be the negative charge of the electron for definiteness) moving inside a curved spacetime and minimally coupled to a Maxwell field $A_\mu$ reads \cite{Pollock,BookDiracInCS},
\begin{equation}\label{GeneralDirac}
i\hbar e^\mu_a\gamma^a\left(\partial_\mu+\frac{1}{8}\omega_\mu^{\;bc}\,[\gamma_b,\gamma_c]-\frac{ieA_\mu}{\hbar }\right)\psi=mc\psi.
\end{equation}
The quantities $e^\mu_a$ are the inverse of the spacetime tetrads $e^a_\mu$, whereas $\gamma^a$ are the gamma matrices, $\omega_\mu^{\,bc}$ is the spin connection and $[\gamma_b,\gamma_c]$ is the commutator of the gamma matrices. The spin connection is antisymmetric, $\omega_\mu^{\;bc}=-\omega_\mu^{\;cb}$, and it is expressed in terms of the tetrad fields and the Christoffel symbols $\Gamma_{\mu\nu}^\lambda$ of the spacetime by $\omega_\mu^{\;bc}=e^b_\nu\partial_\mu e^{c\nu}+\Gamma_{\mu\nu}^\lambda e^b_\lambda e^{c\nu}$.
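As a consistency check on the chosen gauge, one can verify numerically that $A_\mu=(0,0,0,\frac{1}{2}Br^2\sin^2\theta)$ reproduces a uniform flat-space magnetic field $B\hat z$, using the spherical-coordinate relations $B_r=\partial_\theta A_\phi/(r^2\sin\theta)$ and $B_\theta=-\partial_r A_\phi/(r\sin\theta)$. A minimal sketch (sample points and the finite-difference step are arbitrary choices):

```python
import numpy as np

B0 = 1.0
A_phi = lambda r, th: 0.5 * B0 * r**2 * np.sin(th)**2   # covariant phi-component

def field(r, th, h=1e-6):
    """Flat-space B from A_phi in spherical coordinates:
    B_r = dA_phi/dth / (r^2 sin th),  B_th = -dA_phi/dr / (r sin th)."""
    dAdth = (A_phi(r, th + h) - A_phi(r, th - h)) / (2 * h)
    dAdr = (A_phi(r + h, th) - A_phi(r - h, th)) / (2 * h)
    Br = dAdth / (r**2 * np.sin(th))
    Bth = -dAdr / (r * np.sin(th))
    Bz = Br * np.cos(th) - Bth * np.sin(th)       # Cartesian z-component
    Bperp = Br * np.sin(th) + Bth * np.cos(th)    # transverse component
    return Bz, Bperp

for r, th in [(0.5, 0.3), (2.0, 1.2), (7.0, 2.5)]:
    Bz, Bperp = field(r, th)
    print(Bz, Bperp)   # Bz ~ B0 = 1 and Bperp ~ 0 at every point
```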
\newpage
On the other hand, two choices for the four curved-spacetime gamma matrices $\gamma^\mu=e^a_\mu\gamma^a$ are possible. One may choose either to work with the tetrads' axes parallel to the $r$, $\theta$ and $\phi$ coordinates, or choose to orient the tetrads' axes parallel to some rectangular coordinate system \cite{BrillWheeler}. When combined with the standard representation of the constant flat-spacetime gamma matrices $\gamma^a$, given explicitly by
\begin{equation}\label{DiracMatrices}
\gamma^0=
\begin{pmatrix}
\mathbbm{1} & 0\\
0 & -\mathbbm{1}
\end{pmatrix},\qquad
\gamma^k=
\begin{pmatrix}
0 & \sigma_k\\
-\sigma_k & 0
\end{pmatrix},
\end{equation}
where $\mathbbm{1}$ is the $2\times2$ unit matrix and $\sigma_k$ (for $k=1,2,3$) are the three Pauli matrices, the curved-space gamma matrices $\gamma^\mu$ take on a simpler form in the first case than in the second \cite{BrillWheeler}. The spinor wavefunction $\psi$ will be different in the two cases, but physical quantities, as well as the radial wave equation, will be the same, since the two representations are simply related by a unitary transformation \cite{BrillWheeler}. However, to deal with the mixed cylindrical and spherical symmetries imposed, respectively, by the magnetic and gravitational fields, it is easier to adopt the spherical-coordinates representation of the gamma matrices, in which the three Pauli matrices take the form
\begin{equation}\label{PauliM}
\sigma_r=
\begin{pmatrix}
\cos\theta & \sin\theta e^{-i\phi}\\
\sin\theta e^{i\phi} & -\cos\theta
\end{pmatrix},\qquad
\sigma_\theta=
\begin{pmatrix}
-\sin\theta & \cos\theta e^{-i\phi}\\
\cos\theta e^{i\phi} & \sin\theta
\end{pmatrix},\qquad
\sigma_\phi=
\begin{pmatrix}
0 & -ie^{-i\phi}\\
ie^{i\phi} & 0
\end{pmatrix}.
\end{equation}
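As a quick numerical sanity check (not part of the derivation), one can verify that these spherical-coordinate matrices obey the full Pauli algebra at arbitrary angles; a short sketch in Python, assuming numpy is available:

```python
import numpy as np

def pauli_spherical(theta, phi):
    """Spherical-coordinate Pauli matrices of Eq. (PauliM)."""
    st, ct = np.sin(theta), np.cos(theta)
    ep, em = np.exp(1j * phi), np.exp(-1j * phi)
    sr = np.array([[ct, st * em], [st * ep, -ct]])
    sth = np.array([[-st, ct * em], [ct * ep, st]])
    sph = np.array([[0.0, -1j * em], [1j * ep, 0.0]])
    return sr, sth, sph

theta, phi = 0.73, 1.21  # arbitrary test angles
sr, sth, sph = pauli_spherical(theta, phi)
I2 = np.eye(2)

# Each matrix squares to the identity, and distinct matrices anticommute:
for s in (sr, sth, sph):
    assert np.allclose(s @ s, I2)
for a, b in [(sr, sth), (sr, sph), (sth, sph)]:
    assert np.allclose(a @ b + b @ a, 0)

# Cyclic product, sigma_r sigma_theta = i sigma_phi (right-handed triad):
assert np.allclose(sr @ sth, 1j * sph)

# On the equator, sigma_theta reduces to -sigma_z:
sz = np.diag([1.0, -1.0])
_, sth_eq, _ = pauli_spherical(np.pi / 2, phi)
assert np.allclose(sth_eq, -sz)
```

The cyclic product $\sigma_r\sigma_\theta=i\sigma_\phi$ confirms that the triplet is simply the Cartesian Pauli algebra rotated into the $(\hat r,\hat\theta,\hat\phi)$ frame.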
We now choose the spacetime tetrads for the metric (\ref{GeneralMetric}) to be $e^a_\mu={\rm diag}(\mathcal{A},\mathcal{B},r,r\sin\theta)$. With these tetrads, we easily evaluate the nonvanishing contractions of the spin connection with the commutator of the gamma matrices:
\begin{align}
\tfrac{1}{8}\omega_0^{ab}[\gamma_a,\gamma_b]&=\frac{\mathcal{A}'}{2\mathcal{B}}\gamma^0\gamma^1,\qquad \tfrac{1}{8}\omega_2^{ab}[\gamma_a,\gamma_b]=\frac{1}{2\mathcal{B}}\gamma^1\gamma^2,\nonumber\\ \tfrac{1}{8}\omega_3^{ab}[\gamma_a,\gamma_b]&=\frac{\sin\theta}{2\mathcal{B}}\gamma^1\gamma^3+\frac{\cos\theta}{2}\gamma^2\gamma^3.
\end{align}
Plugging these, together with the vector potential $A_\mu$, into Eq.\,(\ref{GeneralDirac}) and then multiplying the whole equation on the left by $\beta=\gamma^0$ and using the definition $\alpha^k=\beta\gamma^k$ of the Dirac alpha matrices, the resulting equation reads,
\begin{equation}\label{ExplicitGeneralDirac}
\left[\frac{i\partial_t}{\mathcal{A}c}+\frac{i\alpha^1}{\mathcal{B}}\left(\partial_r+\frac{1}{r}+\frac{\mathcal{A}'}{2\mathcal{A}}\right)+\frac{i\alpha^2}{r}\left(\partial_\theta+\frac{\cot\theta}{2}\right)+\frac{i\alpha^3}{r\sin\theta}\left(\partial_\phi-\frac{ieA_\phi}{\hbar}\right)\right]\psi=\frac{mc}{\hbar}\beta\psi.
\end{equation}
In this paper, we are interested in stationary states of energy $E$, for which $\partial_t\psi=-\tfrac{i}{\hbar}E\psi$. On the other hand, we may further simplify Eq.\,(\ref{ExplicitGeneralDirac}) by setting $\psi=\Psi/(r\sqrt{\mathcal{A}\sin\theta})$. With these two replacements, the resulting equation for $\Psi$ takes the form
\begin{equation}\label{SimplifiedGeneralDirac}
\Bigg[\frac{E}{\mathcal{A}\hbar c}+\frac{i\alpha^1}{\mathcal{B}}\partial_r+\frac{i\alpha^2}{r}\partial_\theta+\frac{i\alpha^3}{r\sin\theta}\left(\partial_\phi-\frac{ieB}{2\hbar }r^2\sin^2\theta\right)-\frac{mc}{\hbar}\beta\Bigg]\Psi=0.\\
\end{equation}
This is the general equation obeyed by fermions under the influence of the uniform magnetic field in the curved spacetime (\ref{GeneralMetric}). Unlike the well-known case of the Dirac equation in a spherically symmetric potential, i.e., in a central potential (see, e.g., Ref.\,\cite{Greiner}), our Eq.\,(\ref{SimplifiedGeneralDirac}) cannot be separated into the radial variable $r$ and the angular variables $\theta$ and $\phi$. This is due to the mixture of the spherical symmetry imposed by the gravitational field and the cylindrical symmetry imposed by the vertical uniform magnetic field $\bf B$.
One might then be tempted to switch to cylindrical coordinates $(\rho,\phi,z)$ to benefit from the cylindrical symmetry pertaining to the magnetic field. Unfortunately, even doing so would not make Eq.\,(\ref{SimplifiedGeneralDirac}) easier to solve. In fact, by making the substitutions $\rho=r\sin\theta$ and $z=r\cos\theta$, Eq.\,(\ref{SimplifiedGeneralDirac}) would contain operators of the form $(\rho^2+z^2)^{-\frac{1}{2}}z\,\partial_\rho$ and $(\rho^2+z^2)^{-\frac{1}{2}}\rho\,\partial_z$. It is then evident that even if we neglect the $z$-variation of $\Psi$ by assuming $\partial_z\Psi\approx0$, i.e., by neglecting the vertical momentum of the particle, we would still be left with an equation that is not separable in the variables $\rho$, $z$ and $\phi$.
It seems then that the only choice left is to set $z=0$ as well. Physically, this is due (in the {\it absence} of any other interaction of the particle) to the instability of any position $z\neq0$: the transverse gravitational attraction pulls the particle toward the equatorial plane, so it cannot remain in any other plane. This observation can also be reached by
referring to the variable $\theta$ in Eq.\,(\ref{SimplifiedGeneralDirac}). In fact, the only way of removing the $\theta$-dependence in that equation is to set $\theta=\frac{\pi}{2}$, i.e., to restrict ourselves to particles moving along the equatorial plane. Setting $\theta=\theta_0\neq\frac{\pi}{2}$ instead would not describe planar motion: for $\rho=r\sin\theta_0$ to vary, $r$ must vary as well, which makes the particle leave the plane $\theta=\theta_0$. Only along the equatorial plane does $\rho=r\sin\frac{\pi}{2}=r$ become a variable describing \mbox{planar motion}.
We will come back to the possibility of considering a plane other than the equator in Section \ref{sec:IV}. Before setting $\theta=\frac{\pi}{2}$, however, we should first extract the general single differential equation from the coupled Dirac spinors.
\section{The Quantized Energy Levels}\label{sec:III}
To extract a quantization condition, if any, on the energy $E$ from Eq.\,(\ref{SimplifiedGeneralDirac}), we need first to decompose the Dirac spinor into a pair of $2$-spinors as follows
\begin{equation}
\Psi(r,\theta,\phi)=
\begin{bmatrix}
\Phi(r,\theta,\phi)\\
\Theta(r,\theta,\phi)
\end{bmatrix}.
\end{equation}
Substituting this ansatz into Eq.\,(\ref{SimplifiedGeneralDirac}), the latter splits into the following coupled first-order differential equations:
\begin{align}\label{SplitEqsSpinors}
\left(\frac{E}{\mathcal{A}\hbar c}-\frac{mc}{\hbar}\right)\Phi+\left[\frac{i\sigma_r}{\mathcal{B}}\partial_r+\frac{i\sigma_\theta}{r}\partial_\theta+\frac{i\sigma_\phi}{r\sin\theta}\left(\partial_\phi-\frac{ieB}{2\hbar }r^2\sin^2\theta\right)\right]\Theta=0,\nonumber\\
\left(\frac{E}{\mathcal{A}\hbar c}+\frac{mc}{\hbar}\right)\Theta+\left[\frac{i\sigma_r}{\mathcal{B}}\partial_r+\frac{i\sigma_\theta}{r}\partial_\theta+\frac{i\sigma_\phi}{r\sin\theta}\left(\partial_\phi-\frac{ieB}{2\hbar }r^2\sin^2\theta\right)\right]\Phi=0.
\end{align}
It is clear from these coupled differential equations that, given the $r$-dependence of all the coupling terms, extracting one of the 2-spinors from one equation and substituting it into the other results in a very complicated single equation. This is unlike what happens with the standard Dirac equation in the central potential of the hydrogen atom \cite{Greiner}. A natural attempt at this stage would be to solve this system of equations by applying the approach developed in Refs.\,\cite{Alhaidari1,Alhaidari2,Alhaidari3}, which consists of applying a gauge transformation followed by a unitary transformation to turn one of the coupling terms in one of the two equations into a constant. Such a method considerably simplifies the final single equation one arrives at. Unfortunately, it cannot be consistently applied in our case, because it is designed for a specific class of interactions of the particle with the gauge field, to which our present physical system does not belong\footnote{FH is grateful to Prof. A.\,D. Alhaidari for the helpful discussion on this subtle point.}. The other possibility would be to adopt the approach developed in Ref.\,\cite{Alhaidari4} for the Dirac equation in curved spacetime. Unfortunately, that approach is not helpful to us either, as it becomes rapidly involved even when the particle is not coupled to a gauge potential; for this reason, it has been applied in Ref.\,\cite{Alhaidari4} only in $1+1$ dimensions. We are thus left with the old strategy for dealing with coupled differential equations of the type (\ref{SplitEqsSpinors}).
Extracting the 2-spinor $\Theta$ from the second line of Eq.\,(\ref{SplitEqsSpinors}) and plugging it into the first, we obtain the following single equation for the 2-spinor $\Phi$:
\begin{align}\label{2-SpinorEq}
&\frac{\Phi_{,rr}}{\mathcal{B}^2}+\left(\frac{2}{\mathcal{B}r}-\frac{\mathcal{B}_{,r}}{\mathcal{B}^3}+\frac{\mathcal{A}_{,r}E}{\mathcal{A}\mathcal{B}^2\left[E+\mathcal{A}mc^2\right]}\right)\Phi_{,r}\nonumber\\
&+\frac{\Phi_{,\theta\theta}}{r^2}+\left(\frac{i\left[\mathcal{B}-1\right]\sigma_\phi}{\mathcal{B}r^2}+\frac{\cot\theta}{r^2}+\frac{i\sigma_\phi}{r}\frac{\mathcal{A}_{,r}E}{\mathcal{A}\mathcal{B}\left[E+\mathcal{A}mc^2\right]}\right)\Phi_{,\theta}\nonumber\\
&+\frac{\Phi_{,\phi\phi}}{r^2\sin^2\theta}
+\left(\frac{i\left[1-\mathcal{B}\right]\sigma_\theta}{\mathcal{B}r^2\sin\theta}-\frac{ieB}{\hbar}-\frac{i\sigma_\theta}{r\sin\theta}\frac{\mathcal{A}_{,r}E}{\mathcal{A}\mathcal{B}\left[E+\mathcal{A}mc^2\right]}\right)\Phi_{,\phi}\nonumber\\
&+\Bigg(\frac{E^2}{\mathcal{A}^2\hbar^2c^2}-\frac{m^2c^2}{\hbar^2}-\frac{e^2B^2r^2\sin^2\theta}{4\hbar^2}-\frac{\left[\mathcal{B}+1\right]eB\sigma_\theta\sin\theta}{2\hbar\mathcal{B}}+\frac{\sigma_reB\cos\theta}{\hbar}\nonumber\\
&\qquad-\frac{\mathcal{A}_{,r}EeBr\,\sigma_\theta\sin\theta}{2\hbar\mathcal{A}\mathcal{B}\left[E+\mathcal{A}mc^2\right]}\Bigg)\Phi=0.
\end{align}
With this result, we can now proceed to restrict the motion of the particle to the equatorial plane by setting $\theta=\frac{\pi}{2}$. Therefore, we also need to perform the replacement $r\sin\theta=r\equiv\rho$ everywhere in this equation. On the other hand, for the $\theta$-derivatives, we use the following identities valid for motion restricted to a plane: $\partial_\theta=r\cos\theta\partial_\rho$ and $\partial_\theta^2=-r\sin\theta\partial_\rho+r^2\cos^2\theta\partial_\rho^2$. Therefore, for $\theta=\frac{\pi}{2}$ we should discard the partial derivative $\partial_\theta$ and perform the replacement $\partial_\theta^2\rightarrow-\rho\,\partial_\rho$. The above equation then becomes
\begin{align}\label{Plane2-SpinorEq}
&\frac{\Phi_{,\rho\rho}}{\mathcal{B}^2}+\left(\frac{2-\mathcal{B}}{\mathcal{B}\rho}-\frac{\mathcal{B}_{,\rho}}{\mathcal{B}^3}+\frac{\mathcal{A}_{,\rho}E}{\mathcal{A}\mathcal{B}^2\left[E+\mathcal{A}mc^2\right]}\right)\Phi_{,\rho}\nonumber\\
&+\frac{\Phi_{,\phi\phi}}{\rho^2}
+\left(\frac{i\left[\mathcal{B}-1\right]\sigma_z}{\mathcal{B}\rho^2}-\frac{ieB}{\hbar}+\frac{i\sigma_z}{\rho}\frac{\mathcal{A}_{,\rho}E}{\mathcal{A}\mathcal{B}\left[E+\mathcal{A}mc^2\right]}\right)\Phi_{,\phi}\nonumber\\
&+\left(\frac{E^2}{\mathcal{A}^2\hbar^2c^2}-\frac{m^2c^2}{\hbar^2}-\frac{e^2B^2\rho^2}{4\hbar^2}+\frac{\left[\mathcal{B}+1\right]eB\sigma_z}{2\hbar\mathcal{B}}+\frac{\mathcal{A}_{,\rho}EeB\rho\,\sigma_z}{2\hbar\mathcal{A}\mathcal{B}\left[E+\mathcal{A}mc^2\right]}\right)\Phi=0.
\end{align}
We have used here the fact that according to Eq.\,(\ref{PauliM}), the spherical-coordinates Pauli matrix $\sigma_\theta$ reduces to $-\sigma_z$ for $\theta=\frac{\pi}{2}$ (as expected), where $\sigma_z$ is the usual third Pauli matrix in Cartesian coordinates.
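The chain-rule identities used in this reduction, $\partial_\theta\to r\cos\theta\,\partial_\rho$ and $\partial_\theta^2\to-r\sin\theta\,\partial_\rho+r^2\cos^2\theta\,\partial_\rho^2$, valid for any function depending on $\theta$ only through $\rho=r\sin\theta$, can be checked symbolically; a minimal sketch using sympy, with a few arbitrary test profiles:

```python
import sympy as sp

r, theta, rho = sp.symbols('r theta rho', positive=True)

# The identities hold for any profile f(rho); we spot-check a few functions.
for f in (sp.exp(rho), rho**sp.Rational(3, 2), sp.sin(rho)):
    F = f.subs(rho, r * sp.sin(theta))              # f evaluated at rho = r sin(theta)
    fp = sp.diff(f, rho).subs(rho, r * sp.sin(theta))
    fpp = sp.diff(f, rho, 2).subs(rho, r * sp.sin(theta))

    # partial_theta f = r cos(theta) f'(rho)
    assert sp.simplify(sp.diff(F, theta) - r * sp.cos(theta) * fp) == 0
    # partial_theta^2 f = -r sin(theta) f'(rho) + r^2 cos^2(theta) f''(rho)
    assert sp.simplify(sp.diff(F, theta, 2)
                       + r * sp.sin(theta) * fp
                       - r**2 * sp.cos(theta)**2 * fpp) == 0
```

At $\theta=\frac{\pi}{2}$ the first derivative vanishes and the second reduces to $-\rho\,\partial_\rho$, as used above.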
On the other hand, according to expressions (\ref{PauliM}) of the Pauli matrices, we see in Eq.\,(\ref{Plane2-SpinorEq}) that all the terms are diagonal and that the equation does not involve the angular variable $\phi$ explicitly. Therefore, without loss of generality, we can write the two independent solutions $\Phi_+(\rho,\phi)$ and $\Phi_-(\rho,\phi)$ of the equation in a basis made of the spin-eigenstates along the $z$-direction with eigenvalues $s=\pm1$, respectively. We thus introduce arbitrary radial functions $f_+(\rho)$ and $f_-(\rho)$ such that
\begin{equation}\label{Sigma3Basis}
\Phi_+(\rho,\phi)=e^{i\ell\phi}
\begin{bmatrix}
f_+(\rho)\\0
\end{bmatrix},\qquad
\Phi_-(\rho,\phi)=e^{i\ell\phi}
\begin{bmatrix}
0\\f_-(\rho)
\end{bmatrix}.
\end{equation}
Imposing the periodicity condition along the equator, $\Phi(\rho,\phi+2\pi)=\Phi(\rho,\phi)$, we learn that the quantum number $\ell$ must be an integer, which we take here to be nonnegative in accordance with the negative sign we chose for the charge $e$. Plugging these ansatzes into Eq.\,(\ref{Plane2-SpinorEq}) yields the following second-order differential equation in $\rho$:
\begin{align}\label{fEq}
&\frac{f_s''}{\mathcal{B}^2}+\left(\frac{2-\mathcal{B}}{\mathcal{B}\rho}-\frac{\mathcal{B}'}{\mathcal{B}^3}+\frac{\mathcal{A}'E}{\mathcal{A}\mathcal{B}^2\left[E+\mathcal{A}mc^2\right]}\right)f'_s\nonumber\\
&+\Bigg(\frac{E^2}{\mathcal{A}^2\hbar^2c^2}-\frac{m^2c^2}{\hbar^2}+\frac{eB\ell}{\hbar}-\frac{e^2B^2\rho^2}{4\hbar^2}-\frac{\mathcal{B}\ell^2+\left[\mathcal{B}-1\right]s\ell}{\mathcal{B}\rho^2}+\frac{\left[\mathcal{B}+1\right]eBs}{2\hbar\mathcal{B}}\nonumber\\
&\qquad-\frac{\mathcal{A}'Es\left[2\hbar\ell-eB\rho^2\right]}{2\hbar\rho\mathcal{A}\mathcal{B}\left[E+\mathcal{A}mc^2\right]}\Bigg)f_s=0.
\end{align}
Here, and henceforth, a prime denotes a derivative with respect to the variable $\rho$, and the function $f_s(\rho)$ stands for the two cases $f_\pm(\rho)$ of $s=\pm1$, respectively. Next, we introduce the following ansatz: \begin{equation}\label{FirstAnsatz}
f_s(\rho)=\left(\frac{\mathcal{B}}{\mathcal{A}}E+\mathcal{B}mc^2\right)^{\frac{1}{2}}F_s(\rho),
\end{equation}
for some radial function $F_s(\rho)$. Please note that only the behavior of the radial function $F_s(\rho)$ needs to be examined henceforth, since the argument of the square root in this ansatz remains finite for both large and small values of $\rho$, as we shall see below once the explicit components of the spacetime metric are introduced. Plugging this ansatz into Eq.\,(\ref{fEq}), the latter takes the following form:
\begin{multline}\label{SimplifiedfEq}
\!\!\!\frac{F_s''}{\mathcal{B}^2}+\frac{2-\mathcal{B}}{\mathcal{B}\rho}F'_s+\bigg[\frac{E^2}{\mathcal{A}^2\hbar^2c^2}-\frac{m^2c^2}{\hbar^2}+\frac{eB\ell}{\hbar}-\frac{e^2B^2\rho^2}{4\hbar^2}-\frac{\mathcal{B}\ell^2+(\mathcal{B}-1)s\ell}{\mathcal{B}\rho^2}\\
+\frac{(\mathcal{B}+1)eBs}{2\hbar\mathcal{B}}-\frac{\mathcal{A}'Es(2\hbar\ell-eB\rho^2)}{2\hbar\rho\mathcal{A}\mathcal{B}(E+\mathcal{A}mc^2)}-\frac{1}{2\mathcal{B}^2}\left(\frac{\mathcal{A}'E}{\mathcal{A}[E+\mathcal{A}mc^2]}-\frac{\mathcal{B}'}{\mathcal{B}}\right)'\\
-\left(\frac{\mathcal{A}'E}{2\mathcal{A}\mathcal{B}\left[E+\mathcal{A}mc^2\right]}-\frac{\mathcal{B}'}{2\mathcal{B}^2}\right)^2-\frac{2-\mathcal{B}}{2\mathcal{B}\rho}\left(\frac{\mathcal{A}'E}{\mathcal{A}\left[E+\mathcal{A}mc^2\right]}-\frac{\mathcal{B}'}{\mathcal{B}}\right)\bigg]F_s=0.
\end{multline}
Finally, the following ansatz,
\begin{equation}\label{FinalAnsatz}
F_s(\rho)=G_s(\rho)\exp\left[\int\frac{(1-\mathcal{B})^2}{2\rho}{\rm d}\rho\right],
\end{equation}
for an arbitrary radial function $G_s(\rho)$, allows us to convert Eq.\,(\ref{SimplifiedfEq}) into the following more useful form:
\begin{align}\label{FinalfEq}
&G''_s+\frac{G'_s}{\rho}+\bigg[
\frac{E^2}{\mathcal{A}^2\hbar^2c^2}-\frac{m^2c^2}{\hbar^2}+\frac{eB\ell}{\hbar}-\frac{e^2B^2\rho^2}{4\hbar^2}-\frac{\ell^2}{\rho^2}+\frac{(1-\mathcal{B})s\ell}{\mathcal{B}\rho^2}+\frac{(1+\mathcal{B})eBs}{2\hbar\mathcal{B}}\nonumber\\
&-\frac{\mathcal{A}'Es\left[2\hbar\ell-eB\rho^2\right]}{2\hbar\rho\mathcal{A}\mathcal{B}\left[E+\mathcal{A}mc^2\right]}-\frac{1}{2\mathcal{B}^2}\left(\frac{\mathcal{A}'E}{\mathcal{A}\left[E+\mathcal{A}mc^2\right]}-\frac{\mathcal{B}'}{\mathcal{B}}\right)'-\left(\frac{\mathcal{A}'E}{2\mathcal{A}\mathcal{B}\left[E+\mathcal{A}mc^2\right]}-\frac{\mathcal{B}'}{2\mathcal{B}^2}\right)^2\nonumber\\
&-\frac{2-\mathcal{B}}{2\mathcal{B}\rho}\left(\frac{\mathcal{A}'E}{\mathcal{A}\left[E+\mathcal{A}mc^2\right]}-\frac{\mathcal{B}'}{\mathcal{B}}\right)-\frac{\mathcal{B}'(1-\mathcal{B})}{\mathcal{B}^2\rho}-\frac{(1-\mathcal{B})^4}{4\mathcal{B}^2\rho^2}\bigg]\mathcal{B}^2G_s=0.
\end{align}
It is clear that Eq.\,(\ref{FinalfEq}) would not be easy to solve exactly. However, we can now clearly see which terms of the equation could safely be dropped, even without making any prior assumption on the orders of magnitude of the various physical quantities involved, such as the energy of the particle and the strengths of the gravitational and magnetic interaction terms. In Section \ref{sec:IV}, we shall come back to this point to discuss the order of magnitude of the correction that would have been brought to our final result for the energy levels had we kept those extra terms.
Now, while Eq.\,(\ref{FinalfEq}) seems analytically challenging to solve even when keeping only the dominant terms, the fact that the metric components $\mathcal{A}^2$ and $\mathcal{B}^2$ of the interior Schwarzschild solution depend on $\rho^2$ instead of $\rho$ will greatly simplify our task as we shall see shortly. For a spherical body of mass $M$, of radius $R$ and of uniform density, the interior Schwarzschild solution describing the gravitational field inside the body is given by the metric (\ref{GeneralMetric}), with \cite{Synge}
\begin{equation}\label{InteriorSchwar}
\mathcal{A}(\rho)=\frac{3}{2}\left(1-\frac{r_s}{R}\right)^{\frac{1}{2}}-\frac{1}{2}\left(1-\frac{r_s\rho^2}{R^3}\right)^{\frac{1}{2}},\qquad \mathcal{B}(\rho)=\left(1-\frac{r_s\rho^2}{R^3}\right)^{-1/2},
\end{equation}
where $r_s=2GM/c^2$ is the Schwarzschild radius of the massive body. For convenience, we introduce the dimensionless parameter $\eta=\frac{3}{2}(1-\frac{r_s}{R})^{\frac{1}{2}}$ and the inverse length squared $\lambda=\frac{r_s}{R^3}$. Please note that by choosing this metric, we have ignored the possible geometric effect of the magnetic field on the spacetime. We will come back to this point in Section \ref{sec:IV} as well.
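As a numerical illustration (SI constants, values rounded), the following sketch evaluates $r_s$, $\lambda$ and $\eta$ for a representative neutron star of mass $1.4\,M_\odot$ and radius $10$ km, and checks that the interior functions (\ref{InteriorSchwar}) match the exterior Schwarzschild solution at $\rho=R$:

```python
import math

G = 6.674e-11        # m^3 kg^-1 s^-2
c = 2.998e8          # m/s
M_sun = 1.989e30     # kg

M = 1.4 * M_sun      # representative neutron-star mass
R = 10e3             # radius, m

r_s = 2 * G * M / c**2             # Schwarzschild radius
lam = r_s / R**3                   # lambda = r_s / R^3
eta = 1.5 * math.sqrt(1 - r_s / R)

print(f"r_s    = {r_s/1e3:.2f} km")          # ~4.1 km
print(f"lambda = {lam*1e6:.4f} km^-2")       # ~0.004 km^-2
print(f"eta = {eta:.3f}, redshift factor eta - 1/2 = {eta - 0.5:.3f}")

# Continuity with the exterior Schwarzschild solution at rho = R:
# A(R) = sqrt(1 - r_s/R) and B(R) = 1/A(R), since lam * R^2 = r_s / R.
A_R = 1.5 * math.sqrt(1 - r_s / R) - 0.5 * math.sqrt(1 - lam * R**2)
B_R = (1 - lam * R**2) ** -0.5
assert math.isclose(A_R, math.sqrt(1 - r_s / R), rel_tol=1e-12)
assert math.isclose(B_R, 1 / A_R, rel_tol=1e-12)
```

The factor $\eta-\frac{1}{2}\approx0.65$ computed here is the overall gravitational redshift of the energy levels that will appear in the final results below.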
Let us now show that the argument of the square root in our ansatz (\ref{FirstAnsatz}) indeed remains finite. For very small values of $\rho$, the functions (\ref{InteriorSchwar}) approximate to $\mathcal{A}\sim\eta-\frac{1}{2}+\frac{\lambda}{4}\rho^2$ and $\mathcal{B}\sim1+\frac{\lambda}{2}\rho^2$, from which we see that the leading-order $\rho$-dependent term in the argument of the square root in Eq.\,(\ref{FirstAnsatz}) is proportional to $\rho^2$; the argument thus remains finite for very small $\rho$. On the other hand, for $\rho\geq R$, the functions (\ref{InteriorSchwar}) lead, by continuity, to the metric components of the {\it exterior} Schwarzschild solution, for we have then $\mathcal{A}=\mathcal{B}^{-1}=(1-\frac{r_s}{\rho})^{\frac{1}{2}}$. Therefore, for very large values of $\rho$, the functions $\mathcal{A}$ and $\mathcal{B}$ in the argument of the square root approximate to $\mathcal{A}\sim1-\frac{r_s}{2\rho}$ and $\mathcal{B}\sim1+\frac{r_s}{2\rho}$, so the leading-order $\rho$-dependent term in the argument is proportional to $1/\rho$. The argument of the square root thus remains finite for large values of $\rho$ as well, and hence for all values of $\rho$.
Next, we use the functions (\ref{InteriorSchwar}) to expand $1/\mathcal{A}$ and $1/\mathcal{A}^2$, up to the first order in the parameter $\lambda$, as follows:
\begin{align}\label{ABPowerSeries}
\frac{1}{\mathcal{A}}&\sim\frac{2}{(2\eta-1)}-\frac{\lambda\rho^2}{(2\eta-1)^2},\nonumber\\
\frac{1}{\mathcal{A}^{2}}&\sim\frac{4}{(2\eta-1)^2}-\frac{4\lambda\rho^2}{(2\eta-1)^3}.
\end{align}
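These first-order expansions are easily double-checked symbolically; a sketch using sympy:

```python
import sympy as sp

eta, lam, rho = sp.symbols('eta lambda rho', positive=True)

# Interior Schwarzschild metric functions of Eq. (InteriorSchwar),
# with the rho-independent part of A written in terms of eta.
A = eta - sp.Rational(1, 2) * sp.sqrt(1 - lam * rho**2)
B = (1 - lam * rho**2) ** sp.Rational(-1, 2)

# First-order expansions in lambda quoted in Eq. (ABPowerSeries):
invA_1st = 2 / (2*eta - 1) - lam * rho**2 / (2*eta - 1)**2
invA2_1st = 4 / (2*eta - 1)**2 - 4 * lam * rho**2 / (2*eta - 1)**3

assert sp.simplify(sp.series(1/A, lam, 0, 2).removeO() - invA_1st) == 0
assert sp.simplify(sp.series(1/A**2, lam, 0, 2).removeO() - invA2_1st) == 0

# Likewise, 1/B ~ 1 - lam*rho^2/2 and B^2 ~ 1 + lam*rho^2:
assert sp.simplify(sp.series(1/B, lam, 0, 2).removeO() - (1 - lam*rho**2/2)) == 0
assert sp.simplify(sp.series(B**2, lam, 0, 2).removeO() - (1 + lam*rho**2)) == 0
```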
Expanding only up to the first order in $\lambda$ is justified by the order of magnitude of $\lambda$ (even for neutron stars, as we will see in Section \ref{sec:IV}). On the other hand, we also have $1/\mathcal{B}\sim1-\frac{\lambda}{2}\rho^2$ and $\mathcal{B}^2\sim1+\lambda\rho^2$. With these expansions, Eq.\,(\ref{FinalfEq}) takes, up to the first order in the parameter $\lambda$, the following form:
\begin{align}\label{1stOrderFinalfEq}
&\frac{{\rm d}^2G_s}{{\rm d}\rho^2}+\frac{1}{\rho}\frac{{\rm d}G_s}{{\rm d}\rho}+\Bigg[
\frac{4E^2}{(2\eta-1)^2\hbar^2c^2}-\frac{m^2c^2}{\hbar^2}+\frac{eB(\ell+s)}{\hbar}+\frac{\lambda eBs}{4\hbar}-\frac{\lambda(2\ell^2+s\ell-2)}{2}\nonumber\\
&-\frac{2\lambda(s\ell+1)}{(2\eta-1)[2+(2\eta-1)\frac{mc^2}{E}]}-\frac{\ell^2}{\rho^2}-\Bigg(\frac{e^2B^2}{4\hbar^2}+\frac{\lambda m^2c^2}{\hbar^2}+\frac{8\lambda E^2[\eta-1]}{\hbar^2c^2[2\eta-1]^3}+\frac{\lambda eB[2\ell+s]}{2\hbar}\nonumber\\
&+\frac{\lambda eBs}{\hbar(2\eta-1)[2+(2\eta-1)\frac{mc^2}{E}]}\Bigg)\rho^2-\frac{\lambda e^2B^2}{4\hbar^2}\rho^4\Bigg]G_s=0.
\end{align}
This equation has the form
\begin{equation}\label{GEquation}
G_s''+\frac{G_s'}{\rho}+\left(\varepsilon-\frac{\ell^2}{\rho^2}-\kappa^2\rho^2-\tau^2\rho^4\right)G_s=0,
\end{equation}
where the constants $\varepsilon$, $\kappa^2$ and $\tau^2$ are straightforwardly read off from Eq.\,(\ref{1stOrderFinalfEq}). Setting $G_s=\chi_s/\sqrt{\rho}$ in Eq.\,(\ref{GEquation}), the latter takes the form
\begin{equation}\label{Anharmonic}
\chi''_s+\left(\varepsilon-\frac{\ell^2-\frac{1}{4}}{\rho^2}-\kappa^2\rho^2-\tau^2\rho^4\right)\chi_s=0.
\end{equation}
This is a Schr\"odinger equation with a centrifugal barrier term and an isotropic quartic anharmonic oscillator potential term. Please note that (i) in the case of a weak gravitational interaction compared to the magnetic one, i.e., for $\lambda\ll |e|B/\hbar$, and/or (ii) in the case of charged fermions moving very close to the center of the spherical mass, i.e., for $\rho\ll R$, the purely quartic term $\tau^2\rho^4$ can be dropped. In that case, Eq.\,(\ref{Anharmonic}) reduces to the one that has already been solved exactly in Ref.\,\cite{GravityLandauI}, where the quantized eigenvalues $\varepsilon_{n,\ell}$ were extracted. We will come back to these special cases in Section \ref{sec:IV-1}. Here, we extract the eigenvalues from Eq.\,(\ref{Anharmonic}) while keeping the quartic term.
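Equation (\ref{Anharmonic}) is also well suited to direct numerical diagonalization, which provides an independent check on the analytical results. The sketch below (grid parameters are illustrative; scipy is assumed available) discretizes the radial operator by finite differences with Dirichlet boundary conditions and verifies that, for $\tau=0$, the spectrum reproduces the radial harmonic-oscillator values $\varepsilon=2|\kappa|(2n+\ell+1)$, while a nonzero quartic term pushes every level upward:

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

def radial_spectrum(kappa, tau, ell, L=12.0, N=4000, k=3):
    """Lowest k eigenvalues of chi'' + (eps - (ell^2 - 1/4)/rho^2
       - kappa^2 rho^2 - tau^2 rho^4) chi = 0 on (0, L), Dirichlet BCs."""
    h = L / (N + 1)
    rho = h * np.arange(1, N + 1)
    V = (ell**2 - 0.25) / rho**2 + kappa**2 * rho**2 + tau**2 * rho**4
    diag = 2.0 / h**2 + V
    off = -np.ones(N - 1) / h**2
    return eigh_tridiagonal(diag, off, select='i', select_range=(0, k - 1))[0]

# tau = 0: radial harmonic oscillator, eps_n = 2|kappa|(2n + ell + 1)
kappa, ell = 1.0, 1
eps = radial_spectrum(kappa, 0.0, ell)
exact = 2 * kappa * (2 * np.arange(3) + ell + 1)   # 4, 8, 12
assert np.allclose(eps, exact, atol=0.02)

# Switching on the quartic term raises every level:
eps_q = radial_spectrum(kappa, 0.3, ell)
assert np.all(eps_q > eps)
```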
It is shown in Ref.\,\cite{JMP(1986)} that even though Eq.\,(\ref{Anharmonic}) is not solvable exactly, one may still extract an approximate analytical expression for its eigenvalues using the Jeffreys–Wentzel–Kramers–Brillouin (JWKB) approximation. To the fourth order in the approximation, the result reads \cite{JMP(1986)}
\begin{equation}\label{QuantizedEpsilon}
\varepsilon_{n,\ell,s}=(2 \tau)^{\frac{2}{3}}\sum_{k=0}^8\left[\frac{3\sqrt{\pi}\,\Gamma(\tfrac{3}{4})}{\sqrt{2}\,\Gamma(\tfrac{1}{4})}\left(2n+1+\ell\right)\right]^{\frac{4-2k}{3}}N_k\left(\kappa,\tau,\ell\right),
\end{equation}
where the explicit expressions of the nine terms $N_k(\kappa, \tau,\ell)$ are given in Appendix \ref{sec:App}. It is clear from this expression that the usual flat-space Landau levels of a charged fermion moving under the influence of a uniform magnetic field are dramatically altered. In fact, instead of the splitting of the levels that occurs due to a weak gravitational field \mbox{outside \cite{GravityLandauI}} or inside a spherical mass \cite{QHGravity}, what happens here is a complete redistribution of the levels. Both the separation of the levels and their strengths are redefined for each principal quantum number $n$.
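The accuracy of the leading ($k=0$, $N_0=1$) term of Eq.\,(\ref{QuantizedEpsilon}) can be gauged against a direct finite-difference diagonalization in the pure-quartic limit $\kappa=0$. The (unphysical but convenient) value $\ell=\frac{1}{2}$ cancels the centrifugal term in Eq.\,(\ref{Anharmonic}), reducing the half-line problem to the odd-parity states of the one-dimensional quartic oscillator; a sketch, assuming scipy is available:

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal
from scipy.special import gamma

def quartic_spectrum(tau, L=6.0, N=4000, k=2):
    """Lowest k eigenvalues of chi'' + (eps - tau^2 rho^4) chi = 0
       on (0, L) with Dirichlet BCs (ell = 1/2: no centrifugal term)."""
    h = L / (N + 1)
    rho = h * np.arange(1, N + 1)
    diag = 2.0 / h**2 + tau**2 * rho**4
    off = -np.ones(N - 1) / h**2
    return eigh_tridiagonal(diag, off, select='i', select_range=(0, k - 1))[0]

def jwkb_leading(n, ell, tau):
    """k = 0 term of Eq. (QuantizedEpsilon), with N_0 = 1."""
    coeff = 3 * np.sqrt(np.pi) * gamma(0.75) / (np.sqrt(2) * gamma(0.25))
    return (2 * tau) ** (2 / 3) * (coeff * (2 * n + 1 + ell)) ** (4 / 3)

tau = 1.0
eps_num = quartic_spectrum(tau)
eps_wkb = np.array([jwkb_leading(n, 0.5, tau) for n in range(2)])

# The leading JWKB term already agrees to ~1% here; the N_k terms do the rest.
assert np.allclose(eps_num, eps_wkb, rtol=0.03)
```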
For the sake of illustration and simplicity, we display here explicitly the expression we obtain when keeping only the first term of the sum in Eq.\,(\ref{QuantizedEpsilon}). By substituting the constants $\varepsilon$, $\kappa$ and $\tau$, as well as the term $N_0=1$ (as given in Appendix \ref{sec:App}) into \mbox{Eq.\,(\ref{QuantizedEpsilon}),} we find
\begin{align}\label{QEpsilonN0}
E_{n,\ell,s}&=\left(\eta-\frac{1}{2}\right)\Bigg\{m^2c^4-\hbar c^2eB\left(\ell+s+\frac{\lambda s}{4}\right)+\frac{\lambda\hbar^2c^2(2\ell^2+s\ell-2)}{2}\nonumber\\
&+\frac{2\lambda\hbar^2c^2(s\ell+1)}{(2\eta-1)[2+(2\eta-1)\frac{mc^2}{E}]}+\hbar^2c^2\left[\left(2n+1+\ell\right)\frac{3\,\Gamma(\tfrac{3}{4})}{\Gamma(\tfrac{1}{4})}\right]^{\frac{4}{3}}\left(\frac{\pi eB\sqrt{\lambda}}{2\hbar}\right)^\frac{2}{3}\Bigg\}^{\frac{1}{2}}.
\end{align}
This does not, in fact, look at all like the familiar Landau levels. It must be noted here that this formula does not reduce to the flat-space result when setting $\lambda=0$, because the expansion in terms of the JWKB integrals is about the pure quartic oscillator levels, not the other way around \cite{JMP(1986)}. Therefore, to achieve a high accuracy in Formula (\ref{QuantizedEpsilon}), $\kappa^2$ should not be larger than $\tau^2$. To properly recover the flat-space case, one needs to set $\lambda=0$ in Eq.\,(\ref{1stOrderFinalfEq}). Similarly, when the gravitational interaction is weaker than the magnetic interaction, one needs to extract the corresponding Landau levels by starting from Eq.\,(\ref{1stOrderFinalfEq}), as we shall do now.
\subsection{Weak Quartic Term}\label{sec:IV-1}
When the gravitational interaction is weak compared to the magnetic interaction and/or when the charged particles move very close to the center of the spherical mass, the quartic term $\tau^2\rho^4$ in Eq.\,(\ref{GEquation}) can safely be dropped, so that the equation takes the form
\begin{equation}\label{GEquationWithoutQuartic}
G_s''+\frac{G_s'}{\rho}+\left(\varepsilon-\frac{\ell^2}{\rho^2}-\kappa^2\rho^2\right)G_s=0.
\end{equation}
The solution to this equation is easily found to be given in terms of the confluent hypergeometric function as follows (see Ref.\,\cite{GravityLandauI} for the details of the derivation):
\begin{equation}
G_s(\rho)=C\rho^{\ell}e^{-\frac{|\kappa|}{2}\rho^2}\,_1F_1\left(\frac{1+\ell}{2}-\frac{\varepsilon}{4|\kappa|};\ell+1;|\kappa|\rho^2\right).
\end{equation}
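That this expression indeed solves Eq.\,(\ref{GEquationWithoutQuartic}) for an arbitrary (not necessarily quantized) $\varepsilon$ can be verified numerically; a sketch using scipy's \texttt{hyp1f1} with arbitrary test values:

```python
import numpy as np
from scipy.special import hyp1f1

kappa, ell, eps = 1.3, 2, 5.7   # arbitrary test values (eps need not be quantized)

def G(rho):
    a = (1 + ell) / 2 - eps / (4 * abs(kappa))
    return rho**ell * np.exp(-abs(kappa) * rho**2 / 2) \
        * hyp1f1(a, ell + 1, abs(kappa) * rho**2)

# Residual of G'' + G'/rho + (eps - ell^2/rho^2 - kappa^2 rho^2) G,
# with derivatives taken by central finite differences.
h = 1e-3
for rho in (0.5, 1.0, 1.7):
    d2 = (G(rho + h) - 2 * G(rho) + G(rho - h)) / h**2
    d1 = (G(rho + h) - G(rho - h)) / (2 * h)
    res = d2 + d1 / rho + (eps - ell**2 / rho**2 - kappa**2 * rho**2) * G(rho)
    assert abs(res) < 1e-3 * max(1.0, abs(G(rho)))
```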
This solution is the one of the two independent solutions of Eq.\,(\ref{GEquationWithoutQuartic}) that is finite at the origin $\rho=0$. $C$ is a constant of integration, and $_1F_1(a;b;\xi)$ is the confluent hypergeometric function of a variable $\xi$, defined for any values of the parameters $a$ and $b$ except when $b$ is a negative integer, in which case the function has a simple pole \cite{HandBook}. Therefore, to guarantee a square-integrable wavefunction, the following condition must be imposed for a nonnegative integer $n$:
\begin{equation}
\frac{1+\ell}{2}-\frac{\varepsilon}{4|\kappa|}=-n,
\end{equation}
in which case the confluent hypergeometric function becomes indeed a polynomial of finite degree $n$. Substituting into this condition the values of $\varepsilon$ and $\kappa$ as read off from \mbox{Eq.\,(\ref{1stOrderFinalfEq})}, we arrive at the following condition involving the energy $E$ of the \mbox{charged fermion:}
\begin{align}
&\frac{4E^2}{(2\eta-1)^2\hbar^2c^2}-\frac{m^2c^2}{\hbar^2}+\frac{eB(\ell+s)}{\hbar}+\frac{\lambda eBs}{4\hbar}-\frac{\lambda(2\ell^2+s\ell-2)}{2}\nonumber\\
&-\frac{2\lambda(s\ell+1)}{(2\eta-1)[2+(2\eta-1)\frac{mc^2}{E}]}=(2n+1+\ell)\Bigg[\frac{e^2B^2}{\hbar^2}+\frac{4\lambda m^2c^2}{\hbar^2}+\frac{32\lambda E^2(\eta-1)}{\hbar^2c^2(2\eta-1)^3}\nonumber\\
&+\frac{2\lambda eB(2\ell+s)}{\hbar}+\frac{4\lambda eBs}{\hbar(2\eta-1)(2+[2\eta-1]\frac{mc^2}{E})}\Bigg]^{\frac{1}{2}}.
\end{align}
We may solve this equation for $E$, again keeping only the first-order terms in the parameter $\lambda$, to arrive at the following quantization condition on the energy $E$ of \mbox{the particle}:
\begin{align}\label{FinalQuantization}
E_{n,\ell,s}&=\hbar c\left(\eta-\frac{1}{2}\right)\Bigg\{\frac{m^2c^2}{\hbar^2}-\frac{eB(\ell+s)}{\hbar}-\frac{\lambda eBs}{4\hbar}+\frac{\lambda(2\ell^2+s\ell-2)}{2}\nonumber\\
&\quad+\frac{\lambda(s\ell+1)}{(2\eta-1)}\frac{\sqrt{1+(2n+1-s)\frac{\hbar eB}{m^2c^2}}}{1+\sqrt{1+(2n+1-s)\frac{\hbar eB}{m^2c^2}}}+(2n+1+\ell)\Bigg[\frac{e^2B^2}{\hbar^2}+\frac{4\lambda m^2c^2}{\hbar^2}\nonumber\\
&\quad+\frac{2\lambda eB(2\ell+s)}{\hbar}+(\eta-1)\frac{8\lambda(m^2c^2+[2n+1-s]\hbar eB)}{\hbar^2(2\eta-1)}\nonumber\\
&\quad+\frac{2\lambda eBs}{\hbar(2\eta-1)}\frac{\sqrt{1+(2n+1-s)\frac{\hbar eB}{m^2c^2}}}{1+\sqrt{1+(2n+1-s)\frac{\hbar eB}{m^2c^2}}}\Bigg]^{\frac{1}{2}}\Bigg\}^{\frac{1}{2}}.
\end{align}
We readily note that for $M=0$ (i.e., by setting $\eta=\frac{3}{2}$ and $\lambda=0$), the result (\ref{FinalQuantization}) reduces to the usual quantized energy levels of a charged fermion inside a uniform magnetic field in Minkowski spacetime: $E_{n,s}=[m^2c^4+\hbar c^2(2n+1-s\frac{e}{|e|})|e|B]^{\frac{1}{2}}$. In addition, we also note, by setting $B=0$, that even in the absence of the magnetic field the gravitational field induces quantized energy levels. Furthermore, we neatly recognize a general-relativistic correction to the pure relativistic Landau levels arising from the magnetic field. Such a correction arises from the factor $(\eta-\frac{1}{2})$ multiplying the right-hand side.
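The flat-space reduction just described can be checked numerically. Since the flat-space Landau levels are degenerate in $\ell$, the $M=0$ limit of Eq.\,(\ref{FinalQuantization}) reproduces the usual levels with the effective quantum number $n+\ell$; a sketch, assuming electron values, an illustrative field strength, and the convention $e=-|e|$ adopted above:

```python
import numpy as np

hbar, c = 1.0546e-34, 2.998e8
me, qe = 9.109e-31, 1.602e-19       # electron mass and |e| (SI units)
e = -qe                             # charge taken negative, as in the text
B = 1.0e8                           # field strength in tesla (illustrative)

def E_flat_limit(n, ell, s):
    """Eq. (FinalQuantization) with M = 0, i.e. eta = 3/2 and lambda = 0."""
    eta = 1.5
    braces = (me**2 * c**2 / hbar**2
              - e * B * (ell + s) / hbar
              + (2 * n + 1 + ell) * np.sqrt(e**2 * B**2 / hbar**2))
    return hbar * c * (eta - 0.5) * np.sqrt(braces)

def E_landau(n_eff, s):
    """Usual relativistic Landau levels for a negative charge (e/|e| = -1)."""
    return np.sqrt(me**2 * c**4 + hbar * c**2 * (2 * n_eff + 1 + s) * qe * B)

# The M = 0 limit matches the flat-space levels with n_eff = n + ell:
for n in range(3):
    for ell in range(3):
        for s in (+1, -1):
            assert np.isclose(E_flat_limit(n, ell, s),
                              E_landau(n + ell, s), rtol=1e-9)
```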
We are now going to discuss the ways to overcome the limitations we encountered in this section and what modifications would be brought to the results we obtained so far.
\section{Refinements and Prospective Extensions}\label{sec:IV}
Two limitations we could not avoid when deriving our results in the previous section were (i) the fact that we considered only motion of the particle along the equator inside the massive sphere and (ii) the fact that we ignored the geometric effect that the magnetic field might have on the spacetime. The two other limitations we imposed on our analysis were (iii) the uniform mass density we assumed for the interior of the spherical mass and (iv) the first-order approximation in $\lambda$ we limited ourselves to when writing Eq.\,(\ref{1stOrderFinalfEq}) in order to be able to solve the latter exactly. Our aim in this section is to address these limitations and argue that they do not substantially alter the physical conclusions we would have reached had we not adopted those various approximations.
The first limitation consists of the restricted number of degrees of freedom we imposed on the charged fermion. By restricting the motion of the particle to the equatorial plane inside the massive sphere, we implicitly assumed the particle's motion to be confined to such a plane by the collective interaction of the particles among themselves. However, it is actually possible, for the same reason, to also consider motion along a plane other than the equatorial plane. In fact, the vertical pull of gravity is then balanced by these other interactions, so that only the horizontal variation of the gravitational potential matters. For such a case, however, we need to switch entirely to cylindrical coordinates. The interior Schwarzschild solution needs to be expressed in such coordinates, and we need to set $r^2=\rho^2+z_0^2$ for some vertical distance $z_0$ from the equator in the resulting differential equation, which would be similar in form to Eq.\,(\ref{2-SpinorEq}). As the differential operator $\partial_\theta$ would be replaced by $z_0\partial_\rho$ and the operator $\partial_\theta^2$ would be replaced by $z_0^2\partial_\rho^2-(\rho+z_0)\partial_\rho$, the equation would still be a second-order differential equation in the single variable $\rho$. A first-order approximation in the parameter $\lambda$ might then be adopted. Although such an equation would certainly not be as easy to solve as Eq.\,(\ref{1stOrderFinalfEq}), we expect the energy of the particle to be only shifted by a constant while obeying a quantization condition similar to our results (\ref{QuantizedEpsilon}) and (\ref{FinalQuantization}). A further refinement would then consist of considering the case $\partial_z\Psi\neq0$ and a non-constant $z$. Unfortunately, since two variables would then be involved, no simple procedure can be expected; a separate future work should specifically be devoted to solving that case using cylindrical coordinates.
Indeed, the problem of solving our very general Eq.\,(\ref{2-SpinorEq}) without restricting the latter to the equatorial plane might be very important for other astrophysical applications besides neutron stars. We believe that the problem might very well be tackled by adapting to our case the existing numerical methods that have already proved fruitful elsewhere for solving such equations (see, e.g., the recent work \cite{Numerical} and the references therein).
The second interesting refinement that should be discussed here is the possibility of adding to the metric components, given by Eq.\,(\ref{InteriorSchwar}), the spacetime curvature caused by the magnetic field. The geometric effect of combining the external gravitational field sourced by a spherical mass with a uniform magnetic field $B$ gives rise to the so-called Schwarzschild–Melvin spacetime \cite{Ernst}. Therefore, an analogous modification to the interior Schwarzschild metric components given by Eq.\,(\ref{InteriorSchwar}) is also expected when including the spacetime curvature caused by $B$. However, since the corrections brought to the familiar exterior Schwarzschild solution by the magnetic field are of the order of $\epsilon_0GB^2/c^2$, where $\epsilon_0$ is the vacuum permittivity constant \cite{Galtsov}, we expect the correction that would be brought to the parameter $\lambda$ in our result (\ref{FinalQuantization}) to be of the order of $\epsilon_0GB^2/c^2$ as well. For a neutron star of mass $\sim$$1.4\,M_\odot$ and of radius $10$\,km, the Schwarzschild radius is about $4.1\,$km, for which we find that $\lambda\sim0.004\,$km$^{-2}$. On the other hand, for such a neutron star with an interior magnetic field as strong as $\sim$$10^{17}\,$G, we have $\epsilon_0GB^2/c^2\sim7\times10^{-9}\,$km$^{-2}$. The expected correction is thus quite insignificant.
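These order-of-magnitude estimates can be reproduced with a short numerical check. Here $\lambda$ is taken to scale as $r_s/R^3$ (an assumption consistent with the quoted value $\lambda\sim0.004\,$km$^{-2}$), and the magnetic-curvature correction $\sim7\times10^{-9}\,$km$^{-2}$ is used as quoted in the text:

```python
# Physical constants (SI units)
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M_sun = 1.989e30     # solar mass, kg

# Neutron star of the text: M ~ 1.4 M_sun, R = 10 km
M = 1.4 * M_sun
R = 10e3             # m

# Schwarzschild radius r_s = 2GM/c^2
r_s = 2 * G * M / c**2
print(f"r_s ~ {r_s/1e3:.2f} km")           # about 4.1 km

# Gravitational parameter lambda, assumed here to scale as r_s / R^3
lam = r_s / R**3 * 1e6                     # convert m^-2 -> km^-2
print(f"lambda ~ {lam:.4f} km^-2")         # about 0.004 km^-2

# Magnetic-curvature correction for B ~ 1e17 G, as quoted in the text
correction = 7e-9                          # km^-2
print(f"relative size of the correction: {correction/lam:.1e}")
```

The last ratio is of order $10^{-6}$, confirming that the curvature sourced by the magnetic field is negligible next to the mass term.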
The third limitation arises from having discarded in Eq.\,(\ref{1stOrderFinalfEq}) terms of second order and higher in the parameter $\lambda$. We can now justify such an approximation by the fact that, as we just saw for a typical neutron star, $\lambda\sim4\times10^{-3}\,$km$^{-2}$. This entails that, had we kept those higher-order terms, the resulting correction to the quantized energy levels (\ref{FinalQuantization}) could be obtained using time-independent perturbation theory, in the manner worked out in Refs.\,\cite{GravityLandauI,GravityLandauII} for the case of Landau levels in the exterior Schwarzschild solution.
Another refinement that should be taken into account is the fact that the matter density inside neutron stars is not really uniform, as we assumed in our calculations in the previous section. The interior matter density increases with increasing depth. However, the core of a neutron star makes up the largest part of the star and may be subdivided into outer and inner parts, each characterized by a mass density that varies slowly with depth \cite{NSDensity}. For this reason, although a uniform mass density is only a simplifying model for the interior of a neutron star, we expect that the conclusions that would be derived by assuming a slowly varying $\rho$-dependent mass density would not differ much from those obtained here. Unfortunately, an analysis based on the interior Schwarzschild solution with a $\rho$-dependent mass density is beyond the scope of the present paper and should also be considered in future works. It must be noted here that numerical methods would certainly be beneficial for such a task.
The last two extensions of this work we would like to mention here are the possibility of studying the effect of the interior Schwarzschild solution (i) on charged fermions when the magnetic field is nonuniform and (ii) on the neutron star matter when the latter is taken to be a superfluid \cite{NSSuperfluid}. For the former case, one again needs to use numerical methods (as in Ref.\,\cite{NonUniformB}) to solve the resulting equation, whereas for the latter case, one needs to consider, instead of the Dirac equation, the Gross–Pitaevskii equation coupled to the magnetic field inside the interior Schwarzschild solution.
\section{Applications}\label{sec:V}
Despite the limitations we discussed above, it is still interesting to apply our results to some systems that rely on the Landau energy levels of charged fermions. We first examine charged fermions inside a neutron star, and then charged fermions inside a laboratory-made spherical mass.
\subsection{Inside a Neutron Star}
We propose to use here our results (\ref{QuantizedEpsilon}) and (\ref{FinalQuantization}) to examine the effect of gravity on the magnetization $\mathscr{M}$ inside a neutron star. The magnetization $\mathscr{M}$ is given by the following general formula \cite{Broderick}:
\begin{equation}\label{GeneralM}
\mathscr{M}=\sum_{i=e,\mu,p}\left(\frac{\partial \epsilon_i}{\partial B}-\mu^i_f\frac{\partial n_i}{\partial B}\right),
\end{equation}
where the summation is over the electrons, muons and protons, respectively; $\epsilon_i$ are the energy densities of those charged fermions and $n_i$ are their number densities. The magnetic field $B$ in this formula is the original seed field (magnetizing field) that is thus gradually modified by the magnetization into $H=B+4\pi\mathscr{M}$, in CGS units \cite{LLPTextbook}. A strong magnetization has been suggested by some authors to play a role even in giving rise to the original (primal) magnetic field itself \cite{Dong2013}. However, despite considerable research on the topic, there is not yet any consensus in the literature on the origin of such strong seed magnetic fields, either at the surface or in the core of such compact stars. For a quick guide through the various proposals put forward in the literature, see, e.g., the reviews \cite{ReviewEoS2015,Universe2021}.
Considering the magnetizing field $B$ to be constant in time and uniform over the radius of the star, as we do in this paper, is also only an approximation. However, given that the relaxation time of the star's magnetic field decay is of the order of $\sim$$10^3$ \mbox{years \cite{Universe2018}}, our present assumption of a constant magnetic field fits amply within the early stages of the magnetic evolution of the star. Similarly, assuming a slow variation of the magnetic field with distance inside highly magnetized stars, as presented, for example, in Ref.\,\cite{BDistribution}, is still consistent with our present purpose of providing a preliminary study of the effect of gravity on the magnetic properties of such compact stellar objects.
First, recall that when ignoring the contribution of the gravitational field, the quantities $\epsilon_i$ and $n_i$ are given by \cite{Broderick}
\begin{align}\label{EnergyAndNumberDensity}
\epsilon_i&=\frac{|e|B}{4\pi^2}\sum_s\sum_{n=0}^{n_{max}}\left(\mu^i_fk^i_{f,n,s}+\tilde{m}^{i2}_{n,s}\ln\left|\frac{\mu^i_f+k^i_{f,n,s}}{\tilde{m}^i_{n,s}}\right|\right),\nonumber\\
n_i&=\frac{|e|B}{2\pi^2}\sum_s\sum_{n=0}^{n_{max}}k^i_{f,n,s},
\end{align}
where
\begin{align}\label{mTilde}
\tilde{m}^{i2}_{n,s}&=m^{i2}+\left(2n+1-\frac{e}{|e|}s\right)|e|B,\nonumber\\
k_{f,n,s}^{i2}&=\mu_f^{i2}-\tilde{m}_{n,s}^{i2}.
\end{align}
The Fermi energies $\mu^{i}_f$ are fixed by the chemical potentials, and the maximum integer $n_{max}$ in the summations is the integer preceding the value of $n$ for which $k_{f,n,s}^{i2}$ becomes negative. As the equations become more involved, we will work in this subsection in natural units, $\hbar=c=1$. Formula (\ref{GeneralM}) then gives \cite{Broderick}
\begin{equation}\label{FlatMagnetization}
\mathscr{M}=\sum_{i=e,\mu,p}\left[\frac{\epsilon_i-\mu^i_fn_i}{B}+\frac{B}{2\pi^2}\sum_{s}\sum_{n=0}^{n_{max}}\left(n+\frac{1}{2}-\frac{s}{2}\right)\ln\left|\frac{\mu^i_f+k^i_{f,n,s}}{\tilde{m}^i_{n,s}}\right|\right].
\end{equation}
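As a numerical sanity check, the closed form (\ref{FlatMagnetization}) can be compared against a finite-difference evaluation of the $B$-derivatives in Eq.\,(\ref{GeneralM}) for a single fermion species. The sketch below works in natural units $\hbar=c=1$ and additionally sets $|e|=1$; the values of $m$, $\mu_f$ and $B$ are arbitrary illustrative choices:

```python
import math

def landau_sums(m, mu, B, sgn=1):
    """Energy density, number density and closed-form magnetization for a
    single fermion species, following Eqs. (EnergyAndNumberDensity),
    (mTilde) and (FlatMagnetization), with hbar = c = |e| = 1 and
    charge sign sgn = e/|e|."""
    eps = dens = mag_log = 0.0
    for s in (+1, -1):
        n = 0
        while True:
            mt2 = m*m + (2*n + 1 - sgn*s)*B   # effective mass squared
            k2 = mu*mu - mt2                  # Fermi momentum squared
            if k2 <= 0:                       # n_max reached for this s
                break
            k, mt = math.sqrt(k2), math.sqrt(mt2)
            log_term = math.log((mu + k)/mt)
            eps += mu*k + mt2*log_term
            dens += k
            mag_log += (n + 0.5 - sgn*s/2)*log_term
            n += 1
    eps *= B/(4*math.pi**2)
    dens *= B/(2*math.pi**2)
    mag = (eps - mu*dens)/B + (B/(2*math.pi**2))*mag_log
    return eps, dens, mag

# Compare the closed form against finite differences of Eq. (GeneralM):
# M = d(eps)/dB - mu_f * d(n)/dB at fixed chemical potential.
m, mu, B, h = 1.0, 2.5, 0.3, 1e-5
_, _, M_closed = landau_sums(m, mu, B)
eps_p, n_p, _ = landau_sums(m, mu, B + h)
eps_m, n_m, _ = landau_sums(m, mu, B - h)
M_fd = (eps_p - eps_m)/(2*h) - mu*(n_p - n_m)/(2*h)
print(M_closed, M_fd)   # the two evaluations should agree closely
```

The agreement holds as long as $n_{max}$ does not change between $B-h$ and $B+h$, since the sums are only piecewise smooth in $B$.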
We will adapt these general formulas to our case. Since the formulas to be developed here are more relevant for very strong magnetic fields, we also extract from Eq.\,(\ref{FinalQuantization}) the following approximation for the quantized energy levels:
\begin{align}\label{StrongBApproximation}
E_{n,\ell,s}&=\left(\eta-\frac{1}{2}\right)\Bigg\{m^2\!+\!\left(2n+2\ell+1-\frac{e}{|e|}s\right)|e|B\!-\!\frac{\lambda eBs}{4\hbar}\!+\!\frac{\lambda(2\ell^2+s\ell-2)}{2}\!+\!\frac{\lambda(s\ell+1)}{2\eta-1}\nonumber\\
&\quad+(2n+1+\ell)\Bigg[\lambda (2\ell+s)+(\eta-1)\frac{4\lambda(2n+1-s)}{2\eta-1}+\frac{\lambda s}{2\eta-1}\Bigg]\Bigg\}^{\frac{1}{2}}.
\end{align}
Note that in the absence of gravity, i.e., when $\lambda=0$, this formula reduces to the first identity in Eq.\,(\ref{mTilde}). Let us then start with the consequences of this formula, which is valid (i) for weak gravitational fields and/or (ii) for fermions very close to the center of the star. Since the terms proportional to $\lambda$ in this expression do not contain $B$, it is straightforward to compute the $B$-derivatives in Eq.\,(\ref{GeneralM}). The result for the magnetization thus remains exactly the same as in Eq.\,(\ref{FlatMagnetization}), but with $\tilde{m}^i_{n,s}$ replaced by $E^i_{n,\ell,s}$ as given by Eq.\,(\ref{StrongBApproximation}):
\begin{equation}\label{CurvedMagnetization}
\mathscr{M}=\sum_{i=e,\mu,p}\left[\frac{\epsilon_i-\mu^i_fn_i}{B}+\frac{B(2\eta-1)}{4\pi^2}\sum_{s}\sum_{n=0}^{n_{max}}\left(n+\frac{1}{2}-\frac{s}{2}\right)\ln\left|\frac{\mu^i_f+k^i_{f,n,s}}{E^i_{n,\ell,s}}\right|\right].
\end{equation}
The general-relativistic correction is neatly factored out in the second term of the sum. More important, however, is the fact that the {\it general expression} of the magnetization remains unaltered in this case.
On the other hand, for the case of a {\it non-negligible} gravitational field, the quartic term in Eq.\,(\ref{1stOrderFinalfEq}) should be taken into account, leading to the result (\ref{QEpsilonN0}) for $E_{n,\ell,s}$ at the first order of the JWKB approximation. As the Landau levels are completely destroyed, we see that even at this order in the approximation the magnetization is completely different from the expression (\ref{CurvedMagnetization}). Actually, even Formulas (\ref{EnergyAndNumberDensity}) and (\ref{mTilde}) do not hold anymore, for such formulas were derived by assuming the linear dependence (\ref{mTilde}) of $\tilde{m}_{n,s}^{i2}$ on the integer $n$. As expression (\ref{QEpsilonN0}) of $E_{n,\ell,s}$ in terms of $n$ displays no such linearity, and since that expression (\ref{QEpsilonN0}) is itself only an approximation of the highly nonlinear full \mbox{expression (\ref{QuantizedEpsilon}),} we conclude that the Landau-levels-based formalism for computing the magnetization $\mathscr{M}$ of neutron stars does not generally apply to the charged plasma in the core of highly dense neutron stars. The condition for the applicability of the formalism is to have the quartic anharmonic oscillator potential term in Eq.\,(\ref{Anharmonic}) negligible, i.e., to have $\lambda\ll |e|B/\hbar$, or to apply the formalism only to fermions in the vicinity of the star's center. In other words, it is only for a magnetic neutron star of mass $M$, of radius $R$ and with a magnetic field $B$ such that $|e|BR^3c^2\gg GM\hbar$ that the usual formalism generally applies to the charged matter inside the core of the star.
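The applicability condition is easy to check numerically for the typical numbers used in this paper (a quick order-of-magnitude sketch; constants in SI units):

```python
# Check of the applicability condition |e| B R^3 c^2 >> G M hbar
e    = 1.602e-19        # elementary charge, C
hbar = 1.055e-34        # reduced Planck constant, J s
G    = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c    = 2.998e8          # speed of light, m/s

M = 1.4 * 1.989e30      # stellar mass, kg (1.4 solar masses)
R = 10e3                # stellar radius, m
B = 1e17 * 1e-4         # 10^17 gauss converted to tesla

lhs = e * B * R**3 * c**2
rhs = G * M * hbar
print(f"|e|BR^3c^2 / (GM hbar) ~ {lhs/rhs:.1e}")  # enormous ratio
```

For these values the ratio is of order $10^{36}$, so the usual Landau-levels formalism comfortably applies to typical magnetized neutron stars.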
Magnetization in neutron stars affects both the equation of state of the stars' matter and the stars' structure and properties, such as their masses and transport properties. Indeed, both the pressure anisotropy and the softening of the equation of state of the stars' matter are believed to be related to magnetization and to the distribution of the charged fermions as they occupy the Landau levels. The equation of state determines, in turn, the mass-radius relation of neutron stars (see, e.g., the recent work \cite{EOS2020} and the \mbox{references therein}).
It is believed that the net magnetization in neutron stars remains negligible up to fields of around $10^{16}$\,G, beyond which it starts increasing and becomes oscillatory under the de Haas–van Alphen effect. Since we just saw that the Landau levels become dramatically altered only when the gravitational field is non-negligible compared to the magnetic interaction, we conclude that such oscillations will not be much affected in typical stars. For weaker magnetic fields, however, the general-relativistic correction we found in Eq.\,(\ref{CurvedMagnetization}) for the magnetization might leave an observable signature on the equation of state of the star's matter and, hence, on the mass-radius relation of the star as well. More work dedicated specifically to such an interesting investigation is required, though.
\subsection{At a Laboratory Level}
Another application of our results is to consider physical systems at the laboratory level. In this case, an interesting system would be the free electrons moving inside a massive metallic sphere put inside a uniform magnetic field. For this case, the gravitational contribution is much weaker than the magnetic interaction, so Eq.\,(\ref{FinalQuantization}) is what we need to apply here. Expanding that expression up to the first order in $\lambda$, we obtain
\begin{align}\label{FinalQuantizationWeakG}
E_{n,\ell,s}&\approx\left(\eta-\frac{1}{2}\right)\Bigg\{m^2c^4+\hbar c^2\left(2n\!+\!2\ell\!+\!1\!-\!\frac{e}{|e|}s\right)|e|B-\frac{\hbar c^2\lambda eBs}{4}+\frac{\hbar^2 c^2\lambda(2\ell^2\!+\!s\ell\!-\!2)}{2}\nonumber\\
&\quad+\frac{\lambda\hbar^2c^2(s\ell+1)}{(2\eta-1)}\frac{\sqrt{1+(2n\!+\!1\!-\!s)\frac{\hbar eB}{m^2c^2}}}{1+\sqrt{1+(2n\!+\!1\!-\!s)\frac{\hbar eB}{m^2c^2}}}+(2n+1+\ell)\hbar^2c^2\lambda\Bigg[\frac{2 m^2c^2}{\hbar eB}+2\ell+s\nonumber\\
&\quad+(\eta-1)\frac{4(m^2c^2+[2n+1-s]\hbar eB)}{\hbar eB(2\eta-1)}+\frac{ s}{2\eta-1}\frac{\sqrt{1+(2n+1-s)\frac{\hbar eB}{m^2c^2}}}{1+\sqrt{1+(2n+1-s)\frac{\hbar eB}{m^2c^2}}}\Bigg]\Bigg\}^{\frac{1}{2}}.
\end{align}
These are the energy levels of the electrons inside the massive sphere. The $\lambda$-terms are the corrections brought to the familiar levels observed in the absence of gravity. This expression is valid for both relativistic and non-relativistic fermions, and it reduces to the familiar relativistic Landau levels in the absence of gravity. This result offers novel ways for exploiting the Landau levels of fermions in the presence of gravity at the \mbox{laboratory level.}
\section{Summary}\label{sec:VI}
We have obtained the Dirac equation for a charged fermion in a general static and spherically symmetric curved spacetime in the presence of a static and uniform magnetic field. We applied the general equation we obtained to the case of fermions in the {\it interior} Schwarzschild solution describing the gravitational field inside a massive sphere of uniform density, and we derived the quantized energy levels of the charged particles. We found that our result reduces to the familiar flat-spacetime relativistic Landau levels of charged fermions inside a uniform magnetic field only when the gravitational interaction is weaker than the magnetic interaction and/or when the charged particles move very closely to the center of the spherical mass.
We then discussed, based on these results, the consequences for the physics of neutron stars. We found that the magnetization of the core of the latter would be dramatically altered for an extremely strong gravitational field compared to the magnetic interaction and/or when not focusing solely on the fermions in the vicinity of the center of the star. We arrived at the conclusion that, in the general case, the usual formalism for extracting the magnetization of neutron stars applies only when the latter satisfy the condition $|e|BR^3c^2\gg GM\hbar$. Although we arrived at such a conclusion by relying on a first-order approximation in the gravitational parameter $\lambda$, the generality of our results is not affected. In fact, such a first-order approximation remains valid for any relative strengths of the gravitational and magnetic interactions experienced by the charged particles inside typical neutron stars. Any extra correction to those levels would be obtained simply using time-independent perturbation theory.
In Section \ref{sec:IV}, we outlined a few refinements that might be brought to our model, together with a prospective outlook on future improvements that will allow us to go beyond the limitations we imposed on our present study. However, even with the simple model dealt with here, we have already gained new insights into the contribution of gravity in shaping the magnetization of highly compact neutron stars and into novel ways of exploiting the dynamics of charged fermions in the presence of gravity at the laboratory level.
\section*{Acknowledgments}
The authors are grateful to Patrick Labelle for the helpful discussions and comments, and to the anonymous referees for their constructive comments that helped enrich our presentation. This work was supported by the Natural Sciences and Engineering Research Council of Canada (NSERC) Discovery Grant No. RGPIN-2017-05388 and by the Fonds de Recherche du Québec - Nature et Technologies (FRQNT). PS acknowledges support from Bishop's University via the Graduate Entrance Scholarship award.
\section{Introduction}
\label{sec:intro}
With the advent of deep neural networks (DNNs), we have witnessed a dramatic performance improvement in a variety of computer vision and NLP tasks, across different applications, such as image classification \cite{krizhevsky2012imagenet} or semantic segmentation \cite{chen2017rethinking}. Nevertheless, recent studies \cite{guo2017calibration, mukhoti2020calibrating} have shown that these high-capacity models are poorly calibrated, often resulting in over-confident predictions. As a result, the predicted probability values associated with each class overestimate the actual likelihood of correctness.
Quantifying the predictive uncertainty for modern DNNs has received an increased attention recently, with a variety of alternatives to better calibrate network outputs. A simple strategy consists in including a post-processing step during the test phase to transform the output of a trained network \cite{guo2017calibration, zhang2020mix, Tomani2021Posthoc, Ding2021LocalTemp}, with the parameters of this additional operation determined on a validation set. Despite their simplicity and low computational cost, these methods were shown to be effective when training and testing data are drawn from the same distribution.
However, one of their observed limitations is that the choice of the transformation parameters, such as the temperature in temperature scaling, is highly dependent on the dataset and network.
A more principled alternative is to explicitly maximize the Shannon entropy of the predictions during training
by integrating a term into the learning objective, which penalizes confident output distributions \cite{pereyra2017regularizing}.
Furthermore, recent efforts to quantify the quality of predictive uncertainties have focused on investigating the effect of the entropy of the training labels \cite{xie2016disturblabel,muller2019does,mukhoti2020calibrating}. Findings from these works evidence that popular losses that modify the hard-label assignments, such as label smoothing \cite{szegedy2016rethinking} and focal loss \cite{lin2017focal}, implicitly integrate an entropy-maximization objective and have a favourable effect on model calibration. As shown comprehensively in the recent study in \cite{mukhoti2020calibrating}, these losses, with implicit or explicit maximization of the entropy, represent the state-of-the-art in model calibration.
\vspace{0.5cm}
\noindent \textbf{Contributions} are summarized as follows:
\begin{itemize}
\item We provide a unifying constrained-optimization perspective of current state-of-the-art calibration losses. Specifically, these losses can be viewed as approximations of a linear penalty (or a Lagrangian) imposing equality constraints on logit distances. This points to an important limitation of such underlying hard equality constraints, whose ensuing gradients constantly push towards a non-informative solution, which might prevent the model from reaching the best compromise between discriminative performance and calibration during gradient-based optimization.
\item Following our observations, we propose a simple and flexible generalization based on inequality constraints, which imposes a controllable margin on logit distances.
\item We provide comprehensive experiments and ablation studies over two standard image classification benchmarks (CIFAR-10 and Tiny-ImageNet), one fine-grained image classification dataset (CUB-200-2011), one semantic segmentation dataset (PASCAL VOC 2012) and one NLP dataset (20 Newsgroups), with various network architectures. Our empirical results demonstrate the superiority of our method compared to state-of-the-art calibration losses. Our findings suggest that, for complex datasets, such as those for fine-grained image classification, our margin-based method yields substantial improvements in terms of calibration.
\end{itemize}
\section{Related work}
\label{sec:related}
\noindent \textbf{Post-processing approaches.} A straightforward yet efficient strategy to mitigate mis-calibrated predictions is to include a post-processing step, which transforms the probability predictions of a deep network \cite{guo2017calibration, zhang2020mix, Tomani2021Posthoc, Ding2021LocalTemp}. Among these methods, \textit{temperature scaling} \cite{guo2017calibration}, a variant of Platt scaling \cite{platt1999probabilistic}, employs a single scalar parameter over all the pre-softmax activations, which results in softened class predictions.
Despite its good performance on in-domain samples, \cite{ovadia2019can} demonstrated that temperature scaling does not work well under data distributional shift. \cite{Tomani2021Posthoc} mitigated this limitation by transforming the validation set before performing the post-hoc calibration step. In \cite{Ma2021postrank}, a ranking model was introduced to improve the post-processing model calibration, whereas \cite{Ding2021LocalTemp} used a simple regression model to predict the temperature parameter during the inference phase.
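To make the post-hoc step concrete, the following minimal sketch fits a single temperature $T$ by minimizing the negative log-likelihood on held-out predictions. A grid search stands in for the usual gradient-based fit, and the tiny "validation set" is a hand-picked illustration, not data from the cited works:

```python
import math

def softmax(logits, T=1.0):
    z = [l / T for l in logits]
    mx = max(z)                              # subtract max for stability
    exps = [math.exp(v - mx) for v in z]
    s = sum(exps)
    return [e / s for e in exps]

def nll(logit_set, labels, T):
    # Negative log-likelihood of the temperature-scaled predictions
    return -sum(math.log(softmax(l, T)[y]) for l, y in zip(logit_set, labels))

def fit_temperature(logit_set, labels):
    # Grid search over T; in practice one would use a gradient-based
    # optimizer on a real validation set.
    grid = [0.25 + 0.005 * i for i in range(2000)]
    return min(grid, key=lambda T: nll(logit_set, labels, T))

# Toy "validation set": the model is right on 2 of 3 samples but ~96% confident.
logits = [[4.0, 0.0, 0.0]] * 3
labels = [0, 0, 1]
T = fit_temperature(logits, labels)
print(f"fitted T = {T:.2f}")   # T > 1: the over-confident predictions are softened
```

For this toy case the optimum can be found in closed form (the scaled confidence should match the 2/3 accuracy), giving $T = 4/\ln 4 \approx 2.89$, which the grid search recovers.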
\noindent \textbf{Probabilistic and non-probabilistic methods.} Several probabilistic and non-probabilistic approaches have been also investigated to measure the uncertainty of the predictions in deep neural networks. For example, Bayesian neural networks have been used to approximate inference by learning a posterior distribution over the network parameters, as obtaining the exact Bayesian inference is computationally intractable in deep networks. These Bayesian-based models include variational inference \cite{blundell2015weight,louizos2016structured}, stochastic expectation propagation \cite{hernandez2015probabilistic} or dropout variational inference \cite{gal2016dropout}. Ensemble learning is a popular non-parametric alternative, where the empirical variance of the network predictions is used as an approximate measure of uncertainty. This yields improved discriminative performance, as well as meaningful predictive uncertainty with reduced miscalibration.
Common strategies to generate ensembles include differences in model hyperparameters \cite{wenzel2020hyperparameter}, random initialization of the network parameters and random shuffling of the data points \cite{lakshminarayanan2016simple}, Monte-Carlo Dropout \cite{gal2016dropout,zhang2019confidence}, dataset shift \cite{ovadia2019can} or model orthogonality constraints \cite{larrazabal2021orthogonal}. However, a main drawback of this strategy stems from its high computational cost, particularly for complex models and large datasets.
\noindent \textbf{Explicit and implicit penalties.} Modern classification networks trained under the fully supervised learning paradigm resort to training labels provided as binary one-hot encoded vectors. Therefore, all the probability mass is assigned to a single class, resulting in minimum-entropy supervisory signals (i.e., entropy equal to zero). As the network is trained to follow this distribution, we are implicitly forcing it to be overconfident (i.e., to achieve a minimum entropy), thereby penalizing uncertainty in the predictions. While temperature scaling artificially increases the entropy of the predictions, \cite{pereyra2017regularizing} included into the learning objective a term to penalize confident output distributions by explicitly maximizing the entropy. In contrast to tackling overconfidence directly on the predicted probability distributions, recent works have investigated the effect of the entropy on the training labels. The authors of \cite{xie2016disturblabel} explored adding label noise as a regularization, where the disturbed label vector was generated by following a generalized Bernoulli distribution. Label smoothing \cite{szegedy2016rethinking}, which successfully improves the accuracy of deep learning models, has been shown to implicitly calibrate the learned models, as it prevents the network from assigning the full probability mass to a single class, while maintaining a reasonable distance between the logits of the ground-truth class and the other classes \cite{pereyra2017regularizing,muller2019does}.
More recently, \cite{mukhoti2020calibrating} demonstrated that focal loss \cite{lin2017focal} implicitly minimizes a Kullback-Leibler (KL) divergence between the uniform distribution and the softmax predictions, thereby increasing the entropy of the predictions. Indeed, as shown in \cite{muller2019does,mukhoti2020calibrating}, both label smoothing and focal loss implicitly regularize the network output probabilities, encouraging their distribution to be close to the uniform distribution. To our knowledge, and as demonstrated experimentally in the recent studies in \cite{muller2019does,mukhoti2020calibrating}, loss functions that embed implicit or explicit maximization of the entropy of the predictions yield state-of-the-art calibration performances.
\section{Preliminaries}
\label{sec:form}
Let us denote the training dataset as $\mathcal{D}(\mathcal{X}, \mathcal{Y})=\{(\mathbf{x}^{(i)}, \mathbf{y}^{(i)})\}_{i=1}^N$, where $\mathbf{x}^{(i)} \in \mathcal{X} \subset \mathbb{R}^{\Omega_i}$ represents the $i^{th}$ image, $\Omega_i$ the spatial image domain, and $\mathbf{y}^{(i)} \in \mathcal{Y} \subset \mathbb{R}^K$ its corresponding ground-truth label with $K$ classes, provided as a one-hot encoding.
Given an input image $\mathbf{x}^{(i)}$, a neural network parameterized by $\theta$ generates a logit vector, defined as $f_{\theta}(\mathbf{x}^{(i)})=\mathbf{l}^{(i)} \in \mathbb{R}^K $. To simplify the notations, we omit sample indices, as this does not lead to ambiguity, and just use $\mathbf{l} = (l_k)_{1 \leq k \leq K} \in \mathbb{R}^K$ to denote logit vectors. Note that the logits are the inputs of the softmax probability predictions of the network, which are computed
as:
\[\mathbf{s} = (s_k)_{1 \leq k \leq K} \in \mathbb{R}^K; \quad s_{k} = \frac{e^{l_k}}{\sum_{j=1}^{K} e^{l_j}}\]
The predicted class is computed as $\hat{y} = \argmax_k s_k$, whereas the predicted confidence is given by $\hat{p} = \max_{k} s_k$.
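In code, these preliminaries read as follows (a minimal sketch with an arbitrary example logit vector):

```python
import math

def softmax(logits):
    mx = max(logits)                         # stabilizer: avoids overflow
    exps = [math.exp(l - mx) for l in logits]
    z = sum(exps)
    return [e / z for e in exps]

l = [2.0, 0.5, -1.0]                         # logit vector, K = 3
s = softmax(l)                               # softmax prediction, sums to 1
y_hat = max(range(len(s)), key=lambda k: s[k])   # predicted class (argmax)
p_hat = max(s)                                   # predicted confidence
print(y_hat, round(p_hat, 3))
```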
\noindent \textbf{Calibrated models.}
\textit{Perfectly calibrated} models are those for which the predicted confidence for each sample is equal to the model accuracy: $\hat{p} = \mathbb{P}(\hat{y} = y | \hat{p})$, where $y$ denotes the true label.
Therefore, an \textit{over-confident model} tends to assign predicted confidences that are larger than its accuracy, whereas an \textit{under-confident model} displays lower confidence than the model's accuracy.
\noindent \textbf{Miscalibration of DNNs.} The normal training objective of fully supervised discriminative deep models involves minimizing the negative log-likelihood of the correct answer, which is widely known as the cross-entropy (CE) loss.
The latter reaches its minimum when the predictions for all the training samples match the hard (binary) ground-truth labels, i.e., $s_k = 1$ when $k$ is the ground-truth class of the sample and $s_k = 0$ otherwise.
Minimizing the CE implicitly pushes softmax vectors $\mathbf{s}$ towards the vertices of the simplex, thereby magnifying the distances between the largest logit $\max_k(l_k)$ and the rest of the logits,
yielding over-confident predictions and miscalibrated models.
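This mechanism is easy to see numerically: the gradient of the CE loss with respect to the logits is $\mathbf{s} - \mathbf{y}$, so gradient descent keeps enlarging the winner-versus-rest logit gap even after the sample is correctly classified (a toy illustration with hand-picked values):

```python
import math

def softmax(l):
    mx = max(l)
    e = [math.exp(v - mx) for v in l]
    z = sum(e)
    return [v / z for v in e]

K, lr, steps = 3, 0.5, 200
l = [1.0, 0.0, 0.0]        # already correctly classified (class 0)
y = [1.0, 0.0, 0.0]        # one-hot ground truth
for _ in range(steps):
    s = softmax(l)
    # dCE/dl = s - y: the winner logit keeps rising, the rest keep falling
    l = [li - lr * (si - yi) for li, si, yi in zip(l, s, y)]

s = softmax(l)
gap = max(l) - min(l)      # logit distance keeps growing without bound
print(f"confidence = {max(s):.4f}, logit gap = {gap:.2f}")
```

The confidence approaches 1 and the logit gap grows steadily, even though the prediction was correct from the first step.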
\section{A constrained-optimization perspective of calibration}
\label{sec:view}
In this section, we present a novel constrained-optimization perspective of current calibration methods for deep networks, showing that the existing
strategies, including Label Smoothing (LS) \cite{muller2019does,szegedy2016rethinking}, Focal Loss (FL) \cite{mukhoti2020calibrating,lin2017focal} and Explicit Confidence Penalty (ECP) \cite{pereyra2017regularizing}, impose {\em equality} constraints on logit distances. Specifically, they embed either explicit or implicit penalty functions, which push all the logit distances to zero.
\subsection{Definition of logit distances}
Let us first define the vector of logit distances between the winner class and the rest as:
\begin{align}\label{eq:logit distance}
\mathbf{d} (\mathbf{l}) = (\max_j (l_j) - l_k)_{1 \leq k \leq K} \in \mathbb{R}^{K}
\end{align}
Note that each element in $\mathbf{d}(\mathbf{l})$ is non-negative.
In the following, we show that LS, FL and ECP correspond to different {\em soft penalty} functions for imposing the same hard equality constraint $\mathbf{d} (\mathbf{l}) = \mathbf 0$ or, equivalently, the inequality constraint $\mathbf{d} (\mathbf{l}) \leq \mathbf 0$ (as $\mathbf{d} (\mathbf{l})$ is non-negative by definition).
Clearly, enforcing this equality constraint in a hard manner would result in all $K$ logits being equal for a given sample, which corresponds to non-informative softmax predictions $s_k = \frac{1}{K} \, \forall k$.
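A small sketch of Eq.~\ref{eq:logit distance} and of this non-informative limit (the example logits are arbitrary):

```python
import math

def logit_distances(l):
    # d(l) = (max_j l_j - l_k)_k : elementwise non-negative by construction
    mx = max(l)
    return [mx - lk for lk in l]

def softmax(l):
    mx = max(l)
    e = [math.exp(v - mx) for v in l]
    z = sum(e)
    return [v / z for v in e]

l = [3.0, 1.0, -2.0]
d = logit_distances(l)
print(d)                       # [0.0, 2.0, 5.0]; all entries >= 0

# Enforcing d(l) = 0 in a hard manner means all logits are equal,
# which yields the uniform (non-informative) prediction s_k = 1/K.
uniform = softmax([0.7, 0.7, 0.7])
print(uniform)                 # every entry equals 1/3
```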
\subsection{Penalty functions in constrained optimization}
In the general context of constrained optimization \cite{Bertsekas95}, {\em soft} penalty functions are widely used to tackle {\em hard} equality or inequality constraints. For the discussion in the sequel, consider specifically the following hard equality constraint:
\begin{equation}
\label{equality-constraint-label-smoothing}
\mathbf{d} (\mathbf{l}) = {\mathbf 0}
\end{equation}
The general principle of a soft-penalty optimizer is to relax a hard constraint of the form in Eq.~\ref{equality-constraint-label-smoothing} by adding a penalty term $\mathcal{P}(\mathbf{d} (\mathbf{l}))$ to the main objective function to be minimized. The soft penalty $\mathcal{P}$ should be a continuous and differentiable function that reaches its global minimum when the constraint is satisfied, i.e., it verifies: $\mathcal{P}(\mathbf {0}) \leq \mathcal{P}(\mathbf{d} (\mathbf{l})) \, \forall \, \mathbf{l} \in \mathbb{R}^{K}$.
Thus, when the constraint is violated, i.e., when $\mathbf{d} (\mathbf{l})$ deviates from $\mathbf {0}$, the penalty term $\mathcal{P}$ increases.
\noindent \textbf{Label smoothing.} In addition to improving the discriminative performance of deep neural networks, recent evidence \cite{lukasik2020does,muller2019does} suggests that Label Smoothing (LS) \cite{szegedy2016rethinking} positively impacts model calibration. In particular, LS modifies the hard target labels with a smoothing parameter $\alpha$, so that the original one-hot training labels $\mathbf{y} \in \{0, 1\}^K$ become $\mathbf{y}^\text{LS} = (y_{k}^\text{LS})_{1 \leq k \leq K}$, with $y_{k}^\text{LS}=y_{k}(1-\alpha)+\frac{\alpha}{K}$. Then, we simply minimize the cross-entropy between the modified labels and the network outputs:
\begin{align}
\label{eq:ls}
{\cal L}_\text{LS} = -\sum_{k} y_{k}^\text{LS} \log s_{k} = -\sum_{k} ((1-\alpha)y_k + \frac{\alpha}{K}) \log s_{k}
\end{align}
where $\alpha \in [0,1]$ is the smoothing hyper-parameter.
It is straightforward to verify that the cross-entropy with label smoothing in Eq.~\ref{eq:ls} can be decomposed into a
standard cross-entropy term augmented with a Kullback-Leibler (KL) divergence between the uniform distribution ${\mathbf u}$ (whose components are $u_k = \frac{1}{K}$) and the softmax prediction:
\begin{align}
\label{eq:ls-kl}
{\cal L}_\text{LS} \stackrel{\mathclap{\normalfont\mbox{c}}}{=} {\cal L}_\text{CE} + \frac{\alpha}{1-\alpha}{\cal D}_\text{KL}\left({\mathbf u} || \mathbf{s} \right )
\end{align}
where $\stackrel{\mathclap{\normalfont\mbox{c}}}{=}$ stands for equality up to additive and/or non-negative multiplicative constants. Now, consider the following bounding relationships between a linear penalty (or a Lagrangian) for equality constraint $\mathbf{d} (\mathbf{l}) = {\mathbf 0}$ and the KL divergence in Eq.~\ref{eq:ls-kl}.
\begin{proposition}
\label{prop:ls}
A linear penalty (or a Lagrangian) for constraint $\mathbf{d} (\mathbf{l}) = {\mathbf 0}$ is bounded from above and below
by ${\cal D}_\text{KL}\left({\mathbf u} || \mathbf{s} \right )$, up to additive constants:
\begin{align}
{\cal D}_\text{KL}\left({\mathbf u} || \mathbf{s} \right ) - \log(K) \stackrel{\mathclap{\normalfont\mbox{c}}}{\leq} \frac{1}{K}\sum_k (\max_j (l_j) - l_k) \stackrel{\mathclap{\normalfont\mbox{c}}}{\leq} {\cal D}_\text{KL}\left({\mathbf u} || \mathbf{s} \right ) \nonumber
\end{align}
where $\stackrel{\mathclap{\normalfont\mbox{c}}}{\leq}$ stands for inequality up to an additive constant.
\end{proposition}
These bounding relationships can be obtained directly from the softmax and ${\cal D}_\text{KL}$
expressions, along with the following well-known property of the LogSumExp function: $\max_k(l_k) \leq \log \sum_{k=1}^{K} e^{l_k} \leq \max_k(l_k) + \log(K)$.
Prop. \ref{prop:ls} means that LS is (approximately) optimizing a linear penalty (or a Lagrangian) for logit-distance constraint $\mathbf{d}(\mathbf{l}) = \mathbf{0}$, which encourages equality of all logits; see the illustration in Figure~\ref{fig:method}, top-left.
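Both the decomposition in Eq.~\ref{eq:ls-kl} and the bounds of Prop.~\ref{prop:ls} can be verified numerically; the following NumPy sketch (our own, with the constants hidden by $\stackrel{c}{=}$ and $\stackrel{c}{\leq}$ made explicit) checks them on random logits:

```python
import numpy as np

rng = np.random.default_rng(0)
K, alpha = 5, 0.1
for _ in range(100):
    l = rng.normal(scale=3.0, size=K)
    lse = np.log(np.sum(np.exp(l)))               # LogSumExp
    s = np.exp(l) / np.exp(l).sum()               # softmax prediction
    u = np.full(K, 1.0 / K)                       # uniform distribution
    y = np.eye(K)[2]                              # a one-hot label

    # Eq. (ls-kl): L_LS = (1-a) * [L_CE + a/(1-a) * KL(u||s)] + a*log(K)
    ce = -np.sum(y * np.log(s))
    kl = np.sum(u * np.log(u / s))                # D_KL(u || s)
    ls = -np.sum(((1 - alpha) * y + alpha / K) * np.log(s))
    assert np.isclose(ls, (1 - alpha) * (ce + alpha / (1 - alpha) * kl)
                      + alpha * np.log(K))

    # LogSumExp property: max <= LSE <= max + log(K)
    assert np.max(l) - 1e-12 <= lse <= np.max(l) + np.log(K) + 1e-12

    # Prop. 1: the mean logit distance lies within log(K) of D_KL(u||s)
    mean_d = np.max(l) - np.mean(l)               # (1/K) sum_k d(l)_k
    assert kl - 1e-9 <= mean_d <= kl + np.log(K) + 1e-9
```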
\noindent \textbf{Focal loss.} Another popular alternative for calibration is focal loss (FL) \cite{lin2017focal}, which attempts to alleviate the over-fitting issue of CE by directing the training attention towards samples with low confidence in each mini-batch. More concretely, the authors
proposed to multiply the CE by a modulating factor, $(1-s_k)^{\gamma}$, which controls the trade-off between easy and hard examples. Very recently, \cite{mukhoti2020calibrating} demonstrated that focal loss is, in fact, an upper bound on CE augmented with a term that implicitly serves as a maximum-entropy regularizer:
\begin{align}
{\cal L}_\text{FL} = -\sum_{k} (1 - s_{k})^{\gamma} y_{k} \log s_{k} \geq \mathcal{L}_{\text{CE}} - \gamma\mathcal{H}(\mathbf{s})
\label{eq:fl}
\end{align}
where $\gamma$ is a hyper-parameter and $\mathcal{H}$ denotes the Shannon entropy of the softmax prediction, given by
\[\mathcal{H}(\mathbf{s}) = -\sum_k s_k \log(s_k)\]
In this connection, FL is closely related to ECP \cite{pereyra2017regularizing}, which explicitly added the negative entropy term, $-\mathcal{H}(\mathbf{s})$, to the training objective. It is worth noting that minimizing the negative entropy of the prediction is equivalent to minimizing the KL divergence between the prediction and the uniform distribution, up to an additive constant, i.e.,
\[\mathcal{H}(\mathbf{s}) \stackrel{\mathclap{\normalfont\mbox{c}}}{=} {\cal D}_\text{KL}(\mathbf{s} || \mathbf{u})\]
which is a reversed form of the KL term in Eq.~\ref{eq:ls-kl}.
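Both the lower bound in Eq.~\ref{eq:fl} and the identity $\mathcal{H}(\mathbf{s}) \stackrel{c}{=} {\cal D}_\text{KL}(\mathbf{s} || \mathbf{u})$ (whose additive constant is $\log K$) can be checked numerically on random softmax vectors; a NumPy sketch of ours:

```python
import numpy as np

rng = np.random.default_rng(2)
K = 5
for _ in range(1000):
    l = rng.normal(scale=3.0, size=K)
    s = np.exp(l - np.max(l))
    s /= s.sum()                                  # softmax prediction
    c = int(rng.integers(K))                      # ground-truth class
    ce = -np.log(s[c])                            # cross-entropy (one-hot)
    h = -np.sum(s * np.log(s))                    # Shannon entropy H(s)

    # Identity: D_KL(s || u) = log(K) - H(s)
    kl_su = np.sum(s * np.log(s * K))
    assert np.isclose(kl_su, np.log(K) - h)

    # Eq. (fl): focal loss upper-bounds CE minus gamma * H(s)
    for gamma in (1.0, 2.0, 3.0):
        fl = -(1.0 - s[c]) ** gamma * np.log(s[c])
        assert fl >= ce - gamma * h - 1e-9
```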
Therefore, all in all, and following Prop.~\ref{prop:ls} and the discussions above, LS, FL and ECP could all be viewed as different penalty functions for imposing the same logit-distance equality constraint $\mathbf{d}(\mathbf{l}) = \mathbf{0}$. This motivates our margin-based generalization of logit-distance constraints, which we introduce in the following section, along with discussions of its desirable properties (e.g., gradient dynamics) for calibrating neural networks.
\begin{figure}[h]
\centering
\includegraphics[width=1\columnwidth]{figures/margin-illustration.pdf}
\caption{Illustration of the linear (left) and margin-based (right) penalties for imposing
logit-distance constraints, along with the corresponding derivatives.}
\label{fig:method}
\end{figure}
\subsection{Margin-based Label Smoothing (MbLS)}
\label{sec:our}
Our previous analysis shows that LS, FL and ECP are closely related from a constrained-optimization perspective: they could be seen as approximations of a linear penalty for imposing the constraint $\mathbf{d} (\mathbf{l}) = {\mathbf 0}$, pushing all logit distances to zero; see Figure~\ref{fig:method}, top-left. Clearly, enforcing this constraint in a hard way yields a non-informative solution where all the classes have exactly the same logit and, hence, the same class prediction: $s_k = \frac{1}{K}\, \forall k$. While this trivial solution is not reached in practice when using soft penalties (as in LS, FL and ECP) jointly with CE, we argue that the underlying equality constraint $\mathbf{d} (\mathbf{l}) = {\mathbf 0}$ has an important limitation, which might prevent the model from reaching the best compromise between discriminative performance and calibration during gradient-based optimization. Figure \ref{fig:method}, left, illustrates this: with the linear penalty for constraint $\mathbf{d} (\mathbf{l}) = {\mathbf 0}$ (top-left of the Figure), the
derivative with respect to logit distances is a strictly positive constant (bottom-left), yielding during training {\em a gradient term that constantly pushes towards the trivial, non-informative solution} $\mathbf{d} (\mathbf{l}) = {\mathbf 0}$ (or equivalently $s_k = \frac{1}{K}\, \forall k$). To alleviate this issue, we propose to replace
equality constraint $\mathbf{d} (\mathbf{l}) = {\mathbf 0}$ with the more general inequality constraint $\mathbf{d} (\mathbf{l}) \leq {\mathbf m} $, where ${\mathbf m}$ denotes the $K$-dimensional vector with all elements equal to $m > 0$.
Therefore, we include a margin $m$ into the penalty, so that the logit distances in $\mathbf{d}(\mathbf{l})$ are allowed to
be below $m$ when optimizing the main learning objective:
\begin{align}
\label{eq:our-constraint}
\min \quad \mathcal{L}_{\text{CE}} \quad \text{s.t.} \quad \mathbf{d}(\mathbf{l}) \leq \textbf{m} , \quad \textbf{m} > \textbf{0}
\end{align}
The intuition behind adding a strictly positive margin $m$ is that, unlike the linear penalty for constraint $\mathbf{d} (\mathbf{l}) = {\mathbf 0}$ (Figure \ref{fig:method}, left), the gradient is back-propagated only on those logits whose distance is above the margin (Figure \ref{fig:method}, right). This contrasts with the
linear penalty, for which there is always a gradient, with the same value across all the logits, regardless of their distance.
Even though the constrained problem in Eq.~\ref{eq:our-constraint} could be solved by a Lagrangian-multiplier algorithm, we resort to a simpler unconstrained approximation using the ReLU function:
\begin{align}
\label{eq:our-l1}
\min \quad & \mathcal{L}_{\text{CE}} + \lambda \sum_k \max(0, \max_j (l_j) - l_k - m)
\end{align}
Here, the non-linear ReLU penalty for inequality constraint $\mathbf{d} (\mathbf{l}) \leq {\mathbf m}$ discourages logit distances from surpassing a given margin $m$, and $\lambda$ is a trade-off weight balancing the two terms.
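For concreteness, the unconstrained objective of Eq.~\ref{eq:our-l1} for a single sample can be sketched as follows (a NumPy sketch of ours; in practice one would implement it in an automatic-differentiation framework and average over the mini-batch):

```python
import numpy as np

def mbls_loss(l, c, m=10.0, lam=0.1):
    """Cross-entropy plus the ReLU margin penalty of Eq. (our-l1).

    l: logit vector; c: ground-truth class index;
    m: margin; lam: trade-off weight lambda.
    """
    s = np.exp(l - np.max(l))         # numerically stable softmax
    s /= s.sum()
    ce = -np.log(s[c])
    penalty = np.sum(np.maximum(0.0, np.max(l) - l - m))
    return ce + lam * penalty

l = np.array([12.0, 1.0, 3.0, 0.5])   # logit distances: (0, 11, 9, 11.5)
# With m=10, two classes violate the margin -> penalty = 0.1 * (1 + 1.5);
# with m=20, no distance exceeds the margin and only the CE term remains.
```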
Clearly, and as discussed in Sec.~\ref{sec:view}, several competitive calibration methods could be viewed as approximations for imposing constraint $\mathbf{d} (\mathbf{l}) = {\mathbf 0}$ and, therefore, correspond to the special case of our method when setting the margin to $m=0$. Our comprehensive experiments in the next section demonstrate clearly the benefits of introducing a strictly positive margin $m$.
Note that our model in Eq.~\ref{eq:our-l1} has two hyper-parameters, $m$ and $\lambda$. We fixed $\lambda$ to $0.1$ in all our experiments on a variety of problems and benchmarks, and tuned only the margin $m$ over validation sets. In this way, when comparing with the existing calibration solutions, we use the same budget of hyper-parameter optimization ($m$ in our method vs. $\alpha$ in LS or $\gamma$ in FL).
\begin{table*}[h!]
\caption{Calibration performance for different approaches on two popular image classification benchmarks.
Two models are used on each dataset: ResNet-50 (R-50) and ResNet-101 (R-101).
Best method is highlighted in bold, whereas the second best method is underlined.
}
\label{table:big}
\centering
\footnotesize
\resizebox{0.9\textwidth}{!}
{
\setlength{\tabcolsep}{2.0pt}
\begin{tabular}{@{}llllccccccccccccccccccccc@{}}
\toprule
\multirow{2}{*}{\textbf{Dataset}} && \multirow{2}{*}{\textbf{Model}} && \multicolumn{2}{c}{\textbf{CE}} &&
\multicolumn{2}{c}{\textbf{ECP}\cite{pereyra2017regularizing}} &&
\multicolumn{2}{c}{\textbf{LS} \cite{szegedy2016rethinking}} && \multicolumn{2}{c}{\textbf{FL} \cite{lin2017focal}} &&
\multicolumn{2}{c}{\textbf{FLSD}\cite{mukhoti2020calibrating}} &&
\multicolumn{2}{c}{\textbf{Ours (m=0)}} && \multicolumn{2}{c}{\textbf{Ours}} \\
\cmidrule{5-6} \cmidrule{8-9} \cmidrule{11-12}
\cmidrule{14-15} \cmidrule{17-18} \cmidrule{20-21} \cmidrule{23-24}
&& && ECE & AECE && ECE & AECE && ECE & AECE && ECE & AECE && ECE & AECE && ECE & AECE && ECE & AECE \\
\midrule
\multirow{2}{*}{Tiny-ImageNet} && R-50 && 3.73 & 3.69 && 4.00 & 3.92 && 3.17 & 3.16 && 2.96 & 3.12 && 2.91 & 2.95 && \underline{2.50} & \underline{2.58} && \textbf{1.64} & \textbf{1.73} \\
&& R-101 && 4.97 & 4.97 && 4.68 & 4.66 && 2.20 & 2.21 && 2.55 & 2.44 && 4.91 & 4.91 && \underline{1.89} & \underline{1.95} && \textbf{1.62} & \textbf{1.68} \\
\midrule
\multirow{2}{*}{CIFAR-10} && R-50 && 5.85 & 5.84 && 3.01 & \textbf{2.99} && \underline{2.79} & 3.85 && 3.90 & 3.86 && 3.84 & 3.60 && 3.72 & 4.29 && \textbf{1.16} & \underline{3.18} \\
&& R-101 && 5.74 & 5.73 && 5.41 & 5.40 && 3.56 & 4.68 && 4.60 & 4.58 && 4.58 & 4.57 && \underline{3.07} & \underline{3.97} && \textbf{1.38} & \textbf{3.25} \\
\bottomrule
\end{tabular}
}
\end{table*}
\section{Experiments}
\label{sec:exp}
\noindent \textbf{Datasets.} Our method is validated on a variety of popular image classification benchmarks, including two standard datasets, \textbf{CIFAR-10} \cite{krizhevsky2009learning}
and \textbf{Tiny-ImageNet} \cite{deng2009imagenet}, and one fine-grained dataset, \textbf{CUB-200-2011} \cite{WahCUB_200_2011}. A main difference between these tasks is that fine-grained visual categorization focuses on differentiating between \textit{hard-to-distinguish} object classes, typically from subcategories, such as species of birds or flowers, whereas conventional datasets contain more general categories, e.g., \textit{is this a dog or a car?} To show the general applicability of our method, we also evaluate it on one well-known segmentation benchmark, \textbf{PASCAL VOC 2012} \cite{VOC2015}.
Last, we conduct experiments on the \textbf{20 Newsgroups} dataset \cite{lang1995newsweeder}, a popular Natural Language Processing (NLP) benchmark on text classification.
\noindent \textbf{Architectures.} For image classification tasks, the state-of-the-art architecture ResNet \cite{he2016deep} is employed, whereas DeepLabV3 \cite{chen2017rethinking} is used for semantic segmentation. Regarding the NLP recognition task, we train a Global Pooling CNN (GPool-CNN) architecture \cite{lin2013network}, following \cite{mukhoti2020calibrating}. For a fair comparison, we employ the same settings across all the benchmarks and models. Please refer to the supplementary material for a detailed description of each dataset and training settings.
\noindent \textbf{Metrics.} To evaluate the calibration performance, we resort to the standard metric in the literature \cite{mukhoti2020calibrating}: the expected calibration error (ECE) \cite{naeini2015ece}. This metric represents the expected absolute difference between the predicted confidence and the model accuracy: $\mathbb{E}_{\hat{p}} \left[ \left| \mathbb{P}(\hat{\mathbf{y}} = \mathbf{y} \mid \hat{p}) - \hat{p} \right| \right]$.
In practice, ECE is approximated from a finite number of samples. Specifically, we group the predictions into $M$ equispaced bins. Let $B_{m}$ denote the set of samples whose predicted confidence falls into the $m^{th}$ bin, i.e., the interval $(\frac{m-1}{M}, \frac{m}{M}]$.
Then, the accuracy of $B_{m}$ is: $A_{m} = \frac{1}{|B_m|} \sum_{i \in B_m} \mathbbm{1}(\hat{\mathbf{y}}_i = \mathbf{y}_i )$, where $\mathbbm{1}$ is the indicator function.
Similarly, the mean confidence of $B_m$ is defined as the average confidence of all samples in the bin : $C_{m} = \frac{1}{|B_m|} \sum_{i \in B_m} \hat{p}_i$.
Then, ECE can be approximated as a weighted average of the absolute difference between the accuracy and confidence of each bin:
\vspace{-3mm}
\begin{align}
\label{eq:ece}
\mathrm{ECE} = \sum_{m=1}^{M} \frac{|B_m|}{N} |A_m - C_m|
\end{align}
In our implementation, the number of bins is set to $M=15$, and $N$ denotes the total number of samples. We also consider Adaptive ECE (AECE), for which the bin sizes are computed so as to evenly distribute the samples across bins.
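A minimal implementation of the binned estimator in Eq.~\ref{eq:ece} (a NumPy sketch of ours; bin-edge conventions vary slightly across implementations):

```python
import numpy as np

def expected_calibration_error(conf, correct, n_bins=15):
    """Eq. (ece): weighted average of |A_m - C_m| over M equispaced bins.

    conf: predicted confidences in (0, 1]; correct: 0/1 array of hits.
    A confidence of exactly 0 would fall outside all (lo, hi] bins,
    which is harmless since softmax confidences are strictly positive.
    """
    conf = np.asarray(conf)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece, n = 0.0, len(conf)
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (conf > lo) & (conf <= hi)
        if in_bin.any():
            acc = correct[in_bin].mean()       # A_m: bin accuracy
            avg_conf = conf[in_bin].mean()     # C_m: bin mean confidence
            ece += in_bin.sum() / n * abs(acc - avg_conf)
    return ece
```

For instance, ten predictions all made with confidence $0.9$ but only half of them correct yield an ECE of $|0.5 - 0.9| = 0.4$.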
In addition, we show the widely employed \textit{reliability diagrams} \cite{mizil2015predictinggood}, which plot the accuracy as a function of the confidence. A perfectly calibrated model has a reliability diagram that approximates a diagonal line, since the accuracy of each bin ideally matches the corresponding confidence. In contrast, the curve mostly lies above the diagonal for an under-confident model, while an over-confident model would show a curve mostly below the diagonal. To measure the discriminative performance of classification models, we provide the accuracy (Acc) on the testing set. Finally, the mean intersection over union (mIoU) is employed to measure the segmentation performance.
\noindent \textbf{Baselines.} In addition to cross-entropy (CE), we evaluate the performance of relevant works, including label smoothing (LS) \cite{szegedy2016rethinking}, focal loss (FL) \cite{lin2017focal} and the explicit confidence penalty (ECP) in \cite{pereyra2017regularizing}. In addition, we also include results from the recent adaptive, sample-dependent focal loss (FLSD) in \cite{mukhoti2020calibrating}, which provided highly competitive calibration performances and advocated the use of FL for calibration\footnote{Initially designed for object detection, FL had not been used for calibration before the recent study in \cite{mukhoti2020calibrating}.}. To set the hyper-parameters of the different methods, we employed the values reported
in the recent literature \cite{muller2019does,mukhoti2020calibrating}. More concretely, the smoothing factor $\alpha$ in LS is set to $0.05$, $\gamma$ in FL is set to $3$, and the scheduled $\gamma$ in FLSD is $5$ for $s_k \in [0, 0.2)$ and $3$ for $s_k \in [0.2, 1)$ (with $k$ being the correct class for a given sample).
Last, we empirically set the balancing hyper-parameter in ECP to $0.1$, as it brings consistent performance in our experiments.
\noindent \textbf{Our method.} The proposed method has only one hyper-parameter $m$ (we kept $\lambda$ fixed to $0.1$, so that the label-smoothing term has the same budget of hyper-parameters as the other methods).
As for the margin $m$, it was chosen based on the validation set of each dataset, which yielded relatively stable margin values across different tasks and consistent behaviour over both validation and testing data (see Figure \ref{fig:tiny-margin}): $m=6$ on CIFAR-10 and 20 Newsgroups, and $m=10$ on Tiny-ImageNet, CUB-200-2011 and PASCAL VOC segmentation.
Note that we perform ablation studies to assess the impact of varying $m$.
\subsection{Results}
\label{ssec:quant}
\noindent \textbf{Standard image classification benchmarks.} We first evaluate the calibration behaviour of both the baselines and the proposed model on two well-known image classification datasets, with results reported in Table \ref{table:big}. In particular, we show that training a model with hard targets, i.e., CE, leads to miscalibrated predictions across datasets and backbone architectures. In addition, by penalizing low-entropy predictions, either explicitly (i.e., ECP \cite{pereyra2017regularizing}) or implicitly (i.e., LS \cite{szegedy2016rethinking}, FL \cite{lin2017focal} and FLSD \cite{mukhoti2020calibrating}), we can typically train better calibrated networks.
Intuitively, the regularization terms added by these methods interact with the main cross-entropy objective, controlling to some extent the confidence of the predictions.
Thus, even though the impact of the different methods varies across datasets, the calibration performance is typically improved over standard cross-entropy training. Last, we can observe that both versions of our model yield the best results in nearly all of the cases, with just one setting ranking second across all the models. Furthermore, the significant improvement observed when the margin is included, i.e., $m>0$, motivates its use, suggesting that our method provides better calibrated networks. {\em An interesting observation is that, while existing models are quite sensitive to the employed backbone, the uncertainty estimates provided by our models are considerably robust, presenting the smallest variations across architectures}. For instance, when using higher-capacity backbones on CIFAR-10, calibration metrics across all existing methods are considerably degraded (ECP \cite{pereyra2017regularizing}: +2.4, LS \cite{szegedy2016rethinking}: +0.77, FL \cite{lin2017focal}: +0.7 and FLSD \cite{mukhoti2020calibrating}: +0.74), whereas models calibrated with our approach suffer only minor changes (\textit{Ours}: +0.22).
In terms of discriminative performance (Table~\ref{table:bigClass}), we can observe that the proposed MbLS provides performance on par with LS and CE, sometimes ranking as the best method. On the other hand, FL and its variant FLSD obtain the worst results, with accuracies
1--3\% lower than those of the proposed model. These results suggest that, on the standard image classification benchmarks used for calibration, our model achieves the best calibration performance while maintaining, or improving, the discriminative power of the state-of-the-art classification losses investigated for calibration.
\begin{table}[h!]
\caption{Classification performance
on two popular image classification benchmarks.
Best method is highlighted in bold, whereas the second best method is underlined. $\Delta$ columns highlight the differences with regard to the best method in each case.
}
\label{table:bigClass}
\centering
\footnotesize
\resizebox{1.0\columnwidth}{!}
{
\setlength{\tabcolsep}{2.0pt}
\begin{tabular}{@{}llccccccccccc@{}}
\toprule
\multirow{2}{*}{\textbf{Dataset}} & \multirow{2}{*}{\textbf{Model}} &
\multirow{2}{*}{\textbf{CE}} &
\multirow{2}{*}{\textbf{ECP}} &
\multirow{2}{*}{\textbf{LS} } & \multirow{2}{*}{\textbf{FL} } &
\multirow{2}{*}{\textbf{FLSD}} &
\multicolumn{2}{c}{\textbf{Ours (m=0)}} && \multicolumn{2}{c}{\textbf{Ours}} \\
\cmidrule{8-9} \cmidrule{11-12}
&&&&&&& Acc & $\Delta$ && Acc & $\Delta$ \\
\midrule
\multirow{2}{*}{Tiny-ImageNet} & R-50 & 65.02 & 64.98 & \textbf{65.78} & 63.09 & 64.09 & \underline{65.15} & -0.63 && 64.74 & -1.04 \\
& R-101 & 65.62 & 65.69 & \textbf{65.87} & 62.97 & 62.96 & 65.72 & -0.15 && \underline{65.81} & -0.06 \\
\midrule
\multirow{2}{*}{CIFAR-10} & R-50 & 93.20 & 94.75 & \underline{94.87} & 94.82 & 94.77 & 94.76 & -0.49 && \textbf{95.25} & +0.38 \\
& R-101 & 93.33 & 93.35 & 93.23 & 92.42 & 92.38 & \textbf{95.36} & +0.23 && \underline{95.13} & -0.23 \\
\bottomrule
\end{tabular}
}
\end{table}
\begin{figure*}[ht!]
\centering
\includegraphics[width=0.9\textwidth]{figures/reliability_main.pdf}
\caption{\textbf{Calibration visualizations of ResNet-50 on Tiny-ImageNet.} Reliability diagrams are computed with 25 bins. Zoomed-in views of parts of the diagrams are also included to clearly show the differences.}
\label{fig:tiny-resnet50}
\end{figure*}
\begin{table}[h!]
\caption{Results on the fine-grained image classification benchmark \textit{CUB-200-2011} with ResNet-101 as backbone.
}
\label{tab:CUB}
\centering
\footnotesize
\begin{tabular}{@{}lccc@{}}
\toprule
Method & Acc & ECE & AECE \\
\midrule
CE & 73.09 & 6.75 & 6.65 \\
ECP \cite{pereyra2017regularizing} & 73.51 & 5.55 & 5.44 \\
LS \cite{szegedy2016rethinking} & 74.51 & 5.16 & 5.14 \\
FL \cite{lin2017focal} & 72.87 & 8.41 & 8.39 \\
FLSD \cite{mukhoti2020calibrating} & 72.59 & 8.54 & 8.53 \\
\midrule
\textbf{Ours (m=0)} & 73.92 & 5.11 & 5.29 \\
\textbf{Ours} & \textbf{74.56} & \textbf{2.78} & \textbf{2.63} \\
\bottomrule
\end{tabular}
\end{table}
\noindent \textbf{Fine-grained image classification.} We now investigate the calibration and discriminative performance in a more complex scenario. In particular, whereas the previous section assessed the behaviour of a variety of methods on clearly distinct categories, this study includes subordinate classes of a common superior class. This setting is arguably more challenging, mostly due to the difficulty of finding informative regions and extracting discriminative features across subcategories. Results from this study are reported in Table \ref{tab:CUB}. In line with previous results, training networks with hard-encoded labels leads to overconfident models. Penalizing low-entropy predictions, either explicitly, i.e., ECP \cite{pereyra2017regularizing}, or implicitly with LS, results in better calibrated and higher-performing models. Nevertheless, if FL or its variant FLSD is used for training, both calibration and classification performances degrade, leading to the worst results across models. This suggests that, even though FL has recently been shown to work very competitively on standard benchmarks \cite{mukhoti2020calibrating}, its calibration benefits might vanish on more complex datasets. Last, the network trained with the proposed MbLS obtains the best calibration and classification performances, with a remarkable gap compared to the existing state of the art. Note that, for the sake of fairness, the hyperparameters used for all the models, including our method, are the same as those employed in the previous section for Tiny-ImageNet.
\begin{figure*}[h]
\centering
\includegraphics[width=0.95\textwidth]{figures/segmentation_result_v2.pdf}
\caption{\textbf{Qualitative results on semantic segmentation.}
We present several examples of qualitative segmentation results on the PASCAL VOC 2012 validation set, showing the superiority of our method in terms of calibration performance. On the left, we show the original image with the ground-truth (GT) mask; we then present the \textbf{confidence map (a)} and the \textbf{reliability diagram (b)} with the ECE (\%) score for each method. The values of the confidence map represent the predicted confidence, i.e., the softmax probability of the winning class. Note that darker colors denote higher confidence, as shown in the legend in the upper-right corner.}
\label{ap:fig:segment}
\end{figure*}
\noindent \textbf{Reliability diagram.} We further investigate the calibration behaviour of the proposed model with reliability diagrams, whose results for Tiny-ImageNet with ResNet-50 are shown in Figure \ref{fig:tiny-resnet50}. What we expect from a perfectly calibrated model is that its reliability diagram matches the dashed red line, where the output likelihood perfectly predicts the model accuracy. We first observe that the model trained with the standard cross-entropy (\textit{first plot}) is overconfident, as its accuracy is mostly below the confidence values. Both state-of-the-art methods (\textit{second and third plots}) reverse this trend and present reliability diagrams closer to the dashed line, which indicates that models trained with these losses are indeed better calibrated. Even though both improve the calibration performance, an interesting observation is that the confidence range in which each is better calibrated is the opposite (LS provides better estimates at higher probabilities, whereas FL predictions are better calibrated in the low regime, close to 0). Last, we can observe that the slope of the reliability diagram provided by our method is much closer to 1, suggesting that the model is better calibrated. This observation is supported by the quantitative results reported in Section \ref{ssec:quant}.
\noindent \textbf{Effect of the margin $\mathbf{m}$.} In this section, we study the impact of margin $m$ in Eq.~\ref{eq:our-l1}, as depicted on both validation and testing data in Figure~\ref{fig:tiny-margin}.
In particular, we show the evolution of calibration and classification metrics on two datasets, which differ significantly in their input dimensionality.
The objective of these experiments is to demonstrate the robustness of the method with respect to the margin values, and to show the consistency of the optimal margin values over validation and testing data. Although the optimal $m$ may vary across datasets, different choices of $m$ do not affect the performance drastically. Indeed, we can observe that the trend in performance is similar for both datasets, particularly on the testing data. First, imposing a small margin value has a negative impact on calibration, which might be due to the aggressive gradients resulting from the strong constraint (e.g., $m=0$). Beyond the optimal $m$, larger values result in slightly worse calibrated networks compared to the best model. Nevertheless, even if we select a network trained with a suboptimal margin, its calibration performance still outperforms state-of-the-art calibration losses. For example, with $m=20$, ECE is equal to 3.05 and 2.99 on CIFAR-10 and Tiny-ImageNet, respectively, whereas LS obtains 2.79 and 3.17, and FL 3.90 and 2.96. This demonstrates that our method is capable of bringing at least comparable improvements over the current literature, even without the need to tune the value of $m$ on a validation set.
\begin{figure}[h]
\centering
\includegraphics[width=0.95\columnwidth]{figures/margin_study3.pdf}
\caption{\textbf{Evaluating the effect of the margin (m).}
We present the variation of both ECE and Accuracy on CIFAR-10 (\textit{top}) and Tiny-ImageNet (\textit{bottom}) across different margin values. The network used in this study is ResNet-50 and $\lambda$ in Eq.~\ref{eq:our-l1} is set to 0.1.}
\label{fig:tiny-margin}
\end{figure}
\begin{figure*}[h]
\centering
\includegraphics[width=0.9\textwidth]{figures/tiny_study_lambda.pdf}
\caption{\textbf{Evaluating the effect of the balancing weight.}
We present the variation of both ECE and Accuracy on the Tiny-ImageNet validation set (\textit{left}) and on Tiny-ImageNet test set (\textit{right}) using different balancing weight values, i.e., $\lambda$ in our method and $\alpha$ in LS. The network used in this study is ResNet-50.}
\label{ap:fig:tiny-lambda}
\end{figure*}
\noindent \textbf{Ablation study on the balancing weight.}
We now investigate the impact of the balancing weight $\lambda$ in our method, and compare it to the effect of $\alpha$ in Label Smoothing (LS), whose results are depicted in Figure~\ref{ap:fig:tiny-lambda}.
In particular, we show the evolution of calibration and classification metrics on Tiny-ImageNet validation and test sets.
One may observe that, unlike LS, our method with margin is more robust with respect to the balancing weight on both subsets.
The reason is that our method imposes a more reasonable constraint with a positive margin, whereas the added penalty term of LS may lead to a non-informative solution as its weight increases.
\begin{table}[h!]
\caption{Performance of \textit{our method without margin (m=0)} and label smoothing (LS) given equivalent weights on Tiny-ImageNet with ResNet-50.
}
\label{tab:nomargin}
\centering
\footnotesize
\resizebox{0.9\columnwidth}{!}
{
\begin{tabular}{@{}c@{}}
\begin{tabular}{@{}llccccc@{}}
\toprule
&\multirow{2}{*}{Method} & \multicolumn{5}{c}{$\alpha$ in LS (Eq.~\ref{eq:ls-kl}) / $\lambda$ in Ours (Eq.~\ref{eq:our-l1})} \\
\cmidrule{3-7}
& & $0$ (CE) & $0.05$ & $0.1$ & $0.2$ & $0.3$ \\
\midrule
\multirow{2}{*}{ECE} & LS \cite{szegedy2016rethinking} & 3.73 & 3.17 & 6.53 & 12.05 & 18.04 \\
& \bf Ours ($m=0$) & 3.73 & 2.50 & 7.70 & 14.48 & 21.93 \\
\midrule
\multirow{2}{*}{Acc} & LS \cite{szegedy2016rethinking} & 65.02 & 65.78 & 65.02 & 65.39 & 65.60 \\
& \bf Ours ($m=0$) & 65.02 & 65.15 & 65.43 & 65.14 & 66.02 \\
\bottomrule
\end{tabular}
\end{tabular}
}
\end{table}
\noindent \textbf{Equivalence with Label Smoothing.} As shown by the theoretical connections between the different losses, Label Smoothing approximates a particular case of the proposed loss when $m=0$.
Table \ref{table:big} and Table \ref{tab:CUB} empirically show that the results of LS and Ours (m=0) are very similar in all cases.
Note that, following the best practice in LS, we set the equivalent balancing weight $\lambda$ in Ours (m=0) to $0.05$ in the above experiments.
To further validate empirically this observation, we gradually increase the controlling hyperparameter in both LS ($\alpha$) and our method. The results are presented in Table~\ref{tab:nomargin} and Figure~\ref{ap:fig:tiny-lambda}. It is seen that by varying the relative trade-off weights between the main cross-entropy and the penalty in both LS and our method we can obtain similar trends and scores, particularly for smaller values of the balancing terms.
\noindent \textbf{Results on image segmentation.} Segmentation performances on the popular Pascal VOC dataset are reported in Table \ref{tab:voc}. We can observe that, regardless of the backbone network, the proposed approach leads to the best calibrated and highest performing models, which is consistent with empirical observations in previous experiments. Differences between the proposed method and existing literature are further magnified when ResNet-50, a higher-capacity model, is used as a backbone network. These observations suggest that: \textit{i)} the probability values predicted by our method are a better estimate of the actual likelihood of correctness and \textit{ii)} its calibration performance does not degrade when increasing the model capacity.
\begin{table}[h!]
\caption{Segmentation results on the VOC 2012 validation set.
Best methods are highlighted in bold.}
\label{tab:voc}
\centering
\footnotesize
{
\begin{tabular}{@{}llccc@{}}
\toprule
Backbone & Method & mIoU & ECE & AECE \\
\midrule
\multirow{5}{*}{ResNet-34} & CE & 68.78 & 8.94 & 8.89 \\
& ECP \cite{pereyra2017regularizing} & 69.54 & 8.72 & 8.68 \\
& LS \cite{szegedy2016rethinking} & 69.71 & 8.11 & 8.47 \\
& FL \cite{lin2017focal} & 68.31 & 11.60 & 11.61 \\
& \textbf{Ours} & \textbf{70.24} & \textbf{7.93} & \textbf{8.00} \\
\midrule
\multirow{5}{*}{ResNet-50} & CE & 70.92 & 8.26 & 8.23 \\
& ECP \cite{pereyra2017regularizing} & 71.16 & 8.31 & 8.26 \\
& LS \cite{szegedy2016rethinking} & 71.00 & 9.35 & 9.95\\
& FL \cite{lin2017focal} & 69.99 & 11.44 & 11.43 \\
& \textbf{Ours} & \textbf{71.20} & \textbf{7.94} & \textbf{7.99} \\
\bottomrule
\end{tabular}
}
\end{table}
\noindent \textbf{Visual results on semantic segmentation.}
Several visual results from the segmentation task are depicted in Figure~\ref{ap:fig:segment}. In particular, we show the confidence maps (\textit{a}) and the reliability diagrams (\textit{b}) across each method. We can observe that the proposed model provides the best reliability diagrams, as the ECE curves are closer to the diagonal. This indicates that the predicted probabilities are a good estimate of the correctness of the prediction.
Compared to the reliability diagrams on image classification, i.e., Figure~\ref{fig:tiny-resnet50}, other methods degrade more in this more challenging application, while our method shows better robustness and generalizability.
Turning to the confidence maps, we can observe several interesting facts. First, our method tends to yield consistent confidence within the region of each object or class.
Second, the confidence maps of our method show sharper edges, matching the expected property that the model should be less confident only at the boundaries while yielding confident predictions for the other pixels.
Other methods can yield uncertain predictions in a larger region near the boundaries (CE in the first row or FL in the third row) and sometimes even at the center of the object (LS and FL in the second row).
This may also explain why our method achieves better segmentation IoU (Table 5 in the main text).
\noindent \textbf{Results on text classification.} We also investigate the calibration of models trained on non-visual pattern recognition tasks, such as text classification, evaluated on the 20 Newsgroups dataset. Table \ref{table:NLP} reports the results on this benchmark, which show that the proposed model achieves better discriminative and calibration performance than existing works. It is worth noting that the differences are substantial in terms of calibration, suggesting that the proposed approach also provides significantly better uncertainty estimates in this task than competing methods.
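The ECE values reported throughout can be computed with the standard equal-width binning estimator. The sketch below is illustrative and not tied to our experimental code; the choice of 15 bins is an assumption (a common default).

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=15):
    """Equal-width-bin ECE: the weighted average, over confidence bins,
    of |accuracy - mean confidence| within each bin."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            acc = correct[mask].mean()       # empirical accuracy in bin
            conf = confidences[mask].mean()  # mean confidence in bin
            ece += mask.mean() * abs(acc - conf)
    return ece
```

A perfectly calibrated model has ECE 0; an always-wrong model predicting confidence 0.9 has ECE 0.9.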
\begin{table}[h!]
\caption{Results on the testing set of the 20 Newsgroups dataset.
Best method is highlighted in bold.}
\label{table:NLP}
\centering
\small
\resizebox{1.0\columnwidth}{!}
{
\setlength{\tabcolsep}{2.0pt}
\begin{tabular}{@{}cccccccccccccccccc@{}}
\toprule
\multicolumn{2}{c}{\textbf{CE}} && \multicolumn{2}{c}{\textbf{ECP} \cite{pereyra2017regularizing}} &&
\multicolumn{2}{c}{\textbf{LS} \cite{szegedy2016rethinking}} && \multicolumn{2}{c}{\textbf{FL} \cite{lin2017focal}} &&
\multicolumn{2}{c}{\textbf{FLSD} \cite{mukhoti2020calibrating}} &&
\multicolumn{2}{c}{\textbf{Ours}} \\
\midrule
Acc & ECE && Acc & ECE && Acc & ECE && Acc & ECE && Acc & ECE && Acc & ECE \\
67.01 & 22.75 && 66.48 & 22.97 && 67.14 & 8.07 && 66.08 & 10.80 && 65.85 & 10.87 && \textbf{67.89} & \textbf{5.40} \\
\bottomrule
\end{tabular}
}
\end{table}
\vspace{-3mm}
\section{Limitations}
Despite the superior performance of our method over existing approaches, this work has several limitations. For instance, recent evidence in the literature \cite{ovadia2019can} has demonstrated that simple temperature scaling methods do not work well under domain distributional shift, and advocates the use of more complex methods that take epistemic uncertainty into account as the shift increases, such as ensembles. Nevertheless, despite these findings, the performance of the baselines (i.e., LS \cite{szegedy2016rethinking} or focal loss \cite{lin2017focal}) and the proposed model has not been investigated in this scenario, which might shed light on potential benefits or drawbacks of these approaches in non-independent and identically distributed (non-\textit{i.i.d.}) regimes.
{\small
\bibliographystyle{ieee_fullname}
\section{Introduction}
\label{sec:intro}
\input{floats/fig_eyecatch2}
Depth estimation plays an important role in many computer vision and robotics applications, such as 3D modeling, augmented reality, navigation, or industrial inspection.
Structured light (SL) systems estimate depth by actively projecting a known pattern on the scene
and observing with a camera how light interacts (i.e., deforms and reflects) with the surfaces of the objects.
In close range, these systems provide more accurate depth estimates than passive stereo methods,
and their simple hardware has made them suitable for commercial products like KinectV1~\cite{KinectV1} and Intel RealSense~\cite{IntelRealSense}.
SL systems are constrained by the bandwidth of the devices (projector and camera) and the power of the projector light source.
These constraints limit the acquisition speed, depth resolution, and performance in ambient illumination of the SL system.
The main drawback of SL systems based on traditional cameras is that frame rate and redundant data acquisition limit processing to tens of Hz.
By suppressing redundant data acquisition, processing can be accelerated and made more lightweight.
This is a fundamental idea of recent systems based on event cameras, such as~\cite{Matsuda15iccp,Brandli13fns,Martel18iscas}.
Event cameras, such as the Dynamic Vision Sensor (DVS) \cite{Lichtsteiner08ssc,Suh20iscas,Finateu20isscc} or the ATIS sensor \cite{Posch11ssc,Posch14ieee}, are bio-inspired sensors that measure per-pixel intensity \emph{changes} (i.e., temporal contrast) asynchronously, at the time they occur.
Thus, event cameras excel at suppressing temporal redundancy of the acquired visual signal, and they do so at the circuit level, thus consuming very little power.
Moreover, event cameras have a very high temporal resolution (in the order of microseconds), which is orders of magnitude higher than that of traditional (frame-based) cameras, so they allow us to acquire visual signals at very high speed.
This is another key idea of SL systems based on event cameras: the high temporal resolution simplifies data association by sequentially exposing the scene, one point~\cite{Matsuda15iccp} or one line~\cite{Brandli13fns} at a time.
However, the unconventional output of event cameras (a stream of asynchronous per-pixel intensity changes, called ``events'', instead of a synchronous sequence of images) requires the design of novel computer vision methods~\cite{Gallego20pami,Benosman14tnnls,Kim14bmvc,Zhu17icra,Rebecq17ral,Gallego17pami,Osswald17srep,Mueggler18tro,Gallego19cvpr}.
Additionally, event cameras have a very high dynamic range (HDR) ($>$\SI{120}{\decibel}), which allows them to operate in broad illumination conditions~\cite{Rosinol18ral,Rebecq19pami,Cohen18amos}.
This paper tackles the problem of depth estimation using a SL system comprising a laser point-projector and an event camera (Figs.~\ref{fig:eyecatch} and \ref{fig:setup}).
Our goal is to exploit the advantages of event cameras in terms of data redundancy suppression, large bandwidth (i.e., high temporal resolution) and HDR.
Early work~\cite{Matsuda15iccp} showed the potential of these types of systems; however, 3D points were estimated independently from each other, resulting in noisy 3D reconstructions.
Instead, we propose to exploit the regularity of the surfaces in the world to obtain more accurate and less noisy 3D reconstructions.
To this end, events are no longer processed independently, but jointly and following a forward projection model rather than the classical depth-estimation approach (stereo matching plus triangulation by back-projection).
\textbf{Contributions}. In summary, our contributions are:
\begin{itemize}[topsep=1pt,parsep=2pt,partopsep=2pt]
\setlength\itemsep{0em}
\item A novel formulation for depth estimation from an event-based SL system comprising a laser point-projector and an event camera.
\mm{We model the laser point projector as an ``inverse'' event camera
and estimate depth by maximizing the spatio-temporal consistency between the projector's and the event camera's data, when interpreted as a stereo system.}
\item The proposed method is robust to noise in the event timestamps (e.g., jitter, latency, BurstAER) as compared to the state-of-the-art \cite{Matsuda15iccp}.
\item A convincing evaluation of the accuracy of our method using \mm{ten} stationary scenes and
a demonstration of the capabilities of our setup to scan \mm{eight} sequences with high-speed motion.
\item A dataset comprising all static and dynamic scenes recorded with our setup, \mm{and source code}.
To the best of our knowledge it is the first public dataset of its kind.
\end{itemize}
The following sections review the related work (Section~\ref{sec:related-work}),
present our approach \mm{ESL} (Section~\ref{sec:methodolody}),
and evaluate the method, comparing against the state-of-the-art and against ground truth data (Section~\ref{sec:experim}).
\section{Event-based Structured Light Systems}
\label{sec:related-work}
Prior structured light (SL) systems that have addressed the problem of depth estimation with event cameras are summarized in Table~\ref{tab:related-work}.
Since event cameras are novel sensors (commercialized since 2008), there are only a handful of papers on SL systems.
These can be classified according to whether the shape of the light source (point, line, 2D pattern)
and according to the number of event cameras used.
One of the earliest works combined a DVS with a pulsed laser line to reconstruct a small terrain~\cite{Brandli13fns}.
The pulsed laser line was projected at a fixed angle with respect to the DVS while the terrain moved beneath, perpendicular to the projected line.
The method used an adaptive filter to distinguish the events caused by the laser (up to $f\!\sim$\SI{500}{\Hz}) from the events caused by noise or by the terrain's motion.
\input{floats/fig_hw_setup}
\input{floats/table_background}
The SL system MC3D~\cite{Matsuda15iccp} comprised a laser point projector (operating up to \SI{60}{\Hz}) and a DVS.
The laser raster-scanned the scene, and its reflection was captured by the DVS, which converted temporal information of events at each pixel into disparity.
It exploited the redundancy suppression and high temporal resolution of the DVS, also showing appealing results in dynamic scenes.
In \cite{Leroux18arxiv}, a Digital Light Processing (DLP) projector was used to illuminate the scene with frequency-tagged light patterns.
Each pattern's unique frequency facilitated the establishment of correspondences between the patterns and the events,
leading to a sparse depth map that was later interpolated.
Recently, \cite{Martel18iscas} combined a laser light source with a \emph{stereo} setup consisting of two
DAVIS event cameras~\cite{Brandli14ssc}.
The laser illuminated the scene and the synchronized event cameras recorded the events generated by the reflection from the scene.
Hence the light source was used to generate stereo point correspondences, which were then triangulated (back-projected) to obtain a 3D reconstruction.
More recently, \cite{Mangalore20spl} proposed a SL system with a fringe projector and an event camera.
A sinusoidal 2D pattern with different frequencies illuminated the scene, and its reflection was captured by the camera and processed (by phase unwrapping) to generate depth estimates.
The closest work to our method is MC3D~\cite{Matsuda15iccp} since both use a laser point-projector and a single event camera,
which is a sufficiently general and simple scenario that allows us to exploit the high-speed advantages of event cameras and the focusing power of a point light source.
In both methods we may interpret the laser and camera as a stereo pair.
The principle behind MC3D is to map the spatial disparity between the projector and event camera to temporal information of the events.
When events are generated, their timestamps are mapped to disparity by multiplying by the projector's scanning speed.
This operation amplifies the noise inherent in the event timestamps and leads to brittle stereo correspondences.
Moreover, this noise amplification depends on the projector's speed, which is the product of the projector resolution and the scanning frequency. Hence, MC3D's performance degrades as the scanning frequency increases.
By contrast, our method maximizes the spatio-temporal consistency between the projector's and event camera's data, thus leading to lower errors (especially with higher scanning frequencies).
By exploiting the regularities in neighborhoods of event data, as opposed to the point-wise operations in MC3D, our method improves robustness against noise.
\section{Depth Estimation}
\label{sec:methodolody}
This section introduces basic considerations of the event-camera - projector setup (Section~\ref{sec:method:preliminaries})
and then presents our optimization approach to depth estimation (Section~\ref{sec:method:energy-based-formulation})
using the spatio-temporal consistency between the signals of the event camera and the projector.
Overall, our method is summarized in Fig.~\ref{fig:geometricConfig} and Algorithm~\ref{alg:pseudo-code-patches}.
\subsection{Basic Considerations}
\label{sec:method:preliminaries}
\input{floats/fig_geometric_setup}
We consider the problem of depth estimation using a laser point projector and an event camera.
Fig.~\ref{fig:geometricConfig} illustrates the geometry of our configuration.
The projector illuminates the scene by moving a laser light source in a raster scan fashion. %
The changes in illumination caused by the laser are observed by the event camera, whose pixels respond asynchronously by generating events\footnote{An event camera generates an event $e_k = (\mathbf{x}_k,t_k,p_k)$ at time $t_k$ when the increment of logarithmic brightness at the pixel $\mathbf{x}_k=(x_k,y_k)^\top$
reaches a predefined threshold $C$:
$L(\mathbf{x}_k,t_k) - L(\mathbf{x}_k,t_k-\Delta t_k) = p_k C$, where $p_k \in \{-1,+1\}$ is the sign (polarity) of the brightness change,
and $\Delta t_k$ is the time since the last event at the same pixel location. \cite{Gallego20pami,Gallego15arxiv}}.
Ideally, every camera pixel receives light from a single scene point, which is illuminated by a single location of the laser as it sweeps through the projector's pixel grid (Fig.~\ref{fig:geometricConfig}).
Since the light source moves in a predefined manner and the event camera and the projector are synchronized,
as soon as an event is triggered, one can match it to the current light source pixel (neglecting latency, light traveling time, etc.)
to establish a stereo correspondence between the projector and the camera.
This concept of converting \emph{point-wise} temporal information into disparity was explored in~\cite{Matsuda15iccp},
which relied on precise timing of both laser and event camera to establish accurate correspondences.
However this one-to-one, ideal situation breaks down due to noise as the scanning speed increases.
Let us introduce the time constraints and effects from both devices: projector and event camera.
\textbf{Projector's Sweeping Time and Sensor's Temporal Resolution}.
Without loss of generality, we first assume the projector and the event camera are in a canonical stereo configuration, i.e., epipolar lines are horizontal (this can be achieved via calibration and rectification).
The time that it takes for the projector to move its light source from one pixel to the next one horizontally in the raster scan, $\delta t$, is inversely proportional to the scanning frequency $f$ and the projector's spatial resolution ($W \times H$ pixels):
\vspace{-0.5ex}
\begin{equation}
\label{eq:projector-time-dt}
\delta t = 1 / (f\,H\,W).
\vspace{-0.5ex}
\end{equation}
For example, our projector scans at $f=\SI{60}{\Hz}$ and has $1920 \times 1080$ pixels,
thus it takes $1/f = \SI{16.6}{\milli\second}$ to sweep over all its pixels, spending at most $\delta t \approx \SI{8}{\nano\second}$ per pixel.
This is considerably smaller than the temporal resolution of event cameras (\SI{1}{\micro\second})
(i.e., the camera cannot perceive the time between consecutive raster-scan pixels).
To overcome this issue and be able to establish stereo correspondences along epipolar lines from time measurements, we take advantage of geometry:
we \emph{rotate} the projector by \SI{90}{\degree}, so that the raster scan is now vertical (Fig.~\ref{fig:geometricConfig}).
The time that it now takes for the projector to move its light source from a pixel to its adjacent one on the still horizontal epipolar line is
\vspace{-0.5ex}
\begin{equation}
\label{eq:projector-time-dt-line}
\delta t_{\text{line}} = 1 / (f\,H),
\vspace{-0.5ex}
\end{equation}
the time that it takes to sweep over one of the $H$ raster lines.
For the above projector, $\delta t_{\text{line}}\approx \SI{15.4}{\micro\second}$, which is larger than the temporal resolution of most event cameras~\cite{Gallego20pami}.
Hence, the event camera is now able to distinguish between consecutive projector pixels on the same epipolar line.
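The timing quantities in Eqs.~\eqref{eq:projector-time-dt} and \eqref{eq:projector-time-dt-line} can be checked with a few lines of arithmetic, using the projector parameters quoted above ($f=\SI{60}{\Hz}$, $1920\times1080$ pixels):

```python
# Sketch verifying the projector timing quantities from the text.
f = 60.0          # scanning frequency in Hz
W, H = 1920, 1080  # projector resolution in pixels

dt_pixel = 1.0 / (f * H * W)  # time per pixel in the raster scan
dt_line = 1.0 / (f * H)       # time per horizontal step after the 90-degree rotation

print(f"dt_pixel = {dt_pixel * 1e9:.2f} ns")  # approx. 8.04 ns, below event-camera resolution
print(f"dt_line  = {dt_line * 1e6:.2f} us")   # approx. 15.43 us, above 1 us resolution
```

This confirms that per-pixel dwell time ($\approx\SI{8}{\nano\second}$) is far below the camera's \SI{1}{\micro\second} temporal resolution, while the per-line step ($\approx\SI{15.4}{\micro\second}$) is resolvable.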
\textbf{Event camera noise sources}: Let us describe the different noise characteristics of event cameras that factor into this system.
\emph{Latency} is the time it takes for an event to be triggered from the moment the logarithmic intensity change exceeds the threshold.
Typically, this latency ranges from \SI{15}{\micro\second} to \SI{1}{\milli\second}.
Since it affects all timestamps equally, it can be treated as a constant offset that does not alter the relative timestamps between consecutive events.
\emph{Jitter} is the random noise that appears in the timestamps.
This can have a huge variance depending on the scene and the illumination conditions.
\emph{BurstAER mode}. This pixel read-out mode is common in high-resolution event cameras.
It is a technique used to quickly read events from the pixel array.
Instead of reading out each event pixel individually (which takes longer for higher-resolution cameras), it reads out an entire row or group of rows together and assigns the same timestamp to all the events in those rows.
This causes banding effects that appear in the event timestamps,
hence they also affect the quality of the reconstructed depth map.
\subsection{Maximizing Spatio-Temporal Consistency}
\label{sec:method:energy-based-formulation}
The method in \cite{Matsuda15iccp} (mentioned in Section~\ref{sec:method:preliminaries} and described as a baseline in Section~\ref{sec:experim:baseline}) computes disparity independently for each event and is, therefore, highly susceptible to noise, especially as the scanning speed increases.
We now propose a method that processes events in space-time neighborhoods
and exploits the regularity of surfaces present in natural scenes
to improve robustness against noise and produce spatially coherent 3D reconstructions.
\textbf{Time Maps}:
The laser projector illuminates the scene in a raster scan fashion. %
During one scanning interval, $T = 1/f$, the projector traverses each of its pixels $\bx_{p}$ at a precise time $\tau_{p}$,
which allows us to define a time map over the projector's pixel grid: $\bx_{p} \mapsto \tau_{p}(\bx_{p})$.
Similarly for the event camera we can define another time map (see \cite{Lagorce17pami}) $\bx_{c} \mapsto \tau_{c}(\bx_{c})$,
where $\tau_{c}(\bx_{c})$ records the timestamp of the last event at pixel $\bx_{c}$.
Owing to this similarity between time maps and the fact that the projector emits light whereas the camera acquires it,
we think of the projector as an ``inverse'' event camera.
That is, the projector creates an ``illumination event'' $\tilde{e}=(\bx_{p},t_p,1)$ when light at time $t=t_p$ traverses pixel $\bx_{p}$.
These ``illumination events'' are sparse, follow a raster-like pattern and are $\delta t \approx \SI{8}{\nano\second}$ apart \eqref{eq:projector-time-dt}.
For simplicity, we do not make a distinction between $\tau_{p}$ and $\tau_{c}$ and refer to them as time maps (i.e., regardless of whether they are in the projector's or the event camera's image plane).
Exemplary time maps are shown in Fig.~\ref{fig:geometricConfig}.
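The camera-side time map $\tau_c$ is simply a last-timestamp image over the pixel grid. A minimal sketch follows (our own illustrative code; the event tuple layout $(x, y, t, p)$ and the sentinel for never-fired pixels are assumptions):

```python
import numpy as np

def build_time_map(events, height, width):
    """Time map (surface of active events): for each pixel, the timestamp
    of the most recent event. `events` iterates over (x, y, t, polarity)."""
    tau = np.full((height, width), -np.inf)  # -inf marks pixels with no event
    for x, y, t, _ in events:
        tau[y, x] = max(tau[y, x], t)  # keep the latest timestamp per pixel
    return tau
```

The projector-side map $\tau_p$ is analogous but known in advance from the raster-scan schedule, which is what lets us treat the projector as an ``inverse'' event camera.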
\textbf{Geometric Configuration}:
A point $\bx_{c}$ on the event camera’s image plane transfers onto a point $\bx_{p}$ on the projector's image plane following a chain of transformations that involves the surface of the objects in the scene (Fig.~\ref{fig:geometricConfig}).
If we represent the surface of the objects using the depth $Z$ with respect to the event camera, we have:
\vspace{-0.5ex}
\begin{equation}
\label{eq:transferPointFromCamera}
\bx_{p} = \pi_p \bigl(\TE\; \pi_c^{-1} \bigl(\bx_{c}, Z(\bx_{c})\bigr)\bigr),
\vspace{-0.5ex}
\end{equation}
where $\pi_p$ is the perspective projection onto the projector's image plane,
$\TE$ is the rigid-body transformation from the camera to the projector,
and $\pi_c^{-1}$ is the inverse perspective projection of the event camera
(assumed to be well defined by a unique point of intersection between the viewing ray from the camera and the surfaces in the scene).
\textbf{Time Constancy Assumption}.
In the above geometric configuration,
the ``illumination events'' from the projector induce regular events on the camera.
Equivalently, in terms of timestamps,
the time map $\tau_{p}$ on the projector’s image plane induces a time map $\tau_{c}$ on the camera’s image plane:
\vspace{-0.5ex}
\begin{equation}
\label{eq:IdealTimeSurfaceTransfer}
\tau_{c} (\bx_{c}) = \tau_{p} (\bx_{p}).
\vspace{-0.5ex}
\end{equation}
This equation states a \emph{time-consistency principle} between $\tau_{c}, \tau_{p}$, which assumes negligible travel time and photoreceptor delay~\cite{Zhou18eccv,Ieng18fnins,Zhou20tro}, i.e., instantaneous transmission from projector to camera, as if ``illumination events'' and regular events were simultaneous.
This time-consistency principle will play the same role that photometric consistency (e.g., the brightness constancy assumption $I_2(\mathbf{x}_2) = I_1(\mathbf{x}_1)$) plays in conventional (i.e., passive) multi-view stereo.
\textbf{Disparity map from stereo matching}.
We formulate the problem of depth estimation using epipolar search, where we compare local neighborhoods of ``illumination'' and regular events (of size $W \!\times W \!\times T$) on the rectified image planes, seeking to maximize their consistency.
In terms of time maps, a neighborhood $\tau_\star(\mathbf{x}_\star, W)$, of size $W \times W$ pixels around point $\mathbf{x}_\star$, is a compact representation of the spatio-temporal neighborhood of the point $\mathbf{x}_\star$, since it not only contains spatial information but also temporal one, by definition of $\tau_\star$.
Our goal then becomes to maximize the consistency~\eqref{eq:IdealTimeSurfaceTransfer}, which we do by searching along the epipolar line for the patch $\tau_{p}(\bx_{p}, W)$ that minimizes the error
\begin{equation}
\label{eq:objectiveFunction}
Z^\ast \doteq \arg\min_{Z} C(\bx_{c}, Z),
\end{equation}
\begin{equation}
\label{eq:residualCalculation}
C(\bx_{c}, Z) \doteq \| \tau_{c}(\bx_{c}, W) - \tau_{p}(\bx_{p}, W) \|^{2}_{L^2(W\times W)}.
\end{equation}
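A brute-force version of this epipolar search can be sketched as follows. This is an illustrative NumPy implementation on rectified time maps, not the authors' implementation; for simplicity it searches directly over integer disparities rather than over depth $Z$, and all names are our own:

```python
import numpy as np

def disparity_by_patch_matching(tau_c, tau_p, x, y, half, max_disp):
    """Illustrative sketch of the epipolar patch search: for camera pixel
    (x, y), find the horizontal shift d minimizing the squared difference
    between (2*half+1)^2 time-map patches on rectified image planes."""
    patch_c = tau_c[y - half:y + half + 1, x - half:x + half + 1]
    best_d, best_cost = 0, np.inf
    for d in range(max_disp):
        xp = x - d  # candidate projector column along the epipolar line
        if xp - half < 0:
            break
        patch_p = tau_p[y - half:y + half + 1, xp - half:xp + half + 1]
        cost = np.sum((patch_c - patch_p) ** 2)  # time-consistency residual
        if cost < best_cost:
            best_cost, best_d = cost, d
    return best_d
```

Because whole patches are compared, jitter on any single event timestamp perturbs only one entry of the residual, which is the robustness argument made above.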
\input{floats/alg_pseudocode}
\textbf{Discussion of the Approach}.
The temporal noise characteristics of event cameras (e.g., jitter, latency, BurstAER mode) influence the quality of the obtained depth maps.
The advantages of the proposed method are as follows.
(\emph{i}) \emph{Robustness to noise} (event jitter):
By considering spatio-temporal neighborhoods of events for stereo matching, our method becomes less susceptible to individual event's jitter than point-wise methods~\cite{Matsuda15iccp}.
(\emph{ii}) \emph{Less data required}:
Point-wise methods improve depth accuracy on static scenes by averaging depth over multiple scans~\cite{Matsuda15iccp}.
Our method exploits spatial relationships between events, which makes up for temporal averaging, and therefore produces good results with less data, thus enabling better reconstructions of dynamic scenes.
We may further smooth the depth maps by using a non-linear refinement step.
(\emph{iii}) \emph{Single step stereo triangulation}: Depth parametrization and stereo matching are combined in a single step, as opposed to the classical two-step approach of first establishing correspondences and then triangulating depth like SGM or SGBM.
This improves accuracy by removing triangulation errors from non-intersecting rays.
(\emph{iv}) \emph{Trade-off controllability}:
Parameter $W$ allows us to control the quality of the estimated depth maps, with a trade-off:
a small $W$ produces fine-detailed but noisy depth maps,
whereas a large $W$ filters out noise at the expense of recovering fewer details, with (over-)smooth depth maps.
Noise due to BurstAER mode or temporal resolution may affect large pixel areas.
We may mitigate this type of noise by using large neighborhoods at the expense of smoothing depth discontinuities.
On the downside, the method is computationally more expensive than~\cite{Matsuda15iccp}, albeit it is still practical.
The pseudo-code of the method is given in Alg.~\ref{alg:pseudo-code-patches}.
Overall, Alg.~\ref{alg:pseudo-code-patches} may be interpreted as a principled non-linear method to recover depth from raw measurements, which may be initialized by a simpler method, such as~\cite{Matsuda15iccp}.
\section{Experiments}
\label{sec:experim}
This section evaluates the performance of our event-based SL system for depth estimation.
We first introduce the hardware setup (Section~\ref{sec:experim:hardware})
and the baseline methods and ground truth used for comparison (Section~\ref{sec:experim:baseline}).
Then we perform experiments on static scenes to quantify the accuracy of Alg.~\ref{alg:pseudo-code-patches},
and on dynamic scenes to show its high-speed acquisition capabilities (Section~\ref{sec:experim:results}).
\subsection{Hardware Setup}
\label{sec:experim:hardware}
To the best of our knowledge, there is no available dataset on which the proposed method can be tested.
Therefore, we build our setup using a Prophesee event camera and a laser point source projector (Fig.~\ref{fig:setup}).
\textbf{Event Camera}:
In our setup, we use a Prophesee Gen3 camera~\cite{Posch11ssc,propheseeevk}, with a resolution of $640 \times 480$ pixels.
This sensor provides only regular events (change detection, not exposure measurement) which are used for depth estimation.
We use a lens with a field of view (FOV) of \SI{60}{\degree}.
\textbf{Projector Source}:
We use a Sony Mobile projector MP-CL1A.%
The projector has a scanning speed of \SI{60}{\Hz} and a resolution of $1920\times 1080$ pixels.
During one scan (an interval of \SI{16}{\milli\second}), the point light source moves in a raster scanning pattern. %
The light source consists of a Laser diode (Class 3R), of wavelength \SIrange{445}{639}{\nano\meter}.
The event camera and the laser projector are synchronized via an external jack cable.
The projector's FOV is \SI{20}{\degree}.
The projector and camera are \SI{11}{\centi\meter} apart and their optical axes form a \SI{26}{\degree} angle.
\textbf{Calibration}:
We calibrate the intrinsic parameters of the event camera using a standard calibration tool (Kalibr \cite{Furgale13iros}) on the images produced after converting events to images using E2VID \cite{Rebecq19pami} when viewing a checkerboard pattern from different angles.
We calibrate the extrinsic parameters of the camera-projector setup and the intrinsic parameters of the projector using a standard tool for SL systems~\cite{Moreno12impvt}.
\input{floats/table_comparison}
\input{floats/fig_final_stationary}
\subsection{Baselines and Ground Truth}
\label{sec:experim:baseline}
Let us specify the depth estimation methods used for comparison and how ground truth depth is provided.
\textbf{MC3D Baseline}.
We implemented the state-of-the-art method proposed in \cite{Matsuda15iccp}.
Moreover, we improved it by removing the need to scan the two end-planes of the scanning volume, which were used to linearly interpolate depth.
The details are described in the supplementary material.
Due to the camera's event jitter and noisy correspondences (e.g., missing matches), the disparity map for a single scanning period of \SI{16}{\milli\second} is typically noisy and has many gaps (``holes'').
Hence, we apply a median filter in post-processing (also used by~\cite{Matsuda15iccp}).
However, this process does not remove all noise. %
Hence, we apply inpainting with hole filling and total variational (TV) denoising in post-processing.
In the experiments, we use as baseline the MC3D method~\cite{Matsuda15iccp} with a single scan (\SI{16}{\milli\second}).
\textbf{SGM Baseline}.
The main advantage of formulating the projector as an inverse event camera with an associated time map is that any stereo algorithm can be applied to compute disparity between the projector's and event camera's time maps.
We therefore test the Semi-Global Matching (SGM) method~\cite{Hirschmuller08pami} on such timestamp maps.
\textbf{Ground truth}. We average the scans of MC3D over a period of \SI{1}{\second}.
With a frequency of \SI{60}{\Hz}, this temporal averaging approach combines $60$ depth scans into one.
\textbf{Evaluation metrics}.
We define two evaluation metrics:
($i$) the root mean square error (\emph{RMSE}), namely the Euclidean distance between estimates and ground truth, measured in \SI{}{\centi\meter}, and ($ii$) the \emph{fill rate} (or completeness), namely the percentage of ground-truth points, which have been estimated by the proposed method within a certain error.
RMSE is often used to evaluate the quality of depth maps;
however, this metric is heavily influenced by the scene depth, especially if there are missing points in the estimated depth map.
We therefore also measure the fill rate, with a depth error threshold of 1\% of the average scene depth.
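Both metrics are straightforward to compute from aligned depth maps. The sketch below is illustrative (not our evaluation code) and assumes invalid or missing pixels are encoded as NaN:

```python
import numpy as np

def depth_metrics(est, gt, thresh_frac=0.01):
    """RMSE over jointly valid pixels, and fill rate: the fraction of
    ground-truth pixels estimated within thresh_frac of the mean depth."""
    valid = np.isfinite(est) & np.isfinite(gt)
    err = np.abs(est[valid] - gt[valid])
    rmse = np.sqrt(np.mean(err ** 2))
    # Error threshold: 1% of the average scene depth (as in the text).
    thresh = thresh_frac * gt[np.isfinite(gt)].mean()
    fill_rate = np.sum(err <= thresh) / np.sum(np.isfinite(gt))
    return rmse, fill_rate
```

Note that missing estimates lower the fill rate but do not enter the RMSE, which is precisely why RMSE alone can flatter sparse depth maps.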
\subsection{Results}
\label{sec:experim:results}
We assess the performance of our method on static and dynamic scenes, as well as in HDR illumination conditions.
\vspace{-1ex}
\subsubsection{Static Scenes}
\label{sec:experim:static-scenes}
Static scenes enable the acquisition of accurate ground truth by temporal averaging, which ultimately allows us to assess the accuracy of our method.
To this end, we evaluate our method on ten static scenes of increasing complexity, including
a 3D printed model of Michelangelo's David, a 3D printed model of a heart, book-duck-cylinder, plants, City of Lights, and cycle-plant.
We also include long-range indoor scenes of a desk and a room, with a maximum depth of \textbf{\SI{6.5}{\meter}}.
The scenes have varying depths (range and average depth).
Depth estimation results are collected in Fig.~\ref{fig:experim:static} and Table~\ref{tab:comparison}.
The depth error was measured on the overlapping region with the ground truth.
As can be observed, on all scenes our method, which processes the event data triggered by a single scan pass of the \SI{60}{\Hz} projector, outperforms the MC3D baseline with the same input data (\SI{16}{\milli\second}).
Although SGM gives satisfactory results in comparison to MC3D, it suffers from artefacts that arise when temporal consistency is not strictly adhered to.
Table~\ref{tab:comparison} reports the fill rate (completion) and RMS error for our method and the two baselines (MC3D, SGM).
The even rows incorporate post-processing (``proc''), which fills in holes (i.e., increases the fill rate) and decreases the RMS depth error. The best results are obtained using our method with post-processing.
However, the effect of post-processing is marginal in our method compared to the effect it has on the baseline methods.
\input{floats/fig_zoomduck}
Fig.~\ref{fig:depthdiff:signed} zooms into the signed depth errors for the Book-Duck scene (top row in Fig.~\ref{fig:experim:static}).
Here, SGM gives the largest errors, especially at the duck's edges;
MC3D yields smaller errors, but still has marked object contours and gaps;
finally, our approach has the smallest error contours.
\subsubsection{High Dynamic Range Experiments}
We also assess the performance of our method on a static scene under different illumination conditions (Fig.~\ref{fig:experim:static:hdr}),
which demonstrates the advantages of using an event-based SL depth system over a conventional-camera--based depth sensor such as the Intel RealSense D435.
\input{floats/fig_hdr}
Fig.~\ref{fig:experim:static:hdr} shows qualitatively how our method provides consistent depth maps under different illumination conditions, whereas a frame-based depth sensor, e.g. Intel RealSense, does not cope well with such challenging scenarios.
Table~\ref{tab:comparison:hdr} compares our method against the event-based baselines in HDR conditions.
While all event-based methods estimate consistent depth maps across the HDR conditions, our method outperforms the MC3D baseline significantly.
We observe that as illumination increases, there is a slight decrease of the errors.
The reason is that the noise (i.e., jitter) in the event timestamps decreases with illumination.
\subsubsection{Sensitivity with respect to the Neighborhood Size}
\label{sec:experim:sensitivity}
Fig.~\ref{fig:experim:patch-sensitivity} qualitatively shows the performance of Alg.~\ref{alg:pseudo-code-patches} as the size of the local aggregation neighborhood increases from $W=3$ to $W=15$ pixels on the event camera's image plane.
As anticipated in Section~\ref{sec:method:energy-based-formulation}, there is a trade-off between accuracy, detail preservation, and noise reduction.
Our method allows us to control the desired depth estimation quality along this trade-off via the parameter $W$.
\input{floats/fig_windowsize}
\subsubsection{Dynamic Scenes}
\label{sec:experim:motion-scenes}
\input{floats/fig_dynamic}
We also test our method on eight dynamic scenes (Fig.~\ref{fig:experim:dynamic}) with diverse challenging scenarios to show the capabilities of the proposed method to recover depth information in high-speed applications.
Specifically, Fig.~\ref{fig:experim:dynamic} shows depth recovered using our method and the baselines for the eight sequences.
The figure shows a good performance of our technique in fast motion scenes and in the presence of \mbox{(self-)}occlusions (e.g., Scotch tape and Multi-object) and thin structures (e.g., fan).
Objects do not need to be convex to recover depth with the proposed SL system.
We observe that MC3D depth estimation is inaccurate due to inherent noise in the event timestamps.
In the case of tape spin and fan scenes, MC3D depth has significant holes which cannot be recovered even after post-processing.
SGM performs better than MC3D, however its performance decreases in the presence of noise:
in the origami fan scene, the depth along the wing of the fan and the wall has significant artefacts.
Our method is robust to these artefacts and can accurately estimate depth in challenging scenes.
Qualitative comparison against Intel RealSense shows favorable performance of our event-based SL method compared to frame-based SL for dynamic scenes.
Because it is \emph{challenging} (if not impossible) to obtain \emph{accurate} ground truth depth at \SI{}{\milli\second} resolution in natural dynamic scenes, such as the deforming origami fan rotating at variable speed, spinning tape, etc., we do not report quantitative results.
In static scenes, we acquire accurate ground truth depth by time-averaging \SI{1}{\second} of scan data.
However, this is not possible in dynamic scenes.
The static scene experiments allow us to assess the accuracy of our method (which only requires \SI{16}{\milli\second} of data) and provide a ballpark for the accuracy of dynamic scenes. %
\textbf{Discussion}.
The experiments show that the proposed method produces, with the input data from a single scan pass, accurate depth maps at high frequency.
This was possible by exploiting local event correlations at the expense of increasing the computational effort compared to MC3D.
The current Python implementation of the proposed method is 38 times slower than MC3D.
Nevertheless, we think this can be optimized further for real-time operation.
We also found that the method suffers in the presence of strong specularities (coin sequence, bike scene).
Still, our method is able to handle specularities better than passive systems that process images using the brightness constancy assumption, which breaks down in these scenarios.
\section{Conclusion}
\label{sec:conclusion}
We have introduced a novel method for depth estimation using a laser point-projector and an event camera.
The method aims at exploiting correlations between events (sparse space-time measurements), which previous methods on the same setup had not explored.
We formulated the problem from first principles, aiming at maximizing spatio-temporal consistency while casting the problem
in an amenable stereo fashion.
The experiments showed that the proposed method outperforms the frame-based (Intel RealSense) and event-based baselines (MC3D, SGM),
producing, given input data from a single scan pass, similar 3D reconstruction results as the temporal average of 60 scans with MC3D.
The method also provides the best results in dynamic scenes and under broad illumination conditions.
Exploiting local correlations was possible by introducing more event processing effort into the system.
The effect of post-processing on the output of our method was marginal, signaling a thoughtful design.
Finally, we think that the ideas presented here can spark a new set of techniques for high-speed depth acquisition and denoising with event-based structured light systems.
\section*{Acknowledgement}
\up{We thank Dr.~Dario Brescianini and Kira Erb for their help with the prototype and data collection.}
\section*{MC3D Baseline}
We implemented the state-of-the-art method proposed in \cite{Matsuda15iccp}.
Moreover, we improved it by removing the need to scan the two end-planes of the scanning volume, which were used to linearly interpolate depth, as we explain next.
The method in \cite{Matsuda15iccp} required scanning two planes at known distances from the setup at the two ends of the scanning volume.
These planes were used for calibration and depth estimation.
If $d_n, d_f$ are the disparities corresponding to these two %
planes at depths $Z_n, Z_f$ (near and far, respectively),
then the depth $Z$ at a pixel $(x,y)$ with disparity $d(x,y)$ was linearly interpolated by~\cite{Wang21jsen}:
\begin{equation}
Z(x,y) = Z_n + Z_f \, \frac{d(x,y) - d_n(x,y)} {d_f(x,y) - d_n(x,y)}.
\end{equation}
This first-order method, which assumes pinhole models and a small illumination angle approximation throughout the scan volume,
was justified in \cite{Matsuda15iccp} to overcome the low spatial resolution of the DVS128 ($128 \times 128$ pixels) and the jitter in the event timestamps.
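For illustration, the plane-based interpolation equation above can be implemented directly as a sketch; the function name and array conventions are ours, not from \cite{Matsuda15iccp} or \cite{Wang21jsen}:

```python
import numpy as np

def interp_depth(d, d_n, d_f, Z_n, Z_f):
    """Implements the interpolation equation above verbatim: d_n and d_f
    are the disparities of the near and far calibration planes (at known
    depths Z_n, Z_f), and d is the measured disparity map."""
    d = np.asarray(d, dtype=float)
    return Z_n + Z_f * (d - d_n) / (d_f - d_n)
```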
In contrast to the setup in~\cite{Matsuda15iccp}, we use a higher resolution ($\approx20\times$) event camera and calibrate using events.
Therefore, we can estimate depth from disparity without the need for prior scanning of the end-planes.
In our version of MC3D, depth is given by the classical triangulation equation for a canonical stereo configuration
(assuming the image planes of the projector and event camera are rectified using the calibration information):
\begin{equation}
\label{eq:depth_mc3d}
Z(\bx_{c}) = b \frac{F}{|\bx_{c} - \bx_{p}|},
\end{equation}
where $b$ is the stereo baseline, $F$ is the focal length, and the denominator is the disparity.
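A minimal sketch of this triangulation (function name ours; assumes rectified image planes and nonzero disparity):

```python
def depth_from_disparity(x_c, x_p, baseline, focal_px):
    """Classical triangulation for a rectified (canonical) stereo pair,
    as in the depth equation above: Z = b * F / disparity, where the
    disparity is the column offset between camera and projector pixels.
    Assumes the disparity is nonzero."""
    disparity = abs(float(x_c) - float(x_p))
    return baseline * focal_px / disparity
```

For example, with an \SI{11}{\centi\meter} baseline and a \num{600}-pixel focal length, a 20-pixel disparity yields a depth of \SI{3.3}{\meter}.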
\section{Introduction}
\label{sec:intro}
\input{floats/fig_eyecatch2}
Depth estimation plays an important role in many computer vision and robotics applications, such as 3D modeling, augmented reality, navigation, or industrial inspection.
Structured light (SL) systems estimate depth by actively projecting a known pattern on the scene
and observing with a camera how light interacts (i.e., deforms and reflects) with the surfaces of the objects.
In close range, these systems provide more accurate depth estimates than passive stereo methods,
and so they have been used in commercial products like KinectV1~\cite{KinectV1} and Intel RealSense~\cite{IntelRealSense}.
Due to simple hardware and accurate depth estimates, SL systems are suitable for applications like 3D modeling, augmented reality, and indoor-autonomous navigation.
SL systems are constrained by the bandwidth of the devices (projector and camera) and the power of the projector light source.
These constraints limit the acquisition speed, depth resolution, and performance in ambient illumination of the SL system.
The main drawback of SL systems based on traditional cameras is that frame rate and redundant data acquisition limit processing to tens of Hz.
By suppressing redundant data acquisition, processing can be accelerated and made more lightweight.
This is a fundamental idea of recent systems based on event cameras, such as~\cite{Matsuda15iccp,Brandli13fns,Martel18iscas}.
Event cameras, such as the Dynamic Vision Sensor (DVS) \cite{Lichtsteiner08ssc,Suh20iscas,Finateu20isscc} or the ATIS sensor \cite{Posch11ssc,Posch14ieee}, are bio-inspired sensors that measure per-pixel intensity \emph{changes} (i.e., temporal contrast) asynchronously, at the time they occur.
Thus, event cameras excel at suppressing temporal redundancy of the acquired visual signal, and they do so at the circuit level, thus consuming very little power.
Moreover, event cameras have a very high temporal resolution (in the order of microseconds), which is orders of magnitude higher than that of traditional (frame-based) cameras, so they allow us to acquire visual signals at very high speed.
This is another key idea of SL systems based on event cameras: the high temporal resolution simplifies data association by sequentially exposing the scene, one point~\cite{Matsuda15iccp} or one line~\cite{Brandli13fns} at a time.
However, the unconventional output of event cameras (a stream of asynchronous per-pixel intensity changes, called ``events'', instead of a synchronous sequence of images) requires the design of novel computer vision methods~\cite{Gallego20pami,Benosman14tnnls,Kim14bmvc,Zhu17icra,Rebecq17ral,Gallego17pami,Osswald17srep,Mueggler18tro,Gallego19cvpr}.
Additionally, event cameras have a very high dynamic range (HDR) ($>$\SI{120}{\decibel}), which allows them to operate in broad illumination conditions~\cite{Rosinol18ral,Rebecq19pami,Cohen18amos}.
This paper tackles the problem of depth estimation using a SL system comprising a laser point-projector and an event camera (Figs.~\ref{fig:eyecatch} and \ref{fig:setup}).
Our goal is to exploit the advantages of event cameras in terms of data redundancy suppression, large bandwidth (i.e., high temporal resolution) and HDR.
Early work~\cite{Matsuda15iccp} showed the potential of these types of systems; however, 3D points were estimated independently from each other, resulting in noisy 3D reconstructions.
Instead, we propose to exploit the regularity of the surfaces in the world to obtain more accurate and less noisy 3D reconstructions.
To this end, events are no longer processed independently, but jointly and following a forward projection model rather than the classical depth-estimation approach (stereo matching plus triangulation by back-projection).
\textbf{Contributions}. In summary, our contributions are:
\begin{itemize}[topsep=1pt,parsep=2pt,partopsep=2pt]
\setlength\itemsep{0em}
\item A novel formulation for depth estimation from an event-based SL system comprising a laser point-projector and an event camera.
\mm{We model the laser point projector as an ``inverse'' event camera
and estimate depth by maximizing the spatio-temporal consistency between the projector's and the event camera's data, when interpreted as a stereo system.}
\item The proposed method is robust to noise in the event timestamps (e.g., jitter, latency, BurstAER) as compared to the state-of-the-art \cite{Matsuda15iccp}.
\item A convincing evaluation of the accuracy of our method using \mm{ten} stationary scenes and
a demonstration of the capabilities of our setup to scan \mm{eight} sequences with high-speed motion.
\item A dataset comprising all static and dynamic scenes recorded with our setup, \mm{and source code}.
To the best of our knowledge it is the first public dataset of its kind.
\end{itemize}
The following sections review the related work (Section~\ref{sec:related-work}),
present our approach \mm{ESL} (Section~\ref{sec:methodolody}),
and evaluate the method, comparing against the state-of-the-art and against ground truth data (Section~\ref{sec:experim}).
\section{Event-based Structured Light Systems}
\label{sec:related-work}
Prior structured light (SL) systems that have addressed the problem of depth estimation with event cameras are summarized in Table~\ref{tab:related-work}.
Since event cameras are novel sensors (commercialized since 2008), there are only a handful of papers on SL systems.
These can be classified according to the shape of the light source (point, line, 2D pattern)
and the number of event cameras used.
One of the earliest works combined a DVS with a pulsed laser line to reconstruct a small terrain~\cite{Brandli13fns}.
The pulsed laser line was projected at a fixed angle with respect to the DVS while the terrain moved beneath, perpendicular to the projected line.
The method used an adaptive filter to distinguish the events caused by the laser (up to $f\!\sim$\SI{500}{\Hz}) from the events caused by noise or by the terrain's motion.
\input{floats/fig_hw_setup}
\input{floats/table_background}
The SL system MC3D~\cite{Matsuda15iccp} comprised a laser point projector (operating up to \SI{60}{\Hz}) and a DVS.
The laser raster-scanned the scene, and its reflection was captured by the DVS, which converted temporal information of events at each pixel into disparity.
It exploited the redundancy suppression and high temporal resolution of the DVS, also showing appealing results in dynamic scenes.
In \cite{Leroux18arxiv}, a Digital Light Processing (DLP) projector was used to illuminate the scene with frequency-tagged light patterns.
Each pattern's unique frequency facilitated the establishment of correspondences between the patterns and the events,
leading to a sparse depth map that was later interpolated.
Recently, \cite{Martel18iscas} combined a laser light source with a \emph{stereo} setup consisting of two
DAVIS event cameras~\cite{Brandli14ssc}.
The laser illuminated the scene and the synchronized event cameras recorded the events generated by the reflection from the scene.
Hence the light source was used to generate stereo point correspondences, which were then triangulated (back-projected) to obtain a 3D reconstruction.
More recently, \cite{Mangalore20spl} proposed a SL system with a fringe projector and an event camera.
A sinusoidal 2D pattern with different frequencies illuminated the scene, and its reflection was captured by the camera and processed (by phase unwrapping) to generate depth estimates.
The closest work to our method is MC3D~\cite{Matsuda15iccp} since both use a laser point-projector and a single event camera,
which is a sufficiently general and simple scenario that allows us to exploit the high-speed advantages of event cameras and the focusing power of a point light source.
In both methods we may interpret the laser and camera as a stereo pair.
The principle behind MC3D is to map the spatial disparity between the projector and event camera to temporal information of the events.
When events are generated, their timestamps are mapped to disparity by multiplying by the projector's scanning speed.
This operation amplifies the noise inherent in the event timestamps and leads to brittle stereo correspondences.
Moreover, this noise amplification depends on the projector's speed, which is the product of the projector resolution and the scanning frequency. Hence, MC3D's performance degrades as the scanning frequency increases.
By contrast, our method maximizes the spatio-temporal consistency between the projector's and event camera's data, thus leading to lower errors (especially with higher scanning frequencies).
By exploiting the regularities in neighborhoods of event data, as opposed to the point-wise operations in MC3D, our method improves robustness against noise.
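To make the noise amplification concrete, the following sketch shows how timestamp jitter maps to disparity error in proportion to the scanning frequency; the \SI{20}{\micro\second} jitter value is an illustrative assumption, not a measurement from our camera:

```python
def column_error_px(jitter_std_s, f_hz, lines=1080):
    """Standard deviation (in pixels) of the disparity error caused by
    timestamp jitter when, as in MC3D, event timestamps are mapped to
    projector columns via the sweep speed of f * H pixels per second
    along an epipolar line."""
    return jitter_std_s * f_hz * lines

# Illustrative numbers: doubling the scanning frequency doubles the error.
err_60 = column_error_px(20e-6, 60)    # error at 60 Hz
err_120 = column_error_px(20e-6, 120)  # twice as large at 120 Hz
```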
\section{Depth Estimation}
\label{sec:methodolody}
This section introduces basic considerations of the event camera--projector setup (Section~\ref{sec:method:preliminaries})
and then presents our optimization approach to depth estimation (Section~\ref{sec:method:energy-based-formulation}),
which maximizes the spatio-temporal consistency between the signals of the event camera and the projector.
Overall, our method is summarized in Fig.~\ref{fig:geometricConfig} and Algorithm~\ref{alg:pseudo-code-patches}.
\subsection{Basic Considerations}
\label{sec:method:preliminaries}
\input{floats/fig_geometric_setup}
We consider the problem of depth estimation using a laser point projector and an event camera.
Fig.~\ref{fig:geometricConfig} illustrates the geometry of our configuration.
The projector illuminates the scene by moving a laser light source in a raster scan fashion. %
The changes in illumination caused by the laser are observed by the event camera, whose pixels respond asynchronously by generating events\footnote{An event camera generates an event $e_k = (\mathbf{x}_k,t_k,p_k)$ at time $t_k$ when the increment of logarithmic brightness at the pixel $\mathbf{x}_k=(x_k,y_k)^\top$
reaches a predefined threshold $C$:
$L(\mathbf{x}_k,t_k) - L(\mathbf{x}_k,t_k-\Delta t_k) = p_k C$, where $p_k \in \{-1,+1\}$ is the sign (polarity) of the brightness change,
and $\Delta t_k$ is the time since the last event at the same pixel location. \cite{Gallego20pami,Gallego15arxiv}}.
Ideally, every camera pixel receives light from a single scene point, which is illuminated by a single location of the laser as it sweeps through the projector's pixel grid (Fig.~\ref{fig:geometricConfig}).
Since the light source moves in a predefined manner and the event camera and the projector are synchronized,
as soon as an event is triggered, one can match it to the current light source pixel (neglecting latency, light traveling time, etc.)
to establish a stereo correspondence between the projector and the camera.
This concept of converting \emph{point-wise} temporal information into disparity was explored in~\cite{Matsuda15iccp},
which relied on precise timing of both laser and event camera to establish accurate correspondences.
However, this one-to-one, ideal situation breaks down due to noise as the scanning speed increases.
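The per-pixel event-generation model in the footnote can be made concrete with a small sketch; the contrast threshold and the sampled signal below are illustrative:

```python
def events_from_log_brightness(t, logL, C=0.25):
    """Idealized event generation for one pixel, following the footnote
    model: an event with polarity p is emitted whenever the log-brightness
    has changed by the contrast threshold C since the last event.
    `t` and `logL` are equal-length sample sequences; C is illustrative."""
    events, ref = [], logL[0]
    for tk, Lk in zip(t[1:], logL[1:]):
        while abs(Lk - ref) >= C:      # large steps may emit several events
            p = 1 if Lk > ref else -1
            ref += p * C
            events.append((tk, p))
    return events
```

A linear brightness ramp thus produces a regular train of same-polarity events, which is the signal the raster-scanning laser induces as it sweeps past a pixel.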
Let us introduce the time constraints and effects from both devices: projector and event camera.
\textbf{Projector's Sweeping Time and Sensor's Temporal Resolution}.
Without loss of generality, we first assume the projector and the event camera are in a canonical stereo configuration, i.e., epipolar lines are horizontal (this can be achieved via calibration and rectification).
The time that it takes for the projector to move its light source from one pixel to the next one horizontally in the raster scan, $\delta t$, is inversely proportional to the scanning frequency $f$ and the projector's spatial resolution ($W \times H$ pixels):
\vspace{-0.5ex}
\begin{equation}
\label{eq:projector-time-dt}
\delta t = 1 / (f\,H\,W).
\vspace{-0.5ex}
\end{equation}
For example, our projector scans at $f=\SI{60}{\Hz}$ and has $1920 \times 1080$ pixels,
thus it takes $1/f = \SI{16.6}{\milli\second}$ to sweep over all its pixels, spending at most $\delta t \approx \SI{8}{\nano\second}$ per pixel.
This is considerably smaller than the temporal resolution of event cameras (\SI{1}{\micro\second})
(i.e., the camera cannot perceive the time between consecutive raster-scan pixels).
To overcome this issue and be able to establish stereo correspondences along epipolar lines from time measurements, we take advantage of geometry:
we \emph{rotate} the projector by \SI{90}{\degree}, so that the raster scan is now vertical (Fig.~\ref{fig:geometricConfig}).
The time that it now takes for the projector to move its light source from a pixel to its adjacent one on the still horizontal epipolar line is
\vspace{-0.5ex}
\begin{equation}
\label{eq:projector-time-dt-line}
\delta t_{\text{line}} = 1 / (f\,H),
\vspace{-0.5ex}
\end{equation}
the time that it takes to sweep over one of the $H$ raster lines.
For the above projector, $\delta t_{\text{line}}\approx \SI{15.4}{\micro\second}$, which is larger than the temporal resolution of most event cameras~\cite{Gallego20pami}.
Hence, the event camera is now able to distinguish between consecutive projector pixels on the same epipolar line.
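The two sweep-timing equations above can be checked numerically for the \SI{60}{\Hz}, $1920 \times 1080$ device described in the text:

```python
# Projector sweep timing from the two equations above.
f, W, H = 60, 1920, 1080

dt_pixel = 1.0 / (f * H * W)  # time between adjacent pixels in the sweep
dt_line = 1.0 / (f * H)       # time between adjacent columns on an epipolar
                              # line after rotating the projector by 90 deg

# dt_pixel is approx. 8 ns (below the 1 us event-camera resolution),
# whereas dt_line is approx. 15.4 us (resolvable by the camera).
```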
\textbf{Event camera noise sources}: Let us describe the different noise characteristics of event cameras that factor into this system.
\emph{Latency} is the time it takes for an event to be output after the logarithmic intensity change exceeds the threshold.
Typically, this latency ranges from \SI{15}{\micro\second} to \SI{1}{\milli\second}.
Since this affects all the timestamps equally, this can be considered as a constant offset that does not affect relative timestamps between consecutive events.
\emph{Jitter} is the random noise that appears in the timestamps.
This can have a huge variance depending on the scene and the illumination conditions.
\emph{BurstAER mode}. This pixel read-out mode is common in high-resolution event cameras.
It is used to quickly read events from the pixel array.
Instead of reading out each event pixel individually (which takes longer for higher-resolution cameras), this mode reads out an entire row or group of rows at once and assigns the same timestamp to all events in those rows.
This causes banding effects in the event timestamps,
which also degrade the quality of the reconstructed depth map.
\subsection{Maximizing Spatio-Temporal Consistency}
\label{sec:method:energy-based-formulation}
The method in \cite{Matsuda15iccp} (mentioned in Section~\ref{sec:method:preliminaries} and used as a baseline in Section~\ref{sec:experim:baseline}) computes disparity independently for each event and is therefore highly susceptible to noise, especially as the scanning speed increases.
We now propose a method that processes events in space-time neighborhoods
and exploits the regularity of surfaces present in natural scenes
to improve robustness against noise and produce spatially coherent 3D reconstructions.
\textbf{Time Maps}:
The laser projector illuminates the scene in a raster scan fashion. %
During one scanning interval, $T = 1/f$, the projector traverses each of its pixels $\bx_{p}$ at a precise time $\tau_{p}$,
which allows us to define a time map over the projector's pixel grid: $\bx_{p} \mapsto \tau_{p}(\bx_{p})$.
Similarly for the event camera we can define another time map (see \cite{Lagorce17pami}) $\bx_{c} \mapsto \tau_{c}(\bx_{c})$,
where $\tau_{c}(\bx_{c})$ records the timestamp of the last event at pixel $\bx_{c}$.
Owing to this similarity between time maps and the fact that the projector emits light whereas the camera acquires it,
we think of the projector as an ``inverse'' event camera.
That is, the projector creates an ``illumination event'' $\tilde{e}=(\bx_{p},t_p,1)$ when light at time $t=t_p$ traverses pixel $\bx_{p}$.
These ``illumination events'' are sparse, follow a raster-like pattern and are $\delta t \approx \SI{8}{\nano\second}$ apart \eqref{eq:projector-time-dt}.
For simplicity, we do not make a distinction between $\tau_{p}$ and $\tau_{c}$ and refer to them as time maps (i.e., regardless of whether they are in the projector's or the event camera's image plane).
Exemplary time maps are shown in Fig.~\ref{fig:geometricConfig}.
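A sketch of the event camera's time map (the projector's follows analogously from the known scan pattern); the event-tuple layout is an assumption for illustration:

```python
import numpy as np

def camera_time_map(events, width, height):
    """Time map x_c -> tau_c(x_c) over one scan interval: the timestamp of
    the most recent event at each pixel, NaN where no event arrived.
    `events` is an iterable of tuples (x, y, t, polarity)."""
    tau = np.full((height, width), np.nan)
    for x, y, t, _ in events:
        tau[y, x] = t             # later events overwrite earlier ones
    return tau
```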
\textbf{Geometric Configuration}:
A point $\bx_{c}$ on the event camera’s image plane transfers onto a point $\bx_{p}$ on the projector's image plane following a chain of transformations that involves the surface of the objects in the scene (Fig.~\ref{fig:geometricConfig}).
If we represent the surface of the objects using the depth $Z$ with respect to the event camera, we have:
\vspace{-0.5ex}
\begin{equation}
\label{eq:transferPointFromCamera}
\bx_{p} = \pi_p \bigl(\TE\; \pi_c^{-1} \bigl(\bx_{c}, Z(\bx_{c})\bigr)\bigr),
\vspace{-0.5ex}
\end{equation}
where $\pi_p$ is the perspective projection on the projector's frame,
$\TE$ is the rigid-body motion from the camera to the projector,
and $\pi_c^{-1}$ is the inverse perspective projection of the event camera
(assumed to be well-defined by a unique point of intersection between the viewing ray from the camera and the surfaces in the scene).
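For pinhole models, the point-transfer chain above reduces to a back-projection, a rigid-body motion, and a projection; this is a sketch with our own function name and calibrated intrinsic matrices $K_c$, $K_p$ assumed given:

```python
import numpy as np

def transfer_point(x_c, Z, K_c, K_p, R, t):
    """Point transfer of the equation above for pinhole models: back-project
    the camera pixel x_c at depth Z(x_c), move the 3-D point into the
    projector frame via the rigid-body motion (R, t), and project it onto
    the projector's image plane. K_c, K_p are 3x3 intrinsic matrices."""
    ray = np.linalg.inv(K_c) @ np.array([x_c[0], x_c[1], 1.0])
    X_c = Z * ray / ray[2]        # pi_c^{-1}: 3-D point in camera frame
    X_p = R @ X_c + t             # T: camera-to-projector transformation
    u = K_p @ X_p                 # pi_p: perspective projection
    return u[:2] / u[2]
```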
\textbf{Time Constancy Assumption}.
In the above geometric configuration,
the ``illumination events'' from the projector induce regular events on the camera.
Equivalently, in terms of timestamps,
the time map $\tau_{p}$ on the projector’s image plane induces a time map $\tau_{c}$ on the camera’s image plane:
\vspace{-0.5ex}
\begin{equation}
\label{eq:IdealTimeSurfaceTransfer}
\tau_{c} (\bx_{c}) = \tau_{p} (\bx_{p}).
\vspace{-0.5ex}
\end{equation}
This equation states a \emph{time-consistency principle} between $\tau_{c}, \tau_{p}$, which assumes negligible travel time and photoreceptor delay~\cite{Zhou18eccv,Ieng18fnins,Zhou20tro}, i.e., instantaneous transmission from projector to camera, as if ``illumination events'' and regular events were simultaneous.
This time-consistency principle will play the same role that photometric consistency (e.g., the brightness constancy assumption $I_2(\mathbf{x}_2) = I_1(\mathbf{x}_1))$ plays in conventional (i.e., passive) multi-view stereo.
\textbf{Disparity map from stereo matching}.
We formulate the problem of depth estimation using epipolar search, where we compare local neighborhoods of ``illumination'' and regular events (of size $W \!\times W \!\times T$) on the rectified image planes, seeking to maximize their consistency.
In terms of time maps, a neighborhood $\tau_\star(\mathbf{x}_\star, W)$ of size $W \times W$ pixels around point $\mathbf{x}_\star$ is a compact representation of the spatio-temporal neighborhood of $\mathbf{x}_\star$, since it contains not only spatial but also temporal information, by definition of $\tau_\star$.
Our goal then becomes to maximize consistency~\eqref{eq:IdealTimeSurfaceTransfer}, which we do by searching for the $\tau_{p}(\bx_{p}, W)$ (along the epipolar line) that minimizes the error
\begin{equation}
\label{eq:objectiveFunction}
Z^\ast \doteq \arg\min_{Z} C(\bx_{c}, Z),
\end{equation}
\begin{equation}
\label{eq:residualCalculation}
C(\bx_{c}, Z) \doteq \| \tau_{c}(\bx_{c}, W) - \tau_{p}(\bx_{p}, W) \|^{2}_{L^2(W\times W)}.
\end{equation}
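The epipolar search behind the two equations above can be sketched as follows (a simplified brute-force version that ignores image borders and sub-pixel refinement; function name and defaults are ours):

```python
import numpy as np

def best_disparity(tau_c, tau_p, x_c, W=5, max_disp=20):
    """Slide a W x W patch of the projector time map along the (rectified,
    horizontal) epipolar line and return the disparity minimizing the
    squared time-map difference, per the cost C(x_c, Z) above."""
    r = W // 2
    x, y = x_c
    patch_c = tau_c[y - r:y + r + 1, x - r:x + r + 1]
    costs = []
    for d in range(max_disp):
        xp = x - d                # candidate projector column
        patch_p = tau_p[y - r:y + r + 1, xp - r:xp + r + 1]
        costs.append(np.sum((patch_c - patch_p) ** 2))
    return int(np.argmin(costs))  # disparity; depth follows by triangulation
```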
\input{floats/alg_pseudocode}
\textbf{Discussion of the Approach}.
The temporal noise characteristics of event cameras (e.g., jitter, latency, BurstAER mode) influence the quality of the obtained depth maps.
The advantages of the proposed method are as follows.
(\emph{i}) \emph{Robustness to noise} (event jitter):
By considering spatio-temporal neighborhoods of events for stereo matching, our method is less susceptible to the jitter of individual events than point-wise methods~\cite{Matsuda15iccp}.
(\emph{ii}) \emph{Less data required}:
Point-wise methods improve depth accuracy on static scenes by averaging depth over multiple scans~\cite{Matsuda15iccp}.
Our method exploits spatial relationships between events, which makes up for temporal averaging, and therefore produces good results with less data, thus enabling better reconstructions of dynamic scenes.
We may further smooth the depth maps by using a non-linear refinement step.
(\emph{iii}) \emph{Single step stereo triangulation}: Depth parametrization and stereo matching are combined in a single step, as opposed to the classical two-step approach of first establishing correspondences and then triangulating depth like SGM or SGBM.
This improves accuracy by removing triangulation errors from non-intersecting rays.
(\emph{iv}) \emph{Trade-off controllability}:
Parameter $W$ allows us to control the quality of the estimated depth maps, with a trade-off:
a small $W$ produces fine-detailed but noisy depth maps,
whereas a large $W$ filters out noise at the expense of recovering fewer details, with (over-)smooth depth maps.
Noise due to BurstAER mode or temporal resolution may affect large pixel areas.
We may mitigate this type of noise by using large neighborhoods at the expense of smoothing depth discontinuities.
On the downside, the method is computationally more expensive than~\cite{Matsuda15iccp}, although it is still practical.
The pseudo-code of the method is given in Alg.~\ref{alg:pseudo-code-patches}.
Overall, Alg.~\ref{alg:pseudo-code-patches} may be interpreted as a principled non-linear method to recover depth from raw measurements, which may be initialized by a simpler method, such as~\cite{Matsuda15iccp}.
\section{Experiments}
\label{sec:experim}
This section evaluates the performance of our event-based SL system for depth estimation.
We first introduce the hardware setup (Section~\ref{sec:experim:hardware})
and the baseline methods and ground truth used for comparison (Section~\ref{sec:experim:baseline}).
Then we perform experiments on static scenes to quantify the accuracy of Alg.~\ref{alg:pseudo-code-patches},
and on dynamic scenes to show its high-speed acquisition capabilities (Section~\ref{sec:experim:results}).
\subsection{Hardware Setup}
\label{sec:experim:hardware}
To the best of our knowledge, there is no available dataset on which the proposed method can be tested.
Therefore, we build our setup using a Prophesee event camera and a laser point source projector (Fig.~\ref{fig:setup}).
\textbf{Event Camera}:
In our setup, we use a Prophesee Gen3 camera~\cite{Posch11ssc,propheseeevk}, with a resolution of $640 \times 480$ pixels.
This sensor provides only regular events (change detection, not exposure measurement) which are used for depth estimation.
We use a lens with a field of view (FOV) of \SI{60}{\degree}.
\textbf{Projector Source}:
We use a Sony Mobile projector MP-CL1A.%
The projector has a scanning speed of \SI{60}{\Hz} and a resolution of $1920\times 1080$ pixels.
During one scan (an interval of \SI{16}{\milli\second}), the point light source moves in a raster scanning pattern. %
The light source consists of a Laser diode (Class 3R), of wavelength \SIrange{445}{639}{\nano\meter}.
The event camera and the laser projector are synchronized via an external jack cable.
The projector's FOV is \SI{20}{\degree}.
The projector and camera are \SI{11}{\centi\meter} apart and their optical axes form a \SI{26}{\degree} angle.
\textbf{Calibration}:
We calibrate the intrinsic parameters of the event camera with a standard calibration tool (Kalibr \cite{Furgale13iros}), applied to images of a checkerboard pattern viewed from different angles; the images are obtained by converting events to images with E2VID \cite{Rebecq19pami}.
We calibrate the extrinsic parameters of the camera-projector setup and the intrinsic parameters of the projector using a standard tool for SL systems~\cite{Moreno12impvt}.
\input{floats/table_comparison}
\input{floats/fig_final_stationary}
\subsection{Baselines and Ground Truth}
\label{sec:experim:baseline}
Let us specify the depth estimation methods used for comparison and how ground truth depth is provided.
\textbf{MC3D Baseline}.
We implemented the state-of-the-art method proposed in \cite{Matsuda15iccp}.
Moreover, we improved it by removing the need to scan the two end-planes of the scanning volume, which were used to linearly interpolate depth.
The details are described in the supplementary material.
Due to the timestamp jitter of the event camera and noisy correspondences (e.g., missing matches), the disparity map for a single scanning period of \SI{16}{\milli\second} is typically noisy and has many gaps (``holes'').
Hence, in post-processing we apply a median filter (as also used by~\cite{Matsuda15iccp}). %
However, this does not remove all noise, so we additionally apply hole-filling inpainting and total variation (TV) denoising.
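As an illustration, the hole-filling and filtering steps can be sketched as follows. This is a minimal stand-in (nearest-valid-pixel filling plus a median filter; TV denoising is omitted), not our exact implementation; the NaN-coded hole convention and the filter size are assumptions of this sketch.

```python
import numpy as np
from scipy import ndimage

def postprocess_depth(depth, median_size=5):
    """Fill holes (NaNs) with the nearest valid depth, then median-filter.

    depth: 2-D float array; NaN marks missing disparity/depth ("holes").
    """
    holes = np.isnan(depth)
    # For every pixel, indices of the nearest hole-free pixel
    # (distance transform to the nearest zero of `holes`).
    idx = ndimage.distance_transform_edt(
        holes, return_distances=False, return_indices=True)
    filled = depth[tuple(idx)]
    # Median filtering suppresses residual speckle from event-timestamp jitter.
    return ndimage.median_filter(filled, size=median_size)
```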
In the experiments, we use as baseline the MC3D method~\cite{Matsuda15iccp} with a single scan (\SI{16}{\milli\second}).
\textbf{SGM Baseline}.
The main advantage of formulating the projector as an inverse event camera with an associated time map is that any stereo algorithm can be applied to compute disparity between the projector's and the event camera's time maps.
We therefore test the Semi-Global Matching (SGM) method~\cite{Hirschmuller08pami} on such timestamp maps.
\textbf{Ground truth}. We average the scans of MC3D over a period of \SI{1}{\second}.
With a frequency of \SI{60}{\Hz}, this temporal averaging approach combines $60$ depth scans into one.
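A minimal sketch of this temporal averaging, under the assumption that holes in individual scans are coded as NaN (so a pixel only needs to be observed in some of the 60 scans):

```python
import numpy as np

def average_scans(depth_maps):
    """Temporal average of per-scan depth maps; NaN marks holes in a scan.

    depth_maps: array-like of shape (num_scans, H, W).
    With a 60 Hz projector, 1 s of data gives num_scans = 60.
    """
    stack = np.asarray(depth_maps, dtype=float)
    # nanmean ignores holes, so a pixel contributes wherever it was observed.
    return np.nanmean(stack, axis=0)
```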
\textbf{Evaluation metrics}.
We define two evaluation metrics:
($i$) the root mean square error (\emph{RMSE}), namely the Euclidean distance between estimates and ground truth, measured in \si{\centi\meter}, and ($ii$) the \emph{fill rate} (or completeness), namely the percentage of ground-truth points that the proposed method estimates within a certain error.
RMSE is often used to evaluate the quality of depth maps;
however, this metric is heavily influenced by the scene depth, especially if there are missing points in the estimated depth map.
We therefore also measure the fill rate, with a depth error threshold of 1\% of the average scene depth.
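The two metrics can be sketched as follows; the NaN hole convention and the treatment of missing estimates (they count against the fill rate) are assumptions of this illustration, not a specification of our evaluation code.

```python
import numpy as np

def depth_metrics(estimate, ground_truth, rel_threshold=0.01):
    """RMSE (same unit as the inputs) and fill rate of a depth map.

    Pixels missing in the estimate (NaN) reduce the fill rate; the error
    threshold is a fraction (default 1%) of the average scene depth.
    """
    gt_valid = ~np.isnan(ground_truth)
    est_valid = gt_valid & ~np.isnan(estimate)
    err = estimate[est_valid] - ground_truth[est_valid]
    rmse = float(np.sqrt(np.mean(err ** 2)))
    threshold = rel_threshold * np.nanmean(ground_truth)
    fill_rate = float((np.abs(err) < threshold).sum() / gt_valid.sum())
    return rmse, fill_rate
```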
\subsection{Results}
\label{sec:experim:results}
We assess the performance of our method on static and dynamic scenes, as well as in HDR illumination conditions.
\vspace{-1ex}
\subsubsection{Static Scenes}
\label{sec:experim:static-scenes}
Static scenes enable the acquisition of accurate ground truth by temporal averaging, which ultimately allows us to assess the accuracy of our method.
To this end, we evaluate our method on ten static scenes with increasing complexity, including:
a 3D printed model of Michelangelo's David, a 3D printed model of a heart, book-duck-cylinder, plants, City of Lights, and cycle-plant.
We also include long-range indoor scenes of a desk and a room with a maximum depth of \textbf{\SI{6.5}{\meter}}.
The scenes have varying depths (range and average depth).
Depth estimation results are collected in Fig.~\ref{fig:experim:static} and Table~\ref{tab:comparison}.
The depth error was measured on the overlapping region with the ground truth.
As can be observed, on all scenes our method, which processes the event data triggered by a single scan pass of the \SI{60}{\Hz} projector, outperforms the MC3D baseline with the same input data (\SI{16}{\milli\second}).
Although SGM gives satisfactory results in comparison to MC3D, it suffers from artefacts that arise when temporal consistency is not strictly adhered to.
Table~\ref{tab:comparison} reports the fill rate (completion) and RMS error for our method and the two baselines (MC3D, SGM).
The even rows incorporate post-processing (``proc''), which fills in holes (i.e., increases the fill ratio) and decreases the RMS depth error. The best results are obtained using our method and post-processing.
However, the effect of post-processing is marginal in our method compared to the effect it has on the baseline methods.
\input{floats/fig_zoomduck}
Fig.~\ref{fig:depthdiff:signed} zooms into the signed depth errors for the Book-Duck scene (top row in Fig.~\ref{fig:experim:static}).
Here, SGM gives the largest errors, especially at the duck's edges;
MC3D yields smaller errors, but still has marked object contours and gaps;
finally, our approach has the smallest error contours.
\subsubsection{High Dynamic Range Experiments}
We also assess the performance of our method on a static scene under different illumination conditions (Fig.~\ref{fig:experim:static:hdr}),
which demonstrates the advantages of using an event-based SL depth system over a conventional-camera--based depth sensor such as the Intel RealSense D435.
\input{floats/fig_hdr}
Fig.~\ref{fig:experim:static:hdr} shows qualitatively how our method provides consistent depth maps under different illumination conditions, whereas a frame-based depth sensor, e.g. Intel RealSense, does not cope well with such challenging scenarios.
Table~\ref{tab:comparison:hdr} compares our method against the event-based baselines in HDR conditions.
While all event-based methods estimate consistent depth maps across the HDR conditions, our method outperforms the MC3D baseline significantly.
We observe that as illumination increases, there is a slight decrease of the errors.
The reason is that the noise (i.e., jitter) in the event timestamps decreases with illumination.
\subsubsection{Sensitivity with respect to the Neighborhood Size}
\label{sec:experim:sensitivity}
Fig.~\ref{fig:experim:patch-sensitivity} qualitatively shows the performance of Alg.~\ref{alg:pseudo-code-patches} as the size of the local aggregation neighborhood increases from $W=3$ to $W=15$ pixels on the event camera's image plane.
As anticipated in Section~\ref{sec:method:energy-based-formulation}, there is a trade-off between accuracy, detail preservation, and noise reduction.
Our method allows us to control the desired depth estimation quality along this trade-off via the parameter $W$.
\input{floats/fig_windowsize}
\subsubsection{Dynamic Scenes}
\label{sec:experim:motion-scenes}
\input{floats/fig_dynamic}
We also test our method on eight dynamic scenes (Fig.~\ref{fig:experim:dynamic}) with diverse challenging scenarios to show the capabilities of the proposed method to recover depth information in high-speed applications.
Specifically, Fig.~\ref{fig:experim:dynamic} shows depth recovered using our method and the baselines for the eight sequences.
The figure shows a good performance of our technique in fast motion scenes and in the presence of \mbox{(self-)}occlusions (e.g., Scotch tape and Multi-object) and thin structures (e.g., fan).
Objects do not need to be convex to recover depth with the proposed SL system.
We observe that MC3D depth estimation is inaccurate due to inherent noise in the event timestamps.
In the case of tape spin and fan scenes, MC3D depth has significant holes which cannot be recovered even after post-processing.
SGM performs better than MC3D; however, its performance decreases in the presence of noise:
in the origami fan scene, the depth along the wing of the fan and on the wall has significant artefacts.
Our method is robust to these artefacts and can accurately estimate depth in challenging scenes.
Qualitative comparison against Intel RealSense shows favorable performance of our event-based SL method compared to frame-based SL for dynamic scenes.
Because it is \emph{challenging} (if not impossible) to obtain \emph{accurate} ground truth depth at \si{\milli\second} resolution in natural dynamic scenes (such as the deforming origami fan rotating at variable speed, or the spinning tape), we do not report quantitative results.
In static scenes, we acquire accurate ground truth depth by time-averaging \SI{1}{\second} of scan data.
However, this is not possible in dynamic scenes.
The static scene experiments allow us to assess the accuracy of our method (which only requires \SI{16}{\milli\second} of data) and provide a ballpark for the accuracy of dynamic scenes. %
\textbf{Discussion}.
The experiments show that the proposed method produces, with the input data from a single scan pass, accurate depth maps at high frequency.
This was possible by exploiting local event correlations at the expense of increasing the computational effort compared to MC3D.
The current Python implementation of the proposed method is 38 times slower than MC3D.
Nevertheless, we think this can be optimized further for real-time operation.
We also found that the method suffers in the presence of strong specularities (coin sequence, bike scene).
Still, our method is able to handle specularities better than passive systems that process images using the brightness constancy assumption, which breaks down in these scenarios.
\section{Conclusion}
\label{sec:conclusion}
We have introduced a novel method for depth estimation using a laser point-projector and an event camera.
The method aims at exploiting correlations between events (sparse space-time measurements), which previous methods on the same setup had not explored.
We formulated the problem from first principles, aiming at maximizing spatio-temporal consistency while casting the problem
in an amenable stereo form.
The experiments showed that the proposed method outperforms the frame-based (Intel RealSense) and event-based baselines (MC3D, SGM),
producing, given input data from a single scan pass, similar 3D reconstruction results as the temporal average of 60 scans with MC3D.
The method also provides best results in dynamic scenes and under broad illumination conditions.
Exploiting local correlations was possible by introducing more event processing effort into the system.
The effect of post-processing on the output of our method was marginal, indicating that the method's raw output is already accurate.
Finally, we think that the ideas presented here can spark a new set of techniques for high-speed depth acquisition and denoising with event-based structured light systems.
\section*{Acknowledgement}
\up{We thank Dr.~Dario Brescianini and Kira Erb for their help with the prototype and data collection.}
\section*{MC3D Baseline}
We implemented the state-of-the-art method proposed in \cite{Matsuda15iccp}.
Moreover, we improved it by removing the need to scan the two end-planes of the scanning volume, which were used to linearly interpolate depth, as we explain next.
The method in \cite{Matsuda15iccp} required scanning two planes at known distances from the setup at the two ends of the scanning volume.
These planes were used for calibration and depth estimation.
If $d_n, d_f$ are the disparities corresponding to these two %
planes at depths $Z_n, Z_f$ (near and far, respectively),
then the depth $Z$ at a pixel $(x,y)$ with disparity $d(x,y)$ was linearly interpolated by~\cite{Wang21jsen}:
\begin{equation}
Z(x,y) = Z_n + (Z_f - Z_n) \, \frac{d(x,y) - d_n(x,y)} {d_f(x,y) - d_n(x,y)}.
\end{equation}
This first-order method, which assumes pinhole models and a small illumination angle approximation throughout the scan volume,
was justified in \cite{Matsuda15iccp} to overcome the low spatial resolution of the DVS128 ($128 \times 128$ pixels) and the jitter in the event timestamps.
In contrast to the setup in~\cite{Matsuda15iccp}, we use a higher resolution ($\approx20\times$) event camera and calibrate using events.
Therefore, we can estimate depth from disparity without the need for prior scanning of the end-planes.
In our version of MC3D, depth is given by the classical triangulation equation for a canonical stereo configuration
(assuming the image planes of the projector and event camera are rectified using the calibration information):
\begin{equation}
\label{eq:depth_mc3d}
Z(\bx_{c}) = b \frac{F}{|\bx_{c} - \bx_{p}|},
\end{equation}
where $b$ is the stereo baseline, $F$ is the focal length, and the denominator is the disparity.
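As a sketch, depth recovery from the triangulation equation above reduces to a one-liner per correspondence; the baseline and focal-length values below are illustrative placeholders, not our calibrated parameters.

```python
def depth_from_disparity(x_cam, x_proj, baseline_m=0.11, focal_px=700.0):
    """Canonical-stereo triangulation Z = b * F / |x_c - x_p|.

    x_cam, x_proj: rectified horizontal coordinates (pixels) of the same
    point in the event camera and in the projector "inverse camera".
    baseline_m, focal_px: hypothetical calibration values for illustration.
    """
    disparity = abs(x_cam - x_proj)
    if disparity == 0:
        raise ValueError("zero disparity: point at infinity")
    return baseline_m * focal_px / disparity
```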
\section{Introduction}
For a field $\mathbf{F}$ and a positive integer $h$, we define linear forms
\begin{eqnarray}\label{eq11}
\varphi(x_1,\ldots,x_h)=c_1 x_1+\cdots+c_h x_h,
\end{eqnarray}
where $c_i\in \mathbf{F}$ for all $i\in\{1,\ldots,h\}$. Let $V$ be a vector space over the field $\mathbf{F}$. For every nonempty set $A\subseteq V$, let
$$
A^h=\left\{ (a_1,a_2,\ldots,a_h):a_i\in A~\text{for~all}~i\in \{1,2,\ldots,h\} \right\}
$$
be the set of all $h$-tuples of elements of $A$. For $c\in \mathbf{F}$, the $c$-\emph{dilate} of $A$ is defined as
$$
c\ast A=\{ca: a\in A \}.
$$
The $\varphi$-\emph{image} of $A$ is the set
\begin{eqnarray*}
\varphi(A)&=&\left\{ \varphi(a_1,a_2,\ldots,a_h):(a_1,a_2,\ldots,a_h)\in A^h \right\}\\
&=&\{c_1a_1+\cdots+c_ha_h:~(a_1,\ldots,a_h)\in A^h\}\\
&=& c_1 \ast A+ \cdots +c_h \ast A .
\end{eqnarray*}
We call a nonempty subset $A$ of $V$ a \emph{Sidon set} for the linear form $\varphi$ (or a $\varphi$-\emph{Sidon set})
if the values $ \varphi(a_1,a_2,\ldots,a_h)$ with $(a_1,a_2,\ldots,a_h)\in A^h$ are all distinct. That is, for all $h$-tuples $(a_1,a_2,\ldots,a_h)\in A^h$ and $(a'_1,a'_2,\ldots,a'_h)\in A^h$, if
$\varphi (a_1,a_2,\ldots,a_h) =\varphi(a'_1,a'_2,\ldots,a'_h),$
then $(a_1,a_2,\ldots,a_h)= (a'_1,a'_2,\ldots,a'_h)$.
For every nonempty subset $I$ of $\{1,\ldots,h\}$, define the subset sum
\begin{eqnarray}\label{eq12}
s_I =\sum_{i\in I}c_i .
\end{eqnarray}
Let $s_\emptyset =0$. Suppose there exist disjoint subsets $I_1$ and $I_2$ of $\{1,\ldots,h\}$ with $I_1$ and $I_2$ not both empty such that
\begin{eqnarray}\label{eq13}
s_{I_{1}} =\sum_{i\in I_1}c_i =\sum_{i\in I_2}c_i=s_{I_{2}}.
\end{eqnarray}
Then $A$ is not a $\varphi$-Sidon set (see \cite{Nathanson1}).
We say that the linear form (\ref{eq11}) satisfies condition $N$ (also called property $N$) if there do not exist disjoint
subsets $I_1$ and $I_2$ of $\{1,\ldots,h\}$, not both empty, that satisfy (\ref{eq13}).
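Condition $N$ can be checked by brute force over pairs of disjoint index subsets (including, per the discussion around (\ref{eq13}), the case where exactly one subset is empty). A sketch, practical only for small $h$:

```python
from itertools import combinations

def has_condition_N(coeffs):
    """True iff no two disjoint index subsets, not both empty, have
    equal coefficient sums (s_{I1} != s_{I2}); s_{emptyset} = 0."""
    idx = range(len(coeffs))
    subsets = [s for r in range(len(coeffs) + 1)
               for s in combinations(idx, r)]
    for i1 in subsets:
        for i2 in subsets:
            # Skip the pair (empty, empty) and non-disjoint pairs.
            if (i1 or i2) and not (set(i1) & set(i2)):
                if sum(coeffs[i] for i in i1) == sum(coeffs[i] for i in i2):
                    return False
    return True
```

For example, $(c_1,c_2,c_3)=(1,2,4)$ satisfies condition $N$ (all disjoint subset sums differ), while $(1,2,3)$ does not, since $c_3 = c_1 + c_2$.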
For $J\subseteq \{1, \ldots, h\}$, we define the linear form in $\operatorname{card}(J)$ variables
$$
\varphi_{J}=\sum_{j \in J} c_{j} x_{j}.
$$
By definition, $\varphi_{\emptyset}=0$ and $\varphi_{J}=\varphi$ if $J=\{1, \ldots, h\} .$ The linear form $\varphi_{J}$ is called a contraction of the linear form $\varphi$.
For every nonempty subset $A$ of $V$, let
$$
\varphi_{J}(A)=\left\{\sum_{j \in J} c_{j} a_{j}: a_{j} \in A \text { for all } j \in J\right\}.
$$
If $A$ is a $\varphi$-Sidon set, then $A$ is a $\varphi_{J}$-Sidon set for every nonempty subset $J$ of $\{1, \ldots, h\}$.
For every subset $X$ of $V$ and vector $v \in V$, the {\em translate} of $X$ by $v$ is the set
$$
X+v=\{x+v: x \in X\}.
$$
For every subset $J$ of $\{1, \ldots, h\}$, let $J^{c}=\{1, \ldots, h\} \backslash J$ be the complement of $J$ in $\{1, \ldots, h\}$. For every subset $A$ of $V$ and $b \in V \backslash A$, we define
$$
\Phi_{J}(A, b)=\varphi_{J}(A)+\bigg(\sum_{j \in J^{c}} c_{j}\bigg) b=\varphi_{J}(A)+s_{J^{c}}\, b,
$$
the translate of the set $\varphi_{J}(A)$ by the vector $s_{J^{c}}\, b$. We have $\Phi_{\emptyset}(A, b)=\left\{\left(\sum_{j=1}^{h} c_{j}\right) b\right\}$ and $\Phi_{J}(A, b)=\varphi(A)$ if $J=\{1, \ldots, h\}$.
Let $A=\{a_k:k=1,2,3,\ldots \}$ and $B=\{b_k:k=1,2,3,\ldots \}$ be sets of integers. We say that $A$ is a {\em polynomial perturbation} of $B$ if, for some $r>0$ and some positive integer $k_0$,
$$ |a_k-b_k|< k^r$$
for all integers $k \geq k_0$. The set $A$ is a {\em bounded perturbation} of $B$ if
there exist an $m_0 > 0$ and a positive integer $k_0$ such that
$$ |a_k-b_k|< m_0$$
holds for all integers $k \geq k_0$.
Recently, Nathanson \cite{Nathanson1} posed the following problem.
\noindent{\bf Nathanson's Problem} {\em Let $\varphi$ be a linear form with integer coefficients that satisfies condition $N$. Let $B$ be a set of integers. Does there exist a $\varphi$-Sidon set of integers that is a polynomial perturbation of $B$? Does there exist a $\varphi$-Sidon set of integers that is a bounded perturbation of $B$?}
For other related results about Sidon sets,
one can refer to [1]--[4], [6], [7]. In this paper, we give an affirmative answer to the first question and
some partial results on the second.
\begin{theorem}\label{thm1}
Let $\varphi=\sum_{i=1}^h c_i x_i$ be a linear form with integer coefficients that satisfies condition $N$, and let $B=\{b_k:k=1,2,3,\ldots \}$ be a set of integers. Then there exists an infinite $\varphi$-Sidon set $A=\{a_k:k=1,2,3,\ldots \}$ of integers such that
$|a_k-b_k|< k^{4h}$ holds for all positive integers $k$.
\end{theorem}
\begin{theorem}\label{thm2}
Let $\varphi=\sum_{i=1}^h c_i x_i$ be a linear form with integer coefficients that satisfies condition $N$, let $C=\sum^{h}_{i=1}|c_i|$, and let $B=\{b_1<b_2<\cdots\}$ be
a set of integers. If there exists $\epsilon>0$ such that $|b_t - b_s|\leq \left( t-s+1 \right)^{h-\epsilon}$ for all positive integers $t>s$, then there does not exist a $\varphi$-Sidon set of integers that is a bounded perturbation of $B$.
\end{theorem}
\begin{theorem}\label{thm3}
Let $\varphi=\sum_{i=1}^h c_i x_i$ be a linear form with integer coefficients that satisfies condition $N$, $C=\sum^{h}_{i=1}|c_i|$, and $B=\{b_1<b_2<\cdots\}$ be
a set of integers. If $b_1>m$ and $b_{k+1}> C b_k+(C+1)m$ for some $m\ge 0$ and
for all positive integers $k$, then there exists a $\varphi$-Sidon set
$A=\{a_1,a_2,\ldots\}$ of integers and a constant $m_0>0$ such that
$ |a_k-b_k|<m_0 $ for all positive integers $k$.
\end{theorem}
\section{Proofs}
\begin{lemma}\label{lem21}\cite[Lemma 1]{Nathanson1}
Let $\varphi=\sum_{i=1}^h c_i x_i$ be a linear form with coefficients in the field $\mathbf{F}$. Let $V$ be a vector space over $\mathbf{F}$.
For every subset $A$ of $V$ and $b\in V\backslash A$,
$$
\varphi \left(A\cup \{b\}\right)= \bigcup_{J\subseteq \{1,\ldots,h \}} \Phi_J (A,b).
$$
If $A\cup \{b\}$ is a $\varphi$-Sidon set, then
\begin{eqnarray}\label{eq2.0}
\left\{ \Phi_J(A,b): J\subseteq \{1,\ldots,h \} \right\}
\end{eqnarray}
is a set of pairwise disjoint sets.
If $A$ is a $\varphi$-Sidon set and (\ref{eq2.0}) is a set of pairwise disjoint sets, then $A\cup \{b\}$ is a $\varphi$-Sidon set.
\end{lemma}
\begin{lemma}\label{lem22}
If $A$ is a $\varphi$-Sidon set, then any subset of $A$ is also a $\varphi$-Sidon set.
\end{lemma}
The proof of Lemma \ref{lem22} is straightforward, and we leave it to the reader.
\begin{proof}[Proof of Theorem \ref{thm1}]
We construct the $\varphi$-Sidon set $A=\{a_k:k=1,2,3,\ldots \}$ inductively. Let $a_1=b_1$; then $A_1=\{a_1\}$ is a $\varphi$-Sidon set and $|a_1-b_1|=0< 1^{4h}$.
Let $n\geq 1$ and let $A_n=\{a_1,a_2,\ldots,a_n\}$ be a set of $n$ distinct integers such that $A_n$ is a $\varphi$-Sidon set with $|a_i-b_i|< i^{4h}$ for all integers $i\leq n$. Let $b$ be an integer with $b\notin A_n$. By Lemma \ref{lem21}, the set $A_n\cup \{b\}$ is a $\varphi$-Sidon set if and only if the sets
$$\Phi_{J}(A_n,b)=\varphi_{J} (A_n)+\bigg(\sum_{j\in J^{c}} c_j\bigg)b$$
are pairwise disjoint for all $J\subseteq\{1,\ldots,h\}$. Let $J_1$ and $J_2$ be distinct subsets of $\{1,\ldots,h\}$. We have
$$ \Phi_{J_{1}}(A_n,b)\cap \Phi_{J_{2}}(A_n,b)\neq \emptyset$$
if and only if there exist integers $a_{1,j}\in A_n$ for all $j\in J_1$ and $a_{2,j}\in A_n$ for all $j\in J_2$ such that
$$ \sum_{j\in J_{1}}c_j a_{1,j}+\bigg(\sum_{j\in J_{1}^{c}} c_j\bigg)b= \sum_{j\in J_{2}}c_j a_{2,j}+\bigg(\sum_{j\in J_{2}^{c}} c_j\bigg)b.$$
Equivalently, the integer $b$ satisfies the equation
\begin{eqnarray}\label{eq2.2}
\bigg(\sum_{j\in J^{c}_{2}}c_j-\sum_{j\in J^{c}_{1}}c_j\bigg)b=\sum_{j\in J_{1}}c_j a_{1,j}-\sum_{j\in J_{2}}c_j a_{2,j}.
\end{eqnarray}
The integer
$$\begin{aligned}
c=\sum_{j\in J^{c}_{2}}c_j-\sum_{j\in J^{c}_{1}}c_j = s_{J_{2}^{c}}-s_{J_{1}^{c}}
=s_{J_{1}\backslash (J_{1}\cap J_{2})}-s_{J_{2}\backslash (J_{1}\cap J_{2})}
\end{aligned}$$
is nonzero because $J_{1}\backslash (J_{1}\cap J_{2})$ and $J_{2}\backslash (J_{1}\cap J_{2})$ are distinct and disjoint, and the linear form $\varphi$ satisfies condition $N$, and so there exists at most one integer $b $ that satisfies equation (\ref{eq2.2}).
Let $\operatorname{card}(J_1)=k_1$ and $\operatorname{card}(J_2)=k_2$. The sets $J_1$ and $J_2$ are distinct subsets of $\{1,\ldots,h\}$, and so at least one of them is a proper subset of $\{1,\ldots,h\}$. It follows that
$$ k_1+k_2=\operatorname{card}(J_1)+\operatorname{card}(J_2)\leq 2h-1.$$
The number of integers of the form
$$\sum_{j\in J_{1}}c_j a_{1,j}-\sum_{j\in J_{2}}c_j a_{2,j} $$
with $a_{1,j}\in A_n$ and $a_{2,j}\in A_n$ is at most $n^{k_1+k_2}$. The number of ordered pairs of subsets $J_1, J_2 \subseteq \{1,\ldots,h\}$ with $\operatorname{card}(J_1)=k_1$ and $\operatorname{card}(J_2)=k_2$ is
$ \dbinom{h}{k_1}\dbinom{h}{k_2}.$
Thus, the number of equations of the form (\ref{eq2.2}) is less than
$$ \sum^{h}_{k_1=0}\sum^{h}_{k_2=0}\dbinom{h}{k_1}\dbinom{h}{k_2}n^{k_1+k_2}<\left( \sum^{h}_{k_1=0}\dbinom{h}{k_1} \right)^2 n^{2h-1}=4^h n^{2h-1},$$
and so there are fewer than $4^h n^{2h-1}+n$ integers $b$ such that $b \in A_n$ or $A_n\cup \{b\}$ is not a $\varphi$-Sidon set.
Since the interval $\left( b_{n+1}-4^h n^{2h-1}-n,\, b_{n+1}+4^h n^{2h-1}+n \right)$ contains at least $2(4^h n^{2h-1}+n)-1$ integers, we can choose $a_{n+1}$ in it such that $A_{n+1}=A_{n}\cup \{a_{n+1}\}$ is a $\varphi$-Sidon set. Then
$$|a_{n+1}-b_{n+1}|<4^h n^{2h-1}+n.$$
Next, we prove that $4^h n^{2h-1}+n<(n+1)^{4h}$ for all positive integers $n$.
Case 1. $n=1$. Since $h$ is a positive integer and $2^{2h}\geq 4$, we have
$$2^{4h}=\left(2^{2h}\right)^2>2^{2h}+1=4^h+1.$$
Case 2. $n\geq 2$. It follows that
$$\begin{aligned}
(n+1)^{4h}&=(n+1)^{2h} \cdot (n+1)^{2h}
> 2^{2h} \cdot n^{2h}= 4^h \cdot n^{2h-1} \cdot n \\
&\geq 4^h \cdot n^{2h-1} \cdot 2
> 4^h \cdot n^{2h-1}+ n.
\end{aligned}$$
This completes the proof of Theorem \ref{thm1}.
\end{proof}
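To make the inductive construction concrete, the following sketch greedily perturbs a given sequence $B$ for the particular form $\varphi(x_1,x_2)=x_1+2x_2$, which satisfies condition $N$. The brute-force Sidon test and the choice of $B$ are illustrative only; the counting argument in the proof guarantees that the offset search terminates well within the bound $k^{4h}$.

```python
from itertools import product

def is_phi_sidon(A, coeffs=(1, 2)):
    """A is phi-Sidon iff all |A|^h values of phi are pairwise distinct."""
    vals = [sum(c * a for c, a in zip(coeffs, t))
            for t in product(A, repeat=len(coeffs))]
    return len(set(vals)) == len(vals)

def greedy_sidon_perturbation(B, coeffs=(1, 2)):
    """Perturb each b_k as little as possible while staying phi-Sidon."""
    A = []
    for b in B:
        chosen = None
        offset = 0
        # Try b, b+1, b-1, b+2, b-2, ... until the extended set works.
        while chosen is None:
            for cand in (b + offset, b - offset):
                if cand not in A and is_phi_sidon(A + [cand], coeffs):
                    chosen = cand
                    break
            offset += 1
        A.append(chosen)
    return A
```

For instance, for $B$ the sequence of squares, $49 + 2\cdot 1 = 1 + 2\cdot 25$, so the unperturbed squares are not $\varphi$-Sidon for this form and the greedy step must move some element.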
\begin{proof}[Proof of Theorem \ref{thm2}]
Suppose that there exist a $\varphi$-Sidon set $A=\{a_k:k=1,2,\ldots\}$ and a constant $m_0>0$ such that
$ |a_k-b_k|<m_0$ for all positive integers $k$.
For $t\geq s$, let $A_{t-s}=\{a_s,a_{s+1},\ldots,a_t \}$. By Lemma \ref{lem22}, $A_{t-s}$ is also a $\varphi$-Sidon set, so we have
$$ \varphi\left(A_{t-s}\right)=\left\{ \sum_{i=1}^{h} c_i a_i :a_i\in A_{t-s} \right\} $$
and
\begin{eqnarray}\label{eq2.3}
|\varphi\left(A_{t-s}\right)|=(t-s+1)^h.
\end{eqnarray}
Let
$$J_1=\{i:~1\le i\le h ~\text{and}~c_i>0\},\quad J_2=\{i:~1\le i\le h~\text{and}~c_i<0\}.$$
Then
$$
\max \varphi\left(A_{t-s}\right)=\sum_{i\in J_1} c_i\cdot \max(A_{t-s}) +\sum_{i\in J_2} c_i \cdot \min(A_{t-s})
<\sum_{i\in J_1} c_i (b_t+m_0) +\sum_{i\in J_2} c_i (b_s-m_0),
$$
$$
\min \varphi\left(A_{t-s}\right)=\sum_{i\in J_2} c_i \cdot \max(A_{t-s}) +\sum_{i\in J_1} c_i \cdot \min(A_{t-s})
>\sum_{i\in J_2} c_i (b_t+m_0) +\sum_{i\in J_1} c_i (b_s-m_0).
$$
Since $\varphi\left(A_{t-s}\right)$ is a set of integers, it follows that
\begin{eqnarray}\label{eq2.4}
\begin{aligned}
|\varphi\left(A_{t-s}\right)|&\le \sum_{i\in J_1} c_i (b_t+m_0) +\sum_{i\in J_2} c_i (b_s-m_0)- \bigg(\sum_{i\in J_2} c_i (b_t+m_0) +\sum_{i\in J_1} c_i (b_s-m_0)\bigg)-1 \\
&= \sum_{i=1}^{h} |c_i| (b_t+m_0)- \sum_{i=1}^{h} |c_i| (b_s-m_0) -1 \\
&= C(b_t - b_s + 2 m_0)-1.
\end{aligned}
\end{eqnarray}
By (\ref{eq2.3}) and (\ref{eq2.4}), we have
$$
(t-s+1)^h\le C(b_t - b_s + 2 m_0)-1 \leq C\left[(t-s+1)^{h-\epsilon} + 2 m_0\right]-1.
$$
This inequality cannot hold when $t-s+1$ is large enough, a contradiction.
This completes the proof of Theorem \ref{thm2}.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm3}]
Take $m_0$ with $0<m_0 \leq m$ and set $a_1=b_1$; then $A_1=\{a_1\}$ is a $\varphi$-Sidon set and $|a_1-b_1|=0< m_0$.
We proceed by induction on $k$. Suppose that $k\geq 1$ and that $A_k=\{a_1,a_2,\ldots,a_k\}$ is a $\varphi$-Sidon set of integers such that $|a_i-b_i|<m_0$ for $i=1,2,\ldots,k$.
Take a positive integer $a_{k+1}$ such that $$a_{k+1}\in \left(b_{k+1}-m_0,b_{k+1}+m_0 \right).$$
By Lemma \ref{lem21}, the set $A_k\cup \{a_{k+1}\}$ is a $\varphi$-Sidon set if and only if the sets
$$\Phi_{J}(A_k,a_{k+1})=\varphi_{J} (A_k)+\bigg(\sum_{j\in J^{c}} c_j\bigg)a_{k+1}$$
are pairwise disjoint for all $J\subseteq\{1,\ldots,h\}$.
Suppose that there exist two distinct subsets $J_1,J_2\subseteq \{1,\ldots,h\}$ such that
$$ \Phi_{J_{1}}(A_k,a_{k+1})\cap \Phi_{J_{2}}(A_k,a_{k+1})\neq \emptyset.$$
It follows that there exist integers $a_{1,j}\in A_k$ for all $j\in J_1$ and $a_{2,j}\in A_k$ for all $j\in J_2$ such that
$$ \sum_{j\in J_{1}}c_j a_{1,j}+\bigg(\sum_{j\in J_{1}^{c}} c_j\bigg)a_{k+1}= \sum_{j\in J_{2}}c_j a_{2,j}+\bigg(\sum_{j\in J_{2}^{c}} c_j\bigg)a_{k+1}.$$
Equivalently, the integer $a_{k+1}$ satisfies the equation
$$
\bigg(\sum_{j\in J^{c}_{2}}c_j-\sum_{j\in J^{c}_{1}}c_j\bigg)a_{k+1}=\sum_{j\in J_{1}}c_j a_{1,j}-\sum_{j\in J_{2}}c_j a_{2,j}.
$$
Since $J_{1}^{c}\neq J_{2}^{c}$ and the linear form $\varphi$ satisfies condition $N$,
it follows that
$\sum_{j\in J^{c}_{2}}c_j-\sum_{j\in J^{c}_{1}}c_j $
is nonzero, and so
$$
a_{k+1}=\left|\frac{\sum_{j\in J_{1}}c_j a_{1,j}-\sum_{j\in J_{2}}c_j a_{2,j}}{\sum_{j\in J^{c}_{2}}c_j-\sum_{j\in J^{c}_{1}}c_j}\right|\le \bigg|\sum_{j\in J_{1}}c_j a_{1,j}-\sum_{j\in J_{2}}c_j a_{2,j}\bigg|
\le \sum_{i=1}^h |c_i|\cdot \max\{a_1,a_2,\ldots,a_k\}
<C (b_k+m_0),
$$
where the first inequality holds because the denominator is a nonzero integer.
Thus we have
$$
a_{k+1}>b_{k+1}-m> C b_k+(C+1)m-m=C (b_k+m)\geq C (b_k+m_0)> a_{k+1},
$$
which is a contradiction. Hence $ \Phi_{J_{1}}(A_k,a_{k+1})\cap \Phi_{J_{2}}(A_k,a_{k+1})=\emptyset.$
Therefore, the set $A_k\cup \{a_{k+1}\}$ is a $\varphi$-Sidon set and $|a_i-b_i|<m_0$ for all $i\leq k+1$.
\end{proof}
\section{Introduction}
An essential goal of computational physical science is to reliably predict the relevant polymorphs and their
relative stability, and consequently the thermodynamic behavior of a substance, with only knowledge of its chemical composition.
This challenge largely persists today, more than 30 years after it was noted \cite{Maddox:1988}. Nevertheless, considerable progress has been
made towards this goal, thanks to the development of various structure prediction algorithms
\cite{Wille:1987,Wales/Doye:1997,Deaven/Ho:1995,Glass/Oganov/Hansen:2006,WangYC/etal:2010}
as well as the continuing methodology and accuracy advances of first-principles calculations \cite{Burke:2012,Jones:2015,Heyd/Scuseria/Ernzerhof:2003,Sun/etal:2016,Lejaeghere/etal:2016,Marzari/etal:2021}.
Nowadays, it is not uncommon for the mainstream crystal structure prediction (CSP) methods to predict the relevant known and unknown
polymorphic forms of a material, as well as their energy separations,
starting with a pre-given chemical composition. However, the output of such predictions, in particular the energy ranking of different
polymorphs, will necessarily depend on the underlying electronic structure methods used to compute the potential energy surface.
For instance, most CSP algorithms can predict the existence of the diamond and graphite allotropes of carbon, but
cannot tell that graphite is thermodynamically more stable at ambient conditions if conventional local/semilocal density functional
approximations (DFAs) are employed as the electronic structure solver. Similar issues occur if one asks whether
the face-centered cubic (FCC) or the hexagonal close-packed (HCP) crystal structure is preferred for rare-gas crystals under
ambient pressure \cite{Klein1976Rare,PhysRevLett.67.3263}. Thus, in the long-term endeavor of quantitatively and faithfully predicting the
thermodynamic behavior of real materials without invoking empirical inputs, further improving the accuracy and reliability of
the underlying electronic structure methods that can be conveniently used together with CSP algorithms is a must.
In fact, the above-noted phase stability of rare-gas crystals has posed a significant challenge to first principles electronic structure methods.
In particular, it is theoretically highly nontrivial to describe the delicate energy differences between the FCC and HCP phases at
zero temperature, for which a sub-meV accuracy is mandatory. Without a solid base at 0 K, it is then highly unlikely
that one can convincingly explain why rare gas atoms prefer to crystallize in the FCC phase instead of the HCP phase under ambient pressures,
let alone the entire
phase diagram of the rare-gas systems under varying thermodynamic conditions. Within the realm of first principles electronic structure theories,
conventional local and semilocal DFAs \cite{Kohn/Sham:1965,Perdew/Burke/Ernzerhof:1996} are unable to reliably describe the cohesive properties
of rare-gas systems due to their inadequate treatment of long-range
van der Waals (vdW) interactions. Complementing semi-local functionals with semiempirical vdW corrections
\cite{Becke/Johnson:2007,Grimme/etal:2010,Tkatchenko/Scheffler:2009,Tkatchenko/etal:2012} or non-local vdW density functionals
\cite{Dion/etal:2004,Klimes/Bowler/Michaelides:2010} can
significantly improve their usefulness in describing vdW-bonded systems, but it is questionable whether these
pragmatic vdW-inclusive schemes can provide the needed (sub-meV) accuracy in order to describe the phase stability of
rare-gas crystals. Beyond these approximations one may either go to the fifth-rung functionals
\cite{Perdew/Schmidt:2001} (e.g., the random phase approximation (RPA)
\cite{Bohm/Pines:1953,Langreth/Perdew:1977,Gunnarsson/Lundqvist:1976,Eshuis/Bates/Furche:2012,Ren/etal:2012b} and
beyond \cite{Ren/etal:2012,Ren/etal:2013}) within the realm of the density functional theory (DFT)
where non-local electron correlation effects are seamlessly incorporated, or simply go to wave-function based
approaches, like the second-order M{\o}ller-Plesset perturbation theory (MP2) \cite{Moller/Plesset:1934} and the coupled cluster (CC) theory \cite{Bartlett/Musial:2007}. However, MP2, which contains only
two-body (here ``body'' means atom) correlations, is considered not
sufficiently accurate to describe the cohesive properties of rare-gas crystals \cite{Hermann2009Complete,CasassaA,PhysRevB.82.205111}.
In this connection, the CC theory
truncated at the level of single, double, and perturbative triple excitations (CCSD(T)) \cite{Head-Gordon/Pople:1988}
(known as ``gold standard" in computational chemistry) is of great interest and has been applied
to study the cohesive properties of rare-gas systems \cite{Hermann2009Complete,CasassaA,PhysRevB.82.205111,PhysRevLett.79.1301,PhysRevB.62.5482,PhysRevB.60.7905,Schwerdtfeger2016Towards,PhysRevB.73.064112,PhysRevB.95.214116}.
However, in these studies, instead of directly applying CCSD(T)
to treat infinite periodic rare-gas crystals, one utilizes the method only to generate the short-range part of
the potential energy landscapes of trimers and tetramers. The entire cohesive energies are obtained by summing up two-body, three-body,
and four-body contributions, with each of them being calculated by summing over sufficiently large atomic clusters in real space.
The interactions between atomic pairs, as well as the long-range part of the interactions within trimers and tetramers, are described by empirical model potentials.
Thus this is a rather sophisticated approach that combines many ingredients into one framework, and it necessarily depends on many parameters.
Despite the remarkable accuracy this approach delivers and its successful explanation of the preference to the FCC phase of rare-gas crystals \cite{Schwerdtfeger2016Towards}, it is rather tedious to use for non-experts and barely suitable for routine applications, in particular
in the context of high-throughput computations.
In this work, we present a detailed study of the performance of RPA and beyond for computing
the cohesive energies of the Ar crystal, with special emphasis on
the energy differences between the FCC and HCP phases. Here RPA refers to a theoretical approach for
computing the ground-state total
energy of many-electron systems, in a way that the exchange energy is evaluated exactly and
the electron correlation energy is treated at the direct RPA level.
The exchange-correlation (XC) energy of the RPA method can be viewed as an orbital-dependent energy functional
derived within the framework of adiabatic-connection fluctuation-dissipation theorem of DFT \cite{Langreth/Perdew:1977,Gunnarsson/Lundqvist:1976,Dobson:1994}, but
it is also intimately connected to the CC theory \cite{Scuseria/Henderson/Sorensen:2008}. One particularly appealing aspect of RPA is that it
provides a balanced description of
all bonding characteristics, including a seamless incorporation of vdW interactions. Early benchmark studies
indicated that RPA (and beyond) is capable of describing delicate energy differences \cite{Ren/etal:2012b}.
This is corroborated by more recent studies that revealed that RPA yields the correct energy ranking of
different polymorphs of several types of materials, including carbon allotropes \cite{Lebegue/etal:2010}, FeS$_2$ \cite{Zhang/Cui/Zhang:2018}, BN \cite{Cazorla/Gould:2019}, as well as a few others \cite{Sengupta/Bates/Ruzsinszky:2018}.
Despite its success, the standard RPA scheme shows an overall tendency of underestimating the binding energies of molecules and solids
\cite{Furche:2001,Harl/Schimka/Kresse:2010}, and this underestimation
can be largely alleviated by adding a correction term arising from single excitations \cite{Ren/etal:2011}. In particular, the renormalized single excitation (rSE)
correction has been shown to appreciably improve the accuracy of the standard RPA for describing the binding strengths
of molecules and solids \cite{Paier/etal:2012,Ren/etal:2013,Klimes/etal:2015}. In this work, we apply the RPA+rSE method \cite{Ren/etal:2013} to study the
energy difference between the FCC and HCP structures of the Ar crystal. Here, instead of resorting to a tedious cluster-based approach \cite{PhysRevLett.79.1301,PhysRevB.62.5482,PhysRevB.60.7905,Schwerdtfeger2016Towards,PhysRevB.73.064112,PhysRevB.95.214116}, we directly treat the crystal
as a periodic system with $\bfk$-point sampling. We find that, in order to capture the delicate energy differences between the FCC and HCP structures,
it is important to treat the two structures on an equal footing, i.e., setting up a computational cell that is the same for the FCC and HCP crystal structures and
performing calculations with the same $\bfk$-point setting. Test calculations show that using the primitive unit cells
of the two structures (and hence different $\bfk$-point settings) does not provide sufficient numerical precision. With the above-noted computational strategy, and by carefully
converging all the computational parameters, we found that the RPA+rSE cohesive energy of the FCC phase is lower than that of the HCP phase by around 3.6 $\mu$Ha ($\sim$0.1 meV)
at zero temperature and ambient pressure. The zero-point energy (ZPE), which was previously suggested to be vital for stabilizing the FCC phase \cite{PhysRevB.62.5482,PhysRevB.60.7905,Schwerdtfeger2016Towards},
is found to only play a secondary role here.
Furthermore, by including the contributions of phonon energies, we calculate the Helmholtz free energy and derive the pressure-volume ($P$-$V$) curves at room temperature.
Our results show excellent agreement with available experimental data in the high-pressure regime, which was previously considered to be a challenging
problem for the Ar crystal \cite{PhysRevB.95.214116}. Finally, by computing the Gibbs free energy for both phases, we are able
to determine the phase boundary between the FCC and HCP phases in the temperature-pressure ($T$-$P$) phase diagram. The behavior of the phase boundary line can be understood
from the fact that the FCC phase has a slightly larger entropy and volume than the HCP phase under the same temperature and pressure conditions.
Our investigation suggests that the combination of advanced
electronic structure calculations at the level of RPA and beyond with reliable estimations of phonon energies is a powerful approach to
study the thermodynamic properties of condensed-matter systems. Since such an approach has been implemented in several mainstream computer codes
and the barrier to employing it for routine applications is not high, we expect it to become highly useful for treating challenging
problems where unprecedented accuracy is required.
\section{\label{sec:results}Results and Discussion}
\textbf{Cohesive energy curves for the Ar crystal.}
We start by examining the performance of RPA+rSE for describing the cohesive properties of the Ar crystal. In Fig.~\ref{Fig:Cohesive}
we present the cohesive energies as a function of the lattice constant for the FCC Ar crystal, as determined by various
functionals, including RPA@PBE and (RPA+rSE)@PBE. Here ``@PBE" denotes that the calculations
are done on top of the reference wavefunctions generated by a preceding KS calculation based on the generalized gradient approximation (GGA) of Perdew, Burke, and Ernzerhof \cite{Perdew/Burke/Ernzerhof:1996}.
For comparison, results obtained using other functionals, including the local-density approximation (LDA), GGA-PBE itself, the global hybrid functional
PBE0 \cite{Perdew/Ernzerhof/Burke:1996}, the Heyd-Scuseria-Ernzerhof (HSE)
screened hybrid functional \cite{Heyd/Scuseria/Ernzerhof:2003},
as well as PBE complemented with the Tkatchenko-Scheffler (TS) vdW correction (vdW(TS)) \cite{Tkatchenko/Scheffler:2009} and with the more sophisticated many-body dispersion (MBD) \cite{Tkatchenko/etal:2012}, are also presented.
In addition, the experimental equilibrium lattice constant and cohesive energy are indicated by dashed lines in Fig.~\ref{Fig:Cohesive}.
All calculations are done using the FHI-aims code package \cite{Blum/etal:2009,Ren/etal:2012}, based on an all-electron, numeric atom-centered
orbital (NAO) basis set framework. The correlation-consistent NAO-VCC-$n$Z basis sets \cite{IgorZhang/etal:2013} are used in the calculations.
Since correlated methods like RPA and beyond show a different basis-set convergence behavior than
conventional functionals, the presented RPA and RPA+rSE results are those extrapolated to the complete basis set (CBS) limit,
whereas the results of the other functionals are obtained with the NAO-VCC-4Z basis set.
Further computational details of these calculations
are given in Sec.~\ref{sec:method}. Figure~\ref{Fig:Cohesive} clearly shows that
a quantitatively accurate description of the cohesive properties of the Ar crystal is highly challenging for first-principles approaches.
While LDA displays a pronounced artificial overbinding, GGA-PBE shows the opposite behavior. Going from PBE to the hybrid
functionals does not improve the situation, due to the fact that long-range vdW interactions are missing in both types of functionals.
Complementing the PBE functional with semi-empirical vdW corrections significantly improves the cohesive energies, as shown by
the PBE+vdW(TS) and PBE+MBD curves in Fig.~\ref{Fig:Cohesive}, but the equilibrium
lattice constant is still appreciably overestimated. In the case of RPA-based approaches, RPA@PBE performs significantly better than
LDA, GGA, and hybrid functionals, and the obtained equilibrium lattice constant ($5.365$ \AA) is in excellent agreement with the experimental value ($5.311$ \AA).
However, similar to the Ar dimer (Ar$_2$) case \cite{Ren/etal:2011}, RPA@PBE underestimates the cohesive energies. Finally, when adding the rSE correction, this underbinding
is largely alleviated, and the (RPA+rSE)@PBE cohesive energy curve shows a rather satisfactory agreement with the experimental data, although
a slight underestimation of the cohesive energy is still noticeable. It should
be noted that RPA \cite{Harl/Kresse:2008} and RPA+rSE \cite{Klimes/etal:2015} have been previously applied to the FCC phase of the Ar crystal,
using an independent implementation based on the projector augmented-wave (PAW) method and a plane-wave basis set \cite{Kresse/Furthmuller:1996a}.
In their paper \cite{Klimes/etal:2015},
an even better agreement of the RPA+rSE cohesive energy with the experimental result was reported. We expect that the slight difference arising from the different numerical implementations does not affect the discussions below.
\begin{figure}[t]
\centering
\includegraphics[width=0.45\textwidth,clip]{cohesive_energy_lattice_constant}
\caption{\label{Fig:Cohesive} Cohesive energies of the FCC Ar crystal determined by different functionals. All calculations are done using
the FHI-aims code package \cite{Blum/etal:2009,Ren/etal:2012} and NAO-VCC-$n$Z basis sets \cite{IgorZhang/etal:2013}. The RPA and RPA+rSE results
are obtained by extrapolating the NAO-VCC-3Z and 4Z results to the complete basis set (CBS) limit, and all other results are obtained using the NAO-VCC-4Z basis set.
A primitive unit cell and $10\times 10 \times 10$ $\bfk$ grid sampling
are used in all calculations.
The ZPE contribution is not included here. Experimental values are taken from Ref.~\cite{schwalbe1977thermodynamic}.}
\end{figure}
\textbf{Energy differences between FCC and HCP structures.}
Encouraged by the excellent performance of RPA+rSE for describing the cohesive properties of the Ar crystal, we set out to
examine the energy difference of the FCC and HCP structures of Ar using this approach. Here we stress that it is rather challenging to numerically capture
the delicate energy differences of the two structures, due to the fact that they have different primitive unit cells,
and a finite $\bfk$-point sampling based on the primitive cells leads to an unbalanced description of the two structures.
To deal with this problem, we set up a $1\times 1 \times 6$ supercell for both structures, whereby the $z$ axis is chosen along the $[111]$ direction of
the FCC structure and the $c$ direction of the HCP structure, respectively. This results in a supercell with $ABCABC$
stacking for the FCC structure and with $ABABAB$ stacking
for the HCP structure. Then by utilizing the same setting for $\bfk$-point mesh ($12\times 12 \times 2$ in the present case), one achieves a numerical description of
both structures on an equal footing.
As a consequence, the target energy differences of the two structures are least prone to possible numerical errors arising from the under-convergence of computational parameters.
In Fig.~\ref{Fig:fcc-hcp_deltaE}, we plot the energy differences of the cohesive energies of the FCC and HCP structures obtained by PBE, PBE+MBD, and (RPA+rSE)@PBE
as a function of the volume. It can be seen that, most strikingly, (RPA+rSE)@PBE yields a lower energy for the FCC phase than for the HCP phase over a wide volume range. The energy preference for the FCC phase is approximately 3.5 $\mu$Ha at ambient pressure, and reaches its maximum ($\sim$10 $\mu$Ha) at a compressed volume of approximately 22 \AA$^3$ per atom. The energy difference is reduced when the system is further compressed,
until the HCP phase is favored at even higher
pressures (smaller volumes). In contrast to the RPA+rSE results, PBE and PBE+MBD yield almost identical cohesive energies for the two phases
over a large volume range, and only at very high pressures
does the HCP structure become clearly favored. Furthermore, comparing the PBE and PBE+MBD results reveals that the long-range vdW interactions
do not play a role
in discerning the energy difference of the two phases, although they do contribute significantly to the cohesive energies of each
individual phase. Last but not least, the comparison of RPA+rSE and PBE+MBD results provides additional insights.
It should be noted that MBD captures the many-body coupling among the atoms at the RPA level, with each individual atom treated as
a harmonic oscillator \cite{Tkatchenko/etal:2012}. The different behaviors of PBE+MBD and (RPA+rSE)@PBE in Fig.~\ref{Fig:fcc-hcp_deltaE} suggest that
it is really the many-body correlations at the electronic level that make a qualitative difference here.
\begin{figure}[t]
\centering
\includegraphics[width=0.45\textwidth,clip]{fcc-hcp_deltaE}
\caption{\label{Fig:fcc-hcp_deltaE} Energy differences between the FCC and HCP structures of the Ar crystal at zero temperature as
a function of the volume (per atom). Presented
results are calculated using the PBE, PBE+MBD, and (RPA+rSE)@PBE approaches. Also shown is the (RPA+rSE)@PBE cohesive energy complemented with ZPE at the PBE level, denoted as ``RPA+rSE+ZPE". Results from RPA+rSE and ZPE@PBE are extrapolated to CBS(3,4), while those from other functionals are obtained using the NAO-VCC-4Z basis set. The presented curves are obtained by subtracting the corresponding cohesive energy curves of the FCC and HCP structures,
which themselves are obtained by a second-order Birch-Murnaghan fitting of the original computed data.}
\end{figure}
In addition to the electronic energies, it has been pointed out in the past that the ZPE plays an
essential role in stabilizing the FCC phase over the HCP phase \cite{PhysRevB.62.5482,PhysRevB.60.7905,Schwerdtfeger2016Towards}. To check this point,
we further add the differences of the ZPE contributions of the two phases to the RPA+rSE results, and plot the resultant RPA+rSE+ZPE results in Fig.~\ref{Fig:fcc-hcp_deltaE} (blue curve).
It can be seen that the ZPE contribution indeed favors the FCC phase, shifting the entire $E_\text{FCC} - E_\text{HCP}$ curve further down.
However, its effect is much smaller in magnitude compared to the electron correlation effect described by RPA+rSE. It should be noted that in Fig.~\ref{Fig:fcc-hcp_deltaE}
the ZPE contributions are calculated using the PBE functional within the harmonic approximation. In principle, it would be ideal if the phonon
frequencies were evaluated by the RPA+rSE approach. Unfortunately, at present
our implementation does not yet allow one to compute the phonon spectra according to the RPA+rSE potential energy surface, and evaluate the ZPE contributions at
the RPA+rSE level. However, as shown in Fig.~S1 of the supporting material (SM),
we have checked the ZPEs using the LDA, PBE, and PBE+MBD functionals
(among which the cohesive behavior of RPA+rSE sits in between), and confirmed that the ZPE contribution to the energy difference does not change appreciably with respect to the underlying functionals.
We thus expect that the scenario illustrated in Fig.~\ref{Fig:fcc-hcp_deltaE} would not change if RPA+rSE were used to evaluate the ZPEs.
The anharmonic correction to the ZPE has also been considered in the literature \cite{Schwerdtfeger2016Towards}, but its effect is rather small
and neglected here.
\textbf{The $P$-$V$ curve in the high-pressure regime.}
Besides the mystery regarding the preferred crystal structure at ambient pressure, the rare-gas systems in the high-pressure regime also
exhibit fascinating phenomena. These include the pressure-induced structural phase transition from FCC to HCP (or to other structures at
even higher pressures), the change of electronic structure, e.g., the narrowing of the band gap and/or the change of conductance \citep{PhysRevLett.62.669}, as well as the rearrangement of electronic states \citep{PhysRevLett.95.257801}, etc. Even superconductivity
has been predicted to exist at very high pressures \citep{PhysRevB.91.064512}.
Considerable efforts have been devoted to understanding the physical properties of rare gas systems under high pressure \citep{PhysRevB.95.214116,PhysRevLett.95.257801,PhysRevB.52.15165,PhysRevLett.88.075504,PhysRevLett.96.035504}.
In particular, Schwerdtfeger and coworkers \cite{Schwerdtfeger/Hermann:2009,PhysRevB.95.214116} derived the volume and temperature dependence of the pressure -- $P(V,T)$
-- for both Ne and Ar crystals from the Helmholtz free energy, which consists of a static term determined from many-body expansion based on
the CC theory and a dynamical term determined from lattice vibrations. While excellent agreement with experiment was found for Ne up to
200 GPa \cite{Schwerdtfeger/Hermann:2009}, a comparable level of agreement was only achieved up to 20 GPa for Ar at room
temperature \cite{PhysRevB.95.214116}.
In the pressure range of
20-100 GPa, an appreciable discrepancy between the calculated results and experiment was observed. As such, a theoretical description of the equation
of state at high pressure reaching the experimental accuracy was considered to be a significant challenge for the Ar crystal \cite{PhysRevB.95.214116}.
To check how RPA+rSE performs in the high-pressure regime, we here also determine the equation of state $P(V,T)$ from the Helmholtz free
energy via $P(V,T)=-{\partial F(V,T)/\partial V}|_T$, whereby the expression of the Helmholtz free energy is presented in Sec.~\ref{sec:method}
(Eq.~\ref{Eq:QHA}). In brief, the static part of the Helmholtz energy in our calculations is given by RPA+rSE total energy, whereas
the ZPE and lattice thermodynamic vibration contributions are determined from the phonon spectrum based on PBE calculations.
The possible influence of the underlying XC functional for
evaluating the phonon spectra on the $P$-$V$ curve is examined in Fig. S2 of the SM, whereby one can see again that varying the
functional has negligible impact. In Fig.~\ref{Fig:P-V_diagram}, our calculated $P(V,T)$ results
as a function of
$V$ at $T=300$ K
are presented for the FCC phase of the Ar crystal. For comparison, the calculated results
of Schwerdtfeger \textit{et al.} \cite{PhysRevB.95.214116} as well
as the experimental results \cite{FingerStructure,Errandonea2006Structural,ross1986equation} are also included. To stay away from the melting point
(the melting pressure $P_{melt}$ and volume $V_{melt}$ at 300 K are estimated to be $1.353$ GPa and $32.64 {\AA}^{3}$ in Refs.~\cite{WiebkeMelting,PhysRevB.65.052102}),
only the compressed volume regime with $V< 32 {\AA}^{3}$ was considered in our investigation. Figure~\ref{Fig:P-V_diagram} shows that
our calculated $P$-$V$ results are in very good agreement with the experimental data of Ross \textit{et al.} \cite{ross1986equation} extracted
from static pressure measurements
and of Errandonea \textit{et al.} \cite{Errandonea2006Structural}
extracted from dispersive x-ray diffraction measurements up to 50 GPa.
In addition to the FCC phase, we also calculated the $P$-$V$ curve in the high-pressure regime for the HCP phase, but its difference from that
of the FCC phase is tiny on the scale of Fig.~\ref{Fig:P-V_diagram}. Overall, it seems that our approach describes the
equation of state in the high-pressure regime rather well, in contrast with the CC-based many-body expansions. Schwerdtfeger \textit{et al.} \cite{PhysRevB.95.214116} attributed the deviation of their results from experiment to the missing four-body and higher-order effects
within their approach and the possible inaccuracy of CCSD(T). In our case, RPA+rSE is less accurate than CCSD(T), but our calculations account for the many-body
effects to infinite order. From this perspective, we propose that it is most likely not the method of CCSD(T) itself, but rather the limited order
of the many-body expansion that is responsible for the discrepancy observed by Schwerdtfeger \textit{et al.} \cite{PhysRevB.95.214116}.
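The $P$-$V$ route used above can be sketched in a few lines: tabulate the Helmholtz free energy on a volume grid and take $P = -\partial F/\partial V|_T$ by central finite differences. This is a minimal illustration of the differentiation step only; the linear $F(V)$ data in the test are hypothetical placeholders, not actual Ar free energies.

```python
# Sketch: P(V,T) = -dF/dV|_T from free energies tabulated on a volume grid,
# using central finite differences at the interior grid points.
# Input data are illustrative placeholders, not computed Ar values.

def pressure_from_free_energy(volumes, free_energies):
    """Central-difference pressures P_i = -(F[i+1]-F[i-1])/(V[i+1]-V[i-1])."""
    pressures = []
    for i in range(1, len(volumes) - 1):
        dfdv = (free_energies[i + 1] - free_energies[i - 1]) / (
            volumes[i + 1] - volumes[i - 1])
        pressures.append(-dfdv)  # pressure is minus the volume derivative
    return pressures
```

In practice one would fit an equation of state (e.g., Birch-Murnaghan, as done for the cohesive energy curves) and differentiate the fit, which is smoother than a raw finite difference.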
\begin{figure}[t]
\centering
\includegraphics[width=0.45\textwidth,clip]{P-V_diagram.eps}
\caption{\label{Fig:P-V_diagram} Calculated $P$-$V$ diagram at 300 K for the FCC phase of the Ar crystal. Calculations are done using the 6-atom supercell
(as in Fig.~\ref{Fig:fcc-hcp_deltaE}) and the results are extrapolated to the CBS limit. Our results (green line) are obtained from the RPA+rSE total energy combined with the phonon free energy at 300 K. Theoretical results of Ref.~\cite{PhysRevB.95.214116}
obtained using the many-body expansion and the experimental data taken from Refs.~\cite{FingerStructure,ross1986equation,Errandonea2006Structural} are included
for comparison.}
\end{figure}
\textbf{The $T$-$P$ phase diagram for Ar.}
Figure~\ref{Fig:fcc-hcp_deltaE} signifies that at zero temperature the FCC phase can transform into the HCP phase at a compressed volume of approximately $17$ \AA$^3$ per atom,
corresponding to a pressure of 31.4 GPa, as can be inferred from Fig.~\ref{Fig:P-V_diagram}. To investigate how the transition pressure changes with the temperature,
we calculate the Gibbs free energies $G(T,P)$ for both phases at several different temperatures, and pinpoint the boundary between the two phases in the $T$-$P$ phase diagram
by equating the Gibbs free energies of the two phases $G_\text{FCC}(T,P)=G_\text{HCP}(T,P)$. The resultant $T$-$P$ phase diagram is presented in
Fig.~\ref{Fig:T-P_diagram}, where the boundary line is determined by data points at seven different temperatures ranging from 0 to 300 K. One can see that the transition
pressure increases steadily from 31.4 GPa at 0 K to about 37.0 GPa at 300 K. In Table~SI of SM, we present the actual values of the critical pressure and volume along
the phase boundary line. The steady increase of the transition pressure along with the temperature is related to the different phonon spectra of the two phases.
The lattice vibrations are such that the higher-symmetry phase (FCC) is favored over the lower-symmetry phase at finite temperatures, and thus a higher pressure is
needed to turn the FCC phase into the HCP one, compared to the zero temperature case. Close inspection reveals that the positive slope of the phase boundary is related to
the slightly larger entropy and volume of the FCC phase than the HCP phase under the same temperature and pressure conditions. Namely, the positive sign of $\Delta S/\Delta V$
across the phase boundary line gives rise to a positive $dP/dT$ slope, according to the Clausius-Clapeyron relation (i.e., $dP/dT = \Delta S/\Delta V$) for a solid-solid
phase transition.
\begin{figure}[t]
\centering
\includegraphics[width=0.45\textwidth,clip]{T-P_diagram}
\caption{\label{Fig:T-P_diagram} The $T$-$P$ phase diagram of the Ar crystal, where the phase boundary between the FCC and HCP phases is determined from the
Gibbs free energy. Calculations are done using the 6-atom supercell model at several successive temperatures, i.e., $0$, 50, 100, $\cdots$, $300$ K.
Both electronic (RPA+rSE) and phonon (PBE) parts of the Gibbs free energy are extrapolated to the CBS limit [CBS(3,4)].}
\end{figure}
Experimentally, Errandonea \textit{et al.} observed that a phase transition from FCC to HCP starts at 49.6 GPa under room temperature \cite{Errandonea2006Structural},
and after that the
two phases coexist over a wide pressure range. Determining the hysteresis region between the two solid-state phases requires rather long $NPT$ molecular dynamics
simulations and goes beyond the scope of the present work. Quantitatively, our simulation seems to underestimate the transition pressure, but
nevertheless predicts the correct order of magnitude for the transition pressure. Several factors might contribute to this underestimation. First, in the present work,
the ideal structure is used for both the FCC and HCP phases, whereas in reality disorder and defects play important roles. Second, the dynamics of the phase transition
process is rather complex, governed by a delicate interplay between energetics and kinetics, as, e.g., discussed in Refs.~\cite{PhysRevLett.86.4552,PhysRevLett.96.035504} for
the martensitic FCC-HCP transition in solid xenon. These aspects have not been taken into account in our studies, but need to be considered to
achieve a quantitatively accurate description of the detailed phase transition behavior.
\section{Conclusion}
To summarize, we carried out a first-principles study of the energy difference and the transition between the FCC and HCP phases of the Ar crystal.
The electronic part of the free energy is calculated at the level of RPA plus renormalized singles corrections \cite{Ren/etal:2013}, and the ZPE and thermodynamic lattice vibrations are determined at the PBE level using the harmonic
approximation. Our results show that, at zero temperature, RPA+rSE predicts that the FCC structure has a lower energy than the HCP structure at ambient
and high pressures up to 30 GPa. The ZPE slightly favors the FCC structure over the HCP one, but its effect is rather tiny and one order of magnitude smaller
than the electron correlation effect described by RPA+rSE. Computationally, in order to capture the delicate energy difference between the two phases, it is important to
treat the FCC and HCP structures on an equal footing, i.e., adopting the same computational unit cell and parameters for both structures.
In the high-pressure regime, our $P$-$V$ curve is in excellent agreement with available experimental data. Such an agreement is not achieved with the CC-based
many-body expansion approach, and has been regarded as a significant challenge for theoretical methods \cite{PhysRevB.95.214116}. Based on our results, we propose
that the inadequacy of the many-body expansion approach is probably due to the missing four-body and higher-order terms, and not due to the CCSD(T) method itself.
By computing the Gibbs free energies for both phases, we are able to determine a $T$-$P$ phase diagram of the Ar crystal. Although the transition pressure is somewhat
underestimated and no details of the transition process are provided yet, determining a qualitatively correct phase diagram for the Ar crystal
entirely from first principles itself is a great success. We believe our findings presented in this work will have significant implications for investigating
other rare-gas systems and more complex materials in general.
\section{\label{sec:method}Methods}
\textbf{Computer Code and Computational Details.}
The ground-state total energy in this work is calculated using the RPA+rSE method, as is implemented within the all-electron full-potential
code package FHI-aims \cite{Blum/etal:2009,Ren/etal:2012}, based on the numerical atom-centered orbital (NAO) basis set framework.
Extensive reviews of the RPA method for many-electron ground-state energy calculations
exist in the literature \cite{Hesselmann/Goerling:2011,Eshuis/Bates/Furche:2012,Ren/etal:2012b} and its rSE correction has been discussed in
Refs.~\cite{Ren/etal:2013}. The numerical details and benchmark tests of the NAO-based RPA (and rSE) implementation for finite systems (molecules and clusters) have been presented in Refs.~\cite{Ren/etal:2012,Ren/etal:2013,Ihrig/etal:2015}. Recently,
this implementation has been extended to periodic systems. While the details will be presented elsewhere,
the basic numerical techniques of periodic RPA implementation within the NAO framework follow closely those of periodic $G_0W_0$ implementation published recently \cite{Ren/etal:2021}. Also, the reliability of our NAO-based periodic RPA implementation has been demonstrated by
recently published benchmark-quality calculations for a set of semiconductors \cite{IgorZhang/etal:2019}.
The valence correlation-consistent (VCC) NAO basis sets \cite{IgorZhang/etal:2013} are used in our calculations. Such basis sets allow one to
extrapolate the calculated RPA+rSE results to the CBS limit, if the following asymptotic behavior is assumed \cite{helgaker1997basis-set}:
\begin{equation}
E(n) = E(\infty) - C/n^3
\label{eq:basis_dependence}
\end{equation}
In Eq.~\ref{eq:basis_dependence} $n$ is the cardinal number of the NAO-VCC-$n$Z basis sets \cite{IgorZhang/etal:2013}, and $C$ is a fitting parameter. Our CBS results
for RPA+rSE are then obtained from a two-point extrapolation with $n=3,4$ \cite{IgorZhang/etal:2019,helgaker1997basis-set}
\begin{equation}
E(\infty)=\frac{E(3)3^{3}-E(4)4^{3}}{3^{3}-4^{3} }
\label{Eq:extrapolate}
\end{equation}
and hence denoted as CBS(3,4). Semi-local and hybrid functionals utilize only occupied-state information, and their convergence with respect to
the basis set is much less demanding. Hence, in the present work, the NAO-VCC-4Z basis set is used for these functionals. Figure~S3 in the
SM demonstrates the basis set convergence behavior of both RPA+rSE and PBE calculations.
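The two-point extrapolation of Eqs.~(\ref{eq:basis_dependence}) and (\ref{Eq:extrapolate}) can be sketched directly; the snippet below is a minimal illustration, and the energies used in the test are synthetic values obeying the assumed $E(n) = E(\infty) - C/n^3$ form, not actual RPA+rSE data.

```python
# Two-point CBS extrapolation assuming E(n) = E(inf) - C/n^3,
# which gives E(inf) = (E(n1)*n1^3 - E(n2)*n2^3) / (n1^3 - n2^3).
# For NAO-VCC-nZ with n1=3, n2=4 this is the CBS(3,4) scheme of the text.

def cbs_extrapolate(e_n1: float, e_n2: float, n1: int = 3, n2: int = 4) -> float:
    """Extrapolate two finite-basis total energies to the CBS limit."""
    return (e_n1 * n1**3 - e_n2 * n2**3) / (n1**3 - n2**3)
```

Note that the unknown prefactor $C$ cancels exactly in this combination, so only the two finite-basis energies are needed.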
As already mentioned in Sec.~\ref{sec:results}, to obtain reliable energy differences between the FCC and HCP structures, it is crucial to
treat both structures on an equal footing. To this end, $1\times 1\times 6$ computational cells with $ABCABC$ and $ABABAB$ stacking are used
for the FCC and HCP structures, respectively. In Fig.~S4, we show that if the primitive unit cells are used instead, one may easily obtain an incorrect
energy ordering of the two structures with the RPA+rSE method.
The computational cost of the RPA calculations scales as $N_b^4$ with respect to the number of basis functions $N_b$.
To reduce the computational load, the NAO-VCC-4Z RPA+rSE results for
the 6-atom computational cell at a given volume $V$ (denoted as $E_{4Z}(V)$) are obtained from the NAO-VCC-3Z results ($E_{3Z}(V)$)
upon adding a correction term that is obtained from calculations based on the primitive unit cells,
\begin{equation}
E_{4Z}(V) = E_{3Z}(V) + E_{corr}(V)\, .
\label{Eq:correction}
\end{equation}
Here,
\begin{equation}
E_{corr}(V)=6\left[E_{4Z}^{prim}(V)- E_{3Z}^{prim}(V)\right],
\label{Eq:correction1}
\end{equation}
where $E_{4Z}^{prim}(V)$ and $E_{3Z}^{prim}(V)$ are respectively the NAO-VCC-4Z and NAO-VCC-3Z results for primitive unit cells.
In these calculations,
$12\times 12 \times 2$, $12\times 12 \times 12$ ($12\times 12 \times 6$) $\bfk$ grids are used for the supercell and FCC (HCP) primitive cells, respectively.
Test calculations show that such a basis correction scheme introduces an error of less than 0.1 $\mu$Ha in the energy differences between the FCC
and HCP structures, compared to direct NAO-VCC-4Z calculations for the supercell.
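The correction scheme of Eqs.~(\ref{Eq:correction}) and (\ref{Eq:correction1}) amounts to a one-line composite: a cheap 3Z supercell calculation plus a basis-set increment taken from primitive-cell calculations and scaled by the number of atoms. The sketch below illustrates this arithmetic; the numerical values in the test are placeholders, not actual NAO-VCC energies.

```python
# Sketch of the basis-set correction scheme: estimate the NAO-VCC-4Z energy of
# the 6-atom supercell from its 3Z energy plus a primitive-cell increment,
#   E_4Z(V) = E_3Z(V) + N_atoms * [E_4Z^prim(V) - E_3Z^prim(V)],
# where the primitive cell contains one atom.

def e4z_supercell(e3z_supercell: float,
                  e4z_prim: float,
                  e3z_prim: float,
                  atoms_per_supercell: int = 6) -> float:
    """Composite estimate of the 4Z supercell energy (same units throughout)."""
    correction = atoms_per_supercell * (e4z_prim - e3z_prim)
    return e3z_supercell + correction
```

The scheme works because the 3Z-to-4Z basis increment is nearly independent of the cell used to evaluate it, which is what the quoted sub-0.1 $\mu$Ha test against direct 4Z supercell calculations verifies.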
\textbf{ZPE and Free Energy Calculations.}
The Helmholtz free energy at a given volume $V$ and temperature $T$ invoked to determine the $P$-$V$ curve presented in Fig.~\ref{Fig:P-V_diagram}
is given as follows,
\begin{align}
F(V,T)=& E^\text{RPA+rSE}(V)+ \frac{1}{2}\sum_{\bfq,j}\hbar\omega_{\bfq,j}(V) + \nonumber \\
& k_{B}T\sum_{\bfq,j}\ln\left[1-\exp\left(-\frac{\hbar\omega_{\bfq,j}(V)}{k_{B}T}\right) \right]
\label{Eq:QHA}
\end{align}
where $E^\text{RPA+rSE}(V)$ is the RPA+rSE total energy at 0 K, and the second and third terms correspond to the contributions of ZPE
and lattice vibrations at finite temperatures, respectively. In Eq.~\ref{Eq:QHA}, $\omega_{\bfq,j}(V)$ are the phonon frequencies,
which in the present work are calculated using the finite difference method via the PHONOPY code \cite{phonopy} interfaced with FHI-aims.
The phonon spectra used to determine the ZPE and/or lattice vibration contributions
presented in Figs.~\ref{Fig:fcc-hcp_deltaE}-\ref{Fig:T-P_diagram} are evaluated
using the PBE functional with a $2\times 2 \times 2 $ supercell (in units of the $1\times 1 \times 6$ computational cell, hence 48 atoms in total).
In the phonon calculations, a $12\times 12 \times 6$ $\bfk$ grid and a displacement of 0.002~\AA{} (for the finite displacement method) are
adopted. The influence of different functionals on the phonon spectra, and hence on the final ZPE and phonon contributions
to the Helmholtz free energy is presented in Fig.~S5 of the SM.
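The evaluation of Eq.~\ref{Eq:QHA} reduces to a sum over phonon modes once the frequencies are known. The sketch below writes the free energy as static energy + ZPE + thermal term $k_BT\ln(1-e^{-\hbar\omega/k_BT})$, the standard harmonic form consistent with the textual description of the three terms; the electronic energy and frequencies in the test are illustrative placeholders, not actual Ar phonon data.

```python
# Minimal quasi-harmonic Helmholtz free energy for one volume:
# F(V,T) = E_el(V) + sum_q hbar*w/2 + k_B*T * sum_q ln(1 - exp(-hbar*w/(k_B*T))),
# i.e., static electronic energy + zero-point energy + thermal vibrations.
import math

K_B = 8.617333262e-5    # Boltzmann constant in eV/K
HBAR = 6.582119569e-16  # reduced Planck constant in eV*s

def helmholtz_free_energy(e_electronic: float,
                          omegas: list,        # angular frequencies in rad/s
                          temperature: float) -> float:
    """Quasi-harmonic F(V,T) in eV for a given set of phonon modes."""
    zpe = sum(0.5 * HBAR * w for w in omegas)
    if temperature <= 0.0:
        return e_electronic + zpe  # only ZPE survives at T = 0
    kt = K_B * temperature
    thermal = kt * sum(math.log(1.0 - math.exp(-HBAR * w / kt)) for w in omegas)
    return e_electronic + zpe + thermal
```

In a real calculation the mode sum runs over the $\bfq$-grid and branch index $j$ of the phonon spectrum (here produced by PHONOPY), normalized per computational cell.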
The Gibbs free energy used to determine the $T$-$P$ phase diagram reported in Fig.~\ref{Fig:T-P_diagram} is given by
\begin{equation}
G(T,P) = F(V,T) + PV\, .
\end{equation}
The Gibbs free energies are calculated separately for the FCC and HCP phases, and the phase boundary separating the two phases is
determined by equating $G_\text{FCC}(T,P)$ and $G_\text{HCP}(T,P)$ at several different temperatures.
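The condition $G_\text{FCC}(T,P)=G_\text{HCP}(T,P)$ at fixed $T$ is a one-dimensional root-finding problem in $P$, which can be solved, e.g., by bisection. The sketch below illustrates this step only; the linear model Gibbs free energies used in the test are hypothetical, not fitted Ar data.

```python
# Sketch of the phase-boundary search at fixed T: find the pressure at which
# dG(P) = G_FCC(T,P) - G_HCP(T,P) changes sign, by bisection.

def phase_boundary_pressure(g_fcc, g_hcp, p_lo: float, p_hi: float,
                            tol: float = 1e-8) -> float:
    """Bisection for the root of g_fcc(P) - g_hcp(P) within [p_lo, p_hi]."""
    def dg(p):
        return g_fcc(p) - g_hcp(p)
    if dg(p_lo) * dg(p_hi) > 0:
        raise ValueError("dG must change sign within the pressure bracket")
    while p_hi - p_lo > tol:
        p_mid = 0.5 * (p_lo + p_hi)
        # keep the half-interval that still brackets the sign change
        if dg(p_lo) * dg(p_mid) <= 0:
            p_hi = p_mid
        else:
            p_lo = p_mid
    return 0.5 * (p_lo + p_hi)
```

Repeating this at each temperature (0, 50, ..., 300 K in the text) traces out the FCC-HCP boundary in the $T$-$P$ diagram.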
\begin{acknowledgments}
The work is supported by the National Natural Science Foundation of China (Grant No. 11874335) and the Max Planck Partner Group for
\textit{Advanced Electronic Structure Methods}. We thank Igor Ying Zhang and Xinzheng Li for helpful discussions.
\end{acknowledgments}
\begin{appendix}
\renewcommand{\thefigure}{S\arabic{figure}}
\setcounter{figure}{0}
\renewcommand{\thetable}{S\Roman{table}}
\setcounter{table}{0}
\newpage
\section*{Supporting Material for \\
``Phase Stability of the Argon Crystal: A First-Principles Study Based on Random Phase Approximation plus Renormalized Single Excitation Corrections''}
\begin{figure}[h]
\centering
\includegraphics[width=0.45\textwidth,clip]{RPA+rSE+ZPEonLDA_PBE_MBD.eps}
\caption{\label{Fig:ZPE_diff_functionals}
Energy differences between the FCC and HCP structures of the Ar crystal at zero temperature as a function of the volume (per atom). Results are obtained by (RPA+rSE)@PBE, complemented with ZPE determined using LDA, PBE, and PBE+MBD functionals. The presented curves are obtained by subtracting the corresponding cohesive energy curves of the FCC and HCP structures,
which themselves are obtained by a second-order Birch-Murnaghan fitting of the original computed data.}
\end{figure}
Figure~\ref{Fig:ZPE_diff_functionals} shows the energy differences between the FCC and HCP structures of the Ar crystal at zero temperature as
a function of volume, calculated using the random phase approximation (RPA) plus renormalized single excitation (rSE) correction \cite{Ren/etal:2013}.
The RPA+rSE energy differences are further complemented by the zero-point energy (ZPE) as determined by three different functionals -- the local-density approximation (LDA), the Perdew-Burke-Ernzerhof (PBE) generalized
gradient approximation (GGA), and PBE with many-body dispersion (MBD) \cite{Tkatchenko/etal:2012}. It can be seen from Fig.~\ref{Fig:ZPE_diff_functionals} that the ZPE contribution
to the energy difference is rather small compared to the electronic energy contribution as described by RPA+rSE. This behavior does not change if
different functionals are used to evaluate the phonon frequencies. From Fig.~\ref{Fig:ZPE_diff_functionals} we infer that even if the RPA+rSE method itself
were used to calculate the phonon frequencies, the picture would not change.
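The second-order Birch-Murnaghan fitting mentioned in the caption of Fig.~\ref{Fig:ZPE_diff_functionals} is particularly simple because, with $B_0'$ fixed at 4, $E(V)=E_0+\tfrac{9}{8}B_0V_0[(V_0/V)^{2/3}-1]^2$ is exactly quadratic in $x=V^{-2/3}$. A sketch (arbitrary units; not the actual fitting code used in this work):

```python
import numpy as np

def fit_birch_murnaghan_2nd(volumes, energies):
    """Fit E(V) = E0 + (9/8) B0 V0 [(V0/V)^(2/3) - 1]^2, which is a
    quadratic a*x^2 + b*x + d in x = V^(-2/3); return (E0, B0, V0)."""
    x = np.asarray(volumes, float) ** (-2.0 / 3.0)
    a, b, d = np.polyfit(x, np.asarray(energies, float), 2)
    v0 = (-2.0 * a / b) ** 1.5   # since V0^(2/3) = -2a/b
    c = b * b / (4.0 * a)        # c = (9/8) B0 V0
    return d - c, 8.0 * c / (9.0 * v0), v0
```

Fitting a synthetic curve generated from known $(E_0,B_0,V_0)$ recovers the parameters essentially exactly, since the model is a polynomial in the transformed variable.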
\begin{figure}[h]
\centering
\includegraphics[width=0.45\textwidth,clip]{PV_phonon_PBE_LDA_MBD.eps}
\caption{\label{Fig:PV_diff_functionals} Calculated $P$-$V$ curves at 300 K for the FCC phase of the Ar crystal, as derived from the Helmholtz free energy. The electronic part of the Helmholtz free energy is calculated using RPA+rSE, whereas the phonon part is evaluated using
three different functionals: LDA, PBE, and PBE+MBD. Calculations are done using the 6-atom supercell and the results are extrapolated to the complete basis set (CBS) limit. Three sets of experimental data \cite{FingerStructure,ross1986equation,Errandonea2006Structural} are also presented for comparison (cf. Fig. 3 in the main text).}
\end{figure}
In Fig.~3 of the main text, we presented the $P$-$V$ curve of the FCC Ar crystal in the high-pressure regime. The curve is determined from the
Helmholtz free energy with respect to the volume $V$. The static (electronic) part of the free energy is calculated using (RPA+rSE)@PBE, whereas
the lattice vibration contributions are estimated from the phonon frequencies calculated using the PBE functional. Excellent agreement with the
experimental data is observed. However, there remains the question what happens if the phonon frequencies are calculated using a different
functional. In Fig.~\ref{Fig:PV_diff_functionals} we present the $P$-$V$ curves with the phonon contributions calculated using LDA, PBE,
and PBE+MBD functionals. The three curves are practically indistinguishable at the scale of Fig.~\ref{Fig:PV_diff_functionals}. This benchmark test
indicates that the obtained $P$-$V$ curve is insensitive to the functional used to evaluate the phonon spectrum, and that our results
are not biased by the choice of the PBE functional.
Presented in Table~\ref{Tab:transition_pressure_PBE} are the transition pressures and volumes along the boundary line between
the FCC and HCP phases at seven equally spaced
temperatures (0 K, 50 K, $\cdots$, 300 K). The phase boundary line is determined by equating the Gibbs free energies of the FCC and HCP phases.
The phonon contributions to the Gibbs free energies are evaluated using both the PBE and PBE+MBD functionals. An inspection of the results presented in the left- and right-hand sides
of Table~\ref{Tab:transition_pressure_PBE} indicates that using different functionals to evaluate the phonon spectra has insignificant influence
on the actual phase boundary.
\begin{table*}
\caption{Transition pressures and volumes along the phase boundary line in the $T$-$P$ phase diagram of the Ar crystal. The electronic part of the Gibbs free energy used to determine the phase boundary is calculated using (RPA+rSE)@PBE, whereas the phonon contributions are calculated using the PBE (left side) and PBE+MBD functionals (right side).}
\begin{tabular}{@{\extracolsep{\fill}}lcccccc}
\hline\hline\\[-1.5ex]
\multirow{2}{*}{Temperature(K)~~~} & \multicolumn{2}{c}{RPA+rSE+phonon@PBE} & &
\multicolumn{2}{c}{RPA+rSE+phonon@MBD} \\[0.2ex]
\cline{2-3} \cline{5-6} \\[-1.0ex]
& ~~~~~$P_{t}$(GPa)~~~~~ & ~~~~~~$V_{t}$(${\AA}^{3}$)~~~~~ & ~~~ & ~~~~~$P_{t}$(GPa)~~~~~ & ~~~~~~$V_{t}$(${\AA}^{3}$)~~~~~ \\[0.5ex]
\hline \\[-0.5ex]
0 & 31.39 & 16.96 & & 31.79 & 16.91\\
50 & 31.60 & 16.93 & & 31.94 & 16.89\\
100 & 32.42 & 16.84 & & 32.85 & 16.79\\
150 & 33.47 & 16.73 & & 33.83 & 16.68\\
200 & 34.58 & 16.62 & & 35.03 & 16.57\\
250 & 35.81 & 16.50 & & 36.28 & 16.45\\
300 & 37.02 & 16.39 & & 37.58 & 16.33\\
\hline \\[-1.5ex]
\end{tabular}
\label{Tab:transition_pressure_PBE}
\end{table*}
In Fig.~\ref{Fig:Coh_E_convergence} we present the (RPA+rSE)@PBE and PBE cohesive energies as a function of the volume as calculated with different NAO-VCC-$n$Z \cite{IgorZhang/etal:2013}
basis sets.
One can see that achieving basis set convergence for the correlated method (in the present case RPA+rSE) is much more demanding than for the PBE functional, which requires
only occupied-state information. Whereas the PBE results calculated using NAO-VCC-3Z (N3Z) and NAO-VCC-4Z (N4Z) are almost on top of each other,
the RPA+rSE results show a sizable shift towards stronger bonding when going from N3Z to N4Z.
Hence, in the present work, we extrapolate the RPA+rSE results to the complete basis set (CBS) limit based on the N3Z and N4Z
results, whereas for PBE the N4Z results are used.
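One common two-point CBS extrapolation assumes the correlation energy converges as $1/n^3$ in the cardinal number $n$ of the basis set, giving $E_\text{CBS}=(n_2^3E_{n_2}-n_1^3E_{n_1})/(n_2^3-n_1^3)$; whether this exact form was used for the CBS(3,4) values here is an assumption of this sketch:

```python
def cbs_two_point(e_small, e_large, n_small=3, n_large=4):
    """Two-point 1/n^3 extrapolation of basis-set-dependent energies.
    For NAO-VCC-3Z/4Z this gives E_CBS = (64*E_4Z - 27*E_3Z)/37."""
    w = n_large ** 3 / (n_large ** 3 - n_small ** 3)
    return w * e_large - (w - 1.0) * e_small
```

By construction, the formula is exact for any energy of the form $E_n = E_\infty + A/n^3$.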
\begin{figure}[h]
\centering
\includegraphics[width=0.45\textwidth,clip]{primitive_PBE_RPA_extrapolate.eps}
\caption{\label{Fig:Coh_E_convergence} Calculated PBE and (RPA+rSE)@PBE cohesive energy curves at 0 K for the FCC phase of the Ar crystal, obtained
using NAO-VCC-3Z and NAO-VCC-4Z basis sets. For (RPA+rSE)@PBE, the two-point extrapolated results (CBS(3,4)) are also plotted.}
\end{figure}
Figure~\ref{Fig:E_diff} compares two different strategies to compute the energy differences
between the FCC and HCP crystal structures.
In the first strategy, the energy differences are calculated using the respective primitive unit cells for both structures.
In this case, as panel (a) shows, after converging $\Delta E = E_\text{fcc}-E_\text{hcp}$ with respect to the $\bfk$-grid mesh, one obtains
a positive value of $\Delta E = 21.6~\mu$Ha/atom $\approx$ 0.6 meV/atom, favoring the HCP structure. However, this is an artifact: since
HCP has a larger primitive unit cell than FCC, using the same $\bfk$-grid mesh for both structures may easily produce biased results,
especially when highly accurate results are sought. In the second strategy, $1\times 1 \times 6$ computational supercells (with $ABCABC$ stacking for FCC and $ABABAB$ stacking for HCP) are
used for both structures. Now, by converging the energy differences with respect to the $\bfk$-grid mesh, one obtains
$\Delta E = - 0.4~\mu$Ha/atom, marginally favoring the FCC structure. Note that this is the result obtained using an under-converged
NAO-VCC-3Z basis set. After extrapolating the results to the CBS limit, one ends up with $\Delta E \approx - 3.5~\mu$Ha/atom, in favor
of the FCC structure.
\begin{figure*}
\subfigure[Primitive unit cell]{
\includegraphics[width=0.45\textwidth]{energy_diff_RPA_k_grid.eps}
}
\subfigure[Computational $1\times 1 \times 6$ supercell]{
\includegraphics[width=0.45\textwidth]{energy_diff_RPA_k_grid_close_packed.eps}
}
\caption{\label{Fig:E_diff} Energy differences between the FCC and HCP structures with increasing $\bfk$-grid mesh: (a) energy differences based on the primitive-cell calculations; (b) energy differences based on 6-atom supercells. An atomic volume of $V=38~{\AA}^{3}$ (close to the equilibrium volume) and the NAO-VCC-3Z basis set
are used in both types of calculations.}
\end{figure*}
\begin{figure}[h]
\centering
\includegraphics[width=0.45\textwidth,clip]{helm_RPA_LDA_PBE_MBD.eps}
\caption{\label{Fig:Helmholtz} Calculated Helmholtz free energy of the FCC Ar crystal. Results from the purely electronic RPA+rSE energy at 0 K and from RPA+rSE combined with phonon contributions
evaluated using different functionals (LDA, PBE, PBE+MBD) are shown.}
\end{figure}
In Fig.~\ref{Fig:Helmholtz} we present the Helmholtz free energies (cf. Eq.~5 of the main text) as a function of the volume calculated at 300 K.
Again, the electronic part of the Helmholtz free
energy is calculated at the level of (RPA+rSE)@PBE, whereas the phonon contributions are evaluated using the LDA, PBE, and PBE+MBD functionals. Note that
the (negative) slope of these Helmholtz free energy curves yields the $P$-$V$ curves presented in Fig.~\ref{Fig:PV_diff_functionals}.
For comparison, the
pure electronic (RPA+rSE)@PBE total energy is also plotted. Figure~\ref{Fig:Helmholtz} reveals that the phonon contributions mainly play a role in the
low pressure (large volume) regime. In the high pressure regime, the electronic part of the Helmholtz free energy dominates,
and it is important to adopt the RPA+rSE method in order to achieve quantitative agreement with the experimental measurements of the equation of state (cf. Fig.~\ref{Fig:PV_diff_functionals}).
In the low pressure regime, the LDA phonons yield a noticeable difference in the Helmholtz free energy compared to the PBE phonons, whereas the MBD
correction brings negligible changes. However, even the small differences between the LDA and PBE phonon contributions largely disappear when one
looks at the derivatives of these curves, i.e., the $P$-$V$ curves shown in Fig.~\ref{Fig:PV_diff_functionals}.
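The relation between the Helmholtz free energy curves and the $P$-$V$ curves, $P=-\partial F/\partial V$, can be sketched numerically; the toy $F(V)$ below stands in for the actual data:

```python
import numpy as np

def pressure_from_helmholtz(V, F):
    """P(V) = -dF/dV, evaluated by central differences on a volume grid."""
    return -np.gradient(F, V)

V = np.linspace(10.0, 30.0, 201)
F = 50.0 / V                         # toy free energy, arbitrary units
P = pressure_from_helmholtz(V, F)    # should approximate 50 / V^2
```

Because differentiation suppresses constant offsets, small functional-dependent shifts in $F(V)$ largely cancel in $P(V)$, consistent with the behavior discussed above.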
\end{appendix}
\section{Introduction}
\subsection{Modified Bakry-\'Emery Ricci curvatures}
\ \ Let $(M,g)$ be an $n$-dimensional smooth complete Riemannian manifold with its volume measure $\mathfrak{m} :=\text{\rm vol}_g$ and
$V$ a $C^1$-vector field.
Throughout this paper, we assume that the manifold $M$ has no boundary and is connected.
We consider a diffusion operator $\Delta_V:=\Delta- \langle V, \nabla\cdot\rangle $. In \cite{Tadano,TadanoNegative,Wylie:WarpedSplitting}, $\Delta_V$ is called the \emph{$V$-Laplacian} on $(M, g)$.
For any constant $m\in]-\infty,+\infty]$, we introduce the symmetric $2$-tensor
\begin{align*}
{\Ric}_{m,n}(\Delta_V)(x)={\Ric}(x)+\frac12\mathcal{L}_Vg(x)-\frac{\;V^*(x)\otimes V^*(x)\;}{m-n},\quad x\in M,
\end{align*}
and call it the \emph{modified $m$-Bakry-\'Emery Ricci tensor} of the diffusion operator
$\Delta_V$. Here $\mathcal{L}_Vg(X,Y):=\langle \nabla_XV,Y\rangle +\langle \nabla_YV,X\rangle $ is the Lie derivative of $g$ with respect to $V$ and $V^*$ is the dual $1$-form of $V$ coming from $g$.
For any $m\in]-\infty,+\infty]$ and a continuous function $K:M\to\mathbb{R}$, we say that $(M, g, V)$ (or $\Delta_V$) satisfies the ${\rm CD}(K, m)$-condition if
\begin{align*}
{\rm Ric}_{m, n}(\Delta_V)(x)\geq K(x)\quad \text{ for all }\quad x\in M.
\end{align*}
When $m=n$, we always assume that $V$ vanishes so that
${\Ric}_{n,n}(\Delta_V)={\Ric}$.
When $m\geq n$, $m$ is regarded as an upper bound for the dimension of the diffusion operator $\Delta_V$.
Throughout this paper, we focus on the case $m\leq1$ and assume $n>1$ if $m=1$ and $V$ does not vanish (i.e., $V\equiv0$ and $\Delta_V=\Delta$ if $m=n=1$).
Consequently, for $m\leq1$, we always assume $n>m$ provided $V$ does not vanish.
Note that, for $m\leq1$, $N\in[n,+\infty[$, and for any $x\in M$, we have
\begin{align*}
{\Ric}_{1,n}(\Delta_V)(x)\geq {\Ric}_{m,n}(\Delta_V)(x)\geq
{\Ric}_{\infty,n}(\Delta_V)(x)\geq {\Ric}_{N,n}(\Delta_V)(x).
\end{align*}
If we only consider constant lower bounds for the above Ricci tensors, then ${\Ric}_{1,n}(\Delta_V)\geq {\rm const.}$ is the weakest condition among them. But if the lower bound of the Ricci curvature is allowed to depend on the parameter $m$, as in \eqref{eq:RiciLowBddStrong} below, the corresponding condition is no longer the weakest one.
In the literature, there have been intensive works on the study of geometry and analysis of weighted complete Riemannian manifolds
with the CD$(K, m)$-condition for $m\geq n$ and $K\in \mathbb{R}$
(or $K\in C(M, \mathbb{R})$) in the case
$V=\nabla\phi$ for $\phi\in C^2(M)$. See
\cite{AN,BakryLect1581,BGL_book,BE1,BL,
BQ1,BQ2,FanLiZhang,FLL,Xdli:Liouville, Li12,LL15,LL17,Lot,Qian,WeiWylie},
and references therein. In recent years, several papers have studied weighted Riemannian manifolds with $m$-Bakry-\'Emery Ricci curvature for
$m<0$ or $m<1$ with $V=\nabla\phi$ for a $C^2$-function $\phi$. For $V=\nabla\phi$, we write $L:=\Delta_{\nabla\phi}$
in this introduction.
In ~\cite{OhtaTaka}, Ohta and Takatsu proved the $K$-displacement convexity of the R\'enyi type
entropy under the $m$-Bakry-\'Emery Ricci tensor condition ${\rm Ric}_{m, n}(L)\geq K$, i.e., the ${\rm CD}(K, m)$-condition, for $m\in]\!-\infty,0\,[\,\cup\,[\,n,+\infty\,[$ and $K\in \mathbb{R}$. After that, Ohta~\cite{Ohta:KN} and Kolesnikov-Milman~\cite{KolesMilman} simultaneously treated
the case $m<0$. Ohta~\cite{Ohta:KN} extended the Bochner inequality, eigenvalue estimates, and the Brunn-Minkowski inequality under the lower bound for
${\Ric}_{m,n}(L)$ with $m<0$.
Kolesnikov-Milman~\cite{KolesMilman} also proved the Poincar\'e and the Brunn-Minkowski inequalities for manifolds with boundary
under the lower bound for
${\Ric}_{m,n}(L)$ with $m<0$.
In \cite[Theorem~4.10]{Ohta:KN}, Ohta also proved that the lower bound of ${\Ric}_{m,n}(L)(x)$ with
$m<0$ is equivalent to the curvature dimension condition in terms of
mass transport theory as defined by Lott-Villani~\cite{LV2} and Sturm~\cite{St:geomI, St:geomII}. In ~\cite{Wylie:WarpedSplitting}, Wylie proved a warped product version of the Cheeger-Gromoll splitting theorem under the CD$(0, 1)$-condition. He also proved an isometric product version of the Cheeger-Gromoll splitting theorem under the CD$(0, m)$-condition with $m<1$ and the $(V,1)$-completeness condition.
In ~\cite{WylieYeroshkin},
W.~Wylie and D.~Yeroshkin proved a Laplacian comparison theorem, a Bishop-Gromov volume comparison theorem, Myers' theorem and Cheng's maximal diameter theorem on manifolds with
$m$-Bakry-\'Emery Ricci curvature condition for $m=1$ with $V=\nabla \phi$ for a $C^2$-function $\phi$. Recently, Milman~\cite{Milman17} extended the Heintze-Karcher Theorem, isoperimetric inequality, and functional inequalities under the lower bound for ${\Ric}_{m,n}(L)(x)$ with $m < 1$.
In \cite{KL}, the first named author and X.-D.~Li established
the Laplacian comparison theorem on weighted complete Riemannian manifolds with the ${\rm CD}(K, m)$-condition with $m\leq 1$ for
$V=\nabla\phi$ with $\phi\in C^2(M)$, and obtained (weighted) Myers' theorem, Bishop-Gromov volume comparison theorem, Ambrose-Myers' theorem, Cheeger-Gromoll type splitting theorem, stochastic completeness and Feller property of $L$-diffusion process under optimal conditions on the $m$-Bakry-\'Emery Ricci
tensor for $m\leq1$
over the weighted complete Riemannian manifolds.
It is important to know whether one can establish the Laplacian comparison theorem on such Riemannian manifolds with the ${\rm CD}(K, m)$-condition for $m\leq 1$ and $K\in \mathbb{R}$ for a general $C^1$-vector field $V$. In this paper, we prove such a comparison theorem for $K$ being a continuous function depending on a re-parametrized distance
function on $M$. As consequences, we give the optimal conditions on the modified $m$-Bakry-\'Emery Ricci
tensor for $m\leq1$ so that (weighted) Myers' theorem, Bishop-Gromov volume comparison theorem,
Ambrose-Myers' theorem, Cheng's maximal diameter theorem,
and the Cheeger-Gromoll type splitting theorem hold on weighted complete Riemannian manifolds. These geometric results are complete extensions of the case for $V=\nabla \phi$ proved in the first part of \cite{KL}. When $m<1$, our results are new in the literature.
\par
\bigskip
\noindent
\emph{Acknowledgment.} The authors would like to
thank Dr.~Yohei Sakurai for his significant comments on the
draft of this paper.
They would also like to thank
the anonymous referee,
whose comments helped to improve the quality
of this paper considerably.
\section{Main result}
Let $V$ be a $C^1$-vector field on a Riemannian manifold $(M,g)$. Although there may be no function $\phi$ satisfying $V=\nabla\phi$ in general, we can still make sense of such bounds by integrating $V$ along geodesics.
Define
\begin{align*}
V_{\gamma}(r):&=\int_0^r\langle V_{\gamma_s},\dot{\gamma}_s\rangle \d s
\end{align*}
for a unit speed geodesic $\gamma:[0,T[\to M$,
and
\begin{align}
\phi_V(x):&=\inf\left\{\left.
\int_0^{r_p(x)}\langle V_{\gamma_s},\dot{\gamma}_s\rangle \d s
\;\;\right|\!\!\!\!\! \left.\begin{array}{ll}&\gamma: \text{unit speed geodesic}\\ & \gamma_0=p, \gamma(r_p(x))=x\end{array}\right.\right\}.\label{eq:ModifiedPhi}
\end{align}
Note that $V_{\gamma}$ depends on the choice of the unit speed geodesic $\gamma$, whereas
$\phi_V(x)$ depends on $p$; it satisfies $\phi_V(p)=0$ and is well-defined for all $x\in M$.
It is easy to see that $\phi_V(x)=\int_0^{r_p(x)}Vr_p(\gamma_s)\d s$ under $x\notin {\rm Cut}(p)$, where
$\gamma$ is the unique unit speed geodesic with $\gamma_0=p$ and $\gamma(r_p(x))=x$.
Hence $\phi_V$ is a continuous function on $({\rm Cut}(p)\cup\{p\})^c$.
Consequently, $\phi_V$ is an $\mathfrak{m} $-measurable function.
Moreover, for $x\notin {\rm Cut}(p)$, $\phi_V(x)=V_{\gamma}(r_p(x))$ for the unique unit speed geodesic $\gamma$ with $\gamma_0=p$ and $\gamma(r_p(x))=x$. Hence $\phi_V(\gamma_t)=V_{\gamma}(t)$ for any unit speed geodesic $\gamma$ with $\gamma_0=p$ and $\gamma_t\notin {\rm Cut}(p)$.
When $V=\nabla\phi$ is a gradient vector field for some $\phi\in C^2(M)$, one can see that
\begin{align*}
V_{\gamma}(t)&=\int_0^t\langle \nabla\phi,\dot{\gamma}_s\rangle \d s
=\int_0^t\frac{\d}{\d s}\phi(\gamma_s)\d s=\phi(\gamma_t)-\phi(\gamma_0).
\end{align*}
Throughout this paper,
we fix a point $p\in M$ and a constant $C_p>0$, which may depend on $p$.
For $x\in M$, we define
\begin{align*}
s_p(x):=\inf\left\{\left. C_p
\int_0^{r_p(x)}e^{-\frac{2V_{\gamma}(t)}{n-m}}\d t\;\;\right|\left.\begin{array}{ll}&\!\!\!\!\gamma: \text{unit speed geodesic}\\ &\!\!\!\! \gamma_0=p, \gamma(r_p(x))=x\end{array}\right.\right\}.
\end{align*}
If $(M,g)$ is complete, then $s_p(x)$ is finite and
well-defined from the basic properties of Riemannian geodesics.
Let $s(p,q):=s_p(q)$ for $p,q\in M$. If $q$ is not a cut point of $p$, then there is a unique minimal geodesic from $p$ to $q$ and $s_p$ is smooth in a neighborhood of $q$ as
can be computed by pulling the function back by the exponential map
at $p$. Note that $s(p,q)\geq0$, with equality if and only if $p=q$. But $s(p,q)=s(q,p)$ does not hold in general.
If $V=\nabla\phi$ for some $\phi\in C^2(M)$ and we set
$C_p=\exp\left(-\frac{\;2\phi(p)\;}{n-m} \right)$ in the definition of $s_p(x)$ with $p$ being arbitrary, then one can see that
$s(p,q)=s(q,p)$ for $p,q\in M$.
However,
$s(p,q)$ does not necessarily satisfy the triangle inequality.
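To make the definitions of $V_\gamma$ and $s_p$ concrete, consider the simplest setting of Euclidean space, where minimal geodesics are straight lines and $V=\nabla\phi$. The following numerical sketch (illustrative only, not part of the paper's arguments) checks the symmetry $s(p,q)=s(q,p)$ for a gradient field with the choice $C_p=e^{-2\phi(p)/(n-m)}$:

```python
import numpy as np

def s_distance(p, q, phi, n, m, samples=20001):
    """s(p,q) = C_p * int_0^{|q-p|} exp(-2 V_gamma(t)/(n-m)) dt along the
    straight-line geodesic in Euclidean space, with V = grad(phi).
    With C_p = exp(-2 phi(p)/(n-m)) the integrand reduces to
    exp(-2 phi(gamma_t)/(n-m))."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    r = np.linalg.norm(q - p)
    t = np.linspace(0.0, 1.0, samples)
    pts = p[None, :] + t[:, None] * (q - p)[None, :]
    integrand = np.exp(-2.0 * phi(pts) / (n - m))
    dt = (t[1] - t[0]) * r
    return float(np.sum((integrand[:-1] + integrand[1:]) * 0.5 * dt))

phi = lambda x: x[:, 0] ** 2 - 0.5 * x[:, 1]   # a smooth toy potential
p, q = [0.0, 0.0], [1.0, 2.0]
s_pq = s_distance(p, q, phi, n=2, m=1)
s_qp = s_distance(q, p, phi, n=2, m=1)
```

Reversing the geodesic samples the same integrand values in reverse order, so the two trapezoidal sums agree, mirroring the symmetry argument above.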
\begin{definition}\label{df:phicompletness}
{\rm Let $(M,g)$ be an $n$-dimensional complete Riemannian manifold and $V$ a $C^1$-vector field. Fix $p\in M$. Then we say that $(M,g, V)$ is \emph{$(V,m)$-complete at $p$} if
\begin{align}
\varlimsup_{r\to+\infty}\inf_{L(\gamma)=r}\int_0^re^{-\frac{\;2V_{\gamma}(t)\;}{n-m}}\d t=+\infty,\label{eq:phimcomplete}
\end{align}
where the infimum is taken over all minimizing unit speed geodesics $\gamma$ with respect to the metric $g$ such that $\gamma_0=p$. We say that $(M,g,V)$ is \emph{$(V,m)$-complete} if it is $(V,m)$-complete at $p$ for all $p\in M$.}
\end{definition}
\begin{remark}\label{rem:VMcomplete}
{\rm
\begin{enumerate}
\item If $V_{\gamma}$ is upper bounded for any unit speed geodesic $\gamma$ with $\gamma_0=p$,
then $(M,g,V)$ is always $(V,m)$-complete at $p$ for all $m\leq1$. In particular, if there exists a non-negative integrable function $f$ on $[0,+\infty[$ such that $\langle V,\nabla r_p\rangle _x\leq f(r_p(x))$, then $V_{\gamma}(r)\leq\int_0^rf(t)\d t\leq\int_0^{\infty}f(t)\d t<\infty$ so that $(M,g,V)$ is always $(V,m)$-complete at $p$ for all $m\leq1$.
\item If $M$ is compact, then $(M,g,V)$ is always $(V,m)$-complete for $m\leq1$. Indeed,
in that case the set $G_r:=\{\gamma\mid \gamma\text{ is a unit speed minimal geodesic}, L(\gamma)=r\}$ is empty for sufficiently large $r > 0$. This implies \eqref{eq:phimcomplete}.
\item If there exists a non-negative locally integrable function $f$ on $[0,+\infty[$ satisfying $f(t)\leq C/t$ on $[1,+\infty[$ for some $C\in]0,(n-m)/2]$ and $\langle V,\nabla r_p\rangle \leq f(r_p)$, then
$(V,m)$-completeness at $p$ holds for all $m\leq1$.
Here we assume $n>1$ for $m=1$.
In fact, we see for $r>1$
\begin{align*}
\inf_{L(\gamma)=r}\int_0^re^{-\frac{\;2V_{\gamma}(t)\;}{n-m}}\d t
&\geq
\int_1^re^{-\frac{\;2\int_0^tf(s)\d s\;}{n-m}}\d t\\
&\geq e^{-\frac{\;2\int_0^1f(s)\d s\;}{n-m}}
\int_1^r e^{-\frac{\;2\int_1^tf(s)\d s\;}{n-m}}\d t\\
&\geq e^{-\frac{\;2\int_0^1f(s)\d s\;}{n-m}}
\int_1^r e^{-\frac{\;2C\log t\;}{n-m}}\d t\\
&\geq e^{-\frac{\;2\int_0^1f(s)\d s\;}{n-m}}
\int_1^r\frac{\d t}{\;t^{\frac{\;2C\;}{n-m}}\;}\to+\infty\quad \text{ as }\quad r\to\infty,
\end{align*}
where the infimum is taken over all minimizing unit speed geodesics $\gamma$ with $\gamma_0=p$.
\item The $(V,1)$-completeness at $p$ defined as in \cite[Definition~6.2]{Wylie:WarpedSplitting} implies the $(V,m)$-completeness at $p$ for every $m\leq1$
provided $V_{\gamma}\geq0$
for any unit speed geodesic $\gamma$ with $\gamma_0=p$.
The converse also holds under $V_{\gamma}\leq0$
for any unit speed geodesic $\gamma$ with $\gamma_0=p$.
\end{enumerate}
}
\end{remark}
\begin{lemma}\label{lem:phimcomplete}
Let $(M,g)$ be an $n$-dimensional complete non-compact Riemannian manifold and $V$ a $C^1$-vector field. Fix $p\in M$ and suppose that
$(M,g,V)$ is $(V,m)$-complete at $p$. Then, for any sequence $\{q_i\}$ in $M$ such that $d(p,q_i)\to+\infty$ as $i\to+\infty$,
$s(p,q_{i})\to+\infty$.
\end{lemma}
\begin{proof}
The proof is similar to that of \cite[Proposition~3.4]{WylieYeroshkin}. We omit it.
\end{proof}
\begin{remark}\label{rem:SpRp}
{\rm
Recall that $\phi_V$ depends on $p\in M$.
For a fixed $p\in M$, we set
$\underline{\phi}_V(r):=\inf_{B_r(p)}\phi_V$ and
$\overline{\phi}_V(r):=\sup_{B_r(p)}\phi_V$
for $r\in]0,+\infty[$. Then $\underline{\phi}_V(r)\leq0\leq
\overline{\phi}_V(r)$ for
$r>0$
and $\lim_{r\to0}\overline{\phi}_V(r)=\lim_{r\to0}\underline{\phi}_V(r)=0$.
If $x\notin {\rm Cut}(p)$, we have
$s_p(x)= C_p
\int_0^{r_p(x)}e^{-\frac{\;2\phi_V(\gamma_t)\;}{n-m}}\d t$ for the unique unit speed geodesic $\gamma$ with $\gamma_0=p$ and $\gamma(r_p(x))=x$. So $\lim_{x\to p}\frac{s_p(x)}{r_p(x)}= C_p$.
In particular,
\begin{align*}
C_p e^{-\frac{\;2\overline{\phi}_V(r_p(x))\;}{n-m}}r_p(x)\leq s_p(x)\leq C_p
e^{-\frac{\;2\underline{\phi}_V(r_p(x))\;}{n-m}}r_p(x)\quad\text{ for }\quad x\notin {\rm Cut}(p).
\end{align*}
}
\end{remark}
\subsection{Laplacian Comparison}
Let $\kappa:[0,+\infty[\to\mathbb{R}$ be a continuous function and ${\sf a}_{\kappa}$ the unique solution defined on the maximal interval $]0,\delta_{\kappa}[$ for $\delta_{\kappa}\in]0,+\infty]$ of the following Riccati equation
\begin{align}
-\frac{\d {\sf a}_{\kappa}}{\d s}(s)=\kappa(s)+{\sf a}_{\kappa}(s)^2\label{eq:RiccatiEq}
\end{align}
with the boundary conditions
\begin{align}
\lim_{s\downarrow 0}s\, {\sf a}_{\kappa}(s)=1,\label{eq:BdryCond}
\end{align}
and
\begin{align}
\lim_{s\uparrow \delta_{\kappa}}(s-\delta_{\kappa})\, {\sf a}_{\kappa}(s)=1\label{eq:BdryCondStrict*}
\end{align}
under $\delta_{\kappa}<\infty$.
\eqref{eq:BdryCond} yields
\begin{align}
\lim_{s\downarrow0}{\sf a}_{\kappa}(s)=+\infty.\label{eq:BdryCond0}
\end{align}
If $\delta_{\kappa}<\infty$, from \eqref{eq:BdryCondStrict*}, $\delta_{\kappa}$ is the explosion time of ${\sf a}_{\kappa}$ in the sense that
\begin{align}
\lim_{s\uparrow\delta_{\kappa}}{\sf a}_{\kappa}(s)=-\infty.\label{eq:BdryCond*}
\end{align}
Actually, ${\sf a}_{\kappa}(s)={\mathfrak{s}_{\kappa}'(s)}/{\mathfrak{s}_{\kappa}(s)}$, where $\mathfrak{s}_{\kappa}$ is the unique solution of the
Jacobi equation $\mathfrak{s}_{\kappa}''(s)+\kappa(s)\mathfrak{s}_{\kappa}(s)=0$ with $\mathfrak{s}_{\kappa}(0)=0$, $\mathfrak{s}_{\kappa}'(0)=1$, and
$\delta_{\kappa}=\inf\{s>0\mid \mathfrak{s}_{\kappa}(s)=0\}$.
We write ${\sf a}_{\kappa}(s)=\cot_{\kappa}(s)$.
Moreover, $]0,\delta_{\kappa}[\ni s\mapsto \cot_{\kappa}(s)$ is decreasing (resp.~strictly decreasing) provided $\kappa(s)$ is non-negative (resp.~positive) for all $s\in]0,\delta_{\kappa}[$ in view of
\eqref{eq:RiccatiEq}.
If $\kappa$ is a real constant, then
\begin{align*}
{\sf a}_{\kappa}(s)=\left\{\begin{array}{cc}\sqrt{\kappa}\cot(\sqrt{\kappa}s) & \kappa>0, \\ 1/s & \kappa=0, \\ \sqrt{-\kappa}\coth(\sqrt{-\kappa}s) & \kappa<0\end{array}\right.
\end{align*}
and $\delta_{\kappa}=\pi/\sqrt{\kappa^+}\leq+\infty$.
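For constant $\kappa$, the closed forms above can be checked directly against the Riccati equation \eqref{eq:RiccatiEq}, $-{\sf a}_{\kappa}'=\kappa+{\sf a}_{\kappa}^2$. A numerical sanity check (illustrative only):

```python
import math

def cot_kappa(kappa, s):
    """a_kappa(s) for constant kappa, as in the displayed closed forms."""
    if kappa > 0.0:
        r = math.sqrt(kappa)
        return r / math.tan(r * s)   # valid for 0 < s < pi/sqrt(kappa)
    if kappa == 0.0:
        return 1.0 / s
    r = math.sqrt(-kappa)
    return r / math.tanh(r * s)

def riccati_residual(kappa, s, h=1e-6):
    """Finite-difference residual of -a' = kappa + a^2 at the point s."""
    lhs = -(cot_kappa(kappa, s + h) - cot_kappa(kappa, s - h)) / (2.0 * h)
    return lhs - (kappa + cot_kappa(kappa, s) ** 2)
```

The residual should vanish (up to finite-difference error) for every sign of $\kappa$ inside the interval $]0,\delta_{\kappa}[$.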
Fix $m\in ]-\infty,1\,]$
and set $m_{\kappa}(s):=(n-m)\cot_{\kappa}(s)$. Then \eqref{eq:RiccatiEq} is equivalent to
\begin{align}
-\frac{\d m_{\kappa}}{\d s}(s)=(n-m)\kappa(s)+\frac{\;m_{\kappa}(s)^2\;}{n-m},\label{eq:RiccatiEqM}
\end{align}
and \eqref{eq:BdryCond} (resp.~\eqref{eq:BdryCondStrict*}) is equivalent to
$\lim_{s\downarrow 0}s\, m_{\kappa}(s)=n-m$ (resp.~$\lim_{s\uparrow \delta_{\kappa}}(s-\delta_{\kappa})\, m_{\kappa}(s)=n-m$ under $\delta_{\kappa}<\infty$).
In view of the uniqueness of the solution to \eqref{eq:RiccatiEq} with \eqref{eq:BdryCond0},
we have the scaling property ${\sf a}_{\kappa_{\alpha}}(s)=\frac{1}{\alpha}{\sf a}_{\alpha^2\kappa}(s/\alpha)$ for $\alpha>0$. Here $\kappa_{\alpha}(s):=\kappa(s/\alpha)$. In particular, ${\sf a}_{\kappa}(s)=\frac{1}{\alpha}{\sf a}_{\alpha^2\kappa}(s/\alpha)$ for $\alpha>0$ provided $\kappa$ is a constant.
\vspace{0.5cm}
Our first result is the following Laplacian comparison along unit speed geodesics on
weighted complete Riemannian manifolds $(M,g,V)$ under a
lower bound on the modified $m$-Bakry-\'Emery Ricci tensor
for $m\leq1$.
\begin{theorem}[Laplacian Comparison Theorem]\label{thm:GlobalLapComp}
Suppose that $(M,g)$ is an $n$-dimen\-sional complete smooth Riemannian manifold and $V$ is a $C^1$-vector field. Fix $p\in M$.
Take $R\in]0,+\infty]$.
Let $\phi_V$ be the function defined in \eqref{eq:ModifiedPhi}.
Suppose that
\begin{align}
{\Ric}_{m,n}(\Delta_V)_x(\nabla r_p,\nabla r_p)\geq(n-m)\kappa(s_p(x)) e^{-\frac{\;4\phi_V(x)\;}{n-m}} C_p^2 \label{eq:RiciLowBdd}
\end{align}
holds under $r_p(x)<R$ with $x\in ({\rm Cut}(p)\cup\{p\})^c$. Then
\begin{align}
(\Delta_V r_p)(x)\leq (n-m)\cot_{\kappa}(s_p(x))e^{-\frac{\;2\phi_V(x)\;}{n-m}} C_p. \label{eq:GloLapComp}
\end{align}
\end{theorem}
\begin{corollary}\label{cor:GlobalLapComp}
Suppose that $(M,g)$ is an $n$-dimensional complete smooth Riemannian manifold and $V$ is a $C^1$-vector field.
Fix $p\in M$ and assume $\delta_{\kappa}<\infty$. Then
\begin{align*}
\lim_{s_p(x)\uparrow \delta_{\kappa}}\Delta_V r_p(x)=-\infty.
\end{align*}
\end{corollary}
\begin{remark}
{\rm The sufficient condition \eqref{eq:RiciLowBdd} under $r_p(x)<R$ with $x\in ({\rm Cut}(p)\cup\{p\})^c$
for our
Laplacian
comparison theorem is weaker than the condition:
\begin{align}
{\Ric}_{m,n}(\Delta_V)(x)\geq(n-m)\kappa(s_p(x)) e^{-\frac{\;4\phi_V(x)\;}{n-m}} C_p^2 \;g_x
\label{eq:RiciLowBddStrong}
\end{align}
under $r_p(x)<R$, because $\nabla r_p(x)$ is defined only for
$x\notin{\rm Cut}(p)\cup\{p\}$. In particular, the CD$(K,m)$-condition for
$K(x)=(n-m)\kappa(s_p(x)) e^{-\frac{\;4\phi_V(x)\;}{n-m}} C_p^2 $ always implies that \eqref{eq:RiciLowBdd} holds for all
$x\in ({\rm Cut}(p)\cup\{p\})^c$.
}
\end{remark}
\begin{remark}
{\rm The inequality \eqref{eq:GloLapComp} is meaningful at $p$, because
$m_{\kappa}(0+)=+\infty$ and $\Delta_Vr_p(p)=\Delta r_p(p)=+\infty$ in view of the classical Laplacian comparison theorem for $\Delta$ under local upper sectional curvature bound (see \cite[Theorem~3.4.2]{Hsu:2001}). Moreover, the following inequality
\begin{align}
r_p(x)(\Delta_V r_p)(x)\leq (n-m)r_p(x)\cot_{\kappa}(s_p(x))e^{-\frac{\;2\phi_V(x)\;}{n-m}}C_p\label{eq:GloLapComp*}
\end{align}
is also meaningful at $p$. Indeed, the right hand side of
\eqref{eq:GloLapComp*} has the value $n-m$ at $x=p$ by Remark~\ref{rem:SpRp} and the left hand side has the value $n-1$ at $x=p$ by the classical Laplacian comparison theorem for $\Delta$ as noted above.
}
\end{remark}
\begin{remark}
{\rm Theorem~\ref{thm:GlobalLapComp}
generalizes \cite[Theorem~4.4]{WylieYeroshkin}.
}
\end{remark}
\subsection{Geometric consequences}
\begin{theorem}[Weighted Myers' Theorem]\label{thm:WeightedMyers}
Let $(M,g)$ be an $n$-dimensional complete Riemannian manifold and $V$ a $C^1$-vector field. Fix $p\in M$.
Assume that \eqref{eq:RiciLowBdd}
holds for all $x\in ({\rm Cut}(p)\cup\{p\})^c$
and $\delta_{\kappa}<\infty$.
Then $s(p,q)\leq \delta_{\kappa}$ for all $q\in M$.
\end{theorem}
\begin{corollary}\label{cor:WeightedMyers}
Let $(M,g)$ be an $n$-dimensional complete Riemannian manifold and $V$ a $C^1$-vector field. Fix $p\in M$ and assume $\delta_{\kappa}<\infty$. Assume that \eqref{eq:RiciLowBdd}
holds for all $x\in ({\rm Cut}(p)\cup\{p\})^c$ and $(M,g,V)$ is $(V,m)$-complete at $p$. Then $M$ is compact.
\end{corollary}
\begin{remark}
{\rm
\begin{enumerate}
\item Theorem~\ref{thm:WeightedMyers} (resp.~Corollary~\ref{cor:WeightedMyers}) generalizes \cite[Theorem~2.2]{WylieYeroshkin} (resp. \cite[Corollary~2.3]{WylieYeroshkin}).
\item Since $V_{\gamma}\leq0$ for any unit speed geodesic $\gamma$ with $\gamma_0=p$
implies the $(V,m)$-completeness at $p$, Corollary~\ref{cor:WeightedMyers} implies the compactness of $M$
provided $\delta_{\kappa}<\infty$, \eqref{eq:RiciLowBdd}
holds for $x\notin{\rm Cut}(p)\cup\{p\}$, and $V_{\gamma}\leq0$ for
any unit speed geodesic $\gamma$ with $\gamma_0=p$.
\end{enumerate}
}
\end{remark}
Based on Theorems~\ref{thm:GlobalLapComp} and \ref{thm:WeightedMyers}, we can deduce several fruitful geometric
results. Next,
we give two versions of the Bishop-Gromov type volume comparison.
The first one is for $\mu_V(A)=\int_Ae^{-\phi_V(x)}\mathfrak{m} (\d x)$ of metric annuli $A(p,r_0,r_1):=\{x\in M\mid r_0\leq r_p(x)\leq r_1\}$. The comparison in this case will be in terms of the quantities
\begin{align}
\overline{\nu}_p(\kappa,r_0,r_1)&:=\int_{r_0}^{r_1}\int_{\mathbb{S}^{n-1}}\mathfrak{s}_{\kappa}^{n-m}\left(\sup_{\eta}s_p(r,\eta)
\right)\d r\d\theta,
\qquad \overline{\nu}_p(\kappa,r_1):=\overline{\nu}_p(\kappa,0,r_1),
\label{eq:VolSpaceFormNormalUpper}\\
\underline{\nu}_p(\kappa,r_0,r_1)&:=\int_{r_0}^{r_1}\int_{\mathbb{S}^{n-1}}\mathfrak{s}_{\kappa}^{n-m}\left(\inf_{\eta}s_p(r,\eta)
\right)\d r\d\theta,
\qquad \underline{\nu}_p(\kappa,r_1):=\underline{\nu}_p(\kappa,0,r_1),
\label{eq:VolSpaceFormNormalLower}\\
\nu_p(\kappa,r_0,r_1)&:=\int_{r_0}^{r_1}\int_{\mathbb{S}^{n-1}}\mathfrak{s}_{\kappa}^{n-m}(s_p(r,\theta))\d r\d\theta,
\qquad \nu_p(\kappa,r_1):=\nu_p(\kappa,0,r_1)
\label{eq:VolSpaceFormNormal}
\end{align}
under $s_p(r_1,\theta)\leq \delta_{\kappa}$ for all $\theta\in\mathbb{S}^{n-1}$. Here
\begin{align*}
s_p(r,\theta):=C_p
\int_0^re^{-\frac{\;2V_\gamma(t)\;}{n-m}}\d t
\end{align*}
with $\theta=\dot{\gamma}_0$, and $\overline{\phi}_V(r)$ and $\underline{\phi}_V(r)$ are the functions defined in Remark~\ref{rem:SpRp}.
If $\phi_V$ is rotationally symmetric around $p$, i.e., if there exists a $C^2$-function ${\Phi}_V$ on $[0,+\infty[$ such that $\phi_V(x)={\Phi}_V(r_p(x))$, then $s_p(r,\theta)$ is independent of $\theta\in \mathbb{S}^{n-1}$.
The second one is for $\nu_V(A):=\int_Ae^{-\frac{\;2\phi_V(x)\;}{n-m}}\mu_V(\d x)=\int_Ae^{-\frac{\;n-m+2\;}{n-m}\phi_V(x)}\mathfrak{m} (\d x)$ of the sets $C(p,s_0,s_1):=\{x\in M\mid s_0\leq s_p(x)\leq s_1\}$ and $C_s(p):= C(p,0,s)$.
The set $C(p,s_0,s_1)$ also depends on $s_p$ and is quite different from annuli.
The comparison in this case will be in terms of the
quantities
\begin{align}
v(\kappa,s_0,s_1):=\int_{s_0}^{s_1}\int_{\mathbb{S}^{n-1}}\mathfrak{s}_{\kappa}^{n-m}(s)\d s\d\theta \quad \text{ and }\quad
v(\kappa,s_1):=v(\kappa,0,s_1)
\label{eq:VolSpaceForm}
\end{align}
under $s_1\leq \delta_{\kappa}$.
When $m\in ]-\infty,1]$ is an integer and $\kappa$ is a constant, \eqref{eq:VolSpaceForm} is the volume of
annuli in the simply connected space form of constant curvature $\kappa$ and dimension
$n-m+1$.
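For instance, if $\kappa>0$ is constant and $m=1$, then with the usual normalization $\mathfrak{s}_{\kappa}(s)=\sin(\sqrt{\kappa}s)/\sqrt{\kappa}$ the quantity \eqref{eq:VolSpaceForm} reduces to
\begin{align*}
v(\kappa,s_1)=\int_{0}^{s_1}\int_{\mathbb{S}^{n-1}}\mathfrak{s}_{\kappa}^{\,n-1}(s)\,\d s\,\d\theta
={\rm vol}(\mathbb{S}^{n-1})\int_{0}^{s_1}\left(\frac{\;\sin(\sqrt{\kappa}\,s)\;}{\sqrt{\kappa}}\right)^{n-1}\d s,
\end{align*}
the volume of the metric ball of radius $s_1$ in the $n$-dimensional sphere of constant curvature $\kappa$, in accordance with $n-m+1=n$.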
\begin{theorem}[Bishop-Gromov Volume Comparison]\label{thm:BGVol}
Fix $p\in M$ and $R\in]0,+\infty]$.
Suppose that $(M,g)$ is an $n$-dimensional complete smooth Riemannian manifold and $V$ is a $C^1$-vector field.
Let $\kappa:[0,+\infty[\to\mathbb{R}$ be a continuous function. Assume that \eqref{eq:RiciLowBdd}
holds for $r_p(x)<R$ with $x\in ({\rm Cut}(p)\cup\{p\})^c$. Then we have the following:
\begin{enumerate}
\item\label{item:BG1}
Suppose that $0\leq r_0< r_a\leq r_1$ and $0\leq r_0\leq r_b<r_1$. Then
\begin{align}
\frac{\;\mu_V(A(p,r_b,r_1))\;}{\mu_V(A(p,r_0,r_a))}\leq \frac{\;\overline{\nu}_p(\kappa,r_b,r_1)\;}{\underline{\nu}_p(\kappa,r_0,r_a)}\label{eq:BGAnnuliUpLow}
\end{align}
holds for $r_1<R$.
Assume further that $\phi_V$ is rotationally symmetric around $p$. Then
\begin{align}
\frac{\;\mu_V(A(p,r_b,r_1))\;}{\mu_V(A(p,r_0,r_a))}\leq \frac{\;\nu_p(\kappa,r_b,r_1)\;}{\nu_p(\kappa,r_0,r_a)}\label{eq:BGAnnuli}
\end{align}
holds for $r_1<R$, in particular, the function
\begin{align}
]0,R[\, \ni \,r\mapsto \frac{\;\mu_V(B_r(p))\;}{\nu_p(\kappa,r)}\label{eq:BG}
\end{align}
is non-increasing.
\item\label{item:BG2}
Suppose that $0\leq s_0<s_a\leq s_1$ and $0\leq s_0\leq s_b<s_1$. Then
\begin{align}
\frac{\;\nu_V(C(p,s_b,s_1))\;}{\nu_V(C(p,s_0,s_a))}\leq \frac{\;v(\kappa,s_b,s_1)\;}{v(\kappa,s_0,s_a)}\label{eq:BGAnnuli*}
\end{align}
holds for $s_1<S$.
In particular, the function
\begin{align}
]0,S[\, \ni \, s\mapsto \frac{\;\nu_V(C_s(p))\;}{v(\kappa,s)}\label{eq:BG*}
\end{align}
is non-increasing. Here $S=\inf_{\theta\in\mathbb{S}^{n-1}}s_p(R,\theta)$.
\end{enumerate}
\end{theorem}
\begin{remark}
{\rm \eqref{eq:BG} (resp.~\eqref{eq:BG*}) may not be bounded as $r\to0$ (resp.~$s\to0$) unless $m=1$. Note that the Bishop type inequality holds for $m=1$ (see
\cite[Corollary~4.6]{WylieYeroshkin}).
}
\end{remark}
\begin{corollary}\label{cor:BGVol}
Fix $p\in M$ and $R\in]0,+\infty]$.
Suppose that $(M,g)$ is an $n$-dimensional complete smooth Riemannian manifold and $V$ is a $C^1$-vector field.
Assume that
\begin{align*}
{\rm Ric}_{m,n}(\Delta_V)_x(\nabla r_p,\nabla r_p)\geq 0
\quad\text{ for }\quad r_p(x)<R\quad\text{ with }\quad x\notin {\rm Cut}(p)\cup\{p\}.
\end{align*}
Then
\begin{align}
\frac{\;\mu_V(B_{r_2}(p))\;}{\mu_V(B_{r_1}(p))}\leq
e^{2(\overline{\phi}_V(r_1)-\underline{\phi}_V(r_2))}\left(\frac{\;r_2\;}{r_1}\right)^{n-m+1}\quad\text{ for all }\quad 0<r_1<r_2<R\label{eq:BGBall}
\end{align}
holds.
\end{corollary}
\begin{theorem}[Ambrose-Myers' Theorem]\label{thm:AmbroseMyers}
Let $(M,g)$ be an $n$-dimensional complete Riemannian manifold and $V$ a $C^1$-vector field. Fix $p\in M$.
Assume that $(M,g,V)$ is $(V,m)$-complete at $p$.
Suppose that for every unit speed $($local minimizing$)$ geodesic $\gamma$
with $\gamma_0=p$, we have
\begin{align}
\int_0^{\infty}e^{\frac{\;2V_{\gamma}(t)\;}{n-m}}{\Ric}_{m,n}(\Delta_V)(\dot{\gamma}_t,\dot{\gamma}_t)\d t=+\infty.\label{eq:Ambrose}
\end{align}
Then $M$ is compact.
\end{theorem}
\begin{corollary}\label{cor:AmbroseMyersVbdd}
Let $(M,g)$ be an $n$-dimensional complete Riemannian manifold and $V$ a $C^1$-vector field. Fix $p\in M$.
Assume ${\Ric}_{m,n}(\Delta_V)\geq0$ on $M$.
Suppose that there exists a non-negative measurable function $f$ on $[0,+\infty[$ satisfying $\int_0^{\infty}f(s)\d s<+\infty$ and $\langle V,\nabla r_p\rangle \geq -f(r_p)$, and
for every unit speed $($local minimizing$)$ geodesic $\gamma$
with $\gamma_0=p$, we have
\begin{align}
\int_0^{\infty}{\Ric}_{m,n}(\Delta_V)(\dot{\gamma}_t,\dot{\gamma}_t)\d t=+\infty.\label{eq:Ambrose*}
\end{align}
Then $M$ is compact.
\end{corollary}
\begin{corollary}\label{cor:AmbroseMyers}
Let $(M,g)$ be an $n$-dimensional complete Riemannian manifold and $V$ a $C^1$-vector field. Fix
$p\in M$ and a constant $\kappa>0$.
Assume that $(M,g,V)$ is $(V,m)$-complete at $p$. Suppose that for every unit speed $($local minimizing$)$ geodesic $\gamma$ with $\gamma_0=p$, we have
\begin{align}
{\Ric}_{m,n}(\Delta_V)(\dot{\gamma}_t,\dot{\gamma}_t)\geq (n-m)\kappa e^{-\frac{\;4V_{\gamma}(t)\;}{n-m}} C_p^2. \label{eq:AmbroseRiccibounds}
\end{align}
Then $M$ is compact.
\end{corollary}
\begin{remark}
{\rm
\begin{enumerate}
\item
Theorem~\ref{thm:AmbroseMyers} is a version of Ambrose's Theorem (\cite{Ambrose}), which states that if for any (local minimizing) geodesic $\gamma$ emanating from a point $p\in M$,
$$
\int_0^{\infty}{\Ric}(\dot{\gamma}_t,\dot{\gamma}_t)\d t=+\infty,
$$
then $M$ is compact.
Cavalcante-Oliveira-Santos \cite{CavOliSantos} also proved the following different
version of Ambrose's Theorem (see \cite[Theorem~2.1]{CavOliSantos}):
Suppose that every (local minimizing) geodesic $\gamma$ emanating from $p$ satisfies
$$
\int_0^{\infty}{\Ric}_{m,n}(\Delta_{\nabla\phi})(\dot{\gamma}_t,\dot{\gamma}_t)\d t=+\infty
$$
under $m>n$ for $\phi\in C^2(M)$. Then $M$ is compact. Tadano~\cite[Theorem~14]{Tadano} extends \cite[Theorem~2.1]{CavOliSantos} to $\Delta_V$ with the modified $m$-Bakry-\'Emery Ricci tensor ${\rm Ric}_{m,n}(\Delta_V)$ under $m>n$.
Our Theorem~\ref{thm:AmbroseMyers} is different
from the above mentioned results.
Tadano~\cite[Theorem~25]{TadanoNegative} also proves a version of Ambrose's Theorem for
$\Delta_V$ with the modified $1$-Bakry-\'Emery Ricci tensor
${\rm Ric}_{1,n}(\Delta_V)$
under the conditions ${\rm Ric}_{1,n}(\Delta_V)>0$ and $|V|\leq ke^{-\ell r_p}$ for some $k\geq0$, $\ell>0$. So our condition in Corollary~\ref{cor:AmbroseMyersVbdd} is milder than the one in \cite[Theorem~25]{TadanoNegative}.
\end{enumerate}
}
\end{remark}
In the following theorem and its corollary, we assume $V=\nabla\phi$ for some
$\phi\in C^2(M)$ and set $C_p
=\exp\left(-\frac{\;2\phi(p)\;}{n-m} \right)$ for the definition of $s_p(x)$ with $p$ being an arbitrary point. As noted before, $s(p,q)$ is symmetric for any $p,q\in M$.
Let
$h=e^{-\frac{\;4\phi\;}{n-m}}g$ be the conformal change of the metric $g$. Then $s(p,q)$ is the smallest length in the $h$ metric of a minimal geodesic
between $p$ and $q$ in the $g$ metric. As such,
$d^{\,h}(p,q)\leq s(p,q)$ for any $q\in M$. So Theorem~\ref{thm:WeightedMyers} tells
us that the diameter of the metric $h$ is less than or equal to $\delta_{\kappa}$. For this conformal
diameter estimate we also obtain the following rigidity characterization.
\begin{theorem}[Cheng's Maximal Diameter Theorem]\label{thm:ChengDiamSphere}
Suppose that $(M,g)$, $n> 1$, is a complete Riemannian manifold and $\phi\in C^2(M)$. Fix $p,q\in M$.
Assume that $\delta_{\kappa}<\infty$, $\kappa$ is positive on $]0,\delta_{\kappa}[$, $\kappa(s)=\kappa(\delta_{\kappa}-s)$ for all $s\in[0,\delta_{\kappa}]$, and
\eqref{eq:RiciLowBdd}
holds for all $x\in ({\rm Cut}(p)\cup\{p\})^c$.
We further assume that \eqref{eq:RiciLowBdd}, with $p$ replaced by $q$,
holds for all $x\in ({\rm Cut}(q)\cup\{q\})^c$.
If $d^{\,h}(p,q)=
\delta_{\kappa}$,
then $m=1$,
$\phi$ is rotationally symmetric around $p$, i.e., $\phi$ is a function depending only on the radial coordinate $r$,
and
$g$ is a warped product metric of the form
\begin{align*}
g&=\d r^2+e^{\frac{\;2 \phi(r) +2\phi(0)\;}{n-1}}
\mathfrak{s}_{\kappa}^2(s(r))
g_{\mathbb{S}^{n-1}}, \quad 0\leq r\leq d(p,q),
\end{align*}
where $s(r)=\int_0^re^{-\frac{\;2 \phi (t)\;}{n-1}}\d t$
and $s(d(p,q))=
\delta_{\kappa}$.
\end{theorem}
\begin{corollary}\label{cor:ChengDiamSphere}
Suppose that $(M,g)$, $n> 1$, is a complete Riemannian manifold and $\phi\in C^2(M)$. Fix $p,q\in M$.
Assume that $\kappa$ is a positive constant and
\eqref{eq:RiciLowBdd}
holds for all $x\in ({\rm Cut}(p)\cup\{p\})^c$.
We further assume that \eqref{eq:RiciLowBdd}, with $p$ replaced by $q$,
holds for all $x\in ({\rm Cut}(q)\cup\{q\})^c$.
If $d^{\,h}(p,q)=
\pi/\sqrt{\kappa}$,
then $m=1$,
$\phi$ is rotationally symmetric around $p$, i.e., $\phi$ is a function depending only on the radial coordinate $r$,
and
$g$ is a warped product metric of the form
\begin{align*}
g&=\d r^2+e^{\frac{\;2 \phi(r) +2\phi(0)\;}{n-1}}\cdot
\frac{\;\sin^2({\sqrt\kappa}(s(r)))\;
}{\kappa}
g_{\mathbb{S}^{n-1}}, \quad 0\leq r\leq d(p,q),
\end{align*}
where $s(r)=\int_0^re^{-\frac{\;2 \phi(t)\;}{n-1}}\d t$
and $s(d(p,q))=
\pi/\sqrt{\kappa}$.
\end{corollary}
\begin{theorem}[Cheeger-Gromoll Splitting Theorem]\label{thm:Splitting}
Let $(M,g)$ be an $n$-dimen\-sional non-compact complete Riemannian manifold and $V$ a $C^1$-vector field.
Suppose that $(M,g,V)$ is $(V,m)$-complete
and $M$ contains a line.
Then, under the ${\rm CD}(0,m)$-condition
with $m<1$, $M$ is isometric to $\mathbb{R}\times N$ and $V$ depends only on $N$.
\end{theorem}
\begin{corollary}\label{cor:Splitting}
Let $(M,g)$ be an $n$-dimensional non-compact complete Riemannian manifold and $V$ a $C^1$-vector field.
Suppose that $V_{\gamma}\leq0$
for any unit speed geodesic $\gamma$
and $M$ contains a line.
Then, under the ${\rm CD}(0,m)$-condition
with $m<1$, $M$ is isometric to $\mathbb{R}\times N$ and $V$ depends only on $N$.
\end{corollary}
\begin{proof}
If $V_{\gamma}\leq0$
for any unit speed geodesic $\gamma$, then $(M,g,V)$ is $(V,m)$-complete for all $m\leq1$.
So the assertion easily follows from Theorem~\ref{thm:Splitting}.
\end{proof}
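The completeness step in the proof above can be made explicit. Assuming, as the proof does, that $(V,m)$-completeness at $p$ amounts to the divergence of the re-parametrized distance $s_p$ along every unit speed geodesic from $p$: if $V_{\gamma}\leq0$ and $m\leq1$, then $-2V_{\gamma}(t)/(n-m)\geq0$, so along any unit speed geodesic $\gamma$ with $\gamma_0=p$,
\begin{align*}
s_p(r)= C_p \int_0^r e^{-\frac{\;2V_{\gamma}(t)\;}{n-m}}\d t\;\geq\; C_p \int_0^r \d t= C_p\, r\longrightarrow+\infty\quad(r\to+\infty),
\end{align*}
hence $(M,g,V)$ is $(V,m)$-complete at $p$ for every $m\leq1$.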
\begin{remark}
{\rm Theorem~\ref{thm:Splitting} partially extends \cite[Corollary~6.7]{Wylie:WarpedSplitting}, in which the
${\rm CD}(0,m)$-condition
for $m<1$ and the
$(V,1)$-completeness of $(M,g,V)$ are assumed
for the isometric splitting $M=\mathbb{R}\times N$.
Note that
the $(V,m)$-completeness does not necessarily mean
the $(V,1)$-completeness, and it
is weaker than $(V,1)$-completeness if $V_{\gamma}\geq0$ for any unit speed geodesic $\gamma$.
}
\end{remark}
\section{Proof of Theorem~\ref{thm:GlobalLapComp}}
Recall the $V$-Laplacian $\Delta_Vu:=\Delta u-\langle V,\nabla u\rangle $. Letting $\lambda(r,\theta)= C_p^{-1}
e^{\frac{\;2V_{\gamma}(r)\;}{n-m}}\Delta_Vr_p(r,\theta)$, we find that
$\lambda$ satisfies the Riccati differential inequality in terms of the parameter $s$.
\begin{lemma}\label{lem:RiccatiDiferIneq}
Let $\gamma$ be a unit speed minimal geodesic with $\gamma_0=p$ and $\dot{\gamma}_0=\theta$.
Let $s$ be the
parameter $\d s= C_p e^{-\frac{\;2V_{\gamma}(r)\;}{n-m}}\d r$. Then
\begin{align}
\frac{\d\lambda}{\d s}\leq -\frac{\lambda^2}{n-m}-
C_p^{-2} e^{\frac{\;4V_{\gamma}(r)\;}{n-m}}\text{
\rm Ric}_{m,n}(\Delta_V)\left(\dot{\gamma}_r,\dot{\gamma}_r \right)\label{eq:lambda/ds}
\end{align}
and, in particular,
\begin{align}
\frac{\d\lambda}{\d r}\leq - C_p e^{-\frac{\;2V_{\gamma}(r)\;}{n-m}}
\frac{\;\lambda^2\;}{n-m}- C_p^{-1} e^{\frac{\;2V_{\gamma}(r)\;}{n-m}}\text{
\rm Ric}_{m,n}(\Delta_V)\left(\dot{\gamma}_r,
\dot{\gamma}_r \right)\label{eq:lambda/dr}
\end{align}
holds for $x=(r,\theta)\notin \text{\rm Cut}(p)\cup\{p\}$.
Moreover, if equality is achieved at a point, then $m=1$ and, at that point, $\nabla_{\nabla r_p}$ has at most one non-zero eigenvalue, which is of multiplicity $n-1$.
\end{lemma}
\begin{proof} We modify the proof of the Laplacian comparison theorem on weighted complete Riemannian manifolds with the CD$(K, 1)$-condition by Wylie and Yeroshkin \cite{WylieYeroshkin}.
The usual Bochner-Weitzenb\"ock formula for functions says that for any $u\in C^3(M)$,
\begin{align*}
\frac12\Delta |\nabla u|^2=|\nabla^2\,u|^2+{\Ric}(\nabla u,\nabla u)+\langle \nabla\Delta u,\nabla u\rangle .
\end{align*}
The Bochner-Weitzenb\"ock formula for the $V$-Laplacian and the $m$-Bakry-\'Emery
Ricci curvature is given by
\begin{align*}
\frac12 \Delta_V|\nabla u|^2&=|\nabla^2\,u|^2+{\Ric}_{\infty,n}(\Delta_V)(\nabla u,\nabla u)+\langle \nabla \Delta_V u,\nabla u\rangle \\
&=|\nabla^2\,u|^2+{\Ric}_{m,n}(\Delta_V)(\nabla u,\nabla u)-\frac{\;V^*\otimes V^*\;}{n-m}(\nabla u,\nabla u)+\langle \nabla \Delta_V u,\nabla u\rangle .
\end{align*}
Consider this equation with $u=r_p$ at an interior point of a minimizing geodesic
(so that $r_p$ is smooth in a neighborhood). Then $|\nabla r_p|=1$ in this neighborhood,
so that the left hand side is zero.
Now we claim that $\nabla_{\nabla r_p}\nabla r_p=0$,
i.e.,
$\nabla r_p$ is a null vector for $\nabla_{\nabla r_p}$.
For this, it suffices to show that for any smooth vector field $X$ on $M\setminus\{p\}$
\begin{align}
\langle \nabla_{\nabla r_p}\nabla r_p,X\rangle =0.\label{eq:Nablaradial}
\end{align}
This is true if $X$ is parallel to $\nabla r_p$, because
for $f\in C^{\infty}(M\setminus\{p\})$
\begin{align*}
\langle \nabla_{\nabla r_p}\!\nabla r_p,f\nabla r_p\rangle &=f
\langle \nabla_{\nabla r_p}\!\nabla r_p,\nabla r_p\rangle =
f\frac12(\nabla r_p)|\nabla r_p|^2=0.
\end{align*}
Moreover, \eqref{eq:Nablaradial} holds if $X$ is orthogonal to
$\nabla r_p$, because
\begin{align*}
\langle \nabla_{\nabla r_p}\nabla r_p,X\rangle &=\frac12(\nabla r_p)\langle \nabla r_p,X\rangle =\frac12(\nabla r_p)0=0.
\end{align*}
Hence $\nabla_{\nabla r_p}$ has at most $n-1$ non-zero eigenvalues and by the Cauchy-Schwarz inequality, it holds on $({\rm Cut}(p)\cup \{p\})^c$ that (see \cite{WylieYeroshkin})
\begin{align}
|{\rm Hess}\;r_p|^2=\|\nabla_{\nabla r_p}\|^2\geq \frac{\;(\Delta r_p)^2\;}{n-1}.\label{eq:HessTrace}
\end{align}
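The trace estimate \eqref{eq:HessTrace} is elementary: if $a_1,\dots,a_{n-1}$ denote the (at most $n-1$) non-zero eigenvalues of $\nabla_{\nabla r_p}$, then by the Cauchy-Schwarz inequality
\begin{align*}
(\Delta r_p)^2=\Big(\sum_{i=1}^{n-1}a_i\Big)^2\leq (n-1)\sum_{i=1}^{n-1}a_i^2=(n-1)\,\|\nabla_{\nabla r_p}\|^2,
\end{align*}
with equality if and only if $a_1=\dots=a_{n-1}$, i.e., a single non-zero eigenvalue of multiplicity $n-1$.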
Since $m\leq 1$, we have $\frac{\;(\Delta r_p)^2\;}{n-1}\geq\frac{\;(\Delta r_p)^2\;}{n-m}$, and hence
\begin{align}
0&\geq \frac{\;(\Delta r_p)^2\;}{n-m}+{\Ric}_{m,n}(\Delta_V)
\left(\nabla r_p,\nabla r_p \right)-\frac{1}{\;n-m\;}|\langle V,\nabla r_p\rangle |^2+\langle \nabla \Delta_Vr_p,\nabla r_p\rangle .\label{eq:Bochner}
\end{align}
This gives us the following inequality along $\gamma$,
\begin{align}
\frac{\d}{\d r}(\Delta_Vr_p)(r,\theta)&\leq -\frac{\;(\Delta r_p(r,\theta))^2\;}{n-m}-{\Ric}_{m,n}(\Delta_V)
\left(\dot{\gamma}_r,\dot{\gamma}_r \right)+\frac{1}{\;n-m\;}|\langle V,\nabla r_p\rangle (r,\theta)|^2.\label{eq:equaitonalonggama}
\end{align}
From this, we have
\begin{align*}
\frac{\d\lambda}{\d s}&= C_p^{-1} e^{\frac{\;2V_{\gamma}(r)\;}{n-m}}\frac{\d\lambda}{\d r}\\
&= C_p^{-2} e^{\frac{\;2V_{\gamma}(r)\;}{n-m}}\left\{\left(\frac{\d}{\d r}e^{\frac{\;2V_{\gamma}(r)\;}{n-m}} \right)\Delta_Vr_p(r,\theta) + e^{\frac{\;2V_{\gamma}(r)\;}{n-m}} \frac{\d}{\d r} \Delta_Vr_p(r,\theta)\right\}\\
&= C_p^{-2} e^{\frac{\;2V_{\gamma}(r)\;}{n-m}}\left\{e^{\frac{\;2V_{\gamma}(r)\;}{n-m}}\frac{2}{\;n-m\;}\cdot\frac{\;\partial V_{\gamma}(r)\;}{\partial r}\cdot \Delta_Vr_p(r,\theta)+e^{\frac{\;2V_{\gamma}(r)\;}{n-m}} \frac{\d}{\d r} \Delta_Vr_p(r,\theta)\right\}\\
&= C_p^{-2} e^{\frac{\;4V_{\gamma}(r)\;}{n-m}}\left\{ \frac{2}{\;n-m\;}\cdot\frac{\;\partial V_{\gamma}(r)\;}{\partial r}\cdot \Delta_Vr_p(r,\theta)+\frac{\d}{\d r} \Delta_Vr_p(r,\theta)\right\}
\end{align*}
\begin{align*}
&\leq \frac{ C_p^{-2} }{n-m}e^{\frac{\;4V_{\gamma}(r)\;}{n-m}}\left\{2\frac{\;\partial V_{\gamma}(r)\;}{\partial r}\Delta_Vr_p(r,\theta)-(\Delta r_p(r,\theta))^2 +|\langle V,\nabla r_p\rangle (r,\theta)|^2\right\}\\
&\hspace{5cm}- C_p^{-2} e^{\frac{\;4V_{\gamma}(r)\;}{n-m}}{\Ric}_{m,n}(\Delta_V)\left(\dot{\gamma}_r,\dot{\gamma}_r\right) \\
&= -\frac{ C_p^{-2} }{\;n-m\;}e^{\frac{\;4V_{\gamma}(r)\;}{n-m}}(\Delta_Vr_p(r,\theta))^2- C_p^{-2} e^{\frac{\;4V_{\gamma}(r)\;}{n-m}}{\Ric}_{m,n}(\Delta_V)\left(\dot{\gamma}_r,\dot{\gamma}_r\right)
\\
&= -\frac{1}{\;n-m\;}\left( C_p^{-1} e^{\frac{\;2V_{\gamma}(r)\;}{n-m}}\Delta_Vr_p(r,\theta)\right)^2-
C_p^{-2} e^{\frac{\;4V_{\gamma}(r)\;}{n-m}}{\Ric}_{m,n}(\Delta_V)\left(\dot{\gamma}_r,\dot{\gamma}_r\right)\\
&=-\frac{\lambda^2}{\;n-m\;}-
C_p^{-2} e^{\frac{\;4V_{\gamma}(r)\;}{n-m}}{\Ric}_{m,n}(\Delta_V)\left(\dot{\gamma}_r,\dot{\gamma}_r\right).
\end{align*}
Here we used \eqref{eq:equaitonalonggama} in the inequality above and
$\Delta_Vr_p=\Delta r_p-\langle V, \nabla r_p\rangle $ in the subsequent equality.
If equality holds in \eqref{eq:lambda/ds} at some $x=(r_0,\theta)\notin{\rm Cut}(p)\cup \{p\}$, then equality holds in
\eqref{eq:equaitonalonggama}, equivalently in \eqref{eq:Bochner}, at $x\notin {\rm Cut}(p)\cup \{p\}$, i.e.,
\begin{align*}
0&=\frac{\;(\Delta r_p)^2\;}{n-m}+{\Ric}_{m,n}(\Delta_V)
\left(\nabla r_p,\nabla r_p \right)-\frac{1}{\;n-m\;}|\langle V,\nabla r_p\rangle |^2+\langle \nabla \Delta_V r_p,\nabla r_p\rangle \\
&\geq \frac{\;(\Delta r_p)^2\;}{n-1}+{\Ric}_{m,n}(\Delta_V)
\left(\nabla r_p,\nabla r_p \right)-\frac{1}{\;n-m\;}|\langle V,\nabla r_p\rangle |^2+\langle \nabla \Delta_Vr_p,\nabla r_p\rangle
\end{align*}
holds at $x\notin {\rm Cut}(p)\cup \{p\}$.
This and $m\leq1$ yield
\begin{align*}
\frac{m-1}{\;(n-m)(n-1)\;}(\Delta r_p)^2(x)=0.
\end{align*}
Thus $m=1$ or $\Delta r_p(x)=0$.
Since the sectional curvature of $M$ is bounded above by some $\kappa_{{\varepsilon}}>0$ on a ball $B_{{\varepsilon}}(p)\subset {\rm Cut}(p)^c$, the usual Laplacian comparison theorem tells us that $\Delta r_p(x)\geq (n-1)\sqrt{\kappa_{{\varepsilon}}}\cot(\sqrt{\kappa_{{\varepsilon}}}r_p(x))>0$ for $0<r_p(x)<{\varepsilon}$. Therefore we obtain $m=1$; in particular, equality holds in \eqref{eq:HessTrace} at $x$. This implies that
$\nabla_{\nabla r_p}$ at $x$ has at most one non-zero eigenvalue of multiplicity $n-1$.
\end{proof}
Let $\kappa$ be a continuous function on $[0,+\infty[$ with
respect to the parameter $s$.
Assuming the curvature bound ${\Ric}_{m,n}(\Delta_V)_x(\nabla r_p,\nabla r_p)\geq (n-m)\kappa(s_p(x)) {e^{-\frac{\;4\phi_V(x)\;}{n-m}}} C_p^2 $ for $s_p(x)<S$ with $x\notin {\rm Cut}(p)\cup\{p\}$, we see
${\Ric}_{m,n}(\Delta_V)(\dot{\gamma}_r,\dot{\gamma}_r)\geq (n-m)\kappa(s) {e^{-\frac{\;4V_{\gamma}(r)\;}{n-m}}} C_p^2 $ for $s=s(r,\theta)<S$ with $0<r<d(p,{\rm Cut}(p))$.
From $(\ref{eq:lambda/ds})$ we have the usual Riccati inequality
\begin{align}
-\frac{\d\lambda}{\d s}(s)\geq (n-m)\kappa(s)+\frac{\lambda(s)^2}{\;n-m\;}\quad\text{ for }\quad s\in]0,S[\label{eq:RiccattiIneq}
\end{align}
with the caveat that it is in terms of the parameter $s$ instead of $r$.
This gives us the following comparison estimate.
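Before stating it, we note that for constant $\kappa>0$ the equality case of \eqref{eq:RiccattiIneq} for the comparison function $m_{\kappa}(s)=(n-m)\cot_{\kappa}(s)$ can be checked directly: with the normalization $\mathfrak{s}_{\kappa}(s)=\sin(\sqrt{\kappa}s)/\sqrt{\kappa}$ and $\cot_{\kappa}(s)=\sqrt{\kappa}\cot(\sqrt{\kappa}s)$,
\begin{align*}
\frac{\d m_{\kappa}}{\d s}=-(n-m)\kappa\left(1+\cot^2(\sqrt{\kappa}s)\right)=-(n-m)\kappa-\frac{m_{\kappa}^2(s)}{\;n-m\;},
\end{align*}
which is the Riccati equation \eqref{eq:RiccatiEqM} satisfied by $m_{\kappa}$.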
\begin{lemma}
\label{lem:LaplacianComparisonconformal}
Suppose that $(M,g)$ is an $n$-dimensional complete Riemannian manifold and $V$ a $C^1$-vector field. Fix $R\in]0,+\infty[$ and $x,p\in M$.
Assume that \eqref{eq:RiciLowBdd} holds for $r_p(x)<R$
with $x\notin({\rm Cut}(p)\cup\{p\})$.
Let $\gamma$, $s$, and $\lambda$ be defined as in Lemma~\ref{lem:RiccatiDiferIneq}. Then
\begin{align}
\lambda(r,\theta)\leq m_{\kappa}(s) \label{eq:LocalLapComp}
\end{align}
holds for $r<R$, $s<\delta_{\kappa}$ and $x=(r,\theta)\notin\text{\rm Cut}(p)\cup\{p\}$.
Here
\begin{align*}
s=s_p(r)= C_p \int_0^r\exp\left(-\frac{\;2\phi_V(\gamma_t)\;}{n-m} \right)\d t.
\end{align*}
Suppose further that the equality in
\eqref{eq:LocalLapComp} holds for some $r_0<R$ with $s_0:=s(r_0)<\delta_{\kappa}$.
We choose an orthonormal basis $\{e_i\}_{i=1}^n$ of $T_pM$
with $e_n=\dot{\gamma}_0$.
Let $\{Y_i\}_{i=1}^{n-1}$ be the Jacobi fields along $\gamma$ with $Y_i(0)=0$ and $Y_i'(0)=e_i$.
Then we have $m=1$,
and at $x=(r,\theta)$ with $r\leq r_0$, $\nabla_{\nabla r_p}$ has at most one non-zero eigenvalue which is of multiplicity $n-1$,
and for all $r\in]0,r_0]$ we have
\begin{align}
\Ric_{1,n}(\Delta_V)(\dot\gamma_r,\dot\gamma_r)&=(n-1)\kappa(s_p(\gamma_r))e^{-\frac{\;4V_{\gamma}(r)\;}{n-1}} C_p^2 .\label{eq:Einstein}
\end{align}
Moreover, for all $i$ we have
$Y_i(r)=C_p^{-1}F_{\kappa}(r)E_i(r)$ for $r\in[0,r_0]$, where
\begin{align}
F_{\kappa}(r):=\exp\left(\frac{\;V_{\gamma}(r)\;}{n-1} \right)\mathfrak{s}_{\kappa}(s_p(\gamma_r)),\label{eq:parallel}
\end{align}
and $\{E_i(r)\}_{i=1}^{n-1}$ are the parallel vector fields with $E_i(0)=e_i$.
Consequently,
\begin{align}
g_{\gamma_r}&=dr^2+ C_p^{-2} e^{\frac{\;2V_{\gamma}(r)\;}{n-1}}\mathfrak{s}_{\kappa}^2(s_p(\gamma_r))g_{\mathbb{S}^{n-1}}.\label{eq:roundshere1}
\end{align}
Here $g_{\mathbb{S}^{n-1}}$ is the standard metric on the sphere $\mathbb{S}^{n-1}$.
\end{lemma}
\begin{proof}
Set $S:=s_p(R)$. Then $r<R$ implies $s<S$.
Since $\Delta r_p(r,\theta)\to+\infty$ as $r\to0$, we see that
$\lambda(r,\theta)\to+\infty$ as $r\to0$, equivalently as $s\to0$.
We set $\beta(s):=\mathfrak{s}_{\kappa}^2(s)(\lambda-m_{\kappa}(s))$. Then,
by \eqref{eq:RiccatiEqM} and \eqref{eq:RiccattiIneq}, for $s<S$
\begin{align*}
\beta'(s)&=2\mathfrak{s}_{\kappa}'(s)\mathfrak{s}_{\kappa}(s)(\lambda-m_{\kappa}(s))+\mathfrak{s}_{\kappa}^2(s)\left(\frac{\d\lambda}{\d s}-m_{\kappa}'(s)\right)\\
&=2\mathfrak{s}_{\kappa}^2(s)\cot_{\kappa}(s)(\lambda-m_{\kappa}(s))
+\mathfrak{s}_{\kappa}^2(s)\left(\frac{\d\lambda}{\d s}+(n-m)\kappa(s)+\frac{m_{\kappa}^2(s)}{\;n-m\;}
\right)\\
&\leq
\frac{\mathfrak{s}_{\kappa}^2(s)}{\;n-m\;}\left(2m_{\kappa}(s)\lambda-2m_{\kappa}^2(s)\right)
+\frac{\mathfrak{s}_{\kappa}^2(s)}{\;n-m\;}\left(m_{\kappa}^2(s)-\lambda^2
\right)\\
&=-\frac{\mathfrak{s}_{\kappa}^2(s)}{\;n-m\;}\left(\lambda-m_{\kappa}(s) \right)^2\leq0.
\end{align*}
We note here that \eqref{eq:RiccattiIneq} is derived from \eqref{eq:RiciLowBdd}.
If we show $\beta(0)=0$, then $\beta(s)\leq \beta(0)=0$. For this, it suffices to prove that $s(\lambda-m_{\kappa}(s))$ is bounded above as $s\to0$. We already know that $\lim_{s\to0}s\, m_{\kappa}(s)=n-m$ and
the ratio $s/r=s_p(r)/r$ converges to $C_p$ as $r\to0$. So it suffices to prove $\lim_{r\to0}r\lambda(r,\theta)=C_p^{-1}(n-1)$, equivalently $\lim_{r\to0}r\Delta r_p(r,\theta)=n-1$, because $\lim_{r\to0}r\langle V,\nabla r_p\rangle (r,\theta)=0$. In view of the usual Laplacian comparison theorem for the Laplace-Beltrami operator $\Delta$ under the upper (resp.~lower) bound $K_{{\varepsilon}}$ (resp.~$\kappa_{{\varepsilon}}$) of the sectional curvature on $B_{{\varepsilon}}(p)$, we see
$(n-1)\cot_{K_{{\varepsilon}}}(r)\leq\Delta r_p(r,\theta)\leq (n-1)\cot_{\kappa_{{\varepsilon}}}(r)$ on $B_{{\varepsilon}}(p)$.
This implies the desired assertion.
Next we assume that the equality in \eqref{eq:LocalLapComp} holds for some $r_0<R$, i.e., $\lambda(r_0,\theta)=(n-m)\cot_{\kappa}(s_0)$ for $r_0<R$ with $s_0=s(r_0)$.
This implies $0=\beta(s_0)\leq\beta(s)\leq\beta(0)=0$, hence
$\lambda(r)=m_{\kappa}(s)$ for all $s\in[0,s_0]$. From this,
\begin{align*}
\frac{\d\lambda}{\d s}(s_0)=\frac{\d m_{\kappa}}{\d s}(s_0).
\end{align*}
In particular, we have at $r_0$
\begin{align}
\frac{\d \lambda}{\d s}&\leq -\frac{\lambda(r)^2}{\;n-m\;}-
C_p^{-2} e^{\frac{\;4V_{\gamma}(r)\;}{n-m}}
\Ric_{m,n}(\Delta_V)(\dot\gamma_r,\dot\gamma_r)\nonumber\\
&\leq
-\frac{\lambda(r)^2}{\;n-m\;}-
(n-m)\kappa(s)=-\frac{m_{\kappa}(s)^2}{\;n-m\;}-
(n-m)\kappa(s)=\frac{\d \lambda}{\d s}.\label{eq:RiccatiIneq}
\end{align}
Then the equality holds in \eqref{eq:lambda/ds}
at $x=(r_0,\theta)$. So we have $m=1$ by Lemma~\ref{lem:RiccatiDiferIneq}.
We can conclude $\beta(s)\equiv0$ on $[0,s_0]$ from $\beta(0)=\beta(s_0)=0$ and $\beta'(s)\leq0$ so that $\lambda(r,\theta)=(n-1)\cot_{\kappa}(s)$ for $s\in]0,s_0]$.
We then see the equality \eqref{eq:RiccatiIneq} at any $r\in]0,r_0]$, hence \eqref{eq:Einstein} holds at any $r\in]0,r_0]$.
Finally we prove \eqref{eq:roundshere1} at any $r\in]0,r_0]$ under $\lambda(r_0)=(n-m)\cot_{\kappa}(s_0)$.
Hereafter, we assume $r\in]0,r_0]$.
By Lemma~\ref{lem:RiccatiDiferIneq}, at $x=(r,\theta)$, $\nabla_{\nabla r_p}$ has a non-zero eigenvalue $A(r)$ of multiplicity $n-1$.
Then we have
\begin{align*}
\lambda(r,\theta)&= C_p^{-1} e^{\frac{\;2V_{\gamma}(r)\;}{n-1}}(\Delta r_p(r,\theta)-\langle V,\nabla r_p\rangle (r,\theta))\\
&= C_p^{-1} e^{\frac{\;2V_{\gamma}(r)\;}{n-1}}((n-1)A(r)-\langle V,\nabla r_p\rangle (r,\theta))=(n-1)\cot_{\kappa}(s),
\end{align*}
where we use the equality \eqref{eq:LocalLapComp} at any $r\in]0,r_0]$.
So we have $A(r)= C_p e^{-\frac{\;2V_{\gamma}(r)\;}{n-1}}\cot_{\kappa}(s)+\frac{\;\langle V,\nabla r_p\rangle (r,\theta)\;}{n-1}
=(n-1)^{-1}\Delta r_p(\gamma_r)$.
The radial curvature equation (see \cite[Theorem~2, p.~44]{Pet:RiemannianGeo}) tells us that
\begin{align}
R(E_i,\dot{\gamma}_r)\dot{\gamma}_r=-(A'(r)+A(r)^2)E_i.\label{eq:radialCurvatureEq}
\end{align}
Combining the Bochner-Weitzenb\"ock formula with \eqref{eq:Einstein},
we have
\begin{align}
A'(r)+A(r)^2=\frac{\;V_{\gamma}''(r)\;}{n-1}+\left(\frac{\;V_{\gamma}'(r)\;}{n-1} \right)^2-\kappa(s_p(\gamma_r))e^{-\frac{\;4V_{\gamma}(r)\;}{n-1}}C_p^2=\frac{\;F_{\kappa}''(r)\;}{F_{\kappa}(r)}.\label{eq:Ar}
\end{align}
Since $F_{\kappa}(0)=0$ and $F_{\kappa}'(0)=C_p$, we obtain
$$
Y_i(r)=C_p^{-1}F_{\kappa}(r)E_i(r)=C_p^{-1}e^{\frac{\;V_{\gamma}(r)\;}{n-1}}\mathfrak{s}_{\kappa}(s_p(\gamma_r))E_i(r).
$$
This proves the desired conclusion.
\end{proof}
\begin{corollary}\label{cor:Cutlocus}
Let $(M,g)$ be an $n$-dimensional complete Riemannian manifold and $V$ a $C^1$-vector field. Fix $p\in M$ and $R\in]0,+\infty[$.
Assume that \eqref{eq:RiciLowBdd} holds for $r_p(x)<R$
with $x\notin {\rm Cut}(p)\cup\{p\}$. Then $s_p(x)<\delta_{\kappa}$.
\end{corollary}
\begin{proof}
We may assume $\delta_{\kappa}<\infty$.
Take $x\in B_R(p)$ with
$x\notin{\rm Cut}(p)\cup\{p\}$.
Let $x=(r,\theta)$ be the polar coordinate expression around $p$ and set
$s:=s_p(r)= C_p \int_0^{r}\exp\left(-\frac{2 \phi_V (\gamma_t)}{n-m} \right)\d t$ and
$S=s_p(R)$, where $\gamma$ is a unit speed geodesic with
$\gamma_0=p$ and $\dot{\gamma}_0=\theta$.
We see $s_p(x)<S$.
Assume $S>\delta_{\kappa}$.
Then there exists $r_0\in]0,R[$ such that $\delta_{\kappa}= C_p \int_0^{r_0}\exp\left(-\frac{2V_{\gamma}(t)}{n-m} \right)\d t$. By \eqref{eq:LocalLapComp},
$\lambda(r,\theta)\leq (n-m)\cot_{\kappa}(s)$ holds for $s<\delta_{\kappa}$.
Since $r\uparrow r_0$ is equivalent to $s=s(r)\uparrow\delta_{\kappa}$, we have
$$
\lambda(r_0,\theta)=\lim_{r\uparrow r_0}\lambda(r,\theta)\leq
\lim_{r\uparrow r_0}(n-m)\cot_{\kappa}(s(r))=-\infty.
$$ This contradicts the
well-definedness of $\lambda(r,\theta)= C_p^{-1} \left(e^{\frac{2\phi_V}{n-m}}\Delta_Vr_p\right)(r,\theta)$ for $r\in]0,R[$.
Therefore $S\leq\delta_{\kappa}$ under $\delta_{\kappa}<\infty$ and we obtain the conclusion $s_p(x)<S\leq\delta_{\kappa}$.
\end{proof}
Let $p\in M$ and let $(r,\theta)$, $r>0$, $\theta\in \mathbb{S}^{n-1}$ be exponential
polar coordinates (for the metric $g$) around $p$ which are defined on
a maximal star-shaped domain in $T_pM$ called the
\emph{segment domain}. Write the volume element $\d\mathfrak{m} =J(r,\theta)\d r\land\d\theta$.
Let $s_p(\cdot)$ be the re-parametrized distance function defined above. Inside the segment domain,
$s_p$ has the simple formula
\begin{align*}
s_p(r,\theta)= C_p \int_0^re^{-\frac{\;2\phi_V(t,\theta)\;}{n-m}}\d t.
\end{align*}
Therefore, $s_p$ is a smooth function in the segment domain with the property that
$\frac{\partial s}{\partial r}= C_p e^{-\frac{\;2\phi_V(r,\theta)\;}{n-m}}$. We can then also take $(s,\theta)$
to be coordinates, which are also valid on the entire segment domain. We cannot control the derivative of $s$ in directions tangent to the sphere, so the new
$(s,\theta)$ coordinates are \emph{not} orthogonal, in contrast to the case of geodesic polar coordinates. However, this is not an issue when computing volumes, as
\begin{align}
e^{-\frac{\;2\phi_V\;}{n-m}}\d\mu_V=e^{-\frac{\;n-m+2\;}{n-m}\phi_V}J(r,\theta)\d r\land\d\theta
=C_p^{-1}e^{-\phi_V}J(r,\theta)\d s\land\d\theta.\label{eq:conformal}
\end{align}
Here $\d\mu_V=e^{-\phi_V}\d \mathfrak{m} $.
We denote the derivative in the
radial direction in terms of this parameter by $\frac{\d}{\d s}$.
In geodesic polar coordinates $\frac{\d}{\d s}$ has the expression $\frac{\d }{\d s}= C_p^{-1} e^{\frac{\;2\phi_V(r,\theta)\;}{n-m}}\frac{\partial}{\partial r}$.
Note that it is not the same as $\frac{\partial}{\partial s}$ in $(s,\theta)$ coordinates.
\begin{proof}[Proof of Theorem~\ref{thm:GlobalLapComp}]
The implication \eqref{eq:RiciLowBdd}$\Longrightarrow$\eqref{eq:GloLapComp}
for $R<\infty$
follows from Lemma~\ref{lem:LaplacianComparisonconformal}, because
$r_p$ is smooth on $M\setminus(\text{\rm Cut}(p)\cup\{p\})$.
The implication \eqref{eq:RiciLowBdd}$\Longrightarrow$\eqref{eq:GloLapComp}
for $R=+\infty$ follows from it.
\end{proof}
\section{Proofs of Theorem~\ref{thm:WeightedMyers} and Corollary~\ref{cor:WeightedMyers}}
\begin{proof}[Proof of Theorem~\ref{thm:WeightedMyers}]
Suppose that there exist points $p,q\in M$ such that $s(p,q)>\delta_{\kappa}$.
Since $\text{\rm Cut}(p)$ is closed and measure zero, we may assume $q\notin \text{\rm Cut}(p)$. By Lemma~\ref{lem:LaplacianComparisonconformal},
along a minimal geodesic from $p$ to $q$,
$\lambda(r,\theta)\leq m_{\kappa}(s)$. However, as
$s\to\delta_{\kappa}$, $m_{\kappa}(s)\to-\infty$. This implies
$\Delta r_p(x)\to-\infty$ as $s(p,x)\to\delta_{\kappa}$. This contradicts
that $r_p$ is smooth in a neighborhood of $q$.
The final assertion follows from Remark~\ref{rem:SpRp}.
\end{proof}
\begin{proof}[Proof of Corollary~\ref{cor:WeightedMyers}]
Suppose that $\sup_{q\in M}d(p,q)=+\infty$. Then there exists a sequence $\{q_i\}$ in $M$ such that $d(p,q_i)\to+\infty$ as $i\to+\infty$.
By Lemma~\ref{lem:phimcomplete}, $s(p,q_{i})\to+\infty$ as $i\to+\infty$,
which contradicts $\sup_{q\in M}s(p,q)\leq\delta_{\kappa}$.
Therefore, $\sup_{q\in M}d(p,q)<\infty$, hence $M$ is compact.
\end{proof}
\section{Proof of Theorem~\ref{thm:BGVol}}
Recall that for a Riemannian manifold $\frac{\d}{\d r}\log J(r,\theta)=\Delta r_p(r,\theta)$, where
$\Delta r_p$ is the standard Laplacian acting on the distance function $r_p$ from the point $p$. Identity \eqref{eq:conformal} indicates that we should consider the quantity
\begin{align}
\frac{\d}{\d s}\log(e^{-V_{\gamma}(r)}J(r,\theta))= C_p^{-1}e^{\frac{\;2V_{\gamma}(r)\;}{n-m}}\left(\Delta r_p(r,\theta)-\langle V_{\gamma_r},\dot\gamma_r\rangle \right)= C_p^{-1}
e^{\frac{\;2V_{\gamma}(r)\;}{n-m}}\Delta_V r_p(r,\theta)
.\label{eq:quantity}
\end{align}
\begin{lemma}[Volume Element Comparison]\label{lem:VolComp}
Let $(M,g)$ be an $n$-dimensional complete Riemannian manifold and $V$ a $C^1$-vector field. Fix $p\in M$ and $R\in]0,+\infty]$.
Assume that \eqref{eq:RiciLowBdd} holds for $r_p(x)<R$ with $x\notin {\rm Cut}(p)\cup\{p\}$.
Let $J$ be the volume element in geodesic polar coordinates around $p\in M$ and set $J_V(r,\theta):=e^{-V_{\gamma}(r)}J(r,\theta)$. Then for $r_0<r_1<R$ with
$r_1<{\rm cut}(\theta)$,
\begin{align}
\frac{\;J_V(r_1,\theta)\;}{J_V(r_0,\theta)}
\leq \frac{\;\mathfrak{s}_{\kappa}(s_p(r_1,\theta))^{n-m}\;}{\mathfrak{s}_{\kappa}(s_p(r_0,\theta))^{n-m}}.\label{eq:VolComp}
\end{align}
Here $\text{\rm cut}(\theta)$ is the distance from $p$ to the cut point along the geodesic with $\gamma(0)=p$ and $\dot{\gamma}(0)=\theta$.
\end{lemma}
\begin{proof}
Recall $s=s_p(r)=s_p(r,\theta)= C_p \int_0^r\exp\left(-\frac{\;2V_{\gamma}(t)\;}{n-m} \right)\d t$ and
$\gamma$ is the unit speed geodesic from $p$ with $\dot{\gamma}_0=\theta$.
First note that the right hand side of \eqref{eq:VolComp} is meaningful for
$r_0<r_1<R$. Indeed, if $R<+\infty$, $s_p(r_0,\theta)<s_p(r_1,\theta)<\delta_{\kappa}$
by Corollary~\ref{cor:Cutlocus}. If $R=+\infty$, we can take $R_0\in]r_1,+\infty[$ so that \eqref{eq:RiciLowBdd} holds for $r_p(x)<R_0$, hence $s_p(r_0,\theta)<s_p(r_1,\theta)<\delta_{\kappa}$ by Corollary~\ref{cor:Cutlocus}.
From Lemma~\ref{lem:LaplacianComparisonconformal} and \eqref{eq:quantity}
we have that
\begin{align}
\frac{\d}{\d s}\log J_V(r,\theta)= C_p^{-1} e^{\frac{\;2V_{\gamma}(r)\;}{n-m}} \Delta_V r_p(r,\theta)\leq (n-m)\cot_{\kappa}(s)
=\frac{\d}{\d s}\log(\mathfrak{s}_{\kappa}(s)^{n-m})\label{eq:JacobIneq}
\end{align}
for $r\in]0,R\land {\rm cut}(\theta)[$.
Integrating \eqref{eq:JacobIneq} between any $s_0< s_1
$
with $s_i=s_p(r_i,\theta)$ and $r_i\in]0,R\land {\rm cut}(\theta)[$ $(i=0,1)$ gives
\begin{align*}
\log\left(
\frac{\;J_V(r_1,\theta)\;}{J_V(r_0,\theta)} \right)
\leq \log\left(\frac{\;\mathfrak{s}_{\kappa}(s_1)^{n-m}\;}{\mathfrak{s}_{\kappa}(s_0)^{n-m}} \right)\quad\text{ implies }
\quad \frac{\;J_V(r_1,\theta)\;}{J_V(r_0,\theta)}
\leq \frac{\;\mathfrak{s}_{\kappa}(s_1)^{n-m}\;}{\mathfrak{s}_{\kappa}(s_0)^{n-m}}
\end{align*}
for all $r_0< r_1<R\land {\rm cut}(\theta)$. Note that since $s=s_p(r,\theta)$ is an orientation-preserving change of variables along the geodesic $\gamma$, the ratio $J_V(r,\theta)/\mathfrak{s}_{\kappa}(s_p(r,\theta))^{n-m}$ is also non-increasing in the parameter $r\in]0,R\land {\rm cut}(\theta)[$.
\end{proof}
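As a consistency check, in the unweighted case $V\equiv0$ with $m=1$ (assuming the normalizations here reduce as expected, so that $C_p=1$ and $s_p(r,\theta)=r$), \eqref{eq:VolComp} recovers the classical Bishop--Gromov volume element comparison
\begin{align*}
\frac{\;J(r_1,\theta)\;}{J(r_0,\theta)}\leq \frac{\;\mathfrak{s}_{\kappa}(r_1)^{n-1}\;}{\mathfrak{s}_{\kappa}(r_0)^{n-1}}.
\end{align*}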
\begin{proof}[Proof of Theorem~\ref{thm:BGVol}]
By Lemma~\ref{lem:VolComp}, for all $r_1,r_2>0$ with $r_1< r_2<R$ and
$r_2<{\rm cut}(\theta)$
\begin{align*}
\frac{\;J_V(r_2,\theta)\;}{J_V(r_1,\theta)}\leq \frac{\;\mathfrak{s}_{\kappa}^{n-m}(s_p(r_2,\theta))\;}{\mathfrak{s}_{\kappa}^{n-m}(s_p(r_1,\theta))}\leq
\frac{\;\mathfrak{s}_{\kappa}^{n-m}\left(
\sup_{\eta\in\mathbb{S}^{n-1}}s_p(r_2,\eta)
\right)\;}{\mathfrak{s}_{\kappa}^{n-m}\left(\inf_{\eta\in\mathbb{S}^{n-1}}s_p(r_1,\eta)
\right)}.
\end{align*}
So for
$0\leq r_a< r_b\leq r_d$, $0\leq r_a\leq r_c< r_d$ and $r_d<R$, we have the following inequality
\begin{align*}
\frac{\;\int_{\text{\rm cut}(\theta)\land r_c}^{\text{\rm cut}(\theta)\land r_d}J_V(r_2,\theta)\d r_2\;}{\int_{\text{\rm cut}(\theta)\land r_a}^{\text{\rm cut}(\theta)\land r_b}J_V(r_1,\theta)\d r_1}&\leq\frac{\;\int_{\text{\rm cut}(\theta)\land r_c}^{\text{\rm cut}(\theta)\land r_d}\mathfrak{s}_{\kappa}^{n-m}(s_p(r_2,\theta))\d r_2\;}{\int_{\text{\rm cut}(\theta)\land r_a}^{\text{\rm cut}(\theta)\land r_b}\mathfrak{s}_{\kappa}^{n-m}(s_p(r_1,\theta))\d r_1}
\\
&\leq
\frac{\;\int_{r_c}^{r_d}\mathfrak{s}_{\kappa}^{n-m}\left(\sup_{\eta\in\mathbb{S}^{n-1}}s_p(r_2,\eta)\right)\d r_2\;}{\int_{r_a}^{r_b}\mathfrak{s}_{\kappa}^{n-m}\left(\inf_{\eta\in\mathbb{S}^{n-1}}s_p(r_1,\eta)
\right)\d r_1}
\end{align*}
provided $r_a=r_c$ or $r_b=r_d$, by \cite[Lemma~3.1]{Zhu97}
(cf.~\cite[Proof of Theorem~3.2]{Zhu97}).
From this, we can deduce that
\begin{align*}
\frac{\;\int_{\mathbb{S}^{n-1}}\int_{\text{\rm cut}(\theta)\land r_c}^{\text{\rm cut}(\theta)\land r_d}J_V(r_2,\theta)\d r_2\d\theta\;}{\int_{\mathbb{S}^{n-1}}\int_{\text{\rm cut}(\theta)\land r_a}^{\text{\rm cut}(\theta)\land r_b}J_V(r_1,\theta)\d r_1\d\theta}
\leq
\frac{\;\int_{\mathbb{S}^{n-1}}\int_{r_c}^{r_d}\mathfrak{s}_{\kappa}^{n-m}\left(\sup_{\eta\in\mathbb{S}^{n-1}}s_p(r_2,\eta)
\right)\d r_2\d\theta\;}{\int_{\mathbb{S}^{n-1}}\int_{r_a}^{r_b}\mathfrak{s}_{\kappa}^{n-m}\left(\inf_{\eta\in\mathbb{S}^{n-1}}s_p(r_1,\eta)
\right)\d r_1\d\theta}
\end{align*}
holds for general $0\leq r_a< r_b\leq r_d$, $0\leq r_a\leq r_c< r_d$ and $r_d<R$.
This implies that \eqref{eq:BGAnnuliUpLow} holds for $r_1<R$.
If $\phi$ is rotationally symmetric around $p$, $s_p(r,\theta)$ can be written as $s_p(r)$ and one can derive
\begin{align*}
\frac{\;\int_{\mathbb{S}^{n-1}}\int_{\text{\rm cut}(\theta)\land r_c}^{\text{\rm cut}(\theta)\land r_d}J_V(r_2,\theta)\d r_2\d\theta\;}{\int_{\mathbb{S}^{n-1}}\int_{\text{\rm cut}(\theta)\land r_a}^{\text{\rm cut}(\theta)\land r_b}J_V(r_1,\theta)\d r_1\d\theta}\leq
\frac{\;\int_{\mathbb{S}^{n-1}}\int_{r_c}^{r_d}\mathfrak{s}_{\kappa}^{n-m}(s_p(r_2))\d r_2\d\theta\;}{\int_{\mathbb{S}^{n-1}}\int_{r_a}^{r_b}\mathfrak{s}_{\kappa}^{n-m}(s_p(r_1))\d r_1\d\theta}.
\end{align*}
This implies that \eqref{eq:BGAnnuli} holds for $r_1<R$.
Similarly, in the modified coordinates $(s,\theta)$, we set
\begin{align*}
\text{\rm cut}_s(\theta):=\int_0^{\text{\rm cut}(\theta)}
e^{-\frac{\;2V_{\gamma}(t)\;}{n-m}}\d t,
\end{align*}
where $\gamma$ is the unit speed geodesic with $\gamma_0=p$ and $\dot{\gamma}_0=\theta$. Then we have
\begin{align*}
\nu_V(C(p,s_0,s_1))=\int_{\mathbb{S}^{n-1}}\int_{\text{\rm cut}_s(\theta)\land s_0}^{\text{\rm cut}_s(\theta)\land s_1}J_V(r(s,\theta),\theta)\d s\d \theta,
\end{align*}
and
\begin{align*}
v(\kappa,s_0,s_1)=\int_{\mathbb{S}^{n-1}}\int_{s_0}^{s_1}\mathfrak{s}_{\kappa}^{n-m}(s)\d s\d \theta=\omega_{n-1}\int_{s_0}^{s_1}\mathfrak{s}_{\kappa}^{n-m}(s)\d s.
\end{align*}
Therefore, \eqref{item:BG2} follows.
Here $r(s,\theta):= C_p^{-1} \int_0^s\exp\left(\frac{\;2V_{\gamma}(f^{-1}(u))\;}{n-m} \right)\d u$ with
$f(r):=s_p(r,\theta)$.
Note that $s_1<\delta_{\kappa}$ always holds under our assumptions.
Indeed, $s_1<S$ implies $s_1<\delta_{\kappa}$ under $R<+\infty$ by
Corollary~\ref{cor:Cutlocus}. When $R=+\infty$, for any $\theta\in\mathbb{S}^{n-1}$ there exists $R_0\in]0,+\infty[$
depending on $\theta$ such that $s_1<s(R_0,\theta)$. Then applying Corollary~\ref{cor:Cutlocus} for $R_0<\infty$,
$r_1:=r(s_1,\theta)<r(s(R_0,\theta),\theta)=R_0$ implies
$s_1=s(r(s_1,\theta),\theta)<\delta_{\kappa}$, where we use \eqref{eq:RiciLowBdd} holds for
$r_p(x)<R_0$.
\end{proof}
\begin{proof}[Proof of Corollary~\ref{cor:BGVol}]
By Theorem~\ref{thm:BGVol}\eqref{item:BG1}, for
$0<r_1<r_2<R$
\begin{align*}
\frac{\;\mu_V(B_{r_2}(p))\;}{\mu_V(B_{r_1}(p))}&\leq \frac{\;\int_0^{r_2}\left( C_p e^{-\frac{\;2\underline{\phi}_V(r)\;}{n-m}}r\right)^{n-m}\d r\;}{\int_0^{r_1}\left( C_p e^{-\frac{\;2\overline{\phi}_V(r)\;}{n-m}}r\right)^{n-m}\d r}\\
&\leq
e^{2(\overline{\phi}_V(r_1)-\underline{\phi}_V(r_2))}\frac{\;\int_0^{r_2} r^{n-m}\d r\;}{\int_0^{r_1} r^{n-m}\d r}
=e^{2(\overline{\phi}_V(r_1)-\underline{\phi}_V(r_2))}\left(\frac{r_2}{r_1} \right)^{n-m+1}.
\end{align*}
\end{proof}
\section{Proofs of Theorem~\ref{thm:AmbroseMyers}, Corollaries~\ref{cor:AmbroseMyersVbdd} and \ref{cor:AmbroseMyers}}
\begin{proof}[Proof of Theorem~\ref{thm:AmbroseMyers}]
Suppose that $M$ is non-compact. Then there exists a unit speed geodesic $\gamma$ with $\gamma_0=p$ satisfying \eqref{eq:Ambrose}. Note that the function $\lambda(t)$ is smooth for all $t>0$ along $\gamma$.
By \eqref{eq:lambda/dr}, we have
\begin{align*}
\lambda(t)-\lambda(1)+\frac{C_p}{n-m}\int_1^t e^{-\frac{2V_{\gamma}(r)}{n-m}}\lambda(r)^2\d r
&\leq - C_p^{-1}
\int_1^t
e^{\frac{\;2V_{\gamma}(r)\;}{n-m}}\text{
\rm Ric}_{m,n}(\Delta_V)(\dot{\gamma}_r,\dot{\gamma}_r)\d r.
\end{align*}
Hence
\begin{align}
\lim_{t\to+\infty}\left(\lambda(t)+\frac{C_p}{n-m}\int_1^t e^{-\frac{\;2V_{\gamma}(r)\;}{n-m}}\lambda(r)^2\d r\right)=-\infty.\label{eq:MyerEstimate}
\end{align}
In particular, $\lim_{t\to+\infty}\lambda(t)=-\infty$.
Next we prove that there exists a finite number $T > 0$ such
that $\lim_{t\to T-}\lambda(t)=-\infty$, which contradicts the smoothness of $\lambda(t)$. By \eqref{eq:MyerEstimate},
given $C>n-m$ there exists $t_0>1$ such that
$$
-\lambda(t_0)-\frac{C_p}{n-m}\int_1^{t_0}e^{-\frac{\;2V_{\gamma}(r)\;}{n-m}}\lambda(r)^2\d r\geq \frac{C}{n-m}.
$$
Since
$$
\lim_{t\to+\infty}\int_1^t
e^{\frac{\;2V_{\gamma}(r)\;}{n-m}} \text{
\rm Ric}_{m,n}(\Delta_V)(\dot{\gamma}_r,\dot{\gamma}_r)\d r=+\infty,
$$
there exists $t_1\in]t_0,+\infty[$ such that
$\int_{t_0}^t
e^{\frac{\;2V_{\gamma}(r)\;}{n-m}} \text{
\rm Ric}_{m,n}(\Delta_V)(\dot{\gamma}_r,\dot{\gamma}_r)\d r\geq0$ for all $t\geq t_1$.
Let $\psi(t)$ be the function defined by
\begin{align}
\psi(t):=-\lambda(t)-\frac{C_p}{n-m}\int_1^te^{-\frac{\;2V_{\gamma}(r)\;}{n-m}}\lambda(r)^2\d r- C_p^{-1} \int_1^t e^{\frac{\;2V_{\gamma}(r)\;}{n-m}} \text{\rm Ric}_{m,n}(\Delta_V)(\dot{\gamma}_r,\dot{\gamma}_r)\d r.\label{eq:MyerEstimateInt}
\end{align}
Then we see $\psi'(t)\geq0$ by \eqref{eq:lambda/dr}.
Hence $\psi(t)\geq\psi(t_0)$ for $t\geq t_1>t_0$. This implies that
\begin{align}
-\lambda(t)-\frac{C_p}{n-m}\int_1^t e^{-\frac{\;2V_{\gamma}(r)\;}{n-m}}\lambda(r)^2\d r\geq\frac{C}{n-m}>1\label{eq:Stpe1}
\end{align}
holds for all $t\geq t_1$.
Let us consider the sequence $\{t_{\ell}\}$ defined inductively by
\begin{align*}
C_p\int^{t_{\ell+1}}_{t_{\ell}}e^{-\frac{\;2V_{\gamma}(r)\;}{n-m}}
\d r
=(n-m)\left(\frac{n-m}{C} \right)^{\ell-1}\quad \text{ for }\quad\ell\geq1.
\end{align*}
The existence of such a sequence is guaranteed by the
$(V,m)$-completeness of $(M,g,V)$ at $p$.
Let $T$ be the increasing limit of $\{t_{\ell}\}$. Then we see
\begin{align*}
C_p\int_{t_1}^Te^{-\frac{\;2V_{\gamma}(r)\;}{n-m}}
\d r=\frac{C(n-m)}{C-n+m}.
\end{align*}
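Here the sum of the defining relation for $\{t_{\ell}\}$ over $\ell\geq1$ is a convergent geometric series, since $C>n-m$:
\begin{align*}
\sum_{\ell=1}^{\infty}(n-m)\left(\frac{n-m}{C} \right)^{\ell-1}
=\frac{n-m}{\;1-\frac{n-m}{C}\;}
=\frac{C(n-m)}{C-n+m}.
\end{align*}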
In view of the $(V,m)$-completeness of $(M,g,V)$ at $p$, we have
$$
\int_1^{\infty}e^{-\frac{\;2V_{\gamma}(r)\;}{n-m}}
\d r=+\infty.
$$
Since the integral from $t_1$ to $T$ above is finite, we obtain $T<\infty$.
Finally we claim that for given $\ell\in\mathbb{N}$, $-\lambda(t)\geq\left( \frac{C}{n-m}\right)^{\ell}$ for all $t\geq t_{\ell}$. This is true for $\ell=1$ by \eqref{eq:Stpe1}. Suppose that $-\lambda(r)\geq\left( \frac{C}{n-m}\right)^{\ell}$ for all $r\geq t_{\ell}$ and fix $t\geq t_{\ell+1}$. Then using inequality \eqref{eq:Stpe1} again,
\begin{align*}
-\lambda(t)&\geq\frac{C}{n-m}+\frac{C_p}{n-m}\int_1^{t_{\ell}}
e^{-\frac{\;2V_{\gamma}(r)\;}{n-m}}\lambda(r)^2\d r+
\frac{C_p}{n-m}\int_{t_{\ell}}^{t_{\ell+1}}
e^{-\frac{\;2V_{\gamma}(r)\;}{n-m}}\lambda(r)^2\d r\\
&\geq
\frac{C_p}{n-m}\int_{t_{\ell}}^{t_{\ell+1}}
e^{-\frac{\;2V_{\gamma}(r)\;}{n-m}}\lambda(r)^2\d r\\
&\geq
\frac{C^{2\ell}}{(n-m)^{2\ell}}\cdot\frac{(n-m)^{\ell-1}}{C^{\ell-1}}=\left(\frac{C}{n-m} \right)^{\ell+1}.
\end{align*}
This proves the claim.
In particular, $\lim_{t\to T-}\lambda(t)=-\infty$ which is the desired contradiction.
\end{proof}
\begin{proof}[Proof of Corollary~\ref{cor:AmbroseMyersVbdd}]
Suppose that there exists a non-negative integrable function $f$ on $[0,+\infty[$ satisfying $\langle V,\nabla r_p\rangle \geq -f(r_p)$. Then
$V_{\gamma}(r)\geq -\int_0^rf(s)\d s\geq -\int_0^{\infty}f(s)\d s>-\infty$ and ${\Ric}_{m,n}(\Delta_V)\geq0$ imply
\begin{align*}
\int_0^{\infty}&e^{\frac{\;2V_{\gamma}(t)}{n-m}\;}{\Ric}_{m,n}(\Delta_V)(\dot{\gamma}_t,\dot{\gamma}_t)\d t\\
&\geq \exp\left(-\frac{2}{n-m}\int_0^{\infty}f(s)\d s\right)\int_0^{\infty}{\Ric}_{m,n}(\Delta_V)(\dot{\gamma}_t,\dot{\gamma}_t)\d t=+\infty.
\end{align*}
This yields the conclusion by Theorem~\ref{thm:AmbroseMyers}.
\end{proof}
\begin{proof}[Proof of Corollary~\ref{cor:AmbroseMyers}]
Suppose that \eqref{eq:AmbroseRiccibounds} holds for
every unit speed geodesic $\gamma$ emanating from $p$.
The $(V,m)$-completeness of $(M,g,V)$ at $p$ implies
\begin{align*}
\int_0^{\infty}e^{-\frac{\;2V_{\gamma}(t)\;}{n-m}}\d t=+\infty.
\end{align*}
Then we have
\begin{align*}
\int_0^{\infty}e^{\frac{\;2V_{\gamma}(t)\;}{n-m}}{\Ric}_{m,n}(\Delta_V)(\dot{\gamma}_t,\dot{\gamma}_t)\d t\geq (n-m)\kappa\, C_p^2 \int_0^{\infty}
e^{-\frac{\;2V_{\gamma}(t)\;}{n-m}}\d t=+\infty.
\end{align*}
This yields the conclusion by Theorem~\ref{thm:AmbroseMyers}.
\end{proof}
\section{Proof of Theorem~\ref{thm:ChengDiamSphere}}
For the proof of Theorem~\ref{thm:ChengDiamSphere}, we need
the following lemma on the solution of Jacobi equation.
\begin{lemma}\label{lem:SymJacob}
Let $\kappa:[0,\infty[\to\mathbb{R}$ be a continuous function and
$\mathfrak{s}_{\kappa}$ the unique solution of the Jacobi equation $\mathfrak{s}_{\kappa}''(s)+\kappa(s)\mathfrak{s}_{\kappa}(s)=0$ with
$\mathfrak{s}_{\kappa}(0)=0$ and $\mathfrak{s}_{\kappa}'(0)=1$, and $\delta_{\kappa}:=\inf\{s>0\mid \mathfrak{s}_{\kappa}(s)=0\}$ the first zero point of $\mathfrak{s}_{\kappa}$. Assume that $\delta_{\kappa}<\infty$ and $\kappa(s)=\kappa(\delta_{\kappa}-s)$ holds for all $s\in[0,\delta_{\kappa}]$. Then
$\mathfrak{s}_{\kappa}'(\delta_{\kappa})=-1$, $\mathfrak{s}_{\kappa}'(\delta_{\kappa}/2)=0$ and
$\mathfrak{s}_{\kappa}(s)=\mathfrak{s}_{\kappa}(\delta_{\kappa}-s)$ for all $s\in[0,\delta_{\kappa}]$.
\end{lemma}
\begin{proof}
Set $\overline{\mathfrak{s}}_{\kappa}(s):=\mathfrak{s}_{\kappa}(\delta_{\kappa}-s)$ for $s\in[0,\delta_{\kappa}]$. Then this satisfies
$\overline{\mathfrak{s}}_{\kappa}''(s)+\kappa(s)\overline{\mathfrak{s}}_{\kappa}(s)=0$ and $\overline{\mathfrak{s}}_{\kappa}(0)=0$ and $\overline{\mathfrak{s}}_{\kappa}'(0)=-\mathfrak{s}_{\kappa}'(\delta_{\kappa})$. If we prove
$\overline{\mathfrak{s}}_{\kappa}'(0)=1$, i.e., $\mathfrak{s}_{\kappa}'(\delta_{\kappa})=-1$, then the uniqueness of the solution implies the assertion.
Note that $s_{\kappa}(s):=\overline{\mathfrak{s}}_{\kappa}(s)/ \overline{\mathfrak{s}}_{\kappa}'(0)=-\mathfrak{s}_{\kappa}(\delta_{\kappa}-s)/\mathfrak{s}_{\kappa}'(\delta_{\kappa})$ also satisfies the Jacobi equation with $s_{\kappa}(0)=0$ and $s_{\kappa}'(0)=1$.
Then the uniqueness implies $s_{\kappa}(s)=\mathfrak{s}_{\kappa}(s)$, that is, $\mathfrak{s}_{\kappa}(\delta_{\kappa}-s)=-\mathfrak{s}_{\kappa}'(\delta_{\kappa})s_{\kappa}(s)$ for $s\in [0,\delta_{\kappa}]$, in particular, $\mathfrak{s}_{\kappa}(\delta_{\kappa}/2)=-\mathfrak{s}_{\kappa}'(\delta_{\kappa})\mathfrak{s}_{\kappa}(\delta_{\kappa}/2)$. Therefore, $\mathfrak{s}_{\kappa}'(\delta_{\kappa})=-1$ by $\mathfrak{s}_{\kappa}(\delta_{\kappa}/2)>0$.
The proof of $\mathfrak{s}_{\kappa}'(\delta_{\kappa}/2)=0$ is easy from
$\mathfrak{s}_{\kappa}'(s)=-\mathfrak{s}_{\kappa}'(\delta_{\kappa}-s)$ for $s\in[0,\delta_{\kappa}]$.
\end{proof}
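In the illustrative special case of constant $\kappa>0$, the solution is explicit and the conclusions of Lemma~\ref{lem:SymJacob} can be verified directly:
\begin{align*}
\mathfrak{s}_{\kappa}(s)=\frac{\sin(\sqrt{\kappa}\,s)}{\sqrt{\kappa}},\qquad
\delta_{\kappa}=\frac{\pi}{\sqrt{\kappa}},\qquad
\mathfrak{s}_{\kappa}'(\delta_{\kappa})=\cos\pi=-1,\qquad
\mathfrak{s}_{\kappa}'(\delta_{\kappa}/2)=\cos\frac{\pi}{2}=0,
\end{align*}
and $\mathfrak{s}_{\kappa}(\delta_{\kappa}-s)=\sin(\pi-\sqrt{\kappa}\,s)/\sqrt{\kappa}=\mathfrak{s}_{\kappa}(s)$ for $s\in[0,\delta_{\kappa}]$.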
Hereafter, we assume $V=\nabla\phi$ for some $\phi\in C^2(M)$ and set $C_p:=\exp\left(-\frac{\;2\phi(p)\;}{n-m}\right)$ for the definition of $s_p(x)$ with $p$ being an arbitrary point.
We now consider the conformal metric $h=e^{-\frac{\;4\phi\;}{n-m}}g$.
\begin{lemma}\label{lem:Vindepenedent}
Fix $p\in M$. Suppose that there exists a point $q\in M$ such that
$s(p,q)=d^{\,h}(p,q)$ and let $\gamma$ be the minimal unit speed $g$-geodesic from $p$ to $q$ such that $s(p,q)=\int_0^{d(p,q)}e^{-2\frac{\; \phi(\gamma_t)\;}{n-m}}\d t$. Then $\nabla\phi$ is parallel to $\dot{\gamma}$ $($that is, pointwise proportional to $\dot{\gamma}$, not parallel along $\gamma$$)$.
Moreover if
$s(p, x) = d^{\,h}(p, x)$ holds for any $x\in M$, then
$\phi$ is rotationally symmetric around $p$.
\end{lemma}
\begin{proof}
Since $t<d(p,q)$ implies $\gamma_t\notin {\rm Cut}(p)$,
we have $s(p,q)=\int_0^{d(p,q)}e^{-2\frac{\; \phi(\gamma_t)\;}{n-m}}\d t=L^h(\gamma)$.
Combining this with
$s(p,q)=d^{\,h}(p,q)$ we get $d^{\,h}(p,q)=L^h(\gamma)$.
Then $\gamma$ is a minimal geodesic in the $h$ metric. In particular, $\nabla^h_{\frac{\d\gamma}{\d s}}\frac{\d \gamma}{\d s}=0$.
Applying the formula for the Levi-Civita connection of $h$ in terms of that of $g$, we have
\begin{align*}
0&=\nabla^h_{\frac{\d\gamma}{\d s}}\frac{\d \gamma}{\d s}\\
&=\nabla^g_{\frac{\d\gamma}{\d s}}
\frac{\d \gamma}{\d s}
-\frac{4}{n-m}\left\langle \frac{\d \gamma}{\d s}, \nabla \phi \right\rangle \frac{\d \gamma}{\d s}+\frac{2}{n-m}
\left\langle \frac{\d \gamma}{\d s},\frac{\d \gamma}{\d s}\right\rangle \nabla\phi\\
&=
\frac{2e^{\frac{\;4 \phi(\gamma_r)\;}{n-m}}}{n-m}
\left(-\langle \dot{\gamma}_r, \nabla\phi \rangle \dot{\gamma}_r+
\nabla\phi\right).
\end{align*}
Then we obtain that $ \nabla\phi=\langle \nabla\phi,\dot{\gamma}_r\rangle \dot{\gamma}_r$, i.e.,
$ \nabla\phi$ is parallel to $\dot{\gamma}$.
Suppose further that $s(p,x)=d^{\,h}(p,x)$ for any $x\in M$.
Let $x_1,x_2\in M$ be two points
on the sphere $\partial B_r(p)$ for some $r>0$ and
$c:[0,1]\to \partial B_r(p)$ a curve on $ \partial B_r(p)$
joining $c(0)=x_1$ and $c(1)=x_2$. Then $\langle \nabla\phi,\dot{c}_t\rangle =0$, because $\nabla\phi$ is parallel to $\dot{\gamma}$ and, by the Gauss lemma, $\dot{\gamma}$ is orthogonal to $\partial B_r(p)$, where $\gamma$ is the $g$-geodesic from $p$ to a point in ${\rm Im}(c)$.
Hence $\phi(x_2)-\phi(x_1)=\int_0^1\langle \nabla\phi,\dot{c}_t\rangle \d t=0$.
\end{proof}
Here we encounter the difficulty that $s$
does not necessarily satisfy the triangle inequality.
To get around this, we again use the conformal metric $h$.
From $d^{\,h}(p,x)\leq s(p,x)$ and the triangle inequality for the $h$-metric we have
\begin{align*}
s(p,x)+s(q,x)\geq d^{\,h}(p,x)+d^{\,h}(q,x)\geq d^{\,h}(p,q).
\end{align*}
\begin{proof}[Proof of Theorem~\ref{thm:ChengDiamSphere}]
First note that $\mathfrak{s}_{\kappa}(s)=\mathfrak{s}_{\kappa}(\delta_{\kappa}-s)$
holds for $s\in[0,\delta_{\kappa}]$ by Lemma~\ref{lem:SymJacob}. In particular, we have
$\cot_{\kappa}(s)=-\cot_{\kappa}(\delta_{\kappa}-s)$
for all $s\in[0,\delta_{\kappa}]$.
Let $r_p$ and $r_q$ be the distance functions to $p$ and $q$ respectively. Then by Theorem~\ref{thm:GlobalLapComp}, we have
\begin{align*}
\Delta_{ \nabla\phi}(r_p+r_q)(x)\leq (n-m)e^{-\frac{\;2 \phi(x)\;}{n-m}}\left(\cot_{\kappa}(s_p(x))+\cot_{\kappa}(s_q(x)) \right)
\end{align*}
holds in the barrier sense.
We also have $s_p(x)+s_q(x)\geq d^{\,h}(p,q)=\delta_{\kappa}$, so that
\begin{align*}
\cot_{\kappa}\left(s_q(x) \right)\leq\cot_{\kappa}\left(\delta_{\kappa}-s_p(x) \right)=-\cot_{\kappa}\left(s_p(x) \right).
\end{align*}
Thus, $\Delta_{\nabla\phi}(r_p+r_q)\leq0$ holds in the barrier sense.
Note that $r_p+r_q$ attains its minimum at points of a minimal geodesic joining $p$ and $q$. One can then apply the
strong minimum principle for superharmonic functions in the barrier sense (see \cite{Calabi:strongmax, Esc-Hein} for the strong maximum principle for subharmonic functions in the barrier sense), so that
$r_p(x)+r_q(x)=d(p,q)$ for all $x\in M$; in particular, all geodesics starting at $p$ are minimizing and end at $q$.
In particular, we have $\Delta_{\nabla\phi}(r_p+r_q)=0$ in the classical sense. Therefore, we have
\begin{align*}
\cot_{\kappa}(s_p(x))=\cot_{\kappa}(\delta_{\kappa}-s_q(x)) \quad\text{ for all }\quad x\in M.
\end{align*}
Since $s\mapsto\cot_{\kappa}(s)$ is strictly decreasing, we have $s_p(x)+s_q(x)=\delta_{\kappa}$. Hence
$s_p(x)+s_q(x)=d^{\,h}(p,q)=s(p,q)=\delta_{\kappa}$
by $d^{\,h}(p,q)\leq s(p,q)\leq\delta_{\kappa}$ (see Theorem~\ref{thm:WeightedMyers}). A similar
argument shows that
$d_p^{\,h}(x)+d_q^{\,h}(x)=d^{\,h}(p,q)=s(p,q)=\delta_{\kappa}$.
Hence
$0\leq s_p(x)-d_p^{\,h}(x)=d_q^{\,h}(x)-s_q(x)\leq0$ implies
$s_p(x)=d_p^{\,h}(x)$. Taking $x\notin{\rm Cut}(p)$, we see that
there exists a unique minimal unit speed geodesic $\gamma$ with $\gamma_0=p$ and $\gamma_{r_p(x)}=x$ satisfying
$s_p(x)=\int_0^{r_p(x)}e^{-\frac{\;2 \phi(\gamma_t)\;}{n-m}}\d t$.
Applying this with Lemma~\ref{lem:Vindepenedent},
$\phi$ is rotationally symmetric around $p$.
Secondly, we can deduce that
\begin{align}
\Delta_{\nabla\phi}r_p(x)&=(n-m)e^{-\frac{\;2 \phi(x)\;}{n-m}}
\cot_{\kappa}(s_p(x)),\label{eq:Lap1}\\
\Delta_{\nabla\phi}r_q(x)&=(n-m)e^{-\frac{\;2 \phi(x)\;}{n-m}}
\cot_{\kappa}(s_q(x))\label{eq:Lap2}
\end{align}
hold in the barrier sense respectively. Consequently, \eqref{eq:Lap1} (resp.~\eqref{eq:Lap2}) holds for $x\in (\text{\rm Cut}(p)\cup\{p\})^c$ (resp.~$x\in (\text{\rm Cut}(q)\cup\{q\})^c$).
Let $\eta$ be a minimal unit speed geodesic from $p$ to $q$ with $\dot{\eta}_0=\theta$.
Applying Lemma~\ref{lem:LaplacianComparisonconformal} to \eqref{eq:Lap1},
we obtain $m=1$ and that the metric takes the form
\begin{align*}
g_{\eta_r}&=\d r^2+e^{\frac{\;2(\phi(r)+\phi(0))\;}{n-1}} \mathfrak{s}_{\kappa}^2(s(r))g_{\,\mathbb{S}^{n-1}},\quad 0\leq r\leq d(p,q)
\end{align*}
with $s(r)=\int_0^r e^{-\frac{\;2 \phi(t)\;}{n-1}}\d t$
and $s(d(p,q))=\delta_{\kappa}$.
This implies the conclusion.
\end{proof}
\section{Proof of Theorem~\ref{thm:Splitting}}
Let $\gamma$ be a ray in $M$, i.e. a unit speed geodesic defined on $[0,+\infty[$ such that
$d(\gamma_t,\gamma_s)=|s-t|$ for any $s,t\geq0$. The \emph{Busemann function} $b_{\gamma}:M\to\mathbb{R}$ for a ray $\gamma$ is defined by
\begin{align*}
b_{\gamma}(x):=\lim_{t\to+\infty}\left(t-d(x,\gamma_t) \right), \quad x\in M.
\end{align*}
It follows from the triangle inequality that $t\mapsto t-d(x,\gamma_t)$ is monotonically
non-decreasing and bounded above by $d(x,\gamma_0)$, so that the above limit exists. Moreover, it is well known that $b_{\gamma}$ is a $1$-Lipschitz function. See e.g. \cite{SchoenYau:LectDiffGeo}.
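In the model case of $M=\mathbb{R}^n$ with the Euclidean metric and the ray $\gamma_t=tv$ for a unit vector $v$, the Busemann function is linear:
\begin{align*}
b_{\gamma}(x)=\lim_{t\to+\infty}\left(t-|x-tv| \right)=\langle x,v\rangle,
\end{align*}
since $|x-tv|=t\sqrt{1-2\langle x,v\rangle/t+|x|^2/t^2}=t-\langle x,v\rangle+O(1/t)$ as $t\to+\infty$.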
\begin{lemma}\label{lem:LaplacianCompAlongGeo}
Let $(M,g)$ be an $n$-dimensional complete Riemannian manifold and $V$ a $C^1$-vector field.
Fix a point $p\in M$.
Suppose that \eqref{eq:RiciLowBdd}
holds for any $x\in M$ with $\kappa\equiv0$.
Let $q\in M$ be a point such that $r_p$ is smooth at $q$,
and let $\gamma$ be the unique unit speed
minimal geodesic from $p$ to $q$. Then we have
\begin{align}
(\Delta_V r_p)(q)\leq \frac{n-m}{\exp\left(\frac{2V_{\gamma}(r_p(q))}{n-m} \right)\int_0^{r_p(q)}\exp\left(-\frac{2V_{\gamma}(s)}{n-m} \right)\d s}.\label{eq:LaplacianComparison0}
\end{align}
\end{lemma}
\begin{proof}
Applying the Riccati inequality \eqref{eq:lambda/dr} along $\gamma$ under \eqref{eq:RiciLowBdd}
with $\kappa\equiv0$, we see
\begin{align*}
\frac{1}{\lambda(r)^2}\frac{\d\lambda}{\d r}(r)\leq -
\frac{C_p}{n-m}e^{-\frac{2V_{\gamma}(r)}{n-m}}.
\end{align*}
Integrating this from ${\varepsilon}>0$ to $r_p(q)$ and letting ${\varepsilon}\to0$, we have from $\lim_{{\varepsilon}\to0}\lambda({\varepsilon})=+\infty$ that
\begin{align*}
\lambda(r_p(q))=C_p^{-1}
e^{\frac{2V_{\gamma}(r_p(q))}{n-m}}(\Delta_Vr_p)(q)
\leq \frac{n-m}{C_p\int_0^{r_p(q)}e^{-\frac{2V_{\gamma}(r)}{n-m}}\d r}.
\end{align*}
This implies the conclusion.
\end{proof}
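As a sanity check, in the unweighted case $V\equiv0$ with $m=1$ (assuming $C_p=1$ under these normalizations), $V_{\gamma}\equiv0$ and \eqref{eq:LaplacianComparison0} reduces to the classical estimate under ${\rm Ric}\geq0$:
\begin{align*}
(\Delta r_p)(q)\leq \frac{n-1}{\int_0^{r_p(q)}\d s}=\frac{n-1}{r_p(q)}.
\end{align*}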
\begin{lemma}\label{lem:subharmonic}
Let $(M,g)$ be an $n$-dimensional complete Riemannian manifold and $V$ a $C^1$-vector field.
Suppose that $(M,g,V)$ is $(V,m)$-complete.
Suppose that
\eqref{eq:RiciLowBddStrong}
holds for any $p,x\in M$
with $\kappa=0$.
Then the Busemann function $b_{\gamma}$ for any ray $\gamma$ in $M$ is a $\Delta_V$-subharmonic function in the barrier sense,
i.e., for each $p\in M$ and any ${\varepsilon}>0$, there exists a smooth function $b_{p,{\varepsilon}}$ defined on a neighborhood $U_{{\varepsilon}}(p)$ of $p$ such that $b_{p,{\varepsilon}}(p)=b_{\gamma}(p)$, $b_{p,{\varepsilon}}\leq b_{\gamma}$ on $U_{{\varepsilon}}(p)$, and
$\Delta_Vb_{p,{\varepsilon}}(p)\geq -{\varepsilon}$.
\end{lemma}
\begin{proof}
Fix $p\in M$ and a ray $\gamma$ in $M$. Take any sequence $\{t_k\}$
satisfying $\lim_{k\to\infty}t_k=+\infty$.
Let $\eta_{t_k}$ be a minimal $g$-geodesic joining $p$ and $\gamma_{t_k}$.
As stated in \cite{Esc-Hein}, there exists a subsequence of $t_k$ such that the initial vector
$\dot{\eta}_{t_k}(0)$ converges to some unit vector $u\in T_pM$. Let $\eta$ be the ray emanating from $p$ and generated by $u$. Then $p$ does not belong to the cut-locus of $\eta(r)$, hence $\eta(r)\notin {\rm Cut}(p)$ for any $r>0$. So $b_{\gamma}^r(x):=r-d(x,\eta(r))+b_{\gamma}(p)$ is smooth around $p$ and satisfies $b_{\gamma}^r\leq b_{\gamma}$ with
$b_{\gamma}^r(p)= b_{\gamma}(p)$. By \eqref{eq:LaplacianComparison0}, we see that for the unique unit speed geodesic $\overline{\gamma}$
from $\eta(r)$ to $p$
\begin{align}
\Delta_Vb_{\gamma}^r(p)=-\Delta_Vr_{\eta(r)}(p)&\geq
-\frac{n-m}{
\exp\left(\frac{2V_{\overline{\gamma}}(d(\eta(r),p))}{n-m} \right)
\int_0^{d(\eta(r),p)}
\exp\left(-\frac{2V_{\overline{\gamma}}(t)}{n-m}\right)\d t}.\label{eq:subharmbarrier}
\end{align}
Note that $\eta(u)=\overline{\gamma}_{d(p,\eta(r))-u}$ for
$u\in[0,d(p,\eta(r))]$.
Then \eqref{eq:subharmbarrier} becomes
\begin{align}
\Delta_Vb_{\gamma}^r(p)=-\Delta_Vr_{\eta(r)}(p)\geq -\frac{n-m}{\int_0^{d(p,\eta(r))}\exp\left(-\frac{2V_{\eta}(u)}{n-m} \right)\d u}.\label{eq:subharmbarrier*}
\end{align}
Since $(M,g,V)$ is $(V,m)$-complete, we can construct the desired support function.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:Splitting}]
Let $\gamma:]-\infty,+\infty[\to M$ be a line (i.e., $d(\gamma_t,\gamma_s)=|s-t|$ for
$s,t\in\mathbb{R}$) and $\gamma^+,\gamma^{-}$ the rays defined by $\gamma^+_t:=\gamma_t$ and $\gamma^{-}_t:=\gamma_{-t}$ ($t\geq0$). Let $b^+$, $b^-$ be the Busemann functions
associated to $\gamma^+$, $\gamma^-$, respectively. Then, under the $(V,m)$-completeness of $(M,g,V)$,
$b^+$ and $b^-$
are continuous $\Delta_V$-subharmonic functions on $M$ in the barrier sense by Lemma~\ref{lem:subharmonic}. Since $\gamma$ is a line, for each $x\in M$, we have
\begin{align*}
b^+(x)+b^-(x)=\lim_{t\to+\infty}(2t-d(x,\gamma_t)-d(x,\gamma_{-t}))\leq0
\end{align*}
and $b^++b^-=0$ on $\gamma$.
In view of the strong maximum principle for
$\Delta_V$-subharmonic functions in the barrier sense (see \cite{Calabi:strongmax, Esc-Hein} and \cite[Lemma~2.4]{FanLiZhang}),
we have $b^++b^-=0$ on $M$. In particular, $b^+$ and $b^-$ are continuous
$\Delta_V$-harmonic functions in the barrier sense. Since $|\nabla r_p|=1$ on $({\rm Cut}(p)\cup \{p\})^c$, we have
$|\nabla b^+|=|\nabla b^-|=1$ on $M$.
Moreover, let $h^{\pm}$ be the smooth $\Delta_V$-harmonic function on an open ball $B$ such that $b^{\pm}=h^{\pm}$ on $\partial B$. Applying the weak maximum principle to the $\Delta_V$-harmonic function $b^{\pm}-h^{\pm}$ on $B$
in the barrier sense, we can deduce $b^{\pm}\leq h^{\pm}$ on $B$, hence $0=b^++b^-\leq h^++h^-$. Applying the strong maximum principle again to the smooth $\Delta_V$-harmonic function $h^++h^-$ on $B$, we have $h^++h^-\equiv0$ on $B$.
Thus, we can get $0\geq b^+-h^+=-(b^--h^-)\geq0$ on $B$, hence $b^{\pm}=h^{\pm}$ on $B$. Therefore, $b^{\pm}$ is smooth on any ball $B$, hence on $M$.
Applying \cite[Lemma~6.5]{Wylie:WarpedSplitting} to the
smooth $\Delta_V$-harmonic function $b^{\pm}$ with $|\nabla b^{\pm}|=1$ on $M$, we can deduce that
${\rm Ric}_{1,n}(\Delta_V)(\nabla b^{\pm},\nabla b^{\pm})=0$ and that the $n-1$ non-zero eigenvalues of
${\rm Hess}\,b^{\pm}|_p$ are all equal, because
${\rm Hess}\,b^{\pm}|_p$ has $n-1$ non-zero eigenvalues.
Applying \cite[Lemma~6.6]{Wylie:WarpedSplitting} to the
smooth $\Delta_V$-harmonic function $b^{\pm}$ satisfying
$|\nabla b^{\pm}|=1$
together with the fact that
${\rm CD}(0,m)$-condition implies ${\rm CD}(0,1)$-condition for $m<1$, we have that
$g$ is a twisted product of the form $g=dr^2+e^{\frac{\;2\phi\;}{n-1}}g_N$,
where $g_N$ is a metric on $N$ and $\phi:M\to\mathbb{R}$ is a smooth function, ${\rm Ric}_{1,n}(\Delta_V)\left(\nabla b^{\pm},\nabla b^{\pm} \right)=0$, and $V=\frac{\partial \phi}{\partial r}\cdot\frac{\partial}{\partial r}+U$ with $U\perp \frac{\partial}{\partial r}$.
In the same way as in the proof of \cite[Corollary~6.7]{Wylie:WarpedSplitting}, we can deduce that $\frac{d\phi}{dr}=0$, because \cite[Proposition~2.1]{Wylie:WarpedSplitting} yields ${\rm Ric}_{1,n}(\Delta_V)\left(\frac{\partial}{\partial r}, \frac{\partial}{\partial r}\right)=0$ and
\begin{align*}
0\leq{\rm Ric}_{m,n}(\Delta_V)\left(\frac{\partial}{\partial r}, \frac{\partial}{\partial r}\right)&={\rm Ric}_{1,n}(\Delta_V)
\left(\frac{\partial}{\partial r}, \frac{\partial}{\partial r}\right)+\left(\frac{m-1}{(n-1)(n-m)} \right)\left(\frac{d\phi}{dr} \right)^2\\
&=\left(\frac{m-1}{(n-1)(n-m)} \right)\left(\frac{d\phi}{dr} \right)^2\leq0.
\end{align*}
This means that $g$ has the product form $g=dr^2+e^{\frac{2\phi(0,\cdot)}{n-1}}g_N=dr^2+h_N$ on $\mathbb{R}\times N$.
Moreover, we can see that $V$ is a vector field on $N$ by using the fact that ${\rm Ric}_{m,n}(\Delta_V)\left(\frac{\partial}{\partial r},U \right)=0$ for all $U\perp \frac{\partial}{\partial r}$.
\end{proof}
\providecommand{\bysame}{\leavevmode\hbox to3em{\hrulefill}\thinspace}
\providecommand{\MR}{\relax\ifhmode\unskip\space\fi MR }
\providecommand{\MRhref}[2]{%
\href{http://www.ams.org/mathscinet-getitem?mr=#1}{#2}
}
\providecommand{\href}[2]{#2}
\section*{Abstract}
Rate-induced tipping, or simply R-tipping,
occurs when time-variation of input parameters of a dynamical system interacts with system timescales to give genuine nonautonomous instabilities that
cannot, in general, be understood in terms of autonomous bifurcations
in the frozen system with a fixed-in-time input. Such instabilities appear as the input varies at some critical rates. Finding these critical rates and characterising what happens when they are exceeded is of great interest in natural science. The challenge is that it requires development of mathematical concepts and techniques beyond classical autonomous bifurcation theory.
This paper develops an accessible mathematical framework and gives testable criteria for R-tipping in multidimensional nonautonomous dynamical systems with an autonomous future limit.
Our focus is on R-tipping via loss of tracking of base attractors that are equilibria in the frozen system, due to crossing what we call regular thresholds. These thresholds are associated with regular edge states:
compact hyperbolic invariant sets with
one unstable direction and an orientable stable manifold, which lie on a basin boundary in the frozen system.
We define R-tipping and critical rates for the nonautonomous system in terms of special solutions that limit to a compact invariant set of the future limit system that is not an attractor. We then focus on the case when the limit set is a regular edge state of the future limit system,
which we call the regular R-tipping edge state and which anchors the associated regular R-tipping threshold at infinity. We introduce the concept of edge tails to rigorously classify R-tipping into reversible, irreversible and degenerate cases.
The main idea is to use autonomous dynamics and regular edge states of the future limit system to analyse R-tipping in the nonautonomous system.
To that end, we compactify the original nonautonomous system to include the limiting autonomous dynamics.
This allows us to give easily verifiable conditions in terms of simple properties of the frozen system and input variation that are sufficient for the occurrence of R-tipping.
Additionally, we give necessary and sufficient conditions for the occurrence of reversible and irreversible R-tipping in terms of computationally verifiable (heteroclinic) connections to regular R-tipping edge states in the compactified system.
Thus, our work extends existing results for R-tipping in one dimension to arbitrary dimension and to different cases of R-tipping, some of which can occur only in higher dimensions.
\vspace{10mm}
\tableofcontents
\newpage
\section{Introduction}
Instability in the evolution of an open system subject to time-varying external conditions is a vitally important problem in many areas of applied science, including climate, ecology and biology. In particular, ``tipping points'' or ``critical transitions'' are {\em large, sudden} and often {\em irreversible} changes in the state of the system in response to {\em small and slow} changes in the external conditions. For an open system near a stable state (an attractor), we might expect
that, as external conditions change with time, the stable state will change too. We describe this phenomenon as a {\em moving stable state}. In many cases the system may adapt to changing external conditions and {\em track} the moving stable state. However, tracking may not
always be possible. Nonlinearities, competing timescales and feedbacks in the system mean that the stable state may turn unstable or disappear. Alternatively, the system may even cross the boundary of the basin of attraction of the stable state, or an excitability (quasi)threshold, and evolve away from the moving stable state.
When this happens, the system {\em tips} to a different state, which may be long-lived (another attractor) or short-lived (a transient response).
Our focus is on an interesting and relatively new tipping phenomenon, in which the system fails to track a moving stable state because the external conditions change too fast.
From a mathematical viewpoint, such tipping corresponds to a {\em genuine nonautonomous instability} in the corresponding {\em nonautonomous model} with time-varying external inputs.
The two main obstacles to mathematical analysis of such tipping are: (a) inability to explain it in terms of a classical autonomous bifurcation of the stable state in the {\em frozen model} with fixed-in-time external inputs, and (b) the absence of compact stable states such as equilibria, limit cycles or tori in the nonautonomous model.
Thus, one requires techniques beyond classical autonomous bifurcation theory~\cite{Kuznetsov2004}. Existing approaches include, for example, identifying a ``safe region'' about the moving stable state~\cite{Bishnani2003,Osinga2014,Ashwin2012}, using geometric singular perturbations~\cite{Wieczorek2011,Mitry2013,Perryman2014,Vanselow2019,OSullivan2021},
finite-time Lyapunov exponents~\cite{KloedenRasmussen2011,Duc2016,HoyerLeitzel2018,Meyeretal2018},
local pullback attractors~\cite{Arnold1998,Rasmussen2007,Potzsche2010,KloedenRasmussen2011,Ashwin2016,Alkhayuon2018,Longo2021,Kuehn2021} or snapshot attractors~\cite{Drotos2015,Kaszas2019}, Melnikov-like methods~\cite{Kuehn2021},
as well as most likely tipping paths~\cite{Ritchie2016,Chen2019} and tipping probabilities~\cite{Hartl2019} in the presence of noise.
This work overcomes obstacles (a) and (b) as follows. We relate the actual state of the system to the moving stable state to develop an accessible mathematical framework for such tipping phenomena, and give rigorous results that are both easily verifiable and relevant for a wide range of applications. Our framework is underpinned by the compactification technique developed in~\cite{Wieczorek2019compact} in combination with geometric singular perturbation theory~\cite{Fenichel1979,Jones1995,Szmolyan2004,Wechselberger2011}, and is guided by analysis of canonical examples in~\cite{Xie2019}. Most importantly, it allows us to extend a number of key results from~\cite{Ashwin2016} for irreversible R-tipping in one-dimensional (scalar) systems to arbitrary dimension and to different cases of R-tipping, including reversible R-tipping that can occur only in higher dimensions.
\subsection{Motivation: Critical Factors and R-tipping}
In applications, it is important to determine {\it critical factors} for tipping~\cite{Ashwin2012}. The most commonly studied critical factor is a {\it critical level} of the external input at
which the moving stable state of a complex system disappears or
destabilises in a classical dangerous\,\footnote{Dangerous bifurcations have a discontinuity in the parametrised family (or branch) of attractors at the bifurcation point and include, for example, saddle-node and subcritical Hopf bifurcations~\cite{Thompson1994}.}
bifurcation, causing the system to
suddenly move to a different
state~\cite{Thompson_Sieber_2010b,Kuehn2011,Ott,Ashwin2016,Ritchie2021}. Critical levels
have been identified in many different contexts: the collapse of
thermohaline circulation past the critical level of fresh-water influx
into the North Atlantic~\cite[Ch.16]{Dijkstra2008},
loss of submerged
vegetation in shallow turbid lakes past the critical level of nutrient
concentration~\cite[Ch.7]{Scheffer2009book}, forest-to-desert
transitions below the critical level of
precipitation~\cite[Ch.11]{Scheffer2009book}, power outage blackouts
past the critical level of power consumption~\cite{Dobson1989,Budd2002},
and in the reports of the {\em Intergovernmental Panel on Climate Change}~\cite{IPCC}
which specify critical levels of atmospheric temperature and CO$_2$
concentration. The underlying dynamical mechanism is
illustrated in a simple example in Figure~\ref{fig:BR}(a). As the
external input changes in time, the position of the stable state
changes too. The nonautonomous system can track the moving
stable state as long as it persists, provided that the
external input varies slowly enough. However, there may be a critical level at which the
moving stable state disappears or destabilises in a classical
bifurcation~\cite[Lemma 2.3]{Ashwin2016}.
If the bifurcation is dangerous, there is no nearby stable state to track
beyond the critical level, and the system suddenly moves to
a different state. Note that the critical transition in Figure~\ref{fig:BR}(a):
\begin{itemize}
\item
Requires a critical level of the external input -- a classical dangerous bifurcation of the stable state in the
{\em frozen system} with fixed-in-time external inputs~\cite{Thompson_Sieber_2010b,Kuehn2011,OKeeffe2019}.
\item Occurs no matter how slowly the external input
passes through the critical level.
\end{itemize}
This nonautonomous instability has been described as a {\em dynamic bifurcation}~\cite{Benoit1990}, {\em adiabatic bifurcation}~\cite{KloedenRasmussen2011} or {\em bifurcation-induced tipping (B-tipping)}~\cite{Ashwin2012}. The key point is that it can be understood in terms of a classical autonomous bifurcation of the moving stable state. Thus, in the presence of noise, there may be early warning signals of the impending bifurcation, and there has been much progress in understanding when such signals may be present \cite{Dakos_etal_2008,Dakos_etal_2015,Ditlevsen_etal_2010,Schefferetal2009}.
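The B-tipping mechanism can be illustrated with a short numerical sketch. The prototype below ($f(x,\lambda)=\lambda-x^2$ with a tanh-shaped ramp) is a hypothetical choice of ours, not taken from the cited works; its frozen system undergoes a saddle-node bifurcation at the critical level $\lambda=0$, and the sketch confirms that tipping occurs for both a slow and a fast passage through that level:

```python
import math

# Minimal B-tipping sketch (hypothetical prototype, chosen for illustration):
# the frozen system x' = lambda - x^2 has a saddle-node at the critical level
# lambda = 0.  The input Lambda(tau) = -tanh(tau) ramps from +1 down past 0,
# so the moving sink x = sqrt(Lambda) disappears and the system tips towards
# x -> -infinity no matter how slowly the critical level is crossed.

def Lam(tau):
    return -math.tanh(tau)

def rhs(x, tau, r):
    return (Lam(tau) - x*x) / r

def integrate(r, tau0=-10.0, tau1=10.0, dtau=1e-3, cap=-10.0):
    """RK4 for x' = f(x, Lambda(tau))/r from the past sink x = 1."""
    x, tau = 1.0, tau0
    while tau < tau1:
        k1 = rhs(x, tau, r)
        k2 = rhs(x + 0.5*dtau*k1, tau + 0.5*dtau, r)
        k3 = rhs(x + 0.5*dtau*k2, tau + 0.5*dtau, r)
        k4 = rhs(x + dtau*k3, tau + dtau, r)
        x += (dtau/6.0)*(k1 + 2*k2 + 2*k3 + k4)
        tau += dtau
        if x < cap:              # past the fold: the system has tipped
            return cap
    return x

print(integrate(r=0.05), integrate(r=2.0))   # both runs tip: B-tipping is rate-independent
```

The run confirms the bullet points above: the transition happens for slow and fast rates alike, because it is driven by a critical level, not a critical rate.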
\begin{figure}[t!]
\includegraphics[width=16.cm]{figs/fig01}
\caption{ The conceptual difference between (a) B-tipping and (b) R-tipping for monotonically changing external conditions. The (solid black) moving stable state is a stable state for the frozen system with different but fixed-in-time external conditions. The (colour) trajectories
show the system behaviour for time-varying external conditions. (a) In
B-tipping, there is a {\it critical level} of external conditions, and tipping occurs for any rate of passage through the critical level. (b) In R-tipping, there is no critical level, but there is a {\it critical rate} of change of external conditions above which the system fails to track the moving stable state and tips. The (blue) special critical-rate trajectory tracks what we define as a repelling R-tipping threshold.
}
\label{fig:BR}
\end{figure}
However, critical levels are not the only critical factor for sudden transitions.
Other factors may arise in a system that is given insufficient time to adapt~\cite{Scheffer2008,Wieczorek2011}, that is subjected to fast fluctuations (noise)~\cite{Ditlevsen_etal_2010,Ashwin2012}, that is close to a basin boundary
and may spend long periods of time near (unstable) states of saddle type~\cite{Kuehn2011,Bezekci2015,Hastings2018}, or that is sensitive to the spatial extent, spatial location or spatial change of the external input~\cite{Rushton1937,Starmer2007,Idris2008,Berestycki2009,MacCarthaigh}.
What is more, real-world tipping phenomena may involve an interplay between
different critical factors~\cite{OKeeffe2019,Ritchie2021}.
The focus of this work is on systems that are particularly sensitive
to how fast the external input changes~\cite{Jones2021}. Such systems may not even have
any critical levels, but they may have {\it critical rates} of change:
they suddenly and unexpectedly move to a different state if the
external input changes too fast~\cite{Morris2002,Scheffer2008,Wieczorek2011,Ashwin2012,Siteur2016,Alkhayuon2018,Vanselow2019,OKeeffe2019,Kiers2018,Suchithra2020,Lohmann2021,Arumugam2021,Pierini2021,Clarke2021,OSullivan2021}.
Although critical rates are less understood than critical levels, they
are equally relevant and ubiquitous. In particular, critical rates
are of special interest in climate science and ecology in the contexts of {\em global warming}, increasing {\em climate variability}, and ensuing
{\em failure to adapt to changing external conditions}: the moving stable state is continuously available, but the system is unable to adjust to its changing position when the change, although slow, happens too fast for the system to follow. This is evidenced by
reports of contemporary and projected climate variability being too
fast for animals and plants to migrate or
adapt~\cite{Leemans2004,Berestycki2009,Jezkova2016}, critical dependence of
thermohaline circulation on the rate of North-Atlantic fresh-water
influx~\cite{Lucarini2005,Alkhayuon2018,Lohmann2021}, sudden release of soil carbon from
peatlands into the atmosphere~\cite{Luke2011,Wieczorek2011,Clarke2021} that can be accompanied by ``zombie wildfires''~\cite{Scholten2021,OSullivan2021} above some critical rate of atmospheric
warming, climate-related ``critical-rate
hypothesis'' in the context of coastal wetlands responding to rising
sea level~\cite{Morris2002} and more generally ecosystems subject to
rapid changes in external conditions such as wet El Ni\~no Southern
Oscillation years, droughts, or disease outbreaks~\cite{Scheffer2008,OKeeffe2019}.
There are many other areas of science where critical
rates are important.
In neuroscience, in addition to type-I or II nerves which ``fire''
above some level of externally applied voltage, there are type-III
excitable nerves that are able to accommodate slow changes in an
externally applied voltage up to very high voltage levels. What is
necessary for
type-III nerves to ``fire'' is a fast enough increase in an externally
applied voltage, rather than a high enough voltage level, and this
rate-dependence enables accurate coincidence
detection in the brain~\cite{Hill1935,Hodgkin1948,Idris2007,Mitry2013}.
In a competitive economy, there is a related ``chasing problem'' in the
context of supply, demand and prices trying to adapt to a changing
equilibrium~\cite{Sheng-YiHsu2014}.
The general concept of rate-induced tipping is illustrated in
Figure~\ref{fig:BR}(b). When the external input changes in time, the
nonautonomous system tries to track the moving stable state. Tracking
is guaranteed if the external input changes slowly enough~\cite[Lemma 2.3]{Ashwin2016}. However, above some critical rate of change of the external input,
the system can no longer track the moving stable state and may
suddenly move to a different state. Note that the critical transition in
Figure~\ref{fig:BR}(b):
\begin{itemize}
\item Does not require any critical level of the external input -- there need not be any classical bifurcation of the stable state in the frozen system with fixed-in-time external inputs.
\item Occurs only if the external input varies sufficiently fast.
\item Can be {\em irreversible}: the system fails to track the
moving stable state, suddenly moves to a different stable state
and never returns to the original stable state; see for
example~\cite{Scheffer2008,Kiers2018,OKeeffe2019}.
\item Can be {\em reversible}: the system fails to track the moving
stable state, makes a large excursion away from it, then returns to the original stable state, and this process may happen repeatedly;
see for example~\cite{Wieczorek2011,Mitry2013,Perryman2014,Vanselow2019,OSullivan2021}.
\end{itemize}
We describe such a genuine nonautonomous instability as a {\em rate-induced tipping} or simply {\em R-tipping}~\cite{Ashwin2012}.
By ``genuine nonautonomous'' we mean that, unlike B-tipping, R-tipping cannot, in general, be understood in terms of a classical autonomous bifurcation of a moving stable state. Nonetheless, in the presence of noise, some of the early-warning signals identified for B-tipping may also occur for R-tipping~\cite{Ritchie2016}.
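The rate dependence described above can be reproduced with a minimal numerical sketch. The scalar prototype below ($f(x,\lambda)=(x+\lambda)^2-1$ with a tanh-shaped input) is a hypothetical illustrative choice, not the paper's general setting; its frozen system has a hyperbolic sink at $x=-\lambda-1$ for every $\lambda$, so there is no critical level, yet the same input causes tipping when applied fast enough:

```python
import math

# Minimal R-tipping sketch with a hypothetical scalar prototype
# f(x, lam) = (x + lam)^2 - 1 (our illustrative choice): frozen sinks lie
# at x = -lam - 1 for every lam, so there is no critical level.  The input
# shifts from lam^- = 0 to lam^+ = 3; the moving sink travels from -1 to -4.

lam_max = 3.0

def Lam(tau):
    """Bi-asymptotically constant input on the input time scale tau = r*t."""
    return 0.5 * lam_max * (math.tanh(tau) + 1.0)

def rhs(x, tau, r):
    return ((x + Lam(tau))**2 - 1.0) / r

def integrate(r, tau0=-10.0, tau1=10.0, dtau=1e-3, cap=10.0):
    """RK4 for x' = f(x, Lambda(tau))/r, started on the past sink x = -1."""
    x, tau = -1.0, tau0
    while tau < tau1:
        k1 = rhs(x, tau, r)
        k2 = rhs(x + 0.5*dtau*k1, tau + 0.5*dtau, r)
        k3 = rhs(x + 0.5*dtau*k2, tau + 0.5*dtau, r)
        k4 = rhs(x + dtau*k3, tau + dtau, r)
        x += (dtau/6.0)*(k1 + 2*k2 + 2*k3 + k4)
        tau += dtau
        if x > cap:              # crossed the moving threshold: tipped
            return cap
    return x

x_slow = integrate(r=0.2)   # tracks the moving sink, ends near x = -4
x_fast = integrate(r=3.0)   # same input, higher rate: fails to track and tips
print(x_slow, x_fast)
```

Unlike the B-tipping case, here the outcome depends entirely on the rate parameter $r$: the slow run settles on the future sink, while the fast run leaves its neighbourhood irreversibly.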
We highlight that R-tipping is somewhat counter-intuitive and difficult to analyse for a number of reasons.
In addition to the fact that R-tipping cannot be simply explained in terms of a classical bifurcation of the stable state in a
frozen system~\cite{OKeeffe2019}, R-tipping may occur for external input rates that are much slower than the rate of convergence towards the stable state in the frozen
system~\cite{Vanselow2019}.
The reason is that tracking requires the
convergence rate towards the moving stable state to be faster than the
speed of the moving stable state in the phase space.
Thus, if the position of the stable state in the phase space is sufficiently sensitive to
changes in the external input parameters, then tipping may occur for external inputs
varying more slowly than the convergence rate towards the stable
state~\cite{Ashwin2012,Ashwin2012cor}.
Moreover, there may be no obvious R-tipping threshold separating initial
conditions that track the moving stable state from those that
R-tip. R-tipping thresholds in nonautonomous systems may be intricate and
non-obvious in the sense that they cannot always be related to a
threshold in the frozen system~\cite{Wieczorek2011,Mitry2013,Perryman2015,Vanselow2019,Xie2019,OSullivan2021}.
Lastly, reversible R-tipping poses the mathematical challenge of capturing transient, and thus quantitative, instabilities because the system exhibits the same asymptotic (long-term) behaviour below and above a critical rate.
This has previously made reversible R-tipping difficult to define rigorously,
even using modern concepts from the theory of nonautonomous dynamical systems~\cite{Ashwin2016}. These counter-intuitive properties of R-tipping motivate and highlight the need for the development of a mathematical framework that is easily accessible to applied scientists.
\subsection{Summary of Main Results and Outline of Paper}
This paper develops an applicable theory of R-tipping for external inputs that vary smoothly with time and decay exponentially to a constant at
infinity. The theory allows us to extend rigorous criteria from~\cite{Ashwin2016} for irreversible R-tipping in one-dimensional (scalar) systems to arbitrary dimension and to different cases of R-tipping in the following way.
Firstly, we identify {\em R-tipping thresholds} for systems in arbitrary dimension.
R-tipping thresholds are typically responsible for loss of end-point tracking and separate nonautonomous solutions that R-tip from those that do not. These thresholds unify and generalise the concepts of ``excitability thresholds''
for excitable systems and ``multi-basin boundaries'' for multistable systems.
Secondly, we classify reversible, irreversible and degenerate cases of R-tipping in such systems.
Thirdly, we give rigorous yet testable criteria for R-tipping to occur in terms of verifiable properties of the autonomous frozen systems and external input variation,
in terms of connecting (heteroclinic) orbits in a suitably compactified system, and in terms of numerical tools. Note that
\begin{itemize}
\item[(i)] We introduce the concepts of {\em regular edge states} and {\em regular thresholds} for autonomous frozen systems. Roughly speaking,
regular edge states are compact hyperbolic invariant sets with orientable codimension-one stable manifolds (one unstable direction), and regular thresholds are forward invariant subsets of stable manifolds of regular edge states.
\item[(ii)]
We work with asymptotically constant external inputs. This allows us to:
\begin{itemize}
\item[$\bullet$]
Identify {\em regular R-tipping edge states} with regular edge states in the corresponding autonomous future limit system.
\item[$\bullet$]
Define R-tipping and critical rates for the nonautonomous system in terms of special solutions that limit to a compact invariant set of the future limit system that is not an attractor.
Then, we introduce a new concept of {\em edge tails} for the nonautonomous system with a critical rate approached from above and below. This concept allows us to capture transient instabilities and classify R-tipping into reversible, irreversible and degenerate cases by focusing on limit sets that are regular R-tipping edge states.
\item[$\bullet$]
Adapt the compactification framework from~\cite{Wieczorek2019compact} to extend the nonautonomous problem by
including the autonomous future and possibly the past limit systems.
\item[$\bullet$]
Analyse nonautonomous R-tipping in the compactified system in terms of regular R-tipping edge states which turn out to anchor the important regular R-tipping thresholds at infinity.
\end{itemize}
\item[(iii)] We show that R-tipping
in the nonautonomous system corresponds to a {\em connecting (heteroclinic) orbit}
in the autonomous compactified system. This allows us to:
\begin{itemize}
\item[$\bullet$] Give computationally verifiable sufficient and necessary conditions for reversible and irreversible R-tipping to occur in arbitrary dimension.
\item[$\bullet$] Use existing numerical continuation techniques for parameter continuation of genuine nonautonomous R-tipping instabilities.
\end{itemize}
\end{itemize}
The paper is organized as follows. Section~\ref{sec:problemsetting} introduces multidimensional {\em nonautonomous systems} with asymptotically constant {\em input parameters}. Additionally, it introduces a {\em rate parameter} $r>0$ that characterises the `rate' of time variation of the input parameter(s) along some input parameter path. Then, it identifies the optimal timescale for R-tipping analysis in terms of the rate parameter.
Section~\ref{sec:NonautonInstab} characterises properties of the corresponding {\em frozen systems} with fixed-in-time input parameters. In particular, it defines {\em moving sinks} on a time interval $I$, which are hyperbolic sinks of the frozen system parametrised by time for a given time variation of the
input parameter(s). Then, it focuses on R-tipping from base attractors that are hyperbolic sinks, and characterises R-tipping as failure of the nonautonomous system to track a moving sink in a certain sense.
Section~\ref{sec:ThresholdsEdgeFrozen} develops a theory of {\em regular thresholds} and {\em regular edge states} within the frozen system, and defines {\em moving regular thresholds} and {\em moving regular edge states} in a similar way to moving sinks. Crucially, it introduces the important and easily verifiable property of {\em (forward) threshold instability} of a moving sink.
Section~\ref{sec:Rtippingdefs} gives a precise definition and classification of {\em R-tipping} from a moving sink in nonautonomous systems with asymptotically constant inputs. It defines {\em regular R-tipping thresholds} as well as {\em regular R-tipping edge states} and their {\em edge tails} that enable us to characterise {\em reversible, irreversible and degenerate cases of R-tipping}.
Section~\ref{sec:compact} introduces and summarises results from~\cite{Wieczorek2019compact} on compactification of nonautonomous dynamical systems with asymptotically constant external inputs.
It includes the following key technical results.
Proposition~\ref{prop:invsete-} relates a local pullback attractor anchored at negative infinity by a hyperbolic sink to an invariant unstable manifold of a saddle in the compactified system, a regular R-tipping threshold to an invariant stable manifold of the corresponding regular R-tipping edge state in the compactified system, and each edge tail to one branch of the invariant unstable manifold of the regular R-tipping edge state. Proposition~\ref{prop:edgetails} uses these relations to characterise R-tipping in terms of edge tail behaviour.
The main results of the paper are presented in Section~\ref{sec:gentestcrit} for moving sinks and equilibrium regular R-tipping edge states. Theorem~\ref{thm:tracking} shows that nonautonomous solutions track moving sinks of the frozen system, while Theorem~\ref{thm:trackingthresholds} shows that regular
R-tipping thresholds track moving regular thresholds of the frozen system,
as long as the rate parameter $r$ is sufficiently small.
For moving sinks on $I=\mathbb{R}$, Theorem~\ref{thm:Rtip} gives criteria for the existence of R-tipping in nonautonomous systems in terms of: threshold instability of a hyperbolic sink in the frozen system, and forward threshold instability of a moving sink of the frozen system for a given external input. This theorem generalizes results from~\cite{Ashwin2016} for one-dimensional (scalar) systems in the sense that threshold stability does not guarantee tracking in higher-dimensional systems; see for example~\cite{Kiers2018,Xie2019}.
We finish this section by identifying different cases of R-tipping in the nonautonomous system with connecting (heteroclinic) orbits in the autonomous compactified system. For the case of non-degenerate R-tipping,
Proposition~\ref{prop:rtip_compact} identifies
reversible and irreversible R-tipping in the nonautonomous system with presence of a connecting (heteroclinic) orbit with certain non-degeneracy properties to a regular R-tipping edge state in the compactified system. This means that powerful numerical tools for detection and parameter continuation of connecting (heteroclinic) orbits can be applied to practically find critical rates for R-tipping.
Finally, Section~\ref{sec:Conclusions} highlights some open questions associated with extending our results to more general settings. These settings include external inputs that decay to a constant more slowly than exponentially or are not asymptotically constant at all, R-tipping from more complicated base attractors, more complicated R-tipping edge states, thresholds that are not regular,
and quasithresholds. This paper is complementary to~\cite{Wieczorek2019compact} which develops the theory of compactification for asymptotically autonomous dynamical systems, and to~\cite{Xie2019} which presents a number of illustrative canonical examples of R-tipping.
\section{The Problem Setting}
\label{sec:problemsetting}
We consider a nonlinear {\em nonautonomous system}
\begin{equation}
\label{eq:ode}
\dot{x}=f(x,\Lambda(t)),
\end{equation}
with the state variable $x\in\mathbb{R}^n$, time $t\in \mathbb{R}$, $C^1$-smooth time-varying external input
$\Lambda:\mathbb{R}\to\mathbb{R}^d$,
and $C^1$-smooth vector field $f:\mathbb{R}^n\times\mathbb{R}^d\to\mathbb{R}^n$,
where $\dot{x}$ denotes $dx/dt$.
\subsection{Parametrised Nonautonomous System: Rates of Change}
We are interested in understanding nonautonomous R-tipping instabilities that appear on varying the time scale or ``rate of change'' of an external input. To address this question, we extend~\eqref{eq:ode} to a {\em parametrised family of nonautonomous systems}
\begin{equation}
\dot{x}=f(x,\Lambda(rt)),
\label{eq:odewithr}
\end{equation}
where $r >0$ is a constant {\em rate parameter}~\cite{Ashwin2012,Ashwin2016,Alkhayuon2018,Wieczorek2011}. We refer to
$t$ as the {\em time scale of the system}, and to $\tau=rt$
as the {\em time scale of the external input}.\,\footnote{Note that if $t$ is in units of seconds and $r$ is in units of inverse seconds, then $\tau$ is dimensionless.} It is important to note that, typically, both the external input and solutions of \eqref{eq:odewithr} depend on $r$. Therefore, it will be convenient to analyse R-tipping on the time scale $\tau$ of the external input,
where only solutions to the problem depend on $r$.
To this end, we rewrite~\eqref{eq:odewithr} in terms of $\tau$, and consider
\begin{equation}
x' = f(x,\Lambda(\tau))/r,
\label{eq:odewithrs}
\end{equation}
where $x'$ denotes $dx/d\tau$.
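The equivalence of the two time scales can be checked numerically. The sketch below uses a hypothetical $f$ and $\Lambda$ of our own choosing: it integrates \eqref{eq:odewithr} on the system time scale $t$ and \eqref{eq:odewithrs} on the input time scale $\tau = rt$, and the two computations land on the same state:

```python
import math

# A small numerical check (with a hypothetical choice of f and Lambda, not
# from the paper) that the two formulations describe the same dynamics:
# integrating dx/dt = f(x, Lambda(r t)) up to time t, and
# x' = f(x, Lambda(tau))/r up to tau = r t, must land on the same state.

def f(x, lam):
    return lam - x            # relaxation towards the input value

def Lam(tau):
    return math.tanh(tau)

def rk4(rhs, x0, s0, s1, n):
    """Generic RK4 integrator for x' = rhs(x, s) on [s0, s1] with n steps."""
    h, x, s = (s1 - s0) / n, x0, s0
    for _ in range(n):
        k1 = rhs(x, s)
        k2 = rhs(x + 0.5*h*k1, s + 0.5*h)
        k3 = rhs(x + 0.5*h*k2, s + 0.5*h)
        k4 = rhs(x + h*k3, s + h)
        x += (h/6.0)*(k1 + 2*k2 + 2*k3 + k4)
        s += h
    return x

r, t_end = 0.5, 4.0
x_t   = rk4(lambda x, t:   f(x, Lam(r*t)),   0.0, 0.0, t_end,   4000)
x_tau = rk4(lambda x, tau: f(x, Lam(tau))/r, 0.0, 0.0, r*t_end, 4000)
print(x_t, x_tau)            # agree up to discretisation error
```

This is simply the chain rule at work: if $y(\tau)=x(\tau/r)$, then $y'=\dot{x}/r=f(y,\Lambda(\tau))/r$.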
\subsection{Frozen System}
Although R-tipping is a genuine nonautonomous instability
of the nonautonomous system, much can be
understood about R-tipping
from properties
of the autonomous {\em frozen system}
\begin{equation}
\label{eq:odea}
\dot{x}=f(x,\lambda),
\end{equation}
with a fixed-in-time {\em input parameter} $\lambda$ corresponding to a possible value of the external input~\cite{Ashwin2016}.
The frozen system is sometimes called a {\em quasistatic system} or an {\em instantaneous system}.
We will be interested in families of equilibria
of the frozen system \eqref{eq:odea} that vary $C^1$-smoothly with $\lambda$,
which are also referred to as {\em branches of equilibria}.
Note that, for fixed $r>0$, one can write \eqref{eq:odea} in the time scale of the external input, namely
\begin{equation}
\label{eq:odears}
x'=f(x,\lambda)/r,
\end{equation}
and that \eqref{eq:odea} and \eqref{eq:odears} clearly have the same invariant sets, qualitative stability and bifurcations on varying $\lambda$.
\subsection{Asymptotically Constant Inputs: Future and Past Limit Systems}
\label{sec:extinp}
\begin{figure}[t]
\begin{center}
\includegraphics[width=15.cm]{./figs/fig02}
\end{center}
\vspace{-3mm}
\caption{
(a) Example of a bi-asymptotically constant (scalar) external input $\Lambda(\tau)$
with the future limit $\lambda^+$ and the past limit $\lambda^-$, together with two parameter paths:
(blue) parameter path $P_\Lambda\subset\mathbb{R}$ traced out by this $\Lambda(\tau)$, and (purple)
parameter path $P_{\Lambda,I}\subset P_\Lambda$
traced out by this
$\Lambda(\tau)$ on a given time interval $I=(\tau_-,\tau_+)$. Note that $\lambda^+$ and $\lambda^-$
do not lie on the boundary of $P_\Lambda$.
(b) Examples of parameter paths $P$ in $\mathbb{R}^d=\mathbb{R}^2$.
}
\label{fig:path}
\end{figure}
When developing a theory of R-tipping, one needs to specify a class of possible external
inputs $\Lambda(\tau)$. For arbitrary time-dependent inputs, the theory of nonautonomous
systems \cite{KloedenRasmussen2011} summarises work in this area and gives general results
on attraction and stability. Here, we focus on a case that is more specific, relevant to applications, and allows us to make further progress on the nonautonomous problem~\eqref{eq:odewithrs}. In particular, it allows us to extend results from~\cite{Ashwin2016} to arbitrary dimension and to different cases of R-tipping.
To be more precise, we consider
response of an open system to non-periodic external inputs that
limit to a constant as time tends to positive and possibly negative infinity:
\begin{defn}
\label{defn:ac}
We say that $\Lambda(\tau)$ is {\em bi-asymptotically constant} with future limit $\lambda^+$ and past limit $\lambda^-$ if
\begin{equation}
\label{eq:ac1p}
\lim_{\tau\to +\infty} \Lambda(\tau)= \lambda^+\in\mathbb{R}^d\;\;\mbox{and}\;\; \lim_{\tau\to -\infty} \Lambda(\tau)= \lambda^-\in\mathbb{R}^d.
\end{equation}
We say $\Lambda(\tau)$ is {\em asymptotically constant} if it has a future limit but not necessarily a past limit, or if it has a past limit but not necessarily a future limit.
\end{defn}
\begin{rmk}
\label{rmk:ac}
Such a bi-asymptotically constant $\Lambda(\tau)$ need not be monotone or indeed one-dimensional, which is a generalization of the parameter shifts considered in~\cite{Ashwin2016}. For example, for a scalar $\Lambda(\tau)$, we do not require the supremum or infimum of $\Lambda(\tau)$ to be $\lambda^+$ or $\lambda^-$; see Fig.~\ref{fig:path}(a).
\end{rmk}
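As a concrete illustration of Definition~\ref{defn:ac} and Remark~\ref{rmk:ac}, the short sketch below (with a hypothetical input of our own choosing) verifies numerically that an input can be bi-asymptotically constant yet non-monotone, with a supremum attained in the interior of the path rather than at $\lambda^+$ or $\lambda^-$:

```python
import math

# A sketch (hypothetical example, not from the paper) of a bi-asymptotically
# constant scalar input: it is non-monotone, overshoots its future limit,
# and its supremum is attained in the interior of the path rather than at
# lambda^+ or lambda^-.

def Lam(tau):
    return math.tanh(tau) + 2.0 / math.cosh(tau)   # lambda^- = -1, lambda^+ = 1

taus = [k * 0.01 for k in range(-3000, 3001)]      # sample tau in [-30, 30]
vals = [Lam(t) for t in taus]

lam_minus, lam_plus = Lam(-30.0), Lam(30.0)
print(lam_minus, lam_plus, max(vals))
# the limits are (numerically) -1 and +1, yet the input exceeds 2 near tau = 0,
# so the supremum is neither lambda^+ nor lambda^-
```
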
Such inputs are used widely in different areas of applied science as mathematical models of finite-time disturbances, saturated growth processes and decay phenomena. Furthermore, they are a natural choice for defining and analysing R-tipping rigorously: they allow us to identify possible asymptotic states of the system when the disturbance is gone, and discuss changes in the asymptotic state for different rates $r$ of the input.
The main simplification is that the nonautonomous problem~(\ref{eq:odewithrs}) becomes {\em asymptotically autonomous}
in the terminology of~\cite{Markus1956,Aulbach2006,KloedenRasmussen2011}:
$$
f(x,\Lambda(\tau)) \to f(x,\lambda^\pm)\;\;\mbox{as}\;\; \tau\to \pm\infty.
$$
For the case of bi-asymptotically constant $\Lambda(\tau)$ we can define the autonomous {\em future limit system}
\begin{equation}
\label{eq:odea+}
\dot{x}=f(x,\lambda^+),
\end{equation}
and the autonomous {\em past limit system}
\begin{equation}
\label{eq:odea-}
\dot{x}=f(x,\lambda^-),
\end{equation}
which are special cases of the autonomous frozen system~\eqref{eq:odea}.
One of the main contributions of this work is to use autonomous dynamics and compact invariant sets (in particular equilibria and invariant manifolds) of the limit systems~\eqref{eq:odea+} and~\eqref{eq:odea-}
to analyse nonautonomous R-tipping instabilities in system~(\ref{eq:odewithr}) or~(\ref{eq:odewithrs}).
While related questions have been investigated in the past~\cite{Markus1956,Thieme1994,Holmes_Stuart1992,Castillo1994,Robinson1996}, a particular novelty of our approach is that we relate trajectories of the nonautonomous system~\eqref{eq:odewithrs} and compact invariant sets of the autonomous limit systems~\eqref{eq:odea+} and~\eqref{eq:odea-} to one
autonomous compactified system.
This can be achieved by applying the compactification technique that was developed in~\cite{Wieczorek2019compact} for system~\eqref{eq:ode} with arbitrary decay of external inputs $\Lambda(t)$. The technique is reviewed in Sec.~\ref{sec:compact}
from the viewpoint of R-tipping in system~\eqref{eq:odewithrs} and exponentially decaying external inputs $\Lambda(\tau)$.
\subsection{Solutions and Trajectories of the Parametrised Nonautonomous System
}
\label{sec:notation}
Throughout the paper, dependence of solutions and trajectories of the nonautonomous system~\eqref{eq:odewithrs} on $r$ is indicated by superscript $[r]$.
For example, we write
$$
x^{[r]}(\tau,x_0,\tau_0)\in\mathbb{R}^n,
$$
to denote a solution\,\footnote{This is the flow $x(\tau)=\varphi(\tau,\tau_0,x_0)$
written as a process \cite{KloedenRasmussen2011} with the $r$ dependence explicitly shown.
Given a solution $x^{[r]}(\tau,x_0,\tau_0)$ to system~\eqref{eq:odewithrs},
one can easily obtain the corresponding solution to system~\eqref{eq:odewithr}
by setting $t=\tau/r$ and $t_0=\tau_0/r$. However, it is important to note that,
for different $r>0$, a fixed initial state $(x_0,\tau_0)$ in
system~\eqref{eq:odewithrs} corresponds to a fixed value of the external
input $\Lambda(rt_0)$, but different initial states $(x_0,t_0)=(x_0,\tau_0/r)$
in system~\eqref{eq:odewithr}.
}
to
system~\eqref{eq:odewithrs}
at time $\tau$ started from $x_0$ at initial time $\tau_0$ for a fixed rate $r$.
We also write
$$
\mbox{trj}^{[r]}(x_0,\tau_0) = \left\{
x^{[r]}(\tau,x_0,\tau_0)\,:\, \tau \ge \tau_0
\right\} \subset \mathbb{R}^n,
$$
to denote the corresponding trajectory from $(x_0,\tau_0)$.
For the nonautonomous system~\eqref{eq:odewithrs} with bi-asymptotically constant inputs $\Lambda(\tau)$,
if $e^-$ is a sink for the autonomous past limit system~\eqref{eq:odea-} and $x^{[r]}(\tau,x_0,\tau_0)\to e^-$ as $\tau\to -\infty$, we write this solution as
\begin{equation}
x^{[r]}(\tau,e^-)\in \mathbb{R}^n.\nonumber
\end{equation}
We also write the corresponding trajectory as
$$
\mbox{trj}^{[r]}(e^-)
= \left\{
x^{[r]}(\tau,e^-)\,:\, \tau \in \mathbb{R}
\right\} \subset \mathbb{R}^n.
$$
If the sink $e^-$ is hyperbolic
then one can show~\cite{Ashwin2016,Alkhayuon2018} that $x^{[r]}(\tau,e^-)$ is unique and can be understood as a {\em local pullback attractor} for the nonautonomous system~\eqref{eq:odewithrs}.
We sometimes simply write
\begin{equation}
x^{[r]}(\tau)\in \mathbb{R}^n,\nonumber
\end{equation}
to mean either $x^{[r]}(\tau,x_0,\tau_0)$ or $x^{[r]}(\tau,e^-)$,
and
$$
\mbox{trj}^{[r]} \subset \mathbb{R}^n,
$$
to mean either $\mbox{trj}^{[r]}(x_0,\tau_0) $ or $\mbox{trj}^{[r]}(e^-)$,
depending on the context.
Note that solutions $x^{[r]}(\tau)$ and trajectories $\mbox{trj}^{[r]}$ started
from the same initial state $(x_0,\tau_0)$, or limiting to the same sink $e^-$, will typically vary nontrivially with $r>0$.
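The local pullback attractor solution $x^{[r]}(\tau,e^-)$ can be approximated numerically by starting on the past-limit sink $e^-$ at an increasingly early initial time $\tau_0$. The sketch below (with a hypothetical scalar system of our own choosing, not the paper's general $f$) shows that the state at a fixed time $\tau$ becomes insensitive to $\tau_0$, as expected for pullback convergence:

```python
import math

# Approximating the local pullback attractor solution x^{[r]}(tau, e^-):
# start on the past-limit sink e^- at an ever earlier initial time tau_0 and
# integrate forward; the state at a fixed tau converges as tau_0 -> -infinity.
# The example system f(x, lam) = (x + lam)^2 - 1 with a tanh input is a
# hypothetical illustrative choice.

r = 0.2

def Lam(tau):
    return 1.5 * (math.tanh(tau) + 1.0)    # past limit lambda^- = 0, so e^- = -1

def rhs(x, tau):
    return ((x + Lam(tau))**2 - 1.0) / r

def pullback(tau0, tau_end=0.0, dtau=1e-3):
    """RK4 from x = e^- = -1 at tau0 up to tau_end."""
    x, tau = -1.0, tau0
    n = int(round((tau_end - tau0) / dtau))
    for _ in range(n):
        k1 = rhs(x, tau)
        k2 = rhs(x + 0.5*dtau*k1, tau + 0.5*dtau)
        k3 = rhs(x + 0.5*dtau*k2, tau + 0.5*dtau)
        k4 = rhs(x + dtau*k3, tau + dtau)
        x += (dtau/6.0)*(k1 + 2*k2 + 2*k3 + k4)
        tau += dtau
    return x

a, b = pullback(-10.0), pullback(-20.0)
print(a, b)    # pushing tau_0 further back barely changes x(0): pullback convergence
```
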
\subsection{Parameter Paths}
To give easily testable criteria for R-tipping, it is convenient to work with a parameter path that is traced out by the external input $\Lambda(\tau)$ in the parameter space $\mathbb{R}^d$. We write $\overline{S}$ to denote the closure\,\footnote{The smallest closed subset of $\mathbb{R}^d$ containing $S$.} of $S$, and define:
\begin{defn}
\label{def:path}
A {\em parameter path} is a compact subset of the input parameter space $\mathbb{R}^d$ that is the closure of the image of a $C^1$-smooth function from $\mathbb{R}$ to $\mathbb{R}^d$.
\begin{itemize}
\item[(a)]
A given parameter path is denoted by $P$.
\item[(b)]
{\em A parameter path traced out by a given external input} $\Lambda(\tau)$ is denoted by
\begin{equation}
\label{eq:PL}
P_\Lambda=\overline{\left\{\Lambda(\tau)\,:\,\tau\in\mathbb{R} \right\}}\subset\mathbb{R}^d.
\end{equation}
\item[(c)]
{\em A parameter path traced out by a given external input} $\Lambda(\tau)$ on a given time interval $I=(\tau_-,\tau_+)$, where $\tau_\pm$ may be $\pm\infty$, is denoted by
\begin{equation}
\label{eq:PLI}
P_{\Lambda,I}=\overline{\left\{\Lambda(\tau)\,:\,\tau\in I \right\}}\subseteq P_\Lambda.
\end{equation}
\end{itemize}
\end{defn}
\begin{rmk}
\label{rmk:traceout}
Note that
$P$ can be traced out by (infinitely) many different external inputs;
$P_{\Lambda,I}$ may be traced out by a given external input $\Lambda$ also on time intervals other than $I$;
$P_\Lambda$ and $P_{\Lambda,I}$ are independent of the rate $r > 0$ of the external input $\Lambda$; and $P_{\Lambda,\mathbb{R}} = P_\Lambda$.
\end{rmk}
Figure~\ref{fig:path} shows examples of (a) $P_\Lambda$
and $P_{\Lambda,I}$ in a one-dimensional parameter space, and (b) examples of $P$ in a two-dimensional
parameter space~\cite{OKeeffe2019,Alkhayuon2020}. An external input $\Lambda(\tau)$ may traverse the
parameter path $P_\Lambda$ over time in a
complicated manner, for example by moving back and forth along the
path repeatedly, and with a varying speed
$\Vert \Lambda'(\tau)\Vert$,
as shown in Fig.~\ref{fig:path}(a).
Moreover, the future limit
$\lambda^+$ and, if it exists, the past limit
$\lambda^-$ of $\Lambda(\tau)$ need not lie on the
boundary of $P_\Lambda$; see also Remark~\ref{rmk:ac}.
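The point that a parameter path records only where the input goes, and not how fast, can be checked numerically. The following is a minimal sketch with hypothetical inputs (not taken from this paper): two scalar inputs with the same range, one traversed at roughly constant speed and one whose speed $\Vert \Lambda'(\tau)\Vert$ repeatedly drops to zero, trace out the same path $P_\Lambda$, approximated here by sampling the image and taking its endpoints.

```python
import math

def Lam1(tau):
    """Monotone input traversing [0, 3] once at roughly constant speed."""
    return 1.5 * (1.0 + math.tanh(tau))

def Lam2(tau):
    """Same range, but traversed with repeatedly stalling speed:
    d/dtau (tau + sin tau) = 1 + cos(tau) vanishes at tau = pi, 3*pi, ..."""
    return 1.5 * (1.0 + math.tanh(tau + math.sin(tau)))

def path(Lam, T=20.0, n=2001):
    """Approximate the one-dimensional parameter path P_Lambda (closure of
    the image of Lam) by sampling on [-T, T]; return its endpoints."""
    vals = [Lam(-T + 2.0 * T * k / (n - 1)) for k in range(n)]
    return min(vals), max(vals)

p1 = path(Lam1)   # approximately the interval [0, 3]
p2 = path(Lam2)   # the same interval: same path, different traversal
```

Both inputs give the interval $[0,3]$ up to floating-point saturation of $\tanh$, consistent with Remark~\ref{rmk:traceout}.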
\section{Tracking and Failure to Track Moving Sinks}
\label{sec:NonautonInstab}
In this section we explore the response of the nonautonomous system~(\ref{eq:odewithr}) or~(\ref{eq:odewithrs})
to external inputs $\Lambda$.
First, we introduce the intuitive concept of a moving sink: a smooth family of instantaneous positions of a hyperbolic sink for the autonomous frozen system~\eqref{eq:odea} that does not depend on the rate parameter $r>0$ when viewed on the external input time scale $\tau$. Then, we discuss the relation between the moving sink and rate-dependent solutions $x^{[r]}(\tau)$ to system~(\ref{eq:odewithrs})
for different but fixed $r>0$. A similar setting was used previously~\cite{Wieczorek2011,Ashwin2012,Perryman2015,Ashwin2016,Alkhayuon2018,OKeeffe2019,Longo2021}
to understand the dynamical behaviour of~\eqref{eq:odewithrs} in terms of:
\begin{itemize}
\item[$\bullet$]
Tracking of a moving sink by $x^{[r]}(\tau)$ for sufficiently small but non-zero rates $r$.
\item[$\bullet$]
Failure to track a moving sink via a nonautonomous R-tipping instability that can appear at higher rates $r = r_c$. This includes potential multiple transitions between tracking and tipping as $r$ is increased~\cite{OKeeffe2019,Longo2021}.
\end{itemize}
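This dichotomy can be illustrated numerically. The sketch below uses a hypothetical scalar prototype (not this paper's own numerics): $dx/dt = (x+\Lambda(rt))^2-1$ with $\Lambda(\tau)=1.5\,(1+\tanh\tau)$, whose frozen system has a moving sink $e(\Lambda)=-\Lambda-1$ and a moving saddle $-\Lambda+1$. A solution started near the past-limit sink $e^-=-1$ end-point tracks for a small rate but tips (escapes past the moving saddle) for a large rate.

```python
import math

def simulate(r, dt=1e-3, blowup=50.0):
    """Integrate dx/dt = (x + Lambda(r*t))^2 - 1 with classical RK4.
    Returns the final state, or math.inf if the solution blows up (tips)."""
    Lam = lambda tau: 1.5 * (1.0 + math.tanh(tau))   # hypothetical input, 0 -> 3
    f = lambda t, x: (x + Lam(r * t)) ** 2 - 1.0
    t = -6.0 / r                # start where Lambda is essentially constant
    t_end = 6.0 / r + 8.0       # extra tail so tipped solutions can escape
    x = -1.0                    # past-limit sink e^- = -1
    while t < t_end:
        k1 = f(t, x)
        k2 = f(t + 0.5 * dt, x + 0.5 * dt * k1)
        k3 = f(t + 0.5 * dt, x + 0.5 * dt * k2)
        k4 = f(t + dt, x + dt * k3)
        x += dt * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0
        t += dt
        if x > blowup:
            return math.inf     # crossed the moving threshold: R-tipping
    return x

x_slow = simulate(0.2)   # end-point tracks the moving sink, ends near e^+ = -4
x_fast = simulate(5.0)   # loses end-point tracking: irreversible R-tipping
```

For $r=0.2$ the maximal drift of the input stays below the attraction rate of the sink and the solution settles near $e^+=-\lambda^+-1=-4$; for $r=5$ it crosses the moving threshold and diverges.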
\subsection{Moving Sinks}
\label{sec:movingequ}
Guided by the intuition from Fig.~\ref{fig:BR}(b), we consider a linearly stable equilibrium (a hyperbolic sink) of the autonomous frozen
system~(\ref{eq:odea})
that varies $C^1$-smoothly with $\lambda$. In other words, we consider hyperbolic sinks that continue and do not bifurcate along a parameter path.
We will be interested in how the position of such an equilibrium changes over time for a given external input $\lambda=\Lambda(\tau)$.
\begin{defn}
\label{def:ms}
Suppose the autonomous frozen system~(\ref{eq:odea}) has an equilibrium $e(\lambda)$
for some connected set of values of $\lambda$.
Consider an external input $\Lambda(\tau)$ that traces out a parameter path $P_{\Lambda,I}$ on a time interval $I=(\tau_-,\tau_+)\subseteq\mathbb{R}$, where $\tau_\pm$ can be $\pm\infty$.
Then,
\begin{itemize}
\item [(a)]
We say $e\left(\Lambda(\tau)\right)$ is a {\em moving sink on $I$} if $e(\lambda)$ is a hyperbolic sink that varies $C^1$-smoothly with
$\lambda\in P_{\Lambda,I}$.
\item [(b)]
If $\Lambda(\tau)$ is asymptotically constant to $\lambda^+$ and $e\left(\Lambda(\tau)\right)$ is a moving sink on $I = (\tau_-,+\infty)$, we define the {\em future limit} $e^{+}$
of a moving sink
$$
e^{+} \coloneqq \lim_{\tau\rightarrow +\infty} e(\Lambda(\tau))= e(\lambda^{+}),
$$
which is a hyperbolic sink of the autonomous future limit system~\eqref{eq:odea+}.\\
If $\Lambda(\tau)$ is asymptotically constant to $\lambda^-$ and $e\left(\Lambda(\tau)\right)$ is a moving sink on $I = (-\infty,\tau_+)$, we define the {\em past limit} $e^{-}$ of a moving sink
$$
e^{-} \coloneqq \lim_{\tau\rightarrow -\infty} e(\Lambda(\tau))= e(\lambda^{-}),
$$
which is a hyperbolic sink of the autonomous past limit system~\eqref{eq:odea-}.
\end{itemize}
\end{defn}
A moving equilibrium on a time interval $I$ is
an equilibrium of the autonomous frozen system~(\ref{eq:odea})
parametrised by time $\tau\in I$ for a given input $\lambda=\Lambda(\tau)$. It is sometimes
called a {\em quasistatic equilibrium} or an {\em instantaneous equilibrium}.
We often
focus on the special case $I=\mathbb{R}$, namely where moving equilibria continue and do not bifurcate along the whole parameter path $P_\Lambda$.
Note that the moving equilibrium $e\left(\Lambda(\tau)\right)$ depends on $f$ and on the shape of the external input $\Lambda$, but does not depend on the rate parameter $r>0$ (though, when viewed on the external input timescale $\tau$, its linearisation eigenvalues do; see Eq.~\eqref{eq:odears}). Moving equilibria are not solutions to the nonautonomous system~(\ref{eq:odewithrs}). However, they can approximate rate-dependent solutions $x^{[r]}(\tau)$ to~(\ref{eq:odewithrs}) when $r$ is sufficiently small, as we see in Section~\ref{sec:trackingcriteria}.
\subsection{Tracking Moving Sinks}
\label{sec:trackingcriteria}
We will be interested in how a rate-dependent solution $x^{[r]}(\tau)$ of~(\ref{eq:odewithrs}) changes over time relative to
a moving sink $e(\Lambda(\tau))$ for a given external input $\Lambda(\tau)$ and different rates $r>0$.
As noted in~\cite{Ashwin2016,Alkhayuon2018}, there are several ways to
understand tracking of a moving sink, depending on whether we need closeness at all points in time, or just in the future limit $\tau\to +\infty$. The
following definition formalises this.
\begin{defn}
\label{defn:Track}
Consider a nonautonomous system~(\ref{eq:odewithrs}) with an external input $\Lambda(\tau)$. Suppose there is a moving sink $e(\Lambda(\tau))$ on $I=(\tau_-,\tau_+)$, where $\tau_\pm$ may be $\pm\infty$. For any fixed $\delta>0$ and $r>0$:
\begin{itemize}
\item[(a)]
We say $x^{[r]}(\tau)$ {\em $\delta$-close tracks $e(\Lambda(\tau))$ on $I$} if
\begin{equation}
\label{eq:deltatrack}
\Vert x^{[r]}(\tau) - e(\Lambda(\tau))\Vert < \delta\;\;\mbox{for all}\;\;\tau\in I.
\end{equation}
%
\item[(b)] Suppose in addition that $\Lambda(\tau)$ is asymptotically constant to $\lambda^+$, $e(\Lambda(\tau))$ is a moving sink on $I=(\tau_-,+\infty)$, and recall that $e(\Lambda(\tau))$ limits to $e^{+}$. Then, we say $x^{[r]}(\tau)$ {\em end-point tracks $e(\Lambda(\tau))$ on $I$} if $x^{[r]}(\tau)$ exists for all $\tau\in I$ and
\begin{equation}
\label{eq:eptrack}
x^{[r]}(\tau) \to e^+\;\;\mbox{as}\;\; \tau\to +\infty.
\end{equation}
\end{itemize}
\end{defn}
\begin{rmk}
\label{rmk:ept}
We define $\delta$-close tracking for $x^{[r]}(\tau)$ on any time interval $I=(\tau_-,\tau_+)$, and end-point tracking for $x^{[r]}(\tau)$ on a (semi)infinite time interval $I=(\tau_-,+\infty)$, where $\tau_\pm$ may be $\pm\infty$. This is a generalisation of the $\delta$-close and end-point tracking definitions used in~\cite{Ashwin2016}, which restrict to tracking by pullback attractors $x^{[r]}(\tau) = x^{[r]}(e^-,\tau)$ on $I=\mathbb{R}$.
\end{rmk}
Theorem~\ref{thm:tracking} gives criteria ensuring that a sufficiently small rate
parameter $r$ (i.e.\ slow enough motion of hyperbolic sinks on the system
time scale $t$) gives $\delta$-close and end-point tracking for any $\delta>0$.
Tracking of more complicated attractors\,\footnote{See Appendix~\ref{sec:A4} for the definition of an attractor.} such as limit cycles~\cite{Alkhayuon2018,Alkhayuon2020}, tori and chaotic attractors~\cite{Kaszas2019,Alkhayuon2020weak} is discussed in Section~\ref{sec:Conclusions} and left for future study.
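The small-rate tracking behaviour can be checked quantitatively in a hypothetical scalar prototype (an illustration under stated assumptions, not this paper's own computation): for $dx/dt=(x+\Lambda(rt))^2-1$ with $\Lambda(\tau)=1.5\,(1+\tanh\tau)$ and moving sink $e(\Lambda)=-\Lambda-1$, the sup-distance between the solution and the moving sink shrinks as $r$ decreases, so $\delta$-close tracking holds with smaller $\delta$ for smaller $r$.

```python
import math

def tracking_error(r, dt=2e-3):
    """Sup-distance of the RK4 solution of dx/dt = (x + Lambda(r*t))^2 - 1
    to the moving sink e(Lambda(r*t)) = -Lambda(r*t) - 1."""
    Lam = lambda tau: 1.5 * (1.0 + math.tanh(tau))
    f = lambda t, x: (x + Lam(r * t)) ** 2 - 1.0
    t, x = -6.0 / r, -1.0       # start on the past-limit sink e^- = -1
    err = 0.0
    while t < 6.0 / r:
        k1 = f(t, x)
        k2 = f(t + 0.5 * dt, x + 0.5 * dt * k1)
        k3 = f(t + 0.5 * dt, x + 0.5 * dt * k2)
        k4 = f(t + dt, x + dt * k3)
        x += dt * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0
        t += dt
        err = max(err, abs(x - (-Lam(r * t) - 1.0)))  # distance to moving sink
    return err

err_big = tracking_error(0.2)     # larger rate: larger lag behind the sink
err_small = tracking_error(0.05)  # smaller rate: delta-close for smaller delta
```

The lag behind the moving sink scales roughly linearly with $r$ in this example, consistent with the quasistatic estimate $r\,\Vert\Lambda'\Vert$ divided by the attraction rate of the sink.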
\subsection{Failure to Track: Nonautonomous R-tipping Instability}
\label{sec:Rtipintro}
We use the notion of R-tipping to describe two types of genuine nonautonomous instabilities that occur through loss of tracking in the following manner:
\begin{itemize}
\item[$\bullet$]
{\em Loss of end-point tracking:}
A rate-dependent solution $x^{[r]}(\tau)$ fails to end-point track a moving sink $e\left(\Lambda(\tau)\right)$ at some
rate $r=r_c$~\cite{Scheffer2008,Ashwin2016,OKeeffe2019,Kuehn2021}. This is a {\em qualitative change}, and it can thus be classified as a genuine nonautonomous {\em bifurcation}.
\item[$\bullet$]
{\em Loss of $\delta$-close tracking:}
For a given $\delta>0$, a rate-dependent solution $x^{[r]}(\tau)$ end-point tracks a moving sink $e\left(\Lambda(\tau)\right)$ for all $r>0$, but fails to $\delta$-close track $e\left(\Lambda(\tau)\right)$ at some
rate $r=r_c(\delta)$ that depends on the choice of $\delta$~\cite{Wieczorek2011,Mitry2013,Xie2019,Vanselow2019}. This is a {\em quantitative} change, but cases of interest may be classified as {\em finite-time bifurcations}~\cite{Rasmussen2010}.
\end{itemize}
This paper gives a rigorous characterisation of R-tipping that occurs via qualitative ``loss of end-point tracking'' in Definition~\ref{defn:Rtip}, and leaves quantitative ``loss of $\delta$-close tracking'' for future research.
For example, suppose that $x^{[r]}(\tau)\rightarrow e^+$ for $0<r<r_c$ but
$$
x^{[r_c]}(\tau)\not\rightarrow e^+\;\;\mbox{as}\;\; \tau\to +\infty.
$$
If $x^{[r_c]}(\tau)$ remains bounded then the system undergoes {\em R-tipping} according to our definition. If such an $r_c$ is isolated, we call it a {\em critical rate}.
One aim of this paper is to identify and rigorously define possible cases of such R-tipping.
In doing so, we note that
the critical-rate solution $x^{[r_c]}(\tau)$ will typically
converge to a compact invariant set\,\footnote{Notions of convergence to invariant sets $\eta$ are discussed in Appendix~\ref{sec:A1}.} $\eta^+$:
$$
x^{[r_c]}(\tau)\to \eta^+\;\;\mbox{as}\;\; \tau\to +\infty,
$$
that is not an attractor, not necessarily an equilibrium, and lies on the basin boundary of a sink $e^+$ in the future limit system~\eqref{eq:odea+}~\cite{Xie2019,OKeeffe2019}. If this set is hyperbolic with one unstable direction and an orientable stable manifold\,\footnote{Note that $\eta^+$ is contained in its stable manifold, that is $\eta^+\subseteq W^s(\eta^+)$.} then we call such an $\eta^+$ a {\em regular R-tipping edge state}. This in turn suggests two other important notions: {\em a regular R-tipping threshold} which contains initial states that converge to
the regular R-tipping edge state in the nonautonomous system~\eqref{eq:odewithrs}, and {\em edge tails} that continue away from the regular R-tipping edge state in the future limit system~\eqref{eq:odea+}.
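These notions can be explored numerically in the same hypothetical scalar prototype $dx/dt=(x+\Lambda(rt))^2-1$, $\Lambda(\tau)=1.5\,(1+\tanh\tau)$ (an illustrative sketch, not this paper's own example): the moving regular threshold is the moving saddle $\theta(\Lambda)=\{-\Lambda+1\}$ and the regular R-tipping edge state of the future limit system is $\eta^+=-\lambda^+ +1=-2$. Assuming a single tracking/tipping transition in $r$, as in this prototype, bisection on $r$ between an observed tracking rate and an observed tipping rate approximates a critical rate, and tipped solutions are observed to cross the moving threshold.

```python
import math

def run(r, dt=2e-3, blowup=50.0):
    """Integrate dx/dt = (x + Lambda(r*t))^2 - 1 with RK4.
    Returns (tipped, min_dist), where min_dist is the smallest sampled
    distance of the solution to the moving threshold -Lambda(r*t) + 1."""
    Lam = lambda tau: 1.5 * (1.0 + math.tanh(tau))
    f = lambda t, x: (x + Lam(r * t)) ** 2 - 1.0
    t, x = -6.0 / r, -1.0          # start on the past-limit sink e^- = -1
    t_end = 6.0 / r + 8.0          # tail lets tipped solutions escape
    min_dist = math.inf
    while t < t_end:
        k1 = f(t, x)
        k2 = f(t + 0.5 * dt, x + 0.5 * dt * k1)
        k3 = f(t + 0.5 * dt, x + 0.5 * dt * k2)
        k4 = f(t + dt, x + dt * k3)
        x += dt * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0
        t += dt
        min_dist = min(min_dist, abs(x - (-Lam(r * t) + 1.0)))
        if x > blowup:
            return True, min_dist  # crossed the threshold: R-tipping
    return False, min_dist

# Bisection on r between a tracking rate and a tipping rate.
lo, hi = 0.2, 5.0                  # run(lo) tracks, run(hi) tips
while hi - lo > 1e-4:
    mid = 0.5 * (lo + hi)
    if run(mid)[0]:
        hi = mid
    else:
        lo = mid
r_c = 0.5 * (lo + hi)              # approximate critical rate
```

By construction the bracket ends with a tipping rate `hi` and a tracking rate `lo`; the tipped run passes through the moving threshold, and near-critical subcritical runs linger close to it before settling back to the sink.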
\section{Thresholds and Edge States for Autonomous Frozen Systems}
\label{sec:ThresholdsEdgeFrozen}
We consider thresholds in phase space as invariant sets that have two different sides and, in some sense, give qualitatively different behaviour for trajectories started on different sides of the threshold.
As we introduce different types of threshold, in order to guide the reader, we give a short summary below:
\begin{itemize}
\item[$\bullet$]
For the autonomous frozen system~(\ref{eq:odea}),
we distinguish in Sec.~\ref{sec:rthr} between {\em regular thresholds} and {\em irregular thresholds}.
Regular thresholds are used in Sec.~\ref{sec:thr_inst} to define the notion of {\em threshold instability} on parameter paths.
We use this notion in Sec.~\ref{sec:Rtippingcriteria} to formulate testable (sufficient) criteria for R-tipping to occur in the nonautonomous system~\eqref{eq:odewithrs}
for some external input $\Lambda(\tau)$ and a moving sink on $I=\mathbb{R}$; see Theorem~\ref{thm:Rtip}(a).
\item[$\bullet$]
Given a regular threshold that varies $C^1$-smoothly with $\lambda$,
and a time-varying external input $\Lambda(\tau)$, we define in Sec.~\ref{sec:thr} {\em moving regular thresholds} as regular thresholds of the autonomous frozen system~(\ref{eq:odea}) parametrised by time $\tau$ for $\lambda=\Lambda(\tau)$.
%
Moving regular thresholds are used in Sec.~\ref{sec:thr_inst} to define the notion of {\em forward threshold instability} for a given input $\Lambda(\tau)$.
We use this notion in Sec.~\ref{sec:Rtippingcriteria} to characterise external inputs $\Lambda(\tau)$ that give R-tipping
from moving sinks on $I=\mathbb{R}$ in the nonautonomous system~\eqref{eq:odewithrs}; see Theorem~\ref{thm:Rtip}(b).
\item[$\bullet$]
For the nonautonomous system~\eqref{eq:odewithrs},
we define in Sec.~\ref{sec:Rtip_critrates} {\em regular R-tipping thresholds} that separate solutions $x^{[r]}(\tau)$ of~\eqref{eq:odewithrs} that R-tip from those that do not. Regular R-tipping thresholds are related to stable invariant manifolds of regular edge states from infinity that become hyperbolic saddles in a compactified system in Sec.~\ref{sec:compactdyns}. We use this relation in Sec.~\ref{sec:computing} to give (necessary and sufficient) criteria for the occurrence of reversible and irreversible R-tipping in terms of (heteroclinic) connections in the compactified system; see Proposition~\ref{prop:rtip_compact}.
\end{itemize}
Definition~\ref{defn:multi_excit} uses regular thresholds to generalise, and in a certain sense unify, the concepts of ``excitability thresholds'' for excitable systems~\cite{FitzHugh1955,Izhikevich07,Krauskopf2003,Wieczorek2011,Wechselberger2013} and ``multi-basin boundaries'' for multistable systems~\cite{Pisarchik2014}.
\subsection{Regular Thresholds, Regular Edge States and Excitability}
\label{sec:rthr}
We restrict to thresholds that are
repelling orientable embedded manifolds\,\footnote{We recall some notions used in the discussion of differentiable manifolds in Appendix~\ref{sec:A2}.}, which we call {\em regular thresholds}. Thresholds that are repelling
but are not orientable or not embedded manifolds, such as the fractal basin boundaries discussed in~\cite{McDonald1985,Aguirreetal2009,Kaszas2019},
we term {\em irregular thresholds} and leave for future study. More precisely:
\begin{defn}
\label{defn:rthr}
In the $n$-dimensional autonomous frozen system~\eqref{eq:odea}, we define a {\em regular threshold} $\theta\subset \mathbb{R}^n$ as a codimension-one embedded orientable forward-invariant manifold
that is normally repelling.
\end{defn}
\begin{rmk}
\label{rmk:nuqthr}
Any codimension-one forward-invariant subset of a regular threshold is clearly also a regular threshold. In this sense regular thresholds are not unique.
\end{rmk}
\begin{rmk}
\label{rmk:qthr}
There is a close relationship between a regular threshold
and the basin boundary of an attractor:
\begin{itemize}
\item[(a)]
A regular threshold $\theta$ will
typically be contained in the basin boundary of one or more attractors.
For example, Fig.~\ref{fig:rthr} depicts regular thresholds $\theta$ that
lie in the basin boundary of (a) one attractor, (b) two attractors or (c) three attractors.
\item[(b)]
Not all points on the basin boundary need to be in regular thresholds.
In Fig.~\ref{fig:rthr}(a), a regular threshold can be chosen to be any codimension-one connected subset of the stable manifold of $\eta$ containing $\eta$; in that case, there are parts of the stable manifold that belong to the basin boundary of $e_1$ but not to the threshold. If a regular threshold is chosen to be the entire stable manifold of $\eta$, as shown (in blue) in the figure, its boundary is still a source: this source is part of the basin boundary of $e_1$ but not part of the threshold.
\end{itemize}
\end{rmk}
\begin{figure}[t]
\begin{center}
\includegraphics[width=16.cm]{./figs/fig03}
\end{center}
\vspace{-3mm}
\caption{
Examples of a (blue) regular threshold $\theta$ and the associated regular edge state $\eta$ in a
two-dimensional autonomous frozen system~\eqref{eq:odea}.
(a) A regular edge state $\eta$ is a hyperbolic saddle equilibrium. The
associated regular threshold $\theta$ is any codimension-one forward-invariant
subset of the stable manifold of $\eta$, so that
$\eta\subset \theta$. This $\theta$ lies in the basin
boundary of one attractor, and the two sides of $\theta$ are in
the basin of the same attractor $e_1$.
(b) A regular edge state $\eta$ that is a repelling hyperbolic limit cycle.
The associated regular threshold is the same limit cycle,
so that $\eta=\theta$. This $\theta$ lies on the basin boundary of two attractors, and each side of $\theta$ lies in the
basin of a different attractor, that is $e_1$ and $e_2$. (c) A regular threshold $\theta$ that lies on the basin boundary of three attractors.
}
\label{fig:rthr}
\end{figure}
The assumption of forward invariance means that a regular threshold may contain several invariant sets that are attractors for the flow restricted to the threshold. Here, we consider (normally) hyperbolic invariant sets $\eta$ that are attracting within $\theta$ and minimal, meaning that they do not contain any proper hyperbolic invariant subsets that are attracting within the same $\theta$, together with their stable invariant manifolds\,\footnote{Note that
the stable invariant manifold of $\eta$ contains $\eta$.}, denoted $W^s(\eta)$. Furthermore, we restrict to regular thresholds that contain just one $\eta$. The assumption of normal repulsion means that there will be a transversely unstable direction. Using notation inspired by work on
fluid instabilities~\cite{Skufkaetal2006,Schneideretal07,Schneideretal10}, we define a regular edge state as follows:
\begin{defn}
\label{defn:edgestate}
In the $n$-dimensional autonomous frozen system~\eqref{eq:odea}, we define a {\em regular edge state} $\eta$ of the regular threshold $\theta$
as a minimal compact normally hyperbolic invariant set $\eta\subseteq \theta$ such that
$\theta\subseteq W^s(\eta)$.
\end{defn}
\begin{rmk}
\label{rmk:edgestate}
A compact set $\eta$ is normally hyperbolic \cite{Fenichel1971,Fenichel1979}
if the rates $l_\parallel\in\mathbb{R}$ of any exponential contraction or expansion on $\eta$ are slower than the corresponding exponential rates
$l_\perp\neq 0$ transverse to $\eta$.
A regular edge state $\eta$ may have dimension $(n-1)$,
in which case $\eta = \theta = W^s(\eta)$ and $\eta$ is normally repelling. More generally,
a regular edge state $\eta$ will be of lower dimension than
the threshold $\theta$, in which case $\eta\subset\theta\subseteq W^s(\eta)$ will be of saddle type owing to attraction within $\theta$ and normal repulsion of $\theta$.
\end{rmk}
Examples of a regular edge state in
two dimensions are: a hyperbolic saddle equilibrium with a one-dimensional stable manifold as depicted in Fig.~\ref{fig:rthr}(a) and (c), or a hyperbolic repelling limit cycle as depicted in Fig.~\ref{fig:rthr}(b).
An example of a regular edge state in higher dimensions is a quasiperiodic torus.
The assumption of normal hyperbolicity implies that it is possible to
extend regular edge states
and associated regular thresholds of the frozen system~\eqref{eq:odea}
to nearby $\lambda$; see~\cite[Theorems 3 and 4]{Fenichel1971}.
We state this rigorously for regular edge states that are hyperbolic
equilibria with precisely one unstable dimension:
\begin{proposition}
\label{prop:edgecontinues}
Suppose that $\eta^*$ is a hyperbolic equilibrium with one unstable direction in the autonomous frozen system \eqref{eq:odea} with $\lambda=\lambda^*$. Then:
\begin{itemize}
\item[(a)]
The equilibrium $\eta^*$ is a regular edge state. There exists a regular threshold $\theta^*$ that is a forward invariant subset of the stable manifold of $\eta^*$.
\item[(b)]
There is an open neighbourhood $Q$ of $\lambda^*$ in $\mathbb{R}^d$ such that $\eta^*$ can be continued to a family of regular edge states
$\eta(\lambda)$, and $\theta^*$ can be continued to a family of regular thresholds $\theta(\lambda)$ containing $\eta(\lambda)$. These families vary $C^1$-smoothly with $\lambda\in Q$.
\item[(c)]
There is a continuous parametrization of $\theta(\lambda)$ by $x\in \theta^*$ and $\lambda\in Q$. This parameterization can be chosen so that the normal vector $\nu(x,\lambda)$ to $\theta(\lambda)$ varies $C^1$-smoothly with $\lambda\in Q$.
\end{itemize}
\end{proposition}
\begin{proof}
Note that $\eta^*$ is an unstable node in $\mathbb{R}$, in which case $W^s(\eta^*)=\eta^*$, or a saddle in $\mathbb{R}^{n\ge 2}$, in which case $\eta^*\in W^s(\eta^*)$.\\
(a) We choose $\theta^*$ to be a local stable manifold of $\eta^*$, denoted $W^s_{loc}(\eta^*)$, as given by the stable manifold theorem; see e.g.~\cite[Thm 2.1.2]{Kuehn2015}.
This means that $\theta^*$ is topologically a codimension-one ball that is forward invariant, contractible to $\eta^*$, and one can choose a normal vector (an orientation)
corresponding to
the unstable eigenvector of $\eta^*$, which varies smoothly with
$x\in\theta^*$.
Thus, $\eta^*$ is a regular edge state and $\theta^*$ is a regular threshold.\\
\noindent
(b) Applying results of Fenichel on persistence of normally hyperbolic invariant manifolds that are compact and embedded (see~\cite[Thm 3]{Fenichel1971} or~\cite[Thm 2.3.5]{Kuehn2015}), there is an open neighbourhood $Q$ of $\lambda^*$ such that $\eta^*$ can be continued to a family of hyperbolic equilibria $\eta(\lambda)$ that varies $C^1$-smoothly with $\lambda\in Q$.
Similarly, applying results on persistence of stable/unstable manifolds of normally hyperbolic invariant manifolds
(see e.g.~\cite[Thm 4]{Fenichel1971} or~\cite[Thm 2.3.6]{Kuehn2015}), $W^s_{loc}(\eta^*)$ can be $C^1$-smoothly continued to a family of stable manifolds of $\eta(\lambda)$, and each of these manifolds contains a regular threshold $\theta(\lambda)$ that varies $C^1$-smoothly with $\lambda\in Q$.\\
\noindent
(c) The continuous parameterization by $(x,\lambda)$ is a consequence of applying results of~\cite{Fenichel1974} and \cite{Fenichel1977} or~\cite[Thm 2.3.12]{Kuehn2015}.
Orientability implies that there are two choices
of a normal vector $\pm\nu(x)$ that vary smoothly with $x\in\theta^*$ and $\lambda\in Q$, and thus a well-defined notion of the two sides (e.g. inside/outside) of a regular threshold.
\end{proof}
To relate our concept of regular thresholds to the
existing literature~\cite{Krauskopf2003,Pisarchik2014}, we distinguish between the notions of ``excitability threshold'' for excitable
systems and ``multi-basin boundary'' for multistable systems as
different kinds of thresholds.
\begin{defn}
\label{defn:multi_excit}
Let $\theta(\lambda)$ be a regular threshold
for the autonomous frozen system (\ref{eq:odea}).
\begin{itemize}
\item[(a)]
If $\theta(\lambda)$ is contained in the basin boundary of two or more
attractors, we say that the autonomous frozen system~\eqref{eq:odea} is {\em multistable} with {\em multi-basin
boundary $\theta(\lambda)$}.
\item[(b)]
If $\theta(\lambda)$ is contained in the basin boundary of a
single attractor, we say that the autonomous frozen system~\eqref{eq:odea} is {\em excitable} with {\em excitability threshold $\theta(\lambda)$}.
\end{itemize}
\end{defn}
\subsection{Moving Regular Thresholds and Moving Regular Edge States}
\label{sec:thr}
It follows from Definition~\ref{defn:edgestate} that, if there is a regular edge state $\eta(\lambda)$, then there is a regular threshold $\theta(\lambda)$ containing $\eta(\lambda)$.
For a given external input $\Lambda(\tau)$, we use the notion of a parameter path $P_{\Lambda,I}$ from Definition~\ref{def:path} and define
moving regular edge states and moving regular thresholds analogously to moving sinks, namely
as follows:
\begin{defn}
\label{defn:mth}
Suppose the autonomous frozen system~(\ref{eq:odea}) has a codimension-one
forward-invariant manifold $\theta(\lambda)$ and a compact invariant set $\eta(\lambda)\subseteq\theta(\lambda)$ for some connected set of values of $\lambda$.
Consider an external input $\Lambda(\tau)$ that traces out a parameter path $P_{\Lambda,I}$ on a time interval $I=(\tau_-,\tau_+)\subseteq\mathbb{R}$, where $\tau_\pm$ can be $\pm\infty$. Then,
\begin{itemize}
\item[(a)]
We say $\theta\left(\Lambda(\tau)\right)$ is a {\em moving regular threshold} on $I$ if $\theta(\lambda)$ is a regular threshold that varies $C^1$-smoothly with $\lambda\in P_{\Lambda,I}$.
\item[(b)]
We say $\eta\left(\Lambda(\tau)\right)$ is a {\em moving regular edge state} on $I$
if $\eta(\lambda)$ is a regular edge state that varies $C^1$-smoothly with $\lambda\in P_{\Lambda,I}$.
\item[(c)]
Suppose that $\Lambda(\tau)$ is asymptotically constant to $\lambda^+$, and $\eta\left(\Lambda(\tau)\right)$ is a
moving regular edge state on $I=(\tau_-,+\infty)$. Then, we define the {\em future limit} $\eta^+$ of the
moving regular edge state by
$$
\eta^+=\eta(\lambda^+).
$$
\end{itemize}
\end{defn}
\begin{rmk}
The assumption in (c) implies that $\eta^+$ is a regular edge state of a regular threshold
$$
\theta^+=\theta(\lambda^+),
$$
for the autonomous future limit system~\eqref{eq:odea+}.
\end{rmk}
A moving regular threshold (edge state) is
a regular threshold (edge state) of the autonomous frozen system~(\ref{eq:odea})
parametrised by time $\tau$ for a given input $\lambda=\Lambda(\tau)$.
Similar to moving sinks, moving regular thresholds (edge states)
depend on $f$ and on the shape of the external input $\Lambda$, but do not depend on the rate parameter $r>0$ when viewed on the external input timescale $\tau$.
Regular edge states $\eta^+$ of the future limit system~\eqref{eq:odea+} are particularly important in our work. This is because R-tipping thresholds are anchored at infinity by $\eta^+$.
\subsection{Threshold Instability of a Moving Sink}
\label{sec:thr_inst}
A theory of {\em irreversible R-tipping} for the one-dimensional (scalar) nonautonomous system~\eqref{eq:odewithr} or~\eqref{eq:odewithrs}, presented in~\cite{Ashwin2016},
is based on moving sinks on $I=\mathbb{R}$ and the intuitive concept of ``forward basin
stability''\,\footnote{Not to be confused with the `static' notion of ``basin stability''
introduced in~\cite{Menck2013} as a measure related to the volume of the
basin of attraction.
}
of a moving sink; see~\cite[Def.3.3]{Ashwin2016}.
To be more specific, a moving sink $e(\Lambda(\tau))$ is ``forward basin stable'' if, at each point in time, $e(\Lambda(\tau))$ is contained in the basin of attraction of every one of its future positions\,\footnote{Equivalently,
a moving sink $e(\Lambda(\tau))$ is ``forward basin stable'' if, at each point in time, the basin of attraction of $e(\Lambda(\tau))$ contains all the previous positions of $e(\Lambda(\tau))$.}. This concept was used in~\cite[Th.3.2]{Ashwin2016} to derive easily testable criteria for the absence or presence of irreversible R-tipping for a moving sink on $I=\mathbb{R}$ in one dimension: forward basin stability in the autonomous frozen
system~\eqref{eq:odea}
with $x\in\mathbb{R}$ excludes R-tipping in the nonautonomous system~\eqref{eq:odewithr} or~\eqref{eq:odewithrs}, whereas lack of forward basin stability (plus some additional assumptions) in system~\eqref{eq:odea}
with $x\in\mathbb{R}$ guarantees R-tipping in system~\eqref{eq:odewithr} or~\eqref{eq:odewithrs}.
The key point in the derivation of these criteria is that, in $\mathbb{R}$, trajectories started within the basin of attraction approach the attractor monotonically in time. Another point is that,
in $\mathbb{R}$, a typical basin boundary is a boundary of two attractors unless trajectories on one side of the boundary diverge to infinity. Thus, one typically expects irreversible R-tipping in $\mathbb{R}$.
However, a theory that works in arbitrary dimension and captures both irreversible and reversible R-tipping requires a more
sophisticated understanding. First, the concept of {\em forward basin
stability} from~\cite{Ashwin2016} is no longer useful. If trajectories
started within the basin of attraction can approach the attractor non-monotonically in time, then forward basin stability in system~(\ref{eq:odea}) with $x\in\mathbb{R}^{n\,\ge\,2}$ no longer excludes R-tipping in system~(\ref{eq:odewithr}) or~\eqref{eq:odewithrs}. This is evidenced by examples of irreversible R-tipping for a moving sink on $I=\mathbb{R}$ occurring in spite of forward basin stability already in two dimensions~\cite{Kiers2018,Xie2019}. Second, in two or more dimensions, there can be {\em reversible R-tipping} that occurs by crossing through basin boundaries of a single attractor; see Fig.~\ref{fig:rthr}(a).
The concept of {\em basin instability} from~\cite{OKeeffe2019} addresses only part of the problem: it gives easily testable criteria for the occurrence of irreversible R-tipping for a moving sink on $I=\mathbb{R}$ in multidimensional systems, but is not useful for reversible R-tipping.
To properly address the problem of different cases of R-tipping in arbitrary dimension, we introduce the more general concepts of {\em threshold instability} and {\em forward threshold instability}.
In short, {\em threshold instability} of a hyperbolic sink on a parameter path describes the position of the sink at some points on the path relative to the position of the threshold at different points on the path. To be specific, we introduce two notions. First, we quantify the ``relative position of a sink and a threshold'' using the signed distance\,\footnote{The signed distance $d_s(x,S)$ is discussed in Appendix~\ref{sec:A3}.} between the point $e(\lambda_1)$ and the set $\theta(\lambda_2)$:
\begin{equation}
\label{eq:sdlambda}
d_s(e(\lambda_1),\theta(\lambda_2)).
\end{equation}
Second, we describe $e(\lambda)$ and $\theta(\lambda)$ at ``different points on the path"
by constructing the subset
$$
P^2 := P\times P \subset \mathbb{R}^{2d},
$$
and viewing pairs $(\lambda_1,\lambda_2)$ of different
input parameters as elements of this subset.
We can then define threshold instability, which generalises the notion of basin instability from~\cite{OKeeffe2019}.
\begin{defn}
\label{def:thun}
Suppose the autonomous frozen system~(\ref{eq:odea}) has a hyperbolic sink $e(\lambda)$. Consider a parameter path $P$ such that $e(\lambda)$ varies $C^1$-smoothly with $\lambda\in P$.
\begin{itemize}
\item [(a)]
We say $e(\lambda)$ is {\em threshold unstable on $P$} if
there exists a $C^1$-smooth family of regular thresholds $\theta(\lambda)$ and a $(\lambda_{a},\lambda_{b})\in P^2$ such that
$$
e(\lambda_{a})\in \theta(\lambda_b) \quad\mbox{i.e.} \quad d_s\left(e(\lambda_a),\theta(\lambda_b)\right) = 0,
$$
and
$d_s(e(\lambda_1),\theta(\lambda_2))$ takes both signs
in any neighbourhood
of $(\lambda_a,\lambda_b)$ in $P^2$.
\item[(b)]
We say $e(\lambda)$ is {\em basin unstable on $P$} if it is threshold unstable, and the threshold $\theta(\lambda_b)$ is contained in a multi-basin boundary.
\end{itemize}
\end{defn}
\begin{rmk}
Note that, if $e(\lambda)$ is threshold unstable, then there is a
crossing of the threshold $\theta(\lambda_2)$ from one side to another by the sink $e(\lambda_1)$,
i.e. a passage through zero with a change in the sign of $d_s(e(\lambda_1),\theta(\lambda_2))$.
In practice, this could happen by setting $\lambda_{1(2)}=\lambda_{a(b)}$ and varying $\lambda_{2(1)}$ in a neighbourhood of $\lambda_{b(a)}$ in $P$, or by varying $\lambda_{1}$ and $\lambda_2$ near $\lambda_{a}$ and $\lambda_b$, respectively,
both in $P$.
\end{rmk}
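For a one-dimensional illustration (a sketch with a hypothetical frozen system, not taken from this paper), consider $dx/dt=(x+\lambda)^2-1$ with sink $e(\lambda)=-\lambda-1$ and regular threshold $\theta(\lambda)=\{-\lambda+1\}$, so that the signed distance reduces to $d_s(e(\lambda_1),\theta(\lambda_2))=\lambda_2-\lambda_1-2$. Sampling $d_s$ on $P^2$ and looking for a sign change then gives a simple numerical check of threshold instability on a path.

```python
def e(lam):
    """Hyperbolic sink of the frozen system dx/dt = (x + lambda)^2 - 1."""
    return -lam - 1.0

def theta(lam):
    """Regular threshold: in one dimension, the saddle point."""
    return -lam + 1.0

def d_s(l1, l2):
    """Signed distance d_s(e(lambda_1), theta(lambda_2)); in one dimension
    this is just the difference of positions, here l2 - l1 - 2."""
    return e(l1) - theta(l2)

def threshold_unstable(path, n=301):
    """Sample d_s on P^2 for P = [path[0], path[1]]: if both signs occur,
    then by continuity d_s has a zero with both signs nearby, i.e. the sink
    is threshold unstable on P."""
    a, b = path
    grid = [a + (b - a) * k / (n - 1) for k in range(n)]
    signs = {(-1 if d_s(l1, l2) < 0.0 else 1) for l1 in grid for l2 in grid}
    return signs == {-1, 1}
```

On the path $P=[0,3]$ the sink is threshold unstable (e.g.\ $\lambda_a=0$, $\lambda_b=2$ gives $d_s=0$), whereas on the shorter path $P=[0,1.5]$ it is not, since $\lambda_2-\lambda_1-2<0$ throughout.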
Threshold instability on a parameter path $P$ in the autonomous frozen system~\eqref{eq:odea}
indicates that R-tipping is possible in the nonautonomous system~\eqref{eq:odewithr} or~\eqref{eq:odewithrs}
given a suitable external input that traces out $P$. To understand which external inputs are ``suitable", we consider $\Lambda(\tau)$ for which the moving sink $e(\Lambda(\tau))$
crosses some future position of a moving regular threshold $\theta(\Lambda(\tau))$ from one side to the other. To this end, we introduce a notation for the signed distance at different points in time:
\begin{equation}
\label{eq:sdtau}
\Delta_{\Lambda}(\tau_1,\tau_2)=d_s(e(\Lambda(\tau_1)),\theta(\Lambda(\tau_2))),
\end{equation}
consider pairs $(\tau_1,\tau_2)$ of different points in time as elements of $\mathbb{R}^2$,
and define
\pagebreak
\begin{defn}
\label{def:fthun}
Consider some external input $\Lambda(\tau)$ and a moving sink $e(\Lambda(\tau))$.
\begin{itemize}
\item [(a)]
We say $e(\Lambda(\tau))$ is {\em forward threshold unstable for $\Lambda(\tau)$} if
there exist a moving regular threshold $\theta(\Lambda(\tau))$ and finite $\tau_{a} < \tau_b$ such that
\begin{equation}
\label{eq:ftunst}
e(\Lambda(\tau_a)) \in \theta(\Lambda(\tau_b))\quad\mbox{i.e.} \quad \Delta_{\Lambda}(\tau_a,\tau_b) = 0,
\end{equation}
and $\Delta_{\Lambda}(\tau_1,\tau_2)$ takes both signs in any neighbourhood
of $(\tau_a,\tau_b)$ in $\mathbb{R}^2$.
\item[(b)]
We say this $e(\Lambda(\tau))$ is {\em forward basin unstable for $\Lambda(\tau)$} if it is forward threshold unstable, and the threshold $\theta(\Lambda(\tau_b))$ is contained in a multi-basin boundary.
\end{itemize}
\end{defn}
\begin{rmk}
Note that, if a moving sink $e(\Lambda(\tau))$ is forward threshold unstable, then, in some sense, there is a
crossing of the moving threshold by
$e(\Lambda(\tau))$ from one side to the other.
\end{rmk}
Although {\em forward threshold instability} is a property of the
autonomous frozen system~(\ref{eq:odea})
and some external input
$\Lambda(\tau)$, {\em threshold instability} is a property of the
frozen system~(\ref{eq:odea}) on a given parameter path $P$. Threshold
instability on a path $P$ guarantees existence of some input $\Lambda(\tau)$ that
traces out this path, meaning that $P_\Lambda=P$, and gives forward threshold
instability. However, there may be other
inputs $\tilde{\Lambda}(\tau) \ne \Lambda(\tau)$ that trace out the same
path, meaning that $P_{\tilde{\Lambda}} = P_\Lambda = P$, but do not give forward threshold instability.
\begin{figure}[t]
\begin{center}
\includegraphics[width=15cm]{./figs/fig04}
\end{center}
\vspace{-3mm}
\caption{
(a) Families (branches) of hyperbolic sinks $e(\lambda)$ and regular thresholds $\theta(\lambda)$ for a
one-dimensional (scalar) autonomous frozen system~(\ref{eq:odea}), together with a given parameter path $P$. The pair $\lambda_a,\lambda_b \in P$ indicates threshold instability of $e(\lambda)$ on $P$.
(b) For a monotone increasing $\Lambda(\tau)$ that traces out the path $P$, the moving sink $e(\Lambda(\tau))$ is
forward threshold unstable.
(c) For a monotone decreasing $\tilde{\Lambda}(\tau)$ that traces out the same path $P$, the moving sink $e(\tilde{\Lambda}(\tau))$ is
forward threshold stable.
}
\label{fig:ftu}
\end{figure}
This is illustrated in Fig.~\ref{fig:ftu}
for a one-dimensional (scalar) autonomous frozen
system~(\ref{eq:odea}) with a family of hyperbolic sinks
$e(\lambda)$ and a family of regular thresholds $\theta(\lambda)$,
both of which vary $C^1$-smoothly with $\lambda$ on the given
parameter path $P\subset \mathbb{R}$; see Fig.~\ref{fig:ftu}(a).
The sink $e(\lambda)$ is {\em threshold unstable} on $P$ because for any $\lambda_a\in P$ smaller than
$\lambda^*$ there exists a $\lambda_b\in P$ such that
$e(\lambda_{a})\in\theta(\lambda_b)$, and $e(\lambda_a)$ can lie on
different sides of $\theta(\lambda)$ for $\lambda$ arbitrarily
close to $\lambda_b$.
Now, consider monotone bi-asymptotically constant external inputs
that trace out the path $P$. For any monotone increasing $\Lambda(\tau)$,
the moving sink $e(\Lambda(\tau))$ is forward threshold unstable because
it crosses through future positions of the moving threshold
$\theta(\Lambda(\tau))$. Specifically, for any $\tau_a\in(-\infty,\tau^*)$
there exists a $\tau_b > \tau_a$ such that $e(\Lambda(\tau_a))\in\theta(\Lambda(\tau_b))$, and
$\Delta_{\Lambda}(\tau_1,\tau_2)$ changes sign near $(\tau_a,\tau_b)$; see
Fig.~\ref{fig:ftu}(b).
However, this is not the case for any monotone decreasing
$\tilde{\Lambda}(\tau)$ because the moving sink $e(\tilde{\Lambda}(\tau))$ never crosses
any future position of the moving threshold $\theta(\tilde{\Lambda}(\tau))$; see Fig.~\ref{fig:ftu}(c).
In other words, there are no finite $\tau_a < \tau_b$ that can satisfy
$e(\tilde{\Lambda}(\tau_a))\in\theta(\tilde{\Lambda}(\tau_b))$: we say the moving sink $e(\tilde{\Lambda}(\tau))$ is forward threshold stable.
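The contrast between the increasing and decreasing inputs can be reproduced numerically. The sketch below uses the assumed scalar example $\dot{x}=(x+\lambda)^2-1$ with $e(\lambda)=-\lambda-1$ and $\theta(\lambda)=1-\lambda$ (an illustration, not the system of this section), so that $\Delta_\Lambda(\tau_1,\tau_2)=\Lambda(\tau_2)-\Lambda(\tau_1)-2$, and evaluates $\Delta_\Lambda$ over pairs $\tau_1<\tau_2$ for a monotone increasing and a monotone decreasing input tracing out the same path:

```python
import numpy as np

# Assumed scalar example: sink e(lam) = -lam - 1, threshold theta(lam) = 1 - lam,
# hence Delta(t1, t2) = e(Lam(t1)) - theta(Lam(t2)) = Lam(t2) - Lam(t1) - 2.
def Delta(Lam, t1, t2):
    return (-Lam(t1) - 1.0) - (1.0 - Lam(t2))

Lam_up = lambda t: 1.5 * (1.0 + np.tanh(t))  # increasing input tracing out P = [0, 3]
Lam_dn = lambda t: 1.5 * (1.0 - np.tanh(t))  # decreasing input tracing out the same P

t = np.linspace(-6.0, 6.0, 601)
T1, T2 = np.meshgrid(t, t, indexing="ij")
future = T1 < T2                             # only pairs with tau_1 < tau_2

up = Delta(Lam_up, T1, T2)[future]
dn = Delta(Lam_dn, T1, T2)[future]
print(bool(up.min() < 0.0 < up.max()))  # True: Delta vanishes and changes sign
print(bool(dn.max() < 0.0))             # True: Delta never reaches zero
```

The increasing input gives a zero of $\Delta_\Lambda$ with both signs nearby (forward threshold instability), while for the decreasing input $\Delta_\Lambda<0$ for all $\tau_1<\tau_2$ (forward threshold stability).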
\section{Nonautonomous R-tipping Definitions}
\label{sec:Rtippingdefs}
We now define a {\em nonautonomous R-tipping bifurcation via loss of end-point tracking} in nonautonomous
system~\eqref{eq:odewithrs} with asymptotically constant input $\Lambda$,
in a precise yet general context.
In addition to reversible, irreversible and degenerate cases of R-tipping, we also define critical rates for R-tipping, regular R-tipping edge states and their edge tails, and time-dependent regular R-tipping thresholds.
\subsection{R-tipping and Critical Rates}
\label{sec:Rtip_critrates}
We start by defining R-tipping and critical rates in terms of limiting behaviour
of trajectories of the nonautonomous system (\ref{eq:odewithrs}); note that this
generalises the definition of R-tipping in~\cite{Ashwin2016}.
\begin{defn}
\label{defn:Rtip}
Consider a nonautonomous system~(\ref{eq:odewithrs})
with an external input $\Lambda(\tau)$ that is asymptotically constant to $\lambda^+$.
Suppose the future limit system~\eqref{eq:odea+} has a compact invariant set
$\eta^+$ that is not an attractor\,\footnote{Note that $\eta^+$ is not necessarily
a regular edge state from Definition~\ref{defn:mth}(c); it may be a saddle with more than one unstable direction, or even a repeller of codimension-two or higher, and/or not necessarily hyperbolic.}.
\begin{enumerate}
\item[(a)]
We say the nonautonomous system~(\ref{eq:odewithrs})
undergoes {\em R-tipping from $(x_0,\tau_0)$} if there are
$r_1,r_2>0$
such that
$$
x^{[r_1]}(\tau,x_0,\tau_0) \to \eta^+
\;\;\mbox{and}\;\;
x^{[r_2]}(\tau,x_0,\tau_0) \not\to \eta^+
\;\;\mbox{as}\;\;\tau\to+\infty.
$$
\item[(b)]
Suppose in addition that $\Lambda(\tau)$ is bi-asymptotically constant and the past limit system~(\ref{eq:odea-}) has a hyperbolic sink $e^{-}$. We say the nonautonomous system~(\ref{eq:odewithrs}) undergoes {\em R-tipping from $e^-$} if there are $r_1,r_2>0$ such that
$$
x^{[r_1]}(\tau,e^-)\to \eta^+
\mbox{ and }\;\;
x^{[r_2]}(\tau,e^-) \not\to \eta^+\;\;\mbox{as}\;\;\tau\to+\infty.
$$
\item[(c)]
If there is an $r_1>0$ and a $\delta > 0$
such that
$$
x^{[r_1]}(\tau)\to \eta^+
\;\;\mbox{and}\;\;
x^{[r]}(\tau)\not\to \eta^+
\;\;\mbox{as}\;\;\tau\to +\infty
\;\;\mbox{for all}\;\;
0 < |r - r_1|< \delta,
$$
then we say $r_1$ is a {\em critical rate} and denote it with $r_c$.
\end{enumerate}
\end{defn}
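Part (a) of the definition can be observed directly in simulations. The following sketch integrates an assumed scalar example $r\,x'=(x+\Lambda(\tau))^2-1$ with increasing input $\Lambda(\tau)=1.5\,(1+\tanh\tau)$ (an illustration only, not the system of the text): a slow rate tracks the moving sink $e(\Lambda(\tau))=-\Lambda(\tau)-1$ towards $e^+=-4$, while a fast rate loses tracking and escapes over the moving threshold.

```python
import numpy as np

# Assumed scalar example (illustration only): r x' = (x + Lam(tau))**2 - 1,
# Lam(tau) = 1.5*(1 + tanh(tau)), past sink e^- = -1, future sink e^+ = -4.
Lam = lambda t: 1.5 * (1.0 + np.tanh(t))

def endstate(r, x0=-1.0, t0=-10.0, t1=10.0, dt=1e-3, cap=10.0):
    """RK4 integration of dx/dtau = f(x, Lam(tau))/r; inf if x escapes past cap."""
    f = lambda x, t: ((x + Lam(t)) ** 2 - 1.0) / r
    x, t = x0, t0
    while t < t1:
        k1 = f(x, t)
        k2 = f(x + 0.5 * dt * k1, t + 0.5 * dt)
        k3 = f(x + 0.5 * dt * k2, t + 0.5 * dt)
        k4 = f(x + dt * k3, t + dt)
        x += (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
        t += dt
        if x > cap:  # crossed the moving threshold and escaped: R-tipping
            return np.inf
    return x

x_slow = endstate(0.05)  # slow rate: end-point tracking, ends near e^+ = -4
x_fast = endstate(3.0)   # fast rate: loss of tracking, escapes to infinity
print(x_slow, x_fast)
```

Since one rate tracks and the other does not, this pair $(r_1,r_2)=(0.05,3)$ witnesses R-tipping from $e^-$ in the sense of part (b).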
\begin{rmk}
\label{rmk:Rtip}
For typical systems~\eqref{eq:odewithrs} with typical choices of initial condition and the rate parameter $r$, a solution $x^{[r]}(\tau)$
converges to an attractor $a^+$ for the future limit system~\eqref{eq:odea+}
or diverges to infinity\,\footnote{In other words, in typical situations, $x^{[r]}(\tau)\not\to\eta^+$.}.
A consequence of this robustness to small variations in $r$ is that if the future limit system~\eqref{eq:odea+} has
disjoint compact invariant sets $a_2^+$ and $a_3^+$
that are attractors, and there are rates $0<r_2<r_3$ such that
$$
x^{[r_2]}(\tau)\rightarrow a_2^+~\mbox{ and }~x^{[r_3]}(\tau)\rightarrow a_3^+~\mbox{ as }~\tau\rightarrow +\infty,
$$
then the future limit system~\eqref{eq:odea+} must have at least one compact invariant set $\eta^+$ on the basin boundary of $a_2^+$ and $a_3^+$ that is not an attractor, and there must be at least one rate $r_1\in[r_2,r_3]$ such that there is R-tipping in the sense of Definition~\ref{defn:Rtip}, namely
$$
x^{[r_1]}(\tau)\to\eta^+~\mbox{ as }~\tau\rightarrow +\infty.
$$
To see this, suppose the future limit system~\eqref{eq:odea+} has a compact invariant set $a^+$ that is an attractor, consider a solution
$$
x^{[r]}(\tau) \to a^+\;\;\mbox{as}\;\;\tau\to+\infty\;\;\mbox{for some}\;\;r=r_1>0,
$$
and note that this solution can be extended to a family of solutions that is continuous in all three of $\tau$, the initial condition and $r$; see for example~\cite[Theorem 3.3]{Robinson1999}. Thus, the same limiting behaviour occurs for an open set of $x$ containing the initial condition and an open set of $r$ containing $r_1$.
\end{rmk}
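The intermediate-value argument of the remark also suggests a numerical procedure for a critical rate: bracket $r_c$ between a rate that tracks and a rate that tips, then bisect. The sketch below does this for an assumed scalar example $r\,x'=(x+\Lambda(\tau))^2-1$ with $\Lambda(\tau)=1.5\,(1+\tanh\tau)$ (illustrative only; it further assumes, as holds for this example, that tipping is monotone in $r$):

```python
import numpy as np

# Assumed scalar example (illustration only): r x' = (x + Lam(tau))**2 - 1.
Lam = lambda t: 1.5 * (1.0 + np.tanh(t))

def tips(r, x0=-1.0, t0=-10.0, t1=10.0, dt=1e-3, cap=10.0):
    """RK4 integration of dx/dtau = f/r; True if the solution escapes past cap."""
    f = lambda x, t: ((x + Lam(t)) ** 2 - 1.0) / r
    x, t = x0, t0
    while t < t1:
        k1 = f(x, t)
        k2 = f(x + 0.5 * dt * k1, t + 0.5 * dt)
        k3 = f(x + 0.5 * dt * k2, t + 0.5 * dt)
        k4 = f(x + dt * k3, t + dt)
        x += (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
        t += dt
        if x > cap:
            return True
    return False

lo, hi = 0.05, 3.0        # lo tracks, hi tips: a bracket for r_c
for _ in range(10):       # bisection narrows the bracket by a factor 2**10
    mid = 0.5 * (lo + hi)
    lo, hi = (lo, mid) if tips(mid) else (mid, hi)
print(lo, hi)  # final bracket around the numerically estimated critical rate
```

The estimate depends on the finite integration horizon: near-critical solutions linger close to the edge state and may escape only after the horizon ends, so the bracket approximates $r_c$ from above.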
Next, we recognise the significance of $\eta^+$ that are regular edge states from Definition~\ref{defn:mth}(c).
\begin{defn}
Suppose that a nonautonomous system \eqref{eq:odewithrs} undergoes R-tipping as in Definition~\ref{defn:Rtip}, and $\eta^+$ is a regular edge state of the future limit system. Then we say $\eta^+$ is a {\em regular R-tipping edge state}.
\end{defn}
Then, we consider R-tipping thresholds that are anchored at infinity by a regular R-tipping edge state $\eta^+$. These thresholds are regular in the same sense as regular thresholds from Definition~\ref{defn:rthr}.
\begin{defn}
\label{def:rtipthres}
Consider a nonautonomous system~(\ref{eq:odewithrs}) with an
external input $\Lambda(\tau)$ that is asymptotically constant to $\lambda^+$.
Suppose the future limit system~\eqref{eq:odea+} has a regular R-tipping edge state $\eta^{+}$.
We say $\Theta^{[r]}(\tau) \subset\mathbb{R}^n$ is a {\em regular R-tipping threshold} if it is a codimension-one embedded orientable forward-invariant
subset of the stable set of $\eta^+$.
\end{defn}
By ``forward invariant" we mean that it is forward invariant as a nonautonomous set, i.e.
$$
x_0\in\Theta^{[r]}(\tau_0)
\Rightarrow
x^{[r]}(\tau,x_0,\tau_0)\in \Theta^{[r]}(\tau)\;\;\mbox{for all}\;\;\tau>\tau_0.
$$
By ``stable set of $\eta^+$" we mean that
\begin{equation}
x_0\in\Theta^{[r]}(\tau_0)
\Rightarrow x^{[r]}(\tau,x_0,\tau_0)\to \eta^+\;\;\mbox{as}\;\; \tau\to +\infty.\label{eq:etastable}
\end{equation}
\begin{rmk}
Note that:
\begin{itemize}
\item[(a)]
A regular R-tipping threshold $\Theta^{[r]}(\tau)$ is a rate and time dependent subset of $\mathbb{R}^n$.
\item[(b)]
We prove existence of regular R-tipping thresholds $\Theta^{[r]}(\tau)$
in Proposition~\ref{prop:invsete-}(b1), using the compactification technique of~\cite{Wieczorek2019compact}. In particular, we state conditions under
which a $\Theta^{[r]}(\tau)$ exists for all $\tau > \tau_0$ and $r>0$.
\item[(c)]
Any codimension-one forward-invariant subset of a regular R-tipping threshold is clearly also a regular R-tipping threshold. In this sense regular R-tipping thresholds are not unique.
\end{itemize}
\end{rmk}
Figure~\ref{fig:Rtip} shows two examples of R-tipping via loss of end-point tracking from Definition~\ref{defn:Rtip}, due to crossing a regular R-tipping threshold anchored at infinity by a regular R-tipping edge state\,\footnote{In the one-dimensional case, recall that the moving regular threshold and edge state are one and the same.}, for a nonautonomous system~\eqref{eq:odewithrs} on $\mathbb{R}$.
In Figure~\ref{fig:Rtip}(a), $e_1(\Lambda(\tau))$ is a moving sink on $I=\mathbb{R}$, and $e_1(\Lambda(\tau))$ is forward threshold unstable due to $\theta(\Lambda(\tau))$. Such R-tipping is discussed in~\cite{Ashwin2016}, and extended to arbitrary dimension in Section~\ref{sec:Rtippingcriteria}.
In Figure~\ref{fig:Rtip}(b), $e_1(\Lambda(\tau))$ is a moving sink on a semi-infinite interval $I=(-\infty,\tau_+)$, disappears at some finite time via a saddle-node bifurcation $sn_1$, and is forward threshold stable\,\footnote{We say a moving sink $e(\Lambda(\tau))$ on $I$ is forward threshold stable if there are no $\theta(\Lambda(\tau))$ and finite $\tau_a<\tau_b\in I$ that can satisfy condition~\eqref{eq:ftunst}.}. Such R-tipping is not captured by the setting of Section~\ref{sec:Rtippingcriteria}, which is limited to moving sinks on $I=\mathbb{R}$ that are forward threshold unstable.
To overcome this limitation, we show in Section~\ref{sec:computing} that different R-tipping via loss of end-point tracking, including the example in Figure~\ref{fig:Rtip}(b), can be captured in arbitrary dimension by connecting (heteroclinic) orbits in a suitably compactified system. Furthermore, we note that the saddle-node bifurcation $sn_1$ of $e_1(\Lambda(\tau))$ in Figure~\ref{fig:Rtip}(b) gives rise to B-tipping from $e_1^-$ for $r\in(0,r_c)$ according to~\cite[Definition 3.1]{Ashwin2012}.
\begin{figure}[t]
\begin{center}
\includegraphics[width=15cm]{./figs/fig05}
\end{center}
\vspace{-5mm}
\caption{
Two examples of R-tipping via loss of end-point tracking from Definition~\ref{defn:Rtip} for the case of a nonautonomous system~\eqref{eq:odewithrs} with $x\in\mathbb{R}$. At the critical rate $r=r_c$, the trajectory crosses a regular R-tipping threshold and limits to an equilibrium regular R-tipping edge state at infinity. Shown are (grey) moving sinks $e(\Lambda(\tau))$, (light blue) moving regular thresholds $\theta(\Lambda(\tau))$, and trajectories of~\eqref{eq:odewithrs} limiting to a sink $e_1^-$ as $\tau\to-\infty$
for different values of the rate parameter: (green) $r<r_c$, (blue) $r=r_c$, and (red) $r>r_c$.
%
(a) R-tipping from $e_1^-$ via loss of end-point tracking of $e_1(\Lambda(\tau))$, due to crossing the regular R-tipping threshold $\Theta^{[r]}(\tau)$ (not shown) anchored at infinity by the equilibrium regular R-tipping edge state $\eta^+=\theta^+$.
Note that $e_1(\Lambda(\tau))$ is a moving sink on $I=\mathbb{R}$,
that is forward threshold unstable due to $\theta(\Lambda(\tau))$.
%
(b) R-tipping from $e_1^-$ via loss of end-point tracking of $e_3(\Lambda(\tau))$, due to crossing the regular R-tipping threshold $\Theta_2^{[r]}(\tau)$ (not shown) anchored at infinity by the equilibrium regular R-tipping edge state $\eta_2^+=\theta_2^+$.
Note that $e_1(\Lambda(\tau))$ is a moving sink on a semi-infinite interval $I$, disappears at a finite time via (black dot) a saddle-node bifurcation $sn_1$, and is forward threshold stable, which is different from (a) and from the setting used in~\cite{Ashwin2016}. Furthermore, the saddle-node bifurcation of $e_1(\Lambda(\tau))$ gives rise to (green) B-tipping from $e_1^-$ for $r < r_c$~\cite[Definition 3.1]{Ashwin2012}.
}
\label{fig:Rtip}
\end{figure}
\subsection{Edge Tails}
We now focus on $\eta^+$ that are regular R-tipping edge states, and introduce for the first time a notion of {\em edge tails} to rigorously classify different cases of R-tipping that may occur via loss of end-point tracking.
Consider a rate-dependent solution $x^{[r]}(\tau)$ of the nonautonomous system~\eqref{eq:odewithrs}, started from a fixed $(x_0,\tau_0)$ or limiting to a sink $e^-$.
Suppose that end-point tracking of a moving sink $e(\Lambda(\tau))$ by
$x^{[r]}(\tau)$
fails for some $r_c >0$ in the sense that
$$
x^{[r_c]}(\tau) \to \eta^+\;\;\mbox{as}\;\; \tau\to +\infty.
$$
If $\eta^+$ is a regular R-tipping edge state, then the system undergoes R-tipping due to crossing a regular R-tipping threshold $\Theta^{[r]}(\tau)$. If $r_c$ is a critical rate, then
for all $r\neq r_c$ sufficiently close we have
$$
x^{[r]}(\tau)\not\to \eta^+\;\;\mbox{as}\;\; \tau\to +\infty,
$$
and we generically expect that
$x^{[r<r_c]}(\tau)$ and $x^{[r>r_c]}(\tau)$ lie on different sides of the regular R-tipping threshold.
To be more precise about
``lie on different sides of the regular R-tipping threshold'',
we examine the corresponding trajectory\,\footnote{Recall the notation introduced in Section~\ref{sec:notation}.} $\mbox{trj}^{[r]}$
as the rate parameter $r$ approaches its critical value $r_c$ from above ($r\to r_c^+$) and from below ($r\to r_c^-$).
The ensuing limit sets\,\footnote{Here, we define
$$
\lim_{r\to r_c^+} \mbox{\normalfont trj}^{[r]}(x_0,\tau_0) = \bigcap_{r>r_c}\; \overline{\bigcup_{r_c < s < r} \mbox{\normalfont trj}^{[s]}(x_0,\tau_0)}
\;\;\mbox{and}\;\;
\lim_{r\to r_c^-} \mbox{\normalfont trj}^{[r]}(x_0,\tau_0) = \bigcap_{r<r_c}\; \overline{\bigcup_{r <s < r_c} \mbox{\normalfont trj}^{[s]}(x_0,\tau_0)}.
$$}
can typically be decomposed into two components:
\begin{equation}
\lim_{r\to r_c^\pm} \mbox{trj}^{[r]} =
\mbox{trj}^{[r_c]}
\cup x^{[r_c^\pm]}.
\label{eq:trjr}
\end{equation}
The first component, denoted $\mbox{trj}^{[r_c]}$,
is the trajectory of the nonautonomous system~\eqref{eq:odewithrs}
from $x_0$ or $e^-$
to the regular R-tipping edge state $\eta^+$ in $\mathbb{R}^n$, which is common to both limits.
Note that, being a projection of a smooth curve from $\mathbb{R}^n\times\mathbb{R}$ onto $\mathbb{R}^n$,
$\mbox{trj}^{[r_c]}$ may intersect itself and $x^{[r_c^{\pm}]}$.
The second component is either
$x^{[r_c^+]}$ or $x^{[r_c^-]}$. We define these below as the upper and lower {\em edge tails} of the regular R-tipping edge state $\eta^+$. Each edge tail of $\eta^+$ is a (union of) trajectories of the autonomous future limit system~\eqref{eq:odea+} that includes $\eta^+$ and continues away from
$\eta^+$ in $\mathbb{R}^n$.
To be more precise,
\begin{defn}
\label{defn:edgetails}
Consider a nonautonomous system~\eqref{eq:odewithrs}
with an external input $\Lambda(\tau)$ that is asymptotically constant to $\lambda^+$.
Suppose the future limit system~\eqref{eq:odea+} has a regular R-tipping edge state $\eta^+$,
and the nonautonomous system~(\ref{eq:odewithrs}) undergoes
R-tipping for some critical rate $r=r_c>0$
so that
$x^{[r_c]}(\tau) \to \eta^+\;\;\mbox{as}\;\; \tau\to +\infty$.
Then, we define the {\em upper edge tail} of $\eta^+$ to be
\begin{align}
&x^{[r_c^+]}=\bigcap_{T>0, ~\delta>0} \overline{\left\{ x^{[r]}(\tau)\;:\;\tau>T,~r\in(r_c,r_c+\delta)\right\}}\; \subset\mathbb{R}^n,
\end{align}
and the {\em lower edge tail} of $\eta^+$ to be
\begin{align}
&x^{[r_c^-]}=\bigcap_{T>0, ~\delta>0} \overline{\left\{ x^{[r]}(\tau)\;:\;\tau>T,~r\in(r_c-\delta,r_c)\right\}}\; \subset\mathbb{R}^n.
\end{align}
\end{defn}
Edge tails of $\eta^+$ include $\eta^+$ and trajectories that are contained in the unstable manifold of $\eta^+$, denoted $W^u(\eta^+)$. The upper and lower edge tails are typically different as shown in Fig.~\ref{fig:Rtiptypes}.
\begin{rmk}
\label{rmk:edgetail}
For an equilibrium regular R-tipping edge state $\eta^+$, we use the compactification technique of~\cite{Wieczorek2019compact} to:
\begin{itemize}
\item [(a)]
Show that each edge tail contains one branch of $W^u(\eta^+)$ in Proposition~\ref{prop:invsete-}(b).
\item [(b)]
Relate solutions $x^{[r]}(\tau)$ for $r$ on different sides of $r_c$ to the edge tails $x^{[r_c^-]}$ and $x^{[r_c^+]}$ in Proposition~\ref{prop:edgetails}.
\end{itemize}
\end{rmk}
\subsection{Reversible, Irreversible and Degenerate R-tipping}
\label{sec:Rtipcasec}
We use the notion of regular R-tipping edge states and their edge tails to classify R-tipping via loss of end-point tracking in nonautonomous system~\eqref{eq:odewithr} or~\eqref{eq:odewithrs} into the following cases.
\begin{defn}
\label{defn:Rtiptypes}
Consider a nonautonomous system~(\ref{eq:odewithrs})
with an external input $\Lambda(\tau)$ that is asymptotically constant to $\lambda^+$. Suppose the future limit system~\eqref{eq:odea+} has a compact invariant set
$\eta^+$ that is not an attractor, and the nonautonomous system~(\ref{eq:odewithrs}) undergoes R-tipping for some $r_1>0$ so that $x^{[r_1]}(\tau)\to\eta^+$ as $\tau\to +\infty$.
We say this R-tipping is:
\begin{itemize}
\item[(a)]
{\em Non-degenerate} if $r_1 = r_c$ is a critical rate,
$\eta^+$ is a regular R-tipping edge state, the upper and lower edge tails of $\eta^+$ are different: $x^{[r_c^+]}\neq x^{[r_c^-]},$ and each edge tail is a connection from $\eta^+$ to an attractor\,\footnote{See Appendix~\ref{sec:A4} for the definition of an attractor.} for the future limit system~\eqref{eq:odea+}.
Furthermore, we say non-degenerate R-tipping is
\begin{itemize}
\item[$\bullet$]
{\em Reversible}
if each edge tail is a connection from $\eta^+$ to the same attractor.
\item[$\bullet$]
{\em Irreversible} if each edge tail is a connection from $\eta^+$ to a different attractor.
\end{itemize}
\item [(b)]
{\em Degenerate} if it is not non-degenerate.
\end{itemize}
\end{defn}
\begin{figure}[t]
\begin{center}
\includegraphics[width=11.5cm]{./figs/fig06}
\end{center}
\vspace{-5mm}
\caption{
Examples of (a) irreversible, (b) reversible and (c) degenerate R-tipping via loss of end-point tracking from Definition~\ref{defn:Rtiptypes} for a nonautonomous system~\eqref{eq:odewithrs} on $\mathbb{R}^2$.
Shown are
(thicker black curves) trajectories of~\eqref{eq:odewithrs} started from $(x_0,\tau_0)$ for different values of the rate parameter $r=r_c-\delta$, $r=r_c$, and $r=r_c+\delta$,
(blue dot) the equilibrium regular R-tipping edge state $\eta^+$,
the (red) upper $x^{[r_c^+]}$ and (green) lower $x^{[r_c^-]}$ edge tails of $\eta^+$ (note that these contain $\eta^+$),
(light blue) the rate-dependent family $\Theta^{[r]}$ of time-dependent regular R-tipping thresholds $\Theta^{[r]}(\tau)$ defined in~\eqref{eq:Rtipthrfam}, as well as (thinner blue curves) stable
and (thinner black curves) unstable manifolds of $\eta^+$ in the future limit system~\eqref{eq:odea+}.
Note that the projection of $x^{[r_c]}(\tau,x_0,\tau_0)$ onto the $(x_1,x_2)$ phase plane (not shown in the figure) gives the first component $\mbox{trj}^{[r_c]}(x_0,\tau_0)$ in~\eqref{eq:trjr}.
}
\label{fig:Rtiptypes}
\end{figure}
Examples of different cases of R-tipping are depicted in Fig.~\ref{fig:Rtiptypes}. Only (non-degenerate) irreversible
and reversible R-tipping, shown in Fig.~\ref{fig:Rtiptypes}(a)
and (b), respectively, are typical in the sense that they are
generically found at codimension-one in $r$. In other words,
they are generically found at isolated critical rates $r=r_c$
under increasing/decreasing of $r$; see also Remark~\ref{rmk:Rtip}.
Degenerate R-tipping clearly includes many subcases, even if a regular R-tipping edge state is involved.
One example of degenerate R-tipping is depicted in
Fig.~\ref{fig:Rtiptypes}(c), where $\eta^+$ is a regular R-tipping edge state and
the upper and lower edge tails are identical.
Another example of degenerate R-tipping occurs when
at least one edge tail is not a connection from $\eta^+$ to an attractor (e.g. an edge tail may connect $\eta^+$ to a saddle, or diverge from $\eta^+$ to infinity; not shown in Fig.~\ref{fig:Rtiptypes}).
Additional examples of degenerate R-tipping involve $\eta^+$ that is not a regular R-tipping edge state. These include a chaotic saddle $\eta^+$ with an irregular threshold: a codimension-one stable manifold that is not embedded but accumulates on itself; or a repeller $\eta^+$ of codimension-two (e.g. a source in $\mathbb{R}^2$) or higher that does not have any threshold.
A final example of degenerate R-tipping is the case where there is no critical rate $r_c$: $x^{[r]}(\tau)\to\eta^+$ as $\tau\to +\infty$ within an interval of $r$.
In the remainder of the paper, we concentrate on R-tipping due to crossing a regular R-tipping threshold $\Theta^{[r]}(\tau)$ anchored at infinity by an equilibrium regular R-tipping edge state $\eta^+$. R-tipping involving more complicated edge states, thresholds that are not regular, and quasithresholds is discussed in Section~\ref{sec:Conclusions} and left for future study.
\section{Compactification}
\label{sec:compact}
The main obstacle to the analysis of genuine nonautonomous R-tipping instabilities in nonautonomous system~(\ref{eq:odewithr})
or~(\ref{eq:odewithrs}) is the absence of compact invariant sets such as equilibria, limit cycles or tori.
We overcome this obstacle by working with asymptotically constant inputs from Definition~\ref{defn:ac}, $\Lambda(\tau)\to\lambda^+$ as $\tau\to +\infty$.
Then, the nonautonomous system~(\ref{eq:odewithr})
or~(\ref{eq:odewithrs}) becomes {\em asymptotically autonomous}, and we can define the autonomous {\em future limit system} (\ref{eq:odea+}). More importantly, if
there is a moving sink $e(\Lambda(\tau))$,
and $e(\Lambda(\tau))\to e^+$ as $\tau\to +\infty$,
the future limit system has a hyperbolic sink $e^+$. If there is a moving regular edge state $\eta(\Lambda(\tau))$, and $\eta(\Lambda(\tau))\to\eta^+$ as $\tau\to+\infty$,
the future limit system has a regular R-tipping edge state $\eta^+$.
If additionally $\Lambda(\tau)\to\lambda^-$ as $\tau\to -\infty$,
we can also define the autonomous {\em past limit system}~\eqref{eq:odea-}. If $e(\Lambda(\tau))\to e^-$ as $\tau\to -\infty$, the past limit system has a hyperbolic sink $e^-$.
Our main idea is to simplify analysis of genuine nonautonomous R-tipping instabilities in system~(\ref{eq:odewithr}) or~(\ref{eq:odewithrs}) by
exploiting
the compact invariant sets of interest, such as $e^\pm$ and $\eta^+$, of the autonomous limit systems~\eqref{eq:odea+} and~\eqref{eq:odea-}.
For example, we would like to transform an R-tipping from $e^-$ problem into a heteroclinic $e^-$-to-$\eta^+$ orbit problem.
This
requires a suitable compactification of the original nonautonomous system.
In the usual approach~\cite{KloedenRasmussen2011}, the nonautonomous system~(\ref{eq:odewithrs})
is augmented with unbounded $\tau\in\mathbb{R}$ as an additional
dependent variable\,\footnote{By abuse of notation, we use $\tau$ to denote both the independent variable and the additional dependent variable.}. This gives the autonomous {\em augmented system}
\begin{align}
\label{eq:odeext0}
\left.
\begin{array}{rl}
x' &= f(x,\Lambda(\tau))/r\\
\tau'&= 1
\end{array}\right\},
\end{align}
defined on $\mathbb{R}^n\times\mathbb{R}$.
While
the regular R-tipping threshold can nicely be represented in $\mathbb{R}^n\times\mathbb{R}$ as a rate-dependent family of time-dependent subsets of $\mathbb{R}^n$ (see Fig.~\ref{fig:Rtiptypes}):
\begin{equation}
\label{eq:Rtipthrfam}
\Theta^{[r]} := \left\{\Theta^{[r]}(\tau),\tau\right\}_{\tau\in\mathbb{R}}
\subset\mathbb{R}^n\times\mathbb{R},
\end{equation}
the augmented flow in~\eqref{eq:odeext0} does not contain
any compact invariant sets, because these appear only in the limits
as $\tau$ tends to positive or negative infinity.
To address this issue, we
\begin{itemize}
\item
Augment system~(\ref{eq:odewithrs}) with bounded $s\in(-1,1)$ as an additional dependent variable.
\item
Use the compactification technique developed in~\cite{Wieczorek2019compact} to extend the augmented phase space. Specifically, we glue in the limit systems from time infinity ($s=\pm 1$) that carry
compact invariant sets such as $e^\pm$ and $\eta^+$.
\end{itemize}
In short, we require that the additional dependent variable remains within a compact interval.
Reference~\cite{Wieczorek2019compact} proves existence of a smooth compactification for
nonautonomous system~\eqref{eq:odewithr} or~\eqref{eq:odewithrs} for a wide class of asymptotically constant (possibly non-monotone) inputs\,\footnote{$\Lambda(\tau)$ is denoted $\Gamma(t)$ in~\cite{Wieczorek2019compact}.} $\Lambda(\tau)$, ranging from super-exponential to sub-logarithmic asymptotic decay (with oscillation). Additionally, it outlines a procedure for constructing suitable examples of time transformation for a given asymptotic decay of $\Lambda(\tau)$.
For simplicity, we assume here that $\Lambda(\tau)$ decays exponentially, and reformulate the main results from~\cite{Wieczorek2019compact} to account for the presence of the rate parameter $r$. To be precise,
\begin{defn}
\label{defn:expbac}
We say $\Lambda(\tau)$ is {\em exponentially bi-asymptotically constant} if there is a
{\em decay coefficient} $\rho>0$ such that
\begin{equation}
\label{eq:eas}
\lim_{\tau\to \pm\infty}\frac{\Lambda'(\tau)}{e^{\mp\rho \tau}}\;\;\mbox{exists}.
\end{equation}
We say $\Lambda(\tau)$ is {\em exponentially asymptotically constant} if there is a
$\rho>0$ such that one of the limits above exists.
\end{defn}
\begin{rmk}
We note that for any bi-asymptotically constant $\Lambda(\tau)$ it is possible to define the slowest rate of exponential approach to a constant as $\tau\to\pm\infty$ by
$$
\tilde{\rho}_{\pm} = \lim_{ \tau\rightarrow \pm \infty} -\frac{1}{|\tau|}\ln \left( \sup_{ u>\tau} \|\Lambda'( u)\|\right).
$$
One can show that $\Lambda$ is exponentially bi-asymptotically constant
if and only if both $\tilde{\rho}_-$ and $\tilde{\rho}_+$ are positive (possibly $+\infty$).
Then, a finite decay coefficient $\rho$ in~\eqref{eq:eas} can always be chosen such that $0<\rho<\min(\tilde{\rho}_-,\tilde{\rho}_+)$, and in some special cases\,\footnote{For example, when $\Lambda(\tau) \sim C\, e^{\mp\tilde{\rho}\tau}\,\tau^{n\le0}$ as $\tau\to\pm\infty$.}
such that $0<\rho\le\min(\tilde{\rho}_-,\tilde{\rho}_+)$.
\end{rmk}
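As a concrete check of condition~\eqref{eq:eas}, take $\Lambda(\tau)=\tanh(\tau)$ (an assumed example): then $\Lambda'(\tau)=\operatorname{sech}^2(\tau)\sim 4e^{-2|\tau|}$, so $\tilde{\rho}_\pm=2$, the limit in~\eqref{eq:eas} is $0$ for any $\rho<2$, and the boundary choice $\rho=\tilde{\rho}_\pm=2$ gives the finite limit $4$:

```python
import numpy as np

# Assumed input Lam(tau) = tanh(tau), so Lam'(tau) = sech(tau)**2 ~ 4*exp(-2|tau|).
rho = 2.0
dLam = lambda t: 1.0 / np.cosh(t) ** 2

taus = np.array([5.0, 10.0, 15.0])
ratio_plus = dLam(taus) / np.exp(-rho * taus)    # tau -> +infinity branch of the limit
ratio_minus = dLam(-taus) / np.exp(-rho * taus)  # tau -> -infinity branch of the limit
print(ratio_plus)   # approaches 4
print(ratio_minus)  # approaches 4
```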
\subsection{Autonomous Compactified System}
\label{sec:compautsyst}
Compactification is a three-step process. The first step is
an $\alpha$-parametrised time transformation that makes the additional dependent variable bounded. Guided by~\cite[Sec.4.2]{Wieczorek2019compact}, we use a transformation designed for exponentially or faster decaying external inputs, and augment the asymptotically autonomous system~(\ref{eq:odewithrs}) with
\begin{equation}
\label{eq:compacttransf}
s = g_{\alpha}(\tau) =\tanh\left(\frac{\alpha }{2}\,\tau\right) \in (-1,1),
\end{equation}
where $\alpha>0$ is the {\em compactification parameter} that
is chosen later, in the third step. The inverse is given by
$$
\tau = h_{\alpha}(s) = \frac{1}{\alpha}\ln\frac{1+s}{1-s} \in \mathbb{R},
$$
and the augmented component of the vector field is
$$
s' =\alpha\, (1-s^2)/2.
$$
An advantage of the external input time scale $\tau$ is that transformation~\eqref{eq:compacttransf} does not depend on
the rate parameter $r>0$.
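The identities behind this first step are easy to verify numerically: $g_\alpha$ and $h_\alpha$ are mutually inverse, and the chain rule gives $s'=\tfrac{\alpha}{2}\operatorname{sech}^2(\tfrac{\alpha}{2}\tau)=\alpha(1-s^2)/2$. A minimal check (the value $\alpha=0.7$ is an arbitrary choice for illustration):

```python
import numpy as np

alpha = 0.7  # arbitrary compactification parameter for this check
g = lambda tau: np.tanh(0.5 * alpha * tau)                   # s = g_alpha(tau)
h = lambda s: (1.0 / alpha) * np.log((1.0 + s) / (1.0 - s))  # tau = h_alpha(s)

tau = np.linspace(-5.0, 5.0, 11)
s = g(tau)
roundtrip_err = np.max(np.abs(h(s) - tau))      # h_alpha(g_alpha(tau)) = tau

# central-difference check of s' = alpha*(1 - s**2)/2
eps = 1e-6
ds_num = (g(tau + eps) - g(tau - eps)) / (2.0 * eps)
ode_err = np.max(np.abs(ds_num - 0.5 * alpha * (1.0 - s ** 2)))
print(roundtrip_err, ode_err)  # both close to zero
```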
The second step is to make the $s$-interval closed
by including $s=\pm 1$ ($\tau=\pm\infty$), and
continuously extend the augmented vector field to $s=\pm 1$.
This gives the autonomous {\em compactified system}
\begin{align}
\label{eq:odeextbtau}
\left.
\begin{array}{rl}
rx'& = f(x,\Lambda_\alpha(s))\\
s' &= \alpha(1-s^2)/2
\end{array}\right\},
\end{align}
with
\begin{align}
\label{eq:lambda_s+-}
\Lambda_\alpha(s)&:=
\left\{
\begin{array}{rcl}
\Lambda(h_{\alpha}(s)) &\mbox{for}& s\in(-1,1),\\
\lambda^+ &\mbox{for}& s = 1,\\
\lambda^- &\mbox{for}& s = -1,
\end{array}\right.
\end{align}
that is defined on the {\em extended phase space} $\mathbb{R}^n\times[-1,1]$.
Most importantly, the flow-invariant subspaces
$$
S^+=\{(x,1)\;:\;x\in\mathbb{R}^n \}\;\;\mbox{and}\;\;S^-=\{(x,-1)\;:\;x\in\mathbb{R}^n \},
$$
carry the autonomous dynamics and compact invariant sets, such as $e^\pm$ and $\eta^+$,
of the future~\eqref{eq:odea+} and past~\eqref{eq:odea-} limit systems, respectively.
The third step is to choose the compactification parameter $\alpha$ such that the continuously extended vector field of the compactified system is continuously differentiable ($C^1$-smooth) on $\mathbb{R}^n\times[-1,1]$. This is done in the following proposition.
\begin{proposition}
\label{prop:regular}
Consider a nonautonomous system~\eqref{eq:odewithrs} with exponentially bi-asymptotically constant input $\Lambda(\tau)$ and decay coefficient $\rho>0$. Then, the autonomous compactified system~\eqref{eq:odeextbtau} is $C^1$-smooth on the extended phase space $\mathbb{R}^n\times[-1,1]$ for any $\alpha\in(0,\rho]$
and all $r >0$.
\end{proposition}
\begin{proof}
For any $r>0$, system~\eqref{eq:odeextbtau} is
a compactification of system~\eqref{eq:odewithrs}.
Thus, we can apply~\cite[Cor.4.1]{Wieczorek2019compact} to~\eqref{eq:odeextbtau} to infer that, for any $\alpha\in(0,\rho]$ and $r>0$, the compactified system~\eqref{eq:odeextbtau} is at least $C^1$-smooth on the extended phase space $\mathbb{R}^n\times[-1,1]$.
\end{proof}
\subsection{Compactified System as a Singularly Perturbed Fast-Slow System}
\label{sec:csfastslow}
When $0 < r \ll 1$, the compactified system~\eqref{eq:odeextbtau} can be viewed as a singularly perturbed fast-slow system with the small parameter $r$~\cite{Kuehn2015},
where the system time scale $t$ is the {\em fast time}, and the external input time scale $\tau=rt$ is the {\em slow time}.
Taking the limit $r\to 0$ in the fast time $t$
in
\begin{align}
\label{eq:odeextb}
\left.
\begin{array}{rl}
\dot{x} &= f(x,\Lambda_\alpha(s))\\
\dot{s} &= r\,\alpha(1-s^2)/2
\end{array}\right\},
\end{align}
gives the {\em fast subsystem (the layer problem)}
\begin{equation}
\label{eq:layersyst}
\dot{x} = f(x,\Lambda_\alpha(s)),
\end{equation}
where $s$ becomes a fixed-in-time parameter. Note that this is the frozen system~\eqref{eq:odea} with the input parameter $\lambda = \Lambda_\alpha(s)$.
Taking the limit $r\to 0$ in the slow time $\tau$ in~\eqref{eq:odeextbtau} gives the {\em slow subsystem (the reduced problem)}
\begin{align}
\label{eq:slowsub}
\left.
\begin{array}{rl}
0 &= f(x,\Lambda_\alpha(s))\\
s' &= \alpha(1-s^2)/2
\end{array}\right\}.
\end{align}
This singular system describes the evolution of $s$ in slow time $\tau$ on
the {\em critical set}
$$
\tilde{C}^{[0]} =
\left\{
(x,s)\in \mathbb{R}^n\times[-1,1]
\;:\;
f(x,\Lambda_\alpha(s)) = 0
\right\},
$$
that consists of all branches of equilibria (critical points) of the fast subsystem~\eqref{eq:layersyst} or~\eqref{eq:odea}. The critical set $\tilde{C}^{[0]}$ is called the {\em critical manifold} if it is a submanifold of $\mathbb{R}^n\times[-1,1]$. Furthermore, submanifolds of $\tilde{C}^{[0]}$ that consist of hyperbolic equilibria of the fast subsystem~\eqref{eq:layersyst} or~\eqref{eq:odea},
are called {\em normally hyperbolic critical manifolds}~\cite{Fenichel1979,Kuehn2015}.
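To illustrate the fast-slow splitting on a concrete example, take the hypothetical scalar vector field $f(x,\lambda)=\lambda-x$ (this example is for illustration only and is not one of the systems studied here). The fast subsystem~\eqref{eq:layersyst} becomes $\dot{x}=\Lambda_\alpha(s)-x$ with $s$ frozen, and the critical set is the graph
$$
\tilde{C}^{[0]} =
\left\{
(\Lambda_\alpha(s),s)
\;:\;
s\in[-1,1]
\right\},
$$
on which the slow subsystem~\eqref{eq:slowsub} reduces to $s'=\alpha(1-s^2)/2$. Since $\partial f/\partial x = -1<0$ everywhere, each point of $\tilde{C}^{[0]}$ is a hyperbolic sink of the fast subsystem, and $\tilde{C}^{[0]}$ is a normally hyperbolic attracting critical manifold.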
\subsection{Compact Normally Hyperbolic Critical Manifolds}
\label{sec:NHIM}
The fast-slow viewpoint allows us to represent moving sinks and moving equilibrium regular edge states as {\em compact normally hyperbolic invariant manifolds} in the extended phase space of the compactified system.
\begin{proposition}
\label{prop:fastslow}
Consider a nonautonomous system~\eqref{eq:odewithrs}
with exponentially bi-asymptotically constant input $\Lambda(\tau)$.
Choose the compactification parameter $\alpha$ that satisfies Proposition~\ref{prop:regular}.
Consider an interval $I=(\tau_-,\tau_+)$ and let $s_\pm=g_\alpha(\tau_\pm)$; note that $\tau_\pm$ may be $\pm\infty$, in which case $s_\pm=\pm 1$.
Then,
\begin{itemize}
\item[(a)]
A moving sink $e(\Lambda(\tau))$ on
$I=(\tau_-,\tau_+)$ corresponds
to the compact connected normally hyperbolic attracting critical manifold
$$
\tilde{E}_\alpha^{[0]} =
\left\{
\left(e(\Lambda_\alpha(s)),s\right)
\;:\; s\in[s_-,s_+]
\right\},
$$
in the extended phase space of the compactified system~\eqref{eq:odeextbtau}.
\item[(b)]
A moving equilibrium regular edge state $\eta(\Lambda(\tau))$ on $I=(\tau_-,\tau_+)$ corresponds to the compact connected normally hyperbolic critical manifold
$$
\tilde{H}_\alpha^{[0]} =
\left\{ \left(\eta(\Lambda_\alpha(s)),s\right) \;:\; s\in[s_-,s_+] \right\},
$$
in the extended phase space of the compactified system~\eqref{eq:odeextbtau}. $\tilde{H}_\alpha^{[0]}$ is normally repelling if $n=1$, or of saddle type with one unstable dimension if $n\ge 2$.
\end{itemize}
\end{proposition}
\begin{rmk}
The compact normally hyperbolic critical manifolds $\tilde{E}_\alpha^{[0]}$ and $\tilde{H}_\alpha^{[0]}$ represent branches of hyperbolic sinks $e(\lambda)$ and equilibrium regular edge states $\eta(\lambda)$, respectively, for the frozen system~\eqref{eq:odea}
in the $(x,s)$ phase space of the compactified system~\eqref{eq:odeextbtau}.
This allows us to use Fenichel's theorem~\cite[Thm~9.1]{Fenichel1979} to give criteria for tracking moving sinks and moving regular thresholds in Section~\ref{sec:TrackingProof}.
\end{rmk}
\noindent
{\em Proof of Proposition~\ref{prop:fastslow}.}
(a) Note that $\tilde{E}_\alpha^{[0]}$ is a graph over $s$, and
$$
\frac{d}{ds}\,e(\Lambda_\alpha(s))=
\frac{d}{d\lambda}e(\lambda)\,\frac{d}{ds}\Lambda_\alpha(s).
$$
It then follows from Definition~\ref{def:ms}(a) of a moving sink on $I$, and from Prop.~\ref{prop:regular},
that $\tilde{E}_\alpha^{[0]}$ is at least $C^1$-smooth in $s$ on $[s_-, s_+]$.
For any fixed $s^*\in[s_-,s_+]$, the slice $\tilde{E}_\alpha^{[0]}\cap\{s=s^*\}$ consists of
a single equilibrium (critical point) of the fast subsystem~\eqref{eq:layersyst}, which is exponentially stable (hyperbolic) within, and neutrally stable transverse to, $\{s=s^*\}$. Hence, $\tilde{E}_\alpha^{[0]}$
is a connected attracting normally hyperbolic critical manifold. It is compact because it is a closed and bounded subset of $\mathbb{R}^n\times[-1,1]$.
\noindent
(b)
Note that $\tilde{H}_\alpha^{[0]}$ is a graph over $s$, and
$$
\frac{d}{ds}\,\eta(\Lambda_\alpha(s))=
\frac{d}{d\lambda} \eta(\lambda)\,\frac{d}{ds}\Lambda_\alpha(s).
$$
It then follows from Definition~\ref{defn:mth}(b) of a moving regular edge state on $I$, and from arguments similar to (a), that $\tilde{H}_\alpha^{[0]}$ is a compact connected normally hyperbolic critical manifold. Normal stability of
$\tilde{H}_\alpha^{[0]}$ follows from
Definition~\ref{defn:edgestate} of a regular edge state, see also Remark~\ref{rmk:edgestate}.
\qed
\subsection{Compactified System Dynamics}
\label{sec:csd}
In this section, we discuss the stability and invariant manifolds of hyperbolic sinks $e^\pm$ and equilibrium regular R-tipping edge states $\eta^+$ from the limit systems when embedded in the extended phase space of the compactified system~\eqref{eq:odeextbtau}. In other words, we describe how the dynamical structure of these states extends into the new dependent variable $s$.
In Section~\ref{sec:Conclusions}, we discuss extensions of some of the results below to non-equilibrium attractors and non-equilibrium regular edge states.
\begin{proposition}
\label{prop:csdyn}
Consider a nonautonomous system~(\ref{eq:odewithrs}) with exponentially bi-asymptotically constant input $\Lambda(\tau)$ and decay coefficient $\rho>0$. Choose any compactification parameter $\alpha\in(0,\rho)$.
\begin{itemize}
\item[(a)]
If $e^+$ is a hyperbolic sink for the future limit system~\eqref{eq:odea+}, then
$$
\tilde{e}^+ =(e^+,1)\in S^+,
$$
is also a hyperbolic sink when considered in the extended phase space of the compactified system~\eqref{eq:odeextbtau}.
The additional eigenvector of $\tilde{e}^+$, denoted $v_+$,
exists and is normal to the invariant subspace $S^+$ for any
$\alpha\in(0,\rho)$ and all $r>0$. Furthermore, $v_+$ is the leading eigenvector of $\tilde{e}^+$ for any $\alpha\in\left(0,\min\{\rho,-\mathrm{Re}(l_1)/r\}\right)$
and all $r>0$, where $l_1$ is the leading eigenvalue of $e^+$ in
the future limit system~\eqref{eq:odea+}.
%
\item[(b)]
If $\eta^+$ is an equilibrium regular R-tipping edge state, then
$$
\tilde{\eta}^+ =(\eta^+,1)\in S^+,
$$
is a hyperbolic saddle
with a codimension-one stable manifold $W_\alpha^{s,[r]}(\tilde{\eta}^+)$, a codimension-one embedded orientable local stable manifold
$W^{s,[r]}_{\alpha,loc}(\tilde{\eta}^+) \subseteq W_\alpha^{s,[r]}(\tilde{\eta}^+)$,
and a one-dimensional unstable manifold $W^u(\tilde{\eta}^+)$, when considered in the extended phase space of the compactified system~\eqref{eq:odeextbtau}.
The additional eigenvector of $\tilde{\eta}^+$ is normal to the invariant subspace $S^+$ for any
$\alpha\in(0,\rho)$ and all $r>0$.
%
\item[(c)]
If $e^-$ is a hyperbolic sink for the past limit system~\eqref{eq:odea-}, then
$$
\tilde{e}^- =(e^-,-1)\in S^-,
$$
is a hyperbolic saddle with a one-dimensional unstable manifold $W_\alpha^{u,[r]}(\tilde{e}^-)$ when considered in the extended phase space of the compactified system~\eqref{eq:odeextbtau}.
The additional eigenvector of $\tilde{e}^-$ is normal to the invariant subspace $S^-$ for any
$\alpha\in(0,\rho)$ and all $r>0$.
\end{itemize}
\end{proposition}
\begin{rmk}
Note that
\begin{itemize}
\item[$\bullet$]
The shape and relative position of invariant manifolds $W_\alpha^{s,[r]}(\tilde{\eta}^{+})$ and $W_\alpha^{u,[r]}(\tilde{e}^-)$ will typically change with the rate parameter $r$, but these manifolds are guaranteed to respectively meet the invariant subspaces $S^+$ and $S^-$ orthogonally for any $r>0$ if we choose the compactification parameter $\alpha\in(0,\rho)$.
The invariant manifold $W^u(\tilde{\eta}^+)$ is independent of $r$ and $\alpha$.
\item[$\bullet$]
We show in Proposition~\ref{prop:invsete-}(b) that a forward invariant local stable manifold $W^{s,[r]}_{\alpha,loc}(\tilde{\eta}^+)$ for the autonomous compactified system~\eqref{eq:odeextbtau}
corresponds to
a regular R-tipping threshold in the original nonautonomous system~\eqref{eq:odewithrs},
and that branches of the unstable manifold $W^u(\tilde{\eta}^+)$ are related to the edge tails of $\eta^+$.
\end{itemize}
\end{rmk}
\noindent
{\em Proof of Proposition~\ref{prop:csdyn}.}
Note that the restriction $\alpha\in(0,\rho)\subset(0,\rho]$ ensures that the compactified system~\eqref{eq:odeextbtau} is at least $C^1$-smooth; this follows from Prop.~\ref{prop:regular}.
The Jacobian for the compactified system is
\begin{equation}
\label{eq:jacobian}
J=\left(
\begin{array}{cc}
\frac{1}{r}\left(\frac{\partial f}{\partial x}\right)_{n\times n} & \frac{1}{r}\left(\frac{\partial f}{\partial \Lambda}\,\frac{\partial \Lambda_\alpha}{\partial s}\right)_{n\times 1} \\
(0)_{1\times n} & - \alpha s
\end{array}
\right),
\end{equation}
where the subscripts indicate the size of the matrix components of $J$.
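As a simple illustration, for the hypothetical scalar example $f(x,\lambda)=\lambda-x$ (so $n=1$; this example is for illustration only), the Jacobian~\eqref{eq:jacobian} reads
$$
J=\left(
\begin{array}{cc}
-\frac{1}{r} & \frac{1}{r}\,\frac{\partial \Lambda_\alpha}{\partial s} \\
0 & -\alpha s
\end{array}
\right),
$$
with eigenvalues $-1/r$ and $-\alpha s$. At $s=1$ the second eigenvalue is $q_+=-\alpha<0$, so $S^+$ is attracting, whereas at $s=-1$ it is $q_-=\alpha>0$, so $S^-$ is repelling, consistent with parts (a) and (c).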
Consider linear stability of equilibria $\tilde{e}^\pm$ and $\tilde{\eta}^+$ in the compactified system~\eqref{eq:odeextbtau} on the time scale $\tau$.
\\
(a) Equilibrium $\tilde{e}^+$ is a hyperbolic sink.
There are $n$ eigenvalues $q_i = l_i/r$ within $S^+$ that satisfy $\mathrm{Re}(q_n) \le \ldots \le \mathrm{Re}(q_1) < 0$, where $l_i$ are the eigenvalues of ${e}^+$ in the future limit system~\eqref{eq:odea+}, and $S^+$ itself is exponentially attracting, adding one additional negative eigenvalue $q_+ =-\alpha$.
It follows from the structure of the Jacobian~\eqref{eq:jacobian} that the
additional eigenvector, denoted $v_+$, exists for all $r>0$ if the top $n$ elements in the last column of $J$ are zero:
\begin{equation}
\label{eq:orthog}
\frac{\partial f}{\partial \Lambda}({e}^+)\,
\frac{d\Lambda_\alpha}{ds}(s=1)
=
\frac{\partial f}{\partial \Lambda}({e}^+)\, \lim_{\tau\to + \infty}\,\frac{\Lambda'(\tau)}{g'_{\alpha}(\tau)}
= 0,
\end{equation}
and $v_+$
is normal to $S^+$ if and only if~\eqref{eq:orthog} holds.
Noting that $g'_{\alpha}(\tau) \sim 2\alpha \, e^{-\alpha\tau}$ as
$\tau\to +\infty$, and that $\Lambda(\tau)$ decays exponentially with the decay coefficient $\rho>0$, we obtain
$$
\lim_{\tau\to + \infty}\,\frac{\Lambda'(\tau)}{g'_{\alpha}(\tau)} =
\left(\lim_{\tau\to +\infty}\frac{\Lambda'(\tau)}{e^{-\rho \tau}}\right)\,
\left(\lim_{\tau\to +\infty}\frac{e^{-\rho\tau}}{g'_{\alpha}(\tau)}\right)
=
\frac{1}{2\alpha}
\left(
\lim_{\tau\to +\infty}\frac{\Lambda'(\tau)}{e^{-\rho \tau}}
\right)\,
\left(
\lim_{\tau\to +\infty}e^{-(\rho-\alpha) \tau}
\right),
$$
implying that $v_+$ exists and is normal to $S^+$
for any $0<\alpha<\rho$ and all $r>0$.
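As a concrete check, suppose (hypothetically) that $\Lambda(\tau)=\lambda^+ - C\,e^{-\rho\tau}$ for all sufficiently large $\tau$ and some constant $C$. Then $\Lambda'(\tau)=C\rho\,e^{-\rho\tau}$ and
$$
\lim_{\tau\to + \infty}\,\frac{\Lambda'(\tau)}{g'_{\alpha}(\tau)}
= \frac{C\rho}{2\alpha}\,\lim_{\tau\to +\infty}e^{-(\rho-\alpha) \tau}
= 0 \quad\mbox{for any}\quad 0<\alpha<\rho,
$$
whereas for $\alpha=\rho$ the limit equals $C/2\neq 0$, so~\eqref{eq:orthog} generally fails (unless $\frac{\partial f}{\partial \Lambda}(e^+)=0$) and $v_+$ need not be normal to $S^+$. This illustrates why the strict inequality $\alpha<\rho$ is needed here.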
Finally, $v_+$ is the leading eigenvector for all $r>0$ if
it exists for all $r>0$, which requires $0 < \alpha < \rho$,
and if $-q_+ = \alpha < -\mathrm{Re}(q_1) = -\mathrm{Re}(l_1)/r$.
Hence the condition $0<\alpha < \min\{\rho,-\mathrm{Re}(l_1)/r\}$.
\noindent
(b) Equilibrium $\tilde{\eta}^+$ is a hyperbolic saddle with
$n$-dimensional stable eigenspace $E_\alpha^s(\tilde{\eta}^+)$.
This is because $\tilde{\eta}^+$ is either a hyperbolic source ($n=1$) or a hyperbolic saddle with one unstable eigendirection ($n\ge 2$) within $S^+$ by Definition~\ref{defn:edgestate}, and $S^+$ itself is exponentially attracting, adding one (additional) negative eigenvalue $q_+ =-\alpha$.
Note that the additional (generalised) eigenvector is transverse to $S^+$ for all $r>0$. Thus, the stable eigenspace $E_\alpha^s(\tilde{\eta}^+)$ is transverse to $S^+$ for all $r>0$.
It then follows from the stable manifold theorem that, for any $r>0$, there is a unique $C^1$-smooth codimension-one stable manifold $W_\alpha^{s,[r]}(\tilde{\eta}^+)$
that is tangent to $E_\alpha^s(\tilde{\eta}^+)$ at $\tilde{\eta}^+$.
$W_\alpha^{s,[r]}(\tilde{\eta}^+)$ depends on the rate parameter $r$ because the vector field in~\eqref{eq:odeextbtau} depends on $r$.
Consider a codimension-one forward-invariant local stable manifold $W_{\alpha,loc}^{s,[r]}(\tilde{\eta}^+)$ defined for
$s\in(s_0,1]$ with a suitably chosen $s_0$.
It then follows from Definition~\ref{defn:edgestate} of a regular edge state that $W_{\alpha,loc}^{s,[r]}(\tilde{\eta}^+) \cap S^+$ is a codimension-one embedded orientable forward-invariant local stable manifold of $\eta^+$ within $S^+\subseteq \mathbb{R}^n$.
Since $W_{\alpha,loc}^{s,[r]}(\tilde{\eta}^+)$ intersects $S^+$ transversely, there is an $s_0\in[-1,1)$ such that $W_{\alpha,loc}^{s,[r]}(\tilde{\eta}^+)$ is a graph over $s$ on $(s_0,1]$.
Thus,
the embedding and orientability properties carry over from $S^+$ to the entire $W_{\alpha,loc}^{s,[r]}(\tilde{\eta}^+)$.
The condition for the stable eigenspace $E_\alpha^s(\tilde{\eta}^{+})$ to be normal to $S^+$ follows from (a).
\noindent
(c) For any $r>0$, equilibrium $\tilde{e}^{-}$ is a hyperbolic saddle with
one-dimensional unstable eigenspace $E_\alpha^u(\tilde{e}^{-})$. This is because $\tilde{e}^{-}$ is a hyperbolic sink within $S^-$, and $S^-$ itself is exponentially repelling, adding the one and only unstable eigendirection, with positive eigenvalue $q_- = \alpha$. For any $r>0$, existence of the one-dimensional unstable manifold $W_\alpha^{u,[r]}(\tilde{e}^-)$ follows from the unstable manifold theorem.
$W_\alpha^{u,[r]}(\tilde{e}^-)$ depends on the rate parameter $r$ because the compactified vector field in~\eqref{eq:odeextbtau} depends on $r$.
The condition for the unstable eigendirection $E_\alpha^u(\tilde{e}^{-})$ to be normal to $S^-$ follows from a similar argument to (a).
\qed
\subsection{Relating Nonautonomous and Compactified System Dynamics}
\label{sec:compactdyns}
We now examine the relationship between:
\begin{itemize}
\item[(i)]
Solutions, regular R-tipping thresholds and edge tails in the nonautonomous system~\eqref{eq:odewithrs}, and
\item[(ii)]
Equilibria $\tilde{e}^-$ and $\tilde{\eta}^+$ as well as their invariant manifolds
in the autonomous compactified system~\eqref{eq:odeextbtau}.
\end{itemize}
First, we relate the local pullback attractor $x^{[r]}(\tau,e^-)$
to the rate-dependent unstable manifold of $\tilde{e}^-$, the time- and
rate-dependent R-tipping threshold $\Theta^{[r]}(\tau)$ anchored at infinity by an equilibrium regular R-tipping edge state $\eta^+$ to the rate-dependent
local stable manifold of $\tilde{\eta}^+$, and associate each edge tail
$x^{[r_c^+]}$ and $x^{[r_c^-]}$ of $\eta^+$ with a branch of the unstable manifold of $\tilde{\eta}^+$.
\begin{proposition}
\label{prop:invsete-}
Consider a nonautonomous system~(\ref{eq:odewithrs}) with exponentially bi-asymptotically constant input $\Lambda(\tau)$ and decay coefficient $\rho$. Choose any compactification parameter $\alpha\in (0,\rho)$.
\begin{itemize}
\item[(a)]
Suppose the past limit system~\eqref{eq:odea-} has
a sink $e^{-}$. Then,
\begin{itemize}
\item[$\bullet$]
There is a $\tau_0$ such that, for any $r>0$ and all $\tau < \tau_0$, there exists a unique local pullback attractor
$x^{[r]}(\tau,e^-)$ in nonautonomous system~\eqref{eq:odewithrs}.
Note that $\tau_0$ may be $+\infty$.
\item[$\bullet$]
For any $r>0$, the local pullback attractor
$x^{[r]}(\tau,e^-)$ in nonautonomous system~\eqref{eq:odewithrs} corresponds to
sections of the one-dimensional unstable manifold
$$
W_\alpha^{u,[r]}(\tilde{e}^{-}) \supset
\left\{(x,s)\;\;:\;\; x\in x^{[r]}(\tau,e^-),~s=g_{\alpha}(\tau) \right\}_{\tau<\tau_0},
$$
of the saddle $\tilde{e}^- =(e^-,-1)$ in the extended phase space of the compactified system~\eqref{eq:odeextbtau}.
\end{itemize}
\item[(b)]
Suppose the future limit system~\eqref{eq:odea+} has an equilibrium regular R-tipping edge state $\eta^+$. Then,
\begin{itemize}
\item[$\bullet$]
There is a $\tau_0$ such that, for any $r>0$ and all $\tau>\tau_0$, there exists an R-tipping threshold $\Theta^{[r]}(\tau)$ anchored at infinity by $\eta^+$ in nonautonomous system~\eqref{eq:odewithrs}.
Note that $\tau_0$ may be $-\infty$.
\item[$\bullet$]
For any $r>0$, the R-tipping threshold $\Theta^{[r]}(\tau)$ in nonautonomous system~\eqref{eq:odewithrs} corresponds to sections of the codimension-one stable manifold
\begin{align}
\label{eq:compthr}
W_\alpha^{s,[r]}(\tilde{\eta}^+) \supset \tilde{\Theta}_\alpha^{[r]}:=
\left\{ \left(x,s\right)~:~x\in \Theta^{[r]}(\tau),~s=g_{\alpha}(\tau) \right\}_{\tau > \tau_0},
\end{align}
of the saddle $\tilde{\eta}^+ =(\eta^+,1)$ in the extended phase space of the compactified system~\eqref{eq:odeextbtau}.
\item[$\bullet$]
Each edge tail of $\eta^+$ embedded in the compactified phase space of~\eqref{eq:odeextbtau}, namely
\begin{equation}
\label{eq:etsembedded}
\tilde{x}^{[r_c^+]}=
\left\{
(x,1)~:~x\in x^{[r_c^+]}
\right\}
\quad\mbox{and}\quad
\tilde{x}^{[r_c^-]}=
\left\{
(x,1)~:~x\in x^{[r_c^-]}
\right\},
\end{equation}
contains one branch of the unstable manifold $W^{u}(\tilde{\eta}^+)$ of the saddle $\tilde{\eta}^+ =(\eta^+,1)$.
\end{itemize}
\end{itemize}
\end{proposition}
\begin{rmk}
These relations between nonautonomous and compactified system dynamics are the main advantages of the compactification. They show that the temporal shape of the external input $\Lambda(\tau)$ and the magnitude of the rate parameter $r>0$ are in a certain sense `encoded' in the geometric shape of the invariant manifolds $W_\alpha^{u,[r]}(\tilde{e}^-)$ and $W_\alpha^{s,[r]}(\tilde{\eta}^{+})$ for the autonomous compactified system~\eqref{eq:odeextbtau}. This observation allows us to:
\begin{itemize}
\item
Use existing numerical methods from~\cite{Krauskopf2006} to compute families of regular R-tipping thresholds in low-dimensional nonautonomous systems~\eqref{eq:odewithrs} as local stable manifolds of saddles $\tilde{\eta}^+$ in the extended phase space of the compactified system~\eqref{eq:odeextbtau}.
\item
Highlight the importance of one-dimensional connecting (heteroclinic) orbits. In Section~\ref{sec:computing}, we identify different cases of R-tipping with connecting (heteroclinic) orbits, and use these orbits to give a general method for computing critical rates in arbitrary dimension.
\end{itemize}
\end{rmk}
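The numerical strategy behind these observations can be sketched for the well-known scalar R-tipping prototype $\dot{x}=(x+\Lambda(rt))^2-1$ with a hypothetical input $\Lambda(\tau)=\tfrac{3}{2}(1+\tanh\tau)$; this prototype and all parameter values below are illustrative assumptions, not the systems analysed in this paper. The past sink is $e^-=-1$ and the future edge state is $\eta^+=-2$; integrating from the past sink and bisecting on the rate $r$ between tracking and tipping locates a critical rate.

```python
import math

def Lambda(tau):
    """Hypothetical exponentially bi-asymptotically constant input: 0 -> 3."""
    return 1.5 * (1.0 + math.tanh(tau))

def tips(r, tau_span=6.0, steps=4000):
    """Integrate dx/dt = (x + Lambda(r*t))^2 - 1 with classical RK4, starting
    on the past sink e^- = -Lambda - 1; return True if the solution escapes
    past the future edge state eta^+ = -2 (R-tipping), False if it tracks."""
    t = -tau_span / r
    # allow extra time in the (almost) frozen future system to resolve escape
    t_end = tau_span / r + 2.0
    dt = (t_end - t) / steps
    x = -Lambda(-tau_span) - 1.0
    f = lambda t, x: (x + Lambda(r * t)) ** 2 - 1.0
    for _ in range(steps):
        k1 = f(t, x)
        k2 = f(t + dt / 2, x + dt * k1 / 2)
        k3 = f(t + dt / 2, x + dt * k2 / 2)
        k4 = f(t + dt, x + dt * k3)
        x += dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
        t += dt
        if x > 5.0:   # well past the edge state: irreversible escape
            return True
    return False

def critical_rate(r_lo=0.05, r_hi=20.0, iters=20):
    """Bisect on r between a tracking rate and a tipping rate (assumes
    tipping is monotone in r for this prototype)."""
    for _ in range(iters):
        r_mid = 0.5 * (r_lo + r_hi)
        if tips(r_mid):
            r_hi = r_mid
        else:
            r_lo = r_mid
    return 0.5 * (r_lo + r_hi)
```

In the language of this section, the bisection brackets the value of $r$ at which the solution corresponding to $W_\alpha^{u,[r]}(\tilde{e}^-)$ crosses from one side of $W_\alpha^{s,[r_c]}(\tilde{\eta}^+)$ to the other; the general connecting-orbit method of Section~\ref{sec:computing} replaces this scalar shooting by boundary-value continuation in arbitrary dimension.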
\noindent
{\em Proof of Proposition~\ref{prop:invsete-}.}
The assumption on $\alpha$ ensures that the conclusion of Proposition~\ref{prop:regular} holds.\\
(a) In the nonautonomous system~\eqref{eq:odewithrs}, existence of a unique local pullback
point attractor $x^{[r]}(\tau,e^-)$ that limits to
$e^-$ as $\tau\to -\infty$ for any $r>0$ follows from~\cite[Thm.~2.2]{Ashwin2016}.
In the compactified system~\eqref{eq:odeextbtau}, existence of a unique one-dimensional unstable manifold
$W_\alpha^{u,[r]}(\tilde{e}^-)$
for any $r>0$ follows from Proposition~\ref{prop:csdyn}(c).
These may exist for all $\tau\in\mathbb{R}$ and $s\in(-1,1)$,
respectively, but this is not guaranteed.
Noting that $\left\{ (x^{[r]}(\tau),g_{\alpha}(\tau))\;:\;\tau<\tau_0 \right\}$ is
the trajectory of the compactified system that corresponds to a solution $x^{[r]}(\tau)$
of the nonautonomous system gives the result.
\noindent
(b)
We prove existence of a regular R-tipping threshold anchored
at infinity by an equilibrium regular R-tipping edge state $\eta^+$ by construction,
using sections of a suitably chosen subset of $W^{s,[r]}_\alpha(\tilde{\eta}^{+})$ at fixed values of $s$.
Existence of a codimension-one embedded orientable forward-invariant local stable manifold $W_{\alpha,loc}^{s,[r]}(\tilde{\eta}^+) \subseteq W_{\alpha}^{s,[r]}(\tilde{\eta}^+)$ that
is a graph over $s$ for $s\in(s_0,1]$ follows from Proposition~\ref{prop:csdyn}(b).
Keeping in mind that $s = g_\alpha(\tau)$, and setting $\tau_0 = h_\alpha(s_0)$, we construct
\begin{equation}
\label{eq:thetaWs}
\Theta^{[r]}(\tau) :=\{x\;:\;(x,s)\in W_{\alpha,loc}^{s,[r]}(\tilde{\eta}^+)\} \subset \mathbb{R}^n,
\end{equation}
for any $r>0$ and all $\tau \in (\tau_0,+\infty)$. Note that $\tau_0$ is $-\infty$ if $s_0=-1$.
Such $\Theta^{[r]}(\tau)$ is a codimension-one embedded orientable forward-invariant nonautonomous
set by construction, and has the property~\eqref{eq:etastable}.
Thus, $\Theta^{[r]}(\tau)$ is a regular R-tipping threshold. Note that $\Theta^{[r]}(\tau)$
is not unique in the sense that there is a different $\Theta^{[r]}(\tau)$ for every
different codimension-one forward-invariant subset of $W_{\alpha,loc}^{s,[r]}(\tilde{\eta}^+)$. Relation~\eqref{eq:compthr} follows from construction of $\Theta^{[r]}(\tau)$ in~\eqref{eq:thetaWs}.
To prove the last bullet point in (b), recall from Definition~\ref{defn:edgetails} that each edge tail of $\eta^+$ contains a trajectory
of the future limit system~\eqref{eq:odea+} that limits to $\eta^+$ in backwards time and does not depend on $r$ or $\alpha$. It follows from Proposition~\ref{prop:csdyn}(b)
that $\tilde{\eta}^+$ is a hyperbolic saddle with one-dimensional unstable manifold $W^{u}(\tilde{\eta}^+)\subset S^+$. This means that this unstable manifold contains precisely two trajectories (the branches of $W^{u}(\tilde{\eta}^+)$) and hence each edge tail must contain one of these.
\qed\\
Next, we state three relations between solutions $x^{[r]}(\tau)$ of the nonautonomous system~\eqref{eq:odewithrs} for $r$
on different sides of a critical rate $r_c$, and the upper $x^{[r_c^+]}$ and lower $x^{[r_c^-]}$ edge tails of an equilibrium regular R-tipping edge state $\eta^+$.
\begin{proposition}
\label{prop:edgetails}
Consider a solution $x^{[r]}(\tau)$ to a nonautonomous system~\eqref{eq:odewithrs}
with an external input $\Lambda(\tau)$ that is asymptotically constant to $\lambda^+$.
Suppose there is a regular R-tipping threshold $\Theta^{[r]}(\tau)$ anchored at infinity by an equilibrium regular R-tipping edge state $\eta^+$, and there is R-tipping for some critical rate $r=r_c>0$
so that
$x^{[r_c]}(\tau) \to \eta^+\;\;\mbox{as}\;\; \tau\to +\infty$.
\begin{itemize}
\item [(a)]
If there is a $\delta>0$ such that $x^{[r]}(\tau)$ lies on different sides of $\Theta^{[r_c]}(\tau)$ for $r\in(r_c-\delta,r_c)$ and $r\in(r_c,r_c+\delta)$,
then the upper and lower edge tails of $\eta^+$ are different: $x^{[r_c^+]}\neq x^{[r_c^-]}$.
\item[(b)]
If each edge tail of $\eta^+$ is a connection from $\eta^+$ to an attractor, then there is a $\delta>0$ such that $x^{[r]}(\tau)$ converges to an attractor for $0<|r-r_c|<\delta$.
\item[(c)]
If each edge tail of $\eta^+$ is a different connection from $\eta^+$ to a (possibly different) attractor, then there is a $\delta>0$ such that $x^{[r]}(\tau)$ lies on different sides of $\Theta^{[r_c]}(\tau)$ and converges to the corresponding attractor for $r\in(r_c-\delta,r_c)$ and $r\in(r_c,r_c+\delta)$.
\end{itemize}
\end{proposition}
\begin{rmk}
\label{rmk:edgetails}
Note that different edge tails of $\eta^+$ do not imply that each edge tail is a different connection from $\eta^+$ to an attractor. For example, different composite edge tails, each of which consists of different connected trajectories or components, may have a common first component that connects $\eta^+$ to a saddle, and a different second component that continues away from this saddle. Another example is given by different non-composite edge tails that diverge from $\eta^+$ to infinity.
\end{rmk}
\noindent
{\em Proof of Proposition~\ref{prop:edgetails}.}
Choose the compactification parameter $\alpha$ that satisfies Proposition~\ref{prop:regular}. Recall from Section~\ref{sec:compautsyst} that $s(\tau)=g_\alpha(\tau)$, and use
$$
\tilde{x}_\alpha^{[r]}(\tau) =
\left(
{x}^{[r]}(\tau), s(\tau)
\right),
$$
to denote the solution of~\eqref{eq:odeextbtau} corresponding to a solution
$x^{[r]}(\tau)$ of the nonautonomous system~\eqref{eq:odewithrs}
with a fixed $r$, and refer to $\tilde{x}^{[r_c^+]}$ and $\tilde{x}^{[r_c^-]}$ from~\eqref{eq:etsembedded} as {\em embedded edge tails}.
Recall from Proposition~\ref{prop:invsete-}(b) that $W_\alpha^{s,[r_c]}(\tilde{\eta}^{+})$
contains a family of regular R-tipping thresholds $\Theta^{[r]}(\tau)$, and each embedded edge tail contains one branch of the unstable manifold $W^{u}(\tilde{\eta}^+)$.
For (a), assume that $x^{[r]}(\tau)$ is on different sides of $\Theta^{[r_c]}(\tau)$ for $r\in(r_c-\delta,r_c)$ and $r\in(r_c,r_c+\delta)$ in the nonautonomous system~\eqref{eq:odewithrs}, and consider where the corresponding
$\tilde{x}_\alpha^{[r]}(\tau)$ intersects the two branches of $W^u(\tilde{\eta}^+)$ in the extended phase space of the compactified system~\eqref{eq:odeextbtau}. This intersection changes sides of $W_\alpha^{s,[r_c]}(\tilde{\eta}^{+})$ as $r$ passes through $r_c$.
Thus, each embedded edge tail contains a different branch of $W^u(\tilde{\eta}^+)$, meaning that the edge tails
$x^{[r_c^+]}$ and $x^{[r_c^-]}$ are different.
For (b), it follows from~\cite[Prop.~3.1]{Wieczorek2019compact} that
if $A^+$ is an attractor for the future limit system~\eqref{eq:odea+},
then $\tilde{A}^+ =\{(x,1):x\in A^+\}\subset S^+$ is an attractor for
the compactified system~\eqref{eq:odeextbtau}.
Thus, the assumption that each edge tail is a connection from $\eta^+$
to an attractor implies that each embedded edge tail lies in the basin
of attraction of an attractor. This, in turn, implies that each section that transversely intersects an embedded edge tail has an open neighbourhood that
lies in the basin of attraction of an attractor. We choose $\delta>0$ small
enough so that $\tilde{x}_\alpha^{[r]}(\tau)$ enters this neighbourhood
for all $0<|r-r_c|<\delta$. This implies that the corresponding $x^{[r]}(\tau)$
converges to an attractor for all $0<|r-r_c|<\delta$.
In (c), the assumption that each edge tail of $\eta^+$ is a different connection from $\eta^+$ to a possibly different attractor implies that each embedded edge tail contains a different branch of $W^u(\tilde{\eta}^+)$ and thus lies on a different side of $W_\alpha^{s,[r_c]}(\tilde{\eta}^{+})$. This, in turn, implies that
${x}^{[r]}(\tau)$ lies on different sides of $\Theta^{[r_c]}(\tau)$ for $r\in(r_c-\delta,r_c)$ and $r\in(r_c,r_c+\delta)$.
It follows from (b) that $x^{[r]}(\tau)$ converges to the corresponding attractor for $0<|r-r_c|<\delta$.
\qed
\section{Criteria for Tracking and R-tipping with Regular Thresholds}
\label{sec:gentestcrit}
In this section, we give the main results on R-tipping via loss of end-point tracking in nonautonomous system~\eqref{eq:odewithr} or~\eqref{eq:odewithrs} with asymptotically
constant inputs $\Lambda$. Our focus is on non-degenerate (reversible and irreversible) cases of R-tipping, due to crossing regular R-tipping thresholds anchored at infinity by an equilibrium regular R-tipping edge state. Specifically, we use the compactification technique together with relations between nonautonomous~\eqref{eq:odewithrs} and compactified~\eqref{eq:odeextbtau} system dynamics given in Section~\ref{sec:compact} to:
\begin{itemize}
\item
Give rigorous testable criteria for tracking of moving sinks, and tracking of moving regular thresholds in arbitrary dimension in Section~\ref{sec:TrackingProof}.
\item
Use the concept of ``threshold instability'' to generalise sufficient conditions from~\cite{Ashwin2016} for the occurrence of irreversible R-tipping for moving sinks on $I=\mathbb{R}$ in one dimension to
different cases of R-tipping for moving sinks on $I=\mathbb{R}$ in arbitrary dimension in Section~\ref{sec:Rtippingcriteria}.
\item
Relax the assumption of moving sinks on $I=\mathbb{R}$ and associate different cases of R-tipping in~\eqref{eq:odewithrs} with connecting (heteroclinic) orbits in~\eqref{eq:odeextbtau}.
Give necessary and sufficient conditions for the occurrence of non-degenerate R-tipping in~\eqref{eq:odewithrs} in terms of non-degeneracy criteria for connecting (heteroclinic) orbits in~\eqref{eq:odeextbtau}.
Use this result to give general methods for computing critical rates for R-tipping in arbitrary dimension in Section~\ref{sec:computing}.
\end{itemize}
\subsection{Criteria for Tracking Moving Sinks and Moving Regular Thresholds}
\label{sec:TrackingProof}
We now demonstrate that a moving sink will be tracked by a
solution of the nonautonomous system if the rate parameter
$r$ is small enough. We also demonstrate that a (normally repelling) moving regular
threshold will be tracked by an R-tipping threshold if $r$ is
small enough.
To prove these results, we consider the compactified system~\eqref{eq:odeextbtau} as a singularly perturbed fast-slow system.
This allows us to use results from geometric singular perturbation
theory on the compactified system~\eqref{eq:odeextbtau} with small parameter
$0< r \ll 1$ from Section~\ref{sec:NHIM}, together with relations between nonautonomous~\eqref{eq:odewithrs} and compactified~\eqref{eq:odeextbtau}
system dynamics from Section~\ref{sec:compactdyns}.
\begin{theorem}
\label{thm:tracking}
Consider a nonautonomous system~(\ref{eq:odewithrs}) with an input $\Lambda(\tau)$ that is exponentially asymptotically constant
to $\lambda^+$.
Suppose there is a moving sink $e(\Lambda(\tau))$ on $I=(\tau_0,+\infty)$, and recall that $e(\Lambda(\tau)) \to e^+$ as $\tau \to +\infty$.
%
Fix any $\delta>0$.
\begin{itemize}
\item[(a)]
For any solution $x^{[r]}(\tau,x_0,\tau_0)$ with $x_0$ in the basin of attraction of $e(\Lambda(\tau_0))$, there is an $r^*(\delta)>0$ and a $\tau^*(r,\delta)\ge \tau_0$, such that $x^{[r]}(\tau,x_0,\tau_0)$
{\em $\delta$-close and end-point tracks} the moving sink $e(\Lambda(\tau))$ on $(\tau^*,+\infty)\subseteq I$ for any $r\in(0,r^*)$.
%
\item[(b)]
Suppose in addition that $\Lambda(\tau)$ is exponentially
bi-asymptotically constant, $e(\Lambda(\tau))$ is a moving sink on $I=\mathbb{R}$, and recall that $e(\Lambda(\tau))\to e^-$ as $\tau \to -\infty$. Then, there is an $r^*(\delta)>0$ such that
\begin{itemize}
\item[$\bullet$]
The unique local pullback attractor $x^{[r]}(\tau,e^-)$ from Proposition~\ref{prop:invsete-}(a) exists for any $r\in(0,r^*)$ and all $\tau\in\mathbb{R}$.
\item[$\bullet$]
The local pullback attractor $x^{[r]}(\tau,e^-)$
{\em $\delta$-close and end-point tracks} the moving sink $e(\Lambda(\tau))$ on $I=\mathbb{R}$ for any $r\in(0,r^*)$.
\end{itemize}
\end{itemize}
\end{theorem}
\proof
Choose the compactification parameter $\alpha$ that satisfies Proposition~\ref{prop:regular} for any $r>0$.\\
\noindent
(a)
Recall from Proposition~\ref{prop:fastslow}(a) that the moving sink $e(\Lambda(\tau))$ on $I=(\tau_0,+\infty)$ corresponds to a one-dimensional compact connected attracting normally hyperbolic critical manifold
$$
\tilde{E}_\alpha^{[0]} =
\left\{
\left(e(\Lambda_\alpha(s)),s\right)
\;:\; s\in[s_0,1]
\right\},
$$
in the extended phase space of the compactified
system~\eqref{eq:odeextbtau},
where $s_0=g_\alpha(\tau_0)$.
It then follows from~\cite{Fenichel1979} that,
for $r>0$ sufficiently small, $\tilde{E}_\alpha^{[0]}$ perturbs to a one-dimensional connected attracting normally hyperbolic invariant manifold $\tilde{E}_\alpha^{[r]}$ that lies $C^1$-close to $\tilde{E}_\alpha^{[0]}$ and, as $\tilde{e}^+$ is isolated, contains $\tilde{e}^+$.
Thus, for any $\delta>0$
and initial condition $(x_0,s_0)$ in the basin of attraction of $e(\Lambda_\alpha(s_0))$,
we can choose $r^*$ small enough so that:
(i) $\tilde{E}_\alpha^{[r]}$ is normally hyperbolic ($v_+$ from Proposition~\ref{prop:csdyn}(a) is the leading eigenvector), attracting, and lies $\delta$-close to $\tilde{E}_\alpha^{[0]}$ for any $r\in(0,r^*)$
and all $s\in[s_0,1]$, and
(ii) $(x_0,s_0)$ is in the basin of attraction of $\tilde{E}_\alpha^{[r]}$ for any $r\in(0,r^*)$.
Thus, $x^{[r]}(\tau,x_0,\tau_0)$ will be attracted to the solution
of~\eqref{eq:odewithrs} corresponding to $\tilde{E}_\alpha^{[r]}$,
and $\delta$-close and end-point track $e(\Lambda(\tau))$ on $(\tau^*,+\infty)$ for any $r\in(0,r^*)$ and sufficiently large $\tau^*\ge\tau_0$.\\
(b)
In this case, we have
$$
\tilde{E}_\alpha^{[0]} =
\left\{
\left(e(\Lambda_\alpha(s)),s\right)
\;:\; s\in[-1,1]
\right\},
$$
so that $\tilde{E}_\alpha^{[r]}$ is connected, attracting and normally hyperbolic, contains $\tilde{e}^-$ and $\tilde{e}^+$, and lies $\delta$-close to $\tilde{E}_\alpha^{[0]}$ for any $r\in(0,r^*)$ and all $s\in[-1,1]$.
Since $\tilde{e}^-$ is a hyperbolic equilibrium with one unstable direction, $\tilde{E}_\alpha^{[r]}$ contains the branch of the unique one-dimensional unstable manifold of $\tilde{e}^-$ in the compactified system~\eqref{eq:odeextbtau}. Hence, by Proposition~\ref{prop:invsete-}(a), $\tilde{E}_\alpha^{[r]}$ corresponds
to a unique local pullback attractor
$x^{[r]}(\tau,e^-)$ that limits to $e^-$ as $\tau\rightarrow -\infty$
in the nonautonomous system~\eqref{eq:odewithrs}.
It then follows from the properties of $\tilde{E}_\alpha^{[r]}$ that $x^{[r]}(\tau,e^-)$ exists
for all $\tau\in\mathbb{R}$, and $\delta$-close and end-point tracks $e(\Lambda(\tau))$ on $\mathbb{R}$ for any $r\in(0,r^*)$.
\qed
An alternative approach to prove the above is given in~\cite[Theorem III.1]{Alkhayuon2018} which uses results from~\cite{Aulbach2006}.
The next result is the analogue of Theorem~\ref{thm:tracking} for moving thresholds.
\begin{theorem}
\label{thm:trackingthresholds}
Consider a nonautonomous system~(\ref{eq:odewithrs}) with an input $\Lambda(\tau)$ that is exponentially asymptotically constant to $\lambda^+$.
Suppose the future limit system~\eqref{eq:odea+} has an equilibrium regular R-tipping edge state $\eta^+$.
Then,
\begin{itemize}
\item[(a)]
There is a $\tau_0$ (that may be $-\infty$), and
\begin{itemize}
\item [$\bullet$]
A moving equilibrium regular edge state $\eta(\Lambda(\tau))$ on $I=(\tau_0,+\infty)$ that limits to $\eta^+$.
\item [$\bullet$]
A moving regular threshold $\theta(\Lambda(\tau))$ on $I=(\tau_0,+\infty)$ that contains $\eta(\Lambda(\tau))$.
\end{itemize}
\item[(b)]
Additionally, there is an R-tipping threshold $\Theta^{[r]}(\tau)$ anchored at infinity by $\eta^+$. Furthermore, for any $\delta>0$ there is an $r^*(\delta)>0$ such that the R-tipping threshold $\Theta^{[r]}(\tau)$ lies $\delta$-close to the moving threshold $\theta(\Lambda(\tau))$:
\begin{equation}
d_H\left(\Theta^{[r]}(\tau),\theta(\Lambda(\tau))\right)<\delta,
\label{eq:threshtrack}
\end{equation}
for any $r\in(0,r^*)$ and all $\tau > \tau_0$.
\end{itemize}
\end{theorem}
\proof
(a)
Note that, from Definitions~\ref{defn:rthr} and~\ref{defn:edgestate}, the future limit system~\eqref{eq:odea+} has a regular threshold
$\theta^+$ containing $\eta^+$, and $\theta^+$ and $\eta^+$ are normally hyperbolic. On applying Proposition~\ref{prop:edgecontinues} for the case $\lambda^*=\lambda^+$, they
can be continued on some neighbourhood $Q$ of $\lambda^+$ to families of equilibrium regular edge states $\eta(\lambda)$ and regular thresholds $\theta(\lambda)$ that vary $C^1$-smoothly with $\lambda \in Q$.
Pick any such $Q\subseteq P_\Lambda$ together with a $\tau_0$ such that
$Q= \overline{\left\{\Lambda(\tau): \tau\in (\tau_0,+\infty)\right\}}$.
This gives a moving equilibrium regular edge state $\eta(\Lambda(\tau))$ on $I=(\tau_0,+\infty)$ that limits to $\eta^+$, and a moving regular threshold $\theta(\Lambda(\tau))$ on $I=(\tau_0,+\infty)$ that limits to $\theta^+$ and contains $\eta(\Lambda(\tau))$.\\
(b)
Choose the compactification parameter $\alpha$ that satisfies Proposition~\ref{prop:regular} for any $r>0$.
Let $s_0=g_\alpha(\tau_0)$, and note that
$$
\tilde{\Theta}^{[0]}_\alpha:=\{(\theta(\Lambda_\alpha(s)),s)~:~s \in [s_0, 1]\},
$$
is a normally hyperbolic forward-invariant manifold in the extended phase space of the $r=0$ compactified system~\eqref{eq:odeextbtau}, that corresponds to the moving regular threshold $\theta(\Lambda(\tau))$ on $I=(\tau_0,+\infty)$. Note that $\tilde{\Theta}^{[0]}_\alpha$ contains $\tilde{\theta}^+$ and $\tilde{\eta}^+$.
It then follows from~\cite[Thm 9.1]{Fenichel1979} that,
for any $\delta>0$, we can choose a sufficiently small $r^*>0$, so that there is a perturbed normally hyperbolic manifold $\tilde{\Theta}^{[r]}_\alpha$ that lies
$\delta$-close to $\tilde{\Theta}^{[0]}_\alpha$ in the sense of~\eqref{eq:threshtrack}
for any $r\in(0,r^*)$ and all $s\in[s_0,1]$
in the compactified system~\eqref{eq:odeextbtau}.
Furthermore, $\tilde{\Theta}^{[r]}_\alpha$ contains $\tilde{\eta}^+$, meaning that it is contained
within the stable manifold of $\tilde{\eta}^+$.
For any $r\in(0,r^*)$, pick a forward-invariant subset of
$\tilde{\Theta}^{[r]}_\alpha$ on $[s_0,1]$. On applying Proposition~\ref{prop:invsete-}(b), this forward-invariant subset corresponds to an R-tipping threshold $\Theta^{[r]}(\tau)$ that is anchored at infinity by $\eta^+$ and lies $\delta$-close to the moving threshold $\theta(\Lambda(\tau))$ for all $\tau >\tau_0$ in the nonautonomous system~\eqref{eq:odewithrs}.
\qed
\subsection{Threshold Instability as a Criterion for R-tipping}
\label{sec:Rtippingcriteria}
This section continues our goal of an applicable mathematical framework, following the approach in~\cite{Ashwin2016}.
Specifically, we use simple properties of the autonomous frozen system~\eqref{eq:odea}, and the external input $\Lambda$, to give rigorous yet easily
testable criteria for R-tipping in the nonautonomous
system~\eqref{eq:odewithr} or~\eqref{eq:odewithrs}.
These criteria are for moving sinks on $I=\mathbb{R}$ and R-tipping from $e^-$ via loss of end-point tracking, due to crossing regular R-tipping thresholds anchored at infinity by an equilibrium regular R-tipping edge state.
Reference~\cite[Theorem 3.2]{Ashwin2016} uses the notion of ``forward basin
stability" to give sufficient conditions for such
R-tipping to occur, and to be excluded, in one-dimensional (scalar) systems. Recent work~\cite{Xie2019,Kiers2018} suggests that simple
testable criteria to exclude such R-tipping will be much harder to formulate for
higher-dimensional systems unless there are additional constraints.
The main reason is that, in higher dimensions, forward basin stability does not exclude the possibility of R-tipping.
Below, we use the notion of ``threshold instability" introduced in
Sec.~\ref{sec:thr_inst} to give sufficient conditions for the occurrence of such R-tipping
in arbitrary dimension.
In case (a), we give a sufficient condition to identify autonomous frozen systems that can exhibit such R-tipping for suitably chosen external inputs $\Lambda$.
In case (b), we give a sufficient condition for such R-tipping to occur in a nonautonomous system with a (possibly reparametrized) given external input $\Lambda$. This case is a generalization of \cite[Theorem 3.2 part 2]{Ashwin2016}.
\begin{theorem}
\label{thm:Rtip}
Consider a nonautonomous system~(\ref{eq:odewithrs}) with a parameter path $P$. Suppose the autonomous frozen system~\eqref{eq:odea} has a hyperbolic sink $e(\lambda)$ that varies $C^1$-smoothly with
$\lambda\in P$, and an equilibrium regular edge state $\eta(\lambda)$ with a regular threshold $\theta(\lambda)$.
\begin{enumerate}
\item[(a)]
If $e(\lambda)$ is threshold unstable on $P$ due to $\theta(\lambda)$, then there is an exponentially
bi-asymptotically constant input $\Lambda(\tau)$ that
traces out $P_\Lambda = P$ and gives R-tipping from $e^-$
in the nonautonomous system~(\ref{eq:odewithrs}).
%
\item[(b)]
Consider a given exponentially bi-asymptotically constant input $\Lambda(\tau)$ tracing out $P_\Lambda = P$ such that $e(\Lambda(\tau))$ is forward
threshold unstable due to $\theta(\Lambda(\tau))$, and
$\eta(\Lambda(\tau))$ limits to $\eta^+$. Then, there is R-tipping from $e^-$ in the nonautonomous system~\eqref{eq:odewithrs} for $\Lambda$ with suitably reparametrised time, i.e. for some $\tilde{\Lambda}(\tau) = \Lambda(\sigma(\tau))$ tracing out the same path $P_{\tilde{\Lambda}}=P_{\Lambda}=P$, where $\sigma$ is a strictly monotonic increasing function.
\end{enumerate}
\end{theorem}
\begin{rmk}
Note that:
\begin{itemize}
\item
The R-tipping criteria in Theorem~\ref{thm:Rtip} are sufficient but not necessary: there are examples of (non-degenerate)
R-tipping for a moving sink on $I=\mathbb{R}$ in the absence of forward threshold instability and presence of forward basin stability~\cite{Kiers2018,Xie2019}.
\item
The conditions in Theorem~\ref{thm:Rtip} do not necessarily imply that the R-tipping is non-degenerate.
Nonetheless, we expect that a solution $x^{[r]}(e^-)$ and the codimension-one R-tipping threshold $\Theta^{[r]}(\tau)$ will cross transversely on varying $r$, and suggest that ``threshold instability" will typically give non-degenerate R-tipping.
\item
The R-tipping in Theorem~\ref{thm:Rtip} is from $e^-$ and for a moving sink on $I=\mathbb{R}$, which is often the case of interest.
We discuss generalisations of Theorem~\ref{thm:Rtip} to
R-tipping from a fixed $(x_0,\tau_0)$ and/or for a moving sink on a finite or semi-infinite time interval $I$
in Section~\ref{sec:Conclusions}.
\item
In the simplest cases in Theorem~\ref{thm:Rtip}(b),
we may be able to choose $\tilde{\Lambda} = {\Lambda}$ and obtain R-tipping for a suitable choice of the rate parameter $r=r^*$~\cite{Xie2019}, but more generally, $\tilde{\Lambda}$ is a time reparametrisation of $\Lambda$ with the same limiting behaviour.
In other words, we can ensure that the pullback attractor is on different sides of the R-tipping threshold for a fixed $r$ and different $\tilde{\Lambda}$, but it is more complex to ensure that
this occurs for a fixed $\tilde{\Lambda}=\Lambda$ and different $r$.
\item
In the next section, we relax the assumption of moving sinks on $I=\mathbb{R}$ and associate different R-tipping in nonautonomous system~\eqref{eq:odewithrs} with a connecting (heteroclinic) orbit in the autonomous compactified system~\eqref{eq:odeextbtau}. We also give necessary and sufficient conditions for the occurrence of non-degenerate R-tipping in~\eqref{eq:odewithrs}
in terms of non-degeneracy criteria for connecting (heteroclinic) orbits in~\eqref{eq:odeextbtau}
in Proposition~\ref{prop:rtip_compact}.
\end{itemize}
\end{rmk}
\vspace{5mm}
\noindent
{\em Proof of Theorem~\ref{thm:Rtip}.}
(a) Threshold instability of $e(\lambda)$ on $P$ due to $\theta(\lambda)$ implies that there is a $C^1$-smooth family of $\theta(\lambda)$, as well as $\lambda_a$ and $\lambda_b$ in $P$ and in the domain of existence of $\theta(\lambda)$, such that
$d_s(e(\lambda_a),\theta(\lambda_b))=0$ and
$d_s(e(\lambda_1),\theta(\lambda_2))$ takes both
signs in any neighbourhood
of $(\lambda_a,\lambda_b)$ in $P^2$.
Recall from~\eqref{eq:sdtau} the signed distance notation $\Delta_{\Lambda}(\tau_1,\tau_2)$ at different points in time, and from
Appendix~\ref{sec:A3} that $\Delta_{\Lambda}(\tau_1,\tau_2)$ is smooth and well defined near $(\tau_a,\tau_b)$.
Now choose any $C^1$-smooth function $\Lambda(\tau)$ such that:
\begin{itemize}
\item
$\Lambda(\tau)$ traces out $P_{\Lambda}=P$ and is exponentially bi-asymptotically constant to $\lambda^\pm$.
\item
There are $\tau_a<\tau_b$ such that\,\footnote{Note that $\Lambda(\tau)$ passes through $\lambda_{a}$ before $\lambda_b$, though it may pass through either or both of these values several times.}
$\Lambda(\tau_a)=\lambda_{a}$ and $\Lambda(\tau_b)=\lambda_{b}$.
\item
$\Delta_{\Lambda}(\tau_a,\tau_b) = 0$, and
$\Delta_{\Lambda}(\tau_1,\tau_2)$
takes both signs in any neighbourhood
of $(\tau_a,\tau_b)\in\mathbb{R}^2$.
\item
The future limit $\lambda^+$ of $\Lambda(\tau)$
is in the domain of existence of $\eta(\lambda)\in\theta(\lambda)$.
\end{itemize}
For such external input $\Lambda(\tau)$, the moving sink $e(\Lambda(\tau))$ is forward threshold unstable due to a moving regular threshold $\theta(\Lambda(\tau))$ with a moving equilibrium regular edge state $\eta(\Lambda(\tau))$ that limits to
an equilibrium regular R-tipping edge state
$\eta^+$. We then apply case (b) of this theorem for this $\Lambda(\tau)$ to obtain the result.
(b) Choose any convex neighbourhood $\mathcal{N}$ of $(\tau_a,\tau_b)$ in $\mathbb{R}^2$.
Forward threshold instability of $e(\Lambda(\tau))$ due to $\theta(\Lambda(\tau))$ means that
we can choose a small enough $\delta>0$, as well as the time pairs
$(\tau_a^-,\tau_b^-)$ and $(\tau_a^+,\tau_b^+)$ in $\mathcal{N}$, such that
\begin{equation}
\Delta_{\Lambda}(\tau_a^+,\tau_b^+) = \delta > 0
\quad\mbox{and}\quad
\Delta_{\Lambda}(\tau_a^-,\tau_b^-) = -\delta < 0;
\label{eq:Delta_delta}
\end{equation}
see Figure~\ref{fig:N} for an illustration of this.
\begin{figure}[t]
\begin{center}
\includegraphics[width=8cm]{./figs/fig07}
\end{center}
\vspace{-5mm}
\caption{The $(\tau_1,\tau_2)$-plane with a (blue) convex neighbourhood ${\cal{N}}$ of $(\tau_a,\tau_b)$ in the region where $\tau_2>\tau_1$. Shown are examples of time pairs: $(\tau_a,\tau_b)$ where $\Delta_{\Lambda}(\tau_a,\tau_b) =0$, $(\tau_a^+,\tau_b^+)$ where $\Delta_{\Lambda}(\tau_a^+,\tau_b^+) =\delta > 0$, and $(\tau_a^-,\tau_b^-)$ where $\Delta_{\Lambda}(\tau_a^-,\tau_b^-) =-\delta <0$. A time pair $(\tau_a^*,\tau_b^*)$, where $d_s(x^{[r^*,\tilde\Lambda]}(\epsilon,e^-),\Theta^{[r^*,\tilde\Lambda]}(\epsilon))=0$, is guaranteed to lie somewhere on the (green) line from $(\tau_a^+,\tau_b^+)$ to $(\tau_a^-,\tau_b^-)$.
}
\label{fig:N}
\end{figure}
Next, consider a time reparametrisation of the prescribed external input $\Lambda(\tau)$:
\begin{equation}
\tilde\Lambda(\tau)=\Lambda(\sigma_{\tau_\alpha,\tau_\beta,\epsilon}(\tau)),
\label{eq:lambdaconstruct}
\end{equation}
using a parametrised family of strictly monotone increasing functions $\sigma_{\tau_\alpha,\tau_\beta,\epsilon}(\tau)$ with range $\mathbb{R}$ and three parameters\,\footnote{Note that the subscript in $\tau_\alpha$ is not related to the compactification parameter $\alpha$.} $\epsilon>0$ and $\tau_\alpha<\tau_\beta\in{\cal{N}}$.
We define this reparameterisation of time
by means of a function
\begin{equation}
\sigma_{\tau_\alpha,\tau_\beta,\epsilon}(\tau):= \tau_{\alpha}+\epsilon \tau +\left(\tau_\beta-\tau_\alpha-\epsilon^2\right)
\xi\left(\tau/\epsilon
\right),
\label{eq:sigma}
\end{equation}
where $\xi(v)$ is a smooth function such that $\xi(v)=0$ for $v\leq 0$, $\xi(v)=1$ for $v\geq 1$ and $\xi(v)$ is strictly monotone increasing for $v\in(0,1)$. For example, we can take
$$
\xi(v):= \frac{\chi(v)}{\chi(v)+\chi(1-v)},
$$
where
$$
\chi(v):=
\left\{
\begin{array}{rcl}
\exp(-1/v) &\mbox{for}& v>0,\\
0 &\mbox{for}& v \le 0,
\end{array}\right.
$$
takes values in the interval $[0,1)$ and is strictly monotone increasing for $v>0$.
One can check that $\sigma_{\tau_\alpha,\tau_\beta,\epsilon}(\tau)$ defined by~\eqref{eq:sigma}
is $C^{\infty}$-smooth in all three parameters, strictly monotone increasing in $\tau$ as long as $\epsilon^2<\tau_\beta-\tau_\alpha$,
linear with slope $\epsilon$ for $\tau\le 0$ and for $\tau\ge \epsilon$:
\begin{equation}
\sigma_{\tau_\alpha,\tau_\beta,\epsilon}(\tau)=
\left\{ \begin{array}{rll}
\tau_\alpha+\epsilon\tau& ~\mbox{ if } & \tau\leq 0,\\
\tau_\beta+\epsilon(\tau-\epsilon)& ~\mbox{ if } & \tau\geq \epsilon,
\end{array}\right.
\nonumber
\end{equation}
and satisfies
\begin{equation}
\label{eq:aux0}
\sigma_{\tau_\alpha,\tau_\beta,\epsilon}(0)=\tau_{\alpha}\quad\mbox{and}\quad\sigma_{\tau_\alpha,\tau_\beta,\epsilon}(\epsilon)=\tau_{\beta}.
\end{equation}
In other words, for $\epsilon$ small compared to $\tau_\beta-\tau_\alpha$, there is slow change for $\tau \le 0$, rapid change for $\tau\in(0,\epsilon)$, and slow change thereafter.
In the limit $\epsilon=0$, the reparameterisation function~\eqref{eq:sigma} has a jump discontinuity at $\tau=0$.
Most importantly, if $\Lambda(\tau)$ is exponentially bi-asymptotically constant with decay coefficient $\rho>0$, then $\tilde{\Lambda}(\tau)$ is also exponentially bi-asymptotically constant with decay coefficient $\epsilon \rho$, so the results
in Sections~\ref{sec:compactdyns} and~\ref{sec:TrackingProof} apply to $\tilde{\Lambda}(\tau)$.
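The stated properties of the reparametrisation~\eqref{eq:sigma} are easy to verify numerically. The following sketch (illustrative only; the function names and the chosen parameter values are ours, not part of the construction) checks~\eqref{eq:aux0}, the linear tails, and strict monotonicity:

```python
import numpy as np

def chi(v):
    # chi(v) = exp(-1/v) for v > 0, and 0 for v <= 0
    v = np.asarray(v, dtype=float)
    return np.where(v > 0.0, np.exp(-1.0 / np.maximum(v, 1e-300)), 0.0)

def xi(v):
    # smooth switch: xi(v) = 0 for v <= 0, xi(v) = 1 for v >= 1,
    # strictly monotone increasing on (0, 1)
    return chi(v) / (chi(v) + chi(1.0 - v))

def sigma(tau, tau_a, tau_b, eps):
    # sigma(tau) = tau_alpha + eps*tau + (tau_beta - tau_alpha - eps^2) * xi(tau/eps)
    return tau_a + eps * tau + (tau_b - tau_a - eps**2) * xi(tau / eps)

tau_a, tau_b, eps = 1.0, 3.0, 0.1                # requires eps^2 < tau_b - tau_a
assert np.isclose(sigma(0.0, tau_a, tau_b, eps), tau_a)   # sigma(0) = tau_alpha
assert np.isclose(sigma(eps, tau_a, tau_b, eps), tau_b)   # sigma(eps) = tau_beta
# linear tails with slope eps for tau <= 0 and tau >= eps
assert np.isclose(sigma(-2.0, tau_a, tau_b, eps), tau_a - 2.0 * eps)
assert np.isclose(sigma(1.0, tau_a, tau_b, eps), tau_b + eps * (1.0 - eps))
# strictly monotone increasing in tau
grid = np.linspace(-1.0, 1.0, 2001)
assert np.all(np.diff(sigma(grid, tau_a, tau_b, eps)) > 0.0)
```

The slow--fast--slow shape is visible on the grid: slope $\epsilon$ on the tails, with the jump from $\tau_\alpha$ to $\tau_\beta$ concentrated on $(0,\epsilon)$.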
For the reparametrised input $\tilde{\Lambda}$ in~\eqref{eq:lambdaconstruct}, we write the unique pullback attractor
from Proposition~\ref{prop:invsete-}(a) as $x^{[r,\tilde\Lambda]}(\tau,e^-)$ to indicate that, in addition to $r$, it depends on
$\epsilon$ and $\tau_\alpha < \tau_\beta$ through $\tilde{\Lambda}$.
We fix the rate parameter $r = r^* > 0$ and show that there is a choice of the parameters $\epsilon$ and $\tau_{\alpha} < \tau_{\beta}$ in $\tilde{\Lambda}$ such that the ensuing $\tilde\Lambda(\tau)$
gives R-tipping at this $r=r^*$.
By the argument in Theorem~\ref{thm:tracking}(b), the solution $x^{[r^*,\tilde\Lambda]}(\tau,e^-)$ exists and $\delta$-close tracks the moving sink $e(\tilde\Lambda(\tau))$ for all $\tau \le 0$ if $\epsilon$ is small enough. Similarly, by the argument in Theorem~\ref{thm:trackingthresholds}(b), a regular R-tipping threshold
$\Theta^{[r^*,\tilde\Lambda]}(\tau)$ anchored by $\eta^+$ at infinity exists and $\delta$-close tracks the moving regular threshold $\theta(\tilde\Lambda(\tau))$
for all $\tau \ge \epsilon$ if $\epsilon$ is small enough.
This means that there is an $\epsilon_1 > 0$ such that if $0<\epsilon<\epsilon_1$ then
\begin{equation}
\label{eq:d1}
d\left(x^{[r^*,\tilde\Lambda]}(\tau,e^-),e(\tilde{\Lambda}(\tau))\right)<\frac{1}{3}\delta\quad\mbox{for all}\quad \tau\leq 0\quad\mbox{and}\quad (\tau_\alpha,\tau_\beta)\in{\cal{N}},
\end{equation}
and
\begin{equation}
\label{eq:dH}
d_H\left(\Theta^{[r^*,\tilde\Lambda]}(\tau),\theta(\tilde\Lambda(\tau))\right)<\frac{1}{3}\delta\quad\mbox{for all}\quad \tau\geq \epsilon
\quad\mbox{and}\quad (\tau_\alpha,\tau_\beta)\in{\cal{N}}.
\end{equation}
Furthermore, local continuity of $x^{[r^*,\tilde{\Lambda}]}(\tau,e^-)$ on varying time and the three parameters in $\tilde{\Lambda}$ means that there is an $\epsilon_2 > 0$ such that if $0<\epsilon<\epsilon_2$ then
\begin{equation}
\label{eq:d2}
d\left(x^{[r^*,\tilde\Lambda]}(0,e^-),x^{[r^*,\tilde{\Lambda}]}(\epsilon,e^-)\right)<\frac{1}{3}\delta\quad\mbox{for all}\quad (\tau_\alpha,\tau_\beta)\in{\cal{N}}.
\end{equation}
We choose $0 < \epsilon < \min\{\epsilon_1,\epsilon_2 \}$.
We now examine the signed distance between $x^{[r^*,\tilde\Lambda]}(\tau,e^-)$ and $\Theta^{[r^*,\tilde\Lambda]}(\tau)$ at time $\tau=\epsilon$, its dependence on the two remaining parameters $\tau_\alpha < \tau_\beta$, and choose $(\tau_\alpha,\tau_\beta)\in{\cal{N}}$ that give R-tipping.
Recall the triangle inequality $d(a,b)\le d(a,c) + d(c,b)$ for points $a,b,c\in \mathbb{R}^n$, and also note that $|d_s(a,S)-d_s(a',S)|\le d(a,a')$ and $|d_s(a,S)-d_s(a,S')|\leq d_H(S,S')$ for any codimension one sets $S,S'$, and points $a,a'$ in some convex neighbourhood of $S$ and $S'$, respectively, where $d_s(a,S)$ and $d_s(a,S')$ are defined. Using these inequalities together with~\eqref{eq:d1} and~\eqref{eq:d2},
note that
\begin{align}
\begin{split}
& \left|d_s\left(x^{[r^*,\tilde\Lambda]}(\epsilon,e^-),\theta(\tilde\Lambda(\epsilon))\right) - d_s\left(e(\tilde\Lambda(0)),\theta(\tilde\Lambda(\epsilon))\right)\right| \\
& \le
d\left(x^{[r^*,\tilde\Lambda]}(\epsilon,e^-),e(\tilde\Lambda(0))\right)\\
&\le
d\left(x^{[r^*,\tilde\Lambda]}(\epsilon,e^-),x^{[r^*,\tilde\Lambda]}(0,e^-)\right)
+
d\left(x^{[r^*,\tilde\Lambda]}(0,e^-),e(\tilde\Lambda(0))\right)
\\
&< \frac{1}{3}\delta+\frac{1}{3}\delta = \frac{2}{3}\delta \quad\mbox{for all}\quad (\tau_\alpha,\tau_\beta)\in{\cal{N}}.
\end{split}
\label{eq:aux1}
\end{align}
Similarly, using~\eqref{eq:dH}, note that
\begin{align}
\begin{split}
& \left|d_s\left(e(\tilde\Lambda(0)),\Theta^{[r^*,\tilde\Lambda]}(\epsilon)\right) - d_s\left(e(\tilde\Lambda(0)),\theta(\tilde\Lambda(\epsilon))\right)\right| \\
&\le
d_H\left(\Theta^{[r^*,\tilde\Lambda]}(\epsilon),\theta(\tilde\Lambda(\epsilon))\right) \\
&< \frac{1}{3}\delta \quad\mbox{for all}\quad (\tau_\alpha,\tau_\beta)\in{\cal{N}}.
\end{split}
\label{eq:aux2}
\end{align}
The triangle inequality $|a-b| \le |a-c| + |c-b|$ for $a,b,c\in \mathbb{R}$, together with~\eqref{eq:aux1} and~\eqref{eq:aux2}, gives
\begin{align}
\begin{split}
&\left|
d_s\left(x^{[r^*,\tilde\Lambda]}(\epsilon,e^-),\Theta^{[r^*,\tilde\Lambda]}(\epsilon)\right)
-
d_s\left(e(\tilde\Lambda(0)),\theta(\tilde\Lambda(\epsilon))\right)
\right|\\
&\le
d_H\left(\Theta^{[r^*,\tilde\Lambda]}(\epsilon),\theta(\tilde\Lambda(\epsilon))\right)
+
d\left(x^{[r^*,\tilde\Lambda]}(\epsilon,e^-),e(\tilde\Lambda(0))\right)
\\
& < \frac{1}{3}\delta + \frac{2}{3}\delta = \delta\quad\mbox{for all}\quad (\tau_\alpha,\tau_\beta)\in{\cal{N}}.
\end{split}
\label{eq:aux3}
\end{align}
\begin{figure}[t]
\centering
\includegraphics[width=14.5cm]{figs/fig08.eps}
\caption{Schematic illustration of the proof of Theorem~\ref{thm:Rtip} for $x\in\mathbb{R}^2$. (a) Forward threshold instability of the moving sink $e(\Lambda(\tau_1))$ due to crossing the moving regular threshold $\theta(\Lambda(\tau_2))$
for $(\tau_1,\tau_2)=(\tau_a,\tau_b)$. (b) For some fixed $r=r^*>0$, there is a reparametrisation $\tilde{\Lambda}(\tau) = \Lambda(\sigma_{\tau_\alpha,\tau_\beta,\epsilon}(\tau))$ such that the (cyan) pullback attractor $x^{[r,\tilde{\Lambda}]}(\tau,e^-)$ enters a regular R-tipping threshold $\Theta^{[r,\tilde{\Lambda}]}(\tau)$ (see the snapshot at time $\tau=\epsilon$) for a suitable choice of $\epsilon$ and $(\tau_\alpha,\tau_\beta)=(\tau_a^*,\tau_b^*)$ shown in Figure~\ref{fig:N}. The pullback attractor then tracks $\eta(\tilde{\Lambda}(\tau))$ and limits to the equilibrium regular R-tipping edge state $\eta^+$. For non-degenerate R-tipping, the pullback attractor switches (red/green) edge tail on crossing $r^*$.
}
\label{fig:threshold}
\end{figure}
Finally, note from~\eqref{eq:aux0} that
$$
d_s\left(e(\tilde\Lambda(0)),\theta(\tilde\Lambda(\epsilon))\right) =\Delta_\Lambda(\tau_\alpha,\tau_\beta),
$$
and use~\eqref{eq:aux3} to arrive at
\begin{align}
\label{eq:aux4}
\begin{split}
&\Delta_\Lambda(\tau_\alpha,\tau_\beta) - \delta < d_s\left(x^{[r^*,\tilde\Lambda]}(\epsilon,e^-),\Theta^{[r^*,\tilde\Lambda]}(\epsilon)\right)<\Delta_\Lambda(\tau_\alpha,\tau_\beta) + \delta\\&\mbox{for all}\quad (\tau_\alpha,\tau_\beta)\in{\cal{N}}.
\end{split}
\end{align}
For $(\tau_\alpha,\tau_\beta)=(\tau_{a}^+,\tau_{b}^+)$, it follows from~\eqref{eq:Delta_delta} and~\eqref{eq:aux4} that
$$
0 < d_s\left(x^{[r^*,\tilde\Lambda]}(\epsilon,e^-),\Theta^{[r^*,\tilde\Lambda]}(\epsilon)\right) <2\delta.
$$
The same argument applied to $(\tau_\alpha,\tau_\beta)=(\tau_{a}^-,\tau_{b}^-)$ gives
$$
-2\delta < d_s\left(x^{[r^*,\tilde\Lambda]}(\epsilon,e^-),\Theta^{[r^*,\tilde\Lambda]}(\epsilon)\right) < 0.
$$
Now consider pairs $(\tau_\alpha,\tau_\beta)$ on the line in $\mathcal{N}$ from $(\tau_a^+,\tau_b^+)$ to $(\tau_a^-,\tau_b^-)$; see the green line in Figure~\ref{fig:N}. Noting that
$$
d_s\left(x^{[r^*,\tilde\Lambda]}(\epsilon,e^-),\Theta^{[r^*,\tilde\Lambda]}(\epsilon)\right)
$$
is continuous on this line, the intermediate value theorem
guarantees a choice of $(\tau_{\alpha},\tau_\beta)= (\tau_{\alpha}^*,\tau_\beta^*)$ on this line such that
$$
d_s\left(x^{[r^*,\tilde\Lambda]}(\epsilon,e^-),\Theta^{[r^*,\tilde\Lambda]}(\epsilon)\right)=0.
$$
It then follows from the properties of $\Theta^{[r]}(\tau)$ in Definition~\ref{def:rtipthres} that
$$
x^{[r^*,\tilde\Lambda]}(\tau,e^-)\rightarrow \eta^+\;\;\mbox{as}\;\; \tau\to +\infty,
$$
for the chosen $0 < \epsilon < \min\{\epsilon_1,\epsilon_2 \}$ and $(\tau_{\alpha},\tau_\beta)= (\tau_{\alpha}^*,\tau_\beta^*)\in{\cal{N}}$; see Figure~\ref{fig:threshold} for an illustration of this.
Hence we conclude there is R-tipping for this $\tilde{\Lambda}(\tau)$ at $r=r^*$.
\qed
\subsection{Connecting Orbit as a General Criterion for R-tipping and a General Method for Computing Critical Rates}
\label{sec:computing}
While B-tipping can be found and continued in system parameters by applying tools from the theory of autonomous bifurcations~\cite{Kuznetsov2004,auto,matcont} to the autonomous frozen system~\eqref{eq:odea}, this is not the case for nonautonomous R-tipping.
Furthermore, whereas Section~\ref{sec:Rtippingcriteria}
considers R-tipping for moving sinks on $I=\mathbb{R}$ (e.g. Figure~\ref{fig:Rtip}(a)), some R-tipping occurs from moving sinks on a semi-infinite or even finite time interval $I\subset\mathbb{R}$ (e.g. Figure~\ref{fig:Rtip}(b)). Therefore, there is a need for
general criteria and methods to find different nonautonomous R-tipping and continue them in system parameters.
To address this need, in this section we continue to develop an applicable mathematical framework.
Our focus remains on R-tipping via loss of end-point tracking, due to crossing regular R-tipping thresholds anchored at infinity by an equilibrium regular R-tipping edge state.
However, there are two differences from Section~\ref{sec:Rtippingcriteria}. First, we relax the assumption of moving sinks on $I=\mathbb{R}$. Second,
we use properties of the autonomous compactified system~\eqref{eq:odeextbtau} to give rigorous criteria for R-tipping and critical rates in the nonautonomous
system~\eqref{eq:odewithr} or~\eqref{eq:odewithrs}.
The proof of Theorem~\ref{thm:Rtip} used the
compactification technique of Section~\ref{sec:compact} to
show there is R-tipping in the nonautonomous system~\eqref{eq:odewithrs} by computing codimension-one
heteroclinic connections in the compactified system~\eqref{eq:odeextbtau}.
A similar approach has previously been used on a case-by-case basis to compute critical rates in specific examples of R-tipping~\cite{Ashwin2012,Perryman2014,Ashwin2016,Alkhayuon2018,Xie2019}. We show here that connecting (heteroclinic) orbits of~\eqref{eq:odeextbtau} can be used to:
\begin{itemize}
\item
Give necessary and sufficient conditions for the occurrence of non-degenerate
R-tipping from $e^-$ or $(x_0,\tau_0)$ for moving sinks on any time interval $I\subseteq\mathbb{R}$.
\item
Give a general method for computing critical rates for R-tipping. This method also applies to more complicated regular R-tipping edge states such as limit cycles or quasiperiodic tori.
\end{itemize}
To be more specific, recall the notation from Section~\ref{sec:compact} for equilibria of the limit systems embedded in the extended phase space of the compactified system
$$
\tilde{e}^\pm=(e^\pm,\pm 1)
\quad\mbox{and}\quad
\tilde{\eta}^+=(\eta^+,1),
$$
and keep in mind that $s_0=g_\alpha(\tau_0)$.
In the case of an asymptotically constant input with a future limit $\lambda^+$, R-tipping from a fixed $(x_0,\tau_0)$ in nonautonomous system~\eqref{eq:odewithrs} depends on where $(x_0,s_0)$ lies in relation to the
stable manifold $W_\alpha^{s,[r]}(\tilde{\eta}^{+})$ in the extended phase space of the autonomous compactified system~\eqref{eq:odeextbtau}. Here, $(x_0,s_0)$ is
fixed\,\footnote{Note that a fixed $(x_0,t_0)$ in nonautonomous system~\eqref{eq:odewithr} gives a rate-dependent
$(x_0,s_0^{[r]})$ in the compactified system~\eqref{eq:odeextb}.}, but the position of $W_\alpha^{s,[r]}(\tilde{\eta}^{+})$ typically changes with $r$.
R-tipping from $(x_0,\tau_0)$ occurs when there is
a {\em connecting orbit} from $(x_0,s_0)$ to
$\tilde{\eta}^{+}$
in the compactified system. Such connecting orbits arise when $(x_0,s_0)$ crosses
$W_\alpha^{s,[r]}(\tilde{\eta}^{+})$ under varying $r$.
In the bi-asymptotically constant input case, R-tipping from $e^-$
in nonautonomous system~\eqref{eq:odewithrs} depends on where the one-dimensional unstable manifold
$W_\alpha^{u,[r]}(\tilde{e}^{-})$ lies in relation to
$W_\alpha^{s,[r]}(\tilde{\eta}^{+})$
in the extended phase space of the compactified system~\eqref{eq:odeextbtau}.
Here, the positions of both
$W_\alpha^{u,[r]}(\tilde{e}^{-})$ and $W_\alpha^{s,[r]}(\tilde{\eta}^{+})$
typically change with $r$.
R-tipping from $e^-$ occurs when there is a
{\em connecting heteroclinic orbit} from $\tilde{e}^-$ to $\tilde\eta^+$
in the compactified system. Such connecting orbits arise when $W_\alpha^{u,[r]}(\tilde{e}^{-})$ and $W_\alpha^{s,[r]}(\tilde{\eta}^{+})$ cross each other under varying $r$.
These observations allow us to state necessary and sufficient conditions for the occurrence of non-degenerate R-tipping in~\eqref{eq:odewithrs} in terms of non-degeneracy criteria for connecting (heteroclinic) orbits in~\eqref{eq:odeextbtau}.
To formulate these criteria in a Proposition, we use
$$
\mbox{trj}^{[r]}_\alpha(x_0,s_0)\subset\mathbb{R}^n\times[-1,1],
$$
to denote a trajectory started from $(x_0,s_0)$ in the phase space of the compactified system~\eqref{eq:odeextbtau} parametrised by the rate $r>0$.
If this trajectory converges to $\tilde{e}^-$ backward in time, we write
$$
\mbox{trj}^{[r]}_\alpha(\tilde{e}^-) \subset W_\alpha^{u,[r]}(\tilde{e}^{-}),
$$
using the relation from Proposition~\ref{prop:invsete-}(a).
We also write
$$
\mbox{trj}^{[r]}_\alpha \subset\mathbb{R}^n\times[-1,1],
$$
to mean either $\mbox{trj}^{[r]}_\alpha(x_0,s_0)$ or $\mbox{trj}^{[r]}_\alpha(\tilde{e}^-)$, depending on the context.
\begin{proposition}
\label{prop:rtip_compact}
Consider the nonautonomous system~(\ref{eq:odewithrs}) with an input $\Lambda(\tau)$ satisfying either of the following conditions:
\begin{itemize}
\item [1.]
$\Lambda(\tau)$ is exponentially asymptotically constant to $\lambda^{+}$. The future limit system~\eqref{eq:odea+} has
an equilibrium regular R-tipping edge state $\eta^+$.
\item[2.]
$\Lambda(\tau)$ is exponentially bi-asymptotically constant to $\lambda^{-}$ and $\lambda^+$. In addition to condition 1, the past limit system~\eqref{eq:odea-} has a hyperbolic sink $e^-$.
\end{itemize}
Let $\mbox{trj}^{[r]}_\alpha = \mbox{trj}^{[r]}_\alpha(x_0,s_0)$ in cases 1 or 2, or $\mbox{trj}^{[r]}_\alpha = \mbox{trj}^{[r]}_\alpha(\tilde{e}^-)\subset W_\alpha^{u,[r]}(\tilde{e}^{-})$ in case 2.
The nonautonomous system~\eqref{eq:odewithrs} undergoes non-degenerate R-tipping at $\eta^+$ with critical rate $r_c > 0$ if and only if, in the compactified system~\eqref{eq:odeextbtau}:
\begin{itemize}
\item [(a)]
For $r=r_c$, $\mbox{trj}^{[r_c]}_\alpha$ is a (heteroclinic) connection to the regular R-tipping edge state $\tilde{\eta}^+$:
$$
\mbox{trj}^{[r_c]}_\alpha
\subset W_\alpha^{s,[r_c]}(\tilde{\eta}^{+}).
$$
\item [(b)]
There is a $\delta>0$ such that for $r\in(r_c-\delta,r_c)$ and $r\in(r_c, r_c+\delta)$,
$\mbox{trj}^{[r]}_\alpha$
lies on different sides of $W_\alpha^{s,[r]}(\tilde{\eta}^{+})$.
\item[(c)]
Each branch of $W^u(\tilde{\eta}^+)$ is a connection from $\tilde{\eta}^+$ to an attractor.
\end{itemize}
\end{proposition}
\begin{rmk}
Various conditions are usually proposed for a heteroclinic orbit to be considered as non-degenerate. These are typically assumptions about the orbit and the limiting states as well as more subtle assumptions on parameter variation and the geometry of linearised behaviour; see for example~\cite{homburg2010}.
Our notion of non-degenerate R-tipping in Definition~\ref{defn:Rtiptypes}(a) requires that the connecting (heteroclinic) orbit to $\tilde{\eta}^+$ in system~\eqref{eq:odeextbtau} is non-degenerate in the sense that:
\begin{itemize}
\item [(i)]
It is found at codimension one in $r$.
\item [(ii)]
The trajectory of interest $\mbox{trj}^{[r]}_\alpha$ crosses from one side of $W_\alpha^{s,[r]}(\tilde{\eta}^{+})$ to the other at $r=r_c$.
We do not require that the crossing occurs with non-zero speed in $r$, though this will typically be the case.
\item [(iii)] There are no homoclinic connections from $\tilde{\eta}^+$ to itself or heteroclinic connections from $\tilde{\eta}^+$ to other saddle(s).
Note that this assumption about $W^u(\tilde{\eta}^+)$ is not explicitly about the connecting orbit of interest or its limiting state(s).
\end{itemize}
\end{rmk}
\vspace{5mm}
\noindent
{\em Proof of Proposition~\ref{prop:rtip_compact}.}
Choose any compactification parameter $\alpha$ such that Proposition~\ref{prop:regular} applies.
Recall from Section~\ref{sec:compautsyst} that $s(\tau)=g_\alpha(\tau)$, and relate $\mbox{trj}^{[r]}_\alpha$
to a solution $x^{[r]}(\tau)$ of the nonautonomous system~\eqref{eq:odewithrs} with fixed $r>0$:
$$
\mbox{trj}^{[r]}_\alpha =
\left\{
\left(x^{[r]}(\tau),s(\tau)\right)
\right\}_{\tau\in\mathbb{R}}.
$$
Recall from Proposition~\ref{prop:invsete-}(b) that $W_\alpha^{s,[r]}(\tilde{\eta}^{+})$
contains a family of regular R-tipping thresholds $\Theta^{[r]}(\tau)$, and each embedded edge tail contains one branch of the unstable manifold $W^{u}(\tilde{\eta}^+)$.
Thus, conditions (a) and (b) imply that the nonautonomous system~(\ref{eq:odewithrs}) undergoes R-tipping: there are $r_c, r_2 >0$ such that
$x^{[r_c]}(\tau)\to \eta^+$ and $x^{[r_2]}(\tau)\not\to \eta^+$ as $\tau\to +\infty$.
Condition (b) also implies that the rate $r_c$ is isolated in the sense that
$x^{[r]}(\tau)\not\to \eta^+$ for $0<|r-r_c|<\delta$. Hence $r_c$ is a
critical rate.
Condition (b) together with Proposition~\ref{prop:edgetails}(a) implies that the lower and upper edge tails of $\eta^+$ are different.
Finally, condition (c) implies that each edge tail connects $\eta^+$ to an attractor. Hence R-tipping is non-degenerate.
Conversely, non-degenerate R-tipping implies conditions (a), (b) and (c). Specifically, R-tipping implies (a).
Each edge tail of $\eta^+$ being a different connection from $\eta^+$ to an attractor, together with Proposition~\ref{prop:edgetails}(c), implies (b).
Each edge tail of $\eta^+$ being a connection from $\eta^+$
to an attractor implies (c).
\qed
\\
Consequently, critical rates for R-tipping in the nonautonomous system~\eqref{eq:odewithrs} can be found by identifying values of $r$ that give codimension-one or higher connecting (heteroclinic) orbits to $\tilde{\eta}^+$ in the compactified system~\eqref{eq:odeextbtau}.
It is important to note that, unlike R-tipping thresholds, these connecting orbits are one-dimensional curves, which makes them relatively easy to detect in $r$, and then continue in other parameters to obtain curves or even hypersurfaces of critical rates.
This allows us to produce nonautonomous {\em R-tipping diagrams}~\cite{OKeeffe2019,Xie2019,OSullivan2021} akin to classical autonomous bifurcation diagrams.
Non-degenerate R-tipping has additional requirements that $\tilde{\eta}^+$ is a regular edge state, the edge tails of $\tilde{\eta}^+$ are different, and each edge tail is a connection from $\tilde{\eta}^+$ to an attractor.
This means that parameter continuation of critical rates may give continuation of
non-degenerate R-tipping, at least in cases where the different edge tails connect to attractors that are simple enough (e.g. an equilibrium or a limit cycle) to be continued as attractors in these other parameters.
In practice, critical rates and non-degenerate R-tipping can always be computed using a shooting method. In cases where $\tilde{\eta}^+$ is an equilibrium, a limit cycle, or possibly a quasiperiodic torus, parameter continuation can be done using numerical implementations of detection and continuation methods such as that of Beyn~\cite{Beyn1990numerical} and Lin~\cite{Lin,krauskopf2008}, or numerical software packages such as HOMCONT~\cite{champneys1996numerical} or MATCONT~\cite{matcont} based on these methods.
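To illustrate the shooting approach in the simplest possible setting, the sketch below locates a critical rate by bisection for the prototypical scalar example $\dot{x}=(x+\lambda)^2-1$ with the monotone parameter shift $\lambda(\tau)=(\lambda_{\max}/2)(1+\tanh(r\tau))$ and $\lambda_{\max}=3$. This particular model, the escape threshold and the numerical tolerances are our own illustrative assumptions rather than anything prescribed in the text.

```python
import math

LAM_MAX = 3.0  # lam_max > 2 gives forward threshold instability in this example

def lam(tau, r):
    # asymptotically constant parameter shift from 0 (past) to LAM_MAX (future)
    return 0.5 * LAM_MAX * (1.0 + math.tanh(r * tau))

def tips(r, tau0=-20.0, tau1=40.0, h=1e-2):
    """Integrate dx/dtau = (x + lam)^2 - 1 from the past attractor branch;
    return True if the solution escapes (R-tipping), False if it tracks."""
    f = lambda t, y: (y + lam(t, r)) ** 2 - 1.0
    x, tau = -lam(tau0, r) - 1.0, tau0     # past attractor: x = -lam - 1
    while tau < tau1:
        k1 = f(tau, x)                     # classical RK4 step
        k2 = f(tau + h / 2, x + h * k1 / 2)
        k3 = f(tau + h / 2, x + h * k2 / 2)
        k4 = f(tau + h, x + h * k3)
        x += (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
        tau += h
        if x > 10.0:                       # crossed the moving threshold x = -lam + 1
            return True
    return False

def critical_rate(r_lo=0.05, r_hi=10.0, tol=1e-3):
    """Shooting by bisection: r_lo is assumed to track, r_hi to tip."""
    assert not tips(r_lo) and tips(r_hi)
    while r_hi - r_lo > tol:
        r_mid = 0.5 * (r_lo + r_hi)
        if tips(r_mid):
            r_hi = r_mid
        else:
            r_lo = r_mid
    return 0.5 * (r_lo + r_hi)
```

For small $r$ the solution tracks the moving attractor $x=-\lambda-1$, while for large $r$ it fails to track, crosses the moving threshold $x=-\lambda+1$ and escapes; bisection on this dichotomy approximates the critical rate.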
Finally, we point out that our approach relating R-tipping in the nonautonomous system~\eqref{eq:ode} to an $\tilde{e}^-$-to-$\tilde{\eta}^+$ heteroclinic connection in the compactified autonomous
system~\eqref{eq:odeextbtau}
has strong parallels with an alternative approach relating R-tipping to a collision (loss of uniform asymptotic stability) of a pullback attractor that limits to $e^-$ and a pullback repeller that limits to $\eta^+$ in the one-dimensional (scalar) nonautonomous system~\eqref{eq:ode}~\cite{Kuehn2021}.
\section{Summary and Open Questions}
\label{sec:Conclusions}
This paper describes the nonlinear dynamics of a multidimensional nonautonomous system~\eqref{eq:ode} (or equivalently system~\eqref{eq:odewithrs}) for quite a general class of asymptotically constant external inputs, or parameter shifts, that vary with time at a rate $r$ and decay exponentially at infinity. It uses an extension to the compactified autonomous
system~\eqref{eq:odeextbtau}--\eqref{eq:lambda_s+-} by including autonomous dynamics of the future~\eqref{eq:odea+} and past~\eqref{eq:odea-} limit systems from infinity. This approach allows us to understand the dynamics of the nonautonomous system~\eqref{eq:ode}
in terms of compact invariant sets of the autonomous future limit system.
The focus is on genuine nonautonomous R-tipping instabilities that can occur at critical rates $r=r_c$.
Asymptotically autonomous systems have been studied in the past in terms of asymptotic equivalence of two separate systems: the nonautonomous system~\eqref{eq:ode} and the future limit system~\eqref{eq:odea+}~\cite{Markus1956,Thieme1994,Holmes_Stuart1992,Castillo1994,Robinson1996}.
A particular advantage of our approach is that all invariant sets, including trajectories of the nonautonomous system~\eqref{eq:ode} as well as compact invariant sets of the autonomous limit systems~\eqref{eq:odea+} and~\eqref{eq:odea-}, can be related to the one autonomous compactified system~\eqref{eq:odeextbtau}--\eqref{eq:lambda_s+-}.
Our strategy is to define R-tipping in the nonautonomous system, introduce the key concepts of R-tipping thresholds as well as R-tipping edge states and their edge tails also in the nonautonomous system, and derive the main results using the compactified system. As a starting point, Proposition~\ref{prop:csdyn} uses results from~\cite{Wieczorek2019compact} to show for exponentially bi-asymptotically constant inputs that the compactified system is in standard format for a $C^1$ smooth slow-fast system,
where the rate parameter $r$ is the timescale separation. Small $r$ corresponds to the quasistatic approximation, giving rise to tracking of a branch of base attractors for the frozen system~\eqref{eq:odea}. R-tipping can be understood as a breakdown of the quasistatic approximation, giving rise to loss of tracking (i.e. moving away from the branch of base attractors) due to crossing an R-tipping threshold at some larger $r$.
We give methods to identify, classify and understand R-tipping in a wide variety of ODE models from applications.
In other words, we generalise and extend results from~\cite{Ashwin2016} on irreversible R-tipping in one dimension to arbitrary dimensions and to different cases of R-tipping, some of which can occur only in higher dimensional systems.
In particular, we give tools for a fairly complete understanding of systems with equilibrium base attractors whose basin boundaries consist of regular thresholds anchored by regular equilibrium edge states.
This culminates in two results. Theorem~\ref{thm:Rtip} gives an easily verifiable set of sufficient conditions for R-tipping to be present in a multidimensional nonautonomous system~\eqref{eq:ode} for some choice of the external input. Proposition~\ref{prop:rtip_compact} shows
how R-tipping in the nonautonomous system corresponds to a (heteroclinic) connection to an R-tipping edge state in the compactified autonomous system, and thus gives a numerical tool for quantifying R-tipping and computing critical rates in quite general cases.
A challenge for the future is to understand and classify R-tipping in~\eqref{eq:ode} for more complicated cases such as:
\begin{itemize}
\item [(a)]
{\em R-tipping from an equilibrium attractor $e^-$ for a moving sink on a semi-infinite or finite time interval $I$.}
Theorem~\ref{thm:Rtip} considers R-tipping from $e^-$ for moving sinks on $I=\mathbb{R}$.
The result in cases of R-tipping from a fixed $(x_0,\tau_0)$ for a moving sink on an infinite $I=\mathbb{R}$ or semi-infinite $I=(\tau_-,+\infty)\subset\mathbb{R}$ will follow from a simple generalisation of Theorem~\ref{thm:Rtip}, namely by considering the trajectory from $(x_0,\tau_0)$ rather than the one that limits to $e^-$.
%
Additionally, there are cases of R-tipping from $e^-$ for a moving sink on a semi-infinite $I=(-\infty,\tau_+)\subset\mathbb{R}$, or from a fixed $(x_0,\tau_0)$ for a moving sink on a finite $I=(\tau_-,\tau_+)\subset\mathbb{R}$ or semi-infinite $I=(-\infty,\tau_+)\subset\mathbb{R}$. In such cases, the moving sink bifurcates or disappears at some finite time, and need not even be forward threshold unstable (e.g. see Figure~\ref{fig:Rtip}(b)). Thus, such cases will require a more extensive generalisation of Theorem~\ref{thm:Rtip}.
%
\item [(b)]
{\em R-tipping from non-equilibrium attractors $\gamma^-$.}
For systems with phase space of dimension higher than one,
there can be
R-tipping from more general attractors $\gamma^-$ including limit cycles~\cite{Alkhayuon2018,Alkhayuon2020}, quasiperiodic tori, and chaotic attractors~\cite{Kaszas2019,Alkhayuon2020weak,Lohmann2021}.
It is interesting to note that results on R-tipping in such cases will depend to some extent on the approach taken. For example,
non-degenerate
R-tipping according to Definition~\ref{defn:Rtiptypes} can be generically found at codimension one or zero, depending on whether we take the pointwise or setwise approach.
In the pointwise approach, where one considers a single solution that limits to $\gamma^-$ as $\tau\to -\infty$, non-degenerate R-tipping
can be generically found only at codimension-one in $r$, as explained
in Section~\ref{sec:Rtipcasec}.
By contrast, in the setwise approach, one considers the set of all solutions that limit to $\gamma^-$ as $\tau\to -\infty$.
In this case, it is possible that non-degenerate R-tipping can be found at codimension-zero in $r$: there can be an interval of $r$ such that, for any value of $r$ within the interval, non-degenerate R-tipping is found for some solution in the set of solutions that limit to $\gamma^-$ as $\tau\to -\infty$.
Furthermore,
non-equilibrium attractors can give rise to additional cases of R-tipping, such as ``partial R-tipping" from a limit cycle $\gamma^-$ described in~\cite{Alkhayuon2018} (see also~\cite{Alkhayuon2020}), and to additional cases of tracking, such as ``weak tracking"~\cite{Alkhayuon2020weak} where the pullback attractor limits to an unstable subset of a chaotic attractor $\gamma^-$ as $\tau\to -\infty$. A physical measure on $\gamma^-$ can be used to quantify the probability that tipping takes place \cite{ashwin2021physical}.
\item [(c)]
{\em R-tipping without crossing regular thresholds.}
For systems with phase space of dimension higher than one, it is possible
to have R-tipping where, as $\tau\to +\infty$, the solution limits to a compact invariant set $\eta^+$ on the boundary of a basin of attraction that is not a regular edge state. Such
$\eta^+$ may be associated with a threshold that is irregular, or with no threshold at all, for one of several possible reasons. More precisely, the boundary of a basin of attraction may include any of:
\begin{itemize}
\item[(i)]
Saddle periodic orbits $\eta^+$ with codimension-one stable manifolds that are not orientable (irregular thresholds).
\item[(ii)]
Chaotic saddles $\eta^+$ with codimension-one stable manifolds that are not embedded (irregular thresholds).
\item[(iii)]
Compact invariant sets $\eta^+$ with stable invariant manifolds of codimension two or higher, for example a source in $\mathbb{R}^2$ (no thresholds).
\end{itemize}
In all three cases, R-tipping will occur without crossing a regular threshold.
Case (i) leads to R-tipping that does not give a change in the system behaviour.
Case (ii) can generate basin boundaries with highly nontrivial fractal structure~\cite{McDonald1985}.
This means that R-tipping may occur not only at isolated values of $r_c$, but also at sets of $r_c$ with nontrivial accumulation points. Case (iii) generically will not be of codimension one in $r$, but it can be for R-tipping from non-equilibrium attractors $\gamma^-$; see point (b) above. For example, in the setwise approach, trajectories from a limit cycle attractor $\gamma^-$ may interact with an equilibrium $\eta^+$ with two unstable directions at codimension one in $r$. Such ``invisible R-tipping" is documented in~\cite{Alkhayuon2018}.
\item [(d)]
{\em R-tipping due to crossing quasithresholds.}
%
In any dimension, it is possible for
so-called ``quasithresholds''~\cite{FitzHugh1955}
to be present in system~\eqref{eq:ode}.
The key difference from regular thresholds is that quasithresholds do not contain an R-tipping edge state $\eta^+$.
Therefore,
\begin{itemize}
\item[(i)]
Quasithresholds cannot give rise to qualitative {\em R-tipping via loss of end-point tracking.} They can only give rise to quantitative {\em R-tipping via loss of $\delta$-close tracking}; see Section~\ref{sec:Rtipintro}.
\item[(ii)]
Rigorous definitions of quasithresholds and R-tipping via loss of $\delta$-close tracking that are relevant for applications still remain a challenge~\cite{Perryman2014,OSullivan2021}.
\end{itemize}
Quasithresholds can appear when a moving regular edge state disappears at some finite time~\cite[Sec.4.9]{Xie2019}, or when the frozen system is slow-fast~\cite{FitzHugh1955, Wieczorek2011,Vanselow2019,OSullivan2021}.
%
\item [(e)]
{\em R-tipping for asymptotically constant external inputs with non-exponential asymptotic decay.}
Our results assume asymptotically constant external inputs with exponential decay. This ensures that (normally) hyperbolic compact invariant sets of the autonomous limit systems remain (normally) hyperbolic when embedded in the extended phase space of the compactified system. It should be possible to generalise our results to asymptotically constant external inputs with slower than exponential decay, provided they are ``normal" in the sense of~\cite[Definition 2.2]{Wieczorek2019compact}. Although such inputs give rise to a centre direction in the compactified system, one can show that both the ensuing centre manifold of $\tilde{e}^-$ and the centre-stable manifold of a regular equilibrium R-tipping edge state $\tilde{\eta}^+$ are unique~\cite[Theorems 3.3 and 3.4]{Wieczorek2019compact}.
\item [(f)]
{\em R-tipping for external inputs that are not asymptotically constant.}
While we focus here on asymptotically constant external inputs, more complex external inputs represent another interesting direction of generalisation.
In particular, one could consider external inputs that are asymptotically periodic or quasiperiodic. One definition of R-tipping proposed for this general case is a bifurcation of a pullback attractor~\cite{Longo2021,hoyer2021rethinking,Kuehn2021}. There are many parallels with our work on relating R-tipping to a heteroclinic connection in the compactified system (see also the last paragraph in Section~\ref{sec:computing}), but obtaining general results without imposing stringent hypotheses is likely to be a challenge. Also note that R-tipping due to crossing a quasithreshold may not correspond to a bifurcation of a pullback attractor.
\item [(g)]
The R-tipping framework presented here gives rigorous results about asymptotic behaviour of a nonlinear system for a given external input, in the spirit of dynamical systems theory. This approach is motivated by applications where given inputs may be difficult to alter or control (e.g. climate, ecology, earthquakes or neuroscience).
An alternative approach is to specify the desired asymptotic state, and use ideas from control theory to make rigorous statements about the class of `optimal' external inputs.
This direction of future research on R-tipping is of interest in applications where one has control over the external inputs (e.g. control engineering, climate change mitigation strategy, disease treatment or epidemiological intervention strategy).
\end{itemize}
\subsection*{Acknowledgements}
The initial stages of this research were supported by the CRITICS Innovative Training Network, funded by the European Union's Horizon 2020 research and innovation programme under the Marie Sk\l{}odowska-Curie Grant Agreement No. 643073. The research of PA was partially supported through funding from EPSRC project EP/T018178/1. This is TiPES contribution \#135. This project received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 820970 (TiPES). We thank the following for their perceptive comments on this research: Ulrike Feudel, Chris K.R.T. Jones, Bernd Krauskopf, Martin Rasmussen, Jan Sieber.
\bibliographystyle{plain}
\section{Introduction}
Understanding how opinions are formed is as important as ever, as the spread of misinformation becomes more prevalent every day.
Assume that some new innovation, which is either good or bad, is introduced to a group of people who want to form their (binary) opinion about it. Following a key insight by Rogers \cite{Rogers2003}, the opinion forming process can be modelled as follows. At first, a small set of so-called early adopters, or experts, forms their opinion about the newly introduced innovation. Afterwards, they disseminate their opinion to all other non-experts in the network.
An observer looking at the network from the outside wants to infer the quality of the new innovation by observing the opinions of all individuals, but without taking the actual structure of the network into consideration (for example, by conducting a poll).
One popular method to achieve this is using the \emph{wisdom of the crowd}.
In this case that corresponds to a simple majority rule, that is, the observer takes the majority of opinions as an estimate.
Wisdom of the crowd has been shown to have a plethora of useful applications in decision making, see e.g.~\cite{Cooke2008, Morgan2014, Oprea2009, Aspinall2010, Budescu2015, Mellers2014}.
Assume furthermore that there is an adversary who can influence the opinion of some early adopters so as to falsely convince the observer of the new innovation's quality.
Let us look at some examples.
Consider the so-called Black-Hat ASIN Piggybacking on Amazon's Marketplace~\cite{Masters2019}. This is the method of hijacking the listing of an Amazon vendor to sell counterfeit products under the (dis-)guise of a genuine listing. Some customers then buy the real product and some buy the fake one. This causes the vendor to lose profit and to receive negative reviews that do not correspond to the actual product. The second example is a newly opened restaurant that, in its opening phase, invites food critics to try and rate the restaurant. However, when those critics dine at the restaurant, it puts in more effort than it would when catering to a regular customer, e.g., by providing better quality food and service. Lastly, consider the common practice of online vendors buying positive reviews for their products, either by giving monetary incentives directly to reviewers or by providing them with free products. In particular, on Amazon in certain product categories, like Bluetooth speakers and headphones, ReviewMeta \cite{Noonan2016} finds more than half of all reviews to be fake \cite{Dwoskin2018}.
\paragraph{A Model for Opinion Forming}
In the previous examples we saw three different sorts of adversaries: the hijacking seller negatively influenced the opinions of some customers; the restaurant owner could actively choose which critics to influence; finally, the seller who bought his reviews could select the reviewers as well as guarantee their opinion.
Alon et al.~\cite{Alon2015} introduced a model that implements the ideas outlined above. Given a graph $G=(V,E)$ on $n$ vertices and parameters $0\le \mu<1/2 ,0<\delta\le 1/2$ we define the set of experts as a set $\mathcal{E}\subseteq V$ with the property that $|\mathcal{E}|=\mu |V|=\mu n$. Let $\mathcal{E}$ be furthermore divided into two subsets: the experts that know the truth $\mathcal{E}_1\subseteq \mathcal{E}$ and the experts that are convinced of the falsehood $\mathcal{E}_0= \mathcal{E}\setminus \E_1$. The sets $\mathcal{E}_1, \E_0$ are chosen in three different ways that correspond to the various adversaries described in the previous paragraphs.
The \emph{random} adversary has actually no choice. He chooses the expert set $\E$ uniformly at random among all sets of size $\mu |V|$. Then $\E$ is in turn partitioned into $\E_1$ and $\E_0$ by adding each vertex in~$\E$ to~$\mathcal{E}_1$ independently with probability $1/2+\delta$ and to $\E_0$ otherwise. The \emph{weak} adversary is allowed to choose the expert set with the restriction that~$|\E|=\mu |V|$; the selected set is then partitioned into~$\E_1$ and~$\E_0$ as in the random adversary. Finally, the \emph{strong} adversary chooses~$\E, \E_1$ and $\E_0= \E\setminus \E_1$ arbitrarily such that~$|\E|=\mu |V|, |\E_1|=(1/2+\delta)|\E|$ and consequently $|\E_0|=(1/2-\delta)|\E|$. We will ignore rounding issues throughout to facilitate the presentation.
All vertices that know the truth in a graph are assigned the label `1', including all vertices in~$\E_1$, and all vertices that believe a falsehood are labeled `0'. Vertices without an opinion bear no label. The experts disseminate their opinions to the non-experts $V\setminus \E$ by a majority rule, that is, every vertex in $V\setminus \E$ takes the opinion of the majority of its neighbouring experts. To be completely explicit, a non-expert is labeled `1'/`0' if more than half of its neighbouring experts are labeled `1'/`0'.
Vertices at which there is no majority -- because of a tie of `1's and `0's or because they have no expert neighbours -- decide upon their opinion uniformly at random, i.e., each of these vertices is independently labeled `1' with probability 1/2 and `0' otherwise.
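For concreteness, here is a minimal sketch of one round of this dissemination rule; the adjacency-dictionary representation and the function name are our own assumptions and not part of the model in \cite{Alon2015}.

```python
import random

def disseminate(adj, expert_labels):
    """One non-iterative round: each non-expert takes the majority label
    among its expert neighbours; ties and vertices without expert
    neighbours are decided by an independent fair coin."""
    out = dict(expert_labels)
    for v in adj:
        if v in expert_labels:
            continue
        ones = sum(1 for u in adj[v] if expert_labels.get(u) == 1)
        zeros = sum(1 for u in adj[v] if expert_labels.get(u) == 0)
        if ones != zeros:
            out[v] = 1 if ones > zeros else 0
        else:
            out[v] = random.randint(0, 1)  # tie or no expert neighbours
    return out
```

On a star whose centre is an expert labeled `1', for instance, every leaf deterministically adopts `1'.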
We say that a graph is \emph{robust} against the random/weak/strong adversary if, with high probability, for any choice of the expert set, more than half of the vertices are labeled `1' after the dissemination process.
`With high probability' means with probability approaching 1 as $n$ approaches infinity, which we sometimes abbreviate as whp. In \cite{Alon2015} the authors studied which properties of a graph make it robust. They discovered that all graphs with maximum degree sublinear in $n$ are robust against the weak adversary. Furthermore, they showed that certain well-connected networks are robust against the strong adversary. In particular, such networks are either Erd\H{o}s-R\'enyi random graphs with edge probability $p$ greater than $c/n$ for a suitable constant $c>0$, or expander graphs whose largest and second largest adjacency eigenvalues $d$ and $\lambda_2$ satisfy
$d \ge \lambda_2/(\delta \sqrt{\mu (1-\mu +2\delta\mu)}).$
\begin{figure}[b]
\centering
\begin{tikzpicture}
\begin{scope}[every node/.style={circle,thick,draw,minimum size=0.5cm}]
\node[fill=red] (X) at (-1,2) { };
\node[fill=red] (A) at (0,2) { };
\node[fill=red] (B) at (1,2) { };
\node[fill=red] (C) at (2,2) { };
\node[fill=red] (D) at (3,2) { };
\node[fill=red] (E) at (4,2) { };
\node[fill=blue] (F) at (5,2) { };
\node[fill=blue] (G) at (6,2) { };
\node[pattern=dots,pattern color=blue] (H) at (7,2) { };
\node[pattern=dots,pattern color=blue] (I) at (8,2) { };
\node[pattern=dots,pattern color=blue] (J) at (9,2) { };
\node[pattern=dots,pattern color=blue] (K) at (10,2) { };
\node[pattern=dots,pattern color=blue] (L) at (11,2) { };
\end{scope}
\begin{scope}[>={Stealth[black]},
every edge/.style={draw=black, very thick}]
\path (X) edge node {} (A);
\path (A) edge node {} (B);
\path (B) edge node {} (C);
\path (C) edge node {} (D);
\path (D) edge node {} (E);
\path (E) edge node {} (F);
\path (F) edge node {} (G);
\path (G) edge node {} (H);
\path (H) edge node {} (I);
\path (I) edge node {} (J);
\path (J) edge node {} (K);
\path (K) edge node {} (L);
\end{scope}
\end{tikzpicture}
\begin{minipage}{1\textwidth}
~\\~\\
\end{minipage}
\begin{tikzpicture}
\begin{scope}[every node/.style={circle,thick,draw,minimum size=0.5cm}]
\node[fill=red] (X) at (-1,2) { };
\node[fill=red] (A) at (0,2) { };
\node[fill=red] (B) at (1,2) { };
\node[fill=red] (C) at (2,2) { };
\node[fill=red] (D) at (3,2) { };
\node[fill=red] (E) at (4,2) { };
\node[fill=blue] (F) at (5,2) { };
\node[pattern=dots,pattern color=blue] (G) at (6,2) { };
\node[pattern=dots,pattern color=blue] (H) at (7,2) { };
\node[fill=blue] (I) at (8,2) { };
\node[pattern=dots,pattern color=blue] (J) at (9,2) { };
\node[] (K) at (10,2) { };
\node[] (L) at (11,2) { };
\end{scope}
\begin{scope}[>={Stealth[black]},
every edge/.style={draw=black, very thick}]
\path (X) edge node {} (A);
\path (A) edge node {} (B);
\path (B) edge node {} (C);
\path (C) edge node {} (D);
\path (D) edge node {} (E);
\path (E) edge node {} (F);
\path (F) edge node {} (G);
\path (G) edge node {} (H);
\path (H) edge node {} (I);
\path (I) edge node {} (J);
\path (J) edge node {} (K);
\path (K) edge node {} (L);
\end{scope}
\end{tikzpicture}
\begin{minipage}{0.88\textwidth}
\caption{\small This figure shows an example from \cite{Alon2015}. The colors red/blue correspond to the experts labeled `1'/`0'. The dotted vertices indicate their label after the dissemination process; the unmarked vertices are decided randomly. In the first line graph we consider the iterative strong adversary, where the rightmost blue vertex alone determines the label of all remaining vertices. In the second line we consider the non-iterative setting, where each blue expert can convince at most two non-experts. If $1/2+\delta>3(1/2-\delta)$ and $n$ is large, the adversary cannot hope to convince more than half of all vertices.
}
\label{fig:LitRef}
\end{minipage}
\end{figure}
\paragraph{Iterative Dissemination}
In \cite{Alon2015} the authors also introduced an iterative version of the model with a more dynamic dissemination process. The \emph{iterative model} also starts with labeled experts, and all non-experts are labeled according to the majority of their neighbouring experts. Ties that involve at least one expert are broken uniformly at random. All non-experts without any expert neighbours, however, do not form their opinion right away, but remain unlabeled. This process is then iterated, with all currently `1'-labeled and `0'-labeled vertices acting as spreaders,
until all vertices are labeled.
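The iterative variant can be sketched analogously (again with an assumed adjacency-dictionary representation): all currently labeled vertices act as spreaders, and a vertex without labeled neighbours waits for a later round.

```python
import random

def disseminate_iteratively(adj, expert_labels):
    """Iterate majority rounds until every vertex is labeled.  A vertex
    with no labeled neighbour stays unlabeled for now; ties among labeled
    neighbours are broken by a fair coin (in the first round this matches
    `ties that involve at least one expert').  If no progress is possible,
    e.g. an unlabeled isolated component, the rest is decided by coins."""
    labels = dict(expert_labels)
    while len(labels) < len(adj):
        newly = {}
        for v in adj:
            if v in labels:
                continue
            ones = sum(1 for u in adj[v] if labels.get(u) == 1)
            zeros = sum(1 for u in adj[v] if labels.get(u) == 0)
            if ones > zeros:
                newly[v] = 1
            elif zeros > ones:
                newly[v] = 0
            elif ones > 0:                 # tie among labeled neighbours
                newly[v] = random.randint(0, 1)
        if not newly:                      # deadlock: no labeled neighbours left
            newly = {v: random.randint(0, 1) for v in adj if v not in labels}
        labels.update(newly)
    return labels
```

On the line graph of Fig.~\ref{fig:LitRef}, six `1'-experts followed by two `0'-experts leave the rightmost `0'-expert to convert all five remaining vertices, as in the first row of the figure.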
A natural question is whether iterativity helps or hinders the adversary.
Intuitively, iterativity ought to be beneficial for the adversary.
If a graph is not robust against a non-iterative adversary, then there is a choice of expert sets such that after one round of dissemination more vertices are labeled `0' than `1'.
The remaining vertices without expert neighbours are then either decided randomly (non-iterative) or labeled in subsequent rounds of dissemination (iterative). As there are now more `0'-labeled vertices than `1'-labeled vertices, deciding the labels of the remaining vertices by dissemination should be beneficial for the adversary.
Indeed, the authors of \cite{Alon2015} provided examples where this is the case. For example, they showed that for suitable values of~$\mu$ and~$\delta$, a line graph is robust against the non-iterative strong/weak adversaries, but not against their iterative versions, see Fig.~\ref{fig:LitRef}.
However, \cite{Alon2015} also constructed an example where, for the weak adversary, the opposite is true. Consider a graph that is a disjoint union of a star and a $d$-regular expander graph.
Place one expert in the center of the star and distribute the other experts as evenly as possible on the expander. In the non-iterative setting, each expert in the expander will spread its label to $d$ many non-experts. If the expert in the center of the star is labeled `0', all vertices in the star are labeled `0' as well, outweighing the difference between `1's and `0's in the expander.
In the iterative setting, however, each expert not only spreads its label to $d$ many other vertices: at the end of the dissemination all vertices in the expander will be labeled, roughly in the same ratio as that of the experts in the beginning.
Now the difference in `1's and `0's is so large that not even labeling all vertices in the star with `0' can sway the majority.
Guided by the intuition described previously, it seems that no such construction can work for a strong adversary. In the previous example of the graph consisting of an expander and a star, the adversary can place all `1'-labeled experts on the star and all others in the expander. Then all vertices in the expander will be labeled `0', resulting in a clear majority. Consequently, the following conjecture concerning the effect of iterativity in this case was made in~\cite{Alon2015}.
\begin{conjecture}[\cite{Alon2015}]\label{conjectureAlon}
In the case of a strong adversary an iterative propagation can never harm the adversary.
\end{conjecture}
Equivalently, the conjecture states that there is no graph
that is robust against the iterative strong adversary and simultaneously not robust against the non-iterative version -- in this precise sense iterativity does not harm/can only help the adversary.
\paragraph{Related Results}
Besides \cite{Alon2015}, where this model for opinion formation was introduced, there is one more work that studies questions in this precise framework. In his doctoral thesis \cite{Daknama2018}, Daknama studied resilience properties of random graphs. `Local resilience' in this context refers to the largest number of edges adjacent to any vertex that can be removed so that the graph is still robust against the strong adversary. In~\cite{Daknama2018} it was shown that one can delete up to a fraction of $2(1-\mu+2\delta \mu)\delta/(1+2\delta)$ of all edges at each vertex without affecting robustness.
There are also other directly related studies in opinion forming, which, however, do not use the exact model presented here. These papers include studies on word of mouth \cite{Young2009}, group recommendation \cite{Andersen2008, Grandi2016,lev2017group,Faliszewski2016} and informational cascades \cite{Bikhchandani1992, Bikhchandani1998, Watts2002, Alon2012, Feldman2014}. For further references see also \cite{Alon2015}.
\paragraph{Result}
The contribution of this paper is to refute Conjecture \ref{conjectureAlon}. The idea is to consider a graph that fails to be robust against the non-iterative strong adversary only in a very weak way. More concretely, a majority of `0'-labeled vertices is achieved only if a majority of the vertices without expert neighbours is labeled `0'. If we consider iterativity, then the adversary has no clear advantage in a subsequent round of the dissemination, as there are roughly equally many `1'- and `0'-labeled vertices. Additionally, we can construct the graph in such a way that the vertices without expert neighbours are connected to vertices that are labeled `1' in the first round of the dissemination, so that the adversary \emph{gets harmed}.
\begin{figure}[h]
\centering
\begin{tikzpicture}
\begin{scope}[every node/.style={circle,thick,draw,text width=0.5cm,align=center}]
\node[minimum size=1cm,, inner sep=1pt] (A) at (0,2) {{\large I} \raisebox{.5pt}{\textcircled{\raisebox{-.9pt} {1}}}};
\node[minimum size=1.5cm] (B) at (3,2) {{\Large J} \raisebox{.5pt}{\textcircled{\raisebox{-.9pt} {1}}}};
\node[minimum size=1.5cm] (C) at (8,2) {{\Large P} \raisebox{.5pt}{\textcircled{\raisebox{-.9pt} {1}}}};
\node[minimum size=1cm, inner sep=1pt] (D) at (11,2) {{\large O} \raisebox{.5pt}{\textcircled{\raisebox{-.9pt} {1}}}};
\node[inner sep=0pt] (E) at (2,-1) {{\large D} \raisebox{.5pt}{\textcircled{\raisebox{-.9pt} {0}}}};
\end{scope}
\begin{scope}[>={Stealth[black]},
every node/.style={fill=white,shape=rectangle},
every edge/.style={draw=black, very thick}]
\path (A) edge node {$p_{IJ}$} (B);
\path (B) edge node {$p_{JP}$} (C);
\path (A) edge[bend left=40] node {$p_{IP}$} (C);
\path (D) edge node {$1$} (C);
\path (D) edge[bend left=40] node {$1$} (B);
\path (A) edge[bend left=60] node {$1$} (D);
\path (B) edge node {$1\textbf{ : }1$} (E);
\end{scope}
\end{tikzpicture}
\begin{minipage}{0.85\textwidth}
\caption{\small This figure shows the graph $G$. The numbers on the edges and in the vertices give the probability that an edge is present between/in the components. For example, any edge with one vertex in $I$ and one in $J$ exists independently with probability $p_{IJ}$. Every vertex in $D$ has exactly one distinct neighbour in $J$ and no other neighbours, i.e., every vertex in $D$ has degree 1 and no two vertices in $D$ have a common neighbour. }
\label{fig:Result}
\end{minipage}
\end{figure}
Consider the following graph that implements these ideas. Let $0<\mu,\delta<1/2$ and $0< \varepsilon_1<2\delta/(1/2+\delta)$, $0<\varepsilon_2<(1/2-\delta)/(1/2+\delta)$ as well as $0<d<(1-\mu-2\delta\mu)/3$. Then the graph $G=(V,E),\ |V|=n$ is given by $V=I~\dot\cup ~J~\dot\cup~ O~\dot\cup~ P~\dot\cup ~D$ such that
$$|I\cup J|=|O\cup P|=(1-d)\frac{n}{2},\quad |D|=dn,\quad |I|=\mu \left(\frac{1}{2}+\delta\right)n\quad\text{and} \quad |O|=\mu \left(\frac{1}{2}-\delta\right)n.$$
The subset $D$ forms an independent set. In contrast, $I$, $J$, $O$ and $P$ each form a clique. Every vertex in $O$ is connected to all other vertices except those in $D$. Between $I$ and $J$, between $I$ and $P$, and between $J$ and $P$ there are random bipartite graphs with edge probabilities $p_{IJ}$, $p_{IP}$ and $p_{JP}$, respectively. Every vertex in $D$ has degree one, with its unique neighbour lying in $J$; moreover, no two vertices in $D$ have the same neighbour. There are no further edges.
Set
$$p_{IJ}=p_{JP}=\frac{1/2-\delta}{1/2+\delta}+\varepsilon_1\qquad \text{and}\qquad \qquad p_{IP}=\frac{1/2-\delta}{1/2+\delta}-\varepsilon_2.$$
See Fig.~\ref{fig:Result} for a depiction of $G$.
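To make the construction concrete, the following Python sketch (our addition, not part of the paper) builds a finite instance of $G$ with plain adjacency sets. The parameter values follow the example quoted after the main theorem, except that $d$ is enlarged to $0.01$ so that $D$ is non-empty at $n=1000$; the function name `build_G` is ours.

```python
# Hypothetical sketch: build a finite instance of the graph G.
# d is enlarged to 0.01 (the paper's 10^-4 gives |D| < 1 at n = 1000).
import random

def build_G(n=1000, mu=0.2, delta=0.2, eps1=1e-2, eps2=1e-6, d=0.01, seed=0):
    rng = random.Random(seed)
    sI = round(mu * (0.5 + delta) * n)        # |I|
    sO = round(mu * (0.5 - delta) * n)        # |O|
    sD = round(d * n)                         # |D|
    half = round((1 - d) * n / 2)             # |I u J| = |O u P|
    I = list(range(sI))
    J = list(range(sI, half))
    O = list(range(half, half + sO))
    P = list(range(half + sO, 2 * half))
    D = list(range(2 * half, 2 * half + sD))
    adj = {v: set() for v in range(2 * half + sD)}

    def add(u, v):
        adj[u].add(v)
        adj[v].add(u)

    for S in (I, J, O, P):                    # each of I, J, O, P is a clique
        for i, u in enumerate(S):
            for v in S[i + 1:]:
                add(u, v)
    p_IJ = p_JP = (0.5 - delta) / (0.5 + delta) + eps1
    p_IP = (0.5 - delta) / (0.5 + delta) - eps2
    for A, B, p in ((I, J, p_IJ), (J, P, p_JP), (I, P, p_IP)):
        for u in A:                           # random bipartite graphs
            for v in B:
                if rng.random() < p:
                    add(u, v)
    for v in O:                               # O is joined to all of I, J, P
        for u in I + J + P:
            add(u, v)
    for w, u in zip(D, J):                    # distinct degree-1 pendants on J
        add(w, u)
    return adj, (I, J, O, P, D)

adj, (I, J, O, P, D) = build_G()
```

The pendant edges from $D$ into $J$ are realized by pairing each $D$-vertex with a distinct $J$-vertex, matching the "degree one, distinct neighbours" requirement.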
Assume for now that the adversary chooses $\E_1=I$ and $\E_0=O$. Then whp all vertices in $P$ will have $\approx\varepsilon_2|I|$
more neighbours in $O$ than in $I$ by the choice of $p_{IP}$, so they will be labeled `0' regardless of whether the process is iterative.
In contrast, vertices in $J$ have $\approx\varepsilon_1|I|$ more neighbours in $I$ than in~$O$ and will consequently be labeled `1'. Summarizing, all vertices in $I\cup J$ are labeled `1' and all vertices in $O\cup P$ are labeled~`0'.
As both unions have by construction the same size, the labels of vertices in $D$ decide whether the adversary succeeds or not. This is where (non-)iterativity comes into play. In the non-iterative setting, vertices in $D$ will choose uniformly at random, as they have no neighbours in $I \cup O$. So, with positive probability there will be more vertices labeled `0' than `1' in $D$, consequently yielding an overall majority of `0'-labeled vertices. In the iterative setting however, all vertices in $D$ will be labeled `1', as they are exclusively connected to vertices in $J$. Thus, when choosing $\E_1=I$ and $\E_0=O$ the iterative adversary fails, while the non-iterative adversary succeeds.
Choosing the proportions of $I, O, J$ and $P$ and the edges between them suitably, we can ensure that choosing $\E_1$ and $\E_0$ differently is not advantageous for the adversary, and therefore $G$ is indeed robust against the iterative strong adversary. The main result of this paper shows that the graph $G$ indeed has the properties outlined above.
\begin{theorem}\label{conjecture}
For all $0<\mu<1/2$ and $1/6<\delta<1/2$ there are $\varepsilon_1,\varepsilon_2, d>0$ such that $G$ is whp robust against the iterative strong adversary, but not against the non-iterative strong adversary.
\end{theorem}
Note that $\delta>1/6$ is a necessary constraint for our construction, but we are certain that there is an example for smaller $\delta$ as well.
Permissible values in Theorem \ref{conjecture} are, e.g.~$\mu=\delta=1/5,\ \varepsilon_1=10^{-2},\ d=10^{-4}$ and $\varepsilon_2=10^{-6}.$
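These values can be verified exactly; the following Python sketch (our addition) checks the quoted parameters against the constraints \eqref{ass:e1}, \eqref{ass:d} and \eqref{ass:e2} from the proof of Lemma~\ref{lemma_robust}, using exact rational arithmetic to avoid floating-point rounding. Note that $d=10^{-4}$ meets the bound $\varepsilon_1\delta\mu/4$ in \eqref{ass:d} with equality.

```python
# Exact-arithmetic sanity check (our addition) of the quoted parameters
# against the constraints (ass:e1), (ass:d), (ass:e2) used in the proof.
from fractions import Fraction as F

mu = delta = F(1, 5)
eps1, d, eps2 = F(1, 100), F(1, 10**4), F(1, 10**6)

b1 = min(delta * mu / 2, 4 * delta / (F(1, 2) + delta) - 1)
b2 = min(eps1 * delta / (F(1, 2) + delta),
         eps1 * delta * mu / 4,
         (1 - mu - 2 * delta * mu) / 3)
b3 = min(d / 6 * (4 * delta / (F(1, 2) + delta) - 1 - eps1),
         (F(1, 2) - delta) / (1 + 2 * delta))

assert eps1 < b1   # (ass:e1)
assert d <= b2     # (ass:d); d = 1e-4 meets the bound eps1*delta*mu/4 exactly
assert eps2 < b3   # (ass:e2)
```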
The remainder of this paper consists of the proof of Theorem~\ref{conjecture}. We first state and prove a well-known description of the edge distribution of random graphs and then show the claimed (non-)robustness.
\section{Proof}
For a graph $G=(V,E)$ let $N(v)=\{w\in V\mid (v,w)\in E\}$ be the set of neighbours of $v$. We begin with a statement about the distribution of edges in random graphs.
\begin{lemma}\label{exp_lem_2}\label{exp_lem}
Let $\varepsilon>0$. The Erd\H{o}s-R\'enyi random graph $G(n,p)$ with vertex set $V$ and $p\ge \varepsilon$ has whp the following property. For any set $S\subseteq V$ of size $|S|\ge \varepsilon n$
there is a set $X_S\subset V\setminus S$ of size
at most ${4\varepsilon^{-3}(\ln \varepsilon^{-1} +2)}$
such that
$$\forall v\in (V\setminus S)\setminus X_S: \big ||N(v)\cap S|-p|S|\big| \le \varepsilon p|S|.$$
\end{lemma}
Similar versions of Lemma \ref{exp_lem} with (somewhat) different bounds exist in the literature, see for example \cite[Lem.~IV.1 and IV.3]{Fountoulakis2010}. However, as we did not find the exact statement we need in the literature, we include a proof. We will utilize the following Chernoff bound.
\begin{theorem}[\cite{Arora2009}, Cor 7.11]\label{Chernoff}
Let $X$ be a binomially distributed random variable. Then
$${P}\Big(|X-\mathbb{E}[X]|>\delta\mathbb{E}[X]\Big)\le 2\exp\left(-\min\{\delta^2,\delta\}\mathbb{E}[X]/4\right ), \qquad \delta > 0.$$
\end{theorem}
\begin{proof}[Proof of Lemma \ref{exp_lem_2}]
Let $S\subseteq V, |S|\ge \varepsilon n$ and let
$$X_S=\big\{v\in V\setminus S\bigm\vert \big ||N(v)\cap S|-p|S|\big| > \varepsilon p|S|\big\}$$
be the set of vertices not satisfying the claim of the lemma. The number of neighbours of any vertex $v\in V\setminus S$ is a binomially distributed random variable, $|N(v)\cap S|\ =\text{Bin}(|S|,p)$, and the expected number of neighbours of $v$ in $S$ is $p|S|$. Thus the probability of $v\in X_S$ can be bounded with Theorem \ref{Chernoff} by
$${P}\big (\big||N(v)\cap S|-p|S|\big| > \varepsilon p|S|\big )\le \exp \left (- \varepsilon^2 p|S|/4\right ). $$
Let furthermore $t\in \mathbb{N}$; the probability that $t$ distinct vertices are in $X_S$ is at most $\exp(-\varepsilon^2p|S|/4\cdot t)$ as the events of vertices being elements of $X_S$ are independent. There are $\binom{n}{k}\le \left (\frac{en}{k}\right )^k$ possibilities to choose $S$, a set of size $k\ge \varepsilon n$. Hence the probability that for fixed $k\ge \varepsilon n$ there is a set $S, |S|= k $ such that $|X_S|=t$ is by union bound at most \begin{align*}
\exp\left (k\ln (en/k)-\frac{\varepsilon^2pk}{4}\cdot t\right )\le \exp\left (k\left (-\ln \varepsilon +1-\frac{\varepsilon^3}{4}\cdot t\right )\right ),
\end{align*}
where we used the assumption that $p\ge \varepsilon$. Thus, if $t\ge{4{\varepsilon^{-3}}(\ln \varepsilon^{-1} +2)}$ this expression is $\le e^{-k}$ and summing over $k\ge \varepsilon n$ yields the claim.
\end{proof}
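For intuition, the Chernoff bound of Theorem~\ref{Chernoff} can be checked empirically. The following Python sketch (our addition, with illustrative parameters) compares the simulated tail frequency of a binomial variable against the stated bound.

```python
# Monte Carlo illustration (our addition) of the Chernoff bound:
# P(|X - E[X]| > delta E[X]) <= 2 exp(-min(delta^2, delta) E[X] / 4).
import math
import random

def tail_frequency(n=2000, p=0.3, delta=0.2, trials=500, seed=1):
    rng = random.Random(seed)
    mean = n * p
    hits = 0
    for _ in range(trials):
        x = sum(rng.random() < p for _ in range(n))   # one Bin(n, p) sample
        if abs(x - mean) > delta * mean:
            hits += 1
    return hits / trials

# With E[X] = 600 and delta = 0.2 the bound equals 2 e^{-6}.
bound = 2 * math.exp(-min(0.2 ** 2, 0.2) * 2000 * 0.3 / 4)
freq = tail_frequency()
```

For these parameters the deviation $\delta\,\mathbb{E}[X]$ is roughly six standard deviations, so the empirical frequency is essentially zero, well below the bound.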
This concludes the preparations. Next we prove the main theorem, by proving the two claims separately. We show the robustness of $G$ against the iterative strong adversary first.
\begin{lemma}\label{lemma_robust}
For all $0<\mu<1/2$ and $1/6<\delta<1/2$ there are values $\varepsilon_1,\,\varepsilon_2,\, d>0$ such that $G$ is whp robust against the iterative strong adversary.
\end{lemma}
\begin{proof}
Let $0<\mu<1/2,\ 1/6<\delta<1/2$ and $\varepsilon_1, \varepsilon_{2},\ d>0$ such that
\begin{align}\label{ass:e1}
\varepsilon_1<\min\left \{\frac{\delta\mu}2, \frac{4\delta}{1/2+\delta}-1\right\}
\end{align} and furthermore
\begin{align}\label{ass:d}
d<\min\left \{\frac{\varepsilon_1\delta}{1/2+\delta},\frac{\varepsilon_1\delta\mu}4 ,\frac{1-\mu -2\delta\mu}{3}\right \}
\end{align}
as well as
\begin{align}\label{ass:e2}
\varepsilon_2<\min \left\{\frac{d}{6}\left (\frac{4\delta}{1/2+\delta}-1-\varepsilon_1\right ),\frac{1/2-\delta}{1+2\delta}\right\}.
\end{align}
We will show that for any choice of experts, at the end of the dissemination the majority will be labeled '1' thus proving robustness. Let therefore $\E=\E_1\cup\E_0$ be any set of experts as chosen by the iterative strong adversary and define
$$i_1:=|I\cap \mathcal{E}_{1}|, \quad j_{1}:=|J\cap \mathcal{E}_{1}|, \quad o_{1}:=|O\cap \mathcal{E}_{1}|, \quad p_1:=|P\cap \mathcal{E}_{1}|, \quad d_1:=|D\cap \mathcal{E}_{1}|$$
as well as
$$i_0:=|I\cap \mathcal{E}_{0}|, \quad j_{0}:=|J\cap \mathcal{E}_{0}|, \quad o_{0}:=|O\cap \mathcal{E}_{0}|, \quad p_0:=|P\cap \mathcal{E}_{0}|, \quad d_0:=|D\cap \mathcal{E}_{0}|.$$
By definition of the model we have that $|\mathcal{E}_{1}|=\left( {1}/{2}+\delta\right )\mu n$ as well as $|\mathcal{E}_{0}|=\left( {1}/{2}-\delta\right )\mu n$ and therefore
\begin{align}\label{eq:1}
i_1+j_1+o_1+p_1+d_1=\left(\frac{1}{2}+\delta\right )\mu n\quad\text{and}\quad i_0+j_0+o_0+p_0+d_0=\left(\frac{1}{2}-\delta\right )\mu n,
\end{align}
which readily implies that
\begin{align}\label{eq:2}
2\delta\mu n=\frac{2\delta}{1/2+\delta}(i_1+j_1+p_1+o_1+d_1).
\end{align}
We will see that the iterative dissemination is finished after only two rounds. We start by determining the label of each vertex in the different components after the first round of dissemination.
This is decided by the difference in `0'/`1' labeled expert neighbours. Consider the difference $$
\Delta(v):=|N(v)\cap \mathcal{E}_{1}|-|N(v)\cap \mathcal{E}_{0}|, \quad v\in V.
$$
In particular, $\Delta(v)> 0$ means that $v\in V\setminus (\mathcal{E}_1\cup \mathcal{E}_0)$ will be labeled `1' and $\Delta(v)<0$ means it will be labeled `0'. Note that vertices $v$ with $\Delta(v)=0$ could be either labeled randomly (if they have the same positive number of '0'/'1' labeled neighbours) or not at all in this round.
We begin with a vertex $v\in O$. Using the construction of $G$ and \eqref{eq:1} we get
\begin{align*}
|N(v)\cap \mathcal{E}_{1}|&=i_1+ j_1+ p_1+o_1=\left (\frac{1}{2}+\delta\right )\mu n-d_1
\end{align*}
and similarly
\begin{align*}
|N(v)\cap\mathcal{E}_{0}|&=i_0+j_0+p_0+o_0=\left (\frac{1}{2}-\delta\right )\mu n-d_0.
\end{align*}
Combining these two equations, $d<\varepsilon_1<\delta\mu/2$ given by \eqref{ass:d} and \eqref{ass:e1} implies
$$\Delta(v)=2\delta\mu n-(d_1-d_0)>0\qquad \forall~v\in O.$$
We continue with $v\in J$. Using Lemma \ref{exp_lem}, we get that for all $\varepsilon>0$ whp there is $J_P\subset J,~ |J_P|\le 4\varepsilon^{-3}(\ln \varepsilon^{-1} +2)$ such that
$$\big||N(v)\cap P\cap \E_1|-p_{JP}\cdot p_1\big|\le\varepsilon\cdot p_{JP}\cdot p_1+ \varepsilon n \quad \text{for all } v\in J\setminus J_P.$$
As $\varepsilon>0$ is arbitrary we infer that
$$|N(v)\cap P\cap \E_1|=p_{JP}\cdot p_1+o(n)\quad \text{for all } v\in J\setminus J_P.$$
Completely analogous calculations for $I$ and $\E_0$ yield that whp there is $J'\subset J,~ |J'|~=o(n)$ such that for all $v\in J\setminus J'$
\begin{align*}
|N(v)\cap \mathcal{E}_{1}|&=p_{IJ}\cdot i_1+ j_1+ p_{JP}\cdot p_1+o_1+o(n)\\&=
\left (\frac{1}{2}+\delta\right )\mu n-(1-p_{IJ})i_1-(1- p_{JP})p_1-d_1+o(n)
\end{align*}
and
\begin{align*}
|N(v)\cap\mathcal{E}_{0}|&=p_{IJ}\cdot i_0+ j_0+ p_{JP}\cdot p_0+o_0+o(n)\\&=
\left (\frac{1}{2}-\delta\right )\mu n-(1-p_{IJ})i_0-(1- p_{JP})p_0-d_0+o(n).
\end{align*}
Computing the difference of the above expressions we get for all $v\in J\setminus J'$
\begin{align*}
\Delta(v)&=2\delta\mu n-(1-p_{IJ})(i_1-i_0)-(1- p_{JP})(p_1-p_0)-(d_1-d_0)+o(n)\\
&=2\delta\mu n-\left (\frac{2\delta}{1/2+\delta}-\varepsilon_1\right )\Big ((i_1-i_0)+(p_1-p_0)\Big )-(d_1-d_0)+o(n)\\
&= 2\delta\mu n-\frac{2\delta}{1/2+\delta}\Big ((i_1-i_0)+(p_1-p_0)\Big )+\varepsilon_1\Big ((i_1-i_0)+(p_1-p_0)\Big )-(d_1-d_0)+o(n).
\end{align*}
Applying \eqref{eq:2} and \eqref{eq:1} we can obtain a lower bound for $\Delta(v),v\in J\setminus J'$
\begin{align*}
\Delta(v)&\ge \frac{2\delta}{1/2+\delta}(j_1+o_1+d_1+i_0+p_0)+\varepsilon_1\Big ((i_1-i_0)+(p_1-p_0)\Big )-d_1+o(n)\\
&\ge \varepsilon_1(i_1+j_1+p_1+o_1+d_1)+\left ( \frac{2\delta}{1/2+\delta}-\varepsilon_1\right )(i_0+p_0)-d_1.
\end{align*}
According to \eqref{ass:e1} and \eqref{ass:d} we have $\varepsilon_1<2\delta/(1/2+\delta)$ as well as $d<\varepsilon_1\delta\mu /4$ and therefore
\begin{align*}
\Delta(v)\ge \varepsilon_1\cdot 2\delta\mu n -dn>0\qquad \forall~ v\in J\setminus J'.
\end{align*}
Next we look at $v\in I.$ Using again Lemma \ref{exp_lem} and~\eqref{eq:1} we infer that whp there is $I'\subset I,~ |I'|~=o(n)$ such that for all $v\in I\setminus I'$
\begin{align*}
|N(v)\cap \mathcal{E}_{1}|&=i_1+p_{IJ}\cdot j_1+p_{IP}\cdot p_1+o_1+o(n)\\&=\left (\frac{1}{2}+\delta\right )\mu n-\left (1-p_{IJ}\right )j_1-\left (1-p_{IP}\right )p_1-d_1+o(n)
\end{align*}
and
\begin{align*}
|N(v)\cap\mathcal{E}_{0}|&=i_0+p_{IJ}\cdot j_0+p_{IP}\cdot p_0+o_0+o(n)\\&=\left (\frac{1}{2}-\delta\right )\mu n-\left (1-p_{IJ}\right )j_0-\left (1-p_{IP}\right )p_0-d_0+o(n).
\end{align*}
By combining those bounds we obtain for all $v\in I\setminus I'$
\begin{equation}\label{eq:5}
\begin{aligned}
\Delta(v)
&=
2\delta \mu n-\left (1-p_{IJ}\right )(j_1-j_0)-\left (1-p_{IP}\right )(p_1-p_0)-(d_1-d_0)+o(n) \\
& \hspace{-6mm}=
2\delta\mu n-\frac{2\delta}{1/2+\delta}\Big ((j_1-j_0)+(p_1-p_0)\Big )-(d_1-d_0)+\varepsilon_{1}(j_1-j_0)-\varepsilon_{2}(p_1-p_0)+o(n).
\end{aligned}
\end{equation}
Before we conclusively determine $\Delta(v)$ for $ v\in I\setminus I'$ we look at vertices $v\in P$. Using once more Lemma~\ref{exp_lem} and~\eqref{eq:1} we infer that whp there is $P'\subset P,~ |P'|~=o(n)$ such that for all $v\in P\setminus P'$
\begin{align*}
|N(v)\cap \mathcal{E}_{1}|&=p_{IP}\cdot i_1+p_{JP}\cdot j_1+ p_1+o_1+o(n)\\&=\left (\frac{1}{2}+\delta\right )\mu n-\left (1-p_{IP}\right )i_1-\left (1-p_{JP}\right )j_1-d_1+o(n)
\end{align*}
and
\begin{align*}
|N(v)\cap\mathcal{E}_{0}|&=p_{IP}\cdot i_0+p_{JP}\cdot j_0+ p_0+o_0+o(n)\\&=\left (\frac{1}{2}-\delta\right )\mu n-\left (1-p_{IP}\right )i_0-\left (1-p_{JP}\right )j_0-d_0+o(n).
\end{align*}
Together these two expressions yield for all $v\in P\setminus P'$
\begin{align}\label{eq:4}
\Delta(v)&= 2\delta\mu n-\frac{2\delta}{1/2+\delta}\Big((i_1-i_0)+(j_1-j_0)\Big)-(d_1-d_0)+\varepsilon_{1}(j_1-j_0)-\varepsilon_{2}(i_1-i_0)+o(n).
\end{align}
We argue next that either ``$\Delta(v)<0 \text{ for some }v\in I\setminus I'\,$'' or ``$\Delta(v)<0 \text{ for some }v\in P\setminus P'\,$'' may hold, but never both. To see this, observe that
$$\text{``}\Delta(v)<0 \text{ for some }v\in I\setminus I' \quad\text{and}\quad \Delta(v)<0 \text{ for some }v\in P\setminus P' \, \text{''}$$ implies that
\begin{align}\label{eq:3}
(i_1-i_0)+(j_1-j_0) \quad \text{and}\quad (j_1-j_0)+(p_1-p_0) \quad \text{are both}\quad \ge((1/2+\delta)\mu-\varepsilon_1 )n.
\end{align}
Otherwise~\eqref{ass:d} and \eqref{ass:e2} assert that $\varepsilon_2< d/6$ as well as $d< \varepsilon_1\delta/(1/2+\delta)$ and therefore either by \eqref{eq:5}
\begin{align*}
\Delta(v)&\ge \frac{2\delta}{1/2+\delta}\varepsilon_1 n-dn-\varepsilon_{2}n>0,\qquad\text{for all } v\in I\setminus I'
\end{align*}
or by \eqref{eq:4}
\begin{align*}
\Delta(v)&\ge \frac{2\delta}{1/2+\delta}\varepsilon_1 n-dn-\varepsilon_{2}n>0,\qquad \text{for all } v\in P\setminus P'.
\end{align*}
However, as $(i_1-i_0)+(p_1-p_0)\le (1/2+\delta)\mu n$ we obtain from~\eqref{eq:3} that $j_1-j_0\ge (\delta\mu -\varepsilon_1)n$. Then~\eqref{ass:e1}, \eqref{ass:d} and \eqref{ass:e2} imply that $\varepsilon_1< \delta\mu/2, \, d<\varepsilon_1\delta\mu/4$ and $\varepsilon_2< d/6$ and thus \eqref{eq:5} yields
\begin{align*}
\Delta(v)&\ge \varepsilon_1\cdot (\delta\mu-\varepsilon_1 )n-dn- \varepsilon_{2}n>0,\qquad \text{for all } v\in I\setminus I'.
\end{align*}
Summarizing, we have shown that $\Delta(v)>0$ for all $v\in (J\setminus J')\cup O$ and either $\Delta(v)>0$ for all $v\in I\setminus I'$ or $\Delta(v)>0$ for all $v\in P\setminus P'$.
In the rest of the proof we consider the second round of the iterative dissemination process. We will distinguish two cases. Assume first that $j_0+ d_0< (d/2-\varepsilon_2)n$. As $\Delta(v)>0$ for all $v\in J\setminus J'$ we infer that at most $(d/2-\varepsilon_2)n+o(n)$ vertices in $D$ will be labeled `0' after the second round of the dissemination process, all other vertices in $D$ will be labeled `1'. Thus counting the total number of vertices labeled `1' after the process, we get in this case for $n$ large enough
\begin{align*}
\#(\text{vertices labeled `1'})&>|I\setminus I'|\ +\ |J\setminus J'|\ +\ |O|\ +\left (\frac{d}{2}+\varepsilon_2-o(1)\right) n-|\mathcal{E}_0|\\
&=(1-d)\frac{n}{2}+\frac{dn}{2}+\varepsilon_2n-o(n)>\frac{n}{2}.
\end{align*}
We are left with the case $j_0+d_0\ge (d/2-\varepsilon_2)n$. Observe that $d_1<(d/2+\varepsilon_2)n$ as otherwise the conclusion of the previous case applies. We revisit $\Delta(v),\ v\in P\setminus P'$ using \eqref{eq:4} and \eqref{eq:2}
\begin{align*}
\Delta(v)&=\frac{2\delta}{1/2+\delta}(p_1+o_1+d_1+i_0+j_0)-(d_1-d_0)+\varepsilon_{1}(j_1-j_0)-\varepsilon_{2}(i_1-i_0)+o(n)\\
&\ge \left (\frac{2\delta}{1/2+\delta}-\varepsilon_1\right )j_0+d_0 - \left (1-\frac{2\delta}{1/2+\delta}\right ) d_1-\varepsilon_{2}i_1+o(n).
\end{align*}
Using the assumptions $j_0+d_0\ge (d/2-\varepsilon_2)n$ and $d_1<(d/2+\varepsilon_2)n$, this simplifies to
\begin{align*}
\Delta(v)&> \left (\frac{4\delta}{1/2+\delta}-1-\varepsilon_1\right )dn/2-3\varepsilon_{2}n+o(n).
\end{align*}
Assumption \eqref{ass:e2} guarantees that $\Delta(v)>0,\ v\in P\setminus P'$ and thus in this case for $n$ large enough
\begin{align*}
\#( \text{vertices labeled '1'})&>|I\setminus I'|\ +\ |J\setminus J'|\ +\ |O|\ +\ |P\setminus P'|\ -\ |\mathcal{E}_0|\\
&=(1-d)n-\left (\frac{1}{2}-\delta\right )\mu n-o(n)>\frac{n}{2},
\end{align*}
and the proof is completed.
\end{proof}
The next lemma together with Lemma \ref{lemma_robust} implies Theorem \ref{conjecture}.
\begin{lemma}\label{lemma_non_robust}
For all $0<\mu<1/2$ and $1/6<\delta<1/2$ there are values $\varepsilon_1,\varepsilon_2, d>0$ such that whp $G$ is not robust against the non-iterative strong adversary.
\end{lemma}
\begin{proof}
Let $0<\mu<1/2$ and $1/6<\delta<1/2$ and $\varepsilon_1,\, \varepsilon_2,\, d>0$ as given in \eqref{ass:e1} to \eqref{ass:e2}.
We show that $G$ is indeed not robust by giving a suitable choice of the expert set. Set $\E=\E_1\cup\E_0$ with $\mathcal{E}_{1}=I$ and $\mathcal{E}_{0}=O$; by construction, these sets have the cardinalities required by the model.
We compute the quantity $$\Delta(v)=|N(v)\cap \E_1|-|N(v)\cap \E_0|$$ for vertices $v\in P$ to determine their labels.
Using $\varepsilon_2< (1/2-\delta)/(1+2\delta)$ by \eqref{ass:e2} and Theorem \ref{Chernoff} we readily obtain that whp for all $v\in P$
\begin{align*}
\Delta(v)\le (p_{IP}+ o(1))|\E_1|-|\E_0|=\left (\frac{1/2-\delta}{1/2+\delta}-\varepsilon_{2}+ o(1)\right )|\E_1|-|\E_0|=-(\varepsilon_2+o(1))|\E_1|<0.
\end{align*}
Therefore the set of vertices labeled `0' contains $O\cup P$, which has cardinality $(1-d)\frac{n}{2}$. However, vertices in $D$ do not have any expert neighbours, and as we are in the non-iterative setting, those vertices are labeled uniformly at random. Hence with positive probability (bounded away from zero) there will be at least $dn/2+1$ vertices labeled `0' in $D$, and therefore $G$ is not robust against the non-iterative strong adversary.
\end{proof}
\phantomsection
\bibliographystyle{abbrv}
\small
\section{Introduction}
\label{sec:intro}
Photo-realistic image synthesis via Generative Adversarial Networks is an important problem in computer vision and graphics. Specifically, synthesizing high-fidelity and editable portrait images has gained considerable attention in recent years. Two main classes of methods have been proposed: 2D GAN image generation and 3D-aware image synthesis techniques.
\begin{figure}[htbp]
\centering
\includegraphics[width=\textwidth]{Images/teaser_v11.png}
\caption{View-consistent portrait editing. Given the proposed FENeRF generator, we invert a given reference image (top left) into the shape and texture latent spaces to obtain free-view portraits (first row). We can modify the rendered semantic masks and then leverage GAN inversion again to edit the free-view portraits (second row). Finally, we can replace the optimized texture code with the one from another image (bottom left) to perform style transfer (bottom row).}
\label{fig: teaser}
\end{figure}
Despite their great success in synthesizing highly realistic and even locally editable images, 2D GAN methods ignore the projection or rendering process of the underlying 3D scene, which is essential for view consistency. Consequently, they produce inevitable artifacts when changing the viewpoint of generated portraits. To overcome this issue, Neural Radiance Fields (NeRF)~\cite{mildenhall2020nerf} have been explored for 3D-aware image synthesis. Some of these methods~\cite{schwarz2020graf, chan2021pi} adopt a vanilla NeRF generator to synthesize free-view portraits that are not editable, and the results can be blurry.
Others employ volumetric rendering to first produce view-consistent 2D feature maps, and then use an additional 2D decoder to obtain the final highly realistic images. Nevertheless, such methods suffer from additional view-dependent artifacts introduced by the 2D convolutions, as well as a mirror-symmetry problem. To this end, the concurrent work CIPS-3D~\cite{zhou2021cips3d} replaces the 2D convolutions with an implicit neural representation (INR) network. Unfortunately, none of the existing 3D-aware GANs supports interactive local editing of the generated free-view portraits.
In this paper, we propose a generator that produces strictly view-consistent portraits while supporting interactive local editing. We adopt a noise-to-volume scheme: the generator takes as input decoupled shape and texture latent codes and generates a 3D volume in which the facial semantics and texture are spatially aligned via the shared geometry. Since a learnable 3D positional feature embedding is exploited while generating the texture volume, more details are preserved in the synthesized portraits.
Directly learning this 3D volume representation is challenging due to the absence of suitable, large-scale 3D training data. A possible solution is to use multi-view images~\cite{chen2020sofgan}. Nonetheless, such limited training data harms the representation ability of the 3D semantic volume. To overcome this issue, we make use of monocular images with paired semantic masks, which are widely available. Specifically, color and semantic discriminators are employed to supervise the training of the NeRF generator. The color discriminator focuses on image details and hence improves image fidelity. The semantic discriminator takes as input a pair of an image and a semantic map to enforce the alignment of corresponding content in the 3D volume. Thanks to the spatially aligned 3D representation, we can use the semantic map to locally and flexibly edit the 3D volume via GAN inversion. In addition, an insight here is that learning the semantic and texture representations simultaneously helps to generate more accurate 3D geometry.
To illustrate the effectiveness of the proposed method, we evaluate on two widely used public datasets: CelebAMask-HQ and FFHQ. As shown in the experiments, the FENeRF generator outperforms state-of-the-art methods in several aspects. In addition, it supports various downstream tasks. To facilitate further research, we will release our code and models upon acceptance. To summarize, our main contributions are as follows:
\begin{itemize}
\item We present the first portrait image generator that is locally editable and strictly view-consistent, benefiting from the 3D representation in which the semantics, geometry and texture are spatially-aligned.
\item We train the generator with paired monocular images and semantic maps without the requirement of multi-view or 3D data. This ensures data diversity and enhances the representation ability of the generator.
\item In experiments, we reveal that jointly learning the semantic and texture volumes helps to generate finer 3D geometry.
\end{itemize}
\begin{figure*}[htbp]
\centering
\vspace{0.5cm}
\includegraphics[width=\textwidth]{Images/method_v8.png}
\captionof{figure}{Overall pipeline of FENeRF. Our generator produces the spatially aligned density, semantic and texture fields conditioned on the disentangled latent codes $\mathbf{Z}_{s}$ and $\mathbf{Z}_{t}$. The positional feature embedding $\mathbf{e}_{coord}$ is also injected into the network, together with the view direction, for color prediction to preserve high-frequency details in the generated image. By sharing the same density, aligned RGB images and semantic maps are rendered. Finally, two discriminators $D_s$ and $D_c$ are fed with semantic map/image pairs and real/fake image pairs, and trained with the adversarial objectives $L_{D_s}$ and $L_{D_c}$, respectively.}
\label{fig:pipeline}
\end{figure*}
\section{Related work}
\label{sec:Related Work}
\noindent \textbf{Neural Implicit Representations.}
Recently, neural implicit scene representations have boosted various 3D perception tasks, such as 3D reconstruction and novel view synthesis, thanks to their spatial continuity and memory efficiency. \cite{michalkiewicz2019implicit, mescheder2019occupancy, park2019deepsdf} represent scenes or objects as occupancy fields or signed distance functions, and 3D data is required for supervision. \cite{mildenhall2020nerf} models a scene as a neural radiance field baked into the weights of MLPs. With differentiable numerical integration for volume rendering, NeRF can be trained on posed images alone. Various follow-ups extend NeRF to faster training and testing \cite{garbin2021fastnerf, reiser2021kilonerf, yu2021plenoctrees, hedman2021baking, cole2021differentiable}, pose-free settings \cite{lin2021barf, meng2021gnerf}, dynamic scenes \cite{yu2021pixelnerf, chen2021mvsnerf} and animatable avatars \cite{peng2021animatable, liu2021neural, guo2021ad}. \cite{zhi2021place} extends NeRF with a semantic segmentation renderer and boosts the performance of semantic interpretation. In this work, we build a generative semantic field aligned with a neural radiance field. Instead of focusing on scene semantic understanding, we utilize the spatial alignment of facial texture and semantics to achieve semantic-guided attribute editing\vspace{1mm}.
\noindent\textbf{Face Image Editing with 2D GANs.}
Generative Adversarial Networks (GANs) are widely used in photo-realistic face editing. Inspired by image-to-image translation, conditional GANs take semantic masks \cite{fedus2018maskgan, isola2017image, park2019semantic, zhu2020sean} or hand-drawn sketches \cite{chen2020deepfacedrawing, li2020deepfacepencil, chen2021deepfaceediting} as conditions for the interactive editing of face images. SPADE \cite{park2019semantic} utilizes efficient spatially-adaptive normalization to synthesize photorealistic face images given input semantic layouts. SEAN \cite{zhu2020sean} further enables semantic region-based styling and more flexible facial editing. In order to provide explicit control of 3D-interpretable semantic parameters (\eg pose, expression, illumination), several recent approaches decompose the image generation space into multiple specific attributes based on 3D guidance \cite{leimkuhler2021freestylegan, tewari2020stylerig, chen2020sofgan}. SofGAN~\cite{chen2020sofgan} proposes a semantic occupancy field to render view-consistent semantic maps, which provide geometric constraints on image synthesis. However, SofGAN still lacks an interpretation of 3D geometry, and a considerable number of semantically labelled 3D scans is required for training the semantic renderer. Instead, our FENeRF is trained end-to-end in an adversarial manner without any 3D data or multi-view images. Moreover, we show that our semantic rendering has better view consistency\vspace{1mm}.
\noindent\textbf{3D-Aware Image Synthesis.} Despite the tremendous breakthroughs in image generation by deep adversarial models \cite{karras2019style, karras2020analyzing, isola2017image, park2019semantic, zhu2020sean, collins2020editing, tewari2020stylerig}, those methods mainly manipulate shape and texture in 2D space without an understanding of the 3D nature of objects and scenes, resulting in limited pose control. To this end, 3D-aware image synthesis methods lift image generation into 3D with explicit camera control. Early approaches \cite{zhu2018visual, gadelha20173d} utilize explicit voxel or volume representations and are thus limited in resolution. Recently, neural implicit scene representations have been integrated into generative adversarial models, enabling better memory efficiency and multi-view consistency~\cite{chan2021pi, schwarz2020graf, niemeyer2021giraffe}. In particular, $\mathrm{\pi}$-GAN \cite{chan2021pi} presents a SIREN-based neural radiance field conditioned on a global latent code which entangles geometry and texture. GRAF \cite{schwarz2020graf} and GIRAFFE \cite{niemeyer2021giraffe} enable disentangled control of texture and geometry, but only at a global level, and thus do not support user-interactive local editing. By contrast, our FENeRF enables both globally independent styling of texture and geometry and local facial attribute editing while preserving view consistency.
\section{Method}
\subsection{Locally Editable NeRF Generator}
\label{sub: 3.1}
Our goal is to enable semantic-guided facial editing in 3D space. The main challenges are: 1) we need to decouple shape and texture during image generation; 2) the semantic map has to be strictly aligned with geometry and texture in 3D space. To this end, FENeRF exploits two separate latent codes: the shape latent code controls the geometry and semantics, while the texture code controls the appearance in the texture volume. Moreover, we exploit a three-head architecture in the presented generator to individually encode the semantics and texture, which are aligned with the underlying geometry depicted in the density volume. We formulate our generator as follows:
{\setlength\abovedisplayskip{1pt}
\begin{equation}
\begin{split}
G:
(\mathbf{x}, \mathbf{d}, \mathbf{z}_s, \mathbf{z}_t, \mathbf{e}_{coord}) \mapsto (\mathbf{\sigma}, \mathbf{c}, \mathbf{s}).
\end{split}
\end{equation}}
As illustrated in Fig.~\ref{fig:pipeline}, the proposed generator is parameterized as a Multi-Layer Perceptron (MLP), which takes as input the 3D point coordinates $\mathbf{x} = (x, y, z)$, viewing direction $\mathbf{d} = (\theta, \phi)$ and the learned positional feature embedding~$\mathbf{e}_{coord}$. It then generates the view-invariant density $\mathbf{\sigma} \in \mathbb{R}^+$ and semantic labels $\mathbf{s} \in \mathbb{R}^k$ conditioned on the shape latent code~$\mathbf{z}_s$, as well as the view-dependent colour $\mathbf{c} \in \mathbb{R}^3$ conditioned on the texture code~$\mathbf{z}_t$.
We also utilize a mapping network that maps sampled codes into an intermediate latent space $\mathcal{W}$ and outputs frequencies $\mathbf{\gamma}$ and phase shifts $\mathbf{\beta}$, which control the generator through feature-wise linear modulation as in \cite{chan2021pi, perez2018film}:
\begin{align}
\Phi(\mathbf{x})=\mathbf{W}_n(\phi_{n-1} \circ \phi_{n-2} \circ ... \circ \phi_{0})(\mathbf{x}) + \mathbf{b}_n \\
\mathbf{x}_i \mapsto \phi_{i}(\mathbf{x}_i) = \mathrm{sin}(\mathbf{\gamma}_i \cdot (\mathbf{W}_i\mathbf{x}_i + \mathbf{b}_i) + \mathbf{\beta}_i),
\end{align}
\noindent where $\phi_i: \mathbb{R}^{M_i} \mapsto \mathbb{R}^{N_i}$ is the $i^{th}$ layer of the network. The input $\mathbf{x}_i \in \mathbb{R}^{M_i}$ is transformed by the weight matrix $\mathbf{W}_i \in \mathbb{R}^{N_i \times M_i}$ and the biases $\mathbf{b}_i \in \mathbb{R}^{N_i}$, and then modulated by the sine nonlinearity.
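As a concrete illustration, one such modulated layer can be sketched in a few lines of numpy (our addition; the weights, frequencies and phase shifts below are random stand-ins for the mapping-network outputs, purely for illustration):

```python
# Minimal numpy sketch (not the paper's implementation) of one
# FiLM-modulated SIREN layer: phi_i(x) = sin(gamma * (W x + b) + beta).
import numpy as np

def film_siren_layer(x, W, b, gamma, beta):
    """Apply one sine layer with per-channel frequency/phase modulation."""
    return np.sin(gamma * (W @ x + b) + beta)

rng = np.random.default_rng(0)
x = rng.normal(size=3)          # a 3D point coordinate (M_i = 3)
W = rng.normal(size=(64, 3))    # W_i in R^{N_i x M_i}
b = rng.normal(size=64)         # b_i in R^{N_i}
gamma = rng.normal(size=64)     # frequencies from the mapping network
beta = rng.normal(size=64)      # phase shifts from the mapping network
h = film_siren_layer(x, W, b, gamma, beta)
```

By construction the layer output lies in $[-1, 1]$ per channel, a property of the sine nonlinearity.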
Nevertheless, utilising only the SIREN-based network generates images lacking details. Therefore, we introduce a learnable 3D feature grid to compensate for high-frequency image details. Specifically, to predict the color of a 3D point $\mathbf{x}$ with 2D view direction $\mathbf{d}$, we sample a local feature vector $\mathbf{e}_{coord}^{\mathbf{x}}$ from the feature grid by bi-cubic interpolation and feed it into the color branch as an additional input. As shown in Fig.~\ref{fig:effect_fg}, this helps to preserve finer-grained image details.
Once the semantic, density and colour fields are generated, we can render them into a semantic map and a portrait image from arbitrary camera poses via volume rendering. For each 3D point, we first query its color $\mathbf{c}$, semantic labels $\mathbf{s}$ and volume density $\mathbf{\sigma}$. To obtain the pixel color $\mathbf{C}(\mathbf{r})$ and semantic label probabilities $\mathbf{S}(\mathbf{r})$, the values of all samples along the ray are accumulated using the classical volume rendering process. The rendering equations are as follows:
\begin{equation}
\label{eq: color rendering}
\begin{aligned}
\mathbf{C}(\mathbf{r})=\int_{t_n}^{t_f}T(t)\sigma(\mathbf{r}(t))\mathbf{c}(\mathbf{r}(t), \mathbf{d})dt, \\
\end{aligned}
\end{equation}
\begin{equation}
\label{eq: semantic rendering}
\begin{aligned}
\mathbf{S}(\mathbf{r})=\int_{t_n}^{t_f}T(t)\sigma(\mathbf{r}(t))\mathbf{s}(\mathbf{r}(t), \mathbf{d})dt, \\
\end{aligned}
\end{equation}
\noindent where $T(t)=\mathrm{exp}\left(-\int_{t_n}^{t}\sigma(\mathbf{r}(s))ds\right).$ In practice, we approximate Eq.~\ref{eq: color rendering} and Eq.~\ref{eq: semantic rendering} in a discretized form following NeRF \cite{mildenhall2020nerf}. Note that the three branches for semantics, density and texture share the same intermediate features, and the output density is shared in both the colour and the semantic rendering, ensuring that the generated semantics, density and texture are exactly aligned in 3D space.
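The discretized quadrature can be sketched as follows (our addition; the densities and sample spacings are illustrative). The key point is that one set of density-derived weights composites both the colors and the semantic values:

```python
# Sketch (not the paper's code) of NeRF-style alpha compositing along a
# ray; the same weights are reused for color and semantics (shared density).
import numpy as np

def composite(sigma, values, deltas):
    """sigma: (N,) densities, values: (N, C), deltas: (N,) sample spacings."""
    alpha = 1.0 - np.exp(-sigma * deltas)                # per-sample opacity
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alpha)))[:-1]  # T_i
    weights = trans * alpha
    return (weights[:, None] * values).sum(axis=0), weights

sigma = np.array([0.0, 5.0, 50.0, 5.0])      # shared density along one ray
deltas = np.full(4, 0.1)
colors = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 1]], float)
sem = np.eye(4)                               # one-hot semantic values
C, w = composite(sigma, colors, deltas)       # pixel color
S, _ = composite(sigma, sem, deltas)          # semantic label probabilities
```

Because the semantic values here are one-hot, the composited semantics coincide with the compositing weights themselves, making the shared-density alignment explicit.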
\subsection{Discriminators}
\label{sub: 3.2}
In order to learn the unsupervised 3D representations, we design two discriminators $D_c$ and $D_s$, both parameterized as CNNs with leaky ReLU activations \cite{karras2017progressive}. $D_c$ discriminates the fidelity of the generated portraits. $D_s$ takes semantic masks, in addition to face images, as input; this encourages the alignment of face appearance and semantics. Moreover, we append two output channels to $D_c$ to predict the camera pose, and apply a camera pose correction loss against the sampled poses.
\begin{figure}[t!]
\centering
\vspace{0.2cm}
\includegraphics[width=\textwidth]{Images/geometry_v3.png}
\caption{Comparison on geometry interpretation with $\mathrm{\pi}$-GAN. $\mathrm{\pi}$-GAN fails to learn accurate geometry (e.g. facial boundaries, hair, background) and suffers from serious artifacts. By contrast,
benefiting from the semantic guidance, FENeRF generates accurate and smooth geometry without any specific regularization. Moreover, FENeRF enables a clear decoupling of the generative 3D face from the background.}
\label{fig:geometry}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[width=\textwidth]{Images/miou_v3.png}
\caption{Average mIoU score during semantic inversion (left) and inversion results in free-view (right). We randomly select 1000 real face images from CelebAMask-HQ\cite{CelebAMask-HQ} and invert them into the shape latent space. The left chart illustrates the average mIoU score of the 1000 inverted semantic maps with respect to the iterations during semantic inversion. We visualize one reference portrait with its free-viewed inverted semantic maps on right.}
\label{fig:miou}
\end{figure}
\subsection{Training}
\label{sub: 3.3}
During training, we randomly sample camera poses $\xi \sim p_{\xi}$ and latent codes $\mathbf{z}_s, \mathbf{z}_t \sim \mathcal{N}(0,I)$. We approximate the camera pose distribution as Gaussian and set the pose range as a prior according to \cite{schwarz2020graf, niemeyer2021giraffe, chan2021pi}. The camera positions are sampled on the surface of an object-centered sphere, and the camera is always directed towards the origin.
Our training loss is composed of three parts:
\begin{equation}
\vspace{0.2cm}
\begin{aligned}
\mathcal{L}_{D_c} =
&\mathbb{E}_{\mathbf{z}_s, \mathbf{z}_t \sim \mathcal{N}, \mathbf{\xi} \sim p_{\xi}}[f(D_{c}(\mathbf{x}_c))] + \\
& \mathbb{E}_{\mathbf{I} \sim p_i}[f(-D_{c}(\mathbf{I})) + \\
&\lambda_c\|\mathbf{\bigtriangledown}D_{c}(\mathbf{I})\|^2]
\end{aligned}
\label{eq:image discriminator loss}
\end{equation}
\begin{equation}
\vspace{0.2cm}
\begin{aligned}
\mathcal{L}_{D_s} =
&\mathbb{E}_{\mathbf{z}_s, \mathbf{z}_t \sim \mathcal{N}, \mathbf{\xi} \sim p_{\xi}}[f(D_{s}(\mathbf{x}_s, \mathbf{x}_c))] + \\ &\mathbb{E}_{\mathbf{I} \sim p_i, \mathbf{L} \sim p_l }[f(-D_{s}(\mathbf{L}, \mathbf{I})) + \\
&\lambda_s\|\mathbf{\bigtriangledown}D_{s}(\mathbf{L}, \mathbf{I})\|^2]
\end{aligned}
\label{eq:semantic discriminator loss}
\end{equation}
\begin{equation}
\begin{split}
\mathcal{L}_{G} =
&\mathbb{E}_{\mathbf{z}_s, \mathbf{z}_t \sim \mathcal{N}, \mathbf{\xi} \sim p_{\xi}}[f(D_{c}(\mathbf{x}_c))] + \\
&\mathbb{E}_{\mathbf{z}_s, \mathbf{z}_t \sim \mathcal{N}, \mathbf{\xi} \sim p_{\xi}}[f(D_{s}(\mathbf{x}_s, \mathbf{x}_c))] + \\
&\lambda_p\|\hat{\xi} - \xi\|
\end{split}
\label{eq:generator loss}
\end{equation}
\noindent where $f(t)=-\mathrm{log}(1+\mathrm{exp}(-t))$, $\mathbf{\lambda}_c, \mathbf{\lambda}_s, \mathbf{\lambda}_p =10$, and $p_i, p_l$ denote the distributions of real images $\mathbf{I}$ and semantic maps $\mathbf{L}$ in the datasets. The objectives of the image discriminator $D_c$, semantic discriminator $D_s$ and generator $G$ are to minimize $\mathcal{L}_{D_c}$, $\mathcal{L}_{D_s}$ and $\mathcal{L}_G$, respectively. $\mathcal{L}_{D_s}$, shown in Eq.~\ref{eq:semantic discriminator loss}, discriminates the paired image and semantic map and enforces their spatial alignment. While training the generator $G$ with $\mathcal{L}_{G}$, we stop gradient back-propagation from $D_s$ into the color branch, since this gradient would force the texture to match the semantics and lead to a loss of fine image details. We adopt the non-saturating GAN loss with $R_1$ gradient penalty~\cite{mescheder2018training}. Moreover, we apply a camera pose correction loss (the last term of Eq.~\ref{eq:generator loss}) to penalize the distance between the camera poses $\hat{\xi}$ and $\xi$, which are fed into the generator and predicted by the discriminator, respectively. This loss enforces that all 3D faces lie in the same canonical pose and encourages reliable 3D face geometry by avoiding pose drift.
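The two building blocks of these objectives, the logistic term $f(t)$ and the $R_1$ penalty on real samples, can be sketched numerically. This is an assumed toy setup (a linear discriminator with an analytic input gradient), shown only to make the loss terms concrete.

```python
import numpy as np

def f(t):
    """Non-saturating logistic term f(t) = -log(1 + exp(-t)),
    computed stably via logaddexp."""
    return -np.logaddexp(0.0, -t)

def r1_penalty(grad_real, lam=10.0):
    """R1 gradient penalty: lam * ||grad_x D(x)||^2 averaged over a batch
    of real samples. grad_real: (B, D) per-sample input gradients."""
    return lam * np.sum(grad_real ** 2, axis=1).mean()

# Toy linear discriminator D(x) = w . x, so grad_x D(x) = w everywhere
# (an assumed example; real discriminators need autodiff for this gradient).
w = np.array([0.5, -0.25])
real = np.array([[1.0, 2.0], [0.0, 1.0]])
grad = np.tile(w, (real.shape[0], 1))
loss_real = f(-real @ w).mean() + r1_penalty(grad)
```

The generator and discriminator objectives above are then sums of such `f(...)` terms over fake and real batches, with the penalty applied only to the real branch.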
In summary, our method builds a generative implicit representation which jointly encodes facial geometry, texture and semantics in a spatially aligned 3D volume. We further introduce a learnable feature grid for fine-grained image details. An auxiliary discriminator further enforces this alignment by taking paired synthesised images and semantic maps as input. Furthermore, we notice that semantic rendering significantly improves the quality of the synthesised facial geometry, as shown in Fig.~\ref{fig:geometry}.
\begin{figure}[t!]
\centering
\includegraphics[width=\textwidth]{Images/extreme_pose_v6.png}
\caption{Comparison of semantic maps with SofGAN at extreme poses. SofGAN produces incorrect semantic labels at extreme poses, leading to inconsistent artifacts in the synthesized images. Besides, SofGAN exhibits texture inconsistency (\eg inconsistent eye directions) when zooming in. Note that SofGAN is set to classify right and left attributes into the same class.}
\label{fig:extreme_pose}
\end{figure}
\section{Experiments}
\label{sec: Experiments}
\label{sub: 4.1}
\noindent\textbf{Datasets.}
We consider two datasets in our experiments for evaluation: CelebAMask-HQ~\cite{CelebAMask-HQ} and FFHQ~\cite{karras2019style}. CelebAMask-HQ contains 30,000 high-resolution face images from CelebA~\cite{liu2015deep}, each with a segmentation mask of facial attributes. The masks have a resolution of $512 \times 512$ and 19 classes, including skin, eyebrows, ears, mouth, lip, etc. FFHQ contains 70,000 high-quality face images, and we label their semantic classes with BiSeNet \cite{yu2018bisenet}\vspace{2mm}.
\noindent\textbf{Baselines.} We compare our model on image synthesis quality with three recent works for 3D-aware image synthesis: GRAF~\cite{schwarz2020graf}, pi-GAN~\cite{chan2021pi} and GIRAFFE~\cite{niemeyer2021giraffe}. We also compare the performance of semantic rendering with SofGAN~\cite{chen2020sofgan}, which learns the semantic field with multi-view data and edits the portrait in the 2D image plane. For the view consistency of inversion, we compare with InterFaceGAN~\cite{shen2020interfacegan} and E4E~\cite{tov2021designing}. We evaluate these approaches using their official implementations\vspace{2mm}.
\begin{figure}[htbp]
\centering
\vspace{-0.5cm}
\includegraphics[width=\textwidth]{Images/style_morphing_v3.png}
\caption{Visualization of disentangled morphing. We interpolate the texture and shape codes of images at the left bottom and right top. In each row, the texture code is interpolated while keeping the shape code constant. Similarly, the shape code varies across columns. We can see that the interpolated results are disentangled clearly in these two dimensions.}
\label{fig:morphing}
\end{figure}
\begin{figure*}[!h]
\centering
\includegraphics[width=\textwidth]{Images/style_mixing_trans_v2.png}
\caption{Global style editing. Style mixing (on left part): Given the Source 1 image and a sequence of source 2 images (on top row), the mixed face image in second row retains texture from source 1 and shape from source 2. Inversely, we show the mixed results with shape of source 1 and texture of source 2 sequence on bottom. Style transfer (on the right part): Given source and target images, we can also transfer the texture style of source image into the target one, and the free-view portraits are generated while facial geometry is preserved.}
\label{fig: style transfer}
\end{figure*}
\noindent\textbf{Implementation details.}
Our semantic radiance field is parameterized as MLPs with FiLM-conditioned SIREN layers~\cite{chan2021pi}. For the discriminators, CoordConv layers~\cite{liu2018intriguing} and residual connections~\cite{he2016deep} are utilized. During training, we initialize the learning rate to $6 \times 10^{-5}$ for the generator $G$, $2 \times 10^{-4}$ for the image discriminator $D_c$ and $1 \times 10^{-4}$ for the segmentation discriminator $D_s$. We use the Adam optimizer with $\beta_1 = 0, \beta_2 = 0.9$. We start training at $32 \times 32$ resolution with a batch size of 40; the resolution is then increased to $128 \times 128$ with a batch size of 24. Please refer to the supplemental material for more details.
\subsection{Comparisons}
\noindent\textbf{Quality evaluation of synthesized image.}
We conduct quantitative comparisons of synthesized images with the state-of-the-art 3D-aware GAN methods; the results are shown in Table~\ref{tab:comparison}. Fr\'echet Inception Distance (FID) \cite{heusel2017gans} and Kernel Inception Distance (KID) \cite{binkowski2018demystifying} are used to evaluate image quality. For a fair comparison, we retrain all models on both datasets with the full images (30k for CelebA-HQ and 70k for FFHQ) at $256 \times 256$ resolution. The FID score is calculated on 2048 randomly sampled images. We reach state-of-the-art performance on both datasets in terms of FID and KID. This improvement is attributed to: 1) the joint learning with the semantic field provides reliable semantic guidance and facilitates training convergence; 2) our learnable feature grid brings high-frequency details to the synthesized images\vspace{2mm}.
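At its core, FID is the Fr\'echet distance between two Gaussians fitted to Inception feature statistics. The sketch below, assumed for illustration, restricts to diagonal covariances so that the matrix square root reduces to an element-wise one and the computation stays NumPy-only; in practice FID uses the full covariance of 2048-dimensional Inception features.

```python
import numpy as np

def frechet_distance(mu1, cov1, mu2, cov2):
    """Frechet distance between two Gaussians with diagonal covariances
    (cov given as variance vectors). FID applies this formula to the
    Inception feature statistics of real and generated image sets."""
    diff = mu1 - mu2
    covmean = np.sqrt(cov1 * cov2)  # sqrt of the product, diagonal case
    return diff @ diff + np.sum(cov1 + cov2 - 2.0 * covmean)

# Illustrative statistics (assumed numbers, not measured features):
mu1, c1 = np.zeros(4), np.ones(4)
mu2, c2 = np.full(4, 0.5), np.full(4, 2.0)
d = frechet_distance(mu1, c1, mu2, c2)
```

Identical distributions give a distance of zero; larger values mean the generated feature statistics drift further from the real ones.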
\begin{table}
\resizebox{\linewidth}{!}{
\begin{tabular}{c|cc|cc}
\hline
\multicolumn{1}{l|}{} & \multicolumn{2}{c|}{\textbf{FID}$\downarrow$} & \multicolumn{2}{c}{\textbf{KID} ($\times 10^3$)$\downarrow$} \\ \cline{2-5}
& CelebA-HQ & FFHQ & CelebA-HQ & FFHQ \\ \hline
GRAF & 34.7 & 66.5 & 15.6 & 49.3 \\
pi-GAN & 14.7 & 40.3 & 3.9 & 23.5 \\
Giraffe & 16.2 & 31.9 & 9.1 & 32.7 \\
FENeRF & \textbf{12.1} & \textbf{28.2} & \textbf{1.6} & \textbf{17.3} \\ \hline
\end{tabular}}
\caption{Quantitative comparison of our approach with other 3D-aware GAN methods. Our method outperforms other methods in terms of FID and KID on both CelebA-HQ and FFHQ datasets.}
\label{tab:comparison}
\end{table}
\noindent\textbf{Rendering performance of the semantic field.}
In our framework, we construct a neural semantic radiance field for a generative 3D face and render both semantic maps and images from arbitrary viewpoints. To evaluate the performance of semantic rendering, we compare with SofGAN~\cite{chen2020sofgan}, which trains a generative semantic occupancy field supervised by labeled multi-view semantic maps. As Fig.~\ref{fig:extreme_pose} shows, SofGAN is prone to incorrect semantic classes at extreme poses, causing artifacts in the synthesized faces. We attribute the semantic inconsistency of SofGAN to the fact that its semantic rendering relies on surface reconstruction, which is highly ambiguous given pure semantic maps, even with multi-view observations. By contrast, FENeRF learns accurate geometry (Fig.~\ref{fig:geometry}) by benefiting from the joint semantic and image rendering, and thus maintains view consistency at such challenging poses. Besides, for SofGAN, although the semantic map remains consistent while zooming in, its synthesised images are not view consistent (\eg the eyes look in different directions). By contrast, FENeRF guarantees strict pixel-level view consistency of the synthesised images.
To further explore the inversion ability of semantic rendering, we randomly collect 1000 real portrait images and reproject them into the geometry latent space to recover a semantic field. The left chart in Fig.~\ref{fig:miou} illustrates that the average mIoU score of these 1000 portraits reaches 0.5 within 100 iterations and finally converges to over 0.7 within 200 iterations. This indicates that the geometry latent space of FENeRF covers various portrait shapes. From the visualized example we can see that the facial semantics are reconstructed accurately after 2000 iterations, with texture-aligned region boundaries and view consistency.
\subsection{Applications}
\label{sub: 4.3}
\noindent \textbf{Disentangled morphing and style mixing.} Our method enables disentangled control of geometry and texture through independent latent sampling. Fig.~\ref{fig:morphing} demonstrates the disentangled morphing along two independent directions. Specifically, we sample two sets of texture and shape codes $(\mathrm{Z}^1_t, \mathrm{Z}^1_s)$, $(\mathrm{Z}^2_t, \mathrm{Z}^2_s)$, which synthesize the images $\mathrm{I}^1, \mathrm{I}^2$ at the bottom-left and top-right corners of Fig.~\ref{fig:morphing}. Then we perform linear interpolation along these two latent directions and group the interpolated codes for image synthesis. Our approach also supports style mixing, demonstrated in Fig.~\ref{fig: style transfer} (left), proving the effectiveness of our disentangled facial representation\vspace{2mm}.
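Building the interpolation grid for such a morphing figure amounts to two independent linear interpolations. The sketch below is an assumed illustration (random 256-dimensional codes stand in for sampled latents); each grid cell pairs a shape code from one axis with a texture code from the other.

```python
import numpy as np

def lerp(a, b, t):
    """Linear interpolation between two latent codes."""
    return (1.0 - t) * a + t * b

# Hypothetical setup: two corner samples and a 5x5 morphing grid in which
# the shape code varies along one axis and the texture code along the other.
rng = np.random.default_rng(1)
zs1, zs2 = rng.standard_normal(256), rng.standard_normal(256)  # shape codes
zt1, zt2 = rng.standard_normal(256), rng.standard_normal(256)  # texture codes
steps = np.linspace(0.0, 1.0, 5)
grid = [[(lerp(zs1, zs2, u), lerp(zt1, zt2, v)) for v in steps] for u in steps]
```

Feeding each `(shape, texture)` pair to the generator produces one cell of the morphing figure, with the two factors varying independently.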
\noindent\textbf{3D inversion and style transfer.} Inversion is a popular application of 2D GANs, and we further lift it to 3D space and render in arbitrary poses. Though some recent conditional GANs \cite{leimkuhler2021freestylegan, shen2020interfacegan, wang2021high, tov2021designing} also support pose rotation after inversion, they suffer from inconsistent identity and texture during rotation. Fig.~\ref{fig: view consistency} compares FENeRF with InterFaceGAN and E4E on realistic inversion with pose rotations. InterFaceGAN reconstructs a realistic image for the input view but generates obvious artifacts (e.g. sticking texture), while E4E generates inconsistent shapes during camera rotation. Compared with these 2D GANs, FENeRF achieves better view consistency of identity and texture, with a slight drop in texture details. This is because we reproject the 2D image into a latent space that is bound to a 3D generative volume rather than a 2D plane; the latent code therefore controls facial properties independently of the camera pose. We also support face style transfer, as shown in Fig.~\ref{fig: style transfer}. Style transfer is more challenging than interpolation or mixing, since a real portrait's texture and shape can be far from the distribution of the latent spaces, leading to unrealistic artifacts. As shown in Fig.~\ref{fig: style transfer}, FENeRF succeeds in transferring the target texture to the source identity. Moreover, the style transfer results remain consistent at poses far from the input views\vspace{2mm}.
\begin{figure*}[!h]
\centering
\includegraphics[width=\textwidth]{Images/local_editing_v2.png}
\caption{\textbf{Interactive image manipulation.} Our method enables interactive image editing with semantic guidance. Given the source image, we invert it into texture and shape codes and obtain the inversion results. We manipulate facial attributes on the semantic map (e.g. haircut, eyes, nose and face shape, etc.) and leverage the GAN inversion again. We obtain the corresponding modified free-view portraits.}
\label{fig: local_editing}
\end{figure*}
\begin{figure}[!h]
\centering
\captionsetup{type=figure}
\includegraphics[width=\textwidth]{Images/view_consistency_v10.png}
\caption{View consistency of inversion. We compare FENeRF against SOTA methods, InterFaceGAN\cite{shen2020interfacegan} and E4E\cite{tov2021designing}. Obviously, when generated portraits rotate, InterFaceGAN suffers from the serious artifacts and E4E fails to preserve the face identity.}
\label{fig: view consistency}
\end{figure}
\noindent \textbf{3D local editing.} Both style mixing and style transfer manipulate the face in a global manner, demonstrating our disentangled control of global texture and geometry. We further find that the semantic field encourages the latent spaces of geometry and texture to be regionally disentangled, thanks to the spatially aligned 3D volume. To demonstrate this, we manipulate facial attributes by editing semantic maps interactively. Fig.~\ref{fig: local_editing} shows the results of facial attribute editing. We first invert a real face image into a reconstructed image and the accompanying semantic map. Then we edit this semantic map and reproject it into the geometry latent space by optimizing the shape code. Note that FENeRF is capable of manipulating facial attributes even under significant deformation (\eg the enlarged and shortened noses in Fig.~\ref{fig: local_editing}) while keeping other regions consistent in shape and texture. Moreover, the faces edited by FENeRF still retain strict view consistency\vspace{2mm}.
\subsection{Ablation Studies}
\label{sub: 4.4}
\noindent \textbf{Benefits of our joint framework.}
Recall that FENeRF renders face images together with semantic maps. We conduct experiments to explore whether this joint framework benefits both kinds of rendering. We first train FENeRF without image rendering: given a 3D point, we only query its density and semantic labels and render a single semantic map. In Fig.~\ref{fig:effect_semantic_rendering}, FENeRF without image rendering (a) fails to specify facial regions or converge to reliable geometry. By contrast, training with image rendering (b) solves these problems. We further explore the effect of semantic rendering on geometry quality. Panels (c) and (d) in Fig.~\ref{fig:effect_semantic_rendering} demonstrate that semantic rendering encourages the generated surface to be smooth and accurate. This result is consistent with Fig.~\ref{fig:geometry}\vspace{2mm}.
\noindent \textbf{Effects of learnable coordinate embeddings.}
To produce fine-grained image details, we introduce a learnable feature grid for the local sampling of coordinate embeddings ($e_{coord}$). We further conduct experiments to explore the effect of $e_{coord}$ and its injection position. As shown in Fig.~\ref{fig:effect_fg}, FENeRF without $e_{coord}$ (a) generates blurry teeth. Injecting $e_{coord}$ as in (b) generates sharper details but suffers from artifacts in the synthesised images and semantic maps, since $e_{coord}$ brings a high-frequency signal into the volume density when injected together with the coordinates at the beginning of the MLP. Variant (c) injects $e_{coord}$ into the color branch only, enabling both high-frequency image details and smooth semantics\vspace{2mm}.
\begin{figure}[t!]
\centering
\includegraphics[width=\textwidth]{Images/effect_joint_training.png}
\caption{Effect of the joint semantic and image rendering. Sub-figures (a) and (b) show the effect of image rendering with rendered semantic maps (top row) and depth maps (bottom row); sub-figures (c) and (d) show the effect of semantic rendering with rendered images (top row) and extracted meshes (bottom row). On one hand, learning the semantic and geometry fields with semantic map supervision alone is not feasible. On the other hand, our joint rendering enhances the quality of semantics, geometry and texture at the same time.}
\label{fig:effect_semantic_rendering}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[width=\textwidth]{Images/effect_fg_v5.png}
\caption{Ablation study on $e_{coord}$. Sub-figures (a) to (c) show the generated image details for three variants of $e_{coord}$. We zoom in on the synthesized mouths for clearer observation. (d) illustrates the injection positions of $e_{coord}$ in (b) and (c).}
\label{fig:effect_fg}
\end{figure}
\section{Limitations}
One known limitation is that our generator cannot produce HD portrait images, due to the computationally expensive ray casting and volume integration.
Besides, although GAN inversion is an effective method for local editing of the 3D volume, its iterative optimization is inefficient. As a result, real-time free-view portrait editing remains an open problem.
\section{Conclusion}
In this paper we present FENeRF, the first locally editable 3D-aware face generator based on an implicit scene representation. To use the semantic map as the editing interface, we introduce a semantic radiance field that implicitly aligns facial semantics and texture in 3D space through shared geometry. We show that FENeRF enables compelling applications including style mixing, style transfer and facial attribute editing, and we further extend them to free-view 3D rendering with explicit camera control. We hope our work opens a promising research direction for editable 3D-aware generative networks. In future work, we plan to increase the resolution of the synthesised free-view portraits and study dedicated 3D-aware GAN inversion.
\section{Potential Social Impact}
Given a single real portrait image, FENeRF enables the generation of a photo-realistic avatar of the subject through GAN inversion. Moreover, this avatar can be driven by changing semantic maps and camera poses to produce fake videos. There are therefore certain risks of fooling face recognition systems, such as liveness detection, with synthesised fake videos, so the technology should be deployed with care.
{\small
\bibliographystyle{ieee_fullname}
\section{Introduction}\label{sec:intro}
How old is the universe? How distant are the cosmological objects that we detect with our telescopes? Our answers to these questions depend crucially on how accurately we can measure the current universe's expansion rate, i.e. the Hubble parameter $H_0$, since it sets both the time and distance scales in cosmology. Now, more than nine decades after its first measurement \citep{Hubble:1929}, several observational sources tell us that its value should fall in the range $H_0\in (65,75)$ km/s/Mpc. However, there is a $4.2\sigma$ tension between the value inferred for this parameter with the classical cosmic distance ladder by SH0ES, $H_0=(73.2\pm 1.3)$ km/s/Mpc \citep{Riess:2020fzl}, which is almost fully cosmology-independent, and the one inferred with the cosmic microwave background (CMB) data from {\it Planck} 2018 under the assumption of the flat $\Lambda$CDM model, $H_0=(67.36\pm 0.54)$ km/s/Mpc \citep{Planck:2018vyg}. SH0ES measures a somewhat higher value of the absolute magnitude of supernovae of Type Ia (SNIa) in the first steps of the ladder than the one preferred by the CMB best-fit standard model, and this is what ultimately triggers the $H_0$ tension; see e.g. \citep{Camarena:2019moy}. Other observational teams, though, do not find any substantial discrepancy \citep{Freedman:2019jwv,Freedman:2021ahq}, and it is still possible that systematic errors play an important role in this story \citep{Efstathiou:2020wxn,Mortsell:2021nzg}. Nevertheless, the tension has persisted and grown in a consistent way in recent years, thanks to the gain in precision of the modern observational facilities employed to explore the local universe \citep{Riess:2009pu,Riess:2016jrr} and measure the CMB anisotropies \citep{WMAP:2012nax,Planck:2015fie}. Although there exist some internal tensions at the $\sim 2\sigma$ level in {\it Planck}'s data (e.g. between the $\Lambda$CDM best-fit parameters obtained from low and high multipoles, or the amount of lensing in the TT spectrum), their results seem to be consistent with other CMB experiments, such as WMAP \citep{WMAP:2012nax}, SPT \citep{SPT:2017sjt} and ACT \citep{ACT:2020gnv}. Moreover, constraints from baryon acoustic oscillations (BAO) and big bang nucleosynthesis (BBN), independent of the CMB, lead to values of the Hubble parameter lying again in the lower range in the context of the standard model, more in accordance with {\it Planck} \citep{Addison:2017fdm,Cuceu:2019for}. The same happens when use is made of the inverse cosmic distance ladder \citep{Aubourg:2014yra,Cuesta:2014asa,Feeney:2018mkj}, which again assumes standard pre-recombination (and in most cases also standard late-time) physics. Interestingly, standard sirens \citep{LIGOScientific:2017adf,Palmese:2021mjm} and strongly lensed quasars \citep{Denzel:2020zuq} also allow us to measure $H_0$, but they are still not able to arbitrate the tension.
Cosmological models with a preferred larger critical energy density around the matter-radiation equality time, or an earlier recombination of protons and electrons before the photon decoupling, can alleviate the $H_0$ tension, since they allow for lower values of the comoving sound horizon at the baryon drag epoch, $r_d$, which in turn gives room for larger values of the Hubble parameter, as needed to keep the good description of the BAO data and the location of the first peak of the CMB temperature anisotropies. This is the case for several models that have been proposed to alleviate the $H_0$ tension, based on early dark energy \citep{Poulin:2018cxd,Niedermann:2019olb,Agrawal:2019lmo,Gomez-Valent:2021cbe}, modified gravity \citep{SolaPeracaula:2019zsl,SolaPeracaula:2020vpg,Braglia:2020iik,Braglia:2020auw}, primordial magnetic fields \citep{Jedamzik:2020krr}, varying atomic constants \cite{Liu:2019awo,Sekiguchi:2020teg}, and running vacuum models \citep{SolaPeracaula:2021gxi}.
The measurement of $r_d$ has been carried out using a plethora of techniques in the past:
\begin{itemize}
\item From CMB data alone or in combination with other data sets, assuming concrete cosmological models. See e.g. \citep{Verde:2016wmz,Planck:2018vyg,Benisty:2020otr} and the references of the previous paragraph.
\item Using the local (distance ladder) $H_0$ value as an anchor at $z=0$, together with BAO and SNIa data, assuming the $\Lambda$CDM \citep{Cuesta:2014asa} or using cubic splines to reconstruct the Hubble function \citep{Bernal:2016gxb,Aylor:2018drw}.
\item Fixing the scale with cosmic chronometers (CCH), employed in combination with BAO and SNIa, either reconstructing the shape of $H(z)$ with linearly-interpolated values that are left free in the fit \citep{Heavens:2014rja,Verde:2016ccp} or with a Multi-Task Gaussian Process \citep{Haridasu:2018gqm}.
\end{itemize}
We propose here a new method that lets us measure $r_d$ and $M$ under very mild theoretical assumptions (which reduce mainly to the validity of the Cosmological Principle and the metric description of gravity), and without making use of the two data sets driving the $H_0$ tension, namely the CMB and the calibration of the SNIa performed in the first steps of the distance ladder by SH0ES. Our method is close in spirit to those employed in \cite{Heavens:2014rja,Verde:2016ccp,Haridasu:2018gqm}, since we also use data on BAO, SNIa and CCH and do not assume a particular cosmological model. Here we take, though, a different approach. While in \cite{Heavens:2014rja,Verde:2016ccp} the authors used a spline for $H(z)$ and reconstructed it, allowing the values of $H(z_i)$ at several redshift nodes $z_i$ to vary in the fitting analysis (together with $M$, $r_d$ and the curvature density parameter $\Omega^0_k$), here we only vary the last three. Our method is also different from the one presented in \citep{Haridasu:2018gqm}. In this work we do not aim to find the shape of $H(z)$ preferred by the low-redshift data. We measure the size of the cosmic ruler and the SNIa intrinsic brightness by minimizing a loss function that quantifies the level of inconsistency between the BAO, SNIa and CCH data sets for each triplet of values $\vec{\theta}=(M,r_d,H_0^2\Omega^0_k)$. Every $\vec{\theta}$ can be used to translate the BAO+SNIa+CCH data into a list of cosmological distances and values of the Hubble rate at different redshifts, with their corresponding covariance matrix. This information can then be employed to quantify the level of inconsistency between the transformed data sets using the index of inconsistency (IOI) proposed in \cite{Lin:2017ikq}. We obtain a posterior distribution for the three parameters contained in $\vec{\theta}$. Our method has not been employed before in the literature to extract constraints on these important parameters.
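For two Gaussian-approximated data sets, the IOI of \cite{Lin:2017ikq} takes the closed form ${\rm IOI}=\frac{1}{2}\,\vec{\delta}^{\,T}(C_1+C_2)^{-1}\vec{\delta}$, with $\vec{\delta}$ the difference of the mean parameter vectors. A minimal numerical sketch (the $H_0$-like numbers below are the SH0ES and {\it Planck} values quoted in the introduction, used purely for illustration):

```python
import numpy as np

def ioi(mu1, cov1, mu2, cov2):
    """Two-experiment index of inconsistency under a Gaussian
    approximation: IOI = 0.5 * delta^T (C1 + C2)^{-1} delta."""
    delta = np.asarray(mu1, dtype=float) - np.asarray(mu2, dtype=float)
    return 0.5 * delta @ np.linalg.solve(cov1 + cov2, delta)

# 1-D example with the two H0 determinations from the text:
# 73.2 +/- 1.3 km/s/Mpc versus 67.36 +/- 0.54 km/s/Mpc.
val = ioi([73.2], np.array([[1.3**2]]), [67.36], np.array([[0.54**2]]))
```

On the IOI scale, values above a few indicate strong inconsistency, which is exactly what this pair of measurements yields.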
Model-independent methods such as the one proposed and studied in this paper will allow us to further test the viability of the candidate models aiming to loosen the $H_0$ tension.
The paper is organized as follows. In Sec. \ref{sec:data} we describe the data sets that we have employed in our analysis. In Sec. \ref{sec:method} we explain our statistical method. Our results are presented in Sec. \ref{sec:results}, and the conclusions in Sec. \ref{sec:conclusions}.
\section{Data sets and observables}\label{sec:data}
These are the data sets that we have employed in our study.
\subsection{Supernovae of Type Ia}\label{sec:SNIa}
The expression for the apparent magnitude $m(z)$ of standardized SNIa reads,
\begin{equation}
m(z) = M +25 +5\log_{10}\left(\frac{D_L(z)}{1\,{\rm Mpc}}\right)\,,
\end{equation}
with $M$ the absolute magnitude and $D_L(z)$ the luminosity distance. In a Friedmann-Lema\^itre-Robertson-Walker universe the latter takes the following form,
\begin{equation}\label{eq:lumDista}
D_L(z) = \frac{c(1+z)}{H_0\sqrt{\Omega_k^{0}}}\sinh\left(\sqrt{\Omega_k^0}\int_0^{z}\frac{dz^\prime}{E(z^\prime)}\right)\,,
\end{equation}
where $E(z)=H(z)/H_0$ is the normalized Hubble rate, and $\Omega_k^0=-kc^2/(R_0H_0)^2$ is the curvature density parameter, with $k=0,-1,+1$ for flat, open and closed universes, respectively. $R_0$ is a constant with units of length that can be interpreted as the current radius of curvature in a closed universe.
In this work we employ the SNIa contained in the Pantheon compilation \citep{Scolnic:2017caz}. For practical purposes we opt to use the 40 binned data points provided at\footnote{http://github.com/dscolnic/Pantheon}, with their corresponding covariance matrix.
\subsection{Baryon acoustic oscillations}\label{sec:BAO}
Baryons and photons were tightly coupled through electromagnetic interactions during the pre-recombination era. The competition between radiation pressure and gravity in the photon-baryon plasma generated sound waves that left an imprint in the distribution of baryons once the universe was cool enough for the CMB photons to escape and start their travel towards us. The maximum distance traveled by this wave, the sound horizon at the baryon drag epoch, is imprinted in the distribution of matter in the universe. It manifests as a peak in the matter two-point correlation function, or as wiggles in the matter power spectrum \citep{Cole:2005sx,Eisenstein:2005su}. Galaxy surveys use this characteristic length, $r_d$, as a standard ruler with respect to which they can measure cosmological distances at various redshifts. The latter can be employed to constrain cosmological models in a quite robust way \citep{Bernal:2020vbb}. Their constraints are given either in terms of the dilation scale $D_V$,
\begin{equation}
\frac{D_V(z)}{r_d}=\frac{1}{r_d}\left[D_M^2(z)\frac{cz}{H(z)}\right]^{1/3}\,,
\end{equation}
with $D_M=(1+z)D_{A}(z)$ being the comoving angular diameter distance, or by splitting (when possible) the transverse and line-of-sight BAO information, providing data on $D_{A}(z)/r_d$ and $H(z)r_d$ separately, with some degree of correlation. In any metric theory of gravity with photons traveling on null geodesics and conservation of the photon number, the Etherington relation \citep{Etherington:1933} applies,
\begin{equation}\label{eq:AngDista}
D_A(z)=\frac{D_L(z)}{(1+z)^2}\,
\end{equation}
with $D_{L}(z)$ given by Eq. (\ref{eq:lumDista}). This expression is useful to translate luminosity distances into angular diameter distances, and vice versa.
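The two relations above, the dilation scale and the Etherington relation, can be coded directly. The numbers below are assumed placeholders for illustration, not BAO data from this paper.

```python
import numpy as np

def dilation_scale(z, D_M, H, c=299792.458):
    """Dilation scale D_V = [D_M^2 * c z / H]^{1/3}, with D_M the comoving
    angular diameter distance in Mpc and H the Hubble rate in km/s/Mpc."""
    return (D_M ** 2 * c * z / H) ** (1.0 / 3.0)

def angular_diameter_distance(z, D_L):
    """Etherington relation: D_A = D_L / (1+z)^2."""
    return D_L / (1.0 + z) ** 2

# Illustrative values (assumed, not measurements):
D_A = angular_diameter_distance(1.0, 6600.0)   # Mpc
D_V = dilation_scale(0.5, 1900.0, 90.0)        # Mpc
```

Dividing either quantity by an assumed $r_d$ yields the dimensionless combinations ($D_V/r_d$, $D_A/r_d$) in which the BAO constraints are quoted.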
We employ the following BAO data points:
\begin{itemize}
\item $D_V/r_d$ at $z=0.122$ provided in \citep{Carter:2018vce}, which combines the dilation scales previously reported by the 6dF Galaxy Survey (6dFGS) \citep{Beutler:2011hx} at $z=0.106$ and the one obtained from the Sloan Digital Sky Survey (SDSS) Main Galaxy Sample at $z=0.15$ \citep{Ross:2014qpa}.
\item The anisotropic BAO data measured by BOSS using the LOWZ ($z=0.32$) and CMASS ($z=0.57$) galaxy samples \citep{Gil-Marin:2016wya}.
\item The dilation scale measurements by WiggleZ at $z=0.44,0.60,0.73$ \citep{Kazin:2014qga}.
\item $D_A/r_d$ at $z=0.81$ measured by the Dark Energy Survey Year 1 (DESY1) \citep{DES:2017rfo}. We will also study the impact of substituting this point by the more recent measurement from DESY3 at the effective redshift $z=0.835$ \citep{DES:2021esc}, which is in $2.3\sigma$ tension with the {\it Planck} prediction assuming the $\Lambda$CDM.
\item The anisotropic BAO data from the extended BOSS Data Release 16 (DR16) quasar sample at $z=1.48$ \citep{Neveux:2020voa}.
\end{itemize}
We avoid the use of the anisotropic BAO data obtained from the Ly$\alpha$ absorption and quasars of the final data release (SDSS DR16) of eBOSS at $z=2.334$ \citep{duMasdesBourboux:2020pck} because it falls outside the measurement ranges of SNIa and CCH, see Secs. \ref{sec:SNIa} and \ref{sec:CCH}. The full BAO data vector and associated covariance matrix are provided in Appendix \ref{sec:appA}.
\subsection{Cosmic chronometers}\label{sec:CCH}
Spectroscopic dating techniques of passively-evolving galaxies, i.e. galaxies with old stellar populations and low star formation rates, have become a good tool to obtain observational values of the Hubble function at redshifts $z\lesssim 2$ \citep{Jimenez:2001gg}. These measurements do not rely on any particular cosmological model, although they are subject to other sources of systematic uncertainties, such as those associated with the modeling of stellar ages, see e.g. \citep{Moresco:2012jh,Moresco:2016mzx}, which is carried out through the so-called stellar population synthesis (SPS) techniques, and also to a possible contamination due to the presence of young stellar components in quiescent galaxies \citep{Lopez-Corredoira:2017zfl,Lopez-Corredoira:2018tmn,Moresco:2018xdr}. Given a pair of ensembles of passively-evolving galaxies at two different redshifts it is possible to infer $dz/dt$ from observations under the assumption of a concrete SPS model and compute $H(z) = -(1 + z)^{-1}dz/dt$. Thus, cosmic chronometers allow us to obtain the value of the Hubble function at different redshifts, contrary to other probes which do not directly measure $H(z)$, but rather integrated quantities such as luminosity distances.
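A minimal sketch of the differential-age estimate behind $H(z) = -(1+z)^{-1}dz/dt$ is given below; the function name, the midpoint-redshift choice, and the toy inputs are illustrative assumptions, not the actual SPS-based analysis:

```python
def hubble_from_differential_age(z1, z2, t1, t2):
    """Estimate H at the midpoint of (z1, z2) from the ages t1, t2 (in Gyr)
    of two passively-evolving galaxy ensembles at redshifts z1 and z2,
    using H(z) = -(1+z)^{-1} dz/dt. Returns H in 1/Gyr; multiplying by
    ~978 converts to km/s/Mpc."""
    z_mid = 0.5 * (z1 + z2)
    dz_dt = (z2 - z1) / (t2 - t1)  # finite-difference estimate of dz/dt
    return -dz_dt / (1.0 + z_mid)
```

Since galaxies at higher redshift are younger, $dz/dt$ is negative and the resulting $H$ is positive, as it must be.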
In this study we use the 24 data points on $H(z)$ from CCH reported in \citep{Jimenez:2003iv,Stern:2009ep,Moresco:2012jh,Zhang:2012mp,Moresco:2015cya,Moresco:2016mzx,Ratsimbazafy:2017vga,Borghi:2021rft}. More concretely, we make use of the {\it processed} sample provided in Table 2 of \citep{Gomez-Valent:2018gvm}, adding the data point of \citep{Borghi:2021rft} and removing those of \citep{Simon:2004tf}\footnote{Serious concerns about the statistical analysis carried out in \citep{Simon:2004tf} have recently been raised in \cite{Kjerrgren:2021zuo}. Thus, we prefer to omit these CCH data for the moment.}. Our resulting CCH data set is robust, since it introduces corrections accounting for the systematic errors mentioned above.
\section{The method}\label{sec:method}
We want to perform a Monte Carlo sampling over the parameters contained in the vector $\vec{\theta}=(M,r_d,H_0^2\Omega_k^0)$. For each step in this three-dimensional parameter space we can easily rewrite all the low-redshift data under consideration in terms of luminosity and angular diameter distances, dilation scales, and values of the Hubble function at different redshifts. We can then use a loss function $L(\vec{\theta})$ (still to be defined) to quantify the degree of inconsistency between the SNIa, BAO and CCH data sets found for that particular value of $\vec{\theta}$. Our Monte Carlo algorithm will sample the distribution $\propto e^{-L(\vec{\theta})}$ to obtain the best-fit $\vec{\theta}$ and associated confidence regions. The loss function can be split into two parts,
\begin{equation}\label{eq:L}
L(\vec{\theta}) = L_1(M,r_d)+L_2(M,H_0^2\Omega_k^0)\,,
\end{equation}
with
\begin{equation}\label{eq:L1}
L_1(M,r_d)={\rm IOI}[{\rm BAO,SNIa+CCH}]
\end{equation}
being the index of inconsistency between the BAO data and the combined SNIa+CCH data set, and
\begin{equation}\label{eq:L2}
L_2(M,H_0^2\Omega_k^0)={\rm IOI}[{\rm SNIa,CCH}]
\end{equation}
the one between the SNIa and the CCH data sets. The specific dependence of these functions on the parameters of $\vec{\theta}$ will be explained in detail below. The two-experiment IOI is defined as follows \citep{Lin:2017ikq},
\begin{equation}\label{eq:IOI}
{\rm IOI[i,j]}=\frac{1}{2}\delta^{T}(C^{(i)}+C^{(j)})^{-1}\delta\,,
\end{equation}
where $C^{(i)}$ is the covariance matrix of the $i$-th data set and $\delta=\mu^{(i)}-\mu^{(j)}$ is the difference between the mean vectors of the two data sets under consideration, i.e. the data sets $i$ and $j$. The IOI is a generalization of the Mahalanobis distance \citep{Mahalanobis:1936}, and strictly speaking it is reliable only for Gaussian-distributed data sets. Fortunately, this is the case in the current study, to a very good approximation. In the following two subsections we provide more details about how to build the indices of inconsistency appearing in \eqref{eq:L}.
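Eq. \eqref{eq:IOI} is straightforward to evaluate numerically; a minimal Python sketch (the function name is ours) is:

```python
import numpy as np

def ioi(mu_i, mu_j, cov_i, cov_j):
    """Two-experiment index of inconsistency, Eq. (eq:IOI):
    IOI = (1/2) delta^T (C_i + C_j)^{-1} delta, with delta = mu_i - mu_j."""
    delta = np.asarray(mu_i, dtype=float) - np.asarray(mu_j, dtype=float)
    c_sum = np.asarray(cov_i, dtype=float) + np.asarray(cov_j, dtype=float)
    # Solve (C_i + C_j) x = delta instead of forming the inverse explicitly.
    return 0.5 * delta @ np.linalg.solve(c_sum, delta)
```

For two one-dimensional measurements with equal variances $\sigma^2$, this reduces to $\Delta\mu^2/(4\sigma^2)$, i.e. one quarter of the squared tension in units of $\sigma$.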
\begin{table*}[t!]
\centering
\begin{tabular}{|c ||c | c | c | c | }
\multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{}
\\\hline
{\small Loss function} & {\small $M$} & {\small $r_d$ [Mpc]} & {\small $H_0^2\Omega_k^0$ [(km/s/Mpc)$^2$]} & {\small $\Omega_k^0$}
\\\hline
$L_1(M,r_d)$ & $-19.399\pm 0.084$ & $148.3\pm 5.3$ & - & - \\\hline
$L(M,r_d,0)$, flat universe & $-19.401\pm 0.062$ & $148.3\pm 4.0$ & 0 & 0 \\\hline
$L(M,r_d,H_0^2\Omega_k^0)$ & $-19.405\pm 0.066$ & $148.3\pm 4.3$ & $-378\pm 520$ & $-0.08\pm 0.11 $ \\\hline
$\mathcal{L}(M,r_d,H_0^2\Omega_k^0)$ & $-19.387^{+0.064}_{-0.074}$ & $148.5\pm 4.5$ & $-356^{+480}_{-410}$ & $-0.07^{+0.11}_{-0.09}$\\\hline
\end{tabular}
\caption{Means and uncertainties of the fitting parameters obtained with the loss functions listed in Sec. \ref{sec:MC}. The constraints on $\Omega^0_k$ are computed by breaking the degeneracy in the $H_0-\Omega_k^0$ plane with the Gaussian prior $H_0=(70.72\pm 6.44)$ km/s/Mpc obtained from our reconstruction with GP (cf. Appendix \ref{sec:appC}). See the comments in Sec. \ref{sec:results} and Fig. \ref{fig:contours}.}
\label{tab:table}
\end{table*}
\subsection{IOI between BAO and SNIa+CCH}\label{sec:IOI1}
Given a value of $r_d$ we can rewrite the BAO constraints on $\{D_V(z)/r_d,D_A(z)/r_d,H(z)r_d\}$ of Sec. \ref{sec:BAO} as constraints on $\{D_V(z),D_A(z),H(z)\}$, with a mean vector
\begin{equation}
\mu^{\rm BAO}(r_d) = \begin{pmatrix}
D_V(z=0.122)\\
D_V(z=0.44)\\
D_V(z=0.60)\\
D_V(z=0.73)\\
D_A(z=0.32)\\
D_A(z=0.57)\\
D_A(z=0.81)\\
D_A(z=1.48)\\
H(0.32)\\
H(0.57)\\
H(1.48)
\end{pmatrix}\,,
\end{equation}
and its associated covariance matrix $C^{\rm BAO}(r_d)$. The resulting distribution is Gaussian, as we have explicitly checked for different values of $r_d$. In order to compute the IOI between the BAO and SNIa+CCH data sets we need to build the analogous quantities from the SNIa+CCH joint data set, i.e. $\mu^{\rm SNIa+CCH}$ and $C^{\rm SNIa+CCH}$. As we do not have SNIa and CCH data at the exact BAO redshifts, we first need to compute the interpolated values of $D_A(z)$ and $H(z)$ at these $z$'s using some technique that allows us to get the vectors
\begin{equation}\label{eq:inter}
\mu^{\rm SNIa}(M) = \begin{pmatrix}
D_A(z=0.122)\\
D_A(z=0.44)\\
D_A(z=0.60)\\
D_A(z=0.73)\\
D_A(z=0.32)\\
D_A(z=0.57)\\
D_A(z=0.81)\\
D_A(z=1.48)
\end{pmatrix}
\end{equation}
and
\begin{equation}\label{eq:muCCH}
\mu^{\rm CCH}= \begin{pmatrix}
H(z=0.122)\\
H(z=0.44)\\
H(z=0.60)\\
H(z=0.73)\\
H(0.32)\\
H(0.57)\\
H(1.48)
\end{pmatrix}\,,
\end{equation}
with their individual covariances, $C^{\rm SNIa}(M)$ and $C^{\rm CCH}$. From them we can build the final mean and covariance matrix $\mu^{\rm SNIa+CCH}$ and $C^{\rm SNIa+CCH}$ to be employed (together with $\mu^{\rm BAO}$ and $C^{\rm BAO}$) in the computation of $L_1$ \eqref{eq:L1}.
For a given value of $M$, and making use of Eqs. \eqref{eq:lumDista} and \eqref{eq:AngDista}, we can translate the SNIa apparent magnitudes $m(z)$ into data on $D_A(z)$ at the redshifts of the (binned) SNIa data. By sampling this distribution and using e.g. a cubic interpolation method, it is easy to infer the Gaussian distribution for the $D_A$'s at the redshifts of interest, i.e. those specified in \eqref{eq:inter}. In order to be more efficient, though, it is actually better to split $D_A$ as follows,
\begin{equation}\label{eq:Bfunc}
D_A(z)=10^{-M/5}B(z)\,{\rm Mpc}\quad{\rm with}\quad B(z)=\frac{10^{\frac{m(z)-25}{5}}}{(1+z)^2}\,,
\end{equation}
and sample the distribution of $m$'s to generate the mean ($\mu_B$) and covariance matrix ($C_B$) for the $B$'s at the redshifts of \eqref{eq:inter} before starting the Monte Carlo, since this part of $D_A$ does not depend on $M$ and hence can be employed at each step of the sampling process. This distribution is Gaussian too, see Appendix \ref{sec:appB}. Given a value of $M$, it is easy to obtain then the mean and covariance for $D_A$ from the distribution of $B(z)$. We just need to do:
\begin{equation}
\mu^{\rm SNIa}= 10^{-M/5}\mu_B\quad;\quad C^{\rm SNIa}=10^{-2M/5}C_B\,.
\end{equation}
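The rescaling above can be sketched as follows (an illustrative helper, assuming $\mu_B$ and $C_B$ have already been generated by sampling the $m$'s):

```python
import numpy as np

def da_distribution(M, mu_B, cov_B):
    """Rescale the M-independent distribution of B(z) (Eq. eq:Bfunc) into
    the distribution of D_A: mu = 10^{-M/5} mu_B, C = 10^{-2M/5} C_B.
    The mean scales linearly, the covariance quadratically."""
    s = 10.0 ** (-M / 5.0)
    return s * np.asarray(mu_B, dtype=float), s ** 2 * np.asarray(cov_B, dtype=float)
```

Since only the scalar prefactor depends on $M$, this single multiplication is all that is needed at each Monte Carlo step, avoiding any re-interpolation.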
We proceed in a similar way to obtain $\mu^{\rm CCH}$ \eqref{eq:muCCH} and the corresponding covariance matrix $C^{\rm CCH}$. In this case the result does not depend on any of the parameters contained in $\vec{\theta}$. Therefore, we can employ the same vector $\mu^{\rm CCH}$ and covariance $C^{\rm CCH}$ in each step of the Monte Carlo routine. We reconstruct the shape of $H(z)$ using the data described in Sec. \ref{sec:CCH} and Gaussian Processes (GP), with a Gaussian kernel. In Appendix \ref{sec:appC} we describe the GP reconstruction of the Hubble function and provide the resulting $\mu^{\rm CCH}$ and $C^{\rm CCH}$, and comment on several technical aspects for the interested reader.
Obtaining $\mu^{\rm SNIa+CCH}$ and $C^{\rm SNIa+CCH}$ from ($\mu^{\rm SNIa},C^{\rm SNIa}$) and ($\mu^{\rm CCH},C^{\rm CCH}$) is straightforward. It can be done through a simple sampling of the two multivariate Gaussians. The result is, to a very good approximation, Gaussian too. Equipped with these tools we can finally evaluate $L_1$ \eqref{eq:L1}, which is of course a function of $M$ and $r_d$.
\subsection{IOI between SNIa and CCH}\label{sec:IOI2}
We compute the index of inconsistency between the SNIa and CCH data sets by first noting that for a given pair $(M,H_0^2\Omega_k^0)$ we can translate a particular value of $m(z_i)$ into a value of the Hubble function $H(z_i)$, if we also know the derivative of the apparent magnitude at that redshift. This becomes evident if we perform the derivative of Eq. \eqref{eq:lumDista} with respect to the redshift, and isolate $H(z)$,
\begin{equation}\label{eq:Hdeco}
H(z)=\frac{\left[\left(\frac{c(1+z)}{D_L(z)}\right)^2+H_0^2\Omega_k^0\right]^{1/2}}{\frac{\ln(10)}{5}\frac{\partial m}{\partial z}-\frac{1}{1+z}}\,,
\end{equation}
where the luminosity distance can be written in terms of a function $\tilde{B}(z)$ that does not depend on $M$,
\begin{equation}\label{eq:Btildefunc}
D_L(z)=10^{-\frac{(M+25)}{5}}\tilde{B}(z)\,{\rm Mpc}\quad{\rm with}\quad \tilde{B}(z)=10^{\frac{m(z)}{5}}\,.
\end{equation}
We can sample the distribution of $m$'s at the SNIa redshifts and obtain from it the distribution of $\tilde{B}$'s and $\partial m/\partial z$'s at those redshifts at which we have CCH data, e.g. making use again of a cubic interpolation method. The resulting distribution is Gaussian. Finally, given a pair $(M,H_0^2\Omega_k^0)$ we can construct from the latter the distribution of values of the Hubble function at the CCH redshifts using \eqref{eq:Hdeco}. It is obviously a function of $M$ and $H_0^2\Omega_k^0$. The computation of $L_2$ \eqref{eq:L2} is at this stage very easy because we already have all the ingredients to apply \eqref{eq:IOI}. In order to sample physically motivated values of the product $H_0^2\Omega_k^0$ we use the Gaussian prior
\begin{equation}
P(H_0^2\Omega_k^0) = \frac{1}{\sqrt{2\pi}\sigma}e^{-\frac{(H_0^2\Omega_k^0)^2}{2\sigma^2}}\,,
\end{equation}
with $\sigma=500$ (km/s/Mpc)$^2$. The latter is motivated by the constraint on $H_0$ obtained from our GP reconstruction, $H_0=(70.72\pm 6.44)$ km/s/Mpc (cf. Appendix \ref{sec:appC}), and also by the ones on $\Omega_k^0$ obtained under some CMB data sets \citep{Planck:2018vyg,DiValentino:2019qzk}, which allow values of $\Omega_k^0\sim -0.1$ at $\sim 1.5\sigma$ c.l. Much tighter constraints on the curvature parameter are derived when CMB lensing, SNIa and/or BAO data are also considered \citep{Planck:2018vyg,Efstathiou:2020wem}, but we want to proceed as model-independently as possible. This is why we choose a wide prior for this parameter, but still forbidding values that clearly fall out of the region allowed by the CMB. As we will discuss in Sec. \ref{sec:results}, $H_0^2\Omega_k^0$ has a very low impact on our constraints on $M$ and $r_d$, and its posterior is basically dominated by the prior.
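Eq. \eqref{eq:Hdeco}, together with the split of Eq. \eqref{eq:Btildefunc}, can be sketched numerically as follows; the helper name is ours, and the sketch assumes scalar or array inputs with $m$ in magnitudes and $H_0^2\Omega_k^0$ in (km/s/Mpc)$^2$:

```python
import numpy as np

C_KMS = 299792.458  # speed of light [km/s]

def hubble_from_snia(z, m, dm_dz, M, H0sq_Ok):
    """Invert Eq. (eq:Hdeco): obtain H(z) [km/s/Mpc] from the SNIa apparent
    magnitude m(z), its redshift derivative dm/dz, the absolute magnitude M,
    and the product H0^2*Omega_k^0."""
    # Luminosity distance in Mpc, D_L = 10^{-(M+25)/5} * 10^{m/5} (Eq. eq:Btildefunc)
    d_l = 10.0 ** (-(M + 25.0) / 5.0) * 10.0 ** (m / 5.0)
    num = np.sqrt((C_KMS * (1.0 + z) / d_l) ** 2 + H0sq_Ok)
    den = (np.log(10.0) / 5.0) * dm_dz - 1.0 / (1.0 + z)
    return num / den
```

A useful sanity check: for a flat universe with constant expansion rate, $D_L(z)=cz(1+z)/H_0$, and the expression above returns $H(z)=H_0$ identically.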
\subsection{Monte Carlo analyses}\label{sec:MC}
We sample the distributions built with the following loss functions:
\begin{itemize}
\item $L_1(M,r_d)$, Eq. \eqref{eq:L1}.
\item $L(M,r_d,H_0^2\Omega_k^0=0)$ assuming a flat universe.
\item $L(M,r_d,H_0^2\Omega_k^0)$.
\item $L(M,r_d,H_0^2\Omega_k^0)$, but using the data point from DESY3 \citep{DES:2021esc} instead of DESY1 \citep{DES:2017rfo}. We call this loss function $\mathcal{L}(M,r_d,H_0^2\Omega_k^0)$.
\end{itemize}
We make use of the Metropolis–Hastings algorithm \citep{Metropolis:1953,Hastings:1970}. The results obtained in our Monte Carlo analyses are presented and discussed in the next section.
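A minimal Metropolis-Hastings sketch of the sampler described above is given below; the symmetric Gaussian proposal, step size, and function names are illustrative assumptions, not the exact implementation used in this work:

```python
import numpy as np

def metropolis_hastings(loss, theta0, step, n_steps, seed=None):
    """Sample from p(theta) proportional to exp(-L(theta)), where `loss`
    plays the role of L(theta) in Eq. (eq:L), using a symmetric Gaussian
    random-walk proposal."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    l_cur = loss(theta)
    chain = []
    for _ in range(n_steps):
        prop = theta + step * rng.standard_normal(theta.size)
        l_prop = loss(prop)
        # Accept with probability min(1, exp(L_cur - L_prop)).
        if np.log(rng.random()) < l_cur - l_prop:
            theta, l_cur = prop, l_prop
        chain.append(theta.copy())
    return np.array(chain)
```

With $L=\tfrac{1}{2}\theta^2$ the chain reproduces a standard normal, which makes the routine easy to validate before applying it to the actual loss function.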
\begin{figure*}[t!]
\begin{center}
\includegraphics[width=5in, height=4.5in]{contours.png}
\caption{Contour plots at $1\sigma$ and $2\sigma$ c.l. in all the planes of the ($M,r_d,H_0^2\Omega_k^0$)-parameter space, obtained with the loss function $L(M,r_d,H_0^2\Omega_k^0)$, cf. Sec. \ref{sec:MC}. $r_d$ is expressed in Mpc and $H_0^2\Omega_k^0$ in (km/s/Mpc)$^2$. The vertical green bands correspond to the (1$\sigma$ and 2$\sigma$) constraint on $M$ obtained by SH0ES from the first steps of the distance ladder \cite{Camarena:2019moy}. See Sec. \ref{sec:results} for further comments.}\label{fig:contours}
\end{center}
\end{figure*}
\section{Results}\label{sec:results}
Our constraints on $M$, $r_d$, $H_0^2\Omega^0_k$ and $\Omega^0_k$ are shown in Table \ref{tab:table}. In Fig. \ref{fig:contours} we plot the confidence regions at $1$ and $2\sigma$ c.l. in all the relevant planes of the parameter space, together with the corresponding one-dimensional posteriors. Remarkably, the results obtained with our method are quite stable: the central values of $M$ and $r_d$ are very similar in all the scenarios analyzed in this study, even when the loss function is built only with the IOI(BAO,SNIa+CCH), i.e. when $L=L_1$. In this case we find $M=-19.399\pm 0.084$ and $r_d=(148.3\pm 5.3)$ Mpc. If we also penalize the degree of inconsistency between SNIa and CCH by considering the loss function $L=L_1+L_2$, we find that the uncertainties on $M$ and $r_d$ decrease by $\sim 20\%$, yielding $M=-19.405\pm 0.066$ and $r_d=(148.3\pm 4.3)$ Mpc. These are our main results. The absolute magnitude of SNIa is in mild tension with SH0ES, at $\sim 2.4\sigma$ c.l. (cf. Fig. \ref{fig:contours}). This tension is not very significant, but it will be interesting to revisit this calculation in the future to check whether it persists. On the other hand, the comoving sound horizon agrees with the $\Lambda$CDM-based inference by {\it Planck}, although the one- and two-sigma bands also encompass much lower values of $r_d$, such as those found e.g. in the early dark energy and modified gravity scenarios that have been explored in the literature to alleviate (to a greater or lesser extent) the $H_0$ tension. The correlation coefficient in the $M-r_d$ plane is negative, as expected, since larger values of $M$ lead to larger values of the Hubble function and, therefore, $r_d$ needs to decrease in order to calibrate the BAO ruler appropriately.
The error bars of both $M$ and $r_d$ are a bit smaller if a flat universe is taken for granted, simply because we reduce the dimensionality of our parameter space by one. It is also worth mentioning that replacing the DESY1 BAO data point \cite{DES:2017rfo} with the DESY3 one \cite{DES:2021esc}, which is in $2.3\sigma$ tension with the {\it Planck} $\Lambda$CDM best-fit cosmology, does not alter our results in a significant way either. Our constraints on the curvature parameter are weak: we find $\Omega_k^0=-0.08\pm 0.11$, which is fully compatible with a flat universe at $1\sigma$.
\section{Conclusions}\label{sec:conclusions}
The absolute magnitude of supernovae of Type Ia, $M$, and the comoving sound horizon, $r_d$, are the anchors of the direct and inverse cosmic distance ladders, respectively, and therefore play a very important role in the discussion of the $H_0$ tension. Many models aiming to solve the latter modify the physics of the pre-recombination era and predict a value of $r_d$ which is much lower than the one preferred by the $\Lambda$CDM when it is constrained with CMB data. In this paper we have employed low-$z$ data to constrain $M$ and $r_d$ under minimal model assumptions (which reduce to the Cosmological Principle and an underlying metric theory of gravity), avoiding the use of any input coming from the main drivers of the $H_0$ tension, similarly to what was done by other groups in the past \cite{Heavens:2014rja,Verde:2016ccp,Haridasu:2018gqm}. We have applied a novel method, though, based on the minimization of a loss function that quantifies the degree of inconsistency between the BAO, SNIa and cosmic chronometer data sets, built from the IOI estimator proposed by Lin and Ishak \cite{Lin:2017ikq}. It is only a function of $M$, $r_d$ and $H_0^2\Omega_k^0$. Our constraints read $M=-19.405\pm 0.066$ and $r_d=(148.3\pm 4.3)$ Mpc at $1\sigma$ c.l. The former is in mild ($\sim 2.4\sigma$) tension with the value inferred in the first steps of the cosmic distance ladder by SH0ES, $M=-19.2191\pm 0.0405$ \cite{Camarena:2019moy}, whereas the comoving sound horizon is only weakly constrained by our consistency principle. It is fully compatible with the standard model value from the TT,TE,EE+lowE+lensing analysis by {\it Planck} \cite{Planck:2018vyg}, $r_d=(147.09\pm 0.26)$ Mpc, but still leaves plenty of room for new physics \cite{Poulin:2018cxd,Niedermann:2019olb,Agrawal:2019lmo,Gomez-Valent:2021cbe,SolaPeracaula:2019zsl,SolaPeracaula:2020vpg,Braglia:2020iik,Braglia:2020auw,Jedamzik:2020krr,Liu:2019awo,Sekiguchi:2020teg,SolaPeracaula:2021gxi,Verde:2016wmz}.
With the advent of future data e.g. from Euclid and LSST, our method will provide tighter constraints on these relevant parameters, allowing us to study the viability of cosmological models with non-standard pre-recombination physics and to test the agreement between the two ends of the cosmic ladder in a quite model-independent way.
\vspace{0.25cm}
{\bf Acknowledgements}
\newline
\newline
\noindent The author is funded by the Istituto Nazionale di Fisica Nucleare (INFN) through the project ``Dark Energy and Modified Gravity Models in the light of Low-Redshift
Observations'' (n. 22425/2020). He is grateful to the Institute for Theoretical Physics (ITP) Heidelberg for letting him use its computational facilities from remote, and to Dr. Elmar Bittner for his valuable technical help in the use of the ITP computational resources.
\section{Design}
\subsection{Problem Definition}
\label{sec:problem solution}
In this section, we formulate the joint server and data center network power
optimization as a constrained optimization problem using the switch power model
and job model defined above. First, to obtain the power consumption of a switch
$k$, assume the number of active line cards and ports are $\zeta^{active}_k$ and
$\rho^{active}_k$ respectively, and the number of line cards in respective sleep states and ports in LPI mode are $\zeta^{sleep}_k$ and $\rho^{LPI}_k$ respectively. Since the total power of a switch is the sum of the base power $P_k^{base}$, the power of the ports, and the power of the line cards, we have $P_k^{switch}=P_k^{base}+\zeta^{active}_k\cdot P_{\text{line card}}^{active}+\rho^{active}_k\cdot P_{\text{port}}^{active}+\zeta^{sleep}_k\cdot P^{sleep}_{\text{line card}}+\rho^{LPI}_k\cdot P_{\text{port}}^{LPI}$.
To calculate the power consumption of server $i$, since our system considers the core as the basic processing unit of a server, the total power of a server is the sum of the idle power $P_i^{idle}$ and the dynamic power, which is linear in the number of active cores $C_i^{on}$. Then we have $P_i^{server}=P_i^{idle}+C_i^{on}\cdot P_{core}^{on}$, where $P_{core}^{on}$ denotes the power consumed by an active core.
Then the joint power optimization problem can be formulated as minimizing
$\sum_{k=1}^{N_{switch}}{P_k^{switch}}+\sum_{i=1}^{N_{server}}{P_i^{server}}$
under both network-side and server-side constraints, such as link capacity,
computation resources, etc.
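The objective above can be sketched in Python as follows; the per-component power values (watts) are illustrative placeholders, not measured numbers from our power models:

```python
def switch_power(p_base, n_lc_active, n_port_active, n_lc_sleep, n_port_lpi,
                 p_lc_active=20.0, p_port_active=1.0,
                 p_lc_sleep=2.0, p_port_lpi=0.1):
    """Switch power model: base power plus contributions from active and
    sleeping line cards and from active and LPI-mode ports."""
    return (p_base
            + n_lc_active * p_lc_active + n_port_active * p_port_active
            + n_lc_sleep * p_lc_sleep + n_port_lpi * p_port_lpi)

def server_power(p_idle, n_cores_on, p_core_on=5.0):
    """Server power model: idle power plus dynamic power, linear in the
    number of active cores."""
    return p_idle + n_cores_on * p_core_on

def total_power(switches, servers):
    """Joint objective: total switch power plus total server power,
    given lists of keyword-argument dicts for each device."""
    return (sum(switch_power(**s) for s in switches)
            + sum(server_power(**s) for s in servers))
```

A scheduler would evaluate this objective for each candidate task placement and routing decision, subject to the link-capacity and computation constraints.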
\begin{table*}
\centering
\scriptsize
\caption{Estimating the energy consumption}
\label{tab:estimatingenergy}
\arrayrulecolor{black}
\begin{tabular}{|l|l|l|}
\hline
\multicolumn{2}{|c|}{\textbf{Energy Component} } & \textbf{Description} \\
\hline
\multicolumn{3}{|c|}{\textbf{Server Energy}} \\
\hline
\multirow{2}{*}{\textbf{Candidate server} } & Server Task Execution Energy & Energy consumed when the task is being executed \\
\arrayrulecolor[rgb]{0.651,0.651,0.651}\cline{2-3}
& Server waiting for transmission & Energy consumed when the task is waiting for communication to complete. \\
\hline
\multirow{3}{*}{\textbf{Server (If sleeping)} } & Wakeup energy during transition & Energy spent during wakeup. \\
\cline{2-3}
& Core sleep energy savings loss & If sleeping, the energy savings lost by forcing the core into the active state \\
\cline{2-3}
& Package sleep energy savings loss & If this is the first core being woken up, the CPU package sleep savings are lost. \\
\arrayrulecolor{black}\hline
\multicolumn{3}{|c|}{\textbf{Network Energy}} \\
\hline
\multirow{3}{*}{\begin{tabular}[c]{@{}l@{}}\textbf{For Each Switch in the }\\\textbf{candidate flow path} \end{tabular}} & \begin{tabular}[c]{@{}l@{}}Network energy cost based \\on allocated bandwidth \end{tabular} & \begin{tabular}[c]{@{}l@{}}Every flow is allocated a BW based on the network congestion and~\\link capacity of the entire flow path, and the flow's minimum~\\bandwidth requirement to meet QoS. \end{tabular} \\
\arrayrulecolor[rgb]{0.651,0.651,0.651}\cline{2-3}
& Wakeup energy during transition & Energy spent to transition to the active state \\
\cline{2-3}
& Line card Active energy & If the Linecard is sleeping, the energy cost is the full active power consumption. \\
\hline
\begin{tabular}[c]{@{}l@{}}\textbf{For Every flow in common }\\\textbf{path of the candidate path} \end{tabular} & \begin{tabular}[c]{@{}l@{}}Energy savings due to decreased \\bandwidth for every other flow \end{tabular} & \begin{tabular}[c]{@{}l@{}}For every other flow in the current flow path whose BW is reduced,\\calculate the energy spent on it. \end{tabular} \\
\arrayrulecolor{black}\hline
\end{tabular}
\end{table*}
\section{Introduction}
\label{sec:Introduction}
Data centers have spurred rapid growth in computing, and an increasing number of user applications have continued to migrate toward cloud computing in the past few years. With this growing trend, data centers now account for about 2\% of US energy consumption \cite{barrosoCaseEnergyProportionalComputing2007}. Many public cloud computing environments have power consumption on the order of several gigawatts. Therefore, energy is a key challenge in data centers.
Data center servers are typically provisioned for peak performance to always satisfy user demands. This, however, also translates to higher power consumption. Hardware investments have resulted in power saving mechanisms, such as DVFS and low-power or idle states~\cite{acpi}.
We note that power reduction strategies in network switches and routers have been largely studied in large-scale network settings. Gupta et al.~\cite{guptaGreeningInternet2003} proposed a protocol-level support for coordinated entry into low-power states, where routers broadcast their sleep states for routing decisions to be changed accordingly. Adaptive Link Rate (ALR) for ethernet~\cite{gunaratneReducingEnergyConsumption2008} allows the network links to reduce their bandwidth adaptively for increased power efficiency. Such approaches may not be very effective in data center settings where application execution times have a higher dependence on network performance and Quality of Service (QoS) demands by the users.
In this article, we propose PopCorns-Pro, a new framework to holistically optimize data center power through a cooperative network-server approach. We propose power models for data center servers and network switches (with support for low-power modes) based on power measurements in real system settings and memory power modeling tools from MICRON~\cite{lalSLCMemoryAccess2019}
and Cacti \cite{balasubramonianCACTINewTools2017}. We then study job placement algorithms that take communication patterns into account while optimizing the amount of sleep periods for both servers and line cards. Our experimental results show that we are able to achieve more than 20\% higher energy savings compared to a baseline strategy that relies on server load-balancing to optimize data center energy.
We note that further power savings can be obtained at the application level through carefully tuning them for usage of processor resources~\cite{chenWattsinsideHardwaresoftwareCooperative2013} or through load-balancing tasks across cores in multicore processor settings to avoid keeping cores unnecessarily active.
Such strategies can complement our proposed approach, and boost further power savings in data center settings.
We extend our previous work~\cite{luPopCornsPowerOptimization2018} by proposing a multi-state power model for data center network switches and servers based on available power measurements in real system settings and memory power modeling tools. We formulate a power optimization problem that jointly considers both servers (with multiple cores) and network switches (with multiple line cards). We improve network traffic modeling accuracy by including link capacities. We also consider realistic heterogeneous switches in the Fat-tree topology with different performance and power characteristics for switches at the Core, Aggregate and Edge levels.
We compare our approach against a server load-balancing mechanism for task placement combined with a traditional Dijkstra-based network routing algorithm. We also consider a server energy optimization algorithm and a greedy bin-packing network routing policy which consolidates traffic into fewer switches. These four policy combinations are compared with our Popcorns server algorithm, and we find that a combined server-network energy-aware algorithm is more efficient than optimized versions of the individual approaches previously considered for data center energy optimization. We consider real-world job arrival traces, as well as synthetic bursty arrivals at different job network demands, to characterize the benefits of our combined server-network selection policy in terms of energy consumption and job latencies. We find that our approach provides a 25-80\% reduction in energy consumption compared to optimizing the server and network separately with conventional techniques.
The important goals and overall contributions of our work are:
\begin{itemize}
\item We conceptualize the architectural sleep states for the switch, based on functional
components and architectural design of real world data center switches. We
motivate the benefit of such sleep states to the overall system power efficiency.
\item We propose a new algorithm that considers servers and networks energy
characteristics to coordinate server task placement while considering the power
drawn by network components. Transition between power states in switches is
controlled by buffer sizes and traffic patterns.
\item We evaluate the impact of having the low-power states in the switch
and our policy which optimizes for lower energy consumption in our data-center
simulator. We consider a FatTree data center topology with various configuration parameters
such as network traffic sizes, CPU traces, server and network performance
models.
\end{itemize}
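As an illustration of the buffer-driven state transitions mentioned in the second contribution, the following is a minimal hysteresis sketch; the thresholds and names are our own assumptions, not the tuned values used in the simulator:

```python
def next_linecard_state(state, buffer_occupancy, low=0.05, high=0.6):
    """Hysteresis-style transition between 'active' and 'sleep' for a line
    card, driven by its buffer occupancy (fraction of capacity): go to sleep
    when the buffer drains below `low`, wake up when it fills past `high`.
    The gap between the thresholds avoids rapid state oscillation."""
    if state == "active" and buffer_occupancy < low:
        return "sleep"
    if state == "sleep" and buffer_occupancy > high:
        return "active"
    return state
```

A port in LPI mode could be handled analogously, with its own thresholds reflecting the much shorter LPI entry/exit latencies.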
\section{Conclusion}
\label{sec:Conclusion}
In this article, we presented Popcorns-Pro\xspace, in which we explore techniques that make smart use of line card and port low-power states in switches and orchestrate them with an intelligent joint task placement and routing algorithm for more effective power management. The results show good promise in achieving considerable power savings compared to the baseline policies. Our experimental results show that smart management of low-power states achieves up to 80\% higher energy savings over policies that optimize server and network power separately, while keeping job latencies low.
\section{Baseline policies}
\begin{figure*}
\centering
\captionsetup{font=small}
\subfloat[Shortest Path and Random Server job scheduling]{
\includegraphics[width=0.33\textwidth]{figures/heatmaps/heatmap_djikstra_user_specify_random_.png}
\label{djikstra_heat_random}
}
\subfloat[Elastic Tree and Random Server job scheduling]{
\includegraphics[width=0.33\textwidth]{figures/heatmaps/heatmap_elastic_tree_user_specify_random_.png}
\label{elastic_tree_heat_random}
}
\subfloat[Shortest Path and WASP Server job scheduling]{
\includegraphics[width=0.33\textwidth]{figures/heatmaps/heatmap_djikstra_user_specify_wasp_.png}
\label{djikstra_heat_wasp}
}
\subfloat[Elastic Tree and WASP Server job scheduling]{
\includegraphics[width=0.33\textwidth]{figures/heatmaps/heatmap_elastic_tree_user_specify_wasp_.png}
\label{elastic_tree_heat_wasp}
}
\subfloat[Popcorns based server and network scheduling]{
\includegraphics[width=0.33\textwidth]{figures/heatmaps/heatmap_popcorns_user_specify_popcorns_.png}
\label{popcorns_heat_popcorns}
}
\caption{Illustration of the average sleep state for switches and server with
different policies. Popcorns-Pro\xspace scheduling policy achieves greater consolidation of server and network flows. }
\label{fig:heatmap-graph}
\end{figure*}
Figure~\ref{fig:heatmap-graph} illustrates the heat map at the end of the execution of the same Poisson-based random job arrival pattern with mean arrival rate $\lambda$=0.5 jobs/second, using the different server and network job scheduling policies for a small 16-server Fat-Tree topology. We note that with random server selection, all servers are equally utilized, with Elastic Tree based network scheduling performing more network consolidation for better savings.
When using random server selection for tasks with Shortest Path routing (Fig.~\ref{djikstra_heat_random}) and Elastic Tree based routing (Fig.~\ref{elastic_tree_heat_random}), we can see that there are more aggregate switches in dark green, allowing more switches to stay in a deep sleep state. Similarly, for the WASP server selection scheme in Figs.~\ref{djikstra_heat_wasp} and \ref{elastic_tree_heat_wasp}, we see that there is unequal usage of the servers and more flows are consolidated into fewer switches. It can be seen that, even compared with the server energy optimization in WASP, our combined server and network policy Popcorns-Pro\xspace (Fig.~\ref{popcorns_heat_popcorns}) achieves much higher savings than each of the individual optimizations. The Popcorns-Pro\xspace algorithm is able to achieve this mainly by being aware of the network energy state when scheduling jobs on the servers. It is further able to increase energy savings through greater consolidation of flows and servers, by exploiting the latency slacks available at each node in the data center.
\section{Power models}
\label{sec:systemdesign}
\label{sec:system-model}
\subsection{Conceptual Network Switch Low Power States}
\label{sec:switchpower}
\input{switchpower}
\subsection{Server Low-power States and Power Model}
\label{sec:serverpower}
With the focus on improving energy efficiency when servers are under-utilized, low-power states were created
with the goal of approaching energy-proportional computing. For standardization across computing platforms, the Advanced Configuration
and Power Interface (ACPI)~\cite{acpi} specification gives operating system developers and hardware vendors a common, platform-independent interface to energy-saving states.
ACPI has historically been well supported by hardware vendors such as Intel and IBM~\cite{wareArchitectingPowerManagement2010}. ACPI uses global states, \emph{Gx}, to represent states of the
entire system that are visible to the user. We discuss the component-wise
breakdown of such system states in Table~\ref{tab:serverpowermodel}.
Although processor sleep states can significantly reduce the power consumed by the processor, servers can still consume a considerable amount of power, as the rest of the platform may remain active. To achieve further energy savings, \emph{system sleep states}, which also put platform components into low-power states, are considered for server farm power management; we therefore consider an additional full-system sleep state.
\begin{itemize}
\item \textbf{CPU(s)}: Low-power states are an important feature widely
supported in today's processors. In a multicore processor, each core
can have architectural components or features disabled or turned off
by stopping the clock signal or removing power; such states are
called C-states. A higher-numbered C-state or S-state typically indicates more
aggressive energy savings but also corresponds to longer wakeup latencies.
For multi-core processors, low-power sleep states are supported at both the core
level and the package level. When all cores become idle and reside in some
\emph{core C-state}, the entire package resolves to a greater power-saving state, denoted as a \emph{package sleep state}, which further reduces power.
\item \textbf{Chipset}: The chipset and board consume energy to support the main
chipset, voltage regulators, bus control chips, and interfaces for peripheral
devices. According to Intel specifications~\cite{IntelX99Chipset}, the chipset
has a TDP of 6.5\,W. The minimal power savings available in the deeper sleep states of the
system can be ignored.
\item \textbf{DRAM}: DRAM power consumption can be separated into three parts:
read/write energy; activate and pre-charge energy to open the specific row buffer being operated
upon; and refresh power to continuously perform refresh cycles.
Characterizing DRAM power depends on several factors:
\begin{enumerate}
\item DRAM technology (DDR3, DDR4, LPDDR3L, etc.), each with its own power and
performance characteristics.
\item Workload, i.e., the number of last-level cache misses: an application whose working
set does not fit into the CPU cache can incur many misses.
\item Manufacturer optimizations: the DRAM power-down mode can be implemented
differently by different manufacturers. The DRAM self-refresh mode allows the
DRAM to be clock-gated from the external memory controller while it refreshes itself.
\item Memory clock speed: a higher clock improves performance but consumes more power.
\end{enumerate}
We used the Micron power calculator to arrive at the power consumption of the
memory components, as shown in Table~\ref{tab:linecardpower}.
\item \textbf{Power supply}: We consider AC-DC conversion losses, which are
typically 10\% of the power consumed by the system.
\item \textbf{Fans}: The power consumption of a cooling fan is directly proportional to
the cube of the current fan speed.
\end{itemize}
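As a rough consistency check on the table values, the per-state server power can be sketched as a sum of component draws. This is a minimal sketch: the helper names and the fan-speed parameter are illustrative, the component wattages follow Table~\ref{tab:serverpowermodel}, and PSU loss and fan power follow the 10\% and cubic models above.

```python
# Sketch (illustrative helper names): total server power per global state,
# using per-component draws from our representative server power model.
COMPONENTS = {  # component -> unit count and watts per state
    "cpu":     {"units": 2, "Active": 135,  "Idle": 108,  "G1": 22,   "G2": 15},
    "chipset": {"units": 1, "Active": 8,    "Idle": 8,    "G1": 8,    "G2": 8},
    "dimm":    {"units": 8, "Active": 1.45, "Idle": 0.29, "G1": 0.29, "G2": 0.29},
    "disk":    {"units": 4, "Active": 10,   "Idle": 10,   "G1": 1,    "G2": 1},
    "nic":     {"units": 2, "Active": 3,    "Idle": 3,    "G1": 1,    "G2": 1},
}

def fan_power(max_watts: float, speed_frac: float, n_fans: int = 2) -> float:
    """Fan power grows with the cube of the fan speed fraction."""
    return n_fans * max_watts * speed_frac ** 3

def server_power(state: str, fan_speed: float) -> float:
    """Sum component draws for a state, add fan power, then 10% PSU loss."""
    subtotal = sum(c["units"] * c[state] for c in COMPONENTS.values())
    subtotal += fan_power(15.0, fan_speed)
    return subtotal * 1.10  # AC-DC conversion loss
```

For the deepest state with fans off, this sums to roughly 51\,W, in line with the table's total; the Active column at partial fan speed lands close to the table's 385\,W.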
\begin{table*}
\small \centering
\setlength{\extrarowheight}{0pt}
\addtolength{\extrarowheight}{\aboverulesep}
\addtolength{\extrarowheight}{\belowrulesep}
\setlength{\aboverulesep}{0pt}
\setlength{\belowrulesep}{0pt}
\caption{A Representative Server Power model with Multi-socket Xeon processors and 128GB DDR3 }
\label{tab:serverpowermodel}
\resizebox{\linewidth}{!}{%
\begin{tabular}{lcllll}
\toprule
\textbf{Server Power Component } & \textbf{Number of units } & \textbf{Active } & \textbf{Idle (G0) } & \textbf{G1 } & \textbf{G2 } \\
\hline
\begin{tabular}[c]{@{}l@{}}\textbf{CPU}\\\textbf{(Intel Xeon E5}\\\textbf{V2 2690) }\end{tabular} & 2 & 135W & 108W & 22W & 15W \\
\rowcolor[rgb]{0.851,0.851,0.851} & & \begin{tabular}[c]{@{}>{\cellcolor[rgb]{0.851,0.851,0.851}}l@{}}Average sustainable\\power consumption (TDP)\end{tabular} & \begin{tabular}[c]{@{}>{\cellcolor[rgb]{0.851,0.851,0.851}}l@{}}All CPUs/cores set\\to lowest DVFS \\frequency with\\20\% savings\end{tabular} & \begin{tabular}[c]{@{}>{\cellcolor[rgb]{0.851,0.851,0.851}}l@{}}All cores of all CPUs\\in C3 state- Clock\\Stopped and Cache\\flushed but other\\architectural state\\maintained\end{tabular} & \begin{tabular}[c]{@{}>{\cellcolor[rgb]{0.851,0.851,0.851}}l@{}}All cores of all CPUs\\in C6 state- Power\\gating the CPU, after\\saving the architectural\\state to the DRAM\end{tabular} \\
\textbf{ Chipset } & 1 & 8W & 8W & 8W & 8W \\
\rowcolor[rgb]{0.851,0.851,0.851} & & \begin{tabular}[c]{@{}>{\cellcolor[rgb]{0.851,0.851,0.851}}l@{}}Chipset powering \\the interfaces and\\peripherals\end{tabular} & \begin{tabular}[c]{@{}>{\cellcolor[rgb]{0.851,0.851,0.851}}l@{}}Chipset powering\\the interfaces and\\peripherals\end{tabular} & \begin{tabular}[c]{@{}>{\cellcolor[rgb]{0.851,0.851,0.851}}l@{}}Chipset powering the\\interfaces and\\peripherals\end{tabular} & \begin{tabular}[c]{@{}>{\cellcolor[rgb]{0.851,0.851,0.851}}l@{}}Chipset powering the\\interfaces and peripherals\end{tabular} \\
\begin{tabular}[c]{@{}l@{}}\textbf{ Memory }\\\textbf{(8x16GB DDR3) }\end{tabular} & 8 & 1.45W & 0.29W & 0.29W & 0.29W \\
\rowcolor[rgb]{0.851,0.851,0.851} & & \begin{tabular}[c]{@{}>{\cellcolor[rgb]{0.851,0.851,0.851}}l@{}}DRAM serving a normal\\utilization~of reads writes\\(25\% of all CPU~cycles\\each) + ACT~power (228mW) +\\I/0 power (540mw) + Background +\\Termination power(672mW)\end{tabular} & \begin{tabular}[c]{@{}>{\cellcolor[rgb]{0.851,0.851,0.851}}l@{}}Memory without new\\Reads or Writes, with ACT\\and background power\\consumption, and\\Background power 57mW.\end{tabular} & \begin{tabular}[c]{@{}>{\cellcolor[rgb]{0.851,0.851,0.851}}l@{}}Memory without new\\Reads or Writes, \\with ACT and\\background power consumption, and\\Background power 57mW.\end{tabular} & \begin{tabular}[c]{@{}>{\cellcolor[rgb]{0.851,0.851,0.851}}l@{}}Memory without new\\Reads or Writes, with\\ACT and background\\power consumption,\\and Background\\power 57mW.\end{tabular} \\
\begin{tabular}[c]{@{}l@{}}\textbf{ Disks }\\\textbf{(8 TB SSDs}\\\textbf{in RAID) }\end{tabular} & 4 & 10W & 10W & 1W & 1W \\
\rowcolor[rgb]{0.851,0.851,0.851} & & Active mode & Active mode & standby power & standby power \\
\textbf{ Network Interface card } & 2 & 3W & 3W & 1W & 1W \\
\rowcolor[rgb]{0.851,0.851,0.851} & & Active mode & Wake-on-LAN mode & Wake-on-LAN mode & Wake-on-LAN mode \\
\textbf{ Power Supply losses } & 2 & 38 W & 31W & 7W & 5W \\
\rowcolor[rgb]{0.851,0.851,0.851} & \multicolumn{1}{l}{} & 10\% current system power consumption & 10\% current system power consumption & 10\% current system power consumption & 10\% current system power consumption \\
\begin{tabular}[c]{@{}l@{}}\textbf{Cooling Fans}\\\textbf{15W fans}\end{tabular} & 2 & 10.5W & 5W & 5W & 0 W \\
\rowcolor[rgb]{0.851,0.851,0.851} & & 70\% of Full Power & 30\% Power & 30\% Power & Off \\
\hline
\textbf{Total} & \multicolumn{1}{l}{} & \textbf{385W} & \textbf{308W} & \textbf{73W} & \textbf{~51W} \\
\hline
\textbf{ Transition Time - To sleep } & & 0usecs & 10 usecs & 100 usecs & 500 msecs \\
\rowcolor[rgb]{0.851,0.851,0.851} & & & DVFS transition time & Transition to C3 state & \begin{tabular}[c]{@{}>{\cellcolor[rgb]{0.851,0.851,0.851}}l@{}}Saving Architectural state of CPU to ram\\and~power supply\\power up\end{tabular} \\
\hline
\textbf{ Wakeup latency } & & 0 usecs & 100 usecs & 100 msecs & 1 second \\
\rowcolor[rgb]{0.851,0.851,0.851} & & & CPU frequency scale & Transiting all cores to C0 state & Transitioning CPU to Active state \\
\bottomrule
\end{tabular}
}
\end{table*}
\subsection{Modeling Jobs}
More and more applications are designed using a modular, microservices-based paradigm, where inter-dependent tasks are hosted on different servers. The modularization of a job helps reduce complexity in software development, with independently updatable software programs in a scalable `single-service instance per server' deployment pattern.
We model the execution of jobs at the server side as consisting of multiple inter-dependent tasks with both spatial and temporal inter-dependence. Application tasks are typically executed by specific server types; for example, a web service request is first processed by an application or web server, while a search request is processed by a database server. This kind of task relationship is called spatial inter-dependence. In terms of temporal inter-dependence, a task cannot start executing until all of its `parent' tasks have finished their execution and their results have been communicated to the server assigned to the task. A job is considered finished when all of its tasks finish execution. As for servers, there are multiple cores per server, and one core can process only one task at a time. We support asynchronous task execution by allowing the server running a parent task to release the CPU once all flows to the child tasks' servers are completed, even if a child task is still waiting in a queue for execution.
Each job $j$ can be represented as a directed acyclic graph (DAG) $G^j(V^j, E^j)$, where $V^j$ is the set of tasks of job $j$. In the DAG, if there is a link from task $i^j$ to task $r^j$, then task $i^j$ must finish and communicate its results to task $r^j$ before $r^j$ can start processing. Each task $v^j \in V^j$ has a workload requirement, namely a task size or execution-time requirement $w^j_v$ for the core. Each link $l \in E^j$ has an associated data transfer size $D^j_l$, which denotes the bandwidth requirement to transfer the result over link $l$ (from the task at the head of the DAG link to the task at the tail) when assigned a network flow. Figure~\ref{figure1} shows an example of a job DAG.
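The job model above can be captured in a short sketch (class and method names are illustrative): tasks carry an execution-time requirement $w^j_v$, edges carry a flow size $D^j_l$, and a task is ready only once all of its parents have finished.

```python
from collections import defaultdict

class JobDAG:
    """Minimal sketch of a job DAG: tasks with sizes, edges with flow sizes."""
    def __init__(self):
        self.w = {}                      # task -> execution-time requirement w_v
        self.D = {}                      # (parent, child) -> data transfer size D_l
        self.parents = defaultdict(set)  # child -> set of parent tasks

    def add_task(self, v, w):
        self.w[v] = w

    def add_edge(self, i, r, size):
        self.D[(i, r)] = size
        self.parents[r].add(i)

    def ready(self, v, finished):
        """Temporal inter-dependence: v may start only after every parent
        has finished and communicated its result."""
        return self.parents[v] <= set(finished)

    def done(self, finished):
        """A job finishes when all of its tasks have finished."""
        return set(self.w) <= set(finished)
```

For example, with edges $1 \to 3$ and $2 \to 3$, task 3 becomes ready only after both task 1 and task 2 finish.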
\begin{figure}
\captionsetup{font=small}
\centering\includegraphics[width=0.4\linewidth]{figures/DAG.pdf}\caption{Example of a job DAG. Numbers 1-6 denote task 1-task 6 respectively. Numbers around the tasks represent task size, while numbers on the links represent flow size.}
\label{figure1}
\end{figure}
\begin{figure}
\captionsetup{font=small}
\centering\includegraphics[width=0.3\textwidth]{figures/fat_tree.png}
\caption{Fat tree topology with K=4 (16 servers).}
\label{figure12}
\end{figure}
\subsection{Data Center System}
\label{sec:dc-sys}
Figure~\ref{figure12} shows the classic fat-tree topology used in our network-server system, where each subset is called a `pod'. In our system, each switch consists of a number of distributed cards plugged into the backplane, which provides the physical connectivity~\cite{panZerotimeWakeupLine2016}. Among these cards are multiple line cards for forwarding packets or flows, which can be in an active, sleep, or off state. In turn, each line card contains several ports connecting to external links, which can be in an active, LPI, or off state. A typical schematic of the switch, line card, and ports is shown in Figure~\ref{figure13}.
\subsubsection{Supervisor modules}
As shown in Figure~\ref{figure13}, the control plane, where all
forwarding decisions are made, is implemented by the SUP (Supervisor) module, which consists
of the Route Processor (OSI layer-3 functions) and the Switch Processing card (OSI
layer-2 functions).
The Cisco SUP2E contains up to 32\,GB of DRAM (core switches).
According to the Cisco power calculator~\cite{CiscoPowerCalculator}, a route
processor card consumes up to 69\,W. We break down the components of the route
processing card in Table~\ref{tab:switchchassis}.
\subsubsection{Line Card Power Model}
\label{sec:switcharch}
Due to the lack of detailed power models for commercial switches, we propose and
derive our switch power model based on the available literature and memory power
modeling tools from Micron~\cite{MicronMemoryPower}. A production line card
consists of several components, such as ASICs, TCAMs, DRAM memory, and ports. Our power model for each component is explained below.
\begin{figure}
\captionsetup{font=small}
\centering\includegraphics[width=1.2\linewidth]{figures/Switcharch.png}\caption{Illustration of Switch architecture.} \label{figure13}
\end{figure}
\begin{table*}
\centering
\small
\setlength{\extrarowheight}{0pt}
\addtolength{\extrarowheight}{\aboverulesep}
\addtolength{\extrarowheight}{\belowrulesep}
\setlength{\aboverulesep}{0pt}
\setlength{\belowrulesep}{0pt}
\caption{Single Linecard Power model for an Edge level Switch with 1 Gbps maximum bandwidth}
\label{tab:linecardpower}
\resizebox{\linewidth}{!}{%
\begin{tabular}{llllll} \toprule
& \multicolumn{1}{c}{\begin{tabular}[c]{@{}c@{}}\textbf{Active }\\\textbf{(1 Gbps~Bandwidth )}\end{tabular}} & \textbf{Low power state 1} & \textbf{Low power state 2} & \textbf{Low Power state 3} & \begin{tabular}[c]{@{}l@{}}\textbf{Line Card~ -}\\\textbf{Deepest Sleep State~}\end{tabular} \\ \hline
\begin{tabular}[c]{@{}l@{}}Packet Forwarding\\~(Forwarding and \\replication Engines)\end{tabular} & 165W & 132W & 66W & 33W & 0W \\
\rowcolor[rgb]{0.882,0.882,0.882} & \begin{tabular}[c]{@{}>{\cellcolor[rgb]{0.882,0.882,0.882}}l@{}}Active-Max Clock\\frequency\end{tabular} & \begin{tabular}[c]{@{}>{\cellcolor[rgb]{0.882,0.882,0.882}}l@{}}DVFS savings for\\the network processor.\end{tabular} & \begin{tabular}[c]{@{}>{\cellcolor[rgb]{0.882,0.882,0.882}}l@{}}Clock gating the\\lookup~caches in\\the forwarding\\engines, Replication\\engines clock gated.\end{tabular} & \begin{tabular}[c]{@{}>{\cellcolor[rgb]{0.882,0.882,0.882}}l@{}}Forwarding engine's \\Core clock and bus \\stopped.\end{tabular} & \begin{tabular}[c]{@{}>{\cellcolor[rgb]{0.882,0.882,0.882}}l@{}}Architectural state\\flushed to DRAM\\and Forwarding\\engine and replication\\engines are power gated.\end{tabular} \\
\begin{tabular}[c]{@{}l@{}}VoQ (M)\\(INPUT + OUTPUT)(max)\end{tabular} & \begin{tabular}[c]{@{}l@{}}15.5W +\\2 x 6 mW per MB of flow\end{tabular} & \begin{tabular}[c]{@{}l@{}}15.5W+\\1 x 6mW per MB flow\end{tabular} & 4W & 0 W & 0 W \\
\rowcolor[rgb]{0.882,0.882,0.882} & \begin{tabular}[c]{@{}>{\cellcolor[rgb]{0.882,0.882,0.882}}l@{}}Both Input and\\Output buffers active\end{tabular} & \begin{tabular}[c]{@{}>{\cellcolor[rgb]{0.882,0.882,0.882}}l@{}}Output VoQ buffer\\turned off\end{tabular} & \begin{tabular}[c]{@{}>{\cellcolor[rgb]{0.882,0.882,0.882}}l@{}}DRAM clock gated\\and contents lost.\end{tabular} & DRAM Power gated & DRAM power gated \\
TCAM(max) & \begin{tabular}[c]{@{}l@{}}12W+ \\2.6 mW per MB of flow\end{tabular} & \begin{tabular}[c]{@{}l@{}}12W+2.6 mW per MB\\of flow\end{tabular} & \begin{tabular}[c]{@{}l@{}}12W+2.6 mW per MB\\of flow\end{tabular} & 12W & 0W \\
\rowcolor[rgb]{0.882,0.882,0.882} & \begin{tabular}[c]{@{}>{\cellcolor[rgb]{0.882,0.882,0.882}}l@{}}TCAM active and\\synchronized\end{tabular} & \begin{tabular}[c]{@{}>{\cellcolor[rgb]{0.882,0.882,0.882}}l@{}}TCAM Active and\\synchronized\end{tabular} & \begin{tabular}[c]{@{}>{\cellcolor[rgb]{0.882,0.882,0.882}}l@{}}TCAM active and\\synchronized\end{tabular} & \begin{tabular}[c]{@{}>{\cellcolor[rgb]{0.882,0.882,0.882}}l@{}}TCAM stopped,\\synchronized,\\and clock gated.\end{tabular} & \begin{tabular}[c]{@{}>{\cellcolor[rgb]{0.882,0.882,0.882}}l@{}}TCAM-routing\\tables flushed and\\power gated\end{tabular} \\
Interconnect Fabric interface & 23W & 23W & 23W & 0 W & 0 W \\
\rowcolor[rgb]{0.882,0.882,0.882} & \begin{tabular}[c]{@{}>{\cellcolor[rgb]{0.882,0.882,0.882}}l@{}}Active for Control\\plane operations\end{tabular} & \begin{tabular}[c]{@{}>{\cellcolor[rgb]{0.882,0.882,0.882}}l@{}}Active for Control\\plane operations\end{tabular} & \begin{tabular}[c]{@{}>{\cellcolor[rgb]{0.882,0.882,0.882}}l@{}}Active for Control\\plane operations.\end{tabular} & \begin{tabular}[c]{@{}>{\cellcolor[rgb]{0.882,0.882,0.882}}l@{}}Power gated,\\Control plane\\operation stopped\end{tabular} & \begin{tabular}[c]{@{}>{\cellcolor[rgb]{0.882,0.882,0.882}}l@{}}Power gated,\\Control plane\\operation stopped\end{tabular} \\
Host Processor & 24 W & 22 W & 9 W & 9 W & 3 W \\
\rowcolor[rgb]{0.882,0.882,0.882} & \begin{tabular}[c]{@{}>{\cellcolor[rgb]{0.882,0.882,0.882}}l@{}}Active and running\\Linecard OS\end{tabular} & \begin{tabular}[c]{@{}>{\cellcolor[rgb]{0.882,0.882,0.882}}l@{}}Active and running\\Linecard OS, with\\DVFS\end{tabular} & C3 state. Halt mode. & C3 state. Halt mode & \begin{tabular}[c]{@{}>{\cellcolor[rgb]{0.882,0.882,0.882}}l@{}}C5 state: deeper\\sleep state\end{tabular} \\
Ports & 29 W & 15W & 4 W & 4 W & 4 W \\
\rowcolor[rgb]{0.882,0.882,0.882} & All 24 ports active & \begin{tabular}[c]{@{}>{\cellcolor[rgb]{0.882,0.882,0.882}}l@{}}Output ports in Low\\Power Idle- Wake on Arrival\end{tabular} & \begin{tabular}[c]{@{}>{\cellcolor[rgb]{0.882,0.882,0.882}}l@{}}All ports in Low\\Power Idle-Wake\\on Arrival\end{tabular} & \begin{tabular}[c]{@{}>{\cellcolor[rgb]{0.882,0.882,0.882}}l@{}}All ports in Low\\Power Idle- Wake on Arrival\end{tabular} & \begin{tabular}[c]{@{}>{\cellcolor[rgb]{0.882,0.882,0.882}}l@{}}All ports in Low\\Power Idle- Wake on Arrival\end{tabular} \\ \hline
\begin{tabular}[c]{@{}l@{}}\textbf{Max Power Consumption~~}\\\textbf{in each state (}\\\textbf{1 Gbps when active)}\end{tabular} & \textbf{310 W} & \textbf{250 W} & \textbf{151 W} & \textbf{45 W} & \textbf{7 W} \\ \bottomrule
\end{tabular}
}
\end{table*}
\begin{table}
\centering
\scriptsize
\caption{Switch Chassis power model.}
\label{tab:switchchassis}
\begin{tabular}{lll}
\toprule
\multicolumn{1}{c}{\begin{tabular}[c]{@{}c@{}}\textbf{Switch Chassis }\\\textbf{components}\end{tabular}} & \multicolumn{1}{c}{\begin{tabular}[c]{@{}c@{}}\textbf{At-least}\\\textbf{One}\\\textbf{Line card~}\\\textbf{Active}\end{tabular}} & \multicolumn{1}{c}{\begin{tabular}[c]{@{}c@{}}\textbf{All Line }\\\textbf{Cards}\\\textbf{in OFF}\\\textbf{state}\end{tabular}} \\
\hline
\multicolumn{3}{c}{\textbf{Route processing card}} \\
\hline
Hostprocessor & 24W & 0 \\
DRAM-3GB & 32.5W & 0 \\
Persistent Storage & 10W & 0 \\
\hline
\multicolumn{3}{c}{\textbf{\textbf{Switch Interconnect card}}} \\
\hline
Switch Fabric and~~I/0 (40\%load) & 85W & 0 \\
Scheduler ASIC & 24W & 0 \\
\hline
\multicolumn{3}{c}{\textbf{Chassis Controller}} \\
\hline
Host-Processor & 24W & 24W \\
DRAM-3GB & 12W & 12W \\
\hline
\multicolumn{3}{c}{\textbf{Power Proportional Chassis Components}} \\
\hline
Power supply losses~per Active card (25\%) & 93W & 5W \\
Cooling per Active Line~\textasciitilde{}Card (20\%) & 77W & 4W \\
\bottomrule
\end{tabular}
\end{table}
\textbf{1. ASICs/Network Processors:} The data plane operations of the switch, such as
parsing packet contents to read the header, looking up routing tables, and
forwarding data to the corresponding destination port, are performed by the
network processor. The processing is divided between two functional components:
the Forwarding Engine and the Replication Engine. The Replication Engine separates the
header and data parts of a network packet and passes the header to the Forwarding
Engine; it also reassembles the packet after it is processed
by the Forwarding Engine and, in multicast communication, duplicates packets to
different destination ports. The Forwarding Engine is the decision-making
component in the line card, using the locally synchronized routing
tables and QoS and ACL lookups. According to Wobker's report~\cite{wobkerPowerConsumptionHighend2012}, this
consumes 52\% of the total power in enterprise Cisco line cards; accordingly,
the ASIC/network processor's power consumption is computed to be 165\,W.
Following studies by Iqbal et al.~\cite{iqbalEfficienttrafficaware2012} and Luo et al.,
we construct the energy-saving sleep states by drawing parallels to the design
of sleep states in general-purpose processors. In the first idle state, the
clock can be set to the lowest frequency, resulting in 20\% savings. In
the first sleep state, the lookup caches in the Forwarding Engines can be flushed
and clock-gated along with the Replication Engines. In the next sleep state, the core processing clock and processor bus are clock-gated.
In the deepest sleep state, the entire architectural state of the processor, including runtime data from Forwarding Information Base rules and QoS classification counters, is written to the line card DRAM, and the processor is power-gated.
\textbf{2. VoQ (DRAM memory and SRAM buffer):} In Cisco line cards, the Virtual Output
Queuing memory provides the buffering and queuing functions that manage
QoS, packet replication, and congestion control. The ingress DRAM is used to buffer incoming
packets, while the egress DRAM buffer stores the data payload while the header
is being processed by the Forwarding Engine. The active power consumption of
DRAM depends on the frequency of accesses, while the leakage/static power depends on
the transistor technology. The Micron power calculator~\cite{MicronMemoryPower} for
RLDRAM3 (Reduced Latency DRAM) shows a power consumption of 1571\,mW per 1.125\,GB
of memory when active and 314\,mW when in sleep.
There is also a high-speed SRAM buffer that acts as a write-back cache for the
DRAM memory.
\textbf{3. TCAM (Ternary Content Addressable Memory):} The TCAM structure is used by the
L3 routing function to store routing table entries synchronized locally from
the route processing card. A typical 4.5\,Mb TCAM structure, used to offload high-speed packet lookup, consumes 15\,W of power~\cite{guoResistiveTCAMAccelerator2011}. We model the static leakage power for a 4.5\,Mb CAM structure using CACTI~\cite{balasubramonianCACTINewTools2017}, which estimates the power consumed during idle sleep periods when the memory is not accessed.
\textbf{4. Line card interconnect fabric:} The line card communicates with the chassis
bus through the interconnect interface, which consumes 23\,W in the active power state~\cite{panZerotimeWakeupLine2016}.
\textbf{5. Host processor and local DRAM:} Each line card includes a host processor, used during the line card boot and initialization process to copy routing table information from the switch fabric card. The processor keeps running in sleep mode to keep the routing tables synchronized and to wake up the line card on packet arrival. We assume a 30\% power reduction due to dynamic frequency scaling during line card sleep~\cite{liuSleepScaleRuntimeJoint2014}.
\textbf{6. Ports:} To tackle the energy consumption in network equipment, the IEEE 802.3az standard introduces the Low Power Idle (LPI) mode of Ethernet ports, which is used when there is no data to transmit, rather than keeping the port in active state all the time~\cite{gunaratneReducingEnergyConsumption2008}. The idea behind LPI is to refresh and wake up the port when there is data to be transmitted; the wakeup duration is usually small.
Apart from the line cards, we model a constant baseline power of 120\,W for the rest of the switch in the ON state, which includes the switch supervisor module, the backplane, cooling systems, and the switch fabric card, based on Pan et al.~\cite{panZerotimeWakeupLine2016}. The wakeup latency for each of the states is derived from the sleep and wakeup states conceptualized in Section~\ref{sec:transition}.
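Putting the component models together, the per-state line card power can be sketched by summing the rows of Table~\ref{tab:linecardpower}, with the flow-dependent VoQ and TCAM terms added in the states where those buffers are active. The helper below is illustrative; state indices 0--4 correspond to the table's Active through deepest-sleep columns.

```python
# Sketch: per-state line card power from the component rows of the
# line card power model (state index: 0=Active .. 4=deepest sleep).
LINECARD = {  # component -> watts per state
    "forwarding": [165, 132, 66, 33, 0],
    "voq_base":   [15.5, 15.5, 4, 0, 0],
    "tcam_base":  [12, 12, 12, 12, 0],
    "fabric":     [23, 23, 23, 0, 0],
    "host_cpu":   [24, 22, 9, 9, 3],
    "ports":      [29, 15, 4, 4, 4],
}

VOQ_W_PER_MB = 0.006    # 6 mW per MB of buffered flow, per buffer direction
TCAM_W_PER_MB = 0.0026  # 2.6 mW per MB of flow

def linecard_power(state: int, flow_mb: float = 0.0) -> float:
    """Static component draws plus flow-dependent VoQ/TCAM terms."""
    total = sum(v[state] for v in LINECARD.values())
    if state == 0:    # both VoQ buffer directions and TCAM active
        total += (2 * VOQ_W_PER_MB + TCAM_W_PER_MB) * flow_mb
    elif state == 1:  # output VoQ buffer turned off
        total += (VOQ_W_PER_MB + TCAM_W_PER_MB) * flow_mb
    elif state == 2:  # only the TCAM still tracks flow state
        total += TCAM_W_PER_MB * flow_mb
    return total
```

In the deepest sleep state this sums to 7\,W, matching the table's bottom-line total for that column.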
\subsection{Modeling switch sleep state transition and wake up latency}
\label{sec:transition}
\begin{figure}
\captionsetup{font=small}
\centering\includegraphics[width=.5\textwidth]{figures/switchtransition.pdf}\caption{Illustration of a Switch Sleep and Wakeup Transition process with 3 sleep states.}
\label{figure13-sleepcycle}
\end{figure}
As shown in Figure~\ref{figure13-sleepcycle}, we model the sleep policy for a switch based on the power model described in Section~\ref{sec:switchpower}. The line card unit of a switch is considered active when processing any output traffic, and idle when there is no incoming traffic for a certain period of time. In the first sleep state, the switch stops processing incoming packets but continues to receive new packets into the packet buffer, which requires the DRAM to remain on; the TCAM is not flushed, to avoid the wakeup latency associated with re-populating the routing tables. In the second sleep state, energy consumption is reduced by clock-gating the ASIC/network processor. In the third sleep state, additional savings are achieved by turning off the DRAM and putting the NP/ASIC into a deeper sleep state. In the last sleep state, we clock-gate the TCAM memories storing the routing tables, and the host processor and switch interconnect are put into a sleep state.
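The sleep-descent behavior above can be sketched as a small state machine: an idle line card steps into progressively deeper sleep states after successive idle intervals and returns to Active when traffic arrives. The state names and idle-time thresholds below are illustrative placeholders, not values from the text.

```python
# Sketch of the line card sleep-descent policy (illustrative thresholds).
SLEEP_ORDER = ["Active", "S1", "S2", "S3"]
IDLE_THRESHOLDS_US = {"Active": 100, "S1": 1_000, "S2": 10_000}  # assumed

class LineCardFSM:
    def __init__(self):
        self.state = "Active"
        self.idle_us = 0  # time spent idle in the current state

    def tick(self, idle_interval_us: int, traffic_arrived: bool) -> str:
        """Advance the policy by one observation interval."""
        if traffic_arrived:
            self.state, self.idle_us = "Active", 0
            return self.state
        self.idle_us += idle_interval_us
        nxt = SLEEP_ORDER.index(self.state) + 1
        if nxt < len(SLEEP_ORDER) and \
           self.idle_us >= IDLE_THRESHOLDS_US[self.state]:
            # descend one level and restart the idle timer
            self.state, self.idle_us = SLEEP_ORDER[nxt], 0
        return self.state
```

A wakeup from a deeper state would additionally pay the per-state wakeup latency shown in the figure before the card becomes active again.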
\subsection{Estimating the energy consumption}
\label{sec:algorithms}
Modeling this joint power optimization problem in data center networks (DCNs) as an Integer Linear Programming (ILP) formulation is one solution, and optimization tools such as MathProg can provide a near-optimal result. However, the computational complexity increases exponentially with the number of servers and switches~\cite{wangExpeditusCongestionAwareLoad2017}. In a typical data center with tens of thousands of servers and hundreds of switches, solving the optimization problem exactly is computationally prohibitive. We therefore propose a computationally efficient heuristic algorithm in this section.
\input{motivation}
\begin{comment}
In this section, we first present an Integer Linear Programming (ILP) formulation for the joint optimization problem in details. We then propose heuristic algorithms for solving the optimization problem efficiently.
\subsection{ILP formulation}
In the ILP formulation, the input parameters are shown in Table~\ref{table2:notation}.
\begin{table}
\caption{\label{table2:notation} Notations in ILP formulation}
\scalebox{0.5}{
\begin{tabular}{|c|c|c|}
\hline
\bf{Symbol}& \bf{Meaning} \\
\hline
$v$ & an arbitrary network node \\
\hline
$s$ & an arbitrary server \\
\hline
$V$ & set of network nodes (e.g. servers, switches) \\
\hline
$S$ & set of servers \\
\hline
$j$ & an arbitrary job \\
\hline
$L$ & set of links in the network \\
\hline
$P_{m,n}^{linecard}$ & power consumption of line card $m$ in switch $n$ \\
\hline
$P_{n}^{base}$ & power consumption of all parts other than line cards in switch $n$ \\
\hline
$N^{switch}$ & number of switches \\
\hline
$P_{c,s}^{core}$ & power consumption of core $c$ in server $s$ \\
\hline
$P_{s}^{base}$ & power consumption of all parts other than cores in server $s$ \\
\hline
$N_{s}^{core}$ & number of cores in server $s$ \\
\hline
$N^{server}$ & number of servers \\
\hline
$T^{j}$ & set of tasks in job $j$ \\
\hline
$t^{j}$ & an arbitrary task in job $j$, $t^{j}\in{T^{j}}$ \\
\hline
$N^{job}$ & number of jobs \\
\hline
$B_{r,t}^{j}$ & bandwidth requirement between task $r$ and task $t$ of job $j$, task $r$ is in server $s_r$, task $t$ is in server $s_t$ \\
\hline
$l_{n_1,n_2}$ & link between node $n_1$ and $n_2$, $l_{n_1,n_2}$ $\in$ L \\
\hline
$C_{l_{n_1,n_2}}$ & capacity of link $l_{n_1,n_2}$ \\
\hline
$L_{m, n}$ & set of links connected to line card $m$ in switch $n$ \\
\hline
$f_{l_{n_1,n_2}}$ & number of flows on link $l_{n_1,n_2}$ \\
\hline
$l_{p,s_1,q,s_2}$ & link between line card p in switch $s_1$ and line card q in switch $s_2$ $s$ to node $d$ \\
\hline
$l_{s,d}$ & an arbitrary path from node $s$ to node $d$ \\
\hline
$f_{l_{n_1,n_2}}$ & set of flows on link $l_{n_1,n_2}$ \\
\hline
$d^{l_{n_1,n_2}}_f$ & data rate of flow $f$ on link $l_{n_1,n_2}$ \\
\hline
\end{tabular}}
\end{table}
\begin{center}
\scalebox{0.8}{
Objective: ${Minimize(P^{switches}+P^{servers})}$}
\scalebox{0.8}{
$=\min{\sum_{n=1}^{N^{switch}}(P^{base}_n+\sum_{m=1}^{N_n^{linecard}}P_{m,n}^{linecard}x_m^n) +\sum_{s=1}^{N^{server}}(P_s^{base}+\sum_{c=1}^{N_s^{core}}P_{c,s}^{core}z_c^s)}$}
\end{center}
Decision Variables:
a)
\begin{equation*}
a^{j}_{t, c, s}=
\begin{cases}
1, \quad\makebox{if task } t \makebox{ of job } j \makebox{ is assigned to core } c \makebox{ in server } s \makebox{;} \\
0, \quad\makebox{otherwise }
\end{cases}
\end{equation*}
b)
\begin{equation*}
z^{j}_{t, r, s}=
\begin{cases}
1, \quad\makebox{if task } t \makebox{ and } r \makebox{ of job } j \makebox{ are assigned to server } s \makebox{;} \\
0, \quad\makebox{otherwise }
\end{cases}
\end{equation*}
c)
\begin{equation*}
y^{s}_{c}=
\begin{cases}
1, \quad\makebox{if core } c \makebox{ in server } s \makebox{ is operating; } \\
0, \quad\makebox{otherwise }
\end{cases}
\end{equation*}
d)
\begin{equation*}
x^{n}_{m}=
\begin{cases}
1, \quad\makebox{if line card } m \makebox{ in switch } n \makebox{ is active; } \\
0, \quad\makebox{otherwise }
\end{cases}
\end{equation*}
d)
\begin{equation*}
\lambda^{l}_{p_{s,d}}=
\begin{cases}
1, \quad\makebox{if link } l \makebox{ is on path } p_{s,d} \makebox{;} \\
0, \quad\makebox{otherwise }
\end{cases}
\end{equation*}
Constraints:
a) Each task of each job can be assigned to only one core. For all $j, t$
\begin{equation*}
\sum_{s=1}^{N^{server}} \sum_{c=1}^{N_s^{core}} a^{j}_{t, c, s} = 1
\end{equation*}
b) The number of tasks being executed on a server cannot exceed the number of its cores. For all $s$
\begin{equation*}
\sum_{j=1}^{N^{job}} \sum_{t\in{T^j}} \sum_{c=1}^{N_s^{core}} a^{j}_{t, c, s} = 1
\end{equation*}
c) Constraints to find the value of $z^{j}_{t, r, s}$. For all $j, t, r, s$
\begin{equation*}
\sum_{c=1}^{N^{core}_s}a^{j}_{t, c, s} + \sum_{c=1}^{N^{core}_s}a^{j}_{r, c, s} \leq 1 + z^{j}_{t, r, s}
\end{equation*}
\begin{equation*}
\sum_{c=1}^{N^{core}_s}a^{j}_{t, c, s} + \sum_{c=1}^{N^{core}_s}a^{j}_{r, c, s} \geq 2z^{j}_{t, r, s}
\end{equation*}
d) If link $l_{n_1,n_2}$ is on path $p_{s,d}$, there must exist another link from node $n_1, n_2$ respectively on this path.
\begin{equation*}
\sum_{c=1}^{N^{core}_s}a^{j}_{t, c, s} + \sum_{c=1}^{N^{core}_s}a^{j}_{r, c, s} \leq 1 + z^{j}_{t, r, s}
\end{equation*}
\begin{equation*}
\sum_{c=1}^{N^{core}_s}a^{j}_{t, c, s} + \sum_{c=1}^{N^{core}_s}a^{j}_{r, c, s} \geq 2z^{j}_{t, r, s}
\end{equation*}
e) If link $l_{s_1,s_2}$ is on the task communication path $p_{s,d}$, then the corresponding line cards in $s_1$ and $s_2$ must be active and the link between them is also on $p_{s,d}$. For $l_{p,s_1,q,s_2}$ $\in$ $L_{p,s_1}$, $l_{p,s_1,q,s_2}$ $\in$ $L_{q,s_2}$
\begin{equation*}
\lambda^{l_{p,s_1,q,s_2}}_{p_{s,d}} = \lambda^{l_{s_1,s_2}}_{p_{s,d}} = x^{s_1}_{p} = x^{s_2}_{q}
\end{equation*}
\begin{equation*}
x^{s_1}_{p}\lambda^{l_{p,s_1,q,s_2}}_{p_{s,d}} \geq \lambda^{l_{p,s_1,q,s_2}}_{p_{s,d}}
\end{equation*}
\begin{equation*}
x^{s_2}_{q}\lambda^{l_{p,s_1,q,s_2}}_{p_{s,d}} \geq \lambda^{l_{p,s_1,q,s_2}}_{p_{s,d}}
\end{equation*}
f) Line card $m$ in switch $n$ is off only when all the links connected to it are not on any path.
\begin{equation*}
\left| L_{m,n} \right| x^{n}_{m} \geq \sum_{l\in{L_{m,n}}}\lambda^{l}_{p_{s,d}}
\end{equation*}
g) Total flow rate on a link cannot exceed the link capacity.
\begin{equation*}
\sum_{f\in{f_{l_{n_1,n_2}}}} d^{l_{n_1,n_2}}_f \leq C_{l_{n_1,n_2}}
\end{equation*}
\end{comment}
\subsection{Heuristic Algorithms}
In this section, we first present a power state transition algorithm for line cards and ports, a simple power transition algorithm for servers, and then propose a joint job placement and network routing algorithm that solves the optimization problem efficiently.
\subsubsection{Power State Transition Algorithm}
Not all switches in a DCN need to be active at all times; if we intelligently control the transitions between active and low-power states for ports and line cards, data center network power consumption can be reduced. We therefore propose the Power State Transition Algorithm~\ref{algorithm1} to implement network power management. The notations are elaborated in Table~\ref{algorithm1table}. We assume a global controller that keeps a record of the status of all line cards, ports, and servers, including their power state, queue size, etc. The global controller monitors the current traffic load (number of pending flows or packets) at each port, decides whether the line card power state should change, and then decides whether the port state should change accordingly. In our design, the line card and port power states are coupled: if a line card is in the sleep or off state, all of its ports are in the LPI or off state; if a line card is active, its ports can be in the LPI or active state, and LPI ports can be woken up to become active in a short time.
\begin{algorithm}
\SetAlgoNoLine
\scriptsize
\caption{Power State Transition Algorithm}
\label{algorithm1}
\KwIn{$T_s$, $T_a$, $Q_{iL}$}
\KwOut{Line card and port state transition}
Initialization: All line cards are in sleep state, all ports are in LPI state\;
\While{there are jobs to be executed}
{
\If{a flow arrives at port $i$ of line card $L$ at time $t$}
{
\If{$L$ is in sleep state} {
\If{$Q_{iL} > T_s$} {
$L$ begins waking up from sleep state\;
$i$ begins waking up from LPI state\;
}
\Else
{
$L$ begins waking up after $\tau_{wakeup}^{LC}$\;
$i$ begins waking up after $\tau_{wakeup}^{port}$\;
}
}
}
\If{a flow is transmitted from port $i$ of line card $L$ at time $t$} {
\If{$Q_{iL} < T_s$} {
$i$ starts transition to LPI after $\tau_{LPI}$\;
\If{after $\tau_{LPI}$, all the ports of $L$ are in LPI state} {
$L$ starts transition to deeper sleep state after $\tau_{sleep}^{LC}$;
}
}
}
}
\end{algorithm}
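For concreteness, the coupled line-card/port state logic above can be sketched in Python (a simplified model: the class and method names are illustrative, transitions are applied immediately, and the wakeup/LPI delay timers of Algorithm~\ref{algorithm1} are abstracted away):

```python
# Simplified model of the coupled line-card/port power states described
# above. Timer-based delays are abstracted into immediate transitions;
# wake_threshold plays the role of the queue-size threshold T_s.

class LineCard:
    def __init__(self, num_ports, wake_threshold):
        self.state = "sleep"                  # sleep <-> active
        self.ports = ["LPI"] * num_ports      # LPI <-> active
        self.queues = [0] * num_ports         # Q_{iL}: pending flows per port
        self.wake_threshold = wake_threshold  # T_s

    def on_flow_arrival(self, port):
        self.queues[port] += 1
        # Wake the card (and the destination port) once enough traffic
        # is pending; otherwise arrivals keep queueing.
        if self.state == "sleep" and self.queues[port] > self.wake_threshold:
            self.state = "active"
            self.ports[port] = "active"

    def on_flow_departure(self, port):
        self.queues[port] = max(0, self.queues[port] - 1)
        # A drained port drops to LPI; once every port is idle, the
        # whole card can enter the deeper sleep state.
        if self.queues[port] < self.wake_threshold:
            self.ports[port] = "LPI"
            if all(p == "LPI" for p in self.ports):
                self.state = "sleep"

lc = LineCard(num_ports=2, wake_threshold=2)
for _ in range(3):
    lc.on_flow_arrival(0)
print(lc.state)  # "active" once the queue exceeds the threshold
```

The coupling rule of the design shows up directly in `on_flow_departure`: the card only sleeps when all of its ports have returned to LPI.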
\subsubsection{Popcorns-Pro\xspace-Cooperative Network-Server Algorithm}
{
The main idea of our algorithm is to jointly consider the status of the server pool and the network before assigning jobs. Specifically, for a job consisting of several pairs of interdependent tasks, placing task pairs according to their interdependence and choosing the core pair with the minimum routing cost, instead of randomly placing them on available cores without awareness of their communication requirements, yields a lower-cost placement together with its corresponding routing path. In the context of this paper, the network routing cost is energy consumption, since every line card on the chosen routing path has to be active, or it will have to be woken up.
Based on this idea, we propose the Cooperative Network-Server (CNS) Algorithm~\ref{algorithm2}.
The notations are elaborated in Table~\ref{algorithm1table}. In the initial stage, we compute and store all possible routing paths between any pair of network nodes. As mentioned in the previous section, a global controller keeps track of the status of all line cards, ports, and servers. When a job consisting of a set of interdependent tasks arrives, we first check the server side and select all server pairs that are in the active power state and whose local queue sizes do not exceed a threshold. If no server satisfies these requirements, servers with full local queues and servers in the C6 sleep state are selected instead. Note that if a task is assigned to a server in the sleep state, the server is activated immediately and enters the active state after a wakeup latency. For each server pair, we select possible routing paths between them from the pre-computed routing path set. Along each path, line cards may be active, sleeping, or off, and ports may be active, in LPI, or off; but if we assign a path to a task pair, all line cards and ports along it must become active. In other words, we need to wake up the inactive line cards and ports on the chosen path, which incurs extra power consumption and wakeup latency. Based on this, we compute and assign a weight to each candidate routing path, reflecting the number of line cards and ports that must be woken up, and choose the server pair with the minimum routing cost. The CNS algorithm outputs the least-weight path and its corresponding server pair for each task pair.}
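The selection step at the heart of the CNS algorithm, choosing the server pair and path of minimum routing cost, can be sketched as a brute-force minimization (a hypothetical sketch: `path_weights` stands in for the energy-based weight computation, and server availability filtering is assumed to have already happened):

```python
# Sketch of the CNS selection step: among the available server pairs,
# pick the pair (and path) whose routing cost is minimal. Here
# path_weights maps (server_x, server_y) -> list of (path, weight);
# in the full algorithm the weight reflects how many sleeping line
# cards and ports the path would force awake.

def choose_placement(available_servers, path_weights):
    best = None  # (weight, server_pair, path)
    for x in available_servers:
        for y in available_servers:
            if x == y:
                continue
            for path, w in path_weights.get((x, y), []):
                if best is None or w < best[0]:
                    best = (w, (x, y), path)
    return best

weights = {
    ("s1", "s2"): [(["s1", "sw1", "s2"], 3.0)],
    ("s1", "s3"): [(["s1", "sw1", "sw2", "s3"], 5.0)],
    ("s2", "s3"): [(["s2", "sw2", "s3"], 2.0)],
}
w, pair, path = choose_placement(["s1", "s2", "s3"], weights)
print(pair, w)  # ('s2', 's3') 2.0
```

The exhaustive scan over server pairs mirrors the nested loops of Algorithm~\ref{algorithm2}; the pre-computed path set bounds the inner loop.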
\begin{table}
\centering
\captionsetup{font=small}
\scalebox{0.7}{
\begin{tabular}{|c|c|}
\hline
\bf{Symbol}& \bf{Description} \\
\hline
$P$ & all the routing paths between any pair of network nodes \\
\hline
$Q_s$ & local queue size of server $s$ \\
\hline
$T_s^{server}$ & local queue size threshold of server $s$\\
\hline
$S$ & all the servers in DCN \\
\hline
$S_{ava}$ & all the servers whose current queue size doesn't exceed $T_s^{server}$ \\
\hline
$P_{x, y}$ & all the routing paths between node $x$ and $y$ in DCN\\
\hline
$T_s$ & traffic threshold for port waking up \\
\hline
$T_a$ & traffic threshold for port turning into LPI state \\
\hline
$Q_{iL}$& current traffic load at port $i$ in line card $L$ \\
\hline
$\tau_{wakeup}^{port}$ & port wakeup delay \\
\hline
$\tau_{wakeup}^{LC}$ & line card wakeup delay \\
\hline
$\tau_{LPI}^{port}$ & port turning into LPI state delay \\
\hline
$\tau_{sleep}^{LC}$ & line card turning into sleep state delay \\
\hline
\end{tabular}
}
\caption{\label{algorithm1table} Notations in Popcorns-Pro\xspace Cooperative Network-Server Algorithm.}
\end{table}
\begin{algorithm}[!htb]
\SetAlgoNoLine
\scriptsize
\caption{Popcorns-Pro\xspace - Cooperative Network-Server Algorithm}
\label{algorithm2}
\KwIn{$P$, $Q_s$, $T_s$, line cards and ports power state, task dependency within a job}
\KwOut{Job placement and corresponding routing path}
\While{job $j$ consisting of task set $T^j$ arrives}
{
\For{each pair of interdependent tasks $(T^j_m, T^j_n)$ in $T^j$}
{
select $S_{ava}$ from $S$\;
\For {each pair of available servers $(x, y)$ in $S_{ava}$}
{
compute $P(x, y)$ for server pair $(x, y)$ from $P$\;
\For {each path $p$ in $P(x, y)$}
{
get power states of all the ports and line cards along $p$ from the global controller and compute the path weight $w(x, y)$ in terms of energy consumption\;
}
}
choose the least-weight path associated with corresponding server pair for task pair $(T^j_m, T^j_n)$\;
}
}
\end{algorithm}
\begin{algorithm}
\SetAlgoNoLine
\scriptsize
\caption{Popcorns-Pro\xspace - Weight assignment algorithm between two nodes}
\label{algorithm3}
\KwIn{$Link(i,j)$, $QoS$, $Link_{Capacity}$}
\KwOut{$LinkWeight(i,j)$, the weight of the edge connecting nodes $i$ and $j$}
\If{ Node $i$ is Server} {
$LinkWeight(i,j) += (ServerActivePower - CurrentSleepStatePower) * (taskSize + FlowSize / EdgeLinkBW)$;
}
\If{ Node $j$ is a Switch}
{
$LinkCapacity_{remaining} \leftarrow Link_{fullCapacity}$;
\For {$Every Flow F_{i} on Link(i,j)$ }
{
$F_{i_{remainingTime}} \leftarrow FSize_{i_{Remaining}} / FcurBW_{i}$;
\If {$F_{i_{remainingTime}} > SlowestTime$ }
{
$SlowestTime \leftarrow F_{i_{remainingTime}}$;
$SlowestFlow \leftarrow F_{i}$;
}
$LinkCapacity_{remaining} -= FcurBW_{i}$
}
$availBW \leftarrow \min(LinkCapacity_{remaining}, EdgeLinkBW)$;
\If { $availBW > MinBWforQoS$ }
{
$timeForNewFlow \leftarrow FSize_{newFlow} / availBW$;
$additionalTimeAwake \leftarrow \max(SlowestTime, timeForNewFlow)$;
$LinkWeight(i,j) += additionalTimeAwake * ActivePower_{j}$;
} \Else {
$FcurBW_{newFlow} \leftarrow FSize_{newFlow}/QoSRequiredTime$;
$FcurBW_{Slowest}' \leftarrow (FcurBW_{Slowest} - (availBW - (FcurBW_{newFLow}/NumFlowsOnLink))$;
$AdditionalTime_{SlowestFlow} \leftarrow (FcurBW_{Slowest} - FcurBW_{Slowest}')/ FlowDataRemaining_{Slowest}$;
$timeForNewFlow \leftarrow QoSRequiredTime$
$additionalTimeAwake \leftarrow \max(timeForNewFlow, AdditionalTime_{SlowestFlow})$;
$LinkWeight(i,j) += additionalTimeAwake * ActivePower_{j}$;
}
}
\end{algorithm}
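A simplified version of this link-weight computation can be written as follows (the names, units, and the fallback branch are illustrative simplifications of the algorithm above, not the simulator's API):

```python
# Sketch of the link-weight computation: the cost of routing a new
# flow over a link is (extra time the switch must stay awake) x
# (switch active power). Bandwidths are in bits/sec, sizes in bits.

def link_weight(active_power_w, flows, new_flow_bits, link_capacity_bps,
                min_bw_for_qos_bps):
    # flows: list of (remaining_bits, current_bw_bps) already on the link
    used_bw = sum(bw for _, bw in flows)
    slowest = max((bits / bw for bits, bw in flows), default=0.0)
    avail_bw = link_capacity_bps - used_bw
    if avail_bw > min_bw_for_qos_bps:
        # Enough headroom: the new flow finishes at the spare bandwidth.
        time_for_new = new_flow_bits / avail_bw
        awake = max(slowest, time_for_new)
        return awake * active_power_w
    # Not enough headroom: the link stays awake at least until the QoS
    # deadline of the new flow (simplified from the algorithm's
    # bandwidth-redistribution branch).
    qos_time = new_flow_bits / min_bw_for_qos_bps
    return max(slowest, qos_time) * active_power_w

# 10 Mbit new flow on a mostly idle 1 Gbps link behind a 100 W switch:
w = link_weight(100.0, [(5e6, 1e8)], 1e7, 1e9, 1e7)
# approximately 5.0 (0.05 s of awake time at 100 W)
```

The weight is an energy proxy, so summing it along a candidate path reproduces the path cost minimized by the CNS selection step.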
\section{Evaluation}
\label{sec:experiment}
\subsection{Experimental Setup}
Our experiments were performed using HolDCSim, an event-driven simulator~\cite{HolDCSimHolisticSimulator}
that simulates interdependent asynchronous task executions and implements
custom server job scheduling and network routing algorithms. This allows us to
simulate a large number of servers and switches relatively easily. The simulator
allows us to specify our own power model and sleep state transition mechanisms
and calculates the overall energy consumption of the simulated system. The network is
configured using a fat-tree topology as shown in Figure~\ref{figure12}. We
simulate two classes of real-world applications:
\emph{web service}, with two large parent-child tasks (service time uniformly distributed around 500 ms each), and
\emph{web search}, with a parent search task querying five subtasks that run smaller search processes (CPU service time uniformly distributed around 100 ms). All network routing policies consider a QoS threshold of 10$\times$ the total CPU time of all tasks in the job when selecting routing paths.
We assume that each task can be executed on any server machine, and that tasks may be co-located in the same server if enough processing cores are available. The communication patterns and DAGs for the two workloads are illustrated in Figures~\ref{fig:ws-graph} and~\ref{fig:wsv-graph}.
We carefully select delay timer values for the server and network sleep state transitions~\cite{yaoWASPWorkloadAdaptive2017} using a coordinate-wise search over the candidate values: we first determine the delay timer value for sleep state 1 with the lowest energy consumption, then fix that value and determine the delay timer for sleep state 2, and so on.
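This coordinate-wise search can be sketched as follows (the `energy` callback is a stand-in for a full simulation run; the names and the toy cost function are illustrative):

```python
# Sketch of the coordinate-wise timer search described above: fix the
# best delay timer for sleep state 1, then for state 2, and so on,
# holding previously chosen timers fixed. energy(timers) stands in
# for a full simulation run returning total energy consumption.

def tune_timers(candidate_values, num_states, energy):
    timers = [candidate_values[0]] * num_states
    for state in range(num_states):
        best_val, best_e = None, float("inf")
        for v in candidate_values:
            trial = timers[:]
            trial[state] = v
            e = energy(trial)
            if e < best_e:
                best_val, best_e = v, e
        timers[state] = best_val  # fix this state's timer, move on
    return timers

# Toy energy model with a known optimum at timers (20, 30):
energy = lambda t: (t[0] - 20) ** 2 + (t[1] - 30) ** 2
print(tune_timers([10, 20, 30, 40], 2, energy))  # [20, 30]
```

This greedy search evaluates $O(\text{states} \times \text{candidates})$ simulations instead of the full cross-product, at the cost of possibly missing jointly optimal combinations.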
\subsubsection{Job arrivals patterns}
\label{workloads}
We use two kinds of job arrivals in our study.
The Poisson arrival rate is given by $\lambda = (N_{servers} \cdot N_{cores} \cdot \rho)/(\overline{S}_{task} \cdot N_{tasks/job})$, where $\rho$ is the target server utilization, $\overline{S}_{task}$ is the average task size, and $N_{tasks/job}$ is the number of tasks per job. Targeting $\rho$ values of 15\%, 30\%, and 60\%, we derive the corresponding $\lambda$ values. Due to variation in the scheduling policy, task interdependency, randomized task sizes and arrival times, and sleep state transitions, the resulting server utilization may not exactly match the targeted levels.
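As a quick sanity check of the arrival-rate formula above (with illustrative parameter values, not the exact configurations used in our experiments):

```python
# lambda = (N_servers * N_cores * rho) / (avg_task_size * tasks_per_job)
# All parameter values below are illustrative.

def arrival_rate(n_servers, n_cores, rho, avg_task_size_s, tasks_per_job):
    return (n_servers * n_cores * rho) / (avg_task_size_s * tasks_per_job)

# e.g. 16 servers, 4 cores each, 15% target utilization,
# 0.5 s average task size, 2 tasks per job:
lam = arrival_rate(16, 4, 0.15, 0.5, 2)
print(lam)  # 9.6 jobs/sec
```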
\emph{Markov Modulated Poisson Process}: Similar to the workload used in~\cite{yaoWASPWorkloadAdaptive2017}, the MMPP arrivals alternate between two states: a normal phase with the same rate as the Poisson workload, and a bursty phase where the arrival rate is 1.5$\times$ the normal-phase $\lambda$. We choose the bursty period to last 10 seconds between 20-second stretches of the normal phase. The low, medium, and high load levels are obtained in the same fashion as for the basic Poisson arrival workload.
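A minimal generator for such a two-state arrival process might look as follows (a sketch: the rate change is applied at the next inter-arrival draw, a small approximation at phase boundaries):

```python
# Sketch of the two-state MMPP arrival generator: a normal phase at
# rate lam and a bursty phase at 1.5*lam, alternating 20 s / 10 s.
# Returns absolute arrival times within [0, duration] seconds.

import random

def mmpp_arrivals(lam, duration, normal_len=20.0, burst_len=10.0, seed=0):
    rng = random.Random(seed)
    t, arrivals = 0.0, []
    while t < duration:
        # Position within the 30 s cycle decides the current rate.
        in_burst = (t % (normal_len + burst_len)) >= normal_len
        rate = 1.5 * lam if in_burst else lam
        t += rng.expovariate(rate)
        if t < duration:
            arrivals.append(t)
    return arrivals

times = mmpp_arrivals(lam=15.0, duration=60.0)
```

Over 60 s this yields roughly $40 \cdot \lambda + 20 \cdot 1.5\lambda$ arrivals in expectation.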
\emph{Trace-based job arrival pattern}: We use a publicly available job arrival trace from NLANR~\cite{NationalLabApplied}, recorded on a real-world system. We chose the NLANR trace because it inherently has high variation in the job inter-arrival rate. The low, medium, and high load levels are obtained by scaling the inter-arrival times between jobs in the original trace by a constant factor.
\begin{figure} [!h]
\centering
\captionsetup{font=small}
\subfloat[Steps in a web search request.]{ \includegraphics[scale=0.34]{figures/wsearchdraw.pdf}
\label{fig:ws-figure90}
}
\subfloat[DAG of a web search job.]{ \includegraphics[scale=0.15]{figures/wsearchDAG.pdf}
\label{fig:ws-dag}
}
\caption{Web search communication patterns (left) and DAG of a web search job (right).
Task 1 is executed by the master server, while tasks 2--6 are processed by different indexers; the master server communicates with all the indexers.}
\label{fig:ws-graph}
\end{figure}
\begin{figure} [!h]
\centering
\captionsetup{font=small}
\subfloat[Steps in a web service request.]{
\includegraphics[scale=0.34]{figures/wsdraw-2.pdf}
\label{fig:wsv-figure90}
}
\subfloat[DAG of a web service job.]{
\includegraphics[scale=0.15]{figures/wsDAG.pdf}
\label{fig:wsv-dag}
}
\caption{Web service communication patterns (left) and DAG of a web service job (right).
Task 1 is executed by the application server, while task 2 is assigned to the database server.}
\label{fig:wsv-graph}
\end{figure}
\subsubsection{Quality of Service Constraints}
We consider the minimum time a job takes to complete to be the sum of the computation time of each task in the job and the time it takes for all inter-task communications to finish at the fastest possible bandwidth in the network. For instance, for a two-task job with 500 ms of CPU cost each and a data transfer taking 75 ms at the maximum link capacity between the two nodes, the minimum job execution time is \textit{500 ms $\times$ 2 + 75 ms (data transfer time) = 1075 ms}. Similar to other works (e.g.,~\cite{yaoWASPWorkloadAdaptive2017}), we set the Quality of Service (QoS) deadline for the job scheduler to 10$\times$ the minimum job execution time, which leaves enough slack to run multiple jobs and enables energy optimizations.
\subsection{Baseline policies and Evaluation methodology}
There are two major objectives: first, to show that the smart use of line card sleep states with our proposed Algorithm~\ref{algorithm1} reduces power consumption in Popcorns-Pro\xspace compared to having no power management policy on the switches at all; second, to show that our proposed Popcorns-Pro\xspace
Algorithm~\ref{algorithm2} saves further power compared to job placement algorithms that do not take both the network and server status into consideration. We consider the four combinations of the server-specific and network-specific energy optimization policies below. As shown in Section~\ref{sec:motivation}, switch sleep states without any optimization can already yield some energy savings, and it is important to optimize these savings further.
\subsubsection{Server based policies}
The two policies below represent a common server selection policy and an energy-optimized server policy.
{\textbf{Random Server allocation}-\textit{SB}}
Following a traditional server load balancing approach, tasks are assigned randomly to all servers and the processing cores as they come in, without any consideration for task dependency and core locality in the data centers.
{\textbf{Workload Adaptive using Server Low-Power State Partitioning-\textit{WASP}}} This policy represents recent works~\cite{yaoWASPWorkloadAdaptive2017} that consolidate jobs onto fewer servers while keeping more servers in the sleeping state. Such optimizations are typically developed by system designers without considering network energy optimization. In WASP, a set of servers is kept unused so that they can reach the most energy-efficient sleep state, while incoming jobs are processed on the remaining servers. Servers are reassigned between the two sets depending on the measured input load on the entire server pool.
\subsubsection{Network based policies}
We consider the following two approaches to network routing in the data center.
{\textbf{Shortest Path-\textit{SP}}} This baseline network path allocation uses Dijkstra's algorithm to forward flows between the two end points. It represents the traditional network routing approach, choosing the path with the fewest hops while also considering the QoS latency requirements of the current flow.
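With unit edge weights, this baseline reduces to hop-count Dijkstra; a minimal pure-Python sketch (the topology and node names are illustrative):

```python
# Hop-count Dijkstra, as used by the SP baseline. With unit edge
# weights this is equivalent to BFS; heapq keeps it general in case
# energy-based weights were substituted instead.

import heapq

def shortest_path(adj, src, dst):
    # adj: {node: [(neighbor, weight), ...]}
    dist, prev = {src: 0}, {}
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    # Walk predecessors back from the destination.
    path, node = [], dst
    while node != src:
        path.append(node)
        node = prev[node]
    return [src] + path[::-1]

# Illustrative server -> edge switch -> aggregation switch topology:
adj = {"s1": [("e1", 1)], "e1": [("a1", 1), ("s2", 1)],
       "a1": [("e2", 1)], "e2": [("s3", 1)]}
print(shortest_path(adj, "s1", "s3"))  # ['s1', 'e1', 'a1', 'e2', 's3']
```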
{\textbf{Elastic tree-\textit{ET}}} Recent work on improving network energy efficiency has considered various heuristics to consolidate flows onto fewer switches. The ElasticTree~\cite{hellerElasticTreeSavingEnergy2010} baseline represents one of the most commonly cited works in this area; it tries to keep fewer switches in the active state in highly redundant network topologies such as the fat tree. We implement the greedy bin-packing approach in our experiment, where the heuristic chooses the leftmost path at every switch exit to reach the destination. These approaches assume a fixed server task allocation and hence cannot fully optimize for efficiency.
\input{baselines}
\begin{comment}
\begin{algorithm}
\SetAlgoNoLine
\caption{Server-Balanced Algortihm}
\label{algorithm3}
\KwIn{Pre-computed Dijkstra's shortest paths set $DijkstraPaths$ between any pair of network nodes, number of sevres $N$ in DCN }
\KwOut{Job placement and corresponding routing path}
\While{task $t_i$ with task id $i$ of job $j$ arrives}
{
place $t_i$ on server $i\%N$;
then the interdependent task $t_k$ of $t_i$ with task id $k$ will be placed on server $k\%N$;
get the routing path $P$ from server $i\%N$ to server $k\%N$ from $DijkstraPaths$;
return server $i\%N$ and path $P$;
}
\end{algorithm}
\end{comment}
In simulation, we set the flow size to 100 KB, and the average job service time is randomly generated between 150 ms and 200 ms based on prior studies~\cite{meisnerPowerNapEliminatingServer}.
\begin{figure*}[!h]
\centering
\captionsetup{font=small}
\subfloat[Low Arrival Rate \newline $\lambda$= 8 Jobs/Sec]{
\includegraphics[width=0.33\textwidth]{figures/evaluation/service/W=mmpp_Q=100_FS=100_C=1_U=10.png}
\label{djikstra_h_random}
}
\subfloat[Medium Arrival Rate \newline $\lambda$ = 15 Jobs/Sec]{
\includegraphics[width=0.33\textwidth]{figures/evaluation/service/W=mmpp_Q=100_FS=100_C=1_U=25.png}
\label{elastic_tree_h_random}
}
\subfloat[High Arrival rate \newline $\lambda$ = 30 Jobs/Sec ]{
\includegraphics[width=0.33\textwidth]{figures/evaluation/service/W=mmpp_Q=100_FS=100_C=1_U=40.png}
\label{elastic_tree_h_wasp}
}
\caption{Energy Savings comparison for different server and network path selection algorithms with MMPP based Job Arrival Pattern for service based 2 task model. Every 30 seconds we have 10-second bursts with 1.5x Arrival rate.}
\label{fig:mmpp-ser}
\end{figure*}
\begin{figure*}[!h]
\centering
\captionsetup{font=small}
\subfloat[Low Arrival Rate \newline Avg. Rate = 10 Jobs/Sec ]{
\includegraphics[width=0.33\textwidth]{figures/evaluation/service/W=trace_Q=10_FS=100_C=4_U=10.png}
\label{djikstra_h_random}
}
\subfloat[Medium Arrival Rate \newline Avg. Rate = 16 Jobs/Sec]{
\includegraphics[width=0.33\textwidth]{figures/evaluation/service/W=trace_Q=10_FS=100_C=4_U=25.png}
\label{elastic_tree_h_random}
}
\subfloat[High Arrival rate \newline Avg. Rate= 32 Jobs/Sec]{
\includegraphics[width=0.33\textwidth]{figures/evaluation/service/W=trace_Q=10_FS=100_C=4_U=55.png}
\label{elastic_tree_h_wasp}
}
\caption{Energy Savings comparison for different server and network path selection algorithms with NLANR trace based Job Arrival Pattern for service based 2 task model.}
\label{fig:trace-ser}
\end{figure*}
\begin{figure*}[!h]
\centering
\captionsetup{font=small}
\subfloat[Low Arrival Rate \newline $\lambda$=8 Jobs/Sec]{
\includegraphics[width=0.33\textwidth]{figures/evaluation/search/W=mmpp_Q=10_FS=10_C=4_U=15.png}
\label{djikstra_h_random}
}
\subfloat[Medium Arrival Rate \newline $\lambda$=15 Jobs/Sec ]{
\includegraphics[width=0.33\textwidth]{figures/evaluation/search/W=mmpp_Q=10_FS=10_C=4_U=35.png}
\label{elastic_tree_h_random}
}
\subfloat[High Arrival Rate \newline $\lambda$=25 Jobs/Sec]{
\includegraphics[width=0.33\textwidth]{figures/evaluation/search/W=mmpp_Q=10_FS=10_C=4_U=55.png}
\label{elastic_tree_h_wasp}
}
\caption{Energy Savings comparison for different server and network path selection algorithms with MMPP based Job Arrival Pattern for search based 6-task job model. Every 30 seconds we have 10-second bursts with 1.5x Average Arrival rate.}
\label{fig:mmpp-search}
\end{figure*}
\begin{figure*}[!h]
\centering
\captionsetup{font=small}
\subfloat[Low Arrival Rate \newline Avg. Rate = 9 Jobs/Sec]{
\includegraphics[width=0.33\textwidth]{figures/evaluation/search/W=trace_Q=10_FS=10_C=4_U=15.png}
\label{djikstra_h_random}
}
\subfloat[Medium Arrival Rate \newline Avg. Rate = 15 Jobs/Sec]{
\includegraphics[width=0.33\textwidth]{figures/evaluation/search/W=trace_Q=10_FS=10_C=4_U=25.png}
\label{elastic_tree_h_random}
}
\subfloat[High Arrival Rate \newline Avg. Rate = 30 Jobs/Sec]{
\includegraphics[width=0.33\textwidth]{figures/evaluation/search/W=trace_Q=10_FS=10_C=4_U=50.png}
\label{elastic_tree_h_wasp}
}
\caption{Energy Savings comparison for different server and network path
selection algorithms with NLANR trace based Job Arrival Pattern for search
based 6-task job model.}
\label{fig:trace-search}
\end{figure*}
\subsection{Evaluation Results}
\subsubsection{Energy Savings Comparison}
We compare the average energy consumption per job for different combinations of
network and server selection algorithms against Popcorns-Pro\xspace's selection
algorithm. The baselines combine the WASP and random server allocation policies
with the ElasticTree and Shortest Path network selection algorithms. As shown in
Figure~\ref{fig:mmpp-ser}, for the 2-task service-based job model with random
arrivals, the Popcorns-Pro\xspace algorithm provides significant savings compared
to Shortest-Path-based policies. This is attributed to fewer switches and
servers being active. For the ElasticTree-based runs in
Figure~\ref{fig:mmpp-ser}, using WASP as the server allocation policy provides
more opportunities for network consolidation and hence narrows the savings gap.
As we increase the arrival rate, the ElasticTree-based runs perform closer to
Popcorns-Pro\xspace, as there are fewer opportunities for savings. We note that
due to the bursty nature of the MMPP arrivals (as discussed in
Section~\ref{workloads}), the increased delay in waking up additional switches
leads to slower flow transmission and increased energy consumption. We gather
that server-only optimizations are not always optimal when considering the whole
data center's energy consumption.
With NLANR trace-based job arrivals and the search-based workload discussed at the beginning of Section~\ref{sec:experiment}, the Popcorns-Pro\xspace algorithm still yields significant savings.
We also see the WASP-based server optimization consistently performing better than random server allocation. This is attributed to the smaller communication size between each parent and child task and the minimal time wasted waiting for network data to arrive for processing.
\subsection{Flow size sensitivity}
The goal of this experiment is to determine whether the routing path and server
selection of the Popcorns-Pro\xspace algorithm are resilient to increases in
network flow sizes, and thereby to increased network utilization. As we see in
Figure~\ref{fig:flows}, the Popcorns-Pro\xspace policy is able to select paths
and servers that are least affected by the increase in flow sizes.
Shortest-Path-based policies use the largest number of switches, and with
increasing flow sizes there are fewer chances for switches to go to sleep, so
these policies consume the most energy. ElasticTree's greedy bin packing of
flows benefits from some network consolidation, but its non-optimal server
selection leads to longer flow communication times, and its energy consumption
grows rapidly.
\begin{figure}[h]
\captionsetup{font=small}
\centering\includegraphics[width=.85\linewidth]{figures/flowsize/W=user_specify_U=11_Q=10.png}
\caption{Energy consumption/Job for various flow sizes for different server
scheduling and network routing algorithms. The experiment involves a 1024
server fat-Tree topology with WebService jobs arriving randomly with a Poisson random
variable value $\lambda$ = 25 jobs/sec and 10X QoS threshold.}
\label{fig:flows}
\end{figure}
\subsection{Job completion latency Comparison}
\begin{figure}[!h]
\captionsetup{font=small}
\includegraphics[width=1\linewidth]{figures/cdf/cdf33.png}\caption{Cumulative
distribution function (CDF) of job latencies for different server scheduling and
network routing algorithms. The experiment involves a 1024-server fat-tree
topology with WebService-modeled jobs arriving randomly with a Poisson rate
$\lambda$ = 25 jobs/sec and a QoS threshold of 10X.}
\label{cdf}
\end{figure}
Although all the algorithms discussed consider the QoS latency threshold of
10$\times$ the sum of the task sizes of all tasks in the job when selecting
network routing paths, Figure~\ref{cdf} shows that jobs under the
Popcorns-Pro\xspace algorithm complete well below the threshold. Comparing the
WASP-based policies with the server-balanced random selection scheme, WASP
utilizes fewer servers, so transmission times are higher, resulting in higher
job latencies.
\begin{figure} []
\captionsetup{font=small}
\includegraphics[width=1\linewidth]{figures/energy_distribution/popcorns.pdf}
\caption{Component-wise energy distribution for Popcorns in Joules for 500
second execution of 1024 server Fat-Tree topology with WebService model jobs arriving randomly (Poisson distribution) with $\lambda$ = 25 jobs per second. The energy consumed at different sleep states of the switch is also shown. }
\label{fig:Popcorn-component}
\end{figure}
\begin{figure} []
\captionsetup{font=small}
\centering\includegraphics[width=1\linewidth]{figures/energy_distribution/wasp-et.pdf}\caption{Component-wise
energy distribution for WASP server scheduling policy with Elastic tree
as network routing algorithm in Joules for 500 second execution of 1024
server Fat-Tree topology with WebService model jobs arriving randomly (Poisson distribution) with $\lambda$=25 jobs per second. The energy consumed at different sleep states of the switch is also shown.}
\label{fig:WASP-component}
\end{figure}
\subsection{Energy Distribution}
In this section we illustrate how system architects can use our simulated
settings and power model to understand which hardware component consumes the
most energy. Figure~\ref{fig:Popcorn-component} shows the energy consumed by
each component of the switch power model at each switch sleep state for a
typical workload. This allows system architects to focus their limited
resources on optimizing a particular part of the system architecture. For
instance, comparing Figures~\ref{fig:Popcorn-component}
and~\ref{fig:WASP-component}, we see that most energy is consumed by the
network processor in its deepest sleep state. The figures also show that a
high-cost design change for sleep state S1 has more benefit under the Popcorns
policy than under the WASP and ElasticTree policies.
\begin{comment}
\subsubsection{Real traces}
{\color{blue}Results from real traces}
\subsubsection{DNS Service}
Figure~\ref{figure100} and Figure~\ref{figure99} show the process of DNS service and its corresponding job DAG. In simulation, we set the flow size to be 100MB and the average job service time is randomly generated between 150ms and 200ms based on prior studies~\cite{meisner2009powernap}~\cite{he2013next}. Since our experiment is based on multi-task job, we define the service time as average task size (e.g. 75ms$\sim$100ms for a DNS task based on job DAG), and the system utilization rate is defined as \emph{service time}$*$\emph{arrival rate}.
\begin{figure*}
\centering
\captionsetup{font=small}
\subfloat[DNS average power]{
\includegraphics[scale=0.1]{figures/powerdns.pdf}
\label{fig:dnspower}
}
\subfloat[DNS latency, Utilization=0.1]{
\includegraphics[scale=0.1]{figures/dnsp1-0.pdf}
\label{fig:dns_pp_0.1_90th}
}
\subfloat[DNS latency, Utilization=0.3]{
\includegraphics[scale=0.1]{figures/dnsp3-0.pdf}
\label{fig:dns_pp_0.3_90th}
}
\subfloat[DNS latency, Utilization=0.6]{
\includegraphics[scale=0.095]{figures/dnsp6-0.pdf}
\label{fig:dns_pp_0.6_90th}
}
\caption{DCN executes 2000 Poisson-arrival DNS jobs under low, medium, and high system utilization respectively. Suffix $-CNS$ represents Cooperative Network-Server Algorithm and $-SB$ denotes Server-Balanced Algorithm.}
\label{fig:dns-exp}
\end{figure*}
Figure~\ref{fig:dnspower} shows the power saving corresponding to the execution of 2000 DNS jobs, compared to the baseline policies. In our experiment setup, if all the line cards and ports remain active, the switch power is 398.76$W$. We can see that for each configuration, the avaerage power of Popcorns is 233$\sim$312$W$, namely Popcorns is able to achieve approximately 18\%$\sim$39\% network power savings, while the baseline policy also saves 20\%$\sim$34\% network power with smart use of both line card and port low power state, compared to no power management on line cards and ports at all.
Additionally, we can see that on the server side, using the Cooperative Network-Server Algorithm~\ref{algorithm2}, Popcorns further consumes approximately 19\%$\sim$22\% less power than the Server-Balanced Algorithm. There is no power saving on the servers for the latter algorithm, since all the servers are kept active, the power of which is 92$W$~\ref{Serverpowermodel}. In this sense, we can conclude that intelligently scheduling tasks based on their interdependence and system status, together with the use of server C6 package sleep state further saves more power.
%
Figure~\ref{fig:dns_pp_0.1_90th}, ~\ref{fig:dns_pp_0.3_90th}, and ~\ref{fig:dns_pp_0.6_90th} show the CDF of job latency in DNS Service experiment. We can see that the $90^{th}$ percentile job latency is the best under low system utilization, and gradually increases as utilization increases. Prior study~\cite{yao2017wasp} shows that low system utilization is a more practical case in real scenarios. Thus, we note that our proposed power management policy together with job placement algorithm saves a large portion of power in DCN while maintaining the QoS demands.
\subsubsection{Web Search}
Figure~\ref{fig:ws-graph} shows the steps in processing a web search request and its corresponding job DAG. In simulation, we set the flow size to be 100MB, and average job service time is randomly generated between 20ms and 60ms based on prior studies~\cite{alizadeh2013pfabric}~\cite{hadjilambrou2015characterization}. Thus, the average web search task size is 10ms$\sim$30ms in our job model.
\begin{figure*}
\centering
\captionsetup{font=small}
\subfloat[Web search average power]{
\centering\includegraphics[scale=0.1]{figures/powerwsearch.pdf}
\label{fig:wsearchpower}
}
\subfloat[b][Web search latency, Utilization=0.1]{
\includegraphics[scale=0.1]{figures/wsearchp1-0.pdf}
\label{fig:ws.1_90th}
}
\subfloat[b][Web search latency, Utilization=0.3]{
\includegraphics[scale=0.1]{figures/wsearchp3-0.pdf}
\label{fig:ws_pp_0.3_90th}
}
\subfloat[b][Web search latency, Utilization=0.6]{
\includegraphics[scale=0.095]{figures/wsearchp6-0.pdf}
\label{fig:ws_pp_0.6_90th}
}
\caption{DCN executes 2000 Poisson-arrival web search jobs under low, medium, and high system utilization respectively. Suffix $-CNS$ represents Cooperative Network-Server Algorithm and $-SB$ denotes Server-Balanced Algorithm.}
\label{fig:ws-exp}
\end{figure*}
\begin{figure*}
\centering
\captionsetup{font=small}
\subfloat[Web service average power]{
\centering\includegraphics[scale=0.1]{figures/powerws.pdf}
\label{fig:wsvpower}
}
\subfloat[b][Web service latency, Utilization=0.1]{
\includegraphics[scale=0.1]{figures/wsp1-0.pdf}
\label{fig:wsv_pp_0.1_90th}
}
\subfloat[b][Web service latency, Utilization=0.3]{
\includegraphics[scale=0.1]{figures/wsp3-0.pdf}
\label{fig:wsv_pp_0.3_90th}
}
\subfloat[b][Web service latency, Utilization=0.6]{
\includegraphics[scale=0.095]{figures/wsp6-0.pdf}
\label{fig:wsv_pp_0.6_90th}
}
\caption{DCN executes 2000 Poisson-arrival web service jobs under low, medium, and high system utilization respectively. Suffix $-CNS$ represents Cooperative Network-Server Algorithm and $-SB$ denotes Server-Balanced Algorithm.}
\label{fig:wsv-exp}
\end{figure*}
Figure~\ref{fig:wsearchpower} shows the power savings for the execution of 2000 web search jobs relative to the baseline policies. For each configuration, Popcorns achieves $\sim$42\% network power savings under high system utilization, and even under low utilization it achieves $\sim$16\% network power savings. For the Server-Balanced case, the numbers are 33\% and 18\%, respectively, which demonstrates the benefit of smart use of the low-power states of switch components.
Moreover, on the server side, Popcorns saves approximately 13\% more power than the Server-Balanced Algorithm, which yields no server-side power savings. The results are consistent with the previous DNS experiment.
%
Figures~\ref{fig:ws.1_90th}, \ref{fig:ws_pp_0.3_90th}, and \ref{fig:ws_pp_0.6_90th} show the CDF of job latency in the web search experiment. The $90^{th}$-percentile job latency is again lowest under low system utilization and gradually increases as utilization rises.
%
\subsubsection{Web Service}
Figure~\ref{fig:wsv-graph} shows the processing of a web service request and its corresponding job DAG. In simulation, we set the flow size to 100MB, and the average job service time is randomly generated between 2ms and 10ms based on prior studies~\cite{wu2013adaptive,meisner2009powernap}. Therefore, the average web service task size is 1ms$\sim$5ms in our job model.
Figure~\ref{fig:wsv-exp} shows the power consumption for the execution of 2000 web service jobs, which further reinforces the conclusions of the previous experiments and the advantage of our proposed algorithms. For a DCN executing small jobs such as \emph{web service}, smart use of the server sleep state also saves $\sim$7\% power, and both configurations achieve $\sim$27\% network power savings under all three system utilization levels.
\end{comment}
\section*{Acknowledgment}
\label{ack}
This material is based upon work supported in part by the National Science Foundation
under CAREER Award CCF-1149557 and CNS-1320226.
\section{Solution Approach}
\label{sec:algorithms}
Modeling this joint power optimization problem in a DCN as an Integer Linear Program (ILP) is one solution, and optimization tools such as MathProg can provide a near-optimal result. However, the computational complexity increases exponentially with the number of servers and switches~\cite{wang2012carpo}. In a typical data center with tens of thousands of servers and hundreds of switches, solving the optimization problem directly is computationally prohibitive. We therefore propose a computationally efficient heuristic algorithm in this section.
\begin{comment}
In this section, we first present an Integer Linear Programming (ILP) formulation for the joint optimization problem in details. We then propose heuristic algorithms for solving the optimization problem efficiently.
\subsection{ILP formulation}
In the ILP formulation, the input parameters are shown in Table~\ref{table2:notation}.
\begin{table}
\caption{\label{table2:notation} Notations in ILP formulation}
\scalebox{0.5}{
\begin{tabular}{|c|c|c|}
\hline
\bf{Symbol}& \bf{Meaning} \\
\hline
$v$ & an arbitrary network node \\
\hline
$s$ & an arbitrary server \\
\hline
$V$ & set of network nodes (e.g. servers, switches) \\
\hline
$S$ & set of servers \\
\hline
$j$ & an arbitrary job \\
\hline
$L$ & set of links in the network \\
\hline
$P_{m,n}^{linecard}$ & power consumption of line card $m$ in switch $n$ \\
\hline
$P_{n}^{base}$ & power consumption of all parts other than line cards in switch $n$ \\
\hline
$N^{switch}$ & number of switches \\
\hline
$P_{c,s}^{core}$ & power consumption of core $c$ in server $s$ \\
\hline
$P_{s}^{base}$ & power consumption of all parts other than cores in server $s$ \\
\hline
$N_{s}^{core}$ & number of cores in server $s$ \\
\hline
$N^{server}$ & number of servers \\
\hline
$T^{j}$ & set of tasks in job $j$ \\
\hline
$t^{j}$ & an arbitrary task in job $j$, $t^{j}\in{T^{j}}$ \\
\hline
$N^{job}$ & number of jobs \\
\hline
$B_{r,t}^{j}$ & bandwidth requirement between task $r$ and task $t$ of job $j$, task $r$ is in server $s_r$, task $t$ is in server $s_t$ \\
\hline
$l_{n_1,n_2}$ & link between node $n_1$ and $n_2$, $l_{n_1,n_2}$ $\in$ L \\
\hline
$C_{l_{n_1,n_2}}$ & capacity of link $l_{n_1,n_2}$ \\
\hline
$L_{m, n}$ & set of links connected to line card $m$ in switch $n$ \\
\hline
$f_{l_{n_1,n_2}}$ & number of flows on link $l_{n_1,n_2}$ \\
\hline
$l_{p,s_1,q,s_2}$ & link between line card $p$ in switch $s_1$ and line card $q$ in switch $s_2$ \\
\hline
$l_{s,d}$ & an arbitrary path from node $s$ to node $d$ \\
\hline
$f_{l_{n_1,n_2}}$ & set of flows on link $l_{n_1,n_2}$ \\
\hline
$d^{l_{n_1,n_2}}_f$ & data rate of flow $f$ on link $l_{n_1,n_2}$ \\
\hline
\end{tabular}}
\end{table}
\begin{center}
\scalebox{0.8}{
Objective: ${Minimize(P^{switches}+P^{servers})}$}
\scalebox{0.8}{
$=\min{\sum_{n=1}^{N^{switch}}(P^{base}_n+\sum_{m=1}^{N_n^{linecard}}P_{m,n}^{linecard}x_m^n) +\sum_{s=1}^{N^{server}}(P_s^{base}+\sum_{c=1}^{N_s^{core}}P_{c,s}^{core}y_c^s)}$}
\end{center}
Decision Variables:
a)
\begin{equation*}
a^{j}_{t, c, s}=
\begin{cases}
1, \quad\makebox{if task } t \makebox{ of job } j \makebox{ is assigned to core } c \makebox{ in server } s \makebox{;} \\
0, \quad\makebox{otherwise }
\end{cases}
\end{equation*}
b)
\begin{equation*}
z^{j}_{t, r, s}=
\begin{cases}
1, \quad\makebox{if task } t \makebox{ and } r \makebox{ of job } j \makebox{ are assigned to server } s \makebox{;} \\
0, \quad\makebox{otherwise }
\end{cases}
\end{equation*}
c)
\begin{equation*}
y^{s}_{c}=
\begin{cases}
1, \quad\makebox{if core } c \makebox{ in server } s \makebox{ is operating; } \\
0, \quad\makebox{otherwise }
\end{cases}
\end{equation*}
d)
\begin{equation*}
x^{n}_{m}=
\begin{cases}
1, \quad\makebox{if line card } m \makebox{ in switch } n \makebox{ is active; } \\
0, \quad\makebox{otherwise }
\end{cases}
\end{equation*}
e)
\begin{equation*}
\lambda^{l}_{p_{s,d}}=
\begin{cases}
1, \quad\makebox{if link } l \makebox{ is on path } p_{s,d} \makebox{;} \\
0, \quad\makebox{otherwise }
\end{cases}
\end{equation*}
Constraints:
a) Each task of each job can be assigned to only one core. For all $j, t$
\begin{equation*}
\sum_{s=1}^{N^{server}} \sum_{c=1}^{N_s^{core}} a^{j}_{t, c, s} = 1
\end{equation*}
b) The number of tasks being executed on a server cannot exceed the number of its cores. For all $s$
\begin{equation*}
\sum_{j=1}^{N^{job}} \sum_{t\in{T^j}} \sum_{c=1}^{N_s^{core}} a^{j}_{t, c, s} \leq N_s^{core}
\end{equation*}
c) Constraints to find the value of $z^{j}_{t, r, s}$. For all $j, t, r, s$
\begin{equation*}
\sum_{c=1}^{N^{core}_s}a^{j}_{t, c, s} + \sum_{c=1}^{N^{core}_s}a^{j}_{r, c, s} \leq 1 + z^{j}_{t, r, s}
\end{equation*}
\begin{equation*}
\sum_{c=1}^{N^{core}_s}a^{j}_{t, c, s} + \sum_{c=1}^{N^{core}_s}a^{j}_{r, c, s} \geq 2z^{j}_{t, r, s}
\end{equation*}
d) If link $l_{n_1,n_2}$ is on path $p_{s,d}$, then each of its endpoints other than $s$ and $d$ must have a second incident link on the path. Letting $L(v)$ denote the set of links incident to node $v$, for $n_1 \notin \{s, d\}$ (and symmetrically for $n_2$)
\begin{equation*}
\sum_{l\in{L(n_1)}} \lambda^{l}_{p_{s,d}} \geq 2\lambda^{l_{n_1,n_2}}_{p_{s,d}}
\end{equation*}
e) If link $l_{s_1,s_2}$ is on the task communication path $p_{s,d}$, then the corresponding line cards in $s_1$ and $s_2$ must be active and the link between them is also on $p_{s,d}$. For $l_{p,s_1,q,s_2}$ $\in$ $L_{p,s_1}$, $l_{p,s_1,q,s_2}$ $\in$ $L_{q,s_2}$
\begin{equation*}
\lambda^{l_{p,s_1,q,s_2}}_{p_{s,d}} = \lambda^{l_{s_1,s_2}}_{p_{s,d}} = x^{s_1}_{p} = x^{s_2}_{q}
\end{equation*}
\begin{equation*}
x^{s_1}_{p}\lambda^{l_{p,s_1,q,s_2}}_{p_{s,d}} \geq \lambda^{l_{p,s_1,q,s_2}}_{p_{s,d}}
\end{equation*}
\begin{equation*}
x^{s_2}_{q}\lambda^{l_{p,s_1,q,s_2}}_{p_{s,d}} \geq \lambda^{l_{p,s_1,q,s_2}}_{p_{s,d}}
\end{equation*}
f) Line card $m$ in switch $n$ is off only when all the links connected to it are not on any path.
\begin{equation*}
\left| L_{m,n} \right| x^{n}_{m} \geq \sum_{l\in{L_{m,n}}}\lambda^{l}_{p_{s,d}}
\end{equation*}
g) Total flow rate on a link cannot exceed the link capacity.
\begin{equation*}
\sum_{f\in{f_{l_{n_1,n_2}}}} d^{l_{n_1,n_2}}_f \leq C_{l_{n_1,n_2}}
\end{equation*}
\end{comment}
\subsection{Heuristic Algorithms}
In this section, we first present a power management algorithm for line card and port power state transitions and a simple power state transition algorithm for servers, and then propose a joint job placement and network routing algorithm that solves the optimization problem efficiently.
\subsubsection{Power State Transition Algorithm}
\begin{table}
\centering
\captionsetup{font=small}
\begin{tabular}{|c|c|}
\hline
\bf{Symbol}& \bf{Description} \\
\hline
$T_s$ & traffic threshold for port waking up \\
\hline
$T_a$ & traffic threshold for port turning into LPI state \\
\hline
$Q_{iL}$& current traffic load at port $i$ in line card $L$ \\
\hline
$\tau_{wakeup}^{port}$ & port wakeup delay \\
\hline
$\tau_{wakeup}^{LC}$ & line card wakeup delay \\
\hline
$\tau_{LPI}^{port}$ & port turning into LPI state delay \\
\hline
$\tau_{sleep}^{LC}$ & line card turning into sleep state delay \\
\hline
\end{tabular}
\caption{\label{algorithm1table} Notations in PopCorns power state transition algorithm.}
\end{table}
Not all switches in a DCN need to be active all the time; if we intelligently control the transitions between active and low-power states for ports and line cards, data center network power can be saved. We therefore propose the Power State Transition Algorithm~\ref{algorithm1} to implement network power management. The notations are elaborated in Table~\ref{algorithm1table}, and an overview of our approach is shown in Figure~\ref{figure3}. We assume a global controller that keeps a record of all line card, port, and server statuses, including their power states, queue sizes, etc. The global controller monitors the current traffic load (the number of pending flows or packets) at each port and decides whether the current line card power state should change; it then decides, as feedback, whether the port state should change. In our design, we couple the line card and port power states and their transitions: if a line card is in the sleep or off state, then all of its ports are in the LPI or off state; if a line card is active, then its ports can be in the LPI or active state, and LPI ports can be woken up to become active in a short time.
\begin{figure}
\captionsetup{font=small}
\centering\includegraphics[width=0.5\textwidth]{figures/Presentation2.pdf}
\caption{Power state transition overview.}
\label{figure3}
\end{figure}
\begin{algorithm}
\SetAlgoNoLine
\caption{Power State Transition Algorithm}
\label{algorithm1}
\KwIn{$T_s$, $T_a$, $Q_{iL}$}
\KwOut{Line card and port state transition}
Initialization: All line cards are in sleep state, all ports are in LPI state\;
\While{there are jobs to be executed}
{
\If{a flow arrives at port $i$ of line card $L$ at time $t$}
{
\If{$L$ is in sleep state} {
\If{$Q_{iL} > T_s$} {
$L$ begins waking up from sleep state\;
$i$ begins waking up from LPI state\;
}
\Else
{
$L$ begins waking up after $\tau_{wakeup}^{LC}$\;
$i$ begins waking up after $\tau_{wakeup}^{port}$\;
}
}
}
\If{a flow is transmitted from port $i$ of line card $L$ at time $t$} {
\If{$Q_{iL} < T_a$} {
$i$ starts transition to LPI after $\tau_{LPI}^{port}$\;
\If{after $\tau_{LPI}^{port}$, all the ports of $L$ are in LPI state} {
$L$ starts transition to sleep state after $\tau_{sleep}^{LC}$\;
}
}
}
}
\end{algorithm}
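The threshold-driven transitions of Algorithm~\ref{algorithm1} can be sketched as an event-driven state machine. This is a minimal illustration, not the simulator's actual implementation: the wakeup and sleep delays ($\tau_{wakeup}^{LC}$, $\tau_{wakeup}^{port}$, etc.) are noted in comments rather than simulated, and all class and method names are illustrative.

```python
class Port:
    def __init__(self):
        self.state = "LPI"          # LPI or ACTIVE
        self.queue = 0              # pending flows at this port (Q_iL)

class LineCard:
    def __init__(self, n_ports):
        self.state = "SLEEP"        # SLEEP or ACTIVE
        self.ports = [Port() for _ in range(n_ports)]

    def on_flow_arrival(self, i, T_s):
        """Wake the card (and port i) once the port queue exceeds T_s."""
        port = self.ports[i]
        port.queue += 1
        if self.state == "SLEEP":
            if port.queue > T_s:
                self.state = "ACTIVE"   # usable after tau_wakeup_LC
                port.state = "ACTIVE"   # usable after tau_wakeup_port
        elif port.state == "LPI":
            port.state = "ACTIVE"       # fast LPI exit on an active card

    def on_flow_departure(self, i, T_a):
        """Drain the queue; idle ports drop to LPI, an all-LPI card sleeps."""
        port = self.ports[i]
        port.queue = max(0, port.queue - 1)
        if port.queue < T_a:
            port.state = "LPI"          # after tau_LPI_port
        if all(p.state == "LPI" for p in self.ports):
            self.state = "SLEEP"        # after tau_sleep_LC
```

A two-port card woken by a burst of arrivals returns to sleep once both port queues drain below the threshold.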
\subsubsection{Server State Transition Algorithm}
As mentioned in Section~\ref{sec:serverpower}, we model two power states for each server: active and the C6 sleep state. Algorithm~\ref{algorithm4} describes our policy for managing server power states. We assume that each server in the DCN has a local queue to buffer tasks, and tasks are first dispatched to active servers whose queue sizes do not exceed a threshold; this threshold controls the job latency.
\begin{algorithm}
\SetAlgoNoLine
\caption{Server State Transition Algorithm}
\label{algorithm4}
\KwIn{ server wakeup latency $\tau_{wakeup}^{server}$, local queue of server $S$}
\KwOut{Server power state transition}
\If{a task arrives at a server $S$}
{
\If{$S$ is in sleep state} {
$S$ begins waking up after wakeup latency $\tau_{wakeup}^{server}$\;
}
\Else {
task is put in the local queue of $S$\;
}
}
\If{a task is finished executing at a server $S$}
{
\If{local queue of $S$ is empty} {
$S$ begins turning into sleep state\;
}
}
\end{algorithm}
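The two-state server model of Algorithm~\ref{algorithm4} amounts to a small state machine over the local queue. The sketch below is illustrative (the wakeup latency $\tau_{wakeup}^{server}$ is noted in a comment rather than simulated, and the head of the queue stands in for the running task):

```python
from collections import deque

class Server:
    """Minimal sketch of the two-state server model (active / C6 sleep)."""
    def __init__(self):
        self.state = "C6_SLEEP"
        self.queue = deque()            # local task queue

    def on_task_arrival(self, task):
        if self.state == "C6_SLEEP":
            self.state = "ACTIVE"       # usable after tau_wakeup_server
        self.queue.append(task)         # head of the queue is the running task

    def on_task_finish(self):
        self.queue.popleft()
        if not self.queue:              # empty queue: drop back into C6 sleep
            self.state = "C6_SLEEP"
```

A sleeping server thus wakes on the first arrival and sleeps again as soon as its local queue empties.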
\subsubsection{Cooperative Network-Server Algorithm}
The main idea of our algorithm is to jointly consider the status of the server pool and the network before assigning jobs. More specifically, for a job consisting of several pairs of interdependent tasks, if we place task pairs based on their interdependence and choose the core pair with the minimum routing cost, instead of randomly placing them on available cores without awareness of the communication requirements between tasks, then the placement together with its corresponding routing path is optimal. In the context of this paper, the network routing cost is energy consumption, since every line card on the chosen routing path must be active, or it will have to be woken up.
Based on this idea, we propose the Cooperative Network-Server (CNS) Algorithm~\ref{algorithm2}. The notations are elaborated in Table~\ref{algorithm2table}. In the initial stage, we compute and store all possible routing paths between every pair of network nodes. As mentioned in the previous section, a global controller keeps track of all line card, port, and server statuses. Thus, when a job consisting of a set of interdependent tasks arrives, we first check the server side and select all server pairs that are in the active power state and whose local queue sizes do not exceed a threshold. If no server satisfies these requirements, then servers with full local queues and servers in the C6 sleep state are selected. Note that if a task is assigned to a server in the sleep state, the server is woken immediately and enters the active state after a wakeup latency. For each server pair, we select possible routing paths between them from the precomputed routing path set. Along each path, line cards can be active, sleeping, or off, and ports can be active, in LPI, or off; but if we assign a path to the task pair, all the line cards and ports along it must become active. In other words, we need to wake up the inactive line cards and ports on the chosen path, which incurs extra power consumption and wakeup latency. Based on this, we compute and assign a weight to each possible routing path, measuring the number of line cards and ports that must be active, and choose the server pair with the minimum routing cost. The Cooperative Network-Server Algorithm then outputs the least-weight path and its corresponding server pair for each task pair.
\begin{table}
\centering
\captionsetup{font=small}
\scalebox{0.7}{
\begin{tabular}{|c|c|}
\hline
\bf{Symbol}& \bf{Description} \\
\hline
$P$ & all the routing paths between any pair of network nodes \\
\hline
$Q_s$ & local queue size of server $s$ \\
\hline
$T_s^{server}$ & local queue size threshold of server $s$\\
\hline
$S$ & all the servers in DCN \\
\hline
$S_{ava}$ & all the servers whose current queue size does not exceed $T_s^{server}$ \\
\hline
$P_{x, y}$ & all the routing paths between node $x$ and $y$ in DCN\\
\hline
\end{tabular}
}
\caption{\label{algorithm2table} Notations in PopCorns Cooperative Network-Server Algorithm.}
\end{table}
\begin{algorithm}
\SetAlgoNoLine
\caption{Cooperative Network-Server Algorithm}
\label{algorithm2}
\KwIn{$P$, $Q_s$, $T_s$, line cards and ports power state, task dependency within a job}
\KwOut{Job placement and corresponding routing path}
\While{job $j$ consisting of task set $T^j$ arrives}
{
\For{each pair of interdependent tasks $(T^j_m, T^j_n)$ in $T^j$}
{
select $S_{ava}$ from $S$\;
\For {each pair of available servers $(x, y)$ in $S_{ava}$}
{
compute $P_{x, y}$ for server pair $(x, y)$ from $P$\;
\For {each path $p$ in $P_{x, y}$}
{
get the power states of all ports and line cards along $p$ from the global controller, and compute the path weight $w(p)$ in terms of energy consumption\;
}
}
choose the least-weight path and the corresponding server pair for task pair $(T^j_m, T^j_n)$\;
}
}
\end{algorithm}
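The selection step at the heart of the CNS algorithm, choosing the server pair whose cheapest precomputed path needs the fewest wakeups, can be sketched as follows. All names here are illustrative: `servers` maps a server id to its (state, queue length), `paths` is the precomputed path set, and `path_weight` is any scoring function over the inactive line cards and ports a path would wake.

```python
from itertools import permutations

def place_task_pair(servers, paths, path_weight, T_server):
    """Sketch of the CNS selection step: return the least-weight
    (weight, src server, dst server, path) for one task pair."""
    # prefer active servers whose local queue is below the threshold
    avail = [s for s, (state, q) in servers.items()
             if state == "ACTIVE" and q <= T_server]
    if not avail:                       # fall back to full or sleeping servers
        avail = list(servers)
    best = None
    for x, y in permutations(avail, 2):
        for p in paths.get((x, y), []):
            w = path_weight(p)
            if best is None or w < best[0]:
                best = (w, x, y, p)
    return best
```

With `path_weight = len` (one unit per hop to wake), the function simply favors the shortest precomputed path between available servers.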
\begin{algorithm}
\SetAlgoNoLine
\caption{PopCorns Weight Assignment Algorithm}
\label{algorithm3}
\KwIn{nodes $i$ and $j$, server and line card power states, number of flows on link $(i,j)$, link bandwidth, flow size, average number of hops}
\KwOut{$W(i,j)$, the weight of the edge connecting nodes $i$ and $j$}
$W(i,j) \leftarrow 0$\;
\If{$i$ is a server}
{
$W(i,j) \leftarrow W(i,j) + (ServerActivePower - CurrentSleepStatePower) \times taskSize + EdgeLinkBW/FlowSize$\;
}
\If{$j$ is a switch}
{
$AvailableBW \leftarrow \min(EdgeLinkBW, MaxLinkRate(i,j)/numFlows)$\;
$W(i,j) \leftarrow W(i,j) + SwitchActivePower_j \times FlowSize/EdgeLinkBW$\;
$delayTime \leftarrow FlowSize/(EdgeSwitchBW - AvailableBW)$\;
$W(i,j) \leftarrow W(i,j) + (SwitchActivePower_j + 2 \times ServerActivePower) \times delayTime/AverageHopCount$\;
}
\end{algorithm}
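The weight-assignment rule above can be written as a runnable function. This is a sketch: the parameter names mirror the pseudocode rather than any finalized API, and the numeric interpretation of each term (wakeup energy, transmission energy, congestion-delay energy) follows the pseudocode directly.

```python
def edge_weight(i_is_server, j_is_switch, *, p_server_active, p_server_sleep,
                p_switch_active, task_size, flow_size, edge_link_bw,
                edge_switch_bw, max_link_rate, n_flows, avg_hops):
    """Energy-oriented weight for edge (i, j): server wakeup cost plus
    switch transmission and congestion-delay cost."""
    w = 0.0
    if i_is_server:
        # cost of lifting the server out of its sleep state for the task
        w += (p_server_active - p_server_sleep) * task_size \
             + edge_link_bw / flow_size
    if j_is_switch:
        avail_bw = min(edge_link_bw, max_link_rate / n_flows)
        w += p_switch_active * flow_size / edge_link_bw   # transmission energy
        delay = flow_size / (edge_switch_bw - avail_bw)   # congestion delay
        w += (p_switch_active + 2 * p_server_active) * delay / avg_hops
    return w
```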
\section{Motivation}
\subsection{Need for Switch sleep states}
\label{sec:motivation}
In this section, we motivate the need for multiple sleep states in switches. As discussed in Sections~\ref{sec:transition} and~\ref{sec:switchpower}, switch power consumption can be temporarily reduced by selectively turning off parts of the line cards, which are woken up when required. For components that hold no memory or state, the wakeup latency is simply the time to initialize the component; other components, such as the DRAM storing the address forwarding tables, must be reinitialized from the host line cards. In our study, we find that having multiple sleep states for a switch helps trade off various levels of wakeup latency against energy savings. This scheme is especially useful when the idle period is too short to amortize the energy spent waking from a single deepest sleep state without delaying network transmission.
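The trade-off behind multiple sleep states can be made concrete with a break-even calculation: a sleep state only pays off when the idle period exceeds the point where its savings cover the transition overhead. The numbers below are illustrative, not measured switch values.

```python
def breakeven_idle(p_active, p_sleep, wakeup_latency, e_transition):
    """Shortest idle period (seconds) for which a sleep state saves energy:
    staying active for t costs p_active*t, while sleeping costs p_sleep*t
    plus the wakeup at active power and a fixed entry/exit energy."""
    return (p_active * wakeup_latency + e_transition) / (p_active - p_sleep)

# a shallow state (small savings, cheap wakeup) breaks even on much shorter
# idle periods than a deep state (large savings, expensive wakeup)
shallow = breakeven_idle(p_active=100.0, p_sleep=60.0,
                         wakeup_latency=0.001, e_transition=0.1)
deep = breakeven_idle(p_active=100.0, p_sleep=10.0,
                      wakeup_latency=0.05, e_transition=2.0)
```

Idle gaps between the two break-even points favor the shallow state, which is exactly the regime where a single deepest sleep state wastes energy.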
\begin{figure}[!h]
\captionsetup{font=small}
\centering\includegraphics[width=0.4\textwidth]{figures/motivation/service.png}\caption{Network Energy consumption normalized per job for a 1024 server FatTree topology network. Jobs containing two dependent tasks of 500ms CPU duration each arrive at 30 Job/second. The Switch sleep state and latency models are constructed using the power model defined in section~\ref{sec:switchpower}.}
\label{motivation12}
\end{figure}
\subsection{Modeling Jobs}
We model the execution of jobs at the server side as follows. A job consists of multiple interdependent tasks, with both spatial and temporal interdependence. Application tasks are typically executed by specific server types; for example, a web service request is first processed by an application or web server, while a search request is processed by a database server. This kind of task relationship is called spatial interdependence. In terms of temporal interdependence, a task cannot start executing until all of its `parent' tasks have finished execution and their results have been communicated to the server assigned to the task. A job is considered finished when all of its tasks finish execution. As for servers, each server has multiple cores, and one core can process only one task at a time. In this paper, we configure the servers with single-core processors.
Each job $j$ can be represented as a directed acyclic graph (DAG) $G^j(V^j, E^j)$, where $V^j$ is the set of tasks of job $j$. If there is an edge from task $i$ to task $r$ in the DAG, then task $i$ must finish and communicate its results to task $r$ before $r$ can start processing. Each task $v^j \in V^j$ has a workload requirement, namely a task size or execution-time requirement $w^j_v$ for the core. Each link $l \in E^j$ has an associated data transfer size $D^j_l$, which denotes the bandwidth requirement to transfer the result over link $l$ (from the task at the head of the DAG link to the task at the tail) when assigned a network flow. Figure~\ref{figure1} shows an example of a job DAG.
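The DAG model and its temporal interdependence rule can be sketched directly. The toy job below is illustrative (not the exact DAG of Figure~\ref{figure1}): two independent tasks feed a third.

```python
class JobDAG:
    """Sketch of the job model: w[v] is the task size (execution time) and
    flow[(i, r)] the data transfer size on the DAG edge from i to r."""
    def __init__(self, task_sizes, flows):
        self.w = dict(task_sizes)
        self.flow = dict(flows)
        self.parents = {t: set() for t in self.w}
        for (i, r) in self.flow:
            self.parents[r].add(i)

    def ready_tasks(self, finished):
        """Temporal interdependence: a task may start only after all of its
        parents have finished and communicated their results."""
        done = set(finished)
        return [t for t in self.w
                if t not in done and self.parents[t] <= done]
```

For the toy job `JobDAG({1: 10, 2: 20, 3: 5}, {(1, 3): 4, (2, 3): 6})`, tasks 1 and 2 are immediately ready, while task 3 becomes ready only after both parents finish.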
\begin{figure}
\captionsetup{font=small}
\centering\includegraphics[width=0.15\textwidth]{figures/DAG.pdf}\caption{Example of a job DAG. Numbers 1-6 denote task 1-task 6 respectively. Numbers around the tasks represent task size, while numbers on the links represent flow size.}
\label{figure1}
\end{figure}
\begin{figure}
\captionsetup{font=small}
\centering\includegraphics[width=0.5\textwidth]{figures/fat_tree.png}
\caption{Fat tree topology.}
\label{figure12}
\end{figure}
\section{Experimental Setup}
We perform two sets of experiments: (1) simulations to explore the Pareto-optimal energy-latency tradeoff and the corresponding $T_s$, $T_w$, and $\tau$ settings, and (2) a prototype implementation on a testbed with a web server deployment. In this section, we elaborate on the experimental setup for both approaches.
\begin{figure}[htbp]
\captionsetup{font=small}
\centering
\includegraphics[scale=0.60]{./figure/power_profile_cloud.pdf}
\caption[test]{Power profile of a 10-core Xeon E5 processor with C0-C1 and C0-C6 transition settings whenever the server is idle.\protect\footnotemark}
\label{fig:power_profile}
\end{figure}
\footnotetext{We use a microbenchmark that can calibrate itself and occupy the core based on required utilization settings. To occupy multiple cores, we run multiple copies of this microbenchmark each pinned to a core.}
\begin{table}[htbp]
\footnotesize
\captionsetup{font=small}
\centering
\caption{Power (W) breakdown for a system with $n_{a}$ active cores}
\begin{tabular}{|p{1.5cm}<{\centering}|p{1.3cm}<{\centering}|p{1.4cm}<{\centering}|p{1.3cm}<{\centering}|p{0.8cm}<{\centering}|}
\hline
\multirow{2}{*}{\bf Component} & Core sleep C1$^{\ast}$& Core sleep C6 $^{\dagger}$& Pkg. sleep C6 & System sleep \\
\hline
\multirow{2}{*}{CPU} & 33.0+$3.1\times(n_{a}-1)$ & 23.0+$3.8\times(n_{a}-1)$ & \multirow{2}{*}{8.3} & \multirow{2}{*}{8.3} \\
\hline
RAM~\cite{intel_e5} & 10.8 & 10.8 & 4.9 & 1.4 \\
\hline
Platform~\cite{powernap} & 45.5 & 45.5 & 23.6 & 4.8 \\
\hline
\multirow{2}{*}{Total Power} & 89.3+$3.1\times(n_{a}-1)$ & 79.3+$3.8\times(n_{a} -1)$ & \multirow{2}{*}{36.8} & \multirow{2}{*}{14.5} \\
\hline
\end{tabular} \\
${\ast} $ processor is active and the rest of the idle cores are in $C1$ state. \\
$\dagger$ processor is active and the rest of the idle cores are in $C6$ state.
\label{table:powermode}
\end{table}
\begin{table}[htbp]
\footnotesize
\captionsetup{font=small}
\centering
\caption{Processor/System low-power states and wakeup latencies}
\begin{tabular}{|c|c|}
\hline
Low-power State & Wake-up latency \\ \hline
core sleep C1 & 10 $\mu$s \\ \hline
core sleep C6 & 82 $\mu$s \\ \hline
package sleep C6 & 1 ms \\ \hline
system sleep & 5 s \\ \hline
\end{tabular}
\vspace{-0.15in}
\label{table:latency-table}
\end{table}
\subsection{Processor Power Profile and Server Power Model}
We profile the power consumption of the Intel Xeon E5 processor~\cite{intel_e5} using Intel's \emph{Running Average Power Limit} (RAPL) interface. We build a customized cpuidle governor that allows specified low-power state transitions; the processor is programmed to transition between the active state (C0) and a low-power state $Cx$ (e.g., C1, C6). Figure~\ref{fig:power_profile} shows the measured power consumption of the processor for two configurations, \emph{C0-C1} and \emph{C0-C6}, at utilization levels from 0\% to 100\%. Using linear regression, we build a power model for the processor based on the selected sleep state and the number of active cores at full utilization. Table~\ref{table:powermode} shows the power consumption when a given state is chosen for sleep mode, and Table~\ref{table:latency-table} shows the wakeup latencies for the various low-power states. Note that the processor sleep-state transition latencies are those reported by the Linux cpuidle driver~\cite{pallipadi2007cpuidle}.
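The regression step amounts to fitting a line $P = base + slope \times (n_a - 1)$ to (active cores, power) samples. In the sketch below, the sample points are generated from the C0-C6 row of Table~\ref{table:powermode} ($79.3 + 3.8 \times (n_a - 1)$~W) purely to illustrate the fit; they are not raw RAPL measurements.

```python
import numpy as np

# (active cores, total power in W) samples, synthesized from the C0-C6 model
samples = [(1, 79.3), (2, 83.1), (4, 90.7), (8, 105.9), (10, 113.5)]
n_active = np.array([n for n, _ in samples], dtype=float)
power = np.array([p for _, p in samples])

# least-squares line fit: power ~= base + slope * (n_active - 1)
slope, base = np.polyfit(n_active - 1, power, 1)
```

On real measurements, the residuals of this fit indicate how well the constant-increment-per-core model holds.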
\subsection{Simulation Platform}
\titlename uses an event-driven simulator based on BigHouse~\cite{bighouse} that models server farm workloads and multi-server activity. We simulate a server farm with 100 ten-core servers (by default). In all of our experimental results, we report steady-state statistics by disregarding the warm-up period of the first 10,000 jobs. In the simulation, we use short-latency (web service-like) jobs with $s=4.2$~ms and long-latency (DNS service-like) jobs with $s=194$~ms as representatives, based on prior studies~\cite{powernap}. For each representative workload, we generate synthetic job arrivals at different utilization levels (0.1 for low, 0.3 for average~\cite{barroso_isca07}, and 0.6 for high). Random job arrivals are modeled by a Poisson process~\cite{meisner2012dreamweaver}. Besides the synthetic workloads, we also perform simulations based on Wikipedia traces.
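A Poisson arrival stream at a target utilization can be generated by drawing exponential inter-arrival gaps. The mapping from utilization to arrival rate below ($\lambda = utilization \times n_{servers} / s$) is our illustrative assumption about how the utilization knob translates to a rate, not the simulator's exact parameterization.

```python
import random

def poisson_arrivals(n_jobs, utilization, service_time, n_servers, seed=1):
    """Sketch of the synthetic workload generator: Poisson job arrivals
    whose rate targets the given utilization level."""
    rng = random.Random(seed)
    lam = utilization * n_servers / service_time   # jobs per second
    t, arrivals = 0.0, []
    for _ in range(n_jobs):
        t += rng.expovariate(lam)                  # exponential inter-arrival
        arrivals.append(t)
    return arrivals
```

For web service-like jobs ($s = 4.2$~ms) at utilization 0.3 on 100 servers, the empirical arrival rate converges to the target $\lambda$.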
\subsection{Real System Experiments on Testbed}
We deploy a testbed with a cluster of 10 application servers, one load-generating server, and one load-balancing server; all servers support the Intelligent Platform Management Interface (IPMI)~\cite{openipmi} for system-level power monitoring. Each application server is configured with the Apache web service. The load generator continuously sends web requests to the system according to real system traces (see Section~\ref{sec:evaluation} for further details).
\section*{Acknowledgment}
\label{sec:acknowledgement}
This material is based in part upon work supported by the National Science Foundation under Grant Number CNS-178133. Kathy Nguyen and Gregory Kahl were supported through a REU supplement under the NSF award.
\section{Evaluation on Real System}
\label{sec:evaluation}
\begin{figure*}[htbp]
\centering
\captionsetup{font=small}
\includegraphics[scale=0.56]{./figure/combine-energy.pdf}
\caption{\label{fig:per-server-energy} Energy measured on a server farm with 10 servers with different energy management policies. The first three groups of bars represent energy breakdown in each server when Active-Idle, Delay-Doze and \titlename are applied, respectively. The rightmost three bars illustrate the total server farm energy consumption for Active-Idle (black bar), Delay-Doze (gray bar) and \titlename respectively (white bar).}
\end{figure*}
We evaluate \titlename on a testbed of 10 Dell PowerEdge servers equipped with Intel Xeon processors, all deployed on a dedicated rack. We installed a modified version of the Apache HTTP server for our Local Power Controller and extended the Local Power Controller to also include a Delay-Doze timer. The Global Server Farm Power Manager runs on an additional Apache server with the \emph{mod\_proxy\_balancer} module used for load balancing. Specifically, the load balancer performs operating-mode transitions in servers (as discussed in Section~\ref{sec:handler}) by sending special HTTP requests (\emph{/hostname/trans-to-active-mode/}, \emph{/hostname/trans-to-lp-mode}) to the application server. It also monitors the power state of each server and manages server wakeups (from system sleep) using the IPMI interface supported by Dell systems. The special requests are handled by the local power controller, which determines the server low-power transitions accordingly. We set up a custom cpuidle governor that allows direct processor C-state transitions from userspace (e.g., C0-C6). For power measurement, we leverage two techniques: the RAPL interface for fine-grained component power and IPMI's system power management interface for coarse-grained server power. We evaluate the effectiveness of \titlename with two sets of workloads: the non-bursty Wikipedia workload, which does not require server provisioning, and four bursty NLANR workloads~\cite{nlanr}, which require server provisioning to handle bursty traffic (see Section~\ref{sec:provision}).
\subsection{Wikipedia Trace}
\label{sec:bursty-trace}
We performed real-system energy measurements by deploying the Wikipedia software stack, namely the Wikipedia application (MediaWiki) and database system (MySQL), on the servers. We compare \titlename against the Active-Idle and Delay-Doze approaches described in Section~\ref{sec:smartlp}. To capture a detailed energy breakdown, we leverage the RAPL interface for fine-grained power measurement; the RAPL utility records the CPU and RAM power values periodically. We configure \titlename with the $T_s$, $T_w$, and $\tau$ parameters that achieve energy-latency Pareto-optimality with the tail latency constraint set to 2.0. Similarly, for Delay-Doze, we explore various values of the delay timer and choose the setting that achieves the best power under the same tail latency constraint. From our experiments, we obtain the actual CPU and RAM energy consumption for each server; to obtain the overall server energy, we also factor in the platform energy shown in Table~\ref{table:powermode}.
Figure~\ref{fig:per-server-energy} shows the per-server energy breakdown in terms of CPU, DRAM, and platform energy. With Active-Idle power management, all 10 servers have similar energy consumption. With Delay-Doze, some of the servers are able to stay in the system sleep state for longer periods, thus saving energy. With \titlename, most of the servers drastically reduce energy consumption, and only a minimal subset of servers (server \#6 and server \#10) is used for servicing jobs. Note that the energy consumption of server \#10 is slightly higher than under Active-Idle power management, since that server runs at a higher utilization level while the other servers remain inactive. Overall, \titlename achieves 39\% energy savings compared to Delay-Doze and 56\% compared to Active-Idle.
\begin{figure}[htbp]
\centering
\subfloat[ny09]{
\includegraphics[scale=0.34]{./figure/ny09.pdf}
\label{fig:nlanr-ny09}
\hspace{0.05in}
}
\subfloat[pa09]{
\includegraphics[scale=0.34]{./figure/pa09.pdf}
\label{fig:nlanr-pa09}
}
\subfloat[pa10]{
\includegraphics[scale=0.34]{./figure/pa10.pdf}
\label{fig:nlanr-pa10}
\hspace{0.05in}
}
\subfloat[uc09]{
\includegraphics[scale=0.34]{./figure/uc09.pdf}
\label{fig:nlanr-uc09}
}
\caption{System utilization for four bursty traces.}
\label{fig:utilizations}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.45]{./figure/trace-eval/trace_result.pdf}
\caption{Normalized energy consumption relative to peak energy on a 10-server cluster.}
\label{fig:traceresults}
\end{figure}
\subsection{Bursty Traces}
\label{sec:nonbursty-trace}
As the raw NLANR network traces~\cite{nlanr} present job arrivals that are too infrequent for a server farm with 10 application servers (less than 2\% utilization), we speed up the traces by compressing each 24-hour trace into one hour. We choose four traces, namely \emph{ny09}, \emph{pa09}, \emph{pa10} and \emph{uc09}. Figure~\ref{fig:utilizations} shows the (scaled) utilization levels for the four traces over one hour. All traces exhibit bursty traffic patterns. For example, the \emph{ny09} trace has highly fluctuating utilization ranging from 4\% to 45\% with a large number of spikes. To run the traces, we set up a software stack similar to the one in Section~\ref{sec:bursty-trace}. Each request in the trace is serviced by a PHP script that accesses a pre-defined set of pages randomly; the average service time is about the same as that of Wikipedia web requests.
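The trace speed-up step can be sketched as follows (a minimal sketch; the event representation and names are illustrative, not from our implementation):

```python
# Compress a 24-hour trace into a one-hour replay window by scaling
# each event's arrival timestamp. Events are (timestamp_seconds, request).

def scale_trace(events, orig_duration=24 * 3600, target_duration=3600):
    """Scale event timestamps so the whole trace replays in target_duration."""
    return [(t * target_duration / orig_duration, req) for t, req in events]

trace = [(0.0, "GET /a"), (43200.0, "GET /b"), (86400.0, "GET /c")]
scaled = scale_trace(trace)  # midday maps to the 30-minute mark
```

Scaling timestamps (rather than dropping events) preserves the bursty shape of the arrival process while raising the effective arrival rate.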
To enable server provisioning, the Server Farm Power Manager additionally samples the server farm utilization based on the job arrival rates. Utilization is calculated as the product of the job arrival rate and the average job execution time, and the standard deviation over the utilization samples is calculated every 120 seconds. The number of provisioned servers is then computed dynamically (see Section~\ref{sec:provision} for details). Note that, in our comparative studies, the delay-timer values are re-evaluated for each trace so that the best possible energy savings are achieved while meeting the QoS constraints. Figure~\ref{fig:traceresults} shows the energy consumption for the four bursty workloads using Active-Idle, Delay-Doze and \titlename. The energy is normalized to the peak energy, i.e., $PeakPower \times Time$. The energy reduction for \titlename ranges from 34\% to 40\% compared to Active-Idle. Even with the best delay-timer settings, Delay-Doze achieves only 9\% to 12\% energy reduction on the bursty workloads. We observe that, due to job arrival rate spikes (especially for \emph{uc09}), the delay timer has to be set to larger values for Delay-Doze to meet the tail latency constraint of 2.0, which in turn limits the servers' opportunities to enter the system sleep state.
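The utilization sampling described above can be sketched as follows (constants and names are illustrative; `pstdev` is the population standard deviation from the Python standard library):

```python
import statistics

SAMPLE_PERIOD = 1.0   # seconds per utilization sample
WINDOW = 120          # samples per standard-deviation evaluation (120 s)

def utilization(arrival_rate, avg_exec_time):
    """Fraction of time a server is expected to be busy."""
    return arrival_rate * avg_exec_time

def window_stdev(samples):
    """Standard deviation of utilization over one 120-sample window."""
    assert len(samples) == WINDOW
    return statistics.pstdev(samples)
```

The resulting standard deviation feeds the provisioning decision discussed in Section~\ref{sec:provision}.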
\section{Evaluation}
\label{sec:evaluation}
\subsection{Processor/System Low-power States and Power Model}
Low-power states are an important feature widely supported in today's computing systems. The Advanced Configuration and Power Interface (ACPI)~\cite{acpi}
specifies various processor low-power states, denoted \emph{Cx}, and system low-power states, denoted \emph{Sx}.
A higher-numbered C state or S state typically indicates more aggressive energy savings but also corresponds to a longer wake-up latency.
For multi-core processors, low-power sleep states are supported at both the core level and the package level. When all cores become idle and reside in some \emph{core C state}, the entire package transitions to a corresponding C state, denoted the \emph{package sleep state}, which further reduces power.
Although processor sleep states can significantly reduce the power consumed by the processor, servers can still consume a considerable amount of power, as the platform may remain active. As a result, to achieve further energy savings, \emph{system sleep states}, which also put platform components into low-power states, are considered for server farm power management.
\subsection{Server and Job Model}
We model the server farm as a multi-server system that can process multiple jobs (up to the total number of cores) at a time.
{\em Utilization factor} is defined as the product of the job arrival rate and the average execution time, which is also the fraction of time that the server is expected to be busy executing a job.
We assume that a system-wide load balancer dispatches jobs to the servers within the server farm.
The {\em job latency} is defined as the time elapsed from when a job arrives to when the job completes its execution and departs the server. In this paper, the $90^{th}$ percentile job latency is considered as the QoS target.
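The job latency and the QoS metric defined above can be sketched as follows (a minimal nearest-rank percentile; function names are illustrative):

```python
import math

def job_latency(arrival_time, departure_time):
    """Time from job arrival to completion and departure from the server."""
    return departure_time - arrival_time

def tail_latency(latencies, p=0.90):
    """Nearest-rank p-th percentile of observed latencies (the QoS target)."""
    ordered = sorted(latencies)
    k = math.ceil(p * len(ordered)) - 1
    return ordered[max(k, 0)]
```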
\section{Workload Adaptation}
\label{sec:tstw_exp}
\begin{figure}[ht]
\centering
\includegraphics[scale=0.3]{./figure/algorithm_chart.pdf}
\caption{\titlename Power Management Framework}
\label{fig:alg_overview}
\end{figure}
While delay timers are useful in saving energy, we note that they lack workload awareness. Delay-timer values that are too large can cause the system to remain in higher power states for longer than necessary, increasing energy consumption, while values that are too small result in premature entry into low-power states. This can be problematic on two counts:
\begin{inparaenum}
\item The transition energy between power states can be high.
\item The wakeup latencies from low-power states can degrade system performance.
\end{inparaenum}
To incorporate workload-awareness, we explore a two-level adaptive strategy that controls the active and low-power state transitions using a local server power controller and a global server farm power manager.
\subsection{\titlename: Workload-Adaptive Algorithm}
\label{sec:handler}
We now present the design for our \titlename framework. As shown in Figure~\ref{fig:alg_overview}, the server farm power manager in the front end monitors the current load (number of pending jobs per server) and sends control commands to the local power controller. The server farm power manager puts the servers in either active or sleep modes. The bottom part of Figure~\ref{fig:alg_overview} presents state transitions coordinated by the local server power controllers.
\begin{table}[htbp]
\captionsetup{font=footnotesize}
\begin{footnotesize}
\begin{center}
\caption{Notations in \titlename power management algorithm}
\begin{tabular}{|c|l|}
\hline
{\bf Symbol} &{\bf \hspace{0.8in}Description}\\
\hline
{\normalsize $V_{act}$} & servers in active/package sleep mode \\
\hline
{\normalsize $V_{s}$} & servers in system sleep mode \\
\hline
{\normalsize $T_{s}$} & workload threshold to reduce active servers \\
\hline
{\normalsize $T_{w}$} & workload threshold to increase active servers \\
\hline
{\normalsize $N_{p}$} & number of provisioned shallow sleep servers \\
\hline
{\normalsize $\tau$} & delay time before entering system sleep \\
\hline
\end{tabular}
\label{table:notation}
\end{center}
\end{footnotesize}
\end{table}
\begin{algorithm}[ht]
\footnotesize
\KwIn{$T_{s}$, $T_{w}$, $N_{p}$, $\tau$, $n$ (total number of servers)}
\small
%
%
Initialization:
$V_{act}=\{s_1,s_2,...,s_n\} $; $V_{s}=\{\}$\;
\While{there are unfinished jobs}{
\If{a new job $j$ arrives at time $t_a$}{
compute $load\_per\_active\_server$\;
\If{$load\_per\_active\_server>T_w$ and $|V_{s}|>0$}{
retrieve a server $s$ from $V_{s}$\;
$V_{act}$.add($s$)\;
create a \emph{trans\_to\_active\_mode} request \emph{$tta\_r$}\;
send \emph{$tta\_r$} to server $s$'s power controller\;
}
}
\If{a job $j$ finishes at time $t_d$}{
compute $load\_per\_active\_server$\;
\If{$load\_per\_active\_server<T_s$ and $|V_{act}|>0$ }{
retrieve a server $s$ in $V_{act}$\;
$V_{s}$.add($s$) \;
create a \emph{trans\_to\_sleep\_mode} request \emph{$tts\_r$}\;
\If{count of shallow sleep servers $> N_{p}$}{
\emph{$tts\_r$}.enableDelayTimer($\tau$)\;
}
\Else{
\emph{$tts\_r$}.enableDelayTimer(infinity)\;
}
send \emph{$tts\_r$} to server $s$'s power controller\;
}
}
}
\caption{Global Server Farm Power Manager}
\label{alg:scheduler}
\end{algorithm}
\titlename automatically activates servers when the pending load becomes too high (which could lead to higher average job latency), and places servers in low-power sleep mode to conserve energy when the workload becomes light. We balance energy consumption and latency by estimating the current load and placing servers in different power modes. Two important parameters in Algorithm~\ref{alg:scheduler} govern transitioning between active and sleep modes: (1) $T_s$, the workload threshold per active server below which \titlename puts an active server to sleep, and (2) $T_w$, the workload threshold per active server above which \titlename wakes up an inactive server.
{\bf Global Server Farm Power Manager}: All servers are initially in the shallow low-power state, and arriving jobs are placed in each server's local job queue. As jobs arrive, the \emph{load per active-mode server} is computed dynamically by the power manager in the front end based on the number of jobs sent to individual servers and the number of completed jobs. The global server farm power manager maintains lists of servers in active and sleep modes. When new jobs arrive, it first checks whether the current load per active server is above $T_w$. If so, it selects a server from the sleep pool (if available) and sends it a power mode transition request to enter the active state. When the load per active server falls below $T_s$, the power manager selects an active server and sends it a power mode transition request to enter sleep mode. Algorithm~\ref{alg:scheduler} describes the \titlename power manager, with its notation shown in Table~\ref{table:notation}.
\textbf{Local Server Power Controller:} The processor transitions to the package sleep state when it becomes idle, and stays in that state until it receives a wakeup request from the global power manager. If the server receives a request to transition to sleep mode, it first finishes all pending jobs in its local queue and then enters package sleep, at which point a delay timer is started. The server enters system sleep upon delay timer expiration. However, if the scheduler chooses to wake up the server before the timer expires (e.g., due to a sudden load increase), the timer is reset and the server returns to active mode.
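The local controller's mode transitions can be sketched as a small state machine (a minimal sketch; the states, timing, and method names are illustrative, not from our implementation):

```python
# States of the local power controller described in the text.
ACTIVE, PKG_SLEEP, SYS_SLEEP = "active", "package_sleep", "system_sleep"

class LocalPowerController:
    def __init__(self, tau):
        self.tau = tau              # delay before entering system sleep
        self.state = ACTIVE
        self.timer_start = None

    def on_sleep_request(self, now, queue_empty):
        # Pending jobs in the local queue are drained before sleeping.
        if queue_empty:
            self.state = PKG_SLEEP
            self.timer_start = now  # start the delay timer

    def tick(self, now):
        # Delay-timer expiration: package sleep -> system sleep.
        if self.state == PKG_SLEEP and now - self.timer_start >= self.tau:
            self.state = SYS_SLEEP

    def on_wake_request(self):
        # An early wakeup resets the timer and returns to active mode.
        self.timer_start = None
        self.state = ACTIVE
```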
For large server farms, we can adopt one of two possible solutions:
\begin{inparaenum}
\item Adopt a distributed power management approach where energy is optimized within individual domains of servers with their own power managers.
\item Adopt a hierarchical solution with multiple levels of global power managers.
\end{inparaenum}
We note that a distributed power management approach may be more scalable with lower implementation complexity compared to a hierarchical approach that may involve longer latencies for decision making and higher bookkeeping overheads for the servers.
\subsection{Adaptive Server Provisioning}
\label{sec:provision}
Job arrival patterns may have local (bursty) spikes, during which service latency may suffer, especially when servers are in low-power modes. To mitigate this problem, we dynamically provision a subset of servers in shallow sleep states by setting their delay-timer values to infinity. \titlename determines the number of provisioned servers {\it dynamically} by measuring the current standard deviation in the job arrival rate observed over a 2-minute period. Specifically, the server provisioning module samples the number of arrivals and calculates the utilization for each sample period (one second in our current setting). It then uses the sampling window to determine the standard deviation in the system utilization level and provisions $\alpha\times stdev\times number\_servers$ servers in the shallow sleep state, where $\alpha$ is a tunable parameter. By default we set $\alpha$ to 3.0, since this typically covers the vast majority of the population (e.g., more than 99\% for a Gaussian distribution).
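The provisioning rule above can be sketched as follows (a minimal sketch; clamping the result to the farm size is our illustrative addition, not stated in the text):

```python
import math
import statistics

ALPHA = 3.0  # default; covers >99% of a Gaussian population

def provisioned_servers(util_samples, n_servers, alpha=ALPHA):
    """Number of servers to keep in shallow sleep for the next window."""
    stdev = statistics.pstdev(util_samples)
    # clamp to the farm size (illustrative safeguard)
    return min(n_servers, math.ceil(alpha * stdev * n_servers))
```

A steady workload (zero standard deviation) provisions no extra shallow-sleep servers, while a bursty one reserves proportionally more.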
\section{Related Work}
\label{sec:RelatedWork}
With the energy consumption of large data centers reaching the gigawatt scale, energy saving techniques have been studied extensively in recent years.
Common techniques for server energy reduction include dynamic voltage and frequency scaling (DVFS), which reduces energy at the cost of server performance, coordinated DVFS and sleep states for server processors~\cite{yaoTSBatProImprovingEnergy2018,yaoWASPWorkloadAdaptive2017}, and virtualization to consolidate VMs onto fewer servers~\cite{hsuOptimizingEnergyConsumption2014}. TS-Bat~\cite{yaoTSBatProImprovingEnergy2018} demonstrates that batching jobs temporally and grouping them onto specific servers spatially yields higher power savings. WASP~\cite{yaoWASPWorkloadAdaptive2017} shows that intelligent use of low-power states in servers can boost server power savings.
For network energy efficiency, earlier works have looked at switches and routers for Internet-scale wide-area networking. Gupta et al.~\cite{guptaGreeningInternet2003} first identified the need for power saving in networks and pointed to network protocol support for energy management. Adaptive Link Rate (ALR)~\cite{gunaratneReducingEnergyConsumption2008} reduces link energy consumption by dynamically adjusting the data link rate depending on traffic requirements. Other approaches turn off switches when they are not required, or put them in sleep mode depending on packet queue length~\cite{yuEnergyefficientDelayawarepacket2015}.
Prior work on reducing data center network power relies on DVFS and sleep states~\cite{iqbalEfficienttrafficaware2012} to opportunistically reduce the power consumption of individual switches. In these approaches, switches may enter sleep states without knowledge of incoming server traffic and may be forced to wake up prematurely. We study coordinated server job placement and network allocation to maximize sleep time and save network energy. Recently, DREAM~\cite{zhouDREAMDistRibutedEnergyAware2019} proposed a probability-based network traffic distribution scheme that splits flows to save network power.
Existing works also combine server and network power saving. Mahadevan et al.~\cite{mahadevanEnergyAwareNetwork2009} and Heller et al.~\cite{hellerElasticTreeSavingEnergy2010} proposed heuristic-based algorithms for coarse-grained load variation that dynamically allocate the servers required for the workload and power off the switches unneeded for that server configuration. Other approaches consolidate VMs onto fewer servers and, in turn, fewer switches~\cite{zhengJointpoweroptimization2014}. These approaches assume unrealistically long idle periods to offset the large wakeup latencies of transitions between on and off states. To the best of our knowledge, our solution is the only one to consider network sleep states to target higher power savings in the data center. EEPRON~\cite{zhouJointServerNetwork2018} discussed using the available network slack to reduce the energy consumption of servers via frequency scaling.
\section{Conclusion}
\label{sec:Conclusion}
In this article, we presented Popcorns-Pro\xspace, which makes smart use of line card and port low-power states in switches and orchestrates them with an intelligent joint task placement and routing algorithm for more effective power management. The results show good promise in achieving considerable power savings compared to the baseline policies: smart management of low-power states achieves up to 80\% higher energy savings over policies that optimize servers and the network separately, while still meeting the latency constraints.
\section{Introduction}
\label{sec:Introduction}
Data centers have spurred rapid growth in computing, and an increasing number of user applications have migrated toward cloud computing in the past few years. With this growing trend, data centers now account for about 2\% of US energy consumption~\cite{barrosoCaseEnergyProportionalComputing2007}, and many public cloud computing environments consume power on the order of several gigawatts. Energy is therefore a key challenge in data centers.
Data center servers are typically provisioned for peak performance to always satisfy user demands, which, however, also translates to higher power consumption. Hardware vendors have responded with power saving mechanisms such as DVFS and low-power or idle states~\cite{acpi}.
We note that power reduction strategies for network switches and routers have largely been studied in large-scale network settings. Gupta et al.~\cite{guptaGreeningInternet2003} proposed protocol-level support for coordinated entry into low-power states, where routers broadcast their sleep states so that routing decisions can be changed accordingly. Adaptive Link Rate (ALR) for Ethernet~\cite{gunaratneReducingEnergyConsumption2008} allows network links to reduce their bandwidth adaptively for increased power efficiency. Such approaches may not be very effective in data center settings, where application execution times depend more heavily on network performance and on the Quality of Service (QoS) demanded by users.
In this article, we propose PopCorns-Pro, a new framework to holistically optimize data center power through a cooperative network-server approach. We propose power models for data center servers and network switches (with support for low-power modes) based on power measurements in real system settings and on memory power modeling tools from MICRON~\cite{lalSLCMemoryAccess2019} and CACTI~\cite{balasubramonianCACTINewTools2017}. We then study job placement algorithms that take communication patterns into account while optimizing the amount of sleep time for both servers and line cards. Our experimental results show that we achieve more than 20\% higher energy savings compared to a baseline strategy that relies on server load balancing to optimize data center energy.
We note that further power savings can be obtained at the application level by carefully tuning applications' usage of processor resources~\cite{chenWattsinsideHardwaresoftwareCooperative2013} or by load-balancing tasks across cores in multicore processors to avoid keeping cores unnecessarily active. Such strategies can complement our proposed approach and boost power savings further in data center settings.
We extend our previous work~\cite{luPopCornsPowerOptimization2018} by proposing a multi-state power model for data center network switches and servers based on available power measurements in real system settings and memory power modeling tools. We formulate a power optimization problem that jointly considers both servers (with multiple cores) and network switches (with multiple line cards). We improve network traffic modeling accuracy by including link capacities, and we consider realistic heterogeneous switches in the fat-tree topology with different performance and power characteristics at the core, aggregation and edge levels.
We compare our approach against a server load-balancing mechanism for task placement with traditional Dijkstra-based network routing, as well as a server energy optimization algorithm with a greedy bin-packing network routing policy that consolidates traffic onto fewer switches. Comparing these four policy combinations against our algorithm, we find that a combined server-network energy-aware algorithm is more efficient than optimized versions of the individual approaches previously considered for data center energy optimization. We use real-world job arrival traces, as well as synthetic bursty arrivals with different per-job network demands, to characterize the benefits of our combined server-network selection policy in terms of energy consumption and job latency. We find that our approach provides a 25-80\% reduction in energy consumption compared to optimizing servers and the network separately with conventional techniques.
The important goals and overall contributions of our work are:
\begin{itemize}
\item We conceptualize architectural sleep states for the switch, based on the functional components and architectural design of real-world data center switches, and motivate the benefit of such sleep states for overall system power efficiency.
\item We propose a new algorithm that considers server and network energy characteristics to coordinate server task placement while accounting for the power drawn by network components. Transitions between power states in switches are controlled by buffer sizes and traffic patterns.
\item We evaluate the impact of the switch low-power states, together with our policy that optimizes for lower energy consumption, in our data center simulator. We consider a fat-tree data center topology with various configuration parameters such as network traffic sizes, CPU traces, and server and network performance models.
\end{itemize}
\section*{Acknowledgment}
\label{ack}
This material is based upon work supported in part by the National Science Foundation
under CAREER Award CCF-1149557 and CNS-1320226.
\section*{Acknowledgment}
\label{sec:acknowledgement}
This material is based in part upon work supported by the National Science Foundation under Grant Number CNS-178133. Kathy Nguyen and Gregory Kahl were supported through a REU supplement under the NSF award.
\subsection{Estimating the energy consumption}
\label{sec:algorithms}
Modeling this joint power optimization problem in data center networks (DCNs) as an Integer Linear Programming (ILP) formulation is one solution, and optimization tools like MathProg can provide a near-optimal result. However, the computational complexity increases exponentially with the number of servers and switches~\cite{wangExpeditusCongestionAwareLoad2017}; in a typical data center with tens of thousands of servers and hundreds of switches, solving the optimization problem is computationally prohibitive. We therefore propose a computationally efficient heuristic algorithm in this section.
\input{motivation}
\begin{comment}
In this section, we first present an Integer Linear Programming (ILP) formulation for the joint optimization problem in details. We then propose heuristic algorithms for solving the optimization problem efficiently.
\subsection{ILP formulation}
In the ILP formulation, the input parameters are shown in Table~\ref{table2:notation}.
\begin{table}
\caption{\label{table2:notation} Notations in ILP formulation}
\scalebox{0.5}{
\begin{tabular}{|c|c|c|}
\hline
\bf{Symbol}& \bf{Meaning} \\
\hline
$v$ & an arbitrary network node \\
\hline
$s$ & an arbitrary server \\
\hline
$V$ & set of network nodes (e.g. servers, switches) \\
\hline
$S$ & set of servers \\
\hline
$j$ & an arbitrary job \\
\hline
$L$ & set of links in the network \\
\hline
$P_{m,n}^{linecard}$ & power consumption of line card $m$ in switch $n$ \\
\hline
$P_{n}^{base}$ & power consumption of all parts other than line cards in switch $n$ \\
\hline
$N^{switch}$ & number of switches \\
\hline
$P_{c,s}^{core}$ & power consumption of core $c$ in server $s$ \\
\hline
$P_{s}^{base}$ & power consumption of all parts other than cores in server $s$ \\
\hline
$N_{s}^{core}$ & number of cores in server $s$ \\
\hline
$N^{server}$ & number of servers \\
\hline
$T^{j}$ & set of tasks in job $j$ \\
\hline
$t^{j}$ & an arbitrary task in job $j$, $t^{j}\in{T^{j}}$ \\
\hline
$N^{job}$ & number of jobs \\
\hline
$B_{r,t}^{j}$ & bandwidth requirement between task $r$ and task $t$ of job $j$, task $r$ is in server $s_r$, task $t$ is in server $s_t$ \\
\hline
$l_{n_1,n_2}$ & link between node $n_1$ and $n_2$, $l_{n_1,n_2}$ $\in$ L \\
\hline
$C_{l_{n_1,n_2}}$ & capacity of link $l_{n_1,n_2}$ \\
\hline
$L_{m, n}$ & set of links connected to line card $m$ in switch $n$ \\
\hline
$f_{l_{n_1,n_2}}$ & number of flows on link $l_{n_1,n_2}$ \\
\hline
$l_{p,s_1,q,s_2}$ & link between line card p in switch $s_1$ and line card q in switch $s_2$ $s$ to node $d$ \\
\hline
$l_{s,d}$ & an arbitrary path from node $s$ to node $d$ \\
\hline
$f_{l_{n_1,n_2}}$ & set of flows on link $l_{n_1,n_2}$ \\
\hline
$d^{l_{n_1,n_2}}_f$ & data rate of flow $f$ on link $l_{n_1,n_2}$ \\
\hline
\end{tabular}}
\end{table}
\begin{center}
\scalebox{0.8}{
Objective: ${Minimize(P^{switches}+P^{servers})}$}
\scalebox{0.8}{
$=\min{\sum_{n=1}^{N^{switch}}(P^{base}_n+\sum_{m=1}^{N_n^{linecard}}P_{m,n}^{linecard}x_m^n) +\sum_{s=1}^{N^{server}}(P_s^{base}+\sum_{c=1}^{N_s^{core}}P_{c,s}^{core}z_c^s)}$}
\end{center}
Decision Variables:
a)
\begin{equation*}
a^{j}_{t, c, s}=
\begin{cases}
1, \quad\makebox{if task } t \makebox{ of job } j \makebox{ is assigned to core } c \makebox{ in server } s \makebox{;} \\
0, \quad\makebox{otherwise }
\end{cases}
\end{equation*}
b)
\begin{equation*}
z^{j}_{t, r, s}=
\begin{cases}
1, \quad\makebox{if task } t \makebox{ and } r \makebox{ of job } j \makebox{ are assigned to server } s \makebox{;} \\
0, \quad\makebox{otherwise }
\end{cases}
\end{equation*}
c)
\begin{equation*}
y^{s}_{c}=
\begin{cases}
1, \quad\makebox{if core } c \makebox{ in server } s \makebox{ is operating; } \\
0, \quad\makebox{otherwise }
\end{cases}
\end{equation*}
d)
\begin{equation*}
x^{n}_{m}=
\begin{cases}
1, \quad\makebox{if line card } m \makebox{ in switch } n \makebox{ is active; } \\
0, \quad\makebox{otherwise }
\end{cases}
\end{equation*}
d)
\begin{equation*}
\lambda^{l}_{p_{s,d}}=
\begin{cases}
1, \quad\makebox{if link } l \makebox{ is on path } p_{s,d} \makebox{;} \\
0, \quad\makebox{otherwise }
\end{cases}
\end{equation*}
Constraints:
a) Each task of each job can be assigned to only one core. For all $j, t$
\begin{equation*}
\sum_{s=1}^{N^{server}} \sum_{c=1}^{N_s^{core}} a^{j}_{t, c, s} = 1
\end{equation*}
b) The number of tasks being executed on a server cannot exceed the number of its cores. For all $s$
\begin{equation*}
\sum_{j=1}^{N^{job}} \sum_{t\in{T^j}} \sum_{c=1}^{N_s^{core}} a^{j}_{t, c, s} = 1
\end{equation*}
c) Constraints to find the value of $z^{j}_{t, r, s}$. For all $j, t, r, s$
\begin{equation*}
\sum_{c=1}^{N^{core}_s}a^{j}_{t, c, s} + \sum_{c=1}^{N^{core}_s}a^{j}_{r, c, s} \leq 1 + z^{j}_{t, r, s}
\end{equation*}
\begin{equation*}
\sum_{c=1}^{N^{core}_s}a^{j}_{t, c, s} + \sum_{c=1}^{N^{core}_s}a^{j}_{r, c, s} \geq 2z^{j}_{t, r, s}
\end{equation*}
d) If link $l_{n_1,n_2}$ is on path $p_{s,d}$, there must exist another link from node $n_1, n_2$ respectively on this path.
\begin{equation*}
\sum_{c=1}^{N^{core}_s}a^{j}_{t, c, s} + \sum_{c=1}^{N^{core}_s}a^{j}_{r, c, s} \leq 1 + z^{j}_{t, r, s}
\end{equation*}
\begin{equation*}
\sum_{c=1}^{N^{core}_s}a^{j}_{t, c, s} + \sum_{c=1}^{N^{core}_s}a^{j}_{r, c, s} \geq 2z^{j}_{t, r, s}
\end{equation*}
e) If link $l_{s_1,s_2}$ is on the task communication path $p_{s,d}$, then the corresponding line cards in $s_1$ and $s_2$ must be active and the link between them is also on $p_{s,d}$. For $l_{p,s_1,q,s_2}$ $\in$ $L_{p,s_1}$, $l_{p,s_1,q,s_2}$ $\in$ $L_{q,s_2}$
\begin{equation*}
\lambda^{l_{p,s_1,q,s_2}}_{p_{s,d}} = \lambda^{l_{s_1,s_2}}_{p_{s,d}} = x^{s_1}_{p} = x^{s_2}_{q}
\end{equation*}
\begin{equation*}
x^{s_1}_{p}\lambda^{l_{p,s_1,q,s_2}}_{p_{s,d}} \geq \lambda^{l_{p,s_1,q,s_2}}_{p_{s,d}}
\end{equation*}
\begin{equation*}
x^{s_2}_{q}\lambda^{l_{p,s_1,q,s_2}}_{p_{s,d}} \geq \lambda^{l_{p,s_1,q,s_2}}_{p_{s,d}}
\end{equation*}
f) Line card $m$ in switch $n$ is off only when all the links connected to it are not on any path.
\begin{equation*}
\left| L_{m,n} \right| x^{n}_{m} \geq \sum_{l\in{L_{m,n}}}\lambda^{l}_{p_{s,d}}
\end{equation*}
g) Total flow rate on a link cannot exceed the link capacity.
\begin{equation*}
\sum_{f\in{f_{l_{n_1,n_2}}}}^{N^{server}} \sum_{c=1}^{N_s^{core}} a^{j}_{t, c, s} = 1
\end{equation*}
\end{comment}
\subsection{Heuristic Algorithms}
In this section, we first present a power management algorithm for line card and port power state transitions and a simple power transition algorithm for servers, and then propose a joint job placement and network routing algorithm for solving the optimization problem efficiently.
\subsubsection{Power State Transition Algorithm}
As not all switches in a DCN need to be active all the time, intelligently controlling the transitions between active and low-power states for ports and line cards can save data center network power. We therefore propose the Power State Transition Algorithm (Algorithm~\ref{algorithm1}) to implement network power management; the notation is elaborated in Table~\ref{algorithm1table}. We assume a global controller that keeps a record of all line card, port, and server statuses, including power state, queue size, etc. The global controller monitors the current traffic load (number of pending flows or packets) at each port and decides whether the current line card power state should change; it then decides whether the port state should change in response. In our design, we couple the line card and port power states and their transitions: if a line card is in the sleep or off state, then all its ports are also in the LPI or off state; if a line card is active, then its ports can be in the LPI or active state, and LPI ports can be woken up to become active in a short time.
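The coupling rule above can be sketched as follows (a minimal sketch; the state strings and function name are illustrative):

```python
# A port can never be "more awake" than its line card: a sleeping or off
# line card forces its ports into LPI or off; an active line card lets
# each port be active or LPI independently.

def effective_port_state(linecard_state, port_state):
    if linecard_state == "off":
        return "off"
    if linecard_state == "sleep":
        return "off" if port_state == "off" else "LPI"
    return port_state  # active line card: port may be "active" or "LPI"
```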
\begin{algorithm}
\SetAlgoNoLine
\scriptsize
\caption{Power State Transition Algorithm}
\label{algorithm1}
\KwIn{$T_s$, $T_a$, $Q_{iL}$}
\KwOut{Line card and port state transition}
Initialization: All line cards are in sleep state, all ports are in LPI state\;
\While{there are jobs to be executed}
{
\If{a flow arrives at port $i$ of line card $L$ at time $t$}
{
\If{$L$ is in sleep state} {
\If{$Q_{iL} > T_s$} {
$L$ begins waking up from sleep state\;
$i$ begins waking up from LPI state\;
}
\Else
{
$L$ begins waking up after $\tau_{wakeup}^{LC}$\;
$i$ begins waking up after $\tau_{wakeup}^{port}$\;
}
}
}
\If{a flow is transimitted from port $i$ of line card $L$ at time $t$} {
\If{$Q_{iL} < T_s$} {
$i$ starts transition to LPI after $\tau_{LPI}$\;
\If{after $\tau_{LPI}$, all the ports of $L$ are in LPI state} {
$L$ starts transition to deeper sleep state after $\tau_{sleep}^{LC}$;
}
}
}
}
\end{algorithm}
\subsubsection{Popcorns-Pro\xspace-Cooperative Network-Server Algorithm}
{
The main idea of our algorithm is to jointly consider the status of server pool and network before assigning jobs. To be more specific, for a job consisting of several pairs of interdependent tasks, if we place task pairs based on their interdependence, and choose the core pair with the minimum routing cost, instead of randomly placing them on available cores without awareness of communication requirement between tasks, then the placement together with its corresponding routing path must be the optimal. And in the context of this paper, network routing cost is energy consumption, since every line card on the chosen routing path has to be active, or it will have to be woken up.
Based on this idea, we propose the Cooperative Network-Server (CNS) Algorithm~\ref{algorithm2}
The notations are elaborated in Table~\ref{algorithm1table}. In the initial stage, we compute and store all possible routing paths between any pair of network nodes. As mentioned in the previous section, we have a global controller to keep track of all the line cards, ports, and server statuses. Thus, when a job consisting of a set of interdependent tasks arrive, we first check the server side to select all server pairs whose power state is active and local queue sizes do not exceed a threshold. If no server statisfies these requirements, then servers with full local queue and servers in C6 sleep state will be selected. Note that if a task is assigned to a server in sleep state, then the server will be activated immediately and enter active state after a wakeup latency. For each server pair we select possible routing paths between them from the pre-computed routing paths set. Along each path, line cards could be active, sleeping, or off, and ports could be active, LPI, or off, but if we assign a path to the task pair, all the line cards and ports along it should become active. In other words, we need to wake up the inactive line cards and ports on the chosen path, which results in extra power consumption and wakeup latency. Based on this, we can compute and assign a weight, which is a measurement of the number of active line cards and ports, to each possible routing path, and choose the server pair with minimum routing cost. And our proposed Cooperative Network-Server Algorithm will output the least-weight path and its corresponding server pair for each task pair.}
\begin{table}
\centering
\captionsetup{font=small}
\scalebox{0.7}{
\begin{tabular}{|c|c|c|}
\hline
\bf{Symbol}& \bf{Description} \\
\hline
$P$ & all the routing paths between any pair of network nodes \\
\hline
$Q_s$ & local queue size of server $s$ \\
\hline
$T_s^{server}$ & local queue size threshold of server $s$\\
\hline
$S$ & all the servers in DCN \\
\hline
$S_{ava}$ & all the servers whose current queue size doesn't exceed $T_s^{server}$ \\
\hline
$P_{x, y}$ & all the routing paths between node $x$ and $y$ in DCN\\
\hline
$T_s$ & traffic threshold for port waking up \\
\hline
$T_a$ & traffic threshold for port turning into LPI state \\
\hline
$Q_{iL}$& current traffic load at port $i$ in line card $L$ \\
\hline
$\tau_{wakeup}^{port}$ & port wakeup delay \\
\hline
$\tau_{wakeup}^{LC}$ & line card wakeup delay \\
\hline
$\tau_{LPI}^{port}$ & port turning into LPI state delay \\
\hline
$\tau_{sleep}^{LC}$ & line card turning into sleep state delay \\
\hline
\end{tabular}
}
\caption{\label{algorithm1table} Notations in Popcorns-Pro\xspace Cooperative Network-Server Algorithm.}
\end{table}
\begin{algorithm}[!htb]
\SetAlgoNoLine
\scriptsize
\caption{Popcorns-Pro\xspace - Cooperative Network-Server Algorithm}
\label{algorithm2}
\KwIn{$P$, $Q_s$, $T_s$, line cards and ports power state, task dependency within a job}
\KwOut{Job placement and corresponding routing path}
\While{job $j$ consisting of task set $T^j$ arrives}
{
\For{each pair of interdependent tasks $(T^j_m, T^j_n)$ in $T^j$}
{
select $S_{ava}$ from $S$\;
\For {each pair of available servers $(x, y)$ in $S_{ava}$}
{
compute $P(x, y)$ for server pair $(x, y)$ from $P$\;
\For {each path $p$ in $P(x, y)$}
{
get power states of all the ports and line cards along $P$ from the global controller, compute path weight $w(x, y)$ in terms of energy consumption\;
}
}
choose the least-weight path associated with corresponding server pair for task pair $(T^j_m, T^j_n)$\;
}
}
\end{algorithm}
\begin{algorithm}
\SetAlgoNoLine
\scriptsize
\caption{Popcorns-Pro\xspace - Weight assignment algorithm between two nodes}
\label{algorithm3}
\KwIn{$Link(i,j)$, $QoS$, $Link_{Capacity}$}
\KwOut{$W(i,j)$ Weight for the edge connecting nodes i and j}
\If{ Node $i$ is Server} {
$LinkWeight(i,j) += (ServerActivePower - CurrentSleepStatePower) *( taskSize + EdgeLinkBW / FlowSize)$;
}
\If{ Node $j$ is a Switch}
{
$LinkCapacity_{remaining} \leftarrow Link_{fullCapacity}$;
\For {$Every Flow F_{i} on Link(i,j)$ }
{
$Flow_{i_{remainingTime}} \leftarrow FSize_{i_{Remaining}} / FcurBW_{i}$;
\If {$F_{i_{remainingTime}} > SlowestTime$ }
{
$SlowestTime \leftarrow F_{i_{remainingTime}}$;
$SlowestFlow \leftarrow F_{i}$;
}
$LinkCapacity_{remaining} -= FcurBW_{i}$
}
$availBW \leftarrow min(Link_{fullCapacity} - LinkCapacity_{remaining} \text{ and } EdgeLinkBW)$;
\If { $availBW > MinBWforQoS$ }
{
$timeForNewFlow \leftarrow availBW/ FSize_{newFlow}$;
$additionalTimeAwake \leftarrow MAX(SlowestTime \text{ and } timeForNewFlow )$;
$LinkWeight(i,j) += additionalTimeAwake * ActivePower_{j}$;
} \Else {
$FcurBW_{newFlow} \leftarrow FSize_{newFlow}/QoSRequiredTime$;
$FcurBW_{Slowest}' \leftarrow (FcurBW_{Slowest} - (availBW - (FcurBW_{newFLow}/NumFlowsOnLink))$;
$AdditionalTime_{SlowestFlow} \leftarrow (FcurBW_{Slowest} - FcurBW_{Slowest}')/ FlowDataremaing_{Slowest}$;
$timeForNewFlow \leftarrow QoSRequiredTime$
$additionalTimeAwake \leftarrow MAX(timeForNewFlow \text{ and } AdditionalTime_{SlowestFlow})$;
$LinkWeight(i,j) += additionalTimeAwake * ActivePower_{j}$;
}
}
\end{algorithm}
\begin{algorithm}
\SetAlgoNoLine
\caption{Job scheduling Algorithm}
\label{algorithm2}
\scriptsize
\KwIn{$P$, $Q_s$, $T_s$, line cards and ports power state, task dependency within a job}
\KwOut{Job placement and corresponding routing path}
\While{job $j$ consisting of task set $T^j$ arrives}
{
\For{each pair of interdependent tasks $(T^j_m, T^j_n)$ in $T^j$}
{
select $S_{ava}$ from $S$\;
\For {each pair of available servers $(x, y)$ in $S_{ava}$}
{
compute $P(x, y)$ for server pair $(x, y)$ from $P$\;
\For {each path $p$ in $P(x, y)$}
{
get power states of all the ports and line cards along $P$ from the global controller, compute path weight $w(x, y)$ in terms of energy consumption\;
}
}
choose the least-weight path associated with corresponding server pair for task pair $(T^j_m, T^j_n)$\;
}
}
\end{algorithm}
\todo{TBD}
\section{Solution Approach}
\label{sec:algorithms}
One approach is to model this joint power optimization problem in the DCN as an Integer Linear Programming (ILP) formulation and use optimization tools like MathProg to obtain a near-optimal result. However, the computational complexity increases exponentially with the number of servers and switches~\cite{wang2012carpo}. In a typical data center with tens of thousands of servers and hundreds of switches, solving the optimization problem directly is computationally prohibitive. We therefore propose computationally efficient heuristic algorithms in this section.
\begin{comment}
In this section, we first present an Integer Linear Programming (ILP) formulation for the joint optimization problem in details. We then propose heuristic algorithms for solving the optimization problem efficiently.
\subsection{ILP formulation}
In the ILP formulation, the input parameters are shown in Table~\ref{table2:notation} .
\begin{table}
\caption{\label{table2:notation} Notations in ILP formulation}
\scalebox{0.5}{
\begin{tabular}{|c|c|c|}
\hline
\bf{Symbol}& \bf{Meaning} \\
\hline
$v$ & an arbitrary network node \\
\hline
$s$ & an arbitrary server \\
\hline
$V$ & set of network nodes (e.g. servers, switches) \\
\hline
$S$ & set of servers \\
\hline
$j$ & an arbitrary job \\
\hline
$L$ & set of links in the network \\
\hline
$P_{m,n}^{linecard}$ & power consumption of line card $m$ in switch $n$ \\
\hline
$P_{n}^{base}$ & power consumption of all parts other than line cards in switch $n$ \\
\hline
$N^{switch}$ & number of switches \\
\hline
$P_{c,s}^{core}$ & power consumption of core $c$ in server $s$ \\
\hline
$P_{s}^{base}$ & power consumption of all parts other than cores in server $s$ \\
\hline
$N_{s}^{core}$ & number of cores in server $s$ \\
\hline
$N^{server}$ & number of servers \\
\hline
$T^{j}$ & set of tasks in job $j$ \\
\hline
$t^{j}$ & an arbitrary task in job $j$, $t^{j}\in{T^{j}}$ \\
\hline
$N^{job}$ & number of jobs \\
\hline
$B_{r,t}^{j}$ & bandwidth requirement between task $r$ and task $t$ of job $j$, task $r$ is in server $s_r$, task $t$ is in server $s_t$ \\
\hline
$l_{n_1,n_2}$ & link between node $n_1$ and $n_2$, $l_{n_1,n_2}$ $\in$ L \\
\hline
$C_{l_{n_1,n_2}}$ & capacity of link $l_{n_1,n_2}$ \\
\hline
$L_{m, n}$ & set of links connected to line card $m$ in switch $n$ \\
\hline
$f_{l_{n_1,n_2}}$ & number of flows on link $l_{n_1,n_2}$ \\
\hline
$l_{p,s_1,q,s_2}$ & link between line card p in switch $s_1$ and line card q in switch $s_2$ $s$ to node $d$ \\
\hline
$l_{s,d}$ & an arbitrary path from node $s$ to node $d$ \\
\hline
$f_{l_{n_1,n_2}}$ & set of flows on link $l_{n_1,n_2}$ \\
\hline
$d^{l_{n_1,n_2}}_f$ & data rate of flow $f$ on link $l_{n_1,n_2}$ \\
\hline
\end{tabular}}
\end{table}
\begin{center}
\scalebox{0.8}{
Objective: ${Minimize(P^{switches}+P^{servers})}$}
\scalebox{0.8}{
$=\min{\sum_{n=1}^{N^{switch}}(P^{base}_n+\sum_{m=1}^{N_n^{linecard}}P_{m,n}^{linecard}x_m^n) +\sum_{s=1}^{N^{server}}(P_s^{base}+\sum_{c=1}^{N_s^{core}}P_{c,s}^{core}z_c^s)}$}
\end{center}
Decision Variables:
a)
\begin{equation*}
a^{j}_{t, c, s}=
\begin{cases}
1, \quad\makebox{if task } t \makebox{ of job } j \makebox{ is assigned to core } c \makebox{ in server } s \makebox{;} \\
0, \quad\makebox{otherwise }
\end{cases}
\end{equation*}
b)
\begin{equation*}
z^{j}_{t, r, s}=
\begin{cases}
1, \quad\makebox{if task } t \makebox{ and } r \makebox{ of job } j \makebox{ are assigned to server } s \makebox{;} \\
0, \quad\makebox{otherwise }
\end{cases}
\end{equation*}
c)
\begin{equation*}
y^{s}_{c}=
\begin{cases}
1, \quad\makebox{if core } c \makebox{ in server } s \makebox{ is operating; } \\
0, \quad\makebox{otherwise }
\end{cases}
\end{equation*}
d)
\begin{equation*}
x^{n}_{m}=
\begin{cases}
1, \quad\makebox{if line card } m \makebox{ in switch } n \makebox{ is active; } \\
0, \quad\makebox{otherwise }
\end{cases}
\end{equation*}
d)
\begin{equation*}
\lambda^{l}_{p_{s,d}}=
\begin{cases}
1, \quad\makebox{if link } l \makebox{ is on path } p_{s,d} \makebox{;} \\
0, \quad\makebox{otherwise }
\end{cases}
\end{equation*}
Constraints:
a) Each task of each job can be assigned to only one core. For all $j, t$
\begin{equation*}
\sum_{s=1}^{N^{server}} \sum_{c=1}^{N_s^{core}} a^{j}_{t, c, s} = 1
\end{equation*}
b) The number of tasks being executed on a server cannot exceed the number of its cores. For all $s$
\begin{equation*}
\sum_{j=1}^{N^{job}} \sum_{t\in{T^j}} \sum_{c=1}^{N_s^{core}} a^{j}_{t, c, s} = 1
\end{equation*}
c) Constraints to find the value of $z^{j}_{t, r, s}$. For all $j, t, r, s$
\begin{equation*}
\sum_{c=1}^{N^{core}_s}a^{j}_{t, c, s} + \sum_{c=1}^{N^{core}_s}a^{j}_{r, c, s} \leq 1 + z^{j}_{t, r, s}
\end{equation*}
\begin{equation*}
\sum_{c=1}^{N^{core}_s}a^{j}_{t, c, s} + \sum_{c=1}^{N^{core}_s}a^{j}_{r, c, s} \geq 2z^{j}_{t, r, s}
\end{equation*}
d) If link $l_{n_1,n_2}$ is on path $p_{s,d}$, there must exist another link from node $n_1, n_2$ respectively on this path.
\begin{equation*}
\sum_{c=1}^{N^{core}_s}a^{j}_{t, c, s} + \sum_{c=1}^{N^{core}_s}a^{j}_{r, c, s} \leq 1 + z^{j}_{t, r, s}
\end{equation*}
\begin{equation*}
\sum_{c=1}^{N^{core}_s}a^{j}_{t, c, s} + \sum_{c=1}^{N^{core}_s}a^{j}_{r, c, s} \geq 2z^{j}_{t, r, s}
\end{equation*}
e) If link $l_{s_1,s_2}$ is on the task communication path $p_{s,d}$, then the corresponding line cards in $s_1$ and $s_2$ must be active and the link between them is also on $p_{s,d}$. For $l_{p,s_1,q,s_2}$ $\in$ $L_{p,s_1}$, $l_{p,s_1,q,s_2}$ $\in$ $L_{q,s_2}$
\begin{equation*}
\lambda^{l_{p,s_1,q,s_2}}_{p_{s,d}} = \lambda^{l_{s_1,s_2}}_{p_{s,d}} = x^{s_1}_{p} = x^{s_2}_{q}
\end{equation*}
\begin{equation*}
x^{s_1}_{p}\lambda^{l_{p,s_1,q,s_2}}_{p_{s,d}} \geq \lambda^{l_{p,s_1,q,s_2}}_{p_{s,d}}
\end{equation*}
\begin{equation*}
x^{s_2}_{q}\lambda^{l_{p,s_1,q,s_2}}_{p_{s,d}} \geq \lambda^{l_{p,s_1,q,s_2}}_{p_{s,d}}
\end{equation*}
f) Line card $m$ in switch $n$ is off only when all the links connected to it are not on any path.
\begin{equation*}
\left| L_{m,n} \right| x^{n}_{m} \geq \sum_{l\in{L_{m,n}}}\lambda^{l}_{p_{s,d}}
\end{equation*}
g) Total flow rate on a link cannot exceed the link capacity.
\begin{equation*}
\sum_{f\in{f_{l_{n_1,n_2}}}}^{N^{server}} \sum_{c=1}^{N_s^{core}} a^{j}_{t, c, s} = 1
\end{equation*}
\end{comment}
\subsection{Heuristic Algorithms}
In this section, we first present a power management algorithm for line card and port power state transitions and a simple power state transition algorithm for servers, and then propose a joint job placement and network routing algorithm that solves the optimization problem efficiently.
\subsubsection{Power State Transition Algorithm}
\begin{table}
\centering
\captionsetup{font=small}
\begin{tabular}{|c|c|c|}
\hline
\bf{Symbol}& \bf{Description} \\
\hline
$T_s$ & traffic threshold for port waking up \\
\hline
$T_a$ & traffic threshold for port turning into LPI state \\
\hline
$Q_{iL}$& current traffic load at port $i$ in line card $L$ \\
\hline
$\tau_{wakeup}^{port}$ & port wakeup delay \\
\hline
$\tau_{wakeup}^{LC}$ & line card wakeup delay \\
\hline
$\tau_{LPI}^{port}$ & port turning into LPI state delay \\
\hline
$\tau_{sleep}^{LC}$ & line card turning into sleep state delay \\
\hline
\end{tabular}
\caption{\label{algorithm1table} Notations in the Popcorns-Pro\xspace power state transition algorithm.}
\end{table}
Not all switches in the DCN need to be active at all times; if we intelligently control the transitions between active and low-power states for ports and line cards, data center network power consumption can be reduced. We therefore propose the Power State Transition Algorithm (Algorithm~\ref{algorithm1}) to implement network power management. The notations are elaborated in Table~\ref{algorithm1table}, and an overview of our approach is shown in Figure~\ref{figure3}. We assume a global controller that keeps a record of the status of all line cards, ports, and servers, including their power states, queue sizes, etc. The global controller monitors the current traffic load (the number of pending flows or packets) at each port and decides whether the current line card power state should change; as feedback, it then decides whether the port state should change. In our design, we couple the line card and port power states and their transitions: if a line card is in the sleep or off state, then all of its ports are in the LPI or off state; if a line card is active, then its ports can be in the LPI or active state, and LPI ports can be woken up to become active in a short time.
\begin{figure}
\captionsetup{font=small}
\centering\includegraphics[width=0.5\textwidth]{figures/Presentation2.pdf}
\caption{Power state transition overview.}
\label{figure3}
\end{figure}
\begin{algorithm}
\SetAlgoNoLine
\caption{Power State Transition Algorithm}
\label{algorithm1}
\KwIn{$T_s$, $T_a$, $Q_{iL}$}
\KwOut{Line card and port state transition}
Initialization: All line cards are in sleep state, all ports are in LPI state\;
\While{there are jobs to be executed}
{
\If{a flow arrives at port $i$ of line card $L$ at time $t$}
{
\If{$L$ is in sleep state} {
\If{$Q_{iL} > T_s$} {
$L$ begins waking up from sleep state\;
$i$ begins waking up from LPI state\;
}
\Else
{
$L$ begins waking up after $\tau_{wakeup}^{LC}$\;
$i$ begins waking up after $\tau_{wakeup}^{port}$\;
}
}
}
\If{a flow is transmitted from port $i$ of line card $L$ at time $t$} {
\If{$Q_{iL} < T_a$} {
$i$ starts transition to LPI after $\tau_{LPI}^{port}$\;
\If{after $\tau_{LPI}^{port}$, all the ports of $L$ are in LPI state} {
$L$ starts transition to sleep state after $\tau_{sleep}^{LC}$\;
}
}
}
}
\end{algorithm}
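The coupled line-card/port transitions above can be captured in a few lines of Python. This is a minimal sketch under simplifying assumptions: the threshold branch on $Q_{iL}$ is collapsed into a single wake path, and the class names and delay/threshold values are illustrative, not taken from our simulator.

```python
from dataclasses import dataclass, field

# Illustrative constants following Table "algorithm1table" (assumed values).
T_S = 4                # traffic threshold (pending flows) related to port wakeup
TAU_WAKEUP_LC = 1.0    # line card wakeup delay
TAU_WAKEUP_PORT = 0.1  # port wakeup delay

@dataclass
class Port:
    state: str = "LPI"   # "LPI" or "active"
    queue: int = 0       # Q_{iL}: pending flows at this port

@dataclass
class LineCard:
    state: str = "sleep"              # "sleep" or "active"
    ports: list = field(default_factory=list)

def on_flow_arrival(lc: LineCard, port: Port) -> float:
    """Return the extra latency the arriving flow incurs before transmission."""
    port.queue += 1
    delay = 0.0
    if lc.state == "sleep":
        lc.state = "active"           # the card must wake before its ports
        delay += TAU_WAKEUP_LC
    if port.state == "LPI":
        port.state = "active"
        delay += TAU_WAKEUP_PORT
    return delay

def on_flow_departure(lc: LineCard, port: Port) -> None:
    port.queue -= 1
    if port.queue < T_S:              # light load: drift the port toward LPI
        port.state = "LPI"
        if all(p.state == "LPI" for p in lc.ports):
            lc.state = "sleep"        # all ports idle -> the card can sleep
```

The sketch makes the coupling rule explicit: a port can only be woken after its line card, and a line card only sleeps once every one of its ports is in LPI.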
\subsubsection{Server State Transition Algorithm}
As mentioned in Section~\ref{sec:serverpower}, we model only two power states for a server: the active state and the C6 sleep state. Algorithm~\ref{algorithm4} describes our policy for managing server power states. We assume that each server in the DCN has a local queue to buffer tasks, and tasks are first dispatched to active servers whose queue sizes do not exceed a threshold; this threshold controls the job latency.
\begin{algorithm}
\SetAlgoNoLine
\caption{Server State Transition Algorithm}
\label{algorithm4}
\KwIn{ server wakeup latency $\tau_{wakeup}^{server}$, local queue of server $S$}
\KwOut{Server power state transition}
\If{a task arrives at a server $S$}
{
\If{$S$ is in sleep state} {
$S$ begins waking up after wakeup latency $\tau_{wakeup}^{server}$\;
}
\Else {
task is put in the local queue of $S$\;
}
}
\If{a task is finished executing at a server $S$}
{
\If{local queue of $S$ is empty} {
$S$ begins turning into sleep state\;
}
}
\end{algorithm}
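This two-state server policy amounts to a small state machine, sketched below in Python; the class name and the wakeup latency value are illustrative assumptions.

```python
from collections import deque

TAU_WAKEUP_SERVER = 2.0   # illustrative C6-to-active wakeup latency (assumed)

class Server:
    def __init__(self):
        self.state = "sleep"    # "sleep" (C6) or "active"
        self.queue = deque()    # local task queue

    def on_task_arrival(self, task) -> float:
        """Enqueue the task; return the wakeup latency it incurs, if any."""
        self.queue.append(task)
        if self.state == "sleep":
            self.state = "active"   # wake immediately; usable after latency
            return TAU_WAKEUP_SERVER
        return 0.0

    def on_task_finish(self, task) -> None:
        self.queue.remove(task)
        if not self.queue:          # empty local queue -> enter C6 sleep
            self.state = "sleep"
```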
\subsubsection{Cooperative Network-Server Algorithm}
The main idea of our algorithm is to jointly consider the status of the server pool and the network before assigning jobs. More specifically, for a job consisting of several pairs of interdependent tasks, if we place task pairs based on their interdependence and choose the core pair with the minimum routing cost, instead of randomly placing them on available cores without awareness of the communication requirements between tasks, then the placement together with its corresponding routing path minimizes the joint cost. In the context of this paper, the network routing cost is energy consumption, since every line card on the chosen routing path must either already be active or be woken up.

Based on this idea, we propose the Cooperative Network-Server (CNS) Algorithm (Algorithm~\ref{algorithm2}). The notations are elaborated in Table~\ref{algorithm2table}. In the initial stage, we compute and store all possible routing paths between every pair of network nodes. As mentioned in the previous section, a global controller keeps track of the statuses of all line cards, ports, and servers. Thus, when a job consisting of a set of interdependent tasks arrives, we first check the server side and select all server pairs that are in the active power state and whose local queue sizes do not exceed a threshold. If no server satisfies these requirements, then servers with full local queues and servers in the C6 sleep state are selected. Note that if a task is assigned to a sleeping server, the server is woken immediately and enters the active state after a wakeup latency. For each server pair, we select the possible routing paths between them from the pre-computed routing path set. Along each path, line cards can be active, sleeping, or off, and ports can be active, in LPI, or off; but if we assign a path to the task pair, all line cards and ports along it must become active. In other words, we need to wake up the inactive line cards and ports on the chosen path, which incurs extra power consumption and wakeup latency. Based on this, we compute and assign to each candidate routing path a weight that measures the cost of the line cards and ports that must be woken, and choose the server pair with the minimum routing cost. The CNS algorithm outputs the least-weight path and its corresponding server pair for each task pair.
\begin{table}
\centering
\captionsetup{font=small}
\scalebox{0.7}{
\begin{tabular}{|c|c|c|}
\hline
\bf{Symbol}& \bf{Description} \\
\hline
$P$ & all the routing paths between any pair of network nodes \\
\hline
$Q_s$ & local queue size of server $s$ \\
\hline
$T_s^{server}$ & local queue size threshold of server $s$\\
\hline
$S$ & all the servers in DCN \\
\hline
$S_{ava}$ & all the servers whose current queue size doesn't exceed $T_s^{server}$ \\
\hline
$P_{x, y}$ & all the routing paths between node $x$ and $y$ in DCN\\
\hline
\end{tabular}
}
\caption{\label{algorithm2table} Notations in the Popcorns-Pro\xspace Cooperative Network-Server Algorithm.}
\end{table}
\begin{algorithm}
\SetAlgoNoLine
\caption{Cooperative Network-Server Algorithm}
\label{algorithm2}
\KwIn{$P$, $Q_s$, $T_s$, line cards and ports power state, task dependency within a job}
\KwOut{Job placement and corresponding routing path}
\While{job $j$ consisting of task set $T^j$ arrives}
{
\For{each pair of interdependent tasks $(T^j_m, T^j_n)$ in $T^j$}
{
select $S_{ava}$ from $S$\;
\For {each pair of available servers $(x, y)$ in $S_{ava}$}
{
compute $P(x, y)$ for server pair $(x, y)$ from $P$\;
\For {each path $p$ in $P(x, y)$}
{
get power states of all the ports and line cards along $p$ from the global controller, compute path weight $w(x, y)$ in terms of energy consumption\;
}
}
choose the least-weight path associated with corresponding server pair for task pair $(T^j_m, T^j_n)$\;
}
}
\end{algorithm}
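The selection loop of the CNS algorithm can be sketched as follows. The helper names and wake-cost constants are assumptions for illustration; a path is represented simply as a list of line-card identifiers, and the per-port cost is folded into the same loop.

```python
import itertools

def path_weight(path, lc_state, port_state,
                wake_cost_lc=2.0, wake_cost_port=0.5):
    """Energy cost of a path: cost of waking its inactive line cards/ports.
    Wake costs are illustrative constants, not measured values."""
    cost = 0.0
    for node in path:
        if lc_state.get(node) != "active":
            cost += wake_cost_lc
        if port_state.get(node) != "active":
            cost += wake_cost_port
    return cost

def place_task_pair(available_servers, paths, lc_state, port_state):
    """Return the (server pair, path) minimizing routing cost."""
    best = None
    for x, y in itertools.combinations(available_servers, 2):
        for path in paths.get((x, y), []):
            w = path_weight(path, lc_state, port_state)
            if best is None or w < best[2]:
                best = ((x, y), path, w)
    return (best[0], best[1]) if best else None
```

A path through already-active equipment gets weight zero, so flows naturally consolidate onto awake line cards, which is exactly the behavior the heat maps in the evaluation section exhibit.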
\begin{algorithm}
\SetAlgoNoLine
\caption{Popcorns-Pro\xspace - Weight Assignment Algorithm between Two Nodes}
\label{algorithm3}
\KwIn{$Link(i,j)$, $QoS$, $Link_{Capacity}$}
\KwOut{$W(i,j)$, weight for the edge connecting nodes $i$ and $j$}
$LinkWeight(i,j) \leftarrow 0$\;
\If{node $i$ is a server} {
$LinkWeight(i,j) \leftarrow LinkWeight(i,j) + (ServerActivePower - CurrentSleepStatePower) \cdot (taskSize + EdgeLinkBW / FlowSize)$\;
}
\If{node $j$ is a switch}
{
$LinkCapacity_{remaining} \leftarrow Link_{fullCapacity}$\;
\For{every flow $F_{i}$ on $Link(i,j)$}
{
$F_{i}^{remainingTime} \leftarrow FSize_{i}^{remaining} / FcurBW_{i}$\;
\If{$F_{i}^{remainingTime} > SlowestTime$}
{
$SlowestTime \leftarrow F_{i}^{remainingTime}$\;
$SlowestFlow \leftarrow F_{i}$\;
}
$LinkCapacity_{remaining} \leftarrow LinkCapacity_{remaining} - FcurBW_{i}$\;
}
$availBW \leftarrow \min(LinkCapacity_{remaining}, EdgeLinkBW)$\;
\If{$availBW > MinBWforQoS$}
{
$timeForNewFlow \leftarrow FSize_{newFlow} / availBW$\;
$additionalTimeAwake \leftarrow \max(SlowestTime, timeForNewFlow)$\;
$LinkWeight(i,j) \leftarrow LinkWeight(i,j) + additionalTimeAwake \cdot ActivePower_{j}$\;
} \Else {
$FcurBW_{newFlow} \leftarrow FSize_{newFlow} / QoSRequiredTime$\;
$FcurBW_{Slowest}' \leftarrow FcurBW_{Slowest} - (FcurBW_{newFlow} - availBW) / NumFlowsOnLink$\;
$AdditionalTime_{SlowestFlow} \leftarrow FlowData_{Slowest}^{remaining} / FcurBW_{Slowest}' - FlowData_{Slowest}^{remaining} / FcurBW_{Slowest}$\;
$timeForNewFlow \leftarrow QoSRequiredTime$\;
$additionalTimeAwake \leftarrow \max(timeForNewFlow, AdditionalTime_{SlowestFlow})$\;
$LinkWeight(i,j) \leftarrow LinkWeight(i,j) + additionalTimeAwake \cdot ActivePower_{j}$\;
}
}
\end{algorithm}
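The switch branch of the weight assignment can be sketched numerically as follows; all parameter names are illustrative, and existing flows are modeled as (remaining size, current bandwidth) pairs. The weight is the extra time the switch must stay awake for the new flow, times its active power.

```python
def link_weight_switch(active_power, flows, new_flow_size,
                       link_capacity, edge_link_bw, min_bw_qos, qos_time):
    """Sketch of the switch-link weight: extra awake time x active power.
    flows: iterable of (remaining_size, current_bw) for flows on the link."""
    remaining_cap = link_capacity
    slowest_time = 0.0
    for size_left, bw in flows:           # scan existing flows on the link
        slowest_time = max(slowest_time, size_left / bw)
        remaining_cap -= bw
    avail_bw = min(remaining_cap, edge_link_bw)
    if avail_bw > min_bw_qos:
        time_new = new_flow_size / avail_bw
        extra_awake = max(slowest_time, time_new)
    else:
        # Bandwidth must be squeezed to meet the QoS deadline; the switch
        # stays awake at least for the QoS-required completion time.
        extra_awake = max(qos_time, slowest_time)
    return extra_awake * active_power
```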
\section{Baseline Policies}
\begin{figure*}
\centering
\captionsetup{font=small}
\subfloat[Shortest Path and Random Server job scheduling]{
\includegraphics[width=0.33\textwidth]{figures/heatmaps/heatmap_djikstra_user_specify_random_.png}
\label{djikstra_heat_random}
}
\subfloat[Elastic Tree and Random Server job scheduling]{
\includegraphics[width=0.33\textwidth]{figures/heatmaps/heatmap_elastic_tree_user_specify_random_.png}
\label{elastic_tree_heat_random}
}
\subfloat[Shortest Path and WASP Server job scheduling]{
\includegraphics[width=0.33\textwidth]{figures/heatmaps/heatmap_djikstra_user_specify_wasp_.png}
\label{djikstra_heat_wasp}
}
\subfloat[Elastic Tree and WASP Server job scheduling]{
\includegraphics[width=0.33\textwidth]{figures/heatmaps/heatmap_elastic_tree_user_specify_wasp_.png}
\label{elastic_tree_heat_wasp}
}
\subfloat[Popcorns based server and network scheduling]{
\includegraphics[width=0.33\textwidth]{figures/heatmaps/heatmap_popcorns_user_specify_popcorns_.png}
\label{popcorns_heat_popcorns}
}
\caption{Illustration of the average sleep state for switches and servers under different policies. The Popcorns-Pro\xspace scheduling policy achieves greater consolidation of server and network flows.}
\label{fig:heatmap-graph}
\end{figure*}
Figure~\ref{fig:heatmap-graph} illustrates the heat map at the end of the execution of the same Poisson-based random job arrival pattern with mean arrival rate $\lambda=0.5$ jobs/second under different server and network job scheduling policies, for a small 16-server Fat-Tree topology. We note that with random server selection, all servers are equally utilized, with Elastic Tree based network scheduling performing more network consolidation for better savings.
When using random server selection with shortest path routing (Figure~\ref{djikstra_heat_random}) and Elastic Tree based routing (Figure~\ref{elastic_tree_heat_random}), more aggregate switches appear in dark green, allowing more switches to stay in a deep sleep state. Similarly, for the WASP server selection scheme in Figures~\ref{djikstra_heat_wasp} and~\ref{elastic_tree_heat_wasp}, we see unequal usage of the servers and more flows consolidated onto fewer switches. Even compared with the server energy optimization in WASP, our combined server and network policy Popcorns-Pro\xspace (Figure~\ref{popcorns_heat_popcorns}) achieves much higher savings than either of the individual optimizations. The Popcorns-Pro\xspace algorithm achieves this mainly by being aware of the network energy state when scheduling jobs on the servers. It further increases energy savings through greater consolidation of flows and servers, utilizing the latency slack available at each node in the data center.
\section{Workload Adaptation}
\label{sec:tstw_exp}
\begin{figure}[ht]
\centering
\includegraphics[scale=0.3]{./figure/algorithm_chart.pdf}
\caption{\titlename Power Management Framework}
\label{fig:alg_overview}
\end{figure}
While delay timers are useful in saving energy, they lack workload awareness. Delay timers that are too large can keep the system in higher power states longer than necessary, increasing energy consumption, while timers that are too small cause premature entry into low-power states. This can be problematic on two counts:
\begin{inparaenum}
\item The transition energy between power states can be high.
\item The wakeup latencies from low-power states can degrade system performance.
\end{inparaenum}
To incorporate workload-awareness, we explore a two-level adaptive strategy that controls the active and low-power state transitions using a local server power controller and a global server farm power manager.
\subsection{\titlename: Workload-Adaptive Algorithm}
\label{sec:handler}
We now present the design for our \titlename framework. As shown in Figure~\ref{fig:alg_overview}, the server farm power manager in the front end monitors the current load (number of pending jobs per server) and sends control commands to the local power controller. The server farm power manager puts the servers in either active or sleep modes. The bottom part of Figure~\ref{fig:alg_overview} presents state transitions coordinated by the local server power controllers.
\begin{table}[htbp]
\captionsetup{font=footnotesize}
\begin{footnotesize}
\begin{center}
\caption{Notations in \titlename power management algorithm}
\begin{tabular}{|c|l|}
\hline
{\bf Symbol} &{\bf \hspace{0.8in}Description}\\
\hline
{\normalsize $V_{act}$} & servers in active/package sleep mode \\
\hline
{\normalsize $V_{s}$} & servers in system sleep mode \\
\hline
{\normalsize $T_{s}$} & workload threshold to reduce active servers \\
\hline
{\normalsize $T_{w}$} & workload threshold to increase active servers \\
\hline
{\normalsize $N_{p}$} & number of provisioned shallow sleep servers \\
\hline
{\normalsize $\tau$} & delay time before entering system sleep \\
\hline
\end{tabular}
\label{table:notation}
\end{center}
\end{footnotesize}
\end{table}
\begin{algorithm}[ht]
\footnotesize
\KwIn{$T_{s}$, $T_{w}$, $N_{p}$, $\tau$, $n$ (total number of servers)}
\small
%
%
Initialization:
$V_{act}=\{s_1,s_2,...,s_n\} $; $V_{s}=\{\}$\;
\While{there are unfinished jobs}{
\If{a new job $j$ arrives at time $t_a$}{
compute $load\_per\_active\_server$\;
\If{$load\_per\_active\_server>T_w$ and $|V_{s}|>0$}{
retrieve a server $s$ from $V_{s}$\;
$V_{act}$.add($s$)\;
create a \emph{trans\_to\_active\_mode} request \emph{$tta\_r$}\;
send \emph{$tta\_r$} to server $s$'s power controller\;
}
}
\If{a job $j$ finishes at time $t_d$}{
compute $load\_per\_active\_server$\;
\If{$load\_per\_active\_server<T_s$ and $|V_{act}|>0$ }{
retrieve a server $s$ in $V_{act}$\;
$V_{s}$.add($s$) \;
create a \emph{trans\_to\_sleep\_mode} request \emph{$tts\_r$}\;
\If{count of shallow sleep servers$>N_{p}$}{
\emph{$tts\_r$}.enableDelayTimer($\tau$)\;
}
\Else{
\emph{$tts\_r$}.enableDelayTimer(infinity)\;
}
send \emph{$tts\_r$} to server $s$'s power controller\;
}
}
}
\caption{Global Server Farm Power Manager}
\label{alg:scheduler}
\end{algorithm}
\titlename automatically activates servers when the pending load becomes too high (which could otherwise lead to higher average job latency), and places servers in a low-power sleep mode to conserve energy when the workload becomes light. We balance energy consumption and latency by estimating the current load and placing servers in different power modes. Two important parameters in Algorithm~\ref{alg:scheduler} govern the transitions between active and sleep modes: (1) $T_s$, the workload threshold per active server below which \titlename puts an active server to sleep, and (2) $T_w$, the workload threshold per active server above which \titlename wakes up an inactive server.
{\bf Global Server Farm Power Manager}: All servers are initially in the shallow low-power state, and arriving jobs are placed in each server's local job queue. As jobs arrive, the \emph{load per active-mode server} is computed dynamically by the power manager in the front end, based on the number of jobs sent to individual servers and the number of completed jobs. The global server farm power manager maintains lists of the servers in active and sleep modes. When new jobs arrive, it first checks whether the current load per active server is above $T_w$. If so, it selects a server from the sleep pool (if available) and sends it a request to transition to the active state. When the load per active server falls below $T_s$, the power manager selects an active server and sends it a request to enter sleep mode. Algorithm~\ref{alg:scheduler} describes the \titlename power manager, with its notations shown in Table~\ref{table:notation}.
\textbf{Local Server Power Controller:} The processor transitions to the package sleep state when it becomes idle and stays in that state until it receives a wake-up request from the global power manager. If the server receives a request to transition to sleep mode, it first finishes all pending jobs in the local queue and then enters package sleep, after which a delay timer is started. The server enters system sleep upon delay timer expiration. However, if the scheduler chooses to wake up the server before the timer expires (e.g., due to a sudden load increase), the timer is reset and the server returns to active mode.
For large server farms, we can adopt one of two possible solutions:
\begin{inparaenum}
\item Adopt a distributed power management approach where energy is optimized within individual domains of servers with their own power managers.
\item Adopt a hierarchical solution with multiple levels of global power managers.
\end{inparaenum}
We note that a distributed power management approach may be more scalable with lower implementation complexity compared to a hierarchical approach that may involve longer latencies for decision making and higher bookkeeping overheads for the servers.
\subsection{Adaptive Server Provisioning}
\label{sec:provision}
Job arrival patterns may contain local spikes (bursts), during which service latency may suffer, especially when servers are in low-power modes. To mitigate this problem, we dynamically provision a subset of servers in shallow sleep states by setting their delay timer values to infinity. \titlename determines the number of provisioned servers {\it dynamically} by measuring the standard deviation of the job arrival rate observed over a 2-minute period. Specifically, the server provisioning module samples the number of arrivals and calculates the utilization for each sample period (one second in our current setting). It then uses the sampling window to determine the standard deviation of the system utilization, and provisions $\alpha\times stdev\times number\_servers$ servers in the shallow sleep state, where $\alpha$ is a tunable parameter. By default we set $\alpha = 3.0$, since it typically covers the vast majority of the population (e.g., more than 99\% of a Gaussian distribution).
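As a concrete sketch, the provisioning rule can be computed as follows. This is illustrative Python; `provisioned_count` is a hypothetical name, and using the population standard deviation of the per-second utilization samples is our assumption.

```python
import statistics

def provisioned_count(util_samples, num_servers, alpha=3.0):
    """Servers to hold in shallow sleep: alpha * stdev * num_servers,
    where stdev is taken over the sampling window of per-second
    utilization samples (illustrative sketch)."""
    stdev = statistics.pstdev(util_samples)
    return min(num_servers, round(alpha * stdev * num_servers))
```

With a steady load the standard deviation is zero and no servers are held in shallow sleep; burstier windows reserve proportionally more.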
\section{Evaluation}
\label{sec:evaluation}
\subsection{Processor/System Low-power States and Power Model}
Low-power states are an important feature widely supported in today's computing systems. The Advanced Configuration and Power Interface (ACPI)~\cite{acpi}
specifies various processor low-power states, denoted \emph{Cx}, and system low-power states, denoted \emph{Sx}.
A higher-numbered C state or S state typically indicates more aggressive energy savings but also longer wake-up latency.
For multi-core processors, low-power sleep states are supported at both the core level and the package level. When all cores become idle and reside in some \emph{core C state}, the entire package can be resolved to a corresponding C state, denoted the \emph{package sleep state}, which further reduces power.
Although processor sleep states can significantly reduce the power consumed by the processor, servers can still consume a considerable amount of power because the platform may remain active. As a result, to achieve further energy savings, \emph{system sleep states}, which also put platform components into a low-power state, are considered for server farm power management.
\subsection{Server and Job Model}
We model the server farm as a multi-server system that can process multiple jobs (up to the total number of cores) at a time.
{\em Utilization factor} is defined as the product of the job arrival rate and the average execution time, which is also the fraction of time that the server is expected to be busy executing a job.
We assume that a system-wide load balancer dispatches jobs to the servers within the server farm.
The {\em job latency} is defined as the time elapsed from when a job arrives to when the job completes its execution and departs the server. In this paper, the $90^{th}$ percentile job latency is considered as the QoS target.
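These definitions can be made concrete with a short Python sketch. This is illustrative; in particular, `percentile_90` uses the nearest-rank method, which is an assumption rather than necessarily the estimator used in our simulator.

```python
import math

def utilization_factor(arrival_rate, avg_exec_time):
    """Expected fraction of time a server is busy executing jobs:
    the product of arrival rate and average execution time."""
    return arrival_rate * avg_exec_time

def percentile_90(latencies):
    """90th-percentile job latency via the nearest-rank method."""
    ordered = sorted(latencies)
    return ordered[math.ceil(0.9 * len(ordered)) - 1]
```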
\section{Experimental Setup}
We perform two sets of experiments: (1) simulations to explore the Pareto-optimal energy-latency tradeoff and the corresponding $T_s$, $T_w$, and $\tau$ settings, and (2) a prototype implementation on a testbed with a web server deployment. In this section, we elaborate on the experimental setup for both approaches.
\begin{figure}[htbp]
\captionsetup{font=small}
\centering
\includegraphics[scale=0.60]{./figure/power_profile_cloud.pdf}
\caption[test]{Power profile of a 10-core Xeon E5 processor with C0-C1 and C0-C6 transition settings whenever the server is idle.\protect\footnotemark}
\label{fig:power_profile}
\end{figure}
\footnotetext{We use a microbenchmark that can calibrate itself and occupy the core based on required utilization settings. To occupy multiple cores, we run multiple copies of this microbenchmark each pinned to a core.}
\begin{table}[htbp]
\footnotesize
\captionsetup{font=small}
\centering
\caption{Power (W) breakdown for a system with $n_{a}$ active cores}
\begin{tabular}{|p{1.5cm}<{\centering}|p{1.3cm}<{\centering}|p{1.4cm}<{\centering}|p{1.3cm}<{\centering}|p{0.8cm}<{\centering}|}
\hline
\multirow{2}{*}{\bf Component} & Core sleep C1$^{\ast}$& Core sleep C6 $^{\dagger}$& Pkg. sleep C6 & System sleep \\
\hline
\multirow{2}{*}{CPU} & 33.0+$3.1\times(n_{a}-1)$ & 23.0+$3.8\times(n_{a}-1)$ & \multirow{2}{*}{8.3} & \multirow{2}{*}{8.3} \\
\hline
RAM~\cite{intel_e5} & 10.8 & 10.8 & 4.9 & 1.4 \\
\hline
Platform~\cite{powernap} & 45.5 & 45.5 & 23.6 & 4.8 \\
\hline
\multirow{2}{*}{Total Power} & 89.3+$3.1\times(n_{a}-1)$ & 79.3+$3.8\times(n_{a} -1)$ & \multirow{2}{*}{36.8} & \multirow{2}{*}{14.5} \\
\hline
\end{tabular} \\
${\ast} $ processor is active and the rest of the idle cores are in $C1$ state. \\
$\dagger$ processor is active and the rest of the idle cores are in $C6$ state.
\label{table:powermode}
\end{table}
\begin{table}[htbp]
\footnotesize
\captionsetup{font=small}
\centering
\caption{Processor/System low-power states and wakeup latencies}
\begin{tabular}{|c|c|c|}
\hline
Low-power State & Wake-up latency \\ \hline
core sleep C1 & 10 $\mu$s \\ \hline
core sleep C6 & 82 $\mu$s \\ \hline
package sleep C6 & 1 ms \\ \hline
system sleep & 5 s \\ \hline
\end{tabular}
\vspace{-0.15in}
\label{table:latency-table}
\end{table}
\subsection{Processor Power Profile and Server Power Model}
We profile the power consumption of the Intel Xeon E5 processor~\cite{intel_e5} using Intel's \emph{Running Average Power Limit} (RAPL) interface. We build a customized cpuidle governor that allows only specified low-power state transitions. The processor is programmed to transition between the active state (C0) and a low-power state $Cx$ (e.g., C1, C6). Figure~\ref{fig:power_profile} shows the measured power consumption of the processor for two configurations, \emph{C0-C1} and \emph{C0-C6}, at utilization levels from 0\% to 100\%. Using linear regression, we build a power model for the processor based on the selected sleep state and the number of active cores at full utilization. Table~\ref{table:powermode} shows the power consumption when a given state is chosen for sleep mode, and Table~\ref{table:latency-table} shows the wake-up latencies for the various low-power states. Note that the processor sleep state transition latencies are reported by the Linux cpuidle driver~\cite{pallipadi2007cpuidle}.
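The linear power model can be fit with ordinary least squares. The sketch below uses pure Python and made-up sample points chosen to lie exactly on the C1 row of Table~\ref{table:powermode}, so the fit recovers the $33.0 + 3.1\times(n_a-1)$ form; the measurements themselves are hypothetical.

```python
def fit_linear(xs, ys):
    """Least-squares fit of y = base + slope * x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return my - slope * mx, slope

# Hypothetical (active cores - 1, watts) measurements on the C1 curve.
samples = [(0, 33.0), (1, 36.1), (3, 42.3), (9, 60.9)]
base, slope = fit_linear([x for x, _ in samples], [y for _, y in samples])
```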
\subsection{Simulation Platform}
\titlename uses an event-driven simulator based on BigHouse~\cite{bighouse} that models server farm workloads and multi-server activity. We simulate a server farm with 100 ten-core servers (by default). In all of our experimental results, we report steady-state statistics, disregarding the warm-up period of the first 10,000 jobs. In the simulation, we use short-latency (Web-service-like) jobs with $s = 4.2$~ms and long-latency (DNS-service-like) jobs with $s = 194$~ms as representatives, based on prior studies~\cite{powernap}. For each representative workload, we generate synthetic job arrivals at different utilization levels (0.1 for low, 0.3 for average~\cite{barroso_isca07}, and 0.6 for high). Random job arrivals are modeled by a Poisson process~\cite{meisner2012dreamweaver}. Besides synthetic workloads, we also perform simulations based on Wikipedia traces.
\subsection{Real System Experiments on Testbed}
We deploy a testbed with a cluster of 10 application servers together with one load-generating server and one load-balancing server; all servers support the Intelligent Platform Management Interface (IPMI)~\cite{openipmi} for system-level power monitoring. Each application server runs the Apache web server. The load generator continuously sends web requests to the system according to real system traces (see Section~\ref{sec:evaluation} for further details).
\section{Evaluation}
\label{sec:experiment}
\subsection{Experimental Setup}
Our experiments were performed using HolDCSim, an event-driven simulator~\cite{HolDCSimHolisticSimulator}
that simulates interdependent asynchronous task executions and implements
custom server job scheduling and network routing algorithms. This allows us to
simulate a large number of servers and switches relatively easily. The simulator
lets us specify our own power model and sleep state transition mechanisms
and computes the overall energy consumption of the simulated system. The network is
configured using a fat-tree topology, as shown in Figure~\ref{figure12}. We
simulate two classes of real-world applications:
\emph{web service}, consisting of two large parent-child tasks (service times uniformly distributed around 500~ms), and
\emph{web search}, in which a parent search task queries five subtasks that run smaller search processes (CPU service times uniformly distributed around 100~ms). All network routing policies enforce a QoS threshold of 10X the total CPU time of all tasks in the job when selecting jobs.
We assume that each task can be executed on any server machine, and that tasks are co-located on the same server when enough processing cores are available. The communication patterns and DAGs for the two workloads are illustrated in Figures~\ref{fig:ws-graph} and~\ref{fig:wsv-graph}.
We carefully select delay timer values for the sleep state transitions~\cite{yaoWASPWorkloadAdaptive2017} of servers and network switches by trying each value in the search space: we first determine the delay timer value for sleep state 1 that yields the lowest energy consumption, then fix that value and search for the best delay timer for sleep state 2, and so on.
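This greedy, one-state-at-a-time search can be sketched as follows. The Python below is illustrative; `energy` stands in for a full simulator run and is an assumption.

```python
def tune_timers(num_states, candidates, energy):
    """Greedy coordinate search over delay timers: fix the best timer
    for sleep state 1, then search state 2 with state 1 held fixed, etc.
    `energy(timers)` returns the simulated energy for a timer prefix."""
    timers = []
    for _ in range(num_states):
        best = min(candidates, key=lambda t: energy(timers + [t]))
        timers.append(best)
    return timers
```

The search evaluates `num_states * len(candidates)` simulator runs instead of the exponential full cross-product, at the cost of possibly missing jointly optimal settings.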
\subsubsection{Job arrivals patterns}
\label{workloads}
We use two kinds of job arrivals in our study.
The arrival rate is given by $\lambda = (Num\_Servers \times Num\_Cores \times \rho)/(Avg\_TaskSize \times Num\_Tasks\_per\_Job)$. By targeting $\rho$ values of 15\%, 30\%, and 60\%, we derive the corresponding $\lambda$ values. Due to variation in the scheduling policy, task inter-dependency, randomized task sizes and arrival times, and sleep state transitions, the resulting server utilization may not exactly match the targeted levels.
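For concreteness, the rate computation can be written as a one-line function (illustrative Python; the parameter names mirror the formula above):

```python
def arrival_rate(num_servers, num_cores, rho, avg_task_size, tasks_per_job):
    """Job arrival rate (jobs/sec) that targets utilization rho,
    per lambda = (servers * cores * rho) / (task size * tasks/job)."""
    return (num_servers * num_cores * rho) / (avg_task_size * tasks_per_job)
```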
\emph{Markov Modulated Poisson Process (MMPP)}: Similar to the workload used in \cite{yaoWASPWorkloadAdaptive2017}, MMPP arrivals alternate between two states: a normal phase with an arrival rate similar to the Poisson workload, and a bursty phase where the arrival rate is 1.5X the normal-phase $\lambda$. We choose each bursty period to last 10 seconds between 20-second normal phases. The low, medium, and high load levels are obtained in the same fashion as for the basic Poisson arrival workload.
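A minimal sketch of such a two-state arrival-rate schedule follows (illustrative Python; the 20~s normal / 10~s burst phasing matches the description above, and the function name is hypothetical):

```python
def mmpp_rates(total_secs, lam, normal_secs=20, burst_secs=10, factor=1.5):
    """Per-second arrival rates alternating a normal phase at rate lam
    with a bursty phase at factor * lam (two-state MMPP schedule)."""
    rates = []
    while len(rates) < total_secs:
        rates += [lam] * normal_secs + [lam * factor] * burst_secs
    return rates[:total_secs]
```

In a full MMPP the per-second arrival counts would then be drawn from a Poisson distribution at each scheduled rate.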
\emph{Trace-based job arrival pattern}: We use a publicly available job arrival trace from NLANR~\cite{NationalLabApplied} as a real-world workload. We chose the NLANR trace because it inherently has high variation in the job inter-arrival rate. The low, medium, and high load levels are obtained by scaling the inter-arrival times between jobs in the original trace by a constant factor.
\begin{figure} [!h]
\centering
\captionsetup{font=small}
\subfloat[Steps in a web search request.]{ \includegraphics[scale=0.34]{figures/wsearchdraw.pdf}
\label{fig:ws-figure90}
}
\subfloat[DAG of a web search job.]{ \includegraphics[scale=0.15]{figures/wsearchDAG.pdf}
\label{fig:ws-dag}
}
\caption{Web search communication patterns (left) and DAG of a web search job (right).
Task 1 is executed by the master server, while tasks 2--6 are processed by different indexers; the master server communicates with all the indexers.}
\label{fig:ws-graph}
\end{figure}
\begin{figure} [!h]
\centering
\captionsetup{font=small}
\subfloat[Steps in a web service request.]{
\includegraphics[scale=0.34]{figures/wsdraw-2.pdf}
\label{fig:wsv-figure90}
}
\subfloat[DAG of a web service job.]{
\includegraphics[scale=0.15]{figures/wsDAG.pdf}
\label{fig:wsv-dag}
}
\caption{Web service communication patterns (top) and DAG of a web service job (bottom).
Task 1 is executed by the application server, while task 2 is assigned to the database server.}
\label{fig:wsv-graph}
\end{figure}
\subsubsection{Quality of Service Constraints}
We define the minimum time a job takes to complete as the sum of the computation times of each task in the job plus the time for all inter-task communications to finish at the fastest possible bandwidth in the network. For instance, for a two-task job with 500~ms of CPU cost per task and 10~MB of network communication over the 10~Gbps maximum link capacity between two nodes, the minimum job execution time is \textit{500~ms $\times$ 2 + 75~ms (data transfer time) = 1075~ms}. Similar to other works (e.g.,~\cite{yaoWASPWorkloadAdaptive2017}), we set the Quality of Service (QoS) deadline for the job scheduler to 10X the minimum job execution time, providing enough slack to accommodate multiple jobs and enable energy optimizations.
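The deadline computation reduces to a few lines, reproducing the worked example above (an illustrative sketch; the function name is hypothetical):

```python
def qos_deadline(task_cpu_ms, transfer_ms, slack=10):
    """QoS deadline = slack * (total task CPU time + data transfer time),
    i.e. slack times the minimum job execution time."""
    min_exec = sum(task_cpu_ms) + transfer_ms
    return slack * min_exec
```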
\subsection{Baseline policies and Evaluation methodology}
There are two major objectives: first, to show that smart use of line card sleep states with our proposed Algorithm~\ref{algorithm1} reduces power consumption in Popcorns-Pro compared to applying no power management policy on the switches; second, to show that our proposed Popcorns-Pro\xspace
Algorithm~\ref{algorithm2} can save further power compared to job placement algorithms that do not take both network and server status into consideration. We consider the four combinations of the server-specific and network-specific energy optimization policies below. As shown in Section~\ref{sec:motivation}, switch sleep states without any optimization can still yield some energy savings, and it is important to optimize these savings further.
\subsubsection{Server based policies}
The following two policies represent a common server selection policy and an energy-optimized server policy.
{\textbf{Random Server allocation}-\textit{SB}}
Following a traditional server load balancing approach, tasks are assigned randomly to servers and processing cores as they arrive, without any consideration for task dependency or core locality in the data center.
{\textbf{Workload Adaptive using Server Low-Power State Partitioning-\textit{WASP}}} This policy represents recent works~\cite{yaoWASPWorkloadAdaptive2017} that consolidate jobs onto fewer servers while keeping more servers in a sleeping state. These optimizations are typically developed by system developers without considering network energy optimization. In WASP, a set of servers is kept unused so that they can reach the most energy-efficient sleep state, while jobs are processed on the remaining servers. Servers are reassigned between the two sets depending on the measured input load on the entire server pool.
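A minimal sketch of WASP-style partitioning follows. This is illustrative Python, not the actual WASP implementation: the active set is sized for the measured load and the remainder is left free to sleep deeply.

```python
import math

def partition_servers(servers, measured_load, capacity_per_server):
    """Split the pool into an active set sized for the measured load
    and an idle set left free to reach the deepest sleep state."""
    need = math.ceil(measured_load / capacity_per_server)
    k = min(len(servers), max(1, need))   # keep at least one server active
    return servers[:k], servers[k:]
```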
\subsubsection{Network based policies}
The following are two approaches to network routing in the data center.
{\textbf{Shortest Path-\textit{SP}}} This baseline network path allocation uses Dijkstra's algorithm to forward flows between the two end points. It represents the traditional network routing approach, choosing the path with the fewest hops while also considering the QoS latency requirements of the current flow.
{\textbf{Elastic tree-\textit{ET}}} Recent work on improving network energy efficiency has considered various heuristics to consolidate flows onto fewer switches. The ElasticTree~\cite{hellerElasticTreeSavingEnergy2010} baseline represents one of the most commonly cited works in this area; it tries to keep fewer switches in the active state in highly redundant network topologies such as the fat tree. We implement the greedy bin-packing approach in our experiments, where the heuristic chooses the leftmost path at every switch exit to reach the destination. These approaches assume a fixed server task allocation and hence cannot fully optimize for efficiency.
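The leftmost-path rule amounts to picking the lexicographically smallest candidate path, packing flows toward one side of the topology so the remaining switches can sleep. A minimal sketch, assuming paths are encoded as lists of switch indices:

```python
def leftmost_path(candidate_paths):
    """ElasticTree-style greedy choice: among candidate paths, take the
    one that branches leftmost at every hop, i.e. the lexicographically
    smallest sequence of switch indices."""
    return min(candidate_paths)
```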
\input{baselines}
\begin{comment}
\begin{algorithm}
\SetAlgoNoLine
\caption{Server-Balanced Algorithm}
\label{algorithm3}
\KwIn{Pre-computed Dijkstra's shortest paths set $DijkstraPaths$ between any pair of network nodes, number of servers $N$ in DCN }
\KwOut{Job placement and corresponding routing path}
\While{task $t_i$ with task id $i$ of job $j$ arrives}
{
place $t_i$ on server $i\%N$;
then the interdependent task $t_k$ of $t_i$ with task id $k$ will be placed on server $k\%N$;
get the routing path $P$ from server $i\%N$ to server $k\%N$ from $DijkstraPaths$;
return server $i\%N$ and path $P$;
}
\end{algorithm}
\end{comment}
In simulation, we set the flow size to 100~KB, and the average job service time is randomly generated between 150~ms and 200~ms based on prior studies~\cite{meisnerPowerNapEliminatingServer}.
\begin{figure*}[!h]
\centering
\captionsetup{font=small}
\subfloat[Low Arrival Rate \newline $\lambda$= 8 Jobs/Sec]{
\includegraphics[width=0.33\textwidth]{figures/evaluation/service/W=mmpp_Q=100_FS=100_C=1_U=10.png}
\label{djikstra_h_random}
}
\subfloat[Medium Arrival Rate \newline $\lambda$ = 15 Jobs/Sec]{
\includegraphics[width=0.33\textwidth]{figures/evaluation/service/W=mmpp_Q=100_FS=100_C=1_U=25.png}
\label{elastic_tree_h_random}
}
\subfloat[High Arrival rate \newline $\lambda$ = 30 Jobs/Sec ]{
\includegraphics[width=0.33\textwidth]{figures/evaluation/service/W=mmpp_Q=100_FS=100_C=1_U=40.png}
\label{elastic_tree_h_wasp}
}
\caption{Energy savings comparison for different server and network path selection algorithms with MMPP-based job arrivals for the service-based 2-task model. Every 30 seconds there is a 10-second burst at 1.5x the arrival rate.}
\label{fig:mmpp-ser}
\end{figure*}
\begin{figure*}[!h]
\centering
\captionsetup{font=small}
\subfloat[Low Arrival Rate \newline Avg. Rate = 10 Jobs/Sec ]{
\includegraphics[width=0.33\textwidth]{figures/evaluation/service/W=trace_Q=10_FS=100_C=4_U=10.png}
\label{djikstra_h_random}
}
\subfloat[Medium Arrival Rate \newline Avg. Rate = 16 Jobs/Sec]{
\includegraphics[width=0.33\textwidth]{figures/evaluation/service/W=trace_Q=10_FS=100_C=4_U=25.png}
\label{elastic_tree_h_random}
}
\subfloat[High Arrival rate \newline Avg. Rate= 32 Jobs/Sec]{
\includegraphics[width=0.33\textwidth]{figures/evaluation/service/W=trace_Q=10_FS=100_C=4_U=55.png}
\label{elastic_tree_h_wasp}
}
\caption{Energy savings comparison for different server and network path selection algorithms with NLANR trace-based job arrivals for the service-based 2-task model.}
\label{fig:trace-ser}
\end{figure*}
\begin{figure*}[!h]
\centering
\captionsetup{font=small}
\subfloat[Low Arrival Rate \newline $\lambda$=8 Jobs/Sec]{
\includegraphics[width=0.33\textwidth]{figures/evaluation/search/W=mmpp_Q=10_FS=10_C=4_U=15.png}
\label{djikstra_h_random}
}
\subfloat[Medium Arrival Rate \newline $\lambda$=15 Jobs/Sec ]{
\includegraphics[width=0.33\textwidth]{figures/evaluation/search/W=mmpp_Q=10_FS=10_C=4_U=35.png}
\label{elastic_tree_h_random}
}
\subfloat[High Arrival Rate \newline $\lambda$=25 Jobs/Sec]{
\includegraphics[width=0.33\textwidth]{figures/evaluation/search/W=mmpp_Q=10_FS=10_C=4_U=55.png}
\label{elastic_tree_h_wasp}
}
\caption{Energy savings comparison for different server and network path selection algorithms with MMPP-based job arrivals for the search-based 6-task job model. Every 30 seconds there is a 10-second burst at 1.5x the average arrival rate.}
\label{fig:mmpp-search}
\end{figure*}
\begin{figure*}[!h]
\centering
\captionsetup{font=small}
\subfloat[Low Arrival Rate \newline Avg. Rate = 9 Jobs/Sec]{
\includegraphics[width=0.33\textwidth]{figures/evaluation/search/W=trace_Q=10_FS=10_C=4_U=15.png}
\label{djikstra_h_random}
}
\subfloat[Medium Arrival Rate \newline Avg. Rate = 15 Jobs/Sec]{
\includegraphics[width=0.33\textwidth]{figures/evaluation/search/W=trace_Q=10_FS=10_C=4_U=25.png}
\label{elastic_tree_h_random}
}
\subfloat[High Arrival Rate \newline Avg. Rate = 30 Jobs/Sec]{
\includegraphics[width=0.33\textwidth]{figures/evaluation/search/W=trace_Q=10_FS=10_C=4_U=50.png}
\label{elastic_tree_h_wasp}
}
\caption{Energy savings comparison for different server and network path selection algorithms with NLANR trace-based job arrivals for the search-based 6-task job model.}
\label{fig:trace-search}
\end{figure*}
\subsection{Evaluation Results}
\subsubsection{Energy Savings Comparison}
We compare the average energy consumption per job for different combinations of
network and server selection algorithms against the Popcorns-Pro\xspace selection algorithm. The
baselines combine the WASP and random server allocation policies with the
ElasticTree and Shortest Path network selection algorithms. As shown in
Figure~\ref{fig:mmpp-ser}, the Popcorns-Pro\xspace algorithm for the 2-task
service-based job model with random arrivals provides significant savings
compared to Shortest-Path-based policies. This is attributed to fewer switches
and servers being active. For the ElasticTree-based runs in
Figure~\ref{fig:mmpp-ser}, using WASP as the server allocation policy provides more
opportunities for network consolidation and hence additional savings. As we
increase the arrival rate to 60 jobs/sec, ElasticTree-based runs perform
closer to Popcorns-Pro\xspace, as there are fewer opportunities for savings. We note that due to the bursty
nature of MMPP arrivals (as discussed in Section~\ref{workloads}), the increased
delay in waking up additional switches leads to a slower flow transmission rate
and increased energy consumption. We conclude that server-only optimizations are
not always optimal when considering whole-data-center energy consumption.
Similar trends hold with NLANR trace-based job arrivals (Figure~\ref{fig:trace-ser}).
For the search-based workload discussed at the beginning of Section~\ref{sec:experiment}, the Popcorns-Pro\xspace algorithm still yields significant savings.
We also see WASP-based server optimization consistently performing better than random server allocation. This is attributed to the smaller communication size between each parent and child task and minimal time wasted waiting for network data to arrive for processing.
\subsection{Flow size sensitivity}
The goal of this experiment is to determine whether the routing path and server selection
of the Popcorns-Pro\xspace algorithm are resilient to increases in network flow sizes and
thereby increased network utilization. As shown in Figure~\ref{fig:flows}, the
Popcorns-Pro\xspace policy is able to select paths and servers that are least affected
by the increase in flow sizes. Shortest-path-based policies use the most
switches; with increasing flow sizes there are fewer chances for switches to go to sleep,
so these policies consume the most energy. ElasticTree's greedy bin packing of flow
consolidation benefits from some network consolidation, but its non-optimal
server selection leads to longer flow communication times, so its energy
consumption rises sharply.
\begin{figure}[h]
\captionsetup{font=small}
\centering\includegraphics[width=.85\linewidth]{figures/flowsize/W=user_specify_U=11_Q=10.png}
\caption{Energy consumption/Job for various flow sizes for different server
scheduling and network routing algorithms. The experiment involves a 1024
server fat-Tree topology with WebService jobs arriving randomly with a Poisson random
variable value $\lambda$ = 25 jobs/sec and 10X QoS threshold.}
\label{fig:flows}
\end{figure}
\subsection{Job completion latency Comparison}
\begin{figure}[!h]
\captionsetup{font=small}
\includegraphics[width=1\linewidth]{figures/cdf/cdf33.png}\caption{Cumulative distribution function (CDF)
of job latencies for different server scheduling and network routing
algorithms. The experiment involves a 1024-server fat-tree topology with
WebService-modeled jobs arriving randomly (Poisson) with $\lambda$ = 25
jobs/sec and a 10X QoS threshold.}
\label{cdf}
\end{figure}
Although all the algorithms discussed consider the QoS latency threshold of
10x the sum of the task sizes of all tasks in the job for network
routing path selection, the jobs under the Popcorns-Pro\xspace algorithm complete well
below the threshold in Figure~\ref{cdf}. Comparing WASP-based policies with the
server-balanced random selection scheme, fewer servers are utilized under WASP,
so transmission times are longer, resulting in higher latencies.
\begin{figure} []
\captionsetup{font=small}
\includegraphics[width=1\linewidth]{figures/energy_distribution/popcorns.pdf}
\caption{Component-wise energy distribution for Popcorns in Joules for 500
second execution of 1024 server Fat-Tree topology with WebService model jobs arriving randomly (Poisson distribution) with $\lambda$ = 25 jobs per second. The energy consumed at different sleep states of the switch is also shown. }
\label{fig:Popcorn-component}
\end{figure}
\begin{figure} []
\captionsetup{font=small}
\centering\includegraphics[width=1\linewidth]{figures/energy_distribution/wasp-et.pdf}\caption{Component-wise
energy distribution for WASP server scheduling policy with Elastic tree
as network routing algorithm in Joules for 500 second execution of 1024
server Fat-Tree topology with WebService model jobs arriving randomly (Poisson distribution) with $\lambda$=25 jobs per second. The energy consumed at different sleep states of the switch is also shown.}
\label{fig:WASP-component}
\end{figure}
\subsection{Energy Distribution}
In this section we illustrate how system architects can use our simulated
settings and power model to understand which hardware component consumes the most
energy. Figure~\ref{fig:Popcorn-component} shows the energy consumed by each component in the
switch power model in each switch sleep state for a typical
workload. This allows system architects to focus their limited resources
on optimizing a particular part of the system architecture. For instance, comparing
Figures~\ref{fig:Popcorn-component} and~\ref{fig:WASP-component}, we see that
most energy is consumed by the network processor in its deepest sleep state. The data
also indicate that a high-cost design change to sleep state S1 yields more benefit
under the Popcorns policy than under the WASP and ElasticTree policies.
\begin{comment}
\subsubsection{Real traces}
{\color{blue}Results from real traces}
\subsubsection{DNS Service}
Figure~\ref{figure100} and Figure~\ref{figure99} show the process of DNS service and its corresponding job DAG. In simulation, we set the flow size to be 100MB and the average job service time is randomly generated between 150ms and 200ms based on prior studies~\cite{meisner2009powernap}~\cite{he2013next}. Since our experiment is based on multi-task job, we define the service time as average task size (e.g. 75ms$\sim$100ms for a DNS task based on job DAG), and the system utilization rate is defined as \emph{service time}$*$\emph{arrival rate}.
\begin{figure*}
\centering
\captionsetup{font=small}
\subfloat[DNS average power]{
\includegraphics[scale=0.1]{figures/powerdns.pdf}
\label{fig:dnspower}
}
\subfloat[DNS latency, Utilization=0.1]{
\includegraphics[scale=0.1]{figures/dnsp1-0.pdf}
\label{fig:dns_pp_0.1_90th}
}
\subfloat[DNS latency, Utilization=0.3]{
\includegraphics[scale=0.1]{figures/dnsp3-0.pdf}
\label{fig:dns_pp_0.3_90th}
}
\subfloat[DNS latency, Utilization=0.6]{
\includegraphics[scale=0.095]{figures/dnsp6-0.pdf}
\label{fig:dns_pp_0.6_90th}
}
\caption{DCN executes 2000 Poisson-arrival DNS jobs under low, medium, and high system utilization respectively. Suffix $-CNS$ represents Cooperative Network-Server Algorithm and $-SB$ denotes Server-Balanced Algorithm.}
\label{fig:dns-exp}
\end{figure*}
Figure~\ref{fig:dnspower} shows the power savings corresponding to the execution of 2000 DNS jobs, compared to the baseline policies. In our experimental setup, if all the line cards and ports remain active, the switch power is 398.76~$W$. We can see that for each configuration, the average power of Popcorns is 233$\sim$312~$W$; that is, Popcorns achieves approximately 18\%$\sim$39\% network power savings, while the baseline policy also saves 20\%$\sim$34\% network power through smart use of both line card and port low-power states, compared to no power management on line cards and ports at all.
Additionally, we can see that on the server side, using the Cooperative Network-Server Algorithm~\ref{algorithm2}, Popcorns further consumes approximately 19\%$\sim$22\% less power than the Server-Balanced Algorithm. There is no power saving on the servers for the latter algorithm, since all the servers are kept active, the power of which is 92$W$~\ref{Serverpowermodel}. In this sense, we can conclude that intelligently scheduling tasks based on their interdependence and system status, together with the use of server C6 package sleep state further saves more power.
%
Figure~\ref{fig:dns_pp_0.1_90th}, ~\ref{fig:dns_pp_0.3_90th}, and ~\ref{fig:dns_pp_0.6_90th} show the CDF of job latency in DNS Service experiment. We can see that the $90^{th}$ percentile job latency is the best under low system utilization, and gradually increases as utilization increases. Prior study~\cite{yao2017wasp} shows that low system utilization is a more practical case in real scenarios. Thus, we note that our proposed power management policy together with job placement algorithm saves a large portion of power in DCN while maintaining the QoS demands.
\subsubsection{Web Search}
Figure~\ref{fig:ws-graph} shows the steps in processing a web search request and its corresponding job DAG. In simulation, we set the flow size to be 100MB, and average job service time is randomly generated between 20ms and 60ms based on prior studies~\cite{alizadeh2013pfabric}~\cite{hadjilambrou2015characterization}. Thus, the average web search task size is 10ms$\sim$30ms in our job model.
\begin{figure*}
\centering
\captionsetup{font=small}
\subfloat[Web search average power]{
\centering\includegraphics[scale=0.1]{figures/powerwsearch.pdf}
\label{fig:wsearchpower}
}
\subfloat[b][Web search latency, Utilization=0.1]{
\includegraphics[scale=0.1]{figures/wsearchp1-0.pdf}
\label{fig:ws.1_90th}
}
\subfloat[b][Web search latency, Utilization=0.3]{
\includegraphics[scale=0.1]{figures/wsearchp3-0.pdf}
\label{fig:ws_pp_0.3_90th}
}
\subfloat[b][Web search latency, Utilization=0.6]{
\includegraphics[scale=0.095]{figures/wsearchp6-0.pdf}
\label{fig:ws_pp_0.6_90th}
}
\caption{DCN executes 2000 Poisson-arrival web search jobs under low, medium, and high system utilization respectively. Suffix $-CNS$ represents Cooperative Network-Server Algorithm and $-SB$ denotes Server-Balanced Algorithm.}
\label{fig:ws-exp}
\end{figure*}
\begin{figure*}
\centering
\captionsetup{font=small}
\subfloat[Web service average power]{
\centering\includegraphics[scale=0.1]{figures/powerws.pdf}
\label{fig:wsvpower}
}
\subfloat[b][Web service latency, Utilization=0.1]{
\includegraphics[scale=0.1]{figures/wsp1-0.pdf}
\label{fig:wsv_pp_0.1_90th}
}
\subfloat[b][Web service latency, Utilization=0.3]{
\includegraphics[scale=0.1]{figures/wsp3-0.pdf}
\label{fig:wsv_pp_0.3_90th}
}
\subfloat[b][Web service latency, Utilization=0.6]{
\includegraphics[scale=0.095]{figures/wsp6-0.pdf}
\label{fig:wsv_pp_0.6_90th}
}
\caption{DCN executes 2000 Poisson-arrival web service jobs under low, medium, and high system utilization respectively. Suffix $-CNS$ represents Cooperative Network-Server Algorithm and $-SB$ denotes Server-Balanced Algorithm.}
\label{fig:wsv-exp}
\end{figure*}
Figure~\ref{fig:wsearchpower} shows the power savings for the execution of 2000 web search jobs, compared to the baseline policies. For each configuration, Popcorns achieves $\sim$42\% network power savings under high system utilization, and even under low utilization it achieves $\sim$16\% network power savings. For the Server-Balanced case, the numbers are 33\% and 18\% respectively, which demonstrates the benefit of smart use of the low-power states of switch components.
Moreover, on the server side, Popcorns saves approximately 13\% more power than the Server-Balanced Algorithm, which achieves no power savings on the servers. These results are consistent with the previous DNS experiment.
%
Figures~\ref{fig:ws.1_90th}, \ref{fig:ws_pp_0.3_90th}, and \ref{fig:ws_pp_0.6_90th} show the CDF of job latency in the Web Search experiment. Here too, the $90^{th}$ percentile job latency is lowest under low system utilization and gradually increases as utilization grows.
%
\subsubsection{Web Service}
Figure~\ref{fig:wsv-graph} shows the processing of a web service response and its corresponding job DAG. In simulation, we set the flow size to 100MB, and the average job service time is randomly generated between 2ms and 10ms based on prior studies~\cite{wu2013adaptive,meisner2009powernap}. Therefore, the average web service task size is 1ms$\sim$5ms in our job model.
Figure~\ref{fig:wsv-exp} shows the power consumption for the execution of 2000 web service jobs, which further reinforces the conclusions of the previous experiments and the advantage of our proposed algorithms. For a DCN executing small jobs such as \emph{web service}, smart use of the server sleep states also saves $\sim$7\% power, and both configurations achieve $\sim$27\% network power savings under all three system utilization levels.
\end{comment}
\subsection{Modeling Jobs}
We model the execution of jobs at the server side as follows. A job consists of multiple inter-dependent tasks, with both spatial and temporal inter-dependence. Application tasks are typically executed by specific server types; for example, a web service request is first processed by an application or web server, while a search request is processed by a database server. This kind of task relationship is called spatial inter-dependence. In terms of temporal inter-dependence, a task cannot start executing until all of its `parent' tasks have finished execution and their results have been communicated to the server assigned to the task. A job is considered finished when all of its tasks finish execution. As for servers, each server has multiple cores, and one core can process only one task at a time. In this paper, we configure a single-core processor for each server.
Each job $j$ can be represented as a directed acyclic graph (DAG) $G^j(V^j, E^j)$, where $V^j$ is the set of tasks of job $j$. If there is a link in the DAG from task $i^j$ to task $r^j$, then task $i^j$ must finish and communicate its results to task $r^j$ before $r^j$ can start processing. Each task $v^j \in V^j$ has a workload requirement, namely a task size or execution-time requirement $w^j_v$ for the core. Each link $l \in E^j$ has an associated data transfer size $D^j_l$, which determines the bandwidth requirement to transfer the result over link $l$ (from the task at the head of the DAG link to the task at the tail) when assigned a network flow. Figure~\ref{figure1} shows an example of a job DAG.
\begin{figure}
\captionsetup{font=small}
\centering\includegraphics[width=0.15\textwidth]{figures/DAG.pdf}\caption{Example of a job DAG. Nodes 1--6 denote tasks 1--6 respectively. Numbers next to the tasks represent task sizes, while numbers on the links represent flow sizes.}
\label{figure1}
\end{figure}
\begin{figure}
\captionsetup{font=small}
\centering\includegraphics[width=0.5\textwidth]{figures/fat_tree.png}
\caption{Fat tree topology.}
\label{figure12}
\end{figure}
\section{Motivation}
\subsection{Need for Switch sleep states}
\label{sec:motivation}
In this section, we motivate the need for multiple sleep states in switches. As discussed in Sections~\ref{sec:transition} and \ref{sec:switchpower}, switch power consumption can be temporarily reduced by selectively turning off parts of the line cards and waking them up when required. For components without memory or state, the wakeup latency is simply the time it takes to initialize the component. Other components, such as the DRAM storing address-forwarding tables, must be reinitialized from the host line card. In our study, we find that having multiple sleep states lets a switch trade off various levels of wakeup latency against energy savings. This scheme is especially useful when an idle period is too short to amortize the energy wasted waking from a single, deepest sleep state, which would otherwise also delay network transmission.
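This trade-off can be sketched as a break-even computation: a sleep state pays off only when the power saved over the idle period exceeds the wakeup cost. The sketch below uses illustrative power numbers loosely inspired by our line card model; the wakeup energies are hypothetical placeholders, not measured values.

```python
def breakeven_idle_time(p_active_w, p_sleep_w, e_wakeup_j):
    """Shortest idle period (seconds) for which entering this sleep
    state saves energy: (p_active - p_sleep) * t must exceed e_wakeup."""
    return e_wakeup_j / (p_active_w - p_sleep_w)

def best_state(idle_time_s, p_active_w, states):
    """Pick the state with the lowest total energy over the idle period.
    `states` maps name -> (sleep power in W, wakeup energy in J)."""
    energy = {"active": p_active_w * idle_time_s}
    for name, (p_sleep, e_wake) in states.items():
        energy[name] = p_sleep * idle_time_s + e_wake
    return min(energy, key=energy.get)
```

With one shallow and one deep state, short idle periods favor staying active, medium ones favor the shallow state, and only long ones amortize the deep state's wakeup energy, which is exactly why a ladder of states beats a single deepest state.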
\begin{figure}[!h]
\captionsetup{font=small}
\centering\includegraphics[width=0.4\textwidth]{figures/motivation/service.png}\caption{Network energy consumption normalized per job for a 1024-server FatTree topology network. Jobs containing two dependent tasks of 500ms CPU duration each arrive at 30 jobs/second. The switch sleep-state and latency models are constructed using the power model defined in Section~\ref{sec:switchpower}.}
\label{motivation12}
\end{figure}
\section{Design}
\subsection{Problem Definition}
\label{sec:problem solution}
In this section, we formulate the joint server and data center network power
optimization as a constrained optimization problem using the switch power model
and job model defined above. First, to obtain the power consumption of a switch
$k$, let the numbers of active line cards and ports be $\zeta^{active}_k$ and
$\rho^{active}_k$ respectively, and the numbers of line cards in their respective sleep states and ports in LPI mode be $\zeta^{sleep}_k$ and $\rho^{LPI}_k$ respectively. Since the total power of a switch is the sum of the base power $P_k^{base}$, the power of the ports, and the power of the line cards, we have $P_k^{switch}=P_k^{base}+\zeta^{active}_k \cdot P_{linecard}^{active}+\rho^{active}_k \cdot P_{port}^{active}+\zeta^{sleep}_k \cdot P^{sleep}_{linecard}+\rho^{LPI}_k \cdot P_{port}^{LPI}$.
To calculate the power consumption of server $i$, since our system considers the core as the basic processing unit of a server, the total power of a server is the sum of the idle power $P_i^{idle}$ and a dynamic power that is linear in the number of active cores $C_i^{on}$. We thus have $P_i^{server}=P_i^{idle}+C_i^{on} \cdot P_{core}^{on}$, where $P_{core}^{on}$ denotes the power consumed by an active core.
The joint power optimization problem can then be formulated as minimizing
$\sum_{k=1}^{N_{switch}}{P_k^{switch}}+\sum_{i=1}^{N_{server}}{P_i^{server}}$
subject to both network-side and server-side constraints, such as link capacity,
computation resources, etc.
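The objective decomposes into the two per-device formulas above, which makes it straightforward to evaluate for any candidate placement. A minimal sketch (the function and parameter names are ours, and the sample values below are illustrative, not our evaluated configurations):

```python
def switch_power(p_base, lc_active, port_active, lc_sleep, port_lpi,
                 p_lc_active, p_port_active, p_lc_sleep, p_port_lpi):
    """P_k^switch = base power + active/sleeping line cards + active/LPI ports."""
    return (p_base
            + lc_active * p_lc_active + port_active * p_port_active
            + lc_sleep * p_lc_sleep + port_lpi * p_port_lpi)

def server_power(p_idle, cores_on, p_core_on):
    """P_i^server = idle power + dynamic power linear in active cores."""
    return p_idle + cores_on * p_core_on

def total_power(switches, servers):
    """Objective of the joint optimization: sum over all switches and servers.
    `switches` and `servers` hold the argument tuples of the functions above."""
    return (sum(switch_power(*s) for s in switches)
            + sum(server_power(*v) for v in servers))
```

An optimizer would minimize `total_power` over the placement decisions that determine how many line cards, ports, and cores are active, subject to the link-capacity and compute constraints.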
\begin{table*}
\centering
\scriptsize
\caption{Estimating the energy consumption}
\label{tab:estimatingenergy}
\arrayrulecolor{black}
\begin{tabular}{|l|l|l|}
\hline
\multicolumn{2}{|c|}{\textbf{Energy Component} } & \textbf{Description} \\
\hline
\multicolumn{3}{|l|}{\textbf{~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ Server Energy} } \\
\hline
\multirow{2}{*}{\textbf{Candidate server} } & Server Task Execution Energy & Energy consumed when the task is being executed \\
\arrayrulecolor[rgb]{0.651,0.651,0.651}\cline{2-3}
 & Server waiting for transmission & Energy consumed while the task is waiting for communication to complete. \\
\hline
\multirow{3}{*}{\textbf{Server (If sleeping)} } & Wakeup Energy during transition & Energy spent during wakeup. \\
\cline{2-3}
 & Core Energy savings loss & If the core was sleeping, the energy savings lost by returning it to the active state \\
\cline{2-3}
 & Package Energy savings loss & If this is the first core being woken up, the CPU package sleep savings are lost. \\
\arrayrulecolor{black}\hline
\multicolumn{3}{|l|}{\textbf{~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~Network Energy} } \\
\hline
\multirow{3}{*}{\begin{tabular}[c]{@{}l@{}}\textbf{For Each Switch in the }\\\textbf{candidate flow path} \end{tabular}} & \begin{tabular}[c]{@{}l@{}}Network Energy cost based \\on allocated bandwidth \end{tabular} & \begin{tabular}[c]{@{}l@{}}Every flow is allocated a bandwidth based on the network congestion,\\the link capacity of the entire flow path, and the flow's minimum\\bandwidth requirement to meet QoS. \end{tabular} \\
\arrayrulecolor[rgb]{0.651,0.651,0.651}\cline{2-3}
 & Wakeup Energy during transition & Energy spent transitioning to the active state \\
\cline{2-3}
& Line card Active energy & If the Linecard is sleeping, the energy cost is the full active power consumption. \\
\hline
\begin{tabular}[c]{@{}l@{}}\textbf{For Every flow in common }\\\textbf{path of the candidate path} \end{tabular} & \begin{tabular}[c]{@{}l@{}}Energy savings due to decreased \\bandwidth for every other flow \end{tabular} & \begin{tabular}[c]{@{}l@{}}For every other switch in the current flow path whose bandwidth is reduced,\\calculate the energy spent on it. \end{tabular} \\
\arrayrulecolor{black}\hline
\end{tabular}
\end{table*}
\subsubsection{Supervisor modules}
As shown in Figure~\ref{figure13}, the control plane, where all
forwarding decisions are made, is implemented by the SUP (Supervisor) module, which consists
of the Route Processor (OSI layer 3 functions) and the Switch Processing card (OSI
layer 2 functions).
The Cisco SUP2E supports up to 32 GB of DRAM (core switches).
According to the Cisco power calculator~\cite{CiscoPowerCalculator}, a route
processor card consumes up to 69 W. We break down the components of the route
processing card in Table~\ref{tab:switchchassis}.
\subsubsection{Line Card Power Model}
\label{sec:switcharch}
Due to the lack of detailed power models for commercial switches, we propose and
derive our switch power model based on the available literature and memory power
modeling tools from Micron~\cite{MicronMemoryPower}. A production line card consists of
several components, such as ASICs, TCAMs, DRAM memory, and ports. Our power model for each component is explained below.
\begin{figure}
\captionsetup{font=small}
\centering\includegraphics[width=1.2\linewidth]{figures/Switcharch.png}\caption{Illustration of Switch architecture.} \label{figure13}
\end{figure}
\begin{table*}
\centering
\small
\setlength{\extrarowheight}{0pt}
\addtolength{\extrarowheight}{\aboverulesep}
\addtolength{\extrarowheight}{\belowrulesep}
\setlength{\aboverulesep}{0pt}
\setlength{\belowrulesep}{0pt}
\caption{Single Linecard Power model for an Edge level Switch with 1 Gbps maximum bandwidth}
\label{tab:linecardpower}
\resizebox{\linewidth}{!}{%
\begin{tabular}{llllll} \toprule
& \multicolumn{1}{c}{\begin{tabular}[c]{@{}c@{}}\textbf{Active }\\\textbf{(1 Gbps~Bandwidth )}\end{tabular}} & \textbf{Low power state 1} & \textbf{Low power state 2} & \textbf{Low Power state 3} & \begin{tabular}[c]{@{}l@{}}\textbf{Line Card~ -}\\\textbf{Deepest Sleep State~}\end{tabular} \\ \hline
\begin{tabular}[c]{@{}l@{}}Packet Forwarding\\~(Forwarding and \\replication Engines)\end{tabular} & 165W & 132W & 66W & 33W & 0W \\
\rowcolor[rgb]{0.882,0.882,0.882} & \begin{tabular}[c]{@{}>{\cellcolor[rgb]{0.882,0.882,0.882}}l@{}}Active-Max Clock\\frequency\end{tabular} & \begin{tabular}[c]{@{}>{\cellcolor[rgb]{0.882,0.882,0.882}}l@{}}DVFS savings for\\the network processor.\end{tabular} & \begin{tabular}[c]{@{}>{\cellcolor[rgb]{0.882,0.882,0.882}}l@{}}Clock gating the\\lookup~caches in\\the forwarding\\engines, Replication\\engines clock gated.\end{tabular} & \begin{tabular}[c]{@{}>{\cellcolor[rgb]{0.882,0.882,0.882}}l@{}}Forwarding engine's \\Core clock and bus \\stopped.\end{tabular} & \begin{tabular}[c]{@{}>{\cellcolor[rgb]{0.882,0.882,0.882}}l@{}}Architectural state\\flushed to DRAM\\and Forwarding\\engine and replication\\engines are power gated.\end{tabular} \\
\begin{tabular}[c]{@{}l@{}}VoQ (M)\\(INPUT + OUTPUT)(max)\end{tabular} & \begin{tabular}[c]{@{}l@{}}15.5W +\\2 x 6 mW per MB of flow\end{tabular} & \begin{tabular}[c]{@{}l@{}}15.5W+\\1 x 6mW per MB flow\end{tabular} & 4W & 0 W & 0 W \\
\rowcolor[rgb]{0.882,0.882,0.882} & \begin{tabular}[c]{@{}>{\cellcolor[rgb]{0.882,0.882,0.882}}l@{}}Both Input and\\Output buffers active\end{tabular} & \begin{tabular}[c]{@{}>{\cellcolor[rgb]{0.882,0.882,0.882}}l@{}}Output VoQ buffer\\turned off\end{tabular} & \begin{tabular}[c]{@{}>{\cellcolor[rgb]{0.882,0.882,0.882}}l@{}}DRAM clock gated\\and contents lost.\end{tabular} & DRAM Power gated & DRAM power gated \\
TCAM(max) & \begin{tabular}[c]{@{}l@{}}12W+ \\2.6 mW per MB of flow\end{tabular} & \begin{tabular}[c]{@{}l@{}}12W+2.6 mW per MB\\of flow\end{tabular} & \begin{tabular}[c]{@{}l@{}}12W+2.6 mW per MB\\of flow\end{tabular} & 12W & 0W \\
\rowcolor[rgb]{0.882,0.882,0.882} & \begin{tabular}[c]{@{}>{\cellcolor[rgb]{0.882,0.882,0.882}}l@{}}TCAM active and\\synchronized\end{tabular} & \begin{tabular}[c]{@{}>{\cellcolor[rgb]{0.882,0.882,0.882}}l@{}}TCAM Active and\\synchronized\end{tabular} & \begin{tabular}[c]{@{}>{\cellcolor[rgb]{0.882,0.882,0.882}}l@{}}TCAM active and\\synchronized\end{tabular} & \begin{tabular}[c]{@{}>{\cellcolor[rgb]{0.882,0.882,0.882}}l@{}}TCAM stopped,\\synchronized,\\and clock gated.\end{tabular} & \begin{tabular}[c]{@{}>{\cellcolor[rgb]{0.882,0.882,0.882}}l@{}}TCAM-routing\\tables flushed and\\power gated\end{tabular} \\
Interconnect Fabric interface & 23W & 23W & 23W & 0 W & 0 W \\
\rowcolor[rgb]{0.882,0.882,0.882} & \begin{tabular}[c]{@{}>{\cellcolor[rgb]{0.882,0.882,0.882}}l@{}}Active for Control\\plane operations\end{tabular} & \begin{tabular}[c]{@{}>{\cellcolor[rgb]{0.882,0.882,0.882}}l@{}}Active for Control\\plane operations\end{tabular} & \begin{tabular}[c]{@{}>{\cellcolor[rgb]{0.882,0.882,0.882}}l@{}}Active for Control\\plane operations.\end{tabular} & \begin{tabular}[c]{@{}>{\cellcolor[rgb]{0.882,0.882,0.882}}l@{}}Power gated,\\Control plane\\operation stopped\end{tabular} & \begin{tabular}[c]{@{}>{\cellcolor[rgb]{0.882,0.882,0.882}}l@{}}Power gated,\\Control plane\\operation stopped\end{tabular} \\
Host Processor & 24 W & 22 W & 9 W & 9 W & 3 W \\
\rowcolor[rgb]{0.882,0.882,0.882} & \begin{tabular}[c]{@{}>{\cellcolor[rgb]{0.882,0.882,0.882}}l@{}}Active and running\\Linecard OS\end{tabular} & \begin{tabular}[c]{@{}>{\cellcolor[rgb]{0.882,0.882,0.882}}l@{}}Active and running\\Linecard OS, with\\DVFS\end{tabular} & C3 state. Halt mode. & C3 state. Halt mode & \begin{tabular}[c]{@{}>{\cellcolor[rgb]{0.882,0.882,0.882}}l@{}}C5 state: deeper\\sleep state\end{tabular} \\
Ports & 29 W & 15W & 4 W & 4 W & 4 W \\
\rowcolor[rgb]{0.882,0.882,0.882} & All 24 ports active & \begin{tabular}[c]{@{}>{\cellcolor[rgb]{0.882,0.882,0.882}}l@{}}Output ports in Low\\Power Idle- Wake on Arrival\end{tabular} & \begin{tabular}[c]{@{}>{\cellcolor[rgb]{0.882,0.882,0.882}}l@{}}All ports in Low\\Power Idle-Wake\\on Arrival\end{tabular} & \begin{tabular}[c]{@{}>{\cellcolor[rgb]{0.882,0.882,0.882}}l@{}}All ports in Low\\Power Idle- Wake on Arrival\end{tabular} & \begin{tabular}[c]{@{}>{\cellcolor[rgb]{0.882,0.882,0.882}}l@{}}All ports in Low\\Power Idle- Wake on Arrival\end{tabular} \\ \hline
\begin{tabular}[c]{@{}l@{}}\textbf{Max Power Consumption~~}\\\textbf{in each state (}\\\textbf{1 Gbps when active)}\end{tabular} & \textbf{310 W} & \textbf{250 W} & \textbf{151 W} & \textbf{45 W} & \textbf{7 W} \\ \bottomrule
\end{tabular}
}
\end{table*}
\begin{table}
\centering
\scriptsize
\caption{Switch Chassis power model.}
\label{tab:switchchassis}
\begin{tabular}{lll}
\toprule
\multicolumn{1}{c}{\begin{tabular}[c]{@{}c@{}}\textbf{Switch Chassis }\\\textbf{components}\end{tabular}} & \multicolumn{1}{c}{\begin{tabular}[c]{@{}c@{}}\textbf{At-least}\\\textbf{One}\\\textbf{Line card~}\\\textbf{Active}\end{tabular}} & \multicolumn{1}{c}{\begin{tabular}[c]{@{}c@{}}\textbf{All Line }\\\textbf{Cards}\\\textbf{in OFF}\\\textbf{state}\end{tabular}} \\
\hline
\multicolumn{3}{c}{\textbf{Route processing card}} \\
\hline
Hostprocessor & 24W & 0 \\
DRAM-3GB & 32.5W & 0 \\
Persistent Storage & 10W & 0 \\
\hline
\multicolumn{3}{c}{\textbf{\textbf{Switch Interconnect card}}} \\
\hline
Switch Fabric and~~I/0 (40\%load) & 85W & 0 \\
Scheduler ASIC & 24W & 0 \\
\hline
\multicolumn{3}{c}{\textbf{Chassis Controller}} \\
\hline
Host-Processor & 24W & 24W \\
DRAM-3GB & 12W & 12W \\
\hline
\multicolumn{3}{c}{\textbf{Power Proportional Chassis Components}} \\
\hline
Power supply losses~per Active card (25\%) & 93W & 5W \\
Cooling per Active Line~\textasciitilde{}Card (20\%) & 77W & 4W \\
\bottomrule
\end{tabular}
\end{table}
\textbf{1. ASICs/Network Processors:} The data plane operations of the switch, such as
parsing packet contents to read the header, looking up routing tables, and
forwarding data to the corresponding destination port, are performed by the
network processor. The processing is divided between two functional components:
the Forwarding Engine and the Replication Engine. The replication engine separates the
header and data parts of a network packet and passes the header to the forwarding
engine; it also reassembles the packet after it is processed
by the forwarding engine and, in multicast communication, duplicates packets to
different destination ports. The forwarding engine is the decision-making
component of the line card, and it uses the locally synchronized routing
tables and the QoS and ACL lookups. According to Wobker's report~\cite{wobkerPowerConsumptionHighend2012}, this
consumes 52\% of the total power in enterprise Cisco line cards. Accordingly,
the ASIC/network processor's power consumption is computed to be 165 W.
Following the studies by Iqbal et al.~\cite{iqbalEfficienttrafficaware2012} and Luo et al.,
we construct the energy-saving sleep states by drawing parallels to the design
of sleep states in general-purpose processors. In the first idle state, the
clock can be set to the lowest frequency, thereby yielding 20\% savings. In
the first sleep state, the lookup caches in the forwarding engines can be flushed
and clock gated along with the replication engines. In the next sleep state, the core processing clock and processor bus are clock gated.
In the deepest sleep state, the entire architectural state of the processor, including runtime data from the Forwarding Information Base rules and QoS classification counters, is written to the line card DRAM, and the processor is power gated.
\textbf{2. VoQ (DRAM memory and SRAM buffer):} In Cisco line cards, the Virtual Output
Queuing memory provides the buffering and queuing functions that manage
QoS, packet replication, and congestion control. The ingress DRAM buffers incoming
packets, and the egress DRAM buffer stores the data payload while the header
is being processed by the forwarding engine. The active power consumption of
DRAM depends on the frequency of accesses, while the leakage/static power depends on
the transistor technology. The Micron power calculator~\cite{MicronMemoryPower} for
RLDRAM3 (Reduced Latency DRAM) shows a power consumption of 1571 mW per 1.125 GB
of memory when active and 314 mW when in sleep.
There is also a high-speed SRAM buffer that acts as a write-back cache for the
DRAM memory.
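The per-device Micron figures can be scaled to a full VoQ buffer. The sketch below assumes power scales linearly with the number of 1.125 GB devices, which is our simplifying assumption rather than a measured relationship:

```python
import math

# Micron RLDRAM3 figures cited above, per 1.125 GB of memory
ACTIVE_MW = 1571
SLEEP_MW = 314
GB_PER_DEVICE = 1.125

def voq_dram_power_mw(capacity_gb, sleeping=False):
    """Estimate VoQ DRAM power (mW) by counting whole devices and
    scaling the per-device active or sleep power linearly."""
    devices = math.ceil(capacity_gb / GB_PER_DEVICE)
    return devices * (SLEEP_MW if sleeping else ACTIVE_MW)
```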
\textbf{3. TCAM (Ternary Content Addressable Memory):} The TCAM structure is used by the
L3 routing function to store routing table entries synchronized locally from
the route processing card. A typical 4.5 Mb TCAM structure, used to offload high-speed packet lookup, consumes 15 W of power~\cite{guoResistiveTCAMAccelerator2011}. We model the static leakage power of a 4.5 Mb CAM structure using Cacti~\cite{balasubramonianCACTINewTools2017}, which estimates the power consumed during the idle sleep period when the memory is not accessed.
\textbf{4. Line card interconnect fabric:} The line card communicates with the chassis
bus through the interconnect interface. The line card fabric interconnect consumes 23 W in the active power state~\cite{panZerotimeWakeupLine2016}.
\textbf{5. Host processor and local DRAM:} Each line card includes a host processor that is used during the line card boot and initialization process to copy routing table information from the switch fabric card. The processor keeps running in sleep mode to keep the routing tables synchronized and to wake up the line card on packet arrival. We assume a 30\% power reduction due to dynamic frequency scaling during line card sleep~\cite{liuSleepScaleRuntimeJoint2014}.
\textbf{6. Ports:} To tackle the energy consumption of network equipment, the IEEE 802.3az standard introduces the Low Power Idle (LPI) mode for Ethernet ports, which is used when there is no data to transmit rather than keeping the port in the active state at all times~\cite{gunaratneReducingEnergyConsumption2008}. The idea behind LPI is to periodically refresh the port and wake it up when there is data to be transmitted; the wakeup duration is usually small.
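The LPI duty-cycle behavior can be captured with a simple energy model: a port draws active power while transmitting (and during wakeups/refreshes) and LPI power otherwise. The per-port power values used below are illustrative assumptions, not 802.3az-specified figures:

```python
def port_energy_j(duration_s, busy_frac, p_active_w, p_lpi_w,
                  wakeups=0, t_wake_s=0.0):
    """Energy of one 802.3az port over `duration_s`: active power while
    transmitting plus wakeup time billed at active power, LPI otherwise."""
    active_s = busy_frac * duration_s + wakeups * t_wake_s
    idle_s = max(duration_s - active_s, 0.0)
    return active_s * p_active_w + idle_s * p_lpi_w
```

This makes clear why LPI helps mostly on lightly loaded links: savings shrink as the busy fraction or the wakeup frequency grows.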
Apart from the line cards, we model a constant baseline power of 120 W for the rest of the switch in the ON state, which includes the switch supervisor module, the backplane, the cooling systems, and the switch fabric card, based on Pan et al.~\cite{panZerotimeWakeupLine2016}. The wakeup latency for each of the states is derived from the sleep and wakeup states conceptualized in Section~\ref{sec:transition}.
\subsection{Modeling switch sleep state transition and wake up latency}
\label{sec:transition}
\begin{figure}
\captionsetup{font=small}
\centering\includegraphics[width=.5\textwidth]{figures/switchtransition.pdf}\caption{Illustration of a Switch Sleep and Wakeup Transition process with 3 sleep states.}
\label{figure13-sleepcycle}
\end{figure}
As shown in Figure~\ref{figure13-sleepcycle}, we model the sleep policy for a switch based on the power model described in Section~\ref{sec:switchpower}. The line card unit of a switch is considered active when processing any output traffic, and idle when there is no incoming traffic for a certain period of time. In the first sleep state, the switch stops processing incoming packets but continues to receive new packets into the packet buffer, which requires the DRAM to remain on. The TCAM is not flushed, to avoid the wakeup latency associated with re-populating the routing tables. In the second sleep state S2, the switch's energy consumption is reduced by setting the ASIC/network processor to a clock-gated state. In the third sleep state, additional savings are achieved by turning off the DRAM and placing the NP/ASIC into a deeper sleep state. In the last sleep state, we clock gate the TCAM memories storing the routing tables, and the host processor and switch interconnect are put into sleep states.
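The progressive demotion through sleep states can be sketched as an idle-timer ladder: each time the idle timer crosses the next threshold, the line card drops one state deeper. The thresholds below are hypothetical placeholders; only their ordering reflects the policy described above.

```python
# (state name, line card power in W, idle-time entry threshold in s)
# Powers loosely follow our line card table; thresholds are hypothetical.
SLEEP_STATES = [("S1", 250.0, 0.001),
                ("S2", 151.0, 0.010),
                ("S3", 45.0, 0.100),
                ("S4", 7.0, 1.000)]

def line_card_state(idle_s, p_active=310.0):
    """Demote the line card one state deeper each time the idle timer
    crosses the next threshold; return (state name, power draw in W)."""
    state, power = "active", p_active
    for name, p_state, threshold in SLEEP_STATES:
        if idle_s >= threshold:
            state, power = name, p_state
    return state, power
```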
\section{Power models}
\label{sec:systemdesign}
\label{sec:system-model}
\subsection{Conceptual Network Switch Low Power States}
\label{sec:switchpower}
\input{switchpower}
\subsection{Server Low-power States and Power Model}
\label{sec:serverpower}
With the focus on improving energy efficiency when servers are under-utilized, low-power states were created
with the goal of approaching energy-proportional computing. For standardization across computing platforms, the Advanced Configuration
and Power Interface (ACPI)~\cite{acpi} specification gives operating system developers and hardware vendors a common, platform-independent interface for energy savings.
ACPI has historically been well supported by hardware vendors such as Intel and IBM~\cite{wareArchitectingPowerManagement2010}. ACPI uses global states, \emph{Gx}, to represent states of the
entire system that are visible to the user. We discuss the component-wise
breakdown of such system states in Table~\ref{tab:serverpowermodel}.
Although processor sleep states can significantly reduce the power consumed by the processor, servers can still consume a considerable amount of power, as the platform may remain active. As a result, to achieve further energy savings, \emph{system sleep states}, which also put platform components into low-power states, are considered for server farm power management. We consider an additional full-system sleep state.
\begin{itemize}
\item CPU(s) Power: Low-power states are an important feature
widely supported in today's processors. In a multicore processor, each core
can have architectural components or features disabled or turned off
by gating the clock signal or turning off power; such states are
called C-states. A higher-numbered C state or S state typically indicates more
aggressive energy savings but also longer wakeup latencies.
In a multi-core processor, low-power sleep states are supported at both the core
level and the package level. When all cores become idle and reside in some
\emph{core C state}, the entire package can be demoted to a greater power-saving state, denoted the \emph{package sleep state}, which further reduces power.
\item Chipset: The chipset and board consume energy to support the main
chipset, voltage regulators, bus control chips, and interfaces for peripheral
devices. According to Intel specifications~\cite{IntelX99Chipset}, the chipset
has a TDP of 6.5 W. The minimal power savings in the deeper sleep states of the
system can be ignored.
\item DRAM: The DRAM power consumption can be separated into three parts:
read/write energy; activate and precharge energy to open the specific row buffer being operated
upon; and refresh power to continuously perform the refresh cycle.
Characterizing DRAM power depends on several factors:
(1)~the DRAM technology (DDR3, DDR4, LPDDR3L, etc.), each with its own power and
performance characteristics;
(2)~the workload, in particular the number of last-level cache misses: an application whose working
set does not fit into the CPU cache can incur many misses;
(3)~manufacturer optimizations: the DRAM powerdown mode can be implemented
differently by different manufacturers, and the DRAM self-refresh mode allows the
DRAM to be clock gated from the external memory controller during the self-refresh cycle; and
(4)~the memory clock speed: a higher clock improves performance but consumes more power.
We used the Micron power calculator to arrive at the power consumption of the
memory components, as shown in Table~\ref{tab:serverpowermodel}.
\item Power supply: We consider the AC-DC energy conversion loss, which is
typically 10\% of the power consumed by the system.
\item Fans: The power consumption of a cooling fan is proportional to
the cube of the current fan speed.
\end{itemize}
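The component breakdown above rolls up into a simple server power estimate. The sketch below is illustrative (function and parameter names are ours); it applies the cubic fan-speed law and the 10\% PSU conversion loss described above to arbitrary component powers:

```python
def server_power_w(cpu_w, chipset_w, dram_w, disk_w, nic_w,
                   fan_max_w, fan_speed_frac, psu_loss=0.10):
    """Sum the component powers; fan power scales with the cube of the
    fan speed, and the power supply adds a ~10% AC-DC conversion loss."""
    fan_w = fan_max_w * fan_speed_frac ** 3
    subtotal = cpu_w + chipset_w + dram_w + disk_w + nic_w + fan_w
    return subtotal * (1.0 + psu_loss)
```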
\begin{table*}
\small \centering
\setlength{\extrarowheight}{0pt}
\addtolength{\extrarowheight}{\aboverulesep}
\addtolength{\extrarowheight}{\belowrulesep}
\setlength{\aboverulesep}{0pt}
\setlength{\belowrulesep}{0pt}
\caption{A Representative Server Power model with Multi-socket Xeon processors and 128GB DDR3 }
\label{tab:serverpowermodel}
\resizebox{\linewidth}{!}{%
\begin{tabular}{lcllll}
\toprule
\textbf{Server Power Component } & \textbf{Number of units } & \textbf{Active } & \textbf{Idle (G0) } & \textbf{G1 } & \textbf{G2 } \\
\hline
\begin{tabular}[c]{@{}l@{}}\textbf{CPU}\\\textbf{(Intel Xeon E5}\\\textbf{V2 2690) }\end{tabular} & 2 & 135W & 108W & 22W & 15W \\
\rowcolor[rgb]{0.851,0.851,0.851} & & \begin{tabular}[c]{@{}>{\cellcolor[rgb]{0.851,0.851,0.851}}l@{}}Average sustainable\\power consumption (TDP)\end{tabular} & \begin{tabular}[c]{@{}>{\cellcolor[rgb]{0.851,0.851,0.851}}l@{}}All CPUs/cores set\\to lowest DVFS \\frequency with\\20\% savings\end{tabular} & \begin{tabular}[c]{@{}>{\cellcolor[rgb]{0.851,0.851,0.851}}l@{}}All cores of all CPUs\\in C3 state- Clock\\Stopped and Cache\\flushed but other\\architectural state\\maintained\end{tabular} & \begin{tabular}[c]{@{}>{\cellcolor[rgb]{0.851,0.851,0.851}}l@{}}All cores of all CPUs\\in C6 state- Power\\gating the CPU, after\\saving the architectural\\state to the DRAM\end{tabular} \\
\textbf{ Chipset } & 1 & 8W & 8W & 8W & 8W \\
\rowcolor[rgb]{0.851,0.851,0.851} & & \begin{tabular}[c]{@{}>{\cellcolor[rgb]{0.851,0.851,0.851}}l@{}}Chipset powering \\the interfaces and\\peripherals\end{tabular} & \begin{tabular}[c]{@{}>{\cellcolor[rgb]{0.851,0.851,0.851}}l@{}}Chipset powering\\the interfaces and\\peripherals\end{tabular} & \begin{tabular}[c]{@{}>{\cellcolor[rgb]{0.851,0.851,0.851}}l@{}}Chipset powering the\\interfaces and\\peripherals\end{tabular} & \begin{tabular}[c]{@{}>{\cellcolor[rgb]{0.851,0.851,0.851}}l@{}}Chipset powering the\\interfaces and peripherals\end{tabular} \\
\begin{tabular}[c]{@{}l@{}}\textbf{ Memory }\\\textbf{(8x16GB DDR3) }\end{tabular} & 8 & 1.45W & 0.29W & 0.29W & 0.29W \\
\rowcolor[rgb]{0.851,0.851,0.851} & & \begin{tabular}[c]{@{}>{\cellcolor[rgb]{0.851,0.851,0.851}}l@{}}DRAM serving a normal\\utilization~of reads writes\\(25\% of all CPU~cycles\\each) + ACT~power (228mW) +\\I/0 power (540mw) + Background +\\Termination power(672mW)\end{tabular} & \begin{tabular}[c]{@{}>{\cellcolor[rgb]{0.851,0.851,0.851}}l@{}}Memory without new\\Reads or Writes, with ACT\\and background power\\consumption, and\\Background power 57mW.\end{tabular} & \begin{tabular}[c]{@{}>{\cellcolor[rgb]{0.851,0.851,0.851}}l@{}}Memory without new\\Reads or Writes, \\with ACT and\\background power consumption, and\\Background power 57mW.\end{tabular} & \begin{tabular}[c]{@{}>{\cellcolor[rgb]{0.851,0.851,0.851}}l@{}}Memory without new\\Reads or Writes, with\\ACT and background\\power consumption,\\and Background\\power 57mW.\end{tabular} \\
\begin{tabular}[c]{@{}l@{}}\textbf{ Disks }\\\textbf{(8 TB SSDs}\\\textbf{in RAID) }\end{tabular} & 4 & 10W & 10W & 1W & 1W \\
\rowcolor[rgb]{0.851,0.851,0.851} & & Active mode & Active mode & standby power & standby power \\
\textbf{ Network Interface card } & 2 & 3W & 3W & 1W & 1W \\
\rowcolor[rgb]{0.851,0.851,0.851} & & Active mode & Wake-on-LAN mode & Wake-on-LAN mode & Wake-on-LAN mode \\
\textbf{ Power Supply losses } & 2 & 38 W & 31W & 7W & 5W \\
\rowcolor[rgb]{0.851,0.851,0.851} & \multicolumn{1}{l}{} & 10\% current system power consumption & 10\% current system power consumption & 10\% current system power consumption & 10\% current system power consumption \\
\begin{tabular}[c]{@{}l@{}}\textbf{Cooling Fans}\\\textbf{15W fans}\end{tabular} & 2 & 10.5W & 5W & 5W & 0 W \\
\rowcolor[rgb]{0.851,0.851,0.851} & & 70\% of Full Power & 30\% Power & 30\% Power & Off \\
\hline
\textbf{Total} & \multicolumn{1}{l}{} & \textbf{385W} & \textbf{308W} & \textbf{73W} & \textbf{~51W} \\
\hline
\textbf{ Transition Time - To sleep } & & 0usecs & 10 usecs & 100 usecs & 500 msecs \\
\rowcolor[rgb]{0.851,0.851,0.851} & & & DVFS transition time & Transition to C3 state & \begin{tabular}[c]{@{}>{\cellcolor[rgb]{0.851,0.851,0.851}}l@{}}Saving Architectural state of CPU to ram\\and~power supply\\power up\end{tabular} \\
\hline
\textbf{ Wakeup latency } & & 0 usecs & 100 usecs & 100 msecs & 1 second \\
\rowcolor[rgb]{0.851,0.851,0.851} & & & CPU frequency scale & Transiting all cores to C0 state & Transitioning CPU to Active state \\
\bottomrule
\end{tabular}
}
\end{table*}
\subsection{Modeling Jobs}
More and more applications are being designed using a modular, microservices-based paradigm, where inter-dependent tasks are hosted on different servers. Modularising jobs reduces software development complexity, since each service is an independently updatable program deployed in a scalable 'single-service instance per server' pattern.
We model the execution of jobs at the server side as consisting of multiple inter-dependent tasks, with both spatial and temporal inter-dependence. Application tasks are typically executed by specific server types: for example, a web service request is first processed by an application or web server, while a search request is processed by a database server. This kind of task relationship is called spatial inter-dependence. In terms of temporal inter-dependence, a task cannot start executing until all of its 'parent' tasks have finished their execution and their results have been communicated to the server assigned to the task. A job is considered finished when all of its tasks finish execution. As for servers, each server has multiple cores, and one core can only process one task at a time. We support asynchronous task execution by allowing the server running a parent task to release the CPU once all flows to the child tasks' servers have completed, even if a child task is still waiting in its server's queue for execution.
Each job $j$ can be represented as a directed acyclic graph (DAG) $G^j(V^j, E^j)$, where $V^j$ is the set of tasks of job $j$. If there is a link in the DAG from task $i^j$ to task $r^j$, then task $i^j$ must finish and communicate its results to task $r^j$ before $r^j$ can start processing. Each task $v^j \in V^j$ has a workload requirement, namely a task size or execution time requirement $w^j_v$ on the core. Each link $l \in E^j$ has an associated data transfer size $D^j_l$, which denotes the bandwidth requirement to transfer the result over link $l$ (from the task at the head of the DAG link to the task at the tail) when assigned a network flow. Figure~\ref{figure1} shows an example of a job DAG.
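The spatial and temporal inter-dependence rules above can be sketched in code (a minimal illustration with hypothetical names; the `ready` rule encodes that a task may start only once all of its parent tasks have finished):

```python
from dataclasses import dataclass, field

@dataclass
class JobDAG:
    """A job as a DAG: tasks carry execution-time requirements (w_v) and
    links carry data-transfer sizes (D_l). All names are illustrative."""
    tasks: dict          # task id -> execution time requirement w_v
    links: dict          # (parent, child) -> data transfer size D_l
    parents: dict = field(init=False)

    def __post_init__(self):
        # Invert the link list into a parent map for dependency checks.
        self.parents = {v: [] for v in self.tasks}
        for (i, r) in self.links:
            self.parents[r].append(i)

    def ready(self, finished):
        """Tasks whose parents have all finished and communicated results."""
        return [v for v in self.tasks
                if v not in finished and all(p in finished for p in self.parents[v])]

# A tiny job in the style of Figure 1: task 3 depends on tasks 1 and 2.
job = JobDAG(tasks={1: 4.0, 2: 2.0, 3: 3.0},
             links={(1, 3): 10, (2, 3): 5})
print(job.ready(finished=set()))      # tasks 1 and 2 can start immediately
print(job.ready(finished={1, 2}))     # task 3 becomes ready
```

The job finishes once `ready` returns no tasks and all tasks are in `finished`, matching the completion rule above.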
\begin{figure}
\captionsetup{font=small}
\centering\includegraphics[width=0.4\linewidth]{figures/DAG.pdf}\caption{Example of a job DAG. Numbers 1-6 denote task 1-task 6 respectively. Numbers around the tasks represent task size, while numbers on the links represent flow size.}
\label{figure1}
\end{figure}
\begin{figure}
\captionsetup{font=small}
\centering\includegraphics[width=0.3\textwidth]{figures/fat_tree.png}
\caption{Fat tree topology with K=4 (16 servers).}
\label{figure12}
\end{figure}
\subsection{Data Center System}
\label{sec:dc-sys}
Figure~\ref{figure12} shows the classic fat tree topology used in our network-server system, where each subset is called a 'pod'. In our system, each switch consists of a number of distributed cards plugged into the backplane, which provides the physical connectivity~\cite{panZerotimeWakeupLine2016}. Among these cards are multiple line cards that forward packets or flows and can be in an active, sleep, or off state. In turn, each line card contains several ports connecting to external links, each of which can also be in an active, LPI, or off state. A typical schematic of a switch, line card, and port is shown in Figure~\ref{figure13}.
\section{Evaluation on a Real System}
\label{sec:evaluation}
\begin{figure*}[htbp]
\centering
\captionsetup{font=small}
\includegraphics[scale=0.56]{./figure/combine-energy.pdf}
\caption{\label{fig:per-server-energy} Energy measured on a server farm with 10 servers with different energy management policies. The first three groups of bars represent energy breakdown in each server when Active-Idle, Delay-Doze and \titlename are applied, respectively. The rightmost three bars illustrate the total server farm energy consumption for Active-Idle (black bar), Delay-Doze (gray bar) and \titlename respectively (white bar).}
\end{figure*}
We evaluate \titlename on a testbed of 10 Dell PowerEdge servers equipped with Intel Xeon-based processors, all deployed on a dedicated rack. We installed a modified version of the Apache HTTP server for our Local Power Controller, which we extended to also include a Delay-Doze timer. The Global Server Farm Power Manager runs on an additional Apache server with the \emph{mod\_proxy\_balancer} module used for load balancing. Specifically, the load balancer performs operating mode transitions in servers (as discussed in Section~\ref{sec:handler}) by sending special HTTP requests (\emph{/hostname/trans-to-active-mode/}, \emph{/hostname/trans-to-lp-mode}) to the application server. It also monitors the power state of each server and manages server wakeups (from system sleep) using the IPMI interface supported by Dell systems. The special requests are handled by the Local Power Controller, which determines the server low-power transitions accordingly. We set up a custom cpuidle governor that allows direct processor C-state transitions from userspace (e.g., C0-C6). For power measurement, we leverage two techniques: the RAPL interface for fine-grained component power, and IPMI's system power management interface for coarse-grained server power. We evaluate the effectiveness of \titlename by providing two sets of workloads to the system: the non-bursty Wikipedia workload, which does not require server provisioning, and four bursty NLANR workloads~\cite{nlanr}, which require server provisioning to handle traffic bursts (see Section~\ref{sec:provision}).
\subsection{Wikipedia Trace}
\label{sec:bursty-trace}
We performed real-system energy measurements by deploying the Wikipedia software stack, namely the Wikipedia application (MediaWiki) and database system (MySQL), on the servers. We compare \titlename against the Active-Idle and Delay-Doze approaches described in Section~\ref{sec:smartlp}. To capture a detailed energy breakdown, we leverage the RAPL interface for fine-grained power measurement; the RAPL utility records the CPU and RAM power values periodically. We configure \titlename with the $T_s$, $T_w$ and $\tau$ parameters that achieve energy-latency Pareto-optimality with the tail latency constraint set to 2.0. Similarly, for Delay-Doze, we explore various values of the delay timer and choose the setting that achieves the best power under the same tail latency constraint. From our experiments, we obtain the actual CPU and RAM energy consumption for each server. To get the overall server energy, we also factor in the platform energy shown in Table~\ref{table:powermode}.
Figure~\ref{fig:per-server-energy} shows the per-server energy breakdown in terms of CPU, DRAM, and platform energy. With Active-Idle power management, all 10 servers have similar energy consumption. With Delay-Doze, some of the servers are able to stay in the system sleep state for longer periods of time, thus saving energy. With \titlename, most of the servers drastically reduce energy consumption, and only a minimal subset of servers (server\#6 and \#10) is used for servicing jobs. Note that the energy consumption of server\#10 is slightly higher than under Active-Idle power management since the server runs at a higher utilization level while other servers remain inactive. Overall, \titlename achieves 39\% energy savings compared to Delay-Doze and 56\% compared to Active-Idle.
\begin{figure}[htbp]
\centering
\subfloat[ny09]{
\includegraphics[scale=0.34]{./figure/ny09.pdf}
\label{fig:nlanr-ny09}
\hspace{0.05in}
}
\subfloat[pa09]{
\includegraphics[scale=0.34]{./figure/pa09.pdf}
\label{fig:nlanr-pa09}
}
\subfloat[pa10]{
\includegraphics[scale=0.34]{./figure/pa10.pdf}
\label{fig:nlanr-pa10}
\hspace{0.05in}
}
\subfloat[uc09]{
\includegraphics[scale=0.34]{./figure/uc09.pdf}
\label{fig:nlanr-uc09}
}
\caption{System utilization for four bursty traces.}
\label{fig:utilizations}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.45]{./figure/trace-eval/trace_result.pdf}
\caption{Normalized energy consumption relative to peak energy on a 10-server cluster.}
\label{fig:traceresults}
\end{figure}
\subsection{Bursty Traces}
\label{sec:nonbursty-trace}
As the raw NLANR network traces~\cite{nlanr} present job arrival rates that are too low for a server farm with 10 application servers (utilization of less than 2\%), we speed up each trace by compressing the 24-hour trace into one hour. We choose four traces, namely \emph{ny09}, \emph{pa09}, \emph{pa10} and \emph{uc09}. Figure~\ref{fig:utilizations} shows the (scaled) utilization levels for the four traces over one hour. All traces exhibit bursty traffic patterns; for example, the \emph{ny09} trace has highly fluctuating utilization ranging from 4\% to 45\% with a large number of spikes. To run the traces, we set up a software stack similar to the one in Section~\ref{sec:bursty-trace}. Each request in the trace is serviced by a PHP script that accesses a pre-defined set of pages randomly; we note that the average service time is about the same as that of the Wikipedia web requests.
To enable server provisioning, the Server Farm Power Manager additionally samples the server farm utilization levels based on the job arrival rates. Utilization is calculated as the product of the job arrival rate and the average job execution time. The standard deviation of the utilization samples is computed every 120 seconds, and the number of provisioned servers is calculated dynamically (see Section~\ref{sec:provision} for details). Note that, in our comparative studies, the delay-timer values are re-evaluated for each trace so that the best possible energy savings are achieved while meeting the QoS constraints. Figure~\ref{fig:traceresults} shows the energy consumption for the four bursty workloads using Active-Idle, Delay-Doze and \titlename. The energy is normalized to the peak energy, $PeakPower \times Time$. The energy reduction for \titlename ranges from 34\% to 40\% compared to Active-Idle. Even with the best delay-timer settings, Delay-Doze achieves only 9\% to 12\% energy reduction on bursty workloads. We observe that due to the job arrival rate spikes (especially for \emph{uc09}), for Delay-Doze to meet the tail latency constraint of 2.0, the delay timer has to be set to larger values, and in turn the servers have limited opportunity to enter the system sleep state.
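As a rough sketch of the provisioning logic described above (the exact rule is given in our provisioning section; the headroom multiplier and the `provisioned_servers` helper here are illustrative assumptions, not the implemented policy):

```python
import math
import statistics

def utilization(arrival_rate, avg_exec_time, n_servers):
    """Utilization as the product of the job arrival rate and the average
    job execution time, normalized by farm capacity (illustrative)."""
    return arrival_rate * avg_exec_time / n_servers

def provisioned_servers(samples, n_servers, headroom=2.0):
    """Hypothetical provisioning rule: provision for the mean utilization
    plus a headroom multiple of its standard deviation over the sampling
    window. The multiplier and this helper are assumptions for illustration."""
    mu = statistics.mean(samples)
    sigma = statistics.stdev(samples) if len(samples) > 1 else 0.0
    return min(n_servers, max(1, math.ceil((mu + headroom * sigma) * n_servers)))

# A 120-second window of sampled utilization levels (fractions of capacity)
window = [0.04, 0.45, 0.12, 0.30, 0.08]
print(provisioned_servers(window, n_servers=10))  # → 6
```

Bursty windows (large standard deviation) thus keep more servers awake, while quiet windows let most of the farm enter the system sleep state.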
\section*{Introduction}\label{sec:intro}
Logistic regression is a widely used statistical model to describe the relationship between a binary response variable and predictor variables in data sets~\cite{hosmer2013applied}. It is often used in machine learning to identify important predictor variables~\cite{el2020comparative,zanon2020sparse}. This task, variable selection, typically amounts to fitting a logistic regression model regularized by a convex combination of $\ell_1$ and $\ell_{2}^{2}$ penalties. Variable selection is frequently applied to problems in medicine~\cite{BAGLEY2001979,bursac2008purposeful, greene2014big,PEREIRA2009S199,prive2018efficient,wu2009genome,zhang2018variable}, natural language processing~\cite{berger1996maximum,manning2003optimization,genkin2007large,pranckevivcius2017comparison,taddy2013multinomial}, economics~\cite{lowe2004logistic,theodossiou1998effects,zaghdoudi2013bank,zaidi2016forecasting}, and social science~\cite{achia2010logistic,king2001logistic,muchlinski2016comparing}, among others.
Since modern big data sets can contain up to billions of predictor variables, variable selection methods require efficient and robust optimization algorithms to perform well~\cite{l2017machine}. State-of-the-art algorithms for variable selection methods, however, were not traditionally designed to handle big data sets; they either scale poorly in size~\cite{chu2007map} or are prone to produce unreliable numerical results~\cite{bringmann2018homotopy,loris2008_package,yuan2010comparison,yuan2012improved}. These shortcomings in terms of efficiency and robustness make variable selection methods on big data sets essentially impossible without access to adequate and costly computational resources~\cite{demchenko2013addressing,sculley2014machine}. Further exacerbating this problem is that machine learning applications to big data increasingly rely on computing power to make progress~\cite{dhar2020carbon,kambatla2014trends,l2017machine,leiserson2020there}. Without efficient and robust algorithms to minimize monetary and energy costs, these shortcomings prevent scientific discoveries. Indeed, it is expected that progress will rapidly become economically and environmentally unsustainable as computational requirements become a severe constraint~\cite{thompson2021deep}.
This paper proposes a novel optimization algorithm that addresses the shortcomings of state-of-the-art algorithms used for variable selection. Our proposed algorithm is an accelerated nonlinear variant of the classic primal-dual hybrid gradient (PDHG) algorithm, a first-order optimization method initially developed to solve imaging problems~\cite{esser2010general,pock2009algorithm,zhu2008efficient,chambolle2011first,hohage2014generalization,chambolle2016ergodic}. Our proposed accelerated nonlinear PDHG algorithm, which is based on the work the authors recently provided in~\cite{langlois2021accelerated}, uses the Kullback--Leibler divergence to efficiently fit a logistic regression model regularized by a convex combination of $\ell_1$ and $\ell_{2}^{2}$ penalties. Specifically, our algorithm provably computes a solution to a logistic regression problem regularized by an elastic net penalty in $O(T(m,n)\log(1/\epsilon))$ operations, where $\epsilon \in (0,1)$ denotes the tolerance and $T(m,n)$ denotes the number of arithmetic operations required to perform matrix-vector multiplication on a data set with $m$ samples each comprising $n$ features. This result improves on the known complexity bound of $O(\min(m^2n,mn^2)\log(1/\epsilon))$ for first-order optimization methods such as the classic primal-dual hybrid gradient or forward-backward splitting methods.
Before describing our methodology, we briefly discuss how variable selection works with logistic regression, why this problem is challenging, what the state-of-the-art algorithms are, and what their limitations are.
\subsection*{Description of the problem}
Suppose we receive $m$ independent samples $\{(\boldsymbol{x}_i,y_i)\}_{i=1}^{m}$, each comprising an $n$-dimensional vector of predictor variables $\boldsymbol{x}_i \in \R^n$ and a binary response variable $y_i \in \{0,1\}$. The predictor variables are encoded in an $m \times n$ matrix $\matr A$ whose rows are the vectors $\boldsymbol{x}_i = (x_{i1},\dots, x_{in})$, and the binary response variables are encoded in an $m$-dimensional vector $\boldsymbol{y}$. The goal of variable selection is to identify which of the $n$ predictor variables best describe the $m$ response variables. A common approach to do so is to fit a logistic regression model regularized by a convex combination of $\ell_1$ and $\ell_{2}^{2}$ penalties:
\begin{equation}\label{eq:binary_LR_wp}
\inf_{\boldsymbol{\theta} \in \R^n} f(\boldsymbol{\theta};\alpha,\lambda) = \inf_{\boldsymbol{\theta} \in \R^n} \left\{ \frac{1}{m}\sum_{i=1}^{m}\log\left(1 + \exp{((\matr A \boldsymbol{\theta})_i)}\right) - \frac{1}{m}\left\langle\boldsymbol{y},\matr A \boldsymbol{\theta} \right\rangle + \lambda \left(\alpha \normone{\boldsymbol{\theta}} + \frac{1-\alpha}{2}\normsq{\boldsymbol{\theta}}\right)\right\},
\end{equation}
where $\lambda > 0$ is a tuning parameter and $\alpha \in (0,1)$ is a fixed hyperparameter. The function $\boldsymbol{\theta} \mapsto \lambda (\alpha \normone{\boldsymbol{\theta}} + (1-\alpha)\normsq{\boldsymbol{\theta}}/2)$ is called the elastic net penalty~\cite{zou2005regularization}. It is a compromise between the ridge penalty ($\alpha = 0$)~\cite{hoerl1970ridge} and the lasso penalty ($\alpha = 1$)~\cite{tibshirani1996regression}. The choice of $\alpha$ depends on the desired prediction model; for variable selection its value is often chosen to be close to but not equal to one~\cite{tay2021elastic}.
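For concreteness, the objective in~\eqref{eq:binary_LR_wp} can be transcribed directly (a minimal NumPy sketch on synthetic data; `np.logaddexp` is used only for a numerically stable $\log(1+\exp(\cdot))$):

```python
import numpy as np

def elastic_net_logistic_loss(theta, A, y, alpha, lam):
    """Objective f(theta; alpha, lambda) of the elastic-net-regularized
    logistic regression problem (direct transcription of the formula)."""
    z = A @ theta
    # np.logaddexp(0, z) computes log(1 + exp(z)) stably
    data_term = np.mean(np.logaddexp(0.0, z)) - np.dot(y, z) / len(y)
    penalty = lam * (alpha * np.abs(theta).sum()
                     + 0.5 * (1 - alpha) * np.dot(theta, theta))
    return data_term + penalty

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 5))
y = rng.integers(0, 2, size=50).astype(float)
# At theta = 0 the penalty vanishes and each log term equals log 2
print(elastic_net_logistic_loss(np.zeros(5), A, y, alpha=0.95, lam=0.1))
```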
The elastic net regularizes the logistic regression model in three ways. First, it ensures that the logistic regression problem~\eqref{eq:binary_LR_wp} has a unique solution (global minimum)~\cite[Chapter II, Proposition 1.2]{ekeland1999convex}. Second, the $\ell_{2}^{2}$ penalty shrinks the coefficients of correlated predictor variables toward each other (and zero), which mitigates adverse effects (e.g., high variance) caused by highly correlated predictor variables. Third, the $\ell_1$ penalty promotes sparsity in the solution of~\eqref{eq:binary_LR_wp}; that is, the global minimum of~\eqref{eq:binary_LR_wp} has a number of entries that are identically zero~\cite{Foucart2013,el2020comparative,zanon2020sparse}. We note that other penalties are sometimes used in practice to promote sparsity, including, for example, the group lasso penalty~\cite{meier2008group}. In any case, the non-zero entries are identified as the important predictor variables, and the zero entries are discarded. The number of non-zero entries itself depends on the value of the fixed hyperparameter $\alpha$ and the tuning parameter $\lambda$.
In most applications, the desired value of $\lambda$ proves challenging to estimate. To determine an appropriate value for it, variable selection methods first compute a sequence of minimums $\boldsymbol{\theta}^{*}(\lambda)$ of problem~\eqref{eq:binary_LR_wp} from a chosen sequence of values of the parameter $\lambda$ and then choose the parameter that gives the preferred minimum~\cite{bringmann2018homotopy,friedman2010regularization}. Variable selection methods differ in how they choose the sequence of parameters $\lambda$ and how they repeatedly compute global minimums of problem~\eqref{eq:binary_LR_wp}, but the procedure is generally the same. The sequence of minimums thus computed is called a regularization path~\cite{friedman2010regularization}.
Unfortunately, computing a regularization path to problem~\eqref{eq:binary_LR_wp} can be prohibitively expensive for big data sets. To see why, fix $\alpha \in (0,1)$ and $\lambda > 0$, and let $\boldsymbol{\theta}_{\epsilon}(\alpha,\lambda) \in \R^n$ with $\epsilon > 0$ denote an $\epsilon$-approximate solution to the true global minimum $\boldsymbol{\theta}^{*}(\alpha,\lambda)$ in~\eqref{eq:binary_LR_wp}, i.e.,
\[
f(\boldsymbol{\theta}_{\epsilon}(\alpha,\lambda);\alpha,\lambda) - f(\boldsymbol{\theta}^{*}(\alpha,\lambda);\alpha,\lambda) < \epsilon.
\]
Then the best achievable rate of convergence for computing $\boldsymbol{\theta}_{\epsilon}(\alpha,\lambda)$ in the Nesterov class of optimal first-order methods is linear, that is, $O(\log(1/\epsilon))$ in the number of iterations~\cite{Nesterov2018}. While optimal, this rate of convergence is difficult to achieve in practice because it requires a precise estimate of the largest singular value of the matrix $\matr{A}$, a quantity essentially impossible to compute for large matrices due to its prohibitive computational cost of $O(\min{(m^2 n,mn^2)})$ operations~\cite{hastie2009elements}. This issue generally makes solving problem~\eqref{eq:binary_LR_wp} difficult and laborious. As computing a regularization path entails repeatedly solving problem~\eqref{eq:binary_LR_wp} for different values of $\lambda$, this process can become particularly time consuming and resource intensive for big data sets.
In summary, variable selection methods work by repeatedly solving an optimization problem that can be prohibitively computationally expensive for big data sets. This issue has driven much research in the development of robust and efficient algorithms to minimize costs and maximize performance.
\subsection*{Algorithms for variable selection methods and their shortcomings}
The state of the art for computing regularization paths to problem~\eqref{eq:binary_LR_wp} is based on coordinate descent algorithms~\cite{friedman2007pathwise,friedman2010regularization,hastie2021glmnet,simon2011regularization,simon2013blockwise,tibshirani2012strong,wu2008coordinate,yuan2012improved}. These algorithms are implemented, for example, in the popular glmnet software package~\cite{hastie2021glmnet}, which is available in the Python, MATLAB, and R programming languages. Other widely used variable selection methods include those based on the least angle regression algorithm and its variants~\cite{efron2004least,hesterberg2008least,lee2006efficient,tibshirani2013lasso,zou2005regularization}, and those based on the forward-backward splitting algorithm and its variants~\cite{beck2009fast,chambolle2016introduction,daubechies2004iterative,shi2010fast,shi2013linearized}. Here, we focus on these algorithms, but before doing so we wish to stress that many more algorithms have been developed to compute minimums of~\eqref{eq:binary_LR_wp}; see \cite{bertsimas2019sparse,el2020comparative,li2020survey,vidaurre2013survey,zanon2020sparse} for recent surveys and comparisons of different methods and models.
Coordinate descent algorithms are considered the state of the art because they are scalable, with steps in the algorithms generally having an asymptotic computational complexity of at most $O(mn)$ operations. Some coordinate descent algorithms, such as those implemented in the glmnet software~\cite{hastie2021glmnet}, also offer options for parallel computing. Despite these advantages, coordinate descent algorithms generally lack robustness and good convergence properties. For example, the glmnet implementation depends on the sparsity of the matrix $\matr{A}$ to converge quickly~\cite{zou2005regularization}, and it is known to slow down when the predictor variables are highly correlated~\cite{friedman2007pathwise}. This situation often occurs in practice, and it would be desirable to have a fast algorithm for this case. Another issue is that the glmnet implementation approximates the logarithm term in problem~\eqref{eq:binary_LR_wp} with a quadratic in order to solve the problem efficiently. Without costly step-size optimization, which glmnet avoids to improve performance, the glmnet implementation may not converge~\cite{friedman2010regularization,lee2006efficient}. Case in point, \citet{yuan2010comparison} provides two numerical experiments in which glmnet does not converge. Although some coordinate descent algorithms recently proposed in~\cite{catalina2018accelerated} and in~\cite{fercoq2016optimization} can provably solve the logistic regression problem~\eqref{eq:binary_LR_wp} (with parameter $\alpha = 1$), in the first case, the convergence rate is strictly slower than the best achievable rate, and in the second case, the method fails to construct meaningful regularization paths to problem~\eqref{eq:binary_LR_wp}, in addition to having large memory requirements.
The least angle regression algorithm is another popular tool for computing regularization paths to problem~\eqref{eq:binary_LR_wp}. This algorithm, however, scales poorly with the size of data sets because the entire sequence of steps for computing regularization paths has an asymptotic computational complexity of at most $O(\min{(m^2 n + m^3,mn^2 + n^3)})$ operations~\cite{efron2004least}. It also lacks robustness because, under certain conditions, it fails to compute meaningful regularization paths to problem~\eqref{eq:binary_LR_wp}~\cite{bringmann2018homotopy,loris2008_package}. Case in point, \citet{bringmann2018homotopy} provides an example for which the least angle regression algorithm does not converge.
The forward-backward splitting algorithm and its variants are widely used because they are robust and can provably compute $\epsilon$-approximate solutions of~\eqref{eq:binary_LR_wp} in at most $O(\log(1/\epsilon))$ iterations. To achieve this convergence rate, the step size parameter in the algorithm needs to be fine-tuned using a precise estimate of the largest singular value of the matrix $\matr A$. As mentioned before, however, computing this estimate is essentially impossible for large matrices due to its prohibitive computational cost, which has an asymptotic computational complexity of at most $O(\min{(m^2 n,mn^2)})$ operations. Line search methods and other heuristics are often employed to bypass this problem, but they come at the cost of slowing down the convergence of the forward-backward splitting algorithm. Another approach is to compute a crude estimate of the largest singular value of the matrix $\matr{A}$, but doing so dramatically reduces the speed of convergence of the algorithm. This problem makes regularization path construction methods based on the forward-backward splitting algorithm and its variants generally inefficient and impractical for big data sets.
In summary, state-of-the-art and other widely used variable selection methods for computing regularization paths to problem~\eqref{eq:binary_LR_wp} either scale poorly in size or are prone to produce unreliable numerical results. These shortcomings in terms of efficiency and robustness make it challenging to perform variable selection on big data sets without access to adequate and costly computational resources. This paper proposes an efficient and robust optimization algorithm for solving~problem~\eqref{eq:binary_LR_wp} that addresses these shortcomings.
\section*{Methodology}\label{sec:methodology}
We consider the problem of solving the logistic regression problem~\eqref{eq:binary_LR_wp} with $\alpha \in (0,1)$. Our approach is to reformulate problem~\eqref{eq:binary_LR_wp} as a saddle-point problem and solve the latter using an appropriate primal-dual algorithm. Based on work the authors recently provided in~\cite{langlois2021accelerated}, we propose to use a nonlinear PDHG algorithm with Bregman divergence terms tailored to the logistic regression model and the elastic net penalty in~\eqref{eq:binary_LR_wp}. Specifically, we propose to use the Bregman divergence generated from the negative sum of $m$ binary entropy functions. This divergence is the function $D_{H}\colon \R^m \times \R^m \to [0,+\infty]$ given by
\begin{equation}\label{eq:kl-divergence}
D_{H}(\boldsymbol{s},\boldsymbol{s}') = \begin{dcases}
& \sum_{i=1}^{m} s_{i}\log\left(\frac{s_{i}}{s_{i}'}\right) + (1-s_{i})\log\left(\frac{1-s_{i}}{1-s_{i}'}\right) \quad \mathrm{if}\, \boldsymbol{s},\boldsymbol{s}' \in [0,1]^m, \\
& +\infty,\quad \mathrm{otherwise}.
\end{dcases}
\end{equation}
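A direct transcription of $D_{H}$ (a minimal NumPy sketch, adopting the conventions $0\log 0 = 0$ inside $[0,1]^m$ and $+\infty$ outside):

```python
import numpy as np

def D_H(s, sp):
    """Kullback--Leibler divergence generated by the negative sum of m
    binary entropy functions, as defined in Eq. (kl-divergence)."""
    s, sp = np.asarray(s, float), np.asarray(sp, float)
    if ((s < 0) | (s > 1) | (sp < 0) | (sp > 1)).any():
        return np.inf                      # +infinity outside [0,1]^m
    with np.errstate(divide='ignore', invalid='ignore'):
        # convention 0*log(0) = 0, handled by the where-masks below
        t1 = np.where(s > 0, s * np.log(s / sp), 0.0)
        t2 = np.where(s < 1, (1 - s) * np.log((1 - s) / (1 - sp)), 0.0)
    return float(np.sum(t1 + t2))

print(D_H([0.5, 0.5], [0.5, 0.5]))   # → 0.0
# Pinsker's inequality: D_H(s, s') >= 0.5 * ||s - s'||_1^2
print(D_H([0.9], [0.1]) >= 0.5 * 0.8 ** 2)
```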
We also show how to adapt our approach for solving the logistic regression problem~\eqref{eq:binary_LR_wp} with the lasso penalty ($\alpha = 1$) or, more generally, with a broad class of convex penalties, such as the group lasso.
\subsection*{Numerical optimization algorithm}
The starting point of our approach is to express the logistic regression problem~\eqref{eq:binary_LR_wp} in saddle-point form. To do so, we use the convex conjugate formula of the sum of logarithms that appears in~\eqref{eq:binary_LR_wp}, namely
\begin{equation}\label{eq:conv_conj}
\psi(\boldsymbol{s}) = \sup_{\boldsymbol{u} \in \R^m} \left\{\left\langle \boldsymbol{s},\boldsymbol{u} \right\rangle - \sum_{i=1}^{m}\log(1+\exp{(u_i)})\right\} =
\begin{dcases}
& \sum_{i=1}^{m}s_i\log(s_i) + (1-s_i)\log(1-s_i) \quad \mathrm{if}\, \boldsymbol{s} \in [0,1]^m, \\
& +\infty,\quad \mathrm{otherwise}.
\end{dcases}
\end{equation}
Hence we have the representation
\[
\sum_{i=1}^{m}\log(1+\exp{((\matr{A}\boldsymbol{\theta})_{i})}) = \sup_{\boldsymbol{s} \in [0,1]^m} \left\{\left\langle \boldsymbol{s},\matr{A}\boldsymbol{\theta} \right\rangle - \psi(\boldsymbol{s})\right\},
\]
and from it we can express problem~\eqref{eq:binary_LR_wp} in saddle-point form as
\begin{equation}\label{eq:binary_LR_saddle}
\inf_{\boldsymbol{\theta} \in \R^n}\sup_{\boldsymbol{s} \in [0,1]^{m}} \left\{ - \frac{1}{m}\psi(\boldsymbol{s}) - \frac{1}{m}\left\langle\boldsymbol{y} - \boldsymbol{s},\matr A \boldsymbol{\theta} \right\rangle + \lambda \left(\alpha \normone{\boldsymbol{\theta}} + \frac{1-\alpha}{2}\normsq{\boldsymbol{\theta}}\right) \right\}.
\end{equation}
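The conjugate formula~\eqref{eq:conv_conj} underlying this saddle-point form can be checked numerically coordinate by coordinate (a grid-search sketch; the supremum is attained at the sigmoid of $u$):

```python
import numpy as np

# For each coordinate, verify that
#   sup_{s in [0,1]} { s*u - [s log s + (1-s) log(1-s)] } = log(1 + exp(u)),
# with the supremum attained at s = 1/(1 + exp(-u)).
u = 1.3
s = np.linspace(1e-6, 1 - 1e-6, 200001)          # dense grid over (0, 1)
inner = s * u - (s * np.log(s) + (1 - s) * np.log(1 - s))
print(inner.max())                    # ≈ log(1 + exp(1.3)) ≈ 1.5410
print(np.log(1 + np.exp(u)))
print(s[inner.argmax()])              # ≈ sigmoid(1.3) ≈ 0.7858
```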
A solution to the convex-concave saddle-point problem~\eqref{eq:binary_LR_saddle} is called a saddle point. For $\alpha \in (0,1)$, the saddle-point problem~\eqref{eq:binary_LR_saddle} has a unique saddle point $(\boldsymbol{\theta}^{*},\boldsymbol{s}^{*})$, where the element $\boldsymbol{\theta}^{*}$ itself is the unique global minimum of the original problem~\eqref{eq:binary_LR_wp}~\cite[Proposition 3.1, page 57]{ekeland1999convex}. Hence for our purpose it suffices to compute a solution to the saddle-point problem~\eqref{eq:binary_LR_saddle}, and to do so we can take advantage of the fact that the saddle point $(\boldsymbol{\theta}^{*},\boldsymbol{s}^{*})$ satisfies the following optimality conditions:
\begin{equation}\label{eq:opt_conds}
\frac{1}{m}\matr{A}^T(\boldsymbol{y}-\boldsymbol{s}^{*}) - \lambda(1-\alpha)\boldsymbol{\theta}^{*} \in \lambda\alpha \partial \normone{\boldsymbol{\theta}^{*}} \quad \mathrm{and} \quad s_{i}^{*} = \frac{1}{1 + \exp{(-(\matr{A}\boldsymbol{\theta}^{*})_{i})}}\quad \mathrm{for}\,i \in \{1,\dots,m\}.
\end{equation}
The next step of our approach is to split the infimum and supremum problems in~\eqref{eq:binary_LR_saddle} with an appropriate primal-dual scheme. We propose to alternate between a nonlinear proximal ascent step using the Kullback--Leibler divergence~\eqref{eq:kl-divergence} and a proximal descent step using a quadratic function:
\begin{equation}
\begin{alignedat}{1}\label{eq:kl-nPDHG-alg}
&\boldsymbol{s}^{(k+1)} = \argmax_{\boldsymbol{s} \in (0,1)^m} \left\{-\psi(\boldsymbol{s}) + \left\langle \boldsymbol{s}, \matr{A}(\boldsymbol{\theta}^{(k)} + \rho(\boldsymbol{\theta}^{(k)} - \boldsymbol{\theta}^{(k-1)}))\right\rangle - \frac{1}{\sigma}D_{H}(\boldsymbol{s},\boldsymbol{s}^{(k)})\right\}, \\
&\boldsymbol{\theta}^{(k+1)} = \argmin_{\boldsymbol{\theta} \in \R^n} \left\{\left(\lambda_{1}\normone{\boldsymbol{\theta}} + \frac{\lambda_{2}}{2}\normsq{\boldsymbol{\theta}}\right) + \left\langle \boldsymbol{s}^{(k+1)}-\boldsymbol{y}, \matr{A}\boldsymbol{\theta}\right\rangle + \frac{1}{2\tau}\normsq{\boldsymbol{\theta}-\boldsymbol{\theta}^{(k)}}\right\},
\end{alignedat}
\end{equation}
where $\lambda_{1} = m\lambda\alpha$, $\lambda_{2} = m\lambda(1-\alpha)$, and $\rho,\sigma,\tau > 0$ are parameters to be specified in the next step of our approach. The scheme starts from initial values $\boldsymbol{s}^{(0)} \in (0,1)^m$ and $\boldsymbol{\theta}^{(-1)} = \boldsymbol{\theta}^{(0)} \in \R^n$.
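Both subproblems in~\eqref{eq:kl-nPDHG-alg} admit closed-form solutions, obtained from their first-order optimality conditions: the KL ascent step is a convex combination in logit space, and the quadratic descent step is soft-thresholding with $\ell_2^2$ shrinkage. A minimal sketch of one iteration (our own derivation for illustration, not the authors' reference implementation):

```python
import numpy as np

def logit(s):                 # gradient of psi (inverse sigmoid)
    return np.log(s / (1 - s))

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def pdhg_step(theta, theta_prev, s, A, y, lam1, lam2, rho, sigma, tau):
    """One iteration of the nonlinear PDHG scheme with closed-form updates."""
    theta_bar = theta + rho * (theta - theta_prev)
    # KL ascent step: logit(s_new) = (sigma*A@theta_bar + logit(s))/(1+sigma)
    s_new = sigmoid((sigma * (A @ theta_bar) + logit(s)) / (1.0 + sigma))
    # quadratic descent step: soft-thresholding followed by l2 shrinkage
    theta_new = soft_threshold(theta - tau * (A.T @ (s_new - y)), tau * lam1)
    return theta_new / (1.0 + tau * lam2), s_new

rng = np.random.default_rng(0)
A, y = rng.standard_normal((30, 8)), rng.integers(0, 2, 30).astype(float)
theta = np.zeros(8)
theta, s = pdhg_step(theta, theta, np.full(30, 0.5), A, y,
                     lam1=1.0, lam2=0.1, rho=0.9, sigma=0.1, tau=0.5)
print(theta.shape, s.min() > 0 and s.max() < 1)   # → (8,) True
```

Note that the dual iterate stays in $(0,1)^m$ automatically, since the ascent update outputs a sigmoid.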
The key element in this primal-dual scheme is the choice of the Kullback--Leibler divergence~\eqref{eq:kl-divergence} in the first line of~\eqref{eq:kl-nPDHG-alg}. This choice is motivated by two facts. First, the divergence is generated from the negative sum of $m$ binary entropy functions that appears \emph{explicitly} in the saddle-point problem~\eqref{eq:binary_LR_saddle} as the function $\psi$ defined in~\eqref{eq:conv_conj}, i.e.,
\begin{equation}\label{eq:kullback_def}
D_{H}(\boldsymbol{s},\boldsymbol{s}') = \psi(\boldsymbol{s}) - \psi(\boldsymbol{s}') -\left\langle \boldsymbol{s} - \boldsymbol{s}', \nabla \psi(\boldsymbol{s}') \right\rangle.
\end{equation}
This fact will make the maximization step in~\eqref{eq:kl-nPDHG-alg} easy to evaluate. Second, the divergence is strongly convex with respect to the $\ell_1$-norm, in the sense that
\[
D_{H}(\boldsymbol{s},\boldsymbol{s}') \geqslant \frac{1}{2}\normone{\boldsymbol{s}-\boldsymbol{s}'}^{2}
\]
for every $\boldsymbol{s},\boldsymbol{s}' \in [0,1]^{m}$, which is a direct consequence of a fundamental result in information theory known as Pinsker's inequality~\cite{beck2003mirror,csiszar1967information,kemperman1969optimum,kullback1967lower,pinsker1964information}.
The latter fact, notably, implies that the primal-dual scheme~\eqref{eq:kl-nPDHG-alg} alternates between solving a 1-strongly concave problem over the space $(\R^m,\normone{\cdot})$ and a $\lambda_{2}$-strongly convex problem over the space $(\R^n,\normtwo{\cdot})$. The choice of these spaces is significant, for it induces the matrix norm
\begin{equation}\label{eq:matr_parameter}
\norm{\matr{A}}_{op} = \sup_{\normone{\boldsymbol{s}} = 1} \normtwo{\matr{A}^{T}\boldsymbol{s}} = \max_{i \in \{1,\dots,m\}} \sqrt{\sum_{j=1}^{n}{A_{ij}^{2}}} = \max_{i \in \{1,\dots,m\}} \normtwo{\boldsymbol{x}_{i}},
\end{equation}
which can be computed in \emph{optimal} $\Theta(mn)$ time. This is unlike most first-order optimization methods, such as the forward-backward splitting algorithm, where instead the matrix norm is the largest singular value of the matrix $\matr{A}$, which takes $O(\min{(m^2n,mn^2)})$ operations to compute. This point is \emph{crucial}: the smaller computational cost makes it easy and efficient to estimate all the parameters of the nonlinear PDHG algorithm, which is needed to achieve an optimal rate of convergence.
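As an illustration, the induced norm~\eqref{eq:matr_parameter} is simply the largest Euclidean row norm of $\matr{A}$, which a short Python sketch (ours, not part of the reference implementation) can compute in a single pass over the data:

```python
import numpy as np

def induced_norm(A):
    """Induced norm of A viewed as a map between (R^m, ||.||_1) and
    (R^n, ||.||_2): the largest Euclidean row norm, in Theta(m*n) time."""
    return np.sqrt((A ** 2).sum(axis=1)).max()

A = np.array([[3.0, 4.0], [1.0, 0.0]])
print(induced_norm(A))  # rows have norms 5 and 1, so this prints 5.0
```

By contrast, the largest singular value of $\matr{A}$ would require an SVD or an iterative eigensolver.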
The last step of our approach is to choose the parameters $\rho$, $\sigma$, and $\tau$ so that the iterations in the primal-dual scheme~\eqref{eq:kl-nPDHG-alg} converge. Based on the analysis of accelerated nonlinear PDHG algorithms the authors recently provided in~\cite[Section 5.4]{langlois2021accelerated}, the choice of parameters
\[
\rho = 1 - \frac{\lambda_{2}}{2\norm{\matr{A}}_{op}^2}\left(\sqrt{1+\frac{4\norm{\matr{A}}_{op}^2}{\lambda_{2}}}-1\right), \quad \sigma = \frac{1-\rho}{\rho}, \quad \mathrm{and} \quad \tau = \frac{(1-\rho)}{\lambda_{2}\rho},
\]
ensure that the iterations converge to the unique saddle point $(\boldsymbol{\theta}^{*},\boldsymbol{s}^{*})$ of problem~\eqref{eq:binary_LR_saddle}. In particular, the rate of convergence is linear in the number of iterations, with
\begin{equation}\label{eq:rate}
\frac{1}{2}\normsq{\boldsymbol{\theta}^{*} - \boldsymbol{\theta}^{(k)}} \leqslant \rho^k\left(\frac{1}{2}\normsq{\boldsymbol{\theta}^{*} - \boldsymbol{\theta}^{(0)}} + \frac{1}{\lambda_{2}}D_{H}(\boldsymbol{s}^{*},\boldsymbol{s}^{(0)})\right).
\end{equation}
This convergence rate is optimal: it is the best rate achievable within Nesterov's class of first-order methods~\cite{Nesterov2018}.
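In Python, the closed-form parameter choice above can be sketched as follows (assuming only $\norm{\matr{A}}_{op}$ and $\lambda_{2}$ are available; the function name is ours):

```python
import numpy as np

def pdhg_parameters(A_norm, lam2):
    """Closed-form parameters rho, sigma, tau from the text,
    with A_norm = ||A||_op and lam2 = m*lambda*(1-alpha) > 0."""
    L2 = A_norm ** 2
    rho = 1.0 - (lam2 / (2.0 * L2)) * (np.sqrt(1.0 + 4.0 * L2 / lam2) - 1.0)
    sigma = (1.0 - rho) / rho
    tau = (1.0 - rho) / (lam2 * rho)
    return rho, sigma, tau
```

Since $\rho \in (0,1)$, the bound~\eqref{eq:rate} contracts geometrically in the number of iterations.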
An important feature of our proposed algorithm is that the minimization steps in~\eqref{eq:kl-nPDHG-alg} can be computed \textit{exactly}. Specifically, with the auxiliary variables $\boldsymbol{u}^{(k)} = \matr{A}\boldsymbol{\theta}^{(k)}$ and $v_{i}^{(k)} = \log\left(s_{i}^{(k)}/(1-s_{i}^{(k)})\right)$ for $i \in \{1,\dots,m\}$, the steps in algorithm~\eqref{eq:kl-nPDHG-alg} can be expressed explicitly as follows:
\begin{equation}\label{eq:alg_oPDHG_specific}
\begin{dcases}
\boldsymbol{v}^{(k+1)} &= \frac{1}{1+\sigma}\left(\sigma\boldsymbol{u}^{(k)} + \sigma\rho\left(\boldsymbol{u}^{(k)} - \boldsymbol{u}^{(k-1)}\right) + \boldsymbol{v}^{(k)}\right) \\
s^{(k+1)}_{i} &= \frac{1}{1 + \exp{\left(-v_{i}^{(k+1)}\right)}} \quad \mathrm{for}\,i\in\left\{1,\dots,m\right\}, \\
\hat{\boldsymbol{\theta}}^{(k+1)} &= \boldsymbol{\theta}^{(k)} - \tau \matr{A}^T\left(\boldsymbol{s}^{(k+1)} - \boldsymbol{y}\right)\\
\theta^{(k+1)}_{j} &= \mathrm{sign~}{\hat{\theta}^{(k+1)}_j}\max{\left(0,\frac{\left|\hat{\theta}^{(k+1)}_{j}\right| - \lambda_{1}\tau}{1+\lambda_{2}\tau}\right)} \quad \mathrm{for}\,j\in\{1,\dots,n\}\\
\boldsymbol{u}^{(k+1)} &= \matr{A}\boldsymbol{\theta}^{(k+1)}.
\end{dcases}
\end{equation}
In addition, from the auxiliary variables and the optimality condition on the right in~\eqref{eq:opt_conds}, we have the limit
\[
\lim_{k \to +\infty} \normtwo{\boldsymbol{u}^{(k)} - \boldsymbol{v}^{(k)}} = 0,
\]
which can serve as a convergence criterion. We refer to \textit{Material and Methods} for the derivation of algorithm~\eqref{eq:alg_oPDHG_specific} from the iterations in~\eqref{eq:kl-nPDHG-alg}.
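For concreteness, the iterations~\eqref{eq:alg_oPDHG_specific} can be sketched in Python as follows. This is an illustrative translation of the update rules, not the authors' reference implementation; the parameters $\rho$, $\sigma$, $\tau$ are assumed to come from the closed-form choice given earlier.

```python
import numpy as np

def nonlinear_pdhg(A, y, lam, alpha, rho, sigma, tau, iters=1000):
    """Illustrative translation of the explicit iterations; A is the
    (m, n) design matrix and y the vector of 0/1 responses."""
    m, n = A.shape
    lam1, lam2 = m * lam * alpha, m * lam * (1 - alpha)
    theta = np.zeros(n)
    s = np.full(m, 0.5)
    u = u_prev = A @ theta              # u^(0), with theta^(-1) = theta^(0)
    v = np.log(s / (1 - s))             # logit of s^(0)
    for _ in range(iters):
        v = (sigma * u + sigma * rho * (u - u_prev) + v) / (1 + sigma)
        s = 1.0 / (1.0 + np.exp(-v))
        theta_hat = theta - tau * (A.T @ (s - y))
        theta = np.sign(theta_hat) * np.maximum(
            0.0, (np.abs(theta_hat) - lam1 * tau) / (1 + lam2 * tau))
        u_prev, u = u, A @ theta
    return theta, s
```

In practice, one would stop once $\normtwo{\boldsymbol{u}^{(k)} - \boldsymbol{v}^{(k)}}$ falls below a tolerance, as discussed above.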
Our proposed explicit nonlinear PDHG algorithm~\eqref{eq:alg_oPDHG_specific} offers many advantages in terms of efficiency and robustness. First, the computational bottlenecks in algorithm~\eqref{eq:alg_oPDHG_specific} consist of matrix-vector multiplications and the estimation of the induced matrix norm $\norm{\matr{A}}_{op}$ given by~\eqref{eq:matr_parameter}. Each matrix-vector multiplication requires at most $O(mn)$ operations, or fewer if the matrix $\matr{A}$ has structure, and the induced matrix norm can be computed in optimal $\Theta(mn)$ time. As mentioned before, this contrasts with most first-order optimization methods, such as the forward-backward splitting algorithm, where the relevant matrix norm is the largest singular value of $\matr{A}$, which takes $O(\min{(m^2n,mn^2)})$ operations to compute; the smaller computational cost makes it easy and efficient to estimate all the parameters of the nonlinear PDHG algorithm, which is needed to achieve an optimal rate of convergence. Second, our algorithm exhibits scalable parallelism because the matrix-vector multiplications can be implemented via parallel algorithms. Third, our algorithm provably computes an $\epsilon$-approximate solution of~\eqref{eq:binary_LR_wp} in $O(\log(1/\epsilon))$ iterations~\cite[Section 5.4]{langlois2021accelerated}. The size of the parameter $\rho$ dictates this linear rate of convergence; it depends on the matrix $\matr{A}$, the tuning parameter $\lambda$, and the hyperparameter $\alpha$. With these three properties, algorithm~\eqref{eq:alg_oPDHG_specific} overcomes the limitations of state-of-the-art and other widely used algorithms for solving the logistic regression problem~\eqref{eq:binary_LR_wp}. We are unaware of any other algorithm that offers all of these advantages simultaneously.
In general, the nonlinear PDHG algorithm~\eqref{eq:alg_oPDHG_specific} can be adapted to any regularized logistic regression problem for which the penalty is strongly convex on the space $(\R^n,\normtwo{\cdot})$. To do so, substitute this penalty for the elastic net penalty in the minimization problem of the scheme~\eqref{eq:kl-nPDHG-alg} and use its solution in place of the third and fourth lines in the explicit algorithm~\eqref{eq:alg_oPDHG_specific}.
\subsection*{Special case: Logistic regression with the lasso penalty}
In some situations, it may be desirable to fit the regularized logistic regression model~\eqref{eq:binary_LR_wp} without the $\ell_2^2$ penalty ($\alpha = 1$). In this case, algorithm~\eqref{eq:alg_oPDHG_specific} does not apply, since it depends on the strong convexity of the $\ell_2^2$ penalty. We present here an algorithm for fitting a logistic regression model regularized by an $\ell_1$ penalty or, in principle, any convex penalty that is not strongly convex, such as the group lasso.
The $\ell_1$-regularized logistic regression problem is
\begin{equation}\label{eq:lr_l1}
\inf_{\boldsymbol{\theta} \in \R^n} \left\{\frac{1}{m}\sum_{i=1}^{m}\log\left(1 + \exp{((\matr A \boldsymbol{\theta})_i)}\right) - \frac{1}{m}\left\langle\boldsymbol{y},\matr A \boldsymbol{\theta} \right\rangle + \lambda \normone{\boldsymbol{\theta}} \right\},
\end{equation}
and its associated saddle-point problem is
\begin{equation}\label{eq:binary_LR_saddle_l1}
\inf_{\boldsymbol{\theta} \in \R^n}\sup_{\boldsymbol{s} \in [0,1]^{m}} \left\{ - \frac{1}{m}\psi(\boldsymbol{s}) - \frac{1}{m}\left\langle\boldsymbol{y} - \boldsymbol{s},\matr A \boldsymbol{\theta} \right\rangle + \lambda \normone{\boldsymbol{\theta}}\right\}.
\end{equation}
The $\ell_1$ penalty in~\eqref{eq:lr_l1} guarantees that problem~\eqref{eq:lr_l1} has at least one solution. Accordingly, the saddle-point problem~\eqref{eq:binary_LR_saddle_l1} also has at least one saddle point. As before, we split the infimum and supremum problems in~\eqref{eq:binary_LR_saddle_l1} by alternating between a nonlinear proximal ascent step using the Kullback--Leibler divergence~\eqref{eq:kl-divergence} and a proximal descent step using a quadratic function, but this time we also update the stepsize parameters at each iteration:
\begin{equation}
\begin{alignedat}{1}\label{eq:kl-lasso-alg}
&\boldsymbol{s}^{(k+1)} = \argmax_{\boldsymbol{s} \in (0,1)^m} \left\{-\psi(\boldsymbol{s}) + \left\langle \boldsymbol{s}, \matr{A}(\boldsymbol{\theta}^{(k)} + \rho^{(k)}(\boldsymbol{\theta}^{(k)} - \boldsymbol{\theta}^{(k-1)}))\right\rangle - \frac{1}{\sigma^{(k)}}D_{H}(\boldsymbol{s},\boldsymbol{s}^{(k)})\right\}, \\
&\boldsymbol{\theta}^{(k+1)} = \argmin_{\boldsymbol{\theta} \in \R^n} \left\{m\lambda \normone{\boldsymbol{\theta}} + \left\langle \boldsymbol{s}^{(k+1)}-\boldsymbol{y}, \matr{A}\boldsymbol{\theta}\right\rangle + \frac{1}{2\tau^{(k)}}\normsq{\boldsymbol{\theta}-\boldsymbol{\theta}^{(k)}}\right\}, \\
&\rho^{(k+1)} = 1/\sqrt{1+\sigma^{(k)}}, \quad \sigma^{(k+1)} = \rho^{(k+1)}\sigma^{(k)}, \quad \tau^{(k+1)} = \tau^{(k)}/\rho^{(k+1)}.
\end{alignedat}
\end{equation}
The scheme starts from initial stepsize parameters $\rho^{(0)} \in (0,1)$, $\tau^{(0)} > 0$ and $\sigma^{(0)} = 1/(\tau^{(0)}\norm{\matr{A}}_{op}^{2})$, and initial values $\boldsymbol{s}^{(0)} \in (0,1)^m$ and $\boldsymbol{\theta}^{(-1)} = \boldsymbol{\theta}^{(0)} \in \R^n$.
The following accelerated nonlinear PDHG algorithm computes a global minimum of~\eqref{eq:lr_l1}:
\begin{equation}\label{eq:alg_l1}
\begin{dcases}
\boldsymbol{v}^{(k+1)} &= \frac{1}{1+\sigma^{(k)}}\left(\sigma^{(k)}\boldsymbol{u}^{(k)} + \sigma^{(k)}\rho^{(k)}\left(\boldsymbol{u}^{(k)} - \boldsymbol{u}^{(k-1)}\right) + \boldsymbol{v}^{(k)}\right) \\
s^{(k+1)}_{i} &= \frac{1}{1+\exp{\left(-v_{i}^{(k+1)}\right)}} \quad \mathrm{for}\,i\in\left\{1,\dots,m\right\}, \\
\hat{\boldsymbol{\theta}}^{(k+1)} &= \boldsymbol{\theta}^{(k)} - \tau^{(k)}\matr A^T\left(\boldsymbol{s}^{(k+1)} - \boldsymbol{y}\right)\\
\theta^{(k+1)}_{j} &= \mathrm{sign~}{\hat{\theta}^{(k+1)}_j}\max{\left(0,\left|\hat{\theta}^{(k+1)}_{j}\right| - m\lambda\tau^{(k)}\right)} \quad \mathrm{for}\,j\in\{1,\dots,n\}, \\
\boldsymbol{u}^{(k+1)} &= \matr{A}\boldsymbol{\theta}^{(k+1)}, \\
\rho^{(k+1)} &= 1/\sqrt{1+\sigma^{(k)}}, \quad \sigma^{(k+1)} = \rho^{(k+1)}\sigma^{(k)}, \quad \tau^{(k+1)} = \tau^{(k)}/\rho^{(k+1)}.
\end{dcases}
\end{equation}
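A Python sketch of these accelerated iterations follows (ours, for illustration, not the authors' reference code; the initial stepsize uses the choice $\tau^{(0)} = 1/(2\norm{\matr{A}}_{op}^{2})$ suggested in the text, and $\rho^{(0)} = 1/2$ is an arbitrary admissible value):

```python
import numpy as np

def nonlinear_pdhg_lasso(A, y, lam, iters=2000):
    """Illustrative sketch of the accelerated l1-only iterations."""
    m, n = A.shape
    L2 = ((A ** 2).sum(axis=1)).max()   # ||A||_op^2
    tau = 1.0 / (2.0 * L2)              # suggested tau^(0)
    sigma = 1.0 / (tau * L2)            # sigma^(0) = 1/(tau^(0)*||A||_op^2)
    rho = 0.5                           # any rho^(0) in (0, 1)
    theta = np.zeros(n)
    s = np.full(m, 0.5)
    u = u_prev = A @ theta
    v = np.log(s / (1 - s))
    for _ in range(iters):
        v = (sigma * u + sigma * rho * (u - u_prev) + v) / (1 + sigma)
        s = 1.0 / (1.0 + np.exp(-v))
        theta_hat = theta - tau * (A.T @ (s - y))
        theta = np.sign(theta_hat) * np.maximum(
            0.0, np.abs(theta_hat) - m * lam * tau)
        u_prev, u = u, A @ theta
        # stepsize updates from the last line of the scheme
        rho = 1.0 / np.sqrt(1.0 + sigma)
        sigma, tau = rho * sigma, tau / rho
    return theta, s
```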
In addition, from the auxiliary variables and the optimality condition on the right in~\eqref{eq:opt_conds}, we have the limit
\[
\lim_{k \to +\infty} \normtwo{\boldsymbol{u}^{(k)} - \boldsymbol{v}^{(k)}} = 0,
\]
which can serve as a convergence criterion. The derivation of algorithm~\eqref{eq:alg_l1} from the iterations in~\eqref{eq:kl-lasso-alg} follows from the derivation of algorithm~\eqref{eq:alg_oPDHG_specific} from the iterations in~\eqref{eq:kl-nPDHG-alg} described in \textit{Material and Methods} by setting $\alpha = 1$. According to results provided by the authors in~\cite[Proposition 5.2]{langlois2021accelerated}, the sequence of iterates $\{(\boldsymbol{\theta}^{(k)},\boldsymbol{s}^{(k)})\}_{k=1}^{+\infty}$ converges to a saddle point $(\boldsymbol{\theta}^{*},\boldsymbol{s}^{*})$ of~\eqref{eq:binary_LR_saddle_l1} at a sublinear rate $O(1/k^2)$ in the number of iterations. Moreover, this sublinear rate satisfies the lower bound
\[
\frac{2\tau^{(0)}\norm{\matr{A}}^{2}_{op}}{1 + 2\tau^{(0)}\norm{\matr{A}}^{2}_{op}}k + \frac{2\tau^{(0)}}{(1 + 2\tau^{(0)}\norm{\matr{A}}^{2}_{op})^2}k^2.
\]
In particular, the constant term multiplying $k^2$ is maximized when $\tau^{(0)} = 1/(2\norm{\matr{A}}_{op}^{2})$. This suggests a practical choice for the free parameter $\tau^{(0)}$.
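This choice can be verified directly: writing $L = \norm{\matr{A}}_{op}$ and differentiating the coefficient of $k^2$ with respect to $\tau^{(0)}$ gives
\[
\frac{\mathrm{d}}{\mathrm{d}\tau^{(0)}}\,\frac{2\tau^{(0)}}{\left(1 + 2\tau^{(0)}L^{2}\right)^{2}} = \frac{2\left(1 - 2\tau^{(0)}L^{2}\right)}{\left(1 + 2\tau^{(0)}L^{2}\right)^{3}},
\]
which is positive for $\tau^{(0)} < 1/(2L^{2})$ and negative thereafter, so the maximum is attained at $\tau^{(0)} = 1/(2L^{2})$.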
In general, the nonlinear PDHG algorithm~\eqref{eq:alg_l1} can be adapted to any regularized logistic regression problem for which the penalty is proper, lower semicontinuous, and convex, and for which a solution exists. To do so, substitute this penalty for the $\ell_1$ penalty in the minimization problem of the scheme~\eqref{eq:kl-lasso-alg} and use its solution in place of the third and fourth lines in the explicit algorithm~\eqref{eq:alg_l1}.
\section*{Examples}
In this section, we compare the run times of the nonlinear PDHG algorithm~\eqref{eq:alg_oPDHG_specific}, the coordinate descent algorithm implemented in the glmnet software package in MATLAB~\cite{hastie2021glmnet}, and the FISTA algorithm~\cite[Algorithm 5]{chambolle2016introduction} on several large synthetic and real data sets. The nonlinear PDHG algorithm~\eqref{eq:alg_oPDHG_specific} and the FISTA algorithm were implemented in MATLAB, with several subroutines implemented in C++ to speed up the computations. The numerical computations were performed on a laptop with one Intel(R) Core(TM) i7-10750H CPU @ 2.60 GHz and 32 GB of RAM.
For each numerical experiment below, we ran glmnet to let it generate (using the default settings) a decreasing sequence of 100 tuning parameters $\{\lambda_{l}\}_{l=1}^{100}$. We then let glmnet solve~\eqref{eq:binary_LR_wp} sequentially for each $\lambda_{l}$, using the solution found for problem~\eqref{eq:binary_LR_wp} with tuning parameter $\lambda_{l}$ as the initial solution for problem~\eqref{eq:binary_LR_wp} with tuning parameter $\lambda_{l+1}$. The initial value $\boldsymbol{\theta} = \boldsymbol{0}$ was used for solving problem~\eqref{eq:binary_LR_wp} with $\lambda_{1}$. We then used the same sequence of tuning parameters $\{\lambda_{l}\}_{l=1}^{100}$ and the same strategy to solve problem~\eqref{eq:binary_LR_wp} sequentially for each $\lambda_{l}$ with our implementation of the nonlinear PDHG algorithm~\eqref{eq:alg_oPDHG_specific} and FISTA. The iterations in algorithm~\eqref{eq:alg_oPDHG_specific}, algorithm~\eqref{eq:alg_l1} (when applicable), and the FISTA algorithm were stopped once the optimality conditions~\eqref{eq:opt_conds} were approximately met: $\normsq{\boldsymbol{u}^{(k)} - \boldsymbol{v}^{(k)}} \leqslant 10^{-8}$ and, for $\alpha \in (0,1]$,
\[
\frac{1}{\lambda\alpha}\left(\frac{1}{m}\matr{A}^T(\boldsymbol{y}-\boldsymbol{s}^{(k)}) - \lambda(1-\alpha)\boldsymbol{\theta}^{(k)}\right)_{i} \in
\begin{dcases}
(-1.0001,-0.9999) &\quad \mathrm{if}\, \theta_{i}^{(k)} > 0\\
[-0.9999,0.9999] &\quad \mathrm{if}\, \theta_{i}^{(k)} = 0\\
(0.9999,1.0001) &\quad \mathrm{if}\, \theta_{i}^{(k)} < 0.
\end{dcases}
\]
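This stopping test can be sketched in Python as follows (the function and its tolerance handling are ours, not the paper's code; the interval bounds above correspond to a tolerance of $10^{-4}$):

```python
import numpy as np

def kkt_approx_ok(A, y, s, theta, lam, alpha, tol=1e-4):
    """Approximate subgradient optimality check for the elastic net
    logistic regression problem (illustrative sketch)."""
    m = len(y)
    g = (A.T @ (y - s) / m - lam * (1 - alpha) * theta) / (lam * alpha)
    active = theta != 0
    # active coordinates: g_j must lie within tol of -sign(theta_j);
    # inactive coordinates: |g_j| must not exceed 1 + tol
    return (np.all(np.abs(g[active] + np.sign(theta[active])) < tol)
            and np.all(np.abs(g[~active]) <= 1.0 + tol))
```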
\subsection*{Synthetic data I: Random dense matrices}
For the first numerical experiment, we draw $\boldsymbol{x}_{i} \sim \mathcal{N}(\boldsymbol{0},\matr{1})$, $i \in \{1,\dots,m\}$, as independent and identically distributed realizations of an $n$-dimensional standard normal distribution. The predictor variables are encoded in an $m \times n$ matrix $\matr A$ whose rows are the vectors $\boldsymbol{x}_i = (x_{i1},\dots, x_{in})$. For the response variable, we set $y_i$ to $0$ or $1$ equally at random for $i \in \{1,\dots,m\}$. This data design, while simple, allows us to compare the performance of our nonlinear PDHG algorithm, glmnet, and the FISTA algorithm on large, dense matrices.
For the simulations, we set $\alpha \in \{0.05,0.35,0.65,0.95,1\}$ and use the dimensions $(m,n) = (1000, 100000)$, $(m,n) = (2500, 40000)$, and $(m,n) = (10000, 10000)$. Table 1 shows the average CPU timings for the glmnet, nonlinear PDHG, and FISTA algorithms. The timings for FISTA include the time required to compute the largest singular value of the matrix $\matr{A}$. \rednote{TABLE 1 HERE}
\subsection*{Synthetic data II: Structured dense matrices}
For the second numerical experiment, we generate synthetic data with a certain sparsity structure using a methodology similar to~\citet[Sections 3 and 4]{bertsimas2019sparse}. We draw $\boldsymbol{x}_{i} \sim \mathcal{N}(\boldsymbol{0},\matr{\Sigma})$, $i \in \{1,\dots,m\}$, as independent realizations of an $n$-dimensional Gaussian distribution with mean $\boldsymbol{0}$ and covariance matrix $\matr{\Sigma}$. The predictor variables are encoded in an $m \times n$ matrix $\matr A$ whose rows are the vectors $\boldsymbol{x}_i = (x_{i1},\dots, x_{in})$. For the covariance matrix $\matr{\Sigma}$, we set $\Sigma_{ij} = r^{|i-j|}$ for $r \in (0,1)$. We randomly draw a weight vector $\boldsymbol{\theta}_{\mathrm{true}} \in \{-1,0,1\}^{n}$ in which only 0.5\% of the coefficients are nonzero. We draw noise components $z_{i}$, $i \in \{1,\dots,m\}$, independently from a zero-mean normal distribution, scaled so that the signal-to-noise ratio equals one, that is, $\normtwo{\boldsymbol{z}} = \normtwo{\matr{A}\boldsymbol{\theta}_{\mathrm{true}}}$. Finally, we compute the response variables by setting
\[
y_{i} = \begin{cases}
&1, \quad \mathrm{if}\, \mathrm{sign}\left(\matr{A}\boldsymbol{\theta}_{\mathrm{true}} + \boldsymbol{z}\right)_i > 0, \\
& 0, \quad \mathrm{otherwise}.
\end{cases}
\]
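The generating procedure above can be sketched as follows (our reading of the setup; the function name, seed handling, and defaults are assumptions, not the authors' script):

```python
import numpy as np

def make_structured_data(m, n, r, sparsity=0.005, seed=0):
    """AR(1)-correlated Gaussian features, sparse +/-1 signal, SNR = 1."""
    rng = np.random.default_rng(seed)
    # covariance Sigma_ij = r^{|i-j|}
    Sigma = r ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
    A = rng.multivariate_normal(np.zeros(n), Sigma, size=m)
    theta = np.zeros(n)
    k = max(1, int(sparsity * n))
    idx = rng.choice(n, size=k, replace=False)
    theta[idx] = rng.choice([-1.0, 1.0], size=k)
    signal = A @ theta
    z = rng.standard_normal(m)
    z *= np.linalg.norm(signal) / np.linalg.norm(z)   # enforce ||z|| = ||A theta||
    y = (np.sign(signal + z) > 0).astype(float)
    return A, y, theta
```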
For the simulations, we set $\alpha = 0.8$ and use the dimensions $(m,n) = (1000, 100000)$, $(m,n) = (2500, 40000)$, and $(m,n) = (10000, 10000)$. For each set of dimensions $(m,n)$, we use the correlation coefficients $r \in \{0.25,0.75\}$, for a total of six simulations. Table 2 shows the average CPU timings for the glmnet, nonlinear PDHG, and FISTA algorithms. The timings for FISTA include the time required to compute the largest singular value of the matrix $\matr{A}$. \rednote{TABLE 2 HERE}
\subsection*{Synthetic data III: Sparse matrices}
For the third numerical experiment, we repeat the same procedure as in the first experiment, but this time we set 95\% of the coefficients in the matrix to zero in order to generate a sparse matrix. Table 3 shows the average CPU timings for the glmnet, nonlinear PDHG, and FISTA algorithms. The timings for FISTA include the time required to compute the largest singular value of the matrix $\matr{A}$. \rednote{TABLE 3 HERE}
\subsection*{Real data I: Dense matrices}
For the fourth numerical experiment, we use the Arcene mass spectrometry data set~\cite{guyon2004result} with a binary response indicating the presence of cancer. The data set has $m = 100$ samples and $n = 10000$ features, and approximately 50\% of the entries are nonzero.
For the simulations, we set $\alpha = 0.8$ and standardize the features to take values in the interval $[0,1]$. Table 4 shows the average CPU timings for the glmnet, nonlinear PDHG, and FISTA algorithms. The timings for FISTA include the time required to compute the largest singular value of the matrix $\matr{A}$. \rednote{TABLE 4 HERE}
\subsection*{Real data II: Sparse matrices}
For the fifth numerical experiment, we use the Dorothea drug discovery data set~\cite{guyon2004result} with a binary response indicating whether a compound (sample) binds to thrombin. The data set has $m = 850$ samples and $n = 100\,000$ features, and less than 1\% of the entries are nonzero.
For the simulations, we set $\alpha = 0.8$. Table 5 shows the average CPU timings for the glmnet, nonlinear PDHG, and FISTA algorithms. The timings for FISTA include the time required to compute the largest singular value of the matrix $\matr{A}$. \rednote{TABLE 5 HERE}
\section*{Material and Methods}\label{sec:material}
\subsection*{Derivation of the explicit algorithm~\eqref{eq:alg_oPDHG_specific}}
We derive here the explicit algorithm~\eqref{eq:alg_oPDHG_specific} from the iterations in~\eqref{eq:kl-nPDHG-alg}. Consider the first line of~\eqref{eq:kl-nPDHG-alg}. This maximization problem has a unique maximum inside the interval $(0,1)^m$~\cite[Propositions 3.21--3.23, Theorem 3.24, Corollary 3.25]{bauschke2003bregman}, and the objective function is differentiable. Thus it suffices to compute the gradient with respect to $\boldsymbol{s}$ and set it to zero to find the global maximum. To do so, it helps to first rearrange the objective function. Substitute $\boldsymbol{s}^{(k)}$ for $\boldsymbol{s}'$ in equation~\eqref{eq:kl-divergence}, use equation~\eqref{eq:kullback_def}, and rearrange to obtain the objective function
\[
\begin{alignedat}{1}
-\psi(\boldsymbol{s}) &+ \left\langle \boldsymbol{s}, \matr{A}(\boldsymbol{\theta}^{(k)} + \rho(\boldsymbol{\theta}^{(k)} - \boldsymbol{\theta}^{(k-1)}))\right\rangle - \frac{1}{\sigma}D_{H}(\boldsymbol{s},\boldsymbol{s}^{(k)})\\
&= -\psi(\boldsymbol{s}) + \left\langle \boldsymbol{s}, \matr{A}(\boldsymbol{\theta}^{(k)} + \rho(\boldsymbol{\theta}^{(k)} - \boldsymbol{\theta}^{(k-1)}))\right\rangle - \frac{1}{\sigma}\left(\psi(\boldsymbol{s}) - \psi(\boldsymbol{s}^{(k)}) - \left\langle \boldsymbol{s}-\boldsymbol{s}^{(k)},\nabla\psi(\boldsymbol{s}^{(k)}) \right\rangle\right) \\
&= -\left(1 + \frac{1}{\sigma}\right)\psi(\boldsymbol{s}) + \left\langle \boldsymbol{s}, \matr{A}(\boldsymbol{\theta}^{(k)} + \rho(\boldsymbol{\theta}^{(k)} - \boldsymbol{\theta}^{(k-1)}))\right\rangle + \frac{1}{\sigma}\left\langle \boldsymbol{s}-\boldsymbol{s}^{(k)},\nabla\psi(\boldsymbol{s}^{(k)}) \right\rangle + \psi(\boldsymbol{s}^{(k)}).
\end{alignedat}
\]
The optimality condition is then
\[
\nabla\psi(\boldsymbol{s}^{(k+1)}) = \frac{\sigma}{1+\sigma}\left(\matr{A}(\boldsymbol{\theta}^{(k)} + \rho(\boldsymbol{\theta}^{(k)} - \boldsymbol{\theta}^{(k-1)}))\right) + \frac{1}{1+\sigma}\nabla\psi(\boldsymbol{s}^{(k)}),
\]
where $(\nabla\psi(\boldsymbol{s}))_i = \log\left(s_{i}/(1-s_{i})\right)$ for $i \in \{1,\dots,m\}$ and $\boldsymbol{s} \in (0,1)^m$. With the auxiliary variables $\boldsymbol{u}^{(k)} = \matr{A}\boldsymbol{\theta}^{(k)}$ and $v_{i}^{(k)} = \log\left(s_{i}^{(k)}/(1-s_{i}^{(k)})\right)$ for $i \in \{1,\dots,m\}$, the optimality condition can be written as
\[
\boldsymbol{v}^{(k+1)} = \frac{1}{1+\sigma}\left(\sigma\boldsymbol{u}^{(k)} + \sigma\rho\left(\boldsymbol{u}^{(k)} - \boldsymbol{u}^{(k-1)}\right) + \boldsymbol{v}^{(k)}\right).
\]
This gives the first line in~\eqref{eq:alg_oPDHG_specific}. The second line follows upon solving for $\boldsymbol{s}^{(k+1)}$ in terms of $\boldsymbol{v}^{(k+1)}$. The fifth line follows from the definition of the auxiliary variable $\boldsymbol{u}^{(k)}$.
Now, consider the second line of~\eqref{eq:kl-nPDHG-alg}. Complete the square and multiply by $\tau/(1+\lambda_{2}\tau)$ to get the equivalent minimization problem
\[
\boldsymbol{\theta}^{(k+1)} = \argmin_{\boldsymbol{\theta} \in \R^n} \left\{\frac{\lambda_{1}\tau}{1 + \lambda_{2}\tau}\normone{\boldsymbol{\theta}} + \frac{1}{2}\normsq{\boldsymbol{\theta} - \left(\boldsymbol{\theta}^{(k)} - \tau\matr{A}^T(\boldsymbol{s}^{(k+1)} - \boldsymbol{y})\right)/(1+\lambda_{2}\tau)}\right\}.
\]
The unique minimum is computed using the soft thresholding operator~\cite{daubechies2004iterative,figueiredo2001wavelet,lions1979splitting}. With the notation
\[
\hat{\boldsymbol{\theta}}^{(k+1)} = \boldsymbol{\theta}^{(k)} - \tau\matr A^T\left(\boldsymbol{s}^{(k+1)} - \boldsymbol{y}\right),
\]
the soft thresholding operator is defined component-wise by
\begin{equation*}
\theta^{(k+1)}_{j} = \mathrm{sign~}{\hat{\theta}^{(k+1)}_j}\max{\left(0,\frac{\left|\hat{\theta}^{(k+1)}_{j}\right| - \lambda_{1}\tau}{1+\lambda_{2}\tau}\right)} \quad \mathrm{for}\,j\in\{1,\dots,n\}.
\end{equation*}
The third and fourth lines of~\eqref{eq:alg_oPDHG_specific} are precisely these two equations.
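In code, this component-wise minimizer reads as follows (an illustrative Python sketch; the function name is ours):

```python
import numpy as np

def prox_elastic_net(theta_hat, lam1, lam2, tau):
    """Closed-form minimizer, component-wise, of
    lam1*tau*||t||_1 + (lam2*tau/2)*||t||^2 + (1/2)*||t - theta_hat||^2:
    soft thresholding followed by multiplicative shrinkage."""
    return np.sign(theta_hat) * np.maximum(
        0.0, (np.abs(theta_hat) - lam1 * tau) / (1.0 + lam2 * tau))
```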
\subsubsection*{Significance statement}
Logistic regression is a widely used statistical model for describing the relationship between a binary response variable and predictor variables in a data set. With the trend toward big data, logistic regression is now commonly applied to data sets with hundreds of thousands to billions of predictor variables. State-of-the-art algorithms for fitting logistic regression models, however, were not designed to handle big data sets; they either scale poorly in size or are prone to produce unreliable numerical results. This paper proposes a nonlinear primal-dual algorithm that provably computes a solution to a logistic regression problem regularized by an elastic net penalty in $O(T(m,n)\log(1/\epsilon))$ operations, where $\epsilon \in (0,1)$ denotes the tolerance and $T(m,n)$ denotes the number of arithmetic operations required to perform matrix-vector multiplication on a data set with $m$ samples, each comprising $n$ features. This result improves on the known complexity bound of $O(\min(m^2n,mn^2)\log(1/\epsilon))$ for first-order optimization methods such as the classic primal-dual hybrid gradient or forward-backward splitting methods.
\bigbreak
\input{1_introduction.tex}
\input{2_methodology.tex}
\input{5_Material-and-Methods.tex}
\newpage
\bibliographystyle{plainnat}
\section{Introduction}
The topological aspects of QCD play a pivotal role in many theoretical problems.
Prominent examples include the explanation of the $\eta'$ meson mass~\cite{Witten:1979vv,Veneziano:1979ec} and a (possible) solution to the strong CP problem, leading to the prediction of a new particle, the QCD axion~\cite{Peccei:1977hh,Weinberg:1977ma,Wilczek:1977pj}. This new particle is also considered a promising Dark Matter candidate. Another wide topic of interest is the interplay between topology and various mechanisms of chiral and axial symmetry breaking/restoration in hot QCD~\cite{Gross:1980br,Ringwald:1999ze,Bottaro:2020dqh}.
Lattice simulations were first applied to the problem of axion properties in Ref.~\cite{Berkowitz:2015aua}. In particular, the axion mass can be extracted from lattice data on the high-temperature topological susceptibility under certain assumptions about the axion's cosmological evolution (the post-inflationary scenario). First results were obtained in~\cite{Berkowitz:2015aua} in the quenched approximation, followed by numerous works with dynamical quarks~\cite{Bonati:2015vqz,Borsanyi:2016ksw,Petreczky:2016vrs,Bonati:2016tvi,Burger:2017xkz,Burger:2018fvb}.
In these proceedings we report preliminary results from our ongoing project on simulations of finite-$T$ QCD with Wilson twisted mass fermions at the physical point. We extend our previous study of axions, performed at higher-than-physical pion masses in~\cite{Burger:2017xkz,Burger:2018fvb}.
We calculate the temperature dependence of several chiral observables, including the chiral condensate and its susceptibility, and relate them to the high-temperature topological susceptibility via QCD symmetry relations. Then, using the observed Dark Matter density as an input, we obtain a lower limit on the (post-inflationary) axion mass.
\section{Lattice setup}
We perform simulations with $N_f=2+1+1$ Wilson twisted mass fermions tuned at maximal twist~\cite{Frezzotti:2003ni,Shindler:2007vp}.
A summary of our lattice ensembles is given in Table~\ref{tbl:summary}. The strange and charm quark masses are set to their physical values, and four different pion masses are available, including the physical point. For the lattice spacing and other parameters we rely on ETMC $T=0$ results~\cite{Alexandrou:2014sha,Alexandrou:2020okk}.
We employ the fixed-scale approach for finite-$T$ simulations: for each ensemble the lattice spacing $a$ is fixed, and the temperature is varied through the temporal lattice size $L_t$. Thus we cover approximately the temperature range $120 \lesssim T \lesssim 600$~MeV.
Additional details on our lattice simulations can be found in~\cite{Burger:2018fvb,Kotov:2021rah}.
\begin{table}[thb]
\begin{center}
\begin{tabular}{c c c}
\hline
\hspace*{.2cm}Ensemble\hspace*{.2cm} & \hspace*{.2cm}$m_\pi$ [MeV]\hspace*{.2cm} & \hspace*{.2cm}$a$ [fm]\hspace*{.2cm} \\
\hline
M140 & 139(1) & 0.0801(4) \\
D210 & 213(9) & 0.0646(7) \\
A260 & 261(11) & 0.0936(13) \\
B260 & 256(12) & 0.0823(10) \\
A370 & 364(15) & 0.0936(13) \\
B370 & 372(17) & 0.0823(10) \\
D370 & 369(15) & 0.0646(7) \\
\hline
\end{tabular}
\caption{Parameters of $N_f=2+1+1$ lattice ensembles used for the analysis~\cite{Alexandrou:2014sha,Alexandrou:2020okk}.
\label{tbl:summary}
}
\end{center}
\end{table}
\section{Observables}
We consider the following chiral observables:
\begin{itemize}
\item Chiral condensate $\langle \bar{\psi}\psi\rangle=\langle \bar{u}u\rangle+\langle \bar{d}d\rangle=\dfrac{T}{V}\dfrac{\partial Z}{\partial m_l}=\dfrac{1}{L_t L_s^3}\langle\mathop{\mathrm{Tr}}\nolimits M^{-1}\rangle$.
\item Chiral susceptibility $\chi_L=\dfrac{\partial}{\partial m_l}\langle \bar{\psi}\psi\rangle=\chi_\text{disc}+\chi_\text{conn}$,
consisting of disconnected and connected parts.
\item By combining chiral condensate $\langle \bar{\psi}\psi\rangle$ and its susceptibility $\chi_L$ we introduce the new observable
\begin{equation}
\langle\bar\psi\psi\rangle_3 = \langle\bar\psi\psi\rangle - m_l\, \chi_L,
\label{eq:pbp3-def}
\end{equation}
which is free from linear additive renormalization as well as from linear correction to scaling. For additional details on $ \langle\bar\psi\psi\rangle_3$ and its properties we refer to~\cite{Kotov:2021rah,lat21_proc}.
\end{itemize}
In order to measure the topological susceptibility $\chi_\text{top}$ we employ its relation to the disconnected chiral susceptibility $\chi_\text{disc}$
via the QCD symmetry arguments~\cite{Kogut:1998rh,Bazavov:2012qja,Buchoff:2013nra}. In particular, the following continuum relation is valid:
\begin{equation}
\label{eq:chit-chi5}
\chi_\text{top}=\frac{\langle Q^2\rangle}{V}=m_l^2\,\chi_{5,\text{disc}},
\end{equation}
where $Q$ is the topological charge and $\chi_{5,\text{disc}}$ is the disconnected pseudoscalar susceptibility. The direct measurement of $\chi_{5,\text{disc}}$ on the lattice is difficult due to large fluctuations. Instead, we note that above the chiral transition $\chi_{5,\text{disc}}$ becomes equal to $\chi_\text{disc}$. Then, Eq.~\eqref{eq:chit-chi5} reads
\begin{equation}
\label{eq:chit-pbp}
\chi_\text{top}(T\gtrsim T_c)=m_l^2\,\chi_\text{disc}=m_l^2\,\frac{V}{T}\left( \langle{(\bar\psi \psi)^2}\rangle_l - \langle{\bar\psi \psi}\rangle_l^2 \right)
\end{equation}
defining the topological susceptibility in high-$T$ region. Finally, we note that Eqs.~\eqref{eq:chit-chi5}--\eqref{eq:chit-pbp} are exact only in the continuum limit, meaning that fine lattices should be used in order to avoid large artifacts.
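As a toy numerical illustration of the fluctuation formula~\eqref{eq:chit-pbp} (the sample values below are mock Gaussian draws in arbitrary units, not lattice measurements):

```python
import numpy as np

# Mock per-configuration condensate values; parameters are illustrative.
rng = np.random.default_rng(0)
m_l, vol_over_T = 0.003, 1.0e4              # light quark mass, V/T (made up)
pbp = rng.normal(0.02, 0.001, size=10000)   # mock <psi-bar psi> samples
chi_disc = vol_over_T * pbp.var(ddof=1)     # disconnected susceptibility
chi_top = m_l ** 2 * chi_disc               # Eq. (chit-pbp)
print(chi_top)
```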
\section{Results}
We present the results on topological susceptibility measured according to Eq.~\eqref{eq:chit-pbp} at the physical pion mass in Fig.~\ref{fig:top_results}. We compare it with the results obtained in other lattice approaches~\cite{Bonati:2018blm,Taniguchi:2016tjc,Petreczky:2016vrs,Borsanyi:2016ksw} and also with our previous study at higher pion masses~\cite{Burger:2018fvb}.
In order to set a common scale for comparison, the results at non-physical pion masses are rescaled according to $\chi_\text{top}\propto m_\pi^4$. Such behavior is predicted by the dilute instanton gas approximation (DIGA) and can also be obtained from more general considerations based on the analyticity of the chiral condensate in the light quark mass~\cite{Burger:2018fvb}.
Fig.~\ref{fig:top_results} shows that the different studies lead to similar results following the same trend, although complete numerical agreement is still lacking.
\begin{figure}[bt]
\begin{center}
\includegraphics{chitop_phys_comb}
\end{center}
\vspace*{-.3cm}
\caption{Topological susceptibility vs temperature obtained in this work and in Refs.~\cite{Bonati:2018blm,Taniguchi:2016tjc,Petreczky:2016vrs,Borsanyi:2016ksw,Burger:2018fvb}. The results for non-physical pion masses are rescaled as $\chi_\text{top}\propto m_\pi^4$.
\label{fig:top_results}}
\end{figure}
In order to obtain a simple analytical expression for topological susceptibility, we fit it in Fig.~\ref{fig:top_fits} with DIGA-inspired high-temperature behavior
\begin{equation}
\chi_\text{top}\simeq A\,T^{-d}.
\label{eq:chi_top-diga}
\end{equation}
The data are well described by the power-law decay~\eqref{eq:chi_top-diga} in the region $T\gtrsim300$~MeV.
For higher than physical pion masses the fits are performed over the combined data from all available ensembles (see Table~\ref{tbl:summary}).
The data show no apparent lattice-spacing dependence, suggesting that lattice artifacts are small, as was also confirmed by a more detailed analysis in~\cite{Burger:2018fvb}.
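The fit~\eqref{eq:chi_top-diga} amounts to a linear regression in log-log variables, as the following Python sketch shows (the data points are synthetic placeholders, not our lattice results):

```python
import numpy as np

# Fit chi_top ~ A * T**(-d) by linear regression in log-log variables.
T = np.array([300.0, 350.0, 400.0, 500.0, 600.0])   # temperatures in MeV
chi = 2.0e9 * T ** -7.6                              # mock "measurements"
slope, logA = np.polyfit(np.log(T), np.log(chi), 1)
print(-slope)  # recovers the decay exponent d = 7.6
```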
\begin{figure}[tb]
\vspace*{.2cm}
\begin{center}
\includegraphics{chitop_fits}
\end{center}
\vspace*{-.3cm}
\caption{Fits of the topological susceptibility with power-law decay~\eqref{eq:chi_top-diga}.
All ensembles from Table~\ref{tbl:summary} corresponding to the same value of pion mass are treated equally.
\label{fig:top_fits}
}
\end{figure}
The temperature dependence of $\langle\bar{\psi}\psi\rangle_3$~\eqref{eq:pbp3-def} is shown in Fig.~\ref{fig:pbp3}. First, we rescale the data with the leading-order Griffiths-analyticity prediction $\langle\bar{\psi}\psi\rangle_3\propto m_\pi^6$. Then, we fit with the universal scaling behavior $\langle\bar{\psi}\psi\rangle_3\propto (T-T_0)^{-\gamma-2\beta\delta}$, where $T_0$ is fixed to the critical temperature $T_0=138$~MeV in the chiral limit~\cite{Kotov:2021rah,lat21_proc}, and the critical exponents $\beta$, $\gamma$, and $\delta$ are fixed to those of the 3D $O(4)$ universality class.
As expected, the universal behavior sets in near the transition and persists up to $T\simeq300$~MeV.
Above this temperature, the rescaled data from different pion masses merge onto a single curve, indicating simple Griffiths-analyticity behavior.
It is intriguing that this change of trend in $\langle\bar{\psi}\psi\rangle_3(T)$ coincides with the onset of the DIGA-like behavior of the topological susceptibility $\chi_\text{top}(T)$ mentioned above, both occurring at approximately the same temperature $T\simeq300$~MeV.
\begin{figure}[tb]
\begin{center}
\includegraphics[width=.6\linewidth]{Osub-highT-scaled}
\end{center}
\caption{$\langle\bar\psi\psi\rangle_3$ vs temperature, also fitted with the 3D $O(4)$ scaling behavior $\langle\bar{\psi}\psi\rangle_3\propto (T-T_0)^{-\gamma-2\beta\delta}$. For higher than physical pion masses the data are rescaled as $\langle\bar{\psi}\psi\rangle_3\propto m_\pi^6$.
\label{fig:pbp3}
}
\end{figure}
Having determined the temperature dependence of the topological susceptibility~\eqref{eq:chi_top-diga} in the high-$T$ region, we can use it to estimate the axion mass~\cite{Turner:1985si,Berkowitz:2015aua}:
\begin{equation}
m_A(T)=\frac{\sqrt{\chi_\text{top}(T)}}{f_A}.
\label{eq:mA}
\end{equation}
Since the exact value of the axion decay constant $f_A$ is unknown, we take relation~\eqref{eq:mA} at two moments of time (or, equivalently, temperatures), corresponding to the evolution of axions in the early Universe and to the present day. The two moments are connected by the axion equation of motion, which allows one to obtain today's axion density $\Omega_A$ as a function of the axion mass.
Then, assuming that axions account for the observed Dark Matter density $\Omega_\text{DM}$, the axion mass can be extracted.
For a detailed derivation we refer to the original works~\cite{Turner:1985si,Berkowitz:2015aua} (see also the review~\cite{Lombardo:2020bvn} and references therein). In particular, we use the result of Ref.~\cite{Burger:2018fvb}
\begin{equation}
\Omega_A=F(A,d,\ldots)\,m_A^{-\frac{3.053+d/2}{2.027+d/2}},
\label{eq:omega}
\end{equation}
where $F$ is a function of the topological susceptibility parameters~\eqref{eq:chi_top-diga} (amplitude~$A$ and decay exponent~$d$) and of the relevant cosmological constants.
We plot the result~\eqref{eq:omega}, using the parameters extracted from our fits, in Fig.~\ref{fig:omega}. For the physical-pion-mass ensemble we also explore the limiting cases by increasing or decreasing the amplitude $A$ by a factor of $10^4$ and by setting the decay exponent to $d=8$ and $d=4$, corresponding to the pure DIGA prediction and to a very slow decay of the topological susceptibility, respectively.
The actual fraction of the axion density in $\Omega_\text{DM}$ is unknown, so the ratio $\Omega_A/\Omega_\text{DM}$ plays the role of a free parameter. By setting $\Omega_A=\Omega_\text{DM}$ we obtain a lower limit on the axion mass.
The curves in Fig.~\ref{fig:omega} corresponding to different pion masses lead to virtually the same value of the axion mass.
Indeed, as shown earlier in Fig.~\ref{fig:top_results}, the results for the topological susceptibility from different ensembles lie close to each other.
Therefore, in this preliminary analysis we retain the result of Ref.~\cite{Burger:2018fvb} for the lower bound on the axion mass, $m_A=20(5)$~$\mu$eV.
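Setting $\Omega_A=\Omega_\text{DM}$ in Eq.~\eqref{eq:omega} and solving for $m_A$ gives the lower bound $m_A = (F/\Omega_\text{DM})^{(2.027+d/2)/(3.053+d/2)}$. A minimal numerical sketch of this inversion reads as follows; the normalization $F$ used here is a purely hypothetical placeholder, not a fitted value:

```python
def axion_mass_lower_bound(F, d, omega_dm=0.27):
    """Invert Omega_A = F * m_A^{-p}, with p = (3.053 + d/2)/(2.027 + d/2),
    at Omega_A = Omega_DM; returns the lower bound on m_A in the units
    implied by F."""
    p = (3.053 + d / 2.0) / (2.027 + d / 2.0)
    return (F / omega_dm) ** (1.0 / p)

# Hypothetical normalization F (NOT a measured value); d = 8 is the pure
# DIGA exponent mentioned in the text.
m_lower = axion_mass_lower_bound(F=10.0, d=8.0)
print(f"m_A >= {m_lower:.2f} (in the units implied by F)")
```

A larger $F$ (i.e., a larger topological susceptibility amplitude) pushes the lower bound on $m_A$ upward, as expected from Eq.~\eqref{eq:omega}.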
\begin{figure}[tb]
\begin{center}
\includegraphics{rho_a-2}
\end{center}
\caption{The axion fraction in Dark Matter vs the axion mass.
For the physical-pion-mass ensemble the parameters $A$ and $d$ are varied as indicated in the legend.
\label{fig:omega}
}
\end{figure}
\section{Summary}
We measured chiral observables and the topological susceptibility in the region $120 \lesssim T \lesssim 600$~MeV.
The temperature dependence of $\langle\bar{\psi}\psi\rangle_3$~\eqref{eq:pbp3-def} shows a clear threshold at $T\simeq 300$~MeV,
above which a trend consistent with 3D $O(4)$ scaling gives way to simple leading-order Griffiths-analytic behavior.
Around the same temperature $T\simeq 300$~MeV the topological susceptibility starts to follow a DIGA-like power-law decay.
The high-$T$ topological results from different studies are in the same ballpark, but still lack complete quantitative agreement.
Nevertheless, the final prediction for the axion mass is rather insensitive to these differences. The same holds for its dependence on the pion mass, once the appropriate scaling is applied.
\section*{Acknowledgments}
This work is partially supported by STRONG-2020 under grant agreement No.~824093, by RFBR grant 18-02-40126, and by the ``BASIS'' foundation.
Numerical simulations have been carried out on the computational resources of CINECA (INFN--CINECA agreement project INF21\_sim and ISCRA project IsB20), on the supercomputer ``Govorun'' of the Joint Institute for Nuclear Research, and on the computing resources of the federal collective usage
center Complex for Simulation and Data Processing for Mega-science Facilities at NRC ``Kurchatov Institute'', http://ckp.nrcki.ru/.
\bibliographystyle{JHEP}
\section{Details on the Models}
\label{app:models}
\subsection{The CKM Model and Parameters}
\label{app:models:CKM}
When evaluating low-energy observables within the \ac{SM}\xspace, the \ac{CKM}\xspace matrix elements
are evaluated using the Wolfenstein parametrization~\cite{Wolfenstein:1983yz} expanded to
order $\lambda^8$~\cite{Charles:2004jd}. The Wolfenstein parameters $\lambda$
and $A$ are used without modifications. The $\rho$ and $\eta$ parameters are traded
for $\bar{\rho}$ and $\bar{\eta}$~\cite{Charles:2004jd}, the coordinates of the
apex of the standard unitarity triangle. The two parameters are defined to all orders
in $\lambda$ as~\cite{Charles:2004jd}
\begin{equation}
\begin{aligned}
\bar\rho & = -\Re \frac{V_{ud}^{\phantom{*}}\,V_{ub}^*}{V_{cd}^{\phantom{*}}\,V_{cb}^*}\,, &
\bar\eta & = -\Im \frac{V_{ud}^{\phantom{*}}\,V_{ub}^*}{V_{cd}^{\phantom{*}}\,V_{cb}^*}\,.
\end{aligned}
\end{equation}
These four parameters can be accessed \textit{via} \code{CKM::lambda}, \code{CKM::A},
\code{CKM::rhobar}, and \code{CKM::etabar}, respectively.\\
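At leading order in $\lambda$ this all-orders definition reduces to $\bar\rho \simeq \rho\,(1-\lambda^2/2)$ and $\bar\eta \simeq \eta\,(1-\lambda^2/2)$. A minimal numerical cross-check is sketched below; it truncates the Wolfenstein expansion at $\mathcal{O}(\lambda^3)$ (not the $\mathcal{O}(\lambda^8)$ expansion used by \texttt{EOS}\xspace), and the input values are illustrative, not fitted:

```python
# Leading-order Wolfenstein parametrization of the relevant CKM elements.
def ckm_leading_order(lam, A, rho, eta):
    V_ud = 1.0 - lam**2 / 2.0
    V_ub = A * lam**3 * (rho - 1j * eta)
    V_cd = -lam
    V_cb = A * lam**2
    return V_ud, V_ub, V_cd, V_cb

# Illustrative inputs (not fitted values).
lam, A, rho, eta = 0.225, 0.82, 0.16, 0.35
V_ud, V_ub, V_cd, V_cb = ckm_leading_order(lam, A, rho, eta)

# All-orders definition of the unitarity-triangle apex, as in the equation above:
# rhobar + i etabar = -(V_ud V_ub^*) / (V_cd V_cb^*).
z = -(V_ud * V_ub.conjugate()) / (V_cd * V_cb.conjugate())
rhobar, etabar = z.real, z.imag
print(rhobar, etabar)  # ~ rho * (1 - lam^2/2) and eta * (1 - lam^2/2)
```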
A frequent physics use case involves inferring the absolute value or complex argument
of a \ac{CKM}\xspace matrix element from data. Choosing the \ac{CKM}\xspace model using \code{'model': 'CKM'}
ensures that each complex-valued \ac{CKM}\xspace matrix element is parametrized in terms of
its absolute value and complex argument. For example, the parametrization
of the \ac{CKM}\xspace matrix element $V_{ub}$ involves the parameters
\code{CKM::abs(V_ub)} and \code{CKM::arg(V_ub)}. The names of the remaining \ac{CKM}\xspace
parameters follow the same naming scheme.
\subsection{The WET Model, Operator Bases, and Parameters}
\label{app:models:WET}
The observables for low-energy processes below the electroweak scale rely on
a description in the \ac{WET}\xspace, both within the
\ac{SM}\xspace~\cite{Buchalla:1995vs,Buras:1998raa,Buras:2011we} and in \ac{BSM}\xspace scenarios
\cite{Aebischer:2017gaw,Jenkins:2017jig}. Observables can be evaluated
within the \ac{WET}\xspace by setting the \code{model} option to \code{WET}.
Within this model, \ac{WET}\xspace Wilson coefficients are parametrized
by individual \texttt{EOS}\xspace parameters, and \ac{CKM}\xspace matrix elements are treated as
in the \code{CKM} model; see \refapp{models:CKM}.
Within \texttt{EOS}\xspace, the \ac{WET}\xspace is parametrized as
\begin{equation}
\mathcal{L}^{\textrm{WET}} = \sum_{\mathcal{S}} \mathcal{L}^{\mathcal{S}}\,,
\end{equation}
where $\mathcal{S}$ denotes a \emph{sector} of the \ac{WET}\xspace, \emph{i.e.}, a set of
operators with definite quantum numbers under global symmetries preserved by
the renormalization group evolution~\cite{Aebischer:2017ugx}.
For each sector, \texttt{EOS}\xspace follows the \texttt{WCxf}\xspace convention~\cite{Aebischer:2017ugx}:
\begin{equation}
\mathcal{L}^{\mathcal{S}} \equiv
\sum_{\mathcal{O}_i^{\mathcal{S}} \neq \mathcal{O}_i^{\mathcal{S},\dagger}}
\left[\mathcal{C}^\mathcal{S}_i \, \mathcal{O}^{\mathcal{S}}_i + \text{h.c.}\right]
+ \sum_{\mathcal{O}_i^{\mathcal{S}} = \mathcal{O}_i^{\mathcal{S},\dagger}}
\mathcal{C}^\mathcal{S}_i \, \mathcal{O}^{\mathcal{S}}_i\,.
\end{equation}
Here $\mathcal{O}$ denotes the dimension-six operators, and $\mathcal{C}$ is a
dimensionless Wilson coefficient renormalized at an appropriate low-energy scale $\mu$.
This scale is accessible as an \class{eos.Parameter}. The prefix part of its
qualified name corresponds to the sector and the name part of its qualified name
is \texttt{mu}, i.e., for the sector \texttt{sbsb} this parameter is named
\texttt{sbsb::mu}.\\
As of version 1.0, \texttt{EOS}\xspace supports the following sectors:
\begin{itemize}
\item \texttt{sb}
\item \texttt{sbee}, \texttt{sbmumu}, \texttt{sbtautau}
\item \texttt{sbnunu}
\item \texttt{sbsb}
\item \texttt{cbenue}, \texttt{cbmunumu}, \texttt{cbtaunutau}
\item \texttt{ubenue}, \texttt{ubmunumu}, \texttt{ubtaunutau}
\end{itemize}
Changes to the parameters representing the Wilson coefficients only affect observables
that are constructed with the \code{model} option set to \code{'WET'}.
By convention the Wilson coefficients comprise both their \ac{SM}\xspace value and
potential \ac{BSM}\xspace shifts, i.e.:
\begin{align}
\mathcal{C}_i(\mu) &
= \mathcal{C}_i^\text{SM}(\mu) + \mathcal{C}_i^\text{BSM}(\mu)\,.
\end{align}
A complete list of the sectors and their operators supported by \texttt{EOS}\xspace is part
of the \texttt{WCxf}\xspace basis file \cite{wcxf:EOS-basis}.
The parameters describing the Wilson coefficients are listed as part of the
\texttt{EOS}\xspace documentation~\cite[List of Parameters]{EOS:doc}. Using \texttt{WCxf}\xspace and
the \texttt{wilson}\xspace package~\cite{Aebischer:2018bkb}, constraints on the \ac{WET}\xspace can be readily interpreted
as constraints at a different scale in the \ac{WET}\xspace or as constraints on
Wilson coefficients in the Standard Model Effective Field Theory~\cite{Buchmuller:1985jz,Grzadkowski:2010es}.
The \ac{SM}\xspace values of the \ac{WET}\xspace Wilson coefficients up to mass dimension six are
known to high precision. By default, \texttt{EOS}\xspace evaluates observables
with the \code{model} option set to \code{'SM'}.
This choice leads \texttt{EOS}\xspace to compute the \ac{WET}\xspace Wilson coefficients
at the electroweak scale $\mu_0$ and evolve them to the appropriate
low-energy scale $\mu$~\footnote{%
The choice of $\mu_0$ should, in general, not be changed,
and for some sectors there is more than one high scale involved.
}.
\begin{itemize}
\item For the sectors \code{sb}, \code{sbee}, \code{sbmumu}, and \code{sbtautau},
the SM values are computed to NNLO in QCD~\cite{Adel:1993ah,Greub:1997hf,Bobeth:1999mk}.
The RG evolution to the low-energy scale $\sim m_b$ crucially requires the resummation of radiative QCD and partially
also QED corrections~\cite{Chetyrkin:1996vx,Bobeth:2003at,Gorbahn:2004my,Gorbahn:2005sa,Huber:2005ig}.
\item For the sector \code{sbnunu}, the SM values are computed to NLO in QCD~\cite{Misiak:1999yg,Buchalla:1998ba}.
\item For the sectors \code{cbenue} through \code{ubtaunutau}, the SM values are
computed to next-to-leading order in QED~\cite{Sirlin:1981ie}.
\item For the sector \code{sbsb}, the SM values are computed to NLO in
QCD~\cite{Buras:1990fn} and NLO in EW~\cite{Gambino:1998rt}.
The RG evolution to the low scale $\sim m_b$ crucially requires the resummation
of radiative QCD corrections~\cite{Buras:2000if}.
\end{itemize}
\section{Collection of Examples}
\label{app:PlotExamples}
Here we collect a number of code examples that are used in the main text to produce
a variety of plots. They have been moved to this appendix to improve the legibility of the main text.
\begin{lstlisting}[%
language=iPython,%
caption={%
Histogram samples of the 1D-marginal posterior for $|V_{cb}|$ and plot their kernel density estimate.
This code is used to produce \refout{inference:posterior-sample-hist+kde} (left).
\label{lst:plot-ex:inference:posterior-sample-hist}
}
]
plot_args = {
'plot': {
'x': { 'label': r'$|V_{cb}|$', 'range': [38e-3, 47e-3] },
'legend': { 'location': 'upper left' }
},
'contents': [
{
'type': 'histogram',
'data': { 'samples': parameter_samples[:, 0] }
},
{
'type': 'kde', 'color': 'C0', 'label': 'posterior', 'bandwidth': 2,
'range': [38e-3, 47e-3],
'data': { 'samples': parameter_samples[:, 0] }
}
]
}
eos.plot.Plotter(plot_args).plot()
\end{lstlisting}
\begin{lstlisting}[%
language=iPython,%
caption={%
Plot contours of the joint 2D-marginal posterior of the parameters $|V_{cb}|$ and $f_+(0)$
at $68\%$ and $95\%$ probability using a kernel density estimate.
This code is used to produce \refout{inference:posterior-sample-hist+kde} (right).
\label{lst:plot-ex:inference:posterior-sample-kde}
}
]
plot_args = {
'plot': {
'x': { 'label': r'$|V_{cb}|$', 'range': [38e-3, 47e-3] },
'y': { 'label': r'$f_+(0)$', 'range': [0.6, 0.75] },
},
'contents': [
{
'type': 'kde2D', 'color': 'C1', 'label': 'posterior',
'levels': [68, 95], 'contours': ['lines', 'areas'], 'bandwidth': 3,
'data': { 'samples': parameter_samples[:, (0,1)] }
}
]
}
eos.plot.Plotter(plot_args).plot()
\end{lstlisting}
\begin{lstlisting}[%
language=iPython,%
caption={%
Plot the analytical 1D PDFs and 1D-marginal histograms of pseudo events
for the decays $\bar{B}\to D\lbrace \mu^-,\tau^-\rbrace \bar{\nu}$.
The pseudo events for the semimuonic decay are obtained from \reflst{simulation:sample-1D}.
The result is shown in the left plot of \refout{simulation:plot+histogram}.
\label{lst:plot-ex:simulation:plot+histogram-1D}
}
]
plot_args = {
'plot': {
'x': {'label': r'$q^2$', 'unit': r'$\textnormal{GeV}^2$', 'range': [0.0, 11.60]},
'y': {'label': r'$P(q^2)$', 'range': [0.0, 0.30]},
'legend': {'location': 'upper left'}
},
'contents': [
{
'label': r'samples ($\ell=\mu$)',
'type': 'histogram',
'data': {'samples': mu_samples},
'color': 'C0'
},
{
'label': r'samples ($\ell=\tau$)',
'type': 'histogram',
'data': {'samples': tau_samples},
'color': 'C1'
},
{
'label': r'PDF ($\ell=\mu$)',
'type': 'signal-pdf',
'pdf': 'B->Dlnu::dGamma/dq2;l=mu',
'kinematic': 'q2',
'range': [0.02, 11.60],
'kinematics': {'q2_min': 0.02, 'q2_max': 11.60},
'color': 'C0'
},
{
'label': r'PDF ($\ell=\tau$)',
'type': 'signal-pdf',
'pdf': 'B->Dlnu::dGamma/dq2;l=tau',
'kinematic': 'q2',
'range': [3.17, 11.60],
'kinematics': {'q2_min': 3.17, 'q2_max': 11.60},
'color': 'C1'
},
]
}
eos.plot.Plotter(plot_args).plot()
\end{lstlisting}
\section{Constraints Data Format}
\label{app:constraints-format}
Constraints are stored as \texttt{YAML}\xspace~\cite{YAML} files within the \texttt{EOS}\xspace source repository in the directory
\texttt{eos/constraints/}. Each constraint file is an associative array, with the
top-level keys corresponding to the constraint's qualified name, and the value describing
the constraint data. The constraint data itself is also an associative array. The
\texttt{type} key determines the type of the likelihood, and therefore which other keys
must be present. \texttt{EOS}\xspace supports the following types of likelihood:
\begin{description}
\item[\hlred{\texttt{Gaussian}}] The likelihood is a univariate Gaussian density.
It requires the following keys:
\smallskip
\begin{description}
\item[\hlgreen{\texttt{observable}}] The name of the observable that appears in this likelihood, as an \class{eos.QualifiedName}.
\smallskip
\item[\hlgreen{\texttt{kinematics}}] The kinematic variables and their values that underlie
the likelihood's observable, as an associative array.
\smallskip
\item[\hlgreen{\texttt{options}}] The option keys and values that underlie the likelihood's
observable, as an associative array.
\smallskip
\item[\hlgreen{\texttt{mean}}] The mean of the likelihood, as a floating point value.
\smallskip
\item[\hlgreen{\texttt{sigma-stat}}] The statistical uncertainty of the likelihood, as an associative array
with keys \texttt{hi} and \texttt{lo}. For a completely symmetric uncertainty, set both keys to the same value.
\smallskip
\item[\hlgreen{\texttt{sigma-sys}}] The systematic uncertainty of the likelihood, as an associative array
with keys \texttt{hi} and \texttt{lo}. For a completely symmetric uncertainty, set both keys to the same value.
\smallskip
\item[\hlgreen{\texttt{dof}}] The degrees of freedom, as a floating point value.
Must be set to \texttt{1} to be backward compatible.
\end{description}
%
\medskip
%
\item[\hlred{\texttt{MultivariateGaussian(Covariance)}}] The likelihood is a multivariate Gaussian density,
and correlations and total uncertainties are specified through the covariance matrix.
It requires the following keys:
\smallskip
\begin{description}
\item[\hlgreen{\texttt{dim}}] The dimension of the covariance matrix, as an integer. Denoted below as $D$.
\smallskip
\item[\hlgreen{\texttt{dof}}] The degrees of freedom, as an integer.
\smallskip
\item[\hlgreen{\texttt{observables}}] The names of the observables that appear in this likelihood, as an
ordered list of \class{eos.QualifiedName} of length $P$. Denoted below as $\vec{o}$.
\smallskip
\item[\hlgreen{\texttt{kinematics}}] The kinematic configuration for each of the observables, as an ordered list
of length $P$ of associative arrays.
\smallskip
\item[\hlgreen{\texttt{options}}] The options for each of the observables, as an ordered list of length $P$
of associative arrays.
\smallskip
\item[\hlgreen{\texttt{means}}] The mean values of the likelihood, as an ordered list of floating point values.
Denoted below as $\vec{\mu}$.
\smallskip
\item[\hlgreen{\texttt{covariance}}] The $D\times D$-dimensional covariance matrix of the likelihood, as an ordered list of ordered lists of floating point values (row-first ordering). Denoted below as $\Sigma$.
\smallskip
\item[\hlgreen{\texttt{response}}] The optional $D\times P$-dimensional response matrix that converts a
$P$-dimensional theory prediction into a $D$-dimensional measurement. If not specified, \texttt{EOS}\xspace assumes that $P=D$
and that the response matrix is the identity matrix. Specified as an ordered list of ordered lists of floating point values (row-first ordering). The response matrix is used to fold the theory predictions, which enables fits involving experimental results
that have not been or cannot be unfolded. Denoted below as $R$.
\smallskip
\end{description}
The logarithm of the likelihood $L$ reads
\begin{equation}
-2 \ln L = -2 \ln \mathcal{N}_D(R \vec{o}\,|\,\vec{\mu}, \Sigma)
\sim \left(\vec\mu - R \vec{o}\right)^T \Sigma^{-1} \left(\vec\mu - R \vec{o}\right)\,.
\end{equation}
In the above, $\mathcal{N}_D(\cdot\,|\,\vec{\mu},\Sigma)$ denotes a $D$-variate Gaussian \ac{PDF}\xspace centered
at $\vec{\mu}$ with covariance $\Sigma$.
%
\medskip
%
\item[\hlred{\texttt{Mixture}}] The likelihood is a mixture density, with all mixture components
being multivariate Gaussian densities. Their correlations and total uncertainties are specified
through their respective covariance matrices.
It requires the following keys:
\smallskip
\begin{description}
\item[\hlgreen{\texttt{dim}}] The dimension of each covariance matrix, as an integer. Denoted below as $D$.
\smallskip
\item[\hlgreen{\texttt{observables}}] The names of the observables that appear in this likelihood, as an
ordered list of \class{eos.QualifiedName} of length $D$. Denoted below as $\vec{o}$.
\smallskip
\item[\hlgreen{\texttt{kinematics}}] The kinematic configuration for each of the observables, as an ordered list
of length $D$ of associative arrays.
\smallskip
\item[\hlgreen{\texttt{options}}] The options for each of the observables, as an ordered list of length $D$
of associative arrays.
\smallskip
\item[\hlgreen{\texttt{components}}] The description of the mixture components as a list of length $N$.
Each list element is an associative array that requires the following keys:
\begin{description}
\item[\hlorange{means}] The mean values of this component, as a list of floats of length $D$. Denoted below as
$\vec{\mu}_n$.
\smallskip
\item[\hlorange{covariance}] The covariance of this component, as a list of lists of floats. Denoted below as
$\Sigma_n$.
\end{description}
\smallskip
\item[\hlgreen{\texttt{weights}}] The weights of the mixture components as a list of length $N$. Denoted below as $\alpha_n$.
\end{description}
The likelihood $L$ reads
\begin{equation}
L = \sum_{n=1}^N \alpha_n\, \mathcal{N}_D(\vec{o}\,|\,\vec{\mu}_n, \Sigma_n)
\qquad \text{with} \qquad
\sum_{n=1}^N \alpha_n = 1 \,.
\end{equation}
\end{description}
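Both likelihood evaluations above can be sketched numerically. All numbers below are hypothetical placeholders, not \texttt{EOS}\xspace internals:

```python
import numpy as np
from scipy.stats import multivariate_normal

# --- MultivariateGaussian(Covariance): the chi^2 term shown above ---
# Hypothetical setup: P = 2 theory predictions folded into a D = 3
# dimensional measurement via a response matrix R.
o = np.array([1.0, 2.0])                      # theory predictions, length P
R = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.5, 0.5]])                    # D x P response matrix
mu = np.array([1.1, 1.9, 1.6])                # measured means, length D
Sigma = np.diag([0.04, 0.04, 0.09])           # D x D covariance matrix

residual = mu - R @ o
chi2 = residual @ np.linalg.solve(Sigma, residual)

# --- Mixture: weighted sum of N = 2 Gaussian components in D = 2 ---
x = np.array([0.5, 0.5])                      # observable predictions
components = [
    {"means": np.zeros(2), "covariance": np.eye(2)},
    {"means": np.ones(2),  "covariance": 0.5 * np.eye(2)},
]
weights = [0.7, 0.3]                          # weights must sum to one
L = sum(w * multivariate_normal.pdf(x, mean=c["means"], cov=c["covariance"])
        for w, c in zip(weights, components))
print(chi2, L)
```

Note that the mixture likelihood is a density value, whereas the multivariate Gaussian case is quoted as a $\chi^2$-like quantity, up to the normalization of $\mathcal{N}_D$.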
\begin{lstlisting}[%
language=yaml,%
caption={
Example of a multivariate Gaussian constraint as recorded in the \texttt{EOS}\xspace source code repository,
representing binned measurements of the $\bar{B}^0\to D^+e^-\bar\nu$ branching ratio by
the Belle experiment~\cite{Belle:2015pkj}.
\label{lst:constraint:Belle2015A}
}
]
B^0->D^+e^-nu::BRs@Belle:2015A:
type: MultivariateGaussian(Covariance)
dim: 10
observables:
[ B->Dlnu::BR, B->Dlnu::BR, B->Dlnu::BR, B->Dlnu::BR, B->Dlnu::BR,
B->Dlnu::BR, B->Dlnu::BR, B->Dlnu::BR, B->Dlnu::BR, B->Dlnu::BR ]
options:
[ { l: e, q: d }, { l: e, q: d }, { l: e, q: d }, { l: e, q: d }, { l: e, q: d },
{ l: e, q: d }, { l: e, q: d }, { l: e, q: d }, { l: e, q: d }, { l: e, q: d } ]
kinematics:
- { q2_min: 10.44, q2_max: 11.63 }
- { q2_min: 9.26, q2_max: 10.44 }
- { q2_min: 8.07, q2_max: 9.26 }
- { q2_min: 6.89, q2_max: 8.07 }
- { q2_min: 5.71, q2_max: 6.89 }
- { q2_min: 4.52, q2_max: 5.71 }
- { q2_min: 3.34, q2_max: 4.52 }
- { q2_min: 2.15, q2_max: 3.34 }
- { q2_min: 0.97, q2_max: 2.15 }
- { q2_min: 0.01, q2_max: 0.97 }
means:
[ 4.154e-05, 6.106e-04, 1.255e-03, 1.635e-03, 1.901e-03, 2.758e-03, 3.524e-03, 4.216e-03, 4.371e-03, 4.050e-03 ]
covariance:
- [1.912e-09, 1.726e-10, 3.427e-10, 4.424e-10, 5.041e-10, 7.320e-10, 9.624e-10, 1.096e-09, 1.049e-09, 8.840e-10]
- [1.726e-10, 1.478e-08, 1.811e-09, 2.383e-09, 2.738e-09, 3.977e-09, 5.166e-09, 5.959e-09, 5.903e-09, 5.209e-09]
- [3.427e-10, 1.811e-09, 2.863e-08, 4.849e-09, 5.579e-09, 8.101e-09, 1.051e-08, 1.215e-08, 1.214e-08, 1.075e-08]
- [4.424e-10, 2.383e-09, 4.849e-09, 3.786e-08, 7.398e-09, 1.071e-08, 1.387e-08, 1.602e-08, 1.603e-08, 1.425e-08]
- [5.041e-10, 2.738e-09, 5.579e-09, 7.398e-09, 4.355e-08, 1.241e-08, 1.606e-08, 1.860e-08, 1.873e-08, 1.678e-08]
- [7.320e-10, 3.977e-09, 8.101e-09, 1.071e-08, 1.241e-08, 6.176e-08, 2.334e-08, 2.709e-08, 2.731e-08, 2.440e-08]
- [9.624e-10, 5.166e-09, 1.051e-08, 1.387e-08, 1.606e-08, 2.334e-08, 8.585e-08, 3.533e-08, 3.555e-08, 3.156e-08]
- [1.096e-09, 5.959e-09, 1.215e-08, 1.602e-08, 1.860e-08, 2.709e-08, 3.533e-08, 1.022e-07, 4.194e-08, 3.744e-08]
- [1.049e-09, 5.903e-09, 1.214e-08, 1.603e-08, 1.873e-08, 2.731e-08, 3.555e-08, 4.194e-08, 1.005e-07, 3.887e-08]
- [8.840e-10, 5.209e-09, 1.075e-08, 1.425e-08, 1.678e-08, 2.440e-08, 3.156e-08, 3.744e-08, 3.887e-08, 8.132e-08]
dof: 10
\end{lstlisting}
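A few basic consistency checks on such constraint data (matching dimensions, symmetry, and positive definiteness of the covariance) can be sketched as follows. The values below are abbreviated placeholders, not the actual Belle covariance:

```python
import numpy as np

# Abbreviated, hypothetical constraint data in the same structure as the
# YAML listing above (3 bins instead of 10; values are placeholders).
constraint = {
    "type": "MultivariateGaussian(Covariance)",
    "dim": 3,
    "means": [4.2e-5, 6.1e-4, 1.3e-3],
    "covariance": [[1.9e-9, 1.7e-10, 3.4e-10],
                   [1.7e-10, 1.5e-8, 1.8e-9],
                   [3.4e-10, 1.8e-9, 2.9e-8]],
    "dof": 3,
}

cov = np.array(constraint["covariance"])
assert cov.shape == (constraint["dim"], constraint["dim"])  # square, dim x dim
assert np.allclose(cov, cov.T)                              # symmetric
assert np.all(np.linalg.eigvalsh(cov) > 0)                  # positive definite
assert len(constraint["means"]) == constraint["dim"]        # means match dim
print("constraint data consistent")
```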
\section{Basic Classes and Concepts}
\label{sec:basics}
\texttt{EOS}\xspace provides a number of \texttt{Python}\xspace classes that make it possible to fulfill the physics use cases discussed in \refsec{usage}.
Three of the most relevant classes are used as follows:
\begin{itemize}
\item hadronic and \ac{BSM}\xspace parameters are represented by objects of the \class{eos.Parameter} class;
\smallskip
\item physical observables and pseudo-observables (such as hadronic form factors) are represented by objects of the \class{eos.Observable} class;
\smallskip
\item likelihood functions, stemming from either experimental measurements or theoretical calculations, are represented by objects of the \class{eos.Constraint} class.
\end{itemize}
To facilitate their handling, \texttt{EOS}\xspace has databases for all known objects of these classes. The user can interactively
inspect these databases within a \texttt{Jupyter}\xspace notebook in the following way:
\begin{lstlisting}[language=iPython]
display(eos.Parameters()) # only run one line at a time, since the output is lengthy
display(eos.Observables())
display(eos.Constraints())
\end{lstlisting}
\texttt{EOS}\xspace provides a rich display for most classes, including the above, which is not shown here for brevity.\\
All three databases can be searched by name of the target object. \texttt{EOS}\xspace uses the same naming scheme for all three databases,
which is enforced through use of the \class{eos.QualifiedName} class.
The naming scheme is
\begin{center}
\texttt{\hlred{PREFIX}::\hlred{NAME}[@\hlred{SUFFIX}][;\hlred{OPTIONLIST}]}
\end{center}
where parts shown in square brackets are optional.
The individual parts have the following meaning:
\begin{description}
\item[\texttt{\hlred{PREFIX}}] The prefix part is used to separate objects with (otherwise) identical names
into different namespaces, to avoid conflicts. Examples of prefixes include
parameter categories (e.g., \code{mass} or \code{decay-constant}), physical processes (e.g., \code{B->Kll}),
or sectors of the \ac{WET}\xspace (e.g., \code{sbsb}).
\smallskip
\item[\texttt{\hlred{NAME}}] The name part is used to identify objects within its \code{PREFIX} namespace. Examples include observable names (e.g., \code{BR} for a branching ratio) or names of \ac{WET}\xspace Wilson coefficients (e.g., \code{cVL} for a coefficient of a left-handed
vector operator).
\smallskip
\item[\texttt{\hlred{SUFFIX}}] The (optional) suffix part is used to distinguish between objects
of otherwise identical names based on context. One example is
the parameter describing $\Lambda_b$ baryon polarization, which takes different values based on the experimental
environment. Generally, $\Lambda_b$ polarization would be represented by \code{Lambda_b::polarization}.
The use of \code{@LHCb} and \code{@unpolarized} as a suffix distinguishes between the average polarization
encountered within the LHCb experiment and an unpolarized setting (e.g. when using the whole phase space of the ATLAS and CMS experiments).
\smallskip
\item[\texttt{\hlred{OPTIONLIST}}] The option list is an optional comma-separated list of key/value pairs, which
allows the user to modify the named object in an unambiguous way. One example is \code{model=SM,l=mu,q=s}, which instructs
an observable to use the Standard Model, $\mu$ lepton flavor, and strange-flavored spectator quarks.
Details on possible options are discussed in \refsec{basics:observables}.
\end{description}
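The naming scheme above can be illustrated with a rough parser. The regular expression below is for illustration only; it is not the parser that \texttt{EOS}\xspace actually uses:

```python
import re

# Rough sketch of the scheme PREFIX::NAME[@SUFFIX][;OPTIONLIST].
QN_RE = re.compile(
    r"^(?P<prefix>[^:;@]+)::(?P<name>[^:;@]+)"
    r"(?:@(?P<suffix>[^;]+))?"
    r"(?:;(?P<options>.+))?$"
)

def parse_qualified_name(qn):
    """Split a qualified name into (prefix, name, suffix, options)."""
    m = QN_RE.match(qn)
    if m is None:
        raise ValueError(f"not a qualified name: {qn!r}")
    options = {}
    if m.group("options"):
        # OPTIONLIST is a comma-separated list of key=value pairs.
        options = dict(kv.split("=", 1) for kv in m.group("options").split(","))
    return m.group("prefix"), m.group("name"), m.group("suffix"), options

print(parse_qualified_name("B->Kll::BR@LHCb;model=SM,l=mu,q=s"))
```

For example, \code{mass::mu} parses to the prefix \code{mass} and name \code{mu} with no suffix and no options.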
In the remainder of this section we discuss how to use the six representation
classes and their corresponding database classes
\begin{itemize}
\item \class{eos.Parameter} within \class{eos.Parameters},
\item \class{eos.KinematicVariable} within \class{eos.Kinematics},
\item \class{eos.Option} within \class{eos.Options},
\item \class{eos.Observable} within \class{eos.Observables},
\item \class{eos.Constraint} within \class{eos.Constraints}, and
\item \class{eos.SignalPDF} within \class{eos.SignalPDFs},
\end{itemize}
and the utility classes \class{eos.Analysis} and \class{eos.Plotter}. The relationship between the first four sets of classes
are illustrated in \reffig{basics:class-diagram}.
We provide a few examples here. However, for more exhaustive and interactive examples we refer to the notebook named
\href{https://github.com/eos/eos/tree/v1.0/examples/basics.ipynb}{basics.ipynb}, which is part of the collection of
\texttt{EOS}\xspace example notebooks~\cite{EOS:examples}.
\begin{figure}[t]
\centering
\includegraphics[width=.6\textwidth,trim=0 100 0 50,clip]{figures/ClassDiagram.pdf}
\caption{%
Visual representation of the basic \texttt{EOS}\xspace classes and their relationships.
\label{fig:basics:class-diagram}
}
\end{figure}
\subsection[Classes eos.Parameters and eos.Parameter]
{Classes \class{eos.Parameters} and \class{eos.Parameter}}
\label{sec:basics:parameters}
\texttt{EOS}\xspace makes extensive use of the \class{eos.Parameter} class, which provides access
to a single real-valued scalar parameter. Any such \class{eos.Parameter} object is part of a large set of
built-in parameters. Users cannot directly create new objects of the \class{eos.Parameter} class.
However, new named sets of parameters can be created from which the parameter of interest can be extracted,
inspected, and altered.\\
We begin by creating and displaying a new set of parameters:
\begin{lstlisting}[language=iPython]
parameters = eos.Parameters()
display(parameters)
\end{lstlisting}
The new variable \object{parameters} now contains a representation of all parameters known to \texttt{EOS}\xspace.
The \texttt{Jupyter}\xspace \command{display} command has been augmented to provide a sectioned list of the known parameters,
which is rather lengthy and not shown here. It is
equivalent to the section ``List of Parameters'' in the \texttt{EOS}\xspace documentation~\cite{EOS:doc}.
The display provides the user with an overview of all parameter names, their canonical physical
notation, and their value and unit.
A single parameter, here the muon mass as an example, can be isolated:
\begin{lstlisting}[language=iPython]
parameters['mass::mu']
\end{lstlisting}
Again, the user is provided with an overview of the parameter, including its qualified name,
unit, default value, and current value.
The value of an \class{eos.Parameter} object can be altered with the \method{eos.Parameter}{set} method:
\begin{lstlisting}[language=iPython]
m_mu = parameters['mass::mu']
display(m_mu) # shows a value of 0.10566
m_mu.set(1.779) # we just made the muon as heavy as the tauon!
display(m_mu) # shows a value of 1.779
\end{lstlisting}
In this example, the muon mass parameter within \object{parameters} has been set to the measured value of the tauon mass,
and the \object{m\_mu} object, which represents this parameter, has transparently changed its value.
Put differently: any \class{eos.Parameter} object ``remembers'' the set of parameters (i.e., the \class{eos.Parameters}
object) that it belongs to and forwards all changes to that set. To obtain
an independent set of parameters, the user can use
\begin{lstlisting}[language=iPython]
parameters2 = eos.Parameters()
parameters2['mass::mu']
display(parameters2 == parameters) # prints 'False', since the two sets are not identical!
\end{lstlisting}
A parameter's properties can be readily accessed through the methods \method{eos.Parameter}{name},
\method{eos.Parameter}{latex}, and \method{eos.Parameter}{evaluate}
\begin{lstlisting}[language=iPython]
display(m_mu.name()) # shows 'mass::mu'
display(m_mu.latex()) # shows 'm_\mu'
display(m_mu.evaluate()) # shows 1.779, since we changed it above.
\end{lstlisting}
A parameter object can be used like any other \texttt{Python}\xspace object, e.g., as an element of a
\pyclass{list}, a \pyclass{dict}, or a \pyclass{tuple}:
\begin{lstlisting}[language=iPython]
lepton_masses = [parameters['mass::' + l] for l in ['e', 'mu', 'tau']]
[display(p) for p in lepton_masses]
translation = {p.name(): p.latex() for p in lepton_masses}
display(translation)
\end{lstlisting}
These properties allow the user to bind a function (e.g., the functional expression
of an observable or a likelihood function) to an arbitrary number of parameters,
let the function evaluate these parameters in a computationally efficient way, and let the
user change these parameters at a whim. Parameter sets are meant to be \emph{shared}, i.e.,
a single set of parameters is meant to be used by any number of functions.
The sharing of parameters across observables makes it possible for \texttt{EOS}\xspace to consistently and efficiently
evaluate a large number of functions.
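The ``remember and forward'' behavior of parameter objects can be sketched in a few lines of plain \texttt{Python}\xspace. This is an illustrative analogy only, not the actual \texttt{EOS}\xspace implementation; the class and attribute names below are hypothetical:

```python
# Illustrative sketch only -- NOT the EOS implementation.
# A Parameter handle forwards reads and writes to the set it belongs to,
# so every function bound to that set sees updated values immediately.

class ParameterSet:
    def __init__(self, defaults):
        self._values = dict(defaults)

    def __getitem__(self, name):
        return Parameter(self, name)   # a fresh handle onto the shared storage


class Parameter:
    def __init__(self, owner, name):
        self._owner = owner            # "remembers" the set it belongs to
        self._name = name

    def evaluate(self):
        return self._owner._values[self._name]

    def set(self, value):
        self._owner._values[self._name] = value  # forwarded to the shared set


params = ParameterSet({'mass::mu': 0.10566})
m_mu = params['mass::mu']
m_mu.set(1.779)
# any other handle onto the same set sees the change
assert params['mass::mu'].evaluate() == 1.779
```

Because each observable holds such a handle onto its parameter set, changing a parameter once suffices to update all functions that share the set.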
The default set of parameters is stored in \texttt{YAML}\xspace files that are installed together
with the binary \texttt{EOS}\xspace library and the \texttt{Python}\xspace modules and scripts.
The default parameter set can be replaced. To do this, the user must set the
environment variable \code{EOS_HOME} to point to an accessible directory.
The \texttt{YAML}\xspace files found within \code{EOS_HOME/parameters} will be used \emph{instead} of
the default set of parameters contained in the \texttt{EOS}\xspace package. The class \class{eos.Parameters} facilitates creating such files
through the \method{eos.Parameters}{dump} method, which writes the current set
of parameters to a \texttt{YAML}\xspace file. Alternatively, to use mostly the default parameter set but override
a subset of parameters in a persistent way, the user can use the
\method{eos.Parameters}{override\_from\_file} method to load only a subset of parameters from
a given file.
\subsection[Classes eos.Kinematics and eos.KinematicVariable]
{Classes \class{eos.Kinematics} and \class{eos.KinematicVariable}}
\label{sec:basics:kinematics}
\texttt{EOS}\xspace uses the \class{eos.Kinematics} class to store a set of real-valued scalar kinematic variables by name. Contrary
to the class \class{eos.Parameters}, there are neither default variables nor default values.
Instead, \class{eos.Kinematics} objects are empty by default. Moreover, their
variables are only defined within the scope of a single \class{eos.Observable} object:
two observables that do not share an \class{eos.Kinematics} object can use identically-named, independent
kinematic variables.
Therefore, the names of kinematic variables do not require any prefix, and are simply (short) strings.\\
An empty set of kinematic variables can be created by
\begin{lstlisting}[language=iPython]
kinematics = eos.Kinematics()
\end{lstlisting}
A new kinematic variable can be declared within the existing (empty) set by passing
a name/value pair to its \method{eos.Kinematics}{declare} method, e.g.\index{eos.Kinematics!declare}
\begin{lstlisting}[language=iPython]
k1 = kinematics.declare('q2', 1.0) # 1 GeV^2
k2 = kinematics.declare('E_pi', 0.139) # 139 MeV, a pion at rest!
k3 = kinematics.declare('cos(theta_pi)', -1.0) # negative values are OK!
\end{lstlisting}
In this example, we have also captured the newly created kinematic variables as objects \object{k1}, \object{k2},
and \object{k3} of class \class{eos.KinematicVariable} for later use.
\texttt{EOS}\xspace uses the following guidelines for names and units of kinematic variables:
\begin{itemize}
\item using \code{'q2'}, \code{'p2'}, and so on for the squares of four momenta $q^\mu$, $p^\mu$;
\smallskip
\item using \code{'E\_pi'}, \code{'E\_gamma'}, and so on for the energies of states $\pi$, $\gamma$
in the rest frame of a decaying particle;
\smallskip
\item using \code{'cos(theta\_pi)'} and similar for the cosine of a helicity angle $\theta_\pi$;
\smallskip
\item using natural units, i.e., expressing all momenta and energies as powers of $\ensuremath{\mathrm{GeV}}$.
\end{itemize}
The new \class{eos.KinematicVariable} objects are now collected within the \object{kinematics} object.
They can be collectively inspected using
\begin{lstlisting}[language=iPython]
display(kinematics)
\end{lstlisting}
In addition, the individual objects \object{k1}, \object{k2}, etc.~can also be inspected
\begin{lstlisting}[language=iPython]
display(k1)
display(k2)
\end{lstlisting}
To directly obtain an \code{eos.Kinematics} object pre-populated with the variables one needs,
a \texttt{Python}\xspace \pyclass{dict} can be provided to the constructor:\index{eos.Kinematics}
\begin{lstlisting}[language=iPython]
kinematics = eos.Kinematics({ 'q2': 1.0, 'E_pi': 0.139, 'cos(theta_pi)': -1.0 })
\end{lstlisting}
To extract a previously declared kinematic variable from the \object{kinematics} object, the \class{eos.Kinematics}
provides access via the subscript operator \code{[...]}
\begin{lstlisting}[language=iPython]
k1 = kinematics['q2']
k1.set(16.0)
\end{lstlisting}
In the above, the \method{eos.KinematicVariable}{set} method is used to change the value of \object{k1}.\\
Kinematic variables and their naming usually pertain to only one observable, which
will be discussed below in \refsec{basics:observables}.
Therefore, when creating observables, the user should create only a single independent set of kinematic variables
per observable. Nevertheless, it is possible to create observables that have a common set of
kinematic variables. This makes it possible to investigate correlations among observables that
share a kinematic variable (e.g., LFU ratios such as $R_K$ as a function of the lower dilepton momentum cut-off).
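The scoping rule can be illustrated with another plain-\texttt{Python}\xspace sketch (again an analogy, not the \texttt{EOS}\xspace implementation): identically-named variables are independent unless the kinematics object itself is shared between observables.

```python
# Illustrative sketch only -- NOT the EOS implementation.
# Identically-named kinematic variables are independent unless the
# kinematics object itself is shared between observables.

def make_observable(kinematics):
    # a toy "observable" that reads 'q2' from its kinematics at call time
    return lambda: 2.0 * kinematics['q2']

kin_a = {'q2': 1.0}
kin_b = {'q2': 1.0}                 # same variable name, independent storage
obs_a, obs_b = make_observable(kin_a), make_observable(kin_b)
kin_a['q2'] = 16.0                  # only obs_a is affected
assert (obs_a(), obs_b()) == (32.0, 2.0)

shared = {'q2': 1.0}                # one set shared by two observables
obs_c, obs_d = make_observable(shared), make_observable(shared)
shared['q2'] = 4.0                  # both observables see the change
assert obs_c() == obs_d() == 8.0
```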
\subsection[Class eos.Options]
{Class \class{eos.Options}}
\label{sec:basics:options}
\texttt{EOS}\xspace uses objects of the \code{eos.Options} class to modify the behavior of observables at runtime.
A new and empty set of options is created as follows\index{eos.Options}
\begin{lstlisting}[language=iPython]
options = eos.Options()
\end{lstlisting}
This object is usually populated with individual options, which are key/value pairs of \pyclass{str} objects.
Typical keys and their respective values include:
\begin{description}
\item[\hlred{\texttt{model}}] is used to change the behavior of the low-energy observables.
As of \texttt{EOS}\xspace version 1.0\xspace, it can take the values \code{SM}, \code{CKM}, and \code{WET}.
\smallskip
When choosing \code{SM}, the observables are computed within the \ac{SM}\xspace, and the values of the \ac{WET}\xspace Wilson coefficients
are computed from \ac{SM}\xspace parameters. \ac{CKM}\xspace matrix elements are computed within the Wolfenstein parametrization.
Details, such as the relevant parameter names, are discussed in \refapp{models:CKM}.
\smallskip
When choosing \code{CKM}, the observables are computed with \ac{SM}\xspace values for the \ac{WET}\xspace Wilson coefficients. However, the
\ac{CKM}\xspace matrix elements are not computed from the Wolfenstein parameters. Instead, each \ac{CKM}\xspace matrix element is parametrized
in terms of two parameters for its absolute value and complex argument. This choice makes fitting \ac{CKM}\xspace matrix elements
possible.
Details, such as the relevant parameter names, are discussed in \refapp{models:CKM}.
\smallskip
When choosing \code{WET}, the observables are computed with generic values for the \ac{WET}\xspace Wilson coefficients. The
\ac{CKM}\xspace matrix elements are treated as in the \code{CKM} case. This choice makes fitting \ac{WET}\xspace Wilson coefficients possible.
Details, such as the \texttt{EOS}\xspace convention for the basis of \ac{WET}\xspace operators and the relevant parameter names,
are discussed in \refapp{models:WET}.
\smallskip
\item[\hlred{\texttt{form-factors}}] is used to select from one of the available parametrizations of hadronic form factors
that are pertinent to the process. Its values are process-specific. For true observables (e.g., a semileptonic
branching ratio) a sensible default choice is always provided. For pseudo-observables (e.g., the hadronic form factors
$f_+(q^2)$ in $B\to \pi$ transitions) the choice must be made by the user.
\smallskip
\item[\hlred{\texttt{l}}] is used to select the charged lepton flavor in processes with at least one charged lepton.
Allowed values are generally \code{e}, \code{mu} and \code{tau}. Individual processes might restrict the set of
allowed values further, e.g., when hadronic matrix elements relevant to semitauonic decays are either unknown or unimplemented.
\smallskip
\item[\hlred{\texttt{q}}] is used to select the spectator quark flavor.
Allowed values are typically \code{u}, \code{d}, \code{s}, and \code{c}. Individual processes might restrict
the set of allowed values further. Processes with \code{s} and \code{c} spectator quarks are typically
accessible through explicit specification of the spectator quark flavor in the process name,
e.g., \code{B_s->K^*lnu}.
\end{description}
Obtaining the full list of option keys pertaining to a specific observable and their allowed values is discussed in
\refsec{basics:observables}.\\
Adding new options to an existing \object{options} object is achieved as follows\index{eos.Options!declare}
\begin{lstlisting}[language=iPython]
options.declare('model', 'CKM')
options.declare('form-factors', 'BSZ2015')
options.declare('l', 'mu') # Since we are all so "cautiously excited"!
options.declare('q', 's')
display(options)
\end{lstlisting}
Analogously to the kinematic variables, an \class{eos.Options} object can be created
pre-populated with the values one needs using a \texttt{Python}\xspace \pyclass{dict}
\begin{lstlisting}[language=iPython]
options = eos.Options({
'form-factors': 'BSZ2015', # Bharucha, Straub, Zwicky 2015
'model': 'WET',
'l': 'tau',
'q': 's'
})
\end{lstlisting}
\subsection[Classes eos.Observables and eos.Observable]
{Classes \class{eos.Observables} and \class{eos.Observable}}
\label{sec:basics:observables}
\texttt{EOS}\xspace uses the \class{eos.Observable} class to provide theory predictions for a variety of
flavor physics processes and their associated (pseudo-)observables.
The complete list of observables known to \texttt{EOS}\xspace is available as part of the online documentation~\cite[List of Observables]{EOS:doc}
and interactively in a \texttt{Jupyter}\xspace notebook via\index{eos.Observables}
\begin{lstlisting}[language=iPython]
eos.Observables()
\end{lstlisting}
Within this list, all observables are uniquely identified by an \class{eos.QualifiedName} object; see
the beginning of \refsec{basics} for information on how such a name is structured.
To ease recognition, the typically used mathematical symbol for each observable is shown next to its name.
To search within this list, keyword arguments for the prefix part, name part, or suffix part of
a qualified name will filter the output. For example, the following code displays only branching ratios (\code{BR}) in processes
involving a $B^\mp$ meson (\code{B_u})\index{eos.Observables}
\begin{lstlisting}[language=iPython]
eos.Observables(prefix='B_u', name='BR')
\end{lstlisting}
Amongst others, this command lists the observable \code{B_u->lnu::BR}, representing the branching
ratio of $B^\mp\to \ell^\mp\bar\nu$ decays.
As part of the output the user is notified that this particular observable requires
no kinematic variables. The user is also notified about the \class{eos.Options} keys recognized
by this observable, which include \code{model} and \code{l}.\\
To create a new \class{eos.Observable} object the user needs to
\begin{itemize}
\item identify it by name;
\item provide a set of parameters that can optionally be shared with other observables;
\item provide a set of kinematic variables that can optionally be shared with other observables; and
\item specify the relevant options.
\end{itemize}
Again, the branching ratio of $B^\mp \to \ell^\mp\bar\nu$ is used as an example, specifically for a $\tau$
in the final state.
The observable is created as an \object{eos.Observable} object as follows\index{eos.Observable!make}
\begin{lstlisting}[language=iPython]
observable1 = eos.Observable.make('B_u->lnu::BR',
eos.Parameters(), eos.Kinematics(), eos.Options({'l': 'tau', 'model': 'WET'}))
\end{lstlisting}
Here \code{B\_u->lnu::BR} is the \class{eos.QualifiedName} for
this particular observable, and default parameters are provided when using \code{eos.Parameters()}.
This observable does not
require any kinematic variables, and therefore an empty \class{eos.Kinematics} object is
provided. Setting the \code{l} option to \code{tau} selects a $\tau$ final state.
Setting the \code{model} option to \code{WET} enables the user to evaluate
the observable in the \ac{WET}\xspace for arbitrary values of the Wilson coefficients;
see \refapp{models:WET} for details.\\
The \class{eos.Observable} class provides access to the name, parameters, kinematics, options, and current value
of an observable by means of the following methods
\index{eos.Observable!name}\index{eos.Observable!parameters}\index{eos.Observable!kinematics}\index{eos.Observable!options}\index{eos.Observable!evaluate}
\begin{lstlisting}[language=iPython]
display(observable1.name()) # shows 'B_u->lnu::BR'
observable1.parameters() # accesses the parameters
observable1.kinematics() # accesses the (empty) set of kinematic variables
display(observable1.options()) # shows the options used to create the observable
display(observable1.evaluate()) # shows the current value
\end{lstlisting}
Note that each observable is associated with one object of the class \class{eos.Parameters}.
To illustrate this feature, the above code is repeated to create a second observable \object{observable2}\index{eos.Observable!make}
\begin{lstlisting}[language=iPython]
observable2 = eos.Observable.make('B_u->lnu::BR',
eos.Parameters(), eos.Kinematics(), eos.Options({'l': 'tau', 'model': 'WET'}))
\end{lstlisting}
Even though the two objects \object{observable1} and \object{observable2}
share the same name and options, their respective parameter sets are
independent, as can be checked as follows:\index{eos.Observable!parameters}
\begin{lstlisting}[language=iPython]
observable1.parameters() == observable2.parameters() # yields False
\end{lstlisting}
To correlate any number of observables, it is necessary to create \emph{all of them} using the same
\object{eos.Parameters} object; this will be further discussed in \refsec{usage:inference}.
In the above, this is \emph{not the case}, since for the creation of each observable the call
to \code{eos.Parameters()} created a new, independent set of parameters as explained in \refsec{basics:parameters}.\\
In many cases, observables have a default set of options, e.g., the default choice
of hadronic form factors or the default choice of a \ac{BSM}\xspace model. In some cases,
it does not make sense to have a default choice. In such cases, an error will be shown through a \texttt{Python}\xspace exception
if the user does not provide a valid option value. An example of this behavior is given by the
form factor pseudo-observables, e.g., \code{B->K^*::V(q2)}, which always require providing
a valid value for the option \code{form-factors}. This is achieved
by including the option as part of the \class{eos.QualifiedName}. In this case,
\code{B->K^*::V(q2);form-factors=BSZ2015} selects the form factor parametrization as used
by Bharucha, Straub, and Zwicky~\cite{Straub:2015ica} in 2015.
A full list of all option keys and their respective valid values is available
as part of the online documentation~\cite{EOS:doc} and by displaying \texttt{eos.Observables()} in
an interactive \texttt{Jupyter}\xspace notebook.\\
Contrary to parameters and kinematic variables, modifying the \class{eos.Options} object of any observable after its creation has no effect\index{eos.Observable!options}
\begin{lstlisting}[language=iPython]
observable1.options().set('l', 'mu') # does not affect observable1
\end{lstlisting}
This design decision ensures high-performance evaluations of all observables.\\
Objects of type \class{eos.Observable} are regular \texttt{Python}\xspace objects. For example, they can be collected in a \pyclass{list},
which is useful for evaluating a number of identical observables at different points in their phase space.
This can be achieved as follows\index{eos.Observable!make}\index{eos.Observable!evaluate}
\begin{lstlisting}[language=iPython]
import numpy
parameters = eos.Parameters()
observables = [eos.Observable.make('B->D^*lnu::A_FB(q2)', parameters, eos.Kinematics(q2=q2_value), eos.Options())
for q2_value in numpy.linspace(1.00, 10.67, 10)]
values = [o.evaluate() for o in observables]
display(values)
\end{lstlisting}
Here the instantiation of all observables with the same \class{eos.Parameters} object \object{parameters} ensures that
they share the same numerical values for all parameters. As a consequence, changes of numerical values within \object{parameters}
are broadcasted to all these instances and are taken into account in their subsequent evaluations.
\subsection[Classes eos.Constraints and eos.Constraint]
{Classes \class{eos.Constraints} and \class{eos.Constraint}}
\label{sec:basics:constraints}
\texttt{EOS}\xspace uses the class \class{eos.Constraint} to manage and create individual likelihood functions
at run time. To this end, objects of type \class{eos.Constraint} contain both
information on the concrete likelihood (e.g., mean values and standard deviation of a
Gaussian measurement)
and meta-information about the constrained observables (e.g., the \texttt{EOS}\xspace internal
names for an observable, relevant kinematic variables, and required options).
Besides (multivariate) Gaussian likelihood functions, \texttt{EOS}\xspace
also supports LogGamma and Amoroso functions~\cite{crooks2015amoroso}, and Gaussian mixture densities.
The database of constraints makes it possible to construct a likelihood function
for any experimental measurement and/or theory input in terms of \class{eos.Observable} objects.
Hence, \class{eos.Constraint} objects are the building blocks for
parameter inference studies that use the \texttt{EOS}\xspace software.\\
\texttt{EOS}\xspace provides a database of constraints, which is available as part of the
online documentation~\cite[List of Constraints]{EOS:doc} as well as interactively accessible in a \texttt{Jupyter}\xspace
notebook via\index{eos.Constraints}
\begin{lstlisting}[language=iPython]
display(eos.Constraints())
\end{lstlisting}
This database is stored within \texttt{EOS}\xspace in a series of \texttt{YAML}\xspace files.
Most \texttt{EOS}\xspace users will not require knowledge about the file format. However, advanced users
may need to provide constraints that are not part of the built-in database. In such
a case, the user can specify a \emph{manual} constraint; see \refsec{basics:analysis} and ref.~\cite{EOS:API} for details.
Alternatively, similar to the \class{eos.Parameters} database, the user can set the \code{EOS_HOME}
environment variable to point to an accessible directory. All \texttt{YAML}\xspace files within \code{EOS_HOME/constraints}
will be loaded and used \emph{instead} of the default \class{eos.Constraints} database.
We document the format in \refapp{constraints-format} and an example entry is shown
in \reflst{constraint:Belle2015A}.\\
Examples of built-in constraints that are used later on in this document include:
\begin{itemize}
\item The constraint \code{B->D::f_++f_0@FNAL+MILC:2015B} describes a lattice QCD result for
the $\bar{B}\to D$ form factors $f_+$ and $f_0$. Here the suffix
indicates that this constraint has been extracted from ref.~\cite{Lattice:2015rga},
which is included in the \texttt{EOS}\xspace list of references as \code{FNAL+MILC:2015B}.
The constraint can be used to create a likelihood function for the model parameters
for the form factors $f_+$ and $f_0$,
e.g., when using the BSZ2015 parametrization as the form factor model. Using
\code{B->D::f_++f_0@FNAL+MILC:2015B;form-factors=BSZ2015} (i.e., the constraint name including
an option list that specifies the form factor model) ensures that the correct form factor model
(here: BSZ2015) is used when creating a likelihood from this constraint.
\vspace*{\smallskipamount}
\item The constraint \code{B^0->D^+e^-nu::BRs@Belle:2015A} describes the correlated
measurement of the $\bar{B}^0\to D^+e^-\bar\nu$ branching ratio in $10$ bins of the kinematic
variable $q^2$. Here the suffix indicates that the results have been extracted from ref.~\cite{Belle:2015pkj},
which is included in the \texttt{EOS}\xspace list of references as \code{Belle:2015A}.
\end{itemize}
\subsection[Classes eos.SignalPDFs and eos.SignalPDF]
{Classes \class{eos.SignalPDFs} and \class{eos.SignalPDF}}
\label{sec:basics:signalpdf}
\texttt{EOS}\xspace uses the \class{eos.SignalPDF} class to provide a theoretical prediction
for the \ac{PDF}\xspace that describes a physical process, be it
a decay or a scattering process. The dependence on an arbitrary number of
kinematic variables is modeled through a shared object of class \class{eos.Kinematics},
and its \class{eos.KinematicVariable} objects. Parameters can be modified
or inferred through a shared \class{eos.Parameters} object. Hence,
each \class{eos.SignalPDF} object works very similarly to an \class{eos.Observable}
object.
The list of \acp{PDF} can be accessed using the \class{eos.SignalPDFs} class.
Searching for a specific \ac{PDF}\xspace in the \texttt{EOS}\xspace database of signal PDFs is possible
by filtering by the prefix part, name part, or suffix part of the signal \ac{PDF}\xspace's qualified name,
very similar to how the database of observables is searched
\begin{lstlisting}[language=iPython]
eos.SignalPDFs(prefix='B->Dlnu') # display a list of all known SignalPDF objects
# this includes 'B->Dlnu::dGamma/dq2', which requires
# 'q2_min', 'q2_max', 'q2' as kinematic variables.
\end{lstlisting}
The signal \ac{PDF}\xspace \code{B->Dlnu::dGamma/dq2} features one kinematic variable, \code{q2}.
Its boundaries are also passed by means of \class{eos.KinematicVariable} objects,
which are conventionally named \code{q2_min} and \code{q2_max}.
\begin{lstlisting}[language=iPython]
pdf = eos.SignalPDF.make('B->Dlnu::dGamma/dq2', eos.Parameters(),
eos.Kinematics({'q2_min': 0.0, 'q2_max': 10.0, 'q2': 5.0}),
eos.Options({'l': 'mu', 'model': 'WET'}))
\end{lstlisting}
The \ac{PDF}\xspace's parameters, kinematics, and options can be accessed with eponymous methods.
This design permits the user some flexibility. It makes it possible to produce
pseudo-events within the \ac{SM}\xspace and in the generic \ac{WET}\xspace; see \refsec{usage:simulation}
for this use case. In addition, it enables unbinned likelihood fits; their description
goes beyond the scope of this document.
\subsection[Class eos.Analysis]
{Class \class{eos.Analysis}}
\label{sec:basics:analysis}
\texttt{EOS}\xspace uses the class \class{eos.Analysis} as an interface for the user to describe a Bayesian analysis
to infer one or more parameters.
When creating an \class{eos.Analysis} object, the following arguments are used:
\begin{description}
\item[\hlred{\texttt{priors}}] is a mandatory \pyclass{list} describing the univariate priors.
This argument must describe at least one prior.
Each prior is described through a \pyclass{dict} object, the structure of which is documented
as part of the \texttt{Python}\xspace API documentation~\cite[\texttt{eos.Analysis}]{EOS:API}.
\smallskip
\item[\hlred{\texttt{likelihood}}] is a mandatory \pyclass{list} describing all the constraints
that enter the likelihood. Each element is a \pyclass{str} or \class{eos.QualifiedName}, specifying
a single constraint. Although it is a mandatory parameter, this list can be left empty.
\smallskip
\item[\hlred{\texttt{global\_options}}] is an optional \pyclass{dict} describing the options that
will be applied to all the observables that enter the likelihood.
Note that these global options \textit{override} those specified via the qualified name scheme.
For example, in a \ac{BSM}\xspace analysis, it is useful to include \code{'model': 'WET'}
as a global option, to ensure that all observables will be evaluated using a selectable point
in the \ac{WET}\xspace parameter space.
\smallskip
\item[\hlred{\texttt{fixed\_parameters}}] is an optional \pyclass{dict} describing
parameters that shall be fixed to non-default values as part of the analysis.
For example, to carry out a \ac{BSM}\xspace analysis of $b\to c\tau\nu$ processes for a non-default renormalization
scale, the user can set the scale parameter to a fixed value of $3\,\ensuremath{\mathrm{GeV}}$ using \code{'cbtaunutau::mu': '3.0'}.
\smallskip
\item[\hlred{\texttt{manual\_constraints}}] is an optional \pyclass{dict} describing constraints
that are not yet included in the \texttt{EOS}\xspace database of constraints. The constraint format is described in
\refapp{constraints-format}.
Note that to use any of the manual constraints as part of the likelihood, their qualified names
must still be added to the \code{likelihood} argument.
\end{description}
\enlargethispage{2em}
\begin{lstlisting}[
language=iPython,%
caption={%
Example for a Bayesian analysis to extract the \ac{CKM}\xspace parameter $|V_{cb}|$ from
$\bar{B}\to D\lbrace e^-,\mu^-\rbrace \bar\nu$ data by the Belle experiment
and lattice QCD input by the HPQCD and Fermilab/MILC collaborations.
\label{lst:basics:analysis:definition}\index{eos.Analysis}\index{eos.Parameter!set}
}
]
analysis_args = {
'global_options': { 'form-factors': 'BSZ2015', 'model': 'CKM' },
'priors': [
{'parameter': 'CKM::abs(V_cb)', 'min': 38e-3, 'max': 45e-3, 'type': 'uniform'},
{'parameter': 'B->D::alpha^f+_0@BSZ2015', 'min': 0.0, 'max': 1.0, 'type': 'uniform'},
{'parameter': 'B->D::alpha^f+_1@BSZ2015', 'min': -4.0, 'max': -1.0, 'type': 'uniform'},
{'parameter': 'B->D::alpha^f+_2@BSZ2015', 'min': 4.0, 'max': 6.0, 'type': 'uniform'},
{'parameter': 'B->D::alpha^f0_1@BSZ2015', 'min': -1.0, 'max': 2.0, 'type': 'uniform'},
{'parameter': 'B->D::alpha^f0_2@BSZ2015', 'min': -2.0, 'max': 0.0, 'type': 'uniform'}
],
'likelihood': [
'B->D::f_++f_0@HPQCD:2015A',
'B->D::f_++f_0@FNAL+MILC:2015B',
'B^0->D^+e^-nu::BRs@Belle:2015A',
'B^0->D^+mu^-nu::BRs@Belle:2015A'
]
}
analysis = eos.Analysis(**analysis_args)
\end{lstlisting}
In \reflst{basics:analysis:definition} we define a statistical analysis for the inference of $|V_{cb}|$
from measurements of the $\bar{B}\to D\ell^-\bar\nu$ branching ratios by the Belle experiment.
This example will be further discussed in \refsec{usage:analysis}.
First, we define all the arguments used in our analysis.
\begin{itemize}
\item Using the \code{global_options}, we choose the \code{BSZ2015} parametrization~\cite{Straub:2015ica}
to model the hadronic form factors that enter semileptonic $\bar{B}\to D$ transitions.
We also choose the \code{CKM} model to ensure that $|V_{cb}|$ is represented by a single parameter.
\smallskip
\item Priors for both the $|V_{cb}|$ parameter and the \code{BSZ2015} parameters are
described in \code{priors}. Here, each parameter is assigned a uniform prior, which is chosen to
contain at least $98\%$ ($\sim 3\,\sigma$) of the \emph{ideal} posterior probability, i.e., the priors have been
chosen to be wide enough to ``contain'' the posterior defined by this analysis.
\smallskip
\item The likelihood is defined through a list of constraints, which in the above includes both
theoretical lattice QCD results as well as experimental measurements by the Belle collaboration.
For the first part we combine the correlated lattice QCD results published by the Fermilab/MILC and HPQCD collaborations in 2015 \cite{Na:2015kha,MILC:2015uhg}.
For the second part, we combine binned measurements of the branching ratio for
$\bar{B}^0\to D^+e^-\bar\nu$ and $\bar{B}^0\to D^+\mu^-\bar\nu$ decay.
We reiterate that \texttt{EOS}\xspace treats genuine physical observables and pseudo-observables identically.
\end{itemize}
\noindent The class \class{eos.Analysis} further provides convenience methods to carry out the statistical analysis:
\begin{description}
\item[\hlred{\texttt{optimize}}] \index{eos.Analysis!optimize} uses the \code{scipy.optimize} module to find the best
fit point of the posterior. Optional parameters determine the abort condition for the optimization
and the starting point.
\smallskip
\item[\hlred{\texttt{sample}}] \index{eos.Analysis!sample} uses the \code{pypmc} module to produce random variates
of the posterior using an adaptive version of the Metropolis-Hastings algorithm~\cite{doi:10.1063/1.1699114,10.1093/biomet/57.1.97,10.2307/3318737}
with a single Markov chain.
This method can be run several times to repeatedly explore the posterior density and accurately sample from it.
\smallskip
\item[\hlred{\texttt{sample\_pmc}}] \index{eos.Analysis!sample\_pmc} uses the \code{pypmc} module to produce random
variates of the posterior using the Population Monte Carlo algorithm~\cite{2010MNRAS.405.2381K}. To this end, an initial guess
of the posterior in the form of a Gaussian mixture density is created~\cite{2013arXiv1304.7808B} from Markov chain Monte Carlo
samples obtained using \method{eos.Analysis}{sample}.
\end{description}
At any point, the attribute \method{eos.Analysis}{parameters} can be used to access the analysis' parameter set,
e.g., to save the set to file via the \method{eos.Parameters}{dump} method.
We refer to the documentation of the \texttt{EOS}\xspace \texttt{Python}\xspace API~\cite{EOS:API} for further information.\\
Note that the \texttt{C++}\xspace backend used by \class{eos.Analysis} parallelizes the evaluation of the likelihood function.
By default, the number of concurrent threads will match the number of available processors. Users who need to
limit this number (e.g., due to using \texttt{EOS}\xspace on a multi-user system in parallel to other users' jobs) can do so by
setting the \texttt{EOS\_MAX\_THREADS} environment variable to the limit.
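For example, a user on a shared machine could cap the thread count before starting a \texttt{Jupyter}\xspace server (the value \code{4} is arbitrary):

```shell
# Limit EOS likelihood evaluation to four concurrent threads; any process
# started from this shell (e.g., a Jupyter server) inherits the variable.
export EOS_MAX_THREADS=4
```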
\subsection[Class eos.Plotter]
{Class \class{eos.Plotter}}
\label{sec:basics:plotter}
\texttt{EOS}\xspace implements a versatile plotting framework based on the class \class{eos.Plotter},
which relies on \texttt{matplotlib}\xspace~\cite{matplotlib} for the actual plotting.
Its input must be formatted as a dictionary containing two keys:
\code{plot} contains metadata and \code{contents} describes the plot items.
The value associated to the \code{plot} key is a dictionary; it describes the layout of the plot, including axis labels,
positioning of the legend, and similar settings that affect the entire plot.
The value associated to the \code{contents} key is a list; it describes the contents of the plot, expressed in terms of
independent plot items. Possible types of plot items include points, bands, contours, and histograms.
\begin{lstlisting}[
language=iPython,%
caption={%
High-level description of the arguments for the \class{eos.Plotter} class.
The plot will appear inline in a \texttt{Jupyter}\xspace notebook (if \code{FILENAME} is not specified)
or be written to \code{FILENAME} (if specified). In the latter case, the output format
will be determined based on the file extension.
\label{lst:basics:plotter:description}\index{eos.Plotter}
}%
]
plot_desc = {
'plot': {
'x': { ... }, # description of the x axis
'y': { ... }, # description of the y axis
'legend': { ... }, # description of the legend
... # further layouting options
},
'contents': [
{ ... }, # first plot item
{ ... }, # second plot item
]
}
eos.plot.Plotter(plot_desc, FILENAME).plot()
\end{lstlisting}
Each of the items is represented by a dictionary that contains a \code{type} key and an optional \code{name} key.
A full description of all item types and their parameters is available as part of the \texttt{EOS}\xspace \texttt{Python}\xspace API documentation~\cite{EOS:API}.
Here, we provide a brief summary for the most common types, which are used within examples in the course of this document:
\begin{description}
\item[\hlred{\texttt{observable}}] \index{eos.Plotter!observable} plots a single \texttt{EOS}\xspace observable without uncertainties as a function of one kinematic variable or one parameter.
See \reflst{usage:BtoDlnu:BR} for an example.
\smallskip
\item[\hlred{\texttt{histogram}}] \index{eos.Plotter!histogram}
\item[\hlred{\texttt{histogram2D}}] \index{eos.Plotter!histogram2D} plots either a 1D or a 2D histogram of pre-existing random samples. These samples can be contained in \texttt{Python}\xspace objects within the notebook's memory or contained in
a datafile on disk.
See \reflst{usage:plot-prior-prediction-int} and \reflst{simulation:histogram-2D} for examples.
\smallskip
\item[\hlred{\texttt{uncertainty}}] \index{eos.Plotter!uncertainty} plots the uncertainty band of an observable
as a function of one kinematic variable or one parameter. The random samples for the observables
can be contained in \texttt{Python}\xspace objects within the notebook's memory or contained in
a datafile on disk.
See \reflst{usage:plot-prior-prediction-diff} for an example.
\smallskip
\item[\hlred{\texttt{constraint}}] \index{eos.Plotter!constraint} displays a constraint either from the \texttt{EOS}\xspace library or a manually added constraint.
See \reflst{inference:posterior_samples_uncertainties} for an example.
\smallskip
\end{description}
Beyond \code{type} and \code{name} keys, all item types also recognise the following optional keys:
\begin{description}
\item[\hlred{\texttt{alpha}}] \index{eos.Plotter!alpha} A \pyclass{float}, between 0.0 and 1.0, which describes
the opacity of the plot item expressed as an alpha value.
A value of 0.0 means completely transparent, 1.0 means completely opaque.
\smallskip
\item[\hlred{\texttt{color}}] \index{eos.Plotter!color} A \pyclass{str}, containing any valid \texttt{matplotlib}\xspace color
specification, which describes the color of the plot item. Defaults to one of the colors in the \texttt{matplotlib}\xspace default color cycler.
\smallskip
\item[\hlred{\texttt{label}}] \index{eos.Plotter!label} A \pyclass{str}, containing LaTeX commands, which describes
the label that appears in the plot's legend for this plot item.
\end{description}
In \reflst{basics:plotter:description}, \texttt{FILENAME} is an optional argument naming the file into which the plot shall be placed.
The file format is automatically determined based on the file name extension.
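As an illustration, the following sketch combines these optional keys in a single plot item of type \code{observable}; the \code{name} value and the numerical values are placeholders chosen for this example:
\begin{lstlisting}[language=iPython]
# a hypothetical plot item combining the optional keys described above;
# the 'name' value and all numbers are illustrative placeholders
item = {
    'type': 'observable', 'name': 'muon-mode',
    'observable': 'B->Dlnu::dBR/dq2;l=mu',
    'variable': 'q2', 'range': [0.02, 11.60],
    'alpha': 0.5,           # half-transparent
    'color': 'C0',          # first color of the matplotlib default cycler
    'label': r'$\ell=\mu$'  # legend entry, may contain LaTeX
}
\end{lstlisting}
Such an item would be appended to the \code{'contents'} list of the plot description.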
\subsection[Classes eos.References, eos.Reference, and eos.ReferenceName]
{Classes \class{eos.References}, \class{eos.Reference}, and \class{eos.ReferenceName}}
\label{sec:basics:references}
\texttt{EOS}\xspace strives to give complete credit to the various works that underpin the theory predictions and the experimental
and phenomenological analyses that provide likelihoods. To this end, \texttt{EOS}\xspace keeps a database of bibliographical metadata,
which is accessible via the \class{eos.References} class. Each entry is a tuple of an \class{eos.ReferenceName} object
that uniquely identifies the reference and the metadata of the reference as an \class{eos.Reference} object.
For a complete list of works used within \texttt{EOS}\xspace, we refer to the documentation~\cite[List of References]{EOS:doc}.
Each observable provides a list of reference names, corresponding to the pertinent pieces of literature
that were used in their implementations. This list is obtained via the \method{eos.Observable}{references} method,
which returns a generator of \class{eos.ReferenceName} objects:
\begin{lstlisting}[language=iPython]
obs = eos.Observable.make('B_u->lnu::BR',
eos.Parameters(), eos.Kinematics(), eos.Options({'l': 'tau', 'model': 'WET'}))
display([rn for rn in obs.references()]) # shows 'DBG:2013A', amongst others
\end{lstlisting}
Further information on this reference can be obtained from its \class{eos.Reference} object:
\begin{lstlisting}[language=iPython]
ref = eos.References()['DBG:2013A']
display(ref) # displays the reference's title, authors, and eprint hyperlink (if available)
\end{lstlisting}
Similarly, by convention the suffix part of each \class{eos.Constraint} name is a valid reference name.
Therefore, the reference that provides a constraint (e.g., \code{B->D::f_++f_0@FNAL+MILC:2015B})
can be found by looking up the bibliographical metadata associated with the name's suffix part:
\begin{lstlisting}[language=iPython]
display(eos.References()['FNAL+MILC:2015B'])
\end{lstlisting}
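The suffix can also be extracted programmatically. The following sketch relies only on the naming convention described above, i.e., that the suffix follows the \code{@} separator:
\begin{lstlisting}[language=iPython]
# split a constraint name at the '@' separator to obtain the reference name
name = 'B->D::f_++f_0@FNAL+MILC:2015B'
suffix = name.split('@')[-1]   # -> 'FNAL+MILC:2015B'
# eos.References()[suffix] then yields the bibliographical metadata
\end{lstlisting}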
If you feel that your work should be listed as part of a reference for any of the \texttt{EOS}\xspace observables, please
contact the authors to include it.
\section{Conclusion \& Outlook}\label{sec:summary}
We have presented the \texttt{EOS}\xspace software in version 1.0\xspace and explained its three main use cases by means
of concrete examples in the field of flavor physics phenomenology.
Beyond these examples, \texttt{EOS}\xspace has been used extensively for numerical evaluations, statistical analyses
and plots in a number of peer-reviewed publications. We plan to extend \texttt{EOS}\xspace with further processes
and observables, while keeping the \texttt{Python}\xspace interface unchanged.\\
To keep this document concise, some advanced aspects of \texttt{EOS}\xspace have not been discussed.
These aspects, documented in the online documentation~\cite{EOS:doc}, include
\begin{itemize}
\item the possibility to combine existing observables in arithmetic expressions at run time;
\item the command line interface intended for use as part of massively parallelized batch jobs in grid or cluster environments; and
\item the addition of \texttt{C++}\xspace code for new observables and processes.
\end{itemize}
Despite ongoing unit testing and development of the software, we are conscious that \texttt{EOS}\xspace is
neither free of bugs nor provides all the features a user could possibly need. We therefore encourage
users to report any and all bugs they find and to request additional features.
We ask that any such reports or requests are communicated as issues within the \texttt{EOS}\xspace Github
repository~\cite{EOS:repo}.
We are very happy to discuss the addition of further observables and processes with interested parties
from the phenomenological and experimental communities.
\section{Introduction and Physics Case}
\label{sec:intro}
Flavor physics phenomenology has a long history of substantial impact on the development of the \ac{SM}\xspace of particle physics.
Over the last decades, two developments are particularly noteworthy:\\
First, the determination of the \ac{CKM}\xspace matrix elements has developed into a precision enterprise,
thanks in large parts to the efforts at the B-factory experiments BaBar and Belle~\cite{Bevan:2014iga}
and more recently the LHCb experiment~\cite{LHCb:2012myk},
the technological progress in lattice gauge theory predictions~\cite{FlavourLatticeAveragingGroup:2019iem},
and the development of precision phenomenology with continuum methods.\\
Second, the emergence of the so-called ``$b$ anomalies'' has led to cautious excitement in the community.
These anomalies are substantial tensions
between theory predictions of $b$-quark decay observables and their measurements by the ATLAS, BaBar, Belle, CMS, and
LHCb experiments, which present a coherent pattern that might be due to \ac{BSM}\xspace effects,
but individually do not yet reach the required significance of $5\,\sigma$;
see e.g.\ refs.~\cite{Albrecht:2021tul,Bernlochner:2021vlv} for recent reviews.\\
Both developments have led to increasingly sophisticated phenomenological analyses.\\
In the course of such analyses, researchers regularly carry out structurally similar and recurring tasks.
These typical \emph{use cases} include
\begin{enumerate}
\item predicting flavor observables and assessing their theory uncertainties both within the \ac{SM}\xspace and for general
\ac{BSM}\xspace scenarios in the \ac{WET}\xspace;
\smallskip
\item inferring hadronic, \ac{SM}\xspace, and/or \ac{WET}\xspace parameters from an extendable database of experimental and theoretical likelihoods;
\smallskip
\item simulating flavor processes and producing high-quality pseudo events for use in sensitivity studies and
for the preparation of experimental analyses.
\end{enumerate}
The \texttt{EOS}\xspace software~\cite{EOS} has been continuously developed since 2011~\cite{vanDyk:2012zla,EOS:repo} to achieve these tasks.
\texttt{EOS}\xspace is free software published under the GNU General Public License 2~\cite{GPLv2}.
It has produced publication-quality results for approximately 30 peer-reviewed and published phenomenological studies~\cite{%
Bobeth:2010wg,%
Bobeth:2011gi,Bobeth:2011nj,%
Beaujean:2012uj,Bobeth:2012vn,%
Beaujean:2013soa,%
Faller:2013dwa,SentitemsuImsong:2014plu,Boer:2014kda,%
Beaujean:2015gba,Feldmann:2015xsa,Mannel:2015osa,%
Bordone:2016tex,Meinel:2016grj,Boer:2016iez,Serra:2016ivr,%
Bobeth:2017vxj,Blake:2017une,%
Boer:2018vpx,Feldmann:2018kqr,Gubernari:2018wyi,%
Boer:2019zmp,Bordone:2019vic,Blake:2019guk,Bordone:2019guc,%
Gubernari:2020eft,%
Bruggisser:2021duo,Leljak:2021vte,Bobeth:2021lya%
}.
Besides applications in phenomenology, \texttt{EOS}\xspace also has been used in a number of published experimental studies
by the CDF~\cite{%
CDF:2011tds%
}, the CMS~\cite{%
CMS:2013mkz,%
CMS:2015bcy%
} and the LHCb~\cite{%
LHCb:2012bin,%
LHCb:2013zuf,%
LHCb:2014auh,%
LHCb:2015svh,%
LHCb:2018jna,%
LHCb:2020lmf%
} experiments.
The Belle II experiment has included \texttt{EOS}\xspace as part of the external software~\cite{basf2ext} within the
Belle II software analysis framework~\cite{Kuhr:2018lps}.\\
In this article, we describe the \texttt{EOS}\xspace user interface. Although the software is developed mainly in \texttt{C++}\xspace,
it is designed to be used in \texttt{Python}\xspace \cite{python}.
As such, \texttt{EOS}\xspace relies heavily
on the \code{numpy}~\cite{Harris:2020xlr} and \texttt{pypmc}\xspace~\cite{pypmc} packages.
We \emph{highly} recommend that new users run \texttt{EOS}\xspace within a \texttt{Jupyter}\xspace notebook environment \cite{jupyter}.\\
\texttt{EOS}\xspace can be installed in binary form on Linux-based systems\footnote{%
This is limited to systems with \texttt{Python}\xspace version 3.6 or later that also fulfill the ``manylinux\_2\_17\_x86\_64'' platform
requirement as defined in PEP~600~\cite{PEP-600}.
}
with a single command:
\begin{lstlisting}[language=bash]
pip3 install eoshep
\end{lstlisting}
Afterwards, the \texttt{EOS}\xspace \texttt{Python}\xspace module can be accessed, e.g. within a \texttt{Jupyter}\xspace notebook, using
\begin{lstlisting}[language=ipython]
import eos
\end{lstlisting}
We note that this means of installation also works for the ``Windows Subsystem for Linux v2 (WSL2)''. For the purpose
of installing \texttt{EOS}\xspace, WSL2 can be treated like any Linux system.\\
Although \texttt{EOS}\xspace can also be built and installed from source on macOS systems, we do not currently support this platform.
For macOS users we recommend installing \texttt{EOS}\xspace on a remote-accessible Linux system and accessing a \texttt{Jupyter}\xspace
notebook via \code{SSH}; our recommendation is described in detail as part of the frequently-asked questions~\cite{EOS:doc}.
Prospective \texttt{EOS}\xspace developers will find detailed instructions on how to build \texttt{EOS}\xspace from source in the installation section of the documentation~\cite{EOS:doc}.\\
Presently, \texttt{EOS}\xspace provides a total of 844 (pseudo-)observables\footnote{%
\texttt{EOS}\xspace does not distinguish between true observables, which can be unambiguously measured in an experimental
setting, and pseudo-observables, which cannot be unambiguously inferred from experimental or theoretical
data. Pseudo-observables in \texttt{EOS}\xspace include hadronic form factors and other auxiliary hadronic quantities.
} %
pertaining to a large variety of flavor processes.
Obtaining and browsing the full list of observables is discussed in \refsec{basics:observables}.
The processes implemented include
\begin{itemize}
\item (semi)leptonic charged-current $\bar{B}$ meson decays (e.g., $\bar{B}\to D^*\tau\bar\nu$);
\item semileptonic charged-current $\Lambda_b$ baryon decays (e.g., $\Lambda_b\to \Lambda_c(\to \Lambda \pi)\mu\bar\nu$);
\item rare (semi)leptonic and radiative neutral-current $\bar{B}$ meson decays (e.g., $\bar{B}\to \bar{K}^*\mu^+\mu^-$);
\item rare semileptonic and radiative neutral-current $\Lambda_b$ baryon decays (e.g., $\Lambda_b\to \Lambda(\to p \pi) \mu^+\mu^-$); and
\item $B$-meson mixing observables (e.g., $\Delta m_s$).
\end{itemize}
\texttt{EOS}\xspace is designed to be self documenting: a complete list of processes and their respective observables is automatically generated
as part of documentation, which is accessible both through the software itself and online~\cite{EOS:doc}.
The theoretical descriptions of most observables use the \ac{WET}\xspace to account for both \ac{SM}\xspace and \ac{BSM}\xspace predictions.
Details of the \texttt{EOS}\xspace bases of \ac{WET}\xspace operators are described in \refapp{models:WET}.\\
Although \texttt{EOS}\xspace is --- to our knowledge --- the first publicly available open-source flavor physics software~\cite{vanDyk:2012zla,EOS:repo},
it is by no means the only one. \texttt{EOS}\xspace competes with the \texttt{flavio}\xspace~\cite{Straub:2018kue}, \texttt{SuperISO}\xspace~\cite{Neshatpour:2021nbn},
\texttt{HEPfit}\xspace~\cite{DeBlas:2019ehy}, and \texttt{FlavBit}\xspace~\cite{Workgroup:2017myk} software. Major distinctions between \texttt{EOS}\xspace and these competitors are:
\begin{itemize}
\item \texttt{EOS}\xspace focuses on the simultaneous inference of hadronic and \ac{BSM}\xspace parameters;
\item \texttt{EOS}\xspace ensures modularity of hadronic matrix elements, i.e., the possibility to select from various hadronic models
and parametrizations at run time;
\item \texttt{EOS}\xspace provides means to produce pseudo events for use in sensitivity studies and in preparation for experimental measurements; and
\item \texttt{EOS}\xspace provides means to predict hadronic matrix elements from QCD sum rules.
\end{itemize}
These distinctions make analyses possible that cannot currently be carried out with the competing software~\cite{%
Beaujean:2012uj,Beaujean:2013soa,Bobeth:2017vxj,Feldmann:2018kqr,bsll2021%
}, e.g., due to multi-modal or otherwise complicated posteriors that cannot be captured by Markov chain Monte Carlo methods alone.
However, this benefit comes with an increased level of complexity, which we address in the \texttt{EOS}\xspace documentation~\cite{EOS:doc}
and --- to some extent --- in this article.\\
\subsection{How to Read This Document}
Although this paper will give you a first impression of \texttt{EOS}\xspace and basic examples to try in a \texttt{Jupyter}\xspace notebook,
it is not meant to be a stand-alone document. To obtain a deeper understanding, additional documentation, and further
examples, the user is referred to refs.~\cite{EOS:doc,EOS:API,EOS:examples}.
Wherever we list \texttt{Python}\xspace code, we assume that the reader evaluates it within a \texttt{Jupyter}\xspace notebook environment,
to make full use of its rich display capabilities.\\
In \refsec{basics}, we illustrate the basic
usage of \texttt{EOS}\xspace, beginning with an overview of the various classes and concepts available
through the \texttt{Python}\xspace interface.
In \refsec{usage} we continue with a discussion of and examples for the main use cases.
In a series of appendices we provide further details.
\begin{itemize}
\item We describe the three physics models available in \texttt{EOS}\xspace in \refapp{models}.
\item We relegate lengthy \texttt{Python}\xspace code examples that would otherwise interrupt reading \refsec{usage}
to \refapp{PlotExamples}.
\item We document the \texttt{EOS}\xspace internal data format for storing experimental and theoretical likelihoods
in \refapp{constraints-format}.
\item We include a glossary of the main \texttt{EOS}\xspace objects and associated methods in \refapp{glossary}.
\end{itemize}
This article is accompanied by a number of auxiliary files, containing example \texttt{Jupyter}\xspace notebooks for
the basic usage and each of the use cases. These notebooks correspond to the examples
contained in the public source code repository~\cite{EOS:examples} as of \texttt{EOS}\xspace version 1.0\xspace.
\section*{Acknowledgments}
DvD is grateful to Gudrun Hiller, Thomas Mannel, Gino Isidori, and Nico Serra,
whose support made the development of \texttt{EOS}\xspace possible in the first place.
We thank all \texttt{EOS}\xspace contributors who are not authors of this paper,
including Bastian M\"uller, Romy O'Connor, Stefanie Reichert, Martin Ritter, Eduardo Romero,
Ismo Toijala, and Christian Wacker.\\
The work of DvD, EE, NG, SK, and MR and the development of EOS is supported by the German Research Foundation (DFG)
within the Emmy Noether Programme under grant DY 130/1-1 and by the National Natural Science Foundation of China (NSFC)
and the DFG through the funds provided to the Sino-German Collaborative Research Center TRR110
``Symmetries and the Emergence of Structure in QCD''
(NSFC Grant No. 12070131001, DFG Project-ID 196253076 -- TRR 110).
The work of TB is supported by the Royal Society (United Kingdom).
The work of CB was supported by the DFG under grant BO-4535/1-1.
The work of MB and MJ is supported by the Italian Ministry of Research (MIUR) under
grant PRIN 20172LNEEZ.
The work of NG is also supported by the DFG under grant 396021762 -- TRR 257 ``Particle
Physics Phenomenology after the Higgs Discovery''.
The work of RSC and EG was supported by the Swiss National Science Foundation (SNSF) under contracts 159948, 172637 and 174182.
The work of PL is supported by the Cluster of Excellence ``ORIGINS'' funded by
the DFG under Germany's Excellence Strategy -- EXC-2094 -- 390783311.
The work of JV is supported by funding from the Spanish MICINN through the ``Ram\'on y Cajal'' program RYC-2017-21870,
the ``Unit of Excellence Mar\'ia de Maeztu 2020-2023” award to the Institute of Cosmos Sciences (CEX2019-000918-M) and from PID2019-105614GB-C21 and 2017-SGR-929 grants.
\newpage
\input{appendix.tex}
\printindex
\section{Use Cases}
\label{sec:usage}
Each of the three major use cases introduced in \refsec{intro} is discussed in detail
in sections \ref{sec:usage:predictions} to \ref{sec:usage:simulation}.
\subsection{Theory Predictions}
\label{sec:usage:predictions}
[\textit{The example developed in this section can be run interactively from the example notebook for theory predictions available
from ref.~\cite{EOS:repo}, file \href{https://github.com/eos/eos/tree/v1.0/examples/predictions.ipynb}{examples/predictions.ipynb}}]\\
\texttt{EOS}\xspace is equipped to produce theory predictions including their parametric uncertainties
for any of its built-in observables using Bayesian statistics. This requires knowledge
of the probability density function (PDF) of the pertinent parameters.
Here and throughout we will denote the set of parameters as $\ensuremath{\vec\vartheta}$, with
\begin{equation*}
\ensuremath{\vec\vartheta} \equiv (\ensuremath{\vec{x}}, \ensuremath{\vec\nu})\,,
\end{equation*}
where $\ensuremath{\vec{x}}$ represents the parameters of interest, and
$\ensuremath{\vec\nu}$ represents the nuisance parameters. This distinction is entirely a semantic one,
and no technical differences arise from treating a parameter either way.
Production of theory predictions then falls into one of the following cases:
\begin{enumerate}
\item theory predictions for fixed values of all parameters $\ensuremath{\vec\vartheta} = \ensuremath{\vec\vartheta}^*$;
\item \textit{a-priori} predictions with propagation of uncertainties due
to the \emph{prior} PDF $P_0(\ensuremath{\vec\vartheta})$;
\item \textit{a-posteriori} predictions with propagation of uncertainties
due to the \emph{posterior} PDF $P(\ensuremath{\vec\vartheta}|D)$, where $D$ represents
some data.
\end{enumerate}
Case 1 has already been illustrated by the concluding example of
\refsec{basics:observables}.
In \refsec{usage:predictions:fixed} we provide an example showcasing
how to efficiently obtain these predictions. Cases 2 and 3 can be handled identically
in a Monte-Carlo framework and are discussed collectively in \refsec{usage:predictions:sampling}.
\subsubsection{Direct Evaluation for Fixed Parameters}
\label{sec:usage:predictions:fixed}
In \refsec{basics} we have explained how to evaluate an observable for a single configuration of the kinematic variables,
e.g., an integrated branching ratio with fixed integration boundaries, or a differential branching ratio at one point
in the kinematic phase space. Commonly, users need to plot such differential observables as a function of the kinematic
variable but for fixed values of its parameters.
To illustrate how this can be achieved with \texttt{EOS}\xspace, we use the differential branching ratios
for $\bar{B} \to D\lbrace \mu^-,\tau^-\rbrace \bar\nu$ as an example.
The \class{eos.Plotter} class (see \refsec{basics:plotter}), provides means to plot any
\texttt{EOS}\xspace observable as a function of a single kinematic variable (here: $q^2$).
\begin{lstlisting}[%
language=iPython,%
caption={%
Plot the $q^2$-differential branching ratios for $\bar{B} \to D\lbrace \mu^-,\tau^-\rbrace \bar\nu$.
The results are shown as the two central curves in the right plot \refout{usage:prior-prediction}.
\label{lst:usage:BtoDlnu:BR}\index{eos.Plotter!plot}
}%
]
plot_args = {
'plot': {
'x': {'label': r'$q^2$', 'unit': r'$\textnormal{GeV}^2$', 'range': [0.0, 11.60] },
'y': {'label': r'$d\mathcal{B}/dq^2$', 'range': [0.0, 5e-3] },
'legend': { 'location': 'upper center' }
},
'contents': [
{
'type': 'observable', 'observable': 'B->Dlnu::dBR/dq2;l=mu',
'variable': 'q2', 'range': [0.02, 11.60],
'label': r'$\ell=\mu$',
},
{
'type': 'observable', 'observable': 'B->Dlnu::dBR/dq2;l=tau',
'variable': 'q2', 'range': [3.17, 11.60],
'label': r'$\ell=\tau$',
}
]
}
eos.plot.Plotter(plot_args).plot()
\end{lstlisting}
The output is a plot containing the branching ratios for $\ell=\mu, \tau$,
where the $x$ axis shows the kinematic variable $q^2$, and the $y$ axis shows the
value of the differential branching ratio. The output corresponds to the central curves
shown in the right plot of \refout{usage:prior-prediction}.
In the listing above, the statement \code{'variable': 'q2'} specifies that the
kinematic variable \code{q2} is varied in the available \code{range}.\\
Similarly, we can plot an observable as a function of a single parameter, with all other parameters
kept fixed and for a given kinematic configuration. To this end, the \code{range} of the $x$ axis requires
adjustment compared to the previous example, and the contents should be replaced by
\begin{lstlisting}[language=iPython]
...
'contents': [
{
'type': 'observable', 'observable': 'B->Dlnu::dBR/dq2;l=mu,model=WET',
'kinematics': {'q2': 2.0}, 'parameters': {'CKM::abs(V_cb)' : 0.042},
'variable': 'cbmunumu::Re{cSL}', 'range': [-1.0, 1.0],
'label': r'$\ell=\mu$',
}
]
...
\end{lstlisting}
Here, the dependence of the differential branching fraction at $q^2 = 2\,\ensuremath{\mathrm{GeV}}^2$
on the real part of the Wilson coefficient $C_{S_L}$ in the $\bar{c}b\mu\nu_\mu$ sector of the \ac{WET}\xspace
is plotted. Note that the \code{kinematics} key is used to provide the fixed set of kinematic
variables and the \code{parameters} key is used to modify parameter values.
As before, \code{variable} selects the entity that is plotted on the $x$ axis,
which is now recognized to be an \class{eos.Parameter} object rather than an \class{eos.KinematicVariable} object.
\subsubsection{Predictions from Monte Carlo Sampling}
\label{sec:usage:predictions:sampling}
\texttt{EOS}\xspace provides the means for a more sophisticated estimation of theory uncertainties
using Monte Carlo techniques, including importance sampling techniques.
For the sampling of a probability density function, \texttt{EOS}\xspace relies on the
\texttt{pypmc}\xspace package that provides methods for adaptive
Metropolis-Hastings~\cite{doi:10.1063/1.1699114,10.1093/biomet/57.1.97,10.2307/3318737}
and Population Monte Carlo~\cite{2010MNRAS.405.2381K,2013arXiv1304.7808B} sampling.
The uncertainty of an observable $O$ is estimated
from its random variates. We recall that $O \sim P(O)$ with~\cite{gelmanbda04}
\begin{align}
P(O) &
= \int\mathrm{d}\ensuremath{\vec\vartheta}\, P(O, \ensuremath{\vec\vartheta})
= \int\mathrm{d}\ensuremath{\vec\vartheta}\, P(O|\ensuremath{\vec\vartheta}) P(\ensuremath{\vec\vartheta})
= \int\mathrm{d}\ensuremath{\vec\vartheta}\,
\delta\left[O - f_O(\ensuremath{\vec\vartheta})\right] P(\ensuremath{\vec\vartheta})\,.
\end{align}
Here, $f_O(\ensuremath{\vec\vartheta})$ is the theoretical expression that predicts $O$
for a given set of parameters $\ensuremath{\vec\vartheta}$, and the Dirac $\delta$ function encodes that this prediction is deterministic.
With this knowledge at hand, we approach cases 2 and 3 as discussed in \ref{sec:usage:predictions}
in an essentially identical way:\\
For \emph{case 2}, we use $P(\ensuremath{\vec\vartheta}) = P_0(\ensuremath{\vec\vartheta})$, i.e., the prior PDF.
We note that \texttt{EOS}\xspace treats all priors $P_0$ as \emph{univariate PDFs} and therefore as uncorrelated.
Mathematically, a multivariate prior is equivalent to a multivariate likelihood
with flat, univariate priors.
By design, \texttt{EOS}\xspace implements multivariate correlated priors in terms of a multivariate correlated
likelihood.
For example, the parameters in the parameterizations
of hadronic form factors are constrained by various theoretical methods like lattice
QCD calculations, light-cone sum rule calculations, unitarity bounds and
constraints that arise in the limit of a heavy-quark mass. Under these
circumstances one might still use the terminology \textit{prior prediction}
whenever the included constraints are only of a theoretical nature, i.e., when no
experimental information is used.\\
For \emph{case 3}, we use $P(\ensuremath{\vec\vartheta}) = P(\ensuremath{\vec\vartheta} | D)$, i.e., the posterior PDF
as obtained from a previous fit given some data $D$. Although based on case 3,
the examples below also illustrate case 2, since this distinction is entirely a semantic one.\\
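In both cases, the propagation of uncertainties amounts to drawing parameter samples $\ensuremath{\vec\vartheta} \sim P(\ensuremath{\vec\vartheta})$ and pushing them through $f_O$. The following \texttt{EOS}\xspace-independent sketch illustrates this; the Gaussian parameter distribution and the toy function \code{f_O} are illustrative assumptions, not part of \texttt{EOS}\xspace:
\begin{lstlisting}[language=iPython]
import numpy as np

rng = np.random.default_rng(0)

# toy stand-in for the theory prediction f_O(theta); not an EOS observable
def f_O(theta):
    return theta[..., 0] ** 2 + 0.5 * theta[..., 1]

# draw parameter samples theta ~ P(theta) (here: independent Gaussians) ...
theta_samples = rng.normal(loc=[1.0, 0.0], scale=[0.1, 0.2], size=(5000, 2))

# ... and push them through f_O; the results are variates of P(O)
O_samples = f_O(theta_samples)
\end{lstlisting}
The mean and standard deviation of \code{O_samples} then estimate the central value and uncertainty of $O$.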
We continue using the integrated branching ratios of $B^-\to D^0 \lbrace\mu^-,
\tau^-\rbrace\bar\nu$ decays as examples. The largest source of theoretical
uncertainty in these decays arises from the hadronic matrix elements, i.e.,
from the form factors $f^{\bar{B}\to D}_+(q^2)$ and $f^{\bar{B}\to D}_0(q^2)$.
Both form factors have been obtained independently using lattice QCD simulations
by the HPQCD \cite{Na:2015kha} and FNAL/MILC \cite{Lattice:2015rga} collaborations.
In the following this information is used as part of the data $D$
in the form of a joint likelihood. The form factors at different $q^2$ values
of each calculation are available in \texttt{EOS}\xspace as \class{eos.Constraint} objects
under the names \code{B->D::f_++f_0@HPQCD:2015A} and
\code{B->D::f_++f_0@FNAL+MILC:2015B}.
Here, we use these two constraints to construct a multivariate Gaussian prior
as follows:
\begin{lstlisting}[language=iPython]
analysis_args = {
'priors': [
{'parameter': 'B->D::alpha^f+_0@BSZ2015', 'min': 0.0, 'max': 1.0, 'type': 'uniform'},
{'parameter': 'B->D::alpha^f+_1@BSZ2015', 'min':-5.0, 'max':+5.0, 'type': 'uniform'},
{'parameter': 'B->D::alpha^f+_2@BSZ2015', 'min':-5.0, 'max':+5.0, 'type': 'uniform'},
{'parameter': 'B->D::alpha^f0_1@BSZ2015', 'min':-5.0, 'max':+5.0, 'type': 'uniform'},
{'parameter': 'B->D::alpha^f0_2@BSZ2015', 'min':-5.0, 'max':+5.0, 'type': 'uniform'}
],
'likelihood': [ 'B->D::f_++f_0@HPQCD:2015A', 'B->D::f_++f_0@FNAL+MILC:2015B' ]
}
prior = eos.Analysis(**analysis_args)
\end{lstlisting}
Next we create two observables: the semimuonic branching ratio and the
semitauonic branching ratio. By using
\object{prior.parameters} in the construction of these observables, we
ensure that our observables and the \object{prior} share the
same parameter set. This means that changes to \object{prior.parameters}
will affect the evaluation of both observables.
\begin{lstlisting}[language=iPython,
caption={%
Produce samples of the prior and prior-predictive samples for two observables.
\label{lst:usage:prior-samples-int}
}%
]
obs_mu = eos.Observable.make('B->Dlnu::BR', prior.parameters, eos.Kinematics({'q2_min': 0.02, 'q2_max': 11.60}),
eos.Options({'l':'mu', 'form-factors':'BSZ2015'}))
obs_tau = eos.Observable.make('B->Dlnu::BR', prior.parameters, eos.Kinematics({'q2_min': 3.17, 'q2_max': 11.60}),
eos.Options({'l':'tau','form-factors':'BSZ2015'}))
observables = (obs_mu, obs_tau)
parameter_samples, _, observable_samples = prior.sample(N=5000, pre_N=1000, observables=observables)
\end{lstlisting}
In the above, we provide the option \code{'form-factors': 'BSZ2015'}
to ensure that the form factor plugin corresponds to the set of parameters
that are described by \object{prior}.
Sampling from the natural logarithm of the prior PDF and -- at the same time -- producing
prior-predictive samples of both observables is achieved using the \method{eos.Analysis}{sample} method.
This method runs one Markov chain using the \texttt{pypmc}\xspace package, and it is discussed in more detail in
\refsec{usage:inference}.
Here \code{N=5000} samples of both the parameter set and the observable set are produced,
and we discard the values of the log prior for each parameter sample
by assigning the return value to \code{_}.
Note that the production of posterior-predictive samples is achieved in the same way. The distinction
between a prior PDF and a posterior PDF is entirely a semantic one.\\
To illustrate the prior-predictive samples we use \texttt{EOS}\xspace's plotting framework:
\begin{lstlisting}[%
language=iPython,%
caption={%
Histogram prior-predictive samples of two observables.
The output is shown in the left plot \refout{usage:prior-prediction}.
\label{lst:usage:plot-prior-prediction-int}\index{eos.Plotter!plot}
}%
]
plot_args = {
'plot': {
'x': { 'label': r'$d\mathcal{B}/dq^2$', 'range': [0.0, 3e-2] },
'legend': { 'location': 'upper center' }
},
'contents': [
{ 'label': r'$\ell=\mu$', 'type': 'histogram', 'bins': 30,
'data': { 'samples': observable_samples[:, 0] } },
{ 'label': r'$\ell=\tau$','type': 'histogram', 'bins': 30,
'data': { 'samples': observable_samples[:, 1] } },
]
}
eos.plot.Plotter(plot_args).plot()
\end{lstlisting}
The arithmetic mean and the variance of the samples can be determined with
standard techniques, e.g., using the \texttt{NumPy}\xspace routines \code{numpy.average} and \code{numpy.var}.\\
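For instance, assuming an array with the shape of \code{observable_samples} as produced in \reflst{usage:prior-samples-int} (emulated here with synthetic Gaussian data, since the true samples require the preceding analysis):
\begin{lstlisting}[language=iPython]
import numpy as np

# synthetic stand-in for the (N, 2) array returned by prior.sample(...);
# the location and scale values are purely illustrative
rng = np.random.default_rng(1)
observable_samples = rng.normal(loc=[2.3e-2, 0.7e-2],
                                scale=[1.0e-3, 5.0e-4], size=(5000, 2))

mean = np.average(observable_samples, axis=0)  # one mean per observable
var = np.var(observable_samples, axis=0)       # one variance per observable
\end{lstlisting}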
A further recurring task is to produce and plot uncertainty bands for differential
observables. Here, we use the differential branching ratios for the previously discussed
semimuonic and semitauonic decays.
Using \texttt{EOS}\xspace we approach this task by creating two lists of observables.
The first list includes only the $\bar{B}\to D\mu^-\bar\nu$ at various points in its phase space.
Due to the strong dependence of the branching ratio on $q^2$,
we do not distribute the points equally across the full phase space.
Instead, we equally distribute half of the points in the interval $[0.02\,\ensuremath{\mathrm{GeV}}^2, 1.00\,\ensuremath{\mathrm{GeV}}^2]$ and the other half in the remainder of the phase space.
The second list is constructed similarly for $\bar{B}\to D\tau^-\bar\nu$. We then pass these
lists to \method{eos.Analysis}{sample}, to obtain prior-predictive samples of the observables:
\begin{lstlisting}[%
language=iPython,%
caption={%
Produce prior-predictive samples for the differential
$\bar{B}\to D\lbrace \mu^-,\tau^-\rbrace \bar\nu$ branching ratios
at various points in their respective phase spaces.
The results are used in \reflst{usage:plot-prior-prediction-diff}
to produce the output shown in the right plot of \refout{usage:prior-prediction}.
\label{lst:usage:prior-samples-diff}
}%
]
mu_q2values = numpy.unique(numpy.concatenate((numpy.linspace(0.02, 1.00, 20), numpy.linspace(1.00, 11.60, 20))))
mu_obs = [eos.Observable.make(
'B->Dlnu::dBR/dq2', prior.parameters, eos.Kinematics(q2=q2),
eos.Options({'form-factors': 'BSZ2015', 'l': 'mu'}))
for q2 in mu_q2values]
tau_q2values = numpy.linspace(3.17, 11.60, 40)
tau_obs = [eos.Observable.make(
'B->Dlnu::dBR/dq2', prior.parameters, eos.Kinematics(q2=q2),
eos.Options({'form-factors': 'BSZ2015', 'l': 'tau'}))
for q2 in tau_q2values]
_, _, mu_samples = prior.sample(N=5000, pre_N=1000, observables=mu_obs)
_, _, tau_samples = prior.sample(N=5000, pre_N=1000, observables=tau_obs)
\end{lstlisting}
We plot the so-obtained prior-predictive samples with \texttt{EOS}\xspace's plotting framework:
\begin{lstlisting}[%
language=iPython,%
caption={%
Plot the previously obtained prior-predictive samples.
The production of the samples is achieved in \reflst{usage:prior-samples-diff},
and the output is shown in the right plot of \refout{usage:prior-prediction}.
\label{lst:usage:plot-prior-prediction-diff}\index{eos.Plotter!plot}
}%
]
plot_args = {
'plot': {
'x': {'label': r'$q^2$', 'unit': r'$\textnormal{GeV}^2$', 'range': [0.0, 11.60] },
'y': {'label': r'$d\mathcal{B}/dq^2$', 'range': [0.0, 5e-3] },
'legend': { 'location': 'upper center' }
},
'contents': [
{
'label': r'$\ell=\mu$', 'type': 'uncertainty', 'range': [0.02, 11.60],
'data': { 'samples': mu_samples, 'xvalues': mu_q2values }
},
{
'label': r'$\ell=\tau$','type': 'uncertainty', 'range': [3.17, 11.60],
'data': { 'samples': tau_samples, 'xvalues': tau_q2values }
},
]
}
eos.plot.Plotter(plot_args).plot()
\end{lstlisting}
\begin{joutput}[t]
\centering
\includegraphics[width=0.48\linewidth]{figures/predictions/prior-prediction-int.pdf}
\includegraphics[width=0.48\linewidth]{figures/predictions/prior-prediction-diff.pdf}
\caption{%
Plot of the branching ratios of $\bar{B}\to D\lbrace \mu^-,\tau^-\rbrace\bar\nu$.
Left: prior-predictive samples for the integrated branching ratios obtained from the code in \reflst{usage:prior-samples-int}.
Right: differential branching ratios as functions of $q^2$. The central curves are obtained from \reflst{usage:BtoDlnu:BR}.
The uncertainty bands are obtained from the samples obtained in \reflst{usage:prior-samples-diff}
using the plotting code in \reflst{usage:plot-prior-prediction-diff}.
}
\label{out:usage:prior-prediction}
\end{joutput}
\subsection{Parameter Inference}
\label{sec:usage:inference}
[\textit{The example developed in this section can be run interactively from the example notebook for parameter inference available
from ref.~\cite{EOS:repo}, file \href{https://github.com/eos/eos/tree/v1.0/examples/inference.ipynb}{examples/inference.ipynb}}]\\
\texttt{EOS}\xspace infers parameters from a database of experimental or theoretical constraints in combination with its built-in observables.
This section illustrates how to construct an \class{eos.Analysis} object that represents the statistical analysis
and to infer the best-fit point and uncertainties of a list of parameters through optimization and Monte Carlo methods.
We pick up the example introduced in \refsec{basics:analysis} to illustrate the above-mentioned features of \texttt{EOS}\xspace.
In particular, we use the two experimental constraints \code{B^0->D^+e^-nu::BRs@Belle:2015A} and \code{B^0->D^+mu^-nu::BRs@Belle:2015A} to infer the value of the \ac{CKM}\xspace matrix element $|V_{cb}|$.
\subsubsection{Defining the Statistical Analysis} \label{sec:usage:analysis}
To define our statistical analysis for the inference of $|V_{cb}|$ from measurements of the $\bar{B}\to D\ell^-\bar\nu$
branching ratios, some decisions are needed.
First, we must decide how to parametrize the hadronic form factors that describe semileptonic $\bar{B}\to D$ transitions.
For what follows we will use the parametrization of Ref. \cite{Straub:2015ica}, referred to as \code{[BSZ:2015A]}.
Next, we must decide the theory input for the form factors.
For this, we will combine the correlated lattice QCD results published by the Fermilab/MILC and HPQCD collaborations in 2015 \cite{Na:2015kha,MILC:2015uhg}.
The corresponding \object{eos.Analysis} object is shown in \reflst{basics:analysis:definition}; it has been used previously
as an example in \refsec{basics:analysis}.
The global options ensure that our choice of form factor parametrization is used throughout, and that for \ac{CKM}\xspace matrix elements the \code{CKM} model is used.
The latter provides parametric access to the $V_{cb}$ matrix element through two objects of type \object{eos.Parameter}:
the absolute value \code{CKM::abs(V_cb)} and the complex phase \code{CKM::arg(V_cb)}.
The phase is not accessible from $b\to c\ell\bar\nu$ transitions.
We also set the starting value of \code{CKM::abs(V_cb)} to a sensible value of $42 \cdot 10^{-3}$ \textit{via}
\begin{lstlisting}[language=iPython]
analysis.parameters['CKM::abs(V_cb)'].set(42.0e-3)
\end{lstlisting}
\begin{joutput}[t]
\resizebox{\textwidth}{!}{%
\begin{tabular}{ll}
\toprule
parameter & value \\
\midrule
$|V_{cb}|$ & 0.0422 \\
\texttt{B->D::alpha\^{}f+\_0@BSZ2015} & 0.6671 \\
\texttt{B->D::alpha\^{}f+\_1@BSZ2015} & -2.5314 \\
\texttt{B->D::alpha\^{}f+\_2@BSZ2015} & 4.8813 \\
\texttt{B->D::alpha\^{}f0\_1@BSZ2015} & 0.2660 \\
\texttt{B->D::alpha\^{}f0\_2@BSZ2015} & -0.8410 \\
\bottomrule
\end{tabular}
%
\hspace*{1em}
%
\begin{tabular}{lll}
\toprule
constraint & $\chi^2$ & d.o.f. \\
\midrule
\texttt{B->D::f\_++f\_0@HPQCD:2015A} & 3.4847 & 7 \\
\texttt{B->D::f\_++f\_0@FNAL+MILC:2015B} & 3.1016 & 5 \\
\texttt{B\^{}0->D\^{}+e\^{}-nu::BRs@Belle:2015A} & 11.8206 & 10 \\
\texttt{B\^{}0->D\^{}+mu\^{}-nu::BRs@Belle:2015A} & 5.2242 & 10 \\
\bottomrule
\end{tabular}
%
\hspace*{1em}
%
\begin{tabular}{ll}
\toprule
total $\chi^2$ & 23.6310 \\
total degrees of freedom & 26 \\
p-value & 59.7053\% \\
\bottomrule
\end{tabular}
}
\caption{%
Display of the best-fit point and goodness-of-fit summary obtained from optimizing
the $\bar{B}\to D\ell^-\bar\nu$ analysis shown in \reflst{basics:analysis:definition}.
}
\label{out:inference:bfpgof}
\end{joutput}
To maximize the (logarithm of the) posterior density we can call the \method{eos.Analysis}{optimize} method, as shown in \reflst{inference:bfpgof}.
In a \texttt{Jupyter}\xspace notebook, it is useful to display the return value of this method, which illustrates the best-fit point.
Further useful information is contained in the goodness-of-fit summary.
The latter lists each constraint, its degrees of freedom, and its $\chi^2$ value (if applicable\footnote{%
Note that \texttt{EOS}\xspace supports likelihood functions
that do not have a $\chi^2$ test statistic or any test statistic at all.
}), alongside the $p$-value for the entire likelihood.
\begin{lstlisting}[%
language=iPython,%
caption={%
Optimize the posterior density and provide the best-fit point and goodness-of-fit summary.
The output is shown in \refout{inference:bfpgof}.
\label{lst:inference:bfpgof}
}%
]
bfp = analysis.optimize()
display(bfp)
display(analysis.goodness_of_fit())
\end{lstlisting}
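The $p$-value quoted in \refout{inference:bfpgof} is the survival function of the $\chi^2$ distribution evaluated at the total $\chi^2$ for the total number of degrees of freedom. The following standalone sketch (plain Python, independent of \texttt{EOS}\xspace) reproduces the quoted $59.7\%$:

```python
import math

def chi2_sf(x, dof):
    """Survival function of the chi^2 distribution, P(X > x), via the
    series expansion of the regularized lower incomplete gamma function."""
    a, z = 0.5 * dof, 0.5 * x
    term = 1.0 / a       # n = 0 term of sum_n z^n / (a (a+1) ... (a+n))
    total = term
    n = 0
    while term > 1e-16 * total:
        n += 1
        term *= z / (a + n)
        total += term
    p_lower = total * math.exp(-z + a * math.log(z) - math.lgamma(a))
    return 1.0 - p_lower

# total chi^2 and degrees of freedom from the goodness-of-fit summary
p_value = chi2_sf(23.6310, 26)   # ~0.597
```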
Instead of setting individual parameters to sensible values as we did for \code{CKM::abs(V_cb)} earlier,
a starting point can alternatively be provided to \method{eos.Analysis}{optimize} using the \code{start_point} keyword argument.
The maximization of the posterior by means of \method{eos.Analysis}{optimize} uses \texttt{SciPy}\xspace's \pyclass{optimize}
module~\cite{scipy}. The default optimization algorithm is SLSQP (Sequential Least SQuares Programming).
Other algorithms can be selected and configured through keyword arguments that \method{eos.Analysis}{optimize}
forwards to \pyclass{scipy.optimize}.\\
To interface with optimizers other than those available within \texttt{SciPy}\xspace, \texttt{EOS}\xspace provides the \method{eos.Analysis}{log\_pdf} method.
As its first argument, it expects the list of the parameter values. The parameters' ordering must correspond to
the ordering of \object{analysis.varied_parameters}, and each parameter's values must be rescaled to the interval $[-1, +1]$,
where the boundaries correspond to the minimal/maximal value in the prior specification.
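The rescaling expected by \method{eos.Analysis}{log\_pdf} is an affine map of each parameter onto $[-1, +1]$; a minimal sketch (the prior range for $|V_{cb}|$ used below is a hypothetical example, not taken from the analysis):

```python
def rescale_to_unit_interval(value, prior_min, prior_max):
    """Affinely map a parameter value from its prior range
    [prior_min, prior_max] onto the interval [-1, +1]."""
    return 2.0 * (value - prior_min) / (prior_max - prior_min) - 1.0

# example: |V_cb| = 42.0e-3 with a hypothetical prior range [38e-3, 45e-3]
u = rescale_to_unit_interval(42.0e-3, 38.0e-3, 45.0e-3)
```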
\subsubsection{Importance Sampling of the Posterior}
To sample from the posterior, \texttt{EOS}\xspace provides the \method{eos.Analysis}{sample} method.
Optionally, this can also produce posterior-predictive samples for a list of observables.
We can use these samples to illustrate the results of our fit in relation to the experimental constraints.\\
For this example, we produce such posterior-predictive samples for the differential $\bar{B}\to D^+\mu^-\bar\nu$ branching
ratio at the points of the kinematic variable $q^2$ used in the previous examples (redefined in the following
listing for completeness).
\begin{lstlisting}[%
language=iPython,%
caption={%
Produce posterior-predictive samples for the differential $\bar{B}\to D^+\mu^-\bar\nu$ branching ratio. \label{lst:inference:posterior_predictive_sample_distribution}
}%
]
mu_q2values = numpy.unique(numpy.concatenate((numpy.linspace(0.02, 1.00, 20),
numpy.linspace(1.00, 11.60, 20))))
mu_obs = [eos.Observable.make('B->Dlnu::dBR/dq2', analysis.parameters, eos.Kinematics(q2=q2),
eos.Options({'form-factors': 'BSZ2015', 'l': 'mu', 'q': 'd'})) for q2 in mu_q2values]
parameter_samples, log_weights, mu_samples = analysis.sample(N=20000, stride=5, pre_N=1000, preruns=5, start_point=bfp.point,
observables=mu_obs)
\end{lstlisting}
In the above, we optionally start sampling at the best-fit point obtained earlier through optimization.
We carry out 5 burn-in runs/preruns of 1000 samples each.
The samples obtained in each of these preruns are used to adapt the Markov chain but are then discarded.
The main run produces a total of \code{N * stride = 100000} random Markov Chain samples.
The latter are thinned down by a factor of \code{stride = 5} to obtain \code{N = 20000} samples, which are stored in \code{parameter_samples}.
The thinning reduces the autocorrelation of the samples.
The values of the log(posterior) are stored in \code{log_weights}.
The posterior-predictive samples for the observables are stored in \code{mu_samples}, and are only returned if the \code{observables} keyword argument is provided.\\
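The effect of thinning can be illustrated with a toy chain (this is not \texttt{EOS}\xspace code; the AR(1) process below merely stands in for an adapted Markov chain with lag-one autocorrelation of about $0.9$):

```python
import numpy

rng = numpy.random.default_rng(123456)

# toy AR(1) chain standing in for a Markov chain of one parameter
n, stride = 100000, 5
noise = rng.normal(size=n)
chain = numpy.empty(n)
chain[0] = 0.0
for i in range(1, n):
    chain[i] = 0.9 * chain[i - 1] + noise[i]

def lag1_autocorr(x):
    """Estimate the lag-one autocorrelation of a 1D sample array."""
    x = x - x.mean()
    return float(numpy.dot(x[:-1], x[1:]) / numpy.dot(x, x))

# thinning: keep every stride-th sample, as analysis.sample does internally;
# the autocorrelation drops from ~0.9 towards ~0.9**5 ~ 0.59
thinned = chain[::stride]
```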
We can now illustrate the posterior samples either as a histogram or as a \ac{KDE}\xspace using the built-in plotting functions,
see \refout{inference:posterior-sample-hist+kde} and \reflst{plot-ex:inference:posterior-sample-hist}.
Contours at given levels of posterior probability, as shown in \refout{inference:posterior-sample-hist+kde},
can be obtained for any pair of parameters using \reflst{plot-ex:inference:posterior-sample-kde}.\\
\begin{joutput}[t]
\centering
\includegraphics[width=0.48\linewidth]{figures/inference/posterior-sample-hist.pdf}
\includegraphics[width=0.48\linewidth]{figures/inference/posterior-sample-kde.pdf}
\caption{%
Distribution of samples (left) of the 1D-marginal posterior of $|V_{cb}|$ as a regular histogram
and as a kernel density estimate (blue line); and (right) of the 2D-marginal joint posterior
of $|V_{cb}|$ and $f^{\bar{B}\to D}_+(0)$ as contours at $68\%$ and $95\%$ probability (orange lines and filled areas).
The plots are produced by \reflst{plot-ex:inference:posterior-sample-hist}
and \reflst{plot-ex:inference:posterior-sample-kde}, respectively.
}
\label{out:inference:posterior-sample-hist+kde}
\end{joutput}
Sampling with the Metropolis-Hastings algorithm is known to work well for unimodal densities.
However, in cases of multimodal densities or blind directions, problems regularly arise.
\texttt{EOS}\xspace provides the means to follow the approach of ref.~\cite{2013arXiv1304.7808B}, which
proposes to use (potentially unadapted) Markov chains to explore the parameter space to
initialize a Gaussian mixture density. The latter is then adapted using the Population
Monte Carlo algorithm~\cite{2010MNRAS.405.2381K}, for which \texttt{EOS}\xspace uses the \texttt{pypmc}\xspace package~\cite{pypmc}.
Within \texttt{EOS}\xspace, we use schematically the following approach:
\begin{lstlisting}[
language=iPython,%
caption={%
Create a mixture density from a number of Markov chains, and adapt it to the posterior
through a call to \code{eos.Analysis.sample_pmc}\index{eos.Analysis!sample\_pmc}.
\label{lst:inference:sample_pmc}
}%
]
from pypmc.mix_adapt.r_value import make_r_gaussmix
chains = []
for i in range(10):
# run Markov Chains for your problem
chain, _ = analysis.sample(...) # use relevant settings for your analysis in the '...'
chains.append(chain)
# please consult the pypmc documentation for details on the call below
proposal_density = make_r_gaussmix(chains, K_g=3, critical_r=1.1)
# adapt the proposal to the posterior and obtain high-quality samples
analysis.sample_pmc(proposal_density, ...) # use relevant settings for your analysis in the '...'
\end{lstlisting}
We can visualize the posterior-predictive samples using:
\begin{lstlisting}[%
language=iPython,%
caption={%
Plot posterior-predictive importance samples for the differential $\bar{B}\to D^+\mu^-\bar\nu$ branching ratio vs. $q^2$.
The result is shown in \refout{inference:posterior-prediction-diff}.
\label{lst:inference:posterior_samples_uncertainties}\index{eos.Plotter!plot}
}%
]
plot_args = {
'plot': {
'x': { 'label': r'$q^2$', 'unit': r'$\textnormal{GeV}^2$', 'range': [0.0, 11.63] },
'y': { 'label': r'$d\mathcal{B}/dq^2$', 'range': [0.0, 5e-3] },
'legend': { 'location': 'lower left' }
},
'contents': [
{
'label': r'$\ell=\mu$',
'type': 'uncertainty',
'range': [0.02, 11.60],
'data': { 'samples': mu_samples, 'xvalues': mu_q2values }
},
{
'label': r'Belle 2015 $\ell=e,\, q=d$',
'type': 'constraint',
'color': 'C0',
'constraints': 'B^0->D^+e^-nu::BRs@Belle:2015A',
'observable': 'B->Dlnu::BR',
'variable': 'q2',
'rescale-by-width': True
},
{
'label': r'Belle 2015 $\ell=\mu,\,q=d$',
'type': 'constraint',
'color': 'C1',
'constraints': 'B^0->D^+mu^-nu::BRs@Belle:2015A',
'observable': 'B->Dlnu::BR',
'variable': 'q2',
'rescale-by-width': True
},
]
}
eos.plot.Plotter(plot_args).plot()
\end{lstlisting}
Note that the use of \code{'rescale-by-width': True} converts the database's existing entry
for the \emph{bin-integrated} branching ratio into the \emph{bin-averaged} branching ratio.
Only the latter can be meaningfully compared with the curve of the differential branching ratio.
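Schematically, the conversion divides each bin-integrated value by the width of its $q^2$ bin (the numbers below are purely illustrative, not Belle data):

```python
def bin_averaged(integrated_br, q2_min, q2_max):
    """Convert a bin-integrated branching ratio into the bin-averaged
    branching ratio, which is comparable to the dBR/dq2 curve."""
    return integrated_br / (q2_max - q2_min)

# illustrative only: a branching ratio of 1e-3 integrated over a
# 2 GeV^2 wide bin corresponds to an average of 5e-4 per GeV^2
avg = bin_averaged(1.0e-3, 4.0, 6.0)
```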
\FloatBarrier
\begin{joutput}[t]
\centering
\includegraphics[width=0.48\linewidth]{figures/inference/posterior-prediction-diff.pdf}
\caption{
Plot of the posterior-predictive importance samples for the differential $\bar{B}\to D^+\mu^-\bar\nu$ branching ratio vs. $q^2$,
juxtaposed with bin-averaged measurements of the $\bar{B}\to D^+\lbrace e^-,\mu^-\rbrace\bar\nu$ branching ratio
by the Belle experiment.
}
\label{out:inference:posterior-prediction-diff}
\end{joutput}
\subsection{Event Simulation}
\label{sec:usage:simulation}
[\textit{The example developed in this section can be run interactively from the example notebook for event simulation available
from ref.~\cite{EOS:repo}, file \href{https://github.com/eos/eos/tree/v1.0/examples/simulation.ipynb}{examples/simulation.ipynb}}]\\
\texttt{EOS}\xspace contains built-in probability density functions (PDFs) from which pseudo events can be simulated using Markov chain Monte Carlo techniques.
\subsubsection{Constructing a 1D PDF and Simulating Pseudo Events}
The simulation of events is performed using the \method{eos.SignalPDF}{sample\_mcmc} method.
For example, the construction of the one-dimensional PDF describing the $B\to D\ell\nu_\ell$ decay distribution in the variable $q^2$ and for $\ell=\mu$ leptons requires:
\begin{itemize}
\item the \code{q2} kinematic variable that can be set to an arbitrary starting value.
\item the boundaries, \code{q2_min} and \code{q2_max}, for the phase space from which we want to sample. If needed, the phase space can be shrunk to a volume smaller than physically allowed; the normalization of the PDF will automatically adapt.
\end{itemize}
For $B\to D\ell\nu_\ell$, the Markov chains can self-adapt to the PDF in 3 preruns with 1000 pseudo events/samples each.
The simulation of \code{stride*N=250000} pseudo events/samples from the PDF, which are thinned down to \code{N=50000}, is performed with the following code:
\begin{lstlisting}[%
language=iPython,%
caption={%
Produce importance samples of the one-dimensional \code{SignalPDF}
for the $\bar{B}\to D\ell^-\bar\nu$ differential branching ratio.
The samples are compared to the analytic expression in the left figure of \refout{simulation:plot+histogram},
which is produced from \reflst{plot-ex:simulation:plot+histogram-1D}.
\label{lst:simulation:sample-1D}
}%
]
rng = numpy.random.mtrand.RandomState(123456) # Defines a seeded random number generator
mu_kinematics = eos.Kinematics({'q2': 2.0, 'q2_min': 0.02, 'q2_max': 11.6})
mu_options = eos.Options({'l': 'mu'})
mu_pdf = eos.SignalPDF.make('B->Dlnu::dGamma/dq2', eos.Parameters(), mu_kinematics, mu_options)
mu_samples, mu_weights = mu_pdf.sample_mcmc(N=50000, stride=5, pre_N=1000, preruns=3, rng=rng)
\end{lstlisting}
Samples for other lepton flavors, e.g., $\ell=\tau$, require only a change of the \class{eos.Options} object to use \code{'l': 'tau'} instead
and adjustment of the phase space.
Similar to observables, \class{eos.SignalPDF} objects can be plotted as a function of a single kinematic variable, while keeping all other kinematic variables fixed.
The fixed kinematic variables are provided as a \pyclass{dict} via the \code{kinematics} key.
We show two such plots in combination with histograms of the \ac{PDF}\xspace samples in \refout{simulation:plot+histogram} (left).
The output shows excellent agreement between the simulations and the respective analytic expressions for the 1D \acp{PDF}.
\subsubsection{Constructing a 4D PDF and Simulating Pseudo Events}
Samples can also be drawn for \acp{PDF} with more than one kinematic variable.
As an example, we use the full four-dimensional \ac{PDF}\xspace for $\bar{B}\to D^*\ell\bar{\nu}$ decays.
Declaration and initialization of all four kinematic variables
(\code{q2}, \code{cos(theta_l)}, \code{cos(theta_d)}, and \code{phi}) is similar to the 1D case.
\begin{lstlisting}[language=iPython,
caption={%
Produce importance samples of the four-dimensional \code{SignalPDF}
for the $\bar{B}\to D^*(\to D\pi)\ell^-\bar\nu$ differential branching ratio.
\label{lst:simulation:sample-4D}
}%
]
dstarlnu_kinematics = eos.Kinematics({
'q2': 2.0, 'q2_min': 0.02, 'q2_max': 10.5,
'cos(theta_l)': 0.0, 'cos(theta_l)_min': -1.0, 'cos(theta_l)_max': +1.0,
'cos(theta_d)': 0.0, 'cos(theta_d)_min': -1.0, 'cos(theta_d)_max': +1.0,
'phi': 0.3, 'phi_min': 0.0, 'phi_max': 2.0 * numpy.pi
})
\end{lstlisting}
We then produce the samples in a similar way as for the 1D \ac{PDF}\xspace:
\begin{lstlisting}[language=iPython]
rng = numpy.random.mtrand.RandomState(74205) # Defines a seeded random number generator
dstarlnu_pdf = eos.SignalPDF.make('B->D^*lnu::d^4Gamma', eos.Parameters(), dstarlnu_kinematics, eos.Options())
dstarlnu_samples, _ = dstarlnu_pdf.sample_mcmc(N=1e6, stride=5, pre_N=1000, preruns=3, rng=rng)
\end{lstlisting}
The samples of the individual kinematic variables can be accessed as the columns of the \code{dstarlnu_samples} object.
We can now show correlations of the kinematic variables by plotting 2D histograms, e.g. $q^2$ vs $\cos\theta_\ell$:
\begin{lstlisting}[%
language=iPython,%
caption={%
Plot a 2D histogram for samples of the $\bar{B}\to D^*(\to D\pi)\mu^-\bar\nu$ PDF
in the variables $q^2$ and $\cos(\theta_\ell)$.
The samples are obtained from \reflst{simulation:sample-4D}, and the output is shown in the right plot of \refout{simulation:plot+histogram}.
\label{lst:simulation:histogram-2D}\index{eos.Plotter!plot}
}%
]
plot_args = {
'plot': {
'x': { 'label':r'$q^2$', 'unit': r'$\textnormal{GeV}^2$', 'range': [0.0, 10.50]},
    'y': { 'label':r'$\cos(\theta_\ell)$', 'range': [-1.0, +1.0]}
},
'contents': [
{
'label': r'samples ($\ell=\mu$)',
'type': 'histogram2D',
'data':{
'samples': dstarlnu_samples[:, (0,1)]
},
'bins': 40
},
]
}
eos.plot.Plotter(plot_args).plot()
\end{lstlisting}
\begin{joutput}[t]
\centering
\includegraphics[width=0.48\linewidth]{figures/simulation/plot+histogram-1D.pdf}
\includegraphics[width=0.48\linewidth]{figures/simulation/histogram-2D.pdf}
\caption{%
Left: Distribution of $B\to D\ell\nu_\ell$ events for $\ell=\mu, \tau$, as implemented in \texttt{EOS}\xspace (solid lines)
and as obtained from Markov Chain Monte Carlo importance sampling (histograms).
The samples are produced from \reflst{simulation:sample-1D}, and the plot is produced
by \reflst{plot-ex:simulation:plot+histogram-1D}.
Right: 2D histogram of the $\bar{B}\to D^*(\to D\pi)\mu^-\bar\nu$ PDF in the variables $q^2$ and $\cos(\theta_\ell)$.
This output is produced by the code shown in \reflst{simulation:histogram-2D}.
}
\label{out:simulation:plot+histogram}
\end{joutput}
\section{Introduction}\label{sec:intro}
Theories of beyond standard model physics allow for the production of ultralight bosons that could constitute a portion or all of dark matter \cite{arvanitaki2015discovering}. If such ultralight bosons exist, they could appear around rotating black holes due to quantum fluctuations \cite{Brito:2015oca}. They would then scatter off and extract angular momentum from these black holes, and form macroscopic clouds, i.e. boson condensates, through a superradiance process \cite{Brito:2015oca}. The energy structure of the clouds resembles that of a hydrogen atom, earning boson clouds the name ``gravitational atoms'' in the sky. After the black hole spin decreases below a threshold, the superradiance process stops and the cloud depletes over time mainly through annihilation of bosons into gravitons, which generates quasi-monochromatic, long-duration gravitational waves\xspace when the self-interaction for bosons is weak.
The signal would also have a small, positive frequency variation, known as a ``spin-up'', that would arise from the classical contraction of the cloud over time as it loses mass \cite{Brito:2015oca}. The values of the spin-up depend on whether we consider scalar \cite{Brito:2017zvb}, vector \cite{baryakhtar2017black,siemonsen2020gravitational}, or tensor \cite{Brito:2020lup} boson clouds, but in this search, we consider scalar boson cloud signals with very small spin-ups, of maximum $\mathcal{O}(10^{-12})$ Hz/s.
Recently, the effect on gravitational-wave\xspace emission from boson self-interactions has been studied for the scalar case \cite{2021PhRvD.103i5019B}, showing that the signal can be significantly affected, including the magnitude of the spin-up, as the self-interaction increases.
The gravitational-wave\xspace signal frequency depends primarily on the mass of the boson, and weakly on the mass and spin of the black hole. Recently, exclusion regions of the black hole / scalar boson mass parameter space were calculated using upper limits from an all-sky search for spinning neutron stars in LIGO/Virgo data from the second observing run (O2)~\cite{palomba2019direct}. Furthermore, a search for boson clouds from Cygnus X-1 was performed with a hidden Markov model (HMM) method \cite{isi2019directed} on the same data set, and disfavored particular boson masses for this object \cite{Sun:2019mqb}. It has been suggested \cite{PhysRevD.102.063020} that current methods for all-sky searches, such as the one used in \cite{palomba2019direct}, could suffer a significant sensitivity penalty in the extreme case of a population of $10^8$ black holes in the Milky Way, due to the superposition of many signals in a small frequency range (as a consequence of the signal's frequency dependence at leading order on the boson mass). A detailed study \cite{pierini2021inprep} has quantified this effect, showing that the actual average loss of sensitivity is at most of about 15$\%$ for a signal ``density'' of 1-3 signals per frequency bin, while it rapidly reduces, and becomes negligible, for both lower and higher densities.
In addition to quasi-monochromatic gravitational waves\xspace emitted by boson clouds around isolated black holes, it is also possible to probe the existence of boson clouds in binary black hole coalescences. One avenue requires measurements of spin: in principle, boson clouds will extract mass and spin from black holes, resulting in low spin values for black holes compared to those in a universe without boson clouds; in practice, individual black hole spins are hard to measure \cite{Vitale:2014mka,Purrer:2017uch}, and the current spin distribution of black holes depends on both the mass of boson clouds and the distribution of spins when the black holes were born \cite{Belczynski:2017gds,Gerosa:2018wbw}. It is therefore interesting to hierarchically combine spin measurements from various black hole mergers to obtain constraints on boson cloud/spin interactions \cite{Ng:2019jsx,Ng:2020ruv}.
Additionally, the presence of a boson cloud will induce a change in the waveforms used in compact binary searches, e.g. new multipole moments and tidal effects (parameterized as Love numbers) \cite{baumann2019probing,Yang:2017lpm}, and such differences in binary black hole mergers may actually be detectable depending on the boson mass and field strength \cite{Choudhary:2020pxy}.
It has also been proposed to combine multi-band observations of black hole mergers and boson cloud signals when future detectors are online \cite{Ng:2020jqd}, and to search for a stochastic gravitational-wave\xspace background from scalar and vector boson cloud signals \cite{Brito:2017wnc,Tsukada:2018mbp,Tsukada:2020lgt}. Complementary methods \cite{pierce2019dark,Miller:2020vsl} and searches \cite{guo2019searching,LIGOScientific:2021odm} for ultralight scalar and vector bosons have also been developed that use the gravitational-wave\xspace interferometers as particle detectors, which provide additional constraints on the boson mass.
Ultralight bosons can therefore have different effects on different types of gravitational-wave\xspace signals.
This paper presents results from the first all-sky search tailored to gravitational waves\xspace from depleting scalar boson clouds around black holes using the Advanced LIGO~\citep{2015CQGra..32g4001L} data in the third observing run (O3). Although the expected gravitational-wave\xspace signal is almost monochromatic, it could come from anywhere in the sky; thus, the Doppler modulations from the relative motion of the Earth and the source, of $\mathcal{O}(10^{-4}f)$, where $f$ is the frequency of the signal, are a priori unknown. A fully coherent search, in which we take a single Fourier Transform of the whole data set and look for peaks in the power spectrum after demodulating the data, is therefore impossible to perform in each sky direction because of limited computational power. Indeed, the number of sky positions to search over scales with the square of both the Fourier Transform length and the frequency, and would amount to $\mathcal{O}(10^{20})$ at high frequencies \cite{Astone:2014esa}. This means that we must employ semi-coherent methods, in which we break the data into smaller chunks in time that are analyzed coherently, and then combined incoherently,
to keep the computational cost under control while retaining the desired sensitivity \cite{riles2017recent,sieniawska2019continuous}.
Moreover, we adopt a multi-resolution approach, equivalent to considering several different Fourier Transform lengths \cite{d2018semicoherent}, in order to be sensitive towards possible unpredicted frequency fluctuations of the gravitational-wave\xspace signal.
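The quoted size of the Doppler modulation follows from the Earth's orbital motion; a back-of-the-envelope check (rounded constants):

```python
v_orb = 2.978e4   # Earth's mean orbital speed, m/s
c = 2.998e8       # speed of light, m/s

# fractional Doppler shift ~ v/c ~ 1e-4, so a signal at frequency f
# is modulated by ~ 1e-4 * f, as stated in the text
doppler_fraction = v_orb / c
```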
This paper is organized as follows: in section \ref{sec:model}, we describe our model for a gravitational-wave\xspace signal resulting from a depleting scalar boson cloud around a rotating black hole; in section \ref{sec:method}, we explain our method to search for such signals; in section \ref{sec:data}, we provide information on the data sets we use in our analysis; in section \ref{sec:results}, we present the results of the search and our upper limits, including their astrophysical interpretations in terms of constraints in the boson/black hole mass plane and on the maximum detectable distance; finally, in section \ref{sec:conclusions}, we discuss prospects for future work. Appendices contain details on the follow-up of selected outliers.
\section{The signal} \label{sec:model}
An isolated black hole is born with a defining mass, spin and charge \cite{thorne2000gravitation}. Ultralight boson particles around a black hole will scatter off it and extract energy and momentum from it if the so-called \emph{superradiant} condition is satisfied, i.e. $\omega<m\Omega$ \cite{Bekenstein:1973mi,Brito:2015oca} where $\omega$ is the angular frequency of the boson (related to the boson mass linearly), $m$ is the azimuthal quantum number in the hydrogen-like gravitational atom, and $\Omega$ is the angular frequency of the rotating black hole's outer horizon. Because the bosons have a finite mass, they become gravitationally bound to the black hole, which allows for successive scatterings and a build-up of a macroscopic cloud, extracting up to about $10\%$ of the black hole mass \cite{Brito:2015oca}. This process is maximized when the particle's reduced Compton wavelength is comparable to the black hole radius:
\begin{equation}
\frac{\hbar c}{m_b}\simeq \frac{GM_{\rm BH}}{c^2},
\end{equation}
where $m_b$ is the boson mass-energy, $M_{\rm BH}$ is the black hole mass, $\hbar$ is the reduced Planck's constant, and $c$ is the speed of light. The typical time scale of the superradiance phase is \cite{Brito:2017zvb}
\begin{equation}
\tau_{\rm inst} \approx 27\left(\frac{M_\mathrm{BH}}{10M_{\odot}}\right)\left(\frac{\alpha}{0.1}\right)^{-9}\left(\frac{1}{\chi_i}\right)~\mathrm{days},
\end{equation}
where $\chi_i$ is the black hole initial dimensionless spin, and
\begin{equation}
\alpha = \frac{G M_{\rm BH}}{c^3} \frac{m_b}{\hbar}, \label{eq:alpha}
\end{equation}
is the fine-structure constant in the gravitational atom.
Once the superradiant condition is no longer satisfied, the growth stops, and the cloud begins to give off energy primarily via annihilation of particles in the form of gravitational waves\xspace \cite{isi2019directed}, over a typically much longer time scale which, for $m=1$ and in the limit $\alpha \ll 0.1$, is given by
\begin{equation}
\tau_{\rm gw} \approx 6.5 \times 10^4\left(\frac{M_\mathrm{BH}}{10M_{\odot}}\right)\left(\frac{\alpha}{0.1}\right)^{-15}\left(\frac{1}{\chi_i}\right)~\mathrm{years}.
\label{eq:taugw}
\end{equation}
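A numerical illustration of the two time scales, using the approximate analytic prefactors quoted above (a sketch for orientation, not a substitute for the numerical results of \cite{Brito:2017zvb}):

```python
def tau_inst_days(m_bh_msun, alpha, chi_i):
    """Superradiant growth time scale, in days."""
    return 27.0 * (m_bh_msun / 10.0) * (alpha / 0.1) ** -9 / chi_i

def tau_gw_years(m_bh_msun, alpha, chi_i):
    """Gravitational-wave depletion time scale (m = 1, alpha << 0.1), in years."""
    return 6.5e4 * (m_bh_msun / 10.0) * (alpha / 0.1) ** -15 / chi_i

# for a 10 M_sun black hole with alpha = 0.1 and chi_i = 0.9, growth takes
# ~30 days while depletion takes ~7e4 years, so an existing cloud is far
# more likely to be depleting than growing
t_grow = tau_inst_days(10.0, 0.1, 0.9)
t_deplete = tau_gw_years(10.0, 0.1, 0.9)
```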
For scalar bosons, these clouds have a much shorter growth time scale compared to the time scale for them to deplete \cite{Brito:2017zvb}; thus, if they exist now, they are more likely to be depleting rather than growing. The gravitational-wave\xspace emission is at a frequency $f_\text{gw}$ \cite{palomba2019direct}:
\begin{eqnarray}
f_\text{gw} & \simeq & 483\,{\rm Hz} \left(\frac{m_\text{b}}{10^{-12}~\text{eV}}\right) \nonumber\\
&& \times \left[1-7\times 10^{-4}\left(\frac{M_\text{BH}}{10M_{\odot}}\frac{m_\text{b}}{10^{-12}~\text{eV}}\right)^2\right].
\label{eq:fgw}
\end{eqnarray}
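Evaluating Eqs.~\eqref{eq:alpha} and \eqref{eq:fgw} for a representative system (rounded physical constants; a sketch for orientation only):

```python
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
hbar = 1.0546e-34    # reduced Planck constant, J s
eV = 1.602e-19       # electron volt, J
M_sun = 1.989e30     # solar mass, kg

def gravitational_alpha(m_bh_msun, m_b_ev):
    """Fine-structure constant of the gravitational atom, Eq. (eq:alpha)."""
    return (G * m_bh_msun * M_sun / c**3) * (m_b_ev * eV / hbar)

def f_gw_hz(m_bh_msun, m_b_ev):
    """Gravitational-wave frequency, Eq. (eq:fgw), in Hz."""
    x = (m_bh_msun / 10.0) * (m_b_ev / 1e-12)
    return 483.0 * (m_b_ev / 1e-12) * (1.0 - 7e-4 * x**2)

# a 10 M_sun black hole with a 1e-12 eV boson: alpha ~ 0.075, f_gw ~ 482.7 Hz
alpha = gravitational_alpha(10.0, 1e-12)
f = f_gw_hz(10.0, 1e-12)
```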
In fact, this frequency slowly increases in time by an amount that depends on the boson self-interaction constant $F_b$ \cite{2021PhRvD.103i5019B}. As in \cite{2021PhRvD.103i5019B}, we consider a scalar boson that, in addition to a non-zero mass, has a quartic interaction with an extremely small coupling $\lambda$, and characterize this coupling in terms of the parameter $F_b:=m_b/\sqrt{\lambda}$. In the context of an axion-like particle, $F_b$ would roughly correspond to the symmetry breaking energy scale.
Specifically, if $F_b \gtrsim 2 \times 10^{18}$~GeV, the spin-up is dominated by the depletion of the cloud through particle annihilation and is given by
\begin{equation}
\label{eq:fanni}
\dot{f}_\mathrm{gw}\approx 7\times 10^{-15} \left(\frac{m_\text{b}}{10^{-12}~\text{eV}}\right)^2\left(\frac{\alpha}{0.1}\right)^{17}~\mathrm{Hz/s}.
\end{equation}
With this condition on $F_b$, our search probes energies at the Planck scale, and is sensitive to the QCD axion with a mass of $\sim 10^{-13}$--$10^{-12}$ eV \cite{2021PhRvD.103i5019B}.
However, if the bosons have a non-negligible self-interaction (i.e. for smaller $F_b$), two other spin-up terms, one due to energy level transitions and another due to the variation of the self-interaction energy, appear. The former is given by
\begin{equation}
\label{eq:fdot_tr}
\dot{f}_\mathrm{tr}\approx 10^{-10} \left(\frac{m_\text{b}}{10^{-12}~\text{eV}}\right)^2\left(\frac{\alpha}{0.1}\right)^{17}\left(\frac{10^{17}\mathrm{GeV}}{F_b}\right)^4~\mathrm{Hz/s},
\end{equation}
which is dominant in the intermediate self-interaction regime, for $8.5 \times 10^{16}(\alpha/0.1)~\mathrm{GeV} \lesssim F_b \lesssim 2 \times 10^{18}~\mathrm{GeV}$. In this regime, the signal duration significantly shortens, thus reducing the chance of detection. In the strong self-interaction regime, when $F_b \lesssim 8.5 \times 10^{16}(\alpha/0.1)~\mathrm{GeV}$, the spin-up can be parametrized as
\begin{equation}
\label{eq:fdot_se}
\dot{f}_\mathrm{se}\approx 10^{-10} \left(\frac{m_\text{b}}{10^{-12}~\text{eV}}\right)^2\left(\frac{\alpha}{0.1}\right)^{17}\left(\frac{10^{17}\mathrm{GeV}}{F_b}\right)^6~\mathrm{Hz/s},
\end{equation}
and the signal strength rapidly decreases with increasing interaction, making the detection of annihilation signals essentially impossible for current detectors.
In Sec.~\ref{sec:astro}, we will briefly discuss the implication of our search results in relation to the value of $F_b$.
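The three spin-up regimes above can be combined into a single sketch that selects the dominant term from the value of $F_b$; the regime boundaries and prefactors follow Eqs.~(\ref{eq:fanni}), (\ref{eq:fdot_tr}), and (\ref{eq:fdot_se}), while the function name is ours:

```python
def fdot_gw(m_b_ev, alpha, F_b_gev):
    """Dominant spin-up term in Hz/s, selected by the self-interaction scale F_b."""
    mb = m_b_ev / 1e-12          # boson mass in units of 1e-12 eV
    a = alpha / 0.1
    if F_b_gev >= 2e18:          # weak self-interaction: annihilation dominates
        return 7e-15 * mb**2 * a**17
    elif F_b_gev >= 8.5e16 * a:  # intermediate: energy level transitions
        return 1e-10 * mb**2 * a**17 * (1e17 / F_b_gev) ** 4
    else:                        # strong: self-interaction energy variation
        return 1e-10 * mb**2 * a**17 * (1e17 / F_b_gev) ** 6
```

Note the steep growth of the spin-up toward small $F_b$, which is why strongly self-interacting clouds fall outside the narrow spin-up range covered by this search.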
These signals have lower amplitudes than those from compact binary coalescences \cite{PhysRevD.102.063020}, with initial values (for $\alpha \ll 0.1$)
\begin{equation}
\label{eq:h0}
h_0 \approx 6 \times 10^{-24} \left(\frac{M_\mathrm{BH}}{10 M_\odot}\right)
\left(\frac{\alpha}{0.1}\right)^7 \left(\frac{1\,\rm kpc}{D}\right)
\left({\chi_i - \chi_c}\right),
\end{equation}
where
\begin{eqnarray}
\chi_c \approx \frac{4 \alpha}{4\alpha^2 + 1},
\end{eqnarray}
is the black hole spin at which superradiance saturates, and $D$ is the source distance.
The emitted signal amplitude decreases in time as the clouds are depleted, according to
\begin{equation}
h(t) = \frac{h_0}{1+\frac{t}{\tau_{\rm gw}}}.
\label{eq:hoft}
\end{equation}
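Equations~(\ref{eq:h0}) and (\ref{eq:hoft}) can be sketched numerically as follows (a rough illustration with our own function names; $\tau_{\rm gw}$ is the gravitational-wave damping time defined earlier in the text):

```python
def chi_c(alpha):
    """Black hole spin at which superradiance saturates."""
    return 4.0 * alpha / (4.0 * alpha ** 2 + 1.0)

def h0(M_bh_msun, alpha, D_kpc, chi_i):
    """Initial strain amplitude (Eq. h0), valid for alpha << 0.1."""
    return (6e-24 * (M_bh_msun / 10.0) * (alpha / 0.1) ** 7
            / D_kpc * (chi_i - chi_c(alpha)))

def h_of_t(h0_val, t, tau_gw):
    """Amplitude decay as the cloud depletes (Eq. hoft); t, tau_gw in same units."""
    return h0_val / (1.0 + t / tau_gw)
```

For example, a $10\,M_\odot$ black hole at 1~kpc with $\chi_i=0.9$ and $\alpha=0.1$ yields $h_0 \approx 3\times 10^{-24}$, and the amplitude halves after one damping time.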
The equations above are analytic approximations to the true behavior of a gravitational wave\xspace originating from boson clouds. When we consider $\alpha \sim 0.1$, we must account for the difference between the numerical and analytic solutions, which amounts to a factor of $\sim 3$ in energy at the highest $\alpha$ considered here, $\alpha\sim0.15$. We obtain this factor of $\sim 3$ by comparing the black and red curves in Fig. 2 of \cite{Brito:2017zvb} at $\alpha\sim 0.15$.
\section{Search method} \label{sec:method}
The semi-coherent method we employ is based on the Band Sampled Data (BSD) framework \cite{piccinibsd}, which is being used in an increasing number of continuous-wave searches, due to its flexibility and computational efficiency \cite{piccinni2020directed,abbott2021snr,abbott2020gravitational}. BSD files are data structures that store a reduced analytic strain-calibrated time series in 10-Hz/one-month chunks. To construct BSD files, we take a Fourier transform of a chunk of time-series strain data, extract a 10-Hz band, keep only the positive frequency components, and inverse Fourier transform to obtain the reduced analytic signal.
The main steps of the analysis pipeline are schematically shown in Fig. \ref{fig:pipescheme} and are summarized below.
\begin{figure}[htb]
\centering
\includegraphics[width=\columnwidth]{figures/pipescheme.png}
\caption{Scheme of the search pipeline.}
\label{fig:pipescheme}
\end{figure}
\subsection{Peakmap construction}
Our analysis starts with a set of BSD files covering the whole third observing run and frequencies between 20 Hz and 610 Hz. The maximum frequency is chosen to ensure the computational cost of the analysis is reasonable, and is consistent with the fact that the most relevant part of the accessible parameter space of the black hole-boson system corresponds to the intermediate frequencies. Each of these time series is divided into chunks of duration
\begin{equation}
T_{\rm FFT}=\frac{1}{\Omega_{\rm rot}}\sqrt{\frac{c}{2f_{\rm 10}R_{\rm rot}}},
\label{eq:tfft}
\end{equation}
where $f_{\rm 10}$ is the maximum frequency of each corresponding 10-Hz band, $\Omega_{\rm rot}=2\pi/86164.09~\mathrm{s}^{-1}$ is the Earth sidereal angular frequency, and $R_{\rm rot}$ is the Earth rotational radius, which we conservatively take as that corresponding to the detector at the lowest latitude ($R_{\rm rot} = 5.5 \times 10^6$~m for the Livingston detector). This $T_{\rm FFT}$ length guarantees that, when we take the Fourier transform of a chunk, the power spread of any possible continuous-wave\xspace signal, caused by the motion of the Earth with respect to the source, is fully contained in one frequency bin of width $\delta f=1/T_{\rm FFT}$. In Fig.~\ref{fig:tfft}, the chunk duration is shown as a function of frequency.
\begin{figure}[htb]
\centering
\includegraphics[width=\columnwidth]{figures/tfft.png}
\caption{Chunk duration $T_{\rm FFT}$ as a function of frequency. The $T_{\rm FFT}$ is fixed within each 10-Hz band.}
\label{fig:tfft}
\end{figure}
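Equation~(\ref{eq:tfft}) and the derived bin widths can be evaluated with a short script; this is a sketch in which the total observation time is an assumed round value of one year, not the exact O3 span:

```python
import math

C = 2.998e8                          # speed of light [m/s]
OMEGA_ROT = 2.0 * math.pi / 86164.09 # Earth sidereal angular frequency [rad/s]
R_ROT = 5.5e6                        # rotational radius at Livingston's latitude [m]
T_OBS = 3.154e7                      # ~1 year of observation [s] (assumed value)

def t_fft(f10):
    """Chunk duration from Eq. (tfft); f10 = top frequency of the 10-Hz band [Hz]."""
    return (1.0 / OMEGA_ROT) * math.sqrt(C / (2.0 * f10 * R_ROT))

T = t_fft(100.0)
df = 1.0 / T                      # frequency bin width [Hz]
dfdot = 1.0 / (2.0 * T_OBS * T)   # spin-up bin width [Hz/s], as in Sec. III B
```

For the 90--100~Hz band this gives $T_{\rm FFT}$ of several thousand seconds, decreasing toward higher frequencies as $f_{10}^{-1/2}$.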
For each of these chunks, the square modulus of the Fourier transform is computed (by means of the FFT algorithm) and divided by an estimate of the average spectrum over the same time window. The resulting quantity has a mean value of one independently of the frequency (for this reason it is called the {\it equalized} spectrum) and takes values significantly different from one when narrow frequency features are present in the data. We select prominent peaks in the equalized spectra, defined as time-frequency pairs that correspond to local maxima and have a magnitude above a threshold of $\theta = 2.5$ \cite{Astone:2014esa}. The collection of these time-frequency peaks forms the so-called {\it peakmap}; see e.g. \cite{Astone:2014esa} for more details.
\subsection{Doppler effect correction}
We build a suitable grid in the sky such that, after the Doppler effect correction for a given sky direction, any residual Doppler effect always produces an error in frequency within half a frequency bin \cite{Astone:2014esa}. For each sky direction in the grid, the peaks in the peakmap are shifted in order to compensate for the time-dependent Doppler modulation, according to the relation
\begin{equation}
f_{0,\rm{t_k}}=\frac{f_{\rm obs,t_k}}{1+\frac{\vec{v}_{\rm t_k} \cdot \hat{n}}{c}},
\label{eq:doppcorr}
\end{equation}
where $f_{\rm obs,t_k}$ is an observed frequency peak at the time $t_k$, $\vec{v}_{\rm t_k}$ is the detector's velocity, and $\hat{n}$ is the unit vector identifying the sky direction.
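The correction of Eq.~(\ref{eq:doppcorr}) can be sketched as a vectorized operation over all peaks; the function name and the toy velocity below are ours:

```python
import numpy as np

C = 2.998e8  # speed of light [m/s]

def doppler_correct(f_obs, v_det, n_hat):
    """Remove the Doppler modulation from observed peak frequencies (Eq. doppcorr).

    f_obs : array of peak frequencies at times t_k [Hz]
    v_det : (N, 3) array of detector velocities at the same times [m/s]
    n_hat : unit vector pointing to the assumed sky direction
    """
    return np.asarray(f_obs) / (1.0 + (np.asarray(v_det) @ n_hat) / C)

# Toy example: a 100 Hz peak observed while moving toward the source at 30 km/s
f0 = doppler_correct([100.0], [[3.0e4, 0.0, 0.0]], np.array([1.0, 0.0, 0.0]))
```

The Earth's orbital velocity ($\sim 10^{-4}c$) shifts a 100~Hz peak by about 10~mHz, many bins at the resolutions used here, which is why the sky grid must keep the residual shift below half a bin.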
As we have described in Sec.~\ref{sec:model}, the signal has an intrinsic spin-up associated with the cloud depletion. Although we do not apply an explicit correction for the spin-up, the analysis retains most of the signal power, with a maximum sensitivity loss of less than a few percent\footnote{In principle, the maximum sensitivity loss would be about 36$\%$, when the corrected signal frequency falls midway between two consecutive frequency bins. In practice, however, the application of the moving average on the peakmaps, see Sec.~\ref{sec:peakmav}, allows us to recover part of the lost signal power, reducing the maximum loss to about 19$\%$ (when applying a moving average of two bins, i.e. a window with $W=2$).}, as long as the corresponding frequency variation over the full observation time $T_\mathrm{obs}$ is confined within $\pm 1/2$ of the spin-up bin width $\delta \dot{f}$, given by
\begin{equation}
\delta \dot{f} = \frac{1}{2T_\mathrm{obs} T_\mathrm{FFT}}.
\end{equation}
The $\delta \dot{f}$ value is plotted as a function of the frequency in Fig.~\ref{fig:spindownstep}.
\begin{figure}[htb]
\centering
\includegraphics[width=\columnwidth]{figures/spindownstep.png}
\caption{Bin size of the frequency time derivative, $\delta \dot{f}$, as a function of frequency, given the full observing time of O3 and the $T_\mathrm{FFT}$ chosen for each 10-Hz band. The search by default covers a frequency time derivative range of $\dot{f} \in [-\delta \dot{f}/2, \delta \dot{f}/2]$. This range contributes to setting constraints on the portion of the boson mass/black hole mass plane explored in the search.}
\label{fig:spindownstep}
\end{figure}
\subsection{Peakmap projection}
Once we have applied the Doppler correction, we project the peakmap onto the frequency axis and select the most significant frequencies for further analysis. Indeed, if a sufficiently strong monochromatic signal arrives from a given direction, its associated peaks align in frequency and appear as a prominent peak in the projected peakmap. The statistics of the peakmap are described in detail in \cite{Astone:2014esa}, so here we only provide a brief review. In the absence of signals, the probability $p_0$ of selecting a peak depends on the noise distribution. In the case of Gaussian noise, it is given by
\begin{equation}
p_0=e^{-\theta}-e^{-2\theta}+\frac{1}{3}e^{-3\theta},
\end{equation}
where $\theta=2.5$ is the threshold we apply to construct the peakmap.
On the other hand, if a signal with spectral amplitude (in units of equalized noise)
\begin{equation}
\lambda=\frac{4|\tilde{h}(f)|^2}{T_{\rm FFT}S_n(f)},
\end{equation}
where $\tilde{h}(f)$ is the signal Fourier Transform, and $S_n(f)$ is the detector noise power spectral density, is present, the corresponding probability $p_{\lambda}$ of selecting a signal peak, for weak signals with respect to the noise level, is given by
\begin{equation}
p_{\lambda}=p_0 + \frac{\lambda}{2}\theta \left(e^{-\theta}-2e^{-2\theta}+e^{-3\theta}\right).
\end{equation}
For well-behaved noise, the number count in a given bin of the peakmap projection follows a binomial distribution with expectation value $\mu=N p_0$, where $N$ is the number of FFTs, and standard deviation $\sigma=\sqrt{N p_0 (1-p_0)}$. In the presence of a signal, the expected number of peaks at the signal frequency (after the Doppler correction) is $N p_{\lambda}$. It is then useful to introduce the critical ratio (CR):
\begin{equation}
\rm{CR}=\frac{n-\mu}{\sigma},
\end{equation}
where $n$ is the actual number of peaks in a given frequency bin. In the limit of large $N$ (in practice, greater than $\sim 30$), the CR closely follows a Gaussian distribution with zero mean and unit standard deviation.
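The peakmap statistics above can be collected in a short sketch, using the threshold and formulas given in the text (the function names are ours):

```python
import math

THETA = 2.5  # peak selection threshold

def p0(theta=THETA):
    """Probability of selecting a peak in Gaussian noise."""
    return math.exp(-theta) - math.exp(-2.0 * theta) + math.exp(-3.0 * theta) / 3.0

def p_lambda(lam, theta=THETA):
    """Selection probability with a weak signal of spectral amplitude lam."""
    return p0(theta) + 0.5 * lam * theta * (
        math.exp(-theta) - 2.0 * math.exp(-2.0 * theta) + math.exp(-3.0 * theta))

def critical_ratio(n, N, theta=THETA):
    """CR of a frequency bin containing n peaks out of N FFTs."""
    mu = N * p0(theta)
    sigma = math.sqrt(N * p0(theta) * (1.0 - p0(theta)))
    return (n - mu) / sigma
```

For $\theta=2.5$ one finds $p_0 \simeq 0.076$, i.e. roughly one bin in thirteen is selected in pure Gaussian noise.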
\subsection{Peakmap moving averages}
\label{sec:peakmav}
The signal emitted by a boson cloud is not exactly monochromatic.
In order to deal with the uncertainty in the signal morphology, we apply a sequence of moving averages (MA) with window widths $W$ varying from two to ten frequency bins, starting from the Doppler-corrected peakmap projection. This procedure can be shown to be equivalent to building peakmap projections with an effective FFT duration $\approx T_{\rm FFT}/W$, while being computationally much more efficient. While these moving averages reduce the sensitivity to signals with a characteristic coherence time larger than $T_{\rm FFT}$, e.g. purely monochromatic signals, they provide better sensitivity to signals with a typical coherence time comparable to the effective FFT duration \cite{d2018semicoherent}.
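A minimal sketch of the moving-average step, using NumPy (the normalization choice is ours):

```python
import numpy as np

def moving_averages(projection, widths=range(2, 11)):
    """Moving averages of window W = 2..10 bins over a peakmap projection,
    roughly equivalent to projections with effective FFT duration T_FFT / W."""
    proj = np.asarray(projection, dtype=float)
    return {W: np.convolve(proj, np.ones(W) / W, mode="same") for W in widths}
```

A peak concentrated in one bin is smeared over $W$ bins with its total count preserved, which is exactly the behavior expected for a signal whose frequency wanders within the window.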
\subsection{Candidate selection and coincidences}
\label{sec:candidates}
We use the CR as a detection statistic to identify potentially interesting outliers. Outliers are uniformly selected across the parameter space by taking, for each sky position and each 0.05-Hz frequency band, the two points with the highest CR in each peakmap projection, regardless of the actual CR values, provided the CR is above a minimum value of 3.8.
In the next step, we identify coincident pairs among outliers found in the Hanford and Livingston data, corresponding to the same $W$, by using a dimensionless coincidence window of 3. In other words, two outliers, one from Hanford and the other from Livingston, are considered coincident if the dimensionless distance
\begin{equation}
d = \sqrt{\left(\frac{\Delta f}{\delta f}\right)^2+\left(\frac{\Delta \lambda}{\delta \lambda}\right)^2+ \left(\frac{\Delta \beta}{\delta \beta}\right)^2}\le 3,
\end{equation}
where $\Delta f,~\Delta \lambda,~\Delta \beta$ are the differences between parameter values of the outliers found in Hanford (H1) and Livingston (L1), and $\delta f,~\delta \lambda,~\delta \beta$ are the corresponding bin sizes\footnote{The sky position bin sizes in general are different for two outliers identified at different sky positions, so the average among the corresponding bin sizes is actually used.}. The coincidence window of 3 is chosen according to the studies carried out using simulated signals and is consistent with the choice in standard all-sky searches for neutron stars \cite{pisarski2019all,abbott2017all}.
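The coincidence criterion can be sketched as follows (a toy implementation; each outlier tuple holds the frequency and the two ecliptic sky coordinates):

```python
import math

def coincident(out_h1, out_l1, bins, window=3.0):
    """H1-L1 coincidence test on the dimensionless distance d <= window.

    out_h1, out_l1 : (f, lambda, beta) parameter tuples of the two outliers
    bins           : (df, dlambda, dbeta) bin sizes (averaged if they differ)
    """
    d = math.sqrt(sum(((a - b) / delta) ** 2
                      for a, b, delta in zip(out_h1, out_l1, bins)))
    return d <= window
```

Two outliers one frequency bin apart, with matching sky coordinates, give $d=1$ and pass; a four-bin offset gives $d=4$ and fails.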
\subsection{Post-processing}
\label{sec:postproc}
Coincident outliers are subject to a sequence of post-processing steps in order to discard those incompatible with an astrophysical signal.
The first veto checks whether any outliers are compatible with known narrow-band instrumental disturbances, also known as {\it noise lines}. An outlier is considered to be caused by a noise line, and thus discarded, if its Doppler band, defined by $\Delta f_{\rm dopp}=\pm 10^{-4}f$, where $f$ is the outlier frequency, intersects the noise line.
The next is a consistency veto, in which a pair of coincident outliers is discarded if their CRs are not compatible with the detectors' noise levels at the corresponding frequency. Specifically, we veto coincident outliers for which the CR in one detector, after normalization by the detector's power spectral density, differs by more than a factor of 5 from that in the other detector \cite{abbott2017all,palomba2019direct}. This choice, also used in standard all-sky searches for continuous waves from neutron stars, is motivated by the desire to eliminate an outlier only if the discrepancy is truly significant and the CR is large in at least one of the two detectors (a real astrophysical signal is not expected to behave that way).
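The two vetoes above can be sketched as follows (simplified; in the actual analysis the CRs are first normalized by each detector's power spectral density, and the noise-line list comes from detector characterization):

```python
def line_veto(f_outlier, noise_lines):
    """True if the outlier's Doppler band (+/- 1e-4 f) intersects a known
    noise line, given as a list of (f_lo, f_hi) intervals in Hz."""
    lo, hi = f_outlier * (1.0 - 1e-4), f_outlier * (1.0 + 1e-4)
    return any(not (hi < f_lo or lo > f_hi) for f_lo, f_hi in noise_lines)

def consistency_veto(cr_h1, cr_l1, factor=5.0):
    """True if the (normalized) CRs of a coincident pair differ by more than
    the given factor between the two detectors."""
    hi, lo = max(cr_h1, cr_l1), min(cr_h1, cr_l1)
    return hi / lo > factor
```

A 100~Hz outlier thus has a $\pm 0.01$~Hz Doppler band, and any catalogued line overlapping that band removes it from further consideration.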
The previous steps are all applied to coincident outliers separately for each value of the MA window width $W$. In case of outliers coincident across different $W$ values, only the one with the highest CR is kept\footnote{This restriction reduces the computational cost with only a small loss in sensitivity.}.
\subsection{Follow-up}
Outliers with $W=1$, the case in which we do not apply moving averages, could be due to monochromatic signals or signals with very small fluctuations in frequency, while outliers with $W>1$ are more compatible with signals characterized by a larger degree of frequency fluctuation. These two sets of candidates are then followed up with different methods, one based on the Frequency-Hough algorithm, and used e.g. in \cite{pisarski2019all}, and the other using the Viterbi method, see e.g. \cite{Sun:2019mqb}. Both methods are briefly explained in Appendix \ref{sec:fumethods}.
\section{Data}\label{sec:data}
We use data collected by the Advanced LIGO gravitational-wave\xspace detectors over the O3 run, which
lasted from April 1st, 2019 to March 27th, 2020, with a one-month break in data collection in October 2019. The duty cycles of the two detectors, H1 and L1, are $\sim 76\%$ and $\sim 77\%$, respectively, during O3.
In the case of a detection, the calibration uncertainties in the detector strain data stream could impact the estimates of the boson properties. Without a detection, these uncertainties could affect the estimated instrument sensitivity and inferred upper limits on astrophysical source properties.
The analysis uses the ``C01'' version of calibrated data, which has estimated maximum uncertainties (68\% confidence interval) of $\sim 7\%$/$\sim 11 \%$ in magnitude and $\sim 4 \deg$/$\sim 9 \deg$ in phase, over the first/second halves of O3~\citep{Sun:2020wke,Sun:calibrationO3b}.
The frequency-dependent uncertainties vary over the course of a run but do not change substantially. The time-dependent variations could lead to errors in opposite directions at a given frequency during different time periods. In addition, the calibration errors and uncertainties differ between the two LIGO sites. By integrating the data over the whole run, we expect the impact of the time- and frequency-dependent calibration uncertainties to cancel out to some extent, with an overall impact on the inferred upper limits within a level of $\sim 2\%$. Thus we do not explicitly account for the calibration uncertainties in our analysis.
Due to the presence of a large number of transient noise artifacts, a \emph{gating} procedure \cite{gatingDocument,y_stochasticGatingDocument} has also been applied to the LIGO data. This procedure applies an inverse Tukey window to the LIGO data at times when the root-mean-square value of the whitened strain channel in the band of 25--50~Hz or 70--110~Hz exceeds a certain threshold. Only $0.4\%$ and $\sim 1\%$ of the data are removed for Hanford and Livingston, respectively, and the improvement in data quality from applying such gating is significant, as seen in the stochastic and continuous gravitational-wave analyses in O3 \cite{Abbott:2021xxi}.
\section{Results}\label{sec:results}
In this section, we present the results of the analysis described in Sec.~\ref{sec:method}. With no candidates surviving the follow-up, we present the upper limits on the signal strain amplitude in Sec.~\ref{sec:upperlimits}, and the astrophysical implications in Sec.~\ref{sec:astro}.
Coincident outliers produced by the main search are
first pruned in order to reduce their number to a manageable level.
A further reduction is based on a close case-by-case inspection of the strongest outliers, as described in the following.
\subsection{Outlier vetos}
First, we remove outliers due to {\it hardware injections}: simulated signals, mimicking the gravitational waves expected from spinning neutron stars, that are added to the data by physically moving the mirrors as if a signal had arrived, in order to test our analysis methods. See Appendix~\ref{sec:hi} for more details.
Moreover, we find several bunches of outliers that are not compatible with astrophysical signals, and thus are also discarded and not followed up. In these cases, the time-frequency peakmaps and spectra show the presence of broadband disturbances or lines.
Examples of such disturbances are shown in Fig.~\ref{fig:disturbance_28}, where we plot on the left the time-frequency peakmap in the frequency range of 28.2--28.35~Hz over the whole run and on the right the corresponding projection onto the frequency axis. A wide and strong transient disturbance, responsible for many outliers with frequency in the range of 28.279--28.283~Hz, is clearly visible.
\begin{figure*}[htb]
\centering
\includegraphics[width=\columnwidth]{figures/tfmap_28_O3_H_cool.png}
\includegraphics[width=\columnwidth]{figures/tfmap_proj_28_O3_H_cool.png}
\caption{Time-frequency peakmap (left) and corresponding projection (right) in the frequency band of 28.2--28.35 Hz over the full run. Several instrumental artifacts are visible, e.g., strong lines with varying amplitudes ($\sim 28.22$~Hz), weaker lines with sudden frequency variation ($\sim 28.33$~Hz), and broad transient disturbances (vertical bright lines). In particular, the features around 28.28~Hz in the second half of O3 are responsible for a big bunch of outliers with frequency in the range of 28.279--28.283~Hz.}
\label{fig:disturbance_28}
\end{figure*}
In addition, we find an excess of outliers at several multiples of 10~Hz in both H1 and L1 data, starting from 230~Hz. These are artifacts produced by the procedure to build BSDs\footnote{The artifacts are a well-known consequence of the 10 Hz baseline used to build BSD files. They could be avoided by constructing the BSDs with interlaced frequency bands.}. As it is extremely unlikely that the real astrophysical signals are coincident with multiples of 10~Hz, we veto all candidates within a range of $\pm 10^{-4}$~Hz around each multiple of 10~Hz.
Finally, we apply a threshold on the CR in order to further reduce the number of outliers to a level small enough to permit follow-up. In particular, we use different thresholds, ${\rm CR}_{\rm thr} = 4.1$ and $4.4$, for outliers with $W=1$ and $W>1$, respectively.
This procedure leaves us with 9449 outliers with $W=1$ and 7816 outliers with $W > 1$. The choice of threshold does not affect the upper limits presented later, since those are based on the outliers with the highest CR.
These outliers are followed up with two different methods: one based on the Frequency-Hough algorithm for outliers with $W=1$ (Appendix~\ref{sec:fu_FH}), and one based on a hidden Markov model (HMM) tracking scheme, the Viterbi algorithm, for outliers with $W>1$ (Appendix~\ref{sec:fu_viterbi}). No potential continuous-wave candidate remains after the follow-up. Details of the follow-up procedure and results are presented in Appendix~\ref{app:cand}.
\subsection{Upper limits}
\label{sec:upperlimits}
Having concluded that none of the outliers is compatible with an astrophysical signal, we compute $95\%$ confidence level (C.L.) upper limits placed on the signal strain amplitude.
The limits are computed, for every 1-Hz subband in the range of 20--610~Hz, with the analytic formula for the Frequency-Hough sensitivity, given by Eq.~(67) in \cite{Astone:2014esa}, where the relevant quantities, i.e. an estimate of the noise power spectral density $S_n$ and the CR, are obtained from the O3 data used in this search. Specifically, for each 1-Hz subband, we take the maximum CR of the outliers, separately for the Hanford and Livingston detectors, and the detectors' average noise spectra in the same subband. We then compute the two $95\%$ confidence-level upper limits using the equation, and take the higher (i.e. more conservative) of the two.
This procedure has been validated by injecting simulated signals into LIGO and Virgo O2 and O3 data \cite{dicethesis}. In particular, this procedure has been shown to produce conservative upper limits with respect to those obtained with a much more computationally demanding injection campaign. It has also been verified that the upper limits obtained with a classical procedure based on injections always lie above those based on the same analytic formula but using the minimum CR in each 1-Hz subband. The two curves, based respectively on the highest and the smallest CR, define a belt containing both a more stringent upper limit estimate and the sensitivity estimate of our search. When we discuss the astrophysical implications of the search, we will always refer to the upper limits obtained using Eq.~(67) in \cite{Astone:2014esa}, which are conservative with respect to limits derived from injections. The same procedure is used in the O3 standard all-sky search for continuous waves from spinning neutron stars \cite{ref:allskyO3inprep}.
The resulting upper limits, marginalized over the sky and the source polarization parameters, are shown in Fig.~\ref{fig:upper}, as circles connected by a dashed line.
\begin{figure*}[htb]
\centering
\includegraphics[width=1.5\columnwidth]{figures/boson_O3_UL_1Hz.png}
\caption{$95\%$ C.L. upper limits on the signal strain amplitude obtained from this search (circles, connected by the dashed line). Values are given in each 1-Hz subband. No value is given for the subbands where no outlier survives. The dots connected by a dashed line define the lower bound obtained using the minimum CR in each subband.}
\label{fig:upper}
\end{figure*}
The minimum value is $1.04\times 10^{-25}$ at 130.5~Hz. In the same figure, the dots connected by the dashed line correspond to the lower bound computed using the minimum CR.
A comparison with previous all-sky searches, see e.g. Fig.~4 of \cite{CW_O3early}, shows that our conservative results are better than most of the past searches, including the recent early O3 analysis \cite{CW_O3early}. In particular, our minimum upper limit value improves upon that in \cite{CW_O3early} by $\sim 30\%$. The main factors that contribute to the improvement are: 1) the use of the full O3 C01 gated data, 2) the use of longer FFTs at least in a portion of the frequency band, and 3) the restricted spin-up/down range that could impact the maximum CR and thus the upper limit in each subband. On the other hand, the all-sky search described in \cite{CW_O3early}, as well as most of the other past searches, cover a significantly larger parameter space in both the spin-down/up and frequency ranges, and thus are sensitive to signals that are not considered in this analysis.
An O2 all-sky search reported in \cite{dergachev2021falconO2} produced slightly better upper limits than ours, by about $5$--$10\%$ on average (due to the use of a much longer coherent integration time of days), but over a smaller parameter space and with a more limited scope, being focused on low-ellipticity sources. See Sec.~\ref{sec:conclusions} for a more detailed discussion.
We want to stress two important points here. First, our conservative procedure to compute upper limits, based on the maximum CR in each subband, produces strain values that are typically larger than the minimum detectable amplitudes at the same frequency. That is, the search sensitivity is expected to be better than the upper limits.
Second, this is the first all-sky search that is optimized for frequency wandering signals, both at the level of outlier selection and follow-up. This allows us to achieve a better sensitivity to such signals compared to that achievable using the maximum possible FFT duration (which is the best choice for nearly monochromatic signals).
\subsection{Astrophysical implications}
\label{sec:astro}
The upper limits presented above can be translated into
physical constraints on the source properties. We interpret the results in two different ways. First, similar to what has been done in \cite{palomba2019direct}, we compute the exclusion regions in the boson mass and black hole mass plane, assuming fixed values of the other relevant parameters, namely the distance to the system $D$, the initial dimensionless spin of the black hole $\chi_i$, and a time equal to the age of the cloud $t=t_\mathrm{age}$. Indeed, from Eqs.~(\ref{eq:h0}) and (\ref{eq:hoft}), once $D$, $\chi_\mathrm{i}$, and $t_\mathrm{age}$ are fixed, the signal strain amplitude depends only on the boson mass $m_b$ and black hole mass $M_\mathrm{BH}$. Thus, for each pair of $(m_b,~ M_\mathrm{BH})$, we can exclude the presence of a source with those assumed parameters, if it would have produced a signal whose strain is larger than the upper limit at a given frequency in Fig.~\ref{fig:upper}.
In our search, the probed $\alpha$ values roughly fall in the range of [0.02, 0.13]. Eqs.~(\ref{eq:taugw}) and (\ref{eq:h0}) do not hold for $\alpha \gtrsim 0.1$, as shown in Fig. 2 of \cite{Brito:2017zvb}. Specifically, comparing the analytically derived (red) curve with the black curve obtained from numerical simulations, we see that they differ in energy by a factor of 3 at the largest $\alpha$ considered in this search, $\alpha \sim 0.15$.
This implies that we underestimate $\tau_\mathrm{gw}$ up to a factor of 3 and overestimate $h_0$ up to a factor of $\sqrt{3}$ (at $\alpha\simeq 0.13$).
Thus we correct Eqs.~(\ref{eq:taugw}) and (\ref{eq:h0}) by these factors such that the resulting exclusion regions are slightly conservative in the whole parameter space.
In Fig.~\ref{fig:exclude_chi09}, the exclusion regions are plotted for $D = 1$~kpc (left) and $D = 15$~kpc (right), assuming a high initial spin of $\chi_i=0.9$. For each distance assumption, three different values of the black hole age are considered.
\begin{figure*}[htb]
\centering
\includegraphics[width=\columnwidth]{figures/exclusion_region_chi0.9_d1kpc_1Hz_factor_3.png}
\includegraphics[width=\columnwidth]{figures/exclusion_region_chi0.9_d15kpc_1Hz_factor_3.png}
\caption{Exclusion regions in the boson mass ($m_b$) and black hole mass ($M_{\rm BH}$) plane for an assumed distance of $D=1$~kpc (left) and $D = 15$~kpc (right), and an initial black hole dimensionless spin $\chi_i=0.9$. For $D=1$~kpc, three possible values of the black hole age, $t_\mathrm{age}=10^3,~10^6,~10^8$ years, are considered; for $D=15$~kpc, $t_\mathrm{age}=10^3,~10^{4.5},~10^6$ years are considered.}
\label{fig:exclude_chi09}
\end{figure*}
In Fig.~\ref{fig:exclude_chi04}, the exclusion regions are shown for the same parameters as before, except that a lower initial spin value, $\chi_i=0.5$ is assumed.
\begin{figure*}[htb]
\centering
\includegraphics[width=\columnwidth]{figures/exclusion_region_chi0.5_d1kpc_1Hz_factor_3.png}
\includegraphics[width=\columnwidth]{figures/exclusion_region_chi0.5_d15kpc_1Hz_factor_3.png}
\caption{Same as Fig. \ref{fig:exclude_chi09} but for black hole initial spin $\chi_i=0.5$. The assumed distance is $D=1$~kpc (left), and $D=15$~kpc (right).}
\label{fig:exclude_chi04}
\end{figure*}
In both cases, as expected, the constrained region is smaller when the source is assumed to be at a farther distance as, in this case, the signal amplitude at the detector is smaller.
For any given black hole mass, these results improve upon the constraints obtained in Advanced LIGO O2 data \cite{palomba2019direct} for lower boson masses, while they are slightly less constraining for higher boson masses. This is because for a given black hole mass, higher boson masses correspond to higher signal spin-up (see Eq. (\ref{eq:fanni})), and we simply do not cover a large spin-up range.
On the other hand, the constraints described in \cite{palomba2019direct} were obtained from the results of a search not specifically designed for boson clouds. In particular, restricting the exploration to the parameter space relevant for the expected boson cloud signals significantly reduces the trial factor with a consequent implicit gain in the sensitivity with respect to standard wider parameter space searches.
A second type of interpretation, as shown in Fig.~\ref{fig:maxdist}, is represented by the maximum distance $D_\mathrm{max}$ at which we can exclude, as a function of the boson mass, the presence of an emitting system of a given age $t_\mathrm{age}$, assuming a population of black holes. The black hole population has been simulated using a Kroupa mass distribution, with probability density described by $f(m)\propto m^{-2.3}$ \cite{elbert2017}, with two different ranges, of $[5,~50]M_\odot$ and $[5,~100]M_\odot$, and a uniform initial spin distribution in the range of $[0.2,~0.9]$. The criterion to compute $D_\mathrm{max}$ at a given boson mass is that at least $5\%$ of the simulated signals would produce a strain amplitude larger than the upper limit at the corresponding frequency in the detectors, and thus would have been detected by our search. This choice of 5\% should be conservative, given the large black hole population in the Galaxy ($10^7$--$10^8$) from which we seek a single signal. As before, we correct Eqs.~(\ref{eq:taugw}) and (\ref{eq:h0}) by a factor of 3 and $\sqrt{3}$, respectively, to obtain more accurate $\tau_{\rm gw}$ and $h_0$ estimates at $\alpha \gtrsim 0.1$ and thus slightly conservative constraints in the full range. As expected, when the maximum black hole mass is smaller, the maximum distance is on average also smaller, since the signal strain increases with a high power of the black hole mass.
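The simulated black hole population described above can be sampled with a short inverse-CDF sketch; the function name and the seeded generator are our own illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_population(n, m_min=5.0, m_max=50.0, gamma=-2.3):
    """Draw BH masses from the Kroupa-like power law f(m) ∝ m^gamma by
    inverting its CDF, and initial spins uniform in [0.2, 0.9]."""
    u = rng.uniform(size=n)
    g1 = gamma + 1.0  # exponent of the integrated power law
    masses = (m_min ** g1 + u * (m_max ** g1 - m_min ** g1)) ** (1.0 / g1)
    spins = rng.uniform(0.2, 0.9, size=n)
    return masses, spins
```

Because the power law is steep, the sampled population is dominated by low-mass black holes (median near $8\,M_\odot$ for the $[5,~50]M_\odot$ range), which is why the black hole mass distribution weighs on the shape of the $D_\mathrm{max}$ curves.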
\begin{figure*}[htb]
\centering
\includegraphics[width=1\columnwidth]{figures/max_distance_50_factor_3.png}
\includegraphics[width=1\columnwidth]{figures/max_distance_100_factor_3.png}
\caption{Maximum distance at which at least $5\%$ of a simulated population of black holes with a boson cloud would produce a gravitational-wave signal with strain amplitude larger than the upper limit in the detectors. The left plot refers to a maximum black hole mass of 50$M_{\odot}$, while the right plot to a maximum mass of 100$M_{\odot}$. The different colored markers correspond to different system ages, ranging from $10^3$ years to $10^7$ years, as indicated in the legend. The alignment of points for different ages at the smallest boson masses (and distances) is the result of a discretization effect due to the finite size grid used in distance.}
\label{fig:maxdist}
\end{figure*}
Instead of focusing on the specific properties of the emitting system, as for the exclusion regions previously discussed, this constraint on distance depends on the chosen Kroupa mass and uniform spin distributions of the black holes, and thus it reflects the ensemble properties of the assumed black hole population.
Overall, the distance constraints allow us to draw semi-quantitative conclusions on the possible presence of emitting boson clouds in our Galaxy. For instance, young systems, with $t_{\rm age}$ smaller than about $10^3$~years, are disfavored in the whole Galaxy for boson masses above about $2.5\times 10^{-13}$~eV for a maximum black hole mass of 50$M_{\odot}$, and above about $1.2\times 10^{-13}$~eV for a maximum black hole mass of 100$M_{\odot}$. Older systems, which are expected to be more abundant, can instead be ruled out only at smaller distances, as they produce on average weaker gravitational-wave signals. As an example, bosons with masses between $\sim 2\times 10^{-13}$--$8\times 10^{-13}$~eV ($\sim 10^{-13}$--$8\times 10^{-13}$~eV) would be excluded within a distance of 1~kpc from Earth for a maximum black hole mass of 50$M_{\odot}$ (100$M_{\odot}$).
The general shape of the maximum distance curves is a result of the combination of different factors pushing in different directions: higher boson masses tend to produce signals with stronger initial amplitudes, but also faster decay rates as a function of $t_{\rm age}$.
Towards the lower $m_b$ end in the plot, $D_{\max}$ increases with $m_b$ because the effect of the increasing initial signal amplitudes dominates; towards the higher $m_b$ end, $D_{\max}$ tends to decrease again since the faster signal decay rate dominates.
At the same time, for higher boson masses, lower black hole masses are required to form a cloud producing signals that last long enough to be detectable.
Hence the black hole mass distribution (larger population at lower $M_{\rm BH}$) comes into play as well.
In addition, the results are weighted by the frequency-dependent detector sensitivity (reflected in the upper limit curve in Fig.~\ref{fig:upper}), which also affects the final shape of the $D_{\max}$ curves.
For both of the constraints presented above, we do not take into account factors that are largely uncertain but could also be relevant to the interpretation, e.g., the black hole spatial distribution or the formation rate. We defer the integration of these factors to future work when they are better constrained.
Concerning the interpretation of our results with respect to the assumed boson self-interaction, the search covers a small range of spin-up values (see Sec.~\ref{sec:method} and, in particular, Fig.~\ref{fig:spindownstep}), mainly corresponding to a regime of small boson self-interaction, i.e., one in which the spin-up is dominated by Eq.~(\ref{eq:fanni}). For a subset of smaller boson and black hole masses, the spin-up dominated by Eq.~(\ref{eq:fdot_tr}) in the intermediate regime is also covered in the search. However, as we have clarified in Sec.~\ref{sec:model}, the signal duration is expected to be shorter in this intermediate regime, and the signal amplitude is smaller due to the smaller boson and black hole masses [see Eq.~(\ref{eq:h0})]. Thus the signals in the intermediate regime are less likely to be detected compared to those in the small self-interaction regime.
\section{Conclusions}\label{sec:conclusions}
This paper describes the first all-sky search tailored to the predicted continuous-wave emission from scalar boson clouds around spinning black holes.
We cover a frequency range between 20~Hz and 610~Hz, and a small frequency-dependent spin-up range corresponding to the small self-interaction regime.
We use a multiple frequency-resolution approach, in order to optimize the sensitivity for signals characterized by a slightly wandering frequency, for which a search based on the fixed maximum possible Fourier transform duration would cause a sensitivity loss. No candidate survives the multi-step follow-up procedures we have implemented.
Following up outliers in this analysis was significantly more challenging, in part because we do not cover a broad spin-down or spin-up range, typical of standard all-sky continuous-wave\xspace analyses, for which instrumental artifacts lead to clusters over extended ranges in source spin-down. Establishing an instrumental origin of such clusters is therefore more straightforward in standard analyses than in this analysis. Manual checks by visual inspection of many spectra had to be performed, along with additional methods necessary to reject an outlier due to a transient spectral artifact in H1 (see Sec.~\ref{sec:fuw1}). Therefore, this work motivates not only further searches for boson cloud systems, but also refined methods to mitigate or conclusively veto persistent or transient spectral disturbances at the level of detector commissioning and characterization.
The resulting upper limits are significantly better than those obtained in previous all-sky searches for continuous waves, including a recent search in the early O3 data \cite{CW_O3early}, although our search typically covers a much smaller spin-down/up range than those searches.
On the other hand, our upper limits are slightly worse than those obtained in the O2 search carried out with the Falcon pipeline \cite{dergachev2021falconO2}, which, however, was mainly focused on low-ellipticity sources and thus covered
a much smaller spin-down/up range than we do (from a factor of $\sim 4$ at lower frequencies up to a factor of $\sim 18$ at higher frequencies), and moreover did not account for the possibility of signal frequency wandering. The same pipeline was also run over 500--1700~Hz \cite{Dergachev:2020fli} with the same caveats; although the limits presented in both of these searches are quite stringent, they rely on low-ellipticity sources emitting almost monochromatic gravitational waves\xspace.
Code improvements are possible to make the search more sensitive and computationally more efficient. In this way, future boson cloud searches using the basic methodology adopted in this paper will expand the parameter space, covering a wider frequency band and a larger spin-up range, and will also search for vector boson cloud signals that could have much higher spin-ups than those considered here. This, together with the expected detector improvements in upcoming runs of the Advanced LIGO, Advanced Virgo and KAGRA detectors, will significantly increase the chance of detecting gravitational radiation from these interesting sources, or at least of better constraining the parameter space of these systems.
{
"timestamp": "2021-12-01T02:27:46",
"yymm": "2111",
"arxiv_id": "2111.15507",
"language": "en",
"url": "https://arxiv.org/abs/2111.15507"
}
\section{Introduction}
\label{sec:Intro}
\vspace*{1mm}
\noindent
It is a general observation that the Feynman parameter integrals for certain classes of topologies can be expressed
in terms of higher transcendental functions of the hypergeometric type, cf.~e.g.~\cite{HAMBERG,Davydychev:2003mv,
Bierenbaum:2007qe,Kalmykov:2020cqz}.
This concerns their representation before expanding in the dimensional parameter $\varepsilon = D - 4$, with $D$ the dimension of
space--time.
Here we consider the generalization of the Euler integrals to the generalized
hypergeometric functions $_pF_q$, and the multiple hypergeometric functions of the Appell-, Kamp\'e de F\'eriet-, Horn-,
and Lauricella-, Saran-,
Srivastava-, and Exton type
\cite{HYPKLEIN,HYPBAILEY,SLATER1,APPELL1,APPELL2,KAMPE1,KAMPE2,HORN,EXTON72a,
EXTON1,EXTON2,SCHLOSSER,
Anastasiou:1999ui,Anastasiou:1999cx,SRIKARL,Lauricella:1893, Saran:1954,Saran:1955,ERDELYI,Kalmykov:2020cqz}.
In physical applications a standard integration
method is that of solving the systems of ordinary and partial differential equations \cite{DEQ} generated by the
integration by parts (IBP) relations \cite{IBP}. In this context it is important to recognize the differential equations of
the classes of the aforementioned functions, since their mathematical structure is widely known. This allows the direct analytic
solution of at least this part of the physical problem. Starting with certain topologies, more general differential equations
will contribute, requiring different solution techniques. Structures of the above kind have been obtained
in general
off--shell representations at the one--loop level for multi--leg diagrams,
cf.~e.g.~\cite{Boos:1990rg,Fleischer:2003rm,
Watanabe:2013ova,Bluemlein:2017rbi,Phan:2018cnz}. At the two- and
three--loop level for various scattering processes related structures are found,
cf.~e.g.~\cite{Anastasiou:1999ui,Anastasiou:1999cx,
Bauberger:1994nk,Ablinger:2012qm}.
For all the above quantities the (partial) differential equations are known and they partly turn out to be of rather
high order. On the other hand, one may consider the formal multiple Taylor expansion of these higher transcendental functions,
which allows one to obtain difference equations for the corresponding expansion coefficients. This is advisable, since these equations are
considerably simpler.
In this paper we describe a systematic classification of partial differential equations for scalar or master integrals
of one or more scales w.r.t.\ known solutions in the hypergeometric classes. These differential equations have multivariate
multiple series solutions for parameters $x_1, ..., x_n$ in the vicinity of $\{0, ..., 0\}$ as formal Taylor series.
We determine the expansion coefficients, which are obtained as rational product solutions. By identifying the associated
non--linear coefficient pattern for the respective case, the expansion coefficients can be factorized into rational
expressions of Pochhammer symbols.
The method works for general values of the space--time dimension $D$. The (multiple) infinite series representations
found can finally be expanded in the dimensional parameter $\varepsilon$, which turns the summand, given in terms of Pochhammer symbols and general hypergeometric products, into expressions involving in addition (cyclotomic) harmonic sums \cite{Vermaseren:1998uu,Blumlein:1998if,Ablinger:2011te} or generalized versions, like Hurwitz
harmonic sums. One may try to simplify the obtained multiple sums in terms of hypergeometric products and indefinite nested sums to expressions that are purely given in terms of indefinite nested sums using the package \texttt{EvaluateMultiSums}~\cite{Ablinger:2010pb,Blumlein:2012hg,Schneider:2013zna,Schneider:19}. The underlying summation engine is based on the package {\tt Sigma} \cite{SIG1,SIG2} that contains
non--trivial algorithms in the setting of difference rings~\cite{DR,TermAlgebra,LinearSolver}.
Hypergeometric structures emerge in the calculation of Feynman integrals using
Feynman parameter representations from the simplest topologies onward, cf.~e.g.\ \cite{IZ}.
In the most simple cases they can be calculated in terms of Euler Beta functions
\begin{eqnarray}
\int_0^1 dz z^a (1-z)^b = B(a+1,b+1).
\end{eqnarray}
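The Beta-function identity can be verified numerically for generic (non-integer) exponents, e.g. with mpmath; the chosen values of $a$ and $b$ are arbitrary:

```python
from mpmath import mp, quad, beta

mp.dps = 30  # working precision in decimal digits

# \int_0^1 z^a (1-z)^b dz = B(a+1, b+1), valid for Re(a), Re(b) > -1
a, b = mp.mpf("0.7"), mp.mpf("1.3")
lhs = quad(lambda z: z**a * (1 - z)**b, [0, 1])
rhs = beta(a + 1, b + 1)
```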
Here and in the following we represent the respective higher transcendental functions in their convergence region
but allow for analytic continuations to their whole analyticity range, cf.~\cite{WW}. The next more involved
function is ${_2F_1}$
\begin{eqnarray}
\pFq{2}{1}{a_1,a_2}{b_1}{z} = \frac{\Gamma(b_1)}{\Gamma(a_1) \Gamma(b_1-a_1)} \int_0^1 dx x^{a_1-1} (1-x)^{b_1-a_1-1}
(1-zx)^{-a_2}
\end{eqnarray}
followed by ${}_{p+1}F_{q+1}$, obtained through the iterative integral
\begin{eqnarray}
\int_0^1 dx x^{a-1} (1-x)^{b-1}
\pFq{p}{q}{a_1,...,a_p}{b_1,...b_q}{x z} = \frac{\Gamma(a) \Gamma(b)}{\Gamma(a+b)} \pFq{p+1}{q+1}
{a_1,...,a_p,a}{b_1,...b_q,a+b}{z}.
\end{eqnarray}
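A numerical spot check of this iterative integral (here for $p=2$, $q=1$, i.e. raising $_2F_1$ to $_3F_2$) can be done with mpmath; the parameter values are arbitrary:

```python
from mpmath import mp, quad, gamma, hyp2f1, hyper

mp.dps = 25

# int_0^1 x^(a-1) (1-x)^(b-1) 2F1(a1,a2;b1; x z) dx
#   = Gamma(a) Gamma(b) / Gamma(a+b) * 3F2(a1,a2,a; b1,a+b; z)
a, b = mp.mpf("1.5"), mp.mpf("2.5")
a1, a2, b1, z = mp.mpf("0.3"), mp.mpf("0.7"), mp.mpf("1.9"), mp.mpf("0.4")

lhs = quad(lambda x: x**(a - 1) * (1 - x)**(b - 1)
           * hyp2f1(a1, a2, b1, x * z), [0, 1])
rhs = gamma(a) * gamma(b) / gamma(a + b) * hyper([a1, a2, a], [b1, a + b], z)
```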
Other topologies \cite{Ablinger:2012qm} lead to the integral representation of the Appell function $F_1$
\begin{eqnarray}
I &=& \int_0^1 dw_1 \int_0^1 dw_2 \theta(1-w_1-w_2) w_1^{b-1} w_2^{b'-1} (1-w_1-w_2)^{c-b-b'-1}(1-w_1 x - w_2 y)^{-a}
\nonumber\\
&=& \frac{\Gamma(b) \Gamma(b') \Gamma(c-b-b')}{\Gamma(c)} F_1\left[a;b,b';c;x,y\right]
\end{eqnarray}
and others.
The solution of Feynman integrals through Feynman parameterizations mapping to higher transcendental functions
is not a method which can be easily made uniform. At a certain stage it will also require the use of
Mellin--Barnes integral representations \cite{MB} to be solved by the residue theorem. Although this method
can establish links to higher transcendental functions in principle since those have Pochhammer Umlauf-integral
representations \cite{POCHHAMMER,KF,SLATER1}, it may easily lead to non--minimal representations \cite{Blumlein:2010zv}
which are difficult to reduce analytically, if one is not only interested in numerical results \cite{MBnum}.
The advantage of all these representations lies in the fact that multiple Feynman parameter integrals are reduced
to much lower dimensional infinite sum representations, which are one--fold in the case of generalized
hypergeometric
functions, two--fold e.g.\ for Appell and Horn functions
\cite{APPELL1,APPELL2,Anastasiou:1999ui,Anastasiou:1999cx,HORN},
three--fold for the Srivastava functions
\cite{SRIKARL} and further given by multi--sum Lauricella-type functions \cite{Lauricella:1893} in more
involved cases, with an
early application in \cite{Bauberger:1994nk}.
In this paper we will consider (partial) differential equations for master integrals w.r.t.\ their parameters. The master
integrals
are obtained as the result of the IBP--reduction. Usually these are first order equations. However, one will decouple the
corresponding systems, cf.~\cite{BCP13,Zuercher:94}, using algorithms which are available, e.g., in {\tt OreSys} \cite{ORESYS}. In this way higher order
differential equations will emerge. In the case of partial differential equations one may use, e.g., Janet bases \cite{JANET}.
These (partial) differential equations can be mapped to corresponding multi--variate difference equations expanding
the associated ansatz in terms of the multi--variate formal Taylor series
\begin{eqnarray}
\sum_{k_1, ..., k_n = 0}^\infty f[k_1, ..., k_n] x_1^{k_1} ... x_n^{k_n}.
\label{eq:TAYLOR}
\end{eqnarray}
The recurrences obeyed by the expansion coefficients $f[k_1, ..., k_n]$
can be solved using difference ring techniques \cite{SIG1,SIG2} for the linear case. For the multivariate case we will utilize ideas from~\cite{kauers10,kauers11} that led to the new package \texttt{solvePartialLDE}, which may partly support the solving task. Finally, one may express $f[k_1, ..., k_n]$
as a rational term of (multi--indexed) Pochhammer symbols, which contain all parameters of the original differential
equations, like particle masses and kinematic invariants of the processes considered, including the dimensional parameter
$\varepsilon$. In the general case, product solutions emerge first, which can be factored into Pochhammer structures by solving
algebraic equations. Alternatively, one can keep the multiplicands in non--linear form and apply a new
function implemented in the package \texttt{EvaluateMultiSums} that produces the $\varepsilon$--expansions without introducing algebraic extensions.
In most cases, we will limit our consideration to the principal structure of the known classes of higher transcendental functions of the
hypergeometric type, i.e. those where $f[k_1, ..., k_n]$ is given by Pochhammer--ratios
\begin{eqnarray}
f[k_1, ..., k_n] = \frac{\prod_{i=1}^p (a(i))_{l(i)}}{\prod_{j=1}^q (b(j))_{m(j)}},~~~l(i), m(j),p,q \in \mathbb{N},
\end{eqnarray}
and $l(i), m(j)$ are linear functions of $k_r \in \mathbb{N}$ with integer coefficients.
Here the Pochhammer symbols are defined by
\begin{eqnarray}
(a)_n = \frac{\Gamma(a+n)}{\Gamma(a)},~~~a \in \mathbb{C} \backslash \mathbb{Z}_-,~~n \in \mathbb{N}
\end{eqnarray}
and $\mathbb{Z}_- = (\mathbb{Z} \backslash \mathbb{N}) \cup \{0\}$.
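For illustration, the $\Gamma$-quotient definition of the Pochhammer symbol can be checked against mpmath's rising factorial \texttt{rf}, here for a generic complex argument:

```python
from mpmath import mp, rf, gamma

mp.dps = 30

# (a)_n = Gamma(a+n)/Gamma(a), checked for a generic complex a and n = 7
a, n = mp.mpc("0.3", "1.2"), 7
lhs = rf(a, n)                   # mpmath rising factorial (Pochhammer symbol)
rhs = gamma(a + n) / gamma(a)
```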
The coefficients $a(i)$ and $b(j)$
will in general depend on the dimensional parameter $\varepsilon$ and we will further
consider the expansion of the higher transcendental functions in this parameter.
The paper is organized as follows. In Section~\ref{sec:delist} we list the differential equations of the
multi--variate
generalized hypergeometric functions up to four variables in explicit form, parameterizing them linearly. There are
also general differential equations, such as those for the hypergeometric functions $_pF_q$, the Kamp\'e de F\'eriet function
\cite{KAMPE2}, and the Lauricella--Saran functions \cite{Lauricella:1893,Saran:1954,Saran:1955}. In Section~\ref{sec:reclist} we
derive the recursions for the multivariate expansion coefficients of these functions. An algorithm is presented in
Section~\ref{sec:recsol} to find hypergeometric product solutions for first--order linear recurrence
systems.
In this way the multivariate functions $f(x_1, ..., x_n)$ can be represented in the vicinity of $(\vec{0})_n$.
The parameters of the differential and difference equations depend also on the dimensional parameter $\varepsilon$.
Usually one would like to perform corresponding expansions in this quantity, which we describe in
Section~\ref{sec:epExpansion}. Here the so-called Hurwitz harmonic sums and more general versions occur, the
summation problem of which can be handled with the packages \texttt{EvaluateMultiSums} and {\tt Sigma}.
In Section~\ref{sec:fullmachinery} we demonstrate the full machinery to obtain, for a given system of linear
differential equations, the first coefficients of the $\varepsilon$--expansions in terms of indefinite nested sums
and
products. In Section~\ref{sec:PLDEsolver} we supplement the solving tools from Section~\ref{sec:recsol} and
turn to general partial difference equations with rational coefficients. Based on the algorithms presented
in~\cite{kauers10,kauers11} we present different strategies to find solutions in terms of
hypergeometric products and iterative sums over such products that appear in the calculation of Feynman integrals.
Section~\ref{sec:conclusion} contains the conclusions.
In Appendix~\ref{sec:A} we provide for convenience a list of the main functions dealt with in the present
paper, which are defined by their series representation, see also the file {\tt cases.m}.
Appendix~\ref{sec:B} illustrates the matching conditions to be met to obtain from the general solutions in a direct way
the Pochhammer-type solutions. They are given in computer-readable form in the file {\tt Mconditions.m}.
In Appendix~\ref{sec:C} a brief description of the commands of the code {\tt HypSeries} is given and
Appendix~\ref{sec:D} provides a brief description of the code {\tt solvePartialLDE}. For both cases we will provide
{\tt Mathematica} notebooks illustrating the corresponding operations in
examples. In Appendix~\ref{sec:E} a special constant is evaluated, which
appears in one of the examples. Appendix~\ref{sec:F} lists the
{\tt Mathematica} and other software
packages required to execute the example notebooks.
\section{The differential equations}
\label{sec:delist}
\vspace*{1mm}
\noindent
Multivariate master integrals obey partial differential equations, which are obtained after the IBP reduction and,
if necessary, the decoupling of coupled systems. In this way differential equations of higher than first order are
obtained. The first class concerns the univariate case of the generalized hypergeometric functions; for its
definition we refer to Appendix~\ref{sec:A}.
The differential operator of Gau\ss{}' $_2F_1$ function reads, cf.~\cite{SLATER1},
\begin{eqnarray}
x(1-x) \frac{d^2}{dx^2} + (c -(a+b+1)x) \frac{d}{dx} - ab,
\end{eqnarray}
which we write more generally as
\begin{eqnarray}
x(1-x) \frac{d^2}{dx^2} + (A_1+B_1 x) \frac{d}{dx} + C.
\label{eq:D2F1g}
\end{eqnarray}
For the function $_{3}F_2$ one obtains
\begin{eqnarray}
x^2(1-x) \frac{d^3}{dx^3} + x (A_{{2}}+B_{{2}} x) \frac{d^2}{dx^2}
+ (A_{{1}}+B_{{1}} x) \frac{d}{dx} + C,
\label{eq:D3F2}
\end{eqnarray}
with $A_{{2}} = b_1+b_2 +1, B_{{2}} = -(3 + a_1 + a_2 +a_3), A_{{1}} = b_1b_2,
B_{{1}} = -(a_2 a_1 + a_3 a_1 + a_2 a_3 + a_1 +a_2 +a_3 +1), C = - a_1 a_2 a_3$.
In general, we get for ${}_{p+1}F_p$ the linear differential equation
\begin{eqnarray}
x^p (1-x) \frac{d^{p+1}}{dx^{p+1}} + \sum_{k = 1}^{p} x^{k-1}(A_k + B_k x) \frac{d^k}{dx^k} + C.
\end{eqnarray}
The $_pF_q$ function is the homogeneous solution of the differential
operator
\begin{eqnarray}
x \frac{d}{dx} \Biggl( x \frac{d}{dx} + b_1 -1 \Biggr) ... \Biggl( x \frac{d}{dx} + b_q -1 \Biggr)
- x \Biggl( x \frac{d}{dx} + a_1 \Biggr) ... \Biggl( x \frac{d}{dx} + a_p \Biggr).
\label{eq:PFQ1}
\end{eqnarray}
The powers of the differential operator $\vartheta = x\, (d/dx) \equiv x \partial_x$ appearing in (\ref{eq:PFQ1})
can be written in the following form
\begin{eqnarray}
\vartheta &=& x \partial_x
\\
\vartheta^2 &=& x \partial_x + x^2 \partial_x^2
\\
\vartheta^3 &=& x \partial_x + 3 x^{{2}} \partial_x^2 + x^3 \partial_x^3
\\
\vartheta^4 &=& x \partial_x + 7 x^{{2}} \partial_x^2 + 6 x^3 \partial_x^3+ x^4 \partial_x^4
\\
\vartheta^5 &=& x \partial_x + 15 x^{{2}} \partial_x^2 + 25 x^3 \partial_x^3+ 10 x^4 \partial_x^4 + x^5
\partial_x^5,~~\text{etc.}
\end{eqnarray}
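The integer coefficients in these expansions are the Stirling numbers of the second kind. The $\vartheta^4$ line, for instance, can be verified symbolically:

```python
import sympy as sp

x = sp.Symbol("x")
f = sp.Function("f")(x)

def theta(expr):
    """The Euler operator theta = x d/dx acting on an expression."""
    return x * sp.diff(expr, x)

# Apply theta four times and compare with the tabulated expansion
lhs = sp.expand(theta(theta(theta(theta(f)))))
rhs = sp.expand(
    x * sp.diff(f, x)
    + 7 * x**2 * sp.diff(f, x, 2)
    + 6 * x**3 * sp.diff(f, x, 3)
    + x**4 * sp.diff(f, x, 4)
)
```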
Inserting this into Eq.~(\ref{eq:PFQ1}) yields the corresponding general differential operator, which has the form
\begin{eqnarray}
\sum_{k = 0}^m P_k(x) \frac{d^k}{dx^k},
\end{eqnarray}
with the corresponding polynomials $P_k(x)$ and $m = {\rm max}\{p,q\}$. The coefficient polynomials result
from the expansion of (\ref{eq:PFQ1}).
Here and in the following we will first parameterize the differential operators in general terms. In the literature
the different coefficients are usually related by algebraic equations, which is possible, but not necessary. The list
of these equations is given in a subsidiary file to this paper.
The differential operators for the two--variable Horn type functions \cite{APPELL1,APPELL2,KAMPE1,KAMPE2,HORN}
$F_1$ to $F_4$, $G_1$ to $G_3$, and $H_1$ to $H_7$, including the Appell functions
\cite{APPELL1,APPELL2}, are given by
{
\begin{eqnarray}
a
+ (b x+c) \partial_x
+x (d+e x) \partial_x^2
+f y \partial_y
+(g y+h x y) \partial_{x,y}^2
+j y^2 \partial_y^2 &=& 0
\label{APP1}
\\
a_1
+ (b_1 y+ c_1) \partial_y
+y (d_1+e_1
y) \partial_y^2
+f_1 x \partial_x
+ (g_1 x+ h_1 x y) \partial_{x,y}^2
+j_1 x^2 \partial_x^2 &=& 0,
\label{APP2}
\end{eqnarray}
}
with the example of the Appell $F_1$ function
{
\begin{eqnarray}
F_1 &:& x(1-x)\partial_x^2 +y(1-x) \partial_{x,y}^2 + (A+B x)\partial_x + C y\partial_y + D
\\
F_1 &:& y(1-y)\partial_y^2 +x(1-y) \partial_{x,y}^2 + (A +B' y)\partial_y + C' x\partial_x + D'.
\end{eqnarray}
}
Here {$\partial_{x,y}^2 = \partial_x \partial_y$}, etc.
In physical applications two more differential operators appeared in the bi--variate case,
\cite{Anastasiou:1999ui,Anastasiou:1999cx}, to which the functions
$S_1$ and $S_2$ belong. The differential operators read
{
\begin{eqnarray}
S_1 &:&
a+(c+b x) \partial_x +x (d+e x) \partial_x^2 +x^2 (l+p x) \partial_x^3+f y \partial_y +x
(q+r x) y \partial_x^2 \partial_y +j y^2 \partial_y^2
\nonumber\\ &&
+s x y^2 \partial_x \partial_y^2 +(g y+h x y)
\partial_{x,y}^2
\\
&:&
a_1+f_1 x \partial_x + j_1 x^2 \partial_x^2 +(c_1+b_1 y) \partial_y+y (d_1+e_1 y) \partial_y^2 +(g_1 x+h_1 x
y) \partial_{x,y}^2
\\
S_2 &:&
a+(c+b x) \partial_x +x (d+e x) \partial_x^2 +x^2 (l+p x) \partial_x^3 +f y \partial_y +x
(q+r x) y \partial_x^2 \partial_y
+j y^2 \partial_y^2
\nonumber\\ &&
+s x y^2 \partial_x \partial_y^2 +(g y+h x y)
\partial_{x,y}^2
\\
&:&
a_1+f_1 x \partial_x +(c_1+b_1 y) \partial_y +j_1 x^2 \partial_x^2 \partial_y +y (d_1+e_1 y)
\partial_y^2+q_1 x y \partial_x \partial_y^2 +p_1 y^2 \partial_y^3
\nonumber\\ &&
+(g_1 x+h_1 x y) \partial_{x,y}^2.
\end{eqnarray}
}
For the Kamp\'e de F\'eriet function
\begin{eqnarray}
F^{p;q;k}_{l;m;n} \left[ \begin{array}{cc}
(a_p) ; (b_q) ; (c_k) & \\
& x, y \\
(\alpha_l) ; (\beta_m) ; (\gamma_n) &
\end{array}
\right]
&=& \sum_{r,s=0}^\infty \, \frac{\prod_{j=1}^p (a_j)_{r+s}
\prod_{j=1}^q (b_j)_r \prod_{j=1}^k (c_{{j}})_s}{\prod_{j=1}^l (\alpha_j)_{r+s} \prod_{j=1}^m (\beta_j)_r \prod_{j=1}^n
(\gamma_j)_s } \frac{x^r}{r!} \frac{y^s}{s!} \\
&=& \sum_{r,s=0}^\infty f[r,s] x^r y^s
\end{eqnarray}
one obtains the following annihilating differential operators \cite{KAMPE1,KAMPE2}
\begin{eqnarray}
\prod_{j=1}^p (x \partial_x + y \partial_y +a_j) \prod_{j=1}^q (x \partial_x +b_j) -{\partial_x} \prod_{j=1}^l (x \partial_x + y \partial_y -1+\alpha_j) \prod_{j=1}^m ({x \partial_x} -1+\beta_j) = 0,
\\
\prod_{j=1}^p (x \partial_x +y \partial_y +a_j) \prod_{j=1}^k (y \partial_y +c_j) -{\partial_y} \prod_{j=1}^l (x
\partial_x + y \partial_y -1+\alpha_j) \prod_{j=1}^n (y \partial_y -1+\gamma_j) = 0.
\end{eqnarray}
The differential operators for the triple hypergeometric series \cite{SRIKARL} read
{
\begin{eqnarray}
D_{3,1} &=& A+(B_0+B_1 x) \partial_x +x (E_0+E_1 x) \partial_x^2 + C_1 y \partial_y +{F_1} y^2 \partial_y^2 +(H_0+H_1 x) y
\partial_{x,y}^2
\nonumber\\ &&
+D_1 z \partial_z
+G_1 z^2 \partial_z^2 +(L_0+L_1 x) z \partial_{x,z}^2 + S_1 y z \partial_{y,z}^2
\\
D_{3,2} &=& A'+{B_1'} x \partial_x +E_1' x^2 \partial_x^2 +(C_0'+C_1' y) \partial_y +y (F_0'+F_1' y)
\partial_y^2+x
(H_2'+H_1' y) \partial_{x,y}^2
\nonumber\\ &&
+ D_1' z \partial_z
+ G_1' z^2 \partial_z^2 + L_1' x z \partial_{x,z}^2
+(S_0'+S_1' y) z \partial_{y,z}^2
\\
D_{3,3} &=&
A''+ B_1'' x \partial_x + {E''_1} x^2 \partial_x^2 + C_1'' y \partial_y + {F_1''} y^2 \partial_y^2
+ H_1'' x y \partial_{x,y}^2 + (D''_0 + D_1'' z) \partial_z
\nonumber\\ &&
+ z (G_0'' + G_1'' z) \partial_z^2
+x {(L_2'' + L''_1 z)} \partial_{x,z}^2 + y (S_2''+ S_1'' z) \partial_{y,z}^2 .
\end{eqnarray}
}
The differential operators for the quadruple versions are given by
{
\begin{eqnarray}
D_{4,1} &=&
A + E_1 t \partial_t +L_1 t^2 \partial_t^2 +(B_0+B_1 x)\partial_x +x (F_0+F_1 x) \partial_x^2+t
(P_0+P_1 x) \partial_{t,x}^2 +C_1 y \partial_y
\nonumber\\ &&
+G_1 y^2 \partial_y^2 +R_1 t y \partial_{t,y}^2 + (M_0+M_1 x) y
\partial_{x,y}^2 + D_1 z \partial_z + H_1 z^2 \partial_z^2 + S_1 t z \partial_{t,z}^2
\nonumber\\ &&
+(N_0+N_1 x) z
\partial_{x,z}^2 +Q_1 y z \partial_{y,z}^2
\\
D_{4,2} &=&
A'+E_1' t \partial_t + L_1' t^2 \partial_t^2 + B_1' x \partial_x +F_1' x^2 \partial_x^2 + P_1' t x
\partial_{t,x}^2+(C_0'+ C_1' y) \partial_y
+y (G_0' + G_1' y) \partial_y^2
\nonumber\\ &&
+ t (R_0'+R_1' y) \partial_{t,y}^2 + x (M_2' + M_1' y)
\partial_{x,y}^2 + D_1' z \partial_z +H_1' z^2 \partial_z^2 +S_1' t z \partial_{t,z}^2 + N_1' x z
\partial_{x,z}^2
\nonumber\\ &&
+ (Q_0'+Q_1' y) z \partial_{y,z}^2
\\
D_{4,3} &=&
A''+E_1'' t \partial_t + L_1'' t^2 \partial_t^2 + B_1'' x \partial_x + F_1'' x^2 \partial_x^2
+ P_1'' t x \partial_{t,x}^2 + C_1'' y \partial_y + G_1'' y^2 \partial_y^2 + R_1'' t y \partial_{t,y}^2
\nonumber\\ &&
+ M_1'' x
y \partial_{x,y}^2
+(D_0'' + D_1'' z) \partial_z + z (H_0''+H_1'' z) \partial_z^2 + t (S_0''+S_1'' z) \partial_{t,z}^2+x
(N''_2 + N_1'' z) \partial_{x,z}^2
\nonumber\\ &&
+ y (Q_2''+Q_1'' z) \partial_{y,z}^2
\\
D_{4,4} &=&
A''' + (E_0''' + E_1''' t) \partial_t + t (L_0''' + L_1''' t) \partial_t^2 + B_1''' x \partial_x
+ F_1''' x^2 \partial_x^2 + (P_2''' + P_1''' t) x \partial_{t,x}^2
\nonumber\\ &&
+ C_1''' y \partial_y + G_1''' y^2
\partial_y^2 + (R_2''' + R_1''' t) y \partial_{t,y}^2 + M_1''' x y \partial_{x,y}^2 + D_1''' z \partial_z + H_1''' z^2
\partial_z^2
\nonumber\\ &&
+ (S_2'''+ S_1''' t) z \partial_{t,z}^2 + N_1''' x z \partial_{x,z}^2 + Q_1''' y z \partial_{y,z}^2.
\end{eqnarray}
}
They cover the functions $K_i,~~i = 1 ... 21$ of Refs.~\cite{EXTON72a,EXTON1}.
\section{The Recursions}
\label{sec:reclist}
\vspace*{1mm}
\noindent
The formal power series ansatz (\ref{eq:TAYLOR}) allows one to obtain difference equations for the expansion
coefficients from the differential equations given in Section~\ref{sec:delist}.\footnote{There are also contiguous relations
for the corresponding functions, cf.~\cite{CONTIG,Kalmykov:2020cqz}.}
The following recursions are obtained:
\begin{eqnarray}
_2F_1 &:&
(C + n (1 - n + B_1)) f[n] + (1 + n) (n + A_1) f[1 + n] = 0
\label{eq:R2F1}
\\
_3F_2 &:&
{
\big( n B_1 + (n-1)n B_2 +C-(n-2) (n-1) n \big) f[n]
}
\nonumber\\ &&
{
+ \big( (n+1) A_1 + n (n+1) A_2 + (n-1)n(n+1)\big) f[n+1] = 0
}
\label{eq:R3F2}
\\
_{p+1}F_p &:&
{
\Biggl[\frac{C}{n!} - \frac{1}{(n-p-1)!} + \sum_{k=1}^p \frac{B_k}{(n-k)!} \Biggr] f[n]
}
\nonumber\\ &&
{
+ (n+1) \Biggl[ \frac{1}{(n-p)!} + \sum_{k=1}^p \frac{A_k}{(n-k+1)!} \Biggr] f[n+1] = 0.
}
\end{eqnarray}
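As a consistency check of the $_2F_1$ recurrence (\ref{eq:R2F1}): inserting the series coefficients $f[n] = (a)_n (b)_n / ((c)_n\, n!)$ together with the parameter dictionary $A_1 = c$, $B_1 = -(a+b+1)$, $C = -ab$ of Eq.~(\ref{eq:D2F1g}) must give zero:

```python
import sympy as sp

a, b, c, n = sp.symbols("a b c n")

# For Gauss' 2F1 the coefficients are f[n] = (a)_n (b)_n / ((c)_n n!),
# so  f[n+1]/f[n] = (a+n)(b+n) / ((c+n)(n+1)).
ratio = (a + n) * (b + n) / ((c + n) * (n + 1))

# Parameters of the generic operator (eq:D2F1g) for 2F1:
A1, B1, C = c, -(a + b + 1), -a * b

# The recurrence (eq:R2F1), divided through by f[n]:
residual = (C + n * (1 - n + B1)) + (n + 1) * (n + A1) * ratio
```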
In the two--variable cases the expansion coefficients of the Horn--type functions obey
\begin{eqnarray}
&&\Big[a+b m+e (m-1) m+n \big(f+h m+j (n-1)\big)\Big] f[m,n]
\nonumber\\&&\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad
+(1+m) (c+d m+g n) f[1+m,n]=0,
\\
&&\Big[a_1+f_1 m+j_1 (m-1) m+n \big(b_1+h_1 m+e_1 (n-1)\big)\Big] f[m,n]
\nonumber\\&&\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad
+(1+n) (c_1+g_1 m+d_1 n) f[m,1+n]=0,
\end{eqnarray}
and for the $S_1$-functions one has
\begin{eqnarray}
&&\Big[a+b m+e (m-1) m+f n+h m n+j (n-1) n+(m-2) (m-1) m p+(m-1) m n r
\nonumber\\&&
+m (n-1) n s\Big] f[m,n]
+(1+m) \Big[c+g n+m \big(d+l (m-1)+n q\big)\Big] f[1+m,n]=0,
\\
&&\Big[a_1+f_1 m+j_1 (m-1) m+n \big(b_1+h_1 m+e_1 (n-1)\big)\Big] f[m,n]
\nonumber\\&&
+(1+n) (c_1+g_1 m+d_1 n) f[m,1+n]=0,
\end{eqnarray}
as, likewise, for the
$S_2$-functions
\begin{eqnarray}
&&\Big[a+b m+e (m-1) m+f n+h m n+j (n-1) n+(m-2) (m-1) m p+(m-1) m n r+
\nonumber\\&&
m (n-1) n s\Big] f[m,n]+(1+m) \Big[c+g n+m \big(d+l (-1+m)+n q\big)\Big] f[1+m,n]=0,
\\
&&\Big[a_1+f_1 m+n \big(b_1+h_1 m+e_1 {(n-1)}\big)\Big] f[m,n]+(1+n) \Big[c_1+n (d_1+(-1+n) p_1)
\nonumber\\&&
+m \big(g_1+j_1 (m-1)+n q_1\big)\Big] f[m,1+n]=0.
\end{eqnarray}
For the expansion coefficients $f[r,s]$ of the Kamp\'e de F\'eriet functions the recurrences read
\begin{eqnarray}
\prod_{j=1}^p (r+s+a_j) \prod_{j=1}^q (r+b_j) f[r,s] - (r+1) \prod_{j=1}^l (r+s+\alpha_j) \prod_{j=1}^m (r+\beta_j)
f[r+1,s] = 0,
\\
\prod_{j=1}^p (r+s+a_j) \prod_{j=1}^k (s+c_j) f[r,s] - (s+1) \prod_{j=1}^l (r+s+\alpha_j) \prod_{j=1}^n (s+\gamma_j)
f[r,s+1] =0.
\end{eqnarray}
The coefficients in the 3-variable cases obey
\begin{eqnarray}
&&\Big[A+ B_1 m+ E_1 (m-1) m+ C_1 n+ H_1 m n+ F_1 (n-1) n+ D_1 p+ L_1 m p+ G_1 (p-1) p
\nonumber\\&&
+n p S_1 \Big] f[m,n,p]+(1+m) ( B_0 + E_0 m+ H_0 n+ L_0 p) f[1+m,n,p]=0,
\\
&&\Big[ A' + B'_1 m+ E'_1 (m-1) m+ C'_1 n+ H'_1 m n+ F'_1 (n-1) n+ D'_1 p+ L'_1 m p+ G'_1 (p-1) p
\nonumber\\&&
+n p S'_1 \Big] f[m,n,p]+(1+n) ( C'_0 + H'_2 m+ F'_0 n+p S'_0 ) f[m,1+n,p]=0,
\\
&&\Big[ A'' + B''_1 m+ E''_1 (m-1) m+ C''_1 n+ H''_1 m n+ F''_1 (n-1) n+ D''_1 p+ L''_1 m p+ G''_1 (p-1) p
\nonumber\\&&
+n p S''_1 \Big] f[m,n,p]+(1+p) ( D''_0 + L''_2 m+ G''_0 p+n S''_2 ) f[m,n,1+p]=0.
\end{eqnarray}
For the 4-variable systems one has
\begin{eqnarray}
&&\Big[A+ B_1 m+ F_1 (m-1) m+ C_1 n+m M_1 n+ G_1 (n-1) n+ D_1 p+m N_1 p+ H_1 (p-1) p
\nonumber\\&&
+ E_1 q+m P_1 q+ L_1 (q-1) q+n p Q_1 +n q R_1 +p q S_1 \Big] f[m,n,p,q]+(1+m) ( B_0 + F_0 m
\nonumber\\&&
+ M_0 n+ N_0 p+ P_0 q) f[1+m,n,p,q]=0,
\\[2mm]
&&\Big[ A' + B'_1 m+ F'_1 (m-1) m+ C'_1 n+m M'_1 n+ G'_1 (n-1) n+ D'_1 p+m N'_1 p+ H'_1 (p-1) p
\nonumber\\&&
+ E'_1 q+m P'_1 q+ L'_1 (q-1) q+n p Q'_1 +n q R'_1 +p q S'_1 \Big] f[m,n,p,q]+(1+n) ( C'_0 +m M'_2
\nonumber\\&&
+ G'_0 n+p Q'_0 +q R'_0 ) f[m,1+n,p,q]=0,
\\[2mm]
&&\Big[ A'' + B''_1 m+ F''_1 (m-1) m+ C''_1 n+m M''_1 n+ G''_1 (n-1) n+ D''_1 p+m N''_1 p+ H''_1 (p-1) p
\nonumber\\&&
+ E''_1 q+m P''_1 q+ L''_1 (q-1) q+n p Q''_1 +n q R''_1 +p q S''_1 \Big] f[m,n,p,q]+(1+p) ( D''_0 +m N''_2
\nonumber\\&&
+ H''_0 p+n Q''_2 +q S''_0 ) f[m,n,1+p,q]=0,
\\[2mm]
&&\Big[ A''' + B'''_1 m+ F'''_1 (m-1) m+ C'''_1 n+m M'''_1 n+ G'''_1 (n-1) n+ D'''_1 p+m N'''_1 p
\nonumber\\&&
+ H'''_1 (p-1) p+ E'''_1 q+m P'''_1 q+ L'''_1 (q-1) q+n p Q'''_1 +n q R'''_1 +p q S'''_1 \Big] f[m,n,p,q]
\nonumber\\&&
+(1+q) ( E'''_0 +m P'''_2 + L'''_0 q+n R'''_2 +p S'''_2 ) f[m,n,p,1+q]=0.
\end{eqnarray}
\section{The Solution of the Recursions}
\label{sec:recsol}
Let ${\mathbb K}$ be a field of characteristic $0$ (i.e., a field that contains $\mathbb Q$ as subfield).
A power series in $r$ variables
\begin{equation}
f(x_1,\ldots,x_r) = \sum_{n_i\ge0} A(n_1,\ldots,n_r) x_1^{n_1}\cdots x_r^{n_r}
\label{eq:hyp-series}
\end{equation}
is called a multiple hypergeometric series if the multivariate sequence $A:\mathbb{N}^r\to{\mathbb K}$ is hypergeometric, i.e., we have
\begin{equation}
s_i\,A(n_1,\ldots,n_i,\ldots,n_r) = t_i\,A(n_1,\ldots,n_i+1,\ldots,n_r) , \qquad i=1,\ldots,r
\label{eq:hyp-ratio}
\end{equation}
for coprime polynomials $s_i,t_i\in{\mathbb K}[n_1,\dots,n_r]$.
Often the hypergeometric sequence $A$ is given in terms of binomial coefficients, Pochhammer symbols,
$\Gamma$--functions and related special functions. However, in concrete applications one often starts with
a given system of partial linear differential equations and searches for a hypergeometric series solution as
specified above. Plugging this ansatz into the equations and comparing coefficients w.r.t.\ $x_1^{n_1}\cdots x_r^{n_r}$ yields a system of partial linear difference equations; for concrete examples see Section~\ref{sec:reclist}. In the general case not many methods are known that support the user in solving these difference equations; for some first steps in this direction we refer the reader to Section~\ref{sec:PLDEsolver}. In the following we concentrate on first--order systems
of the form~\eqref{eq:hyp-ratio}.
We remark that in many concrete applications such a system~\eqref{eq:hyp-ratio} can be found. In particular, this is the case if the underlying system of linear differential equations is of the form
\begin{equation}
\Big[
s_i \Bigl(x_1\frac{\partial}{\partial x_1}, \ldots, x_i \frac{\partial}{\partial x_i}, \ldots, x_r \frac{\partial}{\partial x_r} \Bigr)
- \frac{1}{x_i} t_i \Bigl(x_1 \frac{\partial}{\partial x_1}, \ldots, x_i\frac{\partial}{\partial x_i}-1, \ldots, x_r \frac{\partial}{\partial x_r} \Bigr)
\Big] f(x_1,\ldots,x_r)
= 0.
\label{eq:DEsystem}
\end{equation}
To show this, we utilize the crucial property
\begin{equation}
x_i \frac{\partial}{\partial x_i} x_1^{n_1}\cdots x_r^{n_r} = n_i x_1^{n_1}\cdots x_r^{n_r}
\end{equation}
which implies that for a polynomial $p(n_1,\ldots,n_r)$ we have
\begin{equation}
p \Big(x_1\frac{\partial}{\partial x_1}, \ldots, x_i \frac{\partial}{\partial x_i}, \ldots, x_r \frac{\partial}{\partial x_r} \Bigr) x_1^{n_1}\cdots x_r^{n_r} = p(n_1,\ldots,n_r) x_1^{n_1}\cdots x_r^{n_r} .
\end{equation}
Thus
\begin{align}
\label{eq:t-term}
&\Big[ s_i \Bigl(x_1\frac{\partial}{\partial x_1}, \ldots, x_i \frac{\partial}{\partial x_i}, \ldots, x_r \frac{\partial}{\partial x_r} \Bigr) \Big] f(x_1,\ldots,x_r)
\nonumber\\
&\qquad \qquad \qquad \qquad \qquad = \sum_{n_i\ge 0} s_i(n_1,\ldots,n_i,\ldots,n_r) A(n_1,\ldots,n_i,\ldots,n_r) x_1^{n_1}\cdots x_r^{n_r} \\
&\Big[ t_i \Bigl(x_1 \frac{\partial}{\partial x_1}, \ldots, x_i\frac{\partial}{\partial x_i}-1, \ldots, x_r \frac{\partial}{\partial x_r} \Bigr) \Big] f(x_1,\ldots,x_r)
\nonumber\\
&\qquad \qquad \qquad \qquad \qquad = \sum_{n_i\ge 0} t_i(n_1,\ldots,n_i-1,\ldots,n_r) A(n_1,\ldots,n_i,\ldots,n_r) x_1^{n_1}\cdots x_r^{n_r}
\end{align}
and therefore, dividing the second equation by $x_i$,
\begin{align}
\label{eq:s-term}
&\Big[ \frac{1}{x_i} t_i \Bigl(x_1 \frac{\partial}{\partial x_1}, \ldots, x_i\frac{\partial}{\partial x_i}-1, \ldots, x_r \frac{\partial}{\partial x_r} \Bigr) \Big] f(x_1,\ldots,x_r)
\nonumber\\
& \qquad \qquad \qquad \qquad \qquad = \sum_{n_i\ge 0}
t_i(n_1,\ldots,n_i-1,\ldots,n_r) A(n_1,\ldots,n_i,\ldots,n_r)
x_1^{n_1}\cdots x_i^{n_i-1} \cdots x_r^{n_r}.
\end{align}
The coefficient of the term $x_1^{n_1}\cdots x_i^{n_i}\cdots x_r^{n_r}$ in
\eqref{eq:t-term} and in
\eqref{eq:s-term} is respectively
\begin{equation}
s_i(n_1,\ldots,n_i,\ldots,n_r) A(n_1,\ldots,n_i,\ldots,n_r)
\end{equation}
and
\begin{equation}
t_i(n_1,\ldots,n_i,\ldots,n_r) A(n_1,\ldots,n_i+1,\ldots,n_r).
\end{equation}
This shows, due to \eqref{eq:hyp-ratio}, that~\eqref{eq:DEsystem} holds.
For example, for the case of the Gauss hypergeometric function
\begin{equation}
_2F_1(a,b;c;x) = \sum_{n\ge 0} \frac{(a)_n (b)_n}{(c)_n n!}x^n
\end{equation}
one has
\begin{eqnarray}
A(n) &=& \frac{(a)_n (b)_n}{(c)_n n!} \\
s(n) &=& (a+n)(b+n) \\
t(n) &=& (n+1)(c+n)
\end{eqnarray}
and the differential equation obeyed by $_2F_1(a,b;c;x)$ is, from \eqref{eq:DEsystem},
\begin{equation}
\Big[ \Big(a+ x \frac{\partial}{\partial x} \Big) \Big(b+ x \frac{\partial}{\partial x} \Big) - \frac{1}{x} \Big(x \frac{\partial}{\partial x}\Big) \Big( x \frac{\partial}{\partial x} -1 +c \Big) \Big] {}_2F_1(a,b;c;x) = 0.
\end{equation}
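This correspondence is easy to check numerically. The following Python sketch (with illustrative parameter values of our choosing; exact rational arithmetic throughout) verifies that the coefficient $A(n)$ of $_2F_1$ satisfies the first--order relation $s(n)\,A(n)=t(n)\,A(n+1)$:

```python
from fractions import Fraction
from math import factorial

def poch(x, n):
    """Pochhammer symbol (x)_n = x (x+1) ... (x+n-1)."""
    r = Fraction(1)
    for k in range(n):
        r *= x + k
    return r

# illustrative rational parameters (an assumption; any generic values work)
a, b, c = Fraction(1, 2), Fraction(3), Fraction(5, 2)

def A(n):
    return poch(a, n) * poch(b, n) / (poch(c, n) * factorial(n))

def s(n):  # s(n) = (a+n)(b+n)
    return (a + n) * (b + n)

def t(n):  # t(n) = (n+1)(c+n)
    return (n + 1) * (c + n)

# the first-order relation s(n) A(n) = t(n) A(n+1) holds for all n >= 0
checks = all(s(n) * A(n) == t(n) * A(n + 1) for n in range(20))
```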
\vspace*{1mm}
\noindent
\subsection{An algorithm for hypergeometric products}\label{sec:solveProd}
Given a hypergeometric sequence $A$ with~\eqref{eq:hyp-ratio}, we seek a representation in terms of indefinite nested products that can be modeled, e.g., within the summation package \texttt{Sigma}.
For the univariate case $r=1$ this task is immediate. Since $s_1,t_1\in{\mathbb K}[n_1]$ have only finitely many roots, there is a $\lambda_1\in\mathbb{N}$ such that $s_1(k)\neq0\neq t_1(k)$ for all $k\geq\lambda_1$. Thus for $n_1\geq\lambda_1$ we get
\begin{equation}\label{Equ:Prod1}
\begin{split}
A(n_1)&=\frac{s_1(n_1-1)}{t_1(n_1-1)}A(n_1-1)\\
&=\frac{s_1(n_1-1)}{t_1(n_1-1)}\frac{s_1(n_1-2)}{t_1(n_1-2)}A(n_1-2)=\dots=\left(\prod_{k=\lambda_1+1}^{n_1}\frac{s_1(k-1)}{t_1(k-1)}\right)A(\lambda_1)
\end{split}
\end{equation}
where $A(n_1)$ can be written in terms of the hypergeometric product $\prod_{k=\lambda_1+1}^{n_1}\frac{s_1(k-1)}{t_1(k-1)}$
which is nonzero for each $n_1\geq0$. In other words, a hypergeometric sequence is either trivial, i.e., it is the $0$ sequence from a certain point on (if $A(\lambda_1)=0$), or it is nonzero for all $n_1\geq\lambda_1$.
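The telescoping argument above can be illustrated with a short Python sketch; the polynomials $s_1,t_1$ below are illustrative choices without nonnegative integer roots, so that $\lambda_1=0$ can be used:

```python
from fractions import Fraction

# illustrative polynomials s1, t1 with no nonnegative integer roots (lambda_1 = 0)
def s1(k): return (Fraction(1, 2) + k) * (Fraction(3, 4) + k)
def t1(k): return (k + 1) * (Fraction(5, 2) + k)

lam = 0
A_init = Fraction(1)   # A(lambda_1), assumed nonzero

def A_by_recurrence(n):
    v = A_init
    for k in range(lam, n):
        v = v * s1(k) / t1(k)    # A(k+1) = s1(k)/t1(k) * A(k)
    return v

def A_by_product(n):
    p = Fraction(1)
    for k in range(lam + 1, n + 1):
        p *= s1(k - 1) / t1(k - 1)
    return p * A_init

# the iterated recurrence and the closed product representation agree
same = all(A_by_recurrence(n) == A_by_product(n) for n in range(15))
```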
Next, we turn to the multivariate case.
As introduced in~\cite{AP:02} we call a sequence non--trivial if its zeros lie on the zero set of a nonzero
polynomial from ${\mathbb K}[n_1,\dots,n_r]$. In other words, $A$ is almost everywhere a nonzero sequence. An important consequence of~\cite[Prop~4]{AP:02} is that for such a hypergeometric sequence with~\eqref{eq:hyp-ratio} the following compatibility property holds for $R_i=\frac{s_i}{t_i}\in{\mathbb K}(n_1,\dots,n_r)$: for $1\leq i\leq j\leq r$,
\begin{equation}\label{Equ:CompatibilityProp}
\frac{R_i(n_1,\dots,n_j+1,\dots,n_r)}{R_i(n_1,\dots,n_j,\dots,n_r)}=\frac{R_j(n_1,\dots,n_i+1,\dots,n_r)}{R_j(n_1,\dots,n_i,\dots,n_r)}.
\end{equation}
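For a concrete non--trivial example one may take $A(m,n)=(\alpha)_{m+n}/(m!\,n!)$; a minimal Python check of~\eqref{Equ:CompatibilityProp} with exact rational arithmetic (the parameter $\alpha$ is an illustrative choice):

```python
from fractions import Fraction

alpha = Fraction(2, 3)  # illustrative parameter (an assumption)

# A(m,n) = (alpha)_{m+n} / (m! n!) has the shift quotients
def R1(m, n): return (alpha + m + n) / (m + 1)   # A(m+1,n)/A(m,n)
def R2(m, n): return (alpha + m + n) / (n + 1)   # A(m,n+1)/A(m,n)

def compatible(m, n):
    # compatibility property: shifting R1 in n matches shifting R2 in m
    return R1(m, n + 1) / R1(m, n) == R2(m + 1, n) / R2(m, n)

ok = all(compatible(m, n) for m in range(8) for n in range(8))
```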
In particular, the Ore-Sato Theorem~\cite{OS:90} holds: $A$ can be written as a product in terms of geometric products and factorial terms; for a rigorous (and rather involved) proof see~\cite{AP:02} and for further generalizations see~\cite{OSGeneral}.
In the following we will introduce a special case of the Ore-Sato theorem that deals with the problem of representing $A$ in terms of hypergeometric products which are valid for all $(n_1,\dots,n_r)\in\mathbb N^r$ with the $n_i$ chosen sufficiently large.
This is precisely the situation that we require for hypergeometric power series as given in~\eqref{eq:hyp-series}.
As it turns out, such a representation is always possible if we require the following additional assumptions (which hold in the univariate case automatically): we can choose\footnote{In general there is no algorithm by the Davis--Matiyasevich--Putnam--Robinson
theorem~\cite{Hilbert10} that can decide if there is an integer root (or even infinitely many integer roots). However, in our applications the polynomials are usually small, mostly even linear and thus such integers $\lambda_i$ can be determined.} $\lambda_i\in\mathbb N$ such that for all $(n_1,\dots,n_r)\in\mathbb N^r$ with $n_i\geq\lambda_i$ we have
$$s_i(n_1,\dots,n_r)\neq0\neq t_i(n_1,\dots,n_r).$$
Therefore~\eqref{eq:hyp-ratio} is equivalent to
\begin{equation}
A(n_1,\ldots,n_i+1,\ldots,n_r) = R_i(n_1,\dots,n_r)\,A(n_1,\ldots,n_i,\ldots,n_r) , \qquad i=1,\ldots,r
\end{equation}
with $R_i(n_1,\dots,n_r)\neq0$ for all $n_i\geq\lambda_i$.
Applying these relations iteratively shows that for any $(n_1,\dots,n_r)\in\mathbb N^r$ with $n_i\geq\lambda_i$
there is a $c\in{\mathbb K}\setminus\{0\}$ such that
$$A(n_1,\dots,n_r)=c\,A(\lambda_1,\dots,\lambda_r).$$
Similarly to the univariate case we get the following consequence: $A(n_1,\dots,n_r)$ is the zero sequence (for all $n_i\geq\lambda_i$) if $A(\lambda_1,\dots,\lambda_r)=0$ or it is nonzero for all $n_i\geq\lambda_i$ otherwise.
\medskip
\textit{Remark:} In the second case all zeros of $A$ are confined to the region where some $n_i<\lambda_i$ holds and thus lie on the zero set of a suitably chosen nonzero polynomial. Thus we can apply~\cite[Prop~4]{AP:02} as above. If the compatibility criterion~\eqref{Equ:CompatibilityProp} does not hold, then $A$ must be the zero sequence or the hypergeometric system is inconsistent.
\medskip
With these properties there is a simple algorithm which finds a product representation of $A$ of the form
\begin{equation}\label{Equ:FinalProdForm}
c\,\left(\prod_{k=\lambda_1}^{n_1}h_1(k,n_2,\dots,n_r)\right)\left(\prod_{k=\lambda_2}^{n_2}h_2(k,n_3,\dots,n_r)\right)\dots \left(\prod_{k=\lambda_r}^{n_r}h_r(k)\right)
\end{equation}
with $c=A(\lambda_1,\dots,\lambda_r)\in{\mathbb K}\setminus\{0\}$ and $h_i(x,n_{i+1},\dots,n_r)\in{\mathbb K}(x,n_{i+1},\dots,n_r)$ with $1\leq i\leq r$. In particular, we have that $\prod_{k=\lambda_i}^{n_i}h_i(k,n_{i+1},\dots,n_r)\neq0$ for all $n_i\geq\lambda_i$ with $i=1,\dots,r$.
If $r=1$, such a product can be derived immediately with~\eqref{Equ:Prod1} and $c=A(\lambda_1)$. Otherwise, the algorithm works by recursion (induction on $r>1$). As in the case $r=1$ it follows that we can write
$$A(n_1,\dots,n_r)=A(\lambda_1,n_2,\dots,n_r)\prod_{k=\lambda_1+1}^{n_1}h_1(k,n_2,\dots,n_r)$$
with $h_1(k,n_2,\dots,n_r)=R_1(k-1,n_2,\dots,n_r)=\frac{s_1(k-1,n_2,\dots,n_r)}{t_1(k-1,n_2,\dots,n_r)}$; note that
$A(\lambda_1,n_2,\dots,n_r)\neq0$ for all $(n_2,\dots,n_r)\in\mathbb N^{r-1}$ with $n_i\geq\lambda_i$.
Now consider the multivariate sequence
$$A'(n_2,\dots,n_r):=A(\lambda_1,n_2,\dots,n_r)$$
which satisfies
$$A'(n_2,\ldots,n_i+1,\ldots,n_r) = R_i(\lambda_1,n_2,\dots,n_r)\,A'(n_2,\ldots,n_i,\ldots,n_r) , \qquad i=2,\ldots,r$$
with $R_i(\lambda_1,n_2,\dots,n_r)\in{\mathbb K}(n_2,\dots,n_r)$, where
$R_i(\lambda_1,n_2,\dots,n_r)\neq0$ for all $n_i\geq\lambda_i$.
Obviously, $A'$ is again hypergeometric with all the assumptions (in particular satisfying the compatibility criteria in~\eqref{Equ:CompatibilityProp}) and we can proceed by induction/recursion. Thus we get
$$A'(n_2,\dots,n_r)=c\,\left(\prod_{k=\lambda_2}^{n_2}h_2(k,n_3,\dots,n_r)\right)
\dots \left(\prod_{k=\lambda_r}^{n_r}h_r(k)\right),$$
with
$c=A'(\lambda_2,\dots,\lambda_r)=A(\lambda_1,\lambda_2,\dots,\lambda_r)\in{\mathbb K}\setminus\{0\}$
and $h_i(x,n_{i+1},\dots,n_r)\in{\mathbb K}(x,n_{i+1},\dots,n_r)$ with $2\leq i\leq r$.
This finally shows~\eqref{Equ:FinalProdForm}.
We remark that in all the examples of this article we can set $\lambda_i=0$ for $1\leq i\leq r$.
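The recursive construction can be sketched in a few lines of Python. The bivariate sequence $A(m,n)=(\alpha)_{m+n}/(m!\,n!)$ serves as an illustrative example with $\lambda_1=\lambda_2=0$ and $c=A(0,0)=1$ ($\alpha$ is an assumed parameter value):

```python
from fractions import Fraction
from math import factorial

alpha = Fraction(2, 3)                       # illustrative parameter

def poch(x, n):
    r = Fraction(1)
    for k in range(n):
        r *= x + k
    return r

def A_direct(m, n):                          # A(m,n) = (alpha)_{m+n}/(m! n!)
    return poch(alpha, m + n) / (factorial(m) * factorial(n))

def R1(m, n): return (alpha + m + n) / (m + 1)
def R2(m, n): return (alpha + m + n) / (n + 1)

def A_by_products(m, n, c=Fraction(1)):
    """Nested-product form with lambda_1 = lambda_2 = 0:
       A(m,n) = c * prod_{k=1}^{m} R1(k-1, n) * prod_{k=1}^{n} R2(0, k-1)."""
    v = c
    for k in range(1, m + 1):
        v *= R1(k - 1, n)                    # strip the m-dependence first
    for k in range(1, n + 1):
        v *= R2(0, k - 1)                    # then recurse on A'(n) = A(0, n)
    return v

agree = all(A_direct(m, n) == A_by_products(m, n)
            for m in range(8) for n in range(8))
```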
\subsection{Examples}\label{sec:exProdExp}
Let us illustrate the solution of some of the recursions in explicit form. Here we refer first to the general
representation of the corresponding differential and difference equations. We consider the differential equation
(\ref{eq:D2F1g}) which leads to the recurrence (\ref{eq:R2F1}) for the expansion coefficient $f[n]$. The recurrence is
of first order and is solved with the initial value $f[0] = 1$. {\tt Sigma} obtains the following product solution
\begin{eqnarray}
f[n] =
\frac{\prod_{i_1=1}^n \big(
2
+B_1
-C
-3 i_1
-B_1 i_1
+i_1^2
\big)}{n! (A_1)_n}
\equiv \frac{\prod_{i_1=1}^n \big[-C +B_1(1-i_1) + (1-i_1)(2-i_1)
\big]}{n! (A_1)_n},
\label{eq:T1}
\nonumber\\
\end{eqnarray}
which is not yet expressed by Pochhammer symbols.
{\tt Mathematica} allows one to obtain the factorization of the
product in (\ref{eq:T1}) in terms of Pochhammer symbols by
\begin{eqnarray}
f[n] = \frac{(\alpha_1)_n (\alpha_2)_n}{(A_1)_n n!},
\end{eqnarray}
with
\begin{eqnarray}
\alpha_{1(2)} = -\frac{1}{2}(1+B_1) \mp \frac{1}{2} \sqrt{(1+B_1)^2 + 4 C}.
\end{eqnarray}
Replacing $A_1, B_1$ and $C$ according to
\begin{eqnarray}
C \rightarrow -a b,~~A_1 \rightarrow c,~~B_1 \rightarrow -1 - a - b
\end{eqnarray}
one obtains
\begin{eqnarray}
f[n] = \frac{(a)_n (b)_n}{(c)_n n!}.
\end{eqnarray}
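The equivalence of the product solution (\ref{eq:T1}) with this Pochhammer form can be confirmed directly; a Python check with exact rational arithmetic and illustrative values for $a,b,c$ (any generic rationals work):

```python
from fractions import Fraction
from math import factorial

def poch(x, n):
    r = Fraction(1)
    for k in range(n):
        r *= x + k
    return r

# illustrative 2F1 parameters (an assumption)
a, b, c = Fraction(1, 3), Fraction(5, 2), Fraction(7, 4)

# substitutions C -> -a b, A_1 -> c, B_1 -> -1 - a - b
C, A1, B1 = -a * b, c, -1 - a - b

def f_product(n):
    """Product solution: prod [-C + B1(1-i1) + (1-i1)(2-i1)] / (n! (A1)_n)."""
    num = Fraction(1)
    for i1 in range(1, n + 1):
        num *= -C + B1 * (1 - i1) + (1 - i1) * (2 - i1)
    return num / (factorial(n) * poch(A1, n))

match = all(f_product(n) == poch(a, n) * poch(b, n) / (poch(c, n) * factorial(n))
            for n in range(12))
```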
This choice of variables is therefore instrumental in obtaining the simplest structure. However,
it will sometimes not appear naturally in the physical differential equations, requiring associated variable
transformations in general.
This becomes more and more involved in higher hypergeometric cases, as is already illustrated by
the generalized hypergeometric function $_3F_2$. Its differential equation (\ref{eq:D3F2}) implies the recurrence
for $f[n]$ (\ref{eq:R3F2}) with $f[0]=1$, which has the solution
\begin{eqnarray}
f[n] = \frac{\prod_{i_1=1}^n
[-C
+B_1 \big(
1-i_1\big)
-B_2 \big(
2-i_1
\big)
{ \big(1-i_1\big) }
-\big(
3-i_1
\big)
\big(2-i_1
\big)
\big(1-i_1\big)]}
{n!~\prod_{i_1=1}^n
[A_1
-A_2 \big(
1-i_1\big)
+\big(
2-i_1
\big)
\big(1-i_1\big)]}.
\label{eq:3F2a}
\end{eqnarray}
Eq.~(\ref{eq:3F2a}) can be rewritten in terms of radicals by
\begin{eqnarray}
f[n] = \frac{
(\alpha_1)_n
(\alpha_2)_n
(\alpha_3)_n}{ {n!} \big(
-\frac{1}{2}
+\frac{A_2 }{2}
-\frac{z_5}{2}
\big)_n \big(
-\frac{1}{2}
+\frac{A_2 }{2}
+\frac{z_5}{2}
\big)_n},
\end{eqnarray}
with
\begin{eqnarray}
\alpha_1 &=&
1
-\frac{z_4}{3}
+\frac{\sqrt[3]{z_1
+z_2}
}{6 \sqrt[3]{2}}
-\frac{i \sqrt[3]{z_1
+z_2
}}{2 \sqrt[3]{2} \sqrt{3}}
+\frac{\sqrt[3]{-2} z_3}{3 \sqrt[3]{z_1
+z_2
}}
\\
\alpha_2 &=&
1
-\frac{z_4}{3}
-\frac{\sqrt[3]{z_1
+z_2
}}{3 \sqrt[3]{2}}
-\frac{\sqrt[3]{2} z_3}{3 \sqrt[3]{z_1
+z_2
}}
\\
\alpha_3 &=&
1
-\frac{z_4}{3}
+\frac{\sqrt[3]{z_1
+z_2
}}{6 \sqrt[3]{2}}
+\frac{i \sqrt[3]{z_1
+z_2
}}{2 \sqrt[3]{2} \sqrt{3}}
-\frac{(-1)^{2/3} \sqrt[3]{2} z_3}{3 \sqrt[3]{z_1
+z_2
}}
\\
z_1 &=& 27 C + (3 + B_2) (9 B_1 + B_2 (3 + 2 B_2))
\\
z_2 &=& \sqrt{-4 (3 + 3 B_1 +
B_2 (3 + B_2))^3 + (27 C + (3 + B_2) (9 B_1 +
B_2 (3 + 2 B_2)))^2}
\\
z_3 &=& 3 + 3 B_1 + B_2 (3 + B_2)
\\
z_4 &=& 6 + B_2
\\
z_5 &=& \sqrt{-4 A_1 + (A_2-1)^2}.
\end{eqnarray}
After performing the replacements
\begin{eqnarray} A_{{2}} &\rightarrow& b_1+b_2 +1,~~ B_{{2}} \rightarrow -(3 + a_1 + a_2 +a_3),~~A_{{1}} \rightarrow b_1 b_2,
\nonumber\\
B_{{1}} &\rightarrow& -(a_2 a_1 + a_3 a_1 + a_2 a_3 + a_1 +
a_2 +a_3 +1),~~C \rightarrow - a_1 a_2 a_3
\label{eq:VIETA}
\end{eqnarray}
one obtains
\begin{eqnarray}
f[n] = \frac{(a_1)_n (a_2)_n (a_3)_n}{(b_1)_n (b_2)_n n!}.
\end{eqnarray}
One observes that the substitutions (\ref{eq:VIETA}) are related to the root--relations by Vieta's theorem
\cite{VIETA} for the
roots $r_i$ of the algebraic equation
\begin{eqnarray}
x^n + \sum_{k = 1}^{n} a_{n-k} x^{n-k} = 0,
\end{eqnarray}
which obey
\begin{eqnarray}
-a_{n-1} &=& r_1 + ... + r_n
\nonumber\\
a_{n-2} &=& r_1 (r_2 + ... + r_n) + r_2(r_3+ ... + r_n) + ... + r_{n-1} r_n
\nonumber\\
&\vdots&
\nonumber\\
(-1)^n a_0 &=& r_1 ... r_n.
\end{eqnarray}
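Vieta's relations are verified by expanding $\prod_i(x-r_i)$ and comparing the coefficients with the elementary symmetric polynomials of the roots; a brief Python sketch with illustrative rational roots:

```python
from fractions import Fraction
from itertools import combinations
from functools import reduce
from operator import mul

roots = [Fraction(1, 2), Fraction(-3), Fraction(2, 5), Fraction(4)]  # illustrative
n = len(roots)

def esym(k):
    """k-th elementary symmetric polynomial of the roots."""
    return sum(reduce(mul, comb, Fraction(1)) for comb in combinations(roots, k))

# expand prod_i (x - r_i); coeffs[j] is the coefficient of x^j
coeffs = [Fraction(1)]
for r in roots:
    new = [Fraction(0)] * (len(coeffs) + 1)
    for i, ci in enumerate(coeffs):
        new[i + 1] += ci        # x * ci x^i
        new[i] -= r * ci        # -r * ci x^i
    coeffs = new

# Vieta: the coefficient of x^{n-k} equals (-1)^k e_k(r_1,...,r_n)
vieta_ok = all(coeffs[n - k] == (-1) ** k * esym(k) for k in range(n + 1))
```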
The above procedures apply in all cases that can be solved by a single recurrence at a time.
Considering the generalized hypergeometric function $_{p+1}F_p$ the product--solution for the expansion coefficient
reads
\begin{eqnarray}
{
f[n] =
\prod_{i=1}^n \frac{\frac{1}{(i-p-2)!} - \sum_{k=1}^p \frac{B_k}{(i-k-1)!} - \frac{C}{(i-1)!}}
{\frac{i}{(i-p-1)!} + \sum_{k=1}^p \frac{A_k i}{(i-k)!} }
}
\end{eqnarray}
and one may factorize the corresponding product as in the above examples. However, the corresponding roots
can in general only be obtained numerically. Still, one may work with the corresponding symbolic expressions.
This is in particular useful w.r.t.\ their expansion in the dimensional parameter $\varepsilon$, which is contained
in the quantities $A_n, B_n$ and $C$ in polynomial form.
We turn now to the multivariate case. Here we have explicit formal solutions which apply to
all concrete cases listed in the appendix, resp.\ in the attached files, but may cover even more cases.
To verify this for concrete parameter settings one is advised to check
whether these particular solutions obey the
corresponding difference equations.
For the Horn--type functions one obtains from Eqs.~\eqref{APP1}, \eqref{APP2} the product solution for $f[m,n]$
\begin{eqnarray}
f_{\rm H}[m,n] &=&
\Biggl[
\prod_{i_1=1}^m \frac{-a
+b
-2 e
-f n
+h n
+j n
-j n^2
-b i_1
+3 e i_1
-h n i_1
-e i_1^2
}{\big(
c
-d
+g n
+d i_1
\big) i_1}\Biggr]
\nonumber\\&& \times
\Biggl[\prod_{i_1=1}^n \frac{-a_1
+b_1
-2 e_1
-b_1 i_1
+3 e_1 i_1
-e_1 i_1^2
}{\big(
c_1
-d_1
+d_1 i_1
\big) i_1}\Biggr]
\label{eq:HORN2}
\end{eqnarray}
and for the functions $S_1$ and $S_2$ one has
\begin{eqnarray}
f_{\rm S_1}[m,n] &=&
\Biggl[
\prod_{i_1=1}^n \frac{- a_1
+ b_1
-2 e_1
- b_1 i_1
+3 e_1 i_1
- e_1 i_1^2
}{\big(
c_1
- d_1
+ d_1 i_1
\big) i_1}\Biggr]
\nonumber\\&& \times
\Biggl[
\prod_{i_1=1}^m
\bigg(
\frac{1}{\big(
c
-d
+2 l
+g n
-n q
+d i_1
-3 l i_1
+n q i_1
+l i_1^2
\big) i_1} (-a
+b
\nonumber\\&&
-2 e
-f n
+h n
+j n
-j n^2
+6 p
-2 n r
-n s
+n^2 s
-b i_1
\nonumber\\&&
+3 e i_1
-h n i_1
-11 p i_1
+3 n r i_1
+n s i_1
-n^2 s i_1
-e i_1^2
+6 p i_1^2
-n r i_1^2
\nonumber\\&&
-p i_1^3
)
\bigg)
\Biggr]
\\
f_{\rm S_2}[m,n] &=&
\bigg(
\prod_{i_1=1}^n \frac{- a_1
+ b_1
-2 e_1
- b_1 i_1
+3 e_1 i_1
- e_1 i_1^2
}{\big(
c_1
-d_1
+2 p_1
+d_1 i_1
-3 p_1 i_1
+ p_1 i_1^2
\big) i_1}\bigg)
\nonumber\\&& \times
\bigg[
\prod_{i_1=1}^m
\bigg(
\frac{1}{\big(
c
-d
+2 l
+g n
-n q
+d i_1
-3 l i_1
+n q i_1
+l i_1^2
\big) i_1}
(-a
+b
-2 e
-f n
\nonumber\\&&
+h n
+j n
-j n^2
+6 p
-2 n r
-n s
+n^2 s
-b i_1
+3 e i_1
-h n i_1
-11 p i_1
+3 n r i_1
\nonumber\\&&
+n s i_1
-n^2 s i_1
-e i_1^2
+6 p i_1^2
-n r i_1^2
-p i_1^3
)
\bigg)\bigg].
\end{eqnarray}
Eq.~(\ref{eq:HORN2}) can be rewritten as
\begin{eqnarray}
f[n,m] &=& \frac{\displaystyle (-1)^{m+n}}
{\displaystyle m! n! \Biggl(
\frac{c_1}{d_1}\Biggr)_n \Biggl(
\frac{c}{d}
+\frac{g n}{d}
\Biggr)_m}
\Biggl(
\frac{e}{d}\Biggr)^m \Biggl(
\frac{e_1}{d_1}\Biggr)^n \Biggl(
-\frac{1}{2}
+\frac{b_1}{2 e_1}
-\frac{r_1}{2 e_1}
\Biggr)_n
\nonumber\\ && \times
\Biggl(
-\frac{1}{2}
+\frac{b_1}{2 e_1}
+\frac{r_1}{2 e_1}
\Biggr)_n \Biggl(
-\frac{1}{2}
+\frac{b}{2 e}
+\frac{h n}{2 e}
-\frac{r_2}{2 e}
\Biggr)_m \Biggl(
-\frac{1}{2}
+\frac{b}{2 e}
+\frac{h n}{2 e}
+\frac{r_2}{2 e}
\Biggr)_m,
\nonumber\\
\label{eq:FMN}
\end{eqnarray}
with
\begin{eqnarray}\label{Equ:AlgebraicExt}
r_1 &=& \sqrt{(b_1 - e_1)^2 - 4 a_1 e_1}
\\
r_2 &=&
{
\sqrt{(b - 3 e + h n)^2 - 4 e (a - b + 2 e + f n - h n - j n + j n^2)}.
}
\end{eqnarray}
The Pochhammer form of (\ref{eq:FMN}) directly allows the $\varepsilon$--expansion, if the free parameters in the
Pochhammer symbols are replaced accordingly by expressions that also contain the $\varepsilon$--parameter. Further simplifications are obtained using the
replacement rules given in Appendix~\ref{sec:A}. E.g., for the Appell function $F_1$ one obtains
\begin{eqnarray}
f[n,m] = \frac{(\alpha )_{m+n} (\beta )_m (\beta')_n}{m! n! (\gamma )_{m+n}},
\end{eqnarray}
by using the replacements given in (\ref{eq:F1a}), Appendix~\ref{sec:B}.
In the tri--variate cases one obtains
\begin{eqnarray}
f[m,n,p] &=&
\bigg(
\prod_{i_1=1}^m \frac{-A
+ B_1
-2 E_1
- B_1 i_1
+3 E_1 i_1
- E_1 i_1^2
}{\big(
B_0
-E_0
+E_0 i_1
\big) i_1}
\bigg)
\nonumber\\&& \times
\bigg[\prod_{i_1=1}^n
\bigg(
\frac{1}{\big(
C'_0
- F'_0
+ H'_2 m
+ F'_0 i_1
\big) i_1}
(- A'
+ C'_1
-2 F'_1
- B'_1 m
\nonumber\\&&
+ E'_1 m
+ H'_1 m
- E'_1 m^2
- C'_1 i_1
+3 F'_1 i_1
- H'_1 m i_1
- F'_1 i_1^2
)
\bigg)
\bigg]
\nonumber\\&& \times
\bigg[
\prod_{i_1=1}^p
\bigg(
\frac{1}{\big(
D''_0
- G''_0
+ L''_2 m
+n S''_2
+ G''_0 i_1
\big) i_1}
(-A''
+ D''_1
-2 G''_1
- B''_1 m
+ E''_1 m
\nonumber\\&&
+ L''_1 m
- E''_1 m^2
- C''_1 n
+ F''_1 n
- H''_1 m n
- F''_1 n^2
+n S''_1
- D''_1 i_1
+3 G''_1 i_1
- L''_1 m i_1
\nonumber\\&&
-n S''_1 i_1
- G''_1 i_1^2
)
\bigg)
\bigg].
\end{eqnarray}
Finally, in the four--variable case the product solution reads
\begin{eqnarray}
f[m,n,p,q] &=& \bigg(
\prod_{i_1=1}^m
\frac{-A
+B_1
-2 F_1
-B_1 i_1
+3 F_1 i_1
-F_1 i_1^2
}{\big(
B_0
-F_0
+F_0 i_1
\big) i_1}
\bigg)
\nonumber\\&&\times
\bigg[
\prod_{i_1=1}^n
\frac{1}{\big(
C'_0
-G'_0
+m M'_2
+G'_0 i_1
\big) i_1}
(-A'
+C'_1
-2 G'_1
-B'_1 m
+F'_1 m
\nonumber\\&&
-F'_1 m^2
+m M'_1
-C'_1 i_1
+3 G'_1 i_1
-m M'_1 i_1
-G'_1 i_1^2
)
\bigg]
\nonumber\\&&\times
\bigg[
\prod_{i_1=1}^p
\frac{1}{\big(
D''_0
-H''_0
+m N''_2
+n Q''_2
+H''_0 i_1
\big) i_1}
(-A''
+D''_1
-2 H''_1
-B''_1 m
+F''_1 m
\nonumber\\&&
-F''_1 m^2
-C''_1 n
+G''_1 n
-m M''_1 n
-G''_1 n^2
+m N''_1
+n Q''_1
-D''_1 i_1
+3 H''_1 i_1
\nonumber\\&&
-m N''_1 i_1
-n Q''_1 i_1
-H''_1 i_1^2
)
\bigg]
\nonumber\\&&\times
\bigg[
\prod_{i_1=1}^q \frac{1}{\big(
E'''_0
-L'''_0
+m P'''_2
+n R'''_2
+p S'''_2
+L'''_0 i_1
\big) i_1}
(-A'''
+E'''_1
-2 L'''_1
-B'''_1 m
\nonumber\\&&
+F'''_1 m
-F'''_1 m^2
-C'''_1 n
+G'''_1 n
-m M'''_1 n
-G'''_1 n^2
-D'''_1 p
+H'''_1 p
-m N'''_1 p
\nonumber\\&&
-H'''_1 p^2
+m P'''_1
-n p Q'''_1
+n R'''_1
+p S'''_1
-E'''_1 i_1
+3 L'''_1 i_1
-m P'''_1 i_1
-n R'''_1 i_1
\nonumber\\&&
-p S'''_1 i_1
-L'''_1 i_1^2
)
\bigg].
\end{eqnarray}
Pochhammer solutions are advantageous, since the $\varepsilon$--expansion can be derived more easily than for
the product solutions.
The powers of $i_1$ contributing to the above products determine the degree, {\sf d}, of the algebraic
equations to be solved to switch to the associated Pochhammer form. For the functions considered in the
present paper one has in many cases {\sf d = 2}, and {\sf d = 3} for
the $S_{1(2)}$ functions. In the $_pF_q$ case the degree
can be even higher. Here complex solutions will appear in general for the Pochhammer symbols.
The first argument of a Pochhammer symbol will then imply a new constant
in the ground field to be used in the summation problem.
Nevertheless, the solutions remain real. If the corresponding algebraic equations can be solved in closed form,
the special conditions discussed in Appendix~\ref{sec:B} need not be obeyed.
In any case, if the degree {\sf d} of the algebraic
equations is too high, in particular, if the algebraic extensions get too complicated, one can use the
general tools developed in Section~\ref{Sec:EpExpansion} to derive the $\varepsilon$--expansion by introducing
generalized
versions of harmonic sums and Hurwitz type sums where the summands have denominators which do not factor linearly.
\section{Computing the expansion in \boldmath $\varepsilon$}
\label{sec:epExpansion}
\vspace*{1mm}
\noindent
Performing the expansion in the dimensional parameter $\varepsilon$ on the basis of series representations
around $x_i = 0$, the convergence region of the respective series has to be known
in general. For the one--parameter series we consider the $_pF_q$ functions, which converge for $|x| <
1$ if $p \leq q+1$~\cite{SLATER1}. In the two--variable case one has
\cite{SLATER1}
\begin{eqnarray}
F_1,
F_3
&:& |x|, |y| < 1,
\\
F_2 &:& |x| + |y| < 1,
\\
F_4 &:& \sqrt{|x|} + \sqrt{|y|} < 1.
\end{eqnarray}
In the attachment {\tt converg.m} we present the corresponding convergence conditions for all functions
up to three variables, as given in \cite{SRIKARL}, in computer-readable form, for the convenience
of the user. These conditions are partly very involved.
An example is $f_{26f}$
\begin{eqnarray}
f_{26f} &:& |z| < \frac{1}{4}, |x| < \frac{1}{1+2\sqrt{|z|}}, |y|<
\sqrt{1+|x|(1+2\sqrt{|z|})}
- \sqrt{|x|(1+2\sqrt{|z|})},
\end{eqnarray}
\cite{SRIKARL}.
More involved conditions are obtained in the four-variable case. They may be derived by applying d'Alembert's
ratio test \cite{WW,DAL} to these cases.
In general, multi--sums appear with complicated hypergeometric products and one may try to apply, e.g., the
package \texttt{EvaluateMultiSums}~\cite{Ablinger:2010pb,Blumlein:2012hg,Schneider:2013zna,Schneider:19} (utilizing the difference ring algorithms~\cite{DR,TermAlgebra,LinearSolver} available in {\tt Sigma}) to represent these sums in terms of indefinite nested sums. In general this is not possible, but we will show how this goal can be accomplished for various interesting cases with our computer algebra tools. In particular, if the products depend on the dimensional parameter $\varepsilon$
and one is interested in its $\varepsilon$--expansion, the best tactic is to
perform
the $\varepsilon$--expansion of the innermost summand, given in terms of hypergeometric products, and to apply
afterwards the summation quantifiers to the coefficients of the expansion; here one has to take care that the interchange of infinite summation quantifiers and the differential operator w.r.t.\ $\varepsilon$ is possible.
To accomplish this task, we will first explain how such products can be expanded in full generality.
Afterwards we will focus on the task of carrying out the summations on top of the $\varepsilon$--expansion.
\subsection{The \boldmath $\varepsilon$--expansion of the summand}\label{Sec:EpExpansion}
In general the summand is built by a product of the form
\begin{equation}\label{Equ:HypProd}
\prod_{i=\ell}^n h(\varepsilon,i),
\end{equation}
where $h(\varepsilon,x)\in{\mathbb K}(\varepsilon,x)$ is a rational function in the variables $\varepsilon$ and $x$, or by a linear combination of power products of such products; for concrete examples see~\eqref{eq:HORN2} and below. For simplicity we suppress further summation variables that may arise in $h$ and move them to the ground field (e.g., for the variables $m,n$ in~\eqref{eq:HORN2} we take the rational function field ${\mathbb K}=K(m,n)$ over a field $K$ of characteristic $0$).
Before expanding in the dimensional parameter $\varepsilon$ one may map to Pochhammer symbols.
In such a representation $\varepsilon$ occurs usually in the form
\begin{eqnarray}\label{Equ:PochhammerForm}
(a + r \varepsilon)_n,~~~~\text{with}~~r \in \mathbb{Q}.
\end{eqnarray}
The series expansion is then given in terms of harmonic sums \cite{Vermaseren:1998uu,Blumlein:1998if}
at argument $a$ and $a+n$, with $a \in \mathbb{C} \backslash \mathbb{Z}_-$,
\begin{eqnarray}
(a + r \varepsilon)_n &=& \frac{\Gamma(n+a)}{\Gamma(a)} =
(a)_n \Biggl\{ 1
+ r \varepsilon \Biggl[
\frac{n}{a (a
+n
)}
-S_1(a)
+S_1(a+n)
\Biggr]
+ r^2 \varepsilon^2 \Biggl[
-\frac{n}{a (a
+n
)^2}
\nonumber\\ &&
+\Biggl(
-\frac{n}{a (a
+n)}
-S_1(a+n)
\Biggr) S_1(a)
+\frac{1}{2} S_1^2(a)
+\frac{n S_1(a+n)}{a (a
+n
)}
+\frac{1}{2} \Biggl(S_1^2(a+n)
\nonumber\\ &&
+ S_2(a)
- S_2(a+n) \Biggr)
\Biggr]
+r^3 \varepsilon^3 \Biggl[
\frac{n}{a (a
+n
)^3}
+\Biggl(
\frac{n}{a (a
+n
)^2}
-\frac{n S_1(a+n)}{a (a
+n
)}
\nonumber\\ &&
-\frac{1}{2} \Biggl(S_1^2(a+n)
+ S_2(a)
- S_2(a+n)\Biggr)
\Biggr) S_1(a)
+\Biggl(
\frac{n}{2 a (a
+n
)}
+\frac{1}{2} S_1(a+n)
\Biggr) S_1^2(a)
\nonumber\\ &&
-\frac{1}{6} S_1^3(a)
+\Biggl(
-\frac{n}{a (a
+n
)^2}
+\frac{1}{2} S_2(a)
-\frac{1}{2} S_2(a+n)
\Biggr) S_1(a+n)
+\frac{n S_1^2(a+n)}{2 a (a
+n
)}
\nonumber\\ &&
+\frac{1}{6} S_1^3(a+n)
+\frac{n S_2(a)}{2 a (a
+n
)}
-\frac{n S_2(a+n)}{2 a (a
+n
)}
-\frac{1}{3} S_3(a)
+\frac{1}{3} S_3(a+n)
\Biggr]
\Biggr\} + O(\varepsilon^4).
\nonumber\\ \label{Equ:PochhammerExpansion}
\end{eqnarray}
Here the harmonic sums \cite{Vermaseren:1998uu,Blumlein:1998if}
are defined by
\begin{eqnarray}
S_{b,\vec{a}}(N) = \sum_{k=1}^N \frac{({\rm sign}(b))^k}{k^{|b|}}
S_{\vec{a}}(k),~~~S_\emptyset = 1,~~ a_i,b \in \mathbb{Z} \backslash
\{0\}.
\end{eqnarray}
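The $O(\varepsilon)$ term of the expansion~\eqref{Equ:PochhammerExpansion} can be checked for integer argument $a$ with exact rational arithmetic; a small Python sketch with illustrative values for $a$, $r$ and $n$ (for integer $a$, $S_1$ is the standard harmonic sum):

```python
from fractions import Fraction

def poch(x, n):
    r = Fraction(1)
    for k in range(n):
        r *= x + k
    return r

def S1(m):
    """Harmonic sum S_1(m) for integer m >= 0."""
    return sum(Fraction(1, j) for j in range(1, m + 1))

a, r, n = 3, Fraction(2, 3), 7        # illustrative integer a, rational r

# exact derivative of (a + r*eps)_n at eps = 0:
# d/deps prod_{k=0}^{n-1}(a + r*eps + k) = r * sum_j prod_{k != j}(a + k)
deriv = r * sum(poch(a, n) / (a + j) for j in range(n))

# first-order coefficient predicted by the expansion formula
predicted = poch(a, n) * r * (Fraction(n, a * (a + n)) - S1(a) + S1(a + n))

first_order_ok = (deriv == predicted)
```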
Analogous expressions are obtained in the case that the Pochhammer symbols depend on $\varepsilon$ polynomially.
The harmonic sums $S_{\vec{c}}(a;n)$
will be called {\it Hurwitz harmonic sums}, since they converge to the Hurwitz
$\zeta$-values \cite{HURWITZ} in the limit $n \rightarrow \infty$.
These sums are defined by
\begin{eqnarray}
S_1(a;n) &=& \sum_{k=1}^n \frac{1}{a+k}
\\
S_{c,\vec{b}}(a;n) &=& \sum_{k=1}^n \frac{({\rm sign}(c))^k}{(a+k)^{|c|}} S_{\vec{b}}(a;k).
\end{eqnarray}
Single Hurwitz harmonic sums are given by
\begin{eqnarray}
S_l(a;n) \equiv S_l(a+n) - S_l(a),~~a \in \mathbb{C}.
\end{eqnarray}
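For integer offset $a$ this identity is immediate to verify, also at higher depth index $l$; a minimal Python sketch with an illustrative choice of $a$:

```python
from fractions import Fraction

def S(l, m):
    """Harmonic sum S_l(m) for integers l >= 1, m >= 0."""
    return sum(Fraction(1, j ** l) for j in range(1, m + 1))

def S_hurwitz(l, a, n):
    """Single Hurwitz harmonic sum S_l(a; n) = sum_{k=1}^n 1/(a+k)^l."""
    return sum(Fraction(1, (a + k) ** l) for k in range(1, n + 1))

a = 4                                  # illustrative nonnegative integer offset
identity_ok = all(S_hurwitz(l, a, n) == S(l, a + n) - S(l, a)
                  for l in (1, 2, 3) for n in range(12))
```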
Here the harmonic sums are understood as derived from their Mellin transformation,
cf.~Ref.~\cite{Blumlein:1998if}. More involved relations of this kind hold also for
nested sums.
In the course of further summations in the multivariate case also the Hurwitz generalizations of the sums dealt with
in Refs.~\cite{Ablinger:2011te,
Ablinger:2013cf,Ablinger:2014bra,Ablinger:2021fnc} can occur.
If the multiplicands $h(\varepsilon,i)$ of the arising products~\eqref{Equ:HypProd}
do not factorize linearly over the given field, one has to introduce algebraic extensions, such as given in~\eqref{Equ:AlgebraicExt}, in order to obtain the product representations as given in~\eqref{Equ:PochhammerForm}.
In the case that one wants to avoid such non--trivial field extensions
(which
are often hard to handle with symbolical tools), we propose the following general and rather flexible method.
Let $\ell$ be an integer and suppose that $f_i(\varepsilon)$ are functions in $\varepsilon$ which are nonzero and complex differentiable around $0$ (and thus infinitely many times complex differentiable) for all integers $i\geq\ell$. By the product rule (and the quotient rule for the second identity) it follows that
\begin{align}
\partial_{\varepsilon}\prod_{i=\ell}^n f_i(\varepsilon)&=\left(\prod_{i=\ell}^n f_i(\varepsilon)\right)\sum_{i=\ell}^n\frac{\partial_{\varepsilon}f_i(\varepsilon)}{f_i(\varepsilon)}\label{Equ:StandardRule},\\
\partial_{\varepsilon}\prod_{i=\ell}^n \frac1{f_i(\varepsilon)}&=-\left(\prod_{i=\ell}^n \frac1{f_i(\varepsilon)}\right)\sum_{i=\ell}^n \frac{\partial_{\varepsilon}f_i(\varepsilon)}{f_i(\varepsilon)}\label{Equ:InverseRule}
\end{align}
holds for all $n\geq\ell$.
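Both identities can be verified mechanically with dual numbers, i.e., exact forward-mode differentiation; in the Python sketch below the multiplicand $f_i$ is an illustrative rational function which is nonzero around $\varepsilon=0$:

```python
from fractions import Fraction

class Dual:
    """Dual numbers a + b*d with d^2 = 0; b carries the eps-derivative."""
    def __init__(self, a, b=0):
        self.a, self.b = Fraction(a), Fraction(b)
    def __add__(self, o):
        return Dual(self.a + o.a, self.b + o.b)
    def __mul__(self, o):
        return Dual(self.a * o.a, self.a * o.b + self.b * o.a)
    def __truediv__(self, o):
        return Dual(self.a / o.a, (self.b * o.a - self.a * o.b) / (o.a * o.a))

def f(i, eps):
    # illustrative f_i(eps) = (i + eps) / (2*i + 1 + eps^2), nonzero near eps = 0
    return (Dual(i) + eps) / (Dual(2 * i + 1) + eps * eps)

ell, n = 1, 6
eps0 = Dual(Fraction(1, 5), 1)          # evaluation point eps = 1/5, derivative seed

vals = [f(i, eps0) for i in range(ell, n + 1)]
prod = Dual(1)
inv = Dual(1)
for v in vals:
    prod = prod * v
    inv = inv / v

log_sum = sum(v.b / v.a for v in vals)  # sum of f_i'/f_i at eps = 1/5
rule_ok = (prod.b == prod.a * log_sum)          # product rule identity
inverse_ok = (inv.b == -inv.a * log_sum)        # inverse-product identity
```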
Since each $f_i(\varepsilon)$ with $i\geq\ell$ is infinitely often differentiable around $0$, so are the summands
$\frac{\partial_{\varepsilon}f_i(\varepsilon)}{f_i(\varepsilon)}$ and thus the finite sums in~\eqref{Equ:StandardRule} and~\eqref{Equ:InverseRule}. E.g., we get
$$\partial_{\varepsilon}\sum_{i=\ell}^n\frac{\partial_{\varepsilon}f_i(\varepsilon)}{f_i(\varepsilon)}=
\sum_{i=\ell}^n\frac{f_i(\varepsilon)\,\partial_{\varepsilon}^2f_i(\varepsilon)-\big(\partial_{\varepsilon}f_i(\varepsilon)\big)^2}{f_i(\varepsilon)^2}.$$
As a consequence, we can apply $\partial^r_{\varepsilon}$ iteratively on $\prod_{i=\ell}^n f_i(\varepsilon)$ and obtain an explicit expression $F_r(\varepsilon)$ given by the product $\prod_{i=\ell}^n f_i(\varepsilon)$ itself times a polynomial expression in terms of sums whose denominators are of the form $f_i(\varepsilon)^r$ and whose numerators are built by linear combinations of power products of the form $f_i(\varepsilon)^{e_0}(\partial_{\varepsilon} f_i(\varepsilon))^{e_1}\dots(\partial^r_{\varepsilon} f_i(\varepsilon))^{e_r}$ with $e_1+\dots+e_r=r$ and $e_1<r$.
To calculate $\varepsilon$--expansions for such products, we assume from now on in addition that $f_i(0)\neq0$
holds for all $i\geq\ell$. Then each ratio
$$\frac{\partial^r_{\varepsilon}f_i(\varepsilon)}{f_i(\varepsilon)}\Bigg|_{\varepsilon=0},$$
and hence $F_r(0)$, is well defined and by Taylor's formula we get the power series expansion
$$\prod_{i=\ell}^n f_i(\varepsilon)=F_u(0)\varepsilon^u+F_{u+1}(0)\varepsilon^{u+1}+F_{u+2}(0)\varepsilon^{u+2}+\dots$$
with order $u\geq0$ where $F_u(0)\neq0$.
Within the package \texttt{EvaluateMultiSums} we specialized this general mechanism to the product case~\eqref{Equ:HypProd}, i.e., we assume that $f_i(\varepsilon)=h(\varepsilon,i)$ where $h(\varepsilon,x)\in{\mathbb K}(\varepsilon,x)$ is a rational function in the variables $\varepsilon$ and $x$. If $h(0,i)$ is zero for some $i\geq\ell$, we take $\ell'\geq\ell$ as the minimal value such that this is not the case and extract the critical part with
$$\prod_{i=\ell}^n f_i(\varepsilon)=r(\varepsilon)\prod_{i=\ell'}^n h(\varepsilon,i)$$
where $r(\varepsilon)=\prod_{i=\ell}^{\ell'-1}h(\varepsilon,i)\in{\mathbb K}(\varepsilon)$ is a rational
function in $\varepsilon$. We may assume that $r(\varepsilon)=\varepsilon^s\frac{p(\varepsilon)}{q(\varepsilon)}$ for an integer $s$ where $p,q$ are coprime polynomials in $\varepsilon$ with $p(0)q(0)\neq0$.
Applying now the above machinery to $\prod_{i=\ell'}^n h(\varepsilon,i)$ leads to a power series expansion of order $u\geq0$ as stated above. In addition, we can compute a Laurent series expansion of $r$ in $\varepsilon$ of order $s$. Thus by the Cauchy product we end up at the Laurent series expansion
$$\prod_{i=\ell}^n h(\varepsilon,i)=H_{t}\varepsilon^t+H_{t+1}\varepsilon^{t+1}+H_{t+2}\varepsilon^{t+2}+\dots$$
of order $t=u+s$. Here each coefficient $H_r$ is given by $\prod_{i=\ell'}^n h(0,i)$ times a polynomial expression in terms of single sums $\sum_{i=\ell'}^n \frac{a(i)}{h(0,i)^r}$ where $a(i)$ is built by a linear combination of power products of the form $h(\varepsilon,i)^{e_0}(\partial h(\varepsilon,i))^{e_1}\dots(\partial^r h(\varepsilon,i))^{e_r}|_{\varepsilon=0}$ with $e_1+\dots+e_r=r$ and $e_1<r$.
In order to obtain a nicer output, we factorize the input multiplicand
$h(\varepsilon,i)$ over the fixed field ${\mathbb K}$ and pull the product sign over to each irreducible factor. In addition, we replace any product $\big(\prod_{i=\ell'}^n p(\varepsilon,i)\big)^{-z}$ with $z>0$ by $\big(\prod_{i=\ell'}^n\frac1{p(\varepsilon,i)}\big)^{z}$ with positive exponent $z$ and use~\eqref{Equ:InverseRule} instead of~\eqref{Equ:StandardRule}. Carrying out the expansions for each product and combining them by the Cauchy product yields an expression in terms of the input product $\prod_{i=\ell}^n h(0,i)$ (or a power product built by the irreducible parts $\big(\prod_{i=\ell'}^n p(0,i)\big)^{-z}$) times polynomial expressions of sums of the form $\sum_{i=\ell'}^n \frac{a(i)}{p(0,i)^r}$ where $a(x)\in{\mathbb K}[x]$ is a polynomial of degree smaller than the degree of the polynomial $p(0,x)^r$.
Two remarks are in order. First, applying this general method to $(a + r
\varepsilon)_n=\prod_{i=1}^np(\varepsilon,i)$ with $p(\varepsilon,x)=(-1 + a + \varepsilon r + x)$ we rediscover precisely the
$\varepsilon$--expansion given in~\eqref{Equ:PochhammerExpansion}. Second, the polynomial $p(0,i)$ may be reducible and thus the denominators in the sums can be split further. In this case the routines of package \texttt{EvaluateMultiSums} (using the summation tools of \texttt{Sigma}) split the sums automatically further. E.g., calling the command \texttt{SeriesForProduct[SigmaProduct[}$2 \varepsilon + 2 i + \varepsilon i + 3 i^2 + 6 \varepsilon i^2 + i^3 + \varepsilon i^3$\texttt{,\{i,1,n\}],\{$\varepsilon$,0,2\},\{n\}]} of \texttt{EvaluateMultiSums} yields the $\varepsilon$--expansion
\begin{align*}
\prod_{i=1}^n h(\varepsilon,i)=&n!^3\Big\{\frac{\varepsilon^0}{2} (1+n)^2 (2+n)\\
&\hspace*{0.3cm}+\frac{\varepsilon^1}{2}(1+n) \big(
n \big(
-6-3 n+n^2\big)
+3 (1+n) (2+n) S_1({n})
\big)\\
&\hspace*{0.3cm}+\frac{\varepsilon^2}{4} \Big(
n \big(
498+597 n+185 n^2-9 n^3+n^4\big)
+6 (1+n) \big(
-2-9 n-4 n^2+n^3\big) S_1({n})\\
&\quad\quad+9 (1+n)^2 (2+n) S_1({n})^2
-101 (1+n)^2 (2+n) S_2({n})
\Big) \Big\}+O(\varepsilon^3)
\end{align*}
for the irreducible polynomial $h(\varepsilon,x)=2 \varepsilon + 2 x + \varepsilon x + 3 x^2 + 6 \varepsilon x^2 + x^3 + \varepsilon x^3\in\mathbb Q[\varepsilon,x]$ which factorizes linearly to $h(0,x)=x(x+1)(x+2)$ when it is evaluated at $\varepsilon=0$.
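The displayed expansion can be rechecked independently of \texttt{SeriesForProduct} with exact rational arithmetic (a sketch, not the package code; we truncate the product at order $\varepsilon^2$ and compare with the closed form for small $n$, with $S_1(n)$, $S_2(n)$ the partial harmonic sums):

```python
# Exact check of the displayed eps-expansion of prod_{i=1}^n h(eps,i) for
# h(eps,x) = 2 eps + 2 x + eps x + 3 x^2 + 6 eps x^2 + x^3 + eps x^3.
from fractions import Fraction as F
from math import factorial

def trunc_mul(a, b, order=3):
    # polynomial product in eps, truncated at eps^order
    out = [F(0)] * order
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j < order:
                out[i + j] += ai * bj
    return out

def product_expansion(n):
    # prod_{i=1}^n (h(0,i) + eps * d/deps h(eps,i)), truncated at eps^3
    P = [F(1), F(0), F(0)]
    for i in range(1, n + 1):
        P = trunc_mul(P, [F(2*i + 3*i**2 + i**3), F(2 + i + 6*i**2 + i**3), F(0)])
    return P

def closed_form(n):
    S1 = sum(F(1, k) for k in range(1, n + 1))
    S2 = sum(F(1, k**2) for k in range(1, n + 1))
    c0 = F(1, 2) * (1 + n)**2 * (2 + n)
    c1 = F(1, 2) * (1 + n) * (n * (-6 - 3*n + n**2) + 3 * (1 + n) * (2 + n) * S1)
    c2 = F(1, 4) * (n * (498 + 597*n + 185*n**2 - 9*n**3 + n**4)
                    + 6 * (1 + n) * (-2 - 9*n - 4*n**2 + n**3) * S1
                    + 9 * (1 + n)**2 * (2 + n) * S1**2
                    - 101 * (1 + n)**2 * (2 + n) * S2)
    f3 = factorial(n)**3
    return [f3 * c0, f3 * c1, f3 * c2]
```

For $n=1$, e.g., both sides give $6+10\varepsilon+0\cdot\varepsilon^2$.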
However, for a generic irreducible polynomial $p(\varepsilon,x)$, the polynomial $p(0,x)$ is irreducible as well. For instance, consider the product expression
\begin{equation}\label{Equ:NonTrivialSummand}
f[\varepsilon,n]=\frac{\prod_{i=1}^n \big(
2+\varepsilon
+B_1
-C
-3 i
-B_1 i
+i^2
\big)}{n! (A_1-4\varepsilon)_n},
\end{equation}
where $f[0,n]$ equals the product given in~\eqref{eq:T1}. Applying the command \texttt{SeriesForProduct} to this expression then gives
\begin{equation}\label{Equ:ExpandPrductExpr}
\begin{split}
f[\varepsilon,n]=&\frac{\prod_{i=1}^n \big(
2
+B_1
-C
-3 i
-B_1 i
+i^2
\big)}{n! (A_1)_n}\times\\
&\times\Big\{\varepsilon^0+\varepsilon^1 \Big(4\Big(
\frac{n}{A (A+n)}
-S_1({A})
+S_1({A+n})\Big)
+\sum_{i=1}^n \tfrac{1}{2
+B
-C
-(3+B) i
+i^2
}\Big)
\Big\}+O(\varepsilon^2).
\end{split}
\end{equation}
If necessary or appropriate, depending on the application, the sum
solutions found (and the products) can be factorized further (within an
appropriate algebraic field extension).
\subsection{Symbolic summation}
We consider now multi--sums over such products (e.g., a single sum over the discrete parameter $n$, or sums
over further discrete parameters that appear in other products or even inside of products). Applying the package \texttt{EvaluateMultiSums} one can now try to work from the innermost sum towards the outermost sum and to transform the definite sums stepwise to indefinite nested versions.
Internally, one computes stepwise recurrences and tries to solve these recurrences within the class of indefinite nested sums; for details see~\cite{SIG2}.
In the case that a sum has an infinite upper bound, one first considers a
truncated version with the upper bound $N$, applies the symbolic summation tools to this version and afterwards performs the limit $N \rightarrow \infty$ using
procedures available in the package {\tt HarmonicSums}
\cite{HARMSU,Vermaseren:1998uu,Blumlein:1998if,Remiddi:1999ew,Ablinger:2011te,
Ablinger:2013cf,Ablinger:2014bra,Ablinger:2021fnc}.
Since in our application also the formal parameters $x_i$ are involved, it may
turn out in the course of the summation that the summation problem cannot be solved for certain classes of cases, while it is
possible in others. In particular, if the $\varepsilon$ parameter appears in the innermost summand, it is of great
advantage to first expand in $\varepsilon$ and afterwards to apply the summation tools to the coefficients of the expansion, which are free of $\varepsilon$. To carry out the infinite sums after the $\varepsilon$--expansion, the infinite power series have to be considered
in their region of convergence around zero; see {\tt converg.m} for the cases of up to three variables.
For instance, we take the summand in~\eqref{Equ:NonTrivialSummand} and specialize it further to $A\to3, B\to-2, C\to-1$. Then with the
expansion~\eqref{Equ:ExpandPrductExpr} we obtain
$$\sum_{n=0}^{\infty}\frac{\prod_{i=1}^n(\varepsilon+1-i+i^2)}{n!(3-4\varepsilon)_n}=G_0+\varepsilon\,G_1+O(\varepsilon^2),$$
with
\begin{align*}
G_0&=\sum_{n=0}^{\infty}\frac{\prod_{i=1}^n(1-i+i^2)}{n!(3)_n}\\
G_1&=\sum_{n=0}^{\infty}\frac{\prod_{i=1}^n(1-i+i^2)}{n!(3)_n}\Big(-6
+\frac{4}{1+n}
+\frac{4}{2+n}
+4 S_1({n})+\sum_{i=1}^n \frac{1}{1-i+i^2}\Big).
\end{align*}
Given this expansion, in the second step we apply our summation tools to the $\varepsilon$--free sums. For $G_0$ we first consider the truncated version and get the simplification
$$\sum_{n=0}^{N}\frac{\prod_{i=1}^n(1-i+i^2)}{n!(3)_n}=\frac{(3+N) \big(
1+N+N^2\big)}{3}\frac{\prod_{i=1}^N \big(
1-i+i^2\big)}{N! (3)_N}.$$
Finally, we perform the limit $N\to\infty$ yielding
$$G_0=\frac{2\cosh(\tfrac {\sqrt {3}\pi} {2})}{3\pi}.$$
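Both steps can be verified numerically (a sketch; the chosen truncation orders and the tolerance are illustrative): the truncated sum agrees exactly with the telescoped closed form, and the closed form tends to the stated value of $G_0$.

```python
# Exact check of the telescoped closed form and numeric check of the limit
# G_0 = 2 cosh(sqrt(3) pi / 2) / (3 pi).
from fractions import Fraction as F
from math import cosh, sqrt, pi, isclose

def trunc_data(N):
    # prod_{i=1}^N (1 - i + i^2), N!, and (3)_N as exact integers
    prod = fact = poch = 1
    for i in range(1, N + 1):
        prod *= 1 - i + i**2
        fact *= i
        poch *= i + 2
    return prod, fact, poch

def partial_sum(N):
    # sum_{n=0}^N prod_{i=1}^n (1 - i + i^2) / (n! (3)_n), exactly
    s = F(0)
    for n in range(N + 1):
        p, f, q = trunc_data(n)
        s += F(p, f * q)
    return s

def closed_form(N):
    p, f, q = trunc_data(N)
    return F((3 + N) * (1 + N + N**2) * p, 3 * f * q)
```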
The sum $G_1$ is more complicated and the command \texttt{EvaluateMultiSum[$G_1$]} produces the output
$$G_1= -\frac{8}{3}
+\frac{8 C}{3}
-\frac{20 \cosh \big(
\frac{\sqrt{3} \pi }{2}\big)}{9 \pi }
+\frac{2 \cosh \big(
\frac{\sqrt{3} \pi }{2}\big)}{3 \pi}\sum_{i=1}^{\infty}
\frac{1}{1-i+i^2},$$
with the extra constant
\begin{eqnarray}
C = \sum_{k=1}^\infty \left(\frac{1}{\pi k} \cosh\left(\frac{\sqrt{3} \pi}{2}\right) - \frac{1}{(k!)^2}
\prod_{l=1}^k (1 - l + l^2)\right).
\label{eq:C1}
\end{eqnarray}
As will be shown by non--trivial considerations in Appendix~\ref{sec:E},
this convergent sum can be simplified to
\begin{eqnarray}
C &=& 1 + \frac{2 \cosh \left[\frac{\sqrt{3} \pi }{2}\right]
\left\{
\Re\left[\psi
\left(\frac{1}{2}+\frac{i
\sqrt{3}}{2}\right)\right] + \gamma_E \right\}}{\pi },
\label{eq:C3}
\end{eqnarray}
with $\gamma_E$ the Euler--Mascheroni constant and $\psi(x)$ the digamma
function.
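The two representations of $C$ can be cross-checked numerically (a sketch; the truncation orders are illustrative), evaluating $\Re\,\psi(\tfrac12+\tfrac{i\sqrt3}2)+\gamma_E$ through the series $\psi(z)=-\gamma_E+\sum_{n\ge0}\big(\tfrac1{n+1}-\tfrac1{n+z}\big)$:

```python
# Numeric cross-check of the defining sum for C against the digamma closed
# form; the ratio prod_{l<=k}(1 - l + l^2)/(k!)^2 is updated iteratively to
# avoid overflowing factorials.
from math import cosh, sqrt, pi

ch = cosh(sqrt(3) * pi / 2)

# C from the defining sum
C_sum = 0.0
r = 1.0
for k in range(1, 100001):
    r *= (1 - k + k**2) / k**2
    C_sum += ch / (pi * k) - r

# Re psi(1/2 + i sqrt(3)/2) + gamma_E, term by term:
# Re 1/(n + 1/2 + i sqrt(3)/2) = (n + 1/2)/(n^2 + n + 1)
S = 0.0
for n in range(10**6):
    S += 1.0 / (n + 1) - (n + 0.5) / (n**2 + n + 1)

C_closed = 1 + 2 * ch * S / pi
```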
If this transformation is successful (in particular,
if the recurrences arising in the course of the transformation can
be fully solved within the class of indefinite nested sums defined over
hypergeometric products),
one finally obtains an expression in terms of special functions $f(x_1, ..., x_n)$, which are the results
of the $\varepsilon$--expansion
of the respective higher transcendental function. In this process one tries to keep the parameters symbolic and finally
inserts the respective functions of the parameters of the original differential equations. This will in general
lead to representations in radicals. For numerical representations this is not problematic, while the analytic
representations become involved. When calculating the respective amplitudes for off--shell invariants, one may in principle use
these quantities
in higher--loop diagrams by observing the respective kinematics. Whether this will be a practical method compared
to the direct calculation of the higher--loop diagrams remains to be seen in the respective cases.
Summarizing, the $\varepsilon$--expansion leads to (multiple) infinite sums which can be simplified further by
symbolic summation in many non--trivial applications. These are functions of the corresponding
set of variables, either in terms of functions which also appear in other quantum field theoretic calculations
\cite{Remiddi:1999ew,Ablinger:2011te,Ablinger:2013cf,Ablinger:2014bra} or higher transcendental functions. Frequently the
different letters appear within root--valued expressions.
In the examples that we will present in Section~\ref{sec:fullmachinery} below, or in the attached {\tt Mathematica} notebooks, one obtains e.g.\ the following sums
\begin{eqnarray}
s_1 &=& \sum_{i=1}^{\infty} \frac{y^{i} \big(
\frac{3}{2}\big)_{i}}{i! i}
\\
s_2 &=&
\sum_{i=1}^{\infty } \frac{x^{2 i} \big(\frac{3}{2}\big)_{i}}{i!}
\displaystyle\sum_{j=1}^{i} \frac{y^{-j} j!}{\big(
\frac{3}{2}\big)_{j}}
\end{eqnarray}
and much more complicated structures and variables, as shown in the attachment in several examples.
The above sums evaluate to
\begin{eqnarray}
s_1 &=&
-2
+2 \frac{1}{\sqrt{1-y}}
-2 \ln \left[
\frac{1}{2} \big(
1+\sqrt{1
-y
}\big)\right]
\\
s_2 &=&
-\frac{y}{(1-x^2) (x^2-y)}
-\frac{1}{(1-x^2)^{3/2}}
-\frac{y}{(1-x^2)^{3/2} \sqrt{1-y}}
\Biggl[{\rm arctanh}\left(
\frac{1}{\sqrt{1-y}}\right)
\nonumber\\ &&
+ {\rm arctanh}\left(
\frac{\sqrt{1-x^2}}{\sqrt{1-y}}\right) \Biggr].
\end{eqnarray}
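The evaluation of $s_1$ can be spot-checked numerically (a sketch; the sample point $y=\tfrac12$ inside the convergence region and the truncation at 200 terms are illustrative):

```python
# Numeric spot check of the closed form quoted for s_1 at y = 1/2.
from math import sqrt, log, isclose

y = 0.5
term = 1.0   # y^i (3/2)_i / i!, updated iteratively via the term ratio
s1 = 0.0
for i in range(1, 201):
    term *= (i + 0.5) / i * y
    s1 += term / i

closed = -2 + 2 / sqrt(1 - y) - 2 * log((1 + sqrt(1 - y)) / 2)
```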
In general one has to introduce integral representations successively as has been described in Ref.~\cite{Ablinger:2014bra}
in detail.
\section{The full machinery}
\label{sec:fullmachinery}
In the following we consider two--variable examples, going from their partial differential equations down to
their infinite sum representations and $\varepsilon$--expansions, to illustrate the general formalism.
\subsection{Example 1}
\vspace*{1mm}
\noindent
Consider for example the system of equations
\begin{eqnarray}
\Bigg[ (x-1) y \partial_{x,y}^2+ \Big[x \Big(2 \varepsilon +\frac{7}{2}\Big)-\varepsilon +1\Big]\partial_x + (x-1) x \partial_x^2
&\nonumber\\
+y (2 \varepsilon +1) \partial_y +\frac{3}{2} (2 \varepsilon +1) \Bigg] f(x,y) &= 0, \\
\Bigg[ x (y-1) \partial_{x,y}^2 +x (4-\varepsilon ) \partial_x+ \Big[y \Big(\frac{13}{2}-\varepsilon \Big)-\varepsilon +1\Big]\partial_y
&\nonumber\\
+(y-1) y \partial_y^2+\frac{3 (4-\varepsilon )}{2}
\Bigg]f(x,y) &= 0,
\end{eqnarray}
for which we search for a solution of the form~\eqref{eq:hyp-series} with $r=2$ where $x_1=x$ and $x_2=y$.
Computing a first--order recurrence system for $A(n_1,n_2)=A(m,n)$ and solving it by the method presented in Section~\ref{sec:solveProd} provides the solution
\begin{equation}
f(x,y) = \sum_{m,n= 0}^\infty A(m,n) = \sum_{m,n= 0}^\infty \frac{x^m y^n \big(
\frac{3}{2}\big)_{m
+n
} (4-\varepsilon )_n (1+2 \varepsilon )_m}{m! n! (-1+\varepsilon )_{m
+n
}}.
\label{eq:example-summand}
\end{equation}
A series expansion of the summand $A(m,n)$ in \eqref{eq:example-summand} up to $\mathcal O(\varepsilon^0)$ gives
\begin{eqnarray}
A(m,n)&=& -\frac{1}{6} \frac{x^m y^n (3+n)! \big(
\frac{3}{2}\big)_{m
+n
}}{n! (-2
+m
+n
)! \varepsilon}
+
\frac{1}{36} \bigg[
-\frac{1}{(1+n) (2+n) (3+n) (m
+n
) (-1
+m
+n
)}
\nonumber\\&& \times
\big(
-36
-30 n
+17 n^2
+97 n^3
+79 n^4
+17 n^5
+m^2 \big(
36+115 n+84 n^2+17 n^3\big)
\nonumber\\&&
+m \big(
36+89 n
+218 n^2
+163 n^3+34 n^4\big)
\big)
-12 S_1({m})
+6 S_1({n})
+6 S_1({m+n})
\bigg]
\nonumber\\&& \times
\frac{x^m y^n (3+n)! \big(
\frac{3}{2}\big)_{m
+n
}}{n! (-2
+m
+n
)!}
+\mathcal O(\varepsilon).
\end{eqnarray}
A series expansion of \eqref{eq:example-summand} in the region $0<x<\sqrt{y}$, $0<y<\frac{1}{2}$,
\begin{equation}
f(x,y) = \frac{1}{\varepsilon} f_{-1}(x,y) + f_0(x,y) +\mathcal O(\varepsilon)
\end{equation}
is possible using {\tt EvaluateMultiSum} and results in an expression involving the sums
\begin{eqnarray}
R_0 &=& \sum_{i=1}^{\infty } \frac{x^i \big(\frac{3}{2}\big)_i}{i!} = -1+\frac{1}{(1-x)^{3/2}}
\\
R_1 &=& \sum_{i=1}^{\infty } \frac{y^i \big(\frac{3}{2}\big)_i}{i!} = -1+\frac{1}{(1-y)^{3/2}}
\end{eqnarray}
at $\mathcal O(\varepsilon^{-1})$. The function $f_{-1}(x,y)$ reads
\begin{eqnarray}
f_{-1}(x,y) &=& -\frac{15 x^6 }{4 (x-y)^4(1-x)^{7/2}}
-\frac{15 y^3}{64 (x-y)^4 (1-y)^{13/2}} \big[
y^3 \big(
160+80 y-10 y^2+y^3\big)
\nonumber\\&&
-x y^2 \big(
576+176 y
-64 y^2+5 y^3\big)
+x^3 \big(
-320+120 y-36 y^2+5 y^3\big)
\nonumber\\&&
+3 x^2 y \big(
240+8 y-22 y^2+5 y^3\big)
\big] .
\end{eqnarray}
In addition, one encounters at $\mathcal O(\varepsilon^0)$ the sums
\begin{eqnarray}
R_2 &=& \sum_{i=1}^{\infty } \frac{x^i \big(
\frac{3}{2}\big)_i}{(1+2 i)^2 i!} = -1 + \frac{1}{\sqrt{x}} \arcsin\big( \sqrt{x}\big)
\\
R_3 &=& \sum_{i=1}^{\infty } \frac{y^i \big(
\frac{3}{2}\big)_i}{i i!} = -2
+2 \ln(2)
+2 \frac{1}{\sqrt{1-y}}
-2 H_{-1}\big(
\sqrt{1-y}\big)
\\
R_4 &=& \sum_{i_1=1}^{\infty } \frac{x^{i_1} \big(\frac{3}{2}\big)_{i_1}
}{i_1!}
\sum_{i_2=1}^{i_1} \frac{1}{1+2 i_2}
= \frac{1}{2} \frac{H_1(x)}{(1-x)^{3/2}}
\\
R_5 &=& \sum_{i_1=1}^{\infty } \frac{x^{i_1}
\big(\frac{3}{2}\big)_{i_1}}{i_1!}
\sum_{i_2=1}^{i_1} \frac{y^{-i_2} i_2!}{\big(
\frac{3}{2}\big)_{i_2}}
\nonumber\\&=&
\frac{y}{(1-x) (y-x)}
-\frac{1}{(1-x)^{3/2}}
+\frac{y}{2(1-x)^{3/2} \sqrt{1-y}} \Big[
i \pi
-H_0\big( \sqrt{1-y} -\sqrt{1-x} \big)
\nonumber\\&&
-2 H_{-1}\big( \sqrt{1-y}\big)
+H_0(y)
+H_0\big( \sqrt{1-y}+\sqrt{1-x} \big)
\Big]
\\
R_6 &=& \sum_{i_1=1}^{\infty } \frac{x^{i_1} y^{i_1}
\big(\big(
\frac{3}{2}\big)_{i_1}\big)^2}{\big(
i_1!\big)^2}
\sum_{i_2=1}^{i_1} \frac{y^{-i_2} i_2!}{\big(
\frac{3}{2}\big)_{i_2}}
\nonumber\\&=&
\frac{1}{2}\int_0^1 dt \bigg[
\frac{-1+t}{\pi (-1
+t
+y
)} \Big[
\frac{1}{(1
-(1-t) x
)^2}
\big[
4 E(x -t x)
-2 [1
-(1-t) x
] K(x-t x)
\big]
\nonumber\\&&
-\frac{4 E(x y)
+2 (-1
+x y
) K(x y)
}{(-1
+x y
)^2}
\Big] \frac{1}{\sqrt{t}}
\bigg]
\\
R_7 &=& \sum_{i_1=1}^{\infty } \frac{x^{i_1} y^{i_1}
\big(\big(
\frac{3}{2}\big)_{i_1}\big)^2}{\big(
i_1!\big)^2 \big(
1+2 i_1\big)^2}
\sum_{i_2=1}^{i_1} \frac{y^{-i_2} i_2!}{\big(
\frac{3}{2}\big)_{i_2}}
=
{ \frac{1}{\pi} } \int_0^1 dt \frac{t-1}{\sqrt{t}(t+y-1) } \big[ K(x(1-t))
-K(x y) \big]
\\
R_8 &=& \sum_{i_1=1}^{\infty } \frac{y^{i_1}
\big(\frac{3}{2}\big)_{i_1}}{i_1! \big(
1+2 i_1\big)}
\sum_{i_2=1}^{i_1} \frac{x^{i_2} y^{-i_2}}{i_2}
=
-2 \frac{H_0\big(
\sqrt{1-x}+\sqrt{1-y}\big)}{\sqrt{1-y}}
+2 \frac{H_{-1}\big(
\sqrt{1-y}\big)}{\sqrt{1-y}}
\\
R_9 &=& \sum_{i_1=1}^{\infty } \frac{x^{i_1}
\big(\frac{3}{2}\big)_{i_1}}{i_1!} \big(
\sum_{i_2=1}^{i_1} \frac{y^{-i_2} i_2!}{\big(
\frac{3}{2}\big)_{i_2}}
\big)
\big(
\sum_{i_2=1}^{i_1} \frac{y^{i_2} \big(
\frac{3}{2}\big)_{i_2}}{i_2!}
\big)
\nonumber\\&=&
\frac{1}{(1-y)^{5/2}} \Big[ \left(\frac{1}{(1-x)^{3/2}}-1\right) \left((1-y)^{3/2}-1\right)
\biggl(\sqrt{1-y} y
\Big(H_{-1}\big(\sqrt{1-y}\big)
\nonumber\\&&
-\frac{H_0(y)}{2}+\frac{i \pi }{2}\Big)-y+1\biggr) \Big]
+\sum_{i_ 1=1}^{\infty } \bigg\{
\frac{1}{\pi
(1-y)^2 \sqrt{1- y} \Gamma \big(
1+i_ 1\big) \Gamma \big(
2+i_ 1\big)}
\nonumber\\&& \times
\bigg[ 4 x^{i_ 1} y^{1+i_ 1} \Big[
1
-y
+\sqrt{1-y} y \Big(
\frac{i \pi }{2}
-\frac{1}{2} H_ 0(y)
+H_ {-1}\big(
\sqrt{1-y}\big)
\Big)
\Big] \Gamma \big(
\frac{3}{2}+i_ 1\big)
\nonumber\\&& \times
\Gamma \big(
\frac{5}{2}+i_ 1\big)
\,_ 2F_ 1\Big(
-\frac{1}{2},1+i_ 1;2+i_ 1;y\Big) \bigg]
-\frac{1}{(-1+y)
\sqrt{\pi -\pi y} \Gamma \big(
1+i_ 1\big)}
\nonumber\\&& \times
\Big[2 x^{i_ 1} \Gamma \big(
\frac{3}{2}+i_ 1\big)
\,_ 2F_ 1\Big(
-\frac{1}{2},1+i_ 1;2+i_ 1;y\Big)
\,_ 2F_ 1\Big(
1,2+i_ 1;\frac{5}{2}+i_ 1;\frac{1}{y}\Big) \Big]
\nonumber\\&&
-\frac{1}{(-1+y) \sqrt{\pi -\pi y} \big(
3+2 i_ 1\big)}
\Big[ 2 \sqrt{\pi } x^{i_ 1} y^{-1-i_ 1} \big(
-1+(1
-y
)^{3/2}\big)
\nonumber\\&& \times
\,_ 2F_ 1\Big(
1,2+i
_ 1;\frac{5}{2}+i_ 1;\frac{1}{y}
\Big)
\big(1+i_ 1\big) \Big]
\bigg\}
\\
R_{10} &=& \sum_{i_1=1}^{\infty } \frac{y^{i_1} \big(
\frac{3}{2}\big)_{i_1} S_1\big({i_1}\big) i_1}{i_1!}
=
-3 \ln(2) y \frac{1}{(1-y)^{5/2}}
+\frac{3}{2} y \frac{H_1(y)}{(1-y)^{5/2}}
\nonumber\\&&
+3 y \frac{H_{-1}\big(
\sqrt{1-y}\big)}{(1-y)^{5/2}}
+\Big[
1
+y \big(
3-2 \sqrt{1
-y
}\big)
-\sqrt{1-y}
\Big] \frac{1}{(1-y)^{5/2}},
\end{eqnarray}
as well as the combination
\begin{eqnarray}
R_{11} &=&\big( 1-\big(1-x\big)^{3/2} \big)
\sum_{i_1=1}^{\infty } \frac{y^{i_1}
\big(\frac{3}{2}\big)_{i_1}}{i_1! \big(
1+2 i_1\big)}
\sum_{i_2=1}^{i_1} \frac{y^{-i_2} i_2!}{\big(
\frac{3}{2}\big)_{i_2}}
\nonumber\\&&
-\big(
1-x\big)^{3/2}
\sum_{i_1=1}^{\infty } \frac{y^{i_1}
\big(\frac{3}{2}\big)_{i_1}}{i_1! \big(
1+2 i_1\big)}\Big(
\sum_{i_2=1}^{i_1} \frac{y^{-i_2} i_2!}{\big(
\frac{3}{2}\big)_{i_2}}
\Big)
\Big(
\sum_{i_2=1}^{i_1} \frac{x^{i_2} \big(
\frac{3}{2}\big)_{i_2}}{i_2!}
\Big)
\nonumber\\&=&
\frac{1}{4} \bigg\{
\Big[
2
-2 \sqrt{1-x}
+2 x \sqrt{1-x}
-3 x F_1\Big(
\frac{5}{2};\frac{1}{2},1;2;x y,x
\Big)
\big(1-x\big)^{3/2}
\Big]
\big[i \pi y
\nonumber\\&&
+2 \sqrt{1-y}
-y H_0(y)
+2 y H_{-1} \big(\sqrt{1-y}\big)
\big]\bigg\} \frac{1}{\sqrt{1-y}}
\nonumber\\&&
-
\sum_{i_1=1}^{\infty } \Big[ \frac{x^{1+ i_1} \big(
1-x\big)^{3/2} \Gamma \big(
\frac{1}{2}+i_1\big) }{\sqrt{\pi } y \Gamma \big(
1+i_1\big)}
\,_2F_1\Big(
1,2+i_1;\frac{5}{2}+i_1;\frac{1}{y}\Big)
\nonumber\\&& \times
\,_2F_1\Big(
1,\frac{5}{2}+i_1;2+i_1;x\Big) \Big].
\end{eqnarray}
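Some of the simpler closed forms above can be spot-checked numerically (a sketch; the sample points and truncations are illustrative, using $H_1(x)=-\ln(1-x)$ and $H_{-1}(z)=\ln(1+z)$ from the harmonic-polylogarithm definitions below):

```python
# Numeric spot checks of R_2, R_3 and R_4.
from math import sqrt, log, asin, isclose

x, y = 0.4, 0.3
R2 = R3 = R4 = 0.0
cx = cy = 1.0        # x^i (3/2)_i / i! and y^i (3/2)_i / i!
inner = 0.0          # sum_{j=1}^i 1/(1+2j)
for i in range(1, 401):
    cx *= (i + 0.5) / i * x
    cy *= (i + 0.5) / i * y
    inner += 1.0 / (1 + 2 * i)
    R2 += cx / (1 + 2 * i)**2
    R3 += cy / i
    R4 += cx * inner

R2_closed = -1 + asin(sqrt(x)) / sqrt(x)
R3_closed = -2 + 2 * log(2) + 2 / sqrt(1 - y) - 2 * log(1 + sqrt(1 - y))
R4_closed = 0.5 * (-log(1 - x)) / (1 - x)**1.5
```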
The harmonic polylogarithms \cite{Remiddi:1999ew} are defined by
\begin{eqnarray}
H_{b,\vec{a}}(x) &=& \int_0^x dy f_b(y) H_{\vec{a}}(y),~~~H_\emptyset =
1,~~~b, a_i \in \{0,-1,1\},
\end{eqnarray}
with
\begin{eqnarray}
f_0(x) = \frac{1}{x},~~
f_{-1}(x) = \frac{1}{1+x},~~
f_1(x) = \frac{1}{1-x}.
\end{eqnarray}
One can further employ the relations
\begin{eqnarray}
_2F_1\Big(\frac{3}{2},\frac{3}{2};1;z\Big) &=& \frac{2(z-1)K(z)+4E(z)}{\pi(z-1)^2}
\\
K(z) &=& \int_0^1 \frac{1}{\sqrt{(1-t^2)(1-z t^2)}} dt = \frac{\pi}{2} \,_2F_1\Bigl(\frac{1}{2},\frac{1}{2};1;z \Bigr)
\\
E(z) &=& \int_0^1 \frac{\sqrt{1-z t^2}}{\sqrt{1-t^2}} dt = \frac{\pi}{2} \,_2F_1\Bigl(-\frac{1}{2},\frac{1}{2};1;z \Bigr).
\end{eqnarray}
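The first of these relations can be checked numerically (a sketch; $z=0.3$ and the quadrature step are illustrative), using the trigonometric forms of $K$ and $E$ obtained from the stated integrals by the substitution $t\to\sin t$:

```python
# Numeric check of the 2F1 relation, with
# K(z) = int_0^{pi/2} dt / sqrt(1 - z sin^2 t) and
# E(z) = int_0^{pi/2} dt sqrt(1 - z sin^2 t); Simpson's rule quadrature.
from math import sin, sqrt, pi, isclose

def simpson(f, a, b, n=2000):
    h = (b - a) / n
    s = f(a) + f(b)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * f(a + k * h)
    return s * h / 3

z = 0.3
K = simpson(lambda t: 1 / sqrt(1 - z * sin(t)**2), 0.0, pi / 2)
E = simpson(lambda t: sqrt(1 - z * sin(t)**2), 0.0, pi / 2)

# 2F1(3/2, 3/2; 1; z) summed directly via its term ratio
f21 = c = 1.0
for i in range(1, 200):
    c *= (i + 0.5)**2 / i**2 * z
    f21 += c

rhs = (2 * (z - 1) * K + 4 * E) / (pi * (z - 1)**2)
```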
The function $f_0(x,y)$ reads
{
\begin{eqnarray}
f_0(x,y) &=&
\frac{5 R_{10} y^2}{16 (1-y)^4 (x
-y
)^4} \Bigl[
y^3 \big(
160+80 y-10 y^2+y^3\big)
-x y^2 \big(
576+176 y-64 y^2+5 y^3\big)
\nonumber\\&&
+x^3 \big(
-320+120 y-36 y^2+5 y^3\big)
+3 x^2 y \big(
240+8 y-22 y^2+5 y^3\big)
\Bigr]
\nonumber\\&&
-\frac{15 R_8 y^3}{32 (1-y)^6 (x
-y
)^4} \Bigl[
y^3 \big(
160+80 y-10 y^2+y^3\big)
-x y^2 \big(
576+176 y-64 y^2+5 y^3\big)
\nonumber\\&&
+x^3 \big(
-320+120 y-36 y^2+5 y^3\big)
+3 x^2 y \big(
240+8 y-22 y^2+5 y^3\big)
\Bigr]
\nonumber\\&&
+\frac{1}{128 (1-x)^2 (1-y)^6 y (x
-y
)^4} \big(
-960 x^7 (-1+y)^7
+y^5 \big(
128-1344 y+1536 y^2
\nonumber\\&&
-4240 y^3+4110 y^4-223 y^5+33
y^6\big)
+8 x^6 y \big(
4-24 y+60 y^2-4880 y^3+1860 y^4
\nonumber\\&&
-564 y^5+79 y^6\big)
+x^5 y \big(
-832+2752 y-1920 y^2+76160 y^3+69280 y^4-8772
y^5
\nonumber\\&&
+2427 y^6-495 y^7\big)
-x y^4 \big(
512-4544 y+3264 y^2-34480 y^3+4240 y^4+3363 y^5
\nonumber\\&&
-141
y^6+66 y^7\big)
+x^3 y^2 \big(
-512+384 y+11008 y^2+83680 y^3+169980 y^4+6287
y^5
\nonumber\\&&
+5931 y^6+747 y^7-305 y^8\big)
+x^2 y^3 \big(
768-4736 y-2272 y^2-84288 y^3-56570 y^4
\nonumber\\&&
+11627
y^5-3549 y^6+387 y^7+33 y^8\big)
+x^4 y \big(
128+1984 y-9792 y^2-29440 y^3
\nonumber\\&&
-180320 y^4-63768
y^5+10536 y^6-8583 y^7+2055 y^8\big)
\big)
\nonumber\\&&
+R_1
\biggl[
-
\frac{1}{128 (1-x)^2 (1-y)^5 y (x
-y
)^5} \Bigl[
960 x^7 (-1+y)^6
-y^6 \big(
128+4800 y-4640 y^2
\nonumber\\&&
+4480 y^3-270 y^4+33
y^5\big)
+2 x y^5 \big(
320+10944 y-4016 y^2+4664 y^3+1749 y^4
\nonumber\\&&
-101
y^5+33 y^6\big)
+x^6 y \big(
-448+1920 y-12800 y^2+5440 y^3+1920 y^4-628
y^5+65 y^6\big)
\nonumber\\&&
-x^2 y^4 \big(
1280+38080 y+20128 y^2-6304 y^3+18546
y^4-4204 y^5+406 y^6+33 y^7\big)
\nonumber\\&&
+2 x^3 y^3 \big(
640+15040 y+32080 y^2-9896 y^3+11559 y^4-3791
y^5-491 y^6+169 y^7\big)
\nonumber\\&&
-5 x^4 y^2 \big(
128+1856 y+11712 y^2+1856 y^3-2076 y^4+1727
y^5-2058 y^6+448 y^7\big)
\nonumber\\&&
+2 x^5 y \big(
64+320 y+8000 y^2+15360 y^3-12080 y^4+5164
y^5-4170 y^6+935 y^7\big)
\Bigr]
\nonumber\\&&
-\frac{15 R_5 x^6 (1-y)^2}{2 (1-x)^2 y (x
-y
)^4}
\biggr]
+R_0 \biggl[
\frac{1}{16 (1-x)^2 (1-y)^6 y (x
-y
)^5} \Bigl[
-120 x^8 (-1+y)^7
\nonumber\\&&
+y^6 \big(
-32+384 y+2988 y^2+140 y^3-15 y^4\big)
+5 x y^5 \big(
32-352 y-2676 y^2-1772 y^3
\nonumber\\&&
-95 y^4+12 y^5\big)
+5 x^2 y^4 \big(
-64+608 y+4688 y^2+7516 y^3+1723 y^4+100
y^5-18 y^6\big)
\nonumber\\&&
+x^6 y \big(
20-624 y+3324 y^2+8040 y^3+18380 y^4-6720
y^5+2164 y^6-329 y^7\big)
\nonumber\\&&
+5 x^3 y^3 \big(
64-448 y-4056 y^2-12396 y^3-6809 y^4-612
y^5-10 y^6+12 y^7\big)
\nonumber\\&&
-5 x^4 y^2 \big(
32-64 y-1888 y^2-9120 y^3-11664 y^4-1436
y^5-158 y^6+40 y^7+3 y^8\big)
\nonumber\\&&
+x^5 y \big(
32+416 y-2848 y^2-12000 y^3-46800 y^4-11880
y^5+430 y^6-200 y^7+85 y^8\big)
\nonumber\\&&
+x^7 \big(
-60+304 y-444 y^2-360 y^3-2780 y^4-1080
y^5+1536 y^6-701 y^7+120 y^8\big)
\Bigr]
\nonumber\\&&
+\frac{15 R_3
x^6}{4 (1-x)^2 (x
-y
)^4}
-\frac{15 R_1 x^6 (1-y)}{2 (1-x)^2 y (x
-y
)^4}
\biggr]
+\frac{15 R_9 x^6 (1-y)^2}{2 (1-x)^2 y (x
-y
)^4}
\nonumber\\&&
+\frac{15 R_6 x^6 (1-y) (-1
+(2+x) y
)}{4 (1-x)^2 y (x
-y
)^4}
+\frac{15 R_3 x^6}{4 (1-x)^2 (x
-y
)^4}
+\frac{15 R_2 x^6 (1-y)}{4 (1-x)^2 y (x
-y
)^4}
\nonumber\\&&
+\frac{15 R_4 x^6 (1-y)}{2 (1-x)^2 y (x
-y
)^4}
-\frac{15 R_7 x^6 (1-y)}{4 (1-x)^2 y (x
-y
)^4}
-\frac{15 R_{11} x^6 (1-y) }{2 y (x
-y)^4 (1-x)^{7/2}}.
\end{eqnarray}
\subsection{Example 2}
Consider for example the system of equations
\begin{eqnarray}
\Big[1+\varepsilon +(2-x+\varepsilon ) \partial_x+2 x (1+x)
\partial_x^2\Big]\mathcal F(x,y)=0,\\
\Big[2-\varepsilon +(1-2 y+2 \varepsilon ) \partial_y+y (3+y) \partial_y^2\Big]\mathcal F(x,y)=0.
\end{eqnarray}
We can write its solution as
\begin{equation}
\mathcal F(x,y) = \sum_{m,n\ge 0}A(m,n) x^m y^n
\end{equation}
with
\begin{equation}
A(m,n) = \bigg(
\prod_{i_1=1}^m \frac{-6
-\varepsilon
+7 i_1
-2 i_1^2
}{\big(
\varepsilon
+2 i_1
\big) i_1}\bigg) \prod_{i_1=1}^n \frac{-6
+\varepsilon
+5 i_1
-i_1^2
}{\big(
-2
+2 \varepsilon
+3 i_1
\big) i_1}.
\end{equation}
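As a consistency check (a sketch in exact rational arithmetic, independent of the solver; the rational sample value of $\varepsilon$ is arbitrary), the coefficients of the $x$- and $y$-products satisfy exactly the two-term recurrences that the two differential operators impose on the corresponding power series:

```python
# The product coefficients a_m, b_n satisfy the recurrences obtained by
# inserting sum_m a_m x^m and sum_n b_n y^n into the two operators.
from fractions import Fraction as F

eps = F(1, 7)  # arbitrary rational sample value

a = [F(1)]
b = [F(1)]
for i in range(1, 25):
    a.append(a[-1] * (-6 - eps + 7 * i - 2 * i**2) / ((eps + 2 * i) * i))
    b.append(b[-1] * (-6 + eps + 5 * i - i**2) / ((-2 + 2 * eps + 3 * i) * i))

def x_residue(m):
    # coefficient of x^m in [1 + eps + (2 - x + eps) d_x + 2 x (1+x) d_x^2] F_1
    return (2 * a[m + 1] * (m + 1) * m + (2 + eps) * (m + 1) * a[m + 1]
            + 2 * a[m] * m * (m - 1) - m * a[m] + (1 + eps) * a[m])

def y_residue(n):
    # coefficient of y^n in [2 - eps + (1 - 2y + 2 eps) d_y + y (3+y) d_y^2] F_2
    return (3 * b[n + 1] * (n + 1) * n + (1 + 2 * eps) * (n + 1) * b[n + 1]
            + b[n] * n * (n - 1) - 2 * n * b[n] + (2 - eps) * b[n])
```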
The quantity $A(m,n)$ can also be expressed as
\begin{eqnarray}
A(m,n) &=& \frac{(-1)^m \big(
-\frac{3}{4}-\frac{1}{4} \sqrt{1
-8 \varepsilon
}\big)_m \big(
\frac{1}{4} \big(
-3+\sqrt{1
-8 \varepsilon
}\big)\big)_m }{\big(
1+\frac{\varepsilon }{2}\big)_m \Gamma (1+m) }
\nonumber\\&&\times
\frac{ (-1)^n 3^{-n} \big(
-\frac{3}{2}-\frac{1}{2} \sqrt{1
+4 \varepsilon
}\big)_n \big(
\frac{1}{2} \big(
-3+\sqrt{1
+4 \varepsilon
}\big)\big)_n}{\big(
\frac{1}{3}+\frac{2 \varepsilon }{3}\big)_n \Gamma (1+n)}
\end{eqnarray}
and $\mathcal F(x,y)$ can be rewritten as
\begin{eqnarray}
\mathcal F(x,y) &=& \Big( \sum_{m\ge 0} x^m f_1(m,\varepsilon) \Big) \Big( \sum_{n\ge 0}y^n f_2(n,\varepsilon) \Big)
\\
&=& F_1(x,\varepsilon) F_2(y,\varepsilon).
\end{eqnarray}
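The equivalence of the product form and the Pochhammer form of $A(m,n)$ can be checked numerically (a sketch; $\varepsilon=0.1$ and the small ranges of $m$, $n$ are illustrative sample points):

```python
# Numeric comparison of the two representations of A(m,n).
from math import sqrt, isclose

def poch(a, m):
    # Pochhammer symbol (a)_m
    r = 1.0
    for k in range(m):
        r *= a + k
    return r

def A_prod(m, n, eps):
    r = 1.0
    for i in range(1, m + 1):
        r *= (-6 - eps + 7 * i - 2 * i**2) / ((eps + 2 * i) * i)
    for i in range(1, n + 1):
        r *= (-6 + eps + 5 * i - i**2) / ((-2 + 2 * eps + 3 * i) * i)
    return r

def A_poch(m, n, eps):
    s = sqrt(1 - 8 * eps)
    t = sqrt(1 + 4 * eps)
    fm = ((-1)**m * poch(-3 / 4 - s / 4, m) * poch((-3 + s) / 4, m)
          / (poch(1 + eps / 2, m) * poch(1.0, m)))
    fn = ((-1)**n * 3.0**(-n) * poch(-3 / 2 - t / 2, n) * poch((-3 + t) / 2, n)
          / (poch(1 / 3 + 2 * eps / 3, n) * poch(1.0, n)))
    return fm * fn
```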
Expanding $F_1$ and $F_2$ in a series in $\varepsilon$ using {\tt EvaluateMultiSums}, one obtains an expression containing infinite (nested) sums. These are rewritten as iterated integrals following \cite{Ablinger:2014bra}. Two of the sums are written in semi--analytic form as definite integrals by writing part of the summand as the Mellin transform of a function. For example, we encounter the sum
\begin{equation}
s_1=\sum_{i=1}^{\infty } \frac{(-1)^i x^i \big(
-\frac{3}{2}+i\big)! \big(
\sum_{j=1}^i \frac{1}{1+2 j}\big) S_1({i})}{i i!}.
\end{equation}
By isolating the term $i=1$ and applying the Legendre duplication formula
\begin{equation}
\Gamma\Big(z+\frac{1}{2}\Big) = \sqrt{\pi} \frac{\Gamma(2z)}{2^{2z-1} \Gamma(z)}
\end{equation}
and the identity
\begin{equation}
\Gamma(2z) = \frac{1}{2} \binom{2z}{z} \Gamma(z) \Gamma(z+1)
\end{equation}
we write
\begin{eqnarray}
s_1 &=& -\frac{1}{3} x \sqrt{\pi }
+
\sum_{i=1}^{\infty } \frac{(-1)^{1+i} 2^{-2 i} \sqrt{\pi } x^{1+i}
\binom{2 i}{i}}{(1+i)^3 (3+2 i)}
+
\sum_{i=1}^{\infty } \frac{(-1)^{1+i} 2^{-2 i} \sqrt{\pi } x^{1+i}
\binom{2 i}{i}
\sum_{j=1}^i \frac{1}{1+2 j}}{(1+i)^3}
\nonumber\\&&
+
\sum_{i=1}^{\infty } \frac{(-1)^{1+i} 2^{-2 i} \sqrt{\pi } x^{1+i}
\binom{2 i}{i} S_1({i})}{(1+i)^2 (3+2 i)}
\nonumber\\&&
+
\sum_{i=1}^{\infty } \frac{(-1)^{1+i} 2^{-2 i} \sqrt{\pi } x^{1+i}
\binom{2 i}{i} \big(
\sum_{j=1}^i \frac{1}{1+2 j}\big) S_1({i})}{(1+i)^2}.
\end{eqnarray}
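That this rewriting reproduces the original sum can be checked numerically (a sketch; the sample point $x=\tfrac12$ and the truncation are illustrative; the ratio $\Gamma(i-\tfrac12)/i!$ is updated iteratively to avoid large factorials):

```python
# Numeric check that the rewriting of s_1 via the duplication formula agrees
# with the original sum.
from math import sqrt, pi, isclose

x = 0.5

# original form: sum_i (-1)^i x^i Gamma(i-1/2) (sum_{j<=i} 1/(1+2j)) S_1(i)/(i i!)
lhs = 0.0
c = -x * sqrt(pi)   # (-1)^i x^i Gamma(i - 1/2)/i!, starting at i = 1
h = S1 = 0.0
for i in range(1, 201):
    if i > 1:
        c *= -x * (i - 1.5) / i
    h += 1.0 / (1 + 2 * i)
    S1 += 1.0 / i
    lhs += c * h * S1 / i

# rewritten form: isolated i = 1 term plus the four binomial sums
rhs = -x * sqrt(pi) / 3
bin4 = 1.0          # 4^{-i} binomial(2i, i)
h = S1 = 0.0
for i in range(1, 201):
    bin4 *= (2 * i - 1) / (2 * i)
    h += 1.0 / (1 + 2 * i)
    S1 += 1.0 / i
    pref = (-1)**(1 + i) * sqrt(pi) * x**(1 + i) * bin4
    rhs += pref * (1 / ((1 + i)**3 * (3 + 2 * i))
                   + h / (1 + i)**3
                   + S1 / ((1 + i)**2 * (3 + 2 * i))
                   + h * S1 / (1 + i)**2)
```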
The first three sums are treated following \cite{Ablinger:2014bra}. The fourth sum can be written as
\begin{eqnarray}
t_1 &=& \sum_{i=1}^{\infty } \frac{(-1)^{1+i} 2^{-2 i} \sqrt{\pi } x^{1+i}
\binom{2 i}{i} \big(
\sum_{j=1}^i \frac{1}{1+2 j}\big) S_1({i})}{(1+i)^2}
\nonumber\\&=&
\sum_{i=1}^\infty \frac{(-1)^{1+i}2^{-2i}\sqrt{\pi} {x^{1+i}} \binom{2i}{i}}{(1+i)^2} \int_0^1 dz \bigg\{ (z^i-1) \frac{1}{2 (-1+z)} \bigg[
-2
+2 z
+\big(
1+\sqrt{z}\big) G\Big(
\frac{\sqrt{\tau }}{1-\tau };z\Big)
\nonumber\\&&
+2 \sqrt{z} \big(1-\ln (2)\big)
\bigg] \bigg\}
\nonumber\\&=&
\int_0^1 dz \bigg\{ { \frac{1}{2(z-1)} } \bigg[
-2
+2 z
+\big(
1+\sqrt{z}\big) G\Big(
\frac{\sqrt{\tau }}{1-\tau };z\Big)
+2 \sqrt{z} \big(1-\ln (2)\big)
\bigg]
\nonumber\\&& \times
\sum_{i=1}^\infty \frac{\big(
-1+z^i\big) (-1)^{1+i} 2^{-2 i} \sqrt{\pi } x^{1+i} \binom{2 i}{i}}{(1+i)^2}
\bigg\}
\nonumber\\&=&
\int_0^1 dz \bigg\{ { \frac{1}{2(z-1)} }
\bigg[
-2
+2 z
+\big(
1+\sqrt{z}\big) G\Big(
\frac{\sqrt{\tau }}{1-\tau };z\Big)
+2 \sqrt{z} \big(1-\ln (2)\big)
\bigg]
\bigg[
-\frac{4}{z} \Big[
-1
+z
\nonumber\\&&
+\sqrt{1
+x z
}
-z \sqrt{1+x}
+z H_0\Big(
\frac{1}{2} \big(
1+\sqrt{1+x}\big)\Big)
-H_0\Big(
\frac{1}{2} \big(
1+\sqrt{1+x z}\big)\Big)
\Big] \sqrt{\pi }
\bigg]
\bigg\}.
\end{eqnarray}
The $\varepsilon$--expansions of $F_1(x,\varepsilon)$ and $F_2(y,\varepsilon)$ can then be written as
\begin{eqnarray}
F_1(x,\varepsilon) &=& 1
-
\frac{x}{2}
+\varepsilon \biggl\{
-1
+\sqrt{1+x}
+\frac{1}{4} x \big(
-9+4 \sqrt{1
+x
}\big)
+\frac{1}{2} (-2+x) H_0(x)
\nonumber\\&&
+\frac{1}{2} (2-x) G_{3}(x)
\biggr\}
+\varepsilon ^2 \Biggl\{
\frac{1}{8} \Bigl[
20 \big(
-1+\sqrt{1
+x
}\big)
+x \big(
-33
+4 x
+20 \sqrt{1+x}
\big)
\Bigr]
\nonumber\\&&
+(-2+x) \Bigl(
-\frac{1}{4} H_0(x)^2
+\frac{1}{4} H_{0,-1}(x)
+\frac{ G_{11}(x)}{4}
-\frac{ G_{12}(x)}{4}
+\frac{ G_{5}(x)^2}{8}
\Bigr)
\nonumber\\&&
+\frac{1}{2} (2-x) \bigl( G_{8}(x)
+ G_{9}(x)
\bigr)
+\frac{1}{4} (-4+13 x) H_0(x)
+\frac{1}{2} (1+x)^{3/2} H_{-1}(x)
\nonumber\\&&
+\biggl[
1
-\frac{13 x}{4}
+\frac{1}{2} (-2+x) H_0(x)
\biggr] G_{3}(x)
+\biggl[
\frac{1}{2} (1+x)^{3/2}
+\frac{1}{4} (2-x) H_0(x)
\biggr] G_{5}(x)
\Biggr\}
\nonumber\\&&
+\varepsilon ^3 \Biggl\{
\frac{ G_{5}(x)^2}{16} \Bigl[
x \big(
17-4 \sqrt{1
+x
}\big)
-4 \big(
3+\sqrt{1
+x
}\big)
\Bigr]
+ G_{11}(x) \Bigl[
\frac{1}{8} \big(
x \big(
13-6 \sqrt{1
+x
}\big)
\nonumber\\&&
-6 \sqrt{1+x}
\big)
+\frac{1}{8} (-2+x) H_0(x)
\Bigr]
+ G_{12}(x) \biggl[
\frac{1}{8} \Bigl[
2 \big(
6+\sqrt{1
+x
}\big)
+x \big(
-17+2 \sqrt{1
+x
}\big)
\Bigr]
\nonumber\\&&
+\frac{3}{8} (-2+x) H_0(x)
\biggr]
+ G_{3}(x) \biggl[
\frac{9}{2}
-\frac{105 x}{8}
+\frac{1}{4} (-14+17 x) H_0(x)
+\frac{1}{4} (2-x) H_0(x)^2
\biggr]
\nonumber\\&&
+ G_{5}(x) \Biggl[
(-2+x) \Bigl(
-\frac{1}{16} H_0(x)^2
+\frac{1}{8} H_{0,-1}(x)
+\frac{3 G_{11}(x)}{8}
-\frac{ G_{12}(x)}{8}
\Bigr)
\nonumber\\&&
+\frac{1}{4} \biggl[
-6 x
-2 x^2
+(1+x) (5+16 x) \sqrt{1+x}
\biggr]
+\frac{1}{8} \biggl[
x \big(
-13+6 \sqrt{1
+x
}\big)
\nonumber\\&&
+6 \sqrt{1+x}
\biggr] H_0(x)
-\frac{1}{4} (1+x)^{3/2} H_{-1}(x)
\Biggr]
+(-2+x) \big(
\frac{1}{12} H_0(x)^3
-\frac{3}{8} H_{0,0,-1}(x)
\nonumber\\&&
-\frac{1}{8} H_{0,-1,-1}(x)
-\frac{ G_{18}(x)}{8}
-\frac{5 G_{19}(x)}{8}
-\frac{3 G_{20}(x)}{8}
+\frac{ G_{21}(x)}{8}
-\frac{3 G_{22}(x)}{4}
+\frac{ G_{23}(x)}{4}
\nonumber\\&&
-\frac{1}{24} G_{5}(x)^3
\big)
+\frac{1}{2} (2-x) ( G_{14}(x)
+ G_{15}(x)
+ G_{16}(x)
+ G_{17}(x)
)
\nonumber\\&&
+\frac{1}{240} \Biggl\{
-60 (-2+x)
\int_0^1 dz \biggl\{
-\frac{1}{(-1+z) z}2 \sqrt{\pi } \Bigl[
-2
+2 z
+\big(
1+\sqrt{z}\big) G_1(z)
\nonumber\\&&
-2 \sqrt{z} \bigl(-1+\ln (2)\bigr)
\Bigr]
\biggl[-1
+z
-\sqrt{1+x} z
+\sqrt{1
+x z
}
+z H_0\Big(
\frac{1}{2} \big(
1+\sqrt{1+x}\big)\Big)
\nonumber\\&&
-H_0\Big(
\frac{1}{2} \big(
1+\sqrt{1+x
z}\big)\Big)
\biggr] \biggr\}
+180 x
\int_0^1 dz \biggl\{
-
\frac{1}{4 \sqrt{1
+x z
}}\sqrt{\pi } x \big(
-1+\sqrt{1
+x z
}
\big)
\biggl[4
\nonumber\\&&
+2 \sqrt{z} \Bigl[
-8
+2 z^{3/2}
-3 \sqrt{z} \big(
2
+\zeta_2
\big)
+z \big(6-4 \ln (2)\big)
+8 \ln (2)
\Bigr]
+4 H_0(z)
\nonumber\\&&
+4 H_1(z)
+2 \Bigl[
-3
-4 \sqrt{z}
+3 z
+2 z^{3/2}
+(-1+z) H_0(z)
+(-1+z) H_1(z)
+2 \ln (2)
\nonumber\\&&
-2 z \ln (2)
\Bigr] G_{1}(z)
+(-1+z) G_{1}(z)^2
-2 (-1+z) G_{6}(z)
-2 (-1+z) G_{7}(z)
+6 \zeta_2
\biggr]\biggr\}
\nonumber\\&&
+\biggl[
x \Bigl[
-2215
+4 x \big(
435
+96 x
-500 \sqrt{1+x}
\big)
+60 \sqrt{1+x}
\Bigr]
\nonumber\\&&
+2060 \big(
-1+\sqrt{1
+x
}\big)
\biggr] \sqrt{\pi }
\Biggr\} \frac{1}{\sqrt{\pi }}
+\frac{3}{8} (-12+35 x) H_0(x)
+\frac{1}{8} (14-17 x) H_0(x)^2
\nonumber\\&&
+\frac{1}{4} (11-16 x) (1+x)^{3/2} H_{-1}(x)
+\frac{5}{8} (1+x)^{3/2} H_{-1}(x)^2
+\frac{1}{8} \Bigl[
x \big(
17-2 \sqrt{1
+x
}\big)
\nonumber\\&&
-2 \big(
6+\sqrt{1
+x
}\big)
\Bigr] H_{0,-1}(x)
+8 \sqrt{x} G_{10}(x)
+8 \sqrt{x} G_{13}(x)
+\frac{ G_{2}(x)}{2}
\nonumber\\&&
+\Bigl[
2 (17-4 x) \sqrt{x}
-8 \sqrt{x} G_{5}(x)
\Bigr] G_{4}(x)
+2 (-9+4 x) \sqrt{x} G_{4}(x)
+\biggl[
\frac{7}{2}
-\frac{17 x}{4}
\nonumber\\&&
+\frac{1}{2} (-2+x) H_0(x)
\biggr] G_{8}(x)
+\biggl[
-\frac{7}{2} (-1+x)
+\frac{1}{2} (-2+x) H_0(x)
\biggr] G_{9}(x)
\Biggr\}
\nonumber\\&&
+\mathcal{O}(\varepsilon^4),
\end{eqnarray}
\begin{eqnarray}
F_2(y,\varepsilon) &=& 1
-2 y
-
\frac{1}{4} (-20+y) y \varepsilon
+\varepsilon ^2 \biggl\{
-\frac{1}{48} y \big(
480-765 y-56 y^2+64 y^3+12 y^4\big)
\nonumber\\&&
+\frac{1}{4} (-9+4 y) y^{2/3} (3+y)^{4/3} G_{26}(y)
+(1-2 y) G_{30}(y)
\biggr\}
+\varepsilon ^3 \Biggl\{
\frac{1}{192} y \big(
3840-21453 y
\nonumber\\&&
-1672 y^2+1638 y^3+280 y^4-6 y^5\big)
+ G_{26}(y) \biggl[
\frac{1}{16} \big(
243-108 y+2 y^2\big) y^{2/3} (3+y)^{4/3}
\nonumber\\&&
+\frac{1}{6} (9-4 y) y^{2/3} (3+y)^{4/3} H_0(y)
+\frac{1}{6} (-9+4 y) y^{2/3} (3+y)^{4/3} H_{-3}(y)
\nonumber\\&&
+\frac{1}{270} \big(
-1215+108 y+4 y^2\big) y^{2/3} G_{24}(y)
+\frac{7}{5} (-1+2 y) G_{25}(y)
+\frac{2}{3} (-1+2 y) G_{28}(y)
\nonumber\\&&
-\frac{2}{3} (-1+2 y) G_{29}(y)
\biggr]
+\big(
-1215+108 y+4 y^2
\big)
\biggl[-\frac{1}{270} y^{2/3} G_{27}(y)
-\frac{1}{270} y^{2/3} G_{33}(y)
\biggr]
\nonumber\\&&
+(-1+2 y) \Bigl(
-\frac{7 G_{34}(y)}{5}
+\frac{2 G_{35}(y)}{3}
\Bigr)
+\biggl[
\big(
\frac{2}{3}-\frac{4 y}{3}\big) G_{31}(y)
+\frac{2}{3} (-1+2 y) G_{32}(y)
\biggr] G_{25}(y)
\nonumber\\&&
+\frac{1}{20} \big(
-52+204 y-5 y^2\big) G_{30}(y)
+\frac{1}{6} (-9+4 y) y^{2/3}
(3+y)^{4/3} G_{31}(y)
\nonumber\\&&
-\frac{1}{6} (3+y) (-9+4 y) y^{2/3} \sqrt[3]{3+y} G_{32}(y)
-\frac{2}{3} (-1+2 y) G_{36}(y)
\Biggr\}+\mathcal{O}(\varepsilon^4).
\end{eqnarray}
By multiplying $F_1(x)$ and $F_2(y)$ one obtains the series expansion of $\mathcal F(x,y)$ for $0<x<1$, $0<y<1$.
The functions $G_i$ are the iterated integrals of the type
\cite{Ablinger:2014bra}
\begin{eqnarray}
G(f_1(\tau), \ldots, f_n(\tau); x)
= \int_0^x dy\, f_1(y)\, G(f_2(\tau), \ldots, f_n(\tau);y),
\end{eqnarray}
with
\begin{eqnarray}
G_{1} (z) &=& G \Big(
\frac{\sqrt{\tau }}{1-\tau };z\Big)=-2 \sqrt{z}
+H_1\big(
\sqrt{z}\big)
+H_{-1}\big(
\sqrt{z}\big)
,
\\
G_{2} (x) &=& G \Big(
\sqrt{1+\tau };x\Big)=-\frac{2}{3}
+\frac{2 \sqrt{1+x}}{3}
+\frac{2}{3} x \sqrt{1+x}
,
\\
G_{3} (x) &=& G \Big(
\frac{\sqrt{1+\tau }}{\tau };x\Big)=-2
+2 \ln(2)
-i \pi
+2 \sqrt{1+x}
-H_1\big(
\sqrt{1+x}\big)
\nonumber\\&&
-H_{-1}\big(
\sqrt{1+x}\big)
,
\\
G_{4} (x) &=& G \big(
\sqrt{\tau } \sqrt{1+\tau };x\big)=\frac{1}{4}
\sqrt{x (1+x)}
+\frac{1}{2} x \sqrt{x (1+x)}
-\frac{1}{4} \ln \big(
\sqrt{x}
+\sqrt{1+x}
\big)
,
\\
G_{5} (x) &=& G \Big(
\frac{1-\sqrt{1
+\tau
}}{\tau };x\Big)=2
-2 \ln(2)
-2 \sqrt{1+x}
+2 H_{-1}\big(
\sqrt{1+x}\big)
,
\\
G_{6} (z) &=& G \Big(
\frac{\sqrt{\tau }}{1-\tau },\frac{1}{1-\tau };z\Big)=-4 \sqrt{z}
-2 \sqrt{z} H_1(z)
+H_1\big(
\sqrt{z}\big) H_1(z)
+H_{-1}\big(
\sqrt{z}\big) H_1(z)
\nonumber\\&&
+2 H_1\big(
\sqrt{z}\big)
-H_{-1}\big(
\sqrt{z}\big) H_1\big(
\sqrt{z}\big)
-\frac{1}{2} H_1^2\big(\sqrt{z}\big)
+2 H_{-1}\big(\sqrt{z}\big)
+
\frac{1}{2} H_{-1}^2\big(\sqrt{z}\big)
\nonumber\\&&
+2 H_{-1,1}\big(
\sqrt{z}\big)
,
\\
G_{7} (z) &=& G \Big(
\frac{\sqrt{\tau }}{1-\tau },\frac{1}{\tau };z\Big)=4 \sqrt{z}
-2 \sqrt{z} H_0(z)
+H_{-1}\big(
\sqrt{z}\big) H_0(z)
+H_0(z) H_1\big(
\sqrt{z}\big)
\nonumber\\&&
-2 H_{0,1}\big(
\sqrt{z}\big)
-2 H_{0,-1}\big(
\sqrt{z}\big)
,
\\
G_{8} (x) &=& G \Big(
\frac{\sqrt{1+\tau }}{\tau },\frac{1}{\tau
};x\Big)=4
+2 i \pi
-\frac{2 \pi ^2}{3}
-4 \sqrt{1+x}
-4 \ln(2)
-2 i \pi \ln(2)
\nonumber\\&&
+2 \ln^2(2)
+2 \sqrt{1+x} H_0(x)
-H_{-1}\big(
\sqrt{1+x}\big) H_0(x)
+2 H_1\big(
\sqrt{1+x}\big)
\nonumber\\&&
-H_0(x) H_1\big(
\sqrt{1+x}\big)
-H_{-1}\big(
\sqrt{1+x}\big) H_1\big(
\sqrt{1+x}\big)
-\frac{1}{2} H_1^2\big(
\sqrt{1+x}\big)
\nonumber\\&&
+2 H_{-1}\big(
\sqrt{1+x}\big)
+\frac{1}{2} H_{-1}^2\big(
\sqrt{1+x}\big)
+2 H_{-1,1}\big(
\sqrt{1+x}\big)
,
\\
G_{9} (x) &=& G \Big(
\frac{\sqrt{1+\tau }}{\tau },\frac{1}{1+\tau };x\Big)=4
-\frac{\pi ^2}{2}
-4 \sqrt{1+x}
-H_{-1}(x) H_1\big(
\sqrt{1+x}\big)
+\frac{2 H_{-1}(x)}{\sqrt{1+x}}
\nonumber\\&&
+\frac{2 x H_{-1}
(x)}{\sqrt{1+x}}
-H_{-1}(x) H_{-1}\big(
\sqrt{1+x}\big)
-2 H_{0,1}\big(
-\sqrt{1+x}\big)
+2 H_{0,1}\big(
\sqrt{1+x}\big)
,
\\
G_{10} (x) &=& G \Big(
\sqrt{\tau } \sqrt{1+\tau },
\frac{1}{1+\tau };x\Big)=
\frac{1}{48} \bigg\{
6 \sqrt{x} \sqrt{1+x}
-12 x^{3/2} \sqrt{1+x}
\nonumber\\&&
-6 H_0\big(
\sqrt{x}+\sqrt{1+x}\big)
-12 H_{-1}(x) H_0\big(
\sqrt{x}+\sqrt{1+x}\big)
\nonumber\\&&
+24 H_{-1}\Big(
\big(
\sqrt{x}+\sqrt{1+x}\big)^2\Big) H_0\big(
\sqrt{x}+\sqrt{1+x}\big)
-12 H_0^2\big(
\sqrt{x}+\sqrt{1+x}\big)
\nonumber\\&&
+12 \sqrt{x} \sqrt{1+x} H_{-1}(x)
+24 x^{3/2} \sqrt{1+x} H_{-1}(x)
-12 H_{0,-1}\Big(
\big(
\sqrt{x}+\sqrt{1+x}\big)^2\Big)
\nonumber\\&&
+6 \zeta_2
\bigg\},
\\
G_{11} (x) &=& G \Big(
\frac{1-\sqrt{1
+\tau
}}{\tau },\frac{1}{\tau };x\Big)=-4
-2 i \pi
+\frac{\pi ^2}{6}
+4 \sqrt{1+x}
+4 \ln(2)
+2 i \pi \ln(2)
\nonumber\\&&
-2 \ln^2(2)
-2 \sqrt{1+x} H_0(x)
+H_0(-x) H_0(x)
+H_{-1}\big(
\sqrt{1+x}\big) H_0(x)
-\frac{1}{2} H_0^2(-x)
\nonumber\\&&
-2 H_1\big(
\sqrt{1+x}\big)
+H_0(x) H_1\big(
\sqrt{1+x}\big)
+H_{-1}\big(
\sqrt{1+x}\big) H_1\big(
\sqrt{1+x}\big)
\nonumber\\&&
+\frac{1}{2} H_1^2\big(
\sqrt{1+x}\big)
-2 H_{-1}\big(
\sqrt{1+x}\big)
-\frac{1}{2} H_{-1}^2\big(
\sqrt{1+x}
\big)
-2 H_{-1,1}\big(
\sqrt{1+x}\big)
,
\\
G_{12} (x) &=& G \Big(
\frac{1-\sqrt{1
+\tau
}}{\tau },\frac{1}{1+\tau };x\Big)=-4
+\frac{\pi ^2}{3}
+4 \sqrt{1+x}
+H_{-1}(x) H_0(1)
\nonumber\\&&
+H_{-1}(x) H_0(-x)
+H_0(-x) H_1(-x)
+H_{-1}(x) H_1\big(
\sqrt{1+x}\big)
-2 \sqrt{1+x} H_{-1}(x)
\nonumber\\&&
+H_{-1}(x) H_{-1}\big(
\sqrt{1+x}\big)
+\zeta_2
-H_{0,1}(-x)
-2 H_{0,1}\big(
\sqrt{1+x}\big)
\nonumber\\&&
-2 H_{0,-1}\big(
\sqrt{1+x}\big)
,
\\
G_{13} (x) &=& G \Big(
\frac{1-\sqrt{1
+\tau
}}{\tau },\sqrt{\tau } \sqrt{1+\tau
};x\Big)=\frac{1}{40} \bigg[
-40 \sqrt{x}
-20 x^{3/2}
-8 x^{5/2}
+15 \sqrt{x (1+x)}
\nonumber\\&&
+10 x \sqrt{x (1+x)}
\bigg]
+\bigg[
\frac{1}{8} \big(
1+4 \sqrt{1
+x
}\big)
-\frac{1}{2} H_1\big(
\sqrt{x}+\sqrt{1+x}\big)
\nonumber\\&&
+\frac{1}{2} H_1\Big(
\big(
\sqrt{x}+\sqrt{1+x}\big)^2\Big)
-\frac{1}{2} H_{-1}\big(
\sqrt{x}+\sqrt{1+x}\big)
\bigg] H_0\big(
\sqrt{x}+\sqrt{1+x}\big)
\nonumber\\&&
+\frac{1}{4} H_0^2\big(
\sqrt{x}+\sqrt{1+x}\big)
-\frac{1}{2} \zeta_2
+H_{0,-1}\big(
\sqrt{x}+\sqrt{1+x}\big),
\\
G_{14} (x) &=& G \Big(
\frac{\sqrt{1+\tau }}{\tau },\frac{1}{\tau
},\frac{1}{\tau };x\Big)
=
-8
+
\frac{4 \ln^3(2)}{3}
-4 i \pi
+\frac{4 \pi ^2}{3}
+\frac{5 i \pi ^3}{6}
-2 \ln^2(2) (2+i \pi )
\nonumber\\&&
+\frac{1}{3} \ln(2) \big(
24+12 i \pi -\pi ^2\big)
+8 \sqrt{1+x}
+\Big[
-i \ln(2) \pi
+\frac{3 \pi ^2}{2}
-4 \sqrt{1+x}
\nonumber\\&&
-2 (-1+i \pi ) H_1\big(
\sqrt{1+x}\big)
-\frac{1}{2} H_1^2\big(
\sqrt{1+x}\big)
+2 H_{-1,1}\big(
\sqrt{1+x}\big)
\Big] H_0(x)
\nonumber\\&&
+\Big[
-i \pi
+\sqrt{1+x}
-\frac{1}{2} H_1\big(
\sqrt{1+x}\big)
\Big] H_0^2(x)
+\Big[
-4
-i \ln(2) \pi
+\frac{3 \pi ^2}{2}
\nonumber\\&&
+2 H_{-1,1}\big(
\sqrt{1+x}\big)
\Big] H_1\big(
\sqrt{1+x}\big)
+(1-i \pi ) H_1^2\big(
\sqrt{1+x}\big)
-\frac{1}{6} H_1^3\big(
\sqrt{1+x}\big)
\nonumber\\&&
+\bigg[
-4
+i \ln(2) \pi
-\frac{3 \pi ^2}{2}
+\Big[
2
+2 i \pi
-H_1\big(
\sqrt{1+x}\big)
\Big] H_0(x)
-\frac{1}{2} H_0^2(x)
\nonumber\\&&
+2 (1+i \pi ) H_1\big(
\sqrt{1+x}\big)
-\frac{1}{2} H_1^2\big(
\sqrt{1+x}\big)
\bigg] H_{-1}\big(
\sqrt{1+x}\big)
+\Big[
-1
-i \pi
+\frac{1}{2} H_0(x)
\nonumber\\&&
+\frac{1}{2} H_1\big(
\sqrt{1+x}\big)
\Big] H_{-1}^2\big(
\sqrt{1+x}\big)
-\frac{1}{6} H_{-1}^3\big(
\sqrt{1+x}\big)
-4 H_{-1,1}
\big(
\sqrt{1+x}\big)
\nonumber\\&&
-2 H_{-1,1,1}\big(
\sqrt{1+x}\big)
-2 H_{-1,-1,1}\big(
\sqrt{1+x}\big)
+2 \zeta_3
,
\\
G_{15} (x) &=& G \Big(
\frac{\sqrt{1+\tau }}{\tau },\frac{1}{\tau
},\frac{1}{1+\tau };x\Big)
=
8 \big(
-1+\sqrt{1
+x
}\big)
-H_{-1}^2\big(
\sqrt{1+x}\big) H_0\big(
\sqrt{1+x}\big)
\nonumber\\&&
-8 \sqrt{1+x} H_{-1}\big(
-1+\sqrt{1+x}\big)
+\Big[
4 \big(
1+\sqrt{1
+x
}\big) H_0\big(
\sqrt{1+x}\big)
+2 H_{0,-1}\big(
\sqrt{1+x}\big)
\nonumber\\&&
-\zeta_2
\Big] H_{-1}\big(
\sqrt{1+x}\big)
+4 \big(
-1+\sqrt{1
+x
}\big) H_{0,-1}\big(
-1+\sqrt{1+x}\big)
\nonumber\\&&
-4 \big(
1+\sqrt{1
+x
}\big) H_{0,-1}\big(
\sqrt{1+x}\big)
+2 H_{0,0,-1}\big(
-1+\sqrt{1+x}\big)
-2 H_{0,-1,-1}\big(
\sqrt{1+x}\big)
\nonumber\\&&
+2 H_{0,-2,-1}\big(
-1+\sqrt{1+x}\big)
-2 H_{-2,0,-1}\big(
-1+\sqrt{1+x}\big)
+2 \big(
1+\sqrt{1
+x
}\big) \zeta_2
\nonumber\\&&
+\frac{1}{4} \zeta_3
,
\\
G_{16} (x) &=& G \Big(
\frac{\sqrt{1+\tau }}{\tau },\frac{1}{1+\tau
},\frac{1}{\tau };x\Big)
=
8 \big(
-1+\sqrt{1
+x
}\big)
-4 \ln(2) \big(
-1+\sqrt{1
+x
}\big)
\nonumber\\&&
+\Big[
-4 \big(
-1+\sqrt{1
+x
}\big)
+2 H_{0,-1}\big(
-1+\sqrt{1+x}\big)
\Big] H_0\big(
-1+\sqrt{1+x}\big)
\nonumber\\&&
+\Big[
4 \ln(2) \sqrt{1+x}
+4 \sqrt{1+x} H_0\big(
-1+\sqrt{1+x}\big)
\Big] H_{-1}\big(
-1+\sqrt{1+x}\big)
\nonumber\\&&
-4 \big(
1+\sqrt{1
+x
}\big) H_{-2}\big(
-1+\sqrt{1+x}\big)
+\Big[
2 \ln(2)
-4 \sqrt{1+x}
\Big] H_{0,-1}\big(
-1+\sqrt{1+x}\big)
\nonumber\\&&
+4 \sqrt{1+x} H_{-1,-2}\big(
-1+\sqrt{1+x}\big)
-2 \ln(2) H_{-2,-1}\big(
-1+\sqrt{1+x}\big)
\nonumber\\&&
-4 H_{0,0,-1}\big(
-1+\sqrt{1+x}\big)
+2 H_{0,-1,-2}\big(
-1+\sqrt{1+x}\big)
\nonumber\\&&
-2 H_{-2,-1,0}\big(
-1+\sqrt{1+x}\big)
-2 H_{-2,-1,-2}\big(
-1+\sqrt{1+x}\big)
,
\\
G_{17} (x) &=& G \Big(
\frac{\sqrt{1+\tau }}{\tau },\frac{1}{1+\tau
},\frac{1}{1+\tau };x\Big)
=
8 \big(
-1+\sqrt{1
+x
}\big)
-8 \sqrt{1+x} H_{-1}\big(
-1+\sqrt{1+x}\big)
\nonumber\\&&
+4 \sqrt{1+x} H_{-1}^2\big(
-1+\sqrt{1+x}\big)
+4 H_{0,-1,-1}\big(
-1+\sqrt{1+x}\big)
\nonumber\\&&
-4 H_{-2,-1,-1}\big(
-1+\sqrt{1+x}\big)
,
\\
G_{18} (x) &=& G \Big(
\frac{1-\sqrt{1
+\tau
}}{\tau },\frac{1}{\tau },\frac{1}{\tau
};x\Big)
=
8
-
\frac{4 \ln^3(2)}{3}
+2 \ln^2(2) (2+i \pi )
+4 i \pi
-\frac{5 i \pi ^3}{6}
\nonumber\\&&
+\ln(2) \big(
-8
-4 i \pi
+2 \zeta_2
\big)
-8 \sqrt{1+x}
+\Big[
i \ln(2) \pi
+4 \sqrt{1+x}
+2 (-1+i \pi ) H_1\big(
\sqrt{1+x}\big)
\nonumber\\&&
+\frac{1}{2} H_1^2\big(
\sqrt{1+x}\big)
-2 H_{-1,1}\big(
\sqrt{1+x}\big)
-9 \zeta_2
\Big] H_0(x)
+\Big[
\frac{i \pi }{2}
-\sqrt{1+x}
+\frac{1}{2} H_0(-x)
\nonumber\\&&
+\frac{1}{2} H_1\big(
\sqrt{1+x}\big)
\Big] H_0^2(x)
-\frac{1}{3} H_0^3(x)
+\Big[
4
+i \ln(2) \pi
-2 H_{-1,1}\big(
\sqrt{1+x}\big)
\nonumber\\&&
-9 \zeta_2
\Big] H_1\big(
\sqrt{1+x}\big)
+ (-1+i \pi ) H_1^2\big(
\sqrt{1+x}\big)
+\frac{1}{6} H_1^3\big(
\sqrt{1+x}\big)
+\bigg[
4
-i \ln(2) \pi
\nonumber\\&&
+\frac{1}{2} H_0^2(x)
-2 (1+i\pi ) H_1\big(
\sqrt{1+x}\big)
+\frac{1}{2} H_1^2\big(
\sqrt{1+x}\big)
+9 \zeta_2
+H_0(x) \Big[
-2-2 i \pi
\nonumber\\&&
+H_1\big(
\sqrt{1+x}\big)\Big]
\bigg] H_{-1}\big(
\sqrt{1+x}\big)
+\Big[
1
+i \pi
-\frac{1}{2} H_0(x)
\nonumber\\&&
-\frac{1}{2} H_1\big(
\sqrt{1+x}\big)
\Big] H_{-1}^2\big(
\sqrt{1+x}\big)
+\frac{1}{6} H_{-1}^3\big(
\sqrt{1+x}\big)
+4 H_{-1,1}\big(
\sqrt{1+x}
\big)
\nonumber\\&&
+2 H_{-1,1,1}\big(
\sqrt{1+x}\big)
+2 H_{-1,-1,1}\big(
\sqrt{1+x}\big)
-8 \zeta_2
-2 \zeta_3
,
\\
G_{19} (x) &=& G \Big(
\frac{1-\sqrt{1
+\tau
}}{\tau },\frac{1}{\tau },\frac{1}{1+\tau
};x\Big)
=
-8 \big(
-1+\sqrt{1
+x
}\big)
+8 \sqrt{1+x} H_{-1}\big(
-1+\sqrt{1+x}\big)
\nonumber\\&&
-4 \big(
-1+\sqrt{1
+x
}\big) H_{0,-1}\big(
-1+\sqrt{1+x}\big)
-4 \big(
1+\sqrt{1
+x
}\big) H_{-2,-1}\big(
-1+\sqrt{1+x}\big)
\nonumber\\&&
-2 H_{0,0,-1}\big(
-1+\sqrt{1+x}\big)
-2 H_{0,-2,-1}\big(
-1+\sqrt{1+x}\big)
+2 H_{-2,0,-1}\big(
-1+\sqrt{1+x}\big)
\nonumber\\&&
+2 H_{-2,-2,-1}\big(
-1+\sqrt{1+x}\big)
+H_{0,0,-1}(x)
,
\\
G_{20} (x) &=& G \Big(
\frac{1-\sqrt{1
+\tau
}}{\tau },\frac{1}{1+\tau },\frac{1}{\tau
};x\Big)
=
4 (-2+\ln(2)) \big(
-1+\sqrt{1
+x
}\big)
\nonumber\\&&
+\Big[
4 \big(
-1+\sqrt{1
+x
}\big)
-2 H_{0,-1}\big(
-1+\sqrt{1+x}\big)
\Big] H_0\big(
-1+\sqrt{1+x}\big)
\nonumber\\&&
+\Big[
-4 \ln(2) \sqrt{1+x}
-4 \sqrt{1+x} H_0\big(
-1+\sqrt{1+x}\big)
\Big] H_{-1}\big(
-1+\sqrt{1+x}\big)
\nonumber\\&&
+4 \big(
1+\sqrt{1
+x
}\big) H_{-2}\big(
-1+\sqrt{1+x}\big)
+H_0(x) H_{0,-1}(x)
\nonumber\\&&
+\Big[
-2 \ln(2)
+4 \sqrt{1+x}
\Big] H_{0,-1}\big(
-1+\sqrt{1+x}\big)
-4 \sqrt{1+x} H_{-1,-2}\big(
-1+\sqrt{1+x}\big)
\nonumber\\&&
+2 \ln(2) H_{-2,-1}\big(
-1+\sqrt{1+x}\big)
-2 H_{0,0,-1}(x)
+4 H_{0,0,-1}\big(
-1+\sqrt{1+x}\big)
\nonumber\\&&
-2 H_{0,-1,-2}\big(
-1+\sqrt{1+x}\big)
+2 H_{-2,-1,0}\big(
-1+\sqrt{1+x}\big)
\nonumber\\&&
+2 H_{-2,-1,-2}\big(
-1+\sqrt{1+x}\big)
,
\\
G_{21} (x) &=& G \Big(
\frac{1-\sqrt{1
+\tau
}}{\tau },\frac{1}{1+\tau },\frac{1}{1+\tau
};x\Big)
=
-8 \big(
-1+\sqrt{1
+x
}\big)
+\Big[
4 \sqrt{1+x}
\nonumber\\&&
-2 H_{0,1}\big(
\sqrt{1+x}\big)
-2 H_{0,-1}\big(
\sqrt{1+x}\big)
\Big] H_{-1}(x)
+\Big[
-\sqrt{1+x}
\nonumber\\&&
+\frac{1}{2} H_1\big(
\sqrt{1+x}\big)
+\frac{1}{2} H_{-1}\big(
\sqrt{1+x}\big)
\Big] H_{-1}(x)^2
+4 H_{0,0,1}\big(
\sqrt{1+x}\big)
\nonumber\\&&
+4 H_{0,0,-1}\big(
\sqrt{1+x}\big)
-7 \zeta_3
+H_{0,-1,-1}(x),
\\
G_{22} (x) &=& G \Big(
\frac{1-\sqrt{1
+\tau
}}{\tau },\frac{1-\sqrt{1
+\tau
}}{\tau },\frac{1}{\tau };x\Big)
=
-16
+
\frac{8 \ln^3(2)}{3}
-7 i \pi
-6 x
+4 \ln(2) \Big[
4
+2 i \pi
\nonumber\\&&
-2 \sqrt{1+x}
-i \pi \sqrt{1+x}
+7 \zeta_2
\Big]
+16 \sqrt{1+x}
+4 i \pi \sqrt{1+x}
+\ln^2(2) \Big[
-8+3 i \pi
\nonumber\\&&
+4 \sqrt{1
+x
}\Big]
+\bigg[
-3
+5 \ln^2(2)
+\ln(2) (4-5 i \pi )
+2 x
+\Big[
4-15 i \pi
\nonumber\\&&
-2 \sqrt{1
+x
}\Big] H_1\big(
\sqrt{1+x}\big)
-\frac{11}{2} H_1^2\big(
\sqrt{1+x}\big)
+42 \zeta_2
\bigg] H_0(x)
+\Big[
\frac{1}{2} \big(
4-15 i \pi -2 \sqrt{1
+x
}\big)
\nonumber\\&&
-\frac{11}{2} H_1\big(
\sqrt{1+x}\big)
\Big] H_0^2(x)
-\frac{11}{6} H_0^3(x)
+\Big[
-7
+5 \ln^2(2)
+\ln(2) (4-5 i \pi )
\nonumber\\&&
+4 \sqrt{1+x}
+42 \zeta_2
\Big] H_1\big(
\sqrt{1+x}\big)
+\frac{1}{2} \Big(
4-15 i \pi -2 \sqrt{1
+x
}\Big) H_1^2\big(
\sqrt{1+x}\big)
\nonumber\\&&
-\frac{11}{6} H_1^3\big(
\sqrt{1+x}\big)
+\bigg[
-9
-9 \ln^2(2)
+\ln(2) (4+9 i \pi )
-4 i \pi
+4 \sqrt{1+x}
+\Big[
-4
\nonumber\\&&
+15 i \pi
-2 \sqrt{1+x}
+13 H_1\big(
\sqrt{1+x}\big)
\Big] H_0(x)
+\frac{13}{2} H_0^2(x)
+\Big(
-4+15 i \pi
\nonumber\\&&
-2 \sqrt{1
+x
}\Big) H_1\big(
\sqrt{1+x}\big)
+\frac{13}{2} H_1^2\big(
\sqrt{1+x}
\big)
-34 \zeta_2
\bigg] H_{-1}\big(
\sqrt{1+x}\big)
\nonumber\\&&
+\Big[
\frac{1}{2} \big(
-15 i \pi +2 \sqrt{1
+x
}\big)
-\frac{11}{2} H_0(x)
-\frac{11}{2} H_1\big(
\sqrt{1+x}\big)
\Big] H_{-1}^2\big(
\sqrt{1+x}\big)
\nonumber\\&&
+\frac{3}{2} H_{-1}^3\big(
\sqrt{1+x}\big)
+4 \big(
-1+\sqrt{1
+x
}\big) H_{-1,1}\big(
\sqrt{1+x}\big)
-4 H_{-1,-1,1}\big(
\sqrt{1+x}\big)
\nonumber\\&&
+14 \zeta_2
+8 i \pi \zeta_2
-8 \sqrt{1+x} \zeta_2
+\frac{1}{2} \zeta_3
,
\\
G_{23} (x) &=& G \Big(
\frac{1-\sqrt{1
+\tau
}}{\tau },\frac{1-\sqrt{1
+\tau
}}{\tau },\frac{1}{1+\tau };x\Big)
=
-16
+16 \ln(2)
-4 i \pi
-6 x
+16 \sqrt{1+x}
\nonumber\\&&
+2 \big(
-2
+\zeta_2
\big) H_0(x)
-4 H_{-1}^2\big(
\sqrt{1+x}\big) H_0\big(
-\sqrt{1+x}\big)
\nonumber\\&&
+2 \big(
-2
+\zeta_2
\big) H_1\big(
\sqrt{1+x}\big)
+\Big[
2 (1+x)
-4 \sqrt{1+x} H_{-1}\big(
\sqrt{1+x}\big)
\nonumber\\&&
+2 H_{-1}^2\big(
\sqrt{1+x}\big)
\Big] H_{-1}(x)
+\Big[
2 \big(
-6
+5 \zeta_2
\big)
\nonumber\\&&
+8 \sqrt{1+x} H_0\big(
-\sqrt{1+x}\big)
\Big] H_{-1}\big(
\sqrt{1+x}\big)
+8 \sqrt{1+x} H_{0,1}\big(
1+\sqrt{1+x}\big)
\nonumber\\&&
-8 H_{0,0,1}\big(
1+\sqrt{1+x}\big)
+2 i \pi \zeta_2
-12 \sqrt{1+x} \zeta_2
+7 \zeta_3
,
\\
G_{24} (y) &=& G\Big(
\frac{(3+\tau )^{1/3}}{\tau };y\Big)
=
\frac{1}{6} \bigg\{
3 \bigg[
-6
+3 \ln (3)
-2 (-1)^{1/3} \Big[
-\ln (3)
\nonumber\\&&
+\ln \Big(
3
-(-3)^{2/3} (3+y)^{1/3}
\Big)\Big]
+2 (-1)^{2/3} \ln \Big[
1+3^{-1/3}(-1)^{1/3}(3+y)^{1/3}
\Big]
\nonumber\\&&
+2 \ln \big(
-3+3^{2/3} (3+y)^{1/3}\big)
\bigg] 3^{1/3}
+\pi 3^{5/6}
+18 (3+y)^{1/3}
\bigg\}
,
\\
G_{25} (y) &=& G\Big(
\tau ^{2/3} (3+\tau )^{1/3};y\Big)
=
\frac{1}{2} \bigg\{
(1+y) (3+y)^{1/3}
-3^{1/3} \,_2F_1\Big[
\frac{2}{3},\frac{2}{3};\frac{5}{3};-\frac{y}{3}\Big]
\bigg\} y^{2/3}
,
\\
G_{26} (y) &=& G\Big(
\tau ^{1/3} (3+\tau )^{2/3};y\Big)
=
\frac{1}{2} \bigg\{
(2+y) (3+y)^{2/3}
-2 \times 3^{2/3} \,_2F_1\Big[
\frac{1}{3},\frac{1}{3};\frac{4}{3};-\frac{y}{3}\Big]
\bigg\} y^{1/3}
,
\\
G_{27} (y) &=& G\Big(
\frac{(3+\tau )^{1/3}}{\tau },\tau ^{1/3} (3+\tau )^{2/3};y\Big),
\\
G_{28} (y) &=& G\Big(
\tau ^{2/3} (3+\tau )^{1/3},\frac{1}{\tau };y\Big),
\\
G_{29} (y) &=& G\Big(
\tau ^{2/3} (3+\tau )^{1/3},\frac{1}{3+\tau };y\Big),
\\
G_{30} (y) &=& G\Big(
\tau ^{2/3} (3+\tau )^{1/3},\tau ^{1/3} (3+\tau )^{2/3};y\Big),
\\
G_{31} (y) &=& G\Big(
\tau ^{1/3} (3+\tau )^{2/3},\frac{1}{\tau };y\Big),
\\
G_{32} (y) &=& G\Big(
\tau ^{1/3} (3+\tau )^{2/3},\frac{1}{3+\tau };y\Big),
\\
G_{33} (y) &=& G\Big(
\tau ^{1/3} (3+\tau )^{2/3},\frac{(3+\tau )^{1/3}}{\tau
};y\Big),
\\
G_{34} (y) &=& G\Big(
\tau ^{1/3} (3+\tau )^{2/3},\tau ^{2/3} (3+\tau )^{1/3};y\Big),
\\
G_{35} (y) &=& G\Big(
\tau ^{1/3} (3+\tau )^{2/3},\frac{1}{\tau },\tau ^{2/3} (3+\tau )^{1/3};y\Big),
\\
G_{36} (y) &=& G\Big(
\tau ^{1/3}
(3+\tau )^{2/3},\frac{1}{3+\tau },\tau ^{2/3}
(3+\tau )^{1/3};y\Big).
\end{eqnarray}
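As an independent sanity check of these closed forms (our own numerical illustration, not part of the original derivation), the defining iterated integrals can be evaluated by simple quadrature and compared with the expressions above, e.g.\ for $G_2$ and $G_4$:

```python
import math

def simpson(f, a, b, n=1000):
    """Composite Simpson rule with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return s * h / 3

x = 0.7

# G_2(x) = G(sqrt(1+tau); x): smooth integrand, integrate directly
g2_num = simpson(lambda t: math.sqrt(1 + t), 0.0, x)
g2_cl = -2/3 + 2*math.sqrt(1 + x)/3 + 2*x*math.sqrt(1 + x)/3

# G_4(x) = G(sqrt(tau)*sqrt(1+tau); x): substitute tau = u^2 to remove
# the sqrt singularity of the integrand at tau = 0
g4_num = simpson(lambda u: 2*u*u*math.sqrt(1 + u*u), 0.0, math.sqrt(x))
g4_cl = (0.25*math.sqrt(x*(1 + x)) + 0.5*x*math.sqrt(x*(1 + x))
         - 0.25*math.log(math.sqrt(x) + math.sqrt(1 + x)))

assert abs(g2_num - g2_cl) < 1e-9
assert abs(g4_num - g4_cl) < 1e-9
```

The substitution in the $G_4$ check is only a numerical convenience; the entries involving harmonic polylogarithms at shifted arguments can be checked analogously with any multiple-precision package.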
\subsection{Example 3}
Consider as an example the system of two differential equations
in two variables implied by the following differential operators,
\begin{eqnarray}
&& -\alpha \beta -x^6 \partial_x^6-\beta x^5 \partial_x^5-5 x^5 y \partial_x^5 \partial_y-15 x^5 \partial_x^5-10 \beta x^4 \partial_x^4-10 x^4 y^2 \partial_x^4 \partial_y^2-5 \beta x^4 y \partial_x^4 \partial_y
\nonumber\\&&
-60 x^4 y \partial_x^4 \partial_y-66 x^4 \partial_x^4-26 \beta x^3 \partial_x^3-10 x^3 y^3 \partial_x^3 \partial_y^3-10 \beta x^3 y^2 \partial_x^3 \partial_y^2-90 x^3 y^2 \partial_x^3 \partial_y^2
\nonumber\\&&
-40 \beta x^3 y \partial_x^3 \partial_y-198 x^3 y \partial_x^3
\partial_y-96 x^3 \partial_x^3-18 \beta x^2 \partial_x^2-5 x^2 y^4 \partial_x^2 \partial_y^4-10 \beta x^2 y^3 \partial_x^2 \partial_y^3
\nonumber\\&&
-60 x^2 y^3 \partial_x^2 \partial_y^3-60 \beta x^2 y^2 \partial_x^2 \partial_y^2-198 x^2 y^2
\partial_x^2 \partial_y^2-78 \beta x^2 y \partial_x^2 \partial_y-192 x^2 y \partial_x^2 \partial_y
\nonumber\\&&
-38 x^2 \partial_x^2-\alpha x \partial_x-2 \beta x \partial_x+\gamma \partial_x-x y^5 \partial_x \partial_y^5-5 \beta x y^4 \partial_x
\partial_y^4-15 x y^4 \partial_x \partial_y^4
\nonumber\\&&
-40 \beta x y^3 \partial_x \partial_y^3-66 x y^3 \partial_x \partial_y^3-78 \beta x y^2 \partial_x \partial_y^2-96 x y^2 \partial_x \partial_y^2-36 \beta x y \partial_{x y}-38 x y \partial_{x y}
\nonumber\\&&
+y
\partial_{x y}+x \partial_x^2-2 x \partial_x-\beta y^5 \partial_y^5-10 \beta y^4 \partial_y^4-26 \beta y^3 \partial_y^3-18 \beta y^2 \partial_y^2-2 \beta y \partial_y,
\\ [1cm]
&&
-\alpha \beta_1 -\beta_1 x^5 \partial_x^5-x^5 y \partial_x^5 \partial_y-10 \beta_1 x^4 \partial_x^4-5 x^4 y^2 \partial_x^4 \partial_y^2-5 \beta_1 x^4 y \partial_x^4
\partial_y-15 x^4 y \partial_x^4 \partial_y
\nonumber\\&&
-26 \beta_1 x^3 \partial_x^3-10 x^3 y^3 \partial_x^3 \partial_y^3-10 \beta_1 x^3 y^2 \partial_x^3 \partial_y^2-60 x^3 y^2 \partial_x^3 \partial_y^2-40 {\beta_1} x^3 y \partial_x^3 \partial_y
\nonumber\\&&
-66 x^3 y \partial_x^3 \partial_y-18 \beta_1 x^2 \partial_x^2-10 x^2 y^4 \partial_x^2 \partial_y^4-10 \beta_1 x^2 y^3 \partial_x^2 \partial_y^3-90 x^2 y^3 \partial_x^2
\partial_y^3
\nonumber\\&&
-60 \beta_1 x^2 y^2 \partial_x^2 \partial_y^2-198 x^2 y^2 \partial_x^2 \partial_y^2-78 \beta_1 x^2 y \partial_x^2 \partial_y-96 x^2 y \partial_x^2 \partial_y-2 \beta_1 x \partial_x
\nonumber\\&&
-5 x
y^5 \partial_x \partial_y^5-5 \beta_1 x y^4 \partial_x \partial_y^4-60 x y^4 \partial_x \partial_y^4-40 \beta_1 x y^3 \partial_x \partial_y^3-198 x y^3 \partial_x \partial_y^3
\nonumber\\&&
-78 \beta_1 x y^2 \partial_x
\partial_y^2-192 x y^2 \partial_x \partial_y^2-36 \beta_1 x y \partial_{x y}+x \partial_{x y}-38 x y \partial_{x y}-y^6 \partial_y^6-\beta_1 y^5 \partial_y^5
\nonumber\\&&
-15 y^5 \partial_y^5-10 \beta_1
y^4 \partial_y^4-66 y^4 \partial_y^4-26 \beta_1 y^3 \partial_y^3-96 y^3 \partial_y^3-18 \beta_1 y^2 \partial_y^2-38 y^2 \partial_y^2-\alpha y \partial_y
\nonumber\\&&
-2 \beta_1 y \partial_y+\gamma
\partial_y+y \partial_y^2-2 y \partial_y.
\end{eqnarray}
Assuming a hypergeometric solution
\begin{equation}
\mathcal F(x,y) = \sum_{m,n\ge 0} f(m,n) x^m y^n,
\end{equation}
the coefficients $f(m,n)$ must obey
\begin{eqnarray}
(m+1) (\gamma +m+n) f(m+1,n)-(\beta +m) \left(\alpha +(m+n)^5+(m+n)^3\right) f(m,n)=0,\\
(n+1) (\gamma +m+n) f(m,n+1)-(\beta_1 +n) \left(\alpha +(m+n)^5+(m+n)^3\right) f(m,n)=0.
\end{eqnarray}
Solving these two equations with the help of {\tt Sigma}, one obtains
\begin{eqnarray}
f(m,n) &=& \bigg(
\prod_{i_1=1}^n \frac{\big(
-1
+{\beta_1}
+i_1
\big)
\big(-2
+\alpha
+8 i_1
-13 i_1^2
+11 i_1^3
-5 i_1^4
+i_1^5
\big)}{\big(
-1
+\gamma
+i_1
\big) i_1}\bigg)
\nonumber\\&& \times
\prod_{i_1=1}^m \frac{\big(
-1
+\beta
+i_1
\big)
}{\big(
-1
+n
+\gamma
+i_1
\big) i_1}
\big(-2
+8 n
-13 n^2
+11 n^3
-5 n^4
+n^5
+\alpha
+8 i_1
-26 n i_1
\nonumber\\&&
+33 n^2 i_1
-20 n^3 i_1
+5 n^4 i_1
-13 i_1^2
+33 n i_1^2
-30 n^2 i_1^2
+10 n^3 i_1^2
+11 i_1^3
-20 n i_1^3
\nonumber\\&&
+10 n^2 i_1^3
-5 i_1^4
+5 n i_1^4
+i_1^5
\big).
\end{eqnarray}
This quantity cannot be expressed analytically as a product of Pochhammer symbols because of the high degree of the polynomials involved.
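Although no closed Pochhammer form exists, the product representation can be checked directly against the two recurrences. The following sketch (our own, with arbitrarily chosen rational parameter values) verifies both relations in exact arithmetic, using that the quintic factors above equal $\alpha+(k)^5+(k)^3$ at shifted arguments:

```python
from fractions import Fraction

# Sample (hypothetical) parameter values; gamma is chosen away from
# non-positive integers so that no denominator vanishes.
alpha, beta, beta1, gamma = Fraction(1, 2), Fraction(1, 3), Fraction(2, 5), Fraction(7, 2)

def q(k):
    # quintic factor alpha + k^5 + k^3; the expanded polynomials in the
    # product formula are q(i1 - 1) and q(n + i1 - 1), respectively
    return alpha + k**5 + k**3

def f(m, n):
    """Product formula for f(m,n), normalized to f(0,0) = 1."""
    val = Fraction(1)
    for i in range(1, n + 1):
        val *= (beta1 - 1 + i) * q(i - 1) / ((gamma - 1 + i) * i)
    for i in range(1, m + 1):
        val *= (beta - 1 + i) * q(n + i - 1) / ((n + gamma - 1 + i) * i)
    return val

for m in range(4):
    for n in range(4):
        assert (m + 1) * (gamma + m + n) * f(m + 1, n) \
            == (beta + m) * q(m + n) * f(m, n)
        assert (n + 1) * (gamma + m + n) * f(m, n + 1) \
            == (beta1 + n) * q(m + n) * f(m, n)
```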
\section{Partial difference equations with rational coefficients}
\label{sec:PLDEsolver}
\vspace*{1mm}
\noindent
In a number of problems one also encounters partial linear difference equations in several variables with polynomial coefficients, where the target solution space is that of rational functions in several variables, possibly also including harmonic sums or Pochhammer symbols in the numerator. In various situations of interest, see Section~\ref{sec:fullmachinery}, solutions can be derived iteratively by solving
first--order linear recurrences. More generally, one may solve higher--order linear recurrences using
difference ring algorithms~\cite{abramov71,LinearSolver} implemented in \texttt{Sigma}.
However, for general multivariate linear difference equations only very few algorithms are available to find the solution, compared to the univariate case. To support possible future challenges in applications, we developed a {\tt Mathematica} implementation of the algorithms of Refs.~\cite{kauers10,kauers11}, which generalize the univariate algorithm of Ref.~\cite{abramov71} to the multivariate case. In addition, we enhanced these methods with further heuristic techniques that may be useful for the calculation of Feynman integrals.
The basic idea of these algorithms is to constrain the denominator of the solution. Once the denominator is constrained, finding the numerator via an ansatz becomes easier; in particular, it only requires solving a linear system of equations.
In the following, we give a survey on how to constrain the denominator of
the solution of a partial linear difference equation (PLDE).
In Section~\ref{sec:definitions}, we fix the notation; in
Section~\ref{sec:denBounds} we review the concepts of aperiodic and
periodic denominator bounds given in the literature; and in Section~\ref{sec:num} we discuss the determination of the numerator of the solution. In particular, we explain how one can deal with a hypergeometric prefactor in the solution in Section~\ref{sec:hypergeometric-prefactor} and how one can additionally search for solutions in terms of nested sums in Section~\ref{sec:findingNestedSums}. After commenting on the problem of combining the solutions using initial values in Section~\ref{sec:initVal}, we turn in Section~\ref{sec:expansion} to tools for efficiently obtaining a Laurent expansion in the dimensional parameter $\varepsilon$ by successively solving a set of difference equations in which the parameter no longer appears in the coefficients. This section is supplemented by Appendix~\ref{sec:D}, where we describe the commands available in our {\tt Mathematica} implementation \texttt{solvePartialLDE}.
\subsection{The basic problem description}
\label{sec:definitions}
\vspace*{1mm}
\noindent
With $y(n_1,\ldots,n_r)\in{\mathbb K}(n_1,\dots,n_r)$ a rational function in $r$ variables, we define the \emph{shift operators} $N_\mathbf s$ with respect to the \emph{shift} $\mathbf s=(s_1,\ldots,s_r)\in\mathbb Z^r$ as
\begin{equation}
N_\mathbf s y = y(n_1+s_1,\ldots,n_r+s_r).
\end{equation}
Partial linear difference equations are equations of the type
\begin{equation}
\sum_{\mathbf s \in S} a_\mathbf s N_\mathbf s y = f,
\label{eq:PLDE}
\end{equation}
where $S$ is a finite subset of $\mathbb Z^r$, $a_\mathbf s$ and $f$ are polynomials in the variables $n_1,\ldots,n_r$, and $y$ is an unknown rational function to be determined; the set $S$ of all shifts appearing in the equation is called the \emph{shift set} or \emph{structure set}. Because the equation is linear, the general solution is the sum of a particular solution of \eqref{eq:PLDE} and the general solution of the homogeneous equation with $f=0$.
An example of the type of equation under consideration is:
\begin{equation}
-(1 + k + n^2) y(n, k) + (4 + k + 2 n + n^2) y(1 + n, 2 + k) = 0.
\end{equation}
It has the shift set $S=\{(0,0), (1,2)\}$ and its coefficients are
\begin{eqnarray}
a_{(0,0)} &=& -(1+k+n^2),\\
a_{(1,2)} &=& (4 + k + 2 n + n^2).
\end{eqnarray}
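One rational solution of this example is $y(n,k)=C/(1+k+n^2)$, which follows from the identity $4+k+2n+n^2 = 1+(k+2)+(n+1)^2$. A quick exact check (our own illustration, not part of the original text):

```python
from fractions import Fraction

def y(n, k):
    # candidate solution 1/(1 + k + n^2);
    # note that 4 + k + 2n + n^2 = 1 + (k+2) + (n+1)^2
    return Fraction(1, 1 + k + n**2)

for n in range(8):
    for k in range(8):
        lhs = -(1 + k + n**2) * y(n, k) + (4 + k + 2*n + n**2) * y(n + 1, k + 2)
        assert lhs == 0
```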
A distinction used in the literature~\cite{DR,LinearSolver,kauers10} is the one between \emph{periodic} and \emph{aperiodic} polynomials. A polynomial $p$ is periodic if there exist infinitely many shifts $\mathbf s$ such that $\mathrm{gcd}(p,N_\mathbf s p)\neq 1$; it is called aperiodic otherwise.
For example, the polynomial $(n+k+2)$ is periodic, and the polynomial $(n^2+k+6)$ is aperiodic.
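These two examples can be probed mechanically (a sketch using SymPy, our own illustration): $(n+k+2)$ is mapped to itself by every shift along the line $\mathbf s=(t,-t)$, whereas for $(n^2+k+6)$ no non-zero shift produces a non-trivial gcd:

```python
import sympy as sp

n, k = sp.symbols('n k')

def matching_shifts(p, shifts):
    """Return those shifts s for which gcd(p, N_s p) is non-trivial."""
    out = []
    for s1, s2 in shifts:
        q = p.subs({n: n + s1, k: k + s2}, simultaneous=True)
        if sp.gcd(sp.expand(p), sp.expand(q)) != 1:
            out.append((s1, s2))
    return out

shifts = [(t, -t) for t in range(1, 5)] + [(1, 0), (0, 1), (2, 3)]
per = matching_shifts(n + k + 2, shifts)      # all shifts (t, -t) match
aper = matching_shifts(n**2 + k + 6, shifts)  # no sampled shift matches
```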
An important fact is that any polynomial can be factorized into a periodic and an aperiodic part.
Given a partial linear difference equation, algorithms exist to constrain what denominators may appear in the solution. These algorithms target separately the periodic and the aperiodic part of the denominator of the solution of \eqref{eq:PLDE}. In our package we have implemented and enhanced the algorithms described in \cite{kauers10,kauers11} and we describe our implementation choices and their rationale in the following.
\subsection{Denominator bounds}
\label{sec:denBounds}
\vspace*{1mm}
\noindent
Let us first review why the calculation of a denominator bound for the solution of a PLDE is valuable. A naive way to search for rational solutions of \eqref{eq:PLDE}, which one aims to avoid, is to start from an ansatz; for example, one can write a generic rational function in the variables $n_1,\ldots,n_r$ with undetermined coefficients $c_{\mathbf k}$ and $c_{\mathbf k'}$,
\begin{equation}
y(n_1,\ldots,n_r)=\frac{\sum\limits_{\mathbf k} c_\mathbf k \prod\limits_i n_i^{k_i} }{\sum\limits_{\mathbf k'} c_{\mathbf k'} \prod\limits_i n_i^{k'_i}}.
\label{eq:rational-ansatz}
\end{equation}
By plugging the ansatz \eqref{eq:rational-ansatz} into \eqref{eq:PLDE},
one obtains equations for the unknown coefficients $c_\mathbf k$ and
$c_{\mathbf k'}$ by imposing the equality of every monomial in the
variables $n_i$ on both sides of the equation; in this way one finds,
if they exist, the solutions whose numerator and denominator have degree lower than or equal to the degree chosen
for the ansatz. However, the equations obtained in this way are, in general, non--linear, and therefore difficult to solve.
As observed in the univariate case~\cite{abramov71}, the situation improves if we are able to find a \emph{denominator bound} for the solutions. A denominator bound $d$ is a polynomial such that for any solution $y=\frac{n}{p}$ of \eqref{eq:PLDE} one has $p\mid d$: the denominator of the solution is a divisor of the denominator bound.
If we were able to calculate $d$ algorithmically, then only an ansatz for the numerator of the solution would be required, and the equations for the unknown coefficients $c_\mathbf k$ would be linear, and therefore easier to solve. It is possible to formulate an ansatz for the numerator which also includes terms involving harmonic sums \cite{Vermaseren:1998uu,Blumlein:1998if}, which satisfy a wide class of recurrence equations.
If we write the solution to a partial linear difference equation as $y=\frac{n}{uv}$ with $u$ aperiodic and $v$ periodic, it is always possible to calculate a bound $d_a$ for the aperiodic part $u$ of the denominator. We refer to \cite{kauers10} for a description of how the aperiodic denominator bound is calculated.
For the periodic part $v$ it is not always possible to obtain a complete denominator bound for a PLDE. This is illustrated for example by the equation
\begin{equation}
y(n+1,k) - y(n,k+1) = 0,
\label{eq:noDenBound}
\end{equation}
which is satisfied by $\frac{1}{(n+k)^\alpha}$ for any $\alpha \in \mathbb N$. Clearly, no polynomial can be a denominator bound for equation \eqref{eq:noDenBound}.
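A direct check of this family of solutions (our own sketch): since $(n+1)+k = n+(k+1)$, the shifted and unshifted values coincide for every power $\alpha$.

```python
from fractions import Fraction

def y(n, k, alpha):
    # the family 1/(n+k)^alpha of solutions of eq. (noDenBound)
    return Fraction(1, (n + k)**alpha)

for alpha in range(1, 6):
    for n in range(1, 5):
        for k in range(1, 5):
            assert y(n + 1, k, alpha) - y(n, k + 1, alpha) == 0
```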
In other words, one cannot expect to obtain a complete denominator bound,
due to the intrinsic problem that periodic factors might arise with
arbitrary powers. Nevertheless, it is often possible to calculate a partial bound, and to identify what shape the unpredictable factors of $v$
must have. (A partial bound is a bound for some, but not all, of the periodic factors.)
The algorithm in \cite{kauers11} works by successively examining the
periodic factors of the coefficients $a_\mathbf p$ when $\mathbf p$ is
a ``corner point'', see \cite{kauers11} for a definition. Applying all the tactics described there, one obtains an explicitly given polynomial $d_{p}$, a finite set of polynomials $P$ and a set of generators that spans a lattice $V$ in $\mathbb Z^r$ such that for any solution of the given PLDE one can constrain the periodic denominator part $v$ as follows:
$$v\mid d_p \cdot v_{\text{semi-known}} \cdot v_{\text{unknown}},$$
where $v_{\text{semi-known}}$ is a polynomial whose factors are taken from the set $\{N_\mathbf s p^m\mid p\in P,\ \mathbf
s\in\mathbb Z^r,\ m\in\mathbb Z\}$ and $v_{\text{unknown}}$ is a polynomial such that\footnote{The spread of a polynomial $u$ is closely related to Abramov's definition of the dispersion~\cite{abramov71} and is defined by $\mathrm{spread}(u)=\{\mathbf s\in\mathbb Z^r\mid \gcd(u,N_\mathbf s u)\neq1\}$.} $\mathrm{spread}(v_{\text{unknown}})=V$.
If $P\neq\{\}$ or $V\neq\{\}$, the implementation prints out the
corresponding data in order to help the user guess the missing parts
$v_{\text{semi-known}}$ and/or $v_{\text{unknown}}$. In summary, the user
obtains guidance in formulating an ansatz $d_{\text{user}}$ for the missing
factors in the denominator of the solution. To force their inclusion in
the search when looking for the numerator of the solution, one can use the
option {\tt InsertDenFactor $\rightarrow$} $d_{\text{user}}$ of our
package, cf.\ Section~\ref{sec:num} and Appendix~\ref{sec:D}.
\subsection{Determination of the numerator}
\label{sec:num}
\vspace*{1mm}
\noindent
Once the aperiodic and periodic denominator bounds $d_a,d_p$ are calculated, and possibly an ansatz $d_{\text{user}}$ for missing factors in the denominator has been set, one can search for the numerator contribution. In general, it has been shown in~\cite{AP:12}, based on~\cite{Hilbert10}, that this problem is unsolvable: given a homogeneous PLDE with polynomial coefficients, there does not exist an algorithm that can determine all polynomial solutions. Nevertheless, one can search for the desired polynomial solutions by taking as ansatz a general polynomial $\text{num}(c_i)$ with undetermined coefficients $c_i$, where the polynomial degree is set sufficiently high. Then one may substitute the rational function
\begin{equation}\label{Equ:PLDEAnsatz}
y=\frac{\text{num}(c_i)}{d_a d_p d_{\text{user}}}
\end{equation}
into the equation \eqref{eq:PLDE}. Finding non--trivial solutions of the underlying linear system
then allows one to specialize the $c_i$ such that $y$ is a solution of the given PLDE.
In many cases, it is the determination of the $c_i$ which requires the largest computation time, whereas the denominator bounds can be computed quite quickly. For this reason we propose the following strategy to reduce the computation time.
In the cases where the PLDE does not contain any symbolic parameters (such
as
the dimensional regulator $\varepsilon$, or ratios of invariants) other than the shift variables, one may obtain constraints on the undetermined $c_i$ simply by plugging in random numerical values for the shift variables sufficiently many times. In this way one quickly obtains a linear system for the $c_i$.
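As a minimal sketch of this strategy (our own illustration, not the package's code), consider again the example equation $-(1+k+n^2)\,y(n,k)+(4+k+2n+n^2)\,y(n+1,k+2)=0$ with denominator bound $d=1+k+n^2$ and the linear numerator ansatz $\text{num}=c_0+c_1 n+c_2 k$. After clearing denominators, each random integer point yields one linear constraint on the $c_i$:

```python
import random

def d(n, k):
    return 1 + k + n**2  # denominator bound for the example PLDE

def row(n, k):
    """Linear constraint on (c0, c1, c2) from the cleared equation at (n, k)."""
    w0 = -(1 + k + n**2) * d(n + 1, k + 2)   # weight of num(n, k)
    w1 = (4 + k + 2*n + n**2) * d(n, k)      # weight of num(n+1, k+2)
    return [w0 + w1, w0 * n + w1 * (n + 1), w0 * k + w1 * (k + 2)]

def residual(c, n, k):
    return sum(ci * ri for ci, ri in zip(c, row(n, k)))

random.seed(1)
pts = [(random.randint(1, 50), random.randint(1, 50)) for _ in range(10)]
# num = 1 and num = 2n - k satisfy every constraint; num = n does not
assert all(residual((1, 0, 0), n, k) == 0 for n, k in pts)
assert all(residual((0, 2, -1), n, k) == 0 for n, k in pts)
assert any(residual((0, 1, 0), n, k) != 0 for n, k in pts)
```

In the actual package the constraints are assembled into a linear system for the $c_i$ and solved; the sketch only illustrates how random evaluations separate solutions from non-solutions.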
If symbolic parameters are present, one may instead perform a first
pass with the symbols replaced by random numbers, with the purpose of
identifying and removing redundant constraints. After removing the redundant equations for the $c_i$, the system can be solved in a stepwise manner, i.e.\ considering the constraints produced by one monomial at a time and substituting the result into the rest of the equations. This is what our package does when the function {\tt SolvePLDE} is called, cf.
Appendix~\ref{sec:D}.
The use of random numbers to generate constraints can of course produce two equations for the $c_i$ which are not independent. The probability of such an occurrence can be made arbitrarily small by choosing a sufficiently large range from which the random numbers are drawn. In any event, an unfortunate draw of random numbers can only cause the software to output spurious candidates that are not actually solutions; it cannot cause the software to miss any solutions. By explicitly checking the result, one can guard against this remote possibility, at the expense of additional computation time.
In the following we elaborate further enhancements in order to extend the solution space from the rational function case to more general classes of functions. Besides the examples below, further examples for each aspect can be found in the {\tt Mathematica} notebook auxiliary to this paper.
\subsubsection{Treatment of a hypergeometric prefactor}
\label{sec:hypergeometric-prefactor}
\vspace*{1mm}
\noindent
Given a partial linear difference equation \eqref{eq:PLDE},
\begin{equation}
\sum_{\mathbf s \in S} a_\mathbf s N_\mathbf s y = f,
\label{eq:PLDE_2}
\end{equation}
it is possible to derive another difference equation
\begin{equation}
\sum_{\mathbf s \in S} a'_\mathbf s N_\mathbf s y' = f',
\label{eq:PLDE1}
\end{equation}
whose solution $y'$ is related to $y$ by
\begin{equation}
y' = r y
\end{equation}
with $r=r(n_i)$ a hypergeometric function of its arguments, i.e. a function such that the ratio
\begin{equation}
\frac{N_{\mathbf e_i} r}{r} = \frac{r(n_1,\ldots,n_i+1,\ldots,n_r)}{r(n_1,\ldots,n_i,\ldots,n_r)}
\end{equation}
is a rational function of the variables $n_i$ for all $i$. Examples of
hypergeometric functions are Pochhammer symbols, factorials,
$\Gamma$-functions, binomial coefficients, and, trivially, rational functions and polynomials.
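The hypergeometric property is easy to check numerically for the Pochhammer symbol; the following sketch (with an arbitrary rational stand-in value for $\varepsilon$) verifies that its shift ratio is the rational function $\varepsilon+k$:

```python
from fractions import Fraction

def pochhammer(x, k):
    """(x)_k = x (x+1) ... (x+k-1), with (x)_0 = 1."""
    result = Fraction(1)
    for i in range(k):
        result *= x + i
    return result

eps = Fraction(1, 7)  # arbitrary rational stand-in for the regulator
for k in range(6):
    # shift ratio (N_k r)/r = eps + k is rational in k
    assert pochhammer(eps, k + 1) / pochhammer(eps, k) == eps + k
```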
The transformation from \eqref{eq:PLDE_2} to \eqref{eq:PLDE1} is useful whenever it is possible to formulate an ansatz for $r$. Once some specific form can be postulated for $r$, the equation \eqref{eq:PLDE1} is obtained by substitution and by exploiting the hypergeometric property.
Consider for example the equation
\begin{eqnarray}
&&
(1 + k) (\varepsilon + k) (1 + k + n^2) \, y(n, k) - 2 k (2 + k + n^2) \, y(n, 1 + k)
\nonumber\\ &&
+ (1 + k) (\varepsilon + k) (2 + k + 2 n + n^2) \, y(1 + n, k) = 0.
\label{eq:example_5}
\end{eqnarray}
We assume that its solution is
\begin{equation}
y(n,k) = (\varepsilon)_k \, y'(n,k)
\end{equation}
with $y'$ a rational function of $n$ and $k$ and $(\varepsilon)_k$ the Pochhammer symbol
\begin{equation}
(\varepsilon)_k = \varepsilon(\varepsilon+1)\cdots(\varepsilon+k-1).
\end{equation}
Then one derives a difference equation for $y'$, namely
\begin{eqnarray}
&& (\varepsilon + k) \big[(1 + k^2 + n^2 + k (2 + n^2)) y'(n, k) -
2 k (2 + k + n^2) y'(n, 1 + k)
\nonumber\\&&
+ (2 + k^2 + 2 n + n^2 + k (3 + 2 n + n^2)) y'(1 + n, k)\big] = 0 .
\end{eqnarray}
We can now solve the new equation, obtaining
\begin{equation}
y'(n,k)=\frac{k}{1 + k + n^2}.
\end{equation}
From this we conclude that the solution of \eqref{eq:example_5} is
\begin{equation}
y(n,k) = (\varepsilon)_k \, \frac{k}{1 + k + n^2} C,
\end{equation}
for some constant $C\in{\mathbb K}(\varepsilon)$.
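As a quick numerical sanity check (an illustrative Python sketch, independent of our {\tt Mathematica} package), one can verify this solution of \eqref{eq:example_5} at a rational test value of $\varepsilon$:

```python
from fractions import Fraction

def pochhammer(x, k):
    result = Fraction(1)
    for i in range(k):
        result *= x + i
    return result

eps = Fraction(3, 5)  # arbitrary rational test value for epsilon

def y(n, k):
    # claimed solution with C = 1: (eps)_k * k / (1 + k + n^2)
    return pochhammer(eps, k) * Fraction(k, 1 + k + n * n)

def lhs(n, k):
    # left-hand side of the difference equation (eq:example_5)
    return ((1 + k) * (eps + k) * (1 + k + n * n) * y(n, k)
            - 2 * k * (2 + k + n * n) * y(n, k + 1)
            + (1 + k) * (eps + k) * (2 + k + 2 * n + n * n) * y(n + 1, k))

# the equation is satisfied at every lattice point tested
assert all(lhs(n, k) == 0 for n in range(6) for k in range(6))
```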
\subsubsection{Finding solutions in terms of nested sums}\label{sec:findingNestedSums}
Solutions connected to Feynman integrals often also involve indefinite nested sums, such as (cyclotomic) harmonic sums \cite{Vermaseren:1998uu,Blumlein:1998if,Ablinger:2011te} or generalized versions, like Hurwitz
harmonic sums. A straightforward modification of the ansatz~\eqref{Equ:PLDEAnsatz} is to search for a numerator $\text{num}(c_i)$ that is built not only from a polynomial in ${\mathbb K}[n_1,\dots,n_r]$ with unknown coefficients $c_i$, but from polynomial expressions in a finite set of nested sums, i.e., one takes a linear combination of power products of the given nested sums whose coefficients are polynomials in ${\mathbb K}[n_1,\dots,n_r]$ with unknown coefficients. In practice, it is important that this list of nested sums be shift-stable, meaning that a shift in any of the variables must not introduce harmonic sums not already included in the list, and that the sums be linearly independent. To guarantee these properties, one can use quasi-shuffle algebras or difference ring methods~\cite{AlgebraicRelations,TermAlgebra}. The nested sums at shifted arguments can then be rewritten through repeated application of identities of the type
\begin{equation}\label{Equ:SumShiftRules}
S_1(n+i) = \frac{1}{n+i} + S_1(n+i-1)
\end{equation}
and similarly for all other nested sums, until only unshifted nested sums appear. After clearing denominators, the PLDE implies a set of linear constraints on the undetermined parameters $c_i$, obtained by coefficient comparison in all the power products which appear when $y$ is plugged back into the equation~\eqref{eq:PLDE}.
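The shift rule \eqref{Equ:SumShiftRules} for the harmonic sum $S_1$ is easily checked numerically (an illustrative sketch):

```python
from fractions import Fraction

def S1(n):
    """Harmonic sum S_1(n) = sum_{i=1}^{n} 1/i."""
    return sum((Fraction(1, i) for i in range(1, n + 1)), Fraction(0))

# Repeated application of S_1(n+i) = 1/(n+i) + S_1(n+i-1) rewrites a
# shifted sum in terms of the unshifted one, e.g. for i = 2:
for n in range(1, 8):
    assert S1(n + 2) == Fraction(1, n + 2) + Fraction(1, n + 1) + S1(n)
```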
Note that the number of unknowns $c_i$ grows considerably: one has to determine not just a single numerator polynomial, but one polynomial for each power product. In this regard, the homomorphic image techniques described at the beginning of Section~\ref{sec:num} are instrumental in performing these calculations in reasonable time.
\medskip
This heuristic method provides in many cases the desired solution. For instance, consider the equation
\begin{eqnarray}
&&(-k-1) \left(k+n^2+2 n+1\right) f(n,k)
+k \left(k+n^2+2 n+2\right) f(n,k+1)
\nonumber\\&&
+2 (k+1) \left(k+n^2+4 n+4\right) f(n+1,k)
-2 k \left(k+n^2+4 n+5\right) f(n+1,k+1)
\nonumber\\&&
-(k+1) \left(k+n^2+6 n+9\right) f(n+2,k)
+k \left(k+n^2+6 n+10\right) f(n+2,k+1)=0.
\nonumber\\
\end{eqnarray}
Looking for solutions of the form described, with a numerator of degree up to 2 containing the harmonic sums $S_1(n),S_1(k),S_{2,1}(n)$, the algorithm finds the denominator
\begin{equation}
d_p=1 + k + 2 n + n^2
\end{equation}
and the corresponding numerators of the solutions of the homogeneous equation:
\begin{eqnarray}
1, k, k^2, n, k n, S_1(k), k S_1(k), n S_1(k), S_1(k)^2,k S_1(n), k S_{2,1}(n).
\end{eqnarray}
\medskip
\noindent\textit{Remark.} A more advanced (and less heuristic) tactic is to apply a recursive strategy as worked out in~\cite{LinearSolver}: one defines an order of the nested sums $(s_1,s_2,\dots,s_e)$ such that a sum $s_i$ does not arise inside any of the sums $s_1,\dots,s_{i-1}$.
Then one makes an ansatz for the solution $y=p_0+p_1\,s_e+\dots+p_d\,s_e^d$, where $p_0,\dots,p_d$ are polynomial expressions in terms of the remaining nested sums $s_1,\dots,s_{e-1}$ with coefficients from the ground field ${\mathbb K}(n_1,\dots,n_r)$. Here one has to choose $d$ sufficiently high to guarantee that the desired solution can be found. Then one plugs $y$ into~\eqref{eq:PLDE}, applies shift rules such as~\eqref{Equ:SumShiftRules}, clears denominators and compares the coefficients of the highest term $s_e^d$. This yields a new PLDE for the unknown $p_d$. We then compute by recursion all solutions of this new PLDE in terms of the remaining sums $s_1,\dots,s_{e-1}$, plug the solutions into $p_d$ of the original ansatz and obtain an updated version of~\eqref{eq:PLDE} in which $s_e$ occurs only up to degree $d-1$. Proceeding by degree reduction, we compute the remaining coefficients $p_0,\dots,p_{d-1}$ and obtain the final solution $y$. We remark that in the base cases, i.e., when all sums have been removed within the recursion, one ends up solving several PLDEs purely over the ground field ${\mathbb K}(n_1,\dots,n_r)$, i.e., the machinery described at the beginning of Section~\ref{sec:PLDEsolver} is applied. We plan to implement this more advanced machinery in the near future within the formal setting of $R\Pi\Sigma$-difference ring extensions~\cite{DR}, based on the reduction strategy given in~\cite{LinearSolver}.
\subsubsection{Matching the solution to initial values}
\label{sec:initVal}
\vspace*{1mm}
\noindent
If initial values are provided, it is possible to look for a general solution that conforms to them.
This general solution is found by building a linear combination with
undetermined coefficients of the solutions of the homogeneous equation,
plus a particular solution of the equation. Next, the initial values are
plugged in, and a system of equations is obtained. In the case that the
system contains symbolic parameters other than the shift variables, the
undetermined coefficients to be searched for are not just numbers. In that case, the coefficients of the
linear combination are taken to be general rational functions in the parameters up to some chosen degree.
The combination of the solutions will be of particular importance for the next subsection.
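The matching step can be illustrated with a toy example (all solutions and initial values below are made up, not taken from the paper's examples): given homogeneous solutions $h_1,h_2$ and a particular solution $p$, the coefficients of the linear combination are fixed by a small linear system.

```python
from fractions import Fraction

# Hypothetical homogeneous solutions h1, h2 and particular solution p
# of some recurrence; toy data for illustration only.
p  = lambda n: Fraction(n * n)
h1 = lambda n: Fraction(1)
h2 = lambda n: Fraction(n)

# Match y(n) = p(n) + a*h1(n) + b*h2(n) to initial values y(0)=3, y(1)=6
# by solving the 2x2 linear system with Cramer's rule.
a11, a12 = h1(0), h2(0)
a21, a22 = h1(1), h2(1)
r1, r2 = 3 - p(0), 6 - p(1)
det = a11 * a22 - a12 * a21
a = (r1 * a22 - a12 * r2) / det
b = (a11 * r2 - r1 * a21) / det

y = lambda n: p(n) + a * h1(n) + b * h2(n)
assert (y(0), y(1)) == (3, 6)  # initial values are reproduced
```

With symbolic parameters present, the same system would instead be solved for coefficients that are rational functions of the parameters up to a chosen degree.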
\subsubsection{Finding the solution in a series expansion}
\label{sec:expansion}
\vspace*{1mm}
\noindent
In many applications it is desirable to obtain the Laurent series expansion of the solution of a difference
equation. This may be easier to achieve than deriving a complete solution because, at each order of the expansion, it is possible to derive a difference equation in which the expansion parameter is absent; the linear system for the coefficients $c_i$ can therefore potentially be solved much more quickly. The procedure, described in the following, generalizes the univariate case given in~\cite{Blumlein:2010zv}. It assumes that the initial values of the solution in its $\varepsilon$--expansion are known.
Consider an instance of \eqref{eq:PLDE} possibly containing a parameter
$\varepsilon$ in the coefficients:
\begin{equation}
\sum_{\mathbf s \in S} a_\mathbf s(n_i,\varepsilon) N_\mathbf s y(n_i) = f(n_i,\varepsilon) ,
\label{eq:PLDE-eps}
\end{equation}
where the coefficients $a_{\mathbf s}(n_i,\varepsilon)$ are polynomials in the shift variables and in the parameter $\varepsilon$. Assume that the solution of \eqref{eq:PLDE-eps} has, around $\varepsilon=0$, a Laurent expansion starting from the power $\varepsilon^{-\ell}$ of the parameter, with $\ell$ known,
\begin{equation}
y_\varepsilon(n_i) = \varepsilon^{-\ell} y_{-\ell}(n_i) + \cdots + y_0(n_i) + \varepsilon y_1(n_i) + \cdots + \varepsilon^c y_c(n_i) ,
\label{eq:y_eps}
\end{equation}
and that the right-hand side of the equation can be expanded in a series in $\varepsilon$ as
\begin{equation}
f = \varepsilon^{-\ell} f_{-\ell}(n_i) + \varepsilon^{-\ell+1} f_{-\ell+1} (n_i) + \cdots .
\label{eq:f_eps}
\end{equation}
Assume also that the $a_{\mathbf s}(n_i,\varepsilon=0)$ are not all zero, so that an overall power of $\varepsilon$, if present in the equation, has been factored out.
Then, one may proceed by inserting \eqref{eq:y_eps} and \eqref{eq:f_eps} into \eqref{eq:PLDE-eps} and doing a coefficient comparison of the $\varepsilon^{-\ell}$ terms, obtaining
\begin{equation}
\sum_{\mathbf s \in S} a_\mathbf s(n_i,\varepsilon=0) N_\mathbf s y_{-\ell}(n_i) = f_{-\ell}(n_i).
\label{eq:PLDE_eps-l}
\end{equation}
Equation \eqref{eq:PLDE_eps-l} is now free of $\varepsilon$, which facilitates the task of finding a solution and reduces the required computational time. If \eqref{eq:PLDE_eps-l} can be uniquely solved for $y_{-\ell}$ and the solution matched to initial values, one can move to the next higher power in $\varepsilon$ by inserting the solution into \eqref{eq:y_eps} and substituting back into \eqref{eq:PLDE-eps}. In the resulting equation one compares the coefficients of the next power of $\varepsilon$ and solves for $y_{-\ell+1}$. The process is repeated as many times as needed until all the terms of interest in the Laurent expansion are obtained.
For instance, consider the equation
\begin{eqnarray}
&&\big[3 (k+1) (n+1)+4 (n+1)+1\big] (4 k n^2 \varepsilon ^3+5 n \varepsilon +6 \varepsilon ^2+1) f(n+1,k+1)
\nonumber\\&&
-(3 k n+4 n+1) \big[4 (k+1) (n+1)^2 \varepsilon ^3+5 (n+1)
\varepsilon +6 \varepsilon ^2+1\big] f(n,k) = 0 .
\label{eq:example-series}
\end{eqnarray}
Together with a list of 25 initial values, our procedure for computing the expansion encounters at the orders
$\varepsilon^{-2},\varepsilon^{-1},\varepsilon^0$ the equations
\begin{eqnarray}
(-3 k n-4 n-1) f(n,k)+(3 k n+3 k+7 n+8) f(n+1,k+1) &=& \tau ,
\end{eqnarray}
with $\tau = 0, 5, 0$, respectively,
which are free of $\varepsilon$. The series solution of \eqref{eq:example-series} is found to be
\begin{equation}
f(n,k) = \frac{1}{\varepsilon ^2 (3 k n+4 n+1)}+\frac{5 n}{\varepsilon (3 k n+4 n+1)}+\frac{6}{3 k n+4 n+1} + \mathcal O(\varepsilon).
\end{equation}
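One can check numerically (an illustrative Python sketch) that these expansion coefficients satisfy the $\varepsilon$-free recurrence at each order with $\tau = 0, 5, 0$:

```python
from fractions import Fraction

D = lambda n, k: 3 * k * n + 4 * n + 1   # common denominator of the expansion

# coefficients of the epsilon-expansion of f(n, k) found above
f_m2 = lambda n, k: Fraction(1, D(n, k))        # order eps^-2
f_m1 = lambda n, k: Fraction(5 * n, D(n, k))    # order eps^-1
f_0  = lambda n, k: Fraction(6, D(n, k))        # order eps^0

def order_eq(g, n, k):
    # left-hand side of the epsilon-free recurrence at each order:
    # (-3kn - 4n - 1) g(n,k) + (3kn + 3k + 7n + 8) g(n+1,k+1)
    # note that 3kn + 3k + 7n + 8 equals D(n+1, k+1)
    return -D(n, k) * g(n, k) + D(n + 1, k + 1) * g(n + 1, k + 1)

# the recurrences hold with tau = 0, 5, 0 at the three orders
assert all(order_eq(f_m2, n, k) == 0 for n in range(1, 5) for k in range(1, 5))
assert all(order_eq(f_m1, n, k) == 5 for n in range(1, 5) for k in range(1, 5))
assert all(order_eq(f_0,  n, k) == 0 for n in range(1, 5) for k in range(1, 5))
```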
\section{Conclusions}
\label{sec:conclusion}
\vspace*{1mm}
\noindent
We reviewed techniques, algorithms and implementation choices for the solution of
partial linear differential equations in terms of multivariate power series representations. Here we extract the underlying partial linear difference equations for the power series coefficients (see, e.g., Section~\ref{sec:reclist}) and try to solve them in terms of special functions. For this task we presented an algorithm that can solve frequently arising hypergeometric systems in terms of hypergeometric products (see Section~\ref{sec:recsol}) and elaborated heuristic methods to find such solutions (also in terms of nested sums) for the general higher-order case (see Section~\ref{sec:PLDEsolver}). Special care has been taken with the
$\varepsilon$--expansion of such solutions (see Sections~\ref{Sec:EpExpansion} and~\ref{sec:expansion}), where in
addition, e.g., Hurwitz harmonic sums and generalized versions may arise. Finally, we utilized the
available summation tools in the setting of difference rings to simplify the sum solutions found in terms of indefinite nested sums over hypergeometric products. In particular, various concrete examples of this computer algebra machinery have been elaborated (see Section~\ref{sec:fullmachinery}).
\vspace{5mm}\noindent
{\bf Acknowledgment.}~
We thank J.~Ablinger, D.~Broadhurst, and P.~Marquard for discussions. This project has received
funding from the European Union's Horizon 2020 research and innovation programme under the Marie
Sk\l{}odowska--Curie grant agreement No. 764850, SAGEX and from the Austrian Science Fund (FWF)
grant SFB F50 (F5009-N15) and P33530.
\section{Introduction}
\paragraph{Outcome prediction from clinical notes.} The use of automatic systems in the medical domain is promising due to their potential exposure to large amounts of data from earlier patients. This data can include information that helps doctors make better decisions regarding diagnoses and treatments of a patient at hand. Outcome prediction models take patient information as input and then output probabilities for all considered outcomes \citep{outcome-google,outcome-emnlp}. We focus this work on outcome models using natural language in the form of clinical notes as an input, since they are a common source of patient information and contain a multitude of possible variables.
\paragraph{The problem of black box models and biases.} Recent models show promising results on tasks such as mortality prediction \citep{mortality-amia} and diagnosis prediction \citep{disease-prediction-deep-ehr,outcome-google}. However, since most of the proposed models work as black boxes we do not know which features they consider important for their decisions and how they interpret certain patient characteristics. From earlier work we also know that highly parameterized models are prone to emphasize biases in the data \citep{bias-gender}. Such biases are known to be especially dangerous in the clinical domain \citep{bias-medicine}. We further argue that they have high potential to disadvantage minority groups as their behavior towards out-of-distribution samples is often unpredictable. Thus, understanding models and their shortcomings is an essential prerequisite for their application in the clinical domain. We argue that more in-depth evaluations are needed to know whether such models have learned medically meaningful patterns or not.
\begin{figure*}[t!]
\centering
\includegraphics[width=0.55\textwidth]{images/intro.pdf}
\caption{Minimal alterations to the patient description can have a large impact on outcome predictions of clinical NLP models. We introduce behavioral testing for the clinical domain to analyse whether a model has learned useful or harmful patterns.}
\label{fig:intro}
\end{figure*}
\paragraph{Behavioral testing for the clinical domain.} As a step towards this goal, we introduce a novel testing framework specifically for the clinical domain that enables us to examine the influence of certain patient characteristics on the model predictions. Our work is motivated by behavioral testing frameworks for general Natural Language Processing (NLP) tasks \citep{checklist} in which model behavior is observed under changing input data. Our framework incorporates a number of test cases and is further extendable to the needs of individual data sets and clinical tasks.
\paragraph{Influence of patient characteristics.} As an initial case study we apply the framework to analyse the behavior of models trained on the widely used MIMIC-III database \citep{mimic}. We analyse how sensitive these models are towards textual indicators of protected characteristics in a clinical note, such as \textit{age}, \textit{gender} and \textit{ethnicity}. These characteristics are known to be affected by discrimination and bias in health care \citep{discrimination-framework}; on the other hand, they can represent important risk factors for certain diseases or conditions. That is why we consider it especially important to understand how these mentions affect model decisions.
\paragraph{Contributions.} In summary, we present the following contributions in this work:\\
\textbf{1)} We introduce a novel behavioral testing framework specifically for clinical NLP models. We release the code for applying and extending the framework\footnote{URL: \url{https://github.com/bvanaken/clinical-behavioral-testing}} to enable in-depth evaluations of clinical NLP models.\\
\textbf{2)} We present an analysis on the patient characteristics \textit{gender}, \textit{age} and \textit{ethnicity} to understand the sensitivity of models towards textual cues regarding these groups and whether their predictions are medically plausible.\\
\textbf{3)} We show results of three state-of-the-art clinical NLP models and find that model behavior strongly varies depending on the applied pre-training. We further show that highly optimised models are often more prone to overestimate the effect of certain patient characteristics leading to potentially harmful behavior.
\section{Related Work}
\paragraph{Clinical Outcome Prediction.} Outcome prediction from clinical text has been studied for a variety of outcomes, the most prevalent being in-hospital mortality \citep{mortality-kdd,mortality0,mortality-patient-groups, mortality-amia}, diagnosis prediction \citep{diaprediction1,disease-prediction-deep-ehr,disease-prediction-representation} and phenotyping \citep{phenotyping1,phenotyping2,phenotyping3,phenotyping4}. In recent years, most approaches have been based on deep neural networks due to their ability to outperform earlier methods in most settings. Most recently, Transformer-based models have been applied to the prediction of patient outcomes with reported increases in performance \citep{bert-outcome0,bert-outcome1,bert-outcome2,bert-outcome3,vanaken,bert-outcome4}. In this work we analyse three of these Transformer-based models due to their growing prevalence in the application of NLP in health care.
\subsection{Behavioral Testing in NLP} \citet{checklist} identify shortcomings of common model evaluation on held-out datasets, such as the occurrence of the same biases in both training and test set and the lack of comprehensive testing scenarios in the held-out set. To mitigate these problems, they introduce \textsc{CheckList}, a behavioral testing framework to test general NLP abilities. In particular, they highlight that such frameworks evaluate input-output behavior without any knowledge of the internal structures of a system \citep{beizer}. Building upon \textsc{CheckList}, \citet{hatecheck} introduce a behavioral testing suite for the domain of hate speech detection to address the individual challenges of that task. Following their work, we create a behavioral testing framework for the domain of clinical outcome prediction, which comprises idiosyncratic data and respective challenges.
\subsection{Revealing Biases in Clinical NLP}
The problem of biases in clinical NLP models has already been highlighted by \citet{hurtful-words}. They quantify such biases by focusing on the recall gap among patient groups and by applying an artificial fill-in-the-gap task. They show that models trained on data from MIMIC-III inherit biases regarding gender, language, ethnicity, and insurance status--often in favor of the majority group. We take these findings as motivation to directly analyse the sensitivity of such models with regard to patient characteristics. In contrast to their work, and following \citet{checklist}, we want to eliminate the influence of biased test data on our evaluation. Further, our approach simulates patient cases that are similar to real-life occurrences. It thus displays the actual impact of learned biases on all analysed patient groups.
\section{Behavioral Testing of Clinical NLP Models}
\paragraph{Sample alterations.} Our goal is to examine how clinical NLP models react to mentions of certain patient characteristics in text. Comparable to earlier approaches to behavioral testing, we use sample alterations to artificially create different test groups. In our case, a test group is defined by one manifestation of a patient characteristic, such as \textit{female} as the patient's gender. In order to ensure that we only measure the influence of this specific characteristic, we keep the rest of the patient case unchanged and apply the alterations to all samples in our test dataset. Depending on the original sample, the operations to create a certain test group thus include 1) changing a mention, 2) adding a mention or 3) keeping a mention unchanged (in case a patient case is already part of the test group at hand). This results in one newly created dataset per test group, all based on the same patient cases and differing only in the patient characteristic under investigation.
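The three alteration operations can be sketched as follows; the mention list and the example note are hypothetical stand-ins, since the actual variants are collected from the MIMIC-III training set (and the real setup also swaps pronouns):

```python
import re

# Hypothetical gender-mention variants; real lists come from the training notes.
GENDER_TERMS = {"female": "woman", "male": "man", "transgender": "transgender woman"}

def make_test_group(note: str, group: str) -> str:
    """Alter the gender mention so the note falls into the given test group."""
    pattern = "|".join(re.escape(t) for t in GENDER_TERMS.values())
    if re.search(pattern, note):
        # 1) change an existing mention (or 3) keep it if already matching)
        return re.sub(pattern, GENDER_TERMS[group], note, count=1)
    # 2) add a mention if none is present
    return note.replace("patient", GENDER_TERMS[group], 1)

note = "45 yo woman admitted with chest pain."
groups = {g: make_test_group(note, g) for g in GENDER_TERMS}
assert groups["male"] == "45 yo man admitted with chest pain."
assert groups["female"] == note  # unchanged: already in this test group
```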
\begin{figure*}[t!]
\includegraphics[width=\textwidth]{images/framework.pdf}
\caption{\textbf{Behavioral testing framework for the clinical domain}. Schematic overview of the introduced framework. From an existing test set we create test groups by altering specific tokens in the clinical note. We then analyse the change in predictions which reveals the impact of the mention on the clinical NLP model.}
\label{fig:framework}
\end{figure*}
\paragraph{Prediction analysis.} After creating the test groups, we collect the models' predictions for all cases in each test group.
Different from earlier approaches to behavioral testing, we do not test whether predictions on the altered samples are true or false with regard to the ground truth. As \citet{vanaken} pointed out, there is no real ground truth in clinical data, because the collected data only shows one possible pathway for a patient out of many. Further, existing biases in treatments and diagnoses are likely included in our testing data, potentially leading to meaningless results. To prevent that, we instead focus on detecting how the model outputs change regardless of the original annotations. This way we can also evaluate very rare mentions (e.g. \textit{transgender}) and reliably observe their impact on the model predictions. Figure \ref{fig:framework} shows a schematic overview of the functioning of the framework.
\paragraph{Extensibility.} In this study, we use the introduced framework to analyse model behavior with regard to patient characteristics as described in \ref{section:characteristics}. However, it can also be used to test more general model behavior such as the ability to identify negated symptoms or to detect specific diagnoses when certain indicators are present in the text. It is further possible to combine certain test groups e.g. to analyse how a model behaves on a combination of patient characteristics.
\section{Experimental Setup}
\subsection{Data}
We conduct our analysis on data from the MIMIC-III database \citep{mimic}. In particular we use the outcome prediction task setup by \citet{vanaken}. The classification task includes 48,745 admission notes annotated with the patients' clinical outcomes at discharge. We select the outcomes \textit{diagnoses at discharge} and \textit{in-hospital mortality} for this analysis, since they have the highest impact on patient care and present a high potential to disadvantage certain patient groups. We use three models (see \ref{section:models}) trained on the two \textit{admission} to \textit{discharge} tasks and conduct our analysis on the test set defined by the authors with a total of 9,829 samples.
\subsection{Considered Patient Characteristics}
\label{section:characteristics}
We choose three characteristics for the analysis in this work: \textit{Age}, \textit{gender} and \textit{ethnicity}. While these characteristics differ in their importance as clinical risk factors, all of them are known to be subject to biases and stigmas in health care \citep{discrimination-framework}. Therefore, we want to test, whether the analysed models have learned medically plausible patterns or ones that might be harmful to certain patient groups.
We deliberately also include groups that occur very rarely in the original dataset. We want to understand the impact of imbalanced input data especially on minority groups, since they are already disadvantaged by the health care system \citep{health-bias1,health-bias2}.
When altering the samples in our test set, we utilize that patients are described in a mostly consistent way at the beginning of a clinical note. We collect all mention variations from the training set used to describe the different patient characteristics and alter the samples accordingly in an automated setup.
\paragraph{Age.} The age of a patient is a significant risk factor for a number of clinical outcomes. Our test includes all ages between 18 and 89 and the \hbox{[** Age over 90**]} de-identification label from the MIMIC-III database. \citet{vanaken} presented a comparable analysis on 20 random patient cases. We extend this analysis to all samples within a given test set for more reliable results. By analysing the model behavior on age mentions we can gain insights into how the models interpret numbers, which is considered challenging for current NLP models \citep{bert-numbers}.
\paragraph{Gender.} A patient's gender is both a risk factor for certain diseases and also subject to unintended biases in healthcare. We test the model's behavior regarding gender by altering the gender mention and by changing all pronouns in the clinical note. In addition to \textit{female} and \textit{male}, we also consider \textit{transgender} as a gender test group in our study. This group is extremely rare in clinical datasets like MIMIC-III, but since approximately 1.4 million people in the U.S. identify as transgender \citep{transgender-us}, it is important to understand how model predictions change when the characteristic is present in a clinical note.
\paragraph{Ethnicity.}
The ethnicity of a patient is only occasionally mentioned in clinical notes and its role in medical decision-making is controversial, since it can lead to disadvantages in patient care \citep{race-role,race-relevant}. Earlier studies have also shown that ethnicity in clinical notes is often incorrectly assigned \citep{race-validity}. We want to know how clinical NLP models interpret the mention of ethnicity in a clinical note and whether their behavior can cause unfair treatment. We choose \textit{White}, \textit{African American}, \textit{Hispanic} and \textit{Asian} as ethnicity groups for our evaluation, as they are the most frequent ethnicities in MIMIC-III.
\begin{table}
\caption{Performance of three state-of-the-art models on the outcome prediction tasks diagnoses (multi-label) and mortality prediction (binary task) in \% AUROC. PubMedBERT outperforms the other two models in both tasks by a small margin.}
\label{table:models}
\centering
\begin{tabular}{lccc}
\toprule
& PubMedBERT & CORe & BioBERT \\
\midrule
Diagnoses & \textbf{83.75} & 83.54 & 82.81 \\
Mortality & \textbf{84.28} & 84.04 & 82.55 \\
\bottomrule
\end{tabular}
\end{table}
\subsection{Clinical NLP Models}
\label{section:models}
In this study, we apply the introduced testing framework to three existing clinical models which are fine-tuned on the tasks of diagnosis and mortality prediction. We use the model checkpoints of \citet{vanaken} and additionally fine-tune the PubMedBERT model \citep{pubmedbert} on the same training data with the same hyperparameter setup\footnote{Hyperparameters: Batch size: 20; learning rate: 5e-05; dropout: 0.1; warmup steps: 1000; early stopping patience: 20.}. The models are based on the BERT architecture \citep{bert} as it presents the current state-of-the-art in predicting patient outcomes. Their performance on the two tasks is shown in Table \ref{table:models}. We deliberately choose three models based on the same architecture to investigate the impact of pre-training data while keeping architectural considerations aside. In general the proposed testing framework is model agnostic and works with any type of text-based outcome prediction model.
\paragraph{BioBERT.} \citet{biobert} introduced BioBERT which is based on a pre-trained BERT Base \citep{bert} checkpoint. They applied another language model fine-tuning step using biomedical articles from PubMed abstracts and full-text articles. BioBERT has shown improved performance on both medical and clinical downstream tasks.
\paragraph{CORe.} Clinical Outcome Representations (CORe) by \citet{vanaken} are based on BioBERT and extended with a pre-training step that focuses on the prediction of patient outcomes. The pre-training data includes clinical notes, Wikipedia articles and case studies from PubMed.
\paragraph{PubMedBERT.} \citet{pubmedbert} recently introduced PubMedBERT based on similar data as BioBERT. They use PubMed articles and abstracts but instead of extending a BERT Base model, they train PubMedBERT from scratch. The model reaches state-of-the-art results on multiple medical NLP tasks and outperforms the other analysed models on the outcome prediction tasks.
\section{Results}
\label{section:results}
We present the results on all test cases by averaging the probabilities that a model assigns to each test sample. We then compare the averaged probabilities across test cases to identify which characteristics have a large impact on the model's predictions over the whole test set. The values per diagnosis in the heatmaps shown in Figures \ref{fig:gender}, \ref{fig:mimic-gender}, \ref{fig:ethnicity} and \ref{fig:ethnicity-mimic} are defined using the following formula:
\begin{equation}
c_i = p_i - \frac{\sum_{j}^{N} p_j}{N}
\end{equation}
where $c_i$ is the value assigned to test group $i$, $p$ is the (predicted) probability for a given diagnosis and $N$ is the number of all test groups except $i$.
We choose this illustration to highlight both positive and negative influence of a characteristic on model behavior. Since all test groups are based on the same patients and only differ regarding the characteristic at hand, even small differences in the averaged predictions can point towards general patterns that the model learned to associate with a characteristic.
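The heatmap values can be computed as in the following sketch (the probabilities are made-up toy numbers):

```python
# Deviation of each test group's averaged prediction from the mean
# of the other groups, as in the formula above. Toy numbers only.
def deviation_scores(avg_probs):
    """avg_probs maps test group -> averaged predicted probability
    of one diagnosis over the whole test set."""
    scores = {}
    for group, p in avg_probs.items():
        others = [q for g, q in avg_probs.items() if g != group]
        scores[group] = p - sum(others) / len(others)
    return scores

scores = deviation_scores({"female": 0.30, "male": 0.40, "transgender": 0.20})
assert abs(scores["male"] - 0.15) < 1e-12        # 0.40 - (0.30 + 0.20)/2
assert abs(scores["transgender"] + 0.15) < 1e-12  # 0.20 - (0.30 + 0.40)/2
```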
\subsection{Influence of Gender}
\paragraph{Transgender mention leads to lower mortality and diagnoses predictions.} Table \ref{table:mortality-gender} shows the mortality predictions of the three analysed models with regard to the gender assigned in the text. While the predicted mortality risk for female and male patients lies within a small range, all models predict a lower mortality risk for patients described as transgender than for non-transgender patients. This is probably due to the relatively young age of most transgender patients in the MIMIC-III training data, but it can be harmful to older patients identifying as transgender at inference time.
\begin{figure*}[t!]
\includegraphics[width=\textwidth]{images/gender_vlag.pdf}
\caption{Influence of \textbf{gender} on predicted diagnoses. Blue: Predicted probability for diagnosis is below-average; red: predicted probability above-average. PubMedBERT shows highest sensitivity to gender mention and regards many diagnoses less likely if \textit{transgender} is mentioned in the text. Graph shows deviation of probabilities on 24 most common diagnoses in test set.}
\label{fig:gender}
\end{figure*}
\begin{figure*}[t!]
\includegraphics[width=0.48\textwidth]{images/mimic_gender_vlag.pdf}
\caption{Original distribution of diagnoses per \hbox{\textbf{gender}} in MIMIC-III. Cell colors: Deviation from average probability. Numbers in parenthesis: Occurrences in the training set. Most diagnoses occur less often in transgender patients due to their very low sample count.}
\label{fig:mimic-gender}
\end{figure*}
\paragraph{Sensitivity to gender mention varies across models.} Figure \ref{fig:gender} shows the change in model prediction for each diagnosis with regard to the gender mention. The cells of the heatmap are the deviations from the average score of the other test cases. Thus, a light cell indicates that the model assigns a higher probability to a diagnosis for this gender group. We see that PubMedBERT is highly sensitive to the change of the patient gender, especially regarding transgender patients. Apart from a few diagnoses such as \textit{Cardiac dysrhythmias} and \textit{Drug Use / Abuse}, the model predicts a lower probability for diseases if the patient letter contains the transgender mention. The CORe and BioBERT models are less sensitive in this regard. The most salient deviation of the BioBERT model is a drop in the probability of \textit{Urinary tract disorders} for male patients, which is medically plausible due to anatomical differences \citep{urinary-tract-infections2016}.
\paragraph{Biases in MIMIC-III training data are partially inherited.} In Figure \ref{fig:mimic-gender} we show the original distribution of diagnoses per gender in the training data. Note that the deviations are about 10 times larger than the ones produced by the model predictions in Figure \ref{fig:gender}. This indicates that the models take gender as a decision factor, but only among others. Due to the very rare occurrence of transgender mentions (only seven cases in the training data), most diagnoses are underrepresented for this group. This is partially reflected by the model predictions, especially by PubMedBERT, as described above. Other salient patterns such as the prevalence of \textit{Chronic ischemic heart disease} in male patients are only reproduced faintly by the models.
\begin{table}[t!]
\caption{Influence of \textbf{gender} on mortality predictions. PubMedBERT assigns the highest risk to female patients, the other models to male patients. Notably, all models decrease their mortality prediction for transgender patients.}
\label{table:mortality-gender}
\centering
\begin{tabular}{lccc}
\toprule
& PubMedBERT & CORe & BioBERT \\
\midrule
Female & \textbf{0.335} & 0.239 & 0.119 \\
Male & 0.333 & \textbf{0.245} & \textbf{0.121} \\
Transgender & \cellcolor{gray!30} 0.326 & \cellcolor{gray!30} 0.229 & \cellcolor{gray!30} 0.117 \\
\bottomrule
\end{tabular}
\end{table}
\subsection{Influence of Age}
\begin{figure}[b!]
\centering
\includegraphics[width=0.47\textwidth]{images/age_mortality.pdf}
\caption{Influence of \textbf{age} on mortality predictions. X-axis: Simulated age; y-axis: predicted mortality risk. The three models are differently calibrated and only CORe is highly influenced by age.}
\label{fig:age-mortality}
\end{figure}
\paragraph{Mortality risk predictions are differently influenced by age.} Figure \ref{fig:age-mortality} shows the averaged predicted mortality per age for all models and the actual distribution from the training data (dotted line). We can see that BioBERT does not take age into account when predicting mortality risk, except for patients over 90 (which are described by the tokens [**Age over 90 **] in MIMIC-III). The PubMedBERT model assigns a higher mortality risk to all age groups, with a small increase for patients over 60 and an even steeper increase for patients over 90. The CORe model follows the training data most closely and also inherits many of its peaks and troughs.
\paragraph{Models are equally affected by age when predicting diagnoses.} We exemplify the impact of age on diagnosis prediction with eight outcome diagnoses in Figure \ref{fig:age}. The dotted lines show the distribution of each diagnosis within an age group in the training data. The changes in predictions with age are similar across the analysed models, with only small variations such as for \textit{Cardiac dysrhythmias}. Some diagnoses are considered more probable in older patients (e.g. \textit{Acute Kidney Failure}) and others in younger patients (e.g. \textit{Abuse of drugs}). The \hbox{distributions} per age group in the training data are more extreme, but follow the same tendencies as predicted by the models.
\begin{figure*}[t!]
\includegraphics[width=\textwidth]{images/diagnoses.pdf}
\caption{Influence of \textbf{age} on diagnosis predictions. The x-axis is the simulated age and the y-axis is the predicted probability of a diagnosis. All models follow similar patterns with some diagnosis risks increasing with age and some decreasing. The original training distributions (black dotted line) are mostly followed but attenuated.}
\label{fig:age}
\end{figure*}
\paragraph{Prediction peaks indicate lack of number understanding.} From earlier studies we know that BERT-based models have difficulties dealing with numbers in text \citep{bert-numbers}. The peaks that we observe in some predictions support this finding. For instance, the models assign a higher risk of \textit{Cardiac dysrhythmias} to patients aged 73 than to patients aged 74, because they do not capture that these are consecutive ages. Therefore, the influence of age on the predictions is solely based on the individual age tokens observed in the training data.
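The age test groups used above differ from the original note only in the age mention, which can be generated by simple token substitution; a minimal sketch, assuming a plain "\texttt{NN-year-old}" age pattern (the actual MIMIC-III age tokens differ, e.g. for patients over 90):

```python
import re

def make_age_variants(note, ages=range(18, 90)):
    """Create one test-group variant of a clinical note per simulated age,
    leaving the rest of the text unchanged."""
    return {age: re.sub(r"\b\d{1,3}-year-old\b", f"{age}-year-old", note)
            for age in ages}
```

Averaging a model's predictions over such variants, as done for the figures above, isolates the influence of the age token itself.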
\subsection{Influence of Ethnicity}
\paragraph{Mention of any ethnicity decreases prediction of mortality risk.} Table \ref{table:mortality-eth} shows the mortality predictions when different ethnicities are mentioned and when there is no mention. We observe that the mention of any of the ethnicities leads to a decrease in mortality risk prediction in all models, with White and African American patients receiving the lowest probabilities.
\paragraph{Diagnoses predicted by PubMedBERT are highly sensitive to ethnicity mentions.} Figure \ref{fig:ethnicity} depicts the influence of ethnicity mentions on the three models. Notably, the predictions of PubMedBERT are strongly influenced by ethnicity mentions. Multiple diagnoses such as \textit{Chronic kidney disease} are more often predicted when there is no mention of ethnicity, while diagnoses like \textit{Hypertension} and \textit{Abuse of drugs} are considered more likely in African American patients and \textit{Unspecified anemias} in Hispanic patients. While the original training data in Figure \ref{fig:ethnicity-mimic} shows the same strong variance among ethnicities, this bias is not inherited in the same way by the CORe and BioBERT models. However, we can also observe deviations regarding ethnicity in these models.
\begin{table}[b!]
\caption{Influence of \textbf{ethnicity} on mortality predictions. The mention of an ethnicity decreases the predicted mortality risk. White and African American patients are assigned the lowest mortality risk (gray-shaded).}
\label{table:mortality-eth}
\centering
\begin{tabular}{lccc}
\toprule
& PubMedBERT & CORe & BioBERT \\
\midrule
No mention & \textbf{0.333} & \textbf{0.243} & \textbf{0.120} \\
White & \cellcolor{gray!30} 0.329 & \cellcolor{gray!30} 0.235 & 0.119 \\
African Amer. & \cellcolor{gray!30} 0.329 & 0.239 & \cellcolor{gray!30} 0.116 \\
Hispanic & 0.331 & 0.237 & 0.118 \\
Asian & 0.330 & 0.238 & 0.118 \\
\bottomrule
\end{tabular}
\end{table}
\begin{figure*}[t!]
\includegraphics[width=\textwidth]{images/ethnicity_vlag.pdf}
\caption{Influence of \textbf{ethnicity} on diagnosis predictions. Blue: predicted probability for the diagnosis is below average; red: predicted probability above average. PubMedBERT's predictions are highly influenced by ethnicity mentions, while CORe and BioBERT show smaller deviations but also disparities for specific groups.}
\label{fig:ethnicity}
\end{figure*}
\begin{figure}[t!]
\includegraphics[width=0.48\textwidth]{images/ethnicity_mimic_vlag.pdf}
\caption{Original distribution of diagnoses per \hbox{\textbf{ethnicity}} in MIMIC-III. Cell colors: Deviation from average probability. Numbers in parenthesis: Occurrences in the training set. Both the distribution of samples and the occurrences of diagnoses are highly unbalanced in the training set. Some patterns are inherited by the fine-tuned models, while others are not.}
\label{fig:ethnicity-mimic}
\end{figure}
\paragraph{African American patients are assigned lower risk of diagnoses by CORe and BioBERT.} The heatmaps showing predictions of CORe and BioBERT reveal a potentially harmful pattern in which the mention of \textit{African American} in a clinical note decreases the predictions for a large number of diagnoses. This pattern is found more prominently in the CORe model, but also in BioBERT. This behavior can lead to disadvantages in the treatment of African American patients and would reinforce existing biases in health care \citep{race-treatment}.
\section{Discussion}
\paragraph{Sensitivity and impact of characteristics show large variance.} The results described in Section~\ref{section:results} reveal large differences in the influence of patient characteristics across models. The analysis shows that there is no overall \textit{best} model; each model has learned both useful patterns (e.g. age as a medically plausible risk factor) and potentially dangerous ones (e.g. decreases in diagnosis risks for minority groups). The large variance is surprising since the models share an architecture and are fine-tuned on the same data--they only differ in their pre-training. And while the reported AUROC scores for the models (Table \ref{table:models}) are close to each other, the variance in learned behavior shows that we should consider in-depth analyses a crucial part of model evaluation in the clinical domain. This is especially important since unintended biases in clinical NLP models are often fine-grained and difficult to detect.
\paragraph{Best performing model is especially sensitive to gender and ethnicity mentions.} The analysis has shown that PubMedBERT, which outperforms the other models in both mortality and diagnosis prediction, shows greater sensitivity to mentions of gender and ethnicity in the text. This is alarming since it particularly affects minority groups which are already disadvantaged by the health care system. It also shows that instead of judging clinical models by a single score, looking at their robustness and potential impact should be further emphasized.
\paragraph{De-biasing methods need to be aligned with medical knowledge.} The application of de-biasing approaches has been shown to be effective in general language scenarios in the past \citep{debias}. While their evaluation is out of the scope of this work, we want to highlight that their application in clinical outcome prediction can be challenging. We argue that de-biasing methods cannot be applied to patient characteristics in clinical text in the same way as for general language. The decision about which characteristics should be considered a risk factor and their impact on outcome predictions should be aligned with medical knowledge. Therefore, we direct follow-up research towards iterative model learning using feedback loops with medical professionals to distinguish favorable patterns from adverse ones.
\section{Conclusion}
In this work, we introduced a novel behavioral testing framework for the clinical domain that enables us to understand the effects of textual variations on a model's predictions. We apply this framework to examine the impact of certain patient characteristics and evaluate whether current NLP models reproduce dangerous biases in health care. Our results show that the models have indeed learned to overestimate the influence of certain characteristics, especially those of minority groups, which can potentially lead to disadvantages. With this work we want to emphasize the importance of model evaluation beyond common metrics, especially in sensitive areas like health care. For future research we propose additional behavioral analyses, e.g. regarding stigmatizing language in clinical notes as defined by \citet{words-matter}. We also propose to apply the framework to evaluate different de-biasing approaches and to further develop methods for removing harmful biases while keeping plausible patterns regarding clinical risk factors intact.
\section*{Acknowledgments}
Our work is funded by the German Federal Ministry for Economic Affairs and Energy (BMWi) under grant agreement 01MD19003B (PLASS) and 01MK2008MD (Servicemeister).
\section{Introduction}
Pancreatic ductal adenocarcinoma (PDAC) is the most common form of pancreatic cancer, which has the worst prognosis of all cancer diseases worldwide with a 5-year relative survival rate of only 10.8\% \cite{Ryan2014PancreaticAdenocarcinoma, CancerStatFacts-PancreaticCancerAccessed19/11/2021}. The incidence of pancreatic cancer is increasing, and it is estimated to become the second leading cause of cancer-related deaths in Western societies by 2030 \cite{CancerStatFacts-PancreaticCancerAccessed19/11/2021, Siegel2020Cancer2020}. Patients diagnosed in early disease stages, where the tumors are small (size < 2 cm) and frequently resectable, present a much higher 3-year survival rate (82\%) than patients diagnosed in later disease stages where the tumors are larger (17\%) \cite{Ardengh2003PancreaticResectability}. Unfortunately, tumors are rarely found in early stages, and approximately 80–85\% of patients present with either unresectable or metastatic disease at the time of diagnosis \cite{Ryan2014PancreaticAdenocarcinoma}. Given these statistics, it is clear that early diagnosis of PDAC is crucial to improve patient outcomes, as reversing the stage distribution would more than double the overall survival, without any additional improvements in therapy \cite{Kenner2021ArtificialCancer}.
Early PDAC detection is challenging, as most patients do not present specific symptoms until advanced disease stages, and screening the general population is cost-prohibitive with current technology \cite{Gheorghe2020EarlySurvival, Kenner2021ArtificialCancer}. Furthermore, PDAC tumors are difficult to visualize in computed tomography (CT) scans, which are the most used modality for initial diagnosis, as lesions present irregular contours and poorly-defined margins \cite{Kenner2021ArtificialCancer}. This becomes an even more significant challenge in the initial disease stages as lesions are not only small (< 2 cm) but also often iso-attenuating, making them easily overlooked even by experienced radiologists \cite{HoYoon2011Small}. A recent study that reconstructed the progression of CT changes in prediagnostic PDAC showed that suspicious changes could be retrospectively observed 18 to 12 months before clinical PDAC diagnosis. However, the radiologists' sensitivity at identifying those changes, and consequently referring patients for further investigation, was only 44\% \cite{Singh2020ComputerizedStudy}.
Artificial intelligence (AI) can potentially assist radiologists in early PDAC detection by leveraging high amounts of imaging data. Deep learning models, and more specifically convolutional neural networks (CNNs), are a class of AI algorithms especially suited for image analysis and have shown high accuracy in the image-based diagnosis of various types of cancer \cite{Esteva2017Dermatologist-levelNetworks, McKinney2020InternationalScreening, Yasaka2018DeepStudy}. CNNs take the scan as input and automatically extract relevant features for the diagnostic task by performing a series of sequential convolution and pooling operations.
Clinically relevant computer-aided diagnostic systems should have the ability to both detect the presence of cancer and, in the positive cases, localize the lesion in the input image, with minimal to none required user interaction.
Recently, deep learning models have started to be investigated for automatic PDAC diagnosis \cite{Zhu2019Multi-scaleAdenocarcinoma, Xia2020DetectingEnsemble, Ma2020ConstructionDiagnosis, Liu2020DeepValidation, K2021FullyTumors, Wang2021LearningPrediction}. However, most studies perform only binary classification of the input image as cancerous or non-cancerous, without simultaneous lesion localization. Furthermore, the majority of publications do not focus on small, early-stage lesions, with only one study reporting model performance for tumors with size < 2 cm \cite{Liu2020DeepValidation}.
In this study, we hypothesize that state-of-the-art deep learning architectures can be used to detect and localize PDAC lesions accurately, especially regarding the subgroup of tumors with size < 2 cm. We propose a fully automatic deep-learning framework that takes an abdominal CE-CT scan as input and produces a tumor likelihood score and a likelihood map as output. Furthermore, we assess the impact of surrounding anatomy integration, which is known to be relevant for clinical diagnosis \cite{HoYoon2011Small}, on the performance of the deep-learning models. The framework performance is validated using an external, publicly available test set, and the results on the subgroup of tumors with size < 2 cm are also reported.
\section{Materials and Methods}
\subsection{Dataset}
This study was approved by the institutional review board (Radboud University Medical Centre, Nijmegen, The Netherlands), and informed consent from individual patients was waived due to its retrospective design. CE-CT scans in the portal venous phase from 119 patients with pathology-proven PDAC in the pancreatic head (PDAC cohort) and 123 patients with normal pancreas (non-PDAC cohort), acquired between January 1st, 2013 and June 1st, 2020, were selected for model development.
Two publicly available abdominal CE-CT datasets containing scans in the portal venous phase were combined and used for model testing: (1) "The Medical Segmentation Decathlon" dataset (MSD) from Memorial Sloan Kettering Cancer Center (USA), consisting of 281 patients with pancreatic malignancies \cite{Simpson2019AAlgorithms}, and (2) "The Cancer Imaging Archive" dataset from the US National Institutes of Health Clinical Center, containing 80 patients with normal pancreas \cite{Clark2013TheRepository, Pancreas-CTTheWiki}.
The size of the tumors was measured from the tumor segmentation as the maximum diameter in the axial plane.
\subsection{Image Acquisition and Labeling}
The CE-CT scans were acquired with five scanners (Aquilion One, Toshiba [Tochigi, Japan]; Sensation 64 and SOMATOM Definition AS+, Siemens Healthcare [Forchheim, Germany]; Brilliance 64, Philips Healthcare [Best, Netherlands]; BrightSpeed, GE Medical system, [Milwaukee, WI, USA]). The slice thickness was 1.0–5.0 mm, and image size was either 512$\times$512 pixels (232 images) or 1024$\times$1024 pixels (10 images). Images with size 1024$\times$1024 pixels were resampled to 512$\times$512 prior to inclusion in model development.
All images from the PDAC cohort were manually segmented using ITK-SNAP version 3.8.0 by trained medical students, and were then verified and corrected by an abdominal radiologist with 17 years of experience in pancreatic radiology.
The annotations included the segmentation of the tumor, pancreas parenchyma, and six surrounding relevant anatomical structures, namely the surrounding veins (portal vein, superior mesenteric vein, and splenic vein), arteries (aorta, superior mesenteric artery, celiac trunk, hepatic artery, and splenic artery), pancreatic duct, common bile duct, pancreatic cysts (if present) and portomesenteric vein thrombosis (if present).
\subsection{Automatic PDAC Detection Framework}
This study uses a segmentation-oriented approach for automatic PDAC detection and localization, where each voxel in the image is assigned either a tumor or non-tumor label. The models in the proposed pipeline were developed using the state-of-the-art, self-configuring framework for medical segmentation \textit{nnUnet} \cite{Isensee2021NnU-Net:Segmentation}. All models employed a 3D U-Net \cite{Cicek20163DAnnotation} as base architecture and were trained for 250,000 training steps with 5-fold cross-validation.
Regions of interest (ROIs) around the pancreas were manually extracted for both the PDAC and non-PDAC cohorts. An anatomy segmentation network was trained to segment the pancreas and the other anatomical structures (refer to the previous section), using the extracted ROIs from the scans in the PDAC cohort. This network was used to automatically annotate the ROIs from the non-PDAC cohort, which were then combined with the manually annotated PDAC cohort to train three different \textit{nnUnet} models for PDAC detection and localization: (1) segmenting only the tumor (\textit{nnUnet\_T}), (2) segmenting the tumor and pancreas (\textit{nnUnet\_TP}), (3) segmenting the tumor, pancreas and the multiple surrounding anatomical structures (\textit{nnUnet\_MS}). These networks were trained with two different initializations and identical 5-fold cross-validation splits, yielding ten models for each configuration. The cross-entropy (CE) loss function was used for the PDAC-detection networks since it has been shown to be more suitable for segmentation-oriented detection tasks than the soft DICE+CE loss function, which is selected by default in the \textit{nnUnet} framework \cite{Baumgartner2021NnDetection:Detection, Saha2021AnatomicalFunctions}. Additionally, the full CE-CT scans from the PDAC cohort were downsampled to a resolution of 256$\times$256 and used to train a low-resolution pancreas segmentation network, which was then employed to automatically extract the pancreas ROI from unseen images during inference.
At inference time, images were downsampled, and the low-resolution pancreas segmentation network was used to obtain a coarse segmentation of the pancreas. This coarse mask was upsampled back to the original image resolution and dilated with a spherical kernel to close any existing gaps. Finally, a fixed margin was applied to automatically extract the ROI, which was the input to the previously described PDAC detection models. This extraction margin was defined based on the cross-validation results obtained with the PDAC cohort so that no relevant information is lost while cropping the ROI.
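The ROI extraction step above can be sketched as follows, assuming the coarse mask has already been upsampled to the original image resolution; the dilation iterations and margin here are illustrative placeholders, not the values tuned in cross-validation:

```python
import numpy as np
from scipy import ndimage

def extract_roi(image, coarse_mask, dilation_iters=2, margin=10):
    """Crop a region of interest around an (upsampled) coarse pancreas mask."""
    # Close gaps in the coarse mask; repeated 6-connected dilation
    # approximates the spherical kernel described in the pipeline.
    struct = ndimage.generate_binary_structure(3, 1)
    dilated = ndimage.binary_dilation(coarse_mask, structure=struct,
                                      iterations=dilation_iters)
    # Bounding box of the dilated mask, expanded by a fixed margin
    # and clipped to the image bounds.
    coords = np.argwhere(dilated)
    lo = np.maximum(coords.min(axis=0) - margin, 0)
    hi = np.minimum(coords.max(axis=0) + 1 + margin, image.shape)
    slices = tuple(slice(int(l), int(h)) for l, h in zip(lo, hi))
    return image[slices], slices
```

The returned slices can later be used to map the cropped prediction back into the coordinate frame of the full scan.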
Each of the PDAC detection models (\textit{nnUnet\_T}, \textit{nnUnet\_TP} and \textit{nnUnet\_MS}) outputs a voxel-level tumor likelihood map, which indicates the regions of the image where the network predicts a PDAC lesion and the respective prediction confidence. In the case of the \textit{nnUnet\_TP} and \textit{nnUnet\_MS} networks, a segmentation of the pancreas is also produced. This segmentation was used in post-processing to reduce false positives outside the pancreas by masking the tumor confidence maps so that only the PDAC predictions in the pancreas region are maintained.
After post-processing, candidate PDAC lesions were extracted iteratively from the tumor likelihood map by selecting the voxel with maximum predicted likelihood and including all connected voxels (in 3D) with at least 40\% of this peak likelihood value. Then, the candidate lesion was removed from the model prediction, and the process was repeated until no candidates remained or a maximum of 5 lesions were extracted. The final output of the framework was a tumor likelihood defined as the maximum value of the tumor likelihood map.
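The iterative candidate extraction just described can be sketched as below; this is a simplified illustration of the peak-picking procedure, not the exact implementation:

```python
import numpy as np
from scipy import ndimage

def extract_lesion_candidates(likelihood_map, rel_threshold=0.4, max_lesions=5):
    """Iteratively extract candidate lesions from a voxel-level tumor
    likelihood map: pick the peak, keep its connected region above 40%
    of the peak value, remove it, and repeat."""
    remaining = likelihood_map.copy()
    candidates = []
    for _ in range(max_lesions):
        peak = remaining.max()
        if peak <= 0:
            break  # no candidates left
        # All voxels with at least 40% of the current peak likelihood ...
        mask = remaining >= rel_threshold * peak
        # ... restricted to the 3D connected component containing the peak.
        labels, _ = ndimage.label(mask)
        peak_idx = np.unravel_index(remaining.argmax(), remaining.shape)
        candidate = labels == labels[peak_idx]
        candidates.append((float(peak), candidate))
        remaining[candidate] = 0  # remove the candidate and repeat
    return candidates
```

The peak value of the first candidate equals the maximum of the likelihood map, i.e. the framework's final patient-level tumor likelihood.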
A schematic representation of the inference pipeline from the original image input to the final tumor likelihood prediction is shown in Figure~\ref{fig:inference_pipeline}.
\subsection{Analysis}
Patient-level performance was evaluated using the receiver operating characteristic (ROC) curve, while lesion-level performance was evaluated using the free-response receiver operating characteristic (FROC) curve. The ROC analysis assesses the model's confidence that a tumor is or is not present by plotting the true positive rate (sensitivity) against the false positive rate (1-specificity) at different thresholds for the model output, defined as the maximum value of the tumor likelihood map. The FROC analysis additionally assesses whether the model identified the lesion in the correct location, by plotting the true positive rate against the average number of false positives per image, at different thresholds for each individual lesion prediction \cite{Chakrabortv1989MaximumData, Bunch1977APerformance}. Each candidate lesion extracted from the tumor detection likelihood map was represented by the maximum confidence value within that lesion candidate, being considered a true positive if the Dice similarity coefficient with the ground truth was at least 0.1.
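The true-positive criterion for a candidate lesion follows directly from the Dice definition; a sketch, where both masks are boolean voxel arrays:

```python
import numpy as np

def is_true_positive(candidate_mask, gt_mask, min_dice=0.1):
    """A candidate counts as a true positive if its Dice similarity
    coefficient with the ground-truth lesion is at least min_dice."""
    intersection = np.logical_and(candidate_mask, gt_mask).sum()
    denom = candidate_mask.sum() + gt_mask.sum()
    return bool(denom > 0 and 2 * intersection / denom >= min_dice)
```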
To compare the three different PDAC-detection configurations, the ten trained models for each were applied individually to the test set. A permutation test with 100,000 iterations was then used to assess statistically significant differences between the area under the ROC curve (AUC-ROC) and partial area under the FROC curve (pAUC-FROC), which was calculated in the interval of [0.001-5] false positives per patient. A confidence level of 97.5\% was used to assess statistical significance (with Bonferroni correction for multiple comparisons).
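A sketch of such a permutation test for the AUC-ROC difference: under the null hypothesis the two models' per-patient scores are exchangeable, so each patient's pair of scores is randomly swapped in every iteration (the FROC variant works analogously on the lesion-level metric; the rank-sum AUC here assumes untied scores):

```python
import numpy as np

def auc_roc(y_true, scores):
    """AUC-ROC via the rank-sum (Mann-Whitney U) formulation."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    pos = y_true == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def permutation_test_auc(y_true, scores_a, scores_b, n_iter=10_000, seed=0):
    """Two-sided permutation test for the AUC-ROC difference between two
    models evaluated on the same patients."""
    rng = np.random.default_rng(seed)
    observed = abs(auc_roc(y_true, scores_a) - auc_roc(y_true, scores_b))
    hits = 0
    for _ in range(n_iter):
        swap = rng.random(len(y_true)) < 0.5  # swap each patient's score pair
        a = np.where(swap, scores_b, scores_a)
        b = np.where(swap, scores_a, scores_b)
        if abs(auc_roc(y_true, a) - auc_roc(y_true, b)) >= observed:
            hits += 1
    return (hits + 1) / (n_iter + 1)  # add-one smoothing avoids p = 0
```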
The final performance for each configuration was obtained by ensembling the predictions of the ten models.
\end{paracol}
\nointerlineskip
\begin{figure}[H]
\centering
\includegraphics[width=16cm]{Inference_Pipeline_New.png}
\caption{Schematic overview of the proposed automatic PDAC detection framework. The first step in the pipeline is to automatically extract the ROI from the full input CE-CT scan, using the low-resolution pancreas segmentation network. This ROI is then fed to each of the PDAC detection networks: \textit{nnUnet\_T}, \textit{nnUnet\_TP} and \textit{nnUnet\_MS}. The final tumor likelihood output is derived from the networks' tumor detection likelihood maps, which in the case of the \textit{nnUnet\_TP} and \textit{nnUnet\_MS} models is post-processed using the automatically generated pancreas segmentation. }
\label{fig:inference_pipeline}
\end{figure}
\begin{paracol}{2}
\switchcolumn
\section{Results}
The clinical characteristics of the patients in the PDAC cohort are summarized in Table~\ref{tabel:patient_charachteristics}. For the non-PDAC cohort, the mean age was 52.3\textpm 21.4 years, and there were 54 female and 69 male patients.
The performance of the three different PDAC detection network configurations on the internal 5-fold cross-validation sets is shown in Table~\ref{tabel:validation_table}. At the patient level, the \textit{nnUnet\_MS} achieves the best performance, with an AUC-ROC of 0.991. Regarding lesion localization performance, the three configurations achieve a similar pAUC-FROC, with the \textit{nnUnet\_MS} and \textit{nnUnet\_TP} performing slightly better than the \textit{nnUnet\_T}.
\begin{specialtable}[H]
\centering
\captionsetup{justification=centering}
\caption{Clinical characteristics of the patients in the PDAC cohort. Data are mean\textpm standard deviation or median (interquartile range). The tumor stages are: I-locally resectable, II-borderline resectable, III-locally advanced, IV-metastasized. \label{tabel:patient_charachteristics}}
\begin{tabular}{cc}
\toprule
Age (years) & 69.2\textpm 8.5 \\
Gender (M/F) & 67/52\\
Tumor Stage (I/II/III/IV) & 22/21/47/29 \\
Tumor size (cm) & 2.8 (2.3-3.7)\\
\bottomrule
\end{tabular}
\end{specialtable}
\begin{specialtable}[H]
\centering
\captionsetup{justification=centering}
\caption{Internal 5-fold cross-validation results for each configuration.\label{tabel:validation_table}}
\begin{tabular*}{\linewidth}{@{\extracolsep{\fill}} ccc}
\toprule
Configuration & mean AUC-ROC (95\%CI) & mean pAUC-FROC (95\%CI)\\
\midrule
\textbf{nnUnet\_T} & 0.963 (0.914-1.0) & 3.855 (3.156-4.553)\\
\textbf{nnUnet\_TP} & 0.986 (0.956-1.0) & 3.999 (3.252-4.747)\\
\textbf{nnUnet\_MS} & 0.991 (0.970-1.0) & 3.996 (3.027-4.965)\\
\bottomrule
\end{tabular*}
\end{specialtable}
The mean ROC and FROC curves obtained on the external test set with each PDAC detection network configuration are shown in Figure~\ref{fig:test_results}, with the respective 95\% confidence intervals. These curves were calculated using the 10 different trained models (2 initializations with 5-fold cross-validation) for each configuration. The \textit{nnUnet\_MS} and \textit{nnUnet\_TP} both achieve an AUC-ROC of around 0.89, significantly higher than that of the \textit{nnUnet\_T} ($p=0.007$ and $p=0.009$, respectively). At the lesion level, the \textit{nnUnet\_MS} achieves a significantly higher pAUC-FROC than both the \textit{nnUnet\_TP} and \textit{nnUnet\_T} ($p<10^{-4}$).
\end{paracol}
\nointerlineskip
\begin{figure}[H]
\captionsetup{justification=centering}
\widefigure
\begin{subfigure}[b]{0.5\textwidth}
\includegraphics[width=\linewidth]{ROC_all_legend.png}
\label{fig:mean_ROC_test}
\end{subfigure}%
\begin{subfigure}[b]{0.5\textwidth}
\includegraphics[width=\linewidth]{FROC_all_legend.png}
\label{fig:mean_FROC_test}
\end{subfigure}
\caption{Mean ROC and FROC curves with respective confidence intervals for the external test set.}\label{fig:test_results}
\end{figure}
\begin{paracol}{2}
\switchcolumn
There were 73 tumors with size < 2 cm in the MSD dataset. Figure~\ref{fig:test_results_subgroup} shows the patient- and lesion-level results for each configuration on this subset of smaller tumors. At the patient level, the AUC-ROC decreases by about 0.05 for each configuration compared to the results obtained on the whole dataset. The \textit{nnUnet\_MS} and \textit{nnUnet\_TP} continued to outperform the \textit{nnUnet\_T}, although the differences were not statistically significant at a confidence level of 97.5\% ($p=0.034$ and $p=0.077$, respectively). Regarding lesion-level performance, the pAUC-FROC for the \textit{nnUnet\_MS} was still significantly higher than for the \textit{nnUnet\_TP} and \textit{nnUnet\_T} ($p<10^{-4}$ and $p=4.8\cdot 10^{-4}$, respectively).
The results obtained by ensembling the 10 models for each configuration are shown in Table~\ref{tabel:enamble_results}.
\end{paracol}
\nointerlineskip
\begin{figure}[H]
\widefigure
\captionsetup{justification=centering}
\begin{subfigure}[b]{0.5\textwidth}
\includegraphics[width=\linewidth]{ROC_sg_legend.png}
\label{fig:mean_ROC_validation}
\end{subfigure}%
\begin{subfigure}[b]{0.5\textwidth}
\includegraphics[width=\linewidth]{FROC_sg_legend.png}
\label{fig:mean_FROC_validation}
\end{subfigure}
\caption{Mean ROC and FROC curves with respective confidence intervals for the external set considering only the subgroup of tumors with size < 2cm.}\label{fig:test_results_subgroup}
\end{figure}
\begin{paracol}{2}
\switchcolumn
\begin{specialtable}[H]
\centering
\captionsetup{justification=centering}
\caption{Ensemble results for each configuration on the whole test set and the subgroup of tumors with size < 2 cm.\label{tabel:enamble_results}}
\begin{tabular*}{\linewidth}{@{\extracolsep{\fill}} cccc}
\toprule
Subgroup & Configuration & AUC-ROC & pAUC-FROC \\
\midrule
\multirow{3}{*}{Whole Test Dataset} & \textbf{nnUnet\_T} & 0.872 & 3.031\\
& \textbf{nnUnet\_TP} & 0.914 & 3.397\\
& \textbf{nnUnet\_MS} & 0.909 & 3.700\\
\midrule
\multirow{3}{*}{Tumors size < 2cm} & \textbf{nnUnet\_T} & 0.831 & 2.671\\
&\textbf{nnUnet\_TP} & 0.867 & 3.289\\
&\textbf{nnUnet\_MS} & 0.876 & 3.553\\
\bottomrule
\end{tabular*}
\end{specialtable}
Figure~\ref{fig:output_example} shows an example of the network outputs of \textit{nnUnet\_TP} and \textit{nnUnet\_MS} for an isoattenuating lesion in the neck-body of the pancreas. This lesion was missed by both the \textit{nnUnet\_T} and \textit{nnUnet\_TP}, but could be correctly identified by the \textit{nnUnet\_MS} model.
\begin{figure}[H]
\widefigure
\includegraphics[width=\linewidth]{patient5.png}
\caption{Example of an isoattenuating tumor from the external test set which was missed by both the \textit{nnUnet\_T} and \textit{nnUnet\_TP} but could be correctly localized by the \textit{nnUnet\_MS}. (A) slice of the original ROI input; (B) ground truth segmentation of tumor and pancreas; (C) output of the \textit{nnUnet\_TP}, which in this case is only the pancreas segmentation as the tumor is not detected; (D) output of the \textit{nnUnet\_MS}, which is the segmentation of the detected tumor and surrounding anatomy.}\label{fig:output_example}
\end{figure}
\section{Discussion}
In this study, the state-of-the-art, self-configuring framework for medical segmentation \textit{nnUnet} \cite{Isensee2021NnU-Net:Segmentation} was used to develop a fully automatic pipeline for the detection and localization of PDAC tumors on CE-CT scans. Furthermore, the impact of integrating surrounding anatomy was assessed.
A significant challenge in applying deep learning to PDAC detection is that the pancreas occupies only a small portion of abdominal CE-CT scans, with the lesions being an even smaller target within that region. Training and testing the networks on full CE-CT scans would be resource-intensive and would expose the model to a large amount of irrelevant information about surrounding organs, distracting it from the pancreatic lesion location. It is therefore necessary to select a small volume of interest around the pancreas, but having expert professionals manually annotate the pancreas before running each image through the network requires extra time and resources, which would significantly diminish the model's clinical usefulness. To address this issue, the first step in our PDAC detection framework is to automatically extract a smaller volume of interest from the full input CE-CT scan by obtaining a coarse pancreas segmentation with a low-resolution \textit{nnUnet}. To the best of our knowledge, this is the first study to develop a deep-learning-based fully automatic PDAC detection framework and externally validate it on a publicly available test set.
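As an illustration, the cropping step described above can be sketched with NumPy. This is a minimal sketch under stated assumptions: the `extract_voi` helper and its `margin` default are hypothetical, and the actual pipeline derives the crop from a low-resolution \textit{nnUnet} segmentation rather than a toy mask:

```python
import numpy as np

def extract_voi(scan, mask, margin=10):
    """Crop a volume of interest around a binary pancreas mask.

    Hypothetical helper: takes a 3D scan and a coarse binary mask of the
    same shape, and returns the bounding box plus `margin` voxels of context.
    """
    coords = np.argwhere(mask > 0)
    lo = np.maximum(coords.min(axis=0) - margin, 0)
    hi = np.minimum(coords.max(axis=0) + margin + 1, scan.shape)
    return scan[tuple(slice(a, b) for a, b in zip(lo, hi))]

# toy 64^3 scan with a small synthetic "pancreas" blob
scan = np.random.rand(64, 64, 64)
mask = np.zeros_like(scan)
mask[20:30, 25:35, 30:40] = 1
voi = extract_voi(scan, mask, margin=5)
print(voi.shape)  # (20, 20, 20)
```

The bounding box is clipped to the scan boundaries, so blobs near an edge still yield a valid crop.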
Previous studies have employed deep CNNs for automatic PDAC detection on CT scans \cite{Zhu2019Multi-scaleAdenocarcinoma, Xia2020DetectingEnsemble, Ma2020ConstructionDiagnosis, Liu2020DeepValidation, K2021FullyTumors, Wang2021LearningPrediction}, but only two studies validated their models on an external test set \cite{Liu2020DeepValidation, K2021FullyTumors}, with one using the publicly available pancreas dataset. Liu and Wu et al. \cite{Liu2020DeepValidation} developed a 2D, patch-based deep learning model using the VGG architecture to distinguish pancreatic cancer tissue from non-cancerous pancreatic tissue. This approach required prior expert delineation of the pancreas, which was then processed by the network in patches that were classified as cancerous or non-cancerous. At a patient level, the presence of tumor was then determined based on the proportion of patches that the model classified as cancerous. The authors tested this model on the external test set and achieved an AUC-ROC of 0.750 (95\%CI [0.749-0.752]) for the patch-based classifier and 0.920 (95\%CI [0.891-0.948]) for the patient-based classifier \cite{Liu2020DeepValidation}. On the subgroup of tumors with size < 2 cm, the model achieved a sensitivity of 0.631 (0.502 to 0.747). More recently, Si et al. \cite{K2021FullyTumors} developed an end-to-end diagnosis pipeline for pancreatic malignancies, achieving an AUC-ROC of 0.871 on an external test set, but validation on the publicly available dataset was not performed.
Our proposed automatic PDAC detection framework achieved a maximum AUC-ROC of 0.914 for the whole external test set and 0.876 for the subgroup of tumors with size < 2 cm. This performance is comparable to the current state-of-the-art for this test dataset \cite{Liu2020DeepValidation}, but with the advantage of being obtained automatically from the input image, with no user interaction required. Another advantage of our framework is that the lesion location is also identified, so the classification outcomes are immediately interpretable, since they arise directly from the network's segmentation of the tumor. Moreover, the achieved results set a new baseline performance for fully automatic PDAC detection, noticeably improving on the previous best AUC-ROC of 0.871 reported by Si et al.
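For reference, patient-level AUC-ROC values of the kind compared above can be computed from per-patient detection scores via the Mann--Whitney statistic (the probability that a randomly chosen PDAC patient outranks a randomly chosen control). The scores below are invented toy values, not data from this study:

```python
import numpy as np

def auc_roc(y_true, y_score):
    """AUC-ROC as the Mann-Whitney U statistic: P(score_pos > score_neg),
    counting ties as one half."""
    pos = y_score[y_true == 1]
    neg = y_score[y_true == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# invented per-patient scores (e.g. maximum predicted-tumor probability)
y_true  = np.array([0, 0, 1, 1, 1, 0, 1, 0])
y_score = np.array([0.1, 0.3, 0.8, 0.7, 0.25, 0.2, 0.9, 0.35])
print(auc_roc(y_true, y_score))  # 0.875
```

The same value is returned by standard library implementations such as scikit-learn's `roc_auc_score`; the explicit pairwise form is shown here to make the statistic transparent.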
To the best of our knowledge, this is the first study to assess the impact of multiple surrounding anatomical structures on the performance of deep learning models for PDAC detection. Pancreatic lesions often present low contrast and poorly defined margins on CE-CT scans, with 5.4-14\% of tumors being completely iso-attenuating and impossible to differentiate from normal pancreatic tissue \cite{Blouhos2015TheAnalysis}. These iso-attenuating tumors are identified only by the presence of secondary imaging findings (such as the dilation of the pancreatic duct) and are more prevalent in early disease stages \cite{HoYoon2011Small, Blouhos2015TheAnalysis}. In clinical practice, surrounding structures such as the pancreatic duct, the common bile duct, the surrounding veins (portomesenteric and splenic veins), and arteries (celiac trunk, superior mesenteric, common hepatic, and splenic arteries) are essential for PDAC diagnosis and local staging \cite{HoYoon2011Small, Blouhos2015TheAnalysis}. However, so far deep-learning models have focused only on the tumor and non-cancerous pancreas parenchyma, not taking into account the diagnostic information provided by the surrounding anatomy.
In this framework, the anatomy information was incorporated in the \textit{nnUnet\_MS} model, which was trained to segment not only the tumor and pancreas parenchyma but also several other relevant anatomical structures. The rationale behind this approach was that by learning to differentiate between the different types of tissue present in the pancreas volume of interest, the network could learn underlying relationships between the structures and consequently better localize the lesions. This network was compared to the \textit{nnUnet\_T}, which was trained to segment only the tumor, and the \textit{nnUnet\_TP}, trained to segment the tumor and pancreas parenchyma, in order to assess the impact of adding surrounding anatomy.
The results on the external test set show that, at a patient level, there is a clear benefit in adding the pancreas parenchyma when compared to training with only the tumor segmentation, as both the \textit{nnUnet\_TP} and \textit{nnUnet\_MS} achieved a significantly higher AUC-ROC than the \textit{nnUnet\_T}. There were, however, no differences between the performances of the \textit{nnUnet\_TP} and \textit{nnUnet\_MS} networks. In contrast, at a lesion level, there was a clear separation between the three FROC curves both on the whole test set and on the subgroup of tumors with size < 2 cm (Figures~\ref{fig:test_results} and \ref{fig:test_results_subgroup}), with the \textit{nnUnet\_MS} achieving a significantly higher pAUC-FROC than the other two configurations. This shows that the addition of surrounding anatomy improves the model's ability to localize PDAC lesions. Figure~\ref{fig:output_example} illustrates the advantage of anatomy integration in the case of an iso-dense lesion that is obstructing the pancreatic duct, causing its dilation. Both the \textit{nnUnet\_T} and \textit{nnUnet\_TP} models fail to identify this lesion, as there are no visible differences between the tumor and the healthy pancreas parenchyma. However, the \textit{nnUnet\_MS} can accurately detect its location in the pancreatic neck-body following the termination of the dilated duct. By providing supervised training to segment the duct and other surrounding structures, the model can better focus on the remaining regions of the pancreas parenchyma, which may explain its ability to detect faint tumors. Furthermore, the multi-structure segmentation provided by the \textit{nnUnet\_MS} presents useful information to the radiologist that can assist the interpretation of the network output regarding the tumor.
Despite the promising results, there are two main limitations to this study. First, the models were trained with a relatively low number of patients and only included tumors in the pancreatic head, which could be holding back the performance on external cohorts with heterogeneous imaging data. We are currently working on extending the training dataset to incorporate more patients, including tumors in the body and tail of the pancreas, in order to mitigate this issue. Second, training the anatomy segmentation network requires manual labeling of the different structures, which is resource-intensive. To address this problem, we only manually labeled the images from the PDAC cohort and used self-learning to automatically segment the non-PDAC cohort, which could introduce errors in training. As with the previous issue, the solution is to increase the size of the training dataset so that the model can learn better representations of the anatomy and consequently produce higher-quality automatic annotations.
\section{Conclusions}
This study proposes a fully automatic, deep-learning-based framework that can identify whether or not a patient suffers from PDAC and localize the tumor in CE-CT scans. The proposed models achieve a maximum AUC-ROC of 0.914 on the whole external test set and 0.876 for the subgroup of tumors with size < 2 cm, indicating that state-of-the-art deep-learning models are able to identify small PDAC lesions and could be useful in assisting radiologists in early PDAC diagnosis. Moreover, we show that adding surrounding anatomy information significantly increases model performance regarding lesion localization.
\vspace{6pt}
\authorcontributions{Conceptualization, N.A., M.S., J.H. and H.H.; methodology, N.A. and H.H.; software, N.A. and J.B.; validation, N.A.; formal analysis, N.A.; investigation, N.A., M.S., G.L., J.B., J.H. and H.H.; resources, N.A., J.H. and H.H.; data curation, N.A. and G.L.; writing---original draft preparation, N.A.; writing---review and editing, M.S., G.L., J.B., J.H. and H.H.; visualization, N.A.; supervision, J.H. and H.H.; project administration, H.H.; funding acquisition, H.H. All authors have read and agreed to the published version of the manuscript.}
\funding{This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 101016851, project PANCAIM.}
\dataavailability{The data presented in this study are available on request from the corresponding author, dependent on ethics board approval. The data are not publicly available due to data protection legislation.}
\conflictsofinterest{The authors declare no conflict of interest.}
\end{paracol}
\reftitle{References}
\externalbibliography{yes}
\section{Introduction}
Painlev\'e equations may be formulated as Hamiltonian systems. This
has led to an important role in the theory of such equations for concepts
from classical mechanics and symplectic geometry, such as canonical
coordinates, tau-functions, and moduli spaces of solutions with symplectic
structures. The benefit of the symplectic point of view is that it
illuminates a path to the study of more general nonlinear differential
equations, especially those which are ``integrable''.
The Painlev\'e equations themselves are scalar ordinary differential
equations of second order, and this facilitates explicit calculations.
For systems, or higher-order equations, geometry plays a more essential
role.
Indeed, a considerable amount of general theory has been developed,
for example by Hitchin \cite{H} and Boalch \cite{B}, building on earlier
work of Schlesinger \cite{Sch} and Jimbo-Miwa-Ueno \cite{JMU}. On
the other hand, examples are rather scarce, partly because of the difficulty
of carrying out explicit calculations, and partly because of the lack
of interesting concrete examples for higher rank systems.
The purpose of this article is to explain some symplectic aspects
of the tt{*}-Toda equations, a system of nonlinear ordinary differential
equations of ``Painlev\'e type'', which is a relatively recent example
arising in physics. The tt{*} equations (topological--antitopological
fusion equations) arose in the work of Cecotti and Vafa on supersymmetric
quantum field theory (\cite{CV}, \cite{D}), and the tt{*}-Toda equations
are a special case of these equations, of ``Toda type''. The simplest
nontrivial case of the tt{*}-Toda equations is the (radial) sinh-Gordon
equation, which is in fact a case of the third Painlev\'e equation.
It was investigated -- for similar physical reasons -- by McCoy-Tracy-Wu
\cite{MTW}, and their work had far-reaching consequences.
More recently, the tt{*}-Toda equations were investigated in detail
by Guest-Its-Lin \cite{II,III} and by Mochizuki \cite{M1,M2}, and
our motivation was to put some of these results into a symplectic
context and investigate them further. We have succeeded in doing this
only for a certain subset of solutions -- an open subset of the moduli
space of all solutions -- but the results are encouraging, and have
already led to a new application, which we shall explain later.
The paper is organized as follows. After a brief review of the tt{*}-Toda
equations in section 2, we give their Hamiltonian formulation in section
3. In section 4 we explain the symplectic structures on the space
of solutions that we consider and on a corresponding space of monodromy
data. The correspondence between solutions (asymptotic data) and monodromy
data (Stokes matrices and connection matrices) is an example of the
Riemann-Hilbert correspondence for meromorphic connections with irregular
singularities. Our first main result (Theorem \ref{thm:symp}) is
that this correspondence preserves the symplectic structures. This
is consistent with the general results of Boalch \cite{B}, but we
shall go further and give an explicit generating function which relates
the corresponding canonical coordinates (Theorem \ref{thm:gen_fcn}).
In section 5 we give an application of these results to the asymptotics
of the tau function. For each solution of the tt{*}-Toda equation
there is a corresponding tau function, and it is the properties of
these tau functions (rather than the solutions themselves) which are important
for many applications in physics.
A recent example is the work of Its, Lisovyy, and Tykhyy \cite{ILT},
in which the structure of the tau functions was elucidated using representation
theory (conformal blocks), as a consequence of AGT duality in physics.
This was used to solve the ``constant problem'' for Painlev\'e
equations, i.e. the problem of finding the constant which relates
the short-distance and long-distance expansions of the tau function.
In the case of the (radial) sine-Gordon equation, a rigorous and more
direct proof was given by Its and Prokhorov \cite{IP}. Our second
main result makes use of their method in order to solve this ``constant
problem'' for the tt{*}-Toda equations. As in \cite{IP}, we find
that the (explicit) generating function plays a crucial role (Theorem
\ref{thm:const}).
This work is part of the author's Ph.\ D.\ thesis at Waseda University.
He would like to acknowledge his supervisor, Prof.\ Martin Guest
for his support throughout this work and his guidance in this field.
He would like to acknowledge Prof.\ Alexander Its for his friendly
advice and his suggestions regarding the constant problem. He is also
glad to acknowledge the financial support from the Mathematics and
Physics Unit ``Multiscale Analysis, Modelling and Simulation'',
Top Global University Project, Waseda University.
\section{The tt{*}-Toda equations}
Let a positive integer $n$ be fixed. The tt{*}-Toda equations are
\begin{equation}
2\left(w_{i}\right)_{t\bar{t}}=-e^{2(w_{i+1}-w_{i})}+e^{2(w_{i}-w_{i-1})},\:w_{i}:\mathbb{C}^{*}\rightarrow\mathbb{R},\:i\in\mathbb{Z},\label{eq:tt}
\end{equation}
where, for all $i$, $w_{i}=w_{i+n+1}$, $w_{i}=w_{i}(\left|t\right|)$
$(t\in\mathbb{C}^{*})$, and
\[
w_{0}+w_{n}=0,\;w_{1}+w_{n-1}=0,\;\dots\;\text{(anti-symmetry condition)}.
\]
The equations (\ref{eq:tt}) are equivalent to the flatness of $\nabla:=d+\alpha$,
i.e. the zero curvature equation $d\alpha+\alpha\wedge\alpha=0$,
where
\[
\alpha:=\left(w_{t}+\frac{1}{\lambda}W^{T}\right)dt+\left(-w_{\bar{t}}+\lambda W\right)d\bar{t},
\]
\[
w=\begin{pmatrix}w_{0}\\
& \ddots\\
& & w_{n}
\end{pmatrix},\,W=\begin{pmatrix}0 & e^{w_{1}-w_{0}}\\
& 0 & \ddots\\
& & \ddots & e^{w_{n}-w_{n-1}}\\
e^{w_{0}-w_{n}} & & & 0
\end{pmatrix}.
\]
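For concreteness, the matrices $w$ and $W$ are easy to assemble numerically. The following is a plain-Python sketch (the function name is ours); as a quick consistency check, the telescoping product of the nonzero entries of $W$ is $e^{(w_{1}-w_{0})+\cdots+(w_{0}-w_{n})}=1$:

```python
import math

def tt_matrices(w):
    """Build the diagonal matrix w and the cyclic matrix W for (w_0, ..., w_n)."""
    N = len(w)
    D = [[w[i] if i == j else 0.0 for j in range(N)] for i in range(N)]
    W = [[0.0] * N for _ in range(N)]
    for i in range(N):
        # superdiagonal entries e^{w_{i+1} - w_i}, with the cyclic corner entry
        W[i][(i + 1) % N] = math.exp(w[(i + 1) % N] - w[i])
    return D, W

# n = 3 with the anti-symmetry condition w_2 = -w_1, w_3 = -w_0
D, W = tt_matrices([0.3, 0.1, -0.1, -0.3])
cyc = [W[i][(i + 1) % 4] for i in range(4)]
print(math.prod(cyc))  # telescoping product, equals 1 up to rounding
```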
The tt{*}-Toda equations are also equivalent to the isomonodromy condition
for the following ordinary differential equation
\begin{equation}
\frac{d\Psi}{d\zeta}=\left(-\frac{1}{\zeta^{2}}W-\frac{1}{\zeta}xw_{x}+x^{2}W^{T}\right)\Psi,\label{eq:aux}
\end{equation}
where $x:=\left|t\right|$.
Generically, the local solutions near $x=0$ of the tt{*}-Toda equations
are parametrized by real numbers $\gamma_{i},\rho_{i}$ as follows
\cite{III,M1,M2}:
\begin{equation}
2w_{i}(x)=\gamma_{i}\log x+\rho_{i}+o(1)\quad\text{as }x\rightarrow0.\label{eq:o(1)}
\end{equation}
We call the parameters $\gamma_{i},\rho_{i}$ the asymptotic data.
``Generically'' means $-2<\gamma_{i+1}-\gamma_{i}<2$; the general
case has $-2\leq\gamma_{i+1}-\gamma_{i}\leq2$. We assume the generic
condition from now on.
There is another important set of data $m_{i}$, $e_{i}^{\mathbb{R}}$
called the monodromy data. These are eigenvalues of certain matrices
$M$ and $E$, which are related to monodromy data such as Stokes
matrices. See \cite{III} or the appendix for details. The proof in
\cite{III} is for the case $n=3$, but exactly the same method provides
the results of Theorems \ref{thm:GIL} and \ref{thm:global-solutions}
below for general $n$.
\begin{thm}
\label{thm:GIL}\cite{III} The monodromy data $m_{i}$, $e_{i}^{\mathbb{R}}$
may be expressed in terms of the asymptotic data as follows:
\begin{align*}
m_{i} & =-\frac{1}{2}\gamma_{i}\\
e_{i}^{\mathbb{R}} & =\begin{cases}
e^{\rho_{i}}2^{2\gamma_{i}}\frac{X_{n-i}(\gamma_{0},\dots,\gamma_{(n-1)/2},-\gamma_{(n-1)/2},\dots,-\gamma_{0})}{X_{i}(\gamma_{0},\dots,\gamma_{(n-1)/2},-\gamma_{(n-1)/2},\dots,-\gamma_{0})} & n:\text{odd}\\
e^{\rho_{i}}2^{2\gamma_{i}}\frac{X_{n-i}(\gamma_{0},\dots,\gamma_{(n-2)/2},0,-\gamma_{(n-2)/2},\dots,-\gamma_{0})}{X_{i}(\gamma_{0},\dots,\gamma_{(n-2)/2},0,-\gamma_{(n-2)/2},\dots,-\gamma_{0})} & n:\text{even}
\end{cases}
\end{align*}
where
\[
X_{k}(\gamma_{0},\dots,\gamma_{n}):=\prod_{j=1}^{n}\Gamma(\frac{\gamma_{k}-\gamma_{k+j}+2j}{2(n+1)})\;(\gamma_{j+n+1}=\gamma_{j}).
\]
\end{thm}
Global solutions can be parametrized only by the $\gamma_{i}$ (or
only by the $m_{i}$), that is, for global solutions the $\rho_{i}$
are determined by the $\gamma_{i}$:
\begin{thm}
\label{thm:global-solutions}\cite{III} For global solutions (i.e.
solutions which are smooth for $0<x<\infty$) we have
\[
\rho_{i}=-(2\log2)\gamma_{i}+\log(X_{i}/X_{n-i}),
\]
i.e. $e_{i}^{\mathbb{R}}=1$.
\end{thm}
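The formulas of Theorems \ref{thm:GIL} and \ref{thm:global-solutions} are straightforward to evaluate numerically. The following sketch computes $\rho_{0},\rho_{1}$ for a global solution with $n=3$, using illustrative generic $\gamma$ values (not taken from any particular solution) and the standard library's `lgamma` for $\log\Gamma$:

```python
import math

def logX(k, g):
    """log X_k(gamma_0, ..., gamma_n) from Theorem 1, with cyclic indices."""
    n = len(g) - 1
    return sum(math.lgamma((g[k] - g[(k + j) % (n + 1)] + 2 * j) / (2 * (n + 1)))
               for j in range(1, n + 1))

# n = 3: anti-symmetric, generic asymptotic data (illustrative values)
g = [0.4, 0.1, -0.1, -0.4]
n = len(g) - 1

# rho_i for a global solution (equivalently e_i^R = 1), Theorem 2
rho = [-2 * math.log(2) * g[i] + logX(i, g) - logX(n - i, g) for i in range(2)]
print(all(math.isfinite(r) for r in rho))  # True
```

Working with $\log X_{k}$ rather than $X_{k}$ avoids overflow in the Gamma products; the generic condition $-2<\gamma_{i+1}-\gamma_{i}<2$ keeps every Gamma argument positive.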
\section{The Hamiltonian formulation\label{sec:The-Hamiltonian-formulation}}
Next, we introduce a Hamiltonian function and a symplectic form.
Let $\left\lfloor x\right\rfloor :=\max\{n\in\mathbb{Z}:n\leq x\}$
for $x\in\mathbb{R}$. The tt{*}-Toda equations can be written as
a non-autonomous Hamiltonian system,
\begin{align}
\left(w_{i}\right)_{x} & =\frac{\partial H}{\partial\tilde{w}_{i}}=\frac{\tilde{w}_{i}}{x}\label{eq:*}\\
\left(\tilde{w}_{i}\right)_{x} & =-\frac{\partial H}{\partial w_{i}}=-2x\left(e^{2(w_{i+1}-w_{i})}-e^{2(w_{i}-w_{i-1})}\right),\label{eq:**}
\end{align}
on the phase space $\mathbb{R}^{2\left\lfloor (n-1)/2\right\rfloor +2}=\{(w,\tilde{w})\}$
($w=(w_{0},\dots,w_{\left\lfloor (n-1)/2\right\rfloor })$, $\tilde{w}=(\tilde{w}_{0},\dots,\tilde{w}_{\left\lfloor (n-1)/2\right\rfloor })$)
equipped with the symplectic structure
\[
\theta:=\sum_{i=0}^{\left\lfloor (n-1)/2\right\rfloor }dw_{i}\wedge d\tilde{w}_{i}
\]
where the Hamiltonian $H$ is defined by
\[
H(w,\tilde{w};x):=\frac{1}{2x}\sum_{i=0}^{\left\lfloor (n-1)/2\right\rfloor }\tilde{w}_{i}^{2}-x\sum_{i=1}^{\left\lfloor (n-1)/2\right\rfloor }e^{2(w_{i}-w_{i-1})}-\frac{x}{2}\left(e^{-4w_{\left\lfloor (n-1)/2\right\rfloor }}+e^{4w_{0}}\right).
\]
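As a sanity check, Hamilton's equations (\ref{eq:*}) and (\ref{eq:**}) can be verified against this Hamiltonian by finite differences. This is a stdlib-only sketch for $n=3$ at arbitrary sample values, with the anti-symmetry supplying $w_{-1}=-w_{0}$ and $w_{2}=-w_{1}$:

```python
import math

def H(w0, w1, tw0, tw1, x):
    """Hamiltonian for n = 3 (indices 0, 1; anti-symmetry supplies w_2, w_3)."""
    return ((tw0**2 + tw1**2) / (2 * x) - x * math.exp(2 * (w1 - w0))
            - (x / 2) * (math.exp(-4 * w1) + math.exp(4 * w0)))

w0, w1, tw0, tw1, x, h = 0.3, -0.1, 0.7, 0.2, 1.5, 1e-6

# central finite differences for dH/d(tw_0) and dH/d(w_0)
dH_dtw0 = (H(w0, w1, tw0 + h, tw1, x) - H(w0, w1, tw0 - h, tw1, x)) / (2 * h)
dH_dw0  = (H(w0 + h, w1, tw0, tw1, x) - H(w0 - h, w1, tw0, tw1, x)) / (2 * h)

# (w_0)_x = dH/d(tw_0) = tw_0/x, and
# (tw_0)_x = -dH/d(w_0) = -2x(e^{2(w1-w0)} - e^{2(w0-w_{-1})}) with w_{-1} = -w0
assert abs(dH_dtw0 - tw0 / x) < 1e-8
assert abs(-dH_dw0 + 2 * x * (math.exp(2 * (w1 - w0)) - math.exp(4 * w0))) < 1e-6
print("ok")
```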
The symplectic form $\theta$ is asymptotic to $\sum_{i=0}^{\left\lfloor (n-1)/2\right\rfloor }d(\rho_{i}/2)\wedge d(\gamma_{i}/2)$
as $x\rightarrow0$.
\begin{rem}
The Hamiltonian system may be written in terms of $X:=\log x$ as
follows:
\[
H(w,\tilde{w};X):=\frac{1}{2e^{X}}\sum_{i=0}^{\left\lfloor (n-1)/2\right\rfloor }\tilde{w}_{i}^{2}-e^{X}\sum_{i=1}^{\left\lfloor (n-1)/2\right\rfloor }e^{2(w_{i}-w_{i-1})}-\frac{e^{X}}{2}\left(e^{-4w_{\left\lfloor (n-1)/2\right\rfloor }}+e^{4w_{0}}\right).
\]
\begin{align*}
\left(w_{i}\right)_{X} & =\frac{\partial e^{X}H}{\partial\tilde{w}_{i}}=\tilde{w}_{i}\\
\left(\tilde{w}_{i}\right)_{X} & =-\frac{\partial e^{X}H}{\partial w_{i}}=-2e^{2X}\left(e^{2(w_{i+1}-w_{i})}-e^{2(w_{i}-w_{i-1})}\right).
\end{align*}
\end{rem}
\section{Symplectic structures and the Riemann-Hilbert correspondence}
Both the asymptotic data $\gamma_{i}$, $\rho_{i}$ and the monodromy
data $m_{i}$, $\log e_{i}^{\mathbb{R}}$ can be considered as defining
local charts of the moduli space of solutions. From Theorem \ref{thm:GIL}
we can show that the transformation between two charts via the Riemann-Hilbert
correspondence is symplectic with respect to the ``obvious'' symplectic
structure. The symplectic form $2\theta$, with $\theta$ as defined in section \ref{sec:The-Hamiltonian-formulation},
is asymptotic to the left-hand side of the equality below as $x\rightarrow0$.
\begin{thm}
\label{thm:symp}
\[
-\frac{1}{2}\sum_{i=0}^{\left\lfloor (n-1)/2\right\rfloor }d\gamma_{i}\wedge d\rho_{i}=\sum_{i=0}^{\left\lfloor (n-1)/2\right\rfloor }dm_{i}\wedge d\log e_{i}^{\mathbb{R}}.
\]
\end{thm}
\begin{rem}
\label{rem:KKAH}The left hand side is related to the Kirillov-Kostant
form on a coadjoint orbit, and the right hand side is related to the
Atiyah-Hitchin form on the space of the based rational maps of degree
$n+1$ from $\mathbb{C}P^{1}$ to itself. Thus, both symplectic forms
arise naturally from geometry. We shall present details of these facts
elsewhere.
\end{rem}
Theorem \ref{thm:symp} can be verified by direct calculation, but
we prefer to give a proof by showing the existence of a generating
function. The generating function will play an important role later.
\begin{defn}
\label{def:generating function}Let
\begin{align}
F(\rho_{0},\dots,\rho_{\left\lfloor (n-1)/2\right\rfloor },m_{0},\dots,m_{\left\lfloor (n-1)/2\right\rfloor }): & =-\sum_{i=0}^{\left\lfloor (n-1)/2\right\rfloor }\rho_{i}m_{i}+2\log2\sum_{i=0}^{\left\lfloor (n-1)/2\right\rfloor }m_{i}^{2}\label{eq:F}\\
& +\frac{n+1}{2}\sum_{k=0}^{n}\sum_{j=1}^{n}\psi^{(-2)}\left(\frac{m_{k-j}-m_{k}+j}{n+1}\right)\nonumber
\end{align}
where $m_{j+n+1}=m_{j}$ and $m_{j}=-m_{n-j}$. Here $\psi^{(-2)}(z)=\int_{0}^{z}\log\Gamma(s)\,ds=\frac{z(1-z)}{2}+\frac{z}{2}\log2\pi+z\log\Gamma(z)-\log G(1+z)$,
and $G$ is the Barnes G-function.
\end{defn}
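The closed form quoted for $\psi^{(-2)}$ can be checked against the defining integral numerically. The sketch below uses mpmath, which provides $\log\Gamma$ (`loggamma`) and the Barnes G-function (`barnesg`); its tanh-sinh quadrature handles the integrable logarithmic singularity of $\log\Gamma$ at $0$:

```python
import mpmath as mp

def psi_m2_closed(z):
    """psi^(-2)(z) via the closed form with the Barnes G-function."""
    z = mp.mpf(z)
    return (z * (1 - z) / 2 + (z / 2) * mp.log(2 * mp.pi)
            + z * mp.loggamma(z) - mp.log(mp.barnesg(1 + z)))

def psi_m2_integral(z):
    """psi^(-2)(z) as the defining integral of log Gamma."""
    return mp.quad(mp.loggamma, [0, mp.mpf(z)])

z = mp.mpf('0.7')
diff = abs(psi_m2_closed(z) - psi_m2_integral(z))
print(diff < mp.mpf('1e-10'))  # True
```

At $z=1$ the closed form reduces to $\tfrac{1}{2}\log2\pi$ (since $G(2)=1$), recovering Raabe's integral as a further consistency check.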
\begin{thm}
\label{thm:gen_fcn}The function $F$ is a generating function of
the transformation
\[
(m_{0},\dots,m_{\left\lfloor (n-1)/2\right\rfloor },\rho_{0},\dots,\rho_{\left\lfloor (n-1)/2\right\rfloor })\mapsto(m_{0},\dots,m_{\left\lfloor (n-1)/2\right\rfloor },\log e_{0}^{\mathbb{R}},\dots,\log e_{\left\lfloor (n-1)/2\right\rfloor }^{\mathbb{R}})
\]
with respect to the given symplectic forms. More precisely, $F$
satisfies
\[
m_{i}=-\frac{\partial F}{\partial\rho_{i}},\;\log e_{i}^{\mathbb{R}}=-\frac{\partial F}{\partial m_{i}}.
\]
\end{thm}
\begin{proof}
The first identity is obvious. We show the second identity. Let
\[
\tilde{K}(m_{0},\dots,m_{n}):=(n+1)\sum_{i=0}^{n}\sum_{j=1}^{n}\psi^{(-2)}(\frac{m_{i}-m_{i+j}+j}{n+1}),
\]
where $m_{j+n+1}=m_{j}$. Let
\[
K(m_{0},\dots,m_{\left\lfloor (n-1)/2\right\rfloor }):=\frac{1}{2}\tilde{K}(m_{0},\dots,m_{\left\lfloor (n-1)/2\right\rfloor },-m_{\left\lfloor (n-1)/2\right\rfloor },\dots,-m_{0}).
\]
This $K$ is the last term of $F$ in (\ref{eq:F}). From the definition
of $\log e_{i}^{\mathbb{R}}$ and $F$, it suffices to show that
\begin{align*}
& \frac{\partial K}{\partial m_{k}}(m_{0},\dots,m_{\left\lfloor (n-1)/2\right\rfloor })\\
 & =\log\left(\frac{X_{k}(m_{0},\dots,m_{\left\lfloor (n-1)/2\right\rfloor },-m_{\left\lfloor (n-1)/2\right\rfloor },\dots,-m_{0})}{X_{n-k}(m_{0},\dots,m_{\left\lfloor (n-1)/2\right\rfloor },-m_{\left\lfloor (n-1)/2\right\rfloor },\dots,-m_{0})}\right).
\end{align*}
We can easily obtain that $X_{n-k}(-m_{n},\dots,-m_{0})=\prod_{j=1}^{n}\Gamma(\frac{-m_{k}+m_{k-j}+j}{n+1})$
and that $\tilde{K}(m_{0},\dots,m_{n})=(n+1)\sum_{k=0}^{n}\sum_{j=1}^{n}\psi^{(-2)}(\frac{m_{k-j}-m_{k}+j}{n+1})$.
Then we obtain
\[
\frac{\partial\tilde{K}}{\partial m_{k}}=\log(X_{k}(m_{0},\dots,m_{n})/X_{n-k}(-m_{n},\dots,-m_{0})).
\]
Hence we have
\begin{align*}
& \frac{\partial K}{\partial m_{k}}(m_{0},\dots,m_{\left\lfloor (n-1)/2\right\rfloor })\\
= & \frac{1}{2}\biggl(\frac{\partial\tilde{K}}{\partial m_{k}}(m_{0},\dots,m_{\left\lfloor (n-1)/2\right\rfloor },-m_{\left\lfloor (n-1)/2\right\rfloor },\dots,-m_{0})\\
& -\frac{\partial\tilde{K}}{\partial m_{n-k}}(m_{0},\dots,m_{\left\lfloor (n-1)/2\right\rfloor },-m_{\left\lfloor (n-1)/2\right\rfloor },\dots,-m_{0})\biggr)\\
= & \frac{1}{2}\bigl(\log X_{k}(m_{0},\dots,m_{\left\lfloor (n-1)/2\right\rfloor },-m_{\left\lfloor (n-1)/2\right\rfloor },\dots,-m_{0})\\
 & -\log X_{n-k}(m_{0},\dots,m_{\left\lfloor (n-1)/2\right\rfloor },-m_{\left\lfloor (n-1)/2\right\rfloor },\dots,-m_{0})\\
 & -\log X_{n-k}(m_{0},\dots,m_{\left\lfloor (n-1)/2\right\rfloor },-m_{\left\lfloor (n-1)/2\right\rfloor },\dots,-m_{0})\\
 & +\log X_{k}(m_{0},\dots,m_{\left\lfloor (n-1)/2\right\rfloor },-m_{\left\lfloor (n-1)/2\right\rfloor },\dots,-m_{0})\bigr)\\
= & \log X_{k}(m_{0},\dots,m_{\left\lfloor (n-1)/2\right\rfloor },-m_{\left\lfloor (n-1)/2\right\rfloor },\dots,-m_{0})\\
 & -\log X_{n-k}(m_{0},\dots,m_{\left\lfloor (n-1)/2\right\rfloor },-m_{\left\lfloor (n-1)/2\right\rfloor },\dots,-m_{0}).
\end{align*}
This completes the proof.
\end{proof}
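The key differentiation step in the proof, $(\psi^{(-2)})'(z)=\log\Gamma(z)$ applied to $\tilde{K}$, can be spot-checked by finite differences. The mpmath sketch below (with illustrative $m$ values) compares $\partial\tilde{K}/\partial m_{k}$, computed numerically from the closed form of $\psi^{(-2)}$, with the corresponding difference of $\log\Gamma$ sums:

```python
import mpmath as mp

def psi_m2(z):
    """psi^(-2)(z) via its closed form (Barnes G-function)."""
    z = mp.mpf(z)
    return (z * (1 - z) / 2 + (z / 2) * mp.log(2 * mp.pi)
            + z * mp.loggamma(z) - mp.log(mp.barnesg(1 + z)))

def K_tilde(m):
    """(n+1) * sum over i, j of psi^(-2)((m_i - m_{i+j} + j)/(n+1)), cyclic."""
    n = len(m) - 1
    return (n + 1) * mp.fsum(psi_m2((m[i] - m[(i + j) % (n + 1)] + j) / (n + 1))
                             for i in range(n + 1) for j in range(1, n + 1))

n, k = 3, 1
m = [mp.mpf(v) for v in ('0.2', '0.05', '-0.05', '-0.2')]  # illustrative values

# central finite difference in the single slot m_k
h = mp.mpf('1e-6')
mp1, mm1 = list(m), list(m)
mp1[k] += h; mm1[k] -= h
fd = (K_tilde(mp1) - K_tilde(mm1)) / (2 * h)

# analytic derivative, using (psi^(-2))'(z) = log Gamma(z)
exact = (mp.fsum(mp.loggamma((m[k] - m[(k + j) % (n + 1)] + j) / (n + 1))
                 for j in range(1, n + 1))
         - mp.fsum(mp.loggamma((m[(k - j) % (n + 1)] - m[k] + j) / (n + 1))
                   for j in range(1, n + 1)))
print(abs(fd - exact) < mp.mpf('1e-6'))  # True
```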
\section{Tau functions and the constant problem}
In this section, we assume for simplicity that $n=3$, so $w=(w_{0},w_{1},w_{2},w_{3})$
with $w_{2}=-w_{1}$, $w_{3}=-w_{0}$. For general $n$ the same method
applies. We consider only the global solutions, i.e., we assume $\log e_{i}^{\mathbb{R}}=0$,
which means that the $\rho_{i}$'s are determined by the $\gamma_{j}$'s,
as in Theorem \ref{thm:global-solutions}. The following calculation
is motivated by Theorem 1 in \cite{IP}. See also \cite{P} for further
details.
\begin{defn}
Let us define the tau function of a global solution $w$ by
\[
\log\tau^{w}(x_{1},x_{2})=\intop_{x_{1}}^{x_{2}}H(w_{i}(x),\tilde{w}_{i}(x),x)dx
\]
where $H$ is the Hamiltonian function.
\end{defn}
\begin{rem}
Usually the tau function is defined (up to a multiplicative constant)
by $\log\tau^{w}(x)=\intop^{x}H(w_{i}(x),\tilde{w}_{i}(x),x)dx$.
In that notation we have $\tau^{w}(x_{1},x_{2})=\tau^{w}(x_{2})/\tau^{w}(x_{1})$.
\end{rem}
The Hamiltonian function is
\[
H(x,w_{0},w_{1},\tilde{w}_{0},\tilde{w}_{1})=\frac{1}{2x}(\tilde{w}_{0}^{2}+\tilde{w}_{1}^{2})-xe^{2(w_{1}-w_{0})}-\frac{x}{2}\left(e^{-4w_{1}}+e^{4w_{0}}\right)
\]
and is quasihomogeneous, that is,
\[
H(w,\lambda\tilde{w};\lambda x)=\lambda H(w,\tilde{w};x)\text{ for any }\lambda>0.
\]
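The quasihomogeneity is immediate from the formula for $H$, and can also be confirmed numerically at arbitrary sample values; a minimal stdlib sketch for the $n=3$ Hamiltonian:

```python
import math

def H(w0, w1, tw0, tw1, x):
    """The n = 3 Hamiltonian in the coordinates (w_0, w_1, tw_0, tw_1)."""
    return ((tw0**2 + tw1**2) / (2 * x) - x * math.exp(2 * (w1 - w0))
            - (x / 2) * (math.exp(-4 * w1) + math.exp(4 * w0)))

# arbitrary sample point and scaling factor lambda > 0
w0, w1, tw0, tw1, x, lam = 0.3, -0.1, 0.7, 0.2, 1.5, 2.5
lhs = H(w0, w1, lam * tw0, lam * tw1, lam * x)   # H(w, lambda*tw; lambda*x)
rhs = lam * H(w0, w1, tw0, tw1, x)               # lambda * H(w, tw; x)
print(abs(lhs - rhs) < 1e-12)  # True
```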
It follows that
\[
\sum_{i=0}^{1}\tilde{w}_{i}\frac{\partial H}{\partial\tilde{w}_{i}}+x\frac{\partial H}{\partial x}=H.
\]
For the solution $(w_{0}(x),w_{1}(x),\tilde{w}_{0}(x),\tilde{w}_{1}(x))$
of (\ref{eq:*}) and (\ref{eq:**}), we have
\begin{align*}
\sum_{i=0}^{1}\tilde{w}_{i}(x)\frac{\partial H}{\partial\tilde{w}_{i}}(x,w_{0}(x),w_{1}(x),\tilde{w}_{0}(x),\tilde{w}_{1}(x)) & =\sum_{i=0}^{1}\tilde{w}_{i}(x)\left(w_{i}\right)_{x}(x)\\
x\frac{\partial H}{\partial x}(x,w_{0}(x),w_{1}(x),\tilde{w}_{0}(x),\tilde{w}_{1}(x)) & =-H(x,w_{0}(x),w_{1}(x),\tilde{w}_{0}(x),\tilde{w}_{1}(x))\\
 & +\frac{d}{dx}\left(xH(x,w_{0}(x),w_{1}(x),\tilde{w}_{0}(x),\tilde{w}_{1}(x))\right).
\end{align*}
The first equality is obvious. The second equality follows from $\frac{d}{dx}(xH)=x\frac{dH}{dx}+H$
and
\begin{align*}
 & \frac{dH(x,w_{0}(x),w_{1}(x),\tilde{w}_{0}(x),\tilde{w}_{1}(x))}{dx}\\
 & =\frac{\partial H}{\partial x}+\sum_{i=0}^{1}\left(w_{i}\right)_{x}(x)\frac{\partial H}{\partial w_{i}}+\sum_{i=0}^{1}\left(\tilde{w}_{i}\right)_{x}(x)\frac{\partial H}{\partial\tilde{w}_{i}}\\
 & =\frac{\partial H}{\partial x}-\sum_{i=0}^{1}\left(w_{i}\right)_{x}(x)\left(\tilde{w}_{i}\right)_{x}(x)+\sum_{i=0}^{1}\left(\tilde{w}_{i}\right)_{x}(x)\left(w_{i}\right)_{x}(x)\\
 & =\frac{\partial H}{\partial x}(x,w_{0}(x),w_{1}(x),\tilde{w}_{0}(x),\tilde{w}_{1}(x)),
\end{align*}
where all partial derivatives are evaluated at $(x,w_{0}(x),w_{1}(x),\tilde{w}_{0}(x),\tilde{w}_{1}(x))$.
Then it follows that
\begin{prop}
$H=\tilde{w}_{0}\left(w_{0}\right)_{x}+\tilde{w}_{1}\left(w_{1}\right)_{x}-H+\frac{d}{dx}(xH).$
\end{prop}
Let $S(x_{1},x_{2}):=\intop_{x_{1}}^{x_{2}}\left(\sum_{i=0}^{1}\tilde{w}_{i}\left(w_{i}\right)_{x}-H\right)dx$,
the classical action, i.e.\ the functional whose critical points satisfy
the Euler-Lagrange equations (by the fundamental lemma of the calculus
of variations). We obtain
\begin{align*}
\frac{\partial S(x_{1},x_{2})}{\partial\gamma_{j}} & =\intop_{x_{1}}^{x_{2}}\left(\sum_{i=0}^{1}\left(\left(\tilde{w}_{i}\right)_{\gamma_{j}}\left(w_{i}\right)_{x}+\tilde{w}_{i}\left(\left(w_{i}\right)_{x}\right)_{\gamma_{j}}\right)-\left(H\right)_{\gamma_{j}}\right)dx\\
& =\intop_{x_{1}}^{x_{2}}\left(\sum_{i=0}^{1}\left(\left(\tilde{w}_{i}\right)_{\gamma_{j}}\frac{\partial H}{\partial\tilde{w}_{i}}+\tilde{w}_{i}\left(\left(w_{i}\right)_{x}\right)_{\gamma_{j}}\right)-\sum_{i=0}^{1}\left(\frac{\partial H}{\partial w_{i}}\left(w_{i}\right)_{\gamma_{j}}+\frac{\partial H}{\partial\tilde{w}_{i}}\left(\tilde{w}_{i}\right)_{\gamma_{j}}\right)\right)dx\\
& =\intop_{x_{1}}^{x_{2}}\left(\sum_{i=0}^{1}\left(\left(\tilde{w}_{i}\right)_{\gamma_{j}}\frac{\partial H}{\partial\tilde{w}_{i}}-\left(\tilde{w}_{i}\right)_{x}\left(w_{i}\right)_{\gamma_{j}}\right)-\sum_{i=0}^{1}\left(\frac{\partial H}{\partial w_{i}}\left(w_{i}\right)_{\gamma_{j}}+\frac{\partial H}{\partial\tilde{w}_{i}}\left(\tilde{w}_{i}\right)_{\gamma_{j}}\right)\right)dx\\
& +\left.\left(\sum_{i=0}^{1}\tilde{w}_{i}\left(w_{i}\right)_{\gamma_{j}}\right)\right|_{x_{1}}^{x_{2}}\\
& =\left.\left(\tilde{w}_{0}\left(w_{0}\right)_{\gamma_{j}}+\tilde{w}_{1}\left(w_{1}\right)_{\gamma_{j}}\right)\right|_{x_{1}}^{x_{2}}.
\end{align*}
The second equality follows from (\ref{eq:*}) and the chain rule,
the third from integration by parts, and the fourth from (\ref{eq:**}).
From the proposition and the definition of the $\tau$ function, we
obtain
\begin{align}
\frac{\partial}{\partial\gamma_{j}}\log\tau^{w}(x_{1},x_{2}) & =\frac{\partial}{\partial\gamma_{j}}\int_{x_{1}}^{x_{2}}\left(\sum_{i=0}^{1}\tilde{w}_{i}\left(w_{i}\right)_{x}-H+\frac{d}{dx}(xH)\right)dx\nonumber \\
& =\frac{\partial S(x_{1},x_{2})}{\partial\gamma_{j}}+\left(x_{2}H(x_{2})-x_{1}H(x_{1})\right)_{\gamma_{j}}\nonumber \\
& =\left.\left(\sum_{i=0}^{1}\tilde{w}_{i}\left(w_{i}\right)_{\gamma_{j}}\right)\right|_{x_{1}}^{x_{2}}+\left(x_{2}H(x_{2})-x_{1}H(x_{1})\right)_{\gamma_{j}}.\label{eq:gamma_tau}
\end{align}
Near $x=0$, the expansion (\ref{eq:o(1)}) takes the more precise form
\begin{align*}
w_{i}(x) & =\frac{\gamma_{i}}{2}\log x+\frac{\rho_{i}}{2}+O(x^{\varepsilon_{i}}),\quad x\rightarrow0
\end{align*}
for some $\varepsilon_{i}>0$ (which depends on $\gamma_{0}$ and
$\gamma_{1}$); this can be shown as in Theorem 14.1 of \cite{FIKN}
for the case $n=1$. This formula is differentiable in $x$ and the
$\gamma_{i}$'s. Therefore
\begin{align*}
\tilde{w}_{i} & =\frac{\gamma_{i}}{2}+O(x^{\varepsilon_{i}}),\\
\left(w_{i}\right)_{\gamma_{j}} & =\frac{\delta_{i,j}}{2}\log x+\frac{1}{2}\left(\rho_{i}\right)_{\gamma_{j}}+O(x^{\varepsilon_{i}}\log x),\\
\left(\tilde{w}_{i}\right)_{\gamma_{j}} & =\frac{\delta_{i,j}}{2}+O(x^{\varepsilon_{i}}\log x)
\end{align*}
as $x\rightarrow0$.
At $x=\infty$, from \cite{II}, if $s_{1}^{\mathbb{R}}\neq0$,
\begin{equation}
w_{i}(x)=-s_{1}^{\mathbb{R}}2^{-\frac{7}{4}}(\pi x)^{-\frac{1}{2}}e^{-2\sqrt{2}x}+O(x^{-1}e^{-2\sqrt{2}x})\quad\text{as }x\rightarrow\infty,\label{eq:infinity}
\end{equation}
where $s_{1}^{\mathbb{R}}=-2\cos\frac{\pi}{4}(\gamma_{0}+1)-2\cos\frac{\pi}{4}(\gamma_{1}+3)$.
If $s_{1}^{\mathbb{R}}=0$, we have
\begin{align*}
w_{0}(x) & =s_{2}^{\mathbb{R}}2^{-\frac{5}{2}}(\pi x)^{-\frac{1}{2}}e^{-4x}+O(x^{-1}e^{-4x})=O(x^{-1}e^{-2\sqrt{2}x}),\\
w_{1}(x) & =-s_{2}^{\mathbb{R}}2^{-\frac{5}{2}}(\pi x)^{-\frac{1}{2}}e^{-4x}+O(x^{-1}e^{-4x})=O(x^{-1}e^{-2\sqrt{2}x}),
\end{align*}
so (\ref{eq:infinity}) holds for any generic $(\gamma_{0},\gamma_{1})$.
Equation (\ref{eq:infinity}) is also differentiable in $x$ and
the $\gamma_{i}$'s, so
\begin{align*}
\tilde{w}_{i}(x) & =s_{1}^{\mathbb{R}}2^{-\frac{1}{4}}\sqrt{\pi}x^{\frac{1}{2}}e^{-2\sqrt{2}x}+O(e^{-2\sqrt{2}x}),\\
\left(w_{i}\right)_{\gamma_{j}} & =-\left(s_{1}^{\mathbb{R}}\right)_{\gamma_{j}}2^{-\frac{7}{4}}(\pi x)^{-\frac{1}{2}}e^{-2\sqrt{2}x}+O(x^{-1}e^{-2\sqrt{2}x}),\\
\left(\tilde{w}_{i}\right)_{\gamma_{j}} & =\left(s_{1}^{\mathbb{R}}\right)_{\gamma_{j}}2^{-\frac{1}{4}}\sqrt{\pi}x^{\frac{1}{2}}e^{-2\sqrt{2}x}+O(e^{-2\sqrt{2}x})
\end{align*}
as $x\rightarrow\infty$.
By substituting the above asymptotic expansions into (\ref{eq:gamma_tau})
we obtain
\[
\frac{\partial}{\partial\gamma_{i}}\log\tau^{w}(x_{1},x_{2})=-\frac{\gamma_{i}}{4}\log x_{1}-\sum_{k=0}^{1}\frac{\gamma_{k}}{4}\left(\rho_{k}\right)_{\gamma_{i}}-\frac{\gamma_{i}}{4}+O(x_{1}^{\varepsilon_{i}}\log x_{1})+O(x_{2}^{\frac{3}{2}}e^{-2\sqrt{2}x_{2}})
\]
as $x_{1}\rightarrow0,\,x_{2}\rightarrow\infty$.
In our situation we have:
\[
\tau^{w}(1,x)=C_{0}x^{\frac{1}{8}(\gamma_{0}^{2}+\gamma_{1}^{2})}(1+O(x^{\varepsilon})),\quad x\rightarrow0,
\]
\[
\tau^{w}(1,x)=C_{\infty}e^{-x^{2}}(1+O(x^{1/2}e^{-2\sqrt{2}x})),\quad x\rightarrow\infty.
\]
Then
\[
\log\tau^{w}(x_{1},x_{2})=\log\frac{C_{\infty}}{C_{0}}-x_{2}^{2}-\frac{1}{8}(\gamma_{0}^{2}+\gamma_{1}^{2})\log x_{1}+O(x_{1}^{\varepsilon})+O(x_{2}^{1/2}e^{-2\sqrt{2}x_{2}}).
\]
Let
\[
C:=\log\frac{C_{\infty}}{C_{0}}=\lim_{\substack{x_{1}\rightarrow0\\
x_{2}\rightarrow\infty
}
}\left(\log\tau^{w}(x_{1},x_{2})+x_{2}^{2}+\frac{\gamma_{0}^{2}+\gamma_{1}^{2}}{8}\log x_{1}\right).
\]
Then we obtain
\[
\frac{\partial C}{\partial\gamma_{i}}=\lim_{\substack{x_{1}\rightarrow0\\
x_{2}\rightarrow\infty
}
}\left(\frac{\partial}{\partial\gamma_{i}}\left(\log\tau^{w}(x_{1},x_{2})+x_{2}^{2}+\frac{\gamma_{0}^{2}+\gamma_{1}^{2}}{8}\log x_{1}\right)\right)=-\frac{\gamma_{i}}{4}-\sum_{k=0}^{1}\frac{\gamma_{k}}{4}\left(\rho_{k}\right)_{\gamma_{i}}
\]
that is,
\begin{equation}
C=-\sum_{i=0}^{1}\frac{\gamma_{i}^{2}}{8}-\frac{1}{4}\sum_{k=0}^{1}\gamma_{k}\rho_{k}+\frac{1}{4}\int\sum_{k=0}^{1}\rho_{k}d\gamma_{k}.\label{eq:C}
\end{equation}
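Indeed, (\ref{eq:C}) is consistent with the formula for $\partial C/\partial\gamma_{i}$ above: differentiating the right-hand side of (\ref{eq:C}) with respect to $\gamma_{i}$ (using $\partial_{\gamma_{i}}\int\sum_{k}\rho_{k}\,d\gamma_{k}=\rho_{i}$), the two $\rho_{i}/4$ contributions cancel and
\[
\frac{\partial C}{\partial\gamma_{i}}=-\frac{\gamma_{i}}{4}-\frac{\rho_{i}}{4}-\frac{1}{4}\sum_{k=0}^{1}\gamma_{k}\left(\rho_{k}\right)_{\gamma_{i}}+\frac{\rho_{i}}{4}=-\frac{\gamma_{i}}{4}-\sum_{k=0}^{1}\frac{\gamma_{k}}{4}\left(\rho_{k}\right)_{\gamma_{i}},
\]
as required.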
Note that $\frac{\partial K}{\partial m_{i}}=-2\frac{\partial K}{\partial\gamma_{i}}$,
where $K$ is the function defined in the proof of Theorem \ref{thm:gen_fcn},
and
\[
\int\sum_{k=0}^{1}\rho_{k}d\gamma_{k}=-(\log2)\sum_{k=0}^{1}\gamma_{k}^{2}-2K+\text{{\rm const.}}
\]
The constant above is independent of the $\gamma_{i}$'s. By substituting
$\gamma_{0}=\gamma_{1}=0$, which corresponds to the trivial solution
$w_{0}\equiv w_{1}\equiv0$, into (\ref{eq:C}), we obtain $C=-4\left(\psi^{(-2)}(1/4)+\psi^{(-2)}(2/4)+\psi^{(-2)}(3/4)\right)+\text{{\rm const.}}$
On the other hand, the tau function $\tau^{w}(x_{1},x_{2})$ corresponding
to the trivial solution is $\exp(x_{1}^{2}-x_{2}^{2})$, so $C=0$
in this case. In conclusion we have the following result:
\begin{thm}
\label{thm:const}
\[
C=-\frac{1}{8}\left(\gamma_{0}^{2}+\gamma_{1}^{2}\right)-\frac{1}{2}\left(\gamma_{0}\rho_{0}+\gamma_{1}\rho_{1}\right)-\frac{1}{2}F+4\left(\psi^{(-2)}(1/4)+\psi^{(-2)}(2/4)+\psi^{(-2)}(3/4)\right).
\]
\end{thm}
The function $F$ in the theorem, which is the generating function,
is given in Definition \ref{def:generating function}.
\section{Introduction}
Dynamic linear elastic problems arise on many length scales. On the large scales one finds problems related to earthquakes and other geophysical phenomena.
On the small scales ($\mu$m to mm scale), we can think of the application of various biomedical treatments with ultrasound or shockwaves, where the biomaterial is often regarded as simply acoustic, even though this assumption might not always be justified~\citep{Rapet2019}. Advances in the development of ultrasonics and microfluidics
have also renewed the interest in this area~\citep{Dual2012}, with applications such as cell trapping and ultrasonic inspection. With the availability of MHz acoustic transducers in recent years, applications in these areas are likely to increase. At intermediate length scales, say 1 m, one can think of studies on the reduction of sound and vibration caused by moving parts of machinery.
Although numerical methods can be employed to solve dynamic linear elastic problems, they do not necessarily give the physical insight that can only be obtained from analytical solutions~\citep[Chap 3]{hills2021}.
Steady-state analytical solutions are rare but do exist~\citep{Lim2006}. Analytical solutions for dynamic linear elastic problems are even rarer (or at least not very well known). Sneddon and Berry wrote on page 126: ``There are very few exact solutions even of these steady state equations and such as they are limited to spheres and cylinders''. Moreover, those analytical solutions are not only rare, but often do not show the `full' range of physics of real systems; that is, severe limitations are imposed on the solution, such as the solution for a radially oscillating sphere, which only gives longitudinal waves without transverse waves~\citep{Grasso2012}. Solutions that show both longitudinal and transverse waves do exist though, for example the elastic scattering wave solution by~\cite{Hinders1991}, which is very similar to the Mie theory~\citep{Mie1908} in electromagnetics. A drawback of this solution is that an infinite sum of Bessel functions is needed to calculate it. Although elegant, it is not easy to guess how many terms are needed in this sum (the accuracy can even decrease again when too many terms are included). Classical authors such as Lamb and Love~\citep{Love1892} already described various analytical solutions, for example for the internal resonances of elastic spheres or for the elastic material in between two concentric spheres. \cite{Papargyri2009} presented some analytical solutions for gradient elastic solids.
\cite{KlaseboerJElast2019} presented an analytical solution for a rigid sphere vibrating in an infinite elastic medium. The current article can be considered a significant extension of that theory, adding an elastic shell to the rigid core. This `shell' solution turns out to be much richer in physical phenomena than the case without a shell and is the focus of the current work. The solution is free of infinite sums and is relatively easy to calculate and visualize (a clear advantage that the 19th century classical authors did not have), which makes it ideal for validating numerical tools.
In Fig.~\ref{Fig:Explanation}, an illustration of the problem is given: the rigid core sphere with radius $a$ oscillates with an amplitude $U^0$ and angular frequency $\omega$ inside a concentric spherical shell with radius $b$ and density $\rho_{\mathrm{sh}}$. In doing so, emitted (`e'), i.e. outgoing, waves are generated in the shell. Reflected waves (`r') can also occur. Finally, in the external infinite domain with density $\rho_{\mathrm{out}}$, transmitted (`t') waves can be generated. The material constants in the shell and the infinite domain are not necessarily the same.
As we will see in the latter part of this work, the solution for a rigid vibrating core with a shell around it shows some surprising physics (resonance peaks for T and/or L waves as the vibration frequency is changed), which does not occur for a core without such a shell, where a very smooth spectrum is obtained.
\begin{figure*}[t]
\begin{center}
\includegraphics[width=0.75\textwidth]{Fig1.jpg}
\caption{Schematic illustration of the problem under consideration; a rigid core sphere with radius $a$ oscillates periodically with an amplitude $U^0$ with frequency $\omega$ and is surrounded by a concentric spherical shell with radius $b$ and material constants $k_L^{\mathrm{sh}}$ and $k_T^{\mathrm{sh}}$. Both spheres are embedded in an infinite material with material constants $k_L^{\mathrm{out}}$ and $k_T^{\mathrm{out}}$, which may or may not be identical to the shell constants. Elastic waves are being emitted (labelled `e'), reflected (labelled `r') and transmitted (labelled `t') towards infinity. Both longitudinal (L) waves and transverse (T) waves can exist in this system. } \label{Fig:Explanation}
\end{center}
\end{figure*}
Also, the analytical solutions with different parameters can be used as non-trivial test cases for numerical methods; for instance, an example of validating a boundary element method is given in \ref{App:BEM}. Furthermore, the vibrating spherical core-shell system could possibly be used as a simple and elegant template for practical applications, such as the design of spherical piezoelectric actuators to emit or harvest energy, whose efficiency depends strongly on the material properties and frequency response~\citep{Covaci2020}, possibly with multiple spherical core-shell structures placed in arrays. Another way to generate either longitudinal or transverse waves is to vibrate a metallic core sphere by magnetic means, which can turn the spherical core-shell system into a heat generator when it converts magnetic energy to heat via relaxation processes and hysteresis losses~\citep{Schmidt2007}. Along this line, our analytical solution can be used to optimize the design of hyperthermia agents using magnetic beads for cancer treatments~\citep{Philippova2011}. There are likely other applications in which a spherical core-shell system is a good approximation of a real physical system.
The structure of this work is organized as follows. In Sec.~\ref{sec:anasol}, we derive the analytical solution for a vibrating rigid core with a shell in an infinite elastic medium; the detailed steps are given in~\ref{App:AppA} and~\ref{App:AppB}. In Sec.~\ref{sec:results}, we study the elastic wave phenomena at different oscillation frequencies, followed by a discussion in Sec.~\ref{sec:discussion} of a limiting case of the solution and of pulsed time-domain solutions obtained with the fast Fourier transform. The conclusion is given in Sec.~\ref{sec:conclusion}.
\section{Dynamic elastic waves} \label{sec:anasol}
\subsection{General theory}
Within the approximation of small deformations and small stresses, the Navier equation for dynamic linear elasticity in the frequency domain can be written as~\citep{KlaseboerJElast2019,Love1892,Pelissier2007}
\begin{align} \label{eq:Navier}
c^2_L\nabla (\nabla \cdot \boldsymbol{u}) - c^2_T \nabla \times \nabla \times \boldsymbol{u} + \omega^2 \boldsymbol{u} = \boldsymbol{0}
\end{align}
where $\boldsymbol{u}$ is the (complex valued) displacement vector, $\omega$ is the angular frequency, and the constants $c_L$ and $c_T$ are the longitudinal dilatation and transverse shear wave velocities, respectively, that are defined in terms of the Lam\'e constants $\lambda$, $\mu$ and the density $\rho$~\citep{LandauLifshitz,Bedford1994}:
\begin{equation} \label{eq:cTcL}
\begin{aligned}
c^2_L = (\lambda + 2 \mu)/\rho\quad ; \quad
c^2_T = \mu/\rho.
\end{aligned}
\end{equation}
Eq.~(\ref{eq:Navier}) essentially expresses the equilibrium of the elastic forces (the first two terms) and the inertial forces (the third term)\footnote{Here we have ignored volume forces and thermoelastic effects~\citep{Ruimi2012}}. It is well known that the displacement $\boldsymbol{u}$ can be decomposed into a transverse part $\boldsymbol{u}_{T}$ and a longitudinal part $\boldsymbol{u}_L$ as:
\begin{align} \label{eq:HelmDec}
\boldsymbol{u} = \boldsymbol{u}_{L}+\boldsymbol{u}_{T},
\end{align}
with $\boldsymbol{u}_T$ being divergence free and $\boldsymbol{u}_L$ being curl free, thus:
\begin{align}\label{eq:divCurlZero}
\boldsymbol{\nabla} \boldsymbol{\cdot} \boldsymbol{u}_{T} = 0 \quad ; \quad
\boldsymbol{\nabla} \times \boldsymbol{u}_{L} = \boldsymbol{0}.
\end{align}
In this work, we will refer to the longitudinal waves as ``L'' and to the transverse waves as ``T''. These two sorts of waves are also often referred to as pressure waves and shear waves, respectively. Introducing Eq.~(\ref{eq:HelmDec}) into Eq.~(\ref{eq:Navier}) and considering the relations in Eq.~(\ref{eq:divCurlZero}), we obtain
\begin{align} \label{eq:HelmuTuL}
\nabla^2 \boldsymbol{u}_T + k^2_T \boldsymbol{u}_T = \boldsymbol{0} \quad ;\quad
\nabla^2 \boldsymbol{u}_L + k^2_L \boldsymbol{u}_L = \boldsymbol{0},
\end{align}
where $k_T = \omega/ c_T$ and $k_L = \omega/ c_L$ are the transverse and longitudinal wavenumbers, respectively. Thus both the transverse displacement $\boldsymbol{u}_T$ and the longitudinal displacement $\boldsymbol{u}_L$ satisfy the Helmholtz equation, yet with different wavenumbers. As is obvious from Eq.~(\ref{eq:cTcL}), $c_L^2 - 2c_T^2 = \lambda/\rho \ge 0$, so the longitudinal wave speed always exceeds the transverse wave speed; in particular $c_L^2 \ge 2 c_T^2$, or in terms of wavenumbers $k_T \ge \sqrt{2}k_L$.
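As a quick numerical illustration of Eq.~(\ref{eq:cTcL}) and the wavenumber inequality (a minimal sketch; the steel-like values of $\lambda$, $\mu$, $\rho$ and the 1 MHz driving frequency are illustrative assumptions, not taken from this work):

```python
import math

# Illustrative, steel-like material constants (assumed values)
lam = 100e9   # Lame constant lambda [Pa]
mu = 80e9     # Lame constant mu (shear modulus) [Pa]
rho = 7800.0  # density [kg/m^3]

# Wave speeds: c_L^2 = (lambda + 2 mu)/rho, c_T^2 = mu/rho
c_L = math.sqrt((lam + 2 * mu) / rho)
c_T = math.sqrt(mu / rho)

# Wavenumbers k = omega/c at an assumed 1 MHz driving frequency
omega = 2 * math.pi * 1e6
k_L, k_T = omega / c_L, omega / c_T

# c_L^2 - 2 c_T^2 = lambda/rho >= 0, hence k_T >= sqrt(2) k_L
assert c_L**2 >= 2 * c_T**2
assert k_T >= math.sqrt(2) * k_L
```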
\subsection{Theory for vibrating spheres}\label{sec:coreshellsol}
As shown in Fig.~\ref{Fig:Explanation}, the geometry under consideration consists of a rigid core with radius $r=a$, surrounded by another concentric sphere with radius $r=b$. The material in between the two spheres (the shell) is elastic and indicated with `sh'. The core-shell sphere combination is situated inside a different external outer elastic material referred to as `out'. Since the core only vibrates along the $z$-axis, due to symmetry we look for a solution that has a zero azimuthal $\varphi$ component for both the displacement and the stress (a similar framework was used by the authors to calculate the acoustic boundary layer around a vibrating sphere, see \cite{KLaseboerPhFl2020} \footnote{\cite{KLaseboerPhFl2020} studied acoustic boundary layers around a sphere in fluid dynamics. The same governing equations appear as in elasticity, except that $k_L$ and $k_T$ are now complex numbers and that `$\boldsymbol u$' is now the velocity instead of the displacement. The focus there was on the phenomenon of `streaming', a second-order effect which causes a slow mean flow on top of the flow caused by the oscillation of the sphere. This nonlinear effect does not appear here. }). Such a solution can be written as:
\begin{equation} \label{eq:phi_h}
\begin{aligned}
\boldsymbol u = \boldsymbol{u}_L + \boldsymbol{u}_T&= \nabla \left((\boldsymbol x \cdot \boldsymbol u^0) \frac{\phi(r)}{r}\right) + \nabla \times \left( (\boldsymbol x \times \boldsymbol u^0) \frac{h(r)}{r}\right)\\
&= \nabla [\phi(r) \cos{\theta}]\, U^0 - \nabla \times [h(r) \sin{\theta}\, \boldsymbol{e}_{\varphi}]\, U^0
\end{aligned}
\end{equation}
where we have used spherical coordinates $(r,\theta,\varphi)$, $\boldsymbol{e}_{\varphi}$ is the unit vector in the $\varphi$ direction and $\boldsymbol{x}$ is the position vector $\boldsymbol{x}=(x,y,z)$. Two radial functions, $h(r)$ and $\phi(r)$, are to be determined\footnote{The solution with $\phi$ and $h$ can only present solutions in the plane made by the vectors $\boldsymbol x$ and $\boldsymbol u^0$. There are other analytical solutions for a spherical configuration that cannot be described with the $h -\phi$ framework. For example for a sphere periodically rotating back and forth with frequency $\boldsymbol{\Omega}$ around the $z$-axis, the following analytical solution can be found
$\boldsymbol u = \boldsymbol u_T = -\frac{a^3}{e^{\mathrm{i} k_Ta}} \frac{1}{(\mathrm{i} k_Ta-1)} \nabla \times \left[\frac{e^{\mathrm{i} k_Tr}}{r} \boldsymbol \Omega \right]$.
The solution now only consists of a transverse part, while there is no longitudinal component. } in which the term $\phi\equiv \phi(r)$ is a potential function and the term $h\equiv h(r)$ is inspired by the $h$-function in electrophoresis problems~\citep{Jayaraman2019, Ohsima1983}. It can easily be seen that the term with $\phi$ corresponds to the curl free vector $\boldsymbol{u}_L$ and the term with $h$ to the divergence free vector $\boldsymbol{u}_T$ (remember that $\nabla \times \nabla = \boldsymbol{0}$ and $\nabla \cdot \nabla \times = 0$). A constant vector $\boldsymbol{u}^0$ is introduced with length $|\boldsymbol{u}^0|=U^0$. It represents the amplitude of the displacement of the core sphere in the frequency domain. For the time being we will take $\boldsymbol{u}^0 =(u_x,u_y,u_z) = (0,0,U^0)$, which means that in the time domain this vector oscillates as $(0,0,U^0\cos(\omega t))$.
Analytical solutions exist, for example a sphere harmonically changing its volume has an analytical solution\footnote{For a radially volume changing sphere the solution for the displacement is:
$\boldsymbol u = \boldsymbol u_L = \frac{e^{\mathrm{i} k_Lr}}{r^3}(\mathrm{i} k_L r -1) \boldsymbol x$
}, yet this solution only shows L-waves and does not have any T-waves. It is therefore desirable to have some analytical solutions that at least show both L and T waves simultaneously. Eq.~(\ref{eq:phi_h}) will turn out to be sufficient to describe the displacement field caused by the vibration of a rigid core sphere, surrounded by an elastic shell, situated in an infinite other elastic material. The function $\phi$ is related to the Helmholtz equation as $\nabla^2 (\phi \boldsymbol x/r) + k_L^2 (\phi \boldsymbol x/r)= \boldsymbol 0$, while $h$ satisfies $\nabla^2 (h \boldsymbol x/r) + k_T^2 (h \boldsymbol x/r) =\boldsymbol 0$. This essentially implies that both $\phi(r) \cos (\theta)$ and $h(r) \cos (\theta)$ satisfy the Helmholtz equation.
The task at hand is now to determine the two functions $\phi$ and $h$. Since the material properties of the shell and the outer material are different, we will search for $\phi^{\mathrm{sh}}$ and $h^{\mathrm{sh}}$ for the shell solution and $\phi^{\mathrm{out}}$ and $h^{\mathrm{out}}$ for the external domain. We will describe two different paths: one using tensor notation, and another approach for readers more familiar with spherical coordinate systems and Bessel functions. Both approaches will of course lead to the same answer. The problem we wish to solve is to obtain the displacement field caused by the motion $\boldsymbol{u}^0$. In order to do so, both displacements and stresses must be continuous across material boundaries, that is, there are no gaps or stress jumps at the interfaces. Thus the displacement at $r=a$ must obey $\boldsymbol{u}=\boldsymbol{u}^0$, and at $r=b$ both $\boldsymbol{u}$ and the traction $\boldsymbol{f}$ must be continuous.
Eq.~(\ref{eq:phi_h}) can be written in an alternative more convenient way (note that we have deliberately kept the terms $h/r$ and $\phi/r$) by separating the terms with $\boldsymbol{u}^0$ and $\boldsymbol{x}$. For the shell we get:
\begin{equation}
\begin{aligned} \label{eq:u_in}
\boldsymbol{u}^{\mathrm{sh}} = \left[ -r \frac{\mathrm{d}}{\mathrm{d} r} \left(\frac{h^{\mathrm{sh}}}{r}\right) - 2 \frac{h^{\mathrm{sh}}}{r} + \frac{\phi^{\mathrm{sh}}}{r}\right]\boldsymbol{u}^0 + \frac{\mathrm{d}}{\mathrm{d} r}\left(\frac{h^{\mathrm{sh}}}{r} +\frac{\phi^{\mathrm{sh}}}{r} \right) \frac{\boldsymbol{x} \cdot \boldsymbol{u}^0}{r} \boldsymbol{x}
\end{aligned}
\end{equation}
where the shell solution consists of an `expanding' (subscript `e') and a `reflected' (subscript `r') wave with
\begin{equation}
\begin{aligned}
h^{\mathrm{sh}}(r)=h_e(r) + h_r(r)=-a C_e^T \exp(\mathrm{i} k_T^{\mathrm{sh}} r) G(k_T^{\mathrm{sh}} r) -a C_r^T \exp(-\mathrm{i} k_T^{\mathrm{sh}} r) G^*(k_T^{\mathrm{sh}} r), \\ \phi^{\mathrm{sh}}(r)=\phi_e(r) + \phi_r(r)=-a C_e^L \exp(\mathrm{i} k_L^{\mathrm{sh}} r) G(k_L^{\mathrm{sh}} r) -a C_r^L \exp(-\mathrm{i} k_L^{\mathrm{sh}} r) G^*(k_L^{\mathrm{sh}} r)
\end{aligned}
\end{equation}
with $G^*$ the complex conjugate of $G$ (also note the `-' sign in the $C_r$ exponentials)\footnote{Note that in terms of spherical Bessel functions: $y_1(kr) = [G(kr) \exp(\mathrm{i} kr) + G^*(kr) \exp(-\mathrm{i} kr)]/2=-\cos (kr)/(kr)^2 - \sin (kr)/(kr)$ and $j_1(kr) = [-G(kr)\exp(\mathrm{i} kr) +G^*(kr) \exp(-\mathrm{i} kr)]/(2i) = \sin(kr)/(kr)^2 - \cos(kr)/(kr)$.}, and $G(x)=\mathrm{i}/x-1/x^2$.
Here $C_e^T$, $C_r^T$, $C_e^L$ and $C_r^L$ are dimensionless complex-valued constants. The term with $C_e^T$ gives an expanding T-wave, the term with $C_r^T$ a reflected (spherically incoming) T-wave. Similarly, the terms with $C_e^L$ and $C_r^L$ give expanding and reflected L-waves in the shell. For the (infinite) external domain there is only a `transmitted expanding' (subscript `t') wave with
\begin{equation}
\begin{aligned} \label{eq:u_out}
\boldsymbol{u}^{\mathrm{out}} = \left[ -r \frac{\mathrm{d}}{\mathrm{d} r} \left(\frac{h^{\mathrm{out}}}{r}\right) - 2 \frac{h^{\mathrm{out}}}{r} + \frac{\phi^{\mathrm{out}}}{r}\right] \boldsymbol{u}^0 + \frac{\mathrm{d}}{\mathrm{d} r}\left(\frac{h^{\mathrm{out}}}{r} +\frac{\phi^{\mathrm{out}}}{r} \right) \frac{\boldsymbol{x} \cdot \boldsymbol{u}^0}{r} \boldsymbol{x},
\end{aligned}
\end{equation}
\begin{equation}
\begin{aligned}
h^{\mathrm{out}}(r)=h_t(r)=-a C_t^T \exp(\mathrm{i} k_T^{\mathrm{out}}r) G(k_T^{\mathrm{out}} r), \\ \phi^{\mathrm{out}}(r)=\phi_t(r)=-a C_t^L \exp(\mathrm{i} k_L^{\mathrm{out}}r) G(k_L^{\mathrm{out}} r)
\end{aligned}
\end{equation}
with $k_T^{\mathrm{out}}$ and $k_L^{\mathrm{out}}$ the parameters for the external domain and $C_t^T$ and $C_t^L$ dimensionless constants.
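The relations between $G$ and the spherical Bessel functions quoted in the footnote above can be verified numerically (a minimal sketch; the sample points are arbitrary):

```python
import cmath, math

def G(x):
    # G(x) = i/x - 1/x^2, as in the main text
    return 1j / x - 1 / x**2

for x in [0.5, 1.0, 2.5, 7.0]:
    e = cmath.exp(1j * x)
    # y_1 = [G e^{ix} + G* e^{-ix}]/2 and j_1 = [-G e^{ix} + G* e^{-ix}]/(2i)
    y1 = (G(x) * e + G(x).conjugate() * e.conjugate()) / 2
    j1 = (-G(x) * e + G(x).conjugate() * e.conjugate()) / 2j
    # compare with the closed forms of the spherical Bessel functions
    assert abs(y1 - (-math.cos(x) / x**2 - math.sin(x) / x)) < 1e-12
    assert abs(j1 - (math.sin(x) / x**2 - math.cos(x) / x)) < 1e-12
```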
There are six unknown parameters: $C_e^T$, $C_r^T$, $C_e^L$, $C_r^L$, $C_t^T$ and $C_t^L$ which can be determined by matching the $u_i$ components of the displacement at $r=a$ (two equations), the displacement at $r=b$ (two equations), and finally the continuity of the shear stress at $r=b$ (two equations). The details of how to get these six parameters are described in \ref{App:AppA}.
An alternative way of obtaining the solution, using spherical coordinates and spherical Bessel and Hankel functions of the first kind, is given in \ref{App:AppB}. Both approaches are equivalent and give the same result. The approach of \ref{App:AppB} corresponds more closely to the classical mathematical treatment, yet the constants of \ref{App:AppA} correspond directly to the `emitted' and `reflected' waves in the elastic layer, which is not explicitly the case in \ref{App:AppB}.
\begin{figure*}[t]
\begin{center}
\includegraphics[width=0.32\textwidth]{kT2_5kL1_kTout8_kLout3a.jpg}
\includegraphics[width=0.32\textwidth]{kT2_5kL1_kTout8_kLout3b.jpg}
\includegraphics[width=0.32\textwidth]{kT2_5kL1_kTout8_kLout3c.jpg}\\
\includegraphics[width=0.32\textwidth]{kT2_5kL1_kTout8_kLout3d.jpg}
\includegraphics[width=0.32\textwidth]{kT2_5kL1_kTout8_kLout3e.jpg}
\includegraphics[width=0.32\textwidth]{kT2_5kL1_kTout8_kLout3f.jpg}
\caption{Inner rigid core sphere vibrating periodically with amplitude $U^0$ inside a concentric spherical shell ``sh'', the whole embedded in an infinite external outer domain ``out'' with $b/a=2.0$. Parameters: $k_T^{\mathrm{sh}} a=2.5$, $k_L^{\mathrm{sh}} a=1.0$, $k_T^{\mathrm{out}}a=8.0$, $k_L^{\mathrm{out}}a=3.0$, $\rho_{\mathrm{out}}/\rho_{\mathrm{sh}}=1.0$. The sphere oscillates from the back to the front of the figure. On the horizontal plane the total displacement vectors are plotted. A complicated pattern is formed due to the interaction of L and T waves. On the left of the plane the function $h(r) \cos(\theta)$ is plotted, while on the right-hand side $\phi(r) \cos(\theta)$ is plotted in color. Time snapshots are shown at 0/6, 1/6, 2/6, 3/6, 4/6 and 5/6 of the oscillation cycle. A movie file showing 30 time frames is available for this case. }\label{Fig:sphereShell1}
\end{center}
\end{figure*}
\section{Results} \label{sec:results}
Some snapshots of the solution with parameters $\rho_{\mathrm{out}}/\rho_{\mathrm{sh}} =1.0$, $b/a=2.0$, $\boldsymbol u^0 = (0,0,U^0)$, $k_T^{\mathrm{sh}} a =2.5$, $k_L^{\mathrm{sh}} a=1.0$, $k_T^{\mathrm{out}}a=8.0$ and $k_L^{\mathrm{out}} a=3.0$ are shown in Fig.~\ref{Fig:sphereShell1}. Since any solution $\boldsymbol{u} \exp(\mathrm{i} \alpha)$, with $\alpha$ a phase factor, is also a solution of the problem, we can easily reconstruct the solution in the time domain by choosing appropriate values of $\alpha$ for each time step. The inner core sphere vibrates front to back. The shell/outer-medium boundary is indicated in transparent blue. A $40 \times 40$ grid is chosen on the horizontal plane and the displacements are indicated on this grid with arrows. An intricate pattern of displacements can be observed. Since $k_T^{\mathrm{out}} > k_T^{\mathrm{sh}}$ and $k_L^{\mathrm{out}} > k_L^{\mathrm{sh}}$, the waves in the outer domain are more densely packed than in the shell. In the outer domain waves can be seen to travel towards infinity, while in the shell complex interference patterns appear due to the interaction of the emitted and reflected waves.
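The time-domain reconstruction via phase factors can be sketched as follows (a minimal sketch; the complex displacement value $u$ at a single grid point is an illustrative number, and the $e^{-\mathrm{i}\omega t}$ sign convention is an assumption):

```python
import cmath, math

# Illustrative frequency-domain displacement at one grid point (assumed value)
u = 0.3 - 0.4j

# Six snapshots over one oscillation cycle: take the real part of u
# multiplied by the phase factor exp(i*alpha) with alpha = -2*pi*n/6
snapshots = [(u * cmath.exp(-1j * 2 * math.pi * n / 6)).real for n in range(6)]
```

Half a cycle later the field is exactly reversed, as expected for a time-harmonic solution.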
Next we investigate what happens if we keep the physical system the same but change the oscillation frequency $\omega$. Since $k=\omega/c$, this is essentially the same as multiplying all $k$ values (both L and T, for the shell and the outer domain) by the same factor. We choose an example with the following parameters: $\rho_{\mathrm{out}}/\rho_{\mathrm{sh}} =3.0$, $b/a=2.0$, $\boldsymbol u^0 = (0,0,1)$. Take initially $k_T^{\mathrm{sh}} a =4.5$, $k_L^{\mathrm{sh}} a=2.0$, $k_T^{\mathrm{out}}a=2.0$ and $k_L^{\mathrm{out}} a=1.0$. Then gradually increase (or reduce) the frequency, i.e. multiply (or divide) each $k$ by $1.005$ and recalculate all $C$'s, until $k_T^{\mathrm{sh}} a=33$. The results for the transmitted coefficients (in the outer domain) $|C_t^T|$ and $|C_t^L|$ are shown in Fig.~\ref{Fig:solC_t}. A complicated spectrum of peaks and valleys appears. For some values of $k_T^{\mathrm{sh}} a$, $|C_t^T|$ is near zero, while for other values $|C_t^L|$ becomes near zero. The emitted and reflected coefficients $|C_e^T|$ and $|C_r^T|$ for the transverse waves in the shell are shown in Fig.~\ref{Fig:sol_erT}. The first peaks appear near $k_T^{\mathrm{sh}} a=3.6$. Note that both $|C_e^T|$ and $|C_r^T|$ tend towards infinity as $k_T^{\mathrm{sh}} a\to0$. Finally, the emitted and reflected coefficients $|C_e^L|$ and $|C_r^L|$ for the longitudinal waves in the shell are shown in Fig.~\ref{Fig:sol_erL}. Again, both $|C_e^L|$ and $|C_r^L|$ tend towards infinity as $k_T^{\mathrm{sh}} a\to0$.
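The frequency sweep described above, multiplying every wavenumber by $1.005$ per step, can be sketched as follows (a minimal sketch that only tracks $k_T^{\mathrm{sh}}a$ and counts the steps; at each step the six $C$ coefficients would have to be recomputed from the $6\times6$ system of \ref{App:AppA}):

```python
kT_sh_a = 4.5   # initial value of k_T^sh * a
factor = 1.005  # per-step frequency multiplier (all k's scale together)
steps = 0
while kT_sh_a < 33.0:
    kT_sh_a *= factor
    # ...here the six C coefficients would be recomputed for the new k's...
    steps += 1
print(steps)  # 400 steps from k_T^sh a = 4.5 to 33
```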
\begin{figure*}[t]
\begin{center}
\includegraphics[width=0.95\textwidth]{sweep1.jpg}
\caption{Frequency response curve for a sphere with a shell. Transmitted T- and L-coefficients $|C^T_t|$ and $|C^L_t|$ (i.e. into the external domain) as a function of the parameter $k_T^{\mathrm{sh}} a$. Here, we keep the material parameters the same, but the oscillation frequency $\omega$ is changed. The parameters are $\rho_{\mathrm{out}}/\rho_{\mathrm{sh}} =3.0$, $b/a=2.0$, $\boldsymbol u^0/U^0 = (0,0,1)$, with initially $k_T^{\mathrm{sh}} a =4.5$, $k_L^{\mathrm{sh}} a=2.0$, $k_T^{\mathrm{out}}a=2.0$ and $k_L^{\mathrm{out}} a=1.0$; the frequency is then changed, i.e. each $k$ is multiplied by the same factor and all constants are recalculated. Note the rather chaotic character of this graph, with many maxima and minima for both the T- and L-coefficients, although not at the same frequencies. The `spectrum' shows remarkable peaks and valleys, especially compared to the case without a shell presented in Fig.~\ref{Fig:sol_noShell}. } \label{Fig:solC_t}
\end{center}
\end{figure*}
Let us investigate what exactly happens at these peaks and valleys by considering three cases. Based on Fig.~\ref{Fig:solC_t}, or the zoom-in shown in Fig.~\ref{Fig:Cases}(a), around $k_T^{\mathrm{sh}} a =5.98$ both $|C^T_t|$ and $|C^L_t|$ are near a peak value. We will call this Case 1. Thus for Case 1, $k_L^{\mathrm{sh}} a = 5.98\times2.0/4.5$, $k_T^{\mathrm{out}} a =5.98\times2.0/4.5$ and $k_L^{\mathrm{out}} a =5.98/4.5$ (all wavenumbers scaled by the same factor). The cases are indicated with large blue arrows for clarity in Fig.~\ref{Fig:Cases}(a). In Fig.~\ref{Fig:Cases}(b), the displacement pattern is shown with red arrows. We can see that both T- and L-waves appear in the outer domain. Around $k_T^{\mathrm{sh}} a = 8.18$ the constant $|C^T_t|$ becomes very small while $|C^L_t|$ becomes large. Taking $k_L^{\mathrm{sh}} a = 8.18\times2.0/4.5$, $k_T^{\mathrm{out}}a=8.18\times2.0/4.5$ and $k_L^{\mathrm{out}}a=8.18/4.5$, we call this Case 2. In Fig.~\ref{Fig:Cases}(c), the displacement pattern is shown with red arrows; they are all pointing radially inwards or outwards, indicating that mainly L-waves occur in the outer domain. Finally, for Case 3, we take the value $k_T^{\mathrm{sh}} a = 12.70$, where a minimum in $|C^L_t|$ occurs; again all wavenumbers are scaled in the same proportion as in Cases 1 and 2. Now we clearly see a T-wave in the outer domain (all displacement vectors in Fig.~\ref{Fig:Cases}(d) are rotated by 90 degrees compared to Case 2). In all three cases, $\phi(r)\cos(\theta)$ is shown on the right-hand side of the horizontal plane and $h(r) \cos(\theta)$ on the left-hand side in color.
\begin{figure*}[t]
\begin{center}
\includegraphics[width=0.95\textwidth]{sweep2.jpg}
\caption{Frequency response for a sphere with shell. As Fig.~\ref{Fig:solC_t}, but now for the emitted and reflected T-coefficients $|C^T_e|$ and $|C^T_r|$ (in the shell) as a function of $k_T^{\mathrm{sh}} a$. The peaks and valleys mostly overlap, but not always. Note that both coefficients diverge at $k_T^{\mathrm{sh}} a=0$.}\label{Fig:sol_erT}
\end{center}
\end{figure*}
The constants $C_e^T$ and $C_r^T$, representing the emitted and reflected transverse waves in the shell, are shown in Fig.~\ref{Fig:sol_erT} and the $C_e^L$ and $C_r^L$ constants in Fig.~\ref{Fig:sol_erL} which show the emitted and reflected longitudinal waves in the shell.
\begin{figure*}[t]
\begin{center}
\includegraphics[width=0.95\textwidth]{sweep3.jpg}
\caption{Frequency response for a sphere with shell. As Fig.~\ref{Fig:solC_t}, but now for the emitted and reflected L-coefficients $|C^L_e|$ and $|C^L_r|$ (in the shell) as a function of $k_T^{\mathrm{sh}} a$. Note that both coefficients diverge at $k_T^{\mathrm{sh}} a=0$. } \label{Fig:sol_erL}
\end{center}
\end{figure*}
\begin{figure*}[!ht]
\begin{center}
\subfloat[]{\includegraphics[width=0.4\textwidth]{sweep1b.jpg}} \quad\quad\quad
\subfloat[]{\includegraphics[width=0.47\textwidth]{case1_kT6_0.jpg}}\\
\subfloat[]{\includegraphics[width=0.47\textwidth]{case2_kT8_18.jpg}}\quad
\subfloat[]{\includegraphics[width=0.47\textwidth]{case3_kT12_67.jpg}}
\caption{(a) Zoom-in of Fig.~\ref{Fig:solC_t} with the three selected cases indicated by arrows. Vector plots in the horizontal plane: (b) Case 1: both T-waves and L-waves are generated in the external domain. (c) Case 2: mainly L-waves occur in the external domain. (d) Case 3: mainly T-waves appear in the external domain. The function $h(r)\cos(\theta)$ is plotted on the left-hand side of the horizontal plane and $\phi(r) \cos(\theta)$ is shown on the right-hand side. It is thus possible, by a clever combination of materials and frequency, to generate mainly L-waves, mainly T-waves, or a combination of both.
} \label{Fig:Cases}
\end{center}
\end{figure*}
\section{Discussion} \label{sec:discussion}
Note that $\boldsymbol{u}^0$ need not equal the real-valued $\boldsymbol{u}^0/U^0=(0,0,1)$; it can assume a complex value as well (as long as it is a constant vector). For example, $\boldsymbol{u}^0/U^0=(\mathrm{i},0,1)/\sqrt 2$ gives a circularly vibrating sphere (not shown here).
The shell case is characterized by the following six non-dimensional parameters: $b/a$, $k_T^{\mathrm{sh}} a$, $k_T^{\mathrm{sh}}/k_L^{\mathrm{sh}}$, $k_T^{\mathrm{sh}}/k_T^{\mathrm{out}}$, $k_T^{\mathrm{sh}}/k_L^{\mathrm{out}}$ and $\rho_{\mathrm{out}}/\rho_{\mathrm{sh}}$ (or any combination of these parameters).
For a typical application with $a=1$ mm, $c_L = 6000$ m/s and $\rho=1000$ kg/m$^3$, a value of $k_L^{\mathrm{sh}} a= 1.0$ corresponds to a frequency $f = \omega/(2\pi) \approx 1$ MHz, while $k_L^{\mathrm{sh}} a =100$ requires roughly 100 MHz, a frequency that is now becoming available in acoustic transducers~\citep{Fei2016}. For an object with typical size $a=1$ m, the frequency is about 1 kHz for $k_L^{\mathrm{sh}} a=1$ and 100 kHz for $k_L^{\mathrm{sh}} a=100$.
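The conversion from the dimensionless wavenumber to a driving frequency can be sketched as follows (the material values are the illustrative ones quoted above):

```python
# Sketch: convert the dimensionless wavenumber k_L*a into a driving
# frequency f = omega/(2*pi), using k_L = omega/c_L. The material values
# are the illustrative ones quoted in the text.
import math

def frequency_hz(kL_a, a_m, cL_ms=6000.0):
    """Driving frequency in Hz for a given k_L*a, radius a (m), speed c_L (m/s)."""
    omega = kL_a * cL_ms / a_m   # rad/s, since k_L = omega / c_L
    return omega / (2.0 * math.pi)

print(f"{frequency_hz(1.0, 1e-3) / 1e6:.2f} MHz")    # ~0.95 MHz for a 1 mm sphere
print(f"{frequency_hz(100.0, 1e-3) / 1e6:.1f} MHz")  # ~95.5 MHz
print(f"{frequency_hz(1.0, 1.0) / 1e3:.2f} kHz")     # ~0.95 kHz for a 1 m object
```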
The current framework can easily be extended to multiple shells. For a core with a single shell we had to solve a $6 \times 6$ system; every additional shell adds four more equations, giving, for example, a $10 \times 10$ system for two shells. It is also possible to calculate the stresses caused by the movement of the sphere, although we have not shown them here. Although outside the scope of the current analytical solution, a real system could easily be built, for example by embedding a steel sphere in an elastic material and exciting it by magnetic means, possibly to convert electrical to mechanical energy or to generate heat remotely via magnetic stimuli. On the other hand, the study of this relatively simple system, with its complex behavior, opens the way to further study and a better understanding of real systems with their associated resonances, noise generation, fatigue and failure behavior, frequency responses, etc.
As we have shown, the current analytical solution exhibits non-trivial behavior. It contains rich physical detail, for example the presence of both longitudinal and transverse waves and the interference between outgoing and reflected waves, and is therefore ideally suited to test numerical solutions, for example those generated by finite element or boundary element codes. In \ref{App:BEM}, we have used the analytical solution to test a boundary element code based on the framework developed by~\cite{Rizzo1985}. Excellent agreement is achieved when the numerical solution is compared to the theory.
\subsection{Vibrating sphere without a shell}
A solution for a vibrating sphere without a shell was previously given by~\cite{KlaseboerJElast2019}\footnote{In order to recover the same solutions, the constants used there should be replaced using $C^T_e = - 2 c_1$ and $C^L_e = k_L^2 a^2 [2 c_1/(k_T^2 a^2) + c_2]$, where $c_1$ and $c_2$ are the constants used by \cite{KlaseboerJElast2019}.}, which is a special case of the current work obtained when the shell material is set to be the same as the outer material. For such a case, which is equivalent to no shell at all, the number of parameters mentioned in the previous section reduces from six to two, namely $k_T^{\mathrm{out}} a$ and $k_T^{\mathrm{out}}/k_L^{\mathrm{out}}$.
\begin{figure*}[t]
\begin{center}
\includegraphics[width=0.95\textwidth]{sweepNoShell1.jpg}
\caption{Frequency response curves for a sphere with no shell. $|C^T_t|$ (upper curves) and $|C^L_t|$ (lower curves) for various $k_T^{\mathrm{out}}/k_L^{\mathrm{out}}$ ratios, from $k_T^{\mathrm{out}}/k_L^{\mathrm{out}}=\sqrt{2}$ (the smallest this ratio can be) to $k_T^{\mathrm{out}}/k_L^{\mathrm{out}}=2, 3, 5$ and $k_T^{\mathrm{out}}/k_L^{\mathrm{out}}=100$ in the inset (going towards the incompressible limit). Note the smoothness of the curves when the parameter $k_T^{\mathrm{out}} a$ is changed (which is essentially the same as changing the driving frequency), in stark contrast to the curves shown in Fig.~\ref{Fig:solC_t}.} \label{Fig:sol_noShell}
\end{center}
\end{figure*}
When we keep the material parameters constant and change the vibration frequency (thus changing $k_T^{\mathrm{out}} a$ while keeping $k_T^{\mathrm{out}}/k_L^{\mathrm{out}}$ constant), we can calculate $|C_t^T|$ and $|C_t^L|$. The results are plotted in Fig.~\ref{Fig:sol_noShell}. When compared to Fig.~\ref{Fig:solC_t}, the smoothness of the curves in Fig.~\ref{Fig:sol_noShell} is immediately apparent. The larger the $k_T^{\mathrm{out}}/k_L^{\mathrm{out}}$ ratio, the smaller the longitudinal parameter $|C_t^L|$ becomes for low $k_T^{\mathrm{out}} a$ values ($k_T^{\mathrm{out}} a \ll 1$). Both $|C_t^T|$ and $|C_t^L|$ converge towards a value of 1.0 for larger $k_T^{\mathrm{out}} a$ values.
For a vibrating sphere with no shell, it appears impossible to obtain a zero L or zero T contribution according to Fig.~\ref{Fig:sol_noShell}. The only possibility to generate a near-zero L-wave is to reduce the frequency to near-zero values. For larger frequencies (thus larger $k_T^{\mathrm{out}} a$ values), all curves tend towards $|C_t^L|=|C_t^T|=1$. The $|C_t^L|$ curves all appear to be monotonically increasing. For $k_T^{\mathrm{out}}/k_L^{\mathrm{out}}=\sqrt{2}$ and $k_T^{\mathrm{out}}/k_L^{\mathrm{out}}=2$, the $|C_t^T|$ curves monotonically decrease. However, from $k_T^{\mathrm{out}}/k_L^{\mathrm{out}}=3$ onward, the $|C_t^T|$ curves exhibit a maximum. For $k_T^{\mathrm{out}}/k_L^{\mathrm{out}}=3$, the maximum $|C_t^T|$ occurs at $k_T^{\mathrm{out}}a = 1.492$ with a value of $1.426$ ($|C_t^T|$ is $1.421$ at $k_T^{\mathrm{out}}a$ near zero). The maximum appears later for $k_T^{\mathrm{out}}/k_L^{\mathrm{out}}=5$, at $k_T^{\mathrm{out}}a = 3.592$ with a value of $1.498$. The maximum shifts to larger and larger values of $k_T^{\mathrm{out}}a$ as the ratio $k_T^{\mathrm{out}}/k_L^{\mathrm{out}}$ increases further, but does not seem to rise significantly above a value of $1.6$. For example, the inset shows the curves for $k_T^{\mathrm{out}}/k_L^{\mathrm{out}}=100$, where the maximum $|C_t^T|$ occurs around $k_T^{\mathrm{out}}a = 95.3$ with a value of $1.607$. For the even larger ratio $k_T^{\mathrm{out}}/k_L^{\mathrm{out}}=1000$ (not shown), the maximum $|C_t^T|=1.612$ occurs at approximately $k_T^{\mathrm{out}}a = 970$\footnote{Fig.~\ref{Fig:sol_noShell} was generated by setting $b/a=2$, $k_L^{\mathrm{sh}}a=k_L^{\mathrm{out}} a$, $k_T^{\mathrm{sh}} a = k_T^{\mathrm{out}} a$ and $\rho_{\mathrm{sh}}=\rho_{\mathrm{out}}$ (thus setting the material of the shell identical to that of the outer domain). We then get $C_r^L=0$, $C_r^T=0$, $C_e^L=C_t^L$ and $C_e^T=C_t^T$, as it should be.}.
\begin{figure*}[t]
\begin{center}
\includegraphics[width=0.75\textwidth]{FigPulse1.jpg}
\caption{The $-/+$ pulse used for the FFT, with width $W=5a$ and 128 points. The non-zero values occupy elements $i=57$ to $72$, with the minus/plus pulse centered at $i=65$: $\text{DATA}[2(i+N-N_w/2)-1]=\sin(2x) \exp(-\alpha x/2)$, where $x = (i-1-N_w/2)\pi/N_w$, $N=64$, $N_w=N/4$ and $\alpha=0.1$. Since the pulse is antisymmetric, the lowest of the 65 frequencies in the FFT corresponds to $k_T^{\mathrm{sh}}a=1.256$ and the highest to $k_T^{\mathrm{sh}}a=80.425$ (there is no need to calculate $k_T^{\mathrm{sh}}a=0$).
}
\label{Fig:FFT1Pulse}
\end{center}
\end{figure*}
\subsection{Pulsed time domain solutions using the Fast Fourier Transform}
Now that the response at each frequency can be calculated, we can use the Fast Fourier Transform (FFT)~\citep{Bedford1994} to obtain the displacements at each location and each time instant when the core exhibits a pulsed vibration. The minus/plus pulse used is shown in Fig.~\ref{Fig:FFT1Pulse}. We have deliberately chosen an antisymmetric pulse in order to avoid issues with non-vanishing displacements associated with $k_T^{\mathrm{sh}} a = 0$. The standard FFT procedure from Numerical Recipes was used~\citep{Numrep}.
\begin{figure*}[t]
\begin{center}
\includegraphics[width=0.32\textwidth]{PulseNoShell1.jpg}
\includegraphics[width=0.32\textwidth]{PulseNoShell2.jpg}
\includegraphics[width=0.32\textwidth]{PulseNoShell3.jpg}\\
\includegraphics[width=0.32\textwidth]{PulseNoShell4.jpg}
\includegraphics[width=0.32\textwidth]{PulseNoShell5.jpg}
\includegraphics[width=0.32\textwidth]{PulseNoShell6.jpg}
\caption{Screenshots at different times, obtained with an FFT of a single $-/+$ pulse (as shown in Fig.~\ref{Fig:FFT1Pulse}) for the case of a sphere with no shell. The sphere first moves to the right and then to the left before it stops moving. The resulting displacement patterns (shown as vectors) exhibit the separation of the L-waves and the T-waves from the 4th and 5th images onward. The L-waves travel twice as fast for this particular case ($k_T^{\mathrm{out}}/k_L^{\mathrm{out}}=2.0$). On the horizontal plane the scalar function $\phi(r) \cos \theta$ is also given, while on the vertical plane $h(r) \cos \theta$ is plotted in color. A movie file is available for this case. } \label{Fig:FFT1}
\end{center}
\end{figure*}
Fig.~\ref{Fig:FFT1} shows a typical example of the separation of the T and L waves (the L waves travel at twice the speed of the T waves) for the case with no shell, driven by the pulse given in Fig.~\ref{Fig:FFT1Pulse}. Initially the T and L waves interfere with each other until, from Frame 4 onward, the L wave (outermost wave) clearly separates from the T wave (inner wave). The material parameter for this case is $k_T^{\mathrm{out}}/k_L^{\mathrm{out}}=2.0$.
\begin{figure*}[!ht]
\begin{center}
\includegraphics[width=0.32\textwidth]{PulseShell1.jpg}
\includegraphics[width=0.32\textwidth]{PulseShell2.jpg}
\includegraphics[width=0.32\textwidth]{PulseShell3.jpg}\\
\includegraphics[width=0.32\textwidth]{PulseShell4.jpg}
\includegraphics[width=0.32\textwidth]{PulseShell5.jpg}
\includegraphics[width=0.32\textwidth]{PulseShell6.jpg}\\
\includegraphics[width=0.32\textwidth]{PulseShell7.jpg}
\includegraphics[width=0.32\textwidth]{PulseShell8.jpg}
\includegraphics[width=0.32\textwidth]{PulseShell9.jpg}\\
\includegraphics[width=0.32\textwidth]{PulseShell10.jpg}
\includegraphics[width=0.32\textwidth]{PulseShell11.jpg}
\includegraphics[width=0.32\textwidth]{PulseShell12.jpg}
\caption{Screenshots at different times, obtained with an FFT of a single $-/+$ pulse (as shown in Fig.~\ref{Fig:FFT1Pulse}) for the case with a shell with $b/a=3$ (the second sphere is indicated in transparent yellow). Multiple reflections and double reflections of the T- and L-waves can be observed, which obviously do not occur for the case with no shell of Fig.~\ref{Fig:FFT1}. A movie file is available for this case.} \label{Fig:FFTShell}
\end{center}
\end{figure*}
Another case, with a shell, is shown next. The parameters chosen are $b/a=3$, $k_T^{\mathrm{sh}}a = 4.0$, $k_T^{\mathrm{sh}}/k_L^{\mathrm{sh}}=2.0$, $k_T^{\mathrm{sh}}/k_T^{\mathrm{out}}=4/7$, $k_T^{\mathrm{sh}}/k_L^{\mathrm{out}}=4/3$ and $\rho_{\mathrm{out}}/\rho_{\mathrm{sh}}=1.0$. This parameter set gives results with relatively few reflections. The results are shown in Fig.~\ref{Fig:FFTShell}. The shell/outer boundary is indicated with a transparent yellow sphere. Expanding waves caused by the pulse (the same as that used in Fig.~\ref{Fig:FFT1}) can be observed. Reflected waves from the shell/outer boundary can also be clearly distinguished, and even doubly reflected waves (i.e., reflected waves that reflect once more on the inner rigid core sphere). In order to more easily differentiate the expanding and reflecting waves, they are indicated in each frame.
\section{Conclusions} \label{sec:conclusion}
We have presented the analytical solution for the dynamic elastic problem of a vibrating rigid sphere surrounded by an elastic shell, the whole being immersed in another infinite medium. The solution shows some surprising physics, with various peaks in both the longitudinal (L) and transverse (T) responses as the frequency of the vibration is changed. These peaks do not appear for the simpler case of a sphere without an elastic shell layer, where the frequency response is a smooth curve. In practice, this means that almost pure L or T waves can be generated by carefully choosing the material parameters and the frequency of the vibration of the core sphere. This complex behavior, which is not present for spheres without a shell, appears very similar to the unique properties that can be observed in mechanical metamaterials; see \cite{Kelkar2020} or \cite{Wang2014}.
Since all the responses for multiple frequencies can easily be obtained in the frequency domain, we can use the FFT framework to predict the response to a pulsed vibration in the time domain. Examples are shown for a narrow pulse, which demonstrate the separation of the L and T waves moving out radially as clearly distinct pulses after some time has passed.
In this article we have only scratched the surface of the multitude of possible solutions; there are six dimensionless parameters that can be varied in the core-shell vibration problem. The solution can be considered a first approximation of an oscillating body in an elastic material.
The analytical solutions can also be used as benchmark cases to test numerical solutions obtained with, for example, the finite element method (where boundary conditions at infinity are not easy to implement) or a boundary element method (which involves hyper-singular integrals that must be treated with extreme caution), even in the time domain when the fast Fourier transform framework is used (as in the example of Fig.~\ref{Fig:FFTShell}).
The implementation of the solution is relatively straightforward, without any infinite sums or other mathematical difficulties. The codes (written in Fortran) used to generate the plots in this article are available from the authors on request.
\section*{Acknowledgments}
Q.S. was supported by the Australian Research Council (ARC) through Grants DE150100169, FT160100357 and CE140100003.
\section{Introduction}
\label{sec:intro}
\noindent
The heliosphere is created due to interaction between the solar wind and the magnetized, partly ionized interstellar gas surrounding the Sun.
The neutral component, composed mostly of neutral hydrogen and further on referred to as interstellar neutral hydrogen (ISN H), penetrates freely inside the heliosphere.
Since the Sun is moving relative to the local interstellar matter, ISN H creates ``an interstellar wind'' that flows past the Sun.
Within the heliosphere, ISN H is collision-less and the atoms can be regarded as individual particles following trajectories governed by solar forces until they are ionized and become pickup ions in the solar wind.
The Sun emits ultraviolet radiation, with the Lyman-$\alpha$~{} line dominating the EUV part of the spectrum.
The Lyman-$\alpha$~{} radiation interacts with the atoms through the resonance scattering mechanism: a photon is absorbed from the solar beam and subsequently re-emitted in a random direction. In the single-scattering approximation, those photons are effectively removed from the solar beam and form the heliospheric backscatter glow; for brevity, we will still refer to this process as absorption.
The absorption process involves a small instantaneous change in the atom momentum in the anti-solar direction.
Simultaneously, a photon is taken out from the solar photon flux and directionally redistributed within the heliosphere.
Thus, absorption of the solar EUV radiation is, on one hand, closely related to the radiation pressure that the Sun exerts on H atoms and, on the other hand, creates an absorption feature in the solar Lyman-$\alpha$~{} line.
Absorption clearly reduces the solar Lyman-$\alpha$~{} flux in the same spectral range that is responsible for solar radiation pressure.
Without absorption, the radiation pressure force drops with the square of distance, i.e., the ratio of radiation pressure to gravity is constant.
With absorption taken into account, this ratio will be distance-dependent.
Absorption of solar radiation by ISN H inside the heliosphere has been pointed out in early heliospheric papers \citep[e.g.,][]{axford:72, meier:77a, wu_judge:79b} and more recently by \citet{IKL:18b}, but it was largely neglected in the modeling of the distribution of ISN H inside the heliosphere, i.e., the atom dynamics has been treated assuming that the ratio $\mu$ of the radiation pressure and solar gravity forces is independent of the location within the heliosphere.
Here, we assess the magnitude of absorption as a function of distance from the Sun and heliographic longitude and latitude.
We start by defining a theory of absorption of the solar Lyman-$\alpha$~{} radiation by ISN H (Section \ref{sec:theory}).
We then define an extension of the calculation scheme for the density, speed, and temperature of ISN H in the presence of absorption (Section \ref{sec:WTPMExt}).
With this, we investigate the spatial distribution of absorption in the heliosphere.
To that end, we define an iterative scheme of absorption features and density distribution (Section \ref{sec:calcScheme}), the calculation grid (Section \ref{sec:calcGrid}), and the initial conditions for the ISN H used in the simulations (Section \ref{sec:initial}).
We investigate the depth, width, and spectral offset of the heliospheric absorption feature in the solar Lyman-$\alpha$~{} profile and parametrize it with a Gauss-like function (Section \ref{sec:params}).
The strength of the absorption effect is measured by the attenuation factor $f_{abs}$ defined in Section \ref{sec:attenfact}.
We also investigate how a model density distribution inside the heliosphere changes when absorption (and thus radiation pressure) is taken into account (Section \ref{sec:Hdensity}).
In Section \ref{sec:solar_cycle} we show the modulation of the absorption due to the solar activity cycle, and in Section \ref{sec:TSdens} we discuss the influence of the assumed density beyond the termination shock.
Finally, in Section \ref{sec:IBEXSignal} we show how absorption modifies the simulated flux of ISN H observed by the Interstellar Boundary Explorer (IBEX).
\section{Absorption - theory}
\label{sec:theory}
The theory of absorption of the solar Lyman-$\alpha$~{} radiation by ISN H is adopted from a single scattering model presented by \citet{quemerais:06a}.
Following this approach, we calculate the spectral irradiance $\text{I}(\myvec{r},\nu)$ at a given radius-vector $\myvec{r}$ and a frequency $\nu$ assuming that the initial spectral irradiance $\text{I}(\myvec{r_\text{E}},\nu)$ is that measured at $r_\text{E} = 1$ au, i.e., that ISN H is so rarefied inside 1 au that absorption in this region can be neglected.
This assumption is well fulfilled, as is evident from observations of the solar Lyman-$\alpha$~{} profile performed by \citet{lemaire_etal:15a} using the SOHO spacecraft, orbiting the Sun near the L1 Lagrange point.
In these observations, no absorption features are visible.
In the approach originally followed by \citet{quemerais:06a}, the Sun is treated as an isotropic source of Lyman-$\alpha$~{} radiation.
In the absorption theory we implement, we use a single-scattering approach, in which photons, once absorbed by an atom (the process responsible for the absorption effect), are treated as lost from the global photon population.
In reality, they are re-emitted and form the helioglow. The re-emitted photons can potentially enter further absorption--re-emission sequences, which is referred to as the multiple-scattering effect.
Whether or not multiple scattering is important for the heliosphere inside the termination shock is still under discussion \citep[e.g., ][]{scherer_fahr:96, quemerais:00}.
Recent comparisons by \citet{strumik_etal:21b} suggest that our WawHelioGlow model, which uses the single-scattering approach \citep{kubiak_etal:21a}, fits SOHO/SWAN measurements well both in the downwind and the upwind directions.
Multiple scattering is expected to modify most strongly the upwind-to-downwind helioglow intensity ratios for observations performed at 1 au \citep{quemerais:00}.
One of the possible explanations for the good agreement between our model and SWAN observations \citep[see][]{strumik_etal:21b} is that multiple scattering is not significant for helioglow observations at 1 au. Nevertheless, this requires further investigation, which we intend to perform in the future.
For the topic of this paper, multiple-scattering would only reduce the effect of absorption of direct solar radiation.
We calculate the spectral absorption following the formulae given by \citet{IKL:18b}, repeated below for readers' convenience.
The spectral irradiance decreases with the square of the solar distance for purely geometrical reasons; on top of that, there is an additional exponential reduction due to absorption, which depends on the partial column density of the gas $n(\myvec{r})$ as well as on the cross section for absorption $\sigma_{cs}(\myvec{r},\nu)$, the thermal spread, and the radial component of the bulk velocity of ISN H at a given location in the heliosphere:
\begin{equation}
\label{eq:I_abs}
\frac{\text{I}(\myvec{r},\nu)}{\text{I}(\myvec{r_\text{E}},\nu)}=\left(\frac{r_\text{E}}{r}\right)^2\exp\left[-\left(\int^r_{r_\text{E}} n_{pr}(\myvec{r'})\sigma_{cs}(\myvec{r'},\nu,T_{g,pr}) dr'+\int^r_{r_\text{E}} n_{sc}(\myvec{r'})\sigma_{cs}(\myvec{r'},\nu,T_{g,sc}) dr'\right)\right],
\end{equation}
where $\text{I}(\myvec{r},\nu)$ is the spectral irradiance in the direction described by vector $\myvec{r}$; $\text{I}(\myvec{r_\text{E}},\nu)$ is the spectral irradiance measured at Earth's orbit (further on we will use our analytical model); $n_{pr}(\myvec{r})$ and $n_{sc}(\myvec{r})$ are the column densities of the primary and the secondary populations of ISN H, respectively; $\sigma_{cs}(\myvec{r},\nu,T_{g,pr})$ and $\sigma_{cs}(\myvec{r},\nu,T_{g,sc})$ are the cross sections for the absorption process calculated for the primary and the secondary populations of ISN H.
Note that the cross section formula depends on the local temperature of the gas, which is different for each population.
The cross section is approximated as:
\begin{align}
\label{eq:cross_section1}
\sigma_{cs}(\nu)&=\sigma_0 \exp\left[-\left(\frac{\nu - \nu_0(1+u_r/c)}{\Delta \nu_D}\right)^2\right]\\
\label{eq:cross_section2}
\Delta \nu_D&=\frac{\nu_0}{c}\sqrt{\frac{2kT_g}{m_\text{H}}}\\
\label{eq:cross_section3}
\sigma_0&=\frac{\sigma_{tot}}{\sqrt{\pi}\Delta \nu_D},
\end{align}
where $\sigma_0$ is the cross section for $\nu=\nu_0$ (which is the case for atoms with null radial component of their velocity), $\Delta \nu_D$ is the Doppler width that is proportional to the thermal velocity of the gas at temperature $T_g$ at the distance $r'$, and $u_r$ is the radial component of the bulk velocity.
Other quantities used in our model along with their units in the cgs system are listed in Table \ref{tab:units}.
The general line profile is a Voigt function; when the natural width of the spectral line is much smaller than $\Delta \nu_D$ (which is the case for the Lyman-$\alpha$~{} line at heliospheric temperatures), it reduces to the Gaussian form of Equation \ref{eq:cross_section1}, and the frequency-integrated cross section can be obtained analytically:
\begin{align}
\label{eq:cross_section4}
\sigma_{tot}&=\int_0^\infty \sigma_{cs}(\nu)d\nu\\
&=\frac{\pi e^2}{m_e c}f_{osc}. \nonumber
\end{align}
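A short numerical sketch of these expressions evaluates the Doppler width and peak cross section for an assumed gas temperature of $10^4$ K, verifies that the frequency integral of the Gaussian recovers $\sigma_{tot}$, and estimates a line-center optical depth for an assumed uniform density; both the temperature and the density are illustrative stand-ins, not values from the model:

```python
# Sketch: Doppler-broadened Lyman-alpha absorption cross section, its
# frequency integral, and an order-of-magnitude line-center optical depth.
# T_g and the uniform density n are illustrative assumptions.
import math

# cgs constants from Table 1
e, m_e, c = 4.80320467e-10, 9.1093837015e-28, 2.99792458e10
nu0, m_H = 2.46606755e15, 1.6735328e-24
k_B, f_osc = 1.38064852e-16, 0.41641
AU = 1.495978707e13

sigma_tot = math.pi * e**2 / (m_e * c) * f_osc      # ~1.11e-2 cm^2 s^-1
T_g, u_r = 1.0e4, 0.0                               # assumed temperature, bulk radial velocity

dnu_D = nu0 / c * math.sqrt(2.0 * k_B * T_g / m_H)  # Doppler width [Hz]
sigma0 = sigma_tot / (math.sqrt(math.pi) * dnu_D)   # peak cross section [cm^2]

def sigma_cs(nu):
    """Gaussian cross section for a population with bulk radial velocity u_r."""
    return sigma0 * math.exp(-((nu - nu0 * (1.0 + u_r / c)) / dnu_D) ** 2)

# numerical check: integrating sigma_cs over +-8 Doppler widths recovers sigma_tot
M = 4000
nus = [nu0 + (j / M - 0.5) * 16.0 * dnu_D for j in range(M + 1)]
integral = sum(sigma_cs(nu) for nu in nus) * (nus[1] - nus[0])

# line-center optical depth between 1 and 5 au for an assumed n = 0.1 cm^-3
tau0 = 0.1 * sigma0 * 4.0 * AU
print(f"Doppler width      = {dnu_D:.3e} Hz")         # ~1.06e11 Hz
print(f"peak cross section = {sigma0:.3e} cm^2")      # ~5.9e-14 cm^2
print(f"integral/sigma_tot = {integral / sigma_tot:.4f}")
print(f"tau(nu0), 1-5 au   = {tau0:.2f}")             # ~0.35
```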
\begin{deluxetable*}{llll}
\tablecaption{\label{tab:units} Relevant physical constants in the cgs units. In our numerical simulations, we use values based on the NIST standards.
}
\tablehead{\colhead{constant} & \colhead{symbol} & \colhead{value} & \colhead{unit}}
\startdata
elementary charge & $e$ & $4.8032046729976595417 \times 10^{-10}$ & $ \text{cm}^{3/2} \text{ g}^{1/2} \text{ s}^{-1}$\\
electron mass & $m_\text{e}$ & $9.1093837015 \times 10^{-28}$ & g \\
speed of light & $c$ & $2.99792458 \times 10^{10}$ & $ \text{cm s}^{-1}$\\
Lyman-$\alpha$~{} wavelength & $\lambda_0$ & $ 121.56701$ & nm\\
Lyman-$\alpha$~{} frequency & $\nu_0$ & $ 2.46606755 \times 10^{15}$ & Hz\\
astronomical unit & $r_\text{E}$ & $1.495978707 \times 10^{13}$ & cm\\
hydrogen mass & $m_\text{H}$ & $1.6735328 \times 10^{-24}$ & g\\
H oscillator strength & $f_{osc}$ & $0.41641$ & dimensionless\\
Boltzmann constant & $k$ & $1.38064852 \times 10^{-16}$ & erg K$^{-1}$\\
total cross section & $\sigma_{tot}$ & $1.11 \times 10^{-2}$ & cm$^2$ s$^{-1}$\\
gravity const. $\times$ solar mass & GM$_\sun$ & $1.327124421864553 \times 10^{26}$ & cm$^3$ s$^{-2}$\\
\enddata
\end{deluxetable*}
Later in this paper we will use a dimensionless factor $\mu$, defined as the ratio of the force caused by the momentum transfer due to photon scattering events to the gravitational force (see Equation \ref{eq:radpress}).
When $\mu = 1$, there is no net force acting on the atom.
\begin{align}
\label{eq:radpress}
\nonumber
\mu&=\frac{\text{P}_\text{rad}}{|\text{F}_\text{g}|}\\ \nonumber
&= \text{I}(\myvec{r},\nu_0) \frac{\pi \text{e}^2}{\text{m}_e \text{c}}\frac{\text{h}\lambda_0}{\text{c}}\text{f}_{\text{osc}} \frac{\text{r}_\text{E}^2}{\text{GM}_\sun\text{m}_\text{H}}\\
&=\frac{\text{I}(\myvec{r},\nu_0)}{\text{p}_\text{H}},\\
\label{eq:ph}
\text{p}_\text{H} & = \left[\frac{\pi\, \text{e}^2}{\text{m}_\text{e}\,c}\,\frac{\text{h}\lambda_0}{\text{c}} \text{f}_{\text{osc}} \frac{r_\text{E}^2}{\text{G}\,\text{M}_\sun \text{m}_\text{H}}\right]^{-1} = 3.34467\times 10^{12} & \text{ ph s}^{-1}\,\text{cm}^{-2}\,\text{nm}^{-1}.
\end{align}
More details can be found in the Appendix of \citet{kubiak_etal:21a}.
The advantage of using the factor $\mu$ is that it is independent of the distance when absorption is neglected.
We will also often refer to the radial velocity ($u_r$) of the ISN H atoms rather than their resonance frequency.
According to the Doppler law, one is easily translated into the other:
\begin{equation}
u_r=c\left(1-\frac{\nu_0}{\nu}\right)=-c \frac{\Delta \lambda}{\lambda_0},
\label{eq:ur}
\end{equation}
where $u_r$ is the radial component of the ISN H atom velocity, $\nu$ is the shifted frequency corresponding to the velocity $u_r$, and $\nu_0$ is listed in Table \ref{tab:units}.
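For example, a wavelength offset of $+0.01$ nm from the line center corresponds to $u_r \approx -24.7$ km/s:

```python
# Sketch of Eq. (ur): translate a wavelength offset from the Lyman-alpha
# line center into the radial velocity of the absorbing atoms.
c_kms = 2.99792458e5          # speed of light [km/s]
lam0_nm = 121.56701           # Lyman-alpha wavelength [nm]

def u_r_kms(dlam_nm):
    """Radial velocity (km/s) for a wavelength offset dlam: u_r = -c*dlam/lam0."""
    return -c_kms * dlam_nm / lam0_nm

print(f"{u_r_kms(+0.01):.2f} km/s")   # -24.66 km/s
```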
\section{Extension of the WTPM code to accommodate absorption}
\label{sec:WTPMExt}
The Warsaw Test Particle Model \citep[WTPM,][]{tarnopolski_bzowski:09} of ISN H distribution inside the termination shock is based on the paradigm of the so-called hot model \citep{fahr:78, fahr:79, thomas:78, wu_judge:79a}.
The hot model paradigm is based on a solution of the Boltzmann equation, which governs the spatial distribution of a collision-less gas streaming past the Sun and subjected to ionization losses, obtained by the method of characteristics.
The density is calculated by numerical integration of the local distribution function of the gas, which is computed as composed of a number of characteristics, modeled as trajectories of individual test particles, corresponding to ISN H atoms.
The goal is to relate the magnitude of the distribution function at the location inside the heliosphere to its value far away from the Sun, in the so-called source region, where the distribution function of the ISN gas is assumed to be known.
The relation is provided by the Liouville theorem, with the kinematic parameters modified by the force acting on the atoms and the magnitude of the local distribution function modified by losses due to ionization.
Assuming that the force is central, constant in time, and decreasing with the square of the distance from the Sun, that there is no absorption of the solar radiation within the ISN gas, and that the ionization rate also decreases with the distance squared, the equation of motion of a H atom is given by
\begin{equation}
\frac{d^2 \myvec{r}(t)}{d t^2} = -\frac{G M_\sun (1 - \mu)}{r(t)^2}\frac{\myvec{r}(t)}{r(t)},
\label{eq:eqMotionKepler}
\end{equation}
where $\myvec{r}(t)$ is the radius vector of a H atom at a time $t$, $r = |\myvec{r}|$, $G$ is the gravitational constant, $M_\sun$ the solar mass, and $\mu$ is a factor of compensation of the solar gravity force by the solar radiation pressure force.
The objective of this equation is to provide a relation between the velocity vector of the atom $\myvec{u}_0 = d \myvec{r}_0/dt$ at a location inside the heliosphere given by the radius-vector $\myvec{r}_0$ and the velocity vector $\myvec{u}_{\text{ISM}}$ in the source region.
When $\mu$ is a constant, one does not need to solve the equation of motion explicitly.
Instead, $\myvec{u}_0$ can be related to $\myvec{u}_{\text{ISM}}$ by the hyperbolic Kepler equation.
This is true regardless of whether $\mu < 1$ (an effective attractive force) or $\mu > 1$ (an effective repulsive force).
Out of the two possible solutions, one needs to select the one corresponding to an earlier time, i.e., the equation of motion must be solved backward in time.
In reality, however, both the ionization rate and the magnitude of the force acting on H atoms inside the heliosphere vary with time because of changes in the solar activity.
This causes the solar radiation pressure $\mu$ to become a function of time: $\mu \equiv \mu(t)$, but the radiation pressure still drops with the square of solar distance and thus can be expressed as a certain time-dependent factor compensating the solar gravity force.
To address the situation when the $\mu$ factor is time dependent but does not vary with heliolatitude, \citet{rucinski_bzowski:95b} suggested using a numerical scheme to solve the equation of motion all the way from $\myvec{r}_0$ to the source region to provide $\myvec{u}_{\text{ISM}}$ along with the attenuation factor due to ionization losses \citep[see Equations 2 and 3 in][]{bzowski_etal:13b}.
In this case, the equation of motion \ref{eq:eqMotionKepler} with $\mu \equiv \mu(t)$ can be reduced to a two-dimensional form.
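A minimal sketch of such a backward numerical integration of Equation \ref{eq:eqMotionKepler} is given below; for clarity it uses a constant $\mu$ (a time-dependent $\mu(t)$ would simply be evaluated inside the acceleration routine), and the initial conditions, $\mu$ value, and step size are illustrative:

```python
# Sketch: integrate the equation of motion of a single H atom backward in
# time with a fourth-order Runge-Kutta scheme. mu is held constant here;
# the initial state (1 au, mostly sunward) and the step are illustrative.
import numpy as np

GM = 1.327124421864553e26     # GM_sun [cm^3 s^-2] (Table 1)
AU = 1.495978707e13           # astronomical unit [cm]
mu = 0.9                      # assumed constant radiation-pressure factor

def accel(r):
    """Effective central acceleration -(1 - mu) GM r / |r|^3."""
    return -(1.0 - mu) * GM * r / np.linalg.norm(r) ** 3

def rk4_step(r, v, dt):
    k1r, k1v = v, accel(r)
    k2r, k2v = v + 0.5 * dt * k1v, accel(r + 0.5 * dt * k1r)
    k3r, k3v = v + 0.5 * dt * k2v, accel(r + 0.5 * dt * k2r)
    k4r, k4v = v + dt * k3v, accel(r + dt * k3r)
    return (r + dt / 6.0 * (k1r + 2 * k2r + 2 * k3r + k4r),
            v + dt / 6.0 * (k1v + 2 * k2v + 2 * k3v + k4v))

def energy(r, v):
    """Specific orbital energy in the effective potential (conserved)."""
    return 0.5 * v.dot(v) - (1.0 - mu) * GM / np.linalg.norm(r)

r = np.array([AU, 0.0, 0.0])          # atom observed at 1 au
v = np.array([-50.0e5, 5.0e5, 0.0])   # cm/s, mostly sunward
E0 = energy(r, v)
dt = -86400.0                         # negative step: backward in time
for _ in range(2000):                 # ~5.5 years into the past
    r, v = rk4_step(r, v, dt)
print(f"distance     = {np.linalg.norm(r) / AU:.1f} au")
print(f"energy drift = {abs(energy(r, v) / E0 - 1.0):.2e}")
```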
\citet{tarnopolski_bzowski:09} realized that the solar spectral flux varies significantly within the Lyman-$\alpha$~{} line and that a H atom in its solar orbit is sensitive to different portions of this profile due to the Doppler effect, depending on its radial velocity $u_r(t) = \myvec{u}(t)\cdot \myvec{r}(t)/r(t)$.
They also pointed out that the solar Lyman-$\alpha$~{} profile shape (the spectral flux) $I_\nu(\nu)$ varies depending on the total instantaneous flux within the Lyman-$\alpha$~{} line $I_{\text{tot}} = \int I_\nu(\nu)\, d\nu$.
Since the radiation pressure factor is proportional to the solar spectral flux, and the atom's radial velocity is related by the Doppler law to the wave frequency $\nu$, the radiation pressure factor $\mu$ in Equation \ref{eq:eqMotionKepler} becomes a complex function of radial velocity and the total solar flux in the Lyman-$\alpha$~{} line, which varies with time and, as recently demonstrated by \citet{strumik_etal:21b}, with heliolatitude $\phi$: $\mu \equiv \mu(u_r, I_{\text{tot}}(t, \phi))$.
Still, the magnitude of radiation pressure force drops with the square of solar distance.
Although the motion of individual atoms is planar because the force is central, the equation of motion is solved numerically, and it is more convenient to use its three-dimensional form.
The dependence of $I_\text{tot}$ on heliolatitude is approximated by an analytic function \citep{bzowski_etal:13a}, and its dependence on time is obtained from in-ecliptic measurements \citep{machol_etal:19a} and tabulated on a fixed time grid with nodes at the centers of Carrington rotation periods. The dependence of $\mu$ on $u_r$ for an $I_{\text{tot}}$ value characteristic of a time moment $t$ was given by \citet{IKL:20a}.
The WTPM model with these considerations implemented in the radiation pressure model was presented by \citet{kubiak_etal:21a}.
However, \citet{IKL:18b} assessed the magnitude of absorption of the solar Lyman-$\alpha$~{} radiation by the ISN H gas inside the heliosphere and pointed out that absorption of the solar radiation by ISN H results in a more rapid drop of radiation pressure with the solar distance than $1/r^2$. In this paper, we modify the WTPM to take this effect into account.
Absorption adds an additional narrow line to the solar line profile.
As discussed later in the paper (see Equation \ref{eq_mu}), this feature can be represented as a certain factor $1- F_{fit}(\myvec{r}, u_r)$ attenuating the radiation pressure factor $\mu$.
The magnitude of $F_{fit}$ varies with the distance and the location within the heliosphere.
In the WTPM, it has a form of an analytic function (see Equation \ref{eq:fit_fun}) with the parameters calculated on a fixed grid in the 3D space.
Between the grid points the absorption profile parameters are linearly interpolated during the calculations.
Thus, the radiation pressure factor $\mu$ is now a function of all three coordinates in the heliographic reference system and the equation of motion of a H atom becomes:
\begin{equation}
\frac{d^2 \myvec{r}(t)}{d t^2} = -\frac{G M_\sun (1 - \mu_{abs})}{r(t)^2}\frac{\myvec{r}(t)}{r(t)},
\label{eq:eqMotionAbsorption}
\end{equation}
with
\begin{equation}
\mu_{abs} \equiv \mu\left(u_r,I_\text{tot}(t) I_{\lambda \phi}(\phi)\right)\,\left(1 - F_{fit}(\myvec{r}, u_r)\right),
\label{eq:muEff}
\end{equation}
and $\mu_{abs}$ varies both with the distance from the Sun and the direction of the radius-vector $\myvec{r}$.
In Equation \ref{eq:muEff}, $\mu(u_r,I_{\text{tot}}(t))$ corresponds to the radiation pressure acting on a H atom at 1 au from the Sun in the solar equator plane.
When this atom has a radial velocity $u_r$ at a time $t$, the magnitude of the total solar Lyman-$\alpha$~{} flux at the solar equator is equal to $I_{\text{tot}}(t)$.
This function $\mu(u_r, I_{\text{tot}}(t))$ is defined by \citet{IKL:20a}.
The factor $I_{\lambda\phi}(\phi)$ describes the variation of $I_{\text{tot}}$ with heliolatitude $\phi$, as discussed by \citet{kubiak_etal:21a}.
Hence, away from the solar equator at a heliolatitude $\phi$ but still at 1 au, the radiation pressure is calculated from the formula given by \citet{IKL:20a} parametrized by the total solar Lyman-$\alpha$~{} flux modulated by the adopted heliolatitude dependence, equal to $I_{\text{tot}}(t) I_{\lambda \phi}(\phi)$.
In the present version, $I_{\lambda\phi}(\phi)$ is assumed to be invariable with time, but this can be easily changed to accommodate a time variation of this parameter, which might be needed according to \citet{strumik_etal:21b}.
The drop of radiation pressure with the solar distance is described by the absorption feature $1 - F_{fit}(\myvec{r}, u_r)$, given by Equation \ref{eq:fit_fun}.
Note that with these definitions, we can still use the radiation pressure compensation function $\mu_{abs}$, even though the radiation pressure force now decreases more rapidly with the distance than $1/r^2$. Note also that the total force acting on H atoms remains central in the solar-inertial frame.
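As an illustration of how the modified equation of motion can be integrated, the sketch below uses a velocity-Verlet scheme with a toy $\mu_{abs}$; the functional forms of the profile and of the attenuation are assumptions standing in for the tabulated model of \citet{IKL:20a} and the fitted absorption parameters, and only the structure of the force term follows Equations \ref{eq:eqMotionAbsorption} and \ref{eq:muEff}.

```python
import numpy as np

GM_SUN = 1.327124e11   # km^3 s^-2, solar gravitational parameter
AU_KM = 1.495979e8     # km per au

def mu_abs(r_vec, u_r):
    # Toy stand-in for Equation (muEff): mu0 mimics the radial-velocity
    # dependence of the solar Lyman-alpha profile, and (1 - f_fit) the
    # absorption attenuation; both functional forms are assumptions.
    mu0 = 1.0 + 0.25 * np.exp(-0.5 * (u_r / 30.0) ** 2)
    r_au = np.linalg.norm(r_vec) / AU_KM
    f_fit = 0.5 * (1.0 - np.exp(-r_au / 50.0))
    return mu0 * (1.0 - f_fit)

def acceleration(r_vec, v_vec):
    # Right-hand side of the modified equation of motion: central gravity
    # partially compensated by the absorption-reduced radiation pressure.
    r = np.linalg.norm(r_vec)
    u_r = np.dot(v_vec, r_vec) / r          # radial velocity, km/s
    return -GM_SUN * (1.0 - mu_abs(r_vec, u_r)) / r ** 2 * (r_vec / r)

def track_atom(r0, v0, dt, n_steps):
    # Velocity-Verlet (leapfrog) integration of an H-atom trajectory.
    r, v = np.asarray(r0, float), np.asarray(v0, float)
    a = acceleration(r, v)
    for _ in range(n_steps):
        v_half = v + 0.5 * dt * a
        r = r + dt * v_half
        a = acceleration(r, v_half)
        v = v_half + 0.5 * dt * a
    return r, v
```

The symplectic (Verlet) stepper is a deliberate choice for a nearly Keplerian central-force problem, since it does not secularly drift in energy the way naive Euler stepping does.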
\section{Simulations}
\label{sec:calculations}
\noindent
\subsection{Simulation scheme}
\label{sec:calcScheme}
The simulations were performed in the following steps:
\begin{enumerate}
\item Baseline calculations of the density of ISN H with no absorption, where the density and bulk velocity distributions of ISN H were calculated using the nWTPM code at the grid points for the selected epochs.
\item Calculation of absorption profiles in all nodes of the grid.
\item Approximation of the absorption profiles using analytic functions (see Section \ref{sec:params}).
\item Adoption of a modified model of radiation pressure for all grid points, with absorption effects taken into account.
\item Calculation of the density distribution of ISN H using the appropriately modified nWTPM model with the absorption-modified radial velocity-dependent radiation pressure.
\item Iteration of the absorption estimation and density calculation steps until the difference in the densities between subsequent iterations became smaller than 0.5\% (this criterion turned out to be satisfied after two iterations).
\end{enumerate}
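The scheme above can be sketched as a simple fixed-point loop; `compute_density` below is a hypothetical stand-in for the full nWTPM pipeline (steps 2 through 5), and the 0.5\% stopping criterion follows step 6.

```python
def iterate_density(compute_density, tol=0.005, max_iter=10):
    # Fixed-point iteration over the simulation scheme: recompute the ISN H
    # density with absorption until successive iterations agree to within
    # `tol` (0.5%). `compute_density` is a hypothetical stand-in for the
    # nWTPM pipeline; called with None it returns the no-absorption baseline.
    density = compute_density(None)                  # step 1: baseline
    for _ in range(max_iter):
        new_density = compute_density(density)       # steps 2-5
        rel_change = max(abs(n - o) / o for n, o in zip(new_density, density))
        density = new_density
        if rel_change < tol:                         # step 6: convergence
            break
    return density
```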
\subsection{Simulation grid}
\label{sec:calcGrid}
The calculations were performed on a 3D spatial grid in the heliographic coordinates, centered at the Sun. The nodes are separated by 10\degr{} in heliographic longitude and latitude.
The radial nodes are distributed from $r=0.1$ au to $r=140$ au.
The outer boundary is set to the location of the TS in the downwind direction.
Our model is intended to be used inside the TS, so depending on the preferred global model of the heliosphere, some of the grid points should be excluded from consideration.
In this paper, we present all results up to the distance of 140 au in each direction, so the reader can decide where the calculations should be stopped due to the TS position.
The grid points are shown in Figure \ref{fig:params}.
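For concreteness, the grid construction can be sketched as below; the angular node placement and the geometric radial spacing are assumptions for illustration, since the text specifies only the angular step and the radial range.

```python
import numpy as np

# Sketch of the computation grid: 10-degree steps in heliographic longitude
# and latitude, radial nodes from 0.1 to 140 au. The angular node placement
# and the geometric radial spacing here are assumptions; the actual radial
# node distribution is the one shown in the figures.
lons = np.arange(0.0, 360.0, 10.0)             # 36 longitude nodes
lats = np.arange(-80.0, 80.0 + 1e-9, 10.0)     # 17 latitude nodes (assumed)
radii = np.geomspace(0.1, 140.0, 30)           # assumed radial spacing
nodes = [(r, lon, lat) for r in radii for lon in lons for lat in lats]
```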
\subsection{Initial conditions}
\label{sec:initial}
Following \citet{IKL:18b}, we assumed that ISN H inside the heliosphere is a superposition of the primary and secondary populations with the densities adding up to 0.085~cm$^{-3}$ on the boundary of our calculations (set at 300 au), based on \citet{bzowski_etal:09a}.
The inflow direction of the primary population in the heliographic coordinates is given by lon=179.35\degr{} and lat=5.13\degr{} (which corresponds to the ecliptic coordinates lon=255.75\degr{} and lat=5.17\degr{}), while the inflow direction of the secondary population is consistent with the so-called Warm Breeze \citep{kubiak_etal:16a}: lon=251.57\degr{} and lat=11.95\degr{} in the J2000 ecliptic coordinate system.
Since \citet{swaczyna_etal:20a} recently suggested that the density at the TS might be, in fact, larger by $\sim 45$\%, we repeated some of the simulations for the densities of the two populations appropriately scaled up, with the temperatures and bulk velocities of the two populations unchanged (see Section \ref{sec:TSdens}).
The simulations of the gas distribution were performed using the nWTPM model of ISN H \citep{tarnopolski_bzowski:09} \citep[see also][]{kubiak_etal:21a} with the ionization rate being a function of time and heliolatitude following \citet{sokol_etal:20a}.
The radiation pressure at 1 au was taken from \citet{IKL:20a}.
Since the solar output varies during the solar cycle, we performed additional simulations for high and low solar activity (see Section \ref{sec:solar_cycle}).
The simulations were performed for three epochs during the solar activity cycle: 1996 (low activity), 2002 (high activity), and 1999 (medium activity). The baseline simulations are for medium activity, and the high and low activity cases are used to assess the time dependence of absorption effects.
\subsection{Inclusion of absorption}
When calculating the density using the nWTPM, we track the trajectories of H atoms in all sky directions for all speeds.
To obtain the density, the results of tracking of individual trajectories, which correspond to characteristics of the Boltzmann equation, are integrated over speed, which yields the partial density in a given location in space.
Subsequently, this partial density is integrated over the directions covering the entire sphere.
When an atom is tracked numerically along its trajectory, at each point of the trajectory we need the exact, time-dependent radiation pressure to evaluate the equation of motion.
The dependence of radiation pressure on time was first introduced by \citet{rucinski_bzowski:95b}.
These authors assumed that the radiation pressure force was a distance-independent fraction of the solar gravity force, and this assumption was maintained in the WTPM code until now. In the approach adopted in the present paper, the radiation pressure at each point and time moment is reduced by the absorption.
First, we calculate the density of hydrogen on the heliolatitudinal grid, with no absorption included. Then, for each point of our grid we estimate the four parameters of absorption: $A_a$, $\xi_a$, $\sigma_a$, and $n_a$ (see Section \ref{sec:params}).
We prepare four files with these parameters set on our 3D grid.
Now we can use them to re-calculate the density.
For each atom tracked, for each point of its trajectory, we calculate the effective radiation pressure with absorption included according to Equation \ref{eq:muEff}.
Finally, we use our new $\mu_{abs}$ to calculate numerically the trajectories of hydrogen atoms, to integrate over them to obtain the local density and the local velocity along with other parameters of the ISN H in the grid points.
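The interpolation of the absorption parameters between grid nodes can be sketched as below, here along the radial coordinate only and with made-up node values; in the model the interpolation is performed between the nodes of the full 3D grid.

```python
import numpy as np

# Hypothetical radial grid (au) and fitted amplitudes A_a at its nodes,
# standing in for one column of the tabulated parameter files; the same
# interpolation is applied to the other absorption parameters.
r_nodes = np.array([0.1, 2.5, 11.0, 17.0, 50.0, 80.0, 140.0])
A_nodes = np.array([0.00, 0.02, 0.30, 0.55, 0.95, 1.00, 1.00])

def A_a_interp(r_au):
    # Linear interpolation between grid nodes, as done during the
    # numerical tracking of each atom's trajectory.
    return np.interp(r_au, r_nodes, A_nodes)
```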
\section{Results}
\label{sec:Results}
\subsection{Parameters of the absorption profiles}
\label{sec:params}
As a result of the simulations presented in Section \ref{sec:calculations}, at each node of the computation grid we have a full Lyman-$\alpha$~{} profile as a function of radial velocity (equivalently, frequency or wavelength), with the absorption feature.
\begin{figure*}[ht!]
\centering
\includegraphics[width=0.7\linewidth]{fig1a.eps}
\includegraphics[width=0.7\linewidth]{fig1b.eps}
\caption{The radiation pressure
parameter $\mu$ as a function of radial velocity, including absorption
effects, found for the solar Lyman-$\alpha$~{} profile given by Equation \ref{eq:radpress} and
corrected for absorption with Equation \ref{eq:muEff}.
The black solid line shows the original profile without absorption, while the dashed lines correspond to profiles at various distances from the Sun (red -- 2.5 au, blue -- 17 au, green -- 50 au, and magenta -- 140 au) with absorption included. The top panel presents the upwind direction, and the bottom panel the downwind direction. The profiles are calculated for the medium solar activity in 1999.0.}
\label{fig:absorb_prof}
\end{figure*}
Figure \ref{fig:absorb_prof} shows several examples of absorbed profiles seen at different distances in two directions: upwind and downwind. In the upwind direction, the profile is absorbed on the right side (for $\Delta \lambda>0$). The atoms coming from the upwind direction towards the Sun have negative radial velocities. From the H atom's point of view, the source of the photons is approaching, so the wavelength of these photons is shifted towards shorter values due to the Doppler effect. This means that in the atom's rest frame, photons that originally have wavelengths longer than Lyman-$\alpha$~{} are shifted, and if they end up at 121.567 nm, they are absorbed.
Therefore, the right side of the original profile is affected. The situation is reversed in the downwind direction, where the left side of the profile is affected.
The depth of the absorbed part of the profile depends on the column density in the given direction.
We can compare Figure \ref{fig:absorb_prof} with Figure 3 of \citet{wu_judge:79a}, where absorption is shown to affect the opposite side of the profile. This is the result of a mistake in their formula, which the authors corrected in a later erratum.
The absorption effect in \citet{wu_judge:79a} is slightly stronger than in our paper. There could be a few reasons: we assumed a smaller density of the ISN H beyond the heliosphere, but we also have a much more sophisticated model of radiation pressure and ionization that depends on the solar activity phase. Qualitatively, however, the absorption features predicted by \citet{wu_judge:79a} and by us are similar.
Had the ISN gas been homogeneous, the absorption profiles would be Gaussian-like.
In reality, however, the gas along any radial line features a gradient in radial velocity as well as a gradient in the temperature.
In addition, the gas is a superposition of two populations, with different densities, flow directions, speeds, and temperatures.
As a result, the absorption profiles become non-Gaussian, as illustrated by black lines in Figure \ref{fig:fit_fun}.
Since, on the one hand, full 3D time-dependent simulations of the absorption effect are very time consuming, and, on the other hand, the effect of absorption should be relatively easy to implement in the radiation pressure model, we decided to introduce approximation formulae to model the spectral irradiance with absorption included.
Absorption may result in saturation of the absorption feature in the solar Lyman-$\alpha$~{} line.
The saturation starts at the center of the absorption feature and progresses towards longer and shorter wavelengths with the increase of the solar distance.
Before the onset of saturation, the shape of the absorption feature can be approximated by the Gaussian function.
With the onset of saturation, we change the approximation function by allowing the parameter $n_a$ to be equal to 4 (instead of $n_a=2$ in the case of the Gaussian function) in Equation \ref{eq:fit_fun}.
\begin{align}
\label{eq:fit_fun}
F_{fit}(u_r)&=A_a \exp\left[- \frac{1}{2}\left(\frac{u_r - \xi_a}{\sigma_a}\right)^{n_a}\right]
\end{align}
The transition between one function and the other occurs at different distances and depends on the selected direction in space.
Using this procedure instead of the full absorption profile at each grid point, we obtain a set $\myvec{q} = \{A_a, \xi_a, \sigma_a, n_a \}$ of four parameters of approximate absorption profiles: the amplitude ($A_a$), the mean Doppler shift ($\xi_a$), the dispersion ($\sigma_a$), and the type of fitting function ($n_a$), which can be equal to 2 (the Gaussian function) or 4.
The values of these parameters for our computation grid are available as an additional table in the MRT standard attached to this paper (see Table \ref{tab:params} for an example), as well as on the webpage\footnote{http://users.cbk.waw.pl/~ikowalska/index.php?content=abs}.
Along with the Lyman-$\alpha$~{} radiation pressure model described by \citet{IKL:20a}, these data allow one to calculate the modified radiation pressure that includes the absorption effect at any location within the calculation grid.
At greater distances, our fit returns amplitudes greater than 1 (see Figure \ref{fig:fit_fun}, right-hand panel), which is non-physical; therefore, in these parts of the profile we set $F_{fit}=1$ in our calculations.
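In code, the approximation of Equation \ref{eq:fit_fun}, together with the clipping of non-physical amplitudes, can be sketched as follows (a minimal illustration, not the production fitting code):

```python
import numpy as np

def F_fit(u_r, A_a, xi_a, sigma_a, n_a):
    # Approximate absorption profile: a Gaussian for n_a = 2, a
    # flatter-topped function for n_a = 4 (saturated line). Values
    # above 1 are clipped to 1, as described in the text.
    f = A_a * np.exp(-0.5 * ((u_r - xi_a) / sigma_a) ** n_a)
    return np.minimum(f, 1.0)
```

With the first row of Table \ref{tab:params} ($A_a = 1.0234$, $\xi_a = 8.78$ km s$^{-1}$, $\sigma_a = 9.98$ km s$^{-1}$, $n_a = 2$), the profile is fully saturated at the line center and falls off in the wings.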
\begin{figure*}[ht!]
\centering
\includegraphics[width=0.31\linewidth]{fig2a.eps}
\includegraphics[width=0.31\linewidth]{fig2b.eps}
\includegraphics[width=0.31\linewidth]{fig2c.eps}
\caption{Example absorption profiles calculated according to Equation \ref{eq:I_abs} (black line) and their approximations according to Equation \ref{eq:fit_fun} (red line). All panels show the absorption effect in the upwind direction, at three distances: 11 au -- left panel, 80 au -- center panel, 140 au -- right panel.}
\label{fig:fit_fun}
\end{figure*}
The amplitude of absorption increases with the distance from the Sun.
It is an expected effect since the column density behaves the same way.
For the same reason, the amplitude of absorption is also higher around the upwind direction.
There is some asymmetry visible between the northern and the southern hemispheres, where the amplitude tends to be higher for northern ecliptic latitudes (see the second plot in the first row in Figure \ref{fig:params}).
The same effect can be seen in the maps presented in Figure \ref{fig:abs_map}.
The magnitude of the Doppler shift of the absorption feature is stable with the distance for a given heliocentric line, as can be seen in Figure \ref{fig:absorb_prof}.
The modulation of this quantity with the longitude reflects the modulation of the average radial velocity of the gas in different directions.
ISN H flows in from the upwind direction, where the radial velocity of the gas relative to the Sun is negative; hence the shift of the absorption profiles visible in the left panel of Figure \ref{fig:absorb_prof}. At the downwind axis, ISN H flows away from the Sun, and the absorption feature is Doppler-shifted towards positive radial velocities (see top axis in Figure \ref{fig:absorb_prof}).
The finite width of the absorption profiles, represented by the standard deviation parameter $\sigma_a$ in our fits, results from the thermal broadening and from a gradient in the radial component of the ISN H velocity along the radial line. The broadening is larger at greater distances because the thermal spread of ISN H is reduced towards the Sun.
In Figure \ref{fig:params}, there are some discontinuity features (see the green markers in the top left panel), which are caused by the fact that we use two alternative fitting functions.
Some of the grid points in this case were fitted by one function (circles), and others by the other one (stars).
The transition occurs when the absorption profile becomes saturated.
It seems that for the small distances (less than 30 au) the Gaussian function is preferred, while for large distances (greater than 60 au) the function with $n_a=4$ fits better. In the region of intermediate distances we observe a transition between these two functions, which results in an abrupt change in the amplitude parameter.
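The paper does not spell out a formal criterion for the switch between the two exponents; one plausible implementation, assumed here purely for illustration, selects the exponent giving the smaller least-squares residual against the simulated profile:

```python
import numpy as np

def choose_exponent(u_r, profile, fit_params):
    # Pick n_a = 2 or n_a = 4 by least-squares residual (hypothetical
    # criterion). `fit_params[n]` holds (A_a, xi_a, sigma_a) fitted
    # separately for each exponent.
    best_n, best_res = None, np.inf
    for n in (2, 4):
        A, xi, sigma = fit_params[n]
        model = np.minimum(A * np.exp(-0.5 * ((u_r - xi) / sigma) ** n), 1.0)
        res = np.sum((model - profile) ** 2)
        if res < best_res:
            best_n, best_res = n, res
    return best_n
```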
\begin{figure*}[ht!]
\centering
\includegraphics[width=0.3\linewidth]{fig3a.eps}
\includegraphics[width=0.3\linewidth]{fig3b.eps}
\includegraphics[width=0.3\linewidth]{fig3c.eps}
\includegraphics[width=0.3\linewidth]{fig3d.eps}
\includegraphics[width=0.3\linewidth]{fig3e.eps}
\includegraphics[width=0.3\linewidth]{fig3f.eps}
\includegraphics[width=0.3\linewidth]{fig3g.eps}
\includegraphics[width=0.3\linewidth]{fig3h.eps}
\includegraphics[width=0.3\linewidth]{fig3i.eps}
\caption{The parameters of absorption (the amplitude $A_a$ -- the first row, the mean Doppler shift $\xi_a$ -- the second row, the dispersion $\sigma_a$ -- the third row), presented as a function of heliographic longitude in the heliographic equatorial plane (left column), heliographic latitude in the crosswind plane (center column), and distance from the Sun for four selected directions (right column). Markers indicate the nodes of our computation grid. Circles correspond to $n_a=2$, while stars to $n_a=4$. A table of all parameters in the 3D grid is available online at \url{http://users.cbk.waw.pl/~ikowalska/index.php?content=abs}.}
\label{fig:params}
\end{figure*}
The simple approximation for the profile of absorption shown in this section can be easily implemented and saves a lot of computational time.
Even far away from the Sun, where saturation is clearly visible, the fitted function corresponds quite well with the numerically simulated profiles.
\clearpage
\subsection{Radiation pressure attenuation factor and optical depth}
\label{sec:attenfact}
In this section, we analyse how absorption affects the gas density and the effective radiation pressure distribution in space.
To that end, we define a parameter that can be used to modify the radiation pressure factor $\mu$, which in the absence of absorption is distance- and location-independent.
By doing so, we will reduce the information that we have about the profile since we eliminate the dependence of radiation pressure on radial velocity of individual H atoms, but facilitate the use of absorption-related modification of radiation pressure in modeling of ISN H distribution in the heliosphere.
We introduce the effective radiation pressure attenuation factor $f_{abs}(\myvec{r})$ that tells how strong the absorption effect is.
This factor depends on the distance from the Sun and the direction in space described by $\myvec{r}$:
\begin{equation}
\label{eq:f_abs}
f_{abs}(\myvec{r})=
\begin{cases}
\frac{\int\limits_{u_{r,1}}^{u_{r,2}} \mu_{abs}(\myvec{r},u_r') du_r'}{\int\limits_{u_{r,1}}^{u_{r,2}} \mu(\myvec{r}_\text{E},u_r') du_r'}, & \text{ for }\, r > r_\text{E} \\
1, & \text{ for }\, r \leq r_\text{E}.
\end{cases}
\end{equation}
The integration range ($u_{r,1}$, $u_{r,2}$) is defined so that we integrate over the radial velocities for which the absorbed profile $\mu_{abs}(\myvec{r},u_r)$ differs by more than $10^{-3}$ from the profile without absorption $\mu(\myvec{r}_\text{E},u_r)$.
In that way, we integrate over that part of the profile that is actually affected by the absorption.
In other words, we integrate over the radial velocity range of the atoms that are within the considered line of sight.
With this definition, the integration region varies from one grid point to another.
The magnitude of the attenuation factor is limited to 1 by definition.
When it is equal to 1, there is no absorption; when it is smaller than 1, absorption is present and the effective radiation pressure is accordingly reduced.
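A numerical sketch of Equation \ref{eq:f_abs}, assuming the absorbed and unabsorbed profiles are tabulated on a common radial velocity grid; the simple trapezoidal quadrature and the assumption of a contiguous affected velocity range are simplifications for illustration.

```python
import numpy as np

def _trapz(y, x):
    # Simple trapezoidal quadrature (avoids version-specific NumPy names).
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def attenuation_factor(u_r, mu_abs_prof, mu_noabs_prof, threshold=1e-3):
    # Attenuation factor f_abs: ratio of the absorbed to the unabsorbed
    # radiation pressure profile, integrated over the radial velocity range
    # where the two differ by more than `threshold`. Assumes the affected
    # range is contiguous, as it is in practice.
    mask = np.abs(mu_abs_prof - mu_noabs_prof) > threshold
    if not mask.any():
        return 1.0          # no absorption at this location
    num = _trapz(mu_abs_prof[mask], u_r[mask])
    den = _trapz(mu_noabs_prof[mask], u_r[mask])
    return num / den
```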
The flux of electromagnetic radiation in empty space decreases with the distance squared.
However, because of the absorption on hydrogen atoms, the solar Lyman-$\alpha$~{} flux behaves differently.
Figure \ref{fig:f_abs} shows this dependence for two characteristic directions in space: upwind and downwind. The attenuation factor drops to 0.9 at a distance of 9.17 au in the upwind direction and 16.85 au in the downwind direction, i.e., outside the hydrogen cavity, whose boundary is located at 4.3 au and 13.13 au for the upwind and downwind directions, respectively.
There is clear asymmetry in the absorption caused by the asymmetry in the local density distribution.
Close to the Sun, inside the hydrogen cavity, where there is almost no ISN H, absorption is almost negligible.
Farther away, the slope becomes steeper as more and more hydrogen atoms interact with Lyman-$\alpha$~{} photons.
\begin{figure*}[ht!]
\centering
\includegraphics[width=0.6\linewidth]{fig4.eps}
\caption{The top panel shows the attenuation factor as a function of radial distance, simulated for 1999. The red line shows the upwind direction, the blue line the downwind direction, and the black line corresponds to the case with no absorption. The vertical dashed lines mark the distances at which the attenuation factor drops to 0.9. The bottom panel presents the hydrogen density as a function of radial distance in two directions (upwind -- red, downwind -- blue). The vertical dashed lines are as in the top panel, while the vertical solid lines mark the radial distance of the hydrogen cavity (i.e., where the local density is $e$ times smaller than at the boundary of the calculations).}
\label{fig:f_abs}
\end{figure*}
For better visualization, we created 2D maps of the sky at different distances that show the attenuation factor, presented in Figure~\ref{fig:abs_map}.
As expected, the strongest effect is in the upwind direction, where the density of ISN H atoms is the largest (for instance, at 17 au in the upwind direction $f_{abs}=0.79$, and in the downwind direction $f_{abs}=0.89$), even though the strongest effect of absorption on the density is found in the downwind direction (compare Figure \ref{fig:dens_ratio_map}).
We expected this because the atoms located in the downwind direction have travelled the longest way from the unperturbed interstellar medium through the region where the solar radiation is absorbed. Consequently, they have been subjected to the absorption-reduced radiation pressure for a much longer time than the atoms in the upwind direction.
This difference is visible in the density distribution.
\begin{figure*}[ht!]
\centering
\includegraphics[width=0.45\linewidth]{fig5a.eps}
\includegraphics[width=0.45\linewidth]{fig5b.eps}
\includegraphics[width=0.45\linewidth]{fig5c.eps}
\includegraphics[width=0.45\linewidth]{fig5d.eps}
\caption{Full-sky maps of the attenuation factor at selected distances from the Sun. The maps are shown in the heliographic coordinates. The upwind direction is at $lon=179.346\degr$ and $lat=5.128\degr$.}
\label{fig:abs_map}
\end{figure*}
Inspection of Figure \ref{fig:abs_map} shows that the absorption is the most inhomogeneous at intermediate distances, represented by the map for 17 au.
In this map, there is a clear structure of absorption related to the heliolatitude structure of the solar wind and resulting heliolatitude structure of the density of ISN H.
The smallest attenuation factor is about 0.78, and the largest about 0.9.
While the magnitude of the absorption is relatively small at these distances, its relative inhomogeneity is large.
By contrast, at larger distances the magnitude of absorption is larger: the typical attenuation factor is 0.62 at 50 au, dropping to 0.58 in the upwind direction and rising to 0.66 in a relatively narrow region around the downwind axis.
At 140 au, the magnitude of absorption is still larger, with a typical value of the attenuation factor of 0.49 and a span between the lowest and largest values of 0.46 and 0.52.
The attenuation factor is an approximation of the absorption effect on the solar spectral flux, but it can also be used to determine how the average radiation pressure must be modified at a given point in space. The part of the Lyman-$\alpha$~{} profile that is responsible for the radiation pressure force acting on the considered hydrogen atoms is affected by the absorption. To calculate the averaged radiation pressure coefficient $\bar{\mu}(\myvec{r},t)$, we multiply the averaged radiation pressure coefficient at 1 au, $\bar{\mu}_\text{E}(t)$, by the attenuation factor:
\begin{equation}
\label{eq_mu}
\bar{\mu}(\myvec{r},t)=\bar{\mu}_\text{E}(t) f_{abs}(\myvec{r},t),
\end{equation}
where $\myvec{r}$ indicates the position with respect to the Sun in the heliographic coordinate system.
It is important to average the radiation pressure at 1 au over the same limits as those used to calculate the corresponding $f_{abs}$. The limits used in our calculations, along with the values of the attenuation factor $f_{abs}$, are available in a supplementary table in the MRT standard (see Table \ref{tab:params} for an example), as well as on the web page\footnote{http://users.cbk.waw.pl/~ikowalska/index.php?content=abs}.
In our model \citep{IKL:18a}, the spectral flux at 1 au is axially symmetric, with the symmetry axis being the solar polar axis, and it depends only on heliographic latitude. By time dependence we mean that the results depend on the solar cycle phase.
\begin{deluxetable*}{cccccccccc}
\tablecaption{\label{tab:params} Fit parameters and attenuation factor, along with the integration limits, calculated for each node of our computation grid. The full table is available as a text file; this is only an excerpt.
}
\tablehead{\colhead{R [au]} & \colhead{ELON [deg]} & \colhead{ELAT [deg]} & \colhead{$A_a$} & \colhead{$\xi_a$ [km s$^{-1}$]} & \colhead{$\sigma_a$ [km s$^{-1}$]} & \colhead{$n_a$} & \colhead{$f_{abs}$} & \colhead{$u_{r,1}$ [km s$^{-1}$]} & \colhead{$u_{r,2}$ [km s$^{-1}$]}}
\startdata
50.00 & 94.883& -68.777& 1.0234 & 8.7801 & 9.9803& 2& 0.6255 &-24.0 & 40.0\\
50.00 & 88.090& -59.216& 1.0101 & 11.9127& 9.9974& 2& 0.6298 &-21.0 &43.0\\
50.00 & 84.313& -49.457& 0.9921 & 14.6745& 10.0252& 2& 0.6338 &-19.0 & 45.0\\
50.00 & 81.805& -39.617& 0.9680& 16.9631& 10.0553 &2& 0.6404 &-17.0 &47.0\\
50.00 & 79.927& -29.736& 0.9374 & 18.6714& 10.0874 &2 &0.6503 &-15.0& 49.0\\
50.00 & 78.390& -19.833& 0.9033& 19.7098& 10.1354 &2& 0.6540& -14.0 & 49.0\\
50.00 & 77.035& -9.919& 0.7758 & 19.7819& 12.8085 &4& 0.6667& -14.0 & 50.0\\
50.00 & 75.760 & 0.000& 0.7779 & 19.5226& 12.9347 &4 &0.6568& -14.0 & 49.0\\
50.00 & 74.485 & 9.919& 0.8034 & 18.6867& 13.1120& 4 &0.6414& -15.0& 48.0\\
50.00 & 73.130 & 19.833& 0.8340& 17.1997& 13.2702& 4 &0.6308& -17.0 & 47.0\\
50.00 & 71.593 & 29.736& 0.8601& 15.0992& 13.3645 &4 &0.6180& -19.0 & 45.0\\
50.00 & 69.715 & 39.617& 0.8803 & 12.4714& 13.3728 &4 &0.6166& -22.0 & 43.0\\
50.00 & 67.207 & 49.457 &0.8959 & 9.4237& 13.2939& 4 &0.6125 &-25.0 & 40.0\\
\enddata
\end{deluxetable*}
Another parameter commonly used in the literature is the optical depth at wavelength $\lambda$, which can be expressed as:
\begin{equation}
\label{eg:tau}
\tau_\lambda=\int^r_{0} n_{pr}(\myvec{r'})\sigma_{cs}(\myvec{r'},\Delta \lambda,T_{g,pr}) dr'+\int^r_{0} n_{sc}(\myvec{r'})\sigma_{cs}(\myvec{r'},\Delta \lambda,T_{g,sc}) dr'
\end{equation}
Note that it is calculated for the two-population model, as in Equation \ref{eq:I_abs}.
When the optical depth $\tau=1$, the medium is considered optically thick. In Figure \ref{fig:optical_depth} we show three different planes with contours corresponding to $\tau=1$ and $\tau=3$ for the wavelengths of maximum absorption. On the same plots we mark the location of the hydrogen cavity. The boundary between the optically thin and optically thick medium is shown as a red line. It is located at a distance between 17 and 35 au, depending on the direction. The hydrogen cavity is deep inside the optically thin region, so most of a line of sight for an observer at 1 au traverses the optically thin region. The portion of the signal originating in the optically thick region is relatively small \citep[cf.][]{kubiak_etal:21b}.
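The two-population integral of Equation \ref{eg:tau} can be sketched numerically as follows; the constant cross sections are an illustrative assumption, whereas in the model $\sigma_{cs}$ varies with position, wavelength offset, and the local gas temperature.

```python
import numpy as np

AU_CM = 1.495979e13  # cm per au

def optical_depth(r_au, n_pr, n_sc, sigma_pr, sigma_sc):
    # Cumulative optical depth along a radial line of sight, summing the
    # primary and secondary population contributions. Densities in cm^-3,
    # cross sections in cm^2 (held constant here for simplicity).
    integrand = n_pr * sigma_pr + n_sc * sigma_sc          # cm^-1 at nodes
    mid = 0.5 * (integrand[1:] + integrand[:-1])           # trapezoid rule
    return np.concatenate(([0.0], np.cumsum(mid * np.diff(r_au) * AU_CM)))
```

The node where the cumulative `tau` first exceeds 1 (e.g., via `np.searchsorted`) then marks the transition to the optically thick regime along that line of sight.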
\begin{figure*}[ht!]
\centering
\includegraphics[width=0.45\linewidth]{fig6a.eps}
\includegraphics[width=0.45\linewidth]{fig6b.eps}
\includegraphics[width=0.45\linewidth]{fig6c.eps}
\caption{Three planes: polar -- the upper left panel, crosswind -- the upper right panel, and equatorial -- the bottom panel. Green line shows the location of the hydrogen cavity, red line shows the location where $\tau=1$, and orange line shows where $\tau=3$. The lines of sight to calculate $\tau$ were assumed radial, originating at 1 au.}
\label{fig:optical_depth}
\end{figure*}
\clearpage
\subsection{Hydrogen density}
\label{sec:Hdensity}
The distribution of the ISN H is calculated using the nWTPM \citep{tarnopolski_bzowski:09} with subsequent modifications recapitulated by \citet{IKL:18b} and with the effect of absorption introduced in this paper.
The radiation pressure acting on hydrogen atoms is calculated using our latest model of the relation between the total flux in the solar Lyman-$\alpha$~{} line and the spectral flux within this line \citep{IKL:20a}.
We used the results of this calculation to modify the radiation pressure for the absorption effect.
This was done iteratively, as described in Section \ref{sec:calculations}.
We show the difference between the density calculated with the absorption effect included ($n_{\text{H},abs}$) and the density calculated without this effect ($n_\text{H}$).
This comparison helps to identify the regions in the heliosphere where absorption has a large impact and should not be neglected.
Figure \ref{fig:dens_ratio_map} demonstrates that neglecting the absorption results in underestimating the density by $\sim 9$\% in the downwind direction, i.e., in the worst case.
A strong effect is also visible near the Sun, but the absolute values of the density in this region are very small.
It is not without significance that the map for the distance of 2.5 au is located inside the hydrogen cavity.
\begin{figure*}[ht!]
\centering
\includegraphics[width=0.45\linewidth]{fig7a.eps}
\includegraphics[width=0.45\linewidth]{fig7b.eps}
\includegraphics[width=0.45\linewidth]{fig7c.eps}
\includegraphics[width=0.45\linewidth]{fig7d.eps}
\caption{Maps of the difference between the density calculated with absorption included and the density without absorption, normalized by the density without absorption. The coordinate system is the same as in Figure \ref{fig:abs_map}.}
\label{fig:dens_ratio_map}
\end{figure*}
\clearpage
\subsection{Dependence on the phase of the solar cycle}
\label{sec:solar_cycle}
In the previous section, we showed simulations performed for one moment in time (1999.0, when the solar activity was at an average level) to illustrate how the absorption effect works for different directions and distances in the heliosphere.
However, ISN H atoms need a substantial time to travel through the heliosphere, longer than the length of the solar cycle \citep{bzowski_kubiak:20a}.
During this travel, the radiation pressure that acts on H atoms is caused by the Sun in different phases of its activity.
Hence, we need an estimate of absorption for various phases of the solar cycle.
The effect of absorption depends on the intensity of the Lyman-$\alpha$~{} radiation and the column density of the ISN H.
The former is modulated by the solar activity cycle; the latter is much more complex because, in addition to radiation pressure, it also depends on the solar wind conditions and the photoionization rate.
The time evolution of these factors is only partly periodic, and there is no simple model that would allow an analytical, or even semi-analytical, parametrization.
We compared the absorption effects in two extreme cases, when the local density reaches its maximum and minimum values during the solar cycle.
The local density at 1 au in the upwind direction has the highest value close to the solar minimum conditions (in our simulations this minimum was in November 1996) and has the lowest value during the solar maximum (in January 2002).
Figures \ref{fig:minMaxMap1} and \ref{fig:minMaxMap2} show the difference between the magnitudes of the attenuation factor in these two extreme cases.
\begin{figure*}[ht!]
\centering
\includegraphics[width=0.95\linewidth]{fig8a.eps}
\includegraphics[width=0.95\linewidth]{fig8b.eps}
\caption{Full-sky maps of the attenuation factor at selected distances from the Sun for the epochs 1996.8 (left column) and 2002.0 (middle column). The third column shows the percentage difference between these two maps. The top row shows a distance of 2.5 au, the bottom row 17 au. The maps are shown in heliographic coordinates.}
\label{fig:minMaxMap1}
\end{figure*}
\begin{figure*}[ht!]
\centering
\includegraphics[width=0.95\linewidth]{fig9a.eps}
\includegraphics[width=0.95\linewidth]{fig9b.eps}
\caption{Same as Figure \ref{fig:minMaxMap1}, but for distances of 50 au (top row) and 140 au (bottom row).}
\label{fig:minMaxMap2}
\end{figure*}
Inside the hydrogen cavity (the first row of plots in Figure \ref{fig:minMaxMap1}), the absorption effect is very weak and changes during the solar cycle by a fraction of a percent.
Further away from the Sun, the amplitude of changes during the solar activity cycle is larger, but never exceeds 3\%.
Therefore, all results presented in this paper (calculated for the mean solar activity in 1999) are accurate to within 3\%.
\clearpage
\subsection{Dependence of absorption on the density beyond the termination shock}
\label{sec:TSdens}
A recent analysis \citep{swaczyna_etal:20a} has shown that the density of hydrogen at the TS is not known as well as previously thought \citep{bzowski_etal:09a}.
The difference between the estimates is 49\%, i.e., quite substantial.
We analyzed the effect of changing the density at the boundary of our calculations on the magnitude of absorption.
In our simulations of the gas distribution, the density at 300 au (the boundary of the calculations) scales the density inside the heliosphere linearly.
If we ignore absorption, the ratio of the local densities calculated for different densities at 300 au is constant (the black line in Figure \ref{fig:TSdens}).
But when we include absorption in our simulations, the ratio of the local densities changes with the distance from the Sun, and the gradient of these changes depends on the direction.
For the upwind direction, the ratio is higher than in the absorption-free case near the Sun and decreases with distance, stabilizing around 20 au (the red line in Figure \ref{fig:TSdens}).
A similar situation occurs in the crosswind direction (green line in Figure \ref{fig:TSdens}), but stabilization is reached much farther from the Sun (around 40 au).
A different behavior is seen for the downwind direction (the blue line in Figure \ref{fig:TSdens}), where the density ratio increases over the first 10 au and then slowly decreases with distance, never reaching the stabilization level characteristic of the absorption-free case.
This can be explained by the generally low density in the tail, but also by the fact that in this region we expect not only the direct flow of ISN H, but also an inverse beam.
The absorption effect is stronger when the assumed density at the TS is higher, because the local density is higher.
Adopting a greater density at the boundary, in conjunction with absorption, results in a small, spatially varying global increase in the densities inside the heliosphere.
However, for the range of TS densities in question (0.085 cm$^{-3}$ in \citet{bzowski_etal:09a} vs 0.127 cm$^{-3}$ in \citet{swaczyna_etal:20a}), this increase in the local density is at most 3\%, and typically smaller.
\begin{figure*}[ht!]
\centering
\includegraphics[width=0.6\linewidth]{fig10.eps}
\caption{Ratio of the local densities obtained with different initial densities at the boundary of our calculations: 0.127 cm$^{-3}$ (higher) and 0.085 cm$^{-3}$ (lower). Absorption is included. Colours denote directions: red line -- upwind, blue line -- downwind, green line -- crosswind. The black horizontal line shows the constant ratio obtained in the no-absorption case.}
\label{fig:TSdens}
\end{figure*}
\clearpage
\subsection{The signal observed by IBEX-Lo}
\label{sec:IBEXSignal}
In this section, we analyse the quantities relevant for direct-sampling observations of ISN H by IBEX-Lo. IBEX \citep{mccomas_etal:09a} is the first mission to directly sample ISN H in the Earth's orbit.
IBEX is a spin-stabilized spacecraft with the spin axis changed once (until 2012) or twice per orbit to approximately follow the Sun \citep{mccomas_etal:11a}.
The ISN atoms are observed using the IBEX-Lo instrument \citep{fuselier_etal:09b}, which is a time-of-flight mass spectrometer.
Before entering the detector, the observed atoms pass the collimator, which defines the field of view of the instrument.
The data are collected while the spacecraft is rotating, and the observed counts are binned into time intervals whose length is selected to correspond to fixed spin angle bins.
Since the spin axis is fixed in space between repositionings, the observations during an individual orbital arc cover a specific, fixed region of the sky.
The signal observed by IBEX-Lo depends on the local density and speed of ISN H, which depend on the radiation pressure, so it may be a good tool to constrain our models.
In the past, numerous authors reported an important discrepancy between theoretical simulations and the actual IBEX observations of ISN H.
They suggested that the reason may be that radiation pressure models are not accurate enough \citep{schwadron_etal:13a,katushkina_etal:15b, rahmanifard_etal:19a}.
We developed a new solar Lyman-$\alpha$~{} radiation pressure model to address that issue \citep{IKL:18a}.
In this study we show yet another factor that may help to solve this problem.
In the simulations carried out using the nWTPM code \citep{sokol_etal:15b}, the distribution function of ISN H atoms is represented by a superposition of distribution functions corresponding to the primary and secondary populations of ISN H, and the signal observed by IBEX is simulated for the time of detection and the location and velocity of the detector in space.
In the presented simulations, we show the total flux of ISN H entering the detector, and the mean speeds and energies of the two populations separately, for spin angle bins identical to those used by IBEX-Lo, for IBEX orbits belonging to a selected ISN observation season, which begins in December and ends in April each year.
Even though the ISN observations are carried out during yearly seasons, the observed signal features large variations between the seasons.
The ISN H is heavily modulated in the Earth's orbit due to the variations in the ionization rate and radiation pressure during the solar activity cycle \citep[][]{rucinski_bzowski:95b, bzowski_etal:97, tarnopolski_bzowski:09, bzowski_etal:13a}.
Moreover, the observation conditions never repeat precisely from one year to another; therefore, the observed ISN H signal is very sensitive to small differences in the spin angle pointing and the length of ``good times'' during the observations.
\begin{figure*}
\centering
\includegraphics[width=1.2\textwidth,angle=90]{fig11a.eps}
\includegraphics[width=1.2\textwidth,angle=90]{fig11b.eps}
\caption{
Simulated total flux (sum of the fluxes of the primary and secondary populations, upper panel) and mean energy (shown separately for the primary and secondary populations, lower panel) of ISN H filtered by the collimator of IBEX-Lo. The quantities shown were simulated for each 6-degree spin angle interval within the range $180\degr-330\degr$. The black solid line in the upper panel corresponds to the model with absorption, and the magenta solid line to the simulations without absorption. In the lower panels, red represents the primary population and blue the secondary population. Solid lines correspond to the model with absorption, dashed lines to the model without. Individual orbits are separated by the white and gray strips. The orbit numbers are shown at the top of the first panel, and the mean ecliptic longitudes of the Earth for individual orbits are given at the bottom of the first panel. The lower subpanel of the first panel shows the ratio of the flux without absorption to that with absorption, and the lower subpanel of the second panel shows the differences in energy between the models with and without absorption.
}
\label{fig:ibex2010}
\end{figure*}
In Figure \ref{fig:ibex2010}, we show the year 2010, during the minimum of solar activity, when the hydrogen flux is highest.
The biggest absorption effect is seen in the peaks of the middle and last orbits (68--74), where it reaches up to 9\%.
Usually, the flux simulated with absorption included is higher than that without it.
The magnitude of the effect of absorption on the signal is dependent on the spin angle of the observed bins.
On the wings, it is just around 1\%--4\%, but close to the peak flux, where the observation statistics are best, it is up to 9\%. Since this effect is systematic, it may affect details of the analysis results, but it is not likely to explain the discrepancy between the model and observations mentioned before.
Absorption affects the energy of both populations in a similar way, and the magnitude of the effect does not depend on the spin angle or the strength of the signal.
The first part of the season (until orbit 64) shows different absorption-related features than the rest of the signal: the flux ratio is sometimes higher than one, and the energy difference is also much lower.
In this region, the signal is built up mostly by the secondary population of ISN H.
This result implies a high sensitivity of the signal observed by IBEX to the details of radiation pressure and a large diagnostic potential of such observations.
\clearpage
\section{Summary and Conclusions}
Lyman-$\alpha$~{} radiation is (next to solar gravity) one of the most important factors influencing the distribution of ISN hydrogen in the heliosphere.
Simulations of radiation pressure throughout the heliosphere are essential for calculating the trajectories of hydrogen atoms.
Absorption of the Lyman-$\alpha$~{} radiation modifies the effective radiation pressure profile and the global hydrogen distribution.
In this paper we have calculated the radiation pressure profiles with absorption effects included on a 3D computation grid up to 140 au from the Sun.
The magnitude of absorption that modifies the solar Lyman-$\alpha$~{} profile can be estimated using a Gauss-like function described by four parameters ($A_a$, $\xi_a$, $\sigma_a$, and $n_a$).
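The exact functional form of this Gauss-like dip is defined in the body of the paper; purely as a hypothetical illustration (both the generalized-Gaussian form below and the way it multiplies the solar profile are our assumptions, not the paper's definition), it might be sketched as:

```python
import numpy as np

def absorption_dip(xi, A_a, xi_a, sigma_a, n_a):
    # Hypothetical generalized-Gaussian ("Gauss-like") dip in Doppler
    # velocity xi: amplitude A_a, center xi_a, width sigma_a, and shape
    # exponent n_a (n_a = 2 recovers an ordinary Gaussian).
    return A_a * np.exp(-np.abs((xi - xi_a) / sigma_a) ** n_a)

def attenuated_profile(flux, xi, A_a, xi_a, sigma_a, n_a):
    # Solar Lyman-alpha spectral flux reduced by the assumed dip
    # (an assumed multiplicative application, for illustration only).
    return flux * (1.0 - absorption_dip(xi, A_a, xi_a, sigma_a, n_a))
```

With $n_a = 2$ the dip reduces to an ordinary Gaussian centered at $\xi_a$.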
We have defined the attenuation factor ($f_{abs}$) that estimates the magnitude of the absorption effect at a given point in the heliosphere.
The attenuation factor increases with increasing distance from the Sun; thus, the Lyman-$\alpha$~{} spectral flux responsible for radiation pressure acting on ISN H atoms in the heliosphere drops faster than $1/r^2$.
We have shown how absorption changes the density of the ISN H. Even though the attenuation factor indicates the strongest absorption in the upwind direction, the relative changes in the hydrogen density are largest around the tail.
In general, absorption modifies the density by several percent, mostly in the downwind cone of $\sim 30\degr$ and at relatively small heliocentric distances of a few au.
We have analysed the modulation of absorption during the solar cycle and found that the changes between extreme conditions are up to 3\%. Therefore, we conclude that our basic set of calculations, done for epoch 1999.0, can be widely used within a 3\% accuracy.
Given the uncertainties in the ionization rate of ISN H, in the solar EUV output, etc., absorption may be neglected to first approximation in the calculation of the ISN H density and the related production rates of pickup ions.
However, we found that absorption systematically modifies the ISN H observed by direct-sampling missions at 1 au, like IBEX and IMAP.
For these observation conditions, systematic differences may be on the order of 5--8\%, and are likely to be larger than the sampling uncertainties.
Therefore, absorption needs to be taken into account in the analyses of these observations, like those presented by \citet{schwadron_etal:13a, katushkina_etal:15b, galli_etal:19a, rahmanifard_etal:19a}.
The absorption effect calculated and described in this paper is incorporated into the latest version of the nWTPM code.
\acknowledgments
The authors would like to kindly thank Jeffrey Linsky and Eberhard Moebius for the helpful discussion.
This study was supported by Polish National Science Center grants 2019/35/B/ST9/01241, 2018/31/D/ST9/02852, and by Polish Ministry for Education and Science under contract MEiN/2021/2/DIR.
\bibliographystyle{aasjournal}
\section{Introduction}
Light has become one of the most powerful and versatile tools. It is
an important medium for transporting information, either between
communicating partners or from an object under investigation to a
detector in imaging applications. Light also plays a key role in
laser-based manufacturing processes, such as welding, cutting and
direct laser lithography.
However, as recently highlighted~\cite{rubinsztein2016roadmap,
forbes2021structured, piccardo2021roadmap}, exploiting the full
physical potential of light requires solutions for its highly specific
and custom shaping. For example, increasing the telecommunication
bandwidth demands directing photonic signals from many single mode
fibers into a single multi-core or multimode fiber with high
efficiency~\cite{puttnam2021space}. Likewise, efficiently exploiting
the high optical power of industrial lasers requires reshaping their
often unsuitable native mode
profiles~\cite{li2015high,ackermann2021uniform}.
In any case, the restructuring of light should ideally take up very
little space. To this end, the processing of dielectric materials
such as glasses or polymers with ultra-fast lasers has opened
promising routes towards the creation of three-dimensional,
miniaturized light shapers~\cite{gross2015ultrafast}. A variety of
devices such as photonic lanterns, beam combiners and mode
multiplexers made from glass-embedded waveguide arrangements have been
successfully demonstrated and are making their way towards
commercialization. Such waveguides are formed by translating a
femtosecond laser focus through a glass volume along the desired
guiding paths, along which the refractive index of the glass is
permanently modified. Whilst undoubtedly powerful, devices using
waveguides as ``building blocks'' are nevertheless limited in the
sense that they do not fully exploit all degrees of freedom of a
volume. Ideally, one would like to have the power of arbitrarily
changing the entire three-dimensional refractive index distribution at
the microscale. Progress towards the fabrication of such 3D gradient
index materials for optical wavelengths has been recently reported
using two-photon polymerization~\cite{vzukauskas2015tuning,
ocier2020direct} and 3D printed glass optics~\cite{dylla20203d}.
Likewise, the computational design of 3D gradient index optics in
order to shape arbitrary transverse beam profiles is continuously
advancing. In Ref.~\cite{kunkel2020numerical}, the authors use an
irradiance mapping scheme relying on geometric optics, obtained by
employing algorithms from optimal mass transport problems, on top of
which they run a multiplane Gerchberg-Saxton-like algorithm in order
to account for diffraction. The proposed algorithm, however, imposes
some constraints on the smoothness and continuity of the input/output
functions. Furthermore, it has so far only been
demonstrated for single-mode shaping, that is sculpting a specific
coherent output field from a single coherent input beam.
{Here we propose a new algorithmic approach for the design of 3D
gradient index devices. Unlike previous methods for designing
gradient index or freeform optics, we refrain from using tools used
in ray-optical designs. Instead, we employ an approach related to
machine learning, i.e., numerical beam propagation in conjunction
with suitable cost functions that are optimized using error
back-propagation and gradient descent. We demonstrate the
effectiveness and versatility of our approach by computing a variety
of highly miniaturized 3D gradient index designs that perform key
tasks in integrated photonics. After a detailed explanation of our
computational method and suitable cost functions in
section~\ref{algorithm}, we present solutions for three application
cases in section~\ref{results}.}
Firstly, we find a solution for a mode sorter of only 2.5~mm length,
which is capable of specifically mapping 45 Gaussian input beams onto
the Hermite-Gaussian modes of a multimode fiber with an average
conversion efficiency of 98.6\%. Secondly, we design an equally small
photonic lantern, which couples light from 45 Gaussian inputs
unspecifically into a multimode fiber with 99.2\% efficiency.
Finally, we present a highly miniaturized beam shaper of only 0.5~mm
length, which turns the irradiance profile of a monochromatic
$50\,\textrm{\textmu m}$-diameter multimode fiber source into a
speckle-free square of $60\,\textrm{\textmu m}$ side length.
All presented designs exhibit smooth 3D refractive index modulations
within a dynamic range of about $10^{-2}$ and are therefore feasible
for experimental realization using 2-photon
polymerization~\cite{vzukauskas2015tuning}.
\section{Inverse design algorithm}\label{algorithm}
In the following, we present an algorithm that allows one to design
gradient-index structures for the conversion of a set
$\{u_n\}_{1\leq n \leq N}$ of $N$ mutually incoherent transverse input
modes to another set of output modes $\{v_n\}_{1\leq n \leq N}$,
according to various cost functions depending on the application.
We follow a typical inverse design approach by first defining a
forward model propagating the input modes from their definition plane
$z_{in}$ to the destination plane $z_{out}$, through a structure of
refractive index contrast $\Delta\textrm{RI}(x,y,z)$ that is initially
unknown and will be updated iteratively. We then define different cost
functions for several classes of multimode beam shaping problems and
present a generic error backpropagation algorithm to compute
$\Delta\textrm{RI}$ gradients. Finally, we combine these different
stages into a simple gradient descent iteration scheme to solve
$\Delta\textrm{RI}$ and eventually apply some constraints to it.
\subsection{Forward model}
\begin{figure}[ht]
\centering
\includegraphics[scale=0.9]{design}
\caption{Illustration of a Gaussian to smiley gradient index mode
conversion along with its discrete formulation, consisting of
$P$ planes separated by a small distance $\mathrm{d}z$.}
\label{fig:design}
\end{figure}
We consider a refractive index distribution $\Delta\textrm{RI}(x,y,z)$
of finite extent in a rectangular coordinate system ($O,x,y,z$), $z$
being the direction of propagation of the incoming laser modes
$\{u_n\}_{1\leq n \leq N}$. The wavelength of the input and output
modes in vacuum is denoted $\lambda_0$. The RI volume is discretized
into $P$ transverse planes located at axial positions
$\{z_p\}_{1\leq p \leq P}$ separated by a small distance
$\mathrm{d}z$, as illustrated in Figure~\ref{fig:design}. The
transverse resolution is denoted $(\mathrm{d}x,\mathrm{d}y)$. The
refractive index $n_b$ of the medium between these different planes is
assumed to be the same as for the bulk material, while each plane
holds a phase mask proportional to the local $\Delta\textrm{RI}$
distribution and to $\mathrm{d}z$:
\begin{equation}
\Delta\varphi_p(x,y) = \frac{2\pi}{\lambda_0}\Delta\textrm{RI}(x,y,z_p)\mathrm{d}z,
\label{eq:phase_mask}
\end{equation}
for $p\in\{1,\dots,P\}$. Then, the numerical propagation of the input
modes $\{u_n\}$ through the $\Delta\textrm{RI}$ distribution is
achieved with the standard split-step beam propagation method (BPM),
alternating propagation in a uniform medium of refractive index $n_b$
and multiplication by a phase mask defined according to
Eq.~(\ref{eq:phase_mask}). A symmetric split-step scheme is enforced
simply by defining the input plane position $z_{in}$ at a distance
$\mathrm{d}z/2$ before $z_1$ and the output plane position $z_{out}$
at $\mathrm{d}z/2$ after $z_P$.
For propagating between planes we use the well-known angular spectrum
(AS) method:
\begin{equation}
\begin{aligned}
u_{\perp(z+\textrm{d}z)} & = \textsc{IFFT}\left(
\textsc{FFT}{\left(u_{\perp z}\right)}\times\exp{\left(i 2\pi \mathrm{d}z \sqrt{\frac{n_b^2}{\lambda_0^2}-f_x^2-f_y^2}\right)}\right) \\
\end{aligned}
\label{eq:AS}
\end{equation}
where \textsc{FFT} and \textsc{IFFT} refer to the two-dimensional fast
Fourier transform and its inverse. {A detailed pseudocode description
of the forward model, propagating an N-vector of input modes $U_{in}$
defined in the input plane $z_{in}$ to an N-vector of output modes
$U_{out}$ in the destination plane $z_{out}$, is given in
Algorithm~\ref{alg:forward_model} of Appendix~\ref{appendix:algorithms}.}
\subsection{Cost functions and errors for different classes of problems}
Depending on the beam shaping application, different cost functions
can be defined{, which quantify the quality of the light transform by
a single scalar value $C$}. Here, we consider two types of cost
functions, either distances that we want to minimize to zero, or
functions of merit representing a physical quantity that we want to
maximize.
In order to keep the notation simple, we still denote by
$\{u_n\}_{1\leq n\leq N}$ the elements of the N-vector $U_{out}$
representing the input modes propagated to the plane $z_{out}$. We
denote by $\bar{U} = (\partial C/\partial u_n)_{1\leq n \leq N}$ the error
vector whose elements $\{\bar{u}_n\}$ represent the variation of the
cost function with respect to each propagated mode $u_n$. The
$\{u_n\}$ being complex-valued, it is important to define some
consistent algebraic rules for achieving such differentiation. We use
the complex representation defined in~\cite{Jurling:14} along with the
corresponding differentiation rules. It is worth noting that other
representations and derivation rules based on Wirtinger calculus can
be used~\cite{chakravarthula2019wirtinger}, but they should lead to
the same result in the end when error computations with respect to
real-valued parameters (e.g. phases) are performed. {Using such rules,
we see that the errors $\{\bar{u}_n\}$ have the same dimension as
the input and output modes $\{u_n\}$ and $\{v_n\}$, so we call them
error modes in the following.} Table~\ref{table:cost_functions}
lists a few interesting cost functions {and associated error modes}
for different beam shaping problems, which we describe in the
following.
\begin{table}[ht]
\centering
\begin{tabular}{|l|l|l|}
\hline
\textbf{beam shaping class} & \textbf{cost function C} & \textbf{error} $\mathbf{\bar{U}}$ \\
\hline
multimode matching & \(\min \sum\limits_{n=1}^N\iint\vert u_n - v_n \vert^2\) &
\(\bar{u}_n = 2(u_n-v_n)\) \\
\hline
multimode 1--1 power coupling &
\(\max \sum\limits_{n=1}^{N}{\left\vert\iint{v_n^*u_n}\right\vert^2}\) &
\(\bar{u}_n = 2\left(\iint{v_n^*u_n}\right)v_n\) \\
\hline
multimode N--N power coupling &
\(\max \sum\limits_{n=1}^{N}\sum\limits_{l=1}^{N}{
\left\vert\iint{v_l^*u_n}\right\vert^2}\) &
\(\bar{u}_n = 2\sum\limits_{l=1}^N{\left(\iint{v_l^*u_n}\right)}v_l\) \\
\hline
multimode intensity shaping &
\(\min \iint\left(I_S-I_T\right)^2\) & \(\bar{u}_n = 4(I_S-I_T)u_n\) \\
\hline
\end{tabular}
\caption{Examples of cost functions and associated errors for different classes of beam shaping problems.}
\label{table:cost_functions}
\end{table}
The first cost function, which we refer to as ``multimode matching'', is a
squared $l_2$ distance between the modes $\{u_n\}$ and $\{v_n\}$ with
a one-to-one correspondence. We use this name because, in the case
of lossless designs, its application matches rigorously the wavefront
matching method introduced in a previous
article~\cite{sakamaki2007new} in the context of mode multiplexing
with planar lightwave circuits (PLC). Indeed, a lossless design
implies the conservation of $\|u_n\|_2^2$ during $u_n$ propagation,
which leads to a simpler expression for the error vector components:
\begin{equation}
\bar{u}_n = -2v_n.
\label{eq:wavefront_matching}
\end{equation}
We see that, up to a constant factor, the error modes correspond to
the target modes $\{v_n\}$. {In this case, the application of the
backpropagation of errors we develop in the next subsection leads
exactly} to the wavefront matching method presented
in~\cite{sakamaki2007new}, even if the authors obtain this result with
an alternative reasoning.
If the {phase offset} between each mode pair $u_n$ and $v_n$ is not
relevant for the multiplexing problem, a different cost function can
be introduced that we name ``multimode 1--1 power coupling''. This time,
the cost function represents the sum of overlap integrals between
input and target modes and must therefore be maximized. We note that
the error vector components differ from the simplified expression of
multimode matching given in Eq.~(\ref{eq:wavefront_matching}) only by
a complex constant factor corresponding to the overlap integrals and
by a sign which just results from the difference between minimization
and maximization problems. This slight modification pre-aligns the
relative constant phases between each $u_n$ and $v_n$, which can
sometimes help to reduce the inverse design complexity.
We introduce a third cost function named ``multimode N--N power
coupling'', which aims at maximizing the total power coupling between
the input set of modes $\{u_n\}$ and the target modes $\{v_n\}$
\emph{without} requiring a one-to-one mapping. This metric can be
useful for applications such as incoherent combining of a set of modes
into a multimode fiber, or for space division multiplexing (SDM) in
telecommunications~\cite{puttnam2021space}, as we will illustrate in
the numerical results section. Compared to the multimode 1--1 power
coupling metric, this one gives much more flexibility to the
optimization algorithm since it allows {for mapping each input mode
onto an arbitrary linear combination of target modes
$\{v_n\}$. Usually, this additional degree of freedom results in
smoother and more symmetrical designs}.
Finally, we introduce a cost function named ``multimode intensity shaping'', which allows shaping the total intensity distribution $I_S$ of mutually incoherent input modes $\{u_n\}$ into a desired target intensity shape $I_T$. For this particular beam shaping application, it is not necessary to define a set of target modes $\{v_n\}${, as only $I_T$ is relevant}. Moreover, the $\{u_n\}$ being mutually incoherent, their time-averaged total intensity profile is simply given by:
\begin{equation}
I_S = \sum\limits_{n=1}^N{\vert u_n\vert^2}.
\end{equation}
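As a concrete (and hedged) illustration of Table~\ref{table:cost_functions}, two of the cost/error pairs can be transcribed into NumPy as follows; the discrete double integrals are plain pixel sums, and normalization factors such as $\mathrm{d}x\,\mathrm{d}y$ are omitted:

```python
import numpy as np

def multimode_matching(U_out, V):
    # Squared l2 distance cost and its error modes (Table 1, row 1).
    # U_out, V: complex arrays of shape (N, ny, nx).
    diff = U_out - V
    C = np.sum(np.abs(diff) ** 2)
    U_bar = 2.0 * diff
    return C, U_bar

def multimode_nn_power_coupling(U_out, V):
    # Total N-N power coupling cost (to be maximized) and its error
    # modes (Table 1, row 3); no one-to-one mode assignment is imposed.
    # overlaps[n, l] = sum over pixels of conj(v_l) * u_n
    overlaps = np.einsum('lyx,nyx->nl', np.conj(V), U_out)
    C = np.sum(np.abs(overlaps) ** 2)
    U_bar = 2.0 * np.einsum('nl,lyx->nyx', overlaps, V)
    return C, U_bar
```

For modes that are orthonormal on the grid, mapping each input exactly onto its target gives a matching cost of zero and an N--N coupling cost equal to $N$.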
In the following, we describe the backpropagation algorithm that
allows computing the gradients $\partial C/\partial\Delta\textrm{RI}$
for any cost function $C$ in a unified fashion, starting from its
associated and {case-specific} error vector $\bar{U}$.
\subsection{Backpropagation of errors}
Algorithm~\ref{alg:gradient_computation} of
Appendix~\ref{appendix:algorithms} describes the backpropagation of
the error modes vector $\bar{U}$ in order to compute the refractive
index gradients associated to each plane
$\nabla_\textrm{RI}[p] = \partial C/\partial\Delta\textrm{RI}[p]$ in a
reverse fashion. Here, the term ``backpropagation'' refers to the
traditional definition found in machine learning, where the chain rule
is applied in order to compute the partial derivatives of the cost
function with respect to some parameters of interest
backwards. Following the complex representation of errors defined
in~\cite{Jurling:14}, it appears that this procedure also corresponds
exactly to the physical backpropagation of the error modes through the
optical system.
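Algorithm~\ref{alg:gradient_computation} itself is given in the appendix; the following self-contained, single-mode NumPy sketch (our own, not the authors' code) illustrates the idea: each forward operator's adjoint is applied in reverse order while the refractive index gradients are accumulated. The error convention follows Table~\ref{table:cost_functions}, with the remaining constant factors absorbed accordingly, and the result can be checked against finite differences.

```python
import numpy as np

def as_step(u, dz, dx, wl, n_b):
    # Angular spectrum propagation over dz in a homogeneous medium n_b;
    # a negative dz yields the adjoint on the propagating band.
    fx = np.fft.fftfreq(u.shape[1], d=dx)
    fy = np.fft.fftfreq(u.shape[0], d=dx)
    FX, FY = np.meshgrid(fx, fy)
    arg = (n_b / wl) ** 2 - FX**2 - FY**2
    H = np.where(arg > 0,
                 np.exp(2j * np.pi * dz * np.sqrt(np.maximum(arg, 0.0))),
                 0.0)
    return np.fft.ifft2(np.fft.fft2(u) * H)

def forward_with_tape(u_in, dri, dz, dx, wl, n_b, k0):
    # Forward symmetric split-step BPM, recording the field just after
    # each phase mask (needed by the backward pass).
    tape = []
    u = as_step(u_in, dz / 2, dx, wl, n_b)
    P = len(dri)
    for p in range(P):
        u = u * np.exp(1j * k0 * dri[p] * dz)
        tape.append(u)
        u = as_step(u, dz if p < P - 1 else dz / 2, dx, wl, n_b)
    return u, tape

def backprop_gradient(u_bar, tape, dri, dz, dx, wl, n_b, k0):
    # Apply the adjoint of each forward operator in reverse order and
    # accumulate dC/dDeltaRI[p]; u_bar follows the error convention of
    # Table 1 (e.g. u_bar = 2*(u_out - v) for multimode matching).
    P = len(dri)
    grad = np.zeros_like(dri)
    ub = u_bar
    for p in reversed(range(P)):
        # adjoint of the propagation that followed mask p
        ub = as_step(ub, -(dz if p < P - 1 else dz / 2), dx, wl, n_b)
        # dC/dphi_p = Re[conj(ub) * i * u_after]; dphi/dRI = k0*dz
        grad[p] = k0 * dz * np.real(np.conj(ub) * 1j * tape[p])
        # adjoint of the phase mask itself
        ub = ub * np.exp(-1j * k0 * dri[p] * dz)
    return grad
```

For several mutually incoherent modes, the per-mode gradients simply add.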
\subsection{Optimization algorithm}
The optimization algorithm relies on gradient descent and can be summarized in the following steps:
\begin{enumerate}
\item Initialize $\Delta\textrm{RI}$.
\item Compute the propagated modes $U_{out}$ by calling the
function \textsc{propagate\_forward}
(Algorithm~\ref{alg:forward_model}) on the input modes $U_{in}$.
\item Evaluate $C$ and $\bar{U}$, associated with a particular beam shaping problem, using the propagated modes $U_{out}$.
\item Compute the refractive index gradients $\nabla_\textrm{RI}$
by calling \textsc{backpropagate\_gradient}
(Algorithm~\ref{alg:gradient_computation}) on the error modes
$\bar{U}$.
\item Update $\Delta\textrm{RI}$ with a gradient step along $\nabla_\textrm{RI}$.
\item Enforce constraints on $\Delta\textrm{RI}$.
\item If $C$ satisfies a stopping criterion then return $\Delta\textrm{RI}$, otherwise go to step 2.
\end{enumerate}
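The seven steps above can be condensed into a generic skeleton; the callables below are placeholders for Algorithms~\ref{alg:forward_model} and~\ref{alg:gradient_computation}, and the interval projection implements step 6 (a hedged sketch of the iteration structure, not the authors' implementation):

```python
import numpy as np

def optimize_ri(ri0, forward, cost_and_error, backprop, step,
                ri_min, ri_max, n_iter=500, tol=1e-10):
    # Steps 1-7 for a cost to be *minimized*; for maximization (the
    # power-coupling metrics) the sign of the update flips.
    ri = ri0.copy()                              # step 1
    C = np.inf
    for _ in range(n_iter):
        u = forward(ri)                          # step 2
        C, u_bar = cost_and_error(u)             # step 3
        g = backprop(u_bar, ri)                  # step 4
        ri = ri - step * g                       # step 5
        ri = np.clip(ri, ri_min, ri_max)         # step 6: interval projection
        if C < tol:                              # step 7
            break
    return ri, C
```

The structure can be exercised on a toy problem where the forward model is a well-conditioned linear map standing in for the BPM; with the multimode-matching cost, the loop then recovers the generating parameters inside the constraint interval.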
Concerning step 1, we often initialize $\Delta\textrm{RI}$ to zero,
although better initialization schemes could be
used~\cite{kunkel2020numerical}. A clever initialization can help to
achieve faster convergence when the modes propagated through a
constant refractive index medium have very low spatial overlap with
the target modes.
Concerning step 5, one needs to introduce a constant step size, small
enough to avoid divergence, or use an adaptive learning rate
method. Of course the direction of the gradient step has to be chosen
consistently, depending on whether the cost function has to be
minimized or maximized.
Concerning step 6, it is worth mentioning that enforcing constraints
right after performing gradient steps finds a mathematical
justification in the proximal algorithms
framework~\cite{parikh2014proximal}. A typical constraint that we wish
to apply is the restriction of $\Delta\textrm{RI}$ to a fixed
interval, in order to satisfy manufacturing constraints for instance.
\section{Numerical results}\label{results}
In this section, we present some numerical results that illustrate the
versatility of the optimization algorithm in addressing different beam
shaping problems involving fiber optics. We aim at designing fully
integrated devices of only millimeter length, which do not require any
collimation or focusing of the input and output beams. The refractive
index contrast required to realize some of the proposed devices may be
difficult to achieve with current glass or polymer gradient
manufacturing processes, but the main aim of this work is to propose a
proof of concept and give an idea of the objectives to be reached so
that these types of devices can be industrially manufactured.
\subsection{Hermite-Gaussian mode sorter}
The first system we design is a Hermite-Gaussian (HG) mode sorter,
inspired by recent work~\cite{fontaine2019laguerre}, where the authors
perform a conversion of 210 modes from a set of Gaussian input beams
arranged in a triangle to a set of co-propagating HG modes. {Mode
sorters are a proposed solution for increasing the telecommunication
bandwidth by exploiting the concept of selective mode-division
multiplexing, i.e., coupling many signals into a few-mode or
multimode fiber~\cite{gross2015ultrafast} with control of the
particular excitation of a given output mode}.
In~\cite{fontaine2019laguerre}, the HG conversion is performed by only
7 phase masks separated by free space propagation and is implemented
experimentally by a multi-pass cavity formed by a spatial light
modulator (SLM) and a flat mirror. The algorithm used in
article~\cite{fontaine2019laguerre} is equivalent to a coordinate
descent formulation of the wavefront matching method presented
in~\cite{sakamaki2007new}. It is important to underline that this
complex transformation is only made possible by a clever one-to-one
mapping between the cartesian coordinates of a Gaussian mode in the
triangle arrangement and the order $(m,n)$ of the corresponding HG
output mode. With so few phase patterns, any other arrangement would
lead to poor convergence of the mode transformation.
\begin{figure}[ht]
\centering \includegraphics[scale=1]{HG_sorter_input}
\caption{Triangle arrangement for the 45 Gaussian input modes.}
\label{fig:HG_sorter_input}
\end{figure}
In order to limit the computational resources to something reasonable,
we restrict the mode conversion to 45 modes, which still includes
interesting applications for
telecommunications~\cite{fontaine2018packaged}. We set the working
wavelength $\lambda_0 = 1.55\,\textrm{\textmu m}$ and use Gaussian
modes of waist $\omega_{in} = 5.2\,\textrm{\textmu m}$ as inputs,
typical of single mode fibers
(SMF-28). Figure~\ref{fig:HG_sorter_input} shows the triangle
arrangement of the 45 input modes, with a separation parameter
$\Delta = 4\,\omega_{in}$. For the outputs, we use the HG
representation of standard graded-index multimode fibers' eigenmodes,
with a maximum refractive index contrast
$dn_\textrm{max} = 15\times 10^{-3}$ between the core and the cladding
as in~\cite{fontaine2018packaged}. With a mode
solver~\cite{fallahkhair2008vector}, we compute the eigenmodes for
such a fiber profile and estimate the value
$\omega_{out} = 7.7\,\textrm{\textmu m}$ for the HG output modes'
waist. {To each of the 45 input modes, we assign one of the output
fiber HG eigenmodes according to the mapping described
in~\cite{fontaine2019laguerre}. In short, if the Cartesian position
of an input mode in the triangle arrangement (in a $45^\circ$
rotated frame compared to Figure~\ref{fig:HG_sorter_input}) is
expressed as $(m\Delta, n\Delta)$, then the associated output mode
is $\textrm{HG}_{m,n}$.}
In simulation, the device is represented by a
$512\times 512\times 250$ array, the first two dimensions
corresponding to the transverse planes and the third one to the plane
index. The bulk refractive index used for the propagation between
planes is set to $n_b = 1.444$ corresponding to fused silica. The
transverse resolution is set to
$\mathrm{d}x = \mathrm{d}y = 0.65\,\textrm{\textmu m}$ and the
separation between planes is $\mathrm{d}z = 10\,\textrm{\textmu m}$,
leading to a total device length of 2.5 mm. At first glance, the axial resolution may appear too coarse for accurate BPM, but we verify a posteriori that it is sufficient by linearly interpolating the final simulated RI profile to a finer resolution $\mathrm{d}z = 0.5\,\textrm{\textmu m}$ and propagating the input modes through it. Limiting the number of planes in the simulation is
crucial because we need to store all the input modes at each plane
during the forward pass (Algorithm~\ref{alg:forward_model}) in order
to be able to compute a gradient afterwards
(Algorithm~\ref{alg:gradient_computation}). Even with single-precision
complex numbers, this already requires 23.6 GB of memory allocation.
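The quoted memory footprint follows directly from the array dimensions: 45 modes stored on a $512\times512$ grid at 250 planes, with 8 bytes per single-precision complex number:

```python
# Storage needed for all input modes at every plane during the forward pass.
nx, ny, n_planes, n_modes = 512, 512, 250, 45
bytes_per_complex64 = 8  # single-precision complex: two 4-byte floats
total_bytes = nx * ny * n_planes * n_modes * bytes_per_complex64
print(f"{total_bytes / 1e9:.1f} GB")  # -> 23.6 GB
```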
\begin{figure}[ht]
\centering \includegraphics[scale=1]{HG_sorter_output}
\caption{Intensity and phase patterns of the 45 created HG modes. The average conversion efficiency is 98.6\%.}
\label{fig:HG_sorter_output}
\end{figure}
In order to achieve this 45-mode conversion, we run 1500 iterations of
the optimization algorithm using the multimode 1-1 power coupling cost
function defined in Table~\ref{table:cost_functions}. After each
iteration, we force the values of $\Delta\textrm{RI}$ to stay within a
range of $12\times 10^{-3}$, which is slightly smaller than the RI
contrast $dn_\textrm{max}$ required to guide the output HG modes. We
observed that enforcing stricter constraints on $\Delta\textrm{RI}$
causes losses in the mode conversion due to the device's inability to
strongly guide the small modes involved. At the end of the iterations,
the cost function reaches the value $C \simeq 44.37$, corresponding to
an average conversion efficiency of 98.6\% per mode. As
in~\cite{fontaine2019laguerre}, we introduce the $N\times N$ transfer
matrix $T$ of the device, whose elements $t_{ij} = \int{v_i^*u_j}$ are
the overlaps between the output and the propagated input modes. For
this particular device, we find that the auto-correlation matrix $X = \vert T^\dagger T\vert$ has its largest off-diagonal term at $X_\textrm{max} = -46.16$~dB, corresponding to extremely small crosstalk between modes. The intensity and phase of each converted
mode are shown in Figure~\ref{fig:HG_sorter_output}, where we
recognize almost perfectly shaped HG modes.
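The transfer-matrix diagnostics used above can be sketched in a few lines; we assume the propagated modes are normalized so that the diagonal of $X$ is close to unity, and the helper name is ours:

```python
import numpy as np

def crosstalk_db(T):
    """Largest off-diagonal element of X = |T^dagger T|, in dB.

    T[i, j] is the overlap integral between output mode i and the j-th
    propagated input mode; modes are assumed normalized (illustrative)."""
    X = np.abs(T.conj().T @ T)
    off_diag = X - np.diag(np.diag(X))  # keep only the off-diagonal part
    return 10 * np.log10(off_diag.max())

# toy check: a near-unitary transfer matrix with one small residual coupling
T = np.eye(3, dtype=complex)
T[0, 1] = 1e-3
print(round(crosstalk_db(T)))  # -> -30
```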
\begin{figure}[ht]
\centering
\includegraphics[scale=1]{HG_sorter_transform}
\caption{Evolution of the simulated transverse RI profile sections
through the center of the mode sorter device along the
propagation direction, for the multimode HG conversion.}
\label{fig:HG_sorter_RI}
\end{figure}
Figure~\ref{fig:HG_sorter_RI} illustrates horizontal and vertical
sections {through the center of the mode converter, indicating a very
smooth evolution of the refractive index along the propagation
direction.} The input facet $z=z_{in}$ defines an individual waveguide for each of the 45 Gaussian input modes, while the end facet $z=z_{out}$ defines a single larger waveguide holding the co-propagating HG modes. In between, the waveguides are merged by the algorithm in order to achieve the desired mode-to-mode mapping.
\subsection{Photonic lanterns}
Photonic lanterns are optical components allowing for a low-loss
conversion between a set of fundamental modes belonging to independent
single-mode waveguides and a set of high-order modes belonging to a
common multimode waveguide~\cite{birks2015photonic}.
{The functional difference between a photonic lantern and a mode
sorter as discussed previously is that the lantern is a
non-selective device that does not aim for a pre-defined
mode-to-mode mapping. The only goal is to couple as much power as
possible into the set of modes supported by the output fiber, while
preserving mode orthogonality.} Such devices are usually fabricated by
tapering single-mode waveguides together or by femtosecond laser
writing and they find applications in
astrophotonics~\cite{norris2019astrophotonics} and
SDM~\cite{leon2014mode} for telecommunications.
The empirical adiabatic merging of different fiber cores gives rise to
super-modes by evanescent coupling, but it is challenging to ensure
that this process is lossless and that the final modal content exactly
matches a unitary superposition of modes belonging to a given few-mode
or multimode fiber. Assuming that we can control the RI distribution
as we wish, we will show that the multimode N-N power coupling metric
proposed in Table~\ref{table:cost_functions} is perfectly adapted to
overcome this difficulty. If the target waveguide was a graded-index
multimode fiber {whose eigenmodes are Hermite-Gaussian}, then the HG
mode sorter presented previously would already be an appropriate
solution. However, it is not necessarily possible to find a Cartesian representation for the eigenmodes of an arbitrary fiber profile, {one example being the} linearly polarized (LP) modes of a step-index fiber.
\begin{figure}[ht]
\centering
\includegraphics{PL_input}
\caption{Concentric ring arrangement for the 45 input modes of the
photonic lantern.}
\label{fig:PL_input}
\end{figure}
Here, we still define the working wavelength
$\lambda_0 = 1.55\,\textrm{\textmu m}$ and the bulk refractive index
$n_b = 1.444$. The device is represented by a
$350\times 350 \times 250$ array, with the same transverse and axial
resolution as before, leading again to a total device length of 2.5
mm. Concerning the input modes, we still use the fundamental Gaussians
of single-mode fibers with $\omega_{in} = \textrm{5.2 \textmu m}$, but
this time with a concentric ring arrangement which has been shown to
lead to the best coupling efficiency for a photonic lantern design
with tapered
waveguides~\cite{fontaine2012geometric}. Figure~\ref{fig:PL_input}
represents the concentric ring arrangement for 45 input modes, with a
separation parameter $\Delta = 4\,\omega_{in}$ as before.
\begin{figure}[ht]
\centering
\includegraphics{PL_converted}
\caption{Intensity and phase profiles of the converted modes.}
\label{fig:PL_converted}
\end{figure}
Concerning the output modes, we use the first 45 Laguerre-Gaussian
(LG) modes with $\omega_{out} = \textrm{7.7 \textmu m}$, using their
real-valued representation with circular and radial nodal lines. For
the multimode N-N power coupling optimization, the exact basis representation is not relevant, since the metric allows for a linear rearrangement of the modes, but this choice is convenient for visualizing the intensity profiles later on.
After only 300 iterations of the optimization algorithm, we reach a
total power coupling efficiency of 99.2\%. As
in~\cite{fontaine2019laguerre}, we define insertion losses (IL) and
mode dependent losses (MDL) of the device using the singular values
$\{\sigma_i\}_{1\leq i \leq N}$ resulting from the singular value
decomposition (SVD) of the transfer matrix $T = U\Sigma V^\dagger$:
\begin{align}
\textrm{IL} = & \frac{1}{N}\sum\limits_{i=1}^N\vert \sigma_i\vert^2 \\
\textrm{MDL} = & \frac{\max{\{\vert \sigma_i\vert^2\}}}{\min{\{\vert \sigma_i \vert^2\}}}.
\end{align}
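These two figures of merit are straightforward to evaluate from the singular values; the conversion to dB (IL quoted as a positive loss) is our assumption in this sketch:

```python
import numpy as np

def il_mdl(T):
    """Insertion loss and mode-dependent loss from the singular values of T.

    IL = mean(sigma_i^2) (average power transmission), MDL = max/min of the
    sigma_i^2.  Both are returned in dB; the sign convention for IL is an
    assumption made for this illustration."""
    s = np.linalg.svd(T, compute_uv=False)
    il = np.mean(s**2)                 # average power transmission
    mdl = s.max()**2 / s.min()**2      # best-to-worst mode power ratio
    return -10 * np.log10(il), 10 * np.log10(mdl)

# a uniformly lossy, crosstalk-free device: finite IL, exactly zero MDL
il_db, mdl_db = il_mdl(0.99 * np.eye(4))
```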
\begin{figure}[ht]
\centering
\includegraphics{PL_rotated}
\caption{Intensity and phase profiles of the converted modes rotated by the unitary matrix $UV^\dagger$ resulting from the SVD.}
\label{fig:PL_rotated}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics{PL_transform}
\caption{{Transverse RI profiles through the center of the photonic lantern design. The total power coupling efficiency is 99.2\%.}}
\label{fig:PL_RI}
\end{figure}
For this particular device, we obtain $\textrm{IL} = 0.035$~dB,
$\textrm{MDL} = 0.097$~dB and a maximum crosstalk
$X_\textrm{max} = -46.5$~dB, corresponding to an auto-correlation
matrix $X = \vert T^\dagger T\vert$ almost equal to the identity
matrix. Figure~\ref{fig:PL_converted} shows that the converted modes
look very different from the LG modes defining the output basis, but
Figure~\ref{fig:PL_rotated} shows that, since
$\Sigma \simeq \mathbb{I}$, the original LG modes can be recovered by
rotating the converted modes with the unitary matrix $UV^\dagger$
resulting from the SVD of the transfer matrix $T$.
Finally, Figure~\ref{fig:PL_RI} shows that the computed design is very
smooth and symmetrical, resembling a tapering of several single-mode
waveguides, thus truly deserving the name photonic lantern. Our
results prove that our method is able to engineer integrated photonic
lantern devices in a very precise fashion, with perfectly controlled
input and output modes. To our knowledge, this inverse design approach
is new in this context.
\subsection{Multimode intensity shaping}
{Finally, we would like to treat the application of shaping the
intensity profile of a light source emitting many mutually
incoherent modes. This class of beam shaping has importance in
various fields, such as digital holography or laser material
processing, where the native intensity profile of the light source
is often not ideal for the task at hand. } The associated cost
function is called ``multimode intensity shaping'' in
Table~\ref{table:cost_functions}. In a recent article, we have used
the same optimization metric to find two cascaded phase patterns for
shaping the intensity profile of a light emitting diode
(LED)~\cite{barre2021holographic}. Here, we aim at realizing the same
kind of intensity shaping with a volumetric gradient index design,
starting from the eigenmodes of a step-index fiber with a diameter of
$50\,\textrm{\textmu m}$ and $\textrm{NA} = 0.1$.
In simulation, we use $\lambda_0 = 0.640\,\textrm{\textmu m}$ as the
working wavelength and $n_b = 1.457$ as the refractive index of fused
silica at this wavelength. With such parameters, a mode solver finds
158 eigenmodes for this particular step-index fiber. {We further
assume that each input mode contributes with the same optical
power.} The device is represented by a $256\times 256\times 50$
array with a transverse resolution
$\mathrm{d}x = \mathrm{d}y = 0.5\,\textrm{\textmu m}$ and a separation
between planes $\mathrm{d}z = 10\,\textrm{\textmu m}$. This
corresponds to a total device length of only 0.5~mm. {As previously,
we validate our sampling choice by linearly interpolating the final
RI profile along the z-direction at a finer sampling interval of
$\mathrm{d}z = 0.5\,\textrm{\textmu m}$, followed by a simulated
readout of the gradient index design.}
As desired output, we define a square target intensity profile $I_T$
with a side length of $60\,\textrm{\textmu m}$. The square is modeled
as the Cartesian product of two 1D supergaussian profiles of order 20
to provide smooth edges.
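Such a target can be generated in a few lines; the exact supergaussian width and exponent conventions below are illustrative assumptions:

```python
import numpy as np

def square_target(n=256, dx=0.5, side=60.0, order=20):
    """Square intensity target: outer product of two 1D supergaussian
    profiles of the given order, giving a flat top with smooth edges."""
    x = (np.arange(n) - n / 2) * dx                    # transverse axis in um
    g = np.exp(-2 * (np.abs(x) / (side / 2)) ** order) # 1D supergaussian
    return np.outer(g, g)

I_T = square_target()  # ~1 inside the 60 um square, ~0 outside
```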
\begin{figure}[ht]
\centering
\includegraphics{multimode_square_shaping}
\caption{Theoretical target intensity profile and converted multimode intensity profile obtained by simulation.}
\label{fig:square_shaping}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics{multimode_square_transform}
\caption{{RI profile and total intensity profile in the central x-z-plane of the beam shaper.}}
\label{fig:square_transform}
\end{figure}
For this simulation, we restrict the $\Delta\textrm{RI}$ range to
$5\times 10^{-3}$ and we also enforce a smooth design by Fourier
filtering the 3D RI profile with a sharp low-pass filter at each
iteration. This Fourier filtering corresponds to a projection of the
RI distribution on a set of band-limited functions, which makes it a
theoretically valid proximal operator~\cite{parikh2014proximal}. Here,
we enforce the {smallest details} in the RI profile to be around
$4\,\textrm{\textmu m}$. After 1500 iterations, even under these
constraints, we reach an almost perfect intensity conversion as
illustrated in Figure~\ref{fig:square_shaping}.
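The smoothness constraint applied at each iteration can be written as a projection onto band-limited RI profiles; the exact relation between the cutoff frequency and the $4\,\textrm{\textmu m}$ feature size is an assumption of this sketch:

```python
import numpy as np

def lowpass_project(ri, spacing=(0.5, 0.5, 10.0), min_feature=4.0):
    """Proximal step for smoothness: sharp low-pass filtering of the 3D RI
    profile, i.e. a projection onto a set of band-limited functions."""
    freqs = [np.fft.fftfreq(n, d=d) for n, d in zip(ri.shape, spacing)]
    KX, KY, KZ = np.meshgrid(*freqs, indexing="ij")
    keep = np.sqrt(KX**2 + KY**2 + KZ**2) <= 1.0 / min_feature  # sharp cutoff
    return np.fft.ifftn(np.fft.fftn(ri) * keep).real

# a projection is idempotent: filtering twice changes nothing
rng = np.random.default_rng(1)
once = lowpass_project(rng.standard_normal((16, 16, 8)))
assert np.allclose(lowpass_project(once), once)
```

The idempotency check above is precisely the property that makes the filter a valid proximal operator.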
Figure~\ref{fig:square_transform} shows the RI profile and the total
intensity evolution of the beam along the propagation direction. We
observe that the obtained RI profile is very smooth, and its limited
$\Delta\textrm{RI}$ range makes it already manufacturable with
commercial 3D printers with polymers~\cite{Porte:21}, even if it may
require some slice stitching in order to reach the full 0.5~mm
length. We also see that the beam is guided in the computed RI
structure while its intensity is continuously shaped into the desired
profile. The fact that the total length is 5 times shorter for this
multimode shaping device than for the previous mode sorter or photonic
lantern shows that multimode intensity shaping requires in general far fewer degrees of freedom than individual or collective mode matching. The same conclusion holds for discrete designs such as those presented in Refs.~\cite{barre2021holographic}
and~\cite{fontaine2019laguerre}.
\section{Conclusion}
We have demonstrated a powerful and versatile computational method to
design millimeter-range integrated devices performing several
important multimode light transformations. Different cost functions
were introduced for solving particular mode multiplexing or intensity
shaping problems, but the optimization approach is stated in such a
general way that any other user-defined cost function would fall
within the scope presented here. The presented designs can already be
manufactured with the available two-photon polymerization technologies
and we can envision that their realization in a glass material will
also be accessible in the coming years. Our work could be extended
further to include broad-spectrum or multicolor sources, or to account
for light polarization, which would cover many more applications in
optics with fully integrated designs.
\clearpage
\section{Introduction}
The standard approach to determine scattering quantities from Lattice QCD
is the Lüscher method~\cite{Luscher:1990ck}, which relates the finite-volume spectrum obtained from the lattice to the infinite-volume scattering amplitude.
It has been applied to many physical systems, see Ref.~\cite{Briceno:2017max} for a review. The formalism has also been recently extended to three particles with three different but conceptually equivalent formulations available in the literature at present~\cite{Hansen:2014eka,Hansen:2015zga,Hammer:2017uqm,Hammer:2017kms,Mai:2017bge}, see Refs.~\cite{Hansen:2019nir,Mai:2021lwb} for recent reviews.
In this contribution we study techniques to extract scattering amplitudes
from Euclidean lattice field theory. We use a Euclidean lattice $\phi^4$ theory with two fields having different masses. This theory has proven to be an excellent test environment for novel scattering studies, as shown in Refs.~\cite{Sharpe:2017jej,Romero-Lopez:2018rcb,Romero-Lopez:2020rdq}.
In particular we study the recent proposal \cite{Bruno:2020kyl}, in which the authors found a relation
between the scattering length and the Euclidean four-point functions at threshold kinematics. Henceforth, this will be referred to as the BH method.
We compare the BH method to the standard Lüscher approach and find good agreement.
This study of the BH method presented here is based on \cite{Garofalo:2021bzl}.
We also investigate the extraction of the scattering quantities at non-zero momentum. In particular, we study the s-wave scattering amplitude for two particles with the Lüscher method~\cite{LUSCHER1991531}.
\section{Description of the Model}
The Euclidean model used here is composed of two real scalar fields $\phi_i, i=0,1$ with the Lagrangian
\begin{align}
{\cal L}= \sum_{i=0,1} \left( \frac{1}{2} \partial_\mu \phi_i\partial_\mu \phi_i +\frac{1}{2}m_i \phi_i^2 +\lambda_i \phi_i^4\right)
+\mu \phi_0^2 \phi_1^2\, ,
\label{eq:lagrangian}
\end{align}
with nondegenerate (bare) masses $m_0<m_1$. The Lagrangian has a $Z_2\otimes Z_2$ symmetry $\phi_0\to-\phi_0$ and $\phi_1\to-\phi_1$, which prevents sectors with even and odd numbers of particles from mixing.
To study the problem numerically,
we define the theory on a finite hypercubic lattice with lattice spacing $a$ and a volume $T \cdot L^3$, where $T$ denotes the Euclidean time length and $L$ the spatial length. We define the derivatives of the Lagrangian (\ref{eq:lagrangian}) on a finite lattice as the finite differences $\partial_\mu \phi(x)=\frac{1}{a}\,(\phi(x+a\mu)-\phi(x))$. In addition, periodic boundary conditions are assumed in all directions. The discrete action is given in Ref.~\cite{Romero-Lopez:2018rcb} for the complex scalar theory, but it is trivial to adapt it to this case. We set $a=1$ in the following for convenience.
\section{The BH method}
In Ref.~\cite{Bruno:2020kyl}, Bruno and Hansen derived a relation between the scattering length $a_0$ and the following combination of Euclidean four-point and two-point correlation functions at the two-particle threshold:
\begin{equation}
C_4^\mathrm{BH}(t_f,t,t_i) \equiv \frac{\langle \tilde\phi_0(t_f,0)\tilde\phi_1(t,0)\tilde\phi_1(t_i,0) \tilde\phi_0(0,0)\rangle}
{\langle \tilde\phi_0(t_f,0) \tilde\phi_0(0,0)\rangle \langle \tilde\phi_1(t,0)\tilde\phi_1(t_i,0) \rangle} -1,
\end{equation}
with the time ordering $t_f > t > t_i > 0$, and $\tilde \phi_{i}(t,{\bf p})=\sum_{\bf x}e^{i {\bf p}\cdot {\bf x}}\phi_{i}(t,{\bf x})$ being the spatial Fourier transform of the field. In particular, $\tilde \phi_{i}(t,0)$ is the field projected to zero spatial momentum. The relation of $C_4^\mathrm{BH}$ to the scattering length reads
\begin{align}
C_4^\mathrm{BH}(t_f,t,t_i)
\xrightarrow[t\gg t_i\gg 0]{T\gg t_f\gg t}
\frac{2}{ L^3}&\bigg[ \pi \frac{a_0 }{\mu_{01} }(t-t_i)
- 2a_0^2\sqrt{\frac{2 (t-t_i)}{\mu_{01}} }+ O\left((t-t_i)^0\right)\bigg]\,,
\label{eq:BH}
\end{align}
where $\mu_{01}=(M_0 M_1)/(M_0+M_1)$ is the reduced mass. It is defined in terms of the renormalized masses $M_0$ and $M_1$ of the two particles. These masses can be extracted as usual from an exponential fit at large time distances of the two-point correlation functions
\begin{equation}
\langle \tilde\phi_{i}(t,{\bf p})\tilde\phi_{i}(0,-{\bf p}) \rangle \approx
A_{1,i} \left(e^{ - E^{i}_1({\bf p}) t} + e^{ - E_1^{i}({\bf p}) (T - t)}\right)
\label{eq:2pt}
\end{equation}
with $E_1^{i}({\bf p} =0)=M_{i}$ for $i=0,1$. To reduce the statistical error, we average over all pairs of time slices with the same source-sink separation.
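A standard way to inspect such correlators (a sketch, not necessarily the exact fitting procedure used here) is the cosh effective energy, which is exact for the functional form above:

```python
import numpy as np

def effective_energy(c):
    """cosh effective energy: if C(t) = A (e^{-E t} + e^{-E (T - t)}), then
    (C(t+1) + C(t-1)) / (2 C(t)) = cosh(E) exactly, at every t."""
    return np.arccosh((c[2:] + c[:-2]) / (2 * c[1:-1]))

# synthetic check: a periodic correlator with E = 0.3, T = 48
T, E = 48, 0.3
t = np.arange(21)
c = np.exp(-E * t) + np.exp(-E * (T - t))
assert np.allclose(effective_energy(c), E)
```

A plateau of this quantity in $t$ signals ground-state dominance, which in this model sets in from the first time slice.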
\subsection{Numerical result}
We generate ensembles using the Metropolis-Hastings algorithm with the bare masses $m_0=-4.925$ and $m_1=-4.85$, and for simplicity we choose $\lambda_0=\lambda_1=2\mu=2.5$. The list of ensembles generated in this work, with the corresponding measured values of the masses $M_0$ and $M_1$, is compiled in \cref{tab:full_table}. In this model,
two-point correlators are dominated by the ground state from the first time slice. This was also observed
in previous investigations of the scalar theory \cite{Romero-Lopez:2020rdq}.
We tried three different strategies to extract the scattering length:
\begin{enumerate}
\item We attempt a direct fit of \cref{eq:BH} to the data.
\item We include an overall constant in the fit to account for the $O\left((t-t_i)^0\right)$ effect.
\item We make use of the shifted function at fixed $t_i$ and $t_f$,
$\Delta_t C_4^\mathrm{BH}(t_f,t,t_i)= C_4^\mathrm{BH}(t_f,t+1,t_i)-C_4^\mathrm{BH}(t_f,t,t_i)$,
where the constant term cancels out. We then determine $a_0$ by fitting to
\begin{equation}
\label{eq:Delta_BH}
\begin{split}
\Delta_t C_4^\mathrm{BH}(t_f,t,t_i&) \approx \frac{2}{L^3}\Big[\pi\frac{a_0}{\mu_{01}}\Big.
\Big.-2a_0^2 \sqrt{\frac{2}{\mu_{01}}}\Big(\sqrt{t+1-t_i} - \sqrt{t-t_i}\,\Big)\Big].
\end{split}
\end{equation}
\end{enumerate}
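For instance, the fit model of the third strategy is a direct transcription of \cref{eq:Delta_BH}; the function name and parameter values below are ours, and in practice one would pass this model to a least-squares routine such as \texttt{scipy.optimize.curve\_fit}:

```python
import numpy as np

def delta_c4_model(t, a0, mu01, L, t_i):
    """Shifted BH correlator: the constant term has cancelled, leaving the
    leading slope term and a square-root correction that dies off at large t."""
    slope = np.pi * a0 / mu01
    sqrt_term = 2 * a0**2 * np.sqrt(2 / mu01) * (
        np.sqrt(t + 1 - t_i) - np.sqrt(t - t_i))
    return 2.0 / L**3 * (slope - sqrt_term)

# at large t - t_i only the slope term survives (illustrative parameters)
print(delta_c4_model(100.0, -0.4, 0.096, 22, 3))
```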
The three methods are compared in the left panel of \cref{fig:BH_03t16} for one of our ensembles. The black triangles represent the correlator of \cref{eq:BH} with $t_i=3$ and $t_f=16$ divided by $(t-t_i)$ so that it goes to a constant when $(t-t_i) \to \infty$.
The monotonic increase of the data points could be due to the $O\left((t-t_i)^0\right)$ term in \cref{eq:BH}. A fit to the formula of \cref{eq:BH} in the time region $[10,14]$---the black band---gives a good $\chi^2/\mathrm{dof}\sim0.7$, but results in large uncertainties. The quality of the fit deteriorates very quickly if the fit range is extended: a fit in the time region $[6,14]$ yields $\chi^2/\mathrm{dof}\sim 5$.
With the second strategy---the red band in the left panel of \cref{fig:BH_03t16}---one is able to start fitting at significantly smaller $t$-values.
The data are well described, with a $\chi^2/\mathrm{dof}\sim 0.2$.
For the third approach, we study $\Delta_t C_4^\mathrm{BH}(t)$. This is shown in the left panel of \cref{fig:BH_03t16} as blue circles, and the blue band represents the best fit result with error.
The main advantage of the latter strategy is that it allows us to extract the physical information at smaller $t$ without introducing extra parameters in the fit.
Indeed, the data look almost constant over the complete $t$-range available; only very close to $t_i$ might the square-root term become visible.
For the third strategy, which looks most promising from a systematic point of view, we also investigate the dependence on the choice of $t_{i}$ and $t_{f}$. This is shown in right panel of \cref{fig:BH_03t16} for the same ensemble as in the left panel.
We do not observe any significant systematic effect stemming from excited state contributions when changing $t_i$ or $t_f$.
However, we clearly see significantly smaller statistical uncertainties with smaller $t_i$ and $t_f$ values.
\begin{figure}
\centering
\includegraphics[scale=0.48]{BH_L22T96.pdf}
\includegraphics[scale=0.48]{BH_L22T96_ti.pdf}
\caption{Left panel: Four-point function of \cref{eq:BH} multiplied by $L^3/2$, for $L=22$ and $T=96$, with $t_i=3$ and $t_f=16$, divided by $(t-t_i)$ (black triangles). The dashed vertical lines represent the fit interval, the black band represents the result of the fit to \cref{eq:BH}, and the red band is the same fit with an extra constant term. The blue circles and band represent the discrete derivative of the correlator, \cref{eq:Delta_BH}, and the corresponding fit.
Right panel: Plot of the discrete derivative of the correlator \cref{eq:Delta_BH}
for different values of $t_i$ and $t_f$. We do not observe any systematic shift and all correlators
are compatible. The points with smaller $t_i$ and $t_f$
tend to have smaller error.
}
\label{fig:BH_03t16}
\end{figure}
\subsection{Comparison to the Lüscher method}
In this section we compare the BH method described above with the Lüscher threshold expansion~\cite{Luscher:1985dn,Beane_2005}. The latter relates the two-particle energy shift, defined as $\Delta E_{2}^{01}=E_{2}^{01}-M_0-M_1$, to the scattering length $a_0$ via
\begin{equation}
\Delta E_{2}^{01} =-\frac{2\pi a_{0}}{\mu_{01} L^3}\left[ 1 + c_1 \frac{a_{0}}{L} + c_2\left(\frac{a_{0}}{L}\right)^2 \right] +O\left(L^{-6}\right)\,,
\label{eq:luescher_a0}
\end{equation}
with $c_1=-2.837297$, $c_2= 6.375183$ and $E_2^{01}$ being the interacting two-particle energy at zero total momentum.
$E_2^{01}$ can be extracted from
$C_2(t) = \langle \tilde\phi_1(t,0)\tilde\phi_0(t,0) \tilde\phi_1(0,0)\tilde\phi_0(0,0) \rangle$, whose large-$t$ behaviour is
\begin{align}
\begin{split}
C_2(t) \xrightarrow[T-t\gg0]{t\gg0} A_2 e^{-E_2^{01} \frac{T}{2}} \cosh{\left(E_2^{01} (t-\frac{T}{2})\right)}
+B_2 e^{-(M_0+M_1) \frac{T}{2}} \cosh{\left((M_1-M_0) (t-\frac{T}{2})\right)}\,, \label{eq:E2_01}
\end{split}
\end{align}
with the last term being a thermal pollution due to finite $T$ with periodic boundary conditions.
Using $M_0$ and $M_1$, determined from the corresponding two-point functions, as input, the only additional parameter is $B_2$.
Alternatively, it is possible to eliminate the second term defining
$
\tilde C_2(t)=C_2(t)/\cosh{\left((M_1-M_0) (t-\frac{T}{2})\right)},
$
and then taking the finite derivative
\begin{align}
\begin{split}
\Delta_t\tilde C_2(t)&=\tilde C_2(t+1)-\tilde C_2(t).
\label{eq:Delta_E2_01}
\end{split}
\end{align}
The two-particle energies obtained from \cref{eq:E2_01} are compatible with those from \cref{eq:Delta_E2_01}. The results are reported in \cref{tab:full_table}, along with the values for
the scattering length $a_0$ computed from $E_2$ using \cref{eq:luescher_a0}.
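Given a measured $\Delta E_2^{01}$, \cref{eq:luescher_a0} is a cubic equation in $a_0$; a sketch of the numerical inversion follows (function names, the root-selection rule, and the parameter values, chosen to be typical of this study, are ours):

```python
import numpy as np

C1, C2 = -2.837297, 6.375183  # threshold-expansion coefficients

def shift_from_a0(a0, mu, L):
    """Two-particle energy shift from the scattering length (threshold expansion)."""
    return -(2 * np.pi * a0) / (mu * L**3) * (1 + C1 * a0 / L + C2 * (a0 / L)**2)

def a0_from_shift(dE, mu, L):
    """Invert the expansion: dE is cubic in a0; pick the physical real root."""
    # 0 = (C2/L^2) a0^3 + (C1/L) a0^2 + a0 + dE*mu*L^3/(2 pi)
    coeffs = [C2 / L**2, C1 / L, 1.0, dE * mu * L**3 / (2 * np.pi)]
    roots = np.roots(coeffs)
    real = roots.real[np.abs(roots.imag) < 1e-8]
    leading = -dE * mu * L**3 / (2 * np.pi)       # O(L^-3) estimate
    return real[np.argmin(np.abs(real - leading))]

# round trip with illustrative values: a0 = -0.41, mu01 ~ 0.096, L = 20
a0 = a0_from_shift(shift_from_a0(-0.41, 0.096, 20), 0.096, 20)
assert np.isclose(a0, -0.41)
```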
A comparison between the BH and the Lüscher method is depicted in \cref{fig:compare_L_BH} for all our ensembles. The values are compatible with each other. However, the BH method
gives systematically larger values for $a_0$.
For each ensemble separately, the Lüscher and BH methods appear compatible.
However, we observe a systematic trend after averaging over all ensembles, as shown by the bands in Fig.~\ref{fig:compare_L_BH}.
This might be attributed to different lattice artifacts.
The statistical error is similar in both approaches. Also, the scaling in $L$ appears to be similar.
The different systematics of the two methods offer in general a useful opportunity for cross-checks.
\begin{figure}
\centering
\includegraphics[scale=0.6]{compare_L_BH.pdf}
\caption{Comparison of $a_0$ computed with the BH method, \cref{eq:Delta_BH}, with $t_i=2$ and $t_f=10$ (blue circles) and with $t_i=3$ and $t_f=16$ (red triangles), and with the Lüscher method, \cref{eq:luescher_a0} (black squares). The horizontal bands correspond to the weighted average of each method.}
\label{fig:compare_L_BH}
\end{figure}
\begin{table*}
{\footnotesize
\centering
\setlength{\tabcolsep}{3pt}
\begin{tabular}{cc|cc|cc|cc|ccc}
\hline\hline
T & L & \(M_0\) & \(M_1\) &\multicolumn{2}{c|}{$E_2^{01}$} &\multicolumn{2}{c|}{$a_0$ Lüscher} & \multicolumn{3}{c}{$a_0$ BH} \\
\hline
& & & & \( C_2\) & \( \Delta_t\tilde C_2\) & \( C_2\) & \( \Delta_t\tilde C_2\) & \(\Delta_t C_4^\mathrm{BH}\)(3,t,16)
& \(C_4^\mathrm{BH}+c\) & \(\Delta_t C_4^\mathrm{BH}\)(2,t,10) \\
\hline
48 & 20 & 0.14675(5) & 0.27487(5) & 0.4252(3) & 0.4253(3) &
-0.41(3) & -0.42(3) & -0.35(4) & -0.35(6) & -0.37(2) \\
64 & 20 & 0.14659(5) & 0.27480(5) & 0.4249(3) & 0.4250(3) &
-0.41(3) & -0.41(4) & -0.30(4) & -0.29(6) & -0.38(2) \\
96 & 20 & 0.14662(4) & 0.27487(4) & 0.4251(2) & 0.4251(3) &
-0.41(2) & -0.41(3) & -0.36(3) & -0.36(4) & -0.38(1) \\
96 & 22 & 0.14604(3) & 0.27470(4) & 0.4237(2) & 0.4237(3) &
-0.45(3) & -0.45(5) & -0.34(4) & -0.31(6) & -0.37(2) \\
96 & 24 & 0.14574(4) & 0.27458(4) & 0.4223(2) & 0.4221(3) &
-0.39(3) & -0.36(6) & -0.36(5) & -0.41(7) & -0.39(2) \\
96 & 26 & 0.14547(4) & 0.27455(3) & 0.4218(2) & 0.4219(3) &
-0.44(5) & -0.47(8) & -0.30(7) & -0.3(1) & -0.36(3) \\
96 & 32 & 0.14521(4) & 0.27449(4) & 0.4210(2) & 0.4213(3) &
-0.62(9) & -0.7(1) & -0.2(1) & -0.1(2) & -0.35(5) \\
128 & 20 & 0.14668(3) & 0.27484(3) & 0.42509(7) & 0.4251(3) &
-0.409(7) & -0.41(3) & -0.40(3) & -0.39(3) & -0.40(1) \\
\hline\hline
\end{tabular}
\caption{Measured values of $a_0$, $M_0$, $M_1$ and $E_2$.
The column \(\Delta_t C^\mathrm{BH}_4\) corresponds to the value of $a_0$ fitted with \cref{eq:Delta_BH} fixing $t_i=3$ and $t_f=16$ or $t_i=2$ and $t_f=10$.
The column \(C_4^\mathrm{BH}+c\) is the result of the fit with \cref{eq:BH} adding a constant term. The two-particle energy
$E_2$ is computed from $C_2$ with the fit of \cref{eq:E2_01} and from $\Delta_t \tilde C_2$ with
\cref{eq:Delta_E2_01}. The corresponding value of $a_0$ computed with the Lüscher method is reported in the corresponding columns.
We used $2\cdot10^{7}$ configurations for each ensemble, generated from 200 replicas each of
$10^5$ thermalized configurations. We bin the configurations in blocks of $10^5$ (the entire replica) and resample the resulting 200 bins with the jackknife method. For the light mass $M_0$ we measured the integrated autocorrelation time $\tau_{int}\sim1.5$, while $\tau_{int}\sim0.5$ for $M_1$. We skip 1000 configurations in each replica for thermalization.
}
\label{tab:full_table}
}
\end{table*}
\section{Scattering amplitude at nonzero momentum with the L\"uscher method}
In this section, we report our study of the s-wave scattering amplitude for two particles with the Lüscher method \cite{LUSCHER1991531}.
We compute the spectrum of our $\phi^4$ model at ${\bf p}\neq 0$ for the lighter
particle. We generate ensembles using the Metropolis-Hastings algorithm with bare masses $m_0=-4.9$ and $m_1=-4.65$, to have $M_1\sim 3M_0$, keeping $\lambda_0=\lambda_1=2\mu=2.5$.
We extract the energies from an exponential fit to \cref{eq:2pt}
with the discretised momenta ${\bf p} = 2\pi {\bf n}/L$, with ${\bf n}$ being an integer vector in the set ${\bf n}\in \{ (0, 0, 0), (1, 0, 0), (1, 1, 0), (1, 1, 1)\}$.
As we can see from the left panel of \cref{fig:kcot}, we find that the measured one-particle energies $E_1^0({\bf p})$
significantly deviate from the continuum dispersion relation, while they
are in good agreement with the lattice dispersion relation
\begin{gather}
\cosh\left( E_1^0({\bf p})\right)= \cosh(M_0) +\frac{1}{2} \sum_{i=1}^{3}4 \sin^2\left(\frac{p_{i}}{2}\right)\,,
\label{eq:lat-disp-rel}
\end{gather}
with $M_0$ being the mass measured from the fit of \cref{eq:2pt} at zero momentum.
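The lattice dispersion relation is easy to check numerically against its continuum counterpart (a sketch in lattice units $a=1$, with our own function names):

```python
import numpy as np

def e1_lattice(M0, n, L):
    """One-particle energy from the lattice dispersion relation,
    for momentum p = 2*pi*n/L."""
    p = 2 * np.pi * np.asarray(n, float) / L
    phat2 = np.sum(4 * np.sin(p / 2)**2)      # lattice momentum squared
    return np.arccosh(np.cosh(M0) + 0.5 * phat2)

def e1_continuum(M0, n, L):
    p = 2 * np.pi * np.asarray(n, float) / L
    return np.sqrt(M0**2 + np.sum(p**2))

# the two coincide at p = 0 and differ by lattice artifacts at larger momenta
print(round(e1_lattice(0.147, (0, 0, 0), 20), 6))  # -> 0.147 (the rest mass)
```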
For each choice of the momentum we construct the two-particle operator in the $A_1$ irrep of the cubic group, $\hat O_2(t,{\bf p})=\tilde\phi_0(t,{\bf p})\tilde\phi_0(t, 0)$. Only for the first unit of momentum do we also construct the operator with back-to-back momenta, still in the $A_1$ irrep: $\hat O_2(t,0)=\sum_{i=x,y,z}\tilde\phi_0(t, p_i)\tilde\phi_0(t,-p_i)$ with $p_x=(2\pi/L,0,0)$, $p_y=(0,2\pi/L,0)$ and $p_z=(0,0,2\pi/L)$. We measure the two-particle energy from the exponential fit of the correlator
\begin{align}
\langle \hat O_2 (t,{\bf p}) \hat O_2 (0,-{\bf p}) \rangle\xrightarrow[T-t\gg0]{t\gg0} & \, A_2\, e^{-E_2^0 ({\bf p})\frac{T}{2}} \cosh\left(E_2^0({\bf p}) \left(t-\frac{T}{2}\right)\right) \nonumber \\
&+A_1\, e^{-(E_1^0({\bf p})+M_0) \frac{T}{2}} \cosh\left((E_1^0({\bf p})-M_0) \left(t-\frac{T}{2}\right)\right)\,,
\end{align}
where $E_1^0({\bf p})$ and $M_0$ are held fixed to the values obtained from the fits of \cref{eq:2pt}.
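A minimal sketch of such a two-state cosh fit, using SciPy's `curve_fit` on synthetic noiseless data (the temporal extent $T$ and all parameter values below are illustrative, not those of our ensembles):

```python
import numpy as np
from scipy.optimize import curve_fit

T = 48                 # temporal extent, illustrative
E1, M0 = 0.28, 0.20    # one-particle energies, held fixed as in the text

def c2pt(t, A2, E2, A1):
    # Leading two-particle state plus the thermal ("around the world") term,
    # with E1 and M0 fixed from the one-particle fits
    signal = A2 * np.exp(-E2 * T / 2) * np.cosh(E2 * (t - T / 2))
    thermal = A1 * np.exp(-(E1 + M0) * T / 2) * np.cosh((E1 - M0) * (t - T / 2))
    return signal + thermal

# Synthetic noiseless data, just to illustrate that the fit recovers E2
t = np.arange(4, 24, dtype=float)
data = c2pt(t, 1.0, 0.45, 0.1)
popt, _ = curve_fit(c2pt, t, data, p0=(1.0, 0.4, 0.1))
```

In practice the fit is performed on the jackknife-resampled correlator data, and correlations between time slices should be taken into account.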
From the two-particle energies $E_2^0({\bf p})$ we calculate the s-wave phase shift as \cite{LUSCHER1991531}
\begin{gather}
\cot \delta = \frac{ Z_{0,0}(1,q^2) }{\pi^{3/2} \gamma q}
\,,\label{eq:luescher_qc}
\end{gather}
where $Z_{0,0}$ is the L\"uscher zeta function, the Lorentz boost factor $\gamma=E_2^0({\bf p})/E_{CM}$ is defined in terms of the center-of-mass energy $E_{CM}=\sqrt{E_2^0({\bf p})^2-{\bf p}^2}$, and $q=kL/2\pi$ with the scattering momentum $k=\sqrt{E_{CM}^2/4-M_0^2}$.
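The kinematic steps can be sketched in Python. For illustration only, we use a naively truncated rest-frame zeta function; the moving frames strictly require the boosted generalization of $Z_{0,0}$, and all numerical inputs below are illustrative:

```python
import numpy as np

def z00(q2, cutoff=30):
    # Naive, slowly converging truncation of the rest-frame Luescher zeta:
    # sqrt(4 pi) Z00(1;q^2) = lim [ sum_{|n|<=cutoff} 1/(n^2 - q^2) - 4 pi cutoff ]
    r = np.arange(-cutoff, cutoff + 1)
    nx, ny, nz = np.meshgrid(r, r, r, indexing="ij")
    n2 = (nx**2 + ny**2 + nz**2).astype(float)
    mask = n2 <= cutoff**2
    s = np.sum(1.0 / (n2[mask] - q2)) - 4.0 * np.pi * cutoff
    return s / np.sqrt(4.0 * np.pi)

def cot_delta(E2, p2, M0, L):
    # Kinematics of the quantization condition: E_CM, k, q, gamma
    Ecm = np.sqrt(E2**2 - p2)
    gamma = E2 / Ecm
    k = np.sqrt(Ecm**2 / 4.0 - M0**2)
    q = k * L / (2.0 * np.pi)
    return z00(q**2) / (np.pi**1.5 * gamma * q)
```

Production analyses use exponentially accelerated representations of the zeta function rather than this truncated sum, which converges only slowly with the cutoff.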
We notice that, at large momentum $\bf p$, the values of the phase shift computed with \cref{eq:luescher_qc} come with large errors (red circles in the right panel of \cref{fig:kcot}).
These large errors stem from the fact that the measured energies deviate significantly from the continuum dispersion relation. This problem was also observed in \cite{RUMMUKAINEN1995397}, where, as a solution, the authors propose to compute the center-of-mass energy $E_{CM}$ using the lattice dispersion relation.
Here we follow a similar strategy.
We subtract from the two-particle energy the difference $E_2^{free,latt}-E_2^{free,cont}$ between the free two-particle energies computed with the lattice dispersion relation, \cref{eq:lat-disp-rel}, and with the continuum one.
The factor $E_2^{free,latt}-E_2^{free,cont}$ is a lattice artifact, i.e. it goes to zero in the continuum limit. However, at finite lattice spacing, we observe a reduction of the statistical error in the phase shift computed from energies at large momentum ${\bf p}$ (blue points in the right panel of \cref{fig:kcot}).
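The subtraction can be sketched as follows, assuming (as for the operator $\hat O_2(t,{\bf p})$ above) that in the free two-particle state one particle carries all the momentum and the other is at rest; the numerical values are illustrative:

```python
import numpy as np

def e_cont(M, p2):
    # Continuum one-particle energy
    return np.sqrt(M**2 + p2)

def e_latt(M, p):
    # Lattice one-particle energy from cosh E = cosh M + (1/2) sum_i 4 sin^2(p_i/2)
    phat2 = np.sum(4.0 * np.sin(np.asarray(p, dtype=float) / 2.0) ** 2)
    return np.arccosh(np.cosh(M) + 0.5 * phat2)

def corrected_E2(E2_meas, M0, n_tot, L):
    # Subtract the lattice artifact E2_free_latt - E2_free_cont from the
    # measured two-particle energy; the correction vanishes at zero momentum
    p = 2.0 * np.pi * np.asarray(n_tot, dtype=float) / L
    e2_free_latt = e_latt(M0, p) + M0
    e2_free_cont = e_cont(M0, p @ p) + M0
    return E2_meas - (e2_free_latt - e2_free_cont)
```

Since $E_{latt}<E_{cont}$ at non-zero momentum, the correction shifts the measured two-particle energy slightly upward toward its continuum counterpart.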
\begin{figure}
\includegraphics[scale=0.65,trim=0 0 0 0]{E1_0_disp_rel.pdf}
\includegraphics[scale=0.65]{kcot_latt.pdf}
\caption{
Left panel: values of the energy levels measured for different values of $p^2$, minus the value predicted using the lattice dispersion relation (blue circles) or the continuum dispersion relation (red triangles).
Right panel: values of the phase shift $\delta$ obtained from \cref{eq:luescher_qc} using as input the two-particle energies measured on the lattice (red circles) or the energies corrected by the lattice artifact $E_2^{free,latt}-E_2^{free,cont}$ (blue points).
The statistics used are the same as described in the caption of \cref{tab:full_table}. }
\label{fig:kcot}
\end{figure}
\section{Conclusion}
In this contribution, we studied the BH method
proposed in \cite{Bruno:2020kyl}. We have verified that it produces
results that are compatible with those of the L\"uscher
method \cite{Luscher:1985dn}.
We also studied the s-wave scattering amplitude for two particles with the Lüscher method at non-zero momentum \cite{LUSCHER1991531}. As in \cite{RUMMUKAINEN1995397}, we observed that the error on the computed phase shift becomes larger with the momentum of the two-particle state. This increase of the error can be mitigated by the subtraction of a lattice artifact in the measured energy.
\section{Acknowledgements}
We gratefully acknowledge helpful discussions with M.~Bruno, M.~T.~Hansen
and S.~R.~Sharpe.
%
FRL acknowledges financial support from Generalitat Valenciana grants PROMETEO/2019/083 and CIDEGENT/2019/040, the EU project H2020-MSCA-ITN-2019//860881-HIDDeN, and the Spanish project FPA2017-85985-P.
%
The work of FRL is supported in part by the U.S. Department of Energy, Office of Science, Office of Nuclear Physics, under grant Contract Numbers DE-SC0011090 and DE-SC0021006.
%
This work
is supported in part by the Deutsche Forschungsgemeinschaft (DFG,
German Research Foundation) and the
NSFC through the funds provided to the Sino-German
Collaborative Research Center CRC 110 “Symmetries
and the Emergence of Structure in QCD” (DFG Project-ID 196253076 - TRR 110, NSFC Grant No. 12070131001).
%
AR acknowledges support from Volkswagenstiftung (Grant No. 93562) and the Chinese Academy of Sciences (CAS) President’s International Fellowship Initiative (PIFI) (Grant No. 2021VMB0007).
%
The C\texttt{++} Performance Portability Programming Model Kokkos \cite{kokkos} and
the open source software package R~\cite{R:2019} have
been used.
We thank B.~Kostrzewa for useful discussions on Kokkos.
\bibliographystyle{unsrt}